{"_id": "228054", "title": "Are (mostly) client-side JavaScript web apps slower or less efficient?", "text": "I am in the midst of writing a web application for work. Everything is from scratch. I have been a PHP programmer for about 13 years, Node.js programmer for the past 2 years, and have no shortage of experience with JavaScript. I love Node.js, and recently rebuilt the company's API in it... So, in planning this web application, the approach I'm considering is, have the Node.js API for getting data from the server, but render everything in the browser. Use AJAX for retrieving data, History API for loading pages, and a MVC-like pattern for the different components. I have read articles detailing twitters rebuild a few years ago. It was more or less a client-side JavaScript app, but a couple years after launching it, they started moving a lot of processing/rendering back to the server, claiming the app improved dramatically in terms of speed. So, my question is as the title asks, is a client-side centric app substantially slower?"} {"_id": "110557", "title": "How can a team apply the Scrum methodology without a clear customer?", "text": "Our team is trying to understand and adapt Scrum and other agile practices, but we can not figure out how to deal with customer feedback when there is no customer. Every document about the subject emphasizes how important is having the customer involved in every sprint and how having early feedback helps correcting problems fast and maximizes satisfaction to both sides. I understand clearly this point. In our case, we have no single customer. We develop a website and a smartphone app for an already established and growing audience. I am sure this is a fairly common case so I would like to know some real world experience about how to apply and manage Scrum in this case. Do you just decide all features by yourselves or do incorporate user testing in the sprint? Any other solution?"} {"_id": "11546", "title": "Are there laws to protect us from hackers who disclose vulnerabilities prior to alerting the vendor?", "text": "Take the example of the recent ASP.NET (and Java Server Faces) vulnerability disclosure at a Hacker conference in Brazil. It's my understanding that the poet tool was demonstrated before Microsoft was even aware of the issue. Are there laws to protect legitimate customers from people who encite the hacker community to start hacking all the ASP.NET servers they can? Who knows how many legitimate businesses were compromised between when the tool was demoed and the patch was applied to the server."} {"_id": "89378", "title": "Is EF4 mature enough with MySQL or Oracle?", "text": "Is Entity Framework 4 with MySQL or Oracle mature enough to be used on production level web application? Can it provide high level of performance, or should we stick with just plain data access with `SqlCommand`?"} {"_id": "232179", "title": "Are patches a bad sign for the customer?", "text": "At the office we just got out of a long period where we released patches on a too-frequent basis. Near the end of that period we were doing almost three patches per week on average. Beside that this was very demotivating for the developers, I was wondering what the customer would think about this. I asked the question myself and concluded that I never knew software that was updated that frequently. However, for the case that comes the closest I do not care really since the patches are pretty quickly applied. 
The customers who received these patches differ a lot from each other. Some were really waiting for the patch, whereas others did not really care, yet they all got the same patches. The time to update the customer's software is less than 30 seconds, so I do not expect any problems concerning time. They do need to be logged out, though. So, my question in more detail: does getting updates frequently send a 'negative' message to the receiver? Of course, I could ask the customers, but I'm not in that position, nor do I want to 'awaken the sleeping dogs'. PS: If there is anything I could do to improve my question, please leave a comment."} {"_id": "19397", "title": "Why would anyone need this Java syntax?", "text": "One day while trawling through the Java language documentation, as you do, I found this little beauty lurking within Double: 0.25 == 0x1.0p-2 Now, obviously (!) this means take the hexadecimal number 1 and right-shift it decimal 2 times. The rule seems to be to use base 16 on the integer side and base 2 on the real side. Has anyone out there **actually used the right-hand syntax** in a necessary context, not just as a way of getting beers out of your fellow developers?"} {"_id": "19392", "title": "When is it better to offload work to the RDBMS rather than to do it in code?", "text": "Okay, I'll cop to it: I'm better at coding than I am at databases, and I'm wondering where thoughts on \"best practices\" lie on the subject of doing \"simple\" calculations in the SQL query vs. in the code, such as this MySQL example (I didn't write it, I just have to maintain it!) -- This returns the username, and the user's age as of the last event. SELECT u.username as user, IF ((DAY(max(e.date)) - DAY(u.DOB)) < 0 , TRUNCATE(((((YEAR(max(e.date))*12)+MONTH(max(e.date))) -((YEAR(u.DOB)*12)+MONTH(u.DOB)))-1)/12, 0), TRUNCATE((((YEAR(max(e.date))*12)+MONTH(max(e.date))) - ((YEAR(u.DOB)*12)+MONTH(u.DOB)))/12, 0)) AS age FROM users as u JOIN events as e ON u.id = e.uid ... Compared to doing the \"heavy\" lifting in code: Query: SELECT u.username, u.DOB as dob, e.event_date as edate FROM users as u JOIN events as e ON u.id = e.uid code: function ageAsOfDate($birth, $aod) { //expects dates in mysql Y-m-d format... list($by,$bm,$bd) = explode('-',$birth); list($ay,$am,$ad) = explode('-',$aod); //Insert Calculations here ... return $Dy; //Difference in years } echo \"Hey! \". $row['user'] .\" was \". ageAsOfDate($row['dob'], $row['edate']) . \" when we last saw him.\"; I'm pretty sure that in a simple case like this it wouldn't make much difference (other than the creeping feeling of horror when I have to make changes to queries like the first one), but I think it makes it clearer what I'm looking for. Thanks!"} {"_id": "155638", "title": "Designing a Content-Based ETL Process with .NET and SFDC", "text": "As my firm makes the transition to using SFDC as our main operational system, we've spun up a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked \"queue\" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata.
I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this: static void Main() { // Load parameters from app.config. // Get documents from queue. var files = someInterface.GetFiles(someFilterOrRegexPattern); foreach (var file in files) { // Extract metadata from the file. // Validate some attributes of the file; add any validation errors to an in-memory // structure (e.g. a List). if (isValid) { var fileData = File.ReadAllBytes(file); // Upload using some wrapper for an ORM or DAL someInterface.Upload(fileData, meta.Param1, meta.Param2, ...); } else { // Bounce the file } } // Report any validation errors (via message bus or SMTP or some such). } And that's pretty much it. Most of the time I wrap all these operations in a \"Worker\" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!"} {"_id": "155639", "title": "Which algorithms/data structures should I \"recognize\" and know by name?", "text": "I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I could recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them, I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should \"recognize\". For instance, I know the solution to a problem like this: > You manage a set of lockers labeled 0-999. People come to you to rent the > locker and then come back to return the locker key. How would you build a > piece of software to manage knowing which lockers are free and which are in > use? The solution would be a queue or stack. What I'm looking for are things like \"in what situation should a B-tree be used -- what search algorithm should be used here\", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms, but I think that's a bit overkill. So I'm looking more for the essential things I should recognize."} {"_id": "127472", "title": "Is Agile Development used in Machine Learning and Natural Language Processing?", "text": "I've been developing web apps for a while now, and it is standard practice in our team to use agile development techniques and principles to implement the software. Recently, I've also become involved in Machine Learning and Natural Language Processing. I heard people primarily use Matlab for developing ML and NLP algorithms. Does agile development have a place there, or is that skill completely redundant?
In other words, when you develop ML and NLP algorithms as a job, do you use agile development in the process?"} {"_id": "103545", "title": "Switching from SVN to Mercurial: one repository or many?", "text": "We currently have a large Subversion repository, with a tree like: root /libraries /library1 /trunk /library2 /trunk /solutions /solution1 /trunk /solution2 /trunk There are 81 solutions and 22 libraries. The `trunk` subdirectories were added so we could use branches, but in practice, we don't use branches at all. My question is: if we migrate this to Mercurial, should we set up one big Mercurial tree with the same structure? Or should we create 81+22 Mercurial repositories?"} {"_id": "123857", "title": "Allowing the user to specify the location of a logfile", "text": "I'm working on an application, and adding logging, but now I'm stuck. I want to allow ( _not force!_ ) the user to set the location of the logfile. Basically, my problem is: * logger initialization should be the first thing the program does * but I can't initialize the logger until I determine where the user wants the log to be saved * determining where the log should be saved ... is a process that should be logged How is this problem solved? Are log file locations not user-customizable? Is log output buffered until a logfile is set?"} {"_id": "254376", "title": "How to implement a genetic algorithm with distance, time, and cost", "text": "I want to build a solution to find the optimal route for school visits. For example, I want to visit 5 schools (A, B, C, D, E) in my city. Given the choice of five routes regarding which school I should visit first, then second, then third, etc., how do I calculate the efficiency of each route with distance, time, and cost criteria? Once I've done this, how do I use my calculations (distance together with time and fuel-cost usage) in a genetic algorithm to find the optimal route?"} {"_id": "215304", "title": "Is this a typo in the Artistic License 2.0?", "text": "_I'm not sure if this would fit better in StackExchange/English, but regardless, there is no practical use to the answer, other than to cure my curiosity._ Note this sentence at the end of the Artistic License 2.0: > THE PACKAGE IS PROVIDED BY THE COPYRIGHT HOLDER AND CONTRIBUTORS \"AS IS' AND > WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES. It does not affect any legal aspects of the license, but is there a reason they mixed the use of single and double quotes in `\"AS IS'`? The license is so new that this wouldn't have been for \"command prompt friendly\" reasons. Is there special use or meaning behind this in the English language, or was it a typo?"} {"_id": "148701", "title": "Information about how much time is spent in a function, based on the input of this function", "text": "Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code tell me how much time I spent in functions (like JetBrains dotTrace for .NET), but I'd like to have more information about the parameters passed to the function in order to know which parameters impact the performance the most. Let's say that I have a function like this: int myFunction(int myParam1, int myParam2) { // Do and return something based on the value of myParam1 and myParam2. // The code is likely to use if, for, while, switch, etc.... } I would like a tool that could tell me how much time is spent in `myFunction` based on the value of `myParam1` and `myParam2`.
For example, the tool would give me a result looking like this: For \"myFunction\":

value    | value    | Number of | Average
myParam1 | myParam2 | calls     | time
---------|----------|-----------|--------
1        | 5        | 500       | 301 ms
2        | 5        | 250       | 1253 ms
3        | 7        | 1268      | 538 ms
...

> That would mean that myFunction has been called 500 times with myParam1=1 and > myParam2=5, and that with those parameters, it took on average 301 ms to > return a value. The idea behind this is to do some statistical optimization by organizing my code such that the blocks of code that are most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. **Note**: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors and the like)."} {"_id": "215301", "title": "How Did Microsoft Market .NET?", "text": "I just read one of Joel's articles about Microsoft's breaking change (non-backwards compatibility) with .NET's introduction. It is interesting and explicitly reflects the conditions at that time. But now almost 10 years have passed. ## The breaking change It is mainly about how bad it was for Microsoft to introduce non-backwards-compatible development tools, such as .NET, instead of improving the already widely used ASP Classic or VB6. As is well known, .NET is not natively embedded in Windows XP (it is in Vista and 7), so in order to use .NET apps, you needed to install the .NET Framework at over 300 MB (which was big in those days). However, as we see, nowadays many businesses use .NET as their main development tool, with ASP.NET or MVC for their web-based applications. C# is nowadays one of the top programming languages (with the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive even though there is newer technology out there (and it is still widely used). Imagine if Microsoft had not introduced the breaking change: many corporations would still be using ASP Classic or VB-based applications (there still are some, but not that many). Many corporations use additional services such as Azure or SharePoint (besides how expensive they are). Please note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C-based or older languages and have not been ported to newer high-level languages. ## The question How did Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting applications is very hard, gives no immediate value (the Netscape story), and is very risky. I am more interested in Microsoft's approach, and not opinions such as \"because .NET is OOP, or .NET is dll-embeddable, etc.\". This question may be constructive, as technology has changed vastly lately. As we can see, Microsoft changed ASP.NET WebForms to MVC, WinForms is legacy now, distribution is starting to change to the Windows Store rather than basic installers, touchscreens are common, and later on we will have see-through applications such as Google Glass. And those will be breaking changes. We need to take portability into account as an issue nowadays. We will need more than a mere technology choice; we will also need migration plans. Maybe it will even be as critical as needing a multiplatform language compiler, as approached by Joel's Wasabi.
(hey, I read his articles too much!)"} {"_id": "120126", "title": "What is the history of why bytes are eight bits?", "text": "What were the historical forces at work, the tradeoffs to make, in deciding to use groups of eight bits as the fundamental unit? There were machines, once upon a time, using other word sizes, but today for non-eight-bitness you must look to museum pieces, specialized chips for embedded applications, and DSPs. How did the byte evolve out of the chaos and creativity of the early days of computer design? I can imagine that fewer bits would be ineffective for handling enough data to make computing feasible, while too many would have led to expensive hardware. Were other influences in play? Why did these forces balance out to eight bits? (BTW, if I could time travel, I'd go back to when the \"byte\" was declared to be 8 bits, and convince everyone to make it 12 bits, bribing them with some early 21st-century trinkets.)"} {"_id": "235388", "title": "How can I gauge the supportability and reliability of a package before introducing it to a project?", "text": "I recently found a package (JavaBuilders) that I like and I think will help development on my project, but it has some issues: * No longer being developed (last commit on GitHub >1 year ago) * Lack of activity implies it is no longer supported by its developers * No noticeable community implies it is not supported by a larger userbase For all I know this package is dead, and any issues we have will be insurmountable without a large rewrite to remove its use in the worst-case scenario. Separately (probably worth another question), if this package is not widely known, it'll increase lead-in time for any new members who join the team. Is there a more reliable way to gauge the use of a package, to see if it's 'supportable' and flexible enough (e.g. if we want it to do feature X we can't go back to a developer who has vanished from developing the package) to not put the project at risk?"} {"_id": "173154", "title": "What are the differences between programming languages?", "text": "Once upon a time, I heard from someone: > the only difference between programming languages is the syntax I wanted to deny it - to say that there are other **fundamental** aspects that truly set a language apart from others, not just syntax. But I couldn't... So, can you? Whenever I search Google for something like \"differences between programming languages\", the results tend to be debates between two specific languages (I'd like something more general) - however, some of the aspects that people seemed to debate the most were: * Object orientation * Method/operator overloading (I actually see this as rather related to syntax) * Garbage collection (While it seems like a good difference, for some reason it doesn't seem that \"fundamental\") What important aspects other than syntax can you think of?"} {"_id": "173153", "title": "JSF best practice for binding UI components to backing bean?", "text": "In JSF, is it OK to bind UI components to a backing bean just to render messages, or should we only bind when we need to do a lot more than just render messages?"} {"_id": "203507", "title": "What's so useful about closures (in JS)?", "text": "In my quest to understand closures in the context of JS, I find myself asking: why do you even need to use closures? What's so great about having an inner function be able to access the parent function's variables even after the parent function returns?
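For reference, here is a minimal sketch of the construct I mean (the `makeCounter` name is just a hypothetical example, not from any particular library):

function makeCounter() {
    var count = 0;           // local variable of the parent function
    return function () {     // the inner function 'closes over' count
        count += 1;
        return count;
    };
}

var next = makeCounter();    // makeCounter has already returned here...
next(); // 1
next(); // 2 ...yet count lives on, visible only to this inner function

So I can see _that_ the variable survives; what I am missing is why that is more useful than the obvious alternatives.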
I'm not even sure I asked that question correctly, because I don't understand how to use them. Can someone give a real-world example in JS where a closure is more beneficial than the alternative, whatever that may be? Edit: the question Why is closure important for JavaScript? did not answer clearly enough what I want to understand. So, I don't consider this a duplicate. I'll take a look at all of the answers/resources, and I'll mark an accepted one. Thanks to everyone!"} {"_id": "46137", "title": "What is the main difference between Scripting Languages and Programming Languages?", "text": "Say, the difference between Python and C++?"} {"_id": "254370", "title": "Security Pattern to store SSH Keys", "text": "I am writing a simple Flask application to submit scientific tasks to remote HPC resources. My application in the background talks to remote machines via SSH (because it is widely available on various HPC resources). To be able to maintain this connection in the background, I need either to use the user's SSH keys on the running machine (when users have passwordless SSH access to the remote machine) or to store the user's credentials for the remote machines. I am not sure which path I have to take: should I store the remote machine's username/password, or should I store the user's SSH key pair in the database? I want to know the correct and safe way to connect to remote servers in the background in the context of a web application."} {"_id": "177641", "title": "Is it efficient to use the number pad in your programming", "text": "I'm a programmer and use vim for all my software. I use the `hjkl` keys to avoid moving from the home row. Is it efficient or inefficient to use the number pad on the keyboard? My thought is that it's probably inefficient because you need to move your hand. Many thanks."} {"_id": "89374", "title": "Help me understand this \"mindmap / constellation\" visual pattern", "text": "Please help me identify and understand this visual pattern: what's the common name used for such mindmap / constellation visualisations? http://asterisq.com/products/constellation/roamer/demo http://apps.asterisq.com/mentionmap/#user-scobleizer I am also looking for a framework, library or **math formula** that will help me build something similar. I am especially interested in the auto-arranging functionality, as I will have plenty of nodes that I need to arrange on the screen in the best possible fashion. What's the math behind auto-arranging something in 2D? I also need to combine all that with extensive zooming and map-like navigation. If you know about anything that could help achieve my goal, please don't hesitate to leave an answer. My preferred language is ActionScript 3 / Flash, but I will be thankful for any info, tutorial or article in any language."} {"_id": "177649", "title": "What is constructor injection?", "text": "I have been looking at the terms constructor injection and dependency injection while going through articles on (Service Locator) design patterns. When I googled constructor injection, I got unclear results, which prompted me to check in here. What is constructor injection? Is it a specific type of dependency injection? A canonical example would be a great help! **Edit** Revisiting this question after a gap of a week, I can see how lost I was... Just in case anyone else pops in here, I will update the question body with a little learning of mine. Please do feel free to comment/correct.
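To illustrate, here is a minimal sketch of my current understanding of constructor injection (the `Engine`/`Car` names are hypothetical, purely for illustration):

// The dependency is created outside and handed in through the constructor;
// the object itself never creates its own collaborator.
function Engine() {}
Engine.prototype.start = function () { return 'vroom'; };

function Car(engine) {        // <-- constructor injection happens here
    this.engine = engine;
}
Car.prototype.drive = function () { return this.engine.start(); };

var car = new Car(new Engine()); // the wiring is decided by the caller

As far as I can tell, this makes it easy to pass in a different implementation (a mock engine in tests, for example) without touching `Car` itself.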
_Constructor injection and property injection are two types of Dependency Injection._"} {"_id": "102813", "title": "Best practices for graph representation of a system architecture?", "text": "I don't really know the nomenclature regarding these matters, but here is a brief description of what I want. Please let me know if I should elaborate more. I have this larger project involving databases, different languages, and interfaces (SQL, R, C, Python, both GUI & CLI). It's growing a little too big to fit into a simple mental construct of what is actually going on. I am interested in making the mother of all charts, mapping out the project from a system architecture perspective. Generally speaking, I'd like it to show some meta information about the data and where it is produced / consumed. I guess it is close to a flow chart, but as I am no expert in these matters, I am asking for help. Are there any tools for this? Any best practices regarding formatting, etc.? How about symbols for all the procedures / classes / methods / functions? Please chime in if you have any opinion regarding the matter. Just to frame a little bit more what I would be interested in: * I'd rather use tools like LaTeX than Visio * I hate large and fancy IDEs, but I adore VIM * I could do with static solutions, but I would be interested in automated solutions too (as long as they are not too complicated, of course)"} {"_id": "78758", "title": "Downloadable Game vs Non-Downloadable", "text": "I want to make a networked building game with limited physics (just for the characters) and I wanted to know what the best route would be. I was looking at a browser-based Java game or a downloadable C++ game, or maybe even a downloadable Java game. Anyway, would a browser-based game be too slow for something like this (thousands of blocks with multiple players)? What are your thoughts/suggestions?"} {"_id": "272", "title": "When someone asks you what you do, what do you say (e.g. programmer, developer, code monkey)?", "text": "Do you call yourself a programmer, a developer, or a code monkey? I personally prefer to say I am a developer."} {"_id": "66393", "title": "Motivation and practical learning", "text": "I'm a newcomer to StackExchange, and this seems a very good place to ask my question, which has been wandering around in my head for a few months. Currently I'm 22, I'm studying for a BS in Computer Science at UNED (the Spanish distance university), and I'm doing well. I have a job as well, doing web programming (PHP, SQL, CSS, HTML, Apache, that kind of stuff) for a company, but working from home. I've tried over the last few years to achieve success with other programming languages. I started with C++, Java, Perl & Python, and although I can say I have a decent level in them, it's difficult for me to find projects where I can use them. And I mean **real projects** where you can push the language to its limits. I think the lack of motivation is one of the reasons behind this. It's like after a full day of repetitive programming, my brain is exhausted and it is hard to achieve something. Also, I have problems with my learning methods. I read a lot of forums (like this one), and a lot of books and websites talking about programming, but I don't know how to apply the knowledge acquired. It's like I need a different approach to learning, a more practical one. So, the question is: How do you find the motivation to keep programming after a full day at the job? How do you find a more practical learning method?
(It's easy to keep _reading about_ programming, after all, but it's another thing to actually code **something big**). Thank you in advance. PS: I'm not a native speaker, so please excuse my lexical and grammatical errors."} {"_id": "239180", "title": "Working on a project, lacking motivation to actually get to coding", "text": "I'm a hobbyist programmer. Often during a period of working on a project, I find myself having a hard time actually getting to coding. I sit in front of my computer, sometimes open the IDE, and instead of continuing work on the project, I find myself watching YouTube videos (often programming-related), browsing Facebook, reading questions on this site, etc. Also, sometimes in the design phase of something, I think hard about different designs and solutions, come up with a cool solution which I get excited about, and sketch it on paper; but eventually, when I actually sit in front of my computer to implement what I designed, I don't program but rather do other stuff, or sometimes code a little bit. I do like programming a lot. And I also enjoy designing stuff and thinking of ideas. However, often when I actually sit down to continue my current project, I have this lack of motivation to code. I have two questions about this: 1. Is this normal? **Are there other people - _that enjoy programming very much_ \\- that experience the same problem?** Is this 'starting-to-work-bummer' feeling common? 2. How do you suggest I solve this? **When sitting down to code, should I just 'power through' this initial phase of lack of motivation, knowing it'll get more fun after I 'warm up' and get deep into coding?** **Or am I risking getting burned out and tired of programming by forcing myself into this when I lack motivation?** What do you suggest from your personal experience? **This isn't just a \"do you sympathize with this?\" kind of question. I'm asking in order to find a solution. Thanks for your help.** _I don't think this is a duplicate. There are a lot of questions about lack of motivation, but this one in particular is about a specific feeling of lack of motivation when sitting down to code._"} {"_id": "29852", "title": "Motivating yourself to actually write the code after you've designed something", "text": "**Does it happen only to me, or is this familiar to you too?** It's like this: You have to create something; a module, a feature, an entire application... whatever. It is something interesting that you have never done before, it is challenging. So you start to think about how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You put different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc. But now, after everything is settled, you have to sit down and actually write the code for it. And it's not challenging anymore... Been there, done that! Writing the code now is just \"formalities\" and feels like re-iterating what you've just finished. At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process, so I have to do this too ('cause I get paid to do it).
But I have a pet project I'm working on at home, after work, and there it's just me; no one is paying me to do it. I do the creative work, and then when the time comes to write it down, I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move on to the next challenging thing, and then the next, and the next... Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer."} {"_id": "80599", "title": "Is it worth the experience gained by publishing a simple app on the App Store?", "text": "E.g. another stupid fart app or something of that sort. Not for money or recognition or anything else, except maybe the experience you get from it. Are there any caveats one should be aware of before deciding to go through with it?"} {"_id": "19974", "title": "IDE Generated Code", "text": "Many IDEs have the ability to spit out automatically written pieces of code. For example, it might generate a getter/setter pair for you. I wonder whether it isn't an attempt to work around some flaws in the language. If your IDE can figure out what the code should be without your help, why can't the compiler? Are there cases where this is a really good idea? What kind of code generation does your IDE provide? Would the same thing be better provided as a feature of the language?"} {"_id": "16701", "title": "Is time tracking required for web based/service industries?", "text": "I work in the web development industry and we use a time tracking system to log our time (project/time/comment). At the beginning of a project, we create a contract and decide upon a price based on our hourly rate _x_ estimated hours. We log our times to see if we go \"over budget\". Is time tracking in this industry the norm? Is it required? What are the pros and cons?"} {"_id": "80591", "title": "Why is no C++ interview complete if it does not have vtable questions?", "text": "Frankly, I don't understand the practical importance of the vtable. For me it is just a theoretical concept which needs to be memorized since the interviewer will surely ask about it. Can anyone shed some light on why interviewers love the vtable? I don't see how knowledge of vtables makes me a competent C++ developer :-|"} {"_id": "245825", "title": "Why was the GPL made so that it requires open application code, yet not an open pipeline of compiled applications?", "text": "Say we have a GPL library (CGAL, for example). We have a big task chain, like pipe modeling and testing. We would love to use the library for our in-house application, yet then we must open our sources... so we make a minimal application that takes arguments and files in, and returns files and data out. It is used like `closedSourceApp > GPLApp > closedSourceApp`, and all interesting/relevant parts are excluded from the GPL app. At the same time, the GPL library is not used to its full potential and does not get integrated into bigger applications. So the question is: what reasons were behind the GPL license's design that forces a project's new code to live under it?"} {"_id": "245824", "title": "How to mentor a junior team member while remaining productive on your own projects?", "text": "This question is not about multitasking... there are plenty of tips around for that. This is about how to efficiently multi-project (work on two projects at a time) on a single large code base.
More specifically, how does a senior programmer go about working on one project while mentoring a beginner programmer on a different project within the same codebase, and thus assisting with training, design, integration, troubleshooting, etc.? The beginner needs to be mentored so as to prevent unnecessary complexity and debt from creeping into the codebase. I don't see any questions pertaining to this scenario. The closest is senior + senior, but mentoring a beginner is a different animal. I wonder if multi-projecting is even normal in the industry. Are there patterns for making it more efficient? The context switching is brutal. Multitasking is already hard enough as it is. I would think both engineers should ideally work on the same project until the beginner is far enough along to be able to handle a project on his or her own, always with some assistance from a senior, of course. Not to get too far off topic, but to give context, it seems management wants to skip the training step and throw the beginner into the deep end of a large code base. I am less concerned about the success of the near-term project(s), and more concerned with the resulting increased entropy and complexity in the code if I were not to mentor, which would prove disastrous in the long run."} {"_id": "245827", "title": "Setting global parameters: is this a reasonable use of const_cast and volatile?", "text": "I have a program that I run repeatedly with various parameter sets. Different parameters are used in different parts of the program (including different source files). The same parameter may also be used in different places. However, all parameters are **constant** during run-time after they have been set. I discovered very quickly that declaring the parameters locally does not make sense (it involves having to remember where every parameter is defined, etc.), so I resorted to using a `params.h` where I declared and defined all the parameters: `const int Param1 = 42;`, etc. The downside of this is that I have to recompile every time I change a parameter. So I'm thinking of using the method below. It relies on using `volatile` and `const_cast`, which is normally considered \"dirty\", but it ensures that once the parameters have been set in `main`, they are not accidentally changed anywhere else in the program. I'm wondering whether people think this is OK, because eventually I want to open-source my code. In `params.h`: namespace Params { extern volatile const int Param1; // etc. } In `main.cpp`: #include \"params.h\" volatile const int Params::Param1 = 0; int main(int argc, char* argv[]) { // Cast away const/volatile to set the parameter once at startup. int* const pParam1 = const_cast<int*>(&Params::Param1); *pParam1 = // Get value from argv[] or some config.ini file. // etc. }"} {"_id": "88020", "title": "Why would a code analysis tool be priced based on lines of code count?", "text": "I heard some static analysis tools are priced depending on how much code they are licensed for. I can see that it's the usual segmentation - the more code the customer has, the more care he needs and the more useful the tool is for him, so he should pay more; basically it's a way for the tool supplier to get more money.
Are there any other objective reasons why a source code analysis tool would be priced depending on how much code the customer is planning to analyse?"} {"_id": "136792", "title": "Is this a proper implementation of an iOS MVC pattern?", "text": "After browsing the Apple docs, I came across this sample of their MVC pattern: ![mvc](http://i.stack.imgur.com/BYaCl.png) Using NSNotificationCenter and without using KVO, would the diagram below represent a correct implementation of the MVC pattern? If not, what's wrong and how can it be corrected or improved? ![mvc example](http://i.stack.imgur.com/jjQa2.png) 1. The app starts with the left light set to on, and the right one off. Only one light may be on at a time. 2. The user presses the right switch, which sends a target action to the view controller. 3. The view controller receives the message, and sends a message to the right light data model. 4. The right light uses NSNotificationCenter to notify the controller that the right light has changed. 5. The controller receives the message, and performs the following method: `BOOL rightLightOn = [rightLightData on]; if( rightLightOn ) { [rightLightImage setImage:onImage]; [leftLightSwitch setOn:NO]; } else { [rightLightImage setImage:offImage]; }` 6. The switch change causes the UISwitch to call the method “leftSwitchChanged” in the controller. 7. The controller receives the message, and sends a message to the left light data model. 8. The left light uses NSNotificationCenter to notify the controller that the left light has changed. 9. The controller receives the message, and again performs the same method shown above, but modified for the left light. In addition, what if the system wasn't using a switch, and instead was using a UIButton that displayed the text “Turn On” or “Turn Off”? Would the switch update its own text and then call “rightSwitchChanged”, or would it call rightSwitchChanged immediately and wait for the view controller to change the text?"} {"_id": "88025", "title": "Quickly Pivoting Large Data", "text": "We are developing a product which can be used for developing predictive models and for the slicing and dicing of data in order to provide BI. We have two kinds of data access requirements. For predictive modeling, we need to read data on a daily basis and process it row by row. For this, a normal SQL Server database is sufficient and we are not seeing any issues. The other case is slicing and dicing data of huge sizes, like 1 GB of data having, let us say, 300 M rows. We want to pivot that data easily with minimal response time. The current SQL database has response time issues with this. We would like our product to run on any normal client machine with 2 GB RAM and a Core 2 Duo processor. I would like to know how I should store this data and how I can create a pivoting experience for each of the dimensions. Ideally, we will have data of, let us say, daily sales by salesperson by region by product for a large corporation. Then we would like to slice and dice it based on any dimension, and also be able to perform aggregation, unique values, maximum, minimum, average and some other statistical functions."} {"_id": "12017", "title": "Software development books are useful, but when to find the time to read them?", "text": "I have 5 books on my \"read wish list\". When do I read them? I mean, I could force myself to use 1 hour during working hours, but this will last for 2 days, and then someone will ask me to do more \"high priority things\".
One option could be reading at night, but this has limits too, especially because I prefer to spend time with my kids. Could you please share your experiences? A long-term plan is needed, of course; it makes no sense to read 5 books in a week, but rather to continuously read something. For this reason it must not be a stressful thing. It should be easy. It must not be a struggle to find time to read; it should be done on a continuous, regular basis. Somehow this question may be similar to THIS ONE, but I want to ask about books. How many of you read books at work for self-improvement, not to tackle a specific task?"} {"_id": "48697", "title": "Should a programmer be indispensable?", "text": "As a programmer or system administrator, you could either strive to have your fingers in every system or isolate yourself as much as possible to become an easily substituted cog. Advantages of the latter include being able to take vacations and not being on call, while the former means that you'd always have something to do and be very difficult to fire. Aiming for either extreme would require a conscious effort. **Except for the obvious ethical considerations**, what should one strive for?"} {"_id": "245828", "title": "Pair-programming and company privacy guidelines", "text": "At our office, new privacy regulations are being introduced requiring every employee to protect his or her computer with a personal password. The employees are required not to share these passwords. Prior to this, everybody could access everybody's computer. Now, there is little doubt that this is an improvement from a security point of view. However, we - as developers - struggle with this, because we do a lot of pair programming (often for several days in a row), and due to flexible working hours and the like, access to different workstations is needed without the presence of the developer who owns the computer. There is a tendency for developers to share their passwords (even going so far as putting them in a list on the wall). How can you reconcile the need to access uncommitted code on the workstations of different developers with security and privacy guidelines regarding these workstations?"} {"_id": "153777", "title": "Personal Version Control Benefits", "text": "What are the benefits, if any, of having a personal Version Control System? This includes such things as personal projects, hobbies, sample code accumulated over the years, etc. ### Over and above the obvious benefits such as backup/history."} {"_id": "203618", "title": "Should I Use Native Generic PHP or a Framework", "text": "I'm working with a team I just met. I've been writing plain native PHP up until now, and have built several web apps with it. But a team member suggests we switch to using a framework for development. I personally prefer going the normal way, using plain native PHP code, but he suggests we use a framework. I've heard CodeIgniter has problems with loading images, and this is one problem with some frameworks: they have some difficulties you just have to live with, unlike writing native PHP code. I have a large archive of PHP code that does the work of what some frameworks do; I can use this and implement it in the web app. Is it better to go with a framework or with native, generic PHP? Another thing is that this is a web app for mobile devices, which the team and I are developing for a company, and there will be a need for maintenance in the near future, possibly when we are not available to do it.
Our code has to be very simple, unambiguous and self-explanatory, with comments too, for the future developer. This is why I'm thinking we should write our own code and keep it as simple as possible."} {"_id": "100010", "title": "How to flag a class as under development in Java", "text": "I'm working on an internship project, but I have to leave before I can finish everything up. I have 1 class that is not stable enough for production use. I want to mark/flag this class so that other people will not accidentally use it in production. I have already put the notice in the Javadoc, but that doesn't seem enough. Some compiler error or warning would be better. The code is organized like this: [Package] | company.foo.bar.myproject |-- Class1.java |-- Class2.java |-- Class3.java <--(not stable) If there were a single factory class that called those classes in public methods, I could have made the method for `Class3` `private`. However, the API is NOT exposed that way. Users will use those classes directly, e.g. `new Class1();`, but I can't make a top-level class private. What's the best practice to deal with this situation?"} {"_id": "119864", "title": "Has anyone used Sproutcore?", "text": "Has anyone used Sproutcore for a web application? If so, can you give me a description of your experience? I am currently considering it, but I have a few concerns. First, the documentation is bad/incomplete, and I'm afraid that I'll spend lots of time figuring things out or digging through source code. Also, I'm a bit hesitant to use a project that is relatively new and could undergo significant changes. Any thoughts from people who have developed in Sproutcore are appreciated! EDIT/PS: Yes, I've seen this post: http://stackoverflow.com/questions/370598/sproutcore-and-cappuccino . However, I'm interested in a somewhat lengthier description of Sproutcore itself from someone who's used it for a significant project."} {"_id": "17254", "title": "Do poor writers make poor programmers?", "text": "I'm reading _Coders at Work_ by Peter Seibel, and many a time it has been mentioned that programmers who can't write generally make poor programmers - it's been claimed by Douglas Crockford, Joshua Bloch, Joe Armstrong, Dijkstra (and I've only read half the book). What's your view on this? Is an inability to express yourself in writing in a natural language such as English a hindrance to writing good code?"} {"_id": "52466", "title": "What should a freelancer's business card have?", "text": "For example, when I first started out freelancing a year ago, my business card had my name, email and website - and up top, a list of the technologies I'm comfortable with. In retrospect I don't feel this was a wise decision. Why would a potential client know what Python or Ruby is? How could he know what .NET was? I still have a couple of the old batch left, but I'm going to send out for some new cards. What do you recommend we developers show on our business cards? Am I correct in thinking that listing technologies is meaningless to potential clients?"} {"_id": "235967", "title": "Use a global variable, a singleton, or something else", "text": "**Preface:** I am working in PHP ( _Abandon hope all ye who enter here_ ). **Background:** There exists a large set of global functions in PHP, a number of which are miscellaneous system calls, like sleep (and others).
Now, I use `sleep` (and others) in a bunch of different scripts I run in a bunch of different places, and I have found I need sleep to call `pcntl_signal_dispatch` as soon as the `sleep` finishes - but possibly not in all my scripts. **A Generalization:** I need to make a global function do more than it currently does, hopefully without disrupting my current ecosystem too much. **My Solution:** I figure I could create a wrapper class that executes the correct `\"sleep\"`. _Singleton:_ I could make a singleton \"System\" class that wraps the global functions. Hell, I could even make the wrappers static methods. The downside is that there would be a lot of boilerplate checking to see which version I would need to execute, either a vanilla function call or one with extra stuff. _Global variable:_ I could make a generic \"System\" class that wraps the global functions. I could then extend the System class with different classes that override the wrapper functions. I create a global `System` variable within each script, dependent upon how I need the functions to behave. All my scripts have access to that global variable. The downside is I would have to make sure the global variable is declared, is never overwritten, and uses the proper `System`. _Something else:_ I could create a `SysControl` class with a static `System` variable and static wrappers around the `System`'s wrappers of the functions, and then swap out which `System` my `SysControl` class references. The downside is that I feel I am going overboard. Are there any more options I should consider? Which of these methods is the best, and why? What pitfalls should I look out for going forward? EDIT: I ended up using the _Something else_ solution."} {"_id": "95796", "title": "Why is C++ often the first language taught in college?", "text": "My school starts the computer science curriculum with C++ programming courses, meaning this is the first language that many of the students learn. I've seen that many people dislike C++, and I've read a variety of reasons why. It almost seems to be popular opinion that C++ isn't a very good language. I get the impression it's not very well liked, based on some questions on StackExchange as well as posts such as: http://damienkatz.net/2004/08/why-c-sucks.html http://blogs.kde.org/node/2298 http://blogs.cio.com/esther_schindler/linus_torvalds_why_c_sucks http://www.dacris.com/blog/2010/02/16/why-c-sucks-part-2/ etc. _(Note: It is not my opinion that C++ is a bad language. In fact, it's the main language I use. However, the internet as well as some professors have given me the impression that it's not a very widely liked language. In fact, one of my professors constantly rags on C++, yet it's still the starting language at my college!)_ With that in mind, **why is this the first language taught at many schools? What are the reasons for starting a programming curriculum with C++?** Note: This question is similar to \"Is C++ suitable as a first language?\", but is a little different since I'm not interested in whether it's suitable, but why it's been chosen."} {"_id": "63028", "title": "In terms of performance: while, for ... loops VS recursion", "text": "What is better for performance: writing a loop iteratively, e.g. with
`for` or `while`, or writing it as recursion?"} {"_id": "235962", "title": "What is the process of determining which method in a class hierarchy should execute known as?", "text": "I thought I understood inheritance and polymorphism, but I was given this question, and I can't, for the life of me, figure out what the proper answer is or what they're trying to get at: > The process of determining which method in a class hierarchy should execute > is known as: > > * a) inheritance > * b) polymorphism > * c) is-a > * d) has-a > * e) parent class > Looking at each of the terms, none of them seems like the proper answer. **Inheritance** is just when a class automatically gets the public variables and methods of its parent class. So this clearly isn't the right answer. **Polymorphism** allows us to write one method to handle object A, and as a result it will work with everything that extends object A (or continues to extend it, i.e. Object B extends Object A, Object C extends Object B, etc.). So this clearly isn't the right answer! **Is-a**: This doesn't even make any sense. Is-a is just used to declare that a class is an instance of its parent class (a dog is-an animal), so it inherits its public variables and methods. I don't see how this is \"determining which method in a class hierarchy should execute\". **Has-a**: I'm not too familiar with this, but it's essentially composition, where Object A has-a Object B, but Object B isn't an instance of Object A. This doesn't seem like the right answer either. **Parent Class**: This is just the base class; if we trace the tree of inheritance up, the top of the tree is the parent class. Can someone please explain which term fits the definition _\"the process of determining which method in a class hierarchy should execute\"_? Am I not understanding one or more of the terms? Is this simply a poorly worded question?"} {"_id": "180131", "title": "What are Web runtime environments and programming languages", "text": "I've been looking into the details behind these two different categories: 1. Web runtime environments 2. Web application programming languages I believe I have the correct information and have phrased it correctly, but I am unsure. I have been searching for a while but only find snippets of information or what I see as useless information (I could be wrong). Here are my descriptions so far: Web runtime environments - A run-time environment implements part of the core behaviour of any computer language and allows it to be modified via an API or embedded domain-specific language. A web runtime environment is similar, except it uses web-based languages such as JavaScript, which utilise the core behaviour of a computer language.
Some of the other languages for web application programming are: * Ajax * Perl * Ruby Here are some of the resources used: http://en.wikipedia.org/wiki/Web_application_development http://code.google.com/p/jslibs/ **I would like some confirmation that the descriptions I have created are correct as I am still slightly unsure as to whether I have hit the nail on the head.**"} {"_id": "176063", "title": "How to be successful at BDD Specifications Workshops?", "text": "Today we tried to introduce BDD in our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop some people were actually unhappy with the workshop. One developer felt he **wasted his time** as he was used to be given out the scenarios directly by the business analyst and review them with her. The business analyst **didn't feel confident with our scenario coverage** (Had a feeling that we could have missed out other important stuff) but **more importantly felt that this workshop was also a waste of time as she could have figured out all these scenarios by herself and in a shorter period of time**. This experimental workshop lasted 1h30, and by the end of it, we didn't feel confident enought about what we did...sure we could have spent more time on it but honestly most people get exhausted after 1h30 of brainstorming to fetch out business rules from the BA brain. So my question is how that kind of workshop can actually work. In the theory, given you have a new feature to develop, you put the tree 'amigos' (dev/tester/ba) in the same room so that they can collaborate together on writing the differents requirements for the new feature using examples. I can see all the benefits from that. Specially in term of knowledge sharing and common product/end goal/done vision. Our conclusion from this experiment was that it is actually more cost effective to **first have a BA to work on his own on the examples** and only **then to have the scenarios to be reviewed/reworked by the 3 'amigos'**. By having the BA to work on his own, we actually feel more confident that we are less going to miss out stuff + we still get to review the scenarios afterward to double check. We don't think than simple one time brainstorming/deliberate discovery session is actually enought to seriously cover all the requirement for a new feature. The business analyst is actually the best person for that kind of stuff. The best thing we can do is to review what she wrote and see if then we have a common understanding (which could then lead to rewrite some of her scenarios or add new ones she could have missed). So how can you get that to work effectively **in practice** ?"} {"_id": "176065", "title": "Brief material on C++ object-lifetime management and on passing and returning values/references", "text": "I was wondering if anybody can point to a post, pdf, or excerpt of a book containing the rules for C++ variable life-times and best practices for passing and returning function parameters. Things like when to pass by value and by reference, how to share ownership, avoid unnecessary copies, etc. 
This is not for a particular problem of mine; I've been programming in C++ for long enough to know the rules by instinct, but it is something that a lot of newcomers to the language stumble over, and I would be glad to point them to such a thing."} {"_id": "21617", "title": "GPL Notice on Snippets", "text": "I just want to ask: is it OK to put a GPL notice inside a small script or a snippet? > This program is free software: you can redistribute it and/or modify it > under the terms of the GNU General Public License as published by the Free > Software Foundation, either version 3 of the License, or (at your option) > any later version. > > This program is distributed in the hope that it will be useful, but WITHOUT > ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > more details. > > You should have received a copy of the GNU General Public License along with > this program. If not, see http://www.gnu.org/licenses/. Or will just the copyright notice be enough?"} {"_id": "212820", "title": "Generating a custom widget that users can embed into their external website based on my server data which changes periodically", "text": "I'm deciding how to generate the code that lets users embed a widget (much like the StackOverflow badge) into external websites. The content of the embedded widget will periodically change, but it doesn't need to be communicating in real time with my server after a visitor has loaded the page where it's embedded. This opens up several possibilities for getting this to work: 1. On the server, pre-generate the static HTML content I want rendered in their page (and have a scheduled job which regenerates the static file). My users can then embed it in their page using an iframe reference to the static resource. 2. Same as above, but instead of the iframe I create a js file which they reference (much like a Google Analytics snippet) and the served js file then inserts the data into their DOM. My web server would need to dynamically generate the js file on each request for the file resource. 3. Give them a js file which creates the element on their page like the above, except the js file payload doesn't include all the data and instead basically generates the DOM element as a template, then calls web services to populate the data (like their name and score) using JSONP requests to my server. I like 2 for the explicit server-side control, but it will take more time. 3 is good because the services I expose to retrieve this data can be reused for other purposes later. I don't generally like iframes, but they work and are very quick to implement. Any suggestions of which way to proceed or ideas that I've missed?"} {"_id": "212822", "title": "Why many designs ignore normalization in RDBMS?", "text": "I have seen many designs where normalization wasn't the first consideration in the decision-making phase. In many cases those designs included more than 30 columns, and the main approach was "to put everything in the same place". From what I remember, normalization is one of the first and most important things, so why is it sometimes dropped so easily? **Edit:** Is it true that good architects and experts choose a denormalized design while non-experienced developers choose the opposite? 
What are the arguments against starting your design with normalization in mind?"} {"_id": "69178", "title": "What is the benefit of git's two-stage commit process (staging)?", "text": "I'm learning git and I've noticed that it has a two-step commit process: 1. `git add <file>` 2. `git commit` The first step places revisions into what's called a "staging area" or "index". What I'm interested in is why this design decision was made, and what its benefits are. Also, as a git user do you do this or just use `git commit -a`? I ask this as I come from bzr (Bazaar), which does not have this feature."} {"_id": "45447", "title": "Foreign key restrictions -> yes or no?", "text": "I would like to hear some "real life experience" suggestions on whether foreign key restrictions are a good or a bad thing to enforce in a DB. I would kindly ask students/beginners to refrain from jumping in and answering quickly and without thinking. At the beginning of my career I thought that the stupidest thing you can do is disregard referential integrity. Today, after a "few" projects, I'm thinking differently. Quite differently. What do you think: Should we enforce foreign key restrictions or not? *Please explain your answer."} {"_id": "250991", "title": "Is immutability very worthwhile when there is no concurrency?", "text": "It seems that thread-safety is always/often mentioned as the main benefit of using immutable types and especially collections. I have a situation where I would like to make sure that a method will not modify a dictionary of strings (which are immutable in C#). I’d like to constrain things as much as possible. However, I am not sure whether adding a dependency on a new package (Microsoft Immutable Collections) is worth it. Performance is not a big issue either. So I guess my question is whether immutable collections are _strongly_ advised when there are no hard performance requirements and there are no thread-safety issues? Consider that value semantics (as in my example) might or might not be a requirement as well."} {"_id": "250994", "title": "Static and dynamic data: should I use different databases?", "text": "Say I am building a website that uses two different types of data: * Static: information that will hardly change, like movie awards or world country names (I want fast access, so no external API) * Dynamic: information entered by users I have not written a single line of code yet. My static DB is likely to be quite large but will not change over time. As for the dynamic DB, I have no idea yet, but I might need scalability. Should I use different databases in the long run? Is it common practice to do so?"} {"_id": "125401", "title": "Do you keep DONE stories in the physical product backlog?", "text": "We have our Product Backlog as a physical Kanban board with TODO & DONE columns. Some of the stories move from the TODO column to the Sprint backlog during our planning, and then, during the Sprint review, back into the Product Backlog's DONE column for the ones we completed. I was wondering if keeping a history of DONE stories on the wall was interesting at all. It's starting to take up space Sprint after Sprint and I can't see any value in it for now."} {"_id": "125402", "title": "How can I learn practical applications of software engineering principles?", "text": "I'm having problems bringing the software engineering I learned in school into my projects at my job. 
The patterns are all easy enough to apply, but coming up with architectures and general class structures is tough for me. It's so open-ended and I never know how to approach my projects, with the exception of any project that cleanly fits the MVC architecture. I've been working on some increasingly larger projects. The end product works because I know testing well, but getting there is hell and going back to add features is worse. My work usually degrades into planning from the application's entry point and thinking through the use cases. This works eventually, but I end up with low cohesion and tightly coupled code. I had enough experience coming out of school to start in a job with more responsibilities than entry-level programmers, so I don't have the benefit of learning from a more experienced team. The problem is I'm very inefficient. My boss said once that if I get these projects out faster and with the same level of testing I can make more. So I have to teach myself software engineering. I've read a lot of books, but they are a little too abstract; I need something concrete. **Where should I go to learn real project software engineering?** **EDIT:** It may help to know that I sometimes end up with clean code. It either just takes me a really long time to get there, or I give up and hack it because I know it will work. Most people will say this comes with experience, but there must be some place I can go that says: if your project is like this, this, and this, try these architectures."} {"_id": "254679", "title": "Python IZIP list comprehension returns empty list", "text": "I have a list of strings that I am sorting. There are 12 different key strings within the list that I am using to sort by. So instead of writing 12 separate list comprehensions I would like to use a list of empty lists and a list of key strings to sort, then use izip to perform the list comprehensions. Here is what I am doing: >>> from itertools import izip >>> tran_types = ['DDA Debit', 'DDA Credit'] >>> tran_list = [[] for item in tran_types] >>> trans = get_info_for_branch('sco_monday.txt',RT_NUMBER) >>> for x,y in izip(tran_list, TRANSACTION_TYPES): x = [[item.strip() for item in line.split(' ') if not item == ''] for line in trans if y in line] >>> tran_list[0] [] I would like to see an output more like the following: >>> tran_list[0] [['DDA Debit','0120','18','3','83.33'],['DDA Debit','0120','9','1','88.88']] The output doesn't make sense to me; the objects that izip returns are lists and strings >>> for x,y in itertools.izip(tran_list, TRANSACTION_TYPES): type(x), type(y) (<type 'list'>, <type 'str'>) (<type 'list'>, <type 'str'>) Why is this process returning empty lists?"} {"_id": "168087", "title": "Learning Issued Token in Federated Service", "text": "I would like to learn about federated WCF services. I have the following on my system: • Windows XP • Visual Studio 2010 Express • SQL Server 2008 Express Is it possible to create a federated service sample with this infrastructure? Is there any article for that? **UPDATE** Federation: http://msdn.microsoft.com/en-us/library/ms730908.aspx Federation Sample: http://msdn.microsoft.com/en-us/library/aa355045.aspx"} {"_id": "254672", "title": "How could a system be zero-knowledge?", "text": "I'm actually interested in zero-knowledge storage systems: those systems where the storage provider claims he can't have any access to the stored data. As far as I know, the data are encrypted using a symmetric encryption system, such as AES or another, but those systems need a key to function. So what happens to the key? 
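(One common construction, sketched here under the assumption that the key is derived client-side from the user's passphrase and never sent to the provider; the code is illustrative, not any specific vendor's scheme:)

    import hashlib, os

    # Runs on the client only. The server stores the salt and the ciphertext,
    # never the passphrase and never the derived key.
    def derive_key(passphrase: bytes, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac('sha256', passphrase, salt, 200_000)

    salt = os.urandom(16)                     # stored next to the ciphertext
    key = derive_key(b'correct horse battery staple', salt)
    # 'key' is fed to AES locally; only encrypted bytes reach the provider.
    # Logging in from another machine re-derives the same key from the same
    # passphrase + salt, so the provider never has to hold it.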
Is it stored? Where? If a user connects from another location and wants to retrieve one of his files, he must retrieve the key first. So if the storage provider stores the key too, he could have access to the encrypted data. It's like having a locked chest with the code on a paper next to it, then claiming that you couldn't open the chest. So is there a flaw in that? Am I completely mistaken about those zero-knowledge systems? If they can finally open the files, how could some of them escape justice by saying "We cannot know that we are hosting illegal files"?"} {"_id": "254673", "title": "LAMP without PHP", "text": "My question is: is the P part of LAMP really necessary? I will have the database and I can connect to it via the Apache HTTP server, so why will I need PHP/Python on the server side (if it is a really simple setup)? Also, from a separation-of-concerns standpoint, it might be more robust, at least theoretically, to leave the P out of the equation and use LAM only. What do you guys think? I need to build this as a Linux server in order to back mobile iOS and Android apps. Cheers."} {"_id": "168089", "title": "Why has extreme programming (XP) gone out of date in favor of Agile, Kanban etc?", "text": "I like XP (extreme programming), especially the part where there are 2 programmers at the same screen, since a problem's solution often gets closer once you explain what you're doing, and pair programming forces you to explain what you're doing. Over the last 10 years or so, the XP style of working seems to have gone out of date in favor of the working methodologies Agile and/or Kanban. Why? XP seems to me a very good way to work, and it is a lot about the programming, where Agile and Kanban are more about processes."} {"_id": "44792", "title": "The Basics of Project Management / Software Development", "text": "It suddenly struck me today that I have never developed any large application or worked with a team of programmers, and so am missing out on a lot - both in terms of technical knowledge and the social-fun part of it. And I would like to rectify that - an idea is to start an open source group by training college students (for no charge) and developing some open source application with them. Please give me some basic advice on the whole process of how to (1) plan and (2) manage projects in a team. What new skill sets would you recommend? (I have read _joel on software_ and _37 Signals_, and got many insightful tips from them. But I'd like a little more technical knowledge ...) * * * Background (freelancer, past 4+ years) - Computer engineer > graphic / web designer > online marketing > moved on to programming in PHP, Perl, Python > did Oracle DBA OCP training to understand DB's > current self-assigned title - web application developer."} {"_id": "254674", "title": "Accepting the UUID collision risk based on number of clients", "text": "After reading some questions about the probability of UUID collisions, it seems like collisions, although unlikely, are still possible and a conflict solution is still needed. Therefore I am wondering about the background of choosing UUIDs for CouchDB: * Is the "unlikely collision" a responsibility of the developer? * Was it expected that IDs will be used by a reduced set of clients? When I went through the documentation it looked like the CouchDB algorithm was great at withstanding partitions, but the more I read about the problems of distributed ID generation, the more I believe taking the UUID collision risk is only feasible with a low number of clients. 
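(For scale, a back-of-the-envelope birthday-bound check; the 122 bits correspond to random version 4 UUIDs, and the numbers are only a sketch:)

    import math

    # P(at least one collision among n random IDs from a space of 2**bits)
    # is approximately 1 - exp(-n*(n-1) / 2**(bits+1)).
    def collision_probability(n: int, bits: int = 122) -> float:
        return -math.expm1(-n * (n - 1) / (2.0 * 2.0 ** bits))

    print(collision_probability(10 ** 9))   # ~9.4e-20 for a billion IDs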
Although I am still interested in the previous questions, the main thing I want to find out is: * Is it normal practice to accept the collision risk of UUIDs, counting on a low number of distributed generators? Or is it always assumed that the probability of collision is so low that it is not a concern?"} {"_id": "160258", "title": "Looking for a very memory-efficient way of finding exporting all relations in a family tree", "text": "Think of the question as a family tree; in the **PS** section I will explain what it exactly is, but a family tree is easier to imagine: so a father has kids, those kids may have more kids, those kids may have more kids, etc. 1- I don't have the whole information in memory to traverse it. With each method call hitting the database I have just the father at some level and its kids. Here is the high level of the method that I have and need to somehow use some good parts of: private void Foo(string fatherNode) { // call some DB scripts and grab data you need to work with. int numberOfKids = // get it from the thing you populated from the DB call. for (int i = 1; i <= numberOfKids; i++) { Node child = // grab child[i] from the list we populated from DB calls // Add it to the treeView } } Well, this was working because it is a GUI application and with each, you know, "click" event we were really requesting just one level of info. But now I need new functionality where I can click an Export button and it writes the WHOLE structure of this family tree to an XML file (so you can expand those nodes and still see the family hierarchy). 2- There is a lot of data. One father might have 400 children, each child might have 10 more children and each of those children might have 500 more children... so I also need to be concerned about getting memory exceptions... 3- Recursion? Can we really load ALL of this hierarchy into memory? I don't think so... remember the goal is to export it to XML. So maybe the efficient way is to write a good algorithm that at each call writes one level of the hierarchy to the file and doesn't load the whole thing into memory... But I am pulling my hair out and banging my head on the desk and can't crack the code and figure it out. So what are your pseudo-code suggestions? I am using C# by the way. * * * **P.S:** This is actually a Clinical Bioinformatics hierarchy, so you say OK, human genomes... OK, now there are 27000 genes under it; OK, now get gene234 and let's say what are its children..."} {"_id": "190339", "title": "Performance and other issues with using floating point types in C++", "text": "Being interested in C++ performance programming, there is one aspect I really have no clue about - and that is the implications of using float calculations vs doubles vs normal integer mathematics. What performance issues with these types of calculations should I consider? What other issues need to be considered when deciding which kind of operators to use? All I currently know is that doubles are probably the most expensive because they are larger, floating point is next, and then integer calculations are usually the fastest on a CPU because they are much simpler."} {"_id": "160254", "title": "Using an OpenID in a "closed system"", "text": "I would like to publish a website for certain family members only. Simply put, like a mishmash of family photos and videos. I want it to remain private, however. I was considering using OpenID for the login process because I really wanted to avoid: 1. Storing Usernames and Passwords (Too much maintenance) 2. 
Making an obfuscated URL that can be picked up by a toolbar (Not private enough) 3. Making a password to access the page (I've had users unable to manage this) So I was hoping to use OpenID to have, say, my brother log in using his Google account. But from what I've seen OpenID used for, it wouldn't prevent others from logging in to the website. I was thinking maybe I could limit it manually; however, this paragraph from the Google App Platform best practices has me double-thinking: > From a protocol perspective, both logins to the two IDPs are legitimate and > the attributes returned by the 2nd IDP appear identical — the same email > address, name, and so on. The only thing that differs between the two > requests is the user's claimed identity. In fact, relying parties are > required to verify that IDPs only return identities that they're > authoritative to prevent a rogue IDP from asserting identities from other > providers. But nothing prevents a rogue IDP from asserting attributes like > names and email address that may not be truthful. Is OpenID the "right" tool for what I wish to do? (Have a private web interface that requires logins but no user management beyond identifying family members)"} {"_id": "160252", "title": "Is it a good practice to capture build artifacts in Artifactory that Jenkins produces?", "text": "We use Jenkins to run continuous integration builds. The output of these builds might be EAR files, WAR files, or just a collection of files that are TAR'd up. To this point we have used Jenkins to manage the produced artifacts. We have Artifactory deployed in-house. Would it be a bad idea to leverage Artifactory to capture the produced artifacts? Or is Artifactory meant to hold JARs with versions that can be pulled into projects with Maven when building, and not meant to capture artifacts that a continuous integration tool uses?"} {"_id": "46867", "title": "How can I tell if software is highly-coupled?", "text": "I am familiar with the term "highly coupled" but am curious if there are signs (code smells) that can indicate that code is highly coupled. I'm currently working with Java EE but this can apply to any language. **Edit:** In case anyone's interested, this article sounds helpful: In pursuit of code quality: Beware the tight couple! (IBM)"} {"_id": "160251", "title": "Being a team manager and a developer in a Scrum team", "text": "I am managing a team of 6 people that recently moved to Scrum. We have a Scrum Master (one of the developers in the team) and a Product Owner. Since I have quite a lot of free time (because a lot of management work that I used to do is now done by the Scrum Master and Product Owner), and since I want to remain technically relevant, I am doing some technical development work. I act as a part of the development team, commit to some of the stories in each sprint, and participate in all the meetings as a part of the team. Do you think it is a good idea? Can it contradict the "self-organization" of the team?"} {"_id": "211558", "title": "Design patterns for multi-threaded messaging server", "text": "I'm designing an instant messaging server as a personal exercise to improve my understanding and application of multi-threading and design patterns in Java. I'm still designing; there's no code yet. My goal is to have a server that should make effective use of a multi-CPU box. I'd like the server to be distributable across multiple boxes, but wonder if that's running before I can walk. 
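(For the single-accept-loop-plus-worker-pool idea detailed in the thoughts below, a minimal sketch, assuming `ClientProxy` implements `Runnable`; the port number and pool size are illustrative:)

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ClientConnectionManager implements Runnable {
        private final ExecutorService pool = Executors.newFixedThreadPool(50);

        public void run() {
            try (ServerSocket server = new ServerSocket(5222)) {   // one socket owns the port
                while (true) {
                    Socket client = server.accept();               // only this thread accepts
                    pool.execute(new ClientProxy(client));         // workers do the talking
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }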
My initial thoughts are: * `ClientConnectionManager` has a `ServerSocket` object that constantly accepts client `Socket` connections. * `ClientConnectionManager` has a thread pool that spawns a new `ClientProxy` object when a client socket connection is accepted, handing in the client `Socket` object. * The `ClientProxy` objects represent the client app and handle sending/receiving messages across the `Socket` stream. Is it correct that only one `ServerSocket` may bind to a port? I take it there's no way to have a pool of objects accepting Socket connections? I have two ideas for passing messages between `ClientProxy` objects: either directly between `ClientProxy` objects that are "buddies", or via a central "Exchange" object, or better yet, a pool of objects. What are the pros/cons of the two approaches? Does the Exchange lend itself better to a distributed app? Would the Observer and Mediator patterns, respectively, be appropriate?"} {"_id": "158713", "title": "GDI low on memory", "text": "I am fresh to Visual C++. While moving forward in the book "Programming Windows with MFC", I came across GDI (Graphics Device Interface) and the use of paint brushes. The book says a brush cannot be created if your GDI is low on memory. I want to know when and how GDI gets low on memory. And more importantly, what is the reason that we cannot create a brush when GDI is low on memory?"} {"_id": "158715", "title": "Are small amounts of functional programming understandable by non-FP people?", "text": "**Case**: I'm working at a company, writing an application in Python that is handling a lot of data in arrays. I'm the only developer of this program at the moment, but it will probably be used/modified/extended in the future (1-3 years) by some other programmer, at this moment unknown to me. I will probably not be there directly to help then, but maybe give some support via email if I have time for it. So, as a developer who has learned functional programming (Haskell), I tend to solve, for example, filtering like this: filtered = filter(lambda item: included(item.time, dur), measures) The rest of the code is OO; it's just some small cases where I want to solve it like this, because it is much simpler and more beautiful in my opinion. **Question**: Is it OK today to write code like this? * How does a developer who hasn't written/learned FP react to code like this? * Is it readable? * Modifiable? * Should I write documentation like explaining to a child what the line does? # Filter out the items from measures for which included(item.time, dur) != True I have asked my boss, and he just says "FP is black magic, but if it works and is the most efficient solution, then it's OK to use it." What is your opinion on this? As a non-FP programmer, how do you react to the code? Is the code "googlable" so you can understand what it does? I would love feedback on this :) **Edit**: I marked phant0m's post as the answer, because he gives good advice on how to write the code in a more readable way and still keep the advantages. But I would also like to recommend superM's post because of his viewpoint as a non-FP programmer."} {"_id": "158716", "title": "Is Liskov Substitution Principle incompatible with Introspection or Duck Typing?", "text": "Do I understand correctly that the Liskov Substitution Principle cannot be observed in languages where objects can inspect themselves, as is usual in duck-typed languages? 
For example, in Ruby, if a class `B` inherits from a class `A`, then for every object `x` of `A`, `x.class` is going to return `A`, but if `x` is an object of `B`, `x.class` is not going to return `A`. Here is a statement of LSP: > Let _q(x)_ be a property provable about objects _x_ of type _T_. Then _q(y)_ > should be provable for objects _y_ of type _S_ where _S_ is a subtype of > _T_. So in Ruby, for example, class T; end class S < T; end violate LSP in this form, as witnessed by the property _q(x)_ = `x.class.name == 'T'` * * * _Addition._ If the answer is "yes" (LSP incompatible with introspection), then my other question would be: is there some modified "weak" form of LSP which can possibly hold for a dynamic language, possibly under some additional conditions and with only special types of _properties_. * * * _Update._ For reference, here is another formulation of LSP that I've found on the web: > Functions that use pointers or references to base classes must be able to > use objects of derived classes without knowing it. And another: > If S is a declared subtype of T, objects of type S should behave as objects > of type T are expected to behave, if they are treated as objects of type T. The last one is annotated with: > Note that the LSP is all about expected behaviour of objects. One can only > follow the LSP if one is clear about what the expected behaviour of objects > is. This seems to be weaker than the original one, and might be possible to observe, but I would like to see it formalized, in particular explained who decides what the expected behavior is. Is LSP then not a property of a pair of classes in a programming language, but of a pair of classes together with a given set of properties, satisfied by the ancestor class? Practically, would this mean that to construct a subclass (descendant class) respecting LSP, all possible uses of the ancestor class have to be known? According to LSP, the ancestor class is supposed to be replaceable with any descendant class, right? * * * _Update._ I have already accepted the answer, but I would like to add one more concrete example from Ruby to illustrate the question. In Ruby, each class is a module in the sense that the `Class` class is a descendant of the `Module` class. However: class C; end C.is_a?(Module) # => true C.class # => Class Class.superclass # => Module module M; end M.class # => Module o = Object.new o.extend(M) # ok o.extend(C) # => TypeError: wrong argument type Class (expected Module)"} {"_id": "193821", "title": "Are there any problems with implementing custom HTTP methods?", "text": "We have a URL in the following format: > /instance/{instanceType}/{instanceId} You can call it with the standard HTTP methods: POST, GET, DELETE, PUT. However, there are a few more actions that we take on it, such as "Save as draft" or "Curate". We thought we could just use custom HTTP methods like: DRAFT, VALIDATE, CURATE. I think this is acceptable since the standards say > "The set of common methods for HTTP/1.1 is defined below. Although this set > can be expanded, additional methods cannot be assumed to share the same > semantics for separately extended clients and servers." And tools like WebDav create some of their own extensions. Are there problems someone has run into with custom methods? I'm thinking of proxy servers and firewalls, but any other areas of concern are welcome. 
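(To make the alternatives concrete - the resource name and id below are made up: a custom verb versus the two conventional spellings of the same action:)

    DRAFT /instance/report/42 HTTP/1.1                   (custom method; proxies/firewalls may reject it)

    POST /instance/report/42/curate HTTP/1.1             (action modelled as a sub-resource)

    POST /instance/report/42?action=validate HTTP/1.1    (action as a parameter)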
Should I stay on the safe side and just have a URL parameter like action=validate|curate|draft?"} {"_id": "193820", "title": "What should I consider when choosing between taking a MOOC or working on a project?", "text": "As background, I've been programming for about 5 years, and feel like I'm somewhat "up to speed" on industry best practices and techniques (for web development, specifically), as well as software development fundamentals. However, I know I have a lot to learn. Recently, many free online classes, MOOCs, have been made available from multiple initiatives, including many universities. A number of these courses are in software development, theoretical computer science, mathematics, and other fields that are generally relevant for programming. I've been taking many of these MOOCs, learned a lot, and had lots of fun in the process. However, the time that it takes for me to complete the material **leaves little room for doing much of anything else**, including diving deeper into the subject matter or applying the material in the form of a project (I'm usually taking 3+ courses at once). Thinking about this, I've come up with a few questions: * From a purely skill-oriented perspective, _what are the pros and cons of taking classes / working on projects?_ Will working on projects help me improve my real programming skills faster? Will I miss out on some deep insight by not taking courses? * From a career perspective, _what do employers value most?_ (I suspect that I already know the answer) Will a multitude of (open source) projects or a plethora of coursework be most convincing? * Finally, assuming courses are deemed a "net positive", _how much formal education is enough?_ If I continue taking MOOCs, when should I take a break and work on a project?"} {"_id": "193824", "title": "java classes and database queries", "text": "Can someone please explain the best way to solve this problem? Suppose I have three classes: 1. `Person` 2. `Venue` 3. `Vehicle` I have a DAO method that needs to return some or all of these attributes from each of the classes after doing a query. Please note, by requirements I am using one DAO for all three classes and no frameworks, only my own MVC implementation. How do I accomplish this? It seems very wrong to make a class `PersonVenueVehicle` and return that as an object to get the instance field values. I was taught that database entities must be reflected by classes; if this is the case, how is it implemented in such a situation?"} {"_id": "222228", "title": "Algorithm to find times when resources are available", "text": "I'm writing a semi-automatic scheduling application. Given some existing bookings and some resource requirements, it needs to find the times at which a new event can be scheduled. A human user will then evaluate the results and choose one of the options. It does not need to optimise a timetable for multiple events and hence it is not the usual NP-hard timetabling problem. The system has a number of resources (trainers, rooms, equipment), each of which has a type (e.g. French teacher, seminar room, projector...). Resources are booked for events, each of which has a start and end time. Now, say I need to schedule a 2-hour-long French class using a projector in a seminar room: what are the times at which at least one resource of each required resource type is available? In order to limit the problem space, it's acceptable to consider only 9am-5pm, Mon-Fri, at 15-minute intervals for the next 90 days. 
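(One way to stay linear in the number of resources - a sketch using one bitmask of free 15-minute slots per resource; all names are illustrative:)

    # For each resource keep a bitmask over the slot grid (bit t set = free).
    def starts_for_resource(mask, duration_slots, horizon):
        # Slots where this single resource is free for the whole window.
        ok = 0
        for t in range(horizon - duration_slots + 1):
            window = ((1 << duration_slots) - 1) << t
            if mask & window == window:
                ok |= 1 << t
        return ok

    def candidate_starts(free_masks_by_type, required_types, duration_slots, horizon):
        combined = ~0
        for rtype in required_types:
            type_ok = 0
            for mask in free_masks_by_type[rtype]:       # one mask per resource
                type_ok |= starts_for_resource(mask, duration_slots, horizon)
            combined &= type_ok                          # every type needs someone free
        return [t for t in range(horizon - duration_slots + 1) if combined >> t & 1]

Each resource is visited once, so no resource is ever compared against another resource.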
The total number of resources is of the order of 1000. How can I do this without having to compare every resource with every other resource?"} {"_id": "163509", "title": "In centralized version control, is it always good to update often?", "text": "Assuming that: * You are in a team developing some software. * Your team is using centralized version control in the development process. * You are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build. * Your team members commit something every day that affects some of the files you're working with for your fancy new feature. Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Can we say that it is always a good idea to update often?"} {"_id": "176681", "title": "Did C++11 address concerns passing std lib objects between dynamic/shared library boundaries? (ie dlls and so)?", "text": "One of my major complaints about C++ is how hard in practice it is to pass std library objects outside of dynamic library (i.e. dll/so) boundaries. The std library is often header-only, which is great for doing some awesome optimizations. However, dlls are often built with different compiler settings that may impact the internal structure/code of std library containers. For example, in MSVC one dll may build with iterator debugging on while another builds with it off. These two dlls may run into issues passing std containers around. If I expose `std::string` in my interface, I can't guarantee the code the client is using for `std::string` is an exact match of my library's `std::string`. This leads to hard-to-debug problems, headaches, etc. You either rigidly control the compiler settings in your organization to prevent these issues, or you use a simpler C interface that won't have these problems. Or you specify to your clients the expected compiler settings they should use (which sucks if another library specifies other compiler settings). My question is whether or not C++11 tried to do anything to solve these issues."} {"_id": "163506", "title": "How does one handle sensitive data when using Github and Heroku?", "text": "I am not yet accustomed to the way Git works (and wonder if someone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are likely going to share this repo on Github or other Git hosts. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc... But these things still need to be part of the Git repo since you use it to push your code to Heroku. How to work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. 
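(The server-variable route, sketched in Python - the variable names are illustrative, and on Heroku they would be set with `heroku config:set` rather than committed:)

    import os

    # Secrets live in the environment, not in the repo; commit only a
    # non-secret example file (e.g. config.example.yml) for documentation.
    DATABASE_URL = os.environ['DATABASE_URL']
    API_KEY = os.environ.get('API_KEY', '')   # optional, empty by default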
My question is more about how to "hide" parts of the code."} {"_id": "163502", "title": "Port numbers in Visual Studio projects and IIS", "text": "I have a few questions about localhost and port numbers, as this is an area where I do not have a lot of knowledge, and because I recently had to work with setting up Visual Studio projects and IIS and there are things I'm not clear on. I have the following questions on the things I find confusing. I thought it made more sense to include them all in one question instead of making separate questions. 1. I have noticed a random port number is generated with projects I have worked on in the past, but I recently saw a project where the port number was fixed. What is the purpose of having a fixed/default localhost port number? I.e. is it particularly useful on projects that have many programmers working on the project? 2. If a solution contains multiple projects (for example, WCF services, Domain, MVC/Web pages), is it possible to set up a different localhost port for each of them? If so, what is the benefit of this? 3. If a solution contains multiple projects and has different localhost URLs/port numbers, must there be a corresponding website (and application pool) for each project in IIS? Or just for the project that contains the actual web pages?"} {"_id": "34609", "title": "Most commonly forgotten thing to do when programming/web developing", "text": "Does anyone else have that one thing they always forget to do when programming or developing a website? Personally, mine is forgetting to include the Doctype in a website... the amount of time I have spent fixing/adding/hacking around with CSS to fix IE problems, and it turns out to be the F'in Doctype declaration!!!"} {"_id": "112953", "title": "Is it wrong to take code you have produced at work and re-use it for personal projects?", "text": "Throughout my various workplaces and through my university life I have always written code which I thought "would be really useful in generic situations". Indeed, I intentionally write code, even if it takes me a while to write, which I know will help me in the future (e.g. custom `SubString()` functions). Hence the reason for 'Helper' classes. These functions I'm sure can probably be found elsewhere online, but the point is, I wrote them, and I will use them again later in other jobs or for personal projects. Currently I don't maintain a personal code library, as I never have time to sit there and copy code from my workplace / projects to another personal location. The question is, is it wrong to take code you have produced at work and re-use it (**a**) for personal projects, and (**b**) in other jobs?"} {"_id": "237529", "title": "AI development, variation on the horizon effect", "text": "I am developing an alpha-beta search for "Colorito", which has the characteristic that sometimes simple hill-climbing is impossible, so a sequence of 2 or 3 moves is the only way to make progress. I came up with an endgame position where, with a 6-ply search, it's possible to make good progress in 2 moves, but the third move must be the "bad" move of a new sequence. It's also possible to make the same 2-move sequence take 3 moves, avoiding the "bad" move on the horizon. This 3-move sequence which could be 2 moves becomes the principal variation. The problem is that this same analysis applies to the next move, too; so the AI becomes stuck, making no progress. 
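(A sketch of the "artificial pass" mitigation arrived at in the edit below; `apply`, `apply_pass` and `evaluate` are hypothetical framework hooks, and `apply_pass` only flips the side to move:)

    PASS = "pass"
    INF = float("inf")

    def search(state, depth, alpha, beta):
        if depth == 0 or state.is_terminal():
            return evaluate(state)
        moves = list(state.legal_moves())
        if depth == 1:
            moves.append(PASS)             # artificial pass, frontier only
        best = -INF
        for move in moves:
            child = state.apply_pass() if move is PASS else state.apply(move)
            score = -search(child, depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break
        return best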
The search framework serves a lot of different games, so I'm most interested in generic approaches to detecting/avoiding this problem, but also game-specific ideas that might avoid this particular problem. Edit: After some thought, a reasonable and somewhat general approach is to allow "pass" moves at the terminal level of the search, even if the game does not allow them. These artificial passes can be valued at zero, or slightly positive, so instead of finding a "bad" move at the search depth limit, the search will find a pass."} {"_id": "118806", "title": "Translating longer texts (view and email templates) with gettext", "text": "I'm developing a multilingual PHP web application, and I've got long(-ish) texts that I need to translate with gettext. These are email templates (usually short, but still several lines) and parts of view templates (longer descriptive blocks of text). These texts would include some simple HTML (things like bold/italic for emphasis, probably a link here or there). The templates are PHP scripts whose output is captured. The problem is that gettext seems very clumsy for handling longer texts. Longer texts would generally have more changes over time than short texts — I can either change the msgid and make sure to update it in all translations (could be lots of work and very error-prone when the msgid is long), or I can keep the msgid unchanged and modify only the translations (which would leave misleading outdated texts in the templates). Also, I've seen advice against including HTML in gettext strings, but avoiding it would break a single natural piece of text into lots of chunks, which will be an even bigger nightmare to translate and reassemble, and I've also seen advice against unnecessary splitting of gettext strings into separate msgids. The other approach I see is to ignore gettext altogether for these longer texts, separate those blocks into external subtemplates for each locale, and just include the one for the current locale. The disadvantage is that I'm separating the translation effort between gettext .po files and separate templates located in a completely different location. Since this application will be used as a starting point for other applications in the future, I'm trying to come up with the best approach for the long term. I need some advice for best practices in such scenarios. How have you implemented similar cases? What turned out to work and what turned out to be a bad idea?"} {"_id": "118801", "title": "Sharing Authentication Across Subdomains using cookies", "text": "I know that in general cookies themselves are not considered robust enough to store authentication information. What I am wondering is if there is an existing design pattern or framework for sharing authentication across subdomains without having to use something more complex like OpenID. Ideally, the process would be that the user visits abc.example.org, logs in, and continues on to xyz.example.org where they are automatically recognized (ideally, the reverse should also be possible -- a login via xyz means automatic login at abc). The snag is that abc.example.org and xyz.example.org are on different servers and different web application frameworks, although they can both use a shared database. The web application platforms include PHP, ColdFusion, and Python (Django), although I'm also interested in this from a more general perspective (i.e. 
language agnostic)."} {"_id": "13053", "title": "First languages with generic programming support", "text": "Which was the first language with generic programming support, and what was the first major statically typed language (widely used) with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": > First-order parametric polymorphism is now a standard element of statically > typed programming languages. Starting with System F [20,42] and functional > programming languages, the constructs have found their way into mainstream > languages such as Java and C#. In these languages, first-order parametric > polymorphism is usually called generics. From "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky > Generic programming is a style of computer programming in which algorithms > are written in terms of to-be-specified-later types that are then > instantiated when needed for specific types provided as parameters. This > approach, pioneered by Ada in 1983 From Wikipedia Generic Programming"} {"_id": "222222", "title": "Building a distributed system on Amazon Web Services", "text": "Would simply using AWS to build an application make this application a distributed system? For example, if someone uses **RDS** for the database server, **EC2** for the application itself and **S3** for hosting user-uploaded media, does that make it a distributed system? If not, then what should it be called and what is this application lacking for it to be distributed? **Update** Here is my take on the application to clarify my approach to building the system: 1. The application I'm building is a social game for Facebook. 2. I developed the application locally on a LAMP stack using Symfony2. 3. For production I used a single EC2 Micro instance for hosting the app itself, RDS for hosting my database, S3 for the user-uploaded files and CloudFront for hosting static content. I know this may sound like a naive approach, so don't be shy to express your ideas."} {"_id": "237526", "title": "Practical programming according to the Dependency Inversion Principle", "text": "What the Dependency Inversion Principle implies in practice is that in a system, high level components should depend on abstractions of the low level components (instead of on the low level components directly), and the low level components should be defined in terms of these abstractions. The key point for my question is that _the low level components are defined in terms of the abstractions, which are defined in terms of the high level components_. Meaning: the high level components 'define the abstraction' in terms of what would be convenient for them, and the low level components have to be defined according to that abstraction (usually an interface). So if the high level component is a `Car`, and the low level component is the `Engine`, and an interface `IEngine` is defined - it will be defined according to the needs of the `Car`, and `Engine` will have to fit these needs. So if it's convenient for the `Car` to be able to simply start the engine, `IEngine` would feature a method `start()` and `Engine` would have to implement it. My question is: When starting programming on a project designed according to the Dependency Inversion Principle - are the high level components usually implemented before the low level ones, i.e. 
\"top to bottom\" development? Since the principle implies that the low level components are designed according to what is convenient for the high level components, it makes sense to first start programming the high level components, and only then define the `ILowLevelComponent` interface, based on what we learned the high level components need when we programmed them. For example we're implementing a system that simulates a car. `Car` is the high level component, while `Engine` and `SteeringWheel` are the low level components. `Engine` and `SteeringWheel` take care of the concrete work of moving the car around, while `Car` takes care of coordinating everything and creating a functioning system. If we were designing the system according to DIP, that means that `Engine` and `SteeringWheel` are defined in terms of an abstraction, that is defined in terms of what is convenient for `Car`. So it would make sense to first implement `Car`, understand exactly how it's going to work in high level terms and what it needs to work, and only then define the `IEngine` and `ISteeringWheel` interfaces, according to what the `Car` needs. And then ofcourse implement the concrete classes that implement the interfaces. **So what I'm asking is: when working on a project that is designed in the spirit of DIP, is the \"top to bottom development\" approach common? Is this how work is usually done on project following the Dependency Inversion Principle?**"} {"_id": "54171", "title": "Have you worked with poorly designed application?", "text": "Well , I have been asked to work in a Java web application that is very very poorly designed . In the name of \"making this easy\" , they have come up with their own \"framework\" to make things extremely difficult to understand . I am struggling to figure out the control flow . Do you have any such experience ? What do you do in such situations when the guy who has \"designed\" it has already left the company ?"} {"_id": "54175", "title": "What do you do before you start programming?", "text": "I'm not sure this question belongs here, it's not so much I problem I'm having with programming but rather a problem of what to do before I start programming. I want a visual representation of what variables I need and what classes have what methods.I know there is UML but I'm not sure if that is the best way, so what do you guys use before you start programming, which method? I don't want to start a flamewar about what is better just what are several approaches?"} {"_id": "256124", "title": "What is a good design pattern to implement REST services on mobile?", "text": "It is easy to implement calls to API endpoints, then to parse JSON and handle the data - but what is a good design pattern for this? Here are some ways I have tried but I feel like there should be a better way: 1. Create a singleton class that manages all networking code and all data parsing code for the entire application for all endpoints. Then any controller class can hit the singleton 2. Make network endpoint tasks straight from view controllers - using blocks within the view controller to manage responses Consider an application that can hit an endpoint (GET) to download a list of appointments. Also consider an POST endpoint where you can send new appointments. Data must be packaged/unpackaged in JSON and errors must be handled appropriately. What is a good design pattern to accomplish this? 
And before you downvote this post and say it's too broad or subjective, just look at this page: https://programmers.stackexchange.com/help/dont-ask > Some subjective questions are allowed, but “subjective” does not mean > “anything goes”. All subjective questions are expected to be > **constructive**. A good answer to this common problem is hard to find."} {"_id": "94540", "title": "What to do if a team member misses a sprint planning?", "text": "Let's say a team member is on annual leave. He won't be attending sprint planning, but he will be back by the middle of the iteration/sprint. Let's say he has 50% capacity, i.e. he will be available for the latter half of the iteration. Should we: 1. have a planning session with him after he is back. 2. have a planning session with him before he goes on annual leave, i.e. before sprint planning. 3. not schedule him for any task and assign him to non-sprint tasks, e.g. spikes etc. 4. have his peers plan on his behalf during sprint planning; the absent person can then add tasks when he is back, and if he cannot do all the work he can descope. 5. have him sit with another developer and do pair programming for a while. 6. anything else... I am interested to know what you are doing. Note: We are doing (1) and it doesn't feel right."} {"_id": "62088", "title": "Going to coding conference -- bosses expectations reasonable?", "text": "Next week a coworker and I are being sent to a local coding conference. This morning our manager sent us an email pretty much telling us we need to take our laptops and hinted at doing a Google document so he can see us take notes in real time. This really rubbed me the wrong way and both of us emailed him back. These emails I have posted below. I'm wondering if you guys have encountered these types of expectations from your bosses. I've been a professional programmer for over 10 years and have never had this type of micromanagement about being sent to a conference. Just wondering if "coworker" and I are not seeing things clearly or if our boss is being a little weird about this. * * * From: Boss Sent: Friday, March 25, 2011 11:51 AM To: Coworker; Me Subject: Re: DevConnections next week Really guys? Whatever works for you, I guess. But I'm expecting that you have detailed enough notes to adequately share sessions you attend, including any URLs for supporting information, etc. This is not a "would be nice" but a professional expectation for attendance at these type of events. \--Boss On 3/25/11 10:08 AM, "Coworker" wrote: > The hotel where the conference is being held is huge. "Lugging around your > laptop" is an accurate description he he. > > I myself am not a classical learner. I learn by paying attention, > participating in the class topic, and asking questions. If I try to take > notes it's destructive to my learning process. A gifted teacher in community > college, Dr. Phar, pointed this fact out to me and told me I was a "visual > learner". After that I was able to excel in college and obtain my Master's > Degree. > > To meet Boss's requirements I generally take a half day after the conference > to put together some presentation material for the team. > > Coworker > > \-----Original Message----- From: Me Sent: Friday, March 25, 2011 9:48 AM > To: Boss; Coworker Subject: RE: DevConnections next week > > Thanks for the reminder. 
I was definitely planning to take a notebook to > take notes and have no problem sharing information I come back with, but I > would rather not lug around the laptop if I have the choice. Is that ok? > Maybe we could take our laptops and keep them in the trunk in case of a work > emergency? What do you think? > > Me > > \-----Original Message----- From: Boss Sent: Friday, March 25, 2011 8:39 AM > To: Me; Coworker Subject: DevConnections next week > > Coworker / Me, > > Don't forget to take your laptops next week and take notes to share with the > team. As in the past, we can schedule a review session to share the > highlights. Maybe use Google Docs and share your notes with me and . >Also, > take some time to coordinate your session attendance to help hit all the > good topics. > > Have fun, Boss"} {"_id": "256122", "title": "Questions about UML in relation to the command pattern", "text": "http://www.oodesign.com/command-pattern.html In reading through a tutorial about the command pattern, I came across a UML diagram that seems to omit some relationships. For the following diagram, the client instantiates a stock trade, an agent, and buy and sell stock orders. Why is it that the author omits the relationship arrow to the agent class? Why is the dashed line for <> used in place of a solid arrow like the one from Client to StockTrade? ![enter image description here](http://i.stack.imgur.com/9clLI.gif) I also compared the above diagram with the one below, which leaves out the Invoker/Agent class entirely and also uses the class CallbackTwo to aggregate Receivers. The C# implementation has an invoker class, although the implementations in other languages don't. Does this mean the invoker relationship is implicit? http://sourcemaking.com/design_patterns/command http://sourcemaking.com/design_patterns/command/c-sharp-dot-net ![enter image description here](http://i.stack.imgur.com/GljS8.png)"} {"_id": "215597", "title": ""Whole-team" C++ features?", "text": "In C++, features like exceptions impact your whole program: you can either disable them in your whole program, or you need to deal with them throughout your code. As a famous article on C++ Report puts it: > Counter-intuitively, the hard part of coding exceptions is not the explicit > throws and catches. The really hard part of using exceptions is to write all > the intervening code in such a way that an arbitrary exception can propagate > from its throw site to its handler, arriving safely and without damaging > other parts of the program along the way. Since even `new` throws exceptions, every function needs to provide basic exception safety — unless it only calls functions which guarantee throwing no exception — _unless you disable exceptions altogether in your whole project_. Hence, exceptions are a "whole-program" or "whole-team" feature, since they must be understood by everybody in a team using them. But not all C++ features are like that, as far as I know. A possible example is that if I don't get templates, as long as I do not use them, I will still be able to write correct C++ — or will I not? I can even call `sort` on an array of integers and enjoy its amazing speed advantage wrt. C's `qsort` (because no function pointer is called), without risking bugs — or not? It seems templates are not "whole-team". Are there other C++ features which impact code not directly using them, and are hence "whole-team"? I am especially interested in features not present in C. 
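(To make the "intervening code" burden concrete - a hedged sketch; `Widget` and `set_name` are made-up names:)

    #include <string>

    // This function never mentions exceptions, yet it must be written so
    // that a throw can pass through it safely.
    void rename_all(Widget* items, int n, const std::string& prefix) {
        for (int i = 0; i < n; ++i) {
            char* buf = new char[64];      // new[] itself can throw bad_alloc...
            items[i].set_name(prefix);     // ...and set_name may throw too:
            delete[] buf;                  // then this never runs, and buf leaks
        }
    }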
**Update**: I'm _especially_ looking for features where there's no language-enforced sign you need to be aware of them. The first answer I got mentioned const-correctness, which is also whole-team, hence everybody needs to learn about it; however, AFAICS it will impact you only if you call a function which is marked `const`, and the compiler will prevent you from calling it on non-const objects, so you get something to google for. With exceptions, you don't even get that; moreover, they're always used as soon as you use `new`, hence exceptions are more "insidious". Since I can't phrase this as objectively, though, I will appreciate any whole-team feature. **Update 2**: instead of C++ feature I should have written something like "C++-specific feature", to exclude things like multithreading which apply to a large number of mainstream programming languages. ### Appendix: Why this question is objective (if you wonder) C++ is a complex language, so many projects or coding guides try to select "simple" C++ features, and many people try to include or exclude some according to mostly subjective criteria. Questions about that get rightfully closed regularly here on SO. Above, instead, I defined (as precisely as possible) what a "whole-team" language feature is, provided an example (exceptions) together with extensive supporting evidence in the literature about C++, and asked for whole-team features in C++ beyond exceptions. Whether you should use "whole-team" features, or whether that's a relevant concept, might be subjective — but that only means the importance of this question is subjective, like always."} {"_id": "62084", "title": "best practices in creating a product backlog in scrum", "text": "I am new to scrum and project management in general, and I am having problems deciding what to call a feature or a sub-feature (which are task lists for creating that feature?), especially for the standard things we have in every web app. So I am looking for best practices on how people are creating product backlogs for typical website software (users, profiles, admin, front-end). I have this in mind as an example. - feature: home page - feature: contact us page - feature: admin panel - create user + create database (tasklist) + write stored procedures (tasklist) - delete user - add content - delete content - feature: subscribe - create subscribe page Also, how granular is too granular?"} {"_id": "62087", "title": "Once and only once - with more lines of code", "text": "I have an ugly bit of code - essentially iteration over some data structures where the meat of the action was changing, but the iteration code stayed the same. The iteration constituted the bulk of the code, and there were at least four cases of code copying. So I refactored the code, putting each level of iteration into its own class (I'll probably do another post on it later). The new code is considerably nicer (at least to my eye), with no copy/paste locations. However, due to some infrastructure I had to introduce to refactor nested loops into separate classes, I ended up with a 10% larger number of LOC. If I discount the extra infrastructure, I get about 5% smaller code compared to the original size. So, the code is more sophisticated, but it is not shorter and may be harder to understand for a less experienced programmer. I may still end up with shorter code in the future if more opportunities for reuse present themselves. My question is: do you think it was worth it? 
Is it a good idea to refactor for "once and only once" if the LOC count goes up?"} {"_id": "256128", "title": "How to identify hackers based on IP addresses and the pages that were accessed", "text": "I saw some suspicious errors being generated on my site based on pages that were requested. My error log records the path that the user is trying to access. Because of these errors (and the paths that they were trying to access) I created my own blacklist process where I can blacklist someone from my site based on IP address and/or username. After implementing this, I didn't see ANY errors of that kind...until today. Now, before I go ahead and blacklist this person, I'd like to make sure that it isn't a legitimate search engine just trying to build its database with all links available from my site. So, my question: is there a way to see what company an IP address is assigned to? Or do those crawlers from search engines only go to pages that exist?"} {"_id": "215308", "title": "Area of testing", "text": "I'm trying to understand which part of my code I should test. I have some code; below is an example of it, just to illustrate the idea. Depending on some parameters I put one currency or another into "Event" and return its serialization in the controller. Which part of the code should I test? Just the final serialization, or only "Event", or every method: getJson, getRows, fillCurrency, setCurrency?
class Controller {
    public function getJson() {
        // $eventManager is assumed to be available (e.g. injected); not shown in this excerpt
        $rows = $eventManager->getRows();
        return new JsonResponse($rows);
    }
}
class EventManager {
    public function getRows() {
        //some code here
        if ($parameter == true) {
            $this->fillCurrency($event, $currency);
        }
    }
    public function fillCurrency($event, $currency) {
        //some code here
        if ($parameter == true) {
            $event->setCurrency($currency);
        }
    }
}
class Event {
    public function setCurrency($currency) {
        $this->updatedAt = new Datetime();
        $this->currency = $currency;
    }
}"} {"_id": "156656", "title": "Picture Parsing", "text": "If I open a picture file, let's say with a PNG extension, I will see a bunch of code. Now let's say I want to get some information from the picture mechanically. So the question here is, what is the first step? Do I need to parse the picture? If so, how can I get the grammar for picture files? (PNG, JPEG, etc.) UPDATE: Found an issue with my thinking! Parsers are for text-based languages; however, pictures are binary files. So, I don't think we need to parse them; we need to go in the opposite direction and turn them into abstract syntax trees, and I am wondering how to do so without knowing the grammar!"} {"_id": "156654", "title": "Is C++.Net used extensively?", "text": "I am a C++ coder by tradition. Over the last 12 months or so I have been doing a lot of C# coding, and have been pleasantly surprised by C#'s pragmatic approach (once I stopped trying to code it as if it was \"C++ with garbage collection\"). We have recently had some graduates in, and when helping one of them I realised she was using .Net within C++. After asking her why, she said she had been \"told to use C++ by her manager\". Obvious communication problem aside, I assume she was using .Net because that's the only framework she's been exposed to. I then came across an old project by a senior developer who also used C++ to drive a Forms front end. Now this would have been written around the time .Net first appeared, so I assume it was a learning exercise on his part to play around with .Net. It was only a small utility app.
Having had to do some minor modification in this app, it seemed to me that using C++ to drive .Net gives you the worst of both worlds: no garbage collection or memory safety, but similarly no real speed/optimisation opportunities, since you're dealing with a managed framework. So my question is whether people do use C++ .Net for any large stand-alone (i.e. non-plumbing) production code, and if so, what are your reasons for doing so? I freely admit I have never delved deeply into the C++ .Net extensions, so I may be doing it a disservice."} {"_id": "143178", "title": "Which open source PHP project has the 'perfect' OOP design I can learn from?", "text": "I am a newbie to OOP, and I learn best by example. You could say this question is similar to Which Scala open source projects should I study to learn best coding practices - but in PHP. I have heard-tell that Symfony has the best 'architecture' (I will not pretend I know what that exactly means), as well as Doctrine ORM. Is it worth it to spend many months reading the source code of these projects, trying to deduce the patterns used and learning new tricks? I have seen an equal number of web pages dissing and liking Zend's codebase (will provide links if deemed necessary). Do you know of any other project that would make any veteran OOP developer shed tears of joy? Please let me add that practicality and scope of use is not a concern at all here - I just want to:
* Pick a project that has a codebase deemed awesome by devs way better and greater than me.
* Write code that achieves what the project does.
* Compare results and try to learn what I don't know.
Basically, an academic-interest codebase. Any recommendations please?"} {"_id": "252633", "title": "C programming practice, passing a pointer to a function", "text": "Consider the following C function which takes as argument a string, which is then stored inside a struct:
struct mystruct* usestring(char* string) {
    struct mystruct *s;  /* 'struct' is a reserved word in C, so the variable needs another name */
    s = malloc(sizeof(struct mystruct));
    s->string = string;  /* stores the caller's pointer; no copy of the text is made */
    return s;
}
My understanding is that the string passed to the function is the same string that is stored inside the struct. What is the proper etiquette in this situation? Should I make a copy of the string and store that in the struct, or should I expect that the function caller will not modify the string later?"} {"_id": "143170", "title": "Actor library / framework for C++", "text": "In the C++ project I am working on, we have an application consisting of several processes deployed on different machines. This network of processes is dynamic, since processes (clients or background services) can be started and terminated during the application's lifetime. We already have a module that allows us to transport arbitrary data over the network through RPC, and we are using it to exchange information (such as status information, progress information, error codes, etc.) between the processes. We would like to have a more abstract layer to handle asynchronous communication (currently, the RPC is executed synchronously), and to allow processes to be started and terminated dynamically and still find each other. We recently looked for a solution and we think we can use something like Scala **actors** and **remote actors** (see e.g. this tutorial). Another language using the actor model that has been around longer than Scala is Erlang (see also this question). Actors are objects that have a behaviour and a mailbox, and share no data.
Actors are executed concurrently, and communicate through asynchronous message exchange. As part of their behaviour, actors can create other actors. Messages are also represented as objects. The simplest case is that actors live in the same process. In this case they can address each other using a handle (reference, pointer, unique identifier). The actor implementation hides the underlying threads and any other details. When actors live in different processes (and possibly on different machines in a network) they are identified by an IP address, a port, and a name. In this case we speak of **remote actors** (see the short example at the top of this page). In our case, we would have one actor in each process, taking care of all the communication, i.e. we need some kind of remote actors. On Wikipedia I have found some links to actor libraries for C++ and I have started to look at Theron. Theron seems a very well-written and documented library but, to my understanding, it does not support remote actors: all actors must live within the same process. It is possible to create several actor pools (**frameworks**), but all these pools must live in the same process. So I wanted to ask if someone knows other C++ libraries that support the remote actor concept as sketched above. **EDIT** This question has been edited wrt the original question, following indications from the programmers-meta discussion. **UPDATE** Other frameworks I have looked at are libcppa (should support remote actors, but it is still under development, currently version 0.1), actor-cpp (also under development), and libactor, which is in C (the web site says it \"is usable, although it may not be ready for production\")."} {"_id": "156659", "title": "Design pattern for access to tree-like database in Java?", "text": "I'm developing a roleplaying character viewer/manager programme for a locap LARP system. The characters have access to skills that are laid out in a tree-like structure. There are a lot of skills, and potentially a lot per character. I know I can just import the Java Swing library to get access to a tree, but I feel like this may bog me down, when all that needs to be done is access to a tree-like database, and the character need only know that they have access to a subset of the skills. I'm not sure if the tree design pattern is the best choice for this (i.e. instantiate a tree for the skills known per character, and add to/delete parts as needed), or whether to 'do the clever stuff' and use a list per character to see which parts (skills) of the database (external tree) are 'owned'. My skills are broken down roughly like so: `Course category -> Weapon Type -> Actual Weapon -> Proficiency -> Special Skill`, which goes quite deep. Further to this, there is little need to know what is at each level, just its children, until you hit the leaf and the one above it. My database effectively already exists on the LARP website, so if I don't have to reproduce it, that would be good. Here is a sample:
Weapons and Shields        Warrior  Priest  Scout  Mage
1H Weapon Proficiency      3        6       6      9
1H Weapon Specialisation   6        12      12     18
1H Weapon Expertise        12       24      24     36
1H Weapon Mastery          24       48      48     72
And later on:
Magic        Warrior      Priest       Scout        Mage
Learn Spell  9 x (l + 1)  9 x (l + 1)  6 x (l + 1)  3 x (l + 1)   (l = spell level) ...
Create Talisman  9 + l  9 + l  6 + l  3 + l   (l = level)"} {"_id": "20255", "title": "What would you choose for your project between .NET and Java at this point in time?", "text": "You are just starting a new project and you have these two technologies to choose from, Java and .NET. The project you are working on doesn't involve features that would make it easy to choose between the two technologies (e.g. .NET has this that I need and Java does not), and both of them should work just fine for you (though you only need one, of course). Take into account: * Performance * Tools available (even 3rd party tools) * Cross platform compatibility * Libraries (especially 3rd party libraries) * Cost (Oracle seems to try and monetize Java) * Development process (Easiest/Fastest) Also keep in mind that Linux is not your main platform but you would like to port your project to Linux/MacOS as well. You should definitely keep in mind the trouble that has been revolving around Oracle and the Java community, and the limitations of Mono and Java as well. It would be much appreciated if people with experience in both could give an overview and their own subjective view about which they would choose and why."} {"_id": "216029", "title": "Found a better solution to a problem at work - should I refrain from posting the code snippet online?", "text": "I think most of us programmers have used Stack Overflow to solve everyday problems: looked for an efficient algorithm to do something. Now imagine a situation: you have a problem to solve. You googled a bit and found a Stack Overflow question, but you are not really satisfied with the answers so far. So you have to do your own research: you need to do it because you want it in the company's app. Eventually, after some hours, you have found a better solution. You're happy, you added it to the company's code base, and then you want to submit your answer with a code snippet (just several lines) to the question you found before, to help others too. But wait: the company's software is closed source, and you worked on it on the clock. So does this mean I should post the answer neither at work nor at home to that question for the rest of my life, because I solved it at work and the company owns that piece of code?"} {"_id": "129181", "title": "Resume dilemma for professional job", "text": "On my resume, I list myself as having \"7 years of hands-on experience programming in C\". To clarify, I am a self-taught C programmer with some college courses thrown in the mix. I've worked on some small personal projects, and I consider myself to be more competent than a Computer Science grad with no actual real-world experience, though by no means am I anywhere near being an expert. The issue is this... I keep getting calls and emails from recruiters who see my resume on job sites, inquiring about my interest in senior developer positions, contracts, etc., for which I feel completely under-qualified. My resume only has 3 years of work experience listed (which is all IT stuff), so when they ask about my prior experience in C, I have to clarify that it was personal work, not professional work. I'd really like a job as a developer, but I don't want to get hired for something that I can't handle, nor do I want to misrepresent myself while trying to show off my strengths. I deliberately chose the phrasing \"hands-on\" to imply that it wasn't professional.
How should I phrase my C experience on my resume to clarify it better?"} {"_id": "178653", "title": "Is there benefit in maintaining a large project with bad code?", "text": "I currently maintain a large project with more than 100000 LOC. The code uses MFC as its framework; in general, it has only an interface part, which heavily uses the MFC API, and a business logic part full of bad code and confusing logic. The company delivers some small features to the customer each year (most features are adding code to the existing project and finding references to some API or variable, no different from fixing 3-4 bugs); most of the tasks are to resolve issues and optimize performance. Like other companies with maintenance positions, it values people who know a lot of the logic of its product. There are people who can quickly finish the job on such a project; is it worth training myself to be such a programmer? Are there benefits to working on such a project for a long time?"} {"_id": "129184", "title": "Programmers clipboard monitor under Windows", "text": "In the process of adapting a \"put on clipboard\" solution to new behavior, I have found that I need a good _programmer's_ clipboard monitor for Windows. I do not need to have a history of items on the clipboard, but I need to see the details of what is currently placed there, including - this is important - the various flavors (RTF/HTML/plain text/etc.), to be sure that all those I place there are correct. Free is preferred, but cheap will do. We are a Java shop, but I can install a Visual Studio Express edition if that makes things easier. (EDIT: The development box is Windows 7 64-bit) Any suggestions?"} {"_id": "194398", "title": "Books/sources on inner workings of JavaScript", "text": "When I started studying C++ a couple of years ago, a lot of the books and texts I read did a very thorough job of explaining how the code I wrote would translate into concrete operations in the hardware (like dynamic memory allocation, pointer arithmetic, etc.). I found those explanations extremely helpful in order to fully understand the language. Now I am studying JavaScript and learning about the whole functional programming paradigm, with functions as first-class objects and so on. I have read a lot of texts and books about how to use JavaScript, but I have yet to come across a source that explains the low-level inner workings of the language (like how a function is represented in memory, or what exactly happens when a function is called with .apply() and a new context is provided, etc.). I guess you could say that what I am looking for is the knowledge needed to write a JS compiler/VM: how the runtime environments handle the different aspects of the language at a low level. Does anyone know where to find books or texts that go into the very low-level details of the language?"} {"_id": "23506", "title": "Background & Research Methods section (Writing an Article)", "text": "It is my first time writing an _article_ on a software project. I am supposed to use the ACM UbiComp paper format. I already have a structure that I should follow, and there is a **Background & Research Methods** section after the Abstract, Introduction, and Related Work sections. I have browsed through several articles, but some of them either don't have it, have only a background section, or have only a research methods section. **I am having a hard time finding an article that has this section, and moreover figuring out what I must write here**.
My project is about Bluetooth location tracking, and I do have the implementation and evaluation, so it is not something theoretical."} {"_id": "228160", "title": "How does this Fibonacci exponentiation-by-squaring algorithm work?", "text": "This is one of the best algorithms to calculate the nth Fibonacci number. It needs O(log n) time to do its job, so it's very efficient. I found it somewhere but don't know how it works! Can anyone tell me how this algorithm works? Thanks. Here's the code:
int fib3(int n) {
    // Invariants: (j, i) = (F(b), F(b-1)) for the bits of n processed so far,
    // and (h, k) = (F(m), F(m-1)) where m = 2^(number of iterations).
    int i = 1, j = 0, k = 0, h = 1, t;
    while (n > 0) {
        if (n % 2) {   // this bit of n is set: advance the result pair by m
            t = j * h;
            j = i * h + j * k + t;
            i = i * k + t;
        }
        t = h * h;     // square the step: (F(m-1), F(m)) -> (F(2m-1), F(2m))
        h = 2 * k * h + t;
        k = k * k + t;
        n /= 2;
    }
    return j;
}"} {"_id": "179850", "title": "Can I display part of the source code of an open source plugin with a GPLv2 license?", "text": "I would like to publish the partial or full source code of one or more plugins (licensed under GPL, GPLv2, MIT, or no license) on my website/blog. The website I am talking about is for everyone, free to use, and free to copy code from. Plus, is it okay if I do not provide any link to the source code/plugin? I will definitely give full credit to the developer."} {"_id": "144913", "title": "Should a new programmer focus on a single technology until he's proficient at it?", "text": "Ok, I've been teaching a buddy how to program for a while now. He's a very fast learner, and he's quite good at programming so far. However, he has one \"issue\" I keep trying to correct. He jumps in and starts doing high-level programming without learning some of the basics (he's created a full-blown web application but still doesn't know pagination or session management). This isn't the problem though. He keeps jumping around to new technology (Node.js, MongoDB, EC2, etc.). I tried telling him that he should learn some of the basics about his RDBMS of choice (MySQL), as he uses it every day, before investing a bunch of time into learning the basics of MongoDB (and probably moving to something new). Am I the one in the wrong here, or should he try to focus on one thing at a time and get really good at it?"} {"_id": "161619", "title": "Unit Testing: \"It's a code smell if you're refactoring and there are no collaborators\"?", "text": "I'm reading The Art of Unit Testing by Roy Osherove. I'm at section 7.2, Writing maintainable tests, where the author has this note about code smell: > NOTE: When you refactor internal state to be visible to an outside test, could it be considered a code smell (a sign that something might be wrong in the code's design or logic)? It's not a code smell when you're refactoring to expose collaborators. **It's a code smell if you're refactoring and there are no collaborators (so you don't need to stub or mock anything).** **EDIT**: What the author means by "collaborators" is dependencies. Some of his examples of dependencies are classes that access a database or that access the OS's file system. Here is where he defines stub and begins to use the word collaborator: > A _stub_ is a controllable replacement for an existing **dependency** (or **collaborator**) in the system. The author doesn't have an example of this code smell and I'm having trouble understanding/picturing what this would look like.
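Here is my best guess at a picture of it, as a hypothetical Java sketch (all names are invented, and this is only my reading of the note, not an example from the book): a class with no collaborators is pure input-to-output logic, so a test can already judge it by its return values, and exposing internal state purely so the test can peek at it would be the smell.
class PriceCalculator {
    // No collaborators: no database, no file system, nothing to stub or mock.
    private int lastTotal;

    int total(int unitPrice, int quantity) {
        lastTotal = unitPrice * quantity;
        return lastTotal;
    }

    // Refactored in purely so a test can inspect internal state.
    // Since there is no collaborator being exposed, the test could simply
    // assert on the return value of total(), which suggests the smell.
    int lastTotalForTests() { return lastTotal; }
}
By contrast, exposing a seam for a database dependency would be exposing a collaborator, which the note says is fine.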
Can someone explain this a little more and perhaps provide a concrete example?"} {"_id": "129235", "title": "Advantage/disadvantage of parameters / return types declaration in languages with type inference", "text": "I would like to know your opinion on declaring parameter/return types by hand in languages with type inference like Scala. Is there any reason to do this, or not to, when the compiler can infer the types?"} {"_id": "129234", "title": "Functional Programming: Are Tuples a viable replacement for Types?", "text": "A while ago I decided to learn Haskell to help with learning more \"pure functional\" ideas that I could apply to F#. Right off the bat it seems as if there are no real types in Haskell like the ones that F# has. I'm fine with that, so I wanted to find a way to remove myself from the hybrid OO/functional design I've used in F# and really try to go classless and truly develop a function-centric design. The first thing that came to mind when using Haskell was to just pass tuples around, as they can hold information in key-value pairs much like dynamic languages but don't have the constructor syntax of F# types. For example, in F#:
type DatabaseWorkValueItem(firstItem, secondItem, thirdItem) =
    member public x.FirstItem = firstItem
    member public x.SecondItem = secondItem
    member public x.ThirdItem = thirdItem
And this would be in my new proposed design:
module DatabaseWorkValueItem =
    let firstItem (item, _, _) = item
    let secondItem (_, item, _) = item
    let thirdItem (_, _, item) = item
    // Replace the constructor with a function
    let CreateValueItem (parameter: String, value: Object, sqlType: SqlDbType) = (parameter, value, sqlType)
    // Replace properties with more functions
    let GetDataType (item : String * Object * SqlDbType) = thirdItem item
    let GetParameter (item : String * Object * SqlDbType) = firstItem item
    let GetValue (item : String * Object * SqlDbType) = secondItem item
They both get at the same idea. One creates a class-like type with properties set on construction, and the other uses all functions to create and read a tuple. Question is: Am I crazy to bother with this? Although I realize F# has the convenience of jumping between two paradigms, I can't help but feel that it's a comfort blanket to"} {"_id": "161610", "title": "Can unit testing software be used to unit test itself?", "text": "QUnit advertises itself on its web page like this: > ...capable of testing any generic JavaScript code, including itself! Can you really use unit testing software to test itself? Wouldn't defects in the software mean that the results are unreliable?"} {"_id": "129238", "title": "How to Document the Security/Encryption Code of an Application", "text": "I am working on an application that I developed a security layer for. It uses the hardware ID of the hard drive, the MAC address, and another hardware serial key to lock the software to a particular piece of hardware. I came across sites online that said I should use fairly weird names for the procedures/functions/variables in the security layer.
For instance:
function XtDmat: Boolean;
var
  x3mat: string;
  hhiTms: Integer;
begin
  Result := False;
  x3mat := GWindosColr;     //this returns the HD ID
  hhiTms := GalilDriverID;  //this gets a controller ID
  if (t5gFx = x3mat) and (hhiTms = f4teXX) then
    Result := True;         //match IDs with those saved in the registry
end;
And for the messages dialog box:
procedure Tfrm_MainWindow.mxProtectorExpiration(Sender: TObject);
begin
  lbl_Remaining.Caption := GetClassPath('xsedfers34;;''kd');  // encrypted value decrypted and shown: '0 days remained'
  lbl_Message.Caption := GetClassPath('qwe23vftTxxx ii gtr'); // decrypt and show 'This license has expired'
  btn_Go.Enabled := False;
end;
I did this because I used Delphi DEDE to decompile my code and found even the registry key 'HKCU\\software\\myapplication' to be in plain sight. Now, however, it has become difficult to: 1. explain to my fellow teammates why I did this (meaning the names), 2. document, as the names do not make sense, and 3. debug, which gives me headaches. Can anyone suggest a good way to document this type of code in this type of situation, so the code becomes easier to work with but is still difficult to decompile? Alternatively, could one suggest an obfuscator for Delphi?"} {"_id": "97935", "title": "How is Delphi XE2 going to work across platforms?", "text": "So I've been reading a little about Delphi XE2 and I probably will go to the world tour thing in Chicago coming up later this month and ask this question if no one can answer it here. What I wonder is, how is my Delphi code going to be executed on a Mac? Is something else going to have to run (i.e. a virtual machine) in order for the program to run?"} {"_id": "59037", "title": "Linking Libraries in iOS?", "text": "This is probably a totally noob question but I have missing links in my mind when thinking about linking libraries in iOS. I usually just add a new library that's been cross-compiled and set the build and linker paths without really knowing what I'm doing. I'm hoping someone can help me fill in some gaps. Let's take the OpenCV library for instance. I have this totally working btw because of a really well written tutorial (http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en), but I'm just wanting to know what exactly is going on. What I think is happening is that when I build OpenCV for iOS, I'm creating object code that gets placed in the .a files. This object code is just the implementation files (.m) compiled. One reason you would want to do this is to make it hard to see the source code, and so that you don't have to compile that source code every time. The .h files won't be put in the library (.a). You include the .h files in your source files, and these header files communicate with the object code library (.a) in some way. You also have to include the header files for your library in the Build Path and the library itself in the Linker Path. So, is the way I view linking libraries correct? If not, can someone correct me on this?"} {"_id": "193176", "title": "In Java, would you sacrifice type safety for a nicer programming interface", "text": "**When and why would you generally sacrifice type safety for a nicer programming interface?** Let me give you an example: if you had the choice between two event aggregators, which one would you prefer and why?
Reflective version:
SomeEvent.subscribe(instance, \"nameOfAMethod\"); //method called via reflection
SomeEvent.fire(arg1, arg2); //the firing could actually even be statically typed
Statically typed version:
EventSystem.getEvent(SomeEvent.class).subscribe(new EventHandler() {
    public void eventOccurred(Object sender, Payload payload) {
        //event handler code here
    }
});
EventSystem.getEvent(SomeEvent.class).fireEvent(payload);
Please note that in Java, due to type erasure, you cannot implement a generic interface with different type parameters more than once, and need to resort to anonymous or external classes for handlers. Now the reflective event system has a nicer user interface, but you lose type safety. Which one would you prefer? Would you create empty event classes just for the sake of having a symbol, like Microsoft does with PRISM in its event aggregator?"} {"_id": "178102", "title": "Why is there never any controversy regarding the switch statement?", "text": "We all know that the `goto` statement should only be used on very rare occasions, if at all. Use of the `goto` statement has been discouraged in countless places, countless times. But why is there never anything like that about the `switch` statement? I can understand the position that the `switch` statement should always be avoided, since anything with `switch` can always be expressed by `if...else...`, which is also more readable, and the syntax of the `switch` statement is difficult to remember. Do you agree? What are the arguments in favor of keeping the `switch` statement? It can also be difficult to use if what you're testing changes from, say, an integer to an object: C++ or Java won't be able to perform the switch, and neither can C perform a switch on something like a struct or a union. And the technique of fall-through is so very rarely used that I wonder why there has never been any expressed regret about having switch at all. The only place I know where it is best practice is GUI code, and even that switch is probably better coded in a more object-oriented way."} {"_id": "238874", "title": "overriding implemented base class methods", "text": "I read somewhere that the chain of inheritance breaks when you alter a behavior from a derived class. What does \"altering a behavior\" mean here? Is overriding an already implemented method in a base class considered \"altering behavior\"? Or does the author mean altering method signatures and the output? Also, I read that duplicating code is not a good practice, and it's a maintenance nightmare. Again, is overriding an already implemented method in a base class considered \"duplicating code\"? If not, what would be considered \"duplicating code\"? I"} {"_id": "218601", "title": "Ordering if conditions for efficiency and clean code", "text": "This is purely a design question; the example is simple, to illustrate what I am asking, and there are too many permutations of more complex code to provide examples that would cover the topic. _I am not referring specifically to this example, just using the example to illustrate what I am referring to._ In an if statement, is there a preferred way to order conditions, in terms of coding convention and, more importantly, efficiency? For such a simple example, efficiency will not be an issue, but I am not asking for a code review per se, but for an understanding of the concept. Two alternatives: 1.
public double myStuff(int i) {
    // check for an invalid number of tourists
    if (i > 0) {
        // do something here
        return doubleSomething;
    } else {
        return Double.NaN;
    }
}
2.
public double myStuff(int i) {
    if (i <= 0) {
        return Double.NaN;
    } else {
        // do something here
        return doubleSomething;
    }
}
If the if statement were to become more complex, and possibly nested, in what order should conditions be addressed?"} {"_id": "178105", "title": "Understanding HTTP Cookies in Indy 10 for Delphi XE2", "text": "I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects which each represent a unique session. I don't use username and password to authenticate users, but rather use a unique API key which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: `http://localhost:1234/login?APIKey=abcdefghij`. The server checks this API key against the database, and if it's valid, it creates a new session in the bucket, issues a new cookie (unique string), and sets the response cookies with `Success=Y` and `Cookie=abcdefghij`. This is where I have the question. Assuming the client end has its own method of cookie management, the client will receive this login response back from the server and _automatically_ save the cookies as necessary. Any future request from the client to the server shall automatically send along these cookies, and the client side doesn't necessarily have to worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see fit to ask it on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know."} {"_id": "221178", "title": "Entity Framework Entities - Some Data From Web Service - Best Architecture?", "text": "We are currently using Entity Framework as an ORM across a few web applications, and until now it has suited us well, as all our data is stored in a single database. We are using the repository pattern, and have services (the domain layer) which use these, and return the EF entities directly to the ASP.NET MVC controllers. However, a requirement has come up to utilise a 3rd party API (through a web service) which will give us extra information that relates to the user in our database. In our local User database, we will store an external ID which we can provide to the API to get additional information. There is quite a bit of information available, but for the sake of simplicity, one piece of it relates to the user's company (name, manager, room, job title, location, etc.). This information will be used in various places throughout our web apps - as opposed to being used in a single place. So my question is, where is the best place to populate and access this information? As it is used in various places, it's not really sensible to fetch it on an ad-hoc basis wherever we use it in the web application - so it makes sense to return this additional data from the domain layer. My initial thought was just to create a wrapper model class which would contain the EF entity (EFUser), and a new 'ApiUser' class containing the new information - and when we get a user, we get the EFUser, and then get the additional info from the API, and populate the ApiUser object.
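Roughly like this (only a sketch: written in Java-style syntax for brevity, the C# version would be analogous, and every name besides EFUser/ApiUser is invented):
class EFUser { int id; String externalId; }         // entity from our own database (simplified)
class ApiUser { String company; String jobTitle; }  // extra info from the 3rd-party API (simplified)

interface UserRepository { EFUser getById(int id); }
interface CompanyApiClient { ApiUser fetch(String externalId); }

// The wrapper model: pairs the local entity with the API data
class UserModel {
    final EFUser dbUser;
    final ApiUser apiUser;
    UserModel(EFUser dbUser, ApiUser apiUser) { this.dbUser = dbUser; this.apiUser = apiUser; }
}

class UserService {
    private final UserRepository repo;
    private final CompanyApiClient api;
    UserService(UserRepository repo, CompanyApiClient api) { this.repo = repo; this.api = api; }

    UserModel getUser(int id) {
        EFUser u = repo.getById(id);                      // one local DB hit
        return new UserModel(u, api.fetch(u.externalId)); // plus one web-service call per user
    }
}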
However, whilst this would be fine for getting single users, it falls over when getting multiple users. We can't hit the API when getting a list of users. My second thought was just to add a singleton method to the EFUser entity which returns the ApiUser, and just populate it when needed. This solves the above problem, as we only access it when we need it. Or the final thought was to keep a local copy of the data in our database and synchronise it with the API when the user logs in. This is minimal work, as it's just a synchronisation process - and we don't have the overhead of hitting the DB and API every time we want to get user information. However, this means storing the data in two places, and also means the data is out of date for any user that hasn't logged in for a while. Does anyone have any advice or suggestions on how best to handle this kind of scenario?"} {"_id": "99935", "title": "MonoTouch/MonoDroid + C# = trustable?", "text": "> **Possible Duplicates:** > MonoTouch vs Objective-C for iPhone/iPod/iPad development > As a C# developer, would you learn Java to develop for Android or use MonoDroid instead? I'm very curious about the tools named MonoTouch and MonoDroid for creating applications for Android, iPhone, and iPod using C# code. Questions: * Are these tools good enough to create applications that will be used in different environments? * Can these tools replace the original way of creating applications? If you create an application for Android, you often use Java to create it. * How different is it to create an application with Mono for Android compared to creating an application for Windows Mobile Phone?"} {"_id": "151386", "title": "Do Android developers have to pay sales taxes?", "text": "According to a blog post by RetroDreamer, Android developers have to pay sales taxes on their app sales in countries that have sales taxes, while Apple developers don't, as Apple pays the taxes directly. Is this an accurate description? If so, how do the various people who publish Android apps handle it?"} {"_id": "66502", "title": "Project Closures in Scrum", "text": "In a typical software development environment, **project closures** mark the end of a project. 1. Project records are completed and archived, 2. resources released, 3. issues and lessons are documented, and 4. a formal dinner/party held for celebration. The last step is optional, though it is very motivating for participants. :-) Contrast this with Scrum. I know that Scrum runs on **stories from backlogs**. So, technically, every iteration closes certain stories. So, there are two questions here. 1. For a group that works on **multiple simultaneous projects**, how do project closures fit in? 2. For a project that involves **multiple groups**, how does this concept apply? Or does the project closure term not apply to **T&M projects** at all?"} {"_id": "108240", "title": "Why are interfaces useful?", "text": "I have been studying and coding in C# for some time now. But still, I can't figure out the usefulness of interfaces. They bring too little to the table. Other than providing the signatures of functions, they do nothing. If I can remember the names and signatures of the functions which need to be implemented, there is no need for them. They are there just to make sure that the said functions (in the interface) are implemented in the inheriting class.
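The smallest example people give me looks like this (Java syntax, but the C# version is line-for-line the same; all names are made up):
interface MessageSender {                      // the contract: nothing but names and signatures
    void send(String to, String body);
}

class SmtpSender implements MessageSender {    // one implementation
    public void send(String to, String body) { /* talk to a mail server */ }
}

class LoggingSender implements MessageSender { // a drop-in replacement, e.g. for tests
    public void send(String to, String body) { System.out.println(to + \": \" + body); }
}

class Alerts {
    private final MessageSender sender;
    Alerts(MessageSender sender) { this.sender = sender; }  // depends only on the contract
    void alert(String user) { sender.send(user, \"something happened\"); }
}
Both `new Alerts(new SmtpSender())` and `new Alerts(new LoggingSender())` work without touching `Alerts`, and the compiler, not my memory, checks that each sender really implements `send`.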
C# is a great language, but sometimes it gives you the feeling that first Microsoft creates the problem (not allowing multiple inheritance) and then provides the solution, which is rather a tedious one. That's my understanding, which is based on limited coding experience. What's your take on interfaces? How often do you make use of them, and what makes you do so?"} {"_id": "229691", "title": "what is the main utility of Interface in real world programming (OOPS)", "text": "What is the main utility of interfaces? We know that we can implement dynamic behavior using an interface, but I guess that is not its only utility. So I would like to know when we have to write an interface and when we need to go for an abstract class. **Show me the 5 or 10 most important uses of interfaces in real-life scenarios.** Another main use that comes to my mind is that a project manager or team lead will define a basic skeleton through an interface, and other developers follow it. So please show me, with sample code, a few of the most important uses of interfaces which we cannot achieve with an abstract class or concrete class. **One guy explained it to me this way, which is not very clear to me:** interfaces are defined contracts between classes or structs; consumers can exchange the implementation for a different one as long as the same contract is met, that is, the method names and signatures compose a specification that classes and structs can work against, rather than working against a concrete implementation. The important part about interfaces is to know when to use them, and as a matter of fact it's quite simple: when you want two or more unrelated objects to have the same common functionality but not necessarily the same implementation, you will want to use interfaces; otherwise, when you have related objects that share functionality and implementation, you may consider using an abstract class instead of an interface. **This part especially is not clear to me:** when you want two or more unrelated objects to have the same common functionality but not necessarily the same implementation, you will want to use interfaces; otherwise, when you have related objects that share functionality and implementation, you may consider using an abstract class instead of an interface. It would be nice if anyone could explain, with sample code, when to go for an interface and when for an abstract class. Show me a few of the most important areas that are always handled with interfaces, with sample code, or the best interface uses with sample code. Thanks."} {"_id": "207207", "title": "Programming to interface in Java", "text": "What is the real use of interfaces in Java? What is meant by programming to interfaces? I have heard these things several times, but I don't know what they are or why they are used."} {"_id": "180001", "title": "Abstract Class, Interface: difference and use", "text": "> **Possible Duplicate:** > When to use abstract classes instead of interfaces and extension methods in C#? What is the difference between abstract classes and interfaces in Java? And under what circumstances should I choose to create an abstract class or an interface? What are the points that I must keep in mind before choosing one of the above?"} {"_id": "131332", "title": "What is the point of an interface?", "text": "> **Possible Duplicate:** > When to use abstract classes instead of interfaces and extension methods in C#? > What other reasons are there to write interfaces rather than abstract classes? This question sounded a bit trivial to me as well, till I gave it serious thought myself.
What is the point of a Java interface? Is it really Java's answer to multiple inheritance? Despite using interfaces for a while, I never got around to thinking about the point of all this. I know what an interface is, but I am still pondering the why."} {"_id": "129075", "title": "What other reasons are there to write interfaces rather than abstract classes?", "text": "> **Possible Duplicate:** > When to use abstract classes instead of interfaces and extension methods in C#? When I read and looked at code using abstract classes, I was able to justify it because it allows you to add common methods for any subclasses extending the abstract class. So, for example, if objects' behavior is similar, I would use abstract classes to declare bodyless abstract methods that are required for each object, and simply use non-abstract methods already implemented in the abstract class. I can think of a scenario dealing with multiple media file types (avi, mpg, mp4), where you would have common methods for all files, as well as media-specific abstract methods that need to be implemented. However, I am a bit confused as to why you would knowingly create an interface, which cannot contain any non-abstract methods. Reading this page, it states that it hides information (you mean the abstract methods?). > Hiding details and providing common interfaces is called encapsulation, which is an analogy from making an object look like it's covered by a capsule (the interface in this case). This allows two objects differing in internal representation but having the common interface interchangeably usable (called interchangeability). Interfaces also allow to facilitate the use of data structure and guard the state of the object from invalid inputs and modification of the structure. So does this mean that any objects which share the common behaviors, implemented uniquely, can be treated as the same category? So for the media file example, does this mean any specific media file type implementing the `interface MediaFile` can be passed as an argument to a method dealing with such a type of object? public class ServiceClass { public ServiceClass(){ //no-args constructor } public boolean runService(ICollaborator collaborator){ if(\"success\".equals(collaborator.executeJob())){ return true; } else { return false; } } } But can't you do the above with an abstract class? Also, isn't the ability to have a non-abstract method better than having none at all, for the future, when you suddenly need an existing method that will apply to all classes extending the abstract class? Or is the difference of using an interface the ability to protect the implementation of data completely? Once again, I don't see in what cases you would use it. Other than that, I implement `ActionListener` quite often to have the `actionPerformed` method."} {"_id": "113313", "title": "What is Interface in Java programming language?", "text": "Last week my lecturer was teaching us about interfaces in Java. However, I failed to understand her explanation that well. Does anyone have a good description or explanation of Java interfaces, and reasons to make use of them?"} {"_id": "245907", "title": "Why do APIs generally consist of interfaces?", "text": "I am starting out in Java API design, and in reading existing code bases, I have found that most APIs consist of interfaces only, with their implementations bundled in a different package.
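The layout I keep seeing is roughly this (a schematic sketch; package and type names are invented):
// In the published API package (one file):
package com.example.api;

public interface Cache {
    void put(String key, String value);
    String get(String key);
}

// In a separate implementation package (another file):
package com.example.impl;

import com.example.api.Cache;
import java.util.HashMap;
import java.util.Map;

public class InMemoryCache implements Cache {
    private final Map<String, String> store = new HashMap<>();
    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }
}
Callers typically see only the api package and obtain implementations through a factory or injection, so the implementation package can change freely.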
Having read that interfaces are generally more problematic to use/maintain than abstract classes, why aren't abstract classes used instead of interfaces?"} {"_id": "239189", "title": "Is using interfaces on internal code a good idea?", "text": "I'm working on a set of automated tests that we use internally at work. Lately, we've been designing classes that implement interfaces in addition to inheritance. As I understand it, interfaces in Java are used to fulfill a contract. A class that implements an interface must include implementations of any members of that interface. This works really well for building libraries or objects that are meant to be used by outside teams or individuals, since it guarantees certain methods will be present. What about code that's entirely kept within a single team in a single location? (Assume this code will stay internal, or else this question changes entirely.) If I can simply go to my VCS history or ask a team member about changes to code, do I really need to enforce these changes using interfaces? Even if a team is large, you could still develop conventions or tests to verify things. I feel like we're just making more work for ourselves by using interfaces, but maybe there are additional arguments. NOTE: This project is a Java project and will stay that way. Interfaces here are instances of `interface` with all the syntax and rules that apply."} {"_id": "109602", "title": "Why should I use interfaces if the implementation will mostly stay the same?", "text": "> **Possible Duplicate:** > Why are interfaces useful? In our company we have a service-oriented architecture in our ASP.NET application. We use interfaces for every crap class. It's a huge overhead. The service classes, data provider classes, etc. - they all use interfaces, but **practically** we could also go without those interfaces and just use the class type. So why should we do it in such a complicated way and lose lots of productivity?"} {"_id": "41740", "title": "When to use abstract classes instead of interfaces with extension methods in C#?", "text": "\"Abstract class\" and \"interface\" are similar concepts, with interface being the more abstract of the two. One differentiating factor is that abstract classes provide method implementations for derived classes when needed. In C#, however, this differentiating factor has been reduced by the recent introduction of extension methods, which enable implementations to be provided for interface methods. Another differentiating factor is that a class can inherit only one abstract class (i.e., there is no multiple inheritance), but it can implement multiple interfaces. This makes interfaces less restrictive and more flexible. **So, in C#, when should we use abstract classes instead of interfaces with extension methods?** A notable example of the interface + extension method model is LINQ, where query functionality is provided for any type that implements `IEnumerable` via a multitude of extension methods."} {"_id": "163459", "title": "Understanding interfaces", "text": "> **Possible Duplicate:** > When to use abstract classes instead of interfaces and extension methods in C#? > Why are interfaces useful? > What is the point of an interface? > What other reasons are there to write interfaces rather than abstract classes? > What is the point of having every service class have an interface? > Is it bad habit not using interfaces? I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good book for introducing you to the C# language.
I have just finished reading a chapter on interfaces, and although I understood the syntax of creating and using interfaces, I have trouble understanding why I should use them. Correct me if I am wrong, but in an interface you can only declare method names and parameters. The body of the method should be declared in the class that inherits the interface. So in this case, why should I declare an interface if I am going to declare the entire method in the class that inherits that interface? What is the point? Does this have something to do with the fact that a class can inherit multiple interfaces?"} {"_id": "145436", "title": "How to synchronize web page refresh with file upload from windows application?", "text": "I have a rather simple app in .NET which uploads images at fixed intervals to an FTP server. The file name is specific. I need to have a web page that refreshes, **but** I want this to happen only when the upload finishes. How do I determine that the upload has finished? And if the solution has to do with renaming and deleting temp files, should I do this from the Windows app or server-side? Thanks. Update: What if I could find a way to check whether a file is a **valid** JPG? I mean, if the transfer is not over, this check should return false. Ideas?"} {"_id": "50415", "title": "Python lower_case_with_underscores style convention: underscores not popular?", "text": "PEP8 recommends using > lowercase, with words separated by underscores as necessary to improve readability for variable and function names. I've seen this interpreted as `lower_case_with_underscores` by most people, although in practice and in Python's native methods it seems like `lowercasewithoutunderscores` is more popular. It seems like following PEP8 strictly would be awkward, since it seems to suggest mixing both `lower_case_with_underscores` and `lowercasewithoutunderscores`, which would be inconsistent. What is your interpretation of PEP8's variable names, and what do you actually use in practice? (Personally, I like `lowerCamelCase` as a compromise between readability and ease of typing.)"} {"_id": "189107", "title": "When can one call themselves a \"Rubyist\"?", "text": "I was wondering what that term even meant. Is it something to do with one's amount of knowledge about the Ruby language, or just the plain idea of using it? When can one call themselves a \"Rubyist\"?"} {"_id": "223203", "title": "GitHub etiquette for duplicating a repo to change functionality", "text": "I've found a GitHub project I'd like to add some features to. After contacting the maintainer, I learned that the changes aren't in line with the direction he's going, but he's interested to see what I do with it. What is the GitHub etiquette for using one repo as a base for another project that almost certainly won't ever be merged back into the original? Instead of forking, my instinct is to create a brand new repo and manually copy the current state of the original code into it. Then, in the documentation, give credit and a link to the original author/repo for the starting point. Is that acceptable, or is there some other standard approach?"} {"_id": "189103", "title": "I'm trying to create a visual representation of something, but I don't know what words to use, so I'll try to describe it", "text": "My project team is making a site that will use reddit-style voting to track users' opinions on various issues, and use the data to create a \"heat map\".
I say heat map in quotes because I'm not sure that it's the correct term - I don't have the math to express it properly. The idea is you'll be able to see, on a 2-dimensional graph, how your opinions compare to those of other voters in your municipality, and to politicians currently running for office. This will help voters make more informed decisions about voting. I'm not sure how to code this or even what to start googling for. Can anyone suggest a direction? (I'll change the title to something more useful when I have more information, sorry.)"} {"_id": "189102", "title": "How to overcome the fear of building a web application with a recurring payment system?", "text": "I'm a Ruby on Rails developer. I'd like to create a web application. I will let users get a paid subscription to use the product, so I will need a recurring billing system (e.g. via PayPal). But there is the fear of failing and making some bad mistakes, because I will be handling other people's money, regardless of following the instructions of, e.g., Ryan Bates on RailsCasts on how to implement such a recurring payment system. What is your advice to overcome this fear of building such a product? What are the risks I need to calculate when designing such a system?"} {"_id": "52816", "title": "What do you do to remain productive when working on your own?", "text": "I find working in isolation, on a piece of code that won't be seen by anyone else for weeks, draining. I'm looking for ideas to try to keep myself productive and motivated. What do you do to remain motivated and productive when given a long-term programming task and working on your own (for example, from home, without any teammates or coworkers)?"} {"_id": "52818", "title": "Marking services for secure handling; Annotation or inheritance?", "text": "We have a lot of services, some that demand some security, some that don't. We want an easy way of telling, in code, if a service will be secure or not. What would be the better way: annotation or inheritance? public class SomeServiceImplementation : BaseSecureService, ISomeService or [SecureService] public class SomeServiceImplementation : ISomeService"} {"_id": "111546", "title": "Is this a ridiculous way to structure a DB schema, or am I completely missing something?", "text": "I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - \"WTF??!?\" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is for an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:
### User table:
UserID (primary key, indexed, no dupes)
FirstName
LastName
Department
### Request table
RequestID (primary key, indexed, no dupes)
<...> various data fields containing request details
UserID -- foreign key associated with User table
Simple, right?
The consultant designed it like this (with sample data):
### UsersTable
UserID  FirstName  LastName
234     John       Doe
516     Jane       Doe
123     Foo        Bar
### DepartmentsTable
DepartmentID  Name
1             Sales
2             HR
3             IT
### UserDepartmentTable
UserDepartmentID  UserID  Department
1                 234     2
2                 516     2
3                 123     1
### RequestTable
RequestID  UserID  <...>
1          516     blah
2          516     blah
3          234     blah
The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers..."} {"_id": "149926", "title": "Is it a good idea to use CouchDB?", "text": "Is it a good idea to use CouchDB for a web application that is going to be platform-agnostic (from tablet to PC)?
* The app is a big form which I need to be able to modify at will.
* I also need to scan the results to retrieve the data, to send some file to help the client.
* I need to be able to upload the results to a master server.
* I want to mainly use the app offline and online at will.
The data is questions with answers and comments linked to them. At the end, a percentage is generated for the user and, depending on the result, a call to another application will be made to \"send\" information to the user depending on their results. The application needs to be offline because we don't know if the user is connected to the internet at the time they answer the questions. The data needs to be the same on all platforms (replication is a given). I cannot rely on the browser, even if the app is going to be built with HTML5, CSS, and JavaScript. Is it possible with CouchDB, and is it too big a mandate for only one person? If there are not enough details, ask and I will explain more thoroughly. **EDIT:** After all your answers, here is what I have concluded:
* I am going to use SQLite and sync it with our SQL databases.
* NoSQL is not made for the kind of app I am working on.
* Using what you know is sometimes the way to go.
* If you don't know how to use the technology and intend to use it for a huge project you're coding alone, don't."} {"_id": "60900", "title": "Do abstractions have to reduce code readability?", "text": "A good developer I work with told me recently about some difficulty he had in implementing a feature in some code we had inherited; he said the problem was that the code was difficult to follow. From that, I looked deeper into the product and realised how difficult it was to see the code path. It used so many interfaces and abstract layers that trying to understand where things began and ended was quite difficult. It got me thinking about the times I had looked at past projects (before I was so aware of clean code principles) and found it extremely difficult to get around in the project, mainly because my code navigation tools would always land me at an interface. It would take a lot of extra effort to find the concrete implementation or where something was wired up in some plugin-type architecture. I know some developers strictly turn down dependency injection containers for this very reason. It confuses the path of the software so much that the difficulty of code navigation is exponentially increased. My question is: when a framework or pattern introduces this much overhead, is it worth it? Is it a symptom of a poorly implemented pattern?
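To make the navigation problem concrete, here is a hypothetical Java sketch (all names invented): go-to-definition on the field below stops at the interface, and nothing in the class says which implementation will actually run, because that decision lives in wiring code elsewhere.
interface PaymentGateway {
    void charge(long amountInCents);
}

class CheckoutService {
    private final PaymentGateway gateway;  // navigation tools land here, at the interface

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;            // which implementation? the container decides
    }

    void checkout(long amountInCents) {
        gateway.charge(amountInCents);     // the concrete class is only knowable at runtime
    }
}

// Somewhere far away, in configuration the IDE cannot follow statically
// (hypothetical container API):
// container.bind(PaymentGateway.class, RealBankGateway.class);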
I guess a developer should look to the bigger picture of what that abstraction brings to the project, to help them get through the frustration. Usually, though, it's difficult to make them see that big picture. I know I've failed to sell the need for IoC and DI with TDD. For those developers, use of those tools just cramps code readability far too much."} {"_id": "26560", "title": "Which back-end web programming language to use", "text": "I have a project where I will be collaborating to build a fairly simple site with some database access. I will be doing the back-end work, and my colleague will be doing the web design. The problem is that my colleague has only worked with PHP developers and I have a lot more experience in Perl. The options would be to either learn PHP while doing the project or for my colleague to learn how to design around Perl. (I guess a third option would be to decline the project because this obstacle is just insurmountable). If the answer is to use Perl, the next question is which templating module would be easiest for my PHP-aware web designer colleague to adapt to. HTML::Mason? HTML::Template? Something else?"} {"_id": "126469", "title": "Design Document From Code", "text": "I am not very familiar with documenting/system design. I have to maintain an application written in C#, running as a Windows service. However, there is no documentation for this system, which makes it really painful to find where some problem (conceptually) occurred. I would like to know the best way to design/document it (using the current code), manually or preferably automatically, so that I can identify the exact problems. I feel as if a sequence diagram probably won't help much. Also, please guide me if I am approaching this the wrong way."} {"_id": "175824", "title": "Quality Assurance activities", "text": "> **Possible Duplicate:** > Verification of requirements question Having asked but deleted the question, as it was a bit misunderstood. If quality control is the actual testing, **what are the commonest true quality assurance activities?** I have read that it is verification (reviews, inspections...), but that does not make much sense to me, as it looks more like quality control, as mentioned here: _DEPARTMENT OF HEALTH AND HUMAN SERVICES ENTERPRISE PERFORMANCE LIFE CYCLE FRAMEWORK Practices guide_ > Verification - \"Are we building the product right?\" **Verification is a quality control technique** that is used to evaluate the system or its components to determine whether or not the project's products satisfy defined requirements. During verification, the project's processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. > Some Verification activities may include items such as: * Verification of requirements against defined specifications * Verification of design against defined specifications * Verification of product code against defined standards * Verification of terms, conditions, payment, etc., against contracts And the opposite ( _project management knowledge scope_ - a Google result): **Verification is a quality assurance process or technique applied by ...**"} {"_id": "127735", "title": "Android development using C and C++", "text": "I am a C, C++ developer. I am interested in mobile development.
I want to know how I can develop Android apps using C and C++. I have read that they provide a kit for C/C++ developers, but it does not have all the functionality of the Java kit. Should I go for the C/C++ development kit, or is it better to learn Java, as they may not provide all the functionality in the future?"} {"_id": "75536", "title": "Are there some types of software that cannot be developed by all major programming languages?", "text": "I'd like to know if some of the major programming languages can absolutely not be used to create some very specific types of software. By major programming language I mean the likes of C++, C#, Java, Ruby, Python. By \"cannot be developed\" I mean cannot be developed, or it is unrealistic to do so due to performance, difficulty of implementation, etc. I've always thought that any programming language could be used to solve any problem, but lately I've been thinking that some languages are unsuitable for some projects. If you can provide examples of such applications, it would be appreciated. Thanks."} {"_id": "65266", "title": "How does \"new message\" notification work?", "text": "I'm interested in how the 'new message' notification in Gmail, for example, is implemented. I know that Ajax is used, but what else is used on the server and client side? Can you explain the scenario to me? Gmail is just an example to support my question. I'm interested in the cheapest solution that allows displaying new content asynchronously without a user-triggered event. If you know a resource which explains the optimal way of implementing such an app, I would appreciate it. TY"} {"_id": "214539", "title": "Technique to deal with occasionally blocking json api?", "text": "I have a web app that occasionally (after some idleness) will block a very simple request for small chunks of data (30-50 KB) for up to 20 or so seconds. Assuming I can't refactor or modify the API, is there some pattern in JavaScript or jQuery to accommodate a situation like this? I'm thinking of setting a timeout of 5 seconds or so for the API call and retrying the Ajax request -- via jQuery's $.ajax() with the timeout argument. I imagine this could be the equivalent of the user refreshing a slow-loading page. Thanks for your thoughts."} {"_id": "396", "title": "What are some good office-layout guidelines for a small development team?", "text": "Our office is moving soon and we have the opportunity to redesign our office space. We have five developers and a couple of testers, working with a project manager, and a business analyst. I'd like your ideas for an ideal office layout for a small development team like ours."} {"_id": "209441", "title": "Generic term for \"objects\" vs \"fundamental types\"?", "text": "What are the exact terms for **data types with a logical structure** (like C structures, C++ or Java objects) versus **fundamental data types** (like numeric types, characters, booleans...), independently of any language or paradigm? (I am searching for abstract/academic computer science words.)"} {"_id": "250569", "title": "Could the trivium be used to successfully teach programming languages?", "text": "The trivium is a systematic method of critical thinking for deriving certainty from any information, which consists of: * grammar * logic * rhetoric Joseph, Sister Miriam (2002). _The Trivium: The Liberal Arts of Logic, Grammar, and Rhetoric.
Paul Dry Books, Inc._ briefly describes these stages: > Grammar is concerned with the thing as-it-is-symbolized, Logic is concerned with the thing as-it-is-known, and Rhetoric is concerned with the thing as-it-is-communicated When I learned about the trivium I realized it would be the best way to approach teaching programming languages. Adapted to programming languages, these would be the stages: * in the grammar stage there would be the learning phase, where the goal is writing any program, playing with all the variants of syntax, simple data structures, using all the keywords and datatypes, and making simple programs. * in the logic stage the learning goal would be teaching algorithms and complex conditionals. * in the rhetoric stage there could be teaching programming paradigms, procedural programming, OOP, collaboration and versioning tools (git), etc. What would be the shortcomings of the trivium method applied to programming languages, which learning subjects could not be integrated with the trivium method, and would the trivium stage order (grammar first, logic second, etc.) apply well to learning programming?"} {"_id": "209448", "title": "My github pull request was merged, what's the convention at this stage?", "text": "I forked a project on Github, made a small change and sent a pull request to the original maintainer, who has pulled it in. Now the last commit there is `Merged pull request #11 from my_username/master`. This is the first time I'm doing this, so I'm not sure what the etiquette now is: I did a `git pull upstream master` and then `git push origin master`, and now the last commit on my own repository reads `Merged pull request #11 from my_username/master`, which feels pretty weird to me. Is this the way people usually do it, and is there anything I need to do to \"clean up the history\" or something? Note: since this was a tiny documentation change, I hadn't created any branches; I just made the change in my `master` branch and sent the pull req. So there's no cleanup to be done in that part."} {"_id": "114443", "title": "Is it really that hard to find good developers?", "text": "There was a topic about the job market a few days ago, and urban legends about this topic have been floating around as well. So, what's your experience? Have you tried hiring programmers (it's probably a good idea to mention the sector - web/database/embedded/general, and a continent)? And what are your experiences? Do companies receive 500+ CVs a day? No-one applies? People who apply are bad at things? What are the main reasons why candidates are found \"not good enough\"?"} {"_id": "134736", "title": "Good open source projects to master Python concurrency", "text": "Concurrent programming in Python is very colorful (and confusing too). There are just too many options, each having its pros and cons... * Thread based (`threading` module) * Process based (`multiprocessing` module) * Co-routines (`greenlet`, `gevent`, `eventlet`) * Async (`Twisted`, `Tornado`) * Inter-process communication (`subprocess` module) * Message queue based (`ØMQ`, `PyCom`, `mpi4py`) * Others (`Pyro`, `execnet`, `Parallel Python`) I know some of these and can write programs using them. But I just don't feel I know them well -- I can't decide what to use when. I don't know how to put these into perspective.
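To put the first two options side by side, here is a minimal sketch (the function names and workload are made up, not taken from any of the projects mentioned) of the same CPU-bound job run on threads versus processes:

```python
# Threads share one GIL, so CPU-bound work is serialized; processes sidestep
# the GIL and run on multiple cores. Names and numbers here are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # CPU-bound busy loop, so the GIL matters.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, label):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(burn, [2_000_000] * 4))
    print(label, round(time.perf_counter() - start, 2), 'seconds')

if __name__ == '__main__':
    timed(ThreadPoolExecutor, 'threads:')    # serialized by the GIL
    timed(ProcessPoolExecutor, 'processes:') # true parallelism
```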
So, my simple question is -- what are some open source projects that employ these techniques, so that I can see them in action in real programs?"} {"_id": "114446", "title": "OO Design principle name?", "text": "I think I remember reading somewhere that one of the principles of good OO design is to write methods which take the least derived type possible, but return the most derived type possible. First, is there such a principle, and second, what name would it go by? I'm looking for the name so that when I mentor other developers I can refer them to it."} {"_id": "93138", "title": "How to advocate Stack Overflow at work", "text": "I am thinking of doing a short presentation at work about using Stack Overflow as a resource for your day job. What is your experience doing this? Would you deem it a valid resource to tell your colleagues about, or is it similar to telling them about Google as a resource? Is there a better way of doing it? I was leaning toward the asking-questions side of Stack Overflow rather than answering them, to avoid the you-shouldn't-be-doing-this-on-work-time argument. * * * Just as a follow-up: originally I didn't want to make the question too specific to my own case. My presentation will only be a quick four-minute talk, which I will repeat over an hour to different groups. I may ask a question on Stack Overflow before the talk and refer to it during the presentation. Hopefully I will get some activity during the hour. I am also going to talk briefly about some of the other Stack Exchange sites that would fit the audience, as they are not all developers. I think Super User, Server Fault and Programmers should work well. I will not be doing the presentation for another couple of months as it has been rescheduled, but I will update on how I got on."} {"_id": "164590", "title": "Python as a first language?", "text": "I have just started working in the information security world. I want to learn the Python language to create my own automated tools for fuzzing, SQL injection, etc. My question is: I don't know much about the C language (only basic knowledge), but I want to learn Python directly, so is that a good idea? I have seen there are lots of differences between Python and C (obviously), and for the information security field Python = GOD, so I want to know: does learning Python require any experience with the C language? If not, can I start learning Python directly?"} {"_id": "165615", "title": "Make audible Ding! sound, or growl notification, when `rake test` finishes!", "text": "I lose a ton of productivity by getting distracted while waiting for my tests to run. Usually, I'll start to look at something while they're loading --- and 15-20 minutes later I realize my tests are long done, and I've spent 10 minutes reading online. Make a small change... rerun tests... another 10-15 minutes wasted! How can I make my computer give some kind of alert (sound or Growl notification) when my tests finish, so I can snap back to what I was doing?"} {"_id": "104277", "title": "In what order should I read these books?", "text": "This question is not a duplicate - I'm not asking about the best books. Rather, I'm asking for the best order in which to study. I'm new to programming, but I finished a very, very simple book (Beginning C# 3.0: An Introduction to Object Oriented Programming) and wrote some simple programs.
These are the books I plan to read from here: Pro C# 2010 and the .NET 4 Platform; C# 4.0 in a Nutshell: The Definitive Reference; CLR via C#; C# in Depth, Second Edition. What would be the best order in which to study them?"} {"_id": "165616", "title": "is it safe to use jQuery and MooTools together?", "text": "I just need to know: is it safe to use jQuery and MooTools together in one web framework? I am not trying to create an application using both of them, but I am in a situation where I need to modify a MooTools-based application framework. I am used to jQuery, I don't want to waste my time learning MooTools, and I think jQuery is better than MooTools in many contexts, like the number of applications, plugins, etc. So the questions are: 1. Is it safe to use MooTools and jQuery in one framework? 2. Will there be cross-browser issues? 3. How robust will the application be when using both?"} {"_id": "104273", "title": "Two interviews: Advice on replying to lesser of the two if awaiting result of the preferred one", "text": "I recently gave two interviews (say, company A and B) and am inclined to get into company A more than B. The hiring manager of A might be a little late in replying. I do not want to lose B if I don't get into A. I want to know: if I get a positive reply from company B before A, is there a polite way to say that I am waiting on one other result? Or can I buy a week or so? How do I convey this? Thanks in advance."} {"_id": "75287", "title": "Does KISS encourage tools and frameworks that expose complex leaky abstraction layers?", "text": "Tools and frameworks make complex tasks simple. This seems like something that would be supported by KISS (keep it simple, stupid). Tools and frameworks also have the potential to introduce leaky layers of abstraction, where the complexity of the issues is far more problematic than anything you would have written yourself. I am not interested in the effectiveness of tools and frameworks, since that is more of a religious opinion, and can be kept private. What I am interested in is whether or not tools and frameworks are favored by KISS. Obviously some layers of abstraction are accepted by just about everyone, but there are plenty of examples of edge cases where it is harder to tell. For example: for a web service in .NET, would KISS favor a basic WCF service (super easy to create; fairly difficult to understand/see what is actually going on under the hood) or a basic REST service (more challenging to create; easier to understand/see what is actually going on under the hood)? Note: there are plenty more examples of this, so feel free to suggest more appropriate ones."} {"_id": "100680", "title": "When should you rewrite?", "text": "> **Possible Duplicate:** > When is a BIG Rewrite the answer? In Joel Spolsky's famous (or infamous) article Things You Should Never Do, Part I, he makes the case that doing a rewrite is always a wrong move because: 1. It automatically puts you behind (your competitors/schedule/etc.) 2. The code probably isn't as bad as the programmers believe (anything someone else wrote is always a mess, although some are bigger messes than others - and even then) 3. It's probably easier/quicker/cheaper to fix what's truly wrong with it than to rewrite from scratch. 4. In rewriting from scratch, you are probably going to re-introduce bugs that were fixed in previous versions of the original code. Instead, he recommends fixing what's wrong with the code. I assume this is a good summary of his post, and I'll postulate that it is generally true.
I'm trying to collect a set of rules/guidelines for my team, and one of them (based on Joel's article) is: We don't re-write ... ever! - But we can refactor large portions of the code. (this exception takes care of ugly/problematic code issues) One question that has cropped up is \"What about when technology changes and you can no longer get support for older versions?\". This is what I think of as the Sisyphean upgrade path. E.g., I recall when Oracle moved from SQL*Forms 2.x/3.x through to Forms 6i and beyond; Forms that were originally developed in the old .inp format were no longer supported with the current version of Oracle and Oracle Forms. So you had a choice of sticking with an unsupported database and unsupported tools, or rewriting the Forms from scratch (or converting them with 3rd-party tools and then going over them with a fine-toothed comb). Which I will call porting, particularly when you are doing a faithful line-by-line/function-by-function translation into the new tech and not adding any functionality. The rule then became: We don't re-write ... ever! - But we can refactor large portions of the code. - And we can port the code to a new platform when we *truly* have no other choice. (and by truly I mean you can't even buy your way out of the problem for 2x the cost of porting it) Are there any other exceptions that I've missed?"} {"_id": "43948", "title": "How can I convince management to deal with technical debt?", "text": "This is a question that I often ask myself when working with developers. I've worked at four companies so far, and I've become aware of a lack of attention to keeping code clean and dealing with technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than use something like MySQL, and that created hell for the team when refactoring or extending the application. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there, and it's horrible to see the impact it has on team morale. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving. The company then becomes a revolving door with developers coming in and out and making the code worse. How do you communicate this to management to get them interested in sorting out technical debt?"} {"_id": "155488", "title": "I've inherited 200K lines of spaghetti code -- what now?", "text": "I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole \"SW Engineer\" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: _G2_ -- think Pascal with graphics). The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented \"sludge\" in the code itself. I will spare you the \"politics\" of the situation (there's _always_ politics!), but suffice it to say, there is not a consensus of opinion about what is needed for the path ahead.
They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of _The Pragmatic Programmer_, or Fowler's _Refactoring_ (\"Code Smells\", etc). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most \"bang for the buck\"? So that's my question: What would _you_ include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?"} {"_id": "131052", "title": "Reengineering the project from scratch", "text": "> **Possible Duplicate:** > When do you rebuild an application or keep on fixing the existing one I am currently working on a project that has been in development for the last few years and is used throughout the organization, but the way the project has been coded, its maintainability is completely shot. Reading the code presents pages and pages of anti-patterns, and trying to identify the path of a business workflow can on occasion take days. At this point I would probably classify the software in its current state as \"working by accident\" rather than as intended. So I am looking for some wisdom on the following: at what point would you consider simply dumping the project into an abandonware pile and starting from scratch? P.S. I understand that in a lot of cases organizations would consider it cheaper to maintain the existing project."} {"_id": "141005", "title": "How would you know if you've written readable and easily maintainable code?", "text": "How would one know if the code one has created is easily maintainable and readable? Of course, in your point of view (as the one who actually wrote the code) your code is readable and maintainable, but we should be true to ourselves here. How would we know if we've written pretty messy and unmaintainable code? Are there any constructs or guidelines to know if we have developed a messy piece of software?"} {"_id": "253273", "title": "Self-Evaluation: How do I know if I actually have a \"good grasp\" of OOP?", "text": "If I skip the back story and any thoughts I have on this topic, there's really only one question left to ask: **How can I find out if I have a \"good grasp\" on OOP?** (I am specifically using PHP, but it probably won't matter...) Right now I kind of think of classes as a collection of functions with some global-ish variables accessible by all those functions. This helps to reuse code, keep files short and the namespace clean. Some of you mentioned inheritance: to me that again just means that I can extend an existing class with more functions and more global-ish variables. It's like an add-on to my existing class. And the same benefits come into place: reuse code, keep files short.
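To make the comparison concrete, here is a small sketch (in Python for brevity, though the question is about PHP; all names are made up) of what the 'functions plus shared variables' picture leaves out, namely that two classes behind one interface can be swapped without the calling code changing:

```python
# Polymorphism in miniature: checkout() depends only on the apply() method,
# not on which concrete class it receives -- no flags, no shared globals.
class FlatDiscount:
    def __init__(self, amount):
        self.amount = amount

    def apply(self, price):
        return max(price - self.amount, 0)


class PercentDiscount:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, price):
        return price * (1 - self.rate)


def checkout(price, discount):
    # Works with any object that provides apply().
    return discount.apply(price)


print(checkout(100, FlatDiscount(15)))      # 85
print(checkout(100, PercentDiscount(0.2)))  # 80.0
```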
I have an ominous feeling that I'll be disillusioned here in a minute..."} {"_id": "47416", "title": "How to explain to a non-technical person why the task will take much longer than they think?", "text": "Almost every developer has to answer questions from the business side like: Why is it going to take 2 days to add this simple contact form? When a developer estimates this task, they may divide it into steps: * make some changes to the database * optimize DB changes for speed * add front-end HTML * write server-side code * add validation * add client-side JavaScript * use unit tests * make sure the SEO set-up is working * implement email confirmation * refactor and optimize the code for speed * ... These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours MAX. So is there a better way to explain why the estimate is high to a non-developer?"} {"_id": "211528", "title": "How to educate business managers on the complexity of adding new features?", "text": "We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features _ad infinitum_ and the cost of each feature will remain constant. I have repeatedly tried to explain to them that it doesn't work that way - that the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical and have no idea what goes into writing software. Is there a way that I can explain this using business language, that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?"} {"_id": "121844", "title": "What is the most effective approach to learn an unfamiliar complex program?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I have quite a bit of experience with different programming languages and writing small and functional programs for a variety of purposes. My coding skills aren't what I have a problem with. In fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for their respective languages (Python and Java are my two primary languages)? Should I be starting with just looking at function names or documentation? How do you veterans approach this problem?
I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill in an open source project, so any suggestions for starting points (maybe mildly complex) would be great too!"} {"_id": "78881", "title": "Contributing to open source software (how to hack)", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I am currently a student and started programming a few years ago. I am able to write complete working software in many languages. However, there is something that bugs me about contributing to open source projects: how do you understand how the organization structured its source code? I tried to add functionality to Apache Tomcat and found it difficult to find my way around all these source files. Is there something I am missing that makes it easier to understand the organization of the work done?"} {"_id": "221144", "title": "What's the best approach to studying Open Source projects or any large codebase?", "text": "There is an open source project which I need to use in my project. All other functionality is built on top of it. I am new to programming and find it very daunting. There are a lot of open source projects and APIs out there. At one point or another we need to use these in our projects, since we don't want to re-invent things. There are tons of lines of code written in those open source projects by other programmers. It's really hard to understand. How do I pick a library or API and understand it quickly enough to use it in my project? Do I need to understand each line of code, or just have a general overview of how it works? Please share your experience of how you approach it and any guidance on how to look at it."} {"_id": "34227", "title": "Developing my momentum on open source projects", "text": "I've been struggling to develop momentum contributing to open source projects. I have in the past tried with gcc and contributed a fix to libstdc++, but it was a one-off, and even though I spent months in my spare time on the dev mailing list and reading through things, I just never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back, and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project and want to know how people got started, because I find it difficult while working full time to develop any momentum with the code base. Other more regular contributors, who are on the project full-time, are able to make changes at will and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those that come along later. My questions are to people who actively contribute to large open source projects, like the Linux kernel, or gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10. * How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? I know in Linus's case he had a chunk of time (6 months) to get it started. * What barriers to entry did you encounter? * Can you describe the initial stages of the time spent with the project, from when you had little understanding of the code to when you understood enough to commit regularly?
Thanks"} {"_id": "167209", "title": "Persuading management that refactoring code is a good idea", "text": "> **Possible Duplicate:** > Best supporting argument for refactoring Has anyone got any tips for persuading management that refactoring code is a good idea? I was asked something like > \"After this refactoring, will I have a better product? How does time spent on this benefit the company?\" Or something along those lines. I can see why management asked it, but my response was not fantastic. I said something about how a well-designed piece of software will mean that adding new features in the future will be quicker and easier. Does anyone else have any tips? Many thanks."} {"_id": "247034", "title": "How to handle a client that wants me to use code from Google under all circumstances?", "text": "I have a client that has been upset with the fact that it takes more time than he thinks to complete a feature or bug fix in a web application. He is more of a project manager who has hired me as a sub-contractor. He has constantly been questioning the estimates I have given him, and is always trying to get me to agree to do things in less time than I have estimated. I have made it clear to him that I will not always bill for the full estimate if it does not in fact take me that long to complete. He also told me that if there is code on Google (w3schools), I should use that and not write it myself. I have been doing business with this person for 12 months. Just to clarify what I mean by \"Google\": I mean that he would always prefer I copy/paste code that I have found via Google rather than write it myself."} {"_id": "29788", "title": "How do you dive into a big ball of mud?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? So there is a question about understanding code. Mine is a similar problem. It started when I joined my current org., and like I always do at the beginning, I tried to get a feel for the code, starting from high up and working my way down to the details. Only, there is no detail, or to be more precise, I have not been able to discern any design in this big ball of mud. The application is riddled with duplicate/unnecessary data; there is no attempt at encapsulation or abstraction or hierarchy, with the same code copied into multiple classes/modules for the same functionality (of course this means that the duplicated code is out of sync everywhere). The majority of the code concerns the manipulation of POD-style classes, entirely obscuring the business logic in between; DAOs are horribly mutated. To illustrate, the following is a typical statement in the code: W.X[Y].Z = A.B.C[D] To summarize, the code is to OO programming (sorry, make that any programming) what Freddy was/is to the teens of Elm Street. Now I would like to gradually refactor this insanity away, but for that I must somehow get a hook into the code. So my question is, does anyone know of any systematic approach to this situation, or do I have to hack till I get the job done? Update: Just to add some additional information. 1. Ours is a new team, so we can't talk to anyone who is familiar. 2. I have switched to a new domain, and as a result I am not familiar with it. 3. We don't have any direct interaction with the customer."} {"_id": "228468", "title": "Engineering a better solution, coming from existing codebase", "text": "**The Code** I have high-business-value, daily-used-by-customer software that is written in PHP and spans approximately 600K lines of code.
The customer has long needed, wanted, and demanded new features and functionality. The time to have it done is _yesterday_. So, just write the new features, implement the new functionality, and deliver it to the customer, yes? Well, no; here are some problems that have been causing considerable pain to current developers: **Existing code-base is ... a big ball of mud.** Notable problems: * code is hairy - a single feature permeates everything, tracing code is a pain, and adding a feature may potentially impact everything else * there are no tests * mix of procedural and object-oriented code riddled with bad programming practices * files reaching 6000 lines of HTML, CSS, PHP, SQL, jQuery, JavaScript, comments * complete disregard/non-existence of MVC/separation-of-concerns patterns; code is intermixed * some business logic depends on volatile things that have no relation to the code (like database metadata) * hardcoded values, paths, and lack of configurability contribute to the lack of security of the current architecture * large repeated blocks of essentially the same code contribute to similar features working slightly differently; updating one does not update the other * code is slow, lacks documentation, etc.; quite a few other things could be done better **It works...** The good thing is that it works... The functionality that's there is reasonably worked out for actual business cases, but... going forward is painful. **The Problem** It is easier now (and faster) to add a new feature using the existing code style, mostly via a cut-n-paste-n-modify approach, thus perpetuating the badness, than it is to do rewrites of this thing using currently existing modern best practices. **Solution?** There is talk of rewriting the whole thing using one of the currently-leading frameworks (e.g. ZF2), but doing so means ignoring customer demands, taking a long time to build the software, and essentially creating new software (version 0.0.1) for the customer, with all the new-software bugs and without the mature-software feel and functionality. Another thought is to do some incremental development, i.e. when a new feature comes about, write it using the new approach. This is currently failing for the reason stated under the \"The Problem\" heading above. Yet another idea is to do some slow refactoring of the existing codebase... It might work for cleaning up things like MVC and a host of other issues, and it will take a long time, and it will essentially feel like unraveling a messed-up, tightly-wound, knotted ball of yarn. But doing this will not address things like unit testing, dependency injection, modern framework principles, and so on. So, in the end, new features are coming, code is being added, and the existing codebase is not getting any better. What might you suggest doing in this case?"} {"_id": "141968", "title": "What to do when tackling an unfamiliar Code base?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? When you join a pre-existing project that has a large code base, what are some of the things you do to get acclimatized quickly towards being able to code soundly in it?
How much time do you spend reading, making diagrams, etc., as opposed to diving right in and coding?"} {"_id": "114316", "title": "How should I go about learning a very large and complex application?", "text": "Being a young and fairly inexperienced developer recently employed by a \"real\" software company, I'd like some opinions and pointers on how to do the following: **Approaches on how to get familiar with a company's products, especially when you've no idea how it all works**. The company I'm at now has one HUGE product, continually evolving, and having been here for 2 weeks I've still no idea how it all sticks together, apart from a special kind of glue made from the tears and frustration of young developers; it's incredibly fragmented, and only 4 people in the office know the inner workings, all of them constantly busy. **Give useful input on the product:** Now, I know I'm just a kid in the company and I can't be expected to deliver anything ground-breaking in my first months, but you have to give as much as you're paid. I'm getting around twice as much as at my previous job, but objectively speaking I've not done anything to deserve it yet. Just sat around staring at my laptop screen trying to decipher code. Now you might say that this is what I'm here for, to start off with at least, but at my previous job I was something of a go-to guy, making decisions and contributing to almost every aspect of the company's daily running. While that was a lot of work, I enjoyed the feeling of being 'connected' to the inner workings of the company I have a stake in (a symbiotic relationship, if you will). _please edit this point down, I'll leave it up to you guys to decide what's important_ _I'll post more things as I think of them, have to get back to reading and writing something only vaguely resembling C#_"} {"_id": "170109", "title": "What kind of process should I use to learn a big system?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I just joined a new company and started to study one of their bigger systems. For me to be productive, I need to understand the entire system without too much help. The other programmers are really busy and don't have time to hold my hand. I used to use a mind map to draw a pictorial representation of the system. Any recommendations on the right approach to dissect a big program? It is a .NET program, by the way."} {"_id": "39386", "title": "How to estimate the length of a programming task", "text": "What process do you use to estimate how long a (significant) programming task will take? Do you use one or more of the following: * intuition/guessing * reference to similar tasks whose estimated/actual lengths you recorded * reference to the lengths of similar tasks others recorded * saying it will take the length of time allocated for it (by your manager)"} {"_id": "204639", "title": "How to properly take over a complex PHP project", "text": "I've been tasked to correct some issues on the backend of a large website project that has been outsourced for some time. The original developers and we are not on good terms, so it's not feasible to get them involved. The technology stack is as follows: * MySQL 5.x * PHP 5.3 (and a variety of supporting libraries) * Apache with mod_rewrite The problems I am facing are: * No documentation, not even comments * 4 index files in root, plus 2 main files in a combo of PHP and HTML, and 1 default.php; the referenced index file in .htaccess references their local test server.
* Duplicate file names/files * Atrocious file system layout (multiple js folders, multiple script folders, etc.) * Reliance on the original mysql functions (not mysqli or PDO) * Multiple frameworks, jQuery, various marketplace APIs, etc. It has been a highly frustrating several hours trying to sort out where to begin with this mess, let alone how to fix the items I need. So my question is: what would be the best place to start? Or, if this were dropped in your lap, how would you approach it? I know this is subjective, and opinions are what I am looking for, so any advice is appreciated."} {"_id": "114835", "title": "What is the best method to start understanding BIG project source code?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? Sometimes before developing new products we need to understand some existing products or existing source code. Sometimes, to write another small module of a big project, we need to understand that big source code. **In our case we need to study and understand a project with lots of files and folders. What is the easiest and most comfortable way to do it? (especially for C and C++ and under Linux)**"} {"_id": "126224", "title": "How to fix very bad code?", "text": "> **Possible Duplicates:** > How do you dive into large code bases? > How do I handle refactoring that takes longer than one sprint? I have 2 files that cover 5000 lines of code. I have been asked to fix THE problem. The code is object oriented but uses hundreds of flags, is multi-threaded, uses variable names without meaning, has recursion problems, etc. The problem is that rewriting the code will take a couple of months and there are no specifications. The specifications are encrypted in the code. So the big question is: do you have any ways or techniques to fix/understand such bad code?"} {"_id": "222641", "title": "How to organize legacy multi page web app with tons of Javascript spaghetti", "text": "I have inherited an application and need to reorganize it, and I hope I will be able to modularize the tons of JavaScript that is everywhere. It is a **multi-page webapp**. Each page has a script tag in the header, which currently contains a **DomReady handler** that initializes all user event handlers for that specific page, and also very often contains from **1 to 10 JavaScript functions that are specific** to this page (meaning they are not useful anywhere else in the app). Then there is a jQuery import on every page, plus an import of something like App.js, which is simply a **very large collection of global functions** that are useful on many different pages of the application. Pages of the application are quite different from one another, so it seems to make sense not to make every page load everything. I'm desperately trying to organize all this, after investigating modern solutions like RequireJS, Browserify, simple lightweight MVP frameworks like Riot.js, loose coupling through the Mediator pattern, and so on. A lot of these seem to target single-page web applications mainly. I'm having a hard time imagining how to reorganize this 40-page application where almost every page needs a **separate initialization, a few specific functions and a large number of general ones**... Also, the webapp can be installed on a user's own server, and **individual page scripts' behaviour must be customizable** through overriding some of the functions (or adding to the initialization, or to the modules if they become modules in the future).
What would a JS expert do?"} {"_id": "102649", "title": "If TDD is about design why do I need it?", "text": "TDD gurus more and more tell us that TDD is not about tests, it is about design. Yet I know some developers who create really great designs without TDD. Should they practice TDD then?"} {"_id": "222544", "title": "Emotional detachment from bad code", "text": "We all would like to work with good, easy-to-maintain code that follows best practices and design patterns. However, reality is far from ideal. After all, content on sites like The Daily WTF is not made up, and we all sometimes have to maintain code that is, to put it mildly, subpar in quality. I have a problem working with that code. Of course, working with bad code is difficult from a technical standpoint, but I have, well, an emotional problem. When I have to, say, fix a tiny bug in a poorly designed application, I tend to focus on how bad the overall application is instead of how to fix the simple bug. I think \"jeez, how can one be so stupid as to write this\". I treat work as punishment rather than challenge. It impedes my work productivity. How can I change my mindset to focus on the task instead of contemplating how bad the codebase I have to deal with is? In this question I assume that the obvious solution, i.e. fixing the bad code, is not possible/applicable: it works, it is too big, fixing it would take too much effort, it will soon be phased out, it was written by your boss, etc. TO CLARIFY: I do not want to use overall code quality to justify spoiling it further. I assume overall code quality is a widely known concern, but there just happen to be tasks with higher priority. Let's consider the following example: print \"<h1>\", sql(\"SELECT title FROM news WHERE id='$ID'\"), \"</h1>\" There are two problems: 1. Data and presentation layers are mixed 2. Code is vulnerable to SQL injection `#1` hurts the maintainability of the codebase, but does not cause any bugs or feature loss by itself. Moreover, fixing it, for example by moving to a proper MVC architecture, requires significant effort. `#2`, on the other hand, may lead to data loss and is trivial to repair (by escaping `$ID`). My question is: provided you are _assigned_ to fix `#2`, how do you focus on the task instead of wishing \"those guys writing Ruby on Rails apps are so happy\"?"} {"_id": "178856", "title": "Does TDD lead to the good design?", "text": "I'm in transition from the \"writing unit tests\" stage to TDD. I saw Johannes Brodwall create quite an acceptable design while avoiding any upfront architecture phase. I'll ask him soon whether it was real improvisation or he had some thoughts upfront. I also clearly understand that everyone has experience that keeps them from writing obviously bad designs. But after participating in a code retreat, I hardly believe that writing tests first could save us from mistakes. But I also believe that writing tests after the code will lead to mistakes much faster. So this late-night question asks people who have used TDD for a long time to share their experience with the results of designing without upfront thinking: do they really practice it and mostly get a suitable design? Or is this my limited understanding of TDD, and probably of agile?"} {"_id": "6395", "title": "How do you dive into large code bases?", "text": "What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like `grep`, `ctags`, unit tests, functional tests, class-diagram generators, call graphs, code metrics like `sloccount`, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the code base with which you worked. I realize that becoming acquainted with a code base is a process that happens over time, and familiarity can mean anything from \"I'm able to summarize the code\" to \"I can refactor and shrink it to 30% of the size\". But how to even begin?"} {"_id": "167492", "title": "How to justify rewriting/revamping legacy software in a business case?", "text": "I work for a great little software company which makes good revenue from our main software package. The problem for me is that it's almost unmaintainable. It's written in Delphi 7 (it has been upgraded from older versions over time) and has been worked on by a lot of developers over the past 20 or so years. The software lacks any meaningful architecture - there's no object orientation whatsoever, horrible amounts of cyclical dependencies, and an over-reliance on global variables, to name just a few things. Another huge thing for me is Delphi 7 does NOT support 64-bit. The problem here for me is that my management team don't care about technical things; they want to know why they should care. Obviously that's expected, so what I'm asking here is for some guidance, or tales, or pitfalls about this kind of thing. There are a few things I would love to include, namely, for me, the length of time taken to debug/write a feature in \"legacy\" code versus coherent, well-structured OO code. Does anyone know of any blog posts or the like where this is talked about? For us in the company this is a huge reason. Despite being decent developers, we feel like writing a new feature is just piling more rubbish on top.
On top of that, even for me, who has a decent level of understanding of the code, changing things is infuriating - a small change can have a ridiculous domino effect. Does anyone have any experiences they'd like to share?"} {"_id": "6268", "title": "When is a BIG Rewrite the answer?", "text": "I just read the question about the Big Rewrites and I remembered a question that I've been wanting answered myself. I have a horrible project passed down to me, written in old Java, using Struts 1.0, tables with inconsistent relationships or no relationships at all, and even tables without primary keys or with fields meant to be primary keys but that aren't unique at all. Somehow most of the app \"just works\". Most of the pages are reused (copy-pasted code) and hard-coded. Everyone who's ever worked on the project has cursed it in one form or the other. Now I had long considered proposing to upper management a total rewrite of this horrendous application. I'm slowly attempting it on personal time, but I really feel that this deserves some dedicated resources to make it happen. Having read the articles on big rewrites, I'm having second thoughts. And that's not good when I want to convince my superiors to support my rewrite. (I work in a fairly small company, so the proposal has the possibility of being approved.) TL;DR When is a big rewrite the answer, and what arguments can you use to support it?"} {"_id": "237891", "title": "Junior - How to understand the flow of the code?", "text": "_A junior asking..._ **First a rather large chunk of background** Assume you have done some programming and now you want to take the next step out into the real world, or at least learn more. I have no mentor, not yet anyway, but I have no intention of sitting at home twiddling my thumbs. My idea is to study the vast sea of open source. The problem is understanding a project. You can see the files and read the code line by line, but it is of little help since there are >1000 files with >1000 lines of code each. It is like standing in the middle of a forest at midnight. Where do you go? The Internet has a lot of ideas on how to get involved in open source. As a developer, it is assumed you have at least >2 years of experience. You read code like a newspaper, fluently and with great ease. This is not very helpful, since I don't have that kind of experience yet. My idea is to use UML to get started. This should show some interest too, and will generate good questions to ask. I will start at the top with the packages, the folders. For a given file, the include statement (or import or ...) provides all required dependencies. Dependencies can be divided into external and internal. The classes are of various types, like subclass, abstract, data, factory, template, and so on. This provides more relations between classes. Once you know something about the structure, you can start to understand the flow. Now you can start to see the forest. The UML diagram can be auto-generated of course, but I will learn so much more by doing this by hand. Learning is the key here. This is my idea of how to get started with an alien code base as a junior. I have just started, so I would be very happy if you shared your views, ideas and experience. **My question is:** _Imagine you stand in front of an alien code base; what do you do in order to understand the flow of the code?_ For me it is just a huge set of files; I can't really see the flow between them. I really would like to be a good developer and know I still have much to learn. Hope you understand. **Thanks !!
// Jim G**"} {"_id": "109262", "title": "When do you rebuild an application or keep on fixing the existing one", "text": "> **Possible Duplicates:** > When is a BIG Rewrite the answer? > Have you ever been involved in a BIG Rewrite? I am at a customer where I have been tasked to fix a number of issues they have in their existing systems. These systems are all quite old and were built with older technologies to really low development standards. All of this makes it quite difficult and frustrating to properly maintain these systems, because of the two thoughts that often come to mind while working on them: * A. It would have been so much easier/quicker to do this in one of the newer technologies. * B. I would not have coded this feature like this. There are many better ways to design this application to make it more maintainable and better performing. (Examples are bad OO design, code is often repeated, settings are hard-coded, etc.; the list is long.) So under the circumstances I try to make the best of it and fix everything to the best of my ability without complaining too much, but my question now is: how do you decide when it is time to rebuild a system with newer technology, versus keeping on plugging holes in an old one? And how do you make a case for rebuilding? How can you persuade the powers-that-be that it is necessary to rebuild the system?"} {"_id": "171295", "title": "How to manage and estimate unstructured requirements received from customers", "text": "A lot of the time during the bidding phase of a project I receive a software system's requirements from our potential customers in a very unstructured format from various sources [email, Word documents, Excel]. It is usually a bunch of \"product development\" guys from the customer's side who come up with these \"proposed solutions\" to the business problems they have. While they are the experts in the business domain, a lot of the time they don't have the solutions right. This results in * multiple versions of the same requirement * mixing up of two requirements into one * a few versions of the requirement later down the line, the requirements which were combined together get separated out again, each taking with it some of the new additions How do you work with such requirements coming in and sort them out into proper use cases before development begins? What tools can we use to track a particular requirement's history, from the first time it was conceived till the time it gets crystallized into a proper use case? Estimating work against requirements received in such a fashion is a nightmare, which ends up in mistakes in understanding the requirement correctly and estimating the effort against it correctly. Once we win the project, the customers have usually given some more thought to their requirements and have been able to articulate them properly. What happens in this case is that some functionality gets dropped, some gets enhanced, and some takes a whole new turn. This can basically nullify some of the work-item estimates that were made before the project was won. I would be interested in knowing if there is any system by which we can build a tree of a particular requirement and see how each branch resulted in a different estimate. Any tips, tools, or tricks to make this activity more manageable?
I'm just trying to get some insights from someone more experienced than I am in requirements management and effort estimation."} {"_id": "68399", "title": "How do you accurately create estimates for programming projects given to you?", "text": "I am looking for some insight from you smart people on SO. I'm a relatively new developer (3+ years of experience), primarily on the .NET framework, and I'm absolutely terrible at knowing how to properly estimate even the smallest of projects. I know that it's going to be a pretty big hindrance to my upward mobility, as well as my joy of programming, if I don't figure out good tools, methodologies, approaches, etc. Therefore, I am seeking your knowledge and wisdom on this subject. I have heard of SWAG and have most of the time employed that to give my estimates in the past, but I don't feel comfortable just \"feeling it out.\" Certainly, from my experience of doing similar projects I can recall how long something took me and give a relatively accurate estimate, but even that isn't always very accurate. Plus, it seems that inherent in every project are gotchas and unforeseeables that make estimating even more difficult. Thoughts?"} {"_id": "192089", "title": "Limit useless complexity in code", "text": "I have a question; to explain it, what better than an entirely fictional example? Let's say you are a young developer just employed at a firm. All data is stored in a huge database (let's say 500+ tables with billions of rows). Your boss asks you to write some consolidation queries. So, you start writing your query, and during the development process you learn of a lot of conditions to add to it. The result? Your query works pretty well; the result it returns is correct, but the query is slow and not very easy to understand. Why? Because the query, due to a lot of modifications, became very complicated. After checking it with a colleague who has worked at the firm for years, he wrote the same query as you, but... easier to understand and faster to execute. So, in fact, the main question is: how can we limit this useless complexity? How can we make code more logical? Actually, my initial idea was to draw activity diagrams of the code to see where the bottlenecks are, but I think a better approach is possible. Looking for books, links, ideas, approaches, methodologies..."} {"_id": "196847", "title": "Are there any well-known quantitative approaches to evaluate a particular design whether it satisfies or violates the SOLID design principles?", "text": "I designed an application framework by considering the SOLID design principles, supported by design patterns. However, I wonder if there are any automated tools or well-known approaches to evaluate whether the SOLID design principles are satisfied or violated in the proposed design?"} {"_id": "16326", "title": "How to learn to make better estimates?", "text": "I suck at estimates. When someone asks me how long something will take, I don't even dare to make a guess, since I will be completely off the mark. Usually I'm way too optimistic, and should probably multiply my guess by some large X factor... How can I learn to make better estimates? It's not taught at my uni, and even though we have deadlines for all lab assignments, I never think about how long something will actually take. Which I should.
{"_id": "202757", "title": "How do you get users to rank their software enhancement needs?", "text": "I've inherited a legacy software system, and have been tasked with performing usability and system upgrades. While there's nothing wrong with the system, from discussions with the users there are "small" usability issues that need to be addressed. At this stage I'm the lone developer on this system, and apart from testing I don't use the system at all, so it's difficult for me to know what issues may exist or are perceived to exist. I'm going to have some time to speak with them all and discuss what they perceive to be good/bad or indifferent about the system. Since it's essentially just me for the time being, my time is limited. So I was considering asking them to imagine that I'd only be able to do one change, have them all write privately what they'd want that one change to be, and then help them rank those, but I'm hoping for other tips as well. What techniques exist for getting users to explain their wants, needs, and requirements, while also having them rank those by importance or desirability?"} {"_id": "190048", "title": "Entropy in large scale software systems", "text": "I work on a fairly large software system, and over the years it has accumulated a lot of entropy. There is plenty of scope for refactoring, but there are always pressures to build the next features upon what's already there. This adds more entropy, because design choices for implementing new features are typically made by first accepting what's there and 'working around' some earlier, weaker design that is otherwise ripe for refactoring. What are some ways to manage this kind of complexity and build functionality without substantially weakening the structure of the system further? I know this is a very broad question and the approaches/solutions depend on the particular software system at hand, but I am still hoping there are some generic ways to manage the problem I am highlighting."} {"_id": "115839", "title": "How clean should new code be?", "text": "I'm the lead designer in our team, which means I'm responsible for the quality of the code: functionality, maintainability and readability. How clean should I require my team members' code to be if we are not short on time? In my view, we should clean up old code we modify; adding a line to a method means you clean up that method. But what about new code? We could make it sparkling clean, so that if another coder comes along tomorrow and makes a small modification, she doesn't have to clean it up at all. But that means if no one ever reads that piece of code again, we've wasted time making it sparkling clean. Should we aim for "almost clean" and then clean it up further on future visits? But that would mean not getting the full value for the understanding we had when we wrote it in the first place. Right now, I'm going for "sparkling clean", partly as a tutorial for my colleagues who are not as picky as I am."} {"_id": "171652", "title": "Quantifying the value of refactoring in commercial terms", "text": "Here is the classic scenario: the dev team builds a prototype. Business management likes it and puts it into production. The dev team now has to continue to deliver new features while at the same time paying down the technical debt accrued when the code base was a prototype. My question is this (forgive me, it's rather open-ended): how can the value of the refactoring work be quantified in commercial terms?
As developers we can clearly understand and communicate the value in technical terms, such as the removal of code duplication, the simplification of an object model, and so on. But this means little to an executive focused on the commercial elements. What will mean something to this executive is the dev team being able to deliver requirements at a faster velocity. Just making this statement without any metrics that clearly quantify return on investment (increased velocity in return for resources allocated to refactoring) carries little weight. I'm interested to hear from anyone who has had experience, positive or negative, in relation to the above. ----------------- EDIT ----------------- Thanks for the responses so far, all of which I think are good. What I want to develop is a metric that proves (or disproves!) all of these statements. A report that ties velocity to refactoring and shows a positive effect."} {"_id": "198244", "title": "How do I survive in a Waterfall world?", "text": "I currently work in a company where the Waterfall model is deeply ingrained. We are pushing Agile principles, but it's a long process. Meanwhile, I have to work in a Waterfall world with sequential project phases. My question is about the analysis phase. I must provide a list of tasks and how long they will take. What techniques can I use to improve the accuracy/precision of my estimates, while fulfilling the requirement of having a detailed estimate up front? One example I know of would be to use prototyping to gain more knowledge before making the estimate."} {"_id": "220235", "title": "Considerations before rewriting a software component from scratch?", "text": "A piece of software is a patchwork of old and undocumented efforts. There are no comments, no documentation, and the code is hairy -- it involves _Unix shell scripts that check for dummy files and then call SQL statements that call database procedures that modify data._ The original developers have left, and we score a solid 2 on the Joel Test, but I can raise it to at least 4 - yay... The code is reasonably error-free, but we constantly need to add new features to it, which is highly error-prone because of the state of the code, so the deadlines slip and the efforts rise. **We want to rewrite this software in order to reduce maintenance and development efforts.** As part of the rewrite, we will introduce specs, comments, test cases -- all things that we currently don't have. It'll still be a bit complex afterwards, but no more than necessary. This is not a BIG rewrite, because we are not going to switch languages or frameworks; we'll still need shell scripts (but fewer) and database procedures (but fewer). The implications are also fairly simple because we're in control of the installation sites and we can fully replace the old code with the new code at the flick of a switch. I know a rewrite is never good, but I think these are reasonable counterarguments. Nevertheless, my concern is the typical danger of introducing brand new bugs instead of the old known ones, and also the danger of not implementing certain details because nobody even knows they exist or are needed. **How can I approach this rewrite in an efficient manner?**"} {"_id": "238007", "title": "How long do you spend on analysis? Is this analysis paralysis?", "text": "I am not very good at estimating how long a piece of work will take to complete. I am guilty of putting my finger in the air and guessing. Usually things are later than expected; however, sometimes they are earlier than expected.
For example, a recent individual task was delivered in four months instead of five. However, I am usually late. I want to get better at this. If I say something will take four months, then I want it to take four months or less, not four and a half or five months. The next task I am approaching was moved to the top of the project plan within the last week because of external pressures (external to the department). The business area wants to know exactly how long it will take to complete. I find it difficult to estimate for the following reasons: 1. The system I develop is very complex. I inherited it a few years ago, and the previous developer did not follow good practice, e.g. SOLID, separating layers (business layer, etc.). 2. The business requirements are not particularly well understood by the business area, because the system is very complex and we have to take into account legislation, which is only guidance and completely open to interpretation. 3. We lack the appropriate tools, e.g. a testing framework, continuous integration, etc. 4. I am a sole developer, so I have no one to really turn to for guidance or help, e.g. with testing. Anyway, I have read questions like this: How to respond when you are asked for an estimate? and this: http://msdn.microsoft.com/en-GB/library/hh765979.aspx. My manager has suggested that I spend one month doing the analysis for the next piece of work (it is complex). I think I could do it in a week (I usually only spend a few days). What should the Agile output of an analysis be? How long do you typically spend on analysis?"} {"_id": "239018", "title": "How do you get relevant and useful information out of a client in regards to a job?", "text": "A brief background on this specific question: > After meeting with a client and hearing all of their requests, requirements, > and other comments about a specific software job, and planning out the software > functionality, design, and basic user experience, the client comes back with > a comment along the lines of, "That is not what we are looking for. You can't do what > we want to do." Now, I understand the concept that not everyone understands tech, and that some people are more visual than others and may need to see things before they can be sure about something. I also understand the process of revisions and other modifications to meet specific functionality demands or requirements that may only come up after you start putting things together. I mean, if everyone got it right in version 1.0, imagine how great a world this would be! **Given** : There is no magic formula for a guaranteed success rate when a lot of the work is artistic, in the sense that the same set of directions can be interpreted differently by different people. I'm aware that asking the **right** questions is important, and I have done some research about what questions others are asking in similar jobs. I also have previous experience with clients (this isn't my first job), and through it all, I have developed my own little system for getting this information -- which has had a 99% success rate up to this point. Usually if I don't get it right the first time, it's fixed during the revision and modification process. I know I'm not perfect, but I've never really had a client flat out refuse what they asked for...
So **how do you get relevant and useful information out of a client in regards to job requirements and specifics?**"} {"_id": "21538", "title": "Best supporting argument for refactoring", "text": "Currently I am working on a codebase best described as C code living in a C++ body. However, I haven't been able to convince the powers that be to refactor on the grounds of ease of maintenance. What, in your opinion, is the best argument for refactoring the code?"} {"_id": "188292", "title": "Should I try to persuade my manager that code tidying should take priority over meeting deadlines?", "text": "My manager has tight deadlines to meet. The project I am working on is currently on schedule, but I've noticed a couple of quite significant areas in the code that are really badly written. (Bits of code get called two or three times, when they only need to be called once.) The problem is, as far as my manager is concerned, the program works. As he sees it, there's no point making lengthy changes to code that will make us miss the release date, for no tangible improvement that he can see. He keeps saying "That's not a priority. It's working fine as it is." Should I keep trying to persuade him, or should I just do what he suggests and leave it? Why or why not? Note that how to persuade management has already been covered."} {"_id": "174770", "title": "What is the way to understand someone else's giant uncommented spaghetti code?", "text": "> **Possible Duplicate:** > I\u2019ve inherited 200K lines of spaghetti code \u2014 what now? I have recently been handed a giant multithreaded program with no comments and have been asked to understand what it does, and then to improve it (if possible). Are there some techniques which should be followed when we need to _understand_ someone else's code? Or do we start straight away from the first function call and track the subsequent function calls? C++ (with multi-threading) on Linux"} {"_id": "132977", "title": "How-to convince company to start documenting for legacy software", "text": "It has been less than a year since I joined my current company. The majority of their sales come from a single product that has been alive for the last 10 years. However, there is minimal (if any) documentation. Not only do the developers in the company struggle with the lack of documentation, but there is also a high amount of turnover, causing everyone to lose time. This is because experienced developers have left the company and there are fewer and fewer resources to communicate/brainstorm with. Without getting into too much detail, I suggested to the previous manager that there needs to be some sort of documentation (at least an architecture document) that outlines the product. I also suggested using Javadoc and other automatic documentation tools. These suggestions were met with slight smiles and statements of the sort "We do not have enough time", "We need short-term improvements right now", and even "The code itself should be the documentation" from the programmers themselves. I have already wasted enough time trying to find out whether what I needed for a given requirement/bug existed in this (really big) code base. I am looking for any suggestions that you might give regarding the need for documentation. Or, rather, whether this is a lost cause for this legacy system or organization."} {"_id": "199870", "title": "What exactly makes code \"clean\"?", "text": "I am an intern at a small software company, doing typical grunt work.
Many tasks that the actual developers are doing are above my head; however, I finally got to see some real "action" and write some code myself. I will admit the task was perhaps a bit too difficult for me, but I managed, and my code worked in the end. When my mentor was doing the code review, he said to me, "Man, this part is ugly." When I said, "Why? Is something wrong with it?" he replied, "No, the code works, but it's just not clean code." The concept of clean code is one I have run across a few times before, especially now that I get to hang around professional developers. But what exactly is it that makes code "clean"? In other words, what are the criteria? I have heard that in formal mathematics a "beautiful" proof is ideally one that is as clear and as short as possible and uses creative techniques. Many people know that in literature, "good" literature is writing that can express feelings effortlessly and elegantly, to the extent that one can "feel" it. But one can't "feel" code (at least I can't), and I think most would agree that short code is not necessarily the best (as it could be very inefficient), or even that the most efficient way is always preferred (as it could be very long and complex). So, what is it? Please try to enlighten me as to just what exactly makes code "clean"."} {"_id": "253622", "title": "How to tidy up a massive spaghetti project", "text": "I previously asked this on StackOverflow, but was advised it was a better fit for Programmers. I have recently begun working on a codebase (as a Java developer) with the following characteristics: * Roughly 800,000 lines of source (including whitespace and comments). Primarily Java, but also XML, PHP, shell scripts, JSP, JS, HTML, CSS * Frameworks: mainly Stripes, a bit of Struts, and some quite old customised Hibernate (and I expect lots of other bits and bobs) * No particular methodology for stories, sprints, etc. * No tests * No build process * No dependency management (everything is just set up in IDEA) * Some very old code dating back as far as 2002 * No coding standards * Masses of code redundancy, though given the lack of tests this is difficult to track down * Multiple modules and inter-dependencies * A massive 10,000-line XML file containing named Hibernate queries * 'Dump-all' classes at the top of the inheritance tree containing references to multiple services that may only be used in a handful of child classes (as an example, one class is extended by 573 classes; a particular service contained in it is used by only 3 children) * Demotivated developers I know this is a really open-ended question: where would you start in trying to tidy up this system?"} {"_id": "228702", "title": "Arguments to non-technical manager for program upgrade", "text": "**My situation:** I'm currently an intern in a big company, developing an automation tool (for company internal use only). There are about 30 people on the team, but only 3 of us are developers, all interns (master's students). The core of the software and the UI were made by experienced employees, but that was three years ago. **The problem:** A lot of the technologies used can be upgraded (.NET 3.5 to 4.0, using LINQ for database interactions, Forms to WPF ...), and the user interface is really not ergonomic. **What I want to do:** I would like to upgrade it and remake the UI, to improve performance, maintainability and ergonomics, but my managers - business analysts - don't feel the need to improve software that works fine.
**Why I can't:** They want us to work on new automation tasks; they think the upgrade would be a waste of time, and they are not sure we would succeed in doing everything I am aiming for. **My temporary solution:** As I am motivated and really think the upgrade is necessary, I spend some evenings and weekends developing the new version, hoping that once it's done I can show them and they will like it. But all this _working time_ is not paid, and I don't have that much of it, so I still hope to get a full working week at the office to work on it. **My question:** What is the best way to try to convince my managers that time spent developing the upgrade won't be a waste (knowing that the technical arguments won't be understood)? I also don't want them and my coworkers to mistake my motivation for an attempt to show off or for some other selfish intention."} {"_id": "146471", "title": "Understanding already existing complex code base", "text": "> **Possible Duplicate:** > What is the most effective way to add functionality to unfamiliar, > structurally unsound code? Until now, all I have worked on are Java projects that I built from scratch (mostly course projects and some hobby stuff). But now I have come across a huge code base of about 46,000 lines spread across around 200 classes. Additionally, there are around 10 dependent libraries. The problem is not only that I have never worked with someone else's code before, but also that I have never worked with such a huge code base. I am asked to completely understand the code base and suggest improvements to the existing design. Now, I am kind of stuck. **I start with some part of the code, and then by the time I reach some other part of the code, I have lost the working details of the previous part.** I could really use your suggestions and experiences to deal with this issue. How do I proceed/document the details of the classes to understand the code better? Edit: This is a university-level research project. Students have developed it over a couple of years, and the bad part is that the students who wrote it have already graduated. The existing documentation (only in the form of Javadocs, no UML stuff) is extensive but not helpful in terms of understanding the application design."} {"_id": "252588", "title": "Why are deadlines always so short?", "text": "I'm a junior developer in a small company (in a team of 2 developers). Every time we are asked to implement a new feature: * the deadline is set so that we just have time to do the development: there is no error margin (if something takes a little bit more time than expected, we are late); almost no time for tests (we are pretty sure there will be bugs); no time to refactor anything * we are asked to start the development before the features are completely specified, so new things to do show up, but the deadline stays as it is The result is that we deliver software that has bugs, and the technical debt keeps growing. We use technology that was considered recent in 2006 or so. There was one release that I remember very well. When we did a demonstration for the boss, he told us "What!?? You took two weeks to produce that?!" and all we could answer was "No, in fact it took us three weeks". I had a meeting with him and another junior developer (who quit the company because of said meeting!), and he basically told us that he was not happy because what we released was crap.
The technical debt and the need to refactor were so big in the parts that we had to modify (but we were told not to refactor, because we have no time) that: * we had to add a layer of bad code on top of the already bad code; * we spent a lot of time trying to understand completely incomprehensible code. I don't feel like I'm responsible for this bad release; I think that it's the boss's responsibility to hire good developers who, at least, understand the basic principles of OOP. But hiring juniors is so much less expensive... He is already saying that we are late on the development I'm working on at the moment. He spent more than a year "redesigning" a tool that was already present in our application, but now he wants us to implement it in two weeks. I personally think that if we want to do it correctly, we need at least one month, or maybe even 6 weeks. I don't know what to do, because I think that the boss thinks that we are just slow and ineffective. That's not the way I will get the raise I think I deserve. Why is it like that? It doesn't seem so hard to understand that if I need x days to complete a task, I can't do it in x/2 days, even if you say "please" and ask with a smile. It seems so obvious that the deadline will be missed and/or that the quality requirements won't be met. I don't need to know how I can explain to my boss that software development is a longer process than just clicking an icon, because I hope he already knows that."} {"_id": "95589", "title": "Boss doesn't believe my time estimate... advice/backup?", "text": "I'm working at a startup software company -- 3 developers, fewer than 15 employees including the CEO. We deal exclusively with Windows Mobile, the .NET CF, and passing the information gathered from our handheld application to and from our website. My director and I just had a meeting about an urgent project that hasn't begun yet, but that should probably get rolling soon if we are to meet delivery deadlines for a prospective but very powerful and influential client. He proceeded to explain the project to me as follows: * The new project will consist primarily of a .NET CF app that allows the client to conduct multiple soil samples over the course of a single session. * The area being sampled by the user should be shown on a map of appropriate size and scale. * The map should be dynamically split into grid squares of a size set by the user (in acres). * On each grid square, the user will have the ability to take notes and mark individual points via real-time GPS from the on-board receiver. * In addition, functionality should exist to help point the user to any grid square given their current position. * All information collected should be easily retrieved and read in a very user-friendly manner on the customer's handheld device -- a device which we provide -- at any time. * All information collected should be easily sent -- through ActiveSync, wirelessly, or what-have-you -- to our website, where it should be viewable/editable in an interface similar to that of the handheld device. With that description in mind I thought things over (albeit rather quickly). With our current staff size, limited budget, and limited resources, I projected that such an endeavor would take roughly 9 to 18 months. Forgive me if I'm off in left field; I'm a recent CS grad and pretty new to the "real" world. However, the project currently exists only in my director's head, with no design documentation or specs. My question here is, how far off am I, really?
Once again, my director -- who has no background in software or IT whatsoever, but is a subject matter expert insofar as soil sampling goes -- put the project at about a 3-month duration. Keep in mind that we're currently using an unsupported SDK for the rest of our GPS and GIS needs, and ESRI products are almost too far out of range for us. Current functionality in our other apps does give us a leg up, with the ability already in place for us to draw areas and polylines and plot points on a map. I'm just kind of confused/afraid here, wondering if I'm completely wrong or if I'm right but just without confidence. Any and all advice is appreciated. Thanks!"} {"_id": "101391", "title": "What to do with a not well organized application?", "text": "I'm a newly graduated programmer and got hired just before my graduation. In the office, I have been creating and revising modules of some applications developed by other programmers in our company. The problems I encountered with their applications are: 1. An unnormalized database at its best: they've broken all the rules of database normalization; Codd must be angry. 2. 50% of the content of the database is literally NULL (they should have default values, I swear). 3. One stored procedure is used for all database transactions, full of "if-else" statements. 4. They are reinventing application settings in .NET WinForms: they create their own file which contains everything they want. I think they are big fans of VB6, or maybe they never really studied and are guessing how to achieve things. 5. No error handling! Clients sometimes report "Exceptions" which they shouldn't be seeing. 6. Web files and Windows Forms are not organized or grouped according to their use. 7. Naming conventions: there is camel case, pure lower case, with and without underscores, and abbreviations. 8. Bad programming practices, like a database transaction in each iteration of a loop. 9. They developed the website with SQL injection in mind; they welcome it. 10. HTML elements were not used according to their semantics. 11. The CSS is not optimized for different browsers. 12. They include several versions of jQuery in one HTML page! . . . N. Many more! The worst thing is, I feel I am being blamed for its fragility. I mean, when I add code, there are times when it ends up in error, sometimes because they did not create constraints or they allowed duplicate data. The system is so fragile, and its parts so dependent on one another, that it's like walking through a field of landmines! (THIS HAPPENS SOMETIMES, THIS IS NOT THE REAL ISSUE) What should I do?"} {"_id": "205786", "title": "What should I do if I don't have any formal spec?", "text": "I was recently assigned a task, but its description is only a couple of words, something like "do it the way it is made there", with no actual spec attached. What is the best thing to do in those circumstances?"} {"_id": "199144", "title": "How to quickly understand a huge piece of code", "text": "(This is a general question, but I think it is important.) How do you quickly understand a huge piece of code, say, a project with tens of thousands of lines of code (written by other people)? Are there any useful source analysis tools (for example, a call graph visualizer) or IDEs that may be helpful? Any techniques are also welcome. Thanks."} {"_id": "255311", "title": "When systems get larger and larger, how do you keep a global understanding of your system?", "text": "Today I realized painfully that for some decisions you need a good overall understanding of the system.
Otherwise, the risk is too high that your assumptions turn out to be wrong. Imagine that you are a developer for an online web shop. To understand the system, you have to understand many connected subsystems, for example: 1. How to receive and process product information from various suppliers 2. How the customer can search for and order products on your web shop 3. How the orders are processed and managed by your customer service 4. How your SAP system handles the invoicing process 5. ... The larger the system becomes, the more you have to understand. If that knowledge is lacking, sub-optimal solutions were developed when specialized teams worked together, teams which only understood their own part of the system in detail. To deal with that problem, our company changed its strategy, so that one development team always has to be responsible for all aspects of a feature, even if it involves the complete process chain. (It is kind of a feature team, but not one assembled from teams working on different subsystems.) **What are effective strategies for developers to keep their system and operational process knowledge up to date?** I think good system documentation is key, but I'm afraid that there is a point where the human mind cannot scale as fast as the system evolves. At some point you have to simplify, but those simplified assumptions can turn out to be costly mistakes. When you have to implement and maintain the code, you just have to know the exact details. As a developer, I currently face a difficult conflict of interests: 1. I need to spend more time understanding our system and operational processes. 2. I need to develop and maintain our code. As time is limited, 2) mostly wins. The result is that I mostly gain deeper knowledge along the way, and some half-knowledge from casual conversations. Do you know how huge companies like Amazon solve that problem? I would assume that no single human is capable of understanding such a complex process and of contributing code to multiple subsystems at the same time."} {"_id": "206404", "title": "I have 200k lines of poorly designed code, will unit tests or integration tests be more valuable?", "text": "I've inherited a lot of poorly designed code; the code has no tests. I am putting tests in place before I attempt a major refactor, but I have run into a problem with my unit tests. The problem is, I will unit test a function, and then later decide that function is a liability. I will either get rid of the function or change its purpose. Either way, the original function is gone and the unit tests fail. Should I spend a lot of time putting unit tests into a design I will change dramatically? Won't those unit tests be useless after a major redesign? Would integration tests be more valuable? I can choose program features I know will not change and create some integration tests to ensure they, indeed, do not change. I might change all the code behind the feature, but the feature should still work, and the integration tests should still pass. I know there are a lot of questions on similar topics. I am asking specifically about the use and value of unit tests vs. integration tests in messy legacy code."}
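The feature-level tests the question describes can stay very small. As an editorial illustration (not from the original post), here is a minimal sketch using Python's built-in unittest; place_order and the shop module are hypothetical stand-ins for whatever stable, user-facing entry point the real system exposes:

    import unittest

    # Hypothetical entry point: the stable, user-visible feature whose
    # behavior must survive the redesign of everything behind it.
    from shop import place_order

    class PlaceOrderFeatureTest(unittest.TestCase):
        def test_confirmed_order_total_includes_shipping(self):
            # Exercise the feature end to end and assert only on observable
            # behavior, never on internals, so the test outlives the refactor.
            order = place_order(customer_id=42, items=[("widget", 2)])
            self.assertEqual(order.status, "confirmed")
            self.assertAlmostEqual(order.total, order.subtotal + order.shipping)

    if __name__ == "__main__":
        unittest.main()

Because the assertions touch only what a user of the feature could see, the code behind place_order can be gutted and rewritten while the test keeps its value.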
{"_id": "211470", "title": "Should We Code for Performance or Stability?", "text": "There is a point in time where you make design choices and debate them with management. In my case I have to debate my positions and design choices with senior management, but it is frustrating that management only strives for performance, while I think stability is a must and performance can be achieved later. E.g., we are facing a design choice about building a recovery mechanism due to the lack of transactionality in certain processes; i.e., we need to guarantee the transactionality of those processes, making them either complete fully or roll back the changes they made to the database. The current code makes this difficult because we are using stored procedures that manage their own transactions. This means that if the process calls 3 or 4 stored procedures, there are 3 or 4 transactions, and if we want the recovery process we need to roll back those changes (yes, they are already committed at that point, which means we need to make more transactions against the database in order to leave it in a consistent state, or at least somehow "ignore" those records). Of course, I wanted to remove the transactions from the stored procedures and commit the transaction in the code after the process ends, or roll back there if the process throws exceptions. The thing is that management thinks this approach will make the process slow and will also have a great impact on our code. I think this is correct, but I also think that building the rollback process ourselves is plainly reinventing the wheel and error-prone, and IMHO it will take too much time to stabilize. So, after the previous example, **What could be the most beneficial approach in such cases?** I mean, I want a win-win situation, but I think it is just plainly impossible to agree on this, because every time I want to talk about it I get responses like "there should be another way", "you should not tell me there is no way around it", "this is not feasible", "the performance will degrade", etc., and I think I will end up building this faux recovery process just to comply with management. OTOH, I could be wrong, and maybe I should just do what I am told without complaining."}
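To make the proposal in the question above concrete: once the stored procedures stop opening their own transactions, the calling code can wrap the whole process in a single transaction. A minimal sketch with a PEP 249 (Python DB-API) driver such as pyodbc; the connection string, procedure names, and order_id are hypothetical, and it assumes the procedures no longer commit internally:

    import pyodbc  # any PEP 249-style driver exposes the same commit/rollback calls

    order_id = 123                       # example input
    conn = pyodbc.connect("DSN=orders")  # hypothetical data source
    conn.autocommit = False              # one transaction spans every call below
    try:
        cur = conn.cursor()
        # Each procedure now runs inside the caller's transaction
        # instead of committing on its own.
        cur.execute("{CALL reserve_stock(?)}", (order_id,))
        cur.execute("{CALL charge_customer(?)}", (order_id,))
        cur.execute("{CALL schedule_shipment(?)}", (order_id,))
        conn.commit()                    # the process completed fully
    except Exception:
        conn.rollback()                  # any failure undoes all of the calls
        raise
    finally:
        conn.close()

Whether holding one longer transaction is actually slower than three short ones plus a hand-rolled compensation mechanism depends on lock contention, which can be measured rather than debated.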
{"_id": "197842", "title": "How much poor quality code should a junior put up with?", "text": "I am a junior developer who has been working at a new job for a few weeks. I am working with a large framework for scientific desktop software, adding pieces of functionality. While there are general aspects of the job that are appealing (such as the people and the pay), I have to admit that I find the work unrewarding and boring. I feel disappointed, because I do like programming in general. I'm concerned that if I "settle for" this job - or this position within this job - I may lose my love of programming. I think that I am getting frustrated because it is hard to get feedback on the correctness of the code I'm writing. It is hard to unit-test. Testing involves firing up the program and stepping through the code. I do not feel like I have a mentor. My impression of the code base is that it does not use much in the way of OOP principles. (Perhaps many of the people who have worked with it are more experienced as scientists than as developers.) Do you think I'm being soft/naive/idealistic and should continue to work with this code for a year or two (and do more personal projects on the side)? Or do you think that I should go with my gut feeling and try to find a new job where I can work with higher-quality (maybe TDD) code?"} {"_id": "210078", "title": "Best way to quickly explore/grok open source C/C++ projects?", "text": "I'm looking for a suggested workflow for quickly being able to download various C/C++ open source projects and then begin intelligently navigating the sources. "Intelligently" means being able to jump around to usages, definitions, and implementations of a particular symbol, both in the downloaded code base and in the headers of referenced libraries, perhaps installed in /usr/local or /System/Frameworks etc. This code jumping could be driven by etags, ctags, GNU Global, the Eclipse CDT indexer, Xcode's indexer -- I'm open to anything, but I have no idea what the best solution is for most cases out there. Shortly afterwards, I would like to begin stepping through unit tests, examples, and demos in the debugger so I can better grok how the code works. Obviously, this isn't going to be possible for every project out there, but most of the projects I am interested in are either autotools- or CMake-based. In general, I don't really care what IDE (emacs, vim, Xcode, Eclipse, etc.) I use, and I don't mind running a command-line debugger -- I just want to quickly download a C/C++ project, "fool around with it" at a source level, and then move on if it's not interesting. My question is particularly directed to the guy or gal super hacker who routinely explores several different large C/C++ projects every few days/weeks: how do YOU productively do this? (My background/experience level is that I know enough about C/C++ tooling to get things to build, or debug a broken build, on several different OSes, usually within a few minutes, but it can take hours before I get to the point where I can take a large project and step through examples in the debugger. I have a feeling this is because I try to use modern IDEs, but I'd be better off just using 'old school' tools. When dealing with JVM stuff, it is usually pretty quick to get to a place where one can get IntelliJ to grok the code and start debugging -- I'd like to replicate this as much as possible in C/C++.) Thanks!"} {"_id": "201350", "title": "How to document legacy code (shell scripts)?", "text": "I got involved in a project where we are taking over a bunch of legacy code. The code is basically shell scripts and PL/SQL packages/procedures/functions. There is no documentation on how the code works, and there are also no comments in the shell scripts. I was thinking that the first thing to do is to get an understanding of the dependencies in the code, how the scripts call each other, and so on. Do you have any suggestions on the best way to document the dependencies between the scripts and the PL/SQL code? What would be the most suitable tool for this (for diagrams, perhaps)?"} {"_id": "132926", "title": "How does one determine whether or not to rewrite poorly-designed code?", "text": "> **Possible Duplicate:** > When is a BIG Rewrite the answer? I'm on a small team that's been handed a poorly-written, half-finished 2D Java game. Our objective is to do as much as we can to make it better in about 11 weeks. I'm pretty sure the code is not maintainable, and I need to decide if I should pitch the idea of a rewrite (a near rewrite, anyway). For example, the Main class, which is an applet, has most of the game logic just thrown into it.
Keyboard and mouse events are handled with huge if/else blocks, none of it is commented (the only comments are on sections of code that have been commented out), and all of the drawing and animation code is just in one huge chunk. Images are loaded in a separate class that just goes through each file, each on a separate line, and reads the image into a static Image variable. This all seems like poor design. We have until the end of the semester, roughly 11 weeks, to make the game better than it is right now. It's playable, but severely broken in spots. I've currently been assigned a small feature to implement: a clickable main menu instead of one that's controllable by keyboard only. On one hand I know I'm capable of implementing this, but on the other I'm afraid to try to add new features, because a) the more stuff we add, the more likely it is that the program will break because of the bad design choices, and b) the more I'll be adding to the mess that's already there. I took the weekend to try to duplicate the basic game functionality with a different design (to see if it could be done quickly enough). I'm able to load and cache sprites, create game entities and move them, and keep track of the game's state to determine which keyboard input to handle and what to draw on-screen. A spritesheet loader was next on my list (I found out that the game art isn't in sprite sheets; each image is in a separate file). A trusted friend said that it might be better to maintain the old code, but looking at it, it's too brittle. We could refactor as we go, but I think that would be difficult too. Our group is meeting this Friday. I planned to discuss the idea of an overhaul and present what I've been working on, but now I'm not so sure. I feel like it's a simple enough 2D Java game that once we put together a solid base it would be a lot easier to add features, but we'd have to redesign the base code pretty quickly, probably in two to four weeks or thereabouts. I'd like to think that we can do it that fast, but I'm not completely sure. I guess you could reduce my question down to this: how do you best determine when to maintain poorly-designed code and when to spend some time starting almost completely from scratch?"} {"_id": "87114", "title": "Estimating Coding/Development efforts?", "text": "> **Possible Duplicate:** > How possible is it to estimate time for programming projects? We have designed a user interface (UI) in HTML for an intranet application. It took almost 30 days to make it. How should we estimate the development (coding) effort for it? Is there some ratio we can use to estimate? It need not be exact, but some estimate would help."} {"_id": "163455", "title": "What to do when you inherit an unmaintainable codebase?", "text": "> **Possible Duplicate:** > Techniques to re-factor garbage and maintain sanity? > I've inherited 200K lines of spaghetti code -- what now? I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase.
Since I've joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment, it still doesn't make things as clear as it could. If the system were better organized and documented, I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without having unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could possibly be unit tested anyway, given how it's implemented. There also never seems to be enough time to get things done, even with 3 developers and 1 junior developer. With one developer and one junior, neither of whom had significant input into the early design of the system, I don't see how we could possibly get anything done while keeping the current system working, implementing new features as needed, and developing a better-organized replacement for the current codebase. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it were just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!"} {"_id": "234266", "title": "Familiarize with unknown source code", "text": "I have to continue feature development and issue fixing on a halfway completed code base. There is no documentation, and all the developers have left the company. The technology stack is somewhat familiar to me (Spring, Hibernate, Web Services (REST, SOAP), MySQL). Also, there are non-descriptive JIRA items that the team worked on over 3 years. It seems the code has been implemented in a consistent way, with layering, an Ant file per project, and DAO classes. I would like to get feedback from all our experts. What should be the approach to getting familiar with the project? Do you recommend using profilers or bug-tracking tools like Sonar to get some hints? Any other approach I can try out?"} {"_id": "250746", "title": "Can I refactor "safely" without specs?", "text": "I have inherited a legacy web application, many years old, which: * Does not make use of object-oriented principles, even though the language would permit it * Has no unit tests, nor any sort of test suite, automated or not * Generally ignores various general best practices, making the code very hard to read and maintain * Splits its workings (including code for business rules) across the web application, database views and stored procedures.
Some of the code is even externalized, in that outside applications (not under my or my team's control) often read _or write_ directly into the database * Has only the barest documentation (including some I've created) * Has no documented technical or functional specs to speak of either * Has changed rules so many times and so fast that no one truly knows how it's supposed to work anymore. Rules are stacked on as missing features or specific cases are discovered on a daily basis. * Happens to be the most business-critical application of the company * Is still constantly evolving Refactoring almost always comes with the main suggestion of adding unit tests to make sure you don't break anything. How do you do this when you do not even know how the thing is supposed to work (and no one else can tell you _for sure_ )? Is this a case where you'd need to go back to square one and slowly, iteratively, as new features are requested, ask for specs focused on the modified section and _then_ refactor what's there? Is there any other way to deal with this situation? This question focuses mostly on the lack of specs, in opposition to the refactoring safety rule of creating unit tests to ensure the rules are respected throughout refactoring. That said, a few more bits on the situation beyond the lack of specs: * The "best" programming structure found in the code so far is the function. They tend not to be self-contained and call on many global variables (even local variables were erroneously set as global variables, risking unexpected behavior). I get the feeling the first step might be to refactor _this_ to even allow unit testing. * The issues mentioned apply to the whole application, the database procedures and even sister applications. Everything is intermingled together and almost never through interfaces. It's still possible to find * Code tends to be duplicated in various places. It happens with business-rules code, meaning you may get different results depending on _where_ in the application you call up a feature. Obvious maintenance complexity and risk."} {"_id": "245426", "title": "Working with a large, messy object", "text": "I have been handed a very cluttered "One Ring" object (one object to rule them all). The OR class has 40 fields. These fields map to 16 different objects (the OR has all the fields from the 16 objects concatenated together; some of the fields are common to all objects, some are unique to individual objects [all 16 objects are descended from a common parent]). The OR points to a catch-all table in a denormalized SQL database. I've been asked to write an adapter for this that will take random instances of the 16 objects, map the fields from a given object to an instance of the OR, and send it on its way. Oy. Changing the back end or the code is not an option (I tried). As I'm sure is no surprise, this beast is a nasty thing to manage. I would love to hear some thoughts about ways to help make working with this a bit more manageable. I'm primarily concerned with code maintenance: comprehension and readability. I know whatever gets implemented will be far from perfect, because I'm basically trying to make garbage look pretty. That said, for now, I have to live with it, so I'm trying to make my side of the train wreck a little bit more livable."}
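One way to keep the adapter from the question above readable, offered as an editorial sketch rather than anything from the original post: make each of the 16 mappings a small piece of data and let a single generic function do the copying. The class and field names below are invented stand-ins:

    # Hypothetical field maps: one small dict per source type,
    # so each of the 16 mappings is data instead of repeated code.
    FIELD_MAPS = {
        "Invoice":  {"number": "or_field_01", "total": "or_field_02"},
        "Customer": {"name":   "or_field_01", "email": "or_field_07"},
        # ... one entry for each of the 16 source types
    }

    def to_one_ring(source, one_ring):
        """Copy the mapped fields from any of the 16 source objects onto the OR instance."""
        mapping = FIELD_MAPS[type(source).__name__]
        for src_attr, or_attr in mapping.items():
            setattr(one_ring, or_attr, getattr(source, src_attr))
        return one_ring

Keeping the maps in one table also gives reviewers a single place to check which OR field each source field lands in.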
{"_id": "215935", "title": "How to educate business managers on the complexity of adding new features?", "text": "We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features _ad infinitum_ and the cost of each feature will remain linear. I have repeatedly tried to explain to them that it doesn't work that way - that the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical and have no idea what goes into writing software. Is there a way that I can explain this using business language that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?"} {"_id": "233169", "title": "Next steps for developing new product", "text": "I was hired about a year ago as the lead (well, really the only) developer on a new project/product we will call product "B". Product B was designed to pursue a new market for the company. This company has had great success with a similar product for a different market. We will call this other product "A". \u201cA\u201d is almost 5 years old now and is reliable. The two markets don\u2019t necessarily overlap, but a lot of the core functionality in A could be re-purposed to work for B. My deadline to pursue this new market for product B was pretty aggressive. We had to demonstrate certain capabilities to potential clients quickly and show them that we were a serious competitor in the new market. To expedite the development, we took the shortest route to creating a new product. We started with a deep copy (a branch) of the company's product A and re-purposed it as B. This approach made a lot of sense given the time constraints, as we inherited a lot of overlapping core functionality and infrastructure for free. We removed a bunch of features and options that didn't apply, left some things that carried over, and added a bunch of new features that were needed to secure these customers. It wasn't a perfect transition, but it was good enough. Product B has been around for about a year now, and things are looking good, as we have secured two customers with several more potential customers on the horizon. What are the next steps? From a \u201cbest practices\u201d perspective, it doesn't seem like a good strategy to continue developing product B as a copy of product A. It\u2019s not a solid foundation to build a new product on. We accomplished our immediate goal by creating a \u201cproduct\u201d and demonstrating it to potential clients. Now, shouldn't we take a step back, find out what the users really want, and refine our design? We really don\u2019t have a \u201cnew product\u201d. We created some \u201cvaporware\u201d for a couple of demo installs.
Unfortunately, I get the impression from management that they think we have a new product to develop and sell, and that there is no time to step back, refactor and refine. Why would we spend time and money redesigning when we already have something we can use and sell? Here are my issues: * Since it was created from a copy of A, the product B code base is littered with artifacts and \u201ctechnical debt\u201d from product A. Some of this code is no longer reachable. Some of this code is reachable, but the business logic may not make sense for product B. For example, there may be lookups against empty tables in the database or checks against things that don\u2019t exist. This type of logic not only hurts performance, but it is also a maintenance nightmare. I\u2019m going to estimate that we\u2019re actually using 35% of product A\u2019s code base, and the rest is baggage. * Management thinks it was smart to create B from a copy of A: \u201cIt will save money since code can be reused between the products.\u201d If a bug is fixed in A, why can\u2019t it be merged into B? Or if I develop a new feature in B, why can\u2019t it be merged into A? This argument is flawed. Although B was created from A, that was over a year ago, and A has a dedicated team of 4 developers. I am alone on product B. The code branch for A continues to diverge from B. Now it\u2019s very difficult to merge anything. * Finally, I feel as if we've put unnecessary constraints on ourselves by starting with a product that's already 5 years old. Some aspects were designed very poorly and will no doubt create the same problems with a new user base. Let\u2019s take our lessons learned and use this opportunity to improve the design instead of copying crap. Also, let\u2019s utilize new technologies where it makes sense - a lot of things have changed in 5 years! How can I convince management that we need to refine our product \u2013 most likely from the ground up? My boss thinks in terms of time and dollars, and I feel that I am going to lose this debate if I\u2019m not careful. Can this type of effort be quantified and put into management terms? I've been reading up on \u201cemergent design\u201d, and I\u2019m trying to approach my argument from the angle that we don\u2019t have to deliver everything all at once. Let\u2019s focus on small pieces of working code with the most business value."} {"_id": "236482", "title": "How do you write good software while learning a language?", "text": "Quick background: I'm in an advanced C++ course for a study abroad program. The problem is that I don't have any background in C++. I have a modest C background, but I'm starting to think I have more holes in my knowledge than I knew about. My problem is that I'm writing really shitty code right now, but the university has an automated testing system that always seems to stump me in some way: does your code pass all the automated tests (no points if not), did you use the naive solution in terms of time or space (you get fewer points for this), do you have ANY memory errors (usually no points if so). So, my question is: what do you do during development that leads to verifiable, integrated results? Is there a better way to learn and develop so I'm not always cornering myself with features of C/C++?
I'm looking to develop a method of software engineering for myself in which I can get full points for these homework assignments in less than 10 hours per week (I'm currently spending 15 hours and not getting full points)."} {"_id": "188933", "title": "What can I do to maintain respect for a poorly written codebase?", "text": "In my job I have to maintain a poorly written codebase which is hard to understand, has tons of comments that are just plain wrong, has a bunch of weird decision-making going on in it, and a whole lot more. I primarily do bugfixing, which ranges from finding that little comment somewhere in the code that makes some random part not work, to understanding complex circular dependencies with some kind of singleton'a'palooza going on; occasionally I actually get to write some real code, which is when I'm the happiest. But I'm concerned about the bugfixing, which I must confess many times ends up being sloppy patchwork, because I don't have the time or the mental fortitude to look at the horrendous code anymore. Now, the biggest problem this project has is not that I'm just a whiny guy; I'm most certainly not alone in having the above-mentioned sentiments about the codebase. The biggest problem is probably the cost of maintaining it... But I digress. What I'm wondering is whether there are some practices that my colleagues and I can try to use to regain some respect for the codebase we've inherited and are working on, so that we don't walk down the path of creating more code rot?"} {"_id": "170894", "title": "Inheriting projects - General Rules?", "text": "> **Possible Duplicate:** > When is a BIG Rewrite the answer? > Software rewriting alternatives > Are there any actual case studies on rewrites of software success/failure > rates? > When should you rewrite? > We're not a software company. Is a complete re-write still a bad idea? > Have you ever been involved in a BIG Rewrite? This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?"} {"_id": "180186", "title": "Refactoring several huge C++ classes / methods. How to start?", "text": "> **Possible Duplicate:** > I\u2019ve inherited 200K lines of spaghetti code \u2014 what now? I'm dealing with legacy code. It contains some BIG classes (line count 8000+) and some BIG methods (line count 3000+). For one of these methods I wrote a unit test that covers at least some code. This function has no specific structure; it is a mess of nested loops and conditions.
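A characterization (golden master) test can widen that safety net before any extraction starts: record what the method does today for a spread of inputs, bugs included, and lock that in. This is an editorial sketch, in Python for brevity since the idea is language-independent; big_legacy_method and the sample inputs are hypothetical stand-ins for the real C++ function, which would be exercised through a thin test harness or binding, and the return values are assumed to be JSON-serializable:

    import json

    # Hypothetical handle to the function under test.
    from legacy import big_legacy_method

    SAMPLES = [(n, flag) for n in range(50) for flag in (True, False)]

    def record_golden_master(path="golden.json"):
        # Run once against the untouched code to capture current behavior.
        results = {repr(args): big_legacy_method(*args) for args in SAMPLES}
        with open(path, "w") as f:
            json.dump(results, f)

    def test_behavior_unchanged(path="golden.json"):
        # Run after every extraction step: any difference means the
        # refactoring changed observable behavior.
        with open(path) as f:
            golden = json.load(f)
        for args in SAMPLES:
            assert big_legacy_method(*args) == golden[repr(args)], args

The test pins down observed rather than intended behavior, which is exactly what is available when no spec survives.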
I'd like to refactor, but still I have no idea how to start. I actually started by extracting one single function. In the end it had to have 21 parameters :-/ (see the parameter-object sketch below). Should I ... 1\\. continue extracting awful functions, knowing that in the long run I will succeed? 2\\. start extracting even bigger functions? So I expose the structure of this monster and then I can start the real refactoring. 3\\. only extract small \"good\" functions (and / or classes) just because I am a helluva clean coder? 4\\. do something completely different?"} {"_id": "225955", "title": "How to manage stress from ongoing difficult maintenance?", "text": "I recently signed on to a contract where I am supporting an aging legacy system. The system is a collage of .net and VBScript components. Part of it operates as a web application; part of it operates as a Windows Service. As anyone on this site can tell you, it is possible to write great code with legacy technology and it is possible to write terrible code in new technology. The project I'm working on is some of the worst spaghetti code I've ever seen. It is over ten years old, and has been developed, supported, and maintained by a half-dozen unrelated companies. There are few code comments, and it's like each developer didn't know what the previous developer was doing. Yet somehow, the program works most of the time and is a critical part of a big business. There are hundreds of thousands of lines of code in the system, and here's how it works: Each week, my client gives me several bug reports to review at the beginning of the week. I am told specifically not to try to improve the design of the system, but to make the smallest change possible to fix each bug. Here is what bothers me: First of all, it is very taxing just to try to read the code and discern what it does. There are iframes galore and very odd ways of interfacing browser to server and server to DB. It is common for it to take days to find the corner of the codebase where a bug is caused, and if the fix isn't trivial, it can take days more to understand the related parts of the codebase. It feels like pushing a rock up a hill for a living. The work is very exhausting mentally. It is difficult to maintain focus, and then I feel additionally stressed by wanting to perform my best while being tired and unfocused. To top it all off, I know the whole time that I am working on a bug that as soon as it's done, I am to start all over with another bug. The work is very remunerative and I would like to fit myself better to it. How do some of the more experienced developers handle this sort of stress? **Edit:** I think that Robert Harvey brought up a good point in the comments below: that I could be stressing myself with my own expectations. Much of my past experience has been with green-field development, so perhaps I am simply not used to the slower pace of maintenance. Perhaps this maintenance requires a different mindset and way of thinking. Also, I take great exception to the idea that I'm inappropriately asking for help with my mental health. I do understand that psychiatry is now considered a medical discipline, but that doesn't mean that any and all questions about managing stress and being happy should be handled only by mental health professionals. That's just silly."} {"_id": "204872", "title": "How to restructure Python frameworks", "text": "I just joined a group of five developers (non-professionals) working on a medium-sized Python framework (> 50 modules, > 10.000 lines of code).
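Returning to the 21-parameter extraction in the C++ refactoring question above: the usual next move after Extract Method produces such a signature is Fowler's Introduce Parameter Object, which bundles values that always travel together into a named type and often reveals the hidden structure of the monster method. A minimal sketch of the idea, in C# for consistency with the other samples in this document (the refactoring is identical in C++, and every identifier below is invented):

    // Introduce Parameter Object: values that always travel together get a
    // named type. All identifiers here are invented for illustration.
    public sealed class SimulationContext
    {
        public double TimeStep { get; set; }
        public double Tolerance { get; set; }
        public int MaxIterations { get; set; }
        // ...the rest of the 21 values, grouped and named
    }

    public static class Solver
    {
        // Before: ComputeStep(double timeStep, double tolerance,
        //                     int maxIterations, /* ...18 more... */)
        public static void ComputeStep(SimulationContext context)
        {
            // The body stays the same; call sites now build one context
            // object, and related parameters finally live under one name.
        }
    }

The win is not just the shorter signature: the new type becomes a natural home for the next helper you extract.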
The project has no documentation whatsoever and there are no coding conventions of any kind. My job is to 1. get an overview of what the framework does (theoretical physics calculations), 2. write detailed documentation for the current code, 3. define development guidelines, 4. design a new software architecture, 5. perform code refactoring. Unfortunately, I'm not used to programming languages with a dynamic type system in such large projects (until now I've used Java for such purposes and Python for shorter scripts). Speaking in terms of anti-patterns, the code is pure spaghetti code, contains a god class and lots of copy-and-paste programming. In my opinion, the code has degenerated to a point where it is unmaintainable. However, rewriting the entire code is not an option. In \"AntiPatterns: Refactoring Software, Architecture, and Projects in Crisis\" William Brown et al. suggest the following steps for code refactoring, when adding new code in such software projects: 1. Control access to member variables (in Python I would use properties). 2. Avoid duplicated code. 3. Reorder function arguments to improve consistency. 4. Remove code that is, or may become, inaccessible. 5. Rename classes, functions, data types to improve consistency and set standards. Step 1 seems to be somehow un-Pythonic to me and steps 2-5 are trivial. I have given up trying to figure out the big picture. Instead, I am approaching the issues one at a time. However, I am constantly stuck trying to resolve the insane amount of object coupling. I am uncertain if my approach is a good idea at all, as I have no experience in software restructuring. It is also hard to ask a specific question, as I am stuck at the beginning of my work. I would appreciate it if someone could provide insight into best practices in this topic. * * * **Edit:** This question now focuses on the approach of restructuring Python frameworks instead of simply getting an overview."} {"_id": "199975", "title": "How Much Of A (Broken) Legacy Framework To Keep", "text": "I've inherited a hosted system (system \"A\") which can be used to manage products, inventory, and orders, and can send those products to various third parties. Quite simply, system \"A\" doesn't work. The product and inventory systems are slow, convoluted, and buggy. The third-party integration doesn't work at all. The code is an unholy, spaghettied mess, so fixing things is not simple. My task is to try and salvage the system into something usable, and my original plan was to just refactor, refactor, and refactor some more until the system worked. However, my company also has a separate system (system \"B\") that we use to build e-commerce websites. Among other things, system \"B\" can manage products, inventory, and orders as well - just in a lesser capacity in some cases. System \"B\" is also constantly being worked on and updated by a team. My new plan is to essentially throw out system \"A\", and re-create it based on system \"B\". Since my refactors would have eventually restructured system \"A\" entirely, I figure I can just save time by starting over with a different, existing framework. However, if I go that route, I would still have to re-code the third-party integration, and minor functionality that system \"A\" has and system \"B\" does not. Right now, system \"A\" is basically defunct - nobody is really using it, so I do have the freedom of a slightly extended rewrite period if I choose.
Is this a good idea, or should I stick with the original plan of refactoring system \"A\"?"} {"_id": "134855", "title": "What characteristics or features make code maintainable?", "text": "I used to think I knew what this was, until I really started thinking about it... \"maintainable\"... what exactly makes code maintainable? To me, if code must be maintainable, that means we can expect to revisit the code to make some sort of change to it in the future. Code is always \"changeable\" no matter what state it is in. So does this mean code needs to be easy to change? However, if our code were scalable/extensible, there would be no need to directly change it, because it would \"change\" (adapt) for free and automatically. I've also seen code maintainability used interchangeably with coding standards. Using a consistent coding pattern or style, does this really make code more maintainable? More readable and more consistent? Sure, but how does this improve the maintainability? The more I try to think about this, the more confused I get. Anyone have a proper explanation?"} {"_id": "209124", "title": "First step in analysing a proposed system", "text": "A client asked us to automate his offline paper-based processes. He sent us a copy of the physical forms that he uses. Our team met and discussed how the navigation should work and what major components would exist in the UI. That's it. Problem is, we don't have actual \"written\" requirements. With the lack of an actual business/system analyst in our company, what are the steps that we should take to reach the point where we are ready to start developing the system?"} {"_id": "205760", "title": "What's a good way to manage long pieces of code in files?", "text": "I am a web developer and at the moment am finding it hard to cope with long, undocumented code written by previous developers in the organisation I work for. With the deadline gun always pointed at my forehead, it's doing my head in. I use Aptana Studio's search feature for some quick wins, but it is becoming overwhelming to the point that I do not want to work with these files anymore. Is there any quick/proven approach that I can take to help me with a scenario like this?"} {"_id": "39411", "title": "What can I do to get better at estimating how long projects are going to take?", "text": "I don't want to make life hard for management. I really don't. They're nice enough guys, but every time I am assigned a new project or task and get asked \"how long do you think it will take to do this\" I end up sputtering off ridiculous timeframes: \"between a day and three weeks\". I would like to believe that it's not entirely my fault: I am the company's sole programmer, I am relatively new to proper development (Was it only six months ago that I wrote my first unit test? sigh...), and I am working with a code base that at times is downright absurd. So I would like some advice. Obviously, experience is the biggest thing I lack, but anything that would make me better would be greatly appreciated. I am looking for reading material, methodologies, maybe even actual tools. Any way that I can give my boss more accurate information without having to sit down and design the darn thing first is appreciated. OK, magic Stack Overflow genie, what have you got for me? **EDIT:** @Vaibhav and others suggesting I take time to research and sketch out the system: I agree with you in principle, but how do you balance that with real-world constraints?
When you're a one-man show or even part of a small team, \"I will need 2 days to build an estimate\" is a real deterrent when you can fight 4 fires in the time it takes to get a simple estimate."} {"_id": "198124", "title": "Document existing Visual Studio Project", "text": "I am a small business owner and my sole developer quit. Cold. I am not a coder. He developed an existing application that works: .NET, C#, JavaScript, SQL, Visual Studio based; 250,000 lines of code. It needs to evolve to v2.0. I have zero documentation. Is there a product that can read the code and generate documentation that would significantly reduce the learning curve for my new development team? What is it? If not, suggestions?"} {"_id": "177624", "title": "quick approach to migrate classic asp project to asp.net", "text": "Recently we got a requirement for converting a classic asp project to asp.net. This one is really a very old project, created around 2002/2003. It consists of around 50 asp pages. I found very little documentation for this project: FSD and design documents for only a few modules. Just giving this project a quick look makes my head start to hurt. It is really a mess. I checked the records and found that none of the developers who worked on this project work for the company anymore. My real pain is that this is an urgent requirement and I have to provide an estimated deadline to my supervisor. I found a similar question, classic-asp-to-asp-net, but I need some more insight on how to convert this classic asp project to asp.net in the quickest possible way."} {"_id": "238597", "title": "Previous programmer died unexpectedly; how do I pick up where he left off?", "text": "I have recently taken a job finishing the development of a .Net application. The delivery of the application is two years behind schedule. The previous developer died before I had a chance to meet him. I have been given copies of directories from the deceased's laptops. There are many directories of code (including seemingly duplicate directories), two SQL Server databases, and some Visual Studio project files. A quick wc shows about 300,000 lines of .Net code. There is no source control, no documentation, no licenses for third party controls, no unit tests, no build scripts, no release scripts, no install, and no real software development process to speak of. Maybe some of these things did exist, but I can't find them. I am an experienced software developer, but I have never been in a situation quite like this. Lastly, I have a budget to buy tools as necessary, but not more staff. _What would be your 90-day plan to get this project moving again?_ Obviously I have months or years of work ahead of me; _what should be my priorities?_"} {"_id": "41409", "title": "Why does TDD work?", "text": "Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons: 1. The \"write test + refactor till pass\" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well thought-out architecture.
The \"refactor till pass\" guideline is often taken as a mandate to forget architectural design and do _whatever is necessary_ to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good \"ilities\" in the outcomes, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does. 2. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false safe feelings that an \"all green\" outcomes produces to many people has been reported in civil and aerospace industries as extremely dangerous, because it may be interepreted as \"the system is fine\", when it really means \"the system is as good as our testing strategy\". Often, the testing strategy is not checked. Or, who tests the tests? In summary, I am more concerned about the \"driven\" bit in TDD than about the \"test\" bit. Testing is perfectly OK; what I don't get is driving the design by doing it. I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you."} {"_id": "177058", "title": "How do I maintain a really poorly written code base?", "text": "> **Possible Duplicate:** > I\u2019ve inherited 200K lines of spaghetti code \u2014 what now? Recently I got hired to work on existing web application because of NDA I'm not at liberty to disclose any details but this application is working online in sort of a beta testing stage before official launch. We have a few hundred users right now but this number is supposed to significantly increase after official launch. The application is written in PHP (but it is irrelevant to my question) and is running on a dual xeon processor standalone server with severe performance problems. I have seen a lot of bad PHP code but this really sets new standards, especially knowing how much time and money was invested in developing it. * it is as badly coded as possible there is PHP, HTML, SQL mixed together and code is repeated whenever it is necessary (especially SQL queries). there are not any functions used, not mentioning any OOP * there are four versions of the app (desktop, iPhone, Android + other mobile) each version has pretty much the same functionality but was created by copying the whole code base, so now there are some differences between each version and it is really hard to maintain * the database is really badly designed, which is causing severe performance problems also for fixing some errors in PHP code there is a lot of database triggers used which are updating data on SELECT and on INSERT so any testing is a nightmare Basically, any sin of a bad programming you can imagine is there for example it is not only possible to use SQL injections in literally every place but you can log into app if you use a login which doesn't exist and an empty password. 
The team which created this app is not working on it any more. There is an outsourced team which suggested that there are some problems, but they were never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to a lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending existing functionality, but the first thing I was asked to do was a review of the existing code base. I made my review, and it was quite a shock for the management, but my conclusions were after some time finally confirmed by other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality. My question is, as I don't want just to patch the existing code, how to transform this into a properly written app while keeping the existing code working at the same time? **My plan is:** 1. Unify the four existing versions into a common code base (fixing only the most obvious errors). 2. **Redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time)** 3. All new functionality will be written as a separate project. 4. Step by step, transfer existing functionality into the new project. 5. After some time everything will be in the new project. Some explanation about #2: right now it is practically impossible to make any updates in the existing db; any change requires reviewing the whole code and making changes in many places. **Is such a plan feasible at all?** Another solution is to walk away and leave the headache to someone else."} {"_id": "127658", "title": "How to reply to incomplete requests from potential customers?", "text": "Working as a freelancer, I receive many weird, invalid or incomplete requests from actual or potential customers. The most frequent case is this one: > Hi, > > I need a website where people can register and there are also postings and > ratings. How much will it cost me? > > Thank you. The request sucks, but **it doesn't mean that a customer like this is not worth it**. This person doesn't know how to make a request correctly, but with **a bit of effort and a bit of learning and advice**, this person may become a valuable customer who will not waste my time. For a while, I just **replied by asking them to provide details**. They never do. Recently, I decided to **reply in a more detailed way**, explaining _why_ it is impossible to give a price (except to tell them that it would be somewhere between $500 and $50 000). First I just gave a simple explanation, telling them that their description of the project is too sparse. Then I added further info, metaphors, etc., making comparisons with other domains which are better known by people with no technical background. For example: > \u201cImagine you want to build a two-storey house. Do you believe it's possible > to determine the cost of building a house just by knowing the number of > storeys? You probably need to provide many more details: is it built with > stone or wood? Are there solar panels on the roof? Is there a swimming pool > in the backyard? > > A large Victorian-style house using the newest technologies, with two > garages, a large terrace, etc.
will cost much more than a tiny, modest two-storey house for a family who really doesn't have too much money to spend.\u201d It's still not working: those potential customers never respond. I also tried the \"let me gather the project requirements for you from scratch and do the specification and architecture, but don't forget to pay me for this\" technique, but it looks like a scam\u00b9. In all cases, in my country (France), this never works with new customers for several reasons. Some hints show me that **some of those people actually succeed in finding a developer and succeed with their projects**. It means that my approach with those potential customers fails, while there is an approach used by someone else which works well. How do I reply to such price requests, considering that those people don't know me, don't trust me yet, don't want to spend days writing a detailed document describing every functionality of the project, and sometimes don't even know precisely what they want, but are not ready to pay you thousands of dollars just for the requirements, specification and architecture steps? * * * \u00b9 Most projects are small enough and have tiny funding; most customers don't bother to know that the source code is clean and maintainable, that it was regularly refactored, and that you have unit and integration tests. **They want to pay less, now, no matter how expensive it would be later to maintain the codebase.** In this context, talking about functional and non-functional requirements, architecture, etc., is perceived as an attempt either to waste half of the customer's money doing marketing jibber-jabber instead of writing code, or to scam them by making them pay for something they neither need nor understand, and then disappearing with their money when it comes to actually writing source code. They don't know that you are a professional, and they don't even care."} {"_id": "246688", "title": "How to take over a sizable codebase, without having access to those who implemented it?", "text": "I've started a job as mobile lead in a 100-employee company. Their mobile products (iOS and Android) have been developed by external teams, and now they have decided to assemble internal teams, so I'm heavily involved in hiring ATM, both for iOS and Android. I will have teams of 5 people for each of the platforms. I've been a manager before, but it's the first time I'm not starting code from scratch. My question is: how should I handle taking over code? The iOS code is about 50 classes, many of them over 700 lines. The code quality is around 5/10. What should be my first steps for taking over the code base? Little by little, or a major restructure first, or both using different resources?"} {"_id": "50576", "title": "How do you stay productive when dealing with extremely badly written code?", "text": "I don't have much experience working in the software industry, being self-taught and having participated in open source before deciding to take a job. Now that I work for money, I also have to deal with some unpleasant stuff, which is normal of course. Recently I was assigned to add logging to a large SharePoint project which was written by some programmer who was obviously learning to code on the job. After 2 years of collaboration, the client switched to our company, but the damage was done, and now somehow I need to maintain this code. Not that the code was _too_ hard to read.
Despite problems\u2014each project has one class with several copy-pasted methods, enormous `if` nestings, Systems Hungarian, undisposed connections\u2014it's still readable. However, I found myself absolutely unproductive despite working on something as simple as adding logging. Basically, I just need to go through the code step by step and add some trace calls. **However, the idiocy of the code is so annoying that I get tired within 10 minutes of starting**. In the beginning, I used to add `using` constructs, reduce nesting by reversing `if`'s, rename the variables to readable names\u2014but the project is large, and eventually I gave up. I know this is not the task I should be doing, but at least reducing the mess gave me some kind of psychological reward so I could keep going. Now the trick has stopped working, and I still have 60% of my work to do. I started having headaches after work, and I no longer get the feeling of satisfaction I used to get\u2014which would usually allow me to code for 10 hours straight and still feel fresh. This is not just one big rant, for I really do have an actual question: > Is there a way to stay productive and not fight windmills? Is there some kind of psychological trick to stay focused on the task, instead of thinking _\u201cHow stupid is **that**?\u201d_ each time I see another clever trick by the previous programmer? The problem with adding logging is that I actually have to _understand_ what the code does, and doing so hurts my brain in an unpleasant fashion."} {"_id": "80625", "title": "We're not a software company. Is a complete re-write still a bad idea?", "text": "I understand the reasoning behind Joel Spolsky's article \"Things You Should Never Do, Part I\", but I always see it referenced in regard to situations where the end goal is the production of software. What if I'm a developer who maintains an ecommerce site? My job isn't writing a retail platform, but instead putting it to use. In fact, this wouldn't even be a re-write, as such, but a big database and web design transition. The software our site is based on is written in classic ASP, and is fundamentally missing many features that customers expect from a current shopping site. Rather than continue to add these features piecemeal, my gut feeling is that I should start to transition to a more modern platform. We would lose the customizations that we've made over the years, but frankly, many of these features already exist (and have almost certainly been implemented better!) in the package that I'd like to switch to. Am I falling victim to the spirit of Netscape, or am I right in thinking my time is better spent in places besides making our tools do what we need? To clarify, this is the equivalent of switching blogging platforms for us. Any \"development\" that I do is essentially rewriting the front end of our website, while the back end is out of my control. Suppose WordPress development had stopped years ago, and was missing \"modern\" features like commenting, static pages, clean permalinks, etc. Sure, I could write a plug-in to add those things, but after a while, wouldn't it be better to switch platforms to something that had all those (needed) features built in from the beginning?"} {"_id": "229690", "title": "On the process of replacing an internal framework by a public one", "text": "I am working on several applications which depend on a framework that was developed by a prior engineer in the company.
The framework was mainly developed and maintained in the early 2000s, and provides the same functionality available in `boost`, `Qt` or even `c++11`. I believe the framework was developed internally because the engineer in question is an autodidact, and at the time he didn't know of a framework which would satisfy his needs. Right now the framework has reached its limits, and we plan to switch to public and well-known frameworks to do the same job. The issues we are facing: * We have to deal with custom basic data structure implementations (strings, arrays). * We are limited in our capacity to do event-driven or clean concurrent programming correctly. * Coupling and code complexity can be very high (due to people working on both the framework and the apps). * Nobody even remotely wants to maintain that library. * GUI application management is just inferior to Qt. For instance it is very hard (read: _we still haven't managed_) to make a CLI based on this framework. This is an issue when we want to create CLI executables which do some of the functionality of the UI. Basically **the application is very resistant to change**. One developer joked that _the source code became self-aware, and that we have lost control_. We are a small team, and want to move away from our lib incrementally. We cannot go into full rework mode, as the application has features\\bugs to be taken care of in the meantime. We are lucky that the management part of the company understands the issues and is in a cooperative mindset. * What approach or strategy should we consider? * What should we do in order not to be stuck midway with an unusable mutant? * What are the opportunities? Is it the occasion to improve the architecture? To improve testability? Or the development process? * If several things come to mind, which one should be done first?"} {"_id": "40676", "title": "Dealing with bad/incomplete/unclear specifications?", "text": "I'm working on a project where our dev team gets the specifications from the business part of the company. Both the business management and the IT management require estimates and deadline projections, as they should. The good thing is that estimates are mostly made by the actual developers who get to do the required features. The bad thing is that the specifications are usually either too simple (it turns out you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (up to the point that you can't even visualize where everything would \"fit\" in the app). More often than not, the business part of the specs is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to give an estimate, and we do try to clear uncertainties, usually by meeting up with whoever did the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we end up in trouble, as a lot of the spec seems to have holes. How do you deal with this? Are you generous on estimates in advance?"} {"_id": "225341", "title": "How to integrate a unit testing process into a legacy software development process?", "text": "I'd like you to share your insights on how you have successfully turned a rotting legacy code base toward a modular application design where it's easy and useful to add unit tests. What I'm trying to find out here is a little bit different from what has been asked before.
I want to hear about your own experiences on actual systems. I'm interested in hearing which critical points mattered most and resulted in improvement. I have been working for over five years with a 15-year-old C/C++ code base, where I have seen the change from old C code built as a monolithic application to a modularized, modern C++ application, now utilizing unit testing as a development process (to simplify a lot). As I see it, the critical things in reaching that success have been: 1. highly talented individuals doing great things and prototyping (somewhat outside the official process) unit testing and promoting it to others 2. management willingness to invest in architectural development (not to enable unit testing, but for business reasons) 3. heavy investment in a continuous integration system (which is not an easy thing to set up for a large legacy C++ application) 4. dedicated people to develop and maintain unit testing frameworks, integrations and training of developers 5. and of course the developers themselves who make the progress. So, **what worked** in your case, and **what did not?**"} {"_id": "206475", "title": "Are missed deadlines common in programming jobs?", "text": "It was a freelance job of mine at oDesk. I have done several jobs on time earlier, but this was the first time I missed a deadline. It was a very lengthy job and I tried my best, but I still missed the deadline. Now, I am very scared. Because it's my fault that I missed the deadline. My question is: Is this a big concern, or are missed deadlines common in programming jobs, so I shouldn't worry too much about this?"} {"_id": "254360", "title": "How would you rewrite/refactor this?", "text": "Old application that is used by 50-60.000 paying customers. The company is several hundred people big. The application has a lot of business-critical code (30% of all code) written in classic asp. The application has a lot more .net code. The application has a COM+ bridge for enabling asp to \"talk\" to .net. The organization lacks some/a lot of knowledge on what is causing the 10-20% server resets per day (might be due to COM+?). There is no common thread through the application: no architecture, no real patterns, etc. The application has been like this for at least 5 years. The asp code base is increasing, slowly but surely. I have read refactoring stories and I know why you sometimes should not re-write a system. I would love for the old asp code to vanish, as well as the COM+ component. But the pain is that no one really knows what is going on inside the asp classic code, and the attitude inside all the teams is \"this is just how it is\". Down the line, this causes a lot of other issues like recruiting, dev efficiency, business needs that cannot be met, scale, etc. With these little facts, does that justify a re-write of the asp code and the removal of the COM+ component? How would you go about it?"} {"_id": "213973", "title": "How to ask management to increase the number of programmers in the team without sounding incompetent", "text": "I am currently working for a small-medium sized company (~50 employees) as the sole IT staff. We are currently on track to replace one expensive yet critical legacy system in favor of an application I am working on during my free time at work. However, it's been very busy lately, with issues spawning left and right, so the development of the application has been lagging a lot.
How do I ask a very thrifty management to hire more programmers to help me build the new application, without sounding incompetent? FYI, I do support for all our applications, setup and administration of the network and servers, and in-house application development. EDIT 1: Nature of the Critical Legacy System, for added information. The Legacy System is the center of our operations. Basically, all our reports, notifications and data are being handled by or are passing through this system. We want to replace it for a few reasons: 1. We are bleeding money paying for support that is rarely helpful (really terrible support) plus licensing costs. 2. Ridiculous cost for adding new features ($500 to adjust the width of a column in a report) 3. System design/architecture problems that cause the system to be slow and unscalable (other branches find the system unusable because of how slow it is when used outside of the office LAN) 4. It's a program made with Access '97. Need I say more? It still works, but it's like trying to cut something with a dull blade in terms of performance"} {"_id": "46272", "title": "How can I estimate how long a project will take?", "text": "I'm working as a web developer and I want to be able to determine if I'm efficient. Does this include how long it takes to accomplish tasks such as: * Server-side code for the site logic, in one language or several (PHP, ASP, ASP.NET). * Client-side code like JavaScript, with jQuery for AJAX, menus and other interactivity. * Page layout, HTML, CSS (color, fonts (but I have no artistic sense!)). * The needs of the site and how it will work (planning). How can I judge how long it will take to complete a website? The site has a CMS for adding and editing news, products, and articles on the experience of the company. Also, they can edit team work, add Recreational Activities and a logo gallery with compressed PSD download, and send messages to cPanel and to email. You are starting from scratch except for jQuery and PHPMailer. How can I estimate how long the job will take, and how can I calculate the required time to finish any new projects? I'm sorry for the many scattered questions, but this is my first attempt and I want to benefit from the great experience of those who have it."} {"_id": "194347", "title": "Strategy for reading and understanding Node.js code", "text": "Concretely, I am looking at this 2000-line file of what I will pretty arbitrarily call \"mediocre\" code. * It's not well-commented. * Variable names and function names seem consistently intelligent. * Functions are not well-documented. * Functions are a good length. In short: its lines and small structures are readable, but it's impossible to infer architecture or design at a glance. Most code I've worked on can be described thusly, to be fair. So I need to understand this. And work on it. This is an important skill in software development I'm still weak at, and it's extremely important when working in a nascent system. So my question is, when encountering foreign code like this that I am employing as client code but now need to understand and modify, what is a quick strategy for gaining a good understanding?"} {"_id": "66438", "title": "Techniques to re-factor garbage and maintain sanity?", "text": "So I'm sitting down to a nice bowl of c# spaghetti, and need to add something or remove something...
but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization done separately at every single output, SQL that doesn't use any kind of DBAL, database connections left open everywhere... Are there any tools or techniques I can use to at least keep track of the \"functional integrity\" of the code (meaning my \"improvements\" don't break it; see the characterization-test sketch below for one such technique), or a resource online with common \"bad patterns\" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here are some samples from the same 500-line function: protected void DoSave(bool cIsPostBack) { /* ALWAYS a cPostBack */ cIsPostBack = true; SetPostBack(\"1\"); string inCreate =\"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\"; parseValues = new string []{\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\"}; if (!cIsPostBack) { /* ....... */ /* .... */ /* .... */ if (!cIsPostBack) { } else { } /* .... */ /* .... */ strHPhone = StringFormat(s1.Trim()); s1 = parseValues[18].Replace(encStr,\" \"); strWPhone = StringFormat(s1.Trim()); s1 = parseValues[11].Replace(encStr,\" \"); strWExt = StringFormat(s1.Trim()); s1 = parseValues[21].Replace(encStr,\" \"); strMPhone = StringFormat(s1.Trim()); s1 = parseValues[19].Replace(encStr,\" \"); /* (hundreds of lines of this) */ /* .... */ /* .... */ SQL = \"...... lots of SQL .... \"; SqlCommand curCommand; curCommand = new SqlCommand(); curCommand.Connection = conn1; curCommand.CommandText = SQL; try { curCommand.ExecuteNonQuery(); } catch {} /* .... */ } I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledge base on how to do this sort of thing: finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,"} {"_id": "226534", "title": "How to convince/prove my manager that a rewrite is needed rather than a refactoring", "text": "My manager wants me to refactor a **gigantic amount of terribly coupled, macroed, private-namespace-riddled, hierarchy-perverted (even 10+ levels of inheritance)** code which (indeed) hasn't been designed with a \"whole-vision\" concept. I tried to explain to him that this code needs to be rewritten, not just refactored, but he didn't listen and stated explicitly that **I need to refactor it... somehow**. I could dive into all the \"dealing with legacy code\" books, but I think nothing useful would come out. How can I convince him that a rewrite, instead of a refactoring, is actually needed?"} {"_id": "228371", "title": "Does anyone have experience with a difficult customer?", "text": "We have a recurring conflict with one of our larger and strategically important customers. The company (let's call them \"C\" for now) sells and distributes technical articles, everything from hardware components to hairdryers, and has more than 400000 products in their catalog. The product owner, who is also the head of product documentation and internal software at \"C\", has at some point in the 90's taken a course in software testing, and is now writing \"Requirement specifications\" as user tests.
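For the "functional integrity" question above, one concrete technique is a characterization test (also called a golden master): before touching DoSave(), record what the current code actually produces for a handful of inputs and assert that it keeps producing exactly that. A minimal sketch with NUnit, assuming a seam such as the StringFormat() helper can be reached from a test; the "expected" values are simply whatever the legacy code returns today, and all identifiers are hypothetical:

    using NUnit.Framework;

    [TestFixture]
    public class LegacyFormattingCharacterizationTests
    {
        // A characterization test asserts what the code DOES today, not what
        // it should do. The expected values are recorded by running the legacy
        // code once; any refactoring that changes an observable output then
        // fails loudly.
        [TestCase("  0123456789~~~~junk  ", "0123456789")] // recorded output, hypothetical
        [TestCase("", "")]                                 // recorded output, hypothetical
        public void StringFormat_MatchesRecordedBehaviour(string input, string recorded)
        {
            // Hypothetical seam: the helper made reachable outside the page class.
            Assert.AreEqual(recorded, LegacyHelpers.StringFormat(input));
        }
    }

Such tests document nothing about correctness, only about current behaviour, which is exactly what is needed to refactor without changing it.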
These are often contradictory and error-prone, but she maintains that what she writes is what she wants, until she writes something else (in the same imperative but ambiguous and abstract format). Furthermore, she demands a final budget for the project, and is unwilling to work in smaller iterations of budgeting. I'm looking for hints on how to control the development process (and our own economy) and still provide the customer with a product that gives the company the most value for money. Normally, I'd start a project by identifying the goals of the project and the business use-cases of the organization - and then deriving scope from those goals, using user-stories. The details (specifications) of how user-stories should be implemented are decided in specification sessions prior to the sprint that includes the respective user-stories. This has repeatedly proven to result in better products and greater ROI for the customers. Does anyone have experience or knowledge of techniques that can be used to \"reverse engineer\" imperative user-tests, or any other input on how to handle the situation? Historically, she has been allowed to completely dominate the development process, which has blown our budgets and taught her that she is in control of our process. Our management has now asked me to control (and defend) the development process. They told \"C\" that no more hours will be used than the agreed budget allows, regardless of how \"done\" she (the product owner) feels that we are, which of course will be my leverage. She will become defensive, even aggressive, to get things her way - any sound arguments based on professional references or experience would be of help. * * * Follow up: I thought I'd follow up in case anyone stumbles across this post with similar problems: We have turned the communication around, and are expecting to deliver the next project within budget and on time (I'll edit this post again if hell breaks loose later on). Initially, I focused on creating trust between me and the product owner. That meant choosing the relevant battles, and presenting myself as her tool to get the most value for her money. Requirements are still expressed in more or less the same way (although she is open to input about syntax, she is firm about the format: Word documents and emails), but we are slowly trading control for visibility regarding the development process. I managed to persuade her to participate in weekly meetings for project status, backlog grooming and prioritization. Between each meeting, she gets the chance to test the newest revision, which turns her test feedback, which used to be an object of discussion, into the most valuable input for backlog grooming and prioritization. By having a head-to-head talk about the feedback from the tests, we're able to \"weed out\" misunderstandings, contradictions and scope creep before they leak into development. The trust between us enabled us to have a constructive talk about each other's organizations and the processes and intentions of these, which opened my eyes to the fact that quite a big part of the problem was our own lack of capacity (professionally, theoretically, and methodically). The product owner has called management to personally compliment the progress we are making, so I'm calling these first steps a success - but I am constantly watching the scope and progress, so as not to \"rest on one's laurels\"."} {"_id": "162923", "title": "What defines code readability?", "text": "> **Possible Duplicate:** > How would you know if you've written readable and easily maintainable code?
It is often said that readability is perhaps the most important quality-defining measure of a given piece of code, for reasons concerning maintainability, ease of understanding and use. **What defines the word _readable_ in the context of program source code?** What kinds of definitive aspects are there to code readability? I would be grateful for code examples of _readable_ code, along with reasoning why it is readable."} {"_id": "14492", "title": "How to gain understanding of large systems?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I have worked as a developer developing C/C++ applications for mobile platforms (Windows Mobile and Symbian) for about six years. About a year ago, however, I changed jobs and currently work with large(!) enterprise systems with high security and availability requirements, all developed in Java. My problem is that I am having a hard time getting a grip on the architecture of the systems, even after a year working with them, and understanding systems other people have built has never been my strong suit. The fact that I haven't worked with enterprise systems before doesn't exactly help. Does anyone have a good approach on how to learn and understand large systems? Are there any particular techniques and/or patterns I should read up on?"} {"_id": "86454", "title": "Taking part in an open source project", "text": "I am new to open source development. I want to add some functionality to an open source project, PRISM. But I couldn't find a developer guide or code architecture guide of any sort for it. How should I begin to use its code? I have just imported the archive file into Eclipse and have begun reading the code. I want to ask whether there are any specific tools which will help me."} {"_id": "230521", "title": "How to handle not-quite-legacy code?", "text": "This is a frequently asked topic... And I have read through many posts and articles and am about to read the book `Working Effectively with Legacy Code`, but before that, and mainly because it will take more than a couple of days for the book to arrive and I would like to keep my sanity in check, here's the question: I am looking at not-that-old (1-2 years) code at a company I recently joined, and I have found so many bad practices in there that it's almost heartbreaking... Massive code duplication, global variables (which are inconsistently used), and things like using a factory class to instantiate various objects that extend a parent object... and then calling methods that don't exist in the parent, but are defined in all the subclasses... The code works... It's rigorously QAed and bugs are at a minimum, which is fine... but I cringe so much looking at the code... I recently asked my boss if I could refactor the entire codebase. He asked how I would be able to guarantee the changes would not break anything. I suggested we invest some time in creating _real_ unit tests, as their so-called unit tests are actually end-to-end tests, so we have to trace the debug log every time something breaks. He said we don't have enough time to create the unit tests (actually the code isn't very unit-testable, since there is so much tightly coupled code... and I would've had to pretty much re-write most of the classes), so in effect I can't even make minor changes to the code if they aren't listed as bugs in the issue tracker... What I'm doing right now is a lot of copying and pasting... then fine-tuning that, since the logic isn't exactly the same... What can I do?
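The factory situation described in the question above is worth pinning down, because the first fix is usually mechanical: when the parent type does not declare the operation, every call site must downcast and the compiler can no longer check anything; declaring the shared operation on the parent removes the casts without touching the factory at all. A rough sketch (the question never names its language, so this is C# with invented identifiers):

    // The smell: the base type never declares the operation, so callers must
    // downcast whatever the factory returns. All identifiers are invented.
    abstract class Document { }
    sealed class Invoice : Document { public void Render() { /* ... */ } }
    sealed class Report : Document { public void Render() { /* ... */ } }

    static class DocumentFactory
    {
        public static Document Create(string kind) =>
            kind == "invoice" ? (Document)new Invoice() : new Report();
    }

    // Call sites degenerate into type switches:
    //   var doc = DocumentFactory.Create(kind);
    //   if (doc is Invoice i) i.Render(); else ((Report)doc).Render();
    //
    // Low-risk first step: declare the operation on the base type,
    //   abstract class Document { public abstract void Render(); }
    // after which every call site collapses to
    //   DocumentFactory.Create(kind).Render();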
:/ Sorry, I think I'm just venting here, and there is nothing I can do in the short/medium term... sigh"} {"_id": "253834", "title": "How to redesign the UI of a large project?", "text": "I'm currently working on a quite big Android project (a social network; you can see it here if it is useful for answering the question). We decided to restyle the whole app, changing all the UI design. The code I actually have is complex, there are many controllers, and it's a little messy too. Is it \"better\", in terms of time and code cleanliness, to take one controller at a time and rewrite it entirely, or to _modify_ one controller at a time? The layouts change entirely anyway, so for those I know I have to restart from scratch. Maybe it seems a stupid question, but with the \"time\" variable I really can't see the best way."} {"_id": "246223", "title": "When does extracting methods from code stop making sense?", "text": "I am currently studying the refactoring methods defined by Martin Fowler (http://refactoring.com/catalog/). He gives a tip about replacing chunks of code with a single method that does that job. So far, I agree, as we all learned about the downsides of spaghetti code. But the example for this rule looks as follows: Replacing void printOwing() { printBanner(); /* print details */ System.out.println (\"name: \" + _name); System.out.println (\"amount \" + getOutstanding()); } by void printOwing() { printBanner(); printDetails(getOutstanding()); } void printDetails (double outstanding) { System.out.println (\"name: \" + _name); System.out.println (\"amount \" + outstanding); } Is the readability, and thus the immediate understanding of the code, really better in the latter example? There is no indication of what \"details\" are in reference to \"owing\". For example, will the name be listed, or is there another method printName()? Is the interest listed as well? I would need to search for the printDetails() method's implementation to find out about that. The method printOwing() itself is already a print method. Would it not be easier, for maintaining the code, to just list the System.out.println()'s in this method, commenting the purpose as in the first example, instead of \"scattering\" the code this way? Is there a rule of thumb about when to stop \"methodizing\" and when it still makes sense?"} {"_id": "87757", "title": "How to convince my boss that quality is a good thing to have in code?", "text": "My boss came to me today to ask me if we could implement a certain feature in 1.5 days. I had a look at it and told him that 2 to 3 days would be more realistic. He then asked me: \"And what if we do it quick and dirty?\" I asked him to explain what he meant with \"quick and dirty\". It turns out, he wants us to write code as quickly as humanly possible by (for example) copying bits and pieces from other projects, putting _all_ code in the code-behind of the WebForms pages, not caring about DRY and SOLID, and assuming that the code and functionalities will never ever have to be modified or changed. What's even worse, he doesn't want us to do it for just this one feature, but for _all_ the code we write. > We can make more profit when we do things quick and dirty. Clients don't > want to pay for you taking into account that something _might_ change in the > future. The profits for us are in delivering code as quick as possible. As > long as the application does what it needs to do, the quality of the code > doesn't matter. They never see the code.
I have tried to convince him that this is a bad way to think as the manager of a software company, but he just wouldn't listen to my arguments: * **Developer motivation:** I explained that it is hard to keep developers motivated when they are constantly under the pressure of unrealistic deadlines and budgets to write sloppy code very quickly. * **Readability:** When a project gets passed on to another developer, cleaner and better-structured code will be easier to read and understand. * **Maintainability:** It is easier, safer and less time-consuming to adapt, extend or change well-written code. * **Testability:** It is usually easier to test and find bugs in clean code. My co-workers are as baffled as I am by my boss' standpoint, but we can't seem to get through to him. He keeps on saying that by making things more quickly, we can sell more projects and ask a lower price for them while still making a bigger profit. And in the end these projects pay the developers' salaries. What more can I say to make him see he is wrong? I want to buy him copies of Peopleware and The Mythical Man-Month, but I have a feeling they won't change his mind either. A lot of you will probably say something like \"Run! Get out of there _now_!\" or \"I'd quit!\", but that's not really an option since .NET web development jobs are rather rare in the region where I live... * * * # Update Wow, I hadn't expected to get so many answers. Thank you all for your contributions and your opinions! As quite a few of the answers and comments point out, the type of company and the type of projects play a big role in this topic. I have explained a few things here and there in comments on some answers, but it's probably better to add it here as well. The company I work for is rather small. We have 4 developers, 1 designer, 1 boss and 1 jack-of-all-non-technical-trades (the boss' wife). The projects we do can be divided into two categories: 1. Smallish websites built with our own CMS or e-commerce framework (65%) 2. Middle-sized web applications (35%) So while a lot of our projects are rather small, they are built on top of the same system. This system is about 4 years old and the code base is below par, to say the least. It is always a dread to add new functionalities or modify standard functionalities for specific customers. One of the goals set by the boss is to start moving our focus to product development. So that means we'll be developing bigger applications that will serve as the base for other projects, or something SaaS-like. I totally agree that doing things quick and dirty can be the best solution for certain projects. But when you are extending an existing CMS that will be used by all the sites you will develop in the next few years, or building a SaaS product from scratch, there are better approaches, I think."} {"_id": "238734", "title": "How to code efficiently?", "text": "I often compare my code to others' and I find their solutions more efficient and shorter than mine. Although both solutions work, I can't help but wonder if mine is inadequate. As a result, sometimes I get a mental block when I'm about to start a project, because I keep wondering if my solution is the best way. Are there any steps one can take to code in a more efficient manner?"} {"_id": "135311", "title": "What is the most effective way to add functionality to unfamiliar, structurally unsound code?", "text": "This is probably something everyone has to face sooner or later during development.
You have existing code written by someone else, and you have to extend it to work under new requirements. Sometimes it's simple, but sometimes the modules have medium-to-high coupling and medium-to-low cohesion, so the moment you start touching anything, everything breaks. And you don't feel that it's fixed correctly when you get the new and old scenarios working again. One approach would be to write tests, but in reality, in all cases I've seen, that was pretty much impossible (reliance on the GUI, missing specifications, threading, complex dependencies and hierarchies, deadlines, etc.). So everything sort of falls back to the good ol' cowboy coding approach. But I refuse to believe there is no other systematic way that would make everything easier. Does anyone know a better approach, or the name of a methodology that should be used in such cases?"} {"_id": "225052", "title": "Should I start refactoring this messy project even if I know I won't have the time to completely refactor everything?", "text": "I have this older project (1st release 2005-ish) I've inherited that a customer asks me to fix or add something to now and then. It's a bit of a mess architecture-wise. Basically it's an ASP.NET WebForms site where 90% of the logic is in the code-behind file of each view and the other 10% is in the data access layer. Although \"layer\" might be the wrong word, since all data access is just piled into one service, an all-in-one 3500-line class with ADO.NET code. All other parts of the site use this one service for everything. I'm still a newbie myself, with about 2 years of work exp. And every time I come back to this project it feels worse, because I've learned and seen how to do architecture better in other projects. So although I've been smelling the bad code in this project pretty much since day 1, it's now almost unbearable. No business logic is testable, and there are always smaller bugs I know would've been caught earlier with tests in place and a better-defined business layer. So my dilemma is that I want to refactor this using MVP (for example https://github.com/webformsmvp/webformsmvp), move the logic from the code-behind and the data access service into a business layer, and at the very least split the big repository class into smaller repositories. The problem is that the customer is not willing to dedicate the time or money for this amount of work in one batch, so I'm considering refactoring bits and pieces as I come across them, so that it gets refactored over a longer period of time. However, I'm not sure if I should mess up the codebase even further by introducing a lot of new concepts, architecture and frameworks if I'm not sure that I can complete the work myself. It feels kinda bad leaving a project in a kind of partly-refactored state where the guy after me might not know where to begin or how to continue the refactoring (or whether he will even continue it, rather than go down his own path instead)."} {"_id": "234763", "title": "If TDD is design, how do you know your TDD is well designed?", "text": "Given a large group (50+) of programmers: * All given the same problem, * All using Test-Driven Development (TDD), * All pair programming, * All doing group-based code review, I have personally seen the wide spectrum of tests that are possible for the same problem, even on the first test. So, if TDD is design, how do you know your TDD is optimal for the current problem, and how do you know it is not?
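To make the "wide spectrum" in the question above concrete: even for a toy problem such as scoring a bowling game, two pairs doing TDD can legitimately write very different first tests, and each one pulls the design in a different direction - one toward a stateless function, the other toward a stateful object. A sketch with NUnit; both APIs are hypothetical:

    using NUnit.Framework;

    [TestFixture]
    public class FirstTestStyles
    {
        // Style A: this first test pulls toward a small, stateless scoring API.
        [Test]
        public void GutterGame_ScoresZero()
        {
            Assert.AreEqual(0, BowlingScorer.Score("--------------------")); // hypothetical API
        }

        // Style B: this first test pulls toward a stateful, roll-by-roll design.
        [Test]
        public void NewGame_IsNotComplete()
        {
            var game = new BowlingGame(); // hypothetical API
            Assert.IsFalse(game.IsComplete);
        }
    }

Neither first test is wrong; which one is "well designed" only becomes visible a few tests later, which is why reviewing the tests that follow matters.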
Following the first test, is the approach for reviewing the subsequent tests any different, and if so, how?"} {"_id": "105018", "title": "How do you find your way in deeply nested, interfacey code?", "text": "I know most people hate flat and long functions, and hate when code is not full of ISomethings. The problem is that I guess my mind works in a different way, and I always have problems with that type of code in any non-trivial solution. So, since most people enjoy an explosive number of functions, can you describe the preferred method when dealing with unknown code-bases written in this way? So far, for me, it looks like: I have an object with interface IFoo; great, I need to extend it with method Bar1. Reverse lookup, we land nowhere; global search on who implements IFoo, it's Baz1, Baz2, Baz3, Baz4, and they are created by 3 class factories. So we start one by one, definition of Baz1, looks nice, but its behaviour is completely dependent on the parameters used when the object was created through the class factory. And what's worse, it's just a wrapper around some other functionality of yet another class with IFooBar. Which again uses some internal implementation of classes with ISomethingElse, which again turns into an explosive graph. How do you navigate all that effectively?"} {"_id": "202107", "title": "OO - are large classes acceptable?", "text": "Despite many years in IT, I still struggle with OO design. One particular problem I seem to keep ending up with is large classes, often containing many hundreds of lines of code. The OO world talks a lot about SRP, and I _could_ argue that these large classes I end up with are each handling a single responsibility. The complex nature of the app (a scientific data processing system) means that one of these responsibilities requires an awful lot of code! E.g. a class might have a single public method `Calculate()`, but there could be 15-20 or more private methods to support this operation. Where possible I will extract methods into separate classes for re-usability, but often there is little or no scope for this. Part of me says I should split these large classes up purely to improve readability (e.g. grouping similar methods into their own classes), but the times I've done this it has felt like a wasted exercise - all I've really done is spread the methods around, created more class dependencies, and made things a little more difficult for other devs to find stuff. But then I read articles where developers talk about how all their classes are small enough to fit on the screen without scrolling, and I worry that I'm doing something wrong! In a previous job, I worked on a system that had been OO'd to the extreme, and it was common to see classes containing just one method, often with no more than one or two lines of code. Personally I found it difficult to find my way around the code-base and to debug, and for this reason (and the sheer number of classes) I could argue that it becomes more difficult to maintain - not the aim of good OO design. I guess it's down to personal taste. So, is it acceptable to have large classes, or can you suggest other ways that I could deal with them?"} {"_id": "219467", "title": "Who should provide the Requirement doc / SRS?", "text": "I am creating a Mobile application (CRM) for another software company. They only told me verbally what they need, bits and pieces here and there, within 15-30 minutes. But they never provided a detailed client requirement document in writing.
I have asked for this a number of times, but still nothing. As soon as I started working I realized that what they told me verbally is not even 10% of the work. They first asked me to come up with my own design, similar to another app they mentioned. I did: a design 100% similar to the app they mentioned. Then they said they didn't like it, so they gave me their design. I completed their design and submitted it. Now they again say they don't like their own design and are asking me to do another design! Apart from that, they are now asking for some \"Activity Sliding Animations\", a \"Windows Like UI\", etc. In order to do this, I need to throw away the entire work I did and start from scratch, because it has to be done with Fragments and mine contains Activities. I don't even have any idea what the backend will be like. Even after one month, we are still designing and changing the design (because you have to send at least 3-4 mails and contact at least 2 of their team via phone to get an answer; very slow communication). And will they pay for the trashed designs? I don't think so. So what about the other customers of mine to whom I said 'no' due to the lack of time I am going to have because of this project? I feel almost crazy. They always say come up with your own idea, but that is very risky in this situation. If my idea is not accepted, it's a waste of time and money. I am so tired of this; I have wasted a lot of time producing two 100% different designs, and am about to proceed to a third. Since I don't know what the software will be, I haven't done a cost estimation either. So my question is, isn't the client supposed to give you any documented requirements? Maybe an SRS? (What I mean by SRS is a complete, detailed document about the functionality they need, including this 'animation' stuff, because I have no superpower for guessing.) The requirements doc they provided is useless; it only says \"update\", \"delete\", \"edit\". All my other clients did provide complete data, some even provided drafts of the design process, except this one. They should at least provide me their requirements in detail, in writing. I need some advice here."} {"_id": "139683", "title": "How to deal with bad code?", "text": "> **Possible Duplicate:** > Techniques to re-factor garbage and maintain sanity? > Code maintenance: keeping a bad pattern when extending new code for being > consistent, or not? I was hired about 6 months ago by a company that uses Agile, but after learning the code I've realized that it's bad code - methods with over 100 lines of code, duplicate code, methods that say they do one thing but do a few other unrelated things. It works, but the more we update it, the more it becomes like a house of cards. Refactoring one little thing requires changing lots of others, which can make the program unworkable. What went wrong, and is it possible to fix it? I thought agile was supposed to produce good code."} {"_id": "210248", "title": "How many days is it normal for a new hire programmer to take to get up to speed?", "text": "I have just landed a role as a C#/ASP.NET developer at a large software house. I previously worked at a much smaller software house for about two years, but it was a varied/mixed role there, and here the ASP.NET applications we have are a factor of 10 or so larger. As seems to be the norm, I have been given the task of fixing bugs. At the moment I am just trying to understand the system. How long, in your experience, does it \"roughly\" take (and is generally acceptable) for a new developer to get up to speed?
It of course varies from company to company, but as a general rule, when you have hired someone or worked with someone new, how many days/weeks would be normal for them to take to get to grips with the system?"} {"_id": "193553", "title": "Time required to start coding at a new company", "text": "I have been a software engineer for 4 years, and I have just changed companies for the first time. The company works with pair programming, and it's been 3 days and I haven't been able to write even a single line of code. It's so frustrating for me because I was very productive at my previous company. The codebase is large, and they are using 5-6 languages/tools that I am not familiar with, like rspec, haml, jasmine and others. But still, I feel awful. This weekend I created UML diagrams to get a better understanding of the application, but I am still guessing I won't be able to write a decent amount of code this week. **Is this normal?** _What is your experience when you change your job and dive into a large codebase written with languages/libraries you are not familiar with?_ **Of course I am not asking for the _exact_ time required, but past experiences or things to make the process easier would be great.** Btw, I've already read the questions & answers below: How do you dive into large code bases? http://stackoverflow.com/questions/215076/whats-the-best-way-to-become-familiar-with-a-large-codebase http://stackoverflow.com/questions/214605/the-best-way-to-familiarize-yourself-with-an-inherited-codebase **UPDATE** All great suggestions! I just came from work; I've worked a lot! _**About pair programming:_** Generally they write code, and I am trying to not miss even a second! If I try to write the code, I know it's going to take forever, because I don't even know which files I should edit, but besides that, as I said, they are using 6-7 languages/frameworks that I'm not familiar with, and learning all these syntaxes at once is not easy. _**How well the company prepared for engineers:_** I can't say they are well organized; they kind of expect me to start writing code immediately. _**Taking notes, being proactive:_** I'm always taking notes when they write a new command or anything about data models. My peers are very smart and kind people, and I'm trying to ask lots of questions, even **lots of stupid questions** sometimes. _**Is this common?:_** @Telastyn, thanks for your answers, they made me feel a little better. It seems like my problem is not that uncommon, but I was really productive before this job, and now I really feel useless and not smart. I hope I can start fixing bugs/implementing features very soon. _**About frameworks/languages they use:_** I was really honest about that; I didn't claim to know anything that I don't actually know. But I wasn't expecting so many different things, and since I started working the day I accepted the offer, I didn't have time to prepare myself. @Southpaw Hare, thanks a lot for sharing your experience. You are absolutely right. There is no guarantee that I'm going to learn all of this stuff, but I'm trying. In the end, it is hard to learn all of the syntax at once, and I think that is the main problem. I can navigate Ruby code well since I know that language, and I can navigate JS code thanks to browser inspectors, but the problem is writing actual code with the frameworks/languages I don't know."} {"_id": "29671", "title": "What to do with bad source code?", "text": "I have been contracted to modify an application for a software company and the source code is, quite frankly, a mess.
It's not commented much and the author is inconsistent with coding conventions. The guy seems experienced, but his style isn't great. What do you suggest doing to get the project done properly and to make things easier? **Edit** I didn't mean to say _rewrite_; I've been hired to modify the UI in a certain part of an app. I don't think the client cares so much about what I do to the source code, so long as it works and so long as it's decent quality code. That said, I've expanded some > (condition) ? TrueResult : FalseResult; expressions for readability. While I don't think those particularly make for bad code, there is an inconsistency in the style throughout the code and a few \"left over\" constants that don't seem to be used anywhere. There are also some arbitrary numbers where constants would have been wise for readability's sake. It's not as bad as I initially thought, but I still have some questions for the original developer."} {"_id": "252392", "title": "How to handle a bad code base", "text": "The company I work for recently inherited a custom CMS system that is extremely buggy, has no documentation, and has unreadable logic in everything. The client will not have the budget to re-platform for another 6 months or more. The issue we are having is that fixing bugs generally requires either patching the code or completely rewriting the entire functionality/feature, and the time involved is extremely hard to estimate. Should we continue to patch until they re-platform, or refactor the entire functionality and add documentation?"} {"_id": "159662", "title": "Software rewriting alternatives", "text": "We have a legacy system to bring up to date because: 1. It uses a non-SQL database (Btrieve) that is unpopular among our users 2. Provides only a text interface 3. Is written in Turbo Pascal (but compiled with Free Pascal), which makes it hard to recruit/retain developers to maintain it To give you an idea of the scale, the code base is 15MB over 756 files. All the studies I've looked into, such as those listed here, suggest it is vastly more expensive to rewrite than to adapt what you have. If I thought we could encapsulate the legacy stuff behind a web service and then design new client(s), that would be fine. But I'm stuck with this legacy database which 90% of those 756 files reference. We even attempted replacing the low-level DB code with code to convert to SQL, and while it worked fine, it was too slow, because the code still accesses records one at a time, generating far more SQL statements than needed. Plus this approach does not allow any improvements in the DB design. When faced with such an upheaval as replacing a DB that is referenced so heavily in code, plus the other requirements listed, can it still pay to reuse, or are we better off treating the new version as a whole new product and just listing the current system's abilities as requirements (bearing in mind no handy requirements doc exists for the current system)? I know it's too hard to give a definitive answer on this, so any advice would be appreciated, especially if you've experienced a similar situation."}
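To make the "one record at a time" slowness just described concrete: a minimal TypeScript-style sketch contrasting the record-at-a-time pattern the converted Btrieve code fell into with a single set-based query. The `db` handle, table name and parameter style here are hypothetical illustrations, not anything from the question:

    // Hypothetical stand-in for a database handle; it just logs the SQL it would run.
    const db = { query: (sql: string, params: unknown[]) => console.log(sql, params) };

    const ids = [1, 2, 3];

    // Record-at-a-time access, as in the converted code: one statement per record.
    for (const id of ids) {
      db.query("SELECT * FROM orders WHERE id = ?", [id]); // N statements for N records
    }

    // Set-based access: one statement fetches the whole batch.
    db.query("SELECT * FROM orders WHERE id IN (?)", [ids]);

The per-record round trips, more than the SQL itself, are what make such converted code slow; removing them means restructuring the calling code, which is exactly the cost being weighed in the question above.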
{"_id": "196449", "title": "Starting a recurring project from scratch", "text": "We have a project that keeps recurring. The client expects us to run a website twice a year, and this has happened for the last year and a half. I took the last working copy and based our latest website on it. Now, a co-worker has suggested that next time we should start from scratch instead of fighting against legacy code. I have already started refactoring the existing code, and so have the other developers who were on the project. The code is cleaner than before and it meets the client's needs. The refactoring was ongoing while we developed new features. What are some good reasons to advise against starting from scratch?"} {"_id": "37091", "title": "How to convince my boss to improve code quality?", "text": "The place I'm working for is a service provider. We have a lot of services, which are written under deadline pressure, so their code is really terrible: * No coding convention, everyone codes in his own style * No unit testing (which is really bad) * No refactoring (which is truly worse) * No automated build/deployment, etc. and this code is used again and again, so bad code continues to spread all over my department. I really want to set up a quality standard for our code, by requiring everyone to follow \"rules\": every line of code which does not follow convention will be rejected, and every function which does not pass unit testing will not be committed... But I don't know how to convince my boss to allow me to do this. I'm a relative newcomer, so inspiring people through my work is really hard, and I think it will be easier if my boss supports me in this. Thank you very much for your advice"} {"_id": "87460", "title": "Deciphering foreign code", "text": "What is the best strategy to go about understanding someone else's code for a medium-sized project, if the code is not well documented and does not adhere to many coding standards?"} {"_id": "235678", "title": "What can I do with a poorly coded vb6/access2003 project?", "text": "I'm currently working on a big VB6 project, and since I'm not the leader of the team (we're just 2) but will be in the near future, I've started wondering about improving the code and the data structure. There are 5 poorly coded projects with a total of 60-70 forms. The code is really bad; I've tried to improve it with the use of functions to avoid repetition, but it's still awful. It would take me a lifetime just to describe all the stuff that is wrong. The database is even worse: there are 5 badly designed .mdb (Access 2003) files; the person who created them doesn't know what normalization is, so indexes are random, there are no relationships between tables (some tables that should be related are in different DBs...), and so on. I feel bad for the users that use this thing: whenever they type in something wrong, they need to relaunch the application because there is no error handling. Besides that, it's slow as hell (the DBs are on a local server, the apps are local). Where should I start? I was thinking about getting rid of Access, but what's the point if the structure has been conceived so badly?"} {"_id": "254643", "title": "Gathering requirements from confusing co-worker", "text": "I'm working at my first software internship, and it's great so far. One problem I'm having, however, is getting clear requirements from my co-worker (we'll call him Person A) who assigns projects. Normally, I get requirements from a co-worker/manager, design/code/test my changes, and eventually commit them for integration in our product. The problem is that, fairly often, it is virtually impossible to get clear requirements from Person A. I will ask questions to better understand what problem is being solved, and one of the following often happens: * Person A will interrupt mid-sentence to think out loud, derailing the current conversation. Sometimes this is for good reason.
Example: > Me: \"So if there should be a table mapping Bazzes to Foos, what if there > were-\" > > Person A: \"Actually, I wonder if we actually need to add a BizzleContainer > in this here.\" > > [I wait a few minutes for A to think through their unrelated thought, and > try to help solve it.] This will repeat a few times, and by the time the conversation is done, I'm more confused than when we started and half an hour has gone by, so I don't want to waste the person's time any longer. Note that I wouldn't mind if the interruption were related to the current topic, but it seems that interrupting someone to take a conversation down a completely different path is poor form. * Person A often barely understands the requirements. These factors add up to wasting a lot of development time based on simple misunderstandings of what needs to be done. Co-workers B and C are capable of clearly expressing what needs to be done, but they are often overrun with demands for their time. So rather than annoy B and C to get requirements clarifications, co-workers often go to A, and get less than satisfactory results. How do you wring clear, correct requirements from someone who understands them but does not have the inclination to take the time/effort to explain what they actually are?"} {"_id": "75866", "title": "How To Deal With Terrible Design Decisions", "text": "I'm a consultant at one company. There is another consultant who is a year older than me and has been here 3 months longer than I have, and a full-time developer. The full-time developer is great. My concern is that I see the consultant making absolutely terrible design decisions. For example, M:M relationships are being stored in the database as a comma-delimited string rather than using a junction table to hold the relationships. For example, consider two tables, Car and Property: Car records: * Camry * Volvo * Mercedes Property records: * Spare Tire * Satellite Radio * Ipod Support * Standard Rather than making a table CarProperties to represent this, he has made a \"Property\" attribute on the Car table whose data looks like \"1,3,7,13,19,25,\" I hate how this decision and others are affecting the quality of my code. We have butted heads over this design three times in the past two months since I've been here. He asked me why my suggestion was better, and I responded that our database would eliminate redundant data by converting to a higher normal form. I explained that this design flaw in particular is discussed and discouraged in entry-level college programs, and he responded with a shot at me, saying that these comma-separated-value database properties are taught when you do your masters (which neither of us have). Needless to say, he became very upset and demanded I apologize for criticizing his work, which I did in the interest of not wanting to be the consultant who creates office drama. Our project manager is focused on delivering a product ASAP and is a very strong personality - suggesting to him at this point that we spend some time to do this right will set him off. There is a strong likelihood that both of our contracts will be extended to work on a second project coming up. How will I be able to exert dominant influence over the design of the system and the data model to ensure that such terrible mistakes are not repeated in the next project?
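To make the schema disagreement above concrete: a minimal sketch of the junction-table design being argued for, written here as TypeScript row types and a lookup since the question quotes no actual SQL. All names are hypothetical:

    // One row type per entity...
    interface Car      { id: number; model: string }  // e.g. { id: 1, model: "Camry" }
    interface Property { id: number; name: string }   // e.g. { id: 3, name: "Ipod Support" }

    // ...and one row per relationship, replacing the "1,3,7,13,19,25," string column.
    interface CarProperty { carId: number; propertyId: number }

    // Finding a car's properties becomes a lookup/join instead of string parsing:
    const carProperties: CarProperty[] = [{ carId: 1, propertyId: 3 }];
    const propertiesForCar = (carId: number): number[] =>
      carProperties.filter(cp => cp.carId === carId).map(cp => cp.propertyId);

In a database this is exactly the CarProperties table the question argues for: each relationship is one row the DBMS can index and constrain, whereas the comma-delimited column forces string parsing and defeats indexing and referential integrity.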
A glimpse at the dynamics: I can be a strong personality if I don't measure myself. The other consultant is not a strong personality, is a poor communicator, is quite stubborn and thinks he is better than everyone else. The project manager is an extremely strong personality who is focused on releasing tomorrow's product yesterday. The full-time developer is very laid back and easy going, a very effective communicator, but is someone who will accept bad design if it means not rocking the boat. Code reviews or anything else that takes \"time\" will be out of the question - there is no way our PM will be sold on such a thing by anybody."} {"_id": "223869", "title": "How to understand the codebase of a complex open source application", "text": "I'm trying to develop a new feature for an open source application. But the existing application uses different APIs like Qt and KDE3. Moreover, the existing files are very complex and depend on each other. E.g. I wanted to create an object of PlaylistBox::Item, but its constructor is protected. So I searched the codebase using grep; there is one line that uses this class, but then there are the initialization parameters of this constructor. Now how do I supply these parameters when I have no use for them and no way to initialize them? My constructor on GitHub: SyncList::SyncList(QWidget* parent)//: PlaylistBox(player,parent,stack) - now how do I initialize these parameters of PlaylistBox? This is the link to the PlaylistBox constructor: PlaylistBox::PlaylistBox(PlayerManager *player, QWidget *parent, QStackedWidget *playlistStack) : and this is PlaylistBox::Item in the same file, playlistbox.cpp#L775: PlaylistBox::Item::Item(PlaylistBox *listBox, const QString &icon, const QString &text, Playlist *l) This was just an example case where I face a problem. I want to know the basic approach for reading such huge and interconnected codebases, where one class inherits another and so on. How do I figure out which class I should use? I keep trying different values hoping one of them will fix it, but clearly I'm going about it in a totally wrong way."} {"_id": "125435", "title": "What is a good way to refactor a large, terribly written code base by myself?", "text": "> **Possible Duplicate:** > Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer, and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?"} {"_id": "98930", "title": "It's my first week of work, I've got the code checked out and am told to look around it until I have an assignment next week.
What do I do?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I've been in this situation before and I kind of just poke around the code while really surfing the internet. I care about this job though, and I want to excel. What kinds of things should I look for? How should I go about starting to learn the framework? In my experience, I learn by doing - but clearly they expect something out of me or they wouldn't give me this week to figure out the code."} {"_id": "181044", "title": "How to understand and debug legacy software?", "text": "> **Possible Duplicate:** > I\u2019ve inherited 200K lines of spaghetti code \u2014 what now? Not long ago my company placed me in a team that deals with some of the most complex bugs that are in production. The thing is that almost all of these bugs are in legacy applications that I am having a really difficult time understanding and debugging, and these are some of the reasons: * Bad software design * Lots of code duplication * Misleading comments * Bad names * No documentation at all * The creators of the software no longer work in the company * Really big classes and methods, very badly programmed * Bugs are very badly documented, and the operations team writes very poorly documented reports on the issues that occur. It is very time consuming and frustrating. As a TDD and ATDD developer, I start by writing tests to triangulate and pinpoint the problem, but there are so many things that need to be mocked that it is very difficult. Also the business analysts don't provide criteria for this software since they don't even know it themselves. The only thing they say is that it needs to be fixed. I thought that maybe here I could find somebody with experience in legacy software that could give me some advice on how to deal with bugs in an environment like this. I feel I am doing software archeology when working with this. Also it is very stressful, and this large amount of frustration makes me feel useless and unproductive, since it sometimes takes weeks to fix bugs. I have always worked in greenfield development, and this is the first time I am doing production support. I would really appreciate some advice."} {"_id": "183851", "title": "Study Doom 3 Source Code", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I want to study the source code of a large project (for example, the Doom 3 source code) and I would like some help determining how I should navigate the code. How should I start reading it? Should I find a main() function and go on from there? Do you have any other tips that may help me?"} {"_id": "225860", "title": "What is the norm for introducing new hires to a code base?", "text": "After college I worked at one company for 6 months, and I've just now joined another one, bringing the grand total in my career so far to two. So the first company had a few hundred thousand lines of code, maybe, across a couple hundred files. There was absolutely no introduction to where to start learning the organization of the code base...just, \"Here, go find the source of this bug.\" It was horrifically intimidating. I did slowly start to find my way around the front end, but 6 months in I was only _just_ starting to be introduced to the server-side code. I was really happy to leave the company...I was relatively terrified of trying to find my way around the back end, which was by far the larger part of the code.
(I'm a front end developer whose work occasionally requires me to make minor modifications to the back end.) But this new company...it's a big one, focused around a website...think Amazon. 2000 employees. I would estimate millions of lines of code, but there's no way for me to know for sure. One of my coworkers showed me a 3-dimensional graph of the file dependencies...I almost fainted. It looked like this: ![enter image description here](http://i.stack.imgur.com/yh6ek.jpg) The coworker was chuckling fondly as he rotated it and zoomed around. Yet again, there is no guide, no introduction, just....here you go, get coding. I am horrified and very stressed. The job seems great in every way besides this, and I am quickly starting to worry that I am not competent, as I seem to be the only person who finds this intimidating. What are other people's experiences with this scenario? Is it just me? What have you done in the past? How long does it take you to get a firm grasp on things? I have asked my coworkers this question and they said \"a couple weeks.\" A COUPLE WEEKS TO LEARN A THOUSAND PLUS FILES, HUNDREDS OR THOUSANDS OF LINES LONG EACH??? Am I incompetent? Just say it."} {"_id": "106168", "title": "How do you approach a new project where the code has already been written?", "text": "> **Possible Duplicate:** > How do you dive into large code bases? I'm about to take on maintenance and enhancements of a fairly large and complex Java EE project with a Javascript front end. I'm trawling through the code trying to work out how it all hangs together, but I wondered if any of you have come up with a methodology for doing this that works best for you? Do you start with the UI and work backwards to the database? Or take discrete 'slices' through the system for particular bits of functionality? Do you take notes? How do _you_ get up to speed with a new project?"} {"_id": "28551", "title": "How do you go about understanding others' code?", "text": "What do you do to understand some code that you didn't write? Or code that you wrote a long time ago and don't remember what it does anymore? Do you have some technique that you follow? Do you analyze the structures first, or the public methods, or do you draw flow charts, etc.? Or do you fire up the debugger and just step through it? Or do you just ad-hoc your way through until you understand it?"} {"_id": "196414", "title": "Prevent code from getting messy", "text": "I am a student and a freelance programmer. These days I am developing software in VB6 which has recently crossed 100KB of source code. The main problem I face is that many times I have to refactor my existing source code, which is really very boring. Also, the size of my code base is not very big (only 100KB). What techniques should I use to prevent my code from becoming a mess?"} {"_id": "129951", "title": "General approach to re-factoring a large, very badly written legacy system", "text": "> **Possible Duplicate:** > Techniques to re-factor garbage and maintain sanity? > What is a good way to refactor a large, terribly written code base by > myself? Really open question here. I'm not after an answer.. only advice. Any past experience people have had with re-factoring legacy systems that they could pass on would be amazing. Here's some information about the software I am to re-factor (you'll cringe!): * Web application * Database driven (MySQL) * PHP4 and PHP5 * Most of the code is PHP4 * Nearly all the code is procedural * Code that is PHP5 isn't OO..
example: a 10,000+ line file with one class and one function * Global variables used everywhere * No source control was used to write the software (you can see this from the comments in the code) * Massive amounts of code repetition * No separation of concerns - user interface and logic are combined everywhere * Application relies on the order of database tables * Few code comments * 250,000+ lines of code * Application in heavy use Basically, the software is our core product and I have been hired to do a major re-factor (amongst other things). It's a massive task and I can't just dive in and fix all the little things.. I need an overall strategy. I've written some scripts to tidy up indentation, removed commented-out code everywhere and made the project into a repo, but now it's time to do the real stuff. I kind of have a vague idea but am not sure how to go about it. I could somehow leave the current code alone and write some layer of software over it that abstracts away from all the horribleness. It would be good if the new layer was some sort of MVC architecture. At the same time I would go into the current code and remove redundancies, because otherwise the new layer would be using bad code anyway and the code could slow down even more. As you can see.. I need some clues/hints/tips/advice/experiences! Thanks very much :)."} {"_id": "134470", "title": "Newbie to Intermediate PHP Programmer - How to Deal with Maintenance Spaghetti Code?", "text": "> **Possible Duplicate:** > Techniques to re-factor garbage and maintain sanity? So I was given this project for a client, but the person who wrote the site originally did a terrible job (a worse programmer than I am). They've got all sorts of kludgy fixes and I'm having trouble figuring out what variables are doing what. Like, for example, one section says this: $id = \"\"; $cuid = \"\"; session_start(); if (isset($_SESSION['id'])) { $id = $_SESSION['id']; $cuid = $id; $valid_user = $_SESSION['valid_user']; $cuname = $_SESSION['cuname']; $website = $_SESSION['website']; $site = $website; $email_contact = $_SESSION['email_contact']; } $cuid = $_GET['cuid']; include_once(\"dbfunctions.php\"); if (isset($_SESSION['id'])) { $cuid = $_SESSION['id']; } elseif (isset($cuid)) { $_SESSION['id'] = $cuid; } Which just seems to repeat the same thing, over and over. How am I, as a newer programmer (about a year of experience, mostly in Python, now with about 6 months of experience in PHP), supposed to deal with this sort of thing? More senior developers can't take a look at it - they've got more major clients right now."} {"_id": "211755", "title": "To rewrite or slowly refactor an old C++ project", "text": "Our team has recently inherited a relatively large project from another company (~250k lines). It was developed using C++Builder and we intend to port the UI side to Qt. Most of the UI code is separate from the business logic (yay!), but the logic side is quite a mess. There is a lot of diamond inheritance going on (virtual inheritance, thankfully), but it makes understanding the code quite difficult. We've got very little documentation to go on; what's there is out of date (comments included).
I generated class diagrams using Doxygen; here's the most complex one (I had to remove most of the class names but kept some of the more important ones, and kept the standard C++ data types and std classes - yes, they inherit from std) ![confusing inheritance](http://i.stack.imgur.com/iEOhW.png) So far we've been able to convert the base program to Qt and we're at a point where we can start converting the program's functionality bit by bit. The problem is: is it worth it long term? We would like to maintain this software as our own long term. Is there a general approach we should take to untangling this kind of inheritance mess, or should we simply redesign from scratch and only keep bits and pieces of the existing code as we go? EDIT: Some more info. Zavior posted a link to an article about why we should not start from scratch, but Ptolemy also brought up some good questions and I'd like to add some information about our situation. The program we have is not 'bug free.' There are known issues, most of which users have workarounds for. There is no 'list' of these issues; one is currently being compiled by talking to all the existing users one by one, as they tend to keep things to themselves. We are all new developers to this project. Our only resource is a developer who started working on the project about half-way through its lifetime. He is available via e-mail/chat mostly. He has also put together some documentation on how some parts of the code work. The program has only been used as an internal tool so far. We would like to make it commercially viable. EDIT 2: One of the most important things we want to do is have the program be in Qt. It's currently using the VCL framework from C++Builder, which no one on our team is familiar with and which we only have 1 license for. It was during my work porting from VCL to Qt that I found the messy code structure and questioned the decision to 'convert' vs redo."} {"_id": "39468", "title": "How possible is it to estimate time for programming projects?", "text": "It seems like it is nearly impossible to get close, because you could run into any number of issues and things not anticipated at first. How close can we be expected to reasonably estimate? Our PM wants to be able to have things like Gantt charts and such, mapping out weeks at a time... So we say we can get these bugs done, and this is how long each will take, and the goal will be Friday, but things get thrown off and pushed into the next week, like every time! How are we supposed to guess the right time?"} {"_id": "61655", "title": "How do you know you're writing good code?", "text": "Serious question here. I love programming. I've been messing around with code since I was a kid. I never went the professional route, but I have coded several in-house apps for various employers, including a project I got roped into where I built an internal transaction management/reporting system for a bank. I pick stuff up quickly, understand a lot of the concepts, and feel at ease with the entire process of coding. That all being said, I feel like I never know if my programs are any good. Sure, they work - but is the code clean, tight, well-written stuff, or would another coder look at it and slap me in the head? I see some stuff here on SO that just blows my mind and makes my attempts at coding seem completely feeble. My employers have been happy with what I've done, but without peer review, I'm in the dark otherwise.
I've looked into peer code review, but a lot of stuff can't be posted because of NDAs or confidentiality issues. A lot of you pros might have teammates to look at stuff over your shoulder or bounce ideas around with, but what about the indies and solo guys out there like me? How do you learn best practices and make sure your code is up to snuff? Or does it not matter if it's \"the best\" as long as it runs as expected and provides a good UX? EDIT: Wow, I can't believe the great response and helpful answers this question generated!!! In the spirit of democracy I have selected the answer with the most upvotes as the correct one -- in reality, I wish I could accept many of these answers, since they all made some great points and helped me expand my mind a bit when it comes to my programming. Thank you to everyone who took the time to answer and comment!"} {"_id": "219834", "title": "Methods for understanding coding relationships and infrastructure within an unfamiliar codebase", "text": "I have recently begun studying a codebase which I will soon be working with. The current codebase has been written by a team of about 5 rockstar developers (whatever that means), and it hasn't really been documented for a newcomer like me. I have a general understanding of what technologies are being used, but not enough to begin coding and adding features. I first need to understand the general architecture that's employed throughout the code, all the basic relationships, and the process flow from input to output, as well as read and write - from the server to the client and from the client to the server. When I'm able to understand their system at this level, developing on this codebase should be much more do-able. I'm not concerned with reading through the code and studying their use of the language. I know that I can Google syntax and a library or an API if I need to figure something out. But what are some approaches to understanding the architecture and all the important relationships in code that doesn't specifically document them? The codebase has been written by advanced-level developers who've used a very top-down method, so how does a newcomer come to understand it from the mind of the developer who initially wrote it? You'll find this quite often in codebases that are hosted on GitHub and such. A lot of code. Little documentation. So where does one begin? What are some suggestions, and what has your experience been?"} {"_id": "193990", "title": "Maintenance code needs improvements", "text": "I am currently maintaining/enhancing a somewhat old project, dating from the 1990s. At least 15 developers have worked on it. Going through the code to understand it is a bit difficult: 1. No coding standards are being followed. 2. OOP concepts are not clear. 3. Too many unnecessary methods and functions, and also unused files. 4. Worst of all, global objects (it is hard to determine when they are created and destroyed). At times I feel I should be correcting the code so it's better to work on, but my PM says we should not bother with it, as we have to finish the tasks within deadlines. This also leads to difficulties in tracking the bugs that occur. Any suggestions on how I should approach it?"} {"_id": "186021", "title": "How should I go about \"overhauling\" a large legacy application?", "text": "> **Possible Duplicate:** > I\u2019ve inherited 200K lines of spaghetti code \u2014 what now? For my next project, I've been tasked with \"overhauling\" a large legacy web application with many parts.
It is a JSP application written in 2004 and it is used heavily by my company. This application was designed badly, in that there is no separation of concerns: no service layer, no DAO layer, no MVC structure. Just a bunch of JSP files loaded with scriptlets containing logic and database queries mixed in with HTML. It's a mess. My boss has defined \"overhaul\" as basically rewriting the entire thing using the technologies we use for our more recent applications, which are: * Maven * Spring * JPA (w/Hibernate) * JSF **AND** adding a bunch of new features. My question is: How do I go about this? What should be my strategy for completely redoing a large application that is currently in use AND adding a bunch of features? Should I rewrite the existing application first, get it working with the updated technologies, and THEN worry about adding new features afterward? Or should I go about it like I'm creating a whole new application and implement the new features WITH the old features? Has anyone done this successfully? If so, what strategies were instrumental to your success? * * * **Edit: More info from comments:** How bad does the design have to be to justify a rewrite? This design is pretty bad. There is tons of business logic hiding in the application. No specs, no logs of requirements changes, no list of bug fixes, no list of open bugs, no test suite, no documentation. And the guy who originally wrote it is now in upper management."} {"_id": "205515", "title": "How to accurately predict release items?", "text": "We are having a disconnect between development and business needs. Business is asking me to produce an accurate list of deliverables for a fixed date, and development, being difficult to predict, is pushing back, saying that it can only produce a list that is 80 or 90% accurate. So my question is, how do we solve this problem? How do we provide an accurate list of deliverables weeks before they are completed and fully tested?"} {"_id": "254915", "title": "How to set more accurate \"deadlines\"?", "text": "Context: My work is to build financial reports using mostly Excel spreadsheets. This consists mostly of formulas and a good deal of VBA. Each report is built by a single person. We service many sub-departments under the finance / accounting department and we often have many projects lined up. I have had a rough time in the last few months because of projects taking longer than expected, then projects overlapping and getting very close to critical deadlines. Question: Obviously, we try our best to plan our projects so that we have a little bit of room to breathe, but I always feel helpless when I have to estimate how long a given project will take me to complete. Are there techniques that can be used to better estimate how long a project can take to complete? Analysis: I am thinking that one way to get a better idea of the length of the project would be to look into our process for getting requirements, which could be infinitely better. It would help to get a clearer picture, but it would still not give us a measurement. Then I thought that maybe isolating and measuring the average development time of the different tasks that need to be done for most reports (e.g. build the queries, write the functions, construct the report, automate the process) would be one way to achieve this, but it does not take into account the variability in complexity of those tasks. Some reports have very simple queries while others are extremely complex. Obviously, the more complex the query, the longer it would take.
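To make the measuring idea just described concrete: a minimal TypeScript sketch of turning recorded per-task averages plus a complexity factor into a rough estimate. The task names, hour figures and the multiplier are all hypothetical, not taken from the question:

    // Hypothetical measured averages, in hours, for the recurring report tasks.
    const averageHours: Record<string, number> = {
      "build the queries": 6,
      "write the functions": 8,
      "construct the report": 5,
      "automate the process": 4,
    };

    // Rough estimate: sum the averages for the tasks involved, then scale by a
    // per-project complexity factor to absorb the variability between reports.
    function estimateHours(tasks: string[], complexity: number): number {
      const base = tasks.reduce((total, t) => total + (averageHours[t] ?? 0), 0);
      return base * complexity;
    }

    console.log(estimateHours(["build the queries", "construct the report"], 1.5)); // 16.5

The single multiplier is the crudest possible complexity model; tracking actuals against such estimates over a few projects would show whether the averages or the factor need adjusting.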
Note: Most of the time we have to deal with legacy code / inefficient reports, which can cloud the waters even more."} {"_id": "67166", "title": "When do code hacks become bad?", "text": "When you begin a new project/function/object, you mostly have an idea of the model you want to build. It can be based on the client's wish, on your ideas for the app, or whatever. In the middle you often realise that your model will not work. There may be new requirements, you didn't think of something, etc. Then you have two options. Either you rewrite your code to work with the new specifications, or you \"hack\" the current code to do what you want. A rewrite is time consuming, and you may need to do it several times, but in the long run it often pays. Hacks are fast and often effective for the moment, but many hacks will make the code really bad, and after a while they may come back and bite you in the behind... How do you determine when to do what? _(Pardon my very non-academic way of explaining this, but I hope you understand what I'm getting at.)_"} {"_id": "98101", "title": "How to organize a one-man project?", "text": "Every once in a while (read: about every day) I come up with a new idea, start a new project in my favorite editor/IDE, start coding, and the next day I delete it and start something new. I've been programming for about six years now, and in those six years I have only really completed one very small project (a Dashboard widget for Pastebin.com). Though this might be great for learning coding, I really want to complete something. What are some things I should do before, while and after the actual coding? What are good resources that teach me how to organize such one-man projects? * * * If it matters, I want to do web or Mac development."} {"_id": "167614", "title": "Is eval the defmacro of javascript?", "text": "In Common Lisp, `defmacro` basically allows us to build our own DSL. I read this page today and it explains something cleverly done: > But I wasn't about to write out all these boring predicates myself, so I > defined a function that, given a list of words, builds up the text for such > a predicate automatically, and then evals it to produce a function. Which just looks like `defmacro` to me. Is `eval` the `defmacro` of JavaScript? Could it be used as such?"}
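To make the quoted technique concrete: a minimal TypeScript sketch that, given a list of words, builds up the source text for a predicate and evaluates it, using the Function constructor as JavaScript's usual stand-in for eval. The word list and predicate shape are hypothetical:

    // Build an "is <word>?" predicate for each word by generating its source
    // text and evaluating it - the JavaScript analogue of the quoted Lisp trick.
    const words = ["apple", "banana", "cherry"];

    const predicates: Record<string, (s: string) => boolean> = {};
    for (const w of words) {
      // Generates, e.g.: return s === "apple";
      predicates["is-" + w] =
        new Function("s", `return s === ${JSON.stringify(w)};`) as (s: string) => boolean;
    }

    console.log(predicates["is-apple"]("apple"));  // true
    console.log(predicates["is-apple"]("cherry")); // false

One difference worth noting against `defmacro`: this all happens at runtime, and the generated predicates are ordinary first-class functions, whereas macros are expanded before evaluation.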
{"_id": "222559", "title": "Is it fair to say that \"macros don't compose\"?", "text": "In this blog post, aphyr (who is a brilliant programmer) states: > Clojure macros come with some important restrictions. Because they\u2019re > expanded prior to evaluation, macros are invisible to functions. They can\u2019t > be composed functionally\u2013you can\u2019t `(map or ...)`, for instance. The classic example of this is: (reduce and [true true false true]) ;RuntimeException Yet we can write: (reduce #(and %1 %2) [true true false true]) ;false **Is 'macros don't compose' a valid claim?** Surely the only valid point is that macros are not first-class functions and shouldn't be treated as such."} {"_id": "226322", "title": "How do International Call Rerouting Apps work?", "text": "Since DTMF seems impossible on Android devices, I really wonder how it is possible to set up a direct dial service like Jinggling (http://www.jinggling.com/how_it_works.html). I realize one registers his phone number, but to what end? Say a customer now dials an international number: the app intercepts the call and reroutes it to their own service, but how does the desired number get transmitted? The whole point is that this works without a wifi/data connection, no VoIP. The only way I can imagine this working is using SMS to exchange auth and other related data. Or am I missing something?"} {"_id": "222555", "title": "Gradual Typing: \"Almost every language with a static type system also has a dynamic type system\"", "text": "Aleks Bromfield makes this claim: > Almost every language with a static type system also has a dynamic type > system. Aside from C, I can't think of an exception. Is this a valid claim? I understand that with reflection or loading classes at runtime, Java gets a bit like this - but can this idea of 'gradual typing' be extended to a large number of languages?"} {"_id": "167619", "title": "TypeScript or JavaScript for noob web developer", "text": "Following the recent release by Microsoft of TypeScript, I was wondering if this is something that should be considered by an experienced WinForms and XAML developer looking to get into more web development. From reviewing a number of sites and videos online, it appears that the type system of TypeScript makes more sense to me as a thick client developer than the dynamic type system in JavaScript. I understand that TypeScript compiles down to JavaScript, but it appears that the learning curve is shallower due to the current tooling provided by Microsoft. What are your thoughts?"} {"_id": "57663", "title": "Does Apple own any rights to software you create with their developer tools?", "text": "Or do you own the rights to any software you create, regardless of whether you used tools supplied by a third party? Thanks"} {"_id": "231123", "title": "I'm struggling with abstracting my animation code in my game using a functional style. How can I do this?", "text": "My game is a top-down 2D shmup programmed in a functional style. I'm struggling with abstracting the code that is responsible for animating the projectiles. There are many types of guns with many types of projectiles. Here are a few that are very different: 1. A **standard gun** that shoots a bullet that travels a distance over time. 2. A **laser** that shoots a line that goes from start to end in an instant. * This is different from the **standard gun** projectile because its projectile doesn't travel over time. It's drawn as a line. Unlike the standard gun, the laser beam animation should last after the projectile hits something. 3. A **psychic shockwave** that forms a circular blast wave around your body and damages anything in range. The size of the circle is variable. * This is different from the **standard gun** projectile because its size is variable and circular. Also, it doesn't move. I'm struggling with how I can use a sprite sheet for this. It seems like the best way might be to draw a circle on the screen. All the other projectiles are sprites. * This is different from the **laser** because it's a circular shape rather than a line. If this were object oriented, the path I'd take would be straightforward: I'd create an `Animation` class that is responsible for animating itself. The client code would have no idea _how_ the animation occurs; it would just pass in the necessary objects in order to get it done. But I'm not sure how to do something like this in a functional style. Right now I represent everything as data. A `projectile` contains `animation` data like so: projectile: { type: \"laser\" animation: ...
} But what feels very wrong is that I have to have one switch to create this projectile, and then in my rendering code, I have to have _another_ switch to decide how to use the data to animate it. E.g.: if projectile.type is \"standard gun\" rotateBulletTowardsVelocityAndAnimateAtPosition(projectile) if projectile.type is \"laser\" rotateLaserTowardsTargetAndAnimateFromStartToTarget(projectile) if projectile.type is \"psychic shockwave\" drawCircleAroundProjectilePosition(projectile) This doesn't seem very abstract to me. What's a better way to code this while still being functional about it?"}
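One functional-style direction the question gestures at, as a hedged sketch rather than a definitive answer: move the type switch into a dispatch table from projectile type to rendering function, so the mapping is data rather than control flow. In TypeScript, with stub renderers standing in for the question's animation functions:

    type Projectile = { type: string };

    // Stub renderers standing in for the question's animation functions.
    const animateBullet    = (p: Projectile) => console.log("bullet", p);
    const animateLaser     = (p: Projectile) => console.log("laser", p);
    const animateShockwave = (p: Projectile) => console.log("shockwave", p);

    // The table is data: adding a projectile type means adding an entry,
    // not editing a switch inside the rendering code.
    const renderers: Record<string, (p: Projectile) => void> = {
      "standard gun": animateBullet,
      "laser": animateLaser,
      "psychic shockwave": animateShockwave,
    };

    function render(p: Projectile): void {
      const draw = renderers[p.type];
      if (draw) draw(p); // unknown types are simply skipped
    }

    render({ type: "laser" }); // logs: laser { type: "laser" }

A variant of the same idea puts the render function directly into the projectile's data, which removes the string lookup at the cost of making the data non-serializable.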
{"_id": "250386", "title": "Difference between Yesod and Ocsigen web frameworks", "text": "I have been looking at web frameworks in functional languages and eventually found Yesod and Ocsigen interesting. As far as I understand, they leverage the type system to statically prevent basic errors such as invalid HTML or internal dead links. My question is: are there practical differences, such as feature focus or maturity, between these two frameworks? What does one have that the other misses? For example, with Yesod, can I write the entire web application in Haskell and have the appropriate component compiled to Javascript? Thanks."} {"_id": "250384", "title": "Unit testing statically loaded data", "text": "Scenario: I have a configuration file containing some structured data that is loaded in at runtime and is **not** modified by the application, but is referenced in many places. There are functions that retrieve specific data from the configuration file (after it's been loaded into memory). I'd like to write unit tests that ensure that the data has not been changed inadvertently by a developer. Is this good practice or overkill? E.g. `Assert(GetDataForKey(\"SomeKey\") == \"MyValue\")`"} {"_id": "250383", "title": "How to prove that following practices and industry standards is profitable, and that investment in refactoring is good", "text": "I started working at a company not long ago, and what I see a lot inside one project is a high amount of low-quality code and custom ideas used instead of industry standards and established best practices. But when I try to talk with the manager about this, I often get answers along the lines of: the guys doing it that way deliver things quickly, and they are good, etc., etc."} {"_id": "105630", "title": "How can I study C# from Stack Overflow", "text": "How can I study C# from Stack Overflow? I know only the basics of the C# language, from some simple exercises; now I want to reach the highest level of C# through Stack Overflow, reading any questions tagged \"c#\". The questions are not ordered from easiest to hardest. Do you have any idea how to learn C# from Stack Overflow?"} {"_id": "141842", "title": "C++ Multithreading on Unix", "text": "I have two related questions: 1. Are there any good books for multithreading in C++, especially now that C++11 contains multithreading in the standard library? 2. I have the Wrox Programming on Unix book (the 1000-page fat red one), and within it, it uses the Unix Thread class. * How does this code relate to boost and the C++11 multithreading libraries? * Is it better/worse/just specific to Unix, etc.? * Is the performance the same?"} {"_id": "141843", "title": "What is the actual purpose of MVC?", "text": "I've seen a lot of stuff that describes how it's done, but not a lot that tells WHY it's done. Is it just a way to keep the code readable, or is there a better reason?"} {"_id": "250388", "title": "Best patterns for variable-scope disposables", "text": "I have a `Client` which uses a disposable `Connection` for talking to a remote service. A `Connection` is somewhat expensive to set up and needs to be Dispose()d properly. I want to allow multiple methods in `Client` to share a connection, like this: using (client._connection = client.Connect()) { client.SyncTime(); client.UpdateUsers(); client.SomeOtherOperation(); } But I don't want to _insist_ on `Client`'s consumers managing connections. They should be allowed to call individual `Client` methods without being aware of or concerned with `Connection`s. Here is the pattern I'm using at the moment. It makes SRP cry. Any suggestions on how I can do this better, or should I be doing this at all? public class Client { Connection _connection; public void SyncTime() { bool wasConnected = _connection != null; try { if (!wasConnected) _connection = Connect(); _connection.ActuallyDoSomeWork(); ... } finally { if (!wasConnected) _connection.Dispose(); } } }"} {"_id": "60012", "title": "Why isn't Lisp more widespread?", "text": "I am starting to learn Scheme through the SICP videos, and I would like to move to Common Lisp next. The language seems very interesting, and most of the people writing books on it advocate that it has unequaled expressive power. CL seems to have a decent standard library. **Why isn't Lisp more widespread?** If it is really that powerful, people should be using it all over, but instead it is nearly impossible to find, say, Lisp job advertisements. I hope it is not just the parentheses, as they are not a great problem after a little while."} {"_id": "146319", "title": "C# open-source framework for multithreaded task management", "text": "I have an API library for C# that provides flexible multithreaded task management of objects on a per-task basis. The library was developed extensively for a project that was later cancelled by the client. I received the legal rights to the library, and the client doesn't care what I do with it now. I don't know what to do with this source code. Can you please explain to me what the steps are in taking source code and making it a successful open source project? How do you attract developers? How will people know the library exists? EDIT: The reason I ask this question is to bring the source code to an open-source state. I'll be adding a full set of unit tests, documentation, examples, etc. That represents more investment of my time, and if there is no payoff (i.e. no one uses it), then what's the point?"} {"_id": "204651", "title": "Will object reuse optimize this often-called function?", "text": "Suppose I have a function that I need to call a lot, maybe a few thousand times on every mouse down or mouse move. It uses an instance of a function (class), called `Transform`: function func1(a, b, c) { var t = new Transform(); t.rotate(a); t.scale(b, c); return t.m[0]; } So I'm creating thousands of new transforms as I call this `func1` lots.
What if, instead of creating `new Transform()`s every time, I created a small system to allocate extra transforms only as they are needed, and re-use them: window.Util = { _CachedTransforms: [], tempTransform: function() { var arr = Util._CachedTransforms; var temp = arr.pop(); if (temp === undefined) return new Transform(); return temp; }, freeTransform: function(temp) { Util._CachedTransforms.push(temp); } } Then instead I could call `func2`: function func2(a, b, c) { var t = Util.tempTransform(); t.reset(); t.rotate(a); t.scale(b, c); var result = t.m[0]; Util.freeTransform(t); return result; } Using `func2` several thousand times, `new Transform` is only ever called once. This might suggest a benefit, but the numbers from jsperf don't seem to suggest any. If you want to see these two functions in action as well as the Transform class, take a look at jsperf: http://jsperf.com/transforms And especially: http://jsperf.com/transforms/2 To simulate it occurring lots during an event, my jsperf test does: var a = 0; for (var i = 0; i < 4000; i++) { a += func1(1, i, 3); // vs func2 } There may be better ways to test if this is advantageous or not. Am I missing something? More broadly, is object reuse like this still a good idea in this scenario, or ever?"} {"_id": "113145", "title": "Is the concept of computational complexity important for software developers?", "text": "I was under the impression that the concepts of time and memory complexity are a must for graduates of compsci courses, but having studied engineering I do not know whether that is the case. I have recently been surprised to interview some graduates of a local college who do not even know the concept. I guess my question is: Is the concept of computational complexity important for software developers? And should it be taught in undergraduate courses?"} {"_id": "146317", "title": "How to fulfill the EASTL license", "text": "I'm integrating EASTL into my game programming framework. I made some changes to it since I have my own memory management sub-system, and now I'm wondering what I need to do to not break the license under which EASTL was released. The source code files include the following notice: Copyright (C) 2005,2009-2010 Electronic Arts, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Electronic Arts, Inc. ("EA") nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY ELECTRONIC ARTS AND ITS CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL ELECTRONIC ARTS OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Since I will probably be releasing the source code on GitHub at some point, but do not plan on releasing any binaries without source code, can I just add something similar to my own source files and be okay with this? Will something like the following do? /* * Copyright (c) 2012 Bartlomiej Siwek All rights reserved. * Based on EASTL (https://github.com/paulhodge/EASTL) - copyright information follows. */ /* * Copyright (C) 2005,2009-2010 Electronic Arts, Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of Electronic Arts, Inc. ("EA") nor the names of * its contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY ELECTRONIC ARTS AND ITS CONTRIBUTORS "AS IS" AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL ELECTRONIC ARTS OR ITS CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ P.S. This is a repost of a legal-like question from StackOverflow where it was found off-topic. If this is off-topic here as well, please tell me where I can ask such a question?"} {"_id": "113142", "title": "LINQ to SQL - Business logic in another assembly?", "text": "So I am trying my hand at this whole tiered application thing with ASP.NET 4. The software I've developed is a maintenance nightmare and it isn't very well organized. I've done some looking around the web and can't seem to find an example of what I'm looking for here. Someone please pick this design apart. My Data Access Layer will be an assembly with LINQ to SQL. I will have a Business Logic Layer that references the DAL assembly and does what BLLs do. Very general and basic stuff. Then I will have a business objects class that does some things that are data-related but are more "work"-type stuff - like generating documents, etc. The web site will interact with the BLL and the business objects. So the dependency picture is: the BLL and the BO both sit on top of the DAL, and the Web site talks to both the BLL and the BO. Am I making this way too complicated?
Should I just put my business logic right in there with the LINQ classes in the DAL? I'm not sure how a DataContext would work in a separate assembly with classes that inherit from LINQ classes. Thoughts?"} {"_id": "185904", "title": "Can my GitHub and SourceForge account share the same repository?", "text": "I like that SourceForge can also let people browse your code using Git. But, before I even set up the project on SourceForge, I had a GitHub repository for it. Now that I have created my SourceForge project, I had the option to use the Git tool. The problem is that when I did that, it created a new repository from scratch. I would like to use my existing GitHub repository with my SourceForge project; is this possible? As of now the SourceForge project has no files, so the button on the summary page, instead of a 'Download' button, is a 'Browse Code' button, but the button leads to an empty repository. So I am wondering, is it possible to link my SourceForge project with my existing GitHub repository, or do I need to remove the 'Code' tool from my SourceForge project and just leave a link to my repository in the description?"} {"_id": "198446", "title": "Are there languages that expand on the STL's iterator types?", "text": "Many languages use the concept of an iterator. The C++ STL expands on this with input iterators, output iterators, forward, bidirectional, random access and others. As far as I know, these distinctions don't exist in other languages. In this talk the author of the STL, Alexander Stepanov, talks about iterators (among other things), and mentions the idea of 2-dimensional iterators. I haven't seen these anywhere. Question 1: Are there other languages/libraries that expand on iterators and/or use 2-dimensional iterators? Question 2: Could anyone point me to some good resources on iterator theory (if that is what it's called)?"} {"_id": "191147", "title": "Efficiently compute distance of location to POIs?", "text": "I want to write an Android application that holds information about where (climbing) rocks are located in a wider area. I want to display all rocks near my current location (say below 500m) and maybe update the displayed distance to them (straight-line distance) in real time. Naively computing the distance between my location and every rock will not be efficient with a rock database of thousands of entries. For this particular case the use of online services is ruled out, as they are often not available at all. I've heard about some location-aware databases. Does anyone have experience with them (under Android)? Is it efficient enough to divide the map of rocks into quadrants and enrich the rock database with this information? (Then compute the quadrant of the current location and only compute the distances of the rocks in the nearby quadrants?) I tried searching for an answer to this question (I'm sure others have had this question too) but could not find anything appropriate. **tl;dr** How do I **efficiently** find points of interest near my current GPS location?"} {"_id": "191141", "title": "Server distribution for high performance", "text": "I've developed a Socket Application on top of TCP in .NET C#, which allows many clients to send files to one another via a VPS I'm using. Most file transfers will occur between people in the same region, say, even in the same neighborhood. So, if 300 clients are connected, some of them will be connected from, say, Europe, others from, say, the USA, etc... Those who are connected from Europe will never try to send files to those in the USA.
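The rough idea I have for picking a relay is plain region affinity - both peers get pointed at the relay for their region, something like this (a hypothetical Java-flavored sketch, not code from my actual app; the hostnames are made up):

import java.util.Map;

class RelaySelector {
    // Hypothetical region -> relay endpoint table
    private static final Map<String, String> RELAY_BY_REGION = Map.of(
            "EU", "eu.relay.example.com",
            "US", "us.relay.example.com",
            "ASIA", "asia.relay.example.com");

    // Both peers are in the same region by assumption, so one lookup suffices
    static String relayFor(String region) {
        return RELAY_BY_REGION.getOrDefault(region, "us.relay.example.com");
    }
}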
What matters to me the most is scalability (hopefully, I will have thousands of users connecting simultaneously), and low latency (responsiveness) when it comes to upload/download transfer rates. Something tells me that if I want the files (up to 2MB) to be transferred quickly between my clients, I should get a VPS in Europe, the USA, Asia, etc. In this way, users will get higher transfer rates, and in case one of the servers fails, they will be able to use the other one(s). Besides, I should have a separate database for user info/statistics to which all of the servers connect when needed. My question is, what is the common practice for such usage and requirements? Any kind of clue/terminology I should start to get familiar with will be highly appreciated. Thanks"} {"_id": "204711", "title": "How to justify migration from Java 6 to Java 7?", "text": "We were migrating from Java 6 to Java 7. The project is behind schedule and risks being dropped, in which case it will continue to use Java 6. What are the specific improvements in Java 7 that we could go back to our manager with and convince him it is important to use JDK 7? I am looking for bug fixes that I could highlight in Oracle Java 7 (with respect to Java 6). Fixes in security, performance, Java 2D/printing, etc. will be more sellable in my case. Compiler fixes, for example, will not be of much use. [I am going through many sites like the Oracle adoption guide, the bug database, and questions on Stack Overflow.] Update: Thanks for the answers. We rescheduled the update to the next release. The closest we got was security. Accepting the highest-voted answer."} {"_id": "231194", "title": "Is it generally a good idea to work with JDK 6 instead of JDK 7?", "text": "As far as I know, there aren't a lot of differences between JDK 6 and JDK 7. At least, I haven't yet come across a difference (I coded with JDK 7 and with JDK 6). A lot of computers run JRE 6 or JRE 7. Compiling my program using JDK 7 will narrow the number of potential computers that can run my application. Is it generally a good idea to always program using JDK 6 and compile my program for JRE 6? Is this something common? Or is it generally a bad idea?"} {"_id": "210320", "title": "Organisation of $(document).ready", "text": "I have some code that looks something like this, except rather than 2 sections, there are about 20, and they have real code in them: /*------------ * Contents: * 1. Load slider * 2. Check form ------------*/ /*------------ * 1. Load slider ------------*/ var slider_id = 101; var num_slides = 10; $(document).ready(function(){ // Initiate document ready slider }); /*------------ * 2. Check form ------------*/ var fields = ["text","number","textarea"]; $(document).ready(function(){ // Do some document ready form stuff }); I'm trying to keep everything neat as there is going to be a handover at some stage. My question is should I do it like I have done, or something like so: /*------------ * 1. Load slider * 2. Check form ------------*/ /*------------ * 1. Load slider ------------*/ var slider_id = 101; var num_slides = 10; /*------------ * 2. Check form ------------*/ var fields = ["text","number","textarea"]; $(document).ready(function(){ /*------------ * 1. Load slider ------------*/ // Initiate document ready slider /*------------ * 2. Check form ------------*/ // Do some document ready form stuff }); So, do I minimise document ready calls, or group associated content together? Just self-taught, so I have no idea about these kinds of best practices.
Thanks"} {"_id": "198442", "title": "Storing escaped mysqli characters in database but outputting them correctly with htmlspecialchars()", "text": "I have a database sanitizing function that I use when the user enters some data into my website. I escaping the mysqli characters with mysqli_real_escape_string(), but I wish to also output the content with htmlspecialchars(). The problem is, let's just say the user enters something with a single quote; that will be escaped by the sanitize function and will be output as \\'. Is there any way to store it like that, and then output it as '?"} {"_id": "210322", "title": "Generic service control interface", "text": "I need an interface to a back-end service, mostly for control commands (stop, status, cancel, reload config). The service might be in Python, Perl, Java, or whatever, and runs continuously. The interface will let me send infrequent commands to the running process. A signal handler can tell a process to stop, and USR1 and USR2 signals let it do two more things. Tomcat sets up a special listener on a fixed port number to get it to shutdown, perhaps due to limitations of Java signal handling. But I'd like it to respond, not just react. And I'd like to use this for a service written in any language on any platform. I'm hoping to avoid middleware or databases to communicate, since it's a simply request/response (Q: \"how are you?\" / A: \"I have been running for 3 hours at peak efficiency\"). I glanced at these but they seem like overkill for a simple control interface: * dBus \\- general interprocess message bus * UPnP \\- device communciation * Avahi \\- DNS queries to find services * Hadoop YARN \\- Distributes work across a network, parallel processing Each seem compelling, but they feel heavy weight or else require special software to be installed ahead of time, or aren't for every language or every platform. If I wrote my own, I could write a single-threaded TCP listener with JSON requests/responses. That seems about as lightweight and universal as you can get, and is small enough for a \"hello world\" service, and big enough for speaking to any service. JSON over TCP would be very command-line friendly (curl or netcat), and therefore easily scriptable. Lots of languages already have a JSON library (C++, Java, etc), in any platform. A harder approach is adding HTTP semantics to the JSON request/response so you could browse to your application or use curl to control it. But embedding HTTP seems like a lot of space and complexity when it will only support a few resources (PUT text/plain \"true\" to /stop to shutdown your app, GET application/json from /status to see how it is running). Frankly, I'm surprised nobody invented a standard service control interface for simple things like basic control commands. At work we used CORBA and registered in a naming service and then had to deal with Java/C++ ORB connection issues and it seemed like tons of code for something that should be simple."} {"_id": "251385", "title": "Is there a name for this anti-pattern? (Variable has context-dependent meaning)", "text": "I'm working on a legacy system with troubling behavior - it uses the same identifiers to signify different things in different places. The system in question is a CRM system. It uses special codes for each subscriber's account to signify a bundle of features they've purchased access to. These codes are transmitted to licensing servers, which activate the appropriate features. 
However, the same code means different things depending on which licensing server happens to serve the subscriber. For example: * Code 241 means the subscriber has access to features 1-10, 12, and 20-50 if they are served by licensing server A. * Code 241 means the subscriber has access to features 1-20, 33, 34, and 50 if they are served by licensing server B. Clearly (to me, at least), it would be better if each code meant one thing. That is, 241 should mean (1-10, 12, 20-50) everywhere and new code 242 should mean (1-20, 33, 34, and 50) everywhere. Is there a name for the problem of out-of-band data being required to uniquely specify something?"} {"_id": "251387", "title": "JavaScript Combination Inheritance Pattern", "text": "In the chapter of Professional JavaScript for Web Developers on OOP, Nicholas Zakas describes a JavaScript inheritance pattern which he refers to as _combination inheritance_. The basic idea is that methods are inherited via prototype chaining, whereas parent properties are passed on to descendants by calling the parent constructor within the child constructor. A small example: function Parent() { this.parentProperty = true; } Parent.prototype.parentMethod = function() { console.info(this.parentProperty); } function Child() { Parent.call(this); } Child.prototype = new Parent(); He concludes his discussion by saying that "combination inheritance is the most frequently used inheritance pattern in JavaScript." However, in his following discussion on the prototypal inheritance pattern, he mentions a shortcoming of combination inheritance, namely that the child type invokes the parent constructor twice: once when creating the prototype and then again in the constructor. My question is: why does the combination inheritance pattern suggest extending the prototype by having the child's prototype be an **instance** of the parent type? Why shouldn't it just reference the parent's prototype directly? Using the previous example, why not just do the following: Child.prototype = Parent.prototype;"} {"_id": "6815", "title": "How can I maintain my technical skills after becoming a project manager?", "text": "As I advance in my career, I have found that I do less technical work and more project management work. I joke that I am getting dumber every day. Each time I go back to doing technical work it seems to be a little harder to get things going. What suggestions do people have for maintaining technical expertise throughout your career?"} {"_id": "223784", "title": "Where does Windows Workflow Foundation fit into the messaging oriented middleware architecture?", "text": "I've been trying to find information online about where WWF 4.5 fits within an architecture based around an enterprise message queue using central broker(s) (RabbitMQ in particular). It seems like responsibilities overlap a bit since WWF offers the ability to pause-and-resume (persist/restore) workflow instances. I'm wondering if I'm erroneously equating this pause-and-resume WWF functionality to the general message queuing concept. Having looked at MassTransit, in particular a code listing of a sample Saga showing parallel code "activities" that then "combine" when completed... it seems like WWF, excluding the pause-and-resume functionality, is in this context better compared against MassTransit...
Is this an accurate assessment?"} {"_id": "24147", "title": "How would you introduce an agile methodology like scrum?", "text": "If you've found agile and walk into a workplace that doesn't particularly follow any methodology and they are resistant to change (as most people usually are), how would you introduce an agile methodology like scrum? **NOTE:** * Well, I've phrased it as a hypothetical question, but it isn't. * I'm not very confident about Agile myself"} {"_id": "223786", "title": "Can I share ~1k of dynamically updated data between HTML5/JS pages using only apache2?", "text": "I have an HTML5/Javascript web site. There is a form which updates JSON data. There are other pages which I would like to load that JSON data into dynamically. I know how to do this via Tomcat/JSP but I'd like to keep this site solely on apache2. Is there a way to persist and read the JSON data? It is ok if the data is temporary and is lost upon an apache2 bounce."} {"_id": "37307", "title": "As a self-taught programmer, how do I get the academic foundation without attending school again?", "text": "I've made a pretty good living as a self-taught programmer, but when I discuss some low-level fundamental topics with my peers who have a CS degree, holes appear in my knowledge. I'm a big picture (architecture) guy, so for a long time this hasn't bothered me, but lately I've wondered if there is an approach I can take that will help me learn these fundamentals without going back to school. Are there books, websites or videos that you can recommend that would give me a ground-up perspective as opposed to a learn-it-as-you-need-it mentality?"} {"_id": "223780", "title": "How to trace logical errors in algorithms", "text": "I am a beginner in algorithms. Last year I participated in Google Code Jam. One of the major issues I faced during the competition was that my code worked fine on my own test cases, but when I submitted it against a large number of test cases, it failed to pass them because of some logical error. So, my question is basically: how can I trace logical errors in my code in situations where it fails some test cases?"} {"_id": "233033", "title": "How can I deal with a slow API in PHP?", "text": "I'm writing a public web app to get stock data from a Magento store. I've accessed the data, but it turns out that I have to query each product individually for stock data. With thousands of items this is a resource hog and takes lots of time. This cannot be avoided. It's just how their API works and I have to do it this way. How can I code something in PHP that runs in the background, harvesting the API data by nibbling at it in a calmer, more friendly manner, and not hammering their shops with SOAP API calls? Am I right in thinking that I need to use a background process like this: $pid = shell_exec(sprintf('%s > /dev/null 2>&1 & echo $!', $command)); ...and then query that $pid? Store them in a database? Are there any examples of background process managers?"} {"_id": "187415", "title": "Legitimate reasons for circular references in C++", "text": "I have a project written in C++ that I am working on which has a parent-child relationship where each child has only one parent. I had previously decided after looking at this post that I would make the children know nothing of their parents. A little background on my project before I go into my question: The parents in this situation are objects representing a collection of interlinked computational "blocks".
The parent (The \"Model\" class) takes in one or more input values, runs them through its blocks, and the blocks return back one or more output values. The child objects are the Blocks and a Block is owned by exactly one Model. The issue is that a certain kind of Block can contain a Model so that I can nest them and re-use models. Although most blocks know nothing of the Model class, the ModelBlock (inheriting Block) knows about Models and is allowed to encapsulate a Model. **The Question:** The above seemed to be working pretty well, at least in the console version of my application. However, I need a GUI to be able to create any large Model without losing my mind and so I have started making one. The issue has reared its ugly head again when I realized that the Blocks in general need to know about their Model so that ModelBlocks can inform a Model when it is being used inside of another Model. This is so that I can easily find \"orphaned\" Models that are not the root model and not used in any other context (meaning they will never be run). I guess I could make it specific to ModelBlocks, but if I'm going to do something like that, why not make it apply to all the blocks? In C#, its seemingly not a bad practice to use a circular reference to accomplish this (Entity Framework does it up the wazoo...it wreaks havoc with serializers). However, it seems very taboo in C++ to use circular references. So, I am wondering if the above is a legitimate use. Were I to do this, Blocks would be informed of their parent Model's destruction so that they know if they are orphans. Even if the above isn't, are the legitimate uses for circular references in C++ since they seem to be relatively common other languages (of course, that's assuming that those programs in those other languages using circular references don't have serious design issues)? If I didn't explain my question well enough just let me know and I'll try to clarify. EDIT: I should mention that the actual implementations are separate from all this since I am using interfaces (classes with all pure virtual functions) to define how a block looks or how a model looks. Here is a partial class diagram with my proposed changes: http://oi50.tinypic.com/16c8cwn.jpg Here is the source: https://github.com/kcuzner/Simulate/tree/develop. If you feel like building it, it depends on Qt >= 4.7, boost >= 1.52. The projects are Qt-Creator projects. Console and GUI depend on Engine."} {"_id": "223789", "title": "Is Convention Over Configuration \"Knowledge in the World\" or \"Knowledge In Your Head\"?", "text": "In Don Norman's seminal work \"The Design of Everyday Things\", the author coined the phrases and explains the difference between \"Knowledge in the World\" and \"Knowledge In Your Head\"; an example of this is a multi-switch light panel that can either incorporate \"Knowledge in the World\" by being a model/map of the room, with the switches in the corresponding location, or \"Knowledge In Your Head\" (which is how they are almost always designed/implemented) when you have to memorize which switch toggles which light. You could think of \"Knowledge in the World\" as something you can deduce by using observation and logic, and \"Knowledge in your Head\" as something that has to be memorized. 
In the world of DI, and \"Auto-Registration,\" which relies on the \"Convention Over Configuration\" pattern, would using this process be a case of the programmer using \"Knowledge in the World\" (the functionality is expected, as the framework provides it) or is it \"Knowledge In Your Head\" (the programmer has to be aware of and learn the convention). I am taking the wishy-washy/middle-of-the-road view that Convention Over Configuration is a midway world twixt the two. Am I wrong?"} {"_id": "187410", "title": "Marked nodes in Fibonacci heaps", "text": "I don't understand why Fibonacci heaps have marked nodes (picture). A node is marked when its child is deleted. Quoting from Wikipedia: \"[Keeping degree of each node low] is achieved by the rule that we can cut at most one child of each non-root node. When a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree.\" Why do we need to do that? Why not just leave the node where it is after the second child is cut? The heap structure is not violated. I don't see the point of this optimization."} {"_id": "117628", "title": "Why are wrapper classes not suited for use in callback frameworks?", "text": "I just read the question what are callback frameworks?, where the asker cites the following from Effective Java: > The disadvantages of wrapper classes are few. One caveat is that wrapper > classes are not suited for use in callback frameworks, wherein objects pass > selfreferences to other objects for subsequent invocations (\u201ccallbacks\u201d). So I am curious, why are wrapper classes not suited for use in callback frameworks? And can you provide an example of what the problem is?"} {"_id": "59959", "title": "What is a \"cross-functional team\" actually?", "text": "The general meaning of \"cross-functional team\" is a team which combines specialists in different fields that are required to reach the goal. But it looks like in Agile cross-functionality means not only combining different specialists, but making them mix. Henrik Kniberg defines cross- functional team this way: \"Cross-functional just means that the team as a whole has all skills needed to build the product, and that each team member is willing to do more than just their own thing.\" But where is the line drawn? Is it normal to ask developers to become testers for an iteration if it is required?"} {"_id": "37079", "title": "Unobtrusive JavaScript (regarding asp.net mvc3) not clear", "text": "Unobtrusive JavaScript avoids injecting inline JavaScript into HTML. This makes your HTML smaller and less cluttered, and makes it easier to swap out or customize JavaScript libraries http://www.asp.net/mvc/mvc3#BM_JavaScript_and_Ajax_Improvements what really is it ?( i am really trying to go deeper in the above article) can some one explain in simpler words"} {"_id": "59954", "title": "Are there real world applications where the use of prefix versus postfix operators matters?", "text": "In college it is taught how you can do math problems which use the ++ or -- operators on some variable referenced in the equation such that the result of the equation would yield different results if you switched the operator from postfix to prefix or vice versa. 
**Are there any real world applications of using the postfix or prefix operator where it makes a difference as to which you use?** It doesn't seem to me (maybe I just don't have enough experience yet in programming) that there really is much use to having the different operators if the difference only shows up in math equations. EDIT: Suggestions so far include: 1. function calls //f(++x) != f(x++) 2. loop comparison //while (++i < MAX) != while (i++ < MAX) 3. operations on objects where ++ and -- have been overloaded"} {"_id": "146869", "title": "Does the use of the Comparator interface break encapsulation in Java?", "text": "According to the essay "Object Calisthenics" by Jeff Bay in the book The ThoughtWorks Anthology, the use of getters and setters should be avoided as they break encapsulation, and we should instead ask the object to do something for us instead of digging into the object and getting its fields. How about comparison of objects? I have to find the best object, say a Resort, for the customer on the basis of a rule that uses a few properties of the Resort object. This rule can change in the future. So I find the use of the Comparator interface better compared to the Comparable interface. But then I will have to use getters and setters to access individual properties to compare two objects in a different class."} {"_id": "11846", "title": "How large is ok for a Class?", "text": "I'm a long-time developer (I'm 49) but rather new to object-oriented development. I've been reading about OO since Bertrand Meyer's Eiffel, but have done really little OO programming. The point is every book on OO design starts with an example of a boat, car or whatever common object we use very often, and they start adding attributes and methods, and explaining how they model the state of the object and what can be done with it. So they usually go something like "the better the model the better it represents the object in the application and the better it all comes out". So far so good, but, on the other hand, I've found several authors that give recipes such as "a class should fit in just a single page" (I would add "on what monitor size?" now that we try not to print code!). Take for example a `PurchaseOrder` class that has a finite state machine controlling its behavior and a collection of `PurchaseOrderItem`s. One of the arguments here at work is that we should use a simple `PurchaseOrder` class, with some methods (little more than a data class), and have a `PurchaseOrderFSM` "expert class" that handles the finite state machine for the `PurchaseOrder`. I would say that falls into the "Feature Envy" or "Inappropriate Intimacy" classification of Jeff Atwood's Code Smells post on Coding Horror. I'd just call it common sense. If I can issue, approve or cancel my real purchase order, then the `PurchaseOrder` class should have `issuePO`, `approvePO` and `cancelPO` methods. Doesn't that go with the age-old "maximize cohesion" and "minimize coupling" principles that I understand as cornerstones of OO? Besides, doesn't that help toward the maintainability of the class?"} {"_id": "137185", "title": "Code sharing with a newbie coder in our group?", "text": "I'm working on a group project for school, and it's just me and a teammate. Well, he is really behind in coding skills, and I was wondering if I should still give him access (email a copy) to the source I've been working on solely. So far he hasn't done anything except a basic TUI.
Is it advisable to gently tell him to "catch up" to the coding level first?"} {"_id": "197667", "title": "Dependency injection and ease of use", "text": "I'm writing a handy library (we'll call it Thinger) that goes off and fetches an XML document, does some XPath query on it and does something helpful with the result of that. (What I'm actually doing is far too boring to bother you all with, this is just a simplified example.) Because I'm a wise and lazy programmer I'm making use of two third-party components, one for fetching remote XML documents (we'll call it Fetcher) and one for running XPath queries (we'll call it Querier). My library provides a Thinger class, into which one must inject a Fetcher and a Querier. I could do this via setters or a constructor. But then I think about the poor people using my library, who now have to think about injecting my dependencies when all they really want to do is call something like (new Fetcher())->fetch($someUrl); So far, I have written a static factory which instantiates a new Thinger, Fetcher and Querier, assembles them as appropriate and returns the Thinger. That left me feeling slightly unclean. Surely there's a better way? EDIT: Most frameworks, etc., would probably get around this using a Dependency Injection Container or similar, but since I'm writing a fairly small third-party library, that seems like a bad approach."} {"_id": "189957", "title": "What is difference between publisher-subscriber and reactor patterns?", "text": "The publish-subscribe and Reactor patterns look very similar to me. How are they different? In both patterns a message gets passed to subscribers indirectly (listeners in the Reactor pattern). I feel the Observer pattern is very similar to the other two patterns too. What are the key differences between those patterns?"} {"_id": "197663", "title": "Scratch - why do schools teach students a language that is not used anywhere else?", "text": "Why do schools teach Scratch instead of more commonly used programming languages (C, C++, Java, C#, Python, etc.)?"} {"_id": "197668", "title": "Defining a status between last check and now", "text": "I have sets of probing data from an internal monitoring tool which represent the availability of different services (databases, webservices and so on). Now my task is to visualize this data and I have reached a point where I must make a decision concerning the data interpretation for the most recent data set. **Quick overall definition**: each data set reflects the given status at the time when it was verified, and the probing intervals vary from service to service, so there is no constant time window. Hence if there is an interval of, e.g., five minutes between sets A and B, for these five minutes (assuming there is a change with B) the status of A is always assumed. So far no problem; my question now, however, is what should be assumed for the time between **now** and the **most recent** data. I see two solutions: * Only visualize data up until the most recent check, and ignore the time passed since then, or * Assume the status from the most recent check is still valid up until now (like between two given data entries) I could see arguments for both and hope someone can point me in the right direction as to which approach would be the most logical one."} {"_id": "138987", "title": "Do I need to test everything?", "text": "I'm going to start my first real project in Ruby on Rails, and I'm forcing myself to write TDD tests.
I don't see real advantages in writing tests, but since it seems very important, I'll try. Is it necessary to test **_every_** part of my application, including static pages?"} {"_id": "211054", "title": "How to design a task scheduler (like cron) with a Calendar Queue", "text": "I've been working on a dynamic task scheduler based on a Calendar Queue structure and I've hit a bit of a wall. The Calendar Queue lets me enqueue and dequeue events; however, I think I might not be understanding the pseudocode correctly, and I can't seem to find any pseudocode referencing the application loop and how the application waits for the correct time before executing the task. Here's a naive solution for what I'm talking about: Let calQueue be a CalendarQueue populated with events. while(1){ time = DateTime.now nextTask = calQueue.dequeue() if(time < nextTask.time){ sleep(nextTask.time - time) } executeTask(nextTask) } Obviously this has many problems: 1. many events may occur at the same exact time 2. the executeTask method may take a significant amount of time to execute, which means events might be missed. 3. I'm using a database-backed CalendarQueue, and so the dequeue method may take a not insignificant time to complete. 4. This also ignores tasks that were added to the calQueue that must execute before the next task, i.e., the task was added while it was sleeping, for immediate execution. Other considerations: 1. I would like to eventually make this a distributed (clustered) scheduler. 2. I've looked at cron and fcron but honestly they are both pretty large applications, and somewhat difficult to trace. Basically what I'm looking for is how I could design a CalendarQueue scheduler."} {"_id": "211056", "title": "Main method templating", "text": "Now that I've gotten into a dependency injection groove, I find main methods for different applications all look basically the same. This is similar to stuff you might find in the Guice documentation, but I'll put an example here: public class MyApp { public static void main(String... args) { Injector inj = Guice.createInjector(new SomeModule(), new DatabaseModule(), new ThirdModule()); DatabaseService ds = inj.getInstance(DatabaseService.class); ds.start(); SomeService ss = inj.getInstance(SomeService.class); ss.start(); // etc. } } For multiple applications I have main methods that all look just like that. But now, let's say I want to make a brand new app that, say, reuses my `DatabaseModule`. I pretty much have to copy and paste the `main` method and make appropriate changes... I view copy-paste as a code smell. Further, let's say I realize I should probably be putting shutdown hooks in; now I have to go through and change every main method in all my applications to attach shutdown hooks to each service. Is there some good way to template this process and minimize the boilerplate?"} {"_id": "68348", "title": "In DVCS with production and development branches, what's the master/default branch?", "text": "If you're following the standard DVCS methodology of having a production branch and a development branch, I'm really interested in what you do with the default (Mercurial) or master (Git) branch: * Do you delete it (not recommended by several guides)? * Do you use it as production? * Do you use it as development? * Or do you leave it empty and have separate dev and production branches?
Note: I'm really interested in Mercurial specifically, but any other DVCS or just general rules would be ok"} {"_id": "211053", "title": "Why isn't `length` generic by default?", "text": "Haskell often provides two versions of a given function f, i.e.: f :: Int ... genericF :: Integral i => i ... There exist many standard library functions with those two versions: length, take, drop, etc. Quoting the description of `genericLength`: > The genericLength function is an overloaded version of length. In particular, instead of returning an Int, it returns any type which is an instance of Num. It is, however, less efficient than length. My question is: where does the efficiency loss come from? Can't the compiler detect that we are using `genericLength` as an Int and therefore use `length` for better performance? Why isn't `length` generic by default?"} {"_id": "211052", "title": "a lot of small objects - OO pasta", "text": "In the code I am working on, there are a lot of really small objects like: class HasFieldLameSetter { public: HasFieldLameSetter(field& p_) : m(p_) {} void set(bool p2) { m.hasLame = p2; } field& m; }; Having lots of small classes creates a hard-to-read and complicated "code pasta". Sometimes, reading it is really, really hard because I spend a lot of time jumping from file to file to find out that the class did something trivial, like setting a bool to true as in the example. In addition, those objects are being passed around everywhere by "dependency injection", which makes reading it even more difficult. **How do I persuade the author of the code to write slightly bigger objects?** In my opinion too many small objects is just a nightmare for programmers. Am I missing something, or is there a mistake in my thinking? I would be happy to read any papers that might change my point of view."} {"_id": "157240", "title": "When Rob Pike says "Go is about composition", what exactly does he mean?", "text": "From Less is Exponentially More > If C++ and Java are about type hierarchies and the taxonomy of types, Go is about composition."} {"_id": "184298", "title": "Understanding open and listening ports", "text": "I developed an app in Java (which is working perfectly; with this app you can scan TCP/UDP ports - for testing purposes only), but meanwhile, when I was writing the code, I read several pieces of documentation (wiki) about sockets. I thought an open port meant a port where the client/server can establish/bind a connection even if there is nothing on the server side which can handle/respond to data. I thought an open port meant (if there are no routers/firewalls) a port which is ready to be used. I was testing my application on my local server machine, but at first I thought my application was not working. It was working, but there was no software which could respond to my queries. After I started my APACHE server and wrote a simple UDP server (which can handle/respond to any query on a specific port), the application was able to find those ports. I thought a listening port was a port which is in use and open. What does a listening port mean? If I send UDP packets to a remote host and there are no closed ports (router or firewall), what happens to the packet? Is the data written to a buffer? Or is the packet simply ignored/denied?"} {"_id": "184299", "title": "Is it better to create a stored procedure or entities to get to the data I need?", "text": "I just jumped into a new project with a new company using Entity Framework and ASP.NET MVC 4.
I am no expert on Entity Framework, but I think I have a decent grasp of how to use it. From what I can tell, my models should reflect a table in the database, omitting the columns I don't need. Also, we can use stored procedures with Entity Framework, or an IoC to map the model. I need to connect the relationship between two objects, whose relationship is pretty nested. I could create a stored procedure to do this, or create a bunch of models (based on the tables in the database) and use an IoC configuration to have a table from the database for each model, then query the models in the repository layer for the data I need using LINQ to Entities. Seems like I would have a lot of ".Include(x=>x.SomeModel)" My question is: which option is better for maintenance and integration in the future of the project? A stored procedure or LINQ to Entities? Or am I not understanding Entity Framework properly?"} {"_id": "184290", "title": "How to connect several (mobile) devices", "text": "I want to start a little project where I want to connect several of my devices. Some are Android mobile devices, others are desktop devices like a PC or laptop. Furthermore, I want to keep the project as generic as possible. That means I want to share it, so other people can use it. By connecting I mean sending messages between them. My problem is that I'm not sure what technology or architecture to use, and I hope to get some advice from you. (I think this question is more about software architecture than gorilla vs. shark.) I have considered several approaches already. I looked at Google's Cloud Messaging, but that seems not to fit my requirements, where several users can register to send independent messages. It looks more like sending from one master to several devices. The next thing I thought about was something like what VLC did with its Android remote app, where the desktop application hosts something like a server to which the mobile app must connect. This seems to be limited to the LAN and only fits about 80% of my use cases. Is there another approach which does not require something like a server that is aware of all clients, separates the users' devices, and routes the messages?"} {"_id": "108105", "title": "What should I understand before I try to understand functional programming?", "text": "I've been reading `Learn You a Haskell for Great Good` because I thought it looked interesting. Of course, I'm completely sold on the idea that functional programming will end world hunger, bring about world peace, send crows to peck out the eyes of my enemies, bring many young women with good birthing hips to my beck and call, double my order for free, and just generally improve my life in every way, shape, and form with zero drawbacks whatsoever. I am pretty well-versed in imperative paradigms. I learned Java and C++ before I dropped out of university, after which I taught myself Python. My problem is that even beginner tutorials get way over my head pretty quickly, and I'm not sure if there's something specific I'm missing or not understanding, or if I just can't quite wrap my head around the whole paradigm just yet. I made it all the way to Section 4.7 of the Haskell Tutorial for C Programmers before I couldn't understand it anymore. Am I missing something foundational (like those last two years of university)? Or should I just keep going slowly and hopefully it will click?
Put another way, what information would help me best bridge the gap between an imperative paradigm and a functional paradigm?"} {"_id": "108101", "title": "a better approach for reviewing performance of developers?", "text": "I am a web developer. My office sets a list of criteria such as * Discipline * Attendance * Project Schedule * Teamwork * Problem Solving * Idea Sharing * Dedication for evaluating employees' performance. Each criterion is allocated some points and the review is based on the total points scored in each criterion. My problem is I feel these criteria are quite vague and I don't think they can actually portray an employee's performance. In software development, I think there are other ways to measure an employee's performance. Any suggestions? Thanks"} {"_id": "220846", "title": "Product backlog acceptance criteria", "text": "I read on a similar forum http://stackoverflow.com/questions/8117530/examples-for-acceptance-criteria-for-a-user-story about putting in the acceptance criteria for a product backlog item. I am working on a Scrum-based project and I need more info, as my backlog item is as follows - UserProfiles (I am using this in the title field in TFS 2013) and the description contains the "As a user I want to be able to create new users and assign permissions" story. The acceptance criteria talk about how the user interacts with the UI (as discussed in one of the answers at the above link - which is very logical). For example: * As the user clicks on the home button, the system will bring up 3 options * And then the user can enter their user name Is this the right approach - using the title for the PBI and putting the actual user story in the description field in TFS? Thanks in advance"} {"_id": "136582", "title": "Extensibility data model pattern", "text": "I was wondering how you'd be able to map the following criteria to common design patterns. I use PHP 5.3 and MySQL 5.5 and have my own MVC framework for my company, but some parts could be better, and here are the requirements. (Take the concept of a catalog, since it's the first app that would be affected in our application pool.) The controllers (products, clients and basket) all manage different information, namely their own set of models. For example, the basket will create basket objects in memory, assign products to it, quantities, etc. Then it will save the basket to the database, try the payment logic and save the payment associated with the basket. My issue is that most of the time, our clients want to customize the different objects such as the client (add more coordinate fields or descriptive fields) or the product (add custom properties to the product). If my controller always refers to the class product but I want to extend it to create: MyCompany_Models_Product MyClient_Models_Product extends MyCompany_Models_Product How will I set up my system to support extension of classes and change the classes used by my different modules, controllers, views and other models? If I hardcode MyCompany_Models_Product as the class name in the controller and all other objects, how would I go about creating an overridable system? Is there any pattern I can go for that will solve this problem?"} {"_id": "170137", "title": "Execute a Managed bean from a JSF view in WEB-INF folder", "text": "We are initiating a Spring + Primefaces project and the first problem we have encountered concerns storing the XHTML pages in the WEB-INF folder.
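The bean behind the form is nothing exotic - a plain managed bean with an action method, roughly like this (a simplified, hypothetical stand-in for our actual bean):

import javax.faces.bean.ManagedBean;

@ManagedBean
public class UserBean {
    public String save() {
        // ... persist the user here ...
        return "success"; // JSF navigation outcome
    }
}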
When we use a faces form in a view located inside the WEB-INF folder, the `commandButton` does not execute the managed bean method. In fact, we think the problem is that with JSF the pages are rendered using a link to the same page as the action of the form, so if the page is located in WEB-INF it is not publicly accessible. We know that having all our XHTML views in the web folder instead of WEB-INF actually solves the issue, but we would like to store those pages in WEB-INF."} {"_id": "224350", "title": "Does Exception Handling Violate "Program to Abstraction"?", "text": "I am talking based on experience with Java and C#. I do not know if other languages have different exception handling implementations. In order to achieve loose coupling, we need our code to be programmed against an abstraction rather than an implementation. However, the exception handling case is the opposite. The best practice is that you need to handle specific exception types (`SqlException`, `StackOverflowException`, etc). This may be better (or not) in Java thanks to its checked exceptions; there is a kind of "contract" between the interface and the consumer. But in C#, or for unchecked exceptions, there is no contract about what exceptions can be thrown by the interface. For example, say that we use the Repository pattern to decouple the DAL from the BLL. A simple catch is usually used like this: public void Consume() { try{ productRepository.Get(k=>k.Id == "0001"); } catch(Exception e){ // handle } } In a more specific case we usually use `SqlException`. However, that means we must know that the `ProductRepository` is a repository backed by a database server. What if the implementation changed to use a file repository instead? Now you need to catch `FileNotFoundException` or something like that. Why does it violate the "code to abstraction" principle? And what can we do to prevent it?"} {"_id": "254398", "title": "Try and catch error trapping, why is it so significant?", "text": "I am trying to understand why I should "try" a method to catch errors. It looks to me like the concept is: "letting a process run assuming it's not properly developed". Should I always assume that a program based on classes and inheritance is bound to have unexpected errors, which I should handle with tools like try/catch and throw? Should all Java programs be written within a try/catch framework?"} {"_id": "229549", "title": "What is the advantage of wrapping exceptions", "text": "It's very common in .NET for an exception to be wrapped in several layers of "outer exceptions" which give marginally more contextual data. For example, in EF if your update fails, you get exceptions wrapped similar to this: * `EntityException` * `DbUpdateException` * `SqlException` The data I need to understand what failed is almost always in the `SqlException`, so what's the advantage to the other two? What if I were using EF inside a custom library - should I wrap this exception with one of my own? Like `MyCustomLibraryException: Could not update the data. See inner exception for details.`"} {"_id": "189222", "title": "Are exceptions as control flow considered a serious antipattern? If so, Why?", "text": "Back in the late 90's I worked quite a bit with a code base that used exceptions as flow control. It implemented a finite state machine to drive telephony applications. Lately I am reminded of those days because I've been doing MVC web apps. They both have `Controller`s that decide where to go next and supply the data to that destination logic.
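In today's MVC terms, an action normally signals "where to go next" through its return value, roughly like this (a minimal Java-flavored sketch; the names are invented):

// Minimal stand-ins, just to make the shape concrete
class ViewResult {
    final String viewName;
    ViewResult(String viewName) { this.viewName = viewName; }
}

class AccountController {
    // A conventional action: the next step is communicated via the return value
    ViewResult deposit(int amountCents) {
        // ... update the model here ...
        return new ViewResult("depositConfirmation");
    }
}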
User actions from the domain of an old-school telephone, like DTMF tones, became parameters to action methods, but instead of returning something like a `ViewResult`, they threw a `StateTransitionException`. I think the main difference was that action methods were `void` functions. I don't remember all the things I did with this fact but I've been hesitant to even go down the road of remembering because since that job, like 15 years ago, _I never saw this in production code at any other job_. I assumed this was a sign that it was a so-called anti-pattern. Is this the case, and if so, why? Update: when I asked the question, I already had @MasonWheeler's answer in mind, so I went with the answer that added to my knowledge the most. I think his is a sound answer as well."} {"_id": "177831", "title": "Try/Catch or test parameters", "text": "> **Possible Duplicate:** > Arguments for or against using Try/Catch as logical operators > Efficient try / catch block usage? I was recently at a job interview and I was given a task to write a simple method in C# to calculate when the trains meet. The code was a simple mathematical equation. What I did was check all the parameters at the beginning of the method to make sure that the code would not fail. My question is: Is it better to check the parameters, or use try/catch? Here are my thoughts: * Try/catch is shorter * Try/catch will always work even if you forget about some condition * Catch is slow in .NET * Testing parameters is probably cleaner code (Exceptions should be exceptional) * Testing parameters gives you more control over return values I would prefer testing parameters in methods longer than +/- 10 lines, but what do you think about using try/catch in simple methods just like this - i.e. `return (a*b)/(c+d);` There are many similar questions on Stack Exchange, but I am interested in this particular scenario."} {"_id": "145941", "title": "Efficient try / catch block usage?", "text": "Should catch blocks be used for writing logic, i.e., handling flow control, etc.? Or just for throwing exceptions? Does it affect the efficiency or maintainability of code? What are the side effects (if there are any) of writing logic in a catch block? **EDIT:** I have seen a Java SDK class in which they have written logic inside the catch block. For example (snippet taken from the `java.lang.Integer` class): try { result = Integer.valueOf(nm.substring(index), radix); result = negative ? new Integer(-result.intValue()) : result; } catch (NumberFormatException e) { String constant = negative ? new String("-" + nm.substring(index)) : nm.substring(index); result = Integer.valueOf(constant, radix); } **EDIT2:** I was going through a tutorial where they count it as an advantage to write the logic for exceptional cases inside the exception handlers: > Exceptions enable you to write the main flow of your code and to deal with the exceptional cases elsewhere. **Any specific guidelines on when to write logic in a catch block and when not to?**"} {"_id": "82682", "title": "What to put in a try/catch?", "text": "**Note on the question: this is not a duplicate; Efficient try / catch block usage? was asked after this one. The other question is the duplicate.** I was wondering what was the best way to use try/catch. Is it better to limit the content of the try block to the minimum or to put everything in it?
Let me explain myself with an example: Code 1: try { thisThrowsAnException(); } catch (Exception e) { e.printStackTrace(); } thisDoesnt(); Code 2: try { thisThrowsAnException(); thisDoesnt(); } catch (Exception e) { e.printStackTrace(); } Assuming that `thisThrowsAnException()`... well... can throw an exception and `thisDoesnt()`... I'm sure you got it. I know the difference between the two examples: in case the exception is caught, `thisDoesnt()` will be called in the first case, not in the second. As it doesn't matter for me, because the throwing of that exception would mean the application should stop, why would one use a version better than the other?"} {"_id": "237843", "title": "Is ok to throw exception in normal code path which eliminate a possible programmer error?", "text": "I know that exception should be thrown in exceptional case (e.g. out of memory, programmer error). For these cases, I don't need to worry about performance throwing these exception. But what happen if use exception in normal code path? In my case, use it to stop user code supplied function. So for given code, is `foo` better or `foo2` better? It is very likely that `outputFunc` will return false / throw exception. // library provided function void foo(std::function)>); // how user suppose to use it foo([](std::function outputFunc){ while (/*condition*/) { // user have to check the return value and stop work when it return false if (!outputFunc(/*some value*/)) return; } }); // library provided function void foo2(std::function)>); // how user suppose to use it foo2([](std::function outputFunc){ while (/*condition*/) { // this will throw exception and handled internally when it should stop // user does not have to check the return value but they need to write exception-safe code outputFunc(/*some value*/)); } }); * * * In my case, the use of exception eliminate a possible programmer error (by not checking return value of `outputFunc`) and make code easier to write (no need to check return value) And the exception is throw/catch internally, no user code required to deal with them (except they need to write exception-safe code in the lambda, but we should always write exception-safe code)"} {"_id": "132612", "title": "Is rethrowing an exception leaking an abstraction?", "text": "I have an interface method that states in documentation it will throw a specific type of exception. An implementation of that method uses something that throws an exception. The internal exception is caught and the exception declared by the interface contract is thrown. Here's a little code example to better explain. It is written in PHP but is pretty simple to follow. // in the interface /** * @return This method returns a doohickey to use when you need to foo * @throws DoohickeyDisasterException */ public function getThatDoohickey(); // in the implementation public function getThatDoohickey() { try { $SomethingInTheClass->doSomethingThatThrowsAnException(); } catch (Exception $Exc) { throw new DoohickeyDisasterException('Message about doohickey failure'); } // other code may return the doohickey } I'm using this method in an attempt to prevent the abstraction from leaking. My questions are: Would passing the thrown internal exception as the previous exception be leaking the abstraction? If not, would it be suitable to simply reuse the previous exception's message? If it would be leaking the abstraction could you provide some guidance on why you think it does? 
Just to clarify, my question would involve changing to the following line of code throw new DoohickeyDisasterException($Exc->getMessage(), null, $Exc);"} {"_id": "107723", "title": "Arguments for or against using Try/Catch as logical operators", "text": "I just discovered some lovely code in our companies app that uses Try-Catch blocks as logical operators. Meaning, \"do some code, if that throws this error, do this code, but if that throws this error do this 3rd thing instead\". It uses \"Finally\" as the \"else\" statement it appears. I know that this is wrong inherently, but before I go picking a fight I was hoping for some well thought out arguments. And hey, if you have arguments FOR the use of Try-Catch in this manner, please do tell. For any who are wondering, the language is C# and the code in question is about 30+ lines and is looking for specific exceptions, it is not handling ALL exceptions."} {"_id": "231300", "title": "Partial recovery from an Exception", "text": "I have seen _Exception Handling_ blocks that they were `throw`ing the recently caught `Exception` in the `catch` block. Something like: } catch ( Exception $e ) { // Do some recovery here callSomeFunction(); throw $e; } To me it doesn't make that much sense to `throw` the exact same `Exception` like this, but I'm not sure if I'm right or not. Maybe there are situations that that's the only option? My question is what are the pros and cons of this approach and shouldn't it be totally avoided as much as possible?"} {"_id": "220848", "title": "Starting point for building an iPhone app", "text": "I have one idea that I want to make an app for iPhone, I have recently bought a mac for the same purpose. I want to make small app and build on it in later versions. I work in in a small company and have 1 year of experience on working of iOS platform . I am a developer and don't have any design background so I want to know - should I take help from some designer? Or learn on my own and start coding for the app? I would like to know before starting out, what things should I consider or give more importance to. In general how should I approach building the app, design first and code later or reverse way."} {"_id": "30336", "title": "how to evaluate own project", "text": "I am working on a open source project in pure `C`, that I have started some time ago, but only recently found time to add some features. I can clearly some weaknesses of my old design, so I am trying to refactor my old code. I have no idea however, how to evaluate properly my new code. Do you know about any techniques or tools for code evaluation? I am pretty good with object oriented design, but for about three years I had no contact with purely structural one. Therefore I don't have enough experience, to be able to discern between good and bad design choices."} {"_id": "109857", "title": "Is it worth it for a developer to be a scrum master? Is it an official designation or?", "text": "Is it beneficial for a developer to become a 'scrum master'? Is this an official certification or just someone who specializes? What are the steps towards becoming one?"} {"_id": "169925", "title": "Are long methods always bad?", "text": "So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. 
I have my code written so that it deals with the paramaters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation, that has complex enough rules it can't easily be done in the database, so I have some variables declared outside the main loop then get altered during the loop. variable_1 = 0 variable_2 = 0 for object in queryset : if object.condition_condition_a and variable_2 > 0 : variable 1+= 1 ..... ... . more conditions to alter the variables return queryset, and context So according to the theory I should factor out all the code into smaller methods, so That I have the view method as being maximum one page long. However having worked on various code bases in the past, I sometimes find it makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that having a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods, when they will only be used in one place? UPDATE: Looks like I asked this question a over a year ago. So I refactored the code after the (mixed) response here, split it into methods. It is a Django app retrieving complex sets of related objects from the database, so the testing argument is out (it would have probably taken most of the year to create relevant objects for the test cases . I have a \"this needs done yesterday\" type work environment before anyone complains). Fixing bugs in that part of the code is marginally easier now, but not massively so. before : #comment 1 bit of (uncomplicated) code 1a bit of code 2a #comment 2 bit of code 2a bit of code 2b bit of code 2c #comment 3 bit of code 3 now: method_call_1 method_call_2 method_call_3 def method_1 bit of (uncomplicated) code 1a bit of code 2a def method_2 bit of code 2a bit of code 2b bit of code 2c def method_3 bit of code 3"} {"_id": "210125", "title": "collection naming - singular or plural", "text": "This is related to this question but not quite the same. BTW, I'm not a native English speaker. I keep having a hard time choosing a proper name for collections - Lets say that we have a collection of _item_ s and that each _item_ has a _name_. How will we call the collection of names such that the collection's name will be the most understandable under standard naming conventions? var itemNames = GetItemNames(); // Might be interpreted as a single item with many names var itemsNames = GetItemsNames(); // No ambiguity but doesn't look like proper English var itemsName = GetItemsName(); // Also sounds strange, and doesn't seem to fit in e.g. 
a loop: foreach( itemName in itemsName){...} As native English speakers, how would you name your collections, and how would you expect them to be named?"} {"_id": "234564", "title": "Pair Rotation in a team for effective pair programming", "text": "We follow pair programming in our company and always face the issue of balanced and effective pair rotation within the developers on stories. We follow a simple metrics in which every developer's name is mapped with every other developer and we mark the respective intersection whenever two developers are pairing. This is not working out well, we cannot track how much time a pair has spent pairing and people forget to update the metrics many times. Tracking the pair rotation is helpful because we want the project knowledge to be shared across the team, and not just one pair. So usually what happens is, whoever is pairing keeps pairing till the entire story is completed (given they have better context), and no body else knows about what is being done & if the story or a regression/production bug comes back, the same pair has to pick it up (leaving whatever they are currently doing), which is what creates a bottleneck. Are there any known metrics that can be used for tracking the pair rotations."} {"_id": "169920", "title": "Is C# development effectively inseparable from the IDE you use?", "text": "I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I'm caught up on one point: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question. In short: in C#, `using foo` doesn't tell you what names from `foo` are being made available, which is analogous to `from foo import *` in Python -- a form that is discouraged within Python coding culture for being implicit rather than the more explicit approach of `from foo import bar`. I was rather struck by the Stack Overflow answers to this point from C# programmers, which was that in practice this lack of explicitness doesn't really matter because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. E.g.: > Now, in theory I realise this means when you're looking with a text editor, > you can't tell where the types come from in C#... but in practice, I don't > find that to be a problem. How often are you actually looking at code and > can't use Visual Studio? This is revelatory to me. Many Python programmers prefer a text editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point. And I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?"} {"_id": "150760", "title": "Single Responsibility Principle - How Can I Avoid Code Fragmentation?", "text": "I'm working on a team where the team leader is a virulent advocate of SOLID development principles. However, he lacks a lot of experience in getting complex software out of the door. We have a situation where he has applied SRP to what was already quite a complex code base, which has now become very highly fragmented and difficult to understand and debug. 
We now have a problem not only with code fragmentation, but also encapsulation, as methods within a class that may have been private or protected have been judged to represent a 'reason to change' and have been extracted to public or internal classes and interfaces which is not in keeping with the encapsulation goals of the application. We have some class constructors which take over 20 interface parameters, so our IoC registration and resolution is becoming a monster in its own right. I want to know if there is any 'refactor away from SRP' approach we could use to help fix some of these issues. I have read that it doesn't violate SOLID if I create a number of empty coarser-grained classes that 'wrap' a number of closely related classes to provide a single-point of access to the sum of their functionality (i.e. mimicking a less overly SRP'd class implementation). Apart from that, I cannot think of a solution which will allow us to pragmatically continue with our development efforts, while keeping everyone happy. Any suggestions ?"} {"_id": "154723", "title": "How to determine if class meets single responsibility principle?", "text": "Single Responsibility Principle is based on high cohesion principle. The difference between the two is that highly cohesive classes feature a set of responsibilities that are strongly related, while classes adhering to SRP have just one responsibility. But how do we determine whether particular class features a set of responsibilities and is thus just highly cohesive, or whether it has only one responsibility and is thus adhering to SRP? Namely, isn't it more or less subjective, since some may find class very granular (and as such will consider a class as adhering to SRP), while others may find it not granular enough?"} {"_id": "232145", "title": "Lots of classes with only one single static method with same name as class - Code smell?", "text": "I'm trying to follow the single responsibility principle (SRP) in my applications. I have lots of CRUD classes I just name xxxxxManager. Following the SRP, I made 4 classes for each one : xxxxxCreator, xxxxxGetter, xxxxxDeleter, xxxxxUpdater They end up having only one static method : xxxxxCreator::create(Model $model), xxxxUpdater::update(Model $model), etc. This is a code smell for me. Am I pushing the SRP too far ?"} {"_id": "216283", "title": "Parallel.For Inconsistency results", "text": "I am using VB.net to write a parallel based code. I use _Parallel.For_ to generate pairs of 500 objects or in combination C(500,2) such as the following code; but I found that it didn't always generate all combinations which should be 124750 (shown from variable Counter). No other thread was runing when this code was run. I am using a Win-7 32 Bit desktop with Intel Core i5 CPU 650@3.2GHz, 3.33 GHz and RAM 2GB. What's wrong with the code and how to solve this problem? Thank You. Dim Counter As Integer = 0 Parallel.For(0, 499, Sub(i) For j As Integer = i + 1 To 499 Counter += 1 Console.Write(i & \":\" & j) Next End Sub) Console.Writeline(\"Iteration number: \" & Counter)"} {"_id": "228691", "title": "Quoting for a project with closed source", "text": "I sit with a project that I need to quote, and until the job is awarded, I can only use \"View source\" to check, and ask a few questions about the system. That makes quoting outright impossible for me to do. My question is: how do one go about quoting for such a project? What kind of questions should I ask? Do I decline to quote? 
Due to uncertainty, do I pad my quote?"} {"_id": "232817", "title": "Should I include myself as an author after modifying 3rd-party code?", "text": "It's common practice to make some tweaks or fixes in 3rd-party code (be it a simple gist or an entire library). But it's also common that many of these code have they own licensing rules and eventually a header on every file with copyright informations. After making those modifications what's the correct thing to do next? Keep the licence info untouchable or try to update it including yourself with something like `@author` or `@revision` tags? Another common problem is changing the 3rd-party namespace/package to fit it to your project conventions. Some license types include these kind of information in their license block, can I change it freely? I know the answer for these questions depends on each license type, so to make my question more specific... Considering general license rules (usually they are different in minor aspects, right?), is ethical(or at least allowed) that I freely **add information** to the license block about my modifications and perhaps also **modify how do I refer** to it in my code (e.g use `YACorp.YALib` as `Utils.YALib`)? Actualy my concerns are more about \"respect to the community\" than the legal aspects, I'm asking more about how much we can \"go wild\" remaining ethical if our project can be considered private or personal."} {"_id": "232818", "title": "Moving data between databases with XML", "text": "I just read this quote online: > XML is a very useful technology for moving data between different databases > or between databases and other programs. However, it is not itself a > database. Which is quoted from the book 'Effective XML: 50 Specific Ways to Improve Your XML'. I don't understand what you would bother moving data between databases in an XML format. Databases usually contain a lot of data, so wouldn't it be verbose to transform it into an XML format? Is there no way, to just serialize the database data into something less verbose, somehow, instead of bothering with conversions?"} {"_id": "224103", "title": "J2EE - Session swap", "text": "Application server - JBoss AS 7.1.1 JDK6 J2EE 1.3 My web application is more than 10 years old and facing this session swap problem in my portal. Noticed that swap happens mostly when many concurrent users accessing the portal and underlying windows server is busy (more than 90% CPU usage) To analyse this issue, I logged customer data (customer id, ip address, jsession id) to a table and found that customer having unique jsession id initially has his data and all of a sudden for the same jsession id and ip address receiving different customer data. customer1 123.123.12.123 jsessionid123 11:10:02 customer2 123.123.12.123 jsessionid123 11:10:04 ip address (123.123.12.123) having jsession id (jsessionid123) somehow gets customer2 data Any order placed by customer1 in ip - 123.123.12.123 gets created for customer2, I confirmed this by calling customer2 and they confirmed that they didn't place the order. customer1 won't realise he placed order for customer2 - all the data gets changed, like basket items, customer object, products etc. Now I need to find a fix for this, but first I need to know which part of my code is creating this problem. Do I have to use a stress test software? 
or any better mechanism to find out the problematic code?"} {"_id": "170689", "title": "How is the Linux repository administrated?", "text": "I am amazed by the Linux project and I would like to learn how they administrate the code, given the huge number of developers. I found the Linux repository on GitHub, but I do not understand how it is administrated. For example the following commit: https://github.com/torvalds/linux/commit/31fd84b95eb211d5db460a1dda85e004800a7b52 Notice the following part: ![enter image description here](http://i.stack.imgur.com/dayjp.png) So one authored and Torvalds committed. How is this possible. I thought that it was only possible to have either pull or pushing rights, but here it seems like there is an approval stage. I should mention that the specific problem I am trying to solve is that we use pull requests to our repo. The problem we are facing is that while a pull request is waiting to get merged, it is often broken by a commit. This leads to a seemingly never ending work to adapt the fork in order to make the pull request merge smoothly. Do Linux solve this by giving lots of people pushing rights (at least there are currently just three pull requests but hundreds of commits per day)."} {"_id": "39509", "title": "How should I evaluate a training class?", "text": "My company is giving us the possibility to sign up for some offsite training classes on Design Patterns. Browsing through the brochures, I'm _already_ feeling bored (and somewhat repelled by the marketingy buzzwordy silverbulletty enterprisey managerese) - I already know the basics about design patterns (read the GoF book years ago, used a few as needed, read articles on the net, etc.). I'm worried that the training is going be mostly watching a powerpoint with stuff I mostly know, and arguing details over a UML diagram. My programming experience is mostly in games, simple web development and mathy stuff, in Python, C++ or simple scripting languages; I never worked on anything \"enterprisey\", or in Java or .Net (for some reason, Java and .Net seems especially associated with Design Patterns), nor am I planning to in the forseeable future. I'm much more interesting in things like functional programming and Haskell and making micro domain-specific-languages to solve specific problems - I'm closer to the \"hacker\" culture (I'm mostly self- taught) than to the \"enterprise\" culture. But maybe this is just me having too high of an opinion of myself and passing a good opportunity to learn useful stuff. Or being a snob and refusing to learn about different cultures. So, how could I tell if a Design Patterns class is going to be useful? How you found such trainings useful? How you ever felt apprehensive about them?"} {"_id": "111955", "title": "How to draw a (UML) class diagram when the classes are dispersed across a distributed system?", "text": "Basically how to denote that class foo is from a different server than class bar ?"} {"_id": "186427", "title": "Is it wise to include something like OpenSSL or GnuTLS with a project in a repository?", "text": "I am currently working on a project that makes use of the OpenSSL library for secure communications. Since this library is a requirement for building the project, I am considering including it in the project's repository. Here are the pros and cons as I see them: **Pros:** * Self-contained build - with one exception, everything needed for building the project is contained in the repository. 
It would be nice if the user didn't need to go through the trouble of installing and configuring the development headers / libraries for OpenSSL. **Cons:** * Security concerns - it goes without saying that the only version of OpenSSL that anyone should be using is always the latest. As soon as a new release is issued, I will need to update the version of OpenSSL in the project's repository. So the question boils down to this: > Is it wise to include a security library with the project that we need to > _always_ ensure gets updated whenever a new release is issued?"} {"_id": "115277", "title": "Is it a good idea to use something like the Twitter Bootstrap in production?", "text": "I'm interning at a new startup, and I've been tasked with designing the front- end of the site. I really, really want to use the new Twitter Bootstrap (http://twitter.github.com/bootstrap/). It's all CSS, looks amazing, works great, and lets me get to designing and not worrying about the little stuff. However, is this a good idea if it's a demo of a webapp that we're building that could become production code? Will it look silly and amateur if I use this instead of coding it all myself? Is the bootstrap too Twitter-ish for a real company's brand?"} {"_id": "115271", "title": "Communication Between Different Technologies in a Distributed Application", "text": "I had to a incorporate several legacy applications and services in a network- distributed application. The existing services and applications are written using different languages and technologies, including: java, C#.Net and C++; all running on MS Windows machines. Now I'm wondering about the communication mechanism between them. What is the simple and standard way? Thanks! PS. communications include simple message sending and remote method invocations."} {"_id": "225316", "title": "What's the best way to retrieve a value and a status", "text": "Given that all else is equal, and there are no coding standards defining the best approach, what would be the recommended way in C++ to check that a value exists and return it if it does? For example, something like one of these declarations: bool getMethod(double& ret); double getMethod(bool& ok); void getMethod(bool& ok, double& ret); pair getMethod(); bool checkMethod(); // eg if (checkMethod()) double getMethod(); // result = getMethod(); or something else, like return a struct The value may exist about 50% of the time. The existing code already does a lot of the last method, i.e. using a checkMethod() but I was wondering if that's really an efficient way to do it - via two calls (half the time)."} {"_id": "225311", "title": "Question on Graph", "text": "I am trying to implement a graph data structure in C#. I have the following interfaces: public interface IVertex { TValue Value { get;} VertexList> Neighbours { get;} int InDegree { get;} int OutDegree { get;} } public interface IWeightedEdge { IVertex source { get;} IVertex destination { get;} TWeight weight { get; } } public interface IWeightedGraph { bool AddVertex(TVertexValue value); TVertexValue RemoveVertex(IVertex vertex); bool RemoveVertex(TVertexValue value); bool AddEdge(IVertex source, IVertex dest, TEdgeWeight weight); bool RemoveEdge(IVertex source, IVertex dest); } From this, you can see that it is the responsibility of the Vertex class (the class implementing IVertex interface) to tell about its adjacent vertices (Neighbors function), its in degree, out degree etc. 
While I was designing this, I was told by a friend of mine that the Graph class (the one that will implement IGraph) should take the responsibility of operations like retrieving adjacent vertices ,finding the in/out degrees etc. His point is that the above operations are valid only when a vertex becomes part of a Graph. But my point is that a Vertex is standalone; it can exist even outside of a graph. So, the vertex should provide the operations on it. Which one you think is correct? Please share your thoughts on this."} {"_id": "91217", "title": "Whether to open-source gigantic idea", "text": "So, I have this idea for a huge game that would take years to develop if I did it on my own (with help from friends) I've learned a lot about game programming, but nowhere near enough to start coding the thing itself. It would still take at least two years if I started a business and hired programmers How much will open-sourcing it speed up the project? Should I open-source it, given that I have no code to show for my learning in this kind of programming? Can an open-source project be directed to keep the original idea alive, or will it often veer away from the creator's original idea? Can an open-source project receive any kind of reliable funding (can we have investors, for example)? Should I make a business and hire programmers to get it moving, and then open- source it? If that ends up being the case, I'd like to send the full specification to anyone who wants to see it."} {"_id": "180146", "title": "How do we differentiate between a computer and a calculator?", "text": "In this SO Question there is a comment by `starblue` that > A computer without loops is a calculator Is this true? Is that the only difference? Is there a set of criteria to differentiate or has the line become very blurred?"} {"_id": "35214", "title": "I can get visual studio for free through school, should I?", "text": "I am currently using dev++, I am a complete beginner, (Freshman CS major) learning C++. I can get one of the newest versions of visual studio (2008 or 2009 i think) for free through my school. Not sure if it is worth the trouble of getting. thoughts?"} {"_id": "101703", "title": "What are the boundaries of the product owner in scrum?", "text": "In another question, I asked about why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (related to scrum), and rather it's related to the bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of PO (product owner) and the limitations he/she shouldn't pass. 1. Should PO interfere the UI design, when there are designers at work in scrum team? (an example of this which has happened to us, is to replace checkboxes with a drop down list with two items, namely, yes and no; or to make some boxes larger, or to left-align some content instead of centering them on the page, or stuff like that). If yeah, to what extent? Colors? Layout? 2. Should PO interfere in Design and architecture of coding? This hasn't happened to us yet, but I'm really curious about the boundaries. For example does PO has the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or choosing the count of servers (tier architecture), etc. 3. Should PO interfere in validation mechanisms? For example, this field should be required, or we don't need to get this piece of information from user. 
Sometimes, analyzers and designers confirm that something can be handled behind the scene, like extracting the user profile info from another source, instead of asking for it in UI. 4. How granular could/should PO get into the analysis and design? For example, a user story might be: \"As a customer, I'd like to be able to buy new domains online\". However, scrum team can implement this user story in a wizard of five steps, or in one single page. To which level PO should monitor, or govern, or supervise the technical analysis, design, and implementation? I asked these questions to judge whether our implementation is right or wrong?"} {"_id": "136328", "title": "Windows OS design decisions", "text": "I've seen an interview with Richard Stallman some time ago, and he was asked a question about security and Windows OS which he answered saying that there are some relatively bad design decisions that were made when creating the Windows OS. These design decisions (according to R. Stallman) were known at the time for creating security vulnerabilities later on. Can anyone explain what he's talking about ? The video with the interview can be found here http://rt.com/news/richard- stallman-free-software-875/ \\- it starts at 8:30."} {"_id": "188771", "title": "Building a string translation database for multiple (in-house) projects", "text": "At our company we have an existing translation ms-sql table wich stores strings like this: Id | Key | Language | Value 1 | hello-world | nl-BE | Hallo Wereld 2 | hello-world | en-GB | Hello World There are 3 languages in the system and I expect this to grow to a maximum of about 10 in the future This table is read by multiple very different projects (about 60 projects, mostly websites/web applications and some web services), that each open a database connection to the translation database, cache the translations Feedback from the front-end devs is that our UIto input or modify translations 's biggest downside is that they cannot know what project uses what strings. They sometimes modify strings not knowing they are breaking 7 projects with it. Now they just have to type something like `this.Translate(\"Hello World\")` and the system takes care of the rest. I could ofcourse force them to something like `this.Translate(\"Hello World\",\"AwesomeApplication1\")` but that seems like it's going to require quite a lot of refactoring across the many many projects. **How would you go about providing this solution? How would you, as a dev, provide the \"project name\" to the translation? How would you store this in the database?** Important note: the translation re-use is the whole point of the centralised database, so scoping translations to one project by going 1|hello-world|nl-BE|Hallo Wereld|MyAwesomeApplicatoin1 5|hello-world|nl-BE|Hallo Wereld!|MyAwesomeApplicatoin2 is not really a wanted option. I'd prefer something like : 1|hello-world|nl-BE|Hallo Wereld|MyAwesomeApplicatoin1,MyAwesomeApplicatoin2 or a foreign key equivalent of just putting the names in the table. **UPDATE** Based on the advise to normalize the database I have come up with something like this so far: ![](http://i.stack.imgur.com/UQhQ3.png) //this allows me to distinquish if translations where added by developer or by translator **UPDATE2:** added edmx instead of text. 
If people are interested I could github the WCF project i'm wrapping this concept in so other people can test and use it."} {"_id": "181817", "title": "Should your best programmers have to check everyone else's code into source control?", "text": "One of the differences between svn and git is the ability to control access to the repository. It's hard to compare the two because there is a difference of perspective about who should be allowed to commit changes at all! This question is about using git as a centralized repository for a team at a company somewhere. Assume that the members of the team are of varying skill levels, much the way they are at most companies. Git seems to assume that your only your best (most productive, most experienced) programmers are trusted to check in code. If that's the case, you are taking their time away from actually writing code to review other people's code in order to check it in. Does this pay off? I really want to focus this question on what is the best use of your best programmer's time, not on best version-control practices in general. A corollary might be, do good programmers quit if a significant portion of their job is to review other people's code? I think both questions boil down to: is the review worth the productivity hit?"} {"_id": "230934", "title": "Securely Validating Software License", "text": "I'm currently developing a product (in C#) that will be available for downloading for free but requires a monthly subscription in order to be used after a specific trial period. What i am intending to do is to let the user register an account on our website and charge it with credits in order to use the application. However a couple of problems faced me as a new comer to the licensing field * How is this logic usually implemented. * How to connect my C# application to my website's database and grab the data required (is trial expired, if yes does the user have enough credits). * How to make sure that i am at least 80% (or so) secure from attackers who might start some MATM attack in order to alter the received packets and gain unauthorized access to my program. * As i already heard. Using SSL may grantee me that my application is connecting to the right address, But how again to make such connection with my basic website. (Wordpress website or VBulletin). Excuse me if my question seems a little broad but i have crawled the internet searching for a specific answer for my needs and found nothing. So i really need an efficient answer (or maybe a conversation) about this :D. Oh and a small declaimer: by saying grabbing `DATA` and so on i am not asking for a way to grab them i am just giving an example of what i want to achieve. For the sake of narrowing up the things i came up with an idea that i want to know more about. * Can i use `Wordpress` or VBulletin and create and RSA encrypted login to them and let the response be encrypted too ?"} {"_id": "206815", "title": "Is there an existing market for software libraries?", "text": "Kinda have the impression library development is either done by big companies like Microsoft that give them away for free, or the open-source community. Is there an existing market to _sell_ software libraries?"} {"_id": "122485", "title": "Elegant ways to handle if(if else) else", "text": "This is a minor niggle, but every time I have to code something like this, the repetition bothers me, but I'm not sure that any of the solutions aren't worse. 
if(FileExists(file)) { contents = OpenFile(file); // <-- prevents inclusion in if if(SomeTest(contents)) { DoSomething(contents); } else { DefaultAction(); } } else { DefaultAction(); } * Is there a name for this kind of logic? * Am I a tad too OCD? I'm open to evil code suggestions, if only for curiosity's sake..."} {"_id": "230451", "title": "Fail-fast design", "text": "I recently tend to design my methods in the following way: if ( 'A' === $a ) { if ( 'B' === $b ) { return 'some thing'; } else if ( 'C' === $c ) { return 'some other things'; } } /* If you made it up to here, then something has gone wrong */ return false; So basically the idea is to check if the first condition is correct, if it is then continue until you reach the first `return` statement to return the expected result, if not, move on to the next condition and so on. The key is that all the method's functionality has already shrunken down and wrapped up in a conditional statement piece by piece, so you should meet at least one condition to take a useful action or if you reach the very last line, then it will `return false`, that means none of the expected conditions were met, that in my design it means the input wasn't in a proper or expected format, data-type, range, etc. so the method were not able to continue processing that. I can achieve the same goal in a couple of different ways, but this one seems to be cleaner as I generally need to write code only for _positive_ conditions (I actually don't know how should I call it) -- unless I need to take a specific action if a certain condition haven't met. And in the end I will get `false` if the method wasn't able to accomplish its job -- with real conditions in place most of the time it's almost like a jump to the end of method when you can't pass through an `if` statement. It's actually designed in a way that you will either pass this `if` and will heading to the either next `if` or a `return` statement or you are very unlikely to match any other `if` statements and will jump directly to the last `return false` statement. Now, my questions are: 1. Is it a (true) fail-fast design, which in this case is only looking for the proper condition to continue and is willing to fail as soon as one fails. Is that correct or am I mixing up things here? If not, is there any specific term for that? 2. Also as I recently generalized this approach for all the new methods I create, I wish to know if there is any drawback or design flaw here that might bite me back later."} {"_id": "220461", "title": "Keep indentation level low", "text": "I hear a lot that you should not write functions larger than one screen size, that you should extract things into functions if you call things very often and all these coding guidelines. One of them is also keep your indentation level low. Some people told me even I should max indent 4 times. But in every day programming it keeps happening that I need O(n^2) or O(n^3) loops with some flow control, which creates out of nowhere 7 or 8 layers of indentation... Are there any good ways to avoid that? One problem is, if you try to avoid complex if/else statement nesting I tend to do this in one row which can look very terrible and be unreadable and not very easy to maintain."} {"_id": "127128", "title": "What is the Best Style for Functions with Multiple Returns and If/Else Statements?", "text": "> **Possible Duplicate:** > Elegant ways to handle if(if else) else In a function where there are multiple returns, what is the best style to use? 
This style, where there is no 'else' because it is frivolous and you think it is more concise? f() { if (condition) { return value; } return other_value; } Or this, if you think that 'else' provides nice syntax sugar to make it more readable? f() { if (condition) { return value; } else { return other_value; } } Feel free to share whatever other reasons you may have for preferring one over another. EDIT: The above samples are hypothetical, if they were this simple, as Yannis Rizos noted below, the ternary operator ( condition ? value : other ) would be acceptable. However I'm also interested in cases like this: f() { if (condition1) { return value; } else if (condition2) { return value2; } ... else { return other_value; } }"} {"_id": "177488", "title": "Licensing approach for .NET library that might be used desktop / web-service / cloud environment", "text": "I am looking for advice how to _architect_ licensing for a .NET library. I am not asking for tool/service recommendations or something like that. My library can be used in a regular desktop application, in an ASP.NET solution. And now Azure services come into play. Currently, for desktop applications the library checks if the application and company names from the version history are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems: * an Azure-enabled web-application can be run on different hardware each time (AFAIK) * sometimes the hardware ID for the same hardware changes unexpectedly * checking the hardware ID or version info might not be allowed in some circumstances (shared hosting for example) So, I am thinking about what approach I can take to architect a licensing scheme that: * is friendly to customers (I do not try to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for) * can be used when there is no internet connection * can be used on shared hosting What would you recommend?"} {"_id": "218092", "title": "Not assigning Bugs to a specific user", "text": "My question: Is there a benefit to NOT assigning a Bug to a particular developer? Leaving it to the team as-a-whole? Our department has decided to be more Agile by not assigning Bugs/Defects to individuals. Using Team Foundation Server 2012, we'll place all Bugs in a development team's \"Area\" but leave the \"Assigned To\" field blank. The idea is that the team will create a Task work item which will be assigned to an individual and the Task will link to the Bug. The Team as a whole will therefore take responsibility for the Bug, not an individual, aligning to Scrum - apparently. I see the down side. The reporting tools built into TFS become less useful when you cannot sort by assigned vs unassigned, let alone sorting by which user Bugs are assigned. Is there a benefit I'm not seeing? Besides encouraging teamwork by putting the responsibility on the team-as-a-whole instead of an individual?"} {"_id": "218093", "title": "Confusion about inheritance", "text": "I know I might get downvoted for this, but I'm really curious. I was taught that inheritance is a very powerful polymorphism tool, but I can't seem to use it well in real cases. So far, I can only use inheritance when the base class is an abstract class. Examples : 1. If we're talking about `Product` and `Inventory`, I quickly assumed that a `Product` **is an** `Inventory` because a `Product` must be inventorized as well. But a problem occured when user wanted to sell their `Inventory` item. 
It just doesn't seem to be right to change an `Inventory` object to it's subtype (`Product`), it's almost like trying to convert a parent to it's child. 2. Another case is `Customer` and `Member`. It is logical (at least for me) to think that a `Member` **is a** `Customer` with some more privileges. Same problem occurred when user wanted to upgrade an existing `Customer` to become a `Member`. 3. A very trivial case is the `Employee` case. Where `Manager`, `Clerk`, etc can be derived from `Employee`. Still, the same upgrading issue. I tried to use composition instead for some cases, but I really wanted to know if I'm missing something for inheritance solution here. My composition solution for those cases : 1. Create a reference of `Inventory` inside a `Product`. Here I'm making an assumption about that `Product` and `Inventory` is talking in a different context. While `Product` is in the context of sales (price, volume, discount, etc), `Inventory` is in the context of physical management (stock, movement, etc). 2. Make a reference of `Membership` instead inside `Customer` class instead of previous inheritance solution. Therefor upgrading a `Customer` is only about instantiating the `Customer`'s `Membership` property. 3. This example is keep being taught in basic programming classes, but I think it's more proper to have those `Manager`, `Clerk`, etc derived from an abstract `Role` class and make it a property in `Employee`. I found it difficult to find an example of a concrete class deriving from another concrete class. Is there any inheritance solution in which I can solve those cases? Being new in this OOP thing, I really really need a guidance. Thanks!"} {"_id": "218096", "title": "Writing generic code when your target is a C compiler", "text": "I need to write some algorithms for a PIC micro controller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing the development time much and compromising the readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, number of bits in a bit field etc. All these specifications, IMHO, point to C++ templates, but there's no compiler for it for my target. C macro metaprogramming is another option, but, again my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++ to C translator, but I'd like to hear anything else that satisfies the above requirements. Maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C, I just wish templates were available in it."} {"_id": "218098", "title": "OOP PHP make separate classes or one", "text": "I'm studying OOP PHP and working on a small personal project but I have hard time grasping some concepts. Let's say I have a list of items, each item belongs to subcategory, and each subcategory belongs to category. So should I make separate classes for category (with methods to list all categories, add new category, delete category), class for subcategories and class for items? Or should I make creating, listing and deleting categories as methods for item class? 
Both category and subcategory are very simple and basically consist of ID, Name and parentID (for subcategory)."} {"_id": "128859", "title": "Never written much unit tests, how can I practice more of it?", "text": "I have an interview coming up soon next week and there's a few things on their list of responsibilties for this software development job (the job position title is vague, it says java developer) that I am worried about: * Unit test features * Debug new features * Provide recommendation on test case design (what is this?) I am worried because in my past software projects, I have not bothered writing unit tests. I know how to use JUnit to write tests in Eclipse and the process but it's just that I have not much experience writing tests. For example, I rely on data from the web, which varies a lot from each other, and process this data. How can I write test cases for every single type of data? Also, if the data is from a local database, how would I write a test case that checks that data from tables are being read properly? I've used the debug feature in Eclipse whenever I could, but sometimes, when I am accessing a library which doesn't come ith source code for (e.g. commercial 3rd party Jars), the step into feature would stop. So for most cases, I just System.out.println to discover where the bug is happening. Is there a better method? How can I practice in the short period of time to be somewhat competent in writing unit test?"} {"_id": "128854", "title": "Is it common to have stored procedures disabled on MySQL managed hosting plans?", "text": "I'm building an app that I think would benefit from the use of MySQL stored procedures. I know stored procedures have a bad reputation, but I think they suit my application well. Using stored procedures would reduce the number of queries per request significantly (mostly inserts), as well as allow me to leverage the procedures across multiple implementations of this software. I think the software will benefit. My concern however, is that pushing functionality into stored procedures may make it harder for users to deploy the software on shared hosting services. Is it common practice to expect users to have stored procedure capability (i.e. allow scripts to create and run stored procedures)? **Edit:** This application is intended for my end user to deploy on her own."} {"_id": "128856", "title": "JSP, EL, JSTL: learning from a web designer perpective", "text": "I have two books geared toward learning JSP. From what I've read, one motivating feature/benefit is that it is easier for web designers (they apparently don't, as a rule, know Java) to work with dynamic pages via EL/JSTL. From what I've seen so far, learning EL/JSTL requires a basic understanding of Java. I have yet to see an example that doesn't have a Java class to reference. Since that is what I've seen, I wonder if someone can tell me that this is true or not. To make the question clear: Do web designers have to get a grasp, on a very basic level, of how a Java class works (e.g. fields, setters, getters, args, constructors...)? This question is really meant to help me decide whether or not to ask another question; it is a qualifier."} {"_id": "128851", "title": "Empirical Evidence of Popularity of Git and Mercurial", "text": "It's 2012! Mercurial and Git are both still strong. I understand the trade-offs of both. I also understand everyone has some sort of preference for one or the other. That's fine. I'm looking for some information on level of usage of both. 
For example, on stackoverflow.com, searching for Git gets you 12000 hits, Mercurial gets you 3000. Google Trends says it's 1.9:1.0 for Git. What other empirical information is available to estimate the relative usage of both tools?"} {"_id": "145811", "title": "How the Erlang get soft-realtime with GC?", "text": "Generally GC is not a good choice to get a soft real-time attribute. But Erlang is GC based language can be soft real-time. Does it mean Erlang have almost no GC latency? How does it work?"} {"_id": "187052", "title": "Style for creating IEnumerable unions", "text": "There isn't any cool LINQ sugar for creating unions. The `Enumerable.Union()` method is usually called like this: var bigList = list1.Union(list2); The alternative is to call `Enumerable.Union()` which can be more readable: var bigList = Enumerable.Union(list1, list2); However neither of these methods are very stylish (more importantly, readable) when scaling out The following is probably the best method: var reallyBigList = list1.Union(list2).Union(list3); Which can result in some messy method chaining. Alternatives need incidental variables: var list1and2 = list1.Union(list2); var reallyBigList = list1and2.Union(list3); or var list1and2 = Enumerable.Union(list1, list2); var reallyBigList = Enumerable.Union(list1and2, list2); Is there a clean way of setting up these more complex unions? Would an extension like `Enumerable.Union(params IEnumerable collections)` (used like `var reallyBigList = Enumerable.Union(list1, list2, list3)`) be better?"} {"_id": "92745", "title": "Separating the database, API, and the interface", "text": "Most of my projects start small. Many times I start with a web page (1 file) that has the code to... select stuff from the database => display it to the user and offer editing => receive the edits (sometimes complex enough that it requires parsing) => update the database all of this in 1 file, of course. As the project grows it always becomes annoying to have so much code in one pile, 50 `if` statements. **Question** : What is a good, proven & intuitive data separation? I guess i'm looking for a diagram that explains which code goes in what file. I would like to keep this language independent, but in case it matters I'm using PHP and MySQL."} {"_id": "92747", "title": "What are some math concepts to review for an interview?", "text": "What are some good math concepts to review for a software engineering interview? The position is not expected to be particularly math heavy, i.e graphics programming, but I imagine refreshing on some math concepts would help going into an interview for a high caliber company. The more specific the better. Thanks in advance!"} {"_id": "73444", "title": "Can I redistribute simplified BSD code?", "text": "Here is a plugin I wish to redistribute: http://plugins.jquery.com/project/TextAreaResizer I've written a Jquery plugin that needs the text resizer as a prerequisite. The license type says: License type: Simplified BSD License/FreeBSD License I did a bit of googling and it said you have to retain all license info, but this plugin doesn't come with any! Can I package a modified version of this (I changed one number in the source) and distribute it with my plugin? Thanks for any insight"} {"_id": "92740", "title": "How should I plan out my code base?", "text": "I'm currently working on a project that's about to reach over 5,000 lines of code, but I never really completely thought out the design. What methods should I use to structure and organize my code? 
Paper and pen? UML diagrams? Something else?"} {"_id": "107280", "title": "Reference website for CSS 3 properties (like box-reflect)", "text": "Today, I started reading the book _CSS Mastery 2nd edition_. It is really a good book for anyone who is interested in learning some techniques. On page 151, I discovered the `box-reflect` property from CSS3. I tried to find out more information about which browser suports it and the syntax of the property, but I only found a site that didn't explain much. Can anyone please provide me with a link to a website that is up to date regarding the CSS3 elements? The site should also contain information about browser-support for the different elements. I'm primarily interested in the `box-reflect` property and to update my knowledge of CSS3 properties."} {"_id": "73449", "title": "Functional programming language for web development", "text": "I want to choose to learn a functional programming language, it should fulfill the criteria: 1. Open source, static typed & fully object-oriented too. 2. Must has web framework because it's for web development. 3. I want to avoid oracle jvm altogether, therefore not scala, clojure. 4. Should be rather popular, i.e. has real-world applications in industry. Please recommend which good language I can use? thanks."} {"_id": "137165", "title": "Is it possible to combine programming languages?", "text": "I've been programming for a while, I've written some rudimentary programs, and I want to keep learning. I've reached that point where you just don't know what to learn next, and I'd like to ask a question for my own curiosity. The question, in a nutshell, is if you can combine multiple programming languages into 1 result? For example, can this code be possible? cout << \"Hello world!\"; or import java.util.Scanner; cout << \"Insert a number from 1 to 10\"; Scanner n = new Scanner(System.in); System.out.println(\"The value you entered was\" +n.newLine()); This feels like a silly question but I can't possible know if it's possible or not, so that's why I'm asking it. In this question I notice he is using Python code in html code, if my above example is not possible, what did he do?"} {"_id": "201896", "title": "What is \"short-circuiting\" in C?", "text": "I have heard of the term \"short-circuiting\" being used in C. What does this mean and in what scenario would it be used?"} {"_id": "205396", "title": "How to remove data redundancy in a table?", "text": "F1 F2 F3 C1 C2 C3 ------------------------------------------------ A B C 1 D E F 2 A B C 3 D E F 4 A B C 5 D E F 6 i have a table like this. i need my table to be free from Data redundancy like this F1 F2 F3 C1 C2 C3 ----------------------- A B C 1 3 5 D E F 2 4 6 i am unable to figure out the optimized logic. can someone help me out.?"} {"_id": "205394", "title": "data stored in dbase (application result dd.mm.yyyy)", "text": "I have a .dbf file (dbase) with data of type \"date\" to read directly via php. I have the application result (format: dd.mm.yyyy) and the stored hex code. I can't interpret the hex data to get the same results or similar results to the application. Which alogrithm is used to get this results? Examples: app.result=database.hex; 01.01.1000=17 3C 68; 02.01.1000=17 3C 69; 01.02.1000=17 3D 08; 01.01.2000=2E 7C 50; 01.01.1903=2C 58 6F; Can anybody help me?"} {"_id": "205391", "title": "License that grants the initial author all rights of derived/modified work?", "text": "_Please note that I do not want to argue about the moral aspect of this question. 
I know that there are probably many different concerns._ I have been researching for a special license that could be used in the following scenario, but with no success: Let's assume I have some software and want to release its source code, so that others can use it too. Since I don't want others to make changes and keep them closed, I could use the GPL. But here's the thing: even if I profit from changes that other people make, I can relicense only my own code, because I'm not holding the copyright to any code that was not made by me. But I would like to. I want to know if there is a ready-to-use license that will grant me (the initial author) the right to relicense any changes or code that was added by others. Or in even shorter terms: a GPL that does not count for me. If you are interested why I need this: I don't know if I will ever use my software for a commercial product. Probably not, but if I just use the GPL, I might end up in a situation where I can't remove any other code since there are too many committers."} {"_id": "44", "title": "Are certifications worth it?", "text": "I am finishing my college degree in programming soon and I'm exploring the next steps to take to further my career. One option I've been considering is getting a certification or a series of certifications in the area of development I want to work in. Are these certifications worth the time and money? Do employers place a lot of value on them?"} {"_id": "56742", "title": "Are certification courses worth it?", "text": "I'm planning on getting certification in Database Development for SQL Server (MCTS - 70-433). I'm a junior-level report writer at a new job and the company is offering to pay the majority, if not all, of the training course fees. The course is five days. I noticed that MS has a self-paced training kit (book) that I could use. I'm wondering if this would be a better option because it will allow me to go as quickly as possible. I've also heard about video training sessions (Lynda.com) but they seem to go at a slow pace. My questions are: 1. What should I expect at a certification course? Is it hands-on training? Small classes with personal feedback or not? 2. Would I be better off learning at my own pace using the training kit? (I'd rather this not turn into a \"certifications are pointless\" discussion...)"} {"_id": "184051", "title": "Would a programming certificate teach me anything?", "text": "I'm 16 years old, and due to being placed in accelerated programs during my whole academic career, I just finished high school over the holiday season. For the past year or so I've been teaching myself web development (front-end, more specifically), and want to go into a career in the field. I have a front-end internship all lined up for the summer (some places don't offer them year-round, I guess), but I want to continue learning between these two periods. More specifically, I want to learn some server-side development so I can understand the interconnection between the front-end and back-end better. My question is, would taking on a programming certification program at my local community college teach me anything, or is the information too outdated to be relevant whatsoever? PHP is the server-side language covered in the certificate (it also covers JS and CSS). I feel like in teaching myself I may have skipped over some fundamentals of JavaScript, and I'd really like to have a solid understanding of the language from both a syntax and philosophical perspective (i.e., understanding advantages, disadvantages, etc.).
Edit: I've been building a portfolio, and am considering skipping going for a degree altogether. I just want to do this for self-improvement."} {"_id": "125222", "title": "Programming certificates", "text": "> **Possible Duplicate:** > Are certifications worth it? I'd be glad to hear your opinions on which programming certificates are the most widely recognized in the industry and how useful they actually are. I've googled this question a lot, but I've never asked here, and I'd like to gather as many opinions as possible before applying for any. I'm interested mainly in developing business applications. I have knowledge of Java, SQL, PHP, and some C#. Thank you in advance!"} {"_id": "107828", "title": "What are other certifications besides Cisco and Microsoft which put some weight in a resume?", "text": "> **Possible Duplicate:** > Are certifications worth it? I want to know which certifications can help in increasing job chances the most. Although experience is more important than certifications, if there is a tie on experience then certifications can matter."} {"_id": "39032", "title": "Do Microsoft Certifications matter?", "text": "> **Possible Duplicate:** > Are certifications worth it? I'm curious what experience others have had, both from the perspective of an employer and an employee, with Microsoft Certifications. I'm kind of sitting on the fence myself on this issue. I tend to have a cynical view of folks with massive amounts of letters following their name, and sometimes think that these certificates are so specialized that they prove a candidate knows \"almost everything about almost nothing\". Deeply specialized knowledge isn't always needed in the real world, especially if it means total ignorance of related subjects. Nobody needs to hear \"That's not my specialty - we need to hire another guy on the team\" very often. On the other hand, everybody knows how easy it is to BS through an interview, and nobody wants the experience of hiring an employee and finding out two weeks later that they don't really have a good grasp on what you need them to do."} {"_id": "178552", "title": "Java Certifications process: Good or Bad idea", "text": "> **Possible Duplicate:** > Are certifications worth it? I am looking to get into more Java development after being thrown on a project that required Java development for a month and successfully knocking it out. I still feel as though I could use a little bit more refinement with Java, and I thought certifications might be the way to go. I would like to be certified in JPA, Java web services, and Java front-end development (probably JSF or Spring MVC or something like that). I would like to be an architect at some point as well. I have a couple of questions about the best way to go about getting certified. My first question is regarding the Java certification process itself: 1) Is there a baseline certification that I would need to get before I go out and get certified in JPA, web services, front-end development, and an architect certification? 2) Do you think that certification is even a good idea? Do you know of employers that would give someone a leg up if, say, he has only worked on one or two Java EE projects but has a certification or two? 3) Is this a good step in terms of career development? Is focusing just on development limiting myself to the basement of some development firms? How can I make myself more valuable as a business asset?
Any other thoughts you guys have on certifications and their business worth?"} {"_id": "204999", "title": "How valuable are master level language certifications like C++ grandmaster?", "text": "My understanding of standard programming/computer certifications is that they show you have some core competency in a specific language or technology. There seems to be a little room for variation here, but for the most part I don't feel like I see very many \"expert\" level certifications. How valuable are master-level language certifications such as the recent C++ Grandmaster? Do you think they would have a big impact on getting a good job? (I understand that C++ Grandmaster is a really extreme, maybe unrealistic, example.) If so, what valuable \"master\" or \"expert\" level certifications exist for specific programming languages? Are certifications more valuable for showing core competency or specific specialty knowledge? I am not very knowledgeable about this subject, so let me know if I made bad assumptions."} {"_id": "35521", "title": "Is an SEI certificate worth anything?", "text": "I'm talking about the SEI Software Architecture Professional Certificate. I've been web developing for 3 years and I'd like to move into Software Architecture. My work will probably pay the fees for this program if I ask nicely enough. But does the SEI have the reputation to make this \"certificate\" worth having? More importantly, is the training any good? Has anyone done this?"} {"_id": "179004", "title": "What are the certifications needed for a Java Architect?", "text": "I'm a senior software developer working at a private company. I want to become a Software Architect on the Java side. Could you please tell me the certifications related to my scenario?"} {"_id": "99922", "title": "Should I focus on getting certified?", "text": "I am a graduate in computing and I am on the job hunt at the moment. So I now have an MSc and a BSc qualification in applied computing. Some people have been saying to get a professional qualification right away, while others have said \"not now\". I am really unsure what to think of them at the moment; someone said to me they are bad, as they make you look like you are not open to developing in other environments. For example, if I was to get an MS certification, employers might think I am useless outside of a Microsoft environment. Are certifications worth the money? Should I really get them now, so soon after I graduate, or wait till I have some experience in the working world?"} {"_id": "36920", "title": "Is the CakePHP certification worth it?", "text": "I have over 5 years of experience in web technologies\u2014primarily the LAMP stack\u2014and over 2 years in the CakePHP framework. I am considering the CakePHP certification; is it worth it?"} {"_id": "83079", "title": "If you're a .NET developer, does getting the Microsoft Certified Master SQL Server 2008 cert make sense?", "text": "So I've been working at a job for the past year as a .NET developer/DBA. The job is my first DBA job. Traditionally, I'm a .NET developer and have about 5 years' experience in it. Does it make sense for me to continue my certs to get the MCM (assuming I'm eligible)? I'm thinking it doesn't, because I've been working with SQL Server for 5 years but only as a DBA for a year. Would it help a developer to have that? Or is it useful only for switching?
I became interested in SQL Server because I felt that during the recession, people were looking more and more for developers who could add value across the stack. Now, I'm not so sure..."} {"_id": "126830", "title": "Is the Cloudera Hadoop certification worth the investment?", "text": "I am considering investing time to learn Hadoop and its related technologies. The problem is that my current day job will not be using Hadoop any time soon, and even if I learn from books, blogs, and personal projects, I will not have much to back it up when I actually need to show that I have Hadoop experience. So while continuing my job I would like to invest in my own training, and I am thinking about the Hadoop certification from Cloudera. What do you think about it? Please answer from your perspective (whether you took the certification and training course, or are in the market for hiring new Hadoop developers, and what you look for). I am sure that there is no shortcut to becoming an expert in Hadoop, but certification and training seem like a jump start."} {"_id": "73148", "title": "Thoughts on Agile / Scrum Certifications - Worth it?", "text": "For those involved with the Agile community (either as developers or management), you've all probably heard about the Certified ScrumMaster designation, as well as Certified Product Owner and Certified Scrum Developer. You may have also heard about PMI's (Project Management Institute) new \"Agile Certification.\" Isn't Agile (as a philosophy) all about opening up to newer and (possibly) more effective methods of developing? Does there need to be a certification to back up the \"skills\"? What is the best reason to get an Agile certification?"} {"_id": "182433", "title": "Fundamental Difference between fn() and new fn() in javascript", "text": "In what aspects does calling a function with and without the new keyword differ in JavaScript? I mean, what exactly differs between `testFn()` and `new testFn()`?"} {"_id": "231042", "title": "Creating a very basic compiler using Java", "text": "I want to try and create my own very basic language, with its own very basic compiler, all using Java. For now, it will only need to enable the 'programmer' to print things to the screen. I had an idea for how to do this, and what I wanted to know is: will this be considered an actual 'compiler', an actual 'language', and an actual 'virtual machine'? (All very, very basic of course). My idea was to create a program which will serve as the 'IDE' (editor and compiler), and another one which will serve as the 'virtual machine'. This means that the IDE will not compile the written code to some existing machine code, like Java bytecode, but will actually compile to some kind of compiled code made up by me. This compiled code will only be understandable by my 'virtual machine' program, and will only be able to run inside this program. The 'virtual machine' program will use high-level Java operations in order to understand and execute the compiled code. The 'virtual machine' program will be a Java program, running on the JVM. My question is: **conceptually**, is this considered a virtual machine, and 'machine code'? If not, is this still considered a 'programming language', even though its compiled bytecode can only run inside a specific program?"} {"_id": "182434", "title": "How to practice programming?", "text": "I've been a PHP developer for 5 years now. Before I started working in that field, I had been practicing at home.
I created my own CMS, so I could actually show some of my code to a potential employer. In the final version, it had about 2,000 lines of code, and no one except me was using it. But that didn't matter, because I learned a lot during this project. Lately I've decided to move forward and abandon web development for the sake of C/C++ programming. I've read a few books recommended on Stack Overflow, skimmed over the glibc manual, and solved some problems on Project Euler as advised. Now it's time to start a project, but I can't come up with anything. What is the next step I should take?"} {"_id": "124112", "title": "Simulating Agile Software Development", "text": "Looking for a guide or example where a group facilitator walks a group through an agile software development cycle, playing the role of both the event facilitator and the key stakeholder or project owner."} {"_id": "124115", "title": "What limitations will we face if each user-perceived character is assigned to one codepoint?", "text": "Hi all, I was wondering what limitations we will have if Unicode had decided to assign one and only one code point to every user-perceived character. Currently, Unicode has code points that correspond to combining characters. Such characters combine with a previous code point (or sequence thereof) to present to the user what appears to be a single character. From what I can see, the current standard is full of complexities. Even if I try to avoid any kind of complexity by using an encoding like UTF-32, this problem still **persists**. It's not an encoding issue at all. For example, in Unicode, when a grapheme cluster is represented internally by a character sequence consisting of base character + accent, then when the user clicks the `>` button to skip to the next user-perceived character, we have to skip from the start of the base character to the end of the last character of the cluster. Why does it need to be so hard? Why can't we assign a single code point to every user-perceived character, such that going to the next user-perceived character is simply a matter of reading the next code point? The Unicode website seems to acknowledge this complexity (although I could not understand what exactly is so rare about having to figure out the number of user-perceived characters in a string): > In those **relatively rare circumstances where programmers need to supply end users with user-perceived character counts**, the counts should correspond to the number of segments delimited by grapheme clusters. Grapheme clusters may also be used in searching and matching. Diacritics are also the reason why things don't work as expected. For example, if I throw 2 \u30d4 characters (Japanese katakana PI, using the Unicode representation `U+30d2 U+309a`) into a StringBuilder and reverse it, I would naturally expect the output to be 2 \u30d4 characters (i.e., \u30d4\u30d4), but it gives the invalid output \u309a\u30d4\u30d2! **If Unicode had assigned individual code points for each user-perceived character and scrapped the idea of grapheme clusters, this wouldn't have happened at all.** What I would like to know is, what exactly is wrong with representing every user-perceived character as one Unicode code point? Is it likely that doing so would exceed the U+10FFFF possible code points (and if it does exceed U+10FFFF code points, I see no reason why they couldn't have set the limit to 2^32 in the first place), even when there is so much spare space to include the whole family of Elf languages?
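To make the \u30d4 reversal problem above concrete, here is the same experiment as a minimal Python sketch (Python is used purely for illustration; the code points are the ones quoted above):
pi = \"\u30d2\u309a\"  # one user-perceived katakana PI: base U+30D2 plus combining U+309A
text = pi * 2          # two PI characters, four code points
naive = text[::-1]     # naive code-point-wise reversal
# text  is: base, mark, base, mark  (renders as two PI characters)
# naive is: mark, base, mark, base  (each mark now precedes its base, so it renders garbled)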
Unicode states: > If you wanted, for example, to give each \u201cinstance of a character on paper throughout history\u201d its own code, you might need trillions or quadrillions of such codes; But seriously, that doesn't make sense at all. Where is the source for making such a claim? A trillion codes is quite an overstatement."} {"_id": "124118", "title": "New to iPhone Development - iOS5 Storyboard", "text": "I'm new here and pretty new to iOS development. My question is basically, should I learn the old-school development methods or just learn how to do things using the latest tools (i.e., Storyboard)? I've had a go with the Storyboard feature of Xcode 4.2 and it's very powerful. My only concern is that it requires iOS 5. I don't mind learning the old way of doing things, but I've been having trouble finding tutorials/examples for Xcode 4.2 that don't use the storyboard. An example would be my trouble finding a good tutorial on how to embed a Navigation Controller into a TabBarController. A lot of the material out there seems to be for older versions of Xcode. Using the storyboard I'm able to set this up within seconds, but I still haven't managed to get it working without it. So in short :) would you guys suggest I continue my project using the Storyboard, or make the extra effort to do things a little more manually?"} {"_id": "140856", "title": "Certifications for Javascript developers?", "text": "I'm looking for a solid but fast-paced entry into the field of JavaScript development. The following topics come to my mind: * JavaScript advanced concepts, OOP * jQuery, jQuery-UI, jQuery-Mobile * backbone.js * node.js * BDD and/or TDD The courses at http://www.codelesson.com seem promising. What certificates for JavaScript developers exist/can be recommended? What other vendors can you recommend?"} {"_id": "125648", "title": "Custom heap allocators", "text": "Most programs can be quite casual about heap allocation, even to the extent that functional programming languages prefer to allocate new objects rather than modify old ones, and let the garbage collector worry about freeing things. In embedded programming, the silent sector, however, there are many applications where you can't use heap allocation at all, due to memory and hard real-time constraints; the number of objects of each type that will be handled is part of the specification, and everything is statically allocated. Games programming (at least with those games that are ambitious about pushing the hardware) sometimes falls in between: you can use dynamic allocation, but there are enough memory and soft real-time constraints that you can't treat the allocator as a black box, let alone use garbage collection, so you have to use custom allocators. This is one of the reasons C++ is still widely used in the games industry; it lets you do things like http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html What other domains are in that in-between territory? Where, apart from games, are custom allocators heavily used?"} {"_id": "140851", "title": "Is this kind of Design by Contract useless?", "text": "I've just started university in informatics and I'm attending a programming course about C(++). The programming professor prefers to connect every topic with a type of programming design that is similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments that denote the pre-condition, post-condition, and invariants that should prove the correctness of each program we write.
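To illustrate the style, here is a minimal sketch in Python (the course uses C(++); the function and the conditions here are only an example of the general shape, not taken from the course):
def isqrt(n):
    # pre-condition: n is a non-negative integer
    assert isinstance(n, int) and n >= 0
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        # invariant: r * r <= n holds after every iteration
        assert r * r <= n
    # post-condition: r is the integer square root of n, rounded down
    assert r * r <= n < (r + 1) * (r + 1)
    return r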
But this doesn't make any sense to me. Maybe writing down your thoughts prevents you from making some mistakes, but if this is all an abstract thing, then if your intuition about the program is wrong, you'll write your program wrong, and then you'll also write the pre- and post-conditions wrong. In the end, you'll be convinced that a wrong program might actually be correct. I had some programming experience before this course and I found myself comfortable with just writing a program and unit testing it. It takes less time to accomplish and is less \"abstract\" than thinking about what every single piece of your program should do in every case (which is kinda like mentally testing it). Finally, determining the pre- and post-conditions takes me about 80% of the total time of the exercises. It's harder to get these pre- and post-conditions down correctly than to write the program itself. Am I right in thinking that working with pre-conditions, post-conditions, and invariants isn't worth anything? Should I convince myself that this method is right?"} {"_id": "125644", "title": "Architecting a multi-technology solution", "text": "I am trying to solve an issue with reusing components. I have some UI components (a mix of JS, CSS, and HTML) that are not specific to any application. These UI components need to be able to be used in multiple applications. One is a RoR application, the other is an ASP.NET MVC application. How can I maintain these components (e.g. with version control, distribution) and still make them accessible to the different platforms that need to consume them? For example, ApplicationA and ApplicationB use UIComponentA, which is not natively a part of their projects or repositories, because UIComponentA lives in its own repository. How can these projects get the dependencies they need while keeping the components reusable and hosted in their own location? Does anyone have experience with a similar situation, and if so, how did you tackle it? If you need further information, don't hesitate to ask; I will update this question."} {"_id": "219320", "title": "When and how should I use exceptions (in Python)?", "text": "# The Setting I often have trouble determining when and how to use exceptions. Let's consider a simple example: suppose I am scraping a webpage, say \"http://www.abevigoda.com/\", to determine if Abe Vigoda is still alive. To do this, all we need to do is download the page and look for the times that the phrase \"Abe Vigoda\" appears. We return the first appearance, since that includes Abe's status. Conceptually, it will look like this:
def get_abe_status(url):
    # download the page
    page = download_page(url)
    # get all mentions of Abe Vigoda
    hits = page.find_all_mentions(\"Abe Vigoda\")
    # parse the first hit for his status
    status = parse_abe_status(hits[0])
    # he's either alive or dead
    return status == \"alive\"
Where `parse_abe_status(s)` takes a string of the form \"Abe Vigoda is _something_\" and returns the \"_something_\" part. Before you argue that there are much better and more robust ways of scraping this page for Abe's status, remember that this is just a simple and contrived example used to highlight a common situation I'm in. Now, where can this code encounter problems? Among other errors, some \"expected\" ones are: * `download_page` might not be able to download the page, and throws an `IOError`. * The URL might not point to the right page, or the page is downloaded incorrectly, and so there are no hits. `hits` is the empty list, then.
* The web page has been altered, possibly making our assumptions about the page wrong. Maybe we expect 4 mentions of Abe Vigoda, but now we find 5. * For some reason, `hits[0]` might not be a string of the form \"Abe Vigoda is _something_\", and so it cannot be correctly parsed. The first case isn't really a problem for me: an `IOError` is thrown and can be handled by the caller of my function. So let's consider the other cases and how I might handle them. But first, let's assume that we implement `parse_abe_status` in the stupidest way possible:
def parse_abe_status(s):
    return s[13:]
Namely, it doesn't do any error checking. Now, on to the options:
## Option 1: Return `None`
I can tell the caller that something went wrong by returning `None`:
def get_abe_status(url):
    # download the page
    page = download_page(url)
    # get all mentions of Abe Vigoda
    hits = page.find_all_mentions(\"Abe Vigoda\")
    if not hits:
        return None
    # parse the first hit for his status
    status = parse_abe_status(hits[0])
    # he's either alive or dead
    return status == \"alive\"
If the caller receives `None` from my function, he should assume that there were no mentions of Abe Vigoda, and so _something_ went wrong. But this is pretty vague, right? And it doesn't help the case where `hits[0]` isn't what we thought it was. On the other hand, we can put in some exceptions:
## Option 2: Using Exceptions
If `hits` is empty, an `IndexError` will be thrown when we attempt `hits[0]`. But the caller shouldn't be expected to handle an `IndexError` thrown by my function, since he has no idea where that `IndexError` came from; it could have been thrown by `find_all_mentions`, for all he knows. So we'll create a custom exception class to handle this:
class NotFoundError(Exception):
    \"\"\"Throw this when something can't be found on a page.\"\"\"

def get_abe_status(url):
    # download the page
    page = download_page(url)
    # get all mentions of Abe Vigoda
    hits = page.find_all_mentions(\"Abe Vigoda\")
    try:
        hits[0]
    except IndexError:
        raise NotFoundError(\"No mentions found.\")
    # parse the first hit for his status
    status = parse_abe_status(hits[0])
    # he's either alive or dead
    return status == \"alive\"
Now what if the page has changed and there are an unexpected number of hits? This isn't catastrophic, as the code may still work, but a caller might want to be _extra_ careful, or he might want to log a warning. So I'll throw a warning:
class NotFoundError(Exception):
    \"\"\"Throw this when something can't be found on a page.\"\"\"

def get_abe_status(url):
    # download the page
    page = download_page(url)
    # get all mentions of Abe Vigoda
    hits = page.find_all_mentions(\"Abe Vigoda\")
    try:
        hits[0]
    except IndexError:
        raise NotFoundError(\"No mentions found.\")
    # say we expect four hits...
    if len(hits) != 4:
        raise Warning(\"An unexpected number of hits.\")  # or just: logger.warning(\"An unexpected number of hits.\")
    # parse the first hit for his status
    status = parse_abe_status(hits[0])
    # he's either alive or dead
    return status == \"alive\"
Lastly, we might find that `status` isn't either alive or dead. Maybe, for some odd reason, today it turned out to be `comatose`. Then I don't want to return `False`, as that implies that Abe is dead. What should I do here? Throw an exception, probably. But what kind? Should I create a custom exception class?
class NotFoundError(Exception):
    \"\"\"Throw this when something can't be found on a page.\"\"\"

def get_abe_status(url):
    # download the page
    page = download_page(url)
    # get all mentions of Abe Vigoda
    hits = page.find_all_mentions(\"Abe Vigoda\")
    try:
        hits[0]
    except IndexError:
        raise NotFoundError(\"No mentions found.\")
    # say we expect four hits...
    if len(hits) != 4:
        raise Warning(\"An unexpected number of hits.\")  # or just: logger.warning(\"An unexpected number of hits.\")
    # parse the first hit for his status
    status = parse_abe_status(hits[0])
    if status not in ['alive', 'dead']:
        raise SomeTypeOfError(\"Status is an unexpected value.\")
    # he's either alive or dead
    return status == \"alive\"
## Option 3: Somewhere in between
I think that the second method, with exceptions, is preferable, but I'm not sure if I'm using exceptions correctly within it. I'm curious to see how more experienced programmers would handle this."} {"_id": "173569", "title": "Statistical Software Quality Control References", "text": "I'm looking for references about hypothesis testing in software management. For example, we might wonder whether \"crunch time\" leads to an increase in defect rate - this is a surprisingly difficult thing to do. There are many questions on how to measure quality - this isn't what I'm asking. And there are books like Kan which discuss various quality metrics and their utilities. I'm not asking this either. I want to know how one applies these metrics to make decisions. E.g., suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: Based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that they be based on published data, instead of your own experience.)"} {"_id": "229770", "title": "What does the implementation of .NET string.Split(char[], StringSplitOptions) look like from inside?", "text": "That is, if we were to see how Microsoft wrote this method, what would it look like? I'm mainly interested in the use of the StringSplitOptions enumeration with the other parameter, and how they probably structured their code to account for each option."} {"_id": "134364", "title": "Has anyone thoroughly compared C# common coding standards?", "text": "Most of the C# programmers I know embrace one of the common coding standards. However, being aware of the standards is one thing; telling the differences is another. Browsing the common coding standards documents out there on C#, my first reaction was that it's more of the same. Admittedly I didn't bother reading everything. Has anyone ever compared them and pointed out the **major** differences between them? * **Microsoft's Design Guidelines for Developing Class Libraries** * **The IDesign C# Coding Standard, development guidelines, and best practices** * **Lance Hunt's C# Coding Standards**"} {"_id": "134362", "title": "Good industry qualifications to aim for as a grad?", "text": "I am a fairly recent grad and have been working for over a year now in software development. I would like to get a couple of professionally recognised exams on my CV. I am currently looking at the Oracle Certified Java Programmer.
But are there any others that would look good and that people would recommend?"} {"_id": "141104", "title": "\"PHP: Good Parts\"-ish book / reference", "text": "Before I had my first proper contact with JavaScript, I read an excellent book, \"JavaScript: The Good Parts\" by Douglas Crockford. I was hoping for something similar in the case of PHP. My first thought was this book: \"PHP: The Good Parts\" from O'Reilly. However, after I read the reviews, it seems it totally misses the point. I am looking for a resource that would: * concentrate on known shortcomings of PHP, * give concrete examples, * be as exhaustive as possible. I already see that things can go wrong. **Research:** I looked through SO and Programmers for materials. I obviously found this question: http://stackoverflow.com/questions/90924/what-is-the-best-php-programming-book It's general, mine is specific. Moreover, I'm reading the top recommendation \"PHP Objects, Patterns, and Practice\" right now. I find it insufficient -- it doesn't address the bad practices as much as I would like it to. **Motivation:** I don't _dislike_ PHP. I'm not even competent enough just yet to really state that. In 3 months, however, I will start a job that will likely involve a lot of development in that language. I do not really know what the quality of the code will be. I also don't want to be the guy who introduces all the bad practices and then learns everything the hard way. I try to find out about as many traps as I possibly can. In the case of JavaScript, Crockford's lectures really served as a guiding light. I would never feel confident in JavaScript without them."} {"_id": "141108", "title": "Hidden web standards behind Google \"custom searchEngines\"?", "text": "Today while playing with the Google Chrome Omnibox, I noticed a strange behavior. I guess there's some \"hidden\" web standard behind it, but I can't figure it out. Here's how to reproduce: 1. Go to http://edition.cnn.com/ 2. Use the search function at the upper right corner; search for a random keyword, for example \"abc\" 3. Close the tabs. 4. Open a new tab, type until Chrome suggests http://edition.cnn.com/, then press \"Tab\" 5. The Omnibox now shows \"Search CNN.com\"! And when you type \"abc\" and press Enter, it uses the CNN search function to do the job, not Google! I also tried it on several different sites. On some it won't work. But on some sites, like CNN or vnexpress.net, it works after I use the search function of that site once. I also learnt about `chrome://settings/searchEngines` (type it in your Chrome box and you will see), and learnt that you can add custom search engines in Chrome. But the question is, why can Chrome figure out the search URL automatically for some pages and not others? It's not because some sites subscribe to a Google service, because I can do the same with my site (http://ledohoanglong.wordpress.com), and I'm sure that there's no subscription. So I guess there's a method to \"expose\" the search function of a site, so that Google Chrome can catch it (after I call the search function of that site once, of course). Does anyone know how it works behind the scenes?"} {"_id": "134368", "title": "Assembly in a research paper", "text": "I am doing a research paper on programming, and I need to somehow explain assembly... I've never learned the language, but I understand what it is used for and kinda what it looks like...
MOV A,47
ADD A,B
HALT
And from what I understand, it is used for writing some compilers, small bits of code that need to be really fast, and other tasks... But this is kind of a question for people who have used/studied assembly and understand it fairly well. And no, I'm not asking you to type up this segment of the research paper for me. That would be selfish. I just need a brief explanation, clarification on the subject (I'm just unsure if what I know is true or not), and a pointer in the right direction. For a research paper (I need to explain this VERY simply...), what is the purpose of using assembly in today's society? Of course it was important back in the day, but now, with all the high-level languages, when is this low-level language used?"} {"_id": "203942", "title": "Why is the game industry, specifically, so harsh on programmers?", "text": "On Hacker News and /r/programming I've heard several reports of how the games industry is incredibly harsh on programmers. Someone on this site also linked this blog post in an answer I read recently. According to various reports, programmers in the games industry are severely overworked. Perhaps not when working for small games companies, but definitely when working for places like EA (the place discussed in the blog post). So my question is, why? I'm a developer for a large networking company; I sometimes work more than 8 hours a day, but I wouldn't dream of working 12-hour days 6 days a week like the blog post describes. I'd quit and move on without a second thought. Why does the games industry, specifically, have this problem?"} {"_id": "125398", "title": "What's the point of camel-case-navigation?", "text": "I heavily use CTRL+ to jump between tokens in code (the `_`s are the navigation points): _fooBar _+ _barFoo_; Some editors, like the one in the current version of QtCreator, have (by default or optionally) camel-case navigation, which goes like this: _foo_Bar _+ _bar_Foo_; What is the rationale behind this? I remember some people back in school saying \"cool\" and \"fancy\", but is that the whole story? I do not think it is cool; I find it very annoying, actually."} {"_id": "125399", "title": "Differences between Design by Contract and Defensive Programming", "text": "Could Design by Contract (DbC) be a way to program defensively? Is one way of programming better in some cases than the other?"} {"_id": "135947", "title": "Do CDNs (such as MSFT and Google) act on the referrer header sent by clients?", "text": "Will my site be automatically indexed, or its search ranking affected, based on my use (or non-use) of the Google/MSFT CDN? My clients will be sending them the referrer header, which I may or may not want included in search results."} {"_id": "155055", "title": "Spring MVC vs raw servlets and template engine?", "text": "> **Possible Duplicate:** > Why not Spring framework? I've read numerous articles about the Spring MVC framework, and I still can't see the benefits of using it. It looks like writing even a simple application with it requires creating a big hodgepodge of XML files and annotations and other reams of code to conform to what the framework wants, a whole bunch of moving parts to accomplish a simple single task. Any time I look at a Spring example, I can see how I could write something with the same functionality using a simple servlet and template engine (e.g. FreeMarker, StringTemplate), in half the lines of code and with little or no XML files and other artifacts.
Just grab the data from the session and request, call the application domain objects if necessary, pass the results to the template engine to generate the resulting web page, done. What am I missing? Can you describe even one example of something that is actually made simpler with Spring than with a combination of raw servlets and a template engine? Or is Spring MVC just one of those overly complicated things that people use only because their boss tells them to use it? **EDIT**: A summarized concrete description would be helpful. For example, \"I used Spring MVC to build an application that did XYZ, and its feature Q saved me from having to do the extra work of A and B and C which would have been necessary with raw servlets and a template engine.\" Simply mentioning marketing buzzwords like \"dependency injection\" doesn't clarify anything."} {"_id": "42818", "title": "Best ways to sell management on the benefits of Open Source Software?", "text": "I have worked in a few places where the use of Open Source Software in products they produce is strictly forbidden for various reasons, such as: * no formal support * lack of trust in something perceived as \"just downloaded from the internet\" * How can it be professional if it's not supported, we don't pay for it, etc.? I'm looking for the best ways to convince/prove to management that things won't fall apart should we use these tools."} {"_id": "125391", "title": "What is the best way to evaluate new programmers?", "text": "What is the best way to evaluate the best candidates for a new job (talking merely in terms of programming skills)? In my company we have had a lot of bad experiences with people who have good grades but do not have real programming skills. Their skills are merely those of code monkeys, without the ability to analyze problems and find solutions. More things that I have to note: * The education system in my country sucks--really sucks. The people that are good at this kind of job are good because they have talent for it or really try to learn on their own. * A university / graduate / post-grad degree doesn't necessarily mean that you know exactly how to do things. * Certifications also mean nothing here, because the people in charge of the certification courses also don't have skills (or are in low-paying jobs). We really need to get good candidates that are flexible and don't have mechanical thinking (because, in my experience, this type of people performs poorly). We are in a government institution and the candidates don't necessarily come from outside, but we have the possibility to accept or reject any candidate until we find the correct one. I hope I'm not sounding too aggressive in my question; and BTW I'm a programmer myself. _**edit:**_ I figured out that I asked something really complex here. ~~I will un-toggle \"the correct answer\" only to let the discussion keep flowing, without any bias.~~"} {"_id": "85676", "title": "About languages strongly typed with late binding, do they make sense?", "text": "I never learnt anything about VB6 (and I don't want to), but I wanted to search for bad things in computer software, so my first thought was VB6. So, for example, VB6 was strongly typed with late binding. Does it make sense to have a language with that combination? (I don't think so.) I want to know the reasons why VB6 was like this, or why it would be a good idea for a language to be like this. What bad things happened with a language like this?
good things?"} {"_id": "208575", "title": "What software is used to write functional and non-functional requirements?", "text": "Is there any software which can be used to write functional and non-functional requirements? When writing those, it is essential to: * Store the document in text format to be able to make diffs and minimize the impact on the version control, * Apply extensive formatting when creating diagrams would be an overkill, * Use diagrams, preferably stored in text format as well, * Cross-link the requirements or the appendixes. Currently, I can only think about three editors which _may_ be used, but which are not truly suitable: 1. **A Markdown editor.** Benefits: text only; easy to add formatting. Disadvantage: not powerful enough: one can add images, but not diagrams; formatting is rather limited; no cross-linking (unless adding much HTML markup). 2. **An HTML editor.** Benefits: text only; ability to add diagrams with HTML 5. Advanced formatting is possible. Disadvantages: markup difficult to change by hand, or if WYSIWYG editor is used, markup is often too verbose and low quality; cross-linking limited to manual only. 3. **A document editor such as OpenOffice or Microsoft Word.** Benefits: ability to apply some advanced formatting; excellent automated cross-linking. Disadvantages: binary format, which means no diff and wasted space in version control. What are my other choices? What is actually used in the industry?"} {"_id": "208576", "title": "Do you keep your project code names the same in the source tree?", "text": "Sometimes when I start working on a project, I just can't think of a good name, or think of a good name that isn't already taken. As a result, I'll end up picking some sort of code name for the project. My question is, how do you handle name changes in your source code once you find a real name for your project? Should you continue to refer to the code name in your source tree, namespaces, binaries, etc?"} {"_id": "170912", "title": "Ways to ensure unique instances of a class?", "text": "I'm looking for different ways to ensure that each instance of a given class is a uniquely identifiable instance. For example, I have a `Name` class with the field `name`. Once I have a `Name` object with `name` initialised to John Smith I don't want to be able to instantiate a different `Name` object also with the name as John Smith, or if instantiation does take place I want a reference to the orginal object to be passed back rather than a new object. I'm aware that one way of doing this is to have a static factory that holds a `Map` of all the current Name objects and the factory checks that an object with John Smith as the name doesn't already exist before passing back a reference to a `Name` object. Another way I could think of off the top of my head is having a static Map in the `Name` class and when the constructor is called throwing an exception if the value passed in for `name` is already in use in another object, however I'm aware throwing exceptions in a constructor is generally a bad idea. Are there other ways of achieving this?"} {"_id": "38569", "title": "Shell independence in programming groups", "text": "Our programming environment is dependent upon certain environment variables being set. For example, to use distcc, one needs to define the `DISTCC_HOSTS` environment variable. The way we handle this is forcing each developer to `source` a global `tcshrc` file upon invoking a new shell. 
The global `tcshrc` file contains statements to set up the environment variables (among other things). However, this is awfully discriminatory, as each developer is forced to use `tcsh`, since setting an environment variable is different per shell. The most obvious solution to this problem is to have corresponding global `bashrc` and `zshrc` files, but that of course becomes cumbersome, since now we have to maintain three different files all containing the same logic. Are there any clean solutions to this sort of situation?"} {"_id": "203490", "title": "Are 3rd-party controls and MVC anathema?", "text": "At http://www.codeproject.com/Articles/552846/Why-s-How-s-of-Asp-Net-MVC-Part-1, I read this: \"You should not use Asp.Net MVC if you rely on 3rd party vendor controls for of the UI.\" The author doesn't seem to say why (maybe the explanation is too subtle, or I'm too dense). Thoughts? Is it true? Why would this be?"} {"_id": "134092", "title": "Why don't public web applications use ini files for configuration", "text": "Almost every public CMS out there uses a .php configuration file for the database settings and so on. For example, WordPress automatically creates a .php config file when you install it. Why don't they just use a .ini file? PHP already has parse_ini_file() and I'm sure other languages have similar functions."} {"_id": "129796", "title": "How to determine the effectiveness of a code review process?", "text": "We've introduced a code review process within our organisation and it seems to be working well. However, I would like to be able to measure the effectiveness of the process over time, i.e., are we not finding bugs because the code is clean, or are people just not picking up on bugs? Currently, we don't have an effective fully-automated test process. We primarily employ manual testing, so we can't rely on defects found at this stage to ensure that the code review process is working. Has anyone come across this issue before, or has any thoughts on what works well in measuring code reviews?"} {"_id": "134097", "title": "Why should I prefer composition over inheritance?", "text": "I always read that composition is to be preferred over inheritance. A blog post on unlike minds, for example, advocates using composition over inheritance, but I can't see how polymorphism is achieved. But I have a feeling that when people say prefer composition, they really mean prefer a combination of composition and interface implementation. How are you going to get polymorphism without inheritance? Here is a concrete example where I use inheritance. How would this be changed to use composition, and what would I gain?
class Shape {
    string name;
public:
    void getName();
    virtual void draw() = 0;
};

class Circle : public Shape {
    void draw(); // draw circle
};"} {"_id": "227654", "title": "Declaring a field name starting with underscore", "text": "Before forming a class in Java or other programming languages that support OOP, should I use an underscore (_) in each (**local** or **private**) field declaration? More precisely: > private String _customername; Is this a correct declaration?"} {"_id": "249520", "title": "How do I fix an \"emergent\" bug?", "text": "I'm writing a PDE solver, and I have a bug that only shows up in very large test cases.
That is, with small grids the program gives correct answers, but there's a large amount of unaccounted-for error (I've accounted for roundoff, discretization, and other standard types of error that unavoidably occur) that creeps in when my test cases get into the days-to-finish range. I can't run this in a debugger; that would take weeks. And printing out intermediate results is not particularly useful, given that I can't manually inspect the output to see what's wrong. **How can I find and fix this \"emergent\" bug?**"} {"_id": "249522", "title": "Linking to MIT-Licensed dll", "text": "I'm currently working on some closed-source proprietary software that makes use of a library (**SharpAVI**) distributed under the MIT License. The SharpAVI source _isn't_ being used directly anywhere in my project, only the unchanged .dll provided for download on the CodePlex site linked above. The .dll is referenced in one very small, distinct project inside a much larger solution (consisting of a single wrapper class around the functionality provided by SharpAVI). I'm particularly concerned about this wording: > The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Does the license need to be included in a file with _my_ code, which is linked to the licensed code by .dll and includes none of the original source? Would this make my wrapper code an MIT-licensed product (that would seem to go against the non-viral nature of the license)? If my wrapper needs to become licensed under the MIT License, wouldn't that propagate to any other code that references it as well? **How do I properly attribute the license information for a linked MIT-Licensed .dll without releasing my own code under the MIT License?**"} {"_id": "249524", "title": "Why can't C arrays have 0 length?", "text": "The C11 standard says that array sizes, both fixed and variable length, \"shall have a value greater than zero.\" What is the justification for not allowing a length of 0? Especially for variable-length arrays, it makes perfect sense to have a size of zero every once in a while. It is also useful for static arrays when their size comes from a macro or a build configuration option. Interestingly, GCC (and Clang) provide extensions that allow zero-length arrays. Java also allows arrays of length zero."} {"_id": "58824", "title": "Good resources and tools for modern, heavy JavaScript development?", "text": "I am interested in doing some projects that involve heavy use of JavaScript, namely HTML5-based canvas games, potentially using node.js as well. I am interested in learning modern best practices, tools, and resources for JavaScript. JavaScript is tough to research because you end up wading through a lot of really outdated material, hailing from the times when \"JavaScript\" was a four-letter word. If you are heavily involved in JavaScript programming... * What text editor or IDE do you use? * What unit testing framework do you use? * Do you use Selenium, or something else? * What other tools do you use? * What communities exist that discuss recent advents in JavaScript? * What books do you read/refer to? * What blogs do you read?"} {"_id": "96421", "title": "Is there a book for learning Javascript -> DOM -> jQuery?", "text": "> **Possible Duplicate:** > The importance of javascript and the best way to learn it? I would like to learn JavaScript, the DOM model, and jQuery.
I've found many recommendations for books on each subject, but I wonder if there is a single book which teaches the three subjects in order and how they relate to each other. Thanks for any advice."} {"_id": "18891", "title": "Project Suggestion for computer networks using C#", "text": "I am looking for suggestions for a project in computer networks using C# as the development tool, as part of a university course I am taking. The project should last about a month, involving two people. I'll be happy to hear any interesting ideas you might have."} {"_id": "236900", "title": "Ruby on Rails Development: updating model attribute using f.association collection", "text": "I'm currently working on a Rails program. It contains many different databases, but to keep things simple I have a people, a student, and a faculty table. A person can be a student or a faculty member, and they're all connected by the person_id. A faculty member advises students, so the student is connected to the faculty member by the id of the faculty. To create a student you need to select a major and a faculty member (faculty_id). Since the students and faculty members are all people, in my dropdown I had to use
<%= f.input :faculty_id %>
<%= f.association :person, label: "Advisor", label_method: :to_label, value_method: :id, include_blank: false, collection: Person.where("role_id = ?", @role.id) %>
to limit the people in the dropdown to only faculty members. But the issue I'm running into now is how I would set the faculty_id attribute of the student with the selected value of the association collection?"} {"_id": "18895", "title": "How can you write tests for Selenium (or similar) which don't fail because of minor or cosmetic changes?", "text": "I've been spending the last week or so learning Selenium and building a series of web tests for a website we're about to launch. It's been great to learn, and I've picked up some XPath and CSS location techniques. The problem for me, though, is seeing little changes break the tests: any change to a div, an id, or some auto-id number that helps identify widgets breaks any number of tests. It just seems to be very brittle. So, have you written Selenium (or other similar) tests? How do you deal with the brittle nature of the tests (or how do you stop them being brittle), and what sort of tests do you use Selenium for?"} {"_id": "89453", "title": "Is OO-programming really as important as hiring companies place it?", "text": "I am just finishing my master's degree (in computing) and applying for jobs. I've noticed many companies specifically ask for an understanding of object orientation. Popular interview questions are about inheritance, polymorphism, accessors, etc. Is OO really that crucial? I even had an interview for a programming job in C, and half the interview was OO. In the real world, developing real applications, is object orientation nearly always used? Are key features like polymorphism used A LOT? I think my question comes from one of my weaknesses. Although I know about OO, I don't seem to be able to incorporate it a great deal into my programs."} {"_id": "18899", "title": "How to get people involved in your project?", "text": "I have many project ideas, and unfortunately all require a team of at least 3-5 developers. I've started a project by myself too many times without reaching the end. The point is to involve people in the project with the hope of giving them a share if the project succeeds. I know that the last point, however, is the hardest and most complicated one. I'm hoping to get some answers that point me in the right direction. 1. Where do I find people that may be interested in the idea? 2. How do I convince people in my network to put time into new ideas? **Update:** Regarding \"projects\", I'm referring to projects which I have initiated during my career as a developer. Of course it wouldn't be serious of me to launch several simultaneous projects and complain about not being able to deliver them to the end."} {"_id": "229774", "title": "How can I layout this class to make it easier to use for API developers?", "text": "I have an application that allows the user to place a string of text into a PDF document. I have three different ways that they can do this: 1. **Use a Form Field.** Then they have four properties to define: 1. provide field name 2. provide instance of the field 3. provide an X-axis Offset 4. provide a Y-axis Offset 2. **Search for a string of text.** Then they have four properties to define: 1. provide string of text to search for 2. provide instance of that string of text 3. provide an X-axis Offset 4. provide a Y-axis Offset 3. **Define Page Coordinates.** Then they have three properties to define: 1. provide page number 2. provide an X-axis Offset 3. provide a Y-axis Offset
In my API, I want the object to be set up intuitively, so that it is clear they choose one of the three different methods, and then, whichever one they choose, they have the options for each. In my old API, all of those options are grouped together as a list of properties, like this:
Placement.FormField_FieldName
Placement.FormField_Instance
Placement.SearchText_Text
Placement.SearchText_Instance
Placement.PageCoordinates_PageNumber
Placement.XOffset
Placement.YOffset
I find this to be a little confusing for the developer using the API, because you could never use all of the properties together. You would either use **Form Field** like this:
Placement myPlacement = new Placement();
myPlacement.FormField_FieldName = \"MyPdfFormFieldName\";
myPlacement.FormField_Instance = ;
myPlacement.XOffset = 0;
myPlacement.YOffset = 0;
Or use **Text Search** like this:
Placement myPlacement = new Placement();
myPlacement.SearchText_Text = \"My String Of Text\";
myPlacement.SearchText_Instance = 1;
myPlacement.XOffset = 0;
myPlacement.YOffset = 0;
Or use **Page Coordinates** like this:
Placement myPlacement = new Placement();
myPlacement.PageCoordinates_PageNumber = 1;
myPlacement.XOffset = 0;
myPlacement.YOffset = 0;
Is there a better way to set up the class (or subclasses) to make it easier for the developer to understand how to use this functionality?"} {"_id": "250776", "title": "How are mixins or traits better than plain multiple inheritance?", "text": "C++ has plain multiple inheritance; many language designs forbid it as dangerous. But some languages, like Ruby and PHP, use strange syntax to do the same thing and call it mixins or traits. I have heard many times that mixins/traits are harder to abuse than plain multiple inheritance. What specifically makes them less dangerous? Is there something that isn't possible with mixins/traits but possible with C++-style multiple inheritance? Is it possible to run into the diamond problem with them? This seems as if we were using multiple inheritance but just making excuses that those are mixins/traits so we can use them."} {"_id": "120775", "title": "How to drastically improve code coverage?", "text": "I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems: * massive code duplication * no encapsulation; most private data is accessible from outside; some of the business data is also in singletons, so it's not just changeable from outside but from everywhere. * no abstractions (e.g. no business model; business data is stored in Object[] and double[][]), so no OO. There is a good regression test suite and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers', but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. **How should I start to attack the problem to get some coverage quickly**, so I'm able to show progress to management (and in fact to start earning from the safety net of JUnit tests)? I do not want to employ tools to generate regression test suites, e.g. AgitarOne, because these tests do not test whether something is correct."} {"_id": "122014", "title": "What are the key points of Working Effectively with Legacy Code?", "text": "I've seen the book Working Effectively with Legacy Code recommended a few times. What are the key points of this book?
{"_id": "252000", "title": "A container only for assets in Rails", "text": "I have multiple segmented Rails apps that are connected to each other: * App A * App B * App C While I try to keep the design, such as header and footer, looking similar, each app has its own stylesheets, JavaScript and images in the asset folder. Some of these assets are duplicated, like the CSS for the footer. It makes maintenance difficult. I'm wondering if I could do something like having an App D, where App A, B, and C would just retrieve the assets from App D. So if I need to change anything, I can just change the assets in App D. How do I achieve this?"} {"_id": "246209", "title": "Use and manage Front End Assets for Web", "text": "I am a beginner and am currently developing a kind of CMS using PHP. The number of libraries that we can potentially use in the front end is large. I have a question about properly selecting, managing and using front-end asset libraries. I don't think just including a bunch of front-end libraries like this is the best approach: `<script src=\"jquery.js\"></script> <script src=\"bootstrap.js\"></script>` // and so on.. What if we use, let's say, 20? Is that good practice, declaring the script tags 20 times? What I do currently is use `Assetic`, a PHP library for managing assets. I create a dump file (and cache it) for each request to my application before loading a template. My controller (I use MVC) could be something like this: function indexAction() { // some logic $assetManager = $this->get('assetic'); // get assetic service $css = $assetManager->createAsset(array( '@bootstrap', '@jquery_ui' // and many other libraries )); $assetContent = $css->dump(); // create asset URL, don't mind this $data['stylesheet'] = $this->createAssetUrl($assetContent); return $this->render('index-template', $data); } And in the template (index-template), I could put something like: <link href=\"<?php echo $stylesheet; ?>\" rel=\"stylesheet\" type=\"text/css\"> I do that in almost every controller to include a bunch of assets into the view. But I am not sure if this is the best method. Is there any better practical method that I don't know of?"} {"_id": "252433", "title": "Where and when should I use RESTful", "text": "When should I use REST? In what kind of web applications? In terms of Java, in what situation would RESTful be better than ordinary servlets / JSP? Imagine that I have a web page where users can upload two kinds of files and create some objects that are later saved in a DB (current tasks etc). The only thing users can download is these two kinds of files. Will RESTful be a good approach for that kind of service?"} {"_id": "225372", "title": "What did machine code for 4-bit architecture look like?", "text": "I don't know how a 4-bit instruction could be enough to do anything, so I read about the Intel 4004, and it says that it used 8-bit instructions, and then I can understand how the opcode and numbers have enough digits. Is it true that there never were any 4-bit instructions and that 4-bit systems use 8-bit instructions? And that an opcode and two numbers in an instruction can be done in 8 bits but not in 4 bits?"} {"_id": "232027", "title": "Manual dependency injection or abstract factory", "text": "We're starting to use dependency injection in a fairly large, interactive program. It's early yet, but I have a feeling that the majority of the objects being injected are going to want runtime data passed in to their constructors.
Prior to this, I've only used DI in web applications where building everything at the composition root isn't a problem. When working with runtime data, it sounds like the options are to use manual injection or abstract factories. Is there anything important to consider when deciding between the two, or does it mainly come down to personal preference? I'm using Ninject with the factory extension, so going the factory route isn't too onerous. However, if we end up with factories in every class, is that a sign that we're doing something wrong? Should we prefer manual injection when the power of an IoC container isn't needed? Could it simply mean that we're over-using dependency injection?"} {"_id": "237058", "title": "Should I be using OOCSS in a CSS theme?", "text": "I heard of OOCSS some time ago, but never really looked into it. Today I did so, and I thought about the implications of applying OOCSS to a simple CSS theme. One problem I thought of is that, by its nature, OOCSS encourages DRYness and consequently the use of many concise classes that should be applied to elements by the HTML developer. As a CSS theme, I would assume it isn't something significant enough that a developer would go out of his/her way to add specific classes to their elements when writing HTML: `<div class=\"box rounded shadowed\">` as opposed to: `<div class=\"product-box\">` Perhaps the HTML developers would be OK with writing the extra code, although it seems there is just as much repeating code on the HTML side as there would be without OOCSS. So, is OOCSS not a one-size-fits-all methodology for writing CSS code, as in this example? Or am I not understanding things properly?"} {"_id": "215975", "title": "How to handle fine grained field-based ACL permissions in a RESTful service?", "text": "I've been trying to design a RESTful API and have had most of my questions answered, but there is one aspect of permissions that I'm struggling with. Different roles may have different permissions and different representations of a resource. For example, an Admin or the user himself may see more fields in his own User representation vs another less-privileged user. This is achieved simply by changing the representation on the backend, i.e., deciding whether or not to include those fields. Additionally, some actions may be taken on a resource by some users and not by others. This is achieved by deciding whether or not to include those action items as links, e.g., edit and delete links. A user who does not have edit permissions will not have an edit link. That covers nearly all of my permission use cases, but there is one that I've not quite figured out. There are some scenarios whereby for a given representation of an object, all fields are visible for two or more roles, but only a subset of those roles may edit certain fields. An example: { \"person\": { \"id\": 1, \"name\": \"Bob\", \"age\": 25, \"occupation\": \"software developer\", \"phone\": \"555-555-5555\", \"description\": \"Could use some sunlight..\" } } Given 3 users: an Admin, a regular User, and Bob himself (also a regular User), I need to be able to convey to the front end that: Admins may edit all fields, Bob himself may edit all fields, but a regular User, while they can view all fields, can only edit the description field. I certainly don't want the client to have to make the determination (or even, for that matter, to have any notion of the roles involved), but I do need a way for the backend to convey to the client which fields are editable. I can't simply use a combination of representation (the fields returned for viewing) and links (whether or not an edit link is available) in this scenario since it's more finely grained. Has anyone solved this elegantly without adding the logic directly to the client?"}
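A hedged sketch of the abstract-factory option weighed in the dependency-injection question above (all type names are hypothetical): the factory itself is container-injected once, while the runtime data arrives per call.

    // C# sketch: the container wires the factory; callers supply runtime data.
    public class Order { /* runtime data, known only at call time */ }
    public interface IPricingService { /* container-managed dependency */ }

    public interface IOrderViewModelFactory
    {
        OrderViewModel Create(Order order);
    }

    public class OrderViewModelFactory : IOrderViewModelFactory
    {
        private readonly IPricingService pricing; // injected once

        public OrderViewModelFactory(IPricingService pricing)
        {
            this.pricing = pricing;
        }

        public OrderViewModel Create(Order order)
        {
            // Combines a container-supplied service with per-call data.
            return new OrderViewModel(order, pricing);
        }
    }

    public class OrderViewModel
    {
        public OrderViewModel(Order order, IPricingService pricing) { }
    }

Manual injection is the same idea with the wiring written by hand at the call site; the trade-off in the question is mostly about who owns that wiring.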
{"_id": "87294", "title": "How to explain pointers to a Java/VB programmer", "text": "I am writing a game and my friend has offered to help me, as it is an RPG and it will take a long time to do the \"scripting\" bit of the game. The problem is, IMO, he's not that good a programmer :( (add flame war here). He has only programmed in Java and VB and keeps saying really stupid things to me like \"Why don't you drag and drop an onClick event\" to design my UI, when I'm using DirectX. I tried explaining pointers to him, but his response was: if it's just a variable that holds a memory address, why don't you just use an int? I create an instance of an attack class and give the creature a pointer to it, so if several creatures use the same attack there is only one instance of it. He keeps saying: why not put if statements in the creature class for every attack class and set true for the ones that are there? He has programmed mainly in VB and a little in Java just to learn OOP. How can I explain advanced C++ concepts like pointers and memory management to him? He just doesn't understand that there are no super functions like form.show in C++."} {"_id": "175244", "title": "Android: Not able to experiment on own?", "text": "I just started learning Android app development a few days ago, with prior knowledge of C/C++, HTML and CSS. This is the situation I am facing repeatedly: I am learning from a video tutorial series; after each video, or each few videos, I say to myself: let's use what I have just learnt in the simple (and also \"meaningless\") app that I have made so far by watching the tutorials. I start implementing it, but then after a few minutes, I realize that I cannot do it because I do not know a few other pieces of syntax related to the particular thing, (or) I do not know whether these things can be combined with those other things by the use of . (dot). Whatever I try, I get either an error in Eclipse or \"Sorry...the app com.example.simple has stopped unexpectedly....\" when the app runs. Then I search Stack Overflow and Google, and learn that what I want to implement requires learning about a few more classes and syntax, and creating a few more Java classes. I am not able to experiment on my own. Is it normal? Is it the HARD WAY in which one is supposed to learn? Should I first learn Java and then come back to Android - would that be helpful?"} {"_id": "146452", "title": "Ruby workflow in Windows", "text": "I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question. I believe Windows could be a suitable development environment based on the mix of answers in that question. I have been developing in Ruby (mostly Rails but not entirely) for about a year now for personal projects on a MacBook Pro; however, that machine has faced an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment then included the standard (Linux-like) stuff built into OS X, TextWrangler, Git, RVM, et al. Not too much of a deviation from what the 'devotees' tend to assume. Now I am setting up my new Windows box for continuing that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively in Windows. I am aware of the Windows Ruby installer. It seems decent, but it's not exactly as nice as RVM in the OS X/Linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a web server in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby developer environment look like, and what is the workflow? What would a typical step-through be for adding a new feature to an ongoing project, and what would the technology stack look like?"}
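A minimal sketch of the shared-attack scenario from the pointers question above (my illustration, not the poster's code): the pointer is exactly what lets many creatures reference one `Attack` instance instead of each holding a copy or a set of boolean flags.

    // C++ sketch: several creatures share a single Attack through a pointer.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Attack {
        std::string name;
        int damage;
    };

    struct Creature {
        std::string name;
        Attack* attack; // refers to a shared instance; no per-creature copy
    };

    int main() {
        Attack fireball{"Fireball", 42};          // one instance...
        std::vector<Creature> pack{
            {"Imp A", &fireball},                 // ...used by many creatures
            {"Imp B", &fireball},
        };
        fireball.damage = 50;                     // one change, seen by all
        std::cout << pack[1].attack->damage << "\n"; // prints 50
    }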
{"_id": "182306", "title": "Why is it often said that the test cases need to be made before we start coding?", "text": "Why is it often said that the test cases need to be made before we start coding? What are its pros, and what are the cons if we don't listen to this advice? Moreover, does that advice refer to black-box testing, white-box testing, or both?"} {"_id": "108848", "title": "Why don't many code review tools seem to be syntax aware or provide more in-depth analysis of changes?", "text": "Why don't many code review tools seem to be syntax aware or provide more in-depth analysis of changes? Is it simply too hard to do? I find this to be a major hole in most programmers' toolkits. From what I have seen, which admittedly is not much, code review tools just compare code line-by-line, with many of them not even being able to do syntax highlighting. Is there a solution out there that is smart enough to offer file-level, method-level code review/comparison? One of the simple problems I have is that methods get re-ordered in code and my code review software breaks down completely, but they should be able to do so much more. I'm interested in others' opinions/knowledge on the topic of code review/comparison tools."} {"_id": "89949", "title": "What are the pros and cons of having a CaseInsensitiveString type in Java?", "text": "I'm tempted to create a `final class CaseInsensitiveString implements CharSequence`. This would allow us to define variables and fields of this type, instead of using a regular `String`. We can also have e.g. a `Map<CaseInsensitiveString, V>`, a `Set<CaseInsensitiveString>`, etc. What are some of the pros and cons of this approach?"} {"_id": "33526", "title": "Who should pay for fixes/bugs?", "text": "So I just started freelancing, both in desktop and web development, and this client, who has already accepted my work and paid me, keeps coming back at me each time he finds a bug, etc. And I have found myself spending more time than I thought fixing them for free. Is this alright, or should I start charging a support fee? What is the best way to deal with fixes on supposedly accepted and completed work?"} {"_id": "196946", "title": "Keeping SQL Server DB Up to date in Team environment", "text": "We are working in a Visual Studio environment using TFS. When developers get the latest source code, it includes a copy of the 'update' SQL scripts for the database of the application. The challenge is keeping our local copies of the DB up to date. When changes are made to the DB by another developer, we are looking for a way to: 1. indicate to other developers that their version of the DB is out of date 2. indicate to other developers the order in which they should run the update scripts We are looking for suggestions. For point number 2, I have used numbers such as 01_DropCreate_TableX, 02_Create_StoreProcedure_X, etc. DH"}
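A minimal sketch of what the `CaseInsensitiveString` question above is considering (mine, for illustration): the interesting part is `equals`/`hashCode`, which is what makes `Map`/`Set` lookups fold case.

    // Java sketch: a case-insensitive wrapper usable as a Map/Set key.
    public final class CaseInsensitiveString implements CharSequence {
        private final String value;

        public CaseInsensitiveString(String value) { this.value = value; }

        @Override public int length() { return value.length(); }
        @Override public char charAt(int i) { return value.charAt(i); }
        @Override public CharSequence subSequence(int start, int end) {
            return new CaseInsensitiveString(value.substring(start, end));
        }
        @Override public String toString() { return value; }

        // equals/hashCode ignore case, so hash-based collections fold case.
        @Override public boolean equals(Object o) {
            return o instanceof CaseInsensitiveString
                && value.equalsIgnoreCase(((CaseInsensitiveString) o).value);
        }
        @Override public int hashCode() {
            return value.toLowerCase(java.util.Locale.ROOT).hashCode();
        }
    }

One con worth noting for the pros-and-cons list: `equalsIgnoreCase` and `toLowerCase` do not agree on every case folding, so equal objects with unequal hash codes are possible in exotic cases.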
{"_id": "69830", "title": "Picking powers of two for sizes/limits", "text": "A habit I've observed among programmers, and a habit I sometimes subconsciously exhibit myself, is to pick powers of two (or powers of two minus one) when defining a database schema, a data buffer, etc. Have you made the same observation? If I'm not being blatantly subjective, the follow-up questions are: Are there still valid reasons to use powers of two [minus one] in modern technologies? Assuming these habits are mostly vestiges of old technological limitations, I'm just wondering what different flavors of limitations there once were. Some potential reasons I can think of are data structure optimizations and addressing bits. I'm wondering what else was/is out there..."} {"_id": "203237", "title": "Multi-module web project with Spring and Maven", "text": "Assume we have a few projects, _each_ containing some web resources (e.g., HTML pages). parent.pom +- web (war) +- web-plugin-1 (jar) +- web-plugin-2 (jar) ... Let's say `web` is the deployable `war` project which depends on the known, but selectable, set of plugins. What is a good way to set this up using `Spring` and `Maven`? 1. Let the plugins be `war` projects and use Maven's **poor** support for importing other `war` projects 2. Put all web resources for all plugins in the `web` project 3. Add all web resources to the classpath of all `jar` `web-plugin-*` dependencies and let Spring read files from the respective classpaths? 4. Other? I've previously come from using `#1`, but the `copy-paste` semantics of `war` dependencies in Maven are horrible."} {"_id": "87125", "title": "How can one find software development work that involves directly the final end user?", "text": "I've worked in software development for 15 years and, while there have been significant personal achievements and a lot of experience, I've always felt detached from the man/woman-on-the-street, the everyday person, and how it affects their lives, in a number of ways: * the technologies: embedded software, hidden away, stuff not seen by the everyday person. Or process technology supporting manufactured products * the size of the systems, meaning many jobs, divided up, work is abstract, not one person can see the whole picture * the organisations: large, with departments dealing with different areas, the software, the hardware, the marketing, the sales, the customer support * the locations and hours: out-of-town business parks away from the rest of society, fixed locations, inflexible: 9-5 every day This to me seems typical of the companies I have worked for and see elsewhere. Granted, there are positives such as the technology itself and usually being among high-calibre co-workers, but the above points frustrate me about the industry because they detach the work from its meaning. How can one: * change these things in an existing job, or compensate for them? * find other work that avoids these and connects with the final end user? Job designs tend to focus on the job content and technical requirements rather than how the job aims to fulfil end-user needs and be meaningful."} {"_id": "87124", "title": "Would this be the equivalent of creating a branch, while working with a detached head in Git?", "text": "Let's say I checked out a version different from HEAD. Let's say I made some commits, and so an anonymous branch was created. Afterwards I may have checked out a different branch, so now the only way to get to my commits is via `reflog`.
If I do this: >> git reflog | grep -i mycommit sha1hash >> git branch reattaching >> git cherry-pick hash_of_commits >> git checkout master >> git merge reattaching Is it the equivalent of: >> git reflog | grep -i mycommit sha1hash >> git branch reattaching sha1hash >> git checkout master >> git merge reattaching What happens to the detached-head commits? I think that, via cherry-picking, they will exist in 2 places. Will they forever remain in my repository?"} {"_id": "103501", "title": "Thoughts on Development using Virtual Machines", "text": "I'll be working as a development lead for a startup and I've suggested that we use VMs for development. I'm not talking about each developer having a desktop with VMs for testing/development; I mean having a server rack where all VMs are managed, and having the developers work from a micro PC (ChromeOS anyone?) locally, or even remotely from their home computer. To me, the benefits are the fact that it's extremely scalable, cheaper in the long run, easier to manage, and that we utilize the hardware to its maximum potential. As for cons, I can't think of any particular showstoppers other than that we'll need someone to set up/maintain said setup. I was hoping that some of you might have had a similar setup at your place of employment and would be able to weigh in with your opinions. Thanks."} {"_id": "108435", "title": "Boss wants to convert all developers to virtualized desktops", "text": "> **Possible Duplicate:** > Thoughts on Development using Virtual Machines My boss has been singing the praises of leveraging virtualized desktops for all our developers in the company - about 100 people. I think this is a horrible idea and can think of no other companies pursuing a similar course of action. He believes that he can run Word and Outlook much faster over his server-hosted virtual machine and has somehow come to believe that this will work for software development just as well. I have grave doubts about this. **Can anyone else provide me with some objective technical and/or business data on this scenario which can be used to dissuade him?** I can see desktop virtualization as a viable option for task-based workers who sit in front of MS Office or a LOB app all day, but for software development, I see this as a huge potential blunder. I, for one, have no interest in returning to dumb terminals and blind faith in operations to keep the servers and network tuned to our needs. If such a system is instituted, I may very well leave the company... as I see the policy betraying a clear lack of understanding and a diminished perception of value in what the devs are doing..."} {"_id": "83634", "title": "Leading a team, am I being overbearing?", "text": "I'm in what seems to me a very strange position. I'm \"team lead\" in role for a particular project, Sr. Software Engineer in job title. On my team I have 4 developers, one of whom serves a similar role on another project, but now mine has been given priority so he's working on mine. I also have 2 testers, one of whom is a Manager. Another member of the team is the \"Customer Representative\", who is part of a completely unrelated department. I also have a Manager who is directly above me and, I believe, also above the Manager of Test that's part of my team...not so sure about that though. I've tried to get clarification on what my role is exactly several times. It's been hard for me to figure out where my authority begins and ends, if I even have any. The answer I'm currently working with is that I am \"technical lead\" of the team.
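A note in shell form on the detached-HEAD question above (facts about Git's object model, not the poster's repository): pointing a branch at the commit reuses the existing objects, while `cherry-pick` copies them.

    # Reuse the dangling commits: the branch simply points at them.
    git reflog                         # find the commit, e.g. sha1hash
    git branch reattaching sha1hash    # no new commit objects are created
    git checkout master
    git merge reattaching

    # By contrast, cherry-pick records *new* commits with new SHAs, so the
    # originals survive only via reflog entries until those expire and
    # 'git gc' prunes them - they will not remain in the repository forever.
    git cherry-pick sha1hash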
This seems to mean that my authority is over technical decisions regarding architecture, design, and process/coding standards as they pertain to the product code itself. Today something came up, and the results of code I delegated to one of the members of my team were shown to the rest of the company in our Scrum show-it-all-off meeting. The customer representative person does the showing off. Something was shown off today that I really disagreed with, and nobody had ever even asked me if I wanted to have a say in what happened. In short, in order to provide the ability for a user to display a value in a report in the following manners (\"doc\" units, design units, rounded, not rounded), they provided access fields for each permutation. Thus we have the value in rounded doc units, rounded design units, unrounded doc units, unrounded design units. Each record that the user will be wishing to work with has many such values, and each one is permuted in this manner. I really hate this. The people we showed this to want to make sure that the API we use for reports is the same as the way we do things like export data to Excel. Unfortunately, now we're gaining this momentum in a direction that I think is really, really bad. I did get a little upset at the next meeting and I asked the two people who'd done this, \"Why wasn't I involved in this decision??\" It's an issue that keeps coming up, and it seems I have a hard time just getting people on the team I'm supposed to be leading to ask me if I want to be involved. Sometimes I do not, and I think whatever they come up with will be fine. Other times I do. Unless people ask me, though, it's hard to even know that something is going on that needs my input, and they don't give me that opportunity. Unfortunately, my authority doesn't extend to telling people, \"Next time you go off and do something like this on your own without even talking to me, you're going to be disciplined.\" That's a \"PR\" issue that is one area that's quite clearly not in my scope of authority. That's fine with me, actually, since I don't want to have to deal with that kind of crap if someone else is willing. Today though, my manager, in front of everyone (which I guess is partly my fault too for bringing it up like that) told me that I can't be involved in every decision and need to delegate. I of course think I'm right....I always do. I don't say things I think are BS. I think I should have been approached about this issue and asked if I had a better idea. My direction for this would have actually been to just decide on ONE value to provide for now, since this was actually the very beginning stages of a new feature, and discuss options for providing further access in the future if so desired. I never would have approved of or recommended the current implementation, and I really don't think it should have seen the light of day. The question is, am I the one being unreasonable? * * * Well, the two of us talked about it and agreed that we both \"dropped the ball\", and we seem to be on the same page. Monday mornings... We're going to try to make sure my role is clear in the team and that, yeah, I get to decide when there's a design or task change that needs to happen; I get proposed to and either agree or decide I need to look deeper. Then there are some other bits I can try to work on to make sure they know that they can come to me."} {"_id": "83633", "title": "License key solution in web application, what is the best approach?", "text": "I am stumped by a request from my manager.
I work for a small startup, and we developed a web application for a fixed rate with a maintenance agreement for a MUCH larger company. Knowing horror stories about how large companies will only pay their bills at the last second, we decided that we would like to protect ourselves by being able to license this web application in a way that, if we don't get paid, the software no longer works. I have seen this done before for desktop applications; however, this will be a web application that they will host internally, and it will not be accessible from the Internet. What is the best approach to do this? We would like it to have a small footprint and would like the ability to renew the license key that they have distributed. Has anybody done something similar? Are we completely out of our minds? Does anybody have any better suggestions?"} {"_id": "252386", "title": "Specific reasons why a top left origin is better/worse than a bottom left origin for computer graphics", "text": "I have been researching why the origins for most graphics applications are located in the top left corner of the screen. From everything I have read, they are located there because that is where the electron gun on a cathode ray tube would start to scan, from top to bottom. _Other than this historical reason, is there any usefulness to having the origin in the upper left hand corner versus (which is personally more natural to me) the bottom left hand corner?_ From my knowledge, I can't think of any reason that one origin might be more useful than the other. In terms of math, I don't think it should make any difference: as long as the relation between the pixels/points in the image stays the same, you should be able to apply any mathematical transformation in the same way using either origin. I am specifically looking for examples that show why it would be better to use one origin over the other, and not just for the reason that \"this is the way it was done before\" or \"it's easier this way because device x uses it\", because all of those reasons relate back to the historical reason."} {"_id": "252380", "title": "Alternatives to the use of the Id/Name properties with non-input elements in HTML", "text": "I'm migrating a website that uses JavaScript/HTML/PHP with reusable JavaScript code; at a certain moment I saw the opportunity to simplify code in functions that use almost the same code. **Let's say**: _I want to display values from the database into various `<span>` or `<div>`
elements using AJAX; all the referred elements use the same layout to display the data, the difference only resides in the element id or a minimal procedure to validate data._ **That's all.** However, I quickly came to the conclusion that I don't have a way to refer to a specific element in HTML without using the `id` property, and the only alternative I could think of was the use of the `name` property. I know that this property is only for `<input>` elements in a `<form>`, and because these elements are not `<input>`s this rule doesn't _apply_ (_I know that in HTML the rules are not enforced_). The basic idea is to stop using `id`s to make my code more reusable. To pass from this: <div id=\"container-1\"> +--- <span id=\"value-1\"></span> +--- <span id=\"value-2\"></span> <div id=\"container-2\"> +--- <span id=\"value-3\"></span> +--- <span id=\"value-4\"></span> to this: <div name=\"container\"> +--- <span name=\"value-1\"></span> +--- <span name=\"value-2\"></span> <div name=\"container\"> +--- <span name=\"value-1\"></span> +--- <span name=\"value-2\"></span> Doing my code this way will, I think, break the HTML5 conformance rules. I know this isn't a problem _per se_, because I have solved it in an unorthodox way, but I want to hear what the best way to deal with these situations is. I know there exists a related question, HTML - Alternative for ID when ID is only unique within a certain scope?, but in this case I don't want to deal with classes and CSS rules, because the current page uses a lot of CSS and there are a lot of JavaScript functions in the page that manipulate CSS classes; using a class as a kind of identifier, I think, would lead to some bugs (but I'm not so sure)."} {"_id": "218538", "title": "Why would someone use @Native annotations?", "text": "link : http://download.java.net/jdk8/docs/api/java/lang/annotation/Native.html In Java 8, there will be @Native annotations. > Indicates that a field defining a constant value may be referenced from > native code. The annotation may be used as a hint by tools that generate > native header files to determine whether a header file is required, and if > so, what declarations it should contain. The problem is: **What for?** Do you have any idea which problems would be efficiently solved by this feature?"} {"_id": "202725", "title": "Recommended design pattern to handle multiple compression algorithms for a class hierarchy", "text": "For all you OOD experts: what would be the recommended way to model the following scenario? I have a certain class hierarchy similar to the following one: class Base { ... } class Derived1 : Base { ... } class Derived2 : Base { ... } ... Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this: interface ICompressor { byte[] Compress(Base instance); } class Strategy1Compressor : ICompressor { byte[] Compress(Base instance) { // Common compression guts for Base class ... // if( instance is Derived1 ) { // Compression guts for Derived1 class } if( instance is Derived2 ) { // Compression guts for Derived2 class } // Additional compression logic to handle other class derivations ... } } As it is, whenever I add a new derived class inheriting from Base, I would have to modify all compression strategies to take this new class into account. **Is there a design pattern that allows me to decouple this, and allows me to easily introduce more classes to the Base hierarchy and/or additional compression strategies?**"} {"_id": "202729", "title": "Best practice for storing HTML coming from text fields to a database?", "text": "I have an application that allows users to edit certain parts of text and then email that out. My question is: what is the best way to store this in a Microsoft SQL Server database? Right now I have two tables, one holding the HTML data and one holding the plain text data. When the user saves the info, it replaces newlines with br's and puts it in the HTML-containing table, and then puts the regular text in the other table. This way the text box has the newlines when they go to edit, but the table that contains the HTML data has the BRs. This seems like a silly way to do things. What would be the best practice? Thanks."}
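One direction commonly suggested for the compression-pattern question above - a hedged sketch, not the poster's code: dispatch on the concrete type through a registry, so adding `Derived3` means registering one new compressor rather than editing every strategy.

    // C# sketch: per-type compressors looked up by runtime type.
    using System;
    using System.Collections.Generic;

    public class Base { }
    public class Derived1 : Base { }
    public class Derived2 : Base { }

    public interface ICompressor
    {
        byte[] Compress(Base instance);
    }

    public class CompressorRegistry
    {
        private readonly Dictionary<Type, ICompressor> byType =
            new Dictionary<Type, ICompressor>();

        public void Register<T>(ICompressor compressor) where T : Base
        {
            byType[typeof(T)] = compressor;
        }

        public byte[] Compress(Base instance)
        {
            // Runtime-type dispatch replaces the 'is Derived1/Derived2' chain.
            return byType[instance.GetType()].Compress(instance);
        }
    }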
Thanks."} {"_id": "202567", "title": "Sharing business logic between server-side and client-side of web application?", "text": "Quick question concerning shared code/logic in back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client-side. The code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally want to do this processing on the server-side of the app. For example, what if a user supplied a URL but they have JS disabled or have a non-standard compliant browser, etc. Ideally I'd like to be able to process these URLS in Ruby on the back-end (in asynchronous background jobs perhaps) using the same logic that our JS parsers use WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend like execjs as well as Ruby-to-Javascript compilers like OpalRB that would hopefully allow \"write-once, execute many\", but I'm not sure that either is the right decision. Whats the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?"} {"_id": "159846", "title": "Why does the C library use macros and functions with same name?", "text": "I am reading 'The Standard C Library' by PJ Plauger which is really interesting. The book explains not only how to USE the library but also how it is implemented. I have finished reading the `ctype.h` section and in the header the functions are declared as both macros AND functions. For example int isdigit(int); but also #define isdigit(c) (_Ctype[(int)(c)] & _DI) I don't understand why BOTH are used? Also, if I try to recreate my own custom `ctype` header and implementation, I can only compile successfully if I remove the macro (comment out the define). This aspect was not really explained in the book. Can someone please explain?"} {"_id": "226129", "title": "DI or Factory Pattern ? Both ? or a different apprach?", "text": "Lets say we have an abstract class called `BaseSwitch`, inherited by concrete implementations `Switch A` and `Switch B`, Each Switch representing a real- life switch (A telephony tool which among its responsibilities is writing CDR; all data records of calls hitting the switch). Each Switch in real life writes CDR in a different format and to different sources, say some switch writes to a text file another writes to a MySQL database. The Switches as entities and CDR details are defined by the system's end user My goal is writing `Importer` classes responsible for importing CDR based on the source of the data determined by the switch entity into my system, but hiding the `Importer` from the switch classes. The layer responsible for importing the CDR will loop upon switches, and instatiate an 'Importer' object, based on the CDR format defined in each switch. Can anyone suggest an approach to use ? 
EDIT: More clarification below: public abstract class SwitchBase { public abstract string CDRFormat { get; } } public class SwitchA : SwitchBase { public override string CDRFormat { get { return \"Text\"; } } } public class SwitchB : SwitchBase { public override string CDRFormat { get { return \"MySQLDatabase\"; } } } public class CDR { } public class MySQLImporter { ICollection<CDR> GetCDR() { //DoSomething } } public class TextImporter { ICollection<CDR> GetCDR() { //DoSomething } }"} {"_id": "178478", "title": "Benefit of using Data URI to embed images within HTML document and its cross-browser compatibility", "text": "I want to embed an image using a Data URI within an HTML document so that we don't need the image as a separate attachment; we get just one HTML file that contains the actual image. What are its advantages and disadvantages? Does IE10 support it? Is it useful to have such an implementation? I am working on an application where we have HTML documents that link to images stored in some location. If I use the TinyMCE online editor, as images are saved somewhere on the server, I can provide a link to that image while editing the document, but I can't preview the final document with images from within the editor. If I chose to download the file locally, then I would need to download the images from the server side. It looks a bit like overkill, so I wondered if Data URIs could be used in such a situation."} {"_id": "233375", "title": "Euclidean Space Fully Connected Nondirectional Graph: Shortest Path to all nodes", "text": "So this may be a question that is born of my inability to properly express my intentions to Google. In a fully connected non-directional graph, such that any three points can be properly represented by a triangle in Euclidean geometry, it would seem that a greedy algorithm which always chooses the shortest edge would find the shortest path from a starting point that visits all other points. I can't find a condition under which this is false, but one of my neighbors (another programmer) insists that I am wrong, though he still cannot come up with a condition where I am wrong. Edit: I am not sure this question is an instance of the general TSP, which I know is NP-hard, or, if it is, I thought that all of the constraints on the allowed graphs would have impacted the difficulty. From wiki: TSP is `the shortest possible route that visits each city exactly once and returns to the origin city`. In this problem there is no requirement to return to the origin city. Further, we don't care about selecting the optimal origin city, since the origin city is set in the specific example that we were discussing. Additionally, the graph in this problem must conform to the constraints of Euclidean distances in either 2 or 3 dimensions, so a three-node graph such as the one below (key is the node, val is the distances to the other nodes) could not exist, since the edge from A to B would make the edges from C to A and B infeasible. ` A:{B:80, C:10}, B:{A:80, C:5}, C:{A:10, B:5} ` Also, by fully connected I mean that every node has an edge to every other node. Sorry if the description came off as confusing, but I am wondering whether, given all these constraints, it becomes solvable in a simpler manner."} {"_id": "233378", "title": "Porting an open source project from ISC license and public domain", "text": "I have ported libsodium and NaCl to .NET. NaCl is the original project, which is in the public domain, while libsodium is a derived work from NaCl and is using the ISC license. I looked at both projects to port the code to .NET. For my project I prefer to use MPL v2, but I'm not sure I'm allowed to change the original license. I know ISC is a permissive license, but it's not clear if I can use another license for my work. Any help would be appreciated."}
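The mechanism behind the Data URI question above is just an inline base64 payload (a sketch; the payloads here are truncated, not real images):

    <!-- The image bytes travel inside the document itself: -->
    <img alt="logo" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." />

    <!-- The same scheme works from CSS: -->
    <style>
      .logo { background-image: url("data:image/png;base64,iVBORw0KGgo..."); }
    </style>

Commonly cited trade-offs: one less HTTP request and no broken image links, versus roughly 33% base64 size overhead, no separate caching of the image, and limits in old browsers (IE8 capped data URIs at 32 KB; IE9 and IE10 support them).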
{"_id": "233379", "title": "Developing a virtual machine / sandbox", "text": "I'm interested in learning how a virtual machine/sandbox actually works. I have developed an 8051 emulator and also written a disassembler for x86, so this part of a virtual machine is not really the problem. What I'm interested in is the sandbox functionality of it. To illustrate what I mean, consider this example. Let's assume I have a function which simply opens a file. So nothing fancy. int fd = open(path); Now when this code is executed natively, it will go to the operating system and open the file (assuming that it exists). Now when I run this in a virtual machine environment, the specified path is not the one that the operating system sees, but rather something that the VM will substitute, thus redirecting the open call. So what I'm interested in is how a VM can do this when the executed code runs natively (like x86 on x86), because for an interpreted VM it is rather obvious. When I google for virtual machines I only find links talking about interpreters like Java, LLVM or similar, but nothing that goes into more detail. I downloaded the source code of Oracle VirtualBox, but as this is a rather big codebase, it's quite hard to understand the concept just from digging in that code."} {"_id": "178470", "title": "What does an interviewer notice most on my resume?", "text": "When applying for a position such as software developer at a company, what does an interviewer notice **most** on my resume concerning the work I have done? Is he/she concerned with the amount of work I do with others (open source projects), the specific accomplishments I've made in my field (programs, apps), or the amount of time I spend helping others (forums, mentoring)? For those of you who have applied for and work/worked in a position similar to software developer: in your personal experience, what do you think helped you the most in landing the job? P.S. If 'software developer' is too broad a term: I would specifically enjoy working with teams to create large applications such as Dropbox / Google / Skype etc..."} {"_id": "178476", "title": "What type of pattern would be used in this case", "text": "I want to know how to tackle this type of scenario. We are building a person's background, from scratch, and I want to know, conceptually, how to proceed with a secure object pattern in both design and execution... I've been reading on Factory patterns, Model-View-Controller types, Dependency Injection, Singleton approaches... and I can't seem to grasp or 'fit' these types of design decisions into what I'm trying to do. First and foremost, I started with a big jack-of-all-trades class, then I read some more, and some tips were to make sure your classes only have a single purpose, which makes sense, so I started breaking down certain things into other classes. Okay, cool. Now I'm looking at dependency injection and kind of don't really know what's going on. An example/insight of what kind of hierarchy I need to accomplish: * class Person needs to access and build from a multitude of different classes.
* class Culture needs to access a sub-class for culture benefits * class Social needs to access class Culture, and other sub-classes * class Birth needs to access Social, Culture, and other sub-classes * class Childhood/Adolescence/Adulthood need to access everything. Also, depending on different rolls, this class hierarchy needs to create multiple people as well, such as a Family, and their backgrounds, using some, if not all, of these same classes. Think of it as a people generator, all random, with backgrounds and things that happen to them: aging, death of loved ones, military careers, etc. Most of the generation is done randomly, making calls to the mt_rand function to pick from most of the selections inside the classes, guaranteeing the data to be absolutely random. I have most of the bulk data down, and was looking for some insight from fellow programmers: what do you think? **EDIT** Flowchart added. I decided to leave a few things out, but you get the idea. I didn't really know what types of visuals to use, so I prioritized the boxes' importance by size and number of connections. The non-boxes are flavor text, with no life-altering events. ![http://i.stack.imgur.com/ZdYkq.jpg](http://i.stack.imgur.com/ZdYkq.jpg)"} {"_id": "8034", "title": "License requirements for including open source software", "text": "In an open source project, a number of other open source libraries have been included to implement needed functionality, some as libraries (LGPL), and some as source code (non-LGPL). The new BSD license was selected for the project. The included open source libraries are licensed under the new BSD, MIT, Apache, and LGPL licenses, but there is no GPL-licensed code. How should these other open source libraries be credited? Do all the library licenses need to be included in the main project license file? Is it sufficient to just provide links to the project web sites in the Help->About dialog and documentation? Is any credit _really_ needed?"} {"_id": "76204", "title": "CRM: In-House vs. OTS", "text": "Many, many years ago, when I was young and naive and wrote everything from scratch unless it came with the language, I was working for a company with two sales people in two locations who were trying to share leads and contacts. I had just discovered a shiny new hammer, PHP, so of course I built them what I would today describe as a primitive MySQL-backed CRM system. They loved it--at this point in history most of their competitors were using local Access databases. I have since learned how much easier life is when you don't try to reinvent the wheel. I am again faced with a growing company that needs an online CRM system. I am roughly aware of what CRM systems do, of course, but I am not very familiar with what exactly products like MS Dynamics or SAP offer. I am having a hard time sorting through marketing fluff trying to figure out just what exactly these companies spend tens of millions of dollars and euros developing. Most of them appear to be fairly straightforward enterprise applications with a few bells and whistles that I'm not really interested in, such as Outlook and SharePoint integration or the ability to create workflows through a click-and-drag interface.
So my question is: are you crazy to attempt to develop a custom CRM system from scratch?"} {"_id": "207801", "title": "What does IE mean by saying \"'console' is undefined\"?", "text": "I like IE's persnicketiness (the debugging tools that take you right to your code are even more user-friendly than what I've found in F12 Chrome Dev Tools), but why does it say \"'console' is undefined\" re: this line of jQuery: console.log(\"entered submit button click\"); How could _console_ be undefined? Neither Chrome nor Firefox complain about it... And ironically, IE shows me this error message where? In the \"Console\" tab! Shirley it couldn't be case-sensitive, and it expects \"Console.log\"? BTW and anyway, I'm impressed with IE's F12 tools; I wonder if it's \"The Avis Effect\" at work - they were #5 (among browsers) and have thus begun fighting like a rabid wolverine to claw and scratch their way upwards?"} {"_id": "163981", "title": "Have javac call automatically run java", "text": "I want to be able to call `javac` on a source file, and then automatically run `java` on the compiled `.class` file. I thought initially to use an x86 disassembler to hack it (javac.exe) but dumped that idea; I then found the open source code for the JDK, and concluded that maybe a batch file would be easier. How can I do this?"} {"_id": "182066", "title": "does haskell have dependent types?", "text": "The question is all in the title. Just to clarify: Haskell already has the ability to parameterise a type over another type (this is just like template programming in C++), but can it parameterise a type over values - this is a dependent type. So, for example, you can have a type that's parameterised over the integers, for example vectors of size n, matrices of size n x m, etc. If not, why not? And is there any possibility that it will be supported in the future?"} {"_id": "207804", "title": "Compiler design decision for dynamic method invocation", "text": "I asked about Compiler interpretation of overriding vs overloading on Stack Overflow, and got good answers, but this led me to another question that I'm not sure is appropriate for SO, but I think is for here. One should read the original question and accepted answer, but perhaps it's understandable just by looking at the code below: public static void whatIs(Circle s) { System.out.println(\"Circle\"); } public static void whatIs(Square s) { System.out.println(\"Square\"); } and we attempt to call, whatIs(shapes[0]); //array of Shape objects (interface implemented by Circle,Square) whatIs(shapes[1]); we will get two errors (one for `Square` and one for `Circle`) indicating that: > * method Driver.whatIs(Square) is not applicable > * actual argument Shape cannot be converted to Square by method > invocation conversion > As is suggested in my question, using `instanceof` can give the desired results, and as the answer suggested: > The compiler could auto-generate code like > > > if (shapes[0] instanceof Circle) > { > whatIs((Circle) shapes[0]); //prints \"Circle\" > } > **but, it does not**. Just to be clear, I know one can use an abstract class instead of an interface to achieve similar functionality; nonetheless, does anyone know why the Java compiler won't automatically do this for you? I'm not a compilers guy, but I have a feeling that this is not an opinion-based question. I assume that there is a good reason for this decision."}
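Context for the dependent-types question above: Haskell proper does not have dependent types, but GHC extensions can promote values to the type level and fake the vector-of-size-n example. A minimal sketch:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Type-level naturals, promoted from ordinary data by DataKinds.
    data Nat = Z | S Nat

    -- A vector whose length is part of its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- Total head: applying this to an empty vector is a *type* error.
    safeHead :: Vec ('S n) a -> a
    safeHead (Cons x _) = x

This is simulation rather than real dependency - the index lives only at the type level - which is what leaves room for the "will it be supported in the future" part of the question.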
{"_id": "163985", "title": "Why is the concept of lazy evaluation useful?", "text": "It seems lazy evaluation of expressions can cause a programmer to lose control over the order in which their code is executed. I am having trouble understanding why this would be acceptable or desired by a programmer. How can this paradigm be used to build predictable software that works as intended, when we have no guarantee when and where an expression will be evaluated?"} {"_id": "12589", "title": "Graduate expectations versus reality", "text": "When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) programming working life was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism: 1. **Maintenance**: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance-, bug-fixing-, and support-oriented. 2. **Lack of professionalism**: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently from a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say that it's always been outright \"unprofessional\", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be. Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different from the reality?"} {"_id": "59928", "title": "Difference Between Unit Testing and Test Driven Development", "text": "From reading the descriptions, I understand that in TDD tests are done prior to writing the function, and in Unit Testing it's done afterwards. Is this the main difference, or can the two terms not even be compared as such? Perhaps Unit Testing is an integral part of TDD."} {"_id": "99735", "title": "TDD - is it just about unit tests?", "text": "Do I understand it right that classical TDD is just about unit tests? Don't understand me wrong: I know the difference between TDD and just unit testing. I am asking whether it is correct to use integration tests in a TDD workflow. Currently I work on a project where TDD is surely only about unit tests, and there is at least one serious problem with it. The majority of our unit tests are behavioural tests which often become false negatives (false reds) during refactoring (just because some sequence of dependency calls changed).
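A small example of the upside the lazy-evaluation question above asks about: under call-by-need, evaluation order follows demand, which is what makes a definition like an infinite list usable.

    -- Haskell sketch: only the demanded prefix is ever computed.
    naturals :: [Integer]
    naturals = [0 ..]                         -- conceptually infinite

    firstFive :: [Integer]
    firstFive = take 5 (filter even naturals) -- [0,2,4,6,8]

    main :: IO ()
    main = print firstFive                    -- terminates despite 'naturals'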
The project was created in TDD style to make refactoring simple, and suddenly refactoring became hell. Epic fail! The most obvious decision now is to make our tests not unit but integration tests (but still with TDD). So if previously we stubbed/mocked class dependencies, now we won't do it (at least not for all of them). As a result, most of our tests will become state tests (instead of behavioural ones). And state tests become false negatives much more seldom (because they test the result but not the workflow of execution). So I would like to know how widespread the approach of using TDD with integration tests is. Is it okay? I would be grateful for any resources on this topic. I have read this article but it is a bit... strange. **Update.** Here I will clarify what I mean by unit test and integration test. A unit test is a test which stubs/mocks all class dependencies. An integration test has real implementations of dependencies (although it can stub/mock some dependencies if needed)."} {"_id": "86657", "title": "How do you successfully hire out a few programmers to make it cost effective?", "text": "Many of us know this situation well: we're a one-man (or one-woman) development team, we need some extra help to keep up with all the tasks, the budget is small, and we decide to get some help. But hiring someone is difficult. Either the person is inexperienced and I end up becoming their full-time teacher in the hopes they will produce work the way I want, or the person is skilled but for whatever reason doesn't hand over code within budget that I can just plug in and use without reworking it myself. Any thoughts/ideas?"} {"_id": "44915", "title": "Is a Model Driven Architecture in Language Oriented Programming (MPS) feasible at this time", "text": "As a side project I am developing some sort of DSL where I describe a data model and generate the desired code files from it. I believe this is called Model Driven Architecture. My partial existing implementation uses C#, CodeDOM, XML and XSLT to do this manually. I discovered there already exist better environments to do this in. The one which fascinated me the most is called MPS, which follows the Language Oriented Programming paradigm. This article, written by a cofounder of JetBrains, was a real eye-opener for me. I truly believe LOP has a very good chance of becoming the next big programming paradigm once it has broader support. From my short experience with MPS, I noticed it is still mainly Java-oriented. My question is: how feasible is it to generate code files for other (multiple) languages instead of just Java? I don't need full language support from the start, so preferably I need to be able to implement a language in an agile way. E.g. first support only one type, add access modifiers, ... Perhaps some other (free) environment already provides this out of the box. P.S.: I find it important to have a lot of control over the naming conventions and such of the generated code. This is one of the reasons why I started my own implementation. UPDATE: Judging from the answers, it seems like people think I'm only interested in .NET solutions. This is not the case; any other suggestions are highly welcome!"} {"_id": "44918", "title": "How should I study programming languages?", "text": "I am a student of computer engineering. I have never done any programming before, and as you can understand, I don't know how to study it or how to make my own programs. My English is weak [edited for clarity - ed], and so if you don't like the choices I list, please feel free to provide others.
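To make the distinction in the TDD question above concrete, a hedged Java sketch (JUnit/Mockito assumed; all class names hypothetical) of the same requirement tested behaviourally versus by state:

    // Behavioural style: asserts HOW the result is produced (brittle).
    @Test
    public void deposit_records_to_ledger_behavioural() {
        Ledger ledger = mock(Ledger.class);
        Account account = new Account(ledger);
        account.deposit(100);
        verify(ledger).record(100);   // goes red if the call sequence changes
    }

    // State style: asserts WHAT the result is (survives refactoring).
    @Test
    public void deposit_increases_balance_state() {
        Account account = new Account(new InMemoryLedger());
        account.deposit(100);
        assertEquals(100, account.balance());
    }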
How should I study? How should I learn programming languages? 1. Study completely from a book. 2. Don't study from a book, just try writing code. 3. A mix of the two; study from a book, then try writing code. 4. Study half the book, then write the code by hand on paper. 5. Listen to the teacher, then try to solve general problems (those not from any specific chapter)."} {"_id": "95966", "title": "How important are Haskell's advanced concepts like Monads and Applicative Functors for most routine programming tasks?", "text": "I've read the Learn You a Haskell book up to the point where they introduce Monads and stuff like `Just a`. These are so counterintuitive for me that I just feel like giving up trying to learn it. But I really want to try. I'm wondering if I can at least avoid the advanced concepts for a while and just start using the other parts of the language to do a lot of practical tasks with the more basic parts of Haskell, i.e. functions, standard IO, file handling, database interfacing and the like. Is this a reasonable approach to learning how to use Haskell?"} {"_id": "86659", "title": "Does new generation of programmers use Emacs?", "text": "I know that Emacs was very popular a few decades ago, but now - when there are a lot of IDEs and text editors - is it still popular among us?"} {"_id": "103850", "title": "As a high school student what should I be doing to make myself better/more hireable?", "text": "I'm a year away from university/college at this point in time; I'm currently 17 years old and practice programming as a hobby. I'd like to get a job in the market one day; I -love- programming. And I was wondering what would be my best course of action at the moment to get myself ready for a potential career in programming. I've talked to a couple comp sci majors and it doesn't seem like what they're doing is terribly hard (although, I'm sure it differs from school to school). I kind of feel like I'm in an odd limbo where it's gotten a bit stale, because I don't feel like I'm making 'progress'. As far as programming goes, this is my current skill set: Languages: Lua - I know this one very well; I'd put myself at least a little bit past 'moderate' with Lua. I know how to use its features and what to do, and not to do. C - I know C in a way I'd call a bit below moderate; I can write C, but I don't know the best practices, and I often find myself consulting documentation on C. Projects: I've made a couple toy interpreters, I shipped a roleplaying modification for Garry's Mod (while this doesn't sound like much, my friend and I have produced at -least- 20,000 lines, including comments and such), and I've made a couple small games. Experience: I'm fairly familiar with GNU tools and Linux; I can write a basic makefile, etc. I know how to use command line tools, and I use Linux as my day-to-day operating system, but I also have years of experience with Windows. I know how to use Git and SVN (I'm a master at neither, but I can use them well enough to maintain a project). So, is there anything you guys could suggest? Should I be actively doing things now, or just kind of wait it out for university? I'm feeling lost. Edit: I put \"is\" instead of \"isn't\" terribly hard.
the Comp Sci majors I've talked to said that the material isn't terribly hard if you've got a bit of experience programming."} {"_id": "194939", "title": "How to prepare for a programming job?", "text": "I'm basically done with my computer science classes in college but I don't seem to be \"ready\" for a computer science job, nor am I in a position to get an internship at the moment. I'm wondering if there is any way to prepare and gain the experience I would need for a job without having an actual degree yet. Any certain books I can read or skills I should learn that would help? Thank you."} {"_id": "219665", "title": "CSS structure for creating responsive websites", "text": "I'm a back-end engineer who works on a small team so occasionally needs to do some front-end. I like to develop a good workflow and project structure before I start anything, so I'm wondering about creating a responsive front-end for a web app. What's a best-practice way to structure CSS for creating responsive websites that need to \"scale down\" for smaller screens? Specifically: do you create the mobile version first, then expand the code with min-width media queries in the CSS? Or do you do the opposite, creating the desktop-resolution version first? Secondly, do you define multiple element selectors and definitions in the same media query? Or do you define a media query for each element and selector? It would seem that the second method would make for more readable code, perhaps - although it's certainly more frustrating to \"tweak\" a working site to create a new version with media queries rather than simply fence off the old settings in one media query and re-style the necessary parts of the new version from scratch."} {"_id": "219662", "title": "NT Kernel Source", "text": "Well, the first question would be: Is it legal to take hold of the NT Kernel Source? If so, proceed to the second paragraph; if not, proceed to the third. The first thing you'd probably ask is \"Why NT Kernel Source?\" The answer is simply, I want to make my own OS compatible with Windows. This is of course, all a hobby, and I'm not planning any large project. Ignore this if it's possible to get an NT Kernel Source: \"ReactOS\" I'm pretty sure that it uses Windows' source code itself, with a few minor exceptions, so I'd like to know how they got the code."} {"_id": "160250", "title": "Implementing cache system in Java Web Application", "text": "I worked with JPA (the Eclipselink implementation) and Hibernate. As I understand, these two have great caching systems. I am interested in caching in a Web application, and in order to better understand the process I'm trying to implement something on my own. Sadly, I cannot find any in-depth documentation about this subject. I'm interested in things like high scalability, sharing memory on different machines and other important theoretical matters. Is there any tutorial or open project I could check out? Thank you! *LE:* I want to cache DB information in POJOs just like JPA or Eclipselink"} {"_id": "138043", "title": "Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why?", "text": "Over the years I've constantly heard horror stories, had people say \"Real Programmers Don't Use VSS\", and so on.
BUT, then in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and another a high-end Financial Services \"web-based\" hosted solution catering to some very large, very well known companies, which is where I currently reside and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology with some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works \"LEAVE IT\", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right?? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said \"someone said VSS sucks\" they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, and does not agree with any benefits from having Source Control Integrated, and yet uses the infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er and, from a purist standpoint, I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.. UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless; I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. So all in all, VSS served its purpose for years without a hiccup. There were no horror stories and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS. (It's really not super difficult, and a new \"wizard\" type converter is due out any day now, so migrating should be painless.)
But from my experience, it worked just fine and got the job done."} {"_id": "127534", "title": "Why do programming language (open) standards cost money?", "text": "Isn't it counter-productive to ask for 384 Swiss francs for C11 or 352 Swiss francs for C++11, if the aim is to make the standards widely adopted? Please note, I'm not ranting at all, and I'm not against paying; I would like to understand the rationale behind setting the prices as such, especially knowing that ISO is a network of national standards institutes (i.e. funded by governments). And I also doubt that these prices would generate enough income to fund an organization like that, so there must be another reason."} {"_id": "65065", "title": "Struggling not to use Hungarian notation", "text": "I've seen arguments for and against Systems Hungarian. For some years I've been working on a legacy project that uses this system by naming every variable and function with a prefix of the variable type, e.g. (strName, intAge, btnSubmit etc.) (I know the original Apps Hungarian prefixes by the kind of variable, not the type). I'd like my next project to abandon it fully, but I do find it harder to name similar things uniquely without resorting to it. Let's say I have a web form for collecting email addresses and storing them in a database table, and a button which calls the function which saves the address to the DB. If I'm using Hungarian-style notation, I might call the box `txtEmail`, the button `btnEmail`, and the value contained in the textbox `strEmail`. I might then use a function `storeEmail(strEmail)` to store the email. I have a clear convention here; it's obvious what each variable is. What would be the best practice for naming these variables * without resorting to Systems Hungarian, * without making them overly long or confusing * and with a clear convention to use across the whole of my project?"} {"_id": "65060", "title": "How much to each part?", "text": "I have been involved in a project and now it is coming to an end. The good news is the company will start to commercialize the software, which is software specific to ISPs. As a programmer / developer of the project, I intend to still support the project, attend to new customer customizations; basically the software will stay alive and be sold in the market. How much should I charge to stay ON the project? Should I charge per hour? Per month? Or even a percentage per customer? What is the norm for this kind of situation, and how do most programmers/developers deal with it? Thanks in advance; it is a very important question to me."} {"_id": "158179", "title": "How important is it to implement a caching system in an MVC style framework?", "text": "I am writing my own PHP framework (...waits for the groans to subside) for the purpose of learning (best practices, design principles etc.) as I'm entirely self-taught and consequently there are gaps in my knowledge. I understand that most mainstream frameworks incorporate some form of caching system, which I'm guessing is for keeping regularly used classes and/or other files close by for fast loading (right?). Ideally I envisage a system that is somewhat self-monitoring and self-managing in that it tracks class/file usage, memory limits and request processing performance, removing items from cache automatically when no longer needed or increasing the cache size when performance drops too low etc. (even some form of basic garbage collection is better than nothing, right?).
So my questions are: * Firstly: Have I understood everything correctly so far? * How important is it to implement this kind of system over regular include/require methods? * How can I determine memory usage and other system performance metrics? * To cache classes do I have to use the '__sleep()' and '__wakeup()' magic methods? * I also gather that to be stored a class has to be serialized. Should I use a $_SESSION variable, a temporary DB table or some kind of flat-file / SQLite DB as the cache? Thanks for any and all help / suggestions."} {"_id": "208168", "title": "Match two strings but allow for a degree of error", "text": "How can I match two strings, but at the same time allow for X number of characters to be incorrect in the match? The number of errors should be a controllable variable. While up to X characters may fail to match between the strings, there should be a limit on how many mismatches run in a sequence. Given two strings I might allow 5 characters to be different, but not more than 2 in a row. I'm looking for a recommended algorithm for comparing these two strings, or maybe there is already a known solution for this."} {"_id": "154849", "title": "Best practice for writing a service", "text": "We currently have a C++ socket server used by a Java client. All of the socket code in both C++ and Java is stick-built at a low level, with messages passed via JSON strings. Are there COTS packages or better techniques commonly used in this situation to reduce the cost of future development and maintenance?"} {"_id": "163062", "title": "Java exam: prebuilt methods?", "text": "> **Possible Duplicate:** > Is using built-in sorting considered cheating in practice tests? I'm studying for a Java exam at the university. In your opinion, could I use, during the exam, Arrays.sort(intArray); instead of creating my own algorithm? How would you judge it, if you were the professor?"} {"_id": "244875", "title": "How to save and clear Files/PDFs received from web service in iOS apps?", "text": "My web service will send me a PDF and I have to store it locally in my iPhone app and share it through WhatsApp later. So how can I save this file received in my app for later sharing? How can I later clear it from the local DB so that the app does not get heavy? Thanks"} {"_id": "244878", "title": "Is there a more intelligent way to do this besides a long chain of if statements or switch?", "text": "I'm implementing an IRC bot that receives a message, and I'm checking that message to determine which functions to call. Is there a more clever way of doing this? It seems like it'd quickly get out of hand after I got up to like 20 commands. Perhaps there's a better way to abstract this? public void onMessage(String channel, String sender, String login, String hostname, String message){ if (message.equalsIgnoreCase(\".np\")){ // TODO: Use Last.fm API to find the now playing } else if (message.toLowerCase().startsWith(\".register\")) { cmd.registerLastNick(channel, sender, message); } else if (message.toLowerCase().startsWith(\"give us a countdown\")) { cmd.countdown(channel, message); } else if (message.toLowerCase().startsWith(\"remember am routine\")) { cmd.updateAmRoutine(channel, message, sender); } }"} {"_id": "120621", "title": "How can I permute pairs across a set?", "text": "I am writing a bet settling app in C# and WinForms. I have 6 selections, 4 of them have won.
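For the approximate-match question above (id 208168), a minimal sketch, assuming the two strings are compared position by position (Hamming-style, equal lengths); a match that must also tolerate insertions or deletions would need a Levenshtein-based variant instead.

```java
public final class FuzzyMatcher {

    public static boolean matches(String a, String b, int maxErrors, int maxRun) {
        if (a.length() != b.length()) {
            return false; // positional comparison only makes sense for equal lengths
        }
        int errors = 0; // total mismatched positions so far
        int run = 0;    // length of the current consecutive mismatch run
        for (int i = 0; i < a.length(); i++) {
            if (a.charAt(i) == b.charAt(i)) {
                run = 0;
            } else {
                errors++;
                run++;
                if (errors > maxErrors || run > maxRun) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Up to 5 mismatches allowed in total, but never more than 2 in a row.
        System.out.println(matches("alphabet", "alphabat", 5, 2)); // true: 1 mismatch
        System.out.println(matches("alphabet", "alxxxbet", 5, 2)); // false: 3 mismatches in a row
    }
}
```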
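For the IRC-bot question above (id 244878), a sketch of one common alternative to the if/else chain: a dispatch table mapping a command prefix to a handler, so adding the 20th command is a one-line registration rather than another branch. The handler signature (channel, message) is simplified for brevity.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class CommandDispatcher {

    // LinkedHashMap so that registration order decides precedence between
    // overlapping prefixes.
    private final Map<String, BiConsumer<String, String>> handlers = new LinkedHashMap<>();

    public void register(String prefix, BiConsumer<String, String> handler) {
        handlers.put(prefix.toLowerCase(), handler);
    }

    public void onMessage(String channel, String message) {
        String lower = message.toLowerCase();
        for (Map.Entry<String, BiConsumer<String, String>> e : handlers.entrySet()) {
            if (lower.startsWith(e.getKey())) {
                e.getValue().accept(channel, message);
                return; // first matching prefix wins
            }
        }
    }

    public static void main(String[] args) {
        CommandDispatcher bot = new CommandDispatcher();
        bot.register(".register", (ch, msg) -> System.out.println("registering on " + ch));
        bot.register("give us a countdown", (ch, msg) -> System.out.println("counting down"));
        bot.onMessage("#chat", "Give us a countdown from 10"); // prints "counting down"
    }
}
```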
Using the following formula from Excel: =FACT(selections)/(FACT(selections-doubles))/FACT(doubles) (this is coded into my app and working well), I can work out how many possible doubles (e.g., AB, AC, AD, AE, BC, BD, BE, etc.) need to be resolved. But what I can't figure out is how to do the actual calculation. How can I efficiently code it so that every combination of A, B, C, and D has been calculated? All my efforts thus far on paper have proved to be ugly and verbose: is there an elegant solution to this problem?"} {"_id": "22749", "title": "What advantages do continuous integration tools offer on a solo project?", "text": "If you're doing a solo project - would you use CI tools to build from a repository? I've used Hudson and Cruise Control in a team environment, where it's essential to build as soon as anyone checks anything in. I think the value of version control is still obvious, but do I need to build after every commit, seeing as I would just have built on my local machine, and no one else is committing?"} {"_id": "68522", "title": "Correct way to handle sample logins or test accounts", "text": "In my JSP web application, I want to redirect sample users to a different page from the real users. Currently we do something like if (user.getName().equalsIgnoreCase(\"Sample\")) ... This is (a) hardcoded and (b) executed for each normal user as well. Is there a better way to handle this logic? I am loath to redirect these users via Javascript validation - which is one of the suggestions given internally"} {"_id": "212588", "title": "Loadbalancing and failover in code", "text": "I have HTTPS-based webservices (not REST, rather old code). I am generating Java client stubs using Axis and using them to call the webservices. There are around 20 different APIs on the webservice. I have 2 servers hosting the webservices (identical - the 2 servers are being used for redundancy) and data is synced between the 2 servers, so that the same API called on either server produces the same result. I do not have hardware or software load balancers or clusters. So I am planning to implement the failover & load balancing in code (the failover is important; it's OK even if I don't do load balancing). I was planning to do this in code, i.e. if I get a connect exception, I will trap the exception and do the same webservice call on the other server. I was wondering if there are any known design patterns for this or any pitfalls I should be aware of."} {"_id": "212587", "title": "Why to have an application with GUI on linux when command line is available?", "text": "If the question appears to be off topic then please migrate it to some other suitable domain on stackexchange. **Q. Why have a GUI along with the CLI, when you already have a command line interface?** I'm currently developing an application. As of now the application has a command line interface. The target machines for the application will be **80% servers and 20% workstations**. I'm considering providing a GUI too for the application. **GUI Use Case:** Product registration is the main use case, i.e. my product needs a license key to run. And after that only some text info will be shown every time it's launched. Only 2-3 buttons to perform extended functions. **CLI Use Case:** Product registration will expect some arguments as command line arguments.
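For the bet-settling question above (id 120621), a sketch of the standard pair-enumeration idiom, written in Java for consistency with the other sketches here (it translates directly to C#): two nested loops where the inner index starts at i + 1, so each unordered pair appears exactly once and the count matches the FACT formula quoted from Excel. The return calculation assumes decimal odds, where a double pays the product of the two prices per unit stake.

```java
import java.util.ArrayList;
import java.util.List;

public class Doubles {

    static List<double[]> doubles(double[] odds) {
        List<double[]> pairs = new ArrayList<>();
        for (int i = 0; i < odds.length - 1; i++) {
            for (int j = i + 1; j < odds.length; j++) {
                pairs.add(new double[] { odds[i], odds[j] }); // one pair per unordered combination
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        double[] winners = { 2.0, 3.0, 1.5, 4.0 }; // odds of the winning selections A, B, C, D
        List<double[]> pairs = doubles(winners);
        System.out.println(pairs.size()); // 6, matching 4!/(2!*2!)
        for (double[] p : pairs) {
            System.out.println(p[0] * p[1]); // return per unit stake for each double
        }
    }
}
```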
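For the failover question above (id 212588), a minimal sketch of the try-primary-then-secondary idea; the `ServiceCall` interface is a hypothetical stand-in for whatever the Axis-generated stub exposes, and a production version would need to distinguish connection-level failures from application faults.

```java
import java.rmi.RemoteException;

public class FailoverCaller {

    interface ServiceCall<T> {
        T invoke(String endpointUrl) throws RemoteException;
    }

    private final String primary;
    private final String secondary;

    public FailoverCaller(String primary, String secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    public <T> T call(ServiceCall<T> call) throws RemoteException {
        try {
            return call.invoke(primary);
        } catch (RemoteException connectFailure) {
            // Retrying is only safe because the two servers are kept in sync;
            // a known pitfall is retrying calls that are not idempotent.
            return call.invoke(secondary);
        }
    }
}
```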
Later, the 2-3 extended functions available in the GUI can also be started from the command line."} {"_id": "212585", "title": "Architecture strategies for a complex competition scoring system", "text": "Competition description: * There are about 10 teams competing against each other over a 6-week period. * Each team's total score (out of a 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. * Most scoring elements are worth a small fraction of a point and there will be about 10 X 25,000 = 250,000 total raw input data points. * The points for some scoring elements are awarded at frequent regular time intervals during the competition. The points for other scoring elements are awarded at either irregular time intervals or at just one moment in time. * There are about 20 different types of scoring elements. * Each of the 20 types of scoring elements has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation. * Some types of raw inputs are automatically generated. Other types of raw inputs are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials. Primary requirements: * The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. * There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. * There will be a quasi-real-time dashboard that will show current scores and raw data inputs. * Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. * There will be a \"scorekeeper UI\" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores. Decisions: * Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? * What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each of the teams' total scores from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of \"diff\" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because financial \"bottom line\" calcs may be calculated from 250,000 or more transactions. * Should I be making heavy use of caching for this application? * Are there any obvious approaches or similar case studies that I may be overlooking?"} {"_id": "46637", "title": "Code maintenance: keeping a bad pattern when extending new code for being consistent, or not?", "text": "I have to extend an existing module of a project.
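For the scoring-system question above (id 212585), a minimal sketch of the \"diff\" approach the asker mentions, in Java for consistency with the other sketches here (the idea carries over to C# or T-SQL): keep a running total per team and apply only the delta whenever a raw input arrives or is retroactively adjusted, instead of recomputing all ~250,000 inputs from scratch. All names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class ScoreBoard {

    private final Map<String, Double> totals = new HashMap<>();
    // Last contribution of each scoring element per team, so a retroactive
    // adjustment replaces the old value rather than double-counting it.
    private final Map<String, Double> lastContribution = new HashMap<>();

    public void apply(String team, String elementId, double newScore) {
        String key = team + "/" + elementId;
        double old = lastContribution.getOrDefault(key, 0.0);
        lastContribution.put(key, newScore);
        totals.merge(team, newScore - old, Double::sum); // apply only the delta
    }

    public double totalFor(String team) {
        return totals.getOrDefault(team, 0.0);
    }

    public static void main(String[] args) {
        ScoreBoard board = new ScoreBoard();
        board.apply("TeamA", "element-42", 0.25);
        board.apply("TeamA", "element-42", 0.40); // retroactive adjustment: delta is +0.15
        System.out.println(board.totalFor("TeamA")); // 0.4
    }
}
```

Note that ad-hoc and historical queries still need the raw input log; a running-total structure like this complements the event history rather than replacing it.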
I don't like the way it has been done (there are lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor, for many reasons. Should I: * create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? or * try to use what I feel is better, even if it means introducing another pattern into the code? * * * Precision edited after first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that can be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time. And even with time it would be hard to justify that a whole perfectly working module needs refactoring. The refactoring cost would be heavier than its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring work in your place? The downside of the latter being that you'll have to learn some stuff, and maybe you will have to maintain the easy stupid boring code too until a full refactoring is done."} {"_id": "132214", "title": "How can I deal with developers who don't use English in code?", "text": "> **Possible Duplicate:** > Code maintenance: keeping a bad pattern when extending new code for being > consistent, or not? I'm working on a project for a Greek public organization, as part of a course semester project. I have to develop a 3-tier application that manages financial data. A professor of mine acts as an advisor and is helping me on the project. I write my own code, but there are also pieces of code written by other developers that I can also use, and some of the code is also written by my professor. The Greek developers program without using any patterns or conventions; they are generally unaware of phrases like \"best practices\" and \"programming methodology\". They just write code of poor quality. Most of them are at least 10 years more experienced than me, but I am on my way. My professor is a \"pragmatic programmer\" and produces \"clean code that works\" of high quality. The Greek developers speak English as well. The problem is that some of them use greeklish for the various names of variables, classes and methods - obviously Greek characters cannot be used for such purposes. For example, the Greek translation of \"user\" is \"χρήστης\", which in greeklish could be written in the following ways: * xristis, * xrhsths, * hristis, * christis, * ... As a result, the same meaning can be expressed with different characters. Some of the developers use English in their code, and there are no naming conventions at all. I'm working on this project just for the sake of knowledge and further practice. I would like to apply practices I read about in books to software that can also be used and modified by others. Should I keep using greeklish just to be in harmony with the already existing code?
In that case, who knows, in a few months I may get valuable feedback for my effort, which will eventually make me a better programmer. Or, should I just keep doing the right thing, taking example from my professor? In that case my code might be a one-use throw-away effort, because none of the Greek developers would want to follow a new set of conventions."} {"_id": "96246", "title": "When developing on an old code base, should I use Best Practices or go for Consistency", "text": "> **Possible Duplicate:** > Code maintenance: keeping a bad pattern when extending new code for being > consistent, or not ? As my experience in programming increases with each project, I look back at earlier projects and cringe at some of the ways the code is structured or how well I have implemented a design pattern. These projects are still utilized in production environments (without issue) and have new features implemented on a yearly cycle. Updating the entire structure is not feasible. When implementing new features should I structure the updates with best practices in mind, or should I keep to the existing (inferior) structure to maintain consistency with the rest of the project?"} {"_id": "213236", "title": "Clarification in steps OS takes during a page fault, related to TLB", "text": "I am reading the book Operating Systems by Galvin, and there is something I don't understand. Assume that there is a hardware-managed Translation Lookaside Buffer (TLB). Now when a program requests a page which is in its own address space, but it is not found in the TLB (a miss), then which of the following happens? (Assuming it's not in the page table either.) **OPTION 1** 1. The OS checks the page table; the page is not there, as the valid/invalid bit is set to invalid. 2. The OS selects a victim frame and swaps in the page from disk. 3. The OS updates the page table flags. 4. An entry in the TLB is replaced with the new page table entry. 5. The instruction is restarted, and now runs like nothing happened. **OPTION 2** 1. When there is a TLB miss, the page table entry for the page is brought into the TLB first. 2. The instruction is restarted. 3. There is a TLB hit, but the valid/invalid bit is invalid => the page is not in memory. 4. Then the OS selects a victim frame, swaps in the page... The rest of the steps are the same. I think the question boils down to **whether only the TLB can be inspected for the valid/invalid bit (due to hardware reasons) or the page table itself can be inspected.** Because OPTION 2 is clearly slower (but a friend insists that that is what must happen). Thanks in advance!"} {"_id": "155463", "title": "Contour Mapping API", "text": "I have done some searching on Google, but haven't come up with much as of yet. I want to take a set of point data, which I had previously been using to create weighted points for a heat map through the Google API, and turn it into a contour map to overlay on the Google Maps API. I haven't seen anything in Google's code that would let me do this. Does anyone know of a good API to create such an overlay? Or is there possibly something I have overlooked that Google offers?"} {"_id": "155464", "title": "Organizing code for iOS app development", "text": "I've been developing an app for the iOS platform, and as I've been going along, I've noticed that I've done a terrible job of keeping my files (.h, .m, .mm) organized. Are there any industry standards or best practices when it comes to organizing files for an iOS project?
My files include custom classes (besides the view controllers), customized view controllers, third-party content, code that works only on iOS 5.0+ and code that works on previous versions. What I'm looking for is a solution to keep things organized in a manner that others (or myself in years to come) can look at and understand the basic structure of the application, and not get lost in the multiple files found therein."} {"_id": "155467", "title": "Selecting a JAX-RS implementation for a new project", "text": "I'm starting a new Java project which will require a RESTful API. It will be a SaaS business application serving mobile clients. I have developed one project with Java EE 6, but I'm not very familiar with the ecosystem, since most of my experience is on the Microsoft platform. Which would be a sensible choice of JAX-RS implementation for a new project such as described? Judging by Wikipedia's list, the main contenders seem to be Jersey, Apache CXF, RESTEasy and Restlet. But the Comparison of JAX-RS Implementations cited on Wikipedia is from 2008. My first impressions from their respective homepages are that: * **CXF** aims to be a very comprehensive solution (reminds me of WCF in the Microsoft space), which makes me think it can be more complex to understand, set up and debug than what I need; * **Jersey** is the reference implementation and might be a good choice, but it's legacy from Sun and I'm not sure how Oracle is treating it (the announcements page doesn't work and the last commit notice is from 4 months ago); * **RESTEasy** is from JBoss and probably a solid option, though I'm not sure about its learning curve; * **Restlet** seems to be popular but has a lot of history; I'm not sure how up-to-date it is in the Java EE 6 world or if it carries a heavy J2EE mindset (like lots of XML configuration). What would be the merits of each of these alternatives? What about learning curve? Feature support? Tooling (e.g. NetBeans or Eclipse wizards)? What about ease of debugging and also deployment? Are any of these projects more up-to-date than the others? How stable are they?"} {"_id": "211551", "title": "What is the story behind Java Vulnerabilities?", "text": "I have always appreciated the Java language. It is known as a very secure platform and many banks use it in their web applications. I wanted to build a project for my school and I discussed the options with some developers. However, one of them said we should ignore Java because of vulnerabilities that appeared in it recently. For this reason I want to find out: what is the story behind this, and does it mean that Java today is considered less secure than it was previously?"} {"_id": "81561", "title": "Using an actor model versus a producer-consumer model?", "text": "I'm doing some early-stage research towards architecting a new software application. Concurrency and multithreading will likely play a significant part, so I've been reading up on the various topics. The producer-consumer model, at least how it is expressed in Java, has some surface similarities but appears to be deeply dissimilar to the actor model in use with languages such as Erlang and Scala. I'm having trouble finding any good comparative data, or specific reasons to use or avoid one or the other. Is the actor model even possible with Java or C#, or do you have to use one of the languages built for the purpose?
Is there a third way?"} {"_id": "48634", "title": "How do I motivate myself to put my ideas into execution, or to get a job at an MNC like Google or Microsoft?", "text": "I mean: > How do I motivate myself to get a job at Google, or to create another Google in the > future? As there is no mentor who can guide me on this topic, I am asking it here. I'm a graduate in BE IT, but with low grades, with an interest in learning new programming languages, but I have not yet done anything great like developing some system. And I'm left with 2 more years to prove my worth to someone. So, is there a quick guide to start learning a language and then just go on implementing my ideas, so that they get appreciated or I get a good job at a big MNC? By the way, I have built one website for my one client and am running my WordPress blog. And I have tried my hand at the basics of C++, Java, JS, JSP, PHP, Ubuntu, and web design in the past."} {"_id": "81562", "title": "What does Windows 8 mean for the future of .NET?", "text": "Microsoft showed off a demo of Windows 8, including a new platform that allows developers to use HTML5 and JavaScript. Is this new platform the main way to develop for Windows 8? Is Microsoft phasing out the .NET platform in favor of the HTML5 stack? What does Windows 8 mean for .NET developers?"} {"_id": "154596", "title": "Best practices for including open source code from other public projects?", "text": "If I use an existing open source project that is hosted, for example, on github within one of my projects, should I check the code from the other project into my public repo or not? I have mixed feelings about this: #1, I want to give proper credit and attribution to the original developer, and if appropriate I will contribute back any changes I need to make. However, given that I have developed / tested against a specific revision of the other project's code, that is the version that I want to distribute to users of my project. Here is the specific use case to illustrate my point. I am looking for a more generalized answer than this specific case. I am developing a simple framework using rabbitmq and python for outbound messages that will allow for sending SMS, twitter, email, and is extensible to support additional messaging buses as well. There is a project on github, developed by another person, that handles the creation and sending of SMS messages. When I create my own repo, how do I account for the code that I am including from the other project?"} {"_id": "174803", "title": "What is considered third party code?", "text": "Inspired by this question, Using third-party libraries - always use a wrapper?, I wanted to know what people actually consider to be third-party libraries. **Example from PHP:** If I'm building an application using the Zend framework, should I treat the Zend framework libraries as third-party code? **Example from C#:** If I'm building a desktop application, should I treat all .Net classes as third-party code? **Example from Java:** Should I treat all libraries in the JDK as third-party libraries? Some people say that if a library is stable and won't change often then one doesn't need to wrap it. However I fail to see how one would test a class that depends on third-party code without wrapping it."} {"_id": "69697", "title": "Scheme vs Haskell for an Introduction to Functional Programming?", "text": "I am comfortable with programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm.
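For the JAX-RS selection question above (id 155467), a minimal resource class; because it uses only the standard javax.ws.rs annotations, the same code should run unchanged on Jersey, CXF, RESTEasy, or the Restlet JAX-RS extension, which is one practical argument for coding to the spec rather than to any one implementation.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String byId(@PathParam("id") long id) {
        // A real resource would load the entity and let a provider serialize it;
        // the hand-built JSON here just keeps the sketch self-contained.
        return "{\"id\": " + id + "}";
    }
}
```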
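For the actor-vs-producer/consumer question above (id 81561): yes, the actor model is possible on the JVM (libraries such as Akka provide it, plus supervision and routing), and a minimal hand-rolled sketch also shows the relationship between the two models: an actor is, at heart, a producer-consumer queue plus privately owned state and a single consumer thread, so no locks are needed around that state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class TinyActor<M> {

    private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

    public TinyActor(Consumer<M> behavior) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    behavior.accept(mailbox.take()); // messages handled strictly one at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down when interrupted
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void tell(M message) {
        mailbox.offer(message); // asynchronous, non-blocking send
    }

    public static void main(String[] args) throws InterruptedException {
        TinyActor<String> greeter = new TinyActor<>(msg -> System.out.println("got: " + msg));
        greeter.tell("hello");
        greeter.tell("world");
        Thread.sleep(100); // give the daemon thread time to drain the mailbox
    }
}
```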
I am doing this for fun; my job does not involve computer programming, and I am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college. Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use emacs as my text editor and would like to be able to configure it more easily in the future, which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional. I would likely be using \"The Little Schemer\" book, which I have already bought, if I pursue Scheme (it seems a little weird from my limited leafing through it). Or I would use \"Learn You a Haskell for Great Good\" if I pursue Haskell. I would also watch the Intro to Haskell videos by Dr Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks. P.S. BTW I also have access to F# since I have Visual Studio 2010, which I use for C# development, but I don't think that should be my main criterion for selecting a language."} {"_id": "154598", "title": "web server response code 500", "text": "I realize that this may spur a religious discussion, but I discussed this with friends and got great, but conflicting, answers, and the actual documentation is of little help. What do the 500-series response codes from the web server mean? Internal Server Error, but that is vague. My assumption is that it means that something bad happened to the server (file system corruption, no connection to the database, network issue, etc.) but not specifically a data-driven error (divide by zero, record missing, bad parameter, etc.). Something to note: there are some web client implementations (the default Android and Blackberry HTTP clients) that do not allow access to the HTML body if the server response is 500, so there is no way to determine what caused the issue from the client. What I have been implementing recently is a web service that returns a JSON payload wrapped in a response object that contains more specific error information if the error is data-related, but the server response will be 200 since it finished the actual processing. Thoughts?"} {"_id": "43339", "title": "Is code maintenance typically a special project, or is it considered part of daily work?", "text": "Earlier, I asked which tools are commonly used to monitor methods and code bases, to find out whether the methods have been getting too long. Most of the responses there suggested that, beyond maintenance on the method currently being edited, programmers don't, in general, keep an eye on the rest of the code base. So I thought I'd ask the question in general: Is code maintenance, in general, considered part of your daily work? Do you find that you're spending at least some of your time cleaning up, refactoring, rewriting code in the code base, to improve it, as part of your other assigned work? Is it expected of you/do you expect it of your teammates?
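For the 500-response question above (id 154598), a sketch of the conventional split: 4xx for request/data-driven errors, 5xx for failures inside the server, with a JSON body carrying the detail either way. The servlet and exception handling here are illustrative assumptions, not a prescription.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("application/json");
        try {
            String id = req.getParameter("id");
            if (id == null || !id.matches("\\d+")) {
                resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); // 400: the caller's fault
                resp.getWriter().write("{\"error\": \"id must be numeric\"}");
                return;
            }
            resp.setStatus(HttpServletResponse.SC_OK); // 200 only when processing truly succeeded
            resp.getWriter().write("{\"id\": " + id + "}");
        } catch (RuntimeException serverFault) {
            // Database down, file system corrupt, etc.: genuinely a 500.
            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            resp.getWriter().write("{\"error\": \"internal failure\"}");
        }
    }
}
```

The client limitation the asker mentions (bodies unreadable on 500) is the usual argument for the 200-with-envelope style described in the question, at the cost of diverging from HTTP semantics.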
Or is it more common to find that cleanup, refactoring, and general maintenance on the codebase as a whole occurs in bursts (for example, mostly as part of code reviews, or as part of refactoring/cleaning-up projects)?"} {"_id": "43336", "title": "Building a satisfying community at work", "text": "In the influential \"Peopleware\", DeMarco and Lister state that an organisation that builds a satisfying community will tend to keep its people. We have about 40 programmers at work. How do we go about creating a \"community\" out of them?"} {"_id": "13673", "title": "Open Source Projects Leading to Uniquely Interesting Opportunities", "text": "Not sure if I asked this in the right place (it's development related). It would be great if you can direct me to the right place if I did ask in the wrong place. I was encouraged by hosts during some company tours to participate in the open-source software community if interested in software. The reasons include showing the initiative to contribute useful components to a project while others can evaluate the quality of your design and coding practices. However, other than case studies of CUSP being bought out by Apple Inc., I can't think of other case studies showing the trend of interesting opportunities being attracted over the long term. Do you have any high-profile projects fitting these criteria that come to mind? Thanks in advance for your suggestions!"} {"_id": "201408", "title": "Pros of start learning programming with Python if what I really want is Javascript?", "text": "A friend of mine seeks to learn Javascript programming but he has never programmed before. I've found Python to be quite a nice language that takes most unneeded \"strangeness\" out of learning programming while remaining useful. Personally my first language was Pascal, and while I experimented a little with C++, C, Java, C#, Php and Javascript, I then found Python and this is actually what I have used most of the time for years. I find Python a good language for introducing most programming concepts in a way where you can focus on the concept and not on the language. Actually, imho, Python has only 1 big gotcha, the significant whitespace, while other languages often feature more that is unnecessary when introducing today's programming itself (like overhead, pointers, implicit conversions, operator overloading, etc.). I've also heard that many instructors use this language to introduce programming concepts, and I tend to agree with this choice. As Javascript has its own strengths and weaknesses as an introductory programming language, I wondered whether it's worth considering using Python instead for learning programming and then learning Javascript once you already understand many programming concepts. I'm not used to teaching, so I would ask those who are: what pros and cons would teaching Python first have over teaching Javascript right away, for a newcomer to programming?"} {"_id": "53302", "title": "Should managers prohibit programmers from using IM in office?", "text": "> **Possible Duplicate:** > Would you allow your programmers to use Messenger and social networks like > Facebook? A manager may believe that using IM clients in the office is not acceptable, but many programmers use them for legitimate purposes, for example in order to easily contact one another. Do you think the IM chat prohibition is reasonable?"} {"_id": "154283", "title": "Should I demand unit-testing from programmers?", "text": "I work at a place where we buy a lot of IT projects.
We are currently producing a standard for system requirements for the requisition of future projects. In that process, we are discussing whether or not we can demand automated unit testing from our suppliers. I firmly believe that proper automated unit testing is the only way to document the quality and stability of the code. Everyone else seems to think that unit testing is an optional method that concerns the supplier alone. Thus, we will make no demands for automated unit testing, continuous testing, coverage reports, inspections of unit tests or anything of the kind. I find this policy extremely frustrating. Am I totally out of line here? Please provide me with arguments for any of these opinions."} {"_id": "116570", "title": "XML or HTML for User Manual/Help and Why?", "text": "I want to make a User Manual/Help for my program. I have found two good ways - XML or HTML - which should I use and why? I know how to do it in HTML (I know how to use JS, CSS and such) but XML is harder (or so I imagine?). Anyway, which way should I choose and why? **Edit**: There's one important thing I forgot to mention: I need it for the army. I can make it only on the army's computer and I cannot install anything you suggest. I can use Notepad and that's it."} {"_id": "116352", "title": "How to manage two major versions using SVN?", "text": "At the company where I'm working we support two versions of the software we develop. One version is available to customers, and one version the developers are developing new functionality in. The version available to customers is also changed by developers, to fix the bugs our customers have found. So, for example, we have a 4.1 version available for customers, and we are developing 4.2. As soon as we release 4.2, 4.1 gets closed, and we start developing on 4.3. Currently we have two trunks, one for each version that is open for development. Every time a bug is fixed in the released version, we have to merge it into the new version too. This is extra work. Next to this, we would like to work in advance, and have a version already finished 'on the shelf', and already start on a new version. This would mean that if we fix a bug in the released version, we would have to merge it into three trunks! Is there a better way of structuring this, possibly eliminating the duplicate merges? Are we doing something completely wrong? Thanks in advance."} {"_id": "17355", "title": "What are some famous one-liner or two-liner programs and equations?", "text": "I'm experimenting with a new platform and I'm trying to write a program that deals with strings that are no longer than 60 characters, and I'd like to populate the data store with some famous or well-known small chunks of code and equations, since programming and math go along with the theme of my software. The code can be in any language and the equations from any discipline of mathematics, just so long as they're less than a total of 60 characters in length. I suspect people are gonna break out some brainfuck for this one. For example, #include <stdio.h> int main(){printf(\"Hi World\\n\");return 0;} 60 characters exactly! Thanks so much for your wisdom!"} {"_id": "45803", "title": "what do you do to keep learning?", "text": "After school, I think my learning process has been slow (though it has not stopped), so please give me some advice that will help me to keep learning. UPDATE: I am a web developer in PHP.
I have been working in PHP for the last 2 years."} {"_id": "10656", "title": "Plagued by indecision - how to choose technologies to use for projects?", "text": "I have always been fascinated with the newest and best technologies available. I graduate from college this year, and over the course of the past few years, I have spent a lot of time learning new programming languages, web frameworks, Linux distributions, IDEs, etc., in an effort to find the best of each. I have installed and played around with Ubuntu, Gentoo, Debian, Arch Linux, SUSE, VectorLinux, Puppy Linux, Slackware, and Fedora, I have spent a good amount of time in Vim and Emacs, and have played around with Visual Studio, Eclipse, NetBeans, gedit, and several more obscure ones. I have played with all sorts of languages - I started with the common ones like C, Java, Visual Basic, but always heard that they were \"bad\" (for relative definitions of bad). I then discovered the scripting languages and have quite a bit of experience in PHP, Perl, Python, and Ruby. Then I heard that functional languages are where it's at, so I played around with Scheme, Lisp, Haskell, Erlang, and OCaml. I've played around with obscure languages like Forth and J. When I do web development, I go back and forth between frameworks and languages. Should I use plain PHP, Ruby on Rails, Django, CakePHP, CodeIgniter, Yii, Kohana, or make my own? I have a very broad and shallow knowledge of computer science. As soon as I have learned a useful amount of one technology, I see something else shiny and go after it. My progression often goes like this: \"Perl is better than PHP, but wait, Python is better than Perl. Oh, but now I see that Ruby has the power of Perl and it is cooler than Python. Well, now that I have seen a little of Ruby, it is too complicated. Python is cleaner. Oh, but it is too hard to deploy Python, so I should use PHP if I want to do real web development.\" And so on and so forth. What technology should I use for projects? Should I just pick one language/framework/IDE and sort of forget about the other things that are available for a while? I don't have all that much in the way of completed projects, because I never stay with something long enough to finish it."} {"_id": "15449", "title": "How do I get up-to-date on web development technologies?", "text": "I'm a regular user here, but linking my screen name to my real identity is dirt simple, and the question I'm about to ask would lead to a very unpleasant conversation with my current employer were they to see it. So, my apologies for this sock-puppet account; I promise Socky and I won't be voting on each other's stuff, and Socky won't be doing much of anything other than voting on answers to his own questions. I may have an opportunity to work with somebody I've worked with in the past. Short, extremely understated summary: It went well. This time he's looking to bring me on as a partner to a fledgling web venture. My job title isn't defined, but I'd basically be \"the guy\" in terms of the tech side of the company. One of the first things I'd be doing is fleshing out the current and future requirements of the site and determining the best way to meet them -- overhaul the existing site? Scrap the existing site entirely and replace it with something like Drupal? I'd also be tasked with making sure that the site is not only a pleasant user experience, but that it looks and feels like a modern, professional website. One problem: I'm not qualified. At least, not yet.
I wasn't really qualified the last time we worked together either, but I taught myself what I needed to know and ... like I said. It Went Well. However, I was a lower-level code wrangler that time around, and a lot of key decisions had already been made. I had more room for error and time to learn as I was going. This outing, not so much. It's been a long damn time since I was completely up-to-date on web programming; my current gig has a healthy dash of web stuff, but the web interface isn't the main business and it's built on old technology. (Technology it honestly doesn't use particularly well.) I just don't spend a lot of time playing with new tech when I'm not at work; I generally spend that time on hobbies and passions I don't get paid to do. But I want this to happen. And I want to do it right. So my question is: what resources would you recommend I use to both get myself up-to-date and stay up-to-date on professional-caliber web programming? I know I need to beef up my jQuery-fu; I've been exposed to it a little, and holy crap does it make hard jobs easy. I also know I need to acquaint myself with Drupal and other content management systems so that I can accurately gauge whether using one as the foundation for the site would be a good idea or a waste of time. But I'm certain there are other technologies out there that would help me do that job that I don't yet know anything about. What are some good resources for helping me figure out what I don't know? Websites, magazines, podcasts, whatever. I need to figure out how to get back into the game properly. This is scary as hell, but it also feels like it could be a huge step forward in my career. (Assuming it's not a step into a pool filled with laser sharks.) My thanks in advance for any assistance anybody can offer in curing my ignorance."} {"_id": "221031", "title": "Keeping up to date with the speed of technology", "text": "I am going through 2 ebooks on ASP.NET MVC and SignalR. **SignalR: Real-time Application Development** Released in 2013 **Introducing ASP.NET MVC 4 Fourth Edition** Released in 2012 As I have been going through these books and writing out the code, I have been told by Visual Studio that some of the code is obsolete. With MVC the section was on security, and with SignalR the main map connection. As of this question, Entity Framework has just released its 6th version and now everyone is starting on Web API as opposed to WCF. This is all very intimidating, to say the least. Besides having to learn all these technologies, we also have to keep up to date with new releases every few months. In a standard web app there is ASP.NET/MVC, EF, SQL, jQuery, jQuery libraries (knockoutjs etc.), HTML5 + CSS3, Web API, and deployment environment setup. And these technologies change so fast. Very often I come across old code while looking for answers, only to realize there are conflicts with other libraries, serious security concerns, or bad practice for maintainability and testing. My question is: how can I stay up to date with all of this?"} {"_id": "92337", "title": "How do you remember numerous API?", "text": "There are lots of other APIs I need to use besides the Selenium test tool to be able to get tests working. After not using them for just one week, the mind has lost all of them. How is it possible to remember zillions of APIs?"} {"_id": "78234", "title": "How can I learn libraries and stuff faster?", "text": "As a programmer, it seems like I spend half my time figuring out how to use various libraries/frameworks/tools.
First Javascript, then jQuery, then canvas, then Backbone, etc., etc. Every tool has its own set of non-obvious idiosyncrasies that you need to be aware of. Mastering them takes a lot of time. Any suggestions on how to speed up the learning process?"} {"_id": "211511", "title": "How can I prepare for the next software paradigm?", "text": "Throughout the history of programming, whenever a new technology was made, an **application** for it had to be made. _E.g. when Android came out, Android apps came out ~ giving millions of dollars in revenue to the respective developers who made the apps. (The OS came in 2008.)_ The beauty of computers is that they are always changing. You cannot invent a new innovation for writing, as most English students learn about Shakespeare, while CS students face an ever-changing curriculum. Programmers have this advantage. **They can make a lot of money if they play their cards right.** So I want to be prepared. I want to be first in line for the next promising innovation. I know many of them will fail, but I think my instinct for spotting potential success is high. _But how can I be prepared?_ I am planning to bring together fellow programmers, friends and offshore developers/contractors. I want to make my own start-up ~ for now as a sole proprietor. There is no way I'm pitching an idea to a large company. I want to get a BN and register a company; I believe it will give more **flexibility**. Plus it's **better working with other minds, as designing something by yourself without help is a recipe for burnout.** So my question, going more specific, is: what approach should I take? * Should I start my own software company, passive as it may be in the beginning? * Who should help me? Is it wiser to get internal/external developers to help me with my ideas for the new paradigms? I am a first-year CS student, 18 years old with good social skills."} {"_id": "194497", "title": "How can I cope with every increasing/changing number of frameworks?", "text": "I started coding with PHP, MySQL, HTML, and JavaScript back in 2002, and heck, that was basically all you needed to create any type of website. Over the years I picked up other languages, but what turned me off were all of the frameworks and jQuery libraries that were popping up every few months, and clients (if freelancing) or employers expected you to be an expert on some crap a geek released recently. Because of this I got out of programming, since I don't know if I can constantly throw away all of my previous knowledge and replace it with some new hip fad that does the exact same thing but just uses different function names. I miss having core knowledge of PHP and being able to code from scratch, and I'm afraid the skill of mastering a technology to completion is becoming obsolete. Is it right to feel this way, and can any of us feel secure about our programming careers in the future?"} {"_id": "94822", "title": "html/css/js best practices", "text": "> **Possible Duplicate:** > What should a developer know before building a public web site? Are there any books or resources on HTML/CSS/JS and web design best practices? There are plenty of books which just teach the basics - syntax and other stuff which is not that difficult - but I can't find use cases, real examples, or how professional designers use them."} {"_id": "54796", "title": "Best way to acquire a complete comprehensive knowledge of web design and development?", "text": "> **Possible Duplicate:** > What should every programmer know about web development?
Assuming one wants to be a professional in web design and development, what technologies would you recommend one learn? In what order should this be done? Any other suggestions? Note: I am interested in finding out about a video training package that contains all the listed technologies."} {"_id": "35890", "title": "Best Programming Language for Web Development", "text": "I am a Web Developer in PHP, and I also know Javascript and a bit of CSS, which is needed for web development. I use the Symfony framework to build websites and web applications. Now I want to learn a new programming language; which is best for web development (like Ruby or Python)? I have heard about frameworks like Rails and Django. Which language will be best for web development, apart from PHP or like PHP?"} {"_id": "46716", "title": "What technical details should a programmer of a web application consider before making the site public?", "text": "What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, _and_ cross-site request forgeries _all in the same site_, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague \"web standards\" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, _Which_ standards? In what circumstances, and why? **Provide a link to the standard's specification.**"} {"_id": "113688", "title": "What do I need to learn to become a better web programmer?", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I'm 15 years old. I've been programming for 2 years. I suppose I am a good programmer (not a designer). I can use PHP (good), CSS (simple), and jQuery (not too bad). What else can I learn related to web development (maybe system programming)? Thanks for your suggestions."} {"_id": "55238", "title": "Please guide this self-taught Web Developer", "text": "One of the major regrets in my life is that I didn't do something with my introversion. I didn't manage to get past the first year of college because of that. I have chosen the path where there are no video games and other time sinks; all I have is the internet to quench my thirst for learning the ins and outs of the field of web developing/designing. Though currently, I'm taking a Web Design Associate course at one of the best computer arts schools, and this is the last month of the class. Even though I'm still a sapling, I love this field so much. So basically, at school I'm learning web design while at home I'm teaching myself web development.
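One concrete instance of the checklist items in the launch question above (id 46716): marking a session cookie HttpOnly (invisible to JavaScript, which blunts cookie theft via XSS) and Secure (sent over HTTPS only). Both setters are part of the standard Servlet 3.0 Cookie API; the servlet itself is a hypothetical sketch.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        Cookie session = new Cookie("SESSIONID", newSessionToken());
        session.setHttpOnly(true); // not readable from document.cookie
        session.setSecure(true);   // only transmitted over TLS
        session.setPath("/");
        resp.addCookie(session);
    }

    private String newSessionToken() {
        // Hypothetical placeholder for a real session-management scheme.
        return java.util.UUID.randomUUID().toString();
    }
}
```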
First things first: returning to college seems impossible at the moment because of some financial problems. I'm pretty comfortable with CSS and HTML, and I'm into PHP/MySQL at the moment. **Could you please provide me with a web-development curriculum to follow?** And do I need to learn the theory behind it? I think I'm still young (I'm 18 at the time of writing). **Is choosing this path a good thing or a bad thing?** I'm glad about my decision, but in all honesty, **I'm worried about my future and employment because I'm an undergrad**, coming from a country where companies are degree b!tches; it saddens me so. Thank you. (My questions are the bold parts.)"} {"_id": "202606", "title": "Junior software developer - How to understand web applications in depth?", "text": "I am currently a junior developer in web applications, specifically in the ASP.NET MVC technology. My problem is that the senior C# developer in the company has no experience with this technology, so I try to learn without any guidance. I went through all the tutorials (e.g. the Music Store), CodePlex projects, and also read Pro ASP.NET MVC 4. However, most of the examples are about CRUD and e-commerce applications. What I don't understand is how dependency injection fits in web applications (I have realized that it is not only used for facilitating unit testing), or when I should use a custom model binder, or how to model the business logic when there is already a database schema in place. I read the forum quite often, and it would be very helpful if some experienced developer could give me an insight into how to proceed. Do I need to read some books to understand the overall idea behind web applications? And what kind of application should I start building myself - I don't think it would be useful to create examples similar to the tutorials."} {"_id": "115951", "title": "Web based applications", "text": "> **Possible Duplicate:** > What should every programmer know about web development? Where do I start if I want to build a web based application? Books, tutorials, online resources. From a quick look at the web I can see that HTML, CSS and JavaScript should be the subjects to learn. Do you agree? What would you recommend to a professional C++ programmer?"} {"_id": "202642", "title": "What must one know when approaching web development?", "text": "I just started working as a novice web developer. I know PHP pretty well, as well as some basic jQuery. Anyway, my boss told me I should explore and learn about MVC, Memcache, design patterns, how Apache servers work and how to set one up, etc. What I want to ask is actually this: What should I learn further? Web development is a big area and the odds are that I'll never stop learning, but what are the basics I should learn about? What are the fundamentals? Currently I'm focusing on server-side development, but a very big part of me also wants to become a front-end ninja, so please consider that in your comments. Thanks in advance, you rock. :)"} {"_id": "109719", "title": "Website development from scratch v/s web framework", "text": "Do people develop websites from scratch when there are no particular requirements, or do they just pick up an existing web framework like Drupal, Joomla, WordPress, etc.? The requirements are similar in most cases: if personal, it will be a blog or image gallery; if corporate, it will be information pages that can be updated dynamically, along with a news section. And similarly, there are other requirements which can be fulfilled by WordPress, Joomla or Drupal.
So, is it advisable to develop a website from scratch, and why? **Update:** To explain more, as I got a comment from @Raynos (thanks for the comment and for helping me clarify the question), the question is about: 1. Should web sites be developed and designed fully from scratch? 2. Should they be done using a framework like Spring, Zend, or CakePHP? 3. Should they be done using a CMS like Joomla, WordPress, or Drupal (people in the east are using these as frameworks)?"} {"_id": "111531", "title": "What should a web developer know about HTTP?", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I've heard rumblings on this site that web developers should familiarize themselves with HTTP. What aspects of HTTP is it beneficial to know as a web developer, and why? **Update**: This is _not_ a duplicate of \"What should every programmer know about web development?\" I am asking specifically about the _HTTP protocol_. I am not using HTTP to mean web development in general."} {"_id": "143271", "title": "What is the best way to maintain our programming experiences?", "text": "> **Possible Duplicate:** > how do you remember programming related stuff? During my work experience, I have always met many kinds of blocking problems with different technologies. When I remember the effort I spent finding the solutions, I become frustrated and want to find a way to keep it all in mind. Generally I keep all the projects I have made on my hard drive, and I usually reuse them when I encounter a problem I have already encountered. But this is not really efficient when you reopen your own code and say: who is the sucker who wrote this code!? I'm thinking of making my own website on which I can post some tutorials/articles about the problems I have met and their solutions. That way I keep it all in mind and help the community. Do you think it would be a good idea, or just a waste of time given the existing programming forums?"} {"_id": "24737", "title": "Beginning Design as a Programmer", "text": "I'm just starting to get into web development. I'm learning Rails at the moment. I have lots of experience with various programming languages. I've been searching for books to help me get started; I want to focus on writing clean, standards-compliant HTML and CSS. Does anyone know of any modern, standards-based resources I can use, particularly books? Thanks!"} {"_id": "110638", "title": "How to learn to become a web developer?", "text": "> **Possible Duplicate:** > What should a developer know before building a public web site? I have been working in telecommunication companies for more than 3 years, and now I want to try to become a web developer. I have used C++ for a long time, and it is my most familiar language; the platform is Unix or Linux. Because of my programming skills, I tried to apply for several positions, like C/C++ software engineer, which I thought required this kind of knowledge. But I failed to get any of the positions. The majority of the positions I applied for really needed C skills rather than C++, and all needed strong knowledge of data structures and algorithms, which is a field that I am not good at. After several tries, I realized that there was a problem. What I am good at is C++, object-oriented design, and analysis ability, which are rather high-level things. But the positions I applied for required C and algorithms, which are rather low-level. So are there positions in web development which are suitable for my programming skills?
Or how much do I need to learn to become a web developer, based on my programming skills? Thanks."} {"_id": "125746", "title": "Introductory to Web programming but with Experience", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I'm not sure where to start with web development. I have experience with HTML and PHP fundamentals and have tried out a little CSS. As of yet, I haven't tried JavaScript. Should I focus on one of these or something else?"} {"_id": "63386", "title": "What is the best way to remember Java APIs?", "text": "I have been doing Java for nearly a year now and I have difficulty remembering Java APIs and method names. What is the best way to remember them?"} {"_id": "202508", "title": "What concepts/technologies should an ASP.NET developer be familiar with", "text": "I am in the process of filling in the gaps in my knowledge so that I can become a better developer. I am an ASP.NET developer and I sometimes need to do pure back-end stuff too. I have compiled a list of things I deem necessary to know. Please feel free to fill in any gaps: 1. WebForms 2. MVC 3. jQuery 4. HTML5 5. WPF 6. WCF 7. The Repository pattern 8. Dependency injection (Castle.Windsor) 9. NHibernate 10. Entity Framework 11. Asynchronous programming 12. SQL. As in: hardcore SQL with temp tables and groupings and variables and stuff. 13. Unit Tests"} {"_id": "140423", "title": "how do you remember programming related stuff?", "text": "How do you remember programming related stuff? Have you ever had the feeling that you encountered the error in front of you right now a few years ago, and you could swear you knew the cause then, but now you've forgotten it? Did you work with XSL's string parsing some time ago, but now you can't remember exactly which string functions XSL has, so you have to start from scratch? Or perhaps you have forgotten about some feature from Apache Commons, like \"filtering a collection by some predicate\", that you surely used in the past. So how do you do it? I tried having a blog, but when I develop apps, I never find the time to update the blog or write about my experiences. Also, using a wiki is a nice thing, but then I found it difficult to keep a clean separation between the two, since many times I needed to change a blog post to add new information about that topic. This made me think that I actually should have put that topic in the wiki instead of the blog. Do you have any systems that help you remember your programming experience? What's your setup?"} {"_id": "70117", "title": "a few newbie questions about web app development", "text": "1. I have a hosting package with Just Host that I use for my WordPress blogs. Is this the same as having a \"web server\" when referencing web app development? I live-chatted with tech support and they said it has Apache installed and I can configure MySQL and Python. I'm planning on programming my app in Python. Do I need something else from another service? 2. Do I need to set up a \"developer environment\" on my Mac just to build my MVP? I've read that I need to download MAMP and that this will give me Apache and a virtual server; is that true? I've downloaded MAMP and Python. My web app will require a database, but I'm not sure what to download for that. 3. Is anyone willing to review my idea and give me a sense as to what I need to do, and whether it would be attainable or realistic for me to do it myself?
My exposure to the tech community is limited (HTML/CSS), but I know I can complete my goals with a few helping hands. I've learned that people don't know you're starving unless you ask for a plate... so, ladies and gents, can someone please feed me?"} {"_id": "29396", "title": "Where to master HTML, CSS and Javascript?", "text": "> **Possible Duplicate:** > What should a developer know before building a public web site? I've gotten interested in web development lately. I am still a student. I learnt the basics of HTML, CSS and JavaScript. Then I thought I should improve my server-side scripting, so I am learning Struts2 and I am doing better there. Now I have decided I should finally put my skills to some use. So my friend and I have decided to come up with a fun website for our class. But now I am realizing that, though I know server-side scripting to a good extent (not great, just good considering I am a beginner), I am nowhere near good at the basic elements, viz. HTML, CSS and JavaScript. I mean, I can't do cool stuff with them. I am aware of w3schools, but it would be great if you could point out a more intuitive place where I can learn to do all the cool stuff in a short time. Some of the problems I am facing are: 1) How should I design the basic layout of my website? 2) How can I use 3rd-party APIs like the Facebook Graph API?"} {"_id": "166540", "title": "Web Developer or Web Designer", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I have built 10+ straightforward websites using the Symfony framework and WordPress. Does this mean I am a web developer or a web designer? I set the sites up in Symfony and wrote the CSS/JavaScript/HTML. I had to write themes for WordPress and move some code about. What experience do I need to be a web developer and not a designer? What experience distinguishes me from a designer who knows HTML and jQuery?"} {"_id": "131051", "title": "What should a web designer know?", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I want to be a web designer. I've spent the past few years on ASP.NET MVC. Where should I focus to learn web design? Whenever I think it's only about HTML, CSS and JavaScript - it's not. Can someone tell me how to approach it practically? I have already been writing HTML for the past several years, but not professionally (it's not well designed or good looking). I feel there is a big chapter in web design. Can someone explain how I can start learning, so that in the future, if someone gives me work, I can do it in a better way?"} {"_id": "176302", "title": "How to become an expert web-developer?", "text": "> **Possible Duplicate:** > What should every programmer know about web development? I am currently a junior PHP developer and I really LOVE it. I have loved the internet from the first time I got into it, I always loved smartly-created websites, was always wondering how it all works, always admired websites with good design and rich functionality, and finally I am creating websites on my own and it feels really great. My goals are to become an expert web developer (aiming to create websites for **small and medium business**, not **enterprise-sized** systems), to have a great full-time job, to do freelance work and to create my own startup in the future. **General question:** What do I do to become an expert, professional and in-demand web programmer? **More concrete questions:** 1).
**_How do I choose the languages and technologies needed?_** I know that every web developer must know HTML+CSS+JS+AJAX+jQuery. I am doing some design as well, because I like it and I also need it for freelancing. But what about back-end languages? Currently I have picked PHP because it's the most in demand in my area and most of the web uses it, but what will happen in the future? Say, in 3 years, I am good at PHP and PHP frameworks by then, but what if some other language becomes the most popular? Do I switch to it? I know that being a good programmer is not about languages and frameworks but about the ability to learn and to aim at goals, but still I think that learning the frameworks for a language can take quite some time. Am I wrong? 2). **_In general, what are the basic guidelines to become an expert web developer?_** What are the most important things I should focus on? Thank you!"} {"_id": "255542", "title": "What is a good generalization of web development architecture?", "text": "I'm new to web development. From looking at popular open-source frameworks for both front-end and back-end, I have a general idea of what the modern full-stack web setup looks like: Database <-> Back-end language ~ REST API <-> Front-end Notes: * The back-end language (Python, Ruby, PHP, Java) generates the API, which is the only layer between the back and the front. The API has authentication to protect private data. * The front-end sends GET and POST requests to the API. An MVC framework can be used, such as Backbone, Angular, or Ember.js. Is my understanding accurate?"} {"_id": "90877", "title": "Document Database versus Relational Database : how to choose?", "text": "I'm a SQL guy, but I know there are _Not Only SQL_ databases - mostly document databases. As with most technologies, there are pros and cons for each. I've read some articles, but they were too theoretical. What I would like is two real cases: 1. when a switch from a relational to a document database gave an improvement 2. when a switch from a document to a relational database gave an improvement Improvement being anything that makes better programs - less development time, scalability, performance, anything that is programming related. There is a caveat for 2: stories like \"falling back to a relational database because everybody knows SQL\" are not good."} {"_id": "60255", "title": "Reverse engineering airline reservations systems", "text": "I saw an interesting conversation on here about airline reservation systems and the services that connect them (like Sabre). This is an almost identical analog to a problem I've been trying to figure out, and I'm looking for the fundamental components required to build a service like Sabre (no, this isn't a competing service, but something that functions remarkably similarly). So the elements of building a Sabre would be: access to the individual databases (through an API, I'm assuming) and a portal through which a search is entered. The question(s): Does it matter if the databases are compatible with one another (i.e. all on Oracle, etc.)? Just how big a project is this? It seems relatively straightforward to someone who's not a database programmer, but I totally know that's naive. If my company were working to build a service like this (again, not for airline seats, but the mechanism is very similar), what would the first steps be? To save everyone the snarky retorts: I understand that this is over my head personally; I'm trying to figure out what resources I'd need to get folks who could do it.
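To be clear about how fuzzy my mental model is, the only mechanics I can picture are fan-out-and-merge, something like the sketch below (all names are invented for illustration; this is not a real reservations SDK):

    import java.util.*;
    import java.util.concurrent.*;

    // One hypothetical airline inventory API we'd be given access to.
    interface ProviderClient {
        List<String> search(String query) throws Exception;
    }

    // The 'portal': fan the search out to every provider, merge whatever returns.
    class PortalSearch {
        private final List<ProviderClient> providers;
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        PortalSearch(List<ProviderClient> providers) { this.providers = providers; }

        List<String> search(String query) throws InterruptedException {
            List<Callable<List<String>>> tasks = new ArrayList<>();
            for (ProviderClient p : providers) {
                tasks.add(() -> p.search(query));   // query all providers in parallel
            }
            List<String> merged = new ArrayList<>();
            for (Future<List<String>> f : pool.invokeAll(tasks)) {
                try {
                    merged.addAll(f.get());         // merge the individual result sets
                } catch (ExecutionException e) {
                    // one provider being down shouldn't sink the whole search
                }
            }
            return merged;
        }
    }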
No, I don't think I can teach myself; I know I need to hire crazy smart folks to do it. This is an attempt to understand the elements I need to put in place to get a start on this. I genuinely appreciate any help folks can give. PS - two links that describe the essential functions relatively well: Airline Reservation System and Computer Reservations System"} {"_id": "120082", "title": "Would it be possible to create an open source software library, entirely developed and moderated by an open community?", "text": "Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works. Anyone can post code, and through community moderation it is cleaned up and eventually valid code ends up in the final library. For complex libraries an elaborate system should probably be created, but for a simple library it is my belief this is already possible, e.g. within the Stack Exchange platform. **Example** Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it. Feel free to give feedback on this particular idea on its proposal page on Area51, but it is just meant to be an example. This question is meant to find an answer to the general idea of creating an open source software library moderated by an open community. **Questions** 1. Is it possible? What would be the main problems with such an approach? 2. Has anything like this been attempted before? 3. Are there platforms better suited for this?"} {"_id": "60250", "title": "How to write constructors which might fail to properly instantiate an object", "text": "Sometimes you need to write a constructor which can fail. For instance, say I want to instantiate an object with a file path, something like obj = new Object(\"/home/user/foo_file\") As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could: 1. throw an exception 2. return a null object (if your programming language allows constructors to return values) 3. return a valid object but with a flag indicating that its path wasn't set properly (ugh) 4. others? I assume that the \"best practices\" of various programming languages would implement this differently. For instance, I think ObjC prefers (2). But (2) would be impossible to implement in C++, where constructors cannot return a value. In that case I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?"} {"_id": "232261", "title": "Options for client-side encryption of local web databases", "text": "My scenario is as follows: * Web application, run from the browser, designed for mobile devices. * Uses WebSQL storage which may contain sensitive data.
* Uses Application Cache to enable offline use where there is no connectivity. * Can connect to an API where even more sensitive data can be downloaded. Authentication is handled with BASIC Authentication, and unencrypted data is transferred over the wire, as the application shall access the API in the following environments/scenarios: * On a private, secured, local Wi-Fi network. * Over the public internet, over a VPN connection. * Over the public internet, through an HTTPS connection. So far, based on my limited knowledge, the security of sensitive data transferred over the wire is covered. However, the data 'at rest' in the local WebSQL storage, and formatted in the HTML pages of the web application, is not secure. The current concern is: if a mobile device contains sensitive data and it is lost or stolen, how do we minimise the risk of the sensitive data being accessed? As the application needs to be usable without interaction with the server, any software-based encryption would be contained on the client itself. This means that, presumably, attempting to encrypt the WebSQL database will be pointless, as if a skilled intruder can bypass the hardware encryption, they can presumably determine the encryption/decryption logic as well? The proposed solution for encryption of data at rest is to use the built-in hardware encryption that is part of iOS and Android devices, requiring users to enter a password at the lock screen. There are related 'remote wipe' features as well that could be used. **Q: How 'secure' is the proposed solution? If the built-in hardware encryption is not enough, what are the best strategies for implementing client-side encryption of WebSQL data without need of interaction with a server?** Is the implementation of a software-based data encryption/decryption solution only going to give a marginal security benefit, as the encryption/decryption code can be accessed and reverse engineered? Is it just me, or is the main benefit of that merely marketing, i.e. being able to 'say' that it is encrypted? Interesting links: http://security.stackexchange.com/questions/10529/are-there-actually-any-advantages-to-android-full-disk-encryption Also... > Underlying storage mechanisms may vary from one user agent to the next. In > other words, any authentication your application requires can be bypassed by > a user with local privileges to the machine on which the data is stored. > Therefore, it's recommended not to store any sensitive information in local > storage. > > https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet#Client-side_databases Does this mean the only secure option is to produce a hybrid application, e.g. using PhoneGap? Is this a great deal more secure, or could it just be reverse engineered to view the JavaScript that encrypts/decrypts in the same way anyway?"} {"_id": "235024", "title": "Arrays' subscripts priority", "text": "I was reading a lecture on arrays for Fortran 90 and I came across this sentence: 'Fortran always stores by columns - the first subscript varies more rapidly than the second, and so on.' What does the author mean by that?"} {"_id": "235025", "title": "Dependency Inversion Principle: Understanding how both low level components and high level components depend on abstractions", "text": "I'm learning about the Dependency Inversion Principle. It states that: > High level modules should not depend upon low-level modules. Both should > depend upon abstractions.
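To make the quoted principle concrete before going further, here is the kind of minimal sketch I keep drawing for myself (all class names are my own invention, purely illustrative):

    // The abstraction that both sides depend on.
    interface MessageSender {
        void send(String text);
    }

    // High level component: it only knows about the abstraction.
    class Notifier {
        private final MessageSender sender;
        Notifier(MessageSender sender) { this.sender = sender; }
        void notifyUser(String text) { sender.send(text); }
    }

    // Low level component: it is written to fit the abstraction.
    class EmailSender implements MessageSender {
        public void send(String text) { /* SMTP details would go here */ }
    }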
For a while I tried to understand what it means that both the high level components and the low level components rely on the abstractions and are _dependent on them_. _I'm assuming both should depend on the **same abstraction** in some way. Please correct me if this is wrong._ I have come to some conclusions about what this means. Please confirm whether they are accurate. ![enter image description here](http://i.stack.imgur.com/p3IRQ.png) \"**The high level components** are dependent on the abstraction\" - Meaning: The high level components **talk to an interface to communicate with the low level components**, instead of communicating with concrete low level components directly. The low level components implement this interface. \"**The low level components** are dependent on the abstraction\" - Meaning: The low level components are **defined and designed in terms of the interface.** They are designed to **fit the interface**. They are dependent on the interface, in the way that **the interface defines how they are designed.** (Often low level classes implement that interface.) This way, both the high level components and the low level ones are 'dependent on the abstraction', but in different ways. Is this a good understanding?"} {"_id": "232260", "title": "Can the author of code licensed under CC-NC-ND use it in commercial closed-source software?", "text": "I am working on a project on which I need to open-source part of my code in order to simplify extension by the end user. What I want is to make an npm module that will expose part of my code, so my users can build extensions for the product (in JavaScript), but I want a guarantee that this code will not be used for commercial or other development work besides extensions for my product. I found the **Creative Commons Attribution NonCommercial NoDerivs** license to be a fit. **My problem:** Can **I**, as the author of this code, use it in a commercial closed-source application? **Disclaimer:** I know this is kind of a legal question, but please state what you think; no one is holding you accountable/liable for it. Thanks."} {"_id": "19589", "title": "Do you use Cobertura in the enterprise?", "text": "Cobertura is a reporting tool for code coverage of unit tests. The license for Cobertura is complicated: Ant tasks are covered under Apache 1.1 (easy enough), but instrumented bytecode involves the GPL. Details are here: http://cobertura.sourceforge.net/license.html In a classic CYA, the conclusion of the license states \"it depends on your interpretation of the license\". How have you interpreted the license? Do you use Cobertura in developing commercial software for the enterprise? I.e. simply using it to report on your JUnit tests. I'm **not** asking about bundling Cobertura with your product."} {"_id": "19584", "title": "How does one make a cluster capable program?", "text": "In cluster computing, there seem to be two options: task redirection and task splitting. Task redirection seems easy enough: you just have the master dispatch the small calls to other nodes in the cluster for processing (e.g. web server clusters, I think). Task splitting, however, seems wildly more complex, since I don't think you can have two threads of the same program running on different machines, meaning you have to split up the work. How, though, does one split up the work?
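The only kind of split I can picture is the trivial, embarrassingly parallel one; roughly this sketch, with threads standing in for cluster nodes (all names invented):

    import java.util.*;
    import java.util.concurrent.*;

    class RangeSplit {
        // Stand-in for the 'real work' done on one slice of the input.
        static long sumOfSquares(long from, long to) {
            long s = 0;
            for (long i = from; i < to; i++) s += i * i;
            return s;
        }

        public static void main(String[] args) throws Exception {
            int workers = 4;                        // pretend each thread is a node
            long total = 1_000_000L;
            long chunk = total / workers;
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            List<Future<Long>> parts = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                long from = w * chunk;
                long to = (w == workers - 1) ? total : from + chunk;
                parts.add(pool.submit(() -> sumOfSquares(from, to)));  // each node gets a slice
            }
            long sum = 0;
            for (Future<Long> f : parts) sum += f.get();  // combine the partial results
            pool.shutdown();
            System.out.println(sum);
        }
    }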
I can see how some stuff like rendering or video encoding works, just because you can tell each node to work on a different part of the video, but when you want to do things like calculate the 5 trillionth digit of pi, how would you split that up? Or even in science, where you need to simulate weather or other resource-intensive tasks? In short, how do you split up tasks that aren't really designed for splitting up?"} {"_id": "64005", "title": "Is having candidates play Portal a good interviewing tactic?", "text": "I read a blog that said something along the lines of: > Some candidates could be inexperienced but with potential to be great. A > good way to test this is to have them solve (non-programming) puzzles to > test their problem solving skills. I wonder if solving the puzzles in the game Portal could be a good way to do this. First of all, it's fun; candidates would enjoy the interview and be more enthusiastic about the job. However, many programmers are familiar with this game, and it would only work if they have never played it before. Also, my first time playing it I didn't find it very difficult, but I also love solving puzzles. So maybe it's not difficult enough to really test problem-solving skills. If this is a bad example, what are some good examples (of problems to solve)?"} {"_id": "69036", "title": "Where to allocate the room for a new environment variable?", "text": "In Unix, I want to modify environment variables. If the size of the new value is larger than that of the old one, the room for the new value is allocated by malloc. However, is the memory for environment variables above the stack (not heap memory)? Where should the room for the new variable be allocated?"} {"_id": "81381", "title": "What is the best way to create and guide an enterprise architecture?", "text": "Wikipedia defines Enterprise Architecture as follows: > An enterprise architecture (EA) is a rigorous description of the structure > of an enterprise, its decomposition into subsystems, the relationships > between the subsystems, the relationships with the external environment, the > terminology to use, and the guiding principles for the design and evolution > of an enterprise. This description is comprehensive, including enterprise > goals, business functions, business process, roles, organisational > structures, business information, software applications and computer > systems. Practitioners of EA call themselves \"enterprise architects.\" > > An enterprise architect is a person responsible for developing the > enterprise architecture and is often called upon to draw conclusions from > it. By producing an enterprise architecture, architects are providing a tool > for identifying opportunities to improve the enterprise, in a manner that > more effectively and efficiently pursues its purpose. As a company grows, the enterprise architecture has a tendency to fracture (if it existed at all) and become a big ball of mud. Unfortunately at that point, any individual development group within a company is usually not structured or positioned properly to create an enterprise architecture, and there is usually little incentive for any particular group to do so, since they are focused on their own business problems.
On the flip side, creating a separate \"architecture group\" that is not closely aligned with the business priorities and delivers from on high what the architecture should be isn't sufficient either, since the work they do usually falls on deaf ears with the people doing the \"real work.\" What, then, is the best way to create and guide an enterprise architecture?"} {"_id": "81382", "title": "Will any js libraries be updating api's with html5 under the hood?", "text": "So there is a lot of cool stuff in the HTML5/CSS3 specs which should be able to replace a lot of the stuff that has traditionally been done with JavaScript. Does anyone know if any of the major JS libraries (jQuery, MooTools, ExtJS, etc.), or any newer JS libraries, are going to be implementing their libraries in such a way that an HTML5 implementation will be used, but that falls back to the existing implementation (HTML4/pre-CSS3/JS) when older browsers (IE8/FF3.5 and earlier) are used, for backwards compatibility? Basically, I'm trying to find out if there is a trend in this direction, or if this is kind of a reset point where completely new libraries are being written as opposed to mashups."} {"_id": "2235", "title": "Have you ever written anything that made you a lot more efficient (or want to)?", "text": "Sometimes we want to be more efficient and productive; what have you written to achieve this?"} {"_id": "52696", "title": "Open Source Project to learn about systems engineering", "text": "I am an engineering student and I want to learn more about systems programming. I wanted to know the various open source projects that I have as options to learn from. The Linux Kernel Project is very stable and has a very large code base. I wanted to know about projects that would not take much time to get bootstrapped with and to start contributing to. Thanks"} {"_id": "124259", "title": "Managing expectations", "text": "I am in my senior year of college. I am an intern at a $120-million-a-year company. I am responsible for maintaining three websites, I'm essentially the DBA for the marketing database, and I write and support in-house software. I'm booked pretty solid. My issue is that upper management likes to severely overcommit me in about every possible way. For example, the VP of the company told one of our larger customers that we had a cross-ref mobile app ready to show them at a meeting a week away. I had not even started such a thing, nor had I ever written a mobile app in my life. It was a fairly simple project, but I still had to kill myself to get something delivered in that time frame on top of my regular responsibilities. It's a double-edged sword: if I had not gotten the app delivered, it would have looked bad for me. However, getting it done gets the VP thinking \"well, if that's what you can do in a week...\" The upper management has no idea of the work that goes into IT projects. They seem to think it is magic. I've complained about my situation to a few people and all I ever hear is \"Welcome to _name of our company_\". I would just leave, but it's a great job in many ways at a growing company. Not a bad place to be when I'm about to graduate. Anyway, my question is: as an intern (low man on the totem pole), how do I get the management to understand what goes into my workday and what is reasonable to accomplish? How do I get them to not commit me to projects without discussing it with me? I don't know what to say, or how to say it, to put it into perspective for them without sounding like I am whining or incapable.
Make sense?"} {"_id": "148425", "title": "When Business Object fields should not exactly reflect database columns", "text": "The main advantage of Hibernate annotations is that a simple POJO (also called a Business Object most of the time) can become persistent through Hibernate annotations (or actually JPA). In the scenario where our conceptual domain model (the business objects used by clients) does not exactly reflect the physical model (the database), how should we deal with that? Should I create a \"second\" model that represents the \"true\" business objects used by clients AND a \"data storage object\" containing the Hibernate mapping annotations? Of course, with this solution, DAOs will be responsible for converting each BO to a Data Object and vice versa."} {"_id": "118304", "title": "Common practice for abandonware in SVN", "text": "I have a general repository for small utilities (which were deemed too small at the time to warrant their own repositories; 'nother problem in itself, maybe), some of which are deprecated and likely to never serve again. But one rule where I work is to never throw anything away. Deleting from SVN means it's not really deleted, it's just in the history somewhere, but that can still be hazardous in case you need to find that old thing again. What would be the best strategy for keeping deprecated items, but also keeping them out of the way?"} {"_id": "122026", "title": "migrating product and team from startup race to quality development", "text": "> **Possible Duplicate:** > A simple ways to improve the release quality in RAD environment This is year 3 and the product is selling well enough. Now we need to enforce good software development practices. The goal is to monitor incoming bug reports and reduce them, allow never-ending features, and get ready for scaling 10x. The phrases \"test-driven development\" and \"continuous integration\" are not even understood by the team, because they were all in the first two-year product race. Tech team size is 5. The questions are: 1. how to sell/convince the team and management on TDD/unit testing/coding standards/documentation - with economics; 2. how to train the team to do more than just feature coding and to start writing unit tests alongside - which looks like more work and means needing more time; 3. how to plan for creating unit tests for all the backlog production code."} {"_id": "118300", "title": "Free Offline download of MSDN?", "text": "I was doing some searches for MSDN documentation and saw that it's online for free. It appears they now have a way to download it from the web, but I wasn't able to find out exactly how to do so. For the offline content, do you have to have Visual Studio installed, and if so, will the Express version work for this? Or can you download and run the MSDN help files standalone?"} {"_id": "215175", "title": "How to safely run random binary codes?", "text": "Okay, so I am looking for a way to safely run randomly generated binary code. I also need to be able to decompile the code. Any ideas and all programming languages are welcome. BTW, it must be binary code; bytecode or source code will not work. This is a research project, so I can't go adding variables."} {"_id": "210520", "title": "Finding an object on an infinite line", "text": "Question: There is an infinite line. You are standing at a particular point; you can either move 1 step forward or 1 step backward. You have to search for an object on that infinite line. The object can be in either direction.
Give an optimal solution. My approach: go 1 step forward, 2 steps backward; go 2 steps forward, 4 steps backward; and so on. Complexity: let's say the required object is at point n. Total number of steps: 3 + 6 + 9 + ... + 3n = 3(1 + 2 + 3 + ... + n) = O(n^2). Is there a way to improve the efficiency?"} {"_id": "251148", "title": "What are some version control systems based on different concepts than Git, Mercurial etc?", "text": "Most of the version control systems I have used have a lot of similar functionality for source code files: they work on their text content, diff using well-known diff algorithms, the changes contain deleted and inserted lines of text, and the whole workflow is based on some form of commits, branches and merging of branches. Several days ago I found, and started playing with, Darcs, a version control system created by the Haskell community based on patch theory. I wonder if there are any other, more esoteric version control systems based on different concepts (e.g. different kinds of commits, working on code structure, etc.)?"} {"_id": "195493", "title": "Simple method to authenticate human input into a form for relaying via email to a 3rd party", "text": "I put up a webpage advertising a particular birthday party. A simple HTML page served by Apache 2 on Ubuntu Server 12.04. I have a link on the page to a CGI script - Python - that asks the requester to submit text values that will be formatted and relayed in an email to the birthday boy. What's a simple way to authenticate that a human being is filling out the form, so that I can keep the birthday boy from receiving spam? I'm using standard CGI because I do not expect an overload of concurrent requests. I am aware of things like reCAPTCHA, but am looking for canonical strategies for quick and dirty authentication. I'm not being paid for this and the site will be up for but 9 days, so I don't want to put time into a solution more robust than what I actually need. Would asking a human-answerable question suffice? If so, could I get away with a dropdown box of selectable answers? I want quick and dirty, but I also don't want to wind up with boatloads of spam because I underestimated the intelligence of spam bots."} {"_id": "251141", "title": "Best practice for returning JSON in a REST application?", "text": "I'm starting now with REST (using Laravel 4.2) and Mobile (Android, iOS, SP, etc.) applications. Initially I'm checking whether the request is AJAX/JSON and then returning a JSON response. But this condition is in a normal (HTTP) action, so: **if ajax/json** return JSON, **else** return the view/template/layout. Is it correct to do that in the same action, or is it best practice to create another controller for that? For example: * PostsController * PostsMobileController Or * PostsController * Mobile\\PostsController (or Json\\Controller) - _using namespaces_"} {"_id": "195498", "title": "Is it 'safe' to expect myClasses to agree not to only call package Scope methods from other Package scope methods?", "text": "The question says it all, but here's a quick overview of the situation. I'm creating a Model which contains classes (all inheriting myObject) which have a large amount of interconnection. I want the controller to be able to create any of these objects at any time, without modifying the Model. Only an explicit call to the model's \"AddToModel\" would 'install' the object into the model (including updating all the connected objects).
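In code, the shape I have in mind is roughly this (a hypothetical sketch, not my actual classes; everything lives in the Model's package):

    import java.util.*;

    class MyObject {
        private final String key;
        private boolean installed;                 // state only the model may touch

        MyObject(String key) { this.key = key; }   // package scope: only the factory creates these

        public String getKey() { return key; }    // the public surface is read-only

        void install() { installed = true; }      // package scope: only Model calls this
    }

    class Model {
        private final Map<String, MyObject> objects = new HashMap<>();

        // Factory: hands back the already-instantiated object if one exists.
        public MyObject lookupOrCreate(String key) {
            return objects.computeIfAbsent(key, MyObject::new);
        }

        // The one public mutation path into the model.
        public void addToModel(MyObject o) {
            o.install();
            // ...update all the connected objects here...
        }
    }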
myObjects will use a factory pattern: if a user tries to create something already represented in the Model, the already-instantiated object in the model will be returned instead of constructing a new one. To help with encapsulation, I want all of my objects to be effectively immutable to the controller; no matter what it does with an object, it can't change the Model without calling the Model's add/remove. The model still has to be able to change myObject state, so I would put all of the objects into the Model package. Each object will then have package-scope methods for updating state, including add/remove methods which install them in the model and update connected objects. So essentially package-scope methods can change model state, public ones can't. My concern is that this will all break if any of myObjects calls one of its package-scope methods from a public method, messing with my state without explicit calls to the Model. I am writing all of this, so I can abide by the contract that \"only package methods can call other package methods\". But what if someone comes by later and tries to call \"addToModel\" from a constructor because they didn't read my comments and don't realize this breaks an assumed contract? Is it 'safe' to expect others to read comments and abide by such an implicit contract when messing with 'my' model? Can I enforce this with some sort of pattern (preferably without too much abstraction/interfaces, as that could confuse some of the other developers)? PS: I'm using Java, if that helps. I think I might even be able to enforce this with the security API, though that may just prove more confusing/complicated, since it would result in an obscure runtime exception."} {"_id": "166852", "title": "Writing a desktop application for a programmer from a PHP background", "text": "I have a client who wants a tool to be able to upload his products, enter orders, and keep track of customer details. There are quite a few highly customised requests, which is why he wants the tool custom made. He does not care much about the interface design - it just has to be usable and provide access to the database. I've already designed the database. I have no experience with desktop applications and usually write my web apps in PHP with the Yii framework. But hosting this on a server seems like overkill. I also have .NET experience from a few years ago. What would be the best options for writing this as a desktop application?"} {"_id": "166856", "title": "Would a typical corporate firewall block a Java applet having the following behaviour", "text": "I'm thinking of developing a proxy-like program to forward ports on a remote PC to a local PC (for example SSH). Assume that both local and remote PCs are running behind typical firewalls (i.e. consumer broadband router firewall, Windows firewall or corporate firewalls). The program will be a Java program which the user will run on both the remote and local PC. The remote client will periodically poll a central server to determine whether there are pending client connections. A session could be initiated as follows: 1. The local client contacts the central server and requests the current connection details for a specific remote client. 2. The central server responds with the remote server's last received IP address and port. 3. The next time the remote server polls the central server, the client's IP address and port are returned.
The remote server initiates a connection to the local client using the IP address and port returned by the central server and listens for a response on a random port. The remote server will pass the value of the port it's listening on to the central server. 4. Go to 1 if the client fails to connect to the server. All communication with the central server will be via HTTP/HTTPS on ports 80 & 443. Would this work, or will a typical firewall block the interactions?"} {"_id": "156883", "title": "What is a normal \"functional lines of code\" to \"test lines of code\" ratio?", "text": "I'm pretty new to the TDD approach and my first experiments say that writing 1 line of functional code means writing about 2-3 lines of testing code. So, in case I'm going to write 1000 LOC, the whole codebase including tests is going to be something like ~3500 LOC. Is this considered normal? What is the ratio in the code you write?"} {"_id": "237845", "title": "Unit Testing: How much more code?", "text": "I'm fairly new to unit testing. In school it's always been, \"hey, it works, onward!\" But I've started to write professionally, and even at work that's been basically the mantra. However, I've started to see the validity of unit testing and TDD. I've come to a realization that when I write maybe a 20-line piece of code, I'll usually write 100-250 lines of code testing those lines. Is this about average? Are there better best practices of unit testing that I'm not aware of? Anyway, I thought it was an interesting observation and was wondering: on average, how much more code do you write when you write your unit tests?"} {"_id": "162574", "title": "Why do we have to use break in switch", "text": "Who decided, and based on what concepts, that the `switch` construction (in many languages) has to be the way it is? Why do we have to use `break` in each statement? Why do we have to write something like this: switch(a) { case 1: result = 'one'; break; case 2: result = 'two'; break; default: result = 'not determined'; break; } I've noticed this construction in PHP and JS, but there are probably many other languages that use it. If `switch` is an alternative to `if`, why can't we use the same construction for `switch` as for `if`? I.e.: switch(a) { case 1: { result = 'one'; } case 2: { result = 'two'; } default: { result = 'not determined'; } } It is said that `break` prevents execution of the blocks following the current one. But does anyone really run into a situation where there was any need for execution of the current block and the following ones? I didn't. For me, `break` is always there. In every block. In all code."} {"_id": "99521", "title": "Doctoral research and work for a company with similar profile", "text": "What are the common guidelines and best practices for developers who are studying for their master's or doctoral degrees while still working to earn a living in a commercial entity? Especially if the profile of the commercial entity is partially similar to the research topic. The reason for this question is: * Master's/doctoral students have to publish their findings, therefore releasing information to the public domain * Commercial entities usually have clauses saying that all work items assigned to their employees (electronic, spoken, and written) are actually copyrighted, and the employee is transferring all ownership to the commercial entity * Most commercial research is considered trade secrets and not to be released to the public domain, NDA, etc.
So, for example, if you are employed by a company that simulates turbine parameters using server clusters, while your research topic is, for example, \"physics simulation frameworks\", the thing suddenly becomes muddy. Yet, at the same time, it seems that thousands of people work and publish their findings at conferences while working in the same exact field. What's the catch? How can you keep doing your research work while maintaining rights to your exact research field, and a copyright to implement the proof of concept/prototype/product after graduating, if you later decide to do so? It seems that a lot of people do, somehow."} {"_id": "162573", "title": "Project frozen - what should I leave to the people after me?", "text": "So the project I've been working on is now going to be frozen indefinitely. It is possible that if and when the project unfreezes again, it won't be assigned to me or anybody from the current team. Actually, we inherited the project after it had been frozen before, but there was nothing left by the prior team to help us understand even the basic needs of the project, so we wasted a lot of time getting to know the project well. My question is: what do you think we should do to help the people after us best understand the needs of the project, what we have done, why we've done it, etc.? I am also open to other ideas on why we should leave some traces for the others who will work on this project. Some steps we have already taken: * technical documentation (not full, but at least there is some); * source-control system history; * estimations of which parts of the project need improvement and why we think so; * a bunch of unit tests; * an issue tracker with all the tickets we've done (**EDIT**). What do you think of what we've already prepared, and what else can we do?"} {"_id": "162578", "title": "What's a good model for continuous manager <-> programmer feedback?", "text": "Is it important for managers to give devs regular feedback on how they're doing, and vice versa? I say vice versa because I consider employees to be responsible to their manager, and managers to be responsible to their employees. Everyone seems to think this is a good idea, but in practice I rarely see it happen, because so many shops are \"agile\" now, and that usually means a daily standup plus a weekly kickoff, etc. So one-on-ones just don't happen. In my last position I had my first one-on-one with my manager 6 months after joining the company. It turned out there was a lot of misunderstanding, misalignment and confusion that had built up and snowballed."} {"_id": "57713", "title": "Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?", "text": "I want to learn Flash Builder 4 (Flex) because I see so many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use FLEX over Flash Pro? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads, and Flex is for programmers and RIAs, blah blah... this simply isn't so, from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a Flex developer can build, but using Flash Pro.
I can build powerful AS3-driven apps for the desktop, mobile devices, or the browser; I can link to databases with XML, and I can import text files and communicate with ColdFusion and everything. The advantage of Flash Pro is that I can also easily and cleanly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon? Who is happy with a drag-n-drop button? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it? It takes me back to Visual Basic class. It seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why, when I can do it all nicely in Flash Pro to begin with? Am I clueless, or am I missing some major piece of the puzzle? Thanks for any clarity. PS, I couldn't care less about the code editors. It ain't that bad, people. They make it out like the thing doesn't even respond to keyboard input or something. It does everything I need it to do anyway. Please help out here. If I just don't need to learn it, I don't want to waste the time."} {"_id": "7942", "title": "What tools are available for remote communication when working from home or with a distributed team?", "text": "My supervisor is allowing my team to dip our toes in the water of working from home. Considering that a recent acquisition of another company is leading some employees to love this new idea (it will hack up to an hour off their commute into work every morning), I really want this to succeed. In order to make it a success, we need good tools to make our lives a lot easier. We are currently set up with OpenVPN, and Team Foundation Server 2010 with SharePoint 2010, and we use Live Messenger (for SharePoint integration and easier remote desktop) for IM. These are just what we use (and they are currently working well), but you can suggest other products. So, what are some great tools that will help us collaborate, communicate, and generally work together when we're hours apart?"} {"_id": "42176", "title": "Do you Know of Any Audio Podcasts Discussing PHP Standard PHP Library (SPL)", "text": "Do you know of any audio podcasts discussing the PHP Standard PHP Library (SPL)? I'd like to listen to one in the bath."} {"_id": "198178", "title": "What UML diagram should I use to show a platform's architecture?", "text": "I have been learning UML and have a basic understanding now, but I keep seeing these sorts of high-level architecture diagrams. Here's one from Microsoft: ![enter image description here](http://i.stack.imgur.com/nQw2n.jpg) Source: A bad picture is worth a thousand long discussions. Is this a UML diagram? Are there any rules to follow for creating these types of diagrams? I would like to represent something similar, i.e. the relationship between different installed systems/APIs... i.e. platform architecture."} {"_id": "198179", "title": "What aspects should I be wary of when choosing a web development framework?", "text": "I am relatively new to the world of software development and I've used the Ruby on Rails framework to develop relatively simple applications before. I know that frameworks can be extremely useful in terms of getting applications up and running quite quickly.
However, I also know about the well-documented problems that Rails had in terms of scalability, and I was wondering if there are any other factors that I should be wary of before deciding to use a web framework in general, aside from scalability, particularly when designing non-trivial business applications."} {"_id": "42171", "title": "What is the convention for skipping lines in programming?", "text": "I was wondering, what is the convention for skipping lines? Do we group similar things together, skip no lines, or just group everything together except if statements or similar constructs? If there is no convention for this, then tell me how you do it, and at the end I'll determine if I want to use any. Currently I skip no lines. Languages - C, C++, C#, PHP and Java"} {"_id": "80389", "title": "Can CSS be considered a DSL?", "text": "According to Wikipedia, CSS is a style sheet language. However, it's pretty much the only such type of language in use (at least from a web developer's perspective). When trying to categorize CSS as a language or technology (e.g. for a résumé), would it be acceptable to simply call it a domain-specific language? If you don't think it can be, why not?"} {"_id": "198174", "title": "What is a UNiversal IT Test (UNITT) and how do I prepare for one?", "text": "I will be taking an advanced PHP UNiversal IT Test (UNITT) for a position I am applying for. However, I am unfamiliar with the term 'UNITT test'. What is a UNITT test and how would a company execute one? If anyone is familiar with this, please let me know the best way to prepare."} {"_id": "80383", "title": "When is an Open Source project ready for production?", "text": "When you find a new open source library/project, what criteria do you look at before incorporating it into your source base? * Are there legal questions you need to answer? * Do you look for a certain amount of development velocity? * Is the community buzz a good enough reason? * Does your decision change if you are the one on the line for the project? * Does the complexity of the domain or code change the way you think about it?"} {"_id": "16913", "title": "Should I be involved in my project's business side?", "text": "Before my current job, I was always involved in the technical aspects of a project, like: * architecture * design * performance * security * etc. Now I'm team lead of a project that's a game on a web site (not mine) and somehow got involved in the business side of the project: * what users expect * which pages of the site ads show on * mechanics of the game * etc. But I don't quite agree with the business people's (the customer's, or product owner's if you like) decisions about the direction of the site. Of course I raise my concerns; some of them are taken into account, most of them aren't. I continue my work as usual, as I like working here, but I feel like the product could be better than it is now. I think that's because my goal is to make an interesting and challenging game, and theirs is to attract as many people and earn as much money as possible (it's a paid game). Have you guys ever been in this kind of situation? What are your experiences?"} {"_id": "97472", "title": "When does implementing MVVM not make sense", "text": "I am a big fan of various patterns and enjoy learning new ones all the time. However, I think that with all the evangelism around popular patterns and anti-patterns, sometimes this causes blind adoption.
I think most things have individual pros and cons, and it's important to communicate what the cons are and when it doesn't make sense to make a particular choice. The pros are constantly advocated. \"It depends\" I think applies most times, but the industry does a poor job at communicating what it depends ON. Also, many patterns surfaced by inheriting values from previous patterns or have derivatives, each of which brings another set of pros and cons to the table. The sooner we are aware of the trade-offs of the decisions we make in software architecture, the sooner we make better decisions. This is my first challenge to the community. Even if you are a big fan of said pattern, I challenge you to discover the cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense in a particular piece of software, and for what reasons. MVVM has a set of pros and cons. Let's try to define them. GO! :)"} {"_id": "236598", "title": "Organising models in ASP.NET MVC", "text": "I'm building websites in ASP.NET MVC and I quickly figured out that my models are getting harder to organise as they grow. What I normally do is create one model to add and edit data. But in my add view I have 2 extra properties. So I'm re-using the same model with 2 extra properties, but I don't use these in my edit view. Now I have a few questions regarding this. How do you organise it? * Do you create one class file with multiple models in it, or is it better to create a new file for each model? * Is it better to create a new model if you need a few more properties?"} {"_id": "97478", "title": "Understanding Application binary interface (ABI)", "text": "I am trying to understand the concept of an Application Binary Interface (ABI). From The Linux Kernel Primer: > An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers. From Wikipedia: > an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. > ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. 1. I was wondering whether the ABI depends on both the instruction set and the OS. Are those two all that the ABI depends on? 2. What kind of role does the ABI play in the different stages of compilation: preprocessing, conversion of code from C to assembly, conversion of code from assembly to machine code, and linking? From the first quote above, it seems to me that the ABI is needed only for the linking stage, not the other stages. Is that correct? 3. When does the ABI need to be considered? Does it need to be considered while programming in C, assembly, or other languages? If yes, how are an ABI and an API different? Or is it only for the linker or compiler? 4.
Is the ABI specified for/in machine code, assembly language, and/or C?"} {"_id": "190782", "title": "When to separate application concerns", "text": "I've worked for a number of companies that have, over time, branched out their core service to provide additional services and/or revenue streams. My question is: when is the proper time to separate these concerns into multiple applications, even when they're operating on the same data (or extensions of it)? For example, say a company has a core business application that revolves around a music index/music discovery. If this company later decided to branch out into offering websites for musicians that utilize their existing index, should this be a separate application that receives its data from the core business application via an API, or is it reasonable to lump this into a module of the existing application, and use existing business objects? It seems to me that the **ability** to lump things into one application shouldn't be reason enough to do so. In Unix, we practice Separation of Concerns, but when it comes to enterprise development, this principle seems to be lost. In the example above, I would feel that these _should_ be separate applications, but in my experience I see developers lump these together to save time."} {"_id": "90127", "title": "An Extra-Intellectual search in a large text", "text": "Have you ever seen, met, or worked with a smart/intelligent search library/framework/algorithm during your career? I need a very nice mechanism to search among text. I don't need a \"Personal Google\"; I just thought that the programmers' community has met such a problem and can provide a good solution. Is it possible to search by \"meaning\", and could you advise a way/technique/thoughts regarding this? Thank you."} {"_id": "133506", "title": "When stuck, how quickly should one resort to Stack Overflow?", "text": "I'm self-learning iOS development through the iTunes U CS193p course, and I often find myself stuck. I've been trying to get unstuck myself, but it might take me hours and hours to figure out what I'm doing wrong, be it missing a method or not really getting a whole concept like delegation. I'm worried that I might be wasting too much time, and I'd be better off going to Stack Overflow shortly after I get stuck so I can move on. In your experience, does quickly asking on Stack Overflow hamper the learning process or improve it?"} {"_id": "29359", "title": "Will HTML5/JS Eventually Replace All Client Side Languages?", "text": "I'm just wondering about the future of it all. IMHO, there are 4 forces that define where technology goes: Microsoft, Apple, Google, Adobe. It looks like Apple's iPhone/iPad iAds can now be programmed in HTML5. So does that mean HTML5 will eventually replace Objective-C? Also, Microsoft has now shifted its focus from WPF/Silverlight to HTML5, and I assume Visual Studio 2011 will be all about tooling support for HTML5. Because that's what Microsoft do. (Tools). In a few months IE9, the last major browser to do so, will support HTML5. Similarly, Adobe is getting on the HTML5 bandwagon and allows exporting Flash content to HTML5 in their latest tools. And we all know how much in bed Google is with HTML5. Heck, their latest operating system (Chrome OS) is nothing but a big fat web browser. Apps for mobile (i.e., iPhone, Android, WM7) are very hard for a company to program, especially for many different devices (each with their own language), so I'm assuming this won't last too long.
That is, HTML5 will be the unifying language. This is somewhat sad for app developers, because now users will be able to play the \"cool\" HTML5 apps for free on the web and it'll be hard to charge for them. So are strongly-typed languages really doomed, and in the future, say 5-10 years, will client-side programming only be in HTML5? Will all of us become JavaScript programmers? :) Because the signs are sure pointing that way..."} {"_id": "63255", "title": "How to figure out real life examples of design patterns?", "text": "Hi, I am learning design patterns from a book. How do I find the actual production code where a pattern is implemented? For example, if I am learning the strategy pattern, it might be implemented in the Spring framework, but how do I go and search there and see it? Do you guys have any easy ways of finding such stuff?"} {"_id": "53569", "title": "What are QUICK interview questions for the Microsoft stack development jobs?", "text": "I'm looking for your best \"quick answer\" technical interview questions. We are a 100% Microsoft shop and do the majority of our development on the ASP.NET web stack in C# and have a custom SOA framework also written in C#. We use a combination of Web Forms, MVC, Web Services, WCF, Entity Framework, SQL Server, TSQL, jQuery, LINQ, and TFS in a Scrum environment. We are currently on .NET 3.5 with a very near transition to .NET 4.0. Our interviewing process includes a 55-minute interview with two technical people (usually an architect and a senior developer). The two interviewers have to share the time for questions. That isn't enough time for very many true programming problems, so I'm looking for more good questions that have quick, yet meaningful, answers. We are mainly interviewing for Senior Dev positions right now but may interview for some Juniors in the future. Please help? EDITED FOR CLARIFICATION: The questions should not necessarily be specific to the MS stack. I just don't want questions that are specific to OTHER technology stacks like Ruby or Java."} {"_id": "63250", "title": "How do you cope with the dynamic nature of high-level software development?", "text": "I consider myself a high-level software developer. I enjoy reading a lot, and it's helped me over the course of my career. I think I am doing well. Right now, I spend a lot of time learning new things. I don't suck when it comes to writing code right now, but I'm about to start a family, and I regularly see many seniors with 14-15 years of experience who\u2014because they cut back on learning new things\u2014now suck at programming. They were inspiring figures at some point in time, but they are not anymore. You might argue that basics never change, but it does appear to make a difference when you have been coding in Delphi for 10 years and suddenly everyone is using the .NET framework. It's true that an experienced developer will take less time when learning a new framework, but it still **does** demand time and effort. How does a software developer manage the demands of the job while still being able to concentrate on things that necessarily take you out of the job, like starting a family?"} {"_id": "10512", "title": "What are the major differences when moving from console-based to GUI-based programming?", "text": "I started, like many others, with console-based (as in terminal, not PlayStation) programming. But sooner or later, one needs to touch upon GUI-based programming, whether you want to or not.
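To make the coming contrast concrete, here is a minimal sketch, assuming Java with Swing (the shift is the same in any toolkit): a console program owns the control flow and blocks for input, while a GUI program registers callbacks and hands control to the toolkit's event loop.

    import java.util.Scanner;
    import javax.swing.JButton;
    import javax.swing.JFrame;

    public class ConsoleVsGui {
        // Console style: the program drives the flow and blocks until input arrives.
        static void consoleStyle() {
            Scanner in = new Scanner(System.in);
            System.out.print(\"Name? \");
            String name = in.nextLine();      // control stays here until the user types
            System.out.println(\"Hello, \" + name);
        }

        // GUI style: the program only reacts; the toolkit's event loop drives the flow.
        static void guiStyle() {
            JFrame frame = new JFrame(\"Hello\");
            JButton button = new JButton(\"Greet\");
            button.addActionListener(e -> System.out.println(\"Hello from a callback\"));
            frame.add(button);
            frame.pack();
            frame.setVisible(true);           // after this, the event loop is in charge
        }

        public static void main(String[] args) {
            guiStyle(); // or consoleStyle();
        }
    }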
This transition holds many changes in how you need to think about the frontend (and possibly also the backend). So, what are the major differences when moving from console-based programming to GUI-based programming?"} {"_id": "10511", "title": "What makes a successful development team?", "text": "What are the signs of a successful development team? What qualities does a team of developers need to possess in order to be successful?"} {"_id": "95183", "title": "If I finish one project, should I wait until the next day to start another one or begin right away?", "text": "There are some days where I feel like I get a lot done. I've finished deploying my most recent project and cleared everything from my TODO list, and still have some time left in my day. I DO have a couple of larger projects I could dive into, but it's so hard to when I feel like I've accomplished quite a bit in one morning and know I'll be leaving for the day in a few hours. It's even worse when it's a Friday (like today). In this sort of situation I tend to slack off a bit. Browse SO and answer questions to learn, work on some custom libraries for myself, surf the web for cool and interesting things to learn.... I try to keep it limited to work-related things but I find it really hard to stay focused. My question is, how do I get back on track? Or should I even bother? Is it acceptable to spend the rest of the day learning, or should I just sigh and start wading into another project even knowing that I won't have time to get much done and that I'll have forgotten most of it by Monday?"} {"_id": "54691", "title": "What makes a good Scrum Master?", "text": "How do I identify a good Scrum Master? Here are some possibilities: 1. The person is agile (doesn't just do agile). Indicators: Blog, volunteer activities 2. The person connects well with others at the level of emotions and needs (not just technical stuff). 3. The person is relentless and fearless when removing impediments"} {"_id": "142100", "title": "Project life cycle management - Maven vs 'manual' approach", "text": "I have a question concerning the life cycle management of a/multiple project(s), more specifically the advantages/disadvantages of using technologies such as Maven. Currently we work in a continuous-integration environment, but lots of things still need to be performed manually (dependency management, deploying, setting up documentation, generating stats, ...). My impression is that this approach often leads to errors, miscommunications, or things just being forgotten. I know and have used Maven in the past, but in smaller environments, and I was always really enthusiastic about it. But I was wondering if someone could share some insights, experiences, pros, cons, ... about the use of Maven (or a similar technology) in larger environments and for multiple projects. I would like to use the suggestions made here to start the debate about moving to the next level in project management!"} {"_id": "223818", "title": "The real difference between web and native apps in mobile and smart devices", "text": "I still don't understand the difference between a native app and web or hybrid apps from the interface point of view. Take apps such as Twitter, Facebook, etc. Are they native, web, or hybrid apps? In those apps, what was most of the code written in? Is it HTML and JS, or is it a language such as Objective-C or Java? I'm still confused."} {"_id": "142107", "title": "Syntax logic suggestions", "text": "This syntax will be used inside HTML attributes.
Here are a few examples of what I have so far: This will make input \"a\" do something if `b` is not checked and `c` is checked (b and c are assumed to be checkboxes if they don't have a `:value` defined). * * * This will make input \"a\" do something if `b` doesn't have the `foo` or `bar` values, and if `c` has the `foo` value. * * * Makes input \"a\" do something if `b` has a value assigned. * * * So, essentially `,` acts as logical AND, `:` as equals (=), `!` as NOT, and `|` as OR. The `|` (OR) is only needed between values (at least I think so), and AND is not needed between values for obvious reasons :) `EMPTY` means an empty value, like ``. Do you have any suggestions on improving this syntax, like making it more human-friendly? For example, I think the \"EMPTY\" keyword is not really appropriate and should be replaced with a character, but I don't know which one to choose."} {"_id": "142105", "title": "What's the best way to comment in a code review?", "text": "My team just started using Crucible/FishEye for initiating code reviews whenever one of us checks something in. There are only 3 of us, and we're each encouraged to review the code and leave comments where we see fit. My question is, how do I best leave a comment on a line of code I see a problem with? I want to get my point across without seeming abrasive. I don't want to seem like I'm on a high horse and say \" _I've been doing it this way..._ \", and I also don't want to seem like I'm trying to be authoritative and say something like \" _This should be done this way..._ \", but I still need to get the point across that what they're doing is not very good. **To Clarify:** This is a really good resource for _what_ I should be looking to comment on: code review: Is it subjective or objective (quantifiable)?, but I'm looking for _how_ to comment on it."} {"_id": "223767", "title": "Auto Transaction Failsafes, Third Party APIs, ColdFusion Schedule Files", "text": "I have an automated invoicing web app and I'm trying to build in some failsafes and a structure that will, under no circumstance, allow an invoice to be double-charged. All things working perfectly, the scheduled task runs and everything charges as it's supposed to. But there are a few scenarios we know of that can cause problems. I've had a DB server go down in the middle of a process, which led to a number of duplicate charges. Also, if for whatever reason the file needs to be checked in a browser and a developer hits refresh before the file is done processing, duplicate charges will occur. There are a few similar scenarios to protect against, so you can see the need for a way to handle this given the sensitivity of the transactions (imagine seeing your power company auto-charging you twice on your bank statement). Current configuration: * Query * API POST For Transaction if response = \"1\" - Insert Into Payments - Update Invoice Amount Due - Insert Into Transactions - Email Accepted Notification else - Email Declined Notification There are a few more smaller processes that happen between a few of these, as well as quite a few if statements, but those are the heavy-lifting parts. What I would like to learn and hear about is: what would be the best way to handle safeguarding this, considering the third-party API for the gateway being there? How would you break it up? One idea I had was flagging each process after it ran to give a record of where it was when it failed, and building out the query to handle it accordingly.
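A minimal sketch of that flagging idea, in Java/JDBC purely for illustration (the app above is ColdFusion) and with invented table and column names: a claim row is written under a unique key before the gateway call, so a rerun, a refresh, or a second scheduled task finds the flag and backs off instead of charging again.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class ChargeGuard {
        // Returns true if this run claimed the invoice; false if a previous run already did.
        static boolean claimInvoice(Connection db, long invoiceId) throws Exception {
            // A UNIQUE constraint on (invoice_id, billing_period) makes the claim atomic:
            // a concurrent or repeated run hits a duplicate-key error and backs off.
            String sql = \"INSERT INTO charge_attempts (invoice_id, billing_period, status) \" +
                         \"VALUES (?, CURRENT_DATE, 'CLAIMED')\";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setLong(1, invoiceId);
                ps.executeUpdate();
                return true;
            } catch (java.sql.SQLIntegrityConstraintViolationException duplicate) {
                return false; // already claimed, so no second charge is possible
            }
        }

        // Record how far the process got, so a failure leaves a trail to recover from.
        static void markStep(Connection db, long invoiceId, String step) throws Exception {
            String sql = \"UPDATE charge_attempts SET status = ? WHERE invoice_id = ?\";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, step); // e.g. 'GATEWAY_SENT', 'PAYMENT_SAVED', 'DONE'
                ps.setLong(2, invoiceId);
                ps.executeUpdate();
            }
        }
    }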
I'm not sure how efficient, or more likely how inefficient, it would be."} {"_id": "21406", "title": "Horizontal growth vs vertical growth", "text": "I have worked in a lot of different environments since I finished university. I worked as a software analyst in optics, numeric control, military, and networking environments and learned to use a lot of different tools. My main skill is software design (and C++ programming) but I learned Python, Qt, MFC (still used, I couldn't believe...), Bash and Perl. I also learned to configure VPNs, MySQL and tons of other tools and programming libraries. I also developed Linux real-time modules. Since I really like learning new stuff and not working on one project for too long (2 years tops), I became a consultant: the best way to ensure horizontal growth. I chose this way because of personal preferences but also because my greatest fear was to end up working on a sub-sub-sub-problem of a sub-application domain for 30-40 years of my life. I've met people like that. They have huge vertical skills in the application domain. For example, a colleague of mine has been programming automation of machine tools in C++ for 20 years. He knows every single little thing he needs to know for machine tools, but polymorphism is the most advanced C++ feature he knows, and C++ is his main language! My fear is basically that the job market could punish me in the long run. Of course my colleague could have the same problem: if his company goes bankrupt, I wish him luck finding another job in the same niche. There is a trade-off between vertical growth and horizontal growth. How do you cope with that? Do you think that vertical growth has more advantages in the long run? Please think in terms of the job market and in terms of personal growth. Is there a way to balance the two kinds of growth?"} {"_id": "21400", "title": "How large non-OO code bases are managed?", "text": "I always see abstraction as a very useful feature that OO provides for managing a code base. But how are large non-OO code bases managed? Or do those just become a \"Big Ball of Mud\" eventually? **Update:** It seems everyone is thinking 'abstraction' is just modularization or data-hiding. But IMHO, it also means the use of 'Abstract Classes' or 'Interfaces', which are a must for dependency injection and thus testing. How do non-OO code bases manage this? And also, other than abstraction, encapsulation also helps a lot in managing large code bases, as it defines and restricts the relation between data and functions. With C, it is very much possible to write pseudo-OO code. I don't know much about other non-OO languages. So, is it THE way to manage large C code bases?"} {"_id": "223812", "title": "Best way to keep large amounts of data (no relational database needed)", "text": "I would like to know what would be the best way to keep data on a server related to the following points: * **Chat** logs * **Heavy text content** * **User references** like lists of ids (1,4,14,524,23220,...) I'm using **PHP** and a **MySQL** database, but obviously I know the topics above don't fit well into MySQL at large scale or for easy maneuvering. So I would like to know how ( **no, I'm not asking for your work, just your 3-line orientation** ) exactly I should keep the data :) * * * OK, basically I keep my users in a SQL table and another one only for their friendships, in which I have a field containing references related, _allegedly_, to chat logs.
Now the thing is, I don't actually understand whether I should have that content in a file (keep the file path in that SQL field, then retrieve the file when needed, parse it, and display it to the user) or keep it in a database such as MongoDB, Raven, or Couch, because I've never used a NoSQL database and wanted to hear from experienced people on it. The same goes for the **heavy text content** and user references. For example, in my users table I have a field containing each user's friends in the following manner: 1,4,5,6,14,51,... and since I've been told this is bad practice and **certainly** shouldn't be used whilst dealing with large amounts of data that would need to be changed constantly, I came here in a hopeful act of seeking guidance and enlightenment."} {"_id": "48117", "title": "How do software projects go over budget and under-deliver?", "text": "I've come across this story quite a few times here in the UK: NHS Computer System Summary: We're spunking \u00a312 Billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. * If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to \u00a312B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is it outsourcing that's the problem? Is it not getting the software designers to understand the medical business that caused it? **What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?** **EDIT** *This bit seemed to get a lot of attention. What I mean is that I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?"} {"_id": "211693", "title": "DDD/SOA Using .NET Message pattern(s) / Request Response with File Saving", "text": "I've done some research on this but I can't find more specific examples to help me with this. I'm new to SOA/patterns in general, so please take it easy... :) Can you show an example of using the FileUpload control in .NET implementing SOA messaging patterns?"} {"_id": "211696", "title": "More efficient way to paginate search results", "text": "I'm making a website in PHP where the user can search a big MySQL database. The user is shown the first result.
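For concreteness, the \"trivial solution\" described next would look roughly like this; a sketch in Java/JDBC rather than the site's PHP, with an invented table and query:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ResultPager {
        // Every page re-runs the whole search and skips to the n-th row with LIMIT/OFFSET.
        static String fetchNthResult(Connection db, String term, int n) throws Exception {
            String sql = \"SELECT title FROM documents WHERE body LIKE ? \" +
                         \"ORDER BY id LIMIT 1 OFFSET ?\";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, \"%\" + term + \"%\");
                ps.setInt(2, n); // the database still has to scan past the first n rows
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(\"title\") : null;
                }
            }
        }
    }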
I want the _next_ button to take the user to the next result, and so on. The trivial solution is for each result page to execute the user's search query _again_ and use OFFSET and LIMIT to get the _n_ -th result that is displayed. But this feels like a Schlemiel the Painter's algorithm: re-executing the same query over and over to get to the _n_ -th result is inefficient. Since others must have faced this situation before: how is this typically solved?"} {"_id": "208298", "title": "Collecting user information for debugging and support", "text": "I have a Java application that I need to support. I'd like it to collect user information, such as system information (OS and hardware), for easier diagnostics and support. * Are there any security & privacy regulations I need to follow? * Do I specifically need consent from the user to do this?"} {"_id": "67579", "title": "How can I teach my co-workers the SOLID principles?", "text": "I've got a group of very talented, yet new, developers on my team. I've fully embraced the SOLID principles in the projects I'm working on, and my fellow developers have seen the wisdom of the ways - but I am not an experienced teacher (nor am I an experienced student - everything I've learned I've learned on my own or from blog posts). I've been given leeway to hold teaching sessions with these guys, but I need to come up with some sort of lesson plan or else I'll probably just confuse and frustrate them and myself. There are plenty of resources on the web that describe or introduce the SOLID principles, but I haven't seen many that go in depth with them, and I've seen none at all that discuss how to _teach_ them. Am I missing some resource that's already out there for this? We're .NET developers, a mix of C# and VB.NET, so while I'll take examples in any language, .NET examples will be the best help as I won't have to translate them before showing them to my team. I should mention that we don't have a budget for books. Feel free to mention them, but bear in mind that if they're not cheap, they'll need to be _really_ compelling for me to consider them. I won't be reimbursed in any case."} {"_id": "187715", "title": "Validation of the input parameter in caller: code duplication?", "text": "Where is the best place to validate input parameters of a function: in the caller or in the function itself? As I would like to improve my coding style, I try to find the best practices or some rules for this issue: when and what is better. In my previous projects, we used to check and treat every input parameter inside the function (for example, whether it is not null). Now, I have read here in some answers and also in the Pragmatic Programmer book that validation of an input parameter is the responsibility of the caller. So it means that I should validate the input parameters before calling the function, everywhere the function is called. And that raises one question: doesn't it create a duplication of the checking condition everywhere the function is called? I am not interested just in null conditions, but in the validation of any input variables (a negative value passed to the `sqrt` function, divide by zero, a wrong combination of state and ZIP code, or anything else). Are there some rules on how to decide where to check the input condition?
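To fix ideas before the arguments below, a tiny Java sketch of the two placements being weighed; the names are invented:

    public class Pricing {
        // Option 1: the callee validates; every caller gets the check for free.
        static double unitPrice(double total, int count) {
            if (count <= 0) throw new IllegalArgumentException(\"count must be positive\");
            return total / count;
        }

        // Option 2: the caller validates; the callee assumes a valid input,
        // and each call site decides how to treat the bad case.
        static void printReport(double total, int count) {
            if (count <= 0) {
                System.out.println(\"no items yet\"); // caller-specific handling
                return;
            }
            System.out.println(\"unit price: \" + unitPrice(total, count));
        }

        public static void main(String[] args) {
            printReport(100.0, 4);
            printReport(100.0, 0);
        }
    }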
I am thinking about some arguments: * when the treatment of an invalid variable can vary, it is good to validate it on the caller side (e.g. the `sqrt()` function - in some cases I may want to work with complex numbers, so I treat the condition in the caller) * when the check condition is the same in every caller, it is better to check it inside the function, to avoid duplication * validation of an input parameter in the caller takes place only once, before calling many functions with this parameter; therefore, validating the parameter in each function is not efficient * the right solution depends on the particular case I hope this question is not a duplicate of any other; I searched for this issue and found similar questions, but they don't mention exactly this case."} {"_id": "251853", "title": "Why would the switch from C# to Scala make sense in order to take advantage of Scala's functional capabilities?", "text": "What is the benefit of using functional programming for large-scale software projects? I have heard it is pretty much performance-equivalent to regular OOP. I have also heard that it is more \"mathematically elegant.\" Either way, what are the benefits of building large-scale applications functionally? In particular, why would the switch from C# to Scala make sense in order to take advantage of Scala's functional capabilities?"} {"_id": "187710", "title": "Is it possible to get dynamically generated HTML in ASP.NET tags using HTTP Modules?", "text": "I want to know if it is possible to write the dynamically generated HTML in ASP.NET tags in an .aspx page to a log/text file using HTTP modules. By dynamically generated HTML, I mean the HTML content that ASP.NET generates in response to statements between the tags <% %> or <%# %> or <%$ %>. So when ASP.NET renders a page, it essentially converts everything into HTML. There is some static HTML written in the ASP.NET page; the rest is what I call dynamically generated HTML. My goal is to dump all the dynamically generated HTML content somewhere. Can it be done using HTTP modules or any other mechanism? Example: If the .aspx page is like this ... `Total Credit Line ` `<%=creditLimit.ToString(\"C\")%>` ... I want the HTTP module to write to a log file the HTML rendered by the <%=creditLimit.ToString(\"C\")%> ASP tag and all other ASP tags on the page. The HTTP module would be generic and could be added to any IIS website. Note that there would be no difference in the output that is seen by a browser."} {"_id": "251857", "title": "How to tell node.js which javascript code runs on server vs client?", "text": "I am trying to learn the theory of node.js, but I can't seem to figure out how node.js knows whether to process JavaScript on the server or send it to the browser for execution."} {"_id": "187713", "title": "Algorithm for deciding change in gesture", "text": "I am developing an application where I am taking dynamic gestures as input and then mapping them to keyboard controls. By dynamic gesture, I mean, for example, a hand moving from left to right or a hand moving from right to left. I have four gestures: hand moving bottom to top, top to bottom, left to right, and right to left. I am able to recognize gestures, but the problem is how to decide the start and end of a gesture on a continuous video input. What is the most efficient and effective algorithm for detecting this on a video input?"} {"_id": "96987", "title": "Who would be responsible for damage caused after cloning a foreign application but NOT running it, as a freelancer?", "text": "This is just a theoretical question, without context.
In case: * somebody hires me (as a freelancer (I'm A1)) * the client (A2) wants to develop a **CLONE** of an existing application from company B1 * after some time I give the completed application to my client (A2) * after some time the original owners of the application I cloned (B1) want to go to court Who would be responsible for the damage caused to company B1? Me as the freelancer (A1), or my client (A2), who runs that site and takes money from it?"} {"_id": "68631", "title": "What are the most cost-effective ways of increasing your marketability to employers", "text": "There's a difference between becoming a better developer and showing you're a better developer to potential employers or to your current employer (i.e. raises and such). I'm wondering what the most cost-effective ways of doing that are. By cost-effective, I don't just mean how much money said developer is going to have to put out. Most commonly the costs are money and time. The latter is often more significant. For instance: Having your own blog can really develop you as a programmer, but it _may_ not be seen as a big deal by your current or future employers, and is potentially a lot of work, whereas some certs can go a long way with some employers and are much less work than building up a lot of quality content in a blog or developing a personal/OS project - even if the latter may be more fun. Assuming you're in the .NET space, what are the more cost-effective ways that you can increase your marketability to your current or future employers? In an ideal world, we/I would do them all, but for the purpose of prioritizing, one should/could tackle the higher marketability/cost ones first before moving on to the others."} {"_id": "160049", "title": "JavaScript vs third party libraries", "text": "I program in Java, and it doesn't make sense to me to think about learning a Java library or a framework without knowing the actual language the thing is built with. The same goes for C. I always avoided JavaScript simply because I wasn't interested in the client side of things, but that has changed now. I'm confused as to how and why people avoid learning JavaScript and instead jump right in with a library like jQuery. How can I program without knowing the features of JS: what a prototype-based language is, functions as first-class citizens, OOP, closures, etc.? Also, are most of the things today in the client-side world built with the help of third-party libraries?"} {"_id": "160048", "title": "Teaching kids to program - how to teach syntax?", "text": "I've been spending this week teaching kids (11-18) to program. Teaching them the core concepts and the logic has been going fine, but I've noticed one snagging point for them all: syntax. I feel like teaching them the syntax of the language comes a massive second to the core concepts and logic. However, while struggling with the syntax, they aren't learning as effectively. More often than not, their logic is good enough. The problems they have are related to syntax. Does anybody know of an effective way of teaching syntax? I've been thinking about creating a cheat sheet showing the syntax for different statements (assignment, if, while, for, etc.), but feel this might be a bit of an information overload. Has anybody any experience with using this method? Hopefully this question isn't considered off-topic!"} {"_id": "91340", "title": "Java out of web and without GUI?", "text": "Is Java used outside the web and without a GUI?
To develop web applications you have to learn design stuff: HTML + CSS + JavaScript + AJAX => it really takes a lot of time to learn. Developing a GUI application also requires design skills. Is it possible to write something in pure Java, and what exactly?"} {"_id": "149483", "title": "Which data structure to represent a triangular undirected graph", "text": "I was wanting to create a graph similar to this in C++: ![enter image description here](http://i.stack.imgur.com/GtckL.png) The one I will implement will have more edges and vertices, but will use this triangle style. I was wanting to create something that I could perform various search algorithms on in order to find the shortest paths to selected vertices (or the longest without loops, etc.). It is an unweighted graph to begin with; however, I was planning to give the edges weights in the future. I was having some trouble with how I would represent this graph, for a couple of reasons: since I want to have more vertices (maybe 105 vertices), any kind of list would be tedious, as I would have to write out which vertices are connected to which in every object (or is there a way to automate the assignment of members of a vertex object signifying the connectedness?). Also, I was finding it strange trying to implement it in a 2D array, because the array is square in nature and I am dealing with triangles; thinking about the weighting of edges and assigning a value between two vertices made it even less intuitive with an array."} {"_id": "157739", "title": "2 Dimensional Arrays in C++", "text": "I started learning arrays in C++ and came across a little side note in the book talking briefly about 2D arrays. I tested it out and I was amazed that it could give the programmer the ability to store data or information and lay it out in front of him in a spreadsheet format. In a normal array, to access elements I would simply do this: int matrix[2] = { 1, 15 }; But in 2D arrays, the only way it tells me to actually access elements is by using a loop: int fly[2][2]; int i = 0; int n = 0; for (i = 0; i < 2; i++) { for (n = 0; n < 2; n++) { fly[i][n] = 0; } } for (i = 0; i < 2; i++) { for (n = 0; n < 2; n++) { cout << fly[i][n] << endl; } } I have tried accessing elements the old way: int fly[2][2] = { 0 }; but I noticed that this sets all the elements to 0. So... 1. Can anyone explain why, when I try accessing this 2D array like this, all the elements change? 2. Is there another way to access 2D array elements without using a loop? Thank you all."} {"_id": "149481", "title": "Reorganizing code based on dependencies", "text": "I'm wondering if there is a tool that can generate a dependency graph between C language object files and then analyze how to turn that graph into a DAG by modifying code that creates cycles, moving it to other source files. I've drawn a dependency graph by hand, and I could probably make one by using makedepend or something similar, but the moving of functions also has to be done by hand. Sorry if that's vague, but that's about the best I can describe it right now."} {"_id": "158908", "title": "Why null pointer instead of class cast?", "text": "In Java: int count = (Integer) null; throws a java.lang.NullPointerException. Why doesn't this throw a ClassCastException, for ease of programmer understanding?
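For context, my understanding of what the compiler does with that line: the cast itself succeeds, because null is assignable to any reference type, and the NullPointerException comes from the auto-unboxing step. Roughly:

    public class UnboxDemo {
        public static void main(String[] args) {
            // The line in question: int count = (Integer) null;
            // is compiled roughly as the following two steps:
            Integer boxed = (Integer) null; // the cast succeeds: null fits any reference type
            int count = boxed.intValue();   // auto-unboxing calls a method on null -> NPE
            System.out.println(count);      // never reached
        }
    }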
Why was this exception chosen over any other exception?"} {"_id": "103472", "title": "In a JavaScript-only web application, what state information should the URL contain?", "text": "I'm writing a single-page web application that talks to a business layer via asynchronous RPC (encoded with JSON). I'm targeting fairly modern browsers, so I can at minimum control the URL after the hash. However, the URL won't really be what drives the application's state (that will be maintained by a JavaScript model layer), so I don't _have_ to do anything with it. I could even keep it the same the whole time without any implementation problem. The question is, what _should_ go in the URL? I won't go into the application itself because I'm hoping the question is generally useful. What kinds of expectations do users have about backward and forward buttons? About typing in a URL? Are there currently any generally applicable best practices for this kind of application?"} {"_id": "209955", "title": "Does macro support make Scala a Lisp dialect?", "text": "I've read recently that macro support in Scala is now official. I checked the documentation page, and they are reminiscent of the Lisp ones. In one of his essays, Paul Graham writes that when \"you add this final increment of power\" to a language, it is no longer a new one but a Lisp dialect. Is this the case with Scala? It does not seem like a Lisp dialect to me, and the macro support in Scala is somewhat awkward for me."} {"_id": "209954", "title": "Force user to extend class or use configuration", "text": "What is the better practice: force the user to extend an abstract class, or make a class with configuration? E.g., pseudocode: ClassA { this.name this.weight this.height this.width constr(config) { this.name = config.name this.weight = config.weight this.height = config.height this.width = config.width } } or abstract ClassA { this.name this.weight this.height this.width } ClassB extends ClassA { this.name = 'Item' this.weight = 12 this.height = 11 this.width = 14 } Until now I was using configuration because most libraries use this method, but what's wrong with the second method? I recently found that the extending method produces clearer code, but maybe I am missing something?"} {"_id": "224812", "title": "Alternatives to Perl/python scripts for find & replace", "text": "I'm working in a fairly old yet sufficiently unproductive code base, and I need to create a script (or some scripts) to help me out. For example: * we add a version # and timestamp at the header of the file (yes, we use a CVS-based system, but this is beyond my control). * we have duplicated layout code for different languages (this is from the pre-Unicode era, so we just duplicate things), and when a control's attribute changes in one language, that change needs to be cascaded to the other ones. So, my first thoughts were a couple of Perl or Python scripts to do find & replace to solve those two issues. But I wanted to reach out and see if anyone else had a different approach."} {"_id": "168548", "title": "Making a design for a Problem", "text": "I have written a lot of code using OOP, and I have yet to understand when code is good enough to be accepted by experts. The thought process of every person is different, and so is the design. My question is: should I do something in particular to design my programs in such a way that they are good enough to be accepted by people?
Another thing: I have also read Head First Object Oriented Design, but in the end I feel that the way they design the problems is much different from how I would have designed them."} {"_id": "24578", "title": "MVC for our application?", "text": "There are some issues about how to manage our program designs and programming styles. I was assigned to find a solution for writing reusable code - though the programming team does not follow the rules. I would rather use MVC to achieve a well-structured programming style. I found out that a blueprint for future work requires a bunch of experts. The thing is that I have to do it all myself. And the worst part is that I have to use a general MVC platform. I need your help and suggestions on: 1. Is there a way that I can write a document for MVC - to use it in our design in Java? 2. How can I represent it? 3. How much work does it need? 4. How can I connect the Model, the View, and the Controller parts together?"} {"_id": "151778", "title": "Is it common to declare class properties at the bottom of the class declaration?", "text": "I work in a PHP development shop, and several new developers have joined the team. One of the new members insists on declaring class properties at the bottom of the class declaration rather than at the top, as one would normally expect. In all my 5 years of working in web development, I have never seen this done. Is it a common coding style?"} {"_id": "168540", "title": "What are the advantages of using the Java debugger over println?", "text": "I always use `System.out.println(...)` to debug my code, and it works pretty well. In which cases do you use the Eclipse Java debugger? I never had to use it, and the little bug symbol is still a bit mysterious to me. Are there cases where the debugger helps me with something I can't solve with a println?"} {"_id": "24574", "title": "Some advice concerning Oracle and Java", "text": "I saw the news yesterday that Java 7 and Java 8 passed, but the majority weren't very happy. Can anyone clarify what consequences this has for Java and the community? What licenses did Oracle change, to be exact? A lot of people think that this might be the end of Java. Should I go ahead with learning Java, or should I maybe move to .NET? I don't have any preference for one or the other, but I'm a bit concerned now with all this pressure surrounding the Java ecosystem. Since time is important for everyone, and time is what I have less of these days, I'd like some advice from the community here."} {"_id": "54038", "title": "Professional Developers, may I join you?", "text": "I currently work in technical support for a software/hardware company, and for the most part it's a good job, but it's feeling more and more like I'm getting 'stuck' here. No raises in the 5 years I've been here, and lately there seems to be more hiring from the outside than promotion from within. The work I do is more technical than end-user support, as we deal primarily with our field technicians, who have a little more technical skill than the general user base. As a result I get into much more technical support issues... often tracking down bugs in our software, finding performance bottlenecks in our database schema, etc. The work I'm most proud of are the development projects I've come up with on my own and worked on during lunch breaks and slow periods in Support. Over the years I've written a number of useful utilities for the company. Diagnostic-type applications that several departments use and appreciate.
These include apps that simulate our various hardware devices, log file analysis, time-saving utilities for our work processes, etc. My best projects have been the hardware simulation programs, which are the type of thing we probably wouldn't have put a full-time developer on had anyone thought to do it, but they've ended up being popular and useful enough to be used by development, QA, R&D, and Support. They allow us to interface our software with simulated hardware, rather than clutter up our work areas with bulky, hard-to-acquire equipment. Since starting here my life has moved forward (married, kid, one more on the way), but it feels like my career has not. I still earn what I earned walking in the door my first day. The company budget is tight, bonuses have gone down, and there are no raises or cost-of-living / inflation adjustments either. As the sole source of income for my family I feel I need to do more, and I'd like to have a more active role in _creating something_ at work, not just cleaning up other people's mistakes. I enjoy technical work, and I think development is the next logical step in my career. I'd like to bring some \"legitimacy\" to my part-time development work, and make myself a more skilled and valuable employee. Ultimately, if this can help me better support my family, that would be ideal. Can I make the jump to professional developer? I have an engineering degree, but no formal education in computer science. I write WinForms apps using the .NET framework, do some freelance web development, have volunteered to write software for a nonprofit, and have started experimenting with programming microcontrollers. I enjoy learning new things in the limited free time I have available. I think I have the aptitude to take on a development role, even in an 'apprentice' capacity if such an option is possible. Have any of you moved into development like this? Do any of you developers have any advice or cautionary tales? Are there better career options I haven't thought of? I welcome any and all related comments and thank you in advance for posting them."} {"_id": "79726", "title": "GPL Confusion! Can I sell a product with GPL covered components without making the source available?", "text": "I'm really confused.. I'm looking into making a commercial program and there are a few open source, GPL-covered components I'd like to use.. Am I allowed to sell my product with the components included without distributing my source? For example.. say I was making a commercial text editor, and my friend has a really awesome GPL free-software text editor.. But I wanted to make a text editor like his but with special features.. Am I allowed to fork his text editor, add all my special features, and sell it on without having the source code to my special features available to my users?"} {"_id": "175761", "title": "Does it matter the direction of a Huffman's tree child node?", "text": "So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since Why create a Huffman tree per character instead of a Node?) for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm here that seemed to make my life way easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding: > * Create a leaf node for each symbol and add it to the priority queue.
> * While there is more than one node in the queue: > * Remove the two nodes of highest priority (lowest probability) from the queue > * Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. > * Add the new node to the queue. > * The remaining node is the root node and the tree is complete. It looks simple and great. However, it left me wondering: when I \"merge\" two nodes (make them children of a new internal node), **does it even matter what direction (left or right) each node will be on afterwards?** I still don't fully understand Huffman coding, and I'm not very sure if there is a criterion used to tell whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right. But other images like this one http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif have them all to the left. **However, they're never _mixed up_ in the same image (some to the right and others to the left)**. So, does it matter? Why?"} {"_id": "237312", "title": "Where do utility libraries fit in a layered architecture?", "text": "Consider this mock-up of a software stack designed with layered architecture in mind: ![Basic example of a layered-structured software stack.](http://i.stack.imgur.com/RQCSF.png) Every application layer is decoupled through API calls, but a memory-handling library is used throughout. All layers need to copy, allocate, or somehow affect memory. Is this a bad use of the layered architecture pattern, or is it one of its pitfalls? On one hand, trying to force the pattern upon the library calls, perhaps by adding memory-handling wrappers to every application layer, would create a lot of boilerplate code. On the other hand, letting layers, through library calls, create arbitrary entry points into different layers seems to contradict the very rationale behind layered architecture."} {"_id": "115074", "title": "Would it be useful for a programmer to get qualified in Prince2", "text": "Even if the programmer wasn't going to perform the function of project manager, would there be any benefit, on a small team, of one of the devs being Prince2-qualified?"} {"_id": "179469", "title": "Amnesic Environment", "text": "My team is looking for a technical term which may or may not exist. We are trying to describe an environment, such as a database, which has been built up over time with little or no documentation about the change process that has gone into it. Generally these are legacy systems whose original developers have long since moved on, and they are in such a tangled and unmanageable state that the only way to recreate the environment, say for testing purposes, is to copy it and do your best guess at re-configuring it for its new purpose.
So far the best term we have come up with is `Amnesic`, as in \"setting up the new test environment is going to be a challenge because it is an amnesic db.\" However, we are still not quite happy with the term and were wondering if a better and/or more accepted term for this situation exists."} {"_id": "179468", "title": "Forking a repo on GitHub but allowing new issues on the fork", "text": "I have previously forked other people's repos on GitHub, and I have noticed that issues stay with the original repo, and that I can't file issues on the forked repo. I now have the following task. I am working for a small business where development was being done by one of the principals on his personal account. He has amicably left the project, and we would like to migrate that project away from his personal account to a new \"role\" account on GitHub. I would naturally fork the repo in order to preserve the code history, but then I'll end up with a repo where we can't file new issues, which is quite undesirable. How can I make a copy of this original repo into our new account, ideally still preserving code history, but be able to file new issues within this new account?"} {"_id": "34320", "title": "How do I convince my boss that it's OK to use an application to access an outside website?", "text": "That is, if you agree that it's OK. We have a need to maintain an accurate internal record of bank routing numbers, and my boss wants me to set up a process where once a week someone goes to the Federal Reserve's website, clicks on the link to get the list of routing numbers (or the link giving the updates since a particular date), and then manually uploads the resultant text file to an application that will make the update to our data. I told him that a manual process was not at all necessary, and that I could write a routine that would access the FED's routing numbers in the application that keeps our data updated, and put it on whatever schedule was appropriate. But he is greatly opposed to doing this, and calls it \"hacking the Federal Reserve website.\" I think he's afraid that the FED is going to come after us. I showed him the FED's robots.txt file, and the only thing it forbids is automated indexing of pages with extension .cf*: User-agent: * # applies to all robots Disallow: CF # disallow indexing of all CF* directories and pages This says nothing about automatically accessing the same data that you could access manually. Does anyone have a good counterargument to the idea that we'd be \"hacking\" the FED?"} {"_id": "178733", "title": "Is it a bad practice to include all the enums in one file and use it in multiple classes?", "text": "I'm an aspiring game developer. I work on occasional indie games, and for a while I've been doing something which seemed like a bad practice at first, but I really want to get an answer from some experienced programmers here. Let's say I have a file called `enumList.h` where I declare all the enums I want to use in my game: /* enumList.h */ enum materials_t { WOOD, STONE, ETC }; enum entity_t { PLAYER, MONSTER }; enum map_t { MAP_2D, MAP_3D }; /* and so on; spelled MAP_2D/MAP_3D here because an identifier cannot start with a digit */ /* Tile.h */ #include \"enumList.h\" #include <string> /* the original header name was lost; string is a stand-in */ class tile { /* stuff */ }; The main idea is that I declare all the enums in the game in **1 file**, and then import that file when I need to use a certain enum from it, rather than declaring it in the file where I need to use it. I do this because it makes things clean; I can access every enum in **1 place** rather than having pages opened solely for accessing one enum.
Is this a bad practice, and can it affect performance in any way?"} {"_id": "178734", "title": "add a prefix to localhost", "text": "Is there any possibility to add a prefix before localhost? What I mean is that I want to add a prefix before localhost for my project URL (i.e. \"dev.localhost/project/default.htm\"). This is for an ASP.NET application running in IIS."} {"_id": "228903", "title": "HTML markup vs programmatic JS", "text": "I've been thinking about the consequences of using programmatic JavaScript components versus HTML markup. For example, I looked into the Enyo framework, which has its power in composition. One can build components composed of simpler components, and in the end they may be built from tags, but one doesn't write HTML markup with it. I also thought that it would be possible to develop a Swing-like UI library which renders itself on an HTML5 canvas. There would also be no need for any markup. But is there a need for this? Isn't HTML5 capable enough for these goals? I did some brainstorming on a web app similar to draw.io, where there would be many items which need to be resizable and draggable. They would need to be connected, transformed, rotated, etc., and all of that needs to be obvious from varying the borders of items and other UI clues. Is it possible to achieve this with HTML5, or, since I have to code a lot anyway, would there be any harm in using only JavaScript components without HTML markup? What are the trade-offs, and what yields more pain?"} {"_id": "145574", "title": "Recommendations for teaching junior programmers good coding style", "text": "I am a big fan of good coding style, producing clean, clear code that runs well and is easy to use and integrate into larger systems. I believe that we programmers are essentially craftspeople who should take pride in our work, every line. I am not fond of code that is inconsistently formatted, cluttered with commented-out experimental code, and rife with unhelpful or misleading function and variable names. But I sometimes find it hard to argue with code like this that basically works. If you think that coding style matters, I am looking for recommendations for ways of teaching good, professional coding style to the junior programmers under me. I want them to take pride in their work, but my concern is that they appear to become satisfied when their code just barely works, and seem to have no interest in producing what professionals like me would consider professional code. On the other hand, if you think coding style is not particularly valuable, I welcome your comments and am open to reconsidering my standards and tolerances. I know the usual arguments in favor of good code: comprehensibility, ease of maintenance, etc., but I would also like to hear some rebuttals to arguments like \"it works, what more do you want?\" In our industry we can use the compiler to hide a multitude of sins. Clean code and messy code can both lead to identically functioning programs, so does cleanliness matter, and if so, why? [While there are several existing questions on coding style, I didn't find any that related to teaching methods and ways to instill good coding style in junior programmers.]"} {"_id": "35052", "title": "Depending on another open source library: copy/paste code or include", "text": "I'm working on a large class and started implementing new features that need graphics.
I started writing the graphics functions myself, but I know that open source libraries exist that can provide me with this functionality without me having to write it myself. The problem is that I prefer the class to be self-sufficient and not dependent on any other library. If I don't write it myself, I would have to ask the user to make sure a graphics library is already installed (less user-friendly). If I write it myself, I do a lot more work than I have to. I could also copy/paste some of the relevant code into my own class, but I'm not sure about the disadvantages of doing this (it's an open source library that matches my license, so I'm not concerned with legality, just programming-wise if there are disadvantages). So what should I do: * copy/paste code from the external library * write the code myself so it's truly self-sufficient * ask the user to download and install another library"} {"_id": "165138", "title": "Is it dangerous for me to give some of my Model classes Control-like methods?", "text": "In my personal project I have tried to stick to MVC, but I've also been made aware that sticking to MVC too tightly can be a bad thing as it makes writing awkward and forces the flow of the program in odd ways (i.e. some simple functions can be performed by something that normally wouldn't, and avoid MVC-related overheads). So I'm beginning to feel justified in this compromise: I have some 'manager programs' that 'own' data and have some way to manipulate it, as such I think they'd count as _both_ part of the model, and part of the control, and to me this feels more natural than keeping them separate. For instance: One of my Managers is the PlayerCharacterManager that has these methods: void buySkill(PlayerCharacter playerCharacter, Skill skill); void changeName(); void changeRole(); void restatCharacter(); void addCharacterToGame(); void createNewCharacter(); PlayerCharacter getPlayerCharacter(); List<PlayerCharacter> getPlayersCharacter(Player player); List<PlayerCharacter> getAllCharacters(); _I hope the method names are transparent enough that they don't all need explaining._ I've called it a manager because it will help manage all of the PlayerCharacter 'model' objects the code creates, and create and keep a map of these. I may also get it to store other information in the future. I plan to have another two similar classes for this sort of control, but I will orchestrate when and how this happens, and what to do with the returned data via a pure controller class. This splitting up of control between informed managers and the controller, as opposed to operating just through a controller, seems like it will simplify my code and make it flow more. My question is, is this a dangerous choice, in terms of making the code harder to follow/test/fix? Is this something established as good or bad or neutral? I couldn't find anything similar except the idea of Actors, but that's not quite what I'm trying to do. **Edit:** Perhaps an example is needed; I'm using the Controller to update the view and access the data, so when I click the 'Add new character to a player' button it'll call methods in the controller that then go and tell the PlayerCharacterManager class to create a new character instance, it'll call the PlayerManager class to add that new character to the player-character map, and then it'll add this information to the database, and tell the view to update any GUIs affected.
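A minimal sketch of that sequence in Java (only PlayerCharacterManager and its methods come from above; the other class and method names are hypothetical):

    // The pure controller stays thin: it only orchestrates managers, storage and view.
    public class GameController {
        private final PlayerCharacterManager characterManager;
        private final PlayerManager playerManager;
        private final CharacterDao characterDao;   // hypothetical persistence helper
        private final GameView view;               // hypothetical view interface

        public GameController(PlayerCharacterManager cm, PlayerManager pm,
                              CharacterDao dao, GameView view) {
            this.characterManager = cm;
            this.playerManager = pm;
            this.characterDao = dao;
            this.view = view;
        }

        public void onAddCharacterClicked(Player player) {
            characterManager.createNewCharacter();                 // model-side creation
            PlayerCharacter c = characterManager.getPlayerCharacter();
            playerManager.addCharacterToPlayer(player, c);         // update the player-character map
            characterDao.save(c);                                  // write to the database
            view.refreshCharactersFor(player);                     // update any affected GUIs
        }
    }
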
That is the sort of 'control sequence' I'm hoping to create with these manager classes."} {"_id": "148999", "title": "What algorithm is used on the smart cards for the DNSSEC Trusted Community Representatives?", "text": "I've been doing some reading about DNSSec and am interested in the algorithm that they chose to use when splitting the trusted key up between the 7 Trusted Community Representatives (TCR). I unfortunately can't find any information about the algorithm itself anywhere. Is it just an implementation of the Shamir Secret Sharing Algorithm, or is it using a different approach? A little bit of information can be found here."} {"_id": "165136", "title": "Why setter method when getter method enough in PHP OOP", "text": "I am practicing OOP with PHP, and I am stuck on setter and getter methods. I can directly access the class properties and methods with a getter method, so what's the use of a setter method? See my example: <?php class MyClass { public $classVar; public function Getter() { return $this->classVar; } } $obj = new MyClass; echo $obj -> Getter(); ?>"} {"_id": "197257", "title": "Is it a good idea to format the code in eclipse using auto format", "text": "I use Eclipse for coding, and the language which we use is Java. Someone once suggested that to properly format the code, I should use the auto formatter (CTRL+SHIFT+F). While this command does format the code, sometimes I feel that the overall look becomes weird, and it is not actually very readable. So is this a recommended thing to do? If not, what is the better way of formatting our code in Eclipse?"} {"_id": "33020", "title": "When is optimization not premature and therefore not evil?", "text": "\"Premature optimization is the root of all evil\" is something almost all of us have heard/read. What I am curious about is what kind of optimization is not premature, i.e. at every stage of software development (high level design, detailed design, high level implementation, detailed implementation etc) what is the extent of optimization we can consider without it crossing over to the dark side."} {"_id": "99445", "title": "Is micro-optimisation important when coding?", "text": "I recently asked a question on Stack Overflow to find out why isset() was faster than strlen() in PHP. This raised questions around the importance of readable code and whether performance improvements of micro-seconds in code were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not good programmers. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language code? (of PHP in the above case). The environmental factors could be important - the Internet consumes 10% of the world's energy. I wonder how wasteful a few micro-seconds of code is when replicated trillions of times on millions of websites? I'd like to know answers preferably based on facts about programming. **Is micro-optimisation important when coding?** My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt.
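(For reference, the isset()/strlen() trick that prompted the question looks like this; isset() wins because it is a language construct rather than a function call:)

    <?php
    $s = 'abcdef';
    // Both checks ask: is $s at least 6 characters long?
    if (strlen($s) > 5) { /* slower: a function call on every check */ }
    if (isset($s[5]))   { /* faster: a language construct, no call overhead */ }
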
A basic understanding can help us not to make obvious bad choices when coding, such as: if (expensiveFunction() || counter < X) which should be: if (counter < X || expensiveFunction()) (example from @zidarsk8). This could be an inexpensive function and therefore changing the code would be a micro-optimisation. But, with a basic understanding, you would not have to, because you would write it correctly in the first place."} {"_id": "79938", "title": "What optimizations are premature?", "text": "I've been here for nearly a month and it seems that people have a tendency to be eager to use the \"Premature Optimization is the root of all evil\" argument as soon as someone mentions efficiency. What is really a premature optimization? What is the difference between what is essentially writing a well-designed system, or using certain methods, and premature optimizations? Certain aspects that I feel should be interesting topics within this question: * In what way does the optimization influence code complexity? * How does the optimization influence development time/cost? * Does the optimization emphasize further understanding of the platform (if applicable)? * Is the optimization abstractable? * How does the optimization influence design? * Are \"general solutions\" the better choice instead of specific solutions for a problem because the specific solution is an optimization? EDIT / update: Found these two links that are very interesting regarding what a premature optimization really is: http://smallwig.blogspot.com/2008/04/smallwig-theory-of-optimization.html http://www.acm.org/ubiquity/views/v7i24_fallacy.html"} {"_id": "45259", "title": "Is premature optimization always bad?", "text": "I work in a small-sized software/web development company. I have gotten into the habit of optimizing prematurely. I know it is evil and promotes bad code, but I have been working at this firm for a long while and have deemed it a necessary evil. It has never caused me an issue so far, but it might if I get partners or a successor. Should I change my current practice now to prepare for that case, or should I not worry about it?"} {"_id": "119094", "title": "The cross-over between designing for performance/pre-mature optimisation", "text": "> **Possible Duplicate:** > When is optimization not premature and therefore not evil? Whilst designing my own .Net SQL access library, I found that I want everything to run as fast as possible, so I tend to look at the fastest ways of doing things. This often gets criticised as premature optimisation when I am just looking for the fastest way of doing something. My question is: is there a cross-over between designing for performance and premature optimisation?"} {"_id": "89536", "title": "How do you control nodes in a server farm?", "text": "I've been reading about Hadoop and multi-node setups, and it says in the documentation that you must have a JVM and the Hadoop software already running on those nodes. My question is, do people install this software on each of these computers individually? Or is there a software solution that automates this process? I've also read about KVM switches, but I'm not sure whether this is what people usually use in these situations."} {"_id": "91984", "title": "How to Evaluate a Programmer's Communication Skills", "text": "If I was interested in interview questions like \"Describe how you explained a difficult technical issue to a non-technical person\" I would use the Googles to search on communication skill interview questions.
But what I really would like is to have someone actually explain _to me_ some sort of relatively difficult technical issue. So what I'm interested in are answers that identify a useful topic for this purpose, or examples of what others have done to create scenarios for the purpose of evaluating a candidate's communications. In other words, some exercise that actually forces the candidate to communicate something challenging rather than describe some time in their past when they were required to do so."} {"_id": "91985", "title": "Which language should I use for a computationally intensive program?", "text": "This is a multi-part question. I am writing a computationally intensive program that will perform computations on very large numbers, on the scale of factorial(100). I'm considering using Java or C++ (that's all I know, and C++ only slightly) but I'm not sure which would be better to use in this context. I know C++ will be faster, but Java has a built-in utility for large numbers, the BigInteger class, and I don't know of any equivalent in C++. So here are the questions... 1. Is C++ so much faster than Java as to make it worth it to learn and find a way to handle large numbers? 2. If I should use C++, how would I handle the large numbers? 3. Is it possible to just specify a new data type in C++ that represents a number, but with larger bounds than the int?"} {"_id": "129375", "title": "Should SpecFlow be used with BDD as a solo developer?", "text": "I am a long-time fan of TDD and after reading the RSpec book, would like to transition to a BDD process. I like the idea of driving from the outside in, as it is presented in the book. What I am having a hard time getting a handle on is how to structure the tests. I have tried SpecFlow, but it seems cumbersome to use when I am the only one really ever going to be looking at the tests. I like the idea of just using straight NUnit, rather than adding another framework, like it is presented here. Is this a good way to try and structure BDD tests? Is there more information out there on comparing the two ways (that may even be more recent)?"} {"_id": "129379", "title": "Would it be semantically correct to make a \u201cLogin\u201d constructor in an API class?", "text": "Since the methods of the class will only work if the user is logged in, is it right or is there some problem that might make my code slow/inefficient?"} {"_id": "126346", "title": "What is the right way to develop ASP.NET applications in order to separate data access from data visualization?", "text": "I'm currently involved in a migration to TFS from SVN of a large project that is going to be divided into five different sites. This project allows some providers to insert five different kinds of product data into the enterprise DB. Like a lot of companies out there, we have a _technical debt_ that should be solved (or at least, reduced) with this new project organization. However, I'm quite new to ASP.NET and also our software is still running .NET 2.0, so maybe I'm missing something or things should be done in a different way nowadays with 3.5+. One of those five different parts is going to be a new one. This is the one I'm going to develop.
In order to achieve a high degree of modularity, the code of the current project is divided into several DAOs ( _Data Access Objects_ ) under App_Code, `*.ascx` modules living in different parts of the project structure, VB classes that inherit from `System.Web.UI.*` (for **each** control that is displayed on the web) under App_Code, and lots of stored procedures for providing data to the controls. Also, we have several `*.aspx` that use `` for displaying each of the controls that inherit from System.Web.UI. The question comes when, for creating a simple `` with the options obtained from `cProvider`. If you're still there, here goes my question: Is this the right way to do things? Is this _high_ level of modularity the correct way to develop maintainable .NET applications? Do I have to follow this style and organization or should I take another one (please give me your recommendations)? From my point of view, having to do all those steps to paint a simple dropdown list doesn't follow KISS principles at all, and becomes unmaintainable if somebody else has to take over this project, knowing that the code is the only documentation we have. EDIT: links to documentation regarding this topic would be highly appreciated."} {"_id": "151194", "title": "Where do you turn for code base improvement on a multi-project scale?", "text": "Are there resources out there for a programmer with a large code base to have professional analysis performed for the goal of finding the areas of most needed improvement? Logic/reason tells me there's a limit to what a code review question can absorb, and also that the amount of work involved is not something that should be free."} {"_id": "156877", "title": "Approaches to manage related binary files, apart from code", "text": "I'm working on a website for a photographer; the whole code is just a few files, and the whole repository is a few MBs without the actual photographs. But when I add the actual photographs to git and commit, the repository jumps to 75 MBs. Considering the photos will be updated, the size of the repository is going to grow big as the photos get updated. I know I'm doing something very wrong. Please share your methods for managing big binary files separately from source code while still maintaining dependencies. PS: I don't use a db currently; all are plain text files."} {"_id": "126343", "title": "Large Ecommerce Site Topology and Deployment Guidelines", "text": "## Problem I am a lead engineer for a highly trafficked eCommerce website (upwards of 1m page views an hour). For various reasons we have the opportunity to rebuild large portions of our infrastructure. This brings up a number of interesting problems in balancing flexibility, stability, speed to market, etc... **At a high level I am interested in how others have handled similar situations. In particular I want to know how others have architected their site to provide stable deploys in a fast-moving environment.** * One of the main trade-offs I am looking at is breaking our site up by functional area and providing each area its own subdomain. The primary driver for this is that we are hosting on Azure and deployments are all or nothing. * I am also interested in how others have maintained architectural integrity in a fast-moving environment with a team composed of varying levels of experience. ## Tech We are primarily a Microsoft shop: SQL Server, Windows Server 2008, .Net 4.0, Visual Studio 10, etc...
We have also made the decision to host our main site on the Azure platform and are making heavy use of table and blob storage as well as the AppFabric cache. We are considering using multiple Azure data centers but have not made that final decision as of yet. All static content including JavaScript and CSS will be minified and hosted on our CDN. Our site is built using .Net MVC 3 and is supported by a DDD-style architecture. Our data access libraries and business rules are fairly well encapsulated and separate from actual display logic. ## Process While not a true agile shop we iterate very quickly with frequent deployments, multivariate testing, just-in-time requirements, etc... We are generally very entrepreneurial and have the need to respond quickly to new opportunities as they arise. (code for \"it can get chaotic\") ## Team Our development team is made up of a couple dozen developers with varying levels of skill and experience. While some of the developers can operate independently, others need to have very careful code reviews done. The QA team has a very thorough manual review process. They are also beginning to build a suite of QTP tests to automate regression testing. The dev team in turn makes use of unit testing and BDD testing when appropriate. ## Not Considerations I know there are a lot of strong opinions out there so to preempt religious wars here are a couple of responses that will not be helpful to me. * Just use Java, Oracle, PHP, Ruby, etc... * Just use EC2 * Just ask your developers to be more careful * Just tell your developers to code faster * Just tell \"the business\" to slow down * Just use Google Checkout ## Who I'm looking for feedback from I know there are many engineers who, while very skilled, have never worked on systems that support more than a few hundred concurrent users. The lessons learned from a site supporting 30,000 concurrent users are very different from those of an internal support app (I've worked on both, btw, and am not disparaging internal apps. They just require a different approach). If you have experience in a similar situation I would love to hear how you approached the problem and what the drawbacks of your solution were."} {"_id": "126340", "title": "Alternate string formatting options in C++?", "text": "I'm looking at optimizing some string formatting code that's hit a lot in our code. We had been using ostringstream, and I converted the code to use sprintf (actually Microsoft's more secure sprintf_s). I've traded type safety for run-time performance. But I've debugged enough sprintf-related weird crashes to know that sprintf has its own serious flaws that the compile-time checking of ostringstream catches. But ostringstream is more than an order of magnitude slower than sprintf by my measure, and that's not tolerable in this code. I'm also not thrilled with the readability of C++ stream string formatting, but this may be entirely subjective. So unfortunately when formatting strings in C++ I've got several \"standard\" options that are suboptimal: 1. sprintf -- fast, but not type safe. Can have insidious bugs when the wrong format string is used. 2. ostringstream -- slow, but type safe. IMO ugly, too verbose, and difficult to read. 3. boost::format -- a little more readable than ostringstream IMO, but in my performance benchmarks appears to be even _slower_ than ostringstream, so this is out. To summarize, I'm not really satisfied with the \"standard\" options. I'd like something that takes both performance and type safety seriously.
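For illustration only, not a drop-in library: a C++11 variadic template wrapper gets the type-safety half of the wish list, since each argument is dispatched by type at compile time; raw speed then depends on what the underlying formatting layer does:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Base case: nothing left to append.
    inline void append_all(std::ostringstream&) {}

    // Each argument picks its operator<< overload at compile time,
    // so a mismatched argument is a compile error, not a crash.
    template <typename T, typename... Rest>
    void append_all(std::ostringstream& out, const T& first, const Rest&... rest)
    {
        out << first;
        append_all(out, rest...);
    }

    template <typename... Args>
    std::string format(const Args&... args)
    {
        std::ostringstream out;
        append_all(out, args...);
        return out.str();
    }

    int main()
    {
        std::cout << format("x = ", 42, ", ratio = ", 2.5) << '\n';
    }
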
What other string formatting options are out there for C++?"} {"_id": "112047", "title": "Career Switch from testing to Development", "text": "I have a few questions regarding switching from testing to development. 1) Do you think the QA experience won't be counted? 2) Will switching from an MNC to a startup company for skills development affect a resume? 3) Does a small-profile project look bad on a resume, after 2 years of experience? 4) What does a manager look for in a candidate's profile? Previous company? Previous project? Or skills he/she has developed? 5) Does software development in a non-IT company count? Please share your views."} {"_id": "110510", "title": "How do you annotate code changes and code authorship", "text": "What is the best way to annotate who authored a file and subsequent changes that were made? I'm a contractor on a new project, one that's just starting, which is using Subversion. The other day I noticed a team member had updated a script written by an outside consultant, and he updated the header from \"Written by X\" to \"Written by Y and X\" (since he had made a decent number of updates). I didn't think this was a good idea, because others after us may update that script with small or large changes and it'd be unclear when to update the \"Written by\" header (or how to order the names). I pitched the way I had done it at my previous company (with an edit log at the top of each file - which was enforced by whoever verified the changes; we were using ClearCase so changes wouldn't be pushed up until they were verified), but then he mentioned we could just use the \"svn log\" command to see edits and it'd be hard to enforce an edit log. So now I'm not so sure what the best way to annotate authorship and changes is. Files can completely change over time, so I don't like the idea of a stale \"Written by\" header. And a \"Written by\" header that just includes a long list of people with no context doesn't seem useful. There's removing the authorship header, and simply using the Subversion change log, but then what do you do about \"Written by\" headers from code that's given to you (from consultants, old code bases, downloaded from the net, etc)? How do other teams handle this?"} {"_id": "130427", "title": "What is the best way to structure an Android application?", "text": "I am starting a new Android application. What is the best structure to use? I am planning to make it a multi-package design as follows: 1. Main package, including the Activity 2. Service and data layer 3. Entity package, including entity classes. Any advice?"} {"_id": "151199", "title": "Alternatives for comparing data from different databases", "text": "I have two huge tables on separate databases. One of them has the information on all the SMS that passed through the company's servers while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases differ by 1 or 2 seconds. I have, so far, two alternatives to do this: 1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample. One for each of the main tables.
Then, for each distinct phone number, select the time of every SMS sent from that phone from both my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink. 2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an \"uncharged\" table (this would also be the final step in case 1). My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?"} {"_id": "110517", "title": "How often do you use DI container in your ASP.NET MVC application", "text": "While reading a book, I came across DI (Dependency Injector) and the subsequent DI Container tool. Previously, I developed an application following a tutorial on the asp.net website which never used such a tool. So, my question can be summed up in the following two concerns: * **How often do you use a DI Container?** * **What requirements make you do so?** EDIT: Examples with and without a DI Container. I have written the code to understand which is the better approach. _**Without** DI Container_ \\-- LinqValueCalculator lc = new LinqValueCalculator(); ShoppingCart sc = new ShoppingCart(lc); decimal total = sc.CalculateStockValue(); Console.WriteLine(\"Total: {0:c}\", total); _**With** DI Container_ \\-- (Ninject is used in this example) IKernel ninjectKernel = new StandardKernel(); ninjectKernel.Bind<IValueCalculator>().To<LinqValueCalculator>(); IValueCalculator calcImpl = ninjectKernel.Get<IValueCalculator>(); ShoppingCart cart = new ShoppingCart(calcImpl); Console.WriteLine(\"Total: {0:c}\", cart.CalculateStockValue()); I will be honest, I feel that writing the first code was easier and seemed more natural. But your views are what count, as I am just learning MVC."} {"_id": "110515", "title": "Knowledge base / questionnaire / decision-tree / decision-making platform", "text": "I am looking for a software platform/programming framework which can do the following: * INPUT: a user inputs some text * PROCESS/REFERENCE DATA: the user is then asked to answer a list of questions regarding the INPUT and attribute the answers either to the whole INPUT or to some parts of the INPUT * OUTPUT: the list of answers attributed to the original INPUT Some clarifications: * re: REFERENCE DATA: the list of questions should allow for a sub/follow-up question * re: PROCESS: the answering process should be as flexible as possible (user should be able to skip questions, provide his or her own answers, etc) * this is NOT meant to be an automatic/machine learning tool - the user (the human) will be classifying the INPUT himself/herself based on the REFERENCE DATA"} {"_id": "23718", "title": "What's the most used programming language in high performance computing?
And why?", "text": "I believe a lot of Fortran is used in HPC, but I'm not sure if that's only for legacy reasons. Features of modern programming languages like garbage collection or run-time polymorphism are not suitable for HPC since speed matters, so I'm not sure where C#, Java, or C++ come in. Any thoughts?"} {"_id": "253179", "title": "Would these two scenarios be good candidates for a NoSQL database?", "text": "I've checked a few other threads around the topic and searched around, and I am wondering if someone can give me a clear direction as to **_why_** I should consider NoSQL and **_which_** one (since there are quite a few of them, each with different purposes): * Why NoSQL over SQL? * Is MongoDB the right choice in my case? * Are NoSQL databases going to take the place of relational databases? Is SQL going away? Like many others - I started with relational databases and have been working on them ever since, thus when presented with a problem, the first instinct is to always think of _\"I can create these tables, with these columns, with these foreign keys\"_ , etc. My overall goal is **how to get into the \"NoSQL\" mindset**, i.e. getting away from the inclination of always thinking about tables/columns/FKs (I understand that there are cases where an RDBMS is still the better way to go). I am thinking of 2 scenarios, for example, just to get a more concrete direction. **Scenario 1** Imagine a database to model furniture-building instructions (think of IKEA instructions) where you would have the object \"furniture\" which would have a list of \"materials\" and a list of \"instructions\": * Furniture - would simply have a name plus a list of Materials and Instructions * Materials - would be a name + quantity; maybe we can even have a \"Material Category\" table as well * Instructions - would simply be an ordered list of texts My first instinct would go the RDBMS way: * Create tables called \"Furniture\", \"Material\" and \"Instruction\" with the appropriate columns * Create the appropriate JOIN tables as necessary and FKs The use of this system can include _searching_ based on materials or maybe a combination of materials. And maybe think of extending the data stored to include information on how many people are required to build it? Difficulty level? How much time it would take? Would something like this be a good candidate for a NoSQL database? **Scenario 2** Imagine a database to model a User database with basic information (e.g. name, email, phone number, etc), but you also want to have the flexibility of being able to add any custom fields as you wish. Think of different systems consuming this user database; each system will want to have its own custom attributes attached to the user. My inclination would go the RDBMS way: * Create a table for \"USER\" with columns: ID, name, email, phone * Create a table for \"USER_ATTRIBUTE\" with columns: ID, USER_ID, attr_name, attr_type, attr_value The USER_ATTRIBUTE will allow that customization and flexibility without having to shut down the system, alter the database and restart it. Would something like this be a good candidate for a NoSQL database?"} {"_id": "59201", "title": "How to REALLY start thinking in terms of objects?", "text": "I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism.
Yet, many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects that get passed through larger classes which only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: 1. It's easy to start thinking of everything in terms of the database 2. Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented operation when you think about Request Handlers, Threads, Processes, CPU Cores and CPU operations... I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?"} {"_id": "150824", "title": "Is the puts function of ruby a method of an object?", "text": "I'm trying to understand how the `puts` function fits in Ruby's \"Everything is an object\" stance. Is `puts` the method of a somehow hidden class/object? Or is it a free-floating method, with no underlying object?"} {"_id": "150825", "title": "Web Service - SOAP", "text": "My experience with web services is slim and I'm trying to understand this a little bit more. I have, for instance, built a web service using Visual Studio. In order to use it, I add a web service reference in my projects; this creates a proxy and the use is pretty simple. Does this use SOAP? I ask this because I will now be facing a web service with which I must communicate using SOAP with attachments, and I'm trying to understand the concept behind this and the difference from what I have done so far. Will the proxy still be viable or do I need to create the XML by hand and post it to the web service? These concepts still confuse me, so any help is appreciated. EDIT: I'm not developing the service, I will just be using it."} {"_id": "253176", "title": "Can I use all JetBrains software after purchase", "text": "So I am thinking about buying the Ruby version of the IDEA that JetBrains is offering. If I buy a personal license for it, does it allow me to also use the license for IntelliJ so I can develop Java applications too? Or is it required to buy a separate key for each of them? I was not able to find that kind of information on their homepage, so I turned here with my question."} {"_id": "92186", "title": "Why is filesystem preferred for logs instead of RDBMS?", "text": "The question should be clear from its title. For example, Apache saves its access and error logs in files instead of an RDBMS, no matter how large or small the scale at which it is being utilized. For an RDBMS we just have to write SQL queries and it will do the work, while for files we must decide on a particular format and then write regexes or maybe parsers to manipulate them. And those might even fail in particular circumstances if great care is not taken. Yet everyone seems to prefer the filesystem for maintaining logs. I am not biased against either of these methods, but I would like to know why it is practiced like this.
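(As a sketch of the parsing burden just mentioned, here is what pulling apart one line of Apache's common log format looks like; real logs have more corner cases:)

    <?php
    $line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326';
    // host, identity, user, timestamp, request line, status code, bytes sent
    $pattern = '/^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)/';
    if (preg_match($pattern, $line, $m)) {
        echo "{$m[1]} requested \"{$m[5]}\" and got status {$m[6]}\n";
    }
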
Is it speed or maintainability or something else?"} {"_id": "73602", "title": "A scrum team for tiny projects", "text": "I have been wondering if it would be smart to form a single scrum team for several tiny projects. I am aware that it would be very hard to state a single sprint goal if a team is working on several different projects. On the other hand: putting projects that usually are assigned to single developers into a common team backlog would (IMHO) benefit from the synergies of a scrum team. I should mention that the projects do not vary that much from a technical perspective. I am in a consultancy department where we have projects of very different sizes. For some time I have been working on a larger project using Scrum. Lately, the team has been disbanded, and I am left with one-man assignments. The idea came to me because I was getting more and more frustrated about not having my old teammates for sparring, QA and good old-fashioned team spirit. I realized that about half of my department is working on one-man assignments. Then it dawned on me that there might be a way out of the solitude: a Scrum of small projects. I am keen to hear your opinion on the matter: What do you think of the idea? What are the pitfalls? What are the limitations? How do I convince management that it is a great idea?"} {"_id": "92189", "title": "Learning the rules of chess", "text": "A similar question asks whether a computer can learn to play optimally in chess by analyzing thousands of games. If a machine can look at the state of the board for a few games of chess (or a few games of checkers) in the beginning and after each move, can it be programmed to learn the rules of the game? If it can, to what extent (e.g., would it be able to account for castling or promotion) would this work? Which machine learning algorithms would make this possible?"} {"_id": "74054", "title": "What language to use for prototyping and creating quick scripts?", "text": "Right now, I use Python for my quick scripts and prototypes (e.g. algorithms; my pseudocode is very Python-like as well). The other languages that I am familiar with include Java, C, x86 Assembly and Scheme, and Python is pretty much the best for this among these in my opinion. Perl gets a lot of rep for this over and over again, and I have heard that Ruby isn't bad either, and the Python community praises Python for this too. What language (it could be one other than these 3) do you think is the best programming language for: * Creating quick prototypes of applications or algorithms * Creating simple scripts for small, repetitive tasks Important features for such languages include: * Little boilerplate code, not too verbose * (Very) high-level * Interpreted * Good and comprehensive standard library"} {"_id": "203863", "title": "Modelling a resource that may not be part of the parent resource", "text": "I am designing an API and am having a couple of problems with certain parts of the resource modelling. I have the notion of a `SurveyItem`; a collection of them forms a `Survey`: Survey Resource Endpoint: http://api.somewhere.com/survey Accessing Survey Items http://api.somewhere.com/survey/1/surveyitems Pretty basic, and I am not overly concerned with the above. `SurveyItems` cannot exist outside the context of a survey, hence the hierarchy. Where I am struggling a little is the next level down.
A `SurveyItem` is not necessarily one thing (bear with me) as it is based on an interface which is implemented, but you may have something like a \"Simple Survey\" that lets you take notes, a \"Video Survey\" that will let you take a survey via video, etc., so they are all part of the base implementation but could potentially have different sub resources. Image and note survey type sample endpoints http://api.somewhere.com/survey/1/surveyitems/1/images http://api.somewhere.com/survey/1/surveyitems/1/notes Video Survey http://api.somewhere.com/survey/1/surveyitems/2/videos http://api.somewhere.com/survey/1/surveyitems/2/notes As you can see above, a `SurveyItem` may have different sub endpoints depending on the type of SurveyItem it is. I was thinking of modelling it the way above but using a HATEOAS approach of responding with a collection of URLs of the available sub endpoints for each individual survey type so they are discoverable by clients. Is this a reasonable thing to do or are there better approaches to handling things like this?"} {"_id": "253286", "title": "Erlang return value conventions", "text": "Should functions that return tuples always return tuples? For example, I have a function `is_user_name_allowed` that returns a tuple in this form if the username is not allowed: `{false, [\"Reason 1\",\"Reason 2\"]}` If I want to return true, is there any convention/benefit to returning a tuple as opposed to simply returning `true`? (`{true}` vs `true`)"} {"_id": "253287", "title": "Estimation of space required to store 275305224 5x5 MagicSquares?", "text": "Here are some examples of 5x5 Magic Squares found by the Magic Square Generator by Marcel Roos: Magic squares 5x5 Sum must be: 65 Solution: 1 1 2 13 24 25 3 23 17 6 16 20 21 11 8 5 22 4 14 18 7 19 15 10 9 12 Solution: 2 1 2 13 24 25 3 23 16 8 15 21 19 10 6 9 22 4 14 20 5 18 17 12 7 11 Solution: 3 1 2 13 24 25 3 23 19 4 16 21 15 10 12 7 22 8 9 20 6 18 17 14 5 11 ... The number of all possible solutions is 275305224. Since calculation of all solutions takes a very long time, I would like to have one person with a high-speed computer find all solutions (in one long continuous span of time) and then share them on the web for other people. What would be an efficient way to store these solutions, using some sort of compression? **(A simple logical trick:) by the principle that the sums of all rows, columns and diagonals are equal to 65,** **we only need to know the values of 14 of the 25 cells, as shown below; this reduces the storage space to a bit more than half, 14/25.** legend: **+** means stored value and **x** means skipped value! + + + + x + + + + x + + + + x x + x + x x x x x x"} {"_id": "253285", "title": "Limitations of the Identity Map pattern", "text": "After asking about the implementation in Ruby of the Identity Map pattern because of the potential memory leak in long-running server apps, I am reconsidering my initial concept of that pattern. Initially I thought it was intended to cache database results AND guarantee that **only one instance of the same object exists in memory**. Is this last assumption right? Within DDD there is the tendency for Entity equality to be based on having the same **id**, not the same memory address; that fits perfectly with the memory problem. However, after using many ORMs I always had the \"feeling\" of having a **unique instance** of my objects. Is this assumption a _dangerous idea_? Should I normally be concerned about my objects having multiple instances?
Or even being out of sync with the database?"} {"_id": "32581", "title": "How do you explain Separation of Concerns to others?", "text": "If you had a colleague who didn't understand the benefits of Separation of Concerns, or didn't understand it quite enough to apply it consistently in their daily work, how would you explain it to them?"} {"_id": "8721", "title": "How do I improve my coding skills?", "text": "Here's a bit of information about me, before starting with the question. I am a Computer Science undergraduate, Java being my primary coding language. The basic problem at my university is the teaching standards. No one is concerned with teaching students coding knowledge, rather than just theoretical knowledge. The effect is that most of my fellow college mates don't understand programming at all. Even I haven't been able to come out of the traditional programming environment, which limits my coding to an extent. What are the possible ways by which I can develop and expand my programming/coding skills? Also, can you suggest the sources for the same? **Edited**: Sources suggesting development of coding skills."} {"_id": "137696", "title": "What is the best way to stay on the cutting edge of Software Engineering?", "text": "> **Possible Duplicate:** > How do I improve my coding skills? I was asked this ( _What is the best way to stay on the cutting edge of Software Engineering?_ ), and it's really bothered me that I didn't have a good answer. There are really two parts to this question: 1. Where is a good place (websites - magazines) to go to learn about emerging technologies, frameworks, design principles, etc? 2. How can I get a good feel for which emerging technologies will be adopted by \"the industry?\" I realize the second will be harder than the first since no one has a crystal ball, but any advice would be welcomed."} {"_id": "15742", "title": "Continuous Professional development \u2013 the best approach", "text": "From my experience in the current working environment, one's professional development is definitely in the hands of the individual. As I see it there are several routes to the same objective, based on cost, time and availability: 1. External offsite training 2. Online training providers (iTunes U, Pluralsight etc) 3. Books 4. Specialist/user groups 5. Specialist web sites (Channel9, stackoverflow, dnrtv, codeplex etc) What would you consider to be the best approach (blend) to continued learning and maintaining a professional standard of work?"} {"_id": "216197", "title": "how to write good programming logic?", "text": "Recently I got a job as a Java developer, and now I have an assigned project too. I want to know: what is good logic? When I check in the code, my team lead says that it is good code. But when it comes to my project manager, he says that it is bad code. And he changes my code; after his changes, if I look at his code, it is really very good and even simpler. Can you please tell me how to develop good programs and good logic? What is the best way to structure a problem in terms of code?"} {"_id": "124536", "title": "Logical reasoning skill tests", "text": "I'm job hunting so I decided to sharpen my skills a bit. I've been working on implementing solutions to some of the \"Project Euler\" problems, but I would like to work on stuff which is not specific to programming. Does anyone have good resources with tests that can be used to test logical reasoning skills, creativity and those kinds of things?
I know some people question their value as a predictor of developer ability, but there are some HR departments which use those kinds of tests, so I want to get some preparation done. Thanks."} {"_id": "113947", "title": "How do I become a better programmer as a junior developer", "text": "I am a junior developer in South Africa. I just graduated from college last year. My current employers hired me as a developer but they do not give me anything to develop, and I've been working here for 8 months now. How do I become a better developer? How do I grow? I do some development at home, but how can I get more exposure? I have looked for another job, but that's not the point. I do not want to give up, I want to become a better developer. How can I improve my skills?"} {"_id": "117103", "title": "Increasing confidence in personal knowledge about coding", "text": "> **Possible Duplicate:** > I'm graduating with a Computer Science degree but I don't feel like I know > how to program I have been a .net dev for about 3 years at a non-software company, and I want to try and implement unit testing into our software projects. I have been reading blogs about the basics of unit testing and am currently reading The Art of Unit Testing. I feel I have a fairly good understanding of how unit tests can be set up and implemented, but I am wary of implementing it because if things were set up wrong, no one would notice / my team would be writing bad tests thinking they were good tests. I feel that my whole underlying issue is a fear of failure, be it in implementing unit testing or in persuading the team to start using / writing unit tests. How do/did you increase your confidence in your own knowledge about coding?"} {"_id": "119415", "title": "How to Deliberately Practice Software Engineering?", "text": "I just finished reading this recent article. It's a very interesting read, and it makes some great points. The point that specifically jumped out at me was this: > The difference was in how they spent this [equal] time. The elite players > were spending almost three times more hours than the average players on > deliberate practice \u2014 the uncomfortable, methodical work of stretching your > ability. This article (if you care not to read it) is discussing violin players. Of course, being a software engineer, my mind turned towards software ability. Granted, there are some very naturally talented individuals out there, but time and time again, it is those folks who stretch their abilities through deliberate practice that really become exceptional at their craft. My question is - how would one go about practicing the \"scales\" of software engineering and computer science? When I practice the piano, I will spend more of my time on scales and less on a fun song. How can I do the same in developing software? To head off early answers, I don't feel that \"work on an open source project,\" and similar answers, is really right. Sure...that _can_ improve your skills, but you could just as easily get stuck focusing on something that is unimportant to your craft as a whole. It can become the equivalent of learning \"Twinkle Twinkle Little Star\" and never being able to play Chopin.
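(One concrete candidate for such a \"scale\", offered purely as an illustration: rewrite a deceptively simple routine like binary search from memory until it passes its tests on the first try; its off-by-one and overflow traps make it exactly the kind of uncomfortable, methodical exercise the article describes:)

    // A correct version to check yourself against after each attempt.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids the classic (lo + hi) / 2 overflow
            if (a[mid] < key)      lo = mid + 1;
            else if (a[mid] > key) hi = mid - 1;
            else                   return mid;
        }
        return -1;                          // key not present
    }
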
So, again, I ask - how would you suggest that someone deliberately practice software engineering?"} {"_id": "9598", "title": "What do you do to improve your logical programming skills?", "text": "Do you think that only programming practice will help you improve your logical programming skills, or do you train your brain with puzzle games, trying to imagine how the universe works, playing instruments and so on? If you devote more time to programming, will you gain logical programming skills faster?"} {"_id": "43528", "title": "I'm graduating with a Computer Science degree but I don't feel like I know how to program", "text": "I'm graduating with a Computer Science degree but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I did have the opportunity to work as an iPhone developer, but I felt like I was mostly gluing together libraries that other people had written with little understanding of the mechanics happening beneath the hood. I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult and at the rate I am learning a decade will have passed before I master the material in the book. Given my current situation, I've spent a month looking for work but my skills (C, Python, Objective-C) are relatively shallow and are not so desirable in the local market, where C#, Java, and web development are much higher in demand. That is not to say that C and Python opportunities do not exist but they tend to demand 3+ years of experience I do not have. My GPA is OK (3.0) but it's not high enough to apply to the large companies like IBM or return for graduate studies. Basically I'm graduating with a Computer Science degree but I don't feel like I've learned how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried to do? I've worked hard but don't have the confidence to go out on my own and write my own app. (That is, become an indie developer in the iPhone app market.) If nothing turns up I will need to consider upgrading and learning more popular skills or try something marginally related like IT, but given all the effort I've put in that feels like copping out. EDIT: Thank you for all the advice. I think I was premature because of unrealistic expectations but the comments have given me a dose of reality. I will persevere and continue to code. I have a project in mind; although well beyond my current capabilities, it will challenge me to hone my craft and prove my worth to myself (and potential employers). Had I known there was a career overflow I would have posted there instead."} {"_id": "179060", "title": "What are the ways to start making actual/real-world programs using Java/C++ to excel my Programming Skills?", "text": "> **Possible Duplicate:** > How do I improve my coding skills?
> How do I become a better programmer as a junior developer The programming that we learn at university is not that vast; the exercises are really small ones meant to build our logic, but everyone knows that this will not be the scenario when I get out into the market as a professional programmer. I really want to make real-life programs which would actually make some impact and be useful. Tell me, in the light of your experience, how I should start making those programs and polish myself as a professional programmer. If there are any sources available for this, kindly recommend them."} {"_id": "200730", "title": "Improving logic/creativeness as a programmer", "text": "I am currently working as a software developer and am getting by OK, but feel that there is something missing from my skillset. Looking at the job interview questions and processes of some of the big tech companies, there is definitely an emphasis on problem solving and creativeness when tackling software problems. Some of the examples I would not have any chance at all of solving. How can these skills be improved/learned? Or is this simply a natural thing that can't be learned?"} {"_id": "151317", "title": "Thinking skills to be a good programmer", "text": "I have been programming for the last 15 years with a non-CS degree. The main reason I got into programming was that I liked to learn new things and apply them to my work. And I was able to find and fix programming errors and their causes faster than others. But I never considered myself a guru or an expert, maybe due to my non-CS major. And when I saw great programmers, I observed they are very good, much better than me of course, at solving problems. One skill I found good in my mid-career is thinking of requirements and tasks in reverse order and in the abstract. In that way, I can see what is really required for me to do without detail and can quickly find parts of a solution that already exist. So I wonder if there are other thinking skills to be a good programmer. I've followed the Q&As below and actually read some of the books recommended there. But I couldn't really pick up good methods directly applicable to my programming work. What non-programming books should a programmer read to help develop programming/thinking skills? Skills and habits to develop to be good at programming (I'm a newbie)"} {"_id": "212499", "title": "Stepping between learning about programming and actually programming a small project (C#)", "text": "I'm a Games Design student, aware of the frequent advice that I should be good at something else as well (programming/art) so I'm useful when a designer is not required, and also for the sake of being more useful - generally as a wish to increase my employability, my value in group projects, and to make me capable of individual projects (which would require design AND programming AND art all stemming from what I can produce). As such I wish to complete some minor C# projects. In particular, I wish to write a simple Pong clone over this weekend before term starts up again on Monday. The problem is, whenever I try to write a program, unless it's blindly following instructions (which is simply parroting) I am unable to establish a workflow that moves me forward. If I have an idea or mechanic I wish to document, I have some templates I refer to and fill out. If I wanted to produce an art asset I'd consider what the asset is (a car, a person), the graphical style (pixel art or drawing or 3d), and the technical constraints (resolution/file formats).
For a programming workflow, I'm under the impression I have my pseudo-code of what I want to make, I go through each item and check a scripting reference to see how I make each step, and break them down further as required. However I have, say, \"Draw Paddle_1, Move Paddle1 up/down in relation to input,...\" and I cannot progress. I've been given modules in different languages over the past two years (which I've passed all of - since it just involves parroting instructions); however, I am at a loss when it comes to picking something to do, figuring out how to do it, and then doing it. How do I make this leap between loose knowledge of programming and knowing the basics of structure and the vocabulary, towards actually being able to write something (short of copy/pasting bodies of code from googling them - after doing that dozens of times I only end up frustrated and angry and walk away from programming for months, until it comes up again)?"} {"_id": "129918", "title": "How to improve yourself and grow as a programmer?", "text": "> **Possible Duplicate:** > How do I improve my coding skills? I'm a programmer with 4 years of experience, with Java being my strong point. I know the basics of web, C++, Android, and BlackBerry programming. I was wondering how I should improve myself. Does learning new languages help, or should I learn design and work towards becoming an architect? How do you guys plan for the future? Maybe learn a language every year? Or keep in touch with the new technologies? How do you guys improve yourself as a core programmer?"} {"_id": "119253", "title": "Where to go from here, how to improve / learn more", "text": "> **Possible Duplicate:** > I'm graduating with a Computer Science degree but I don't feel like I know > how to program I finished university around 4 years ago with a double degree in Software Eng/Comp Sci. I got my first job at a startup in my final year, was with them for 2.5 years, then started my own business. So far everything is going great, lots of clients and steady work etc, but coming right out of uni and into a startup I never had any form of senior software engineer guiding my work or suggesting improvements etc... What's the best way for me to improve & learn more? Books? MS Exams? Other? I develop in C#, ASP.NET/MVC. **Update** The problem isn't really with releasing products; I've released quite a few which are up and running with customers happy. It's more about quality of code and best practices: how do I know the code I am writing is correct? It may work, but there may be ways of coding it much more efficiently or in keeping with some kind of standard. Cheers for any responses! Matt"} {"_id": "84278", "title": "How do I create my own programming language and a compiler for it", "text": "I am thorough with programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also couldn't understand how people create OSes like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this stuff, and I am 15 years old. I always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie.
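(To make the lexing and parsing vocabulary concrete before the heavier reading below, here is a deliberately tiny expression evaluator in Python; a real compiler layers symbol tables, type checking, and code generation on this same skeleton:)

    import re

    def tokenize(src):
        # Split the source into numbers, operators, and parentheses.
        return re.findall(r'\d+|[()+*/-]', src)

    def expr(tokens):    # expr := term (('+'|'-') term)*
        value = term(tokens)
        while tokens and tokens[0] in '+-':
            op = tokens.pop(0)
            value = value + term(tokens) if op == '+' else value - term(tokens)
        return value

    def term(tokens):    # term := factor (('*'|'/') factor)*
        value = factor(tokens)
        while tokens and tokens[0] in '*/':
            op = tokens.pop(0)
            value = value * factor(tokens) if op == '*' else value / factor(tokens)
        return value

    def factor(tokens):  # factor := NUMBER | '(' expr ')'
        tok = tokens.pop(0)
        if tok == '(':
            value = expr(tokens)
            tokens.pop(0)  # discard the closing ')'
            return value
        return int(tok)

    print(expr(tokenize('1 + 2 * (3 + 4)')))  # prints 15
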
* * * I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they all only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, and not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is an SB student in EECS at MIT and doesn't have a clue how to actually create all this stuff in the real world. All he knows is just an understanding of Compiler Design and OS concepts like the ones that you guys have mentioned (i.e. like Thread, Synchronization, Concurrency, memory management, Lexical Analysis, Intermediate code generation and so on)"} {"_id": "160896", "title": "I want to create a new language", "text": "> **Possible Duplicate:** > How do I create my own programming language and a compiler for it I want to create a new general-purpose language that will compile to JavaScript, and I'd like to be able to write it in that same JavaScript. I won't go into much detail about why, but how can I do it? Currently I have professionally coded in JS, Java, Groovy, PL/SQL, PHP among the rest, so I'm covered on the language-user side of the story, but I have no theoretical or practical knowledge from the language-creation side. I have browsed the web for the past week and noticed a soup of acronyms and/or names like RR, LALR, BNF, AST etc., so I figure I need two things: * Good Book(s) about theory * Working projects or experiments that I can use to jump-start my learning (something free, like Narcissus) EDIT: No, this question does not cover EXACTLY THE SAME content and it is not on the same topic; even worse, even though there are two good answers, they don't answer how I can practically USE JS TO BUILD THE TOOLS (JS TRANSPILER) I NEED FOR THE NEW LANGUAGE. Any ideas?"} {"_id": "115924", "title": "Should the creation of an object implicitly or explicitly create a file?", "text": "I'm creating an object whose sole purpose is to read in a file of one format and create another of a different format. Is it best to create the output file implicitly during object initialization, or to have a public method that gives the user the choice of when this file will be created?"} {"_id": "98604", "title": "Invert how to teach programming?", "text": "I was thinking about the responses to my thread Program like a writer..? and most people agreed that you should have some structure and build from there instead of just typing away. New programmers, however, tend to just type, find mistakes, add code, and repeat until it works. To quote Richard Pattis: > \u201cWhen debugging, novices insert corrective code; experts remove defective > code.\u201d This is fine and all, but the problem is that people **repeat their mistakes**. After making a mistake enough times, they eventually learn to avoid it. To give a practical example, consider `warning: control reaches end of non-void function`. Most new programmers will see this, google it, figure out what's wrong, add in `return 0;`, and they're good to go. They don't care about the error; they just want the solution. What if we invert this concept? Why not teach them how to mess up first? A possible assignment could be: > Consider the warning \"control reaches end of non-void function\". What causes > this warning to occur? Provide code to demonstrate all the ways this warning > can appear. 
Evaluate the following code samples and explain why the warning > appears. This way, the programmer learns that all non-void functions need to return something. They would have to write code to create this error (debugging/testing), and evaluate existing code to find the error (code review). I'm not saying we teach this _before_ fundamentals, because you should understand iteration/recursion, loops, assignment, equality, etc. before writing significant code. I'm saying that instead of solving errors as rookies make them, why not concentrate this knowledge at the beginning? What do you guys think of this idea? What are the shortcomings of this approach, and can it be feasibly implemented anywhere?"} {"_id": "256125", "title": "php make dollar optional", "text": "Is there any reason the PHP language could not be updated in a future version to make the $ prefix on variable names optional? Are there reasons that would break existing code? I'm thinking it would still be required for string interpolation, like \"Hello $name\", but most of the time it would be optional. For example, $name = 'Bob'; echo \"Hello $name\"; would still be valid, but so would name = 'Bob'; echo \"Hello $name\";"} {"_id": "196125", "title": "Is it a good practice to create a ClassCollection of another Class?", "text": "Let's say I have a `Car` class: public class Car { public string Engine { get; set; } public string Seat { get; set; } public string Tires { get; set; } } Let's say we're making a system about a parking lot; I'm going to use the `Car` class a lot, so we make a `CarCollection` class, which may have a few additional methods like `FindCarByModel`: public class CarCollection { public List<Car> Cars { get; set; } public Car FindCarByModel(string model) { // code here return new Car(); } } If I'm making a class `ParkingLot`, what's the best practice? **Option #1:** public class ParkingLot { public List<Car> Cars { get; set; } //some other properties } **Option #2:** public class ParkingLot { public CarCollection Cars { get; set; } //some other properties } Is it even a good practice to create a `ClassCollection` of another `Class`?"} {"_id": "115929", "title": "How to account for a bug fixing iteration?", "text": "We have implemented Scrum quite successfully for the past 5 months. However, we are 3 weeks away from PROD without _ever_ doing any end-to-end integration test. OUCH! I need help. Without tackling the causes of this (at THIS point), we now need to plan the current iteration, which consists of minor improvements and MANY still-unknown bug fixes. How do you account for this scenario? How do you plan your iteration to fix bugs yet to be found?"} {"_id": "196126", "title": "Which technologies to create web interface for scientific instrument?", "text": "I am working on software for some scientific instrumentation. An embedded computer running Windows 7 is inside the instrument, which controls various components of the device: cameras, motors, thermometers, etc. I've written a basic interface in WPF/C# and use remote desktop to test the application. (The reason is safety; the instrument will be used in high radiation areas.) What I want: A server runs on the embedded PC. The user accesses a web page at 192.168.x.x (Internet access would be nice, but not necessary) and clicks 'collect data', and the server loads the relevant C# .dll's, acquires data and uploads the data to the user. Speed/scalability is not much of a concern, as data acquisition could take anywhere from minutes to days. 
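For the instrument scenario just described, a minimal self-hosted sketch follows. It assumes a hypothetical AcquireData() wrapper around the existing instrument DLLs and an assumed port; in practice, self-hosting frameworks such as ASP.NET Web API or WCF self-hosting are the more complete options.

```csharp
// Minimal self-hosted HTTP endpoint on the embedded PC (a sketch, not a
// complete design). AcquireData() is a hypothetical stand-in for calls
// into the existing C# instrument DLLs.
using System;
using System.Net;
using System.Text;

class InstrumentServer
{
    static string AcquireData() => "temperature=23.4"; // placeholder for real acquisition

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/"); // may need a URL ACL / admin rights on Windows
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = listener.GetContext(); // blocks until a request arrives
            byte[] body = Encoding.UTF8.GetBytes(AcquireData());
            ctx.Response.ContentType = "text/plain";
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```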
What I'm confused about: How do I go about executing my existing C# code from a web application? I.e., which .NET technology(ies) should I look into using?"} {"_id": "166442", "title": "What is the precise definition of programming paradigm?", "text": "Wikipedia defines **programming paradigm** thus: > a fundamental style of computer programming which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words _programming_ and _paradigm_ could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in _Clean Code_ (13), > Consider this book a description of the _Object Mentor School of Clean > Code_. The techniques and teachings within are the way that _we_ practice > _our_ art. We are willing to claim that if you follow these teachings, you > will enjoy the benefits that we have enjoyed, and you will learn to write > code that is clean and professional. But don't make the mistake of thinking > that we are somehow \"right\" in any absolute sense. Thus Bob Martin is not claiming to have _the correct_ style of Object-Oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc). There are many styles of programming at many levels. So how can we define _programming paradigm_ rigorously, to distinguish it from other categories of programming styles? _Fundamental_ is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words--in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?"} {"_id": "56479", "title": "How does a CS student negotiate in/after a job interview?", "text": "Alright, I've gotten to the second step in the interview process. At this point I'm working under the assumption that I might be offered a position -- flying my butt to Redmond would be quite an expense if they weren't at least considering me for something (*crosses fingers*). So, if one is offered a position, how should a CS student negotiate? I've heard a few strategies for dealing with software companies when you are being considered for a hire, but most of them assume the developer is in a powerful position. In such scenarios, (s)he has lots of job experience, and may even be overqualified for what the employer is looking for. (S)he is part of a small job market of qualified developers, because 99% of the applications companies receive are from those who are woefully underqualified. I'm in a completely different position. I think I compare favorably to most of my fellow students, and I have been a programmer for almost 10 years, but often I still feel green compared to most of my coworkers. I'm in a position where the employer holds most of the chips; they'd be doing me quite a favor by hiring me. I think this scenario is considerably different from the target of most of the advice I've seen. Above all, I don't want to be such a prick in negotiating that it damages my chances to actually operate in a position, even if it means not negotiating at all. How should one approach a scenario like this? * * * P.S. 
If this is off topic feel free to close it -- I think it's borderline, and I'm of the opinion that it's better to ask and be closed than not to ask at all ;)"} {"_id": "3317", "title": "What's the difference between a \"developer\" and a \"programmer\"?", "text": "> **Possible Duplicate:** > What are the key differences between software engineers and programmers? > What is the difference between software engineer and software developer? What's the difference in this terminology? Is one considered more professional than the other? ### Related Questions * When someone asks you what you do, what do you say (e.g. programmer, developer, code monkey)? * What is the difference between software engineer and software developer? * What are the key differences between software engineers and programmers?"} {"_id": "4951", "title": "What are the key differences between software engineers and programmers?", "text": "What are the key differences between software engineers and programmers?"} {"_id": "127545", "title": "How to climb up the hierarchy from a programmer (codesmith) to a full-fledged Software Engineer?", "text": "> **Possible Duplicate:** > What is the difference between software engineer and software developer? > How to move from Programmer to Project Lead Writing code all your life is not really very practical in terms of growth - or is it? I have heard industry professionals emphasize focusing on UML and System Analysis & Design methodology to new and experienced programmers who want to grow further and excel as Software Engineers. Communication skills also play a vital role in managing teams and projects. I'm seeking some good advice here besides learning UML and SAD."} {"_id": "13892", "title": "How to move from Programmer to Project Lead", "text": "At my job, I'm currently a programmer, but in the next few weeks I'll be taking control of my own project. I was wondering if anyone else here has been in the same situation, and if so, what advice you can offer to help me be able to better run my project. Experience in dealing with contractors would be greatly appreciated. A little more info: * The project will have 3 people including myself, with extra people coming in when testing is needed. * The project has been programmed mainly by 2 people * I would like to contribute to the programming as I like doing it and think I can add to the program, but am afraid of how the contractors will react. I don't want to create bad feelings which may harm the project. EDIT: Forgot to mention that I'll have to be picking up communications with customers to make sure their needs are met. Any advice on talking to customers cold would be greatly appreciated. EDIT 2: This is not a new project; I'm picking it up around version 6. Sorry that I didn't make it clear before."} {"_id": "87573", "title": "What is the actual difference between Computer Programmers and Software Engineers? Is this description accurate?", "text": "According to the Bureau of Labor Statistics, this is the difference: > Computer programmers write programs. After computer software engineers and > systems analysts design software programs, the programmer converts that > design into a logical series of instructions that the computer can follow. They predict employment to increase for software engineers by 34% but to decline for programmers. Is there actually any such real distinction between the 2 jobs? 
How can one get a job designing programs (to be implemented by others)?"} {"_id": "106365", "title": "Software Engineer and IT Professionals", "text": "If you graduated as an Information Technology or a Computer Science student and you have a job that involves computer programming, such as developing websites and systems, can you be called a Software Engineer?"} {"_id": "200620", "title": "Why software engineering = programming?", "text": "I live in South America, and here \"Software Engineering\" is understood as knowledge of the SWEBOK (Software Engineering Body of Knowledge) subjects, such as: Requirements, Development methods (Waterfall, Spiral, SCRUM, etc.), Tests, Quality, Maintenance, etc. **Why do English materials such as classes, articles, and even stackoverflow questions about Software Engineering just address (basically) programming?** Example: http://stackoverflow.com/questions/131571/recommended-books-for-software-engineering Most of the recommended books in the answers there are basically about programming, and the Software Engineering 'bibles' that cover SWEBOK items, written by Pressman and Sommerville, go almost unnoticed. Actually, most SWEBOK items don't focus on programming code. Secondary question: What is the English name for the profession/subject that really requires knowing all the SWEBOK subjects (not only 'good programming')?"} {"_id": "198433", "title": "How should I describe myself on my CV? As a software developer? A software engineer? A senior developer?", "text": "I know this has been asked a lot, but I'd like to know what to present myself as on my CV and to future employers. I'm 24, and began as a C#/ASP.NET developer after graduating from University in 2010 (I studied Computer Science). Since then I have worked for a relatively small software development house (about 8 employees), and since early 2012 I've been essentially the most senior .NET developer (the others in the company have more experience, but only in drastically different/outdated languages, and are just learning .NET). My roles there are designing and developing applications. I don't usually write the specs, but the specs are essentially the customers' requirements - all the technical details, including the design of the database and application, are up to me. I also have 3 MCTS certificates and I'm working towards an MCPD. As I've essentially only got around 3 years' experience, I'm a bit unsure of how to present myself - clearly I'm not what most would consider 'senior'! Would you class me as a software developer, or something more?"} {"_id": "237868", "title": "Computer Science and the IT industry", "text": "I have a Computer Science degree and I'm employed as a software engineer. I'm regularly approached by recruiters on LinkedIn with \"programmer\", \"software developer\", or general \"software and/or computer industry\" propositions. Every now and then, though, a recruiter will approach me with an \"IT position\" opportunity. I understand that in the absolute general sense anything and everything between the keywords _computer_ and _software_ will at some point include IT, but when it comes to more specific discussions, particularly offers for an interview for an \"IT position\", I often find that to be a bit odd. It often strikes me as if I am being offered an interview for the position of a DBA, sys admin, or an IT staff manager, positions that are as foreign to me as the medical field. To avoid subjectivity, I'll make my question simple. 
Is a _software engineer_, _coder_, _programmer_ or _software developer_ in an IT position? And is the software industry (app development, scientific, multimedia/games) part of the IT industry?"} {"_id": "174815", "title": "Does what I'm doing make me a software engineer?", "text": "> **Possible Duplicate:** > What are the key differences between software engineers and programmers? I have been a software company owner for 8 years now. After years of operation I figured out that all the while, what I was doing was a software engineer's job. There are several questions in my mind that I want to ask you guys. I know that the answers will be subjective. As a software engineer, do you really need to be a seasoned programmer? Is it true that software engineers don't code, but just make diagrams, functional specs and other related documents? I also noticed that there are no SE standards or board exams to pass, and it's really a dynamic and situational job. So basically, can I just proclaim myself a software engineer based on experience and product?"} {"_id": "221503", "title": "Are there any implementations of deterministic regular expressions?", "text": "The question itself is already in the title, so here I will just provide additional details. I call a regular expression \"deterministic\" if, after converting the regular expression into a nondeterministic finite automaton in the obvious way, the result is already a deterministic finite automaton, without performing an explicit nondeterminism removal procedure. For example: * **Deterministic:** `ab(cb)*` * **Nondeterministic:** `a(bc)*b` I never use nondeterministic regular expressions. In particular, this means that matching my regular expressions never requires backtracking. I want a regular expression implementation that is explicitly designed to handle deterministic regular expressions, and takes advantage of them in order to match character streams that must be read sequentially, such as C++'s `std::istream`s. My preferred languages are Haskell, Standard ML, Rust and C++11, although I am open to suggestions for any statically typed languages."} {"_id": "107917", "title": "How long should a sprint planning meeting last?", "text": "In your experience, how long should a (Scrum) sprint planning meeting last? 8 hours? Or should it be shorter (succinct), with further discussions planned as part of the sprint (a 10-day sprint)?"} {"_id": "20407", "title": "Will high reputation in Stack Overflow help to get a good job?", "text": "In a post, Joel Spolsky mentioned that a 5-digit Stack Overflow reputation can help you to earn a job paying $100k+. How much of that is real? Would anyone like to share their success in getting a highly paid job by virtue of their reputation on Stack Exchange sites? I read somewhere that a person got an interview offer from Google because a recruiter found his Stack Overflow reputation to be impressive. Anyone else with similar stories?"} {"_id": "4180", "title": "Does having a high \"rep\" on StackOverflow help you get a job? What other community sites do?", "text": "> **Possible Duplicate:** > Will high reputation in Stack Overflow help to get a good job? Just curious, what Web 2.0 websites do employers use (if any) to pre-screen potential employees? Does any employer actually refer to a user's online \"reputation\" when hiring?"} {"_id": "20949", "title": "Will high reputation on Programmers help to get a good job?", "text": "> **Possible Duplicate:** > Will high reputation in Stack Overflow help to get a good job? 
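Returning to the deterministic-regex question above: what such an implementation boils down to is a DFA that consumes the stream one character at a time and never backtracks. A hand-built sketch for the post's ab(cb)* example (C# here only for consistency with the rest of this collection):

```csharp
// Hand-built DFA for the deterministic pattern ab(cb)*, showing
// backtracking-free, single-pass matching over a character stream.
using System;
using System.IO;

class DeterministicRegexDemo
{
    // States: 0 = start, 1 = after 'a', 2 = accepting (after "ab" or "...cb"),
    // 3 = after 'c' inside the loop, -1 = dead.
    static bool Matches(TextReader input)
    {
        int state = 0;
        int c;
        while ((c = input.Read()) != -1)
        {
            state = (state, (char)c) switch
            {
                (0, 'a') => 1,
                (1, 'b') => 2,
                (2, 'c') => 3,
                (3, 'b') => 2,
                _ => -1
            };
            if (state == -1) return false; // dead state: reject immediately, never backtrack
        }
        return state == 2; // accept iff the stream ended in the accepting state
    }

    static void Main()
    {
        Console.WriteLine(Matches(new StringReader("abcbcb"))); // True
        Console.WriteLine(Matches(new StringReader("abc")));    // False
    }
}
```

Automata-based engines in the RE2 family construct this kind of machine automatically; the sketch just makes the single-pass property visible.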
In reference to this question, do you think that having a high reputation on this site will help you get a good job? Aside from silly and humorous questions, on Programmers we can see a lot of high-quality theory questions. I think that, _if_ Stack Overflow eventually evolves into \"strictly programming related\" (which usually means \"strictly _coding_ related\"), the questions on Programmers will be much more interesting and meaningful (\"Stack Overflow\" = \"I have this specific coding/implementation issue\"; \"Programmers\" = \"Best practices, team shaping, paradigms, CS theory\"). So could a high reputation on this site help (or at least be a good reference)? And if so, more or less than Stack Overflow?"} {"_id": "56477", "title": "What do you wish you had been taught in uni before moving to industry?", "text": "Is there something that you wish you could have learnt, as part of a course or something, in uni, now that you've been in industry for a while? Maybe something such as \"I wish there was a course in time estimation\" or \"I wish I had learnt how to work on large projects\", and so on."} {"_id": "256121", "title": "How to securely implement Roles in a Windows Form application?", "text": "As an ISV, what is considered best practice for implementing Application Role-based security? In other words, only allowing users to access certain features in the application based on what Roles they belong to. We currently just use a table in our database to store this, but it has been suggested that for maximum HIPAA compliance, this isn't secure enough. My first thought was to just use Active Directory Groups, as it would seem that is what they are designed for; however, it just doesn't seem practical to rely on our clients' IT departments to create Groups and assign users to them for groups that are specific only to our application. This can't be best practice for ISVs ... At this point in my research, it seems that possibly the best solution might be to use something like Active Directory Application Mode (ADAM) (Active Directory Lightweight Directory Services) and possibly AzMan (Windows Authorization Manager)? Again, this is for a Windows Forms application, not a web application or an \"in house\" solution. If it matters, we are also in the process of transitioning our home-baked authentication to instead use Active Directory for authentication. Also of note is that this needs to be secure (HIPAA compliant). All the information I can find on this subject seems to be for developing applications for in-house use, or for web applications, and neither of these approaches feels appropriate for a WinForms application. (We are using .NET for development.) Help! :)"} {"_id": "220676", "title": "Why does Mono for Android cost money if the mono project is opensource and therefore everything based on it must be opensource?", "text": "Why does Mono for Android cost money if the Mono project is open source, and therefore everything based on it must be open source?"} {"_id": "220670", "title": "Sell product containing purchased GPL software", "text": "I recently purchased a product (a website template, licensed under the GNU GPL 2.0). My product, which is web software coded in PHP, uses that web template as its appearance. The question is: How can I sell software which includes some parts of a GNU GPL 2.0 commercial website template? Do I have to pay a license for the website template each time I sell mine? 
Thanks."} {"_id": "57782", "title": "What are the preconditions to get an experienced developer from working as a freelancer to owning a small software company?", "text": "I've been a software developer for 8 years. I've worked on about 20 projects, some smaller, some bigger. I know how to help myself by using google magic, msdn, youttube, tutorials, how-to's etc. I'm playing around with the idea to get a friend of mine (who has been a software-developer for 5 years) and start my own software-developer-company. What do you think are the preconditions to get myself from a freelancer to owning my own little company with 2-3 employees?"} {"_id": "57783", "title": "developing web applications using .NET platform - key features in training?", "text": "I'm working for some polish company and we are going to train programmers in developing web applications using .NET and I'm asking you for your opinion what should be placed in that kind of course. thanks in advance :) PS: I just need little help with research and mostly some kind of discussion about what's needed and what would be useless waste of time depending on yours experience"} {"_id": "57781", "title": "Do you need to pay $100 a month for a Server when building a website?", "text": "Do you need a virtual or dedicated Server when simply coding? Would you be able to build a website from scratch from your on PC and take a Server only one day befor going live in Beta?"} {"_id": "142926", "title": "Is there any practical trick to remember the difference between big-endian and little-endian?", "text": "I don't work every day with big-endian and little-endian problems and thus I find very difficult to remember which one is what. Recently I got an interview asking the difference between the two; since I didn't remember I decided to \"guess\" (50% chance, after all) but I failed. So, is there any wide known pratical trick to remember what is the difference between big endian and little endian?"} {"_id": "142923", "title": "Validating allowed characters or validating disallowed characters", "text": "I've always validated my user input based on a list of valid/allowed characters, rather than a list of invalid/disallowed characters (or simply no validation). It's just a habit I picked up, probably on this site and I've never really questioned it until now. It makes sense if you wish to, say, validate a phone number, or validate an area code, however recently I've realised I'm also validating input such as Bio Text fields, User Comments, etc. for which the input has no solid syntax. The main advantage has always seemed to be: **Validating allowed chars reduces the risk of you missing a potentially malicious character, but increases the risk the of you not allowing a character which the user may want to use. The former is more important.** But, providing I am correctly preventing SQL Injection (with prepared statements) and also escaping output, is there any need for this extra barrier of protection? It seems to me as if I am just allowing practically every character on the keyboard, and am forgetting to allow some common characters. Is there an accepted practice for this situation? Or am I missing something obvious? 
Thanks."} {"_id": "117561", "title": "What's the difference between Scala and Red Hat's Ceylon language?", "text": "Red Hat's Ceylon language has some interesting improvements over Java: * The overall vision: learn from Java's mistakes, keep the good, ditch the bad * The focus on readability and ease of learning/use * Static Typing (find errors at compile time, not run time) * No \u201cspecial\u201d types, everything is an object * Named and Optional parameters (C# 4.0) * Nullable types (C# 2.0) * No need for explicit getter/setters until you are ready for them (C# 3.0) * Type inference via the \"local\" keyword (C# 3.0 \"var\") * Sequences (arrays) and their accompanying syntactic sugariness (C# 3.0) * Straight-forward implementation of higher-order functions I don't know Scala but have heard it offers some similar advantages over Java. How would Scala compare to Ceylon in this respect?"} {"_id": "227908", "title": "Find possible variations of one item out of multiple baskets.", "text": "I have three baskets of balls and each of them has 10 balls which have the following numbers: > Basket 1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 > > Basket 2: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 > > Basket 3: 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 What would be the possible variations If I were to pick one ball from each basket? I guess this is called as Probability in Mathematics but not sure. How would you write this code in C# (or any other programming language) to get the correct results? Edit: Based on @Kilian Foth's comment, here is the solution in C#: class Program { static void Main(string[] args) { IEnumerable basket1 = new List { \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"10\" }; IEnumerable basket2 = new List { \"11\", \"12\", \"13\", \"14\", \"15\", \"16\", \"17\", \"18\", \"19\", \"20\" }; IEnumerable basket3 = new List { \"21\", \"22\", \"23\", \"24\", \"25\", \"26\", \"27\", \"28\", \"29\", \"30\" }; foreach (var item1 in basket1) foreach (var item2 in basket2) foreach (var item3 in basket3) { Console.WriteLine(\"{0}, {1}, {2}\", item1, item2, item3); } Console.ReadLine(); } }"} {"_id": "56786", "title": "What could be my path? Networking, programming, or something else?", "text": "Well first and foremost, I would like to give my brief description: I was an aviation student but I didn't pursue that path because I lost my interest. Now I'm an I.T. student and currently stopped schooling because of confusion. I don't know which path I should choose: could it be programming or networking? Someone told me that on networking the money is easy, the job is easy. Others told me that programming is best suited for me because I'm very skilled and excellent at figures. I want to chose networking, but I can't find my passion for it, my mind tells me but my heart doesn't... and on programming, I don't know which language I should pick or if I like it or not. A good mentor, even if only online, would be a very big plus to me, but I don't think if there are many who could spent their time on teaching a nobody... but I'm very eager to learn. My real passion is gaming! I want to work in the gaming industry, I want to be a man behind those games! I've been a gamer freak since birth. But I don't know how to get in to that industry. I don't know what to do. I don't know which path would really suit me. 
Sorry if some of you find this a pointless question, but please bear with me; this could be the turning point of my life."} {"_id": "227905", "title": "Use of fixtures in unit testing", "text": "I'm trying to understand the use of fixtures in the context of unit testing models in a web MVC system. The problem I am trying to understand is that fixture data does not give a true representation of the data that the system accepts. A simple example: say I have a user model that has email validation. Fixtures will bypass the validation, so the tests may be testing invalid data. **Edit:** What I'm trying to get at is, wouldn't it make more sense to create the test data through the application's business logic? So for the email example, if a developer accidentally created a fixture with an email that would not pass validation, this could cause false negatives in testing (e.g. testing sending emails would fail because the fixture has invalid data). The email example probably isn't the best, but I hope it makes it clearer what I'm trying to get at."} {"_id": "227902", "title": "What is stopping people from copy-pasting open-sourced codes into their own projects and releasing only the compiled binaries?", "text": "Hi, I'm new to programming, and I've always wondered what the point would be of contributing open-source code if there is no way to ensure everybody who uses the code will adhere to the licenses (or is there?)."} {"_id": "250481", "title": "Is it a bad practice to give two very different files with the same general purpose the same name?", "text": "Is it a bad practice to give two very different files with the same general purpose the same name, separating them into different directories? I'd like to keep my file names short, and both of the files have the same general purpose without being identical. I'm not sure whether or not this would be considered a bad practice in a professional programming environment. **I'd like to know what the best practice is in this situation.** Alternatively, at the expense of the name's short length, I could use: "} {"_id": "127239", "title": "understanding linux kernel", "text": "I want to learn the Linux kernel. I think it's kinda hard to understand the whole thing, as many of you mentioned in other questions; anyway, I'm really interested in the process management part of it. I want to do something really small first, and I hope one of you can help me do it: booting the PC, printing some text and shutting down. Is there a program (bootstrap program) small enough to do this, where I can just copy it to a CD, boot the PC, print something and then shut down? Thanks,"} {"_id": "250483", "title": "Potentially justifiable use case for const_cast or bad design?", "text": "I'm designing a data structure in C++, and I want to expose an interface to the user to traverse the structure in some order. Instead of creating several different types of enumerators, I want to keep it simple and provide a single enumerator, while maintaining const-correctness. My design is summarised as follows: class DataStructureNode; // The basic building block of the structure class DataStructureEnumerator { public: friend class DataStructure; DataStructureEnumerator GetNext() { ... } DataStructureEnumerator GetPrevious() { ... } private: const DataStructureNode* m_Node; ... }; class DataStructure { public: DataStructureEnumerator GetEnumerator() { ... 
} DataStructureNode* GetNodeFromEnumerator(const DataStructureEnumerator& e) { // Let's assume this function also ensures the node belongs to this structure return const_cast<DataStructureNode*>(e.m_Node); // <--- Ooo! const_cast! } const DataStructureNode* GetNodeFromEnumerator(const DataStructureEnumerator& e) const { // Let's assume this function also ensures the node belongs to this structure return e.m_Node; } }; This design has the benefit that there is only one type of enumerator that handles both mutable and immutable structures. Of course, the alternative (and the conventional) design in C++ is to have two (or more) enumerator classes, such as `DataStructureEnumerator` and `DataStructureConstEnumerator`. The conventional design comes with added code bloat, more templates, and an uglier interface (in my opinion!), but it's the norm in C++ communities. I'm interested in hearing the community's opinions on this design. Are there any potential pitfalls that I'm missing with my design?"} {"_id": "250482", "title": "Dynamic programming in Bin packing", "text": "**Problem:** Given a list L of objects with possible sizes from the set S={1,2,4,8} and an unlimited supply of bins of size 16 each, we have to use the minimum possible number of bins to pack all objects of L. I am also searching for an optimal (or near-optimal) solution, using dynamic programming or otherwise, in the following scenarios: 1. L is not given offline; instead we are asked to fit objects one by one without knowing future requests (1-D online vector bin packing). 2. Sizes can be from S×S and bins are of capacity (16,16) each (2-D vector bin packing). **Assumption:** Optimal packing means the best possible packing among all possible packings, i.e. the packing with the minimum number of bins used. Actually, generalized vector bin packing is NP-hard, but I think that due to the standard sizes from a finite set, more efficient and better solutions may exist. **My approach:** Clearly, 1-D offline can be packed in <= optimal+1 bins using clever pairing of objects. But I am stuck on the other 2 aspects asked above. I know about the First Fit, Best Fit and First Fit Decreasing algorithms, but I am searching for a problem-specific solution."} {"_id": "210442", "title": "Automating database deployment using a CMS", "text": "My team are developing Umbraco websites and looking to improve the automation of our database deployment. Currently we have automated builds and deployments of our source code using Kiln and TeamCity, but cannot work out how to handle the database deployment. The challenge is applying changes made in the development environment, while keeping new database content that has been added in the live environment via the CMS. Currently our changes are deployed manually using Courier, but we aim to convert this into a '1 click deployment', which is what we desire for running our automated tests and for ease of deployment. How can we automate this without wiping new content added in the CMS?"} {"_id": "21870", "title": "Can you be Agile without doing TDD (test driven development)?", "text": "Is it possible to correctly call yourself (or your team) \"Agile\" if you don't do TDD (Test-Driven Development)?"} {"_id": "253820", "title": "Approach for packing 2D shapes while minimizing total enclosing area", "text": "Not sure on my tags for this question, but in short .... I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. 
For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top & bottom, one can pack more objects into the same space than if all tops were in the same direction. Crude example below with letter \"T\"-shaped parts: [flattened ASCII sketch, originally several lines, comparing T-shaped parts packed with all tops facing the same way vs. alternated top/bottom so the stems interlock] Right now we are solving the problem by something like this: 1. using CAD software, make actual models of how things fit in crate boxes 2. make estimates of actual crate dimensions & write them into an Excel file (1) is a crazy amount of work, and as the result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say \"sorry, no data!\". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate, given its shape, but maybe I do.. **Question** Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like the letter T, and some parts look like rectangles, which algorithm can I use to give me a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed in the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? **My thought / Approach** My naive approach would be to define a way to describe the positions of parts, place the first part, and compute the total enclosing area & dimensions. Then place the 2nd part in 0-degree orientation, repeat, place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful due to the long lengths of parts). Proceed using brute force, \"tacking on\" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see the 3rd pictorial example above with the letter T). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is a human element involved as well, i.e. like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints. So there will have to be some customization."} {"_id": "253821", "title": "How do programmers handle several versions of the same program?", "text": "A lot of apps, for instance, have a free version and a pro version. 
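For the two packing questions above, the 1-D {1,2,4,8}-into-16 case has a particularly simple sketch: first-fit decreasing. Because each size divides the next (1|2|4|8|16), FFD is reported to be optimal for such divisible item sizes; for the online variant, drop the sort and first-fit each item as it arrives.

```csharp
// First-fit decreasing for items of sizes {1,2,4,8} into bins of capacity 16.
using System;
using System.Collections.Generic;
using System.Linq;

class BinPacking
{
    const int Capacity = 16;

    static List<List<int>> FirstFitDecreasing(IEnumerable<int> items)
    {
        var bins = new List<List<int>>();
        var freeSpace = new List<int>(); // remaining capacity per bin, parallel to bins

        foreach (int item in items.OrderByDescending(x => x))
        {
            int i = freeSpace.FindIndex(free => free >= item); // first bin that fits
            if (i < 0)
            {
                bins.Add(new List<int>());
                freeSpace.Add(Capacity);
                i = bins.Count - 1;
            }
            bins[i].Add(item);
            freeSpace[i] -= item;
        }
        return bins;
    }

    static void Main()
    {
        var bins = FirstFitDecreasing(new[] { 8, 4, 4, 2, 8, 1, 2, 4, 8, 8, 1 });
        Console.WriteLine(bins.Count + " bins"); // total size 50 -> 4 bins here
    }
}
```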
Do programmers, when a new update is released, remove lines of code that shouldn't be available in the free version? Visual Studio, for example. Or demos. Or a lot of things, actually."} {"_id": "104752", "title": "Developing an internet-enabled application as a Kiosk on Windows 7", "text": "I am finalizing development of a desktop Java application that communicates with an outside web server, and now I need to start seriously considering deployment. This application will run on a large touchscreen all-in-one workstation running Windows 7. It will be located in a public area and thus must be LOCKED-DOWN, Hannibal Lecter style. Early in the project nobody really concerned themselves with this fact, just assuming that we could buy some magical software for Windows 7 that would automatically take care of all this; however, I am finding now that this looks to be a LOT more complicated than my manager ever thought. I need to: * Lock down the standard hot-keys (ALT+TAB, ALT+CTRL+DEL, etc...) * Prevent the user from opening ANY programs other than the kiosk application and its spawned executables * Prevent the user from closing the application * Start the kiosk application on startup (this can be done without kiosk software) * Auto-login to Windows on reboot (Windows Updates, power failure, bratty kid pressing the power button, etc...) * Administrator passcode escape sequence for routine maintenance by desktop support professionals. To my dismay I am having a really hard time finding software that contains the whole package, and am finding numerous swaths of competing information on the best way to do this. I am not necessarily looking for free or open source software and am willing to pay for software that can help me achieve this. Have any of you ever written kiosk software before, and if so what approaches have you taken to do this?"} {"_id": "104754", "title": "Remake old web forms application in asp.net mvc", "text": "I've inherited the code maintenance of a complex web site for a customer that continuously requests enhancements for it. This application took years to develop and I'm facing increasing difficulty enhancing it. It's organized, but at the same time the .NET code is mixed with AJAX, JavaScript and old-school HTML such that it takes me days to figure out how some pages work. First off, I'm not new to ASP.NET, but I'm not familiar with the new MVC stuff; from what I've read it seems to be a step in a better direction. The current code is all in one big DLL. The application code is divided into multiple folders representing the different departments, and each department has its own pages for handling general stuff like employee management, reports, budget and also their own information. For example, even though each department uses a different webpage for employee information handling, they want different fields, and so it was simpler to create different pages than to use a single page that adapts to each department. But it is really a nightmare to maintain right now, and I would like to create a parallel project where I could start fresh, create a better structure and from there start migrating the old code to this new environment and refactor it as I go. The idea is to migrate the old application to a new web site that has a similar look, while keeping both operating until everything is running in the new site. It may sound insane, but it really is used extensively by hundreds of people every day, and it bugs me that I have to modify crappy code to make it work. 
How would you go about this issue? Thanks. [edit] I found this link, Things You Should Never Do, on another post; it is very much to the point of my question."} {"_id": "253828", "title": "How should I design a wizard for generating requirements and documentation", "text": "I'm currently working in an industry where extensive documentation is required, but the apps I'm writing are all pretty much cookie-cutter at a high level. What I'd like to do is build an app that asks a series of questions regarding business rules and marketing requirements to generate a requirements spec. For example, there might be a question set that asks \"Does the user need to enter their age?\" and a follow-up question of \"What is the minimum age requirement?\" If the inputs are \"yes\" and \"18\", then this app will generate requirements that look something like this: \"The registration form shall include an age selector\" \"The registration form shall throw an error if the selected age is less than 18\" Later on down the line, I'd like to extend this to do additional things like generate test cases and even code, but the idea is the same: generate some output based on rules determined by answering a set of questions. Are there any patterns I could research to better design the architecture of such an application? Is this something that I should be modeling as a finite state machine?"} {"_id": "17790", "title": "What are some things to be aware of when getting ready to hand a project off?", "text": "I'm currently the sole developer/architect of a fairly large web application (ASP.NET MVC stack, roughly 150K+ lines of code) and the end of development is on the horizon. As such, I'm starting to think about what needs to be done for the hand-off of the project, and I want to make sure I do the right thing for anyone that has to maintain the project in the future. What are some things to be aware of when getting ready to hand a project off to another developer or team of developers for maintenance?"} {"_id": "241126", "title": "Where can I find better designed nginx documentation?", "text": "Actually, I have a sub-question here: \"Is the official Nginx documentation open-sourced?\" Nginx is good software, but I found its official documentation (http://nginx.org/en/docs/) really hard to read, because it lacks design. Especially compared with the documentation on this site (readthedocs.org). So I decided to move it to readthedocs.org little by little, if it's open-sourced. And I'm confused: as the 2nd biggest web-server software in the world, is there really no one else who, like me, finds its documentation hard to read? Maybe there are many and I just don't know of them. If you know any other site which has better-designed nginx documentation, please tell me, so I won't need to do the rebuild task."} {"_id": "127236", "title": "What is the required background to apply Machine Learning to Finance?", "text": "I'm interested in machine learning in relation to the stock market (predicting future values of stocks etc). What topics would I need to learn - e.g. what branch of AI to look into etc.? And what libraries/tools do I need?"} {"_id": "204605", "title": "Avoid malicious code while dynamically loading classes with ClassLoader", "text": "## Background One of the advantages of decoupled components in systems is that you can extend the system without having to touch the existing code. 
Sometimes you don't even have to recompile the old code, because you can dynamically load classes from disk like this: clazz = Demo.class.getClassLoader().loadClass(\"full.package.name.to.SomeClass\"); That allows for a kind of plug-in architecture (give or take). ## Question How do you prevent malicious code from running when dynamically loading a class from disk using `ClassLoader`?"} {"_id": "64949", "title": "Programming style: Recurring error checks", "text": "Hey, I have a question about programming style, because in my current code I am using a bigger function which calls some smaller functions, and all of these need to be error-checked. So, something like this: int bigFunction() { /* some computations */ if(smallFunction1() == -1) { free(mem1); free(mem2); fclose(file); unlink(filename); return -1; } if(smallFunction2() == -1) { free(mem1); free(mem2); fclose(file); unlink(filename); return -1; } if(smallFunction3() == -1) { free(mem1); free(mem2); fclose(file); unlink(filename); return -1; } /* more computations and stuff in bigFunction */ } I think you can clearly see my problem: the cleanup code after any of these functions fails is always the same, and I feel that repeating this code again and again will make my code more and more unreadable. How do I deal with this problem? gotos came to mind, but in my programming courses at university I was told never to use gotos (though I forget the reason why...)"} {"_id": "99959", "title": "Project problems from a Java beginner", "text": "A week ago I asked the question First big project, how to get started: make menus, save to hard drive, etc. I tried to do everything without doing a GUI with Swing, but I realized that for every view I wanted to present to the user I had to make a new class. Right now I have 3 menu classes (mainMenu, computerCategoryMenu, laptopCategoryMenu). I stopped because I would need to make another 6 menu classes, and then another 6 if the user registered, and another 6 for the administrator of the site. It doesn't look right to me. Furthermore, everything looks too static to me. I don't know how to make the administrator add or delete products. Should I store all my products in a HashMap and then get them to show in different menus? Do you think that doing a GUI with Swing will make things much easier?"} {"_id": "167792", "title": "Tiling Problem Solutions for Various Size \"Dominoes\"", "text": "I've got an interesting tiling problem. I have a large image of squares (size 128k, so 131072 squares) with dimensions 256x512... I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) and have no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead a certain algorithm is required. 
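For the ClassLoader question above, one common mitigation is to verify a binary's integrity before loading it at all, e.g. a signature or hash check against an allowlist. A hedged sketch (written in C# as an analogue of the Java scenario; the hash value is hypothetical):

```csharp
// Only load plugin binaries whose SHA-256 hash is on a trusted allowlist.
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Security;
using System.Security.Cryptography;

static class TrustedPluginLoader
{
    static readonly HashSet<string> TrustedHashes = new HashSet<string>
    {
        "3A7BD3E2360A3D29EEA436FCFB7E44C735D117C42D1C1835420B6B9942DD4F1B" // example only
    };

    public static Assembly LoadIfTrusted(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            string hash = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
            if (!TrustedHashes.Contains(hash))
                throw new SecurityException("Untrusted plugin: " + path);
        }
        return Assembly.LoadFrom(path); // load only after the integrity check passes
    }
}
```

On the Java side, the analogous tools are signed (and sealed) JARs plus a restrictive SecurityManager policy for the loaded code.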
My approach to this solution has been as follows: 1) Create a mini-image of size 8x8 or 16x16. 2) Fill this image randomly and following the algorithm specified above so that the desired probability of the entire image is realized in the mini-image. 3) Create N of these mini-images and then randomly successively place them in the large image. Unfortunately there are some downfalls to this simplification. 1) given the small size of the mini-images, nailing an exact probability for the entire image is not possible. Example if I want p(2x1)=P(1x2)=0.4, the mini image may only give 0.41 as the closes probability. 2) The mini-images create a pseudo boundary where no overlaps occur which isn't really descriptive of the model this is being used for. 3) There is only a fixed number of mini-images so i'm not sure how random this really is. I'm really just looking to brainstorm about possible solutions to this. My main concern is really to nail down closer probabilities, now one might suggest I just increase the mini-image size. Well I have, and it turns out that in certain cases(p(1x2)=p(2x1)=0.5) the mini-image 16x16 isn't even iteratively solvable.. So it's pretty obvious how difficult it is to randomly solve this for anything greater than 8x8 sizes.. So I'd love to hear some ideas. Thanks"} {"_id": "153843", "title": "Undefined behaviour in Java", "text": "I was reading this question on SO which discusses some common undefined behavior in C++, and I wondered: does Java also have undefined behaviour? If that is the case, then what are some common causes of undefined behaviour in Java? If not, then which features of Java make it free from such behaviours and why haven't the latest versions of C and C++ been implemented with these properties?"} {"_id": "153845", "title": "HP openview servicedesk: looking for api information ?", "text": "Good day folks. I am very confused in this situation. I need to implement system which will be based on HP open view service desk 4.5 api. But this system are reached the end of supporting period. On oficial site no information available **I am looking an information about this API(articles, samples etc)**. Now i have only web-api.jar and javadoc. Methods in javadoc is bad documented. If you have any info, please share it with me. Thanks. **Second question: there are methods for api(with huge amount of methods) understanding if it not documented or information is not available?** PS:If it question is not belong here i will delete it."} {"_id": "167797", "title": "Could Apple and Microsoft allow the GPLv3 on their locked-down devices?", "text": "It seems that both Apple and Microsoft prohibit GPLv3-licensed software in the app stores for their locked-down devices (i.e. iOS, Windows Phone and the Metro part of Windows). I have heard various explanations for this. However: Would they even be able to allow this license in their app stores if they wanted to, or does the GPL's anti-tivoization clause already prohibit this?"} {"_id": "64944", "title": "Studies on breakdown of various costs associated with a Software project", "text": "I am looking for links and advice on studies done on the breakdown of costs associated with Software development. In particular I am looking on what percentage of effort is testing vs programming, and how it changed with team sizes, duration of project and similar factors."} {"_id": "167795", "title": "In what practical ways is it good to remember the memory/pointers model?", "text": "A variable refers to a value. 
A variable is also stored at a memory address. People say that it's good to have this memory model in mind. Is that true? What is some sample code that shows this being beneficial in practice? You don't have to use programming; you can use a real-world analogy if you like. edit: they keep removing the helpful answer I summed up here -- no idea why"} {"_id": "81976", "title": "Is it possible to refactor inheritance to composition when virtual methods are called inside the base class?", "text": "Let's say I have a class called Country and two subclasses called AI and Player. The Country class has a number of virtual methods to allow player-specific or AI-specific behavior. I want to make it possible for the player to switch countries during a game. To this end, I am refactoring the inheritance relation to composition. Then I will be able to switch out a Player's internal instance of Country for another at any time. But this brings up a bit of a problem: when an _external_ object attempts to call one of these virtual methods on a Player, it is delegated correctly to the Country method after any appropriate processing has been done. But when an _internal_ Country method calls one of the virtual methods, the code in the Player or AI version no longer has a chance to run. So now I'm left with updating every one of those methods to make sure both versions are called, which in itself is not a trivial task (infinite loops, anyone?). This seems much harder than it ought to be. I've done similar refactorings before and never run into this issue. Why? Am I missing some blindingly obvious observation? * * * **Edit** for a concrete example now that I have access to the code again: class Country { public: virtual void AddMoney(float money) { _vMoney += money; } void TakeLoan() { float loan = ...; AddMoney(loan); // <-- does not route through AI or Player implementation } } class AI // : public Country { private: Country *pCountry; public: virtual void AddMoney(float x) { pCountry->AddMoney(x); // formerly Country::AddMoney(x) AllocateMoney(x); } }"} {"_id": "75096", "title": "What can I expect when moving from University to a real programming job?", "text": "I'm about to move into my final year at university, and it's finally kicked in that in just over a year from now, I'll be doing my best to secure a job. Yet I still have no clue about the level of proficiency expected of graduates trying to make their way into the programming industry. I've read that they'll weed out the non-programmers with some FizzBuzz-style questions, but after that, what sort of people am I likely to be up against? And what skills would be worth studying to push myself ahead of the competition? I'm currently trying to get comfortable with inheritance/polymorphism, currently just with my own code; the next task I've set myself is gaining confidence in extending existing code. I've studied PHP, JavaScript (+jQuery), ActionScript (and probably some others I can't think of off the top of my head) as hobbies, Java at university, and am currently pushing my way through C# (in an attempt to kill two large birds with one stone). Is this panicking undergraduate headed in the right direction? D="} {"_id": "175082", "title": "How do I trust an off site application", "text": "I need to implement something similar to a license server. 
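For the inheritance-to-composition question above, one common fix, sketched here in C# rather than the post's C++, is to hand the inner Country a back-reference to its wrapper, so that internal calls route through the wrapper's behaviour again instead of bypassing it:

```csharp
// Back-reference sketch: internal Country calls stay "polymorphic" after
// the switch from inheritance to composition.
using System;

interface IMoneyHandler { void AddMoney(float amount); }

class Country : IMoneyHandler
{
    private readonly IMoneyHandler _owner; // the wrapper (Player/AI), if any
    private float _money;

    public Country(IMoneyHandler owner = null) { _owner = owner; }

    public void AddMoney(float amount) { _money += amount; }

    public void TakeLoan()
    {
        float loan = 100f; // illustrative value
        // Route through the wrapper so Player/AI behaviour is not bypassed.
        (_owner ?? (IMoneyHandler)this).AddMoney(loan);
    }
}

class AI : IMoneyHandler
{
    private readonly Country _country;
    public AI() { _country = new Country(this); }

    public void AddMoney(float amount)
    {
        _country.AddMoney(amount);                      // delegate the base work
        Console.WriteLine("AI reallocating " + amount); // AI-specific step still runs
    }

    public void TakeLoan() => _country.TakeLoan();
}
```

There is no infinite loop because AI.AddMoney calls the Country's AddMoney directly, which does not delegate back out.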
This will have to be installed off-site at the customers' location and needs to communicate with other applications at the customers' site (the applications that use the licenses) and an application running in our hosting center (for reporting and getting license information). My question is how to set this up in a way I can trust that: 1. The license server is really our application and not something that just simulates it; and 2. There is no \"man in the middle\" (i.e. a proxy or something that alters the traffic). The first thing I thought of was to use client certificates, and that would solve at least 2. However, what I'm worried about is that someone just decompiles the license server (this is built in .NET), alters some logic and recompiles it. This would be hard to detect from both connecting applications. This doesn't have to be absolutely secure, since we have a limited number of customers whom we have a trust relationship with. However, I do want to make it more difficult than a simple decompile/recompile of the license server. I primarily want to protect against an employee or nephew of the boss trying to be smart."} {"_id": "186245", "title": "Good serialization solution for communication between Python AND Haskell programs?", "text": "I found this: http://msgpack.org/ But I've never used it, so I don't know whether it is much good. Does anybody have proposals? Basic requirements: * serialize as many data structures typically used in both languages as possible, with no problem with nesting * as reliable/bug-free as possible * high performance is nice to have, but not critical * as simple as possible to use, but not simpler"} {"_id": "167425", "title": "Should a domain expert make class diagrams?", "text": "The **domain expert** in our team uses UML class diagrams to model the **domain model**. As a result, the class diagrams are more **technical models than domain models** (they serve as a sort of technical specification for developers, because the developers don't have to do any design work; they just have to implement the model). In the end, the domain expert ends up doing the job of the architect/technical expert, right? Is it normal for a domain expert (not a developer or technical profile) to do class diagrams? If not, what kind of modeling should he be using?"} {"_id": "186247", "title": "I have a previous invention (software / framework) that I plan to use on my new job. What happens to my copyright if I improve it during the job?", "text": "I have filled out the standard form where you list your previous inventions before starting a new job, so the employer is legally aware you hold the copyright over them. But if I want to use this invention (software code / framework) in my new job, would they have any copyright claims on it if, for example, I change/improve something in my code while working for them? I have heard of something like shared ownership: you would own what was done before, and the employer would own any improvements you made to it while employed. That kind of sucks and takes away any incentive you would have to do improvements on your product, since _once improved, the improvement is gone_. 
:( Has anyone gone through this situation before and could share some ideas on how I can protect my code?"} {"_id": "186240", "title": "Choosing an open source licence for an embedded system library", "text": "I'm writing a few libraries that I want to release as open source projects, but I want to find a licence that means I'll receive the credit when it is used in a project. I'm not fussed about anything else; if people want to profit off of it then that's fine. So what licence is most appropriate? The only factor for me is that I receive the credit for producing the library if it is ever used."} {"_id": "232227", "title": "Composite repository with plugins for multitenant app", "text": "I'm working on an app that has a flexible storage configuration. For example: public class Repo1:IRepository { public void DoSomething(int tenantId, string someKey) { ... } } This repo can be combined into Aggregate or Composite repositories. Also there are connectors to external applications: public class Connector1:IConnector { public void DoSomething(string someKey) { ... } } Connectors are not aware of the tenant they are working in, because each tenant has a unique combination of connectors configured for it. Currently I wrap connectors in a special repo implementation: public class ConnectorRepo:IRepository { public void DoSomething(int tenantId, string someKey) { var tenantConnectors = ConnectorProvider.GetConnectors(tenantId); foreach(var connector in tenantConnectors) { connector.DoSomething(someKey); } } } Basically the connector and repo share the same method signatures, except the repo methods have a tenantId parameter. I am thinking of making two repository contracts, with and without tenantId, and wrapping repos without multitenancy support (e.g. connectors) with tenant-aware facades, but such a high-level abstraction feels fragile to me. I would be very grateful for any ideas and suggestions."} {"_id": "186242", "title": "Is MSDN / Azure Active Directory information available in different formats?", "text": "I'd like to read the documentation of Azure Active Directory on my Kindle (or similar device), but don't want to use \"Print to Kindle\" for each and every page. Is the Microsoft documentation for this technology available in other formats (.chm, PDF, etc.)? If an alternative format is available, then I can adapt that to my needs. http://msdn.microsoft.com/en-us/library/jj673460.aspx"} {"_id": "232229", "title": "Understanding dependency injection", "text": "I'm reading about dependency injection (DI). To me, it seems a very complicated thing: as I was reading, it referenced inversion of control (IoC) as well, so I felt I was in for a journey. This is my understanding: Instead of creating a model in the class which also consumes it, you pass (inject) the model (already filled with interesting properties) to where it is needed (to a new class which could take it as a parameter in the constructor). To me, this is just passing an argument. I must have misunderstood the point? Maybe it becomes more obvious with bigger projects? My understanding is that non-DI would be (using pseudo code): public void Start() { MyClass myClass = new MyClass(); } ... public MyClass() { this.MyInterface = new MyInterface(); } And DI would be public void Start() { MyInterface myInterface = new MyInterface(); MyClass myClass = new MyClass(myInterface); } ... 
public MyClass(MyInterface myInterface) { this.MyInterface = myInterface; } Could someone shed some light, as I'm sure I'm in a muddle here?"} {"_id": "60692", "title": "How do I create a mature SW design before the implementation itself? And how do I cope with changes?", "text": "I am doing a project at my university where, for the first time in my life, I have to create a system architecture/design, and I will be the head of a group of 4 students who are at the beginning of the course. Basically, I will be able to do everything: I have almost complete independence to choose what exactly the software should do (goals, features, etc.), then I have to design it. Because it is a university project, I do have a lot of time to think about the software concept and design it. My plan is to create a draft of the idea and refine it with the professor. Once we do have a concrete idea, I want to start the SW design itself. And this is the point where my questions arrive: I know that creating a design that contemplates everything and will not change during development is impossible; because of that, I am sure the design will be changed once the implementation begins. However, I want to present a mature design, based on logical decisions and good practices, but, of course, I won't be able to develop the SW to identify the traps I am falling into (this will be the students' task). What tricks do you use to create a mature design and to avoid traps before starting to implement? How do you approach and \"study\" the software proposal to find inconsistencies or points needing clarification? How do you incorporate changes into your design without ruining it? (Many of my designs start out very nicely, but in the end they are ruined by the changes.)"} {"_id": "209182", "title": "Is Java still king of cross-platform compatibility? Is the answer still Swing?", "text": "So basically, my company is looking to create an app that can be distributed on Linux, Mac OS X, and Windows. We are hoping for a cross-platform solution. I have experience working with Java Swing from years ago and thought it was reasonable, but not great. It did, however, assuredly work on all platforms just fine. Are Java and Swing still the be-all and end-all of write once, run anywhere, or are there other options for desktop apps?"} {"_id": "209181", "title": "Diagramming messages on a service bus", "text": "I'm looking for a way to clearly diagram how multiple applications communicate via a service bus. The best I've come up with so far is a sequence diagram, but I really don't like that. Sequence diagrams necessarily relate some sort of sequence, and that's really not what I want. Furthermore, since every service communicates with the service bus and sequence diagrams place each service on a separate column, as the number of services increases, you end up with a lot of overlapping arrows. For example, given 4 services `FOO`, `BAR`, `BAZ`, and `QUX`: * `FOO` publishes messages of type _publish_ and _regen_. * `BAR` publishes messages of type _requeue_. * `BAZ` subscribes to messages of type _publish_, _regen_, and _requeue_, and publishes messages of type _transmit_. * `QUX` subscribes to messages of type _transmit_. * Any service can publish any message type at any time (there is no implied sequence). What kind of diagram should I use to clearly and unambiguously represent this information? 
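To make the relationships easier to check against any proposed diagram, here is the same topology written out as a sketch (the types and structure here are hypothetical, purely for illustration):

```java
import java.util.List;
import java.util.Map;

// A minimal sketch of the pub/sub topology described above; names are hypothetical.
public class BusTopology {
    // message type -> services that publish it
    static final Map<String, List<String>> PUBLISHERS = Map.of(
            "publish",  List.of("FOO"),
            "regen",    List.of("FOO"),
            "requeue",  List.of("BAR"),
            "transmit", List.of("BAZ"));

    // message type -> services that subscribe to it
    static final Map<String, List<String>> SUBSCRIBERS = Map.of(
            "publish",  List.of("BAZ"),
            "regen",    List.of("BAZ"),
            "requeue",  List.of("BAZ"),
            "transmit", List.of("QUX"));
}
```

Whatever diagram I end up with should encode exactly these two mappings and nothing more.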
Here's the best I've come up with so far: ![Sequence diagram](http://i.stack.imgur.com/Dhhqg.png)"} {"_id": "57433", "title": "Study materials for MySQL certification?", "text": "I'm preparing for the MySQL certification, nowadays officially titled: Oracle Certified Professional, MySQL 5.0 Developer. After looking through the MySQL forum, it looks like most people recommend this book: http://www.amazon.com/MySQL-5-0-Certification-Study-Guide/dp/0672328127/ref=sr_1_1?ie=UTF8&qid=1299972594&sr=8-1 As far as I learned, it was the official preparation source at the time when MySQL was controlled by MySQL AB and Sun. Now, however, Oracle doesn't officially recommend this book. To be precise, I don't know what they recommend. I could only find this \"value package\": http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=532 Can someone who got the MySQL certification confirm that this book is what they used? Also, if there are any other moderately priced study materials out there, please let me know. **Update** I passed the second exam on Monday, March 28th. I have to say the second exam was more complicated. So I have passed both exams now and am waiting for my package from Oracle. It took me approximately 2.5 weeks total to prepare for both exams, with 0.5 weeks for the 1st and 2 weeks for the 2nd. Also, to everyone who wants to take it: read the book thoroughly and go through all the questions on the CD that comes with the book! This helps immensely. Questions on the exam are very similar to those on the CD."} {"_id": "26596", "title": "Metric by which to hold developers accountable", "text": "I asked a question on lines of code per hour and got torn a new one. So my matured follow-up question is this: If not lines of code, then what is a good metric by which to measure (by the hour/day/unit-of-time) the effectiveness of remote programmers?"} {"_id": "203896", "title": "How do you manage a team of software developers?", "text": "I am reading the book \"Cracking the Sales Management Code\", and the book says that salespeople can be more or less managed (\"measured\") by measuring whether they are consistently improving on their Sales Objectives, such as the ability to get new sales, the length of the sales cycle, the deal-win/deal-loss ratio, hitting sales quota and so on. And I wonder whether it is possible to manage (or measure) software developers in this way. Now I am fully aware that managing people by numbers is a very tricky thing to do; it is very possible for managers to set the wrong metric, and hence end up mismanaging the software developers. But still, I wonder: are there any books/literature that explore this issue? **Note: When I say I want to \"measure\" my developers, I don't mean that I want to use the numbers to penalize them or to \"rank\" them. My purpose is simple: I want to have the numbers (the meaningful numbers) so that I can help them improve their software development skills (in whatever sense).**"} {"_id": "45641", "title": "Annual Review: what hard data should a developer bring?", "text": "Many companies have annual reviews and/or performance appraisals for their employees. I've heard that it's generally a good idea to muster up some hard data to analyze and bring to the review. The better the data, the better the chances of it helping to support a promotion or raise. What I mean by hard data are _tangible numbers_ -- something that can be _measured_ and/or _calculated_. Obviously data that a developer would have access to. 
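For instance, the kind of back-of-the-envelope calculation I have in mind, with numbers invented purely for illustration:

```java
// Invented numbers: support tickets per month against one developer's project.
public class TicketTrend {
    public static void main(String[] args) {
        int[] ticketsPerMonth = {14, 11, 9, 6, 4, 3}; // six months of data

        double firstAvg = (ticketsPerMonth[0] + ticketsPerMonth[1] + ticketsPerMonth[2]) / 3.0; // ~11.3
        double lastAvg  = (ticketsPerMonth[3] + ticketsPerMonth[4] + ticketsPerMonth[5]) / 3.0; // ~4.3
        double drop = 100 * (firstAvg - lastAvg) / firstAvg;

        System.out.printf("Ticket rate dropped by about %.0f%%%n", drop); // ~62%
    }
}
```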
Something _intangible_ would be how beautiful the code a developer has written is. I think this would be very hard to measure, nor would upper management care for it. My question is: For a software developer, **what kind of hard data should be analyzed and brought to a review to help highlight good work that was done?** An example I've been given in the past: the support tickets produced by each project a developer was involved in. Are the numbers low? Is the rate per month getting lower? Etc."} {"_id": "177167", "title": "How to tell whether your programmers are under-performing?", "text": "I am a team lead with 5+ developers. I have a developer (let's call him **A**) who is a good programmer, and who writes good, clean, easy-to-understand code. However he is somewhat difficult to manage, and sometimes I wonder whether he is really under-performing or not. 1. Our company requires the developers to indicate their work progress in the bug tracker we use, not so much to monitor the programmers as to keep the stakeholders apprised of the progress. The thing is, **A** only updates a task's progress when it is done (maybe 3 weeks after it is first worked on), and this leaves everyone wondering what is going on in the middle of the development week. He wouldn't change his habit despite repeated probing. (It's OK, developers hate paperwork; I do, too.) 2. In the recent 2-3 months he has been on leave quite often due to various events -- either he is sick, or has to attend a lot of personal events, etc. (It's OK, bad things happen in a row. It's just a coincidence.) 3. We define sprints, or roadmaps, for each month. At the beginning of the sprint, we discuss the amount of work each of the developers has to do in the sprint, and the **developers get to set the amount of time they need for each task**. He usually won't be able to complete all of them. (It's OK, developers regularly miss deadlines through no fault of their own.) 4. I am based in Singapore. Not sure if that matters. Yeah, Asians are known to be reticent, but does that matter? If only one or two of the above events happened, I wouldn't feel that **A** is under-performing, but they all happen together. So I have the _feeling_ that **A** is under-performing and maybe -- God forbid -- slacking off. This is just a feeling based on my years of experience as a programmer. But I could be wrong. It is notoriously hard to measure the work of a programmer, given that no two tasks are alike, and there is no standard, objective way to measure the commitment of a programmer to your company. It is downright impossible to tell whether the programmer is doing his job or slacking off. All you can do is trust them -- yeah, trusting and giving them autonomy is the best way for programmers to work, I know that, _so don't start a lecture on why you need to trust your programmers, thank you very much_ -- but if they abuse your trust, can you know? ## Outcome: I had a straight talk with him regarding my perception of his performance. He was indignant when I suggested that I had the feeling he wasn't performing at his best level. He felt that this was a completely unfair feeling. I then replied that this was my feeling and I didn't know whether my feeling was right or not. He would have none of this and ended the discussion immediately. Before he left he said that he \"would try to give more to the company\", in a very cold tone. I was taken aback by his reaction. I am sure that I offended him in some way. 
Not too sure whether being so frank with him was the right thing to do, though. * * * **My question is: How can you tell whether your programmers are under-performing? Surely there are experienced team leads who know better than me about this?** * * * ## Extra notes: 1. I hate micromanaging. So all that we have for our software process is sprints (where tasks get prioritized and assigned, and, at the end of the month, a review of the amount of work done). Developers are required to update their tasks as they go along every day. 2. There is no standup meeting, or anything of the sort, mainly because we have the freedom to work from home and everyone cherishes this freedom. 3. Although I am the one who sets the deadline, the developers provide the estimate for each task and I decide -- based on the estimate -- which tasks go into a particular sprint. If they can't finish the tasks at the end of the sprint, I push them to the next one. So theoretically one could do only 1 or 2 tasks during the whole sprint and then push the remaining 99 tasks to the next sprint, and still be fine as long as he justifies this -- in the form of daily work progress updates."} {"_id": "29062", "title": "What is an example of a good SMART objective for a programmer?", "text": "Following on from this question, I wondered if folk might be able to suggest some samples of what might be considered a \"good\" objective in a periodic review cycle for a programmer? Let's define SMART from the most popular definitions in the Wikipedia entry: * Specific * Measurable * Attainable * Relevant * Time-bound"} {"_id": "142098", "title": "How to measure team productivity in an Agile project environment?", "text": "Are there any techniques other than velocity? What are the pros and cons of using them, from your experience?"} {"_id": "193282", "title": "How to measure team productivity?", "text": "The upper management at our company has laid out a goal for our software team to be \u201c15% more productive\u201d over the next year. Measuring productivity in a software development environment is very subjective, but we are still required to come up with a set of metrics. What sorts of data can we capture that would measure our team\u2019s productivity?"} {"_id": "34122", "title": "KPIs for Programmers", "text": "Do you know any Key Performance Indicators for developers? What should be measured and monitored?"} {"_id": "194582", "title": "How can one measure contributions to a project?", "text": "Has anyone ever come across this problem? When you have a team of developers working on a project, how can you measure their contributions to said project? Is there a \"formal\" way of doing it? Number of commits? Number of bug fixes? Lines committed? Or maybe on a ticket basis? I'm trying to think of a workflow where a group of developers can work seamlessly, interchanging tasks, and at the end of the month get paid for their contributions (or merit, if you will). I hope this question is not entirely out of place here; if it is, feel free to close it."} {"_id": "62817", "title": "How should developer performance be measured?", "text": "In many companies there is a formal procedure for reviewing employees' work. For example, a salesperson can say she'll sell one million units at the beginning of the year. When she comes up for review a year later, she says she's sold two million units. Thus, her manager decides to promote her. But what should a developer say? I'll fix a million bugs, I'll write a hundred unit tests? 
I can't imagine many things that can be measured here, especially if you don't have a roadmap for the year and if you are working on maintenance. What types of solid performance metrics work for programmers? Or are performance measurements not applicable to developers at all?"} {"_id": "215986", "title": "How long should I wait to autosave user input?", "text": "I have a table where a user can edit data by simply choosing a field, then editing its value. I want to fire an update function to automatically save the data to a MySQL database. However, I think that it would be inefficient to fire that function on literally every change, e.g. every single keypress. How long should I wait to save my input?"} {"_id": "101988", "title": "Should developers accept overtime/weekend work/denied bonus payments?", "text": "I assume there are many developers in a situation like this: * you are an employed software architect/engineer * you are good at your job * your company needs more good developers than they can find * there are many options to get a new job or become self-employed * work is not your first priority (you enjoy family/free time) Aren't we stupid if we still accept overtime/weekend work/denied bonus payments? I would think employers would need to give in if we demanded better working conditions. What do you think would happen if we all demanded better working conditions?"} {"_id": "228613", "title": "Using module-level declared global \"singletons\" in Python", "text": "OK, I know that using singletons is generally a bad practice, but if I do it (for a db connection, logging et al.), am I allowed (with respect to clean design) to go with a module-level variable that is initialized during startup? E.g.: if __name__ == '__main__': datasources.database.db = DB(dbpath) where db is declared here, at the top level of a module: db = None class DB(object): def __init__(self, path): .... What could be a reasonable compromise between passing the db to each object that uses it, in the \"true OO way\", and having the global?"} {"_id": "18567", "title": "Tablets Running IDEs", "text": "I am new to the tablet market, but I am trying to learn more. Are any current tablets capable of running programming software such as Visual Studio? I understand they'd have to come up with some kind of streamlined version, but I can imagine programmers out there being excited about the possibility. Does anything like that already exist, or are there rumors of it being available in the near future?"} {"_id": "254021", "title": "Questioning one of the arguments for dependency injection: Why is creating an object graph hard?", "text": "Dependency injection frameworks like Google Guice give the following motivation for their usage (source): > To construct an object, you first build its dependencies. But to build each > dependency, you need its dependencies, and so on. So when you build an > object, you really need to build an object graph. > > Building object graphs by hand is labour intensive (...) and makes testing > difficult. But I don't buy this argument: Even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. 
the example from the Guice motivation page could be rewritten in the following way: class BillingService { private final CreditCardProcessor processor; private final TransactionLog transactionLog; // constructor for tests, taking all collaborators as parameters BillingService(CreditCardProcessor processor, TransactionLog transactionLog) { this.processor = processor; this.transactionLog = transactionLog; } // constructor for production, calling the (productive) constructors of the collaborators public BillingService() { this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog()); } public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... } } So there may be other arguments for dependency injection (**which are out of scope for this question**!), but easy creation of testable object graphs is not one of them, is it?"} {"_id": "18564", "title": "What's the greatest, most impressive programming feat you ever witnessed?", "text": "Everyone knows the old adage that the best programmers can be orders of magnitude better than the average. I've personally seen good code and programmers, but never something so absurd. So the question is, what is the most impressive feat of programming you ever witnessed or heard of? You can define impressive by: 1. The scope of the task at hand _e.g. John single-handedly developed the framework for his company, a work comparable in scope to what the other 200 employees were doing combined._ 2. Speed _e.g. Stu programmed an entire real-time multi-tasking OS in a weekend, including its own C compiler and shell command line tools_ 3. Complexity _e.g. Jane rearchitected our entire 10 million LOC app to work in a cluster of servers. And she did it in an afternoon._ 4. Quality _e.g. Charles's code had a rate of defects per LOC 100 times lower than the company average. Furthermore, his code was clean and understandable by all._ Obviously, the more of these characteristics combined, and the more extreme each of them, the more impressive the feat. So, let me have it. What's the most absurd feat you can recount? Please provide as much detail as possible and try to avoid urban legends or exaggerations. Post only what you can actually vouch for. Bonus questions: 1. Was the herculean task a one-off, or did the individual regularly amaze people? 2. How do you explain such impressive performance? 3. How was the programmer recognized for such awesome work?"} {"_id": "57619", "title": "Programming Practice/Test Contest?", "text": "My situation: I'm on a programming team, and this year, we want to weed out the weak link by holding a competition to find the best coder from our group of candidates. The focus is on IEEEXtreme-like contests. What I've done: I've been trying for 2 weeks already to find a practice or test site, like UVa or Codechef. The plan after I find one: Send them (the candidates) a list of direct links to the problems (making them the \"contest's problem list\"), get them to email me their correct answers' code at the time the judge says they have solved it, and accept the fastest one into the team. Issues: We had practiced on UVa already (on programming challenges too), so our former teammate (who will be in the candidate group) already has an advantage if we use it. Codechef has all its answers public, and since it shows the latest ones it will be extremely hard to verify whether an answer was copied. 
And I've found other sites, like SPOJ, but they share at least some problems with Codechef, making them inherit Codechef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get all the stuff to set up a Mooshak or similar contest (as in the stuff to get the problems; instructions to set up the server itself are easy to google)? Any other ideas?"} {"_id": "106661", "title": "Which is the best programming style to start learning to program: POP or OOP?", "text": "If somebody has to start learning to program, where should he/she start? Should he start by writing procedure-oriented programs or jump to OOP?"} {"_id": "216024", "title": "Why can't Java/C# implement RAII?", "text": "Question: Why can't Java/C# implement RAII? Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added? My understanding: I feel an implementation of RAII must satisfy two requirements: 1. The lifetime of a resource must be bound to a scope. 2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The \"implicitness\" only needs to occur at the point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method. Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a \"using\" scope: void test() { using(Resource r = new Resource()) { r.foo(); }//resource released on scope exit } This does not satisfy point 2. The programmer must explicitly tie the object to a special \"using\" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the \"using\" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar. void test() { //Programmer forgot (or was not aware of the need) to explicitly //bind Resource to a scope. Resource r = new Resource(); r.foo(); }//resource leaked!!! I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart-pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers. class Resource - ScopeBound { /* class details */ void Dispose() { //free resource } } void test() { //class Resource was flagged as ScopeBound so the tie to the stack is implicit. Resource r = new Resource(); //r is a smart-pointer r.foo(); }//resource released on scope exit. I think implicitness is \"worth it\". Just as the implicitness of garbage collection is \"worth it\". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?"} {"_id": "38604", "title": "Separation of Concerns/Single Responsibility Principle", "text": "I've just been reading through the SOLID principles, and this is one I have real difficulty with when designing software. I am relatively new (4 months professionally) at coding. 
The idea is simply to encapsulate classes into ones that perform a single task. For example, what if you have a class that creates and populates a Treeview and configures it to your specification? You then want data retrieved from a database when the user clicks on the nodes. Would you say these are two separate concerns? 1. Populate and configure the treeview 2. Obtain data from the database? Also, in the same app, you have a datagridview that is also populated with dynamic data. Would you therefore put that data retrieval in the same class as the treeview data retrieval, or would that be yet another completely separate class? Thanks."} {"_id": "102752", "title": "Modifying an open source application", "text": "What is the general workflow when I want to add a feature to an open source application I didn't originally write? How do I get to know the code? How do I find the spot that needs to be changed or added to? How do I actually make the change without breaking anything else? How do I test that everything is still working? What are the general guidelines for such a project?"} {"_id": "232599", "title": "Extracting models into an external dependency", "text": "About 9 months ago, I asked a question about creating a service layer for my application. Unfortunately, in the ensuing time, no progress in that area was made, primarily due to time constraints. As such, I am now re-evaluating this idea, and have another one which might be a stopgap with a quicker development time. I work with a large monolithic application that really is made up of two distinct applications (except for the fact that they do, and must, share data). This doesn't sit right with me, and I would like to try to start breaking them apart into separate entities, for a variety of reasons. For example, the test suite is huge (and therefore very slow), and changing something in one \"subapp\" shouldn't affect the other anyhow. Scalability is another consideration, as one \"subapp\" receives significantly more traffic than the other. When I approached this idea last year, I was thinking about separating all the data out into its own layer that the subapps could query, but as mentioned, this seems to be more of a commitment than I could sell at the moment. Instead, what I am thinking now is to extract the models (or at least the shared models) into a gem (or, I suppose, a Rails Engine) that is required by each subapp. The upside would be that each subapp could be developed independently, tested independently, and scaled independently. Also, once in place, it might be easier to rip out an external \"model provider\" and replace it with proper SOA (as at that time, everything will already be split up at least) than it would be to make the jump straight from a monolithic application. The downsides I can think of center around ease-of-use. New code would require touching multiple repositories, and keeping mental context might be difficult. Similarly, my developers who are less experienced with design might have trouble finding the code they need, or keeping the separation intact. Of course, these can be mitigated with proper tools and code review, and the same downsides likely apply to SOA. Can anyone weigh in on whether they think this is a good or bad idea? Is it worth taking the effort for what is clearly an \"intermediary\" step, rather than trying to find time to do it \"properly\" (whenever that will be)? 
Are there any other downsides (or upsides) that I am not considering?"} {"_id": "102757", "title": "Tool that turns truth table into smallest possible if / else block", "text": "Is there any application or tool that will take a truth table and turn it into a compacted if block? For instance, let's say I have this truth table where A and B are conditions and x, y and z are possible actions: A B | x y z ------------- 0 0 | 0 0 1 0 1 | 0 0 1 1 0 | 0 1 0 1 1 | 1 0 0 This application could produce this if block: if(A) { if(B) { do(x) } else { do(y) } } else { do(z) } This is a simple example, but I frequently have several conditions that, combined in different ways, should produce different outputs, and it gets hard to figure out the most compact and elegant way to represent their logic in an if block."} {"_id": "232596", "title": "How to indicate to a web server the language of a resource", "text": "I'm writing an HTTP API to a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese and Trad. Chinese representations, and sends `Accept-Language: en, ja;q=0.7`, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations. But when each translator comes to provide these alternate language representations to the server, what's the correct way to instruct the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find out how to do this elegantly. `Content-Language` is a response header, and none of the request headers seem to fit the bill. It seems my options are: 1. Invent a new request header 2. Supply additional metadata in a multipart/related document 3. Provide language as a parameter to the `Content-Type` of the request, like `Content-Type: text/html;language=en` I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way that I can think of, but even then it's decidedly non-standard to put my own specific parameters on a very well established content type. Is there any by-the-book way of achieving this?"} {"_id": "202520", "title": "GPL code allowing non-GPL local copies of nondistributed code", "text": "I have come across a book that claims that alterations and augmentations to GPL works can be kept closed-source as long as they are not redistributed into the wild. Therefore, customizations of websites deriving from GPL packages need not be released under the GPL, and developers can earn a profit on them by offering their services to their clients while keeping their GPL-based code closed source at the same time. (cf. Chapter 17 of WordPress Plugin Development by Wrox Press). I had never realized this, but essentially, by putting restrictions only on redistributed code, the GPL says nothing about what can and cannot be done with code which is kept private, in terms of the licensing model. Have I understood this correctly?"} {"_id": "251464", "title": "In general, should an organization adopt a single methodology or decide on a per-project basis?", "text": "I work for a company that, in my opinion, should be doing all of its web development work in a fully agile manner. 
We have vague, competing ideas about the product at any given time. And we have strict deadlines. So, in the web arena it seems to make sense to operate in as agile a manner as possible. However, I could conceive of projects on the business apps side -- or even a complex sub-project on the web side (integrating with a pre-existing 3rd-party app?) -- that, at least for the sake of argument, isn't at all changeable in scope. The scope of the integration piece would, for all intents and purposes, be fully specifiable up-front with zero chance of change. In general, is it acceptable to take project X in an organization that is normally attempting to achieve agility and work through it in a waterfall manner? Does it somehow compromise the agility of the organization as a whole? If the organization is truly trying to be agile, should the \"rigid\" project still be \"managed\" in an agile manner?"} {"_id": "251466", "title": "How do VMs implement function calling?", "text": "I'm reading a compiler textbook that compiles to some form of assembly. Since I don't know that assembly language, I decided to invent my own simple \"assembly language\" and implement a basic \"virtual machine\" which will execute these instructions. Currently I'm thinking about how I can implement function declaration and function calling in the language and VM. I had the following idea; please tell me what you think: Function declarations would look like simple labels. At the end of a function there's an `end` statement. The 'main program' is a function by itself. For example: main: // some logic CALL funcA // more logic END funcA: // .. some logic END However the difference between `call` and `goto`
"} {"_id": "153630", "title": "In rails, what defines unit testing as opposed to other kinds of testing", "text": "Initially I thought this was simple: unit testing for models with other testing such as integration for controller and browser testing for views. But more recently I've seen a lot of references to unit testing that doesn't seem to exactly follow this format. Is it possible to have a unit test of a controller? Does that mean that just one method is called? What's the distinction? What does unit testing _really_ means in my rails world?"} {"_id": "105786", "title": "Should I use one database per application or share a single database amongst multiple applications", "text": "I have multiple applications some that use data from the same sources. Is it best practice (or what are the pros/cons) to: * leave the data in databases shared by multiple applications 1. saves space as only one database is needed 2. complicates indexing as different applications have different querying needs * import data daily into per-app databases 1. uses more space as duplicated data exists in per-app databases 2. easier indexing as each app can focus on its individual needs I may have left out other advantages/disadvantages, please list if any, also how is this done at your workplace?"} {"_id": "120003", "title": "What is the meaning of driven ? (test-driven , data-driven, event-driven and etc)", "text": "I was reading a paper that makes a comparison between ASP.NET Web Forms and ASP.NET MVC. While reading it, such as data-driven, event-driven, test-driven are the terms that I see a lot. So what are they used for? when we can call them for a programming language or framework? What do they need to have?"} {"_id": "153637", "title": "Why are effect-less functions executed?", "text": "All the languages I know of would execute something like: i = 0 while i < 100000000 i += 1 ..and you can see it take a noticeable amount of time to execute. Why though, do languages do this? The only effect this code will have is taking time. edit: I mean inside a function which is called function main(){ useless() } function useless(){ i = 0 while i < 100000000 i += 1 }"} {"_id": "153634", "title": "Is it bad practice for a module to contain more information than it needs?", "text": "I just wanted to ask for your opinion on a situation that occurs sometimes and which I don't know what would be the most elegant way to solve it. Here it goes: We have module A which reads an entry from a database and sends a request to module B containing ONLY the information from the entry module B would need to accomplish it's job (to keep modularity I just give it the information it needs -> module B has nothing to do with the rest of the information from the read DB entry). Now after finishing it's job, module B has to reply to a module C if it succeeded or failed. To do this module B replies with the information it has gotten from module A and some variable meaning success or fail. Now here comes the problem: module C needs to find that entry again BUT the information it has gotten from module B is not enough to uniquely find the exact same entry again. I don't think that module A giving more information to module B which it doesn't need to do it's job but which it could then give back to module C would be a good practice because this would mean giving some module information it doesn't really need. 
What do you think?"} {"_id": "63702", "title": "Software updating solution for both Linux/Windows platforms", "text": "Are you guys aware of any commercial (non-commercial?) online automatic updater that can be integrated with one's software? After quick research I found zillions of them, but for Windows only, and only a few have a C/C++ API. Do these kinds of solutions have any update management/statistics features?"} {"_id": "63705", "title": "Do you care if the person you're interviewing owns or contributes to Open Source projects?", "text": "Do you care if the person you're interviewing has created or contributed to Open Source projects? From the pool of people we've interviewed over the years at my workplace, I haven't found one person who contributes to an open-source project. In fact I'm the only person in the office who is really interested in the (.NET) OSS scene, but I don't believe that makes me any more competent, and it possibly reflects on my social life more than anything. Does OSS participation make your ears prick up if you see it on a CV/resume? Or are you apathetic towards it? _(This is what made me think of the question)_"} {"_id": "95454", "title": "How do I learn Concurrency in Ruby?", "text": "Yesterday I read this great article about concurrency in JRuby from EngineYard, and I realised I need to improve my skills in concurrency in Ruby. By mentioning Ruby here I mean **all** implementations of Ruby: JRuby, Ruby MRI, Rubinius, Ruby on Parrot / Cardinal -- except IronRuby, as I don't code in a Windows environment. What I learned from the Haskell world, especially from Simon Peyton-Jones: he said in one of his videos that OOP is all about state. Note: I'm very, very inexperienced when I mention Haskell. I have only watched some good videos about Haskell, solved very few (less than 10) Project Euler problems in Haskell, read some good Haskell books and tutorials, and read some source code in Haskell, e.g. Pugs, Darcs, and some other Haskell code whose source I don't remember. I've never coded in Haskell professionally. I might want to use Jaskell to tackle concurrency as long as I use JRuby, but that's for my long-term learning plan; I want to maximise my concurrency skills in Ruby first. **Do you have any suggestions for how to learn concurrency in Ruby?** I wish there were a kind of Head First book for me to learn concurrency in Ruby. Things like strategies / how to tackle concurrency problems in Ruby, inspired by the EngineYard article and Haskell. Maybe some good downloadable videos that explain things step by step, to strengthen my fundamentals about concurrency generally and then move to concurrency in Ruby specifically. Advice? Thank you."} {"_id": "219976", "title": "What's the difference between robustness and fault-tolerance?", "text": "Systems / programs / distributed algorithms / ... are often described with the predicate **robust** or **fault-tolerant**. What is the difference? * * * Details: When I google for +robust +\"fault-tolerant\", I only get two hits, both unhelpful. When I search Google Scholar for the terms, I find a lot of papers that have both terms in their title. Unfortunately, they do not precisely define the terms :( But since they use both terms, it seems that neither implies the other."} {"_id": "151229", "title": "I'm having trouble learning", "text": "I'm only 13, but I'm genuinely interested in CS and would really like it if I could actually accomplish it. I've read books on C++ and C#, but ALL of them are the same!! 
They all say \"OK, so since you have no prior knowledge in this whatsoever, write a snippet that will do this and then make a GUI and then throw it into the Priafdhsu hfad then add the program and then program your own compiler to do some stuff\". It's really getting annoying. I've paid nearly $40 (via PayPal) on ebooks that supposedly taught people to program with no prior knowledge. ALL OF THEM EXPECT ME TO ALREADY KNOW THE LANGUAGE. Is there something that I'm missing, or am I supposed to be born with the property of CS? I would very much appreciate it if someone could explain this to me or possibly refer me to a tutorial on Programming Theory that starts from below ground zero, as I have no knowledge of CS at all."} {"_id": "236513", "title": "Delphi 7 using RAM for database", "text": "I have an existing database app written in D7 with Apollo databases. The client has given me a fast desktop with 24GB of RAM. Can I somehow load the database files into RAM to speed up processing? With millions of records some procedures take 1/2 hour or more, mostly due to disk reads/writes."} {"_id": "176107", "title": "Record management system java web framework", "text": "We're currently reconsidering technologies and frameworks to get more agile with \"simple\" RMS CRUD-based projects. In short: short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike the java-to-js compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level js things very complicated, if not completely impossible. So what I'm looking for is: * as close to the **web** as possible, like JSF or possibly Tapestry; it is very important to be able to get \"low\" and weave the framework if necessary. That happens more often than we thought. * datagrid capable - Ext.js & PrimeFaces look pretty good, Vaadin does too. * db-schema generators (optional, in whatever form) If it were only up to me, I'd probably stick to Ext.js + a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it's hard for me to judge or even propose other frameworks. **What's your experience?** **What are the downsides you've faced?**"} {"_id": "219971", "title": "How to unit test a web client?", "text": "I am having a lot of trouble understanding how to unit test my web client. I have just finished a project using TDD for the first time - this project had no external dependencies such as API calls or databases; it was pure C# code. I definitely saw the benefits of using TDD and I would like to continue practising it. My next project involves writing a SOAP client. I'm struggling to get past the first test, which is using a simple IClient that logs in to the API successfully. Here is the IClient interface: public interface IClient { bool IsLoggedIn { get; } bool Login(out string error); } I have no idea how I would go about testing this. I'm thinking the unit test method would be something like `Login_WithValidCredentials_ReturnsTrue`, but I'm not sure how I could do this without actually simulating every possible response from the API. Is this code actually unit testable, or should this be left to an integration test? 
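The shape I keep imagining is that the client should depend on something I can substitute in a test, roughly like this sketch (written in Java form with a hand-rolled fake just to show the structure; every name here is hypothetical, and with Moq the fake would be generated instead):

```java
// Hypothetical seam: the client talks to the API through a transport it is given.
interface SoapTransport {
    String send(String request); // returns the raw API response
}

class Client {
    private final SoapTransport transport;
    private boolean loggedIn;

    Client(SoapTransport transport) { this.transport = transport; }

    boolean login() {
        // the real implementation would build a proper SOAP envelope
        loggedIn = transport.send("<login/>").contains("OK");
        return loggedIn;
    }
}

class ClientTest {
    void login_withValidCredentials_returnsTrue() {
        // canned response instead of the real API
        Client client = new Client(request -> "OK");
        assert client.login();
    }
}
```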
If anyone could show me what a real unit test for this should look like, I would be very grateful (I am using Moq)."} {"_id": "176109", "title": "Are there examples of non-CRUD approaches?", "text": "I'm a programmer, but I have also worked as an archivist. As an archivist it's a lot about keeping data. I often get into arguments with colleagues when it comes to operations on data. I don't like the U and the D in CRUD too much. Rather than update a record, I prefer to add a new one and have a reference to the old record. That way you build a history of changes. I also don't like deleting records, but rather marking them as inactive. Is there a term for this? Basically only creating and reading data? Are there examples of this approach?"} {"_id": "238495", "title": "New to Data Warehouses/ERP", "text": "The organisation I work for is very application-rich, i.e. there are lots of SQL and Oracle databases. We are thinking about a data warehouse. I believe there is a difference between an Enterprise Resource Planning system and a Data Warehouse. An ERP would involve integrating all the systems together and having one database schema for everything. A Data Warehouse would involve creating a copy database (or copy databases) for analysis purposes. The Data Warehouse could extract information from the original repositories or the ERP system. Have I understood this correctly? Are there Data Warehouse products that are available to buy, or do you have to use Microsoft tools/Oracle tools, e.g. SSIS, SSAS etc., to write your own?"} {"_id": "240444", "title": "Website as an API client vs using the API only when needed?", "text": "I'm developing a website (using **Django**) which will depend on an API for its main functionality, which is creating/updating/deleting objects. But the API also provides: * User sign-up and login * User relations to their objects * User groups and permissions This is great, but I'm concerned about fully depending on the API for everything, even user sign-up and login, so I have 2 choices: **Using the API for everything**: Advantages: * User authentication, objects, relations, permissions are already managed * My job is only to query the API and display the results Disadvantages: * Lots of HTTP requests to the API * The website will break if the API goes down * The website rendering time will be slower (will use ajax) **Using the API when needed**: Advantages: * Fewer HTTP requests * The website will be a little faster * If the API goes down, not all the functions of the website will stop Disadvantages: * I'll have to manage user authentication, permissions and relations to their objects in the API * Duplicate user data in my database and the API (I'll have a copy of the user objects) * Worries about data sync (I'll update the database only on create/update/delete requests to the API) Which one is the better choice, and why? What design is usually used in such cases?"} {"_id": "153188", "title": "What non-programming tools do programmers use?", "text": "I'm reading _Code Complete_ with the intention of learning how to better structure my code, but I'm also learning a lot about how many aspects of programming there are that aren't just writing the code. The book talks a lot about problem definition, determining the requirements, defining the structure, designing the code, etc. What tools are used for these non-writing steps of programming? 
Is there software that will help me design and plan out what I'm going to write before I do?"} {"_id": "240442", "title": "Can I use a part of another program without having to give away my rights on my own code?", "text": "I'm currently making a game and want to use some text files (lists of names) that are covered under the GNU General Public License (or the Attribution-ShareAlike 3.0 Unported License). Do I have to release my whole game under one of these licenses if I only used these (relatively small) files in my game, or is it just these files I have to keep licensed like that? If this is not permitted, am I allowed to do it anyway if I choose not to actually release my code and only \"host\" this game, since it's a web game?"} {"_id": "153184", "title": "Partitioning set into subsets with respect to equality of sum among subsets", "text": "Let's say I have the set of numbers {3, 1, 1, 2, 2, 1, 5, 2, 7}. I need to split the numbers such that the sum of subset1 is equal to the sum of subset2: {3,2,7} {1,1,2,1,5,2}. First we should identify whether we can split the set at all (one check might be that the total sum is divisible by 2 without any remainder), and if we can, we should write our algorithm to create s1 and s2 out of s. How do I proceed with this approach? I read about the partition problem on the wiki and even in some articles, but I am not able to get anything from them. Can someone help me to find the right algorithm and its explanation in simple English?"} {"_id": "159129", "title": "Future scope for a mobile Developer", "text": "I\u2019ve started my career as a Software Tester, but after 3 years I realized it didn\u2019t give me the so-called job satisfaction, and I made a brave decision to jump into the software development arena. Finally I have become an Android mobile developer, with 1 year of experience now. I love my job and I love every bit of it, but going forward I\u2019m wondering what the scope for mobile developers is. Developers who work with C# or Java have a larger scope, and they can become architects etc. in future. But as a mobile developer, what can I achieve? A mobile development architect is not something practical, because mobile projects are small and do not require as much technical ability as a J2EE project does. Should I change my career track from mobile to Java or J2EE or something else? Any advice is appreciated."} {"_id": "240448", "title": "How should templates be named?", "text": "In D I can create templates like this: template Foo(A) { A add(A a, A b) { ... } A multiply(A a, A b) { ... } A concatenate(A a, A b) { ... } } What should a template ideally be named? What conventions exist out there? I'm looking for something similar to 'function names must always start with a verb'."} {"_id": "14956", "title": "Which is best for learning how to do a certain thing: writing your own or looking at someone else's?", "text": "Often when I'm writing code to do a certain thing, I'm faced with either writing my own or using someone else's code. Assume here that this \"thing\" is something that I've never done before and am interested in learning how it's done. **Which would you say is better from a learning perspective: trying to write your own solution, or looking at code by someone else?** I've always written my own code if I have an idea of how to do it, but resorted to looking at someone else's when I don't have a clue. 
I believe that the best is probably a combination of both: make your own attempt and then look at how someone else did it."} {"_id": "21480", "title": "Examples of Java APIs that demand an action sequence", "text": "As part of some research I'm working on, I'm looking for public APIs that only work correctly when you apply a certain sequence of actions to them. For example, the `java.nio.channels.SocketChannel` class, from the Java standard library, only works correctly with sequences such as `open() -> connect() -> read() -> read() -> close()`. A more complete demonstration of how it may be used is represented in the following graph: ![valid usage of SocketChannel](http://i.stack.imgur.com/zVOq9.png) Additional examples of Java standard library APIs that require certain sequences are `java.io.PrintStream` (very similar to the one above) and `java.util.Iterator` (which requires a `next()` call between every two `remove()` calls, thus enforcing a certain sequence). So, does your favorite API for doing X also behave that way? **I would very much like to know about additional APIs that require a certain method sequence for correct usage**; especially classes that are not part of the Java standard library. The more complex the sequence(s) required, the better. * * * Some APIs require a sequence that spans multiple classes, for example: X x = new X(); x.setup(); Y y = x.createNewY(); Z z = new Z(y); z.doSomething(); These examples are also interesting, but I'm mostly looking for sequences that all appear in the same class. * * * **EDIT** Added a bounty for greater visibility. I'm sure many of you have encountered APIs that match this description - I would really appreciate some good examples."} {"_id": "49732", "title": "Types of quotes for an HTML templating language", "text": "I'm developing a templating language, and now I'm trying to decide what I should do with quotes. I'm thinking about having 3 different types of quotes which are all handled differently -- backtick (`), double quote (\"), and single quote ('): expand variables: ? / yes / no; escape sequences: no / yes / ?; escape html: no / yes / yes. ## Backticks Backticks are meant to be used for outputting JavaScript or unescaped HTML. It's often handy to be able to pass variables into JS, but it could also cause issues with things being treated as variables that shouldn't be. My variables are PHP-style (`$var`), so I'm thinking that might mess with jQuery pretty badly... but if I disable variable expansion with backticks, then I'm not sure how I would insert a variable into a JS code block. ## Single Quotes Not sure if escape sequences like `\\n` should be treated as literals or converted. I find it pretty rare that I want to disable escape sequences, but if you do, you could use backticks. So I'm leaning towards \"yes\" for this one, but that would be contrary to how PHP does it. ## Double Quotes Pretty certain I want everything enabled for this one. ## Modifiers I'm also thinking about adding modifiers like `@` or `r` in front of the string that would change some of these options, to enable a few more combinations. I would need 9 different quotes, or 3 quotes and 2 modifiers, to get every combination, wouldn't I? My language also supports \"filters\" which can be applied against any \"term\" (number, variable, string), so you could always write something like \"blah blah $var blah\"|expandvars or \"my string\"|escapehtml. Thoughts? What would you prefer? 
What would be least confusing/most intuitive?"} {"_id": "250540", "title": "Why use getters only as opposed to marking things final?", "text": "I'm working in a Java role after working a couple of years in functional programming. Our company was bought by Google and I took a Java role after the acquisition. Coming back to Java as a polyglot developer, I'm generating getters for immutable objects and I'm seeing that it's total hogwash. Are there any reasons why the 'getField' convention should be used so prolifically? To me it seems almost horrifying at this point that so many libraries expect public getter methods to work with their functionality, when simply making a field public and final would have the same effect as making only a getter on a private mutable field. Why isn't it more of a common practice to ditch the getters and just expose a final field? EDIT: I don't think this is actually the right forum, as I think this is a discussion point rather than a question with one answer, after evaluating the perspectives."} {"_id": "254535", "title": "Having an inherited function return the derived type instead of the base type", "text": "I am writing two classes in C#: * A `Matrix` class that represents a general Matrix with n-by-m dimensions * A `SquareMatrix` class that inherits from `Matrix` and has the constraint of being n-by-n The reason I designed it this way is that square matrices support additional specific operations, like calculating the determinant or the inverse, so being able to guarantee that those functions are available for the specific type you're using is a nice thing to have. Additionally, it supports all the regular Matrix operations and can be used as a Matrix. I have a function in `Matrix` called `getTranspose()`. It calculates the transpose of the Matrix and returns it as a new `Matrix`. I inherited it in `SquareMatrix`, but because the transpose of a square matrix is guaranteed to be a square matrix, I also want it to return a `SquareMatrix`. I am unsure about the best way to do this. * I can re-implement the function in `SquareMatrix`, but that would be code duplication, because it's essentially the same calculation * I can use implicit typecast operators, but if I understand correctly that would cause unnecessary allocations (upcast `SquareMatrix` to `Matrix`, create a new `Matrix` as the transpose, create a new `SquareMatrix` during typecasting and throw away the transposed `Matrix`) * I can use explicit typecast operators, but it would be stupid to have to typecast the transpose of a `SquareMatrix` explicitly, and it also has the same problem as the implicit operator with unnecessary allocations Is there another option? Should I change the design of having `SquareMatrix` inherit from `Matrix`? This problem also applies to operators. It seems that I have to either implement typecasting operators, which might cost performance, or re-implement the same code."} {"_id": "125560", "title": "Should I take a job working with an esoteric language?", "text": "How worthwhile is it to take a job that will mainly involve working with an esoteric or niche language? Will this limit my prospects of moving back into languages that I prefer working with later in my career, or will the general experience of working in a software engineering job be enough? 
I suppose this question could be rephrased as: how much do employers care about specific language experience, as opposed to general programming experience and aptitude?"} {"_id": "158453", "title": "Design Patterns (java) -- Strategy with fields. Ever acceptable?", "text": "Both here on Stack Overflow and in Effective Java it is suggested that strategy design patterns should be stateless. In fact, in the book it is also suggested to make each strategy object a singleton. The problem I have is that some strategies I envision for my program need state/fields, either because they are path-dependent in their behavior or because I want them heterogeneous (a statistical distribution of similar strategies, if you prefer). This forces me to break both Effective Java suggestions: I instantiate a new strategy for each user class AND each of these strategies contains its own fields. Is that very bad? Should it be done differently? It was suggested to me to keep the fields that make the strategy heterogeneous in the class that uses it and then pass them as arguments. I find that very anti-OO. Those fields don't belong to the user class. In fact, if that class uses another strategy it might not need those fields at all. That seems to run against the reason I am using the strategy pattern in the first place. Mostly I am just very confused. * * * I'll make a simple example here. Imagine you have a class Gambler, which represents somebody making bets on horses. Now this class will require a strategy predictStrategy that will work something like this: interface predictStrategy{ public Horse predictWinningHorse(HorseRace r); } Now, I can have many implementations where the strategy is to choose at random, or pick the white horse, or whatever. That's easy. Imagine though that I implement a strategy that looks at past predictions and somewhat \"learns\" from its past mistakes. Clearly each strategy will have to have its own memory from which to learn. I might have to add one more method to the interface (or make an extension): interface predictStrategy{ public Horse predictWinningHorse(HorseRace r); public void addObservation(HorseRace r, Horse oldPrediction, Horse actualWinner); } So that the Gambler class calls \"strategy.addObservation(...)\" at the end of each race to improve its predictive power. Can this **really** be done with a _stateless_ strategy object? It seems impossible to me."} {"_id": "190078", "title": "Handling recurring cURL POST request", "text": "I've basically come here for some advice based on my criteria. The application I'm trying to build is built completely from an API. It's like a \"marketplace bidding\" application; what I'm trying to accomplish is an \"auto bidding\" function, but for this to happen it will have to send a cURL POST request every 3 seconds based on the criteria given, then return the JSON result for me to handle server side. I'd basically like a direction on what would be the best way to handle this kind of idea; I got told something called \"polling\" would work, but I've never heard of it. Initially the idea is as provided below (a rough sketch of step 4's server side follows the list): 1. Load set page \"bid.php\" 2. Enter certain criteria 3. Click submit, at which point an \"Ajax\" request of some sort would start 4. The recurring cURL request would be activated through the Ajax. 5. Display the data back from the cURL request every 3 seconds via JavaScript. 
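A minimal sketch of what the server side of step 4 could look like — all names here (poll.php, the API URL, the criteria fields) are made up for illustration, so treat this as a sketch rather than a working integration:

<?php
// poll.php -- hypothetical endpoint hit by the browser's Ajax call every 3 seconds.
// It forwards the bidding criteria as a cURL POST and echoes the JSON result back.
$criteria = array(
    'item_id' => isset($_POST['item_id']) ? $_POST['item_id'] : '',
    'max_bid' => isset($_POST['max_bid']) ? $_POST['max_bid'] : '',
);

$ch = curl_init('https://api.example-market.com/bids'); // made-up API URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($criteria));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response body instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 2);           // stay under the 3-second poll interval

$json = curl_exec($ch);
curl_close($ch);

header('Content-Type: application/json');
echo ($json !== false) ? $json : json_encode(array('error' => 'upstream request failed'));

The browser then simply repeats its Ajax request to this script on a 3-second setInterval and renders the returned JSON; repeating a request on a fixed interval like this is exactly what people mean by (short) polling.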
I'm not initially asking for the work to be given to me; I'm just asking the professionals which they reckon would be the best way for me to do this task."} {"_id": "46907", "title": "What should we tell our unsupported IE6 users?", "text": "In the upcoming version of our web app, we've broken IE6, and we don't intend to fix it. We've had a clear warning posted for IE6 users for some months; we've decided it's time not to support it. My question is: how should we communicate this to our users? Some people here feel that we should block IE6 users who try to access the web app, because it's not going to work for them. Others feel that we should just leave up a warning, saying \"This doesn't work in IE6,\" but not block them; instead, if they click to dismiss the warning, just let them into the broken site to see for themselves that it doesn't work. Who is right? Is there a better way?"} {"_id": "220371", "title": "Simplest possible Paxos algorithm (distributed consensus) explanation", "text": "I am looking for a simple explanation of the **Paxos algorithm** that can be used for reaching consensus in a distributed environment (possibly a peer-to-peer network). Every explanation I have encountered so far was a tough read of multiple pages. I am looking for a simplified explanation that still preserves the core principles."} {"_id": "178317", "title": "How do I prove or disprove \"god\" objects are wrong?", "text": "**Problem Summary:** Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want to have us re-factor things, but I am getting push-back from the teams, who want to do everything with God Objects \"because it's easier\", and this means I would not be allowed to re-factor. I pushed back, citing my years of dev experience, that I'm the new boss who was hired to know these things, etc., and so did the third-party offshore company's account sales rep; this is now at the executive level, my meeting is tomorrow, and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper in the long run for the company (and I personally feel that is what the third party is worried about). My issue is that, from a technical level, I know it's good long term, but I'm having trouble with the ultra short term and the 6-month term, and while it's something I \"know\", I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob), and that is what I am being asked to do, as I have been told that having data from one person and only one person (Robert C. Martin) is not a good enough argument. **Question:** What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of \"God\" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)? **Research I have already done:** 1. I have a number of books here and I have searched their indexes for the use of the words \"god object\" and \"god class\". I found that, oddly, it's almost never used, and the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me), but I have found it in 2 books, per the below — but I want more I can use. 2. 
I checked the Wikipedia page for \"God Object\" and its currently a stub with little reference links so although I personally agree with that it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book cited is also considered too old to be valid by the people I am debating these technical points with as the argument they are making is that \"it was once thought to be bad but nobody could prove it, and now modern software says \"god\" objects are good to use\". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is. 3. In Robert C Martin's \"Agile Principles, Patterns, and Practices in C#\" (ISBN: 0-13-185725-8, hardcover) where on page 266 it states \"Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many function.\" -- And then goes on to say sometimes its better to use God Classes anyway sometimes (Citing micro-controllers as an example). 4. In Robert C Martin's \"Clean Code: A Handbook of Agile Software Craftsmanship\" page 136 (And only this page) talks about the \"God class\" and calls it out as a prime example of a violation of the \"classes should be small\" rule he uses to promote the Single Responsibility Principle\" starting on on page 138. The problem I have is all my references and citations come from the same person (Robert C. Martin), and am from the same single person/source. I am being told that because he is just one guy, my desire to not use \"God Classes\" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teaching of Uncle Bob? **God Objects and Object Oriented Programming and Design:** The more I think of this the more I think this is more something you learn when you study OOP and it's never explicitly called out; Its implicit to good design is my thinking (Feel free to correct me, please, as I want to learn), the problem is I \"know\" this, but but not everybody does, so in this case its not considered a valid argument because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it since statistically most people are not programmers. **Conclusion:** I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated."} {"_id": "186549", "title": "Would it be bad design to abstract a graphics library and wrap it in a single class?", "text": "I'm starting a game project in C++ using the SFML. It provides various classes for handling graphics, input, etc, but I would like to wrap it all up in a single `Media` class. I believe that by doing so, I could simplify my effort and have the class take care of the details, limiting the amount of code I have to write and therefor limiting chances for a bug to sneak past my eyes. However, I have read several times that monolithic, do-it-all classes such as this might not be a good idea. 
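To make the idea concrete, here is a rough sketch of the kind of single wrapper class I have in mind (hypothetical method names, assuming the SFML 2.x API):

#include <SFML/Graphics.hpp>
#include <string>

// One do-it-all class owning the window, drawing, and input.
class Media {
public:
    Media(unsigned w, unsigned h, const std::string& title)
        : window_(sf::VideoMode(w, h), title) {}

    bool isOpen() const { return window_.isOpen(); }
    void clear()        { window_.clear(); }
    void display()      { window_.display(); }
    void draw(const sf::Sprite& sprite) { window_.draw(sprite); }

    // Input is funneled through the same class.
    bool keyPressed(sf::Keyboard::Key key) const { return sf::Keyboard::isKeyPressed(key); }

    void pumpEvents() {
        sf::Event event;
        while (window_.pollEvent(event))
            if (event.type == sf::Event::Closed) window_.close();
    }

private:
    sf::RenderWindow window_;
};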
Why is that?"} {"_id": "220372", "title": "Can I release a script that depends on both proprietary and GPL'd libraries to be run?", "text": "I'm writing my thesis project in GNU Octave. My project basically consists of a bunch of \".m\" files that are written in Octave. I'm also using a proprietary (and unreleased) shared library developed by my supervisor. To be able to access the library, I've written an OCT-file wrapper (a dynamic extension for Octave). My question is: am I allowed to distribute my \".m\" files that depend on both Octave internals and the proprietary library? I'm not going to distribute any dependencies or the wrapper files, just the \".m\" files. Is it illegal to distribute these (I don't mind being forced to release them under a specific license) if they depend on a proprietary piece of software to be fully functional? Even if it's just to document my experiments? I've already asked at the Octave forums, but they were too passionate and aggressive and didn't give any useful arguments. I would like to hear facts, not ideology. PS: I know I could use MEX, which is the only explicitly allowed means of communication with proprietary libraries in Octave. However, it's not possible for technical reasons to use MEX in my case."} {"_id": "211615", "title": "What's best practice for a SELECT * FROM SQL in PHP?", "text": "I need to run a \"SELECT * FROM\" SQL query from PHP that now returns about 10,000 records, which I insert into an array and re-use in a page to show the results in a table. I also need to have all those elements in hidden input fields, to connect via an API to an SMS service for sending SMS messages. Obviously, with 10k elements, loading the page takes about 10-15 seconds (on a VPS with 2GB RAM). I'm worried because in about two months the elements will number 12-13k, growing by about 1k elements per month. I cannot paginate the elements because I need them all together in that input... In your opinion, what's the best practice so as not to \"destroy\" the VPS or the browser? Thank you very much"} {"_id": "211614", "title": "Is it ever a good idea to use the design pattern name in the implementing classes?", "text": "Recently I came across a moderately large Python codebase with lots of `MyClassAbstractFactory`, `MyClassManager`, `MyClassProxy`, `MyClassAdapter` etc. classes. While on the one hand those names pointed me to research and learn the corresponding patterns, they were not very descriptive of what the class **does**. Also, they seem to fall within the forbidden list of words in programming: `variable`, `process_available_information`, `data`, `amount`, `compute`: overly broad names that don't tell us anything about the function **when used by themselves**. _So should there be `CommunicationManager` or rather `PortListener`? Or maybe I do not understand the problem at all...?_"} {"_id": "53070", "title": "Should I migrate to MVC3?", "text": "I have an MVC2 project; my question is: should I migrate to MVC3? Why? I'd like the opinion of someone who has already migrated, or at least used both MVC3 and MVC2. I have already read http://weblogs.asp.net/scottgu/archive/2011/01/13/announcing-release-of-asp-net-mvc-3-iis-express-sql-ce-4-web-farm-framework-orchard-webmatrix.aspx and I already know about the described tool for migrating: http://blogs.msdn.com/b/marcinon/archive/2011/01/13/mvc-3-project-upgrade-tool.aspx What I'd really appreciate is your valuable insight. Best regards."} {"_id": "53072", "title": "Start a new project. Where is the inspiration?", "text": "_The question may seem strange at first glance_ ... 
And **I may be alone** in this situation. But currently I want to improve my skills by learning a new framework; however, doing a simple \"HelloWorld\" application does not satisfy me. I think a **practical project** is required to grasp the basics of a framework. I know there are many textbook-style projects, like designing a library manager or a CMS. But I have recently planned to learn ~3 different frameworks, and I can't find any motivation in \" _unexciting_ \" projects. In the end, I have not started on any framework ... I'm aware that _I will not develop the project of the year_ ... I'm just looking for something **a bit exciting**. **I do not expect any project subject** ; anyway, the stimulation provided by a project is something subjective. But I want to know your inspiration! (or your method for learning a framework ^^) I'm talking about web-dev frameworks, but I'm open to any kind of information!"} {"_id": "220374", "title": "Is a senior programmer's advice about always using books a good idea?", "text": "I am a junior developer and have only been in the industry for 5 years. At my current company there is a senior; let's call him Infestus. Occasionally I am given the opportunity to shine and do something completely brand new from scratch. One of the most recent examples was that I had to make a singleton in a multithreaded application. I decided to use this method. As soon as Infestus saw it, he quickly proceeded to call me stupid and told me to use this approach. Upon asking him why, he just brushed it off, saying that this is better and that such-and-such book about Java says it is better. And it is a common pattern: whenever I get a chance to do something new, I quickly get shot down by Infestus, and the only justification for why his method is better is books written by famous programmers. He is always trying to give me books to read so that I may \"learn\" which ways to program. I have only been programming for money for 5 years, but is it always a good idea to just blindly follow the book on the best ways of solving a problem, or should I try experimenting every now and then? The constant barrage of complaints from Infestus is starting to cause me to never try anything new and just follow the examples in books. EDIT: I am utterly lost. Yes, I know that following anything blindly is a bad idea. But this godlike programmer Infestus, who seems to know a lot, tells me that the only way to program properly is by reading books and following everything down to a T. All the rules he imposes are the ones written in books, so I am just wondering if books are the only correct way. EDIT2: Infestus is not my boss. He is just one of the senior developers in charge of reviewing the code. And most of his comments after reviews consist of book names where such and such method is wrong."} {"_id": "204455", "title": "Unit Test - an Enigma?", "text": "I am an Objective-C/iOS apps developer and I was wondering: if I perform unit tests on my code, how should I go about it? What is a unit test anyway? Ain't I testing a specific part of the code while running the whole app itself? What is its use? I have read a few articles, Wikipedia, etc., but I am not sure whether unit testing is a time-consuming or a time-saving process. I know the definition, but I am in a dilemma about its purpose. Please guide a newbie here. :) 
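To make \"unit\" concrete, here is a minimal sketch of a single unit test — it exercises one method of one (hypothetical) Calculator class in isolation, without launching the whole app. This assumes Xcode's XCTest framework; older Xcode versions used the very similar SenTestingKit/OCUnit:

#import <XCTest/XCTest.h>
#import "Calculator.h" // hypothetical class under test

@interface CalculatorTests : XCTestCase
@end

@implementation CalculatorTests
// One unit test: a known input must produce a known output.
- (void)testAddingTwoNumbers {
    Calculator *calc = [[Calculator alloc] init];
    XCTAssertEqual([calc add:2 to:3], 5, @"2 + 3 should equal 5");
}
@end

A test runner executes every such method and reports each failed assertion, so regressions surface without clicking through the app — that is where the time saving is supposed to come from.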
Have already gone through the following: Unit testing best practices for a unit testing newbie"} {"_id": "90211", "title": "How would I go about changing encryption methods on existing passwords?", "text": "If I have an application that is using a less secure method for storing passwords, such as SHA-1, how would I go about converting to SHA-256 or SHA-512?"} {"_id": "58624", "title": "How can I move from Java and ColdFusion to Ruby on Rails?", "text": "Currently I work with ColdFusion 9+ and some Java in a Windows environment. Prior to ColdFusion, my background was in Java and JSP. I'm considering a move towards Ruby on Rails, as I think it would be a real challenge, keep things fresh, and provide more job opportunities. In order to get into it, I started to build my personal website in Rails 3.0. But what else can I do to make this transition from what I know now to Ruby and Rails? Are there specific or idiomatic aspects of Ruby or Rails I should keep in mind when switching over from a ColdFusion and Java mindset?"} {"_id": "181202", "title": "Alternatives to the repository pattern for encapsulating ORM logic?", "text": "I've just had to switch out an ORM, and it was a relatively daunting task, because the query logic was leaking everywhere. If I ever had to develop a new application, my personal preference would be to encapsulate all query logic (using an ORM) to future-proof it for change. The repository pattern is quite troublesome to code and maintain, so I was wondering if there are any other patterns to solve the problem? I can foresee posts about not adding extra complexity before it's actually needed, being agile, etc., but I'm only interested in existing patterns solving a similar problem in a simpler way. My first thought was to have a generic type repository, to which I add methods as needed in specific type repository classes via extension methods, but unit testing static methods is awfully painful. E.g.: public static class PersonExtensions { public static IEnumerable<Person> GetRetiredPeople(this IRepository<Person> personRep) { // logic } }"} {"_id": "168631", "title": "What is the justification for Python's power operator associating to the right?", "text": "I am writing code to parse mathematical expression strings, and noticed that the order in which chained power operators are evaluated in Python differs from the order in Excel. From http://docs.python.org/reference/expressions.html: \"Thus, in an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left (this does not constrain the evaluation order for the operands): -1**2 results in -1.\" This means that, in Python, `2**2**3` is evaluated as `2**(2**3) = 2**8 = 256` In Excel, it works the other way around: `2^2^3` is evaluated as `(2^2)^3 = 4^3 = 64` I now have to choose an implementation for my own parser. The Excel order is easier to implement, as it mirrors the evaluation order of multiplication. I asked some people around the office what their gut feel was for the evaluation of `2^2^3` and got mixed responses. Does anybody know of any good reasons or considerations in favour of the Python implementation? And if you don't have an answer, please comment with the result _you_ get from gut feel - `64` or `256`?"} {"_id": "168633", "title": "Understanding the levels of computing", "text": "Sorry for my confused question. I'm looking for some pointers. 
Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors get hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the \"low level\". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example in this whole graphics computing thing. Then, on the other hand, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big N. What I'm missing is a reference frame to think about these things. It seems to me that there are all kinds of different camps, which rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc."} {"_id": "163626", "title": "Suggestions needed on an architecture for a multiple clients and customisable web application", "text": "Our product is a web-based course management system. We have 10+ clients, and in future we may get more. (ASP.NET, SQL Server) Currently, if one of our customers needs extra functionality or customised business logic, we change the db schema and code to meet the needs. (We only have one branch of the code base and one database schema.) To make sure one client's change won't affect the others' route, we use a client flag, which is defined in a web config file, so those extra fields and biz logic are only applied to that particular customer's system. if (ClientId == \"ABC\") { //DO ABC Stuff } else { //Normal Route } One of our senior colleagues said that, this way, a small company like us can save the resources needed to support multiple code bases. But what I feel is that this strategy makes our code and database even harder to maintain. Has anyone come across a similar situation? How do you handle it?"} {"_id": "58629", "title": "Should a programmer \"think\" for the client?", "text": "I have gotten to the point where I hate requirements gathering. Customers are too vague for their own good. In an agile environment, where we can show the client a piece of work to completion, it's not too bad, as we can make small, regular corrections/updates to functionality. In a \"waterfall\"-type environment (requirements first, nearly complete product next) things can get ugly. 
This kind of environment has led me to constantly question requirements. E.G. Customer wants \"automatically convert input to the number 1\" (referring to a Qty in an order). But what they don't think about is that \"input\" could be a simple type-o. An \"x\" in a textbox could be a \"woops\" not I want 1 of those \"toothpaste\" products. But, there's so much in the air with requirements that I could stand and correct for hours on end smashing out what they want. This just isn't healthy. Working for a corporation, I could try to adjust the culture to fit the agile model that would help us (no small job, above my pay grade). Or, sweep ugly details under the rug and hope for the best. Maybe my customer is trying to get too close to the code? How does one handle the problem of \"thinking for the client\" without pissing them off with too many questions?"} {"_id": "168636", "title": "Separating java projects", "text": "I have a large java project, and we use maven for our build cycle. This one project is used extensively - in other projects, in various applications, some of which are contained in it and some which are elsewhere... To be honest, it's a bit of a mess (various bits added on at different times for specific purposes), and I'd like to clean it up a bit. Also, it's not fully tested (lots of bits have been added on without proper unit and integration testing), and there are some tests that take a long time to run or don't actually pass... (uh-oh) - so tests are switched off in the maven build cycle (again, uh-oh). I am thinking about separating this large project into smaller specific projects, such that the 'final' sub-project (or several sub-projects) would pick up the various sub-projects that it would need. My thinking is as follows: * if I separate the big project into various sub-projects, this makes it clear what each project's responsibility is. * by separating into sub-projects, I can then clean up the testing of each sub-project individually, and turn on the testing for that sub-project in the maven build cycle. I am slightly concerned about what impact this might have on the build time. * Would imposing a structure on the large project (i.e. into smaller sub-projects) slow the compiler down? Also, I have a slight concern on what impact this might have editing time in IDEs (we principally use Intellij). Intellij seems to build each project in turn through the dependency tree - i.e. if C depends on B depends on A, and I change A, it won't try to build B unless A compiles, and so on. Arguably that's advantageous, but I have found that if - for example, I change an interface in A that is widely used in B and C, it takes some time to fix all the errors from that change... Another question is how to use factory classes. Some aspects of the project depend on external jars. Occasionally (thankfully not often) these are updated, and we have to migrate. We tend to handle this by using a Factory class which points to the correct versions of the external code (so we don't have to change all the implementations throughout the code base). At the moment this is all in the large project, but I am thinking that by switching to sub- projects, I could develop a new project for the implementation of new external code, make sure that sub-project is fully functional and tested, and then switch the dependencies / factory class in the user project. However, this is made more complicated by the extensive use of interfaces throughout the large project. 
For example: * sub-project A - contains interfaces * sub-project B - depends on A for interfaces and the old external jar * sub-project C - depends on B (and thus A and the old external jar), and contains a Factory class that uses B's implementations of the interfaces If I need to change the external jar of B, I can: * create sub-project B_ii - again depends on A, and now on the new external jar * once fully functional, I can add C's dependency on B_ii, and change the Factory class to use the new implementations of the interfaces. * when that's all working, I can then remove C's dependency on the original B, and if desired, remove sub-project B. * Is that a sensible way of going about this? So, in general, my questions are: * Does anyone have any experience of breaking up large projects? Are there any tips/tricks that you would be willing to share? * What impact did this have on your development and build times? * What advice could you offer on structuring such a break-up?"} {"_id": "158981", "title": "Most recent vs Most used", "text": "We are building a business application (a laboratory management system, to be more precise), mostly for internal company use only. To make it easier for users to find the items they work on, we are implementing a list of most-used items. We had a little debate on which method would be better to implement: display the most recent vs. display the most used. **My arguments for most-recent** * A little bit easier to implement. I think this is worth mentioning because we are dealing with a business application which will be sold as a single copy, so this may directly affect the application's price. Also, a simpler implementation means less code and may lower maintenance costs. * Counterargument: the difference in implementation difficulty in this case is too small. * It is easier for users to guess what they will find in this list, so they know whether it is worth looking at the list at all. * Items which were relevant yesterday and used a lot might not be relevant today, and from a recent-items list they quickly disappear. **My arguments for most-used** * Actually it is quite easy to display a mixed version of recently-used and most-used by combining the last access date and the number of accesses, with something like this: `(today - lastaccess) * number_of_access` * Counterargument: this requires fine-tuning. What arguments would you give for one or the other?"} {"_id": "164936", "title": "Why do exclusively outsourcing projects as a company?", "text": "A prospective employer told me they took a company-level decision to do only outsourcing projects. I do not understand why they took such a decision, and the guy I talked to did not elaborate. He said only that \"their intention is to build software components\". Since they are growing quite fast and have reached around 300 employees, shouldn't they at least be open to the possibility of having a project of their own? All other companies I've had contact with were at least open to having one in the future. I talked to a few of their employees, and some are working in parallel on more than 2 outsourced projects (dividing their time something like 4 + 4 hours / day). It seemed like a lot of projects, each lasting a few months, maybe half a year, come and go. Why would a company choose to provide only outsourcing services like that? 
How does it work to keep hundreds of people on outsourced projects with a seemingly high project turnover rate?"} {"_id": "158988", "title": "Why was the Java App store discontinued?", "text": "A Java App Store sounds like a cool idea, right? Well, I turned to Google and found some remnants of an app store that Oracle had been developing. It looks like it has since been discontinued, but I haven't been able to find much commentary on why it was abandoned. So why was it discontinued? Links to other references, for instance blog posts discussing the difficulties of Oracle's app store, would be much appreciated."} {"_id": "18212", "title": "Entity Framework book 1st Edition", "text": "I recently purchased the O'Reilly book Programming Entity Framework by Julia Lerman. Unfortunately it is the 1st edition. I didn't realize that there was a newer edition, which in hindsight explains the lower price of the book. So my question... Should I return it and get the 2nd edition, or read the 1st edition? The 1st edition spends a lot of time on version 1 of EF, and the 2nd edition spends a lot of time on version 2 (which is where I want to be). I think I want to return it, but I thought I would ask the question first... Thoughts?"} {"_id": "96906", "title": "How much documentation should a client expect from a one-man team?", "text": "I work for a small electronics and support software company that just recently got a contract with a large company that's very strict on documentation (I do the support software, essentially Windows or browser applications). Upon reviewing the required documentation from the large company, I realized that I wear more hats than I knew. Namely, we have to produce documentation describing: * Requirements analysis (me) * Architectural design (me) * Detailed design (me) * (Actual) software development (me) * Database design and development (me) * Testing plan, preparation, and results (me) * Proof of compliance with requirements (me) * User manual (me) I'll also be doing most (if not all) of the testing myself. And the deployment, training, and customer support. If there's any application-technician work needed for sales support, that's me too. Is that common? Do small software companies that get big contracts commonly ask that of single programmers?"} {"_id": "164939", "title": "By what features and qualities are \"free\" and \"premium\" themes differentiated?", "text": "I have a lot of time invested in creating Wordpress templates. I want to release combinations of these templates, along with different styles and fancy front pages, as \"Premium Wordpress Themes\". What I need to know is: what does \"premium\" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required in order to be premium? Are there features that are in demand but considered \"exceptional\", i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers saying that any theme that makes money is premium, but I mean to ask about what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about. I have to try to isolate the things that people want. This is serious food and rent to me. 
How can I get this kind of info so I can make my project successful?"} {"_id": "120919", "title": "What is jQuery and JavaScript's role in MVC?", "text": "I'm not sure I understand the role of jQuery/JavaScript in MVC. I've found an article about it on _A List Apart_, but it doesn't clear anything up for me. Does it bridge the communication between the View and the Controller, or does it do something else?"} {"_id": "124831", "title": "ASP.Net MVC or Web Forms?", "text": "> **Possible Duplicate:** > Why would you use MVC over Web Forms? What is the difference between ASP.Net MVC and Web Forms? Is either preferable? I want to use ASP.Net development as a springboard to teach myself C#. What is the best way to go about this?"} {"_id": "255666", "title": "Salesforce trigger Prevent duplication", "text": "I have written a trigger which prevents duplicate names. Please refer to the following conditions. \" **Test** \" and \" **Te st** \" are the same name (which means any in-between spaces should not be considered). \" **TEST** \" and \" **test** \" are the same name (which means upper and lower case are ignored). My question: suppose one record is already inserted in the system with the name \" **Te st** \", and now I am trying to insert a new record with the name \" **test** \" (which is obviously a duplicate under these rules). How can I handle this exception? Thank you in advance."} {"_id": "255664", "title": "Is renaming a program considered modifying a program legally?", "text": "If a software license prevents you from modifying a program (i.e. \"You may not modify\u2026\"), is renaming the program (e.g. changing it from a.exe to b.exe) legally considered modification of the software, if the software remains binary identical, since all that has really changed is the path to the program? Edit: I am not asking for legal assistance here; I am asking what the case law is in this area. What counts as modification must have been dealt with at some point."} {"_id": "255662", "title": "Git collaboration with pull request in a company", "text": "I am thinking of a way to do collaboration between coders in a company. Suppose we have 2 types of servers: Production and Staging. Now we want to review all changes going to Production via pull request; at the same time, we want coders to be able to push experimental changes to Staging whenever they want. The objective here is to make sure our guys can do experimental stuff and can't (accidentally) mess up Production. My idea is to just create 2 separate repositories for Production (read-only) and Staging (read-write). Would this be the ideal setup? Any other types of setup I should be aware of? ![enter image description here](http://i.stack.imgur.com/KhtPf.png)"} {"_id": "246620", "title": "In Java, what are some good ways to separate APIs from implementation of *entire projects*?", "text": "Imagine you have a software module which is a plugin to some program (similar to Eclipse), and you want it to have an API which other plugins can call. Your plugin is not freely available, so you want to have a separate API module which _is_ freely available and is the only thing other plugins need to directly link to - API clients can compile with only the API module, and not the implementation module, on the build path. If the API is constrained to evolve in compatible ways, then client plugins could even include the API module in their own jars (to prevent any possibility of `Error`s resulting from nonexistent classes being accessed). 
Licensing is not the only reason to put API and implementation in separate modules. It could be that the implementation module is complex, with myriad dependencies of its own. Eclipse plugins usually have internal and non- internal packages, where the non-internal packages are similar to an API module (both are included in the same module, but they could be separated). I've seen a few different alternatives for this: 1. The API is in a separate package (or group of packages) from the implementation. The API classes call directly into implementation classes. The API _cannot_ be compiled from source (which is desirable in some uncommon cases) without the implementation. It is not easy to predict the exact effects of calling API methods when the implementation is not installed - so clients will usually avoid doing this. package com.pluginx.api; import com.pluginx.internal.FooFactory; public class PluginXAPI { public static Foo getFoo() { return FooFactory.getFoo(); } } 2. The API is in a separate package, and uses reflection to access the implementation classes. The API can be compiled without the implementation. The use of reflection might cause a performance hit (but reflection objects can be cached if it's a problem. It is easy to control what happens if the implementation is not available. package com.pluginx.api; public class PluginXAPI { public static Foo getFoo() { try { return (Foo)Class.forName(\"com.pluginx.internal.FooFactory\").getMethod(\"getFoo\").invoke(null); } catch(ReflectiveOperationException e) { return null; // or throw a RuntimeException, or add logging, or raise a fatal error in some global error handling system, etc } } } 3. The API consists only of interfaces and abstract classes, plus a way to get an instance of a class. package com.pluginx.api; public abstract class PluginXAPI { public abstract Foo getFoo(); private static PluginXAPI instance; public static PluginXAPI getInstance() {return instance;} public static void setInstance(PluginXAPI newInstance) { if(instance != null) throw new IllegalStateException(\"instance already set\"); else instance = newInstance; } } 4. The same as above, but the client code needs to get the initial reference from somewhere else: // API package com.pluginx.api; public interface PluginXAPI { Foo getFoo(); } // Implementation package com.pluginx.internal; public class PluginX extends Plugin implements PluginXAPI { @Override public Foo getFoo() { ... } } // Client code uses it like this PluginXAPI xapi = (PluginXAPI)PluginManager.getPlugin(\"com.pluginx\"); Foo foo = xapi.getFoo(); 5. Don't. Make clients link directly to the plugin (but still prevent them from calling non-API methods). This would make it difficult for many other plugins (and most open source plugins) to use this plugin's API without writing their own wrapper."} {"_id": "255660", "title": "File reading in front end", "text": "Is there any way to ready a file @ the front end itself, which should be supported for IE 8 and IE 9 also. Please provide any suggestions available for this. Looking for code to read an excel file on the front end/javascript code."} {"_id": "255668", "title": "Rename PDF extracted Pages File from a Single Booklet PDF in correct sequence", "text": "I have huge PDF files which are in booklet format. example, Assume a booklet pdf file has 24 pages which each page contains 2 pages which is in two sides - left side and right side. 
The first page has the 48th page number on the left side and the 1st page number on the right side. The second page has the 2nd page number on the left side and the 47th page number on the right side. The third page has the 46th page number on the left side and the 3rd page number on the right side. I have vertically cut the booklet PDF files into separate individual PDF files using a bulk operation, into a separate folder. For example, the cut PDF files will be as follows in the above case: 1st pdf file - 48th page 2nd pdf file - 1st page 3rd pdf file - 2nd page 4th pdf file - 47th page 5th pdf file - 46th page 6th pdf file - 3rd page.. Similarly for the other PDF files too... if a PDF file has 95 pages, the first page has the 95th page number on the left side and the 1st page number on the right side. Now the issue is how to rename and arrange the files into the correct sequence for EACH PDF file, so that at the end we can merge them back into one PDF per original file... The problem is that all the PDF files which we are planning to split have a different number of pages. Example: PDF1 file - has 48 pages as above -> files should be renamed and arranged as 1, 2, 3, 4...48 correctly PDF2 file - has 96 pages -> files should be renamed and arranged as 1, 2, 3, 4.....96 correctly PDF3 file - has 56 pages -> files should be renamed and arranged as 1, 2, 3, 4.....56 correctly After renaming, the final files will be as follows: 1st pdf file - should point to the 1st page, i.e. the 2nd pdf file should be renamed as the 1st pdf file 2nd pdf file - should point to the 2nd page, i.e. the 3rd pdf file should be renamed to the 2nd pdf file 3rd pdf file - should point to the 3rd page, i.e. the 6th pdf file should be renamed to the 3rd pdf file Could someone help me with a program which will rename the vertically cut files into the proper sequence? Thanks in advance."} {"_id": "255669", "title": "Why is the use of conjunctions in method names a bad naming convention?", "text": "In my team, we work closely with a few software architects. They approve all design decisions of our projects, do some code reviews, etc. Our projects consist mainly of backend functionality implemented in PHP using the Symfony 2 framework. So syntactically, the code, naming conventions and project structure look almost identical to what Java would look like (Symfony 2 encourages such a structure). I'm mentioning this because Java-specific conventions also apply in our case (where possible). Recently, they suggested something that I find very strange: all methods should have conjunctions in their names, e.g. getEntityOrNull, setValueOrException etc. Such a naming convention feels very wrong to me, but I can't come up with any concrete arguments or online articles/pages that specifically challenge it. The only things I came up with are: * such information should be present in the method's annotations, like `@return` or `@throws` * the use of conjunctions (\"and\", \"or\" etc.) in method names usually suggests that the Single Responsibility Principle is not properly respected What are some other concrete arguments against this naming convention?"} {"_id": "256002", "title": "This program in Java doesn't stop executing!", "text": "I have written this program, but it doesn't stop executing. Can you help me with it? /* * To change this license header, choose License Headers in Project Properties. 
* To change this template file, choose Tools | Templates * and open the template in the editor. */ package inplacesort; import java.util.*; /** * * @author ASUS */ public class InplaceSort { static Scanner console = new Scanner(System.in); /** * @param args the command line arguments */ public static void main(String[] args) { Vector intList = new Vector (); //getting the numbers from the user char ans = 'y'; while (ans == 'Y' || ans == 'y') { System.out.print(\"Enter a Number: \"); intList.addElement(console.nextInt()); System.out.print(\"Do You Want to Continue?(Y/N)\"); ans = console.next().charAt(0); } System.out.println(intList); for (int i = 1; i < intList.size(); i++) { //if (intList.elementAt(i) < intList.elementAt(i-1)) //{ int j = i - 1; while (j > 0 && intList.elementAt(i) < intList.elementAt(j)) { j--; } for (int k = intList.size() - 1; k >= j; k--) { intList.insertElementAt(intList.elementAt(k),k + 1); } intList.insertElementAt(intList.elementAt(i+1),j); intList.removeElementAt(i+1); //} } System.out.print(intList); } }"} {"_id": "256007", "title": "Is it possible to port a python desktop app(command line) into cloud", "text": "I am new to python and cloud computing . As part of my new project i am planing to port a python desktop application(command line) to cloud. is it possible"} {"_id": "20352", "title": "The Next Step, from Senior Developer to Architect", "text": "**A little background:** Bachelor In Engineering, Master Degree in Comp Sci. Landed junior dev straight out of Uni. 2 years later, left for Senior Dev. Now planning next step... Platform: _.NET, VB,C# (upto 4.0, LINQ, PLINQ) ,F#,Winform,ASP.NET+(usual web tech,js,xslt, dxhtml,etc),WPF, db ->MS T-SQL_ **Question:** What are the next steps I should be thinking about if my goal is to become a .NET Architect? What things do I need to gain experience in, both soft skills + technical. What is the fastest route there? Are there any advice in general related to becoming a Architect? Thanks in advanced."} {"_id": "256004", "title": "Managing code changes in this coding environment", "text": "I have the following coding environment: * Three Developers * Source code resides in Team Foundation Server (2010) * Development Environment * Stage Environment * Production Environment As the Senior Developer, I need to review code before its pushed to the Staging Environment. Once I review the code, it is pushed to Stage where it gets tested by our (small) test team. Once changes are tested and accepted, I move to production. Suppose the following: `Developer A` submits a change, I review it and push to stage. `Developer B` submits a change, I review it and push to stage. During the testing, a user says the change `Developer A` made has bugs and needs to be fixed. But the changes `Developer B` made need to go to production. This happens more than not. Currently, I need to wait for `Developer A` to fix the bugs and test again or have `Developer A` disable their changes. How to solve this dilemma?"} {"_id": "143435", "title": "What are the disadvantages of naming things alphabetically?", "text": "Let me give you an example of how I name my classes alphabetically: * Car * CarHonda (subclass of Car) * CarHondaAccord (subclass of CarHonda) There are two reasons why I put the type of the class earlier in its name: 1. When browsing your files alphabetically in a file explorer, related items appear grouped together and parent classes appear above child classes. 2. I can progressively refine auto-complete in my IDE. 
First I type the major type, then the secondary type, and so on, without having to memorize what exactly I named the last part of the thing. My question is: why do I hardly see any other programmers do this? They usually name things in reverse or in some random order. Take the iOS SDK for example: * UIViewController * UITableViewController What are the disadvantages of my naming convention, and the advantages of their convention?"} {"_id": "143633", "title": "Should I add old code into my repository?", "text": "I've got an SVN repository of a PHP site, and the last programmer didn't use source control properly. As a result, only code from since I started working here is in the repo. I have a bunch of old copies of the full code base saved in _files_ as \"backups\", but they're not in source control. I don't know why most of the copies were saved, nor do I have any reasonable way to tag them to a version number. I _do_ have the dates the backups were made; all backups have proper file system timestamps. Due to upgrades to the frameworks and database drivers involved, the old code is quite defunct; it no longer works on the current server configuration. However, the previous programmers had some _unique_ logic, so I hate to be completely without old copies to refer to for what on earth they were doing. Should I keep this stuff in version control? How? Wall off the old code in separate tags/branches?"} {"_id": "213926", "title": "Using BSD and GPL in one project", "text": "I am the author of a project that consists of two parts: a daemon and a library. I want to license the library under the BSD License (or MIT; I will decide later), and the daemon under GPLv3. I want to permit the library to be linked into closed-source products, but I want the daemon to be GPL, as it uses GPL code heavily, and I do not want commercial forks of it. The thing is: this library just provides a client interface for the daemon, where the daemon acts as a sort of server. Is it permitted to license my project using separate licenses for the different parts, which are linked by some sort of RPC protocol?"} {"_id": "213925", "title": "How to pass parameters to a function in C", "text": "Suppose I'm writing a program in C in which several parameters are requested from the user at the beginning of execution and then remain constant until the end. Now, I need to pass these parameters to some function. Since they are unchanged throughout the program, my temptation would be to declare them as global variables, to have them visible to all functions. However, I see that this is not good practice if the program gets big (this has been asked and answered here). However, neither do I see the point of creating functions with, say, 6 arguments, when only 2 of them actually vary. Is there a more elegant way to do this without compromising the manageability of the code too much?"} {"_id": "213924", "title": "Name of the Countdown Numbers round problem - and algorithmic solutions?", "text": "For the non-Brits in the audience, there's a segment of a daytime game show where contestants have a set of 6 numbers and a randomly generated target number. They have to reach the target number using any (but not necessarily all) of the 6 numbers, using only arithmetic operators. All calculations must result in positive integers. An example: Youtube: Countdown - The Most Extraordinary Numbers Game Ever? 
A detailed description is given on Wikipedia: Countdown (Game Show) For example: * The contestant selects 6 numbers - two large (possibilities include 25, 50, 75, 100) and four small (numbers 1 .. 10, each included twice in the pool). * The numbers picked are `75`, `50`, `2`, `3`, `8`, `7`, with a target number of `812`. * One attempt is (75 + 50 - 8) * 7 - (3 * 2) = 813 (This scores 7 points for a solution within 5 of the target) * An exact answer would be (50 + 8) * 7 * 2 = 812 (This would have scored 10 points for exactly matching the target). Obviously this problem existed before the advent of TV, but the Wikipedia article doesn't give it a name. I also saw this game at a primary school I attended, where it was called \"Crypto\" and run as an inter-class competition - but searching for it now reveals nothing. I took part in it a few times, and my dad wrote an Excel spreadsheet that attempted to brute-force the problem. I don't remember how it worked (only that it _didn't_ work, what with Excel's 65535 row limit), but surely there must be an algorithmic solution for the problem. Maybe there's a solution that works the way human cognition does (e.g. in parallel, finding numbers 'close enough', then taking candidates and performing 'smaller' operations)."} {"_id": "186426", "title": "User session timeout handling in SaaS apps - discussing several approaches", "text": "I know this has a great chance of being marked as a duplicate, but I couldn't find exactly what I'm looking for. This is a common problem, and I'm sure it has some well-defined best-practice solution. # Background 1. A single-page SaaS app has lots of drag and drop; the user can interact with it without much server communication for periods of time 2. The server session only holds the user object, using a non-persistent session cookie 3. The session expires on the server after X hours 4. Some things are loaded only during log-in # Problem 1. User works on the app; when done, the user doesn't log out, just keeps the browser open 2. User comes back after more than X hours (session is invalidated on the server) 3. User interacts with the app without needing a server connection (drags and drops things, text edits...) 4. Only on the next server interaction (let's assume there is no auto save) is the user thrown to the login page, losing some of their work ## Possible solutions Here are some solutions I have in mind; I would like to hear if there are any others, and if there is anything fundamentally wrong with any of them. **1\. Never log the user out** * **How?** either keep a long session, keep a persistent cookie, or a JavaScript \"keep alive\" ping * **Pros** : user doesn't need to worry about anything, fixes the problem for them * **Cons** : not PCI compliant, not secure, and needs development changes, e.g. things loaded into the session only on user log-in need to move to either a pub-sub model (listening on event changes) or have a cache timeout. **2\. Local Storage** * **How?** use new local storage to temporarily store state if logged out, redirect to login page, persist once logged in * **Pros** : also a base for \"work offline\" support, not just handling session timeout * **Cons** : harder to implement, need to do a state merge of the data tree, not all browsers support it **3\. Auto save** Every user action that changes the model should persist immediately (or via some sort of a client-side queue), e.g. if they check a checkbox, change a text field, or drag and drop something, once they are done, persist the changes. 
* **How?** Use an MV** framework (Backbone.js / Knockout.js / Ember.js / Angular.js etc.) to bind the model, and persist on changes. * **Pros**: Seems like a clean solution; the session is active as long as the user is active, and no client-side work is done without persisting it. * **Cons**: The last actions the user performs after a session timeout are lost. **4\\. Log the user out after the session expires** This can have several approaches: 1. Ask the server \"has the session expired?\" - this is a bit of a catch-22 / Schrodinger's cat, as the mere question to the server extends the session (restarts the timeout). * **How?** Either have a server that supports such a question (I don't know of any, but I come from Java land), or one can just manually keep a table of session IDs and last access times, and ask the server by passing the session ID as a parameter instead of the cookie. I'm not sure if this is even possible, but it sounds dangerous, insecure and bad design altogether. * **Pros**: If there were such native support in servers, it sounds like a clean, legitimate question (asking whether user X still has a session or not, without renewing it if they do) * **Cons**: If the server doesn't support it (and again, I don't know if any server or framework has this functionality), then the workaround potentially has huge security risks. 2. One workaround I've heard is to have a short session on the server side, and a keep-alive client-side ping that has a maximum number of pings * **How?** Short session on the server; the client pings every sessionTimeOut/2, with a max of Y retries. * **Pros**: Kind of fixes the problem; quick and dirty * **Cons**: Feels like a hack - handling the session renewal yourself instead of letting the server do it 3. Client-side timer * **How?** Have a timer on the client side and sync it with the server-side one by restarting it on every request, set to the max server session timeout minus some padding; once the user stops sending requests to the server, the UI shows a \"session is about to time out, do you want to continue?\" prompt (like you see in online banking) * **Pros**: Fixes the problem * **Cons**: Can't think of any, except the need to make sure the sync works # The Question I'm probably missing something in the above analysis and might have some silly mistakes, and I would like your help to correct them. What other solutions could I use for this?"} {"_id": "241238", "title": "Source control: projects which share a 3rd library which is still under development in RTC", "text": "This is not a new question for source control, but IBM's Rational Team Concert is a slightly different animal than Git, SVN, etc. We have a number of .NET web sites under development with Visual Studio. Each web site is represented by a VS solution. They all share in common a .NET library, also under development. Is there a smart way to set things up in RTC so that each web site's repository does not contain its own copy of the common library's source? My first thought was to just set the library up with its own repo in RTC, then have each \"child\" web site solution refer to the common library in the sandbox. But I'm new to RTC and not so sure this is the most RTC-ish way, or what trouble I'll encounter later. Thoughts? Thanks for any light shed."} {"_id": "216340", "title": "Is a Mission Oriented Architecture (MOA) a better way to describe things than SOA?", "text": "I might sound like a troll, but I would like to seriously understand this deeper.
The place I work at has started to use the term MOA, versus SOA, as we believe it drives more clarity, and we want to compare it to the true goals of SOA. A Mission Oriented Architecture is an approach whereby an application is broken down into various business mission elements, with the database, file assets, batch and real-time functionality all tightly coupled in terms of delivering that piece of the functionality. The mission allows the developers to focus on a specific piece of functionality to get it right, and to build it with the ability for that piece to scale as an independent entity within the overall application. By tightly coupling the data, file assets and business logic you achieve the goal of working on a very large problem in bite-size pieces. Some definitions of SOA mix it up with what is essentially a method call on a web service versus a true \"service\". As an architect, I have always found it fun getting everyone on the same page regarding SOA. Is it better to call it a \"mission\" versus a \"service\"?"} {"_id": "243130", "title": "Appropriate design / technologies to handle dynamic string formatting?", "text": "Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons, and a 2- or 4-digit display. The packets typically follow the format `{SOH}AADD{ETX}`, where AA is our sentinel action code, and DD is the device ID. These packet specs are obviously different from one command to the next, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of `{SOH}AADDTEXT{ETX}`, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: `{SOH}AADDTEXTE{ETX}`. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string (`\"{SOH}AADDTEXTE{ETX}\"`) using the values from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be `{SOH}AA{ETX}`, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like `{SOH}AAOC{ETX}` where OC is literal text, which tells the controller to only clear text on a specific module type and to leave the others alone; or the spec could also have an optional format of `{SOH}AADD{ETX}` to clear the text off a specific device.
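To make the current setup concrete, here is a stripped-down sketch of what one of our args classes looks like (names and details are illustrative, not our real code):

    // Illustrative only -- an args object for the \"display text\" command;
    // buildPacket() assembles the wire string from the populated fields.
    public class DisplayTextArgs {
        private static final char SOH = 0x01; // start of header
        private static final char ETX = 0x03; // end of text

        private final String deviceId;    // the DD field
        private final String text;        // fixed-width TEXT field
        private final Character extended; // trailing byte; null on version 1 firmware

        public DisplayTextArgs(String deviceId, String text, Character extended) {
            this.deviceId = deviceId;
            this.text = text;
            this.extended = extended;
        }

        // {SOH}14DDTEXT{ETX} on version 1, {SOH}14DDTEXTE{ETX} on the revision.
        public String buildPacket() {
            StringBuilder packet = new StringBuilder();
            packet.append(SOH).append(\"14\").append(deviceId).append(text);
            if (extended != null) {
                packet.append(extended);
            }
            return packet.append(ETX).toString();
        }
    }

The problem is that buildPacket() hard-codes the latest spec; every firmware revision means another conditional in methods like this.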
Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX}, else if m_ClearOCs == true then use {SOH}AAOC{ETX}, else use {SOH}AA{ETX}. I had considered using XML, or a database, to store String.format format strings, which would be linked to firmware version numbers in some table. We would load them up at startup, and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here. I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish the following: * Logic to determine which set of commands to use for a given firmware version * Of those commands, which version of each command to use (based on the args class's state) * Keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version * Being simple enough that I don't need weeks of study and trial and error to implement it effectively."} {"_id": "243133", "title": "Displaying device contacts with an indication that the contact is registered to the app", "text": "We are developing a mobile app that needs to pick up device contacts, display them, and indicate if the contact has already registered with this app. We have our DB on the server, and the app fetches data using web services. What would be the best approach to implement the above scenario, taking performance into consideration? **Option 1:** Every time the user opens the app, fetch the contacts and send the list of email addresses to the server, check them against the registered email IDs, and return the list of registered users in the contact list. In this approach, whenever the user opens the particular page, he needs to wait a few seconds for the data to load, but the contacts will be the latest from the device. **Option 2:** The first time the user opens the app, fetch the contacts, send the entire list of contacts and save it in the DB, retrieve the list of registered users among the contacts, then save this to the local DB. From then on, data will be fetched from the local DB and displayed. When a new user registers in the app, again check against the records in the central DB and send the list of new users who are in your contacts and have registered with your app. This list will be added to the local DB, and the process continues. In this case, new contacts added by the user will not be updated in the app, but retrieval and display of records would be quick. What would be the correct approach?
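To illustrate, the server-side check in Option 1 is conceptually just a set intersection -- a trivial sketch (names are mine, purely illustrative):

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Option 1, server side (illustrative): given the email addresses found
    // on the device, return only those that belong to registered users.
    public class RegisteredContactFilter {
        public static List<String> registeredAmong(List<String> deviceEmails,
                                                   Set<String> registeredEmails) {
            return deviceEmails.stream()
                    .filter(registeredEmails::contains)
                    .distinct()
                    .collect(Collectors.toList());
        }
    }
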
In case there is a better way of doing this, please let me know."} {"_id": "236668", "title": "What algorithm should I use to create an automatic staff scheduling feature?", "text": "Imagine a small local business (in my case a dog daycare) with a few dozen part-time employees. The goal is to automatically create weekly staff schedules. My question is about what algorithmic approaches to explore for this problem. There are many constraints to keep in mind, chiefly (1) the availability of the staff and (2) the needs of each shift: not just how many staff are needed for each shift, but the skills needed for each shift (e.g. for a certain shift, you may need someone who knows how to drive to do pick-ups/drop-offs of dogs; for another, someone who knows how to give dogs baths, etc.). Other constraints include things like avoiding or requiring certain staff combos -- perhaps due to personality conflicts on one hand, or the need for training by osmosis from senior to junior staff on the other. Also, there are preferences to take into account. Some staff prefer mornings, some prefer two days in a row rather than, say, Monday and Thursday, etc. We know we can't always accommodate everyone's preferences. In fact we have a hierarchy of which employees get first dibs on their choices. I have a hunch that there is a way to reduce or express this problem as an existing, already-solved one, but I don't know which algorithms to explore. Which existing, specific algorithms would be most promising?"} {"_id": "161715", "title": "What are the licensing requirements for shipping Java along with an application", "text": "We have an application in Java. Now we want to export this application to clients (free of charge). Can we ship JRE7 along with the application (as a precaution, since a client might not have Java installed)? Is it legal to do this?"} {"_id": "179791", "title": "Language parsing to find important words", "text": "I'm looking for some input and theory on how to approach a lexical topic. Let's say I have a collection of strings, which may just be one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely the word is to be important. Let's look at a few examples of what I mean. **Example #1:** > \"I really want a Keurig, but I can't afford one!\" This is a very basic example, just one sentence. As a human, I can easily see that \"Keurig\" is the most important word here. Also, \"afford\" is relatively important, though it's clearly not the primary point of the sentence. The word \"I\" appears twice, but it is not important at all since it doesn't really tell us any information. I might expect to see a hash of words/scores something like this: \"Keurig\" => 0.9 \"afford\" => 0.4 \"want\" => 0.2 \"really\" => 0.1 etc... **Example #2:** > \"Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch.\" This example has multiple sentences, so there will be more important words throughout. Without repeating the scoring exercise from example #1, I would probably expect to see two or three really important words come out of this: \"swimming\" (or \"swimming practice\"), \"competition\", & \"watch\" (or \"waterproof watch\" or \"non-waterproof watch\", depending on how the hyphen is handled). Given a couple of examples like this, how would you go about doing something similar?
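My naive baseline would be plain frequency counting over a stop-word list -- a minimal Java sketch (illustrative only; it clearly cannot tell that a rare word like \"Keurig\" matters more than \"want\", which is part of why I'm asking):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Crude baseline: walk the characters, build lowercase words, drop stop
    // words, count the rest. A real scorer needs corpus-level weighting
    // (e.g. TF-IDF) to favor rare words like \"Keurig\" over common ones.
    public class KeywordScorer {
        private static final Set<String> STOP_WORDS = Set.of(
                \"i\", \"a\", \"the\", \"to\", \"of\", \"my\", \"but\", \"one\", \"can\", \"had\");

        public static Map<String, Integer> score(String text) {
            Map<String, Integer> counts = new HashMap<>();
            StringBuilder current = new StringBuilder();
            for (char c : (text + \" \").toCharArray()) {
                if (Character.isLetter(c)) {
                    current.append(Character.toLowerCase(c));
                } else if (current.length() > 0) {
                    String word = current.toString();
                    current.setLength(0);
                    if (!STOP_WORDS.contains(word)) {
                        counts.merge(word, 1, Integer::sum);
                    }
                }
            }
            return counts;
        }
    }
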
Are there any existing (open source) libraries or algorithms in programming that already do this?"} {"_id": "74222", "title": "How to improve programming skills as a Junior without a Senior", "text": "At the moment I'm 23 and working as a junior programmer at a software service provider. While I'm really happy with my job and my colleagues, I sometimes would love to have someone who could tell me what is bad about my code (architecture and so on), why it is bad, AND what I could do to make it better. As we have no senior Java programmers anymore (I'm the last one who can program Java), I would love to get your advice about becoming a better programmer. Disclaimer: Sometimes I ask my non-Java-programmer colleagues if my code is \"good\" or \"bad\", but I have the feeling that they can't judge, because they aren't Java programmers. (They hardly program anything.) Are there any communities which are happy to mentor junior developers? Would it help if I read tons of books? Any advice on how I can improve my existing skills and learn whether what I'm doing is bad or good? Epilogue: I didn't even think about switching employers back in May when I asked this question, but what all of you said stuck in my head, and I decided to switch in April. I have now found a new employer and got a position as a junior Java developer at the new company. I hope my professional growth will get better in the future with my new employer. I just wanted to say thank you for all of your advice."} {"_id": "212293", "title": "How should I create a combined interface for two logically independent modules?", "text": "I'm having trouble coming up with a good way to structure the interfaces for two modules that are logically independent but whose implementations may be combined for the purposes of performance or efficiency. My specific situation is the replacement of the separate queuing and logging modules in a message routing application with a combined database-backed implementation (let's call it `DbQueueLog`). It's easy enough for `DbQueueLog` to implement both the `IQueue` and `ILogger` interfaces so clients of the old queuing and logging modules can use it seamlessly. The challenge is that the most efficient and performant implementation of `DbQueueLog` involves combining the `EnqueueMessage(Message m)` and `LogMessage(Message m, List p)` methods into an `EnqueueAndLogMessage(Message m, List p)` method that minimizes the number of cross-process database calls and the amount of data written to disk. I could create a new `IQueueLog` interface with these new methods, but I'm uncomfortable with what that would mean if a future iteration of the application moved back to separate queuing and logging modules. Are there any design approaches to this situation that would allow me to build the efficient, performant `DbQueueLog` implementation now without permanently coupling together the application's queuing and logging modules? Edit: The application is built on Windows using C#, in case there are any platform-specific techniques that might be available."} {"_id": "179798", "title": "How to create a random generator", "text": "How could a random generator be implemented? I'm not speaking about invoking a language's _mathRandom()_ method, but about implementing the routines that actually generate the totally random numbers."} {"_id": "135383", "title": "Does the use of debuggers have an effect on the efficiency of programmers?", "text": "> **Possible Duplicate:** > Are debugging skills important to become a good programmer?
I'm a young Java developer, and I make systematic use of the NetBeans debugger. In fact, I often develop my applications while debugging step by step, in order to see immediately whether my code works. I feel I spend a lot of time programming this way, because using the debugger increases execution time, and I often wait for my app to jump from one breakpoint to another (so much so that I've had the time to ask this question). I never learned to use a debugger at school, but at work I was immediately told to use this functionality. I started teaching myself to use it two years ago, and I've never been told any key tips about it. I'd like to know if there are some rules to follow in order to use the debugger efficiently. I'm also wondering whether using the debugger is ultimately a good practice. Or is it a waste of time, and should I stop this bad habit now?"} {"_id": "228686", "title": "Is debugging a waste of time?", "text": "I work on a lot of projects for fun, so I have the freedom to choose when I want to finish a project and am not constrained by deadlines. Therefore, if I wanted to, I could follow this philosophy: > Whenever you hit a bug, don't debug. Instead, spend time learning about topics related to the bug from textbooks and work on other projects until one day you can come back to the bug and solve it instantly thanks to your piled-up knowledge. However, maybe the 'philosophy' is bad because 1. debugging is a skill that should be practiced, so that when I need to finish projects by a deadline I will have acquired the skills necessary to debug quickly. 2. staring at lines of code and _struggling_ to understand them when debugging makes you a much better programmer than writing new lines of code does. 3. debugging isn't a waste of time for some reason which I hope you'll tell me."} {"_id": "191884", "title": "Do your stories include tasks across disciplines? How do you do capacity planning?", "text": "My organization does web projects and employs a handful of disciplines like backend dev, frontend, BA, UX, graphic design, QA. We've been pushing to have tasks for every discipline in our sprints with explicit dependencies (can't build a page without comps, can't do comps without wires, etc.). I've heard some other organizations say that scrum is only for dev tasks. Are we barking up the wrong tree? And, if not, are there any good tools for doing capacity planning when only certain resources can do certain tasks?"} {"_id": "143438", "title": "How often do experienced programmers have trouble getting their code to perform its intended purpose?", "text": "I'm kind of inexperienced with programming (i.e. less than a year), and I have recently been getting discouraged, mostly from not being able to solve problems with my own code (not things like forgotten parentheses or syntax issues, but more logic-based problems). I just wanted to know: is this something that will dissipate with practice and time, or does it only get worse as my programs become more complex?"} {"_id": "178224", "title": "I want to build a Virtual Machine, are there any good references?", "text": "I'm looking to build a Virtual Machine as a platform-independent way to run some game code (essentially scripting). The Virtual Machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .NET developer, I'm familiar with the CLR and have looked into the CIL instructions to get an overview of what you actually implement on a VM level (vs. the language level).
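To be clear about the level I mean, here is a toy stack-machine dispatch loop -- a minimal Java sketch, purely illustrative; my real question is about everything around and below this:

    // A toy stack-based VM core: bytecode in, one operand stack, a switch
    // dispatch. Real designs add frames, a heap, constant pools, and
    // game-specific opcodes.
    public class ToyVm {
        static final byte PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

        public static void run(byte[] code) {
            int[] stack = new int[256];
            int sp = 0, pc = 0;
            while (true) {
                byte op = code[pc++];
                switch (op) {
                    case PUSH:  stack[sp++] = code[pc++]; break;
                    case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                    case MUL:   stack[sp - 2] *= stack[sp - 1]; sp--; break;
                    case PRINT: System.out.println(stack[--sp]); break;
                    case HALT:  return;
                    default: throw new IllegalStateException(\"bad opcode \" + op);
                }
            }
        }

        public static void main(String[] args) {
            // (2 + 3) * 4 => prints 20
            run(new byte[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
        }
    }

(A register-based VM would instead replace the operand stack with numbered slots and encode the operands in each instruction; that is the design split I'm asking about.)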
I've also dabbled a bit in 6502 Assembler during the last year. The thing is, now that I want\u00b9 to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are more, or hybrid, approaches. I need to deal with memory management, decide which low-level types are part of the VM, and need to understand _why_ stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is a better, more general/fundamental text on VMs. Basically something like the Dragon Book, but for VMs? I'm aware of Donald Knuth's Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished. Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains OpCodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as OpCodes vs. into the compiler that takes a script (language TBD) and generates the bytecode from it, but for that I need to understand what I'm really doing. * * * \u00b9 I know, modern technology would allow me to just interpret a high-level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google, because Virtual Machine is nowadays often associated with VMware-type OS virtualization..."} {"_id": "59288", "title": "Is using techniques from OpenSource code copyright infringement?", "text": "If I read through the source code of programs in a similar field of application to the one I am working on (e.g. 3D car racing simulation), and find that some techniques or patterns they use are better than the ones I am using, and use those in my project - is it theft? (No source code would be directly copied in any way.)"} {"_id": "237809", "title": "Does implementing algorithms improve your programming skills?", "text": "I have been self-studying algorithms and data structures for some time now and am currently enjoying it. Whenever I understand an algorithm, I usually try to code it from scratch for fun, and I always discover some logic I've never thought of before. I'm not so sure, but I always think that when I code algorithms, I am maybe practicing my programming skills. Is this true? I know they are important, but will my programming skills really improve by coding them?"} {"_id": "53123", "title": "How important is studying algorithms and theory to becoming a great programmer?", "text": "> **Possible Duplicate:** > Should I keep investing into data structures and algorithms? I'm a CS student. I want to become a really great programmer; what do I need to do to become one? Other than writing lots of code, I've heard that studying algorithms and theory (logic!) helps. What do you recommend to become the best? What do I need to read? What do I need to study?"} {"_id": "122689", "title": "How important is learning algorithms for high-level language programmers", "text": "> **Possible Duplicate:** > How important is studying algorithms and theory to becoming a great programmer? Today I learned the quicksort algorithm. I doubt I will ever implement my own version, though, as C# has its own built-in Sort method for lists and arrays. How important is learning algorithms for high-level language programmers?
In my example I gained no benefit from knowing the algorithm, but perhaps my example was too trivial to be accurate?"} {"_id": "79585", "title": "Software License: Open Source, but no distribution (for free or for profit)", "text": "I wrote a program in PHP. I want people to be able to use the code to learn from, and even include bits in their own apps, but I also want to maintain the right to sell it and make it clear that others are not to distribute copies, either for free or for profit. I haven't used anyone else's code, so I don't have to worry about license compatibility. Which license out there is right for that? I know I can't stop people from ignoring the license, but I just want them to at least know it's there and know they're doing something wrong if they try. I can't find anywhere that explains licenses in plain English, and yes, I have tried to read the GNU and BSD licenses, but I can't get it. Thanks. OK, so here are the clarifications others have asked for: I am going to sell this program very cheap. People are free to use as much code as they want from it, so long as they have first purchased the program and they do not profit from my code. I wouldn't mind if people just took the whole thing and added on and made it better, but then people wouldn't be honest: they would just add something on and give it away for free, and then no one would buy my product. I suppose I should just let people have it and ask for donations. So maybe I should GPL or BSD it. EDIT: I understand why everyone is getting confused. It is contradictory, but the thing about legalese is that everything seems black or white. I'm trying to achieve a gray area. I want people to buy this thing from me, not copy it and give it away to people, and even use parts of it. By \"use parts\", what I'd say if someone asked me casually would be \"just be reasonable\". So basically, don't use a majority of my code in your work and then totally outshine what I did, because you'll have it easy, as I did most of the dirty work already! I mean, sure, use my DB connect script in your project, I don't care. OR... use all the code and add to it! But only for personal use, and don't you dare start giving it out, because I'm going to be making money off this thing, and that totally undermines my work. And it'd be nice to show me your additions, and I'll give credit, but the main thing is don't redistribute my software that I have to earn a living with. It's hard to put that sentiment into legalese. And I just can't understand all these licenses."} {"_id": "92108", "title": "Are web best practices so important if they are always violated by large companies?", "text": "Usually, there are a bunch of rules and best practices which help optimize a website, bring in new customers, and in general make the user experience fast, smooth and pleasant while (sometimes) reducing the server load. Also, usually, the largest companies don't bother to use those best practices. Except for a few companies (like Google), on the largest websites we can see: * table layouts, non-minified JavaScript, no CSS sprites where they should be, several CSS files, intrusive JavaScript even in situations where it was simple to be unobtrusive, calls to JavaScript files in `<head>`, etc. * meaningless errors, annoying popups, registration forms with a huge number of fields to fill, UX issues at registration\u00b9, stupid questions and situations which make it impossible to use the website\u00b2, confusing situations on key parts of the website\u00b3, multiple redirects, slow pages, etc.
On one hand, those companies are paying a huge amount of money to develop, optimize\u2074 and host their websites, since their success relies partially\u2075 or completely\u2076 on them; on the other hand, they are constantly violating the best practices, while people advocating those best practices explain that following them helps to achieve better UX and faster websites with a smaller environmental footprint (which can be non-negligible for websites hosted on thousands of servers). In such a case, it is logical to ask: * If the large companies which are really successful, have a lot of money for their websites, employ competent people, and care about website optimization constantly violate those best practices, **are those best practices valid?** * Or, in other words, if those best practices are so important and help so much to optimize websites, **why don't those companies care about them?** Let's take Dell.com as an example. I'm pretty sure they hire the best of the best to create their home page. Their home page uses table layouts. Does it mean that people who say that table layouts are evil are wrong? Does it mean that the best of the best hired by Dell are incompetent? * * * \u00b9 First example: eBay makes it impossible, when registering, to _paste_ your email address in both fields, making it take longer to use the registration form for no reason at all except to annoy users; best practice would be to forbid _copying_, but allow pasting. Second example: Microsoft Live limits the length of a password to 16 characters, with no apparent reason at all. \u00b2 For example, when you've not been to Amazon for a very long time, it says that the password is invalid; then, to recover it, it asks you for information about your last transaction, which makes the account unusable if you've never done any transaction with the account before. \u00b3 Dell, for example, makes it impossible to order a rack server without any hard disk, while this can be perfectly valid if you already have the hard disks you want to reuse. \u2074 Such optimization includes partial flushes to send the most important content faster, studies on the relationship between the time people spend waiting for pages to load and the number of people using the website, etc. \u2075 As for Dell, Microsoft and others. \u2076 As for eBay or other web-based companies."} {"_id": "95491", "title": "web development without the knowledge of client side scripting or programming?", "text": "> **Possible Duplicate:** > How much HTML and CSS should a server-side developer know? What will happen if someone wants to be a web developer but is not interested in learning client-side scripting or programming? I am asking this question because I think it is not possible, but a few days back I had an argument with one of my more rigid programmer friends, and I could not make him understand. Do you guys agree with me? I believe that without good client-side knowledge, it is impossible to be a good web developer."} {"_id": "50758", "title": "C# Dev - I've tried Lisps, but I don't get it", "text": "After a few months of learning about and playing with Lisp, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in it instead of C#. I would really like some compelling reasons, or for someone to point out that I'm **missing something _really big_**. The strengths of Lisp (per my research): * Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too.
* Implicit support for functional programming - C# with LINQ extension methods: * mapcar = .Select( lambda ) * mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ) * car/first = .First() * cdr/rest = .Skip(1) .... etc. * Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: * \"(lambda (x) ( body ))\" versus \"x => ( body )\" * \"#(\" with \"%\", \"%1\", \"%2\" is nice in Clojure * Method dispatch separated from the objects - C# has this through extension methods * Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours * Code is Data (and Macros) - Maybe I haven't \"gotten\" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the \"language\", but I'm not sure that's a strength * DSLs - Can only do it through function composition... but it works * Untyped \"exploratory\" programming - for structs/classes, C#'s autoproperties and \"object\" work quite well, and you can easily escalate into stronger typing as you go along * Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...) * The REPL for bottom-up design - OK, I admit this is really, really nice, and I miss it in C#. Things I'm missing in Lisp (due to a mix of C#, .NET, Visual Studio, ReSharper): * Namespaces. Even with static methods, I like to tie them to a \"class\" to categorize their context (Clojure seems to have this, CL doesn't seem to.) * Great compile- and design-time support * the type system allows me to determine \"correctness\" of the data structures I pass around * anything misspelled is underlined in real time; I don't have to wait until runtime to know * code improvements (such as using an FP approach instead of an imperative one) are auto-suggested * GUI development tools: WinForms and WPF (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.) * GUI debugging tools: breakpoints, step-in, step-over, value inspectors (text, XML, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack * (To be fair, my stint with Emacs+SLIME seemed to provide some of this, but I'm partial to the VS GUI-driven approach.) I really like the hype surrounding Lisp and I gave it a chance. But is there anything I can do in Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?"} {"_id": "83569", "title": "How to support design decisions?", "text": "There are bad design decisions. In the same vein, being a senior engineer can sometimes be hard. Youngsters have awesome and radical ideas, and a senior engineer wonders whether they will work in the field, and whether they are worth spending all that effort on. Experience suggests that these short-term solutions are very expensive in the long run. However, it might be hard to convince them otherwise. How does one go about this scenario? **Patterns** are a good start; however, it takes a little experience to figure out where to apply a pattern, as a newbie might restate the problem to fit the pattern. Also, patterns do not help at coarser/component levels.
Another idea is to **refer to a good design** that _works in the field_ and is outside the organization. The problem here is that sometimes drawing a parallel might be hard. Are there any other ideas?"} {"_id": "145778", "title": "Is the BSD License compatible with Apple App Stores?", "text": "I would like to know if it is possible to make an application, to be sold on Apple's App Stores, that is linked (during compilation) to a BSD-licensed library."} {"_id": "148915", "title": "Impact of Website Redesign on Google Analytics", "text": "I just finished a redesign for a website that is currently using Google Analytics. I want to continue to use Google Analytics, but I'm not sure if I should create a new profile for the new site or simply use the old UA number. The new website has a completely different URL structure, and much of the content has been updated/deleted/added."} {"_id": "148914", "title": "I have one afternoon to extol the benefits of .NET over VB6... what do I say?", "text": "My company is a small twenty-man engineering firm. All the application programming here is done in VB6 by two people, who have taught themselves VB6 from an assembly background while working here for the past 25+ years, and myself. As a result, the VB6 code is riddled with horrendous code smells, like tons of stringly typed variables, terribly long functions, hundreds of public global variables (some of which are preferred over passing around arguments and returning values), and not a single object class. Refactoring is nigh impossible, and any changes require too much digging through code, and once made, always seem to introduce more holes. My boss is realizing that VB6 is a dead technology, and is willing to listen to my pleas to move to .NET for new development. We are moving forward to .NET, but he sees it as a way to keep up compatibility with the newer Windows OSes, not as a way to write better code. How can I best explain the benefits of the .NET language over VB6, beyond mere up-to-dateness? What can I say to best emphasize that the move to .NET is a good move, _but also_ that it means our current programming paradigm should also begin to change? As soon as my boss hears that Visual Basic .NET looks just like VB6, I know that his first instinct will be to simply convert our old code mess to .NET. I understand that it will be impossible to change anyone's mindset in a single afternoon, but how can I at least convince my assembly-toting boss that things like strongly-typed variables, custom classes, and private fields aren't a _total_ waste of time and energy?"} {"_id": "148918", "title": "Why do trees grow downward?", "text": "Why do trees grow downward in computer science? I have a feeling it goes back to a printer, and that a program traversing a tree first prints the root, and uses the notion of a bottomless stack of paper to express the indefinite levels of recursion that might be encountered. References: > Trees grow downward, having their roots at the top of the page and their leaves down below From ON HOLY WARS AND A PLEA FOR PEACE. > by convention, trees are drawn growing downwards From the Wikipedia article on tree data structures.
> Real trees grow from their root upwards to the sky, but computer-science trees grow from the root downwards From David Schmidt's lecture notes."} {"_id": "61392", "title": "free as in free beer", "text": "Some years ago (more precisely in 1998), the confusion English-speaking people were making with the term **free** when applied to software led some members of the Free Software Foundation to create a new term: **open source** ( http://www.gnu.org/philosophy/free-software-for-freedom.html ). The great concern was that people were confusing free as in freedom of speech with free as in free beer (see Wikipedia: http://en.wikipedia.org/wiki/Free_and_open_source_software ). The free here relates to the liberty of using and sharing your software, in a way that goes against proprietary \"copyright\" (and patented) software. It was first introduced by Richard Stallman, in 1986. The core is the set of freedoms: * Freedom 0: The freedom to run the program for any purpose. * Freedom 1: The freedom to study how the program works, and change it to make it do what you wish. * Freedom 2: The freedom to redistribute copies so you can help your neighbor. * Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits. Well, we all know about that (at least those of us who have been following this discussion forever). In my opinion, the term open source clarified things for us. But it was already assured by Freedom 1. In other languages (like Spanish or Portuguese, for instance), we can say free as libre or livre, and the other meaning of free, as in free beer, as gratis. So, for others, there was never any confusion (actually, only when reading English articles about it). Freedom number 2 guarantees my right to redistribute copies at will. I understand that as: I don't need to pay the owner any royalties for every copy I give away to my friends or students. And of course, we will all agree with that. This ultimately means that I **can** give away free (gratis) copies, or I can also charge for copies (support, media, etc.). Now, if something can be given away for free (gratis), then it is free (gratis). No matter if someone, or some enterprise, wants to sell it, you still have ways (sites, downloads, friends, etc.) to get it for free (gratis). So, from my non-English point of view, we have 3 different things here. The most important, undoubtedly, is the free (freedom, liberty, libre, livre - so you can do all the stuff you want) quality of the software. The other is being open-source (so you can see the code inside). The last one **IS** another **GREAT** quality, that is, there **EXISTS** free (gratis, non-chargeable) software. Being free (gratis) does not mean it is forced to be free (gratis). You can have people who sell, who pay, and who get/send it for free. Still, it is a quality. What I don't understand is why, after so many years, people from the FSF are still cautious about saying that there is free (gratis) software also. Yes, free as in free beer. If it is a quality, why hide it? Just to prevent confusion? Well, let me tell you the news: the confusion has existed since the beginning. I was alive and interested in the subject when it started, and I remember lots of discussions about this free being free of charge, or free of restrictions. After lots of consulting, lawyers helped to write the first \"license\" in terms that could be used in court, introduced the term \"copyleft\", and made it \"clear\" that free is not about price.
At that time, because of the novelty, OK, I agreed with emphasizing this half of the meaning of free. But now we are past it. Isn't it time to tell people that we ALSO have free as in free of charge? Three qualities are better than two: livre/gratis/open. So, why don't we tell people that free software can be just free? Edited: More objectively: 1- Is free (freedom) software necessarily free (gratis)? 2- Is free (gratis) software necessarily free (freedom)? The first one is the important question, as the second is just there to catch fast typers (we all know the answer). Another question raised from the discussion: Is free (gratis) a quality? (I assumed that as taken for granted in the question introduction.)"} {"_id": "61394", "title": "Do you see a use for \"Spreadsheet programming\"?", "text": "A while ago I stumbled upon the concept of using spreadsheets (I mean cells and formulas, not macro code) as a way to specify programming logic. The idea is: * create a spreadsheet with a cleanly defined flow of calculations (which sometimes are better suited to the \"dataflow\" paradigm of spreadsheets instead of procedural or object-oriented programming styles) * define input cells * define output cells * compile the whole thing into a stand-alone executable class (or function, procedure, ...) * use it in normal code within a broader software project * use the spreadsheet as source code to be maintained over time The idea is to use this technique for problems that really fit the model, and that this would lead to naturally well-documented and easily maintainable code. I am interested in knowing whether you have experience using the technique, and for what. An example application that came to my mind is insurance tariff calculators, which are typically drafted, created and validated by actuaries in Excel sheets and only later coded (it's a painful process) into some hard-to-maintain programming logic."} {"_id": "61396", "title": "What should I use: running a cron job, or calculating at query time?", "text": "I have a table which has amount, entrydate and amtperday fields; the amount is to decrease daily. So which is better: running a cron job daily to decrease the amount, or calculating the amount at fetch time from the date field? From the date field I can get the number of days elapsed, so: amount = amount - (number of days * amtperday). Which approach is good? Please suggest."} {"_id": "216833", "title": "Design Patterns - Why the need for interfaces?", "text": "OK. I am learning design patterns. Every time I see someone code an example of a design pattern, they use interfaces. Here is an example: http://visualstudiomagazine.com/Articles/2013/06/18/the-facade-pattern-in-net.aspx?Page=1 Can someone explain to me why the interfaces were needed in this example to demonstrate the facade pattern? The program works if you pass the classes to the facade instead of the interfaces. If I don't have interfaces does that mean"} {"_id": "216831", "title": "Simple website with a GPL V3 Framework", "text": "I write web-based software and simple websites (\"Home\", \"Who we are\", \"Contact\"). For a simple website I'm using a GPL-v3-covered framework. Users surf the website, send an email, get info, etc. I repeat: a simple website, not a Joomla or WordPress. 1) Will the website be covered by the GPL? I don't modify the framework; I'm using its classes in other classes... (OOP). 2) If the answer to point 1 is yes, do I need to add (e.g. in the footer) the name of the framework and a link to it?
3) Must I permit download of the entire website so the code can be studied (nothing a programmer would be interested in)? E.g. by placing it on GitHub? 4) If 2 is NO, how can you \"understand\" that we use that framework? In effect, no PHP lines are exposed to the browser... You cannot tell that when you push \"Send email\" the site is calling `$this->send($email)`. If you write me an email asking \"Are you using the XXX framework?\", I can answer NO."} {"_id": "111462", "title": "browsing with curl instead of firefox (or any other browser)", "text": "My company has 2 Intranets (one technical, and one commercial); I am the administrator of the technical Intranet. We need to see if a client has a contract for a service we offer. As such, for ease of use (for us in the tech department) I made a button in our Intranet that reads data via `ajax->php->curl->dom` from the commercial intranet, gets the type of the contract, and if it's valid prints the contracts into a DIV tag. So we don't have to open a new tab, look up the client, and go through all the commercial info in there just to see something as simple as \"yes, it's OK\" or \"no, disable the service\". Now the admin of the commercial Intranet (the nephew of somebody on the Board) went to the Board complaining that we steal information from them, according to this internal decision: Any addition of new functions, any changes to the current functions or operating procedures of any service related to commercial databases provided by the company will be made only with approval from the Board. But we did not add new functions, we did not modify existing functions or procedures; we just read the data with curl instead of a normal browser. I am into optimization, efficiency, and user-friendly experience, so in our Intranet you get the info you need with 2 or 3 clicks (and max 3 seconds); otherwise I think I did something wrong and I try to improve. The script we are talking about returns the data to the user in around 1-3 seconds, but now the Board forbids us to use it anymore, and our team is slower (you need about 2-5 minutes, and around 8-12 clicks, to find a client with the commercial intranet - it's a real mess). In your opinion, is there a legal difference between using curl or Firefox (or any other browser) to read a simple HTML page (no JavaScript involved), according to the decision above or in general? P.S. I know you are not lawyers, just give a programmer's point of view. Thank you"} {"_id": "74764", "title": "How often should I/do you make commits?", "text": "I am a recent (as of yesterday) college grad - BS in Computer Science. I've been a huge fan of version control ever since I got mad at an assignment I was working on and just started from scratch, wiping out a few hours' worth of actually good work. Whoops! Since then, I've used Bazaar and Git (and SVN when I had to for a Software Engineering class), though mostly Git to do version control on all of my projects. Usually I'm pretty faithful about how often I commit, but on occasion I'll go for a while (several functions, etc.) before writing a new commit. So my question for you is, how often should you make commits? I'm sure there's no hard and fast rule, but what guideline do you (try to) follow?"} {"_id": "136596", "title": "Can commits be considered too small?", "text": "> **Possible Duplicates:** > git / other VCS - how often to commit? > How often should I/do you make commits? The usage of source control is very different from one developer to another and from one project to another.
Some commit very often; others can spend a whole day or several days without committing (especially when they work on the project alone, or they know that other team members are working on a very different part of the project). ## Examples Sometimes, I've seen extremely small commits, both in real life and in webcasts and other learning material. Some examples, mostly from real life, are: * A commit which solves a bug #... or implements a feature #... by changing one line of code. IMHO, it's a perfectly valid case for a commit, especially if the bug tracking system is linked to the version control and is updated automatically according to the revisions. Even without this link, it's useful to track which commit solved what, independently of the number of changes required to solve a bug or implement a feature. * A commit which changes a single configuration setting (given that in the context, configuration settings must be in source control). IMHO, this could sometimes be merged with another commit, unless the previous setting breaks the build, introduces a bug, or can affect other developers (for example a connection string which changed after the test database server was migrated). * A commit which corrects the spelling of a word, for example in a string displayed to the user. IMHO, in most cases, this can be merged with another commit (unless, again, it breaks the build). The only case where it cannot be merged is when, if left, the wrong spelling could be propagated through code and would be too complicated or impossible to change later, as with the HTTP _referer_ header. * A commit which adds a comment to a method (while the method was already explicit enough) or solves another minor style-related rule. Example: in .NET Framework, StyleCop requires every method to be documented, and the XMLDoc comment for a constructor (which is a method too) must begin with: > Initializes a new instance of the class. A commit can enforce this last rule, replacing a comment in legacy code: > Creates a new vehicle with the specified number of wheels. by: > Initializes a new instance of the Vehicle class, using the specified number of wheels. In other words, the revision has no meaning other than to conform the piece of code to the style standards used in the codebase. IMHO, this can be merged with another commit in every case (after all, style-related rules must be enforced at commit time to reject code which doesn't match them), unless there are several changes in several places. ## Questions Am I wrong on those points? Is there such a thing as a commit that is too small, or is committing very often a best practice? Is it worth committing such small changes, given that it would \"pollute\" the revision log and make it more difficult to find the relevant changes among tiny changes nobody cares about, which don't break or unbreak the build, nor affect other developers?"} {"_id": "210707", "title": "Using Git - better to do lot of incremental updates or one big daily update?", "text": "I've just started working with Git (GitHub) in anticipation of an upcoming project that I'm project managing and whose front end I'm designing and developing. One thing I couldn't work out: is it preferable to publish each individual change as you make it, i.e. updated sidebar JS, designed new FAQ page (each as individual commits), with the back-end developers doing the same, i.e. added this class, refactored that... Or is it better to do a daily / half-daily commit of all the work you've done?
My thoughts were that if you do lots of small commits it's easier to roll back, but at the same time every commit you make has to be pulled locally by the rest of the team before they can commit their code. You obviously don't have this problem so much if you do daily or half-daily commits, but it's a little more complicated to roll back if you need to? Is there a best practice for this, or is it down to team preference? Background: I'm using GitHub via the Mac desktop app, not the command line."} {"_id": "209401", "title": "Git for a solo developer", "text": "I'm a developer working on a Wordpress project. I work on this alone, and I want to improve the way I use Git, but I don't know where to start. Currently, I'm using git to commit all of the work that I've done for the day, as long as that specific work is done. Here are some instances where I commit a specific piece of work: * A specific feature whose code can be written within a day * A bugfix For bigger features that can take weeks to finish, I create a branch, then commit the sub-features once they're done. But oftentimes I feel like I'm committing way too much; for example, when I'm trying to solve lots of bugs, I do not commit each and every bug that I resolve. Sometimes I also end up working on 2 or more things that should be in their own commits. Then I'll find it hard to target just the specific files within a specific change, so I end up putting them all in one commit and end up with a commit message like `add search feature and add caching feature`. What I want to know are the best practices for solo developers working with Git. For solo developers out there, you're welcome to answer with your own workflow as well."} {"_id": "83837", "title": "When to commit code?", "text": "When working on a project, the code may be developed reasonably fast in a single day, or bit by bit over a prolonged period of a few weeks/months/years. As commit counts are coming to be considered a measure of project development, a project with more commits doesn't really have more code written than a project with fewer commits. So the question is: when should one really make a commit to the repository, so that the commit is justifiable? As an add-on: Is it a correct practice to measure a project's development based on its commits?"} {"_id": "10793", "title": "When is a version control commit too large?", "text": "I've heard in several places \"Don't make large commits\", but I've never actually understood what a \"large\" commit is. Is it large if you work on a bunch of files, even if they're related? How many parts of a project should you be working on at once? To me, I have trouble trying to make \"small commits\", since I forget or create something that creates something else that creates something else. You then end up with stuff like this:

    Made custom outgoing queue
    Bot
    -New field msgQueue which is nothing more than a SingleThreadExecutor
    -sendMsg blocks until message is sent, and adds wait between when messages get sent
    -adminExist calls updated (see controller)
    -Removed calls to sendMessage
    Controller
    -New field msgWait denotes time to wait between messages
    -Starting of service plugins moved to reloadPlugins
    -adminExists moved from Server because of Global admins. Checks at the channel, server, and global level
    Admin
    -New methods getServer and getChannel that get the appropriate object Admin belongs to
    BotEvent
    -toString() also shows extra and extra1
    Channel
    -channel field renamed to name
    -Fixed typo in channel(int)
    Server
    -Moved adminExists to Controller
    PluginExecutor
    -Minor testing added, will be removed later
    JS Plugins
    -Updated to framework changes
    -Replaced InstanceTracker.getController() with Controller.instance
    -VLC talk now in own file
    Various NB project updates and changes

    --- Affected files
    Modify /trunk/Quackbot-Core/dist/Quackbot-Core.jar
    Modify /trunk/Quackbot-Core/dist/README.TXT
    Modify /trunk/Quackbot-Core/nbproject/private/private.properties
    Modify /trunk/Quackbot-Core/nbproject/private/private.xml
    Modify /trunk/Quackbot-Core/src/Quackbot/Bot.java
    Modify /trunk/Quackbot-Core/src/Quackbot/Controller.java
    Modify /trunk/Quackbot-Core/src/Quackbot/PluginExecutor.java
    Modify /trunk/Quackbot-Core/src/Quackbot/info/Admin.java
    Modify /trunk/Quackbot-Core/src/Quackbot/info/BotEvent.java
    Modify /trunk/Quackbot-Core/src/Quackbot/info/Channel.java
    Modify /trunk/Quackbot-Core/src/Quackbot/info/Server.java
    Modify /trunk/Quackbot-GUI/dist/Quackbot-GUI.jar
    Modify /trunk/Quackbot-GUI/dist/README.TXT
    Modify /trunk/Quackbot-GUI/dist/lib/Quackbot-Core.jar
    Modify /trunk/Quackbot-GUI/nbproject/private/private.properties
    Modify /trunk/Quackbot-GUI/nbproject/private/private.xml
    Modify /trunk/Quackbot-GUI/src/Quackbot/GUI.java
    Modify /trunk/Quackbot-GUI/src/Quackbot/log/ControlAppender.java
    Delete /trunk/Quackbot-GUI/src/Quackbot/log/WriteOutput.java
    Modify /trunk/Quackbot-Impl/dist/Quackbot-Impl.jar
    Modify /trunk/Quackbot-Impl/dist/README.TXT
    Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-Core.jar
    Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-GUI.jar
    Modify /trunk/Quackbot-Impl/dist/lib/Quackbot-Plugins.jar
    Modify /trunk/Quackbot-Impl/lib/javarebel.stats
    Add /trunk/Quackbot-Impl/lib/jrebel.info
    Modify /trunk/Quackbot-Impl/nbproject/private/private.properties
    Modify /trunk/Quackbot-Impl/nbproject/private/private.xml
    Modify /trunk/Quackbot-Impl/nbproject/project.properties
    Modify /trunk/Quackbot-Impl/plugins/CMDs/Admin/reload.js
    Add /trunk/Quackbot-Impl/plugins/CMDs/Operator/hostBan
    Modify /trunk/Quackbot-Impl/plugins/CMDs/Operator/mute.js
    Modify /trunk/Quackbot-Impl/plugins/CMDs/lyokofreak/curPlaying.js
    Modify /trunk/Quackbot-Impl/plugins/CMDs/lyokofreak/lfautomode.js
    Modify /trunk/Quackbot-Impl/plugins/listeners/onJoin.js
    Modify /trunk/Quackbot-Impl/plugins/listeners/onQuit.js
    Modify /trunk/Quackbot-Impl/plugins/testCase.js
    Add /trunk/Quackbot-Impl/plugins/utils/whatsPlaying.js
    Modify /trunk/Quackbot-Impl/src/Quackbot/impl/SandBox.java
    Add /trunk/Quackbot-Impl/vlc_http
    Add /trunk/Quackbot-Impl/vlc_http/current.html
    Modify /trunk/Quackbot-Plugins/dist/Quackbot-Plugins.jar
    Modify /trunk/Quackbot-Plugins/dist/README.TXT
    Modify /trunk/Quackbot-Plugins/dist/lib/Quackbot-Core.jar
    Modify /trunk/Quackbot-Plugins/nbproject/private/private.properties
    Modify /trunk/Quackbot-Plugins/nbproject/private/private.xml
    Modify /trunk/Quackbot-Plugins/src/Quackbot/plugins/JSPlugin.java
    Add /trunk/Quackbot-Plugins/vlc_http
    Add /trunk/global-lib/jrebel.jar

Yea.... So for questions: * What are some factors for when a commit becomes too large ( _non-obvious stuff_ )? * How can you prevent such commits? Please give specifics. * What about when you're in the semi-early stages of development, when things are moving quickly?
Are huge commits still okay?"} {"_id": "206979", "title": "How big should a single commit be?", "text": "A discussion has come up at work and I want to get the opinion of other programmers. During my time in college, we only used SVN for source control. Everybody commits to the same server and all changes are pulled to all machines. In this system, it's easy to know when you should push your changes: when you're done with your changes. However, this makes merging changesets by multiple developers a complete nightmare. For the past year or so that I've been programming professionally, though, we've been using Mercurial for source control. At first, I was using it like SVN: work on a feature, commit and push in one go, but only when it's finished. Gradually, my habits started to change. I learned about branches, how commits could be draft, public or secret, and even used a USB stick as a central repository. My commits became smaller, but they increased in quantity. Instead of committing and pushing in one go, I would make commits for every change in a named branch. Here's what the repository for one of my personal projects looks like: ![Repository layout](http://i.stack.imgur.com/t0h7X.png) I see a couple of advantages in this approach. * Small commits mean they are focused on a single subject. Every commit gets a one-line description of the changes, which makes them easy to follow even after weeks of inactivity. * Named branches keep you on topic. If you're working in the \"refactor-text-rendering\" branch, for example, you shouldn't make changes to the math library, because that's not related to the branch's topic. However, the lead developer is annoyed by my methodology. He feels that the small commits clutter up the repository's history. His preferred methodology is to work on a patch. When the feature is done, he merges the commits and pushes them in one go. So, what do you prefer? A stream-of-consciousness history of commits, or a patch of changes every now and then? Or something else entirely?"} {"_id": "221582", "title": "What to do if my own opensource code greatly resembles another project's code?", "text": "I'm writing a Java class, part of an opensource game development framework under the Apache 2.0 license. There is another framework, under the same license, that happens to also have a Java class that addresses the same subject as mine. I recently checked it out, and my implementation is extremely similar to theirs. Heck, even the local variables. The differences are minimal. Since both classes are opensource, I guess it doesn't matter. However, since the code is so similar, anyone may come to the conclusion that I just modified theirs a bit, and eventually quote the license: > * You must give any other recipients of the Work or Derivative Works a > copy of this License; > > * You must cause any modified files to carry prominent notices stating > that You changed the files; > > Since I wrote my class, there's no need to put their copyright notice in my code. However, it cannot be proven that I didn't just copy their code and modify it a bit. What is one supposed to do in this scenario?"} {"_id": "151118", "title": "Reasons for either 32-bit or 64-bit as development machine", "text": "I'm about to do a new Linux install, which will be primarily used for programming. I've seen benchmarks showing the speed improvement of the 64-bit version; however, I have a hard time telling how much of these benchmarks translates into improvement in everyday usage. And of course there are other aspects to consider. 
Usage I have in mind: * mainly programming Python, with occasional C, C++ and Java; * IDEs that run on the Java platform (Eclipse and IntelliJ); * on very rare occasions having to compile for the 32-bit platform; * not planning to have more than 64GB of RAM anytime soon (and I don't mind using PAE kernels); * the machine in question has 4GB of RAM and an Athlon II X2. What are the pros and cons of choosing either an i386 or x86_64 distro?"} {"_id": "66679", "title": "Freelancing dilemma.. to do or not to do?", "text": "My current organization has stated clearly that I should not work on any freelance projects in the fields where the company offers its services. I agreed to that and joined the organization. At the time of joining, I also mentioned that I have been a member of an NGO for the last two years, not as a programmer but as a volunteer. They agreed to that and everything is working smoothly. But recently my NGO decided to build their site. I was called to be a part of the programming team, but I refused because of that condition. The NGO agreed and excluded me from the programming team. But they insisted on me being a translator, and requested that I arrange translators for other languages. I managed to bring together a team of translators. Now my NGO is planning to make me head of the translation team, because they felt that, being both a programmer and a translator, I can be a bridge between the two teams. Here is where the dilemma begins: my company is in the website development business, but they don't offer translation services. For the NGO website I am going to be credited for translation. My company is very strict on its freelancing rules, but I am a bit optimistic that they will allow me to be part of the NGO project, which in essence is not going to compete with any of their products. Then again, they are strict about freelancing, so I am fairly sure they are not going to allow me, and something could go wrong. On the other hand, I don't see any legal problems in doing this, and I am excited to be a part of the website. What would you do if you faced this situation?"} {"_id": "245207", "title": "Sortable listview using SQLite", "text": "I'm implementing a sortable listview using bauerca's library here. This works, but now I need to define some things where I need some help. This is the functionality of my app: I can create up to 5 names. These names are saved in a SQLite database. Then, in another activity, these names are loaded from the database and shown so I can select which ones I'm going to use. With 5 names there are several combinations I can use, but the basic purpose of this app is that when I select some names, they are shown in a sortable listview where I can sort them. After sorting them, they should be saved in another database to remember the position of each one. For this, the second database has these columns: NAME, GROUP, ORDER. **My approach is this:** if I select for example `NAME1` and `NAME2`, this would be `GROUP2_1`. If I select `NAME1` and `NAME3`, this would be `GROUP2_2`... and so on with all the possible combinations. Then, in the `ORDER` column, the position of each name in the listview is saved. But it seems that this approach with the GROUP column doesn't work... it saves the position of each name, but regardless of the group to which it belongs. So for example, if I have these names: MARK PAUL JERRY If I select MARK and JERRY, the listview would at first show them this way: MARK JERRY But now I sort them: JERRY MARK This combination belongs to GROUP2_1, and JERRY has the value 1 in the ORDER column and MARK the value 2. 
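To make the approach concrete, after that first sort the second database should hold rows like these (a sketch only: the table name `positions` is made up, and GROUP/ORDER are quoted because they are reserved words in SQL):

    -- Hypothetical contents of the second table after sorting GROUP2_1
    INSERT INTO positions (NAME, \"GROUP\", \"ORDER\") VALUES ('JERRY', 'GROUP2_1', 1);
    INSERT INTO positions (NAME, \"GROUP\", \"ORDER\") VALUES ('MARK',  'GROUP2_1', 2);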
If I now go back and select this other combination: MARK PAUL JERRY This should be GROUP3_1, and now I sort them this way: MARK JERRY PAUL So now MARK has ORDER value 1, JERRY 2 and PAUL 3. Now if I go back and load the first combination again, it will load the names with the last assigned ORDER value, so instead of loading in this order: JERRY MARK it will load them using the values of the last combination: MARK JERRY Sorry for such a long explanation, but I didn't know a better way to explain my issue. To sum up, the problem is that the ORDER value is assigned to each name regardless of the GROUP, so this is not a valid approach for what I need. **Update -- key definition** That's my current table: db.execSQL(\"CREATE TABLE \" + TABLE_NAME + \" (\" + TravelOrder._ID + \" INTEGER PRIMARY KEY AUTOINCREMENT,\" + TravelOrder.NAME + \" TEXT NOT NULL \" + \");\");"} {"_id": "214474", "title": "REST or a message queue in a multi-tier heterogeneous system?", "text": "I'm designing a REST API for a three-tier system like: `Client application` -> `Front-end API cloud server` -> `user's home API server (Home)`. `Home` is a home device, and is supposed to maintain a connection to `Front-end` via WebSocket or a long poll _(this is the first place where we're violating REST. It gets even worse later on)_. `Front-end` mostly tunnels `Client` requests to the `Home` connection and handles some of the calls itself. Sometimes `Home` sends notifications to `Client`. `Front-end` and `Home` have basically the same API; `Client` might be connecting to `Home` directly, over LAN. In this case, `Home` needs to register some `Client` actions on the `Front-end` itself. The pros of REST in this system are: * REST is human-readable; * REST has a well-defined mapping of verbs (like CRUD), nouns and response codes to protocol objects; * It works over HTTP and passes all the possible proxies. The cons of REST are: * We need not only a request-response communication style, but also publish-subscribe; * HTTP error codes might be insufficient to handle three-tier communication errors; `Front-end` might return `202 Accepted` to some async call, only to find out that the necessary `Home` connection is broken and there should have been a `503`; * `Home` needs to send messages to `Client`, so `Client` will have to poll `Front-end` or maintain a connection. We were considering WAMP/Autobahn over WebSocket to get publish/subscribe functionality, when it struck me that it's already looking like a message queue. Is it worth evaluating a sort of message queue as a transport? The cons of a message queue seem to be: * I'll need to define CRUD verbs and error codes myself at the message level; * I read something about \"higher maintenance cost\", but what does it mean? How serious are these considerations?"} {"_id": "245200", "title": "Why use subtyped functions?", "text": "Say you have arguments A1 >: A2 (contravariant), and return types B1 <: B2 (covariant). The corresponding functions are such that: A1 => B1 <: A2 => B2 Sometimes this makes sense to me - I will grasp it in a flash of lateral thinking. And then a few days later it is lost to me, until I struggle through it for another hour or so. Using an example from Odersky's Scala MOOC (this is not an assignment spoiler), Empty and NonEmpty are both subtypes of IntSet. I understand that you cannot pass an IntSet into a function expecting a NonEmpty, since the IntSet might be an Empty. Therefore the latter (A2 => B2) cannot be a subtype of the former (A1 => B1). 
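A minimal Scala sketch of both directions on the argument side (Empty, NonEmpty and IntSet are stand-ins for the course's classes; everything else here is assumed):

    abstract class IntSet
    class Empty extends IntSet
    class NonEmpty extends IntSet

    object VarianceSketch {
      // A function that accepts any IntSet...
      val eatsAnySet: IntSet => Unit = _ => ()

      // ...can stand in where a NonEmpty => Unit is expected,
      // because IntSet => Unit is a subtype of NonEmpty => Unit.
      val eatsNonEmpty: NonEmpty => Unit = eatsAnySet // compiles

      // The reverse direction is rejected by the compiler:
      // val bad: IntSet => Unit = eatsNonEmpty // type error
    }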
And I understand that you _can_ pass a NonEmpty into a function expecting an IntSet, apparently because the function will only be calling IntSet methods that all subtypes have implemented. Therefore the former _can_ be a subtype of the latter. Similarly, a function that returns an IntSet cannot be a subtype of a function that returns a NonEmpty, because the returned IntSet cannot necessarily do everything a NonEmpty can do. Conversely, a function that returns a NonEmpty _can_ be a subtype of a function that returns an IntSet, since the returned NonEmpty will conform to the IntSet contract. But what I'm having trouble grasping is: if you _can_ pass a NonEmpty into a function expecting an IntSet, and if that function were a subtype of a function expecting a NonEmpty (ostensibly because it is calling NonEmpty-specific methods in its implementation), then why would you ever want to use that subtyped function? Does anyone have examples of this being useful? Why use subtyped functions? To me it just seems dangerous to pass in an IntSet if you know that it is going to call NonEmpty-specific functionality."} {"_id": "214476", "title": "Rails/Node.js interaction", "text": "My co-worker and I are developing a web application with Rails and Node.js, and we can't reach a consensus regarding a particular architectural decision. Our setup is basically a Rails server working with Node.js and Redis: when a client makes an HTTP request to our Rails API, in some cases our Rails application posts the response to a Redis database and then Node.js transmits the response via WebSocket. Our disagreement occurs on the following point: my co-worker thinks that using Node.js to send data to clients is somewhat business logic and should be inside the model, so in the first code he wrote he used broadcast commands in callbacks and other places in the model; he's convinced that the models are the best place for the interaction between Rails and Node. I, on the other hand, think that using Node.js belongs to the runtime realm; my take is that the broadcast commands and other Node.js interactions should be in the controller, and should only be used in a model if passed through a well-defined interface, just like the situation where a model needs to access the current user of a session. At this point we're tired of arguing over this same thing, and our discussion consists of us repeating the same opinions to each other over and over. Could anyone, preferably with experience in the same setup, give us an unambiguous response saying which solution is more appropriate, and why?"} {"_id": "85810", "title": "Distribute No Modify License", "text": "I would like to release some code and some documentation to be distributed freely, but not if it has been modified in any way. Basically, it needs to be shared \"as-is\". The reason is that I would like to retain control of all modifications without making it closed source. Is there a commonly available license that allows for this? Thanks"} {"_id": "127075", "title": "What percentage of code and runtime do you typically allow for testing?", "text": "When I write code, a certain amount of it is just for the purpose of logging, tracing, or otherwise \"testing\" what the program is doing. This is not unit testing; this is not when it is running in \"debug\" mode or something similar; these are not things like assert statements that get stripped out by some compiling step. 
Rather, I mean actual production code and runtime execution whose purpose is knowing what the system is doing and writing it out somewhere. Naturally, there have to be various settings in various places (config files, admin interfaces, etc.) that allow for more or less of such things. But even if all settings are at their lowest level, I still like to have my system keep track of its own state and report it in some minimalist manner that I or operations people can use to troubleshoot, even without turning on/up the other settings. So here is my question: how much of your code or system run-time do you typically devote to such permanent troubleshooting purposes? I say both code and run-time, as they are two measures: how many lines of code relative to all lines of code, because the more code there is, the more maintenance there is; and how much run-time performance, because these things do consume CPU time, however small. And there is not always a direct connection between lines of code and run-time consumption. Personally, I'm happy to devote up to 10% of my code to such things and up to 5% of the run-time. Have you ever read any industry authorities that have an opinion on this? Who? What is your opinion?"} {"_id": "127071", "title": "Concurrent Data Processing by Multiple People", "text": "Let's say I have a page, accessible by several people concurrently, that lists information that must be processed in some way by the user, after which it is either marked as \"completed\" and effectively disappears, or is left unprocessed and thus left in the list to be tried again later. What is the best way to ensure that only one person is handling a given item in the list at a time, assuming that I must show the list (i.e. no one-user/one-item requirement, which is what I would prefer), and that any given user can click on any item in the list at any time? This is an issue that has come up several times where I work, and I'm not really satisfied with the solution that I've implemented up until now. I was hoping someone had a better idea. Basically my solution involved creating a table in a database to track who accessed what item and when, and then enabling/disabling features based on that (e.g. not allowing a user to edit info if it's checked out by someone else). It looks something like the following (using Oracle 11g... these are stripped way down for brevity here, with the INFO columns acting as a stand-in for all other columns that are irrelevant here). CREATE TABLE \"SOME_INFO\" ( \"INFO_ID\" NUMBER(19,0) NOT NULL ENABLE, \"INFO\" VARCHAR2(20 BYTE) NOT NULL ENABLE, \"IS_PROCESSED\" CHAR(1) NOT NULL ENABLE, CHECK (\"IS_PROCESSED\" IN ('Y', 'N')) ENABLE, CONSTRAINT \"PK_SOME_INFO\" PRIMARY KEY (\"INFO_ID\") USING INDEX ); / CREATE TABLE \"SOME_PERSON\" ( \"PERSON_ID\" NUMBER(19,0) NOT NULL ENABLE, \"INFO\" VARCHAR2(20 BYTE) NOT NULL ENABLE, CONSTRAINT \"PK_SOME_PERSON\" PRIMARY KEY (\"PERSON_ID\") USING INDEX ); / CREATE TABLE \"PROCESS_HISTORY\" ( \"INFO_ID\" NUMBER(19,0) NOT NULL ENABLE, \"PROCESSOR_ID\" NUMBER(19,0) NOT NULL ENABLE, \"CHECKED_OUT\" DATE NOT NULL ENABLE, \"CHECKED_IN\" DATE ); / \"Checking out\" a record from SOME_INFO for a given user is accomplished by INSERTing a new record into PROCESS_HISTORY with INFO_ID = the selected record's INFO_ID, PROCESSOR_ID = the user's PERSON_ID, and CHECKED_OUT = the current time (CHECKED_IN is left NULL). \"Checking in\" is accomplished by simply setting CHECKED_IN to the current time. 
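In other words, the two operations look something like this (a sketch; :info_id and :person_id are assumed bind variables):

    -- Check out: claim the item for the current user
    INSERT INTO PROCESS_HISTORY (INFO_ID, PROCESSOR_ID, CHECKED_OUT, CHECKED_IN)
    VALUES (:info_id, :person_id, SYSDATE, NULL);

    -- Check in: release the item again
    UPDATE PROCESS_HISTORY
       SET CHECKED_IN = SYSDATE
     WHERE INFO_ID = :info_id
       AND PROCESSOR_ID = :person_id
       AND CHECKED_IN IS NULL;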
An automated process is used to periodically check in records that have been checked out for more than some predefined maximum amount of time (the time varies from page to page, based on what's involved in processing the info). From this I can tell whether a given record is currently checked out or not, and thus alter the page's behavior accordingly. Having said that, I can't help but think there's got to be a better way to do this. I'm hoping it's a common enough problem that there is a standard or semi-standard method of doing this that I simply haven't been able to find."} {"_id": "127070", "title": "Is there an IDE for python that creates the same kind of reflective environment that Smalltalk provides?", "text": "As anyone who has used Smalltalk knows, one of the main benefits (other than a late-bound language that discourages many poor practices) is that the system is totally transparent and reflective, which makes understanding APIs and existing code easy, and locating functionality pretty easy. Is there anything that creates a similar environment for Python? A few examples of features of a Smalltalk development environment not natively found in Python are: * searching class/method/etc. names, * examining inheritance hierarchies, * functionality to show the full interface of a given class/object, and where the properties therein originate, * an integrated graphical debugger which allows one to examine the full state of everything in the system, and see every instance of a given type, as well as all threads. Note that I use Windows, so anything that works well on Windows would be particularly useful."} {"_id": "26354", "title": "Leading Web Application Publications", "text": "Almost every major industry has its standard publications which are considered \"must reads\" in that particular field. After spending some time on Google looking for web application publications, I've come up empty; are there any high-quality web application publications available online or in print that someone can point me to?"} {"_id": "113379", "title": "Whether to put the business logic in Stored Procedure or Not?", "text": "There is always debate over the topic: \"Should business logic be put in stored procedures or not?\" If we decide not to use an ORM tool and not to put the business logic in stored procedures, then where would we put the business logic? In my previous applications I have always preferred putting all of the business logic in stored procedures only. Then from .NET code I call these stored procedures using the Data Access Application Block, SqlHelper, etc. But this cannot be the scenario all the time. So I did some googling, but ended up confused. Any suggestions?"} {"_id": "32501", "title": "Where can I obtain a first-generation copy of the classic “Sorting out Sorting”?", "text": "I cannot even figure out who made it - even the IMDB page is mostly blank, and Wikipedia does not seem to have any information about it. For such a useful film in CompSci, it strikes me as odd that the only meaningful Internet presence I can find consists of horrible-quality copies or excerpts on YouTube. [Note: SO claims this question was migrated to here, but the URL they provide gives a 404, and I can't find it by searching Programmers.SE, so I'm re-asking...]"} {"_id": "55706", "title": "Writing Resumes for Internships?", "text": "I'm an undergraduate student starting to look for internships. 
I understand a lot about how to embellish a real-world resume--emphasizing tasks done at previous jobs and whatnot--but I'm not sure it will translate well to low-experience internship resumes. Internship resumes are marked by: * Few to no past software-related full-time jobs or internships * Few to no non-school-involved software-related activities Obviously, if you have no experience or activities to list, you're pretty well stuck. So let's assume we have one of each. I'm basically wondering: 1. What is a company looking for most from intern candidates? Past work, GPA/coursework, outside projects (open source, etc.), certain skill sets (languages)? 2. Should I be emphasizing _tasks_, or _jobs/positions_, when listing my experiences? 3. Are skills important to list? If so, which ones in particular?"} {"_id": "51082", "title": "Would you use (a dialect of) LISP for a real-world application? Where and why?", "text": "LISP (and dialects such as Scheme, Common LISP and Clojure) haven't gained much industry support, even though they are quite decent programming languages. (At the moment, though, it seems like they are gaining some traction.) Now, this is not directly related to the question, which is: would you use a LISP dialect for a production program? What kind of program, and why? Uses where LISP is integrated into some other code (e.g. C) are included as well, but note in your answer that that is what you mean. Broad concepts are preferred, but specific applications are okay as well."} {"_id": "165710", "title": "Participate in open source project", "text": "Currently, I am going through a very creative phase as a developer. I think it's a good time to contribute to an open source project. Not as a \"permanent\" developer on one project, but in a \"help wanted\" manner across many projects. The open source hosting services that I know of are SourceForge and CodePlex. Any suggestions that will help me in this direction?"} {"_id": "93275", "title": "Work Item Traceability in TFS 2010", "text": "I have created a Windows Forms project (VS solution) under a TFS 2010 project. I may eventually add more solutions to the TFS project. **My question**: Can we create a Use Case WIT for a specific solution within a TFS project? Furthermore, is it possible to create a \"traceability matrix\" that starts at the Use Case level and goes down to the code level (at least the namespace level) of that particular VS solution?"} {"_id": "197835", "title": "Fundamental TDD: stuck with writing a test so I can write code that I want", "text": "I have a Season class. This Season has a few properties: among them, a list of Games. This should be populated from the same source that populated the rest of the Season properties. I have a test to ensure all items in that list are of type Game. There is no hard requirement that a Season must contain any Games, so testing that the list is not empty would not be entirely correct -- however, the test Season I'm using has plenty of games in it. Writing \"the minimum amount of code to make a test pass\" is already done, because it is the empty list (so there are no not-Game objects). What would be the _fundamental_ TDD way to get the production code to parse and add games to the Season object? Would the _pragmatic_ way be to test via the size? 
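For illustration, this is roughly what I mean by testing via the size (a sketch: the fixture helper and the count of 12 are made up; only Season and Game come from my code):

    def test_season_parses_games():
        season = build_season_from_test_data()  # hypothetical fixture loader
        assert all(isinstance(g, Game) for g in season.games)
        assert len(season.games) == 12  # whatever count the test data contains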
* * * An alternative would be to generate the list of Games in my test, and see if Season.games == list_of_games. The problem with that is that there are a crapload of foreign relations that will be going into those games, making the test code equivalent to the method that I'm testing -- which I don't think is the way it should work. Or should I build them \"by hand\" from the test data -- instead of parsing it?"} {"_id": "197836", "title": "In C# what is lifetime or lifespan of constant variable?", "text": "In C#, if I declare a constant, is any memory allocated for it, given that it acts as a compile-time replacement? How long is the constant's lifetime?"} {"_id": "253503", "title": "Why are packed structures not part of the C language?", "text": "Every C compiler offers the option to \"pack\" C structures (e.g. `__attribute__ ((__packed__))`, or `#pragma pack()`). Now, we all know that packing is required if we'd like to send or store data in a reliable way. This must also have been a requirement since the first days of the C language. So I wonder why packed structures are not part of the C language specification. They're not even in C99 or C11, even though the necessity of having them has been known for decades now. What am I missing? Why is it compiler-specific?
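For reference, a sketch of the two vendor-specific spellings mentioned above (GCC/Clang attribute and the pragma; the sizes in the comments are typical, not guaranteed):

    #include <stdio.h>
    #include <stdint.h>

    struct plain { uint8_t a; uint32_t b; };  /* padding usually makes this 8 bytes */

    struct __attribute__((__packed__)) packed1 { uint8_t a; uint32_t b; };  /* 5 bytes */

    #pragma pack(push, 1)
    struct packed2 { uint8_t a; uint32_t b; };  /* also 5 bytes */
    #pragma pack(pop)

    int main(void) {
        printf(\"%zu %zu %zu\\n\", sizeof(struct plain),
               sizeof(struct packed1), sizeof(struct packed2));
        return 0;
    }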
"} {"_id": "75362", "title": "Bottom-up or top-down approach?", "text": "Not sure if the title _really_ fits the question, but I figured it was the closest to give an overview of my question. I'm working on a web application that (fairly heavily) employs browser-specific CSS functions such as `-moz-transform`, and I'm curious what approach those with more experience developing these types of applications use. For context, I'm referring _solely_ to front-end development (my back-end pipeline's pretty solid :)). The way I see it, there are two methods of development: * Get new functionality working in all browsers, only introducing new code once all applicable browsers have been tested. * Get your functionality complete in one browser. Once your application is complete, start testing on other browsers and introduce fixes where needed. Generally, I've been following the latter. I feel as though it allows me to concentrate on the core functionality and get more completed faster than if I was essentially halting functional development every time something new was completed to deal with browser nuances. So, what practices do _you_ follow when it comes to client-side development? I don't work professionally as a web developer, so best practices in web development are a little tougher to come by, other than through my own trial and error (and scouring W3C recommendations). However, it's always a good thing to learn from those smarter than you :)."} {"_id": "3622", "title": "How much credit do you take when you used plugins etc.?", "text": "I often develop an application entirely myself. But did I really? I feel strange about that claim and never know when it is true. I mean, I designed it and coded it, but I used the XYZ plug-in. Can I still claim that I did it all myself, even though I didn't create the plugin I used? Consider this conversation: > **ME:** I designed and developed this app entirely myself. > > **Other:** Cool, how did you program the XYZ part? > > **ME:** I didn't program that part; I used the XYZ plugin. > > **Other:** So you didn't really program it _ALL_ yourself then, did you? I mean, if I must give them credit for the plug-in I used, then do I have to give the language authors credit for the language I used, and the IDE authors credit as well? Where do I draw the line? This is just something that always crosses my mind as soon as I am about to take full credit for a project, and I was wondering about others' opinions on the matter."} {"_id": "167711", "title": "Forking a GPL dual licensed software with business owned copyrights", "text": "After receiving threats from the copyright holder of a dual-licensed piece of software (GPL2 and commercial) that I must buy the commercial version for projects in production, I am thinking of making a fork. In the case of software dual-licensed under GPL2 and a commercial license, with business-owned copyrights, is forking the GPL2 version an option? Also, is forking a good way to deal with such cases? ## Background information The software is a web CMS released in 2 versions: a free, open source GPL2 edition and a commercial edition including technical support and extra functionality. The problem is that now, basing their argumentation on the \"distribution\" definition of the GPL2, the company holding the copyrights argues that delivering the software and some extensions to a client is considered a \"distribution\", and that such a \"distribution\" falls under the GPL2 obligation to release the custom-made extension code. Custom-made extensions are mainly designs, templates and very specific functionality. Basically they give me 3 choices: 1. Buying the commercial licensed edition for projects based on the GPL version in production, 2. Deleting all the projects in production based on the GPL2 version, 3. Releasing all the extensions as GPL2 code. The first 2 options are not realistic for finished projects. The third option could be fine, but as most of the extensions are very specific, cleaning the code to make it usable by other users means a lot of work, and also I am not sure the clients would appreciate having their website designs and specific functionality released publicly. The copyright-holding company even contacted some clients directly, giving them the \"choice\". I know that this is a very corporate interpretation of GPL2, and such an action is nothing close to legal, but as an independent developer I don't want to take the risk of getting involved in long and tiring legal proceedings. * * * PS. This question was first asked on Stack Overflow, where it fell out of scope and was closed; after reading the present site's FAQ, discussing software licensing seems fine."} {"_id": "110332", "title": "Is it good or bad form to name a function after the workaround it fixes?", "text": "Let's say you have to write some code to fix a bug, and at first glance, to another engineer, the fix would seem weird or unnecessary. Would it be good or bad form to put the code in a method named, for example, \"preventWindowFromJumpingWhenKeyboardAppears\", or to just name it \"forceSpecifiedWindowPosition\" and then add a comment about why you are doing this when calling the method?"} {"_id": "110333", "title": "Making an attractive, yet still technical architecture diagram", "text": "Does anyone have any advice for making an \"attractive\" software architecture diagram? My manager told me to make my current architecture diagram (which was built just using Visio and basic icons) more \"attractive\" for a presentation I have to give to executive-level types who are non-technical. I'm guessing he meant something that you'd show to customers or that marketing people would use. 
Any specific icon sets or particular tips people have? I cannot post my current diagram for privacy reasons, but to get the general idea, it's just text, lines, and server icons; this is the icon set I am using: http://www.227volts.com/wp-content/uploads/2009/03/exchange2007visiostencils.jpg. I'm honestly confused about how to make something like that \"more attractive\" (hell, I think black and white are always the best color combination to use :P). Edit: So is something like this http://rollerweblogger.org/roller/resource/linkedin-today.png still considered \"professional\", with all the colors and such? I asked my manager, and all he said was to just make it more marketable, while evading questions about what I should do specifically."} {"_id": "209276", "title": "Application qos involving priority and bandwidth", "text": "Our manager wants us to implement application QoS, which is quite different from the well-known system QoS. We have many services of three types, and they have priorities. The manager wants to suspend low-priority service requests when there is not enough bandwidth for high-priority services. But if the high-priority service requests decrease, the bandwidth for low-priority services should increase and low-priority service requests should be allowed again. There should be an algorithm involving priority and bandwidth. I don't know how to design the algorithm; can anyone assist in getting this right? All these services are within the same process. We are setting the maximum bandwidth for the three types of services via the services' ports using TC (TC is the Linux QoS tool whose name means traffic control)."} {"_id": "209272", "title": "Architecturally speaking, does a database abstraction layer, such as Microsoft's Entity Framework, void the need for a separate Data Access Layer?", "text": "# The way it was For years, I have organized my software solutions as such: * Data Access Layer (DAL) to abstract the business of accessing data * Business Logic Layer (BLL) to apply business rules to data sets, handle authentication, etc. * Utilities (Util), which is just a library of common utility methods I have built over time. * Presentation Layer, which could of course be web, desktop, mobile, whatever. # The way it is now For the past four years or so I have been using Microsoft's Entity Framework (I am predominantly a .NET dev), and I am finding that having the DAL is becoming more cumbersome than clean, on account of the fact that the Entity Framework has already done the job that my DAL used to do: it abstracts the business of running CRUD operations against a database. So, I typically end up with a DAL that has a collection of methods like this one: public static IQueryable<SomeObject> GetObjects(){ var db = new myDatabaseContext(); return db.SomeObjectTable; } Then, in the BLL, this method is used as such: public static List<SomeObject> GetMyObjects(int myId){ return DAL.GetObjects().Where(ob => ob.accountId == myId).ToList(); } This is a simple example, of course, as the BLL would typically have several more lines of logic applied, but it just seems a bit excessive to maintain a DAL for such a limited scope. 
Wouldn't it be better to just abandon the DAL and simply write my BLL methods as such: public static List<SomeObject> GetMyObjects(int myId){ var db = new myDatabaseContext(); return db.SomeObjectTable.Where(ob => ob.accountId == myId).ToList(); } I am considering dropping the DAL from future projects for the reasons stated above, but before doing so I wanted to poll the community here for your hindsight/foresight/opinions, before I get down the road on a project and discover a problem I didn't anticipate. Any thoughts are appreciated. # Update The consensus seems to be that a separate DAL isn't necessary, but (making my own inference here) is a good idea to avoid vendor lock-in. For example, if I have a DAL that is abstracting EF calls as illustrated above, then if I ever switch to some other vendor I don't need to rewrite my BLL; only those basic queries in the DAL would need to be rewritten. Having said that, I am finding it tough to envision a scenario in which this would happen. I can already make an EF model of an Oracle db, MSSQL is a given, and I am pretty sure that MySql is possible as well (??), so I am not sure the extra code would ever grant a worthwhile ROI."} {"_id": "114658", "title": "Data acquisition, storage and management", "text": "I have a device that can measure different values over time, one sample per second. After one measurement run I can export the data in the form of a CSV file: one row per second, with a timestamp and about 20 columns (values). The question now is how to store these masses of data so that they remain accessible; I want to work with the data using Matlab. I have, for example, 50 files of data, each representing one measurement run, and each containing around 120,000 samples of 20 values of type float, for example a temperature. It would be great if I could request, say, all samples at the same temperature. So actually I am looking for software that can handle and organize that data, or a hint on how (and what kind of) database to build. Thank you for your help!"} {"_id": "118106", "title": "Monitor Process", "text": "I have some previous posts talking about how to use Python to \"do something\" when a record is inserted into or deleted from a Postgres database. I finally decided on going with a message queue to handle the \"jobs\" (beanstalkd). I have everything set up and running, with another Python process that watches the queue and \"does stuff\". I am not really a \"systems\" guy, so I am not sure of a good way to go about monitoring the process to make sure that if it fails or dies, it restarts and sends a notification. Google gave some good ideas, but I thought that by asking here I could get suggestions from people who, I am sure, have had to do something similar. The process is critical to the system: it just needs to always work, and if it's not working then it needs to be addressed, and other parts of the system \"paused\" until the problem is fixed. My thoughts were to just have a cron script run every minute or two that checks to see if the process is running; if not, it restarts it. Another script (or maybe just another function of the first) would monitor the jobs, and if the jobs waiting to be processed hit a specific threshold, also flag that there is a major problem. 
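Something like this sketch is what I have in mind for the cron check (the process name and restart command are made up for illustration):

    #!/usr/bin/env python
    # Run from cron every minute or two; restarts the worker if it is gone.
    import subprocess

    PROCESS_NAME = \"queue_worker.py\"  # hypothetical name of the beanstalkd consumer

    def is_running(pattern):
        # pgrep -f exits with 0 when a matching process exists
        return subprocess.call([\"pgrep\", \"-f\", pattern]) == 0

    if __name__ == \"__main__\":
        if not is_running(PROCESS_NAME):
            subprocess.Popen([\"python\", PROCESS_NAME])
            print(\"worker was down; restarted\")  # hook a real notification in here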
Specifics about the process: it updates the orders in a legacy system with the quantities of items that are shipped or back-ordered from our warehouse. So if these things are not done, then when the order is invoiced it will have incorrect quantities, and the people involved wouldn't have a good way to spot this unless they checked each line. I thought I might also have a flag on the order that says \"yes, I have been touched\", and if it hasn't been touched, just notify the invoicing agent. This same method is going to be used for updating orders with shipping information based on when orders are shipped from UPS Worldship. I don't know; I think I have a handle on this, but it just feels \"kludgy\"."} {"_id": "57707", "title": "c++ write own xml parser vs using tinyxml", "text": "I am currently tasked with generating an XML file from an SRT text file containing timestamps and corresponding text, and with producing an exe which accepts a file name as input and outputs the relevant XML file to be used as part of an automated script. Is it advisable to use TinyXML for this? Edit: Your comments regarding this are very much appreciated. What's the easiest way to generate XML in C++?"} {"_id": "43329", "title": "Etymology of \"String\"", "text": "So it's obvious that a string of things is a sequence of things, and so a sequence of characters/bytes/etc. might as well be called a string. But who first called them strings? And when? And in what context, such that it stuck around? I've always wondered about this."} {"_id": "104429", "title": "How do you manage workflow tasks for a distributed team?", "text": "I work for a small software company which is responsible for delivering roughly a thousand custom software packages for roughly 100 customers. We are struggling to track the release process. The entire thing is pretty repeatable, but because of the different distribution processes there are about 2 dozen steps from the time we sign a contract to the point a product goes out the door. To make matters more complicated, we have a few different 'types' of employees that are involved. We have a salesperson who gets all the requirements and deal terms. We have developers who do any modifications required by the customer. There is another type of employee who does the work of running the final build, testing and distributing the product. I'm looking for ideas on how people manage these sorts of processes, as we look to fix some issues which have been plaguing us for a while. Primarily, we are seeing a lot of issues sequencing the events AND communicating the information associated with each product to the build/distribution team. Currently we use SVN (SCM), Jenkins/Hudson (build) and Redmine (feature/bug tracker). Are there software solutions, or is this just an internal process we have to document and understand? I'd really like to find a way to keep everyone aware of the status of a particular app, and to notify people when they are expected to do something."} {"_id": "225567", "title": "How should a senior programmer monitor another senior programmer?", "text": "I'm working with a new senior programmer who has almost the same amount of experience as me. He has his own project to work on -- but I have to make sure he does not mess things up. Now, how can I monitor him, since he's not a junior? Do I examine his code? The downside of this is that I am not as deep into the project as he is, so it's time-consuming for me (I have my own project besides this). Do I only do QA from the perspective of another senior? Any advice here is appreciated. 
I was recently in the same situation, where I relied on another senior to handle all tickets properly, and then it took me 3 weeks to fix his oversights. So I do not want to be in the same situation again."} {"_id": "116650", "title": "Is there any Boost equivalent library for C?", "text": "Is there any Boost equivalent library for C?"} {"_id": "106379", "title": "Can the language make us stupid?", "text": "Do the programming languages we mainly code in really change the way we think about problems? A sort of programming version of the Sapir–Whorf hypothesis. And if they do, doesn't it really mean that there **are** programming languages which can make us, well, not stupid, but narrow-minded? **UPDATE:** this question is actually not about acknowledging the fact that we all should know at least three or four languages; that goes without saying. But what if there are some \"dangerous\" languages which are better not to learn at all?"} {"_id": "52066", "title": "Why do people store shipping and handling in the order header and not a line item", "text": "I have worked on several ordering systems. Every one of them has tracked the shipping and handling as part of the order header rather than tracking it as a line item. To me, it would make more sense to keep them with the line items, even if the information is not displayed that way to the user. What are the reasons for keeping them in the header?"} {"_id": "54397", "title": "Explaining interfaces to beginning programmers?", "text": "I've had discussions with other programmers about interfaces (C#). I tried to use the analogy of interfaces being like a contract between programmers, meaning that when you design to an interface, you are designing to a \"thought-out plan\". This didn't fly. The other programmers (limited experience) couldn't get the concept; or worse, refused to participate. How do you explain to people like that that there **are** reasons to use interfaces? Thanks"} {"_id": "60118", "title": "Appropriate UML diagram", "text": "What is an appropriate UML diagram if I want to show how a user requests a web page, enters some data, then posts the form (and, if validation does not succeed, is redirected back to the requested web page)?"} {"_id": "102869", "title": "How do I avoid \"Developer's Bad Optimization Intuition\"?", "text": "I saw an article that put forth this statement: > Developers love to optimize code and with good reason. It is so satisfying > and fun. But knowing when to optimize is far more important. Unfortunately, > developers generally have horrible intuition about where the performance > problems in an application will actually be. How can a developer avoid this bad intuition? Are there good tools to find which parts of your code really need optimization (for Java)? Do you know of some articles, tips, or good reads on this subject?"} {"_id": "60114", "title": "Where can I learn about hardware/software co-design?", "text": "I'm involved in an embedded software project where we also need to specify the target hardware. I come from a mostly software-only background, and would like to learn more about the simultaneous design of software and hardware. Are there any books/resources/courses out there that you would recommend?"} {"_id": "113028", "title": "When do 'static functions' come into use?", "text": "OK, I've learned what a static function is, but I still don't see why they are more useful than private member functions. 
This might be kind of a newb-ish question here, but why not just replace all private member functions with static functions instead?"} {"_id": "9810", "title": "How do you respond to: \"Ever since the update...\" questions from clients?", "text": "Ever since the update, people keep calling and saying \"Ever since update X, Y and Z are slow, bad and crashing\". This has happened ever since the dawn of updates. What do people expect? Gamma comes after beta, and gamma testing always turns our users into The Incredible Hulks... Perhaps you've never heard this from a client, perhaps you're in college or a FLOSS dev who can spread the blame around more than 5 or 6 guys, perhaps you unit test your code, perhaps you're not in that interesting situation where customers actually call you requesting the exact time of day you'll be releasing today's patch (I'd love to do that to Microsoft), or perhaps you're a sorry son-of-a-biscuit like me who just shipped a new update, went home, and is dreading going back to work tomorrow. Anyway, y'all are smarter than me. How do you field criticism framed as \"You must be a bad programmer because you're making your software worse\"?"} {"_id": "132248", "title": "Getting users to write decent and useful bug reports", "text": "Does anyone know a good way to **_get users to write a semi-decent_** (read: _useful_) **_bug report_**? We wanted to put up something that would make sense to most users (be easy to read and understand), yet give useful information to developers as well. _**It does not work when I click the blue button! Ahhh, I just lost a week's work ... make it work.**_ isn't very useful, as it is. I started to draft a list, but thought I'd check with you guys whether a similar method already exists."} {"_id": "236016", "title": "How to make sure that reported issues are not caused by wrong credentials or typos of the client?", "text": "I have found myself a few times in the situation where a client reports an issue like '_I can no longer log in to my account_'. Sure enough, when I try to log in with the client's credentials myself, everything works. After that, you start testing in other environments and situations, but still it works every time. After numerous tests you start to wonder if the user is using the right credentials. Some clients get offended when asked this. Others are so sure that they are entering the right credentials that they don't even bother trying the credentials that you send them. How can I make sure that the client is using the right credentials?"} {"_id": "231891", "title": "Not enough information in Bugs / Tickets", "text": "I work in a small team of developers that supports several pieces of software that a large multi-site company uses. We use Spiceworks to raise issues and support requests throughout the IT department, not just among developers. We constantly get tickets raised with the following or similar: \"Function X is broken, please fix it, it is urgent\". Now this statement is practically useless to me, and it wastes my time and theirs whilst I try to work out the problem, often only to discover it is something they had not realised, or something that should be assigned to someone else. Now my question is: in what ways could I try to get people to write more information in bugs? I have thought of several ideas, such as a template, but I am pretty sure they would be ignored. 
The only thing I can think of that would almost certainly work is to refuse to action any tickets not written correctly, but I have my doubts as to whether I would be allowed to do this. Incidentally, the people raising the bugs tend to have trouble using basic computer functions, so I can't ask them to do anything very technical without a great deal of effort. Thanks all"} {"_id": "113020", "title": "Is using F# good enough for learning the important functional programming concepts of Haskell?", "text": "I'm coming from Linux and Ruby. I've been interested in learning more functional programming, and in particular the ML-ish style. I've tried reading through the Real World Haskell book and trying some Haskell that way. But it's hard for me to learn this way, because Haskell is very weird to me, and I don't think I'll manage to really learn it until I try to do something real with it. That's why I'm considering learning F# instead, because I'm guessing that this way I can kill two birds with one stone: learn an ML-style functional language paradigm, and also learn the Windows programming APIs (.NET libraries). This seems to be a more productive route than learning the Haskell library counterparts of all the Ruby libraries I've grown to love while staying within the Linux/OSX universe. My question is, how sound is this reasoning? Is F# an adequate substitute for Haskell if you want to learn the programming ideas and techniques that Haskell teaches you?"} {"_id": "113026", "title": "Cons of working from home", "text": "I am a software developer for a large corporation, and currently work from home two days a week, and I absolutely love it. However, because of juggling of personnel, my team's office may get shut down in the next couple of months. I have been asked my thoughts on this, and I'm not entirely sure. I love my two days at home, and wouldn't mind even four; but going to the office, I feel, is essential. When I'm at the office I have lunch with the other developers, share home-cooked goodies, and discuss the specifics of work. Because I'm also a fairly new member of the team, I'm still learning a lot, and occasionally one of the other team members has to show me how to do something that would be incredibly difficult to explain over the phone or IM. For those of you who work from home full time and don't have an office at which to meet your other co-workers, what are some problems that have occurred? How did you deal with them? Side note: I'm not concerned about the work/home divide; I've got a spare room set up as my office (with only office stuff in it [standing desk, printer, books, etc.]), I'm in a relationship with no prospect of children, and my partner understands not to disturb me while I'm working."} {"_id": "82517", "title": "Microsoft Visual Web Developer 2008 Express?", "text": "I just came across this while searching on Google for HTML/CSS tutorials. Does anyone recommend this software for building websites? Or should I just stick to learning hand coding with pencil and paper instead? I just want to put my focus in the right area. Here's the link: http://msdn.microsoft.com/en-us/beginner/bb964635.aspx"} {"_id": "246292", "title": "How should object identification be managed?", "text": "I have a Java/Swing application in (hopefully good) MVC structure. 
Here is an overview of my model classes: ![enter image description here](http://i.stack.imgur.com/egVUN.png) One or more `worker`s may work at one `working location`, and one `worker` may work at different `working location`s (not at the same time, of course). A `worker` may have different types of `date` assigned. The `working location` has a list of all its `worker`s. Each `date` has a reference to its `worker`. All model objects are mutable. For management purposes I implemented a `Database.java`, which is responsible for storing the model classes in Java collection classes and, at program close, saving all objects into an XML file (I use the Simple XML framework for this task). It has the following three lists: `List<Worker>`, `List<WorkingLocation>` and `List<Date>`. So the procedure is: 1. Load the XML file on program start 2. Add/edit/remove model objects to/in/from the database (working with the collections of `Database.java`) 3. Save all model objects from the Database collections into the XML file on program exit (If there is something wrong in my approach so far, please tell me a better way! I will really appreciate it.) * * * Here is the question: **What is the proper way of identifying model objects?** I can think of two possibilities: 1. Use object references * (+) There is no extra code for handling the generation of new unique IDs * (-) \"Everyone\" can edit the objects (because this is a private project, \"everyone\" means only me) 2. Use unique IDs for every object * (-) more code * (+) the database may return copies of the model objects to be edited, and only when the controller calls the database's methods with the copied model objects will they be updated * * * **EDIT:** There was an important part which I forgot to mention: multi-threading and multi-user. Both of them can certainly be answered with \"No\". The application will be some kind of one-user planning software to replace the current \"pen-and-paper planning\". I would say the main requirement is to avoid budgeting workers who are on holiday, and not to forget to budget workers who come back from holiday the next day, because these are the mistakes that often happen. Also, there is no need for deeper analysis of the model objects and therefore no need for complex database queries _at the moment_. Therefore I like the idea of using XML to create a standalone application. Or is there a good reason to blow up the code to fulfill unlikely requirements?"} {"_id": "134838", "title": "any website monitoring library/modules to use in my website?", "text": "I'm trying to build a site where users can add their websites to be monitored and can view detailed reports and statistics about them. It should also (mainly) be able to monitor local web services. 1) Is there any popular Java library to use for monitoring a website? (in the sense of pinging a website and loading some content, or showing whether the site is up or not) 2) Also, can it validate a flow through a website (if possible)? Example: go to the login page, find the fields, fill them in, submit the form, and validate whether the login succeeded. I know this looks big, but any small part would be helpful. I've found several monitoring projects, but they only handle normal websites, not local or other web services, and they can't validate a flow through a website."} {"_id": "2463", "title": "Have you used Grapple mobile?", "text": "I'm looking for a way to easily port content-driven applications from one mobile platform to another. 
And recently I came across Grapple Mobile, which claims to compile a single codebase written in JS and CSS to almost all platforms. Have you used it? Are such apps approved by Apple? (AFAIK Apple restricts development to Objective-C, C and C++.)"} {"_id": "134832", "title": "Collaboratively working with a WebDesigner - Best practices", "text": "I'm a programmer. I have always worked by myself. I usually do only back-end work, but now I have a web project where I want to hand the \"View\" part to a designer to make it beautiful. What I'm looking for are suggestions, best practices and recommended literature to make the work easy and separate (easy to merge) for both parties. I googled a lot, so what I want is your personal opinion and experience on this subject. 1 - The back-end will be Java. 2 - Besides the back-end, I haven't chosen (opinions???) what is best for the front-end (JSF, or plain HTML with REST). Any help and insight is greatly appreciated. Thanks in advance"} {"_id": "132343", "title": "Where do I find collaborators for hobby projects", "text": "I am one of those people who learn by doing, and I often get a new idea for a project besides work, which could be fun and perhaps later on profitable. I have had quite a few ideas, but I usually burn out, which is often because I am alone on the projects. Most of my colleagues have kids, or don't want to use their spare time for hobby projects. Therefore it is often hard to find other people I can work with on my hobby projects. My question is therefore: is there a good place to find co-developers for small non-open-source projects?"} {"_id": "191064", "title": "A class with extra field", "text": "Let's say I have an animal class... with fields for name, height and weight. I want to create a bird class which is an animal, but also has, say, wing size. How can I do that? My general idea is creating the animal class, and the bird class which inherits from animal and has the additional field... and then different kinds of birds would inherit from that class. 
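In code, my idea looks something like this (a minimal Java sketch; the field types are assumptions):

    class Animal {
        String name;
        double height;
        double weight;
    }

    class Bird extends Animal {
        double wingSize;  // the extra field
    }

    class Eagle extends Bird { }  // a concrete kind of bird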
The merge is not a problem.\" I have heard and read many times that we should create one .cs file per enum, class and so on, and that this is the best practice. But I can't convince him. He says that he doesn't trust any well-known programmers such as Jon Skeet. By the way, here is Skeet's opinion on this topic: Where is the best place to locate enum types? What do you think? Is there a real problem? Or is it a matter of taste that should be regulated by the coding standard of the organization?"} {"_id": "117304", "title": "Basic premise on counting sort. How is k related to Big Oh?", "text": "I am reading (Cormen) on counting sort. I understand the structure of the algorithm, but the statement: > In practice, we usually use counting sort when we have k = O(n), in which > case the running time is Theta(n). is not clear to me. k is just the range of integers expected in the input. Why is the asymptotic notation used in this statement for k? I don't get it."} {"_id": "117301", "title": "How do you structure unit tests for multiple objects that exhibit the same behavior?", "text": "In a lot of cases I might have an existing class with some behavior: class Lion { public void Eat(Herbivore herbivore) { ... } } ...and I have a unit test... [TestMethod] public void Lion_can_eat_herbivore() { var herbivore = buildHerbivoreForEating(); var test = BuildLionForTest(); test.Eat(herbivore); Assert.IsEaten(herbivore); } Now, what happens is I need to create a Tiger class with identical behavior to that of the Lion: class Tiger { public void Eat(Herbivore herbivore) { ... } } ...and since I want the same behavior, I need to run the same test, so I do something like this: interface IHerbivoreEater { void Eat(Herbivore herbivore); } ...and I refactor my test: [TestMethod] public void Lion_can_eat_herbivore() { IHerbivoreEater_can_eat_herbivore(BuildLionForTest); } public void IHerbivoreEater_can_eat_herbivore(Func<IHerbivoreEater> builder) { var herbivore = buildHerbivoreForEating(); var test = builder(); test.Eat(herbivore); Assert.IsEaten(herbivore); } ...and then I add another test for my new `Tiger` class: [TestMethod] public void Tiger_can_eat_herbivore() { IHerbivoreEater_can_eat_herbivore(BuildTigerForTest); } ...and then I refactor my `Lion` and `Tiger` classes (usually by inheritance, but sometimes by composition): class Lion : HerbivoreEater { } class Tiger : HerbivoreEater { } abstract class HerbivoreEater : IHerbivoreEater { public void Eat(Herbivore herbivore) { ... } } ...and all is well. However, since the functionality is now in the `HerbivoreEater` class, it now feels like there's something wrong with having tests for each of these behaviors on each subclass. Yet it's the subclasses that are actually being consumed, and it's only an implementation detail that they happen to share overlapping behaviors (`Lions` and `Tigers` may have totally different end-uses, for instance). It seems redundant to test the same code multiple times, but there are cases where the subclass can and does override the functionality of the base class (yes, it might violate the LSP, but let's face it, `IHerbivoreEater` is just a convenient testing interface - it may not matter to the end-user). So these tests do have some value, I think. What do other people do in this situation? Do you just move your test to the base class, or do you test all subclasses for the expected behavior? **EDIT** : Based on the answer from @pdr I think we should consider this: the `IHerbivoreEater` is just a method signature contract; it does not specify behavior. 
For instance: [TestMethod] public void Tiger_eats_herbivore_haunches_first() { IHerbivoreEater_eats_herbivore_haunches_first(BuildTigerForTest); } [TestMethod] public void Cheetah_eats_herbivore_haunches_first() { IHerbivoreEater_eats_herbivore_haunches_first(BuildCheetahForTest); } [TestMethod] public void Lion_eats_herbivore_head_first() { IHerbivoreEater_eats_herbivore_head_first(BuildLionForTest); }"} {"_id": "210209", "title": "Can Scrum use technical specifications in the Product Backlog rather than user stories?", "text": "At the company I am currently working for we have started to do Scrum projects. It was not so hard to convince the managers to move from waterfall to Scrum. We're doing a project where we rebuild our platform from scratch, so (most) functionality is known and most improvements are rather technical. In this case it could be justified to have technical tasks rather than user stories. Our backlog has all kinds of technical tasks like: * Rewrite the DB class from MySQL to PostgreSQL. * Implement system logging. * Rewrite the object cache. Issues that come up during the stand-ups include that long \"research tasks\" are wanted but never get done, and that team members claim in the middle of the sprint that unplanned tasks need to be added. How should a Scrum Master deal with this? Could it be that for this kind of project, Scrum is NOT the way to go?"} {"_id": "235425", "title": "Best practices for caching search queries", "text": "I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete. A request like /api/location/Londo would produce a query like SELECT * FROM Locations WHERE Name LIKE 'Londo%' These locations change very infrequently, so I would like to cache them to prevent needless trips to the database and improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Should I just cache the entire table/collection in a single key with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data?"} {"_id": "59692", "title": "Does anyone do hardware benchmarks on compiling code?", "text": "I've seen a bunch of sites that benchmark new hardware on gaming performance, zipping some files, encoding a movie, or whatever. Are there any that test the impact of new hardware (like SSDs, new CPUs, RAM speeds, or whatever) on compile and link speeds, on either Linux or Windows? It'd be really good to find out what matters the most for compile speed and be able to focus on that, instead of just extrapolating from other benchmarks."} {"_id": "97227", "title": "Setting $_POST variables as a means of passing data / Not passing parameters in functions", "text": "I've got a legacy PHP web application wherein almost each and every function makes references to $_POST variables - retrieving their values, AND **setting them (or setting new POST variables)** as a means to communicate with the calling code or other functions. 
eg: function addevent() { ... $add_minutes=$_POST['minutes']; ... $_POST['event_price']=''; ... } I would have thought that the straightforward approach would have been to pass to a function all the values that it needs, and return whatever it generates. As an old-school programmer, albeit a bit out of touch now, I find this structure grotesquely unsettling - data is passed arbitrarily all over the place. What do others think? Is the above approach acceptable now? * * * Edit 1 One conceivably genuine use-case is as follows. This is a function which handles embed tags or an uploaded file. function brandEvent($edit_id='') { ... switch($_POST['upload_or_embed']) { case 'embed': ... // uses $_POST['embed_video_code'] ... break; case 'upload': ... // uses DIFFERENT POST variables ... } } Is this a sensible code structure?"} {"_id": "97226", "title": "When to mark a user story as done in scrum?", "text": "There is a notion in Scrum that emphasizes **delivery of workable units** at the end of each sprint. Each workable unit also maps directly or indirectly to a user story, and when the PO introduces new PBIs (new user stories) in a new sprint, this means that in practice the team can't always go back to previous user stories to do the rest of the job. This in turn means that when you implement a user story, you should do it as completely as it is known to the team at that time, and you shouldn't forget anything (something like \"I'm sorry, I've forgotten to implement validation for that input control\" or \"I didn't know that the cross-browser check is part of the user story\"). On the other hand, testing, backward compatibility, acceptance criteria, deployment and more and more concepts come after each user story. So, when can team members know that the user story is done **completely** , not just for the demo, and start a new one?"} {"_id": "45202", "title": "Learning C, C++ and C#", "text": "I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you guys could shed some light on a couple of questions I have before I decide on a course of action. BACKGROUND: I want to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested; I bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus. Question 1: I've read it's counter-productive to learn C first, then C++ or C#, because you develop bad habits. In a lot of college courses I've looked at, learning C/C++ is mandatory to advance. Should I ever bother learning C? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that it incorporates .NET (which, I understand, is a collection of tools and libraries that make programming easier and faster). Question 2: Where did you guys learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do. LAST question :) I heard C++ is being \"ported\" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. 
:)\"} {"_id": "41990", "title": "Is learning how to use C (or C++) a requirement in order to be a good (excellent) programmer?", "text": "When I first started to learn how to program, _real_ programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see Computer Science degrees in which assembly, if included at all, is relegated to one assignment and one chapter, for a total of two weeks' work out of 4 years' schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of the languages, the trend is clearly towards less enforced C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (beyond the obvious, non-language-specific advice, such as \"a good selection of languages is probably important for a comprehensive education\", and \"it's probably a good idea to keep trying out and learning new languages throughout a programmer's career, just to stretch the gray cells\")"} {"_id": "191998", "title": "Can one survive in the IT industry without knowledge of C and C++?", "text": "I am a recent graduate from India and I have knowledge of Java, JSP and Servlets, Android application development and some iOS development. I do not have a background in C or C++ and have a somewhat weak background in Data Structures and Algorithms. I want to know whether I can make a successful career in the IT industry without knowledge of C or C++ but with a strong grasp of Java and Python (which is what I am thinking of learning next). As for the Data Structures and Algorithms part, I am planning to study them again with implementations in Java, as I am not fluent in C or C++. Can I do well in the future if I know Data Structures through Java?"} {"_id": "196879", "title": "Jenkins to automate deployment of ASP.NET applications", "text": "Is there any way to automate/semi-automate deployments of ASP.NET web applications using Jenkins? It can be in controlled or uncontrolled environments; for uncontrolled environments the user needs to enter a user ID and password. I am looking for ways to copy the files from source to destination and run SQL scripts in a web farm scenario. **Edit** Currently we are using bat files to xcopy the files, configure the app pool, run sqlcmd, etc. to deploy the application. But for this to work, the production support team needs to download the source code, build the project and run the bat files to deploy the application. Now we want to automate the deployment without the user downloading the source code: the end user just needs to visit a URL, fill in user ID and password parameters, select an SVN tag, and it should get deployed. But Jenkins is running under an anonymous login, so the existing bat file will not work since it doesn't have permission to run the script. So I would like to know if there are any alternatives for this kind of situation. It would be good if the user context were impersonated using the entered user ID and password, allowing the existing batch file to run without further changes. 
If it is not possible, we would like to explore other ideas too, but we don't have the flexibility to choose an automation tool like Puppet, etc.; we have to stick with these batch files."} {"_id": "196878", "title": "Custom Alphabetic Sorting of Array in Java", "text": "I have a requirement to read a text file with lines in tag=value format and then output the file with specific tags listed first and the rest sorted alphabetically. The incoming file is randomly sorted with the exception of the first line. The output needs the first two lines to always be the same. For example, given the following tags: NAME AGE SSN MARITAL_STATUS NO_OF_DEPENDENTS The input will always have NAME first and the remaining tags (there are literally hundreds) sorted randomly. The output needs to have SSN first and NAME second and the rest sorted alphabetically, so I would end up with: SSN NAME AGE MARITAL_STATUS NO_OF_DEPENDENTS Note: these are just sample tags. The actual file has 13 fields which need to be listed first and the remaining few hundred listed alphabetically. I'm trying to figure out the best way to do the sorting. Right now my plan is to read the lines in the incoming file and marshal them into two List objects. The first will contain the specific tags which need to be placed first and the second will have everything else. I will then sort the second list and merge it into the first list. This seems complicated and I feel like I'm missing an easier or more elegant approach."} {"_id": "97228", "title": "What are some good online coding log/management tools?", "text": "I am looking for an online, free coding management tool where I could log my coding process - for example, bugs to be fixed and bugs that have been fixed. What are some good recommendations?"} {"_id": "199523", "title": "Where should I put my method", "text": "I am writing a Java program using the MVC design pattern. I have classes `Item` and `Supplier`. In the database they are connected through an `item_supplier` table. I'm writing a method which will give me all suppliers for a specific item (using itemID): `public ArrayList<Supplier> getItemSuppliers(int itemID)` I have a DB layer as well, with `DBItem` and `DBSupplier`. Where should this method go? I will use it only (mostly) in my `ItemUI`, so I am thinking of `DBItem` as the correct place. \-- Usually when we have the SalesLineItem pattern (Sales * - 1 SalesLineItem 1 - * Item) we have a separate class, but in this case, do I need one, given that my only interaction with that table (`item_supplier`) will be through this printing (and one updating) method? Basically, do I need to create an `ItemSupplier` model layer class and, correspondingly, a `DBItemSupplier`, or can I just have those two methods `getItemSuppliers` and `updateItemSuppliers` on either `DBItem` or `DBSupplier` (and if the latter, where?)"} {"_id": "99043", "title": "Is SRP (Single Responsibility Principle) objective?", "text": "Consider two UI designers who want to design \"user attractive\" designs. \"User attraction\" is a concept that is not objective and resides only in the mind of the designer. Thus designer A could, for example, pick red, while designer B picks blue. Designer A creates a layout which is entirely different from designer B's, and so on. I have read about SRP (Single Responsibility Principle), and what I understood was a kind of subjective analysis or breakdown of responsibilities that can vary from one OO designer to another. Am I right? 
In other words, is it possible to have two excellent object-oriented analysts and designers who come up with two different designs for one system, based on the SRP principle?"} {"_id": "199529", "title": "How can I choose what to use to write my next webservice in C#?", "text": "I'm about to write some webservices from scratch and I'm a bit confused about the approach to take. The obvious choices are WCF and MVC 4 Web API, but I am having a hard time deciding. The two factors that are important are making it self-describing (eg, MEX etc.) and easy to consume (maybe RESTful?). I'm struggling to pick which one to discard in favour of the other. The way things are going, I'm hearing more and more about REST, so I should probably go in that direction, but apparently that's also possible to implement using WCF. I'm quite torn about how best to proceed, and there isn't a lot of information out there saying \"Oh, these days developers always use X because Y\", so it's tricky to judge the merits of each, and how easy those merits are to implement."} {"_id": "102288", "title": "Do donations count as \"commercial use\"?", "text": "Let's say I'm building a community that uses some webapp. This webapp uses a modified version of a GPLv2-licensed theme. Additionally, the community uses a donation mechanism to fund itself. Donation surpluses will be given back to the community in some form, so there is no commercial goal behind it. Questions: 1. Am I required to publish the modified version of the theme if I only use it? 2. Is this \"commercial use\" in terms of the GPL? (I'm not sure if this is the right SE site)"} {"_id": "99049", "title": "Using a microframework, or rolling your own", "text": "I really like a microframework in Python called Flask. I have used it for the past 2 months, and I find it excellent. Now I would like to use it in deployment, but there are a few things I'm afraid of. Firstly, even though it's a microframework, it depends on Werkzeug (a WSGI framework type of thing), which, in case I run into a problem, makes it more difficult to debug things. Second, it's not extremely popular, which means that support would be difficult to get. So what I was thinking is that I could roll my own microframework in Python that doesn't have all the features of Flask, but only the ones I need, and I would know it through and through and could fix any problems that I might encounter. I estimate this will take 1000-2000 lines and about 2-3 weeks for me, and probably a lot less with a team of 2. What do you guys suggest?"} {"_id": "9657", "title": "Programming with Dyslexia", "text": "I have very severe Dyslexia along with Dysnomia and Dysgraphia. I have known about it since I was a child. My reading and writing skills are pretty crippled, but I have learned to deal with it. However, with today's IDEs, I find it very easy to stay focused and in the zone when I code. But when I write text (like this post) I find it much harder to stay focused. In general, do dyslexics find it easier to read and write code compared to general reading and writing? What types of tricks and tools do dyslexics use to help them master programming better than normal reading and writing?"} {"_id": "194586", "title": "Android game loop in separate thread", "text": "I am about to port a game that was initially written for iOS, and I have some doubts about how certain things should be done. 
I have been looking at many of the examples of game loops for Android, and almost everywhere the game loop is designed to run in a separate thread and not in the main UI thread, which seems fine and logical to me. However, my game scene is a combination of OpenGL graphics and standard Android views (buttons, labels, etc...), and by design the primary game logic and OpenGL drawing would be done in a separate thread while the rest of the standard stuff would be done in the main UI thread. In many situations I will need to call functions that should run in the game thread from the main UI thread, and vice versa. As a very basic example, when the user performs a touch (which is detected in the UI thread), the event should be propagated to and processed by the game thread. Do you know of good mechanisms or patterns for such cross-thread interaction? Is there a simpler solution to a game loop, maybe by not running the loop in a separate thread?"} {"_id": "113683", "title": "What are ways I can speed up development time when building applications?", "text": "I have noticed that over time, with experience, the learning curve shifts from trying to learn a language or technology (the way it works) to learning how to develop applications faster and with less code. I am very interested to see how other developers have minimized the time it takes to get applications to market. I know for a fact that learning just one thing can save you a ton of time. For example, I started learning lambda expressions, and code that normally took 3 lines now only takes 1 and is a lot faster to type. Another angle is shortcuts and tools in the IDE itself. For instance, I can't believe I didn't know how to remove unused using statements from my source code, so I used to do it manually until I found out there was a tool in VS for removing them. Are there any resources for learning faster ways of developing applications? Keyboard shortcuts, tools, programs, references? Is there a reference where one can learn these things? If not, can you please share how you learned some of these tips? **NOTE:** This question is geared towards .NET, C# and the Visual Studio IDE"} {"_id": "194580", "title": "How does Javascript code become asynchronous when using callbacks?", "text": "I've been doing a lot of reading online trying to figure out how to write asynchronous JavaScript code. One of the techniques that has come up a lot in my research is to use callbacks. While I understand the process of how to write and execute a callback function, I'm confused about why callbacks seem to automagically make the JavaScript execution asynchronous. So my question is: how does adding callback functions to my JavaScript code make said code automagically async?"} {"_id": "194583", "title": "Factory Method: does the Product have to be a different class than the Creator?", "text": "I want to build three sites in PHP. I'm doing this as slowly, thoughtfully and carefully as I can, to learn as much about things like OOP and software architecture as possible. From past experience I already know there will be a time when I will be glad to have logging functionality. I probably want to have different types of logs, by which I mean that in a `.ini` file I want to be able to specify whether logging should go to a text file (perhaps a tab-delimited or CSV file), a database table, or smoke signals. I'd have (let's say) a Logger class, which would define, for example, an AddEntry() method. 
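(To make the design being described here concrete: below is a minimal sketch of a base class that carries its own static factory method - written in Java rather than PHP purely for brevity, and with hypothetical names like CsvLogger and DbLogger. It is an illustration of the idea under discussion, not the poster's actual code.)

    // Sketch only: the base class is both the Creator and the root of the Product hierarchy.
    public abstract class Logger {
        // The operation all log types share.
        public abstract void addEntry(String message);

        // Static factory method on the base class itself; the argument would
        // come from the .ini file mentioned above.
        public static Logger create(String type) {
            switch (type) {
                case "csv": return new CsvLogger();
                case "db":  return new DbLogger();
                default:    throw new IllegalArgumentException("Unknown log type: " + type);
            }
        }
    }

    class CsvLogger extends Logger {
        @Override public void addEntry(String message) { /* append a line to a CSV file */ }
    }

    class DbLogger extends Logger {
        @Override public void addEntry(String message) { /* insert a row into a log table */ }
    }

A caller would then write `Logger log = Logger.create(typeFromIniFile);` without ever naming a concrete subclass.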
For each log type, I'd make a subclass that knows how to log to a CSV file, write to a database, or kindle a fire, respectively. Since I know I will want an instance of a subclass, but I won't know which subclass to instantiate until runtime, I figured I'd use the Factory Method pattern. However, when I look at its Wikipedia article, I notice that the `Creator` and `ConcreteCreator` have a different type than the `Product`, if I'm reading the UML correctly. My question comes down to this: I want to have the factory method as a static method on the Logger base class itself. If I do that, am I setting myself up for a trap that I'm not seeing yet? Is there a reason why the `Creator` and `ConcreteCreator` should be of a different type than the `Product`?"} {"_id": "184412", "title": "What's the difference between fault, error and defect?", "text": "> **Possible Duplicate:** > Difference between defect and bug in testing In computer science technical writing, especially in software engineering, what's the difference between fault, error and defect? I want to quote an answer on Stack Overflow by Daniel Joseph: > To quote the Software Engineering Body of Knowledge > > Typically, where the word \u201cdefect\u201d is used, it refers to a \u201cfault\u201d as > defined below. However, different cultures and standards may use somewhat > different meanings for these terms, which have led to attempts to define > them. Partial definitions taken from standard (IEEE610.12-90) are: > > Error: \u201cA difference\u2026between a computed result and the correct result\u201d > > Fault: \u201cAn incorrect step, process, or data definition in a computer > program\u201d > > Failure: \u201cThe [incorrect] result of a fault\u201d > > Mistake: \u201cA human action that produces an incorrect result\u201d Based on my understanding of the above definitions, an error is the result of a fault, i.e., a failure. Could someone explain more clearly?"} {"_id": "215687", "title": "Difference between bug, defect and flaw", "text": "I was reading \"Software Security: Building Security In\", and in the first chapter I was faced with 3 terms: bug, defect and flaw. The author gives a definition for each of them, but I couldn't completely understand them. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what a bug is: a bug is a malfunction of a part of the system which produces an undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong? **UPDATE** To be more precise: in the book I mentioned above, the words are presented in a way that makes a distinction, which is why I am asking to know more. In the book there are some examples denoting which category each sample belongs to. For example, buffer overflow is said to be a bug, and issues in method overriding (subclassing issues) are related to the flaw category. Likewise, race condition handling issues are considered bugs, and error-handling problems (failing open) are said to be flaws! I would like more elaboration in this regard."} {"_id": "105484", "title": "Would you tell your manager about intent to apply for a growth opportunity elsewhere", "text": "I like the company I am currently with as a .NET Software Developer - high-tech, great atmosphere, great people and a great cause. 
I also have a fair bit of flexibility and responsibility, in that I try to keep us up to date with recent technologies and come up with the architecture for our desktop applications when we have to design something new - a rare occasion. Overall, it is a great place, albeit with mediocre compensation. It has been almost 4 years and things are getting stagnant in terms of growth and challenges. There is currently a good growth prospect with a previous employer that I am entertaining, and I will have a good chance with sufficient effort. But I have neither asked for a promotion nor thoroughly expressed my career ambitions to my current manager, so there is this feeling that I am putting him at a disadvantage. My current manager is a level-headed guy, which makes me think I can discuss this challenge with him. After all, he could act as a reference for me in the future. Also, I am sure there are good things to be said about honesty and professionalism, and about trying to avoid using the other offer to gain an internal promotion. I want to grow professionally but cannot afford to jeopardize relationships or burn any bridges. Would you talk to your manager about entertaining a position somewhere else? If I were a manager, I sure would appreciate it if one of my employees came up to me to talk about this. If he were one of my key guys, I sure would do what is necessary to keep him around."} {"_id": "148688", "title": "When should I backup files?", "text": "At our company we are not using source control, so we do backups manually. My habit is like this: I back up only scripts where I removed code snippets; scripts where I only added code snippets I don't back up. Is it reasonable to back up each time you change something, or are there some conditions for file versioning? And when we back up changed files, should we back up only that file or the whole project?"} {"_id": "112377", "title": "What factors should be evaluated when determining a desktop software price?", "text": "What factors should be evaluated when determining a desktop software price?"} {"_id": "115680", "title": "random.choice in other languages", "text": "Python has a nice function in the standard library, `random.choice`, which returns a random element from a sequence. But none of the other languages I've developed in have thought to include such a feature. Do other languages provide such a feature? Why not?"} {"_id": "115684", "title": "PCI Compliance, FDMS and TransArmor", "text": "So, I've been tasked to work on an integration project where we will ask customers for credit card information and send it over to our integration partners, who will process the payment/cc info and process the rest of the order. So, one of the tasks is to become PCI compliant. However, the partner is also looking to get PCI compliant, and they said that as part of that process, they cannot accept cc info over the internet from an external source. But then I was reading about it, and found out there is some way to become a TPA (Third Party Accepter) that can allow the passing of the credit card info to the partner. I also heard the term FDMS thrown around at the first meeting - some research led me to believe this stands for First Data Merchant Services. What I found is that First Data is a merchant processor, who actually contacts the issuing banks to check the credit cards and then collects payment. 
The partner wants to achieve a level of PCI compliance where they do not store credit cards, so they said that when they take credit cards they will send them to FDMS using something called TransArmor, whereby they send the CC info to FDMS and FDMS sends back a token - and that token is what they use to access the CC info. I did my research on this, and there is a lot of information out there, but I cannot find a clear and concise place where I can read about all of this - so I have a few questions on this whole process: 1. Is there one place where I can read about what exactly it takes to become PCI compliant? 2. Are there different levels of PCI compliance? 3. Where exactly does the TPA fit into all this? 4. Is there an example of how TransArmor works? Lots of loaded questions, but thanks all!"} {"_id": "125960", "title": "How to manage your functional documentation?", "text": "Our functional / business documentation is spread across Word files on our corporate Intranet. It's difficult to find and update information. There has to be a better way. Any ideas? We had thought a wiki would work great. It would seem to be an easy way to find information, and easy for individual developers and analysts to add bits of documentation quickly. We'd be curious to know if other development (or business analysis) teams use wikis with success. The target for this documentation is the internal development team: developers, QA, and to a lesser extent business analysts."} {"_id": "113977", "title": "Do I need to Clean/Rebuild a project before Debugging/Publishing it in Visual Studio?", "text": "This is probably a stupid question, but do I need to Clean/Rebuild before Debugging or Publishing a Visual Studio project? I see other developers doing it all the time, and at some point I started doing it without even thinking. It has become habit to always go Clean, wait, Rebuild, wait, Publish. I know I didn't always do it.... I think I started doing it after spending a bunch of time debugging an error, only to discover it went away when I Cleaned and Rebuilt the solution. I've had this issue more than once too, so I know it wasn't a one-time thing, but it seems like a huge waste of time to always be cleaning/rebuilding your projects."} {"_id": "113976", "title": "Why coffeescript instead of javascript?", "text": "I think building a language which compiles to another language feels like a bad idea from the start, compared to learning JavaScript properly from the start. Look into Douglas Crockford's Good Parts and then you are hooked. And JavaScript is not hard - writing good code is hard regardless of the language! If you write crappy code in JavaScript, then you will probably write crappy code in CoffeeScript or LatteScript or whatever the flavor of the day will be. And the claim that CoffeeScript syntax is beautiful is lost on me. I like my curly braces and C syntax - and would preferably work in a language which is like that (sorry VB!). More toys and languages and frameworks for doing the same thing all over again no longer feels like progress to me!"} {"_id": "112082", "title": "How to handle 'external' dependencies in scrum?", "text": "Suppose you've planned a number of user stories for a sprint and one candidate story is dependent on some external provider delivering something to your team - for example, an online service provider adding a new API call to their system, or enabling your test account on their system, or suchlike. You know it's coming 'soon'. 
Do you go ahead and add the story to the sprint, hoping they'll deliver what is required in time for you to complete your story, **or** do you wait till the next sprint, when you know it will be ready and you can start immediately, even if it means not starting the story as early as you could? If the former, how do you handle 'unearned' story points lost because of the dependency? Partial credit (eek!), or take it on the chin?"} {"_id": "197584", "title": "What principle of OOAD is this pattern breaking?", "text": "I'm trying to make a case for not putting the structure in the parent BaseModule class I've shown below. I'm more for a Strategy Pattern, and for minimizing inheritance in favor of has-a relationships, but I am not really sure what principle of OOAD this pattern is breaking. What principle of OOAD is this pattern breaking? Potential Pattern: public interface IStringRenderer { string Render(); } public abstract class BaseModel : IRequestor { public abstract void Init(); public abstract void Load(); } public abstract class BaseModule<TModel> : IStringRenderer where TModel : BaseModel { public TModel Model { get; set; } public string Render() { return string.Join( \"\", new string[] { \"\", \"\", \"\", \"\", \"\", \"\", RenderMainContent(), \"\" }); } public abstract string RenderMainContent(); } public class MyModuleModel : BaseModel { public List<int> Ints { get; set; } ... } public class MyModule : BaseModule<MyModuleModel> { public override string RenderMainContent() { return string.Join(\",\", Model.Ints.Select(s => s.ToString()).ToArray()); } } Preferred Pattern (duplicated some code to be clear): public interface IStringRenderer { string Render(); } public abstract class BaseModel : IRequestor { public abstract void Init(); public abstract void Load(); } public class MyModuleModel : BaseModel { public List<int> Ints { get; set; } ... } public class MyModule : IStringRenderer { public MyModuleModel Model { get; set; } public MyModule(MyModuleModel model) { this.Model = model; } public string Render() { return string.Join( \"\", new string[] { \"\", \"\", \"\", \"\", \"\", \"\", RenderMainContent(), \"\" }); } public string RenderMainContent() { return string.Join(\",\", Model.Ints.Select(s => s.ToString()).ToArray()); } } There are parts of this pattern that I'm not trying to focus on; instead I am thinking about where the 'structure', or the 'tags', live. In the first example they live in the parent, but in the second example they have been pushed into the derived class. Right now the examples are pretty simple, but something like a 'side-nav' could potentially get complicated, and may need to be controlled by the child anyway. I feel the principle here is that the 'tag' structure shouldn't be in the parent class as in the first example. There are other things I've removed in my 'preferred version' -- namely the generics. (Any suggested reading on the good/bad of that choice is welcome.) Proponents of the first example's embedded structure like the fact that they can roll out changes to large numbers of child classes. I feel offering the flexibility to derived classes is the way to go, and if there needs to be a parent, that parent should only offer functionality and cross-cutting features, not bottle up canned structure."} {"_id": "182267", "title": "should you always enter a bug in a bug tracking system", "text": "> **Possible Duplicate:** > Should developers enter bugs into the bug tracking system? > Should I log trivial fixes? For a project I'll be doing in school soon I will be working with 6 other people. We will probably be using YouTrack to track our bugs, and we can obviously see the value in this system. My question, however, is whether a bug should always be entered in the bug tracking system. In our case, some people will be front-end developers and others will be back-end developers, but the complexity of the project will not be of such a high level that back-end developers wouldn't know how to solve a front-end bug. By this I don't mean that they will actually solve it, but because they (think they) know it's an easy fix, what's stopping them from just letting one of the front-end developers know about the bug by telling him across the table? The main reasons I can come up with are these: * The bug might at first seem very easy to fix but actually uncover a far larger problem (at that point it would presumably be entered, but by the front-end developer). * You might take someone out of the \"zone\" by just casually mentioning they have a bug somewhere, which might be solved in 10 seconds but then require them 15 minutes to get back in the zone. On the other hand, it does seem silly that you'd have to enter a bug which **could** be fixed in 2 minutes, especially if the part you're working on at the time needs that bug to be fixed. I understand this is probably a bit subjective, but I also feel that other people could give very good reasons for doing it one way or the other, which is what I would like to hear."} {"_id": "139893", "title": "Should I log trivial fixes?", "text": "I'm in a code shop of two. And while I understand that a bug tracker is useful where the number of programmers is greater than or equal to one, I'm not so convinced that logging bugs, changes, and fixes is worth the time when they're trivial. When I find a simple bug, I understand it, fix it, and run it through some testing. And THEN I realize I need to go log it. 
I know in theory that bug logging should be done somewhere between finding the bug and fixing the bug, but if fixing it is faster than logging it, logging seems like a drag. In larger code shops, the boss pays attention to who is doing what, and it's nice to know where others are mucking about. I find myself describing things that I've already fixed and then instantly closing them, and I have doubts that anyone will ever look at those closed bugs again. Is it time to trim the process fat?"} {"_id": "197586", "title": "regular, average programmer - scared of geeks and their skills", "text": "Just like in any other field, 90% of workers do trivial routine things and less than 10% actually do the most difficult things. Most programmers in the industry are average, and so am I. There are lots of smarties/geeks who have been coding/programming since age 10-12. Obviously they have more experience than me -- I started learning software development/programming at age 20, when I first went to university. I realize that to match their level, I would have to spend all my time for years just solving problems, making my brain work hard and getting used to a developer's mindset. With many other things to do in life, I understand that I will most likely be constantly behind the geeks - I can't solve TopCoder/Google Code Jam problems, I'm bad with algorithms, etc. I should have studied programming since childhood to match the level of other geeks. In other words, I feel guilty, stupid and, most of all, envious and scared of geeks! No super-brainy geek knows everything about programming, but there are guys who are simply 10x better than me. I feel compelled to study all the time, but I can't and don't want to. I'm sure many developers have had this feeling - when you envy someone much smarter than you. How do you deal with this feeling? I'm really confused; I keep thinking they can steal my job, and I can't even compete with them."} {"_id": "142229", "title": "What is a generic term for name/identifier? (as opposed to label)", "text": "I need to refer to a number of _things_ that have both an identifier value (used in code and configuration) and a human-readable label. These things include: * database columns * dropdown items * subapplications * objects stored in a dictionary I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, _identifier_ seems best (not everything is strictly a _key_ , and _value_ and _name_ could refer to the label; although _identifier_ usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (Java APIs, MSDN, a big FLOSS project)? _(I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)_"} {"_id": "142228", "title": "Publish/Subscribe/Request for exchange of big, complex, and confidential data?", "text": "I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents etc. We would prefer to avoid the Request-Reply pattern with the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic. 
On the other hand, I am not sure that a pure Publisher/Subscriber pattern would be appropriate -- mainly because of the complex and bulky nature of the data to be exchanged. For that reason we have discussed the possibility of a \"publish/subscribe/request\" solution. The Publish/Subscribe part would be used to publish a message to the dependent systems that something is ready for pickup. The actual content is then picked up by an old-school Request-Reply action. How does this sound to you?"} {"_id": "223510", "title": "Term for an error when code is executed before its ajax response?", "text": "What's the term for an error caused by executing a block of code before its relevant ajax response has come back? e.g., a timeline: **13:00:01** execute getDataViaAjax(); **13:00:02** execute doSomethingWithMyAjaxData(); **13:00:03** ajax data comes back, too late since I forgot to make the code wait for it... I have a feeling I may say \"doh!\" when someone tells me the answer, but I can't for the life of me think what type of error it might be called."} {"_id": "211150", "title": "What is the benefit of 64 bit A7 in iPhone", "text": "I'm trying to figure out why going to 64 bit processors is such a big deal in an iPhone. I understand that there will be twice as many registers, so the processor can work with twice as much data, which should increase performance. However, I don't see many phones going to more than 4GB of memory any time soon; it seems like overkill and it would negatively impact battery life. Another problem that I see is that most variables now need twice as much memory. This will create problems in a mobile environment with small amounts of memory. I believe that the folks at Apple are intelligent and they probably have great reasons for doing this; I'm just trying to understand them. **EDIT** I don't know much about GPUs, but I was told that with 64 bit registers 2 pixels can be loaded into each register and operations can be performed on them individually. Is there a graphical advantage to 64 bit?"} {"_id": "211151", "title": "Should my team use some common well-regarded coding standard as a basis for its own?", "text": "The R&D team I'm in has decided to adopt a coding standard. We have only recently formed, and have too little code and common coding time of our own to base our standards/conventions document on what has developed organically in our team and on good examples from our own code, etc. Now, every one of us has some experience from past workplaces - although none of us is in a position to say \"let's adopt this comprehensive document I have found to be fitting for the kind of work we do here\" (*). Plus, some of us (including myself) only have experience from places with no official coding standard, or from writing in different languages in a different setting (a high-pressure weekly-release production environment as opposed to more research-oriented development work). So, one of the options I've been thinking about is taking a relatively well-known and well-regarded document, snipping off what we don't care about/care for, and making some modifications based on our preferences. Is this a common practice? Do you believe this is a good idea? If so, what would be a reasonable 'baseline' coding standard? (Don't tell me which is best; I don't want to start a religious conflict here. Just point out what would be comprehensive or 'neutral' enough to build upon.) **Notes:** * We are expecting to work with C, C++, OpenCL, CUDA, Python. 
* We are a team of 4 people + a manager, expected to grow to about 5-6 within a year or so. * In our company, teams are almost entirely autonomous and usually don't interact at all (not even by using each other's code - the work is on entirely different projects); so there are no company-wide considerations to make. * Regarding tools, at the moment what we know is that we're going to be using Eclipse, so its code formatter is going to be one tool at least. Ctrl+Shift+F has long been my friend. * When I write Java, I've adopted the practice of adhering as strictly as possible to Bloch's Effective Java. Now, that's not quite a coding standard, but you could call it some bricks, cement and mortar for a coding standard. I was thinking of possibly including something like that as part of the 'mix' (minding that we don't do Java). * I mean coding standards in the wider sense of the word, e.g. adopting suggestions made in the answers to this P.SE question. * I've found a big list of C++ coding standards documents; maybe I should mine that for our baseline. * (*) That's not quite true, but I don't want to complicate this question with too many specifics."} {"_id": "211158", "title": "How to do AB testing", "text": "Many books on startups/kanban strongly advocate the use of AB testing to validate product features. I haven't had any experience with this, but it sounds like a great idea for some projects. My question is: how does one go about AB feature testing in practice? Say you want to test how registered users interact with a specific feature. Do you create tables to set which batches of users see which features? What about non-registered users, etc.? Say it's determined, based on your tests, that the feature isn't needed. Do you go back into the code base and cut that code out?"} {"_id": "137712", "title": "In scrum, how do you use \"usability testing\" in a team?", "text": "I am currently reading some material by Jeff Patton regarding Agile and UX. http://www.agileproductdesign.com/presentations/index.html I was interested to find out from the community what UX design and testing behaviors seemed MOST helpful to their scrum teams in creating GREAT and delightful products. 1. hallway usability? 2. paper prototypes? 3. seeing how users actually use the software? 4. others? Thanks!"} {"_id": "184159", "title": "Newbie ASP.NET developer being forced into MVC4 with WebForms", "text": "I recently got hired as a new ASP.NET developer (C# code-behind). When I arrived, I was told that they were moving to MVC 4, so I bought two books on that. However, the other day I learned that they are NOT using Razor, but WebForms. Most of the results via Google returned articles from before the WebForms view engine was apparently selectable as an option instead of Razor, but my question is this: * Is it possible to learn MVC through WebForms, and not Razor? By possible, I mean is it \"worth the time\", or should I stick with Razor (as covered in both of my books) and slowly adapt that knowledge to WebForms? * Are there any drastic changes that need to be understood when learning MVC from the WebForms perspective? Thank you for your time."} {"_id": "137716", "title": "What were the historical conditions that led to object oriented programming becoming a major programming paradigm?", "text": "What were some of the economic (and other historical) factors that led to object oriented programming languages becoming influential? 
I know that _Simula_ started things off, but was the adoption of OOP languages due to the ever-increasing needs of business? Or was the adoption due more to the new things that could be done with OOP languages? **Edit** I'm really most interested in whether there were factors external to the languages themselves that allowed them to take hold."} {"_id": "137715", "title": "A Class named Class?", "text": "This is more of a style question, but it is something I am currently pondering for a project of mine. Assume that you're creating an application which models a school. So there are entities like Student, School, etc. Now this is all fine and intuitive until you get down to Class, as (in most languages) `Class` is a reserved word. So, given that `Class` is a reserved keyword, what would you call an entity that models a school class?"} {"_id": "184150", "title": "Need interpretation of section in C# specification", "text": "I am reading the C# specification. I could use clarification on a segment: > C# has a unified type system. All C# types, including primitive types such > as int and double, inherit from a single root object type. Thus, all types > share a set of common operations, and values of any type can be stored, > transported, and operated upon in a consistent manner. Furthermore, C# > supports both user-defined reference types and value types, allowing dynamic > allocation of objects as well as in-line storage of lightweight structures. What does \u201cin-line storage of lightweight structures\u201d mean in this context?"} {"_id": "251009", "title": "What constitutes a good Database System (Java / SQL related)", "text": "I've done so much research on this but still couldn't come to my own conclusion. Before continuing with this question I would like to give you a rough idea of my current situation. I'm 18 years old and have recently been selected and offered by my school to design and develop a database system for my Psychology department. The system itself is very easy, especially on paper, but after reading so much I'm not sure how to tackle this situation. I've never built a whole system before; I've worked with multiple APIs, game clients, personal programs and small mini-projects, but never have I done it for someone else. I kindly ask for your help in answering my question. I have been coding for a while and Java is my strongest point; I still have a lot to learn in regards to other C-style languages. My school trusts me and I'd feel ashamed to let them down. What constitutes a good database system? * * * When building a database system, what are the first key factors that one ought to consider? I presume it's directly related to the actual skeleton of the system and the layout. That's relatively easy to do - or better, not as complicated as the rest. This would include a normalized database which is secure and easy to use. This is where my question comes in, with Java and SQL joined together. I can obviously write (and already have written) a database system entirely in Java, with literally no SQL in it - I wrote the queries myself and did it all with Java - however I don't think that's very satisfactory for most people (I'm not sure why; if someone could clarify, that would be great). I've read into JDBC and I still have to go a bit more into it, as self-teaching it does get a tad tricky every now and then. 
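(For reference, the core JDBC pattern mentioned above is quite small once seen end to end. Below is a minimal, hedged sketch: the table, the columns and the connection URL are invented for illustration - HSQLDB, the HypersonicSQL engine that comes up just below, is used as the example driver - while DriverManager, PreparedStatement and ResultSet are the standard java.sql API.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class StudentQuery {
        public static void main(String[] args) throws SQLException {
            // Hypothetical file-based HSQLDB database; any JDBC driver follows the same pattern.
            String url = "jdbc:hsqldb:file:schooldb";
            try (Connection conn = DriverManager.getConnection(url, "SA", "");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT id, name FROM students WHERE name LIKE ?")) {
                ps.setString(1, "A%"); // parameters are bound, never concatenated into the SQL
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }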
However, I searched further and have seen various Java database engines, such as: > HypersonicSQL - http://java-source.net/open-source/database-engines \- Open > source Database Engine How good would it be to work with such engines? Are there any security risks involved, or should I just scrap the idea? I know a lot of you here on StackOverflow are VERY knowledgeable in this field, and I know a lot of you have built your own database systems (offline), so I kindly ask you to pass on some knowledge that could prove valuable."} {"_id": "220965", "title": "How to manage styles on a group of websites", "text": "Up until now the styles were handled by a single CSS file (for each site). I'm customizing it like this: * opened the site in Firefox * opened the inspector/style editor * changed CSS values around until I was happy with how the site looks * copied the generated CSS code and pasted it into the .css file on the server The design is quite outdated, so I decided to upgrade the application that powers the sites and the designs. But now my developer tells me I'm doing it wrong. He wants to use \"less\", \"compile and cache to css\", create themes, and allow me to change only basic stuff like colors and fonts through an administration interface. He says that if I want to customize deeper things I will have to learn CSS and edit the \"less\" files inside a code editor, not Firefox. Is he crazy or what? I don't understand how this is better than what I'm currently doing."} {"_id": "157476", "title": "ISC license advice", "text": "Is the ISC license suitable as an MIT or Simplified BSD license replacement? What are the pros and cons of ISC compared to MIT or BSD?"} {"_id": "157477", "title": "DVCS blessed repo replication among geographically distributed teams", "text": "My company is exploring the move from Perforce to a DVCS, and we currently use lots of Perforce proxies because the **software development teams are spread over Germany, China, USA and Mexico** and sometimes bandwidth from one place to another is not that great. Speaking with IT, we started looking for a way to keep things running smoothly from the geographically distributed perspective, so that everyone gets the latest and greatest without determining which repo server is the source of truth (i.e. **replicating transparently**). I thought that maybe we could emulate the DNS mechanism through pre-push and pre-pull hooks. For example, consider countries A, B, and C. Upon pulling from blessed A, A itself will pull for changes from B, which in turn will pull for changes from C. If B and C have new changes, they will flow towards A. Conversely, when there is a push, it could be propagated to all blessed repositories. I'm aware that **generally you only have one blessed repo; however, this may not scale globally**, and each blessed repository would just be applicable to the teams from a single country. **My question is**: is this conception of DVCS repo replication something used in practice? Has anyone done it successfully? If so, what is the correct way to do it?"} {"_id": "68573", "title": "Generate a productive environment when developers have different opinions", "text": "My question is simple. I'm a developer and work with another developer who's been here for many more years than I have. He has his opinions about implementing stuff; he's more of a do-it-yourself kind of person. I'm more of a let's-not-reinvent-the-wheel kind of person. 
Recently, for example, we had a big discussion about how I think we should move to the cloud and how he thinks we should keep an in-house solution for our servers (to host our websites). Or how I think Java is better suited for what we need, but he likes PHP, so that's what we have to use. These are all arguments that people have on the web... some like the cloud, some don't... some like Java, some don't. And I'm totally OK with diverse opinions... however it's come to the point of being unproductive, and without a boss who knows about programming (our boss doesn't know anything about coding, he just manages us) to take the position of making a decision, the outcome always goes toward his opinion, because he's the more experienced developer... which I don't think is productive. Have you ever confronted such a team issue? What do you think I should do? What is the best way to create a productive environment?"} {"_id": "157472", "title": "Is it reasonable to use POCO's that inherit from DTO's?", "text": "I'm designing a tiered .NET application, and I want to use the Code First approach. I'm new to this, so I'm struggling to envision how it ought to be designed. Would the following be a reasonable approach? What problems or limitations might I run into? * Write DTO classes for my domain. These will be used for data transfer between the server and client over WCF. * Write a set of POCO classes that inherits from the DTO classes. Use these to generate the database on the server. * Write another set of POCO classes that inherits from the DTO classes. Use these to generate an SDF database on the client."} {"_id": "104994", "title": "Transition from engineering to programming", "text": "I got a degree in Chemical Engineering but held a part-time job as a PHP programmer the whole time. This has developed into a full-time job. I was wondering how having a Chem-E degree instead of a CS one will affect me progressing further in this career. Does anyone have experience with this and maybe some hints?"} {"_id": "150045", "title": "What is the point of having every service class have an interface?", "text": "At the company I work at, every service class has a corresponding interface. Is this necessary? Most of these interfaces are only used by a single class and we are not creating any sort of public API. With modern mocking libraries able to mock concrete classes and IDEs able to extract an interface from a class with two or so clicks, is this just a holdover from earlier times?"} {"_id": "159813", "title": "Do I need to use an interface when only one class will ever implement it?", "text": "Isn't the whole point of an interface for multiple classes to adhere to a set of rules and implementations?"} {"_id": "170009", "title": "Everything has an Interface", "text": "> **Possible Duplicate:** > Do I need to use an interface when only one class will ever implement it? I am taking over a project where every single real class implements an Interface. The vast majority of these interfaces are implemented by a single class that shares a similar name and the exact same methods (ex: MyCar and MyCarImpl). Almost no 2 classes in the project implement more than the interface that shares their name. I know the general recommendation is to code to an interface rather than an implementation, but isn't this taking it a bit too far? The system might be more flexible in that it is easier to add a new class that behaves very much like an existing class.
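To make the POCO/DTO question above concrete, here is a minimal sketch of the arrangement being proposed (all type names are invented for illustration; this shows the shape being asked about, not a recommendation):

```csharp
// Shared DTO assembly: the wire contract transferred over WCF.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Server-side POCO used for Code First database generation; it inherits
// the transfer shape and adds persistence-only members.
public class Customer : CustomerDto
{
    public byte[] RowVersion { get; set; } // concurrency token, never sent to clients
}

// Client-side POCO for the local SDF store, adding client-only state.
public class LocalCustomer : CustomerDto
{
    public bool PendingSync { get; set; }
}
```

One limitation worth checking before committing to this: some ORMs treat a mapped base class as the root of an inheritance hierarchy rather than as plain inherited properties, and versioning the shared DTO assembly couples the server and client schemas together.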
However, it is significantly harder to parse through the code, and method changes now require 2 edits instead of 1. Personally, I normally only create interfaces when there is a need for multiple classes to have the same behavior. I subscribe to YAGNI, so I don't create something unless I see a real need for it. Am I doing it all wrong or is this project going way overboard?"} {"_id": "180777", "title": "How do I prevent unknowingly duplicating code?", "text": "I work on a rather large code base. Hundreds of classes, tons of different files, lots of functionality, takes more than 15 minutes to pull down a fresh copy, etc. A big problem with such a large code base is that it has quite a few utility methods and such that do the same thing, or has code that doesn't use these utility methods when it could. And also the utility methods aren't just all in one class (because it'd be a huge jumbled mess). I'm rather new to the code base, but the team lead who's been working on it for years appears to have the same problem. It leads to a lot of code and work duplication, and as such, when something breaks, it's usually broken in 4 copies of basically the same code. How can we curb this pattern? As with most large projects, not all code is documented (though some is) and not all code is... well, clean. But basically, it'd be really nice if we could work on improving the quality in this respect so that in the future we had less code duplication, and things like utility functions were easier to discover. Also, the utility functions are usually either in some static helper class, in some non-static helper class that works on a single object, or are static methods on the class which they mainly "help" with. I had one experiment in adding utility functions as extension methods (I didn't need any internals of the class, and it definitely was only required in very specific scenarios). This had the effect of preventing clutter in the primary class and such, but it's not really any more discoverable unless you already know about it."} {"_id": "34354", "title": "How do you structure your shared code so that it is "re-findable" for new developers?", "text": "I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they both have been encouraging is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, **and** sharing code between applications built against multiple versions of .NET. How do other people approach this?
Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all or should we simply branch the class libraries for re-use? Thanks for any and all help!"} {"_id": "68578", "title": "Who is responsible for software licensing in ITIL?", "text": "In a standard ICT department, who is responsible for managing software licenses? I would start by saying that the ICT Manager is accountable for it. But who gets the responsibility? Using ITIL as a standard model for an ICT department, would it be the Service Delivery Manager or the Development Manager? The Service Delivery Manager needs to keep things running, but when moving forward with new versions of software does the Development Manager get the controlling vote in what to purchase? After all, it's not just about delivering the service; it's about improving it."} {"_id": "109958", "title": "How to define complex business rules using User Stories?", "text": "A quick and dirty definition of User Story: "As a <role>, I want <feature> so that <benefit>" In this commonly accepted definition there is little space for defining business rules, constraints or user input. Trivial example just to illustrate: > As a <librarian>, I want to <register new books> so that <members> can find their availability online. In this silly example, where would one define the fields needed when registering a book? Should it be written anywhere? Or should the required business rules be passed by word of mouth by the Product Owner?"} {"_id": "76040", "title": "What do you call a "cell" in database terminology?", "text": "With "cell" I mean the value of a particular column in a particular row. I'm not sure if it's called just "cell" or whether this is a spreadsheet thing."} {"_id": "4272", "title": "What are the best monitors for programming?", "text": "From time to time I have tried some monitors. My main work is coding (work, PhD, etc). At work I have an LG Flatron L246WH which I highly recommend. However at home I have an LG W2363V with which I feel pretty uncomfortable when coding. Fonts, subpixels or whatever mess with my mind when using smooth fonts. Currently, what are the best monitors out there to best fit our needs?"} {"_id": "4274", "title": "Should I change language to stop becoming stale?", "text": "I'm an ASP.NET/C# programmer using SQL Server as a back end. I am the Technical Director of the company; I'm extremely happy in everything I do and consider the languages and system we use to be perfect for what we do. In the back of my mind though I know that over time programmers can become stale. I remember as a wee youngster that all those "old" developers were past it and couldn't keep up with the youngsters. So, considering I'm happy in everything I'm doing, what options are there for keeping up with everything and avoiding becoming stale? One particular idea that I use is to let all the new developers use and showcase the things that they think are cool. If anything catches my eye then absolutely it will be something we all use going forward. Thoughts?"} {"_id": "147475", "title": "Build functionality around the design, or the other way around?", "text": "When you build an application, is it better to design the UI first (in Photoshop or whatever), then implement the functionality following the UI you just designed, or do the programming and build the design as you go?
Advantages I see using the UI as reference: * the app will end up very UI-friendly :) * like pdr mentioned, if working for a client, he gets exactly what he imagined Disadvantages: * the programming will get more complicated, so development time increases Any others? :D"} {"_id": "96729", "title": "What's the difference between working at a software company and a company whose focus is in another field?", "text": "Recently, I was approached by a local ad agency with a job opportunity. They are bringing all web/interactive development in-house and adding to their development team. I'm growing sick of my cushy, yet boring corporate job, and am intrigued by the position. Having only worked for software shops where the primary business was making software, I worry that they may not put emphasis on quality software practices, since development is not the focus of their business. Could anyone with experience at both compare/contrast working at a software company with working at a company that just happens to have an in-house software development team or department?"} {"_id": "45230", "title": "Do programmers at non-software companies need the same things as at software companies?", "text": "There is a lot of evidence that things like offices, multiple screens, administration rights on your own computer, and being allowed whatever software you want are great for productivity while developing. However, the studies I've seen tend toward companies that sell _software_. Therefore, keeping the programmers productive is paramount to the company's profitability. However, at companies that produce software simply to support their primary function, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where a lot of a programmer's work is maintenance?"} {"_id": "194795", "title": "Will working in IT limit your career prospects as a programmer?", "text": "I took a job working as an IT guy (SQL programming, helpdesk, etc.) because I had need of a job (to pay back student loans accumulated from school). I'm very happy to have a job, but I eventually want to be a C++ programmer. I'm studying my C++ textbooks in my spare time and starting to do some projects with my newly acquired SQL skills. But the question that keeps rolling around in my head is "will I be a SQL programmer for the rest of my life now?" In your experience, will spending a year or two at my current job be a detriment in getting into C++ programming?"} {"_id": "103422", "title": "Is it a bad practice to stop providing support for outdated software if updates are free and easy?", "text": "Providing support for outdated software is both unexciting and expensive. If people are still using the version which was sold ten years ago, it means that they will find bugs which do not exist in later versions, their software may not work as expected on new hardware or operating systems, etc. The support people must also be trained to use this old version. This is unavoidable in several cases: * When every new version of the software is paid for, you can't force everyone to spend their money every two years to buy the next version. Example: Microsoft still has to support Windows XP, since it is understandable that people don't want to pay hundreds of dollars for Windows Vista, then Seven, then Windows 8. * When the update is complicated.
Example: when you know that installing SP2 of Microsoft SQL Server 2008 may render your server completely unusable, you may want to postpone installing updates and stay with an outdated version which has one strong point: it _works_. * When legacy systems or security considerations are involved. Example: auto-updating space shuttle software every week may have some disadvantages. In the same way, you can't update hardware if the legacy software requires the old hardware to run. Now, let's say that the update process is as well done as in Google Chrome. The user doesn't have to bother with the updates; there are no security or legacy reasons not to update, and the updates are free. In this case, is it a bad practice, for a company, to stop providing any support for versions that have been outdated for a few months, and ask their customers to update their software first, then contact support if the problem persists?"} {"_id": "72293", "title": "How do you diagram global or shared state?", "text": "Say you're building an FSM for something like a game and you've got states like: * MainMenu * Options * SinglePlayer * MultiPlayer Your state diagram might look something like this: ![enter image description here](http://i.stack.imgur.com/vjxus.png) Now say you have a shared state, `DevConsole` (shows the console when tilde is pressed and receives KB input etc.; I'm sure you've seen it before), such that no matter what state you're in, this state applies. How do you diagram that? _edit_ * An example of how it would function would be like this: public class StateMachine { protected State sharedState; protected State previousState; protected State currentState; public void Update() { if (this.sharedState != null) this.sharedState.Update(); if (this.previousState != null && this.previousState.IsExiting) this.previousState.Update(); else this.currentState.Update(); } // called by individual states public void ChangeState() { // creates a new state and sets its state machine owner to this machine this.previousState = this.currentState; this.previousState.Exit(); this.currentState = StateBuilder.Build(this); } }"} {"_id": "79242", "title": "C programming in 2011", "text": "Many moons ago I cut C code for a living, primarily while maintaining a POP3 server that supported a wide range of OSs (Linux, *BSD, HPUX, VMS ...). I'm planning to polish the rust off my C skills and learn a bit about language implementation by coding a simple FORTH in C. But I'm wondering how (or whether?) things have changed in the C world since 2000. When I think C, I think ... 1. comp.lang.c 2. ANSI C wherever possible (but C89 as C99 isn't that widely supported) 3. `gcc -Wall -ansi -pedantic` in lieu of static analysis tools 4. Emacs 5. Ctags 6. Autoconf + make (and see point 2 for VMS, HP-UX etc. goodness) Can anyone who's been writing in C for the past eleven years let me know what (if anything ;-) ) has changed over the years? (In other news, holy crap, I've been doing this for more than a decade)."} {"_id": "170760", "title": "Protobuf design patterns", "text": "I am evaluating Google Protocol Buffers for a Java-based service (but am expecting language-agnostic patterns). I have two questions: The first is a broad general question: > What patterns are we seeing people use? Said patterns being related to class > organization (e.g., messages per .proto file, packaging, and distribution) > and message definition (e.g., repeated fields vs. repeated encapsulated > fields*) etc.
There is very little information of this sort on the Google Protobuf help pages and public blogs, while there is a ton of information for established protocols such as XML. I also have specific questions about the following two different patterns: 1. Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service --which is basically the default approach I guess. 2. Do the same but also include hand-crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity): public V toProtobufMessage() { V.Builder builder = V.newBuilder(); for (Item item : getItemList()) { builder.addItem(item); } return builder.setAmountPayable(getAmountPayable()). setShippingAddress(getShippingAddress()). build(); } public static T fromProtobufMessage(V message_) { return new T(message_.getShippingAddress(), message_.getItemList(), message_.getAmountPayable()); } One advantage I see with (2) is that I can hide away the complexities introduced by `V.newBuilder().addField().build()` and add some meaningful methods such as `isOpenForTrade()` or `isAddressInFreeDeliveryZone()` etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with .proto files. Does anyone have better techniques or further critiques on either of the two approaches? * * * *By encapsulating a repeated field I mean messages such as this one: message ItemList { repeated Item item = 1; } message CustomerInvoice { required ShippingAddress address = 1; required ItemList itemList = 2; required double amountPayable = 3; } instead of messages such as this one: message CustomerInvoice { required ShippingAddress address = 1; repeated Item item = 2; required double amountPayable = 3; } I like the latter but am happy to hear arguments against it."} {"_id": "91319", "title": "WebSockets current state on security and browser support", "text": "I found many things when searching, but nothing that seemed both updated and complete. In the last quarter of 2010 I remember reading what was very sad news for me, as WebSockets were disabled by default in most (if not all? can't remember) modern browsers that supported them, due to security concerns in the proxy protocol the WSP uses. I wonder what's the current state of the issue and if there are any known plans for the next versions of browsers to start enabling WS by default. I know that in Chrome 13b I get it enabled, and I don't think I enabled it myself, but the same does not happen in the other browsers I use. Also, what draft is being used in the current versions? I will be giving a presentation on HTML5 for a group of developers next month and this is the topic I think I'm the least updated on. I need to implement the current protocol to at least feel certain I know what I'm talking about and have no problem answering questions :)"} {"_id": "91318", "title": "Marriage of Lisp and LaTeX - has it been done?", "text": "I like `LaTeX`, but I find its macro system and logic both complex and weak. Languages such as Scheme/Lisp/Clojure are very good at macros.
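The wrapper contract in the protobuf question above is language-agnostic, so a minimal C# rendering of the same idea may help. The CustomerInvoiceMessage class below is a hand-written stand-in for whatever the protobuf toolchain would generate - the real generated API will differ - so treat the conversion methods as the pattern rather than exact calls:

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-in for the tool-generated message class (the mutable wire type).
public class CustomerInvoiceMessage
{
    public string Address { get; set; }
    public List<string> Items { get; set; } = new List<string>();
    public double AmountPayable { get; set; }
}

// Hand-written immutable wrapper: service clients see this type,
// never the mutable wire type.
public sealed class Invoice
{
    public string ShippingAddress { get; }
    public IReadOnlyList<string> Items { get; }
    public double AmountPayable { get; }

    public Invoice(string shippingAddress, IEnumerable<string> items, double amountPayable)
    {
        ShippingAddress = shippingAddress;
        Items = items.ToList();
        AmountPayable = amountPayable;
    }

    // Domain-meaningful behavior lives on the wrapper, not the wire type.
    public bool IsOpenForTrade() => Items.Count > 0;

    public CustomerInvoiceMessage ToMessage() => new CustomerInvoiceMessage
    {
        Address = ShippingAddress,
        Items = Items.ToList(),
        AmountPayable = AmountPayable
    };

    public static Invoice FromMessage(CustomerInvoiceMessage m) =>
        new Invoice(m.Address, m.Items, m.AmountPayable);
}
```

The trade-off is exactly the one the asker names: every field added to the .proto file must be mirrored in the wrapper by hand, in exchange for immutability and domain methods like `IsOpenForTrade()`.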
I imagine the entire document written in a Lisp-family language which, when run, would emit LaTeX code and produce a document. Has this been done before? Any links?"} {"_id": "186742", "title": "Is using '{}' within format strings considered Pythonic?", "text": "I just learned you can write '{}{}'.format(string_a, string_b) instead of '{0}{1}'.format(string_a, string_b) in Python, i.e. you can omit the numerals for the string format parameters when you want things to slot in one by one in order. Is this considered Pythonic? NOTE: "Pythonic" is a commonly used term among Python programmers to mean idiomatic Python code. Within Python culture, there tends to be clear consensus on style questions, especially for very specific ones like this one, given the language's explicit design philosophy of "There should be one -- and preferably only one -- obvious way to do it." This is quoted from "The Zen of Python," a set of aphorisms which goes a long way towards defining what is "Pythonic" and which is included with every distribution of Python (at any Python interpreter command line, enter `import this` to see it)."} {"_id": "115530", "title": "Starting a new startup/web application, how to choose a hosting provider?", "text": "When creating a new startup related to a web application, how do you choose a hosting provider? Assuming the code of the web application is oriented DDDD (Distributed Domain Driven Development) to handle large deployment scenarios, the idea is to avoid too much cost for hosting. Basically, at launch the web app, with its very first users, will be able to fit on a "single" box (DB + APP), maybe two for redundancy. Eventually the app will grow progressively to more and more users (I hope :)). How can I choose the hosting wisely? Today, I see three options: * hosting ourselves: not actually an option today as it requires a lot of administrative skills and related tasks * hosting on virtual/dedicated servers: maybe a good option as virtual dedicated hosting is quite cheap, but I fear this will quickly limit us in terms of scalability * hosting on the cloud (Amazon or Azure): probably the best option in the long term, but with a higher cost to start (having to adapt the application a bit, cost of instances) Does anybody have feedback/advice about such requirements? PS: FYI, the web apps will probably be written with ASP.NET MVC as the web framework, and Ncqrs+NServiceBus to target the DDDD pattern in a CQRS style. Edit: as a backend, MongoDB is today our probable choice, as NoSQL marries well with event sourcing + CQRS (no need for joins, etc.). However, finding a VPS with ASP.NET AND MongoDB can be challenging. I may have to use some traditional RDBMS found on all providers (MS SQL Server or MySQL)"} {"_id": "115537", "title": "How to create an extendable web application?", "text": "How would you implement an extendable web application? What I'm thinking about is a web application similar to Jenkins or Hudson which provides plug-in support. While it's obvious to me how plug-ins can be located and loaded, I don't know how they can be integrated within the overall architecture. I'm especially uncertain about the following points. **How can a plug-in change the view, e.g. add input elements to a form?** My first idea would be that a plug-in can register partials / fragments for a certain form. Example: a Newsletter plug-in which registers a typical newsletter checkbox fragment which will be rendered in the user registration view.
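A minimal sketch of that fragment-registration idea (every interface and class name here is invented for illustration):

```csharp
using System.Collections.Generic;

// Contract a plug-in implements to contribute markup to a named
// extension point (e.g. "user.registration.form").
public interface IViewFragmentPlugin
{
    string ExtensionPoint { get; }
    string RenderFragment(); // returns an HTML partial
}

// Host-side registry: the rendering pipeline asks it for extra
// fragments when building a view.
public class FragmentRegistry
{
    private readonly List<IViewFragmentPlugin> plugins = new List<IViewFragmentPlugin>();

    public void Register(IViewFragmentPlugin plugin) => plugins.Add(plugin);

    public IEnumerable<string> FragmentsFor(string extensionPoint)
    {
        foreach (var p in plugins)
            if (p.ExtensionPoint == extensionPoint)
                yield return p.RenderFragment();
    }
}

// Example plug-in: the newsletter checkbox from the example above.
public class NewsletterPlugin : IViewFragmentPlugin
{
    public string ExtensionPoint => "user.registration.form";
    public string RenderFragment() =>
        "<label><input type='checkbox' name='newsletter'/> Subscribe</label>";
}
```

The host renders a form by concatenating its own markup with whatever FragmentsFor("user.registration.form") returns, so plug-ins never have to touch the core templates.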
**How can a plug-in react to incoming requests?** Again, a direct approach would be to provide listeners for certain requests or actions, e.g. a POST request to /user. **How could a plug-in persist data?** I'm assuming this is a situation where NoSQL data storage solutions would be superior to relational databases. I would appreciate any comments, ideas and experiences (maybe there is even a design pattern) that you have regarding extendable web applications."} {"_id": "186745", "title": "Using assembly to write to a file", "text": "I am working with a trading application (reading data from the exchange) which generates a bucket load of data on a per-second basis. We have different "log-levels", but even the minimal log-level generates so much data! This process of log creation is quite I/O intensive. Since I can write assembly using __asm__, I was wondering: if the operation of writing to the files is coded in assembly, will there be a speedup? The rest of the code is C++. Also, what are the caveats of doing such a thing?"} {"_id": "131924", "title": "How should an administrating web application control a WCF service's entities?", "text": "We have a WCF Service (self-hosted) which can receive several types of calls from the outside; each call encapsulates a business logic procedure. We need to construct a web application which will act as an administrating app for all users of the service. So we need to have access to all inner entities of the service and not just the business calls. What is the most effective way to do this? **EDIT:** I wasn't too specific because I see this as a general issue. Our Customers are retailers which have clients of their own, so when I say Customer, it's our client, and when I say Client - it's our client's client. **Example:** * Service X allows registered clients of Customers to commit transactions. * There are several transaction types. * The service receives the transactions from several different devices. * The service is a WCF self-hosted service. The Customer needs to register devices for use, register different configurations for different devices, and do much other registration and administration work. All of this is done in a website. **The website users are the Customers, the Retailers, not the end clients.** So as I see it I have these options: 1. Reference the dll which holds all service-related entities - this is OK now because the website and the service are on the same machine, but what if this changes? Is this a good approach? 2. Make service calls for all of this registration and administration, though I'm not even sure how that should be done. So, I need your help in understanding the best way to do this..."} {"_id": "143578", "title": "Localization in PHP, best practice or approach?", "text": "I am localizing my PHP application. I have a dilemma on choosing the best method to accomplish this. Method 1: Currently I am storing words to be localized in an array in a PHP file: <?php $lang = array( 'welcome' => 'bienvenida' ); ?> I am using a function to extract and return each word according to requirement. Method 2: Should I use a txt file that stores strings of the same? My question is: which is the better method in terms of speed and development effort, and why? Edit: I would like to know which of the two methods is faster in responding, and why would that be?
Also, any improvement on the above code would be appreciated!"} {"_id": "161167", "title": "Can a single simple language such as Clojure replace Html + JavaScript + CSS + Flash + Java Applets ...?", "text": "Please do not dismiss the idea right away. I know that it is hard to compete with a mainstream approach that already works (mostly), so my question is partly "academic". I also am aware that ClojureScript exists and is very cool, but it is a patch on an existing ugly thing, a useful abstraction. Disclaimer: I am a programmer but not a web developer, and this is why I am soliciting others' feedback. Being a developer, using the web every day, reading up on various topics and viewing the HTML source of a page from time to time, I think I have some idea about web development. Anyhow, the problems as I see them: * The web started out quick and dirty and noob-friendly, but now it takes a great deal of skill to make a good modern interactive web page, and you just have to be good at it in order to be competitive today. This often means that quick and dirty "learn the web in 21 days" just does not cut it at all. * HTML started out quick and dirty, as a noob-friendly format. It is currently a mess. * The JavaScript language is not without its flaws, but OK. * CSS appears to be a decent attempt to clean things up. It is worth keeping, at least the idea of it - that you can style the appearance in a separate file. * Putting it all together - JavaScript + HTML + CSS becomes rather dirty. There have been good ideas/tools that mitigate the problem, such as: AJAX libraries abstract away the specific flavors of JavaScript. Powerful libraries such as jQuery, Node.js, etc. allow doing cool things in JavaScript, as imperfect as it is. Google Web Toolkit does a very good job of translating a GUI design into a web page. Web MVC frameworks such as ASP.NET, RoR, Django abstract things away and do a lot of legwork for you, **HOWEVER**, these are all abstractions on top of a crappy base. * The demand for what the web could do today is ever-increasing; Google's Chromebook is a manifestation of that. You run a browser full screen, and everything that you might want to do - keyboard/mouse interactions, sound, video, games, text, images, PowerPoint presentations - everything is happening inside of it. That is thanks to fast browsers, fast computers and "the cloud", but it could be a lot better! From a graphics point of view, a browser is just a rectangular canvas that you can paint anything on. Currently the browser executable weighs many megabytes because it has to know how to parse HTML, JavaScript and CSS and display all of it. If you start from scratch and realize that it is pretty much just a canvas to be painted, then I think the browser can be much smaller and simpler. The price to pay is having to write a valid program for everything in a funky syntax such as Lisp or Clojure, even for the simplest of things such as displaying a label. That used to be the cool part of HTML - if you wanted just to type a paragraph, you would type it verbatim. This rarely happens anymore. If all you want is to just type a paragraph of text, you still have to think about inline or CSS styling and placement.
The following piece of HTML (found on the front page of this site): <a href="/questions/tagged/programming-languages" class="post-tag" title="show questions tagged 'programming-languages'" rel="tag">programming-languages</a> is not that much easier to craft than some alternative Lispy syntax (and I have not put that much thought into it): (create-link :target "/questions/tagged/programming-languages" :class "post-tag" :title "show questions tagged 'programming-languages'" :rel "tag" :content (text "programming-languages")) This might not be valid Clojure syntax; I sort of made it up. It does not have to try to mimic HTML - in fact that is the point of starting from scratch. The huge advantage here would be that `(text ...)` and `(create-link ...)` are not part of the core language that a browser would have to understand. The browser would only need to understand a "safe" Clojure (one that cannot wipe your hard drive clean) and be able to draw and play music and listen to keyboard and mouse and similar things; everything else - drawing text, playing a video, displaying a combo box and interacting with it - would all be done in a carefully designed library. Why did I choose Clojure? It is a tiny language that can accomplish a lot, plus the philosophy of building complicated programs out of simple building blocks is very attractive. I do think that being able to support a single powerful language such as Clojure would be enough to accomplish everything that HTML and HTML5 and CSS and JavaScript and Silverlight and Flash can accomplish. Somewhat of a tangential discussion - I think the same is true for LaTeX - it could be redone with Clojure as an underlying language, and a source file would be a full-blown program that spits out a PDF or a PS as it executes. I understand that starting from scratch is VERY HARD because a modern browser has lots and lots of useful features. Starting with a clean base can pay off though. What are your thoughts on this crazy idea? I realize that the answers would probably be subjective due to the nature of this question, but I still am curious what you think of this."} {"_id": "131929", "title": "What is the easiest and cheapest way to distribute apple iOS enterprise app?", "text": "I work as a developer in a company. The company I work with has to deploy the app I made to their clients. The number of clients is 3, and each client has around 100 iPads. Now my question is: * What different ways can we use to deploy and distribute the app seamlessly? * How good is AppCentral for distributing applications?"} {"_id": "225215", "title": "My organization is relatively small; how can I follow ISO9000/1 as a matter of best practice?", "text": "I work for a company of <60 people. I understand that larger companies typically are ISO9000/9001 certified, as a matter of quality assurance. I also understand that for a company my size, such certification is not cost-effective. However, one could still make a case that, as a matter of QA best practice, it makes sense to try to follow the ISO standards anyway, at least insofar as reasonably practical. Am I correct in all these beliefs? If all the above is correct, are there any "high points" that would be particularly beneficial for a company my size to follow, or any parts of the standards that seem less relevant?"} {"_id": "97822", "title": "Can someone explain how a GUI works and when I should start using one?", "text": "I've been learning C++ for about a month now, and before I go any further, I'd like to clear up this tedious question I keep on having.
I know what a GUI is, but I don't really know how it works; maybe some examples of popular ones would help? Although I know command-line programming is the bare fundamentals, I think it'd be fun messing around with a GUI. Although I have around 3 million other questions, I'll save them :D"} {"_id": "228428", "title": "Is Interactive the term for a web application that responds quickly to the user?", "text": "Just a quick question on Web Application attributes: if responsive (which mainly stems from web design) means a website that adapts to different screen sizes, then the closest term I can find for a Web Application which is highly 'responsive' to user input (click, touch, etc.) is interactive? One might call it a 'native-like' experience, or a great user experience, but I find these still too vague. Well, here is what I have at the moment: **Responsive Web Application** refers to web applications which adopt responsive design, i.e. change their layout on different screen sizes to increase user experience (supposedly). **Interactive Web Application** refers to web applications which have a lot of interactivity, i.e. elements that can receive user inputs and react accordingly. **Native-like Web Application** refers to web applications that offer the same speed and performance as native applications (either desktop software or mobile native apps). I might have missed the right term; I would love input on this."} {"_id": "228429", "title": "Rejecting time-bound programming exercises in interviews", "text": "I have come across a few hands-on programming scenarios at recent job interviews. 1. Implement a queue in C# only using arrays. A call to get an item will take the first item out of the queue (write on paper in 15 minutes). Approach: Implemented on time. Take items from index 0, with remaining items pushed up; put items at the last index. 2. Implement a program to create a folder tree which has options for adding nodes to it and displays the tree to the console. Shouldn't use any inbuilt collections/library (code on computer in 45 minutes). Approach: Implemented on time, with the tree implemented with a List and traversal done through recursion. In both scenarios, my solutions were not up to the mark and I was rejected. I understand the flaws in the solutions, as I would have done them differently if given ample time. The first solution should not shuffle the array, and the second should not use recursion (subjective). But I am not sure if that was fair to me. Both were pressure situations because they were time bound, and the problems were "new" to me (not really, but I haven't come across such problems at work in the past 10 years; we always use inbuilt collections). In agile development, we estimate our work with a lot of focus on quality, and I am not sure what exactly the point of time-bound tests is. I would have given the candidate a problem with no time limit. I would like to know the community's opinion: how should I tackle this next time? What about declining any time-bound tests in interviews, offering to do them if they are not time bound?"} {"_id": "228422", "title": "I'm making a report generator, how should I run it?", "text": "So I've started a project to generate reports for our system. These are reports that we deliver to our end customers and they are so specialized that no existing system can generate them. This is not the problem, however; the generator is working just fine. For simplicity I've based it on an ASP.NET solution that generates HTML that I squeeze through phantomjs to generate PDFs of the result.
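On the earlier question about how a GUI works: at bottom, a GUI program is an event loop - the framework paints the controls, waits for input, and dispatches each event to a handler you registered. A minimal sketch using Windows Forms (one toolkit among many; the C++ world has analogous ones such as Qt):

```csharp
using System;
using System.Windows.Forms;

class Program
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "Hello GUI" };
        var button = new Button { Text = "Click me", Dock = DockStyle.Fill };

        // The framework calls this handler whenever the button is clicked;
        // code reacts to events instead of reading input line by line.
        button.Click += (sender, e) => MessageBox.Show("Button clicked!");

        form.Controls.Add(button);

        // Start the message loop: wait for input, dispatch to handlers,
        // repaint, repeat, until the window closes.
        Application.Run(form);
    }
}
```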
This also works fine. However, this is a one-shot operation. Right now I have a .BAT file that runs phantomjs with a specific URL, and this generates my PDF. However, I need to make some kind of queue system, so the generation of reports is automatic when they are ordered. The ordering system is part of a legacy system that is scheduled to be renewed, but "Not Right Now"(TM). So right now the orders are put in an SQL table, with a "status" column saying whether a report is generated or not. What I'm looking for are ideas for a system to run these reports, one that maybe in the future could even easily be switched to a "real" queue-based system (RabbitMQ or such). I'm not locked to any language/framework as long as it's stable on Windows and can communicate with SQL Server (I know C# and JS best). I was thinking of just making it with C#, but I'm not quite sure about the robustness of shelling out to phantomjs... (I'm not sure about the robustness of shelling out to anything as a critical business operation ;) )"} {"_id": "35316", "title": "What source code organization approach helps improve modularity and API/Implementation separation?", "text": "Few languages are as restrictive as Java with file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there are still a lot of empty directories and artificial constraints. There are several languages that define everything about a class in one file, at least by convention: C#, Python (I think), Ruby, Erlang, etc. The commonality in most of these languages is that they are object-oriented, although that statement can probably be rebuffed (there is one non-OO language in the list already). Finally, there are quite a few languages, mostly in the C family, that have separate header and implementation files. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C it seems that feature is used to promote modularity. Yet, with C++ the way header and implementation files are split seems rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header you would rather keep only in the implementation. There are quite a few languages that have a concept that overlaps with interfaces, like Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept, like C++ using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing - for example Ruby. Ruby has modules, but those are more along the lines of mixing behaviors into a class than defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation. So to hurry up and ask the question, from a personal experience point of view: * Does separation of header and implementation help you write more modular code, or does it get in the way? (It helps to specify the language you are referring to.) * Does the strict file name to class name scheme of Java help maintainability, or is it unnecessary structure for structure's sake?
* What would you propose to promote good API/Implementation separation and project maintenance, and how would you prefer to do it?"} {"_id": "23182", "title": ".Net Best Practices: Common Bugs Introduced By Refactoring, Carelessness, and Newbies", "text": "What are the common bugs introduced by refactoring, carelessness, and newbies? I would like to request that the experienced programmers here share their experience and list the bugs they used to introduce when they were inexperienced. In your response, please write a headline mentioning **the kind of bug in bold text**, followed by a few line breaks, and then an explanation, the cause of the bug, and finally the fix."} {"_id": "132741", "title": "Literate programming, good/bad design methodology", "text": "I have recently found the concept of literate programming, and I find it rather intriguing. Yet I have not encountered claims that it is a bad way to structure a program; it seems not to be covered in many places. Not even here could I find any questions regarding it. My question is **not** about its flaws or ways of handling documentation. I consider the documentation a side-effect of what it would mean for the flow of literate programming. I know that the design was originally intended for easy documentation as well as the concept of _forward_ programming flow. The concept of dividing the problem into small sentence-based problems seems to be a really brilliant idea, as it eases the understanding of the program's flow. A consequence of the literate design method is also that the number of functions required will be limited only by the imagination of the programmer. Instead of defining a function for a certain task, it could be created as a `scrap` in the literate method. This would yield automatic insertion of the code, instead of a separate function compilation and a subsequent inter-procedural compilation optimization step to obtain the equivalent speed. In fact Donald E. Knuth's first attempt showed an inferior execution time, due to this very fact. I know that compilers can be made to do a lot of this; however this is not my concern. So I would like to get feedback on why one should consider this a good/bad design methodology."} {"_id": "132747", "title": "Is having public constants "bad"?", "text": "Is this: public class MyClass { public const string SomeString = "SomeValue"; } worse than this: public class MyClass { public static string SomeString { get { return "SomeValue"; } } } Both can be referenced the same way: if (someString == MyClass.SomeString) ... The second, however, has the protection of being a property. But really, how much better is this than a const? I have learned over and over the perils of having public fields. So when I saw some code using these constants on public fields, I immediately set about refactoring them to properties. But halfway through, I got to wondering what the benefit was of having the static properties over the constants. Any ideas?"} {"_id": "136048", "title": "Do C library functions generally mimic the Intel assembly language style?", "text": "I'm looking at the basic strcpy function: char *strcpy( char *dest, const char *src ); which reminds me of assembly language: `MOV DEST, SRC`"} {"_id": "23189", "title": "How do you avoid being the victim of a syntax pedant on StackOverflow?", "text": "Either I just have bad luck, I am wrong a lot, or there are an increasing number of pedants trolling on StackOverflow.
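One concrete difference behind the public-constants question above: a `const` is a compile-time constant whose value is baked into the IL of every assembly that references it, while a `static readonly` field or a property is resolved at run time. Changing a public `const` therefore silently leaves already-compiled consumers on the old value until they are recompiled. A small sketch:

```csharp
public class MyClass
{
    // Compile-time constant: consumers embed "SomeValue" into their own IL.
    public const string SomeConst = "SomeValue";

    // Run-time field: consumers fetch the current value from this assembly,
    // so updating it only requires redeploying this one DLL.
    public static readonly string SomeReadonly = "SomeValue";

    // Property form, which additionally leaves room for logic later.
    public static string SomeProperty { get { return "SomeValue"; } }
}
```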
We know that programmers are pedantic (this question), especially toward non-programmers. But as a programmer, when you are trying to help others by answering questions on StackOverflow, what do you do to avoid being called out in comments, voted down, or engaged in a useless argument because of a minor oversight, or worse, something you purposely didn't address to prevent confusing the asker with too much information? **Example**: I recently had someone argue with me about my choice of escaping in a regular expression: I chose `(?<=\\=)` (a look-behind) when apparently `(?<==)` was all that was needed. I felt my answer was safer (I didn't have a good reference to immediately find out whether the escape was unnecessary or necessary in all regular expression variations) and I thought it was less confusing to someone new to regular expression look-aheads. Should I: * Be less thorough so that there are fewer possible points of contention in my answer? * Burden the asker with more information than they might need at that point in time? * Just ignore it? _**Note**: I didn't ask this question on meta.SO because I am not talking about the workings of SO; I'm interested in how to improve answers to programming questions in general, for everyone involved - the asker, and other experts who might also be answering or reading my answer._"} {"_id": "188634", "title": "Why are different testing methods such a contentious topic?", "text": "There are a lot of very smart and experienced people who claim that if you don't write unit tests then your code will be buggy and God will kill all kittens. On the other hand, another bunch of smart people will say that unit testing is counterproductive, so if you write unit tests then kittens will be damned too. Also, I have heard very good arguments on both sides for and against integration testing, and for and against TDD. Also, there are a lot of popular open source projects; some of them use automated testing and some of them do not. So, why are different testing methods such a contentious topic? _Update 1_ I gave a couple of examples of anti-unit-testing sentiment in the comments. However, just to be clear, there are A LOT of very successful open source projects which don't have unit tests. This implies that not everybody thinks that unit tests are useful. _Update 2_ As soon as we start discussing some particular piece of testing (for example mocking, which came up in the comments), we immediately try to distinguish between it being a core value or a means. However, testing by itself isn't a core value. It's just a way to decrease support cost, which, in its turn, is just a way to increase profitability. So, as I see it, unit tests are something like 4 connections away from profit. _Update 3_ It's interesting that the number of people in the "balanced" camp is quite small. Most people have really strong opinions on this topic."} {"_id": "185207", "title": "How can I build a C# project (installer) for multiple environments", "text": "I would like to propose a solution to our company's problem with building consistent installers for different environments. Our current process is to build an installer for test, perform testing, update the app config, then build an installer for production. Unfortunately this has led to issues in the past where the installer was not properly updated etc.
What can I suggest as a best practice to mitigate this problem?"} {"_id": "30895", "title": "Should I point out spelling/grammar related mistakes in someone's code?", "text": "While reviewing a coworker's code, I came across some spelling mistakes in function names and also grammatical errors like 'doesUserHasPermission()' instead of 'doesUserHavePermission()' in function and variable names. Should I point these out to him? Or am I being too pedantic by noticing these?"} {"_id": "181919", "title": "Shipping my first class library. Any gotchas I need to be aware of?", "text": "I'm a Web Developer about to unlock the "First Class Library Published" achievement in my career and I'm sweating bullets (literally - I was up all night stressing out). I'd love to tap the experience of the community to see if anyone has any suggestions or recommendations for making sure this goes as smoothly as possible. Are there any specifics or gotchas I need to be aware of? Anything special about the build process that can come back to bite me? Here's where I'm at: * Library is unit tested and has approx 97% code coverage * API is well documented and XML docs for IntelliSense support have been created * I've ensured that public/private class accessors are accurate and correct. The same goes for all getters/setters * Error handling isn't as graceful as I'd like it to be, but I'm up against a deadline and have accepted that it's as good as it's going to be for now * No friendly logging. Debug.WriteLine was used extensively... I've learned recently that this is a reflection of my inexperience :( Your advice is greatly appreciated! The library will be used to generate reports. Standard fare -- connects to a read-only database, performs calcs, formats and outputs data to the response stream. * * * I was tapped as a fringe resource to fill in for one of the programmers that quit, and this task was given to me as a "cut your teeth" project. The class library is going to be released for other programmers in the company to use while they write production code."} {"_id": "87826", "title": "Picking Core Language For Large Scale Web Platform", "text": "Now, I have worked with PHP and ASP.NET quite a bit and also played around with a few other languages for web development. I am now at a point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. When I say core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used, because my guess is that they will, but I want a large majority of the code (90%-98%) to be in 1 language. While I see the benefit of using the language that is best for the job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc\u2026 seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large-scale web platform that needs to support multiple applications from scratch, what would you choose as your core language and why?"} {"_id": "87825", "title": "Is a request for a code sample after a job offer common?", "text": "I was verbally offered a job and the manager insisted that I start the day after the following day; so, two days after the interview.
I left the interview unsure of the offer; the manager called me later that day and I agreed to take the position. At this point, I was told that I would get an offer letter the following day and would start the day after that. Later that evening I was asked for a code sample. I have yet to receive the offer letter. I've been mostly contracting and usually answer technical questions or show samples at the beginning of the process, so I find this situation somewhat odd. Is this a common practice?"} {"_id": "181912", "title": "Database History Table / Tracking Table", "text": "Currently I want to structure a tracking/history table like this: * PrimaryKey - ID * OtherTableId - fk * fieldName - name of the field it's tracking * OldValue * NewValue * UserName * CreateDateTime So basically I want to have a table that will track another table's history, storing the column name of the changed field with the new and old values. My question is: can anyone poke holes in this? Also, what is the easiest way to ensure that only a column name from the tables it's tracking is entered into the fieldName column? Currently my options are to have an enum in the service I'm building, or create another status table and make the fieldName an fk. Any better ideas? **Edit** Goal: There are currently only 2 fields that we care to track. One field will be shown on a web page to display history; the other field will only be accessed by one department, and they have access to a view of the database which they\u2019d be able to query. They\u2019d be querying just this one field to get information on who changed the field and what to. This is the reason we wanted to set it up so that a database field defines the table column, rather than having an exact copy of the table record history. We only want two fields tracked, with the possibility of adding or removing fields in the future. Thanks!"} {"_id": "156700", "title": "In Objective-C, can a value of type double/float only be NAN, INFINITY, or a normal number?", "text": "I know a double or float value can be not only a normal value (-1.3, 0, 1.0, 2.3) but also NAN or INFINITY in Objective-C. Are there other special values besides NAN and INFINITY for double/float values in Objective-C?"} {"_id": "156702", "title": "Being relevant and hireable outside of my domain.", "text": "I'm a recent college graduate and have been hired to work for a business software giant. The job by itself is great, with amazing perks and decent pay. Also there is the joy of working in core product development, building something which will be used by millions of people. However, a huge problem is that we do most of our development in a proprietary language which is not used in any other product development company. The only other companies using this language are our partners and customers who would like to implement/customize the software in their businesses or their clients' businesses. I'm afraid that working here for a length of time could make me irrelevant as a developer outside the company. While ideally one could say that software development is software development and the language does not matter, the fact is that most companies hire based on past experience, and in this context, my experience would be 0. My apprehensions are supported by the fact that the attrition rate in this company is much lower than at others, so you can find a lot of people who have not changed their job in the past 10-15 years. This is great from the company's perspective, and they take a lot of pride in this.
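For the history-table question above, the enum option mentioned there might look like the following sketch (names are invented; the alternative status-table/fk design would enforce the same whitelist in the schema instead):

```csharp
using System;

// Whitelist of columns the history table is allowed to track.
public enum TrackedField
{
    Status,
    AssignedTo
}

public class HistoryWriter
{
    public void RecordChange(int recordId, TrackedField field,
                             string oldValue, string newValue, string userName)
    {
        // Guards against values cast from arbitrary ints.
        if (!Enum.IsDefined(typeof(TrackedField), field))
            throw new ArgumentOutOfRangeException(nameof(field));

        // field.ToString() becomes the fieldName column value, so only
        // whitelisted names can reach the table through this code path.
        Insert(recordId, field.ToString(), oldValue, newValue, userName, DateTime.UtcNow);
    }

    private void Insert(int recordId, string fieldName, string oldValue,
                        string newValue, string userName, DateTime createdUtc)
    {
        // Parameterized INSERT into the history table goes here.
    }
}
```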
However, I'm sure that most people do not leave because they are tied down, since their skills are irrelevant outside of the company. I really love programming and want to remain a programmer in product development rather than becoming a manager. What should I do to be hireable at a new company if and when I choose to leave my current job? Thanks in advance."} {"_id": "180353", "title": "Why do so many namespaces start with com", "text": "I've noticed that a lot of companies use "reverse domain name" namespaces and I'm curious where that practice originated and why it continues. Does it merely continue because of rote practice, or is there an outstanding architecture concept I might be missing here? Also note questions such as: http://stackoverflow.com/questions/189209/do-you-really-use-your-reverse-domain-for-package-naming-in-java that sort of answer my question, but not 100%. (If it makes you feel any better, I'm really curious if I should be using it for my JavaScript namespacing efforts, but I'm more curious about the when and why, and that should help guide me on the JavaScript answer; nota bene: "window") Example of this practice extending to folders and files: ![http://imgur.com/jtdXo](http://i.imgur.com/jtdXo.jpg)"} {"_id": "196731", "title": "Looking for a freeware NoSql key-value database to offload a Java HashMap", "text": "I have a Java HashMap with a String key and POJO value in a long-running application, and it's taking up a large chunk of memory (over 500MB, and this number is expected to grow - I'm guesstimating that it will exceed 2GB in two to three months); this is used to memoize the results of an expensive calculation (typically 2-4 seconds, but up to 20 seconds), so I'd like to offload the HashMap to the hard drive rather than replace it with a [Soft/Weak]HashMap, with the expectation that external lookup will be less expensive than recalculation; I'd also like to make the map persistent in case the app crashes. My only experience with NoSql databases has been with DynamoDB, but I'd like a freeware database rather than trying to restrict myself to DynamoDB's free tier. * The app is written in Java, so I'll need a Java API for the database * The app runs on a single machine, with no expectation of migrating to a distributed architecture * I prefer the database be strongly consistent, but eventual consistency is acceptable * The machine has a traditional (non-SSD) hard drive * The map's keys are strings (length < 40), and its values are POJOs; if need be I can serialize the POJOs to strings with Jackson before persisting them, though I'd prefer the database handle this * The POJOs belong to several different subclasses with a common abstract parent class; all of the fields are in the parent class (the subclasses only add/override methods; any fields they add are transient) * There aren't any security requirements - the data I'd be storing doesn't need to be password-protected or anything * The values in the database won't expire (I'll take care of stale values in application code - if POJO.someProperty != someOtherProperty then I recalculate the POJO)"} {"_id": "72435", "title": "Why is Google blocking users from accessing their local file system in Chromium?", "text": "The question is based on this issue in Chromium. It is marked as `Won't Fix`.
Do you see any reason to block a local HTML file from accessing another local HTML file located in the same folder?"} {"_id": "140173", "title": "Should there be more scientific study of the effectiveness of various hyped-up ideas in software development?", "text": "Everyone seems to implicitly assume that the free market of ideas will eventually converge on the \"right\" solutions in software development. We don't assume that in medicine - we recognise that scientific experiments are needed there - so why should we assume it in software development? I am not arguing for regulation of programmers. It is far too early to even talk about that. Before healthcare could be effectively regulated, there was a need for scientific experiments to establish which treatments worked and which didn't. Software engineering doesn't even have this scientific evidence base to back up touted methodologies such as Scrum or Agile, or programming paradigms like functional programming or MDA. As (a) large software projects are responsible for many government project failures (with the UK government being a really good example) and (b) Agile and Lean are being used outside of software development, including in the public sector [of course, Lean originated outside of software development], this is increasingly politically relevant. Government project failures may be influenced by a failure to use a best practice, or even by using something that is considered by some people to be a best practice, but which actually makes things worse, or just costs money without really helping very much. The question is, why is this scientific evidence base (for all intents and purposes) nonexistent? There is a large open source community from which research participants could be drawn. My fear is that the closed-source and in-house software developers would treat with suspicion any research based on this community, fearing (perhaps rightly) that the results would not translate over. And companies that develop closed-source and in-house software would probably not be willing for their developers to participate in any scientific studies. For one thing, it would probably take time away from getting work done; for another, the results could be embarrassing to the company or to senior managers."} {"_id": "140175", "title": "How to create a code generation tool for Visual Studio", "text": "I'm working on a tool that will generate some C# code, which I hope could prove useful to a wider audience. This tool will read an assembly and a text file containing some data and then generate a bunch of C# classes. The easiest way to do this is of course to simply create a console exe that can be associated with the file type, in the same vein as the code generation tool for XSD files. But I fear that this will be a bit difficult to integrate. I'd much rather have something that feels better integrated. I'll still make the console app anyway, since it's practically a free byproduct and it might be useful for continuous integration. The assembly in question will typically be the **current** assembly that will be generated when the code is compiled, which I suspect normally also will be the assembly where you want the generated files to be hosted. Is there a way of making these kinds of tools? Am I looking at writing a Visual Studio extension just for this?
If the simple exe version is the way to go, is there any way to make it easier on the user by offering installation help along the way, to make it as easy as possible?"} {"_id": "39449", "title": "Dealing with awful estimates", "text": "A recent project I worked on was proven to be severely underestimated by the architect. The estimate was out by at least 500%. Unfortunately I was brought onto the project after the estimate had been signed off with the customer. As senior dev, I quickly realised that the functional and technical spec contained some huge gaps and uncertainties. As a result I felt compelled to call an emergency meeting with the business and technical directors to let them know the reality. As first and foremost a developer, I found this a very stressful and difficult situation. The \"business\" accused IT of being incompetent, and being the messenger, I received a few \"bullets\". The customer threatened to cancel the account; however, to date the project is still unfinished and I am no longer directly involved with it. The architect was a nice guy socially, but based on this episode was either simply incompetent, or there were large sales/business pressures influencing his estimate. So, as programmers, what is your experience of this sort of situation and how would you advise dealing with it?"} {"_id": "23453", "title": "How Do You Determine Your Hourly Rate?", "text": "In pricing a service or product it is general practice to not merely charge for the effort spent and the costs involved plus a margin, but to price down from the value delivered. As an independent consultant, how do you set the price of your work? What is the process of determining your hourly rate as an independent software developer? If you have one, what is your hourly rate and how did you arrive at that figure? What factors would you take into account? Is the process of setting an hourly rate only based on balancing demand with supply so that supply (you and your time) doesn't get overwhelmed? When are the good times to raise your rates? Are there projects in which you have had to charge higher than other projects? If so, please cite examples. Do you change your hourly rate based on what kind of development you are doing? (for example, more for .NET programming, and less for PHP programming) Do you set hourly rates based on what type of business the client is? If so, how? If you know of any relevant articles or material on the topic of charging for programming services, please post the same."} {"_id": "23455", "title": "Relevance of HTML5: Is now the time?", "text": "It seems like most of the jobs I'm receiving, and most of the Internet, is still using standard HTML (HTML 4, let's say) + CSS + JS. Does anyone have any vision on where HTML5 is as a standard, **particularly regarding acceptance and diffusion?** It's easy to find information about inconsistencies between implementations of HTML5 and so forth. What I want to know about is the relevance of HTML5."} {"_id": "109040", "title": "Why do interrupts need to be turned off when inside other interrupt code?", "text": "Simple question that will help me understand my OS class... thanks! Basically, why is it unsafe to have an interrupt within an interrupt? (or an exception within an exception)"} {"_id": "109041", "title": "Why is there an emacs vs. vim debate?", "text": "I'm new to the whole scene, so I was wondering why programmers tend to be really passionate about this debate? Thanks.
Also, are these referred to as \"text editors\"?"} {"_id": "92665", "title": "Is there an open source license for this?", "text": "I have written code at home, on my own time and using my own knowledge and equipment, while under no contract or NDA. I want to make this code open source so that I can use it in software I write for an employer, without denying myself the right to use it at home or elsewhere later. I'm not sure if saying it is in the \"public domain\" would fit this purpose, or if I need to find an open source license. I want anyone to be able to use the code in closed source proprietary software with zero requirements for including a license with the source or binary. And I want to minimize the risk of anyone being sued for using it. (I'm aware that one can never be 100% safe from being sued.) Is there an open source license that fits this purpose? To what extent is what I want to do even possible? I wouldn't mind putting the license in comments in the code files themselves, but that obviously doesn't go with the binary."} {"_id": "107288", "title": "Could spending time on Programmers.SE or Stack Overflow be a substitute for good programming books for a non-beginner?", "text": "Could spending time (and actively participating) on Programmers.SE and Stack Overflow help me improve my programming skills anywhere near as much as spending time reading a book like Code Complete 2 (which would otherwise be next on my reading list)? OK, maybe the answer to this question for someone who is beginning with programming might be a straight no, but I'd like to add that I'm asking this question in the context of a person who is familiar with programming languages but wants to improve his programming skills. I was reading this question on SO, and this book has also been recommended by many others (including Jeff and Joel). To be more specific, I'd also add that even though I do programming in C, Java, Python, etc., I'm still not happy with my coding skills, and reading the review of CC2 I realized I still need to improve a lot. So, basically I want to know what's the best way for me to improve my programming skills - spend more time on here/SO, or continue with CC2 and maybe come here as and when time permits?"} {"_id": "177230", "title": "systems/software engineering design process", "text": "I just developed my first non-trivial Android app. It was a complete nightmare. I came up with an idea, built the app, changed my idea, and implemented a lot of input from others on new features. All in all, my app took 10 times longer than I think it should have, it is almost impossible to look at the source code and tell what's going on with the classes, and it may or may not have unused methods that I'll never be able to find... So I would like an opinion from those of you with experience on how to plan out my designs in the future. I created a flow chart (pencil drawn) of a plan: ![flow chart of plan](http://i.stack.imgur.com/Nyi2U.jpg) I would like constructive criticism."} {"_id": "177234", "title": "How can I pull data from PeopleSoft on demand?", "text": "I work in IT at a university and I'm working with about 5 different departments to develop a new process for students to apply to a specific school within the university (not the university as a whole). We're using a web-based college application vendor and adding the applicant questions for the school itself to the main university application. Currently the main application feeds into PeopleSoft.
The IT staff here is building a new table to hold just our school's applicant data. I want to be able to access that data from PeopleSoft for use in external applications, but our IT staff doesn't really seem to understand what I'm requesting, as they simply tell me I can have access to the PS query tools. The problem is, I don't want to run just ad hoc queries; I want to be able to connect from outside PeopleSoft and show current data within the external app. I am unable to find documentation or get a clear answer to my question. Does PeopleSoft support access via a web services API or anything similar, and does that sound like the right direction for me to take?"} {"_id": "177235", "title": "How are blocking calls implemented?", "text": "This may be a very simple question. I'm curious how blocking calls are implemented. Specifically, how do they block? Is this just thread.sleep?"} {"_id": "74550", "title": "Google App Engine for RoR and Python apps", "text": "I fully understand that this Q+A site is a programmers' destination and questions on hosting are not permitted here, but anyone who has heard of Google App Engine is well aware that this question is suited for this site only. Google App Engine supports either a Java or a Python interpreter. I want to know what types of applications can be hosted on this engine. If my Python or RoR application needs a database behind it, will this engine support it? For RoR applications, which interpreter should I choose? What are the advantages of Google App Engine over a local IDE?"} {"_id": "74553", "title": "Demand of other frameworks in the market. Should a job-seeker go after them?", "text": "I have been reading about a variety of frameworks for web applications like OpenRasta, Django, CodeIgniter etc. It is a passion of developers to dip their hands into any new technology, but from the job-seeker's point of view, what is the significance of these new buzzes? Is it better for a job-seeker to stick with the standard frameworks like ASP.NET, Rails etc., or to devote time to other open-source frameworks as well?"} {"_id": "6121", "title": "Why am I never completely satisfied with my program?", "text": "For as long as I've programmed, I've never really been entirely satisfied with any projects that I've finished. Sure, they do what I've set out to do, but there has always been _something_ in the code that I, in retrospect, would have done differently but can't be bothered to refactor. Is this just me, or is it a common programming trait?"} {"_id": "205297", "title": "Where to set the model in this design (service-provider pattern)?", "text": "We are modelling an application using the \"Service-Provider\" pattern, where the service will offer a generic functionality implemented by different providers registered on the service. The responsibility of the service will be to choose the right provider based on certain conditions. The layers we currently have are the following ones: ![enter image description here](http://i.stack.imgur.com/QjZXP.png) What I don't like about this approach is that the client needs a reference to the providers to access the A definition. I think that normally dependencies should go only from one layer to the layer below, right? One solution could be duplicating the model on the service layer, but that would mean duplicating ...
and of course we should avoid that: ![enter image description here](http://i.stack.imgur.com/h4qlF.png) Another solution could be creating a package only for models, but this could be overkill and also breaks the top-down dependency chain: ![enter image description here](http://i.stack.imgur.com/k8P3K.png) I'm not happy with any of these solutions, but I currently do not have others. What do you think? Do you like any of them? Are there others that I have not thought about? Thanks in advance."} {"_id": "139545", "title": "Should we do entity-relationship modeling or object-oriented modeling first?", "text": "When building an application from scratch, should I start with the object-oriented (OO) model or the entity-relationship (ER) model?"} {"_id": "139541", "title": "Best approach for a Java web chat client", "text": "I am trying to add a chat to my web application written in Java. What would be the best solution in this matter? I have read something about JMS, but the pub/sub pattern doesn't seem to be designed for this kind of usage, and for the p2p pattern, I need to build a queue for each user (this doesn't sound right either). What would you suggest?"} {"_id": "131210", "title": "Interactive training site for Javascript complete with code challenges", "text": "A few months ago I discovered a cool course called Rails for Zombies. This is a great site that allows us to write code and see the results. It takes us through the paces to get us up to speed with Rails. You have to pass each level (including code challenges) before being taken to the next level, and it gets you grounded in the fundamentals of Rails. I'm wondering if an interactive tutorial site exists for JavaScript? One that will walk me through the paces of writing better JavaScript, and challenge me along the way."} {"_id": "240604", "title": "Specification languages vs automated tests", "text": "I recently listened to an episode of Software Engineering Radio in which Leslie Lamport was interviewed. One thing he discussed was his specification language, TLA+. Essentially, he seemed to be arguing that, for programs where correctness is very important, we need to think carefully and specify carefully before writing code, and TLA+ is meant to be a tool to do that. He said a team at Amazon has recently had success using it. Personally, I write executable tests for my code. I see the tests as a specification, which has the huge benefit of **proving** whether the code conforms to it. I assume that Mr. Lamport, being a brilliant and accomplished computer scientist, has long known about this, and still sees a need for his language. But why? Are formal specification languages and automated tests complementary approaches, or at odds? Do they lend themselves to different kinds of code?"} {"_id": "124015", "title": "Reverse Engineering PHP application without reading the (ugly) Code", "text": "I have this new customer that has this PHP app. It was written by a single developer who wanted to \"make yet another framework\" back in 2005. About 3 years later the developer left the company, and with him went all knowledge of what this thing is actually doing. Now, as the app had already gone into production, the manager just hired a few more developers / freelancers (who are not available anymore either) to fix bugs here and there and develop some more functionality. Some tried to follow the undocumented guidelines of the software, some did not. You might be able to imagine how the code looks today ... it's an utter mess!
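A minimal sketch for the blocking-calls question a little further above (How are blocking calls implemented?): rather than sleep-based polling, blocking calls are typically built on primitives that let the scheduler park the thread until it is woken, as Java's wait/notify does here:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // A blocking take(): the calling thread is parked inside wait() and
    // consumes no CPU until a producer calls put().
    public class BlockingBuffer<T> {
        private final Queue<T> items = new ArrayDeque<>();

        public synchronized void put(T item) {
            items.add(item);
            notifyAll(); // wake any thread blocked in take()
        }

        public synchronized T take() throws InterruptedException {
            while (items.isEmpty()) {
                wait(); // releases the lock and parks the thread
            }
            return items.remove();
        }
    }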
I talked to the manager, told him what I think of his software, and managed to get him to really think about rewriting the damn thing. But here comes my problem: To be able to estimate the effort needed to rewrite, I would need to know what the thing is doing. The manager can tell me from his perspective what it's doing, but there is just no technical knowledge about it. And as with all software that grew over years, there are these \"special edge cases\". Basically my idea is to \"record/log\" the live system for a few weeks to actually get a technical, somewhat complete conclusion of what this thing is doing most of the time and what the things are that rarely get touched/used. E.g. what was the request and what route did it take to render the results. Reading and trying to understand the code is impossible. It would help, though, to see which classes/functions are called and then read/understand them. So, is there any tool to log/record HTTP requests/responses and the call graph of the PHP app they triggered? Preferably something that would not have to get written into the code? I ditched PHP years ago and am too rusty with my PHP utility and standard library toolset to know of something that could help me here."} {"_id": "124017", "title": "Company wants to move its programs to a new framework / concept - what are options?", "text": "So there is an insurance company which wants to move, long term, its proprietary in-house software for different insurance products to a new platform / framework / concept / something. One of the applications is ~700 pages big. We would require some workflow application (IBM WebSphere products?). Transaction control, historisation, support for running batches, integration of different systems (basically our DB2 is on a host z/OS system). The front end is a web GUI. The business logic code is written in C++. We would like to move step by step to the new environment. What would be good options?"} {"_id": "247050", "title": "More idiomatic syntax for 2nd level vector value update", "text": "I'm pretty sure there has to be a more idiomatic way of writing this: (defn update-2nd-level-vector [map index value] (assoc map :key1 (assoc (get map :key1) :key2 (-> map (get-in [:key1 :key2]) (assoc index value))))) Example of its working: => (update-2nd-level-vector {:key1 {:key2 [0 1]}} 0 1) {:key1 {:key2 [1 1]}}"} {"_id": "121224", "title": "What are binaries?", "text": "I very often see people using the term **binaries** in different contexts. What are binaries? A collection of binary files, installation files, .dll files, or what? Or is it just a general term for some collection of files on disk?"} {"_id": "121225", "title": "What is the best design decision approach?", "text": "I have two classes (named `MyFoo1` and `MyFoo2`) that share some common functionality. So far, it does not seem like I need any polymorphic inheritance but, at this point, I am considering the following options: 1. Have the common functionality in a utility class. Both of these classes call these methods from that utility class. 2. Have an abstract class and implement common methods in that abstract class. Then, the `MyFoo1` and `MyFoo2` classes will derive from that abstract class. Any suggestion on what would be the best design decision?"} {"_id": "247058", "title": "OpenSSL Client model for half duplex communication over socket", "text": "I have read in this SO question that OpenSSL socket communication can be only half duplex in a single thread.
Assuming what I have read is true, I am wondering if I can apply the dining philosophers problem to sending with SSL_write() and receiving with SSL_read() on a non-blocking socket in a single thread that communicates with an OpenSSL TCP server. Both server and client are non-blocking. Would that be a good model? Or should I always give priority to SSL_read()? What would be the best approach to code this? I am using C++ to code this single-threaded non-blocking socket model (without Boost or other libraries)."} {"_id": "121221", "title": "When to use Constants vs. Config Files to maintain Configuration", "text": "I often fight with myself on whether to put certain keys in my web.config or in a Constants.cs class or something like this. For example, if I wanted to store application-specific keys for whatever the case may be... I could store it and grab it from my web.config via custom keys, or consume it by referencing a constant in my constants class. When would you want to use Constants over config keys? This question really applies to any language, I think."} {"_id": "121222", "title": "Is it possible to determine whether my web site is being accessed as a trusted site?", "text": "I am working on a site which has a lot of configuration and security settings, and I have to check whether the client's browser is in the trusted zone or not using JavaScript. Is it possible to determine whether my web site is being accessed as a trusted site? The reason I'd like to do this is that some functions won't work unless the site is being accessed as a trusted site, and I'd like to be able to warn users. Is there any solution?"} {"_id": "182263", "title": "Correlation between college grades and job performance?", "text": "In Facts and Fallacies of Software Engineering, at the end of fact 2, Robert Glass says: > The problem is—and of course there is a problem, since we are not acting on > this fact in our field—we don't know how to identify those \"best\" people. We > have struggled over the years with programmer aptitude tests and certified > data processor exams and the ACM self-assessment programs, and the bottom > line, after a lot of blood and sweat and perhaps even tears were spent on > them, was that the correlation between test scores and on-the-job > performance is nil. ( **You think that was disappointing? We also learned, > about that time, that computer science class grades and on-the-job > performance correlated abysmally also [Sackman 1968].** ) Is this fact supported by more recent papers?"} {"_id": "182262", "title": "For small-ish programs, should a single method handle most method calls to centralize program flow?", "text": "I'm fairly new to OOP (and programming in general), but what I find myself doing is that in the event that I don't need to pass a value from one method to another, I'll have my method calls centralized in some class within a Form.cs file. For example: public void CentralMethod() { MethodA(); MethodC(); MethodD(); } MethodA() will then do some stuff and call MethodB() directly, to which it passes a value. However, after MethodB() is done, control reverts back to CentralMethod() and then MethodC() is called, etc. I can see how this is a nightmare on a large project, but is this at all a habit that one should get into on a relatively small project?
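Stepping back to the constants-vs-config question above: a minimal sketch contrasting the two options, rendered in Java since that question says it applies to any language; the key name and file name are hypothetical:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public final class Settings {
        // Option 1: a constant -- fast to reference, but changing it
        // means recompiling and redeploying.
        public static final String API_BASE_URL = "https://example.com/api";

        // Option 2: an external config file -- changeable per environment
        // or deployment without touching the code.
        public static String fromConfig(String key) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("app.properties")) {
                props.load(in);
            }
            return props.getProperty(key);
        }
    }

A common rule of thumb, hedged accordingly: values that vary per deployment belong in config; values that are invariants of the code itself belong in constants.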
I didn't necessarily do this by design, but I found myself doing it when it didn't make logical sense to include some method calls within other methods."} {"_id": "182261", "title": "Can an entire software infrastructure stack be built entirely from source in-house? Is it practical and sustainable?", "text": "I'm a big fan of using open source software, but I mostly use community binary releases for the job. I'm wondering about companies that go the extra mile and build everything they use in their production server application infrastructure entirely from source, in-house, in an effort to promote transparency and supportability (instead of relying on binaries). Is it practical and sustainable to do this? It seems that the sheer number of dependencies to maintain and build from source for some of the major open source projects would quickly complicate things."} {"_id": "231254", "title": "Select custom output formats from database with SQL", "text": "I am planning to make a simple REST service application, and I am currently deciding the architecture. I have decided that I want to write the middle layer in multiple languages, so that it is easy to deploy/integrate with as wide a range of systems as possible. Due to the necessity of porting the code to multiple languages, I want to keep the middle layer logic as simple as possible, to alleviate the coding effort involved. I am planning to put as much code into SQL as possible. For example, traditionally, I might select some data from the database, then loop through the array of results, and process them further before sending back a well-formed response, like perhaps some HTML derived from the data. However, given I want the middle layer to be as simple as possible, I would prefer to do all the processing in SQL. I have prepared SQL statements that select from my results by concatenating and using group aggregates for repeating elements. It all seems to work fine, but I have a niggling doubt that I have left the reservation. I am concerned that I have not seen this kind of thing done, so perhaps there is a good reason I have not thought of. My question is... ### Is it a bad idea to write complex formatting logic in SQL instead of application code? In terms of... 1. performance 2. maintenance of code 3. limitations of language 4. anything else **n.b. I don't want to use PL/SQL or anything like that; I am only interested in using very basic SQL commands to achieve my formatting, as I want the database code to be portable between different databases too**"} {"_id": "173441", "title": "What triggered the popularity of lambda functions in modern mainstream programming languages?", "text": "In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct and almost every major / mainstream programming language has introduced them or is planning to introduce them in an upcoming revision of the standard. Yet, anonymous functions are a very old and very well-known concept in Mathematics and Computer Science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon?
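As a concrete illustration of the construct the lambda question above asks about (not of any one language's history), here is the same anonymous function written as a pre-Java-8 anonymous class and as a Java 8 lambda:

    import java.util.function.Function;

    public class LambdaDemo {
        public static void main(String[] args) {
            // Before Java 8: an anonymous inner class, verbose but anonymous.
            Function<Integer, Integer> oldStyle = new Function<Integer, Integer>() {
                @Override
                public Integer apply(Integer x) { return x * 2; }
            };
            // Java 8: the same anonymous function as a lambda expression.
            Function<Integer, Integer> lambda = x -> x * 2;

            System.out.println(oldStyle.apply(21)); // prints 42
            System.out.println(lambda.apply(21));   // prints 42
        }
    }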
**IMPORTANT NOTE** The focus of this question is the introduction of **anonymous functions** in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about \"the adoption of the functional paradigm and its benefits\", because this is not the topic of the question."} {"_id": "184636", "title": "What techniques or tools do you use to make your app simpler?", "text": "I've been programming a new iOS app for about 5 months, and I'm pretty much on the final stretch. My team and I agreed to make the app simpler once we were at this point of the development process. This question doesn't really apply to a whole team (where there are people who do design for a living), but is more from a programmer's mind. I tend to dig into every little detail of a personal project, and figure out a way to create a feature for it, as I'm sure most other programmers on here tend to do. While that's definitely not bad, it has to be controlled. So what I'm asking is what techniques and tools you all use to help yourself keep things simple. Or in other words, how do you keep your programmer-type mind from getting out of hand? :P"} {"_id": "184634", "title": "Git: rebasing on top of a refactor that affects the target rebase branch", "text": "Let's say I have a feature (topic) branch that I keep rebased on top of my development branch. During the course of the project (before merging in my feature branch), I decide that I need to make a large refactor to my project (for example, after updating a 3rd-party module). So I do this large refactoring on my development branch. Now I want to rebase my feature branch so I can take advantage of my refactoring. **The problem** There are still things in my feature branch that need to be refactored to match the changes I made in my development branch. Should I: 1. Go back through my feature branch's history trying to edit commits (with `rebase -i`) to make the branch look as though all of the work had been done after the refactoring. * The advantage to this is that it keeps the history clean. * The disadvantage is that this can take a whole lot of time if the changes made in the development branch cause a lot of changes to need to be made in the feature branch. 2. Fix the things that need to be refactored in my feature branch and make a commit for this. * The advantage here is that it will be much easier to identify and fix things that need to be fixed. * The disadvantage is that now the tree will have _two_ commits in it for the refactor: one for the refactor done on the entire development branch, and a smaller commit done on the feature branch; once merged in, it will look a little funny. Which strategy should I go with? * * * **An Example** Let's say on the development branch I rename `functionA` to `functionB`. Now in my feature branch I've never modified the file containing `functionA` (now `functionB` in development). So when I run `rebase development` while on my feature branch, it rebases cleanly. The problem is that if I've ever made a call to `functionA` in my feature branch, it's going to fail now, since it was renamed to `functionB`. Now, should I just do a find and replace for `functionA` -> `functionB` and make one commit on the feature branch (option 2)?
Or should I go back through my history, finding where I introduced the call to `functionA`, and rewrite the commits so it is introduced as calling `functionB` (option 1)?"} {"_id": "141225", "title": "Is it worth it to switch from home-grown remote command interface to using JMX", "text": "Without knowing too much about JMX, I've always assumed that it would be the best approach for building remote management into our standalone Java server application. Our server application has some minimal remote control capability, using text commands sent to it via a TCP/IP socket. Using the home-grown approach, it is fairly easy to add a new command. (Just create the new command text, and the code to handle that in the message receiver.) On the other hand, we have hardly implemented any commands, even though there are many things we would like to be able to execute remotely. I am trying to weigh the value of moving to incorporating JMX (learning it, and building the interfaces), versus just sticking with the home-grown approach. Does anyone have any experience or advice regarding changing an existing application to use JMX?"} {"_id": "141223", "title": "How do I balance program CPU reverse compatibility whilst still being able to use cutting edge features?", "text": "As I learn more about C and C++ I'm starting to wonder: how can a compiler use newer features of processors without limiting itself just to people with, for example, Intel Core i7's? Think about it: new processors come out every year with lots of new technologies. However, you can't target only them, since a significant portion of the market will not upgrade to the latest and greatest processors for a long time. I'm more or less wondering how this is handled in general by C and C++ devs and compilers. Do compilers make code similar to `if SSE is supported, do this using it, else do that using the slower way`, or do developers have to implement their algorithm twice, or what? More or less, how do you release software that takes advantage of newer processor technologies while still supporting the lowest common denominator?"} {"_id": "208186", "title": "What's the best way to scale and split an agile team building a web app?", "text": "I've recently joined a company where I'm working as the scrum master on an agile development project building a web app. The team is just about to be at the maximum size for an agile team (expecting 9 next week). We have talked about potentially splitting the team into two teams, not so much to shorten standups (which aren't excessive at the moment) but to stop people from being completely bored in sprint planning sessions (which again aren't excessively long). There are two very distinct layers to the project - highly technical backend dev (like, seriously complex), and UI design/build/integration. It seems that when the backend guys are talking technical, the UI guys zone out, and vice versa. Splitting seems like the logical way to go, if only to be more time efficient, but I have one massive reservation, in that all I might really be doing is reducing collaboration and knowledge sharing. The two teams just won't really have a good idea about what the rest of the team are building. Does anyone have any experience in dealing with something like this?"} {"_id": "238782", "title": "Why prefer non-static inner classes over static ones?", "text": "This question is about whether to make an inner class in Java `static` or not.
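A minimal sketch for the JMX question above: one remotely invokable operation exposed as a Standard MBean; the class and attribute names are hypothetical:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Standard MBean convention: the interface name is the implementing
    // class name plus the MBean suffix.
    public interface ServerControlMBean {
        int getActiveSessions(); // exposed as a read-only attribute
        void reloadConfig();     // exposed as a remotely invokable operation
    }

    class ServerControl implements ServerControlMBean {
        public int getActiveSessions() { return 42; } // placeholder value
        public void reloadConfig() { /* re-read configuration here */ }
    }

    class Bootstrap {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            mbs.registerMBean(new ServerControl(),
                    new ObjectName("com.example:type=ServerControl"));
            Thread.sleep(Long.MAX_VALUE); // keep the JVM up for jconsole/VisualVM
        }
    }

Once registered, the operation is browsable from jconsole with no custom wire protocol to maintain, which is the main trade-off against the home-grown text-command approach. And as a minimal sketch of the coupling difference the inner-class question just above asks about:

    public class Outer {
        private int value = 1;

        class Inner {                      // non-static: tied to an Outer instance
            int read() { return value; }   // implicit access to Outer's state
        }

        static class Nested {              // static: no hidden Outer reference
            int read(Outer o) { return o.value; } // the dependency is explicit
        }

        public static void main(String[] args) {
            Outer outer = new Outer();
            Inner inner = outer.new Inner();  // requires an enclosing instance
            Nested nested = new Nested();     // stands alone
            System.out.println(inner.read() + nested.read(outer)); // prints 2
        }
    }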
I searched around here and on Stack Overflow, but couldn't really find any questions regarding the design implications of this decision. The questions I found are asking about the difference between static and non-static inner classes, which is clear to me. However, I have not yet found a convincing reason to ever use a non-static inner class in Java - with the exception of anonymous classes, which I do not consider for this question. Here's my understanding of the effect of using static inner classes: * **Less coupling:** We generally get less coupling, as the class cannot directly access its outer class's attributes. Less coupling generally means better code quality, easier testing, refactoring, etc. * **Single `Class`:** The class loader need not take care of a new class each time we create an object of the outer class. We just get new objects for the same class over and over. For a non-static inner class, I generally find that people consider access to the outer class's attributes a pro. I beg to differ in this regard from a design point of view, as this direct access means we have high coupling, and if we ever want to extract the inner class into a separate top-level class, we can only do so after essentially turning it into a static inner class. So my question comes down to this: Am I wrong in assuming that the attribute access available to non-static inner classes leads to high coupling, hence to lower code quality, and am I wrong to infer from this that (non-anonymous) inner classes should generally be static? Or in other words: Is there a convincing reason why one would prefer a non-static inner class?"} {"_id": "195184", "title": "Stubbing and mocking boundaries", "text": "Suppose I'm building a JSON API and would like to include an endpoint that returns a list of recently created posts. Somewhere in my system is an existing Post model with the following test: * create a few posts in the db (some new, some old) * posts = Post.recent * assert that posts only contains new posts Now I want to test drive the JSON endpoint wrapping this functionality (the \"controller\"). The most obvious way to test this would be a full integration test like: * create a few posts in the db (some new, some old) * make a request to /posts * assert that only the new posts were returned in the response This seems to duplicate much of what is already being tested in the model and will start to slow down the test suite as more code paths need to be tested (e.g. auth, query string params, etc.). The alternatives... 1. Mock the model * mock Post.recent * make a request to /posts * assert that Post.recent was called 2. Stub the model * stub Post.recent * make a request to /posts * assert that the stubbed posts were returned in the response Either one of these options alone does not seem sufficient to fully test the controller, but when combined they seem fairly comprehensive. The piece that remains untested is the agreement between the model and controller about Post.recent. What if I decide to refactor my model and model tests so that Post.recent no longer exists... my tests will still pass even though the production system will fail. How does one address this? Are integration tests the only answer?
If so, why bother stubbing and mocking at all?"} {"_id": "135827", "title": "GPL copying copyright notices", "text": "The \"GPL How to\" has the following to say about applying copyright notices of code copied from other programs: > If you have copied code from other programs covered by the same license, > copy their copyright notices too. Put all the copyright notices together, > right near the top of each file. I did a _total_ refactoring of a C# library, which itself was a port from Visual Basic (VB) code. The original VB code is currently released under the Microsoft Public License (Ms-PL), but was originally under the GPL. The C# library is GPL, continuing work on the older VB library. Basically I only used the same 'technique' they used. I don't mind making the library GPL, so that's not the issue. (From this answer I gather my library might be considered a derivative work.) I do, however, find it cumbersome having to copy the previous copyright notices into every source file. I would rather only reference them in the README file. This question discusses the need to add notices to _every_ source file, but there's no consensus between the answers. Therefore I'd like to add the license to every file as a safety measure. Do I have to include copyright notices **of the projects on which my library was based** in all source files as well?"} {"_id": "135829", "title": "How DEP and ASLR play role in security?", "text": "Lines from CLR via C#: A managed module is a standard 32-bit Microsoft Windows portable executable (PE32) file or a standard 64-bit Windows portable executable (PE32+) file that requires the CLR to execute. By the way, managed assemblies always take advantage of Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) in Windows; these two features improve the security of your whole system. Question: I want to know what \"security\" means in this context, and what exactly DEP and ASLR do here?"} {"_id": "200409", "title": "Daily Scrum when the team is just fixing bugs", "text": "I understand all the advantages of the Daily Scrum, and my team does it when we are working on stories. But sometimes we just have bugs to fix for days while we're waiting for new stories, and when this happens we put the Daily Scrum aside. Some of the developers say that \"there's no need to do a Daily Scrum when we have nothing to discuss\". I don't know if a Daily Scrum is necessary when we don't have any new story to develop. Is there anything interesting we can do in a Daily Scrum when we just have bugs to fix?"} {"_id": "208232", "title": "Professional Development: Finding that \"pet project\" to work on - then managing other commitments", "text": "At the moment, like a fair few of you I'd imagine, I spend at least 40 hours a week working on projects that use a specific set of technologies. Sometimes I'm doing maintenance, and have those technologies enforced, and sometimes I'm doing new builds... but have the skillset of the team enforced due to maintenance reasons (quite rightly). Yet I've got a growing set of technologies I want to test the waters with, be they custom JavaScript frameworks or new mobile development techniques. I have no idea how to get to grips with them though! Take for instance the Chromium Embedded Framework; I've always been a fan of the UI of the GitHub app, and have also been impressed by both Evernote and Spotify. When I found out how those UIs were produced, I naturally tried thinking of a project that I could toy about with and produce over a weekend.
(Which, with minimal OS interaction (perhaps simple file manipulations), should be ample.) **But I can't think of a single thing to develop.** At work I've been tasked with retraining on Ruby on Rails; not wanting to go in blind, I decided to hammer a few books and try a few techniques. Now, aside from the usual Lynda.com examples, and the very good _\"Agile Web Development with Rails\"_ project from the book... **I can't think of a single thing to develop.** Ordinarily I'd consider contributing to some FOSS software, and with frameworks I use regularly I do actually have a few ideas; I belong to the relevant mailing lists - e.g. the Apache Cordova mailing list - and have signed the appropriate paperwork where required (e.g. for Apache-licensed projects). **So how do my fellow programmers manage doing that, and more importantly, how do you get ideas that allow you to use that specific framework/technology? It's hard to shoehorn ideas into specific technologies at times.** I fear this is half the battle, and the other **half is how to keep such a project going when work commitments can eat into your personal time**, and as it is, downtime is quite a valuable commodity!"} {"_id": "75607", "title": "Is saying \"JSON Object\" redundant?", "text": "If JSON stands for JavaScript Object Notation, then when you say JSON object, aren't you really saying \"JavaScript Object Notation Object\"? Would saying \"JSON string\" be more correct? Or would it be more correct to simply say JSON? (as in \"These two services pass JSON between themselves\".)"} {"_id": "75606", "title": "Advantages/Disadvantages of NFA over DFA and vice versa", "text": "What are the relative pros and cons of both DFAs and NFAs when compared to each other? I know that DFAs are easier to implement than NFAs and that NFAs are slower to arrive at the accept state than DFAs, but are there any other explicit, well-known advantages/disadvantages?"} {"_id": "208239", "title": "Clean, Modular Code vs MV* Frameworks", "text": "I've been hearing a lot about the \"new\" MV* frameworks. I've tinkered with KnockoutJS, creating an invoicing application, but I much prefer to write clean, modular code in raw JavaScript - leveraging utility APIs and other libraries when necessary. Given a methodical/structured/SOLID approach to writing a JavaScript application, where OOP, SOC, SRP and other design principles are adhered to, wouldn't the usage of MV* frameworks be superfluous? Are there any articles that express/address these concerns? I've found one in the past. _Note - I've migrated this question from SO to this site, as it's more appropriate for this audience._"} {"_id": "195181", "title": "Are animations and other eye candies considered non-functional requirements?", "text": "I've seen many lists on the internet that include many 'ities' (maintainability, scalability, portability, etc.), but I'm not sure if animations, screen transitions, and similar features are functional or non-functional requirements."} {"_id": "233548", "title": "Should We Use Surrogate Primary Keys for Every Table?", "text": "We are developing a data model for a marketing database that will import transaction, customer, inventory, etc. files, and the directive is ONE process that works for every client. We have been told every client will have different import layouts and different columns that identify a table's primary key.
Our initial idea is to get definitions from the client on what columns make each record unique, store those in a mapping table, and then have lookup tables that translate those primary keys to internal surrogate keys automatically for every destination table, so that every table conforms to an integer primary key no matter how many columns/types are used to make up the real pk. The first major problem I saw was when/if we have to map back to the lookup tables to get data that we did not store in the main data model, but I have been assured that anything we ever want to query will be duplicated in the lookup and main tables, so that should not be a concern. This kind of flexibility seems like it will cause some serious limitations on: 1. Technology stack (no way to dynamically map these import files in SSIS; we'd need a lot of dynamic SQL or Java/C#) 2. Scalability (based on the previous concern and initial testing, this would be difficult to scale without speed concerns) 3. Complexity (we are already running into some complex code changes when we try to implement all these tables with historical change logging while maintaining mapping to the lookup tables, for example) My question - is this feasible, or is there another obvious solution we are missing?"} {"_id": "138639", "title": "Does Service-Oriented Architecture require the robustness principle?", "text": "I am trying to migrate more and more of our IT infrastructure to a _Service-Oriented Architecture (SOA)_; that means separation of independent tasks and implementation of these tasks as decoupled services, simply accessible via HTTP. If you don't like the term SOA, just put in another - the basic idea is to put functionality in little modules and expose them by well-defined interfaces. It also means a lot of documentation and communication, because people tend to think in integrated systems. When I combine multiple services into a new component, I always take care to catch errors: if one service fails, the rest of the system should keep on running as well as possible. You probably know the Chaos Monkey, which I keep in mind. However, if other people use services, they tend to think in reliable parts. Does good SOA require the robustness principle? In short, if you use a service, you should not expect too much quality: be aware of all kinds of errors: the service response may not contain all information (missing fields), it may include additional, unknown parts, it may respond very slowly, or it may not work at all. Is this a property of loose coupling, or am I just too lazy to guarantee strict service quality? ;-)"} {"_id": "134199", "title": "In an MVC project, does the Information Architect define the models?", "text": "The definition of an Information Architect's responsibilities seems to fit neatly with the definition of 'model' in the MVC pattern. But I've never heard the two concepts discussed together. In a large project that has a dedicated Information Architect as well as a team of developers, does the Information Architect actually say \"Model A has properties x, y, and z and has foreign keys to Models B and C...\" etc., or do they just express the concepts more vaguely and let the development team translate this into MVC models? Just wondering what is the most common practice."} {"_id": "204525", "title": "Do Tech Firms use a lot of Algorithmic Practices that we studied in college?", "text": "I am yet to start my career, but I have a profound interest in algorithm design.
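A minimal sketch of the lookup idea in the surrogate-key question above, reduced to its core mapping; purely illustrative, since the real design would live in database tables rather than in memory:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SurrogateKeyMap {
        // A client-defined composite natural key (any number of column
        // values) mapped to one internal integer surrogate key.
        private final Map<List<String>, Integer> keys = new HashMap<>();
        private int nextKey = 1;

        public synchronized int surrogateFor(List<String> naturalKey) {
            return keys.computeIfAbsent(naturalKey, k -> nextKey++);
        }
    }

    // Usage: the same natural key always yields the same surrogate key.
    //   map.surrogateFor(List.of("client42", "INV-0001")) -> 1
    //   map.surrogateFor(List.of("client42", "INV-0001")) -> 1 again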
I somehow fail to understand how a tech firm would use the algorithmic paradigm. I mean, in real-life projects, do we actually end up using dynamic programming, greedy techniques, graph algorithms, etc.? Is there actually a future for people who are good at this subject? I understand that there are a wide variety of projects based on Java, HTML5, CSS, JavaScript, J2EE, etc. that never use algorithms. But are there companies out there using the algorithmic paradigm for software development/solving complex (business or other) problems, or is it just for clearing interviews? P.S. I understand that software development is also based on other subjects as well, for example design patterns."} {"_id": "204529", "title": "Multiple scrum teams moving to single backlog", "text": "For the past year we have had 5 scrum teams, each working off its own product backlog. Each team works on its own dedicated system, but the underlying technology is the same (.NET). There has been a lot of discussion on moving to feature-based teams working off a single backlog. The reason is that one of our main systems has a significant amount of work coming and there is not enough capacity to deliver all the work in the year. The other reason, which I believe is the significant one, is that it gives greater flexibility to adjust to changes in the portfolio rapidly. A decision has been made to change two teams to work on a single backlog, but the devs don't have experience on the other systems. One thing we're doing is cross-skilling by moving an experienced system developer over to the team. My question is, have you experienced moving to a single backlog for two or more different systems? What were your challenges? What did you need to do to get it to work?"} {"_id": "138633", "title": "Calculate Ellipse based on 4 points", "text": "I need to move an object based on 100 images rotating. The object needs to move in a path that forms an ellipse as I rotate the image based on my gestures. I have **4 points**, _2 pairs of opposite points on the X/Y axes_, on the ellipse, but how do I calculate the rest of the points in **code-behind** so that I can **calculate** the new **X/Y value** of my **next/previous point**? Currently I'm just rotating my object in a circle, which is a start but not what I want at all... ![enter image description here](http://i.stack.imgur.com/rhyj2.png)"} {"_id": "231876", "title": "In Android can we let a fragment know other fragments?", "text": "I think I have found contradictory design guidance within the Google Android documentation on fragments. The first statement below advises that each fragment be unaware of other fragments, always communicate only with the activity, and let the activity decide what needs to be done. > Creating event callbacks to the activity > > In some cases, you might need a fragment to share events with the activity. > A good way to do that is to define a callback interface inside the fragment > and require that the host activity implement it. When the activity receives > a callback through the interface, it can share the information with other > fragments in the layout as necessary. > > For example, if a news application has two fragments in an activity—one to > show a list of articles (fragment A) and another to display an article > (fragment B)—then fragment A must tell the activity when a list item is > selected so that it can tell fragment B to display the article.
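A sketch of the callback-interface pattern the quoted passage describes, following the usual Android idiom; the method signature is illustrative, and the documentation's own declaration is not reproduced in this excerpt:

    public class FragmentA extends android.app.Fragment {
        // The contract the host activity must implement.
        public interface OnArticleSelectedListener {
            void onArticleSelected(int position);
        }

        private OnArticleSelectedListener listener;

        @Override
        public void onAttach(android.app.Activity activity) {
            super.onAttach(activity);
            // Fails fast if the host activity forgot to implement the interface.
            listener = (OnArticleSelectedListener) activity;
        }

        void onListItemClicked(int position) {
            // Tell the activity, not fragment B, about the selection.
            listener.onArticleSelected(position);
        }
    }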
In this > case, the OnArticleSelectedListener interface is declared inside fragment A: But a second statement later on the same page, a list fragment directly instantiates detail fragment. if (mDualPane) { // We can display everything in-place with fragments, so update // the list to highlight the selected item and show the data. getListView().setItemChecked(index, true); // Check what fragment is currently shown, replace if needed. DetailsFragment details = (DetailsFragment) getFragmentManager().findFragmentById(R.id.details); if (details == null || details.getShownIndex() != index) { // Make new fragment to show this selection. details = DetailsFragment.newInstance(index); // Execute a transaction, replacing any existing fragment // with this one inside the frame. FragmentTransaction ft = getFragmentManager().beginTransaction(); if (index == 0) { ft.replace(R.id.details, details); } else { ft.replace(R.id.a_item, details); } ft.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_FADE); ft.commit(); So my questions: Are these contradicting each other? Should this be left to Activity? I feel confused, so any clarification is appreciated."} {"_id": "231871", "title": "Check some value between each function call", "text": "Can you recommend a nice way of checking a particular value between calls to a set of functions? E.g. something like this in Python (might not be terribly 'Pythonic'): self.error_code = 0 # this will be set by each function if an error has occurred self.function_list = [self.foo, self.bar, self.qux, self.wobble] ... def execute_all(self): while not self.error_code and self.function_list: func = self.function_list.pop() # get each function func(error_code) # call it if self.error_code: # do something The idea is that subsequent functions won't be called if `self.error_code` is set and all functions will be called in order if all is good. Anyone can think of a better alternative to that in Python or maybe a language-agnostic approach? I want to avoid having that code in each function call (e.g. `if self.error code: return`) so that I don't end up with the same check across all functions."} {"_id": "231870", "title": "Why are string resources generally kept external to the code and not inside the code?", "text": "Generally, on many platforms, I'm writing my string resources to a .resx or .xml file, and then I'm getting them using some platform-dependent approach. That is, on iOS, I'm getting them via `NSBundle.MainBundle`, and by using `Context.Resources` on Android. What are the advantages of this approach, and why not having it directly accessible in the code, so, for example: 1. In a cross-platform project, any platform can access it directly, with no integration. 2. There are no concerns during building about whether or not the resources built well. 3. The coder can use functionalities such as multilanguage handling Long story short: what is the reason why string resources are structured that way? [Edit] Let's say that my file is part of a \"core\" project shared between other project. (Think about a PCL, cross-platform project file structure.) And suppose that my file is just totally similar to a .resx/.xml file, looking like this (I'm not a pro in xml, sorry!): Parameters Param\u00e8tres So, this is basically a custom xml, where you point to the key/language to get the proper string. The file would be part of the application just like you add any accessible file inside an app, and the system to access the string resources, coded using PCL. 
Would this add an overhead to the applications?"} {"_id": "234000", "title": "ASP.NET/IIS runtime environment", "text": "I'm not too experienced with configuring IIS/ASP.NET. A client has asked me to extend the functionality of a custom web application I didn't develop. Unfortunately, they don't have the original Visual Studio project/source code. I'm trying to pull the application off of their production IIS server and configure it for development on my machine. I've already set up a new website instance in IIS on my machine and copied the code to its directory. When I access it (`http://localhost`), I'm currently receiving a null reference in the application. Obviously, this is a pretty generic error that can be caused by a bug, but I feel it's related to IIS's configuration. I've made sure both sites are using the same .NET version and they are also both running in integrated pipeline mode. Any ideas on what else I should look into? Production server: Server 2008 R2 (64-bit), IIS 7.5. Development machine: Win7 (64-bit), IIS 7.5."} {"_id": "70383", "title": "Contracting outside of full time contracting job", "text": "I'm a full-time consultant for a contracting company. Essentially, I'm just a full-time employee, and when my current contract is up, I roll to the 'bench' until they find me another contract. One of the independent contractors I'm colleagues with threw a job my way (he's overbooked currently) for a very small company of just a couple of employees, to do some maintenance on their website. My handbook states \"No individual shall provide consulting services for a fee in competition with [COMPANY]\". But this company isn't a client of my employer (and I doubt it ever would be, since they're so small). To boot, some of my co-employees at my company either talk on the DL about doing contracts independently, or have even formed companies to do outside contracting with outside clients. Is it safe to do outside contract work even if you're a full-time contractor for a consulting firm? (A lawyer question obviously, but I would love some feedback from anyone with experience.)"} {"_id": "149081", "title": "What sorting algorithm does STL use?", "text": "I recently started using the `<algorithm>` library and I was wondering, since all the operations are already implemented, **whether** the sorting method it uses is the most efficient one. Everything runs perfectly, there is no doubt, but should I worry about its performance? If I want to sort up to 6-7 million numbers, should I implement my own sorting method using quicksort?"} {"_id": "70387", "title": "Is there a name for the concept of a hierarchy of many short methods in a class", "text": "A refactoring I commonly do is where I come across a large method such as public void doSomething() { // do first thing doPartA1(); doPartA2(); // now something else doSomethingElse(); doMoreSomethingElse(); doEvenMore(); // finally do this stuff someStuff(); someMoreStuff(); } and use extract method refactoring to turn it into this: public void doSomething() { doFirstThing(); doSomethingElse(); doStuff(); } private void doFirstThing() { doPartA1(); doPartA2(); } ... I know the benefits of this are that duplication tends to be spotted more easily, comments are replaced by descriptive methods, and methods can be tested at a finer granularity. Also, in a large class it can be easier to isolate and group a selection of methods/fields as a candidate to extract to a new class.
But, I think most importantly it means that if I'm looking at doSomething() for the first time, I may only need to read 3 lines of code to know what it does instead of 7. If I don't fully understand doEvenMore(), I can choose to read the method and so on, working down through the class like a hierarchy. Effectively I start reading a short entry point method and only need to read the lower methods in any class when I need to drill down deeper. So, my question - is there a name for this concept in programming and what is the easiest, most concise way to explain or demonstrate it? I have sometimes found it difficult to explain to colleagues why it's good to split up large methods, even when these new methods are only called from one place. EDIT: I'll try to be clearer: I'm not asking about the concept of extracting methods, I'm asking about the principle that makes extracting methods the right choice in this case. E.g. if I had duplicated code in the original method I would extract a method because of the DRY principle. In the case above I don't, but it's still good to extract the methods because of the X principle. What is X?"} {"_id": "70386", "title": "Apache Wave API", "text": "I'd like to create an application based around Apache Wave. Where can I learn more about working with Apache Wave and/or the API for a locally running instance? At the moment, the extent of what I can find seems to be APIs for _Google Wave_ (running off Google's servers) and that it's written in Java."} {"_id": "101157", "title": "How do you annotate instantiation in UML class diagrams?", "text": "Given this pseudo code: class B { } class A : B { int val; }; alpha = new A(); What arrow do I draw between `alpha` and `A` in a UML class diagram? Is this even something UML is meant to do? +-------+ +-------+ +-------+ | alpha | --??-- | A |------|>| B | | | | | | | +-------+ +-------+ +-------+ In my case, I truly do want to express explicitly that `alpha` instantiates `A`. So if there is no way in UML, I'll just make up a way -- which is fine, but I want to know if UML can express it already. My situation is that I'm trying to explain OOP to a friend who is very visual. Therefore, I want to try to visually distinguish the two relationships (inheritance and instantiation) so that he can distinguish them mentally."} {"_id": "237153", "title": "What is the difference between C# and Visual C#?", "text": "So let's say I want to start learning C# so I can program Unity with it. I look for a well-reviewed book and it says \"Learn Visual C#!\". I ask myself, what is the difference between Visual C# and C#? If I know Visual C#, do I also know C#?"} {"_id": "185589", "title": "More comfortable working on the backend, often referred for role on the front end", "text": "I have applied for and have been approached about different roles in web development recently. The one thing that keeps coming up is that I am more suited to front end development than back end development. This makes sense given my background starting out as a designer. The problem is that while I have worked on a large app using ExtJS, I found designing and developing in ExtJS really frustrating. Oddly enough, it was working on this project that landed me in web development because my coding skills were recognized by some senior developers. I highlight that in my CV but I wonder if this might be a mistake. I also use JavaScript quite a bit outside of web development. Specifically in Photoshop and InDesign to create batch operations.
It's often assumed that I have expert knowledge in JavaScript, whereas I'm only just getting to grips with the OOP style of JavaScript. I use a lot of procedural code or I just use libraries like jQuery and Google Maps. I have created some experimental apps in Node and Knockout which I fortunately have enjoyed, although Node is back end. I used to avoid JavaScript and jQuery in certain web projects since I was focusing on SEO and would only use JavaScript if I really needed it. During the interviews, I'm asked questions about JavaScript and front-end development. But I really wanted to talk about PHP and the server side development, so I guess my background shows. How do I address this when I am either contacted for a role or applying directly, without selling myself short?"} {"_id": "185585", "title": "What is the advantage of currying?", "text": "I just learned about currying, and while I think I understand the concept, I'm not seeing any big advantage in using it. As a trivial example I use a function that adds two values (written in ML). The version without currying would be fun add(x, y) = x + y and would be called as add(3, 5) while the curried version is fun add x y = x + y (* short for val add = fn x => fn y => x + y *) and would be called as add 3 5 It seems to me to be just syntactic sugar that removes one set of parentheses from defining and calling the function. I've seen currying listed as one of the important features of functional languages, and I'm a bit underwhelmed by it at the moment. The concept of creating a chain of functions that each consume a single parameter, instead of a function that takes a tuple, seems rather complicated to use for a simple change of syntax. Is the slightly simpler syntax the only motivation for currying, or am I missing some other advantages that are not obvious in my very simple example? Is currying just syntactic sugar?"} {"_id": "176326", "title": "Anti Cloud Open Source License", "text": "I'm working on a browser based open source monitoring project that I want to be free to the community. What I'm worried about is someone taking the project, renaming it, deploying it in the cloud and starting to charge people who don't even know my project exists. I know I maybe shouldn't mind, but it just sticks in my throat a bit if someone took a free ride like that and contributed nothing back. Is there any common open source license that can prevent this? I know GPL or AGPL don't."} {"_id": "36034", "title": "What are the benefits of PHP?", "text": "Everybody knows that people have prejudices against certain programming languages. Especially PHP seems to suffer from problems of its past and some other things (like loose types) and is often called a non-serious programming language that should not be used for professional applications. In the specific case of PHP: How do you argue for using PHP as your chosen programming language for web applications? What are the benefits, where is PHP better than ColdFusion, Java, etc.?"} {"_id": "146790", "title": "Combine jquery/jqueryui/qtip/custom javascript into one minified file", "text": "I'm using jQuery, jQueryUi and qTip in my web application alongside custom JavaScript I've written. All three of the JavaScript libraries I mentioned above are licensed under the MIT license. From what I've read, I just need to include a copy of the MIT license for each of those libraries in my web application. Is it OK to put each license as a separate file underneath a \"licenses\" directory of my project?
So, I would have a licenses directory with \"jQuery-MIT-LICENSE.txt\", \"jQueryUi-MIT-LICENSE.txt\" and \"qTip-MIT-LICENSE.txt\" in it. Or, do I need to put the licenses right at the top of the JavaScript files? I'm confused about where exactly these licenses need to be put in my web application... What I'm ultimately looking to do is combine all of the JavaScript files into one minified version for faster loading, and I want to put the licenses in whatever is the proper place..."} {"_id": "122286", "title": "Increase Performance of VS 2010 by using a SSD", "text": "After searching on the internet for performance improvements when using Visual Studio 2010 with a solid state hard drive, I heard a lot of different opinions. A lot of people said that there isn't really a benefit when using an SSD, but in contrast others said the exact opposite. I am a bit confused by the contrasting opinions and I cannot really make a decision whether buying an SSD would make a difference. What are your experiences with this issue and which SSD did you use?"} {"_id": "18609", "title": "Geocoding for an application unable to show maps?", "text": "We have a situation where our users need to geocode a location (programmatically turn a description into latitude/longitude coordinates) from within an application unable to show a graphic image. Since the Yahoo Geocoding API and the Google Geocoding API both require that the result is used for showing maps (which is a graphic image) this means that we cannot use them immediately. Is there a Geocoding facility available - web service or locally installed (preferred if it does not require network access) - that can be used for such a scenario with a reasonable price model? We can use either Windows applications, Java libraries or remote web services in this particular scenario. Suggestions?"} {"_id": "122283", "title": "If you develop on multiple operating systems, is it better to have multiple computers + displays?", "text": "I develop for iOS and Linux. My preferred OS is Ubuntu. Now my software shop (me and a partner) is developing for Windows too. Now the question is, is it more efficient to have multiple workstations, one for each target OS? Efficiency and productivity is a higher priority than saving money. I have a 3.4Ghz i7 desktop workstation running Ubuntu and virtualized Windows with two displays, and I'm putting together an even more powerful i7 Hackintosh with 16GB RAM (to replace my weak 2.2Ghz i5 Macbook Pro). My specific dilemma is whether I should sell the first computer and triple boot on the second one, or buy two more displays and run both desktop systems simultaneously. Would appreciate answers from developers who write software for multiple OSes. Running guest OSes in VirtualBox on one system is not ideal, because in my experience performance is seriously degraded under virtualization. So the choice is between dual/triple booting on one system vs having two systems, one for OSX+iOS/Windows (dual boot) and the other for Ubuntu (which I prefer to use as my main OS). For much of our work, I write a server-side application in Linux and a client for iOS (or for Windows or OS X) simultaneously."} {"_id": "122281", "title": "How is the facade pattern different from abstraction layers?", "text": "**I just read about the facade pattern and found this example where a client (user of a computer) invokes a `startComputer()` method which calls on all the complex stuff:** _Source: wikipedia_ /* Complex parts */ class CPU { public void freeze() { ...
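// (the facade below calls freeze() first: the CPU is halted before the boot image is loaded into memory)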
} public void jump(long position) { ... } public void execute() { ... } } class Memory { public void load(long position, byte[] data) { ... } } class HardDrive { public byte[] read(long lba, int size) { ... } } /* Facade */ class Computer { private CPU cpu; private Memory memory; private HardDrive hardDrive; public Computer() { this.cpu = new CPU(); this.memory = new Memory(); this.hardDrive = new HardDrive(); } public void startComputer() { cpu.freeze(); memory.load(BOOT_ADDRESS, hardDrive.read(BOOT_SECTOR, SECTOR_SIZE)); cpu.jump(BOOT_ADDRESS); cpu.execute(); } } /* Client */ class You { public static void main(String[] args) { Computer facade = new Computer(); facade.startComputer(); } } Here we can see the client class (You) creating a new facade and calling the `startComputer()` method. This method instantiates several objects and invokes their methods, encapsulating it inside the facade class where it `'glues it together'`. ### Question 1: **But exactly how is this different from layers of abstraction?** Yesterday I made login functionality for a web application where a `doLogin($username, $password)` encapsulated a `'complex'` lower-layer method call that updated login info about the user, set sessions to log in with and instantiated several objects etc. to do this job. This layer would also call another even lower level which would deal with CRUD-like operations such as checking activation status, account existence and other things like processing strings through hashing algorithms. This method _(on model level, see 'layer overview' below)_ would return an array with either a `'success'` key or an `'error_message'` key, acting as a boolean of sorts to the layer calling it. This is what I understand is known as `'abstraction layers'`, hiding complex procedures from the higher-level architecture, but when I read about the facade pattern it seems like I have been using it all along? ### Question 2: **Would my approach to abstracting the login mechanism be a bad choice in a MVC architecture? (if not, please give examples why)** Taking into consideration that it should be very easy to decide where the end result goes (HTML, AJAX, etc.), wouldn't this be a wise choice to avoid a lot of IFs, as this can be dealt with on a controller layer? ### Layer overview: **Controller layer:** `$model->doLogin(POST VARIABLES HERE)` **Model layer:** (Is this a facade?) Sets sessions, updates login information in database etc., calls independent components for information: `$user_id = $user->getId();` and `$session->sessionExists();` **Independent class layer:** Each class on this layer is independent and has little to no coupling. After all, why would it? It's the layer above's job to build the application and lend an API to control it with. ### Question 3: **Is the facade pattern only used for routing calls to sub-classes in an API-like form, or perhaps just most commonly?** _With this I mean:_ Object A instantiates object B and lends a method to control object B's methods through a method of its own: class A { private $_objectB; public function __construct() { $this->_objectB = new B(); } public function callMethodB() { $this->_objectB->methodB(); } } class B { public function methodB() { die('Wuhu!'); } } // -------------------------- $objectA = new A(); $objectA->callMethodB(); // Wuhu! _instead of:_ $objectA->objectB->methodB(); // Wuhu!"} {"_id": "87003", "title": "Preventing RSI (Repetitive Strain Injuries)", "text": "I am 16 years old and I love programming and playing the piano. It's not uncommon that I'm bashing away on my mouse and keyboard all day long. I do not feel any pains doing so.
Yet I am still worried, because I often hear from people that they can never type for longer than 10 minutes again without getting severe pains. Given my two hobbies, programming and playing the piano, that worries me **a lot**. My current situation is this: * G15 keyboard and G5 mouse * A chair that looks like this (the back of the chair is surprisingly supportive): http://www.ikea.com/nl/nl/images/products/torbjorn-bureaustoel__0084333_PE210956_S4.JPG * In my \"normal sitting position\" the table is around the height of my bellybutton. * An LG Flatron L194wt screen (too small IMO, getting a new one soon) Should I be worrying about RSI/similar health issues? If yes, what can/should I do about it?"} {"_id": "87004", "title": "What to do with a 7 button mouse as a programmer?", "text": "I just got a new mouse with 7 buttons. Is there any way to use these extra buttons in eclipse or emacs? Is it possible to have a mouse button for compiling? Do you have any other ideas on how to use the extra buttons? I'm using Linux, but I'd be interested to hear from Windows or Mac users too, just to get some ideas."} {"_id": "246095", "title": "How to test for performance or locking issues of sharing a database with another app", "text": "I have a typical Spring/Hibernate Tomcat webapp that has a user_info table. I\u2019m trying to create another webapp that will do CRUD actions on the user_info table. I'm considering sharing the same database between the two apps. The apps will use C3P0 for database pooling. What is the best way to test the performance and any issues (e.g. locking) that could arise from sharing the db with another app?"} {"_id": "246094", "title": "Understanding the differences: traditional interpreter, JIT compiler, JIT interpreter and AOT compiler", "text": "I'm trying to understand the differences between a traditional interpreter, a JIT compiler, a JIT interpreter and an AOT compiler. An interpreter is just a machine (virtual or physical) that executes instructions in some computer language. In that sense, the JVM is an interpreter and physical CPUs are interpreters. Ahead-of-Time compilation simply means compiling the code to some language before executing (interpreting) it. However I'm not sure about the exact definitions of a JIT compiler and a JIT interpreter. According to a definition I read, JIT compilation is simply compiling the code _just_ before interpreting it. So basically, JIT compilation is AOT compilation, done right before execution (interpretation)? And a JIT interpreter is a program that contains both a JIT compiler and an interpreter, and compiles code (JITs it) just before it interprets it? Please clarify the differences."} {"_id": "203178", "title": "Starting android Development", "text": "I am considering learning android development. I have some basic knowledge in C++. I downloaded the ADT plugin and eclipse. Now while starting from http://developers.android.com I saw that the code was in XML. So I googled for learning XML. The best site I found was http://www.w3schools.org but there I found that for learning XML I have to learn HTML and CSS. So I learned the basics of HTML and CSS. But now I find that learning Java is a must. So can someone give me an idea about the sequence of languages that I should study now? Should I learn PHP and MySQL too?
BTW, I have a dream to work in Google :p"} {"_id": "252262", "title": "Should you ever use private on fields and methods in C#?", "text": "I am somewhat new to C# and just found out that: in C# all of the fields and methods in a class are private by default. Meaning that this: class MyClass { string myString; } is the same as: class MyClass { private string myString; } So because they are the same, should I ever use the keyword private on fields and methods? Many online code samples use the keyword private in their code, why is that?"} {"_id": "252265", "title": "Why is the JavaScript-language different in different programs/sites?", "text": "I'm kind of new to programming and I have a question that's been bothering me for a while. Why is the JavaScript-language different in different programs/sites? I've used Codecademy to practice and I've noticed it's different from Eclipse and Unity. For example, in Codecademy, you use: var blabla = \"something\" to declare a variable. In Eclipse though, you use: int x = 2, String x = \"hey\". Why is it like this? Thanks in advance."} {"_id": "252264", "title": "Choosing a class abstraction when multiple viable approaches exist", "text": "I'm having trouble trying to design a class structure for some search functionality. It's quite possible that I'm approaching this incorrectly altogether, but putting that aside I'm curious how other people would approach a situation as follows: We need search functionality that allows users to access parcel information: * People will be able to search via 3 different search methods -- MethodA, MethodB, MethodC. (To give some perspective on what I mean by \"method\", let's think of these as things like a one line search that will do some intelligent parsing, another method has specific form fields for entering each parameter, etc.) * People will be able to search via 3 different search types -- TypeA, TypeB, TypeC. (For example, search by owner, search by parcel address, etc.) The situation isn't terribly complex, but what I'm stuck on is whether I should design my classes around an abstract method object, or an abstract type object. Essentially, should I do this: public class ConcreteMethodA : BaseMethod { public ReturnDataType GetResultsTypeA(){}; public ReturnDataType GetResultsTypeB(){}; //Etc. } Or this: public class ConcreteTypeA : BaseType { public ReturnDataType GetResultsMethodA(){}; public ReturnDataType GetResultsMethodB(){}; //Etc. } If you do have some advice about how to solve my specific scenario, I will certainly appreciate it. However, I'm more interested in **heuristics** for approaching a design problem of this nature."} {"_id": "44079", "title": "When to learn the command line version of a programming tool?", "text": "Almost every programming tool has a command line version; many of them also have a GUI version. It takes a lot of time and memorization effort to learn the different commands and various options/switches of the command line version. So I have a couple of questions (which are not necessarily mutually exclusive): 1) When would you bother learning/memorizing the commands in the command line version of a tool which also comes in a GUI version? 2) What tools should I learn the command line version of? .... compilers? version control systems? etc, etc"} {"_id": "202807", "title": "A better approach to refactor code when doing A/B Testing", "text": "I'm asked to refactor my component code to support A/B Testing.
What is the better approach: 1) Pass a boolean value to the methods and check for that flag inside the method body? method(flag abTest): if abTest: // Do 'A' logic else: // Do 'B' logic 2) Create dedicated methods for each version? codeForATest() codeForBTest()"} {"_id": "155565", "title": "using static methods and classes", "text": "I know that static methods/variables are associated with the class and not the objects of the class, and are useful in situations when we need to keep count of, say, the number of objects of the class that were created. Non-static members on the other hand may need to work on the specific object (i.e. to use the variables initialized by the constructor). **My question is: what should we do when we need neither of the functionalities?** Say I just need a utility function that accepts value(s) and returns a value based solely on the values passed. I want to know whether such methods should be static or not. How is programming efficiency affected, and which is a better coding practice/convention and why? PS: I don't want to spark off a debate, I just want a subjective answer and/or references."} {"_id": "44076", "title": "What's the fundamental difference between the MIT and the Boost Open Source licenses?", "text": "What's the fundamental difference between the MIT open source licence: > Permission is hereby granted, free of charge, to any person obtaining a copy > of this software and associated documentation files (the \"Software\"), to > deal in the Software without restriction, including without limitation the > rights to use, copy, modify, merge, publish, distribute, sublicense, and/or > sell copies of the Software, and to permit persons to whom the Software is > furnished to do so, subject to the following conditions: > > The above copyright notice and this permission notice shall be included in > all copies or substantial portions of the Software. > > THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE > AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER > LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING > FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS > IN THE SOFTWARE. and the Boost Open Source license: > Permission is hereby granted, free of charge, to any person or organization > obtaining a copy of the software and accompanying documentation covered by > this license (the \"Software\") to use, reproduce, display, distribute, > execute, and transmit the Software, and to prepare derivative works of the > Software, and to permit third-parties to whom the Software is furnished to > do so, all subject to the following: > > The copyright notices in the Software and this entire statement, including > the above license grant, this restriction and the following disclaimer, must > be included in all copies of the Software, in whole or in part, and all > derivative works of the Software, unless such copies or derivative works are > solely in the form of machine-executable object code generated by a source > language processor. > > THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT.
IN NO EVENT > SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE > FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, > ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER > DEALINGS IN THE SOFTWARE. I'm willing to take an exception to the \"retain this copyright notice\" bit."} {"_id": "146246", "title": "How can I effectively discuss language limitations with a client?", "text": "I would like to know the ideas that are being used for explaining to the client the limitations of the language used for enhancement of the existing project. The scenario was that the project existed in VC++ 6.0 and the client had asked for an enhancement that would add a transparency (alpha) component to the colors (currently using `gdi`). I found out that the `gdiplus` library supports such a feature, but could not use it with MSVS 6.0, since MSVS 6.0 is already outdated (I could not even find the msdn files for it) and has no support for gdiplus. I had to explain this to my client, so I created an application in the newer MSVS and included both `gdi` and `gdiplus` and drew some objects using both libraries side by side. This gave him an idea that the point in question could be fixed if we migrated to the newer version of the code. Since the client is a developer himself, he understood it. But there are cases when the clients are not developers and have no knowledge about programming. In such cases it is difficult to explain the problem."} {"_id": "221442", "title": "What is the name of the following (anti) pattern? What are its advantages and disadvantages?", "text": "Over the last few months, I stumbled a few times over the following technique / pattern. However, I can't seem to find a specific name, nor am I 100% sure about all its advantages and disadvantages. The pattern goes as follows: Within a Java interface, a set of common methods is defined as usual. However, using an inner class, a default instance is leaked through the interface. public interface Vehicle { public void accelerate(); public void decelerate(); public static class Default { public static Vehicle getInstance() { return new Car(); // or use Spring to retrieve an instance } } } For me, it seems that the biggest advantage lies in the fact that a developer only needs to know about the interface and not its implementations, e.g. in case he quickly wants to create an instance. Vehicle someVehicle = Vehicle.Default.getInstance(); someVehicle.accelerate(); Furthermore, I have seen this technique being used together with Spring in order to dynamically provide instances depending on the configuration. In this regard, it also looks like this can help with modularization. Nevertheless, I can't shake the feeling that this is a misuse of the interface since it couples the interface with one of its implementations. _(Dependency inversion principle etc..)_ Could anybody please explain to me what this technique is called, as well as its advantages & disadvantages? **Update:** After some time for consideration, I rechecked and noticed that the following singleton version of the pattern was used far more often. In this version, a public static instance is exposed through the interface which is initialized only once (due to the field being final). In addition, the instance is almost always retrieved using Spring or a generic factory which decouples the interface from the implementation.
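// Singleton variant: Default.INSTANCE is created exactly once, when the nested Default class is first loaded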
public interface Vehicle { public void accelerate(); public void decelerate(); public static class Default { public static final Vehicle INSTANCE = getInstance(); private static Vehicle getInstance() { return new Car(); // or use Spring/factory here } } } // Which allows retrieving a singleton instance using... Vehicle someVehicle = Vehicle.Default.INSTANCE; _In a nutshell:_ it seems that this is a custom singleton/factory pattern, which basically allows exposing an instance or a singleton through its interface. With respect to the disadvantages, a few have been named in the answers & comments below. So far, the advantage seems to lie in its convenience."} {"_id": "213808", "title": "Why EJBContainer has no methods for managing transactions and other EJB low level services", "text": "I read that the EJBContainer is responsible for things like managing transactions, managing EJB lifecycle, managing bean pools etc. So I took a look at the source code, but there seem to be no methods for managing transactions, managing EJB lifecycle, or managing bean pools. So how is this taken care of?"} {"_id": "213805", "title": "Workflow versioning", "text": "I believe I have a fundamental misunderstanding when it comes to workflow engines which I would appreciate if you could help me sort out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using, or if it's a general misunderstanding. I happen to use Windows Workflow Foundation (WWF). **TLDR-version** WWF allows you to implement business processes in long-running workflows (think months or even years). When started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already started 'instances' of the business process? What am I missing? **Background** In WWF you define a workflow by combining a set of activities. There are different types of activities - some of them are for flow control, such as the IfElseActivity and the WhileActivity, while others allow you to perform actual tasks, such as the CodeActivity which allows you to run .NET code and the InvokeWebServiceActivity which allows you to call web services. The activities are combined into a workflow using a visual designer. You pretty much drag-and-drop activities from a toolbox to a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters and variables.
For us, this has had some **real-world effects**: * We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel, in other words several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard to understand for users. (depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently) * Our workflow refers to different types of entities, and since WWF has now persisted and serialized these, we can't really refactor the entities, since then existing workflows can't be resumed (deserialization will fail). We've received some **suggestions on how to handle this**: * When we create a new version of the workflow, cancel all running workflows and create new ones. But in our workflows there's a lot of manual work involved, and if we start from scratch _a lot_ of people have to re-do their work. * Track what has been done in the workflow, and when you create a new one, skip activities which have already been executed. I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out what activities to skip if there's major refactoring done to a workflow. * When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes and not larger changes. What am I missing?"} {"_id": "41750", "title": "Many user stories share the same technical tasks: what to do?", "text": "_A little introduction to my case:_ As part of a bigger product, my team is asked to realize a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a certain number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the _function call mechanism_ would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5.
* * * _(a small foot-note: following a suggestion I've moved this question from stackoverflow)_"} {"_id": "41751", "title": "Does my JavaScript look big in this?", "text": "As programmers, you have certain curtains to hide behind with your code. With PHP all of your code is preprocessed server side, so it never sees the light of day as far as the user is concerned. If you have maybe rushed through some code for a deadline, as long as it functions correctly then the user never needs to know how many expletives you've inserted into the comments. However, with more and more applications being written for the web, with a desktop feel implemented by AJAX and popular frameworks like jQuery being bandied around to every Tom, Dick and Harry, how can a programmer maintain some dignity and hide his/her JavaScript code without it being flaunted like dirty laundry when the users hit Right Click->View Source or Inspect Element? Are there any ways to hide JavaScript application logic/code?"} {"_id": "12971", "title": "Defining Rank names for employees", "text": "I want to create a list of ranks for the employees in my company. We are an open source integrator that usually works with existing solutions and also builds custom solutions. We don't want to give our employees the usual titles like senior consultant or trainee; we would like to give them ranks the way forums do for their users. Does anyone have any suggestions for this?"} {"_id": "218789", "title": "Use cases sequentially related, how to deal with this?", "text": "I'll express my doubt with an example. Suppose I have the following requirement: > _The customer needs a page in his website that displays all the products he > has, with a quantity field on the side of each product. The visitor of the > website must be able to fill in the quantities desired, select one location > and ask for the budgeting. After asking for the budgeting, the visitor must > be taken to a page with the summary of the products asked, together with the > prices and fields to fill in contact information to proceed with the > negotiation if desired_ To organize this I've decided on building two use cases: \"Require budgeting\" and \"Ask for products\". The first one is: > Title: Require budgeting > > Actor: Website visitor > > Scenario: > > * The visitor selects to see the products > > * While the visitor wants to add products to the budgeting > > * The visitor fills the desired quantity of a product in the > corresponding field > * The visitor selects his location to estimate shipping price > > * The visitor sends the requirement > > After that, the visitor will be redirected to the page with the summary of the products asked for and the fields to require the products. This is another use case, but there is a close connection with the first one, because it is next in the sequence. I didn't know exactly how to deal with this. So I thought of using a precondition: > Title: Ask for products > > Actor: Website visitor > > Precondition: the visitor has asked for a budgeting > > Scenario: > > * The visitor reviews the products and prices > > * If he wants to ask for the products > > * The visitor fills his contact information and sends it to the sales > department > * If not, he exits the page > > But I'm unsure this is the right way to do so; I think that \"extension\" was the right way to express that, but I think I didn't get this yet.
How should we proceed when use cases are closely related like that?"} {"_id": "123487", "title": "Using neo4j for managing project tasks and schedules?", "text": "I am considering using Neo4j to manage project tasks and schedules and compute critical paths. Obviously tasks and milestones would be nodes, and dependencies would be relationships. Are there any best practices, resources, or implementations to which I can refer?"} {"_id": "123484", "title": "What kind of documentation should UX designers provide to the developer?", "text": "It would be interesting to hear what is used besides sketches from UX engineers during development of a GUI. Unfortunately our UX team provides just a minimum of requirements for GUI screens. I believe that a better solution would be to use some tables with requirements and state machines, or maybe just some text description of GUI behavior? Is that something you use? I tried speaking to our UX team, but it looks like it's impossible to get anything besides graphic sketches from them, and I am wondering what others use. Our project is for iPhone currently, but it could be useful to hear any advice even if you develop for any other platform or even desktop or web. Thanks!"} {"_id": "218781", "title": "Is it correct to say that CSS is AOP?", "text": "With selectors being a counterpart to pointcuts, and rules pretty much doing the same as advice does, can we say that cascading style sheets are adhering to the aspect-oriented paradigm? And a corollary of this: \"every CSS coder is doing AOP\"? I found this similarity some time ago, but the brilliant post below reaffirmed my thoughts on this topic: Every time you use CSS, you\u2019re doing Aspect-Oriented Programming > When you write the following CSS statement, you\u2019re injecting the behavior of > the rules into your entire site, without touching the code that\u2019s already > there... > > Aspect-oriented programming is the most misunderstood programming paradigm. > But fundamentally, the reason it exists is the same reason CSS exists \u2014 to > better separate concerns in your code."} {"_id": "76560", "title": "All possible solutions to equation, where operators are arbitrary?", "text": "Given something like this: 1 5 3 4 = 18 I need to determine (using an algorithm) if there is a combination of operators and brackets that brings me to 18. Only \"+\" and \"*\" and \"(\" and \")\" are allowed. Example: 1 + 5 + ( 3 * 4 ) = 18 Besides brute force, is there any particular algorithm that is able to compute all the possible combinations in reasonable time? RPN may help in order to encode the possible solutions, but there are a lot of them (4^n?)."} {"_id": "76569", "title": "Tips for phone interviews", "text": "I'm about to do my first phone interview in a few days (I'm the one who is being interviewed), and am wondering about how to approach it (how to prepare, what to expect, and so on...). The job is for a software engineer. How do these things usually go? What would you advise for someone about to do a phone interview?"} {"_id": "123150", "title": "How to get a programmer to apply nice design touches?", "text": "We have a small webdesign firm with just one designer. We can currently not afford too much expansion in the design area, so that means that the designer creates the main design of a website (which often means: the homepage), and the programmers then work with this to make it into a website. In general, this works for us.
However, I have noticed a big difference between the abilities of different programmers to apply 'nice design touches' to a website. Given that we only have the homepage design, we often need to structure internal sections ourselves. Nothing major, no design program needed, but CSS skills come into play. So let's say we need to show a list of different sections in the website. I will have the content and assign this to two programmers. One will come up with this: ![enter image description here](http://i.stack.imgur.com/jw6Lr.png) And another will try his very best - and has the same CSS knowledge - but can only come up with something like this: ![enter image description here](http://i.stack.imgur.com/aVTiU.png) NOTE: sample image taken from MailChimp website. The second programmer is aware that his 'design' lacks style and that it needs improvement, but he is just not able to create something nice without having it all drawn up by somebody else. He wants to work on this, but I do not know what a good approach would be. Any tips? Anybody with similar experiences?"} {"_id": "86554", "title": "How do you track third-party software licenses?", "text": "How do you track licenses for third-party libraries that you use in your software? How did you vet the licenses? Sometimes licenses change or libraries switch licenses--how do you stay up to date? At the moment I've got an Excel spreadsheet with worksheets for third-party software, licenses, and the projects we use them on (organized like a relational database). It seems to work OK, but I think it will go out-of-date pretty quickly."} {"_id": "123159", "title": "What Are The Specific Meanings Of The Terms: Functions, Methods, Procedures, and Subroutines?", "text": "I'm wondering what the specific differences are in the terminology we use for grouping related parts of code. I've sometimes seen the terms used interchangeably: many OO languages even use the keyword \"function\" to define a method. (Why?) If you wanted to be precise, what are the specific meanings of each? Or is it just whatever each language chooses to call it?"} {"_id": "238394", "title": "What's the best way to handle errors in code?", "text": "So I'm a little concerned about my error handling... Currently my execution path looks something like this: > Users.aspx -> App_Code/User.cs -> Data Layer/User.cs So now when I try to update a user record, I put my Try/Catch block in the event handler and ensure that only the App_Code class interacts with the data layer. Exceptions that happen on the data layer, to my understanding, should bubble up to the event handler below. In the data layer, I started off with this: public void Update() { var product = (from p in db.products where p.productid == id select p).FirstOrDefault(); if (product != null) { // update the thing } } More info on reddit. After chatting with a friend, he recommended something like this: public void Update() { int count = db.users.Count(u => u.userid == id); if (count == 0) // no user found { throw new ValidationException(String.Format(\"User not found for id {0}.\", id)); } if (count > 1) // multiple users { throw new ValidationException(String.Format(\"Multiple users found for id {0}.\", id)); } var user = db.users.FirstOrDefault(u => u.userid == id); // update the user record } Then I went onto IRC where they suggested I create my own Exceptions. I can see the pros here, but it seems a bit unnecessary when my friend's option will work just fine.
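For reference, a minimal sketch of what such a custom exception might look like (the class name is hypothetical):

public class UserNotFoundException : Exception
{
    // carry the offending id so the event handler can build a friendly message
    public UserNotFoundException(int id)
        : base(String.Format(\"User not found for id {0}.\", id)) { }
}

The data layer would then do throw new UserNotFoundException(id); instead of throwing the generic ValidationException, and the event handler could catch that specific type.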
Basically I'm just really confused as to how I should handle this... Obviously my initial option is insufficient, but it seems like creating my own exceptions might be complicating things too much. So what should I do here?"} {"_id": "171156", "title": "Can higher-order functions in FP be interpreted as some kind of dependency injection?", "text": "According to this article, in object-oriented programming / design **dependency injection** involves * a dependent consumer, * a declaration of a component's dependencies, defined as interface contracts, * an injector that creates instances of classes that implement a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from `Data.List`. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller, e.g. the expression filter (\\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where * the `filter` function is the **dependent consumer**, * the signature `(a -> Bool)` of the function argument is the **interface contract**, * the expression that uses the higher-order function is the **injector** that, in this particular case, injects the implementation `(\\x -> (mod x 2) == 0)` of the contract. More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or in the inverse direction, can dependency injection be compared to using some kind of higher-order function?"} {"_id": "171155", "title": "Is Wordpress more appropriate than Magento/Opencart for a site like this?", "text": "The premise of the site is that a user pays a small fee to advertise an item that they want to sell. Therefore the user is responsible for adding the \"products\", not the administrator. The product upload will create a product page for that item. This is a rather common framework that I'm sure you're familiar with. My initial thought was that it would be best suited to Magento - mainly because it needs to accept payments - and the products will grow to form a catalog of categorized products. However - there is no concept of a shopping cart. A buyer does not buy the item online, or go to a checkout. They simply look at the product, and contact the seller if they like it. The buyer and seller then take it from there. For this reason, I then begin to suspect that Magento is perhaps overkill, or just simply not the right CMS if there is no checkout procedure (other than the uploader making a payment). So then I begin to think Wordpress....Hmmm Feature requirements: * Users can add content via a form process * Users can be directed to a payment gateway * For each product listing - a series of photographs shall be displayed, in thumbnail form. Zoom capabilities/rotate on the images would be a welcome feature. In short - e-commerce CMS, or something more simple?"} {"_id": "86754", "title": "Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project?", "text": "There is some code which is GPL or LGPL that I am considering using for an iPhone project.
If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone, would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere else other than the original project. Does the licence cover the specific implementation or the algorithm as well? EDIT------ Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that.) It is a fairly low LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of rereleasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't as it seems to have fired up the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how long after, or how different, does an implementation have to be to not be considered a violation of the licence? Murky! EDIT 2 -------- Follow up question"} {"_id": "125649", "title": "Non-OOP languages advantages and good uses", "text": "I'm a C# developer, but I also know Java, JavaScript, XSLT, a little of C and Perl, and some others that I may have forgotten. Still, the paradigm I'm most familiar with is OOP. I have always thought that OOP was the natural evolution of procedural programming, but I wondered if OOP is that perfect. After reading some articles on the web and some questions here, I found that many people don't agree with this, and some even say that OOP is a bad option. While developing, I really appreciate using lambdas, LINQ, anonymous types, and really enjoy JavaScript with its prototyping and dynamic nature. But still, I can't think of any scenario where OOP is not an option, or where other paradigms fit better. The only thing I can think of is that sometimes programming with OOP is really boring and slow, like having to declare a class, import some other classes and declare a method, specifying its parameters, return type and name, just to show \"Hello, World!\" on the console screen. But still, for real-life programs, it seems like something that compensates its cost. In what scenarios do other paradigms fit better than OOP? What are their advantages over OOP, and where does OOP make things worse instead of helping? Especially, what are the advantages of procedural and functional programming, and in what scenarios do they excel?"} {"_id": "212254", "title": "Optimized Special Character Escaper vs Matcher/Pattern", "text": "I need to escape special characters which are sent to Apache Lucene. Since the code will run on a production server, I want the code to be as fast as possible.
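Worth noting while weighing the options below: the Lucene jar itself ships a static escaper, so a third possibility is to reuse it rather than hand-roll one (the package location varies by version, e.g. org.apache.lucene.queryParser in 3.x, so treat this as a pointer rather than an exact API):

// reuse Lucene's own escaping of its query-syntax special characters
String escaped = QueryParser.escape(query);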
I've seen multiple ways to do it: Using Pattern Using Replace Using Library See: http://www.javalobby.org/java/forums/t86124.html I'm wondering: * For trivial cases such as this, regex or custom? * Can the code below be optimized further? /* * Lucene supports escaping special characters that are part of the * query syntax. The current list of special characters is + - && || ! * ( ) { } [ ] ^ \" ~ * ? : \\ * * To escape these characters use the \\ before the character. */ String query = \"http://This+*is||a&&test(whatever!!!!!!)\"; char[] queryCharArray = new char[query.length()*2]; char c; int length = query.length(); int currentIndex = 0; for (int i = 0; i < length; i++) { c = query.charAt(i); switch (c) { case ':': case '\\\\': case '?': case '+': case '-': case '!': case '(': case ')': case '{': case '}': case '[': case ']': case '^': case '\"': case '~': case '*': queryCharArray[currentIndex++] = '\\\\'; queryCharArray[currentIndex++] = c; break; case '&': case '|': if(i+1 < length && query.charAt(i+1) == c) { queryCharArray[currentIndex++] = '\\\\'; queryCharArray[currentIndex++] = c; queryCharArray[currentIndex++] = c; i++; } break; default: queryCharArray[currentIndex++] = c; } } query = new String(queryCharArray,0,currentIndex); System.out.println(\"TEST=\"+query);"} {"_id": "238398", "title": "Hide admin menu if no admin option is available", "text": "If you have a menu \"Admin tasks\" and different admin tasks (like 10) that you could separately assign to each user, but there are users who don't have any admin tasks, how would you deal with hiding the admin menu for those users? I was thinking of 3 ways: 1) JavaScript: check if the Admin menu is empty and then hide it. 2) Check all permissions in the Admin menu with a counter, and show it if counter > 0. And then also re-check the permissions for each item to show. 3) Save all permissions in an associative array. Test all and assign 'true' to granted items. When building the menu, have a function that tests if there is at least one permission granted. I wouldn't need to re-check permissions against the DB, just against the array for each item. Is there any better way?"} {"_id": "219786", "title": "Identifying how server is authenticating users", "text": "I'm trying to build a bot that will parse the list of classes offered by my university and let me know when the one I'm looking for is open. The problem is that in order to get to the registration/search box, I have to log in with my university username and password. I'm trying to figure out what protocol my school uses to authenticate me, so I can give my bot my credentials and let it log in for me so it can access the registration/search page. So how can I figure out what they are using, so I can work out how to implement it in whatever language I decide to use? I've gone through packet captures but all I can see is the SSL syn/ack, which I guess is the point of SSL haha. Can anyone recommend how to figure out what protocol my school uses to log in users?"} {"_id": "219781", "title": "Should a Vector2 extend a Vector3, or is it the opposite?", "text": "Perhaps the question might be tied to a theoretical or mathematical forum, but since it is for programming purposes, I ask here first: In a computer vision context, I write a couple of interfaces intended to be the \"read-only\" part of vectors. So I define \"IVector2R\" and \"IVector3R\" that only contain getters.
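A minimal sketch of the idea (the getter names and the double type are my assumptions):

public interface IVector2R { double getX(); double getY(); }

public interface IVector3R { double getX(); double getY(); double getZ(); }

One of the two would then extend the other instead of duplicating the shared getters.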
The question is: does IVector2R extend IVector3R (and the \"y getter\" always returns 0), or is it the opposite: IVector3R extends IVector2R? I would like a design as close as possible to mathematics/set theory... Thank you for your attention"} {"_id": "219780", "title": "How to refactor a Java singleton to Clojure?", "text": "I'm writing a simple game in Java and I want to learn Clojure, so I've decided to refactor my current Java code to Clojure. The problem is that I've coded so much in object-oriented languages that I cannot _see_ how to do it functionally. To be concrete, I have a `Map` inside a singleton class that can be accessed from anywhere to get a Country instance, update it, and put it back into the map. I implemented this the same way in Clojure. For example, to update a country: (def countries (do-get-countries)) (defn update-country [country] (def countries (assoc countries (get country :name) country))) Also I've created a `defrecord Country`, but I actually modify these records like (assoc country :name \"New name\") These two examples don't look idiomatic in my opinion. Is this actually the correct way to do it in Clojure? If not, how would it be more idiomatic? Thanks in advance!"} {"_id": "126501", "title": "What would be an appropriate algorithm to factorise numbers in the range of a few billion?", "text": "I'm learning Python at the moment, and to give me reasons to apply what I'm learning, I'm having a crack at some of the problems on Project Euler. I'm currently on number 3, which is to determine the highest prime factor of a given number. I've deduced I probably need two algorithms: one to determine primality, and a second which would involve finding factors of the number. So I've been reading up on Wiki articles, trying to determine what might be the best algorithm to use and how to go about it. But it's been a while since I've done some hardcore maths-based programming and I'm struggling to start somewhere. I was looking at using Fermat's factorization method with inclusion of trial division, but I don't want to make something too complicated; I'm not trying to crack RSA, I just want two algorithms suitable for my problem, and therein lies my question. What algorithms would you use for testing for primality / factoring a number that are suitable to the problem at hand? **Edit** Thank you all for your answers and insights, they have been most helpful. I upvoted all that were useful, either through advice or through their own Euler experiences. The one I marked as right was simply the most useful, as it gave me a proper place to start from, which was a push in the right direction. Thanks again =)"} {"_id": "229431", "title": "Should processing/filtering be performed client side or server side for catalog based apps", "text": "**Device targeting for product XML catalog** We currently have a webservice that outputs an XML of products based on GET parameters in the request. The webservice is consumed from a Windows Mobile application. In front of the webservice we have an HTTP accelerator/cache that caches the results for identical URLs. The business guys want a new feature to allow products to be targeted for specific device configurations. We consider that a device configuration can be made of parameters such as: * hardware model * firmware * geographic location * cellular provider * etc...
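To illustrate with a hypothetical request: today the cache can reuse GET /products.xml?category=games&page=2 across every device, but with targeting it would become GET /products.xml?category=games&page=2&deviceConfigId=8841, a different URL for almost every configuration.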
This may drastically kill the cache hit/miss ratio (efficiency) since we will be sending a param \"deviceConfigId\" which will be different for many devices, but will affect the list of applications outputted. We are talking 10,000 configs minimum. Our hit/miss ratio went from 75% to 40% after adding a few new features and filters via the URL one year ago. Except for using mechanisms such as Edge Server Includes ( http://stackoverflow.com/questions/5960598/varnish-and-esi-how-is-the- performance/9914643#9914643 ) , one of the ideas we are flirting with is to movie part of the filtering to the mobile devices. **Moving filtering client side** The mobile developers cringe because this may make their mobile devices less responsive. The client devices will need to download all product information(page by page as people scrolll down) but the device will need to filter out entries. Additionally, at the beginning of loading the device must download a list of rules applicable to the specific configuration in order to apply them on all future requests and the products listed in the XML. **Keeping all filtering on the backend** The backend developers cringe with the idea of adding the \"deviceConfigId\" to all the requests. This will require adding even more network infrustructure and resources. The problem can be solved by adding a better load balancer and adding more servers behind it (as well as moving to more distributed technologies later on). If we consider that user experience should be the highest priority and that slower/older devices should function as smooth as possible it seems we should use server side filtering for the product listings. However, newer devices are coming out and older devices are being thrown out continously. Forcing the clients to do some filtering but keeping load off the backend is quite tempting. Are there any other pros/cons, and more importantly solutions to such an issue? Thanks"} {"_id": "151038", "title": "when should a database table be broken into multiple tables with relations?", "text": "I have an application that needs to store client data, and part of that is some data about their employer as well. Assuming that a client can only have one employer, and that the chance of people having identical employer data is slim to none, which schema would make more sense to use? **Schema 1** Client Table: ------------------- id int name varchar(255), email varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16), employer_name varchar(255), employer_phone varchar(255), employer_address varchar(255), employer_city varchar(255), employer_state char(2), employer_zip varchar(16) * * * **Schema 2** Client Table ------------------ id int name varchar(255), email varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16), Employer Table --------------------- id int name varchar(255), phone varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16) patient_id int Part of me thinks that since are clearly two different 'objects' in the real world, seperating them out into two different tables makes sense. However, since a client will always have an employer, I'm also not seeing any real benefits to seperating them out, and it would make querying data about clients more complex. 
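To make the trade-off concrete, here is roughly what querying the two-table variant looks like, as a minimal sqlite3 sketch with trimmed column lists (note the schema above names the foreign key patient_id, presumably a slip for client_id):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE clients (
            id INTEGER PRIMARY KEY, name TEXT, email TEXT
        );
        CREATE TABLE employers (
            id INTEGER PRIMARY KEY, name TEXT, phone TEXT,
            client_id INTEGER REFERENCES clients(id)
        );
    """)
    conn.execute("INSERT INTO clients VALUES (1, 'Ann', 'ann@example.com')")
    conn.execute("INSERT INTO employers VALUES (1, 'Acme Co', '555-0100', 1)")

    # The join is the extra complexity the single-table design avoids.
    row = conn.execute("""
        SELECT c.name, e.name
        FROM clients c LEFT JOIN employers e ON e.client_id = c.id
        WHERE c.id = ?
    """, (1,)).fetchone()
    print(row)  # ('Ann', 'Acme Co')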
Is there any benefit / reason for creating two tables in a situation like this one instead of one?"} {"_id": "149555", "title": "Difference between immutable and const", "text": "I've often seen the terms `immutable` and `const` used interchangeably. However, from my (little) experience, the two differ a lot in the 'contract' they make in code: Immutable makes the contract that this object _will not_ change, whatsoever (e.g. Python tuples, Java strings). Const makes the contract that in the scope of this _variable_ it will not be modified (no promise whatsoever about what other threads might do to the object pointed to during this period, e.g. the C/C++ keyword). Obviously, the two are not equivalent, unless the language is single-threaded (PHP), or has either linear or uniquness typing system (Clean, Mercury, ATS). First, is my understanding of these two concepts correct? Second, if there is a difference, why are they almost exclusively used interchangeably?"} {"_id": "214677", "title": "Single Page Web Application and Requiring JavaScript", "text": "I'm under the impression, and I agree with it, that it's bad to create a web application _requiring_ JavaScript to function properly. What if one were to create a single page application? Is it possible to create one that doesn't rely on JavaScript, without being difficult, or full of \"hacks\"?"} {"_id": "127672", "title": "Is Javascript a Functional Programming Language", "text": "* Is Javascript a functional language? I know it has objects & you can do OOP with it also, but is it also a functional language, can it be used in that way? * You know how OOP became/seems like the next evolution in programming, does that mean that 'Functional Programming' is the next evolution(Note: this is NOT a prompt for opinion BUT a prompt for a factual evidence based answer, & this note is more for the moderators than the contributors ;) ). * I learn best through examples, maybe someone could show performing the same task in a OOP way & then in a Functional Programming way for myself to understand & compare what functional programming does/is. I don't really completely understand 'Functional Programming' to be honest :P So comparing Javascript to functional programming may be totally incorrect. To put Functional programming in laymans terms: is it simply the benefit of abstration THROUGH using anonymous functions? Or is that way too simple? In a simple way, OOP is the benefit of abstraction through objects, but I believe thats being a little too simplistic to describe OOP. Is this a good example of functional programming?... Javascript OOP Example: // sum some numbers function Number( v ) { this.val = v; } Number.prototype.add( /*Number*/ n2 ) { this.val += n2.val; } Functional programming example: function forEach(array, action) { for (var i = 0; i < array.length; i++) action(array[i]); } function add(array) { var i=0; forEach(array, function(n) { i += n; }); return i; } var res = add([1,9]);"} {"_id": "40401", "title": "Strategic and Tactical direction for IT teams", "text": "In their influential team leadership book, \"Peopleware\", DeMarco and Lister suggest that managers should provide \"strategic but not tactical direction\" for their IT teams. This is intriguing, but they don't go on to explain exactly what they mean! Any thoughts on what this intriguing idea looks like in practice? 
Is it a good or bad practice?"} {"_id": "65522", "title": "How can I optimally consume and re-syndicate a REST web service", "text": "I need to write an application which consumes a forum's content via a REST API and stores threads and posts. The application will act as a bridge layer between the forum and write data to a third application periodically - as close to 'real-time'as possible. The platform is PHP 5.3 / MySQL and probably Symfony with the Zend_Rest client. My question is what would be an appropriate / performant architecture be for the bridge layer? I imagine I will need to do an initial import of the forum data which will be slow (may take hours). The bridge application will also have a front-end for selectively adding the forum messages to the third application and adding further meta data e.g sentiment (was the message positive or negative in tone). I realise the data import / export could be done with procedural scripts and cron jobs but am wondering if there is a better way. Many thanks,"} {"_id": "244930", "title": "Why/how does Java use a controlled mechanism to pause threads for GC?", "text": "I know that Java uses a controlled mechanism to allow threads to be paused. If I understood correctly, they put a read from a protected page at the end of e.g. loops, and change the protection of that page if they want the thread to be paused. What I don't understand is why this is necessary and whether this can even work. The only reason I can think of to make this necessary is that you don't have to have all pointers on the heap, and you can also just have them in registers. However, if you'd use the TSS, wouldn't you have the same data too? However, what I find more interesting is how they handle system calls. What happens when the thread is doing a slow read of a file over e.g. a network? Does the GC way? Does the GC forgo a run and try again a moment later? What happens when the system call takes very long?"} {"_id": "244931", "title": "How can I find the start of a native method?", "text": "For a hobby project, I'm writing an x86 GC and JIT. For the GC, I need to maintain information about the stack layout (it's a precise GC), for which I need to be able to find out which method the IP currently is in (and the complete call chain of course). How can you do this? The best solution so far was to keep a b-tree of the start addresses of all jitted methods, and use that to look up the current method. However, this looks like a lot of overhead. An alternative would be to use the BSP to find the return address, go back a few bytes and see what address was called. I could then put some data before the entry point. However, that has the issue that the callee may not be a jitted method (there will be native methods on the stack). In that case, the data before the method would be garbage or may not even be valid memory (in some extreme corner cases). What is the usual mechanism of implementing this functionality?"} {"_id": "136900", "title": "Checking if a method returns false: assign result to temporary variable, or put method invocation directly in conditional?", "text": "Is it a good practice to call a method that returns true or false values in an if statement? 
Something like this:

    private void VerifyAccount()
    {
        if (!ValidateCredentials(txtUser.Text, txtPassword.Text))
        {
            MessageBox.Show(\"Invalid user name or password\");
        }
    }

    private bool ValidateCredentials(string userName, string password)
    {
        string existingPassword = GetUserPassword(userName);
        if (existingPassword == null) return false;
        var hasher = new Hasher { SaltSize = 16 };
        bool passwordsMatch = hasher.CompareStringToHash(password, existingPassword);
        return passwordsMatch;
    }

or is it better to store the result in a variable and then compare it, like this:

    bool validate = ValidateCredentials(txtUser.Text, txtPassword.Text);
    if (validate == false)
    {
        //Do something
    }

I am not only referring to .NET; I am referring to the question in all programming languages. It just so happens that I used .NET as an example."} {"_id": "28979", "title": "good books to get deep into PHP", "text": "I am a PHP developer with ~3 years experience. I want to get deeper into PHP and really understand more low-level constructs/functionality. I have mainly done frontend web dev and a few PHP cron jobs. I want to learn more about stream contexts, closures, process forking, etc. The book I've found that seems closest to this is Expert PHP and MySQL (Wrox). I also found Pro PHP. I know there has to be more out there but I can't seem to find anything. Suggestions?"} {"_id": "28978", "title": "Is anyone actively developing software to be used with the Emotiv headset?", "text": "Based on the \"featured apps\" section of the main page http://www.emotiv.com/index.php A lot of the so-called \"mind control\" apps so far seem to be fairly rudimentary read/scan apps that look to profile a certain activity, or perhaps allow the user to manipulate or control an app in a way loosely analogous to using a mouse/keyboard. I'm just wondering if anyone is currently working on something (assuming they can talk about it) that is a mind-blowingly spectacular use of the Emotiv headset? Thanks."} {"_id": "219436", "title": "Infrastructure-related additions in commit comments", "text": "For the sake of this question, let's agree on this definition of \"good commit message\" from here:

> Your comment should be brief and to the point, describing what was changed and possibly why.

and accept that referencing a bug/issue in a commit message is a good thing. Now, there are a lot of tools that integrate with VCS repositories and, for them to function, require commit messages to contain various \"control information\" which generally does not add value in the historical sense. Think: * Amount of time spent fixing an issue (`[Spent: 4h]`) * A /cc to notify other team members of the changeset (`/cc john, jill`) * A tag to the CI server to deploy the changeset to development/staging/production (`[deploy: production]`) * Setting a reviewer (`R: jill`) Granted, this is all very convenient for developers in the short term (why leave the console to push a button in the CI interface when I can add a short string to the commit message), but I find that these messages clutter the changelog and are way too infrastructure-specific (what if we change to a time tracker which uses a slightly different syntax?). What's your take on these messages, and where do you draw the line on what to accept and what to forbid?"} {"_id": "226178", "title": "Why is it bad to use redundancy with logical operators?", "text": "I'm moving over to work on a library that a fellow developer has been writing. It's full of `== true` and `== false`, which I find _crazy_ frustrating to read.
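The pattern being described, in a Python rendering (the function name is invented):

    def is_valid() -> bool:
        return True

    # The style in question: explicit comparison against a boolean literal.
    if is_valid() == True:
        print("carry on")

    # The same check without the redundancy; most linters flag the version
    # above precisely because the comparison adds nothing.
    if is_valid():
        print("carry on")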
I've tried asking him to quit doing it, but he just says that it makes it more straightforward to understand what is going on. Is there a good reason why this isn't a great practice? Or is it purely a stylistic thing?"} {"_id": "241218", "title": "Can I use access used by Visual Basic for building a database", "text": "I am the only programmer where I work (summer job) and I am a student with only a few years of programming experience. So I was asked to build a database and I am very excited about this project because hopefully I can learn a lot from this. Using this database my manager is supposed to be able to assign work (dealing with businesses) to different people within the company using an interface (all workers have a shared drive). When workers are done with that paperwork related to the business, they can check off that its done, add comments at the bottom of the interface, and then move on to the next business. The only experience I've had with databases is some querying with SQL, and I've built GUI interfaces with JAVA. The information on the interface will be populated from Excel so workers know what businesses they are dealing with. I've done some research and I believe the best way to build this would be building a GUI using Microsoft Visual Studio (Visual Basic) first, then figuring out a way to populate the Interface from Excel. Also because the data is pretty straight forward and not complicated I will be using MS Access to store and track the database. I know this won't be easy, but for all you geniuses out there, is this on the right path? Thanks."} {"_id": "206496", "title": "Display error message with jQuery without reloading page", "text": "I created login form. When user clicks on login button this form shows up with some fade effect (jQuery). What I want to do is to display error message in this form when user inputs invalid data. Before showing any messages, PHP must read data from database, therefore page must be reloaded and when page is reloaded this form fade away. How can I display error message in this login form without reloading page? (I have lots of code so if you need any part of code I will provide)"} {"_id": "206490", "title": "What to use in UML for included module in ruby?", "text": "I like to create simple class diagrams for my projects. Most of the time I just use composition, inheritence and associations. IMB's basic UML resource tells all about this. However I'm using ruby so I got the option to define a module with some methods. I can just include this module in every class where I want to use these methods. This could be interpreted as an inheritence, however some of my classes are already child classes of an other class and I don't want to mix the meaning of a symbol used in my diagrams. **How should I display an included module in my class diagrams in UML?**"} {"_id": "213377", "title": "How relevant are \"Requests per second\" benchmarks?", "text": "Whenever a new framework is released it is a given that someone somewhere will benchmark it against other available solutions. One interesting benchmark is the \"Requests per second\" benchmark. For example look it this benchmark: ![enter image description here](http://i.stack.imgur.com/Ea42f.png) Now AFAIK Zend framework and Symfony are 2 of the biggest frameworks out there with major companies supporting them. Did the developers make a mistake when designing the framework that resulted in that (relatively) low threshold? 
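One way to put hello-world numbers in perspective: they measure per-request framework overhead, which gets diluted as soon as real application work dominates. A back-of-envelope sketch (all figures invented):

    # Hello-world throughput of two hypothetical frameworks (requests/sec).
    benchmarks = {"lean framework": 2000.0, "heavy framework": 200.0}

    app_work = 0.050  # seconds of real work per request: queries, templates...

    for name, rps in benchmarks.items():
        overhead = 1.0 / rps                     # framework cost per request
        effective = 1.0 / (overhead + app_work)  # throughput with real work
        print(name, round(effective, 1))
    # lean framework 19.8
    # heavy framework 18.2

A tenfold gap in the raw benchmark shrinks to about 9% once each request does 50 ms of real work.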
If I'm planning to build a web site/app and expecting (relatively) high traffic, should I pay attention to this benchmark? Will my site/app surely go down at the presented figures? Simply I'm asking you as a software architect how you would strategically take into account this benchmark when planning a new project."} {"_id": "241214", "title": "Designing a social network with CQRS, graph databases and relational databases in mind", "text": "I have done quite an amount of research on the topic so far, but i couldn't come up with a conclusion to make up my mind. I am designing a social network and during my research i stumbled upon graph databases, i found neo4j pretty interesting for user relations and traversing through nodes. I also thought of using a relational database such as MS-SQL or MySQL to store entity data only and depending on neo4j for connections between entities. Of course this means more work in my application to store and pull data in and out of 2 different sources. My first question : Is using this approach (graph + relational) a good approach for designing my social network keeping in mind that users on social networks don't have to in synch with real data by split second ? What are the positives and negatives of this approach ? My Second question : I've been doing some reading on CQRS and as i understood it is mostly useful for collaborative environments, and environments where users see a lot of \"stale\" data. social networks has shared comments, events, etc .. and many users query or update the same data. Could CQRS be a helpful approach ? Would it give any performance/scalability benefits or non-useful complexity ? Is it fairly applicable with my possible choice of (graph + relational) databases approach mentioned in the question above ? My purpose is to know if the approaches i have mentioned above seem good enough for the business context."} {"_id": "81403", "title": "How can I become a good project manager?", "text": "I work in mobile development. I've just been promoted to a project manager. The problem is that I just have 4 months of experience in development. This was my first job and I didn't even finished school. I need some advice or maybe some software to help me manage my team and my projects."} {"_id": "13731", "title": "Will large screen increase develop productivity?", "text": "I am considering to buy a desktop computer, but could not determine which size of LCD should I buy. What's the size of your screen, will large LCD(30 inches+) will do good to develop?"} {"_id": "69955", "title": "What is Python 20th and final guideline", "text": "Sometimes when my life needs some guiding light, I read the **Zen of Python** by Tim Peters and usually manage to get back on the tracks. Today was one of those days, so I checked it again. I've noticed that the \"abstract\" section says: > Long time Pythoneer Tim Peters succinctly channels the BDFL's guiding > principles for Python's design into 20 aphorisms, only 19 of which have been > written down. What would be this last unwritten aphorism?"} {"_id": "174928", "title": "Need advice on framework design: how to make extending easy", "text": "I'm creating a framework/library for a rather specific use-case (data type). It uses diverse spring components, including spring-data. The library has a set of entity classes properly set up and according service and dao layers. The main work or main benefit of the framework lies in the dao and service layer. 
Developers using the framework should be able to extend my entity classes to add additional fields they require. Therefore I made dao and service layer generic so it can be used by such extended entity classes. I now face an issue in the IO part of the framework. It must be able to import the according \"special data type\" into the database. In this part I need to create a new entity instance and hence need the actual class used. My current solution is to configure in spring a bean of the actual class used. The problem with this is that an application using the framework could only use 1 implementation of the entity (the original one from me or exactly 1 subclass but not 2 different classes of the same hierarchy. I'm looking for suggestions / designs for solving this issue. Any ideas? EDIT: only idea I have is to add a parameter to the affected methods that takes a Class object. That would be the easy and pragmatic solution but it seems very ugly?"} {"_id": "43279", "title": "What features are missing from Python IDE tools?", "text": "What are the most desired features currently lacking in any Python IDE tools? I'm also interested in what's missing in Komodo 6 but available in other tools (I currently use Komodo 6 for Python 3 under Windows). [I am asking for this to be made community wiki if appropriate.]"} {"_id": "96371", "title": "What is the \"internal syntax\" of a programming language?", "text": "I was reading a paper (PDF) that introduced a term, _internal syntax_ , that I've never heard of. What does this term mean, and how is it different from the _source syntax_? It looks that the _internal syntax_ is used to formally prove some properties about the language (at least in this paper). I'd like to know the exact definition and the background behind this syntax."} {"_id": "201728", "title": "Shall we always use IoC in our designs?", "text": "I was studying _Mediator Pattern_ and I noticed that to use this pattern you should register the _Colleagues_ into _Mediator_ from the _Colleague_ concrete classes. for that we have to make an instance of _Mediator_ inside _Colleague_ concrete classes which violates IoC and you can not inject the _Colleagues_ into _Mediator_ (as far as I know! whether it is right or wrong) Questions: 1- Am I right about the thing I said? 2- Shall we always use IoC at all or there are some times you can forget about it? 3- If we always have to use IoC, can we say _Mediator_ is an anti-Pattern?"} {"_id": "96373", "title": "Handling Deprecated Methods", "text": "As many of you already know Apple released a new OS last week so I installed it on my system to see if the project I'm working on works. Well, absolutely nothing works! There are a lot of deprecated methods so I would like some advice on how to tackle updating your code so it is compatible with the new platform."} {"_id": "201726", "title": "How to ask a programmer a question without getting a solution as the answer", "text": "We've all had the experience. You go to someone who you know has the answer to a question, ask that person the question and they answer with the typical response \"why?\". You explain why you need to know, and they attempt to solve your problem. It takes time, arm twisting and patience to steer the conversation back to the original question and just get that darn answer. Why do programmers constantly do this, and why does the behavior get worse the more senior the programmer becomes? 
How can you ask a programmer a question in a way most efficient in extracting the answer to the original question? **EDIT** : A lot of the comments pertain to explain why the developer behaves this way, and recommend that the asker perform more research before asking the question. There is also the situation where the developer wants to advise the developer to take another path, but I want to avoid explaining or justifying my decisions. They are unrelated to the question, but the other developer wants to make it related. This is not an answer to the above question. The question is specifically how does one engage with another programmer to ask a question, where the other has the answer and skip the debate about why the question is being asked."} {"_id": "201724", "title": "Is it normal that I can't keep in my head more than three bugs assigned to me, nor can I understand a thousand lines of spaghetti code?", "text": "I'm working on an old codebase which is... _not perfect_ , in an environment which isn't either. It's not the worst codebase I've seen in my life, but there are still lots of issues: zero unit tests; methods with thousand+ lines of code; misunderstanding of basic object oriented principles; etc. It hurts to maintain the code. 1. Every time I have to debug a thousand lines of a badly written method with variables reused all over, I'm totally lost. 2. Some modifications or refactoring I've done introduced bugs in other places of the application. 3. Lacking any documentation, tests, or an observable architecture and combined with badly named methods, I feel that I fill up all of my available working memory. There is no room left over for all the other things I have to remember in order to understand the code I should modify. 4. Constant interruptions at the workplace disturb me and slow me down. 5. I can't remember more than two or three tasks at a time without a bug tracking system, and I forget all of them over the weekend. My colleagues don't seem to have similar issues. 1. They manage to debug badly written methods much faster than me. 2. They introduce fewer bugs than I do when changing the codebase. 3. They seem to remember very well all they need to in order to change the code, even when it requires reading thousands of lines of code in twenty different files. 4. They don't appear to be disturbed by emails, ringing phones, people talking all around, and other people asking them questions. 5. They don't want to use the bug tracking system that we already have since we use TFS. They prefer to just remember every task they should do. Why does this happen? Is it a particular skill developers acquire when working with badly written code for a long time? Does my relative lack of experience with bad code contribute to these problems / feelings? Do I have issues with my memory?"} {"_id": "223353", "title": "Python: How to decide which class' methods should provide behavior (functionality) affecting multiple classes", "text": "I have a question about object oriented design that is not specific to Python but since my code is in Python, I tagged it as such. How do I decide which of my classes should be responsible for implementing certain functionality that may affect more than one class? What are some good guidelines >(if any) or considerations? In the below example of a _Town_ class, the behavior of moving to/from a town is split up among the _Town_ class and _Inhabitant_ class. The code works. 
But I am wondering if there are compelling reasons to assign this behavior to only one class, e.g. allow the _Town_ method _addpeople_ to update the _Inhabitant_ attribute _town_ so it's possible to track each inhabitant's place of residence. Just trying to get a sense of what would be good practice or smart object-oriented design.

    class Town():
        def __init__(self, name):
            self.name = name
            self.inhab = []

        def addpeople(self, person):
            self.inhab.append(person)

        def removepeople(self, person):
            self.inhab.remove(person)

    class Inhabitant():
        def __init__(self, name):
            self.name = name
            self.town = \"\"

        def move(self, town):
            if self.town:
                print self.name, \"is moving from\", self.town.name, \"to\", town.name
                self.town.removepeople(self)
            else:
                print self.name, \"is moving to\", town.name
            self.town = town
            town.addpeople(self)

"} {"_id": "170202", "title": "Pair programming and unit testing", "text": "My team follows the Scrum development cycle. We have received feedback that our unit testing coverage is not very good. A team member is suggesting the addition of an external testing team to assist the core team, but I feel this will backfire in a bad way. I am thinking of suggesting a pair-programming approach. I have a feeling that this should help the code be more \"test-worthy\" and soon the team can move to test-driven development! What are the potential problems that might arise out of pair programming?"} {"_id": "19158", "title": "Enterprise knowledge sharing?", "text": "I recently read this article on knowledge sharing and immediately recognized the same problem within my own organization. My main goal now is to 'kill peer-to-peer collaboration' as the default method of communication for non-private, system-related discussions. Otherwise you end up with all of the historical knowledge living in the heads of individuals, or lost in a massive email system. My question for the group is as follows: * What methods / software have you used to encourage more 'public' discussions among your developers? Some initial ideas I had; any feedback would be great: * Internal news group * 'better' wiki software (using SharePoint now) * Message board (I would love to have an internal instance of StackExchange, but don't think that is an option!) **Note:** As stated above, we already have a wiki, but I dislike the wiki idea because things are usually only added to the wiki after the fact, _if at all_. Thanks!"} {"_id": "223351", "title": "nodejs chaining with async", "text": "I'm trying to chain a series of methods that are async. I have heard of promises and futures, but what I'm looking for is:

    obj.setup()
       .do_something()
       .do_another_thing()
       .end()

and not:

    obj.setup()
       .then(...)
       .then(....)

I have come across a tutorial that explains how to do this, but unfortunately the penny hasn't dropped: http://www.dustindiaz.com/async-method-queues/ So I'm looking for a module. This module, https://github.com/FuturesJS/FuturesJS , seems to have `chainify` in its API but there's no documentation on how to use it. In fact I can't find any documentation on how to use modules to achieve what I'm looking for, but plenty on using promises to get `then().then().then()`, which is not what I need. Currently my module looks like:

    var obj = function(){
        this.async_method = function(){}
        this.async_method_thingy = function(){}
        this.async_method_foo = function(){}
    }

    var o = new obj()

    var _ = {
        \"setup\" : function(){
            ...
            return this
        },
        \"do_something\" : function(){
            o.async_method()
            return this
        },
        \"do_another_thing\" : function(){
            o.async_method_thingy()
            return this
        },
        \"end\" : function(){
            o.async_method_foo()
            return this
        }
    }

    module.exports = _

Any help is appreciated ;)"} {"_id": "116412", "title": "What can and can't the Garbage Collector do?", "text": "Will the GC take care of all memory management issues (memory leaks)? Is there any case where you don't want the GC to take control of some part of your code?"} {"_id": "52267", "title": "Why should i write a commit message?", "text": "Why should I write a commit message? I don't want to, and I think it's stupid every single time. A GUI frontend I use, which will go unnamed, forces you to do it. I hear of others doing it every time even if they are using the VCS on the command line. If I commit several times a day and haven't finished a feature, what am I writing about? I ONLY ever write a message after many commits, when I feel it's time for a mini tag, or when I do an actual tag. Am I right, or am I missing something? Also, I am using a distributed system"} {"_id": "68726", "title": "Should you create a boolean function that does the opposite of an existing function just so its purpose is clear?", "text": "Based on this question: would you consider it best practice to create a function that does the opposite of an existing function just to give it a different name? Example: If you already have bool In(string input, string[] set), which returns true if the array set contains the string input and false otherwise, should you create a function like bool NotIn(string input, string[] set), which returns false if the string is in the set and true otherwise?"} {"_id": "245980", "title": "What's the best way to store options with multiple/boolean choices in an Android app?", "text": "I'm working on an Android social app which connects to a Postgresql database for up-to-date user data and is going to use lookups for dropdown menus; these lookups will either have multiple options (e.g. for eye color, the menu would show 'blue', 'brown', 'green', 'hazel', 'red', 'white') or boolean options (e.g. for smoking, the menu would show 'non-smoker' or 'smoker'). Should I store these within the app? They're not really high sensitivity/confidentiality. I was originally thinking I would store them in the Postgresql database, but in hindsight that seems overkill and would possibly start to affect performance and cause an unnecessary hit on the database server. What would be the best way to store these within the app/on the device? For argument's sake, I'm using the current latest Android version (4.4.4, though obviously not only targeting that one)."} {"_id": "43186", "title": "OOP oriented PHP app source code samples and advice", "text": "The day I have been dreading has arrived. I never felt OOP or good software design was important (I knew they were important, but I thought I could manage without them). However, having read otherwise almost everywhere on the interwebs, I started dreading the day when my client would ask me for new features in an existing app. The day has come and the pain is unbearable! I have never coded my PHP websites \"properly\" (PHP is my primary language and the bulk of my work; I am learning Python (using web2py)). I take care that the website doesn't fall apart in a daily use scenario. I code pages like I was creating a list of static HTML files with bits of \"magic code\" in each of them (this bugs me a lot).
**How do I make the whole app more or less a single object?** For eg. How do I design the object model for an invoicing app? I use a lot of functions for doing any particular thing in the same fashion throughout the app(for eg. validation, generating ids, calculating taxes etc.). I know the basics of OOP in general. Can anyone point me to **source code samples of functional apps** written in php? Or can someone provide pointers so I can recode my existing apps in a more modular way."} {"_id": "43188", "title": "Is \"as long as it works\" the norm?", "text": "**See my more recent question:** Is programming as a profession in a race to the bottom? My last shop did not have a process. Agile essentially meant they did not have a plan at all about how to develop or manage their projects. It meant \"hey, here's a ton of work. Go do it in two weeks. We're fast paced and agile.\" They released stuff that they knew had problems. They didn't care how things were written. There were no code reviews--despite there being several developers. They released software they knew to be buggy. At my previous job, people had the attitude as long as it works, it's fine. When I asked for a rewrite of some code I had written while we were essentially exploring the spec, they denied it. I wanted to rewrite the code because code was repeated in multiple places, there was no encapsulation and it took people a long time to make changes to it. So essentially, my impression is this: programming boils down to the following: 1. Reading some book about the latest tool/technology 2. Throwing code together based on this, avoiding writing any individual code because the company doesn't want to \"maintain custom code\" 3. Showing it and moving on to the next thing, \"as long as it works.\" I've always told myself that next job I'm going to get a better shop. It never happens. If this is it, then I feel stuck. The technologies always change; if the only professional development here is reading the latest MS Press technology book, then what have you built in 10 years but a superficial knowledge of various technologies? I'm concerned about: 1. Best way to have professional standards 2. How to develop meaningful knowledge and experience in this situation"} {"_id": "60350", "title": "Who is Responsible for Setting Up An Automated Builds System?", "text": "I am a project manager at my company. I work with a few teams of developers using a standard, well-known version control system known as CVS. I'd like to see continuous integration and automated builds implemented to help prevent problems with the build breaking and with bad deployments sneaking onto the production servers. I am sure I can set this up myself, but I don't want to do this myself for two reasons: 1. I don't have time for it. I have my own responsibilities, which involve marketing, communication to other stakeholders with team members not part of development, communicating with customers, and project planning. 2. Most importantly, I'm the project manager. My purpose is to provide leadership, not to micro-manage the development team. What are some things that I can do to find someone on the development team who would be passionate about setting this up? Is a developer the right person for this task, considering it requires knowledge of Java, Spring, and Google App Engine? 
What are some tips for helping to promote change where change is feared?"} {"_id": "235145", "title": "Real time unit testing - or \"how to mock now\"", "text": "When you are working on a feature that depends on time... How do you organize unit testing ? When your unit tests scenarios depend on the way your program interprets \"now\", how do you set them up ? **Second Edit: After a Couple of days reading your experience** I can see that the techniques to deal with this situation usually revolve around one of these three principles: * Add (hardcode) a dependency: add a small layer over the time function/object, and always call your datetime function through this layer. This way, you can get control of time during the test cases. * Use mocks: your code stays exactly the same. In your tests, you replace the time object by a fake time object. Sometimes, the solution involves modifying the genuine time object that is provided by your programming language. * Use dependency injection: Build your code so that any time reference is passed as a parameter. Then, you have control of the parameters during the tests. Techniques (or libraries) specific to a language are very welcome, and will be enhanced if an illustrative piece of code comes along. Then, most interesting is how similar principles may be applied on any platform. And yes... If I can apply it straight away in PHP, better than better ;) **Let's start with a simple example: a basic booking application.** Let's say we have JSON API and two messages: a request message, and a confirmation message. A standard scenario goes like this: 1. You make a request. You get a response with a token. The resource that is needed to fulfill that request is blocked by the system for 5 minutes. 2. You confirm a request, identified by the token. If token was issued within 5 minutes, it will be accepted (the resource is still available). If more than 5 minutes have passed, you have to make a new request (the resource was freed. You need to check for its availability again). **Here come the corresponding test scenario:** 1. I make a request. I confirm (immediately) with the token I received. My confirmation is accepted. 2. I make a request. I wait 3 minutes. I confirm with the token I received. My confirmation is accepted. 3. I make a request. I wait 6 minutes. I confirm with the token I received. My confirmation is rejected. How can we get to program these unit tests ? What architecture should we use so that these functionalities remain testable ? **Edit** Note: When we shoot a request, the time is stored in a database in a format that looses any information about milliseconds. **EXTRA - but maybe a bit verbose: Here come details about what I found out doing my \"homework\":** 1. I Built my feature with a dependency on a time function of my own. My function VirtualDateTime has a static method get_time() that I call where I used to call new DateTime(). This time function allows me to simulate and control what time is \"now\", so that I can build tests like: \"Set now to 21st jan 2014 16h15. Make a request. Move ahead 3 minutes. Make confirmation.\". This works fine, at the cost of the dependency, and \"not so pretty code\". 2. A little more integrated solution would be to build a myDateTime function of my own that extends DateTime with additional \"virtual time\" functionnalities (most importantly, set now to what I want). 
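For reference, a minimal sketch of the wrapper idea from point 1 (Python purely for brevity; the PHP version has the same shape, and all names are made up):

    import datetime

    class Clock:
        # Thin layer over "now"; production code calls Clock.now()
        # instead of constructing datetimes directly.
        _frozen = None  # test hook: None means use the real time

        @classmethod
        def now(cls):
            return cls._frozen or datetime.datetime.now()

        @classmethod
        def set_now(cls, value):
            cls._frozen = value

        @classmethod
        def advance(cls, **delta):
            cls._frozen = cls.now() + datetime.timedelta(**delta)

    # In a test, mirroring the scenario described above:
    Clock.set_now(datetime.datetime(2014, 1, 21, 16, 15))
    Clock.advance(minutes=3)  # "wait" 3 minutes before confirming
    print(Clock.now())        # 2014-01-21 16:18:00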
This is making the first solution's code slightly more elegant (use of new myDateTime instead of new DateTime), but ends up to be very similar: I have to build my feature using my own class, thus creating a dependency. 3. I have though about hacking the DateTime function, to make it work with my dependency when I need it. Is there any simple and neat way to replace a class by another, though ? (I think I got an answer to that: See namespace below). 4. In a PHP environment, I read \"Runkit\" may allow me to do this (hack the DateTime function) dynamically. What is nice is that I would be able to check I am running in test environment before modifying anything about DateTime, and leave it untouched in production [1]. This sounds much safer and cleaner than any manual DateTime hack. 5. Dependency injection of a clock function in each class that uses time [2]. Is this not overkill ? I see that in some environments it gets very usefull [5]. In this case, I do not like it so much, though. 6. Remove dependency of time in all the functions [2]. Is this always feasible ? (See more examples below) 7. Using namespaces [2][3] ? This looks quite good. I could mock DateTime with my own DateTime function that extends \\DateTime... Use DateTime::setNow(\"2014-01-21 16:15:00\"), and DateTime::wait(\"+3 minutes\"). This covers pretty much what I need. What if we use the time() function though ? Or other time functions ? I still have to avoid their use in my original code. Or I would need to make sure any PHP time function I use in my code is overridden in my tests... Is there any library available that would do just this ? 8. I have been looking for a way to change the system time just for a thread. It seems that there is none [4]. This is a pity: a \"simple\" PHP function to \"Set time to 21st jan 2014 16h15 for this thread\" would be a great feature for this kind of tests. 9. Change the system date and time for the test, using exec(). This can work out if you are not afraid to mess up with other things on the server. And you need to set back the system time after you ran your test. It can do the trick in some situations, but feels quite \"hacky\". This looks like a very standard problem to me. However, I still miss a simple an generic way to deal with it. Maybe I missed something ? Please share your experience ! **NOTE: Here are some other situations where we may have similar testing needs.** * Any feature that works with some kind of timeout (e.g. chess game ?) * processing a queue that triggers events at a given time (in the above scenario we could keep going like this: one day before the reservation starts, I want to send a mail to the user with all the details. Test scenario...) * You want to set up a test environment with data collected in the past - you would like to see it like if it was now. [1] 1 http://stackoverflow.com/questions/3271735/simulate-different-server- datetimes-in-php 2 http://stackoverflow.com/questions/4221480/how-to-change-current-time-for- unit-testing-date-functions-in-php 3 http://www.schmengler-se.de/en/2011/03/php-mocking-built-in-functions-like- time-in-unit-tests/ 4 http://stackoverflow.com/questions/3923848/change-todays-date-and-time-in- php 5 Unit testing time-bound code"} {"_id": "235140", "title": "Best Practices - separation of concerns and inheritance issues", "text": "Here's the situation: I have a \"common\" Data Access assembly that contains classes used in all my projects. 
Some of those are abstract classes that are only implemented by my data access layers for each project. In my projects I have a layered approach - separated data access, business layer, and UI. My data access classes may inherit from the abstract classes in Common. These abstract classes contain an \"execute\" method. In my business layer of the project I only reference the data access layer of the project - I don't reference other projects or the Common assembly. But once I put my abstract classes that are reused all the time into Common, my business layer could no longer call the \"Execute\" method without having a reference to common. I hope that's not too confusing. If I don't want lots of interdependencies between assemblies, I need to move the abstract classes back into each project's data access layer. But then I have repeated code and potentially inconsistent behavior between projects. But if I keep it as it is, all my business layers need to be able to access this common data access assembly, which seems wrong. Any thoughts on this architecture? I know some might try to say \"use Entity Framework\" or some other ORM. But my projects are not complex enough to warrant that much overhead, especially given the need for fast performance. I have found that a simple framework of my own directly implementing ADO.Net is remarkably faster. So please just advise me on the separation of concerns and inheritance issues and don't try to talk me into adding an ORM. Example code: In Common: base class Public MustInherit Class AbstractDatabaseAction Protected Property Factory As DbProviderFactory Protected Property Connection As DbConnection Protected Property Command As DbCommand Protected Property MessageForExceptions As String Protected Property ProviderName As String Protected Sub New(connString As String, providerName As String, messageForExceptions As String) Factory = DbProviderFactories.GetFactory(providerName) 'set up connection Connection = Factory.CreateConnection Connection.ConnectionString = connString Me.ProviderName = providerName 'set up command Command = Factory.CreateCommand Me.MessageForExceptions = messageForExceptions End Sub Public MustOverride Sub Execute() Protected Overridable Sub SetParameters() 'nothing End Sub Protected MustOverride Sub SetCommandText() Protected Overridable Sub SetCommandType() Command.CommandType = CommandType.StoredProcedure End Sub Protected Sub BuildCommand() Command.Connection = Connection Me.SetCommandText() Me.SetCommandType() Me.SetParameters() If Me.ProviderName = \"Oracle.DataAccess.Client\" Then OracleSpecificCommandEdits() End If End Sub Protected Overridable Sub OracleSpecificCommandEdits() CType(Command, OracleCommand).BindByName = True End Sub End Class In Common: second base class (I have both a search and a save version, with the save version optionally allowing transactions.) 
Public MustInherit Class AbstractSearch Inherits AbstractDatabaseAction Protected Sub New(connString As String, providerName As String, messageForExceptions As String) MyBase.New(connString, providerName, messageForExceptions) End Sub Public Overrides Sub Execute() Try Me.BuildCommand() Using Connection Connection.Open() Using Command Try Dim rdr As IDataReader = Command.ExecuteReader Me.fill(rdr) rdr.Close() Catch ex As Exception Throw New Exception(MessageForExceptions & \"->Search\", ex) End Try End Using End Using Catch ex As Exception Throw New Exception(MessageForExceptions & \"->Search\", ex) End Try End Sub Protected MustOverride Sub fill(ByRef rdr As System.Data.IDataReader) Protected Overrides Sub OracleSpecificCommandEdits() MyBase.OracleSpecificCommandEdits() If TypeOf (Factory) Is OracleClientFactory Then Dim p As DbParameter = New OracleParameter p.ParameterName = \"results\" p.Direction = ParameterDirection.Output CType(p, OracleParameter).OracleDbType = OracleDbType.RefCursor Command.Parameters.Add(p) End If End Sub Protected Sub AddInParameter(key As String, value As Object) Dim p As IDataParameter = Command.CreateParameter p.Direction = ParameterDirection.Input p.Value = value p.ParameterName = key Command.Parameters.Add(p) End Sub Protected Sub AddOutParameter(key As String, type As System.Data.DbType) Dim p As IDataParameter = Command.CreateParameter p.Direction = ParameterDirection.Output p.DbType = type p.ParameterName = key Command.Parameters.Add(p) End Sub End Class A very simple implementation example of a data access layer implementation: Public Class IpBlackListSearch Inherits Common.DataAccess.AbstractSearch Private Property IPToSearch As String Public Property Results As List(Of String) = Nothing Public Sub New(connString As String, providerName As String, ipAddressToSearch As String) MyBase.New(connString, providerName, \"IpAddressSearch\") Me.IPToSearch = IPToSearch End Sub Protected Overrides Sub fill(ByRef rdr As System.Data.IDataReader) Results = New List(Of String) While rdr.Read Results.Add(HelperFunctions.NullScrubber(Of String)(\"ip\")) End While End Sub Protected Overrides Sub SetCommandText() Command.CommandText = \"Get_IPBlacklist\" End Sub Protected Overrides Sub SetParameters() MyBase.AddInParameter(\"in_ip\", Me.IPToSearch) End Sub End Class The problem would come when in the business layer of my project that would do something like: Dim srch as new IpBlackListSearch(connstring, providername, \"12.12.12.12.\") srch.execute srch.Execute can only compile if the business layer references the common data access assembly. It sounds like from the comments that there is nothing wrong with my business layer containing that reference."} {"_id": "155704", "title": "Efficient Trie implementation for unicode strings", "text": "I have been looking for an efficient String trie implementation. Mostly I have found code like this: Referential implementation in Java (per wikipedia) I dislike these implementations for mostly two reasons: 1. They support only 256 ASCII characters. I need to cover things like cyrillic. 2. They are extremely memory inefficient. Each node contains an array of 256 references, which is 4096 bytes on a 64 bit machine in Java. Each of these nodes can have up to 256 subnodes with 4096 bytes of references each. So a full Trie for every ASCII 2 character string would require a bit over 1MB. Three character strings? 256MB just for arrays in nodes. And so on. 
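For contrast, a node that stores only the edges it actually has avoids both the 256-slot arrays and the ASCII restriction. A minimal hash-map-based sketch (Python; the Java analogue would hold a HashMap<Character, Node> per node):

    class Node:
        __slots__ = ("children", "terminal")

        def __init__(self):
            self.children = {}   # char -> Node: only real edges are stored
            self.terminal = False

    class Trie:
        def __init__(self):
            self.root = Node()

        def insert(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, Node())
            node.terminal = True

        def contains(self, word):
            node = self.root
            for ch in word:
                node = node.children.get(ch)
                if node is None:
                    return False
            return node.terminal

    t = Trie()
    t.insert("привет")           # arbitrary unicode keys work unchanged
    print(t.contains("привет"))  # True
    print(t.contains("при"))     # False: a prefix, not an inserted key

Memory then grows with the number of stored edges rather than with the alphabet size.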
Of course I don't intend to have all of 16 million three character strings in my Trie, so a lot of space is just wasted. Most of these arrays are just null references as their capacity far exceeds the actual number of inserted keys. And if I add unicode, the arrays get even larger (char has 64k values instead of 256 in Java). Is there any hope of making an efficient trie for strings? I have considered a couple of improvements over these types of implementations: * Instead of using array of references, I could use an array of primitive integer type, which indexes into an array of references to nodes whose size is close to the number of actual nodes. * I could break strings into 4 bit parts which would allow for node arrays of size 16 at the cost of a deeper tree."} {"_id": "155705", "title": "Strategy vs Delegates", "text": "Can the `Strategy` design pattern entirely replace `delegates`? In `Java`, for example, there are no delegates. Is it possible to gain all the features of `delegates` by using `Strategy` design pattern? **Edit** : I see there is some ambiguity in my question. By `delegates` I mean the feature of the language, C# for instance."} {"_id": "65657", "title": "How to fight absentmindedness", "text": "Do you have any problems with loss of concentration, constant relaxation, etc.? How do you solve this problem? For example, when you are coding or learning something - You understand that it's interesting, but there's also another will to go and do something else. How do you motivate yourself to keep working when you get the urge to distract yourself?"} {"_id": "150555", "title": "What kind of task should I expect In a PHP job interview?", "text": "I've got a job interview a few days from today (and yeah, I'm dreading it). The job is based in PHP web development. I have 3 years experience with PHP and a few other languages, but I'm only 17 so this is my first computer/programming related interview I've ever had. To be honest, it's not like a huge organization/company so I don't think the interviewer(s) expectations are going to be too high (right?).. The employer did mention on the phone that I will be asked to write some code in the interview. I've had a look around and came across the famous FizzBuzz question, which I found quite easy to do. I'm really nervous about this interview and don't want to mess it up. Can anyone give me some example questions that you would ask a PHP programmer, in an interview. Or even better, if you've been in the same situation, what kind of task(s) were you set?"} {"_id": "65659", "title": "Using JMeter with a utility that does not support proxies", "text": "Currently I'm checking some client and server software and I wish to expand the testing coverage for it. Specifically I want to increase the quality of the load and stress testing that is performed so that it better reflects user usage. To this end I've been looking at JMeter, however I've encountered an issue. Unlike a web browser, the client does not have support for using a proxy. This presents the problem that JMeter can't monitor the communication between the two systems to build up the test data. Possibly JMeter provides for this very scenario, however it's not been obvious in the documentation that there is any facility for this. 
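One workaround others have used when the client has no proxy setting: point the client at a small TCP relay that forwards to the real server and logs everything passing through, then build the JMeter plan from the captured exchange. A bare-bones sketch (Python, plaintext traffic only; hosts and ports are placeholders):

    import socket
    import threading

    LISTEN = ("127.0.0.1", 9000)            # the client is pointed here
    TARGET = ("server.example.com", 8080)   # the real server

    def pump(src, dst, tag):
        # Copy bytes one way, printing them so the dialogue can be recorded.
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(tag, data[:120])
            dst.sendall(data)

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN)
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(client, upstream, ">"), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client, "<"), daemon=True).start()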
Although I have some ideas on how to approach this I can't be the first to have encountered this issue, so I'm putting the question out there: how best would you solve this issue?"} {"_id": "150558", "title": "Objects in non-OOP languages", "text": "Can we use the word \"object\" for data / functions in e.g. C (or Pascal) which doesn't really have objects? In C, what is an object? A data structure? A named memory area? I spoke to my collegue about \"objects\" in C and he said that there weren't any but in the text I study about C the word object is used as a generalization of either a function or data. For example, if you in C declare a `union` you can say that you have declared place for an object that can be either a function or data. The ADT defined as \"Object\" looks like this in C typedef enum { Integral, Real } Kind; typedef struct { Kind type; union { double rvalue; int ivalue; } data; } Object; Thanks for any reply"} {"_id": "118022", "title": "Can I use Visual studio for Python and Django development?", "text": "I have been using Visual Studio for quite some time now, in fact from the VS 2005 version. Right now, I am not only comfortable but addict to using it(VS 2010) for all my development needs. Recently I have been learning Python and Django. In that getting the tutorial tasks up and running with IDLE seems very lame and outdated(I am following the Django Book). Therefore, I downloaded and configured Eclipse & PyDev. Eclipse seems good and fine but needs a bit of learning curve for itself. So, I am looking for a way to configure Visual Studio 2010 for Python and Django development if possible. Please tell me if that's possible and How ?"} {"_id": "118020", "title": "Does it make sense to license unit tests?", "text": "I am wondering if there are any benefits / risks of (not) putting a licence on test code which mostly consists of unit tests. What do you think? I am particularly interested in licensing under (L)GPL, Apache, MIT and BSD. **EDIT** : The assumption is that non-test code is already published under some licence, but test code is not, so the question is whether to publish it and if so whether to put the same licence on it."} {"_id": "204786", "title": "Do I need unit test if I already have integration test?", "text": "If I already have integration test for my program, and they all passed, then I have a good feel that it will work. Then what are the reasons to write/add unit tests? Since I already have to write integration tests anyway, I will like to only write unit test for parts that not covered by integration tests. What I know the benefit of unit test over integration test are * Small and hence fast to run (but adding new unit to test something is already tested by integration test means my total test suit get larger and longer to run) * Locate bug easier because it only test one thing (but I can start write unit test to verify each individual part when my integration test failed) * Find bug that may not be caught in integration test. e.g. masking/offsetting bugs. (but if my integration tests all passes, which means my program will work even some hidden bug exists. So find/fix these bugs are not really high priority unless they start breaking future integration tests or cause performance problem ) And we always want to write less code, but write unit tests need lots more code (mainly setup mock objects). The difference between some of my unit tests and integration tests is that in unit tests, I use mock object, and in integration tests, I use real object. 
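Concretely, the two variants often differ only in which collaborator gets wired in; for example (Python's unittest.mock, all class and function names invented):

    from unittest.mock import Mock

    class RealGateway:
        # Stand-in for the real collaborator, which would hit a live service.
        def submit(self, amount):
            return "ok"

    def charge(gateway, amount):
        return gateway.submit(amount) == "ok"

    def test_charge_unit():
        # Unit-style: the collaborator is mocked.
        gateway = Mock()
        gateway.submit.return_value = "ok"
        assert charge(gateway, 100)

    def test_charge_integration():
        # Integration-style: identical body, real collaborator.
        assert charge(RealGateway(), 100)

    test_charge_unit()
    test_charge_integration()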
These tests have lots of duplication, and I don't like duplicated code, even in tests, because it adds overhead when changing code behavior (refactoring tools cannot do all the work all the time)."} {"_id": "224365", "title": "In TDD, is it bad practice to pass a test with code that would pass more than one test?", "text": "**When using TDD, is it bad practice to pass a newly written test with code that could also pass another test?** For example, take the following test (in PHP):

    public function WhenSomethingIsNull_ThrowsException() {
        $input = null;
        $this->module->someMethod($input);
    }

This could be made to pass with the following code:

    public function someMethod($input) {
        if (!$input) throw new Exception();
    }

This would also pass the test `WhenSomethingIsZero_ThrowsException`. In these cases, should I fix what I know is wrong with this test in the refactoring step? Or only write code which passes this and only this test?"} {"_id": "224829", "title": "How to manage success dependency between unit tests", "text": "How do you manage the dependency of one unit test on another unit test succeeding? Suppose I have a class and this class has, say, 5 methods. I create about two dozen unit tests (test methods) for that class. Each unit test tests something specific and unique, so nothing is tested twice. For instance, I have a couple of unit tests that check whether the class' constructor is working properly: maybe testing that the constructor does not throw any exceptions, and that the appropriate exception is thrown when no (valid) parameters are passed. Other unit tests may also use the constructor (in the setup stage of the unit test) but none perform any checks on it, because there is another unit test that has that job. Now I start to refactor this class and I break some of the unit tests. I click open one test and it is a specific edge-case test that is pretty elaborate. It's hard to see what is going wrong and why. If I had opened the constructor test first, I would have seen that THAT was the problem. So I wasted my time. Granted, in one unit test file it is a trivial example, but in a large test suite where success-dependencies may span file boundaries, it gets complex. So my first question is: is this way of unit testing correct and most efficient (not testing the same thing multiple times)? The next question is: how do you manage these success-dependencies between unit tests, and how do you make sure you can pinpoint the test case that displays the 'real' problem (by failing)?"} {"_id": "235045", "title": "Do we do white box testing on methods or on an overall program?", "text": "I am very confused about white box testing. A simplified version of the example: the entire system consists of three methods - `methodA()`, `methodB()`, `methodC()`. The program starts from `methodA()`, `methodB()` requires input from `methodA()`, and `methodC()` requires input from `methodB()`. Do we create 3 white box tests, one for each method, or do we create one white box test for the entire system?"} {"_id": "221766", "title": "How to structure tests where one test is another test's setup?", "text": "I'm **integration** testing a system by using only the public APIs.
I have a test that looks something like this:

    def testAllTheThings():
        email = create_random_email()
        password = create_random_password()
        ok = account_signup(email, password)
        assert ok
        url = wait_for_confirmation_email()
        assert url
        ok = account_verify(url)
        assert ok
        token = get_auth_token(email, password)
        a = do_A(token)
        assert a
        b = do_B(token, a)
        assert b
        c = do_C(token, b)
        # ...and so on...

Basically, I'm attempting to test the entire "flow" of a single transaction. Each step in the flow depends on the previous step succeeding. Because I'm restricting myself to the external API, I can't just go poking values into the database. So, either I have one really long test method that does `A; assert; B; assert; C; assert...`, or I break it up into separate test methods, where each test method needs the results of the previous test before it can do its thing:

    def testAccountSignup():
        # etc.
        return email, password

    def testAuthToken():
        email, password = testAccountSignup()
        token = get_auth_token(email, password)
        assert token
        return token

    def testA():
        token = testAuthToken()
        a = do_A(token)
        # etc.

I think this smells. Is there a better way to write these tests?"} {"_id": "225323", "title": "How should I test the functionality of a function that uses other functions in it?", "text": "Suppose there is a function get-data which returns a map of information about the id of the user passed in. Now this function uses 3 functions, source-a, source-b and source-c, to get three different kinds of maps, and we combine all these maps into one map and return it from get-data. When I test get-data, should I test for the existence of data for the keys? Does it make sense for this function to fail unit tests if one of source-a, source-b and source-c fails? If that function's job is to combine data, and it's doing it, that should be enough, right?"} {"_id": "236492", "title": "Implementing TDD for existing code", "text": "I've just been learning unit testing and I'm trying to understand how I could incorporate it into a project with existing code. Say I wanted to write tests for a specific class in that project, but that certain class requires an instance of another class for the methods to be run and/or tested. How should I approach this?"} {"_id": "232084", "title": "Do I need a suite of unit tests for an inner business-logic class? Since it's going to replicate the acceptance test suite for 90%", "text": "I'm new to TDD and wondering about methodology. Given: a simple project which implements the functionality of, for example, a console calculator. It has the following structure: 1. A fairly simple top-level class that takes console input, delegates it to a business-logic class and shows output in the console with some fancy formatting. 2. The foregoing business-logic class, which does all the calculations and is relatively complex. Also, let's assume we have a nice thorough suite of acceptance tests for the whole project, which runs through the user interface. Do I need a suite of unit tests for the inner business-logic class, given that it would replicate about 90% of the acceptance test suite? Additional, but closely related questions: 1. Will the answer to the question remain the same if, for example, that acceptance test suite takes 30 sec to run? 5 mins? 1 hour? 2.
If the inner business-logic class is not yet implemented, do I need to write that suite of unit tests to guide its development, or is it fine to stick with acceptance tests only?"} {"_id": "152958", "title": "All unit tests in one executable, or split them up?", "text": "When writing tests for one piece of software, say a library, do you prefer to compile all unit tests into one executable, or separate them into several executables? The reason I'm asking is because I am currently using CUnit to test a library I'm working on. The tests are split up into separate suites that are compiled into one executable, complete with printed output for failures. Now, the build system for that library is CMake (which, despite its name, has little to do with CUnit), which comes with its own testing framework, CTest. CTest allows me to register a list of executables that serve as tests. I'm pondering whether to use CTest for automated testing runs. However, this would require me to split up the tests I've written so far into separate compile targets. Otherwise, I can't really utilize some of CTest's advanced features, such as selectively running tests. I realize this is more a question of what tools to use and their handling and conventions, but apart from that, are there any other reasons to prefer a single test executable over separate ones? Or vice versa?"} {"_id": "152950", "title": "Is this overkill? Using MDX queries and cubes instead of SQL stored procedures", "text": "I am new to Microsoft's SQL Server Analysis Services Cubes and MDX queries. Where I work we have a daily sales table in SQL Server 2005 that already contains an aggregate of sale information per store per day. At this time it contains only 164,000+ rows. We have a sales cube dedicated to this table that about 15 reports are based off of. Now, I should also note that we generate reports based on our own fiscal year criteria: a 13-period year (one \"month\" equals 28 days, etc.). Is this overkill? At what point is it justified to begin using SSAS cubes/MDX over plain old SQL Server stored procedures? Since I have always just used plain old SQL, am I tragically late to the MDX party?"} {"_id": "152951", "title": "Passing class names or objects?", "text": "I have a switch statement:

    switch ($id) {
        case 'abc': return 'Animal';
        case 'xyz': return 'Human';
        // many more
    }

I am returning class names, and use them to call some of their static functions using call_user_func(). Instead, I can also create an object of that class, return it, and then call the static function on that object as $object::method($param):

    switch ($id) {
        case 'abc': return new Animal;
        case 'xyz': return new Human;
        // many more
    }

Which way is more efficient? To make this question broader: I have classes that consist mostly of static methods right now; putting them into classes is kind of a grouping idea here (for example, the DB table structure of Animal is given by class Animal, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class."} {"_id": "200898", "title": "Code formatting for variable declarations", "text": "Is it looked down upon or bad to write multiple variable declarations on the same line?
As in:

    boolean playMoreGames = true; int length; boolean win; int bodyparts;
    boolean contains; char blank = '-'; String word; Scanner fileIn;
    String guess; Scanner keyboard = new Scanner(System.in);

vs (standard):

    boolean playMoreGames = true;
    boolean win;
    boolean contains;
    int length;
    int bodyparts;
    char blank = '-';
    String word;
    String guess;
    StringBuilder sb_word;
    Scanner fileIn;
    Scanner keyboard = new Scanner(System.in);
"} {"_id": "46003", "title": "How much time do you need in between large projects?", "text": "You've launched a large project at work, something that's been in progress and taken up large chunks of your life for more than 6 months. The post-launch triage is over. Tech support isn't calling you every hour because they don't know how to troubleshoot an issue. Your hours drop from 60+/wk to whatever is normal in your organization (which is hopefully less than 60+!). How much time do you (or your team) need before the next large project begins? I was asked this question at work and I think the ideal minimum is two weeks -- one week to clear your desk and inbox + one week to clear your head and remember what it's like to have a life outside of work. I'd frankly acknowledge that just being asked this question is a huge boon to work/life balance. But I do think it's possible to go _too_ long in between."} {"_id": "46001", "title": "How can I improve the way I make changes to an Index page?", "text": "We have index pages, running on an Apache Tomcat server, with links to other pages. > Welcome, you. What would you like to do? > > _Goto page x_ > > _Goto page y_ These links are hard coded. I want to be able to manage the index pages in a more dynamic way, so that I don't have to pull down the page, cause temporary downtime, change/add links, and then redeploy. What route should I take?"} {"_id": "46004", "title": "Pure virtual or abstract, what's in a name?", "text": "While discussing a question about virtual functions on Stack Overflow, I wondered whether there was any official naming for pure (abstract) and non-pure virtual functions. I always relied on Wikipedia for my information, which states that pure and non-pure virtual functions are the general terms. Unfortunately, the article doesn't back it up with an origin or references. To quote Jon Skeet's answer to my reply that pure and non-pure are the general terms used: > @Steven: Hmm... possibly, but I've only ever seen it in the context of C++ before. I suspect anyone talking about them is likely to have a C++ background :) Did the terms originate from C++, or were they first defined or implemented in an earlier language, and are they the 'official' scientific terms? UPDATE: Frank Shearar helpfully provided a link to the description of the SIMULA 67 Common Base Language (1970). This language seems to be the first language to introduce OO keywords such as _class_, _object_, and also **virtual** as a formal concept. It **doesn't** define _pure/non-pure_ or _abstract_, but **it does support the concepts**. Who defined them?"} {"_id": "99201", "title": "is Ada really gone?", "text": "* Do people still use Ada? (It was mostly used in the Defense Department.) * Are all applications written in Ada \"legacy\"? * Does Ada knowledge still sell?"} {"_id": "99202", "title": "Is it called class or object instance?", "text": "I have a wording/precision question. Sometimes I write _\"object instance\"_, sometimes _\"class instance\"_. Isn't it that an _object_ is always an _instance_ of a _class_?
Therefore _\"object instance\"_ is not the correct wording, right? Is it common to say so anyway, or is it just a \"mistake\" of mine? I have the feeling _\"object instance\"_ is superfluous (redundant), because an _object_ is always an _instance_, and I would like to know more for clarification of the terms."} {"_id": "153398", "title": "Big O(n log n) and Quicksort number of operations", "text": "I have an array with `1,000,000` unsorted elements. I need to calculate the expected number of operations that need to be performed to sort the array using the `Quicksort` algorithm in common situations (not the n^2 worst case). I am not sure how `(n log n)` is calculated - does it even make sense to calculate this? If `(n log n)` = `(n*log(some base)n)`, what would the base be for `Quicksort`?"} {"_id": "64299", "title": "What is the appropriate level of granularity for services in a distributed or service-oriented architecture?", "text": "Our shop is attempting to move towards a more \"distributed\" architecture. I didn't want to use the term \"SOA\" because I'm not really sure if that's appropriate in our situation. For each new feature that is developed, a new service (WCF in our case) is created specifically for that feature. A feature is defined as new functionality, which can be a brand new application or a new addition to an existing piece of software. As more features are added, the number of services that we need to maintain goes up. Each of these services is hosted in isolation and exposes its own endpoint. I don't have a lot of experience with distributed architectures or SOA in general, but this just feels wrong. Is there a better way we can do this? Instead of having separate services for each of these features, would it make more sense to logically group and consolidate some of these services into one large service and provide a \"unified\" API? Or would something like this tend to grow too large and become unwieldy/fragile?"} {"_id": "64296", "title": "Does reflection in Java make its functions \"first class\"", "text": "I am always hearing people talk about how Java does not support first-class functions, that it is an advantage you get from functional languages, and that the only way to simulate it in Java is through the use of anonymous inner classes, etc. However, from what I can see, Java does support first-class functions through the use of its reflection API. For example, I can create a method object from a class and then call it on a number of objects of that class. I realize it is not as powerful as first-class functions in other languages. For example, in Python you can do the following:

    class Test:
        def __init__(self, num):
            self.number = num

        def add(self, num):
            self.number += num

    test = Test(1)
    method = test.add
    method(2)

You cannot do this in Java because you need to have a reference to the object you want to invoke the method on. However, you can still treat the method as an object, which is what defines first-class functions. I guess the method object is not really the actual function but rather a metadata object, although using reflection it can be treated as such. Maybe I just need clarification on what defines a first-class function."} {"_id": "153392", "title": "Benefits of classic OOP over Go-like language", "text": "I've been thinking a lot about language design and what elements would be necessary for an \"ideal\" programming language, and studying Google's Go has led me to question a lot of otherwise common knowledge.
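(On the quicksort question above: the expected comparison count for randomized quicksort has a well-known closed form, so the \(n \log n\) figure can be made concrete. With \(H_n\) the \(n\)-th harmonic number,

\[
\mathbb{E}[C_n] = 2(n+1)H_n - 4n, \qquad H_n = \sum_{k=1}^{n} \frac{1}{k} \approx \ln n + 0.5772,
\]

so \(\mathbb{E}[C_n] \approx 2n\ln n \approx 1.39\, n\log_2 n\); for \(n = 10^6\) that is roughly \(2.5 \times 10^7\) comparisons. There is no single "base for quicksort": changing the base of the logarithm only rescales by a constant, since \(\log_a n = \ln n / \ln a\), which Big-O notation absorbs; base 2 is simply the natural choice if you count levels of recursion.)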
Specifically, Go seems to have all of the interesting benefits of object-oriented programming without actually having any of the _structure_ of an object-oriented language. There are no classes, only structures; there is no class/structure inheritance -- only structure embedding. There aren't any hierarchies, no parent classes, no explicit interface implementations. Instead, type-casting rules are based on a loose system similar to duck-typing, such that if a struct implements the necessary elements of a \"Reader\" or a \"Request\" or an \"Encoding\", then you can cast it and use it as one. Is there something about OOP as implemented in C++ and Java and C# that is inherently more capable, more maintainable, somehow more powerful, that you have to give up when moving to a language like Go? What benefit do you have to give up to gain the simplicity that this new paradigm represents? **EDIT** _Removed the \"obsolete\" question that readers seemed to get excessively hung up on and infuriated by._ The question is: what does the traditional object-oriented paradigm (with hierarchies and such) as frequently seen in common language implementations have to offer that can't be done as easily in this simpler model? Or, in other words, if you were to design a language today, is there a reason you would want to include the concept of class hierarchies?"} {"_id": "7861", "title": "When deciding on whether or not to work for a new company, what are your dealbreakers?", "text": "I know we've covered what questions you should ask about a company before you decide to work there. But what do you do with the answers? In other words, what would you consider a dealbreaker? I.e., what would scare you so much about a company that you wouldn't work there, even if everything else was great? For example, if they tell me they don't use version control, I wouldn't work there. End of story."} {"_id": "191515", "title": "Form validation and file structure", "text": "I have a form (let's say a registration form) and on submit, it calls a function to validate as follows:

    $.ajax({
        url: \"/ajax/validate.php\",
        type: \"POST\",
        data: $(\".form\").serialize(),
        success: function(data) {
            data = $.parseJSON(data);
            $.each(data, function(i, item) {
                $('#' + i).addClass('errors');
            });
        }
    });

The validate.php looks something like this:

    $errors = array();
    if ($_POST['email'] == '' || filter_var($_POST['email'], FILTER_VALIDATE_EMAIL) == false) {
        $errors['email'] = 'Not a valid email';
    }
    if ($errors) {
        echo json_encode($errors);
    } else {
        // insert into db
    }

Now all of this code is specifically for the registration form. I'll have many forms on my site. My question is: do I make a new file (like `validate.php`) for each individual form, such as `/ajax/save_profile_data.php`, etc.? Do I keep each form's validation separate, or is there a clever way to approach this? I don't know if having a validate file for each form on my site is the correct way to do it."} {"_id": "95556", "title": "What is the advantage of little endian format?", "text": "Intel processors (and maybe some others) use the little-endian format for storage. I always wonder why someone would want to store the bytes in reverse order. Does this format have any advantages over the big-endian format?"} {"_id": "191510", "title": "Scrum: What to do with epics once the stories are clear?", "text": "When working on a backlog, you define epics and break them down into user stories.
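(To make the little-endian question above concrete: the same 32-bit value lays out differently in memory under each byte order, which `java.nio.ByteBuffer` can demonstrate directly; a small sketch:)

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class Endian {
        public static void main(String[] args) {
            for (ByteOrder order : new ByteOrder[] { ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN }) {
                byte[] b = ByteBuffer.allocate(4).order(order).putInt(0x01020304).array();
                // BIG_ENDIAN prints 01 02 03 04; LITTLE_ENDIAN prints 04 03 02 01
                System.out.printf("%-14s %02x %02x %02x %02x%n", order, b[0], b[1], b[2], b[3]);
            }
        }
    }

(One classical advantage of the little-endian layout: the address of a value is also the address of its least significant byte, so a 32-bit integer can be reread as a 16-bit or 8-bit value without adjusting the pointer.)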
Epics are estimated and kept on the backlog as epics until they become important enough to be planned into one of the next sprints. But once an epic is split into sprintable units, what do you do with the original epic? Do you keep it with the stories until all are done? Do you retire it into some kind of epic archive? Do you just delete it? Epics seem like unnecessary ballast once the splitting is done."} {"_id": "95552", "title": "Sharding and Cloud computing", "text": "I've read that Facebook uses DB sharding to manage its data volume, and that the idea of cloud computing is elastic resources. So I'm wondering: does the cloud instance take care of sharding for you automatically, or do you still have to do that part manually? And if you do have to do it manually, then the DB doesn't seem very elastic to me."} {"_id": "42639", "title": "How to make profit from freeware application?", "text": "The components that I'm using restrict me from selling the application. Any ideas on how to still make a profit from it? I've seen some freeware apps which set your homepage to some site; I guess they get paid for that."} {"_id": "198630", "title": "How to persist temporary data over multiple HTTP requests?", "text": "In our web application we have a list of questions that have to be answered by the user. These questions are served to the user one by one and will be saved once the last question has been answered. The problem we faced was saving all the 'help' data that goes with this: storing the index of the last question, returning whether or not you're at the last question, returning the answered questions for the overview, etc. Initially we stored each piece of this data in its own session variable. This worked, but it also meant we had about 5 different session variables for each type of question list, and a bunch of casts. I've removed these session variables by creating a few extra fields in the viewmodel and storing the viewmodel in its entirety inside a session. This made sure our temporary data persisted throughout requests (each question solved meant a new request), removed a great number of session variables, and made the code more readable. Another example of usage: our local user object gets overwritten every request because it's being obtained from a repository/database context that's re-created every request (Ninject). This also meant that we couldn't just keep a temporary list in our user object that holds the already-answered questions and their answers, since it'd be emptied every request. Using this approach we can save this list in the session object, write it to the local user at the start of the action method, perform the action (save a new answer), and afterwards obtain this list and write it to the viewmodel. It's a bit of a workaround, but it made sure we could keep this data. I believed this to be a decent solution, but now one of the project members (it's a school project due tomorrow) expressed his doubt about this method and said it was a very dirty solution (no alternative provided, though). We're using ASP.NET MVC 4. Have I approached this the right way? How should I have solved it differently?"} {"_id": "198637", "title": "Where is it appropriate to do input validation in Erlang?", "text": "I'm writing a module that runs a finite state machine* based on the contents of an array of records passed in at initialization. Each record describes a state and includes instructions on how to act on inputs (i.e., when in state `S1`, input `I` triggers a transition to state `S2`).
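For concreteness, the table amounts to a nested map, and the fail-early variant validates it in one pass at construction time (a sketch in Java rather than Erlang; names are illustrative):

    import java.util.Map;

    class Fsm {
        // state -> (input -> next state)
        private final Map<String, Map<String, String>> transitions;

        Fsm(Map<String, Map<String, String>> transitions) {
            // fail early: reject tables that mention a target state nobody declared
            for (Map<String, String> byInput : transitions.values()) {
                for (String target : byInput.values()) {
                    if (!transitions.containsKey(target)) {
                        throw new IllegalArgumentException("undeclared target state: " + target);
                    }
                }
            }
            this.transitions = transitions;
        }

        String next(String state, String input) {
            return transitions.get(state).get(input);
        }
    }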
For the FSM to work correctly, the transitioned-to states need to exist. My quandary is where to validate those transitions. The defensive programmer in me says to do it once, as early as possible, such as when the FSM is initialized. This means I can raise an error when I'm first handed bogus data and avoid handing back a process that may fail later as a result. The rest of the implementation won't have to be as defensive, because the table is known to be good and any errors that Erlang decides to raise will be a result of the implementation rather than of having been fed bad data. It will also make debugging easier for those using the module, since they'll get a `badarg` or something else back immediately instead of having to paw through my sources later to figure out that it was their mistake and not mine. The Erlang philosophy seems to be that things should be left to run as long as possible, failing only when they have to and letting a supervisor take care of picking up the pieces. On one hand, this makes sense because a given FSM could run forever without encountering the inputs it would take for a bad transition. On the other, failing late puts the onus on the callers to write repetitive tests for something that's easy for me to implement once. I know that most rules in this business aren't hard and fast, but is one approach more \"Erlangy\" than the other? Would a fail-early implementation appear out of place if released for others to use? * * * *I am aware of the `gen_fsm` behaviour and that what I'm doing has some comparative shortcomings. This is a learning exercise for some other things, and an FSM happens to be something that incorporates them."} {"_id": "198636", "title": "foreach over multiple lists at once", "text": "Are there any languages that support foreach over multiple lists at once? Something like this:

    foreach (string a in names, string b in places, string c in colors) {
        // do stuff with a, b, c
    }

Sure, this can be done with a regular for loop:

    for (int n = 0; n < names.Count; n++) {
        // use names[n], places[n], colors[n]
    }

> Starting in the first week of July, apps that meet the following criteria are required to comply with French Encryption Laws/Regulations if you intend to distribute your app in France.
>
> This requirement applies to apps that use, access, implement, or incorporate:
>
> (a) any encryption algorithm that is yet to be standardized by international standard bodies such as IEEE, IETF, ISO, ITU, ETSI, 3GPP, TIA, etc. or not otherwise published; or
>
> (b) standard (e.g., AES, DES, 3DES, RSA) encryption algorithm(s) instead of or in addition to accessing or using the encryption in iOS and/or Mac OS X
>
> Apple will require you to upload a copy of your approved French declaration when you submit your app to the App Store.
>
> Relevant French encryption regulations can be found at:
>
> http://www.legifrance.gouv.fr/affichTexte.do?cidTexte=LEGITEXT000005789847&dateTexte=#LEGIARTI000006421577
> http://www.ssi.gouv.fr/archive/fr/reglementation/regl_crypto.html
> http://www.ssi.gouv.fr/site_article195.html
> http://www.ssi.gouv.fr/site_article197.html
>
> Regards,
>
> Apple Export Compliance

I've had a hunt around the web, developer forums, etc., and cannot find one single English description of exactly what this means. In my case (like many other people) I'm only using the standard encryption components provided in iOS - i.e. HTTPS/SSL, AES, etc. - and I can't figure out if I need to do anything or not (beyond the USA encryption ERN that I've already completed).
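(On the foreach-over-multiple-lists question above: Python's zip() and C#'s Enumerable.Zip do this directly; in Java the same lock-step walk can be sketched with paired iterators, stopping at the shortest list:)

    import java.util.Iterator;
    import java.util.List;

    class Zip {
        // visit the lists in lock-step; iteration ends with the shortest list
        static void forEachZipped(List<String> names, List<String> places, List<String> colors) {
            Iterator<String> n = names.iterator(), p = places.iterator(), c = colors.iterator();
            while (n.hasNext() && p.hasNext() && c.hasNext()) {
                String a = n.next(), b = p.next(), col = c.next();
                System.out.println(a + " " + b + " " + col); // do stuff with a, b, col
            }
        }
    }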
I'm worried it is going to delay submitting my next build, though. Does anyone have any further information/links (or speak French)?"} {"_id": "152668", "title": "Solving programming problems or contributing code?", "text": "What are the best skills for a college graduate to develop? Should one spend hours/days trying to solve problems on CodeChef or TopCoder, or contribute code to open source organizations? My personal experience says solving problems teaches you how to write optimal code and learn new programming techniques (which someone else has researched and made available) to solve problems, whereas contributing to open source teaches you how to organize code (so others can work on it), use coding conventions, and make \"real\" use of what you have learnt so far. Also, another thing to note is that many companies today are hiring based on one's problem-solving skills (is this something I should worry about?). P.S. I have done a little online problem solving and a little code contribution (via GSoC), but am left confused about what I should continue doing (as doing both simultaneously isn't easy). I am in the final year of my CS degree and I want to make myself good enough before I get employed."} {"_id": "162744", "title": "Is it a common practice to minimize JavaScript usage when building a website?", "text": "I've been a web developer for almost 10 years and I've gotten into the habit of avoiding JavaScript whenever possible. I'm not talking about building web apps here, but database-driven websites. Is this a good/respected approach?"} {"_id": "53608", "title": "Using template questions in a technical interview", "text": "I've recently been in an argument with a colleague about technical questions in interviews. As a graduate, I went round lots of companies and noticed they used the same questions. An example is \"Can you write a function that determines if a number is prime or not?\"; 4 years later, I find that particular question is quite common even for a junior developer. I might not be looking at this the correct way, but shouldn't software houses be intelligent enough to think up their own interview questions? I've been to about 16 interviews as a graduate and the same questions came up in about 75% of them. This leads me to believe that many companies are lazy and simply Google 'template questions for interviewing software developers', and I feel they are doing themselves a disservice by taking this approach. **Question:** Is it better to use a set of questions off some template, or should software houses strive to be more original and come up with their own interview material? From my point of view, if I failed an interview and went off and looked for good answers to the questions I messed up on, I could fly through the next interview if the questions are the same."} {"_id": "163297", "title": "Should a project start with the client or the server?", "text": "Pretty simple question with a complex answer. **Should a project start with the client or the server, and why?** Where should a single programmer start a client/server project? What are the best practices, and what are the reasons behind them? If you can't think of any, what reasons do you use to justify why you would choose to start one before the other? Personally, I'm asking this question because I'm finishing up specs for a project I will be doing for myself on the side for fun.
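(For the template-interview-question discussion above: the prime-check staple has a short canonical answer, trial division up to the square root; a Java sketch:)

    // true iff n is prime; trial division by odd numbers up to sqrt(n)
    static boolean isPrime(int n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (int d = 3; (long) d * d <= n; d += 2) {
            if (n % d == 0) return false;
        }
        return true;
    }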
But now that I'm finishing this phase, I'm wondering, \"OK, now where do I begin?\" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But I don't want that to sway the answer, as I'm looking for more of an in-depth and less specific answer that would apply to any project I begin in the future."} {"_id": "63111", "title": "Which Scala open source projects should I study to learn best coding practices", "text": "What open source projects would you recommend for people to study to learn how the pros write Scala? Some of the attributes that I'm looking for - though they don't all have to be present in every exemplary project: * Idiomatic use of the language and libraries * Functional programming techniques * Concurrency (using Actors or other methods) * Large scale system with many modules * Readability * Java interop * etc."} {"_id": "121147", "title": "Throwing and catching exceptions in the same function/method", "text": "I've written a function that asks a user for input until the user enters a positive integer (a natural number). Somebody said I shouldn't throw and catch exceptions in my function and should let the caller of my function handle them. I wonder what other developers think about this. I'm also probably misusing exceptions in the function. Here's the code in Java:

    private static int sideInput() {
        int side = 0;
        String input;
        Scanner scanner = new Scanner(System.in);
        do {
            System.out.print(\"Side length: \");
            input = scanner.nextLine();
            try {
                side = Integer.parseInt(input);
                if (side <= 0) {
                    // probably a misuse of exceptions
                    throw new NumberFormatException();
                }
            } catch (NumberFormatException numFormExc) {
                System.out.println(\"Invalid input. Enter a natural number.\");
            }
        } while (side <= 0);
        return side;
    }

I'm interested in two things: 1. Should I let the caller worry about exceptions? The point of the function is that it nags the user until the user enters a natural number. Is the point of the function bad? I'm not talking about UI (the user not being able to get out of the loop without proper input), but about looped input with exceptions handled. 2. Would you say the throw statement (in this case) is a misuse of exceptions? I could easily create a flag for checking the validity of the number and output the warning message based on that flag, but that would add more lines to the code, and I think it's perfectly readable as it is. The thing is, I often write a separate input function. If the user has to input a number multiple times, I create a separate function for input that handles all formatting exceptions and limitations."} {"_id": "163294", "title": "Is it possible to implement a completely-stateless multiplayer game?", "text": "I'm facing a challenge understanding how to program a web version of a card game that is completely stateless. I create my object graph when the game begins and distribute cards to PlayerA and PlayerB so I can lay them out on the screen. At this point I could assume that HTML and the query string are what hold at least _some_ of my state, and just keep a snapshot copy of the game state on the server side for the sole purpose of validating the inputs I receive from the web clients. Still, it appears to me that the state of the game _is by its nature_ mutable: cards are being dealt from the deck, etc. Am I just not getting it?
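(On the looped-input question above: a common refactoring keeps the retry loop but moves parsing into a small helper that throws, so each piece does one thing; a sketch:)

    // Parses one line; throws NumberFormatException for non-numbers and non-positives.
    private static int parseNaturalNumber(String line) {
        int side = Integer.parseInt(line); // may throw NumberFormatException
        if (side <= 0) {
            throw new NumberFormatException("not a natural number: " + line);
        }
        return side;
    }

    private static int sideInput(java.util.Scanner scanner) {
        while (true) {
            System.out.print("Side length: ");
            try {
                return parseNaturalNumber(scanner.nextLine());
            } catch (NumberFormatException e) {
                System.out.println("Invalid input. Enter a natural number.");
            }
        }
    }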
Or should I just strive to minimize the side effects of my functions on the objects that I take as input? How would you design a stateless card game?"} {"_id": "126410", "title": "Why would you ever use Lisp?", "text": "> **Possible Duplicate:** > Why is Lisp useful? Lisp has always struck me as a very peculiar language... interesting in concept, but it just doesn't seem as intuitive as, for instance, Java or C or C++. Why do a lot of people actually use Lisp, then?"} {"_id": "224382", "title": "What is the relationship between an application server, a TCP/UDP client/server library, and a socket?", "text": "Up until now, the only networking concept I have ever implemented is basic sockets from the java.net API. I am now trying to learn some higher-level APIs/frameworks in Java, and I am having trouble differentiating between TCP/UDP client/server libraries such as Apache MINA or KryoNet and application servers such as Tomcat or JBoss. Furthermore, I would appreciate it if I could know the relationship between java.net.Socket and application servers and client/server libraries. My hypothesis is that application servers use WebSockets, which are similar to sockets but are limited by something related to the HTTP protocol. Moreover, I get the feeling that TCP/UDP client/server libraries are just higher-level abstractions of the basic sockets. Is this correct?"} {"_id": "121149", "title": "Is using Javascript/JQuery for layout and style bad practice?", "text": "Many, but not all, HTML layout problems can be solved with CSS alone. For those that can't, jQuery (on document load) has become very popular.* As a result of its ease, many developers are quick to use jQuery or JavaScript for layout and style -- even without understanding whether or not the problem can be solved with CSS alone. This is illustrated by responses to questions like this one. **Is this bad practice? What are the arguments for/against? Should someone who sees this in practice attempt to persuade those developers otherwise?** **If so, what are the best responses to arguments in favor of jQuery saying it's \"so easy\"?** * * * * Example: Layouts that wish to use vertical layout flow of some kind often run into dead ends with CSS alone -- this would include layouts similar to Pinterest, though I'm not sure that's actually impossible with CSS."} {"_id": "142021", "title": "Differences between software testing processes and techniques?", "text": "I get confused between these terms. For example, should unit testing be listed as a software testing process or technique? I think unit testing is a software testing technique. And how about test-driven development? Can you give me some examples of software testing processes and techniques? In my opinion, the software testing process is a part of the software development life cycle. For example, if we use the V-Model, the software testing process will include the system test, acceptance test, and integration test."} {"_id": "54953", "title": "Nervous about the \"real\" world", "text": "I am currently majoring in Computer Science and minoring in mathematics (the minor is embedded in the major). The program has a strong C++ curriculum. We have done some UNIX and assembly language (not fun), and there is C and Java on the way in future classes that I must take. The program I am in did not use the STL, but rather an STL-ish design that was created from the ground up for the program. From what I have read, the STL and what I have learned are very similar, but what I used seemed more user-friendly.
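(One answer to the stateless card-game question above: make the deal a pure function of a random seed, so any server can rebuild the full game state from the seed plus the move history sent with each request; a Java sketch:)

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    class StatelessDeal {
        // Same seed -> same shuffle, so no per-game state needs to live on the server.
        static List<Integer> deal(long seed) {
            List<Integer> deck = new ArrayList<>();
            for (int card = 0; card < 52; card++) deck.add(card);
            Collections.shuffle(deck, new Random(seed));
            return deck;
        }
        // Current state = deal(seed) replayed through the client-supplied move list,
        // which should be signed server-side so it cannot be forged.
    }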
Some of the programs that I had to write in C++ for assignments include:

* a password server that utilized hashing of the passwords for security purposes,
* a router simulator that used a hash table and maps,
* a maze solver that used depth-first search,
* a tree traveler program that traversed a tree using level-order, post-order, in-order, selection sort, insertion sort, bit sort, radix sort, merge sort, heap sort, quick sort, topological sort, stacks, queues, priority queues,
* and my least favorite, red-black trees.

All of this was done in three semesters, which was just enough time to code them up and turn them in. That being said, if I were told to use a stack to convert an equation to infix notation or something, I would be lost for a few hours. My main concern in writing this is: when I graduate and land an interview, what are some of the questions posed to assess my skills? What are some of the most important areas of computer science that are prevalent in the field? I am currently trying to get some ideas for programs I can write in C++ that interest and challenge me to keep learning the language. A sudoku solver came to mind, but I am lost as to where to start. I apologize for the rant, but I'm just a wee bit nervous about the future. Any tips are appreciated."} {"_id": "212957", "title": "Reason for a reflection error", "text": "I'm working on an Eclipse plug-in project. Using this plug-in, users can create Eclipse Java projects with some specific features. For example, they can add Java classes' names, which will be saved in a file. These Java classes can be created in the src of the project or used from a jar file, which must be added to the project classpath. In the latter case, the plug-in will use reflection to get some data from each class. There are two different test cases that give the same error, because the plug-in can't find the class to instantiate: * A jar containing a class whose name is saved in the file is not added to the project classpath, so in this case the classpath is incomplete. * The user of the plug-in updated a jar whose old version contained the named class, but the new version does not (which could happen if the class was deleted from the new version of the jar). In this case, the plug-in will not find the class, but the classpath is complete. So the plug-in must differentiate between the two test cases when it fails to find the class using reflection. How can that be done?"} {"_id": "142024", "title": "Storage of value types and reference types in .net", "text": "In .NET, **value types** are stored on the **stack** whereas **reference types** are stored on the **managed heap**. What is the reason for this? Is it not possible to exchange their storage locations?"} {"_id": "142029", "title": "Does using a PHP framework count as experience using PHP to a company that doesn't use that framework?", "text": "I've started working at a company that uses the Yii PHP framework. I'm mostly using Yii, but also some frontend stuff like jQuery and Ajax. What I'm worried about is limiting my skill set to a framework that isn't very popular. I mean, if the company I worked for was using Ruby on Rails or even Django, I wouldn't have this feeling of concern for the future. My first question, then, in regard to being able to find a job somewhere else in the future: is my feeling of concern warranted?
Secondly, I see a lot of PHP jobs out there, but do you think experience using a PHP framework counts as valuable experience to a company that doesn't use that particular framework, or any framework at all?"} {"_id": "142028", "title": "Thick models Vs. Business Logic, Where do you draw the distinction?", "text": "Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use `sqlalchemy`, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name, a nearly mechanical translation from database tables to Python objects. In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and secondarily persistent (they could be persisted using XML in a filesystem; you don't need to care). His view was that any behavior at all is \"business logic\", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think that there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how it gets implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about in the previous paragraph; but I'm having a hard time putting my finger on what that distinction is. I have a better sense of what the API might be (which, in our case, is HTTP \"ReSTful\"), in that users invoke the API with what they want to _do_, distinct from what they are allowed to do and how it gets done. * * * tl;dr: What kinds of things can or should go in a method in a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?"} {"_id": "41473", "title": "How Can I Know Whether I Am a Good Programmer?", "text": "Like most people, I think of myself as being a bit above average in my field. I get paid well, I've gotten promotions, and I've never had a real problem getting good references or getting a job. But I've been around enough to notice that many of the worst programmers I've worked with thought they were some of the best. Bad programmers who are surrounded by other bad programmers seem to be the most self-deluded. I'm certainly not perfect. I do make mistakes. I do miss deadlines. But I think I make about the same number of bonehead moves that \"other good programmers\" do. The problem is that I define \"other good programmers\" to mean \"people who are like me.\" So, I wonder, is there any way a programmer can make some sort of reasonable self-evaluation? How do we know whether we are good or bad at our jobs? Or, if terms like _good_ and _bad_ are too ill-defined, how can programmers honestly identify their own strengths and weaknesses, so that they can take advantage of the former and work to improve the latter?"} {"_id": "173244", "title": "How do you measure yourself as a programmer?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? > How to rate myself as a programmer I am self-learning Python. Everything is going well so far, but how do I measure myself? I mean, how can I say \"I'm a beginner\" or \"I'm intermediate\"?
Is there some sort of guideline against which I can measure myself, something I have to learn before I am qualified enough?"} {"_id": "223692", "title": "How to rate your understanding of a programming language?", "text": "**Background** (may skip): I am currently in my second year at university studying Computer Science and applying for placement positions for a year of work. One application I have run into is asking me to rate my knowledge of the programming languages I know. To quote: \" _rate your understanding and knowledge on a scale of 1 to 10 with 10 being the highest (e.g. JAVA=8, C++=6, etc.)._ \" Now, I feel like I am quite familiar with the languages I know, but it is possible I may be over- or under-estimating my ability in them, as I am ultimately still learning. **The Question:** Is there a way or systematic approach I can use to give my understanding of a programming language a rating?"} {"_id": "209055", "title": "Where do my programming skills stand in relation to other programmers", "text": "I am a student currently studying an undergraduate CS course at a crappy university in a third-world country. I do, however, have a continuous desire to improve myself. My objective is to bring myself up to the standards of programmers in the United States. To further that goal I program continuously (4-5 hours daily). But here comes my predicament: I am already considered better than most of my peers in university competitions, but due to the really low quality of the education, the comparison is essentially useless. I have searched the internet for websites that rank programmers based on challenges but haven't been able to find any. So the question is: is there any way I can compare my programming skills with the rest of the world? And finally, while I do understand that SE prefers questions that can be answered and not questions that solicit debate, I would still ask for suggestions from the community. In my country mostly nobody cares about self-improvement, while this is a very big deal to me."} {"_id": "96946", "title": "How to rate myself as a programmer", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? I am sure many of us think of ourselves as good coders; my question is, how can we be sure of this? How do I know that I am 'above average'? How do I compare myself against other coders? I don't mean taking formal tests; on a day-to-day basis, what is the way to evaluate oneself? Thanks for not closing."} {"_id": "83706", "title": "What are the guidelines and opinions to become a Good Programmer?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? > How to learn PHP effectively? I have doubts about myself. I think I am not good enough at programming. I have been trying very, very hard to learn programming, and I am dedicated to it. Can you give me advice on the things I need to master to become a better programmer, or the stuff that will help me improve my programming skills? Do I really need to learn C/C++ or Java? I'm a PHP programmer and I want to master it before learning a new language. How would I know if my skills are enough?"} {"_id": "50818", "title": "Is it possible to measure if someone is a 'good' programmer?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? There are a number of questions here about recognising or considering someone as a good/bad programmer. These are all subjective. What I'd like to know is if there is a way to measure this.
I realise there will and should be a subjective element to it. But is it also possible to have some actual numbers to back up (or contradict) such an assessment?"} {"_id": "25707", "title": "How to measure your own skill in a programming language?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? Like many programmers, I have worked in several languages. While of course there are some that I am more at ease with than others, I do not have a real way to precisely measure my skill in a specific language. So I thought of a system to help me with that. I am looking for 5 common criteria in programming languages, for each of which I will have a value from 1 (junior) to 4 (senior) to represent my skill. However, I have no real idea of the criteria I should choose. Does anybody have suggestions? Thanks."} {"_id": "129349", "title": "How can I tell bad code from good code?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? I was looking at some posts on language execution speeds (e.g. whether one language was faster than another or vice versa), and the common answer was that it really depends on how good the programmer is. I was wondering what that actually means. How can I tell bad code from good code?"} {"_id": "116028", "title": "What is considered \"good programmer\" professionally?", "text": "> **Possible Duplicate:** > How Can I Know Whether I Am a Good Programmer? Sometimes I hear someone say \"good programmer\". Once I saw a guy saying that he was going to promote programmers that were good. My question is: what is the definition of good? I am not asking here how to find good programmers or how to identify one, but just what \"good programmer\" means in a professional sense. Second question: how do I know if I am a good or bad programmer?"} {"_id": "212958", "title": "How to prevent fat views in MVC?", "text": "I'm just curious. If I'm a lead dev in a company of dozens of developers, is there any way I can prevent a newbie developer from creating a fat view? By fat view I mean an empty controller and an empty model, with all the database and business logic in the view along with the HTML/JS."} {"_id": "128142", "title": "What to do with estimation of incomplete story?", "text": "I am part of a development team that is relatively new to `Scrum`. Suppose that at the end of the sprint a few large stories are either `in progress` or were not `accepted` by the PO. Firstly, what happens with those user stories? Do you just carry them over into the next sprint? If so, should they be re-estimated? In my view, the work remaining on these user stories can be minimal or substantial. If not, why not? EDIT: In my specific case, the stories were not completed because of an impediment that lasted a few days, not because of user story underestimation. For those of you whom it may help, we are using `VersionOne`."} {"_id": "48361", "title": "What's the best version control/QA workflow for a legacy system?", "text": "I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no real good way to set up a test environment with realistic data.
It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?"} {"_id": "62869", "title": "How is client-side JavaScript covered by the GPL?", "text": "If I used a GPL-licensed JavaScript library in a web application, would I then have to offer the source code of the whole site to anyone who downloaded and executed the JavaScript lib?"} {"_id": "26970", "title": "Perception of a developer that uses a pre-packaged web implementation for their personal site?", "text": "Let's say you're a web developer/programmer and you want to set up a personal portfolio site. Unfortunately, like most people with a full-time job and a family, you find that time is hard to come by. You make the choice to, instead of building your own site from scratch, implement something like WordPress or Drupal. What effect (if any) might this have on the perception of you as a developer held by potential employers and fellow developers?"} {"_id": "32442", "title": "What should I learn from Scheme?", "text": "I was wondering what unique features I can learn from Scheme that would help me become a better programmer. I have a lot of experience in mainstream languages, and I am looking to expand my horizons and learn about functional aspects that are missing from other languages. I am familiar with closures from JavaScript and lambda expressions from C#, and I was wondering what I can focus on that is lacking in other languages. Aside from the Lisp syntax, I feel like what I have seen so far I've already encountered in other languages. **What is unique to Scheme/Lisp that will teach me something new?**"} {"_id": "251952", "title": "When using a programming library for an organization, can it still be a non-profit project?", "text": "I am currently developing a web application that is to present data stored in a database. For the presentation there is a request to present this both in table format and in graph format. The table format is straightforward: simply output the data and voila! However, the graph format has proved a little more difficult. I have been reviewing a number of different libraries for visualizing data, and it seems that quite a few share similar licensing agreements: free for non-profit projects or non-commercial use. Now, if I were to create something for a company that later was to sell this, I could clearly see that this is considered commercial use. But if I am working for an organization (not a non-profit) on a project that is to be used solely for internal purposes (it won't get anywhere near a customer), is this considered a non-profit project? **EDIT:** An example of such a license is Creative Commons Attribution-NonCommercial 3.0."} {"_id": "251959", "title": "Avoiding polling with components", "text": "Once you create separate components that need to communicate with each other, you enter the realm of systems programming, where you have to assume that errors could originate at any step in the process. You throw `try-catch` blocks out the window and have to develop robust alternatives for error handling yourself. We have two systems, both with REST APIs. Both systems have GUIs that users can use to add/update information. When information is added to one system, it must be propagated to the other.
We have integration software (the middleman) that polls on a minute-by-minute basis, picks up adds/edits, and translates them from one system to the other. Each invocation keeps track of the timestamp of the last successful run--we have one timestamp for communication in either direction. In this way, if any part of the system fails, we can resume right where we left off once the issues are corrected. I have heard bad things about poll-based approaches: namely, that they run without regard to whether there is actually work to do. I have heard that push-based approaches are more efficient because they are triggered on demand. I am trying to understand how a push-based approach might work. If either system attempts to push an add/edit, we have to assume that it could fail because the other system is down. It would seem to me that either system would need to maintain its own outgoing queue in order to resume once the issue with the other system is corrected. It seems to me that using a push approach eliminates the middleman, but heaps more responsibility on each system to manage its messages to the other system. This seems to not be a clean way of separating concerns. Now both systems have to take on middleman responsibilities. I don't see how you would redesign the middleman for a push-based architecture. You run the risk that messages are lost if the middleman himself fails. Is there a fault-tolerant architecture that could be used to manage system interactions without the polling? I'm trying to understand if we missed a better alternative when we devised/implemented our poll-based middleman. The software does the job, but there's some latency."} {"_id": "254709", "title": "How to verify that library assemblies originate from a given Web site?", "text": "How would the following solution be implemented? Would you need to put this code in each library assembly, or just in the main assembly that is determining whether it is safe to call the library assembly based on whether or not it originates from a given intranet Web site? Also, who should call the CheckSite method - each library assembly or the main app? Here is the example and solution from a practice exam for the C# Specialist Exam 70-483 that I am referring to: > You are an application developer for a company. You are creating an > application on the company's Web server that will manipulate confidential > data from business partners. The application relies on many library > assemblies in the company intranet to complete its work. You are required to > verify that every assembly originates from the same intranet Web site. Which > code should you use to verify the current assembly originates from the > company intranet?

    public bool CheckSite() {
        SiteMembershipCondition site = new SiteMembershipCondition(\"http://intranet.company.com\");
        return site.Check(Assembly.GetCallingAssembly().Evidence);
    }
"} {"_id": "49944", "title": "Interop Best Practices: Should I use Static Class, or Normal Classes", "text": "I have a C# front end that needs to call a C++ back end, so interop is needed. I have an \"interop layer\" that converts the C# data structures into C++ structures and does all the memory-freeing grunt work.
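(Returning to the poll-vs-push question above: a push design usually keeps a durable per-system outbox with retry, which is essentially the middleman's bookkeeping moved in-process; a sketch with hypothetical names:)

    import java.util.ArrayDeque;
    import java.util.Queue;

    interface PeerClient {
        boolean trySend(String event); // false when the other system is down
    }

    class Outbox {
        // In real use this queue must be persisted (DB table, journal file)
        // so pending events survive a crash of the sending system itself.
        private final Queue<String> pending = new ArrayDeque<>();

        synchronized void enqueue(String event) {
            pending.add(event);
        }

        // Called right after enqueue and again from a retry timer.
        synchronized void flush(PeerClient peer) {
            while (!pending.isEmpty()) {
                if (!peer.trySend(pending.peek())) return; // peer down: keep it, retry later
                pending.remove();
            }
        }
    }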
My question is, should I write this interop layer as a _static_ class, or should I wrap it in a normal class and instantiate it as an object when I need to use it?"} {"_id": "157672", "title": "What should be the next thing after being accepted as a contributor to an Open Source Project?", "text": "So this is the first time I am trying to work with an `Open Source Project`. I created a `Code Plex` account and browsed for a project that I thought I might become a part of. I sent a request to join the project. The project head accepted my request. I have already `downloaded` the `source code` of the project. Can someone just suggest what I should be doing next? Since I have never worked with `Open Source Projects`, the best thing for me, I think, would be to go through the `code` and just study it and understand how it works. Apart from that, I want to know what things many of you do when you are accepted as a contributor. Also, the project I chose to work on includes `ASP.NET MVC` and a `javascript` library, `Raphael.js`, which I have no prior experience with. So should I start looking at those as well? Any suggestions are welcome."} {"_id": "68773", "title": "How can you distinguish a good consulting firm from a body shop?", "text": "It seems to me that most consulting firms do little more than match resumes to positions, typically with insightful criteria such as \"X years experience in technologies Y and Z.\" But there are also some consulting firms that attempt to find talented developers and invest in them. And rather than simply providing staff augmentation, they either partner with a client to accomplish a project, or supply the entire team. How can you effectively (and quickly) determine which is which? Since no one advertises themselves as a \"body shop,\" how do you find and pursue the firm that is actually looking for good developers, and not just resumes? Is there a Joel test for consulting firms? One that can be applied before you are hired? I'm asking as a potential employee here."} {"_id": "157677", "title": "Is HTTPS enough to avoid replay attacks?", "text": "I am exposing a few REST methods on a server for a mobile app. I would like to prevent users from sniffing how the HTTP requests are built (from the mobile app) and then sending them again to the server. Example: * The mobile app sends a request * The user uses a proxy and can check what's going on on the network * The user sees and saves the request that the mobile app just sent * => Now I don't want the user to be able to manually send that request again Is it enough to secure the server over HTTPS?"} {"_id": "1997", "title": "What are non-programming mistakes that a programmer should avoid?", "text": "People make mistakes, even in real life... Which should we, geeky programmers, avoid?"} {"_id": "197479", "title": "How is it possible to build the whole codebase from source at Google scale?", "text": "The first answer to an old, recently active question linked to a video which talks about how the Google repository is run. One interesting thing which was mentioned is the fact that **everything is built from source, without relying on binaries.** This helps avoid issues with dependencies becoming obsolete but still being used in other projects, an issue I have indeed encountered a lot. How is it technically possible?
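In Visual Studio terms, my mental model of the difference is the made-up MSBuild fragment below, just to frame the question: a binary dependency is a Reference pinned to a prebuilt DLL that can silently go stale, while a source dependency is a ProjectReference that is always recompiled from the current sources.

    <!-- Binary dependency: a prebuilt artifact that can silently go stale -->
    <Reference Include="Common.Utilities">
      <HintPath>..\lib\Common.Utilities.dll</HintPath>
    </Reference>

    <!-- Source dependency: always compiled from the current sources -->
    <ProjectReference Include="..\Common.Utilities\Common.Utilities.csproj" />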
If I try the same thing in my company, even considering the huge gap between the scale of my company's codebase and the scale of Google's, it wouldn't be possible, for three reasons: * The IDE (Visual Studio) will quickly become unresponsive, given that it suffers a lot with even small solutions containing, say, 50 projects. * Any static analysis would be crushed by the size of the whole codebase. For example, code metrics or static checking of code contracts would hardly be possible (code contracts would probably take days or weeks). * With continuous integration, compiling would take a huge amount of time too and would crush the servers as soon as a project with lots of dependencies is modified, requiring a large tree of projects to be recompiled. How can a small company circumvent those issues and be able to: 1. Use the IDE without being affected by poor performance, 2. Compile the code after each commit without crushing the server, even when the consequences of a change require a large amount of the codebase to be recompiled?"} {"_id": "142790", "title": "Deprecated Methods in Code Base", "text": "A lot of the code I've been working on recently, both professionally (read: at work) and in other spheres (read: at home, for friends/family/etc, or NOT FOR WORK), has been worked on, redesigned and re-implemented several times - where possible/required. This has been in an effort to make things smaller, faster, more efficient, better and closer to spec (when requirements have changed). A downside to this is that I now have several code bases that have deprecated method blocks (and in some places small objects). I'm looking at making this code maintainable and easy to roll back on changes. I'm already using version control software in both instances, but I'm left wondering if there are any specific techniques that have been used by others for keeping the superseded methods without increasing the size of compiled outputs? At the minute, I'm simply wrapping the old code in C-style multi-line comments. Here's an example of what I mean (C-style, pseudo-code): void main () { //Do some work //Foo(); //Deprecated method call Bar(); //New method } /***** Deprecated code ***** /// Summary of Method void Foo() { //Do some work } ***** Deprecated Code *****/ /// Summary of method void Bar() { //Do some work } I've added a C-style example, simply because I'm more confident with the C-style languages. I'm trying to put this question across as language-agnostic (hence the tag), and would prefer language-agnostic answers, if possible - since I see this question as more of a techniques and design question. I'd like to keep the old methods and blocks for a bunch of reasons, chief amongst them being the ability to quickly restore an older working method in the case of some tests failing, or some unforeseen circumstance. Is there a better way to do this (than multi-line comments)? Are there any tools that will allow me to store these old methods in separate files? Is that even a good idea?"} {"_id": "142796", "title": "from Java to SAS", "text": "I am a seasoned Python, Java, ... programmer with a (fairly advanced) mathematical education (so I do understand statistics and data mining, for example). For various reasons I am thinking of switching to the SAS/BI area (I am naming SAS because it might be, for me, a possible way to enter BI). My question, for whoever might have experience of both: is it, in BI's current state, worth it?
I mean, the days of big ideas in BI _for business_ seem to be over (there are the APIs, and managers think that they know what you can do with them), and my mathematical background might turn out to be superfluous. Also, the big companies now have their data organized and their BI procedures well established, and trying to analyze it from a different standpoint might not be what they want. Another difference is: while in Java etc. development one codes and codes and codes, I don't know if this is the case for BI; in fact, from what I read on the net, a BI (or OLAP, etc.) developer in a big organization is usually in a state of standby, and does in fact little coding."} {"_id": "58186", "title": "How was programming done 20 years ago?", "text": "Nowadays we have a lot of programming aids that make work easier, including: * IDEs * Debuggers (line by line, breakpoints, etc) * Ant scripts, etc. for compiling * Sites like StackOverflow to help if you're stuck on a programming problem 20 years ago, none of these things were around. Which tools did people use to program, and how did they make do without these newer tools? I'm interested in learning more about how programming was done back then."} {"_id": "169848", "title": "Is Reading the Spec Enough?", "text": "This question is centered around Scheme but really could be applied to any LISP or programming language in general. **Background** So I recently picked up Scheme again, having toyed with it once or twice before. In order to solidify my understanding of the language, I found the _Revised^5 Report on the Algorithmic Language Scheme_ and have been reading through that along with my compiler/interpreter's (Chicken Scheme) listed extensions/implementations. Additionally, in order to see this applied, I have been actively seeking out Scheme code in open source projects and such and trying to read and understand it. This has been sufficient so far for my understanding of the syntax of Scheme, and I've completed almost all of the Ninety-nine Scheme problems (see here) as well as a decent number of Project Euler problems. **Question** While so far this hasn't been an issue and my solutions closely match those provided, am I missing out on a great part of Scheme? Or to phrase my question more generally, is reading the specification of a language, along with well-written code in that language, sufficient to learn from? Or are other resources, books, lectures, videos, blogs, etc. necessary for the learning process as well?"} {"_id": "211429", "title": "How to design a log() method that can easily be accessed from outside the Console class?", "text": "Recently my team programmed a custom developer console in a video game which can easily be hidden or displayed, because it's more comfortable and less of a hassle. The Console class contains a log(string) function which should be easily accessible from anywhere outside of it, without having to reference it everywhere. How could we design a method that can be easily accessed from outside the class? For example, printing a message to the console in C++ is as simple as: cout << \"Message\"; Well, we want something similar at best, maybe Console::log(), but we are clueless about how to achieve a _proper_ way of doing it. Cheers."} {"_id": "193915", "title": "How do I deal with an MIT project with no included copyright notice?", "text": "There's a (possibly abandoned) library I'm using in my project. Its Google Code project page mentions it's MIT licensed.
Nowhere in the code itself, nor in any of the included files, is there a copyright notice or license specified. The closest thing is the author's name in one of the headers. Since the sole requirement of the MIT license is the inclusion of the copyright notice, which is non-existent in this library, what's the best way to proceed?"} {"_id": "59851", "title": "Career Shifters: How to compete with IT/ComSci graduates", "text": "I am wondering what the chances are of a career shifter (mid 20's), who has maybe 3-6 months of programming experience, vs. younger, fresh IT/ComSci graduates? You see, even though I really love programming (Java/J2EE), nobody gives me any feedback when I apply online, maybe because they prefer IT/ComSci graduates over a career shifter like me. So can you advise on how to improve my chances of being hired? How can I get real-job programming experience if nobody is hiring me? I can make my own projects (working e-commerce site, blah blah) but it is still different from a real job. And my code is working, but it still needs a lot of improvement, and no one can tell me how to improve it because no one sees it (because I'm doing it alone?). Do you know any open source websites (Java/J2EE) or online home-based jobs that accept Java/J2EE trainees?"} {"_id": "211420", "title": "Programming puzzle with constant selection", "text": "THE QUESTION: There is an event where there are N contestants. There are three tasks in the event: A, B, C (say). Each participant takes part in the events in the listed order: i.e. a contestant must first complete A and B before beginning C. However, in event A only one person can participate at a time. Any number of people may simultaneously participate in B and C. So the event works as follows. At time 0, the first contestant begins A, while the remaining contestants wait for the first person to finish. As soon as the first person is done, he or she proceeds to event B, and the second contestant begins A. In general, whenever a person completes A, the next person begins A. Whenever a person is done with A, he or she proceeds to B immediately, regardless of what the other contestants do. The whole event ends as soon as all the contestants finish all 3 events. So the basic question is: given the number of participants N, and the time taken by each person for each of the 3 events, calculate the minimum time in which the whole event might be completed. MY ATTEMPT: This is the algorithm I came up with: LeastTime(people (2d array [n][3] with the time of each person for each event), n, front_chosen = false) The least time for n people can be broken up into 2 cases: 1. The current guy is seated first for event A 1.1 We take t1_1 -> time for the current guy in event A + time taken for the rest of the people to finish the whole event with the front taken 1.2 We take t1_2 -> time for the current guy in event A + time for his remaining events 1.3 The time taken for the whole event in this case is t1 = max{t1_1, t1_2}. 2. The current guy is not seated first for event A 2.1 We modify people such that the first element is placed last 2.2 t2 -> LeastTime(people, n, false) 3. We return min{t1, t2} So that is what I came up with. What are some better, i.e. more efficient, solutions? Even alternate solutions will be helpful."} {"_id": "211421", "title": "Why can't SQL return joined tables in a nested format?", "text": "For example, say I want to fetch a User and all of his phone numbers and email addresses. The phone numbers and emails are stored in separate tables, 1 user to many phones/emails.
I can do this quite easily: SELECT * FROM users user LEFT JOIN emails email ON email.user_id=user.id LEFT JOIN phones phone ON phone.user_id=user.id The problem* with this is that it's returning the user's name, DOB, favorite color, and all the other information stored in the user table over and over again for each record (users*emails*phones records), presumably eating up bandwidth and slowing down the results (unless I'm mistaken on this?). Wouldn't it be nicer if it returned a single row for each user, and within that record there was a _list_ of emails and a _list_ of phones? It would make the data much easier to work with too. I know you can get results like this using LINQ or perhaps other frameworks, but it seems to be a weakness in the underlying design of relational databases. We could get around this by using NoSQL, but shouldn't there be some middle ground? Am I missing something? Why doesn't this exist? * Yes, it's designed this way. I get it. I'm wondering why there isn't an alternative that is easier to work with. SQL could keep doing what it's doing, but then they could add a keyword or two to do a little bit of post-processing that returns the data in a nested format instead of a Cartesian product."} {"_id": "211422", "title": "Why did Microsoft abandon IronRuby and IronPython?", "text": "Several years ago, Microsoft announced that Ruby and Python were coming to .NET. The projects were called IronRuby and IronPython, respectively. Microsoft said that the projects would be built on top of the .NET DLR. Wikipedia indicates that for all intents and purposes, these projects have been abandoned by Microsoft. Why did Microsoft abandon these projects?"} {"_id": "169845", "title": "Arranging the colors on the board in the most pleasing form", "text": "Given a rectangular board of height H and width W. N colors are given, and each color occupies a percentage Xi of the area of the board. The sum of the Xi's is 1. The colors on the board must be placed in rectangles. An optimal solution has the rectangles' aspect ratios as close to 1 as possible. An ideal case has the board filled only with squares. What's the best algorithm for laying out the rectangles?"} {"_id": "203818", "title": "Defining work days and work time", "text": "I'm working on the development of SMS parking software, and I've been stuck at one point for a month... I need to implement periods of payment (or work time, of a work day, if you will). Here's the problem: For example, traffic wardens work from Monday to Saturday. From Monday to Friday, the work times are from 07:00 to 21:00, and on Saturday, the work time is from 07:00 to 14:00. The project requirement was that the customer can pay for parking by SMS without limit, which I implemented, but without this logic. I started by making a table for these periods of payment; it consists of: dayofweek (INT, for use with the MySQL function DAYOFWEEK, to tell me which day of the week the current date is), work_start and work_stop (DATETIME, for defining the start and end of the work day), but I'm unsure whether I should use DATETIME, because of the date part, or whether I should use only TIME. The idea is this: If you send an SMS at 20:50 on Monday, it should be valid until 07:50 on Tuesday (it's paid by the hour). The payment time has to be extended with respect to the work time in the week. Currently, it works by extending the time by the hour without this rule.
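Here is a rough sketch of the extension logic I have in mind (C# chosen just for illustration; my real code is different, and the work hours are hard-coded here instead of coming from the table):

    using System;

    static class ParkingTime
    {
        // Work hours per day; null means no enforcement that day.
        static (TimeSpan start, TimeSpan end)? WorkHours(DateTime day)
        {
            switch (day.DayOfWeek)
            {
                case DayOfWeek.Sunday: return null;
                case DayOfWeek.Saturday: return (new TimeSpan(7, 0, 0), new TimeSpan(14, 0, 0));
                default: return (new TimeSpan(7, 0, 0), new TimeSpan(21, 0, 0));
            }
        }

        // Walk forward from 'from', consuming paid minutes only during work hours.
        public static DateTime PaidUntil(DateTime from, int paidMinutes)
        {
            var t = from;
            while (paidMinutes > 0)
            {
                var hours = WorkHours(t);
                if (hours != null && t.TimeOfDay < hours.Value.end)
                {
                    if (t.TimeOfDay < hours.Value.start)
                        t = t.Date + hours.Value.start; // jump forward to opening time
                    int available = (int)(t.Date + hours.Value.end - t).TotalMinutes;
                    int used = Math.Min(available, paidMinutes);
                    t = t.AddMinutes(used);
                    paidMinutes -= used;
                }
                else
                {
                    t = t.Date.AddDays(1); // off duty: skip to the next midnight
                }
            }
            return t;
        }
    }

With this, an SMS sent at 20:50 on Monday paying for 60 minutes consumes 10 minutes before closing and the remaining 50 from 07:00 on Tuesday, ending at 07:50, which matches the example above.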
I could really use some help, or some ideas; I've been stuck on this for quite some time..."} {"_id": "208572", "title": "In embedded, is there any difference between a device driver and a library?", "text": "Assuming a platform with no kernel mode, such as Atmel AVR, is there any difference between a device driver and a library, given that everything is user mode anyway? I ask because I'm thinking about how I want to layer my code. It's like this: +----------------+ | Business logic | +----------------+ | CAN library | +----------------+ | MCP2515 driver | +----------------+ | SPI driver | +----------------+ The SPI \"driver\" has an interrupt handler, and it talks directly to the microcontroller's SPI peripheral, which _sounds_ like a driver, but other than that, I don't see how it's any different from, say, the CAN library, which has high-level functions like \"send this message\"."} {"_id": "39027", "title": "How do I contribute to open source projects?", "text": "What are the mechanics of contributing to open source projects? How are they managed? What exactly is meant by submitting a patch? Should I do some reading up on advanced version control techniques?"} {"_id": "243306", "title": "Do objects maintain identity under all non-cloning conditions in PHP?", "text": "PHP 5.5 I'm doing a bunch of passing around of objects with the assumption that they will all maintain their identities - that any changes made to their states from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? I will give my basic structure here. class builder { protected $foo_ids = array(); // set in construct protected $foo_collection; protected $bar_ids = array(); // set in construct protected $bar_collection; protected function initFoos() { $this->foo_collection = new FooCollection(); foreach($this->foo_ids as $id) { $this->foo_collection->addFoo(new foo($id)); } } protected function initBars() { // same idea as initFoos } protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) { // arguments are passed in using $this->foo_collection and $this->bar_collection foreach($foos as $foo_obj) { // (foo_collection implements IteratorAggregate) $bar_ids = $foo_obj->getAssociatedBarIds(); if(!empty($bar_ids) ) { $bar_collection = new barCollection(); // sub-collection to be a component of each foo foreach($bar_ids as $bar_id) { $bar_collection->addBar(new bar($bar_id)); } $foo_obj->addBarCollection($bar_collection); // now each foo_obj has a collection of bar objects, each of which is also in the main collection. Are they the same objects? } } } } What has me worried is that `foreach` supposedly works on a copy of its arrays. I want all the $foo and $bar objects to maintain their identities no matter which $collection object they become a part of. Does that make sense? EDIT: My understanding is that using the `clone` keyword is the only way to really copy an object in PHP 5+. Using the assignment operator basically just creates another reference to the same object. I'm hoping that return values and `foreach` also work that way."} {"_id": "243307", "title": "Syntax of passing lambda", "text": "Right now, I'm working on refactoring a program that calls its parts by polling to a more event-driven structure. I've created sched and task classes, with sched becoming a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling.
Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main sub. In that sub, I want to store off the \"this\" pointer associated with the meter, as well as the common name for the \"action\" routine. void MeterMaker::Meter_n_Task (Meter * newmeter,) { push(newmeter); // handle non-timed draw events Task t = new Task(now() + 0.5L); t.period={0,1U}; t.work_meter = newmeter; t.work = [&newmeter](){newmeter.checkevent();};<<--attempt at lambda t.flags = T_Repeat; t.enable_task(); _xos->sched_insert(t); } A sample call to it: Meter_n_Task(new CPUMeter(_xos, \"CPU \")); I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but keep running into roadblocks. It's a lot like \"whack-a-mole\" -- pound in something to fix something in one place, and then a new problem pops out elsewhere. Part of the problem is that the sched.h file that is trying to hold the Task Q includes the Task header file. The task file wants to refer to the most \"base\" _Meter_ class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this). void *this_data= NULL; void (*this_func)() = NULL; Note -- I didn't really want to store these in the class, as I _wanted_ to use a lambda in that meter&task routine above to store a routine+context to be used to call the meter's action routine. Couldn't figure out the syntax. But I am running into other syntax problems trying to store the pointers...such as g++: COMPILE lsched.cc In file included from meter.h:13:0, from ltask.h:17, from lsched.h:13, from lsched.cc:13: xosview.h:30:47: error: expected class-name before \u2018{\u2019 token class XOSView : public XWin, public Scheduler { Like above, where it asks for a class name where the class name \"Scheduler\" is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense... Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to store only 1 pointer in the 'Task' class that was a pointer to my lambda that would have already captured the \"this\" value ... but couldn't get that syntax to work at all when I tried to store it into a var in the 'Task' class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-))... I've made quite a bit of progress in other areas in the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in Perl. Sigh. Not sure of the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q."} {"_id": "209852", "title": "Architecting Domain Layer and other modules with dependency injection in mind", "text": "I am new to the Dependency Injection pattern. I am influenced by a link by Mark Seemann. **I am confused about whether an interface for an agent class of some agent module should be included in the domain layer.** _By agent I mean a class / module that interacts with external WCF / web services._ I have a diagram that shows it.
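To make it concrete, this is roughly the shape I mean (a sketch, not real code; only the IProductAgent name comes from my actual design, everything else is made up):

    // Business layer: owns the abstraction, knows nothing about WCF.
    public interface IProductAgent
    {
        Product GetProduct(int id);
    }

    public class ProductService
    {
        private readonly IProductAgent agent;

        public ProductService(IProductAgent agent) // constructor injection
        {
            this.agent = agent;
        }

        public Product LoadProduct(int id)
        {
            return agent.GetProduct(id); // business code stays service-agnostic
        }
    }

    // Agent module: the WCF-specific implementation lives here.
    public class WcfProductAgent : IProductAgent
    {
        public Product GetProduct(int id)
        {
            // call the external WCF service and map its response to Product
            throw new System.NotImplementedException();
        }
    }

    public class Product
    {
        public int Id { get; set; }
    }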
In short, is it OK to have the IProductAgent interface in the business layer while the actual implementation is in an agent module? The confusion arises from the question: **Is calling a service even considered a business rule?** Today a service provides a facility; tomorrow there might be some other means of providing that same facility. Liskov's Substitution Principle. Apologies if I am mixing things up here."} {"_id": "160675", "title": "What do you call classes without methods?", "text": "What do you call classes without methods? For example, class A { public string something; public int a; } Above is a class without any methods. Does this type of class have a special name?"} {"_id": "160677", "title": "The different types of CMS - Pros and cons", "text": "As I understand it, there are three different \"types\" of CMS: 1. **Proprietary** : A CMS built and owned by a company, and altered to meet a client's needs. 2. **Open-Source** : An open/free version of the above. An existing system altered to meet a client's needs. (E.g. Drupal, Joomla, etc.) 3. **Fully Bespoke** : A CMS built from scratch to the client's exact needs. It seems to me that, from a client's perspective, their order of preference (if everything else were equal) would be 3 -> 2 -> 1. There's a danger that option 3 could be riddled with difficult-to-understand code, but that's a danger with the other two as well. As developers, what's best for our clients? I ask this because a competing agency just pitched to one of our clients that open-source CMSes are the preferred solution. While I can see the benefits of using OS software, I can't see how it's better than a fully bespoke solution. As a developer blog put it, > \"Adding features to [an existing] CMS is not easy. It takes more time than > to develop the feature alone, because you need to trick the CMS into doing > something it was not designed to do. This is especially true for features > that write to the database a lot, like community features, or features that > have unusual business logic.\""} {"_id": "58453", "title": "I know how to program, and how to learn how to program, but how/where do you learn how to make systems properly?", "text": "There are many things that need to be considered when making a system; let's take for example a web-based system where users log in and interact with each other, creating and editing content. Now I have to think about security, validation (I don't even think I am 100% sure what that entails), \"making sure users don't step on each other's feet\" (is there a term for this?), preventing errors in many cases, making sure database data doesn't become problematic through unexpected... situations? All these things I don't know how or where to learn; is there a book on this kind of stuff? Like I said, there seems to be a huge difference between writing code and actually writing the right code, know what I mean? I feel like my current programming work lacks much of what I have described, and I can see the problems it causes later, and then the problems are much harder to solve because data exists and people are using it. So can anyone point me to books or resources or the proper subset of programming(?) for this type of learning? PS: feel free to correct my tags, I don't know what I am talking about.
Edit: I assume some of the examples I wrote apply to other types of systems too; I just don't know any other good examples because I've been mostly involved in web work."} {"_id": "194340", "title": "why are noSQL databases more scalable than SQL?", "text": "Recently I read a lot about NoSQL DBMSs. I understand the CAP theorem, ACID rules, BASE rules and the basic theory. But I didn't find any resources on why NoSQL scales more easily than an RDBMS (e.g. in the case of a system that requires lots of DB servers). I guess that keeping constraints and foreign keys costs resources, and when a DBMS is distributed, it is a lot more complicated. But I expect there's a lot more than this. Can someone please explain how NoSQL/SQL affects scalability?"} {"_id": "109192", "title": "Why NoSQL over SQL?", "text": "> **Possible Duplicate:** > When would someone use MongoDB (or similar) over traditional RDMS? How well do SQL and NoSQL go head-to-head? I read somewhere that SQL databases are not well suited for data that is not well structured or that has some graph-iness associated with it. Is that really the case? Apart from Facebook, Google, and some other big players on the web, I don't know how well small players and start-ups have used these tools. I found another similar question about the same here. But I couldn't gather much stats from here. These are certain specific cases, and is there a general pattern (like the one mentioned above) for which these NoSQL databases can be used? How wise would it be for a start-up to go for a NoSQL database if the developers know that the amount of data involved will be a little large and is well structured, but requires frequent CRUD operations? Here on Stack Overflow, one can find questions about when not to use SQL, but are there any scenarios when one should avoid using NoSQL databases? Also, how effective will it be to use both in parallel so that we get the best of both? One last question: do these distributed NoSQL databases perform equally well when they are used in a single-node setup?"} {"_id": "160672", "title": "Wrapping REST based Web Service", "text": "I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) that calls the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API of our own, which we then translate into calls to our partner's, protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format, JSON for web and probably XML for desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner\u2019s API. There probably is not much choice about it though.
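For concreteness, this is the kind of pass-through shape I'm picturing with Web API (a sketch; the partner URL and the model are made up, and content negotiation should give clients JSON or XML depending on their Accept header):

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class BlobController : ApiController
    {
        private static readonly HttpClient Partner = new HttpClient();

        public async Task<BlobInfo> Get(string id)
        {
            var response = await Partner.GetAsync("https://partner.example/api/blobs/" + id);
            response.EnsureSuccessStatusCode();
            // Map the partner's payload onto our own model, so clients only
            // ever see our contract, not the partner's.
            return await response.Content.ReadAsAsync<BlobInfo>();
        }
    }

    public class BlobInfo
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public long SizeBytes { get; set; }
    }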
But, in the back of my mind, I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they have used (I am not attached to this one; these days Azure can host other technologies), and if anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, and not what I am referring to here. Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!"} {"_id": "160673", "title": "Full Trust level: should it be a concern?", "text": "**Should I worry about the safety of my application if it runs at Full Trust level?** If yes, why? What damage could be done? If the trust level should be a concern, why is it the default level for ASP.NET applications instead of Medium? I know all the _yes_ and _no's_ between the Medium and Full trust levels, but I can't see where this could be a **serious** risk to the application and server. Considering this: My ISP wants to (and will) change the trust level to Medium for an IIS ASP.NET MVC application running on a **dedicated** server. This will simply break my application, since it relies heavily on the `System.Reflection` namespace and it uses 3rd-party assemblies which don't have the `AllowPartiallyTrustedCallers` attribute defined. The application runs under a specific user that has read-only access to the application directory and only execute permission on stored procedures on the SQL Server. Authentication is via SSPI, so there are no passwords in web.config. The ISP claims that if they let my application run in full trust they cannot guarantee the security of my server. I have never heard of a single case where full trust was the cause of a security breach. It seems to me that they are not sure about what they are doing. I can't see a security flaw here. The only security flaw I can see in general is passwords being stolen from the web.config, but this is not possible in my current setup..."} {"_id": "241599", "title": "How to avoid oscillation in async event-based systems?", "text": "Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. Now I intend to describe these kinds of systems with data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source will change its state as described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected from the hub or from the other data sources, and that can put the system into infinite oscillation (when async; an infinite loop when sync). For example A -- data source B -- data source H -- hub A -> H -> A -- reflection from the hub A -> H -> B -> H -> A -- reflection from another data source In the sync case it is relatively easy to solve this issue. You can compare the current state with the event, and if they are equal, you don't change the state or raise the same event again. In the async case I have not found a solution yet. The state comparison does not work with async event handling, because there is only eventual consistency, and new events can be published from an inconsistent state, causing the same oscillation.
For example: A(*->x) -> H -> B(y->x) -- can go in parallel with B(*->y) -> H -> A(x->y) -- so first A changes to state x while B changes to state y -- then B changes to state x while A changes to state y -- and so on for eternity... What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.? **update:** I don't think I can make this work without a lot of effort. I think this problem is just the same as the one we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?"} {"_id": "168463", "title": "Sample domain model for online store", "text": "We are a group of 4 software development students currently studying at the Cape Peninsula University of Technology. Currently, we are tasked with developing a web application that functions as an online store. We decided to do the back end in Java while making use of Google Guice for persistence (which is mostly irrelevant to my question). The general idea so far is to use PHP to create the website. We decided that we would like to try, after handing in the project, to register a business and actually implement the website. **The problem** we have been experiencing is with the domain model. These are mostly small issues; however, they are starting to impact the schedule of our project. Since we are all young IT students, we have virtually no experience in the business world. As such, we spend quite a significant amount of time planning the domain model in the first place. Now, some of the issues we're picking up are, say, the reference between the Customer entity and the Order entity. Currently, we don't have the customer id in the Order entity, and we have a list of Order entities in the Customer entity. Lately, I have wondered whether the persistence mechanism will put the customer id physically in the order table, even if it's not in the entity. So I started wondering: if you load a customer object, will it search the entire order table for orders with the customer's id? Now, say you have 10 000 customers and 500 000 orders; won't this take an extremely long time? There are also some business processes that I'm not completely clear on. Finally, my question is: does anyone know of a sample domain model out there that is similar to what we're trying to achieve that would be safe to look at as a reference? I don't want to be accused of stealing anybody's intellectual property, especially since we might implement this as a business."} {"_id": "151637", "title": "Is unit testing development or testing?", "text": "I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested, and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned about them. My reasoning is based on two things. 1. Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts, you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules.
**The interface and the contract** determine when the test passes. But you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications. **The spec** determines when the test passes. 2. I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kinds of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that leads to a false conclusion, because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like a useless academic discussion, but if you work in a strictly formal environment, as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)"} {"_id": "168467", "title": "Proper XAML for Windows 8 Applications", "text": "I have been wanting to understand more about proper XAML design, since most of my current experience is in what I grew up with: Windows Forms. More specifically, though it is really a specific subset of XAML, I am interested in Windows 8 application development. I have read considerable parts of the 70-502 exam book by Microsoft, and of course I have visited the Microsoft site to learn what is supposed to go where, and with how many pixels, to meet the Windows 8 design guidelines. All these resources, however, focus on very small-scale problems: how to make a button, how to connect it to a controller... I am looking for resources that teach me how to translate a design into XAML. I am aware of all the individual components, though not in depth: DockPanels, StackPanels, buttons, boxes, borders and styles. But I don't have a resource that tells me how to combine them into a high-quality design. Does anyone have resources that teach wielding XAML at an application scale?"} {"_id": "94787", "title": "Becoming a better C/Java programmer", "text": "I want to get better at coding in C and Java, but I've currently hit a plateau on my learning curve. I can't find any \"advanced\" tutorials on the net; most of what I find are introductions to things I already know. I don't really know where to go from here to get better. How do I get past the plateau? Would reading through the Linux kernel's source help? Is there a site, or something with advanced exercises or tasks to do? Not something that requires expert knowledge, and nothing basic, but something in the middle. Something with programming puzzles or riddles that take a couple of hours at most, not months. By advanced, I mean that I already know the syntax pretty well. I haven't mastered it, and still make mistakes here and there, but I wouldn't be completely useless without my IDE of choice (Eclipse). In Java, I have a basic grasp of collections, the Swing library, and the JDBC library. I also have some basic experience with Ant and JUnit. I can make basic 2D games and utility programs with local database connectivity. I'm reading up on Java's sound API. As for C, it's pretty rusty. I do know the syntax, and know how to use make, but I don't know many libraries. I've tinkered with sqlite3, and made a simple Pac-Man-type game that runs in a Win32 console a few years ago, but I never really advanced past the basic syntax. Also, I recently learned about function pointers.
Basically, I've gotten the lessons most books have in their first few chapters, but don't really know where to go from here. I am aware that the best way to improve is to code, but I don't really know _what_ to code. I don't really want to keep making the same kinds of programs again and again, but I can't think of any short but new project."} {"_id": "164745", "title": "Reuse the data CRUD methods in data access layer, but they are updated too quickly", "text": "I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, seem to just add new methods to them rather than reuse the existing methods, because: 1. We don't know whether an existing method is what we need. 2. Even if we have the source code, do we really need to read other people's code and then make a decision? 3. It is updated too quickly; we do not have time to get familiar with the DAO API. Back to the question: how do you solve this in your project? If we say \"reuse\", it really needs to be reusable rather than just an excuse."} {"_id": "62241", "title": "Can I sell apps on Apple's US app store?", "text": "Given Apple's bizarre geographical segregation, I was wondering whether, as an Italian, I am allowed to create an English application sold on the US app store for $0.99 (regardless of the current USD <=> EUR exchange rates). It seems a really stupid question, and I really hope that I can, but who knows: since I can't buy them, maybe I can't sell them either."} {"_id": "157019", "title": "What are some good ways for an experienced .NET client developer to start learning web development?", "text": "My background is as follows. I've been programming since I was young, worked in BASIC, then ASM for the M68K, then C, C++, and now I've spent the past 5 years in .NET becoming a rather good .NET developer. Somewhere along the line, I realized I have no clue how the internet works: I don't know how TCP/IP works, I have no clue what HTML is capable of, or how PHP plays into the picture, or why Ruby on Rails should make me happy. I have no experience with LAMP or IIS, and I only have an inkling of a clue what SharePoint is. In short: I'm internet-stupid. I am intelligent and a capable developer, but I lack even rudimentary skills (routing, web hosting, hosting a custom DNS for intranet domain resolving, etc.). I have no clue where to start on web development learning. Here's the good news! I know what I'd like to learn to do (at least for now!): I want to be able to develop my own personal domain to host WSDL services, to power my applications, to showcase my work, to host my living resume, to market my brand, so to speak. Stack Overflow, I beg you: help this disconnected but talented developer become part of the current era instead of being a relic stuck in the previous great golden age of software! Note: I'd prefer to stay wedded to MS for now, for the most part, as I'm already familiar with the .NET ecosystem. I'm okay branching out, but not ready to do so (I think?)."} {"_id": "94781", "title": "Handling out of hours support", "text": "This isn't a programming question per se - it's more about application management. I work for what started out as quite a small company, which has grown rather quickly and become very successful in a relatively short period of time.
Nearly all trade happens online, and our IT team is very small relative to the amount of money that gets pumped through the site (it's me and two others). There are occasionally \"blips\" in the framework outside of hours, which inevitably result in either me or one of my colleagues receiving a phone call. Our call centre isn't particularly great at analyzing errors etc., and 99% of the time the solution is actually a data entry / user training issue. 1% of the time it's a real bug. Our company is now insisting that we three developers figure out an on-call rota where we can be contacted, with assurances that we won't be out of town etc., unpaid whether we are called or not, in case something goes wrong out of hours. Not all of our skills overlap, so I may not be able to quickly diagnose and fix a problem that one of the others could (e.g. database timeouts would need the DBA, email issues would need our infrastructure guy, website issues would need me). None of us are particularly happy about this, so I was wondering if anyone else has had a similar experience and how you dealt with the situation. EDIT: Thanks all for the suggestions - I've marked one as the right answer purely on the basis that it made me feel a bit better, but all the suggestions are pretty much bang on what we've been discussing internally. We do feel it's a little unfair that we now have to give up our weekends so everyone else can get rich. We are attempting to push towards a more mature development process to minimize bugs too, but that would require the rest of the company to change as well, and they _really_ don't want anything to change outside of IT!"} {"_id": "79006", "title": "How to maintain VCS changeset/revision simplicity in the face of accidental partial commits?", "text": "Working on a project with a couple of classmates, I've noticed that we get a lot of partial commits which are then corrected: Rev 249 log: committing change foo, yeah! Rev 250 log: oh dear, forgot these files! Rev 251 log: oh my, how could I possibly forget another file in the *same* change? The obvious answer is to be more careful. But as of 2011, programmers are still (almost) human, and error-prone. The problem is that this causes ambiguity when reverting or merging. Do we leave them there? Do we modify the commit history?"} {"_id": "34208", "title": "How to convince my boss to start using CodeIgniter or Yii at work?", "text": "I work for a web development company, and during the one year I have spent here, there have been no improvements in the technologies we use to build our websites. I introduced jQuery to them (buying Novice to Ninja by SitePoint), and now I want to get rid of all this crappy from-scratch PHP and use a PHP framework instead. So what reasoning can I use to convince my boss to switch, and how can I convince the other developers too?"} {"_id": "34209", "title": "Would a multitouch capable PC allow me to do Android development simulating the touch UI without an Android device?", "text": "I recently purchased a Samsung Galaxy Tab as a reference implementation (phone and first-gen Android tablet) of Android 2.x for app development. I have noticed a slew of Android 3.0 slates being talked about at CES 2011 (Motorola XOOM, etc.). If I had a multitouch PC with the Android SDK/emulator on it, would this allow me to more closely approximate device simulation by allowing user input via the multitouch screen? Would it work via touch just like Windows 7 recognizes touch as mouse-style input?
Has anyone done this?"} {"_id": "237456", "title": "Design review: how well does my object oriented design fit the SOLID principles?", "text": "_This thread will be long, but I will try to make it as short as I can. Thank you._ I have recently implemented a relatively simple program. What this program does is generate a simple piece of music and play it. I will describe the object oriented design of the program and would like to hear your review of the design, **especially how well it fits the SOLID principles**, where it doesn't, and how I can improve my design to be more SOLID in the future. _(If there's something in the description that isn't clear enough, please tell me.)_ I have divided my design into two parts. **1-** The \"behind the scenes\" system that generates the content - a series of notes and chords (as will be explained later) - and is responsible for playing it. **2-** The system that provides a UI to control the \"behind the scenes\" system. This system tells the other system when to create new music and when to play it. I will describe each system separately. * * * **System 1 - responsible for generating and playing music.** The way music is created is by first generating a chord progression (a series of chords to be played one after the other), and then generating a melody based on that progression. I will describe how this is designed 'from the bottom up'. **Generating a chord progression**. Class `Note`: the most basic class. Responsible for playing sound files, and is set at instantiation to play a specific sound (C, D, etc.). It has an interface to play() and to stop() the playback. It also has a method `getName()` to identify it (the name is fed to the Note through the constructor). Class `Chord`: this class is instantiated with an array of three `Note` objects and a name. For example, a `Chord` named `cMajor` will be instantiated with the Notes `c`, `e`, and `g` and the string \"CMajor\". `Chord` has an interface to play() and stop() the notes it's composed with, has a method `getName()` and a method `getNotes()` that returns the array of Notes. Class `Progression`: this class is instantiated with an array of `Chord` objects, and has logic to play and stop them one after the other at a particular tempo (speed, in BPM - beats per minute). It has a method `play(int bpm)` and a method `getChords()` that returns the array of Chords. Class `ProgressionGenerator`: implements an algorithm to create a particular series of `Chord` objects, and instantiates a `Progression` with these chords. To summarize: a Progression is composed of Chords, which are composed of Notes. The ProgressionGenerator instantiates a Progression with a particular series of Chords. **Generating a melody**. Class `MelodyNote`: composed with a single `Note`. It has an attribute `double duration` which specifies how long this note is to be played relative to a measure/bar of music (e.g. a 'half note' is specified as 0.5, a 'quarter' as 0.25). It has a method `getDuration()` and an interface to play() and stop() its inner `Note` (it basically delegates to the Note). Class `Melody`: composed with a series of `MelodyNote`s. Has logic to play the series of notes at a particular speed. It uses each `MelodyNote`'s `duration` to know when to stop a note and play the next one. Class `MelodyGenerator`: implements an algorithm to generate a series of `MelodyNote`s and instantiates a `Melody` with it. It does this based on a `Progression`. The method signature is `Melody generate(Progression prog)`.
**So to summarize:** Progression **--> composed with** Chords **--> composed with** Notes Melody **--> composed with** MelodyNote **Each object controls its inner objects and exposes them.** For example, Chord has logic to play its inner Notes, and has a method `getNotes()` that returns them to whoever needs to know. For example, `MelodyGenerator` uses `progression.getChords()` in order to build a melody according to a particular series of chords. It also uses `chord.getNotes()` when choosing which notes to place over a chord. **One last thing:** the generators use the singleton class `NotesAndChordsSupplier` to get the Note and Chord instances they need. It hands them existing instances. * * * **System 2 - responsible for providing a UI to control the creation and playback of music.** This system is designed using the **MVC pattern.** The Model encapsulates the creation and playing of music. The View is the UI that provides buttons to tell the Model to play and make new music. The Controller is notified by the View when buttons are pressed and invokes the proper actions on the Model. **The Model** The Model contains a `Progression`, a `ProgressionGenerator`, a `Melody` and a `MelodyGenerator`. When it's told to generate new music, it simply delegates the task to the generators. When it's told to play music, it delegates the task to the melody and progression. Simplified code: public void makeNewMusic(){ progression = progGenerator.generate(); melody = melodyGenerator.generate(progression); } public void playMusic(){ // this actually starts two separate threads, but that's irrelevant. progression.play(); melody.play(); } The Model is connected (in a loosely coupled manner) to two objects: its `Melody` and the `View` - both via the Observer pattern. The Model implements two interfaces: `Observer` and `Observable`. It's registered as an observer of the Melody, and the View is registered as an observer of the Model. When the Melody finishes its playback, it notifies its observer - the Model. When the Model is notified, it notifies its own observer - the View. This way, the View gets notified when playback is finished, so it can re-enable its UI buttons that were disabled during playback. **The View** The View is the UI and it has two buttons: `Play` and `New Tune`. They invoke `playButtonPressed()` and `newTuneButtonPressed()` on the Controller, respectively. The View also features `enableButtons()` and `disableButtons()` methods. As I said, the View is registered as an observer of the Model. It is notified when playback of the music is finished, so it can re-enable its buttons that were disabled by the Controller when playback started. **The Controller** When the View calls `newTuneButtonPressed()` on the Controller, it simply delegates to the Model: `model.makeNewMusic()`. When the View invokes `playButtonPressed()` on the Controller, two things happen: 1- The Controller calls `disableButtons()` on the View. 2- The Controller calls `playMusic()` on the Model. All buttons on the View are disabled by the Controller when music playback starts. The Controller registers the View as an observer of the Model, and the Model as an observer of its member Melody. * * * **UML class diagram to illustrate the architecture** _(Please note: in this post I omitted a class `Scale`, which isn't important. Ignore it in the diagram)._ ![enter image description here](http://i.stack.imgur.com/Xi0mc.png) * * * **Thank you for reading all of this.
I will appreciate any kind of criticism of the object oriented design of the program.** **Especially: how well are SOLID and OO principles integrated in my design, and how can I design more in the spirit of SOLID in the future?**"} {"_id": "237453", "title": "How can I identify a namespace without using a string literal?", "text": "My team has a lot of IoC conventions that look something like... if (type.Namespace == \"My.Fun.Namespace\") { // do stuff } Of course, maintaining this kind of thing becomes brutal after a while, and we would rather not. The only option I could think of was to store these all in one place, as string constants or something of that nature, but that isn't a lot better... We have been considering making classes that basically serve as namespace identifiers, in order to have almost an anchor type to key off of, like, say... if (type.Namespace == typeof(My.Fun.Namespace.MarkerClass).Namespace) { // do stuff } But that just feels weird. So is there another way to solve this one that we just don't know about? If not, has anyone had any luck with the whole marker class concept?"} {"_id": "34200", "title": "How would you advocate not using a shared spreadsheet to track bugs / issues?", "text": "In our company, the developers want to use a proper bug tracking tool to manage issues in our application. The management, however, insists on using a shared spreadsheet (formerly a shared Excel file, now a spreadsheet on a web-based solution allowing concurrent access). Their argument is that the spreadsheet allows them to have a more high-level view of the state of the project, as they can see how many bugs are open at a quick glance. It also allows them to see who is working on each bug, and to get an estimate of the time required to close them all (as developers are required to fill in a time estimate for the bug they are working on). As you can understand, this is not really practical for the developers to use (bug tracking software was invented for a reason). So how can I advocate bug tracking software to ease the work of the developers? As a bonus, which software would you recommend that would allow the management to get their feedback (number of open bugs, who is working on them, time estimates) with a high-level view?"} {"_id": "143777", "title": "Compatibility Test and other testing methods to use while building software", "text": "I will be adding a feature to the software and will have to update or modify some of the public API of the present open source software. What steps could be taken to ensure the compatibility of the software? What testing methods are used in the open source world to test newly added features? The open source program is: Xapian"} {"_id": "175644", "title": "New grad; To overcome a complete lack of experience, should I ditch a creative pet project in favor of one that would demonstrate more applicable skills?", "text": "I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to learn to improve their development productivity by using vim, specifically with Python, though learning to code faster with vim should be transferable to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way.
However, I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile application development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and if the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the backburner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea, and something achievable for me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape which I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. ## So in brief, the common denominator in answers to the question \"How can I overcome experience requirements for a job\" seems to be \"Show off your own project.\" I want to know WHICH project I should work on to best increase my chances of getting a job out of college, keeping in mind that I have no experience. I believe this question is applicable to any new grad that lacks demonstrable experience."} {"_id": "175641", "title": "Assignment of roles in communication when sides could try to cheat", "text": "Assume two nodes in a peer-to-peer network initiating a communication. In this communication, one node has to serve as a \"sender\", another as a \"receiver\" (role names are arbitrary here). I'd like the nodes to assume either role with approximately equal probability. That is, in N communications with various other nodes a given node would assume the \"sender\" role roughly N/2 times. Since there's no third-party arbiter available, the nodes should agree on their roles by exchanging messages. The catch is that we can encounter a rogue node which would try to become the \"receiver\" in most or all cases, and coax the other side to always serve as the \"sender\". I'm looking for an algorithm to assign roles to the sides of a communication so that no side could get a predetermined role with high probability. It's OK for the side which is trying to cheat to fail to communicate."} {"_id": "203492", "title": "When to use HTTP status code 404 in an API", "text": "I am working on a project, and after arguing with people at work for more than an hour, I decided to see what people on Stack Exchange might say.
We're writing an API for a system. There is a query that should return a tree of Organizations or a tree of Goals. The tree of Organizations is the organization in which the user is present; in other words, this tree should always exist. In the organization, a tree of Goals should always be present (that's where the argument started). In the case where the tree doesn't exist, my co-worker decided that it would be right to answer with status code 200, and then started asking me to fix my code because the application was falling apart when there was no tree. I'll try to spare the flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started and I got the message that I didn't understand the HTTP status code scheme. So I'm here asking: what's wrong with 404 in this case? I even got the argument \"It **found** nothing, so it's right to return 200\". I believe that it's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404. # More info, I forgot to add the URLs that are fetched.

    Organizations
    /OrgTree/Get

    Goals
    /GoalTree/GetByDate?versionDate=...
    /GoalTree/GetById?versionId=...

My mistake, both parameters are required. If any versionDate that can be parsed to a date is provided, it will return the closest revision. If you enter something in the past, it will return the first revision. If queried by Id with an id that doesn't exist, I suspect it's going to return an empty response with 200. ### Extra Also, I believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present. # Also, I got linked this (there was a similar one, but I can't find it): http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png"} {"_id": "35159", "title": "Haskell Using Source File Problems", "text": "I recently started using the Haskell Platform. I created a source file using WordPad and named it add. I tried double clicking it so I can open it in GHCi, but I get <[1 of 1] Compiling Main (C:\\add.hs, interpreted) C:\\add.hs:1:6: parse error on input '\\' Failed, modules loaded: none.> What do I do so I can use my source file? Should I use another text editor? Thanks"} {"_id": "35152", "title": "Distributed Development Tools -- (Version control and Project Management)", "text": "I've recently become responsible for choosing which source control and project management software to use for the company that employs me. Currently it uses Jira (project management) and Subversion (version control). I know there are many other options out there -- the ones I know about are all in this article http://mashable.com/2010/07/14/distributed-developer-teams/ . I'm leaning towards recommending they just stay with what they have, as it seems workable and any change would have to be worth the cost of switching to, say, GitHub/Basecamp or some other solution. Some details on the team: * It's a distributed development shop. Meetings of the whole team in one room are rare. * It's currently a very small development team (three developers).
* The project management software is used by developers and a product manager or two. What are your experiences with version control and project management web applications? Are there any you would recommend that you think are worth the switching cost - the time to learn the new services and implement the change? Edit: After educating myself further on the options, it appears DVCSs offer powerful benefits that may be worth investing in now, as opposed to later in the company's lifetime when the switching cost is higher: I'm a Subversion geek, why I should consider or not consider Mercurial or Git or any other DVCS?"} {"_id": "35151", "title": "Common mistakes which lead to corrupted invariants", "text": "My main source of income is web development, and through this I have come to enjoy the wonders of programming, as my knowledge of different languages has increased over the years through work and personal play. At some point I reached a decision that my college education was not enough and that I wanted to go back to school to get a university degree in either computer science or software engineering. I have tried a number of things in my life, and it took me a while before I found something that I feel is a passion, and this is it. There is one aspect of this area of study that I find throws me off, though. I find the formal methods of proving program correctness a challenge. It is not that I have trouble writing code correctly; I can look at an algorithm and see how it is correct or flawed, but I sometimes struggle to translate this into formal definitions. I have gotten perfect or near-perfect marks on every programming assignment I have done at the college level, but I recently got a swath of textbooks from a guy from the University of Waterloo and found that I have had trouble when it comes to a few of the formalisms. Well, at this point it's really just one thing specifically: it would really help me if some of you could provide some good examples of common mistakes which lead to corrupted invariants, especially in loops. I have a few software engineering and computer science textbooks, but they only show how things should be. I would like to know how things go wrong, so that it is easier to recognize when it happens. It's almost embarrassing to broach this subject, because formalisms are really basic foundations upon which matters of substance are built. I want to overcome this now so that it does not hinder me later."} {"_id": "35150", "title": "How to organize large Rails application?", "text": "I am working on a large (ERP-level) Rails project. We have 150 tables and more than 150 models. It takes minutes to find a model. Should we add all models under the models folder, or should we put them in different subfolders? The same thing goes for controllers and views."} {"_id": "28693", "title": "Role of Microsoft certifications ADO.Net, ASP.Net, WPF, WCF and Career?", "text": "**I am a Microsoft fan and .Net enthusiast.** I want to align my career along the lines of current and future .Net technologies. I have an MCTS in ASP.Net 3.5. The question is about the continuation of certifications and my career growth and maybe a different job! **I want to keep pace with future Microsoft .Net technologies.** My current job, however, doesn't allow for that, so I aim to do .Net-based certifications to stay abreast of the latest .Net technologies. My questions: 1. What certifications should I pursue next?
I have MCTS .Net 3.5 WPF (Exam 70-502) and MCTS .Net 3.5 WCF (Exam 70-504) in mind so that I can go into Silverlight development and seek jobs related to it. 2. What other steps do I need to take in order to develop professional expertise in technologies such as WPF, WCF and Silverlight when my current employer is reluctant to shift to the latest .Net technologies? I am sure that there are a lot of people around here who work with .Net technologies and have industry experience. Being a newcomer at the start of my career, I need to make the right decision, so I am seeking this community's help in guiding me down the right path."} {"_id": "197118", "title": "Spoiled by Python convenience- and productivity-wise, spoiled by C++ speed-wise. Now unhappy with both", "text": "I'm currently struggling with choosing how to proceed as a programmer. I mainly programmed games and would like to continue. For about 5 years or so I just used C++ and OpenGL, so I spent a lot of time on infrastructure, strange bugs, and mostly getting basic things to work. A friend of mine then recommended Python, and after initially being averse to it for not being as explicit and formal as I was used to, I was shocked by how much more productive I could be and how much progress I could actually make in a very small amount of time. So currently I'm working on a multiplayer shooter, and I repeatedly find myself struggling with Python not being as fast as I might want it to be. I know that I have to approach writing efficient code in Python very differently now, but even with a little help from friends that are more experienced with Python, there is just too much going on sometimes (and extrapolating from this, I know that I will end up stuck). What language can you recommend which offers high productivity, is memory-safe (chasing memory bugs is what I really hated) and as high-performance as I can get, but is still mature enough to be used for kind-of-serious projects (games-related), and maybe even mature enough that people have already spent some time on OpenGL bindings or various libraries for sound and the like (alternatively, easy access to shared libraries written in C)? Easy cross-platform support is a big plus! So no .NET, please. Is this even possible?"} {"_id": "28697", "title": "Code cleanup methods and/or code coverage tools", "text": "I have an application at work which I started work on about 7 years ago. This was my first engagement with ASP.NET web applications, and coming from a Win32 background I had a hard time adjusting. In all this time, both the project and I have \"matured\", and now most parts of it are well defined, properly structured and readable. However, there is a significant piece of it that is \"old junk\". By that I mean old, obsolete code that was not working well or was not properly structured, which has been replaced but left in the source files instead of being removed. I must note here that I was not the sole developer, but did most of the coding throughout. What I want to do, whenever I have free time at work, is to remove the old obsolete and unused code. Are there any techniques/methodologies that I can use/follow in order to do this more efficiently? Any tools perhaps? **EDIT: (clarification based on feedback)** The code I am talking about is not used anywhere, and is not called from anywhere in the project.
What I need is a way to find out where the code resides, since we are talking about a large codebase, so something in the way of code coverage tools would help. Any suggestions for ASP.NET projects? (The particular case is in Web Site project form.)"} {"_id": "197116", "title": "What data structure is suitable for implementing dynamic huffman encoding and decoding on a piece of text?", "text": "Some pseudo code or resources would be appreciated. I was thinking of implementing it in the form of a BST stored in an array. However, not all operations can be performed easily using this approach. I am open to using the STL for this purpose as well. My main goal is to implement it in the simplest manner possible."} {"_id": "148894", "title": "How to use unit tests as a source of information?", "text": "A colleague of mine was once at a seminar about agile development, where he heard it is possible to use unit tests as technical documentation - something like using unit tests as an example of how to use the class. A quick Google search turned up _TDD and Documentation_, which shows it should be possible. But looking at our code, I see that we obviously failed to implement unit tests in such a way. In my opinion, unit tests are there to test code as a minimal unit, even with the help of mock and fake classes and functions. So, the questions are: * Isn't it the task of functional tests to show how a class (or set of classes) should be used? * If it is possible to use unit tests as technical documentation, are there some guidelines on how to implement such unit tests?"} {"_id": "222953", "title": "A Web Service to collect data from local servers every hour", "text": "I'm trying to find a way to collect data from different servers around the world. Here are the details: There is only a single PowerShell script on the servers, which encrypts data (a simple CSV file) and sends it with the preferred method (an HTTP/HTTPS POST could work). There is no further control over those servers. I can't install any service, process, etc. I can only configure the script to execute every hour. The script will also have an encrypted username/password/license key for every server. It will compress the data and send it to me along with this information. So I need a service in the cloud (I'm not sure if a web service is the right solution) that will: receive the data that is sent from the servers; authenticate each request to recognize the sender using the license key/username/password; and, most importantly, redirect/send this file to my SQL Server in the cloud (Azure). It should also separate data according to the customer information in the license key, so that every customer's data will be stored in dedicated DBs/tables on my SQL Server. All the processes above should be completed automatically, with no manual steps. Question: Is a web service (SOAP or RESTful) the right solution for that?"} {"_id": "222956", "title": "What is the accepted practice for string resource management in Java?", "text": "In Android, final strings are stored in a `strings.xml` file, which makes it easy to update text, reuse strings and translate an app to a different locale -- because everything is in one place. However, in all the examples that I have seen of desktop applications, all the strings have been hardcoded into each class. Is hardcoding strings into source files standard practice for desktop applications?
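The closest analogue I've found in core Java is `ResourceBundle` backed by per-locale `.properties` files - here is a minimal sketch of what I mean (the file, class and key names are just examples I made up):

    import java.util.Locale;
    import java.util.ResourceBundle;

    public class Messages {
        public static void main(String[] args) {
            // Looks up messages.properties, messages_de.properties, etc. on the
            // classpath and picks the best match for the requested locale.
            ResourceBundle bundle = ResourceBundle.getBundle(\"messages\", Locale.GERMAN);
            // Prints \"Hallo\" if messages_de.properties contains the line: greeting=Hallo
            System.out.println(bundle.getString(\"greeting\"));
        }
    }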
Or is there a better way to do this?"} {"_id": "222957", "title": "Recommended architecture for an interactive table widget with multiple behaviors in JavaScript / jQuery", "text": "Use case: For an administration UI, I want an interactive table widget with a number of behaviors: * Collapse / expand (yes, this means the rows are a hierarchy) * Update of data in a child cell based on a parent cell. * Hiding or revealing of a row or an entire sub-tree based on a value update. * Two or more form elements in each row that may depend on each other. E.g. a checkbox that will mess with the value of a text field in the same row. * DOM manipulation on newly created rows. E.g. the checkbox might be turned into a more visual kind of widget. In another use case, I have the following additional behaviors: * Adding and removing rows, manually or automatically. * Drag and drop of rows. (probably not in combination with the collapse/expand, but who knows) The problem: * Ideally, we want each of these behaviors to be implemented as a separate and independent module / plugin. BUT * If one behavior changes the DOM, then other behaviors need to know about it. E.g. if you insert a row, then the collapse/expand needs to know, to decide whether the new row should be shown or hidden. And the newly added row needs new event listeners from various other behaviors. * There might be behaviors that are triggered by the state of other behaviors. E.g. each row might contain a text field, and depending on the value, the row must be moved up or down (drag/drop). The question: I have had to deal with this kind of problem a few times, and I generally managed to get it to work. However, I always hated the architecture that came out of it. What I'd like is an approach that allows creating the different behaviors as independent modules, and then somehow plugging them together. Some more specific questions: * Where should each of the behaviors keep its state information? * What about state information for a specific table row? Should it be stored as a variable on the DOM element (e.g. `$('tr')[5].isExpanded = true`), or rather in a data container in the module itself? * How can modules know about DOM updates? * How should modules talk to each other? * How should the code of each module be structured? What kind of API should it expose? * How do I handle multiple instances of this kind of widget on one page? (In this case, global instances don't work.) Example: https://drupal.org/project/multicrud (sorry, all I have atm is Drupal modules)"} {"_id": "222958", "title": "Win/loss code that does not make a 1-0 record better than someone at 20-3", "text": "OK, I'm just looking for a win/loss code example. This can be in any language; I just want the outline. I'm fairly new to programming, so dummy it up for me :) I can do (wins - losses) / (total wins and losses). I'm guessing that is a good standard win/loss ratio, but I don't want a new person that is 1-0 to be ranked higher than someone that is 20-3. Any help is appreciated, and thank you. EDIT: The chess-style rating systems are a little more than I need. I just want a ranking system based on wins/losses. So let's say 20-3 is top of the league right now; he is, say, 23 weeks in so far. If one guy comes in and wins his first match against anyone, I don't want him to take the #1 spot over people that have been there longer and have a great winning record. To respond to ampt: maybe he will be the best in the league, but I don't want him instantly there because he had one good match. Not sure if that clarifies any more. I didn't really follow Doc all the way.
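From what I've been able to find since posting, the usual fix seems to be ranking by a confidence interval on the win fraction instead of the raw ratio. Here is a rough sketch of the lower bound of the Wilson score interval (the formula is from Evan Miller's \"How Not to Sort by Average Rating\"; the Java translation is mine, just to show the kind of thing I mean):

    // Lower bound of the 95% Wilson score interval for wins/(wins+losses).
    // A 1-0 player scores about 0.21 here, while 20-3 scores about 0.68,
    // so the 20-3 player still ranks higher.
    static double ranking(int wins, int losses) {
        int n = wins + losses;
        if (n == 0) return 0.0;          // no games played yet: rank at the bottom
        double z = 1.96;                 // ~95% confidence
        double p = (double) wins / n;
        return (p + z * z / (2.0 * n)
                - z * Math.sqrt((p * (1.0 - p) + z * z / (4.0 * n)) / n))
               / (1.0 + z * z / n);
    }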
As for Doc's answer: it looks as if he is hindered in the list up to his 11th game. Not sure if that's what you meant there. Thanks again for all the responses."} {"_id": "252700", "title": "Creating fast, realtime dialogue search for television scripts", "text": "We have a database of television scripts and we'd like to search it, getting results as we type. We often remember words or snippets of dialogue but can't remember exactly what was said or what episode it was said in: \"What was the episode where George goes against his instincts and walks up to the girl at the diner and tells her he's unemployed?\" With this tool you'd type \"George unemployed\" and the result with context (and possibly other hits) would pop up:

    EPISODE: \"The Opposite\"
    George: Excuse me, I couldn't help but notice that you were looking in my direction.
    Victoria: Oh, yes I was, you just ordered the same exact lunch as me.
    (G takes a deep breath)
    George: My name is George. I'm unemployed and I live with my parents.
    Victoria: I'm Victoria. Hi.

My question is about the best way to design a data structure to implement this kind of search. My idea so far is to store a lookup table in an in-memory data store such as Redis. To create the table, first I'd give each script (episode) an id, and then I'd parse the script into lines based on the person speaking. So a particular piece of dialogue could be located by an episode id paired with a line id. Next I would process each line, creating a data structure indexed by all words appearing in any script, and storing the episode/line location of every instance the word was used:

    apple  -> [ { scriptid: 9, lines: [11,99] }, { scriptid: 21, lines: [103,211,214] } ]
    orange -> [ { scriptid: 2, lines: [101] } ]

We could pair this data structure with a simple scoring algorithm, so that when a user searched for multiple words, we'd only show matches where both words occurred in the same script, and give a higher rank to locations where the words appear closer to one another. There are other details that, ideally, would be accounted for, such as matching plurals and misspelled words. Is this a reasonable approach? How could it be improved? What other details should I consider?"} {"_id": "67740", "title": "Is having decrypted compressed files in an iPhone app a problem on submission?", "text": "I am about to submit an iPhone application. I download encrypted ZIP files from a remote server, and of course I decrypt the files to display them to the user. Does that mean I need to answer YES to \"Does your application have encryption?\" when I upload my binary?"} {"_id": "252702", "title": "How do you make decorators as powerful as macros?", "text": "Quick background: I am designing a Pythonic language that I want to be as powerful as Lisp while remaining easy to use. And by \"powerful\", I mean \"flexible and expressive\". I've just been introduced to Python function decorators, and I like them. I found this question that suggests decorators are _not_ as powerful as Lisp macros. The third answer to this question says that the reason they're not is that they can only operate on functions. But what if you could use decorators on arbitrary expressions? (In my language, expressions are first-class, just like functions and everything else, so doing that would be fairly natural.) Would that make decorators as powerful as Lisp macros? Or would I have to use syntax rules, like Racket and Honu, or some other technique? **EDIT** I was asked to provide an example of the syntactic construct.
For an \"expression decorator\", it would simply be a decorator on an expression (where \"$(...)\" creates an expression object): @Decorator(\"Params\") $(if (true): printf(\"Was true\") else: printf(\"Wasn't true\")) Also, if I _do_ need to use syntax rules, as mentioned above, it would use a pattern matching syntax: # Match a bool expression, a comma, and a string. macro assert(expr:bool lit string) { e , s }: if (!e): error s else: printf(\"All good\") The above would match this: assert i == 0, \"i should have been 0\" That's a rough sketch, but it should give you an idea."} {"_id": "252704", "title": "Lazy-initialization vs preprocessing in image-based iOS app", "text": "I am making an iOS app which contains museum gallery: user can look through exhibits and then select info about it, etc. On the exhibit screen I have two ScrollViews that are different, but are being fed by common data source. Each view is a set of images. Two top grey squares are the previews and the bottom one (where you can see the picture) is the main view. All scrollviews are lazy-loaded. The question is, do I need to transform preview images to proper size on-the- go, or somehow pre-process them to make loading faster and use that pre- processed and resized images in previews? The benefit of lazy-initialisation here is lower waste of resources, but slower loading, compared to ready images. On the other side, preprocessed images will waste storage and network (images will be loaded only once and then will be stored locally). What would be better from performance point of view, taking in account that app has to run smoothly down to iPhone 4 (oldest with iOS 7)? Is it good idea to combine two methods and cache lazy-loaded images?"} {"_id": "252707", "title": "what data storage method should I use?", "text": "I am currently writing a program and among other things need to store data. The program listens to two computers talking to each other in a known protocol (the program know what protocol it is,the user give the program the protocol structure ), if the packet that arrived to the program has unseen value in one or more of the fields it add it to the database. I am not sure what data storage method(such as sql server,xml etc.) to choose since I don't know whether the data stored in the database would be large or not, it depends on the computer's conversation Also I will need to be able to get all the data from the database with no trouble and might need to extract some data and not the other (for example all the packets with the value 24 in field x) By protocol I can't just give you one because the user of the program defines which protocol the conversation is in. By conversation I mean that two computers talk to each other it doesn't matter if a user operate the computers or not. The maximum number of entries in the database would be the number of variations that the packet in the certain protocol can be. I mean if the protocol is 8 bytes long the maximum number of entries would be 2^64 but if the header is longer the number goes up, but the number of entries could be less depending on the conversation and it is out of my control. 
The program will only store unique headers; I mean that the same combination of values in the header won't be in the DB twice: the program stores a header only if it is unique and isn't already stored in the DB. I thought of using an SQL server such as MySQL, but I am not sure which data storage method would suit this situation the best."} {"_id": "252708", "title": "Find common ancestor", "text": "![enter image description here](http://i.stack.imgur.com/GtimL.jpg) Given X number of leaves (the ringed leaves in the picture) in an unbalanced tree with depth around 100-1000 and a total number of nodes around 15 000 000, I'm looking for the first common ancestor of those leaves. What is the most efficient way to achieve this?"} {"_id": "250699", "title": "How come the C++ standards committee introduces a keyword like nullptr and gets away with it?", "text": "That must have broken a lot of people's code bases, right? Everyone who had a variable named \"nullptr\" (which I think would have been fairly common) had to find \"nullptr\" and replace it with \"something_else\" before being able to compile with C++11. Does the standards committee assume that people will do that much \"adjusting\"?"} {"_id": "89817", "title": "Can Hadoop be made production-stable on a Windows platform?", "text": "Is anybody using **Hadoop** on **Windows** (Win32 or Win64) in **production** for serious work? * If you've tried it and rejected this combination, can you give your (top) reasons? * If you managed to make it stable, can you give an impression of how much work it took? * * * **Background:** My company is 100% Windows. There's not much I can do about this other than use the tools I'm given. According to the Hadoop documentation, **only** UNIX-like platforms are recommended for production environments. Windows environments are currently considered to be experimental. I've tried some of the basic Hadoop setup on Win32, and there seem to be a lot of gotchas, and I'm starting to suspect that this might be a foolish line of research. I understand that the Hadoop scripts depend to some extent on UNIX features such as Bash, rsync and SSH, which could be provided by Cygwin - however, I'm not planning on becoming a pioneer. I'd like to run something which is somewhat standard. I want a decent chance of being able to get some community support if there's something I cannot solve all by myself. I'm willing to invest some time to get this working on the company systems, but I'd like some reassurance from more pioneering types who have been there before. Alternatively, I'd like some good advice from somebody who knows it's not worth bothering with!"} {"_id": "130508", "title": "When and how to use the advanced features of git?", "text": "I'm not exactly a new git user, but I have never used any of its advanced features such as tags, branches, merging and others. I haven't even used the equivalent features of other VCS software. My projects have been so simple that I have only needed commits and sometimes diffs. Now I'm planning to open-source a server project of mine which will probably get a lot bigger, and it probably needs to be organized better. Git can probably help me with that, but how?
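From the little reading I've done so far, I gather the usual starting point is a simple feature-branch workflow, something like this (the branch and tag names are made up; the commands are the standard ones from the git documentation):

    git checkout -b feature/session-handling   # start an isolated branch for one piece of work
    # ...edit, test, repeat...
    git add -A
    git commit -m \"Add session handling\"
    git checkout master                        # switch back to the stable line
    git merge --no-ff feature/session-handling # bring the feature in as one unit
    git tag -a v0.2 -m \"First public preview\" # mark a release point
    git branch -d feature/session-handling     # clean up the merged branch

But I don't know if that's actually how people use these features in practice.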
The development methods are scrum and TDD, if that matters."} {"_id": "89811", "title": "Design question: Is this a good case for the proxy pattern, or \"overdone\"?", "text": "Working on a major enhancement for legacy code, I have been wrestling with myself over whether this is a good case to use the Proxy pattern, and more specifically whether it is a good case to use the Java Dynamic Proxy API. **BACKGROUND** We have a \"FIXConnection\" class used to send orders to a destination. The application maintains 1 to n FIXConnections for sending (a specific object is selected based on what the user specified on the order). Normally, the order would be sent straight out to the market. The enhancement is to see whether the order could be filled by other orders already sent to the market before sending, and if it can be filled, execute totally different logic (i.e. not send). If not, it delegates to the current logic. This feature will only be used by one client, so it is turned on/off dynamically (or by a configuration setting). It feels like this is a good use for the Proxy pattern/Dynamic Proxy, because it would allow designing a block of code to intercept requests to the existing FIXConnection.send method (and also its processResponse method), only in the case where the feature is enabled. The Proxy could either execute the new logic (if warranted), or dispatch to the existing logic. **PROS AND CONS** _PRO_ * Using Proxy avoids any issues with the class hierarchy (we occasionally subclass this FIXConnection class). If we handled this by subclassing, then other children might need to be moved under the new class as a parent. * We don't need to update the existing top-level class with any flag to check if this feature is enabled, and to conditionally execute code (i.e. we don't create code for one special case which would now be hit by all cases). * Single point of control. _CON_ * The existing class exposes public variables; all would need to be refactored into getter/setter methods, and an interface would need to be extracted (so that Dynamic Proxy could be used), i.e. a ripple throughout other classes. * Binding between the proxy and the legacy method could be a little cheesy (e.g. `method.getName().equals(\"send\")`). There are ways of binding more tightly, but ... (yuck?) * Is this making things unnecessarily complicated? In general I try to subscribe to doing \"the simplest thing that could work\". This doesn't quite feel like it. Would love to hear anyone's thoughts about how they might handle this design question."} {"_id": "151098", "title": "What is really happening when we change encoding in a string?", "text": "http://php.net/manual/en/function.mb-convert-encoding.php Say I do:

    $encoded = mb_convert_encoding($original, 'UTF-8');

That looks simple enough. What I am imagining is the following: `$original` carries a pointer to the way the string is actually encoded - something like a `char *` kind of thing - and then there is the actual encoded character data. It's probably something along the lines of UTF-32, where each glyph is indeed a character. Now when we do

    $encoded = mb_convert_encoding($original, 'UTF-8');

several things can happen: * the original internal representation doesn't change, however it is REINTERPRETED so that the characters that show up differ * the string that it represents doesn't change, however the ENCODING changes.
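To make my two options concrete, here is the distinction as I understand it, written in Java terms (just an analogy with something I know better - I'm aware PHP strings are not Java strings, and `original` here is assumed to be some existing string):

    import java.nio.charset.StandardCharsets;

    class EncodingDemo {
        static void demo(String original) {
            byte[] raw = original.getBytes(StandardCharsets.ISO_8859_1); // the bytes as currently stored
            // Option 1: reinterpretation - keep the same bytes, just read them as UTF-8
            String reinterpreted = new String(raw, StandardCharsets.UTF_8);
            // Option 2: transcoding - decode with the real charset, then re-encode
            // as UTF-8, producing new bytes for the same characters
            byte[] transcoded = new String(raw, StandardCharsets.ISO_8859_1)
                    .getBytes(StandardCharsets.UTF_8);
        }
    }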
Which one is right?"} {"_id": "151099", "title": "Will TSQL become useless because of new ORMs?", "text": "With the introduction of LINQ to SQL, I found myself and my .NET developer colleagues gradually moving from TSQL to C# to create queries on the database. Entity Framework made that shift almost permanent. Now it's been nearly 2 years that I have used LINQ to SQL and LINQ to Entities, and I haven't used TSQL that much. Yesterday, a colleague encountered a problem (he had to create a stored procedure) and we went to help him. But we all found that our TSQL knowledge had diminished for sure, and a simple SP that would have seemed trivial to us 2 or 3 years ago was a challenge to solve yesterday. Thus it occurred to me that while TSQL's life is attached to SQL Server - logically, as long as SQL Server lives and doesn't change its SQL language, TSQL will also live - practically it might die, and soon very few people might know it. Am I right? Does the existence of ORMs like Entity Framework threaten TSQL's life and usability?"} {"_id": "65081", "title": "How do you convince management to \"invest\" in unit tests?", "text": "How did you convince your manager to let you unit test? By \"unit test\", I mean being allowed to develop the tests, check them in to source control, and maintain them over time, etc. Typical management objections are: 1. The customer didn't pay for unit tests 2. The project does not allow time for unit testing 3. Technical debt? What technical debt? Do you know other objections? What were your answers? Thanks in advance!"} {"_id": "225296", "title": "How do I stress the importance of unit tests to my manager", "text": "I've recently started a new job, and I've been tasked with completing a feature that another developer didn't finish before he left the company. The existing tests are out of date (i.e. useless)... technical debt everywhere. I think it would be a very bad idea to implement features without tests. I've put this across to my manager a few times, and he keeps saying that we'll complete this feature and then I can do tests. But the previous developer didn't get around to updating the tests either, so it seems that my manager just says that but doesn't mean it, and keeps pushing for more features. This latest feature is a big one, and I think it would be a really bad idea to finish it without writing tests for the whole application first. I understand that my manager has his own deadlines/pressures etc., but this feature needs to be done right, or it could come back to haunt both of us. So how can I put this across to him in a way that will make it sink in how important this issue is?
With populating SOAP response objects in my code, the implementation might be a bit messy in Layer 1, so I came up with the following 3 options. Option 1 - Three layers is enough, any more layers would be overkill. SOAP objects are always going to be messy looking, just get over it. Option 2 - Create an extra layer between Layers 1 and 2. This layer would take data from the SOAP request, populate a business object which it would then pass to the business logic layer. This would keep the methods in Layer 1 neat and tidy. It would look something like this : Layer 1 -> SOAP Request -> Layer 1.A -> Business Object -> Layer 2 Layer 1 <- SOAP Response <- Layer 1.A <- Business Object <- Layer 2 Option 3 - Do not create any more layers. Simply create a utility object with a method that takes a SOAP request and returns a business object. I could then pass the business object to Layer 2. The same utility object could then be used in Layer 2 to pass a SOAP response back to Layer 1. Or does this approach kind of blur the lines and make my design less modular?"} {"_id": "211205", "title": "project with 2 types of interfaces performing different jobs, should they use the same BLL and DAL?", "text": "i am working on a project that has two interface (web and desktop), they are not performing the same tasks but they use the same BLL and DAL, the web part using 100% of the BLL and DAL, while the desktop only needs to know about 20% of the BLL and DAL. do you think it is a good idea to let the desktop use the same BLL and DAL as the web, which consequently will lead to ditributing thos BLL and DAL with the desktop application? or you think i'd better create new BLL and DAL projects just to serve the desktop app? but in this case i will fall in the trap of maintaining two copies of the same code!"} {"_id": "51473", "title": "Is it wise for a programmer to move into management?", "text": "Many times, a developer has suggested that I become a team leader because I'm motivated, but during my career in the IT industry, I've seen so many people who are great at programming, move into management and they are miserable. I've also seen many managers return to programming stating \"I'm a technical person, I like technical problems\". If this is such a common thing, why do developers feel compelled to leave the technical domain and move into management? Sure you'll have more money and more control, but if you don't enjoy your work and take your problems out on your team, that's hardly economic of your time. Secondly, I've been asked in developer interviews, \"Would you consider leading a team?\" and I'm always tempted to cite the Peter Principle based on what I've seen. I am interested in furthering myself, but not in the way the company may want i.e \"Vice President of department blah\". To be honest, I've seen this more often in the corporate world than in small development houses and it's always put me off ever going back to a corporate environment. I just feel that this is becoming more and more the norm and it's impacting team morale and degrading the quality of the work. Question: Based on what I've said, Is it a smart move for a technical person to move into management?"} {"_id": "59326", "title": "How do you think about using `//` for JSON comment?", "text": "I'm considering extending JSON by adding comment. For just my own project, internally. If JSON got a comment, I think its syntax should be `//`~`\\n` style, JavaScript syntax. 
What do you think?"} {"_id": "59322", "title": "How to explain that writing universally cross-platform C++ code and shipping products for all OSes is not that easy?", "text": "Our company ships a range of desktop products for Windows, and lots of Linux users complain on forums that we should have written versions of our products for Linux years ago, and that the reason we don't do that is: * we're a greedy corporation * all our technical specialists are underqualified idiots Our average product is something like 3 million lines of C++ code. My colleagues' and my analysis is the following: * writing cross-platform C++ code is not that easy * preparing a lot of distribution packages and maintaining them for all widespread versions of Linux takes time * our estimate is that the Linux market is something like 5-15% of all users, and those users will likely not want to pay for our effort When this is brought up, the response is again that we're greedy underqualified idiots, and that when everything is done right, all this is easy and painless. How reasonable are our evaluations that writing cross-platform code and maintaining numerous distribution packages takes lots of effort? Where can we find some easy yet detailed analysis with real-life stories that show beyond the shadow of a doubt what amount of effort exactly it takes?"} {"_id": "59320", "title": "Sucking Less Every Year?", "text": "Sucking Less Every Year - Jeff Atwood I came across this insightful article. Quoting directly from the post: > I've often thought that sucking less every year is how humble programmers > improve. You should be unhappy with code you wrote a year ago. If you > aren't, that means either A) you haven't learned anything in a year, B) your > code can't be improved, or C) you never revisit old code. All of these are > the kiss of death for software developers. 1. How often does this happen or not happen to you? 2. How long before you see an actual improvement in your coding? A month, a year? 3. Do you ever revisit **your** old code? 4. How often does your old code plague you? Or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written to quickly meet a deadline, and with those quick fixes, in some cases we may have to rewrite most of the application/code. No arguments about that. Some of the developers I have come across argued that they were already at the _evolved_ stage where their coding doesn't need improvement or can't get improved anymore. * Does this happen? * If so, how many years into coding in a particular language does one expect this to happen? Related: Ever look back at some of your old code and grimace in pain? Star Wars Moment in Code **\"Luke! I am your code!\" \"No! Impossible! It can't be!\"**"} {"_id": "55351", "title": "How should I manage my time?", "text": "There are times when just one bug keeps eating away at your time like hell ... for example this one. I generally end up wasting hours on it, then realize I've fallen terribly behind my schedule and haven't completed other tasks. With n tabs open in the browser, I end up posting the question on Stack Overflow as a last resort. What are some time management techniques that let you stop, rewind and get back in action when faced with a roadblock?"} {"_id": "161108", "title": "Object-Oriented Design: What to do when the responsibility of the class is big", "text": "I applied the principles of GRASP and ended up having a class called Environment.
This class's responsibilities are to: * Keep information about services in the environment, i.e. the environment definition (Service is another class) * Start/stop services meeting some criteria * Apply configuration changes to different services and keep a list of updated services * Restart services whose configuration changed * Revert configuration changes at the end of the session * Search for/return Service objects based on criteria (name, etc.) According to OOD, this is not a problem: _a very cohesive responsibility is assigned to this class_. But on the other hand it's a problem, since the _class is too big_, even though all the responsibility assigned to it makes sense. If I want to divide this responsibility between separate classes, then all these classes need to know about the \"environment definition\", which makes coupling worse, and those classes will have _\"feature envy\"_. What design patterns are applicable for such a situation? Or what other principles can be applied to have cohesive, less coupled classes? Thanks in advance."} {"_id": "73725", "title": "What uses does IronPython have?", "text": "I've been wondering to myself, what uses does IronPython have in a .NET environment? What can it do that can't be done with VB, C# or F#? It seems kind of strange to go through all the trouble of making the DLR and enabling dynamic languages on the CLR just to add another language. What do people use IronPython for?"} {"_id": "73729", "title": "What are the best things to put in my portfolio to demonstrate my Java skills and make me more marketable?", "text": "I have studied Java for quite a while and written a lot of small SE programs. I have a good grasp of the language at this stage, but unfortunately I do not work in the area of Java programming (as of yet). I would like to develop a portfolio of Java programs for the purpose of moving into this area. This is also vital in most interviews. I would also like to develop this portfolio to further my learning (maybe in the area of Java EE). Can anybody recommend a starting point? GF"} {"_id": "141485", "title": "What is the difference between static code analysis and code review?", "text": "I just wanted to know what the difference is between static code analysis and code review. How is each of these two done? More specifically, what are the tools available today for code review/static analysis of PHP? I would also like to know about good tools for code review for any language."} {"_id": "230785", "title": "Why is my page load time so closely correlated with number of database queries?", "text": "Whenever I'm doing web development, and a page takes longer than half a second to be generated, I know that somewhere my code is hitting the DB too many times. The normal way to fix this situation is to ask the DB for all the information at once instead, by doing JOINs and the like. My question is: why do many database queries make a page slow? There must be considerable overhead to each query, but what is it?
**EDIT**: Alright, let's take an example (it's a bit silly and small, but it'll do). `people` table:

    | name | football_team_id |
    +------+------------------+
    | jim  | 1                |
    | mike | 3                |
    | carl | 2                |

`football_team` table:

    | id | color |
    +----+-------+
    | 1  | red   |
    | 2  | blue  |
    | 3  | green |

We all know that this is slow:

    SELECT name,football_team_id FROM people;
    # start rendering the page, realise we need colors
    SELECT color FROM football_team WHERE id=1
    # oops, need mike's color
    SELECT color FROM football_team WHERE id=3
    # oh, and carl's
    SELECT color FROM football_team WHERE id=2

This is a bit better:

    SELECT name,football_team_id FROM people;
    SELECT id,color FROM football_team WHERE id IN (1,3,2)

This is best:

    SELECT name,football_team_id,color FROM people
    JOIN football_team ON people.football_team_id=football_team.id

In each example we're getting the same amount of data, but the last is easily the fastest. You wouldn't expect the same behaviour if you were reading from a file descriptor, for example."} {"_id": "78846", "title": "The definition of C-based language", "text": "What is the definition of a C-based language? Is C# considered to be C-based? Is Java considered to be C-based? Furthermore, what does it mean for a language to be based on another language anyway?"} {"_id": "78844", "title": "What's the correct approach for passing data from several models into a service?", "text": "I have an `AccountModel` and a page where the user can upload a file. What I would like to have happen is that when the user uploads the file, the `PageController` does something like the following (this is a quick attempt, written just for this question, to illustrate it):

    public class PageController : Controller
    {
        private Service service;

        public ActionResult Upload(HttpPostedFileBase f)
        {
            service.savefile(f, AccountModel.currentlyloggedinuser.taxid);
        }
    }

    public class Service
    {
        // a bunch of validation and error checking to make sure the file is good to store
    }

Wouldn't this approach be bad practice, since I'm making my controller dependent on the existence of the AccountModel? This will become a **HUGE** program over the next few years, and I really want to maximize the quality of the framework now."} {"_id": "196249", "title": "Capturing mobile device system (output) audio", "text": "I'm trying to figure out a way to capture the system audio of an Android and/or Windows Phone device. The idea is to provide a stream based on the music I'm currently playing on my phone. What I'm not sure about is what the best approach would be for sending the audio. Preferably I'd like to capture all the device audio (the same as when connected to a Bluetooth device), but I'm not sure whether this is possible with Android or Windows Phone. For Android I found this article, http://xzpeter.org/?p=254 - he states this isn't possible without making your own Android build, which isn't an option for me. For Windows Phone I can only find ways to capture audio from the mic, not directly from the system. The only alternative I can come up with is to let the app provide all the audio, meaning I would have to build all the basic media player functionality into the app. But this would mean I have to put much more work into the app, and I wouldn't be able to use other apps like Winamp and YouTube to provide audio. Did I maybe overlook a way to capture all the audio of the device? Or maybe there's a way to capture audio from specific applications? **Edit** After doing some more research, I really doubt I can capture all system audio.
Or maybe it's possible to simulate a Bluetooth audio device? For Windows Phone I was going for the alternative solution: selecting songs which would then be streamed to the server. But it seems that on Windows Phone it's not even possible to get physical access to the songs: http://stackoverflow.com/questions/16251400/stream-a-xna-framework-media-song-to-a-server"} {"_id": "214889", "title": "Is an event loop just a for/while loop with optimized polling?", "text": "I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred. You then handle the event and continue doing what you did before. To map the above definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to the listening it did before. * * * However, this event happening and us getting notified 'just like that' is too much for me to handle. You can say: \"It's not 'just like that'; you have to register an event listener\". But what's an event listener but a function which for some reason isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? * * * Events are a nice abstraction to work with, but just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the programming language implementation or the OS) are doing it for us. It basically comes down to the following pseudo code, which is running somewhere low enough that it doesn't result in busy waiting:

    while(True):
        do stuff
        check if event has happened (poll)
        do other stuff

* * * This is my understanding of the whole idea, and I would like to hear whether this is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards"} {"_id": "118640", "title": "Write a directory structure (pseudo code)", "text": "I am writing a wiki article and am wondering what the proper way to write a directory scheme is. I am doing something like:

    main folder
    - sub folder
    - sub folder
    ...

But I'm stuck after that. Any help?"} {"_id": "115808", "title": "Got made redundant after three months - should I include this in my resume?", "text": "In my last job, I was hired for a specific project which the company lost before I even got a chance to start work on it: four people were made redundant on the same day. I spent three months there, and while I did learn new dev skills (I worked on a minor project while waiting to start on the main one), I cannot consider myself an expert in the speciality of this company (barely passingly familiar). As a developer, should I keep this on my resume? Do I even say that I was made redundant, or is it best to leave it out entirely? Rob :)"} {"_id": "214887", "title": "Calculating WPM given a variable stream of input", "text": "I'm creating an application that sits in the background and records all key presses (currently this is done and working; an event is fired every keydown/keyup). I want to offer a feature for the user that will show them their WPM over the entire session the program has been running for.
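My naive first cut just divides total characters typed by elapsed minutes - sketched here in Java-style syntax because that's quickest for me to write (the C# version would be line-for-line the same, and the 5-characters-per-word convention is the usual typing-test one):

    class WpmCounter {
        private final long sessionStart = System.currentTimeMillis();
        private int charsTyped = 0;

        // called from the keyboard hook, for printable keys only
        void onPrintableKey() { charsTyped++; }

        double wordsPerMinute() {
            double minutes = (System.currentTimeMillis() - sessionStart) / 60000.0;
            return minutes <= 0 ? 0 : (charsTyped / 5.0) / minutes; // 1 \"word\" = 5 chars
        }
    }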
This would be easy if I added a \"Start\" and \"End\" button to activate a timer, but I need to detect only when the user is typing continuously - ignoring all one-time keyboard shortcuts and breaks the user takes from typing. How in the world do I approach this? Is this even realistically & accurately possible?"} {"_id": "214886", "title": "How to deal with data on the model specific to the technology being used?", "text": "There are some cases where some of the data on a class of the domain model of an application seems to be dependent on the technology being used. One example of this is the following: suppose we are building an application in .NET such that there's the need for an Employee class. Suppose further that we are going to use a relational database; then the Employee has a primary key, right? So the class would be something like:

    public class Employee
    {
        public int EmployeeID { get; set; }
        public string Name { get; set; }
        ...
    }

Now, that EmployeeID is dependent on the technology, right? That's something that has to do with the way we've chosen to persist our data. Should we write the class independent of such things? If we do it this way, how should we work? I think I would need to map between domain model and persistence-specific types all the time, but I'm not sure."} {"_id": "197120", "title": "How to implement a multi-theme PHP application", "text": "I am developing an application which will handle many virtual stores, and I would like to have many themes that the user can choose at any time. I would like to know the main idea of how to implement this. I will be developing it using Symfony 2. I was thinking about implementing separate views and assets for each theme:

    * Resources
      * views
        * theme 1
          * Product
            * List.html
            * Detail.html
            * ...
        * theme 2
          * Product
            * List.html
            * Detail.html
            * ...
        * theme 3
          * Product
            * List.html
            * Detail.html
            * ...
    * public
      * theme 1 assets
        * js
        * css
        * images
        * ...
      * theme 2 assets
        * js
        * css
        * images
        * ...
      * theme 3 assets
        * js
        * css
        * images
        * ...

And in the database each user would have their own preferences (theme name, color, etc.). I am looking for an implementation which will allow me to add any kind of theme. For example, in one theme the cart icon on the main page goes to the cart page, and in another theme the cart icon pops up a window showing the products. What's the best approach to implementing a multi-theme web application? What am I missing?"} {"_id": "145326", "title": "REST Service Authentication/Authorization", "text": "I have a WCF REST service that will be consumed by multiple clients. The information returned to the client requires me to know who they are, so that I can return information specific to them. Is the best way to approach this type of design to authenticate them and return a token for their session, then pass this token along with every request? Thanks."} {"_id": "145323", "title": "When should you use bools in C++?", "text": "We had an assignment for our class where we had to create a _Tic-tac-toe_ game. People like to overcomplicate things, so they wrote complex games which included menus. At the end of the game, you had to have the option to play again or quit the program. I used an `int` variable for that, but I noticed some classmates using `bool`s. Is it more efficient? What's the difference between storing an answer that should only hold two values in an `int` rather than storing it in a `bool`?
{"_id": "145323", "title": "When should you use bools in C++?", "text": "We had an assignment for our class where we had to create a _Tic-tac-toe_ game. People like to complicate things, so they wrote complex games which included menus. At the end of the game, you had to have the option to play again or quit the program. I used an `int` variable for that, but I noticed some classmates using bools. Is it more efficient? What's the difference between storing an answer that should only hold two values in an `int` rather than storing it in a `bool`? What is the exact purpose of these variables?"} {"_id": "92314", "title": "Multistream Project v/s Single Stream project", "text": "If we want to have code reviews before a developer delivers his work, which is advisable: a multi-stream project (i.e., each developer creates his own stream and view and later delivers to a single stream) or a single-stream project (i.e., all developers work on the same stream)?"} {"_id": "162165", "title": "Combining multiple events into one action/ Defer refreshing", "text": "So in a GUI program I have multiple events that trigger an update of a workspace. This update call is costly, so I want it not to happen very often. A user might pick something from a dropdown (which triggers a refresh) and then put a check in a checkbox (which also triggers a refresh). If he does this in a short time interval, it might make the first refresh irrelevant. One way to handle this would be to have a refresh button for the user to press, or a toggle to set whether to refresh automatically. But I wouldn't want to put this in the hands of the user. Is there any other strategy to do this that is proven to be effective and error free? PS. I'm specifically using .Net 4.0 -> C# -> WPF -> Caliburn.Micro, but I would like to hear of any method that could be reproduced using .Net 4"}
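What the previous question describes is commonly solved with a debounce: every triggering event restarts a short timer, and the costly refresh only fires once the user has been idle for the whole delay. A sketch of the idea in Python (names and the delay are assumptions; in WPF the same pattern maps onto a DispatcherTimer):

    import threading

    class Debouncer:
        def __init__(self, delay, action):
            self.delay = delay    # e.g. 0.3s of quiet before refreshing
            self.action = action  # the expensive workspace update
            self._timer = None

        def trigger(self):
            # Call this from every dropdown/checkbox event handler.
            if self._timer is not None:
                self._timer.cancel()  # pending refresh became irrelevant
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.start()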
{"_id": "92312", "title": "Questions about what professional C++ programming is like", "text": "I currently program professionally in C#, and since I started in 2008 I've been curious about what unmanaged, C++ programming is like. I've looked at the obvious answer - the job descriptions online - but they're high-level descriptions and job requirements that don't really go into the details of what professional programming in C++ is like. If you're a Windows programmer, does this usually involve working with CLI, MFC, or ATL? How many jobs are about maintaining legacy code vs. innovating something new? Is the extra attention to memory management an interesting challenge or a tedious task? Are the standard and Windows/UNIX libraries enjoyable to work with? How often do you see a programmer go from a more \"high-level\" language background like C# or Java to C++, and how often do you see programmers move in the other direction? What do you think will drive demand for future generations of C++ programmers (e.g. game development or maintaining legacy systems)?"} {"_id": "165909", "title": "How to manage document format changes with local storage?", "text": "I'm programming a JavaScript application which saves \"documents\" in localStorage. As the app evolves, naturally there are changes in the document format. I've tried searching but not found anything \u2013 probably (at least partly) because of search term ambiguities. What is a good practice for managing document format versions/upgrades with a local storage implementation approach?"} {"_id": "55936", "title": "Programming language and database concurrent locking primitives", "text": "I am writing an article about both programming language locking primitives (which are really wrappers for OS primitives) and database locking primitives. Are there any short, widespread titles for these two types of locking? For example, \"code locking\" and \"database locking\"?"} {"_id": "55934", "title": "What defines a standard?", "text": "What defines a standard like HTML5, C++0x, etc.? Is it just that you hand something in to W3C/ANSI/ISO/..., they produce a couple-hundred-page document, and suddenly it's a standard? Can't I, as an individual, create something and standardize it by myself? I surely could produce a couple-hundred-page document which describes my creation in every detail. So what is the benefit of the publisher: W3C Stand... ?"} {"_id": "51291", "title": "Is it true that first versions of C compilers ran for dozens of minutes and required swapping floppy disks between stages?", "text": "Inspired by this question. I heard that some very, very early versions of C compilers for personal computers (I guess around 1980) resided on two or three floppy disks, and so in order to compile a program one had to first insert the disk with the \"first pass\", run the \"first pass\", then change to the disk with the \"second pass\", run that, then do the same for the \"third pass\". Each pass ran for dozens of minutes, so the developer lost lots of time in case of even a typo. How realistic is that claim? What were the actual figures and details?"} {"_id": "104382", "title": "How to get better at testing your own code", "text": "I am a relatively new software developer, and one of the things I think I should improve is my ability to test my own code. Whenever I develop new functionality, I find it really difficult to follow all the possible paths so I can find bugs. I tend to follow the path where everything works. I know this is a well-known issue that programmers have, but we don't have testers at my current employer, and my colleagues seem to be pretty good at this. At my organization, we do neither test-driven development nor unit testing. It would help me a lot, but it's not likely that this will change. What do you guys think I could do in order to overcome this? What approach do you use when testing your own code?"}
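One concrete habit that addresses the question above: before exercising the happy path, write down the edge inputs (empty, boundary, invalid) and turn each into a tiny test. A hedged sketch in pytest style; parse_age is a hypothetical function under test, not from the original post:

    import pytest

    def parse_age(text):
        value = int(text)  # raises ValueError for non-numeric input
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    @pytest.mark.parametrize("bad", ["", "abc", "-1", "151"])
    def test_rejects_bad_input(bad):
        with pytest.raises(ValueError):
            parse_age(bad)

    def test_accepts_boundaries():
        assert parse_age("0") == 0
        assert parse_age("150") == 150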
{"_id": "104381", "title": "Why do XSLT editors insert tab or space characters into XSLT to format it?", "text": "All XSLT editors I've tried till now add tab or space characters to the XSLT to indent it for formatting. This is done even in places within the XSLT where these characters are significant to the XSLT processor. XSLT modified for formatting in this way can produce output very different from that of the original XSLT with no formatting. To prevent this, _xsl:text_ elements or other XSLT must be added to a sequence constructor to help separate formatting from content; this additional XSLT impacts maintainability. Formatting characters also adversely impact the general usability of the tool in a number of ways (this is why word-processors don't use them, I guess) and add to the size of the file. As part of a larger project I've had to develop a light-weight XSLT editor; it's designed to format XSLT properly, but without tab or space characters - just a dynamic left margin for each new line. The XSLT therefore doesn't need additional elements to separate formatting tab or space characters from content. The problem with this is that if XSLT from this editor is opened in other XSLT editors, characters will be added for formatting reasons and the XSLT may therefore no longer behave as intended. Why, then, do existing XSLT editors use tabs or spaces for formatting in the first place? I feel there must be valid reasons, perhaps historical, perhaps practical. An answer will help me understand whether I need to put compatibility options in place in my XSLT editor somehow, whether I should simply revert to using tabs or spaces for both XSLT content and formatting (though this seems like a backwards step to me), or even whether enough XSLT users might be able to persuade their tools vendors to include alternative formatting methods to tabs or spaces. **Note:** I provided an XSLT sample demonstrating formatting differences in this answer to the question: _Tabs versus spaces\u2014what is the proper indentation character for everything, in every situation, ever?_"} {"_id": "104386", "title": "What do you think of opening brace comments in source code?", "text": "I have developed a habit of writing comments in my code by putting the comments on the same line as the opening brace, after the brace. I've found that this saves vertical space. It also leaves a hint why something was done, but I'm wondering if it's readable for others. Example:

    void DoSomeInterestingImageManipulation(char *pImage)
    {//This will convert the image to formatABC which allows x% space savings for storage
        if(pImage && pImage[0] == 0xFF)
        {//Process the extra case where image internal format needs decompression
            ++pImage;
            //...
            //...
            //...
        }
        //Proceed normally
        *pResult = Foo(pImage);
    }

Do you consider it easier to read or harder to read?"} {"_id": "104865", "title": "Managing SQL Stored Procedures' Version", "text": "I am working with a team of 5 people. We are using SQL Server as our database. For a long time I have wanted to store the stored procedures in SVN so that versions can be maintained. Is there any tool which can be used as a plugin with SQL Server Management Studio and allows check-in from there?"} {"_id": "149855", "title": "Software Optimization vs. Hardware Optimization - what has the bigger impact?", "text": "I was wondering how software optimization and hardware optimization compare when it comes to the impact they have on the speed and performance gains of computers. I have heard that improving software efficiency and algorithms over the years has made huge performance gains. Obviously both are extremely important, but what has made the bigger impact in the last 10, 20 or 30 years? And how do hardware changes affect the software changes? How much of software optimization is a direct result of hardware improvements, and how much is independent of the hardware? To be clear, I am asking about software optimizations at the level of compilers and operating systems. Obviously using better high-level algorithms will result in the largest speed-ups (think: quick-sort vs. bubble-sort), but this question is about the underlying reason why computers are faster today, in general."} {"_id": "142805", "title": "How to remove the boundary effects arising due to zero padding in scipy/numpy fft?", "text": "I have written Python code to smoothen a given signal using the Weierstrass transform, which is basically the convolution of a normalised Gaussian with a signal. The code is as follows:

    #Importing relevant libraries
    from __future__ import division
    from scipy.signal import fftconvolve
    import numpy as np

    def smooth_func(sig, x, t=0.002):
        N = len(x)
        x1 = x[-1]
        x0 = x[0]
        # defining a new array y which is symmetric around zero, to make the gaussian symmetric.
        y = np.linspace(-(x1-x0)/2, (x1-x0)/2, N)
        # gaussian centered around zero.
        gaus = np.exp(-y**(2)/t)
        # using fftconvolve to speed up the convolution; gaus.sum() is the normalization constant.
        return fftconvolve(sig, gaus/gaus.sum(), mode='same')

If I run this code for, say, a step function, it smoothens the corner, but at the boundary it interprets another corner and smoothens that too, as a result giving unnecessary behaviour at the boundary. I explain this with a figure shown in this image. This problem does not arise if we directly integrate to find the convolution. Hence the problem is not in the Weierstrass transform; the problem is in the fftconvolve function of scipy. To understand why this problem arises, we first need to understand the working of fftconvolve in scipy. The fftconvolve function basically uses the convolution theorem to speed up the computation. In short it says:

    convolution(int1, int2) = ifft(fft(int1) * fft(int2))

If we directly apply this theorem we don't get the desired result. To get the desired result we need to take the fft on an array double the size of max(int1, int2). But this leads to the undesired boundary effects. This is because in the fft code, if size(int) is greater than the size over which to take the fft, it zero pads the input and then takes the fft. This zero padding is exactly what is responsible for the undesired boundary effects. **Can you suggest a way to remove these boundary effects?** I have tried to remove them by a simple trick. After smoothening the function, I compare the value of the smoothened signal with the original signal near the boundaries, and if they don't match I replace the value of the smoothened function with the input signal at that point. It is as follows:

    i = 0
    eps = 1e-3
    while abs(smooth[i]-sig[i]) > eps:  # comparing the signals on the left boundary
        smooth[i] = sig[i]
        i = i + 1
    j = -1
    while abs(smooth[j]-sig[j]) > eps:  # comparing on the right boundary.
        smooth[j] = sig[j]
        j = j - 1

There is a problem with this method: because of using an epsilon, there are small jumps in the smoothened function. **Can any changes be made to the above method to solve this boundary problem?**"} {"_id": "149852", "title": "What is the point of using the private access modifier for C# class members?", "text": "From MSDN... > The access level for class members and struct members, including nested classes and structs, is private by default. If class members are private by default, then why use the private access modifier for them? I see it all the time in code examples and open source projects, including the ASP.NET MVC source. I even use it in my own projects, but I'm left wondering why."} {"_id": "112680", "title": "Multiple projects - similar platforms or as different as possible?", "text": "When working on multiple projects simultaneously (for the sake of simplicity let's say half time each on two projects), which is better? Should the two projects

* Use the same language? Same/similar frameworks?
* Use entirely different languages?

Additionally, is it best if they are targeted at similar platforms, or different ones (web v. desktop v. mobile v. utility library for internal use)? The first responses that come to mind are

* They should be similar, and play to the strengths of the developer to help him handle managing both of them
* They should be as different as possible to help the developer keep organized

Since I've never done this in a professional setting, I was hoping more experienced programmers could shed some light on the best way to go about doing this.
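A common remedy for the boundary artifact described in the scipy question above is to extend the signal with its edge values before convolving and crop afterwards, so the FFT's implicit zero padding never touches real data. A sketch under the assumption that the kernel is shorter than the signal (not the poster's exact code):

    import numpy as np
    from scipy.signal import fftconvolve

    def smooth_edge_padded(sig, gaus):
        pad = len(gaus) // 2                    # assumes len(gaus) >= 2
        padded = np.pad(sig, pad, mode='edge')  # repeat boundary samples
        out = fftconvolve(padded, gaus / gaus.sum(), mode='same')
        return out[pad:-pad]                    # crop back to original length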
**EDIT**: This question assumes the existence of multiple other projects, each of which is underway (I think some underway and some brand new with no design decisions yet made would be too broad for one question). When choosing between existing projects for a second project, should a developer consider how similar his new project is to his current project, and how should that affect the decision?"} {"_id": "52486", "title": "Learning to program in C (coming from Python)", "text": "If this is the wrong place to ask this question, please let me know. I'm a Python programmer by occupation. I would love to learn C. Indeed, I have tried many times, but I always get discouraged. In Python, you write a few lines and the program does wonders. In C, I can't seem to be able to do anything useful. It seems to be very complicated to even connect to the Internet. Do you have any suggestions on what I can do to learn C? Are there any good websites? Any cool projects? Thanks"} {"_id": "223333", "title": "Extending user registration in Django site that uses both site admin tables and my own module?", "text": "I'm writing a Django site that registers a particular type of user, and this is done by the resources that come with the framework, the site administration. The issue: I'd like to create a new model with a foreign key to an admin site table, loaded from an Excel CSV, that augments the data from the user registration form. I'd like to do this in a database-agnostic way without SQL - just using models, urls, forms, template language, etc. I do not know how to check the contents of the user registration form against the new table.

**forms.py**

    from django import forms
    from django.contrib.auth.models import User
    from django.contrib.auth.forms import UserCreationForm
    from django.core.mail import send_mail

    #
    # Called by register_user view.
    #
    class MyRegistrationForm(UserCreationForm):
        email = forms.EmailField(required=True)

        class Meta:
            model = User
            fields = ('username', 'password1', 'password2', 'email', 'first_name', 'last_name',)

        def save(self, commit=True):
            # create and load user object
            user = super(MyRegistrationForm, self).save(commit=False)
            user.email = self.cleaned_data['email']
            user.first_name = self.cleaned_data['first_name']
            user.last_name = self.cleaned_data['last_name']
            if commit:
                user.save()
                send_mail('Your Provider Enrollment Registration',
                          'You are now an authenticated provider!',
                          'donfox1@mac.com', [user.email], fail_silently=False)
            return user

**models.py**

    class NPIdata(models.Model):
        ''' NPI data is associated with admin site User accounts. '''
        NPI = models.IntegerField(max_length=10)
        lastname = models.CharField(max_length=20)
        firstname = models.CharField(max_length=20)
        Credential = models.CharField(max_length=15)
        user = models.ForeignKey(User)"}
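For the cross-check the Django question asks about, one database-agnostic option is to override clean() on the registration form and look the registrant up in the imported NPI table with the ORM. A hedged sketch continuing the models above (matching by name is an assumption; a real workflow would more likely match on the NPI number itself):

    from django import forms
    from django.contrib.auth.forms import UserCreationForm
    from .models import NPIdata  # assumption: model lives in the same app

    class MyRegistrationForm(UserCreationForm):
        ...  # fields and save() as defined above

        def clean(self):
            cleaned = super(MyRegistrationForm, self).clean()
            exists = NPIdata.objects.filter(
                lastname__iexact=cleaned.get('last_name', ''),
                firstname__iexact=cleaned.get('first_name', ''),
            ).exists()
            if not exists:
                raise forms.ValidationError(
                    "No matching NPI record was found for this name.")
            return cleaned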
{"_id": "52925", "title": "Can I use Silverlight for building a SocialNetworking application?", "text": "I am wondering how feasible it would be to start developing a social networking website entirely based on Silverlight. This has been fairly thoroughly discussed over the years, mostly in favor of HTML. Has something changed with Silverlight's improvements over the years? What about:

* Performance -- active users -- technology used, MVVM + MEF (possibility of lags, server memory overflow...)
* Security --- WCF RIA Services & EF

What are your thoughts on this subject?"} {"_id": "227824", "title": "How to comment on a PEP?", "text": "Proposals for new Python features are collected in documents called PEPs (Python Enhancement Proposals). There's a master list at http://www.python.org/dev/peps/ which links to (for example):

* Labeled break and continue http://www.python.org/dev/peps/pep-3136/
* Asynchronous IO support http://www.python.org/dev/peps/pep-3153/
* Remove Backslash Continuation http://www.python.org/dev/peps/pep-3125/

It's great that proposals are published publicly for the community to read. However, how is the community supposed to participate? The pages don't allow comments. It strikes me as weird that the Python developers would make proposals public and then deliberately exclude the community from discussion. Have I missed something? In particular, I'd like to read other people's comments on http://www.python.org/dev/peps/pep-0453/ and add my own. * * * For comparison, Ruby feature proposals are made as posts to its bug tracker. You can read everyone's comments below and add your own (after making an account):

* Refinements and nested methods https://bugs.ruby-lang.org/issues/4085
* Frozen string syntax https://bugs.ruby-lang.org/issues/8579
* Exception#cause to carry originating exception along with new one https://bugs.ruby-lang.org/issues/8257

Node.js feature requests are plain GitHub issues, which is probably the most inclusive. It's very easy to join GitHub and post a comment.

* https://github.com/joyent/node/issues?labels=feature-request"} {"_id": "197785", "title": "Interaction between programs", "text": "I am writing an interactive program which takes speech input from the user for a specific list of commands. The list of commands will be stored locally in a graph, and their weights will be modified based on usage. The speech input is taken and processed into text, and then it's executed by bash scripts. I am wondering what I should use to interact with (synchronize) the speech recognition engine, the command list, and the bash program. I am using C++ and Bash on a Unix system. I thought of a `mutex`, but the problem is: what if the user gives multiple inputs? Then I'll have to buffer them and process them sequentially, and that will make the system slower. The bash script is used to call the system utility commands as requested by the user. Should I only use C++, or Bash too? I am thinking of storing the graph in a binary format. Is this the best way to store the graph? Please suggest alternatives."}
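On the graph-storage part of the question above: a plain-text serialisation is often easier to debug and version than a binary dump, and weights fit naturally into an adjacency map. A minimal illustration (Python used purely to show the shape of the data; the command names are invented):

    import json

    # command -> {next_command: weight}
    graph = {
        "open":  {"browser": 5, "editor": 2},
        "close": {"browser": 1},
    }

    def save(path, g):
        with open(path, "w") as f:
            json.dump(g, f, indent=2)

    def load(path):
        with open(path) as f:
            return json.load(f)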
{"_id": "227820", "title": "How to convince my company (operating in the financial sector) to switch from PHP to Java", "text": "My company is in the financial sector and it is using PHP as its programming language. I am a PHP developer myself. I am leading a big project started almost from scratch. I can see how PHP is not the best candidate for building robust platforms. I want to convince my company to _gradually_ switch to Java (which I have experience with). I have been trying to find as many supporting arguments as possible. Can you help with this? So far I have found these:

* Most of the competitors are using Java (in any case, not PHP)
* Most financial companies use Java rather than PHP
* On average, Java developers are better prepared (on average!)
* The compilation process catches a lot of problems before the software runs in production
* Strong typing makes everything more robust, as contracts between interfaces are well defined

Any other points I am missing? Thanks!"} {"_id": "242999", "title": "Best Practice Method for Including Images in a DataGrid using MVVM", "text": "All, I have a WPF `DataGrid`. This `DataGrid` shows files ready for compilation and should also show the progress of my compiler as it compiles the files. The format of the `DataGrid` is

    Image|File Path|State
    -----|---------|-----
      *  |C:\\AA\\BB |Compiled
      &  |F:PP\\QQ  |Failed
      >  |G:HH\\LL  |Processing
    ....

The problem is the image column (the *, &, and > are for representation only). I have a `ResourceDictionary` that contains hundreds of vector images as `Canvas` objects. Now, I want to be able to include these in my image column and change them at run-time. I was going to attempt to do this by setting up a property in my View Model that was of type `Image` and binding this to my View, where in the View Model I have the appropriate property. Now, I was told this is not 'pure' MVVM. I don't fully accept this, but I want to know if there is a better way of doing this. Say, binding to an enum and using a converter to get the image? Any advice would be appreciated."} {"_id": "221797", "title": "Reasoning behind the syntax of octal notation in Java?", "text": "Java has the following syntax for different bases:

    int x1 = 0b0101; //binary
    int x2 = 06;     //octal
    int x3 = 0xff;   //hexadecimal

Is there any reasoning on why it is `0` instead of something like `0o`, like what they do for binary and hexadecimal? Is there any documentation on why they went this route?"} {"_id": "26497", "title": "For in-house programmers (non-project managers), do you normally receive project instructions written out in advance?", "text": "I'm fairly new to working as a programmer, at least working as a regular employee in-house for a company. I often become frustrated with my company's management style, as they basically just pick out whatever random task catches the lead developer's fancy that day, or the day before, and give me the task (with tasks usually lasting less than 1 day). A lot of the tasks I have now are fairly small and are for adding enhancements to a web application, fixing bugs, etc. I am bothered, however, by the fact that they don't plan ahead so as to give me a written outline of projects a few days in advance, so that if, for example, I am rusty on something, I can at least familiarize myself with it on my own time, or at least have a more relaxed approach to the job so that everything doesn't have to be so reactionary on my part (where I can mull things over during off-time before starting work on them, etc.). I guess I'm just curious if this is the norm, or if most development jobs value clearly laid out specifications for tasks and are able to prepare them somewhat in advance. I would be interested to hear anyone's experience / opinions, etc. Thanks"} {"_id": "60830", "title": "Where to find open source volunteers?", "text": "We have an open source Java project (REMPL) and are actively looking for volunteers. What is the best way to find them? Junior programmers are OK for us."} {"_id": "60831", "title": "A good tool set for ASP.NET development", "text": "What are some good (or must-have) ASP.NET development/debugging tools in addition to the Visual Studio IDE? So far the only trick in my pocket is .NET Reflector, which has come in handy before. (I would limit browser add-ons to IE if possible, since I'm working on a corporate intranet and using FF is not always an option)."} {"_id": "253944", "title": "Is it good practice to use NoStackTrace in scala?", "text": "I came across the `NoStackTrace` mixin for Exceptions in Scala.
Is it good practice to use it, or should it be considered \"internal\" to Scala and left alone?"} {"_id": "220448", "title": "Is a 'God' ViewModel desired in WPF", "text": "My application has user controls within user controls. Please see a screen shot of one of the most beautiful applications of all time (UC = user control): ![enter image description here](http://i.stack.imgur.com/EEvpd.png) All the properties live in the MainWindow code-behind; there are none in my UserControls. To bind to my properties I use

    DataContext=\"{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorLevel=1, AncestorType=Window}}\"

This works fine, but it does mean all binding properties are in my MainWindow code-behind (or the MainWindow ViewModel). Is this desired in an MVVM (or should I say WPF) approach, in that all the children share the same ViewModel?"} {"_id": "147480", "title": "Should one check for null if he does not expect null?", "text": "Last week, we had a heated argument about handling nulls in our application's service layer. The question is in the .NET context, but it will be the same in Java and many other technologies. The question was: should you always check for nulls and make your code work no matter what, or let an exception bubble up when a null is received unexpectedly? On one side, checking for null where you are not expecting it (i.e. where you have no user interface to handle it) is, in my opinion, the same as writing a try block with an empty catch. You are just hiding an error. The error might be that something has changed in the code and null is now an expected value, or there is some other error and the wrong ID is passed to the method. On the other hand, checking for nulls may be a good habit in general. Moreover, if there is a check, the application may go on working, with just a small part of the functionality not having any effect. Then the customer might report a small bug like \"cannot delete comment\" instead of a much more severe bug like \"cannot open page X\". What practice do you follow, and what are your arguments for or against either approach? Update: I want to add some detail about our particular case. We were retrieving some objects from the database and did some processing on them (let's say, building a collection). The developer who wrote the code did not anticipate that the object could be null, so he did not include any checks; when the page was loaded there was an error, and the whole page did not load. Obviously, in this case there should have been a check. Then we got into an argument over whether every object that is processed should be checked, even if it is not expected to be missing, and whether the eventual processing should be aborted silently. The hypothetical benefit would be that the page will continue working. Think of search results on Stack Exchange in different groups (users, comments, questions). The method could check for null and abort the processing of users (which due to a bug is null) but return the \"comments\" and \"questions\" sections. The page would continue working except that the \"users\" section would be missing (which is a bug). Should we fail early and break the whole page, or continue to work and wait for someone to notice that the \"users\" section is missing?"}
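The fail-fast side of that argument, sketched: validate at the boundary and raise immediately and loudly, instead of letting a missing value silently disable one section of the page. Illustrative Python; load_users is a hypothetical data-access call:

    def render_search_page(load_users):
        users = load_users()
        if users is None:
            # Fail early, close to the cause - not three layers later,
            # as a half-rendered page with a silently missing section.
            raise ValueError("load_users() returned None; expected a list")
        return [u.upper() for u in users]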
{"_id": "64926", "title": "Should a method validate its parameters?", "text": "Say you are designing a square root method, sqrt. Do you prefer to validate that the parameter passed is not a negative number, or do you leave it up to the caller to make sure the param passed is valid? How does your answer vary if the method/API is for 3rd-party consumption or if it is only going to be used in the particular application you are working on? I have been of the opinion that a method should validate its parameters; however, Pragmatic Programmer, in its Design by Contract section (chapter 4), says it's the caller's responsibility to pass good data (pp. 111 and 115) and suggests using assertions in the method to verify the same. I want to know what others feel about this."}
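The two contract styles the question above contrasts, sketched side by side (Python for illustration): under design by contract the caller guarantees the input and the callee merely asserts it, while defensive validation checks and raises for any caller.

    import math

    def sqrt_contract(x):
        # Design by contract: the precondition is the caller's job;
        # the assertion documents it and can be disabled in release builds.
        assert x >= 0, "caller must pass a non-negative number"
        return math.sqrt(x)

    def sqrt_defensive(x):
        # Defensive validation: the callee checks and fails loudly.
        if x < 0:
            raise ValueError("sqrt is undefined for negative input: %r" % x)
        return math.sqrt(x)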
{"_id": "168204", "title": "Does (should?) changing the URI scheme name change the semantics?", "text": "If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the _same_ resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to us as \"git@{authority}/{path}\", however the library we're using to interface with them doesn't support the `git` protocol. I suggested that we should make the service robust in that it tries to use HTTP or SSH - in essence, discovering which protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: \"We don't know if that's the same repository\". My response was: \"It had better be!\" Looking at RFC 3986, I see this excerpt: > URI \"resolution\" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to \"dereference\" the URI. Which makes me think that the resolution process is permitted to try different protocols, because: > Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol. The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships: > it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the \"file\", \"http\", and \"ftp\" schemes if the documents refer to each other with relative references. I'm inclined to think I'm wrong in my initial beliefs, because the `Normalization and Comparison` section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?"} {"_id": "105381", "title": "suggesting large changes/a rewrite as an intern", "text": "**The context:**

* it's an internal project (that I don't think a lot of people use)
* it's old
* we're updating it

**The issues:**

1. it abuses the MVC framework (no use of models, business logic in views, etc.)
2. what we're being asked to do is small, but because of the low cohesion we have two options:
   1. continue to botch things
   2. move large chunks of code around or rewrite the thing

**The solutions (as I see them):**

1. continue working with it, ignoring best practices in favor of being done soon and not introducing new bugs by refactoring/rewriting
2. refactor/rewrite

I guess my question is really: if I want to make large changes to this project, how do I propose that without insulting anyone? Or would it be better for me to simply go with the flow, even if that means (metaphorical) duct tape sometimes?"} {"_id": "240877", "title": "How significant is the impact of the type system (static/dynamic) on the overall design of programs?", "text": "Coming from Java, I've never used a language with dynamic typing. I'm very used to the static-typing way of thinking. My question is, how much does the use of dynamic typing, as opposed to static typing, influence the overall design of programs written in languages with that kind of typing? **Does the kind of typing (static/dynamic) influence the design of programs significantly?** Working with a dynamically-typed language, would you structure your application differently than when working with a statically-typed language? **Or is it merely a language characteristic that affects mainly local implementation details, but doesn't affect overall designs?**"} {"_id": "227794", "title": "C API in C++ with RAII, two alternatives to implement error handling (Exceptions)", "text": "I have an API written in C which produces a result by returning a pointer to allocated memory. For using it with C++ (C++11) I've wrapped the function calls in objects which keep the result in a `std::shared_ptr`. So far so good. However, the C library features _two_ functions for every operation: one possibly produces an error, the other never does. Let's call them `some_pod * do_it_with_error(Parameter ..., Error **)` and `some_pod * do_it_without_error(Parameter ...)`. I can pass in the address of an `Error *` pointer to the first function, and if there's an error it won't be `NULL` afterwards. For solving this I thought of two different implementations. First, I could use SFINAE for choosing between the `with_error` and `without_error` functions, like so:

    template
    struct do_it
    {
    public:
        void operator()(...)
        {
            Error * error = NULL;
            m_result = std::shared_ptr(do_it_with_error(..., &error));
            if (error) {
                throw(MyExceptionWithError(error));
            }
        }
    private:
        std::shared_ptr m_result;
    };

    template<>
    class do_it
    {
    public:
        void operator()(...) noexcept
        {
            if (! m_result) {
                m_result = std::shared_ptr(do_it_without_error(...));
            }
        }
    private:
        std::shared_ptr m_result;
    };

Usage:

    do_it<> di_error;
    try {
        di_error(...);      // might throw
    } catch (...) {}

    do_it di_no_error;
    di_no_error(...);       // won't throw for sure

However, since the result is wrapped in a `std::shared_ptr`, there will also be a `get()` method:

    const std::shared_ptr & get(void) { return m_result; }

For error checking I could instead implement another method, `check`. The default implementation would then look like this:

    struct do_it
    {
    public:
        // will never throw
        const std::shared_ptr & get(void) noexcept
        {
            if (! m_result) {
                m_result = std::shared_ptr(do_it_without_error(...));
            }
            return m_result;
        }

        // might throw
        do_it & check(void)
        {
            if (! m_result) {
                Error * error = NULL;
                m_result = std::shared_ptr(do_it_with_error(..., &error));
                if (error) {
                    throw ...;
                }
            }
            return *this;
        }
    private:
        std::shared_ptr m_result;
    };

Usage:

    do_it di_error;
    try {
        auto result = di_error.check().get();
    } catch (...) {}

    do_it di_no_error;
    di_no_error.get();

So, both implementations seem to be equally good (or bad) to me. How could I decide which one to use?
What are the pros and cons?"} {"_id": "198367", "title": "Find common functionalities or functions between 2 programs", "text": "I've been facing a problem recently in which I want to optimize two programs. For that, I wanted to create some kind of \"common interface\" which I could reuse between my two programs. However, these programs are approximately 1000 lines of code each, and I don't want to spend a big amount of time checking whether this or that function is the same as that one, and so on. So I used BOUML to check the similarities between class diagrams. But I wonder, is there a more global way to do such a study?"} {"_id": "55998", "title": "how to express alternative flows in activity diagram", "text": "I want to express the below events in an activity diagram:

* An alternative flow, such as \"at step x of the basic flow, user clicks cancel instead of ok\".
* An alternative entry to the use case, such as \"instead of clicking the bold button, the user can press Ctrl-B\".
* An error, such as \"when the user clicks save, the system is unable to save the file to disk.\"

How can I do these? Thank you."} {"_id": "168206", "title": "Under which circumstances (if any) does it make sense to work for a startup, for free?", "text": "I've been bumping around the startup world for a while, and most startups I've seen seem to have (amongst other things) two things in common:

1. A lack of money
2. An inability to, reliably, hire good quality developers

This means that, for startups, the ideal hire is someone who is free - where they can wait until they've both raised money and found out that the hire is worth his price tag. When (if ever) is this a win-win situation? For you, as a programmer or software developer, when would this make sense?"} {"_id": "75442", "title": "How would you teach C# delegate to a newbie?", "text": "I was reviewing Andrew Troelsen's book on C# 4.0. The part that explains delegates starts as smoothly as:

    public class SimpleMath
    {
        //declare delegate
        public delegate int BinaryOp(int x, int y);

        public static int Add(int x, int y) { return x + y; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            //create delegate which points to Add method
            BinaryOp b = new BinaryOp(SimpleMath.Add);

            //Invoke Add method using delegate
            Console.WriteLine(\"10 + 10 is {0}\", b(10,10));
            Console.ReadLine();
        }
    }

then gets as complicated as:

    public class Car
    {
        public int CurrentSpeed {get;set;}
        public int MaxSpeed {get;set;}
        private bool carIsDead {get;set;}

        public Car() {MaxSpeed=10;}
        public Car(int maxSpeed, int currentSpeed)
        {
            MaxSpeed = maxSpeed;
            CurrentSpeed = currentSpeed;
        }

        //declare delegate
        public delegate void CarEngineHandler(string msgForCaller);

        //define member of this delegate
        private CarEngineHandler listOfHandlers;

        //add registration function for the caller
        public void RegisterWithCarEngine(CarEngineHandler methodToCall)
        {
            listOfHandlers = methodToCall;
        }

        public void Accelerate(int delta)
        {
            if (carIsDead)
            {
                if (listOfHandlers != null)
                    listOfHandlers(\"Sorry, the car is dead\");
            }
            else
            {
                CurrentSpeed += delta;
                if (10 == (MaxSpeed - CurrentSpeed) && listOfHandlers != null)
                {
                    listOfHandlers(\"Gonna blow!\");
                }
                if (CurrentSpeed >= MaxSpeed)
                    carIsDead = true;
                else
                    Console.WriteLine(\"Current Speed = {0}\", CurrentSpeed);
            }
        }
    }

If we try to take that content and teach it to somebody else, we are going to have a hard time, since it becomes tricky. How would you teach C# delegates in a way that makes it clear when to use them?"}
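One comparison that often helps when teaching the concept above: a delegate is a typed, first-class reference to a method - the same idea as passing a plain callback in a dynamic language. A hedged Python rendering of the book's first example (names invented to mirror it):

    def add(x, y):
        return x + y

    def apply_binary_op(op, x, y):
        # 'op' plays the role of the BinaryOp delegate instance
        return op(x, y)

    print(apply_binary_op(add, 10, 10))  # prints 20, like b(10, 10)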
{"_id": "26143", "title": "Debugging, performance tracing, profiling, tools, etc. - suite?", "text": "My client is looking to standardize on its \"helper\" tools suite of applications to aid developers of a very large .NET project in debugging, profiling, finding memory leaks and performance bottlenecks, and other such issues. All developers use VS2010, target .NET 4.0, and support a large distributed system: WinForms UI client (CAB framework), SQL 2008 backend, CSLA app tier with DataPortal & remoting, and there is a large job-processing layer with Compute Cluster Pack (CCP) as well as a few dozen other peripheral technologies. To give you an idea of the size, the system is probably north of 5M lines of code, has about 100 developers around the world, and has been in development & production support for the last 5 years with this same team size. Over the years, the client did not have a single strategy that revolved around profiling tools of any kind. The client bought the tools when programmers said: \"oh, this looks nice to troubleshoot this particular problem I'm having\". So, now the client wants to standardize on a suite of tools. What has worked so far: ANTS for memory profiling, dotTrace for performance profiling. There is currently serious consideration being given to CA Wily for production monitoring. VS2010 profiling has been deemed not good enough and not working well enough within the current system. Can anyone recommend a single suite of tools that would be a good substitute for ALL the tools mentioned above? Integration with VS2010 is a must. All sorts of bells & whistles, ease of use, and the ability to drill down into the deepest levels to find the weirdest problems are required. Thank you for your suggestions"} {"_id": "204726", "title": "When there is no API", "text": "When it is necessary to integrate with a web application, and an API is unavailable, is it a viable solution to simulate a web browser interacting with the web application as a real user would interact with it? UPDATE Some context. The web app belongs to a vendor/partner. Their timeline for building a proper API will not meet our needs. Scraping for data is the least-used part; full CRUD for interacting with the app is required. As a developer, I can see that it will work, but I want to be able to address concerns that others may raise about this approach. Would you base a mission-critical business application on this approach? Why or why not?"}
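The \"drive the app like a user\" approach described above usually ends up as browser automation. A feasibility sketch with Selenium (element names and URL are invented); the main concern to raise is brittleness - any markup change in the vendor's app can silently break every flow:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://vendor.example.com/login")
    driver.find_element(By.NAME, "username").send_keys("svc_account")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # ... navigate to the record form and perform the CRUD steps,
    # exactly as a user would ...
    driver.quit()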
\"How do you actually change from learner to professional in field of programming?\" - keep learning until you are perfect or joining a firm is must once basics are covered?"} {"_id": "196934", "title": "Unique Value Object vs Entity", "text": "Trying to convert some entities into value objects I am stuck in a case where what seems a value object must be unique within an aggregate. Suppose we have a **Movie** entity which makes the root of an aggregate. This **Movie** entity is related with some set of **AdvertisementEvent** objects with the role of displaying an advertisement at certain timestamp. The **AdvertisementEvent** contains a link to some **Banner** that must be displayed, the coordinates and some effect filters. Since **AdvertisementEvent** is just a _collection of configuration parameters_ I am not sure if I should care about its identity and treat it like just a large value object. However I do care that within a **Movie** there must be only one **AdvertisementEvent** at a certain timestamp, probably even _around_ the timestamps. I find hard to split my doubts in multiple independent questions, so there they go: 1. Does a _collection of configuration parameters_ sounds like a value object? 2. Am I mixing the concept of uniqueness of **AdvertisementEvent** within **Movie** and transactional integrity rule? 3. Does _any_ of the choices in point (2) implies that **AdvertisementEvent** must be a member of the aggregate made by **Movie**? 4. Is my **AdvertisementEvent** object an Entity, a Value Object or an Event Object? (I used the _Event_ suffix in the name to highlight my confusion) 5. Are large value objects like this a design smell? I guess that I am not dealing with an Event in the sense of DDD because it is not something that just _happens_. The real DDD event should be something more like _AdvertisementEventReached_"} {"_id": "194353", "title": "Internship along with Google Summer of Code?", "text": "I am wondering about how much time Google Summer of Code typically takes. I do have an internship lined up for this summer, but GSoC seems like quite an awesome experience. I also really want to get into Open Source development. It states in the FAQ that it's not a good idea to attempt both, but it doesn't give any specifics. I'm wondering if it's feasible to do both. Or would I be better off by trying to work on open source in my spare time? Thanks for all your help!"} {"_id": "139831", "title": "What are benefit/drawbacks of classifying defects during a peer code review", "text": "About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features that is apparently asked by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may potentially need to be trained in (at least that seems to be the argument). Are there other benefits? And on the other hand, and this is my concern, is that it will slow down code review process that much more. 
{"_id": "194353", "title": "Internship along with Google Summer of Code?", "text": "I am wondering about how much time Google Summer of Code typically takes. I do have an internship lined up for this summer, but GSoC seems like quite an awesome experience. I also really want to get into open source development. The FAQ states that it's not a good idea to attempt both, but it doesn't give any specifics. I'm wondering if it's feasible to do both, or would I be better off trying to work on open source in my spare time? Thanks for all your help!"} {"_id": "139831", "title": "What are benefit/drawbacks of classifying defects during a peer code review", "text": "About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may potentially need to be trained (at least that seems to be the argument). Are there other benefits? And on the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment, and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent on arguing whether something is a logic error or whether it should be classified as a crash."} {"_id": "207057", "title": "How can I help myself focus on a project when there is a more fun one I can work on?", "text": "I suffer from adult ADHD. Sometimes I get bored with projects or work items assigned to me at work because they aren't particularly interesting. I end up inventing more interesting work and working on that instead, not getting my assignments done on time. I do end up making useful software, but not what I committed to making. I know that I need to \"suck it up\" and \"just do it\"; however, the ADHD makes it difficult for me to choose what to think about. As a result, I get distracted often and don't deliver on time. Are there any techniques I should try to help stay on task, especially when there is more fun work I could be doing?"} {"_id": "2051", "title": "What is the (craziest, stupidest, silliest) thing a client/boss asked you to do?", "text": "See title, but I am asking from a technical perspective, not > Take my 40 year old virgin niece on a date or you're fired."} {"_id": "199222", "title": "Designing a SQL-like encapsulation object for programmatic use", "text": "In the last few weeks, I have been working on a data mapping library, which has involved lots of research, experimentation, crying, blaming the whiteboard for not being big enough, and more research. But now I have a full idea of what I'm trying to do. I have decided that before working on any data source adapters, I need to build a SQL expression library. The queries are not assembled from a string of text, but in the form of an instantiated object containing multiple other objects, each associated with the part of the query it pertains to. As such, a `SELECT` query object will have objects for the `columns`, `table from`, `joins`, `where clause`, `group bys`, `having clause`, `order bys`, and `limit`. These objects are then passed to the data source adapter, which will take the input and turn it into a query (or use it in other ways) so that it can fetch the data in a common fashion. Although this is good enough for simple queries, when it comes to more complex queries, where the join reference or a condition within the where clause is a nested select query (i.e. `SELECT * FROM tblA INNER JOIN (SELECT * FROM tblB WHERE foo = 'bar')`, or `SELECT * FROM tblA WHERE alice IN (SELECT * FROM tblB WHERE foo = 'bar')`), I am having trouble designing an interface where such a thing could be defined, which would later be passed to the adapter that could use it in whatever way it needs to. So, can anyone propose a design that would allow such queries to be defined in the abstract, which could be interpreted into a SQL query or into a NoSQL programmatic interface function call? **EDIT** The design of the system is such that all the tables and columns are defined as objects (separately, but with named associations for linking). Here is an example of the coding I am hoping to be able to execute:

    tblA::select()
        ->where(new inCondition(
            'tblAfooCol',
            tblB::select()->where(new isNullCondition('tblBbarCol'))
        ))
        ->fetch();

Now, instead of this being directly translated into SQL (which is very easy), I am trying to plan a way that this can be returned to the data source adapter (internally through the `fetch` function). So far, the system I have built can handle the commands for a single `SELECT` query being parsed, but the trouble starts when subqueries are used in conjunction with the query that will be passed to the adapter. I am having problems determining how to encapsulate the subquery so the adapter can process it itself. As said before, converting it directly into SQL works great, but if I want to use it with a special adapter that does not use SQL (i.e. an XML or MongoDB adapter), then without a function to convert the SQL into usable commands (or, if it is SQL, to convert some of the syntax into a valid form for that RDBMS), it may become very processor-intensive (only for certain queries) and cost quite a lot of performance."}
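One design that fits the requirement above is to model the query as a small tree in which a condition's operand may itself be a select node, and to let every adapter walk that tree recursively - SQL adapters emit text, non-SQL adapters emit their own calls. A heavily simplified, hypothetical sketch:

    class Select:
        def __init__(self, table, where=None):
            self.table, self.where = table, where

    class In:  # WHERE <column> IN (<sub-select> or literal list)
        def __init__(self, column, operand):
            self.column, self.operand = column, operand

    def to_sql(node):  # one adapter; a MongoDB adapter would walk the same tree
        if isinstance(node, Select):
            sql = "SELECT * FROM %s" % node.table
            if node.where:
                sql += " WHERE " + to_sql(node.where)
            return sql
        if isinstance(node, In):
            if isinstance(node.operand, Select):
                inner = to_sql(node.operand)  # recurse into the subquery
            else:
                inner = ", ".join(map(repr, node.operand))
            return "%s IN (%s)" % (node.column, inner)
        raise TypeError(node)

    q = Select("tblA", In("alice", Select("tblB")))
    print(to_sql(q))  # SELECT * FROM tblA WHERE alice IN (SELECT * FROM tblB)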
{"_id": "204099", "title": "Encryption Cannot Be Reversed?", "text": "I am under the impression that an encrypted string cannot be decrypted, so the original value is lost forever. However, if the following string **always equals** \"dominic\" (my name), then can't there be some logical way to reverse it, it being not random nor based on the date/time, but produced by a logical method?

    0WrtCkg6IdaV/l4hDaYq3seMIWMbW+X/g36fvt8uYkE=

No matter what or how many times I encrypt \"dominic\" (string), it always equals the above. So, shouldn't there be some way to decrypt a string like that? Example of what I'm talking about:

    public string EncryptPassword(string password)
    {
        return Convert.ToBase64String(
            System.Security.Cryptography.SHA256.Create()
                .ComputeHash(Encoding.UTF8.GetBytes(password)));
    }"}
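Worth noting about the snippet above: it hashes rather than encrypts. SHA-256 is one-way, so nothing is \"decrypted\" - but precisely because the same input always yields the same digest, unsalted hashes can be reversed by precomputed lookup (rainbow) tables. Salting removes that property; a hedged Python illustration:

    import hashlib, os

    def hash_password(password):
        salt = os.urandom(16)  # random per-password salt
        digest = hashlib.sha256(salt + password.encode()).hexdigest()
        return salt.hex() + ":" + digest  # store salt next to digest

    print(hashlib.sha256(b"dominic").hexdigest())  # identical every run
    print(hash_password("dominic"))                # different every call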
{"_id": "199227", "title": "Recompiling dll's and adding more during run time - what are my options?", "text": "I want to compile custom functions during run time based on user-written scripts. I'll give a hypothetical example that should demonstrate exactly what I need to do; this is the best way for me to describe the problem. I'd like a little input/ideas on how I can solve this. At the bottom I give you my ideas. I have 2 classes, one with public int x and y, another with public int x, y, z. There can be more than one instance of each class running. The thing with these classes, though (and here comes my problem), is that they need to be updatable via a user-written function, compiled during run time, that then gets added to a list of update functions that gets run once per cycle/event - which could be at the touch of some button or something. Example: the user has written a script to update the public members of class A. He has written A.X = B.Y + C.Z. The variables here are public, never private. This method needs to be compiled and added to some list of functions to call each cycle. Obviously this method needs access to classes B and C, as well as A, so pointers to those need to be passed in. The function returns nothing. When I say classes A, B, and C, I mean these as some instances of any classes. 'A' could be of class Q, while both B and C could be of class T, or any other variation. These classes do not need recompilation, user-written code, or anything like that. I just need help with coming up with options, and with how to compile a function that gets passed some pointers and does something with them - with all of that defined at run time: what the function does, what gets passed into it, etc. ---- What I think I can do ---- I'm jumping extremely far ahead in knowledge, but that is what I always do. I am just switching from C# to C++ and have little to go on for now, while I know little, so all I can do is guess at solutions and research them. I'm guessing I could compile a late-loading DLL for each function, and I'm guessing the DLL has to know something about the data structures passed into it, so I think it could work to have them in a separate normal DLL usable by any program (the A, B, and C class thingies - or rather the Q and T classes which A, B, and C were instances of in my little demo). So now both the normal program and the DLL functions know all about the classes, and all I have to do is compile the DLLs during runtime, load them, and somehow get a pointer to them that my main application can call with the appropriate arguments. Then if the user changes the script, the DLL unloads, recompiles, loads again, and a new pointer is obtained? Any help is greatly appreciated. If you can help direct my learning/research, I'll learn a lot faster and be able to implement this. Thank you so much!"}
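The unload/recompile/reload cycle sketched above has a compact analogue in Python's importlib, which can make the moving parts easier to see before committing to the C++ version (dlopen/dlclose or LoadLibrary/FreeLibrary). Here user_script is a hypothetical module generated from the user's text:

    import importlib
    import user_script  # assumption: written to disk and imported at startup

    update_fn = user_script.update  # "function pointer" handed to the loop

    def on_script_changed():
        # "unload + recompile + load": rebuild the module, refresh the pointer
        global update_fn
        module = importlib.reload(user_script)
        update_fn = module.update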
{"_id": "199224", "title": "Algorithm, AI, or intelligent agent suggestions for ingesting poorly-formatted & variable data from different document types", "text": "Thanks for looking!

# Background

I have been tasked with writing a program to normalize and ingest data from various sources into a common database. For the sake of simplicity, let's say that the program is for a public library system, and that they want to maintain a database of all books currently lent out of their various branches. Let's further assume that the branches are not linked to a common network or database (silly, I know, but please bear with me). The task is to accept submitted data from the various branch managers, and then automate the process of normalizing that data and storing it in the common database.

## Variable raw data formats

The **raw data** may be submitted in the form of an MS Excel file, a .csv, a tab-delimited file, a plain text file, possibly even just a simple email, a field-delimited file, etc.

## Loosely related data contents

The contents of the raw data files will _generally_ contain these fields:

1. Book ID
2. Book Title
3. Author
4. Is Checked Out?
5. Days Overdue
6. ISBN
7. Due date

...and so on. The problem is that some of the submitted data files will have these fields in column headers and some will not (so the field will need to be inferred from the data). Further, the field names will not always be consistent. One library branch may call the boolean field for whether a book is checked out \"OnLoan\", while another branch calls it \"IsCheckedOut\".

## Common data repository

All of this data will be ingested into a common database with normalized data that has been cleaned up during the ingest process. So, hopefully, we have something like this in the final DB:

1. BranchId
2. BookId
3. Title
4. CheckedOut
5. ISBN
6. DueDate
7. DaysOverdue

...and so on.

## Automation of ingest process

Let's assume that there are thousands of branches and that they must each issue this report to the library HQ once monthly. Obviously, my client can hire a bunch of data-entry people to do this job (in fact, that is how it is done today). The request from them, however, is to automate as much of this as possible to cut data-entry costs.

## So here is my plan; please suggest or criticize away:

1. **Standardize the file submission process.** This will be handled by creating a web page with a file upload dialog. DONE!
2. **Determine the file type.** I will be using C# (not that the language matters) and it has a pretty easy way of getting the file type, but sometimes I will simply get a `.txt` that turns out to be tab- or pipe-delimited, _so I need an algorithm to detect this_. I am thinking of using a _Bayes classifier_ or _artificial neural network_ for this.
3. **Attempt to parse the data into memory.** Now I have hopefully determined whether I have an Excel file, a tab-delimited file, a csv, etc. I will run the file through the correct parser to get it into memory, but now I need to determine if the **file has headers** or if I can **infer what the headers should be from the values**. For this I hope to again use a **Bayes classification system** and perhaps calculate a **Levenshtein distance** from the value to items in an array of known/standardized header names. But what about header inference from the data? How would I identify one column as containing due dates and one as containing ISBN numbers?
4. **Glean, clean and submit the values in each column.** If I am lucky enough to have gotten this far (I know what the headers are), then I need to loop through the values in each column and clean/normalize them. For example, some library branches may enter an ISBN value as \"ISBN12-345-67-89\" whereas another branch enters \"123456789\". I need to catch that difference and normalize it. Is this a case for just a plain ol' expert system or `if...then`? Is there a better way?
5. **Submit normalized data to the database.** This step is not as trivial as it sounds, because some library branches may report a book title as \"Algorithms for Dummies\" while another reports it as \"Algorithms for Dummies, 1st Edition\". Let's assume for a second that I don't have an ISBN to tie the two books together (though they are the same): **what method might be suitable for deducing that these books are the same and assigning them a common primary key in the related `Books` table**?

# Many, many thanks for your suggestions!!!"}
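For steps 2-3 of the plan above, the standard library can already take you a long way before a Bayes classifier is needed: csv.Sniffer detects delimiters, and fuzzy header matching can start as simple edit-distance lookup with difflib. A hedged sketch (the canonical names are assumptions):

    import csv, difflib

    CANONICAL = ["BookId", "Title", "Author", "CheckedOut", "ISBN", "DueDate"]

    def detect_dialect(sample_text):
        # Guesses the delimiter from a text sample of the file.
        return csv.Sniffer().sniff(sample_text, delimiters=",\t|;")

    def normalise_header(raw_name):
        hits = difflib.get_close_matches(raw_name, CANONICAL, n=1, cutoff=0.6)
        return hits[0] if hits else None  # None -> fall back to value inference

    print(normalise_header("Due_Date"))  # -> "DueDate"
    print(normalise_header("OnLoan"))    # likely None -> needs a classifier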
But not all particles are equal. Those that form the permeable medium are much more influential than the ones located light years away. Then the mathematical model needs to be solved, and an exact solution can rarely be found unless the mathematical model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm, which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution. It's worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computation machines, I believe that no solution to a physical problem involving any calculation can be exact, because universal physical constants are represented approximately in the computer. Numbers are represented with limited precision, at the very least limited by the amount of memory available to the computing machine. I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work, like train scheduling, automated trading, satellite orbit calculation, health care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations in computer memory, or the nature of the problems. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (little note here: because the question is originally taken from the book \"Introduction to algorithms\", the term \"solution\" means an algorithm or a program, which in this case gives an exact answer on each input). But that's probably more of theoretical interest. So I would like to narrow down the question to: **Are there any real-world practical problems where only the best (exact) solution algorithm or program will do (but not a good-enough, suboptimal, call-it-what-you-will solution)?** I want to clarify my understanding of _\"good enough\"_. The _best_ solution **is** _good enough_, **but not every** _good enough_ solution **is** the _best_ (exact, fastest, etc.) one. So I\u2019m asking for a kind of problem for which no solutions can be considered good enough unless the solution is the best. There are problems like the breaking of cryptographic ciphers where only the exact solution matters in practice, and, again in practice, the process of deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do. There's no practical need for an instant crack, though it's desired. So the quality of \"best\" can be understood in any sense: exact, fastest, requiring the least memory, having minimal possible network traffic, etc. And still I want this question to be theoretical if possible. In the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N. But that's the problem of finding the particular solution for P on computer X, which is... well, good enough. While formulating this question I came to the conclusion that we live in a world where programming solutions to practical problems are only required to be good enough. In rare cases (like shuttle maneuvering), very very good ones. I could not find any problems that require only the best solution.
If there is such a problem of practical interest, solved or unsolved, can you provide an example of it? Or does there exist any proof that no such problem can exist?"} {"_id": "204097", "title": "Dynamic form builder forms and database design?", "text": "Say your users can create their own web-based forms (textboxes, selects, etc.) and publish them on the web for their users to fill out. Does anyone have a resource or any advice on how to architect the database to tie into the dynamic forms? For example, would you create a child table for each form, or different versions of a given form?"} {"_id": "104653", "title": "Should the domain model include all the domain entities in my project?", "text": "I am currently reading about Grails and I love it. In order to get hands-on experience with Grails I decided to create a web application for some Management System. Yes, as you can guess, there are plenty of entities that go into the domain model for my web application. Anyhow, as a novice in web development, I just thought of creating a homepage first of all. Now here comes the problem: I sat for an hour and drew my domain model (for my homepage alone!). After that I had a doubt: do we have to enter all the entities (i.e. my whole web app's entities like user, profile, tasks and their relationships, etc.) in the domain model first and then start coding, or draw domain models for each page in our web app and at last connect all the domain models? Well, is what I am doing wrong? This is the first project I have ever started. Thanks for your advice."} {"_id": "105524", "title": "is it realistic to make use of HTML5 local storage to store CSS, and JavaScript", "text": "The idea is to make use of HTML5 local storage to store frequently accessed CSS and JavaScript. For example (pseudo-code): var load_from_cdn = true; if (detect local storage) { if (cache of css, js found) { load the local storage cache load_from_cdn = false; } } if (load_from_cdn) { document.write(' "} {"_id": "81925", "title": "Interested in Feedback on QA System Design", "text": "I'm in the beginning phases of creating a QA system and I want to make sure that the design decisions that I'm making now make sense and won't bite me in the butt later on. If you have even the slightest nit-picky thing or whatever, please post; I'd appreciate hearing any feedback about this (I'm quite new to it all). **QA solution folder:** * Assembly For Regression Tests (Class Library) * Assembly For Integration Tests (Class Library) * Assembly For Unit Tests (Class Library) * Assembly for QA related Utilities (Class Library) **Each assembly has this basic structure:** [TestFixture] public class SomeComponent{ [Test] public void SomeComponentFunction(){ bool someBool; //Some stuff Assert.IsTrue(someBool,\"\"); } [Test] //etc. } **Some specific things I'm wondering** (but I'm open to feedback about anything) * Are class libraries the appropriate project type to use? I can see the advantage of using console applications so you can run these tests independently of NUnit (but I don't know why you'd do that instead of just using NUnit). * I'm having a difficult time understanding the difference between regression-level tests and system-level tests. For a system-level test, it seems like you run an end-to-end test, and compare the final result with some \"Standard\" result.
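To make sure I'm describing it right, here is the kind of thing I picture for a system-level test (a rough sketch only; `Pipeline.Run` is a made-up stand-in for whatever the real end-to-end entry point would be):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class SystemLevelTests {
    [Test]
    public void FullRun_MatchesKnownGoodOutput() {
        // run the whole system end-to-end on a fixed input...
        string actual = Pipeline.Run(\"fixtures/sample-input.txt\");
        // ...and compare against a stored \"Standard\" baseline
        string expected = File.ReadAllText(\"fixtures/known-good-output.txt\");
        Assert.AreEqual(expected, actual);
    }
}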
Isn't that just regression testing?"} {"_id": "80034", "title": "How valid is ITJobsWatch data?", "text": "_(I've been wondering all day whether to ask this question here - my biggest concern would be that it would be seen as off-topic, or not relevant. It's not relevant to the US, this is true, but it is VERY relevant to the UK market.)_ My question is: do hiring managers / IT decision makers in the UK use data from ITJobsWatch in their decision making process, how valid is the data, and how accurate do you perceive it to be? (As a secondary, more worldwide question: do people use salary aggregation sites in their decision making/negotiation process?) A couple of disclaimers! I don't work for them, I have nothing to do with them. I invite (genuine) comments from representatives of the site about how they collect the data (I've read the small paragraph on their website already). I also wouldn't consider that I'm pimping the site here - I imagine any UK hiring manager worth his stuff will already know about it! I've been asked whether the data on the site is valid and relevant, and I'm curious as to the response from the UK market."} {"_id": "80036", "title": "How do you guys handle translation for software localization?", "text": "Most of the software I have written over my career has been built for English-speaking customers, but recently I've been working on a project where localization of the UI for a wider range of languages is desired. I am just curious how other programming shops obtain the translations. Do they use the notoriously flawed online translation engines? I know there are for-hire translators out there, but am I going to have to track down and contract like a dozen of them to do a thorough job of localizing my interface? Are there services that specialize in doing this for a wide range of languages? Perhaps using something like Amazon's Mechanical Turk would be an option, but I have no idea how diverse the available workforce is on that site. I'd imagine not very."} {"_id": "118968", "title": "Multiuser System With Encrypted Database", "text": "I am currently developing a hosted solution in ASP.NET using MVC3 and Entity Framework. This product will then be made available to a number of clients as a hosted solution. As the data stored by each client will be critical to their business, we anticipate that many of them will insist that the data is stored in an encrypted form in the database. We are therefore designing the system with this in mind. Are there any recommended procedures for doing this? How would one go about encrypting the clients' data in such a way that even we, with root access to the database, cannot possibly access their data? I believe there is no built-in support in EF for encrypted data; is this true? And if so, how would one encrypt/decrypt the data?"} {"_id": "80032", "title": "How to gather critique on a highly specific or niche application?", "text": "I'm currently developing, on an independent basis, an application that is highly domain specific; a niche application if you will. This is all well and good, of course; but there is one problem: I'm not in a situation where I have a natural tester group. The issue is not really one of finding testers for technical accuracy; the application in question has the twin benefits of being (at least for now) dead simple in design and implementation, and having a developer who also has the appropriate domain knowledge.
Despite my relevant knowledge, however, I am not actually in the target audience for the application, leading to problems in identifying useful features for future implementation, among other things. What, then, are the avenues to explore with regard to actually finding end users?"} {"_id": "118962", "title": "Is it normal to think about a design problem for days with no code written?", "text": "Sometimes I stare blankly into space or sketch ideas and write some pseudocode on paper. Then I scratch it out and start again, and when I think I have the correct solution for the problem I begin writing the code. Is it normal to think for days without writing any code? Is this a sign that I am approaching the problem entirely wrong? It makes me nervous to not get any tangible code written in my IDE."} {"_id": "139650", "title": "How is CoffeeScript influenced by Haskell?", "text": "I've been using CoffeeScript for a while now. On Wikipedia, it is said that CoffeeScript is influenced by Haskell. But after checking out the syntax of Haskell, I have found little resemblance to CoffeeScript. Which aspect of CoffeeScript is influenced by Haskell?"} {"_id": "205327", "title": "Restart button in Python", "text": "I am having trouble with Python. I am making a text-based adventure game, and I am trying to make a restart function for when you die. I am trying to count deaths by doing deaths = deaths + 1 whenever you die, but the only way to restart is to re-run the entire script, which resets the death counter. Does anyone know how to restart a script without re-running the program?"} {"_id": "46981", "title": "Where to find clients?", "text": "My main area: web development. Of course, I don't expect anybody to give away their 'gold mine' or whatever, but I am struggling to see where I should be advertising my services. I have one other developer I work with and we have a lot of happy clients - on freelance websites. Thing is, freelance websites just seem to suck the life out of you when you're being outbid by ridiculous rates. I want to attract customers who are more concerned about quality and accountability than price. Any suggestions at all? I'm so lost with this. EDIT: Added bounty of 200 - all of my 'reputation'. EDIT: Added second bounty of 50 I did hear of a novel idea. Do work for an open-source project and get featured in their 'trusted developers' section, if they have one. Input?"} {"_id": "139654", "title": "REST - Tradeoffs between content negotiation via Accept header versus extensions", "text": "I'm working through designing a RESTful API. We know we want to return JSON and XML for any given resource. I had been thinking we would do something like this: GET /api/something?param1=value1 Accept: application/xml (or application/json) However, someone tossed out using extensions for this, like so: GET /api/something.xml?parm1=value1 (or /api/something.json?param1=value1) What are the tradeoffs with these approaches? Is it best to rely on the Accept header when an extension isn't specified, but honor extensions when specified? Is there a drawback to that approach?"} {"_id": "200132", "title": "Disadvantages of functional intermediate form", "text": "I'm writing an optimizer for a language similar to JavaScript, and need to choose an intermediate code representation. The obvious/typical choice these days is Static Single Assignment (SSA).
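For concreteness: the core of SSA (leaving aside the phi-nodes needed at control-flow joins) is that every definition gets a fresh name, so values are never reassigned. A hypothetical straight-line sketch, in C syntax:

/* the same function before and after SSA renaming */
int before(int a) {
    int x = a + 1;
    x = x * 2;      /* x is reassigned */
    return x;
}
int after(int a) {
    int x0 = a + 1; /* each definition gets its own name */
    int x1 = x0 * 2;
    return x1;
}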
However, _Modern Compiler Implementation in C_ also discusses functional intermediate form, which basically means going pure functional for the intermediate representation (pure in terms only of local variables, heap data is still mutable, and not CPS, just straightforward `let` blocks and tail calls) and has some advantages in terms of being easier to reason about. Presumably it's not a no-brainer or everyone would already be using such a representation, so my question is: what disadvantages does functional intermediate form have compared to SSA?"} {"_id": "85003", "title": "What is the best way to deal with legacy code not in version control?", "text": "What is the best way to develop and maintain legacy code not in version control? Adding it to version control is of course the obvious answer, but if you can't, for some reason, what would you do? A few reasons I can think of why version control wouldn't be possible are: * Management is against it (doesn't understand it, thinks it would take too much time, isn't worth it, etc.) * You don't have the administrative privileges to install the required software. * The code runs/is stored on legacy systems with limited capabilities for version control. So, if real version control isn't available, what do you do? Set up some regular backup system? Or perhaps create version-named folders?"} {"_id": "163155", "title": "How much detail is in a good UI regression test?", "text": "We use a detailed step-by-step user-interface regression test for our commercial web application. It has a \"backbone\" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high quality software. But, having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or not notice fairly obvious problems such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, \"Just demonstrate the system to yourself\" they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new untrained tester can test with it without asking any questions. Yet details seem to be counter-productive. How much detail do you put in a regression test to make it effective? How do you write tests that make the tester focus more on the system than on checking off items on the test?"} {"_id": "203446", "title": "Can web apps allow fast data-typists to \"type-ahead\"?", "text": "In some data entry contexts, I've seen data typists type really fast, know the app they use inside out, and have a mechanical quality in their work, so that they can \"type ahead\", i.e. continue typing and \"tab-bing\" and \"enter-ing\" faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this \"type ahead of time\" is only possible in desktop apps, but I may be wrong.
My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? **Edit:** By \"type ahead\" I mean \"keyboard type ahead\" (typing faster than the next entry form can load), not suggest-as-you-type-like-Google type ahead. > Typeahead is a feature of computers and software (and some typewriters) that > enables users to continue typing regardless of program or computer > operation\u2014the user may type at whatever speed he or she desires, and if the > receiving software is busy at the time it will be called to handle this > later. Often this means that keystrokes entered will not be displayed on the > screen immediately. This programming technique for handling user input is > known as a keyboard buffer."} {"_id": "134064", "title": "Is there a difference between the quotes in `help',``help\", 'help' and \"help\"?", "text": "Sometimes I find that comments in source code files have quotation marks like these (notice the `): `help' ``help\" For example, these are comments from the GNU cat source file, cat.c: /* Plain cat. Copies the file behind `input_desc' to STDOUT_FILENO. */ > /* Select which version of `cat' to use. If any options (more than -u, --version, or --help) were specified, use `cat', otherwise use `simple_cat'. */ > /* Suppress `used before initialized' warning. */ While in other parts, \"\" is used: /* Determines how many consecutive newlines there have been in the input. 0 newlines makes NEWLINES -1, 1 newline makes NEWLINES 1, etc. Initially 0 to indicate that we are at the beginning of a new line. The \"state\" of the procedure is determined by NEWLINES. */ What does ` mean, and what is it used for?"} {"_id": "155994", "title": "Java: \"Heap pollution\"", "text": "A \" **Heap Pollution** \" as in Non-Reifiable Types (The Java\u2122 Tutorials > Learning the Java Language > Generics (Updated)) Why is it called that way?"} {"_id": "155997", "title": "Object Oriented programming and modelling", "text": "I am taking a course in OOA/D this semester. In academia, they have so far taught Object-Oriented Programming. I have some doubts regarding this. 1. Is it true that object-oriented programming can be done without any specific modelling technique like OMT? 2. What are the models available for object-oriented software development?"} {"_id": "203442", "title": "URL Naming Convention with Repetitive Letters", "text": "What is a good practice for naming a URL if it contains repeated letters? For example, `/info/foossite` The two **s** letters look kind of odd, and if this were the access point for a `Web-Service`, it could lead to misspelling. In most programming languages, we have `Camel` notation. However, as URLs are case insensitive, we don't have that luxury here. So, what is the best practice in this regard?"} {"_id": "256083", "title": "Managing complex relationships with Entity Framework", "text": "I'm using Entity Framework with POCOs and change tracking enabled. I started off using CASCADE DELETE relationships, but in some situations, due to cyclic or multiple-possible-cascade-paths, SQL Server prevented me from adding CASCADE DELETE... so I need to manage those deletions myself.
Take for example the following situation: * A site has many chassis * A chassis has many cards * A card has many ports * A link has two ports (`PortAId` and `PortZId`) (and a port may participate in multiple links). The first 3 relationships have CASCADE DELETE; however, the final (`Link`) relationship has \"multiple cascade paths\" on the `Link.PortAId` and `Link.PortZId` attributes. Because of this, I _couldn't_ add the CASCADE DELETE relationship. * * * My **problem** is that when a port gets deleted, a foreign key constraint gets thrown because there are still `Link`s referencing the deleted port. * * * I tried looking through the `ChangeTracker` on `SaveChanges()` for `Port`s which were deleted, and deleting their respective links; _but_ a port deleted implicitly (through CASCADE DELETE) doesn't show up in the ChangeTracker, so I have to look for cards as well, and chassis, and sites... it becomes untenable. I then tried adding `TRIGGER`s to the database, such as: CREATE TRIGGER delete_port_links ON Ports FOR DELETE AS DELETE FROM Links WHERE PortAId IN (SELECT Id FROM deleted) OR PortZId IN (SELECT Id FROM deleted) ... however `FOR` triggers fire _after_ the foreign key constraint has been checked (and failed). I can't use `INSTEAD OF`, because the table has a foreign key. * * * So I'm really stuck in terms of what direction to go down to solve this. What techniques, patterns or features of EF would allow me to detect when a Port has (directly or implicitly) been deleted, such that I can delete the Links as well?"} {"_id": "134069", "title": "libavcodec/libavformat question", "text": "I've seen this code referenced in two different places, yet I haven't seen anybody bring this up, but it seems that on line 00339 there is an empty if block. I just copied the code to compile and try it out, but it won't even compile because of that. if (frame_count >= STREAM_NB_FRAMES) { /* no more frame to compress. The codec has a latency of a few frames if using B frames, so we get the last frames by passing the same picture again */ } else { Any idea or explanation of why it is like that? Is it a mistake on the author's part when copying it? What is supposed to be going on in that if block anyway?"} {"_id": "256081", "title": "Autosave in web development tools", "text": "I'm working on a new web-based development tool (version management based on git). Usability tests showed that developers are used to manual save. I asked 15 developers who are used to working with Eclipse or Visual Studio to develop in my web-based code editor. After a short mission I asked them if they felt their code was saved. They all said no, and looked for a save button, and didn't notice a \"saved\" label turning on and off at the top right corner. I'm wondering if autosave will be a better experience, and something developers will get used to, like in Google Docs?"} {"_id": "132403", "title": "Should I use friend classes in C++ to allow access to hidden members?", "text": "Here is my situation (a simple example). Say I have a class called `HiddenData`. Then I have another class called `StoreHiddenData`. And finally a class called `OperateHiddenData`. Here's the thing: my class called `HiddenData` has private members that I don't want to be visible to the rest of the world, but I still need classes to be able to operate on private members of this `HiddenData` class (`StoreHiddenData` and `OperateHiddenData`, for example). I would like to be able to pass this object around to different classes that do different things on it.
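To make that concrete, here is a minimal sketch of the design I have in mind (the member and method names are just placeholders):

// HiddenData exposes nothing publicly, but names the operator
// classes as friends so they can reach its private members
class HiddenData {
public:
    HiddenData() : secret(0) {}
private:
    int secret; // invisible to the rest of the world
    friend class StoreHiddenData;
    friend class OperateHiddenData;
};

class StoreHiddenData {
public:
    void store(HiddenData& data, int value) { data.secret = value; } // allowed via friendship
};

class OperateHiddenData {
public:
    int doubled(const HiddenData& data) { return data.secret * 2; } // likewise
};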
Another thing I was thinking of is to create a form of handle to these `HiddenData` objects, but I can't think of a way to use handles without using friend classes again (so as to give the other classes access to a table that contains what the handles point to, without exposing it to the rest of the world as well). I was reading this question on this site just a while ago as I was researching the use of friend classes, and it has me wondering whether using friends is the best way to go. If anyone has any suggestions for how to accomplish this task without the use of friend classes in C++, or if the use of friend classes is an OK OOP practice, please provide me with any suggestions. Thanks. EDIT: I figure I should add a little bit more information about my problem. For one thing, I am trying to create an API that consists of several classes that hide their functionality from the outside and only allow public functions to be accessed. On the inside I have data structures (or classes, doesn't really matter) that need to be passed around to these different classes with public functions. I don't want the outside to have access to anything inside these data structures other than being able to pass them around to functions of the API. I kind of think of it like the Windows API, where you pass around Handles to API functions, but you can't actually dissect the handles in your own code. Well, I would like to do something very similar, but while remaining object oriented, and NOT passing these private data structures around by value because they will be BIG. Thanks again."} {"_id": "256086", "title": "Segmentation fault with a C++ file handling program", "text": "I am learning file handling in C++. This program reads words from a file and inserts them into a vector alphabetically as they are entered. When I execute this program, I get a segmentation fault. Can someone spot the error in the program? #include <iostream> #include <fstream> #include <vector> #include <string> using namespace std; int main(int argc,char *argv[]) { if(argc!=2) { cerr << \"Incorrect number of arguments in Usage\" << endl; return 1; } ifstream fin(argv[1]); if(!fin) { cerr << \"File \" << argv[1] << \" not found\\n\"; return 1; } vector<string> v; string word; int i; fin >> word; v.push_back(word); while(fin >> word ) { i=0; while( word.compare(v[i])<0 && i { while (true) { buffer = new int[layer.data.length] process(layer.data, buffer, parent) // remember, this can access other layers layer.data = buffer sleep(layer.increment) } } } Running each layer on its own thread makes the processing fairly simple, so long as nothing in the middle is locked. Grabbing each layer the processing function cares about when it begins an iteration and releasing them at the end should work, since they are read-only and I'm not worried about another thread replacing the reference. **Question:** 1. Are there any issues with Timers for this kind of work? Do they provide a way to discard or wait on the previous iteration? 2. Is Quartz a viable choice, or simply absurd? 3. Can something as simple as `System.nanoTime` suffice at this precision? "} {"_id": "252977", "title": "Cleanest way to report errors in Haskell", "text": "I'm working on learning Haskell, and I've come across three different ways of dealing with errors in functions I write: 1. I can simply write `error \"Some error message.\"`, which throws an exception. 2. I can have my function return `Maybe SomeType`, where I may or may not be able to return what I'd like to return. 3.
I can have my function return `Either String SomeType`, where I can return either an error message or what I was asked to return in the first place. My question is: **Which method of dealing with errors should I use, and why?** Maybe I should use different methods, depending on the context? My current understanding is: * It's \"difficult\" to deal with exceptions in purely functional code, and in Haskell one wants to keep things as purely functional as possible. * Returning `Maybe SomeType` is the proper thing to do if the function will either fail or succeed (i.e., there aren't different _ways it can fail_ ). * Returning `Either String SomeType` is the proper thing to do if a function can fail in any one of various ways."} {"_id": "252975", "title": "GPL and non-GPL software common installer", "text": "From my vague understanding of this subject I assumed that if I make an installer for my program and it contains a GPL library, my code would have to fall under the GPL. However, I noticed yesterday when installing Ubuntu that its installation had checkboxes (selected by default) to install non-free software. This question is specifically about program behavior and says that my program wouldn't fall under the GPL as long as the library and my program can be seen as separate programs, but it does not say anything about making a common installer for the package. I saw this answer about common installers on StackOverflow in Portuguese, and it does say that a common installer makes the code fall under the GPL. **If I create an installer for the library and put my non-free program that uses that library there as an option, would my program fall under the GPL?**"} {"_id": "247166", "title": "Adding new functionality to all of shelve.Shelf's subclasses in Python", "text": "In order to avoid the overhead associated with the shelve module's `writeback` option I'm interested in putting together a shelf class that only accepts hashable values, with hashability being a proxy for immutability. So I'd like to subclass `shelve.Shelf` and override the `__setitem__` method. The catch is that `shelve.open` can return one of a number of different classes (e.g., `Dbfilenameshelf`), and I'd like my code to allow `shelve` this flexibility. Ideally a solution would have the following properties: 1. No need to wrap every function that `Shelf` provides, as in: `def keys(self): return self._shelf.keys()` 2. Will not break if Python adds new methods to `Shelf` or new subclasses of `Shelf` 3. Avoiding fragility, by which I mean doing something complicated that could easily trip up someone (conceivably me) who is modifying or using the code months or years down the road. An example in this context would be reassigning `Shelf.__setitem__`, as in `Shelf.__setitem__ = my_func`. To me, defining a new `__getattr__` as @Winston hesitantly offers seems moderately fragile--I take it that's why he suggests it only with hesitation. Less important but still desirable would be: 1. Consistency with object-oriented design principles. It seems to me that the new class here _is a_ shelf, not that it _has a_ shelf. And probably not both (it's not a shelf which contains another shelf)."} {"_id": "190880", "title": "strategies for dealing with machine epsilon", "text": "Say you have a situation where you divide and then multiply a float, and you need to guarantee that it survives macheps (i.e., the multiplication output equals the division input). What are known strategies for guaranteeing this?
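To show what I mean, a quick brute-force search for round-trip failures (a hypothetical illustration in Python):

# count inputs where divide-then-multiply does not give back the original float
import random

failures = 0
for _ in range(100000):
    x = random.uniform(1.0, 1000.0)
    d = random.uniform(1.0, 1000.0)
    if (x / d) * d != x:
        failures += 1
print(failures, 'round-trip failures out of 100000')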
Rounding would work, but anything else?"} {"_id": "7364", "title": "Is it acceptable for projects to go over budget?", "text": "This question is something that's been bugging me for the past 3 months, since I switched from being a freelancer to working at a Web Design firm. Our sales people often ask us something similar to the following series of questions: * How much does it cost to program a widget? * How many hours will it take to convert this website to this software? (Without knowing what the website currently runs on.) * etc * * * 1. How can we give a quote without any information? ( **No, I can't ask for more info!** ) I have another question. If a project goes over budget, it's bad. Recently, I missed an entire menu when calculating the cost of transferring a website over to a new platform, so the project went over budget. My boss was not happy at all, and it's my opinion that some things like this can't be avoided. 2\. What is the general practice for dealing with going over budget, and do projects like web development often go over budget? If you work at a web development/design/similar company: 3\. How does your billable hour system work? For me, we have a time-tracking application in which we record how many hours we spend on which project and whether they are billable or internal (AKA non-billable). If we don't meet xx billable hours a week we can get in trouble/fired eventually. Work you do for the company or for clients that isn't billable isn't part of this system, and we often _have_ to do internal work, so I'm wondering if any alternative systems exist. **EDIT:** OK, I am a developer at this firm, not a designer :) Second, I am paid a salary, but here is how management looks at it. You have 35 hours a week that you must work. You could be doing work that they bill to clients in those 35 hours, and you should. If they figure out a project will take 50 hours and I take 55 hours, those 5 hours could have been spent on another project that wasn't over budget, so we just \"lost\" money. Another example is that if I only have 1 project that is due in two weeks and I spend a day doing internal work, somehow we lost money because I wasn't working. If I worked that day, I would finish a day early and still have no work. Either way, the work is contract so we will get paid the same amount regardless of which days I work!"} {"_id": "190881", "title": "Kiln - Mercurial and Git limitation", "text": "According to this blog post: Kiln now supports repositories accessible from both Git and Mercurial > We decided that the awesome way would be to make Kiln fully bilingual. It > stores every repo in both formats. It automatically converts everything back > and forth, always. The translation is 1:1, reversible, and round-trippable. > Whatever you do to a Kiln repository using Git will be immediately visible > to Mercurial users and vice versa. I can't imagine that everything runs smoothly. What limitations does this have?"} {"_id": "255194", "title": "Why aren't we building and using parallel processors *meant* for general computation?", "text": "We all know GPUs are much faster than CPUs for a wide range of applications. When someone asks why we don't just program everything for GPUs, one of the most common answers is that GPUs are not good for everything - i.e., they fail to do some things that CPUs do easily. Well, no wonder: after all, they are not **meant to be used for general computations**.
GPUs are strongly tied to games and graphical applications, not only having specialized functions for those, but often being advertised for them. My question is: why, then, aren't we building and using processors that are **actually designed for parallel programs**?"} {"_id": "255196", "title": "What is the legality of making a mobile app version of an existing card or board game?", "text": "I see that the market is full of mobile apps based on card/board games. For example, do a search for \"Taboo\" and there are many results which are not published by Hasbro Inc. Is this legal? The top \"Taboo\" result says in the description that they are not associated with Hasbro--does this absolve them of infringement issues? I can think of a few games that I could make mobile versions of that could be successful, but I'm wary of publishing (and monetizing) a game based on another, especially if it has the same name. Note: I noticed this useful thread, which hints that it is legal to create such games, but that the original owner may fight to have similar games removed and might win. If I develop such a game and have to pull it down, would I be able to resubmit it after making some changes?"} {"_id": "255190", "title": "how does semantic versioning apply to programs without API", "text": "In http://semver.org/ \u2014which in my perception seems to be the most widely used convention in versioning\u2014 it is recommended to increase the major version number when a change that breaks/modifies the API is introduced. There are two related scenarios where I don't see how to apply this guideline, though: 1. What if my code doesn't offer any API? How should I version my code? 2. What if my code starts offering an API at a late stage of its development?"} {"_id": "176728", "title": "Learning by doing (and programming by trial and error)", "text": "How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this: 1. I create a new branch (with GIT, in my case) 2. I write a few unit tests (with JUnit, for example) 3. I write my code until it passes my tests So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to \"learn by doing\" (actually, programming by \"trial and error\") and this makes me anxious. Please note that, at some point in the learning process, I usually already have: 1. read one or more five-star books 2. followed one or more web tutorials (writing working code a line at a time) 3. created a couple of small experimental projects with my IDE (IntelliJ IDEA, at the moment. I use Eclipse, Netbeans and others, as well.) Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial-and-error. Unfortunately, working by trial-and-error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean.
Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I can no longer use an independent small project for my experiments, because I need to try the new code in place. I can just write my code in place and rely on GIT for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn-by-doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?"} {"_id": "255198", "title": "How do you send a new command using NSTask to a unix executable file?", "text": "I'm trying to learn how to use NSTask, NSPipe, and NSFileHandle to make use of unix executable files in Xcode. I've already written a program that both renders chess and plays it against a user, so now I want to make a chess GUI with Stockfish, which I have included in my Xcode project as a unix executable file. To this end, I've written the following program, but I've gotten stuck on sending new commands to Stockfish; that is, I can send it a command when I launch the NSTask, but I can't send it commands once the task is running. When I send these commands via Terminal, Stockfish responds perfectly, but for some reason, it ignores any commands I send it after launch (i.e. with [self sendCommandToStockfish]). If I send it a command, it simply ignores it and continues executing the original command, while writing to the input pipe continues to work perfectly. How can I continue to send commands with NSTask after launch? Here's the code I've got so far: // // AppDelegate.m // CommandLineToolRunner // // Created by Thomas Redding on 8/28/14. // Copyright (c) 2014 Thomas Redding. All rights reserved.
// #import \"AppDelegate.h\" @implementation AppDelegate - (void)applicationDidFinishLaunching:(NSNotification *)aNotification { // Insert code here to initialize your application self.myTimer = [NSTimer scheduledTimerWithTimeInterval:0.1f target:self selector:@selector(timerFired:) userInfo:nil repeats:YES]; // set up task self.task = [[NSTask alloc] init]; [self.task setLaunchPath:[[[NSBundle mainBundle] pathForResource:@\"stockfish\" ofType:@\"\"] copy]]; [self.task setArguments:@[@\"go infinite\"]]; // set up output-pipe [self.task setStandardOutput:[NSPipe new]]; NSNotificationCenter* notificaitonCenter = [NSNotificationCenter defaultCenter]; [notificaitonCenter addObserver:self selector:@selector(getReadData:) name:NSFileHandleReadCompletionNotification object:[[self.task standardOutput] fileHandleForReading]]; [[[self.task standardOutput] fileHandleForReading] readInBackgroundAndNotify]; // set up input-pipe [self.task setStandardInput:[NSPipe new]]; [self.task launch]; self.frame = 0; } - (void) timerFired:(NSTimer *)timer { // do stuff self.frame++; if(self.frame == 20) { // write NSLog(@\"[[Sending Commands]]\"); [self sendCommandToStockfish:@\"stop\"]; [self sendCommandToStockfish:@\"position fen rnbqkbnr/8/8/8/8/8/8/RNBQKBNR w KQkq - 0 1\"]; [self sendCommandToStockfish:@\"go infinite\"]; } } - (void) getReadData:(NSNotification *)notification { NSData *data = [[notification userInfo] objectForKey:@\"NSFileHandleNotificationDataItem\"]; NSString *response = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]; [[notification object] readInBackgroundAndNotify]; NSLog(@\"%@\", response); } - (void)sendCommandToStockfish:(NSString *)string { NSLog(@\"[[%@]]\", string); NSString *strWithNewLine = [NSString stringWithFormat:@\"%@\\n\", string]; NSData *myData = [strWithNewLine dataUsingEncoding:NSUTF8StringEncoding]; @try { NSLog(@\"((START TRY))\"); [[self.task.standardInput fileHandleForWriting] writeData:myData]; NSLog(@\"((END TRY))\"); } @catch (NSException * e) { NSLog(@\"((ERROR))\"); if ([self.task isRunning]) { NSLog(@\"^^^\"); [self.task terminate]; } else { NSLog(@\"***\"); } } } @end When I run this, I get the following output logged (I terminate the program on my own): 2014-09-02 15:02:18.474 CommandLineToolRunner[49999:303] Stockfish 140714 64 SSE4.2 by Tord Romstad, Marco Costalba and Joona Kiiski 2014-09-02 15:02:18.713 CommandLineToolRunner[49999:303] info depth 1 seldepth 1 score cp 85 nodes 27 nps 27000 time 1 multipv 1 pv e2e4 2014-09-02 15:02:18.714 CommandLineToolRunner[49999:303] info depth 2 seldepth 2 score cp 6 nodes 155 nps 77500 time 2 multipv 1 pv d2d4 d7d5 2014-09-02 15:02:18.715 CommandLineToolRunner[49999:303] info depth 3 seldepth 3 score cp 64 nodes 386 nps 128666 time 3 multipv 1 pv d2d4 d7d5 g1f3 2014-09-02 15:02:18.717 CommandLineToolRunner[49999:303] info depth 4 seldepth 4 score cp 25 nodes 1082 nps 216400 time 5 multipv 1 pv e2e4 e7e5 d2d4 b8c6 2014-09-02 15:02:18.724 CommandLineToolRunner[49999:303] info depth 5 seldepth 5 score cp 53 nodes 3129 nps 284454 time 11 multipv 1 pv b1c3 g8f6 d2d4 d7d5 g1f3 2014-09-02 15:02:18.729 CommandLineToolRunner[49999:303] info depth 6 seldepth 6 score cp 26 nodes 4702 nps 313466 time 15 multipv 1 pv e2e4 e7e5 d2d4 g8f6 b1c3 b8c6 2014-09-02 15:02:18.732 CommandLineToolRunner[49999:303] info depth 7 seldepth 7 score cp 58 nodes 6866 nps 343300 time 20 multipv 1 pv e2e4 d7d5 e4e5 b8c6 d2d4 e7e6 f1b5 2014-09-02 15:02:18.754 CommandLineToolRunner[49999:303] info depth 8 seldepth 8 score cp 12 nodes 
14656 nps 396108 time 37 multipv 1 pv e2e4 d7d5 e4d5 g8f6 d2d4 d8d5 b1c3 d5a5 2014-09-02 15:02:18.791 CommandLineToolRunner[49999:303] info depth 9 seldepth 11 score cp 33 nodes 24157 nps 402616 time 60 multipv 1 pv e2e4 d7d5 e4e5 c7c5 d2d4 c5d4 d1d4 b8c6 f1b5 d8a5 b1c3 a5b5 c3b5 c6d4 b5d4 2014-09-02 15:02:18.807 CommandLineToolRunner[49999:303] info depth 10 seldepth 14 score cp 28 nodes 44269 nps 465989 time 95 multipv 1 pv e2e4 e7e5 b1c3 b8c6 g1f3 f8c5 c3a4 d7d6 a4c5 d6c5 2014-09-02 15:02:18.839 CommandLineToolRunner[49999:303] info depth 11 seldepth 15 score cp 41 nodes 75424 nps 593889 time 127 multipv 1 pv e2e4 e7e5 b1c3 b8c6 g1f3 f8c5 f3e5 c5f2 e1f2 c6e5 d2d4 2014-09-02 15:02:19.002 CommandLineToolRunner[49999:303] info depth 12 seldepth 16 score cp 28 nodes 254305 nps 879948 time 289 multipv 1 pv g1f3 d7d5 e2e3 b8c6 d2d4 c8g4 f1b5 g8f6 e1g1 a7a6 b5c6 b7c6 2014-09-02 15:02:19.042 CommandLineToolRunner[49999:303] info depth 13 seldepth 16 score cp 34 nodes 309418 nps 937630 time 330 multipv 1 pv g1f3 d7d5 e2e3 e7e6 d2d4 g8f6 f1d3 f8e7 e1g1 b8c6 c1d2 e8g8 b1c3 2014-09-02 15:02:19.224 CommandLineToolRunner[49999:303] info depth 14 seldepth 18 score cp 22 nodes 524342 nps 1024105 time 512 multipv 1 pv e2e4 e7e5 b1c3 b8c6 g1f3 g8f6 f1b5 f8b4 e1g1 e8g8 b5c6 d7c6 f3e5 d8e7 d2d4 b4c3 b2c3 f6e4 2014-09-02 15:02:19.603 CommandLineToolRunner[49999:303] info depth 15 seldepth 19 score cp 24 nodes 1002076 nps 1125928 time 890 multipv 1 pv g1f3 d7d5 d2d4 g8f6 c1f4 b8c6 e2e3 c8f5 f1d3 f5d3 d1d3 e7e6 d3b5 d8c8 b1c3 a7a6 b5a4 2014-09-02 15:02:19.769 CommandLineToolRunner[49999:303] info depth 16 seldepth 19 score cp 24 nodes 1213245 nps 1148906 time 1056 multipv 1 pv g1f3 d7d5 d2d4 g8f6 c1f4 b8c6 e2e3 c8f5 f1d3 f5d3 d1d3 e7e6 d3b5 d8c8 b1c3 a7a6 b5a4 2014-09-02 15:02:20.311 CommandLineToolRunner[49999:303] info depth 17 seldepth 20 score cp 22 nodes 1878116 nps 1174556 time 1599 multipv 1 pv g1f3 d7d5 d2d4 e7e6 c1f4 f8d6 f4d6 c7d6 b1c3 g8f6 e2e3 b8c6 f1d3 e8g8 e1g1 c8d7 a2a3 a7a6 2014-09-02 15:02:20.471 CommandLineToolRunner[49999:303] [[Sending Commands]] 2014-09-02 15:02:20.471 CommandLineToolRunner[49999:303] [[stop]] 2014-09-02 15:02:20.472 CommandLineToolRunner[49999:303] ((START TRY)) 2014-09-02 15:02:20.472 CommandLineToolRunner[49999:303] ((END TRY)) 2014-09-02 15:02:20.472 CommandLineToolRunner[49999:303] [[position fen rnbqkbnr/8/8/8/8/8/8/RNBQKBNR w KQkq - 0 1]] 2014-09-02 15:02:20.472 CommandLineToolRunner[49999:303] ((START TRY)) 2014-09-02 15:02:20.473 CommandLineToolRunner[49999:303] ((END TRY)) 2014-09-02 15:02:20.473 CommandLineToolRunner[49999:303] [[go infinite]] 2014-09-02 15:02:20.473 CommandLineToolRunner[49999:303] ((START TRY)) 2014-09-02 15:02:20.473 CommandLineToolRunner[49999:303] ((END TRY)) 2014-09-02 15:02:20.823 CommandLineToolRunner[49999:303] info depth 18 seldepth 20 score cp 23 nodes 2495624 nps 1182760 time 2110 multipv 1 pv g1f3 d7d5 d2d4 e7e6 c1f4 f8d6 e2e3 g8f6 f1e2 e8g8 e1g1 b8c6 f3e5 c8d7 b1c3 a7a6 a2a3 c6e5 d4e5 2014-09-02 15:02:21.756 CommandLineToolRunner[49999:303] info depth 19 currmove d2d4 currmovenumber 5 2014-09-02 15:02:21.951 CommandLineToolRunner[49999:303] info depth 19 currmove d2d3 currmovenumber 6 2014-09-02 15:02:21.960 CommandLineToolRunner[49999:303] info depth 19 currmove c2c3 currmovenumber 7 2014-09-02 15:02:21.966 CommandLineToolRunner[49999:303] info depth 19 currmove f2f3 currmovenumber 8 2014-09-02 15:02:22.010 CommandLineToolRunner[49999:303] info depth 19 currmove b2b4 currmovenumber 9 2014-09-02 15:02:22.015 
CommandLineToolRunner[49999:303] info depth 19 currmove g1h3 currmovenumber 10 2014-09-02 15:02:22.022 CommandLineToolRunner[49999:303] info depth 19 currmove a2a3 currmovenumber 11 2014-09-02 15:02:22.029 CommandLineToolRunner[49999:303] info depth 19 currmove f2f4 currmovenumber 12 2014-09-02 15:02:22.036 CommandLineToolRunner[49999:303] info depth 19 currmove g2g4 currmovenumber 13 2014-09-02 15:02:22.037 CommandLineToolRunner[49999:303] info depth 19 currmove b2b3 currmovenumber 14 2014-09-02 15:02:22.044 CommandLineToolRunner[49999:303] info depth 19 currmove c2c4 currmovenumber 15 2014-09-02 15:02:22.086 CommandLineToolRunner[49999:303] info depth 19 currmove h2h3 currmovenumber 16 2014-09-02 15:02:22.091 CommandLineToolRunner[49999:303] info depth 19 currmove g2g3 currmovenumber 17 2014-09-02 15:02:22.095 CommandLineToolRunner[49999:303] info depth 19 currmove a2a4 currmovenumber 18 2014-09-02 15:02:22.098 CommandLineToolRunner[49999:303] info depth 19 currmove b1a3 currmovenumber 19 2014-09-02 15:02:22.101 CommandLineToolRunner[49999:303] info depth 19 currmove h2h4 currmovenumber 20 2014-09-02 15:02:22.107 CommandLineToolRunner[49999:303] info depth 19 seldepth 23 score cp 17 upperbound nodes 4113455 nps 1211621 time 3395 multipv 1 pv g1f3 d7d5 d2d4 g8f6 c1f4 c8f5 e2e3 e7e6 f1d3 f5d3 d1d3 b8c6 e1g1 f8d6 f4d6 c7d6 b1c3 info depth 19 currmove g1f3 currmovenumber 1 2014-09-02 15:02:22.410 CommandLineToolRunner[49999:303] info depth 19 currmove e2e4 currmovenumber 2 2014-09-02 15:02:23.133 CommandLineToolRunner[49999:303] info depth 19 currmove b1c3 currmovenumber 3 info depth 19 currmove d2d4 currmovenumber 4 info depth 19 currmove e2e3 currmovenumber 5 info depth 19 currmove d2d3 currmovenumber 6 info depth 19 currmove c2c3 currmovenumber 7 info depth 19 currmove a2a3 currmovenumber 8 info depth 19 currmove c2c4 currmovenumber 9 2014-09-02 15:02:23.158 CommandLineToolRunner[49999:303] info depth 19 currmove b2b4 currmovenumber 10 info depth 19 currmove f2f3 currmovenumber 11 info depth 19 currmove g2g4 currmovenumber 12 2014-09-02 15:02:23.159 CommandLineToolRunner[49999:303] info depth 19 currmove h2h3 currmovenumber 13 info depth 19 currmove a2a4 currmovenumber 14 info depth 19 currmove g1h3 currmovenumber 15 info depth 19 currmove b1a3 currmovenumber 16 info depth 19 currmove b2b3 currmovenumber 17 info depth 19 currmove f2f4 currmovenumber 18 info depth 19 currmove g2g3 currmovenumber 19 info depth 19 currmove h2h4 currmovenumber 20 2014-09-02 15:02:23.160 CommandLineToolRunner[49999:303] info depth 19 seldepth 23 score cp 27 nodes 5442993 nps 1223969 time 4447 multipv 1 pv e2e4 e7e5 b1c3 b8c6 g1f3 g8f6 f1c4 f8c5 d2d3 a7a6 e1g1 e8g8 a2a3 d7d6 b2b4 c5a7 c1e3 a7e3 f2e3 info depth 20 currmove e2e4 currmovenumber 1"} {"_id": "70417", "title": "Why does the Perl community have such a bad reputation?", "text": "I'm still fairly new to programming. I spend most of my time in Ruby, and I'm discovering a certain fondness for playing with regular expressions. That being said, I'm considering taking a look at Perl, just as a hobby. However, I've heard a lot about the Perl community, and none of it good. I've heard the community described as extremely elitist and resistant to inexperienced programmers. Is this true? 
If it is, why is that the case?"} {"_id": "161798", "title": "Is it easier to write robust code in compiled, strictly-typed languages?", "text": "I'd like to read the opinion of experts on whether compiled, strictly-typed languages help programmers write robust code more easily, having their backs, checking for type mismatches, and in general, catching flaws at compile time that would otherwise be discovered at runtime? Does unit testing in loosely typed, interpreted languages need to be more thorough, because test cases need to assess things that in a compiled language simply wouldn't compile? Does the programmer in compiled, strictly-typed languages need to write less code, since he/she doesn't have to constantly check for correct types (instanceof operator), because the compiler catches those things (the same applying to misspelled variable names or uninitialized variables)? **NOTE: by strictly typed language I mean languages that force you to declare variables and assign a type to them in order to be able to use them, and once a variable is declared, you cannot change its type or assign incompatible type values.** int myInt = 0; String myString = \"\"; myInt = myString; // this will not fly in a strictly typed language myInte = 10; // neither will this ( myInte variable doesn't exist )"} {"_id": "70410", "title": "Starting new project with TDD", "text": "I'm studying TDD and I read that it also helps you to define the design of the app, correct? So I decided to start creating a new project to help me understand it better. I want to create a simple user registration system that will ask for the user's name, email address, country (picked from a list) and phone number. So the question is... I created a new solution in VS 2010, added a new Test project, and I just don't know what tests to write! Since it will help me define the design, what tests could I write here? Thanks for any help!"} {"_id": "161794", "title": "Is it a good idea to design an architecture thinking that the User Interface classes can be replaced by a command line interface?", "text": "In Code Complete page 25, it's said that it's a good idea to be able to easily replace the regular user interface classes by a command line one. Knowing its advantages for testing, what about the problems it may bring? Will this extra work really pay off for web and mobile projects? What about small and medium projects; do the same rules apply? What if it makes your design more complex?"} {"_id": "161797", "title": "Using a GPLv3 library in a freemium business model", "text": "I read several related questions about the use of GPLv3 in a commercial app, but they don't say anything about a freemium app. I would like to use a GPLv3 library in two apps, a free one (with limited features) and a paid one. Basically, the free app will allow use of the library without limitations, and the paid one will include it as well as other unrelated features. I will release the free app under GPLv3, but do I need to release the paid one under the same license? I don't want to release the whole source code of my paid app (for obvious reasons), as the other features don't use the library in any way. It would just be more convenient for users who bought the full version to have all features in a single app instead of putting the library in a separate one."} {"_id": "70419", "title": "How Do I Become a More Autonomous and Self-Sufficient Programmer?", "text": "The single largest factor in what is holding me back from being a stellar developer is my reliance on others.
I feel like I ask too many questions because I fear the consequences of breaking everything and holding everyone back. So I'm overly cautious, asking so many questions that I basically get the answers after enough questioning. I've recognized that's bad, but I want to stop it. Part of it comes from the fact that there are times where I simply don't know the code (either it's a branch I've never worked with or it's a brand new product), but I want to rely on others less. To preface, these kinds of questions are not the ones about generic patterns or languages: usually my questions revolve around how we do code at our company, and how we get things to work in our ecosystem. I want to be able to take specs and roll with them without having to feel like I need to get help every step of the way. Is this normal? Have you been through this, and if so, how did you get over it?"} {"_id": "161791", "title": "Is it frowned upon to release works in progress to github/sourceforge/bitbucket/etc?", "text": "Long story short, I've spent the last two years in an entirely new career, transitioning from academia to a data analyst role (working towards becoming a data scientist). Before starting at my current company, I knew next to nothing about coding, save for teaching myself SQL for a few hours per week over a couple of months. Besides SQL, I have since become conversant in Perl, have used PHP a bit here and there, and have made some headway into learning other languages (primarily Java and C). So, I still have a lot of catch-up work to do. In order to teach myself things, I've built a few side projects--a lot more sophisticated than `\"Hello World, my name is $name.\"`, but not as complicated as, say, Minecraft or a device driver. I'd like to release the code for them in order to learn from constructive feedback and to build a portfolio to sit alongside my resume. However, a lot of these things are works in progress and, to be honest, I feel some trepidation at putting code out there for all to see that's not completely, 100% \"done\" and polished. Am I worrying over nothing? If not, is there some minimal polish threshold a project should have before releasing it as open source?"} {"_id": "179713", "title": "Good practice about Javascript referencing", "text": "I am wrestling with script optimization for a web application. I have an ASP.NET web app that references jQuery in the master page, and every child page can reference other libraries or JavaScript extensions. I would like to optimize the application with YUI for .NET. The question is: should I put all the library references in the master page and compress all the JavaScript code into a single file, or should I create a file for every page that contains only the code useful to that page? Is there any guidance to follow?"} {"_id": "191809", "title": "Should I use a formal grammar for my interpreted scripting language", "text": "I have a scripting engine I just published as an open source project. It's been sitting on my hard drive waiting for about a year. My engine of course isn't complete in any way, but it does work for simple scripts. It has a javascript-ish feel to it, but I don't wish to abide by the ECMA spec or anything. Now, the big thing I'm working on is improving code quality while leaving the language working as it is (which I have a few regression tests to \"prove\"). It doesn't have a formal grammar at all and works like so: 1. Preprocess/Tokenize.
At this point it removes whitespace and cuts everything into \"tokens\", which is basically just a structure containing a string and a rough \"hint\" as to what the token is (Number, Identifier, Operation, etc) and some debugging info such as the line number 2. A ScriptingEngine class which takes the list of tokens and actually parses them and executes them 3. An \"ExpressionEvaluator\" class which will take a subset of the tokens list and build a specific tree of operations and values, then execute the operations and collapse the tree down into a single value My engine has the goals of being portable (works everywhere .NET does) and self-contained. So far, this \"works\", but the code is terrible and I'm pretty sure that I'm going about it the wrong way. I'm wondering if a formal grammar and everything that goes with it might help. Some benefits I've heard of being more formal with grammar: * Unambiguous specification of the language * Easier to maintain/change * More traditional/Bigger community support? And some of the disadvantages: * Some languages can be very difficult to reduce to a formal grammar, e.g. Perl. * A learning curve for someone not in the know (i.e., me) * They generally rely on tools such as yacc and ANTLR, which introduce another step in your workflow and/or add dependencies (which I'd like to avoid) Although this project is in .NET, it could equally apply to any other implementing language. Should I use a formal grammar? Can someone expand on the pros/cons of both sides?"} {"_id": "225705", "title": "What's the best way to show some of my code during an interview?", "text": "I realize that there are already a couple of questions about this, but they are mostly about _whether_ you should bring code to the interview, which is not the case here. I have a job interview (at a startup) coming up next week and they _requested_ that I bring some code to the interview. I have two smaller open-source projects and a closed-source Android app, which I mentioned (and linked) in the skill sheet I sent along, so I want to show them some code from those projects. It's my first job after graduating, so I don't have any source from previous employment, and there won't be any issues regarding \"spying\", licensing, confidential code, etc. All the code is completely mine and written by me. I guess I have pretty much three options: print some interesting parts of the code on paper, or have everything on my tablet or laptop to show. The latter two have the advantage that I can bring all my projects and basically show them everything they want to see. However, I can't directly make notes/underline things etc. as I could if I had printed it out, and it's less convenient than just handing out paper. What's the best way to handle this professionally and do this as hassle-free as possible? I currently tend to print out some of the parts that I think are both interesting and well written, but have everything on my tablet in case they want to see more."} {"_id": "191807", "title": "How do these technologies go together?", "text": "* Ruby on Rails * Twitter Bootstrap * Html5Boilerplate * Backbone.JS or Knockout.JS I sort of understand what each one is individually, but my understanding isn't strong enough to tell whether some of them overlap in functionality and whether any subset of that list contains items you wouldn't use together because they play similar roles.
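For the formal-grammar question above: even without yacc or ANTLR, a grammar can be made explicit by mirroring each production with one method in a hand-written recursive-descent parser, which keeps the engine dependency-free. A minimal sketch in C# for a toy grammar (Expr : Term (('+'|'-') Term)* ; Term : NUMBER); all names are illustrative and error handling is omitted:

    // One method per production keeps the grammar visible in the code itself.
    public class ExprParser
    {
        private readonly string _s;
        private int _pos;

        public ExprParser(string s) { _s = s; }

        // Expr : Term (('+' | '-') Term)*
        public double ParseExpr()
        {
            double value = ParseTerm();
            while (_pos < _s.Length && (_s[_pos] == '+' || _s[_pos] == '-'))
            {
                char op = _s[_pos++];
                double rhs = ParseTerm();
                value = op == '+' ? value + rhs : value - rhs;
            }
            return value;
        }

        // Term : NUMBER (digits only; no whitespace handling in this sketch)
        private double ParseTerm()
        {
            int start = _pos;
            while (_pos < _s.Length && char.IsDigit(_s[_pos])) _pos++;
            return double.Parse(_s.Substring(start, _pos - start));
        }
    }

Usage: new ExprParser("1+2-3").ParseExpr() returns 0. The grammar then lives in the code's structure, giving much of the "unambiguous specification" benefit without adding a build-time tool.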
I'm fairly clear on RoR-- it's mainly the latter three, and I'm not sure how they play together."} {"_id": "176890", "title": "How to plan/manage multi-platform (mobile) products?", "text": "Say I have to develop an app that runs on iOS, Android and Windows 8 Mobile. Now all three platforms are technically in different programming languages. The only 'reuse' that I can see is that of the boxes-and-lines drawings (UML :) charts and nothing else. So how do companies/programmers manage the variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world IMO given the plethora of languages and cross-platform libraries to make your life easier. Not so in the mobile world. What's more, product line management principles don't seem to be all that applicable - what is same and variant doesn't really matter - the application is the same (conceptually) and the implementation is variant. Some difficulties that come to mind: * **Bug Fixing** : Applications may be designed in a similar manner, but the bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android. Or a bug fix approach on one platform may not be the same on another (unless it's a semantic bug like `a!=b` instead of `a==b`, which would require the same 'approach' to fixing in essence) * **Enhancements** : Making a change on one platform would be radically different than on another * **Code-Design Divergence** : The way the code is written/organized, the class structures etc., could be very different given the different implementation environments - limiting further reuse of the (above) UML models. There are of course many others - just keeping the development in sync and making sure all applications are up to the same version with the same set of features etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts: * Split the application into client/server to confine the effect to the client side only (not always doable) * Use frameworks like Unity-3D that could take care of the cross-platform problem (mostly applicable to games and probably not to other applications etc.) Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?"} {"_id": "121642", "title": "Carpool logical architecture", "text": "I'm designing a carpool system (drivers can publish their routes and passengers can subscribe to them) with web services (Axis2) and Android clients (ksoap2). I have been having problems with the logical architecture of the system and I wondered if this architecture is fine. ![enter image description here](http://i.stack.imgur.com/mmyUf.png) And another question: for that architecture (if it is ok), what would the package structure be? I suppose something like this: (In Android) `package org.carpool.presentation` *All the activities here (and maybe an MVC pattern) (In the server) `package org.carpool.services` *Public interfaces (for example: register(User user), publishRoute(Route route) ) `package org.carpool.domain` *POJOs (for example: User.java, Route.java, etc) `package org.carpool.persistence` *DAO interface and implementation (JDBC or Hibernate)"} {"_id": "253369", "title": "Is there a more performant way to check that a collection has exactly 1 element?", "text": "I came up with this solution: if (Take(2).Count() == 1) Is there any more performant solution (or better syntactic sugar) to do this check?
I want a performant way because this will be an extension used on LINQ to Entities and LINQ to Objects. I'm not using `SingleOrDefault` because that will throw an exception if it has more than 1 element. Based on @Telastyn's answer I came up with the following: public static bool HasOne<T>(this IEnumerable<T> enumerable) { var enumerator = enumerable.GetEnumerator(); return enumerator.MoveNext() && !enumerator.MoveNext(); } Another implementation (slightly slower, but 100% sure to work effectively on LINQ to Entities) would be: public static bool HasOne<T>(this IEnumerable<T> enumerable) { return !enumerable.FirstOrDefault().Equals(default(T)) && !enumerable.Skip(1).Any(); } I'm not sure if the `MoveNext` one works with IQueryable on LINQ to Entities. (Any takers? I don't know how to test that.) After some tests, `Take(2).Count() == 1` is the fastest. :S"} {"_id": "121640", "title": "How to develop complex applications", "text": "I want to know the approach to developing big, complex applications, regardless of which programming language is used. I want to know how developers make big applications such as internet banking, APIs and big database management applications. How should one approach making such applications? I have only 1 year of programming experience so far and I work as a freelancer, so when I saw such applications lots of questions came to my mind. How do I understand the basic need for information technology? What actually is it? How can it be useful for ordinary people in small towns? I want everyone to make use of technology, no matter whether they are educated or not, rich or not. Please post some suggestions."} {"_id": "211875", "title": "Is it good practice to return an array of objects?", "text": "If I have an ItemContainer class that contains, for example, items in an order, where each item is an Item object; is it best to have a method like: ItemContainer->getItems() that returns an array containing Item objects, or is it better practice to do something like: ItemContainer->getItem($itemNo) which returns a single Item object for that item number, and forgoes the array? I realise this may be a trivial question or simply one of preference, but I'd like my app to adopt best practices from the start and I'm unsure which way to proceed. I'm writing in PHP, but I figured this pretty much applies to any OOP language."} {"_id": "122569", "title": "What to bring to a programming interview?", "text": "I have just completed my Master's degree in Computer Science and have gotten my first job interview as a developer. I do not have much experience in large scale development projects, but I am hoping my university education counts for something. I am wondering, what materials should I bring that would impress my interviewers? What do most interviewers expect, especially from a new graduate? **Edit: The job interview went OK, except I forgot my pants. Thanks for all the great advice!**"} {"_id": "122598", "title": "What are the relative merits for implementing an Erlang-style \"Continuation\" pattern in C#", "text": "What are the relative merits ( _or demerits_ ) of implementing an Erlang-style \"Continuation\" pattern in C#? I'm working on a project that has a large number of `Lowest` priority threads and I'm wondering if my approach may be all wrong. It would seem there is a reasonable upper limit to the number of long-running threads that any one Process 'should' spawn.
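For the "exactly 1 element" question above, a rough way to compare the candidates on LINQ to Objects is a Stopwatch loop; this is a sketch only, and LINQ to Entities timings would have to be measured against a real database. The data size and iteration count are arbitrary:

    using System;
    using System.Diagnostics;
    using System.Linq;

    static class EnumerableExtensions
    {
        // The corrected generic HasOne from the post above.
        public static bool HasOne<T>(this System.Collections.Generic.IEnumerable<T> source)
        {
            using (var e = source.GetEnumerator())
                return e.MoveNext() && !e.MoveNext();
        }
    }

    class HasOneBenchmark
    {
        static void Main()
        {
            var data = Enumerable.Range(0, 1000).ToArray();
            const int iterations = 100000;

            Measure("Take(2).Count() == 1", () => data.Take(2).Count() == 1, iterations);
            Measure("HasOne (enumerator)", () => data.HasOne(), iterations);
        }

        static void Measure(string label, Func<bool> check, int iterations)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) check();
            sw.Stop();
            Console.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms");
        }
    }

Relative results will vary with the collection type (arrays, lists, and deferred queries enumerate differently), so it is worth running against the shapes of data the extension will actually see.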
With that said, I'm not sure what would signal the tipping point for too many threads, or when alternate patterns such as \"Continuation\" would be more suitable. In this case, many of the threads do a small amount of work and then sleep until woken to go again ( _Ex. Heartbeat, purge caches, etc..._ ). This continues for the life of the Process."} {"_id": "119362", "title": "What are some best practices for cookie based web authentication?", "text": "I'm working on a small side project using CGI and Python (scalability is not an issue and it needs to be a VERY simple system.) I was thinking of implementing authentication using cookies, and was wondering if there were any established best practices. When the user successfully authenticates, I want to use cookies to figure out who is logged on. What, according to the best practices, should be stored in such a cookie?"} {"_id": "15949", "title": "When do we need to apply a software development methodology?", "text": "There are many software development methodologies - SCRUM, agile, XP, etc. - and they all have their advantages and disadvantages I suppose. But when do we really need to apply them? Surely they are not necessary for small 1-man projects, but for large 50+ teams you can most certainly not go about ad-hoc:ing the whole thing. So what's the line for when to use and when to not use such a methodology, if there indeed is one?"} {"_id": "42792", "title": "Staying OO and Testable while working with a database", "text": "What are some OOP strategies for working with a database but keeping things unit testable? Say I have a User class and my production environment works against MySQL. I see a couple possible approaches, shown here using PHP: 1. Pass in a $data_source with interfaces for `load()` and `save()`, to abstract the backend source of data. When testing, pass a different data store. $user = new User( $mysql_data_source ); $user->load( 'bob' ); $user->setNickname( 'Robby' ); $user->save(); 2. Use a factory that accesses the database and passes the result row to User's constructor. When testing, manually generate the $row parameter, or mock the object in UserFactory::$data_source. (How might I save changes to the record?) class UserFactory { static $data_source; public static function fetch( $username ) { $row = self::$data_source->get( [params] ); $user = new User( $row ); return $user; } } I have _Design Patterns_ and _Clean Code_ here next to me, but I'm struggling to find applicable concepts."} {"_id": "129675", "title": "Writing unit tests for a database-heavy application?", "text": "I'm currently developing a Spring-based web application, in which almost every operation creates/updates/deletes something in the database. The logic is mostly about checking conditions to decide whether we should create/update records in the database or not. The reason I want to try unit testing is that we often meet regression errors when requirements change or we refactor. Most of the bugs come from database changes, when I don't fully reflect those changes in the code. I have some experience in web development now, but it seems that's not enough to stop them from appearing. My controller/service is actually not too complex. I just take the binding object submitted from the HttpRequest, **check the condition** & record it in the DB. Sometimes the system must read the database, take out some records & manipulate them, then update some other records. Much of the coding effort lies on the interface (HTML/CSS/JavaScript) too.
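For the Erlang-style continuation question above: one common alternative to many sleeping low-priority threads is to model each periodic job as a timer callback, so no thread is held while waiting. A minimal sketch, with illustrative job names:

    using System;
    using System.Threading;

    class PeriodicJobs
    {
        static void Main()
        {
            // Each job borrows a thread-pool thread only while it runs,
            // instead of dedicating a sleeping thread per job.
            using (var heartbeat = new Timer(_ => Console.WriteLine("heartbeat"), null,
                                             TimeSpan.Zero, TimeSpan.FromSeconds(5)))
            using (var cachePurge = new Timer(_ => Console.WriteLine("purge caches"), null,
                                              TimeSpan.Zero, TimeSpan.FromMinutes(1)))
            {
                Console.ReadLine(); // keep the process alive for the demo
            }
        }
    }

The usual tipping point is memory and scheduler overhead: each dedicated thread reserves stack space (commonly around 1 MB), which is why hundreds of mostly-idle threads are generally better expressed as callbacks or continuations.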
I'm investigating unit testing, and I heard that when it comes to database operations, it's no longer a unit test, since the test will be slow. Is that true? So if my project is heavily database-oriented, should I not use unit tests? I also heard about DBUnit & some in-memory databases which can speed up the tests. Should I use them? And how can I write good unit tests for database operations?"} {"_id": "156978", "title": "What pattern to use for this 'constructor'? Decorator vs Factory?", "text": "I'm developing a program to generate LARP characters in Java and I've hit a snag. Initially I had planned to use a decorator to iterate through the potential 'roles' (effectively classes), and then from that work out the costs for each skill depending on the combination of roles and store that in the character. I.e., you might be a Java Coder/Nerdfighter, if those are your two roles, or you might just have the SysAdmin role etc. I then prototyped out a way to buy a skill that required looking up skills in a database that contained their name and cost. Here is the snag: If I've pre-worked out the costs then I don't ever need to look up the database, so no changes in the DB are pushed through (if the skill costs on a SysAdmin go down I'd like to know). As well as this, I may only buy a fraction of the available skills, so I'll be storing unnecessary data on skill costs, and besides, the character should only know about their skills, not all skills. Moreover, the decorator would only decorate the skill costs, nothing more. Instead I feel I need a factory that has two options for constructing the character, something like: public static CharacterFactory multiRolePlayerCharacter(Role firstRole, Role secondRole) { /*Some code goes here*/ } public static CharacterFactory singleRolePlayerCharacter(Role role) { /*Some more code goes here*/ } But this seems far less extensible compared to the decorator. If the game ever changes so that you can multirole more than twice (and Java Coder/Nerdfighter/SysAdmin is an allowed role) then I need to rework from the top to add a new factory in. Also I don't quite know how each factory differs besides perhaps a flag to say if it's multirole or not, and another variable to hold the secondRole. Which doesn't seem much. So it seems either way I lose: Decorator adds extensibility for the long run whilst bogging my classes/objects down, and Factory will work to start with but will resist change. So how do I know which one to pick? Is there some way to combine the two? **Edit** : I'm really interested in learning about the _whys_ and _hows_ of this, so I know next time which option to go for."} {"_id": "119365", "title": "iPhone application: giving up rights", "text": "Is it possible to give up an iPhone application's rights to another person or company? I mean, you make an application, you sell it for a while, and then you don't need it anymore and you give or sell it to another person or company to continue developing it. And in this case, how can persons who already bought a copy of it continue to receive future updates?
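For the LARP decorator-vs-factory question above, a third option is to make the character hold a collection of roles and resolve skill costs by querying the roles at purchase time; that keeps the database lookup live and scales past two roles with no new factories. A sketch in C# (the idea ports directly to Java); all names are illustrative, and taking the cheapest role's price (Min) is an assumed game rule:

    using System.Collections.Generic;
    using System.Linq;

    public class Skill
    {
        public string Name { get; set; }
    }

    public abstract class Role
    {
        // E.g., backed by the skill-cost table, so DB changes are picked up.
        public abstract int CostOf(Skill skill);
    }

    public class Character
    {
        private readonly List<Role> _roles;

        // Any number of roles; no per-arity constructor or factory needed.
        public Character(IEnumerable<Role> roles)
        {
            _roles = roles.ToList();
        }

        // Cost is resolved when the skill is bought, so the character
        // only ever stores the skills actually purchased.
        public int CostOf(Skill skill)
        {
            return _roles.Min(role => role.CostOf(skill));
        }
    }

With this shape, a triple-role character is just new Character(new Role[] { javaCoder, nerdfighter, sysAdmin }), so a rule change allowing more roles needs no structural rework.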
I hope I'm clear."} {"_id": "252357", "title": "Should I put extension methods of an interface in the interface.cs file?", "text": "Imagine this setup: public interface IMass{ double Mass {get;} } public static class IMassExtension { public static double ToKg(this IMass massObject) { return massObject.Mass / 1000.0; } public static double CalculateInteractiveGravity(this IMass massObject, IMass otherMassObject) { return blah; } } Is it ok to put the extension class in the same file as the interface (i.e. IMass.cs) or should it be in a separate file (IMassExtension.cs)? * * * A base class is not possible here. Imagine public class Person : Animal, IMass {} and public class House : Building, IMass {}"} {"_id": "15943", "title": "not checking in from the project root", "text": "Whenever I do a check-in, I always check in from the **root** of the project... i.e. check in **all** the files in my working copy, so after the check-in the source control repo contains exactly the same set of files that I just finished testing in my local copy. I also make sure my source control is set to flag local files that are **not** under source control. In general, there are **none** of these files... if there are, I either add them to source control or mark them as \"ignored\". I also check in all my changes together in one check-in. A lot of my colleagues check in quite differently. They carefully select each file to check in, as if they are a master jeweler selecting only the very best gemstones to set into the royal crown, and they check in each one as a separate check-in. They rely only on their memory to figure out which files need to be checked in, or especially **added** to source control. The results are quite predictable... frequent broken builds because they forget to add their new files to source control or forget to check in a changed file (especially changed project files). I have mentioned this to them and they never seem to change. When I mentioned it to the team lead he said, \"this is just a different way of working\". To which I may respond: What if I want to drive my car with my eyes closed? Is that just \"a different way of driving\"? Am I right in being bothered by this practice?"} {"_id": "124254", "title": "What can be the cause of new bugs appearing somewhere else when a known bug is solved?", "text": "During a discussion, one of my colleagues told me that he was having some difficulties with his current project while trying to solve bugs. \"When I solve one bug, something else stops working elsewhere\", he said. I started to think about how this could happen, but can't figure it out. * I sometimes have similar problems **when I am too tired/sleepy to do the work correctly** and to keep an overall view of the part of the code I am working on. Here, the problem seems to have persisted for a few days or weeks, and is not related to my colleague's focus. * I can also imagine this problem arising on a **very large, very badly managed project** , where teammates don't have any idea of who does what, and what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. * It can also be an issue with an **old, badly maintained and never documented codebase** , where the only developers who can really imagine the consequences of a change left the company years ago. Here, the project just started, and the developer doesn't use anyone else's codebase.
So what can be the cause of such an issue **on a fresh, small-size codebase written by a single developer who stays focused on his work**? What may help? * Unit tests (there are none)? * Proper architecture (I'm pretty sure that the codebase has no architecture at all and was written with no preliminary thinking), requiring a complete refactoring? * Pair programming? * Something else?"} {"_id": "156970", "title": "How to ease the maintenance of event driven code?", "text": "When using an event-based component I often feel some pain at the maintenance phase. Since the executed code is split all around, it can be quite hard to figure out which parts of the code will be involved at runtime. This can lead to subtle and hard-to-debug problems when someone adds some new event handlers. Edit from comments: Even with some good practices on board, like having an application-wide event bus and handlers delegating business to other parts of the app, there is a moment when the code starts to become hard to read because there are a lot of registered handlers from many different places (especially true when there is a bus). Then the sequence diagram starts to look overly complex, the time spent figuring out what is happening increases, and debugging sessions become messy (breakpoint on the handlers manager while iterating on handlers, especially joyful with async handlers and some filtering on top of it). ////////////// Example I have a service that is retrieving some data on the server. On the client we have a basic component that is calling this service using a callback. To provide extension points to the users of the component and to avoid coupling between different components, we fire some events: one before the query is sent, one when the answer comes back, and another one in case of a failure. We have a basic set of pre-registered handlers which provide the default behavior of the component. Now users of the component (and we are users of the component too) can add some handlers to change the behavior (modify the query, logging, data analysis, data filtering, data massaging, fancy UI animation, chaining multiple sequential queries, whatever). So some handlers must be executed before/after some others, and they are registered from a lot of different entry points in the application. After a while, it can happen that a dozen or more handlers are registered, and working with that can be tedious and hazardous. This design emerged because using inheritance was starting to be a complete mess. The event system is used as a kind of composition where you don't yet know what your composites will be. End of example ////////////// So I'm wondering how other people are tackling this kind of code, both when writing and reading it. Do you have any methods or tools that let you write and maintain such code without too much pain?"} {"_id": "252350", "title": "Naming: StartDate or StartDateTime when working with DateTimes", "text": "I am using a lot of DateTimes in my application. Now I usually name them like StartDateTime, EndDateTime, etc., to imply there is also a time involved. I am getting a bit tired of this (it is tiresome to read), and _most of the time_ it is quite logical that there is a time component to it anyway. Now I'm thinking about ditching `StartDateTime` in favor of the shorter `StartDate`, even though a time part is also included. **Question** What is better: easier to read _vs_ being more explicit that a time is included?
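For the event-driven maintenance question above, one mitigation is to make handler registration itself carry the metadata needed when debugging: a name, an explicit order, and a way to dump the pipeline without a debugging session. A minimal sketch, with illustrative names:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class DiagnosticBus<TEvent>
    {
        private readonly List<(string Name, int Order, Action<TEvent> Handler)> _handlers =
            new List<(string, int, Action<TEvent>)>();

        // Every subscription is named and ordered at the call site.
        public void Subscribe(string name, int order, Action<TEvent> handler)
        {
            _handlers.Add((name, order, handler));
        }

        public void Publish(TEvent evt)
        {
            foreach (var h in _handlers.OrderBy(x => x.Order))
                h.Handler(evt); // a breakpoint here shows exactly which named handler runs next
        }

        // Answers "what will run, and in what order?" without stepping through code.
        public string DescribePipeline()
        {
            return string.Join(" -> ",
                _handlers.OrderBy(x => x.Order).Select(x => x.Order + ":" + x.Name));
        }
    }

Logging DescribePipeline() at startup gives a textual version of the sequence diagram, which helps when a dozen handlers are registered from scattered entry points.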
(ps: I guess this has a lot to do with C# not having separate objects for `Date` and `DateTime`, so that's where the need for Hungarian comes from)"} {"_id": "153107", "title": "Can I do a git merge entirely remotely?", "text": "My team shares a \"work\" branch and a \"stable\" branch. Whenever a particular work branch is approved for further testing/release/etc, we merge it into stable. No code is ever checked directly into the stable branch. Because of this, merge conflicts simply won't happen, and it seems silly to pull down the work branch and the stable branch, merge them, and then push the changes back. Is there a git command to ask a remote git server to commit a merge of two branches that it already knows about?"} {"_id": "154332", "title": "How should I structure a solution for a long term project?", "text": "I'm about to create a do-everything dashboard for my team and am still having second thoughts about my project/solution structure. Since this could be a long ongoing project, I want to get the structure right from the beginning. This is what I had in mind: 1. Create a solution named \"doEverythingDashboard\" 2. Delete the project named \"doEverythingDashboard\" under the solution \"doEverythingDashboard\" 3. Create a WinForms project named \"interface\" 4. Create console application projects for each functionality of \"doEverythingDashboard\" 5. Reference each console application in \"interface\" Does this make any sense? Would it make more sense to just have one project and create a class per functionality instead of an entire project?"} {"_id": "211886", "title": "How can I indicate if an object operates with another one in a UML class diagram?", "text": "Suppose I want to draw a class diagram of a DAO and an Entity. The DAO is used to load instances of the Entity from the database. How can I represent this relationship on my class diagram? Is this considered a relationship? I think it should be displayed on the diagram somehow: ![enter image description here](http://i.stack.imgur.com/HUL9D.jpg) **TL;DR** : should I draw something between them or not?"} {"_id": "124251", "title": "Does a webapp need a connection monitoring feature?", "text": "I am developing a client using Flex and ActionScript that is run in the browser and communicates with our server backend. There has been concern that the application \"needs\" to have a graphic indicator to show whether the client has a connection to the server or not. Part of the problem is that this requirement is coming from a high-level perspective, because the interface looks \"broken\" (i.e. doesn't update properly). I have an idea where the problem is, but that doesn't mean that the error is in the interface. I believe that a connection monitor feature is unnecessary because: 1. Users won't be able to do squat. Our user base is largely non-technical. 2. Flex generates HTTP error events but does not specify what happened. I can't tell what the _real_ problem is, which could be: * The user actually has no network connection. * The application server code is broken in some way. * The server itself is messed up (i.e. \"Oops, I broke the Apache config\"). * Server load is too high and becomes unresponsive. * There is some other problem (e.g. network or hardware) beyond my or users' control. How do I convince them that this feature is not valuable to develop? If it is something worth keeping, what are alternatives to showing a \"network status\" or server connection problem in the client?
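On the StartDate vs StartDateTime question above, the postscript is the crux: when the type cannot carry the date/time distinction, the name ends up doing it. Newer .NET versions (6 and later) added DateOnly and TimeOnly, which makes the short names unambiguous again; a small sketch of the naming consequence, with an illustrative class:

    using System;

    public class Booking
    {
        // With only DateTime available, the suffix is doing the type's job:
        public DateTime StartDateTime { get; set; }

        // On .NET 6+, the type says it, so the short name carries no ambiguity:
        public DateOnly StartDate { get; set; }
        public TimeOnly StartTime { get; set; }
    }

On older frameworks without DateOnly, the suffix convention is the cheapest way to keep the distinction visible at call sites.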
I would rather spend the time to dig at the root of the problem and _prevent_ it from happening rather than creating an additional \"feature\" that doesn't solve anything."} {"_id": "126032", "title": "Creating a cross-platform TCP proxy/gateway that would handle the maximum possible clients simultaneously on a 1 Gb Ethernet pipe?", "text": "So what I wonder about is pretty simple - having in mind the following architecture: -> computing unit \"A\" / gateway --> computing unit \"B\" same as A \\ -> computing unit \"C\" same as A We do not care which unit a request is forwarded to, so we should forward them one by one: to A, then to B, then to C, and then to A again, etc. I wonder in what language, using what framework, it is possible to create such a cross-platform TCP proxy/gateway (not Ethernet card dependent) that would redirect incoming requests to internal nodes via a simple rule, and would be capable of getting as much as possible out of the Ethernet cable's in/out capacity - receive and send as much as possible? I started to wonder after I read this nice 2003 article..."} {"_id": "164313", "title": "Who are 'users' in testing?", "text": "Take e.g. a system for booking flights: during UAT it is not being tested by real users (customers who will buy tickets), but rather by people from the client side who will just simulate this. Are there any more specific terms to distinguish between real users (like end user) and users doing the UAT?"} {"_id": "250817", "title": "Writing a simple code validator", "text": "I know that programming languages can be defined in EBNF which can be converted into regular expressions. Right now I am working on a very simple BASIC interpreter for a project. The code has to be entered in a GUI which should validate the syntax, to later transfer the code to an embedded system where it is executed. I was googling to find an article or tutorial on writing a validator for this job but I could not really find such a thing. Is it just defining the regular expressions and trying to match them? Note: the GUI part is written in Java while the embedded code is written in C++."} {"_id": "215655", "title": "Main activity design too busy", "text": "I have a main activity that contains many buttons. I don't want to have a list of buttons like this. I know this looks bad, only because I threw this together to show you what I have. As you can see, I really don't even have enough room. I don't want a `scrollView` in my main Activity. Does anyone have suggestions on building my activity to look sleek and simple? ![enter image description here](http://i.stack.imgur.com/JrZbM.png)"} {"_id": "1034", "title": "Is there any real benefit to Microsoft's application certification programs?", "text": "I'm developing a server application and am looking at Windows 2008 R2 **application** certification located here. Does application certification really make a difference to your application? Has anything tangible ever come from it in your experience? Are there any other certifications I should consider?"} {"_id": "245738", "title": "What do I need to make a messaging app?", "text": "I was wondering how to make a messaging app like WhatsApp? What are the resources I need? What will I need to know? I want to make it for iPhones. I know some Obj-C but that's really it..."} {"_id": "129912", "title": "Why is local copy writable by default in SVN, but readonly in Perforce/TFS?", "text": "I have some experience with Perforce, SVN, and TFS. For SVN, the source files were by default writable after synchronization.
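A minimal sketch of the round-robin TCP gateway described in the proxy question above, in C#: it accepts a connection, picks the next backend in rotation, and copies bytes both ways. Backend addresses are illustrative, and production concerns (backpressure, health checks, actually saturating a 1 Gb pipe) are out of scope:

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;
    using System.Threading.Tasks;

    class RoundRobinProxy
    {
        // Illustrative backend nodes ("computing units" A, B, C).
        static readonly (string Host, int Port)[] Backends =
            { ("unit-a", 9000), ("unit-b", 9000), ("unit-c", 9000) };

        static int _next = -1;

        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 8080);
            listener.Start();
            while (true)
            {
                var client = await listener.AcceptTcpClientAsync();
                _ = HandleAsync(client); // one lightweight task per connection
            }
        }

        static async Task HandleAsync(TcpClient client)
        {
            // & int.MaxValue keeps the index non-negative after wrap-around.
            var backend = Backends[(Interlocked.Increment(ref _next) & int.MaxValue) % Backends.Length];
            using (client)
            using (var server = new TcpClient())
            {
                await server.ConnectAsync(backend.Host, backend.Port);
                var c = client.GetStream();
                var s = server.GetStream();
                // Pump bytes in both directions until either side closes.
                await Task.WhenAny(c.CopyToAsync(s), s.CopyToAsync(c));
            }
        }
    }

Because the forwarding rule is a pure byte pump, the same shape works in any language with async sockets; the choice then comes down to deployment targets rather than the algorithm.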
However, they were readonly for Perforce, as well as TFS if memory served me. Meanwhile, 'checkout' means source sync for SVN but its meaning is quite different for other popular tools. I'm wondering which is more 'correct' behavior and why these tools behave differently. Thanks."} {"_id": "129917", "title": "Trimming script size by using array notation for frequently accessed properties", "text": "I noticed some redundancy in a script I ran through Google Closure Compiler. (function(){function g(a){var k;if(a){if(a.call)a.prototype=j,a.prototype[e]={}}else a= {};var c=a,b,c=(a=c.call?c:null)?new a(a):c;b=c[e]||{};var f=b.extend;b=b.a;var d=c.hasOwnProperty(\"constructor\")?c.constructor:b?b:f?g.b(f):new Function;if (f)b=d.prototype,k=new h(h.prototype=f.prototype),f=k,h.prototype={},d.prototype=f,i (d.prototype,b);i(d,a);i(d.prototype,c);return d.prototype.constructor=d}function i(a,c) {for(var b in c)c.hasOwnProperty(b)&&\"prototype\"!=b&&b!=e&&(a[b]=c[b])}var h=new Function, e=\"decl-data\",j={extend:function(a){return(this[e].extend=a).prototype},a:function(a) {return(this[e].a=a).prototype}};g.c=function(a){e=a};return g})().b=function(g){return function(){g.apply(this,arguments)}}; It looks like a big sea of `prototype` and `constructor`, doesn't it? I was was able to get it to about 6/7 of the size of the original by storing \"constructor\" and \"prototype\" strings and using array notation everywhere to access prototypes and constructors. The size savings should grow as the size of the script grows. Here's what it looks like after the change: (function(b,h){function i(a){a?a.call&&(a[b]=l,a[b][f]={}):a={};var d=a,c,d=(a=d.call? d:null)?new a(a):d;c=d[f]||{};var g=c.extend;c=c.a;var e=d.hasOwnProperty(h)?d[h]:c?c:g?i.b (g):new Function;if(g){c=e[b];var m=b,g=new j(j[b]=g[b]);j[b]={};e[m]=g;k(e[b],c)}k(e,a);k(e [b],d);return e[b][h]=e}function k(a,d){for(var c in d)d.hasOwnProperty(c)&&c!=b&&c!=f&&(a [c]=d[c])}var j=new Function,f=\"decl-data\",l={extend:function(a){return(this[f].extend=a) [b]},a:function(a){return(this[f].a=a)[b]}};i.c=function(a){f=a};return i}) (\"prototype\",\"constructor\").b=function(b){return function(){b.apply(this,arguments)}}; Is it worth it to **damage readability to save a few bytes?** It's hard to know when to stop... `hasOwnProperty` only appears twice, I might use it once more. So, should I replace `.hasOwnProperty` with `[_has]`? Should I just stop here? Or should I put it back the way it was, and not worry about the few extra bytes? On the other hand, **maybe this doesn't damage readability** at all, and could just be looked at as convenient shortcuts. Another alternative would be abstracting the behavior **using macros** ; write `.prototype` and have it converted to `[_p]` at build time, of course declaring `_p` somewhere. But that seems overly complicated for a relatively minor size optimization. Speaking of which, I wonder why closure-compiler doesn't do something like this already? I'm curious what the community thinks about this. * * * For reference, the un-minified source before the size optimization: var decl = (function(){ var Clone = new Function(), // dummy function for prototypal cloning /** dataKey The name of the property where declaration objects' metadata will be stored. If you want to pass objects to decl instead of functions, put the metadata (parent, partial, etc.) in this property. 
*/ dataKey = 'decl-data', /** proto This object is used as a prototype for declaration objects, so all properties are available as properties of `this` inside the body of each declaration function. */ proto = { /** extend Perform prototypal inheritance by calling `this.extend(ParentCtor)` within your decalration function. @param {Function} ctor to extend. @return {Object} prototype of parent ctor. */ extend: function (ctor) { return (this[dataKey].extend=ctor).prototype; }, /** augment Finish a partial declaration. TODO: test for bugs, possibly retroactively fix child classes when augmenting parent. @param {Function} ctor to augment. @return {Object} prototype of partial ctor. */ augment: function (ctor) { return (this[dataKey].augment=ctor).prototype; } }; /** decl Create a prototype object and return its constructor. @param {Function|Object} declaration */ function decl (declaration) { if (!declaration) { declaration = {}; } else if (declaration.call) { declaration.prototype=proto; declaration.prototype[dataKey]={}; } return getCtor(declaration); } /** setDataKey Sets the name of the property where declaration objects' metadata will be stored. If you want to pass objects to decl instead of functions, put the metadata (parent, partial, etc.) in this property. @param {String} String value to use for dataKey */ decl.setDataKey = function (value) { dataKey=value; }; /** clone Create a copy of a simple object. @param {Object} obj @return {Object} clone of obj. */ function clone (object) { var r=new Clone(Clone.prototype=object); Clone.prototype={}; return r; }; /** merge Merge src object's properties into target object. @param {Object} target object to merge properties into. @param {Object} src object to merge properties from. @return {Object} target for chaining. */ function merge (target, src) { for (var k in src) { if (src.hasOwnProperty(k) && k!='prototype' && k!=dataKey) { target[k] = src[k]; } } return target; }; /** getCtor Prepare a constructor to be returned by decl. @param {Function|Object} declaration @return {Function} constructor. */ function getCtor (declaration) { var oldProto, declFn = declaration.call ? declaration : null, declObj = declFn ? new declFn(declFn) : declaration, data = declObj[dataKey] || {}, parent = data.extend, partial = data.augment, ctor = // user-defined ctor declObj.hasOwnProperty('constructor') ? declObj.constructor : // ctor already defined (partial) partial ? partial : // generated wrapper for parent ctor parent ? decl.wrap(parent) : // generated empty function new Function(); // If there's a parent constructor, use a clone of its prototype // and copy the properties from the current prototype. if (parent) { oldProto = ctor.prototype; ctor.prototype = clone(parent.prototype); merge(ctor.prototype, oldProto); } // Merge the declaration function's properties into the constructor. // This allows adding properties to `this.constructor` in the declaration function // without defining a constructor, or before defining one. merge(ctor, declFn); // Merge the declaration objects's properties into the prototype. merge(ctor.prototype, declObj); // Have the constructor reference itself in its prototype, and return it. return (ctor.prototype.constructor=ctor); }; return decl; }()); // This is outside of the main closure so wrapper functions // will have as short a lookup chain as possible. /** wrap Generate wrapper for parent constructor. @param {Function} parent constructor to wrap. @return {Function} child constructor. 
*/ decl.wrap = function (parent) { return function(){ parent.apply(this, arguments); }; }; And after: var decl = (function(_p, _c){ var Clone = new Function(), // dummy function for prototypal cloning /** dataKey The name of the property where declaration objects' metadata will be stored. If you want to pass objects to decl instead of functions, put the metadata (parent, partial, etc.) in this property. */ dataKey = 'decl-data', /** proto This object is used as a prototype for declaration objects, so all properties are available as properties of `this` inside the body of each declaration function. */ proto = { /** extend Perform prototypal inheritance by calling `this.extend(ParentCtor)` within your decalration function. @param {Function} ctor to extend. @return {Object} prototype of parent ctor. */ extend: function (ctor) { return (this[dataKey].extend=ctor)[_p]; }, /** augment Finish a partial declaration. TODO: test for bugs, possibly retroactively fix child classes when augmenting parent. @param {Function} ctor to augment. @return {Object} prototype of partial ctor. */ augment: function (ctor) { return (this[dataKey].augment=ctor)[_p]; } }; /** decl Create a prototype object and return its constructor. @param {Function|Object} declaration */ function decl (declaration) { if (!declaration) { declaration = {}; } else if (declaration.call) { declaration[_p]=proto; declaration[_p][dataKey]={}; } return getCtor(declaration); } /** setDataKey Sets the name of the property where declaration objects' metadata will be stored. If you want to pass objects to decl instead of functions, put the metadata (parent, partial, etc.) in this property. @param {String} String value to use for dataKey */ decl.setDataKey = function (value) { dataKey=value; }; /** clone Create a copy of a simple object. @param {Object} obj @return {Object} clone of obj. */ function clone (object) { var r=new Clone(Clone[_p]=object); Clone[_p]={}; return r; }; /** merge Merge src object's properties into target object. @param {Object} target object to merge properties into. @param {Object} src object to merge properties from. @return {Object} target for chaining. */ function merge (target, src) { for (var k in src) { if (src.hasOwnProperty(k) && k!=_p && k!=dataKey) { target[k] = src[k]; } } return target; }; /** getCtor Prepare a constructor to be returned by decl. @param {Function|Object} declaration @return {Function} constructor. */ function getCtor (declaration) { var oldProto, declFn = declaration.call ? declaration : null, declObj = declFn ? new declFn(declFn) : declaration, data = declObj[dataKey] || {}, parent = data.extend, partial = data.augment, ctor = // user-defined ctor declObj.hasOwnProperty(_c) ? declObj[_c] : // ctor already defined (partial) partial ? partial : // generated wrapper for parent ctor parent ? decl.wrap(parent) : // generated empty function new Function(); // If there's a parent constructor, use a clone of its prototype // and copy the properties from the current prototype. if (parent) { oldProto = ctor[_p]; ctor[_p] = clone(parent[_p]); merge(ctor[_p], oldProto); } // Merge the declaration function's properties into the constructor. // This allows adding properties to `this.constructor` in the declaration function // without defining a constructor, or before defining one. merge(ctor, declFn); // Merge the declaration objects's properties into the prototype. merge(ctor[_p], declObj); // Have the constructor reference itself in its prototype, and return it. 
return (ctor[_p][_c]=ctor); }; return decl; }('prototype', 'constructor')); // This is outside of the main closure so wrapper functions // will have as short a lookup chain as possible. /** wrap Generate wrapper for parent constructor. @param {Function} parent constructor to wrap. @return {Function} child constructor. */ decl.wrap = function (parent) { return function(){ parent.apply(this, arguments); }; };"} {"_id": "220", "title": "Agile for the Solo Developer", "text": "How would someone implement Agile process concepts as a solo developer? Agile seems useful for getting applications developed at a faster pace, but it also seems very team-oriented..."} {"_id": "50658", "title": "Using Agile development in a one person team", "text": "> **Possible Duplicate:** > Agile for the Solo Developer I am going to be starting a project soon and plan to use as many agile methods as I can (CI, TDD etc.). What have been people's experiences doing agile development when working solo? I want to get good practices in place now while it's a one-person team, so when I scale up to having several people the basics are in place."} {"_id": "105917", "title": "Agile methodology for a single developer working on a prototype", "text": "> **Possible Duplicate:** > Agile for the Solo Developer For my thesis I will be working on a user interface prototype during a 6 month period. I am required to pick a strategy and create a precise plan. While reading through different descriptions of agile methodologies I felt like I was in buzzword heaven. Instead of just picking one because it's famous, I was hoping someone could give some insight into which one, or which parts of one, might be useful. Are there any agile methodologies specifically suitable (or scalable) for single developers?"} {"_id": "141818", "title": "How can a single developer make use of Agile Methods?", "text": "I am a freelancer and work as a single developer. In all cases, I am the person wearing different hats. I am the Scrum Master and I am the only person in development. What I am not is the Product Owner. Are there any Agile Methods and free tools that a freelancer can make use of?"} {"_id": "222389", "title": "How can a single person use agile methods for a software application?", "text": "I am looking to do a personal project on a web application. In my methods section I'm finding it difficult to choose just one agile method, so here is the list; I am trying to combine all of these, or just a few of them, into one method that works best for me. * Scrum (I noticed this one requires a team of a minimum of seven) * XP (Extreme Programming) * Crystal * FDD (Feature Driven Development) * DSDM (Dynamic Systems Development Method) * Adaptive Software Development * RUP (Rational Unified Process) For example, I was thinking of something like taking simple design (KISS) from Extreme Programming; there are only 19 weeks for this project, so timeboxing from DSDM; use cases; etc.?"} {"_id": "250925", "title": "What's the best way to build an HTML/AJAX site that requires login?", "text": "Let's say that hypothetically you wanted to build a website that delivered content to the visitors entirely using HTML and JavaScript (AJAX to fetch server-side data). The site would require login for certain functions. Let's also say, for the sake of discussion, you have the ability to create the necessary web service methods that would be called from AJAX (for authenticating a user, getting data from a database, etc). What would be the best way to implement this site?
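Regarding the Closure Compiler micro-optimization question further above: one way to decide whether the [_p] indirection is worth the readability cost is to compare sizes after gzip, since HTTP compression already collapses repeated tokens like 'prototype'. A sketch of the measurement, with hypothetical file names for the two minified variants:

    using System;
    using System.IO;
    using System.IO.Compression;

    class GzipSizeCheck
    {
        static long GzippedLength(byte[] data)
        {
            using (var ms = new MemoryStream())
            {
                // leaveOpen: true so ms.Length is still readable after the
                // GZipStream is disposed (disposal flushes the final block).
                using (var gz = new GZipStream(ms, CompressionMode.Compress, true))
                    gz.Write(data, 0, data.Length);
                return ms.Length;
            }
        }

        static void Main()
        {
            // Hypothetical paths to the before/after minified scripts.
            var before = File.ReadAllBytes("decl.before.min.js");
            var after = File.ReadAllBytes("decl.after.min.js");
            Console.WriteLine("before: " + before.Length + " raw, " + GzippedLength(before) + " gzipped");
            Console.WriteLine("after:  " + after.Length + " raw, " + GzippedLength(after) + " gzipped");
        }
    }

If the gzipped sizes are close, the array-notation rewrite is costing readability for bytes the transport layer was already saving.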
Would there be any advantages / drawbacks compared to building a site using a server-side language like ASP.NET or PHP? **EDIT 1** First, thanks to everyone that has replied so far. I didn't properly phrase my question, so I'm getting responses that aren't addressing my question. I've built many sites using traditional methods - a database backend and a frontend that was a mix of server-side languages (.NET or PHP) and client-side scripting (JavaScript / jQuery). Let's say I'm going to build an environment that has a database server, a web services server, and a web server. My web services server has all the methods I need to interact with the database (authentication, CRUD operations, reporting, etc). The web server does not support any server-side languages (like PHP or ASP.NET). I know it would be possible to build the site entirely in HTML, CSS, and JavaScript (jQuery). A user could be authenticated via AJAX, the authentication token could be stored in a cookie, and then the token could be used on all subsequent AJAX requests. My questions are: 1. What would be the advantages / drawbacks of such an approach? 2. What challenges would be faced with authentication and security? 3. Would this solve any of the problems typically faced when building a website? 4. If your boss asked you to use this environment, would you argue against it? Why or why not?"} {"_id": "82957", "title": "Tips and Tricks on Web Page Design on ASP.NET MVC (Razor Syntax)", "text": "I am a newbie in web development and have started studying ASP.NET MVC. I noticed that cshtml replaces aspx when using Razor syntax and design view is not supported. Now designing ASP.NET MVC webpages is difficult without design view. How do you handle your web page designs in your projects without it?"} {"_id": "207219", "title": "Reading CSV files located on a Linux server and updating the tables in a SQL Server database", "text": "I was wondering how we could ingest CSV files located on a Red Hat Linux server into SQL Server database tables. I know we can write a stored procedure/bulk insert to read the files that are located on the same Windows server as SQL Server and update the database, but I'm not sure how to do it when the files are present on a Linux server. Any help would be greatly appreciated."} {"_id": "8660", "title": "Good naming convention for named branches in {DVCS} of your choice", "text": "We're integrating Mercurial slowly in our office and, doing web development, we started using named branches. We haven't quite found a good convention as far as naming our branches though. We tried: * FeatureName (can see this causing problems down the line) * DEVInitial_FeatureName (could get confusing as developers come and go down the line) * {uniqueID (int)}_Feature So far the uniqueID_featureName is winning; we are thinking of maintaining it in a small DB just for reference. It would have: branchID(int), featureName(varchar), featureDescription(varchar), date, who etc... This would give us branches like: 1_NewWhizBangFeature, 2_NowWithMoreFoo, ... and we would have an easy reference as to what that branch does without having to check the log. Any better solution out there?"} {"_id": "111176", "title": "Prefer class members or passing arguments between internal methods?", "text": "Suppose within the private portion of a class there is a value which is utilized by multiple private methods. Do people prefer having this defined as a member variable for the class or passing it as an argument to each of the methods - and why?
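For the CSV-on-Linux question above, one approach is to pull the files over SFTP from the Windows side and bulk-insert them, rather than moving the files first. A sketch assuming the SSH.NET library (Renci.SshNet) and a very naive CSV format with no quoted fields; the host, credentials, paths, and table layout are all illustrative:

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;
    using Renci.SshNet;

    class CsvIngest
    {
        static void Main()
        {
            // Hypothetical two-column target table.
            var table = new DataTable();
            table.Columns.Add("Col1");
            table.Columns.Add("Col2");

            using (var sftp = new SftpClient("linux-host", "user", "password")) // illustrative credentials
            {
                sftp.Connect();
                using (var reader = new StreamReader(sftp.OpenRead("/data/export.csv")))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        table.Rows.Add(line.Split(',')); // naive: no quoting/escaping support
                }
            }

            using (var conn = new SqlConnection("Server=.;Database=Target;Integrated Security=true"))
            {
                conn.Open();
                using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.ImportedCsv" })
                    bulk.WriteToServer(table);
            }
        }
    }

Alternatives worth weighing against this are exposing the Linux directory as a network share to the SQL Server box, or scheduling the transfer on the Linux side so the existing BULK INSERT path keeps working unchanged.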
On one hand I could see an argument to be made that reducing state (i.e. member variables) in a class is generally a good thing, although if the same value is being repeatedly used throughout a class' methods, it seems like that would be an ideal candidate for representation as state for the class, to make the code visibly cleaner if nothing else. Edit: To clarify some of the comments/questions that were raised, I'm not talking about constants, and this isn't related to any particular case, rather just a hypothetical that I was talking to some other people about. Ignoring the OOP angle for a moment, the particular use case that I had in mind was the following (assume pass by reference just to make the pseudocode cleaner) int x doSomething(x) doAnotherThing(x) doYetAnotherThing(x) doSomethingElse(x) So what I mean is that there's some variable that is common between multiple functions - in the case I had in mind it was due to chaining of smaller functions. In an OOP system, if these were all methods of a class (say due to refactoring via extracting methods from a large method), that variable could be passed around them all or it could be a class member; a small illustration of the two shapes follows the next few entries."} {"_id": "209408", "title": "What is the definition of pointer?", "text": "Conceptually a \"pointer\" is just something that \"points\" to something else; is this definition sufficient to tell exactly what a pointer is in programming languages? Does it need to have any other features? Programmers who come from a specific language may have pre-conceived ideas of what constitutes a 'pointer' based on how it is used in that language. So let's say he is from the **C/C++** world; he says pointers support pointer arithmetic. Is pointer arithmetic an essential feature of a pointer? **Go** has pointers and does not support pointer arithmetic. Is the ability to \"dereference\" the pointer essential to the concept of a pointer? So what is the precise definition of a pointer that I can give as an answer, irrespective of any specific programming language?"} {"_id": "111171", "title": "When does switching to a framework mid-project make sense?", "text": "Some of my friends and I started a PHP project some weeks ago. In the beginning, I suggested we use a PHP framework such as CodeIgniter or Zend. But my friends wanted to start clean and without the overhead or extra complexity. But as time has passed, the project has become more and more complex. At first, we wrote all the code in the view files, in code blocks before the HTML output. Eventually, we changed direction and started to use controllers that did most of the work. But as you can see there is a coding difference here. At this point, we're trying to determine how to proceed before the project becomes more complex. When does it make sense to start over and use a framework? Or is it possible to formulate a simple way to introduce an MVC pattern to our existing code without having to start from scratch?"} {"_id": "111178", "title": "What should I do when my project manager does not care about implementation details?", "text": "My project manager, when providing requirements for specific tasks, does not care about the implementation details. Although he has a programming background and has some knowledge of the MVC framework, he does not consider the perspective of the developer. For example, I was given a task to create a simple form in ASP.NET MVC. This form should be pluggable - that is, the customer should choose which fields do or do not exist and which fields are required.
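As referenced in the members-vs-arguments question above, a small illustration of the trade-off in C#; both versions behave identically, the difference is whether x becomes object state (the class and method names are illustrative):

    // Parameter style: x's scope is explicit, and each method stays
    // independently callable and testable.
    public class ParameterStyle
    {
        public void Process(int x)
        {
            DoSomething(x);
            DoAnotherThing(x);
        }
        private void DoSomething(int x) { /* ... */ }
        private void DoAnotherThing(int x) { /* ... */ }
    }

    // Member style: less plumbing in the signatures, but x now outlives
    // the call chain and every method can read or change it.
    public class MemberStyle
    {
        private int _x;

        public void Process(int x)
        {
            _x = x;
            DoSomething();
            DoAnotherThing();
        }
        private void DoSomething() { /* uses _x */ }
        private void DoAnotherThing() { /* uses _x */ }
    }

The member version also stops being safe to call concurrently on one instance, which is often the deciding factor.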
If this were a simple form with validation, I would easily be able to implement it using ASP.NET validations. However, the problem is not simple and requires design and architecture first. The time that I have to implement the solution, which is not well understood, is very restricted. Not having sufficient time will not let me come up with a solution that not only meets the requirements, but also benefits myself and any future developers. What should I do in this situation? Do you feel that the given requirements in the example can be expected of a single developer?"} {"_id": "40172", "title": "Should newbies use IDE autocomplete (Intellisense)?", "text": "I often encounter this when I am helping out someone who is new to programming and learning it for the first time. I'm talking about really new newbies, still learning about OOness, constructing objects, method calls and stuff like that. Usually, they have the keyboard and I am just offering guidance. On the one hand, the autocomplete feature of the IDEs helps to give them feedback that they are doing it right and they quickly get to like and rely on it. On the other hand, I fear that early dependence on the IDE autocomplete would make them not really understand the concepts or be able to function if they one day find themselves only with a simple editor. Can anyone with more experience in this regard please share their opinion? Which is better for a newbie, autocomplete or manual typing? **Update** Thanks for the input everyone! Many answers seem to focus on the main use of autocomplete, like completing methods, providing method lookup and documentation etc. But IDEs nowadays do a lot more, like: * When creating an object of List type, an IDE autocompletes to new ArrayList on the right-hand side. It may not be immediately clear to a newbie why it cannot be new List, but hey it works, so they move on. * Filling method parameters based on local variables in context. * Performing object casts * Automatically adding 'import' or 'using' statements and much more. These are the kinds of things I mean. Remember I'm talking about people who are doing Programming 101, really just starting. I have watched the IDE do these things which they have no idea about, but they just carry on. One could argue that it helps them focus on program flow and getting the hang of things first before going in-depth and understanding the nuances of the language, but I'm not sure.
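For the pluggable-form requirement described above, a minimal sketch of driving validation from per-customer field configuration rather than fixed validation attributes; the types and names are illustrative, not the required architecture:

    using System.Collections.Generic;
    using System.Linq;

    public class FieldConfig
    {
        public string Name { get; set; }      // e.g. "Email"
        public bool Visible { get; set; }     // the customer chose to show it
        public bool Required { get; set; }    // the customer chose to require it
    }

    public static class FormValidator
    {
        // values: field name -> submitted value from the form post.
        public static IEnumerable<string> Validate(
            IEnumerable<FieldConfig> config, IDictionary<string, string> values)
        {
            return config
                .Where(f => f.Visible && f.Required)
                .Where(f => !values.ContainsKey(f.Name) || string.IsNullOrWhiteSpace(values[f.Name]))
                .Select(f => f.Name + " is required.");
        }
    }

Because the rules live in data, the same view and controller serve every customer, which is the part of the design worth agreeing on with the project manager before estimating.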
I myself can't imagine not coding with an IDE because, in all honesty, it does make me code a lot faster with things such as auto-complete, etc., while some of my co-workers suggested we code sans IDE for a few months. I do realize that sometimes one doesn't need to know everything about a function, but I do remember a college professor telling us that IDEs are terrible because when the time comes that you need some sort of function, you'll have to waste time looking it up instead of just banging out the code from rote memory."} {"_id": "100480", "title": "Is there any evidence that Intellisense reduces productivity?", "text": "> **Possible Duplicate:** > Should newbies use IDE autocomplete (Intellisense)? I was having a conversation with another developer the other night about the pros and cons of Visual Studio. He was of the opinion that Intellisense reduces productivity. Of course I thought that was insane, but I could be wrong. Is there any evidence to support the idea that Intellisense reduces productivity?"} {"_id": "201457", "title": "Does relying on intellisense and documentation a lot while coding make you a bad programmer?", "text": "Is a programmer required to learn and memorize all syntax, or is it ok to keep some documentation handy? Would it affect the way that managers look at coders? What are the downsides of depending on intellisense and auto-complete technologies and PDF documentation?"} {"_id": "197983", "title": "Do programmers (using an IDE) need to know namespaces?", "text": "I got into a discussion with a coworker over an interview question that was asked of a potential employee. \"Name some namespaces/what do they do/what namespace is involved with x\". I'm of the opinion that programmers don't really need to know that kind of stuff, especially with the use of an IDE and extremely accessible documentation. He feels that a programmer with experience should be able to name namespaces. This is valid, I suppose, especially if you're looking to hire someone into a more experienced position. They don't particularly need to memorize namespaces, but anyone that has experience should be able to rattle off some namespaces and what their purpose is. What are your thoughts on this?"} {"_id": "137252", "title": "Is it a good idea to generate code with the help of your IDE?", "text": "> **Possible Duplicate:** > Should newbies use IDE autocomplete (Intellisense)? Since I moved on to actual IDEs for C++, Java and Python, I noticed they automatically try to complete my code. For example if I write `System.out.pr[...]` I am automatically offered a list from which I can select the item I want to use. As a beginner I avoided using this feature as I thought I should learn the syntax by typing. Was I correct in thinking that? From a more experienced programmer's point of view, I'd see no harm in using this feature."} {"_id": "126690", "title": "What should I learn in Java SE before proceeding to Java EE?", "text": "Currently, I've learned the following in Java SE: * Logical Operations * Loops * Inheritance * Polymorphism * Abstract Classes * Interface I'm currently learning Strings, Characters and Regular Expressions. What are things that I should learn before proceeding to Java EE?"} {"_id": "126691", "title": "Academic programming languages?", "text": "Back in the day, there were a lot of academic programming languages (okay, maybe not a _lot_ , but it seems like there were more than today).
I distinctly remember spending time in both high school and college learning languages like Basic, Ada, Pascal, Prolog, Haskell, Scheme, and Turing. While it's unfair to call all those languages solely academic, it's also unfair to say they are industrially equivalent to an enterprise language like Java (or even C or Smalltalk back in the day). My question is, what academic languages are still in use (and still taught today)? It seems like there are a lot of Java schools, a rising number of Python schools, and even some that teach languages like C as the first programming language - but what would be the modern equivalent to something like Pascal?"} {"_id": "89440", "title": "What can I gain from examining others' code?", "text": "I recently downloaded some large code bases and have had light reads through them, but what can I gain from this? How can I be sure what the author is doing is the right way to go about things? (One of the code bases was the Zeus Trojan source code ;D)"} {"_id": "160545", "title": "What do employers look for in self-taught applicants?", "text": "I'm a self-taught programmer about to enter the job market. What I want to know is: what is the best way to show my experience to employers? What do employers want to know about my programming experience? Do employers want to look at code I wrote, or could they want to see the software in action? Or do they only care how much my software is being used/how much revenue it has created? Should I write about my design and programming style? My background: I recently graduated from a university where I studied foreign languages, and it was during this time that I found out that programming is the thing I really want to do. Currently I'm a garage programmer, developing software for a client for commercial use while continuing to teach myself, but my aim is to start \"proper\" employment and a career. As an added twist, I live in Finland at the moment, but I'm looking for employment in China or Japan (I learned both languages while studying abroad). Any help is greatly appreciated!"} {"_id": "66843", "title": "Kind of project for pair programming interview", "text": "As discussed in the SO question Pair Programming for a job interview, there are mixed feelings on the usefulness of this approach. For those that have used this approach, what kind of projects are useful? From my limited experience with this as an interviewee, a key seems to be a well-chosen task that the candidate can understand quickly and be able to contribute to (and the latter may depend somewhat on the candidate's background). I've gone through a couple days of pairing interviews and found some periods where a few hours seemed wasted: one because the developer wasn't working on something that lent itself to pairing (messy code that the developer wasn't very familiar with and was already interacting with someone about), and it involved a tech I wasn't very familiar with. Another case was because an unexpected build issue prevented much useful work for a long while. One shop trying this approach wasn't sure if they should have someone outside the company work on a customer's project - any thoughts on this? They also worried that explaining the domain and system would take too long, though without that the candidate may not be able to contribute much. So they chose an open source project the employee was working on, though that's likely not an option for everyone. Recently, there have been a lot of folks working on _coding katas_ for various things.
I'm wondering if this would be a useful thing to use. Some sites even have them in multiple languages and multiple skill levels (e.g. http://codingkata.org), which could be useful for candidates and/or employees with different backgrounds. [I guess coding katas could be used even for coding exercises in non-pairing interviews - I'll ask a separate question on that: What coding katas are good for interview exercises]"} {"_id": "66842", "title": "Reading SICP with F#?", "text": "I've been meaning to read the SICP book for a while, and am finally about to get around to it (now that I can read it on Kindle :) I'd like to learn a functional language, and I use C# at work, so I thought it might be a good idea to go through the book with F# in my mind rather than Scheme. (That is, do the exercises in F#.) I wonder though, will there be much mental disconnect between Scheme and F#? Maybe it's better to go through with Scheme on the first pass?"} {"_id": "51003", "title": "Should a .NET, JavaScript and SQL Web App developer learn Perl?", "text": "I'm a front-end and back-end .NET web developer (most solutions use MS SQL Server) and I won't be using any non-MS solutions for a while. Will Perl be useful for situations that require scripting in an MS product environment?"} {"_id": "220104", "title": "We have a custom program that generates all the stored procs and classes for the Data Tier. Where will I put the generated classes?", "text": "This is more of an architectural question regarding MVC and Data Access: We have a custom program that generates all the stored procs and classes for the Data Tier from the MS SQL database. It's pretty nice, as it generates a base class with the basic CRUD operations, which include ForeignKey reads. It also generates the plural version of the class to return the collections of objects. For the next phase of our application we are planning on using MVC, but we were hoping to continue using this great tool. Where will I put the generated classes in my new MVC application? I have seen people create an Infrastructure folder for their data access logic. Is it a good idea to continue using this tool, or should we be converting to the Entity Framework? Also, if the DAL is returning my objects and lists of objects, what will I put in my Model layer?"} {"_id": "214968", "title": "Principles of an extensible data proxy", "text": "There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud-based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions.
Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public-facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (with more popping up almost weekly) have come into existence in the last 18 months or so, which tells me that either the root technology or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS"} {"_id": "127344", "title": "How do you test a data load?", "text": "I'm working on a project that involves loading batches of 3000+ files to several dozen tables. There is no user interface, and the tables are simply available for querying. What are the best practices for testing this type of process? * Loading a small set of data and validating each and every member? * Loading all the data and validating a subset? * Loading all the data and validating counts, averages, and other metrics? Are there other types of testing that can be done, or is some combination best?"} {"_id": "120698", "title": "Diagram that could explain a state machine's code?", "text": "We have a lot of concepts in making diagrams like UML and flowcharting, or just making up whatever boxes-and-arrows combination works at the time, but I'm looking at doing a visual diagram on something that's actually _really_ complex. State machines like those that parse HTML or regular expressions tend to be very long and complicated bits of code. For example, this is the stateLoop for Firefox 9 beta. It's actually generated by another file, but this is the code that runs. How can I diagram something with the complexity of this in a way that explains flow of the code without taking it to a level where I draw every single line-of-code into its own box on a flowchart? I don't want to draw \"Invoke loop, then return\", but I don't want to explain every last detail. What kind of graph is suitable to do this? Is there already something out there similar to this? Just an example of how to do this without going overboard in complexity or being too high-level is really what I want. * * * If you don't feel like looking at the code, basically it's 70 different state flags that could occur, inside an infinite loop that exits to a label based on some conditions; each flag has its own infinite loop that exits to a label somewhere, and each of those loops has checks for different types of chars, which then run off into various other methods."} {"_id": "244883", "title": "Algorithm to reduce calls to mapping API", "text": "A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box and the API will return the set of points that fall within that box.
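To make the recursion concrete before the problem statement below: here is a minimal sketch, in JavaScript, of the quadrant-splitting approach this question goes on to describe. The `fetchPoints` wrapper, the box field names, and the `LIMIT` constant are assumptions for illustration only, not part of any real mapping API.

```javascript
// Minimal sketch of the recursive quadrant split described in this question.
// Assumption: `fetchPoints(box)` wraps the real API call and resolves to an
// array of at most LIMIT points; boxes are plain {minX, minY, maxX, maxY}.
const LIMIT = 10;       // the API's hard cap on returned points (assumed)
const MIN_SPAN = 1e-9;  // guard against many points stacked on one coordinate

async function collectPoints(box, fetchPoints) {
  const points = await fetchPoints(box);
  // Fewer than the cap means the result was not truncated: we are done.
  if (points.length < LIMIT || boxTooSmall(box)) return points;
  // At the cap, the result may be truncated, so split into four quadrants
  // and recurse. (Whether 4 is the optimal fan-out is the open question.)
  const mx = (box.minX + box.maxX) / 2;
  const my = (box.minY + box.maxY) / 2;
  const quads = [
    { minX: box.minX, minY: box.minY, maxX: mx, maxY: my },
    { minX: mx, minY: box.minY, maxX: box.maxX, maxY: my },
    { minX: box.minX, minY: my, maxX: mx, maxY: box.maxY },
    { minX: mx, minY: my, maxX: box.maxX, maxY: box.maxY },
  ];
  const nested = await Promise.all(quads.map((q) => collectPoints(q, fetchPoints)));
  // Depending on whether the API treats box edges as inclusive, points on a
  // shared edge may come back twice; de-duplicate by id in that case.
  return nested.flat();
}

function boxTooSmall(box) {
  return box.maxX - box.minX < MIN_SPAN && box.maxY - box.minY < MIN_SPAN;
}

// Usage (hypothetical): collectPoints(worldBox, (box) => api.query(box));
```

How the fan-out (4, 9, 16, ... sub-boxes) interacts with the result cap is exactly the trade-off raised next.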
The problem is that the API will limit the result set to 10 items, with no pagination and no indication of whether more points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, then I split the bounding box into four quadrants and recurse. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results. So I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, then I eventually get to a point where a lot of requests are returning 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of nominal point density in the bounding box)"} {"_id": "244887", "title": "Should XmlDocument.xml be Included in Source Control", "text": "In a Web API 2 project, in the project properties, I have enabled the output of an XML documentation file. By default, Visual Studio 2013 wants to check this into TFS. I'm pretty sure that I'm going to exclude it, but wondered what the general opinion is on this."} {"_id": "149871", "title": "Confused about modifying the sprint backlog during a sprint", "text": "I've been reading a lot about scrum lately, and I've found what seems to me to be conflicting information about whether or not it's ok to change the sprint backlog during a sprint. The Wikipedia article on scrum says it's not ok, and various other articles say this as well. Also, my Software Development professor taught the same thing during an overview of scrum. However, I read Scrum and XP from the Trenches, and that describes a section for unplanned items on the taskboard. So then I looked up the Scrum Guide, and it says that during the sprint \"No changes are made that would affect the Sprint Goal\" and in the discussion of the Sprint Goal \"If the work turns out to be different than the Development Team expected, then they collaborate with the Product Owner to negotiate the scope of Sprint Backlog within the Sprint.\" It goes on to say in the discussion of the Sprint Backlog: > The Sprint Backlog is a plan with enough detail that changes in progress can > be understood in the Daily Scrum. The Development Team modifies Sprint > Backlog throughout the Sprint, and the Sprint Backlog emerges during the > Sprint. This emergence occurs as the Development Team works through the plan > and learns more about the work needed to achieve the Sprint Goal. > > As new work is required, the Development Team adds it to the Sprint Backlog. > As work is performed or completed, the estimated remaining work is updated. > When elements of the plan are deemed unnecessary, they are removed. Only the > Development Team can change its Sprint Backlog during a Sprint. The Sprint > Backlog is a highly visible, real-time picture of the work that the > Development Team plans to accomplish during the Sprint, and it belongs > solely to the Development Team. So at this point I'm altogether confused. Thinking about it, it makes more sense to me to take the second approach.
The individual, specific items in the backlog don't seem to me to be the most important thing, but rather the sprint goal, so not changing the sprint goal but being able to change the backlog makes sense. For instance if both the product owner and the team thought they were on the same page about a story, but as the sprint progressed they figured out there was a misunderstanding, it seems like it makes sense to change the tasks that make up that story accordingly. Or if there was some story or task that was forgotten about, but is required to reach the sprint goal, I would think it would be best to add the story or task to the backlog during the sprint. However, there are a lot of people who seem quite adamant that any change to the sprint backlog is not ok. Am I misunderstanding that position somehow? Are those folks defining the sprint backlog differently somehow? My understanding of the sprint backlog is that it consists of both the stories and the tasks they're broken down into. Anyway I would really appreciate input on this issue. I'm trying to figure out both what the idealistic scrum approach is to changing the sprint backlog during a sprint, and whether people who use scrum successfully for development allow changing the sprint backlog during a sprint."} {"_id": "118185", "title": "Why can't we get anything done?", "text": "I work on a small team, in a medium-sized company, most of which isn't involved in software development. I'm the newest and least-experienced developer and had no professional or academic background in software before starting, but I'm quite pleased with how respected my input is and am grateful for being taken seriously at such an early stage in my career. Still, I feel like I should be doing more with this generous amount of airtime. As a team, we seem to have trouble getting things done. I'd like to be able to suggest something to improve the situation, and I think I'd be listened to if it was a good idea, but I'm at a loss for what to suggest. Things I can identify as being issues include: * Specification of the tasks at hand is sparse. This is partly because management is a bottleneck and we don't have the money or people to commit to working out detailed requirements as much as we'd like. It's also partly because the software we're developing is investigative and the precise method isn't clear until it's demonstrated and used to determine its effectiveness. * The Lead Dev is very fond of what he calls 'prototyping' to the point that he's lately started insisting that everything is 'prototyped', which to the rest of us looks like writing bad code and giving it to the modellers to play with. It isn't clear what he expects to come out of this exercise in many cases. The 'actual' implementation then suffers because of his insistence that good practice takes too much time from the prototyping. I haven't even begun to be able to untangle this twisted logic and I'm not sure I want to try. * The modellers are expected to tell us everything about the desired methodology in precise detail, and it's taken on absolute trust that what they come out with is theoretically flawless. This is hardly ever true, but no action is taken to rectify this situation. Nobody on the modelling side raises any concerns in a structured way that is likely to be acted upon, nor do they seek guidance in applying best practices. Nothing is done about their passivity either. 
* I've tried to push TDD in the team before, but found it difficult as it's new to me, and while those with oversight of my work were willing to tolerate it, no enthusiasm has been forthcoming from anyone else. I can't justify the amount of time I spend wallowing and not finishing features, so the idea has - for the moment - been abandoned. I'm concerned it won't be picked up again, because nobody likes to be told how to do their job. * We now have a continuous integration server, but it's mostly only being used to run multiple-hour regression tests. It's been left open that it ought to be running full-coverage unit and integration tests as well, but at the moment nobody writes them. * Every time I raise the issue of quality with the lead dev, I get an answer to the effect of 'Testing feature A is straightforward, feature B is much more important to the user but too difficult to test, therefore we shouldn't test feature A'. Once again I've made no headway in trying to untangle this logic. ....phew. When I phrase it like that, it looks much worse than I thought. I suppose, as it turns out, this is a cry for help."} {"_id": "126966", "title": "What visualization method would you recommend for event-driven programs?", "text": "What visualization method would you recommend for event-driven programs? Are there industry-standard diagrams, such as flowcharts?"} {"_id": "83716", "title": "How to properly understand the Django framework?", "text": "I have decent knowledge of PHP, i.e., I can take a framework, read its code and, if the docs are adequate, understand what it's doing. The main reason for that is that PHP is actually a very easy language which is literally made for web dev. I have been trying to learn Django for a week now and can knock up a basic app with it, but there are just too many things which go over my head, i.e., look like magic. The reason for that, I think, is that the whole interaction with the server is part of Django, whereas in PHP it is all handled by your server. I want to read more about this part, i.e., what topics should I cover to 'get' this? Please suggest some books too."} {"_id": "166165", "title": "Identifying languages used by particular industries", "text": "> **Possible Duplicate:** > What technologies are used for game development nowadays? I am new to programming and I don't know the differences between the major languages. I desperately want to get into the gaming industry because I have so many stories I want to tell and so many experiences I want to create. I currently do 3D modeling/animation, so any similarities would be helpful. What steps should I take to investigate an industry (gaming) and the companies within that industry? How do I identify what programming languages they use, so I can study them?"} {"_id": "200739", "title": "Beginner programmer wants to work his way up to game development", "text": "I've taken a course in computer science that covered the basics of Java and programming, from conditionals and loops to inheritance and interfaces. I also got Head First Java from a friend who didn't write in it. After I finish that book, what should I do before I head into developing games? Also, I've been just making command-line games and math programs for now. Is that good practice?
I'm just worried that I will only be repeating things that I already know, and that I won't be learning anything new."} {"_id": "163930", "title": "How relevant is UTF-7 when it comes to parsing emails?", "text": "I recently implemented incoming emails for an application and boy, did I open the gates of hell? Since then, every other day an email arrives that makes the app fail in a different way. One of those things is emails encoded as UTF-7. Most emails come as ASCII, one of the Latin encodings, or, thankfully, UTF-8. Hotmail error messages (like email address doesn't exist or quota exceeded) seem to come as UTF-7. Unfortunately, UTF-7 is not an encoding Ruby understands: > \"hello world\".encode(\"utf-8\", \"utf-7\") Encoding::ConverterNotFoundError: code converter not found (UTF-7 to UTF-8) > Encoding::UTF_7 => #<Encoding:UTF-7 (dummy)> My application doesn't crash, it actually handles the email quite well, but it does send me a notification about the potential error. I spent some time googling and I can't find anyone that implemented the conversion, at least not as a Ruby 1.9.3 Encoding::Converter. So, my question is, since I never got an email with actual content, from an actual person, in UTF-7, how relevant is that encoding? Can I safely ignore it?"} {"_id": "163931", "title": "Is using a bigger buffer useful?", "text": "I have used buffers for quite a long time, whenever I need to copy a stream or read a file. Every time, I set my buffer size to 2048 or 1024, but from my point of view a buffer is like a \"bucket\" which carries my \"sand\" (the stream) from one part of my land (memory) to another part. So, will increasing my bucket capacity in theory allow me to make fewer trips? Is this a good thing to do in programming?"} {"_id": "155239", "title": "Are all languages basically the same?", "text": "Recently, I had to understand the design of a small program written in a language I had no idea about (ABAP, if you must know). I could figure it out without too much difficulty. I realize that mastering a new language is a completely different ball game, but purely understanding the intent of code (specifically production-standard code, which is not necessarily complex) in any language is straightforward, if you already know a couple of languages (preferably one procedural/OO and one functional). Is this generally true? Are all programming languages made up of similar constructs like loops, conditional statements and message passing between functions? Are there non-esoteric languages that a typical Java/Ruby/Haskell programmer would not be able to make sense of? Do all languages have a common origin?"} {"_id": "163934", "title": "How to correctly write an installation or setup document", "text": "I just joined a small start-up as a software engineer after graduation. The start-up is 4 years old, and I am working with the CEO and the COO, even though there are some people abroad. Basically they both used to do almost everything. I am currently in some kind of training phase. I have at my disposal internal architecture, setup and installation documentation. Architecture documentation is like a bible and should contain complete information. The rest are used to give directions in different processes. The issue is that these documents are more or less dated, as they just didn't have the time to change them. I will be in charge of training the next hires, and updating these documents is part of my training.
In some there is a lot of hard-coded information like: Install this_module_which_still_exists cd this_dir_name_changed cp this_file_name_changed other_dir_name_changed ./config_script.sh ./execute_script.sh The issues I have faced: * The module installation is completely different (for instance, now there is an RPM, or a different OS) * Names changed, and I need to swap old names for new names * The description of the purpose of the current step is missing. * Information about a whole topic is missing Fortunately these guys are around, and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so that in the future people don't feel like they are completely rewriting a document each time they are updating it. Do you have suggestions? If there is a lightweight design methodology available online that you can point me to, that's nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code, so I don't know why internal documents deserve a different treatment."} {"_id": "253753", "title": "What are the differences between: agent, actor, dataflow based programming?", "text": "What are the differences between the following terms? * agent-based programming * agent-based programming with microagents * actor-based programming * actor-based programming with lightweight actors * dataflow based programming It is hard to find articles comparing them, and they are very similar. AFAIK they have different constraints and they are implemented at different abstraction levels, but I need some reassurance..."} {"_id": "245284", "title": "PHP - Repository matrix pattern?", "text": "I'm trying really hard to refactor some of my legacy code in the project using best practices and design patterns + DDD, so I'd love some feedback on an issue I'm currently having. Let's assume that I have two entity classes: class Dog { protected $name; function __construct($name) { $this->name = $name; } /** * @return mixed */ public function getName() { return $this->name; } function bark() { echo 'Rawr'; } } class Husky extends Dog { /** * @var Sledge */ protected $sledge; function __construct($name, Sledge $sledge) { parent::__construct($name); $this->sledge = $sledge; } function pull() { echo $this->sledge->pull(); } } `Dog` is my regular entity and its only responsibility is to map database fields. `Husky` on the other hand has the same responsibilities as `Dog` but also delegates `Sledge` pulling. Normally, both entities would have different repositories (should they? since one inherits from the other) to call for; however, the business requirement implies that the client does not have to specify the dog's type (so it can be either a \"basic\" `Dog` or a special `Husky`), just its name: `http://localhost/animal/fluffy`. What is more, currently they both reside in the same database table (recognized by a `type` field) and right now there are no technical plans to change that (performance and time reasons). What is the best way to do it? * Should I create some `AnimalRepository`, pull the data and treat it as a DTO, detect its type and then create the appropriate class? * Should I create some kind of higher-level abstraction mapper? What should it look like?"} {"_id": "245287", "title": "What are the advantages and disadvantages of splitting teams by architecture tier rather than by product?", "text": "As the title states: what are the advantages and disadvantages of splitting teams by architecture tier rather than by product?
**_For example:_** **Organization A** has three teams: * Team Web and Front End * Team APIs, Web-Services and Data Stuffs * Team Embedded Stuffs _Organization A_ teams, over time, have become more specialized in their work rather than being truly full-stack developers. **Organization B** has three teams: * Team Product Alpha * Team Product Beta * Team Product Charlie _Organization B_ has fostered an environment of full-stack developers that are all fairly interchangeable among one another. What will **Organization A** be able to do better (and not so well) than **Organization B**, and vice versa? You don't necessarily need to take the examples to heart, as they were just to get the point of my question across."} {"_id": "213464", "title": "What does \"because IL offers no instructions to manipulate registers, it is easy for people to create new languages\" mean?", "text": "I am reading CLR via C# and came across this sentence in the first chapter, and I did not understand what exactly it meant. Full line here: > because IL offers no instructions to manipulate registers, it is easy for > people to create new languages and compilers that produce code targeting the > CLR. What does it mean? I ventured a guess that it means IL is a bit low-level, but not too low, so that it is easy to create languages on top of it."} {"_id": "195632", "title": "Creating alien symbols or signs that look natural", "text": "I was reading some Cthulhu Mythos stuff and started to think about those various signs and symbols that are used in the texts. Of course those are made by humans, but the point is that they look natural to human eyes. So, what kind of algorithm would be needed to create natural-looking symbols and signs? How could this problem be approached? I guess the golden ratio and Fibonacci numbers play a key role."} {"_id": "215700", "title": "What is the best approach for deploying apps to companies", "text": "What is the best approach for the following scenario: 1) A publicly available app (available in app stores) which is used by end users to make use of services offered by multiple companies. 2) These companies also maintain their services using a mobile app. I'm not sure how to solve the second part. Having one app for both end-user and admin functionality, secured by username/password, doesn't sound like a good idea. This would leave the only option of developing a separate admin application for the companies. What is the best approach to deploy \"admin\"-like mobile apps to companies only, for Android, iOS and Windows Phone? Some additional information: > Public App ----> Servers -----> Multiple Company Apps The public app shows all companies offering their services. An end user uses the public app to order something from a specific company. The order is sent to our servers. Our servers send the order to the associated company. This order is displayed on the company's admin app, and the company is given the option to accept the order."} {"_id": "2776", "title": "What do you think about the Joel Test?", "text": "The Joel Test is a well-known test for determining how good your team is. What do you think about the points? Do you disagree with any of them? Is there anything that you would add?"} {"_id": "195633", "title": "Good approaches for packaging PHP web applications for Debian", "text": "Many PHP web applications follow this model for installation and upgrade: 1. Un-tar a source tarball. 2. Point Apache at the source. 3. Navigate a web browser to the home page. 4.
Go through several web pages of set-up (e.g., checks for existence of libraries, asks for database connection information, creates or updates database schema, etc.). 5. User renames an `install/` directory to something else so that the application knows it has been installed. I don't see any (simple) way to create a Debian package out of this without making the user installing the package go through many of the above manual steps. Note that I am not a developer on the application, so I am not in a position to make direct changes to how the application installation works. What is the typical approach to packaging such an application?"} {"_id": "215706", "title": "Variable declaration versus assignment syntax", "text": "I am working on a statically typed language with type inference and streamlined syntax, and I need to make a final decision about the syntax for variable declaration versus assignment. Specifically I'm trying to choose between: // Option 1. Create new local variable with :=, assign with = foo := 1 foo = 2 // Option 2. Create new local variable with =, assign with := foo = 1 foo := 2 Creating functions will use `=` regardless: // Indentation delimits blocks square x = x * x And assignment to compound objects will do likewise: sky.color = blue a[i] = 0 Which of options 1 or 2 would people find most convenient/least surprising/otherwise best?"} {"_id": "136236", "title": "Assigning development work effectively to enable parallel development", "text": "I am currently doing my 1st academic project, where we have to work in groups of 4 to develop an application (Java ...). Anyways, how might work be assigned so that there are fewer dependencies on each other and we can work separately instead of waiting on one another? I suggested 1 person develop 1 layer: Model, View, Controller, Data Access. I am working on Data Access but I find I need Model classes: For example, my `EventsDataAccess` has a `findEventByName(String name)` that returns an `Event`, which is developed by another person. How should I proceed? Here `Event` is a very small class so it shouldn't take long, but suppose it's big; it might be a long wait. How is work usually split up for developers in a small-to-medium-sized team? It's so much easier to work in a very small, familiar team of, say, 2 people only."} {"_id": "255975", "title": "Multiple burndown charts on TFS project page", "text": "We work with Scrum in TFS 2013, with one main project and multiple teams. What I want to do is make the homepage for the project display each team's burndown chart, so management can easily see all teams' progress. Currently it rolls up all the burndown charts into a single burndown, and if I create charts I can only get line / bar / pie charts that are far less effective at displaying the same information. Is it possible to pin burndown charts to home pages? Thanks."} {"_id": "193517", "title": "Functional document from code", "text": "I am a Sr. Java developer and have recently joined a new team. Here I have been asked to create a functional document by looking at the code of a legacy application. This application was written about 8-10 years back and is currently running in the live environment, but due to lack of documentation it is difficult to set up and run on a local development box. I consulted some of the old team members who support it, but none was able to run it on their dev boxes.
Now I have to create a functional document out of it (without running it, only looking at the code) which will act as a base to re-write this application with new technology and some enhancements. Kindly suggest what would be the best way to move forward. I tried to find out from testers and found that there is no tester currently allocated to this application, as it's an old app running live. There are not many enhancements or bugs coming up. And as this re-write project is already queued up, the management is looking into aligning any incoming bugs to this new app."} {"_id": "255973", "title": "C++ : Association, Aggregation and Composition", "text": "I'm beginning to study OOAD and I'm having difficulty finding a `C++` code example that'd illustrate how `Association`, `Aggregation` and `Composition` are implemented programmatically. (There are several posts everywhere but they relate to C# or Java). I did find an example or two, but they all conflict with my instructor's instructions and I'm confused. My understanding is that in: * **Association:** Foo has a pointer to a Bar object as a data member * **Aggregation:** Foo has a pointer to a Bar object, and the data of Bar is deep-copied into that pointer. * **Composition:** Foo has a Bar object as a data member. And this is how I've implemented it: //ASSOCIATION class Bar { Baz baz; }; class Foo { Bar* bar; void setBar(Bar* _bar) { bar=_bar; } }; //AGGREGATION class Bar { Baz baz; }; class Foo { Bar* bar; void setBar(Bar* _bar) { bar = new Bar; bar->baz=_bar->baz; } }; //COMPOSITION class Bar { Baz baz; }; class Foo { Bar bar; Foo(Baz baz) { bar.baz=baz; } }; Is this correct? If not, then how should it be done instead? It'd be appreciated if you could also give me a reference to code from a book (so that I can discuss it with my instructor)"} {"_id": "255971", "title": "Entity design for a blackjack game - should I make Card an entity?", "text": "I am creating a simple blackjack game backed by a database. My Card class is: public class Card{ private Face face; private Suit suit; //setters.. getters } where `face` and `suit` are enums. I have an entity `Bet` with the following: @Entity public class Bet{ private Player player; private String cards; //... } Currently when I'm dealing cards I parse the suit and face to strings and concatenate them in the cards field, and then parse the cards if I want to calculate the score. I find this cumbersome, so I want to change my field \"cards\" to `List<Card>` in the Bet entity. Now, if that's the case I would have to make the Card class an entity as well. But my `cardService`, which is where I get my cards, does not rely on the database; it just creates random cards, so it does not make sense to make Card an entity - am I right?"} {"_id": "166268", "title": "How secure (or insecure) is it to install Node packages globally?", "text": "Should I be concerned with security when installing Node packages globally? Why or why not?"} {"_id": "82593", "title": "Javascript Ternary Operator vs. ||", "text": "I was taking a look at some node.js code earlier, and I noticed that the guy who wrote it seemed to favour the following syntax: var fn = function (param) { var paramWithDefault = null == param ? 'Default Value' : param; } Over what I consider to be the more concise: var fn = function (param) { var paramWithDefault = param || 'Default Value'; } I was wondering if the second form is actually more socially acceptable JavaScript syntax; I've seen it out in the wild more times than the ternary operator for this purpose.
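One thing worth making concrete before going on: the two forms are not equivalent. A small JavaScript sketch of where they diverge (the sample values are illustrative only):

```javascript
// `param || 'Default Value'` falls back on ANY falsy value, so legitimate
// inputs such as 0, '' and false get replaced by the default as well.
var withOr = function (param) {
  return param || 'Default Value';
};

// `null == param` is true only for null and undefined (a deliberate use of
// loose equality), so falsy-but-real values pass through untouched.
var withNullCheck = function (param) {
  return null == param ? 'Default Value' : param;
};

console.log(withOr(0));                // 'Default Value' -- the 0 is swallowed
console.log(withNullCheck(0));         // 0
console.log(withOr(''));               // 'Default Value'
console.log(withNullCheck(''));        // ''
console.log(withOr(undefined));        // 'Default Value'
console.log(withNullCheck(undefined)); // 'Default Value'
```

So the `||` shortcut is only safe when every falsy value should genuinely be treated as missing.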
I note that in the first example he's using the double equals (not the triple equals), which means it will count \"undefined\" as null, which would reduce one impact that I could think of. However, I've read in numerous places that == is a rather evil operator in JavaScript (JSLint is very much against it, IIRC)."} {"_id": "116961", "title": "Is it unethical to track app usage through REST API calls?", "text": "I am building an app that communicates with my website with ASIHTTPRequest to a PHP-based REST API on the server side. Naturally, in my app I have different endpoints on the server side, and usually return JSON data. Is it unethical to log counters on how many times each endpoint was hit? I'd like to capture how the app was used, capturing what endpoint was hit, the user agent, time of day, possibly their IP (to group visits etc). Should I ask permission to do this?"} {"_id": "116965", "title": "Why aren't VM languages compiled just once?", "text": "(First of all, I should make clear that compilers and virtual machines are a completely unknown field for me) As I understand it, every time a Java/C#/... application is run, a VM is invoked and translates intermediate code (bytecode, CIL, etc) to machine instructions. But why can't this operation be done only once - at install time?"} {"_id": "10816", "title": "\"G\u00f6del, Escher, Bach\" still valid today?", "text": "I have just completed a course on computability and logic, which was an interesting course. The lecturer recommended a few books on his slides, which include \"G\u00f6del, Escher, Bach\". I can see the book is quite famous, and looks very interesting. But I have a few questions to ask regarding its content. 1. Is the content still valid today? I guess most theoretical stuff doesn't change overnight, but are there any major points which no longer hold today that I should be aware of? 2. I assume we actually HAVE made some progress in the last 30 years or so. Can any of you recommend a book on the subject which includes this progress (logic, AI, computability)? _Another question:_ Do I have to know about Escher and Bach?"} {"_id": "116969", "title": "iPad app architecture with very large files", "text": "I am working on an iPad app. It needs to contain some very large files. I could put them into the app, but they would push it well over 20MB, which means that it would not be downloadable over the air -- only via iTunes. Is there a way around this without too much pain for me? I don't have a server or anything like that."} {"_id": "56254", "title": "Studentworker - being a superhero?", "text": "A couple of weeks ago I got a new job as a student worker for a web agency. The job is 15-20 hours a week. Even though I am new in the company, I feel right at home and I enjoy working with my co-workers. To start off with, I was assigned to work on an internal tool in the company, in order to learn their systems and their development platform. The deadline for this project is this week, and I am right on time. But today (Wednesday at noon) I received an email from my boss, asking me to do a new project that has a deadline of Friday morning. The new assignment alone will be hard to finish on time, and on top of that I need to finish the other assignment on time. My question is: How do I handle my boss expecting me to be a superhero? **EDIT:** I will talk to my boss about delaying one of the projects. But another problem is that the new assignment will be hard to do on time (Friday morning).
I didn't have a say on the deadline - I just got a mail telling me the deadline. I am new in the company and want to stay, but I don't want to start off on the wrong foot with the boss."} {"_id": "194058", "title": "Structuring the XML Response", "text": "I'm designing an XML response for a consuming application to take in, and I'm weighing two different designs. Currently I have a design as follows (where the Product element can repeat): `<Response> <Products> <Product>...</Product> <Product>...</Product> </Products> </Response>` An alternate design is: `<Response> <Product>...</Product> <Product>...</Product> </Response>` In the first design I have a master element called Products, under which each Product element exists and can repeat within Products. The second design is a little flatter - where Product exists under the main Response node. Which one of the two is more optimal for a consuming application to read from and process?"} {"_id": "194050", "title": "Why Object.clone() is able to \"see\" fields defined in subclasses?", "text": "I was wondering how Object.clone() can access fields that are actually defined in the subclasses; I am not asking about the actual implementation of this feature. What is bothering me is that I cannot see the logic in allowing a base class (even if it is \"special\") to access the fields defined in its subclasses - doesn't this break the whole object inheritance concept? Consider this: class Test implements Cloneable { private String test_field; public Test() { } public Test clone() throws CloneNotSupportedException { return (Test)super.clone(); } } Here, calling `Test.clone()` will copy the `test_field` also. Things get even weirder for me when it comes to the situation where `Test` is declared as an abstract class - calling `clone()` will allow me (in some way) to get an instance of the abstract class."} {"_id": "194057", "title": "Using a DAO to abstract our ORM from the rest of the application", "text": "We're using MySQL with Sequelize.js as the ORM. What we're wondering is whether a DAO layer of abstraction is worthwhile. Here are our options: 1. To use the Sequelize models throughout the application. 2. To abstract Sequelize by building a layer that converts Sequelize models to Backbone models, so that we're using Backbone throughout the rest of the application instead. Is abstraction important in this case, or is it normal to use Models from the ORM throughout the other layers?"} {"_id": "150466", "title": "How to test a program in an efficient way?", "text": "I know that writing test cases is one way to do some programming-level testing, but how do you test for careless mistakes? Or how can you reduce them? For example, `buttonA` should perform `ActionA`, but sometimes, due to simple human error, `buttonA` performs `ActionB`. This seems to require human testing; it seems the testing process can't be automated. Any hints on how to do so? Thanks."} {"_id": "16308", "title": "Why is it you never get as much done as you'd planned?", "text": "I always start the day thinking \"I'll easily get this done by the end of the day\" and set what looks like a realistic target. So why do I never hit it? The task always ends up taking 3x longer due to unforeseen bugs, last-minute changes etc. Is it just me? I don't seem to be getting any better at predicting what can be done in a day."} {"_id": "65762", "title": "Is there a canonical resource on ERP?", "text": "For a while I've been wanting to learn ERP. What I would like to do is set up a system, and then practice running a business doing things like generating invoices, raising purchase orders, producing monthly accounts, and keeping track of fixed assets.
Then look at enhancing the system further by adding custom code. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on ERP? What about that book makes it special?"} {"_id": "16301", "title": "Tips for working fast", "text": "What are some tips for helping to design and construct applications faster?"} {"_id": "193287", "title": "Term for a Class with Multiple Interfaces", "text": "Say I have a class that implements multiple interfaces. I pass the same instance around using a different interface, depending on what the consumer is interested in. I am trying to remember what this is called. I know there is a fancy name for it - I thought it was \"interface partitioning\", but I'm not finding any hits. The official name and perhaps a link to a site explaining what it is would be handy. In case I haven't explained myself well enough, here is a real-world example. I have a class, call it `ContextManager`, that is responsible for providing access to configuration settings, session variables, user information and other utilities. Instead of passing the object around as a `ContextManager`, I might pass it around as an `IConfigurationManager` in one spot and as an `IUserManager` in another. This prevents the client from accessing things they shouldn't care about and allows me to reuse code within the class. I just want to know what it is called."} {"_id": "117382", "title": "What programming language is most suitable for handling unstructured data?", "text": "I'm trying to automate the application of metadata to a huge amount of text, but I'm not sure what language would make this task easier (if there is one). What programming language is most suitable for handling unstructured text? Why? If the answer is \"it depends\", what are a few examples of why you would use one language over another?"} {"_id": "158255", "title": "How to store data in Gujarati language in SQLite?", "text": "I am developing an Android application for Gujarat's farmers. So this app should be in Gujarati also. Is there any way I can store my data in Gujarati in SQLite?"} {"_id": "158251", "title": "Do you estimate all user stories in iteration zero?", "text": "After our product backlog is created and prioritized, are we meant to briefly estimate all the stories in the product backlog? I assume they have to be, in order to create a product burndown chart; however, if you have a lot of stories this could take a long time initially. Additionally, should a user story have acceptance criteria when added to the backlog, or are those criteria added when the story transfers from the product backlog to the sprint backlog? It would be harder to estimate without them."} {"_id": "105779", "title": "Common to use linear programming?", "text": "I have read about linear programming and its content, and I wonder if this way of programming is commonly used in the industry. I often hear about object-oriented programming but not linear programming. I would like to hear a discussion about it."} {"_id": "214495", "title": "Adding data gathering to a utility app and publishing it to partners?", "text": "I've been working on a utility app for a while, and now that we're in the final phases the boss is asking to add user behavior and location tracking, and was explicit about publishing the data to 3rd-party developers to make use of. He seems to have set his mind on it; is there anything I can/should do about it?
To clarify: if you are an app developer who's asked to add data gathering of the app's users and to publish it, is there anything you should say against it?"} {"_id": "105771", "title": "Should I make public my implementation of a published algorithm?", "text": "My company has problem A, let's say, face recognition. We find a nice algorithm like this one. We implement it and go home happy. Problem is, once home I can't sleep because I wonder if we should make the implementation public. After all, the algorithm was public. Someone did intensive research and did half (or more) of the work for us. I feel like we should help back and make our efforts public too. But then our commercial product would not be very _commercial_; just imagine our banner: > Hey customer, please buy our grunt master 6000 software, it's only $1,000.00! > Oh, and by the way, you can download the main algorithms here for free! Is there some sort of license for derivative work from these academic research papers? Is it my duty to make my implementations public? Or is it up to me? Can I make money from the algorithm?"} {"_id": "124864", "title": "What does \"automated build\" mean?", "text": "I'm trying to add Continuous Integration to a project. According to Wikipedia, one major piece of CI is automated builds. However, I'm confused about what, exactly, that means, as the CI and build automation articles seem to disagree. **Specific points of confusion: what does \"automated build\" mean _in the context of:_** * a project using an interpreted language, such as Python or Perl? * building from source on an end-user's machine? * an application that has dependencies that cannot be simply pre-compiled and distributed, such as a database in an RDBMS local to the user's machine?"} {"_id": "120527", "title": "ResourceSerializable: an alternative to ORM and ActiveRecord", "text": "A few opinionated reasons I don't like the traditional ORM and ActiveRecord patterns: * **They work only with a database.** Sometimes I'm dealing with objects from an API and other objects from a database. All the implementations I have seen don't allow for that. _Feel free to clue me in if I'm wrong on this._ * **They are brittle.** Changes in the database will likely break your implementation. Some implementations can help reduce this, but a few of the ones I've seen don't. * **Their very design is influenced by the database.** If I want to switch to using an API, I'll have to redesign the object to get it to work (likely). * **It seems to violate the single-responsibility principle.** They know what they are and how they act, but they also know how they are created, destroyed and saved? Seems a bit much. **What about an approach that is somewhat more familiar in PHP: implementing an interface?** In PHP 5.4, we'll have the `JsonSerializable` interface that defines the data to be `json_encode`d, so users will become accustomed to this type of thing. What if there was a `ResourceSerializable` interface? This is still an ORM by name, but certainly not by tradition. interface ResourceSerializable { /** * Returns the id that identifies the resource. */ function resourceId(); /** * Returns the 'type' of the resource. */ function resourceType(); /** * Returns the data to be serialized. */ function resourceSerialize(); } _Things might be poorly named, I'll take suggestions._ **Notes:** * **`ResourceId` will work for APIs and databases.** As long as your primary key in the database is the same as the resource ID in the API, there is no conflict.
All of the APIs I've worked with have a unique ID for the resource, so I don't see any issues there. * **`ResourceType` is the group or type associated with the resource.** You can use this to map the resource to an API call or a database table. If the `ResourceType` was `person`, it could map to `/api/1/person/{resourceId}` and the table `persons` (or `people`, if it's smart enough). * **resourceSerialize() returns the data to be stored.** Keys would identify API parameters and database table columns. * **This also seems easier to test than ActiveRecord / ORM implementations.** I haven't done much automated testing on traditional ActiveRecord/ORM implementations, so this is merely a guess. But it seems that being able to create objects independently of the library helps me. I don't have to use `load()` to get an existing resource; I can simply create one and set all the right properties. This is not so easy in the ActiveRecord / ORM implementations I've dealt with. **Downsides:** * **You need another object to serialize it.** This also means you have more code in general, as you have to use more objects. * **You have to map resource types to API calls and database tables.** This is even more work, but some ORMs and ActiveRecord implementations require you to map objects to table names anyway. **Are there other downsides that you see?** **Does this seem feasible to you?** **How would you improve it?** _Note: I almost asked this on StackOverflow because it might be too vague for their standards, but I'm still not really familiar with programmers.stackexchange.com, so please help me improve my question if it doesn't shape up to standards here._"} {"_id": "124860", "title": "Is there a formal anti-pattern to describe the scenario?", "text": "Some code is written to generate Excel Spreadsheets (Office Interop). * The code performs very poorly. * A subsystem is designed to generate the files at night. Performance isn't a concern at night. * A function is created to pick the correct file from the 100 different files available depending on a chosen set of parameters. * Because physical files exist, an archival system is added to back up these files (There is no reason to archive. These files should be generated on the fly). * This system doesn't include a configuration file; instead it has a hard-coded \"server picker\" function that simply reflects upon the server the code is running upon. * A scheduled task is necessary to support and run this service. This boils down to a single problem. The original code performs far too poorly to run in a production environment. Had the performance problem been resolved, the subsystem and the subsequent archiving system, \"file picker factory function\", hard-coded failure point, and the maintenance of the scheduled task and its added point of failure would have no need to exist. This is a \"cascading failure\", if you will. The original problem led to more bad code, more bad solutions and unnecessary overhead. Is there a formal anti-pattern or general term to describe it?"} {"_id": "154543", "title": "Should I expect real-world questions from an interviewing agency?", "text": "I started coding almost a year ago. By \"coding\" I mean HTML(5), CSS(3), and only a few times have I implemented some AJAX and JavaScript. I am interviewing for a position that expects me to know HTML, CSS, JS, JQuery, and AJAX. I feel confident in the HTML5/CSS3 subject area and somewhat ok with JavaScript. Will the agency expect me to write some code during the interview?
I do have a live website as an example, which contains snapshots of past projects which were sent to them. I am a little nervous, so any tips or recommendations are welcome."} {"_id": "60198", "title": "Would PMP or CEH (Certified Ethical Hacker) course be more useful to a junior ASP.NET web developer?", "text": "Only recently I moved into doing ASP.NET web development work, with quite a fair bit on ASP.NET MVC. I'm thinking of taking up some courses to further build up my profile. I'm having a tough decision between taking PMP or CEH. Which course would benefit my current career more, now or maybe in the upcoming 1-2 years? Thanks."} {"_id": "250226", "title": "How to handle hidden folders on deployed website", "text": "Our security team at work did a security scan of our soon-to-be-deployed website, and one of the items that was found was \"Hidden Directory Detected\". It shows up for 3 different folders: aspnet_client, scripts, and content. The software recommends throwing a 404 instead of a 403, or removing the folders completely. First, are the folders actually needed? How can I determine which folders in my Visual Studio project are actually needed in order for the site to run (without removing folders one at a time and trying to access the site)? What is the proper way to handle this/resolve the security scan alert? Do I need to add special routing rules in the routeconfig.cs for when these paths are accessed? Edit: I should note that this is a WebApi/REST service, not a regular MVC site. (Therefore, using the CustomErrors configuration section will not work)"} {"_id": "154549", "title": "How do you handle measuring Code Coverage in JavaScript", "text": "In order to measure Code Coverage for JavaScript unit tests, one needs to instrument the code, run the tests and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run Unit Testing on the production code and use that for my pass/fail. I would then create a shadow of my production code, with instrumentation, and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this? **EDIT** I don't want to use browser plugins, because I then need to use a browser in order to run my unit tests."} {"_id": "120529", "title": "What's the real benefit of meta-modeling?", "text": "After reading several texts about meta-modeling I still do not really get the practical benefit. Sometimes I think it is only an interesting mind game but not a useful tool. Sure, it is wise to clarify your modeling vocabulary: some may say _class_ where others say _entity_ or _concept_, but this is just simple documentation of your modeling terminology. Meta-modeling, as I understand it, is more complex, as it tries to formalize and abstract modeling. Some good examples are Keet's formal comparison of conceptual data modeling languages (UML, ERM and ORM) from academia and the Meta Object Facility (MOF) from industry. To me MOF looks as impractical as CORBA, which was also created by OMG.
In theory you could use meta-modeling to transform and integrate models in different modeling languages, but is anyone actually doing this?"} {"_id": "69648", "title": "Questions and Concerns About Using Java/Tomcat and Apache", "text": "I am having to use Java due to needing one backend library. Is it still considered good practice to set up Apache as the front-facing server and have Tomcat behind it? I am spending a lot of time configuring mod_jk. Also, what do people usually use for CMS or web-page templating in such a setup? I wanted to use Drupal, but was advised that since I'd have to set up mod_jk and jump through extra hoops configuring it, it might not be a good approach. Could you guys please suggest some common considerations or good approaches in this sort of a situation? Thanks, Alex"} {"_id": "63421", "title": "How to handle a 'bad code' interview?", "text": "A 'bad code' interview is one where the interviewee is shown a snippet of 'bad code' and asked to correct it or point out things that are wrong with it. I have trouble with these interviews because it takes me some time to read the code, figure out what it's doing, and point out the flaws. In a situation where there's time pressure, I tend to freeze up and I see that the code 'should' work, even when it doesn't. What's a good way to handle this sort of interview, and, more generally, what are some good techniques to read and understand code _quickly_?"} {"_id": "63424", "title": "Clojure Web Application: EC2 or GAE?", "text": "I am developing a web application written in Clojure using the Compojure framework. My question is, should I deploy to Amazon EC2 or Google App Engine? I've read this article on running Clojure code on GAE, but I am still a bit concerned about the limitations on GAE. I am going to be running sandboxed Clojure code, which means I might end up needing to tinker with the JVM security policies. With EC2, I'll obviously have full control of everything. The downside is that this means more of an effort from a sysadmin perspective. I'm not sure which option makes more fiscal sense. I am not expecting too much traffic initially, and suspect that I'll operate pretty close to the \"free\" tier/quotas on either service. I've had a lot of success working with EC2 in the past, but at the same time I'd love to get some experience with Google App Engine (which I've never used). So what's your vote: Amazon EC2 or Google App Engine?"} {"_id": "187595", "title": "Setting up a development standardization guide for in-house/vendor programmers", "text": "I was recently hired by a large multi-national corporation to head up mobile development for their sales operation/support team. In a company of close to 10,000 people I am, at least in the Americas, the only mobile developer. They are testing the waters and phase 1 (temp-to-hire) went well enough for them. Now they are considering expanding to other developers for their other sales operations/support teams and I've been tasked with assisting/leading the writing of a standardization guide for iOS programming. I am a big believer in giving people the freedom to work in the manner they most feel comfortable in but at the same time, I have been the creator of and on the receiving end of big balls of mud applications.
Having learned through experience, I have several standards that I follow religiously, such as commenting, at times almost every line - just short things, but enough to let someone else know what is going on - and using #pragma mark - DESCRIPTION to block off like-minded methods, indentation, naming classes with a prefix to avoid namespace conflicts, etc. etc. So I guess what I am looking for is not to tell another programmer how they should iterate through an array, but rather some basic standardization so anyone can jump into anyone else's project and find their way around with little learning curve. I'd love to see what other means people use to maintain control over a software development group."} {"_id": "63429", "title": "Real-world Agile practices and estimates", "text": "In a perfect world, we tell the client we follow an agile methodology where we allow the scope to increase/decrease as the requirements change and we bill per hour for each iteration. In reality, estimates need to be delivered before the project starts, and there is always pressure to hit that estimate number while being flexible with scope changes. I'm wondering what sort of real-world agile practices people have applied to the de facto waterfall-based billing/estimating that we sometimes get stuck in. Can you \"break in\" a client to agile mid-project, after you have acquired trust, or are there any other techniques to transition a client to agile? *side note: in my \"real world\" situation I run a recently founded development company that does not have the luxury of forcing clients to use our method and process. Many times we have to bend our backs to use theirs, but I see it as a burden all businesses face when starting. I'm wondering how best to deal with it."} {"_id": "187599", "title": "Is it possible to use a \"non-commercial\" REST API in a for-pay app?", "text": "I am interested in integrating the results of 3rd-party news APIs into my for-pay application. The APIs would be a very small part of the app (e.g. the app is not just reselling the APIs). I have found that many APIs such as Yahoo have \"non-commercial\" conditions like the following: > YOU SHALL NOT: Sell, lease, share, transfer, or sublicense the Yahoo! APIs > or access or access codes thereto or derive income from the use or provision > of the Yahoo! APIs, whether for direct commercial or monetary gain or > otherwise, unless the API Documents specifically permit otherwise or Yahoo! > gives prior, express, written permission... (from the Yahoo Local Search API page) Does this prevent the API from being used as part of any app which costs money to use? For example, if I charge you to get into my site, and then display some content from this API, is that OK? If I let you see the content from the API for free, but then charge you to do something else on the site, is that OK? I'm obviously not expecting specific legal advice. Instead, I'd like to know about either: (1) a good reference on this topic that can answer my question (2) an \"answer by example\", i.e. an example of a real-world app that does something similar"} {"_id": "137941", "title": "Should a method do one thing and be good at it?", "text": "\"Extract Till You Drop\" is something I've read on Uncle Bob's blog, meaning that a method should do one thing alone and be good at it. **What is that one thing?** When should you stop extracting methods?
Let's say I have a login class with the following methods: public string getAssistantsPassword() public string getAdministratorsPassword() These retrieve the respective account's password from the database. Then: public bool isLogInOk() This method compares the password that has been called or selected, and sees if the password the user provided is in the database. Is this an example of \"Extract Till You Drop\"? **How will you know when you are doing too much extracting?**"} {"_id": "210372", "title": "Why are we supposed to use short functions to sectionalize our code?", "text": "I've seen an increasing trend in the programming world saying that it is good practice to separate code blocks into their own functions. Obviously, if that code block is reusable, you should do that. What I do not understand is this trend of using a function call as essentially a comment that you hide your code behind if the code is not reusable. That's what code folding is for. Personally, I also hate reading code like this because it feels like it has the same problem as the GOTO statement - it becomes spaghetti code where if I'm trying to follow the program's flow I'm constantly jumping around and can't logically follow the code. It is much easier for me to follow code that is linear but has a single comment over sections of code labeling what it does. With code folding, this is essentially the same exact thing, except the code stays in a nice linear fashion. When I try to explain this to my colleagues, they say comments are evil and clutter - how is a comment on top of a block of folded code any different from a function call that will never get called more than once? How is overusing functions different from overusing comments? How is frequent use of functions different from the problems with GOTO statements? Can someone please explain the value of the programming paradigm to me?"} {"_id": "195989", "title": "Is it OK to split long functions and methods into smaller ones even though they won't be called by anything else?", "text": "Lately I've been trying to split long methods into several short ones. **For example:** I have a `process_url()` function which splits URLs into components and then assigns them to some objects via their methods. Instead of implementing all this in one function, I only prepare the URL for splitting in `process_url()`, and then pass it over to the `process_components()` function, which then passes the components to the `assign_components()` function. At first, this seemed to improve readability, because instead of huge 'God' methods and functions I had smaller ones with more descriptive names. However, looking through some code I've written that way, I've found that I now have no idea whether these smaller functions are called by any other functions or methods. **Continuing the previous example:** someone looking at the code might think that the `process_components()` functionality is abstracted into a function because it's called by various methods and functions, when in fact it's only called by `process_url()`. This seems somewhat wrong. The alternative is to still write long methods and functions, but indicate their sections with comments. Is the function-splitting technique I described wrong? What is the preferred way of managing large functions and methods? **UPDATE:** My main concern is that abstracting code into a function might imply that it could be called by multiple other functions.
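As an aside, here is a minimal C# sketch of the kind of split being described (all names and signatures are hypothetical, not from the original post). Marking the single-use helpers private at least records in the code that nothing else calls them:

public class UrlProcessor
{
    // Public entry point; the only caller of the two helpers below.
    public string[] ProcessUrl(string rawUrl)
    {
        string prepared = rawUrl.Trim();        // prepare the URL for splitting
        string[] components = ProcessComponents(prepared);
        return AssignComponents(components);
    }

    // Single-use helper: splits the prepared URL into components.
    private string[] ProcessComponents(string url) => url.Split('/');

    // Single-use helper: here it just normalizes each component.
    private string[] AssignComponents(string[] components)
    {
        for (int i = 0; i < components.Length; i++)
            components[i] = components[i].ToLowerInvariant();
        return components;
    }
}

In C# 7 and later the helpers could even be local functions inside ProcessUrl, which makes the "only called here" property impossible to miss.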
**SEE ALSO:** discussions on reddit at /r/programming (provides a different perspective from most of the answers here) and /r/readablecode."} {"_id": "206366", "title": "How to avoid peppering the code with IFs", "text": "I need to add a new payment type to an existing code base. That means that I'm going to have a few methods looking like this: if (old payment type) process old type of payment else process new type of payment Now, if this could have been determined beforehand, I would have this method point to an interface implementing a common Pay method and then that interface would be implemented by one of two classes. Unfortunately, I only know which method the customer chooses at runtime, which means I need a way to determine which branch to use. Is there another way except for just having `if`s spread through the code?"} {"_id": "209822", "title": "Are too many if-else statements for validation bad?", "text": "From the book Professional Enterprise .NET, which has a 5-star rating on Amazon that I am doubting after having read through it. Here is a Borrower class (in C#, but it's pretty basic; anyone can understand it) that is part of a Mortgage application the authors built: public List<BrokenBusinessRule> GetBrokenRules() { List<BrokenBusinessRule> brokenRules = new List<BrokenBusinessRule>(); if (Age < 18) brokenRules.Add(new BrokenBusinessRule(\"Age\", \"A borrower must be over 18 years of age\")); if (String.IsNullOrEmpty(FirstName)) brokenRules.Add(new BrokenBusinessRule(\"FirstName\", \"A borrower must have a first name\")); if (String.IsNullOrEmpty(LastName)) brokenRules.Add(new BrokenBusinessRule(\"LastName\", \"A borrower must have a last name\")); if (CreditScore == null) brokenRules.Add(new BrokenBusinessRule(\"CreditScore\", \"A borrower must have a credit score\")); else if (CreditScore.GetBrokenRules().Count > 0) { AddToBrokenRulesList(brokenRules, CreditScore.GetBrokenRules()); } if (BankAccount == null) brokenRules.Add(new BrokenBusinessRule(\"BankAccount\", \"A borrower must have a bank account defined\")); else if (BankAccount.GetBrokenRules().Count > 0) { AddToBrokenRulesList(brokenRules, BankAccount.GetBrokenRules()); } // ... more rules here ... return brokenRules; } Full code listing on snipt.org. What is confusing me is that the book is supposed to be about professional enterprise design. Maybe I am a bit biased because the author confesses in chapter 1 that he didn't genuinely know what decoupling was, or what SOLID meant, until the 8th year of his programming career (and I think he wrote the book in year 8.1). I am no expert but I don't feel comfortable with: 1. Too many if-else statements. 2. The class both serves as an entity _and_ has validation. Isn't that a smelly design? (You might need to view the full class to get some context) Maybe I am wrong, but I don't want to pick up bad practices from a book which is supposed to be teaching enterprise design. The book is full of similar code snippets and it is really bugging me now. If it _is_ bad design, how could you avoid using too many if-else statements? I obviously do not expect you to rewrite the class, just provide a general idea of what could be done."} {"_id": "148849", "title": "Style for control flow with validation checks", "text": "I find myself writing a lot of code like this: int myFunction(Person* person) { int personIsValid = !(person==NULL); if (personIsValid) { // do some stuff; might be lengthy int myResult = whatever; return myResult; } else { return -1; } } It can get pretty messy, especially if multiple checks are involved.
In such cases, I've experimented with alternate styles, such as this one: int netWorth(Person* person) { if (person==NULL) { return -1; } if (!(person->isAlive)) { return -1; } int assets = person->assets; if (assets==-1) { return -1; } int liabilities = person->liabilities; if (liabilities==-1) { return -1; } return assets - liabilities; } I'm interested in comments about the stylistic choices here. [Don't worry too much about the details of individual statements; it's the overall control flow that interests me.]"} {"_id": "198337", "title": "Is it a good practice to write a method that gets something and checks the value?", "text": "Occasionally I have to write methods like this: string GetReportOutputDirectoryAndMakeSureExist() { string path = Path.Combine ( ... ); //whatever logic if(!Directory.Exists(path)) Directory.CreateDirectory(path); return path; } or string GetAndVerifyExistenceOfReportConfigurationPath() { string path = Path.Combine ( ... ); //whatever logic to find the configuration if(!File.Exists(path)) throw new Exception(\"Report configuration not found\"); return path; } or Customer GetCustomerAndVerifyActive(int id) { Customer customer = Db.GetCustomerById(id); if(!customer.IsActive) throw new Exception(\"Customer is not active\"); return customer; } Is it a good practice? I am told that it is normally not a good idea for a method to do more than one thing, or for a method to have side effects (like creating a directory). But if I split, for example, the last method into GetCustomer(id) and VerifyActive(customer), I will have to do: var customer = GetCustomer(id); VerifyActive(customer); consecutively at several places, and I think it counts as a violation of DRY. Is this a good idea? Any idea how to help with the long method names?"} {"_id": "220888", "title": "How to do a clean refactoring of if-else code without leaving any free blocks?", "text": "if(condition1) { Statement1A; Statement1B; } else if(condition2) { Statement2; } else if(condition3) { Statement3; } else { Statement1A; Statement1B; } return; I would like to refactor that code so that I do not duplicate statements. I **always** need to check condition1 before any other condition. (So I cannot just change the order of the conditions). I also do not want to write `&&!condition1` after every other condition. I solved it like this: if(condition1) { } else if(condition2) { Statement2; return; } else if(condition3) { Statement3; return; } Statement1A; Statement1B; return; However I do not think an empty if condition will be easily understandable by others (even by me after a while). What is a better approach?"} {"_id": "211363", "title": "Until what point should I refactor?", "text": "Up to what point do you think a programmer should refactor code? Basically, having def method do_something end pieces of code would be handy, but they increase spaghetti code up to the point where you have to remember the path of more than 8-10 methods. So wouldn't it be easier to have spaghetti code with no more than 3 methods, despite the fact that the methods are longer than they're supposed to be?"} {"_id": "191208", "title": "Approaches to checking multiple conditions?", "text": "What is the best practice for checking multiple conditions, in no particular order? The example in question needs to check four distinct conditions, in any order, and fail showing the correct error message. The examples below use a C-like syntax. ## Option 1: Check all at once This method isn't preferred because the reasons why the condition failed aren't easy to discern.
if (A && B && C && D) { // continue here... } else { return \"A, B, C, or D failed\"; } ## Option 2: Nested Conditions In the past, I used to use nested conditional statements, like so. This can get really confusing. if (A) { if (B) { if (C) { if (D) { // continue here... } else { return \"D failed\"; } } else { return \"C failed\"; } } else { return \"B failed\"; } } else { return \"A failed\"; } ## Option 3: Fail-early This is the current method I use, which I like to call _fail-early_. As soon as a \"bad\" condition is hit, it returns. if (!A) { return \"A failed\"; } if (!B) { return \"B failed\"; } if (!C) { return \"C failed\"; } if (!D) { return \"D failed\"; } // continue here... ## Option 4: Collect Errors One last approach I can think of is a sort of _collection_ of errors. If the conditions to test for are completely separate, one might want to use this approach. String errors = \"\"; if (!A) { errors += \"A failed\\n\"; } if (!B) { errors += \"B failed\\n\"; } if (!C) { errors += \"C failed\\n\"; } if (!D) { errors += \"D failed\\n\"; } if (errors.isEmpty()) { // continue here... } else { return errors; } What are the best practices for checking multiple conditions? In terms of expectations, it ideally behaves like example 4, where details of all the failures are returned."} {"_id": "254235", "title": "Avoid Code Repetition in Condition Statements", "text": "I have been programming for over 15 years now. I consider myself a very good programmer, but I understand (like all of us) there are things that I need to work on. One of these things is code repetition when dealing with conditions. I will give a generic sample: if(condition1) { //perform some logic if(condition2) { //perform some logic if(condition3) { //Perform logic } else { //MethodA(param) } } else { //MethodA(param) } } else { //MethodA() } Now, I cannot make it easy by doing the following: if(condition1 && condition2) { } else { } I cannot do this since I need to perform some logic if condition1 is true and before I test condition2. Is there a way to structure if...else blocks so that if you need to call a method in each else block, you are not repeating yourself?"} {"_id": "252568", "title": "Eliminate duplicate code in nested IFs without creating a function", "text": "Let's say we have two ifs that depend on each other: if var exists { if var is array { //Do stuff with var } else { //Resolve the problem } } else { //Resolve the problem in the exact same way as above } //Continue execution with the assumption that var is in the appropriate state How can I refactor this to remove the duplicated code without using gotos or functions/methods?"} {"_id": "94429", "title": "Refactoring into lots of methods - is this considered clean or not?", "text": "So, I watched as my colleague complained a bit about a project he has inherited from someone who is, shall we say, not very experienced as a programmer (an intern left to his own devices on a project). At one point there was duplicate code, about 7-8 lines of code duplicated (so 14-16 lines total) in a method of, say, 70-80 lines of code. We were discussing the code, and he spotted a way to refactor this to remove the duplication by simply altering the structure a bit. I said great, and then we should also move the code out to a separate method so the large method gets a little more readable. He said 'no, I would never create a method for just 7-8 lines of code'. Performance issues aside, what is **your** input on this?
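To make the shape of that debate concrete, a minimal C# sketch with entirely hypothetical names - the 7-8 formerly duplicated lines end up in one small private helper that the long method calls twice:

using System;

public class OrderProcessor
{
    public void HandleOrder(Order order)
    {
        // ... first stretch of the long method ...
        TouchOrder(order);   // was 7-8 duplicated lines
        // ... more work ...
        TouchOrder(order);   // the second former duplicate
    }

    // The extracted lines: short, named, and testable on their own.
    private void TouchOrder(Order order)
    {
        order.LastTouched = DateTime.UtcNow;
        order.TouchCount++;
        // ... the rest of the formerly duplicated lines ...
    }
}

public class Order
{
    public DateTime LastTouched { get; set; }
    public int TouchCount { get; set; }
}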
Do you lean against using more methods (which in C# pads code with about 3 lines) or larger methods, when that particular code will probably not be used anywhere else? So it is purely a readability issue, not a code-reuse one. Cheers :)"} {"_id": "47789", "title": "How would you refactor nested IF statements?", "text": "I was cruising around the programming blogosphere when I happened upon this post about GOTOs: http://giuliozambon.blogspot.com/2010/12/programmers-tabu.html Here the writer talks about how \"one must come to the conclusion that there are situations where GOTOs make for more readable and more maintainable code\" and then goes on to show an example similar to this: if (Check#1) { CodeBlock#1 if (Check#2) { CodeBlock#2 if (Check#3) { CodeBlock#3 if (Check#4) { CodeBlock#4 if (Check#5) { CodeBlock#5 if (Check#6) { CodeBlock#6 if (Check#7) { CodeBlock#7 } else { rest - of - the - program } } } } } } } The writer then proposes that using GOTOs would make this code much easier to read and maintain. I personally can think of at least 3 different ways to flatten it out and make this code more readable without resorting to flow-breaking GOTOs. Here are my two favorites. 1 - Nested Small Functions. Take each if and its code block and turn it into a function. If the boolean check fails, just return. If it passes, then call the next function in the chain. (Boy, that sounds a lot like recursion, could you do it in a single loop with function pointers?) 2 - Sentinel Variable. To me this is the easiest. Just use a blnContinueProcessing variable and check to see if it is still true in your if check. Then if the check fails, set the variable to false. How many different ways can this type of coding problem be refactored to reduce nesting and increase maintainability?"} {"_id": "213284", "title": "Making some methods mostly contain method calls, while others do \"the lowest level\" work", "text": "So I thought about this, and I don't know if it's included or not in any methodology. I think the advantages of this coding style are that, at the lowest level, the code is extremely testable, and the integration tests should also be very easy to build. I also think this would make the code more readable and the UML would be understood faster. So here's my example: class CoolObject{ var member1; //needed for instance in lifecycle events var member2; //same comment //This method could be for instance an event handler //Notice this contains only assignments and method calls. No library calls or lower level stuff [public] method high_level(params...){ var local_var1; var local_var2; local_var1 = call method lower_level1(param1,param2); local_var2 = call method lower_level2(param1, local_var1); member1 = call method lower_level3(local_var2); } //Notice this contains only library calls and lower level processing [private] method lower_level1(param1, param2){ return param1 + param2 + libraryXXY123.function142(current_date); } //Notice this contains only library calls and lower level processing [private] method lower_level2(param1, param2){ var return_value; loop over param2{ if(condition){ add param1 to return_value; } } return return_value + libraryASDF123.function3132(system_user); } } Please note that this is not written in any specific language, as I only wanted to illustrate the concept. So do you know some methodologies that use this, or that warn against it?
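For what it's worth, a rough C# rendering of that layering (hypothetical names and toy logic): the public method only orchestrates, while the private helpers are the only places that touch lower-level details, so each layer can be tested at its own level.

using System;
using System.Linq;

public class CoolObject
{
    private int member1; // set by the high-level method

    // High level: nothing but local assignments and calls to lower levels.
    public void HighLevel(int a, int b)
    {
        int first = LowerLevel1(a, b);
        int second = LowerLevel2(a, first);
        member1 = second;
    }

    // Low level: the only method that touches "library" details (here, the clock).
    private int LowerLevel1(int a, int b) => a + b + DateTime.UtcNow.Day;

    // Low level: a small loop, trivially unit-testable in isolation.
    private int LowerLevel2(int a, int count) => Enumerable.Repeat(a, Math.Max(count, 0)).Sum();
}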
Please elaborate on the answer, as I think this would be a good idea, and would like this either confirmed, or the opposite."} {"_id": "205803", "title": "How to tackle a 'branched' arrowhead anti-pattern?", "text": "I recently read this question that features the arrow anti-pattern. I have something similar in code I'm trying to refactor, except that it branches. It looks a little something like this: if(fooSuccess==true&&barSuccess!=true){ if(mooSuccess==true){ ..... }else if (mooSuccess!=true){ ..... } }else if(fooSuccess!=true&&barSuccess==true){ if(mooSuccess==true){ ..... }else if (mooSuccess!=true){ if(cowSuccess==true){ ..... }else if (cowSuccess!=true){ ..... } } } ...... In short it looks like this: if if if if do something endif else if if if do something endif endif else if if if do something endif endif endif Outline borrowed from the Coding Horror article on the subject. And the code goes on through different permutations of true and false for various flags. These flags are set somewhere 'up' in the code elsewhere, either by user input or by the result of some method. How can I make this kind of code more readable? My intention is that eventually I will have a Bean-type object that contains all the choices the previous programmer tried to capture with this branching anti-pattern. For instance, if in the outline of the code above we really do only have three options, I have an enum set inside that bean: enum ProgramRouteEnum{ OPTION_ONE,OPTION_TWO,OPTION_THREE; boolean choice; void setChoice(boolean myChoice){ choice = myChoice; } boolean getChoice(){ return choice; } } Is this an acceptable cure? Or is there a better one?"} {"_id": "214327", "title": "Using Functions for Never-Repeated Code", "text": "What are some best practices for using functions to break up large blocks of code into discrete chunks of logic when those functions are only ever going to be used once within the lifetime of a program? The canonical example for web development is the initialization of a home page. When the page loads, you might do something like check some credentials and authentication, make an API call to get some data, and parse that data. Naturally, this could all be written in a procedural format within a large function called initialize(). None of the logic will foreseeably be re-used anywhere else within the program. When I'm confronted with such scenarios, my initial instinct is to divide each piece of logic into a discrete function and simply call the functions from an initialize() function, with a few lines of code within initialize() performing some cleanup and teardown duties. So I'd have something like an authenticate(), get_data(), and parse_data() function, as well as perhaps a few helper functions for the main functions. In my opinion, that makes the code easier to understand from a high level, makes unit tests more meaningful, and helps organize your thinking as you're writing your code. However, I recently clashed with a guy who essentially thinks the opposite. He feels too many functions lead to \"annoying jumping around\", functions should never be used if code is not going to be re-used, and that the overuse of functions/classes/modules/any kind of modularity amounts to over-engineering.
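As an illustration of the decomposition being argued over, a minimal C# sketch (all names hypothetical; the bodies are stand-ins, not a real page lifecycle):

public class HomePage
{
    // Orchestrator: reads like a table of contents for the page load.
    public void Initialize()
    {
        Authenticate();
        string data = GetData();
        ParseData(data);
        // a few lines of cleanup/teardown here
    }

    // Each helper is used exactly once, but each has a name, a clear
    // responsibility, and can be unit tested on its own.
    private void Authenticate() { /* check credentials */ }

    private string GetData() => "{ \"items\": [] }"; // stand-in for an API call

    private void ParseData(string json) { /* parse and populate the page */ }
}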
I was put off by his argument and felt that it leads to the shitty procedural PHP that so often makes web development a nightmare, but perhaps there is some validity in what he's saying?"} {"_id": "205425", "title": "Should I repeat condition checking code or put it in a function?", "text": "I have a bunch of calls to a method that may or may not need to be made depending on whether certain features are enabled or not. As such, I've wrapped these calls in `if` blocks checking the enabled statuses. The arguments to the method in each of these calls use some long variable names, so I've broken them out into different lines for readability / to adhere to our line-length coding standard. These large method-call blocks, combined with the condition checks to see if they should be run (exacerbated by the mandate that we enclose all `if` blocks in braces), are quite noisy. So I have something like this: if (FeatureSettings.FeatureXIsEnabled) { this.TheMethodImCalling( FeatureSettings.ReallyLongFeatureXSettingAName, FeatureSettings.ReallyLongFeatureXSettingBName, ...); } if (FeatureSettings.FeatureYIsEnabled) { this.TheMethodImCalling( FeatureSettings.ReallyLongFeatureYSettingAName, FeatureSettings.ReallyLongFeatureYSettingBName, ...); } if (FeatureSettings.FeatureZIsEnabled) { this.TheMethodImCalling( FeatureSettings.ReallyLongFeatureZSettingAName, FeatureSettings.ReallyLongFeatureZSettingBName, ...); } To reduce the noise and redundancy in this code I was thinking of taking the `if` block structure and putting it _inside_ `TheMethodImCalling`, then passing in the `...IsEnabled` value along with the rest of the arguments. This seems somewhat silly though as I would effectively be calling the method and telling it \"Don't do anything.\" when a feature wasn't enabled. Now, I could probably make the code a bit more readable by assigning the long settings to shorter temporary variables or even fields of an `Arguments` class that I would then pass to the method, but those don't address the core question: **Should I repeat the condition checking code or put it in the function?** Or is there an alternative I should consider? (Both options seem somewhat smelly...)"} {"_id": "255817", "title": "Which is a better pattern (coding style) for validating arguments - hurdle (barrier) or fence?", "text": "I don't know if there are any accepted names for these patterns (or anti-patterns), but I like to call them what I call them here. Actually, that would be **Question 1:** What are accepted names for these patterns, if any? Suppose there is a method that accepts a bunch of parameters, and you need to check for invalid input before executing actual method code: public static void myMethod (String param1, String param2, String param3) ## Hurdle Style I call it so because it's like hurdles a track runner has to jump over to get to the finish line. You can also think of them as conditional barriers. { if (param1 == null || param1.equals(\"\")) { // some logging if necessary return; // or throw some Exception or change to a default value } if (param2 == null || param2.equals(\"\")) { // I'll leave the comments out return; } if (param3 == null || param3.equals(\"\")) { return; } // actual method code goes here. } When the checks are for a certain small section in a larger method (and the section cannot be moved to a smaller private method), labelled blocks with `break` statements can be used: { // method code before block myLabel: { if (param1 ... // I'll leave out the rest for brevity break myLabel; if (param2 ...
break myLabel; ... // code working on valid input goes here } // 'break myLabel' will exit here // method code after block } ## Fence Style This surrounds the code with a fence that has a conditional gate that must be opened before the code can be accessed. Nested fences would mean more gates to reach the code (like a Russian doll). { if (param1 != null && !param1.equals(\"\")) { if (param2 != null && !param2.equals(\"\")) { if (param3 != null && !param3.equals(\"\")) { // actual method code goes here. } else { // some logging here } } else { // some logging here } } else { // some logging here } } It could be rewritten as follows too. The logging statements are right beside the checks, rather than being after the actual method code. { if (param1 == null || param1.equals(\"\")) { // some logging here } else if (param2 == null || param2.equals(\"\")) { // some logging here } else if (param3 == null || param3.equals(\"\")) { // some logging here } else { // actual method code goes here. } } **Question 2:** Which style is better, and why? **Question 3:** Are there any other styles? > I personally prefer _hurdle style_ because it looks easier on the eyes and > does not keep indenting the code to the right every time there's a new > parameter. It allows intermittent code between checks, and it's neat, but > it's also a little difficult to maintain (several exit points). > > The first version of _fence style_ quickly gets really ugly when adding > parameters, but I suppose it's also easier to understand. While the second > version is better, it can be broken accidentally by a future coder, and does > not allow intermittent code between conditional checks."} {"_id": "198565", "title": "Is this pattern bad?", "text": "I notice that when I code I often use a pattern that calls a class method, and that method will call a number of private functions in the same class to do the work. The private functions do one thing only. The code looks like this: public void callThisMethod(MyObject myObject) { methodA(myObject); methodB(myObject); } private void methodA(MyObject myObject) { //do something } private void methodB(MyObject myObject) { //do something } Sometimes there are 5 or more private methods. In the past I have moved some of the private methods into another class, but there are occasions when doing so would be creating needless complexity. The main reason I don't like this pattern is that it is not very testable: I would like to be able to test all of the methods individually, so sometimes I will make all of the methods public so I can write tests for them. Is there a better design I can use?"} {"_id": "201636", "title": "How to handle complex conditions?", "text": "We are working on a project where we have to manage conditions like these, i.e.: A User can save an order under these conditions: * User has permission \"SaveOrder\" * Order is in state \"shipped\" * Online Shop is open. This condition is just an example - I would like to point out that we have three conditions from different areas (roles and permissions, the inner state of the order object, and the state of another domain object). In a previous project we used this code: public static bool CanSaveOrder(Order order) { return CurrentPrincipal.HasPermission(Permissions.SaveOrder) && order.State == States.Shipped && OnlineShop.IsOpen(); } But I feel that there can be a more elegant/dynamic solution. I have read something about a \"business rule engine\".
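A minimal sketch of the rule-list idea behind such engines (my own naming, not an established library; it reuses the CurrentPrincipal/States/OnlineShop helpers from the question's own code): each rule is a named predicate, and the check just aggregates them, so adding a condition means adding one line.

using System;
using System.Collections.Generic;
using System.Linq;

public static class SaveOrderRules
{
    // Each rule: a human-readable name plus a predicate over the order.
    private static readonly List<(string Name, Func<Order, bool> Check)> Rules =
        new List<(string, Func<Order, bool>)>
        {
            ("User has SaveOrder permission", o => CurrentPrincipal.HasPermission(Permissions.SaveOrder)),
            ("Order is shipped", o => o.State == States.Shipped),
            ("Online shop is open", o => OnlineShop.IsOpen()),
        };

    // Names of the rules that fail, usable for error messages.
    public static IEnumerable<string> BrokenRules(Order order) =>
        Rules.Where(r => !r.Check(order)).Select(r => r.Name);

    public static bool CanSaveOrder(Order order) => !BrokenRules(order).Any();
}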
Is this the right way to manage such conditions?"} {"_id": "243331", "title": "Calling methods only if the previous call succeeded", "text": "In my C# program I have to perform 5 steps (tasks) sequentially. Basically, each of the five should execute only if the previous task succeeded. Currently I have done it in the following style, but this is not a very good code style to follow: var isSuccess=false; isSuccess=a.method1(); if(isSuccess) isSuccess=a.method2(); if(isSuccess) isSuccess=a.method3(); if(isSuccess) isSuccess=a.method4(); if(isSuccess) isSuccess=a.method5(); How can I refactor this code? What is the best way I can follow?"} {"_id": "107669", "title": "One-line functions that are called only once", "text": "Consider a parameterless ( _edit:_ not necessarily) function that performs a single line of code, and is called only once in the program (though it is not impossible that it'll be needed again in the future). It could perform a query, check some values, do something involving regex... anything obscure or \"hacky\". The rationale behind this would be to avoid hard-to-read evaluations: if (getCondition()) { // do stuff } where `getCondition()` is the one-line function. My question is simply: is this a good practice? It seems alright to me but I don't know about the long term..."} {"_id": "218429", "title": "Which is the preferred coding style to validate and return from a method?", "text": "Which of the below is the preferred coding style (in C# .NET)? public void DoWork(Employee employee) { if(employee == null) return; if(string.IsNullOrEmpty(employee.Name)) return; // Do Work } **or** public void DoWork(Employee employee) { if(employee != null && !string.IsNullOrEmpty(employee.Name)) { // Do Work } }"} {"_id": "201804", "title": "Is using subprocedures to logically separate my code a bad idea for structured programming?", "text": "Most of my programming experience is in OOP, where I have fully embraced the concepts thereof, including encapsulation. Now I'm back to structured programming, where I have a tendency to logically separate my code using subprocedures. For example, if I have a large switch case (30 cases or more), I'll put that in its own procedure so the main method looks a little \"neater\". Generally, subprocedures are used to help keep things _DRY_, but in some instances these logical separations I create usually amount to being used only once. Some of my code was being reviewed, and it was mentioned that this is a bad idea. His backing for this claim is that it muddies the water and \"unnecessarily hides\" code. Instead, he insists that a subprocedure MUST be used more than once to merit making a subprocedure out of a section of code. While this idea of \"hiding code\" is commonplace in OOP, he does admit to having little to no understanding of OOP concepts and has only ever worked with structured programming. **Is there any backing to his claim or is this merely programming dogma?**"} {"_id": "187478", "title": "Always pull out common cases and branch separately?", "text": "We had a disagreement in a code review. What I had written: if(unimportantThing().isGood && previouslyCalculatedIndex != -1) { //Stuff } if(otherThing().isBad && previouslyCalculatedIndex != -1) { //Otherstuff } if(yetAnotherThing().isBad) { //Stuffystuff } The reviewer called that ugly code.
This is what he expected: if( previouslyCalculatedIndex != -1) { if(unimportantThing().isGood) { //Stuff } if(otherThing().isBad) { //Otherstuff } } if(yetAnotherThing().isBad) { //Stuffystuff } I'd say it's a pretty trivial difference and that the complexity of adding another layer of branching is about as bad as one or two logical ANDs. But just to check myself, is this really a grievous coding sin that you would take a firm stance over? Do you _always_ pull out the common cases in your if statements and branch on them separately, or do you add logical ANDs to a couple of if statements?"} {"_id": "166887", "title": "What does (Lua) game scripting mean?", "text": "I've read that Lua is often used for embedded scripting, and in particular for game scripting. I find it hard to picture how it is used exactly. Can you describe why and for which features and for which audience it is used? This question isn't specifically addressing Lua, but rather any embedded scripting that serves a _purpose similar to Lua_ scripting. Is it used for end-users to make custom adjustments? Is it used by game developers to speed up creation of game logic (levels, AI, ...)? Is it used to script game framework code since scripting can be faster? Basically I'm wondering how deep such scripting usage goes, between plain configuration and framework logic. And how much scripting is done: a few configuration lines or a considerable amount?"} {"_id": "166882", "title": "Reconstruct a file from a TCP stream", "text": "I have a client and a server and a third box which sees all packets from the server to the client (but not the other way around). Now when the client requests a file from the server (over HTTP), the third box sees the response. I am trying to reconstruct the file there. I am using `libpcap` to capture TCP datagrams and trying to reconstruct the file there. Here is what I did: 1. Listen for packets on an interface 2. Group all packets which have the same ACK number 3. Sort the group based on SEQ number 4. Extract data from each packet, combine them, and write to the disk The problem is, the file thus generated is not exactly the same as the original file. Their sizes are the same, but when I open the images in an image viewer, they look different. Does everything sound correct here?
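One thing worth checking before the hex dumps below: the TCP payload is a plain byte stream, so `ntohl`-style byte-order conversion applies only to header fields (SEQ/ACK numbers), never to the data bytes themselves; and the natural ordering key for one direction of a stream is the SEQ number alone, with each segment's file offset derived from it. A hedged sketch of that idea (written in C# for brevity, hypothetical types, ignoring loss, overlap, and sequence wraparound):

using System;
using System.Collections.Generic;
using System.Linq;

public static class StreamReassembler
{
    // segments: SEQ number -> raw payload bytes, exactly as captured.
    public static byte[] Reassemble(IDictionary<uint, byte[]> segments)
    {
        uint firstSeq = segments.Keys.Min();
        var output = new List<byte>();
        foreach (var kv in segments.OrderBy(kv => kv.Key))
        {
            int offset = (int)(kv.Key - firstSeq); // where this payload belongs in the file
            while (output.Count < offset) output.Add(0); // naive gap handling
            output.AddRange(kv.Value); // payload copied verbatim, no byte swapping
        }
        return output.ToArray();
    }
}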
**EDIT** Hexdump of first 512 bytes of the recovered file 0000000 009a 0095 0090 008b 0086 0081 007c 0077 0000010 0072 006d 0068 0063 005e 0059 0054 004f 0000020 004a 0045 0040 003b 0037 0032 002d 0028 0000030 0023 001e 0019 0014 000f 000a 0005 0000 0000040 0400 0000 0000 0000 7276 6375 5420 4352 0000050 0000 0000 6720 7369 0000 0000 0000 0000 0000060 0000 0000 0002 0000 028f 0000 0000 0000 0000070 0001 0000 0000 0000 6173 6d65 1fe7 0057 0000080 0000 0050 0956 004c 0000 0000 5a20 5859 0000090 0001 0000 5c9e 0003 130b 0004 edcc 0003 00000a0 cf14 0010 5f2e 0014 a4fe 0013 0000 0000 00000b0 6577 7669 0000 0000 0000 0000 0000 0000 00000c0 0000 0000 0000 0000 0000 0000 2e31 2d32 00000d0 3636 3139 4336 4945 6e20 2069 6f6e 7469 00000e0 6469 6f6e 2043 6e67 7769 6965 2056 6365 00000f0 656e 6572 6566 2c52 0000 0000 0000 0000 0000100 0000 3100 322e 362d 3936 3631 4543 2049 0000110 696e 6e20 696f 6974 6e64 436f 6720 696e 0000120 6577 5669 6520 6e63 7265 6665 5265 002c 0000130 0000 0000 0000 7363 6465 0000 0000 0000 0000140 0000 0000 0000 0000 0000 0000 0000 0000 0000150 4742 7352 2d20 6520 6163 7370 7220 6f75 0000160 6f6c 2063 4742 2052 6c74 6175 6566 2044 0000170 2e31 2d32 3636 3139 2036 4543 2e49 0000 0000180 0000 0000 0000 0000 4200 5247 2073 202d 0000190 6365 7061 2073 7572 6c6f 636f 4220 5247 00001a0 7420 756c 6661 4465 3120 322e 362d 3936 00001b0 3631 4320 4945 002e 0000 0000 0000 7363 00001c0 6465 0000 0000 0000 0000 0000 0000 0000 00001d0 0000 0000 0000 0000 0000 0000 0000 0000 * 00001f0 6368 632e 6965 772e 7777 2f2f 703a 7474 0000200 Hexdump of first 512 bytes of the original file: 0000000 d8ff e0ff 1000 464a 4649 0100 0101 2c01 0000010 2c01 0000 e2ff 6d1c 4349 5f43 5250 464f 0000020 4c49 0045 0101 0000 5d1c 694c 6f6e 1002 0000030 0000 6e6d 7274 4752 2042 5958 205a ce07 0000040 0200 0900 0600 3100 0000 6361 7073 534d 0000050 5446 0000 0000 4549 2043 5273 4247 0000 0000060 0000 0000 0000 0000 0000 0000 d6f6 0100 0000070 0000 0000 2dd3 5048 2020 0000 0000 0000 0000080 0000 0000 0000 0000 0000 0000 0000 0000 * 00000a0 0000 0000 0000 0000 1100 7063 7472 0000 00000b0 5001 0000 3300 6564 6373 0000 8301 0000 00000c0 6c00 7477 7470 0000 ef01 0000 1400 6b62 00000d0 7470 0000 0302 0000 1400 5872 5a59 0000 00000e0 1702 0000 1400 5867 5a59 0000 2b02 0000 00000f0 1400 5862 5a59 0000 3f02 0000 1400 6d64 0000100 646e 0000 5302 0000 7000 6d64 6464 0000 0000110 c302 0000 8800 7576 6465 0000 4b03 0000 0000120 8600 6976 7765 0000 d103 0000 2400 756c 0000130 696d 0000 f503 0000 1400 656d 7361 0000 0000140 0904 0000 2400 6574 6863 0000 2d04 0000 0000150 0c00 5472 4352 0000 3904 0000 0c08 5467 0000160 4352 0000 450c 0000 0c08 5462 4352 0000 0000170 5114 0000 0c08 6574 7478 0000 0000 6f43 0000180 7970 6972 6867 2074 6328 2029 3931 3839 0000190 4820 7765 656c 7474 502d 6361 616b 6472 00001a0 4320 6d6f 6170 796e 6400 7365 0063 0000 00001b0 0000 0000 7312 4752 2042 4549 3643 3931 00001c0 3636 322d 312e 0000 0000 0000 0000 0000 00001d0 1200 5273 4247 4920 4345 3136 3639 2d36 00001e0 2e32 0031 0000 0000 0000 0000 0000 0000 00001f0 0000 0000 0000 0000 0000 0000 0000 0000 0000200 Some more details: 1. I am using C++ 2. The packet data is being stored as `std::vector` 3. I did change the byte order while reading the ack number and seq number from the packet using `ntohl` 4. I am not sure if I need to change the byte order for the data as well. I tried to reverse the data from each packet before combining them, even that did not work. Is there something I am missing? 
**EDIT** Original file: ![Original](http://i.stack.imgur.com/rlLU4.jpg) Recovered file: ![Recovered](http://i.stack.imgur.com/8blIG.jpg)"} {"_id": "21212", "title": "Physical effects of long term keyboard use- what does the science say and what factors affect it?", "text": "This question asks about the ergonomics of a particular keyboard for long programming hours, what I would like to know is about the ergonomics of using a keyboard in general. What are the most significant risks associated with it and how can they best be mitigated? Do the \"ergonomic\" keyboard designs make a difference and if so which design is most effective? If not do other factors such as wrist-rests, regular exercise or having a suitable height of chair or desk make a difference? Do you have any direct experience of problems deriving from keyboard use and if so how did you resolve them? Is there any good science on this and if so what does it indicate? Edited to add: Wikipedia suggests that there are no proven advantages to \"ergonomic\" keyboards, but their citation seems pretty old- is that still the current state of play?"} {"_id": "166888", "title": "How to setup the c++ rule of three in a virtual base class", "text": "I am trying to create a pure virtual base class (or simulated pure virtual) my goal: 1. User can't create instances of BaseClass. 2. Derived classes have to implement default constructor, copy constructor, copy assignment operator and destructor. My attempt: class Base { public: virtual ~Base() {}; /* some pure virtual functions */ private: Base() = default; Base(const Base& base) = default; Base& operator=(const Base& base) = default; } This gives some errors complaining that (for one) the copy constructor is private. But i don't want this mimicked constructor to be called. Can anyone give me the correct construction to do this if this is at all possible?"} {"_id": "21217", "title": "Which Internet places are popular for commercial Java product announcements?", "text": "For Java libraries (which I wrote in the past years) I found not so many places iin the Internet where developers could announce new releases. So I use paid advertising (banner campaigns) at the moment. Are there known and popular online forums or newsgroups which welcome advertising for products for the Java platform?"} {"_id": "116385", "title": "How to handle manipulation of data after a db record is written from outside my program", "text": "I have written a warehouse management web app. The application handles batch picking, warehouse routing, packing, and the final piece is handled by UPS worldship to \"ship\" the packages. Worldship will write a record to my postgres db after every shipment or void. I need a way to cleanly see that the record was written/deleted and then \"do stuff\". The easy answer is to have a program just monitor the database and when it sees a record written to do its thing but something is nagging at me that there is probably a better way to go about this that doesn't have a program polling the table and comparing it to what was there last. EDIT: How i was going to approach this... The table is setup with a status field. My program would on intervals read the table for any records that have a null status, if records are found \"do stuff\" then mark the status as complete. TIA"} {"_id": "99522", "title": "randomized management?", "text": "I saw an article a few months ago that explain a company's practice of randomly choosing management via lottery. 
The employee would manage his peers for some fixed amount of time until the next management raffle. The writer went on to explain in detail why this worked better than the traditional expert-manager model. I'm completely unable to remember where I read this, and unable to get useful results from Google. Does anyone recall seeing this? I'd really like to find it again. \\--buck"} {"_id": "116382", "title": "Shouldn't we count characters of code and comments instead of lines of code and comments?", "text": "Counting lines of code and comments is sometimes bogus, since most of what we write may be written in one or more lines, depending on column count limitations, screen size, style and so forth. Since the commonly used languages (say C, C++, C# and Java) are free-form, wouldn't it be more clever to count characters instead? **Edit:** I'm not considering LOC-oriented programming where coders try to artificially match requirements by adding irrelevant comments or using multiple lines where fewer would be enough (or the opposite). I'm interested in better metrics that would be independent of coding style, to be used by honest programmers."} {"_id": "167139", "title": "What makes for a good JIRA workflow with a software development team?", "text": "I am migrating my team from a snarl of poorly managed Excel documents, individual checklists, and personal emails used to manage our application issues and development tasks to a new JIRA project. My team and I are new to JIRA (and issue tracking software in general). My team is skeptical of the transition at best, so I am also trying not to scare them off by introducing something overly complex at the start. I understand one of JIRA's strengths to be the customized workflows that can be created for a project. I've looked over the JIRA documentation and a number of tutorials, and am comfortable with the how in creating workflows, but I need some contextual what to go along with it. * _What makes a particular workflow work well?_ * _What does a poorly designed workflow look like?_ * _What are the benefits/drawbacks of a strict workflow with very specific states and transitions compared to a looser workflow with fewer, more broadly defined states and transitions?_"} {"_id": "167134", "title": "Are there any specific workflows or design patterns that are commonly used to create large functional programming applications?", "text": "I have been exploring Clojure for a while now, although I haven't used it on any nontrivial projects. Basically, I have just been getting comfortable with the syntax and some of the idioms. Coming from an OOP background, with Clojure being the first functional language that I have looked very much into, I'm naturally not as comfortable with the functional way of doing things. That said, are there any specific workflows or design patterns that are common when creating large functional applications? I'd really like to start using functional programming \"for real\", but I'm afraid that with my current lack of expertise, it would result in an epic fail. The \"Gang of Four\" is such a standard for OO programmers, but is there anything similar that is more directed at the functional paradigm?
Most of the resources that I have found have great programming nuggets, but they don't step back to give a broader, more architectural look."} {"_id": "199612", "title": "Does JavaScript behavior depend only on the browser, or on browser + OS?", "text": "Generally for a JavaScript application, compatibility is mentioned in terms of the browser types and browser versions it supports. Frameworks/libraries like ExtJS also mention the browser versions they are compatible with. **Does this mean that if my JavaScript application can run on Google Chrome 10+, then any operating system where Google Chrome 10+ can be installed will be able to run my JavaScript application without any glitches?** Or does the type of operating system also have an effect on JavaScript execution? The reason behind asking this is to evaluate the scope of testing of JavaScript applications before making any commitment to the end user. Thanks in advance for any guidance provided."} {"_id": "167133", "title": "How do most sync programs monitor file changes?", "text": "Do sync programs like Dropbox typically track file changes by doing byte-by-byte comparisons, or using hashes, or using `diff` / keeping local commit logs like version control, or what?"} {"_id": "199611", "title": "Idea of the binary search main theorem", "text": "This link really provided some insight into the implications of binary search for optimization problems by giving the main theorem. I am not really confident that I get the idea of the main theorem suggested there. Are there any easy explanations for the ideas expressed in that tutorial?"} {"_id": "183645", "title": "Pair programming when driver and observer have different skill levels and experience", "text": "I know pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer, reviews each line of code as it is typed in. But I just wonder whether the strategy still works in cases like these. For example: * if they have very different programming skill levels. * if one has no experience in the problem domain while the other has. * Is it still OK if they both have a low programming skill level? Could you suggest a pair programming strategy for the cases above?"} {"_id": "197509", "title": "Which design pattern would be best for this case?", "text": "I have a class, called PolicyProvider, at present with the following (abridged) interface: public interface IPolicyProvider { List<Policy> GetRenewalPolicies(Client client, int financialYear); } The purpose is quite obvious - every year, the insurance renews for the client, and they will get a number of new policies. The thing is, the policies change year by year. Therefore, a concrete implementation of the class needs to be changed every year (or amended with lots of ugly little \"ifs\" evaluating the financial year). I can see that over time this will become horrendous. In this case, what design pattern is most appropriate?"} {"_id": "157161", "title": "What do you say in a code review when the other person built an overcomplicated solution?", "text": "The other day I reviewed code someone on my team wrote. The solution wasn't fully functional, and the design was way overcomplicated -- meaning it stored unnecessary information, built unnecessary features, and basically the code had lots of unnecessary complexity like gold plating, and it tried to solve problems that do not exist.
In this situation I ask \"why was it done this way?\" The answer is the other person felt like doing it that way. Then I ask if any of these features were part of the project spec, or if they have any use to the end user, or if any of the extra data would be presented to the end user. The answer is no. So then I suggest that he delete all the unnecessary complexity. The answer I usually get is \"well, it's already done\". My view is that it is not done: it's buggy, it doesn't do what the users want, and the maintenance cost will be higher than if it were done in the simpler way I suggested. An equivalent scenario is: a colleague spends 8 hours refactoring code by hand which could have been automatically done in ReSharper in 10 seconds. Naturally I don't trust the refactoring by hand, as it is of dubious quality and not fully tested. Again the response I get is \"well, it's already done.\" What is an appropriate response to this attitude?"} {"_id": "197501", "title": "DDD - Domain Object calling a web service", "text": "Is it OK to call a web service from a domain object? As I write the question I am thinking that you should never do that, as it is poor design, but the situation is the following: I have a domain object called Postman that works very closely with a Message object. This Message is provided by a web service. Without the Message, the Postman object could not do its business logic. I understand that this code doesn't smell good but, since the Postman depends on the Message, it seems logical to call the web service in order to get the Message. There is no other way to get the Message without the web service."} {"_id": "197502", "title": "Can I use select2 in my website?", "text": "I am working on a project that deals in customer support. This website actually sells products (so it is commercial), and provides customer support. So in this project/website I was about to use select2. Select2 License: > This software is licensed under the Apache License, Version 2.0 (the \"Apache > License\") or the GNU General Public License version 2 (the \"GPL License\"). > You may choose either license to govern your use of this software only upon > the condition that you accept all of the terms of either the Apache License > or the GPL License. Can I use this in the website? Ask me if any other info is also required."} {"_id": "197504", "title": "What is the best way to construct your CSS documents?", "text": "Normally when I start a project working on a site, I do the basic HTML skeleton and then start tweaking the styles using CSS. Slowly I just add one CSS rule after another with no \"organized structure\" or groupings in the CSS rules. And at the end of the project I look at the CSS document and the abomination I have created, often spending quite some time trying to search for the spot in the document where I added a certain style. Is there some type of 'rule of thumb' that one can use in laying out and getting more organized in your CSS documents? And I'm not just talking about adding comments..."} {"_id": "190737", "title": "How do I make a cloud-based web app accessible internally in the event of an internet outage?", "text": "I have a Java web application backed by a database. Both are hosted in Amazon EC2. If the Internet is down, I need to allow internal users to be able to continue to work and somehow update the hosted service when the Internet is available again. Is this possible?
How would I design such a solution?"} {"_id": "116038", "title": "Why is there usually a reference to Java when people talk about C#?", "text": "I don't know much about C# but I've been programming in Java for a few months now. I had always heard what C and C++ were, but was a little curious about where C# came from. I'm a little confused about the C# language since, even though it's one of the 'C' languages (C, C++, C#), all of my professors seem to line it up against Java, and not the others. If anyone can clarify that would be great :)"} {"_id": "116036", "title": "BitBucket - what's the catch? (TANSTAAFL, right?)", "text": "The BitBucket capabilities, and pricing (up to 5 users for free) left me wondering what the catch was... there is no such thing as a free lunch, after all. What is BitBucket getting out of my unpaid participation, besides a handsome man to freely include in their promotional materials (per the TOS)?"} {"_id": "94620", "title": "How to hire a good C# developer if I don't know C#?", "text": "I'm a C++ developer. I know how Windows works on the native level, but I'm not a big expert in C# and .NET. Now I need a C# developer in my team (all my developers are C++). How can I hire a great C# developer if I don't know C# at a good level? How do I ask questions, and how do I test whether answers are great or contain silly mistakes?"} {"_id": "116032", "title": "Besides Waterfall, what are other plan-driven software development methodologies?", "text": "I just read Balancing Agility and Discipline. Poor title aside, it contrasted a plan-driven project team that was employing PSP/TSP and an agile team using Extreme Programming. When the authors provided an example of a plan-driven methodology, they used the Personal Software Process/Team Software Process. Although, out-of-the-box, these are plan-driven methodologies, they are also designed to be used as process frameworks and ultimately only specify what types of things to do and not how to do them, which makes them potentially useful even in an agile environment. It's possible to be agile and still adhere to the PSP principles, and I'm not familiar enough with the TSP to say for sure, but my understanding is that it is very similar. At one point in the book, they list a number of methodologies and rank them in terms of agility. Methods like Scrum, Lean, Crystal, and XP are at the top. The bottom (from most to least agile) consists of the Rational Unified Process, the Team Software Process, Feature-Driven Development, CMMI, Software CMM, the Personal Software Process, and Cleanroom. Watts Humphrey, in PSP: A Self-Improvement Process for Software Engineers, dedicates a chapter to process definition, and specifically to modifying the Personal Software Process. The common theme is that processes are prescriptive (they say what to do) and not descriptive (how to do it). I would surmise that the TSP is very much the same way. CMMI has also been used in conjunction with agile methods, and the SEI has a book on it (which I have not yet read). Feature-Driven Development is often touted as an agile approach to project management, yet the authors choose to rank it as a less agile methodology. RUP is an iterative framework. Although I'm not incredibly familiar with it, the fact that it's a framework leads me to group it with SW-CMM, CMMI, and PSP/TSP in that it could be implemented either as an agile or as a plan-driven methodology. The only other example that the book provides that I agree with is Cleanroom Software Engineering.
The key components of Cleanroom are the use of formal methods, statistical quality control, and statistically-sound testing. I don't see why these couldn't be used in an agile (iterative/incremental) method, albeit with added time and cost overhead. Just to clarify what I'm looking for, the family of agile methods includes specific implementations of an abstract idea in the form of Scrum and Extreme Programming. These realize the concepts of iterative and incremental development, responding to change, people (individuals and teams), frequent delivery of working software, collaborating with the customer, and so forth. They clearly specify roles, artifacts, meetings, timeboxes, and other practices, and to "do Scrum" or "do Extreme Programming" means to take the package. Even so, they allow for tailorability and the creation of new processes (but then you aren't "doing Scrum" or "doing XP"). However, I haven't found the "do X" of plan-driven methodologies - most of the work appears to be toward frameworks that could be agile or plan-driven. So, my question: What are examples of more plan-driven software development methodologies? A number of the process frameworks (PSP/TSP, SW-CMM, CMMI, RUP) allow for plan-driven or agile development as well, but none are descriptive. But are there any truly plan-driven methodologies that are, for example, direct counterparts to Scrum and Extreme Programming?"} {"_id": "116031", "title": "How do you organize an ASP.NET MVC 3 application with potentially hundreds of views but with only a few entry points?", "text": "**Assumptions:** * Minimalist ASP.NET MVC 3 application for sending emails where the view represents the contents of an email. * Over 500+ email types. I would NOT like to have 500+ actions in my controller corresponding to each email type. * Email types are stored in an enum named MailType, so we could have: * MailType.ThankYouForYourPurchase, MailType.OrderShipped, etc. * The view name is the same as the mailType name: * MailType.OrderShipped would have a corresponding view: OrderShipped.cshtml * Some views would directly use an Entity while others would use a ViewModel. So, given that I have 500+ email types, what is the best way/pattern to organize my application? Here is what I was thinking. **Controller:** public class MailController : Controller { public ActionResult ViewEmail(MailType mailType, int customerId) { string viewName = mailType.ToString(); var model = _mailRepository.GetViewModel(mailType, customerId); return View(viewName, model); } public ActionResult SendEmail(MailType mailType, int customerId) { ... } } **MailRepository Class:** public class MailRepository { private readonly CustomerRepository _customerRepository; private readonly OrderRepository _orderRepository; //pretend we're using dependency injection public MailRepository() { _customerRepository = new CustomerRepository(); _orderRepository = new OrderRepository(); } public object GetViewModel(MailType mailType, int customerId) { switch (mailType) { case MailType.OrderShipped: return OrderShipped(customerId); case MailType.ThankYouForYourPurchase: return ThankYouForYourPurchase(customerId); } return _customerRepository.Get(customerId); } public Order OrderShipped(int customerId) { //Possibly 30 lines to build up the model...
return _orderRepository.GetByCustomerId(customerId); } public Customer ThankYouForYourPurchase(int customerId) { return _customerRepository.Get(customerId); } } But then this would lead to my MailRepository class becoming extremely large unless I somehow broke it up..."} {"_id": "148496", "title": "Tool that can do semantic search in a body of C code", "text": "I'm looking for a tool that can do semantic search in a body of C code. Example query: "give me all references to field y in struct x defined in file z.h". I would prefer an open source, command line driven tool. C++ support is an advantage. Is there such a tool other than cscope? cscope doesn't preserve the type of tags. In hostapd for example there are more than 900 references to the tag "ifname". However, I'm only interested in the ifname field of a specific struct. cscope can't filter tags according to type."} {"_id": "102205", "title": "Should UTF-16 be considered harmful?", "text": "I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside of the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: * \ud834\udd1e (U+1D11E) _MUSICAL SYMBOL G CLEF_ * \ud835\udd65 (U+1D565) _MATHEMATICAL DOUBLE-STRUCK SMALL T_ * \ud835\udff6 (U+1D7F6) _MATHEMATICAL MONOSPACE DIGIT ZERO_ * \ud840\udc8a (U+2008A) _Han Character_ You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: * Opera has problems with editing them (delete required 2 presses on backspace) * Notepad can't deal with them correctly (delete required 2 presses on backspace) * File name editing in Windows dialogs is broken (delete required 2 presses on backspace) * All QT3 applications can't deal with them - they show _two_ empty squares instead of one symbol. * Python encodes such characters incorrectly when used directly: `u'X'!=unicode('X','utf-16')` on some platforms when X is a character outside of the BMP. * Python 2.5 unicodedata fails to get properties on such characters when Python is compiled with UTF-16 Unicode strings. * StackOverflow seems to remove these characters from the text if edited directly as Unicode characters (these characters are shown using HTML Unicode escapes). * WinForms TextBox may generate an invalid string when limited with MaxLength. It seems that such bugs are extremely easy to find in many applications that use UTF-16. So...
Do you think that UTF-16 should be considered harmful?"} {"_id": "90710", "title": "Clarification of pseudo random number generator", "text": "I am asked to create a pseudo-random number generator using the following algorithm: > The generator will generate every integer from `1` to `N-1` exactly once > > The algorithm for `N=2^n` > > * Initialize an integer `R` to be equal to `1` every time the tabling > routine is called and then on each successive call for a random number: > * set `R=R*5` > * mask out all but the low-order `n+2` bits of the product and place the > result in R > * set `p=R/4` > What does the algorithm mean when it says mask out all but the low order `n+2` bits of the product?"} {"_id": "53339", "title": "What's better than outputdebugstring for windows debugging?", "text": "So, before I came to my current place of employment, the Windows OutputDebugString function was completely unheard of; everyone was adding debug messages to string lists and saving them to file or doing showmessage popups (not very useful for debugging drawing issues). Now everybody (all 6 of us) is like "What can I say about this OutputDebugString?" and I'm like, "with much power comes much responsibility." I kind of feel as though I've passed a silent but deadly code smell to my colleagues. Ideally we wouldn't have bugs to debug, right? Ideally we'd have over 0% code coverage, eh? So as far as petty debugging is concerned (not complete rewriting of a 3 million line Delphi behemoth), what's a better way to debug running code than just adding OutputDebugString all over?"} {"_id": "146594", "title": "REST and redirecting the response", "text": "I'm developing a RESTful service. Here is a map of the current feature set: POST /api/document/file.jpg (creates the resource) GET /api/document/file.jpg (retrieves the resource) DELETE /api/document/file.jpg (removes the resource) So far, it does everything you might expect. I have a particular use case where I need to set up the browser to send a POST request using the multipart/form-data encoding for the document upload, but when it is completed I want to redirect them back to the form. I know how to do a redirect, but I'm not certain about how the client and server should negotiate this behavior. Two approaches I'm considering: 1. On the server check for the `multipart/form-data` encoding and, if present, redirect to the referrer when the request is complete. 2. Add a service URI of `/api/document/file.jpg/redirect` to redirect to the referrer when the request is complete. I looked into setting an X header (X-myapp-redirect) but you can't tell the browser which headers to use like this. I manage the code for both the client and the server side so I'm flexible on solutions here. Is there a best practice to follow here?"} {"_id": "168378", "title": "How far should an entity take care of its properties values by itself?", "text": "Let's consider the following example of a class, which is an entity that I'm using through Entity Framework. - InvoiceHeader - BilledAmount (property, decimal) - PaidAmount (property, decimal) - Balance (property, decimal) I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here: 1. Updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters) 2. Having an UpdateBalance() method that the callers would run on the object when appropriate (both options are sketched below).
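A minimal sketch of both options on a simplified InvoiceHeader, assuming Balance is simply BilledAmount minus PaidAmount (the actual calculation in the question may differ):

```csharp
// A minimal sketch; property and method names mirror the question, the
// Balance formula is an assumption for illustration.
public class InvoiceHeader
{
    private decimal _billedAmount;
    private decimal _paidAmount;

    // Option 1: recalculate in the setters, so Balance is never stale.
    public decimal BilledAmount
    {
        get { return _billedAmount; }
        set { _billedAmount = value; UpdateBalance(); }
    }

    public decimal PaidAmount
    {
        get { return _paidAmount; }
        set { _paidAmount = value; UpdateBalance(); }
    }

    // Persisted column, as described in the question.
    public decimal Balance { get; private set; }

    // Option 2: expose this and let callers invoke it when appropriate.
    public void UpdateBalance()
    {
        Balance = BilledAmount - PaidAmount;
    }
}
```

Note that with option 1, an ORM that materializes entities through the public setters would recompute Balance on load, which is exactly the mass-update concern described next.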
I am aware that I can just calculate the Balance in its getter. However, that isn't really possible, because this is an entity field that needs to be saved back to the database, where it has an actual column to which the calculated amount should be persisted. My other worry about the automatically updating approach is that the calculated values might be a little bit different from what was originally saved to the database, due to rounding (an older version of the software used floats, but it now uses decimals). So, loading, let's say, 2000 entities from the database could change their state and make the ORM believe that they have changed, so they would be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. It probably would be fine for just one record, but if a lot of entities were fetched at once, it could create a significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them, just to recalculate the balances. So far, I have gone with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if that was right. I'm noticing I have to keep calling these methods quite often and at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data-binding, because when the properties of the detail or header change, the other properties are left out of date and the method never gets called. What is the recommended approach in this case?"} {"_id": "147380", "title": "Where does Objective-C come from? C++ or C?", "text": "I am very confused about this programming language, Objective-C, which I heard is used to develop iOS applications. I know that it uses the principles of OOP. Would it be easier to learn if I already knew C++? What about its name? Is it a combination of the C programming language and the OOP principles I use in C++?"} {"_id": "108063", "title": "Is there a fundamental reason for keeping file systems and versioning systems separate?", "text": "I've done some work looking at Next3 and ext3cow, and a few other file systems that provide snapshots and/or file versioning. It seems a bit strange to me that file systems only go part-way with version control, though. Are there **fundamental** reasons for keeping the two separate?
How challenging would it be to, for example, build a version of the ext3 file system which includes a git-like interface for managing file versions?"} {"_id": "168372", "title": "In dependency injection, is there a simple name for the counterpart of the injected object?", "text": "In tutorials and books, I have never seen a single word describing the object that the injected object is injected into. Instead, other terms are used, like "injection point", which don't denote the object containing the injected object. And nothing I can think of sounds right, except maybe "injection target" - but I have never read it anywhere. Is there a single word or a simple expression for it, or is it like the "He-Who-Must-Not-Be-Named" from a recent fantasy book series?"} {"_id": "240740", "title": "A small project but I want to use design patterns to do it right", "text": "I've got a project coming up, a very small system, but one that needs to be extended in the future. Here's how I've designed it so far. It's 3-tier: presentation, business and data. For the presentation layer there will be ASP.NET webforms with user controls (.ascx). There are .NET validator controls on the usercontrols. In the business layer there will be a domain model, probably an Active Record-based object for _each_ type of business entity (e.g. class Trainee, class Course etc). These objects will contain the relevant data (user input or DB) in properties but also have behaviour such as CalculateCost(), Validate() and Save(). When a page passes input to a business object for processing, the object will validate the input. Any validation failures will be stored within a collection within the object. If the data is valid, the business object's Save() method can be called. Save() will call a Data Mapper which transforms the business object's data to a DTO; the Data Mapper then passes the DTO to a Table Data Gateway. The Table Data Gateway takes the DTO and performs CRUD operations with it. Any data to be returned from the DB is transformed into a DTO, passed to the mapper, and then returned to the relevant business object. Would you recommend the above approach for any future small TDD projects that are likely to be extended, or is it overkill? Any advice is appreciated. Even if it is to say "I have no idea what you're talking about" :)"} {"_id": "104919", "title": "Daily Scrum Meeting (Burndown chart)", "text": "Here is an example: early in the morning of the second day of the Sprint (during the stand up) I go to the board and see that the story I worked on the previous day (first day of Sprint) contains a big "1 IDEAL DAY" written on it (estimate). Right now (early in the second day of Sprint) the story is not completed and I guesstimate it will take half a "REAL DAY" to complete it. Question: so to track progress and update the burndown right now, shouldn't I update that "1 IDEAL DAY" on the card with _something else_ (recap: original estimate in 1 ideal day, remaining work in 1/2 real day)? What would be that _something else_ in this particular example?"} {"_id": "99594", "title": "Fast Learning: Is it bad?", "text": "As many of you must have noticed, learning to program is not an overnight thing; it takes years of hard work (I really should refer here to this wonderful article by Peter Norvig).
But there's a lot you can do to make impressively fast progress: you can use online tutorials, websites like projecteuler or CodingBat, Stack Overflow help, or any other quick resources that give you the ability to actually get something done... You may not need all the very basic things that you would get if you went to school or learned in some formal way. They might be skipped, and if nothing goes wrong, your work will be done without dealing with them (I'm not talking about making a kernel or some huge thing, but things like making a website, a program to sort files, basic database management scripting, or anything else that may actually get you hired or bring in money somehow)... So, the question is: how far can you get by fast learning? What are the benefits of all the things that are taught in schools about programming (and here I mean the theoretical stuff about how things work internally, the philosophy of programming techniques, and abstract articles, not the practical part)? And is it possible for a person to keep going and progressing this way, or does he have to switch to the slow and steady way if he wishes to progress? Thank you. Any personal experience or link to a personal experience will be highly appreciated... EDIT: and an important question: if you think fast learning makes a person a low- or medium-level programmer, but is not going to take him to the high or architect level, how can he catch up on the principles or theory he missed on the go, while he is progressing? Something like a top-down approach of learning practically and then digging deeper into the principles he missed -- how can that be achieved?"} {"_id": "99593", "title": "Is exception handling a cross-cutting concern?", "text": "I don't see much of a difference between the concerns of exception handling and logging, in that both are cross-cutting concerns. What do you think? Shouldn't exception handling be done separately, on its own, rather than interleaved with the core logic a method is implementing? **EDIT**: What I am trying to say is that in my opinion a method implementation should only contain the logic for the successful path of execution, and exceptions should be handled elsewhere. This is not about checked/unchecked exceptions. For example, a language might handle exceptions in a fully checked way by using constructs like this: class FileReader { public String readFile(String path) { // implement the reading logic, avoid exception handling } } handler FileReader { handle String readFile(String path) { when (IOException joe) { // somehow access the FileInputStream and close it } } } In the above conceptual language, the program won't compile in the absence of the `FileReader` _handler_, because the `FileReader` _class_'s readFile does not handle the exception. So by declaring the `FileReader` _handler_, the compiler can ensure that it is being handled and the program then compiles. This way we get the best of both checked and unchecked exceptions: robustness and readability."} {"_id": "115609", "title": "What happens when a company says/pretends that it uses a version control system (or any other tool/methodology), but doesn't?", "text": "This question might look like a mix of Is it unusual for a small company (15 developers) not to use managed source/version control? and How to convince a teammate, who sees oneself as senior, to learn SVN conceptual basics?
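Returning to the exception-handling question above: short of language support like the hypothetical handler construct, the separation can be approximated with a wrapper that takes the happy path and the handling policy as separate arguments. A minimal C# sketch, where the Guarded/Run names are illustrative assumptions:

```csharp
// A sketch of pulling the handling policy out of the core logic; the
// names here are invented for illustration, not an established API.
using System;
using System.IO;

public static class Guarded
{
    public static T Run<T>(Func<T> core, Func<IOException, T> onIoError)
    {
        try
        {
            return core(); // the happy path lives in the core delegate
        }
        catch (IOException ex)
        {
            return onIoError(ex); // the policy lives outside the core logic
        }
    }
}

public class FileReaderExample
{
    public string ReadFile(string path)
    {
        return Guarded.Run(
            () => File.ReadAllText(path),
            ex => string.Empty); // handler supplied separately, as the question proposes
    }
}
```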
My interest is purely academical, since the company mentioned here does not exist any more, but it keeps popping up in discussions, so I would really like to hear the community's thoughts on how to behave in such a situation. It has happened to me twice, with the most recent example being the more interesting: I visited a company for an interview and was asked about subversion experience. After that I got into an extensive Q&A on related concepts such as branching/merging, access rights etc. At the end I got the job. From my first day there, I saw that everyone was using a shared drive where they coded. Because subversion was used as a backup, people had to commit from their local machines to a windows shared drive, which sometimes made each commit last more than 30 minutes (I do not know the internals of why), and locks or other issues were sometimes encountered. There was a template project from which we did a "branch". No merges anywhere. So when I confronted my line manager about it, I was told that we have an svn installation and that we use it properly; obviously we did not. So what is the appropriate action for a developer/employee in such case(s)? * Nothing; accept it as a difference in perspective * Leave * Try to educate the team"} {"_id": "143864", "title": "Java Application for handling records (CRUD)", "text": "I am new to JavaEE and am faced with a tight situation here. I have to develop a Java application for handling records (CRUD) and saving and loading an XML concerning each record. Obviously, I won't be asking you to do this for me. What I would ask of you is to give me some hints/pointers. Initially I thought JAXB would be enough for this, but after putting a lot of time into learning it and implementing the program I realized that it can just create the XML and read it, but for update and delete I would have to do something else. Even if update and delete features weren't requirements for my project, I would still think that just using JAXB is not a good implementation. I was wondering if "REST with Java (JAX-RS) using Jersey" should do the trick for me?"} {"_id": "186071", "title": "What's the point of those detailed technical interview questions for a senior dev?", "text": "I've had an internal promotion interview to gain a higher-level programmer title, something like Senior plus. I was interviewed by around 7 people using different technologies, and the people using the same technology as me (.NET) tended to ask very detailed technical questions whose answers can easily be found via Google, like what is the JIT, how the GC works, the difference between List and Array, abstract class and interface, delegate and event, even what the class name is when you process an uploaded file, etc. And I got only one question about one of my design ideas in my project, and just a simple discussion. For most of my other design choices they seemed just not interested. I haven't got the result yet and it should come late this week, but here is my concern: I personally think that when I play the role of a senior programmer, I am mainly solving problems, and I just need to know that there is some certain way to make it happen, but I may not remember every detailed thing, and that should be why we have detailed reference documentation like MSDN.
I feel it is fine if you find I don't have much experience in some area at the junior to intermediate level, but when you interview a higher-level person, shouldn't you focus more on how that person's logical thinking works and how good he/she is at solving problems? Does everybody just think that if you know every small detailed tech thing, then you are a Senior+? And by checking those interview question books I found there are more of those kinds of questions. If I spent 5 days going through one of those interview question books I could easily make those guys go "wow", but does that really mean anything? This kind of interview can easily let people who are good at remembering things gain a higher salary even if they have no idea how to solve difficult problems. So why is this happening in the world? Is it just because problem-solving skills and design skills are hard to measure? I've served this company for years and have only had a few interviews with other companies, so I wonder: is every company doing the same thing? Or is this actually just my own problem, and I should try harder to remember everything in MSDN so that I can work even without it and the internet? **EDIT** To better explain my situation regarding Frank's concern about job tasks: sorry, it's my fault for not clarifying the background. There will actually be no specific changes to my job tasks. I personally think what I've done already plays as a senior: code review, mentoring members, reviewing the BA's docs and giving technical opinions, designing the architecture of new projects. It's just that my title stayed without "senior", and I asked to get one to reflect it on my pay slip, which led to such an interview. This is a SaaS company, so people stay on one project as long as that product is still alive. This means they need people who are able to design new features based on the current product, fix technical difficulties on live servers, and do design/code review and mentoring of members. The higher-level title based on technical skill would be Architect, and we do not have any job similar to Technical Expert. And I agree that if you want to play the Technical Expert role, you should know more detail about the tech you use."} {"_id": "148720", "title": "Is this awkward spacing some type of style?", "text": "In reading another programmer's code, he uses a format I have never seen. E.g. namespace MyNs.HereWeAre {//tab here for some reason public class SomeClass {//here's another tab public string Method() {//yet another tab string _variable = "";//no tab implementation return _variable; } }//eof - class (Yes these eof comments are on every file) }//eof - namespace // eof - file I'll admit...this code makes me angry. It is difficult to achieve this style. One needs to fight the default formatting provided by the IDE, Visual Studio. I could swallow this pill a little easier if I knew that there was a good reason for this style. Does this style stem from some other programming language/IDE?"} {"_id": "137020", "title": "Encapsulating a class within another class to hide its exposed properties and details", "text": "I have essentially a data model class that represents an XML data structure that we use to model our system. The model class is in a shared project that is used by a number of different solutions in order to generate a common XML data structure for usage by any number of applications. In one application I am involved in, we have provided a wrapper class for one of these XML model classes as we wish to perform a number of operations on the model data.
However, other parts of my program also need to access this data for calculations they need to do. At the moment I have this setup (note this is an example only, but it represents my code structure): 1) public class ModelWrapper { private readonly ModelClass _model; public int PropertyX { get { return _model.PropertyX; } } public ModelWrapper(ModelClass model) { _model = model; } // and other methods we need that use the properties we have exposed e.g public void PerformOperation() { if(this.PropertyX > 10) // do something else // do something else } } This appears ok, but I've run into the problem where I or another programmer has unintentionally used the private _model variable. It seems to me that we could end up with hidden issues here, in that there are two ways to get to the same data, and if we put some logic in the publicly exposed get property then we could miss it if we directly reference the private variable. The other alternative I thought of was 2) public class ModelWrapper { public readonly int PropertyX; public ModelWrapper(ModelClass model) { PropertyX = model.PropertyX; // set other properties here we wish to expose on the ModelClass ... } // and other methods we need that use the properties we have exposed e.g public void PerformOperation() { if(this.PropertyX > 10) // do something else // do something else } } Or another option after reading some posts about this is: 3) public class ModelWrapper { private readonly int _propertyX; public int PropertyX { get { return _propertyX; } } public ModelWrapper(ModelClass model) { _propertyX = model.PropertyX; } // and other methods we need that use the properties we have exposed e.g public void PerformOperation() { if(this.PropertyX > 10) // do something else // do something else } } Why do this? I thought it would be a good idea to hide the model from our code, as we wanted a wrapper class anyway to hold the methods, so why then expose the model as well? I thought it might be overkill, but it does save other parts of the code going something like **ModelWrapper.Model.PropertyX** etc. So, to the question: which method, if any, would be the better solution? Is it overkill, or are there better practices out there for this? EDIT: I added another option (3) as there is no need to set the properties I am exposing, so I didn't necessarily want to have a private set as it's only being set in the constructor. Hence the readonly private variable that is used in the get part of the exposed property. I've also found a stack overflow question about this at http://stackoverflow.com/questions/2249980/c-immutability-and-public-readonly-fields but it doesn't necessarily say which is the better/preferred solution with the current C# limitations. Any ideas on the 3 implementations as to the best practice?"} {"_id": "148722", "title": "Rails solution for mobile-specific content filter?", "text": "To note, I'm not interested in simply 'hiding' content for mobile devices, I want to filter out that content completely. I'm also not trying to address the issue by building a mobile-specific interface (mob.example.com). There was another question regarding something similar: How do I prevent useless content load on the page in responsive design? The solution, in that post, was to set a session during the initial request, and then use the session to filter content on subsequent requests.
I primarily develop in Rails, and I'm wondering if there are any gems or ruby-specific solutions to this problem?"} {"_id": "137028", "title": "Business Objects within a Data Access Layer", "text": "So I've been creating a data access layer via TDD and have run into somewhat of a concern. I'd rather not start down the wrong path, so I figured I'd ask you guys to see if my thoughts were in line with a clean architecture. The methods within my Data Access Layer (DAL for short) are pretty simple. They are in line with the stored procedures in the database (no other way to call into it, to keep things clean), and they contain those same parameters that the procedures do. They then just connect to the database and return the query result. Here's one example: public int DeleteRecord(int recordId) { recordId.RequireThat("recordId").NotZeroOrLess(); List<SqlParameter> parameters = new List<SqlParameter>(); parameters.Add(new SqlParameter { ParameterName = "@RecordId", SqlDbType = SqlDbType.Int, Direction = ParameterDirection.Input, Value = recordId}); return this.ExecuteNonQuery("DeleteRecord", parameters.ToArray()); } This works perfectly for this type of method because I am not doing anything meaningful with the result set. I just want to make sure the command worked, so I will return the result of the non-query, which is just the rows affected, and I can verify the logic using that number. However, say in another DAL method, I want to load a record. My load procedure is going to be executing `selects` against a bunch of tables and returning a `DataSet`, but I am battling with whether my DAL should create the Business Objects within the method using the `DataSet`, _or_ if my Business Objects themselves should just have a `Load()` method that gets the `DataSet` from the DAL and then basically fill themselves in. Doing it through the DAL would result in less logic in the Business Objects (even though this is just select logic, it's still logic), but would crowd the DAL a little bit and make it feel like it really is doing something that it shouldn't be doing. What do you guys think?"} {"_id": "132210", "title": "Difference between Controller and Dispatcher in MVC for web frameworks?", "text": "In MVC applied to WSGI or Java EE, is the Servlet a controller, dispatcher, or both? I think I've seen system diagrams where the controller and the dispatcher are different. Could the controller control SQL statements which should not be in the dispatcher? Thank you"} {"_id": "142408", "title": "Advice on whether to use scripting, run time compile or something else", "text": "I work in the production area at my workplace and I design and create the software to run our automated test equipment for testing our products. Every time I get involved with a new machine I end up with a different and (hopefully) better design. Anyway, I have come to the point where I feel I need to start standardizing all the machines with the same program. I see a problem when it comes to applying updates, as at the moment the test procedures are hard coded into the program at each station. I need to be able to update the core program without affecting the testing section. The way I see it, this will mean splitting the program into 2 sections. 1. Main UI - This is the core that talks to everything on the machine such as cameras, sensors, printer etc and is a standalone application. 2. Test Procedure - These are the steps that are executed every time the machine runs through a test. (One way to load and run such a procedure is sketched below.)
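One way to realize the split described above is to compile each test procedure into its own assembly against a shared interface and have the main UI load it by reflection; Roslyn scripting would be another route for source-level procedures. A minimal sketch, where ITestProcedure, IMachineServices and the loading details are illustrative assumptions:

```csharp
// A hedged sketch of a plugin-style test procedure; the interface names
// and the DLL-per-procedure layout are assumptions, not the original design.
using System;
using System.Linq;
using System.Reflection;

public interface IMachineServices
{
    // Exposes the UI/core methods (sensors, cameras, printer) a procedure needs.
    double ReadSensor(string name);
}

public interface ITestProcedure
{
    void Run(IMachineServices machine);
}

public static class ProcedureLoader
{
    public static ITestProcedure Load(string dllPath)
    {
        Assembly assembly = Assembly.LoadFrom(dllPath);
        Type type = assembly.GetTypes()
            .First(t => typeof(ITestProcedure).IsAssignableFrom(t) && !t.IsAbstract);
        return (ITestProcedure)Activator.CreateInstance(type);
    }
}
```

Updating a station's test procedure then means replacing one DLL, while the core program that implements IMachineServices ships separately.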
The main UI will load the test procedure and execute it whenever a test is required. My question is: what is the best approach to this, in terms of having an application load a file and execute the code within? Take into account that the code in the test procedure will need access to public methods on the UI/core system to communicate with sensors etc. I have heard about MS Roslyn and had a quick look; would this solve my issue?"} {"_id": "136776", "title": "How can I document someone else's past work?", "text": "We're in a bad situation of having very little documentation on customizations our past workers made to a business-critical system. Lots of changes were done to Crystal Reports, database entities, and proprietary configuration/programming files for our ERP software. The current documentation generally reads something like this: > This program is run before invoicing. Known bugs: none. > > Run this program after installing software X. > > Changed the following fields in this report: (with no explanation of how or > why) Our IT shop is small, and in the case of the ERP software, most work was lumped on one person (that's me now) so no one else here knows what all we did. The IT and accounting departments know bits and pieces (occasionally quite helpful ones) but it's not enough. Another problem is our Accounting department seems to _think_ we're well documented. It's true that we kept lots of records of what went _wrong_, but very little explains what (if anything) was done to fix these problems. We have hundreds of papers explaining bugs, but the documents explaining changes (as shown above) are almost useless. **How can I go about documenting past changes when I don't know what all was done?** I can start with documenting _what_ we've changed: files, database tables etc. which we need to have for the system to work. I can also document what we _do_: when reports are run, why people were told to use X report/program. But when one of these customized things has a problem, I'm always back to square one. How can I proactively document this stuff for myself and others?"} {"_id": "142405", "title": "DAO/Webservice Consumption in Web Application", "text": "I am currently working on converting a "legacy" web-based (Coldfusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, and additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs with business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business wide that will be a complete replacement. This newer ERP system offers several interfaces for third party applications like mine, from direct SQL access to either a dotNet web-service or a SOAP-like web-service. I have found several suitable frameworks I would be happy to use (Coldspring, FW/1) but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts: 1. Firstly I have concerns with moving from the relative safety of an SSIS job that protects me from downtime and speed of the ERP system to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second).
Are there any design patterns I can investigate/use to cache/protect my data tier? 2. It is my understanding that data access objects (the component that connects directly with the web services and converts them into the data types I can then work with in my Domain Objects) should be singletons (and will act as an Adapter/Facade), am I correct? 3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this) which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application, do I set up a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits), do I set the "connection" up per page request, or is there a design pattern I am missing that can manage multiple "connections" where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web-service I use I might need to manually close the "connection/session""} {"_id": "136775", "title": "Should I include multiply license header notice in dual license software?", "text": "Is it necessary for software released under both LGPL3 and GPL3 terms to include both license header notices at the top of each file? Or should I just put the GPL header notice, or simply put the LGPL3 license text in some file like COPYING.LESSER? EDITED: I am the copyright holder of the project!"} {"_id": "222247", "title": "Why isn't exponentiation hardware-implemented?", "text": "Why is there no exponentiation operation in hardware, even though many languages have built-in operators for it? Is it because even hardware implementations would need to use the same algorithm as software (i.e. no hardware implementation could be significantly more efficient), or because it is rarely used, or another reason?"} {"_id": "106451", "title": "Lists & Collections in MVVM - which approach to take?", "text": "I'm currently working on a Silverlight app using Caliburn.Micro. At present, we have Views (eg: `PeopleView`) and View Models (eg: `PeopleViewModel`) that equate to 'pages' of the application. `PeopleView` might contain a `ListBox` ("`People`") which is bound to an `ObservableCollection` of `Person` objects, and has an `ItemTemplate` assigned to denote how each `Person` object should be displayed. However, one of my colleagues has begun to implement a list in another way, where each `Person` is a View Model (ie: `PersonViewModel`) and has an associated `PersonView` to determine how that `PersonViewModel` should be displayed in the `ListBox`. The latter seems more MVVM (or at least has more mention of V and VM!) but I'm not sure whether there's a particularly large advantage to doing one over the other. Are both of these ways valid? Is either better than the other?"} {"_id": "48708", "title": "What issues carry the highest risk in a software project?", "text": "Clearly, software projects are different from those in other industries in many ways: for instance, quality assurance and project progress measurement. The unique characteristics of software projects also make the risk management process unique. Lots of issues in a project might lead it to unacceptable delays or failure to deliver business value. They might even make a complete disaster of the project.
What are the deadliest risk factors in a software project? How do you analyze, prevent, and handle them? Particularly, I'm interested in the issues that you can detect from the beginning and should keep an eye on (for example, you might be told about a third-party API that the current application uses and that lacks documentation). Please share your experiences if they are relevant."} {"_id": "245583", "title": "Use of malloc in C", "text": "Is it necessary to call the free function every time we use malloc in C? I am asking this because I have seen many times that it is not called. Thank you"} {"_id": "216429", "title": "Is it better to have constructors with or without parameters?", "text": "Is it better to have constructors with or without parameters and why? public NewClass(String a, String b, int c) throws IOException { //something } _OR_ public NewClass() { //something }"} {"_id": "216428", "title": "Testing a codebase with sequential cohesion", "text": "I've got this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to implement TDD to continue the development and have found a nice C unit framework for this. However I'm totally stuck on how to implement it. Take this case for example: A user types a letter 'l' that is captured by ncurses getch(), and then an sqlite3 query is run that for every row calls a callback function. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is as expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI so that the callback function will populate a list of entries and that list will later be printed. In that case I would be able to check if that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted in some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? Using my original design I could just do another query sorted differently. Previously I've only done TDD with command line applications, and there it's really easy to just compare the output with what I expected. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?"} {"_id": "137798", "title": "WCF Data Services (OData) Vs ASP.NET Web API? Hypermedia?", "text": "I'm designing a distributed application that will consist of REST services and a variety of clients (Silverlight, iOS, Windows Phone 7, etc). I was ready to decide that I would implement my REST services using WCF Data Services (OData) but now the MVC 4 Web API has made me question that decision. What I liked about OData was the URI querying and hypermedia capabilities you get for free. What I disliked was the verbosity of the OData payload; lots of unnecessary characters coming over the wire. What I like about the Web API is that the payloads are much more concise and it has the URI querying capability of OData, however it seems to be lacking hypermedia (out of the box, at least).
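Hand-rolling links on the response DTOs is the usual workaround when hypermedia isn't generated for you; a minimal Web API-style sketch, where the Link and OrderDto shapes are illustrative assumptions rather than framework features:

```csharp
// A sketch of hand-rolled hypermedia on an ASP.NET Web API controller;
// the DTO shapes and URI strings are assumptions for illustration.
using System.Collections.Generic;
using System.Web.Http;

public class Link
{
    public string Rel { get; set; }
    public string Href { get; set; }
}

public class OrderDto
{
    public int Id { get; set; }
    public List<Link> Links { get; set; }
}

public class OrdersController : ApiController
{
    public OrderDto Get(int id)
    {
        return new OrderDto
        {
            Id = id,
            Links = new List<Link>
            {
                new Link { Rel = "self",   Href = "/api/orders/" + id },
                new Link { Rel = "cancel", Href = "/api/orders/" + id + "/cancel" }
            }
        };
    }
}
```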
My boss is also pushing for the Web API because "the powers that be at Microsoft are backing it and OData hasn't been getting traction." So I have two questions: 1) Can anyone comment on the backing/traction of the Web API and OData? 2) Is the Web API expected to natively support hypermedia by release time, or are there any off-the-shelf implementations or examples I should look into? Thanks!"} {"_id": "137793", "title": "Scheduling Algorithm for Scheduling Life", "text": "I am trying to write some code to schedule a set of real life tasks that are input by the user. These tasks are stored in an sqlite database. And at the moment, the only parameters I am taking into consideration are: `The project to which a task belongs --> p` `The name of the task itself --> t` `And the due date for this task --> d` The `project` and `due date` parameters are optional. But assuming that the user will always input at least the `task name` and `due date` for every task, I was wondering if it is possible to schedule the set of tasks using a scheduler like the `Completely Fair Scheduler (CFS)`, for example. I realize that the CFS was written for scheduling tasks with much finer granularity (nanoseconds) than the set of tasks being proposed for this purpose... But I realized that it might be possible, and maybe more efficient, if I can modify it to work with tasks that are on the same time scale as our perception of time. A typical entry in the database would be in the format (p, t, d). 'p' is optional. Here are a few examples: `(_, 'Call home', 29/2/2012)` `(Work, 'Meet boss', 14/3/2012)` `(Work, 'Ask for raise', 18/3/2012)` `(_, 'Book tickets', 10/3/2012)` `(Work, 'Quit', 14/4/2012)` `(Personal, 'Get botox injections', 10/3/2012)` `(Personal, 'Get breast implants', 10/10/2012)` `(_, 'Dad bday', 7/10/2012)` Here is a situation to consider. I would like to wake up in the morning, run this "yet to be coded" algorithm on the set of tasks, like the ones given above, and receive a schedule for the rest of the day that maximizes throughput. At a later stage, I would like to pass arguments to this algorithm that would allow me to control the scheduler to return a set of tasks depending on my current situation. Like if I am at work, I want to be able to pass arguments to the algorithm to ask it to only return tasks that can be completed at work. I hope I am able to convey the gist of it. I understand that the `due date` alone is not sufficient to schedule tasks using the CFS, for example... but if there are other parameters that I should consider, please do let me know. And any suggestions for the kind of scheduling algorithm to employ would be helpful. Thanks."} {"_id": "216422", "title": "Is creating a separate pool for each individual png image in the same class appropriate?", "text": "I'm still possibly a little green about object-pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3): [Embed(source = "/../assets/images/game/misc/red_door.png")] private const RED_DOOR:Class; private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR()); private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true); . . . public function produceRedDoor():Image { // get a Red Door image } public function retireRedDoor(pImage:Image):void { // retire a Red Door Image } Except that there are four colors: red, green, blue, and yellow.
So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally there are several items in the game that follow this 4-color pattern, so for each of them, we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so trying to throw all the doors, for instance, in a single pool and then changing their color properties around isn't going to work. Also, the nonexistence of the static keyword is due to its slowness in AS3. Is this the right way to do things? * * * **EDIT:** One individual piece of the puzzle is whether I should be trying to throw all of these pools into the same class, with brand new functions and everything for each and every pool. That is somewhat related to another question I asked recently. Even though the accepted answer of that question didn't necessarily address this particular scenario, getting more classes, variables, abstractions, etc. is going to create more overhead, and when you're trying to do something like object pooling, I'm not sure whether neatening up the class design like that is going to introduce "too much" overhead. This is only one individual part of what I'm asking though, as the question is ultimately much more general."} {"_id": "216425", "title": "Advice on designing a robust program to handle a large library of meta-information & programs", "text": "So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right. **Brief Explanation** My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like). If you don't want to read the details, you can skip the rest. **Long Explanation** Specifically I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to this new tag). This program must be fast. It must be easily able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag.
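For the tagging model just described, a centralized store such as SQLite can represent "dirty" as the absence of a row. A minimal sketch in C# using Microsoft.Data.Sqlite, with illustrative table and column names (the original design may differ):

```csharp
// A hedged sketch of a centralized-metadata schema: a missing image_tags
// row means "dirty" (unknown), while a stored row records the boolean flag
// once the image has been viewed. All names here are assumptions.
using Microsoft.Data.Sqlite;

class SchemaSetup
{
    static void Main()
    {
        using (var connection = new SqliteConnection("Data Source=library.db"))
        {
            connection.Open();
            var command = connection.CreateCommand();
            command.CommandText =
                @"CREATE TABLE IF NOT EXISTS images (
                      id   INTEGER PRIMARY KEY,
                      path TEXT NOT NULL UNIQUE,
                      hash TEXT NOT NULL         -- for duplicate detection
                  );
                  CREATE TABLE IF NOT EXISTS tags (
                      id   INTEGER PRIMARY KEY,
                      name TEXT NOT NULL UNIQUE
                  );
                  CREATE TABLE IF NOT EXISTS image_tags (
                      image_id INTEGER NOT NULL REFERENCES images(id),
                      tag_id   INTEGER NOT NULL REFERENCES tags(id),
                      value    INTEGER NOT NULL,  -- 0 or 1 once known
                      PRIMARY KEY (image_id, tag_id)
                  );";
            command.ExecuteNonQuery();
        }
    }
}
```

Images that are dirty with respect to a tag are then simply those with no image_tags row for that tag, which keeps both the tag queries and the dirty queries cheap, and introducing a new tag costs one row in tags rather than a rewrite of every image's metadata.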
It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (>10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices: **Choice 1**: Individual meta-data vs centralized meta-data **Individual Meta-Data**: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. **Centralized Meta-Data**: Have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient"} {"_id": "137794", "title": "Offshoring a Software Project -- Conflict Resolution", "text": "I had been tasked with managing a project which was outsourced to some Ukrainian developers. The company hired them through Elance at a **fixed price**. At that point my boss left me _alone_ to handle them and get the work done. I created a detailed specification of the complete thing that needed to be done. The project involved dealing with such things as XMPP, RabbitMQ, and databases. In my first meeting with them (always IM) I explained **thoroughly** what they needed to do. They seemed to understand it -- and they were very confident that it would be done easily. So far so good. But after one week, when we met again, they were full of misunderstandings about what needed to be done. When I asked one of the developers if he knew XMPP, he said he was working with it for the first time. At our first meeting I'd very specifically mentioned the complexity of the project and the technologies involved. Plus, I had repeatedly asked them to write a functional specification of exactly HOW they would do it. But they said NO, and insisted that they would rather write the code. I said OK. The project was completed after 3 weeks and they delivered what was needed. At that point I started to review the code. It was okay for the most part, but there were some important problems: * they hard-coded some of the things that needed to be separated out into a config file * there were multiple config files that needed to be consolidated into one * they wrote absolutely NO documentation * some other minor changes I asked them to make these changes (except documentation) -- and we had an argument. They said, since the price was fixed, I was being unfair in asking them to make any changes once they completed the working code. That they had worked for an unreasonable amount of time on the project and now it was completely wrong to ask for anything. Finally, they have now made the changes, and the project is over. But it leaves some questions in my mind... * They did what was needed but I needed it **properly done**, and hence the changes. Was I really unfair? * Why did I agree to let them code without having a functional specification? * Why did I not make sure that they understood everything the first time? Does anyone find themselves in the same position? Do you think there is a better way to manage outsourced projects?
_**\\-- UPDATE --_** Thanks for all the opinions -- after reflecting upon the entire experience, I can conclude... * Although I wasn't vague in the specifications from my side, I certainly didn't make them _ironclad_ as suggested. So the takeaway is: always be as specific as possible -- read your specs from their perspective too and see if you missed something. Repeat it at least three times. * Just specifying what the code should do is not enough. You must specify what the code is supposed to look like. What the directory structure will be; even the file names if possible. This will save you a lot of annoyance later. Strictly specify the coding guidelines, variable naming conventions, internal documentation format, etc. See to it that they abide by those guidelines, and if not, scream. * Demand a functional specification from their side -- insist that it be written before any code. This will get a lot of confusion and misunderstanding out of the way. * Review the code as it is being developed so that you identify anomalies early and get them corrected. Talk to them at least once every other day. * Lastly, try to build a good rapport with them. Make them feel that you appreciate their work. Don't push them too hard to fit your guidelines -- instead request that they do so and tell them that it would make maintaining the code so much easier for you once they complete the project."} {"_id": "170573", "title": "Long lines of text in source code", "text": "> **Possible Duplicate:** > Is the 80 character limit still relevant in times of widescreen monitors? I used to have a vertical line set at 80 characters in my text editor, and I added carriage returns if the lines got too long. I later increased the value to 135 characters. I then started using word wrap without giving myself a limit, though I tried to keep lines short if I could, because shortening my lines took a lot of time. People at work use word wrap and don't give themselves a limit. Is this the correct way? What are you meant to do? Many thanks."} {"_id": "79798", "title": "System analysis at the beginning of the project", "text": "When I want to develop a new application, first I design a UML model and specify the project's details and definition. But when I start the development process, I determine that I should change some parts of my idea in order to make the software more popular, the logical steps easier for my users, the process simpler, or something else. Then I change my code, redesign some parts of my UML, and start/continue the development process once again, sure that this time I have a perfect project definition and UML -- but after a while (this time it takes longer) I determine again that I should change something! So I go back and change my UML (or sometimes not -- I just continue the project without changing the UML), and so on. This process happens over and over until I become tired, find (almost) the best state of the project, or run into time limitation problems! So my question is: * _'Can I design a perfect UML model at the beginning of my project, one that describes the best state ever?'_ or _\\- 'Should I swear to God never ever to change my UML (and project definition), even if there is a better one, and write the current state to the end!'_ Another question: \"Is changing some parts of a new idea, even if analyzed at the hands of the best analyst in the world, inevitable (in order to find the best state)?\" I mean, we can't fully simulate user experience, can we?
We should see it in action."} {"_id": "198171", "title": "Choosing how to approach Geocoding Requests", "text": "I am about to begin writing a program in c# that will read addresses from a source file, create a geocoding request, send it to the Google Maps API, get the response, choose the coordinates from the XML, and then store them in a database. My question is what the proper type of source file is, considering performance and easy implementation. I have the option to supply the source addresses as a txt file or an xml file. The number of addresses that need to get geolocated is about 100,000, which is a big number, so what is the proper approach to handling this kind of request? Should I provide the addresses source file as txt or xml or something else?"} {"_id": "196362", "title": "How to keep unit tests independent?", "text": "I've read in many places that unit tests should be independent. In my case I have a class that does data transformation. These steps must be done sequentially, otherwise they don't make sense. For example: load_data, parse_float, label, normalize, add_bias, save_data. For example, I cannot normalize the data before I label it, because then the values have changed and I don't have access to the original values of my data anymore. Or I cannot save the data before I actually loaded it. So my test class looks something like this:

    from unittest import TestCase

    class TestTransform(TestCase):
        def setUp(self):
            self.trans = Transform()

        def test_main(self):
            self.load()
            self.parse_float()
            self.label()

        def load(self):
            self.trans.load()
            assert ...

        def parse_float(self):
            self.trans.parse_float()
            assert ...

In this case, my unit tests clearly depend on each other, but I can't see how else I could do it. As an alternative, I could write something like this:

    def test_normalize(self):
        # setup
        self.trans.load()
        self.trans.parse_float()
        self.trans.label()
        # test begins here
        self.trans.normalize()
        assert ...

But in this case, I run a lot of code multiple times, which is inefficient and makes my tests run a lot longer. So does the best practice of keeping unit tests independent apply in this case or not?"} {"_id": "196361", "title": "Best way to create draw with limitation", "text": "I'm writing a program to automatically make the draw for a competition. There are four objects: `Debate` `Judge` `School` `Team` Each Debate has two teams and a judge. Each team participates in three debates. With this, there are the following rules: 1) A team cannot face someone from their same school 2) A team cannot face another team from the same other school (as in, if they play a team from a school once, they cannot play again against someone from that school) 3) A judge cannot come from the same school as a team they are judging. So how would I create a draw? A draw looks like a table (I just need the data, but when you write it out it looks like this) with a column for judges and then three other columns for each round of debates (the three debates each team participates in). Right now I'm basically choosing a random team, finding an opposing team that hasn't played the school of the first team before and doesn't come from the same school, and then finding a judge from neither of those schools. Then I do the same thing until I have all the debates. The problem is the program sometimes gets into ruts where there is no other team/judge that fits. A human would then shift things around and try to find a way to move other judges around to figure it out, but how can I do that with a program?
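The programmatic version of \"shift things around\" is backtracking: instead of giving up when no judge fits, undo the last pairing and try the next option. A minimal sketch in Python for a single round; the (name, school) tuple shapes and the played_schools mapping are illustrative assumptions, not from the question:

    import random

    def pair_round(teams, judges, played_schools):
        """Assign (team_a, team_b, judge) triples for one round, or return
        None if no legal assignment exists. teams/judges are (name, school)
        pairs; played_schools maps a team name to schools already faced."""
        teams = random.sample(teams, len(teams))      # shuffled copies, so
        judges = random.sample(judges, len(judges))   # reruns differ

        def extend(remaining, free_judges, acc):
            if not remaining:
                return acc
            a = remaining[0]
            for b in remaining[1:]:
                if (b[1] == a[1]                        # rule 1: same school
                        or b[1] in played_schools[a[0]] # rule 2: school faced
                        or a[1] in played_schools[b[0]]):
                    continue
                for j in free_judges:
                    if j[1] in (a[1], b[1]):            # rule 3: judge's school
                        continue
                    rest = [t for t in remaining if t not in (a, b)]
                    found = extend(rest, [x for x in free_judges if x != j],
                                   acc + [(a, b, j)])
                    if found is not None:
                        return found
                # dead end: fall through, i.e. backtrack to a different pairing
            return None

        return extend(teams, judges, [])

Calling this once per round and updating played_schools in between gives you, inside a single run, the same effect as re-running the random program until it succeeds.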
If I run the program again it figures it out, just because it's random which teams it chooses for what. Basically, I'm wondering what the best solution to this problem is?"} {"_id": "7927", "title": "How to reduce the number of bugs when coding?", "text": "No one's perfect, and no matter what we do, we are going to produce code that has bugs in it from time to time. What are some methods/techniques for reducing the number of bugs you produce, both when writing new software and changing/maintaining existing code?"} {"_id": "60231", "title": "How to be more logical? (less bugs/errors)", "text": "I have been programming for 6 years and I am in high school (I prefer not to disclose my age). I have dabbled in many different languages. Just to list a few: Java, PHP, C++, Python, Autohotkey, Mathematica and many more. I have a very good understanding of the basics of general programming. The only problem is I _still_ create bugs all the time -- too often, I think. Do you have any hints _besides continuing_ to program that will help me become a better programmer and make fewer errors?"} {"_id": "128023", "title": "How to make code writing more accurate?", "text": "I am trying to practice writing code for a long period of time before compiling, and writing unit tests (if possible) for what I wrote (the language is C++). Of course, I have IDE support (Emacs or Eclipse for on-the-fly error detection and code completion) with minimal or no errors. I practice this ability because the \"trial and error\" pattern, in which a bit of code is written and then built/run to check its correctness, is not productive, as I feel it. However, isn't what I am doing counter to the purpose of testing? Maybe the point of my practice is not to assure 100% logic correctness, but rather 100% or close to 100% syntax correctness, in order to vastly improve productivity. I will try to practice with small pieces of code first, and then improve gradually as time passes. What's your opinion on code writing and productivity in general? Do you practice like I do or do you have a much better method? (Aside from being in the \"flow\", or extreme concentration.) I am concerned with this matter because I have heard that some people are actual human compilers, and it seems like they can directly translate logic to code as their natural language. **Edit:** I was a bit wrong when I mentioned not improving logical correctness. Yes, logical correctness counts as well. What I want is to practice programming to the point where I can keep writing code for a long period of time with minimal mistakes (such that I can write 1000 lines of code and only a few trivial errors occur). In the case of C++ or any language, using external libraries requires you to understand many dependencies of the libraries to use them correctly in your code. For example, if you use the Spring framework in Java, you have to make sure the xml, the server etc... are set up correctly. I consider this also a form of syntax, and one way to achieve \"syntax correctness\" is to practice it frequently, to the point where you can minimize the mistakes written by hand. This is similar to when we first start learning programming: sometimes/often we miss the semicolon. Then we learn from the mistake and improve."} {"_id": "72611", "title": "How do you stop yourself from making mistakes?", "text": "I used to pride myself on the high quality of the code I delivered. Today I made a mistake that wiped the grin off my face.
It was a null reference exception caused by a hasty fix to an edge case that was caught during testing. I didn't have proper unit tests and I didn't regression test after making the fix. So my question is, what do you do to force yourself to follow best practices? I know you should write tests (unit, integration, acceptance, etc.) and run them to validate any changes. But when the pressure's on and you're rushing to put out a fire, how do you keep yourself focused and sticking to those best practices? Physical exercise? Religious TDD?"} {"_id": "196369", "title": "How can I learn to like C++?", "text": "I'm a wimpy web programmer by trade -- I enjoy programming in JavaScript, CoffeeScript, TypeScript, and Ruby. I have to program in C++ for my computer science degree, and it frustrates me. I don't much like the syntax and I feel that it's more verbose than it could be. I think I've roared \"I hate C++\" at least once per CS project. I don't need to _love_ C++, but I wonder if someone who adores the language can tell me what I can do to ease my pain. C++11 and Syntastic for Vim have certainly helped, but I'm wondering if there's something more than that -- a specific tool or an abstract philosophy."} {"_id": "63786", "title": "Have you tried Usability Audit for your company's website?", "text": "We had a security audit and it was brilliant. Are there companies that do web usability audits?"} {"_id": "109334", "title": "Are there any studies of cross-functional teams vs. domain-based teams (e.g. project-based vs. software/mechanics/etc)?", "text": "I work in an organization which creates many integrated systems products - i.e. complete products whose mechanics/systems/electronics/software are designed and manufactured together. At the moment most teams are organized around projects in a cross-functional way. The advantage of an organization like this is that people who are working closely together for a common goal are close. The disadvantages come from the isolation of engineers from their peers. Typically a project is assigned only one software engineer. This means that the projects have a high truck factor, knowledge sharing and best practices are minimal, and technical development is limited. So my question is: are there any studies comparing the costs/benefits of these two approaches?"} {"_id": "73830", "title": "Is it legal to disassemble a Microsoft dll and post the result on my blog?", "text": "I'm writing an article for my blog about undocumented functions which exist in the _dwmapi.dll_ library. I want to post the result of the disassembled code to explain how the names and parameters of these functions are obtained, something like the article from this blog. **This is only for educational purposes** to show a couple of samples using these undocumented functions. So the question is: _Can I post the disassembled code of this library on my blog?_ **UPDATE :** There exist a couple of applications like Aura which use these undocumented functions (DwmGetColorizationParameters, DwmSetColorizationParameters); obviously the authors at some point disassembled the _dwmapi.dll_ file in order to get the parameter and function names. But they (the authors) only publish the final source code to access these functions. This makes me think that _I can disassemble in private a Microsoft dll and then publish an application or source code based on this research_.
Is this correct?"} {"_id": "194739", "title": "Why do some object oriented languages let the programmer use primitive types?", "text": "Why do some object oriented languages let the programmer use primitive data types? Aren't classes like Integer, Boolean, etc. enough?"} {"_id": "73838", "title": "Job in other country - relocation package", "text": "I have the opportunity to work in an 'overseas' country. I have passed the technical interviews and I am at the 'offer' stage. Everything is ok, except for the fact that I am expected to pay for my plane ticket, which is almost 2000 EUR (an approx. 8000 km distance). I've discussed this with the possible future employer and they said that the plane ticket is my expense. What I'm asking you is: Shouldn't they pay for the plane ticket? Or give me a relocation package? Would you accept this kind of offer? What relocation package would you have asked for if you were in my shoes? PS: The destination country is Singapore. I am a Delphi Developer with 5 years of experience in SD, and overall 8 years in IT."} {"_id": "194730", "title": "Good Version Control Guidelines from a Development/Collaboration Perspective?", "text": "At our company we have started outsourcing some of our development. This has worked somewhat well. However, we are having a hard time getting them to properly use version control. They are familiar with SVN and know how to use it. However, for some reason they don't commit regularly; instead they work on 16 things simultaneously and make a huge commit every 2 weeks, if we are lucky maybe with a few comments. This makes it very difficult to follow and review their work, collaborate, and also fix bugs. I have tried to explain to them to do their work as one small task at a time and regularly commit each of these with appropriate comments, without much success; either they don't understand the concept or they are lazy. My question is, what would be good guidelines describing how one should work with version control, not from a technical perspective (they know SVN), which seems to be the only thing I'm finding online, but from a development/collaboration/project perspective?"} {"_id": "209896", "title": "call a function and never wait for it in C#", "text": "I have a controller in my mvc4 web application in which there is an action that needs to call another function. What happens in that function, i.e. the return value, is not important for my action. How can I call that function and never wait for it to be executed? I think it can be done with async, but my point is not to use resources: just call the function and never wait for it, whatever happens. Please give me some advice."} {"_id": "139360", "title": "Looking for a digital developer book service", "text": "I just started managing a small team of developers for my company and I would like to give them access to a library of books that would help them improve their skills. Before I buy a bookshelf's worth of books I was wondering if anyone was aware of any services that would fill this need. Something that anyone with an account could log into and get access to the latest books on programming and software development. Ideally, something like a shared kindle account, but I would be happy to hear about other experiences or ideas that would also help fit my need.
Or even if you think I am dreaming and should just shell out some money for the books."} {"_id": "78777", "title": "Where should I include comments in my \"self-documenting code\"?", "text": "I'm currently developing a web-app by myself and have made it a point to use descriptive variable and method names (sometimes at the expense of brevity) in order to minimize commenting. The plan was to code each method using this strategy, then comment after I've completed the method I was currently working on. However, I've found that for (more or less) all of the methods I've completed, in-line comments seemed superfluous. I still follow Javadoc conventions for classes and methods, but as of now my code is, for the most part, completely devoid of any in-line comments. Fortunately, I'm still relatively early in the development process and all of the methods' workings are fresh in my head, should a situation arise where I need to write in-line comments. Is this a good strategy? If not, where should I include in-line comments? I've included one of my methods below in order to illustrate how self-documenting it currently is. (I'm using Pelops, which is a Java library used to access a Cassandra database. Without going too deep into Cassandra's data model, a row basically corresponds to a relational database tuple, and a column corresponds to a column in a relational database table.) I'd like to think that even without knowledge of Pelops or Cassandra, one will be able to understand what is going on in the code. Is this the case? If not, where can I insert comments to make it crystal clear? I hope to generalize any suggestions on comment placement in this code to all the other methods I've written so far.

    /**
     * Gathers a user's ID and first name from the database, creates a random
     * activation code for that user and stores it in the database, and sends
     * an activation e-mail to that user.
     * @param eMail a String representation of the e-mail of the user that the e-mail is being sent for
     * @throws MyAppConnectionException if any of the queries is unable to be executed or a problem arises because of one, or if a problem arises while sending the e-mail
     */
    public static void generateAndSendActivation(String eMail) throws MyAppConnectionException, MyAppActivationException {
        eMail = eMail.toLowerCase();
        try {
            Selector userIDSelector = Pelops.createSelector(pool);
            Column userIDColumn = userIDSelector.getColumnFromRow(\"Users_By_Email\", eMail, \"User_ID\", ConsistencyLevel.ONE);
            String userIDString = new String(userIDColumn.getValue());
            Selector firstNameAndStatusSelector = Pelops.createSelector(pool);
            SlicePredicate firstNameAndStatus = Selector.newColumnsPredicate(\"First_Name\", \"Status\");
            List firstNameAndStatusColumns = firstNameAndStatusSelector.getColumnsFromRow(\"Users\", userIDString, firstNameAndStatus, ConsistencyLevel.ONE);
            char statusChar = Selector.getColumnValue(firstNameAndStatusColumns, \"Status\").toChar();
            if (statusChar == 'N') {
                String firstNameString = Selector.getColumnStringValue(firstNameAndStatusColumns, \"First_Name\");
                String activationCode = createRandomCode();
                String activationHash = BCrypt.hashpw(activationCode, BCrypt.gensalt());
                Mutator storeActivationHashMutator = Pelops.createMutator(pool);
                storeActivationHashMutator.writeColumn(\"Users\", userIDString, storeActivationHashMutator.newColumn(\"Activation_Code\", activationHash));
                storeActivationHashMutator.execute(ConsistencyLevel.ONE);
                sendEmail(firstNameString, userIDString, eMail, activationCode, \"sendActivationCode\");
            } else {
                throw new MyAppActivationException(\"User with ID \" + userIDString + \" tried to activate an already activated account\");
            }
        } catch (NotFoundException nfe) {
            MyAppQueryException bqe = new MyAppQueryException(\"Account for user with e-mail \" + eMail + \" not found. Cannot send activation code\", nfe);
            bqe.accountDoesNotExist = true;
            throw bqe;
        } catch (PelopsException pe) {
            MyAppQueryException be = new MyAppQueryException(\"Unable to carry out one of the operations required to generate and store activation code\", pe);
            throw be;
        } catch (MyAppMailException bme) {
            throw bme;
        }
    }"} {"_id": "114957", "title": "Does directly accessing an application's database break the license agreement?", "text": "We have purchased a fairly light-weight application that uses a database on SQL Server for its back-end. This application has no API. There is a small web-based feature we need to add that can easily be added by connecting to the database directly. **Generally speaking, and all issues relating to future schema changes aside, is this typically allowed by the software license?** A co-worker made the argument that in the spirit of the software license, we are reverse-engineering their code. I hold the belief that it is our data in the database that we would be accessing and potentially modifying; no license is required for the actual data. In addition, since we still pay for a license to use the software, that means we also have a license to use its schema, as the schema is part of the software. It is also my understanding that it is common practice (at least, everywhere I have worked) to access an application's database directly for reporting purposes. Which is considered correct? I am interested in input from both a legal standpoint, and an industry commonly understood standpoint. _I also understand that this will vary from license to license.
I am looking for a general answer that will apply to most software. Thank you._ * * * **Edit:** Perhaps another way to look at it... What is generally considered acceptable? Throw out the license agreement, since the legal question cannot be properly answered without the license. **As a developer, would you be unhappy with your users accessing the database directly?**"} {"_id": "114953", "title": "What's the best Java equivalent to Linq?", "text": "Are there any libraries in Java that come close to providing the functionality of Linq?"} {"_id": "114951", "title": "How can I get my startup working with Agile development?", "text": "In our startup, we've worked until now using the traditional waterfall model, but we want to try our next project using Agile methodology. We are pretty much alien to the entire Agile process, so considering this, what is the best way to understand Agile (resources and a practical handbook can be of help), decide on a specific methodology (Scrum, etc.), and start it in our next project?"} {"_id": "222866", "title": "Which methods should be put in an interface and which in abstract classes?", "text": "I have seen many frameworks and modules, and the standard they follow is like this: 1. `UserInterface`, which has some predefined methods 2. `AbstractUserClass`, which implements `UserInterface` 3. Then `GenericUserClass`, which extends from `AbstractUserClass` 4. Then other classes extending from that generic one Now I have seen that the abstract class has functions additional to the interface, and the generic class also has additional functions. 1. So I am confused about which methods should go where 2. Sometimes I see `class A extends AbstractUserClass` and sometimes `class A extends AbstractUserClass implements UserInterface`. What is the difference if `AbstractUserClass` already implements `UserInterface`?"} {"_id": "79422", "title": "Which professional positions combine Computer Science and Marketing?", "text": "I would like to find a job that combines Computer Science with Marketing, ideally in a position where you get to meet a lot of people and travel a fair bit. After having worked as a Front/Back Web App Developer I am getting a degree in Management of Technology, which also includes an element of Marketing. How can I combine my two interests, Computer Science and Marketing, professionally, in a sensible way? Which positions should I be aiming for?"} {"_id": "91970", "title": "What is a buffered write scheme?", "text": "I received this response from Jeff while researching hit counters. He said page hits are incremented in a `buffered write scheme`, and that a views table does not exist. Can you explain what a buffered write scheme is, please? I would appreciate answers not too heavy with technical jargon. In particular I'm interested in how such a scheme can be implemented to track page hits. I'm curious how hit data gets persisted, since it is needed to prevent users from simply refreshing a page to increment page hits."} {"_id": "5232", "title": "Am I a bad programmer, or does everyone have this feeling?", "text": "I tend to understand things rather quickly, but after 2 years of programming in Python I still stumble across things (like Flask today) that amaze me. I look at the code, have no idea what's going on, and then feel very humbled. I feel like an absolute expert each time this happens, up until the moment it happens. Then, for about a 2 week period I feel like an absolute beginner.
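For the buffered write scheme asked about above, the usual shape is: increment counters in memory and flush the accumulated deltas to the database in batches, so a page view costs no database round-trip. A minimal sketch in Python -- the class name and flush interval are illustrative, and the actual database write is indicated only as a comment:

    import threading
    import time
    from collections import Counter

    class BufferedHitCounter:
        """Sketch of a buffered write scheme: hits accumulate in memory and a
        background thread periodically writes the deltas out in one batch."""
        def __init__(self, flush_interval=5.0):
            self._pending = Counter()
            self._lock = threading.Lock()
            threading.Thread(target=self._flush_loop,
                             args=(flush_interval,), daemon=True).start()

        def hit(self, page_id):
            with self._lock:
                self._pending[page_id] += 1   # cheap: no database access here

        def _flush_loop(self, interval):
            while True:
                time.sleep(interval)
                with self._lock:
                    batch, self._pending = self._pending, Counter()
                for page_id, delta in batch.items():
                    # e.g. UPDATE pages SET hits = hits + delta WHERE id = page_id
                    pass

The trade-off is that a crash loses at most one interval's worth of counts, which is usually acceptable for view counters; refresh protection would be a separate seen-recently check in front of hit().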
Does this often happen, or does it indicate that I have so much more to learn before I can even be considered a \"good\" programmer?"} {"_id": "222860", "title": "Nested Classes or Namespace", "text": "Why do we need namespaces when we have nested classes? Whatever can be done through namespaces can also be achieved through nested classes, so I don't understand the reasoning behind having namespaces."} {"_id": "144128", "title": "How to separate sensitive data in database(MySql)", "text": "I need to design a database that will contain information about users' personal diseases. What could the approach be for implementing the columns of the DB's tables: encrypt the information, separate the data into two different DBs (one for sensitive data and another for non-sensitive data), both, or another approach?"} {"_id": "67852", "title": "Do you also forget the code after getting the task done?", "text": "I'm a new programmer and want to ask senior programmers (programmers who have some experience in the real world). I do my work, and after coding my project gets completed, but honestly speaking I don't remember the code, the class and framework names, or their properties. Sometimes I even doubt myself: did I make this? Is this normal for all programmers, or am I the silliest programmer, who couldn't remember the code and class/property names? **Edit:** I think many programmers are getting me wrong here. I said I forget framework names, class names, and property names, but I start remembering my own code once I start working on it again. My question is: do you remember syntax and class/property etc. names?"} {"_id": "131746", "title": "How to correct a junior, but encourage him to think for himself?", "text": "I am the lead of a small team where everyone has less than a year of software development experience. I wouldn't by any means call myself a software guru, but I have learned a few things in the few years that I've been writing software. When we do code reviews I do a fair bit of teaching and correcting mistakes. I will say things like \"This is overly complex and convoluted, and here's why,\" or \"What do you think about moving this method into a separate class?\" I am extra careful to communicate that if they have questions or dissenting opinions, that's ok and we need to discuss. Every time I correct someone, I ask \"What do you think?\" or something similar. However they rarely if ever disagree or ask why. And lately I've been noticing more blatant signs that they are blindly agreeing with my statements and not forming opinions of their own. I need a team who can learn to do things right autonomously, not just follow instructions. How does one correct a junior developer, but still encourage him to think for himself? Edit: Here's an example of one of these obvious signs that they're not forming their own opinions: > Me: I like your idea of creating an extension method, but I don't like how > you passed a large complex lambda as a parameter. The lambda forces others > to know too much about the method's implementation. > > Junior (after misunderstanding me): Yes, I totally agree. We should not use > extension methods here because they force other developers to know too much > about the implementation. There was a misunderstanding, and that has been dealt with. But there was not even an OUNCE of logic in his statement!
He thought he was regurgitating my logic back to me, thinking it would make sense when really he had no clue why he was saying it."} {"_id": "255954", "title": "Selecting the main version of a linux library", "text": "I am looking to use the cracklib library for testing password security in my C/C++ linux applications. But I cannot find the original author, nor the official code that hasn't been forked."} {"_id": "224161", "title": "GPL - Writing an exception for a plugin interface under section 7 of GPLv3", "text": "## The problem I've written an application for Android licensed under GPLv3 which needs to use Google Play Services, a proprietary library, as a plugin to the app. Now I'd also like to add libspotify as a plugin. For all of the proprietary libraries used I'm providing alternative open source plugins, but they are not as good as the proprietary ones, not even close. The user should then be free to disable all proprietary parts either at compile time or at runtime. ## Possible solutions The GPLv3 has a section \"7. Additional Terms.\" under which you can add a \"classpath\" exception. I'd like to use this exception to do the following. The idea of a classpath exception is explained in the GPLv3 FAQ here: * Linking over a controlled interface: http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface * http://www.gnu.org/licenses/gpl-faq.html#GPLIncompatibleLibs ## The basic idea Implement these interfaces:

    interface Plugin {}
    interface ProprietaryPlugin extends Plugin {}
    interface XYZ_1 extends Plugin {
        public void someHook();
        public void someOtherHook();
        ...
    }
    interface XYZ_2 extends Plugin {
        public void someHook();
        public void someOtherHook();
        ...
    }
    XYZ_3, 4, ...

The above interfaces are part of the project and are under the GPL. But I'd like to let all code that extends some subinterface of Plugin, except ProprietaryPlugin, be allowed to be proprietary, given that it marks itself as ProprietaryPlugin. My intention is to make tight plugin hooks that all proprietary plugins must follow, and a way to indicate to the user which plugins we can remove at runtime. Also, I want the \"bridge\" code, the implementation of the plugin interfaces, to be open source and under a GPLv3-compatible license. And the source code must provide at least one GPLv3 implementation, hopefully creating a concrete wall between the GPLv3 and proprietary code. ## Visual explanation ![visual explanation with UML and licenses](http://i.stack.imgur.com/NmkP5.png) Is this possible to do? I know that most guys here are not lawyers, but I find asking developers first enlightening. ## A suggestion from irc.gnu.org#gnu The guys at irc.gnu.org#gnu gave me the following suggestion: https://raw.github.com/qcad/qcad/master/gpl-3.0-exceptions.txt It states that: As a special exception, the copyright holders of QCAD hereby grant permission for non-GPL compatible plug-ins and script add-ons to be used and distributed together with QCAD, provided that you also meet the terms and conditions of the licenses of those plug-ins and script add-ons. Is \"non-GPL compatible plug-ins and script add-ons\" vague / a loophole, or is it clear and does it apply to my case?
## A second suggestion I got a second suggestion at #gnu: http://roundcube.net/license/ It states that: This file forms part of the Roundcube Webmail Software for which the following exception is added: Plug-ins and Skins which merely make function calls to the Roundcube Webmail Software, and for that purpose include it by reference shall not be considered modifications of the software. If you wish to use this file in another project or create a modified version that will not be part of the Roundcube Webmail Software, you may remove the exception above and use this source code under the original version of the license. What do you think about this one? ## My attempt at a \"classpath exception\" My first idea was to add the classpath exception to Plugin with something like the following... does it accomplish my intent or is it filled with loopholes, etc.?

    Additional permission under GNU GPL version 3 section 7

    0. Definitions

    \"FSF\" means the Free Software Foundation, Inc.

    \"Compatible License\" means any license the FSF considers to be compatible with the GNU GPL version 3.

    \"Plugin interface\" means any interface definition file in the Corresponding Source containing - in the file's header - the words \"The licensors of this Program designates this particular file as a Plugin interface\"

    \"Proprietary interface mark\" means any interface definition file in the Corresponding Source containing - in the file's header - the words \"The licensors of this Program designates this particular file as a Proprietary interface mark\"

    \"Allowed library\" means any library that is linked or combined with this Program, or any covered work that creates a derivative work of a subinterface of a Plugin interface not directly linking with the library. Such a subinterface provides at least the source code of one working implementation licensed under This License. The derivative work of the subinterface linking to the Allowed library implements a Proprietary interface mark. Such a derivative work is referred to as a \"Proprietary plugin implementation\". A Proprietary plugin implementation is licensed under a Compatible License.

    1. Exceptions

    If you modify this Program, or any covered work, by linking or combining it with any Allowed library (or a modified version of those libraries), containing parts covered by the terms of those libraries, the licensors of this Program grant you additional permission to convey the resulting work. {Corresponding Source for a non-source form of such a combination shall include the source code for the parts of any Proprietary plugin implementation used as well as that of the covered work.}

"} {"_id": "180026", "title": "What are tangible advantages to proper Unit Tests over Functional Tests called unit tests", "text": "A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to \"run\" the tests. Adding PowerMock to handle these cases is being shot down as cost prohibitive due to the need to upgrade the existing project to support it (another discussion). My questions are: what are the REAL world, tangible benefits of proper unit testing I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad?
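For the question above, the most concrete argument is usually a demonstration: a properly isolated unit test needs no database, runs in milliseconds, and fails only when the logic is wrong. A minimal sketch using Python's unittest; the function under test is purely illustrative, not from the project in question:

    from unittest import TestCase
    from unittest.mock import Mock

    def display_name(db, user_id):
        """Illustrative code under test: formats a name fetched from storage."""
        row = db.fetch_one("SELECT name FROM users WHERE id = ?", (user_id,))
        return row["name"].strip().title()

    class DisplayNameTest(TestCase):
        def test_name_is_trimmed_and_title_cased(self):
            db = Mock()
            db.fetch_one.return_value = {"name": "  ada LOVELACE "}
            self.assertEqual(display_name(db, 7), "Ada Lovelace")
            # The whole test ran without opening a single connection:
            db.fetch_one.assert_called_once()

Database-backed tests remain valuable, but as integration tests; the tangible difference is feedback speed and the ability to pinpoint which layer actually broke.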
Is code coverage just as effective?"} {"_id": "180025", "title": "PHP best practice in return values", "text": "For PHP best practices, I have read somewhere that if you assign your data to a variable it takes almost no memory or resources. So let's say I have this function that returns a count:

    public function returnCount($array){
        return count($array);
    }

Or is this better?

    public function returnCount($array){
        $count = count($array);
        return $count;
    }

"} {"_id": "145832", "title": "Find a \"hole\" in a list of numbers", "text": "What is the fastest way to find the first (smallest) integer that doesn't exist in a given list of **unsorted** integers (and that is greater than the list's smallest value)? My primitive approach is sorting them and stepping through the list; is there a better way?"} {"_id": "180020", "title": "Guidance in naming awkward domain-specific objects?", "text": "I'm modeling a chemical system, and I'm having problems with naming my elements / items within an enum. I'm not sure if I should use: * the atomic formula * the chemical name * an abbreviated chemical name. For example, sulfuric acid is H2SO4 and hydrochloric acid is HCl. With those two, I would probably just use the atomic formula as they are reasonably common. However, I have others like sodium hexafluorosilicate, which is Na2SiF6. In that example, the atomic formula isn't as obvious (to me) but the chemical name is hideously long: `myEnum.SodiumHexaFluoroSilicate`. I'm not sure how I would be able to safely come up with an abbreviated chemical name that would have a consistent naming pattern. There are a few problems that I'm trying to address through naming the enum elements. The first is readability, with the longer names presenting an issue. The second is ease of picking up the code for new maintainers, and here the shorter names present an issue. The next issue is that business owners usually refer to the full chemical name, but not always. The \"mouthful\" chemicals are referred to by their formula. The final concern is making sure it's consistent. I _don't_ want a mixed naming convention, as it will be impossible to remember which to use. **From a maintenance point of view, which of the naming options above would you prefer to see and why?** * * * _Note: Everything here below the line is supplementary | clarifying material. Please don't get bogged down in it. The main question regards naming the awkward objects._ _Atomic Option_

    public myEnum.ChemTypes { H2SO4, HCl, Na2SiF6 }

_Chemical Name Option_

    public myEnum.ChemTypes { SulfuricAcid, HydrochloricAcid, SodiumHexafluorosilicate }

* * * Here are some additional details from the comments on this question: * Audience for the code will be just programmers, **not** chemists. * I'm using C#, but I think this question is more interesting when ignoring the implementation language. * I'm starting with 10 - 20 compounds and would have at most 100 compounds, so I don't need to worry about _every_ possible compound. Fortunately, it's a fixed domain. * The enum is used as a key for lookups to facilitate common / generic chemical calculations - which means that the equation is the same for all compounds but you insert a property of the compound to complete the equation.
* For example, Molar mass (in g/mol) is used when calculating the number of moles from a mass (in grams) of the compound. FWIW, Molar Mass == Molar Weight. * Another example of a common calculation is the Ideal Gas Law and its use of the Specific Gas Constant. A sample function might look like:

    public double GetMolesFromMass(double mass_grams, myEnum.ChemTypes chem)
    {
        double molarWeight = MolarWeightLookupFunctionByChem(chem); // returns grams / mole
        double moles = mass_grams / molarWeight; // converts to moles
        return moles;
    }

    // Sample calls:
    myMoles = GetMolesFromMass(1000, myEnum.ChemTypes.Na2SiF6);
    // *or*
    myMoles = GetMolesFromMass(1000, myEnum.ChemTypes.SodiumHexafluorosilicate);

    public double GetSpecificGravity(myEnum.ChemTypes chem, double conc)
    {
        // retrieves specific gravity of chemical compound based upon concentration
        double sg = SpecificGravityLookupTableByChem(chem, conc);
        return sg;
    }

"} {"_id": "107325", "title": "Patterns for ajax-heavy web applications", "text": "Up until now, I've been a great fan of the MVC pattern for developing web applications. For the web, I've developed mostly in PHP (with the Kohana and CodeIgniter frameworks) and Ruby (RoR). As my applications become heavier on the Ajax side (single-page apps etc.) I noticed that I can't help but betray the very basic concepts of MVC: Javascript is doing most of the work; calling controllers just to ask for views or more js/json code seems wrong. After striving to keep all the routing jobs in the controllers, I've now fundamentally split them between the controllers and Javascript (which is, from the framework's PoV, part of the views). When asking for json the MVC subversion looks even more obvious: the js code doing the request _is_ the controller; the framework's controller is merely acting as a proxy for the model's data - what I'm actually asking for. So, what should I look into? I was thinking about pure-javascript applications, for instance with backbone.js and a document-based, json-spitting database (couchDB) as the backend, but I love my relational databases. Another option would be the following: I'd just make \"routed models\" in PHP/ruby/go/whatnot. Those would analyse the request, call the db, and give back some json. This approach looks interesting to me, but it lacks any substantial documentation or academic analysis, so I'm a little afraid of the leap. Ideas?"} {"_id": "38597", "title": "Where and how to mention Stackoverflow participation in the r\u00e9sum\u00e9?", "text": "I think I have good enough reputation on SO now. Well, this may not be that much as compared to so many other users out there, but I am happy with mine. So, I was thinking of adding my profile link on my r\u00e9sum\u00e9 - just the profile link and not that \"I have this much reputation on SO\". Those who haven't seen it can see this question: Would you put your stackoverflow profile link on your CV / Resume? What would this look like? > **Forums/Blogs/Miscellaneous others** > > No blogging as yet but active participant in Stackoverflow. My profile link > - `http://stackoverflow.com/users/userId/username` I am thinking of putting this section after the **Project Details** and **Technical Expertise** sections. Any tips/advice? **Update** MKO has made a very good point \\- > do you really want a potential employer to be able to evaluate in detail > everything you've ever written on SO I thought of commenting but it would be too long - In my questions/answers I put a lot of statements like - \"AFAIK ...\", \"following are my assumptions so far ...\", \"am I correct to conclude that...
?\", \"I doubt if it is possible to ...\" etc. when I am not sure about something and I rarely involve in fights with other users. However I do argue on topics sometimes if I feel it is necessary and if I have a valid point. I do accept my mistakes and apologize for the same. As we all know nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved in the quality of content by comparing old content with new one? Isn't that great? I also try to go back to older questions/answers and put corrective comments etc. when I feel I was wrong or if I can improve my post. Of course there are many employers who want you (potential employees) to be correct each and every time. They immediately remove you from consideration when you say a single incorrect thing. I have personally met such an interviewer few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is do you really care to work with such people? Or do you like those people who give value to the fact that you are striving to improve every day. I personally prefer the latter."} {"_id": "107321", "title": "Learning in the Classroom", "text": "I am currently taking an Assembly programming class, and I honestly find it extremely boring and tedious. While I've programmed some assembly before as part of a C++ program, I find what we are doing to be completely different than actual assembly code I've worked with (we are using a program called MARS MIPS, found here). Since we are using the MIPS architecture, which from what I gathered is hardly used any more, I find what we are learning hardly applicable to anything outside of the class. Likewise, I find myself not retaining the material we cover in class due to detachment from real world applications as well as the classroom setting itself. I've been trying to write my some code outside of the classroom to get familiar with it, but without any obvious real world applications, I find myself unmotivated to do so. **Is there any advice for trying to learn a programming language (such as assembly) for a class when I don't seen any real world application of it?** Or perhaps a more specific question to this problem would be: **Is there any real world application to the MIPS architecture which I am unaware of?** **EDIT:** I found the following information on StackOverflow concerning MIPS: http://stackoverflow.com/questions/2635086/mips-processors-are-they-still-in- use-which-other-architecture-should-i-learn"} {"_id": "145839", "title": "What is the alternative to frequent manual verification?", "text": "I was thinking, is there a particular time in your coding where you verify that it works? Say, after coding a function, or an entire class, or an entire section of an app, or after every 'significant' block of code? I ask this, because I tend to verify my app too often, sometimes after every 3rd or 4th change. This is a habit which has proven very hard to shake. It appears to be counter productive to do this repeatedly and manually. Is there another solution? 
It seems the answer is either to be a more competent programmer and essentially 'know' the return of the majority of your code, or to have your IDE check the return periodically, or to only test the return via TDD."} {"_id": "66103", "title": "How should a ViewModel be named?", "text": "While starting work on a brand new ASP.net MVC application, we've learned that we should have all of our available data pushed to make it easy to create a full view. While learning this we've started using the concept of a ViewModel, i.e. a complex object full of properties from other basic data transfer objects. Now our problem has been in naming these ViewModels. We've run across the idea of using a name created from the DTO object names, like MemberContactViewModel. While this is okay, I feel that names should be more unique or more related to what the complex object will be doing rather than what it's made up of. What are your thoughts? How should a complex ViewModel be named: based on the data it holds, or on the view where it will interact? Thanks for the input."} {"_id": "189509", "title": "Where can I turn to if I can't fix a bug?", "text": "I am looking for resources to turn to when I don't have the answer for something. I lead a team of software developers. We have been rolling out new software releases on a monthly basis. When there is a bug that my team cannot fix, it lands on me. Most of the time I can resolve the issue, but there are times when I get stuck. Unfortunately, I'm the top of the line at our company. There is nobody I can ask for help or assistance in figuring something out. Do you happen to have any recommendations or guidance for situations like this?"} {"_id": "189508", "title": "Git submodule vs Git clone", "text": "I am working on an opensource project on github. It has a subdirectory /Vendor in which it has a **copy** of several external libraries. The original maintainer of the project updated this directory with a newer copy of the external libraries once in a while. One developer sent me a pull request with the idea of replacing this **copy** with a **git submodule**, and I am considering whether it's a good idea or not. Git submodule Pros: * Submodules were specifically designed for similar scenarios * It removes the possibility of an accidental commit to Vendor which would be overwritten by the next update Git submodule Cons: * It looks like git submodules push complexity from the maintainer to the person who will clone/pull the project (additional steps are required after you clone to start working with the project: \"git submodule init\", \"git submodule update\") What's your opinion on this? One more thing: this is a reasonably small library with very limited external dependencies. I think any build tool would be overkill for it for now."} {"_id": "131293", "title": "Personal Software Process (PSP1)", "text": "I'm trying to figure out an exercise, but it doesn't really make too much sense. I'm not asking someone to provide the solution, just to try and analyse what needs to be done in order to solve this. I'm trying to understand which PSP 1.0 / 1.1 process I should use. PROBE? Or something else? I would greatly appreciate some help on this one from someone that has experience with the Personal Software Process methodology. Here is the question.
> For the reference case (\u201ccode1.c\u201d), the following s/w metrics are provided: > > * man-hours spent in implementation phase (per-module): 2,7 mh/file > * man-hours spent in testing phase (per-module): 4,3 mh/file > * estimated number of bugs remaining (per-module): 0,3 errors/function, 4 errors/module (remaining) > > > Based on the corresponding values provided for the reference case, each of the following tasks focuses on some s/w metrics to be estimated for the test case (\u201ccode2.c\u201d): [25 marks] > > 1. (estimated) man-hours required in implementation phase (per-module) [8 marks] > 2. (estimated) man-hours required in testing phase (per-module) [8 marks] > 3. (estimated) number of bugs remaining at the end of testing phase (per-module) [9 marks] > > > Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level-1 (PSP-1), using them as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case (\u201ccode2.c\u201d), using PSP as the basic estimation model. > > In order to perform the above listed tasks, students are advised to consider all phases of the PSP software development process, especially at levels PSP0 and PSP1. Both cases are to be treated as separate case-studies in the context of classic s/w development."} {"_id": "59294", "title": "Visual web page designer for Django?", "text": "I'm just starting my Django learning, so pardon me if any part of this question is off-base. I have done a lot of web searching for information on the equivalent of a visual web page designer for Django and I don't seem to be getting very far. I have experience with Delphi (Object Pascal), C, C++, Python, PHP, Java, and Javascript, and have created and maintained several web sites that included MySQL database dependent content. For the longest time I've been using one of the standard WYSIWYG designers to design the actual web pages, with any needed back end programming done via Forms or AJAX calls that call server side PHP scripts. I have grown tired of the quirks, bugs, and other annoyances associated with the program. Also, I find myself hungry for the functionality and reliability a good MVC based framework would provide me, so I could really express myself with custom code easily. So I am turning to Django/Python. However, I'm still a junkie for a good WYSIWYG designer for the layout of web pages. I know there are some out there that thrive on opening up a text editor, possibly with some code editor tools to assist, and pounding out pages. But I do adore a drag and drop editor for simple page layout, especially for things like embedded images, tables, buttons, etc. I found a few open source projects on GitHub but they seem to be focused on HTML web forms, not a generic web page editor. So can I get what I want here? The supreme goal would be to find not only a web page editor that creates Django compatible web pages, but, if I dare say it, a design editor that could add Python code stubs to various page elements in the style of the Delphi/VCL or VB design editors. Note, I also have Wing IDE Professional, version 2.0. As a side note, if you know of any really cool, fun, or time-saving Python libraries that are designed for easy integration into Django, please tell me about them.
\\-- roschler"} {"_id": "24658", "title": "REPL - Do You Find It Useful?", "text": "Recently I started learning Clojure and downloaded Clojure-Box for Windows, which came with SLIME, and I have been fiddling around with Clojure using the REPL mode. Now, I have started learning many languages that have a REPL available (F#, Python, Haskell) but had never used one until I started with Clojure, and I realized how much of a help it is. I was wondering how many people use a REPL for the languages that have them available. If you don't use them, what is your reasoning not to? If you do use them, do you use them for testing code or just for learning languages?"} {"_id": "3853", "title": "How to set up Unit Testing in Visual Studio 2010?", "text": "I'm doing my first big project and I don't have a lot of experience in a professional programming environment. While researching anything programming-related I often see references to Unit Testing, but I am still unclear as to how to set those up or even if it would be beneficial to me. Can someone explain unit testing to me, and how to set it up in a Visual Studio 2010 solution that has multiple projects? Is it something that occurs within your project's solution, or is it a separate solution? And is it something you'd recommend for a small development team, or is it just a waste of time to set up? Right now I just run the entire program to test whatever I am currently working on, but occasionally I have run into problems that are not easy to debug and it would be useful to run subsets of the code elsewhere... Sometimes I do set up another project with some of the libraries referenced to test a small part of the program, but I feel more time is wasted setting that up than just running the entire program because of all the dependencies involved"} {"_id": "3851", "title": "When would someone be considered a bad programmer?", "text": "What would make you consider that a programmer is bad at what he or she is doing? If possible... how should he/she improve?"} {"_id": "151251", "title": "Scrum in a consulting environment?", "text": "Is it possible to introduce scrum to a consulting environment? On any given week I might belong to 4 different teams, each with a different PM, BA and dev team (I am a designer). It doesn't seem feasible, time-wise, to be part of that many teams/projects, as there would need to be daily stand-ups for each, sprint reviews, etc. I should add these projects are not necessarily software, but mainly ecommerce web projects. Can someone who has done this speak to how you approach it? _Edit_ I think my situation is different and I should have discussed it further. Our dev team is in India, so finding a time in the US morning/India evening for daily stand-ups is hard, especially when we almost always have client calls in the US morning/India evening so dev can be present if necessary. Instead, what our PMs have done is hold 1 daily meeting for the US team (BA, Design). However, they go through ALL the projects at once. So, if you're not on that project, you're screwed and have to wait till your project comes up. This can be 30min-75min a day, for me. Insane. This does NOT cancel our weekly status call (30m-1h) with the client (x however many projects you are on).
However, I'd love to get to a point where our 'scrum-like' dev team (they develop in sprints and have sprint planning meetings) can be part of daily standups with the US team and we transition to an actual scrum framework."} {"_id": "197281", "title": "Why the ugly keywords in C11?", "text": "I am currently reading a draft of the C11 specification. The newly introduced keywords `_Bool, _Alignof, _Atomic` all feel like custom extensions, instead of standard reserved keywords like `struct, union, int`. I realize that the standard basically consists of standardized extensions ... but still, this is awful! Maybe we will soon end up with `__Long_Long_Reallylong_Integer_MSVC_2020_t` creeping into the standard! _Is the backward compatibility of nonstandard code the only reason for the new style of the keywords?_"} {"_id": "22762", "title": "Tips for a novice PHP developer to drive down long-term maintenance costs", "text": "I'm an experienced Java developer who is just starting up a project for an NGO. I will be working on the project for at least 6 months, following which the NGO will have to pay or find a volunteer to maintain the project. Seeing as they already have people working on their website in PHP, I figured PHP was the obvious choice to make sure the skills are still available (it is webby) - I eliminated Java because Java devs are typically expensive. Unfortunately I have next to zero experience with proper PHP development (just a few months spending a small percentage of my time on a Drupal project without any real coding). What are some things I can do to ensure that the code I leave behind is maintainable by a relatively low-skilled PHP developer (eg a teenager wanting to make some holiday cash)? Do I go with a CMS? Are Drupal developers cheap? Any other CMS / Framework I should look at? Background: the project is a website where people will search for educational information, with some simple user-management to only allow some users to create content, restrictions to specific content-types etc. The CMS vs write-it-myself question is not the only thing I'm interested in hearing about. I'm also interested in any tips about code style, anything you think my Java experience would push me towards that is going to make it difficult for the hypothetical volunteer, etc. There are probably things about this scenario that I haven't thought through - so anything related to keeping maintenance costs low would be appreciated."} {"_id": "208109", "title": "Reading a specific type of input from file", "text": "I have software that reads from a file. Each object in the software takes 7 inputs, viz. `string string string float string float int`. I have an input file. It contains a number of input values. The input for one object looks like: `hss cscf \"serving cscf\" 32.5 ims 112.134 124` (Note: when an object's variable needs a multi-word string, I used \"....\"; a single-word string is without quotes.) How can I read it using ifstream? (I searched Google but didn't find anything.) I tried to read the entire line using getline, but got stuck when it came to finding out whether it's a single-word or multi-word input! I thought to read a line and then scan it char by char: if a char is '\"', I know it's a multi-word string. But I get stuck when it comes to an integer or float. For a `char` you can use `if(line[i]>='a'&&line[i]<='z')`, but how do you go ahead when an integer or float is the next value?
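For the file-parsing question above, the underlying logic is two steps: tokenize the line while honoring double quotes, then convert each token by its position in the schema. A minimal sketch in Python (in C++ the same idea can be built with a hand-written quote-aware scanner, or with std::quoted when extracting the quoted field from a stream); the function name is illustrative:

    import shlex

    def parse_record(line):
        """Tokenize a line, honoring double quotes, then convert by position.
        Expected shape: string string string float string float int."""
        tokens = shlex.split(line)  # 'a "b c" 1.5' -> ['a', 'b c', '1.5']
        casts = (str, str, str, float, str, float, int)
        return [cast(tok) for cast, tok in zip(casts, tokens)]

    print(parse_record('hss cscf "serving cscf" 32.5 ims 112.134 124'))
    # -> ['hss', 'cscf', 'serving cscf', 32.5, 'ims', 112.134, 124]

Because the type of each field is fixed by position, there is no need to guess from the characters whether a token is an int or a float; the schema decides.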
{"_id": "22769", "title": "What programming language generates fewest hard-to-find bugs?", "text": "What language, in your opinion, allows the average programmer to output features with the fewest hard-to-find bugs? This is, of course, a very broad question, and I'm interested in very broad and general answers and wisdom. Personally I find that I spend very little time looking for strange bugs in Java and C# programs, while C++ code has its distinct set of recurring bugs, and Python/similar has its own set of common and silly bugs that would be detected by the compiler in other languages. Also I find it hard to consider functional languages in this regard, because I've never seen a big and complex program written in entirely functional code. Your input please. Edit: _Completely arbitrary clarification of hard-to-find bug: Takes more than 15 minutes to reproduce, or more than 1 hour to find cause of and fix._ Forgive me if this is a duplicate, but I didn't find anything on this specific topic."} {"_id": "75936", "title": "Explain Cloud computing to Grandmother", "text": "Technically I understand what cloud computing is, but I'm having difficulties when trying to explain it to a layman. I am wondering if there are any good examples which could easily explain it."} {"_id": "75933", "title": "Are there any Phone Interview equivalents to FizzBuzz?", "text": "I think FizzBuzz is a fine question to ask in an in-person interview with a whiteboard or pen and paper handy to determine whether or not a particular candidate is of bare-minimum competence. However, it does not work as well on phone interviews because any typing you hear could just as easily be the candidate's Googling for the answer (not to mention the fact that reading code over the phone is less than savory). **Are there any _phone-interview_ questions that are equivalent to FizzBuzz in the sense that an incompetent programmer will not be able to answer it correctly and a programmer of at least minimal competence will?** Given a choice, in my particular case I am curious about .NET-centric solutions, but since I was not able to find a duplicate to this question based on a cursory search, I would not mind at all if this question became the canonical source for platform-agnostic phone fizzbuzz questions."} {"_id": "204831", "title": "Good coding problem/s to use in an interview", "text": "I've read a lot of different articles on how to locate good programmers, including the Joel Spolsky and Steve Yegge stuff. Overall I feel we have a pretty good interview process. We ask good questions and we have a coding test problem that we give to candidates to help screen out people that can't actually code; this problem takes a little while and is a \"take home\" where the candidate is given a few days to return a solution. Overall this process has worked fairly well for us, but we have still landed an occasional dud on the programming front. In at least one case I now suspect that the solution provided was not the candidate's own, but there would be no way to prove this out; nor would it matter as such. We are in an environment where once someone is hired it is incredibly difficult to let them go, so this makes it all the more critical we end up with a good candidate. So I would like to add a **coding exercise** into the actual interview and have the candidate work through a solution in a **pairing style environment** with an existing team member.
I'm looking for something that is not terribly complex; something any **competent programmer could solve in about an hour**, but also something a little more difficult than the fizz-buzz problem. * * * I've looked at the following questions as well as other similar questions, but most are slightly different in that they are not asking for specific problems that could fit within the time constraints and are well suited to pairing, or I haven't really seen good answers provided. Code During Interview Favorite Interview Question Passionate Programmer? Fizz Buzz, really? And many many others..."} {"_id": "77688", "title": "How to protect yourself from being sued by patents?", "text": "I've had a few software ideas before that could probably be patented. (I decided not to pursue any of them, however.) Basically, I don't want these ideas patented though. I don't care if someone else implements them, I just don't want to get sued later by some patent troll who patented the idea I had and implemented 5 years ago. Would posting your idea to public websites, and using the poor man's patent technique, ensure that even if someone else patents your idea, you have protection from being sued, and possibly the ability to invalidate their patent? (Assuming the reform bill doesn't pass.)"} {"_id": "204837", "title": "Component design: getting cohesion right", "text": "I currently have a set of components named DataValues, ValueParsers, ValueFormatters and ValueValidators. The first one defines an abstract base class DataValue and contains a whole load of implementations. The 3 other components each define an interface similar to their name (minus the \"s\"), and also contain a bunch of implementations of those. These 3 depend on DataValues. There are no further dependencies. In other words, each component currently contains a whole inheritance hierarchy. I recently re-watched EP16 of Robert C. Martin's Clean Coders, where he points out this is one of the most common mistakes made in package design. This made me realize that this exact thing is going on for the here described packages. The question then is how to best improve upon the current component design. Luckily none of those components have seen their first release yet, so things can still be moved around, and boundaries can still be redefined. Releases for these are on the horizon though, so I better get it right now. What I'm thinking of doing now is to create a new component to hold the mentioned abstract class and interfaces for all these components. It would also contain the Exceptions of these components and perhaps some trivial implementations of the interfaces. This new component would then be needed by all the current ones, and by all the ones needing the current ones. Then again, in this latter category, there will be a number of components that can just depend on the new one and get rid of their current dependency on the much more concrete and unstable ones containing all the interface implementations. This is great for the stable dependencies principle, and equally nice for the reuse/release equivalence principle. It's however doing rather badly when it comes to the common closure and common reuse principles. Concretely this means that components that need the ValueParsers interface but don't care about the ValueValidators one will still depend on it as it is in the same package. They can thus be affected by it for no good reason. Then again, considering how abstract/stable this new component ends up being, I don't really see this causing many problems.
I'm looking for ideas on how to better draw the component boundaries and concerns, or suggestions about the alternative I described."} {"_id": "125414", "title": "How to deal with people who don't want to share knowledge?", "text": "> **Possible Duplicate:** > How do you deal with an information hoarder? Often, in IT teams knowledge equals power. This is fine as long as (IT-)knowledge is equally accessible by all members of the team and company-specific know-how is well documented. Sometimes personnel in an IT department build up a tremendous amount of know-how _without_ documenting it. By doing so they think that they ensure their position. Often these people like it when people specifically have to ask them how to do a job. How do you get these programmers, database administrators or other IT staff to **_DOCUMENT_** their work and make it accessible to the company they work for? EDIT: It's a relief that so many of you _know_ this type of person. That it is not me bumping into them everywhere I do projects. On the other hand it makes me sad, such talented people but in many ways behaving like children. I have seen the behavior in men and women btw. It is hard to pick one answer as the best and accept it. Will do after more re-reads."} {"_id": "77686", "title": "Architectural Patterns for a Game", "text": "So I've got a solution that contains a few big projects, which I'm trying to break down into smaller projects with more isolated responsibilities. This is a game I'm tinkering with -- I'm mainly a LOB developer and I think the principles are universal, so I'm hoping to learn something here. The dependencies in some of the objects are a bit too tightly intertwined, and I'm hoping for some help on how to untangle them. Or maybe some sort of pattern or abstraction that might make them more manageable. **Ares.Core.World** has classes in it like Creatures, Items, etc. All of them inherit from Entity, which is aware of what cell on the map it exists at. It accomplishes this by holding a reference to a Ares.Core.UI.MapControls.MapCell... And you can see that the wires are already getting crossed. **Ares.Core.UI.MapControls** contains Map and MapCell, each of which holds Lists of the creatures and items they contain. MapCell also inherits from Ares.Core.World.Entity since that solved a few early problems very elegantly -- for instance, all Entities have inventory. I would like to find a way to split UI and World out into separate projects (**Ares.World** and **Ares.UI**) since UI and the overarching world should probably be separate concerns. But as you can see, the way it is now the two projects would need to reference each other, and circular references are not allowed. I'm wondering if there are any architectural patterns out there that might help in this situation. Thanks!"}
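For the game-architecture question above, one common way out of the circular reference is dependency inversion: the World project owns a small interface, and the UI project's MapCell implements it, so project references point only one way. The post's code is C#; this is a rough C++ sketch with hypothetical names, not the poster's actual design:

```cpp
#include <vector>

// --- World project: owns the abstraction it needs ---
struct IMapCell {
    virtual ~IMapCell() = default;  // World never names the UI's concrete MapCell type
};

struct Entity {
    IMapCell* cell = nullptr;       // entities know their cell only through the interface
};

struct Creature : Entity {};
struct Item : Entity {};

// --- UI project: depends on World, never the other way around ---
struct MapCell : IMapCell, Entity {
    std::vector<Creature*> creatures;  // cells can still list their contents
    std::vector<Item*> items;
};

int main() {
    MapCell cell;
    Creature c;
    c.cell = &cell;  // compiles with the dependency arrow pointing one way only
}
```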
{"_id": "197473", "title": "A backlog of \"bite-size\" tasks in parallel to the \"main\" feature backlog?", "text": "After over two years of working in a highly siloed, \"lone-wolf\" development department structure, we're adopting Agile SCRUM. Great. I like Agile; as a dev it keeps you focused, busy, and productive without having myriad stakeholders shoving project after project down your throat with the expectation they all be done yesterday. There is, however, one aspect of moving to SCRUM versus our current \"model\" that I think people outside Development are not going to like in the slightest. That is their current ability to have us do small changes \"while you wait\". A large portion of our development is for in-house consumption only, and we're mostly all in the same building. So, it's been common practice for years for a department lead or manager elsewhere to come to the \"codebase owner\" of a particular application and ask for small stuff (sometimes not so small, but we're pretty good about not taking on three-week projects based on these \"drive-bys\"). Even our boss sometimes relays things brought up to him in this way. Very often, if we're working in the codebase in question at the time, we can simply pop up the source file, make the change, and run it with them looking over our shoulder to verify the change is what they want, before checking it into Subversion for inclusion in the next build. With a basic Agile SCRUM methodology, these tweaks would either be logged as defects (we didn't meet a requirement specified in a story previously consumed) or new small stories (we met all stated requirements, but those requirements were incomplete, vague or incorrect, or they changed after delivery once the users saw the new features). Either way, the vast majority would be one-pointers at most if not zeroes, and of relatively low priority (the system is usable in its current state, but it would be _so_ much cooler if...), making them unlikely to be brought into a sprint when working the backlog top-down. This possibility was raised at a dev meeting as being a source of active opposition to our Agile process by other departments, who would see it as less \"agile\" than our current ability to make small tweaks on request. It's a valid concern IMO; the stakeholders behind the PO don't always agree on what things are most important, because they don't all have the same point of view, yet it's typically only the managers who make the final decision, and therefore their bias is the one that shows in the product backlog. A solution was then proposed, which was tentatively called the \"candy jar\" (another term thrown out was the \"gravy boat\"). Small tweaks requested by the \"little guys\" in the various departments, that are not defects in an existing story, that are estimated by consensus or acclamation within the team to take less than one-half of a developer-day, and that would have an immediate, significant, positive impact on the user experience in the opinion of the end user, are put on a list in parallel to the primary backlog. They'd be identified as \"stories\", but would be kept separate from the primary backlog of \"big\" stories subject to prioritization. If, at any time during the normal progress of a sprint, we happen to be working in an area of the system in which one of these tweaks can be made, making the tweak trivial, we can bring the tweak into the sprint and code it alongside the larger story. Doing this _must not_ jeopardize the completion of the larger story or any other committed work. The PO would also have access to this list, and if they were working on an upcoming user story touching the basic feature involving the tweak, they could fold it into the story as a requirement and then we'd meet the requirement as we would any other. This, it was thought, would make tweaks more likely to be worked on sooner rather than later. This triggered the reaction among those of us with ScrumMaster training of \"uh-uh\". There is _one_ backlog.
Having two backlogs introduces the question of which #1 item is _really_ the most important, which list's items determine _real_ velocity, and which of the two backlogs a story actually belongs in (any demarcation of size/complexity is going to have some cases that fall relatively arbitrarily to one side or the other). \"Let the process work\", we said; if the change really is significant to the end users, they'll make enough noise to get the department heads making time/money decisions on board, and it'll get bumped up into the dev team's consciousness toward the top of the backlog. I thought I'd pose the question to the floor: **In your opinion, would a parallel list of \"bite-size\" stories have value in getting small, useful but ultimately low-priority changes made faster, or is it overall a better decision to fold them into the main backlog and let the basic process govern their inclusion in a sprint?**"} {"_id": "230830", "title": "Doing localization each Sprint", "text": "We need to support 5 languages for our product. This means that in order to have a potentially shippable product we need to translate all text new to the Sprint into all these languages. It doesn't make sense to me because the text might change after a few Sprints during which the users will use the application and give feedback. So paying the translating company for translating text that will most likely change doesn't seem to be a wise decision. So what do you do with localization in Scrum?"} {"_id": "95328", "title": "Should one time contributors be listed as an Author?", "text": "When working on open source projects, should one-time contributors (I mean a single or minor patchset, nothing that would be considered a major contribution) be listed as an Author, or should they simply get listed in, say, an acknowledgement section somewhere? If you contribute a small patch to a project, where do you want to get listed?"} {"_id": "121027", "title": "Why is a linked list implementation considered linear?", "text": "Typically, computer memory is always linear. So is the term non-linear used for a data structure in a logical sense? If so, to logically achieve non-linearity in a linear computer memory, we use pointers. Is that right? In that case, if pointers are virtual implementations for achieving non-linearity, why would a data structure like a linked list be considered linear, if in reality the nodes are never physically adjacent?"}
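A tiny C++ illustration of the distinction the linked-list question is after: the nodes land at arbitrary heap addresses (not physically adjacent), yet each node has exactly one successor, and that one-after-another logical ordering is what "linear" describes. The `Node` type here is made up for the example:

```cpp
#include <iostream>

struct Node {
    int value;
    Node* next;  // exactly one successor: the logical ordering is linear
};

int main() {
    // Each allocation may land anywhere on the heap.
    Node* head = new Node{1, new Node{2, new Node{3, nullptr}}};

    for (Node* n = head; n != nullptr; n = n->next)
        std::cout << n << " holds " << n->value << '\n';  // printed addresses need not be adjacent

    while (head) { Node* next = head->next; delete head; head = next; }  // cleanup
}
```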
{"_id": "230838", "title": "Is it common to use code you don't understand?", "text": "I have been working on a project recently that uses a pretty comprehensive framework built by someone else. The framework is free to use commercially and privately as anybody sees fit, so there are no legal reasons not to use it. I do find myself, however, uncomfortable about the fact that I'm creating projects and submitting them without fully understanding everything that goes into a project. The entire framework of the project is built by someone else and I haven't had the time to go through it all, and what I have gone through is slightly confusing. I'm a (ground-up) kind of person by nature so building something off of another project makes me nervous, but is this a common practice even if you don't fully understand how or why the code you're using works? I'm mainly wondering if using other people's code is a common practice when: * You don't have the time to go through said code to fully understand what it's doing, but you know it works (via tests and other reviews) * You are submitting a project based on said code, but have only been exposed to the top layer you wrote yourself."} {"_id": "230839", "title": "What does this duration reported by ffmpeg mean?", "text": "When I run ffmpeg on an mp4 file, it says the duration is: 00:03:57.54 What is the unit that `54` represents? I am interpreting it as 0.054 seconds (54 milliseconds)."} {"_id": "101248", "title": "Will my communication skills be wasted in a software engineering career?", "text": "I've been in the financial engineering arena (after BA Math and BA Computer Science) for about 5 years (20% analysis/programming, 80% communicating) and take pride in my ability to communicate with people and discuss technical problems (i.e. interacting with a team). I love this part of my job. Going to the whiteboard to draw abstract ideas and brainstorm. However, for many reasons, I want to transition my career into a technology company (software engineering) but I'm deeply afraid that I will fall into a stereotypical programming job where programmers code with big headphones on. I certainly know this is only a stereotype but I've witnessed similar environments before (at startups) and it scares me to think that I would be migrating to a career of isolation. I love coding and thinking algorithmically, but I don't want to give up interacting with people. I understand that having communication skills is only a positive, but am I setting myself up for career-happiness failure by transitioning into software engineering? I'd love to hear any clarifications and/or advice."} {"_id": "121025", "title": "A programming language that does not allow IO. Haskell is not a pure language", "text": "Are there any 100% pure languages (as I describe in the Stack Overflow post) out there already, and if so, could they feasibly be used to actually do stuff? i.e. do they have an implementation? I'm not looking for raw maths on paper/pure lambda calculus. However, pure lambda calculus with a compiler or a runtime system attached is something I'd be interested in hearing about."} {"_id": "32624", "title": "Do you develop with localization in mind?", "text": "When working on a software project or a website, do you develop with localization in mind? By this I mean e.g. * Externalizing all strings, including error messages. * Not using images that contain text. * Designing your UI with text expansion in mind. * Using pseudo-translation to test your UIs early in the process. * etc. On projects you work on, are these in the 'nice to have' category and you let the localization team worry about the rest, or do you have localization readiness built into your development process? I'm interested to hear how developers view localization in general."} {"_id": "51763", "title": "What programming languages have you taught your children?", "text": "I'm a C# developer by trade but have had exposure to many languages (including Java, C++, and multiple scripting languages) over the course of my education and career. Since I code in the MS world for work I am most familiar with their stack, and so I was excited when Small Basic was announced. I immediately started teaching my oldest to program in it but felt that something was missing from the experience.
Being able to look up every command with the IDE's intellisense seemed to take something from the experience. Sure, it was easy to grasp but I found myself thinking that a little more challenge might be in order. I'm looking for something better and I would like to hear your experiences with teaching your children to program in whatever language you have chosen to do so in. What did you like and dislike? How fast did they pick it up? Were they challenged? Frustrated? Thank you very much!"} {"_id": "165211", "title": "Authentication for users on a Single Page App?", "text": "I have developed a single page app prototype that is using Backbone on the front end and going to consume from a thin RESTful API on the server for its data. Coming from heavy server-side application development (php and python), I have really enjoyed the new different design approach with a thick client-side MVC but am confused on how best to restrict the app to authenticated users who log in. I prefer to have the app itself behind a login and would also like to implement other types of logins eventually (openid, fb connect, etc) in addition to the site's native login. I am unclear how this is done and have been searching - but unsuccessful in finding information that made it clear to me. In the big picture, what is the current best practice for registering users and requiring them to log in to use your single page app? Once a user is logged in, how are the API requests authenticated? I can store a session, but how do I detect this session in the API calls? Any answers to this would be much appreciated!"} {"_id": "101241", "title": "Did jQuery kill the JavaScript discussions?", "text": "There are over 100,000 questions on Stack Overflow tagged as questions relating to jQuery troubleshooting/usage. Compare this to the 124,000 questions on Stack Overflow that are tagged for JavaScript issues. We are very close to almost half of all JavaScript related questions on Stack Overflow being attributed to jQuery (plus or minus any margin for the few other JS frameworks that get questions on SO). What I'm getting at is, jQuery is not a language and it is not the be-all and end-all of frameworks that must be applied to every scenario in which JavaScript is present, yet it is quickly catching (and I predict will soon eclipse) JavaScript as a source of discussion/inquiry on sites like Stack Overflow. Is jQuery killing the JavaScript star? Is there no longer a firm grasp by the next generation of web developers regarding the power, simplicity and use of JavaScript as a means for DOM manipulation? Is this just the natural evolution of things and the viewpoint I'm presenting typical of the coder's ego (i.e., is this how assembly programmers view the .NET/Java/Web crowd?), or is this really the beginning of the end of the true JavaScript developer?"} {"_id": "101243", "title": "Any ideas for a good way to get HTML Text-To-Speech?", "text": "We do online e-learning without using any third party tools like Adobe Captivate / eLearning Suite, and it would be great to (somehow) get some text-to-speech going. Is there any way at all to get this working for free (or very cheap; some plans are just ridiculous, such as http://www.acapela-group.com/text-to-speech-interactive-demo.html)? Ideally what we need is for the user to select their language, though we're happy to start with just English, and then just start reading the HTML on the page..
Or am I just barking up the wrong tree here; if people NEED TTS, they would just be using their own software or the ones that come with the O/S... right? Edit: The requirement is to make the content a little more engaging. Also, it aids some people if they are slow readers. The reason I was thinking TTS over a professional Voice Over is that it should be easier to implement. Having a VO linked to each text \"snippet\" will require cutting up hundreds of MP3 files and then programming the elearning framework to do this. Having an automated TTS system should be easier for that, especially if in multiple languages. However, I think it does need to be a professional VO. Thanks for your thoughts/inputs"} {"_id": "101246", "title": "Silverlight - modularity. Best way to physically separate binaries?", "text": "We are working on a LoB app with lots of content. The XAP download will be rather large, I think, for a one-time download. I'm planning to break the solution into separate projects. Not sure why - but I DO NOT like many, many projects in a solution. IMO it is not a very good idea. Even when we were working as a large team - it was a pain with many projects. Even now when I'm using MEF/PRISM - there are still going to be some core dependencies like: 1. PRISM libraries 2. Interfaces 3. Navigation 4. Shell/Bootstrapper 5. App styles 6. Converters/Commands/Validators 7. etc. And then I'm going to have modules that will use all that CORE stuff. Modules will have the following inside them: 1. RIA Services client side 2. ViewModels 3. Views Those modules will be loaded on demand using MEF. I think size-wise all those modules will be larger than the core module because of the amount of logic in them. I expect to have about 5-6 modules and CORE. I think that will give me a reasonable number of client-side projects and libraries/XAPs and it will be a manageable solution to work with. Do you see any issue with a breakdown like this? Some online videos will make 7+ projects out of the CORE module. What's the point? I think it adds to complexity. Then they show 3-4 DLLs out of a module. One for views, one for viewmodels and so on. They still need to be loaded together, so why? I'm looking for do's and don'ts from you guys who went through this.. Thank you!"} {"_id": "51769", "title": "Is Android a language or a framework/platform?", "text": "I know that Android uses the Java language with a limited Java SDK and that Google claims it isn't Java. But is it right to say that Android is a programming language? Or is it more right to say that Android is a framework in Java? Or are both true?"} {"_id": "198458", "title": "Working in a company that does not comment their code at all?", "text": "I work for a small software development house (10~ developers, a few product managers and a few support staff) that sells various products and services to organisations internationally through resellers. I arrived in the job 7 months ago into a role opened up by a developer who left after 3 years. The company has maybe been around for 10 years. Basically they have about 30+ separate products (in 30 separate .NET solutions which may consist of 10+ projects within each of those solutions). I would estimate each project or solution to be of medium to large size. **In each project there are absolutely no source code comments at all**. There are also no architecture diagrams or documentation for how the application is designed or how the code hangs together.
There's some business analysis (use case) type documentation for how the program is supposed to work, from when it was first created, but that's it. As a new person to the company, for the past 7 months I've been thrown into various projects and had to fix bugs in these various products. I have an IT degree and I'm perhaps intermediate-senior level in HTML5/PHP/web development, also junior-intermediate level in .NET development. They recruited me for my web development skills and wanted to progress my .NET skills. I accepted as it was paying more than my old job in PHP and I thought it would be useful for my career and more job options to have some more .NET development background. However, each day I feel as though I'm floundering in the deep end and treading water. For starters, to get anything done, reading the code is mostly pointless as you've got no baseline from where to start to find or fix a bug in this new project you've just been assigned. I end up talking to one of the other senior programmers to get an idea of where to start. By senior, I mean they've been there for 4 years and have a general idea of how a particular program might work. Even they didn't write most of the software; a lot of it was written by a few older guys that have since retired and left the company. The code itself is very complex in my opinion. Maybe it's just because I'm not that skilled in .NET but there seem to be lots of unnecessary layers of complexity. You might have your basic 3 layers for MVC. Then whoever wrote the application added in more layers of code for the project, maybe to structure the application. For example, the front controller calls a few classes which bootstrap the application, then that calls the controller, then a method there might call a model, then that model calls a business layer, then that calls a business logic layer, then that calls a data layer, then the data layer calls a service which has more classes to call stored procedures and get the data from the database. This is a far cry from my experience in PHP, which even in complex web apps would still be as simple as the front controller calling a controller which calls the model; the model retrieves data from the database and renders it to the view/webpage. I mean, what benefit are all these extra layers adding? Another thing: there might be a class, then they create an interface for just that class. There are no other classes using that interface or inheriting from it. They just create an interface for every class, pretty much. As far as I can see there are no unit tests using the interfaces either. I'm sure it must've made sense to the senior developer writing it, but the meaning is lost on me. The other thing they like to do is, because they don't have many developers in house, they write up some specs/documentation for a system they want, then outsource the project to another company to develop. Then once it's finished they bring it back in house for the local developers to maintain. One company they outsourced it to was a local development services company, but that company in turn outsourced the development efforts to some developing country. Now I'm back here maintaining their code base, and it doesn't have any comments either. The program itself is probably the size of, say, Photoshop as an example. There are literally multiple layers of complexity and large parts of the code are commented out in chunks. Why is it commented out? Do they think they'll use that code in the future? No idea. No-one else knows either.
The people that worked on it in the outsourced company have since left that company and disappeared. Early on when I joined the company, I was reading some convoluted project code and added some comments into the code so I could make sense of it later on if I had to come back to the project. A few days later one of the more senior developers there came and asked me some questions about that project. I found the thing he was trying to figure out was actually the thing I had figured out the other day. I had written a comment in the code explaining what it did and I showed him. Then he immediately deleted the comment without reading it, scrolled up the code and asked me another question. Then I said, well that can be explained in the comment below which you just deleted. He goes back and undoes the deletion then reads the comment... then he finally understands. I asked him why none of the code has any comments and his rationale was that the comments get outdated and become pointless. Well yeah, comments are supposed to be maintained along with the code. You update the code, you update the comment at the same time. The development manager's rationale for no comments was that good code should be understandable without comments. OK so they've got these hugely complicated systems and the only people that have some idea of how they work are the people that were there in the beginning or who have been working on them for many years. Basically if these people leave it would practically ruin the company in my opinion because no newcomers coming along will understand the code base and be able to maintain it. When I get asked to do something there, like add a new feature into the system, I think I could probably knock it out in HTML5/jQuery/PHP in a few days no problem. In this company, I've got to first figure out the system, then understand how the code works with no comments, then figure out where to put my code, then code it. Days turn to weeks. It's like searching for a needle in a haystack. My philosophy is to design and code something well the first time, in the simplest way possible to get the job done, while making the code easily maintainable and understandable so if a new person had to pick it up they can just run with it straight away. But that seems very contrary to what this company does. So my questions: * **Is not commenting code a common practice in the software industry?** * **Why is there so much aversion to commenting code?** * **How can I perform my job well in this environment?** * **Should I recommend they start commenting their code or leave them to it?**"} {"_id": "226488", "title": "Commenting MVC applications", "text": "I've recently realised that my workplace doesn't comment their ASP.NET MVC applications. By 'doesn't document', I mean there is probably 1 line of comment per model/view/controller. No file purpose, date created etc. The excuse is that the MVC should be self documenting, with skinny controllers, self-documenting view code etc. This is a fine explanation, but several things affect this statement: * Javascript technologies are also being implemented alongside the view code with no documentation as to their integration (the javascript code is also undocumented) * There are multiple databases and providers serving internal and external content (undocumented) * There is regular confusion daily amongst the programmers of what exactly is going on in any particular module. THis takes about 3-4 minutes of a co-worker's time to clear up. 
Is this commenting practice lazy and bound to cause more problems than it's worth? Or am I just not adept enough at reading self-documenting code? As a side question, should MVC applications be less rigid in their commenting standards since the separation of concerns should make code obvious?"} {"_id": "51307", "title": "Self Documenting Code Vs. Commented Code", "text": "I had a search but didn't find what I was looking for, please feel free to link me if this question has already being asked. Earlier this month this post was made: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/ Basically to sum it up, you're a bad programmer if you don't write comments. My personal opinion is that code should be descriptive and mostly not require comment's unless the code cannot be self describing. In the example given // Get the extension off the image filename $pieces = explode('.', $image_name); $extension = array_pop($pieces); The author said this code should be given a comment, my personal opinion is the code should be a function call that is descriptive: $extension = GetFileExtension($image_filename); However in the comments someone actually made just that suggestion: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/comment- page-2/#comment-357130 The author responded by saying the commenter was \"one of those people\", i.e, a bad programmer. What are everyone elses views on Self Describing Code vs Commenting Code?"} {"_id": "137177", "title": "/*To Comment or Not To Comment...*/", "text": "> **Possible Duplicates:** > Self Documenting Code Vs. Commented Code > > \u201cComments are a code smell\u201d > > Best practices in comment writing and documentation Is commenting code useful, or just a waste of time? It seems so trivial but I've partaken in several quite heated discussions on this topic as well as it being a long-standing point of contention between my own learned philosophies and those of our local university. After reading Bob Martin's Clean Code I chugged the \"self commenting code\" kool-aid and subsequently flew in the face of everything I was taught in my undergrad years regarding commenting my code. Does anyone find thoroughly commenting code good for anything other than obligatorily fulfilling your development group's coding standards? I'm very curious to hear everyone's take on this, from both sides. :)"} {"_id": "56757", "title": "XML and Registry", "text": "Which method is considered to be best practice for setting storage for applications: XML or Registry? What are the benefits/downsides of using either of these?"} {"_id": "75689", "title": "are f# computational expressions a form of aspect oriented programming?", "text": "are monads, or more specifically f# computational expressions, a form of aspect oriented programming? Update: f# workflow builders have methods other than bind and unit. They have hooks for lots of keywords. see Creating a New Type of Computation Expression."} {"_id": "56752", "title": "Blocked Sites at work (that aren't even bad)", "text": "So here recently, i've been using google to look up information for basically random programming things (i was just hired on a month or so ago). So here recently I was actually looking up some information about RAW_SOCKETS (but thats beside the point) Anyways some of the tutorials sites/explaining how to use them and explaining the protocol sites are actually blocked. (and our manager sent out an email saying that if u run into a site just to email her just in case). 
Now obviously...w/e sys admins probably see these 'blocked' sites in their reports. But should I be worried? I mean....I literally am not trying to be devious Im just trying to learn stuff. I guess programming websites are sometimes labeled as \"hacking\". sometimes blogs get labeled like that, but alot of the time blogs have USEFUL information. This apparently happens alot of my other co- workers and they don't even bother emailing our manager.....but should I be worried? Or has this happened to you guys before?"} {"_id": "56753", "title": "Sending out a \"Request for Comment\" when establishing a new guideline", "text": "Do you send out a \"request for comment\" when establishing a new company guideline or standard? Companies need to establish consistent guidelines on things like development process, version control, release procedures, etc. Is it worth the effort to solicit feedback prior to publishing the final guidelines? What do you need to watch out for? In certain cases, such as when you have experienced engineers who can draft the guidelines, or where guidelines can be revised over time, I think the answer could be \"No\". Benefits * Resolve corner cases or ambiguities prior to publication * Better acceptance/adoption because the people that will use the guidelines were involved Drawbacks * Takes time * May not get useful feedback"} {"_id": "63991", "title": "Who pays for learning curve?", "text": "For those of us in consulting OR settings where you bill your hours whats the consensus on this? Should a client be billed for a certain amount of the learning curve? Should your employer be picking up the tab on overhead? Should it be you yourself the developer? Or perhaps a combination of all 3? I am conflicted on this - I've heard it being said both ways - that a client (or project budget) should not have to include the inefficiency of a lesser experienced programmer. And again I've heard it being argued that people learn on the job so projects should include a certain amount of training budget. I usually find that I (and other devs that I work with) work off hours or weekends reading up on technology , design patterns etc, sometimes struggling with a small piece that may go beyond contingency budgets - so most projects that we work on take up a bit more of our un-billed time than whats budgeted. I thought I'd open it up to the Stack-exchange hive mind for more insight from folks with much more experience. I am going to post the same question on the Project Management StackExchange and look at the delta."} {"_id": "98502", "title": "How to deal with Project Leader who doesn't involve himself in project in any way?", "text": "Our division is a small offshore development-only unit (not more than 25 members) but we develop many projects. Our main focus is development. Developers are grouped under Project Leaders, and they in turn report to a Project Manager. I'm stuck with the most lamest Project Leader. Our team has 4 developers under this one guy, and we handle 4 projects. Some of his characteristics are: * Doesn't contribute anything during design/coding * Doesn't do any managerial tasks (such as getting us read-only access to servers, or getting us software to be installed, filling out forms, etc) * Doesn't do any co-ordination work (such as giving instructions to QA team regarding requirements, co-ordinating with middleware team for deployment, etc) * Doesn't respond to client emails, just forwards it back to us. 
In case he responds, he messes up things because he is not involved in the projects for too long * Doesn't attend conference calls. Even in the rare case he does, he has nothing to contribute * Doesn't troubleshoot any issue (doesn't even care to see the logs) in developer's absence * DOES stock trading/researching from morning to evening * DOES involve himself in success parties/calls for our projects * DOES demand rating 1 for himself Overall, his title is \"Project Leader\" but he is not at all involved with the project. We developers do everything including the full grunt of design, development, co-ordination and support activities. He is quite vocal about it too. He says it is not his responsibility, and his full existence is to get work done from others. I can agree this, but as a Developer my work is to develop and deliver (heck we have so much development work), but I can't do support and co-ordination at the same time, sitting in phone calls and internal chat rooms and sending/receiving emails. I have explicitly said him that \"I will do coding, you do support\" but he just said he won't. In fact, during our last meeting when deciding about a new project - he explicitly said \"Make sure it has no dependency on me\". What can I say? He has been in the company for too long, nearly everyone who joined with him are in much higher ranks. He has been given this promotion recently just because of this one fact. The Project Manager too knows very well, but he is a new person and is afraid to act against a \"veteran\". I've tried a lot of things, like trying to talk with him about this (\"I do coding, you do support\"), talking with the PM, touting him in emails, etc. But he doesn't nudge. I'm not low enough to report this to higher management in US - but anyway they won't understand what's going on here. Due to his \"communication\" skills, they have a big impression of him. What do I do? Any traps that I can lay? Edit: More info: My growth is locked to his. Only if he becomes a manager and makes some way, I can fill up his place and become the Leader. But the whole problem is that he is content to just sit around without any aspirations, while we ambitiously do all his work without getting anything in return."} {"_id": "186852", "title": "Is this an implementation of the promise pattern?", "text": "I am writing a library in C++ (making use of many C++11 features) that (as far as I can tell) implements the promise pattern. The library consists of a class that makes asynchronous network requests. When a user of this class invokes a method that issues a network request, an instance of another class (we'll call it the \"promise class\") is returned that provides methods for canceling the request. Since this application uses the Qt framework, the promise class also emits signals when certain events happen, one of which is the successful completion of the request. Because the result of the network request is often included when the signal is emitted, I feel that this class implements the promise pattern. According to the Wikipedia article: > \"In computer science, future, promise, and delay refer to constructs used > for synchronizing in some concurrent programming languages. They describe an > object that **acts as a proxy for a result** that is initially unknown, > usually because the computation of its value is yet incomplete.\" > > * emphasis mine Have I missed anything? 
Does my class indeed implement the promise pattern?"} {"_id": "224837", "title": "Interpreting Date formats", "text": "Let's say I have a `DatePicker` control and I allow my US clients to type something like `\"1/1\"` in it and hit tab go to next control. So I will parse that date for them as \"01/01/2014\" and some more similar patterns. But they are all US-based so they are \"MM/dd/yyyy\" Now my question is about other formats? What If I want to support a country format that is like `\"yyyy/mm/dd\"` . How do those countries interpret such patterns?"} {"_id": "139321", "title": "How do I review my own code?", "text": "I'm working on a project solo and have to maintain my own code. Usually code review is done not by the code author, so the reviewer can look at the code with the fresh eyes \u2014 however, I don't have such luxury. What practices can I employ to more effectively review my own code?"} {"_id": "224834", "title": "Is it good or bad practice to provide separate classes for an object: one to build it, and one to use it?", "text": "Suppose I'm writing some C++ code to visualize \"Foo\" objects. I have two ways of getting a \"Foo\": computing it from data, or from taking the pieces of a precomputed \"Foo\" and building a new \"Foo\". Now, once a \"Foo\" is computed it's guaranteed to be good for visualization, but changing it may break this assumption. Therefore, I've decided to represent \"Foos\" in my code by a `Foo` class that has no mutating methods: once it is constructed and initialized, it doesn't change. But there's a second way to make a \"Foo\": build it from a precomputed \"Foo\"'s components. I've come up with several methods of building a `Foo` from precomputed data: ## Method 1: Constructor/Static methods Perhaps the most obvious method would be to add a new constructor or a static method to `Foo`, call it`fromPrecomputed`, that would read the components of the precomputed Foo and make a new `Foo` object, checking that it is valid. To explain why I'd like to shy away from this, I have to complicate my example: Let's say that one component of a \"Foo\" is a collection of \"Bars\". Now, in terms of implementation, sometimes a \"Bar\" is represented as a `std::vector >`, sometimes as a `Bar array[][2]`, sometimes as a `std::vector >`, and so on... I could have the user reorganize their data into a standardized form and have a single constructor for this standard, but this might require the user to perform an extra copy. I don't want to provide a static method for each format: `readPrecomputedFormatA`, `readPrecomputedFormatB`, and so on: this clutters the API. ## Method 2: Make `Foo` mutable If I exposed the `addBar(Bar)` method of `Foo`, then I could allow the user to iterate over their collection of \"Bars\" in their own way. This, however, makes `Foo` mutable. So I could compute a `Foo` that makes sense for visualization, then use `addBar` to add a `Bar` that makes the `Foo` no longer a \"Foo\". Not good. ## Method 3: Make a friend \"builder\" class I make a class called `FooBuilder` which has the `addBar(Bar)` method exposed. I make `FooBuilder` a friend of `Foo` and add a constructor to `Foo` that takes a `FooBuilder`. On calling this constructor, it checks to make sure that `FooBuilder` contains a valid \"Foo\" object, then swaps its empty representation of a Foo with what is inside the `FooBuilder`. Everybody is happy. The only \"messiness\" about method #3 is that it requires a friendship, but it's worth it to maintain encapsulation I think. 
But this has got me thinking: is this an established pattern? Or is there another, _better_ way of doing this that I don't know about?"} {"_id": "224833", "title": "Programming PHP without MVC, classes or framework: rewrite or continue on new features?", "text": "I have been programming for several years now, and back then (learning PHP) I've didn't learn to program using classes, MVC-logic or using any frameworks. I found my self solving my problems very well using my own functions. Eight months ago I was recruited to a start-up, to develop a huge social platform. I have been working there for several months now, and built up a huge website with various complicated features (+35k lines of code I guess), and I could see my self continue like this. Everything is coded without any framework, no classes or with MVC-logic, since I didn't have the time to learn it (we had to move fast). I do write everything in functions, and put a lot of effort in well documenting/describing my code as well as organizing it beautifully and easy- to-read. However, it _is_ top of my list to learn Classes > MVC > Laravel (or any other) framework. But I just can't see my self stopping now, learning above list, and then rewrite all code. This would simply push us back in time, and in the start-up we're moving fast and have many deadlines for new features/ideas/development. I've spoken with many people regarding this, and people say a lot of different things. Some say it's a matter of taste, and you'll be able to move on with it. Others say it's incredibly stupid, you're not scaleable, you're never get serious funding, you're the only one who ever be able to work on the project, you should stop and start learning it now, etc. Am I doomed? I feel lost. My personal opinion on this is, that even though it is a huge system already, it is still a MVP and I guess at some point in the future we would rewrite the code anyways. This is, if we're at such a successful stage in our venture and growing very fast / getting funding / etc."} {"_id": "183823", "title": "Holding mutable data in a single place", "text": "Given a **mutable** property, it generally makes sense to _only hold/store that property in a single place_. When the data needs to change you only need to update a single field, and you can completely avoid bugs where the different fields get out of sync. An extremely simple example could be: The 'right' approach: class Owner { String name; } class Dog { Owner owner; String getOwnersName() { return owner.name; } } The 'wrong' approach: class Owner { String name; } class Dog { Owner owner; String ownerName; String getOwnersName() { return ownerName; } } Experience has taught me that it's very seldom a good idea to break this rule of thumb. The risk of bugs being introduced and the increased effort required to understand the code almost always outweighs any benefit. My question is, is there a name for this rule/principle? Bonus points for linking to articles/blogs/etc. which make the argument for this clearly. Double bonus points for counter-arguments!"} {"_id": "4550", "title": "Are innovative programming languages too dangerous?", "text": "This question is a result of the question Which programming language do you really hate? where, unsurprisingly, all of the most popular languages are listed. Most of the top 15 are extremely common programming languages. One comment brings up **haXe** as an alternative to ActionScript, a language that can target Flash, PHP, JavaScript or native applications. 
Another similar alternative to JavaScript is GWT. I worked at a company a year ago that decided to adopt GWT for our next product. That didn't pan out as well as hoped because GWT hasn't really taken off. There's little example of best practice, limited support libraries, and some features are so immature that they are ridiculously difficult to use. And, worst of all for something like haXe or GWT, it's nearly impossible to find programmers who are already familiar with the platform and are ready to go, let alone bring their own expertise when you lose a developer (or want to grow). With a language like Java, it's not terribly difficult to find a rock star programmer to jump in and immediately make a difference. So my question is, even though there are many complaints about established languages, and many innovative improvements, variations, or completely new languages, are the alternatives simply too dangerous or costly for the corporate environment? If so, how will these innovative languages ever become adopted? It seems like technology moves too fast for innovations, even really good ones, to have time to build a following big enough to stay around when things start to change."} {"_id": "190152", "title": "In which order should I do comparisons?", "text": "I'm a strong proponent of writing if statements like this: variable == constant Because to me it just makes sense, it is more readable than the inverted: constant == variable Which seems to be used a lot by C-programmers. And I see the use, namely that the interpreter or compiler will throw an error and let you know if you're not doing a comparison. But still it is less readable, and for that reason alone I don't think comparisons should be written in the manner of the second example. **The actual question is:** > **Does it exist a general best practice for this, or is it different > depending on language/religion/age/etc..?** I'm happy that so many seem to understand why you'd want to do as in the latter example, but that is not what I'm asking about."} {"_id": "188384", "title": "coding style for If condition", "text": "I came across below style of writing if statements in C#, on msdn code examples. Usually when I write if statements, the conditions I would write `(Customer != null) I want to know if there is any difference/gain in writing statement like below: Customer customer; if (null != customer) { // some code } Or if (\"\" != customer.Name) { // some code }"} {"_id": "212781", "title": "Boolean condition before variable", "text": "I have noticed this style from time to time: if ( 0 == myVar ) Rather than: if ( myVar == 0 ) Is this just the individual programmers idiom? A defensive programming style? Does anyone know if it originated in some canonical text?"} {"_id": "236501", "title": "True/false on the left or the right?", "text": "I have heard that generally, an expression like: `if (true === $variable)` is faster than: `if ($variable === true)` My question is about the performance, not the readability. Questions such as this one do not mention explicitly in any of their answers whether there is a performance difference between the two. Is it true? Is the first form really faster than the second?"} {"_id": "104147", "title": "Ideas on writing a meaningful resume which is not a compilation of buzz words?", "text": "I'm am about to commence redesigning my resume all over. What I'm stuck with is where should I draw the line between writing _everything_ down and writing too little. 
I don't want my resume to be a pack of buzzwords nor do I want to seem to not know any of the things I do know. Keep these thoughts in mind and wanting to write a _meaningful_ resume, how do I go about it. Someone reading my resume should be able to tell if I'm a hacker culture person or someone who went to computer school just for the heck of it. **Edit:** In view of answer majorly focusing on buzzwords I would like to point out I'm also looking for things that make my resume more meaningful - I want the reader to be able to tell the difference between a hacker person and just another guy who went to computer school."} {"_id": "128573", "title": "Who is Configuration Manager?", "text": "I would like to ask members of the community about the role of Configuration Manager, as you see it. I'm not asking what Configuration Management is, as long it had been asked before. What I need to know is: 1. What tasks do you think Configuration Manager should perform (or performs) in your team? 2. What is primary responsibility of Configuration Manager? 3. What are secondary/auxiliary responsibilities of Configuration Manager? 4. Does Configuration Manager need to be in charge of development processes on the project/company or he should be told what to do? 5. What are relations between Configuration Manager, Build Manager, Release Manager, Deployment Engineer, CI Engineer roles? Aren't they all the same - Configuration Management? 6. Maybe term Configuration Management is redundant and Technical/Team Lead should do all the related work instead? It would be really great if you could share your vision and experience."} {"_id": "130161", "title": "Should I refactor this rails application?", "text": "I have taken over the code base for a ruby-on-rails application which relies very heavily on ActiveRecord callbacks to perform domain rules. The application can be compared to a bank application, where a customer has an Account. On each account you can register a Transaction (an amount of money that is either inserted or removed from the account). The Account has an account_balance that is updated every time a transaction is created. This update happens as an ActiveRecord callback. class Account < ActiveRecord::Base has_many :transactions def update_account_balance account_balance = 0 self.transaction_logs.each do |transaction| account_balance = account_balance + (transaction.amount + transaction.commission) end self.account_balance = account_balance end end class Transaction < ActiveRecord::Base belongs_to :account after_save :update_account_value def update_account_value self.account.update_account_balance self.account.save! end end I find that this way of coding makes it very difficult to figure out what happens (and what goes wrong when a bug occurs). Also, since the account needs to load all transactions, this will be heavier as more and more transactions are added. Had I been developing an application like this in .NET (in which I am much more familiar) I would have created something like this: public class Account { public void AddTransaction(decimal amount) { this.Transactions.Add(new Transaction(amount)); this.AccountValue += amount; } } And then made sure that a Transaction object is immutable. I am very tempted to refactor the system into something like that. But should I? Would that be the \"rails way\"? The drawback, as far as I can tell, is that my controller needs to explicitly handle controlling database transactions, something which happened implicitly in the Transaction.save! 
call the application currently makes. Are there other factors I should be aware of?"} {"_id": "130165", "title": "How could I implement multitouch gestures without a start event?", "text": "While working on multitouch, one of the problems I've run into is the fact that nobody seems to do gesture recognition without some kind of init event, whether it's a mouse click or contact with a capacitive touchscreen. Is it possible to do an \"always listening\" gesture handler? If so, how do I do it without mirroring input in realtime and thereby imposing heavy processor overheads? Is there some kind of downsampling (i.e. take every nth input from the cursor)?"} {"_id": "74089", "title": "Does anyone have insight into whether MonoDroid is really dead?", "text": "My company just recently invested in the Mono for Android tools for Visual Studio as we have a lot of .NET developers and were impressed with how powerful the MonoDroid tools seemed. After reading the ZDNet post I was saddened to see that this project may be dead. Is there anyone out there who might know anything more about this than what is listed in that article? Apparently legions of developers were let go from the Mono project, but I'm wondering if it's true. Any information would be greatly appreciated. We just bought the Enterprise 5 license, which was pretty costly, and I'd be pretty mad if it died after I just bought this thing and started learning it!"} {"_id": "253254", "title": "Why do people nowadays use factory classes so often?", "text": "I've seen the history of several C#/Java class library projects on GitHub and CodePlex, and it seems like a trend. Why do people nowadays use factory classes so often for almost everything? For instance: we have a pretty good library, where objects are created the old-fashioned way - by invoking public constructors of classes. Now, in the last commits the authors quickly changed all of the public constructors of thousands of classes to internal, and also created one huge factory class with thousands of CreateXXX static methods that just return new objects by calling the internal constructors of the classes. The external project API is broken, well done... Why? What is the point of such \"refactoring\"? What are the great benefits of replacing calls to all public class constructors with static factory method calls? Is having public constructors considered \"bad practice\" now?"} {"_id": "27301", "title": "Writing Unit Tests in the Middle", "text": "Is unit testing a 100% or not at all kind of deal? I was browsing through my old projects and started adding features, this time with unit testing. However, is this ultimately worthless if I'm going to be re-using past components that do not have unit tests? Do I need to write unit tests for all the previous classes, or not bother at all, or is it OK to only write unit tests for the new stuff I'm adding?"} {"_id": "135534", "title": "Generalized VS Specialized technical solution; what to take into account?", "text": "We recently had a discussion in the office because of conflicting views between developers. One side (side S) argued technical solutions generally need to be as specific as possible, while the other side (side G) argued generalized solutions are preferred. I'll try to keep it short; we deal with file transfers and we need to start saving three files (log.txt, details.txt, receipt.pdf) for each transfer.
We already have a files table that we'll use, but we all agree that we need a different table to create a one-to-many relationship between transfers and files. Side G proposed creating a general resource_attachments table that can attach files to any type of resource; it would look something like this: resource_attachments - id : int - entityId : int - entityType : string - fileId : int - kind : string Side S disagreed and proposed creating a specialized transfer_attachments table, something like this: transfer_attachments - id : int - transferId : int - fileId : int - kind : string One argument of side S is that any resource should be as specific as possible so its role and its attributes and their possible values are clear, so any new developer will have no trouble understanding it. An argument of side G was that a more generalized approach would provide a broader range of functionality; you can attach files to any resource, whatever their role will be. Though the practical differences are very small, there is some fundamental stuff going on here (this is where it gets philosophical); one person in the room sharply observed that the generalizer is the Ruby expert, while the specializer is the Java expert. Ruby is an interpreted language with dynamic typing while Java is a compiled language with static typing. I found the matter wildly interesting and was wondering which approach is preferred - a specialized or a generalized solution - and what factors should be taken into account? Note that we're only talking about the technical part of the solution; this has nothing to do with the end-user experience."} {"_id": "135537", "title": "Is Spring + Hibernate preferred instead of EJB 3?", "text": "It is my perception that whenever new JEE projects start (where these technologies would be applicable), people prefer to use a combination of Spring + Hibernate instead of EJB 3. It seems junior programmers are even advised to go for that instead of EJB. Is this personal preference, or are there pertinent reasons for it (e.g. personal scars created by earlier EJB versions which caused mistrust in EJB, or technology bloat, versus performance reasons or learning curve)?"} {"_id": "135539", "title": "Program/library to match string patterns", "text": "I have city names, like Newyork. Somebody may misspell one and write it as Newyoork. I want an algorithm or library which can match such patterns with some confidence level. Any idea how to do it, or whether any pattern recognition library exists for this? It is the same as when we type something into Google with a spelling mistake in it and it automatically corrects the mistake."} {"_id": "181087", "title": "Do you use Instant Messaging to communicate during your day with your fellow devs?", "text": "Hello Fellow Programmers: I run a small business and we typically communicate over IM during the day. We sign on to IM when we arrive and stay on all day. It seems to be quite distracting, in my opinion, and could detract from work. I'm interested in how other small businesses use IM. Do you use IM? If so, do you stay on all day or only sign on when there is a question initiated by e-mail? I'm trying to create the most productive and effective work environment for geographically separated programmers. Interested in your opinions.
Thank you."} {"_id": "140717", "title": "Interpreted languages vs compiled languages (from a deployment point of view)", "text": "I have read many times that if you keep updating your website, you may consider an interpreted language to be better for this case. I want to understand why an interpreted server-side language is better if I keep updating my site (such as adding new features or changing some functionality)."} {"_id": "181082", "title": "Can I use a GPL licensed piece of JavaScript on a commercial website?", "text": "I'm looking at plupload for some upload features on a website I'm developing. Now plupload is GNU GPLv2 licensed and that implies that all derivative software should also be GPL licensed (right?). Therefore, if I run plupload through my minifier, the single minified js file will violate the license, and upon request, I must make _all_ the sources of my page available (right?). I'm curious about: * can I use the plupload API without having to open source my code? * does the license exclude minified code somehow? See also: http://stackoverflow.com/questions/3213767/minified-javascript-and-bsd-license"} {"_id": "203687", "title": "Implicit or explicit database save actions", "text": "I'm writing a web app that requires the usage of drag and drop as well as other jQuery/HTML5 features. There are two options for saving a user's changes to a database: * Implicit: database save on the end of an event, such as drop, hover, etc. * Explicit: save changes to a local array and submit on the submit button click event Currently, the only difference between the two is that implicit saves hooked to user events result in many more POST/GET data requests as compared to the explicit save. Other than that, is there a major distinction between the two, and what are the reasons for choosing either option?"} {"_id": "134669", "title": "Sharding with IoC", "text": "I've come across a situation where I need to shard a database (Oracle, but that doesn't particularly matter). The gist of the problem is I have written a large-scale system in a fairly standard TDD style, with repositories and services hidden behind interfaces. Dependency injection is used to implement particular versions of the services and repositories at runtime, with this functionality allowing for crazy client requirements (for example we migrated from MSSQL to Oracle in under a week, which would not have been possible if the architecture was not properly decoupled!). The problem now arises that I need to shard off a large portion of the data into a separate archival system, for performance reasons. The table structure has to remain the same due to time constraints, and I would like the purity of the architecture to remain the same, with no one section of the system having knowledge of the inner workings of another. The system will need to be switched from one shard to another dynamically via a user input (what input this is is yet to be finalised). So, the two ways I can think of to approach this issue are: 1 - Pass in the required connection string on creation of a repository. This is not ideal as it means that the service layer, or worse, the UI, needs to know about the underlying sharding. This will make the design very inflexible in future, and require things like the sit-in-front caching layer (a write-through cache implementation on top of the IService) to be updated so their methods take a connection string as a parameter, which just seems _wrong_.
2 - Create a new IConnectionProvider subsection of the system that abstracts the connection information for the repository. This would be more ideal, as the repositories would remain self-contained, but I cannot think of a good way to switch this implementation without having to specify the relevant interface as a parameter to each method call, which goes back to the service layer specifying which connection to use. Does anyone have any experience with this issue, or any opinions on a preferably low-labor yet still fairly pure implementation in this situation?"} {"_id": "134664", "title": "In Scrum, should tasks such as development environment set-up and capability development be managed as subtasks within actual user stories?", "text": "Sometimes in projects we need to spend time on tasks such as: 1. exploring alternate frameworks and tools 2. learning the framework and tools selected for the project 3. setting up the servers and project infrastructure (version control, build environments, databases, etc.) If we are using user stories, where should all this work go? One option is to make it all part of the first user story (e.g. make the homepage for the application). Another option is to do a spike for these tasks. A third option is to make the task part of an Issue/Impediment (e.g. development environment not selected yet) rather than a user story."} {"_id": "146715", "title": "What is an effective way to obtain use case information from preoccupied professionals?", "text": "When designing a novel system in a hospital or other clinical setting, I think it would be important to gather information from doctors, nurses, pharmacists, technicians, et al. on use cases for the system. Aside from the fact that all of these are also busy professionals, what is a good way to get them to share information pertinent to their (possibly chaotic and time-sensitive) workflow that would be usable in a software engineering design?"} {"_id": "185503", "title": "What can I do to strengthen up my pen and pencil coding skills?", "text": "I want to get better at writing code with pen and paper. Whether it's pseudocode or real code doesn't matter. Could you kindly suggest some resources for this?"} {"_id": "9991", "title": "What to do when the programming activity becomes a problem?", "text": "I once saw a program (can't remember which) where it talked about people \"experiencing flow\" when they are doing something they are passionate about. When \"in flow\", they tend to lose track of time and their surroundings, concentrating only on the activity at hand. This happens a lot for me when I program; most particularly when I face a problem. I refuse to give up until it's solved. This usually leads to hours just rushing by: I forget to eat lunch, dinner gets pushed far into the evening, and when I finally look at the clock, it's way into the wee hours of the night and I will only get a few hours of sleep before having to rise early in the morning. (This is not to say that I'm in flow _only_ when facing a problem - but I find it particularly hard to stop programming and step back when there's something I can't solve immediately.) I love programming, but I hate it when it disrupts my normal routines (most importantly eating and sleeping patterns). And sitting still for so many hours, staring at a screen, is not healthy.
Please, any ideas on how I can get my rampant programming activity under control?"} {"_id": "148969", "title": "Scuttlebutt Reconciliation in the paper “Efficient Reconciliation and Flow Control for Anti-Entropy Protocols”", "text": "I am reading the paper \"Efficient Reconciliation and Flow Control for Anti-Entropy Protocols\", and I couldn't clearly understand Section 3.2, \"Scuttlebutt Reconciliation\". Here I extract some sentences from the paper which especially confuse me: 1. If gossip messages were unlimited in size, then the sets contain the exact differences, just like with precise reconciliation. 2. Scuttlebutt requires that if a certain delta (r; k; v; n) is omitted, then all the deltas with higher version numbers for the same r should be omitted as well. 3. Scuttlebutt satisfies the global invariant C(p;q) for any two processes p and q:"} {"_id": "74334", "title": "How to use version control", "text": "I'm developing a web site in PHP on localhost and, as modules of it get completed, I upload them to the cloud so that my friends can alpha test them. As I keep developing, I have lots of files and I lose track of which files I've edited or changed, etc. I've heard of something called 'version control' to manage all those, but am not sure how it works. So, my question is: Is there an easy way/service/application available to me to track all the edits/changes/new files and manage the files as I develop the website? As soon as I'm done with a module, I want to upload it to the cloud (I'm using Amazon Cloud Service). If something happens to the new files, I might want to get back to the old files. And maybe, in a click or two, see the files which I've edited or changed since the ones I last uploaded?"} {"_id": "200486", "title": "Is Code First with Migrations or SQL Server Data Tools a better fit?", "text": "I have been given a spec to create a new MVC4 website; it will not be too large a project at first, but I suspect it will grow as the business gets new ideas for it. Using .NET 4.5, ASP.NET MVC4 and EF, I have to choose between Code First with Migrations or SQL Server Data Tools (SSDT) for handling my database. With SSDT I can control my database in a project as part of my solution and handle the changes all the way from dev through to production and beyond using dacpac files. My experience of Code First from MVC3 was not to use it beyond development due to the limited database options. It would always end up with dropping the Db on model change or handling the Db changes manually. However, I am led to believe that with MVC4 Migrations that is no longer the case and I can now push updates to the Db. So my question is which one is the most efficient to use, based on saving time/effort in development, but also scalable and able to handle production changes. I liked Code First and the ability to generate my database from models; does the introduction of Migrations now make it viable in production?"} {"_id": "200480", "title": "Does learning to play an instrument improve programming ability?", "text": "I've seen plenty of questions asking if _listening_ to music boosts productivity, etc., but I haven't been able to find one about _performing_ music. Learning to play the piano has been on my to-do list for a while, partly because I have a sneaky suspicion that learning to deal with the relationships, recursions, etc. inherent in music will make me a better programmer (plus it looks fun).
Does anyone know of any research on, or have first-hand experience with, learning a musical instrument having any sort of noticeable effect on one's programming or mathematical ability? Most of what I've been able to find seems to be interested in the effects of learning music at a very young age on boosting overall mental ability. I would also be interested in the other direction, i.e. learning to program improving one's ability with an instrument."} {"_id": "200481", "title": "Which closely represents Aggregation?", "text": "I know Aggregation is a has-a relationship, but I encountered a question in a test which did not make sense (and had grammatical mistakes as well): Which of the following statements correctly describe the concept of aggregation in OOP? * **A** Splitting a class into several sub classes * **B** Creating one piece of code that works with all similar objects * **C** Accessing data only through methods * **D** Combining several classes to create a new class * **E** Creating a new class from an existing class I think: * **A** Could be true. * **B** Sounds like inheritance. * **C** Seems like a property. * **D** Could be true. * **E** Could be true. I'm uncertain how the has-a relationship translates into actual code in these statements. Any ideas?"} {"_id": "166748", "title": "What is consultant application development?", "text": "Recently I got a job offer from CSC as a consultant in application development. Currently I am working as a software engineer. Can somebody enlighten me on what consultant application development is - career growth, job type, security - and what the role involves?"} {"_id": "241083", "title": "Architecture or Pattern for handling properties with custom setter/getter?", "text": "**Current Situation:** I'm doing a simple MVC site for keeping journals as a personal project. My concern is I'm trying to keep the interaction between the pages and the classes simple. Where I run into issues is the password field. My setter encrypts the password, so the getter retrieves the encrypted password. public class JournalBook { private IEncryptor _encryptor { get; set; } private String _password { get; set; } public Int32 id { get; set; } public String name { get; set; } public String description { get; set; } public String password { get { return this._password; } set { this.setPassword(this._password, value, value); } } public List<Journal> journals { get; set; } public DateTime created { get; set; } public DateTime lastModified { get; set; } public Boolean passwordProtected { get { return this.password != null && this.password != String.Empty; } } ... } I'm currently using model-binding to submit changes or create new JournalBooks (like below). The problem is that in the code below `book.password` is always null; I'm pretty sure this is because of the custom setter. [HttpPost] public ActionResult Create(JournalBook book) { // Create the JournalBook if not null. if (book != null) this.JournalBooks.Add(book); return RedirectToAction(\"Index\"); } **Question(s):** Should I be handling this not in the property's getter/setter? Is there a pattern or architecture that allows for model-binding or another simple method when properties need to have custom getters/setters to manipulate the data? To summarize, how can I handle the password storing with encryption such that I have the following: * Robust architecture * I don't store the password as plaintext.
* Submitting a new or modified `JournalBook` is as easy as default model-binding (or close to it)."} {"_id": "204309", "title": "Are there algorithms for polling optimization?", "text": "I have a web application with an async HTTP backend, which gets called by the client via AJAX requests. The client has to start a job and then poll for the result. I started with a simple 150ms polling interval, which was fine for small jobs, but big jobs, which could take several minutes, threw many failed requests at the server. Currently, I just add 1000ms to the delay after every poll, so the polling gets slower over time for longer requests, but never longer than 10 seconds. **Are there some formulas, statistics, or algorithms that I could use to dynamically optimize the polling time?** (One possible scheme is sketched below.)"} {"_id": "87081", "title": "Criteria When Evaluating Middleware", "text": "What are some suitable criteria for evaluating a list of companies' complimentary middleware? I am doing a trade study on several companies' middleware implementations and so far I have come up with the following criteria: 1. Cost 2. Ease of Development/Ramp Up Time 3. Licensing 4. Support 5. Quality of Tools"} {"_id": "35276", "title": "The advantages & disadvantages to be had from using a Web Framework?", "text": "This question is focused on **extracting the advantages and disadvantages of using web-based frameworks** (such as CakePHP, Zend, jQuery, ASP.NET). This question is completely _language agnostic_. Let me start with the notion of _\"standing on the shoulders of giants\"_. ## Advantages: * **Empowers Developers** - by taking features that would previously have taken 100s of lines of code and compressing them into one simple function call, frameworks empower developers to integrate more complex features into their web sites. * Allows for **quicker development** of applications - this is very relevant for people that need websites created in a very small window (does anyone have any examples of this?) * **Lower Costs** - allows programmers to pass cost savings on to the customer; a whole new range of customers is generated that wanted a website but previously could not afford the higher development costs. ## Disadvantages: * **Lost Understanding** - by relying on the features of a framework, a developer is in danger of losing understanding of how things work (underneath the hood). * The configuration cliff - once you go further than the configuration of your framework, your productivity drops right off; it can be difficult to implement features outside of a framework's configuration. * **Developer tramlines** - you (the developer) have to do things the way that the framework's developers want you to do them. I wonder what people make of my points, and whether anybody disagrees with them? Also, if people have additional points, I would be grateful."} {"_id": "101937", "title": "How do I cluster strings based on a relation between two strings?", "text": "_If you don't know WEKA, you can try a theoretical answer. I don't need literal code/examples..._ I have a huge data set of strings in which I want to cluster the strings to find the most related ones; these could as well be seen as duplicates. I already have a set of couples of strings which I know are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that would return me the most relevant possible couples of strings for which we don't know yet that they are duplicates. I believe that I need clustering for this, which type?
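For the polling-optimization question above, here is a minimal sketch of one common scheme - truncated exponential backoff with jitter - written in Python under stated assumptions: `poll` is a hypothetical callable that returns the job result, or `None` while the job is still running, and the delay parameters are illustrative rather than tuned values.

```python
import random
import time

def poll_until_done(poll, base=0.15, factor=2.0, cap=10.0):
    """Poll with truncated exponential backoff plus jitter.

    base   -- initial delay in seconds (matches the 150ms in the question)
    factor -- multiplicative growth of the delay after each poll
    cap    -- maximum delay (matches the 10-second ceiling in the question)
    """
    delay = base
    while True:
        result = poll()
        if result is not None:
            return result
        # Sleep a random fraction of the current delay so that many
        # clients polling the same server do not fire in lockstep.
        time.sleep(random.uniform(delay / 2.0, delay))
        delay = min(delay * factor, cap)
```

Compared to the fixed +1000ms increment, multiplicative growth reaches the cap after only a handful of polls, so short jobs stay responsive while long jobs quickly settle at the cheap ten-second rate.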
Note that I want to base myself on **word occurrence comparison**, not on interpretation or meaning. * * * Here is an example of two strings of which we know they are duplicates (in our vision of them): * The weather is really cold and it is raining. * It is raining and the weather is really cold. Now, the following strings also exist (most to least relevant, ignoring stop words): * Is the weather really that cold today? * Rainy days are awful. * I see the sunshine outside. The software would return the following two strings as most relevant, which aren't known to be duplicates: * The weather is really cold and it is raining. * Is the weather really that cold today? Then, I would mark that as duplicate or not duplicate and it would present me with another couple. * * * How do I go about implementing this in the most efficient way that I can apply to a large data set?"} {"_id": "1189", "title": "What should I do to be language-agnostic?", "text": "Right now I work with ASP.NET and C#. I have done decent work in Java as well. I am planning my career in such a way that I should be language-agnostic someday. What are the things that I need to learn? First would be OOP paradigms, as they speak about class design. Are there any others?"} {"_id": "101931", "title": "Is Groovy going away?", "text": "I am sure this question has been asked many times. However, I would like to ask it again with a focus on the future of these languages. I was first introduced to Groovy and really liked it. I felt the syntax was simpler, it was much closer to Java, and I was able to quickly learn Grails. Then there was Scala, and the web framework Lift. I am still learning Scala and I find the syntax very difficult at times. However, I still wonder what the future of Groovy is. When the author of Groovy says he would never have created Groovy if he had known about Scala, it makes me wonder if there is a future at all. Of course Groovy has come a long way and Grails is used today by many large companies. If one were to look at Grails vs Lift today, Grails would be the clear winner. More companies are using it. But given everything I have said so far, I am interested to know if one should invest in Groovy? Is Groovy going away and is Scala the better choice? (I understand if this question is really broad and might be closed. I hope to make it an open wiki for others though.) If the CEO of BMW says he drives a Mercedes, then one would wonder why shouldn't we all drive a Mercedes too, right?"} {"_id": "127919", "title": "Smallest lexicographical rotation of a string using suffix arrays in O(n)", "text": "I will quote the problem from ACM 2003: > Consider a string of length n (1 <= n <= 100000). Determine its minimum > lexicographic rotation. For example, the rotations of the string “alabala” > are: > > alabala > > labalaa > > abalaal > > balaala > > alaalab > > laalaba > > aalabal > > and the smallest among them is “aalabal”. As for the solution - I know I need to construct a suffix array - and let's say I can do that in O(n). My question still is, how can I find the smallest rotation in O(n)? (n = length of the string) I'm very interested in this problem and still I somehow don't get the solution. I'm more interested in the concept and how to solve the problem, not in the concrete implementation. Note: minimum rotation means in the same order as in an English dictionary - \"dwor\" is before \"word\" because d is before w. EDIT: suffix array construction takes O(N). LAST EDIT: I think I found a solution!!!
What if I just merged the two strings? So if the string is \"alabala\" the new string would be \"alabalaalabala\", and now I'd just construct a suffix array of this (in O(2n) = O(n)) and take the first suffix? I guess this may be right. What do you think? Thank you! (A sketch of this doubling idea appears a little further below.)"} {"_id": "71302", "title": "How can I get free artwork for my free software?", "text": "So many free software projects have beautiful art, especially websites, that I wonder where the coders meet their artists. Is there some place to ask for this besides the local art majors' school board? I thought of IRC, but I fear that one might actually bother the small and frequent userbase by asking it there."} {"_id": "84193", "title": "Sr. Dev made a database I disagree with. Advice sought", "text": "I need to run to work soon, so this will be brief. I've only been with the company for a couple weeks. It is a good company; this is a contractor who just happens to be twice my age. I am new to the professional programming field. This database does exactly what Wikipedia says not to do for 1NF. It repeats telephone columns in various tables. That is one strike. Strike two: Some data is duplicated across tables. Flat out duplicated. Strike three: Not vital, but he turned all the keys into bigints. All the \"FK_\" columns are also nullable. Wtf? We have not started using this database YET, but there has been a big time crunch to meet the client's needs and timeline since I joined, and the current timeline will put it into use, say, tomorrow. It was okay for me to sit back while he made a mess since I did not have to deal with it directly, but it sounds like I'm going to be taking over this section of the code while he's needed to architect something else. Any advice would be greatly appreciated. My boss is a great man, but is also very, very busy, and more stress is the last thing he needs. **Update:** Sorry this came off as a rant and less of a proper question. I really do appreciate all of the responses and they've helped me start viewing the \"problem\" more analytically. There's also some benefit to them being generalized in that the same advice will still apply to future \"problems.\" Thank you. **Second Update:** So I kindly tried to figure out why he did what he did, and for the most part he even admitted he didn't have a very good reason. We ended up spending some time and cutting almost all of the duplicated columns. For the rest of the oddities, I can bite my cheek well enough since they're just annoying and won't potentially wreak havoc if every duplicated field isn't updated. All in all a good day. PS I'm having a hard time choosing a best answer since there are so many good ones."} {"_id": "33170", "title": "How to deal with WYSIWYG editors?", "text": "There are now lots of WYSIWYG editors; however, whenever we use one on a CMS-based website we consistently have issues. The biggest being users pasting content from Word or other online sources and all the various formatting rules being added in \"behind the scenes\". How do you deal with these editors on a live production website? I love Markdown; however, its target market is most definitely the tech industry."} {"_id": "123406", "title": "What are some FizzBuzz-type questions for web or SQL developers?", "text": "After a while, we are hiring again, and I'm reviewing tests for programmers; some of them are a bit out of date. What are some of the FizzBuzz-type questions for web developers and SQL?
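Returning to the minimal-rotation question above: here is a minimal Python sketch of the doubling idea. Every rotation of s is a length-n window of s+s, so the brute-force version below just takes the minimum over the n candidate windows; it is O(n^2) in the worst case, and the O(n) approaches (a suffix array over s+s with care about ties, or Booth's algorithm) refine exactly this observation.

```python
def min_rotation(s: str) -> str:
    """Smallest lexicographic rotation of s via the doubling trick.

    Every rotation of s appears as a length-len(s) substring of s+s,
    so the answer is the minimum over those len(s) windows.
    """
    doubled = s + s
    n = len(s)
    return min(doubled[i:i + n] for i in range(n))

# Example from the question:
assert min_rotation("alabala") == "aalabal"
```

One caveat worth noting: taking the literal first suffix of a suffix array built over s+s is not quite enough on its own, because suffixes longer than n can compare differently than the windows they begin; the usual fixes are to append a sentinel smaller than every character, or to compare only the first n characters of each suffix.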
That is, not too trivial, but still solvable in five to ten minutes with pen and paper and without Google? I typically eliminate about two thirds or more of the candidates based on CV, and then all but a few really good candidates in a one-hour interview (which can be over the phone). At this point the candidate is writing a personality test and has a chance to write a bit of FizzBuzz-like code. So, I'm not trying to eliminate a bunch of candidates, but I am trying to validate my initial assessment that the candidate is hireable and able to code."} {"_id": "232522", "title": "Web API architecture design", "text": "I'm learning and diving into Web API. I've used web services before, but not specifically with Web API. Below is how I am designing it so far and I was curious about the feedback. I have a ReturnMessages object. This basically is a standard object that gets returned from any of the API calls, whether correctly executed or when an error happens. Within each API method I have a try catch. If everything is alright, I specify the values I need within my ReturnMessages object and then return `Ok(_returnMessages)`. Now if an error happens I fill in the ReturnMessages object with the error information and once again return `Ok(_returnMessages)`. The ReturnMessages object contains a ReturnClass field to hold any type of other objects I may need to return - a single object or an array. It also has a return code, a return message, a friendly message for the end user in case something wrong happens, and then a generic string list of data that was passed in that I can send off and use for testing purposes to try and re-create the error. Below is a code sample from one of the methods that shows off what I am talking about. Is this approach OK, always returning Ok with the object I'm returning, or am I missing potential pieces within the Web API that I should be utilizing? I've seen the NotFound exceptions and all that other fun stuff. **EDIT:** I made some changes in terms of what I was passing back for when things don't work out. I took out the ReturnMessage and also the ReturnData, and wrote that to the application event log as a new string value for this particular web API. That occurs in the `_lw` code line, which is just a LogWriter class that uses the values passed in to write to the application event log. public IHttpActionResult method1(string arg1 = null, string arg2 = null, string arg3 = null) { try { clrObject1 varClrObject = (from p in db.clrObject1 where p.column1 == arg1 select p).SingleOrDefault(); if (varClrObject == null) { _returnMessages = new ReturnMessages { ReturnCode = 204, FriendlyErrorMessage = \"Nothing found matches the supplied values.\" }; _lw.SetupEventLog(\"No data was found that matched the information supplied.\\n\\nParameters:\\narg1: \" + arg1 + \"\\narg2: \" + arg2 + \"\\narg3=\" + arg3, \"Warning\"); } else { _returnMessages = new ReturnMessages { ReturnCode = 200, ReturnClass = varClrObject, ReturnMessage = \"Information Successfully Retrieved\" }; } } catch (Exception e) { _returnMessages = new ReturnMessages { FriendlyErrorMessage = \"An error has occurred while getting your information. Please try again in a few minutes.
A notification was already sent to the company about this issue.\", ReturnCode = 400 }; _lw.SetupEventLog(\"Parameters:\\narg1: \" + arg1 + \"\\narg2: \" + arg2 + \"\\narg3=\" + arg3, \"Error\", e); } return Ok(new { Response = _returnMessages }); }"} {"_id": "7008", "title": "What security practices should you be aware of when writing software?", "text": "What different types of security exist? Why and when should they be implemented? Example: SQL Injection Prevention"} {"_id": "7000", "title": "How much effort should we spend on programming for multiple cores?", "text": "Processors are getting more and more cores these days, which leaves me wondering... Should we, programmers, adapt to this behaviour and spend more effort on programming for multiple cores? To what extent should we do this and optimize for it? Threads? Affinity? Hardware optimizations? Something else?"} {"_id": "159830", "title": "Nearest color algorithm using Hex Triplet", "text": "The following page lists colors with names: http://en.wikipedia.org/wiki/List_of_colors. For example, the hex triplet #5D8AA8 is \"Air Force Blue\". This information will be stored in a database table (tbl_Color (HexTriplet,ColorName)) in my system. Suppose I created a color with hex triplet #5D8AA7. I need to get the nearest color available in the tbl_Color table. The expected answer is \"#5D8AA8 - Air Force Blue\", because #5D8AA8 is the nearest color to #5D8AA7. Do we have any algorithm for finding the nearest color? How would I write it using C# / Java? **REFERENCE** 1. http://stackoverflow.com/questions/5440051/algorithm-for-parsing-hex-into-color-family 2. http://stackoverflow.com/questions/6130621/algorithm-for-finding-the-color-between-two-others-in-the-colorspace-of-painte **Suggested Formula:** Suggested by @user281377. Choose the color where the sum of the squared differences is minimal: Square(Red(source)-Red(target)) + Square(Green(source)-Green(target)) + Square(Blue(source)-Blue(target))"} {"_id": "226158", "title": "Short Sequential Search vs. Regular Sequential Search", "text": "Assuming a list of elements is sorted in ascending order, we search sequentially from the first element, comparing the target to successive elements, either until we find the target (succeed) or until the current element is greater than the target or we reach the end of the list (fail). **Is Short Sequential Search ever more efficient than regular sequential search? If so, when?**"} {"_id": "226155", "title": "How do I traverse a tree without using recursion?", "text": "I have a very large in-memory node tree and need to traverse the tree, passing the returned values of each child node to its parent node. This has to be done until all the nodes have their data bubble up to the root node. Traversal works like this: private Data Execute(Node pNode) { Data[] values = new Data[pNode.Children.Count]; for(int i=0; i < pNode.Children.Count; i++) { values[i] = Execute(pNode.Children[i]); // recursive } return pNode.Process(values); } public void Start(Node pRoot) { Data result = Execute(pRoot); } This works fine, but I'm worried that the call stack limits the size of the node tree. How can the code be rewritten so that no recursive calls to `Execute` are made?"} {"_id": "742", "title": "Good furniture for programmers", "text": "What are some good furniture offerings for those of us that sit around all day every day? I'm interested in chairs, desks, etc... Bonus points for posting images. P.S.
Is the GeekDesk really as great as Stack Overflow would have us believe? That's a lot of cheddar for an adjustable table."} {"_id": "745", "title": "Staying alert and awake while coding", "text": "What methods do you use to stay awake and alert while working? Personally, I drink coffee non-stop throughout the day. But I've also heard of this thing called exercise that should help too. Does anyone else have tips and tricks to stay more awake and alert while working? Redbull? Maybe a magic pill that won't require me to sleep?"} {"_id": "159789", "title": "How can rotating release managers improve a project's velocity and stability?", "text": "The Wikipedia article on Parrot VM includes this unreferenced claim: > Core committers take turns producing releases in a revolving schedule, where > no single committer is responsible for multiple releases in a row. This > practice has improved the project's velocity and stability. Parrot's Release Manager role documentation doesn't offer any further insight into the process, and I couldn't find any reference for the claim. My first thoughts were that rotating release managers seems like a good idea, sharing the responsibility between as many people as possible, and having a certain degree of polyphony in releases. Is it, though? Rotating release managers has been proposed for Launchpad, and there were some interesting counterarguments: > * Release management is something that requires a good understanding of > all parts of the code and the authority to make calls under pressure if > issues come up during the release itself > * The less change we can have to the release process the better from an > operational perspective > * Don't really want an engineer to have to learn all this stuff on the job > as well as have other things to take care of (regular development > responsibilities) > * Any change of timezones of the releases would need to be approved with > the SAs > and: > I think this would be a great idea (mainly because of my lust for power), > but I also think that there should be some way of making sure that a release > manager doesn't get overwhelmed if something disastrous happens during > release week, maybe by having a deputy release manager at the same time (maybe > just falling back to Francis or Kiko would be sufficient). The practice doesn't appear to be very common, and the counterarguments seem reasonable and convincing. I'm quite confused about how it would improve a project's velocity and stability; is there something I'm missing, or is this just a bad edit on the Wikipedia article? Worth noting that the top-voted answer in the related \"Is rotating the lead developer a good or bad idea?\" question boldly notes: > **Don't rotate.**"} {"_id": "250208", "title": "Do immutable objects that constantly change impact memory/performance?", "text": "I'm writing a program that goes into a loop and keeps changing the state of some models (similar to a game). Naturally, many things are mutable. However, I'm also writing some classes that are immutable because they're inherently treated like values (for example: vectors, matrices, etc.) However, these values change on every loop (maybe 50-100 times a second). Does this mean that on every change, the program would need to allocate a new chunk of memory? If I'm using managed code, does this mean that the memory usage will build up very quickly?
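To make the allocation pattern being asked about concrete, here is a minimal Python sketch; the `Vector2` type and the update loop are illustrative assumptions, not code from the question.

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # instances cannot be mutated after creation
class Vector2:
    x: float
    y: float

    def __add__(self, other: "Vector2") -> "Vector2":
        # Immutability means every update builds a brand-new object.
        return Vector2(self.x + other.x, self.y + other.y)

position = Vector2(0.0, 0.0)
velocity = Vector2(1.0, 2.0)
for _ in range(100):                  # e.g. ~100 updates per second
    position = position + velocity   # allocates a fresh Vector2 each pass
# Each superseded Vector2 becomes garbage right away; the cost depends on
# how cheaply the runtime allocates and collects short-lived objects.
```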
How does this impact determinism, performance, and garbage collection in languages such as C# and Java, especially when many garbage collectors have to pause the entire program in order to clear the memory?"} {"_id": "52698", "title": "Putting books read in resume", "text": "Is it a good idea if I put the books I have read on my resume, or at least those related to software development?"} {"_id": "202227", "title": "Do I go by ID or by Label while programming?", "text": "Suppose I have a table \"Progress\" with two columns. One column is ID, which is an identity. Another column is Progress_Label: ID Progress_Label 1 Submitted 2 Approved by user 3 Rejected by leadership 4 Cancelled 5 Completed **What is the best programming practice** - should I go by ID or by label? In my stored procedures, functions, or in programming code methods etc., should I search records by the ID = 3, for example, or should I type \"Where progress_label is Rejected by leadership\"? If somebody wanted to edit the labels, wouldn't all the code stop working if I go by the label? At the same time, if I type the label, the code looks more understandable, since it says right in the code what it is we are looking for. Are there any articles regarding this?"} {"_id": "159787", "title": "Replacing Multiple Inheritance with delegation", "text": "I was going through \"Object Oriented Modelling and Design\" by James Rumbaugh et al. and it said that in languages where multiple inheritance is not supported, like Java, three mechanisms can be used as workarounds: 1. Delegation using aggregation of roles 2. Inherit the most important class and delegate the rest 3. Nested generalization I couldn't understand the examples given in the text. Could anyone please explain these with an example? I know that the \"diamond problem\" is a problem within multiple inheritance and that Java supports multiple inheritance of interfaces."} {"_id": "47860", "title": "Why does the word \"Pythonic\" exist?", "text": "Honestly, I hate the word \"Pythonic\" -- it's used as a simple synonym of \"good\" in many circles, and I think that's pretentious. Those who use it are silently saying that good code cannot be written in a language other than Python. Not saying Python is a bad language, but it's certainly not the \" **_end all be all language to solve ALL of everyone's problems forever!_** \" (Because that language does not exist). What it seems like people who use this word really mean is \"idiomatic\" rather than \"Pythonic\" -- and of course the word \"idiomatic\" already exists. Therefore I wonder: Why does the word \"Pythonic\" exist?"} {"_id": "208488", "title": "How to maximise chances of success in an HR interview?", "text": "I was recently rejected for a Junior Web Developer role after a first interview with two members of HR staff. Personally, I think it's crazy that someone who hasn't written a single line of code in their life can make hiring decisions about programmers - the interview was an hour and a half long and they didn't ask a single technical question! However, this is not about opinions. So leaving opinion aside, my questions are: 1. How common is it for hiring decisions to be made by non-technical staff? 2.
And more importantly... what can a programmer do to emphasise their skills and make a positive impact in an HR interview?"} {"_id": "219700", "title": "Is it Typical for Large Software Companies to Not Document or Refactor Code?", "text": "I have begun working at a large software company and was assigned to a project that is over a million and a half lines of code. It's part of a program suite that is sold to clients (not an in-house project) and the source code can be purchased if they desire (although given the extra fees associated with it, this seems rare). They've been doing software design for years and their current products are intended to be continued for the foreseeable future. To my surprise, the million and a half lines of code are almost completely lacking in documentation. Moreover, there are some areas of code that are incredibly messy to follow or could use some refactoring to become much easier to understand (for instance, an improvement in the programming language came out 10 or so years ago that would make large portions of code much cleaner, not to mention less prone to bugs). There don't seem to be any efforts to rectify this, and my offers to do so for the parts I'm working with have met with resistance, for which I've never really gotten a clear answer. Are these practices common in a large business in the software industry? Or is my company unique in its lack of refactoring and documentation?"} {"_id": "179904", "title": "Is asking for control totals on a file an outdated means of verifying a file?", "text": "I'm in a new position where I need to process flat files on a regular basis. The last time I did this was 5 or 6 years ago, but as part of the file layout I received control totals. They gave me simple information on the file, like the total number of records as well as sums of the important fields. This helped me during testing and then also during production to verify that the file arrived and had correct information. I have asked for similar data for this new project and have hit a wall of no. Is this no longer a standard practice? Is there a better way?"} {"_id": "76659", "title": "How is a \"Software Developer\" different from a \"Software Consultant\"? What makes a consultant?", "text": "I have seen a lot of people claiming to be a \"software consultant\". These consultants do what a normal software developer does: write code, estimate tasks, fix bugs and attend meetings, etc. The only difference is the financials; consultants end up earning more. Then how is a software developer different from a \"consultant\"? In addition to the main question, I would like to know how a software developer can become a consultant. Are there any specific guidelines for a consultant? Do they need to amass certifications and write up research papers? Please do not confuse the software consultant with a management consultant.
Software consultants I have seen are not managers."} {"_id": "165532", "title": "Multiple attribution in Python, JS, ...?", "text": "I accidentally discovered this `a=b=c=d=e=f=2` in the Python (2.7) interpreter (and JavaScript a few minutes later). Is this a feature or just the way the interpreter works, and if it is a feature, what is it called? Do other languages have this feature?"} {"_id": "171485", "title": "What is the concept of software wear and tear?", "text": "I have heard that over time, software can begin to show signs of wear and tear. What does wear and tear of software mean? Software itself is not a physical entity, so how can there be wear and tear?"} {"_id": "171481", "title": "Unable to debug an encoded JavaScript?", "text": "I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given in this link over here. The encoding here is simple and works by shifting the unicode values by whatever codekey was used during encoding. The code that does the decoding is given here in plain English below. I'm interested in knowing or understanding the values (e.g. s1, t). For example, when the value of i=0, what values would the following attributes/methods hold: s1.charCodeAt(i) and s.substr(s.length-1,1)? The reason I'm doing this is to understand how the codekey function really works. I don't see anything in the code above which tells it to decode on the basis of the codekey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending upon the codekey selected during the encoding process. One can verify this using the link I have given above. However, to debug, I'm using the Firebug addon with the script running on localhost on my WAMP server. I'm able to put a breakpoint on the js using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what would be the best way to debug this encoded js. **EDIT** @blueberryfields Thanks for the neat code review. However, to clarify, this is no homework; it's something I picked from a website about encoding and JavaScript. The material just looked interesting and I decided to give it a go. I don't see the point of using the intermediate variables, as I was hoping to make use of those already defined (s1, t, i). Usually these variable types are seen in Firebug way too often, like the enumerable types. Besides, using a good breakpoint in the right place, I can always step over these values in the loop. I changed my focus as someone on StackExchange told me to use Dragonfly (Opera); when I did, I was able to retrieve the variables and their values with the breakpoint statement. For other values I just did `document.write` to get the desired results. Here is the link of the screenshot. link. I was more interested in understanding the part of the code that actually tells the program to shift back the unicode characters based upon the codekey value. That part of the code was `s.substr(s.length-1,1)`. He just extracted the last character, which is the codekey number, and then used it in calculating the matching charcode value.
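The shifting scheme just described is easy to demonstrate. Below is a minimal Python sketch of it - not the actual script from the link; the `encode`/`decode` helpers are hypothetical - where the last character of the payload carries the codekey, mirroring `s.substr(s.length-1,1)`:

```python
def encode(plain: str, key: int) -> str:
    """Shift every char code up by `key` and append the key digit."""
    return "".join(chr(ord(c) + key) for c in plain) + str(key)

def decode(encoded: str) -> str:
    """Read the trailing codekey digit, then shift every char back down."""
    key = int(encoded[-1])   # e.g. '1'..'4'
    body = encoded[:-1]
    return "".join(chr(ord(c) - key) for c in body)

# With key 1, "Hello" becomes "Ifmmp" - the same shift visible in the
# question's sample payload.
assert encode("Hello", 1) == "Ifmmp1"
assert decode("Ifmmp1") == "Hello"
```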
If you unescape this shift-1 code `%264DTDSJQU%2631MBOHVBHF%264E%2633kbwbtdsjqu%2633%264F%261Bbmfsu%2639%2638Ifmmp%2631Xpsme%2638%263%3A%264C%261B%264D0TDSJQU%264F%261B%261%3A%261%3A%261%3A1` you would get `&4DTDSJQU&31MBOHVBHF&4E&33kbwbtdsjqu&33&4F&1Bbmfsu&39&38Ifmmp&31Xpsme&38&3:&4C&1B&4D0TDSJQU&4F&1B&1:&1:&1:1` Although those last chars are not required, they were intentionally added to help in decoding."} {"_id": "40488", "title": "How to proceed when a bug in open source libraries is suspected?", "text": "We are using some open source libraries in our projects. Sometimes some issues are found in some of them (most likely library bugs, but it may also be wrong usage on our side, especially when the documentation is not exactly 100% complete). As the libraries are often quite complex, debugging them to pinpoint the source of the problem is sometimes quite hard. Can you help me summarize what other options there are and how exactly to proceed with them? I have just recently hit some strange problems when using TCMalloc (Google's scalable memory allocator) on Windows, so I would most welcome answers which apply to this particular library, but more general answers are good as well. 1) Ask the maintainer/owner of the project for assistance. How can this be done? 2) Hire someone to identify and fix the issue. How to do this? How can I find someone with enough expertise in some particular library? ... any other options?"} {"_id": "171483", "title": "Steps to manage a large project", "text": "Software development is an area where parallel development to its fullest form is very difficult to achieve, although you could get reasonably close with the right design. This is especially true for game development. That being said, if you are designing a game from scratch, from engine to front end, what steps should be taken, and in what order? How would you efficiently manage your project and your team? I'm asking because several people and I are interested in working on a relatively large project for learning purposes. Initially, we were going to use a proprietary engine like Unity, but since we wanted to learn how the engine works, we're going to start from the bottom.
For example in Java I can define a Persistor interface with methods like get() and save() and then define a JdbcPersistor class with all the methods I need to persist on database. And maybe another RestPersistor with other methods for saving on a remote restserver. I'm not asking for interfaces in Ruby, just to know if there is a neat way do keep this distinction. I like Ruby but I worked only on small projects with it."} {"_id": "18026", "title": "Migrating from one PHP framework to another", "text": "I'm working with a web company that's approaching a point where it will likely need re-think the product as a V2 - due to outgrowing some of its V1 foundations and principles that have been built into virtually everything, from the data model to the user interfaces. For various reasons, this evolution might involve a migration from CakePHP (with which the V1 has been built) to Symfony or Zend. I would like to ask for some experienced views on how people might have managed a transition like this for a website that has significant traffic and generates revenue. I don't want to open up a discussion on the pro's & con's of different PHP frameworks, or why this migration might be needed. Rather, I would be very interested in hearing whether there are some practical alternatives to essentially building a V2 from scratch alongside the V1 for a couple of months - and locking up precious coding time for the duration of this intense period. An example of such an alternative might be migrating an app in parts over a longer period of time. I'd be grateful for any views from people who might have managed or been involved in such transitions. Thanks in advance."} {"_id": "230016", "title": "Does producing potentially shippable product make you less agile?", "text": "The cost of developing a feature in Agile is very high because it needs to be ready for shipment. It includes: 1. Significant bug fixing 2. Writing/updating and executing tests 3. Writing/updating the user manual 4. Polishing the UI design (placement of controls, color schemes, etc...) 5. Updating the documents for FDA (in our organization) When the cost is that high you are less likely to change the feature once you implement it. So does it make your less agile?"} {"_id": "89685", "title": "Apple Enterprise Developer Account Query", "text": "My employer is looking to purchase an enterprise account in the belief that it will enable us to develop and distribute apps for large clients of ours - however online it clearly states that the Enterprise account should only be used to distribute apps to the employees of the company that own the account. So my question is this: If we want to distribute in-house apps to \"Company X\" should we ask them to purchase an Enterprise account to distribute the app we made for them, or do we buy the account under our organisation instead? Thanks in advance."} {"_id": "108509", "title": "Certified Scrum Master exam", "text": "I have just finished two days of the CSM course and I feel pretty excited about the upcoming online exam. What is your experience with the exam? How much did you get? And can you recommend any good online tests sample?"} {"_id": "102353", "title": "Why do we love using i?", "text": "> **Possible Duplicate:** > Why do most of us use 'i' as a loop counter variable? Maybe this questions seems to be extreamly stupid but I wonder why we use i as variable in most cases in for loops (and not other letter). This might be a historical cause. eg. 
for(int i = 0; i < 10; i++)"} {"_id": "127934", "title": "Any standards for naming variables in for loops (instead of i,j,k)", "text": "> **Possible Duplicate:** > Why do most of us use 'i' as a loop counter variable? I was just writing a nested loop and got to the point where I was using l and m as variables to control for loops, and I realized this could get very confusing; I've already had a few bugs when I copied blocks of code to different levels. So I was thinking that instead of using i,j,k I would use iSomething,iSomethingelse. So if I were going over a 3D model I'd use for(int iMesh=0;iMesh Sony -> 500 -999 -> Retail Receiver -> Sony -> 1000 - Up -> Retail Results expected from the given 5 levels of information: Receiver -> Sony -> 500-999 -> Open -Box -> Retail Receiver -> Sony -> 500-999 -> New -> Retail Receiver -> Sony -> 1000-Up -> Open -Box -> Retail Receiver -> Sony -> 1000-Up -> New -> Retail The things I have tried are performant with small sets, but if I were to have a lot of levels and big gaps in the combinations, which wouldn't allow me to prune the valid combinations until I was deep into the levels, I run into major performance problems. I am obviously not tackling the problem correctly. Any other views or suggestions on tackling the problem would be greatly appreciated."} {"_id": "207763", "title": "How can I deal with a team member who is irresponsible and shows no commitment?", "text": "I am handling a team of a few guys working on a software module in a large project. As per our estimates, our module is getting delayed by a week. Since we cannot have this delay, our client and our manager arrived at a decision that we need to work on a few weekends (3-4) to try to finish the work. All members of the team are aware of this and understand it, and since it's just a matter of 3-4 extra days, we decided to work on weekends to meet the deadline. But one of the team members is not following this. For the last three weekends he has had some reason or other not to come in on the weekend. He is not even willing to put in extra hours during weekdays. Personally I don't care about the number of hours he works, as long as he (or anyone) can finish their work to meet the deadline. Please let me know how to handle this situation."} {"_id": "30737", "title": "Pythonic Java. Yes, or no?", "text": "Python's use of indentation for code scope was initially very polemical and is now considered one of the best language features, because it helps (almost by forcing us) to have a consistent style. Well, I saw a post where someone posted Java code with `;` and `{}` aligned to the right margin to look more pythonic. int foo() { int sum ; bar() ; for(int i = 0; i < 100; i++) { sum += i ;} return sum ;} It was very shocking at first (as a matter of fact, if I ever see Java code like that in one of my projects I will be scared!) However, there is something interesting here. Do we need all those braces and semicolons? How would the code look without them? class Person int age void greet( String a ) if( a == \"\" ) out.println(\"Hello stranger\") else out.printf(\"Hello %s%n\", a ) int age() return this.age class Main void main() new Person().greet(\"\") Looks good to me, but in such a small piece of code it is hard to appreciate, and since I don't use Python much, I can't tell by looking at existing libraries whether it would be cleaner or not.
So I took the first file I found of a library named jAlarms, and this is the result: ( **WARNING** : the following image may be disturbing for some people ) http://pxe.pastebin.com/eU1R4xsh Obviously it doesn't compile. This would be a compiling version using right-aligned {} and ; http://pxe.pastebin.com/2uijtbYM **Question** What would happen if we could code like this? Would it make things clearer? Would it make them harder? I see braces and semicolons as helps for the parser, and we as humans have gotten used to them, but do we really need them? I guess it's hard to tell, especially since many mainstream languages do use braces: C, C++, Java, C#, JavaScript. Assuming the compiler wouldn't have problems without them, would you use them? Please comment. **UPDATE** By the way, I have just remembered this language that does something similar for JavaScript."} {"_id": "229126", "title": "Snapchat clone: How do I secure pre-downloaded notifications so that they cannot be opened outside of the app?", "text": "Say I'm making a Snapchat clone app for Android and iOS. Let's say that I get a snapchat from Baz. I want to pre-download the audio for this snapchat. However, as the developer, I want to secure this audio from being viewable outside of the app. I've been thinking of encrypting it using AES with an IV and key that are both generated from a pseudo-random function that takes the user's unique ID as input. However, if an attacker found out that this was the way we encrypt our files, and had access to our PRF, he would easily be able to decrypt it and store it permanently. The thing is, I don't have enough background in cryptography or Android programming to tell if that's really a concern or not. The attacker has to learn a lot about our cipher in order to break it, but he could gain pretty much all of that from looking at the unobfuscated source of our app. Is my suggested approach cryptographically secure? What other, better or simpler approaches could I take to solving this problem?"} {"_id": "228296", "title": "Android Data persistence question", "text": "I have an Android app in which users have sets of items, and each item has about 10 properties. What I do at the moment: * items are stored in the server database * when the user logs in, I get all the items (say 37) via the API and put them in a LinkedHashSet< UserItem > (UserItem is a POJO with setters and getters) * then I get the 37 items from the set and put them in the local SQLite database * when the user opens the \"My Items\" screen in the app, I get the 37 items from the local SQLite DB I was thinking, is this good practice? Could I skip Step 3 (storing items in the local database) and instead maintain the life of that LinkedHashSet object and get the items directly from there? If I'm right about this suggestion, how do I do that? I asked a similar question at Stack Overflow but I was told that I should come here and ask it."} {"_id": "171734", "title": "Difference between a socket and a port", "text": "Could someone please explain quite clearly the difference between a port and a socket.
I know that a port serves as a door into the network for an application process, and that the application process uses a socket connection to the given port number to handle network communication. But when you have multiple processes listening on a single port number, I find it difficult to understand the difference between the socket and the port and how they all fit together."} {"_id": "171731", "title": "What is meant by namespaced content and what advantages does it have?", "text": "I was reading this blog by James Bennett regarding HTML vs XHTML. He writes: > I don\u2019t have any need for namespaced content; I\u2019m not displaying any complex > mathematical notation here and don\u2019t plan to, and I don\u2019t use SVG for any > images. So that\u2019s one advantage of XHTML out the window. I also don\u2019t have > any need for XML tools; all the processing I need to do can be handled by > HTML-parsing libraries like BeautifulSoup. That\u2019s the other advantage gone. What does he mean by `namespaced content` and what advantage does it provide us?"} {"_id": "132153", "title": "Help me understand how to stream video", "text": "I'm an experienced PHP web developer who is looking to understand the options available for streaming video. _What I have_ : a video processing system (this one) that can provide output to various streaming servers / CDNs / HTTPs. _What I want_ : the ability to embed streams on multiple sites, with the ability to enable/disable the stream based on the visitor's session. What options exist for meeting these requirements? Feel free to be broad or recommend reading, as I have a relatively low understanding of this field. I'm open to both paid services as well as implementing some of this myself. Low cost is of relative importance."} {"_id": "251850", "title": "Permutation tests for large sequences", "text": "I would like to perform a permutation test on a particularly large data set, i.e. around 4 million entries. Basically, I need to get some number of random permutations of this data set. The usual way to do this is a Fisher-Yates shuffle, but it's pretty much limited by the period of the PRNG - that is, to a sequence no longer than around 2080 elements. Is there a solution for randomly permuting much larger sequences? EDIT: Here is an interesting related discussion on random shuffle algorithms and how much they are limited by the RNG."} {"_id": "5748", "title": "Are there tools to determine code similarity?", "text": "I'm not talking about a diff tool. I'm really looking to see if a project contains code that may have been \"refactored\" from another project. It would be likely that function names, variable names and whatnot would be changed. Conditionals might be reversed, etc."} {"_id": "5749", "title": "Working as the sole programmer at a non-tech company", "text": "I work as the back-end developer, front-end developer, systems admin, help desk and all-around 'guy who knows computers' at a small marketing company of about 15 people. I was wondering if others could share their experiences flying solo at companies that aren't necessarily inclined toward the technology industry. I originally took the job in order to transition from front-end developer/designer to full-time coder. It's been a good experience to a point. I definitely get to occupy the role of 'rock star' programmer - because frankly, no one really understands my job. Lately, it feels like a very solitary position.
I rarely get to bounce ideas off of people, and everyone looks to me like I have magic powers that will make all the computers work and land us first on Google searches. I've also felt a strong disconnect between what we say we want (projects with large, months-long development schedules) and what we actually do (copy-edit our sites over and over). So who else finds themselves being the 'tech guy' in a company that thinks technology is all a bit magical, and what is your take on your situation?"} {"_id": "252172", "title": "Better way for creating a 2D map?", "text": "I have an idea for a game map and have done some research into it. What I'm going to do is have 2D maps that a GM can pick from, or they can create their own using predefined images or images of a particular size they can upload. From what I've seen, people are using a 2-dimensional array, but this is for tiles. So my question is: is there a better way of doing this for hexagons? Not necessarily easier, but better. Something that can be used on a large scale, say a 1000x1000 hex grid (100x100 px, radius 50). Edit: I found this site that explains a way to draw out the hexagons. I think I might follow it, unless someone knows a better way."} {"_id": "155033", "title": "Is there any well-known commercial project which is currently open source?", "text": "I mean, are there some open-sourced projects that started as closed source and were also commercially successful? I am also interested in any stories behind open-sourcing them."} {"_id": "205945", "title": "Modern practices for stored procedure-based applications", "text": "I work in a fairly large and old solution that has many entry points for different kinds of clients, with web sites for public access, web sites for internal access, some web sites and web services for partner companies' access, etc. All those applications use different (Microsoft-based) technologies, such as Classic ASP, ASP.NET WebForms, MVC 3, ASMX, DTSX, etc. The code has no standard, and many programmers with no experience on the codebase and in the business have coded their own way. The central point of all the applications is a SQL Server database with tons of stored procedures that implement all the business rules. The applications are usually just a shell for the database. The data access is done with no framework or ORM (using pure ADO.NET, usually rewriting everything from creating a connection to iterating through a data reader in each method, function or whatever). What are the most modern best practices for creating a productive data access layer based on stored procedures? For business reasons, we cannot rewrite the older applications (the customer won't pay for that, and the volume of daily work is too large for doing that internally). We also cannot use any third-party ORM (the architects are against it for \"security reasons\"). So, the improvements must be more like refactorings."} {"_id": "155035", "title": "New insights I can learn from the Groovy language", "text": "I realize that, for a programmer coming from the Java world, Groovy contains a lot of new ideas and cool tricks. My situation is different, as I am learning Groovy coming from a dynamic background, mainly Python and Javascript. When learning a new language, I find that it helps me if I know beforehand which features are more or less old acquaintances under a new syntax and which ones are really new, so that I can concentrate on the latter.
So I would like to know which traits distinguish Groovy among the dynamic languages. What are the ideas and insights that a programmer well-versed in dynamic languages should pay attention to when learning Groovy?"} {"_id": "155039", "title": "Scrum and Google Docs burndown chart", "text": "There is a tutorial on how to create a burndown chart for Scrum in the Google Docs application: http://www.scrumology.net/2011/05/03/how-to-create-a-burndown-chart-in-google- docs/ The problem with it though is, it has only a place to update progress once per sprint but the burndown is supposed to be updated with daily progress, right? How can one modify this chart to be able to put daily progress on it? I mean to be able to plot two lines (ideal and actual) with data such as (story points 255, velocity 24): Actual Google Docs document (free to edit): https://docs.google.com/spreadsheet/ccc?key=0AuPWErnOiLTUdElJVzJZaE5EWEZ2S2xCelF6Z2lzaUE ![Sprint Data](http://i.stack.imgur.com/3MV6i.png) ![Sprint Chart](http://i.stack.imgur.com/WbQGZ.png)"} {"_id": "170286", "title": "How to initialize object which may be used in catch clause?", "text": "I've seen this sort of pattern in code before: //pseudo C# code var exInfo = null; //Line A try { var p = SomeProperty; //Line B exInfo = new ExceptionMessage(\"The property was \" + p); //Line C } catch(Exception ex) { exInfo.SomeOtherProperty = SomeOtherValue; //Line D } Usually the code is structured in this fashion because exInfo has to be visible outside of the try clause. The problem is that if an exception occurs on Line B, then exInfo will be null at Line D. The issue arises when something happens on Line B that must occur before exInfo is constructed. But if I set exInfo to a new Object at line A then memory may get leaked at Line C (due to \"new\"-ing the object there). Is there a better pattern for handling this sort of code? Is there a name for this sort of initialization pattern? By the way I know I could check for exInfo == null before line D but that seems a bit clumsy and I'm looking for a better approach."} {"_id": "213668", "title": "Script language native extensions - avoiding name collisions and cluttering others' namespace", "text": "I have developed a small scripting language and I've just started writing the very first native library bindings. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C `struct`s describing the function name visible by the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name. **In C, it's idiomatic to put one's own functions in a \"namespace\" by prepending a custom prefix to function names,** as in `myscript_parse_source()` or `myscript_run_bytecode()`. The custom name shall ideally describe the name of the library which it is part of. Here arises the confusion. Let's say I'm writing a binding for `libcURL`. In this case, it seems reasonable to call my extension library `curl_myscript_binding`, like this: MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10]; But now this collides with the `curl` namespace. 
(I have even thought about calling it `curlmyscript_lib`, but unfortunately libcURL does not exclusively use the `curl_` prefix -- the public APIs contain macros like `CURLCODE_*` and `CURLOPT_*`, so I assume this would clutter the namespace as well.) Another option would be to declare it as `myscript_curl_lib`, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they will clutter the `myscript` namespace. (I've done some research, and it seems that, for example, the Perl cURL binding follows this pattern. Not sure what I should think about that...) So how do you suggest I name my variables? Are there any general guidelines that should be followed?"} {"_id": "213669", "title": "What are the advantages of converting empty strings to evaluate to true as compared to false?", "text": "When converting a string to a boolean, what are the advantages of having a programming language evaluate an empty string as true, and what are the advantages of having it evaluate to false?"} {"_id": "245049", "title": "Should I release my plugins as AGPL?", "text": "I am using ownCloud - which is AGPL-licensed - and have only created a few custom modules and a theme; the core is not touched at all. So: 1. Should I allow downloading the source for the whole app (ownCloud and my modules/theme), as it is AGPL-based? 2. Or should I only provide a link to download the ownCloud code (the original code)? 3. Or do I not have to allow downloads at all (is it optional)? Consider that I am building a commercial app, where users will pay for the service. Please advise."} {"_id": "120315", "title": "What do you do when working with multiple languages with different capitalization schemes?", "text": "I'm making a webapp using Django. The Python convention for naming variables is lowercase_with_underscores, but the Javascript convention is camelCase. In addition, I've seen many people use lowercase-with-hyphens for CSS identifiers. Would you suggest using all three naming conventions where appropriate, or picking one and using it, even if the other two recommend something else? Switching back and forth isn't a huge problem, but it can still be mental overhead."} {"_id": "225711", "title": "To store data or not?", "text": "I'd like to ask you about one simple thing. I have a class A that does something (for example, counts something). There is also a class B that hands some parameters to this class (class A is a member of class B). Class B calls one method of class A that does something and writes a value to the database. Class A uses the parameters it got from class B. Class A has a few methods, to keep the code clean (which is good, I think). But... is it OK to save the parameters as private members? It's not necessary; I don't have to remember them after writing to the database. But if I don't have private data, I have to hand it to every private method as parameters (during processing). I think that it's OK to save these parameters as members of this class. Am I right? Or maybe I should avoid this when it's not necessary (when I don't have to remember them)?
When I don't have private members, my publicMethod (called from class B) looks something like this: void publicMethod(int param1, int param2, int param3, int param4) { privateMethod1(param1, param2, param3, param4); privateMethod2(param1, param2, param3, param4); privateMethod3(param1, param2, param3, param4); } and in the private methods I call other private methods and have to hand the parameters along.... I think it doesn't look good... Am I right? When I have private members it looks like: void method(int param1, int param2, int param3, int param4) { privateMethod1(); privateMethod2(); privateMethod3(); } It's better, I guess. But I'm not a professional and I'm not sure... (I write in C++)"} {"_id": "245042", "title": "How about using a DTO class as a property in the corresponding BO class?", "text": "I was reading this blog post and liked the idea of using the DTO class for an entity and using it as a property in the corresponding business object class, like so: public class Person : BALBase { public PersonDTO Data { get; set; } This can also eliminate the need for a mapping tool like AutoMapper (mapping between POCO and DTO). I am thinking of using this concept in my app. My app is layered using straight assemblies with no web services/REST/WCF calls. What can be the disadvantages of using this concept?"} {"_id": "245041", "title": "Writing Models in PyroCMS/Codeinighter Models", "text": "In `Pyrocms` there are `Admin` views and `User` views. I'm developing a complex module where my model file is getting rather large. Should I be splitting the logic in my model files into user models and admin models of my database? Currently I have a large model file called `products_m.php`. Should I split this up into 2 files * products_admin_m.php * products_user_m.php or should the admin model inherit the user model (using a PHP include), where **products_admin_m** extends **products_user_m**?"} {"_id": "213663", "title": "C# Design for SQL connection and commands", "text": "Currently I'm working on a system that works with a database, and I would like to have it done in an elegant way. So I have abstracted DBConnection into one class and DBCommands into another class (DBCommands : DBConnection). There is also a class SQLUsage in which I'm creating a new thread that works in a loop; it constantly checks whether there's any object in a BlockingCollection to take care of, and if there is, that thread processes it, using DBConnection and DBCommands to do so. I am having a problem accessing DBCommands from SQLUsage, since the DBCommands object is created in DBConnection (I have to pass the MySqlConnection object into DBCommands). Is that a good solution? Or should I change the design? I would like to use those classes in another app as well (part of the same, bigger system). I'll tell you a bit more about this app. It is a server app that handles connections from multiple clients. Each client sends some info to the server, which is stored in a `BlockingCollection`. Another thread uses items from the `BlockingCollection` to communicate with the MySQL server and insert some data into it, according to those `objects`. And I'm having a problem with the design. So far I have a main class, a class for handling TCP/IP connections, an info class, SQLUsage, and the DBConnection and DBCommands classes. Since the BlockingCollection is initialized in the main class and passed to both the TCP/IP handling class and the SQL handling class, that is not a problem. My problem is how to properly design the SQL handling classes. I want it to be done in an elegant way.
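A minimal sketch of the shape I am considering (an illustration only - the class names, the single-table insert, and the use of the SQL Server provider in place of the MySQL one are all assumptions):

```csharp
using System.Collections.Concurrent;
using System.Data.SqlClient;

// Hypothetical container for what a client sends.
public sealed class ClientInfo
{
    public string Name { get; set; }
    public int Value { get; set; }
}

// Hypothetical worker that owns the connection and drains the queue on its
// own thread, so no other class ever touches the connection directly.
public sealed class SqlWorker
{
    private readonly BlockingCollection<ClientInfo> queue;
    private readonly string connectionString;

    public SqlWorker(BlockingCollection<ClientInfo> queue, string connectionString)
    {
        this.queue = queue;
        this.connectionString = connectionString;
    }

    // Blocks until items arrive; exits when the collection is marked complete.
    public void Run()
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (ClientInfo info in queue.GetConsumingEnumerable())
            {
                Insert(connection, info);
            }
        }
    }

    private static void Insert(SqlConnection connection, ClientInfo info)
    {
        using (var command = new SqlCommand(
            "INSERT INTO ClientData (Name, Value) VALUES (@name, @value)", connection))
        {
            command.Parameters.AddWithValue("@name", info.Name);
            command.Parameters.AddWithValue("@value", info.Value);
            command.ExecuteNonQuery();
        }
    }
}
```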
So I figured it would be good to abstract the connection and commands away from the class where I process the info. I already have a working SQLConnection class; I had some commands in it as well and it worked. But the next step was to abstract the commands into another class, since there will be many overloaded methods (for example Insert with 1 arg, Insert with 2 args, etc.). So my question is not about the code - I know how to create it - it's more about design. Was abstracting the SQL commands into another class a good idea or not? Should both DBConnection and DBCommands inherit from an SQLHandling class, or only DBConnection : DBCommands? And another question, probably easier to answer: is there a more elegant way to do SQL commands than just overloading methods, like one universal Insert method that handles all arguments? For now, having a few overloaded Insert methods seems to be a good idea, since in the app I'm working on there will be only 2 different kinds of inserts (just a different number of arguments), but in another app where I wanted to use the same classes I could really use a universal Insert method."} {"_id": "213667", "title": "How to offer programming/database contract services?", "text": "If you are an independent consultant/contractor working for multiple clients, how do you sell your services? Do you believe applying to job offers for regular positions and offering your services to the hiring manager is a good strategy? My opinion is that a contractor is a great way to see whether you really need to take on a full-time employee, and I thought that if I could explain this to managers they would be open to it. What do you think?"} {"_id": "228740", "title": "How to make it obvious that a function is being accessed from the outside?", "text": "This is a C-specific question. I am trying to keep everything possible inside the translation unit boundaries, exposing only a few functions through the `.h` file. That is, I am giving `static` linkage to file-level objects. Now, a couple of functions need to be called by other modules, but not directly. My module/file/translation unit subscribes to the other modules, passing a pointer to a function. Then, upon a specific event, the pointer is called with some arguments. So I am wondering how to make it very obvious that those functions are called from some obscure location. * Should they be `static` or `extern` (and exposed in the `.h`)? * Should I include some hint in the names of the functions? * Or is it enough to put a comment \"called by X\"?"} {"_id": "228743", "title": "Non-Recursive vs Recursive Locks?", "text": "I am thinking of using non-recursive locks; I have found them to have better performance than the standard recursive locks (e.g. SimpleRWSync). I have mainly been using critical sections, but in highly threaded environments locking was producing significant delays. 1. What are the benefits and/or pitfalls? 2. Do the non-recursive ones make coding harder comparatively?"} {"_id": "228745", "title": "Code Duplication in Multi-Module Project", "text": "I have about seven modules arranged like so: * Service * Processing * Common * Account * Email * Scheduling I try to make it my policy to restrict code to the module that actually uses it. Code that is shared by multiple projects (3+) is sent to Common. However, there are a few classes that are only used by two projects. In my most recent example, both Account and Processing need some image processing done. Is it a code smell to have the same classes found in two modules?
Should I move duplicate code into Common as soon as it's used more than once?"} {"_id": "125135", "title": "How do you find errors related to undeclared variables in php?", "text": "If I copy a piece of code from somewhere that looks like so $blah = array(1,2,3,4); foreach ($blah as $i) echo ($i); and rename the variables but forget to do it correctly, like so $apple = array(1,2,3,4); foreach ($blah as $i) <--- notice $blah instead of $apple echo ($i); then my NetBeans IDE doesn't complain and I get an error at runtime when I run this. Is there a way to catch errors like this without running the code? Which IDE does it? Which plugin? Or is the whole PHP development world living without this?"} {"_id": "239196", "title": "Using commit messages for time tracking", "text": "Some products can parse a special syntax in commit messages to extract additional data, such as time tracking information: https://confluence.atlassian.com/display/FISHEYE/Using+smart+commits To me, this seems like the misuse of one tool to activate a feature on another, but I'm having trouble articulating my reasoning. It seems similar to the process smell of storing issue tracking information in code comments. Is there anything fundamentally wrong with using commit messages as a time tracking mechanism?"} {"_id": "47130", "title": "How do I use an API?", "text": "### Background I have no idea how to use an API. I know that all APIs are different, but I've been doing research and I don't fully understand the documentation that comes along with them. There's a programming competition at my university in a month and a half that I want to compete in (revolving around APIs), but nobody on my team has ever used one. We're computer science majors, so we have experience programming, but we've just never been exposed to an API. I tried looking at Twitter's documentation, but I'm lost. Would anyone be able to give me some tips on how to get started? Maybe a _very_ easy API with examples, or explaining essential things about common elements of different APIs? I don't need a full-blown tutorial on Stack Overflow; I just need to be pointed in the right direction. ### Update The programming languages that I'm most fluent in are C (simple text editor usually) and Java (Eclipse). In an attempt to be more specific with my question: I understand that APIs (and yes, external libraries are what I was referring to) are simply sets of functions. ### Question I guess what I'm trying to ask is how I would go about accessing those functions. Do I need to download specific files and include them in my programs, or do they need to be accessed remotely, etc.?"} {"_id": "241290", "title": "If a library doesn't provide all my needs, how should I proceed?", "text": "I'm developing an application involving math and physics models, and I'd like to use a math library for things like matrices. I'm using C#, and so I was looking for some libraries and found Math.NET. I'm under the impression, from past experience, that for math, using a robust and industry-approved third party library is much better than writing your own code. It seems good for many purposes, but it does not provide support for Quaternions, which I need to use as a type. Also, I need some functions in Vector and Matrix that also aren't provided, such as rotation matrices and vector rotation functions, and calculating cross products. At the same time, it provides a lot of functions/classes that I simply do not need, which might mean a lot of unnecessary bloat and complexity.
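If I go the wrapping route, the least invasive approach I can think of is extension methods rather than subclassing - a rough sketch, where `ThirdPartyVector` is a stand-in I made up and not the actual Math.NET type:

```csharp
// A made-up stand-in for the library's 3-component vector type; the real
// Math.NET type differs, so treat this purely as the shape of the idea.
public class ThirdPartyVector
{
    private readonly double[] values;
    public ThirdPartyVector(double x, double y, double z) { values = new[] { x, y, z }; }
    public double this[int i] { get { return values[i]; } }
}

// The missing operations bolted on from the outside, without subclassing.
public static class VectorExtensions
{
    public static ThirdPartyVector Cross(this ThirdPartyVector a, ThirdPartyVector b)
    {
        return new ThirdPartyVector(
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]);
    }
}
```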
At this rate, should I even bother using the library? Should I write my own math library? Or is it a better idea to stick with the third party library and somehow wrap around it? Perhaps I should make a subclass of the Matrix and Vector types of the library? But isn't that considered bad style? I've also tried looking for other libraries, but unfortunately I couldn't find anything suitable."} {"_id": "241296", "title": "Store scores for players and produce a high score list", "text": "This question is derived from an interview question that I got for a job I was declined for. I have asked for code review for my solution at the dedicated Stack Exchange site (http://codereview.stackexchange.com/q/51842/43237). But I hope this question is sufficiently rephrased and asked with a different motivation not to be a duplicate of the other question. Consider the following scenario: You should store player scores in the server back end of a game. The server is written in Java. Every score should be registered, that is, one player may have any number of scores for any number of levels. A high score list should be produced with the fifteen top scores for a given level, but only one score per user (to the effect that even if player X has the two highest scores for level Y, only the first position is counted and player Z has the second place). No information should be persisted and only Java 1.7+ standard libraries should be used. No third party libraries or frameworks are acceptable. With the number of players as the primary factor, what would be the best data structure in terms of scalability and concurrency? How would you access the structure to register a single score given a level and a player id? How would you access the structure to compile the high score list?"} {"_id": "87505", "title": "Is it OK to learn an algorithm from an open source project, and then implement it in a closed source project?", "text": "Reference The post that started it all In order to clear up the original question I asked in a provocative manner, I have posed this question. If you learn an algorithm from an open source project, is it OK to use that algorithm in a separate closed-source project? And if not, does that imply that you cannot use that knowledge ever again? If you can use it, under what circumstances could that be? Just to clarify, I am not trying to evade a licence, otherwise I would not have asked the question in the first place."} {"_id": "173396", "title": "Deprecated vs. Denigrated in JavaDoc?", "text": "In the JavaDoc for `X509Certificate` `getSubjectDN()` it states: > **Denigrated** , replaced by getSubjectX500Principal(). I am used to seeing Deprecated used for methods that should not be used any longer, but not Denigrated. I found a bug report about this particular case; it was closed with the comment: > This isn't a bug. \"Deprecated\" is meant to be used only in serious cases. When we are using a method that is _Deprecated_ , the general suggested action is to stop using the method. So what is the suggested action when a method is marked as _Denigrated_?"} {"_id": "173394", "title": "Can working exclusively with niche apps or tech hurt your career in software development? How to get out of the cycle?", "text": "I'm finding myself in a bit of a pickle. I've been at a pretty comfortable IT group for almost a decade. I got my start here working on web development, mostly CRUD, but have demonstrated the ability to figure out more complex problems.
I'm not a rock star, but I have received many compliments on my programming aptitude, and technologists and architects have commented on my ability to pick things up (for example, I recently learned a very popular web framework that shall remain nameless since I don\u2019t want to be identified). My problem is that, over time, my responsibilities have been shifting towards work such as support or \u2018development\u2019 with some rather niche products (which I am afraid to mention here due to the potential for being identified). Some of this work, if it includes anything resembling coding, is very menial scripting in languages such as Powershell or VBScript. The vast majority of the time, however, a typical day consists of going back and forth with the product\u2019s vendor support to send them logs and apply configuration changes or patches they recommend. I\u2019m basically starved for some actual software development. However, even though I\u2019m more than capable of doing that development work (and actually do a much better job at it than anything else), our boss is more interested in the kind of work I mentioned above, her reasoning being that since no one else in the organization wants to do it, it must mean job security. This has been going on for close to 3 years, and the only reason I have held on is the promise that we would eventually get more development projects assigned to us. Well, that turned out not to be true at all. A recent talk with the boss has just made it more explicitly clear, as she told me in no uncertain terms that it\u2019s very likely that development work (web or otherwise) would go to another group. The reason given to me is that we don\u2019t have enough resources in our group to handle that. So now I find myself in the position that I either have to stay in what has essentially become a dead-end IT job that is tied to the fortunes of a niche stack of apps, or try to find a position that will be better for my long-term career. My problem (is it a problem?), however, is that compared to others, my development projects in the last three years are very sparse in number. To compound things, projects using the latest and most popular frameworks amount to the big fat number of just one\u2014with no work of that kind in the foreseeable future. I am very concerned that this sparseness in my resume is a deficit, and that it will hurt my chances of landing a different job. I\u2019m also wondering how much it will hurt me, and whether that can be ameliorated with hobby projects of my own. I guess I\u2019m looking for opinions. Thank you very much for reading."} {"_id": "155783", "title": "Can I make a good career with VC++ programming?", "text": "I have been addicted to VC++ since 2008, and I began working for my current company in 2011, when I graduated in Mathematics. I still love VC++; it is a wonderful programming language. I'm a little confused about whether it's a good idea to continue with Windows programming. I'm in Beijing, China - of course, I come from China - and I want to find work in Silicon Valley, America in the future. Can anyone tell me whether it is possible for me to find a VC++ job in Silicon Valley someday in the future? And what should I do in the coming years?"} {"_id": "177888", "title": "Weird UIView transforms in Retina iPhone", "text": "I'm having a problem I don't understand. I'm developing an OpenGL app for iOS.
Because at some points I want to force the orientation of the view programmatically, and Apple for whatever reason doesn't make it easy (or even possible), I'm doing it by hand. I always return NO in `shouldAutoRotateToInterfaceOrientation`, and when I want to change the orientation (to portrait, for example), I do something like this in the UIView: [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)]; [self setBounds:CGRectMake(0, 0, 768, 1024)]; This works fine. In order to support Retina devices, I started checking `[UIScreen mainScreen].scale`, and setting `self.contentScaleFactor` accordingly. I also modified the code above to account for the new dimensions, like this: [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)]; [self setBounds:CGRectMake(0, 0, 2*768, 2*1024)]; Same rotation, different size. The weird result with this is that I get a \"screen\" with the right size, but offset half a screen to the bottom and the left. To correct for this, I need to do the following: [self setTransform:CGAffineTransformMake(1, 0, 0, 1, 0, 0)]; [self setBounds:CGRectMake(-768, -1024, 2*768 - 768, 2*1024 - 1024)]; This works, but it's ugly; I also need to make similar corrections when I get touch coordinates, and worst of all, I don't understand what's going on or why the above \"correction\" works. Can anyone shed some light on this issue?"} {"_id": "206414", "title": "choice of design for OO-linear algebra library", "text": "I'm writing a library for sparse linear algebra computations as a backend for my thesis work, and I've come to a bit of a crossroads. I'm using modern Fortran (don't groan, it's had inheritance and polymorphism and all that jazz for 10 years now). From a software design standpoint, my main issue was making iterative solvers able to use sparse matrices without knowing what storage format they're in. The only functionality that an iterative solver has to know about is how to multiply a matrix by a vector. I did this by having an abstract class `sparse_matrix` with a virtual method `matvec` for matrix-vector multiplication; then there were several child classes, representing each storage format, which override the parent matvec with their own implementation. I believe this is called the \"template\" pattern, yes? I'm considering refactoring my code to use composition over inheritance. To that end, a sparse matrix consists of an underlying graph with some extra data -- sometimes it's an array of real or complex numbers, sometimes an array of dense matrices, etc. There are multiple different sparse matrix formats which use the same underlying graph storage scheme. Every sparse matrix has a `graph` object as an attribute, and has a collection of function pointers which change to use that graph in different ways. Before, I had to effectively redefine the same graph storage scheme for each sparse matrix format that used it. The advantages I can discern are: 1. fewer classes make it easier to hook my code up to C/C++/Python 2. easy to choose different parallel implementations of the same algorithm; write every implementation and redirect function pointers at runtime. Before, I had to use big conditional blocks. 3. I think this design will be easier when the underlying graph is better thought of as a hyper-graph, and matrices as heterogeneous compositions of several matrices in possibly different formats. (This happens in some PDE applications.) Can anyone think of a good reason why I should stick with the old inheritance-based design?
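For what it's worth, here is roughly what the composition-based design looks like in my head, sketched in C# rather than Fortran because delegates are easier to show (all names are invented):

```csharp
using System;

// The shared connectivity structure; many formats reuse the same graph.
public sealed class Graph
{
    public int[] RowStart;    // index into ColumnIndex where each row begins
    public int[] ColumnIndex; // column of each stored entry
}

// A sparse matrix *has a* graph plus entry values, instead of *being a*
// subclass per storage format. The multiply strategy is swapped at runtime.
public sealed class SparseMatrix
{
    public Graph Connectivity;
    public double[] Values;

    // Different storage formats (or parallel backends) plug in here,
    // replacing the big conditional blocks.
    public Action<SparseMatrix, double[], double[]> Multiply;

    public void MatVec(double[] x, double[] y) { Multiply(this, x, y); }
}

public static class CsrKernels
{
    // A serial CSR matrix-vector product; a parallel version could simply be
    // assigned to Multiply instead.
    public static void MatVec(SparseMatrix a, double[] x, double[] y)
    {
        Graph g = a.Connectivity;
        for (int row = 0; row < g.RowStart.Length - 1; row++)
        {
            double sum = 0.0;
            for (int k = g.RowStart[row]; k < g.RowStart[row + 1]; k++)
                sum += a.Values[k] * x[g.ColumnIndex[k]];
            y[row] = sum;
        }
    }
}
```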
If the new approach is more sensible, any advice beyond what's said in GoF would be appreciated."} {"_id": "177882", "title": "OOD: All classes at bottom of hierarchy contain the same field", "text": "I am creating a class diagram for what I thought was a fairly simple problem. However, when I get to the bottom of the hierarchy, all of the classes contain only one field, and it is the same one. This looks very wrong to me, but this field does not belong in any of the parent classes. I was wondering if there are any suggested design patterns for a situation like this? A simplified version of the class diagram can be found below. _Note, fields named differently cannot belong to any other class_ +------------------+ | NoteMapping | |------------------| | String noteId | | String content | | | +---------+--------+ | +---------------+----------------+ | | +--------|--------+ +--------|--------+ | UserNote | | AdminNote | |-----------------| |-----------------| | String userId | | String adminId | | | | | +--------+--------+ +--------+--------+ | | | | +--------|--------+ +--------|--------+ | UserBookNote | | AdminBookNote | |-----------------| |-----------------| | String bookId | | String bookId | | | | | +-----------------+ +-----------------+ _ASCII tables drawn using http://www.asciiflow.com/_ ### Edit I have added some class and field names to the class diagram above. The reason `bookId` cannot exist in any of the parent classes is because it is used to create a `OneToOne` relationship with another class - which is not something that we always want to do."} {"_id": "177881", "title": "Why do I always think I know much less than others?", "text": "I have been programming since primary 6. Since the days of DOS I have been programming in QuickBASIC 4.5, then VB 6, then C#. In between I have also done programming in C++. But every time I open Stack Overflow and try to help others by answering their problems, it seems that I know nothing. I feel that I am so stupid, even though I have been programming for so long. I am shocked when reading all the questions, unable to find any clue. Is technology moving too fast and leaving me behind? I feel that technology changes too fast and I can't keep up: when I know ASP.NET Web Forms, MVC is out; when I know MVC, Android/iPhone/HTML5 apps are popular. It seems that I am chasing something and never reach 'it'. I don't know whether this is the correct place for me to talk about this. I just wish to hear the opinions of people like you: how do you think technology should grow, instead of recreating languages and adding bugs here and there for programmers to figure out, while the big companies share the solutions among themselves? This is exactly how I feel. A simple example: why do you think `Dictionary<>` in .NET doesn't provide a way of iterating over the object using an index? Why must we use Key or GetEnumerator()? A developer has to google and waste hour after hour to find pieces of hack code that use reflection to achieve reading from an index, which the developer will then keep as a collection of valuable code. However, when the time comes, everything changes again, and the developer has to find answers for new silly problems again! Yes, I really hate it! I hate how many big companies are playing with developers by cutting a big picture into a small puzzle, messing it up, and asking developers to piece it together themselves. It is as if they are creating problems for us to solve, so that we are unable to grow; we are being manipulated by those silly problems they have created.
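For instance, as far as I know the closest thing to reading the n-th entry looks like this (LINQ's ElementAt just walks the enumerator, so it is O(n) per call - and dictionary order is not guaranteed anyway):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Example
{
    static void Main()
    {
        var ages = new Dictionary<string, int> { { "alice", 30 }, { "bob", 25 } };

        // ElementAt walks the enumerator until it reaches position 1; there is
        // no true positional indexer on Dictionary<TKey, TValue>.
        KeyValuePair<string, int> second = ages.ElementAt(1);
        Console.WriteLine(second.Key + " = " + second.Value);
    }
}
```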
Another example would be how difficult it is to collect cookies from a CookieContainer without passing the URL - yes, without the URL. I WANT to get all the cookies in the CookieContainer without knowing the URL; I want to iterate over all of them. Why does Microsoft have to limit me from doing that?"} {"_id": "177880", "title": "Efficient way to check for changes to the contents of folders", "text": "I am creating an application that maintains a database of files of a certain type in a given folder (and all subfolders). Initially the program will recurse through the folders and add any file it finds of that type to the database. I want the application to have the ability to re-scan the folder and add any files that were not there the last time the folders were scanned. It can't use the date-created property of the file, because there is a high chance of a file being added to the folders that isn't a new file. I am wondering what the most efficient way of doing this is, and if there is a way that doesn't involve checking whether each file is already in the database (which, if there are 5000 files, would mean 5000 queries against a list 5000 items in size, or 25 million 'checks' for the SQL engine to perform). I suppose a more specific question to achieve the same goal would be: is there a property of a file (in Microsoft Windows) that will reliably tell you when that file arrived in that folder?
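The best idea I have had so far is to pull the known paths once and diff in memory - a sketch of what I mean (names are hypothetical; `knownPaths` would come from a single SELECT over the files table):

```csharp
using System.Collections.Generic;
using System.IO;

public static class FolderScanner
{
    // One query up front instead of one query per file: load every stored
    // path into a HashSet, and each membership test becomes O(1) in memory.
    public static IEnumerable<string> FindNewFiles(string root, HashSet<string> knownPaths)
    {
        // "*.*" here for brevity; narrow the pattern to the file type in question.
        foreach (string path in Directory.EnumerateFiles(root, "*.*", SearchOption.AllDirectories))
        {
            if (!knownPaths.Contains(path))
                yield return path; // a candidate row to insert into the database
        }
    }
}
```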
Different people you have to work with have different views about what makes two client requests semantically distinct, so it can be hard to drive consensus about this matter. You also want to make sure that your service isn't prohibitively difficult for future clients to consume. What are some best practices arount making this kind of choice?"} {"_id": "252063", "title": "Copying and Selling Apache Licensed Software?", "text": "**Questions in bold:** I've read the Apache Software License 2.0... What I gather is that all a person needs to do when redistributing the licensed software is include their name in all the parts they've modified(if any), include some notices and whatnot, and you're away. You don't even need to follow along with the license in all your additions/edits, because you can sub-license. The thing I don't really get is, **why would anyone want to use this kind of license** , honestly? Am I missing something? One more thing, **does #include-ing(or similar) a licensed library mean that you are making a derivative of the licensed source** , therefore binding you to the terms and conditions of the license?"} {"_id": "239220", "title": "Use a custom value object or a Guid as an entity identifier in a distributed system?", "text": "## tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than `Guid`, `string`, `int`, etc. Can this really be advisable in a distributed system? ## Long version I will invent an situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There area variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type. Now egg type is broken down by the species--ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, `OrderSubmitter`, `EggTypeDefiner`, `SpendingReportsGenerator`, `InventoryTracker`, `RecipeCreator`, `RecipeTracker`, or whatever) are identifying egg types with an industry- standard integer representation the species (let's call it `speciesCode`). We realize we've goofed up because this change could effect every service. There are two basic proposed solutions: 1. Use a predefined identifier type like `Guid` as the `eggTypeID` throughout all the services, but make `EggTypeDefiner` the only service that knows that this maps to a `speciesCode` and `eggSizeCode` (and potentially to an `isOrganic` flag in the future, or whatever). 2. Use an `EggTypeID` value object which is a combination of `speciesCode` and `eggSizeCode` in every service. I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the `EggTypeDefiner` and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are \"organic\". 
The second solution is being suggested by some people who understand DDD better than I do in the hopes that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that `EggTypeDefiner` is not a domain and `EggType` is not an entity and as such should not have a `Guid` for an ID. However, I'm not sure the second solution is viable. This \"value object\" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...) which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.) Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion? ### Summary Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?"} {"_id": "16549", "title": "How do you feel about those companies that try to use the newest technologies?", "text": "Two days ago I got a suggestion to pass test of HTML 5 (I am looking for a job). I was shocked because modern web browsers don't support some features or support its partially. Other side of situation: I worked for a some company that still using SQL Server 2000 (now is available 2005, 2008) on her production. So my question is: how do you feel about those companies that try to use newest technologies? The newest is evil of good? **SUMMARY** We continue using old technology because of it predictability (this applies to critical systems in particular). A lack of productivity, low expansiveness, difficulties of deployment, implementation, testing of old technology are picking us to choice a new one. Even if we know that a new technology can be unsupported and now it is untested, raw and has low documentation, simple human curiosity is pushing us to use it. Any way we should be oriented on our target audience, people which are using our IT solutions. Other important things should be taken into account: * time to implement * time and cost to learn * ease of deployment, implementation, testing * faster and easier to use"} {"_id": "200813", "title": "Video streaming, with minimum delay", "text": "My goal is to stream a video from my pc, to a smartphone. However, i want the latency between us to be minimal. VIDEO -> ENCODING -> TRANSFER -> DECODING -> VIDEO I already know that the transfer will take approximately 50ms. What i want to know is different methods of encoding/decoding and how fast they are. The decoding smartphone will of course have limited computing power, and limited bandwidth available as well. As far as i can understand, less time encoding/decoding means more bandwidth usage. The whole operation above should take maximum 200ms, preferably less. EDIT: @gnat I'm just looking for some numbers on how fast a coder/decoder can work. I'm not very experienced with this kind of theory at all, seeing as i only know very basic c#. That is why i asked here, and not on stackoverflow. I've searched around quite a lot but haven't really found anything about coding/decoding speed at all, and all i wanted was some example numbers like Per was kind enough to provide. Other then that, this was a very theoretical question, as i have used VLC media server for the purpose above. 
I just found it easier to ask with a simple example."} {"_id": "160041", "title": "How do I add a Help system to my WinForms project?", "text": "I have pretty much completed work on a Winforms application, and would like to add a Help feature for users. I cannot send users to a website, and am thinking along the lines of the kind of help you get when you click on Help in IE (or hit F1). I believe this is compiled html or something, but am not sure what tools are available to assist in building this kind of thing, or how many options are out there. Anyway, how about some suggestions?"} {"_id": "157450", "title": "Why all classes in .NET globally inherits from Object class?", "text": "Its very interesting for me which advantages gives \"global root class\" approach for framework. In simple words what reasons resulted the .NET framework was designed to have one root **object** class with general functionality suitable for all classes. Nowadays we are designing new framework for internal use (the framework under SAP platform) and we all divided into two camps - first who thinks that framework should have global root, and the second - who thinks opposite. I am at \"global root\" camp. And my reasons what such approach would yields good flexibility and development costs reduction cause we will not develop general functionality any more. So, I'm very interested to know what reasons really push .NET architects to design framework in such way."} {"_id": "16544", "title": "Build/deployment tools for a small (1) development effort", "text": "What build or deployment tools are available for a 1 person development effort, in the .NET space, that are capable of producing project outputs? I'm not looking _necessarily_ looking for a CI server (though I can't think of anything else that does what I'm looking for) but I am looking for it to: * produce and publish documentation from xml comments * produce and publish the project (web and/or clickonce app) * handle basic versioning (automatic build number incrementing) * work from a sln file * be easy to setup (< 8-16 hrs for someone who knows little to nothing about the tool(s)) * do this at the push of a button (after configuration obviously) Things I don't need: * source control integration : I can point it to a sln if need be. Not a huge deal. * unit testing : I run test suite before commits * static analysis : again, I also run these before commits I know that msbuild is capable of most or all of this, and I do have my msbuild book(s) with me, but I'm still very new to it and I don't have the time at the moment to learn it well enough to do what I want."} {"_id": "88229", "title": "How do I write a software product definition?", "text": "I would like to learn how to write a software product definition. Therefore I am looking for online materials or books which would help me to learn more about this topic. I would like to learn, for starters: * what must be included in such a document * what must not to be included in such a document * how to make a product definition to sell internally the product * finding balance between use case descriptions (the _why_ ), and feature descriptions (the _how_ ). 
I am aware that it is not something that can be learned in 15 minutes, but I think such a discussion could help me to have a good start."} {"_id": "173648", "title": "Is it possible (and practical) to search a string for arbitrary-length repeating patterns?", "text": "I've recently developed a huge interest in cryptography, and I'm exploring some of the weaknesses of ECB-mode block ciphers. A common attack scenario involves encrypted cookies, whose fields can be represented as (relatively) short hex strings. Up until now, I've relied on my eyes to pick out repeating blocks, but this is rather tedious. I'm wondering what kind of algorithms (if any) could help me automate my search for repeating patterns within a string. Can anybody point me in the right direction?"} {"_id": "173642", "title": "Keeping a connection open all the time in SQL", "text": "I have developed a Windows application in C# in which multiple users can add some numbers and their name and can view the data entered. The problem that I have is that the server is on my laptop, and every time I log off or close my laptop, they lose their connection to the DB and cannot add or view anything anymore. It seems that the port is closed or something. Is there a way to keep the port and their connection established all the time, even when I'm logged out?"} {"_id": "84979", "title": "How are developing countries affecting the web design and development field?", "text": "While searching ELance.com I noticed that companies from developing countries, such as India and Pakistan, were snatching up a lot of the freelance jobs at low pay. How are foreign countries affecting web design and development jobs in general?"} {"_id": "161966", "title": "Term for 24-bits", "text": "Is there a term for a 24-bit (3-byte) integer? I know uncommon bit counts (such as a _\"nibble\"_ or _\"nybble\"_ for 4 bits) have names, and 24-bit quantities are very common in video and audio technology, for instance."} {"_id": "88227", "title": "What Software Development Life-Cycle (SDLC) methodology or methodologies are used by Google?", "text": "Does anyone know or have information about which Software Development Life Cycle methodologies are used by Google?"} {"_id": "161964", "title": "What is a \"behavior rich object\" and why would it be advantageous?", "text": "I am referring to the article _Mocks aren't Stubs_ by Martin Fowler. When naming cases where he thinks \"mockist\" TDD will be advantageous, he said: > It's particularly worth trying if you are having problems in some of the areas that mockist TDD is intended to improve. I see two main areas here. [...] The second area is if your objects don't contain enough behavior, mockist testing may encourage the development team to create more behavior rich objects. My question is: what does he mean by \"behavior rich objects\", objects that \"contain enough behaviour\", etc.? And why does it matter whether an object contains much behaviour or not, if it works correctly?"} {"_id": "100650", "title": "How to organize JavaScript and AJAX with PHP?", "text": "My JavaScript is getting out of hand for my PHP application. I have 20 script tags that link to various JavaScript files in a javascript folder. Each JavaScript file basically controls one element on the DOM. And, if the JavaScript file uses AJAX, then it will have a corresponding PHP file that the AJAX will call. For example, a js file might control a button on the page: $(document).ready(function () { $(\"#button\").live('click', function() { $.ajax({ type: \"POST\", data: ...
url: \"button_click.php\", }); }); }); As you can see, this gets out of hand. What is the best way to organize all of the javascript?"} {"_id": "69481", "title": "Why do Wordpress & Drupal serialize the DB data?", "text": "I've recently went through manually editing some tables on a Wordpress website. I've also had some experience with database internationalization so I know that serializing is not the best (IMO) option to apply multiple languages. So why is it done?"} {"_id": "154781", "title": "Copyrighting software, templates, etc. under real name or screen name?", "text": "My question is hopefully simple--should I copyright my work (art, software, web design, etc.) under my real name or my screen name? My real name and screen name are also easily connected with a bit of searching, so does it really matter in the end? I'm not a professional (at this point). I read this question: Is it a bad idea to sell Android apps in the Android Market under your real name? and they recommended releasing on the app market under a company name. I also read this question: On what name should I claim copyright in open source software?, but that didn't answer my question. I know it probably matters for big projects, but for little projects, does it matter?"} {"_id": "120321", "title": "Is ORM an Anti-Pattern?", "text": "I had a very stimulating and interessting discussion with a colleague about ORM and its pros and cons. In my opinion, an ORM is useful only in the rarest cases. At least in my experience. But I don't want to list my own arguments at this time. So I ask you, what do you think about ORM? What are the pros and the cons?"} {"_id": "154786", "title": "Is a blob more efficient than a varchar for data that can be ANY size?", "text": "When setting up a database I want to use the most efficient data type for potentially fairly long data. Currently my project is to store song titles and thoughts pertaining to that song. Some titles might be 5 characters or longer than 100 characters and the thoughts could run pretty long. Is it more efficient to use a varchar set to 8000 or to use a blob? Is using a blob the same as a varchar, in that there is a set size it is allocated regardless of what it holds? or is it just a pointer and it doesn't really use much space on the table? Is there a certain set size of a blob in KB or is it expandable?"} {"_id": "154785", "title": "Should I provide fallbacks for HTML5/CSS3 elements in a web page at this point?", "text": "I'm wondering if I should bother providing a fallback for HTML5 tags and attributes and CSS3 styling at this point in time. I know that there's probably still a lot of people out there who use older versions of browsers and HTML5/CSS3 are still fairly new. I read this article: Should I use non-standard tags in a HTML page for highlighting words? and one answer mentioned that people kind of \"cheat\" with older browsers by using the new tags and attributes, but styling them in CSS to ensure they show up right. This question: Relevance of HTML5: Is now the time? was asked about two years ago and I don't know how relevant it is anymore. For example, I want to use the `placeholder` and `required` attributes in a web form I'm building and it has no labels to show what each `` is. How do I handle this, or do I bother?"} {"_id": "69488", "title": "Approaches that cater for poor connectivity", "text": "First off some background info My company has a software as a service model where people log onto our servers and do work. 
In order to support that application we have a utility application that lives on the client machine and copies binary data from our servers to their machines. The first iteration of this software polled the database and then generated the information on the client side. This approach was done away with because we thought there were some fundamental flaws: * Polling the database to see if any work had to be done * Opening up our SQL port to the world * Pulling all the data from the database and generating it on the client side. Our second iteration of the software generated the data on one of our servers and sent it on to the client via the Java Message Service (JMS). The advantages were seen as: * Event driven * No polling * SQL port is closed * Data is not being generated over the WAN The above is to illustrate the approaches we've used in the past. The problem that we have is that our clients often have poor connectivity to us (packet loss, poor response times, etc.). JMS was supposed to handle that, as it should reconnect when the line comes back up, but this doesn't always happen. What are the best approaches to deal with poor client connectivity? **EDIT** To explain what I mean about JMS not working as expected: my expectation of JMS is that it works reliably. What I mean by this is that if a line goes down, or is losing packets, or is misbehaving in any other manner, the JMS implementation will continue to try to send the message until it succeeds. As an example, I see on an almost daily basis a JMS queue with consumers attached, yet no messages being sent or received; restarting the client normally resolves the issue. I can speak about more specifics, however for the sake of this conversation let's leave JMS out as a potential answer. **EDIT 2** I wanted some other possible solutions to my problem, but the general consensus seems to be that my current one (JMS) is the best. So here are 2 of my current cases where I am having issues with JMS. _Case 1_ The client connects to the queue and successfully sends and receives multiple messages; after a variable time (between several hours and several days) the client no longer receives any messages. The main queue shows the consumers connected, but the client never receives the messages. The resolution is to restart the client application. _Case 2_ The client never connects to the queue. The client continually tries to reconnect, but the reconnection always fails. The resolution for this is to try again in a few hours. Our setup is fairly simple: * One public server * OpenMQ * The clients are low volume, ranging from 10 messages a day to 1000 messages a day * Clients never initiate a 'message conversation'; the queue initiates and the clients reply * Messages are small, all pure text, no binary data * config is as follows: connectionFactory = new ConnectionFactory(); connectionFactory.setProperty(ConnectionConfiguration.imqAddressList, connectionURL); connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, \"true\"); connectionFactory.setProperty(ConnectionConfiguration.imqPingInterval, \"30\"); connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, \"-1\"); connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, \"30000\");"} {"_id": "154789", "title": "Generate sequence of string of 4 characters", "text": "I'm facing a problem with generating a character sequence for SMS tracking. There should be an easy-to-enter code sent with all the outgoing messages. A reply SMS will be mapped back to that code.
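The encoding part seems easy enough - here is a rough Java sketch of what I have in mind, turning a sequence number into a 4-character code over digits and capital letters (36^4 = 1,679,616 possible codes; the method name is made up):

    // Rough sketch: encode a monotonically increasing sequence number as a
    // 4-character base-36 code ('0'-'9', 'A'-'Z'); assumes 0 <= sequence < 36^4.
    static String toCode(int sequence) {
        char[] alphabet = \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ\".toCharArray();
        char[] code = new char[4];
        for (int i = 3; i >= 0; i--) {
            code[i] = alphabet[sequence % 36];
            sequence /= 36;
        }
        return new String(code);
    }

The hard part is getting that sequence number safely.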
I can't just generate the next number in the sequence from the database, because multiple clients may read it at once and may end up with the same code. Is there any workaround for this? **EDIT** The database is Informix."} {"_id": "177775", "title": "Strategy for clients to retrieve real-time log from HTTP server", "text": "I have an HTTP server service application which has its own logging mechanism. It's written in Delphi. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; there are a lot of things to log. There may be up to 50 messages within 1 second at times. The existing log which is already implemented is not saved; it's only kept in the memory of the server service - where I will need to distribute it to any client which needs it. Once all clients have a log message, it should be deleted. I intend to use HTTP to \"ask\" the server for the log, and respond with an XML packet. The connections are not keep-alive. The only problem is, the server should only send the client those log records which it needs, not everything. I have no way for the server to push the log to the clients in real time, so each client needs to repeatedly ask the server for the latest log records. This HTTP server is very lightweight, and there is no session management. There isn't even any type of authentication. The only way I see is for a client to register itself on the server, and whenever a log is issued on the server, it creates a copy of the log for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each log, add this log to the end of each client log queue, and wait for the client to request it. At that point, when the server replies with the XML log, it should flush (delete) whatever's in the queue. I'm worried however that this could cause memory issues. Each client log queue might get 100 log messages before the client requests the latest logs. How should I go about doing this in the fastest way possible without hindering the performance of the server?"} {"_id": "224448", "title": "How maintainable are automated refactorings?", "text": "Resharper offered to turn my loop: foreach (JObject obj in arr) { var id = (string)obj[\"Id\"]; var accountId = (double)obj[\"AccountId\"]; var departmentName = (string)obj[\"DeptName\"]; i++; } ...into a LINQ statement. I acquiesced, and it produced this: int i = 1 + (from JObject obj in arr let id = (string) obj[\"Id\"] let accountId = (double) obj[\"AccountId\"] select (string) obj[\"DeptName\"]).Count(); Gee whillikers, jumpin' Jehoshaphat, and Heavens to Murgatroid! This makes me wonder if the robots becoming smarter than me is a good thing; how maintainable is that? Answer: it's not, since I can't understand it; R# may just as well have converted it to machine code, for all I can grok it. I have to admit, though, it's \"cool.\" At what point does the perfect become the enemy of the good in this kind of scenario? ## UPDATE On a second pass through, R# tells me, for that line of LINQ, \"Local variable \"i\" is never used\". Letting it remove it, the line becomes: var arr = JsonConvert.DeserializeObject(s); Hmm!
Now that's grokkable; the reason I was breaking the elements down into individual vars before was so that I could see what was in them: MessageBox.Show(string.Format(\"Object {0} in JSON array: id == {1}, accountId == {2}, deptName == {3}\", i, id, accountId, departmentName)); ...but with that messagebox commented out, I guess the above is better after all."} {"_id": "215548", "title": "Implementing new required feature after software release", "text": "**Fake Scenario** There is a piece of software that was released 1 year ago. The software is for mapping and registering all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a danger scale (this is fake software and a fake specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB. **New Feature** One year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered. **Problem** I already have 100,000 recorded animals without a class or a discovery location, but I need to insert a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or discovery location). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB too). What are the alternatives to solve this situation? _I am in a situation where this new feature's data cannot be filled in for the existing records. The time has already passed and I can't go back in time to get it._"} {"_id": "19845", "title": "Coding Style And defects", "text": "Over the years I have developed several style-based techniques which I use to keep myself from making errors, e.g. writing `if(const == lvalue)` rather than `if(lvalue == const)`, since the first can't fall victim to the classic accidental assignment goof. I recently worked on a project with very unusual style standards and I found that I had much greater difficulty reading code. Has anyone seen any statistics on a particular coding style and its defect levels, or have any experience with changing style alone to improve defect rates?"} {"_id": "22073", "title": "Is functional programming actually used to create applications?", "text": "> **Possible Duplicate:** > What are some well known applications written in F#? I see a lot of people talking about how cool functional programming is, how awesome Lisp and Haskell and F# are, etc, but what I don't see is them actually being _used_. Just about every program on my computer is written in either something in the C family, Delphi, or Python. I see tutorials talking about how easy it is to use functional languages to do complicated math problems, but no one talking about using them to do things most people actually care about using computers for, like business applications and games. Does anyone have any examples of actual programs that people have heard of and are using, written in a functional language? The only one I can think of off the top of my head is Abuse, from almost 15 years ago.
(Things like Emacs, where the core program is written in C with a functional language scripting layer on top, don't count.)"} {"_id": "214462", "title": "Object oriented immutability: final classes or final methods", "text": "One of the things you see in numerous places in the standard Java library is final classes. It is claimed that this is for immutability, which I understand... to an extent. Suppose you have a class: final public class ImmutableTest { private String value; public ImmutableTest(String value) { this.value = value; } public String getValue() { return this.value; } } You can say that the value is immutable because it can't be adapted after creation. However this limits usability of the class, as no one can extend it (for example to use as a drop-in for a bad interfaceless design...). Now suppose you have this class: public class ImmutableTest { private String value; public ImmutableTest(String value) { this.value = value; } final public String getValue() { return this.value; } } The \"value\" is still immutable, but now at least people can extend the class. Someone once interjected: but if you extend it, the extended class is not guaranteed to be immutable! While this is true, I think it is immaterial to the class itself. If user A uses my ImmutableTest class everywhere in his code, he only has access to the \"value\" anyway, which **remains immutable**. Unless he does explicit casting (in other words: he's aware that he's trying to access other stuff) he can't access a \"mutable\" part of the actual instance, which means that from the developer's point of view the class is as immutable as it should be, even if the actual instance has other mutable variables. So in short: my opinion is that final classes are superfluous, as final methods are all you need. I would love to hear arguments to the contrary though! **UPDATE** To clarify: I'm not claiming that adding \"final\" to a class or its methods makes it immutable. Suppose you have a class that is designed to be immutable but you do not make the methods or class final; a subclass can of course make it mutable again. This argument is used as the reason why some classes in the standard library are final. My opinion is that it should've been enough to finalize the accessor methods to the immutable part and leave the class non-final. This may leave the window open to child objects **adding** mutable fields, but the original set of fields would remain immutable, hence satisfying the \"immutable\" requirement for the initial set of methods."} {"_id": "236411", "title": "What are the things to take into consideration when designing a database that will likely be distributed after some time?", "text": "We are a small start-up now, and we don't have a DBA. If our project succeeds, it will have a lot of data that will likely need to be distributed over several machines. We want to make that step easier, so what should we take into consideration now to make it so? What we can think of now is using UUIDs instead of autoincrement IDs, but even there we don't know how much that will affect performance. Is using UUIDs the right choice, and what else should we do?"} {"_id": "241430", "title": "What are the benefits of using a 'decorator factory' that decorates objects?", "text": "In a project I decided to implement the Decorator pattern.
I have a class `Thing` with `methodA()`, and a class `AbstractDecorator` that inherits from `Thing` and that all decorators inherit from: `ConcreteDecorator1`, `ConcreteDecorator2`, etc. All of them of course override `methodA()` to add some functionality before delegating to the wrapped `Thing`. Usual Decorator implementation. I decided to implement a `WrappingFactory` (for lack of a better name): it receives `Thing` objects and wraps them with a specified decorator. Some of the decorators require a parameter in the constructor, and `WrappingFactory` takes care of that too. Here it is (the methods return the wrapped instance, since reassigning the parameter alone would have no effect on the caller): public class WrappingFactory{ public static Thing addSomeFunctionality(Thing thing){ return new SomeDecorator(thing); } public static Thing addFunctionalityWithParameter(Thing thing, int param){ return new DecoratorWithParameter(thing, param); } public static Thing addSomeAwesomeFunctionality(Thing thing){ return new AwesomeDecorator(thing); } } I did this but actually I don't know why. Does this have benefits as opposed to having the client instantiate decorators directly? If this has benefits, please explain them to me."} {"_id": "132691", "title": "Is there an alternative to bits?", "text": "Is there an alternative to bits as the smallest unit of data? Something that won't be only 0 or 1, but can actually hold many possible states in between? Wouldn't it be more natural to store floats like that?"} {"_id": "241439", "title": "If Scheme is untyped, how can it have numbers and lists?", "text": "Scheme is said to be just an extension of the Untyped Lambda Calculus (correct me if I am wrong). If that is the case, how can it have Lists and Numbers? Those, to me, look like 2 base types. So I'd say Racket is actually an extension of the Simply Typed Lambda Calculus. No? ## Question: * **Is Scheme's type system actually based on, or more similar to, the Simply Typed or the Untyped Lambda Calculus?** * **In what ways does it differ from the Untyped and/or Simply Typed Lambda Calculus?** (The same question is valid for \"untyped\" languages such as Python and JavaScript - all of which look like they have base types to me.)"} {"_id": "245959", "title": "How viable is it to create mini documentation inside the source code?", "text": "So, I work as a developer. We are a small team. The majority of the people (everyone except the project manager and the senior developer) are still in university. We have very flexible work hours; sometimes I don't see some of the people for 3-4 days. You can imagine that communication is key here. We write a lot of emails and use a bug tracking system, and as long as you are somewhat connected to the task you can be well informed (as long as you read all the correspondence). One of the weak points of the team is the documentation that people leave behind. The code is reasonably well written, and as long as you take the time to read it in its entirety you will understand it. (Problem: this usually takes quite a bit of time when tackling modules of a couple of thousand lines of code.) Note: There is very good documentation for the users, which is written for the end users; however, we are discussing the documentation shared between the programmers. When a new person has to get into something a part of the team is working on, or when someone has to modify old code (or someone else's old code), things get a bit complicated.
As I mentioned before, sometimes we don't see each other for days, so unless the project manager knows (and remembers) the details of the code, the newcomer will have the following options: 1. Read the source 2. Read the old emails with the discussions. When you have to refactor large pieces of the code this is fine (especially the first option); however, when an algorithm needs to be changed, or someone just has to use an old module as part of a new module or script, you can imagine that waiting to meet the guy who created the module, or reading tons of emails, will **waste time**. I have started adding small documentation at the beginning of each module. It consists of: 1. 2-3 sentences about what the module does 2. A bit of information for each function that is not a part of another function. Basically what it does and some specific information about the arguments it takes if the arguments are objects. If they aren't, I just write \"see the source for more on the arguments\". Since I don't know if I am on the right track, the question is simple: What should such minimal documentation contain so it can give a general idea about what the module does and how people can use it?"} {"_id": "119913", "title": "How can I learn to effectively write Pythonic code?", "text": "Doing a google search for \"pythonic\" reveals a wide range of interpretations. The wikipedia page says: > A common neologism in the Python community is pythonic, which can have a wide range of meanings related to program style. To say that code is pythonic is to say that it uses Python idioms well, that it is natural or shows fluency in the language. Likewise, to say of an interface or language feature that it is pythonic is to say that it works well with Python idioms, that its use meshes well with the rest of the language. It also discusses the term \"unpythonic\": > In contrast, a mark of unpythonic code is that it attempts to write C++ (or Lisp, Perl, or Java) code in Python\u2014that is, provides a rough transcription rather than an idiomatic translation of forms from another language. The concept of pythonicity is tightly bound to Python's minimalist philosophy of readability and avoiding the \"there's more than one way to do it\" approach. Unreadable code or incomprehensible idioms are unpythonic. What does the term \"pythonic\" mean? How do I learn to effectively apply it in practice?"} {"_id": "249603", "title": "Rendering STL file to STL viewer which is in new page", "text": "I've uploaded an STL file as an additional file for a product. And it gets downloaded in the frontend as shown in the screenshot. ![associated files](http://i.stack.imgur.com/pgDsQ.png) When I click on that link for the STL downloader, that STL file is downloaded successfully. I have tried and tested this many times and it worked like a charm. But I don't want that STL file to be downloaded. I want it to be rendered in an STL viewer which I built. That STL viewer is in another web page. So clicking the `Click here to download associated STL file` link should render the STL file in the STL viewer which is in another web page. How can I do it? I don't want users to download STL files because they are the intellectual property of my company. The whole thing should be done in PHP only."} {"_id": "245950", "title": "Is template \"metaprogramming\" in Java a good idea?", "text": "There is a source file in a rather large project with several functions that are extremely performance-sensitive (called millions of times per second).
In fact, the previous maintainer decided to write 12 copies of a function, each differing very slightly, in order to save the time that would be spent checking the conditionals in a single function. Unfortunately, this means the code is a PITA to maintain. I would like to remove all the duplicate code and write just one template. However, the language, Java, does not support templates, and I'm not sure that generics are suitable for this. My current plan is to write instead a file that generates the 12 copies of the function (a one-use-only template expander, practically). I would of course provide copious explanation for why the file must be generated programmatically. My concern is that this would lead to future maintainers' confusion, and perhaps introduce nasty bugs if they forget to regenerate the file after modifying it, or (even worse) if they modify instead the programmatically-generated file. Unfortunately, short of rewriting the whole thing in C++, I see no way to fix this. Do the benefits of this approach outweigh the disadvantages? Should I instead: * Take the performance hit and use a single, maintainable function. * Add explanations for why the function must be duplicated 12 times, and graciously take the maintenance burden. * Attempt to use generics as templates (they probably don't work that way). * Yell at the old maintainer for making code so performance-dependent on a single function. * Use some other method to maintain performance and maintainability? P.S. Due to the poor design of the project, profiling the function is rather tricky... however, the former maintainer has convinced me that the performance hit is unacceptable. I assume by this he means more than 5%, though that is a complete guess on my part. * * * Perhaps I should elaborate a bit. The 12 copies do a very similar task, but have minute differences. The differences are in various places throughout the function, so unfortunately there are many, many conditional statements. There are effectively 6 \"modes\" of operation, and 2 \"paradigms\" of operation (words made up by myself). To use the function, one specifies the \"mode\" and \"paradigm\" of operation. This is never dynamic; each piece of code uses exactly one mode and paradigm. All 12 mode-paradigm pairs are used somewhere in the application. The functions are aptly named func1 to func12, with even numbers representing the second paradigm and odd numbers representing the first paradigm. I'm aware that this is just about the worst design ever if maintainability is the goal. But it seems to be \"fast enough\", and this code hasn't needed any changes for a while... It's also worth noting that the original function has not been deleted (although it is dead code as far as I can tell), so refactoring would be simple."} {"_id": "249608", "title": "Multiple intranet/internet systems partially working on same data - database strategy", "text": "We are starting to rewrite our apps (an Internet portal with millions of unique users, and a few CRM/ERP systems with a few hundred users) and we have a huge decision to make now. We are going to write them mostly (90-95%) in `Symfony2` with `Doctrine`, and some background services (e.g. mailing) in `Java`. Database - `MySql`/`MariaDb`. Also a lot of additional technologies (`redis/memcached`, `load balancing`, `varnish`, `replication` and so on). Most important (in this case) are `symfony2`, `mysql/maria` and `doctrine`. The thing is - it would be best for a few systems to work on the same tables.
For example: an internet portal with job offers + a CRM system for managing clients that pay for posting those offers (there are many similar cases). Also, having one login for our users across every system is important. I see two approaches here: 1. We have one big database with a few hundred tables. I used to work with 200+ tables, but not with that high an amount of traffic. So if traffic goes up, sharding/partitioning will be involved. 2. We have many databases, each for one app. If there is a need for one app to communicate with another `DB`, we will write special services to deliver that functionality. Now, what I'm worried about: 1\\. ease of development - it's easier to just have one database 2\\. ease of configuration/assuring redundancy 3\\. performance One thing I know now is that the database will be hosted on three machines with `master-slave` replication, which is supported nicely by `Doctrine`. What are your thoughts? What are the pros and cons? Thanks!"} {"_id": "245956", "title": "Should a variable name be changed if its purpose changes?", "text": "I'll change my personal code often and not worry about going back and changing the associated variable names. Should variable names be maintained when their meaning changes? var accountCurrentValue = x; var accountYesterdayValue = y; var accountValueChange = accountCurrentValue - accountYesterdayValue; \\--> `accountYesterdayValue` becomes instead the previous month's value var accountCurrentValue = x; var accountLastMonthValue = y; var accountValueChange = accountCurrentValue - accountLastMonthValue;"} {"_id": "81228", "title": "Learning Sparx Enterprise Architect", "text": "I have a basic working knowledge of Enterprise Architect, with 6 months experience. But I would like to learn how to use more of it. Does anyone have a recommendation for a learning source or course?"} {"_id": "159008", "title": "What is a good starting point for small scale PHP development and would a framework be overkill?", "text": "I'm a web development intern working on a small PHP application (just a few pages with a little database access) which has fast become a couple of very non-DRY, non-OO, individual scripts. A framework seems like overkill (maybe not?) but I was wondering what some best practices are for code / folder structure and general solo, small-scale PHP development. What is your starting point for a small application such as this?"} {"_id": "159007", "title": "Are fluent interfaces more flexible than attributes and why?", "text": "In an EF 4.1 Code First tutorial the following code is given: public class Department { public int DepartmentId { get; set; } [Required] public string Name { get; set; } public virtual ICollection<Collaborator> Collaborators { get; set; } } Then it is explained that the fluent interface is more flexible: > Data Annotations are definitely easy to use but it is preferable to use a programmatic approach that provides much more flexibility. The example of using the fluent interface is then given: protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<Department>().Property(dp => dp.Name).IsRequired(); modelBuilder.Entity<Manager>().HasKey(ma => ma.ManagerCode); modelBuilder.Entity<Manager>().Property(ma => ma.Name) .IsConcurrencyToken(true) .IsVariableLength() .HasMaxLength(20); } I can't understand why the fluent interface is supposedly better. Is it really? From my perspective it looks like the data annotations are clearer, and have more of a clean semantic feel to them.
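The only extra flexibility I can see myself is that the fluent configuration is ordinary code, so it can branch or loop. A hypothetical sketch of what I mean (the enforceLegacyLimits flag is made up; the calls are the same ones from the tutorial):

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Hypothetical: configuration is plain code, so it can depend on a
        // runtime flag - something a compile-time attribute cannot do.
        if (enforceLegacyLimits)
        {
            modelBuilder.Entity<Manager>().Property(ma => ma.Name).HasMaxLength(20);
        }
    }

But I'm not sure whether that is the flexibility the tutorial has in mind.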
My question is: why would a fluent interface be a better option than using attributes, especially in this case? (Note: I'm quite new to the whole concept of fluent interfaces, so please expect no prior knowledge on this.) Reference: http://codefirst.codeplex.com/"} {"_id": "153062", "title": "What term is used to describe running frequent batch jobs to emulate near real time", "text": "Suppose users of application A want to see the data updated by application B as frequently as possible. Unfortunately app A or app B cannot use message queues, and they cannot share a database. So app B writes a file, and a batch job periodically checks to see if the file is there, and if so loads it into app A. Is there a name for this concept? A very explicit and geeky description: \"running very frequent batch jobs in a tight loop to emulate near real time\". This concept is similar to \"polling\". However polling has the connotation of being very frequent, multiple times per second, whereas the most often you would run a batch job would be every few minutes. A related question -- what is the tightest loop that is reasonable? Is it 1 minute or 5 minutes or ...? Recall that the batch jobs are started by a batch job scheduler (e.g. Autosys, Control M, CA ESP, Spring Batch etc.) and so running a job too frequently would cause overhead and clutter."} {"_id": "153067", "title": "Designing configuration for subobjects", "text": "I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the information relating to the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the available solutions are: 1. I pass the MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained 2. I copy the MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the printLevel information repeated three times. 3. I create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass to AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig. Have you ever encountered this situation? How did you solve it? Which one is stylistically clearer to a new reader of the code?"} {"_id": "153065", "title": "IValidatableObject vs Single Responsibility", "text": "I like the extensibility point of MVC, allowing view models to implement IValidatableObject, and add custom validation.
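Concretely, the custom validation in the view model ends up in a Validate method along these lines (a simplified sketch; the _loginValidator field and the Email property are my own names, and the injected interface they refer to is shown below):

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Simplified sketch: delegate the actual check to the injected validator.
        if (!_loginValidator.UserExists(Email))
            yield return new ValidationResult(\"No account with this email exists.\", new[] { \"Email\" });
    }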
I try to keep my Controllers lean, having this code be the only validation logic: if (!ModelState.IsValid) return View(loginViewModel); For example, a login view model implements IValidatableObject and gets an ILoginValidator object via constructor injection: public interface ILoginValidator { bool UserExists(string email); bool IsLoginValid(string userName, string password); } It seems that using Ninject to inject instances into view models isn't really a common practice, and may even be an anti-pattern. Is this a good approach? Is there a better one?"} {"_id": "81221", "title": "Why do companies tell me they want me as an in-house employee, not as a contractor?", "text": "## Context Money is not what is at issue here. What is at issue is independence, making sure I will contribute by working in my field of expertise, tactfulness, and ensuring everyone's expectations are in line. Companies that are trying to develop new software products tell me they want me as an in-house employee, and not as a contractor. Having just finished my PhD, I am an expert in a very specialized kind of data analysis, not in general programming. My programming skills were gained as a by-product of my academic research. I believe I am called a 'domain expert' rather than a 'software engineer'. My coding is focused on implementing a specific class of data analysis algorithms, so I consider myself more of an academic excelling at thinking of creative ways of developing tools to solve problems, who programs to test and implement these new ideas, rather than a corporate programmer. I am looking for work as a contractor in my field of expertise and would prefer to earn less money and have fewer benefits working as a short-term contractor helping people with what I am an expert in, rather than being an in-house employee. Some of the problems they have discussed seem perfectly suited for me to work with them for only a few months and then move on and let their real coders take over. I have helped academics in this manner. I am in the USA. ## **The problem:** Companies say I need to be an in-house employee for intellectual property reasons. ## **The question:** Wouldn't it be easy for them to just have me sign a contract saying all my work is owned by them? If I work as a contractor, they can let me go when my value diminishes and save lots of money, and I could just focus on what I think is a fun way to work -- moving from problem to problem. What am I missing here?"} {"_id": "63198", "title": "Discrete Mathematics Refreshers Course?", "text": "I graduated 8 years ago and did a discrete mathematics course in my 2nd year, but I've been told that I'll likely be asked discrete maths questions in an upcoming interview. Is there a simple webpage/document that I can use to refresh my memory? I understood the concepts when I took the course 10 odd years ago, but my memory is a bit rusty and I'd like to just refresh it."} {"_id": "135272", "title": "Lines of Code vs. Optimal Language", "text": "I was wondering whether it is possible to give any advice as to the maximum number of lines of code at which one should consider switching from, say, MATLAB to a lower-level language. Is it even the case that at a certain point it makes more sense to manage a certain degree of complexity of a given program in a proper object oriented language rather than MATLAB?
I should say I am a newbie in both MATLAB and Java, so I have no hidden agenda in this question, even though I'm aware of the heated discussions that people sometimes engage in over whether MATLAB is a proper programming language. I'm not experienced enough to even think about participating in such an exchange, and I'm just looking for advice on whether there is a cut-off point where one really needs to go to a different language. Also I should add that I understand the code may become longer when you move to a lower level; however, I was under the impression that object oriented programming makes it easier to manage the complexity of bigger programs. (Maybe lines of code was a bad choice as a proxy for complexity?) Is the choice of low vs. high level just one of performance? (Which in the end you have to pay for with a bigger programming task on your hands for the low-level language?)"} {"_id": "49674", "title": "open source database project", "text": "What is the best way to build an open source database? I would like to build a database of all vehicles and their related maintenance information (i.e. oil weight, quantity, tire pressure, windshield wipers, etc). Currently this information is fragmented or just not put online in an easily accessible manner. As soon as collection begins, I would like to import this data into the DB and make it available freely to anyone. Is there a process (site or group) through which I can start gathering this information in a reliable and verifiable way? In addition to this, are there any issues that I should be aware of?"} {"_id": "63193", "title": "Storing bugfixes as local changes rather than in SVN branch", "text": "We are deploying and maintaining websites for different customers. Normally we wait for the newest changes to stabilize, then tag the Subversion version as a release, and install that version for the customer. Then we receive support requests, and sometimes are forced to make quick bugfixes between releases. I find it troublesome to maintain these bugfixes in branches, and wonder why not store them as \"local changes\" in some \"Release X.Y\" folder (on the build machine)? Can this work, or is branching the only way? Reasons for this change are: * it seems much easier to use the WinMerge utility for merging the newest bugfixes into some \"Release X.Y\" folder than using TortoiseSVN's \"branch integration\". WinMerge offers you much finer control over which changes to merge and which not (on a file basis, not a commit basis). * when keeping bugfixes as \"local changes\" you have three-way comparison: in \"Check for modifications\" you can see all the bugfixes in the current release, and with WinMerge you can see all the newest trunk changes (if you compare the \"Release X.Y\" folder with the \"trunk\" folder) * also it seems over-obsessive to keep everything in SVN. IMO SVN should serve the user and not the other way around. * also branch re-integration never merges only those files that have actually changed. It takes along a whole bunch of unnecessary files, which makes it difficult to track anything in the SVN log."} {"_id": "204417", "title": "What is the most optimal algorithm for counting lines of text in a file?", "text": "The file is > 5 GB, with simple lines like an Apache access.log. I need to get the number of lines. Any construction like file(filename).read().count('\n') would read the whole file, and that would take a very long time. On the other hand, e.g. os.stat(filename).st_size works very fast. It should be possible, at least hypothetically, to estimate the number of lines from the size of the first lines in bytes and the total size of the file.
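A minimal sketch of that idea (the name estimate_line_count is made up; it samples the first lines to get an average line length):

    import os
    from itertools import islice

    def estimate_line_count(filename, sample_size=1000):
        # Average the length of the first sample_size lines, then divide
        # the total file size (instant via os.stat) by that average.
        total_size = os.stat(filename).st_size
        with open(filename, 'rb') as f:
            lengths = [len(line) for line in islice(f, sample_size)]
        if not lengths:
            return 0
        average_line_length = float(sum(lengths)) / len(lengths)
        return int(round(total_size / average_line_length))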
Is there a more accurate way?"} {"_id": "149037", "title": "Why is the folder name \"bin\" used in some frameworks and languages?", "text": "I have been learning Java, and still, after a prolonged time, I don't know **why the name of the folder is \"bin\"** where one finds all the tools for Java. Is there any logical reason behind that? I have noticed the same in the .NET Framework as well."} {"_id": "45574", "title": "Should Development / Testing / QA / Staging environments be similar?", "text": "After much time and effort, we're finally using Maven to manage our application lifecycle for development. We still unfortunately use Ant to build an EAR before deploying to Test / QA / Staging. My question is, while we made that leap forward, developers are still free to do as they please for testing their code. One issue that we have is half our team is using Tomcat to test on and the other half is using Jetty. I prefer Jetty slightly over Tomcat, but regardless we use WAS for all the other environments. My question is, should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments. Tomcat, Jetty, and WAS are different under the hood. My opinion is that we all should develop on what we're deploying to production with, so we don't have the problem of \"well, it worked fine on my machine\". While I prefer Jetty, I'd just as soon we all work on the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter"} {"_id": "41855", "title": "What approaches can I take to lower the odds of introducing new bugs in a complex legacy app?", "text": "Where I work I often have to develop (and bug fix) in an old system (.NET 1) whose code is complete spaghetti, with little thought given to variable names, program structure, or comments. Because of this it takes me ages to understand what bits need to be changed, and I often 'break' the existing software because I've made a modification. I really _really_ want to spend a couple of months (with colleagues) going through it to refactor, but existing developers both can't see the need and don't think there's time for this (the system is massive). I dread having to work on its code, as it takes days to fix something only to find out I've broken something else. This obviously makes me look incompetent - so how can I deal with this?"} {"_id": "41854", "title": "A list of the most important areas to examine when moving a project from x86 to x64?", "text": "I know to check for/use asserts and carefully examine any assembly components, but I didn't know if anyone out there has a fairly comprehensive or industry-standard checklist of specific things at which to look? I am looking more at C and C++. note: There are some really helpful answers, I'm just leaving the question open for a couple days in case some folks only check questions that don't have accepted answers."} {"_id": "162438", "title": "Returning an IQueryable from an IRepository", "text": "Using the Repository pattern, is it proper to return an IQueryable of a data set (table), for generic usage? It is very handy in many cases, especially when using external libraries that leverage that interface, for example some plugins that sort/filter and bind to UI elements. However, exposing an IQueryable sometimes seems to leave the design prone to errors.
This, coupled with wrong usage of lazy loading, could lead to severe performance hits. On the other hand, having access methods for every single usage seems redundant and also a lot of work (considering unit tests etc)."} {"_id": "162439", "title": "Is there any way to remove this redundancy?", "text": "I currently have the following HTML code: <time datetime=\"2012-08-23\">23. August 2012</time> I have that because I felt the \"23. August 2012\" format was easier for website visitors to read than <time>2012-08-23</time> Both of those HTML5 examples validate at w3, but the first one is obviously redundant (the date appears twice), and I like to cut out markup I don't need. Is there another date format that I can use in the second example that is (1) easy for website visitors to read, and (2) w3 valid?"} {"_id": "162432", "title": "Why aren't there native Javascript interpreters for Windows/Mac/Linux?", "text": "It seems to me it would be very useful to use JavaScript for general server-side scripting tasks, as it has more or less the same features as Perl and Python. But AFAIK there are no generally available JavaScript interpreters for the major machine architectures. I guess the other problem may be the lack of libraries, but surely these would come if the interpreters were there. Google's V8 could maybe be a starting point. Does anyone think we'll see this soon?"} {"_id": "162435", "title": "How to unit test a function that is refactored to strategy pattern?", "text": "If I have a function in my code that goes like: class Employee{ public string calculateTax(string name, int salary) { switch (name) { case \"Chris\": return doSomething(salary); case \"David\": return doSomethingDifferent(salary); case \"Scott\": return doOtherThing(salary); } } } Normally I would refactor this to use Polymorphism, using a factory class and the strategy pattern: public string calculateTax(string name, int salary) { InameHandler nameHandler = NameHandlerFactory.getHandler(name); return nameHandler.calculateTax(salary); } Now if I were using TDD then I would have some tests that work on the original `calculateTax()` before refactoring. ex: calculateTax_givenChrisSalaryBelowThreshold_Expect111(){} calculateTax_givenChrisSalaryAboveThreshold_Expect111(){} calculateTax_givenDavidSalaryBelowThreshold_Expect222(){} calculateTax_givenDavidSalaryAboveThreshold_Expect222(){} calculateTax_givenScottSalaryBelowThreshold_Expect333(){} calculateTax_givenScottSalaryAboveThreshold_Expect333(){} After refactoring I'll have a Factory class `NameHandlerFactory` and at least 3 implementations of `InameHandler`. How should I proceed to refactor my tests? Should I delete the unit tests for `calculateTax()` from `EmployeeTests` and create a Test class for each implementation of `InameHandler`? Should I test the Factory class too?"} {"_id": "160362", "title": "What's the benefit of a singleton over a class in objective-c?", "text": "They both seem to take about the same amount of effort to use. Singleton: needs a .h and .m file, has to be imported into any class you want to call it from, and then has to be instantiated. How is that any different from a class?"} {"_id": "160363", "title": "When you won't need a language anymore, should you still use it?", "text": "My first main language was Java. However, over the years I've dropped Java in favor of Python, JavaScript, bash, etc. I still have advanced reading knowledge of Java, but since I haven't coded in it for so long, my writing knowledge is intermediate at most. The other day, while looking at some old code written in Java, I got inspiration to re-write it as a learning experience to reacquaint myself.
It was at that point that I stopped to think that if I wrote this in Java, it would be nice, but when the renewed inspiration had hit me, I wanted to implement it as a web app (it was a game), so other people could play it. So now here's the dilemma: Do I re-write (and heavily refactor) the code I've previously written in Java, and _then_ in JavaScript (for the web version), or do I just forget about Java altogether and re-write it in JavaScript? EDIT: I have no problem writing in Java first and then JavaScript, but more likely than not I'm not going to need Java for a long while. What I'm trying to ask is whether or not I should rewrite the old code in Java first, to practice a skill I may not need in the future. The actual languages are less important than that question."} {"_id": "190240", "title": "Do context diagrams have levels?", "text": "I heard that the context analysis diagram has different levels. I couldn't find anything about this on the net, but I have seen that a DFD has different levels. Do context diagrams have any levels (`level0`, `level1`, `level2`)? If yes, please suggest some examples."} {"_id": "54485", "title": "In a multidisciplinary team, how much should each member's skills overlap?", "text": "I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host which runs some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues: 1. Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap. 2. Don't assume your colleagues know everything about their responsibilities. I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues. The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy who will vet it and integrate it himself. **The second point is the heart of my question.** I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So given that, whenever I develop a piece of firmware, if it interfaces with some technology that I don't know, then I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way if my piece of the product comes into conflict with another piece, then I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library that he uses, and I build a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified as it gives me a foothold to confront integration problems. Also I like learning this new stuff, so I don't mind. On the other hand I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem.
**So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as the teams get bigger and more diverse?**"} {"_id": "54487", "title": "What's the best way of marketing to programmers?", "text": "Disclaimer up front - I'm definitely not going to include any links in here - this question isn't part of my marketing! I've had a few projects recently where the end product is something that developers will use. In the past I've been on the receiving end of all sorts of marketing - as a developer I've gotten no end of junk - 1000s of pens, tee-shirts and mouse pads; enough CDs to keep my desk tea-free; some very useful USB keys with some logos I no longer recognise; a small forest's worth of leaflets; a bulging spam folder full of ignored emails, etc... So that's my question - **What are good ways to market to developers?** And as an aside - are developers the wrong people to target? - since we so often don't have a purchasing budget anyway!"} {"_id": "160368", "title": "Why is the use of abstractions (such as LINQ) so taboo?", "text": "I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt like the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy, and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual, so I politely asked my last interviewer for some constructive feedback, and he delivered! The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense--in fact, it made the other rejections make sense too, because I blabbed about LINQ in those interviews as well and it didn't seem that the interviewers knew much about LINQ (even though they were .NET guys). **So now I am left with this question:** If we are supposed to be \"standing on the shoulders of giants\" and using abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code \"off the shelf\" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it _is_ an abstraction, is simply an abstraction of all the _same_ algorithms one would write to accomplish exactly the same end. Only a performance test could tell you if your custom approach was better, but if something like LINQ met the requirements, why bother writing your own classes in the first place? I don't mean to focus on LINQ here. I am sure that the Java world has something comparable; I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write. ## UPDATE As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try and make use of it? Is it possible that people just don't fully understand what it does?"} {"_id": "13212", "title": "Can the ScrumMaster and other team members be managed by the Product Owner?", "text": "Our team is switching to Scrum. I would be the ScrumMaster (in addition to being a developer), and another developer would become Product Owner (in addition to our product marketing guy).
All members of the team, including me, would be managed by the would-be Product Owner. By that I mean that the guy would be the one deciding about our yearly evaluation, raises, etc. Would this hierarchical link be prone to introduce issues? How do organizations typically map hierarchical structure onto agile teams? I suppose it's quite common that the ScrumMaster has a hierarchical link to the other developers in the team. Here it would be the Product Owner. Is this different?"} {"_id": "90887", "title": "Choosing an open-source license for software", "text": "I am glad to say that we are going to make Advanced Electron Forum (anelectron.com) an open-source project. Since we have a shortage of developers, we think this will make the development wheel turn much, much better. We are going to post the source code publicly on GitHub, but we are not sure about the license that we are going to use. The GPL gives more freedom than we want; the only thing we want to prevent users from doing is forking. We don't want any forks or redistribution of the code without permission, so what should we choose?"} {"_id": "90080", "title": "Is it common for developers to deal with extensive change control procedures?", "text": "This question requires a bit of setup, please bear with me. Last week my company rolled out a new change management procedure. Any change destined for production requires a change control record; this policy was already in place and I don't disagree with it. The new procedure, however, involves a highly convoluted, server-intensive web app for creating the records. As an added bonus, the servers are in Europe (I'm in Seattle), which often results in latency issues. **Any** given change record requires (at a minimum) a business justification, requirements document, pre-implementation plan, pre-implementation test plan, execution plan, execution test plan, post-implementation plan and post-implementation test plan. These plans have to be typed manually into the aforementioned web app. After creating the record, the developer making the change is required to attend an hour-long phone conference with the Change Advisory Board to justify the change. Never mind that the change request passed through four layers of management before hitting our desks; it's on us to justify the work. I'm of the opinion that any work that lands on my desk should have been justified long since, preferably by the person/department requesting the work. This may end up being a deal-breaker for me. My question is, how common is this practice in the programming shops of non-software companies? **Edit:** There's a lot of good feedback here. It sounds like the solution is to join a software company that isn't involved in finance, healthcare or government. :) Thanks to everyone for the responses."} {"_id": "120817", "title": "Where do other programming languages fit in the Windows 8 development model?", "text": "Today, Windows applications can be developed in numerous languages and frameworks. But in the upcoming Windows 8, with the introduction of WinRT, the development model is going to change at a large scale. So, where do other programming languages fit in this model?"} {"_id": "179034", "title": "Design pattern for isomorphic trees", "text": "I want to create a data structure to work with isomorphic trees. I am not looking for algorithms or methods to check whether two or more trees are isomorphic to each other; I just want to create various trees with the same structure.
Example:

          2             'a'            3.5
         / \            / \            / \
        3   3        'f'   'y'      1.0   3.1
       / \            / \            / \
      4   7        'e'   'f'      2.3   7.7

The first \"layer\" or tree is the \"natural tree\" (a tree with natural numbers), the second layer is the \"character tree\" and the third one is the \"float tree\". The data structure has a method or iterator to traverse the tree and to perform different operations on its values. These operations could change the value of nodes, but never the structure (first I create the structure and then I configure the tree with its different layers). If I add a new node, it would be added to each layer. Which known design pattern fits this description or is related to it?"} {"_id": "193953", "title": "How many regression bugs from refactoring are too many?", "text": "Recent QA testing has found some regression bugs in our code. My team lead blames recent refactoring efforts for the regressions. My team lead's stance is \"refactor, but don't break too many things\", but he wouldn't tell me how many is \"too many\". My stance is that it's QA's job to find bugs that we can't find, and refactoring usually introduces breaking changes. So sure, I can be careful, but I don't knowingly release code with bugs to QA. I do it because I don't see them. If the refactoring was necessary, how many regression bugs should be considered too many?"} {"_id": "201756", "title": "How to sync clocks over networking for game development?", "text": "I'm writing a game that has a lot of time-based aspects. I use time to help estimate player positions when the network stalls and packets aren't going through (and the time between packets being received and not). It's a Pacman-type game in the sense that a player picks a direction and can't stop moving, so that system makes sense (or at least I think it does). So I have two questions: 1) How do I sync the clocks of the games at the start, since there is delay in the network? 2) Is it okay NOT to sync them and just assume that they are the same (my code is time-zone independent)? It's not a super competitive game where people would change their clocks to cheat, but still. The game is being programmed in Java and Python (parallel development as a project)."} {"_id": "244608", "title": "Maintaining Two Separate Software Versions From the Same Codebase in Version Control", "text": "Let's say that I am writing two different versions of the same software/program/app/script and storing them under version control. The first version is a free \"Basic\" version, while the second is a paid \"Premium\" version that takes the codebase of the free version and expands upon it with a few extra value-added features. Any new patches, fixes, or features need to find their way into both versions. I am currently considering using `master` and `develop` branches for the main codebase (free version) alongside `master-premium` and `develop-premium` branches for the paid version. When a change is made to the free version and merged to the `master` branch (after thorough testing on `develop`, of course), it gets copied over to the `develop-premium` branch via the `cherry-pick` command for more testing and then merged into `master-premium`. Is this the best workflow to handle this situation? Are there any potential problems, caveats, or pitfalls to be aware of? Is there a better branching strategy than what I have already come up with? Your feedback is highly appreciated! P.S.
This is for a PHP script stored in Git, but the answers should apply to any language or VCS."} {"_id": "235852", "title": "Scrum task overestimation", "text": "In my current company we're using Scrum with 2-week iterations and a regular planning session. How planning normally works in our company is that we take a predefined, prioritized (by the PO before the planning) product backlog of user stories / PBIs, take PBIs from the top one by one and break them down into tasks, then estimate those in hours (using planning poker and Fibonacci) until we reach a given team's capacity. While this works for most of the teams, there is one 2-person team where I'm the SM that tends to systematically overestimate the tasks by far (those developers are being paid by the hour in this project, and from the record of working with them in another project I know for a fact they have a far better velocity). Not being a part of this team as a developer, and trying to adhere to Scrum, I'm refraining from influencing those estimates by any means; however, as we approach the deadlines in the project this situation is becoming more and more troublesome. My question is - is there a way / good practice in such situations to keep the team on the edge of their velocity without influencing their estimations, or maybe we're doing something wrong in the process?"} {"_id": "235851", "title": "Why is num&sizeMinusOne faster than num&(size-1)?", "text": "I've been told that when I have a hash table of size `m` and `m=2^k`, I can use the `&` operator as `num & (size-1)` instead of `num % size`, to fit the hashCode to my table size. I've also been told that the expression `num & sizeMinusOne` is more than twice as fast as `num & (size-1)`. My question is, why? And doesn't creating a variable called `sizeMinusOne` take time too?"} {"_id": "132395", "title": "How to find extra work outside of a full-time job?", "text": "> **Possible Duplicate:** > Do you work contract projects in addition to your full-time job? I currently work as a C++ developer full time (not a contract). I'm seeking extra opportunities outside of work. So far, I've been doing small gigs for people on sites like Craigslist or ODesk.com on the side for extra cash, but they are just never lucrative enough. My first question is ... is it possible to find a contract-type position that will allow me to work outside of normal hours (i.e. evenings and weekends)? How would you even find a job like that? I'm also worried about potential conflicts of interest. If the contract was in a completely different field, is there any reason that I have to report this to my employer? Should I get the help of a recruiter to find these types of jobs? If it's too difficult to find something like that ... where do you guys go to find extra work outside of your full-time job? I know a lot of people do their own projects, but I just don't have any of my own ideas for an iPhone app or anything like that. :P I have had the most luck with Craigslist in the past, but there have to be better sites out there somewhere. Anyone willing to reveal the secret place? ;) Thanks!"} {"_id": "53680", "title": "How do you get people to use a bug tracker?", "text": "This is actually a question I should've asked a while ago (as in, I don't even work at this job), but I thought it to be an interesting question nonetheless. Our team was basically just 1 developer (me!). The manager also developed sometimes, but was mostly just business.
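(For the hash-table masking question above: the two expressions agree whenever the table size is a power of two and the hash is non-negative, and hoisting `size - 1` into a field just saves one subtraction per lookup. A quick C# sketch:)

```
using System;

class MaskDemo
{
    const int Size = 1 << 16;                    // table size, a power of two
    static readonly int SizeMinusOne = Size - 1; // computed once, reused forever

    static void Main()
    {
        int hash = 123456789;
        Console.WriteLine(hash % Size);          // 52501
        Console.WriteLine(hash & SizeMinusOne);  // 52501, the same bucket
        // Note: for negative hashes % can yield a negative index while
        // & never does, which is one more reason tables mask rather than mod.
    }
}
```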
He thought we should have some sort of bug tracker, so we installed some open source tracker on our server. I initially did not use this bug tracker. Then, another developer was hired. He doubled as a tester (sometimes), and it seemed like every morning (his shift was scheduled like 2 hours earlier than mine by preference) I'd come into work and he'd have about 2 pieces of paper full of bugs and possible bugs. I'd go through each item and mark them off as I fixed them, or write by-design, or fix-later. Anyway, then I'd come in another morning.. another list of bugs. About 6 of the 15 bugs listed were duplicates, or extremely related to bugs I'd previously marked by-design or fix-later. So, I started using the bug tracker on our server. It wasn't hard to use (required only a bug title), but it wasn't great either. I told the other developer that he should start entering bugs there and I would check the bugs he submits when I come in. This way it'd be easier to track. I come in the next morning, and lo and behold, another piece of paper on my desk listing bugs. At this point about 11 out of 13 listed bugs were duplicates. I didn't even bother writing on the paper. (This continued basically for about 4 months, until I was laid off.) **TL;DR:** What should I have done to convince this other developer to use the bug tracker?"} {"_id": "255784", "title": "Competitive Programming", "text": "I think this may just be one of the most frequently asked questions by a novice (like me) on this site. Please pardon me for that. My question is: I wish to get better at solving the harder problems of competitive programming, which almost always are very tough dynamic programming formulations, dynamic data-structure manipulations and hard combinatorial object manipulations. How should I go about training myself to not only gain confidence but also gain expertise in these areas? Yes, I know the obvious answer is practise! But I'd want to know, say, books or materials that I should look at before hitting a programming site like SPOJ, HackerRank, CodeChef, Codeforces and the like."} {"_id": "112828", "title": "How to fit beta versions into a numeric versioning scheme?", "text": "Some tools force developers to adopt a version scheme of a certain form, for instance \"major.minor.build.revision\", where each field must be a number. How do I fit betas in there? For instance, what version should I choose for version \"2.0 beta2\"? Should it be of the form \"1.99.x.y\" (it's not yet 2.0 stable) or \"2.0.x.y\" (2.0 beta introduces breaking changes with 1.x)?"} {"_id": "219581", "title": "Definition of a type", "text": "Conceptually, I used to think of types as sets. However, I think I've seen people wishing to distinguish types `A`, `B` even if they represent identical collections of values. So I figured a better definition of type is a pair `(type_name, set)`, where two different types cannot have the same first element. Then I ran into a different situation. I thought a function is just a set of pairs `(x, y)`. But then a function `A->B` (where `A, B` represent the same collections of values) cannot be distinguished from a function `B->B` or `A->A` or `B->A`, and again I think I've seen people want to distinguish them. So how do I define a function? As a tuple `(A, B, (x1, y1), (x2, y2), ...)`, where each element of `A` appears exactly once as the first element in the pairs, and where each second element is of type `B`?
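(One standard way to make that precise: define a function as a triple of domain, codomain and graph,

$$ f = (A,\ B,\ G_f), \qquad G_f \subseteq A \times B, \qquad \forall a \in A\ \exists!\, b \in B : (a, b) \in G_f, $$

so that $A \to B$ and $B \to B$ are different triples even when $A$ and $B$ happen to contain exactly the same values.)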
And the type `F` that represents all functions that take `A->B` is then `(F, ((A, B, (a1, b11), (a2, b12), ...), (A, B, (a1, b21), (a2, b22), ...), (A, B, (a1, b31), (a2, b32), ...)))`, where `a1, ...` are all the values represented by A, and `b?1, b?2, ...`, for any `?`, are some of the values represented by `B`. This all seems rather cumbersome, am I missing something?"} {"_id": "176912", "title": "Hierarchical View/ViewModel/Presenters in MVPVM", "text": "I've been working with MVVM for a while, but I've recently started using MVPVM and I want to know how to create a hierarchical View/ViewModel/Presenter app using this pattern. In MVVM I would typically build my application using a hierarchy of Views and corresponding ViewModels, e.g. I might define 3 views as follows: ![View A](http://i.stack.imgur.com/eHsn6.png) The View Models for these views would be as follows: public class AViewModel { public string Text { get { return \"This is A!\"; } } public object Child1 { get; set; } public object Child2 { get; set; } } public class BViewModel { public string Text { get { return \"This is B!\"; } } } public class CViewModel { public string Text { get { return \"This is C!\"; } } } I would then have some data templates to say that BViewModel and CViewModel should be presented using View B and View C. The final step would be to put some code in AViewModel that would assign values to Child1 and Child2: public AViewModel() { this.Child1 = new BViewModel(); this.Child2 = new CViewModel(); } The result of all this would be a screen that looks something like: ![enter image description here](http://i.stack.imgur.com/hXGz7.png) Doing this in MVPVM would be fairly simple - simply moving the code in AViewModel's constructor to APresenter: public class APresenter { .... public void WireUp() { ViewModel.Child1 = new BViewModel(); ViewModel.Child2 = new CViewModel(); } } But if I want to have business logic for BViewModel and CViewModel I would need to have a BPresenter and a CPresenter - the problem is, I'm not sure where the best place to put these is. I could store references to the presenter for AViewModel.Child1 and AViewModel.Child2 in APresenter, i.e.: public class APresenter : IPresenter { private IPresenter child1Presenter; private IPresenter child2Presenter; public void WireUp() { child1Presenter = new BPresenter(); child1Presenter.WireUp(); child2Presenter = new CPresenter(); child2Presenter.WireUp(); ViewModel.Child1 = child1Presenter.ViewModel; ViewModel.Child2 = child2Presenter.ViewModel; } } But this solution seems inelegant compared to the MVVM approach. I have to keep track of both the presenter and the view model and ensure they stay in sync. If, for example, I wanted a button on View A which, when clicked, swapped the Views in Child1 and Child2, I might have a command that did the following: var temp = ViewModel.Child1; ViewModel.Child1 = ViewModel.Child2; ViewModel.Child2 = temp; This would work as far as swapping the views on screen (assuming the correct Property Change notification code is in place), but now my APresenter.child1Presenter is pointing to the presenter for AViewModel.Child2, and APresenter.child2Presenter is pointing to the presenter for AViewModel.Child1. If something accesses APresenter.child1Presenter, any changes will actually happen to AViewModel.Child2. I can imagine this leading to all sorts of debugging fun. I know that I may be misunderstanding the pattern, and if this is the case a clarification of what I'm doing wrong would be appreciated.
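(A sketch of one way out of the desync problem just described: key the presenters by their view models instead of by slot, so whatever ends up in Child1 or Child2 can always be traced back to its presenter. Minimal stand-in types are included so the sketch compiles on its own.)

```
using System.Collections.Generic;

public class AViewModel { public object Child1 { get; set; } public object Child2 { get; set; } }
public class BViewModel { }
public class CViewModel { }

public interface IPresenter { object ViewModel { get; } void WireUp(); }
public class BPresenter : IPresenter { public object ViewModel { get; } = new BViewModel(); public void WireUp() { } }
public class CPresenter : IPresenter { public object ViewModel { get; } = new CViewModel(); public void WireUp() { } }

public class APresenter
{
    public AViewModel ViewModel { get; } = new AViewModel();

    // One map, keyed by view model rather than by slot, so swapping
    // Child1/Child2 cannot leave two parallel fields out of sync.
    private readonly Dictionary<object, IPresenter> presenters =
        new Dictionary<object, IPresenter>();

    public void WireUp()
    {
        IPresenter b = new BPresenter(); b.WireUp();
        IPresenter c = new CPresenter(); c.WireUp();
        presenters[b.ViewModel] = b;
        presenters[c.ViewModel] = c;
        ViewModel.Child1 = b.ViewModel;
        ViewModel.Child2 = c.ViewModel;
    }

    // Whatever currently sits in a child slot, this finds its presenter.
    public IPresenter PresenterFor(object childViewModel)
    {
        return presenters[childViewModel];
    }
}
```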
**EDIT** \\- this question is about WPF and the MVPVM design pattern, not ASP.NET and MVP."} {"_id": "187556", "title": "While running an application the error occurs, while debugging it doesn't", "text": "There is an issue which appears while running the application. It is not an Exception, but the desired UI change is not been implemented. While debugging to find the code which should be changed to fix this problem, the UI changes can be seen correctly, To be specific, if we put one breakpoint in the class where it is called, and press the Continue in NetBeans, the problem occurs. But if we step over to the next lines and try to see the state at the end of the debug, the component seems to work perfectly. How can these kind of issues be resolved?"} {"_id": "109418", "title": "How does an interviewer get a glimpse of a candidates thought process during an interview?", "text": "> **Possible Duplicate:** > How do you interview someone with more experience than you? > How can I really \"wow\" an employer at an interview? I am a bit puzzled by the whole technical interview process. I have been on a few interviews recently and I keep hearing certain phrases during the interviews. And I want to get more insight into the mind of the interviewer and the process itself. Can anyone please shed some light into this? What do they really want? I am convinced that it is not the actual correct answer to the problem one is presented with because getting the right answer hasn't gotten me a job. How do they \"get a glimpse into my thought process\"? EDIT: I am adding this to the question because I think I might help. During the interview when you are writing code on the white board and they ask you to explain the process or approach are they expecting me to act like a player on who want to be a millionaire? Do I need to explain every little detail or reason for doing something?"} {"_id": "98688", "title": "Interviewing a Senior Dev when they will be far more competent at programming than the interviewer", "text": "> **Possible Duplicate:** > How do you interview someone with more experience than you? Next week i'll be interviewing some contractors for a 6-month senior Dev position on a project. I'm an OK programmer myself, but nothing special, and at the moment i'm struggling to think of what questions to ask them, or what coding challenges i'd set, since the answers would likely go over my head anyway. The project is an ASP.NET MVC application for an intranet. My personal thought was to have them write a trivial MVC app that pulls some data from a DB during the interview, showing that they can implement a testable, loosely coupled application, but i don't know if this is _too_ trivial. From it i would expect them to have implemented a couple of tests, set up dependency injection, and probably use a repository instead of fat controllers. Any ideas, or is what i'm suggesting a bad idea? The coding part of the interview would likely last about 40 minutes. I timed myself doing what i'd planned and it took me just over half an hour. The candidates would have seen the requirements for the application a couple of days prior to the interview. From thinking about it a bit more, perhaps i should start off with a tightly coupled app, and ask the candidates to refactor it instead, and implement some tests? 
Edit: With respect to Rob's comments below, I'd like the answers to focus more on what I had planned (but all answers gratefully received)."} {"_id": "196865", "title": "What to expect from a 'peer interview' for a new grad position?", "text": "I apologize if something like this has been asked; I did try to look for something similar. I am graduating May 16th, and I applied for a position posted as '.NET Application Developer' requiring 1-2 years' experience and a Bachelor's in CIS. I had the phone screen, which went well. They brought me in Friday for an interview with the head managers, which consisted of a written 20-question test and other behavioral-type questions and oddball ones such as \"How would you make an M&M?\" That seemed to go well too; they took me on a tour of the campus and said they usually get back to the candidate, if they decide to proceed, in a week or so. Well, the following Monday at 9:00 am sharp they called me back for a third and final 'peer interview'. **This is where I am stuck - I have no idea what to expect here. How do I prepare?** Is this like 'Hey, let's see if you are a culture fit' or 'Let's put him through the paces with whiteboard coding' or what? Any advice would be HIGHLY appreciated. I have been practicing whiteboard coding just in case, and will come in with a humble, \"I want to learn from you guys\" kind of attitude."} {"_id": "186372", "title": "What are some interview questions I can expect for a technical evaluation?", "text": "I am 20 years old and I've been doing some web development work for about 2 years or so. I have an interview for a Junior Web Developer position, and the company knows that, but I'll still be going through a technical interview, and since I haven't been on one before I am not sure what kinds of questions to expect. I will be interviewed on PHP, HTML, CSS, and JavaScript. What are some popular or rather important questions that I might be asked in those four language areas? I know the questions might vary greatly, but just to get an idea. Eager to see some of your responses. Thanks!"} {"_id": "1947", "title": "How to prepare yourself for programming interview questions?", "text": "> **Possible Duplicate:** > Really \"wow\" them in the interview Let's say I appear for an interview. What questions could I expect and how do I prepare?"} {"_id": "93384", "title": "How can I really \"wow\" an employer at an interview?", "text": "I'm a top-notch programmer, but a notoriously bad interviewee. I've flunked 3 interviews consecutively because I get so nervous that my voice tightens at least 2 octaves higher and I start visibly shaking -- mind you, I can handle whatever technical questions the interviewer throws at me in that state, but I think it looks bad to come off as a quivering, squeaky-voiced young woman during a job interview. I've just got the personality type of a shy computer programmer. No matter how technical I am, I'm going to get passed up in favor of a smooth talker. I have another interview coming up shortly, and I want to really impress the company. Here are my trouble spots: 1. **What can I do to be less nervous during my interview?** I always get really excited when I hear I have a face-to-face interview, but get more and more anxious as the interview approaches. 2. **How long or short should I keep my answers?** Interviewers want me to explain what I used to do at my prior employment, but I'm a very chatty person and tend to talk/squeak for 10 minutes at a time. 3.
**What exactly is my interviewer looking for when asking about prior jobs?** 4. During the interview, I'll be asked if I have any questions for the interviewer. I _should_, but **what kinds of questions should I ask to show that I'm interested in being employed?** 5. **How can I sugarcoat \"I want to make more money\" into something that sounds nicer?** Interviewers always ask why I'm looking for a new job. The real reason is that my current salary isn't that great, and I want to make more money; otherwise the work environment is fine. 6. I have only one prior programmer job, and I've worked there for 18 months, but I have the skill of someone with 4 to 6 years of experience. **What can I say to compete against applicants with more work experience?**"} {"_id": "80065", "title": "Preparing for Interviews", "text": "> **Possible Duplicate:** > How to prepare yourself for programming interview questions? Could someone give me pointers on how to go about preparing for a technical interview? As a CS graduate, I guess one must be thorough with the following topics: **Data Structures:** Array, Linked List, Stack, Queues, Heap, Hash Table, Binary Tree, Binary Search Tree, Self-Balancing Binary Tree (AVL, Red-Black Tree), B-Tree, Tries/Suffix Tree **Algorithms:** Sorting (Bubble Sort, Insertion Sort, Selection Sort, Shell Sort, Quick Sort, Merge Sort, External Sorting), Searching (linear and logarithmic time searching), Graph Theory (Adjacency List, Adjacency Matrix, DFS, BFS, Topological Sort), Dynamic Programming, Greedy Algorithms, Divide and Conquer. **Ad-hoc Algorithms:** Select Algorithm, Fisher-Yates Card Shuffle, Reservoir Sampling, and the list is endless. **Databases:** SQL Queries **Programming and Design:** C, C++, Java, scripting languages (Perl, Python), OOP basics, virtual functions, deep and shallow copy, copy constructor, assignment operator, STL, memory management, pointers/references, interfaces, abstract classes **Operating Systems:** Thread Synchronization (Mutex, Conditional Variables, Semaphores, Deadlocks), Memory Management (Segmentation, Paging, TLB, Caching Mechanisms) * * * Also, it would be great if we all could compile the resources available on the internet to brush up on these topics. I will add some of these: Linked List and trees: http://cslibrary.stanford.edu/103/ Please help me find useful resources to prepare for these topics; I would also appreciate it if you could add to these topics."} {"_id": "54375", "title": "Doing practice jobsearch/technical interviews?", "text": "> **Possible Duplicate:** > How can I really \"wow\" an employer at an interview? I graduated college last year & I've never gone through the interview process - my current programming position evolved out of projects I did in school, but in a few months I'm making a clean break and moving across the country, so I'm going to have to face a \"grown-up\" jobsearch. I'm kind of scared of technical interviews - I think I'm pretty good at my job and my hobby programming projects ... but what if it turns out I'm part of that group that thinks they're qualified, but really just cause despair at the state of education in the hearts of interviewers? So, I'm thinking of doing a \"practice\" job hunt in my current city to get an idea of what it's like and what kind of experience/expertise employers are really looking for. Is this a dick move ethically (applying to/interviewing for jobs I can't take)?
If so, is there another good way to prepare for technical interviews, especially those little trick puzzle-type questions?"} {"_id": "74650", "title": "C# interview with a programming task", "text": "> **Possible Duplicate:** > How to prepare yourself for programming interview questions? I am going for a C#/SQL Server job and I was looking for some link where I could download some questions and answers to have a look at. Googling, I have found many, but because there is no download you have to click and navigate for each question and answer, which makes the process very tedious. Do you know a link where I can download some good C#/SQL Server questions? Also, I am told I will be given a programming task. I have no idea what that might be. I hope it's not some silly math algorithm that I know I will fail, or some bubble sort stuff that I have never used in my C# programming life. Any ideas, or examples of what people have been asked? Or is there a place where managers go to get this stuff? It's not a case of cheating, but I find some questions not really practical in the real world. A typical silly question for C#, in my view, is: what's the difference between the stack and the heap? Now tell me whether, as a C# programmer, you are really bothered about the stack and the heap. If you do some unboxing, the compiler will tell you you have an error, etc. Why do they keep asking that question? Even the agency the other day asked me that question. Thanks a lot for any suggestions."} {"_id": "155814", "title": "Need help eliminating dead code paths and variables from C source code", "text": "I have legacy C code on my hands, and I have been given the task of filtering dead/unused symbols and paths from it. Over time there were many insertions and deletions, causing lots of unused symbols. I have identified many dead variables which were only being written to once or twice, but were never being read from. Both blackbox/whitebox/regression testing proved that dead code removal did not affect any procedures. (We have a comprehensive test suite.) But this removal was done only on a small part of the code. Now I am looking for some way to automate this work. We rely on GCC to do the work. P.S. I'm interested in removing stuff like: 1. variables which are being read just for the sake of reading from them. 2. variables which are spread across multiple source files and only being written to. For example: file1.c: int i; file2.c: extern int i; .... i=x;"} {"_id": "246601", "title": "How do programming languages integrate with OS runtimes?", "text": "For example, Objective-C, Swift and Ruby (i.e. RubyMotion) integrate with the Cocoa framework. Is this done via linked libraries? I assume they call functions in existing binaries instead of simply recreating some common interface."} {"_id": "246605", "title": "GUI app with Web Technology that can take screenshots", "text": "I was given a task to prototype a Desktop Application with the following requirements: 1. Must use JavaScript, node-webkit, node.js 2. Must be multi-platform, especially Windows and OS X 3. Can take a screenshot of a certain area where a user can point and drag 4. Must be able to watch multiple directories for changes Mostly `1,2,4` are possible, but I can't get my head around (3): how will I take screenshots of specific areas based on the user's point-and-drag input with only web technologies? What are your ideas on this?
I also plan to use angularJS/css for the UI."} {"_id": "155812", "title": "How to learn new technologies without live projects", "text": "I have a year of experience in .NET, but my knowledge is of basic stuff like Framework 2.0, JavaScript and the basics of SQL Server. But in industry there is no need for just these basics; they want more than that. As we don't have a live project, how do I go about that?"} {"_id": "122413", "title": "Pros and cons of using Grails compared to pure Groovy", "text": "Say you (by you I mean an abstract guy, any guy in your team) have experience of writing and building Java web apps, know about filters, servlet mappings and so on, and so on. Also, let us assume you know pretty well any SQL DB, no matter which one exactly, whether it's MySQL, Oracle or PostgreSQL. Lastly, let's pretend we know Groovy and its standard libraries, for example all that JsonBuilder and XmlSlurper stuff, so we don't need Grails converters. The question is - what are the benefits of using Grails in this case? I'm not trying to start a flame war, I'm just asking to compare - what are the ups and downs of Grails development compared to pure Groovy development? For instance, off the top of my head I can name two pluses - automatic DB mapping and custom GSP tags. But when I want to write a modest app which provides a small API for handling some well-defined set of data, I'm totally OK with Groovy's awesome SQL support. As for GSP, we do not use it at all, so we are not interested in custom tags either."} {"_id": "94705", "title": "Fault tolerance through replication of SQL databases", "text": "Suppose the middle-tier servers are replicated 3-way and the backend database (MySQL, PostgreSQL, etc...) is replicated 3-way. A user request ends up creating some user data in the middle-tier servers, and I would like to commit this to the backend database in a way resilient to failures. As a candidate solution, for example, if I send the data to one SQL database and have it replicate the data to the other databases, then if that one SQL database has a hard drive crash before it can replicate the data, the data is lost. What is the best-practice solution for fault tolerance that is used in the real world?"} {"_id": "94704", "title": "Data Structures to represent logical expressions", "text": "Here is a logical statement: term1 AND (term2 OR term3) OR term4 What is an effective way of storing this information in a data structure? For example, should I use a graph, with a property on each edge defining the operator to the next term? (This suggestion doesn't make sense IMO, but it's a data structure that might be useful for this.)"} {"_id": "122417", "title": "Organizing your Data Access Layer", "text": "I am using Entity Framework as my ORM in an ASP.NET application. I have my database already created, so I ended up generating the entity model from it. What is a good way to organize files/classes in the data access layer? My Entity Framework model is in a class library, and I was planning on adding additional classes per entity (i.e. per database table) and putting all the queries related to those tables in their respective classes. I am not sure if this is the right approach, and if it is, then where do the queries requiring data from multiple tables go?
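(One layout I'm considering, sketched against the DbContext API with hypothetical entities: per-entity classes own the single-table queries, and a cross-table query gets its own class named for the use case rather than for either table.)

```
using System.Data.Entity;   // DbContext/DbSet, EF 4.1 and later
using System.Linq;

public class User  { public int Id { get; set; } public string Email { get; set; } }
public class Order { public int Id { get; set; } public int UserId { get; set; } public decimal Total { get; set; } }

public class ShopContext : DbContext
{
    public DbSet<User> Users { get; set; }
    public DbSet<Order> Orders { get; set; }
}

// Per-entity class: everything that touches only the Users table lives here.
public class UserQueries
{
    private readonly ShopContext db;
    public UserQueries(ShopContext db) { this.db = db; }

    public User FindByEmail(string email)
    {
        return db.Users.SingleOrDefault(u => u.Email == email);
    }
}

// A multi-table query belongs to neither table, so it gets a use-case class.
public class OrderReportQueries
{
    private readonly ShopContext db;
    public OrderReportQueries(ShopContext db) { this.db = db; }

    public IQueryable<OrderSummary> SummariesFor(int userId)
    {
        return from o in db.Orders
               where o.UserId == userId
               select new OrderSummary { OrderId = o.Id, Total = o.Total };
    }
}

public class OrderSummary { public int OrderId { get; set; } public decimal Total { get; set; } }
```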
Am I completely wrong in organizing my files based on entities/tables, and should I organize them based on functional areas instead?"} {"_id": "252836", "title": "Explain what this means, \"Bulldozer Code\"", "text": "I ran across an article on things programmers should avoid doing and came across this term: Bulldozer Code. The author defined it as \"giving the appearance of refactoring by breaking out chunks into subroutines, but that are impossible to reuse in another context\". I interpret that as him saying to avoid creating procedures for code that can't actually be used in more than one place, which confuses me a little because I have an instructor who prefers using procedures for the purpose of 'cleaner' code. So what exactly is bulldozer code and why should I avoid it?"} {"_id": "168177", "title": "What can be done to decrease the number of live issues with applications?", "text": "First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think that applications should be tested by end users, since end users are much better testers than programmers. I am not against programmers testing; obviously programmers need to test code, but they are most of the time too close to the code. The reason I specify that I think end users should test in our scenario is due to the fact that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between development and live environments; this does catch some bugs. We don't do end user testing really at all; I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs or QA or professional testers test). We don't have a QA team or anything of that nature. We don't have test cases for our projects that are fully laid out. Ok, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them. So, I don't have the ability to tell them 'you are doing it all wrong'..... I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue?"} {"_id": "254787", "title": "Implementing a generic/dynamic custom property system in C#", "text": "I have an architecture design problem which I think is appropriate for this site. _Note that I have made an_ **EDIT** _to this post below, reflecting my latest potential solution to this problem._ **General problem description:** My primary goal is to design a software library/program (C#) to automate a very particular type of microscopy experiment. Our experimental setups consist of many distinct hardware devices that generally fall into a limited number of categories: * Point detector * XY sample stage * Autofocus device * ...
The natural choice is to design an interface `IDevice` with some basic properties shared by all devices such as e.g. `IDevice.Initialize()`, `IDevice.Name`, ... We can even design an abstract class, inheriting `IDevice`, e.g. `DeviceBase` implementing some functionalty common to all devices. Likewise, we can also design interfaces for specific devices, each time implementing the common stuff (`IXYSampleStage` might hold e.g. `IXYSampleStage.Move(posX, posY)` or `IXYSampleStage.AxisXStepSize`). Ultimately, I can implement all these interfaces in the final classes representing specific devices. I think I know what to do there. However, not all devices of a class are exactly the same. Some manufacturers offer rich functionality on top of the standard stuff. An XY Stage might have e.g. any number of optional/non-standard parameters to set such as e.g. channel delays, PID control values or whatever. To tackle this, > I need a mechanism to add additional properties (not methods, properties > will suffice) to specific device classes. Higher level code, interacting > with the devices should understand these added properties, be able to > get/set them, however many there might be. So much for the basic design problem. **What exists already** There is an OS software program, Micro Manager, which has such a system implemented, be it in C++ (full source here and no, I cannot use this SW directly for my purposes): class PropertyBase { public: virtual ~PropertyBase() {} // property type virtual PropertyType GetType() = 0; // setting and getting values virtual bool Set(double dVal) = 0; virtual bool Set(long lVal) = 0; virtual bool Set(const char* Val) = 0; virtual bool Get(double& dVal) const = 0; virtual bool Get(long& lVal) const = 0; virtual bool Get(std::string& strVal) const = 0; // Limits virtual bool HasLimits() const = 0; virtual double GetLowerLimit() const = 0; virtual double GetUpperLimit() const = 0; virtual bool SetLimits(double lowerLimit, double upperLimit) = 0; // Some more stuff left out for brevity ... }; Subsequently, the `Property` class implements some of the pure virtual functions related to the limits. Next, `StringProperty`, `IntegerProperty` and `FloatProperty` implement the Get()/Set() methods. These kinds of properties are defined in an enum which allows only these three kinds of property. Properties can next be added to a `PropertyCollection` which is basically a kind of dictionary that is part of a device class. This all works nicely. We use MM a lot in the lab for other types of experiments but when trying to do similar things in my C# solution I stumbled about a few more fundamental questions. Most of these have to do with not being quite sure how to leverage specific features offered by C#/.NET (generic but definitely also dynamics, ...) to perhaps improve the existing C++ implementation. **The questions:** _First question_ : To limit the types of properties allowed, MM defines an enum: enum PropertyType { Undef, String, Float, Integer }; Whereas C# has reflection and a lot of built-in support for types/generics. I therefore considered this: public interface IProperty { public T value { get; set; } ... } I could next do an abstract `PropertyBase` and ultimately IntegerProperty : PropertyBase StringProperty : PropertyBase FloatProperty : PropertyBase But this gets me in trouble when trying to put them into a collection because in .NET following is not possible: PropertyCollection : Dictionary> Unless, perhaps, I adopt this strategy, i.e. 
have a non generic base class that only returns a type and then yet more base classes to actually implement my property system. However, this seems convoluted to me. **Is there a better way**? I think that utilising the generic language features might allow me to do away with needing all these Get()/Set() methods in the MM example above, as I would be able to know the backing type of my property value. I realise this is a very broad question but, there is a big gap between understanding the basics of some language features and being able to fully make the correct design decisions from the outset. _Second question_ I am contemplating making my final device classes all inherit from a kind of `DynamicDictionary` (e.g. similar to the one from here): public class DynamicDictionary : DynamicObject { internal readonly Dictionary SourceItems; public DynamicDictionary(Dictionary sourceItems) { SourceItems = sourceItems; } public override bool TrySetMember(SetMemberBinder binder, object value) { if (SourceItems.ContainsKey(binder.Name)) { SourceItems[binder.Name.ToLower()] = value; } else { SourceItems.Add(binder.Name.ToLower(), value); } return true; } public override bool TryGetMember(GetMemberBinder binder, out object result) { if (SourceItems != null) { if (SourceItems.TryGetValue(binder.Name.ToLower(), out result)) { return true; } } result = null; return true; } } } but instead of `` I would then like to use `` where the string would be the alias for my property (which would correspond to the design choice from Q1). Would doing such a thing make sense from a design standpoint? I would actually only need to add custom properties to devices at design time (i.e. when writing a custom device driver). At runtime, I would only need to inspect my custom properties or get/set their values. the C++ MM solves this by using a bunch of `CreateProperty(\"alias\", Property)` methods in the device constructor, which again, works so maybe using dynamics here is overkill. But again, I am wondering if somebody could provide insight in possible advantages but certainly also disadvantages of going down the dynamic route. **To summarize** Are there .NET features (4.5) which I could leverage to implement some of the MM concepts in a more elegant way with a focus on the above mentioned specific matters? Again, I do realise this is a very broad question and perhaps some of the possible answers are more a matter of taste/style than anything els but if so (this making the question unsuited for this platform), please provide feedback/comments such that I can adjust accordingly. **EDIT** I have spent some more thinking about this and tried to solve it in the following way: First, an enum specifying the allowed propertytypes: public enum PropertyType { Undefined, String, Float, Integer } Second, an interface `IPropertyType` that is responsible for checking values fed to a property of a specific type: public interface IPropertyType { Type BackingType { get; } PropertyType TypeAlias { get; } bool IsValueTypeValid(object value); } Then, a base class implementing `IPropertyType`: public bool IsValueTypeValid(object value) { if (value == null) { return true; } Type type = value.GetType(); if (type == this.BackingType) { return true; } return false; } ... 
public Type BackingType { get { switch (this.TypeAlias) { case PropertyType.Integer: return typeof(long); case PropertyType.Float: return typeof(double); case PropertyType.String: return typeof(string); default: return null; } } } I then have three `IPropertyType`: `StringPropertyType`, `IntegerPropertyType` and `FloatPropertyType` where each time, the property `PropertyTypeAlias is set to one of the enum values.` I can now add my IPropertyType to `IProperty` types: public interface IProperty { string Alias { get; } /// If we want to limit the property to discrete values. Dictionary AllowedValues { get; } bool HasLimits { get; } bool HasValue { get; } bool IsReadOnly { get; } /// Callback for HW operations using the property value. Func PropertyFunction { set; } PropertyType TypeAlias { get; } object Value { get; set; } void AddAllowedValue(string alias, object value); void ClearAllowedValues(); // On numeric properties, it might be usefull to limit the range of values. void SetLimits(object lowerLimit, object upperLimit); // These will do stuff on the HW... bool TryApply(); bool TryUpdate(); } Here, the Value setter will run the validation on input value as defined by the `IPropertyType`. `IProperty` is almost fully implemented in `PropertyBase` which also has a field of `IPropertyType` which gets set upon instance creation (and cannot be changed thereafter; a property cannot change type. From the base class e.g.: public object Value { get { return this.storedvalue; } set { if (this.propertyType.IsValueTypeValid(value) && this.IsValueAllowed(value)) { this.storedvalue = value; } } } That only leaves specific `IProperty` classes: `StringProperty`, `IntegerProperty` and `FloatProperty` where there is only one override, `SetLimits()` because it relies on comparison of values. Everything else can be in the base class: public override void SetLimits(object lowerlimit, object upperlimit) { // Is false by default this.HasLimits = false; // Makes no sense to impose limits on a property with discrete values. if (this.AllowedValues.Count != 0) { // If the passed objects are in fact doubles, we can proceed to check them. if (this.IsValueTypeValid(lowerlimit) && this.IsValueTypeValid(upperlimit)) { // In order to allow comparison we need to cast objects to double. double lowerdouble = (double)lowerlimit; double upperdouble = (double)upperlimit; // Lower limit cannot be bigger than upper limit. if (lowerdouble < upperdouble) { // Passed values are OK, so set them. this.LowerLimit = lowerdouble; this.UpperLimit = upperdouble; this.HasLimits = true; } } } } This way, I think I have a system that allows me to have a collection of `IProperty` that can actually hold data of different types (int, float or string) where for each type of `IProperty` I have the ability to implement value limits etc (i.e. stuff that is different, depending on the data type)... However, I still have the impression that there are either better ways to do this sort of thing or that I am over engineering things. Therefore, any feedback on this is still welcome. **EDIT 2** After more searching online I believe the last bits of code posted here in the previous edit are leaning towards the Adaptive Object Model \"pattern\" (if it really is that). 
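(For comparison with the enum-based route above, the usual .NET answer to the heterogeneous `Dictionary` of `Property<T>` problem from the first question is a non-generic base interface plus a generic implementation; a minimal sketch with hypothetical names:)

```
using System;
using System.Collections.Generic;

// Non-generic view, so properties of different types share one collection.
public interface IProperty
{
    string Alias { get; }
    Type BackingType { get; }
    object Value { get; set; }
}

// Generic implementation: typed access for code that knows the type.
public class Property<T> : IProperty
{
    public Property(string alias) { Alias = alias; }

    public string Alias { get; private set; }
    public Type BackingType { get { return typeof(T); } }
    public T TypedValue { get; set; }

    object IProperty.Value
    {
        get { return TypedValue; }
        set { TypedValue = (T)value; }   // throws on a wrong type, by design
    }
}

class Demo
{
    static void Main()
    {
        var props = new Dictionary<string, IProperty>
        {
            { "delay", new Property<double>("delay") { TypedValue = 0.5 } },
            { "label", new Property<string>("label") { TypedValue = "axis X" } }
        };
        foreach (IProperty p in props.Values)
            Console.WriteLine("{0}: {1} = {2}", p.Alias, p.BackingType.Name, p.Value);
    }
}
```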
However, most resources on this topic seem to be quite old, and I'm wondering if there are now better ways to achieve these types of things."} {"_id": "44627", "title": "What are some small, but extendable game ideas to use for inspiration?", "text": "I recently came across the game Dopewars (or, more generically, Drugwars). Now this is an extremely simple game to implement. A trivial basic implementation could be done in a couple of days, at the most. And it's extremely extendable. Now, my question is: what other games are out there that follow this same kind of \"difficulty level\" for implementation? A game that is actually entertaining, yet trivial to implement, and can easily be extended?"} {"_id": "164292", "title": "Can jQuery be used server side?", "text": "I know that you can use JavaScript server side with node.js, but can we use jQuery for the backend as well? Or will I just have to learn CoffeeScript instead of jQuery?"} {"_id": "255438", "title": "How do browsers paint a render tree to the screen?", "text": "I'm interested in building a browser-like rendering engine. I understand a bit about how OpenGL and GUI toolkits work. But how do browsers actually put pixels on the screen? Are they like software rasterizers that scan every line and paint whatever's in the render tree? Or do they act like GUI toolkits and call functions to display elements at certain coordinates? Are there any libraries that focus specifically on the painting portion of the browser's critical rendering path?"} {"_id": "255437", "title": "Best RAD Tools for Web Application", "text": "I am planning to develop a web-based **invoicing** application. It will be used by individuals to log and review their invoices. Their data must be saved to a common database but kept separate from other logins (i.e. data must not be shared across users/logins). I suspect I will need the following... 1. a login page / authentication 2. forms to enter an individual's unique customers and items 3. a form to enter invoices 4. solid reporting tools / dashboard capabilities The current knowledge base is the Microsoft suite (i.e. C#, .NET). **Can anyone advise the best RAD tools / environment to achieve this outcome? Cheers!**"} {"_id": "255436", "title": "How to connect a Django app with a Java Web app using JMS", "text": "I'm dealing with my DB systems course project: it's a stock exchange simulation app using Oracle 10g. The deal is that the support is for Java, but I'm going to do this in Python. The last iteration of the project is to connect two apps using message queues (MQ protocols). If both apps are in Java, the TA proposed connecting them using JMS. Can you advise me on how to connect a Django app (my app) with a Java app (my partner's app)? I have found that the MQ protocol is based on a universal standard, so that shouldn't be a hard problem to solve. Another option I've looked at is Spring Python, a framework that facilitates this connection. So what is the best option?"} {"_id": "255432", "title": "In Agile, should I create a code-review task?", "text": "My team uses Agile as a development approach. Currently, each task has four states: todo, in progress, testing, and done. We've just started doing code reviews, and I wonder if I need to create a code-review task, or if I should add time for code review to the estimation?"} {"_id": "255430", "title": "How to execute a file within a subdirectory", "text": "So I have a small setup file that needs to install a few run-time files.
The files are located within subdirectories of the root of the drive. The drive letter will be different on every customer's machine, so I was trying this and several other methods, to no avail. // File is being run from D:\\ \"eg: D:\\setup.exe\" // All my runtimes are in the Tools\\Runtime directories string path = Directory.GetCurrentDirectory(); string ext = \"/q /norestart\"; string location = \"\\\\Tools\\\\Runtimes\\\\Net\\\\\"; string step1 = \"dotnetfx35.exe\"; var process = Process.Start(path + location + step1 + ext); process.WaitForExit(); As you pros and more experienced programmers can probably see, for some reason it doesn't work; I keep getting 'file not found'."} {"_id": "252789", "title": "Best practices for geolocation", "text": "We are writing a flight search engine. We want to pre-fill the departure airport for mobile users with the closest one to their location. To do that, our plan is to: 1. Find a list of airports with their latitude/longitude. 2. Find the user's geolocation using something like HTML5 Geolocation (which asks the user for permission). 3. Calculate the distance between the user's location and every airport to find the closest one. 4. Fill the departure form. Is this a standard way of proceeding? I am a junior programmer and I am not used to this kind of problem. Is there any obstacle I should bear in mind while developing my solution? I have the feeling the algorithm to calculate the distance between one point and 300 locations might get a bit heavy."} {"_id": "156549", "title": "PHP - Internal APIs/Libraries - What makes sense?", "text": "I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up. **Requirements:** The platform will need to support multiple internal websites, as well as external (non-PHP) projects which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own). **My opinion:** We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required. **Their opinion:** We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).
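(For step 3 of the geolocation question above, the distance calculation is typically the haversine great-circle formula; a minimal sketch:)

```
using System;

static class Geo
{
    // Great-circle distance in kilometres between two lat/lon points
    // (haversine formula); accurate enough for picking the nearest airport.
    public static double DistanceKm(double lat1, double lon1,
                                    double lat2, double lon2)
    {
        const double R = 6371.0;   // mean Earth radius, km
        double dLat = ToRad(lat2 - lat1);
        double dLon = ToRad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2))
                   * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(a));
    }

    private static double ToRad(double degrees)
    {
        return degrees * Math.PI / 180.0;
    }
}
```

A linear scan of ~300 airports with this runs in microseconds, so no spatial index is needed at that scale.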
**My thoughts:** To me, there are a number of issues with using the sole-HTTP-API approach: 1. It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle). 2. You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code. 3. For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and be presented as an array ready to use. 4. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework. **Their thoughts (and my responses):** 1. Having one method of doing things keeps things simple. - _You'd only do things differently if you were using a different language anyway._ 2. It will become robust. - _Seeing as the API will run off the library of models, I think my option would be just as robust._ **What do you think?** I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience."} {"_id": "216155", "title": "Releasing a project under GPL v2 or later without the source code of libraries", "text": "I wrote a system in Java that I want to release under the terms of GPL v2 or later. I've used Apache Maven to deal with all the dependencies of the system, so I don't have the source code of any of the libraries used. I've already checked: all the libraries were released under GPL-compatible licenses (Apache v2, 3-clause BSD, MIT, LGPL v2 and v2.1). Can I distribute my work under GPLv2 or later and provide the libraries in the same package as **separate works**, without having to apply the GPL to them (as stated by **GPLv2 sec 2**), even if they are necessary to compile and run the system? If so, I know I am bound only by the original licenses of each one, and then I know what to do."} {"_id": "216154", "title": "Is it possible to access a javascript return value outside of the function?", "text": "How would one access a javascript function's return value outside of the function? For example, could you tell a function to return something somewhere else in the code? Theoretical example: milkmachine = function(argument){ var r; var k; //do something with arguments and variables return r; } var rainbow = milkmachine(); //rainbow == r milkmachine.return(k); var spectrum = milkmachine(); //spectrum == k"} {"_id": "156541", "title": "What is the correct order to read these books?", "text": "I'm a junior C# developer; I learned at home and now I got my first job :) I want to buy these books, but what is the correct order to read them? Code Complete: A Practical Handbook of Software Construction; Clean Code: A Handbook of Agile Software Craftsmanship; The Pragmatic Programmer."} {"_id": "220170", "title": "Is it best practice to create a back door for testing a web service?", "text": "We have a public RESTful web service which exposes functionality to third parties. We are writing automated tests against it. In order to set up all scenarios on it, we need to change the data behind it. We also have a private RESTful web service which exposes functionality to other systems owned by us.
However, this web service does not expose the endpoints required to set up all the scenarios for the public web service. In this circumstance, is it best to create extra endpoints, not needed in normal operational behaviour, on the private web service in order to support the testing of the public one? Or should we set up our data using other means, such as sending SQL commands?"} {"_id": "163479", "title": "Dealing with Fanboys", "text": "We've all probably met someone like this, that developer who just _knows_ that his language is the one true language and won't shut up about it. How do you deal with someone like this? I don't want to offend anyone (especially since the fanboy in my workplace is the senior developer). But I want to be able to use my own choice of scripting language when I have to write a throwaway script that never makes it to the repository and no one else need know existed. Thoughts that I had on dealing with this: 1. Laugh it off - "Haha yeah maybe language X is a bit easier, I guess I'm a masochist!" 2. Go with it - I'd really prefer to avoid this as I can't afford the drop in productivity associated with picking up a new language. 3. Hide my language - Become a closet programmer and hide my monitor whenever I'm scripting or automating something. What would you suggest for this situation?"} {"_id": "178343", "title": "What is best practice, from the point of view of experienced developers?", "text": "My manager pushes me to use his self-defined best practices. All of these practices are based on his own assumptions. I disagree with them and I would like to have some feedback from well-experienced people about these practices. I would prefer answers from people involved in the maintenance of huge products, and people who have maintained a product for years. Developers with 15+ years of experience are preferred, because my manager has that much experience himself. I have 7 years of experience. * * * ## Here are the practices he wants me to use: * never extend classes; use composition and interfaces instead, because extended classes are unmaintainable and difficult to debug. * What I think about that: Extend when needed, respect "Liskov's Substitution Principle" and you'll never be stuck with a problem, but prefer composition and decoration. I don't know of any serious project which has banned inheritance; sometimes it's impossible not to use it, e.g. in a UI framework. Without inheritance, design patterns are just unusable. * * * * In PHP, for simple use cases (for example a user needs a web interface to view a database table), his "best practice" is: copy some random PHP code which we own, paste and modify it, put HTML and PHP code in the same file, and never use classes in PHP; they don't work well for small jobs, and in fact don't work well at all, and there is no good tool to work with them. Copy & paste PHP code is good practice for maintenance because scripts are independent: if you have a bug somewhere you can fix it without side effects. * What I think about that: NEVER EVER COPY code; or, if you do it because you have five minutes to deliver something, do some refactoring after that. Copy & paste code is a beginner's error: if you have a bug, you'll have the bug everywhere you pasted the code, and it's a nightmare to maintain. If you respect the "Open/Closed Principle" you'll rarely get side effects; use unit tests if you are afraid of that. For small jobs in PHP, use at least something you get or write yourself to keep the HTML separate from the PHP code, and reuse it any time you need it. 
Classes in PHP are mature; not as mature as in other languages like Python or Java, but they are usable. There are tools for working with classes in PHP, like Zend Studio, that work very well. The use of classes or not depends not on the language you use but on the programming paradigm you have chosen. I'm an OOP developer, I use PHP5, why do I have to shoot myself in the foot? * * * * When you find a simple bug in the code, and you can fix it simply, if you are not working on the code where you found it, never fix it, even if it takes 5 seconds. * He tells me his "best practices" are driven by the fact that he has a lot of experience in maintaining software in production (14 years) and he now knows what works and what doesn't work, what the community says is a fad, and that the people advocating such principles as "never copy & paste code" are not involved in maintaining applications. * What I think about that: If you find a bug, fix it if you can do so quickly; inform the people who've touched that code before, check that you have not introduced a new bug, and ideally add a unit test for it. I currently work on a web commerce project, which serves 15k unique users per day. The code base has to be maintained, and has been maintained this way, since 2005. * * * Ideally, include a short description of your position and experience in terms of years effectively maintaining an application which has been in production for real."} {"_id": "17053", "title": "What is Java used for these days?", "text": "Java is fifteen years old. It started life as an alternative to C++ with a comprehensive standard library. Riding on the coattails of the Internet boom, it was popular for writing web applets. Its supposed portability was touted as a way to write desktop apps that would run on any platform. Now it's 2010. Applets are long gone. Desktop apps are giving way to web and mobile apps. Scripting languages are very popular, as is Flash, especially among web-centric developers. People have been chanting "Java's death is near" for several years. Yet a quick job search shows that Java is still a desired skill among programmers. So what is Java used for these days? What kinds of apps are you writing in Java? This should give us an idea of the "state of Java" today. Has the Java tide shifted from Swing desktop apps to Android mobile apps? If you write programs in a JVM language (such as Scala or Groovy), mention it."} {"_id": "208605", "title": "Do sigils make source code easier to read?", "text": "In most programming languages, variables do not have identifying characters like they do in PHP. In PHP you must prefix a variable with the `$` character. Example: var $foo = "something"; echo $foo; I'm developing a new scripting language for a business application, and my target users do not have a programming background. Do these characters make code easier to read and use? One reason PHP uses the `$` is that without it PHP can't tell whether a name is a function reference or a variable reference. This is because the language allows strange references to functions. So the `$` symbol helps the parser separate the namespaces. I don't have this problem in my parser. So my question is purely about readability and ease of use. I have coded for so many years in PHP that when I see `$foo` it's easy for me to identify this as a variable. 
Am I just giving a biased preference to this identifier?"} {"_id": "164125", "title": "What resources are there for facial recognition?", "text": "I'm interested in learning the theory behind facial recognition software so that I can hopefully implement it in the future. Not just face tracking, but being able to recognize individuals. What papers, books, libraries, or source code are available so that I can learn more about the subject? I have found libface, which seems to use eigenfaces for recognition. If there are any practitioners out there, please share any information that you can. Edit: I found a good post on the subject here."} {"_id": "164128", "title": "Entity Framework with large systems - how to divide models?", "text": "I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: * **Split by schema** We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. * **Split by intent** Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or are those in a separate model that is used by every project? * **Don't split at all - one giant model** This is obviously simple from a development perspective, but from my research and my intuition this seems like it could perform terribly at design time, at compile time, and possibly at run time. What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?"} {"_id": "164129", "title": "Is deserializing complex objects instead of creating them a good idea, in test setup?", "text": "I'm writing tests for a component that takes very complex objects as input. These tests are mixes of tests against already existing components, and test-first tests for new features. Instead of re-creating my input objects (this would be a large chunk of code) or reading one from our data store, I had the thought to serialize a live instance of one of these objects, and just deserialize it into the test setup. I can't decide if this is a reasonable idea that will save effort in the long run, or whether it's the worst idea that I've ever had, one that will cause those who maintain this code to hunt me down as soon as they read it. Is deserialization of inputs a valid means of test setup in some cases? 
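For the test-setup question (164129) above, a minimal sketch of the snapshot idea in Python, with `pickle` standing in for C#'s `BinaryFormatter`; the fixture path and function names are hypothetical, not from the post. The main caveat to weigh is that a binary snapshot silently goes stale whenever the class definition changes.

```python
import pickle

def snapshot(obj, path):
    """One-off: capture a live, fully populated input object from the running system."""
    with open(path, "wb") as f:
        pickle.dump(obj, f)

def load_fixture(path):
    """Test setup: rehydrate the captured object instead of rebuilding it by hand."""
    with open(path, "rb") as f:
        return pickle.load(f)

# In a test, the setup then collapses to one line, e.g.:
#     complex_input = load_fixture("fixtures/complex_input.bin")
#     result = component_under_test(complex_input)   # hypothetical component
```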
To give a sense of the scale of what I'm dealing with, the serialization output for one of these input objects is 93KB. Obtained, in C#, by: `new BinaryFormatter().Serialize((Stream)fileStream, myObject);`"} {"_id": "179968", "title": "Wiki/markup in enterprise as documentation", "text": "Why is it less common to use a wiki/markup in the enterprise for documentation? Word documents are used instead. Wiki markup is in text format, which is version-control friendly."} {"_id": "161189", "title": "Macintosh OS10.8 user experience with 'identified developer' downloads?", "text": "I work on a Windows/Mac cross-platform application that is sometimes downloaded from web sites. It is sometimes re-branded by and sold by third parties. I'm the 'Windows guy'; my Mac counterpart is on vacation. I'm getting some questions from a customer about what the Mac user experience will be if we sign the Mac installer with an Apple Developer certificate in our build system with our company name. For some reason this customer doesn't want to sign the app themselves, but doesn't want our company name to pop up in front of the user. Assuming the user's OS 10.8 Gatekeeper is set to allow downloads from both the Apple App Store and from 'identified developers', does the user see any warning dialog that provides certificate information, similar to the Windows User Account Control (UAC) prompts? In my tests it looks like internet-downloaded, Apple-signed apps are just trusted by OS10.8, and will run without anything resembling a Windows UAC prompt. The user experience seems to be: 1) Download the application, signed by an 'Apple identified developer', from the internet. 2) Click the application in the web browser download list or using Finder. 3) The application installer runs with no warning or caution dialogs. Is this correct? Also, is there some way on OS10.8 for the user to view, in human-readable form, the 'identified developer' certificate information of a signed file?"} {"_id": "161184", "title": "Using captured non-local variables in C++ closures", "text": "On this Wikipedia page I have found the following sentence regarding closures in C++11: > C++11 closures can capture non-local variables by copy or by reference, but > without extending their lifetime. Does this mean that an existing closure might refer to a non-local variable that has already been de-allocated? In such a case, is there any special care / best practice that one has to follow when using captured non-local variables inside a C++ closure?"} {"_id": "179966", "title": "Requesting quality assurance test cases ahead of implementation/change", "text": "Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: 1. Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". 2. Knowing the widespread use and effects of this requirement, had it come to a point where the requirement could not be finished prior to release, I asked if it would be a viable option to trash the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". 3. 
Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written (by QA) prior to the implementation and given to me, to aid me in the comprehension of this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility of this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state, due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks."} {"_id": "179965", "title": "Functional programming compared to OOP with classes", "text": "I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP. Each object would know how to do the things that object does. Or anything its parent class does as well. So I can simply tell `Person().speak()` to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items. But a function only does one specific thing. Would I simply have a `say()` method floating around and call it with the equivalent of a `Person()` argument so I know what kind of thing is saying something? So I can see the simple things; but how would I do the equivalent of OOP and objects in functional programming, so that I can modularize and organize my code base? For reference, my primary experience with OOP is Python, PHP, and some C#. The languages that I am looking at that have functional features are Scala and Haskell. Though I am leaning towards Scala. Basic Example (Python): class Animal(object): def say(self, what): print(what) class Dog(Animal): def say(self, what): super().say('dog barks: {0}'.format(what)) class Cat(Animal): def say(self, what): super().say('cat meows: {0}'.format(what)) dog = Dog() cat = Cat() dog.say('ruff') cat.say('purr')"} {"_id": "119288", "title": "Guidance for a C# developer to become better UI developer", "text": "I am a C# developer and have developed simple websites in regular ASP.NET (with ASP.NET controls) and a WPF application. Nowadays, I am trying to learn ASP.NET MVC3 and I have been exposed to HTML with the Razor view engine. To be honest, my knowledge of HTML and CSS is limited. Therefore, I keep posting questions now and then on SO for very simple tasks. This has made me very tired of this Q&A development process. So, now I am thinking of learning the basics of HTML, CSS and maybe some JavaScript. Therefore I would ask you to guide me toward becoming an efficient enough developer in these technologies: something that won't take much time and will get me up and running fast."} {"_id": "115073", "title": "Is it OK to change estimates in the middle of an iteration?", "text": "We have started using Agile/Scrum in a team of 4 developers. We did our story estimations and ordered the primed stories in the product backlog. We started with point-based estimation of complexity from 1 to 5, instead of the usual 1, 2, 3, 5, 8, 13, 
and so on. After working on a couple of the stories, we felt that some of the stories that were estimated at 4 points should only be 2, while others that were estimated at 2 are a lot more complex and should have been estimated at 5. I would like to know: * Is it OK to change our story estimates in the middle of the iteration? * Is it OK to use estimation points from 1 to 5, instead of the usual 1, 2, 3, 5, 8, 13, and so on? Although I personally feel that the answer should be no in both cases, I need to back myself up, as my own understanding is not very clear. (Any good reference material would be good though!)"} {"_id": "148811", "title": "DB technology for efficient search in tabular data?", "text": "We have a repository of tables. Around 200 tables; each table can have thousands of rows, and all tables are originally in Excel sheets. Each table has a different schema. All data is text or numbers. We would like to create an application that allows free-text search on all tables (we define which columns will be searched in each table) efficiently; speed is important. **The main dilemma is which DB technology we should choose.** We created a mock-up by importing all tables into MS SQL Server and creating a full-text index over them. The search is done using the CONTAINS keyword. This solution works well for a small number of tables, but it doesn't scale. We thought about a NoSQL solution, but we don't yet have any experience with it. Our limitations (which unfortunately I cannot affect): Windows servers only. But we can install on them whatever we want."} {"_id": "98990", "title": "Using Quadratic Bezier Curves to generate a cave that stays within certain bounds", "text": "I'm working on a project that generates a series of quadratic Bezier curves and connects them together, maintaining the slope from the end of one segment to the beginning of the next to make the transition smooth. The problem is that while the path is smooth, it tends to go off screen frequently. The way this is done currently: P0 = new Point(0, gapStart); P2 = new Point(wallWidth, gapEnd); P1 = getAnchorPoint(); where: gapStart is the P2.y of the previous curve; wallWidth is a constant; gapEnd is the one and only random aspect of the cave generation; it's the Y value that the curve will end at. The getAnchorPoint function takes points 0 and 2 and generates an anchor point so that the slope at the beginning of this segment is the same as at the end of the previous segment. So the main question is, as the title asks: what values can gapEnd take to ensure that the next curve has a gapEnd that won't send the curve off the screen? In other words, how do I determine the min and max values of gapEnd so that the next curve is safe? In addition, it's important that these values don't box the following curve in to being impossible: i.e., they must not leave the next curve without a possible solution that would allow for continued curve generation. Image of cave generation process"} {"_id": "238907", "title": "In what situation do Entity Framework enums become useful?", "text": "I am working on a project where there will be plenty of static options being stored in the database. I looked at using enums for this, but do not see how they could be useful. They do not create any kind of look-up table, just reference a number in the table which can be used in code as an enum option. The number is meaningless to anyone creating SSRS reports, and if you need to add an extra option, you need to recompile. 
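On the Bezier cave question (98990) above: the slope-matching anchor has a closed form, and the vertical range of a quadratic Bezier can be bounded exactly, which is what constrains the legal gapEnd values. For y(t) = (1-t)^2*y0 + 2t(1-t)*y1 + t^2*y2, the only interior extremum is at t* = (y0 - y1) / (y0 - 2*y1 + y2), so the segment stays on screen iff y0, y2 and y(t*) (when 0 < t* < 1) all do. A sketch in Python with invented names; none of this code is from the post:

```python
def mirror_anchor(prev_p1, prev_p2):
    """Slope continuity at the join: reflect the previous anchor through the
    shared endpoint, i.e. P1' = 2*P2 - P1."""
    return (2 * prev_p2[0] - prev_p1[0], 2 * prev_p2[1] - prev_p1[1])

def y_range(y0, y1, y2):
    """Exact min/max of a quadratic Bezier's y over t in [0, 1]."""
    ys = [y0, y2]
    denom = y0 - 2 * y1 + y2
    if denom != 0:
        t = (y0 - y1) / denom
        if 0 < t < 1:  # interior extremum exists
            ys.append((1 - t) ** 2 * y0 + 2 * t * (1 - t) * y1 + t ** 2 * y2)
    return min(ys), max(ys)

def gap_end_is_safe(gap_start, anchor_y, gap_end, screen_height):
    lo, hi = y_range(gap_start, anchor_y, gap_end)
    return 0 <= lo and hi <= screen_height
```

Avoiding boxing the following curve in then amounts to one extra check: after choosing gapEnd, mirror the anchor for the next curve and verify that at least one candidate gapEnd keeps that curve on screen too.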
Is there a situation where these have a genuine purpose and are a better fit than an entity? Or are they generally a bad practice for the above reasons?"} {"_id": "111706", "title": "Is it better to use pre-existing bad practices, or good practices that don't fit well with old code?", "text": "I was thinking of this because I was trying to write an extension for an existing piece of 3rd-party software, and their database is horribly denormalized. I needed to use their existing tables and add a bunch of new fields. I had the option of either creating new tables in their design style (which consists of almost all the properties being in one big table), or creating a new set of tables altogether and using something extra, such as triggers, to synchronize data between the new and old tables. I ended up going with the first option of using the existing poor design style, but I was left with this question: **Is it better to go with pre-existing bad practices, or to implement good practices that do not play nicely with existing code? Are there some situations where I should choose one over the other?** NOTE: Many answers so far have to do with slowly refactoring the bad code; however, I am unable to do that. The code is not ours, and it frequently gets updated by the vendor. I can only build onto it."} {"_id": "175766", "title": "How are requirements determined in open source software projects?", "text": "In corporate in-house software development it is common for requirements to be determined through a formal process resulting in the creation of a number of requirements documents. In open source software development, this often seems to be absent. Hence, my question is: how are requirements determined in open source software projects? By "determining requirements" I simply mean "figuring out what features etc. should be developed as part of a specific software"."} {"_id": "119131", "title": "Security issues with freelancing work on an existing website", "text": "I have a website that was recently completed. I'm contemplating using a freelance service such as SO for all future components. There is a considerable amount of work to be completed on top of the existing infrastructure. I've tried searching for an answer to this question, but its terms are so generic that it's difficult. My site is fairly complex, using relational databases, MySQL, PHP, JavaScript, jQuery, and Ajax. I'm concerned about using a freelance worker on the site with regard to giving them admin access to the server and allowing them to access the existing databases. Is there a tried and true way to handle / manage this? I'm open to allowing it, but what safety precautions and other ideas do you have to prevent mischief?"} {"_id": "130863", "title": "How to apply DRY to files shared by repositories?", "text": "I've got a few files which are used in several of my repos: * `functions.sh`, a shell library to, for example, print a colored warning/error message or the documentation of a script file. * `Makefile`; a standardized one which installs the file `$(CURDIR)/[dirname].sh` to `$(PREFIX)/[dirname]` and references a test script. * `LICENSE`. * `tools.mk`; Makefile commands to, for example, print all the variable definitions in the parent `Makefile`. These are more or less stable, and some are used in probably over a dozen repos. I've been thinking about _how to keep this DRY_, but none of the options so far seem satisfactory: 1. 
Keep doing it like now, creating a **copy for each new repo.** This keeps the code together with all its dependencies (avoiding bugs when the general solution is not general enough), but changes which are applicable in multiple places have to be applied to each separately. 2. Add an **executable** to each of the repositories to **download the files** needed. This means that developers and end users will have to run an extra command to get all the relevant files, and it breaks the possibility for developers to modify and `commit`/`push` the included files. 3. Use **`git submodule` or equivalent.** This at least keeps the repositories connected, but in the Git case it seems like it's restricted to "a dedicated subdirectory", so no top-level files, and no mixing with parent repository files which belong in the same directory. This could be circumvented with symlinks, but that's an ugly workaround for the obvious ideal situation. The _ideal solution_ should: 1. **Communicate with the correct repository** when doing an operation on a file. 2. Allow includes in the **same directories as the parent repository's own files**. 3. **A single, simple command** should be enough to update the entire repository and all includes, no matter how many or how deeply nested. 4. Allow includes in the **top-level directory**. 5. **Not incur** significant developer or user **constraints** (must be online while installing) or **extra work** (running a "pre-install" command separately from the "install" command). 6. Allow **"cherry-picking" of files** to include. Many projects might need a different `Makefile`, for example, and including one which is not used is just ugly (and would get uglier as more files are added). Is this sort of thing possible with current software?"} {"_id": "162985", "title": "Windows 8 development on a Mac", "text": "Is there any way to start programming for Windows 8 mobile on the Mac OS X platform? I'm already programming for iOS and want to develop for Windows 8 mobile, but don't want to install Windows."} {"_id": "162987", "title": "Is scanning the ports considered harmful?", "text": "If an application is scanning the ports of other machines, to find out whether any particular service/application is running, will it be considered harmful? Is this treated as hacking? How else can one find out on which port the desired application is running (without user input)? Let's say I only know the port range in which the other application could be running, but not the exact port. In this case, my application pings each of the ports in the range to check whether the other application is listening on it, using an already-defined protocol. Is this a normal design? Or is this considered harmful to security?"} {"_id": "221465", "title": "Random access (read/write) in data structures", "text": "Certain data structures, like Python's dictionary, are read/written in an unordered/random fashion. As programming in Python is iterative (and programming in general is?), how do these unordered data structures work? I understand that Python's dictionary is in essence a hash table. But I figured that the data structure gets stored in memory in insertion order, and is thus read in a similar order when iterating over the full data structure. But this is not the case. Aside from not understanding how this works, I also wonder about the benefit of this. But that may be all too clear once I understand how it works :) ps. 
Didn't think this was a Stack Overflow question, so I put it here..."} {"_id": "212271", "title": "Is such modification allowed by GPL? Which licences allow this, and which do not?", "text": "Assume that I have **library X** which contains **class A** with **method b**, and that library X is released under the **GPL licence**. public class A { public void b() { // this is an example of b method's body. But assume that the body of method b // is very complex System.out.println("operation a"); System.out.println("operation b"); System.out.println("operation c"); } } Assume that I am using library X and that I need behaviour similar to the behaviour of method b, but with one addition which I cannot achieve by simple extension of class A and overriding some of its methods. **The first question is:** May I extend class A and override method b in the way shown below, and is it really still allowed by the GPL licence? public class AExt extends A { public void b() { // this is an example of b method's body. But assume that the body of method b // is very complex System.out.println("operation a"); System.out.println("operation b"); doOperationINeed(); System.out.println("operation c"); } } **The 2nd question is:** Which of the commonly used open-source licences allow such extension, and which do not?"} {"_id": "116050", "title": "Filesystems that use logging: If you're writing the data in the log (on disk), and in the actual locations themselves (also on disk) then...?", "text": "Aren't you essentially writing the same data twice, to the disk? Doesn't this cause a slowdown of a factor of ~2x? What optimizations can be made to minimize the cost of having to write things twice?"} {"_id": "151011", "title": "SQL Strings vs. Conditional SQL Statements", "text": "Is there an advantage to piecing SQL strings together vs. conditional SQL statements in SQL Server itself? I have only about 10 months of SQL experience, so I could be speaking out of pure ignorance here. Where I work, I see people building entire queries in strings and concatenating strings together depending on conditions. For example: Set @sql = 'Select column1, column2 from Table1 ' If SomeCondition @sql = @sql + 'where column3 = ' + @param1 else @sql = @sql + 'where column4 = ' + @param2 That's a real simple example, but what I'm seeing here is multiple joins and huge queries built from strings and then executed. Some of them even write out what's basically a function to execute, including Declare statements, variables, etc. Is there an advantage to doing it this way when you could do it with just conditions in the SQL itself? To me, it seems a lot harder to debug, change and even write, vs. adding CASEs, IF-ELSEs or additional WHERE parameters to branch the query."} {"_id": "151013", "title": "What is the best C++ source code to read for a beginner?", "text": "I'm trying to improve my C++ coding technique by reading C++ source code. Which open source project would you recommend? Is the code of the Boost C++ Libraries a good one?"} {"_id": "189079", "title": "Storing in-text metadata in a discrete data structure", "text": "I am developing an application which will need to store _inline_, _in-text_ metadata. What I mean by that is the following: let's say we have a long text, and we want to store some metadata connected with a specific word or sentence of the text. What would be the best way to store this information? My **first thought** was to include in the text some kind of **`Markdown` syntax** that would then be parsed on retrieval. 
Something looking like this: Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam __nonummy nibh__[@note this sounds really funny latin] euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. This would introduce two problems I can think of: 1. A relatively small one is that if said syntax happens to occur naturally in the text, it can mess with the parsing. 2. The most important one is that this doesn't keep the metadata **separate** from the text itself. I would like to have a discrete data structure to hold this data, such as a different DB table in which the metadata is stored, so that I could use it in discrete ways: querying, statistics, sorting, and so on. * * * **EDIT:** Since the answerer deleted his answer, I think it might be good to add his suggestion here, since it was a _workable_ suggestion that expanded on this first concept. The poster suggested using a similar syntax, but linking the metadata to the `PRIMARY KEY` of the `metadata` database table. Something that would look like this: Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam __nonummy nibh__[15432] euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Where `15432` would be the `ID` of a table row containing the necessary, queryable information, as per the example below. * * * My **second thought** was to store information of this kind in a DB table looking like this: TABLE: metadata ID TEXT_ID TYPE OFFSET_START OFFSET_END CONTENT 1 lipsum note 68 79 this sounds really funny latin In this way the metadata would have a unique id, a `text_id` as a foreign key connected to the table storing the texts, and it would connect the data with the text itself by using a simple character **offset range**. This would do the trick of keeping the _data_ separated from the _metadata_, but a problem that I can immediately see with this approach is that the text would be fundamentally _not editable_. Or, if I wanted to implement editing of the text after the assignment of metadata, I would basically have to calculate character additions or removals compared to the previous version, and check whether **each** of these modifications adds or removes characters before or within **each** of the associated metadata ranges. Which, to me, sounds like a really inelegant approach. Do you have any pointers or suggestions for how I could approach the problem? * * * ### Edit 2: some XML problems Adding another case which would make this separation of data and metadata quite necessary. * Let's say I want to make it possible for different users to have **different metadata sets for the same text**, with or without the possibility of each user actually displaying the other users' metadata. Any solution of the _markdown_ kind (or HTML, or XML) would be difficult to implement at this point. The only solution in this case that I could think of would be to have yet another DB table which would contain the single-user version of the original text, connecting to the original text table by the use of a `FOREIGN KEY`. Not sure if this is very elegant either. * **XML has a hierarchical data model:** any element which happens to be _within_ the borders of another element is considered its _child_, which is most often not the case in the data model I'm looking for; in XML any _child_ element must be closed before the _parent_ tag can be closed, allowing for no overlapping of elements. 
Example: > `<note>`_Lorem ipsum dolor sit_ `<comment>`_amet_`</note>`_, consectetuer adipiscing elit_`</comment>`_,_ `<note content=\"adversative?\">`_sed diam_ **`<note>`**_nonummy_`</note>` _nibh_ **`</note>`** _euismod tincidunt ut laoreet dolore magna aliquam erat volutpat._ Here we have two different problems: 1. **Different elements overlapping:** The first comment starts within the first note, but ends after the end of the first note, i.e. it's not its child. 2. **Same elements overlapping:** The last note and the boldfaced note overlap; however, since they are the same kind of element, the parser would close the most recently opened element at the first closure, and the first opened element at the last closure, which, in this circumstance, is not what is intended."} {"_id": "245563", "title": "Is there a standard way of cleaning data files?", "text": "Having spent quite a lot of time working in software companies that deal in medium to large data sets, I've seen that a choke point in their processes is often prepping data files for loading into databases. For example, searching for formatting errors, peculiar whitespace or line-end characters, wrangling with Unicode formats, that sort of thing. Everyone I've ever known deals with this in a bespoke manner, because the requirements are so varied and often unique to the companies involved. I normally just fiddle about in a combination of hex editors, Excel, PowerShell and SQL to get the job done. But this is such a perennial problem that I find it hard to believe there isn't already some standard in place to take some of the basic grunt work out of the process. Is there a standard technique that is tailored for the purpose of cleaning data files?"} {"_id": "245564", "title": "Broadcasting - Listening to replies", "text": "We use JS by the way, but I think it's language-agnostic. I'm open to ideas. We have this "pub-sub" framework that we use at work to fix the problem of tightly-coupled code. Works fine. Modules listen to events, things run, events are broadcast, more things run. It's amazing. However... For instance, I wanted to click on a link to open a modal, modify the passed data, and send it back to the caller. Sending it to the modal would be as simple as broadcasting an event with the data. The modal, subscribed to the event, responds and does some alteration to the data. The problem is getting the data back to the caller. Scenarios: * The module broadcasts an event, and at the same time listens for an event in the hope of a response (an event from another module). How would I know that the response was a result of the event that was broadcast just before it? * Another is that I plan to send out a callback function with the data sent through the event. That way, the receivers can just call the callback with the data from their side. However, the module would be receiving an unknown number of callback calls, depending on how many subscribers there are. The scenario is more one of directed messaging, which is somewhat contradictory to the fire-and-forget nature of a broadcast. Are there any patterns which still maintain loose coupling, modularity and the "unknown to the other" type of architecture?"} {"_id": "114662", "title": "So many alternative languages available, is it good to read and implement applications using them?", "text": "I don't mean alternative languages in the sense of **if not Java, use C#**. I mean **if not Java, use Groovy, JRuby, etc.; if not HTML, use HAML**, and so on. Yes, there are many alternative languages available, which make coding fun and reduce the developer's work. 
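On the pub-sub question (245564) above: the usual answer is a correlation ID plus a reply topic. The requester tags each request with a fresh ID and accepts only replies carrying that ID, so broadcasts stay fire-and-forget while replies become addressable. A sketch in Python (the question is in JS, but the pattern is identical); the tiny `Bus` class and all names are invented for illustration:

```python
import uuid
from collections import defaultdict

class Bus:
    """Minimal synchronous pub-sub bus."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)
    def publish(self, topic, message):
        for handler in list(self.handlers[topic]):
            handler(message)

bus = Bus()

# Responder (the modal): reply on the topic named in the request.
def on_edit_request(msg):
    edited = dict(msg["data"], edited=True)
    bus.publish(msg["reply_to"], {"corr_id": msg["corr_id"], "data": edited})
bus.subscribe("edit.request", on_edit_request)

# Requester: generate a fresh correlation ID and accept only the matching reply.
corr_id = str(uuid.uuid4())
def on_reply(msg):
    if msg["corr_id"] == corr_id:  # replies to other requests are ignored
        print("got my data back:", msg["data"])
bus.subscribe("edit.reply", on_reply)

bus.publish("edit.request", {"corr_id": corr_id, "reply_to": "edit.reply", "data": {"x": 1}})
```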
Is trying these languages good or bad? Well, learning new things is never going to hurt, and these alternative languages are pretty easy to read and write. As a student, for example, using Groovy I can create a Swing-based application easily, and it takes far fewer lines of code than Java. When I enter an IT company where they use Java to build Swing applications, will it be tough for me to build in Java (because I have used alternative languages a lot in my college days)? ### Note: The example is given with Java and Groovy, but I need an answer that applies in general. Hope my question is clear enough!"} {"_id": "189073", "title": "How to deal with too much pragmatism in the project?", "text": "My team and I took over a medium-sized codebase over a year ago when the previous tech lead left the company. Owing to the lack of manpower, I fear we favored _pragmatic_ solutions over best practices a little too much. Now I have to deal with a constant decline in code quality and some kind of _organic growth_ of day-to-day processes. I regret that, when asked for code conventions a year ago, I basically gave _common sense_ as the only rule. Soon I had programmers using different syntactic styles and failing to see the difficulties this induces in a merge process. Another example is my push for database migration scripts. I tried to incorporate Flyway into our process, but after only a week I was quickly overruled by my boss. Despite my warning them about the upcoming mandatory use of database migration scripts, and providing them with as many clues, hints and tools as I could to mitigate the problem of applications not starting because of missing or failing migrations, they decided that it would be best to complain to my boss about not being able to do their work. I forcefully disabled Flyway again, and we now live with migration steps in arbitrarily named SQL files on a network share that you have to remember to apply to the respective database at the right time. One problem in our process was that we never did formal code reviews. So a lot of _hacks_ went under the radar and into the code base without anyone noticing in time. Nowadays I tend to read my teammates' check-ins when there is time (which is not often the case), but there is no automatic process to prevent unwanted changes. It is up to me to go to the developer in question and try to ease them into acknowledging why their code is bad. I thought of introducing lint-like tools such as FindBugs and Checkstyle, but I fear I would face the same psychological problems as I did with the database migrations. After all, I would make their jobs harder for them, and I can understand why this might lead to misunderstandings. So my question is: How can I go about improving our process and our code quality in an environment where _getting the job done_ is valued much more highly than _doing it right_?"} {"_id": "142882", "title": "Submitting software to a competition, it becomes their property?", "text": "So I'm about to submit a game to a competition, but as I looked through the rules a chunk grabbed my attention: > All Entries become the sole and exclusive property of Sponsor and will not > be acknowledged or returned. 
Sponsor shall own all right, title and interest > in and to each Entry, including without limitation all results and proceeds > thereof and all elements or constituent parts of Entry (including without limitation the Mobile App, the Design Documents, the Video Trailer, the > Playable and all illustrations, logos, mechanicals, renderings, characters, > graphics, designs, layouts or other material therein) and all copyrights and > renewals and extensions of copyrights therein and thereto. Without > limitation of the foregoing, each Eligible Entrant shall and hereby does > absolutely and irrevocably assign and transfer all of his or her right, > title and interest in his or her Entry to Sponsor, and Sponsor shall have > the right and may authorize others to use, copy, sublicense, transmit, > modify, manipulate, publish, delete, reproduce, perform, distribute, display > and otherwise exploit the Entry (and to create and exploit derivative works > thereof) in any manner, including without limitation to embody the Entry, > in whole or in part, in apps and other works of any kind or nature created, > developed, published or distributed by Sponsor and to and register as a > trademark in any country in Sponsor’s name any component of the Entry, > without such Eligible Entrant reserving any rights or claims with respect > thereto. Sponsor shall have the exclusive right, in perpetuity, throughout > the Territory to change, adapt, modify, use, combine with other material and > otherwise exploit the Entry in all media now known or hereafter devised and > in any manner, in its sole and absolute discretion, without the need for any > payment or credit to Entrant. So the game will become the sponsor's property; however, they don't ask for source code. So will I still own the rights to the source code, whatever that means? And if it doesn't win said competition, will I be able to publish it myself without their trademarks? I am very new to software legality stuff, so I would appreciate any clarification. Since there's a possibility I won't even own the source, is it possible to make the game's core engine open source software with a not-very-restrictive license and include that in the project, so I at least still own the game engine? Or does it not work that way?"} {"_id": "22568", "title": "What's the best way to explain branching (of source code) to a client?", "text": "The situation is that a client requested a number of changes about 9 months ago, which they then put on hold with the work half done. They've now requested more changes, without having made up their mind whether to proceed with the first set of changes. The two sets of changes will require alterations to the same code modules. I've been tasked with explaining why their not making a decision about the first set of changes (either finish them or bin them) may incur additional costs (essentially because the new changes would need to be made on a branch; then, if they proceed with the first set of changes, we'd have to merge them to the trunk, which will be messy, and retest them). The question I have is this: How best to explain branching of code to a non-technical client?"} {"_id": "142880", "title": "Why do HDFS clusters have only a single NameNode?", "text": "I'm trying to understand better how Hadoop works, and I'm reading > The NameNode is a Single Point of Failure for the HDFS Cluster. HDFS is not > currently a High Availability system. When the NameNode goes down, the file > system goes offline. 
There is an optional SecondaryNameNode that can be > hosted on a separate machine. It only creates checkpoints of the namespace > by merging the edits file into the fsimage file and does not provide any > real redundancy. Hadoop 0.21+ has a BackupNameNode that is part of a plan to > have an HA name service, but it needs active contributions from the people > who want it (i.e. you) to make it Highly Available. from http://wiki.apache.org/hadoop/NameNode So why is the NameNode a single point of failure? What is bad or difficult about having a complete duplicate of the NameNode running as well?"} {"_id": "189077", "title": "Managing Alerts in a Web Application Using a RESTful API", "text": "I have designed a RESTful API and I am now working on creating a web application to use the service. One thing I am struggling with is how to manage alerts in the web application (similar to the alerts Stack Overflow gives in your profile for a new answer, response, etc.). Basically the web application is a workflow between a user and his clients. Certain tasks have to be completed in order, and the web application alerts both the user and his clients of any actions. The user also gets alerted to any client actions that are outstanding. The way I have my architecture set up at the moment is that the API has no knowledge of alerts. It simply stores the resources that are retrieved by the web application. To work out if an alert needs to be displayed, the web application looks at all the resources and, based on a set of rules, decides what alerts need to be displayed. The problem I have is that I'm not sure this is a very efficient way to do things. There are a number of different alerts that can be displayed, and they depend on different API resources which will all need to be retrieved when a user or client logs in. My question: Is this the best way to achieve what I want, or are there other methods which are better to use and will help decrease web application load time and API calls?"} {"_id": "189074", "title": "How Do Computers Process Conditional/Input/Event-Based Code?", "text": "I understand that computers are basically a complex system of electrical signals that can calculate based on logic boards and some sort of gate mechanism, but how do computers process something like "if the number produced by the keyboard is less than 10", or "if a mouse is clicked two times in a certain amount of time, it equals a double click"?"} {"_id": "233416", "title": "How to compare and relate the concept of a DBMS to the concept of a Cloud Storage system?", "text": "I am a student of Computer Science and I am writing my dissertation on a **Cloud Storage** application developed in the company for which I work. My problem is the following one: my supervisor (the teacher who oversees my work) asked me to make a **comparison** between the **database concept** and the **Cloud Storage system** that I am presenting in the dissertation. He recommended that I focus on the **ACID** properties that have to be guaranteed in a DBMS but, in some way, also in a Cloud Storage application (for example, if I put a new file into my Cloud Storage account, this operation has to be durable over time, and so on for the other ACID properties). My problem is how to link together the DBMS concept with the Cloud Storage concept. I have written some pages along the following lines (and I ask you whether this could be a good way to link together these two topics which, in my opinion, are pretty different...): 1. 
I describe what a **DBMS** is: something that decouples the data model seen by applications from the physical representation of the same data on the hard drive. 2. I describe the **relational model** (and the concept of the **RDBMS** as the classic model for accessing data). In this section I introduce the concept of relational algebra, and I talk about the **limits of the relational model** in the cloud context, discussing **scalability problems** and **low performance when accessing data in particular contexts** (mainly caused by the **JOIN** operator). 3. Starting from the previous problem of _low performance when accessing data in particular contexts_: * I explain why the **JOIN** operator is the cause, and I present the **object-oriented database** as a possible solution to this problem (an object attribute can be a reference to another object, so I often do not need the JOIN operator to relate two objects; I simply have to follow a pointer. In this way the time complexity drops from **O(n)** to **O(1)**, ensuring incredibly higher performance when the number of data accesses is high). 4. Then I relate the presented concept of the **object-oriented database** to the **Cloud Storage system** developed, stating that they can be seen as similar tools, because the server that implements our Cloud Storage system uses an **object storage architecture** designed to manage petabytes of information and billions of objects across multiple geographic locations as a single system. **Object storage** is a storage architecture that manages data as objects... and this concept is pretty similar to how an **object-oriented DBMS** sees the data. 5. Finally I talk about the **ACID** properties, explaining the importance of these properties in the transaction management of a DBMS (both RDBMS and OODBMS) and translating these properties into the **Cloud Storage** context (for example, if I put a new file into my Cloud Storage account, this operation has to be durable over time, and so on for the other ACID properties). What do you think about this way of relating the DBMS topic to the Cloud Storage topic? Does it make sense, or is it a stretch? Other ideas are welcome. Thanks, Andrea"} {"_id": "114289", "title": "Ethics of collecting non-identifiable information on install", "text": "I'm wrapping up an application I'm writing that I'm hoping will get a decent user base. It will be free and open source, but I'm interested in collecting some analytic data on the install base. Essentially a small version of Google Analytics, but for a desktop application instead of a website. Details I would like to collect are: * Previous App Version (if any) * Current App Version * OS Version * Regional Geographic Location (Country/City level only, probably IP-based). * Unique System Hash (To determine # of unique installs) I have a feeling that the first three could probably be done with no real worries, as they are data points gathered by pretty much EVERY website in existence without ever asking permission. Geographic location I'm still fairly comfortable is OK. Again, collected by every website out there... and heck, they even get your actual IP. I wouldn't get this, but would actually resolve the geo location client-side and send ONLY that. All this data would probably be submitted to a Google Docs form, so I wouldn't have IP access even at the collection level. I'm a little more worried about the unique system hash, as people might see this as identifiable. 
However, my current thought is to gather the system's motherboard and CPU serials, concatenate them together and then create a hash (probably MD5) out of that string. The same thing that's done with passwords: no way to reverse it, so really NO way to discern who the data actually belongs to. You'd have to have that specific hash from every computer, along with their owner info, to match them up. So, I'm probably fine, but I just want to cover my ass here. Any thoughts from the community as to whether the data I want to collect is reasonable, or whether there are some data points I should not collect (or none at all)? As previously mentioned, the application will be totally open source, so they would be more than able to look at the code and verify that no questionable data is sent. The only thing I might be tempted to "hide" is the actual address of the collection form, purely so someone doesn't decide to spam it with bogus data (maybe I'm being too paranoid). As it is, the app uses some web APIs for which I have personal keys that cannot be given away in the checked-in source, so I have a file that is omitted from source control that contains things like those API keys. The application still works without them, just in a more limited fashion, and anyone could get a key of their own and compile to make it full-featured. I figure I could put the form URL in this section. Thoughts?"} {"_id": "165453", "title": "What is the right way to group this project into classes?", "text": "I originally asked this on SO, where it was closed and recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process: 1. Get the user's FTP credentials (username & password). 2. Check to make sure the credentials establish a valid connection to the FTP server. 3. Query several SharePoint lists and join the results of those queries to create a list of items that need to have action taken on them. 4. Each item in the list has a folder. For each item: * Zip the contents of the folder. * Upload the folder to the FTP server using SFTP. * Update the item's SharePoint data. 5. Email the user an Excel report showing, e.g., * Items without folder paths * Items that failed to zip or upload Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates. I've defined the following set of classes, each of which is in its own .cs file: `SFTP`: file transfer processes. `DataHandler`: SharePoint data retrieval/querying/updating processes. Also makes and uploads the zip files. 
If this is too big of a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here."} {"_id": "235504", "title": "Branch per feature: What are the actual benefits (and risks)?", "text": "I've read quite a lot of posts lately about different branching strategies (e.g a lot of links from To branch or not to branch?), and while almost every article explains on how to do this or that strategy they rarely explain the why. Currently our team is investigating whether or not to switch to \"feature branches\". We're currently using a \"everyone commits to trunk\" model and we have release branches, which we use to do hotfixes to previous releases. All in all, this model has worked very well in the past. So actually I'd say - if it ain't broken, don't fix it. However, there might be a point that I'm missing, so I'd like to ask: What are actual tangible benefits of using a feature branch strategy over \"everyone commits to trunk\"? What are the drawbacks of feature branches in your experience?"} {"_id": "114282", "title": "What can I do to keep my skills sharp without the internet?", "text": "This probably sounds like a funny question from the graybeards out there, but I seriously don't know what I'm supposed to do without the Google to offer help. Long story short, I'm planning on being pulled away for a deployment overseas. Things I can expect from this: * lots of downtime * effectively no internet * probably electricity I will be bringing a laptop with me, but I'm making an assumption that what little internet access I do have will probably be spent talking to my kids and family via Skype and IM. To add to that, the internet overseas is controlled tightly, so no USB sticks, external HD, or connecting my laptop up. What I can see on the screen will be probably all I can take away from it (when or if I have it at all). * * * I'm looking for ideas on things I can download on my laptop for a lengthy time away from the internet. I'm not too concerned which platform, language, or kit I download, since I will have alot of time to get proficient at anything. I really just want a fairly self-contained, well-documented and rich environment that I could download the whole thing and not feel the internet is even necessary to code in it. Am I asking too much in this always-on Internet connected world?"} {"_id": "207296", "title": "How do you learn to take a more OO approach to problems?", "text": "I have been learning C# and am trying to tackle some common projects / works of my own to become even better. Currently I am working on understanding the Mars Rover Problem. I read the description and have seen several solutions, they seem nice and follow the principles of OO design. * How do you come to understand what classes you need to make for a project? * Is there a book I can read to grasp the concept of OO design better? I understand in theory, but I just don't get how and what I would do when actually given a problem. For instance I see the Mars Rover problem and instantly in my head I think Ok I can get that done with just 3 classes, a Rover, Grid and a CommandCenter class. Then I look at online examples and there are a lot more classes, interfaces and overall breakdown."} {"_id": "172917", "title": "Learning good OOP design and unlearning some bad habits", "text": "> **Possible Duplicate:** > What books or resources would you recommend to learn practical OO design > and development concepts? 
I have been mostly a C programmer so far in my career, with knowledge of C++. I rely on C++ mostly for the convenience the STL provides, and I hardly ever focus on good design practices. As I have started to look for a new job position, this bad habit of mine has come back to haunt me. During the interviews, I have been asked to design a solution to a problem (like chess, or some other scenario) using OOP, and I am doing really badly at that (I came to know this through feedback from one interview). I tried to google stuff and came up with so many opinions and related books that I don't know where to begin. I need a good thorough introduction to OOP design with which I can learn practical design, not just theory. Can you point me to any book which meets my requirements? I prefer C++, but any other language is fine as long as I can pick up good practices. Also, I know that books can only go so far. I would also appreciate any good practice project ideas that helped you learn and improve your OOP concepts."} {"_id": "117957", "title": "Local Stack vs Call Stack", "text": "On the matter of Recursion: When you create a recursive function, you create a call stack. OK, no problem. However, a comment on this page (look for comments by \"LKM\") triggered my curiosity (and google was no help): * What is a local stack? * Why would it be faster than a call stack? * How would you actually implement it (pseudo-code, js/php/ruby/python are ok)? Subsidiary question (might deserve another question, but I don't know on which (.*\\\\.)?stack.*\\\\.com to ask): In these conversations about programming, I often see the \"recursion\" theme and how bad/newbie programmers don't grok it. I'm self-taught, and I never understood what the big fuss is all about. I use recursion a lot in my everyday coding, both for solving problems and sometimes just because I think it is beautiful. But these articles make me feel like maybe there is something there I just don't see. So: * What's all the fuss about recursion? What's there not to grok?"} {"_id": "117956", "title": "Many small scripts, one repository or multiple?", "text": "A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository that we are keeping all of our cronjobs in. There are about 20 crons and they are not really related except for the fact that they are all small python scripts and essential for some activity. We are using a `fabric.py` file to deploy and a `requirements.txt` file to manage requirements for all of the scripts. Our issue is basically, do we keep all of these scripts in one git repository or should we be separating them out into their own repositories? By keeping them in one repository it is easier to deploy them onto one server. We can use just one cron file for all the scripts. However this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one `requirements.txt` file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large and that solution seems to be overkill. A related question is, do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19?
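The one mechanism we know of that sidesteps the overwriting problem is /etc/cron.d, assuming a standard cron daemon on the server (the file and path names below are made up): each script ships its own fragment, and since every fragment is an independent file, installing one never touches the other 19.

# /etc/cron.d/report-foo -- unlike a user crontab, a cron.d fragment needs a user field
*/10 * * * * deploy /usr/bin/python /opt/crons/report_foo.py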
This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question and issue is: do we keep them all closely bundled as one repository, or do we separate them out into their own repositories with their own `requirements.txt` and `fabfile.py`? We feel like we're also probably overlooking some really simple solution. Is there an easier way to deal with this issue?"} {"_id": "55423", "title": "Month long creative challenges for programmers?", "text": "Does anybody know of any one-month creative challenges for programmers, sort of like National Novel Writing Month (NaNoWriMo) but for programmers? One month in which you must create something, not just a series of challenges for you to overcome, or problems for you to solve. Something like: National Website Building Month (NaWebBuMo), or National Game Building Month (NaGaBuMo), but real instead of fabrications spun from my mind? I'm looking for something that asks that you have a (relatively) finished product at the end of thirty days."} {"_id": "32421", "title": "Distance Education in Computer Science - HCI", "text": "I have been a software engineer / graphic designer for a few years and have recently been considering furthering my education in the field. (It was actually a very generous Christmas present.) I would primarily be interested in something like Human-Computer Interaction or a similar \"creative technology\" that involves heavy UI/UX Design, prototyping or Information Architecture. Anyways - I still plan on working full-time and was looking into part-time distance programs, and was wondering if anyone had experience pursuing a similar degree (either from a distance or in person) and could share their experiences. Thanks!"} {"_id": "207401", "title": "Writing Tests for Existing Code", "text": "Suppose one had a relatively large program (say 900k SLOC in C#), all commented/documented thoroughly, well organized and working well. The entire code base was written by a single senior developer who is no longer with the company. All the code is testable as is, and IoC is used throughout--except, for some strange reason, they did not write any unit tests. Now, your company wants to branch the code and wants unit tests added to detect when changes break the core functionality. * Is adding tests a good idea? * If so, how would one even start on something like this? **EDIT** OK, so I had not expected answers making good arguments for opposite conclusions. The issue may be out of my hands anyway. I've read through the \"duplicate questions\" as well and the general consensus is that \"writing tests is good\"...yeah, but not too helpful in this particular case. I don't think I am alone here in contemplating writing tests for a legacy system. I'm going to keep metrics on how much time is spent and how many times the new tests catch problems (and how many times they don't). I'll come back and update this a year or so from now with my results. **CONCLUSION** So it turns out that it is basically impossible to just add unit tests to existing code with any semblance of orthodoxy. Once the code is working you obviously cannot red-light/green-light your tests, it is usually not clear which behaviors are important to test, not clear where to begin, and certainly not clear when you are finished. Really, even asking this question misses the main point of writing tests in the first place.
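The closest thing I found to a workable compromise was pinning down whatever the code does today with characterization tests, roughly like the sketch below (shown as Java/JUnit for illustration; the real codebase is C#, and `PriceCalculator` and its numbers are invented). Such a test doesn't claim the recorded answer is right, only that a change which silently alters it will fail loudly.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorCharacterizationTest {
    // Records current behavior, right or wrong, so regressions surface.
    @Test
    public void totalForFiveUnitsStaysWhatItIsToday() {
        assertEquals(42.50, new PriceCalculator().total(\"SKU-1\", 5), 0.001);
    }
}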
In the majority of cases I found it actually easier to re-write the code using TDD than to decipher the intended functions and retroactively add in unit tests. When fixing a problem or adding a new feature it is a different story, and I believe that this is the time to add unit tests (as some pointed out below). Eventually most code gets rewritten, often sooner than you'd expect--taking this approach I've been able to add test coverage to a surprisingly large chunk of the existing codebase."} {"_id": "67295", "title": "How do you learn to program?", "text": "> **Possible Duplicates:** > I still can't figure out how to program? > I'm graduating with a Computer Science degree but I don't feel like I know > how to program I don't know if you guys have super brains specifically for programming, but I would like to know how you manage to learn, understand and apply Java programming. I am in grade 11 and we have learnt statements, objects, classes and arrays. We get programming tests each week and I'm failing them. Now we have a project to do BlackJack using a JPanel form. Also, memorizing the while loop is not the problem; I think that the problem is applying it to the situation (my friend never programs the same way as the teacher but still gets the same results)."} {"_id": "207408", "title": "Is it necessary to map integers to bits in a genetic algorithm?", "text": "From what I've read, genetic algorithms are usually (always?) applied to chromosomes of bits. So if a problem involves maximizing a function that takes integer values, the integers are first encoded as bits. Is this mapping necessary? It seems that you could take chromosomes of integers and apply crossover and mutation directly. So, if I have a function that takes 35 integer inputs, I can just apply the genetic operators to those integers, rather than to the 35xB bits (where B is the number of bits needed to encode an integer). Is there a reason this isn't done? Perhaps the algorithm would suffer because the problem is defined more coarsely (that is, a problem can be defined with shorter chromosomes if we're not using bits), but I'm curious if anyone has a better answer."} {"_id": "98340", "title": "Is there a license where all rights are granted?", "text": "The licenses I know, including open source and the less restrictive ones like BSD, make it mandatory to: * Retain the copyright notice, and/or: * Indicate that the precise part of source code was written by the original author. Let's say I want to share source code I've written and I want to grant all rights, including the right to: * Remove the copyright notice or substitute another one, * Sell the source code or use it in a commercial product with no restrictions, * Never indicate that this source code was written by me\u00b9, but at the same time disclaim any warranty as is the case in the BSD license. Are there licenses which match this case? Note: public domain is not a solution, since the term is unclear and changes from one country to another. * * * \u00b9 Why would anyone do that? Because we all have pieces of code which are not worth real licensing, but are still useful for the community (example: removing diacritics from a text).
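For instance, the whole \"work\" at stake might be no more than this (a Java sketch -- decompose accented characters, then strip the combining marks):

import java.text.Normalizer;

public final class Diacritics {
    // é decomposes into e + a combining accent; the regex then removes the accent.
    public static String strip(String text) {
        String decomposed = Normalizer.normalize(text, Normalizer.Form.NFD);
        return decomposed.replaceAll(\"\\\\p{M}\", \"\");
    }
}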
Providing it \"completely free\" is in this case much more friendly than saying that in order to use your code, the person must read a few paragraphs or pages of an UNREADABLE ALL-CAPITALS text, understand it, accept it, and then copy-paste your name and copyright notice every time the code is used."} {"_id": "102452", "title": "Switch to Java?", "text": "I have been doing .NET and MS tech/architecture for 4+ years. I have reached a dead end in my current job and am considering moving to a new job. Most of the currently available jobs in my city are Java-based: J2EE and the like. Is it wise to leave all those years of experience and start from scratch in the Java world?"} {"_id": "88714", "title": "Possibility to switch over to Java from .Net", "text": "I am an MCA fresher and will be working on .NET for the next few months (almost 10-12 months). After that I want to switch over to Java. What are the chances that I can switch over to it? What type of preparation am I supposed to do? I am also planning to appear for the SCJP but don't know how to get registered or what the registration fees are. Please also guide me on what companies' approach is towards candidates switching their technologies. What are the things that companies look for?"} {"_id": "110076", "title": "Planning Poker and wordy developers", "text": "My team is composed of 4 developers, all seasoned and skilled. One of them is a wordy, well-intentioned chap who insists on defining the technical solution to our stories before we put down our estimates with Planning Poker. He refuses to estimate if he doesn't have a rough idea of the agreed technical solution (which sounds reasonable, right?). The problem is that our estimating sessions are taking forever to finish!! In your experience, how do you deal with this kind of personality when playing planning poker?"} {"_id": "110071", "title": "How should I implement Transaction database EJB 3.0", "text": "In the CustomerTransactions entity, I have the following field to record what the customer bought: @ManyToMany private List listOfItemsBought; When I think more about this field, there's a chance it may not work because merchants are allowed to change an item's information (e.g. price, discount, etc...). Hence, this field will not be able to record what the customer actually bought when the transaction occurred. At the moment, I can only think of 2 ways to make it work. 1. I will record the transaction details into a String field. I feel that this way would be messy if I need to extract some information about the transaction later on. 2. Whenever the merchant changes an item's information, I will not update that item's fields directly. Instead, I will create a new item with all the new information and keep the old item untouched. I feel that this way is better because I can easily extract information about the transaction later on. However, the bad side is that my Item table may contain a lot of rows. I'd be very grateful if someone could give me some advice on how I should tackle this problem. **UPDATE:** I'd like to add more information about the current design. public class Customer implements Serializable { @OneToMany private List listOfTransactions; } public class CustomerTransactions implements Serializable { @ManyToMany private List listOfItemsBought; } public class Merchant implements Serializable { @OneToMany private List listOfSellingItems; }"} {"_id": "209532", "title": "name convention for variables in C#", "text": "I'm watching a video on C# about variables.
The author declares a variable inside a method and names it like this: string MyName = \"James\"; My question is: which convention is recommended by the .NET Framework? Is it Pascal case, as in the above example, or camel case?"} {"_id": "209537", "title": "How/Where do I give my github commit a version?", "text": "I'm just learning to give projects versions. I understand writing details about the changes in my project when committing, but where should I put the version number for my project?"} {"_id": "92937", "title": "what is storing data in constant space?", "text": "\"bloom filter allows us to store data in constant space\" Can someone explain what exactly that sentence means?"} {"_id": "118661", "title": "Why would I learn C++11, having known C and C++?", "text": "I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL is obviously a better way. Sometimes use of a simple C function pointer is much, much more readable and clear. So I find beauty and practicality in both languages. _I don't want to get into the discussion of \"If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++.\" I think we all understand what I mean by mixing them._ Also, I don't want to talk about C vs. C++; this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has introduced many special cases, exceptions and irregularities that change how different features behave in different circumstances, placing restrictions on multiple inheritance, identifiers that act as keywords, extensions of string literals, lambda function variable capturing, etc. I know that at some point in the future, when you say C++ everyone will assume C++11. Much like when you say C nowadays, you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What good means here is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: **Why would I learn C++11?**"} {"_id": "93043", "title": "Where does \"method\" as a special term in OOP originate?", "text": "\"Method\" is a special term in Object-Oriented Programming. Does anyone know when the word began to be used in this particular sense, and in connection with what programming language or other branch of quantitative learning?"} {"_id": "165985", "title": "Serializing network messages", "text": "I am writing a network wrapper around `boost::asio` and was wondering about a good and simple way to serialize my messages. I have a message factory which can take care of dispatching the data to the correct builder, but I want to know if there are any established solutions for getting the binary data on the sender side and consequently passing the data for deserialization on the receiver end.
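To make the shape of what I'm after concrete, the simplest scheme I can think of is a length-prefixed header in front of each payload -- sketched below in Java purely for brevity (the real code is C++/asio, and all names here are invented). The builder alone decides the payload size; the wrapper only ever sees opaque, already-sized buffers.

import java.nio.ByteBuffer;

// Length-prefixed framing sketch: a 1-byte message type for the factory to
// dispatch on, a 4-byte payload length, then the payload itself.
final class Frame {
    static ByteBuffer pack(byte type, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + payload.length);
        buf.put(type).putInt(payload.length).put(payload);
        buf.flip();
        return buf; // the receiver reads 5 header bytes, then exactly that many more
    }
}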
Some options I've explored are: passing a pointer to a `char[]` to the serialize/deserialize functions (for serialize to write to, and deserialize to read from), but it's difficult to enforce buffer size this way; building on that, I decided to have the serialize function return a `boost::asio::mutable_buffer`, however ownership of the memory gets blurred between multiple classes, as the network wrapper needs to clean up the memory allocated by the message builder. I have also seen solutions involving `streambuf`'s and `stringstream`'s, but manipulating binary data in terms of its string representation is something I want to avoid. Is there some sort of binary stream I can use instead? What I am looking for is a solution (preferably using boost libs) that lets the message builder dictate the amount of memory allocated during serialization, and what that would look like in terms of passing the data around between the wrapper and the message factory/message builders. PS. Messages contain almost exclusively built-in types and PODs and form a shallow but wide hierarchy for the sake of going through a factory. Note: a link to examples of using `boost::serialization` for something like this would be appreciated, as I'm having difficulties figuring out the relation between it and buffers."} {"_id": "118666", "title": "How do you run a sprint retrospective to maximize participation and honesty? And what about follow up on action items?", "text": "I've seen it done as a free-for-all where the scrum master just asks everyone to throw out comments. Or a time-boxed session where \"anonymous\" comments are posted on the walls bucketed by category. My current team is constrained by being spread out, so it has to be done online/over the phone. What is everyone's best experience with soliciting constructive feedback? And how do you follow through on action items for improvement?"} {"_id": "96030", "title": "How analysis is different from design?", "text": "I'm sure you all have heard managers saying that \"we need an analyzer\", or \"we need a designer\". While I'm a .NET developer, I can hardly differentiate an analyzer from a designer (not a web designer or UI designer). Who is the analyzer? Who is the designer? Do they overlap?"} {"_id": "93049", "title": "Are \"Proactive\" designs on new projects useful?", "text": "When starting a new project, are \"proactive\" designs useful? I mean, if you create a UML diagram/Activity diagram/Use Case/Class diagram/etc. for anything and everything you can think of, then when you _think_ you're done, you start coding. Afterwards, you realize an important feature or method was left out. All your spiffy UML diagrams are nice stacks of worthless paper now. Is it easier/better to design one class, work on it a bit, refine it, then work on it more?"} {"_id": "165988", "title": "I think client's project will be a flop; should I discuss with him?", "text": "I have a meeting with a prospective client tomorrow for a certain e-commerce project he wants to commission. I had an overview of it over the phone, and from what I understand there are a gazillion such concepts already floating around, most of them are disasters, and I have reasons to believe that his project is significantly likely to meet the same fate.
Should I raise/discuss the commercial feasibility of his idea with him, or simply accept the project, give it my best and leave out all the rest?"} {"_id": "117481", "title": "Is it best to minimize using pointers in C?", "text": "I think most people would agree that pointers are a major source of bugs in C programs (if not the greatest source of bugs). Other languages drop pointers entirely for this reason. When working in C, then, would it be best to avoid using pointers whenever practical? For example, I recently wrote a function like this: void split(char *record, char *delim, int numfields, int fieldmax, char result[numfields][fieldmax]); While not as versatile as a dynamically allocated variable, in that it requires the programmer to know the number of fields in advance and anything over `fieldmax` is truncated (which is sometimes acceptable), it eliminates the need for memory management, and the potential for memory corruption. I think this is a pretty good trade, but I was wondering what other programmers' opinions on this were."} {"_id": "117480", "title": "Should non-interface code be hidden from the client?", "text": "I am working on a library which has several headers that are meant to only be used by the library itself. I also have a few classes and functions in headers that I do not want the client to use. For instance, I have a header that contains debugging functions, but they should not be used outside of the library as that could cause problems. Should this code be hidden from the client? If so, how?"} {"_id": "81624", "title": "How Do Computers Work?", "text": "This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. **But I Don't Know How Computers Work!** Please, bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like `Console.Readline()` in .NET (or Java or C++) and have it actually _do_ stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an Electronics question and not a 'Computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?'
for you to realize how much you don't know)."} {"_id": "253495", "title": "Is it ok to write \"extra\" unit tests?", "text": "My understanding of how TDD should work is that you write a failing test for the next bit of functionality you want to add to a function or object, code until the test passes, and then write the next test. Is it ever ok to write tests that pass with the code as is? One example of that happened today. I was writing a function that could Manchester encode an arbitrary number of bits. I wrote failing tests for encoding one single bit and coded until it passed, then wrote a failing test for passing two bits to the function and coded until it passed. But the solution I used to handle two bits made the code work for any number of bits. Because passing a byte to the function will be its most common use, I added a unit test to make sure it worked for 8 bits, which did in fact pass. Did I violate TDD principles? Is there anything wrong with adding redundant tests just for my peace of mind? I understand part of the problem is that I could have written a faulty test and would never know because it didn't fail in the first place, but I'm not sure what the alternative is."} {"_id": "117486", "title": "Creating a webservice API - how much \"credit\" should I give the client/developer", "text": "When creating a web service API, how much should I count on the developer to act by my rules? We are really aiming to create an API so the developers shouldn't have to write too much client-side logic... For example, if in one of my methods we require the client to send us a unique ID, we are relying on the client that the ID sent is really unique, so the client should also keep track of its requests and add logic to its code. Is that ok, or should we try to solve this kind of issue in a different, more transparent way?"} {"_id": "220731", "title": "Split skilled Scrum team", "text": "A Scrum team has been forced together and is feeling very uncomfortable. They are constantly saying that it is not working for them and that they are fed up with hearing the words _Agile_ and _Scrum_. They feel that the business is simply forcing a new buzzword on them. They don't have any Agile experience before this, including the Scrum master. Also, the team consists of a very disconnected skill set: * 1 manual tester * 1 .NET developer * 2 Cobol developers * A BA turned PO * The scrum master The .NET dev doesn't want to learn Cobol and the Cobol devs don't want to learn .NET. I've been asked to help out with making them more Agile; however, one of the main tenets of Agile is that the power for change must lie with those with the domain knowledge. Kanban could help, but it wouldn't tackle the broken skill set. _Any advice on where to start?_ Currently my plan is to start with the PO and see how stories are being written, but I am not sure where to go from there."} {"_id": "253499", "title": "Uniform variable naming across HTML, CSS and JS", "text": "I'm sorry if this is opinion-based, but how do you guys make sure that HTML, CSS and JS have uniform variable naming? I use vim, but I don't know if IDEs are smart enough to do this on their own. In case it's not clear, the issue is that if I change a variable name in HTML, I need to go and change the variable names in CSS and JS."} {"_id": "253498", "title": "What's a \"Polymorphic method\"?", "text": "Sometimes people use the phrase \"Polymorphic method/function\". Does it mean: 1.
A method that takes a polymorphic type as a parameter, and performs some operation on it. By \"polymorphic type\" I mean a super-type with multiple sub-types. 2. An abstract or virtual method in a super-type, with multiple sub-types that implement/override the method differently."} {"_id": "222467", "title": "Are 'The Pragmatic Programmer' and 'The Mythical Man Month' good books for people with limited practical experience programming", "text": "I was once told to read the books titled 'The Pragmatic Programmer' and 'The Mythical Man Month'. I have very little experience programming (basic HTML, CSS and JS). If I were to purchase these books and read through them, would they be beneficial to me with such a limited understanding? I ask because I am currently learning HTML5 and CSS3 from an online source and want some general reading material to add to my Kindle."} {"_id": "222465", "title": "Designing online exam", "text": "I need to design an online exam server for an exam like the GRE, in which question difficulty increases if you answer correctly and decreases if you answer wrong. > Questions are multiple choice questions > > The difficulty scale of a question is 1-10, 1 being the easiest and 10 being the > hardest. > > If two consecutive questions are answered wrong, decrease the difficulty by 1, > and if two questions are answered right, increase the difficulty by 1. > > The test starts with a question of difficulty level 4. > > A question carries marks equal to its difficulty. **My question is** : Which data structure should I use to store the questions? Which is the best algorithm to fetch a question by its difficulty, etc.? I am currently considering a doubly linked list: struct node { int data; node *prev; node *next; int n; int MAX; }; Here we need to store two sizes. One is MAX (the actual array size), and n is the number of questions still available to pick from at random; the slots from n to MAX hold questions we have already selected. Each node of the doubly linked list stores a prev link, a next link, a pointer to the question array, int MAX (the array size) and int n (the current size). Each node points to an array of questions for one level. If the answer is correct, it moves to the next node and picks a random question from that list; otherwise it moves to the previous node and picks a question. For example, let's say an array has 10 questions, 1 - 10. Since the array size is n = 10, you know that: 1. you select a random question, say rand() % 10 = the 6th question 2. now swap questions number 6 & 10, do n-- and return the 6th question 3. now n = 9, so next time the 10th slot will not be considered 4. random will return 1 - 9 only Is there any better way of doing it?"} {"_id": "222468", "title": "Is there a difference learning VBA on Mac vs Windows", "text": "I'm a programmer who'd like to learn VBA, but I don't have a Windows box. However, I have a MacBook Pro with the latest version of MS Office for Mac on it (2011 I think). Since I work in a .Net shop, I'd like to learn VBA. Is there anything I'm missing out on by not having a Windows machine to learn it on?"} {"_id": "167588", "title": "Why isn't functional language syntax more close to human language?", "text": "I'm interested in functional programming and decided to go head to head with Haskell. My head hurts... but I'll eventually get it... I have one curiosity though: why is the syntax so cryptic (for lack of a better word)? Is there a reason **why it isn't more expressive**, closer to human language?
I understand that FP is good at modelling mathematical concepts and that it borrowed some of their concise means of expression, but still, it's not math... it's a language."} {"_id": "151900", "title": "Assignments in mock return values", "text": "(I will show examples using php and phpunit but this may be applied to any programming language) The case: let's say we have a method `A::foo` that delegates some work to class `M` and returns the value as-is. Which of these solutions would you choose: $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue('baz')); $obj = new A($mock); $this->assertEquals('baz', $obj->foo()); or $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result = 'baz')); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); or $result = 'baz'; $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result)); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is \"too tricky\" and chose the 3rd or 1st. So what would you usually do? And do you have any conventions to follow in such cases?"} {"_id": "25037", "title": "What would you think about a new Java persistence tool, that's not really an ORM?", "text": "## Persistence in Java Over the past years, I have gathered experience in the field of persistence abstraction in Java, using concepts such as EJB 2.0, Hibernate, JPA and home-grown ones. They seemed to me to have a steep learning curve and lots of complexity. In addition, as a big fan of SQL, I also thought that many abstraction models provide too much abstraction over SQL, creating concepts such as \"criteria\", \"predicates\", \"restrictions\" that are very good concepts, but not SQL. The general idea of persistence abstraction in Java seems to be based on the object-relational model, where RDBMS are somehow matched with the OO world. The ORM debate has always been an emotional one, as there does not seem to exist a single solution that suits everyone - if such a solution can even exist. ## jOOQ My personal preference for how to avoid ORM-related problems is to stick to the relational world. Now the choice of data model paradigm should not be the topic of discussion, as it is a personal preference, or a matter of what data model best suits a concrete problem. The discussion I would like to start is around my very own persistence tool called jOOQ. I designed jOOQ to provide most of the advantages modern persistence tools have: * A Domain Specific Language based on SQL * Source code generation mapping the underlying database schema to Java * Support for many RDBMS Adding some features that few modern persistence tools have (correct me if I'm wrong): * Support for complex SQL - unions, nested selects, self-joins, aliasing, case clauses, arithmetic expressions * Support for non-standard SQL - stored procedures, UDTs, ENUMs, native functions, analytic functions Please consider the documentation page for more details: http://www.jooq.org/learn.php. You will see that a very similar approach is implemented in Linq for C#, although Linq is not exclusively designed for SQL. ## The Question Now, having said that I'm a big fan of SQL, I wonder whether other developers will share my enthusiasm for jOOQ (or Linq). Is this kind of approach to persistence abstraction a viable one?
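For concreteness, a query in this style reads roughly like the sketch below -- simplified, not the exact API, and assuming `create`, `BOOK` and `AUTHOR` are artifacts produced by the source code generation mentioned above. The point is that the Java code mirrors the SQL it emits one-to-one, instead of hiding it behind an object graph.

// Simplified sketch; `create`, BOOK and AUTHOR come from the code generator.
Result<Record> books =
    create.select(BOOK.TITLE, AUTHOR.NAME)
          .from(BOOK)
          .join(AUTHOR).on(BOOK.AUTHOR_ID.equal(AUTHOR.ID))
          .where(AUTHOR.NAME.like(\"Orwell%\")) // reads like the SQL it generates
          .orderBy(BOOK.TITLE)
          .fetch();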
What are the advantages / disadvantages you might see? How could I improve jOOQ, and what's missing in your opinion? Where did I go wrong, conceptually or practically? ## Critical but constructive answers appreciated I understand that the debate is an emotional one. There are many great tools out there that do similar things already. What I am interested in is critical but constructive feedback, based on your **own experience** or articles you may have read."} {"_id": "167580", "title": "What is the aim of software testing?", "text": "Having read many books, there is a basic contradiction: Some say \"the goal of testing is to find bugs\", while others say \"the goal of testing is to assess the quality of the product\", meaning that bugs are its by-products. I would also agree that if testing were aimed primarily at a bug hunt, who would do the actual verification and actually provide the information that the software is ready? Even e.g. Kaner changed his original definition of the goal of testing from bug hunting to providing quality assessment, but I still cannot see the clear difference. I perceive both as equally important. I can verify software by its specification to make sure it works, and in that case bugs found are just by-products. But I also perform tests just to break things. Also, which definition is more accurate? Note that above I am primarily referring to software testing as a process."} {"_id": "25033", "title": "Should I still consider using Appcelerator Titanium for building mobile apps if I don't have any web dev skills?", "text": "I'm an experienced desktop developer who's recently begun writing iOS apps and would like to venture into Android development as well. I've been hearing a lot of talk surrounding the Appcelerator Titanium framework lately, but I'm not sure I fully understand its purpose. As I understand it, it's a framework for building native mobile apps using web technologies. If I don't have any web dev skills, are there any ways that using Appcelerator Titanium would benefit me? Thanks for your thoughts, I'm going to continue researching this right now."} {"_id": "112458", "title": "Forking an open source project nicely", "text": "It's time. You've worked long and hard to add your vision to the open source project you love, on which you've worked, debated, and to which you've contributed inestimable amounts of code and insight. But it's not going to work out with the existing developers. **You finally need to fork the code.** How do you do this and remain on the best terms possible with the existing project? How do you not say, \"Oh yeah? Fork you!\" Aside from the mechanics of cross-pollination, and assuming that the reasoning for forking is sound, logical, and acceptable, what issues come up? Competition? Resource sapping? User poaching? How do you go through this arguably difficult and long process until you've diversified enough that these are no longer seen as problems? Rather than discussing the reasoning behind the decision, please assume that you have already been convinced that forking the code is the best overall solution, and now the point is to move forward in the best way possible."} {"_id": "150050", "title": "What is the name for a NON-self-calling function?", "text": "I have a collection of normal functions and self-calling functions within a JavaScript file. In my comments I want to say something along the lines of \"This script can contain both self-calling and XXX functions\", where XXX is the non-self-calling functions.
\"Static\" springs to my mind, but I feel that would be incorrect because of the use of static methods in PHP, which is something completely different. Anyone have any ideas? Thanks!"} {"_id": "82251", "title": "Types of databases", "text": "I read that there exist three main types of databases: * Transactional (Client-Server database // OLTP database) * Decision support system (DSS) (Data warehouse database // Data mart // Reporting database) * Hybrid What is the type of database considered for a social network, like Facebook or Twitter? I guess the category is: Transactional > Client-Server database. From the book: **A transactional database is a database based on small changes to the database (that is, small transactions). The database is transaction-driven. In other words, the primary function of the database is to add new data, change existing data, delete existing data, all done in usually very small chunks, such as individual records.** But, as I said, I am not sure."} {"_id": "225947", "title": "How to unit test a missing case in a switch statement where all cases are true", "text": "I often use `enum` types in my code with a switch to apply logic to each type. In these cases it's important that each `enum` has code implemented. For example: public enum eERROR { REQUIRED, DUPLICATE, UNKNOWN, MISSING_VALUE } public void Error(eERROR pError) { switch (pError) { case eERROR.REQUIRED: // ...... return; case eERROR.DUPLICATE: // ...... return; case eERROR.MISSING_VALUE: // ...... return; case eERROR.UNKNOWN: // ...... return; } throw new ArgumentException(\"Unsupported error type\"); } At the bottom I've added an exception as a last-resort check that a programmer remembered to add any new `enum` types to the `switch` statement. If a new type is added to `eERROR`, the exception _could_ be thrown if the code is not updated. Here is my problem. The above code generates unit test coverage of only **99%** because I cannot trigger the exception. I could add a generic `unhandled` to `eERROR` and call `Error(eERROR.unhandled)` just to get **100%** coverage, but that feels like a hack to solve something that is not a problem just to get full coverage. I could remove the line, but then I don't have the safety check. Why is _99%_ a problem? Because the code base is so large that one more uncovered method doesn't move coverage down to _98%_. So I always see _99%_ and don't know if I missed a method or two. How can I rewrite `Error()` so that all newly added enums that are not added to the switch will be caught somehow by the programmer, and also be covered by tests? Is there a way of doing it without adding a dummy enum?"} {"_id": "24922", "title": "How to deal with Interviews where HR is missing?", "text": "The situation: Person X, who already has a full-time job, applies to Company A. Money is good at A, but the real reason X applied is 'cause the work at A really excites X. Next, someone from the technical dept. of A calls up X, gives some assignment and holds a round of technical interviewing over the phone. X does very well in all of it so far. Finally, X asks the tech guy from A \"What's next?\" The guy says we'd get back to you in 3-4 days' time, and then doesn't call up in the next 3-4 days. X is confused. He's got the mail-id of the tech guy but is not sure if he should inquire -- this is not HR, after all. In fact, HR was never anywhere in this process. What should he do? **My Take** : Don't mail. You already have a full-time job and this shows desperation.
**The Problem** : This isn't me, and people tell me that some companies wait for the candidate to get back just to double-check their interest level. * * * **Update** Mailing the engineering team proved very useful; they apologized for being busy, and now I am told the next round is in early January."} {"_id": "153503", "title": "Which is more important in a web application code promotion hierarchy? production environment to repo resemblance or unidirectional propagation?", "text": "Let's say you have a code promotion hierarchy consisting of several environments, two of which (the polar ends) are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment needed to go into the repository. Which of the following two sets of practices / approaches do you find best: 1. Committing these non-developed file modifications made in the production environment so that the repository resembles the production environment as closely and as often as possible. 2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they were absolutely necessary in other environments for development. So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production environment to repository resemblance (option 1), but I keep an open mind and would accept an answer supporting either."} {"_id": "153506", "title": "Question about modeling with MVC (the pattern, not the MS stuff / non web)", "text": "I'm working on an application in which I'm looking to employ the MVC pattern, but I've come up against a design decision point I could use some help with. My application is going to deal with the design of state-machines. Currently the MVC model holds information about the machine's states, inputs, outputs, etc. The view is going to show a diagram for the machine, graphically allowing the user to add new states, establish transitions, and put the states in a pleasing arrangement, among other things. I would like to store part of the diagram's state (e.g. the x and y state positions) when the machine information is stored for later retrieval, and am wondering how best to go about structuring the model(s?) for this. It seems like this UI information is more closely related to the view than to the state-machine model, so I was thinking that a secondary model might be in order, but I am reluctant to pursue this route because of the added complexity. Adding this information to the current model doesn't seem the right way to go about it either. This is my first time using the MVC pattern, so I'm still figuring things out. Any input would be appreciated."} {"_id": "191380", "title": "Alternative Scripting Language to Lua?", "text": "I would like to add scripting support to an application, and with plenty of scripting languages available I am a bit overwhelmed.
At first I thought about Python, but I guess Python is a little too big for my taste and for the application (although I like it very much; it gets a lot of things right in my opinion, and the syntax is clean and simple, especially if you are not doing heavy OOP). I am also very fond of Lisps. I like Scheme and it is the perfect extension language - at least in some ways - but I would like to stay syntactically more conservative. I remember having heard a lot of positive things about Lua, and indeed Lua seems a good possibility. Unfortunately there is this one thing that really \"turns me off\": it's the `local` keyword for local binding. This is really ugly to my taste, and I frankly do not quite grok why languages are still designed this way, in essence requiring everyone to know about `local`. This is also a major drawback of JavaScript in my opinion, and I feel like it all boils down to bad language design. My major concern is that a user might override a global function from within her/his own function just by omitting `local`. But then Lua gets a lot of things right: size, speed, and it is surprisingly Scheme-esque. Yet I am really afraid that it is not suitable for non-programmers who have no sense of scope. * Any alternatives on the horizon? * Might hacking the Lua core to make `local` the default be an option? The Lua sources appear to be quite decent and modular, so this would be an interesting project ;)"} {"_id": "188840", "title": "Is it safe to assume that one controller will only ever use one primary model?", "text": "So, I'm designing an MVC framework. In the name of keeping everything statically typed and non-magical, I've come to quite a problem with \"automatically\" passing models to a controller. So, traditionally, I usually see no more than one model used at a time in a controller as far as automatic population goes. For instance, take this tutorial. There is a method like this in the controller: [HttpPost] public ActionResult Create(Movie newMovie) { if (ModelState.IsValid) { db.AddToMovies(newMovie); db.SaveChanges(); return RedirectToAction(\"Index\"); } else { return View(newMovie); } } My concern is passing a `Movie` model to the `Create` method which is populated by FORM values \"magically\". In my API, this should be easily possible and would look something like this at routing: var movie = router.Controller((context) => new MovieController(context)) .WithModel(() => new Movie()); movie.Handles(\"/movie/create\").With((controller, model) => controller.Create(model)); My concern with this is that it is much harder to have multiple models because of limitations with C#'s type system. Of course, the controller can always manually create the models from FORM values and such, but it's not nearly as pretty. So, my question: Is it common to have something like `Foo(Movie model)` and `Bar(SomeClass model)` in the same controller class? Is it a good idea for me to attempt to support such a scenario, or is it just a symptom of putting too much unrelated logic in a single controller? Note: if you're concerned about how this fluent API is even possible, the answer is generic delegates... lots and lots of generic delegates :) (but so far very little reflection)"} {"_id": "188843", "title": "How to develop a Delete command through Behavior Driven Development?", "text": "I am trying to develop a Delete command through BDD that will simply delete a user from the database, given a user_id as a parameter.
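So far the only behaviours I can articulate are sketches like the following (written as JUnit just for illustration; the empty bodies and the repository they would exercise are left out, and all names are invented):

import org.junit.Test;

// Sketches only -- test bodies elided; each name states one expected behaviour.
public class DeleteUserCommandTest {
    @Test
    public void deletingAnExistingUserRemovesOnlyThatUser() { /* ... */ }

    @Test
    public void deletingAnUnknownUserIdReportsAnErrorAndChangesNothing() { /* ... */ }
}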
Beyond these, what are some possible behavioral tests that will drive me to write a proper implementation for the command?"} {"_id": "194154", "title": "Writing OOPS code in Non Object Oriented Language", "text": "I was reading an article on the internet as I was preparing for an interview, and I found the statement below: > Writing object-oriented code, even in a non-object-oriented language. Is this statement true? Can anybody provide an example to justify the above statement, in case somebody asks me this question in an interview? And when do we write object-oriented code in a non-object-oriented language? I have mainly worked in Java, so any example corresponding to that will be helpful. Thanks in advance."} {"_id": "117220", "title": "Is the market of small-scale development/production tools that difficult?", "text": "I was wondering, is there an economic rationale in developing and marketing a small tool (standalone or rather a plugin)? Of course, let's assume that the tool does indeed solve a real problem from the production process. And by small, I mean an application that can be developed by 1-2 developers within 2 years. Do you know any contemporary success stories? I have two observations that make me wonder: 1. Anywhere I turn, people ask for free and preferably open-source tools. Even when the price is low, and the software is worth its price, I see this big reluctance to spend any money on commercial development software. An application author would probably have to assume that only a very small percentage of all people who might find his application useful will actually purchase it. 2. I've seen a few good commercial software examples that after some time started offering free licenses. The most notable example for me was gDEBugger from Graphic Remedy. Last time I checked, it didn't even have any worthy competitor (commercial or noncommercial) in the market of OpenGL debuggers. I have no idea what might have made them do that, other than poor sales."} {"_id": "210127", "title": "Should I fork for a major re-write that uses a small amount of the original code?", "text": "I'm writing a library. It's a completely rewritten version of another one, to suit my needs (PCL compatibility, mainly). However, the API will be completely rewritten, as I'll need to change a lot of stuff around for PCL compliance. Also, as it is a rewrite, I won't be able to just start from the library and change it bit by bit, as I typically see with forks. I tried that, but it just didn't work. So what should I do? Should I fork here, or should I make a new library?"} {"_id": "198864", "title": "Deploy PHP project without giving away the source code?", "text": "I am working on developing a range of web-based software solutions. But the problem is that for some clients we need to deploy the applications in their local intranet. In such a situation I have to stick with Java as the development language, as I cannot give away the source code of the project. Now in order to tap more consumers we put these solutions online. Now considering the fact that Java hosting is more expensive than PHP, we are left with no choice but to again re-develop the entire solution in PHP. Thus we end up having two versions of the same solution to maintain and upgrade on different platforms.
So can anyone point us to a good solution in which we can keep the online offering and also have a deployable version of the application that we can ship without its source code, while at the same time keeping the application cost-effective from a hosting point of view?"} {"_id": "198865", "title": "How to find method and class usages along git repositories", "text": "We have some code in a git repository that's used across different projects (with different git repositories). The problem is that we now have so many different projects that it's difficult to track which projects will be affected, and how, when there is a change in the shared module. Is there any way to find class and method usages across different git repositories? Or how would you manage an architecture like that?"} {"_id": "198862", "title": "Is it possible to shuffle team in between a sprint?", "text": "We are working with the Scrum framework. Now a situation has arisen in which we have to shuffle 2-3 Scrum team members. Is it possible to shuffle sprint team members in the middle of a sprint? What are the potential drawbacks of doing this?"} {"_id": "198861", "title": "Migrating Web based projects from Java to PHP", "text": "At our workplace, after hours of coding, testing and QA, we have successfully added a couple of software tools to our product line. We specialize in web-based software solutions, so in order to tap more users we are now considering putting our solutions online for hosting and providing online access to all our clients for their products. But what is getting in our way is that we have developed all these solutions in Java, and compared to PHP, Java hosting is very expensive. So we are now planning to migrate all these solutions to PHP while also maintaining their Java counterparts. How should we start with this migration task? We used AJAX in all our Java-based solutions. Tech specs are: 1. Front end web pages in JSP 2. Back end server side in Java using Apache Struts2 3. Database in MySQL 4. PDF report generation using the iText PDF lib for Java NB: We have started a little with the CodeIgniter framework in PHP."} {"_id": "101659", "title": "Java Frameworks", "text": "I would just like to ask which Java frameworks are worth learning, especially when it comes to web applications, but other frameworks are welcome too. Could you please state: * The learning curve * The availability of resources * The performance of that framework * Whether you would recommend it or not I want to study frameworks in my free time. If it's not too much of a bother, thanks to those who will answer! If it's too vast: How about for web applications? Should I learn Spring or Struts 2? iBatis or Hibernate? Stuff like that. Still, if that's too vast: What frameworks would you recommend out there? Really. Anything. I want to learn, and I'm working as a JEE guy now, so if it's related to web apps then I would take that as a bonus."} {"_id": "97344", "title": "Can You Make An Entire Website With JavaScript?", "text": "I have looked at the aims of JavaScript and it intends to `provide an enhanced user interface and dynamic websites`. I am trying to get into the web development business and am learning JavaScript. But unfortunately, I have never seen an entire website implemented in JavaScript. JavaScript is usually just used for ads. People use things like PHP or ASP.NET (and C#) for serious web development.
So will learning JavaScript seriously enable me to make **Dynamic Websites**, and are there any good websites made with JavaScript? Is JavaScript a good choice for making websites? Is it even possible?"} {"_id": "193726", "title": "How do you keep consistent self confidence while coding?", "text": "As the number of bugs in a codebase increases, that number not only decreases the quality of the code, it also affects the mindset of the developers. Developer self-confidence falls when things are not going well. If self-confidence is not right, things more easily become a mess. How do you guys keep your self-confidence up in these situations?"} {"_id": "193727", "title": "What algorithm should I use to find the shortest path in this graph?", "text": "I have a graph with about a billion vertices, each of which is connected to about 100 other vertices at random. I want to find the **length** of the shortest path between two points. I don't care about the actual path used. Notes: * Sometimes edges will be severed or added. This happens about 500 times less often than lookups. It's also ok to batch up edge changes if it lets you get better performance. * I _can_ pre-process the graph. * If it takes more than 6 steps, you can just come back with infinity. * It's acceptable to be wrong 0.01% of the time, but only in returning a length that's too long. * All edges have a length of 1. * All edges are bidirectional. I'm looking for an algorithm. Pseudocode, English descriptions, and actual code are all great. I could use A*, but that seems optimized for pathfinding. I thought about using Dijkstra's algorithm, but it has a step which requires setting the shortest-path-found attribute of every vertex to infinity. _(If you're wondering about the use-case, it's for the Underhanded C Contest.)_"} {"_id": "193729", "title": "What is watershed in the context of image processing?", "text": "I am new to image processing using Python. Now I am learning OpenCV and the mahotas module in Python. Many functions in these modules are related to the watershed of an image. I don't know what watershed means for an image. Here is an example from the OpenCV documentation: http://docs.opencv.org/search.html?q=watershed&check_keywords=yes&area=default"} {"_id": "203093", "title": "Naming conventions used for variables and functions in C", "text": "While coding a large project in C I came upon a problem. If I keep on writing more code, then there will be a time when it will be difficult for me to organize the code. I mean that the naming of functions and variables for different parts of the program may seem mixed up. So I was wondering whether there are useful naming conventions that I can use for C variables and functions? Most languages suggest a naming convention. But for C, the only thing I have read so far is that the names should be descriptive for code readability. EDIT: Examples of suggested naming conventions: * Python's PEP 8 * Java Tutorial I read some more naming conventions for Java somewhere but can't remember where."} {"_id": "99120", "title": "How can this all fit into 64kb?", "text": "So, I am here at Assembly 2011 and there was this demo played: http://www.youtube.com/watch?v=69Xjc7eklxE&feature=player_embedded
So I repeat, how have they made this fit into such a small file?"} {"_id": "36896", "title": "Looking for a large collection of XML files", "text": "Hey, I'm writing a program that I need to test with 1000s of XML files - any idea where I can get some?"} {"_id": "99123", "title": "What is the clause in an employee contract that says they own all your code?", "text": "I am looking at my employee contract and I can't seem to figure out where it might say that they own all the code that I write, be it at work or at home. Any examples of what the wording could be like?"} {"_id": "36890", "title": "stackoverflow induced passivity - how to cope?", "text": "After not _really_ working on my pet project for a while, I discovered Stackoverflow, and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I made, I first wanted to fix everything. However, it's a **pet** project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though it often helps, even in this field). Issues that stuck out were * numerous security issues (e.g. CSRF-prevention and bcrypt eluded me) * not object-oriented (at least the PHP part, the JS-part _mostly_ is) * no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient * **really** poor MySQL usage (no prepared statements, mysql extension, heard about setting proper indices two days ago) * using MooTools even though jQuery seems to be _fashionable_, so there's probably always going to be better integration with services I'd like to use (like Google Visualization) So, my SO-induced frenzy turned into **passivity**. I can't do it all (soon) in the rather small amount of spare time I can spend on working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished & unpublished project will never become popular, right?). No clear conscience without good security though, and if I don't use a framework for _auth_ and other complex stuff I'll regret having to do it myself. One obvious answer would probably be going open-source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it **anyway**? What's the best practice for little-practice people? * * * I couldn't edit my question, because it was transferred. I would have emphasised that I have a working product, but like so many pet projects it will _never be finished_, so I intended to make things easier for myself in the future too. Some of you mis-guessed my problem as being afraid to get started, but I couldn't react up here, sorry. I'm surprised that no one seems to think that using a framework would be a good way to relieve myself of some responsibilities (such as implementing an authorization module that conforms to these specs). 
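(To make the MySQL bullet above concrete — the kind of change I know I should make is moving from the old mysql extension to prepared statements. A sketch of mine, assuming PDO; the connection details are placeholders:

    // prepared statement via PDO instead of the legacy mysql_* calls
    $pdo = new PDO('mysql:host=localhost;dbname=petproject', $user, $pass);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = ?');
    $stmt->execute(array($email));         // the value is bound, never concatenated into SQL
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

)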
Or is it just too hard to turn to something like an MVC when you have been working process-oriented before?"} {"_id": "99128", "title": "Build automation: Is it usual to use QMake for non-Qt projects?", "text": "So, I'm planning to write a C++ library and I want it to be cross-platform, and as this library won't deal with UI and I want it to have as few dependencies as possible, I won't be using Qt (actually Qt won't really help me to achieve what I want; all I plan on using is STL and Boost). Now when it comes to building a cross-platform project, I really like QMake as it's extremely easy to use and I have experience with it. I also heard good things about CMake, though I really doubt it's as easy to use as QMake. Anyway, here is my question: Should I stick with the build automation tool that I know, or is QMake just out of context for a non-Qt project? Should I take this as an opportunity to learn CMake? Or is there a better alternative to these two?"} {"_id": "199405", "title": "What information should be in the github README.md?", "text": "What information would you expect to see in the github README? Should everything go in the README? i.e. * Introduction * Installation * Versions * User Guide * Implementation * Testing * Related Resources Or should you just put certain things in the README (Introduction, Installation, Versions) and the other information is best placed in the Github wiki?"} {"_id": "199400", "title": "Cold, neutral attitude to programming languages - sign of a pro developer or not", "text": "A professional developer, Zed Shaw, says this: > Which programming language you learn and use doesn't matter. Do not get > sucked into the religion surrounding programming languages as that will only > blind you to their true purpose of being your tool for doing interesting > things. So, is being neutral to languages a sign of a good/pro developer? We see worshipping and holy wars over which language is better. The good coder does not care about tools; he only thinks about the software he writes (like a writer does not care about the pen he uses - he thinks about the BOOK content) 1. should a programmer develop total indifference to the language as the tool he uses? 2. **is it a sign of a pro developer**, if he does not care what language to use?"} {"_id": "191658", "title": "How to create a Web app that \"interacts\" with email?", "text": "I have a web host that supports cPanel and email addresses. I'm interested in creating a web app that checks for email messages, reads their contents and then does something with them, like interact with a database. For example, Bob with email bob@gmail.com links his email address to his account on my web app. He sends an email to storethis@webapp.com with subject \"Shopping List\" and message \"Apples, bananas, kiwis, milk\", along with an attachment. The web app receives this email, and then parses the email's contents and stores this data into a database. How can I implement this?"} {"_id": "107545", "title": "How a port \"listens\", pull or push?", "text": "When you write code to listen on a port, like 80 for example, what happens under the hood? Is the method the OS uses to listen pull, or push? In other words, does the OS check that port every x milliseconds, for example? I just don't get it. The more I think about it, the more it seems to me that it can't be anything other than pulling. I mean, even if the OS sets a callback function, still something should understand that new information has arrived to call that callback function. 
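To make the question concrete, this is roughly what "listening" looks like from the application's side — a minimal POSIX C sketch of my own, with error handling omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);                    /* the port we listen on */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 16);        /* the kernel now queues incoming connections */
        for (;;) {
            int c = accept(srv, NULL, NULL);  /* blocks until a connection arrives */
            close(c);
        }
    }

There is no user-space loop that checks the port every x milliseconds; the process sleeps inside accept() until the kernel wakes it — which just pushes the question down a level, to how the kernel itself finds out.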
That **something** still should use pull to understand the arrival of the new data. How does a port listen?"} {"_id": "191653", "title": "Why do many programming languages have only 2 data-structures: arrays and hashes?", "text": "Many programming languages have only those 2 structures, and even some languages that have more structures still only provide special syntax for those 2; usually, `[]` and `{}`. Why is this? Is there anything special about those datatypes that is necessary for the completeness of the language?"} {"_id": "191651", "title": "Program Structure for Table Cells Representing Objects", "text": "So I have a program with \"cue\" objects and each has its own table cell. The thing is that the table cells have loading bars on them that represent the progress of the cues. This presents the following question: How should I structure the program? Should the objects representing the Cues store pointers to the table cells, allowing them to update the table cells themselves, or would some other program structure work better? Sorry if this question is too general, but I couldn't really find anything at all about this when I searched around on Google and the website. I'm fairly new to iOS development and my program's structure is already getting chaotic. The suggestion I gave was just my initial instinct, but it's probably wrong..."} {"_id": "107542", "title": "Warning-free Objective C code", "text": "I am currently creating my first iPhone app in Objective C and getting a lot of warnings because I've in many places taken advantage of the fact that Objective C is a dynamically typed language. I get a lot of 'class' may not respond to '-message:' warnings. Should I strive to remove all these warnings? If yes, then what's the benefit of having a dynamic language at your disposal?"} {"_id": "102079", "title": "How can we get sure that we're not implementing micromanagement in the field of software development?", "text": "As I got the feedback of developers about the scrum methodology in this question, on whether it turns active developers into passive developers or not, I encountered the word **micromanagement**. However, Wikipedia doesn't explain this term in the context of specific and tangible examples, and this can result in different interpretations of micromanagement. Can anyone point to some well-known micromanagerial issues in the world of development (of course, with reference)? What practices could be regarded as micromanagerial? How can we get sure that we're not implementing micromanagement in our working environment?"} {"_id": "113830", "title": "Scrum taskboard with QA stage", "text": "I've been reading this article by Mike Cohn about Scrum task boards. One query I have is how this fits into a workflow where there is a QA stage performed by testers who aren't developers. It seems that many tasks could not easily be QA'd in isolation. A practical alternative could be that the \"To Verify\" process should be carried out by another developer in the team, and then to have an additional stage after DONE to QA the whole story (probably after deployment to a stage/test environment). Does this seem right?"} {"_id": "200205", "title": "\"A line comment acts like a newline\"", "text": "I'm reading the Go language specification. The section on comments states: > Line comments start with the character sequence `//` and stop at the end of > the line. A line comment acts like a newline. What is the point of specifying that a line comment acts like a newline? 
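To see where the wording could matter, here is a sketch of mine of how a trailing line comment interacts with Go's automatic semicolon insertion (Go inserts a semicolon at a newline when the line ends in a token that could end a statement):

    package main

    func main() {
        x := 1 // the comment swallows the rest of the source line; treating it
               // "like a newline" still terminates the statement, i.e. an
               // implicit semicolon is inserted after the 1
        _ = x
    }
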
Couldn't line comments simply act like empty strings? Lines (except the last) end in a newline anyway, so any line (except the last) will act like two consecutive newlines. If the last line has a line comment, then it can also safely act like an empty string."} {"_id": "113839", "title": "Explain to a non-technical manager that the tool he chose isn't apt", "text": "My non-technical manager just paid for a license to use a tool which is designed to pick up mentions of your brand name on blogs, \"social media\", comments, etc. and gauge the sentiment (positive, neutral, negative) of the post. What he wants me to do is use it for a task which can only be achieved with a certain level of natural language processing, and hence it shows up unrelated results for whatever query you try. I want to be able to explain to him that this tool is designed for task X and what we are trying to do is task Y. Y cannot be modelled as X. And all the success of the tool he cites as proof is proof of X, which is not Y."} {"_id": "225620", "title": "What is the meaning of bootstrapping in software development?", "text": "In some articles and books that I read, I sometimes see the term 'Bootstrapping'. For example I see this sentence \"Bootstrapping X.JS\" in the AngularJS documentation: http://docs.angularjs.org/tutorial/step_00 > Bootstrapping AngularJS apps automatically using the `ngApp` directive is > very easy and suitable for most cases. In advanced cases, such as when using > script loaders, you can use imperative / manual way to bootstrap the app... What is the meaning of bootstrapping in software development?"} {"_id": "9043", "title": "Server and MySQL safety for Java / JDBC applications?", "text": "I have an external Java application which uses JDBC to reach a MySQL database. This will be used by many people and is going to store all the people's data on the same server / MySQL. As I bet people will be able to crack the .jar and see the source, I expect them to be able to cheat the program by editing it and making it get information from other users even when they are not allowed to reach said information. Is there any standard way to protect a server so people should only be able to reach the information their account is connected to, even if said people are able to change the source? Cheers, Skarion"} {"_id": "37278", "title": "Should I use native android junit framework?", "text": "Can anybody share a reason (or list of reasons) for not using the native Android JUnit framework but a standalone one like Robolectric or something like that? I'm just trying to understand what framework to learn."} {"_id": "98834", "title": "Address validation service for a public website", "text": "Given the following scenario: * There is a public website (with about 50-100 visitors a day) * Visitors register for the services offered by the company who owns the website * Visitors tend to give wrong information regarding address and phone number, which slows down the process of their registration, and therefore their transactions and general happiness about the services of the company * Generally, the address validation adds value to the website I am looking for a service for a reasonable price, that has: * Up to date postcode and address database * Easy to integrate API * Address validation check * All this with a reasonable price I am specifically looking in the UK. What I have found so far are companies offering these services at a ridiculously high price. 
We are also considering buying the PAF database from Royal Mail and implementing such a service ourselves. Any help or suggestion is very much appreciated, and thanks very much in advance!"} {"_id": "24079", "title": "What are the common mistakes when you are a good coder in C and PHP and you start coding in Java ? What are good tips when switching?", "text": "What aspects of Java are the most difficult to learn when coming from such a background? What common mistakes do people make? What are the top timesaving and productivity increasing tricks? If you had a room of C/PHP coders who were about to start development using Java, what advice would you give? This is my list of topics so far (in no particular order): * Use Joda-Time instead of the standard library, and also, less importantly, the Guava library. * Arrays are zero indexed * I'd also highlight the pass-by-value/reference aspects of Java, and the fact that String s1 = new String(\"test\"); String s2 = new String(\"test\"); if(s1 == s2) // will be false if(s1.equals(s2)) // will be true * Introduce the concept of design patterns and give a quick overview. * Introduce Spring (it will be used) and the concept of dependency injection Is there anything obvious I am missing?"} {"_id": "98833", "title": "What are the steps in beginning a large project, when all I have is a big idea?", "text": "I am a computer engineering student. I've been thinking about how I can handle a big project. What should be my first step to reach my goal in a more efficient and effective way? When I come up with a project, I don't know how I should start working on it. Many times, I just ignore it. However, I don't want to ignore my project ideas anymore. Now, I am asking all of you: can anyone share his/her experiences? How should I start a project when all I have is an idea?"} {"_id": "223410", "title": "Payment Gateways and RESTful API", "text": "I have a RESTful API that offers eCommerce functionality. One area I'm struggling to decide on the correct implementation is how to process payments. Let's say I have the following URI GET .../checkout/{id}/payment. This resource provides details to the client application of what details need to be submitted to make a payment. The specifics of the information in the resource depend on the implementations below. **Implementation 1** The resource contains basic payment fields to be filled in, like card number, billing address etc. The data is sent to the API using POST .../checkout/{id}/payment. The API then submits a request to the payment gateway (e.g. PayPal), waits for the response, and then sends an appropriate response back to the client application. **Implementation 2** The resource contains details of how to submit a request directly to the payment gateway (e.g. PayPal). The client submits the payment to the gateway, waits for a response, and then sends the payment reference (normally always provided by a payment gateway) back to the API using POST .../checkout/{id}/payment. The problem I am having is that both have pros and cons. Implementation 1, I would say, is the more fail-safe method, as everything is contained in one request to the API. The API can then update backend tables once the payment has been processed, and return a success or failure status to the client. Since the API is the one sending the request to the gateway, it knows the response it receives is genuine. Implementation 2 frees the API from having to process the payment, which is great, but requires the client to make two requests: one to the payment gateway and one to the API with the reference. 
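To pin down what Implementation 1 looks like on the wire, here is a sketch of mine (the field names and values are made up, not taken from any real gateway):

    POST /checkout/42/payment
    { "cardToken": "tok_abc123", "billingAddress": { ... } }

    --- the API calls the gateway server-side, waits, then replies ---

    201 Created
    { "status": "paid", "gatewayReference": "REF-9913" }
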
A further problem with Implementation 2: how can I validate the payment reference on the API without calling the gateway? A dishonest client could send any reference back to the API, which then may be incorrectly marked as paid. Just wanted to throw my ideas out there, but if anyone has a really good solution for implementing Payment Gateways and RESTful API web services, that would be great. **UPDATE** Just thought of an **Implementation 3**. I could actually create a separate smaller API to handle the payments. It would work in a similar way to Implementation 1, except that the client application doesn't post to api.site.com/.... but posts to gateway.site.com/... This gives me a degree of security, as I control the code on gateway.site.com, and ensures that api.site.com isn't being bogged down by waiting for the payment provider to respond. The gateway API then essentially becomes a client of the main API and posts the payment success details to it. Any thoughts on this?"} {"_id": "37271", "title": "What to watch out for when writing code at an Interview?", "text": "I have read that at a lot of companies you have to write code at an interview. On the one hand, I see that it makes sense to ask for a work sample. On the other hand: What kind of code do you expect to be written in 5 minutes? And what if they tell me \"Write an algorithm that does this and that\" but I cannot think of a smart solution, or can only write code that doesn't actually work? I am particularly interested in that question because I do not have that much commercial programming experience: 2 years part-time, one year full-time. (But I have been interested in programming languages for nearly 15 years, though usually I was more focused on playing with the language rather than writing large applications...) And actually I consider my debugging and problem solving skills much better than my coding skills. I sometimes see myself not writing the most beautiful code when looking back, but on the other hand I often come up with solutions for hard problems. And I think I am very good at optimizing, fixing, restructuring existing code, but I have problems with writing new applications from scratch. The software design sucks... ;-) Therefore I don't feel comfortable when thinking about this code writing situation at an interview... So what do the interviewers expect? What kind of information about my code writing are they interested in?"} {"_id": "24077", "title": "Are short identifiers bad?", "text": "Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be of consideration when it comes to naming identifiers? Just to try to keep the quality of the answers up, please note that there is some research on the subject already! **Edit** Curious that everyone either doesn't think length is relevant or tends to prefer larger identifiers, when both links I provided indicate large identifiers are harmful! **Broken Link** The link below pointed to research on the subject, but it's now broken; I don't seem to have a copy of the paper with me, and I don't recall what it was. I'm leaving it here in case someone else figures it out. 
* http://evergreen.loyola.edu/chm/www/Papers/SCP2009.pdf"} {"_id": "157088", "title": "Best practices: Ajax and server side scripting with stored procedures", "text": "I need to rebuild an old huge website and will probably port everything to `ASP.NET` and `jQuery`, and I would like to ask for some suggestions and tips. Actually the website uses: * `Ajax (client side with prototype.js)` * `ASP (vb script server side)` * `SQL Server 2005` * `IIS 7 as web server` This website uses hundreds of stored procedures, and the requests are made by an `ajax call` and **only 1 ASP page** that contains a huge `select case`. Here is a short example: JAVASCRIPT + PROTOTYPE: var data = { action: 'NEWS', callback: 'doNews', param1: $('text_example').value, ......: ..........}; AjaxGet(data); // perform a call using another function + prototype SERVER SIDE ASP: <% ...... select case request(\"Action\") case \"NEWS\" With cmmDB .ActiveConnection = Conn .CommandText = \"sp_NEWS_TO_CALL_for_example\" .CommandType = adCmdStoredProc Set par0DB = .CreateParameter(\"Param1\", adVarchar, adParamInput,6) Set par1DB = .CreateParameter(\".....\", adInteger, adParamInput) ' ........ ' can be more parameters .Parameters.Append par0DB .Parameters.Append par1DB par0DB.Value = request(\"Param1\") par1DB.Value = request(\".....\") set rs=cmmDB.execute RecodsetToJSON rs, jsa ' create JSON response using a sub End With .... %> So as you can see, I have an `ASP` page that has a **lot of CASEs**, and this page **answers all the ajax requests** in the site. My questions are: 1. Instead of having many `CASES`, is it possible to create dynamic `vb` code that parses the `ajax` request and dynamically creates the call to the desired SP (also implementing the parameters passed by JS)? 2. What is the best approach to handle situations like this, using the advantages of `.Net + prototype or jQuery`? 3. How do the big sites handle situations like this? Do they do it by creating 1 page per request? Thanks in advance for suggestions, directions and tips."} {"_id": "245315", "title": "Alternatives to foreach iterators involving ref and out", "text": "I am trying to make a flexible particle system for my XNA game, and I've got these interfaces: public interface IParticle : IUpdateable { bool Alive { get; } float Percent { get; } } public interface IParticleEffect<T> where T : IParticle { void Apply(GameTime time, ref T particle); } public interface IParticleEmitter<T> : IUpdateable where T : IParticle { } public interface IParticleRenderer<T> : IDrawable where T : IParticle { } The idea behind this system is that the client code only needs to derive from `IParticle`, then make a compatible subclass from `IParticleEmitter<T>` and `IParticleRenderer<T>`, and everything else just automagically works behind the scenes. (I'm actually in the middle of writing everything at the moment, but the latter two would have an abstract base class available.) Anyways, some particle systems like to use mutable structs for optimization purposes, and that's perfectly reasonable. My system only provides the skeleton, and if the client decides that \"Hey, structs are the way to go!\", then my system should support whatever the client code throws at it. This is why my `IParticleEffect<T>.Apply()` method takes a particle by **ref** -- it's cheaper to pass a struct by reference than it is to copy it. Unfortunately, it breaks when collections are involved, because the foreach iterator doesn't play nicely with objects passed by **ref** or **out**. Eric Lippert explains why here. 
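In short, this is the pattern the compiler rejects, and the array-based escape hatch (a sketch of mine; `particles`, `effect` and `gameTime` stand in for the real objects):

    // Does not compile: a foreach iteration variable is read-only,
    // so it cannot be passed by ref (error CS1657).
    foreach (var particle in particles)
    {
        effect.Apply(gameTime, ref particle);
    }

    // Compiles: elements of an array are valid ref arguments,
    // so an indexed loop can mutate the structs in place.
    for (int i = 0; i < particles.Length; i++)
    {
        effect.Apply(gameTime, ref particles[i]);
    }
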
So, now I have a design decision to make: 1. Completely disregard structs, and change my constraint to `where T : class, IParticle`. This potentially hurts future optimizations, but makes it much easier to work with collections. 2. Change anything that uses `ICollection<T>` or `IEnumerable<T>` to `IList<T>` so I can manually poll it via an indexer. This makes it potentially more powerful, but at the cost of using a deeper interface (list) to store my objects. 3. Something else? I hope this question isn't too \"it depends\", but I am curious as to what strategies I can apply here to make it work the way I want. * * * **EDIT**: I realized that I could also include a local variable such as: foreach (var particle in SomeParticleCollection) { var p = particle; SomeEffect.Apply(ref p); } However, `p` would still have the net effect of copying it, which is also not ideal."} {"_id": "68183", "title": "When is it appropriate to use a bitwise operator in a conditional expression?", "text": "First, some background: I am an IT teacher-in-training and I'm trying to introduce the boolean operators of Java to my 10th grade class. My teacher-mentor looked over a worksheet I prepared and commented that I could let them use just a single & or | to denote the operators, because they \"do the same thing\". I am aware of the difference between & and &&. & is a bitwise operator intended for use between integers, to perform \"bit-twiddling\". && is a conditional operator intended for use between boolean values. To prove the point that these operators do not always \"do the same thing\", I set out to find an example where using the bitwise operator between boolean values would lead to an error. I found this example: boolean bitwise; boolean conditional; int i=10, j=12; bitwise = (i<j) | (i=3) > 5; // value of i after oper: 3 System.out.println(bitwise+ \" \"+ i); i=10; conditional = (i<j) || (i=3) > 5; // value of i after oper: 10 System.out.println(conditional+ \" \"+ i); i=10; bitwise = (i>j) & (i=3) > 5; // value of i after oper: 3 System.out.println(bitwise+ \" \"+ i); i=10; conditional = (i>j) && (i=3) > 5; // value of i after oper: 10 System.out.println(conditional+ \" \"+ i); This example shows that if a value has to be changed by the second half of the expression, this would lead to a difference between the results, since bitwise is an eager operator, while the conditional behaves as a short circuit (it does not evaluate the second half if the first half is false in the case of && and true in the case of ||). I have an issue with this example. Why would you want to change a value at the same time as you do a comparison on it? It doesn't seem like a robust way to code. I've always been averse to doing multiple operations in a single line in my production code. It seems like something a \"coding cowboy\" with no conscience as to the maintainability of his code would do. I know that in some domains code needs to be as compact as possible, but surely this is a poor practice in general? I can explain my choice of encouraging the use of && and || over & and | because this is an accepted coding convention in software engineering. But could someone please give me a better, even real-world, example of using a bitwise operator in a conditional expression?"} {"_id": "211295", "title": "Apps being portable", "text": "I haven't found much on this, but what I did find doesn't explain it well enough (at least for me). What makes the application I make portable? Or what defines a portable application? 
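(One concrete way to frame it, sketched below with made-up names: a "portable" build keeps its state next to the executable, so the whole folder can travel on a flash drive, while an installed app scatters its state into per-user OS locations:

    using System;
    using System.IO;

    static class SettingsLocation
    {
        // portable: settings live beside the .exe and move with the folder
        public static readonly string Portable =
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "settings.ini");

        // installed: settings live in the user's profile and do NOT
        // follow the executable onto a flash drive
        public static readonly string Installed =
            Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "MyApp", "settings.ini");
    }

)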
Is there a specific difference that makes an app portable or not? I always thought if I just made an executable file and some images I could just copy that folder and take it to another computer on a flash drive. But, if you can do that, what is the purpose of having \"portable\" apps? I know I have asked a range of questions, but I suspect an answer that addresses one of the questions will cover them all."} {"_id": "245314", "title": "Random forest ML algorithm suitable for use on cluster based HPC?", "text": "I need help in identifying a better algorithm. I have developed a script using Python's scipy package to analyse a rather large model that I wish to solve. The model contains over 12GB of data including over 500 parameters. The problem is that when I run small simulations of about 0.5GB of data with 20 parameters, it can take my computer a decent amount of time if I allow a reasonable number of iterations through the random forest classifier. Currently my script is only using one core, so I guess that making the script multi-threaded would be the first step. But I do not believe this will be enough given how complex the model is. I am willing to explore the use of a cluster based HPC solution, but I am not sure how to go about this. Are there better algorithms I can use, or is there a cluster based algorithm that would be more appropriate?"} {"_id": "211292", "title": "How to solve \"train wreck\" properties problem that violates Law Of Demeter?", "text": "I've been reading about the Law of Demeter and I would like to know how to solve this traversing-model-properties problem that I see a lot in Objective-C. I know that there is a similar question, but in this case I'm not calling a method from the last property that does some calculations; instead, I'm just setting values (OK, I know that getters are methods, but my intention here is just to get the value, not to change the state of some object). For example: self.priceLabel.text = self.media.ad.price.value; Should I change that to something like: self.priceLabel.text = [self.media adPriceValue]; and inside the Media.m - (NSString *)adPriceValue { return [self.ad priceValue]; } and inside the Ad.m - (NSString *)priceValue { return [self.price value]; } Is that a good solution? Or am I creating unnecessary methods?"} {"_id": "68184", "title": "Dealing with bad job description and interviewing applicants", "text": "My employer has been looking for a new web developer as well as a DBA for a while now with little success. I just saw the job description, and now I can understand why we are getting no interest - they have lumped together three jobs into 1 position: web developer, DBA, and marketing SEO/Analytics expert, all under the title of \"Webmaster\". It's hard enough to find someone who can do any of those individual things well, let alone somebody who can do it all (or wants to be called \"Webmaster\" these days). Part of their motivation is, no doubt, saving money by hiring 1 person to do three jobs and underpaying that person. But now I have to deal with interviewing people for this position (that is, if we ever get an applicant), and trying to figure out what this person is _really_ going to be doing once hired. Have you had experience with this sort of thing? Either as an applicant or as an interviewer. And how did you deal with it? 
Assuming that rewriting the job description to something more realistic isn't an option, what would be the best way of handling this (finding the right person, interviewing them effectively, etc.)?"} {"_id": "186053", "title": "How to better performance", "text": "In one of my interviews I was asked a vague question; I'm still not sure of the answer. We have a client and a server that are placed far apart. The network that connects them has high latency, which is dominating the performance. How can we improve the performance? Note that we can't change the network's topology. I was thinking of caching, breaking the request into multiple smaller requests, and opening multiple connections with the server. Any ideas? Please note that the question description is vague and I wasn't supplied with more information about the situation. **Clarified question:** How should the client-server communication be designed in order to get the best performance on a network that has big latency?"} {"_id": "146990", "title": "Any Course/Lecture videos on Design Patterns", "text": "I am planning to read some design patterns and I took the book on \"design patterns in C++\" by the Gang of Four. However, I am not really someone who reads books; I prefer reading slides/watching course lectures and then applying and reading the book. I cannot find any course lecture videos on the net (yeah, it's not that important a course as algo/OS). Anyone know of any course website, preferably on C++?"} {"_id": "137611", "title": "Add a unit test for each new bug", "text": "In my job, all developers that resolve a bug have to add a new unit test that warns about this type of bug (in case it occurs again). If a unit test is not possible (for example, a webpage design issue), then the QA department has to create a test case to manually check it. The idea behind this is that if a defect has not been detected before the product release, it is because there isn't an appropriate unit test to detect it. So the developer has to add it. The question is: is this common in any software development methodology? Does this technique have a name? I would like to learn more about it, but I need some information to start with."} {"_id": "71859", "title": "Developing Android apps for someone else", "text": "We have developed several apps and published them on Android Market. We are now writing an app that another company will brand and sell through their own publisher account. The other company has no experience with Android Market or with Android development. I'd appreciate any insights from others who have faced similar situations. I'm specifically concerned about the following areas: 1. **Signing the app** The alternatives we see are: sign with our usual key; create a signing key pair specific to the other company and sign with that; or help the other company install a development system, generate a key pair, and do the signing themselves. The latter would require us sending them the project sources, which presents its own problems. Other than our concern about sending the source, does the choice matter in any way? 2. **Licensing** Since the license check will be done against their account, the code will need to embed their public key for decrypting the license response. Is there any reason they should be concerned about sharing that key with us? Are there any alternatives to them sharing the key with us? 3. **Publishing** The other company is responsible for all marketing and sales; we are responsible for the app development. 
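(For the second signing alternative above — a key pair created specifically for the other company — the standard JDK keytool invocation would be roughly the following; the keystore name, alias and validity are placeholders:

    keytool -genkeypair -v -keystore partner-release.keystore \
        -alias partnerapp -keyalg RSA -keysize 2048 -validity 10000

)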
From what we can tell, Android Market is not set up to allow a clean separation of these roles. (It assumes that the developer will also be the publisher.) This makes it difficult to work out a division of responsibilities for the publication process. Our initial thought was to deliver the .apk file to them and let them handle it from there. The licensing issue was the first indication that we were being naive about this. The publishing process itself is rather technical, and we see two alternatives: walk them through all the steps, or ask them to give us access to their publisher account and do it ourselves. What do others do?"} {"_id": "108742", "title": "Source code and project documentation requirements for legal agreement", "text": "# Question What kind of documentation and other artifacts would you expect to get in case of taking over an existing Java webapp (probably it uses the JBoss Seam framework) made _in-house_? I would expect to get: * Source code in a form ready to open in a specified IDE. * All external elements (i.e. DB schema as SQL scripts, installation scripts). * Configuration files (both for devel, testing and production environments). * All external libraries needed by the project. * All external tools used in the project, or at least their description (i.e. make program, version control software). * Detailed info about: * How to compile the project (IDE, libraries, their versions, compilers, OS), * How to set up the development environment (database version, OS, DB schema), * How to install and run the application -- server specification (OS, database, versions, minimal hardware, configuration of both app and DB), Probably it would be also nice to have: * Archive of the whole version control system (to make it possible to check changes from the past). * UML diagrams or some other form of _big picture_ of the application design. * DB schema. What did I miss? # Some background details A friend works as a lawyer for a company which needs legal agreements with people who develop applications for them. Just in case something goes wrong. In general -- the company wants to have the source code deposited somewhere. But source code is not enough to maintain or even compile and run a webapp, so they also need some documentation which helps maintain the app. I've never taken over someone else's project in a similar case (no access to previous developers), and I am not a Java developer, so I expect there are a lot of things to miss. ## More details added after a few answers The bosses of the company (call it **A**, a small corporation) which ordered and uses the app, and of the company (call it **B**, a few people) which writes the app, are friends. Development is _in-house_. No formal spec, no written specification. All features are discussed over mail or in meetings. Everything, including hosting the app, is managed by **B**. The friend (who is also one of the heavy users of the app -- let's say she is the _product owner_) doesn't trust the developers from **B** at all. The friend also doesn't believe in **B**'s competencies (believe me -- she should not). So to protect **A**'s business, the friend wants a written legal agreement with **B** in case **B** would go away with **A**'s data and app... As MSalters wrote -- don't laugh -- it happens :(. **A** has already made **B** make and provide backups.
Eventually the following question was brought up: **How does one handle merge conflicts that occurred because someone refactored a part of the code while someone else was working on a feature for the same piece of code?** Basically, I have no idea how to deal with this in an efficient manner. Are there any best practices that one should follow regarding this? Is there any difference in how one should handle this for a system with tons of legacy code?"} {"_id": "108740", "title": "Can we replace XML with JSON entirely?", "text": "I'm sure lots of developers are familiar with XML and JSON, and they've used both of them. Thus no point in explaining what they are, and what is their purpose, even in brief. If we try to map their concepts, we can say (correct me if I'm wrong): 1. XML tags are equivalent to JSON `{}` 2. XML attributes are equivalent to JSON properties 3. XML tag collection is equivalent to JSON `[]` The only thing I can think of, which doesn't exist in JSON, is XML Namespaces. The question is, considering this mapping, and considering that JSON is much lighter in this mapping, can we see a world in the future (or at least theoretically think of a world) without XML, but with JSON doing everything XML does? Can we use JSON everywhere XML is used? PS: Please note that I've seen this question. It's something entirely different from what I'm asking here. Thus please don't mention **duplicate**."} {"_id": "71851", "title": "Dependency Injection and method signatures", "text": "I've been using YADIF (yet another dependency injection framework) in a PHP/Zend app I'm working on to handle dependencies. This has achieved some notable benefits in terms of testing and decoupling classes. However, one thing that strikes me is that despite the sleight of hand performed when using this technique, the method names impart a degree of coupling. Probably not the best example, but these methods are distinct from ... say the PEAR Mailer. The method names themselves are a (subtle) form of coupling: //example public function __construct($dic){ $this->dic = $dic; } public function example(){ //this line in itself indicates the YADIF origin of the DIC $Mail = $this->dic->getComponent('mail'); $Mail->setBodyText($body); $Mail->setFrom($from); $Mail->setSubject($subject); } I could write a series of proxies/wrappers to hide these methods and thus promote decoupling from YADIF, but this seems a bit excessive. You have to balance purity with pragmatism... How far would you go to hide the dependencies in your classes?"} {"_id": "197718", "title": "What \"jobs\" benefit by having someone with a programming background perform them?", "text": "What non-programming jobs should someone look for who doesn't have the passion for programming? There are a percentage of people who make it through a computer science type of degree but do not have the passion for programming. I have worked with many people who have a programming background but became a project manager or tester, and it has made my job as a programmer easier... and our products better. This is not a failure of the system or person... they are just finding where they fit. So, I wonder what non-programming jobs benefit from having someone in them that is a programmer, and why does it make that person a stronger or weaker candidate for that job? For instance: Programmers make good project managers because they may be able to express requirements in a detailed fashion that makes sense to the programmers that will work on those requirements. 
But they may be too accustomed to talking about things in a technical way that the customers may not understand, and they may have a tendency to try to communicate a design for a solution instead of presenting the problem in a raw fashion."} {"_id": "68233", "title": "Namespaces just seem to be making things more complicated. Am I missing something?", "text": "Now that I am using namespaces in my PHP files which match the file's path, I have to append a namespace to pretty much every class instantiation. The namespaces definitely help my autoloader find files, but the code is becoming harder to read/write. Aren't namespaces supposed to simplify my code? Why is it making things more complicated? Am I using them wrong? Not sure I see how using a namespace is much better than having super long class names... example: file: /root_path/init.php file: /root_path/sub_path/bar.php "} {"_id": "157337", "title": "Are Design Patterns SuperSet of OOP or SubSet?", "text": "Initially I started learning OOP and later started grasping the concepts of Design Patterns. I wonder whether it is Design Patterns that are the superset of OOP, or whether it is OOP itself."} {"_id": "104892", "title": "Which language and GUI toolkit would you use for a prototype program?", "text": "Suppose I have an idea and I have to put it into code quickly. And then I am presenting it to someone who is not so computer savvy. Which language should I use for quick and dirty coding? And which GUI toolkit should I use so that the computer semi-literate find it easy to use (read shiny eye-candy). It is a desktop application."} {"_id": "197710", "title": "How to use GPL v3 with Apache License 2.0?", "text": "Based on this page, I can see that only Apache License 2.0 is compatible with GPL v3. Now my question is, in layman's terms, how do I make those two work together? My current situation is that I am developing an application that is made with a GPL v3 licensed programming language. I want to put some form of security on the application and figured I can achieve that via a web service. However, I don't want to expose my web service's code as GPL v3, since it is part of a larger system that I am working on which will be closed source. Is there any way to use the Apache License 2.0 in this scenario? EDIT: For clarity's sake, the programming language I am going to use is LiveCode."} {"_id": "157339", "title": "Unit testing C++: What to test?", "text": "**TL;DR** Writing good, useful tests is hard, and has a high cost in C++. Can you experienced developers share your rationale on what and when to test? **Long story** I used to do test-driven development, my whole team in fact, but it didn't work well for us. We have many tests, but they never seem to cover the cases where we have actual bugs and regressions - which usually occur when units are interacting, not from their isolated behaviour. This is often so hard to test on the unit level that we stopped doing TDD (except for components where it really speeds up development), and instead invested more time increasing the integration test coverage. While the small unit tests never caught any real bugs and were basically just maintenance overhead, the integration tests have really been worth the effort. Now I've inherited a new project, and am wondering how to go about testing it. It's a native C++/OpenGL application, so integration tests are not really an option. 
But unit testing in C++ is a bit harder than in Java (you have to explicitly make stuff `virtual`), and the program isn't heavily object oriented, so I can't mock/stub some stuff away. I don't want to rip apart and OO-ize the whole thing just to write some tests for the sake of writing tests. So I'm asking you: What is it I should write tests for? e.g.: * Functions/Classes that I expect to change frequently? * Functions/Classes that are more difficult to test manually? * Functions/Classes that are easy to test already? I began to investigate some respectable C++ code bases to see how they go about testing. Right now I'm looking into the Chromium source code, but I'm finding it hard to extract their testing rationale from the code. If anyone has a good example or post on how popular C++ users (guys from the committee, book authors, Google, Facebook, Microsoft, ...) approach this, that'd be extra helpful. **Update** I have searched my way around this site and the web since writing this. Found some good stuff: * When is it appropriate to not unit test? * http://stackoverflow.com/questions/109432/what-not-to-test-when-it-comes-to-unit-testing * http://junit.sourceforge.net/doc/faq/faq.htm#best Sadly, all of these are rather Java/C# centric. Writing lots of tests in Java/C# is not a big problem, so the benefit usually outweighs the costs. But as I wrote above, it's more difficult in C++. Especially if your code base is not-so-OO, you have to severely mess things up to get good unit test coverage. For instance: The application I inherited has a `Graphics` namespace that is a thin layer above OpenGL. In order to test any of the entities - which all use its functions directly - I'd have to turn this into an interface and a class and inject it into all the entities. That's just one example. So when answering this question, please keep in mind that I have to make a rather big investment to write tests."} {"_id": "30135", "title": "Does syntax really matter in a programming language?", "text": "One of my professors says \"the syntax is the UI of a programming language\". Languages like Ruby have great readability, and Ruby's popularity is growing, but we see a lot of programmers productive with C\\C++, so as programmers does it really matter that the syntax should be acceptable? I would love to know your opinion on that. Disclaimer: I'm not trying to start an argument. I thought this is a good topic of discussion. Update: This turns out to be a good topic. I'm glad you are all participating in it."} {"_id": "211742", "title": "What is the exact semantic of reduce in clojure?", "text": "The Clojure docs regarding `(reduce f val coll)` state: > If val is supplied, returns the result of applying f to val and the first > item in coll, then applying f to that result and the 2nd item, etc. So I tried: (reduce str [\"s1\" \"s2\"] \"s3\") Which leads to: \"[\\\"s1\\\" \\\"s2\\\"]s3\" However, when I try (apply str [\"s1\" \"s2\"]) I get: \"s1s2\" So why does the vector get converted into its string representation?"} {"_id": "211745", "title": "Why do some websites show 0 bytes in Chrome's developer tools", "text": "I am doing page speed optimization for my website and studying how other websites do it. I noticed that some websites such as Facebook or Ringgitplus show 0 bytes for some of their resources in Chrome's developer tools, Network tab, while the real content size is several kilobytes. 
![The screenshot of Network tab showing 0 bytes for some resources](http://i.stack.imgur.com/asvBh.png) I read some articles that say that _size_ is the amount being fetched and _content_ is the actual size of the response. So when size is 0 bytes, it means it was served from cache. But the same thing happened when I opened the page using Incognito or after clearing all my cache. How is this possible, and how can I achieve the same thing for my websites?"} {"_id": "130426", "title": "Need help with deciding elements for icon creating application", "text": "I'm trying to practice programming by creating a simple application which, I think, I can manage to do in .Net C# in VisualStudio 2010. I'm working on a simple application which will let me create small pictures (icons) in sizes such as 8x8, 16x16, 32x32 (etc.) pixels. I will be creating a grid representing each size by scaling to the amount of pixels. Each pixel will be represented as a small square which can be clicked to change color, etc. So here's my question regarding what elements such an app should have: * What VS tool would let me create a grid for my project? Keeping in mind that each small square should be generated automatically using the X/Y size provided by the user, and boxes should be clickable. Also, I probably should be able to click one pixel and drag the mouse to select other pixels. * What simple functions should such a small app have? Ex. choosing colors. * What minimal functions should I have? I hope this is the place for such a question. I'm trying to get the general idea of how to make such a small app work and what a user might expect when opening a small icon-creating application."} {"_id": "78636", "title": "What's the best way to cache a growing database table for html generation?", "text": "I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the query will grow in size by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try and implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets a new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cache result. It seems like a poor design. I could build the cache pages backwards, then for each page requested get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad? Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely acceptable way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity in invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? 
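For what it's worth, the "ideal" invalidation mechanism I have in mind would look roughly like this — a sketch, assuming the pecl Memcached client; query_latest() is a placeholder for the real query:

    // Cache pages of "latest rows" under a per-key version number.
    // One increment on insert invalidates every cached page for that key.
    function latest_rows($db, Memcached $cache, $key, $page, $per = 50) {
        $ver = $cache->get("ver:$key");
        if ($ver === false) { $cache->set("ver:$key", 1); $ver = 1; }
        $ck = "rows:$key:v$ver:p$page";
        $rows = $cache->get($ck);
        if ($rows === false) {
            $rows = query_latest($db, $key, $page, $per); // the only DB hit
            $cache->set($ck, $rows, 300);                 // safety TTL
        }
        return $rows;
    }
    // in the insert path: $cache->increment("ver:$key");

Stale pages are never touched again; they simply fall out of memcached.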
Is the best advice really to just allow a web site query to go through all the layers and hit the database on every request?"} {"_id": "85760", "title": "What steps should be taken to ensure that an open source database gets ready for production?", "text": "I am considering using GridSQL in a production environment. However, I do have some indications that it is not ready. One is that it got excluded from the offering of EnterpriseDB a while ago, and the forums seem to report a few wrong results and relatively severe bugs. The alternatives to GridSQL, however, cost around $100,000 to buy, so I was thinking of utilizing some of this money to ensure that GridSQL gets ready for production. At the same time, I could risk spending $50,000 and months of work on the development of GridSQL, just to discover that the design was flawed and that a complete rewrite is needed. Then I would have to buy the commercial alternatives to GridSQL, and the existence of my startup would be at risk. **Question** What steps would you take to ensure that there is as little risk as possible that the worst case scenario described above would happen? It is unrealistic that I could do much testing or code review/coding myself (I am also not the best developer), so please describe where to find the guys that would need to do the work."} {"_id": "3131", "title": "Hiring \"off-platform\" - what are the indicators for successful candidates?", "text": "When looking for candidates you often want to have people with direct experience with the technologies and platforms you are using today. However, that may not always be a possibility. The particular requirements may involve niche tech, outdated tech, or your team has already drained the local area of everyone with that particular tech (not everyone works in Silicon Valley; some of us work and live in some smaller metro area for reasons unrelated to the profession). So often that can leave you with hiring people who are \"off-platform\", like Dot-Net people for a Java project, or Cobol programmers for an iPhone app (or vice versa). Or even just a language switch, from Visual Basic to Scala or JavaScript to C. My question is what qualities, attributes, and experience would you look for as an interviewer to indicate that the candidate can successfully switch platforms or languages, and what are the indicators as to the level of success and timeframe? Also, decent \"war stories\" related to this would get a solid upvote from me."} {"_id": "81718", "title": "How should I approach developing a Java based client-server application?", "text": "I have been asked to develop a client-server application (requires a database) for a company. I am very handy with Java so I would like to go with it. It's up to me how I develop the application. It may be a JSP web application or a Java Swing GUI based application. I have the following queries/doubts. So if I go on to develop a web application, I have to teach the company personnel 1. How to install Tomcat 2. How to load the web application into Tomcat 3. How to start the server to start the application. If I go on to develop a Java Swing GUI based application, 1. It should start when the computer starts up, i.e. it should be auto-added into the service start-up of the OS on installation 2. Have an application shortcut in quick launch and the tray bar on installation. On the database part: I would like to have an MS Access-like DB, but a free one. This is because, 1. No need to worry about installation of a DB engine (like we have to take care of in case of MySQL). 2. 
The DB would be maintained on some drive (like the .mdb file in the case of MS Access), unharmed even if their OS crashes. Please guide me on how I should approach developing the software."} {"_id": "106792", "title": "What are the (or some) tenets of software security?", "text": "When developing security, what tenets should you keep in mind and follow in order to practice due diligence?"} {"_id": "106791", "title": "Writing a game engine using javascript", "text": "..by this I mean a logic handler for a chess game. Basically validating a move and checking if somebody has won. Now ignore the complexity of the game (if you can..) I'd like some sort of pseudo code on how it would look using callbacks in JavaScript. For argument's sake, imagine once a move is completed in the browser it sends a JSON object of the piece that has been moved and to which space it has been moved. On the server we have access to a JSON object which contains data for the complete board. With this information I expect to pass it in to my engine and determine from the output if the move is valid.. and if so, is it a winning move? I'm not looking for any specific logic on how to determine a move in chess -> just a kind of pseudo example/template of the flow of the game using the above paragraph as a guide to what must be done. The reason for this is I would like to understand how callbacks can be used to produce something of this nature as opposed to synchronous code."} {"_id": "225145", "title": "Best practice or design patterns for retrieval of data for reporting and dashboards in a domain-rich application", "text": "First, I want to say this seems to be a neglected question/area, so if this question needs improvement, help me make this a great question that can benefit others! In my experience, there are two sides of an application - the \"task\" side (where the users interact with the application and its objects, where the domain model lives) and the reporting side, where users get data based on what happens on the task side. On the task side, it's clear that an application with a rich domain model should have business logic in the domain model and the database should be used mostly for persistence. Separation of concerns, every book is written about it, we know what to do, awesome. What about the reporting side? Are data warehouses acceptable, or are they bad design because they incorporate business logic in the database and the very data itself? In order to aggregate the data from the database into data warehouse data, you must have applied business logic and rules to the data, and that logic and rules didn't come from your domain model, it came from your data aggregating processes. Is that wrong? I work on large financial and project management applications where the business logic is extensive. When reporting on this data, I will often have a LOT of aggregations to do to pull the information required for the report/dashboard, and the aggregations have a lot of business logic in them. For performance's sake, I have been doing it with highly aggregated tables and stored procedures. As an example, let's say a report/dashboard is needed to show a list of active projects (imagine 10,000 projects). Each project will need a set of metrics shown with it, for example: 1. total budget 2. effort to date 3. burn rate 4. budget exhaustion date at current burn rate 5. etc. Each of these involves a lot of business logic. And I'm not just talking about multiplying numbers or some simple logic. 
I'm talking about how, in order to get the budget, you have to apply a rate sheet with 500 different rates, one for each employee's time (on some projects; others have a multiplier), applying expenses and any appropriate markup, etc. The logic is extensive. It took a lot of aggregating and query tuning to get this data in a reasonable amount of time for the client. Should this be run through the domain first? What about performance? Even with straight SQL queries, I'm barely getting this data fast enough for the client to display in a reasonable amount of time. I can't imagine trying to get this data to the client fast enough if I am rehydrating all these domain objects, and mixing and matching and aggregating their data in the application layer, or trying to aggregate the data in the application. It seems in these cases that SQL is good at crunching data, so why not use it? But then you have business logic outside your domain model. Any change to the business logic will have to be made in both your domain model and your reporting aggregation schemes. I'm really at a loss for how to design the reporting/dashboard part of any application with respect to domain driven design and good practices. I added the MVC tag because MVC is the design flavor du jour and I am using it in my current design, but can't figure out how the reporting data fits into this type of application. I'm looking for any help in this area - books, design patterns, key words to google, articles, anything. I can't find any information on this topic."} {"_id": "106795", "title": "How to tackle complex business rule and logic?", "text": "I have a domain expert to work with, but he throws a lot of details at me verbally. The business logic is complex, business rules change often, and the business process is long, with multiple endings and intermediate wait-states. There are a lot of cases to deal with (because there are many states for the account, product and subscription). What's the best way to deal with it? A BPMN diagram isn't even low-level enough to explain it. Written documentation is time-consuming to write and read. A UML state diagram can become crazily complex very soon. Any advice would be appreciated."} {"_id": "234276", "title": "Using a progress dialog and multi threading", "text": "My .NET Windows desktop application creates an HTML report, and has 3 main phases. It may create multiple reports. I show a progress bar so the user knows (estimates) how long it will take (as well as reassuring them the system didn't crash, via an updating GUI). So, if I needed to generate 3 reports, there would be a total of 9 phases (3 phases * 3 reports). The issue I now have is that the process is taking too long. As such, I need to use threads so I can start on phase 1, then open a new thread which completes phases 2 and 3. Whilst phases 2 and 3 are in progress, the next item on the list begins with phase 1 and the process continues. I'm hoping this makes sense. So I'm now thinking what to do with the progress bar, since multiple phases will be running at once. At the moment, the only logical thing I can think of is: Report 1. Start phase 1. New thread -> Phases 2 and 3. Check to see if Report 2 is the last report. It isn't, so, during Report 1 phase 2, start Report 2 phase 1. New thread -> Phases 2 and 3. Check to see if Report 3 is the last report. It is. 
So, during Report 2 phase 2, start Report 3 phase 1, then continue on the same thread with phases 2 and 3. The above will mean I update the Progress Dialog during phases 2 and 3 of the last Report (and I guess I need to add some checks to ensure the other phases are actually finished). I can't imagine I'm doing anything new or different to what has been done before, and as such, I was wondering if there is already a pattern for dealing with this type of situation?"} {"_id": "234271", "title": "Am I bound by the license of a library that is used in my project but is not distributed with it?", "text": "I want to publish a small open source project on GitHub. I'd like to release it with either the MIT or Apache license. The project has unit tests that use the JUnit library. JUnit is released with the Eclipse Public License v1.0. JUnit is not distributed in the project - it is downloaded and used at build time by Maven, the build tool. Am I bound by the license of a library that is used in the project but is not distributed with it? If I am, can I use a library released with the Eclipse Public License in MIT or Apache licensed projects? **My own thoughts (the most pessimistic scenario)** As the unit tests use JUnit, the project is a derivative work of JUnit, even though JUnit is not distributed with it. Distributing the project automatically means distributing JUnit. The Eclipse Public License (JUnit's license) says: > A Contributor may choose to distribute the Program in object code form under > its own license agreement, provided that … its license agreement: There is a list of four requirements here that sound trivial but which, perhaps, are not fully addressed in the MIT and Apache licenses (disclaiming all warranties, excluding any liability, a statement about additional provisions, a statement about availability of the source code). So, the Eclipse Public License may not be compatible with the MIT and Apache licenses, and I may not be able to use JUnit."} {"_id": "80318", "title": "Checked vs Unchecked vs No Exception... A best practice of contrary beliefs", "text": "There are many requirements needed for a system to properly convey and handle exceptions. There are also many options for a language to choose from to implement the concept. Requirements for exceptions (in no particular order): 1. **Documentation**: A language should have a means to document the exceptions an API can throw. Ideally this documentation medium should be machine-usable to allow compilers and IDEs to provide support to the programmer. 2. **Transmit Exceptional Situations**: This one is obvious: to allow a function to convey situations that prevent the called functionality from performing the expected action. In my opinion there are three big categories of such situations: 2.1 Bugs in the code that cause some data to be invalid. 2.2 Problems in configuration or other external resources. 2.3 Resources that are inherently unreliable (network, file systems, databases, end-users etc). These are a bit of a corner case since their unreliable nature should have us expect their sporadic failures. In this case, are these situations to be considered exceptional? 3. **Provide enough information for code to handle it**: The exceptions should provide sufficient information to the caller so that it can react and possibly handle the situation. The information should also be sufficient so that, when logged, these exceptions would provide enough context for a programmer to identify and isolate the offending statements and provide a solution. 4. 
**Provide confidence to the programmer about the current status of his code's execution state**: The exception handling capabilities of a software system should be present enough to provide the needed safeguards while staying out of the way of the programmer so he can stay focused on the task at hand. To cover these, the following methods were implemented in various languages: 1. **Checked Exceptions** Provide a great way to document exceptions, and theoretically, when implemented correctly, should provide ample reassurance that all is good. However the cost is such that many feel it more productive to simply bypass them, either by swallowing exceptions or re-throwing them as unchecked exceptions. When used inappropriately, checked exceptions pretty much lose all their usefulness. Also, checked exceptions make it difficult to create an API that is stable over time. Implementations of a generic system within a specific domain will bring their load of exceptional situations that would become hard to maintain using solely checked exceptions. 2. **Unchecked Exceptions** \- much more versatile than checked exceptions, but they fail to properly document the possible exceptional situations of a given implementation. They rely on ad-hoc documentation, if any at all. This creates situations where the unreliable nature of a medium is masked by an API that gives the appearance of reliability. Also, when thrown, these exceptions lose their meaning as they move back up through the abstraction layers. Since they are poorly documented, a programmer cannot target them specifically and often needs to cast a much wider net than necessary to ensure that secondary systems, should they fail, do not bring down the whole system. Which brings us right back to the swallowing problem checked exceptions had. 3. **Multistate return types** Here the idea is to rely on a disjoint set, tuple, or other similar concept to return either the expected result or an object representing the exception. Here there is no stack unwinding, no cutting through code; everything executes normally, but the return value must be validated for errors prior to continuing. I have not really worked with this yet, so I cannot comment from experience. I acknowledge it resolves some problems (exceptions bypassing the normal flow), but it will still suffer from much the same problems as checked exceptions, being tiresome and constantly \"in your face\". So the question is: What is your experience in this matter and what, according to you, is the best candidate for a good exception handling system for a language to have? * * * EDIT: A few minutes after writing this question I came across this post, spooky!"} {"_id": "254333", "title": "redimension multidimensional arrays in Excel VBA", "text": "Take a look at the following code. My problem is that I can't figure out how to redimension the array using the n and b integers. The sent1 array is already working and it is populated with about 4 sentences. I need to go through each sentence and work on it but I'm having trouble. 
` dim sent1() dim sent2() dim n as integer, b as integer, x as integer dim temp_sent as string b = 0 For n = 1 to ubound(sent1) temp_sent = sent1(n) for x = 1 to len(temp_sent) code if a then b = b + 1 **THIS IS THE PART OF THE CODE THAT IS NOT WORKING** redim preserve sent2(1 to ubound(sent1), b) sent2(n,b) = [code] next next `"} {"_id": "146505", "title": "Service Registry - is it needed?", "text": "Suppose you develop a system that has multiple services (implemented by different teams, in different countries, using different technologies including Java, PHP, C#, C++ and more). Let's assume that despite using different technologies, somehow you have agreed to integrate using RESTful APIs. Does such a system need a Service Registry? I understand that basically it's not mandatory, since each service may have its own configuration describing where the services it uses are located. However, creating a central location makes the life of operations easier: they don't need to configure the same service multiple times. But creating a Service Registry may be complex. It's another system in the solution, and it requires resources and, even more important, ownership. Who will own it? I can think of more pros and cons of a Service Registry. So what do you think?"} {"_id": "185363", "title": "POST/Redirect/GET with invalid form submission?", "text": "In the field of web development, is it good practice to do a POST/Redirect/GET when fields in a form submission are invalid, as well? Typically, no sensitive transaction would have taken place in this event. However, can an argument be made that, nonetheless, it is still good practice to utilize the POST/Redirect/GET pattern?"} {"_id": "185360", "title": "Copying (forking) an open source project to your own repository", "text": "I'm currently using an open source project called CodeFirstMembership for one of my projects. There's a critical issue that I need to get past, and the more I use it, the more I find things I need to modify. It's extremely useful, except it doesn't look like the developer has much time to update it (totally cool, we've all been there). I'm wondering about the etiquette and legality and general \"hey, you stole my code you jerk\" responses I'd get if I copied the source, moved it to GitHub and made my own fork. On GitHub, we do this all the time, forking projects, but it FEELS less jerk-ish because there's a connection to the original. Is there anything wrong with me doing this, whether legal or otherwise, since there's no copyright notice or license associated with this library?"} {"_id": "180784", "title": "is it okay to remove copyright info from a free, open source API even if you are explicitly told not to do so?", "text": "Just wondering if text such as the following, which appears inside the source files of a free, open source API, means anything in a court of law // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the \"Software\"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so, subject to the following conditions: // // The above copyright notice and this permission notice shall be included in // all copies or substantial portions of the Software. 
// // THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN // THE SOFTWARE."} {"_id": "147643", "title": "Splitting an image into many pieces to improve perceived performance? Good or Bad?", "text": "From time to time, I have to work on a website that involves a huge background (1920 x 1080 in resolution, over 2 MB in size). I've been using preloaders and they worked very well, except that users have to wait a little bit. I noticed that Google Maps splits a big image into many pieces. I researched it and found out that it actually increases perceived performance. Is this true?"} {"_id": "147645", "title": "What is the difference between an Array and a Stack?", "text": "According to Wikipedia, a stack: > is a last in, first out (LIFO) abstract data type and linear data structure. While an array: > is a data structure consisting of a collection of elements (values or > variables), each identified by at least one array index or key. As far as I understand, they are fairly similar. So, what are the main differences? If they are not the same, what can an array do that a stack can't and vice-versa?"} {"_id": "144594", "title": "Do I suffer from encapsulation overuse?", "text": "I have noticed something in my code in various projects that seems like code smell to me and something bad to do, but I can't deal with it. While trying to write \"clean code\" I tend to over-use private methods in order to make my code easier to read. The problem is that the code is indeed cleaner, but it's also more difficult to test (yeah, I know I can test private methods...) and in general it seems a bad habit to me. Here's an example of a class that reads some data from a .csv file and returns a group of customers (another object with various fields and attributes). public class GroupOfCustomersImporter { //... Class fields .... public GroupOfCustomersImporter(String filePath) { this.filePath = filePath; customers = new HashSet(); createCSVReader(); read(); constructGroupOfCustomers(); } private void createCSVReader() { //.... } private void read() { //.... Reads the file and initializes the class attributes } private void readFirstLine(String[] inputLine) { //.... Method used by the read() method } private void readSecondLine(String[] inputLine) { //.... Method used by the read() method } private void readCustomerLine(String[] inputLine) { //.... 
Method used by the read() method } private void constructGroupOfCustomers() { //this.groupOfCustomers = new GroupOfCustomers(**attributes of the class**); } public GroupOfCustomers getConstructedGroupOfCustomers() { return this.groupOfCustomers; } } As you can see, the class has only a constructor which calls some private methods to get the job done. I know that's not a good practice in general, but I prefer to encapsulate all the functionality in the class instead of making the methods public, in which case a client would work this way: GroupOfCustomersImporter importer = new GroupOfCustomersImporter(filePath); importer.createCSVReader(); importer.read(); importer.constructGroupOfCustomers(); GroupOfCustomers group = importer.getConstructedGroupOfCustomers(); I prefer this because I don't want to put useless lines of code in the client-side code, bothering the client class with implementation details. So, is this actually a bad habit? If yes, how can I avoid it? Please note that the above is just a simple example. Imagine the same situation happening in something a little bit more complex."} {"_id": "144591", "title": "When is it acceptable to NOT fix broken windows?", "text": "In reference to broken windows, are there times when refactoring is best left for a future activity? For example, if a project to add some new features to an existing internal system is assigned to a team that has not worked with the system until now, and is given a short timeline in which to work - can it ever be justifiable to defer major refactorings of existing code for the sake of making the deadline in this scenario?"} {"_id": "103673", "title": "What are linkers and loaders? How do they work?", "text": "I am trying to understand things like linkers and loaders better. What area of computer science do they belong to? Compilers, Operating Systems, Computer Architecture? Where do linkers and loaders come into play during development?"} {"_id": "156060", "title": "Constructs for wrapping a hardware state machine", "text": "I am using a piece of hardware with a well-defined C API. The hardware is stateful, with the relevant API calls needing to be in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful, or if not, why not. The hardware will not be left in some ill-defined state. In effect, the API calls indirectly advise of the current state of the hardware if the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: Is there a well-established design pattern for wrapping such a hardware state machine in a high-level language, such that consistency is maintained? My development is in Python. I ideally wish the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. I apologize for the slight vagueness; I'm not very knowledgeable in this area and so am fishing for help with the description as well!"} {"_id": "186558", "title": "How can I work on different projects without losing focus?", "text": "I'm doing a full-time job (developer) and at the end of the day, I work on my part-time job (developer). How do you work with different projects without losing focus? 
Often, my focus on the full-time job still remains at my head when working on my part- time job or the other way around."} {"_id": "156066", "title": "How to do MVC the right way", "text": "I've been doing MVC for a few months now using the CodeIgniter framework in PHP but I still don't know if I'm really doing things right. What I currently do is: **Model** \\- this is where I put database queries (select, insert, update, delete). Here's a sample from one of the models that I have: function register_user($user_login, $user_profile, $department, $role) { $department_id = $this->get_department_id($department); $role_id = $this->get_role_id($role); array_push($user_login, $department_id, $role_id); $this->db->query(\"INSERT INTO tbl_users SET username=?, hashed_password=?, salt=?, department_id=?, role_id=?\", $user_login); $user_id = $this->db->insert_id(); array_push($user_profile, $user_id); $this->db->query(\" INSERT INTO tbl_userprofile SET firstname=?, midname=?, lastname=?, user_id=? \", $user_profile); } **Controller** \\- talks to the model, calls up the methods in the model which queries the database, supplies the data which the views will display(success alerts, error alerts, data from database), inherits a parent controller which checks if user is logged in. Here's a sample: function create_user(){ $this->load->helper('encryption/Bcrypt'); $bcrypt = new Bcrypt(15); $user_data = array( 'username' => 'Username', 'firstname' => 'Firstname', 'middlename' => 'Middlename', 'lastname' => 'Lastname', 'password' => 'Password', 'department' => 'Department', 'role' => 'Role' ); foreach ($user_data as $key => $value) { $this->form_validation->set_rules($key, $value, 'required|trim'); } if ($this->form_validation->run() == FALSE) { $departments = $this->user_model->list_departments(); $it_roles = $this->user_model->list_roles(1); $tc_roles = $this->user_model->list_roles(2); $assessor_roles = $this->user_model->list_roles(3); $data['data'] = array('departments' => $departments, 'it_roles' => $it_roles, 'tc_roles' => $tc_roles, 'assessor_roles' => $assessor_roles); $data['content'] = 'admin/create_user'; parent::error_alert(); $this->load->view($this->_at, $data); } else { $username = $this->input->post('username'); $salt = $bcrypt->getSalt(); $hashed_password = $bcrypt->hash($this->input->post('password'), $salt); $fname = $this->input->post('firstname'); $mname = $this->input->post('middlename'); $lname = $this->input->post('lastname'); $department = $this->input->post('department'); $role = $this->input->post('role'); $user_login = array($username, $hashed_password, $salt); $user_profile = array($fname, $mname, $lname); $this->user_model->register_user($user_login, $user_profile, $department, $role); $data['content'] = 'admin/view_user'; parent::success_alert(4, 'User Sucessfully Registered!', 'You may now login using your account'); $data['data'] = array('username' => $username, 'fname' => $fname, 'mname' => $mname, 'lname' => $lname, 'department' => $department, 'role' => $role); $this->load->view($this->_at, $data); } } **Views** \\- this is where I put html, css, and JavaScript code (form validation code for the current form, looping through the data supplied by controller, a few if statements to hide and show things depending on the data supplied by the controller).
[View sample omitted: a \"User Registration\" form.]
The codes above are actually code from the application that I'm currently working on. I'm working on it alone so I don't really have someone to review my code for me and point out the wrong things in it so I'm posting it here in hopes that someone could point out the wrong things that I've done in here. I'm also looking for some guidelines in writing MVC code like what are the things that should be and shouldn't be included in views, models and controllers. How else can I improve the current code that I have right now. I've written some really terrible code before(duplication of logic, etc.) that's why I want to improve my code so that I can easily maintain it in the future. Thanks!"} {"_id": "193563", "title": "How safe & trustworthy are hosting sites such as sourceforge, github or bitbucket for closed-source projects?", "text": "I am considering using sourceforge, bitbucket or github for managing source control for my business. I have open projects and I participate in open projects such as gcc. But I also have a business where I develop closed-source software for my living. How trustworthy are sourceforge, github or bitbucket in terms of keeping software secure from prying eyes? How stable is the hosting in terms of data loss prevention? Has anyone out there based their business logic with such an outfit? Has anyone out there surveyed several of the hosting solutions? Thank you"} {"_id": "185093", "title": "Do I really want a dynamic Enum, or something else? Better caching? A Struct?", "text": "This question is asked on SO and this site often it looks like. \"How do I do a dynamic enum?\" Answers range from you shouldn't to possible compile time solutions. Now my question is do I want a dynamic enum that is generated from look up tables in the DB, or do I want something else that is similar? I have a solution using T4 at build time to generate enums from DB tables, that works fine. But now some of the look up tables have an 'active' flag on them. Does this mean they are not look up tables anymore? If that is changed I want to somehow regenerate my code without having to redeploy. These tables will not change often. Currently the app when it wants to populate a dropdown or something will query the DB. But if the values mostly remain static, why go to the DB every time, hence the use of enums. It also has the added benefit of getting rid of magic strings and numbers. Would static structs be better? Some sort of caching for the look up tables? Any advise welcome."} {"_id": "66737", "title": "Can you recommend good quality online master's programs?", "text": "I've been looking at online master's degree programs in software engineering. Have you completed a master's program online? If so, where? Did you like it? Was it relevant or out of date?"} {"_id": "185094", "title": "Better to use error monad with validation in your monadic functions, or implement your own monad with validation directly in your bind?", "text": "I'm wondering what's better design wise for usability/maintainability, and what's better as far as fitting with the community. 
Given the data model: type Name = String data Amount = Out | Some | Enough | Plenty deriving (Show, Eq) data Container = Container Name deriving (Show, Eq) data Category = Category Name deriving (Show, Eq) data Store = Store Name [Category] deriving (Show, Eq) data Item = Item Name Container Category Amount Store deriving Show instance Eq (Item) where (==) i1 i2 = (getItemName i1) == (getItemName i2) data User = User Name [Container] [Category] [Store] [Item] deriving Show instance Eq (User) where (==) u1 u2 = (getName u1) == (getName u2) I can implement monadic functions to transform the User for instance by adding items or stores etc, but I may end up with an invalid user so those monadic functions would need to validate the user they get and or create. So, should I just: * wrap it in an error monad and make the monadic functions execute the validation * wrap it in an error monad and make the consumer bind a monadic validation function in the sequence that throws the appropriate error response (so they can choose not to validate and carry around an invalid user object) * actually build it into a bind instance on User effectively creating my own kind of error monad that executes validation with every bind automatically I can see positives and negatives to each of the 3 approaches but want to know what is more commonly done for this scenario by the community. So in code terms something like, option 1: addStore s (User n1 c1 c2 s1 i1) = validate $ User n1 c1 c2 (s:s1) i1 updateUsersTable $ someUser >>= addStore $ Store \"yay\" [\"category that doesnt exist, invalid argh\"] option 2: addStore s (User n1 c1 c2 s1 i1) = Right $ User n1 c1 c2 (s:s1) i1 updateUsersTable $ Right someUser >>= addStore $ Store \"yay\" [\"category that doesnt exist, invalid argh\"] >>= validate -- in this choice, the validation could be pushed off to last possible moment (like inside updateUsersTable before db gets updated) option 3: data ValidUser u = ValidUser u | InvalidUser u instance Monad ValidUser where (>>=) (ValidUser u) f = case return u of (ValidUser x) -> return f x; (InvalidUser y) -> return y (>>=) (InvalidUser u) f = InvalidUser u return u = validate u addStore (Store s, User u, ValidUser vu) => s -> u -> vu addStore s (User n1 c1 c2 s1 i1) = return $ User n1 c1 c2 (s:s1) i1 updateUsersTable $ someValidUser >>= addStore $ Store \"yay\" [\"category that doesnt exist, invalid argh\"]"} {"_id": "188065", "title": "Attended (CEH or other) course but have no certification", "text": "Last summer I attended course for \"Certified Ethical Hacker\" led by Mr. Wayne Burke. The course was great. Mr. Burke was awesome, flashing with energy all the time, he would stay hours late just to share another piece of his experience with us. I learned a lot of new things, took lot of notes (to which I keep returning), and most importantly, it pushed my way of thinking in a direction. However, due to unfortunate circumstances, I did not attend the actual certification exam, therefore I do not hold the certificate. Now I'm looking for a job (in QA, not necessarily security-related) and while updating my CV, I ask myself: Should I mention the course in the CV? I'm thinking about two possible final effects: * positive, since I _have_ been there, seen it, understood most of it---better than nothing, well? * negative, as it could be perceived as failure, so why remind it? 
Plus, I definitely don't want to look like someone who is boasting with CEH and then it turns out he did not make it. Is there a recommended rule of thumb for these situations? Does anyone have similar experience? Note: Frankly (aside from what should be considered rational in the context of the career market), personally I _do_ feel strongly enlightened by the course, and I _do_ believe it helped me a lot in my growth. I don't even consider the missing cert a personal failure: compared to the actual course, the (preview) questions were a bunch of numbers and tool names to cram into my head. **Update:** Finally I decided to list the course, in the spirit of what @tdammers wrote. Anyway, in the interviews I've been to, I was not given questions about C|EH -- neither technical ones nor \"where is your ~~homework~~ certificate\" -- which is not surprising since, as I have pointed out, I wasn't looking for a particularly security-centered position. Now I have a new job--just as exciting as I hoped for."} {"_id": "130820", "title": "What are some games involving recursion?", "text": "I need to introduce a group of 5-15 people to recursion and I would like to do so by using a physical game/dance/activity they can play to get a feeling for recursion. The class is not so much focused on CS as on general concepts, so anything computer- or programming-related is out (but programmers are still the best people I could think of to bombard with such a question). I was thinking about using some kind of clap game, such as teenage girls like to play, that includes recursion, but couldn't come up with one yet. The game/dance might well be ridiculous; it should just be interactive and give them some intuitive feeling for what recursion is and how omnipresent it is in our surroundings. Looking forward to your suggestions!"} {"_id": "151541", "title": "Is version history really sacred or is it better to rebase?", "text": "I've always agreed with Mercurial's mantra 1; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a \"bad practice\", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. OTOH, I see the point of trying to package 5 commits in a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see partial commits to a feature where some experimentation is done, even if it is not as nifty. Seeing something like \"Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base\" would IMHO have good value to those studying the codebase and following the developer's train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such \"organic commits\" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, **why does that work for you?** (having other team members keep history, or alternatively, rebasing it). * * * 1 per Google's DVCS analysis, in Mercurial \"History is Sacred\"."} {"_id": "207423", "title": "When should we clean up old, no longer used GIT branches?", "text": "We have several bugfix branches that are starting to pile up. 
They have been merged into master, and deployed to production. Is there a good benchmark for when these branches should be cleaned up? Should they ever be cleaned up, or is it good to have the historical data?"} {"_id": "183836", "title": "Why does git allow you to \"change history\"?", "text": "> **Possible Duplicate:** > When should the VCS history of a project be deleted? I am experienced using svn and recently started learning git. I was quite shocked to learn that git has features that allow you to \"rewrite history\". Coming from svn, I had accepted as sacrosanct the source control principle that once a change is committed, there is no way to \"undo\" the commit itself... the most one can do is execute a later commit that effectively \"reverses\" the changes made in the earlier commit, but one can always reproduce the state of any commit that had been done in the past. What is the rationale for changing this \"principle\"? Is there something different about git that makes this OK, or does the design of git reflect a different \"philosophy\" of source control that differs with the \"philosophy\" of svn?"} {"_id": "175150", "title": "Syntax Memorization", "text": "> **Possible Duplicate:** > Programmers forgetting syntax > Do programmers need a good memory? I'm a new web developer. I began learning HTML/CSS around June of this year. I picked them up easily, now I'm moving on to Javascript. My problem is this. I understand the concepts that the training material are teaching but I find myself coming up short when it's time to apply those lessons. Mainly it's due in part to the syntax. I memorize a function, apply it, test it, move on then forget it a few days later (the syntax, not the concept). My question is this. Should I be spinning my wheels trying to remember everything, or is knowing the concept enough. I know this has been addressed before but I couldn't find any real definitive real world answers. Feel free to discuss what methods worked for you when you started out."} {"_id": "95213", "title": "Do programmers need a good memory?", "text": "It seems one has to remember all sorts of syntax to be able to program. If one don't have a good memory for remembering names, will it be more difficult to learn to program?"} {"_id": "140036", "title": "Is it a good idea to do UI 100% in Javascript and provide data through an API?", "text": "My primary day job is making HTML applications. With that I mean internally used CRUD-type applications with lots of editable gridviews, textboxes, dropdowns, etc. We're currently using ASP.NET webforms, which do get the job done, but the performance is mostly dismal, and quite often you have to jump through hoops to get what you need. Hoops which are hung from the ceiling and set aflame. So I'm wondering if it perhaps would be a good idea to move all UI to the JavaScript side. Develop a strong set of reusable controls which are tailored specifically to our needs, and only exchange data with the server. Yes, I like the \"control\" (aka \"widget\") paradigm, its quite well suited to such applications. So on the server side we would still have a basic layout simliar to our current ASPX markup, but that then would get sent to the client only once, and the Javascript part would take care of all the subsequent UI updates. The problem is I've never done this before, and I've never seen anyone doing this either, so I don't know what the problems would be. In particular, I'm worried about: * **Performance** still. 
Benchmarking shows that currently the main delay is on the client side, when the browser tries to re-render most of the page after an AJAX update. ASP.NET webforms generated markup gives a new meaning to the word \"web\", and the rich Devexpress controls add their own layer of Javascript complexity on top of that. But would it be faster to recalculate all the necessary changes on the Javascript side and then update only what needs to be updated? Note that I'm talking about forms that have several editable gridviews, lots of textboxes, many dropdowns each with half-a-zillion filterable items in them, etc. * **Ease of development**. There would be a lot more Javascript now, and it would probably mix with the HTML markup of the page. That, or some kind of new view engine would have to be produced. Intellisense for Javascript is also a lot worse than for C# code, and due to the dynamic nature of Javascript it can't be expected to get a lot better. Coding practices can improve it a bit, but not by much. Also, most of our developers are primarily C# developers, so there will be some learning curve and initial mistakes. * **Security**. A lot of security checks will have to be done twice (server-side and UI-side), and the data-processing server side will have to include a lot more of them. Currently, if you set a textbox to be read-only on the server side, you can depend on its value not changing through the client roundtrip. The framework already has enough code to ensure that (through viewstate encryption). With the data-only approach, it gets harder, because you have to manually check everything. On the other hand, maybe security holes will be easier to spot, because you will have only data to worry about. All in all, will this solve our problems, or make them worse? Has anyone ever attempted this, and what were the results? Are there any frameworks out there that help in this sort of endeavor (jQuery and moral equivalents aside)?"} {"_id": "136254", "title": "What data structure should I use for this caching strategy?", "text": "I am working on a .NET 4.0 application that performs a rather expensive calculation on two doubles, returning a double. This calculation is performed for each one of several thousand _items_. These calculations are performed in a `Task` on a threadpool thread. Some preliminary tests have shown that the same calculations are performed over and over again, so I would like to cache _n_ results. When the cache is full, I would like to throw out the least ~~often~~ recently used item. ( **Edit:** I realized least-often doesn't make sense, because when the cache is full and I would replace a result with a newly calculated one, that one would be least often used and immediately replaced the next time a new result is calculated and added to the cache) In order to implement this, I was thinking of using a `Dictionary` (where `Input` would be a mini-class storing the two input double values) to store the inputs and the cached results. However, I would also need to keep track of when a result was used the last time. For this I think I would need a second collection storing the information I would need to remove a result from the dictionary when the cache was getting full. I am concerned that constantly keeping this list sorted would negatively impact performance. Is there a better (i.e. more performant) way to do this, or maybe even a common data structure that I am unaware of? 
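There is such a structure: a hash map whose entries are threaded on a doubly linked list in access order, so lookups stay O(1) and the eviction victim is always the head of the list. A minimal sketch in Java, where `LinkedHashMap` ships this behavior out of the box (the question is about .NET, so treat this as a cross-language illustration; `capacity` is an illustrative parameter, not part of any framework):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// An LRU cache: with accessOrder = true, every get() moves the entry to
// the tail, so the eldest entry is always the least recently used one.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // initial size, load factor, access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict automatically on insert
    }
}
```

In .NET the equivalent shape would be a `Dictionary` plus a linked list of keys, moving a key to the front on every hit; and since the calculations run on thread-pool threads, the map would also need locking (or a concurrent variant) around get/put.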
What kinds of things should I be profiling/measuring to determine the optimality of my solution?"} {"_id": "155679", "title": "Programming knowledge vs. programming logic", "text": "Is there any difference between the two topics? I have seen companies asking for _Good Programming knowledge_ some _Good Programming logic_. I have seen this in Job profiles for a developer \u2013 for e.g. \"good Programming logic\", \"strong Programming knowledge\". I believe that Programming knowledge is related to knowledge about the language in consideration and Programming logic is problem solving logic using programming (in general). Please correct me if I am wrong. Also what is more important? **Edit:** Do selection of components for application, designing interfaces validating user inputs fall under programming knowledge or Programming logic? Does programming logic simply imply problem solving, or is there anything else which it should comprise of?"} {"_id": "105855", "title": "What is the best way to maintain software tool chains?", "text": "**Short Question** What is the best way to create, maintain, and distribute software development tool chains? **Background** I am trying to develop a workflow / process to create an isolated environment in install and deploy software tool chains for embedded targets. Each tool chain typically would contain an IDE, compiler, and a small set of common tools. I have looked into two solutions but the both have their limitations and I hope to find a solution that has the best of both implementations. The rational behind a stand alone tool chain is so that all developers would be able to build the exact same code (matching CRC etc..). As well as be able to rebuild the same code after X number of years to support legacy code. A third option would be to use a central build server, which we plan on doing for nightly builds, but we still need to be able to build 'on the fly' when working on a test bench or remotely. So only using a build server will not suffice. **Ideal Goals for Deployed tool chain** * Work on Windows XP, Vista, and 7 (excluding drivers) * Contain all required information to run use the tool chain (excluding drivers) **\\------------------ Possible Options ------------------** **Option 1: Full Stand alone Virtual Machine** _Pros_ * Open Source packaging via Virtual Box * 100% isolation * Guarantees the tools can interact with one another _Cons_ * Licensing multiple copies of windows * At best only one developer could **legally** use a Windows VM at a time. * Running the same VM on **physically** different lap tops (Macbook Pro, Dell 830, Dell E6500, etc...). Typically when Windows see's a new CPU or motherboard it will force a re-activation. * _Very_ large files to maintain **Option 2: Virtualized Application** _Pros_ * Through packagers like Cameyo, applications can run on nearly all versions of Windows. (Note that Cameyo is also free) * Little to no licensing issues for most tools / IDEs. Purchased compilers might have additional restrictions, all of our current compilers do support this type of usage. * Smaller files to maintain via a SCM, such as SVN or GIT. _Cons_ * Difficult, if even possible, to get a single file that contains the entire tool chain * Difficult, if even possible, to get the tools to see / work with one another. I have seen this very issue with Cameyo. 
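On the \"matching CRC\" goal above: whichever packaging option wins, the reproducibility check itself can be mechanical. A sketch in Java, assuming a hypothetical `toolchain.manifest` file of `path<TAB>sha256` lines checked into the same SCM as the code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Refuses to proceed if any tool listed in the manifest has drifted
// from the hash recorded when the toolchain was frozen.
public class ToolchainCheck {
    public static void main(String[] args) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        for (String line : Files.readAllLines(Path.of("toolchain.manifest"))) {
            String[] parts = line.split("\t");
            String actual = HexFormat.of()
                    .formatHex(sha.digest(Files.readAllBytes(Path.of(parts[0]))));
            if (!actual.equals(parts[1])) {
                throw new IllegalStateException("Toolchain drift: " + parts[0]);
            }
        }
    }
}
```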
**Closing Thoughts** There may not be a solution that meets all of my design goals, but I would like to find out if anyone else been successful in this type of deployment."} {"_id": "105854", "title": "Is it better to team up with a designer, or to piece out design work to a contractor?", "text": "I am clueless when it comes to design but I am proficient in programming. If you are a programmer who lacks visual design skills, what has worked best for you -- to pair up with a graphic designer or to do all the programming and then piece out specific work? I have done some piecing out of work but I am not completely satisfied because I see too much of my influence come out in the design. I guess this happens since I have to be explicit in what I want with contractors, and they are limited in creativity by the architecture I have created. It feels like having a partner would allow for greater collaboration, but I have read plenty of entrepreneur books that warn against partnerships."} {"_id": "105851", "title": "Favoring Immutability in Database Design", "text": "One of the items in Joshua Bloch's Effective Java is the notion that classes should allow mutation of instances as little as possible, and preferably not at all. Oftentimes, the data of an object is persisted to a database of some form. This has led me to thinking about the idea of immutability within a database, especially for those tables that represent a single entity within a larger system. Something I have been experimenting with recently is the idea of trying to minimize the updates I do to table rows representing these objects, and trying to perform inserts instead as much as I can. A concrete example of something I was experimenting with recently. If I know I might append a record with additional data later on, I'll create another table to represent that, sort of like the following two table definitions: create table myObj (id integer, ...other_data... not null); create table myObjSuppliment (id integer, myObjId integer, ...more_data... not null); It is hopefully obvious that these names are not verbatim, but just to demonstrate the idea. Is this a reasonable approach to data persistence modeling? Is it worth trying to limit updates performed on a table, especially for filling in nulls for data that might not exist when the record is originally created? Are there times when an approach like this might cause severe pain later on?"} {"_id": "143593", "title": "What is the value of the Cloudera Hadoop Certification for people new to the IT industry?", "text": "I am a software developer with 8 months of experience in the IT industry, currently working on the development of tools for BIG DATA analytics. I have learned Hadoop basics on my own and I am pretty comfortable with writing MapReduce Jobs, PIG, HIVE, Flume and other related projects. I am thinking of taking the exam for the Cloudera Hadoop Certification. Will this certification add value, considering that I have less than 1 year of experience? Many of the jobs I've seen relating to Hadoop require at least 3 years of experience. Should I invest more time in learning Hadoop and improving my skills to take this certification?"} {"_id": "109162", "title": "Is all industry experience equal (for the most part)?", "text": "I graduated not too long ago with a degree in computer science. 
I've been working for a software company for a couple of months, but the work that I'm doing doesn't seem to fall into any defined category (at least the categories that exist in my mind - developer, tester, project manager). My job will occasionally require me to do a bug fix in Java (5%), but the majority of it seems to consist of either querying a client's database and adding/editing/deleting information (60%), or troubleshooting and then giving the response, \"You forgot to do such and such before you submitted the form, and that's why you got an error\" (35%). My worry is that down the road if I try and apply to some other company for a more senior developer/tester/PM position, they'll look at my work experience and say, \"You seemed to have dealt more with database management / technical support than the actual software development cycle\". Is this a valid concern on my part? **EDIT** Since the consensus seems to be that it's low-end/support, I have another question. I've been here a relatively short amount of time, and I've had offers for interviews at a couple of other companies (residual contacts from posting my resume online) where I would be doing development (which is what I'd prefer to be doing). I took this position because they were the first to make an offer (the pay is pretty decent) and I'm hoping to go to graduate school in the next year or so, so I wanted to start saving money. Would it be worth it to pursue these other interview offers (who are going off my resume of me coming straight from university), and just act like these last couple of months didn't happen (resume wise)."} {"_id": "177579", "title": "Is testable code actually more stable?", "text": "A google scholar search turns up numerous papers on testability, including models for computing testability, recommendations for how ones code can be more testable, etc. They all come with the assertion that more testable code is more stable, however I can't find any studies which actually demonstrate this. I tried looking for studies evaluating the effect of testable code vs. quality, however the closest I can find is Improving the Testability of Object Oriented Systems, which discusses the relationship between design flaws and testability. Is testable code is actually more stable? And more importantly, how strong is this relationship? Please back up your answers with references or evidence to back up your claim. For example, there is a lot of study regarding the relationship of cyclomatic complexity and defect rate. Troster finds a correlation of `r = .48` There are metrics for \"testability\", such as code coupling. I'm looking for research conclusions relating these to defect rate. Ideally I would love a graph plotting some measure of testability vs. defect rate."} {"_id": "135130", "title": "How to provide a solid explanation that no two software should have same version number?", "text": "We are an outsourcing company. We develop firmware and software for our client. They have a hardware engineer team with whom we, firmware developers, work. Our client has a **strictly defined process** to release a firmware. A change in firmware version number has a big impact in that process. When we release a firmware to them for testing, the hardware engineers test it first. Then another SQA test it. Finally, the product is released to customers. Sometimes, when the change is minor in a new release, our client insist to keep the version number same as the last one. 
Say the latest release from the developers (not released to the customers) had version number 10. Then we found a place for optimization. Or maybe a slight change in the firmware's behavior in a very special case, which may or may not be visible from outside. When we propose to them that we found these places for minor improvements, they insist that we make the changes but keep the version number at 10. We insist that every firmware we release must have a distinct version number. We pointed out several common pitfalls if we don't change the firmware version number with every release. They say that they understand why it is necessary. Yet they force us to keep the version number the same. Keeping the version number the same helps their **strictly defined process** of releasing firmware to the customer end. We manage the code in our own repository (SVN). We strictly tie a given version number to a given revision number. The version number is stored in the EEPROM of the product our client releases. There are 2 bytes defined where the version number is stored. The PC software that reads the version number always reads those 2 bytes and displays the version number on the PC monitor. There is no other way to define the version number. They won't change the software that reads the version number, and they won't allocate more memory to introduce another version number for the developers' use only. I'm about to change the firmware and release the modifications today. I've got another request to keep the version number the same as the previous one. I want to email several unbeatable arguments about what problems we might face if we keep the version number the same as the previous release. How can I convince my client not to reuse the same version number?"} {"_id": "224284", "title": "How do I find which line connected two squares?", "text": "In the Permissive Field of View algorithm, a destination square is _visible_ from a source square if any unobstructed line can be drawn from the source square to the destination square. The algorithm works by defining a series of view fields (essentially, triangles) and checking if the destination square exists inside of a view field. Unfortunately, the algorithm does not make available the actual line that it found between the source square and the destination square; it only tells you whether such a line is guaranteed to exist. How can I find the line, presuming it is guaranteed to exist? I've tried plotting lines from the edges of the source square to the edges of the destination square, but this apparently is just an approximation (it can fail to find the line)."} {"_id": "92582", "title": "Multi sourced search in WPF app", "text": "If I'm using the MVVM pattern – this one – for my WPF application, and my project requires a search function on different sources – say clients, accountants, cases. What would be the best way to go about it so that I only have one search view and the search results shown in the ListView are 'dynamic', in that they could be results for clients or accountants or cases? Should I have a base search view model and specific view models for clients, accountants, and cases that are then somehow set as the DataContext for the search view? Or try to build the grid dynamically in a code-behind for the search, and then somehow get the data back into the view model without the search view knowing about the view model? 
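One language-neutral shape for this: the search view binds to a provider contract and a generic row set, so adding a source never touches the view. Sketched here in Java with hypothetical names (in C#/WPF the interface would become per-source view models feeding one column-driven grid):

```java
import java.util.List;

// One provider per source; the single search view only knows this contract.
interface SearchProvider {
    String name();                          // "Clients", "Accountants", "Cases"
    List<String> columns();                 // headers the list view should show
    List<List<String>> search(String term); // one row per hit, one cell per column
}

// The shared search view model delegates to whichever provider is selected.
class SearchViewModel {
    private final List<SearchProvider> providers;

    SearchViewModel(List<SearchProvider> providers) {
        this.providers = providers;
    }

    List<List<String>> run(String providerName, String term) {
        return providers.stream()
                .filter(p -> p.name().equals(providerName))
                .findFirst()
                .map(p -> p.search(term))
                .orElse(List.of());
    }
}
```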
**For clarification** The searches are separate and will not need to be combined, I would only every return results for clients or accountants or cases."} {"_id": "92581", "title": "Customer wants to modify the .properties files packaged in our WAR file", "text": "I have a customer who wants the ability to modify the .properties files packaged in the web applications WAR file so that they have the control to modify environments at settings. They will be hosting this site on their own servers so I can't say I blame them but I am a little unsure what the best course is. I would think that any adminstrator of a Java EE application server should know how to drill into the WAR package with a zip tool and find and modify the properties files themselves. I am unsure if this is common knowledge or not. I have typically always packaged my build artifacts with the property files appropriate for a given environment, but this is kind of an oddball case where WE build the WAR and hand it off to the customer. I might try to keep the property files external from the WAR but then it gets messy because of classloader issues (JBoss classloader, Tomcat classloader, etc...) I am thinking that as a convenience we maintain properties files for THEIR environments and change and build these on their requests. My manager likes this idea because it keeps us in control but the customer doesn't like this. What would you do?"} {"_id": "228504", "title": "Translate soap input of one web service into input of another", "text": "We need to support a legacy web service using a new web service implementation. At a given time the old input as well as new input should work. So here is what we need to achieve > Old SOAP input -> translate -> New SOAP input -> New WS -> New WS output -> > translate -> Old SOAP output. For now, I have created xslt for translating input and output But looks like that has little use if I implement a webservice to handle old input Given that we know the input and output XSD for all formats involved, how to implement a web service that will take old soap input, translate it (using XSLT) and call the new service? What tools (spring ws,cxf ..) are most suited?"} {"_id": "42045", "title": "Will they release a Wrox Box 4?", "text": "does anyone know if there will be a Wrox Box 4? I would love to get something like that, but the latest collection of Wrox books seems to be the release for 3.5. I really need to get up to date with the latest version of .NET. I'm not sure if there will be a Wrox Box 4 though, because it looks like Wiley is now publishing Wrox books under Wiley, and not Wrox anymore. So it looks like I'm going to have to go with Professional ASP.NET 4 in C# Instead of the Wrox Box 3.5"} {"_id": "77184", "title": "Anyone using Intel Performance Primitives library (IPP) with Delphi?", "text": "I'm considering purchasing the Intel Integrated Performance Primitives (IPP) library for use with my Delphi applications. I'm interested in the signal processing functions. Mainly for re-sampling and FFT. Has anyone used IPP with Delphi? How did it go? I'm wondering how easy it will be to begin using the library. I don't have any experience using non-Delphi code/libraries with Delphi."} {"_id": "77185", "title": "How do open source projects deal with complex setup and deployment?", "text": "At most of my corporate clients they have very complex systems with multiple entities to be deployed and tested. (e.g. client software, user database, system, database, identity manager application, etc). 
They usually don't have an easy way to deploy all this, neither in production nor for development. I wonder how open source projects/applications deal with this? Do open source projects tend to be smaller in scope and therefore less complex to set up? Do they manage to avoid complexity since no one pays for time wasted on setup? I wonder which open source projects are complex in terms of deployment. I thought about a couple of examples, though neither of them is perfect for my purpose. **Wordpress** \\- The .com site is probably quite complex in terms of deployment but AFAIK the version that can be downloaded from .org is quite simple. It cannot be distributed to multiple servers. **Wikipedia** ? Most of the software it is using is open source but it is not itself a downloadable open source project so it does not really fit. Details of their system can be found on Wikipedia. **Diaspora** \\- the Facebook replacement is not working yet but it might become such a project. **Dreamwidth** is originally a LiveJournal fork. Dreamwidth is mostly open source. Its wiki page has information with some production notes. I got some input from them in IRC that I posted on their forum to get further clarifications. So I am interested in further examples of open source applications/systems that are complex in terms of deployment and testing."} {"_id": "194961", "title": "Performance considerations when using Go for my project", "text": "I'm looking at using Go (aka golang) for a project (a SQL database, but that mostly doesn't matter here) where performance is critical, but under low load the primary bottleneck will be I/O to disk. In this case, I think Go would be great! Under high load or while hitting cache a lot, CPU and memory utilization will increasingly become bottlenecks, and I'm worried that Go could make the high end of the performance spectrum significantly lower than what C/C++/D might provide. Can anyone with experience working in Go give some insight into how quickly that bottleneck is reached (networking applications are subject to the same bottleneck, typically) and what you can do to relax it in Go other than rewriting the bottleneck in a faster language? * * * Note my question was sufficiently different that I asked it anyway after reading this related question: * What kind of software is done best with The Go Programming language? I have a specific application I'm asking about, and I've limited my concerns to performance (not usability, library support, development tools, etc)."} {"_id": "171800", "title": "Why doesn't Java's BigInteger class have a constructor capable of taking a numeric literal?", "text": "Why doesn't Java's BigInteger class have a constructor capable of taking a numeric literal? Every single time I use BigIntegers, and many times I merely think about them, I wonder this. What reason could the designers of Java have had to exclude one, despite the overwhelming convenience such a constructor would offer?"} {"_id": "171805", "title": "Does 'Me' in VB.NET refer to the instantiated object only?", "text": "Does 'Me' in VB.NET refer only to an instantiation of the type? It just occurred to me that since I can reference properties in my VB.NET class without using 'Me', I don't see a reason for using it for this purpose.
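(For comparison, the same situation in Java terms, purely as an illustration: `this` plays the role of `Me`, and qualifying the reference only really matters when a local name shadows the member.)

// Illustrative Java analogue: "this" is Java's "Me".
class Person {
    private String name;

    void rename(String name) {
        // Unqualified "name" is the parameter, which shadows the field;
        // "this.name" refers to the member itself. Both forms reach the
        // same stored value whenever no shadowing is involved.
        this.name = name;
    }
}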
Will referencing the variable either way always refer to the actual stored value for the property at runtime?"} {"_id": "171807", "title": "How get and set accessors work", "text": "The standard method of implementing get and set accessors in C# and VB.NET is to use a public property to set and retrieve the value of a corresponding private variable. Am I right in saying that this has no effect on different instances of a variable? By this I mean, if there are different instantiations of an object, then those instances and their properties are completely independent, right? So am I correct in understanding that setting a private variable is just a construct to be able to implement the get and set pattern? I've never been 100% sure about this."} {"_id": "171809", "title": "Are elements returned by a Linq-to-Entities query streamed from the DB one at a time or are they retrieved all at once?", "text": "Are elements returned by a Linq-to-Entities query streamed from the database one at a time (as they are requested) or are they retrieved all at once: SampleContext context = new SampleContext(); // SampleContext derives from ObjectContext var search = context.Contacts; foreach (var contact in search) { Console.WriteLine(contact.ContactID); // is each Contact retrieved from the DB // only when foreach requests it? } Thank you in advance"} {"_id": "241315", "title": "Relation between objects", "text": "For a few weeks I\u2019ve been thinking about relations between objects \u2013 not especially OOP\u2019s objects. For instance in **C++** , we\u2019re used to representing that by layering pointers or containers of pointers in the structure that needs access to the other object. If an object `A` needs to have access to `B`, it\u2019s not uncommon to find a `B *pB` in `A`. But I\u2019m not a **C++** programmer anymore, I write programs using functional languages, especially **Haskell** , which is a pure functional language. It\u2019s possible to use pointers, references or that kind of stuff, but it feels strange to me, like \u201cdoing it the non-Haskell way\u201d. Then I thought a bit deeper about all that relation stuff and came to the point: > _\u201cWhy do we even represent such relations by layering?_ I read that some folks have already thought about that (here). In my point of view, representing relations through explicit graphs is way better since it enables us to focus on the _core_ of our type, and express relations later through combinators (a bit like **SQL** does). By _core_ I mean that when we define `A`, we expect to define what `A` is _made of_ , not what it _depends on_. For instance, in a video game, if we have a type `Character`, it\u2019s legit to talk about `Trait`, `Skill` or that kind of stuff, but is it if we talk about `Weapon` or `Items`? I\u2019m not so sure anymore. Then: data Character = Character { chSkills :: [Skill] , chTraits :: [Traits] , chName :: String , chWeapon :: IORef Weapon -- or STRef, or whatever , chItems :: IORef [Item] -- ditto } sounds really wrong in terms of design to me. I\u2019d prefer something like: data Character = Character { chSkills :: [Skill] , chTraits :: [Traits] , chName :: String } -- link our character to a Weapon using a Graph Character Weapon -- link our character to Items using a Graph Character [Item] or that kind of stuff. Furthermore, when the day comes to add new features, we can just create new types, new graphs and link. In the first design, we\u2019d have to break the `Character` type, or use some kind of workaround to extend it.
What do you think about that idea? What do you think is the best way to deal with that kind of issue in **Haskell** , a pure functional language?"} {"_id": "131153", "title": "What are the common approaches & practices for big applications?", "text": "Developing big applications requires huge effort. There may be teams for each phase of development and they must use different design patterns and go through the whole SDLC. I have two years of experience as a software developer on small applications and I'm interested in what the common approaches & practices for big applications are, especially from a project management perspective. How can I adopt such working practices?"} {"_id": "92059", "title": "Is there such a thing as a Program Manager for Outsourcing Companies?", "text": "According to the first paragraph of Joel's post, a Program Manager is a very good thing to have in products. But what about having a Program Manager for a company that doesn't have a product, but has projects? If you asked me to answer right away, the only answer I'd see is that a Program Manager for an outsourcing company would be someone that oversees all the projects that the company has, ensuring that things are being done in alignment with the company strategy, from a technical point of view."} {"_id": "92058", "title": "Independent Developer Pre-Coding Planning/Design/Architecture", "text": "For the independent developers, or the weeknight/weekend developers, when you are about to begin a large/enterprise project, what are your first steps to take when hashing out the pre-coding details and designs? What are the tools you use? Do you find it easiest to keep with the traditional mechanical pencil and pad of paper? Do you use a text editor to get your ideas and requirements in a bulleted format? Do you use UML? My question is: what's the first step you take when designing a large application? What medium do you use to record your notes? And when are you typically content to start writing code?"} {"_id": "187205", "title": "How to write useful Java programs without using mutable variables", "text": "I was reading an article about functional programming where the writer states > > (take 25 (squares-of (integers))) > > > Notice that it has no variables. Indeed, it has nothing more than three > functions and one constant. Try writing the squares of integers in Java > without using a variable. Oh, there\u2019s probably a way to do it, but it > certainly isn\u2019t natural, and it wouldn\u2019t read as nicely as my program above. Is it possible to achieve this in Java? Supposing you are required to print the squares of the first 15 integers, could you write a for or while loop without using variables? > **Mod notice** > > This question is _not_ a code golf contest. We are looking for answers that > explain the concepts involved (ideally without repeating earlier answers), > and not just for yet another piece of code."} {"_id": "180250", "title": "Would using Quercus make my code fall under the GPL?", "text": "Quercus is an implementation of PHP written in Java, and released under the GPL. If I use it, does my PHP code fall under the GPL? What about my Java code? Assuming I write new Java code, and new PHP code, and use Quercus to run my PHP code on the JVM and call into my Java code, which pieces fall under the GPL?
Also, assume that the result is commercial software that I want to distribute."} {"_id": "50601", "title": "What do you think was a poor design choice in Java?", "text": "Java has been one of the most (the most?) popular programming languages to this day, but this has also brought controversy. A lot of people now like to bash Java simply because \"it's slow\", or simply because it's not language X, for example. My question isn't related to any of these arguments at all, I simply want to know what you consider a design flaw, or a poor design choice in Java, and how it might be improved from your point of view. Something like this."} {"_id": "221108", "title": "Obstacles to using Git Flow in Subversion", "text": "My team at work is starting a new project, using Subversion as our VCS (you may consider this set in stone for the purpose of this question). We're still in the early stages of the project and are trying to agree on a branching model. Our previous project was based on a non-standard version model that led to issues when managing hot-fixes and patches to existing releases. I've found different branching models to be rather complicated, but one model that I do understand fairly clearly is git flow. I'm curious how hard/undesirable it would be to implement a variation of this in Subversion. Obviously there would be some differences in terms of people collaborating on branches. The feature branches would have to be centralized rather than limited to local repositories, but the other concepts of the model should be reproducible in Subversion as I understand it. What would be the drawbacks or challenges to this approach? What I've heard is that in SVN \"merging is expensive\" relative to Git. But I'm not completely clear on what this means in practice or how it would affect our ability to use a git-flow-like branching model. What would be the biggest concerns with this approach? Is there a similarly clear approach that is more natural in Subversion?"} {"_id": "187208", "title": "What is a practical level of abstraction in a web application?", "text": "(Originally asked on StackOverflow - http://stackoverflow.com/questions/14896121/what-is-a-practical-level-of-abstraction-in-a-web-application) I still consider myself a newcomer to OO programming, especially in PHP, so forgive me if I have missed some fundamental principle! Say I have an intranet application with a `staff` object with properties and methods unique to being a member of staff: class staff { ... private $job_title; private $start_date; etc... } In the quest to keep the maintainable parts of my code in one place, make my data model more intelligent and make my code less tightly-coupled, is it reasonable to abstract each object property to another level? For example, I expect each property to have a `value`, but to complicate matters, I'm also interested in what `label` and `description` to use when referring to it in human-readable applications. More of interest is which `type` it is, as well as the capability of the object to validate itself, if its member properties can validate themselves also. It seems then, that properties of the majority of objects have a lot in common. So, the `$start_date` property might actually be an object in itself: abstract class property { public $value = NULL; protected $label; // A simple label which can be used on user forms, reports, etc.
protected $description; // Some detail to describe what this property is used for protected $type; // Helps choose the right HTML form control and validate user input protected $regex; // Regular expression to prompt user entry and validate return value protected $unique = FALSE; // If TRUE, check for duplicates in the persistence layer protected $default_value; // Prompt the user when entering new values protected $HTML_form_element; // Which form element suits this field best? etc.... } class start_date extends property { ... public function __construct(){ ... $this->label = 'Start date'; // Might be an i18n entry $this->description = 'Enter the date on which the contract starts for this staff member'; // Might be an i18n entry $this->type = 'VARCHAR(512)'; // Better to use persistence layer's requirements? $this->regex = '/^([0-9]{1,2})[\/.-]([0-9]{1,2})[\/.-]([0-9]{2,4})$/'; // UK date $this->default_value = date('Y-m-d'); // Depends on the business policy of the company $this->HTML_form_element = 'datepicker'; // Chooses the correct HTML element when building forms automatically ... } etc... } class staff { ... public function __construct(){ ... $start_date = new start_date(); ... } etc... } In previous applications, these related things have lived in different places; random global variables, commented code, decorator objects, a config file, embedded in HTML, a database table, a scrap of paper. It makes sense to bring all an object's properties together in one place, but is this the right way?"} {"_id": "181125", "title": "How to extract operators from the grammar productions for conflict resolution in an LALR parser?", "text": "Is there some standardized or widely accepted algorithm for picking operators in shift/reduce conflicts in an LALR parser? The question is naive; my problem is not with implementing my own solution, but with implementing the solution that is already widely used. For shift, the operator is the next input token; for reduce, it depends -- I consider all already-read symbols (for the given production) that are declared as operators: * if there is one -- it is the operator * if there are more than one -- I report a grammar error * if there is none I use the input token as operator So for example: E ::= E + E E ::= E * E E ::= num in case of E + E | * num Considering the first production I read `+`, and since it is the only operator read, I pick this one as the reduce operator. `*` is the first token in the input so it serves as the shift operator. But is my algorithm correct (i.e. the same as normally used for LALR parsers)? I am especially worried about the case when I have more than 1 operator in the read tokens. * * * ## Update What I am **NOT** asking here I don't ask how to resolve a shift/reduce conflict once you have the operators. I don't ask how to set the precedence for operators. What I **am** asking here I ask how to extract operators from the \"stream\" of tokens. Let's say the user has set some precedence rules for `+`, `-` and `*`. The stack is: `-` `E` `+` `E` and input is `E`. Or the stack is `E` and input is `*`. What is the operator for reduce in the first case? And in the second? Why? _The stack is really the entire right-hand side of the production._"} {"_id": "181127", "title": "Custom error handling", "text": "I'm trying to figure out the best way to handle custom errors in my application. Option 1: if(expr) { } else { $_SESSION['error'] = \"Some message describing the error\"; } Option 2: if(expr) { } else { $_SESSION['error'] = 2; } In option 1, the same error might be recycled in multiple situations.
If I ever decide to change the message then I'll have to dig through my code and find every occurrence. In option 2, I give a numerical value which could reference a master list of errors. This seems to solve the problem associated with option 1 but makes the code harder to read. What should I do?"} {"_id": "181123", "title": "C++ name mangling and linker symbol resolution", "text": "The name mangling schemes of C++ compilers vary, but they are documented publicly. Why aren't linkers made to decode a mangled symbol from an object file and attempt to find a mangled version via any of the mangling conventions across the other object files/static libraries? If linkers could do this, wouldn't it help alleviate the problem that C++ libraries have to be re-compiled by each compiler to have symbols that can be resolved? List of mangling documentation I found: * MSVC++ mangling conventions * GCC un-mangler * MSVC++ un-mangling function documentation * LLVM mangle class"} {"_id": "234500", "title": "Issue with My.Settings using Visual Basic", "text": "For my A2 Computing project I have created a game using Visual Basic. For the leaderboard section, I have used the My.Settings feature to store the scores when the game closes, but only one or two actually save. My teacher doesn't know why, and I can't find anything helpful after a Google search of the problem."} {"_id": "140699", "title": "How to camel-case where consecutive words have numbers?", "text": "Just wondering if anybody has a good convention to follow in this corner-corner-corner case. I really use Java but figured the C# folks might have some good insight too. Say I am trying to name a class where two consecutive words in the class name are numeric (note that the same question could be asked about identifier names). Can't get a great example, but think of something like \"IEEE 802 16 bit value\". Combining consecutive acronyms is doable if you accept classnames such as `HttpUrlConnection`. But it seriously makes me throw up a little to think of naming the class `IEEE80216BitValue`. If I had to pick, I'd say that's even worse than `IEEE802_16BitValue` which looks like a bad mistake. For small numbers, I'd consider `IEEE802SixteenBitValue` but that doesn't scale that well. Anyone out there have a convention? Seems like Microsoft's naming guidelines are the only ones that describe acronym naming in enough detail to get the job done, but nobody has addressed numbers in classnames."} {"_id": "189333", "title": "Modern analog of DDD (Data Display Debugger)", "text": "I wonder if there are visual applications for debugging code in popular languages (like C, Python, JavaScript). By visual I do not mean the debugger with a UI which is bundled with almost every IDE; I'm rather talking about a debugger that can visualize data and the process of changing this data over time. E.g. imagine you're implementing a sort algorithm. It would be very cool to have an application that would save states of the array at each iteration and then would allow you to easily go back and forth between states with good animation of insertions/deletions and moves. I understand that there may be no general utility for that. Instead I expect some environment (like R or MatLab) that can be used to easily create such debugging applications."} {"_id": "189337", "title": "Looking for feedback on my approach to implement an algorithm", "text": "We've recently worked with a mathematician to build us an algorithm.
The algorithm will look at click data and will continuously update data associated with the user, the content, and the content's category, and then pair the user with relevant content. With that said, I have never implemented an algorithm before, but I'm guessing that our current environment (PHP, MySQL) is not entirely suitable for continuously crunching and updating data. **I'm looking for feedback on whether the following approach is on the right track:** * Write the algorithm in Java (or another compiled language) for best performance. * Store user, content, and category data on a NoSQL server (or use memcache). * Use Gearman (or equivalent) to submit click/user data to a job server. * Run jobs on a separate worker server that contains the algorithm. * Update user, content, category data. If this is not on the right track, can you explain why that would be?"} {"_id": "77724", "title": "How does Java resolve class names in a lot of jars?", "text": "Recently I found one of my Maven projects has 100+ jar dependencies. AFAIK a zip archive doesn't have an index at all, so a reader should have to scan the whole zip to determine whether it contains a specific path. But I found that Java resolves class names against so many jars rather fast. Why?"} {"_id": "86920", "title": "What would be a good set of first programming problems that would help a non-CS graduate to learn programming?", "text": "I'm looking at helping a friend learn programming (I'm NOT asking about the ideal first language to learn programming in). She's had a predominantly mathematical background (majoring in Maths for both her undergraduate and graduate degrees), and has had some rudimentary exposure to programming before (in the form of Matlab simulations/matrix operations in C etc) - but has never been required to design/execute complex projects. She is primarily interested in learning C/C++ - so, with respect to her background, what would be a set of suitable problems that would engage her interest?"} {"_id": "86921", "title": "Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?", "text": "TL;DR: what the title says * * * I am developing some sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I never saw this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really get how hashes are calculated, so I was wondering, **if** my first question checks out, if it would be possible to calculate the likelihood that two images are similar by comparing hashes (Levenshtein or something of the sort)."} {"_id": "184574", "title": "Why is Today() an example of an impure function?", "text": "It seems like, when reading something like this Wikipedia article about \"pure functions\", they list `Today()` as an example of an impure function, but it seems pretty pure to me. Is it because there is no formal input argument? Why is the actual time of day not treated as the \"input to the function\", in which case if you gave it the same input, i.e.
executed `today()` twice at the same time, or traveled back in time to execute it again (maybe a hypothetical :) ), the output would be the same time. `Today()` never gives you a random number; it always gives you the time of day. The Wikipedia article says \"different times it will yield different results\", but that's like saying that for different `x`, `sin(x)` will give you different ratios. And `sin(x)` is their example of a pure function."} {"_id": "57811", "title": "Embedded Web Server Vs External Web Server", "text": "So I've thought of creating a web application in either Lisp or another functional language and was thinking of embedding the web server into the application (have my application handle the HTTP requests). I don't see any issues with that; however, I'm new to creating web applications (and in the grand scheme of things, programming as well). Are there any drawbacks to handling HTTP requests within your program instead of using a web server? Are there any benefits?"} {"_id": "184570", "title": "How to make an ASP.NET MVC site modular", "text": "I'm in the planning stage for an employee intranet system to be built with ASP.NET MVC 4. We'd like the site to consist of separate \"modules\", each of which provides a different feature: messaging, payroll changes, etc. I'd like these modules to be able to be enabled or disabled at compile time. The homepage will display some kind of navigation that will link to each module that is loaded. That's easy so far, but I don't want the navigation feature to have to know about the modules beforehand. In other words, I want the modules to be dynamically discoverable; I want to be able to write the code for a new module and then have a link added to the navigation bar with no code changes anywhere else in the source. Each module should have some way of registering itself with the navigation bar, and--more importantly--this should be done for each module as it's loaded. I believe that this precludes using MVC's Areas, since those are designed for the case when the layout of the site is known beforehand. MEF seems like it might be appropriate, although people seem to have had mixed success in combining MEF with MVC. Is MEF actually the way to go here, or is there a better way to accomplish what I need?"} {"_id": "43927", "title": "How to handle Real Time Data from a database perspective?", "text": "I have an idea in mind, but **the database area still confuses me**. Imagine that **I want to show real time data** , and using one of the latest browser technologies ( _web sockets_ \\- even using older browsers) it is very easy to show to all observers (user browsers) what everyone is doing. **Remy Sharp** has an example of the simplicity of this. But I still don't get the database part: **how would I feed it?** Let's imagine (using Remy's game Tron) that I want to save **the path for each connected user** in a database, and if a client wants to see what is going on with a **5 sec delay** , he will see that, not only the 5 sec until that moment but **the continuation in time** ... how can I query a DB like that? SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate); is not the recommended path, right? And polling every x seconds ... is not a real data feed, correct?
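What I picture instead is keeping a cursor and only fetching rows newer than it, something like this minimal sketch (plain JDBC; the table and column names match my example query, everything else is hypothetical):

import java.sql.*;

// Minimal polling sketch: replay a path by advancing a cursor past the
// last row already shown, instead of re-querying a fixed 5-second window.
class PathReplay {
    void replay(Connection conn, Timestamp startAt) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT x, y, rundate FROM run WHERE rundate > ? ORDER BY rundate");
        Timestamp lastSeen = startAt; // e.g. five seconds in the past
        while (true) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    int x = rs.getInt("x"), y = rs.getInt("y");
                    lastSeen = rs.getTimestamp("rundate"); // advance the cursor
                    // push (x, y) out over the web socket here
                }
            }
            Thread.sleep(100); // poll interval; crude compared to a real push feed
        }
    }
}

But this is still polling, which brings me back to my question.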
If someone can help me understand the database point of view, I would greatly appreciate it."} {"_id": "219295", "title": "Is It Common For a Company to Have Their Own Definition of Job Titles such as Software Architect", "text": "Recently I have become very confused about my position in the company. Months ago, this company offered me the position of Software Architect and I accepted. But about one year has passed and I have come to realize that this \"Software Architect\" title in this company is almost totally different from the one I used to know and the one I see people discussing here. Below are several items describing what a software architect is responsible for. They are from This Post: A Systems Architect should: 1. Specify the high-level architecture 2. Participate in code reviews 3. Approve technologies used 4. Assist the developers in their coding effort 5. Maintain and monitor the development schedule 6. Produce SA artifacts, such as UML diagrams, Gantt charts and the like. But as a \"Software Architect\" in this company, I can only touch the 6th item, together with a lot of coding. So this title in my company is more like a Senior Developer rather than a Software Architect. My questions are: 1\\. Is it common for a company in the IT industry to have their own definition of Job Titles? 2\\. If the answer to question 1 is No, then is it legal for a company to do that? **Update** The reason why this issue became a problem recently is that our General Manager seems to want me to be responsible for a project which is being developed by many members, but he **never** authorizes me to lead other team members or coordinate between teams (coordination among teams is a must, especially when developing software suites). Instead, he just came to me and blamed me when the project didn't progress as expected. Though I designed all the interfaces and classes of some modules for other members to work on and did lots of coding, pushing and guaranteeing the progress is beyond my responsibility, especially when this Software Architect is not the usual one we see. I feel it is so unfair to me!"} {"_id": "205771", "title": "Is PHP \"list()\" language construct a bad convention?", "text": "PHP supports the `list()` language construct which, in short, allows you to return multiple values from a function and then assign them to different variables, e.g.: function myBigReturn(){ return array(\"foo\", \"bar\"); } list($fooer, $barer) = myBigReturn(); echo $fooer; // echoes \"foo\" echo $barer; // echoes \"bar\" I have failed to find much info about such a language construct and I am curious - is usage of the PHP `list()` construct considered a bad coding convention? Are there any serious articles/literature on this subject?"} {"_id": "219292", "title": "WPF and MVVM printing report strategy", "text": "I am developing a WPF application and I need some clarity on printing in WPF and MVVM (I am using MVVM-Light). I want to create standard-looking reports for my application and I don't want to have to code each one from scratch. But I want to write this solution myself, without stuff like Crystal Reports etc. I am open to other open source examples, of course. 90% of my reports will be generated from DataGrid ItemsSources. I want to have a report footer and header and then the \"records\" that get paged in a content section.
**My idea is:** use a FixedDocument as the report wrapper (with header and footer properties), use a ListView for the records in the report content, and generate the report to XPS, which gets opened in a DocumentViewer for the user to review and print etc. But I have never created reports from WPF before and it looks like there are lots of options and no clear best practice. I am looking for some suggestions or advice on which direction to go from someone with some experience creating custom reports in WPF while keeping MVVM. What was your strategy?"} {"_id": "135708", "title": "Is there a pattern that will help with this data structure", "text": "I'm doing a Java project. My main structure contains 2 lists: one with elements of type A, the other of type B. B itself contains a list of objects which may contain elements of A. It must be that when an element of A is removed from the main list, it is removed from all subelements in list B. Also, if a possible member of list A is added to a B, it should also be added to A. And I also need some way to find the parent objects containing an A. So far I have a \"working\" implementation - using lots of loops. I am wondering - can you suggest patterns that will help me in this task? * * * More details: I think my main issue boils down to the fact that I have A objects that have multiple parents. And when I add/remove from one parent I need to adjust some other parents. I can't help but believe such a problem is solved already. * * * To clarify: I have a main `List<A>`, and each `B` contains a `List<A>`. When an A is removed from the main list it must be removed from all B, but not when it is removed from a B. It's essential that all A used in the application are present in the main list."} {"_id": "135709", "title": "What are the best resources for learning about concurrency and multi-threaded applications?", "text": "I realised I have a massive knowledge gap when it comes to multi-threaded applications and concurrent programming. I've covered some basics in the past, but most of it seems to be gone from my mind, and it is definitely a field that I want, and need, to be more knowledgeable about. What are the best resources for learning about building concurrent applications? I'm a very practically oriented person, so if said book contains concrete examples, all the better, but I'm open to suggestions. I personally prefer to work in pseudocode or C++, and a slant toward game development would be best, but not required."} {"_id": "139287", "title": "What is the difference between Soak testing and Stress testing?", "text": "Can anybody explain the difference between soak and stress testing? I googled them and found that both involve testing the software beyond its limits. Is that right for both testing strategies?"} {"_id": "140967", "title": "How should I design objects around this business requirement?", "text": "This is the business requirement: \" A Holiday Package (e.g. New York NY Holiday Package) can be offered in different ways based on the Origin city: * From New Delhi to NY * **From Bombay to NY** * NY itself (Land package) (Bold implies default selection) **a.** and **b.** User can fly from either New Delhi or Bombay to NY. **c.** NY is a Land package, where a user can reach NY by himself and is a standalone holidayPackage. \" Let's say I have a class that represents **HolidayPackage** , **Destination** (aka City).
public class HolidayPackage { Destination holidayCity; ArrayList<BaseHolidayPackageVariant> variants; BaseHolidayPackageVariant defaultVariant; } public abstract class BaseHolidayPackageVariant { private Integer variantId; private HolidayPackage holidayPackage; private String holidayPackageType; } public class LandHolidayPackageVariant extends BaseHolidayPackageVariant { } public class FlightHolidayPackageVariant extends BaseHolidayPackageVariant { private Destination originCity; } What data structure/objects should I design to support: * options * a default within those options Sidenote: A HolidayPackage can also be offered in different ways based on Hotel selections. I'd like to follow a design which I can leverage to support that use case in the future. This is the backend design I have in mind."} {"_id": "139281", "title": "Training a company to use a DVCS coming from a CVCS mindset, is it as hard as one would think?", "text": "So, I'm preparing to consider the outcome of training a lot of people (>25) to use **Mercurial** coming from a centralized mindset. I've done it with **individuals** and had success with it, although the time invested in each one has been different and most of them have been both proactive and open to trying something new. I was wondering if there was someone here with experience giving such training to a large group of people **as I would like to find out what the do's and don'ts are... of course, this would be more oriented towards dealing with those resistant to change**, and one don't I can think of is telling them to avoid cheat-sheets mapping commands, as Joel Spolsky suggests."} {"_id": "135697", "title": "Why is it so difficult to make C less prone to buffer overflows?", "text": "I'm doing a course in college, where one of the labs is to perform buffer overflow exploits on code they give us. This ranges from simple exploits like changing the return address for a function on a stack to return to a different function, all the way up to code that changes a program's register/memory state but then returns to the function that you called, meaning that the function you called is completely oblivious to the exploit. I did some research into this, and these kinds of exploits are used pretty much everywhere even now, in things like running homebrew on the Wii, and the untethered jailbreak for iOS 4.3.1. My question is: why is this problem so difficult to fix? It's obvious this is one major exploit used to hack hundreds of things, but it seems like it would be pretty easy to fix by simply truncating any input past the allowed length, and simply sanitizing all input that you take. **EDIT: Another perspective that I'd like answers to consider - why do the creators of C not fix these issues by reimplementing the libraries?**"} {"_id": "194026", "title": "Suggestions to improve small team workflow (CI / Deployment)", "text": "I'd like to improve my team's workflow and architecture. Right now we have a LAMP dev server on which every member of the team has a subdirectory. We work directly in this directory via LAN. There is also a \"release\" directory for when we're ready to upload via FTP to the production server. We use a private GitHub repo that serves as a centralized repo.
Example of a programmer's current daily workflow: Choose a user story to work on after a quick daily meeting. Open a git branch named after the user story. Work, commit some changes. Push to GitHub. When I decide to deploy, I pull the changes, perform a code review, change the needed variables for production and upload via FTP. If changes were made to the dev database I replicate them on the production server's database as well. Any ideas where I should begin to improve this situation? Current issues: * Members of the team can't easily work from home since they'd have to edit on the dev server directly (lag issues) * Deployment process is awful (changing variables manually and FTPing to production) EDIT: I'll try to add better questions. * Setting up subdirectories for each developer on a common dev server: good idea or not? Possible alternative: each developer sets up his own dev environment (using WAMP or similar) and pushes to a test server. * For the deployment aspect I feel manually changing config variables and FTPing to production is a bad idea. Possible alternative: setting up git on the production server, putting config files in .gitignore (maintaining them manually directly on the production server) and git pushing the changes. * For remote developers, should they replicate the dev environment (again, WAMP or similar) but work on the dev server's database? I think it's better than copying the database onto local machines."} {"_id": "249720", "title": "Deny use of my library to compete with a certain company", "text": "I'm developing a library for reading/writing a file format of a program created by a certain company. The guys from that company were so kind that they provided me with documentation for their format, which I'm really grateful for. I'm intending to release the library as open-source under a license \"derived\" from MIT, the difference being that the library must not be used to create a program that would compete with the original program, as an expression of my gratitude to the company. What I'm asking: is adding this rule to the license legally possible?"} {"_id": "99487", "title": "Good way to learn how to solve questions on InterviewStreet", "text": "> **Possible Duplicate:** > How do I adapt to pre-interview challenge questions? InterviewStreet is a new company that essentially acts as a filter for companies to find programmers that can code. My problem is my math is fairly weak and I'd like to study it, even if it's from the ground up, to be able to solve questions such as this one, which is found on their site: `Find the no of positive integral solutions for the equations (1/x) + (1/y) = 1/N! (read 1 by n factorial) Print a single integer which is the no of positive integral solutions modulo 1000007` Now, please do NOT post the answer to that question; it is taken directly from InterviewStreet and should not be posted here. It is not the answer I'm seeking in this thread. What I'm asking is a more fundamental question which probably can be answered by some of the hackers in the SO community. How does one prepare for such a question? What resources are available for me to study/learn how to solve this type of problem? Is this covered on MIT OpenCourseWare? Khan Academy? Any particular books?
I'm not even sure where to begin solving the problem above, and I'd like to learn what steps I can take to do so."} {"_id": "194027", "title": "Do Text Editors Sign The Files?", "text": "As images have EXIF data which provides info about where and how they were created, do any text editors append that kind of descriptive detail to code files? For example, I use Coda for my programming work. Does Coda sign any files created by itself? Is it possible to find out the author of two different projects by just looking at the innards of the files?"} {"_id": "75767", "title": "How hard is Python to learn?", "text": "I'm on the list of the tutors in my university _(for PHP, MySql, CSS, etc)_ but someone contacted me asking me to teach them entry-level Python. Would I be able to get up to speed quickly enough to be able **to teach it to someone else?** (I've coded in PHP, Java, C#, Javascript, jQuery, a tiny bit of C++ and ASP.NET. I'm a Computer Science student. Just finished my third year.)"} {"_id": "134252", "title": "How can a child state machine relinquish control back to the parent state machine?", "text": "My top-level state machine has some states and edges. I will call this the parent state machine. A ----> B ----> C Any state within the parent state machine can be a state machine too. I will call these child state machines. ___________ / \\ A ----> | B0->B1->B2 | ----> C \\____________/ If the parent state machine transitions from A to B, B's state machine takes over. Once B is done running, how should it relinquish control to the parent state machine and transition to state C? Which design pattern do you use? If you're wondering, I have child state machines within parent state machines because my exact project is quite complex and it is natural to encapsulate the internal workings of a child state."} {"_id": "138775", "title": "Choosing a functional language platform for a new project", "text": "I have been writing code for a few years now and I don't believe I can claim to have complete knowledge of this job yet. My experience primarily revolves around C#-related areas, with decent knowledge of Silverlight and ASP.NET MVC as well. I tend to use SQL Server and Postgres for normal RDBMS-related tasks, with MyBatis or LINQ to SQL as the ORM (and Cassandra as well for NoSQL). Given that background, I am looking to build a new project which is a normal application using a database and user interface screens. It is a considerably big project which we plan to roll out as a product. In recent times, I see a lot of noise being made about functional programming languages like Haskell and F#. I am wondering \\- if it is worth building the new system using Haskell \\- or just stick to a regular application building model using Silverlight/Prism with a layering similar to onion architecture. I understand the benefit of staying with the same technology background, as it is always good to work on a familiar technology. I also heard that by moving towards a functional language like Haskell we get cleaner code, improved testability etc. Is it really worth considering a move to build new systems in Haskell? Or is it yet to be proven in production environments? Simply put, would I be safe if I built the new project using Haskell and my preferred database, or is it too risky at this point in time to choose Haskell? Any help appreciated. Edit: I have already viewed a few other threads like the following and a few more as well, but did not get a definitive answer.
Choosing a functional programming language"} {"_id": "5160", "title": "How does a one-person ISV handle software patent research? What due-diligence is required?", "text": "I'd like to sell software, but I'm worried about infringing on software patents. What should I do/research prior to writing code?"} {"_id": "198274", "title": "Introducing Fowler's Table Data Gateway to refactor poorly designed systems", "text": "I am developing an application, which currently has about 150,000 lines of code. The previous developer didn't really use any discipline when writing code. The application is in production but is continually developed. I have read Martin Fowler's book (Patterns of Enterprise Application Architecture) and it talks about 'Transaction Script' and 'Data Access Objects'. These are the patterns used, i.e. there is a class called Person, which contains everything Person-related, and a class called Order with everything Order-related. The functions are not reusable because they contain everything, i.e. data access logic, business logic etc. For example, Person.GetPerson will connect to the database, find the person, check the age of the person, get all the orders linked to the person etc. I am thinking about introducing what Martin Fowler terms a Table Data Gateway. I am seeing this as a longer-term refactoring project. The problem is that this will mean inconsistency to begin with, i.e. data access logic will be contained in the new Gateway, but also in the Transaction Script classes (where the other developer put it). Is it a bad idea to go against the original developer's style of coding?"} {"_id": "200692", "title": "Is there a comparative study of the memory consumption of programming languages runtimes, correlated with expressiveness and production bug ratios?", "text": "There are many comparative studies available online when it comes to the runtime performance of applications built using one language or another. Some are driven by corporations, some academic, some just personal experiment reports. We also get a decent share of comparative studies on side-effects of a programming language and its tooling, like: * build times, * likelihood of post-production bug detection, * expressive power, * etc... However, I've recently been bummed out by the memory consumption of my programs more than anything else. This might come from the fact that while Moore's Law is on our side for raw performance, we have come to realize that other bottlenecks matter more. That, and I don't tend to update my hardware all that often, and I have some \"old\" machines (read: 2005-2006 3.6GHz Pentium 4 with 4GB of RAM) that nowadays are hard-pressed to be usable for large applications without requiring me to go through great trouble to squeeze every bit of juice out of them (choice of OS, UI, tweaking of services and daemons, choice of applications to use for one task or another...). Quite honestly, sometimes I fire up `top` or `procexp` and weep at the sight of the memory used by the most innocent programs. I can address this by continuing to push in the direction listed above, and essentially trying to limit myself and the programs I use (I have a dear love for CLI programs for that reason, I guess), but I also cannot help but think that maybe we're doing it wrong. ### Modern Tools for Modern Needs Of course, higher-level languages are arguably better and justify their share of dead weight. Some design choices were made for good (or supposedly well-intended) reasons at the time, in many toolchains.
Shared libraries, memory models, pre-processors, type systems, etc... But some might be more viable than others with our modern hardware, and I'd be curious to read a few serious studies on the matter. So, my question is, is there a counterpart to the Benchmarks Game and others that focuses on a comparison of the languages' base runtime memory consumption? And even further, are there some studies that cross-reference this with other parameters (similar to what this article did, for instance, for other criteria, also based on the Benchmarks Game)?"} {"_id": "246444", "title": "How should one implement the dependency-injection for a geocoding client that aggregates different coordinate provider implementations?", "text": "I am new to DI and I would like to know how DI might be used to help resolve this problem. If I have an `ILatLongLocation` which implements a `Latitude` and `Longitude`, then given two of these I can produce a `Distance` using a variety of algorithms, but I can also reasonably pick one of these as the \"best\" algorithm at design-time without losing any sleep. However, since users will rarely input physical coordinates, I may need to instead compare by zip codes or something similar. Now, in order to resolve zip codes to coordinates, I could apply a naive algorithm that uses string concatenation, etc., to make a rough guess as to the physical coordinates, I could pick a random set of coordinates, etc. But in all likelihood, I will need to resolve the zip codes by using some sort of database mapping zip codes to coordinates. So being new to DI, I would like to know if the following is a reasonable application of DI and/or if I am overthinking it. `ILocatable` has `ILatLongLocation` `ILocatableProvider : ILocatable` has a `T RawLocatable` and can return an `ILatLongLocation`. `IZipCodeLocatableProvider : ILocatableProvider` has a `string RawLocatable` and returns an `ILatLongLocation`. `NaiveZipCodeLocatableProvider` is an implementation that just uses patterns in zip codes that are well-known to provide a best-effort guess as to the coordinates. `DbZipCodeLocatableProvider` is an implementation that uses a database (maybe through a `DbContext` or somehow else) to return the coordinates. Now `DbZipCodeLocatableProvider` itself must depend on some sort of repository or context or connection or something to access a database (unless it is literally hard-coded)... so then this means I need to have a `DbContext` or something similar... here is where I get lost. Am I thinking about this the right way? Is this a legitimate scenario for dependency injection? And how do I discriminate between, say, different connection strings vs different strategies altogether for resolving zip codes to coordinates? Edit: ...and don't forget about `WebServiceZipCodeLocatableProvider` ... there's another (realistic) option."} {"_id": "246445", "title": "Is it safe to rely on static analysis to \"reproduce\" concurrency bugs reliably?", "text": "I've inherited some Java code which I suspect harbours some concurrency bugs when synchronizing between a thread that queries data and an IO event that updates the same data. I'm trialling a static analysis tool called ThreadSafe which indeed detects various concurrency issues (e.g. a field accessed from an asynchronously invoked method without synchronization, and inconsistent synchronization of accesses to a collection).
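To illustrate the first kind of finding, here is a stripped-down sketch of the pattern the tool flags (my own minimal example, not the actual inherited code):

import java.util.HashMap;
import java.util.Map;

// Minimal example of the flagged pattern: an IO callback mutates state that
// a query thread reads, with no common lock or other synchronization.
class QuoteCache {
    private final Map<String, Double> latest = new HashMap<>();

    // Called from the IO thread when an update event arrives.
    void onUpdate(String key, double value) {
        latest.put(key, value);     // unsynchronized write
    }

    // Called from the querying thread.
    Double query(String key) {
        return latest.get(key);     // racy read; HashMap gives no guarantees here
    }
}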
After discovering how difficult it is to reproduce data races reliably by writing unit tests, I wondered if it's advisable to depend on ThreadSafe to \"reproduce\" the bugs reliably. What I'm asking is: is it safe to depend on the static analysis tool to tell me when I've fixed the bugs? (Of course, to fix the bugs I will need to understand each one in context)."} {"_id": "237213", "title": "Maze generation given intersection tree", "text": "**Background Information:** I'm building a 2D maze generator. I have tried Prim's algorithm, Wilson's algorithm, and a recursive backtracker algorithm for generating my maze; however, I was not satisfied with the difficulty of any. I have decided to create my own. I decided that two things make a maze hard. First, mazes can have lots of intersections and choices to make. Second, they can be disorienting and cause you to lose your way. I decided to create a tree to represent the intersections and dead ends in a maze and connect each node in the tree with a randomly generated path to disorient users. **The problem:** If I begin to generate the cells in the maze for the tree, I may find that a node doesn't have the room it needs to connect to or create its children. How do I fix or avoid this problem? **My Thoughts:** It seems like there might be a way to do this by dividing the maze into sections and subdividing them, but that still doesn't guarantee enough room at the end of the division. I could also try to start small and work my way up, sectioning off smaller areas and then creating connections between them, but that could still run into pathing issues with not having enough room to connect the sections together, or even create really long paths between sections. I am using a hexagonal grid, but any solution you guys come up with for rectangular grids should be easy to transfer to a hexagonal one. I wasn't sure if this should be posted in the theoretical computer science section or here, and opted for the more general one."} {"_id": "128304", "title": "Is there any way to test how the site will perform under load", "text": "I have made an ASP.NET MVC website and hosted it with a shared hosting provider. Since my website is built around a very generic idea, it might have a number of concurrent users sometime in the future. So, I was thinking of a way to test my website's performance under load, i.e. how the site will perform when 100 or 1000 users are online at the same time and surfing the website. This will also help me understand whether my LINQ queries are well written or not."} {"_id": "125151", "title": "Should the deploy script be an artifact of the build?", "text": "This is a web project written in Java. So, I'm writing the build and the deploy scripts. To create the build, I used Ant. The continuous build is done with Jenkins. The build generates 3 different artifacts: 1. The war file 2. A zip with layouts 3. A zip with images So far, so good, but now I need to write the deploy script, which should: * Deploy the war (artifact 1) to the Tomcat running at _server 1_ * Place artifact **2** at _server 1_ in a specific directory * Place artifact **3** at _server 2_ in a specific directory So I was talking with my colleague and he said that we should also generate an artifact (maybe _deploy.xml_) that deploys these artifacts when placed at the correct server.
So there would be another script, that would: * Download the Jenkins artifacts * scp to each server and place the deploy.xml there * remotely invoke the deploy.xml What makes me a little uncomfortable is the act of having the deploy.xml as a build artifact. The motivation behind this would be to be able to perform a deploy without needing to have access to the VCS repositories, so a build would be self-contained, i.e., any build could go into production only with what was generated by Jenkins. Where should the deploy scripts be placed? Should they be **only** in the VCS or should they be build artifacts too?"} {"_id": "128300", "title": "Right Tools For The Job? One Location vs Multiple?", "text": "I'm currently using Codebase for all my project management (where I'm learning Agile). I use it for tracking bugs, user stories (via tickets), wikis, files and git hosting. However, I'm looking to improve on this and find some tools better suited to things like bug tracking, managing stories, git hosting, collaboration, client feedback etc. I'm part of a small team of 3 or 4 developers. Does anyone have any recommendations? My initial thoughts based on research were: * Project Management: JIRA * Bug Tracking: JIRA * User Stories and Agile: Pivotal Tracker * Git hosting: JIRA * Storage: Dropbox However it seems this is quite disjointed, in that it could prove more work than it's worth. I'd appreciate any advice on the above or tools which you've used and found work great for small-medium teams and suit Agile. My SPECIFIC requirements are: Bug tracking, managing stories, git hosting, remote team collaboration, client feedback, time tracking, storage, build management, project lifecycle progression data... I also found Assembla which seems excellent. One of my team members believes in \"the right tool for the job\" and that there is no single tool to manage all of this. I was hoping for a central location."} {"_id": "128302", "title": "Move from Curl to Aria2?", "text": "We have a crawling engine that uses curl and caters to about 400,000 people/month. Although curl supports concurrent downloads, it does not support bandwidth limiting (only in PHP 5.4.0), which is why my boss wants me to move to aria2. He also says aria2 is faster (which seems to be true to me as well). Aria2 would require a large amount of changes to the system. Right now we have a crawling system which does not crawl in a concurrent manner. Would moving to aria2 be a good decision?"} {"_id": "146692", "title": "Why do popular websites store very complicated session-related data in cookies -- and what does it all mean?", "text": "As web developers, we all learn that sessions help overcome the problems related to the stateless nature of HTTP. We create a unique session id, and send it to the browser -- and when the browser sends the same id back to us, we identify the user easily. All this sounds pretty straightforward, and NOT so complicated to implement in any language. ## NOW Look at the following screenshots I took. These show the kinds of cookies popular websites store. It looks like they are storing multiple session IDs, or they are trying to hide the actual id by setting so many cookies, or they are taking some very specialized security measures to prevent session hijacking and other related problems. Or, whatever.
## Gmail (before login) ![enter image description here](http://i.stack.imgur.com/RKLmL.jpg) ## Gmail (after login) ![enter image description here](http://i.stack.imgur.com/9pWMf.jpg) ## Facebook ![enter image description here](http://i.stack.imgur.com/f760r.jpg) ## StackExchange ![enter image description here](http://i.stack.imgur.com/CIXd4.jpg) (Don't worry, you can't steal my session -- it's stale and incomplete :)) So, my question is -- what purpose does this complexity serve? Please explain what these different cookies mean (in general), and for what purposes they are set. Lastly, give me a hint on how I can do it (and whether I should) in my own apps. One more question: In a lot of instances, the values in the cookies look like they are URL-encoded -- why so?"} {"_id": "146690", "title": "Call stack starts at bottom or top?", "text": "A stack is something that piles bottom-up. Hence a call stack adds new items to the stack when functions are called, with items being removed from the stack as each function ends, until the stack is empty and then the program ends. If the above is correct, why do people refer to control moving \"up\" the call stack? Surely control moves _down_ the call stack until it reaches the bottom."} {"_id": "125159", "title": "Selling LINQ To Management", "text": "I recently started working at a new company that currently doesn't use LINQ, but does use C# and the .NET framework. I'm coming from a LINQ background, so I'm biased, but I still think there are a lot of advantages to using LINQ. That being said, it's still going to be hard to influence change as I'm still very new. What suggestions do you have to sell this to management? The database systems are still relatively new, so I think now, as things are still developing, would be the best time to try convincing people to use this. This is both a technical and political question. Any help would be appreciated."} {"_id": "146695", "title": "Isolated environment for software stacks", "text": "I was sure that I would find this question, but I couldn't. How do I create an isolated development environment? In other words, a sandbox, where I can install different combinations of web servers, databases, and other software packages and play with them, without cluttering my system packages and without manually downloading packages from official websites. Something that is for software stacks what virtualenv + pip is for Python. Before this I tried to install OSes inside QEMU/KVM, but that is overkill and a bit complicated (I couldn't set up bridging, for example). Is it possible to create isolated virtual environments without running virtual machines with full-blown operating systems? I use Debian GNU/Linux. **Edit (01.08.12):** A similar question on Unix.SE with more elaborate answers - http://unix.stackexchange.com/q/31136/11397"} {"_id": "84784", "title": "Inspirational software for end-users written in Haskell?", "text": "I think great technology is invisible. Besides the usual suspects (GHC, Xmonad, proprietary trading software), what great examples are there of end-user software written in Haskell? I think good examples are FreeArc, Hledger and \"Nikki And The Robots\". Do you have more examples (full-blown GUI apps, small CLI tools, etc)? **Edit:** For example, I am fascinated by Wings3D, because, while it's written in Erlang, users cannot tell. It just works. Among Haskell's weak spots are cross-platform GUIs. There are not many GUI apps written in Haskell in general, and most of them are not easy to use, install or even compile.
What are good examples to learn from of how to make hard things look easy?"} {"_id": "76467", "title": "How to interpret programmer salary survey data, especially concerning different job titles?", "text": "I've been trying to understand how my compensation stacks up against the competition. I understand that this can be a bit of a localized question, so I'll do my best to avoid local issues. There are a number of web sites that provide salary survey data about different jobs. I've been looking at the data on careerbuilder.com in particular. I would like to use this data to understand how much salary I would be wise to ask for in negotiation, but it seems impossible to interpret. Let me start with job titles. I have a background in mathematics and computer programming. I am currently doing database-driven web development using mostly SQL, .NET, and JavaScript. I do much more programming than I do web design. In my current position, I was told that I would be a \"Web Developer\". I was told that I would be junior to the \"Application Developers\", but senior to \"Junior Developers\". That was fine with me. After several months had lapsed, I received (with no prior notice) a new nameplate for my cube that bore the title \"Application Developer\". The developers who had previously been called \"Application Developers\" are now called \"Sr. Systems Analysts\". As far as I know, these new nameplates did not accompany any actual organizational change or pay differences. However, when I go to the salary surveys on careerbuilder.com and enter \"computer programmer\", \"application developer\", \"web developer\" and other seemingly similar titles, I get wildly divergent results. Even more divergent are the results on Glassdoor.com, which reports survey results from individual companies. Consider this example: ( http://www.glassdoor.com/Salaries/net-salary-SRCH_KO0,3.htm ) that has differences of over 100% salary between different companies for the same title!"} {"_id": "232190", "title": "Regarding interfaces of classes in OOP", "text": "When one says \"a class' interface\": Does he/she refer to all of the get and set methods - or do they refer only to the methods' signatures and return types, without the inner implementation of these methods? (That is, the implementation of a 'get' method isn't considered part of the interface, but rather part of the inner implementation of the class. The interface is the return type for that method and the signature of that method)."} {"_id": "129478", "title": "Guide developers to not forget their obligations", "text": "> **Possible Duplicates:** > What is the Best Way to Incentivize a Team of Developers? > How do you motivate peers to become better developers? Aside from developers' tasks, there is minor stuff that each developer should do. In our company, the project leader spends some time with each developer to guide them on how to do several things, especially the inexperienced ones. Of course each developer has their tasks, but they also have notes that either they or the project leader wrote down. Those notes can be like: 1. You should change the flow of this method, 2. I don't like the way you solved this issue, 3. Consider a faster solution 4. etc. Those notes can of course be written down! But laziness is an issue, even if the developer has the abilities required. My question is: How do we 'force' them to become more hypochondriacal than lazy?"} {"_id": "246196", "title": "Is using MultiMaps code smell?
If so, what alternative data structures fit my needs?", "text": "I'm trying to model nWoD characters for a roleplaying game in a character builder program. The crux is that I want to support saving to and loading from YAML documents. One aspect of the characters is their set of skills. Skills are split between exactly three 'types': Mental, Physical, and Social. Each type has a list of skills under it. My YAML looks like this: PHYSICAL: Athletics: 0 Brawl: 3 MENTAL: Academics: 2 Computers My initial thought was to use a multimap of some sort, with the skill type as an enum and the key to my map. Each skill is an element in the collection that backs the multimap. However, I've been struggling to get the YAML to work. On explaining this to a colleague outside of work, they said this was probably a sign of a code smell, and they've never seen it used 'well'. Are multimaps really a code smell? If so, what alternative data structures would suit my goals?"} {"_id": "36662", "title": "Finding the time to program in your spare time?", "text": "I've got about a dozen programming projects bouncing about my head, and I'd love to contribute to some open source projects. The problem I have is that, having spent the entire day staring at Visual Studio and/or Eclipse (sometimes both at the same time...), the last thing I feel like doing when I go home is program. How do you build up the motivation/time to work on your own projects after work? I'm not saying that I don't enjoy programming, it's just that I enjoy other things too, and it can be hard to even do something you enjoy if you've spent all day already doing it. I think that if I worked at a chocolate factory the last thing I'd want to see when I got home was a Wonka bar."} {"_id": "85789", "title": "Why profile applications using AOP?", "text": "When tuning performance in a web application, I am looking for good and light-weight performance profiling tools to measure the execution time for each method. I know that the easiest profiling method is to log the start time and end time for each method, but I see more and more people using AOP to profile (adding @profiled before each method). What's the benefit of AOP profiling compared to the common \"log\" way? Thanks in advance, Vance"} {"_id": "133704", "title": "When calling a method should we use base.methodname and this.methodname?", "text": "In C#, with an inherited class set -- when calling a method, should we use the keywords 'base.methodname' and 'this.methodname'... irrespective of whether it is an overridden method or not? The code is likely to undergo changes in terms of logic, and maybe some IF-ELSE-like conditions may come in at a later date. So at that time, the developer has to be compelled to revisit each line of code and ensure that he/she makes the right choice of which method is being called --- base.methodname() or this.methodname() --- ELSE the .NET framework will call the DEFAULT (I think it's base.methodname()) and the entire logic can go for a toss."} {"_id": "85780", "title": "Should I buy Clean Code after reading The Clean Coder?", "text": "I'm currently reading _The Clean Coder_ by Robert C. Martin. It's a great book and I'm learning a lot from it. My objective is to become a \"professional\" programmer, so I'm trying to learn the most I can when I'm not at my office. I was wondering if I should buy _Clean Code_ from the same author. Is it the natural \"sequel\" to _The Clean Coder_?
Or does it cover so much of the same ground that its purchase isn't worthwhile?"} {"_id": "85783", "title": "Technology Selection for a dynamic product", "text": "We are building a product for the procurement domain in Java. The following are the main technical requirements: 1. Platform independent 2. Database independent 3. Browser independent In terms of functional requirements, the product is very dynamic in nature. The main reason is that the procurement process differs from client to client around the world. Briefly, we need a dynamic workflow engine and a dynamic template engine. The workflow engine is one with which we can define any kind of workflow, and the template engine allows us to define any kind of data structure; based on the definition, it can collect user input through the workflow. We have been developing this product for almost 2 years. It has taken a long time to get a handle on the dynamic nature of the requirements. By now we have developed a basic workflow and template engine, which is in use at one of the clients. We have been using the following technologies: 1. GWT-Ext (front-end framework) 2. Hibernate (database layer) Along the way we faced some issues with GWT-Ext (mainly browser compatibility) and with database optimization due to subclassing in Hibernate. To resolve the GWT-Ext issue (it has a dying community), we decided to move to SmartGWT. In SmartGWT we faced issues related to loading, and we have now concluded that GWT 2.3 will be the way to go, as the library is rich and performance is up to the mark. We have almost finalized a GWT- and Spring-based front and middle layer. In Hibernate, we found major issues with subclassing: it was generating astronomical queries, and sometimes it would stop firing any queries for 5-10 seconds, or maybe around 30 seconds, and then resume again. A few days back I came across an article related to ORM. I am a traditional .NET SQL developer and I have always worked with relational databases. Reading through this article, I found it related to the issues I face. I am still not completely convinced about using Hibernate, and this article just supported my opinion. The following are the questions for which I am looking for answers: 1. Should we be going with Hibernate given dynamic database requirements and heavy data loads in the future? How can we partition the data, how can we efficiently join the data, how can we optimize the queries? If the answer is no, then how do we achieve database independence? 2. Is our choice of GWT and Spring proper, or do we need to change that too? 3. Should we use some other key-value pair database if the data is dynamic in nature and it is very difficult to make it relational?"} {"_id": "202922", "title": "Documenting mathematical logic in code", "text": "Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not so obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's _waaaay_ harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them - _text does not have the expressive power of math_. I am looking for a more efficient and easy-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself.
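To illustrate, here is the flavor of code I mean (a made-up sample; the real formulas are more involved):

public final class Projectile {
    private Projectile() {}

    /**
     * Height of a projectile at time t, from the school formula
     *   h(t) = h0 + v0*t - (g/2)*t^2
     * where h0 is the launch height, v0 the vertical launch speed,
     * and g the gravitational acceleration (9.81 m/s^2).
     */
    public static double heightAt(double h0, double v0, double t) {
        final double g = 9.81;
        return h0 + v0 * t - 0.5 * g * t * t;
    }

    public static void main(String[] args) {
        System.out.println(heightAt(0.0, 20.0, 1.5)); // ~18.96 m
    }
}

Even in this toy case the comment carries the math better than the code does, and my real cases are much worse.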
I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notation, equations and diagrams written on paper/whiteboard, and including it in the javadoc. Is there a simpler and clearer way? * * * P.S. Giving descriptive names (`timeOfFirstEvent` instead of `t1`) to the variables actually makes the code more verbose and even harder to read."} {"_id": "41655", "title": "Inspiring Computer Science College Student Stories?", "text": "After watching \"The Social Network\", a movie about Mark Zuckerberg and Facebook, I had a productivity spike, as it inspired me to learn many different languages and pieces of software. That spike has been deteriorating somewhat, and I don't feel like watching the movie all over again, so I was wondering if anybody had any other similar success stories/links/articles/whatever. Mainly about successes that started for people in college."} {"_id": "203345", "title": "Vernon's book Implementing DDD and modeling of underlying concepts", "text": "The following questions all refer to examples presented in Implementing DDD. In the article we can see from Figure 6 that both **BankingAccount** and **PayeeAccount** represent the same _underlying concept_ of a _Banking Account_ **BA**. **1.** On _page 64_ the author gives an example of a _publishing organization_, where the life-cycle of a book goes through several stages ( _proposing a book_, _editorial process_, _translation of the book_ ... ) and at each of those stages this book has a different definition. Each _stage of the book_ is defined in a different _Bounded Context_, but do all these different definitions still represent the same _underlying concept_ of a **Book**, just like both **BankingAccount** and **PayeeAccount** represent the same _underlying concept_ of a **BA**? **2.** a) I understand why `User` shouldn't exist in the **Collaboration Context** ( CC ), but instead should be defined within the **Identity and Access Context** IAC ( _page 65_ ). But still, do `User` ( IAC ), `Moderator` ( CC ), `Author` ( CC ), `Owner` ( CC ) and `Participant` ( CC ) all represent _different aspects_ of the same _underlying concept_? b) If yes, then this means that CC contains several _model elements_ ( `Moderator`, `Author`, `Owner` and `Participant` ), each representing a _different aspect_ of the same _underlying concept_ ( just like both **BankingAccount** and **PayeeAccount** represent the same _underlying concept_ of a **BA** ). But isn't this considered **a duplication of concepts** ( Evans' book, page 339 ), since several _model elements_ in CC represent the same _underlying concept_? c) If `Moderator`, `Author` ... don't represent the same _underlying concept_, then what _underlying concept_ does each represent? **3.** In an **e-commerce system**, the term **Customer** has multiple meanings ( page 49 ): When the user is browsing the **Catalog**, **Customer** has a different meaning than when the user is placing an **Order**. But do these two different definitions of a **Customer** represent the same _underlying concept_, just like both **BankingAccount** and **PayeeAccount** represent the same _underlying concept_ of a **BA**? **UPDATE:** 2. > 2a) As with the book, they refer to the same identity but express different > aspects of that identity in a specific context. Sort of like a single object > implementing multiple interfaces which embody the roles that object plays.
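(If I translate that remark into code, I get something like this rough Java sketch - the names are mine, not from the book:

interface Author { String penName(); }
interface Moderator { boolean canDelete(); }

// one object, one identity, playing two roles in the same context
class CollaboratorUser implements Author, Moderator {
    private final String name;
    CollaboratorUser(String name) { this.name = name; }
    public String penName() { return name; }
    public boolean canDelete() { return true; } // simplified
}

Is that a faithful reading?)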
a) Does having two _model elements_ ( both within the same **BC** ), each representing a _different aspect of the same underlying concept_, result in what _Evans_ calls **duplicate concepts**? b) I thought the terms \" _representing different aspects of an identity within a particular BC_ \" and \" _representing different aspects of the same underlying concept within a particular BC_ \" were interchangeable ( i.e. they mean the same thing )? If not, how do they differ? c) > Sort of like a single object implementing multiple interfaces which embody > the roles that object plays. I assumed each _role_ represents a _particular aspect of the underlying concept_, but you're saying it doesn't? What then does a _role_ model, and how is the thing that a role models conceptually different from _an aspect of the underlying concept_? **2. UPDATE:** **2.** a) > > Does having two model elements ( both within same BC ), each representing > different aspect of the same underlying concept, result in what Evans calls > duplicate concepts? > > I don't recall what Evans called duplicate concepts so not sure. On page 339 Evans describes **duplicate concepts** as being one of the two conceptual splinters that cause the unification of a model to break down. Here's the quote: > Combining elements of distinct models causes two categories of problems: > duplicate concepts and false cognates. Duplication of concepts means that > there are two model elements ( and attendant implementations ) that actually > represent the same concept. Every time this information changes, it has to be > updated in two places with conversions. Every time new knowledge leads to a > change in one of the objects, the other has to be reanalyzed and changed too. > Except the reanalysis doesn't happen in reality, so the result is two > versions of the same concept that follow different rules and even have > different data. b) and c) > This may be a linguistic issue. The way I see it, an identity can have > different concepts associated with it depending on the context. > > Again this seems to be a linguistic issue. You use \"underlying concept\" to > refer to identity which I think isn't clear enough to distinguish from > simply \"concept\" which by itself isn't sufficient. Perhaps the following will make my questions clearer: By _underlying concept_ I'm referring to _an aspect of reality_ ( i.e. the thing ) which we try to model. Thanks"} {"_id": "175939", "title": "Internationalization messages based in views or in model entities", "text": "I have a small webapp in Java and I am adding internationalization support, replacing texts with labels that are defined in dictionary files. While some texts are obviously unique to each view (e.g. the HTML title), others refer to concepts from the model (e.g. a ticket, the location or status of such a ticket, etc.) As usual, some terms will appear many times in different locations (e.g., in both the edit page and the search page and the listings I have a \"ticketLocation\" label). My question is: can I organize the labels around the model concepts (so I have a `ticket.location` label and use it everywhere such a field is labeled), or should I make a different label for each view (so `form.ticketLocation` and `filter.ticketLocation` and `list.ticketLocation`)?
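In code, the difference would look something like this (a rough sketch using java.util.ResourceBundle, assuming a messages.properties file on the classpath; the key names are mine):

// Option 1, messages.properties:
//   ticket.location=Location
// Option 2, messages.properties:
//   form.ticketLocation=Location
//   filter.ticketLocation=Location
//   list.ticketLocation=Location
import java.util.Locale;
import java.util.ResourceBundle;

public class Labels {
    public static void main(String[] args) {
        ResourceBundle b = ResourceBundle.getBundle(\"messages\", Locale.ENGLISH);
        // with option 1, every view that labels this field reuses one key:
        System.out.println(b.getString(\"ticket.location\"));
    }
}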
I would go for the first option; I have searched for tips, and the only thing I can see hindering me is the length of the string disrupting the design, and even for that I would prefer having to add a `ticket.locationShort` for places where there is not much space. What are your opinions/tips/experiences?"} {"_id": "137930", "title": "how to find out if spelling mistakes in source code are a serious issue or not?", "text": "I find the amount of spelling mistakes I see every day in our codebase very troubling; from it I will reproduce a very short but representative example: ArgumnetCount Timeount Gor message from queue Unfortunately this is in no way limited to one person. There are a lot of non-native English speakers on our team who contribute to that; however, I can also pinpoint some of the worst spelling mistakes to our Software Architect, who is American, born and raised. These are also to be found even in emails, presentations, documents - whatever piece of written information we have in a software development company. I'd like to know how to find out whether it is a serious issue or not. I've always met these spelling mistakes with concern, but my own, personal, **official** policy is that we are not paid to spell things right, we are paid to get things done, so inside the company I never really criticized anyone about it. But I have raised this issue with some of my close friends, and never settled it for good."} {"_id": "202691", "title": "Design pattern for logging changes in parent/child objects saved to database", "text": "I've got 2 database tables in a parent/child relationship (one-to-many). I've got three classes representing the data in these two tables: class Parent { public int ID { get; set; } // .. other properties } class Child { public int ID { get; set; } public int ParentID { get; set; } // .. other properties } class TogetherClass { public Parent Parent; public List<Child> ChildList; } Lastly, I've got a client and a server application - I'm in control of both ends, so I can make changes to both programs as I need to. The client makes a request for a ParentID and receives a TogetherClass for the matching parent and all of the child records. The client app may make changes to the children - add new children, remove or modify existing ones. The client app then sends the TogetherClass back to the server app. The server app needs to update the parent and child records in the database. In addition, I would like to be able to log the changes - I'm doing this by having 2 separate tables, one for Parent, one for Child, each containing the same columns as the original plus date-time modified, modified by whom, and a list of the changes. I'm unsure as to the best approach to detect the changes in records - new records, records to be deleted, records with no fields changed, records with some fields changed. I figure I need to read the parent and child records and compare those to the ones in the TogetherClass. Strategy A: If the TogetherClass's child record has an ID of, say, 0, that indicates a new record; insert it. Any deleted child records are no longer in the TogetherClass; see if any of the comparison child records are not found in the TogetherClass, and delete them if not found (comparing using ID). Check each child record for changes and, if changed, log it.
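Here is a sketch of Strategy A in Java syntax (my real code is C#, but the idea is the same; the class and field names are mine):

import java.util.*;

class ChildRow {
    int id;          // 0 means new, not yet saved
    String payload;  // stand-in for the other properties
}

class ChildDiff {
    static void apply(List<ChildRow> fromDb, List<ChildRow> fromClient) {
        Map<Integer, ChildRow> dbById = new HashMap<>();
        for (ChildRow c : fromDb) dbById.put(c.id, c);

        Set<Integer> seen = new HashSet<>();
        for (ChildRow c : fromClient) {
            if (c.id == 0) {
                insert(c);                    // new record
            } else {
                seen.add(c.id);
                ChildRow old = dbById.get(c.id);
                if (old != null && !Objects.equals(old.payload, c.payload)) {
                    update(c);                // changed record, log it too
                }
            }
        }
        for (ChildRow old : fromDb)           // in DB but not sent back
            if (!seen.contains(old.id)) delete(old);
    }
    static void insert(ChildRow c) { /* SQL INSERT plus audit row */ }
    static void update(ChildRow c) { /* SQL UPDATE plus audit row */ }
    static void delete(ChildRow c) { /* SQL DELETE plus audit row */ }
}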
Strategy B: Make a new updated TogetherClass: class UpdatedClass { public Parent Parent { get; set; } public List<Child> ListNewChild { get; set; } public List<Child> DeletedChild { get; set; } public List<Child> ExistingChild { get; set; } // used for no changes and modified rows } And then process as per the lists. The reason why I'm asking for ideas is that both of these solutions don't seem optimal to me, and I suspect this problem has been solved already - by some kind of design pattern? I am aware of one potential problem in this general approach - where client app A requests a record; app B requests the same record; A then saves changes; B then saves changes, which may overwrite the changes A made. This is a separate locking issue which I'll raise a separate question for if I have trouble implementing it. The actual implementation is C#, SQL Server and WCF between client and server - sharing a library containing the class implementations. Apologies if this is a duplicate post - I tried searching various terms without finding a match though. * * * Update based on comments from svick and TimG: To track changes, would the base class be something like: public class ChangedRowBase { public enum StatusState { New, Unchanged, Updated, Deleted }; protected StatusState _Status; public StatusState Status { get { return _Status; } } public void SetUnchanged() { _Status = StatusState.Unchanged; } public void SetDeleted() { _Status = StatusState.Deleted; } } Then my Child class would have something like: public int SomeDumbProperty { get; set { base.Status = StatusState.Updated; // Missing line here to set the property itself } } private int _SomeIntelligentProperty; public int SomeIntelligentProperty { get { return _SomeIntelligentProperty; } set { if (value != _SomeIntelligentProperty) { _SomeIntelligentProperty = value; base.Status = StatusState.Updated; } } } The idea being that the intelligent one detects whether there has indeed been a change; with the dumb one, the client app would need to either detect the change itself and only call the setter when something changed, or use the setter anyway, flagging a change which didn't actually occur. I also know that my dumb property doesn't work (or compile) right now. If I want code in the getter or setter, does this mean I need to define a separate private variable myself? I know the shorthand { get; set; } does this behind the scenes."} {"_id": "137934", "title": "How to learn the basics", "text": "I have been programming for 2 years in Python, Java and C#. I have developed two programs that are being used by an IT company I worked for, and I use programming/a programmer's mind to solve almost every problem I encounter. But still I feel I am missing something in my curriculum (self-taught mostly). When programming I usually make the mistake of never revising my solutions to problems in my code, because I don't know the lowest principles in programming. I often give up on new projects I start because of this. I see code from others and I shiver when I think back to my code afterwards. What are the basics in programming? How can I learn them? And which principles in programming are the building blocks to actually becoming a better programmer?"} {"_id": "40389", "title": "What file extension do you use for your template/view files in PHP?", "text": "I'm building a Model-View-Controller framework, and it has come time to decide how I will be creating and using view templates and layouts. Some frameworks use special extensions for these files. CakePHP uses `.ctp`.
I have heard of/seen `.tpl` files, though I've never used them myself. There is the `.inc` extension, which doesn't feel right, and of course, I could stick with plain ol' `.php`. For that matter, why not `.awesome`? What do you use for your template files? Is there any benefit to using a special file extension for these files? Are you partial to a certain extension for nostalgia or convention alone? They're typically (and will be in my project) already in their own directory, so I assume there must be _some_ reason they are differentiated by their extension in projects like CakePHP."} {"_id": "83787", "title": "Is there a canonical resource on bouncycastle?", "text": "It doesn't matter if it is Java or .NET. Is there a resource out there that's the de-facto standard for describing best practices and other helpful information on bouncycastle? What about that resource makes it special?"} {"_id": "236173", "title": "Where does one draw the line between inspiration and plagiarism?", "text": "Suppose one would like to use in an iOS app only one function from a GPLv3 library that contains a large number of them. One cannot simply link the whole library, because otherwise the resulting software, having to be GPLv3, would be incompatible with the App Store. Can one \"take inspiration\" from the source code of the library to write a function that would perform the same task? I understand that this is a fine line, but I wonder if there is a rule of thumb."} {"_id": "166444", "title": "Figuring out the Call chain", "text": "Let's say I have an `assemblyA` that has a method which creates an instance of `assemblyB` and calls its `MethodFoo()`. Now `assemblyB` also creates an instance of `assemblyC` and calls `MethodFoo()`. So no matter if I start with `assemblyB` in the code flow or with `assemblyA`, at the end we are calling that `MethodFoo` of `assemblyC`. My question is: when I am in `MethodFoo()`, how can I know who has called me? Has it been a call originally from `assemblyA` or was it from `assemblyB`? Is there any design pattern or a good OO way of solving this?"} {"_id": "15536", "title": "What technologies or techniques senior developers of today should *unlearn*", "text": "Many questions here are about learning new things. But what about `unlearning`? We used to develop software 10 or 20 years ago very differently: we counted RAM in bytes, libraries were very rare, IDE was an unknown word and Merise was king. **What should we unlearn today? What bad habits should we get rid of?** EDIT: **unlearn** doesn't mean **forget**. Rather, stop using old techniques that were useful in the past, and use the new technologies (such as GC for memory management). But your past knowledge will always be useful today."} {"_id": "76449", "title": "Starting a User Group C#/.net", "text": "Inspired by this question, I have looked into user groups within my area in England, and there does not seem to be one at all for a few hundred miles. I would like to look into the possibility of setting one up in a local city, but I have no idea what the format should be. My questions are: What is the purpose of a user group? Is it to simply network, or are there expectations for presentations, informal learning, etc? How and where does one promote a user group? Is it run as a free enterprise, or is there usually a charge to allow for investment back into the meetings? Thanks."} {"_id": "76445", "title": "Is it necessary to make a portfolio website for web developer", "text": "I have worked on around 15-20 sites.
I have used different skills/languages on them. So I am confused about how I should list those in my resume. Or should I create a portfolio website with screenshots of the sites I have worked on? Then do I need to mention the sites, or just give the link in my resume?"} {"_id": "175919", "title": "Is there an open source license that prohibits public hosting, so I can avoid competition?", "text": "I am thinking about open sourcing my project, but I want to prohibit public hosting, so no one will put up an alternative website with my code on it and compete with my site. Internal networks, like libraries etc., would be welcome to use it. Are you aware of such a license? I guess this would be even more restrictive than the AGPL."} {"_id": "10136", "title": "What are the Vital Things noticed in an Experienced Programmer Resume by a firm?", "text": "Hi friends, please tell me or share with me: I am a developer with one year of experience, and I now want to turn my old fresher resume into an experienced one, so I want to know what primary things must be in an experienced resume for a firm to be attracted by it."} {"_id": "237968", "title": "Manage the persistence of entities on iOS in several places: CoreData on the device, iCloud and on a REST API", "text": "For the needs of a project, I would like to persist the data contained in Core Data in several places, depending on the state of the user: * If the user is logged in to my API -> persist the data on my API. * If the user is logged in to iCloud -> persist the data on iCloud * In all cases -> persist the data on the device. Indeed, with this scheme, the data could be saved on the device, on iCloud and on my API. We need to duplicate the data to be able to keep the latest data available even if the user is not logged in to iCloud or the API. * * * Here is how I think this feature will be implemented. To unify the process of executing CRUD requests, I am thinking of creating a `PersistenceManager` which executes CRUD requests depending on the state of the user (whether he's logged in to the API, to iCloud, or to neither). Here are some methods that the `PersistenceManager` will implement: **Save request** This method will save an entity depending on the state of the user. If the user is logged in to the API: persist on the API. If the user is logged in to iCloud: save on iCloud. In all cases -> save on the device with Core Data. Each stored item will have a timestamp so we can be sure to get the latest version of the data. **Read request** This method will retrieve entities only from the device, in asynchronous mode. For each Save, Update or Delete request executed, if the user is logged in to the API or to iCloud, one or both will send a notification to the client applications where the user is logged in, to retrieve data from the API or iCloud and store it in Core Data. Then the data that the user wants to read is returned by Core Data. To receive notifications from the API, I am thinking of using sockets for my API and my iOS app to communicate. But I don't know how to build the same mechanism for iCloud. Any suggestions? **Update request** The same mechanism as for the save request, but for updating data. **Delete request** The same mechanism as for the save request, but for deleting data. * * * I'm not familiar with these problems, and I would like to have your advice on them.
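To show the shape of what I have in mind, here is the dispatch in Java syntax (my real code will be Objective-C; all names here are mine):

interface Store { void save(Object entity); }

class PersistenceManager {
    private final Store device;   // always written (Core Data in my case)
    private final Store api;      // my REST API
    private final Store icloud;   // iCloud
    private boolean apiLoggedIn, icloudLoggedIn;

    PersistenceManager(Store device, Store api, Store icloud) {
        this.device = device; this.api = api; this.icloud = icloud;
    }

    void save(Object entity) {
        device.save(entity);                    // in all cases
        if (apiLoggedIn)    api.save(entity);   // only when logged in to the API
        if (icloudLoggedIn) icloud.save(entity);
    }
}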
Thanks a lot!"} {"_id": "123210", "title": "Decoupling Threads", "text": "It's not uncommon to hear of decoupling the UI from program logic, or database design/access from program logic... or even program logic from itself. However, I've never heard of an approach to generalizing or abstracting away the details of threading tasks from the logic (aside from Tymeac, an Android-specific package). Honestly, with all of the details going into managing threads, I don't expect many solutions to work well anytime soon, so... My question is, how do you write threads that aren't joined at the hip to the processes that use those threads? What do you normally do when requirements change in an already complicated multi-threaded application? Is there a design approach to this that I should read about? I ask this because we are at the dawn of the \"bagillion\"-core era in computing. Eventually, a good methodology will have to exist to manage 1k+ cores and even more threads in the next couple of decades."} {"_id": "237961", "title": "Interface hierarchy design for separate domains", "text": "There are businesses and people. People can be liked and businesses can be commented on: class Like class Comment class Person implements iLikeTarget class Business implements iCommentTarget Likes and comments are performed by a user (person), so they are authored: class Like implements iAuthored class Comment implements iAuthored A person's likes could also be used in their history: class History class Like implements iAuthored, iHistoryTarget Now, a smart developer comes and says each history entry is attached to a user, so history targets should be authored: interface iHistoryTarget extends iAuthored so it could be removed from `class Like`: class Person implements iLikeTarget class Business implements iCommentTarget class Like implements iHistoryTarget class Comment implements iAuthored class History interface iHistoryTarget extends iAuthored Here, another smart guy comes with a question: How could I capture the `Authored` fact in the `Like` and `Comment` classes? He may know nothing about the `history` concept in the project. By scaling this kind of functionality, interfaces may go into their encapsulated types, which causes more type strength; on the other hand, explicitness suffers, and end users of the code will face much pain processing it. So here is the question: Should I encapsulate those dependent types in their parent types (interface hierarchies) or not, or explicitly repeat each type for every single level of my type system, or...?"} {"_id": "123218", "title": "F# performance vs Erlang performance, is there proof that Erlang's VM is faster?", "text": "I've been putting time into learning functional programming and I've come to the part where I want to start writing a project instead of just dabbling in tutorials/examples. While doing my research, I've found that Erlang seems to be pretty powerful when it comes to writing concurrent software (which is my goal), but resources and tools for development aren't as mature as Microsoft development products. F# can run on Linux (Mono), so that requirement is met, but while looking around on the internet I cannot find any comparisons of F# vs Erlang. Right now, I am leaning towards Erlang just because it seems to have the most press, but I am curious if there is really any performance difference between the two languages. Since I am used to developing
Since I am use to developing in .NET, I can probably get up to speed with F# a lot faster than Erlang, but I cannot find any resource to convince me that F# is just as scalable as Erlang. I am most interested in simulation, which is going to be firing a lot of quickly processed messages to persistant nodes. If I have not done a good job with what I am trying to ask, please ask for more verification."} {"_id": "233247", "title": "Why did the Sun engineers decided to make Java only call by value?", "text": "Is there any specific reason they decided to go with Call by value? Is it for simplicity?"} {"_id": "233240", "title": "Server side C# MVC with AngularJS", "text": "I like using .NET MVC and have used it quite a bit in the past. I have also built SPA's using AngularJS with no page loads other than the initial one. I think I want to use a blend of the two in many cases. Set up the application initially using .NET MVC and set up routing server side. Then set up each page as a mini-SPA. These pages can be quite complex with a lot of client side functionality. My confusion comes in how to handle the model-view binding using both .NET MVC and the AngularJS scope. Do I avoid the use of razor Html helpers? The Html helpers create the markup behind the scene so it could get messy trying to add angularjs tags using the helpers. So how to add ng-model for example to @Html.TextBoxFor(x => x.Name). Also ng-repeat wouldn't work because I'm not loading my model into the scope, right? I'd still want to use a c# foreach loop? Where do I draw the lines of separation? Has anyone gotten .NET MVC and AngularJS to play nicely together and what are your architectural suggestions to do so?"} {"_id": "159668", "title": "Database-stored configuration management", "text": "Developing database-backed software, often one finds it convenient to store stuff in the database which one might traditionally express in code, but which is useful to store in the database (so that it can be changed without requiring a reload of the application). For instance, one might have processes which send automated email using a template; it makes sense to store this template in the database, so that the text can be tweaked by the users without having to change the code/redeploy/etc. However, these templates are \"important\"- they are required for the correct operation of the system, and for instance, if they become malformed or missing, the underlying functionality would probably stop working. As they live now in the database, they don't outside version control- you might add audit tables to have change history about this information, but revisions/etc. are separate and independent from your main code's revisions. How do you handle this kind of stuff? Not putting this kind of stuff in the database and make changes go through development/change control/deployment cycles? Or something else? Cheers, \u00c1lex"} {"_id": "159637", "title": "What is the Mars Curiosity Rover's software built in?", "text": "The Mars Curiosity rover has landed successfully, and one of the promo videos \"7 minutes of terror\" brags about there being 500,000 lines of code. It's a complicated problem, no doubt. But that is a lot of code, surely there was a pretty big programming effort behind it. Does anyone know anything about this project? 
I can only imagine it's some kind of embedded C."} {"_id": "240294", "title": "What language do companies like NASA use to create their applications?", "text": "So obviously NASA needs programmers to develop applications for them, be it VOIP applications, applications for control of machines and AI, etc. But what language do they actually use for this? I'm 15 and plan on studying Software Engineering at university, and my dream job is to program for NASA; astronomy is a huge hobby of mine. I have a need to learn things before I am of the \"recommended\" age; for example, I have been studying post-grad astrophysics in my free time. As a result of this, I would like to know what programming language NASA uses, so I can begin to learn the basics and see if it is something I may enjoy. Please do not say `Live your life as a kid`, as I have seen people say such things before; this is how I enjoy living my life, and I absolutely love programming. I just feel that, as with many things, if I learn it at a younger age I will have greater experience when I am older - yes, I have aspirations like you wouldn't believe (haha). I am thinking of learning how to program an Arduino, or an Engduino, which we may be seeing in my Computer Science classes, but is this at all related to the type of programming NASA does? I understand that NASA will be writing MUCH, MUCH, MUCH more advanced programs than I will, but just getting the sense of bringing an inanimate object to life would be an achievement to me. So yeah: what programming language do they use, and what would you recommend I study at university to follow up such a dream?"} {"_id": "238299", "title": "Serpent Algorithm that implements Cipher", "text": "How must I change the standard implementation of the Serpent algorithm so that I can use it with javax.crypto.CipherInputStream or javax.crypto.CipherOutputStream? Must I implement/extend the Cipher class, and when I do, what must I change? The original implementation of the Serpent algorithm is here: http://www.cl.cam.ac.uk/~rja14/serpent.html Does an implementation of the Serpent algorithm exist that implements Cipher? I found the Java implementation of the algorithm on the page of its creator. The problem with it is: I can't use it with javax.crypto.CipherInputStream or javax.crypto.CipherOutputStream. I am trying to use ObjectOutputStream to write a configuration file, but I would like to encrypt it with the Serpent algorithm. The same goes for reading this configuration file. I don't want to use APIs like Bouncy Castle or FlexiProvider because they create too much heavy boilerplate which I don't need. What is the best practice for using a finished (final and tested) algorithm that doesn't extend/implement Cipher, when it should be used with javax.crypto.CipherXxxStream?"} {"_id": "7635", "title": "Are you able to close your eyes and focus/think just on your code?", "text": "I have some colleagues at work; they arrive in the morning, switch off email and phone, close the door, etc. Then they close their eyes and for 30 minutes try to focus on their programming (C++, C, etc) and remember what they did yesterday. Then somehow they process or imagine the code in their mind, with closed eyes. And some time later they just grab a pencil and dump code onto a sheet of paper or into their IDE (KDevelop, Visual Studio, Eclipse, etc). Some of these guys are really effective and develop very complicated code.
Me, I cannot do this, and I wonder if some of you can do something similar - and if so, could you give some advice on how to be able to focus so deeply on a piece of code or a problem? Thanks"} {"_id": "242632", "title": "What should be used to diagram an application's internal architecture?", "text": "I have an application that has several dependencies, including a Matlab library and a Microsoft Windows Media Player ActiveX control. How would I graphically depict the internal architecture of this application for documentation purposes? Would a UML structure diagram be appropriate? If so, would it be as simple as the following: ![enter image description here](http://i.stack.imgur.com/77xN8.png)"} {"_id": "242637", "title": "Help with MVC design pattern?", "text": "I am trying to build a Java program for user login, but I am not sure if my MVC design is accurate. I have the following classes: * LoginControl - servlet * LoginBean - data-holder Java class with private variables, getters and setters * LoginDAO - concrete Java class where I am running my SQL queries and doing the rest of the logical work * Connection class - Java class just to connect to the database * view - JSP to display the results * HTML - used for the form Is this how you design a Java program based on the MVC design pattern? Please provide some suggestions."} {"_id": "226343", "title": "Is masking really necessary when sending from Websocket client", "text": "The current WebSocket RFC requires that WebSocket clients mask all data within frames when sending (but the server is not required to). The reason the protocol was designed this way is to prevent frame data from being altered by malicious services between the client and server (proxies, etc). However, the masking key is still known to such services (it is sent on a per-frame basis at the beginning of each frame). Am I wrong to assume that such services can still use the key to unmask, alter, and then re-mask the contents before passing the frame to the next point? If I'm not wrong, how does this fix the supposed vulnerability?
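For concreteness, the manipulation I have in mind is just this (a hypothetical Java sketch; the names are mine):

class MaskDemo {
    // The 4-byte masking key travels in the same frame, and XOR is its
    // own inverse, so any on-path service can toggle the mask at will:
    static void toggleMask(byte[] payload, byte[] key) {
        for (int i = 0; i < payload.length; i++)
            payload[i] ^= key[i % 4];
    }
    // toggleMask(p, k) -> plaintext, alter as desired
    // toggleMask(p, k) -> masked again, forward as if untouched
}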
"} {"_id": "226341", "title": "Email Content creation | Proper design", "text": "I am working on an e-commerce application where we need to send many emails to customers, like: * Registration email * Forgot password * Order placed There are many other emails that can be sent. I already have an `emailService` in place which is responsible for sending email, and it needs an Email object. Everything is working fine, but I am stuck at one point and not sure how best this can be done. We need to create the content that is passed to `emailService`, and I am not sure how to design this. For example, in customer registration, I have a `customerFacade` which works between the `Controller` and the `ServiceLayer`. I just want to delegate this email content creation work away from the facade layer and make it more flexible. Currently I am creating the registration email content inside `customerFacade`, and somehow I do not like this approach, since it means that for each email I need to create content in the respective facade. What is the best way to go, or is the current approach fine enough?"} {"_id": "35582", "title": "Standards for how developers work on their own workstations", "text": "We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him to see, and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was \"current\" (he was prototyping some bits on versions of the project other than his \"core\" one). Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works _on their own machine_ to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control. So how do you handle situations like this? Is it one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area - use of specific directories, naming standards, notes on a wiki or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here - even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted, and I'd like a solution which covers all eventualities.] **Edit:** I get that going through someone's workstation is bad form (though it's an interesting - _and likely off-topic_ - question as to precisely why that is) and I'm certainly not looking at unlimited access. Think more along the lines of a standard where their code directories are set up with a read-only share - nothing can be changed, nothing else can be seen, and so on."} {"_id": "229378", "title": "Should we validate non-input data from a client?", "text": "One of our developers is of the opinion that all client data should be validated before it is used - even non-input data. Say our web service has internal protection against database injections. Examples: machine-generated codes, various integers, indexes, calculated values. We use an ORM (Django). As far as I know, the approach in Django is: validate only values entered as primary strings (usually via web forms). For example: is there any reason to create validation rules to validate numbers if we know that there will be a 500 error (if the data is wrong) at the beginning of handling the request? I think not. But probably I'm wrong. Does my position need to change?"} {"_id": "245366", "title": "What is the name of this design pattern?", "text": "I have been using this \"design pattern\" (it may or may not be an \"official\" design pattern) for a while, and I wanted to know if it had a name (so that I could name my classes after it).
Example in PHP code (though it applies to any language) (the example is contrived, please don't mind): interface Formatter { public function format($variable); } class IntFormatter implements Formatter { public function format($variable) { echo (string) $variable; } } class StringFormatter implements Formatter { public function format($variable) { echo '\"' . $variable . '\"'; } } Now, the pattern I want to know about is for this class: class FormatterDispatcher implements Formatter { private $formatters = []; public function setFormatter($type, Formatter $formatter) { $this->formatters[$type] = $formatter; } public function format($variable) { $type = gettype($variable); return $this->formatters[$type]->format($variable); } } // Now in the code: $formatter = new FormatterDispatcher(); $formatter->setFormatter('int', new IntFormatter()); $formatter->setFormatter('string', new StringFormatter()); $formatter->format($variable); As you can see, it's just a proxy to other implementations of the interface. It will select the implementation to use based on the class of the parameter (here I used primitive types for simplicity's sake). So what is the name of this pattern? PS: in a language that supports generics, the code would definitely look better, but I guess the spirit would be the same."} {"_id": "245364", "title": "Laravel 4: Binding/linking two users together", "text": "I'm building a unit-test system. At some point I want to bind two users together in order to make an assignment for those two. So when the admin chooses to link two students together, the system should figure out how to do it, maybe using the ID numbers of both users. I have an idea of how to implement this, but it is not very clear."} {"_id": "100587", "title": "How do I get a job as a game developer?", "text": "I'm an entry-level programmer just out of college. As I've found out, I don't really enjoy working on \"boring\" applications like networking, firewalls, and such. My dream job would be something like a game developer. What should I be focusing on to move into games development? Are there specific technologies or languages that I should know?"} {"_id": "245362", "title": "calculate complexity of LinkedHashSet", "text": "I have an `ArrayList<LinkedHashSet<String>> setOfStrings`; for example, this ArrayList is internally composed like: positionX[hello,car,three,hotel,beach] positionY[.....] ... I want to find \"car\" inside this data structure, so I did this: for (Iterator<LinkedHashSet<String>> iterator = setOfStrings.iterator(); iterator.hasNext();) { LinkedHashSet<String> subSet = iterator.next(); if (subSet.contains(\"hotel\")) System.out.println(\"found\"); } The for loop iterates over the entire `ArrayList`, and the complexity in the worst case is O(n), but I'm confused about the complexity of the set's `contains()` method. According to the Javadocs, this method executes in **constant time**, but I've heard that in certain cases the complexity might become `O(n)`. That said, I'm not sure about the complexity of this snippet's algorithm. Can someone provide an explanation of that?"} {"_id": "245360", "title": "Rectangle class java.lang.Object", "text": "Can someone please explain to me the difference, or rather the benefits, of using the Rectangle class, e.g. Rectangle r1 = new Rectangle(x,y,w,h); versus the Graphics class's drawRect method, g.drawRect(x,y,w,h)? I'm thinking that you can create a new object outside of paint() or paintComponent(), which gives creating an object more options, and maybe the associated methods are a bonus?
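For example, the two styles side by side as I understand them (my own toy sketch, not from the tutorial):

import java.awt.*;
import javax.swing.*;

class TwoStyles extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // style 1: no object, just draw directly
        g.drawRect(10, 10, 50, 50);
        // style 2: a Rectangle object that could have been created
        // elsewhere, kept as a field, tested with contains(), etc.
        Rectangle r = new Rectangle(80, 10, 50, 50);
        g.fillRect(r.x, r.y, r.width, r.height);
    }
}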
I'm just speculating, though, because the tutorial I was going through switched from using one to the other without explaining why. I believe I understand how to use them, just not when it's better to use which. Please try to keep your responses at the beginner level, seeing as how I'm just starting out, but of course any assistance is GREATLY appreciated. Thanks!! Here is a code segment of what I'm working with: public void paintComponent(Graphics g){ g.drawImage(yellowBall, xCoor, yCoor, this); Rectangle r1 = new Rectangle((boardXSize/2), (boardYSize/2), 50, 50); g.setColor(Color.red); g.fillRect(r1.x, r1.y, r1.width, r1.height); repaint(); }"} {"_id": "44866", "title": "finding high end software contracting jobs", "text": "I've been contracting for about 3 years now. I am currently a contractor for a web firm. This is an hourly position. I want to find larger projects. I have read that some people are able to do only one or two jobs a year and live on that. I want those types of jobs, and I want to hire people to take on these jobs as well, but I have no idea where to start. I highly doubt places like oDesk post these types of contracts. Where can I find them? How can I make good money and live comfortably while working for myself?"} {"_id": "56471", "title": "What would the ultimate developer training class look like?", "text": "I think today's typical/traditional 3-5 day developer training classes aren't so great, as you tend to forget half of the material shortly after. It's too much one-way communication and not enough interaction. Also, brain research has shown that this kind of setup is usually not optimal for efficient learning. For clarification, I am referring to professional, commercial, paid classes. However, this could also be applied to any kind of studies. How could the ultimate developer training package be set up to really make sure you learn what you are supposed to learn? Would that be more: * Multimedia? * Exercises? * Homework? * Spread out over time instead of 3-5 compact days? * Group projects?"} {"_id": "178899", "title": "Can a Guy with Embedded System Background go into Game Development", "text": "Well, I finished my Masters in Embedded Systems, and I am working in GUI development; working with graphic tools, images and GUIs keeps me glued to my seat more than working on code for an MPU/MCU. I want to give game development a fair chance: try out developing a game from scratch using basic libraries, then try out the same in a free/open source game engine, and there is a good chance I may fall in love with it. But is it possible for a person with an Electrical and Electronics bachelor's and an Embedded Systems master's (and just a year's experience in the field) to go into game development and be successful in the profession? I asked the same question at stackoverflow.com (the wrong place to ask): http://stackoverflow.com/questions/13794822/can-a-guy-with-embedded-system-background-go-into-game-development/13794943#13794943 And I received a good but very generic answer. I would be happy to know the actual pros and cons of someone with a master's in embedded systems migrating to game dev."} {"_id": "178898", "title": "Difference between Atomic Operation and Thread Safety?", "text": "From the discussion I've seen, it seems that _atomic operation_ and _thread safety_ are the same thing, but a lot of people say that they're different.
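For instance, take this made-up Java example:

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private final AtomicInteger hits = new AtomicInteger();

    void record() { hits.incrementAndGet(); }  // this single call is atomic

    // ...but another thread can slip in between get() and set(0), so the
    // two atomic calls together are not atomic as a compound action:
    int drain() {
        int n = hits.get();
        hits.set(0);
        return n;
    }
}

Each individual call on the AtomicInteger is atomic, yet I'm not sure the compound drain() counts as \"thread safe\".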
Can anyone tell me the difference if there is one?"} {"_id": "86142", "title": "What exactly do I need to do if I use a LGPL licenced library?", "text": "I have read these questions and answers, but I still don't understand what exactly I need to do if I dynamically link with a library that uses an LGPL licence (the SDL library in my case). If I understand the LGPL text correctly, I need to somehow provide the source for the library. Is this enough? If not, what else needs to be done?"} {"_id": "83003", "title": "How do you make a manager understand Agile?", "text": "I have a problem with a senior director who doesn't understand iterative development (much less Agile). He is insistent that our software design specification (SDS) be complete before any line of code is written. Complete, to him, means all functional detail is there. Also, being a former Cobol programmer, he wants to see \"modules\" and flowcharts. This is a Java web app, for crying out loud! Anyway, I'm trying to find a simple place to gently point him to show that the SDS need not be 100% complete before we begin coding (nor can it be complete). Any suggestions? Thanks!"} {"_id": "86145", "title": "Is it acceptable to decompile someone else's code for the purpose of learning what they did, and how they did it?", "text": "I am not talking about stealing code, or reusing someone's for profit. But I am assuming that if a program or plugin is distributed in a format where I can't readily view the source, that is a deliberate action on the part of the programmer. Is it acceptable to look anyway, for the purpose of learning? And if so, are there limitations to that freedom?"} {"_id": "208697", "title": "What percentage of change is considered artistic license vs. a clone in video game development?", "text": "I'm a \"weekend coder\" developer making mobile apps and games and I've had the desire to make my own version of a game made for the Commodore 64 back in the early 80's by a small publisher, e.g. not EA Games or similar. The game hasn't been ported / released on any platform since. I want to make my version because it was fun to play, but I can obviously create much better graphics now (the original had 8 colors at most, and required a good imagination to transform a character image in the game into something with 4 limbs and a head). Since it will take effort to make from scratch, and the game will be different when done, I'd like to charge 99 cents to users... But I have fears of being sued should the original developer pop out of the woodwork from somewhere. Here's a breakdown of original vs. my game intentions. The original game was a 2D side-view game (think original Mario Bros.) with 3 screens that flipped when you crossed the edge. You could collect items, there were AI players that \"chased\" you, ladders, and some ride-able objects... all within a specific theme. My version would also be a 2D side view, but be a side-scroller with at least 3 but maybe 4 or 5 \"screens\" worth of content. Images would be all original, have a nice 3D look to them, full color, shadows and lighting effects. You would still collect items but they would be all different, and you'd still get chased etc., but I'd try as best as possible to make my game unique... and of course it would have a different name. To a video game enthusiast that played this game in the early 80's they'd be able to proclaim... \"oh man, this game reminds me of ____!\" I don't think they would declare it to be a \"ripoff\" or \"just a clone\" though.
Are there documented cases where a developer has been sued successfully for copying another game? And if so, just how close was it? Likewise, is there a statute of limitations? This game is pushing 30 years old... Or is the answer just build it and see?"} {"_id": "208690", "title": "Queues and threading", "text": "I am developing a new project where I will be constantly checking a webpage for data and adding this data to a queue for processing. This data will then be removed from the queue and added to a list if it hasn't already been added. This list will then be stored in the database. I was wondering about how to use threads, as I am relatively new to programming. I think it would be ideal if I had a thread constantly checking the webpage for data, a thread processing the queue and a thread for processing the list. Any suggestions on whether this method is the most effective or not, and how to achieve this, would be much appreciated."} {"_id": "219576", "title": "Comparing strings against a pool of words", "text": "I am creating an app where the user enters 8 characters. After he enters the string I have to see if it is an eight-letter word. If not, check if it contains a seven-letter word, etc. I am checking against a given pool of 150k+ words. I only care about the longest possible match. Is there a better way than this one: * WordPool.Contains(word.substring(0,8)) * WordPool.Contains(word.substring(0,7)), WordPool.Contains(word.substring(1,7)) * WordPool.Contains(word.substring(0,6)), WordPool.Contains(word.substring(1,6)), WordPool.Contains(word.substring(2,6)) * etc... _Edit_ I forgot to add that I am checking against an English dictionary. So far I am using this: for(i = 8; i >= 3; i--) for(j = 0; j <= 8 - i; j++) if(words.contains(word.substring(j, i))) //do something **Edit 2** I have been using the approach defined above, just with a minor change. I am using a few background agents which all search for a word of a certain length. They all then return a result and I just pick the one which gives the user the highest score."} {"_id": "115520", "title": "Should I reuse variables?", "text": "Should I reuse variables? I know that many best practices say you should not do it; however, later, when a different developer is debugging the code and has 3 variables that look alike, whose only difference is that they are created in different places in the code, he might be confused. Unit testing is a great example of this. However, I do know that best practices are most of the time against it. For example, they say not to \"override\" method parameters. Best practices even argue against nulling the previous variables (in Java there is Sonar, which warns when you assign null to a variable, since you haven't needed to do that for the garbage collector since Java 6; you can't always control which warnings are turned off, and most of the time the default is on)."} {"_id": "213368", "title": "Is it OK to use same variable to store similar stuff sequentially?", "text": "Say I have a variable named `len` in a function and several strings. Can I use this to store the length of those strings one after the other, or should I create separate variables?
Basically this: size_t len; len = string1.size(); // ...some code len = string2.size(); // ...more code versus size_t str1len, str2len; str1len = string1.size(); // ...some code str2len = string2.size(); // ...more code All are local variables within a function, BTW."} {"_id": "131832", "title": "Topics that should be learned before graduating with a CS Degree", "text": "Feeling slightly underwhelmed with the content in my degree path of Computer Science at my University, I have been attempting to learn on my own. I am looking for direction into topics that will be helpful to know when going into the working world. As a senior I've learned the basics of data abstraction, and the basics of C++, vb.net, and php."} {"_id": "206593", "title": "Real world analogy for a clustered index", "text": "A db index is analogous to a table of contents. This helps me understand a db index in an easy way. My question is: are there any real-world analogies for a clustered index?"} {"_id": "206594", "title": "Apache License 2.0 and source code recompile", "text": "I am not a license expert, so I got a bit confused... The Apache 2.0 license: does it allow one to get the source code and modify it for new projects? I mean, modify the source code (some of its parts) and recompile it?"} {"_id": "158049", "title": "Is it wrong to push messages from server to client in a client-server application?", "text": "I practically never see any web application where the server pushes messages to the client. While pushing messages from server to client is hugely used in multiplayer games, why is this model so underused in web apps? Is it wrong? Does it destroy the client-server model?"} {"_id": "158044", "title": "Can testers peer review the developers' design and code?", "text": "I am a junior developer for a small business using scrum / agile development. A long-term goal of ours is to be appraised at CMMI lvl 2. We have a team of 3 senior developers who implement user stories and a handful of junior developers for support. We are moving towards a \"three amigos\" methodology, especially in regard to separating the duties of development and testing (the third amigo being the product owner / business stakeholders). This way our senior developers can focus on implementation and our junior developers can focus as impartial testers. We are using peer reviews as a specific practice for verifying work products, such as source code. This paper, http://repository.cmu.edu/cgi/viewcontent.cgi?article=1208&context=sei, describes on p. 19 that an optimal peer review process for design and code ranges from 50-65% of the time spent designing and coding. **My question is this**: is it appropriate for our testers to peer review the developers' design and code? An advantage would be that the developers can spend more time implementing user stories. A disadvantage would be that the testers may sacrifice their objective/impartial view of the system, since peer reviewing creates a shared ownership of the code."} {"_id": "206599", "title": "Is it valid to initialize an instance of a class within the same class?", "text": "I was wondering if it's valid to initialize an instance of a class within the same class?
For example: public class Person { String name; String age; public Person getPerson() { Person p1 = new Person(); //some logic here return p1; } } I ended up doing that for a part of my code at work and thus wanted to know if there are any potential problems with doing something like this."} {"_id": "121403", "title": "When defining Product Backlog items, is it a bad idea to describe what will be part of the user experience?", "text": "First, I am using the TFS 2010 SCRUM template. I am wondering if this is a bad idea... I started defining a PBI for User Interface Elements. Basically, this will hold all the tasks that developers will be assigned when developing UI elements for a web application. Since this has to do with user interaction and usability, I was thinking it may be OK; however, my struggle is that it also can be considered functionality and may not fit as a PBI."} {"_id": "213035", "title": "What is the best way to estimate when using Agile with Scrum? Hours or Story Points?", "text": "I was at a company that switched us over from Hours to Story Points for our estimations. I remember it being difficult to understand, and we all chafed against the idea. However, once we figured out how to do it and started doing it, I think our estimations became much more accurate. We were much better able to complete our sprints on time and to estimate the duration of remaining work. Once a couple weeks had passed, we had a good idea of our individual teams\u2019 velocities and could then translate the story points into days and weeks of work. We then used that to know what we could complete within a given time-span. I guess the same could work if estimating in hours, as long as it is understood that each team will have a different number of hours per week in their velocity. One team may do 200 hours, while another may do 150 hours on average. This doesn't mean one team is working any harder than the other; it just means that they estimate hours differently, or that some team members may not have as much time available to work on the project - for instance, they may always be pulled into bug fixes, or help on other projects, which reduces their available time for this project. Anyway, what do you think? And why? Which is best? Hours, or Story Points?"} {"_id": "158043", "title": "Searching in a repository", "text": "I'm very new to source control management and one thing puzzles me: is it possible to search through the whole repository for a string? For example, I'm tracking one file, which has 100 commits, and I would like to see all the versions of the file which contain \"xyz\" inside. Is this possible? I'm using various GUI clients and none of them has this...? Yes, they can search through commit messages, but I want to find a version even when the commit message does not contain the string. To put it differently, which GUI client has such a feature? I don't care about which SCM it uses - git, hg, bazaar, all are fine."} {"_id": "238434", "title": "Should I use branches for example uses of a github repo?", "text": "I'm working on a Github repo called Designemplate at https://github.com/benwatkinsart/Designemplate and was thinking of developing some example uses of the CSS file to show how it's used. Should I use a branch called examples and store them there?"} {"_id": "238430", "title": "Do you have to have boxing of primitives in an OO language?", "text": "Is boxing of primitives required in OO languages to keep them consistent with the rest of the object system (generics etc.)?
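To make the premise concrete, a minimal Java sketch of the boxing I mean (my own illustration; the generic list below cannot hold a plain int, so every element becomes a heap object):

    import java.util.ArrayList;
    import java.util.List;

    class BoxingExample {
        static int sumPrimitives(int[] values) {
            int sum = 0;
            for (int v : values) sum += v; // plain ints, no allocation per element
            return sum;
        }

        static int sumBoxed(List<Integer> values) {
            int sum = 0;
            for (Integer v : values) sum += v; // each element was boxed; unboxed again here
            return sum;
        }

        public static void main(String[] args) {
            List<Integer> boxed = new ArrayList<>();
            boxed.add(1); // autoboxing: int -> Integer
            boxed.add(2);
            System.out.println(sumPrimitives(new int[] {1, 2}) == sumBoxed(boxed)); // true
        }
    }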
Or is it avoidable - is it possible to avoid any additional performance cost of having both primitives and objects in a language? One solution I can come up with on the spot is having references big enough to store values of every possible primitive type. Are there other (better) solutions, and are they implemented in popular languages?"} {"_id": "69858", "title": "Unit testable code: method visibility vs test complexity", "text": "I can't decide how to refactor code right. Let's say I have a complex hypothetical logic piece: public class ImageService { public void UploadImage(VectorData data) { var image = ConvertVectorDataToImage(); var thumbnail = CreateThumbnail(image); SaveThumbnail(thumbnail); SaveImage(image); SendEmail(image); ... } } _(If something in the example is outside single responsibility, let's pretend that I could make up a better example)_ If I make UploadImage visible (public), then writing tests becomes complicated. They tend to be long, hard to read, and if you would like to avoid code duplication, then logic in tests appears. If I make all those small pieces visible, the API becomes messy (because in my example, the only interesting thing to everyone is the UploadImage method). Even worse, after some time people will tend to use those small methods, because they are public, not because they should be used outside the class. So my question is: how do you balance between visibility and test complexity? Or am I doing something wrong in the first place?"} {"_id": "69856", "title": "Is 'Aurora' a good release paradigm?", "text": "As we all know, there are 'alpha' and 'beta' release statuses. Mozilla Firefox, however, labels its releases differently: 'nightly', 'aurora', 'beta'. Is this an example to follow? Is aurora the new alpha? Does it create confusion, since it is basically public beta testing with a cool name and cool icon?"} {"_id": "238439", "title": "The best way to open source my website", "text": "I'm creating a website where trust is a big issue; I want to prove, by making it open source, that I actually do what I claim. Question 1. If I make my source code open, how can I prove the actual website runs on this code? But I also have a dilemma: it is a commercial project and I don't want others to run off with it and claim it as theirs. So I think I need a correct license. Question 2. What license is best for me? Currently I'm thinking about GPL v3."} {"_id": "231577", "title": "Are object oriented programming languages procedural?", "text": "Procedural programming means coding the application as a series of tasks: do A, then do B, then do C. And these tasks are often wrapped in procedures or functions that can be easily called and run several times in the code. Object Oriented Programming is also often done by doing A, then doing B, then doing C. But objects are used (and correct me if I'm wrong, because I'm not sure) as sophisticated ways to store, manipulate and hide data. This significantly affects the design of the program. But the overall flow of the application is still do A, then do B, then do C. Do you agree? If so, would you say that OOP is essentially a type of Procedural Programming?"} {"_id": "160913", "title": "Validating best practices, property vs dto, simple type vs object", "text": "Consider a user profile page. A user can add many emails to his/her profile (something like GitHub's profile page). So, theoretically, the user hits the plus button, then enters an email address, and clicks the save button. Now we have some dispute in our team over how to validate this email address.
Some developers (including me) believe that we should validate this simple string via an extension method: bool isValid = newEmail.IsEmail(); They argue that any validation mechanism other than a single method call would be overhead, and an anti-pattern (against the KISS principle). While others argue that this email should first be added to a new instance of a `UserEmail` DTO, then that DTO should be passed to the relevant validator: UserEmail userEmail = new UserEmail() { Email = newEmail }; bool isValid = new UserEmailValidator().Validate(userEmail); The second group argues that validating a single property makes no sense, because a property can't exist on its own. Thus we should always validate an object, not a property (I mean value-type properties, like strings and integers, etc.) I searched but didn't find good resources on the net in favor of or against these methods. What disadvantages could each approach have? I would be grateful if somebody points me in the right direction and shows us best practices for validation."} {"_id": "121496", "title": "Discovering elegant ways of coding", "text": "I read this thread on programmers today and thought that looked like a really elegant way of coding. I would like to discover more neat methods of coding. What are the best ways of discovering new elegant ways of coding? (I'm already aware of the standard design patterns)"} {"_id": "170364", "title": "What are programmers made to do in spare time in jobs?", "text": "Well, with no prior job experience I am completely ignorant of how things happen at software companies. I want to know what programmers are made to do when there is nothing to do. Let's consider Facebook or Twitter. Now it is quite improbable that the Facebook people always have some feature or other in mind to be implemented. So software developers can quite be expected to have some time when there is nothing to do. Are they free to do anything they like in this time?"} {"_id": "234738", "title": "Observer pattern: \"Web of observers\" - Is this ever in use?", "text": "I had an idea (which I'm sure already exists): to create a sort of 'network of observers/subjects'. I would like to describe how it works and then ask several questions about it. Say we have 5 objects: Objects **A**, **B**, **C**, **D** and **E**. Objects **D** and **E** need to observe objects **A**, **B** and **C**. With the regular Observer pattern, both **D** and **E** would register as observers to **A**, **B** and **C**. This means both **D** and **E** would have to register three times as observers, creating **six** observer-subject relationships total. ![enter image description here](http://i.stack.imgur.com/OR0ii.png) The idea is to add another object in the middle; let's call it Object **O**. It implements both the Observer and Observable interfaces. It registers as an observer to objects **A**, **B** and **C**. Objects **D** and **E** register as observers to object **O**. Whenever objects **A**, **B** and **C** notify object **O**, object **O** notifies its own observers - **D** and **E**. Thus a sort of network is created. This network has a total of **five** observer-subject relationships. ![enter image description here](http://i.stack.imgur.com/4LjjX.png) **As I see it, this has two main benefits:** * **The less important benefit:** This solution allows for fewer observer-subject relationships, and thus (I think) creates less complexity in the system.
In this simple description the number of relationships only gets reduced by one, but the more observers and subjects there are, the bigger the benefit. * **The more important benefit:** Please consider an application with two groups of objects, group A and group B. **All objects in group A need to observe all objects in group B.** If we decide to add an object to group B, then using the regular Observer we'd have to update a lot of code to register all objects in group A as observers to the new object. With the 'network' solution (which I'm sure has a different name), we only register the 'middle' object (object **O**) as an observer to the new object in group B, and all objects in group A would be notified when the new object in group B changes state. **My questions:** 1. **Is this solution ever in use in professional projects?** Did you **ever encounter this** in use? Or is it just a cool idea but never used in practice? 2. What would you say are this pattern's disadvantages? 3. Does it have more advantages that I'm not aware of? 4. Does this have a name?"} {"_id": "38318", "title": "Where do I start to learn systems analysis?", "text": "Though I learned Systems Analysis in college, I feel like I am out of date. All I really remember is certain aspects of the SDLC, which I realize is a little pass\u00e9. I've been on the implementation end of the development process for the past five years, and I am wanting to explore design and perhaps do a few projects of my own. So I have two questions. What is the most popular design methodology? And what book would be a good place to start for someone who wants to learn that methodology? I know quite a lot of methodologies and patterns (PRISM, MVVM, UML, etc.), but I am having a very hard time picturing the overall process of design. Also, is there a website that offers sample requirements documents? The hardest part of starting your own projects is finding something to do."} {"_id": "165861", "title": "Unit testing and Test Driven Development questions", "text": "I'm working on an ASP.NET MVC website which performs relatively complex calculations as one of its functions. This functionality was developed some time ago (before I started working on the website) and defects have occurred whereby the calculations are not being performed properly (basically these calculations are applied to each user who has certain flags on their record etc). Note: these defects have only been observed by users thus far, and not yet investigated in code while debugging. **My questions are:** * Because the existing unit tests all pass and therefore do not indicate that the defects that have been reported exist, does this suggest the original code that was implemented is incorrect? i.e. either the requirements were incorrect and were coded accordingly, or it was just not coded as it was supposed to be coded? * If I use the TDD approach, would I disregard the existing unit tests as they don't show there are any problems with the calculations functionality - and start by making some failing unit tests which test/prove these problems are occurring, and then add code to make them pass?
Note: if it's simply a bug that is occurring that can be found while debugging the code, do the unit tests need to be updated, since they are already passing?"} {"_id": "116519", "title": "The need for Explicit Type Conversion in C#", "text": "Consider the following code: DerivedClass derObj = (DerivedClass)obj; Here `obj` is of type `Object`, and this is reasonable since Object is the base type of every class in C#. Here, since the type of `derObj` is defined at compile time, what is the need to explicitly use type conversion here? Can't the compiler predict on its own that it will be of `DerivedClass`? I understand that the conversion type doesn't have to match the `Derived` type, but for practical purposes, it will only be as useful as the `Derived` type. Could someone explain with a small, hypothetical example why the explicit type conversion is necessary when a reference of type Object is being assigned to a derived class? From what I know, in C, there is no need to perform explicit type conversion from `void*` to any pointer, and the compiler can handle it based on the type of the pointer to which the converted value is being assigned."} {"_id": "116510", "title": "How can I overcome the communication problem (oral language)", "text": "I am a junior developer who lives in Korea. I think I am quite passionate and active, and I want to become a developer known all over the world. To broaden my reach, I try to join open source projects. But the problem is always _English_. I often can't understand what the members are saying in the IRC channel. As you all know, to resolve an issue we have to discuss it in detail. So English is a big problem for me. And none of the other project members are from my part of the world. How can I solve this communication problem? If you know someone who has overcome this problem, please give me your opinions. Thanks in advance."} {"_id": "192038", "title": "How do I share different files in a git repo with different people?", "text": "In a single directory with a Git root folder, I have a bunch of files. I am working on one of those files, X.py, with my friend Alice. The other files I am working on with other people. I want Alice (and everyone else) to have access to X.py. I want Alice to only have access to X.py though. How can I achieve this with Git? Is there a way I can split a directory into two repos? That sounds rather cumbersome. Maybe I could add a remote repo that Alice can access containing X.py?"} {"_id": "114782", "title": "Is it necessary to understand what's happening at the hardware level to be a good programmer?", "text": "I'm a self-taught programmer, just in case this question is answered in CS 101. I've learned and used lots of languages, mostly for my own personal use, but occasionally for professional stuff. It seems that I'm always running into the same wall when I run into trouble programming. For example, I just asked a question on another forum about how to handle a pointer-to-array that was returned by a function. Initially I'm thinking that I simply don't know the proper technique that the designers of C++ set up to handle the situation. But from the answers and discussions that follow I see that I don't really get what happens when something is 'returned'. How deep a level of understanding of the programming process must a good programmer achieve?"} {"_id": "192035", "title": "method to allow me ability to freely modify my classes, but make them immutable to others?", "text": "I am creating the model part of an MVC architecture.
My one class will provide all the access needed to allow one to fetch system state. I want most of this state to be immutable, as it shouldn't be changed and I don't want anyone accessing my model to be able to break the state by making a foolish change. However, the objects representing the state have a bit of a tree structure: object A contains a set of objects B, which contain a set of objects C, which contain a set of objects D, etc. Due to the nature of how I have to fetch the data, I can't build the structure from the bottom up; the bottom-most 'child object' will be generated after all the others are completed. This is making it rather inconvenient to build the immutable objects in an intuitive manner; I want to add a set as my last phase of building, but I can't if I have already built immutable objects. I know of three approaches, but I'm not sure I like any. One is a builder pattern, but keeping track of all the builders for a complicated structure until I'm done seems like it could be very ugly. Second is to have a sort of 'clone' method that builds a new immutable object by cloning the old but adding a newly provided set to it (what is this pattern called again?). This gets ugly when I try adding the child-most element to the set, since I need to pretty much rebuild the entire tree with 'new' immutable objects to add one value. The third, and so far easiest, solution is to just get rid of the tree structure and make them make a call to my model to get the data they want. So instead of calling myObject.getChildren they call Model.getObjectChildren(myObject). This one will work, but darn it, I wanted the pretty tree structure for my immutable state. Is there some other way to conveniently build up my state in a manner where my Model can freely modify the state while it is being built, but still have it immutable when it's finally published to the Controller? PS: I'm running in Java if that matters."} {"_id": "5951", "title": "How do you efficiently keep your tests working as you redesign?", "text": "A well-tested codebase has a number of examples, but testing certain aspects of the system results in a codebase that is resistant to some types of change. An example is testing for specific output--e.g., text or HTML. Tests are often (naively?) written to expect a particular block of text as output for some input parameters, or to search for specific sections in a block. Changing the behavior of the code, to meet new requirements or because usability testing has resulted in change to the interface, requires changing the tests as well--perhaps even tests that are not specifically unit tests for the code being changed. * How do you manage the work of finding and rewriting these tests? What if you can't just \"run 'em all and let the framework sort them out\"? * What other sorts of code-under-test result in habitually fragile tests?"} {"_id": "219220", "title": "Testing for equivalence of two programs in different architectures (Sequential, Multi Processor, Multi Core)", "text": "I've found a question that I'm having trouble figuring out. It's as follows: Consider the program P :: x := 1; y := 1; z := 1; u := 0; and the program Q :: x,y,z,u := 1,1,1,1; u := 0; Which of the following is true? (only 1 option is correct) 1. P and Q are equivalent for sequential processors. 2. P and Q are equivalent for all multi processor models. 3. P and Q are equivalent for all multi core machines. 4. P and Q are equivalent for all networks of computers. 5. None of the above. At a first glance it appears that option (1) is true.
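To test my intuition about the concurrent cases, I sketched the observable difference in Java (my own approximation; note that Q's simultaneous assignment is modeled here as ordinary sequential writes, which is the subtle part):

    class PversusQ {
        // volatile so a second thread can see intermediate writes
        static volatile int x, y, z, u;

        static void runP() { x = 1; y = 1; z = 1; u = 0; }

        static void runQ() {
            x = 1; y = 1; z = 1; u = 1; // Q's multiple assignment, step by step
            u = 0;
        }

        public static void main(String[] args) throws InterruptedException {
            Thread observer = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    if (u == 1) {
                        System.out.println("saw u == 1: only Q can do this");
                        return;
                    }
                }
            });
            observer.start();
            runQ(); // with runP() the message can never appear
            Thread.sleep(100);
            observer.interrupt();
        }
    }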
But in order for P & Q to be equivalent, their respective instructions are to be executed sequentially, though the programs themselves can be executed in any order (even in parallel). This holds true for options (1) & (2) [or even option (3)]. > E.g.: P is being executed on processor-1 & Q is being executed on processor-2 > (in parallel), but their individual instructions are being executed > sequentially on their respective processors. In such a case both of them are > equivalent, which makes option (2) true. Here is where the ambiguity comes in. Can anybody help me?"} {"_id": "96589", "title": "Ruby on Rails: Converting a Rails 3 app to Rails 2?", "text": "I just recently learnt Ruby on Rails using Ruby 1.9.2 and Rails 3 using Michael Hartl's tutorial. I'm making an application which I wanted to host on my preexisting server. However, I found that they still do not support Rails 3, as it breaks compatibility with Mongrel. My question is: How difficult will it be to port my whole Rails 3 app to Rails 2 if need be? Keep in mind that although I understand Rails 3 quite well by now, I have never used Rails 2 and do not have any idea of the differences between them."} {"_id": "96585", "title": "Good coding problem for teaching and practicing how to write good unit tests?", "text": "Every once in a while I find myself teaching others some techniques useful for writing effective unit tests (usually in Java, where many people find writing tests challenging). However, I have yet to find a small-but-not-too-small problem to use to demonstrate the techniques. Likewise, I have not yet found a great problem we can use to practice writing tests. Some problems I have tried have felt completely unrealistic or irrelevant to real life. Does anyone know of a great problem I can use which is small but complex enough and realistic enough to use to teach and practice with? In this case _small_ = _low enough complexity to be approachable for most developers_, not necessarily _easy to finish in a short amount of time._"} {"_id": "96580", "title": "Is experience in business analysis useful when moving into a programming career?", "text": "**Note: If you want the gist of this post, skip to the last two paragraphs.** Before expanding on the question, just a short preamble: * I figured that the Programmers Q&A network was a better place to ask this question than StackOverflow. **I might be wrong**. * This is a cross-post of a question I asked on Quora some time ago. The general gist of the answers was that, **even though it might not necessarily help you to actually land a job in programming, experience in business analysis will most likely benefit a programmer**. Main reasons for this included: * the \"ability to understand and empathise with the business side of [a] company's operations\"; * the idea that the business analysis skill and mindset \"might map pretty well to [programming] and help you pick up skills more quickly\"; and * that \"analytical skills are critical to being a good programmer\". * Someone elsewhere on this network asked whether \"[i]s it more important to focus on a business domain or programming stack/technology\". This question resonates with one element of my question, as it asks about whether to focus on specific domain knowledge (e.g. beer brewing) versus specific technology stacks (e.g. barrel ageing, double dropping, etc.).
The other element contained in my question (additional to the value of specific domain knowledge) is the value of **general analytical thinking (that can be applied to any business domain) for a programmer**. * Another guy on this network (sorry, can't paste the hyperlink as I'm already capped at 2 per post at my current reputation level) asked \"[h]ow much system and business analysis should a programmer be reasonably expected to do?\". He states that in most jobs he held there were no formal business or systems analysis roles. The programmers were expected to play those roles. Consequently, he would often **\"lose out to guys who may be average programmers but have a much better understanding of the business processes\"**. I guess that, by merit of asking the question, the asker already contributed toward answering mine - business analysis knowledge (i.e. both general analytical thinking skills as well as specific business domain knowledge) and experience (\"embracing and internalising a customer-oriented approach\" as stated in one answer) is useful to a programmer. So here's **my question in a bit more detail**: I currently work as a consultant for a consulting house. I primarily do business analysis work in the financial industry (to get a good idea about what it is I actually do, consider looking at what the International Institute of Business Analysis has to say). I have been at it now for one and a half years, straight out of university, where I obtained a degree in informatics (which is a fancy word for business analysis). **I am considering a career move into programming**. The question: will my qualification in business analysis, and one and a half years of experience working as a business analyst, add any value to a prospective career in software development? **Another question that naturally follows**: I do realise, as pointed out by the answers on Quora, that it might not actually help me to _land_ a job in programming. So what _would_? (Obviously I do have a plan, but I'm keen to hear what the community has to say.) Feel free to ask questions or recommend edits to the question, and thanks in advance!"} {"_id": "88694", "title": "Is there an editor in which I can run code using SSH?", "text": "I need an editor where I can edit my code client-side, then highlight part of it and run the highlighted code on the connected server. An example of this would be SQL Management Studio, but I need it for shell scripts and other languages. UltraEdit does something close with a copy-paste mechanic; are there any others?"} {"_id": "65110", "title": "Facebook contest and vote(like) exchange", "text": "**UPDATED** We are running a contest using a Facebook application to connect with Facebook and identify users. A user follows the standard Facebook procedure to allow the application and log in. The user can then vote for any entry, but only once. The vote is saved, to avoid the same user voting twice. Since the contest has a cash prize, we face the problem of vote exchange. Vote-exchange is the process where one individual goes to a group/page/forum/etc. and trades his votes in another contest for people voting for his entry in our contest - in essence, \"buying\" votes. They achieve higher numbers through fake profiles (another problem). So we are faced with: 1. how to detect this vote exchanging 2. how to detect fake Facebook profiles 3. in general, how to safeguard such a procedure (with or without Facebook) Right now we are looking for the blindingly obvious violations: 1.
Very similar emails (john1@, john2@) 2. Sudden spike in votes (in terms of timing and vote count)"} {"_id": "82493", "title": "Community blog sites", "text": "I'm considering writing a blog about my experiences while programming. Unfortunately, I doubt I'll be able to post regularly. What I'm wondering is if there are any good software development blogging sites that incorporate posts from multiple people, allowing each one to post at their own rate and, because there's a number of them, wind up with enough content that it's worth visiting them? Context: There are three types of posts that I'd likely create: * Personal thoughts on various topics (I think language feature X is important because..., etc) * Personal experience with a given language/framework/etc (I had problems with framework X because...) * Personal experience trying to solve a specific problem (Here's what I needed, here's the problems I ran into, and here's how I solved it) Edit: Of course, I mistyped the title as \"community glob sites\" at first. That doesn't bode well for my blogging performance ;)"} {"_id": "127446", "title": "version control workflow for environment with shared code", "text": "I work at a company that is looking into \"upgrading\" our version control system. We have about 200 websites that operate in their own directories. However, all of these websites use the same shared code. We basically use the 200 directories for the display code (html/css) and then the shared code for the processing and database queries. It's a little more detailed, but I'll leave it at that for now. We also have a backend system that primarily uses shared code only. We currently have multiple programmers and multiple coders/designers that all need access to the code on a regular basis. Unfortunately our shared code isn't exactly modular, so it is hard to split something off into a \"project\" when something needs to be done, since so many files are hit. I am basically looking for suggestions on the best workflow available for a version control system. As far as we can tell from research, git seems to be the way to go. But being the version control newbies that we are, we still have concerns about general workflow. Can anyone give me any advice on this topic? **Edit:** I'm trying to be a little more detailed; hopefully I answer some questions from the comments. Pretty much anyone is allowed to change any code at any time. If a change is made to shared code then it is immediately applied to the central code base. We don't really have a versioning scheme. We used to in the past, but it was never enforced when new developers were brought on board. The shared code is clearly separated from the rest of the code. There are a few files in our system that get edited a lot. We are constantly \"stepping on each other's toes\" with these files. These files are very important to our system. I'll be the first one to say this is not an ideal work situation. We are well aware these files need to go. We are in the planning process of modularizing them, but we would still need to consider them when choosing a versioning system. I guess another good thing to mention is the fact that we have a large number of files. This isn't ideal either, but it's happened primarily from not having a sound version control system in place. Many developers over the years would just save a backup of a file if making a significant change. This is another reason we want to move away from the current scheme.
Thanks for your responses already."} {"_id": "127447", "title": "Recommendations for implicit versus explicit line joining", "text": "I would like to know recommendations about Implicit Line Joining versus Explicit Line Joining in Python. In particular, do you favor one form over the other? What do you recommend as the general default? What criteria do you have for choosing one over the other, and if you do have a preference for one, when do you make exceptions for the other? I have an answer in mind for this question that reflects my own biases, but before I post my own answer I would like to know what others think... and if you have a better set of criteria than what I have in mind, then I'll certainly accept your answer over my own. Some of the recommendations may be generalized to this choice in other programming languages, but my own bias is somewhat stronger in Python due to some language-specific features, so I'd like to know both the general and the Python-centric reasoning you may have on this topic. For some background, the discussion happened around a particular question on stackoverflow, but I thought it was more appropriate to move the discussion over to here as a question, to avoid cluttering up the answer on SO with this tangent since it has veered off-topic from the original question. You can look at that question and its answers for the example code snippets that got the discussion going. Here is a simplified example: join_type = \"explicit\" a = \"%s line joining\" \\ % (join_type) # versus join_type = \"implicit\" b = (\"%s line joining\" % (join_type))"} {"_id": "82496", "title": "Is defining every method/state per object in a series of UML diagrams representative of MDA in general?", "text": "I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where \"stuff happens\". For example, a UML class \"Content\" could have the method DeleteFromFileSystem(void), which could be implemented like this: public partial class Content { public void DeleteFromFileSystem() { File.Delete(...); } } All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? For now my impression of MDA/DDD (which this has been called by higher-ups) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched and interspersed into the mentioned gargantuan bombs. _Please refrain from interpreting this as a rant - I am merely curious if this is typical MDA or some sort of extreme MDA_ **UPDATE** Concerning the example above: in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality scattered over multiple places, instead of having one single interface for which we can provide a second implementation."} {"_id": "150216", "title": "Software design approach for large relational database", "text": "I am working on a personal side project that will utilize a complex and large relational database. During the design of the database I had some co-workers give advice on how I should approach my application; they advised me to use the Entity Framework. From what I've read and what I am watching, it sounds like the Entity Framework is the right tool for the job.
However, what I have come to programmer.stackexchange for is to ask whether or not somebody who is just getting going in their career should do this. Let me rephrase: I have 3 years of web design / development experience (html/css/js/php/mysql) and now about 1 year of experience with actual software engineering in C#/VB. So the question is: should I utilize the Entity Framework, or should I build this rather complex and ambitious project without the Entity Framework, i.e. building all the SQL that will handle massive amounts of relational data? The reason why I ask is because I want to get the most ROI (knowledge and experience) from the project. Once again... thanks!"} {"_id": "82499", "title": "From a design perspective, what are the best practices for logging?", "text": "I want to add logging to an application I'm currently working on. I've added logging before, that's not an issue here. But from a design perspective in an object-oriented language, what are the best practices for logging that follow OOP and patterns? **Note:** I'm currently doing this in C#, so examples in C# are obviously welcome. I would also like to see examples in Java and Ruby. * * * **Edit:** I'm using log4net. I just don't know what's the best way to plug it in."} {"_id": "233265", "title": "How to use the Decorator pattern to add little functionality to big objects?", "text": "This question regards the usage of the Decorator pattern to add little functionality to objects of large classes. Following the classic Decorator pattern, please consider the following class structure: ![enter image description here](http://i.stack.imgur.com/FCyEd.png) For example, imagine this happens inside a game. Instances of `ConcreteCharacterDecorator` are meant to add little functionality to the `ConcreteCharacter` they are 'wrapping'. For instance, `methodA()` returns an `int` value representing the damage the character inflicts on enemies. The `ConcreteCharacterDecorator` simply adds to this value. Thus, it only needs to add code to `methodA()`. The functionality of `methodB()` stays the same. `ConcreteCharacterDecorator` will look like this: class ConcreteCharacterDecorator extends AbstractCharacterDecorator{ ConcreteCharacter character; public ConcreteCharacterDecorator(ConcreteCharacter character){ this.character = character; } public int methodA(){ return 10 + character.methodA(); } public int methodB(){ return character.methodB(); // simply delegate to the wrapped object. } } This is no problem with small classes containing two methods. **But what if `AbstractCharacter` defined 15 methods?** `ConcreteCharacterDecorator` would have to implement all of them, even though it's only meant to add little functionality. I will end up with a class containing one method that adds a little functionality, and another 14 methods that simply delegate to the inner object. It would look like so: class ConcreteCharacterDecorator extends AbstractCharacterDecorator{ ConcreteCharacter character; public ConcreteCharacterDecorator(ConcreteCharacter character){ this.character = character; } public int methodA(){ return 10 + character.methodA(); } public int methodB(){ return character.methodB(); // simply delegate to the wrapped object. } public int methodC(){ return character.methodC(); // simply delegate to the wrapped object. } public int methodD(){ return character.methodD(); // simply delegate to the wrapped object. } public int methodE(){ return character.methodE(); // simply delegate to the wrapped object. } public int methodF(){ return character.methodF(); // simply delegate to the wrapped object.
} public int methodG(){ return character.methodG(); // simply delegate to the wrapped object. } public int methodH(){ return character.methodH(); // simply delegate to the wrapped object. } public int methodI(){ return character.methodI(); // simply delegate to the wrapped object. } public int methodJ(){ return character.methodJ(); // simply delegate to the wrapped object. } public int methodK(){ return character.methodK(); // simply delegate to the wrapped object. } public int methodL(){ return character.methodL(); // simply delegate to the wrapped object. } public int methodM(){ return character.methodM(); // simply delegate to the wrapped object. } public int methodN(){ return character.methodN(); // simply delegate to the wrapped object. } public int methodO(){ return character.methodO(); // simply delegate to the wrapped object. } } Obviously, very ugly. I'm probably not the first to encounter this problem with the Decorator. How can I avoid this?"} {"_id": "60544", "title": "Why do game developers prefer Windows?", "text": "Is it that DirectX is easier or better than OpenGL, even if OpenGL is cross-platform? Why do we not see really powerful games for Linux like there are for Windows?"} {"_id": "60545", "title": "How to tell client I no longer want to work on his project", "text": "This is probably going to sound messed up, but here it goes. I've been working on a project for a client for a while now. I wasn't given any details except for \"It has to be an XYZ plugin and interface with ABC product\". Which was fine, but now we're towards the end (I think) and it's just dragging out. I don't have any time to spend on it and I'm already over schedule by 3 months. Trying to get the client to describe to me how he would like to be able to navigate the data (a UI issue) is just difficult. I've submitted mock-ups of what I think he wants, but his latest response is \"you should look at XXX product\", as it has similar functionality. Of course, I looked at it and it looks similar to what I submitted, but I don't think that the way I've built the framework is going to support what he is now describing to me. We've had good communication throughout the process, but he doesn't know what he wants. I explained how I was going to build the framework and he agreed, so it isn't a bad choice on my part about design. When I go over what I think are finalized modules, he says, \"You should have done it this way\", which requires me to go back and rework code and UI. Some smaller items could have been better thought out by me, but the big things are how I interpreted his requirements, and I've gone over this module several times during development. I already received the final funds last month, so I'm working for free at this point. I no longer want to deal with this project. I've already received payment. I've done other successful projects with this client before and he has a lot of other projects he wants to do. What the heck should I do? I don't want to work on this project anymore. I don't want to ask for any more money (money isn't really the issue). I don't want to make him mad either. I know it looks like I want to have my cake and eat it too. If you think I should call it quits, how should I do it given the circumstances?"} {"_id": "64644", "title": "Is the rule of 10% code is comments still valid?", "text": "I've used some older languages before C# and I always felt the need to comment what I was doing so it is clear for anyone who reads the code.
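For example, the kind of commenting habit I mean (a made-up Java-style snippet; the same applies to any language):

    class PricingExample {
        // A comment that merely restates the code adds nothing:
        int total(int unitPrice, int quantity) {
            return unitPrice * quantity; // multiply unit price by quantity
        }

        // A comment that records the why earns its place:
        int discounted(int total) {
            // Hypothetical business rule: loyal customers get 5% off.
            return total - total / 20;
        }
    }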
But now that I use C# decently, I often feel that it's a pain to write most of the comments, because I feel like I state the obvious. So unless I do complicated formulas or algorithms, I'm not sure how I should comment. Microsoft has achieved such readability in C# that I'm mesmerized. Do you have the same feeling? How do you write your comments so they are not pure redundancy? Have you changed your commenting habits / best practices?"} {"_id": "153744", "title": "Books or help on OO Analysis", "text": "I have this course where we learn about the domain model, use cases, contracts and eventually leap into class diagrams and sequence diagrams to define good software classes. I just had an exam and I got trashed, but part of the reason is we barely have any practical material; I spent at least two good months without drawing a single class diagram by myself from a case study. During the exam the teacher gave us the domain model, two or three use cases on some vital system operations and a glossary. We had to model up a class diagram and some sequence diagrams for these. The thing is, the jump from real-life modeling to OO classes didn't seem clear to me. I handle the GRASP principles pretty well, but I lack practice. I could use something that gives a pre-made analysis and asks you to draw the OO design in the form of class diagrams or something. I know it varies from one person to another, but some concepts should always make sense. I'm not here to blame the system or the class I'm in; I'm just wondering if people have some exercise-style books that provide domain models with glossaries and system sequence diagrams and ask you to use GRASP to make software classes? I could really use some alone-time practicing going from analysis to conception of software entities. I'm almost done with Larman's book called \"Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, Third Edition\". It's a good book, but I'm not doing anything by myself since it doesn't come with exercises. I've been programming in OO for four years now in Java and C++. It was oriented on business applications. I've also spent 6 months on UML syntax in a crash course in college. It's just that I don't understand how they can expect me to define software entities from use cases into class diagrams if I've done like one or two in my life."} {"_id": "60549", "title": "How strictly do you follow the \"No Dependency Cycle\" rule (NDepend)", "text": "A bit of background: As a team lead I use NDepend about once a week to check the quality of our code. Especially the test-coverage, lines of code and cyclomatic complexity metrics are invaluable for me. But when it comes down to levelization and dependency cycles I am a bit... well, concerned. Patrick Smacchia has a nice blog post which describes the goal of levelization. To be clear: under \"dependency cycle\" I understand a circular reference between two namespaces. Currently I am working on a Windows CE based GUI framework for embedded instruments - just think of the Android graphics platform but for very low-end instruments. The framework is a single assembly with about 50,000 lines of code (tests excluded). The framework is split into the following namespaces: * Core Navigation & Menu Subsystem * Screen Subsystem (Presenters / Views / ...)
* Controls / Widget Layer Today I spent half the day trying to bring the code to proper levels [thanks to ReSharper, no problem in general], but in all the cases some dependency cycles exist. So my question: How strictly do you follow the \"No Dependency Cycle\" rule? Is levelization really that important?"} {"_id": "153742", "title": "Using XSLT for messaging instead of marshalling/unmarshalling Java message objects", "text": "So far I have been using either handmade or generated (e.g. JAXB) Java objects as 'carriers' for messages in message processing software such as protocol converters. This often leads to tedious programming, such as copying/converting data from one system's message object to an instance of another system's message object. And it sure brings in lots of Java code with getters and setters for each message attribute, validation code, etc. I was wondering whether it would be a good idea to convert one system's XML message into another system's format - or even convert requests into responses from the same system - using XSLT. This would mean I would no longer have to unmarshall XML streams to Java objects, copy/convert data using Java and marshall the resulting message object to another XML stream. Since each message may actually have a purpose, I would 'link' the message (and the payload it contains in its properties or XML elements/attributes) to EXSLT functions. This would change my design approach from an imperative to a declarative style. Has anyone done this before and, if so, what are your experiences? Does the reduced amount of Java 'boilerplate' code weigh up against the increased complexity of (E)XSLT?"} {"_id": "245383", "title": "How to comment the file system?", "text": "When certain parts of code are unusual or unclear, common practice is to leave a comment explaining why it is so. However, sometimes a filesystem may have an unusual configuration, such as a directory with 0477 privileges. **Is there any convention for leaving comments about this?** I am writing a 'Developers Wiki' for the company, as that would be the place for such things. However, I would prefer something **more standard if it exists** and more accessible when, for instance, the next guy has to SSH in from home in the middle of the night and might not think to check the wiki (or even have access to it)."} {"_id": "97181", "title": "Is it true that \"Real programmers can write assembly code in any language.\"?", "text": "> Real programmers can write assembly code in any language. > (Larry Wall). As far as I can make out, Mr. Larry Wall is trying to say that to a real programmer any language can have the same functionality as ASM. But I seriously do not understand. How can you write assembly code in high-level languages like Perl, Python, Java and C#? Languages like Perl and Python don't even have pointers. Or does he mean something else? What is Mr. Wall actually trying to say?"} {"_id": "200998", "title": "How would I make a compiler in C++?", "text": "This has probably been asked before, but I can't google \"How to make a compiler in C++\" because I will just get \"How to compile C++\" as the results. Anyway, for my question, I'd like to make a simple programming language in C++. Now, I understand basic file IO stuff, but what I don't get is how to build an EXE. The problem here is I don't know how exactly EXEs are \"planned out\", granted that most people don't. I was going to simply parse the language into assembly and assemble it using an assembler.
But I don't want to do that; I want to actually compile it directly into an EXE. Does anyone know how this would be done? PS: To all you people who say making a compiler is virtually impossible: it's a fairly fast process, it's just implementing OOP features that's hard"} {"_id": "97187", "title": "Can you actually produce high quality code if you are sleep deprived?", "text": "I have heard about programmers coding for two days without sleep and drinking coffee and Red Bull. Also, in movies like The Social Network, there is a scene showing that Mark Zuckerberg has been programming for 36 hours. Also it's said that in companies like Facebook, Google, Foursquare, etc. they can code for more than 24 hours without sleep. Is this really true? Can you actually produce high-quality code if you are sleep deprived? _Can things like Red Bull make up for sleep?_"} {"_id": "155594", "title": "What are the best practices to use NHibernate sessions in asp.net (mvc/web api)?", "text": "I have the following setup in my project: public class WebApiApplication : System.Web.HttpApplication { public static ISessionFactory SessionFactory { get; private set; } public WebApiApplication() { this.BeginRequest += delegate { var session = SessionFactory.OpenSession(); CurrentSessionContext.Bind(session); }; this.EndRequest += delegate { var session = SessionFactory.GetCurrentSession(); if (session == null) { return; } session = CurrentSessionContext.Unbind(SessionFactory); session.Dispose(); }; } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); var assembly = Assembly.GetCallingAssembly(); SessionFactory = new NHibernateHelper(assembly, Server.MapPath(\"/\")).SessionFactory; } } public class PositionsController : ApiController { private readonly ISession session; public PositionsController() { this.session = WebApiApplication.SessionFactory.GetCurrentSession(); } public IEnumerable Get() { var result = this.session.Query().Cacheable().ToList(); if (!result.Any()) { throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound)); } return result; } public HttpResponseMessage Post(PositionDataTransfer dto) { //TODO: Map dto to model IEnumerable positions = null; using (var transaction = this.session.BeginTransaction()) { this.session.SaveOrUpdate(positions); try { transaction.Commit(); } catch (StaleObjectStateException) { if (transaction != null && transaction.IsActive) { transaction.Rollback(); } } } var response = this.Request.CreateResponse(HttpStatusCode.Created, dto); response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + \"/\" + dto.Name); return response; } public void Put(int id, string value) { //TODO: Implement PUT throw new NotImplementedException(); } public void Delete(int id) { //TODO: Implement DELETE throw new NotImplementedException(); } } I am not sure if this is the recommended way to insert the session into the controller.
I was thinking about using DI, but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the controller's constructor to get this: public PositionsController(ISession session) { this.session = session; } **Question:** What is the recommended way to use NHibernate sessions in ASP.NET MVC / Web API?"} {"_id": "200990", "title": "Are you allowed to \"copy\" the GUI/Features of another application", "text": "I'm making an app which is based heavily on another application that serves a similar purpose. I'm not actually planning on selling it, but I'm wondering whether it would even be legal for me to sell it. I don't borrow graphics from the other application, but I have a very similar color scheme, virtually the same features (I added a few, but nothing really significant), and a very similar interface layout. Obviously, I'm not borrowing any code (it's written in Objective-C and mine's written in Java). Again, I'm not actually planning on selling it; I was just wondering about the hypothetical. I'm not sure whether it's just the code that's copyrighted or the actual design itself."} {"_id": "153298", "title": "Should I swap from WCF to NserviceBus", "text": "We have a central server that sends and receives messages from a number of PCs that are located on client networks in various locations. To facilitate this, currently I'm using WCF with NetTcpBinding, using duplex communication secured with certificates. Now, we have a number of issues with this - mainly that we are being asked to support \"disconnected mode\" (we need to be fault tolerant). From what I know, there is no simple way to do this using the WCF stack - we'd need to implement something and perhaps use MSMQ. I've been looking at NServiceBus lately, and from what I can see it seems to fit the bill well - fault tolerance, messages can be sent over the internet via a simple HTTP gateway, etc. I know it's well respected in the community, and I can see why from looking into it. So, my question is... Does employing NServiceBus sound like a sensible idea, or does anyone have any other suggestions / real-world experience that relate to this? I guess I'm worried about introducing a new tech that I know relatively little about and facing problems with things like securing it, setting everything up in a reliable way, and gotchas along the way. I'm also wary of \"gold-plating\" the architecture and choosing something shiny that will end up bogging me down in implementation versus sticking with WCF and just making it work for me. Thanks!"} {"_id": "215088", "title": "Effective and simple matching for 2 unequal small-scale point sets", "text": "I need to match two sets of 3D points; however, the number of points in each set can be different. It seems that most algorithms are designed to align images and are tuned to work with hundreds of thousands of points. My case is 50 to 150 points in each of the two sets. So far I have acquainted myself with the `Iterative Closest Point` and `Procrustes Matching` algorithms. Implementing `Procrustes algorithms` seems like total overkill for this small quantity. `ICP` has many implementations, but I haven't found any readily implemented version accounting for the so-called \"outliers\" - points without a matching pair. Besides the implementation expense, algorithms like `Fractional` and `Sparse ICP` use statistical information to discard points that are considered outliers.
For series with 50 to 150 points, statistical measures are often biased or statistical significance criteria are not met. I know of the `Assignment Problem` in linear optimization, but it is not suitable for cases with unequal sets of points. Are there other, small-scale algorithms that solve the problem of matching 2 point sets? I am looking for algorithm names, scientific papers or C++ implementations. I need some hints to know where to start my search."} {"_id": "64396", "title": "Watson Encoding Information", "text": "IBM's Watson has a lot of book information encoded into a 'database' that Watson searches in real time. Does anyone know how that information is encoded? I mean, no one could type in all of those rules."} {"_id": "124364", "title": "Implicitly or explicitly sorted elements?", "text": "So, I've got the following problem: I have a number of ordered elements, ordered in the following way: * Type 1: Must be the first element. (only one possible, always present) * Type 2: Must be the second and following elements, if present. * Type 3: Must be ordered last. Stated a bit more succinctly / pseudoregex: `12*3+` Now, when constructing these elements, I do so in a method, where it is easy to do something like: public List CreateElements(...) { var list = ...; list.Add(new Type1()); list.AddRange(GetType2Elements()); list.AddRange(GetType3Elements()); } However, there is no explicit semantic ordering - it's just a list that I happen to construct in a certain way. A more explicit way is to have IElement implement IComparable, and then use an explicitly sorted list and return that instead. That would carry the \"sortedness\" out of my construction method, and sort the entities regardless of how they are created. However, it is suddenly a bit less trivial, since at least 3 different Compare methods must be implemented. What would you choose?"} {"_id": "64394", "title": "Too many seniors in one team?", "text": "Can having too many senior programmers in one team turn out to be a bad thing? Having, say, 4-5 senior programmers in a team of 6-7 people. What is the optimal number/ratio in these kinds of situations? Can this lead to too much philosophy and arguments about ideas? Has anyone had such an experience that they can share with me?"} {"_id": "191419", "title": "Configuration file that can be modified by user in C#", "text": "I want to create a configuration file (text file preferred) that can be modified by the user. A Windows service will read from that file and behave accordingly. Can you recommend a file format for this kind of process? What I have in mind is a config file like game config files, but I cannot imagine how to read from one. My question is very similar to \"INI files or Registry or personal files?\", but mine is different because I need to consider user editing."} {"_id": "179624", "title": "Could a programming language work as well without statements?", "text": "While programming in JavaScript, I've noticed that everything that can be done with statements and blocks can be done with expressions alone. Can a programming language work fine with only expressions? And, if yes, why are statements used at all?"} {"_id": "254088", "title": "MVC helper functions business logic", "text": "I am creating some helper functions (mvc.net) for creating common controls that I need in almost every project, such as alert boxes, dialogs, etc. If these do not contain any business logic and are just client-side code (HTML, JS), then it's OK.
My problem arises when I need some business logic behind a helper. I want to create a 'rate my (web) application' control that will be visible every 3 days; the user may hide it for now, navigate to the rate link, or hide it forever. To do this I need some sort of database access and some code that acts as business logic. Normally I would use a controller for this, with my DI and everything, but I don't know where to put this code now. Should this be placed **in the helper function** or **in a controller** that returns objects instead of ActionResults?"} {"_id": "118276", "title": "How can I charge money for open source targeted at individuals?", "text": "Most questions like this only seem to apply to big pieces of software for companies with money. I'm developing a small application targeted at individuals. Informally speaking, I want to sell the application to users, but allow them to fix and modify it (but not redistribute it). Practically and legally speaking, how would I do this?"} {"_id": "195568", "title": "QT-C++ vs Generic C++ and STL", "text": "Been brushing up on my C++ lately, on Ubuntu QQ. I love the Qt framework for everything, especially building GUIs. I became quite familiar with it when using PyQt over the last few years. When using PyQt, I had some issues that are now more pronounced when using C++ with Qt: **Qt has many extensions to C++ that are Qt-specific** - QString being just one common example, not to mention automated garbage collection. It is possible to write Qt applications using C++ without knowing much at all about C++ and the STL. I may have to hit the job market again soon and I'd like to be able to consider C++ positions - but I'm afraid binding myself too much to Qt will limit my abilities to work with generic C++, which were once quite formidable but are now long dormant and rusty. Should I avoid Qt? Would I be better off using wxWidgets or GTK+ for building GUIs? What's the best GUI framework to use that allows/requires the most use of generic C++ and the STL? How do I make myself most marketable as a C++ programmer when it comes to GUI frameworks, etc.?"} {"_id": "195561", "title": "Best option to send image from javascript client to SQL server", "text": "From a client (browser), using JavaScript, I want to send an image to SQL Server (and store it with the user profile), along with other data such as user id or name. Which option is better? Send the image as img/jpg? Or convert it to base64 and send everything in JSON format?"} {"_id": "46299", "title": "PDF or ebook Java API documentation", "text": "Since I have a long train ride to and from work I was wondering if there is a version of the Java API documentation floating around that I could put on my Kindle. It would be nice on the rare occasion I get something in my head that I want to think about some more. I know I can browse the web through the Kindle but coverage is spotty and slow. I know that the API docs are not really designed for a sequential reading format but I'm curious to see if anyone else has thought about this and given it a shot. Note: I am not reading the Java API to learn how to program Java but to review classes I plan to use.
The differences between things like FileReader and FileInputStream are subtle and best gleaned from reviewing the API, not from reading a chapter in a book that will tell me a lot of stuff I already know."} {"_id": "119896", "title": "How to populate a private container for unit test?", "text": "I have a class that defines a private (well, `__container` to be exact, since it is Python) container. I am using the information within said container as part of the logic of what the class does and have the ability to add/delete the elements of said container. For unit tests, I need to populate this container with some data. That data depends on the test being done, and thus putting it all in setUp() would be impractical and bloated - plus it could add unwanted side effects. Since the data is private, I can only add things via the public interface of the object. This runs code that need not be run during a unit test and in some cases is just a copy and paste from another test. Currently, I am mocking the whole container, but somehow it does not feel that elegant a solution. Due to Python's mocking framework (mock), this requires the container to be public - so I can use `patch.dict()`. I would rather keep that data private. **What pattern can one use to still populate the containers without exercising the public method, so I have data to test with?** Is there a way to do this with mock's `patch.dict()` that I missed?"} {"_id": "119893", "title": "How is this \"interface\"-like structure/pattern called?", "text": "Let's assume we have an `XmlDoc` class that contains basic functionality for dealing with an XML data structure and saving/loading data to/from a file. Now we have several subclasses, `A`, `B` and `C`. They all inherit from `XmlDoc` and add component-specific methods for setting and getting lots of data. They are like \"interfaces\" but also add an implementation for the signatures. Finally, we have an `ABCDoc` class that joins all the \"interfaces\" via virtual multiple inheritance and adds some `ABCDoc`-specific stuff, such as using `XMLDoc` methods to set an appropriate doc type. We may also have an `ADoc` class for only saving `A` data. What is this pattern called? \"Interface\" is not really the right word since interfaces usually do not contain an implementation. Bonus points for C++ code conventions."} {"_id": "52459", "title": "how do you manage application performance reviews", "text": "I have been trying to figure out ways to effectively do performance reviews before an install happens for all releases done by our team. Do you usually make this a part of the code review process or do you handle it as a separate review task? FYI - we do not have a dedicated performance testing team. It is up to the developers to make sure the app performs well. The apps I am referring to are web applications."} {"_id": "255067", "title": "Beginning a sentence with a function name?", "text": "Occasionally, while typing something up that relates to a case-sensitive programming language, I end up starting a sentence with a function name. Now the rules of English state that the first word in a sentence needs to be capitalized; the function name is lowercase, though. If you are wondering what I could be saying that would result in the first word being a function name, take this example: > Your fread implementation is broken. fread needs to return how many bytes were read.
I understand that I could change the second instance of _fread_ to _It_, but I want to know the best way of handling this other than just rewriting the sentence. Should I capitalize the function name? The only way I would like to hear \"rewrite the sentence\" as an answer is if starting the sentence with a function name violates some English rule that I am not aware of. Edit: I really thank everyone for these answers. They have changed and improved my insight into the issue. I have learned quite a bit from this. I am very surprised that I did not think of these simple but good solutions. I do think my stance on altering the sentence was too tough, and now I realize, thanks to these good answers, that overall altering the sentence appears to be the best option for dealing with these cases - be it adding parentheses after the function, saying \"the function\" before the function name or, if available, using formatting for the function name."} {"_id": "99498", "title": "How should a web API handle misspelled/extra parameters?", "text": "**Question:** For a public-facing web API (send HTTP GET/POST requests, get JSON/XML data back), how should parameters be handled that are either misspelled or extra? It seems to me that if the incorrect parameters are ignored, an error in the caller's code may go unnoticed since they would be getting back a valid result. This may be especially true in situations where it wouldn't be obvious by looking at the results returned. **I am referring to optional parameters only.** Obviously if a required parameter is misspelled, then the parameter will be considered missing and an error will be returned. **As an example**, the Place Search API call has four required parameters (location, radius, sensor and key) and several optional parameters (types is one of them). I can run these commands (with an API key) and get back valid results: curl \"https://maps.googleapis.com/maps/api/place/search/json?location=45.47554,-122.794189&radius=500&sensor=false&key=&type=bakery\" curl \"https://maps.googleapis.com/maps/api/place/search/json?location=45.47554,-122.794189&radius=500&sensor=false&key=&types=bakery\" The first command has the \"types\" parameter in its singular form (\"type\"), which is an invalid key name. The API ignores that parameter and returns all types of entities. In this case the error is obvious, but there may be times (and other API calls) where it won't be."} {"_id": "131545", "title": "How to become certified in Objective C?", "text": "> **Possible Duplicate:** > iOS Programming Certifications I am interested in Objective-C certification or any certificate that can support me as an iOS developer; what do you recommend?"} {"_id": "235952", "title": "Implementing Rules-type logic without a rules engine (like Drools)?", "text": "I have a system that must output decisions and alerts based on input read from a messaging queue. This system will hold state about all objects in memory and update this state based on input from the messaging queue. Logic must be applied to data flowing into the system and as the data updates the system's working memory. Example: > I have 50 quad-rotor drones that report their current state to a RabbitMQ queue. This includes a (latitude,longitude) location property, a 'time left on battery' property, and an integer battery charge percentage property.
The rules run on each drone message would look like: if (drone.charge_left < time_to_home_from(drone.location)) { drone.state = EMERGENCY; // land immediately send_alert(\"Drone must be recovered!\"); // put an alert on another messaging queue } else if (drone.charge_percentage < some_threshold) { drone.state = RETURN_HOME; send_alert(\"Drone is coming back for recharging\"); } ... and so on Drools seems like a decent candidate for this task, but only because few rules engines seem to be still under development, and it has the biggest name, so to speak. On the flip side, the architecture of Drools seems heavyweight and outdated. I've seen a lot of use cases in the financial domain, which is fairly far from my use case. I'm leaning towards using a graph-db-based solution (e.g. Neo4j), where each of the rules is a node containing an MVEL predicate and some type of Alert that is sent out when the rule's predicate evaluates to true. How would you architect a system like this in 2014? EDIT: I forgot to add that only the engineering team would ever look at or modify these rules, but it would be desirable for values like `some_threshold` to be tunable by non-technical teammates through some web interface."} {"_id": "9200", "title": "Is there a canonical book on Scala?", "text": "I'm interested in learning Scala, but due to its relative newness, I can't seem to find a whole lot of books about it. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on Scala? What makes that book special?"} {"_id": "99496", "title": "Recent graduate with an idea, but I need some starting out advice", "text": "As the title says, I am a recent graduate with a mathematics degree, looking to develop software as a career. The job hunting has not resulted in a job yet, but over the past couple of months I have picked up quite a bit of Ruby/Rails and Objective-C, as well as learning git and deploying a simple web app to Heroku. I want to continue to build my resume and feel that making a simple app and contributing to open source projects would look really good. Which leads me to my idea: My girlfriend is a botanist with the California Native Plant Society, and they just put together this really cool rare plant database. So I'd like to make a simple iPhone app which would allow the user to query that database. Eventually it would be neat to do other stuff, but just getting a prototype together that would allow lookup via scientific names of different plant species is my first goal. I'm looking for any advice or resources, as I'm not even sure what to google. I'm not sure if my app should physically fill in the text fields or if there is some other way to query an Internet database with which I am unfamiliar. CNPS is a pretty great not-for-profit and it would be neat to give back to them in some way. I plan to host this on GitHub as well if anyone else is interested. Thanks"} {"_id": "235959", "title": "Developing a platform or mass-scale application - where does it all start?", "text": "I notice that a ton of platforms are developed nowadays. I have always wondered where it all begins - what choices have to be made, the reasons why certain platforms are chosen, and why certain places choose to use open source or Microsoft products. I am at the stage of my .NET development career where I would like to write something big and make a platform out of it, utilizing many different technologies to comprise a single solution. This question is probably two-sided, too.
**Where does platform development start?** **What needs to be researched and known by the developer before he embraces the challenge of writing a mass-scale application?** **Is it a good idea to involve communities in the project, even if you plan not to fully use open-source technologies?** These are just a couple of questions which boggle my mind. I am a young developer seeking to improve my skill in developing, creating and building applications. It is not always fun when millionaires depend on me for my hard work when I could use my own work to make ends meet more comfortably. Thanks"} {"_id": "163393", "title": "What's so bad about pointers in C++?", "text": "To continue the discussion in \"Why are pointers not recommended when coding with C++\": suppose you have a class that encapsulates objects which need some initialisation to be valid - like a network socket. // Blah manages some data and transmits it over a socket class TcpSocket; // forward declaration, so nice weak linkage. class blah { ... stuff TcpSocket *socket; }; blah::~blah() { // TcpSocket dtor handles disconnect delete socket; // or better, wrap it in a smart pointer } The ctor ensures that `socket` is marked NULL; then, later in the code, when I have the information, I initialise the object: // initialising blah if ( !socket ) { // I know socket hasn't been created/connected // create it in a known initialised state and handle any errors // RAII is a good thing ! socket = new TcpSocket(ip,port); } // and when I actually need to use it if (socket) { // if socket exists then it must be connected and valid } This seems better than having the socket on the stack, having it created in some 'pending' state at program start and then having to continually check some isOK() or isConnected() function before every use. Additionally, if the TcpSocket ctor throws an exception, it's a lot easier to handle at the point a TCP connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with `new`."} {"_id": "247455", "title": "how to approach pharmacy software project", "text": "I would like to complete a pharmacy software proof of concept, but I have a few questions I cannot answer. **Physician information** In order to refill or dispense a new Rx, the software needs access to a database of current doctors. Is there a downloadable database or a service I can interact with to get a list of currently practicing doctors? **Medication information** The software needs to access a database of medications, which I believe companies like Medispan offer for a price. How much does interacting with these kinds of companies cost? **Billing** How can I interface, be it with an insurance company, Medicare, Medicaid, etc., for billing? I have seen that these programs are able to submit orders and automatically get a response for how much was billed and how much was paid, etc. **What I have done** I started to work on a basic Windows Forms application which would be the client, and then I imagine I would have to connect it to my server in order to perform authentication, etc., before I billed for them from the server. Are the client applications connecting with the APIs directly, or just with an intermediary web service which in turn does all the work? I also found one open source project, http://www.anshealth.com/, but it seems like it's pretty complicated to set up and there isn't much documentation.
There hasn't been a post in their forums since last year. **Edit** I edited my question in order to make it more specific; I have asked three specific questions."} {"_id": "247450", "title": "Single page app permissions represented through RESTful APIs", "text": "I'm trying to figure out the right way to handle permissions in a single-page app that talks directly to several RESTful APIs that implement HATEOAS. As an example: \"As a user of my application I can view, start and pause jobs but not stop them.\" The underlying REST API has the following resource: /jobs/{id} which accepts GET and PUT. The GET returns a job model and the PUT accepts a job model as a _request body_ in the form: { \"_links\" : { \"self\" : \"/jobs/12345678\" }, \"id\" : 12345678, \"description\" : \"foo job\", \"state\" : \"STOPPED\" } Accepted job states can be: dormant | running | paused | stopped. The requirement says that on the UI I must have the buttons: START, PAUSE, STOP ... and only display them based on the logged-in user's permissions. From the API perspective everything works, as the underlying logic on the server makes sure that the user cannot update the state to a STOPPED state when a request is made (perhaps a 401 is returned). What is the best way to inform the app / UI of the user's permissions, so it can hide any buttons that the user has no permission to action? **Should the API provide a list of permissions, maybe something like:** { \"_links\" : { \"self\" : \"/permissions\", \"jobs\" : \"/jobs\" }, \"permissions\" : { \"job\" : [\"UPDATE\", \"DELETE\"], \"job-updates\" : [\"START\", \"PAUSE\"] } } **OR should the API change so that the permissions are reflected in the HATEOAS links, maybe something like:** { \"_links\" : { \"self\" : \"/jobs/12345678\", \"start\" : \"/jobs/12345678/state?to=RUNNING\", \"pause\" : \"/jobs/12345678/state?to=PAUSED\" }, \"id\" : 12345678, \"description\" : \"foo job\", \"state\" : \"DORMANT\" } **Or should it be done in a completely different way?**"} {"_id": "63016", "title": "Parameterized tests - When and why do you use them?", "text": "Recently at work we've been having some differences of opinion with regard to parameterized testing. Normally we use a TDD style (or at least try to), so I understand the benefits of that approach. However, I'm struggling to see the gain parameterized tests bring. For reference, we work on a service and its libraries, which are exposed via a RESTful interface. What I've seen so far is tests that are, at least using JUnit within Eclipse: * Lacking in detail - when a test fails it's very hard to see the parameters which caused it to fail * Often complicated to create * Tend to be created after the code has been written - not strictly a drawback as such, but do people set out with parameterized tests in mind when they start a piece of code? If anyone has any examples of where they are really useful, or even any good hints for using them, that would be fantastic. I want to make sure I'm not just being obstinate because I personally don't choose to use them, and see whether they are something we should consider making part of our testing arsenal."} {"_id": "247458", "title": "What does neo4j's licensing guide mean?", "text": "There are three versions of Neo4j apparently, called Community, Advanced, and Enterprise. Neo4j's Licensing Guide says that Community is GPL3 (which I confirmed from the LICENSE file in the tarball) and Advanced/Enterprise seem to be dual-licensed under AGPL3 and a proprietary license.
What exactly does this statement of the guide mean?: > If you're using Neo4j to build closed-source online applications that are central to your business, then you'll want to talk to us about commercial licensing of Neo4j Advanced or Enterprise editions. What exactly is meant by a closed-source application in the context of server software? Does writing software that merely talks to neo4j over RPC trigger the AGPL under Neo's interpretations? If so, that's vastly different from how 10gen treats MongoDB (AGPL). The above wouldn't be as confusing if it weren't for the following statement: > ... you're free to use the Community edition of Neo4j Server under a GPL license - which means you can use it anywhere you would use something like MySQL. _Used in this way, only changes you make to the Neo4j software itself should be open-sourced and shared with the community._ That last sentence is not required of GPL3 software modified for an organization but never distributed and only ever made available as a web application. In fact, that's the exact reason AGPL3 was invented: to plug that gap. Further, the following statement makes no sense, since a public domain project is a project without copyright: > We love open source development: so you are free to use all Neo4j components for your open-source, public domain project under either the GPL (for Community edition) or the AGPL (for Advanced and Enterprise)."} {"_id": "236603", "title": "Most Appropriate Authentication Type for MVC5 project", "text": "I am about to start a new ASP.NET MVC5 project and I am planning the authentication / authorization requirements at present. The client wants Windows authentication, to prevent their users from having to remember another new password. The site is web-facing, so the downside would be an ugly pop-up box asking for their credentials when they are accessing it offsite. Worse yet, on mobile, this box would be a problem. The out-of-the-box Active Directory authentication options are new to me, but after some reading they appear to be more about controlling roles and authorization through your AD groups. I intend to keep all authorization concerns internal to the application. Ideally, users will have Windows authentication but with a nice login page where they can select their domain from a dropdown box and enter their domain login credentials. From some reading I thought possibly ActiveDirectoryMembershipProvider is the answer. However, with the newly available options, I want to be sure there are no other options before blindly taking this route."} {"_id": "236601", "title": "How to choose parameter for Golomb coding?", "text": "I am trying to implement Golomb coding, but I don't understand how it's tuned to obtain an optimal code. It is said that > Golomb coding uses a tunable parameter M to divide an input value into two parts: q, the result of a division by M, and r, the remainder. The quotient is sent in unary coding, followed by the remainder in truncated binary encoding. I don't understand how I should choose the parameter _M_ - I can't see how the explanation in Wikipedia relates to actual data. I believe it should be related to statistical moments; is that true? For example, if I have this example set: {3,4,4,4,3,1,2,2,3,1,2,1,4,1,2,2,2,2,1,1,2,2,1} I believe _M_ should be very small for this kind of data. I bet it's either 1 or 2. Its mean is ~2.2 and standard deviation is ~1.1. My intuition would tell me to choose 2.
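To make the question concrete, here is the heuristic I am currently experimenting with - a small sketch of my own that assumes the values are roughly geometrically distributed (that assumption is mine; the Wikipedia article does not spell it out for raw data like this):

#include <cmath>
#include <numeric>
#include <vector>

// My working assumption: treat the data as geometric with success
// probability p = 1 / (mean + 1), then pick M = ceil(-1 / log2(1 - p)),
// clamped to at least 1.
int chooseGolombM(const std::vector<int>& xs) {
    if (xs.empty()) return 1;
    double mean = std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();
    double p = 1.0 / (mean + 1.0);
    int m = static_cast<int>(std::ceil(-1.0 / std::log2(1.0 - p)));
    return m < 1 ? 1 : m;
}

For the set above this picks M = 2, which matches my intuition.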
Another dataset here: {2,7,11,19,6,2,6,13,11,1,5,2,19,7,6,9,6,7,2,4,5,12,3} This time the mean is ~7.2 and the standard deviation is ~5.0. Is 7 the right value in this case? And should I prefer a Rice code (use 8 as it is a power of 2) if I get a value like 7? I understand that division will be easier if I use Rice coding, but are there any benefits in NOT using it? I mean, 3 bits will be used for the remainder in either case; how could a pure Golomb code be more optimal then? One more nuance: Golomb code is for nonnegative integers. If I have positive integers instead, should I save x-1 instead? It would change a lot for the first of the mentioned datasets."} {"_id": "176035", "title": "Are Scrum and XP comparable things or are they used for different things", "text": "Are Scrum and XP comparable things or are they used for different things? What are the main features of each of them? How do they overlap? I've been reading about both XP and Scrum over the past weeks, and something remains vague about them for me. Scrum does have a definitive official rulebook, but for XP I'm not sure where to look."} {"_id": "54856", "title": "Should validation messages contain punctuation?", "text": "When building an application, should all validation messages have punctuation?"} {"_id": "236609", "title": "Visitor only applicable when using the Composite pattern?", "text": "For a long time I've tried to get my head wrapped around the visitor pattern, and somehow this thing keeps being rather fuzzy to me. I'm currently under the impression it is only useful for applying operations on objects that implement the Composite pattern. At least as far as PHP is concerned. Is that an accurate observation, or am I missing something? I'm in the process of reimplementing some functionality, and this includes computing a diff between two Entities. These Entities contain various value objects. Different derivatives in the type hierarchy of these Entities have different value objects. Originally the diff code was contained in the entities themselves, though this caused quite some clutter, so I'd rather move it out into dedicated service objects. The Visitor pattern sprang to mind, though I do not see how I can actually sanely apply it. Would all the value objects need to implement some EntityElement interface? That seems bad. Is this a case where the Visitor pattern indeed does not apply, or am I simply failing to see how it would be applied nicely here?"} {"_id": "176033", "title": "Using table-styled divs instead of tables", "text": "I was referred here from Stack Overflow as my question was apparently too broad. I'm working on a template, and I know using CSS is preferred over HTML tables for positioning... But, is it acceptable to get the best of both worlds and use table-like styles on my divs? For example: display: table; This not only helps solve the sticky footer problem, but it also avoids the pains associated with using floats. Somehow it feels dirty, but I can't logically explain why, because it works without any \"tricks\" or ugly hacks, which is how it should be, right? Is this technically incorrect, or does it ultimately boil down to just a matter of opinion?
...Thoughts?"} {"_id": "120169", "title": "is there a term for doing this: func1(func2(), func3());", "text": "I know that `obj.func1().func2()` is called method chaining, but what is the technical term for: func1(func2(), func3()); where the return value of one function is used as an argument to another?"} {"_id": "53498", "title": "What is the philosophy/reasoning behind C#'s Pascal-casing method names?", "text": "I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind this? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as an instance variable, this is not the case with C#. I would guess one of the goals with properties (as it is with most of the languages that support it) is to make properties truly indistinguishable from variables and methods. So, one can have an \"int x\" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are therefore also expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far - I'm still learning.) I'm very curious to know how this curious guideline came into being (given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter)"} {"_id": "254622", "title": "Best way to deal with Floors and Ceiling when using substitution method to solve Recurrences", "text": "I'm currently using the substitution method to solve recurrences. The problem I'm having is dealing with T(n) that have either ceilings or floors, as in the example here. They end up using the guess `T(n) ≥ c(n+2) lg(n+2)`. My first guess was `T(n) ≥ n lg(n)`, which turns out not to work, but my problem is that I end up having to play around with guesses to try to get one to work. So my questions are as follows: 1. What is the best way to deal with these floors and ceilings in general? 2. With regard to guesses, does this come with practice, or are there better ways of deducing the correct guess on the first shot without having to use recursion trees? (PS: not sure how to write equations in math notation; it's my first time using this forum)"} {"_id": "233461", "title": "Strategies to troubleshoot an error that only happens on a specific device", "text": "As an Android developer, the target market I create apps for is very fragmented. While I can specify certain requirements - e.g. my app only supports Android version x.x or above - sometimes errors may occur that are only evident on one specific phone model. **Are there any strategies to handle device-specific errors, without buying the phone in question?** We maintain a suite of phones for testing, but can't afford to go out and buy a new phone when 2 or 3 users report that there's a bug that only occurs for their model of phone. I'm sure other Android developers have encountered similar issues in the past, and I'm curious what cost-effective strategies are available to help squish device-specific bugs.
**Update to add a few details:** * I use Bugsense to capture bug reports, so whenever exceptions are thrown I will know the model of the phone, the stack trace, the number of times it has happened to my users, and a few other details. * The users may be located in different countries, so I can't assume I'll ever be able to borrow their phone. **Imagine a scenario like this:** 100 users have installed the app, but three people complained that a button doesn't work properly when pressed. None of the models of phone I have for testing exhibits the problem. There doesn't appear to be an emulator for the problem phone model."} {"_id": "53493", "title": "Javascript business model", "text": "Does anyone know anything about SlideDeck's business model (see link below)? These guys sell a JavaScript component online. As the tech-savvy know, it's very difficult to sell JavaScript online, because it's pretty much like selling source code. But check out: http://www.slidedeck.com/pricing-b/ These guys give away a branded \"GPL\" version of their code and then sell two premium \"unbranded\" versions that allow licensing of servers. They have some impressive clients. I'm just about done with a project of my own that is similar. I was just going to open-source it and then use that source to create a demo to go and find financing for a bigger server-side project (with a more clear-cut monetization strategy), but seeing this gave me pause. I'm thinking - could I monetize the client source in the short run and raise less money? This seems like an option - but it also raises lots of questions: 1) If SlideDeck's code is \"GPL branded\", how do they enforce that it's branded? Couldn't I just remove the branding and release my update to the community? (That would be a crappy thing to do, but would it be illegal or against the GPL?) 2) While it's quite obviously the right thing to do - outside of premium support models, what are the main drivers for companies to license this code outside of the honor system? 3) Given SlideDeck is GPL, what protections do they have from flat-out plagiarism and people copying their code and selling it for less? Are there legal measures that can be taken to prevent this? How could they enforce this? Copyright perhaps?"} {"_id": "118850", "title": "AVL Tree Balancing Problem", "text": "Take the following tree: 50 40 30 20 10 Which node will be balanced first, 50 or 30? **Case 1: 50** The tree is 30 20 40 10 50 **Case 2: 30** The tree is 40 / \\ 20 50 10 30 Well, both are AVL trees, so are both correct?"} {"_id": "21571", "title": "Is COM truly dead?", "text": "I have previously worked on COM; however, I have observed for quite some time that hardly any company asks for COM experience. Is COM dead, or are reports of its demise highly exaggerated?"} {"_id": "46832", "title": "C/C++: Who uses the logical operator macros from iso646.h and why?", "text": "There has been some debate at work about the merits of using the alternative spellings for C/C++ logical operators in `iso646.h`: and && and_eq &= bitand & bitor | compl ~ not ! not_eq != or || or_eq |= xor ^ xor_eq ^= According to Wikipedia, these macros facilitate typing logical operators on international (non-US English?) and non-QWERTY keyboards. All of our development team is in the same office in Orlando, FL, USA, and from what I have seen we all use the US English QWERTY keyboard layout; even Dvorak provides all the necessary characters.
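For anyone who has not seen them in the wild, here is the same predicate written both ways - a minimal sketch (the header note reflects my understanding: C needs iso646.h, standard C++ treats these as built-in alternative tokens, and older MSVC wanted <ciso646>):

#include <ciso646> // harmless in C++; the equivalent in C is <iso646.h>

// with the alternative tokens
bool validA(int x, int lo, int hi) {
    return x >= lo and x <= hi and not (x == 0);
}

// with the usual operators
bool validB(int x, int lo, int hi) {
    return x >= lo && x <= hi && !(x == 0);
}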
Supporters of using the `iso646.h` macros claim we should use them because they are part of the C and C++ standards. I think this argument is moot since digraphs and trigraphs are also part of these standards and they are not even supported by default in many compilers. My rationale for opposing these macros in our team is that we do not need them since: * Everybody on our team uses the US English QWERTY keyboard layout; * C and C++ programming books from the US barely mention `iso646.h`, if at all; and * new developers may not be familiar with `iso646.h` (this is expected if they are from the US). /rant Finally, to my set of questions: * Does anyone on this site use the `iso646.h` logical operator macros? Why? * What is your opinion about using the `iso646.h` logical operator macros in code written and maintained on US English QWERTY keyboards? * Is my _digraph and trigraph_ analogy a valid argument against using `iso646.h` with US English QWERTY keyboard layouts? EDIT: I missed two similar questions on Stack Overflow: * Is anybody using the named boolean operators? * Which C++ logical operators do you use: and, or, not and the ilk or C style operators? why?"} {"_id": "21575", "title": "How do you cope with ugly code that you wrote?", "text": "So your client asks you to write some code, so you do. He then changes the specs on you, as expected, and you diligently implement his new features like a good little lad. Except... the new features kind of conflict with the old features, so now your code is a mess. You _really_ want to go back and fix it, but he keeps requesting new things, and every time you finish cleaning something, it winds up a mess again. What do you do? Stop being an OCD maniac and just accept that your code is going to wind up a mess no matter what you do, and just keep tacking on features to this monstrosity? Save the cleaning for version 2?"} {"_id": "49827", "title": "Parallel programming against a database: managing locks and transactions", "text": "I am having my first experience of parallel programming with code that actively works with a relational database. I find it interesting to combine .NET parallel programming primitives with database locks/transactions, and I'm interested in articles on this topic. Could you please recommend some (.NET is optional)?"} {"_id": "152163", "title": "How do you stay in touch with a programming language?", "text": "I'll be starting work for the first time in the IT industry on the 18th of this month. I'll be working mostly with Microsoft technologies such as C#.NET and MS Dynamics CRM. I spent the last year working with C++, developing small applications to automate tasks and organize my notes. During this time I have developed a good basic understanding of the language. My question is: how do you stay in touch with a programming language that you love when you need to use something else at the office?"} {"_id": "152162", "title": "0.00006103515625 GB of RAM. Is .NET MicroFramework part of Windows CE?", "text": "The .NET MicroFramework claims to work in 64K of RAM and has a list of compatible target vendors. At the same time, the same vendors who ship hardware and create Board Support Packages (vendors like Adeneo) keep releasing something named a Windows CE 7 BSP for the same hardware targets. Obviously an OS as heavy as WinCE needs more than 64K of RAM. So, somehow the .NET MicroFramework is relevant to WinCE, but how?
**Is it part of the bigger OS, or is it the base of it, or are the two mutually exclusive?** Background: 0.00006103515625 GBytes of RAM is the same as 64 Kbytes of RAM. I am looking for the possibility of using Microsoft development tools for a small target like the BeagleBone. http://www.adeneo-embedded.com/About-Us/News/Release-of-TI-BeagleBone Nice. Now... where is a MicroFramework for the same BeagleBone? Is it inside the released pile?"} {"_id": "157531", "title": "Issues Tracker for both developers and end users", "text": "Currently, we have a closed-source project on hand. However, we would like to have a communication channel with our end users. * We would like our end users to know what features we are currently working on and what features they are going to get in the next release. * We would like to know our users' thoughts on our planned features. They may always provide input/suggestions on our planned features. * We do not plan to host on our own server. Hence, a readily available service is welcome. * **But we would like our source code to remain closed** Is there any web-based service available for this purpose? Free or commercial doesn't matter."} {"_id": "157536", "title": "How can I write a set of functions that can be invoked from (almost) any programming language?", "text": "I'd like to find a way to write an API that can be accessed from any other programming language via language bindings (or some other framework). Is it possible to do this? If so, which programming language would be the most suitable for writing a \"cross-language\" API? My goal is to create a single set of functions that I can access from any programming language that I'm working with, so that I won't need to manually re-write the entire API in each language."} {"_id": "68439", "title": "Any programming language can be mastered easily if the fundamentals of programming are strong. A fact or a myth?", "text": "Is it true that a person with fairly good fundamentals in programming can easily learn any programming language? Well, when I say programming languages, I refer to the agile and dynamic languages like PHP, Perl, Ruby, etc., but not the programming languages of the distant past. I've worked only on Java, Groovy and Flex to some extent. So, considering the fact that I am an amateur programmer but a fast learner, roughly how long would it take to get a foothold in any one of such languages?"} {"_id": "13397", "title": "Is it unreasonable to write user documentation in the style of a technical book?", "text": "Tell me if this sounds familiar: > something something something... as seen in figure 1-1 on the next page... It's in practically every book I've ever read about programming. So when I was writing a small instructional booklet on how to use some in-house software, which had images of the screen that the user will be on at a certain step, I wrote something like \"The screen that pops up looks like the one shown in figure 1-1 below.\" But I'm thinking: I'm used to that style of writing, but if my target audience is the 'average person', are they going to be confused?
So, more generally, are there any common practices in technical books that should be avoided when writing documentation for average people?"} {"_id": "13396", "title": "What tools do you use to manage requests from users?", "text": "I'm drowning in user emails and I'd like to implement a better way to manage all these requests I get and put them in a queue where the people on a team, as well as users, have access to them and can make common notes. I'm thinking about some sort of task management tool that would allow multiple tasks to be created under a project, where emails, comments, ideas, etc. could be dropped/entered and be easily accessible. I need something that all parties can be involved in - users, managers, team leaders, developers. I'm looking for a tool that can allow: * Users to just drag/drop an email to submit a request for maintenance or enhancement. * Developers to just see their queue and the weighted priority of each task/project. * A team of developers to see what everyone is working on in real time. * Management to keep a log of time spent on each task. I am starting to look in more of an Agile/Scrum direction for solving this problem. I found a list of Scrum/Agile software project management open source tools. Since I am limited on time, has anyone used these? Which one should I test to see if it will meet my needs? TeamPulse is a good direction, but I think it is a little too bloated. I need something simple for all parties."} {"_id": "13391", "title": "How to prioritize tasks when you have multiple programming projects running in parallel?", "text": "Say you have 5 customers and you develop 2 or 3 different projects for each. Each project has Xi tasks. Each project takes from 2 to 10 man-weeks. Given that there are few resources, it is desired to minimize the management overhead. Two questions in this scenario: 1. What tools would you use to prioritize the tasks and track their completion, while tending to minimize the overhead? 2. What criteria would you take into consideration to determine which task to assign to the next available resource, given that the primary objective is to increase throughput (more projects finished per time unit; this objective conflicts with starting one project, finishing it and then moving on to the next)? Ideas, management techniques and algorithms are welcome."} {"_id": "193815", "title": "How to time the sprints in Scrum to allocate time for TDD?", "text": "We have sprints of 4 weeks duration. What I have been doing is 3 weeks of dev time and 1 week of pure manual/automated testing, stabilization and shipment assurance testing. How do I manage TDD within the dev time? In my previous experience, writing tests and getting 80% coverage requires roughly 50% of the development time. Following the same split, we would get only one and a half weeks of development, which is not enough. My actual problem is how to allocate time for TDD. I want to make TDD mandatory, but with these constraints it becomes really difficult. Do you think the approach we are using is correct, or are we missing something and should alter our approach to Scrum?"} {"_id": "243229", "title": "Naming in Security Protocols: Alice, Bob and Eve", "text": "Among computer scientists and programmers, there's a common habit of naming people in the context of security protocols, e.g. `Alice`, `Bob` or `Eve`. Descriptions of more elaborate attack vectors sometimes refer to `Charlie`
Descriptions of more elaborate attack vector sometimes refer to `Charlie` (as does this XKCD strip), but is there a convention for additional participants?"} {"_id": "168707", "title": "Making simple forms in web applications", "text": "How do you work with forms in your web applications? I am not talking about RESTful applications, I don't want to build heavy front-end using frameworks like Backbone. For example, I need to add \"contact us\" form. I need to check data which was filled by user and tell him that his data was sent. Requirements: * I want to use AJAX. * I want to validate form on back-end side and don't want to duplicate the same code on front-end side. I have my own solution, but it doesn't satisfy me. I make an AJAX request with serialized data on form submit and get response. The next is checking \"Content-type\" header. * **html** -> _It means that errors with filling form are exists and response html is form with error labels._ -> I will replace my form with response html. * **json** and **response.error_code == 0** -> _It means that form was successfully submited._ -> I will show user notification about success. * **json** and **response.error_code != 0** -> _Something was broken on back-end (like connection with database)._ * **other** \\- I display the following message : > We have been notified and have started to work with that problem. Please, > try it later. The problem of that way is that I can't use it with forms that upload file. What is your practise? What libraries and principles do you use?"} {"_id": "168701", "title": "Design pattern and best practices", "text": "I am an iPhone developer. I am quite confident on developing iPhone application with some minimal feature. I would consider myself as a fair application developer but the code I write is not so much structured. I make very little use of MVC because I don't seem to find places to impose MVC. Most of the time, I create application with viewcontrollers and very few models only. How could I improve the skill for making my code more reusable, standard, easy and maintainable. I have seen few books on design patterns and tried few chapters myself but I don't seem to skip my habit. I know few of them but I am not being able to apply those patterns into my app. What is the best way to learn the design patterns and coding habit. Any kind of suggestion is warmly welcomed."} {"_id": "243221", "title": "Why is String Templating Better Than String Concatenation from an Engineering Perspective?", "text": "I once read (I think it was in \"Programming Pearls\") that one should use templates instead of building the string through the use of concatenation. For example, consider the template below (using C# razor library) Browser Capabilities Type = @Model.Type Name = @Model.Browser Version = @Model.Version Supports Frames = @Model.Frames Supports Tables = @Model.Tables Supports Cookies = @Model.Cookies Supports VBScript = @Model.VBScript Supports Java Applets = @Model.JavaApplets Supports ActiveX Controls = @Model.ActiveXControls and later, in a separate code file private void Button1_Click(object sender, System.EventArgs e) { BrowserInfoTemplate = Properties.Resources.browserInfoTemplate; // see above string browserInfo = RazorEngine.Razor.Parse(BrowserInfoTemplate, browser); ... 
} From a software engineering perspective, how is this better than an equivalent string concatenation, like below: private void Button1_Click(object sender, System.EventArgs e) { System.Web.HttpBrowserCapabilities browser = Request.Browser; string s = \"Browser Capabilities\\n\" + \"Type = \" + browser.Type + \"\\n\" + \"Name = \" + browser.Browser + \"\\n\" + \"Version = \" + browser.Version + \"\\n\" + \"Supports Frames = \" + browser.Frames + \"\\n\" + \"Supports Tables = \" + browser.Tables + \"\\n\" + \"Supports Cookies = \" + browser.Cookies + \"\\n\" + \"Supports VBScript = \" + browser.VBScript + \"\\n\" + \"Supports JavaScript = \" + browser.EcmaScriptVersion.ToString() + \"\\n\" + \"Supports Java Applets = \" + browser.JavaApplets + \"\\n\" + \"Supports ActiveX Controls = \" + browser.ActiveXControls + \"\\n\" ... }"} {"_id": "243220", "title": "How to solve linear recurrences involving two functions?", "text": "Actually I came across a question in Dynamic Programming where we need to find the number of ways to tile a 2 X N area with tiles of given dimensions. Here is the problem statement. Now, after a bit of recurrence solving, I came up with these: F(n) = F(n-1) + F(n-2) + 2G(n-1), and G(n) = G(n-1) + F(n-1) I know how to solve the linear recurrence model where only one function is involved. For large N, as is the case in the above problem, we can do matrix exponentiation and achieve O(k^3 log(N)) time, where k is the smallest number such that F(n) does not depend on F(n-m) for any m > k. The method of solving a linear recurrence with matrix exponentiation is given in that blog. Now, for the linear recurrence involving two functions, can anyone suggest an approach feasible enough for large N?"} {"_id": "243224", "title": "Liskov substitution principle with abstract parent class", "text": "Does the Liskov substitution principle apply to inheritance hierarchies where the parent is an abstract class in the same way as when the parent is a concrete class? The Wikipedia page lists several conditions that have to be met before a hierarchy is deemed to be correct. However, I have read in a blog post that one way to make things easier to conform to LSP is to use an abstract parent instead of a concrete class. How does the choice of the parent type (abstract vs concrete) impact the LSP? Is it better to have an abstract base class whenever possible?"} {"_id": "163538", "title": "Efficient way to find unique elements in a vector compared against multiple vectors", "text": "I am trying to find the number of unique elements in a vector compared against multiple vectors using C++. The vectors are in sorted order and each can be of size 2,000,000. Suppose I have, v1: 5, 8, 13, 16, 20 v2: 2, 4, 6, 8 v3: 20 v4: 1, 2, 3, 4, 5, 6, 7 v5: 1, 3, 5, 7, 11, 13, 15 The number of unique elements in v1 is 1 (i.e. the number 16). I tried two approaches. 1. Added vectors v2, v3, v4 and v5 into a vector of vectors. For each element in v1, checked if the element is present in any of the other vectors. 2. Combined all the vectors v2, v3, v4 and v5 using merge sort into a single vector and compared it against v1 to find the unique elements.
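An aside on the two-function recurrence asked about a little earlier (ID 243220), before the vector question's code continues below: both functions can be advanced together by a single matrix, so the usual matrix-exponentiation trick still applies. A sketch, with the state vector chosen here (one of several possible choices) as (F(n), G(n), F(n-1)):

```latex
% State update derived from the question's two recurrences
% F(n) = F(n-1) + F(n-2) + 2 G(n-1)  and  G(n) = G(n-1) + F(n-1):
\begin{pmatrix} F(n) \\ G(n) \\ F(n-1) \end{pmatrix}
=
\begin{pmatrix} 1 & 2 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} F(n-1) \\ G(n-1) \\ F(n-2) \end{pmatrix}
```

Raising the 3x3 matrix to the (n-2)-th power by repeated squaring and applying it to the base vector (F(2), G(2), F(1)) yields F(n) in O(3^3 log N) arithmetic operations, which is exactly the single-function technique with a slightly larger matrix. The base values themselves depend on the tile set, which the excerpt does not spell out, so they are left symbolic here.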
Note: sample_vector = v1 and all_vectors_merged contains v2, v3, v4, v5 //Method 1 unsigned int compute_unique_elements_1(vector<unsigned int> sample_vector, vector<vector<unsigned int> > all_vectors_merged) { unsigned int duplicate = 0; for (unsigned int i = 0; i < sample_vector.size(); i++) { for (unsigned int j = 0; j < all_vectors_merged.size(); j++) { if (std::find(all_vectors_merged.at(j).begin(), all_vectors_merged.at(j).end(), sample_vector.at(i)) != all_vectors_merged.at(j).end()) { duplicate++; } } } return sample_vector.size()-duplicate; } // Method 2 unsigned int compute_unique_elements_2(vector<unsigned int> sample_vector, vector<unsigned int> all_vectors_merged) { unsigned int unique = 0; unsigned int i = 0, j = 0; while (i < sample_vector.size() && j < all_vectors_merged.size()) { if (sample_vector.at(i) > all_vectors_merged.at(j)) { j++; } else if (sample_vector.at(i) < all_vectors_merged.at(j)) { i++; unique++; } else { i++; j++; } } if (i < sample_vector.size()) { unique += sample_vector.size() - i; } return unique; } Of these two techniques, I see that Method 2 gives faster results. 1) Method 1: Is there a more efficient way to find the elements than running std::find on all the vectors for all the elements in v1? 2) Method 2: There is extra overhead in combining vectors v2, v3, v4, v5 and sorting them. How can I do this in a better way? [edit] The vectors are in sorted order."} {"_id": "112981", "title": "From Slashdot: Does being a loyal developer pay?", "text": "I saw this question posted on Slashdot and thought it would make a great question here on Programmers.SE. _It's not my question, nor my situation._ Here goes: > \"As a senior developer for a small IT company based in the UK that is about > to release their flagship project, I know that if I were to leave the > company now, it would cause them some very big problems. I'm currently > training the other two 'junior' developers, trying to bring them up to speed > with our products. Unfortunately, they are still a long way from grasping > the technologies used \u2013 not to mention the 'interesting' job the outsourced > developers managed to make of the code. Usually, I would never have > considered leaving at such a crucial time; I've been at the company for > several years and consider many of my colleagues, including higher > management, to be friends. However, I have been approached by another > company that is much bigger, and they have offered me a pay rise of \u00a37k to > do the same job, plus their office is practically outside my front door (as > opposed to my current 45 minute commute each way). This would make a massive > difference to my life. That said, I can't help but feel that to leave now > would be betraying my friends and colleagues. Some friends have told me that > I'm just being 'soft' \u2013 however I think I'm being loyal. Any advice?\" What are your thoughts on this Anonymous Slashdotter's situation?"} {"_id": "112980", "title": "How do I package a J2SE app for sale to a market of buying customers?", "text": "Currently, I enjoy my Java 101 class. I have some ideas for apps that I'd like to make and with time and work I think the process of developing an application from the ground up will work out fine. When I get my Java app(s) up and running - will it be easy to find a place where I can offer them for sale? In the Enterprise labor marketplace, I see ads for developers with expertise in J2EE, WebSphere, Hibernate, Spring, etc. As a Java newbie, I'm aware that I can forget about ever qualifying for those jobs.
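An aside on the unique-elements question whose code just ended (ID 163538): since every vector is already sorted, one alternative to Method 1's linear `std::find` is a binary search per vector, giving O(|v1| * k * log m) overall with no merging step at all. A minimal Java sketch of the idea, for illustration only (the original is C++, and all names here are invented):

```java
import java.util.Arrays;
import java.util.List;

public class UniqueCounter {
    // Counts elements of sample that occur in none of the other sorted arrays.
    static int countUnique(int[] sample, List<int[]> others) {
        int unique = 0;
        for (int value : sample) {
            boolean found = false;
            for (int[] other : others) {
                // Each array is sorted, so membership is a binary search.
                if (Arrays.binarySearch(other, value) >= 0) {
                    found = true;
                    break;
                }
            }
            if (!found) unique++;
        }
        return unique;
    }

    public static void main(String[] args) {
        int[] v1 = {5, 8, 13, 16, 20};
        List<int[]> others = List.of(
            new int[]{2, 4, 6, 8},
            new int[]{20},
            new int[]{1, 2, 3, 4, 5, 6, 7},
            new int[]{1, 3, 5, 7, 11, 13, 15});
        System.out.println(countUnique(v1, others)); // prints 1 (only 16 is unique)
    }
}
```

Whether this beats the two-pointer scan of Method 2 depends on how large the other vectors are relative to v1; it avoids the merge entirely, which is the overhead the question complains about.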
An alternative for me might be to make something that I can build a micro-business with. Is there a straightforward way of doing that?"} {"_id": "11889", "title": "Is an MSDN subscription worthwhile for personal use?", "text": "I code primarily in .NET at work, but was wondering if home MSDN subscriptions are available/worthwhile as a tool to stay in touch with the latest technology for Microsoft development?"} {"_id": "164629", "title": "What is the impact of CSS Validation Failure?", "text": "I am developing an ASP.NET website. When I used the CSS property \"word-wrap\", Visual Studio 2010 showed a warning: Validation (CSS 2.1) 'word-wrap' is not a known CSS property name. When I tested the website, it worked fine. However, can there be any issue in using this property and ignoring the warning?"} {"_id": "237552", "title": "How can I keep Web services requests in a DAO layer without tying the code to the DOM?", "text": "I'm working on a single page application on the node-webkit desktop app platform, which means 99.9% of all of the logic is written in JavaScript. Since this is a reboot of a project we're working on, I wanted to approach the architecture from more of an MVC approach on the client side. I'm planning on creating a DAO layer for access to localStorage, IndexedDB, as well as Web services via REST requests. The problem is, the first tool I reach for to make HTTPS requests to the server is jQuery AJAX. It feels a bit weird to use jQuery in the DAO layer, even though I don't plan to manipulate the DOM, but it works really well on node-webkit and is a tried and tested tool for retrieving and sending data to the servers. Despite the popularity of certain JavaScript application frameworks that have a certain _way_ of doing things, we're not planning on using any frameworks other than JavaScript libraries that are fairly lightweight and unobtrusive, such as jQuery, Underscore, Mustache, and Node.js modules, but nothing that forces us to unequivocally exclude certain technologies. I'm planning to inject jQuery into the DAO layer for the purposes of making calls via AJAX to the Web services. My question is this: * How should I approach Web services? Do I treat Web services as just another data source like the local database? * Or do I just simply use the raw XMLHttpRequest object? What's the proven method for dealing with this architectural problem on the client side, **_without JavaScript frameworks_**?"} {"_id": "237554", "title": "Should we use any JS framework which makes HTML a scripting language", "text": "After studying HTML5, I learnt that HTML is purely for defining the semantics of data, and it provides various tags for each purpose. Although we can create our own tags, provide styling for them, and it will work, we still use standard tags because they are known at a global level and all search engines can understand them. JS frameworks like AngularJS provide a way of creating our own directives/tags, which are non-standard tags. Moreover, I believe that programming logic should be kept apart from HTML: ng-if, ng-repeat, etc. make an HTML page very difficult for a pure UI designer to understand. I would say these frameworks make an HTML page developer-friendly, not designer-friendly. If we put all the logic on an HTML page, there would not be any difference between a server-side page & an HTML page. I believe **there should not be any non-standard tags and no programming logic** (like conditional or looping statements) **present on an HTML page**.
Instead, JS should handle/control HTML from the outside. Please tell me if I am incorrect somewhere, or give me some positive views so I can think about using such frameworks."} {"_id": "237556", "title": "Should we encapsulate everything in a try{} block in a Try object?", "text": "Why can't I make a class for a `Try` including what I try and then run that in the `try {}` block? Why is it impractical? class DBConnectTry extends Try { TryResponse[] attempt(TryObject... o) { // try to connect to the database } } Then in a `try ... catch` I could invoke only `DBConnectTry.attempt`. Would that improve readability / clarity / modularization or is it not a good idea?"} {"_id": "178139", "title": "What is a good way of sharing specific data between ViewModels?", "text": "We have IAppContext which is injected into each ViewModel. This service contains shared data: global filters and other application-wide properties. But there are cases when the data is very specific. For example, one VM implements the Master and the second one the Details of the selected tree item. Thus DetailsVm must know about the selected item and its changes. We can store this information either in IAppContext or inside each concerned VM. In both cases update notifications are sent via Messenger. I see pros and cons for both of the approaches and cannot decide which one is better. **1st:** \+ explicitly exposed shared properties, easy to follow dependencies \- IAppContext becomes cluttered with very specific data. **2nd:** the exact opposite of the first, and more memory load due to data duplication. Maybe someone can offer design alternatives, or tell me that one of the variants is objectively superior to the other because I am missing something important?"} {"_id": "223529", "title": "DDD: Can immutable objects also be entities?", "text": "I've read countless posts on the differences between Entities and Value Objects and while I do think that at least conceptually I understand how the two differ, it appears that in some of these posts authors consider a particular domain concept to be a VO simply because it is immutable (thus its state will never change, at least within that particular domain model). Do you agree that if the state of an object will never change within a particular domain model, then this object should never be an entity? Why?"} {"_id": "199882", "title": "C++ Using Intrinsics in a Cross-platform Manner", "text": "I have a cross-platform math library that I am working on and I want to make sure that some common operations are performed in an optimized manner, so I wish to use some intrinsic functions wrapped in inline functions to compute things such as sqrt(float/double). I should mention that at this point I only plan on targeting a select few platforms which are mentioned in the source below. I have started a single prototype of a sqrt() function, but before I proceed any further I want to know if I am doing anything blatantly wrong. I am following the documentation at: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471376%28v=vs.85%29.aspx http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html Also, you will note I have a blank line under the pound define for macintosh. I know the newer versions are using clang/llvm. How do you get access to intrinsic functions under the newer versions of Mac OS and its associated compilers?
#ifndef INTRINSICS_H_INCLUDED #define INTRINSICS_H_INCLUDED #if defined(_WIN32) #elif defined(__gnu_linux__) || defined(__linux__) #elif defined(__ANDROID__) #elif defined(__CYGWIN__) #elif defined(macintosh) #else #include <cmath> #endif #if defined(_WIN32) #pragma function( sqrt ) #endif template <typename T> inline float sqrt(T &val) { #if defined(_WIN32) return sqrt(val); #elif defined(__gnu_linux__) || defined(__linux__) return __builtin_sqrt(val); #elif defined(__ANDROID__) return __builtin_sqrt(val); #elif defined(__CYGWIN__) return __builtin_sqrt(val); #elif defined(macintosh) // How do I do this #else return std::sqrt(val); #endif } #if defined(_WIN32) #pragma function( sqrt ) #endif #endif"} {"_id": "199880", "title": "What level of queries should I expect from an interviewer regarding my official software tasks?", "text": "_**Assumption:_** Post that I am applying for: _Software developer_ Experience in years: _5_ _**Background:_** > I have been working with a robotics company and I am not a robotics person. > I haven't handled a \"project\" by myself yet. So, the kind of work that I > have handled so far is \"small\" C++ programming-related work within the projects. > > _Tiny examples:_ > 1\. Someone had already made classes and written socket programming for > this software X with UDP. My job was to add the TCP code using the Qt API. > 2\. I had to add a \"callback\" functionality (using C++ templates) somewhere > in the project for some reason. > > _Another somewhat bigger example:_ > I had to set up message-exchanging functionality between two > components (according to a standard protocol) in C++. Both these example tasks are related to the proprietary software of this company. _**What kind of questions (at what depth) should I expect the interviewer to ask regarding these example tasks?_**"} {"_id": "178136", "title": "Should the analyst define the programmers and their seniority needed by the project?", "text": "It might be obvious, but I am not sure. Should the analyst who interviews the client, gathers and analyses requirements, and then gives an estimate also specify how many developers, and with how much experience, should be required to develop it? Maybe I'm just not sure how to go about this allocation thing; maybe this is simply not my job."} {"_id": "199886", "title": "Should I \"fight\" to use the development environment I want to use, and how?", "text": "I am the only developer for some of our projects. I used to pick the environment I like, such as C# and C++. But this time the new task could come from another department head who used to write programs ~20 years ago. That person rejects all the new technologies like .NET, and even C++, with no sane reason. Sure, everything could be written in C and it could be fast and stable, but forcing me (as the only developer) to write a large, scalable project in plain C does not seem at all sane to me. * Must developers be aware of decisions like that? * Must developers fight to pick the development environment? My current position is a bit weird. I don't like the situation; I say that I'm able to go and start the project in C, but I will not commit to any development time beyond an apparent minimum, and I guess this time will not be acceptable. That could make me look like a really bad developer. * What would you suggest in such a situation?
* Have you experienced such situations?"} {"_id": "199884", "title": "How to convince team members of the existence of a \"mandelbug\"", "text": "We are developing an application; it includes a library developed by another coder. This library communicates with a server via multiple network connections, and this involves multiple threads working together. The server-side code is quite complicated, and we don't have access to the source code. Recently I've discovered a mandelbug that makes the application crash sometimes. I could reproduce it once and got a stack trace, so I opened a bug report. The bug itself is easy to fix (an uncaught web exception in one of the background threads makes the CLR terminate the program). The problem is that the developer is refusing to fix the bug, because \"he is not convinced it exists\". Unfortunately for me, the boss is siding with him and says this bug cannot be fixed unless I make a \"solid test case\" to prove the existence of the bug, and write a unit test verifying that it's gone, which is basically impossible due to the nature of the bug. Any advice?"} {"_id": "54145", "title": "I don't really understand \"Backend/Serverside\" when it comes to web-development?", "text": "In the Web development world, what exactly do backend/server-side programmers do? I guess I don't really understand the whole concept. I've done the HTML/CSS layouts and website design and a little bit of SQL with PHP (still enhancing my skills; it's more of a side project for me). I've also done a small amount of JavaScript/jQuery. But I don't understand the \"backend\" work, such as the scripting languages (Rails/Python/etc) and such. What exactly do you \"do\" with them? Are there any books on the subject? I'm not even sure what it means. Is it kinda like what Web Application Frameworks do? Or not so much?"} {"_id": "133134", "title": "Is Model-View-Presenter (MVP) scheme useful for Android?", "text": "How do you separate the View and the Presenter in Android, when the reactions to user actions (the Presenter part of MVP) are placed in the same activities that show the GUI elements (the View part of MVP)? \"In model view presenter just as Martin Fowler or Michael Feathers [2] say, the logic of the UI is separated into a class called presenter, that handles all the input from the user and that tells the \"dumb\" view what and when to display\" (cited from here). Till now I thought that one of the main features of Android is the **_smart_** Activity that takes actions, reacts to them and shows the results. Is the MVP scheme in contradiction with the Android philosophy? Does it make sense to try to realize it on Android? If yes, how could it be done?"} {"_id": "175075", "title": "Why is Java version 1.X referred to as Java X?", "text": "I saw that Java 1.2 is also known as Java 2. Do \"Java 1.x\" and \"Java x\" (for example \"Java 1.6\" and \"Java 6\") refer to the same version of Java? And if yes, why the need for this duality?"} {"_id": "62058", "title": "Does having to scroll horizontally make code less readable?", "text": "Well, does it? Is it considered bad practice? IMO, it is less readable. I hate having to scroll to the right, then back left, right, left, and so on. It makes coding more painful and confuses me sometimes. For example, anytime I am coding a long string, I will do the following...
bigLongSqlStatement = "select * from sometable st inner join anothertable at" + " on st.id = at.id" + " and so on..." + " and so on..." + " and so on...""} {"_id": "62059", "title": "Simple approach to Rails", "text": "I recently got into web development through the PHP framework, CodeIgniter. I knew some PHP before I started learning that framework and overall I found it very easy to learn. The CodeIgniter documentation made it really easy to get started and figure out what each specific helper/library/function accomplished and how it was used. After making a somewhat basic CMS/Forum in the framework, I decided to start learning Rails and see which one I liked more. Needless to say, I've had a somewhat difficult time understanding Rails. I'm not sure if Rails just takes more time to learn, doesn't fit my way of thinking, or if I'm using the wrong book to learn. I've been working through http://ruby.railstutorial.org/ with little success. I understand many of the main points, but I feel as though I lack any real understanding. There seems to be a lot going on 'under the hood' with Rails compared to CodeIgniter. I tried looking at the Rails API for help, but didn't find much help there. I'm more or less looking for a book or reference sheet that clearly outlines what each specific line of code is doing or what's going on behind the scenes. Something similar to the CodeIgniter documentation, or a book that has fewer words and more basic examples for each aspect of the framework. I've also tried reading through Why's poignant guide to Ruby, but that had far too much talking and not enough examples or clear documentation. I feel like it's easiest for me to learn something when it's heavily tied to basic logic/math. In the end I believe that it will just end up being a combination of the Rails API and spending more time working with the framework; however, I'm hoping that someone learns the same way that I do and has a suggestion. Again, I'm very new to programming and haven't spent more than 40 or so hours working with PHP/Ruby combined, so I apologize if my question doesn't make sense or if it's just a matter of putting a lot more time into it. If Ruby doesn't seem like the right language for me, would Python be a better option? My plan was to build something basic with PHP/Python/Ruby and decide which I liked the best. **TLDR:** Trying to learn Rails in limited free time. Finding that it's taking a lot longer to learn than CodeIgniter. Not sure if I just haven't been able to find a book/reference sheet similar to the CI documentation. Looking for suggestions or advice. Thanks in advance for any advice or suggestions."} {"_id": "175070", "title": "Pattern for a class that does only one thing", "text": "Let's say I have a procedure that _does stuff_ : void doStuff(initialParams) { ... } Now I discover that \"doing stuff\" is quite a complex operation. The procedure becomes large, I split it up into multiple smaller procedures, and soon I realize that having some kind of _state_ would be useful while doing stuff, so that I need to pass fewer parameters between the small procedures. So, I factor it out into its own class: class StuffDoer { private someInternalState; public Start(initialParams) { ... } // some private helper procedures here ... } And then I call it like this: new StuffDoer().Start(initialParams); or like this: new StuffDoer(initialParams).Start(); And this is what feels wrong.
When using the .NET or Java API, I never call `new SomeApiClass().Start(...);`, which makes me suspect that I'm doing it wrong. Sure, I could make StuffDoer's constructor private and add a static helper method: public static DoStuff(initialParams) { new StuffDoer().Start(initialParams); } But then I'd have a class whose external interface consists of only one static method, which also feels weird. Hence my question: Is there a well-established pattern for this type of class that * has only one entry point and * has no \"externally recognizable\" state, i.e., instance state is only required _during_ execution of that one entry point?"} {"_id": "20225", "title": "Is it correct to fix bugs without adding new features when releasing software for system testing?", "text": "This question is for experienced testers or test leads. This is a scenario from a software project: Say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start creating 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised and some of them have to be fixed before the next iteration. The catch is that they would not accept the new release with any new features or changes to existing features until all those bugs are fixed. The test team asks: how can we guarantee a stable release for testing if we also introduce new features along with the bug fixes? They also cannot run regression tests of all their test cases each iteration. Apparently this is proper testing process according to the ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development. There is more merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?"} {"_id": "62053", "title": "What's a good light-weight source repository for local development?", "text": "I'm doing some prototyping locally that I would like to keep in source control (for backup and revert purposes) but I don't necessarily want to publish it as open-source or make available online for others to view. What source control system would you recommend for local development? Any setup or walk-through for my scenario is greatly appreciated. I'm looking for: * Easy setup and administration. As this is my local machine, I'm constrained to the Windows OS and would really like to minimize the amount of environment configuration changes and necessary learning curve. There will only be one user, so I don't want to configure access rights, etc. * Low resource overhead. I want to host locally on my developer machine, so I don't want it sucking up my CPU. I don't plan on storing massive amounts of data, either. * Familiar. I've used SVN clients before. Visual Studio integration is a nice-to-have. * Portable. In case I have to move it to an external drive or to another machine. * Free. Yes, I want it all and don't want to have to pay for it."} {"_id": "256173", "title": "Should `setX(Object o)` methods perform deep or shallow copies of objects?", "text": "My particular situation is related to Java, but I believe this is a more general OOP question than _just_ Java programming.
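An aside on the `StuffDoer` question that just ended (ID 175070): the shape the asker gropes toward, a private constructor plus a single static entry point, is sometimes described as a method object. A minimal Java sketch (all names are taken from or invented for the question, not from any library):

```java
public class StuffDoer {
    private final String initialParams; // internal state threaded through the helpers

    private StuffDoer(String initialParams) {
        this.initialParams = initialParams;
    }

    // The only public entry point; callers never see the instance.
    public static void doStuff(String initialParams) {
        new StuffDoer(initialParams).run();
    }

    private void run() {
        stepOne();
        stepTwo();
    }

    private void stepOne() { /* helper that reads this.initialParams */ }
    private void stepTwo() { /* another helper sharing the same state */ }
}
```

Callers write `StuffDoer.doStuff(params);`, the instance exists only for the duration of the call, and the externally visible interface is exactly one static method, which matches the two properties listed in the question.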
**The Question:** Should \"mutator\" methods perform _deep_ or _shallow_ copies? **An example:** Let's assume my object has the following structure: public class MyObj { ArrayList<Integer> myList; //constructors, etc. public void setMyList(ArrayList<Integer> list) { this.myList = list; //Option 1 //OR this.myList = new ArrayList<>(list); //Option 2 } } My assumption is that Option 2 is best. This is because it prevents an external class from having direct access to `MyObj`'s list. For example, if I did: public class SomeOtherClass { public static void main(String[] args) { ArrayList<Integer> exampleList = new ArrayList<>(Arrays.asList(1, 2, 3)); MyObj a = new MyObj(); a.setMyList(exampleList); exampleList.add(4); } } At the end of `main`, Option 1 means that `a.myList` contains `4`, while Option 2 means `a.myList` still consists of `1, 2, 3`. To me, the second seems preferable. Is there a standard/convention regarding this situation?"} {"_id": "106806", "title": "Is wrapping third-party API calls a design smell?", "text": "Five methods within my API call the same third-party method. In trying to abide by DRY, does it make sense to wrap this call in a private method?"} {"_id": "256171", "title": "I have this App idea, what technologies can I use/need to learn?", "text": "It's pretty straightforward: You have an object (let's say a car, or a house, or clothes, or whatever), and all you want to do is to add different kinds of textures to it (let's say one can add 5-10 different types of textures to the object). The object will pretty much be static (in movement), and I'd like a pretty fast rendering time. The textures can be uploaded by the user, or picked from a DB. It sounds like a really easy project, but I'm very new to app programming. I've done some programming before, but mostly just Windows (C#, Java, Python) and microcontroller stuff (C). Where should I start? From similar products that I've seen, they seem to use everything from ASP.NET to Flash for their solutions (these are websites, though)."} {"_id": "106804", "title": "What are the advantages and disadvantages of the various git modes available for emacs?", "text": "I just started using git, and because I tend to live in emacs, I want to use one of the emacs integration packages. Looking at this list, I see that there are a lot of packages available, but the blurbs for each of them don't explain very much about their capabilities, especially since I don't know git very well. Which git modes for emacs have you used, and what are the advantages and disadvantages of each?"} {"_id": "67663", "title": "Where can I test my Silverlight/WPF knowledge online?", "text": "Do you know of any sites where I can take a quiz on Silverlight/WPF that isn't Braindumps?"} {"_id": "156668", "title": "How to mix different styles of programming on several languages?", "text": "I know that a senior developer doesn't use only one language and only one platform or IDE. Can you advise how to mix different styles of programming to make efficient code? For example, the **best** mix is **Perl + Objective-C**. `$ObjectiveCLongNameOfVariablesAndMethods` are the worst for Perl style, but good for an iOS team. The `[self method:[[NSArray alloc] initWithobjects:[hash objectForKey:name]]]` style is weird to the eye, but not compared to Perl's simple-looking arrays and hashes and autovivification. Also, I tried mixing **SAS + Objective-C style**, but here I have trouble. **SAS** doesn't support long names, only 32 characters. So, naming variables is a big problem. Another difficulty is the feature set of the current language.
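An aside on the `setX` deep/shallow-copy question above (ID 256173): Option 2 is commonly called a defensive copy. One refinement worth considering is to copy on the way in and expose an unmodifiable view on the way out, so neither side can alias the internal list; a small Java sketch (the class and names here are hypothetical, not from the post):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Holder {
    private List<Integer> myList = new ArrayList<>();

    // Defensive copy in: later changes to the caller's list are not seen here.
    public void setMyList(List<Integer> list) {
        this.myList = new ArrayList<>(list);
    }

    // Unmodifiable view out: callers cannot mutate the internal state.
    public List<Integer> getMyList() {
        return Collections.unmodifiableList(myList);
    }
}
```

With this shape the `exampleList.add(4)` call from the question has no effect on the held state, and a caller who tries to mutate the returned list gets an UnsupportedOperationException instead of silently corrupting it.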
So, if I can't use the current language for a task that it could **solve 'like a BOSS'**, I become angry and see weird code. Using libraries is good, but it makes code unnatural, because 'eating soup with a fork is strange'. And finally: when can I trust the compiler and interpreter of a language? So, **C++ style** is best for pointers (you write destructors, constructors, etc.), but many languages now support mechanisms like **ARC in Objective-C** or don't expose pointers at all (like **C#**)."} {"_id": "156669", "title": "Is code reviewing good practice?", "text": "When the company I work at hired new managers, they suggested that we review someone's code at every meeting. We have meetings every two weeks, so each time one of the developers would show his/her code on the projector, and the others would discuss it. I thought this would be great: each developer would be more careful when writing code and we could share our experience better. But somehow we forgot about this and the offer just remained an offer. What are the benefits of this, and are there any drawbacks?"} {"_id": "252663", "title": "What is the appropriate amount of guidance and mentorship a junior programmer should receive?", "text": "I'm a junior software developer, and just got this job working with a very small group of engineers. I've been coding for a few months in this group in a professional setting - before, I just coded at school - and it's definitely been a learning experience. I feel like I'm taking a long time to learn - things like the design patterns, the different methods of different classes throughout the codebase - I make a lot of mistakes in my code and need to ask a lot of questions. I also think that I don't have the greatest mentorship, and this is where I'm wondering if it's me being slow or if I'm not in the best learning environment. Currently there's one lead engineer that reviews everyone's code and sends back feedback. In a code review, I will often get short comments such as \"use XXX method here\". Usually the XXX method is a method in our code base which I have never used or seen, on an object in our code base that I'm not familiar with. I'll have to ask for more clarification and extend the review process. In a case like this - is it entirely up to me to find this method and try to figure out what it does? I feel like these types of comments should have more explanation - why I should use that method and what I did wrong - but instead it's more along the lines of \"do XXX\". Of course I go and try to find the method, but most of the time I don't have enough experience and context to fully understand the issue. Another issue is that when I have questions and the lead engineer comes over, they tend to interrupt me, shush me (literally - they say \"no, no, shhh shh\" if I'm still explaining the question), take over my mouse and keyboard, and just fix the issue after looking at it for a second. Usually my question will have the underlying reason for the problem I'm having (I misunderstood a concept), and if I can't express it and show the engineer where I have a wrong idea or concept, then how will I be able to improve and learn? I feel like they're not correcting me and teaching me - they're just fixing the problem on my screen - but since I'm not learning, I tend to repeat the mistake. This is not helpful for me - I'd rather they sit with me and let **me** drive with the keyboard and mouse, let me ask the question in my own words, and explain to me with words what I did wrong instead of just changing my code.
Am I asking for too much? At the end of the day, I'm definitely learning and getting better; it's just been taking a long time. I'm frustrated with the lack of mentorship and guidance I have. I'm confused and feel that sometimes I'm expected to know everything that the lead engineer, who's been building up this codebase on their own for the past 4 years, knows. I feel like a burden when I do have questions. So, I'm wondering what the right amount of help and guidance is that a junior engineer should have. Obviously I know how to use SO, and I know how to search Google. I'm talking about learning our code base, our patterns, and the way we do things for our product. Is my experience normal? Or am I building this all up in my head?"} {"_id": "27644", "title": "Do designer tools degrade the programming experience?", "text": "I've been looking around lately, specifically at some of the MS tools that are available, and I'm noticing a big focus on designer tools and wizards. Not just for UI development but for everything. * Entity Framework has the modeller * RIA Services has the DomainService wizard(s) * Workflow has the whole workflow designer thingy... (I don't know, haven't really used it) There's more, but I think you get the idea. There are lots of designer tools. Using some of these I find that: **They complicate matters beyond the prescribed use cases** (i.e. all of the tech demo videos) I have been evaluating some of these technologies recently, and trying to work with them I end up having to dissect exactly what the designers, modellers and wizards are doing for me... otherwise I'm lost when I actually have to try to do something with whatever was created. This ends up being a case where I have to fight the tool, or fight its output enough, such that I could have just done the whole thing myself without it - and had a _much_ stronger understanding of what's going on. I find this particularly infuriating with the Silverlight designer and the RIA Services domain service wizard. I find myself asking \"What good is this tool if I have to figure out its inner workings or re-write half of its output in order to use it?\" **They're not as fast** The selling point of these tools is to increase productivity, though this point may change over time using the tool, and doesn't necessarily apply to UI designers (though, in some cases it still does -> I'm looking at you, Silverlight designer). I find that I can hack some code much faster than I can drag-and-drop, resize, move, whatever in a designer. **The UI gets in the way of the model** Maybe this is just me, but when I am using anything reminiscent of a UML design tool I end up spending more time laying everything out so that my lines don't cross and so that I can see it all on the screen than modelling what I'm trying to achieve. **They're no fun** Half of the reason that I code for a living is that I enjoy it. Clicking checkboxes and selecting comboboxes and then fixing everything that comes out isn't fun. **I don't appear to be alone** The community seems to _not_ want these tools either. The best example I can think of at the moment is Entity Framework Code-First. So I ask: 1. Do designer tools _actually_ improve productivity? 2. Are they fun-killers? 3. Is 'the community' _actually_ asking for more designer tools, or are vendors just thinking we are?"} {"_id": "51538", "title": "How should I name the language data files?", "text": "Recently I decided to add some translations to my program. I wonder how I should name the language files? 1.
in the culture name of the language (example: English = en, French = fr, Italian = it, etc.) 2. in the name of the language [in English] (example: English = english, French = french, Italian = italian, etc.) I know you'll choose the second way because you don't have to detect which filename it is, since both have the same name. But the problem is this - I show the name of each language in that language (example: english = english, french = fran\u00e7aise, italian = italiano, etc.), so I still need to detect which filename it is. The main question is: which way should I choose, the name of the language in English or the culture name, and why? Thanks!"} {"_id": "51537", "title": "Evaluating a Programmer for Startup", "text": "I have sorted through the previous questions and couldn't find a specific answer to the following (and thank you in advance): I have fully developed my idea on paper and am looking to move forward with it, create it, and grow it. Since I am non-technical, I am looking to either partner with or employ (I would pay for his/her services) a very talented and well-rounded programmer to help create and develop the project. I am looking for someone that can act as an IT manager/CTO and get the job done while I use my resources to develop and deploy the strategy, deal with the business side of things, raise capital, grow, etc. However, due to my lack of IT knowledge, it is always hard for me to differentiate between a good and a bad programmer, and therefore I only find out if he/she is good or not when it is too late. So my question, which I have been asking everyone around me, is \"How do I assess whether the programmer is good or not if I cannot evaluate them myself?\" and \"Is there any website that reviews and rates programmers?\" I have asked many to refer me to talented programmers but all are either not local (it is important for me to work side by side with them), happily employed or working on their own startup. I have also asked these programmers to help me find others but none seem to be able to help. Any help would be extremely appreciated. Thank you so much, HelpJ"} {"_id": "51531", "title": "What is the best way to handle different TimeZones?", "text": "I'm working on a web application where there will be many different users from all over the world making updates. I'm wondering what the best way to handle timezones would be? Ideally, as an event happens, all users view it using their local times. Should the server OS time be set to GMT, physical location time, or my development location time? Should the application logic (PHP) and database (MySQL) be set to store data as GMT or local time (local to users)? Is there an industry standard or even a simple/obvious solution that I'm just not seeing?"} {"_id": "161660", "title": "Can a loosely typed language be considered true object oriented?", "text": "Can a loosely typed programming language like PHP really be considered object oriented? I mean, the methods don't have return types and method parameters have no declared types either. Doesn't class design require methods to have a return type? Don't method signatures have specifically-typed parameters? How can OOP techniques help you code in PHP if you always have to check the types of the parameters received, because the language doesn't enforce types? Please, if I'm wrong, explain it to me. **When you design things using UML, then code classes in PHP with no return-typed methods and untyped parameters...
Is the code really compliant with the UML design?** **You spend time designing your software architecture, then the compiler doesn't force the programmer to follow your design while coding, letting him/her assign any object variable to any other variable with no \"type-mismatch\" warning.**"} {"_id": "161662", "title": "Openfire licensing with custom code", "text": "I am thinking of integrating the Openfire chat server with my custom website. The Openfire server and the Smack client library are licensed under the Apache License, Version 2.0. If I decide to use Openfire, will it require my website source code to also be made available under the Apache license? I tried going through the licensing terms, but the legal language is a bit confusing. Thanks in advance for any insights! Rohit."} {"_id": "59040", "title": "Valid reason for employer to breach freelance contract", "text": "Please don't close this as off-topic. According to the FAQ I can post programming-related questions. I was working on a project, and when it was halfway completed (1 week's work), the employer backed out and refused to pay me. Shortly before this he was being very rude. He was having problems configuring the server and he told me it was my fault and that I had to fix it. After I spent several hours trying to figure out the problem, it turned out to be his fault. After this, when I put the code on the server, he found 1 bug that I had missed. He freaked out, accused me of being a bad programmer and told me that the code was shit and that he couldn't use it. He said that if there is a bug in the code, that means the code is bad and he can't use it. He would have to throw the code away and hire someone else. He kept reiterating his argument: \"why should I pay for code that I can't use\". And I kept telling him the code was fine and urged him to have another programmer give him a second opinion. But he would have none of that. He said he would compensate me for my troubles by paying me $250. Then he changed his mind and lowered that to $200. Then a third time he changed his mind and said he didn't want to compensate me at all. I'm left frustrated because, besides being rude, he did not at any time tell me he was unhappy with the work that I was doing. So my question is: is the above a valid reason to back out of a verbal contract, in your opinion?"} {"_id": "59044", "title": "Would you consider using training/mentoring from LearnersParadise.com?", "text": "My initial question deserves some explanation. I signed up for an account at learnersparadise.com. After signing up I couldn't log in, so I opted to use their \"send password\" feature. Upon receiving my password in my email I confirmed two things: A) they trimmed off the last 2 digits of my 10-digit password without informing me and saved it that way in their database; B) my password is not saved in their database using a one-way hash, since they were able to email me my password. I'm quite certain that both of these are perfectly awful programming practices. I suspect that the mentors/trainers at LearnersParadise are not necessarily affiliated with the website and its design, since they are basically people like you and me (hopefully with more skill than me) who have signed up to become mentors. However, I'm still uncertain about signing up for training/mentoring at a site that uses such poor programming practices itself.
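An aside on the TimeZones question above (ID 51531): the widely cited rule of thumb is to store each instant once, in UTC, and convert to each user's zone only at display time. The question is about PHP/MySQL, but the idea is language-neutral; a hedged Java sketch for illustration only:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class EventTimes {
    public static void main(String[] args) {
        // Stored once, in UTC (e.g. a TIMESTAMP column or epoch millis).
        Instant stored = Instant.parse("2014-07-01T12:00:00Z");

        // Converted per user at display time.
        ZonedDateTime inTokyo = stored.atZone(ZoneId.of("Asia/Tokyo"));
        ZonedDateTime inNewYork = stored.atZone(ZoneId.of("America/New_York"));

        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm z");
        System.out.println(inTokyo.format(fmt));   // 2014-07-01 21:00 JST
        System.out.println(inNewYork.format(fmt)); // 2014-07-01 08:00 EDT
    }
}
```

With this split, the server's OS clock setting stops mattering to the application: every stored value is unambiguous, and the per-user zone is purely a presentation concern.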
Would you let LearnersParadise's poor programming practices affect your opinion of their trainers/mentors?"} {"_id": "161668", "title": "Passing the data of a certain page to another page. Windows Phone 7", "text": "The user will fill out information on page 1, and the data will be seen on the other page, listed in a listbox. Is it advisable to use data binding? I've been studying/researching data binding for about 2 days already and still can't understand it. A simple example would really help."} {"_id": "214005", "title": "Sync / send data between front end and back end systems for an Ecommerce site", "text": "A client recently had a new backend .NET stock management system developed that hooks into their EPOS and allows their stores to keep track of orders, products, customers, etc. from a single central database that utilizes MS SQL Server 2003. The client hopes to use this as an opportunity to renew/re-develop their site to also utilize this centralized back-end database. I have experience with e-commerce, but this is my first time developing a site that needs to work with the company's back-end systems. I had been hoping to use an off-the-shelf platform such as Magento to make the development process smoother and allow for easy upgrades and the use of plugins/modules in the future. However, having looked at the schema for their new back-end systems, it is apparent that there could be several problems. Their database has a lot of design issues, the tables are badly normalized, and their approach to the database design seems dated to me. If I were to use an off-the-shelf platform like Magento, the only way I can see it working is to re-create all the products/categories/customers in the front-end DB; this may have to be done manually because of the differences in schema between the back end (centralized db) and the front end (Magento db). That would be a mammoth task, and would create a new problem of how to deal with the replication of data (products, stock levels, prices, orders, order statuses). The client currently manages all the products and order data in their .NET system and wants to continue to do so, so there would need to be a way to update the site's db when they make changes to their back-end systems (e.g. in case an order has been shipped/canceled/etc.). Likewise, the centralized database would need to be updated from the front end when a customer makes an order, updating stock levels, etc. These are just two examples of data that will need to be synchronized, but there are countless other scenarios. Does anyone have experience working with Magento or other platforms in situations similar to the above? How did you deal with data replication and sending data between the back-end centralized db and the front-end db? I am having a hard time wrapping my head around the best setup/architecture for a site like this; any pointers in the right direction would be really appreciated. In retrospect I can't help but feel the best solution would be to create an in-house platform from scratch that can directly connect to their centralized database instead of using an off-the-shelf platform and worrying about syncing data between back end and front end. Any thoughts?"} {"_id": "219871", "title": "How effective are code signing certificates when it comes to customer behavior or acceptance?", "text": "Whenever a potential customer downloads a copy of my software they get the \"Unknown Publisher\" warning message.
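An aside on the e-commerce synchronization question above (ID 214005): one common shape for the back-end-to-front-end direction is an incremental pull keyed on a last-modified timestamp, so each run replicates only rows changed since the previous run. A hedged Java-flavoured sketch; every table and column name here is hypothetical, and the upsert syntax assumes a MySQL-style storefront:

```java
import java.sql.*;

public class ProductSyncJob {
    // Pulls products changed since the stored watermark and upserts them
    // into the storefront database; the watermark advances only on success.
    static void run(Connection backOffice, Connection storefront, Timestamp watermark) throws SQLException {
        String pull = "SELECT sku, price, stock, updated_at FROM products WHERE updated_at > ?";
        String push = "REPLACE INTO shop_products (sku, price, stock) VALUES (?, ?, ?)";
        Timestamp newWatermark = watermark;
        try (PreparedStatement in = backOffice.prepareStatement(pull);
             PreparedStatement out = storefront.prepareStatement(push)) {
            in.setTimestamp(1, watermark);
            try (ResultSet rs = in.executeQuery()) {
                while (rs.next()) {
                    out.setString(1, rs.getString("sku"));
                    out.setBigDecimal(2, rs.getBigDecimal("price"));
                    out.setInt(3, rs.getInt("stock"));
                    out.executeUpdate();
                    Timestamp t = rs.getTimestamp("updated_at");
                    if (t.after(newWatermark)) newWatermark = t;
                }
            }
        }
        saveWatermark(newWatermark); // persist for the next run (not shown)
    }

    static void saveWatermark(Timestamp t) { /* e.g. store in a sync_state table */ }
}
```

This only covers one direction and assumes the back-end tables carry a reliable updated_at column; orders flowing the other way usually need their own queue or export table so that nothing is lost if a run fails mid-way.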
I understand that if I purchase a code signing certificate this message goes away and the customer does not get the scary warning. I'd like to know if anyone has information on how effective these code signing certificates are with regard to customer behavior. What difference did you see in customer behavior or acceptance after you signed your code using a certificate?"} {"_id": "255898", "title": "Why did the creators of the Internet Protocol decide to use IP addresses to identify a particular computer?", "text": "Why did the creators of the Internet Protocol decide to use IP addresses to identify a particular computer? Why not just have a unique ID assigned to each computer upon manufacture, then use that ID for identification of the computer?"} {"_id": "236693", "title": "An efficient way of starting an arbitrary number of consumer threads?", "text": "I have a Producer/Consumer implementation where the number of consumers is configurable (this is a form of configurable throttling). The producer and consumer are kicked off like this: var cts = new CancellationTokenSource(); var processCancelToken = cts.Token; Task.Run(() => Parallel.Invoke(new ParallelOptions() { CancellationToken = processCancelToken } , producer , consumer ) , processCancelToken); The producer action is quite simple and just populates a BlockingCollection with objects that are derived from System.Threading.Tasks.Task (yes it is possible!). Here is a simplified example: var pollingInterval = 30000; var producer = new Action(() => { Random rnd = new Random(DateTime.Now.Second); while (!processCancelToken.IsCancellationRequested) { for (int ii = 0; ii < 10; ii++) { var r = rnd.Next(2, 15); _mainQueue.Add(new Task(() => { //this is a dummy task for illustrative purposes Console.WriteLine(\" Queued task starting, set to sleep {0} seconds, ID: {1}\", r, Thread.CurrentThread.ManagedThreadId); Thread.Sleep(r*1000); })); Console.WriteLine(\" Producer has added task to queue\"); } System.Threading.Thread.Sleep(pollingInterval); } Console.WriteLine(\"Exiting producer\"); }); For this example, it creates an anonymous task that sleeps for a random number of seconds between 2 and 15. In the real code this producer polls the database, extracts data entities representing work items, then transforms those into executable tasks that are added to the collection. I then have a consumer task that uses a `Parallel.For()` to start _n_ instances of an anonymous action which then dequeues a task from the collection, then starts and waits on the task, then repeats: var numberConsumerThreads = 3; var consumer = new Action(() => { Parallel.For(0, numberConsumerThreads, (x) => { //this action should continue to dequeue work items until it is cancelled while (!processCancelToken.IsCancellationRequested) { var dequeuedTask = _mainQueue.Take(processCancelToken); Console.WriteLine(\" Consumer #{0} has taken task from the queue\", Thread.CurrentThread.ManagedThreadId); dequeuedTask.Start(); while (!processCancelToken.IsCancellationRequested) { if (dequeuedTask.Wait(500)) break; Console.WriteLine(\" Consumer #{0} task wait elapsed\", Thread.CurrentThread.ManagedThreadId); } } Console.WriteLine(\"Exiting consumer #{0}\", Thread.CurrentThread.ManagedThreadId); }); }); **The question:** is this an efficient way to start and operate an arbitrary number of consumers?
Or is there a more efficient way of using PLINQ from within the main consumer action that both continues to execute queued tasks, blocks while there aren't any, and can still be cancelled using `processCancelToken`? (Note: `processCancelToken` is operated separately from the cancel tokens contained within the queued tasks - they are independently cancelable. This all runs within a Windows service, and `processCancelToken` is used to cancel everything if the service is stopped.)"} {"_id": "129206", "title": "Make or Buy: Web Based Accounting App Advice needed\u2026", "text": "My company has a client (in the US) that currently offers small businesses web-based payroll software. They want to offer small businesses a web-based accounting package that integrates well with their current web-based offerings. The company does have a CPA on staff, which is a bonus. To this point they have said it needs to be a double-entry accounting system: GL, subledger, sales invoicing, cash receipts processing, payables entry and payment processing. So my question is \u2026 Should we try and write this ourselves from scratch with the on-staff CPA, try the open source path and see if we can adjust it to what we need, or is there a different option that I\u2019m not thinking of? Any advice in any direction would be greatly appreciated."} {"_id": "213055", "title": "Module system for OOP language", "text": "I'm designing a simple OO programming language. It's statically typed, compiled, and executed by a VM - similar to Java. The difference is that I don't want to have such a strong emphasis on OOP. The code itself will mostly resemble C++ (classes, functions, and variables allowed in the file scope). One of the things that I need to have is a module system. I have the following figured out: 1. Every file is a module (once compiled) - like Python 2. Programmers must import a module with the `import` keyword, which causes the compiler to search for modules in standard directories and the file directory (the VM has to do this at runtime as well) And now I have no idea how I should introduce the concept of submodules and module hierarchy. One option, for example, is to depend on the directory hierarchy, so that `import engine.graphics.renderer` would expect to find a directory called \"engine\" in the working directory, and inside a directory called \"graphics\", with a module called \"renderer\". What are the drawbacks of such a design? Am I missing anything?"} {"_id": "23885", "title": "How is quality important to the programmer, the person?", "text": "I didn't ask how the quality of the software is important to the product itself, the customers/users, the manager or the company. I want to know how it is important to the **programmer** that builds it. I'll be interested in any **books** (please specify the chapter), **articles**, **blog posts**, and of course your **personal opinion** on the subject, regardless of your experience."} {"_id": "74611", "title": "How to present self-taught knowledge?", "text": "I work as a senior software engineer (5+ years of exp) at the moment at my company, developing some networked systems. For the past year, I've noticed that I'm becoming increasingly interested in distributed/high-performance file systems (like Apache HDFS, GFS), cloud-scale databases (like Azure Table Storage, Hypertable), and that led me to start reading a lot about them. I've started buying books, playing with existing similar open-source software, reading their source code and modifying them as an exercise.
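An aside on the producer/consumer question above (ID 236693): for comparison, the analogous consumer-pool shape in Java uses a `BlockingQueue` plus a fixed-size thread pool, with `take()` providing the blocking and thread interruption providing the cancellation. This is a hedged sketch of the idiom, not a translation of the original C#:

```java
import java.util.concurrent.*;

public class ConsumerPool {
    public static void main(String[] args) throws InterruptedException {
        int numberConsumerThreads = 3;
        BlockingQueue<Runnable> mainQueue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(numberConsumerThreads);

        for (int i = 0; i < numberConsumerThreads; i++) {
            pool.execute(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Runnable task = mainQueue.take(); // blocks while the queue is empty
                        task.run();
                    }
                } catch (InterruptedException e) {
                    // shutdownNow() interrupts the blocked take(); fall through and exit
                }
            });
        }

        // Producer side: enqueue some dummy work items.
        for (int i = 0; i < 10; i++) {
            int id = i;
            mainQueue.put(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }

        Thread.sleep(1000);
        pool.shutdownNow(); // interrupts the blocked consumers, mirroring the cancel token
    }
}
```

The blocking `take()` plays the role of `_mainQueue.Take(processCancelToken)` in the question, and `shutdownNow()` plays the role of cancelling the token; whether this is more efficient than `Parallel.For` in .NET is exactly the kind of trade-off the asker is weighing.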
To sum up, I've read around 10 books on that topic (including topics that are somehow related to DFS, like low-level high-performance IO optimization techniques, to minimize the hardware-related latency), and spent approximately 15-25 hours per week hacking some code in that area, implementing some small proof-of-concept projects, for a whole year. It was pretty fun. But, as it is said, the appetite grows with eating, and now I want to start spending all my days working on such systems - in short, change my job and start developing a commercial, widely used DFS. Now, is there a way, other than going back to school and doing a PhD on that topic, to find a job that would be related to an area that has not been on my resume (commercial experience) for the past five years? How do I convince an employer that I have managed to gain more knowledge in that area than my resume is showing? I am aware that I would be applying for a much less senior position than the one I'm currently holding."} {"_id": "141478", "title": "Will javascript be in the HTML5 standard?", "text": "I'm pretty new to the web development scene, and I just want to be absolutely clear on this because I've read a few conflicting statements. I was under the impression that \"html5\" is a particular way of constructing xml to represent data for a webpage and \"javascript\" is a programming language that runs as client-side code in the browser. But left and right I see APIs for javascript (workers, geolocation, local storage, etc.) being referred to as an \"html5 technology\". Wikipedia says that html5 doesn't have a standard yet, so I can't look it up to see if it somehow mandates stuff about javascript. So will APIs for javascript somehow be a part of the html5 standard? Or has it become a common bad practice to label javascript APIs \"html5 technology\"?"} {"_id": "253049", "title": "Best practice: Secure Android app online authentication", "text": "Currently, I develop an Android app needing online authentication for login (and registration). The main focus is on **security**. I\u2019m aiming for: * keeping the users\u2019 passwords safe and * preventing MITM attacks. Secondary aims are performance and user experience. Besides, I\u2019d prefer not to use third-party solutions. I studied a lot about different approaches [1][2][3]. My problem is now how to _combine_ these ideas into _one_ secure mechanism. To put it differently, **have I overlooked something?** Long story short, I came up with the following: ![Flowchart. Basically, if an authToken is present, it is sent to the server and checked there. If successful, the login process is completed. Else, or if no token was present, the user has to enter his credentials. His/her password is hashed, sent to the server which generates a token and doing so completes the process and enables the user to log in automatically the next time.](http://i.stack.imgur.com/ZnxZO.png) The illustration you see shows the login process performed by the app\u2019s background Service before it downloads notifications for this user. The idea is to send the hashed user password only once and then to work with a server-side generated authentication token which is kept in an encrypted **KeyStore** on the phone and renewed on every login process. Further information: The app\u2013server communication is done over **HTTPS**. The hash is a randomly **salted** **bcrypt** created on the phone. The database table consists only of `id`, `username`, `hash`, `salt`, `authtoken`. What do you think about these considerations?
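An aside on the Android authentication question just above (ID 253049): the token-first control flow from the flowchart can be written down compactly. A hedged Java sketch of the client-side decision logic only; every interface and method name here is invented for illustration, and transport details (which stay on HTTPS) are deliberately omitted:

```java
public class LoginFlow {

    interface Api {
        boolean validateToken(String token);            // server-side check of the stored authToken
        String loginWithHash(String user, String hash); // server verifies the bcrypt hash, returns a fresh token
    }

    interface TokenStore { String load(); void save(String token); } // encrypted KeyStore-backed storage

    // Returns a valid token, sending the password hash only when no usable token exists.
    static String ensureLoggedIn(Api api, TokenStore store, String username, String passwordHash) {
        String token = store.load();
        if (token != null && api.validateToken(token)) {
            return token; // silent login; the hash is never re-sent
        }
        String fresh = api.loginWithHash(username, passwordHash); // salted bcrypt computed on the phone
        store.save(fresh); // the token is rotated on every full login, as in the flowchart
        return fresh;
    }
}
```

Writing it this way makes the security-relevant invariant explicit: the password-derived hash crosses the wire only on the fallback path, and everything else rides on the short-lived, rotating token.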
I am looking forward to your criticism and feedback. Qdeep _Some ideas were \u201cstolen\u201d from The definitive guide to form based website authentication here on SO. Others were found by searching for `android secure authentication`._"} {"_id": "253047", "title": "Better support for controlling printing options like headers and footers from Javascript", "text": "Why is it that browsers do not provide a mechanism to set the printing defaults regarding headers and footers from within the app? Web apps are fully fledged apps nowadays, and modern browsers are capable of sending a very well formatted page to a printer (or even downloading it as a PDF), but web apps cannot rely on this method because browsers don't allow them to control the metadata completely. Current possibilities don't allow websites to store the printing settings along with the report or file on the server. Each browser provides its own mandatory dialog form to set printing settings that could be replaced, when needed, by a more convenient experience: in-app options, adapted to the use case, cloud-persistent and consistent across different browsers. The options that are currently available are: 1. Delegating part of the work to the user (i.e., having him/her change the metadata options and paper size/orientation themselves). 2. Using other methods that can quickly become too expensive and complicated, like generating PDFs on the server for the user to download. Opting for number 1 is not always our decision. As professional web developers, the final users are usually not our direct clients, but our client's target audience. Also, we would be forcing the final user to do extra work. Experience says that it wouldn't be done and reports would be printed with the default settings of the browser, usually with headers and footers that the user doesn't really want (and the associated paper waste if he/she decides to print it again without them). Option 2 (generating server-side PDFs) is becoming less of an option every day. Modern web apps are heavy users of javascript, and reports benefit from excellent plotting libraries made with JS that can generate charts in HTML canvas elements that could be printed right away if we weren't using this \"do it all over again on the server\" approach. The resulting reports never look the same, and programmers have to maintain two versions of every report. Moreover, generating PDFs on the server can be really slow and resource consuming. So, is there a compelling reason why browser programmers opted not to support this kind of customization?"} {"_id": "84263", "title": "What was Java enterprise programming like before Eclipse?", "text": "I'm your standard Java/Oracle developer at a large software firm and Eclipse 3.6 is what I spend most of my day in. Java is incredibly verbose and that can be painful (but we don't need another blog post explaining this), but Eclipse eases a lot of that pain with one-click renaming, auto-completion, dynamically adjusting your classpath, etc. I cannot imagine trying to do my job without a tool like Eclipse at my disposal. So for those of you who were around before my time, what was it like? The disadvantages are obvious, but were there any advantages?"} {"_id": "107884", "title": "To branch or not to branch?", "text": "Till recently my development workflow was the following: 1. Get the feature from the product owner 2. Make a branch (if the feature is more than 1 day) 3. Implement it in a branch 4.
Merge changes from the main branch to my branch (to reduce conflicts during backward merging) 5. Merge my branch back to the main branch Sometimes there were problems with merging, but in general I liked it. But recently I have seen more and more followers of the idea of not making branches, as branches make it more difficult to practice continuous integration, continuous delivery, etc. And it sounds especially funny from people with a distributed VCS background who were talking so much about the great merging implementations of Git, Mercurial, etc. So the question is: should we use branches nowadays?"} {"_id": "243205", "title": "Understanding branching strategy/workflow correctly", "text": "I have been using svn without branches (trunk-only) for a very long time at my workplace. I have discovered most or all of the issues related to projects which do not have any branching strategy. Unfortunately this is not going to change at my workplace, but it can for my private projects. For my private projects, which mostly involve coworkers working together at the same time on different features, I would like to have a robust branching strategy that supports long-term releases, powered by git. I found that the Atlassian Toolchain (JIRA, Stash and Bamboo) helped me most, and it also recommends a branching strategy which I would like to verify against the team's needs. The branching strategy was taken directly from the Atlassian Stash recommendation with a small modification to the hotfix branch tree. All hotfixes should also be merged into mainline. ![Planned Branching Strategy](http://i.stack.imgur.com/zOdcE.png) **The branching strategy in words** * _mainline_ (also known as _master_ with git or _trunk_ with svn) contains the \"state of the art\" development release. Everything here has been successfully checked with various automated tests (through Bamboo) and looks like it is working. It is not proven to be working because of possibly missing tests. It is ready to use but not recommended for production. * _feature_ covers all new features which are not completely finished. Once a feature is finished it will be merged into _mainline_. Sample branch: `feature/ISSUE-2-A-nice-Feature` * _bugfix_ fixes non-critical bugs which can wait for the next normal release. Sample branch: `bugfix/ISSUE-1-Some-typos` * _production_ owns the latest release. * _hotfix_ fixes critical bugs which have to be released urgently to _mainline_, _production_ and all affected long-term _release_ branches. Sample branch: `hotfix/ISSUE-3-Check-your-math` * _release_ is for long-term maintenance. Sample branches: `release/1.0`, `release/1.1`, `release/1.0-rc1` * * * I am not an expert, so please provide me feedback. Which problems might appear? Which parts are missing or would slow down productivity?"} {"_id": "213959", "title": "Web API Development and SVN", "text": "We are planning to develop an ASP.NET Web API project starting soon. For source control we use svn. We typically follow the pattern of the trunk being stable and doing all of our work in the branches. Most of the time we are lucky in that only one developer is working on a project. For this Web API project there are at least two, possibly three developers working on different parts. By this I mean that one may be working on a set of controllers for working with document models while the other developer is working on a controller or controllers for user authentication and maintenance.
Seeing that these two should not have overlapping work, what are the advantages/disadvantages of both developers working on a single branch with daily \"updates to latest version\" to keep both working copies consistent, versus each developer working on an individual branch and merging the two branches back into the trunk when they are complete? It seems to me that a single branch would be better in that the daily updates will keep each developer up to date on what the other is doing, and this can possibly help with sharing code. My main concern is with performing trial and error coding. What happens when some code is committed that really probably should not be pulled down by the other developer? Should that even be a concern? Another thought I had was that, with each developer having their own branch, if either developer has code that is stable enough for the other to use, we merge that back to the trunk. Then the developers would update daily from the trunk instead. Finally I had a thought that maybe we create one branch for this \"feature set\" and then each developer branches from that branch. They would then push stable features back to the main branch and update daily from the main branch. This would keep the trunk clean and stable until the feature set is released. Those are my thoughts, and sorry for the long question. I really want to start this project right and any guidance would be greatly appreciated."} {"_id": "183819", "title": "Can branching ever be considered a bad practice?", "text": "> **Possible Duplicate:** > To branch or not to branch? A co-worker is against branching. I use branching all the time. I put the team onto a basic 3 branch system. I create my own feature branches. Said co-worker tends to have good ideas and follow best practices, and I tend to agree with him on many such things. Except this. The successful software shops I've read about (Microsoft, Google, Fog Creek) all use branching. My co-worker argues that because of the way our shop runs (medical field), branching causes problems. Instead, he says we should create many small projects (*.csproj files) and never work on the same code at the same time. I agree, we have issues with merging. I argue the root cause of these issues is a lack of team communication and a lack of knowledge on how to correctly use our source control system to full effect (TFS). I consider his approach to be programming in silos and a DLL management nightmare. Are there situations where branching is a bad idea? Am I wrong?"} {"_id": "252712", "title": "Develop in trunk and then branch off, or in release branch and then merge back?", "text": "Say that we've decided on following a \"release-based\" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we: 1. develop and stabilize a **new** release in the trunk and **then** \"save\" that state in a new release branch; or 2. first create that release branch and only merge into the trunk when the branch is stable? I find the former to be easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on released branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems to be almost obsolete, because I could create a future release branch based on the most recent _released_ branch rather than from the trunk.
**Details based on comment below:** * Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately. * Most team members work on several of these areas, so there's partial overlap between people. * We generally work only on 1 future release and not at all on existing releases. One or two might work on a bugfix for an existing release for short periods of time. * Our work isn't _compiled_ and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more -- so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. * A release cycle is typically half a year or more for the base platform; often 1 month for the modules."} {"_id": "246826", "title": "Git branches avoided by the office where I work", "text": "The place I work at does the following: 1. They manage a dev version of an app 2. They manage another replicated version of that app; they call this the rep version 3. The live version of the app that is running 4. They believe branches are evil We are asked to manually copy-paste the code from one version to another. What are the circumstances under which git branches are considered harmful? (We all add code to master.)"} {"_id": "119073", "title": "Values, types, kinds, and\u2026?", "text": "We all know what a value is. A type is the type of a value. A kind is (loosely) the type of a type. A type constructs a value; a kind constructs a type. So what is the type of a kind, a thing that constructs kinds? Is there such a thing? Does it have a name? Is it useful?"} {"_id": "145406", "title": "How should the cppcms template hierarchy be used", "text": "I understand that the template hierarchy in cppcms goes: skin (topmost, representing a namespace), then view (representing a class) and finally template (representing a function). I want to know when I should use a skin and when I should use a view; would each page be a skin, or would it be a skin per application, etc.
Could you explain when each should be used, and could you give some examples?"} {"_id": "145404", "title": "Replace Type Code with Class (From Refactoring [Fowler])", "text": "This strategy involves replacing the likes of this: public class Politician { public const int Infidelity = 0; public const int Embezzlement = 1; public const int FlipFlopping = 2; public const int Murder = 3; public const int BabyKissing = 4; public int MostNotableGrievance { get; set; } } With: public class Politician { public MostNotableGrievance MostNotableGrievance { get; set; } } public class MostNotableGrievance { public static readonly MostNotableGrievance Infidelity = new MostNotableGrievance(0); public static readonly MostNotableGrievance Embezzlement = new MostNotableGrievance(1); public static readonly MostNotableGrievance FlipFlopping = new MostNotableGrievance(2); public static readonly MostNotableGrievance Murder = new MostNotableGrievance(3); public static readonly MostNotableGrievance BabyKissing = new MostNotableGrievance(4); public int Code { get; private set; } private MostNotableGrievance(int code) { Code = code; } } Why exactly is this preferable to making the type an enumeration, like so: public class Politician { public MostNotableGrievance MostNotableGrievance { get; set; } } public enum MostNotableGrievance { Infidelity = 0, Embezzlement = 1, FlipFlopping = 2, Murder = 3, BabyKissing = 4 } There is no behavior associated with the type, and if there was you would be using a different type of refactoring anyways, for example, 'Replace Type Code with Subclasses' + 'Replace Conditional with Polymorphism'. However, the author does explain why he frowns on this method (in Java?): > Numeric type codes, or enumerations, are a common feature of C-based > languages. With symbolic names they can be quite readable. The problem is > that the symbolic name is only an alias; the compiler still sees the > underlying number. The compiler type checks using the number 177 not the > symbolic name. Any method that takes the type code as an argument expects a > number, and there is nothing to force a symbolic name to be used. This can > reduce readability and be a source of bugs. But when trying to apply this statement to C#, this statement doesn't appear to be true: it won't accept a number because an enumeration is actually considered to be a class. So the following code: public class Test { public void Do() { var temp = new Politician { MostNotableGrievance = 1 }; } } will not compile. So can this refactoring be considered unnecessary in newer high-level languages, like C#, or am I not considering something?"} {"_id": "215423", "title": "What is lightweight lock in distributed shared memory systems?", "text": "I started reading Tanenbaum's Distributed Systems book a while ago. I read about two-phase locking and timestamp reordering in the transactions chapter. While taking a deeper look via google, I heard of lightweight transactions/lightweight transactional memory. But I couldn't find any good explanation and implementation. So what is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?"} {"_id": "202626", "title": "Truth condition testing with BOOL", "text": "BOOL myBool; myBool = YES; ... if (myBool) { doFoo(); } I have read that because there are instances where the above does not actually call the `doFoo()` function, it is best to instead always test against `YES`, as such: if (myBool == YES) But for the most part this just makes my code wordy and repetitive.
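For context on why the two spellings can behave differently at all: `BOOL` is traditionally a plain `signed char` with `YES` defined as 1, so the truth test and the equality test disagree whenever a value other than 0 or 1 ends up in the variable. A self-contained sketch of the mechanics, written as plain C++ here since the underlying typedefs are the same:

    #include <cstdio>

    typedef signed char BOOL;  // mirrors Objective-C's traditional definition
    #define YES ((BOOL)1)
    #define NO  ((BOOL)0)

    int main() {
        BOOL b = 2;  // e.g. the result of a bit test or a C API returning non-zero
        if (b)        std::puts("plain truth test fires");  // fires: 2 is non-zero
        if (b == YES) std::puts("equality test fires");     // silent: 2 != 1
        return 0;
    }

Which of the two forms misbehaves therefore depends entirely on which values can actually reach the variable.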
Any thoughts?"} {"_id": "223278", "title": "APIs, Versioning and Models", "text": "I have my (JSON) API structured like this (which I'm pretty happy with): ### API Project /_V1 /Controllers V1EntityController.cs // Applies to version 1 only /_V2 /Controllers V2OtherEntityController.cs // Applies to versions 2 and below /Controllers/ EntityController.cs // Applies to versions 2 and above OtherEntityController.cs // Applies to versions 3 and above ### Core Project /Data/Entity.cs /Data/OtherEntity.cs But as the project has progressed, both the `Entity` and `OtherEntity` classes have become full of legacy properties and a bunch of `ShouldSerializexxx` methods. They then also contain properties and sub-classes which are only for serialization. Would a better solution for this be to create \"Models\" in the API project like so: ### API Project /_V1 /Controllers V1EntityController.cs /Models V1EntityModel.cs /_V2 /Controllers V2OtherEntityController.cs /Models V2OtherEntityModel.cs /Controllers EntityController.cs OtherEntityController.cs /Models EntityModel.cs OtherEntityModel.cs Then convert to and from the classes in the core project? What's the industry recognised practice for handling this scenario?"} {"_id": "166570", "title": "Programming vs Planning", "text": "Recently, I have been tasked with more high-level planning assignments due to the lead developer of my team leaving. I hate long-term planning. My brain just doesn't naturally seem wired for it, and I am not interested enough in it to spend the time to learn it (it is hard enough to keep up with the programming side of the picture). Can one still be a good programmer without being a high-level planner, too? As a senior programmer, is one expected to be good at planning out the entire product and picking a date of completion?"} {"_id": "22685", "title": "What is Configuration Management?", "text": "In all projects that I have been involved with that have had input from an outside consultant, the question has been asked about what sort of Configuration Management we were using. In none of these cases has the consultant been able to define Configuration Management. So what is it?"} {"_id": "111512", "title": "How to deal with an outsourcing company that brings your production system down?", "text": "We have a high-load SAAS product and due to lack of resources the board has outsourced reporting and OLAP & Warehouse work to a 3rd Party Company. When we started working with them, they accidentally DOS'ed our production system; now they have made too many long-running queries on the production databases and slowed the system down until it was unusable. They are about to deliver and we have a maintenance contract with them. How do you deal with 3rd-party companies you can't really trust not to bring your production system down? And what do you do about it when they do? Has anyone any experience of this? (Note: they have a test environment set up to do everything they need to.) I'm not looking for answers on how to prevent someone bringing down our system, I'm after what to do with the 3rd party company itself."} {"_id": "189136", "title": "Unit testing in Django", "text": "I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem.
My schema is deeply relational, and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of `auto_now_add` going on in timestamped fields. So take a method that looks like this for example: def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects) How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be _given some mocked behavior by `key_method` on its arguments, does `summary` correctly filter/aggregate to produce a correct result?_ Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? * I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). * I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with `auto_now_add` fields manually. * Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing."} {"_id": "196058", "title": "Wise way to implement a website login and database tables for a small shop", "text": "I'm building this website for a small store and I was told that it's better **not** to keep the login and the rest of the users' information in the same table. Now I'm wondering, what is the best way to implement this website's database? Some of the database tables are: * Customers * Providers * Products * Etc.. I need to have different users; after someone is logged in to the site, depending on their rights they can do a series of things: * If they are a customer they can order stuff * If they are employees they can order stuff, manage stock, add customers, etc * If they are the owner or administrator they can add employees and everything else. Should I keep the login and the rest of the information of the user in the same table?"} {"_id": "61080", "title": "How to (professionally) back up reasons for choosing open source technologies in a large project", "text": "I know the title is a bit vague so I'll try to be more precise in explaining what my actual question is (I apologize in advance if this is a duplicate). I work for a small company (8 people) that works with Linux / PHP / MySQL to deliver various solutions (CRM amongst other things) to several larger financial institutions. The process of actually winning the contracts was always painful, and I was tasked with explaining why we use MySQL and not MSSQL or PostgreSQL or Oracle 11g, why PHP and not a compiled language or ASP.NET or Java, and the list goes on.
And this is the problem - I really am not proficient enough with the mentioned commercial solutions (databases). I've done some work with the mentioned databases, but that was simple testing, such as how to create a view in this RDBMS as opposed to MySQL, how to do a trigger here, how to declare a stored procedure there, how to profile a query - but I've never worked with any of the mentioned databases as extensively as I have with MySQL. On the other hand, I was introduced to MySQL in early 2000, only to become more and more proficient with it and its quirks, even extending it through UDFs or hacking the source code in order to see how things work behind the scenes - so, as I like to think, I know how MySQL works, what to expect of it and how to add functionality we require. A few days ago we started initial negotiations with our soon-to-be client, who is in need of a bespoke solution for client / business management. They have their own team, however those people didn't succeed in delivering what the company actually needs - which is where my company enters. Naturally, the first thing I wanted to know was what kind of technology was used to build the systems that are currently running and keeping the client's company going - and I saw things from Postgres for smaller apps to MSSQL for the actual client database. Of course, as the company isn't satisfied with their current system(s), that implies something is going wrong. Currently, there's no way of synchronizing data between the main company and its branches, thus - there's no unified client database. It's incredibly hard to track expenses, again - due to disparate systems. And here is what I have problems with - as soon as I mentioned the technology we use, the company's development team frowned upon us and belittled us, saying that open source technologies are insecure, buggy, prone to hacking and slow as opposed to commercial ones (without any actual proof). Just for argument's sake, I've worked with larger datasets built upon MySQL (25TB+) that store various information, and I've been able to keep up with any requirement I had so far when it comes to response times and performance of the application built upon MySQL. But, since I am not experienced enough with other databases - I'm unable to provide actual facts showing that the technology I use is at least on par (if not better + cheaper!) than commercial solutions. How would you defend your (my) position when faced with a technical person who favors Microsoft's software over open source in a company that employs several tens of thousands of people with over 10 million customers? Would you even choose MySQL for the database behind a system that should be powering such a large company? If not, what would you choose and why? Sorry for the wall of text :)"} {"_id": "26599", "title": "What are some good examples of using pass by name?", "text": "When I write programs, pass by value or pass by reference always seem to be the logical methods to use. When learning about different programming languages I came across pass by name. Pass by name is a parameter passing method that waits to evaluate the parameter value until it is used. See the Stack Overflow pass by name question for more information on the method. What I would like to know is: what are some good examples of and/or reasons to use pass by name, and should it be re-introduced into some more modern languages?"} {"_id": "167799", "title": "Pulling changes from master to my work branch?", "text": "There's two of us working on something.
We're using this branch structure * master * dev-A * dev-B We both work on separate branches (dev-A,B) and whenever we're done - we promote our changes to master. But the drawback of this is we can't get the changes the other developer makes. Everything exists in the master tree - but we can't get the latest updates the other developer made. Is there a way to resolve this or should we change our branch structure (per feature?)?"} {"_id": "60970", "title": "Microsoft Most Valuable Professional Selection Criteria [SO?]", "text": "How are Microsoft's MVPs selected? I read somewhere that most of the professionals are selected on the basis of their usenet (or other community) performance. Is this true for SO as well? Can someone be elected as an MS MVP on the basis of his/her stackoverflow performance?"} {"_id": "221696", "title": "How to represent a geometric line programmatically?", "text": "I have been trying to design a library to do some simple geometric computations in a Euclidean space regardless of its dimension. While it is easy to represent points, vectors, hyperspheres and hyperplanes in a generic fashion, I am still unable to find a generic way to represent an (infinite) line, even though lines share properties across dimensions. My best guess is that I could store some of the parameters of its parametric equation, since it is easy to extend a parametric equation to a line in a space of any dimension: x = x0 + at y = y0 + bt z = z0 + ct // can be extended to any dimension But even with this equation, I can't find what should be stored and what should not be in order to compare lines. With an ideal solution, two objects of type `Line`: * would be programmatically equal (with `operator==`), * would have equal representations in the memory. What should I store in order to achieve that?"} {"_id": "60974", "title": "Feeling of Despair before programming?", "text": "When I have a programming assignment or some program that I have to start on, I sometimes have this feeling of despair; I do anything to avoid starting, and then I just start feeling like crap, and that makes me not start even more... it's like a vicious circle. All my friends seem to just get things done... Is it just me or are there others that have this problem? Am I not in \"love\" with programming? If there are others, how do you overcome this feeling and just start? EDIT: @Kennith it's actually kinda hard to describe, but it's kinda like very mild depression; I think it may be because the work required of us is so challenging. I remember in high school I loved programming because after learning the syntax of a language (ie java) you felt like an expert and could do a lot of programming. But in university there is just so much complexity to programming and so many little things that I never considered before - log time, data structures, insane algorithms, graphs, np problems, etc. - that I think I've lost that confidence that I once had. I don't think I've had to do any programming over the last 3-4 years where I knew much about what I was doing (had an overview but it was all new territory); every single project I work on is new material, new concepts, language, theory, etc. It's good for learning... in hindsight, but when you're learning it's hard and frustrating. But you sort of never feel like you're progressing or actually accomplishing anything. You feel good for a little while after finishing one proj, but as soon as you look at the next one... it's back to ground zero.
What I love(d) about programming is that feeling of mastery; that was motivation enough. Now, when you're always doing something new, it's really hard to get that feeling back."} {"_id": "150157", "title": "Plagued by multithreaded bugs", "text": "On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix them are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following: **Common mistake #1** \- Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is running. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called _before_ OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. **Common Mistake #2** \- fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case that the object just got updated by the other thread! **Common Mistake #3** \- Even though the objects are reference counted, the shutdown sequence \"releases\" its pointer. But it forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: **Multithreaded programming is just plain hard, even for smart people.** As I catch these mistakes, I spend time discussing the errors with each developer and developing a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the \"right\" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release.
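For the Shutdown-before-callback case above, the narrow patch (a sketch only; it papers over the symptom rather than fixing the design problem) is to guard both the pointer and a shutdown flag with the same critical section, and have every callback bail out once shutdown has begun. `m_isShutdown` is a new member invented for this sketch; every other name comes from the snippets above:

    void Foo::OnHttpRequestComplete(statuscode status)
    {
        AutoLock lock(m_cs);
        if (m_isShutdown || m_pBar == nullptr)
            return;  // late or spurious callback: ignore it
        m_pBar->DoSomethingImportant(status);
    }

    void Foo::Shutdown()
    {
        AutoLock lock(m_cs);
        m_isShutdown = true;  // no callback may touch m_pBar after this point
        m_pBar->Cleanup();
        delete m_pBar;
        m_pBar = nullptr;
    }

Of course, a dozen such flags scattered across a code base become exactly the kind of accidental state machine that is hard to review, which is what the refactoring discussed next is meant to avoid.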
Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled. Similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process. Including any of the following: 1. Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads **correctly**. 2. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes) 3. Educating your development team on the issues with multithreaded code. 4. **What would you do?**"} {"_id": "65250", "title": "As a professional .NET developer, should you learn to work with MSIL using reflection?", "text": "I was reading through the code of StackOverflow's new SQL Object Mapper. And I noticed at the bottom of SqlMapper.cs there is some code that I had never seen before. After reading some of the documentation on MSDN and a tutorial on MSIL, this feels a little over my head. As a professional .NET developer (6 years), should I invest time to learn this? What is the advantage of understanding the ILGenerator? I hope I'm not confusing things and the question is clear. Thanks!"} {"_id": "150150", "title": "Generic Handler vs Direct Reference", "text": "In a project where I'm working on the data access layer, I'm trying to make a decision on how to send data and objects to the next layer (and programmer). Is it better to 1. tell him to reference my dll, OR 2. build a generic handler and let him take the objects from there (i.e. json format)? If I understand correctly, in case of 2 he would have to handle the objects on his own, whereas in case 1 he will have the entities I've built. Note: It is very probable that other people would need to take the same data, though we're not up to that yet. Same question here - should I make it into a webservice, or have them access the handler?"} {"_id": "65252", "title": "Migrating large system from WebForms to MVC", "text": "I'm looking for advice from people who have actually migrated a large system from WebForms to MVC. What are the things to watch out for? The literature says that both can co-exist for the same system, but was it really that easy? Was managing authentication between the two seamless? Was maintenance easier/tougher after that?
Thanks."} {"_id": "220560", "title": "Storing a mailing address in a database: What structure should I use for International apps?", "text": "I will have international users use my database, but I don't know how the mailing system operates outside of the US. Are the concepts \"City\", \"State\", \"Country\" and possibly \"Zip\" sufficient to capture any hierarchy (even if it is only 2 levels deep: city/country)?"} {"_id": "220564", "title": "How exactly can I check for new rows in sql with ajax?", "text": "How do certain services, like google plus and facebook, check for new content without reloading the page? Whenever you are on any of those websites, a new notification of post-related content just pops up without the need to refresh the page. Can someone give me some tips on how this works? :)"} {"_id": "150159", "title": "Fear of releasing a hobby project - how to overcome?", "text": "I don't know if this question is strictly related to software development, but still I'll give it a try: Like a lot of programmers, I love to work on hobby projects. Sometimes, seemingly good ideas turn out to be not so good, so I drop the project. But sometimes, something useful comes out of the project. So, I could release it, present it to the world, right? Wrong. Somehow, I don't seem to be able to take this step. I fear that my code is not good enough; I can always think of things which are suboptimal, of features which could be added. So, I don't release anything, I lose interest, and at one point abandon the project. Is this normal? How do you overcome such a situation?"} {"_id": "227695", "title": "How do I better engage the users who starred my project on GitHub?", "text": "I recently put a project I've been working on, called Hebel, up on GitHub. It's a framework for GPU accelerated deep learning written in Python and Nvidia CUDA. I posted about it on Google+ and soon afterwards it was picked up on Hacker News and went slightly viral for a few days. I later posted about it again in the Machine Learning subreddit as well, and altogether my project picked up 822 stars and 47 forks on GitHub, which was really exhilarating. 822 stars means my project is in the top 200 Python projects on GitHub and in fact has more stars than some high-profile and widely used Python projects like virtualenv. Despite the considerable interest in my project, I'm very disappointed with the actual engagement I have seen so far. None of the 47 forks of my project have ever had any commits, I haven't received any pull requests, and only three issues were submitted, by two people. It seems that my project is potentially very interesting to many people, but they only star or fork it once and then never return to it again or use it on a continuous basis. How can I improve engagement in order to have users either submit bugs or enhancement requests, or have contributors submit changes?"} {"_id": "75566", "title": "Static Methods in Business Layer to achieve data from DAL! Yes? No?", "text": "Some advice needed here: I've run into a system where the DAL consists of hundreds of sql command calls split up into a class per table. There is also a Business layer which gets its data from this DAL, passing it on to other methods and layers in other places. Nearly 100% of those Business Methods are pure forwarding of data. Few of them contain logic that affects the data (because the data is already sorted/evaluated to some extent in the sql commands/stored procedures).
**Now to the real question.** All of those methods in the business layer are static. This is easy because I can call them from everywhere without instantiation. Are static methods really preferable? Why or why not, in your opinion? I mean, static methods need to be in the heap, and so far I really can't see the cosmetic benefit of it. I also feel the whole system is very hard to debug, especially now because the system has V E R Y variable response times, without many changes in load from users."} {"_id": "75562", "title": "Is reusability roughly synonymous with good design?", "text": "Reusability is a feature of good software design. Is reusability an acceptable gloss (\"brief notation of the meaning\") for good software design? Why?"} {"_id": "75563", "title": "Software background building tools", "text": "I develop a cross-platform project (Windows, Linux, Mac OS X) and use CMake, some python and platform-dependent scripts (sh, bat) for construction. Nevertheless I find the process of construction before release rather boring and annoying. Switching between computers, typing commands, copying binaries, eeww. Does anyone use some kind of building daemon? Something which can build the project in the background on request or by reading a special git/svn commit message."} {"_id": "215349", "title": "Switch vs Polymorphism when dealing with model and view", "text": "I can't figure out a better solution to my problem. I have a view controller that presents a list of elements. Those elements are models that can be instances of B, C, D, etc., which inherit from A. So in that view controller, each item should go to a different screen of the application and pass some data when the user selects one of them. The two alternatives that come to my mind are (please ignore the syntax, it is not a specific language) 1) switch (I know that sucks) //inside the view controller void onClickItem(int index) { A a = items.get(index); switch(a.type) { case b: B b = (B)a; go to screen X; x.v1 = b.v1; // fill X with b data x.v2 = b.v2; case c: go to screen Y; etc... } } 2) polymorphism //inside the view controller void onClickItem(int index) { A a = items.get(index); Screen s = new (a.getDestinationScreen()); //ignore the syntax s.v1 = a.v1; // fill s with information about A s.v2 = a.v2; show(s); } //inside B Class getDestinationScreen(void) { return Class(X); } //inside C Class getDestinationScreen(void) { return Class(Y); } My problem with solution 2 is that since B, C, D, etc. are models, they shouldn't know about view-related stuff. Or should they in that case?"} {"_id": "250539", "title": "MVC class modeling", "text": "I have the following requirement and I am trying to design a class diagram. This is the first time I am creating a class diagram, so I would like to get some advice from experts.. :) This is a simple login functionality implementation using mvc. ![enter image description here](http://i.stack.imgur.com/zssbk.jpg) I have a login controller that takes login and logout requests from the view. It uses a security module class to authenticate users, check whether the user is already logged in or not, and hold the list of logged-in users. Would you consider this a valid approach? Also, if you can point out room for improvement, I would really appreciate it. Thanks.. :)"} {"_id": "117114", "title": "Do you think university is a good learning environment or is it better to be autodidact?", "text": "I'm currently a student in software engineering and I am very passionate about computer science.
In my university we learn how to design an application with design patterns and a bunch of other techniques to produce good software. But I have a really bad problem: I want to know everything. I'm currently writing a little game engine in c++, while I write an application for iOS5, while building a website for a client, while doing my homework, and my homework takes a lot of time for not that much learning. I recently came to a conclusion: I can't finish all my projects. Some of my friends tell me to drop university so I can learn faster, because I will be able to work on my other projects at my own _fast_ rhythm. Do you think university is a good learning environment or is it better to be autodidact?"} {"_id": "31753", "title": "Do I need a degree in Computer Science to get a junior Programming job?", "text": "Do I need to go to Uni to get a job as a Junior C# coder? I'm 26 and have been working in Games (Production) for 6 years and I am thinking of a change. I've had exposure to VB6, VBA, HTML, CSS, PHP, JavaScript over the past few years and did a web design NCFE at College, but other than that, nothing else! I'm teaching myself C# at the moment with books and I was wondering 'how much' I need to learn and also how I can improve my chances of getting a programming job! Am I a late starter to coding? (I know many people who started at a very young age!) Thanks for the help :)"} {"_id": "250534", "title": "Onion Architecture Structure", "text": "I am looking to understand and implement the Onion Architecture and have a vague idea on how to structure everything, but need help to clear up some of my confusion. ![Onion Architecture](http://i.stack.imgur.com/iyWBg.png) Based on different examples and articles I have read, I created the above structure. One of my main confusions comes when I look at \"02-Service: Services Interfaces\". Let's take IUserService.cs. I assume this interface would contain different signatures like RegisterUser(), LoginUser(), BanUser(), ModifyUser(), ChangeAuthenticationLevel() and so on? Is this correct? If not, what other examples would we find? And are these considered Domain Services or Application Services?"} {"_id": "250530", "title": "Preventing ip resolvers in a skype-like program", "text": "I'm currently in the process of creating a Skype-like program that uses a hybrid peer-to-peer system to communicate between users (i.e. the server holds all users' IPs; a client that wants to connect to a friend tells the server, which then sends each client the other's IP so they can start hole punching to establish a connection between them). I know that with Skype, there are many websites that allow you to enter a username and easily get that user's IP address. My question is, what is the best way of preventing such an exploit? Here are a few solutions I could think of: * save each and every user's friends list on the server (so the server can just drop IP requests from users that aren't in the requested client's friends list) * whenever the server is asked for a client's IP, ask that client if the asker is on their friends list (if not, don't reply) * have each client use some secret key to communicate with the server (so only registered clients will be able to send IP requests) Feel free to add a different solution; again, these are just solutions I could think of off the top of my head while writing this question.
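To pin down what the first option would mean server-side, here is a tiny sketch; every type and name in it is hypothetical:

    #include <set>
    #include <string>

    // Hypothetical server-side record: only the parts the check needs.
    struct User {
        std::string id;
        std::set<std::string> friends;  // ids this user has approved
    };

    // The server answers an IP request only when the requester is on the
    // target's friends list; anything else is silently dropped.
    bool mayRevealAddress(const User& target, const std::string& requesterId) {
        return target.friends.count(requesterId) != 0;
    }

The appeal of this variant is that the decision is made entirely on the server, so a malicious client never learns anything at all about users who have not approved it.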
In addition, a related question that I wanted to ask - what would be the best practice in such a program: Have each client store his friends' IPs and make the server notify him whenever they are changed -VS- Have the client ask the server for a friend's IP whenever he wants to start a direct connection with him (keep in mind that the client has to tell the server whenever he's trying to connect to someone anyway, in order for the hole punching to work). P.S. I've never created such a program and I'm not following any tutorial or such. If there's any better way of doing something I wrote, or if I got anything wrong, I'd be very glad if you could mention it."} {"_id": "25489", "title": "What's more important for advancement: writing lots of code (maybe only bug patches) or learning more CS?", "text": "I'm currently a graduate student in Computer Science. I've had a few jobs before coming back (doing some coding for the Army while in) and working in an Oracle shop. Both jobs could be done by monkeys with a semester of computer science. I want to be a great programmer. So, right now I'm back in school working on my master's in computer science and I'm looking to do my thesis in data mining and natural language processing. I have limited time, and right now I'm using my free time to work through Structure and Interpretation of Computer Programs, Machine Learning, OpenGL SuperBible (to learn the basics of graphics) and Cormen's Intro to Algorithms during the break before Winter quarter starts. I'm writing most of the programming projects that go along with those books too. Does this seem like a good use of my time, or should I cut my time in half on that and start trying to do bug fixes for some open source project (I've never done open source stuff and barely have the vaguest idea of how to get started)? If you recommend open source, how should I go about getting started (really, step by step would be nice; I haven't done any so far because I literally don't know how to start)? Or, if you recommend continuing to study, what should I focus on? Thanks."} {"_id": "197907", "title": "Migration to embedded systems", "text": "My company has so far been developing a medical device, which is connected via USB to a desktop system (running x64 Windows 7) to run the image analysis and do everything GUI related. I am familiar with both Windows and Linux programming, C, C++, C++11 and C#, but now our new project coming from management would be a handheld, embedded system, and since I am the only software engineer, I have absolutely no idea how embedded systems work. Is it totally different from the \"normal\" programming job? Should I recommend hiring someone with embedded experience? Are there good resources for an introduction to embedded computing? I am at a loss here, since I do not know what exactly to expect (it'll be in theory the same as with the desktop systems: a sensor acquiring an image, and the software doing analysis). Can someone help me get an idea of what I would have to expect for this? Edit: there are no constraints as to what hardware to use. We can use whatever we want, as long as it is small enough to be handheld. We will be using a third-party sensor (either a photo sensor or an acoustic sensor; that's not been established yet), but again, we are pretty much free to decide, so my guess is that it will have a well-established API.
I don't even really know what embedded systems are; I have experimented privately with an Arduino. Does that count as embedded already?"} {"_id": "25486", "title": "What is a convention based framework?", "text": "I'm reading about ColdBox, and I came across this term. So what is meant by a convention-based framework?"} {"_id": "25487", "title": "Who can learn to program?", "text": "I always hesitate when talking to professors about trying to improve the percentage of people who graduate with a CS-type degree compared to the number that start out thinking that is what they want. On one hand I really do think it is important for professionals to be involved and give this feedback; on the other hand it would be better if fewer sub-par students ended up with CS degrees. I don't think every mind is built for this field, and you have to be a good lifelong student. You have to have a high degree of patience and problem-solving skills just to eke by. If you do have the \"right\" kind of brain, those hard problems are what drives you to continue. If you just get a long list of easy problems you get bored, so these people are actually not good at more repetitive jobs. I don't need to go into all the details... if you are reading this you probably know what I'm getting at. So the question is: How do you find the balance of a degree program that is accessible to enough people to be funded and considered successful, but also doesn't turn out people who aren't really cut out for the job? Maybe a better question is, what metric do you use to know if the changes you are making in a degree program are making it better? I don't know that a higher graduation rate is a good metric. And it seems that the feedback that you could try to capture many years later about the jobs that the graduates hold would be too far delayed. I've struggled with this question for a long time, mostly because I don't think there is an answer. But I thought I'd ask to see if anyone knows of any research that has actually been done on it. Addition: I recently had a very wise professor remind me that not everyone who graduates with a CS degree even wants to be a full-time programmer once they have actually discovered what that means. But, with the education that they received, they could possibly make great Project Managers, Managers, system admins, etc. I think this was a very good point that I hadn't thought to consider here. There is a very high percentage of people who don't end up working in the field they majored in; CS isn't an exception to that. Having the extra folks helps not only in funding for the degree but also in expanding the percentage of non-programmers who still know enough about it to work with programmers."} {"_id": "166391", "title": "Visual C++, CMap object save to blob column", "text": "I have an MFC CMap object; each object stores ~160K entries of long data. I need to store it on Oracle SQL. We decided to save it as a blob. Since we do not want to make an additional table, we also thought about saving it as a local file, with the SQL column pointing to that file. We would prefer to just keep it as a blob on the server and clear the table every couple of weeks. The table has a sequential key as ID, and 2 time columns. I need to add the blob column in order to store that CMap on every row. Can you recommend a guide to do this approach (read/write Map to blob or maybe a clob)?
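One direction for the read/write part, assuming MFC's own serialization support (CMemFile plus CArchive); `m_map` stands in for the CMap member, and the bind call at the end is hypothetical because it depends entirely on which Oracle access layer is in use:

    // Serialize the CMap into an in-memory buffer that can be bound as a BLOB.
    // Works when the map's key/value types are archivable (plain longs are).
    CMemFile memFile;
    {
        CArchive ar(&memFile, CArchive::store);
        m_map.Serialize(ar);  // CMap writes itself element by element
        ar.Close();           // flush everything into memFile
    }
    ULONGLONG cb = memFile.GetLength();
    BYTE* pBytes = memFile.Detach();           // the buffer is now ours to free
    // bindBlobParameter(stmt, 1, pBytes, cb); // hypothetical: depends on the Oracle layer
    free(pBytes);                              // CMemFile allocates with malloc by default

Reading it back would be the mirror image: attach the blob bytes to a CMemFile and run Serialize with a CArchive opened in CArchive::load mode.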
Thanks."} {"_id": "166390", "title": "How to prevent duplicate data access methods that retrieve similar data?", "text": "In almost every project I work on with a team, the same problem seems to creep in. Someone writes UI code that needs data and writes a data access method: AssetDto GetAssetById(int assetId) A week later someone else is working on another part of the application and also needs an `AssetDto`, but now including 'approvers', and writes the following: AssetDto GetAssetWithApproversById(int assetId) A month later someone needs an asset, but now including the 'questions' (or the 'owners' or the 'running requests', etc): AssetDto GetAssetWithQuestionsById(int assetId) AssetDto GetAssetWithOwnersById(int assetId) AssetDto GetAssetWithRunningRequestsById(int assetId) And it gets even worse when methods like `GetAssetWithOwnerAndQuestionsById` start to appear. You see the pattern that emerges: an object is attached to a large object graph and you need different parts of this graph in different locations. Of course, I'd like to prevent having a large number of methods that do almost the same thing. Is it simply a matter of team discipline, or is there some pattern I can use to prevent this? In some cases it might make sense to have separate methods, i.e. getting an asset with running requests may be expensive, so I do not want to include these all the time. How to handle such cases?"} {"_id": "3713", "title": "What would be the best way to handle errors in parallel programs?", "text": "With parallel algorithms knocking at the door, it might be a good time to think about error handling. So at first there were error codes. Those sucked. Ignoring them was free, so you could fail late and produce hard-to-debug code. Then came exceptions. Those were made impossible to ignore once they occurred, and most people (except Joel) like them better. And now we have libraries that help with parallel code. Problem is, you can't handle exceptions in parallel code as easily as you could with non-parallel code. If you asynchronously launch a task and it throws an exception, there's no stack trace past it to unwind; the best you can do is capture it and register it on the task object, if there's such an object. However, it defeats the primary strength of exceptions: you have to check for them and _you can ignore them without any additional effort_, whereas in single-threaded code an exception will necessarily trigger the appropriate actions (even if it means terminating your program). How should language implementations or libraries support errors in parallel code?"} {"_id": "166392", "title": "In PHP, what are the different design patterns to implement OO controllers as opposed to procedural controllers?", "text": "For example, it's very straightforward to have an index.php controller be a procedural script like so: Then I can just navigate to http://example.com/index.php and the above procedural script is essentially acting as a simple controller. Here the controller mechanism is a basic procedural script. How then do you make controllers into classes instead of procedural scripts?
Must the controller class always be tied to the routing mechanism?"} {"_id": "200037", "title": "What encoding is used by javax.xml.transform.Transformer?", "text": "Please can you answer a couple of questions based on the code below (excludes the try/catch blocks), which transforms input XML and XSL files into an output XSL-FO file: File xslFile = new File("inXslFile.xsl"); File xmlFile = new File("sourceXmlFile.xml"); TransformerFactory tFactory = TransformerFactory.newInstance(); Transformer transformer = tFactory.newTransformer(new StreamSource(xslFile)); FileOutputStream fos = new FileOutputStream(new File("outFoFile.fo")); transformer.transform(new StreamSource(xmlFile), new StreamResult(fos)); inXslFile is encoded using UTF-8 - however, there is no declaration in the file which states this. sourceXmlFile is UTF-8 encoded and there may be a metatag at the start of the file indicating this. I am currently using Java 6 with the intention of upgrading in the future. 1. What encoding is used when reading the xslFile? 2. What encoding is used when reading the xmlFile? 3. What encoding will be applied to the FO outfile? 4. How can I obtain the info (properties) for 1 - 3? Is there a method call? 5. How can the properties in 4 be altered - using configuration and dynamically? 6. If known - where is there info (a web site) on this that I can read? I have looked without much success."} {"_id": "105089", "title": "Difference between free and open software?", "text": "> **Possible Duplicate:** > Open Source but not Free Software (or vice versa) I recently was at a talk where Stallman was the keynote speaker, and he stated he hated the term open software, because it was geared to confuse people. Now, I have since then read on to understand his stance, but I am unable to draw a line where free software becomes open source, and which licenses can be considered in each group. For example, I know that GPL and LGPL are free software licenses, but they are also open source licenses as well. Is there a pure open source license, like the MIT license? Or is it all shades of grey? Can a line be drawn? Thanks!"} {"_id": "116890", "title": "'Recommended' file length and line widths", "text": "I was curious if anyone knew of a recommendation from a reputable source for the max number of lines of code for a given file. For example, Google's Closure Linter recommends that each line should not exceed 80 characters."} {"_id": "190376", "title": "Why is black box called functional testing when it tests also non functional?", "text": "This has been bothering me for a while. Security, performance tests, etc. are all typically done using the black box approach. But these are non-functional, while black box testing is called functional testing. Is it because it judges the function - so it is just a naming convention - or is there an inconsistency? References: Software Engineering by Salleh Software Engineering and Testing by Gupta, Tayal Software Engineering by A.A. Puntambekar Software Testing: a Practical Approach by Sandeep Desai, Abhishek Srivastava I cannot see the reason for the downvote, as obviously many are as confused as I am (from the comments saying "I have never seen it before")."} {"_id": "99860", "title": "How important is it for a programmer to know how to optimize code, solve complex puzzles, answer technical questions quickly?", "text": "So I'm in the process of looking for a job. And I have had plenty of tests and just recently someone sent me a puzzle. It's the crackless wall problem and I have no idea how to approach it.
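(For the `Transformer` question above, a hedged sketch: the stylesheet and XML input encodings are decided by the XML parser from each file's XML declaration/BOM - UTF-8 if absent - while the *output* encoding is an output property that can be read and set explicitly:)

```java
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class EncodingCheck {
    public static void main(String[] args) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("inXslFile.xsl")));

        // Read the effective output encoding (an xsl:output declaration wins
        // if the stylesheet has one; otherwise the default, typically UTF-8).
        String enc = transformer.getOutputProperties()
                                .getProperty(OutputKeys.ENCODING);
        System.out.println("output encoding: " + enc);

        // Force it dynamically, overriding whatever the stylesheet says:
        transformer.setOutputProperty(OutputKeys.ENCODING, "ISO-8859-1");
    }
}
```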
I could cheat and google the answer, but I'm an honest fellow, and it's taken me 2 days and still no solution. Performance is a key issue as well. Should I take some time to master solving programming puzzles? Or should I focus on learning the language first, then master optimization techniques? I'm having issues choosing the best approach. Just today I found out from a previous interview that they chose someone else over me because that candidate seemed more knowledgeable than me, since during the technical interview this candidate was a bit more snappy with his/her responses. I knew all the answers but I wasn't fast enough with them. Does this matter with all interviewers? Should I invest time in mastering core concepts in Java so I can have just as fast recall? (To shed some light on my background: I am a recent graduate with a little over a year of Java programming experience under my belt. I only have a bachelor's degree. I know this is irrelevant, but I'm also 23.)"} {"_id": "255037", "title": "Decorating a class that calls its own public methods", "text": "I've written a system that calculates discounts for a shopping cart based on a set of rules. Each rule is implemented with the following interface (C#): interface IRule { bool IsActive(); bool IsApplicableTo(IShoppingCart cart); decimal CalculateDiscount(IShoppingCart cart); } A concrete implementation might look something like this: class Rule : IRule { public bool IsActive() { return true; } public bool IsApplicableTo(IShoppingCart cart) { if (!IsActive()) return false; return TestIfApplicable(cart); } public decimal CalculateDiscount(IShoppingCart cart) { if (!IsApplicableTo(cart)) return 0m; return CalculateStuff(cart); } } Now I'm trying to introduce some logging to the system. To keep things clean, I don't want to jam it into the Rule class. So I thought I'd make a Decorator for IRule, which handles the logging: class LoggingRule : IRule { IRule decoratedRule; public bool IsActive() { LogImportantThings(); return decoratedRule.IsActive(); } //And so on. } The issue that I'm running into here is that the Rule class actually calls the IsActive and IsApplicableTo methods itself. When that happens, the decorator is simply passed by, and nothing is being logged. I suppose I could make LoggingRule a subclass of Rule. It would mean that I would have to make a LoggingRule variant for every IRule implementation. That could potentially cause a lot of duplication. I could also put the logging into a base class, and use the Null object pattern when I don't want to log anything. But then again, 99 out of 100 times I don't want to log anything. And when these methods are called many, many times, is it worth the potential overhead? So neither solution seems particularly elegant, and I'm struggling to find a better alternative... Any suggestions?"} {"_id": "167627", "title": "How show Attributes which appear In Many To Many association", "text": "As we know, a many-to-many association is shown by two asterisks at both ends of the association. Now I have an association between two entities, "Good" and "Invoice", so Good and Invoice have a many-to-many association, but I want to show the "count of each good" in each invoice on the class diagram. How can I show it?"} {"_id": "222583", "title": "How to prevent re-checking already-checked data?", "text": "I have a class with a `validId($id)` method that is called by the constructor and by `public function load($id)`. The method queries the database to see if the id exists and returns true/false.
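(On the decorator question above: since a wrapper can never intercept `this.IsActive()` inside `Rule`, one hedged alternative is to make the hooks `virtual` and log from a subclass, so internal calls dispatch to the override too. `IShoppingCart` is reduced to a string here just to keep the sketch self-contained.)

```csharp
using System;

class Rule
{
    public virtual bool IsActive() { return true; }

    public virtual bool IsApplicableTo(string cart)
    {
        if (!IsActive()) return false;      // virtual dispatch: hits the override
        return cart.Length > 0;
    }

    public virtual decimal CalculateDiscount(string cart)
    {
        if (!IsApplicableTo(cart)) return 0m;
        return 5m;
    }
}

class LoggingRule : Rule
{
    public override bool IsActive()
    {
        Console.WriteLine("IsActive checked");  // logged even for internal calls
        return base.IsActive();
    }
}

class Program
{
    static void Main()
    {
        Rule rule = new LoggingRule();
        Console.WriteLine(rule.CalculateDiscount("cart")); // logs once, prints 5
    }
}
```

The cost is one logging subclass per rule class; the question's other option (a null-object logger in a base class) trades that for a small indirection on every call.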
The constructor optionally takes an `$id` (if no $id, then the object will be empty). The constructor checks the $id with `validId($id)`, and if it passes, then it calls `load($id)`. I should not take `validId()` out of `load()` because `load()` is public and the input has to be checked! The same goes for the constructor. How do I avoid calling `validId()` twice? I was thinking I could add a parameter to `load($id,$check_id)`, but that would be ridiculous. You can't give clients the option to not validate data! I could add a protected property like `$id_is_valid` that gets set to true by `validId($id)`. But how do I know the value that was valid is the same one being received by my `load()` method? I'm using PHP. Hopefully this isn't a stupid question."} {"_id": "160215", "title": "How do I make the most of the chance to meet one on one with a programming guru?", "text": "So, I'm not a programmer, though I've been writing code all of my life. As is my habit, I attempt to contact well-known experts in almost any domain I find of interest, and interestingly, I get a lot of meetings, which I value a great deal. So, here's the deal -- a meeting has been set for me to meet a very well known programmer and I would like advice on how to make the most of it. They've written a number of books, some of which I've read, but nothing strikes me as a point of conversation; meaning they know I'm not a real programmer, though I am very interested in talking to them about their advice for becoming a programmer. Should I attempt to have a potential coding topic, a set of code to review, etc. -- or just show up and get general advice?"} {"_id": "222586", "title": "How should you cleanly restrict object property types and values in Python?", "text": "I have been learning Python for only a short time. But near the beginning I stumbled on a simple question: how do you set a restriction (limitation) on an object's value or properties without writing a convoluted class definition? There is no problem with Python's loose typing as such. I am searching for methods to control not only variable types, but also the ranges of variable values. For example, let us create the classic educational class - "Circle". A Circle object should have three properties: two origin coordinates (some x, y) and a radius, where (in our human-visible world) x and y should be real numbers and the radius should be a positive real number. I somehow solved this task; the corresponding method of the "Circle" class is below: def __setattr__(self,name,value): def setting_atr(name,value): if isinstance(value,(float,int)): if name=='radius' and value<0: raise ValueError("Attribute 'radius' should be no negative") self.__dict__[name]=value else: raise TypeError("Attribute should be numerical") if len(self.__dict__)<3: #number of object attribute return setting_atr(name,value) elif name not in self.__dict__: raise AttributeError("'Circle' object has no such attribute") else: return setting_atr(name,value) Nevertheless, I am utterly displeased with the way I have done it. The code is huge for so small a concept, and it is hardly reusable (I know about descriptors - they would give nearly the same result). I would prefer some declarative style. When I started searching for an alternative, my first thought was to add my own operator. But after much googling I realized that it is impossible to do that in Python itself (if I'm wrong - please correct me).
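(For comparison, a minimal sketch of the property route - the usual "declarative-ish" answer in stock Python, one property per constrained attribute:)

```python
class Circle(object):
    def __init__(self, x=0.0, y=0.0, radius=1.0):
        self.x = x
        self.y = y
        self.radius = radius        # routed through the setter below

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if not isinstance(value, (int, float)):
            raise TypeError("Attribute should be numerical")
        if value < 0:
            raise ValueError("Attribute 'radius' should be non-negative")
        self._radius = value

c = Circle()
c.radius = 2.5          # fine
# c.radius = -1         # would raise ValueError
```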
Since I am only a newbie, and joining the Python developers' team or forking Python ( :)) is not a way open to me, I gave up on these thoughts. Another possible path in a declarative style is writing something like: radius=radius if isinstance(radius,(float,int)) else raise TypeError("Attribute should be numerical") radius=radius if radius>=0 else raise ValueError("Attribute 'radius' should be no negative") Of course it does not work, because we have no ability to use 'raise' in an expression like that. I should be able to call an error object as a function to bring this to reality. The simplest way to do that is to write an error-raising function like this: def SomeError(exception=Exception, message="Something don't well"): if isinstance(exception.args,tuple): raise exception else: raise exception(message) So it allows one to write: radius=radius if radius>=0 else SomeError(ValueError("Attribute 'radius' should be no negative")) I also thought about the following form: @dfValueError(['isinstance(radius,(float,int))'],"Attribute should be numerical") @dfTypeError(['radius>=0'],"Attribute 'radius' should be no negative") def fun_radius(x) return x The small problem with this approach is the more complex and less informative error output. At first, the error message leads to the fake error-raising function; the place where the "oops" actually occurred appears only in later lines. My goal is to make it possible to raise exceptions from lambda functions and code lines like those shown above, to make the code more evident. To do this I should create my own exception class with the call method implemented. Problem: the inherited class does not see itself in its own namespace, so the method __call__(self): raise self #yes ha-ha gives as a result: TypeError: __call__() missing 1 required positional argument: 'self' So my question is simple: is it possible to make an error object callable? If yes, what should I read to do it? And also, if someone solves this small problem, I would be happy. P.S. I know the samples (about the radius variable declaration) shown above can't be implemented in this "circle" class as-is. I give them only as examples to show my train of thought. Please don't pick on them. **Afterword (by Marat)** Thanks very much to everyone who wrote and voted in this topic. After reading the answers I thought for a long time, and finally brought myself to add something to this topic (even if it entails some risk of infringing the rules). First: some people wrote that we don't need to care about errors at the object-creation stage, and even that there is no need to control variable types at all (inasmuch as it is not implemented in Python by default). Although I have not been in object-oriented programming for long, before that I had been (and still am) working on calculations for technical processes and devices. So I am very confident about these words: "A possible error should be localized as soon as possible (I didn't say fixed)". Many algorithms for calculating real physical processes use iterations and recursions, so if you let some mistake slip by (to handle later) it can introduce a hard-to-detect error in the final result. So, here is Python from this point of view. Python's "Duck Typing" is great for quick prototyping and for simplifying applied programming. But "Duck Typing" is not a religious demand. Notwithstanding Python's loose typing, it isn't a bizarre thing to add type control if you think you need it. But the style which Python imposes to perform it differs from the style in statically typed languages such as C++, Java, etc.
In Python we can control the object only at certain "control points". This guides us to the "System Oriented Programming" principles (the term is mine; I didn't check whether it exists in the literature, so it may have a different meaning for other authors, and what I mean may also go by another name). We should (if we wish) set the "control points" at the inlet and outlet of the elements of the ( **main** ) system. What counts as a system element? It should be any object which has its own namespace and can interact with the outside as a function (by methods as well). The simplest version of inlet control for a function element is: def func(a,b,c): #just like this a=int(a) b=str(b) c=list(c) A more complex and interesting approach is shown here: Type-checking decorator for Python 2 Separately, about the method that was suggested by my co-author. The "property" decorator uses the same machinery as a descriptor, so it emulates **object** attributes with **class** methods. This should be used carefully because in Python the objects of the same class share the class namespace. I allowed myself to rewrite the same program that was written by @Izkata using `__setattr__` (again): filter_function_for_radius=lambda value: value if value>0 else SomeError(ValueError, 'Radius cannot be negative') class Circle(object): def __init__(self, x=0, y=0, radius=1): self.x = x self.y = y self.radius =radius def __setattr__(self,name,value): if name=='radius': value=filter_function_for_radius(value) self.__dict__[name]=value ...and if you need some other (perhaps complex or negative) radius for some hypothetical circle, you don't need to inherit from or redefine the class - just "change the filter". But that advantage of the functional style may be obtained only by using `__setattr__` or by creating your own metaclass. And once more, thank you for answering."} {"_id": "104513", "title": "When developing algorithms, is skipping the pen&paper phase a bad habit?", "text": "I heard many people saying that when developing algorithms you should first use pen and paper, flowcharts and what not, so that you can focus on the algorithm itself, not worrying about the implementation of said algorithm (i.e., you deal with one problem at a time). However, most of the time I find it easier to actually develop my algorithm on the fly. That is, I think a bit about the problem until I know the general direction to take, and then I start writing code and making changes until the algorithm emerges and works. Is this a bad habit that I should try to change?"} {"_id": "94054", "title": "How to transition from a web developer to an embedded developer?", "text": "I've been doing web development, and backend Java development, professionally for about 5 years now. My passion has always been closer to the metal though. Applying to embedded jobs has not been successful, and I believe it's because I lack experience. Perhaps there are open source projects I should undertake to show that I'm serious? How does one make a transition from being a web developer to an embedded developer?"} {"_id": "163090", "title": "What is the meaning of 'high cohesion'?", "text": "I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "Low coupling and high cohesion". I understand the meaning of low coupling. It means to keep the code of separate components separate, so that a change in one place does not break the code in another. But what is meant by high cohesion?
If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Can an example be explained to understand its benefits?"} {"_id": "105645", "title": "Is there a reason for initial overconfidence of scientists and engineers working on artificial intelligence in the 1960s?", "text": "I just started an AI & Data Mining class, and the book, AI Application Programming, starts off with an overview of the history of AI. The first chapter deals with the history of AI from the 1940s to the present. One particular statement stuck out at me: > [In the 60s] AI engineers overpromised and underdelivered... What was the reason for the overconfidence? Was it because of mathematical prediction models showing that a breakthrough was around the corner, or due to the ever-increasing hardware capability to take advantage of?"} {"_id": "105647", "title": "When inheriting from a class in C#, which naming convention is preferred?", "text": "If you inherit from Tile, for example, should the subclass be called CollisionTile or just Collision? What about for painting tools in an application like Paint? PenTool or just Pen? Are there any published guidelines for this?"} {"_id": "141899", "title": "When is a requirement considered complete?", "text": "Which elements must a requirement contain for it to be considered complete? Or, if this works better - which questions should I ask about a requirement to find out if it is complete? I am not talking about the implementation of the requirement but the requirement itself. I am asking this from the perspective of an analyst who wants to make sure that his requirements are complete before passing them on to the design team."} {"_id": "222639", "title": "What's the best way to undo a Git merge that wipes files out of the repo?", "text": "So imagine the following happens (and that we're all using SourceTree): 1. We're all working off origin/develop. 2. I go on holiday for a week. 3. My coworker has been working locally for the past several days without merging origin/develop back in to his local develop branch. 4. He tries to do a push, gets told he has to merge first, and then does a pull. 5. He gets a conflict, stopping the automatic commit-after-a-merge from proceeding. 6. Assuming that Git is like SVN, my coworker discards the "new" files in his working copy and then commits the merge - wiping those "new" files from the head of origin/develop. 7. A week's worth of dev work goes on top of that revision. 8. I come back from holidays and find out that several days of my work are missing. We're all very new to Git (this is our first project using it), but what I did to fix it was: 1. Rename "develop" to "develop_old". 2. Merge develop_old into a new branch "develop_new". 3. Reset the develop_new branch to the last commit before the bad merge. 4. Cherry pick each commit since then, one by one, resolving conflicts by hand. 5. Push develop_old and develop_new up to the origin. At this point, develop_new is, I'm hoping, a "good" copy of all of our changes with the subsequent week's worth of work reapplied. I'm also assuming that "reverse commit" will do odd things on a merge, especially since the next few weeks' worth of work is based on it - and since that merge contains a lot of things we _do_ want along with stuff we don't. I'm hoping that this never happens again, but if it does happen again, I'd like to know of an easier / better way of fixing things.
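(A hedged sketch of the lighter-weight fix for next time, assuming the bad merge has already been pushed so history must not be rewritten:)

```bash
# Revert the *changes* the bad merge introduced, as a new commit on develop.
# -m 1 says "keep the first parent's side" (the develop line) as the baseline,
# so the files the merge wiped out come back.
git revert -m 1 <sha-of-bad-merge>

# Work committed on top of the merge is left alone. One caveat: if the
# reverted branch ever needs to be merged again, the revert itself may have
# to be reverted first, since git considers that content already merged.
```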
Is there a better way of undoing a \"bad\" merge, when a lot of work has gone on in the repo based on that merge?"} {"_id": "60028", "title": "Is there any evidence that lisp actually is better than other languages at artificial intelligence?", "text": "There seems to be a long-held belief (mainly by non-lispers) that lisp is better than most languages at AI. Where did this belief originate? And is there any basis in fact to it?"} {"_id": "113117", "title": "Are any top software products outsourced? (offshore or otherwise)", "text": "Is there any software that is both ahead of the competition and outsourced? By that I mean software products which are big sellers and which are highly reviewed, and developed by an entity outside of the entity that owns the product itself. The \"outside entity\" could be a domestic or foreign business, performing development work on a contractual basis. I am excluding open source here, as the process is more democratized than the typical product (where the product owner dictates the design and requirements directly). I'm thinking desktop software like: * Adobe Creative Suite * MS Office * Aperture * Visual Studio * ReSharper Or web apps/sites like: * Yelp * Pandora * 37 Signals apps * Zendesk * Salesforce * Amazon I'm looking for data points, here, and not subjective ideas about what's popular. We can measure popularity by sales or Google rank or other metrics."} {"_id": "36460", "title": "Comparison of languages by usage type?", "text": "Does anyone know of a good place to go find comparisons of programming languages by the intended platform/usage? Basically, what I want to know, is of the more popular languages, which ones are meant for high level application development, low level system development, mobile development, web, etc. If there's a good listing out there already, I'm not finding it so far. Does anyone know of a place that would have this? Thanks."} {"_id": "36466", "title": "Accessing a Web Service: Learning Resource needed", "text": "I have been searching for resources to learn (Java) Web Services. Although I have found a lot of resources and tutorials on JWS, I am confused with the version numbers, the abbreviations and Metro. Plus the last update to Metro was in 2008. Is it a worthwile thing to learn? I wanted to learn how to access Web Services, since an upcoming project is about accessing one. I have some experience with OAuth on Twitter(using code available). Things I know about the project: I have to access a Web Service. Java is the preferred platform to use(Although I know I can use any). Axis can be used to access the Web Service(I have never used Axis) I have a meeting scheduled to learn more, but I sure don't want to look silly since I am no Java expert, have never created or accessed Web Services using Java. My Questions: 1.Can someone point me to a tutorial which will help me learn how to access a already running Web Service (Preferably SOAP(?), not REST. It's XML based) 2\\. Will you recommend using PHP or Python to do the work of accessing the web service? I am expecting a lot of nay saying, but I hope I get some answers too. I will clarify things if needed."} {"_id": "220956", "title": "Do I have to include the Copyright notice from the original author?", "text": "I'm using some codes that I modified, from Paradigms of Artificial Intelligence Programming to create another project, the project is released under GPL V2. The License agreement is written here, it says: Copyright \u00a9 1998-2002 by Peter Norvig. 
My question is: We're almost in 2014 - do I still have to include that license, or did it expire, and therefore I may use the code any way that I like?"} {"_id": "146324", "title": "Handling database schema changes when pushing new versions", "text": "During times of heavy development, the database schema changes both rapidly and continuously, and by the time our weekly push to the beta build comes around, the schema has changed so much that the only sensible option is to nuke all of the tables I can and copy the new versions from my dev database. Obviously, this isn't going to work once we launch, since nuking production data is a recipe for disaster, so I was wondering what strategies were out there for managing database schema changes from one version/revision to another? Some I've found or experienced: 1. Straight nuke-and-dump from one database to another (what I am doing now) 2. Maintaining an UPDATE.sql file with SQL statements that get run either via script or by hand. 3. Maintaining an update.php file with a corresponding "db-schema-version" value in the active database The third option seems to be the most sensible, but there still exists the possibility of a badly constructed SQL query failing mid-script, leaving the database in a half-updated state, necessitating the restoration of a backup. It seems like a non-issue, but it does happen, since, as a team, we use phpMyAdmin, and I can't even rely on myself to remember to copy the executed SQL statement and paste it into the update.php file. Once I navigate to another page, I have to re-write the SQL statement out by hand, or reverse my change and do it again. I guess what I am hoping for is a solution that doesn't impact our established development workflow?"} {"_id": "186371", "title": "Practical reference for learning about graph reduction", "text": "Are there any practical references (with actual examples) for getting started implementing a small, lazy functional programming language with graph reduction? A reference that included the lexing and parsing steps would be especially helpful. So far I've read most of the _Implementation of Functional Programming Languages_ by Simon Peyton Jones and the Wizard book (SICP)."} {"_id": "143943", "title": "Database Table Prefixes", "text": "We're having a few discussions at work around the naming of our database tables. We're working on a large application with approx 100 database tables (ok, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming/organizing these within an Oracle database. The three current options are: * Create the different functional areas in separate schemas. * Create everything in the same schema but prefix the tables with the functional area * Create everything in the same schema with no prefixes We have various pros and cons around each one, but I'd be interested to hear everyone's opinions on what the best solution is. Edit: Thanks for all the answers. It's hard to pick a right answer since it is very subjective, so I'll go for the one with the most votes (it helps that it matches what I was thinking! ;-)"} {"_id": "102916", "title": "Is there a canonical book on x86 assembly?", "text": "There are lots of books on assembly. However, they usually deal with ISAs about which I don't care, such as MIPS or ARM. I don't deal with these architectures; there's no reason for me to try to learn them. But x86 assembly books seem... nonexistent.
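(Back to the schema-migration question above: a hedged sketch of option 3 with per-step bookkeeping, so a statement that fails mid-run at least leaves an exact record of how far the script got. Table and column names are illustrative; it assumes a `schema_version(version, applied_at)` table already exists.)

```php
<?php
// update.php: apply only the migrations newer than what the DB has seen.
$migrations = array(
    1 => "ALTER TABLE users ADD COLUMN last_login DATETIME NULL",
    2 => "CREATE INDEX idx_events_uid ON events (uid)",
);

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$current = (int)$pdo->query("SELECT COALESCE(MAX(version), 0) FROM schema_version")
                    ->fetchColumn();

foreach ($migrations as $version => $sql) {
    if ($version <= $current) {
        continue;                        // already applied
    }
    $pdo->exec($sql);                    // note: DDL auto-commits in MySQL
    $stmt = $pdo->prepare("INSERT INTO schema_version (version, applied_at) VALUES (?, NOW())");
    $stmt->execute(array($version));
}
```

It also sidesteps the copy-from-phpMyAdmin step, since changes only reach any database through this one file.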
Let's say, for example, I'm trying to build a toy compiler generating Windows Portable Executable files. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on x86 assembly? What about that book makes it special?"} {"_id": "170246", "title": "How to prevent code from leaking outside work?", "text": "> **Possible Duplicate:** > How to manage a Closed Source High-Risk Project? I'm working at an institution that has a really strong sense of "possession" - each line of software we write should be only ours. Ironically, I'm the only programmer (ATM), but we're planning on hiring others. Since my bosses wouldn't count the new programmers as people they can trust, they have an issue with the copies of the source code. We use Git, so they would have an **entire** copy of _each_ of the projects they work on when they clone the repository. We can restrict access to them to a single key with Gitolite and bind that to their PCs, but they can copy those keys to another computer and they would have repository access on another PC. Also (and the most obvious method) they could just upload the files somewhere else, add another remote, or just copy the files to a USB drive. Is there any (perhaps clever) way to prevent events like these? **EDIT:** I would like to thank everyone for their insights on this question, since it has been not only eye-opening, but also firm support for my arguments (since you basically think like me, and I've been trying to make them understand that) against my bosses in the near future. I am in a difficult situation work-wise, with my coworkers and bosses (since I'm basically in the middle) being like two gangs, so all this input is greatly, greatly appreciated. It is true that I was looking for a _technical_ solution to a _people_ problem - both the management and the employees are the problem, so it can't be solved that way (I was thinking about some _code obfuscation_, perhaps working with separate modules, etc., but that wouldn't work from my developer POV). The main problem is the culture inside and outside the company - development is not taken seriously in my country (Venezuela), so naivety and paranoia are in fact a real issue here. The real answer here is an NDA (something that here in Venezuela doesn't completely work), because that's the _people_ solution - no sane developer would work under those conditions. Things will get ugly, but I think I will be able to handle that because of your help. Thank you all a lot! <3"} {"_id": "118740", "title": "How to protect source code from remote developers?", "text": "My company is going to hire an external developer to create some new modules and fix some bugs in our PHP software. We have never hired an external developer by the hour before. How can we protect the source code? We are not comfortable giving out the source code, and we were thinking of keeping everything behind a surveillance-enabled VPN which the external developer would log in to. Has anyone solved this problem before? If so, how? Edit: We want the developer to see/modify the code, but under surveillance and on our machine remotely. Does anybody have a similar setup? Edit 2: An NDA is just a formality. IMO, even people who are in favor of NDAs know that they'll do nothing to protect their property. Edit 3: Let me clarify that we aren't worried about the developer copying an algorithm or a solution from the code.
Code is coming out of his brain, so naturally he is the creator and he can create it again. But our code was built over several years, with tens of developers working on it. Let's say I hire an incompetent programmer by mistake, who steals our years of work and then sells it to a competitor. That could make us lose our cutting edge. I know this is rare, but such a threat has to be taken into consideration if you're in business. I'll make points of my comments so it's easy for everyone to communicate: 1. Why the NDA sucks: take this scenario; if anyone is capable of suggesting a solution to it, I will consider the NDA effective. Ok, here goes: We hire 2 external developers, and one of them sells our code as-is to someone else after a year. You are no longer in touch with any of the developers - how are you supposed to find out who ripped you off? The NDA does serve a purpose, but you can't rely completely on it. At least we cannot. 2. I did not mean to offend anyone while I was posting this question, even though unintentionally I did. But again, to people answering/commenting like 'I will never ever work with you' or that Men-in-black-gadget thingy: It's not about you; it's a thread about how feasible a given technical solution would be, and whether anyone in this community has worked in such an environment. 3. About 'trust': of course we won't hire anyone we do not trust. But is that it? Can't someone be deceitful at first? We all trusted a lot of politicians to run our country - did they not ever fail us? So, I'm saying 'trust' is a completely different layer of protection, like the NDA, and my question was not directed at it. My question is rather directed towards technical measures we can take to prevent such a thing from happening."} {"_id": "205162", "title": "How to Protect Intellectual property violations from one's own team", "text": "I have an Android app idea, and I want to make the app along with a few batch-mates at my college. (1) How do I make sure that these guys will not compromise the intellectual property involved? In the sense that they will not leak out confidential information, new theories, new ideas, or new feature descriptions, either freely on any medium or to some other company developing similar software. Also, these people will most likely have access to the entire source code (with comments) of the project; how do I make sure that in the future they will co-operate with me, (2) will not release any part of the code, (3) will not freely release alpha versions containing features meant only for premium users? Also, (4) is there a way to work effectively without granting them access to the entire source code? I am open to all kinds of answers: trust, technical, managerial - leadership, good friendship - relationships with colleagues. UPDATE: We all live in India."} {"_id": "10736", "title": "How to manage a Closed Source High-Risk Project?", "text": "I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app within a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party. The app development will take 4-6 months, perhaps more, and I may bring in additional employees after the app goes live. But how do I keep the source to myself? Are there techniques companies use to guard their source?
I foresee disabling USB drives and DVD writers on my development machines, but uploading data or attaching the code in an email would still be possible. My question is incomplete. But programmers who have been in my situation, please advise. How should I go about this? Building a team, maintaining code secrecy, etc. I am willing to sign a secrecy contract with the employees if needed, too. (Please add relevant tags) **Update** Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given this is a startup with almost no funding in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust, when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term."} {"_id": "102912", "title": "Iterators versus 'cursors' in Java", "text": "In Java, most people expect that each call to `Iterator.next()` will return a distinct object. This has led some of us to define an alternative model which we called a 'cursor'. (Note that others use the term 'cursor' to draw a different distinction.) The pattern of a cursor is: class SomeCursor { boolean hasNext(); void next(); type1 getItem1(); type2 getItem2(); } In other words, it is perfectly clear that the same object evolves at each step in the iteration. The problem on my hands right now is to also support situations, like web services, in which we absolutely do want distinct objects so that we can ship a collection of them around. It pains me to imagine pairs of classes (a cursor and a bean) that have to be maintained in parallel, and I'm casting about for alternatives. Here's one idea, and I wonder if anyone else out there has seen a better one. 1. You make an interface consisting of the 'get' methods for an object with all the items in it. 2. You define the cursor as implementing that interface and also hasNext & next. 3. You dynamically create the bean objects on the fly, using one of the major dynamic code-gen libs, to create a class with the obvious fields and get methods, plus set methods if needed to make something like JAX-B happy. edit: for those who are not following, I recommend a read of http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/analysis/TokenStream.html. Also compare with http://download.oracle.com/javase/6/docs/api/java/sql/ResultSet.html, which is an example of this 'cursor' approach. Note that in Lucene, an API that iterates over a sequence of objects allows the caller to specify whether to deliver the new data to a new object or the same old object each time. Yes, it is possible to define an `Iterator` that returns the same thing every time, but we have concerns that people will use this as a gun to shoot their toes off. Thus the preference for the JDBC-ish pattern."} {"_id": "255663", "title": "JSP vs templates", "text": "I did some research on using server-side templates vs JSP, but could not find much information on it.
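(A hedged sketch of a middle ground for the cursor question above, with no bytecode generation: one shared read interface, a mutating cursor, and a single hand-written immutable snapshot that the compiler keeps honest. Names are illustrative.)

```java
// Shared read interface for both the evolving cursor and the detached bean.
interface TokenView {
    String text();
    int position();
}

final class TokenCursor implements TokenView {
    private String text;
    private int position = -1;
    private final String[] source = { "foo", "bar" };

    boolean next() {
        if (position + 1 >= source.length) return false;
        position++;
        text = source[position];   // the same cursor object evolves in place
        return true;
    }

    public String text() { return text; }
    public int position() { return position; }

    // The only duplication left: one snapshot constructor per view interface.
    TokenBean snapshot() { return new TokenBean(text, position); }
}

final class TokenBean implements TokenView {
    private final String text;
    private final int position;
    TokenBean(String text, int position) { this.text = text; this.position = position; }
    public String text() { return text; }
    public int position() { return position; }
}
```

Callers who want distinct, shippable objects call `snapshot()` per step; callers who want the cheap in-place style ignore it.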
Can someone please provide information on the pros and cons of using server-side templates vs JSP, and vice versa."} {"_id": "34828", "title": "How to search for a tester?", "text": "As a freelance developer, I have tried a few times to find some testers to test my software/web applications. I try to find them because most customers do not intend to hire external testers and don't see how this can benefit them, so products are UI-untested and buggy. I tried lots of things: discussion boards for IT people, specific websites for people who are searching for a job. Every time, I clearly specify that I'm looking for product testers. I completely failed to find anybody for this job. I found instead two types of people: * Non-IT people who try to qualify as testers, but don't have enough skills for that, and don't really know what testing is and how to do it, * Programmers, who are skilled _as programmers_, but not as testers, and who mostly don't understand what testing is about either (or think it's the same thing as _code review_, or that it consists in _writing unit tests_). Of course, they submit general programmer resumes, where they describe their extensive experience in Assembler and C++, but don't say anything about anything related to the job of a tester. What am I doing wrong? Isn't it called a "tester"? Is there at least a tester job, different from a general programming job? Is there any precise requirement to demand of each candidate which can eliminate non-IT people and general programmers?"} {"_id": "207050", "title": "Is there a way to prevent the editing of HTML and CSS contents in a page using Firebug-like tools?", "text": "Is there a way to prevent the editing of HTML and CSS contents in a page using Firebug-like tools? I found that some users are editing some values in hidden fields and some content which is written between div or span tags, to gain some profit. They are doing this mostly with the help of tools like Firebug. Is there any way to identify such modifications? The problem here is that the values they are editing are generated when the page is compiled. The page is developed in PHP. The editing is done mostly in between the tags."} {"_id": "34820", "title": "What threading practice is good 90% of the time?", "text": "Since my SO thread was closed, I guess I can ask it here. What practice or practices are good 90% of the time when working with threading on multiple cores? Personally, all I have done is share immutable classes and pass (copy) data via a queue to the destination thread. Note: This is for research, and when I say 90% of the time I don't mean it is allowed to fail 10% of the time (that's ridiculous!); I mean 90% of the time it is a good solution, while the other 10% it is not so desirable due to implementation or efficiency reasons (or plainly because another technique fits the problem domain a lot better)."} {"_id": "34827", "title": "Recruiters intentionally present one good candidate for an available job", "text": "Maybe they do it without realizing it. The recruiter's goal is to fill the job as soon as possible. I even think they feel it is in their best interest that the candidate be qualified, so I'm not trying to knock recruiters. Aren't they better off presenting 3 candidates, of which one clearly stands out? The last thing they want from their client is a need to extend the interview process because they can't decide. If the client doesn't like any of them, you just bring on your next good candidate. This way they hedge their bet a little.
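(On the threading question above, a minimal sketch of that exact pattern - immutable messages plus a `BlockingQueue` - which is indeed the commonly recommended default:)

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Pipeline {
    // Immutable message: safe to hand to another thread with no locking.
    static final class Reading {
        final long timestamp;
        final double value;
        Reading(long timestamp, double value) { this.timestamp = timestamp; this.value = value; }
    }

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Reading> queue = new LinkedBlockingQueue<Reading>();

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Reading r = queue.take();   // blocks until a message arrives
                        System.out.println(r.timestamp + " -> " + r.value);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // exit quietly
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        queue.put(new Reading(System.currentTimeMillis(), 42.0));  // producer side
        Thread.sleep(100);  // give the consumer a moment before the demo exits
    }
}
```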
Any experience, insight or ever heard of a head-hunter admit this? Does it make sense? There has to be a reason why the choose such unqualified people. I've seen jobs posted that clearly state they want someone with a CS degree and the recruiter doesn't take it literally. I don't have a CS degree or Java experience and still they think I'm a possible fit. Edit: I'm asking this question because there are many posts that assume recruiters cannot recognize talent, but maybe it is not in their interest to recognize too many talented candidates and make it harder for clients to decide. Eliminate the confusion. There is one candidate. Make that person an offer. It's a win, win, win. Maybe this falls under the theory of Too Many Choices?"} {"_id": "251375", "title": "Combining 3rd party javascript libraries with my code, then using Closure Compiler", "text": "I'm using multiple third party javascript libraries in my website, and right now I'm keeping each third party library as a separate .js file, with its own ` **widgets.js** (initLog || (window.initLog = [])).push([new Date().getTime(), \"A log is kept during page load so performance can be analyzed and errors pinpointed\"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in \"store\" name: \"Weather\", // Widget name interval: 300000, // Refresh interval nicename: \"weather\", // HTML and JS safe widget name sizes: [\"tiny\", \"small\", \"medium\"], // Available widget sizes desc: \"Short widget description\", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: \"list\", nicename: \"location\", label: \"Location(s)\", placeholder: \"Enter a location and press Enter\" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: \"medium\", location: [\"San Francisco, CA\"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. } }; **script.js** (initLog || (window.initLog = [])).push([new Date().getTime(), \"These are also at the end of every file\"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... 
var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log(\"Starting page generation\"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log(\"Tabs rendered\"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == \"object\") { processStorage(); } ### Objectives of the restructure 1. **Memory usage:** Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. 2. **Code readability:** At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. 3. **Module interdependence:** Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. 4. **Fault tolerance:** There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. ### How I think I should do it * The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded => init). * Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. * Widget structure should be kept largely the same, but maybe they should also be split into their own files. * AFAIK you can't load all templates in a folder, therefore they need to stay inline. * Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. ### Question: **Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
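(A minimal sketch of the event-bus idea from "How I think I should do it" above - `Backbone.Events` and `_.extend` are real APIs; the event names and module bodies are illustrative:)

```javascript
// One shared bus; modules subscribe to events instead of calling each other.
var bus = _.extend({}, Backbone.Events);

// Each module registers what it cares about, in its own file, in any order.
bus.on("storage:loaded", function (settings) {
    bus.trigger("templates:init");
    bus.trigger("app:init", settings);
});

bus.on("app:init", function (settings) {
    // render tabs, defer the non-visible work, etc.
});

// The async storage fetch fires the chain when it completes:
bus.trigger("storage:loaded", { tabs: [] });
```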
Should I _not_ do any of the things I listed?** Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?"} {"_id": "241240", "title": "Architecture for dashboard showing aggregated stats", "text": "I'd like to know what are common architectural pattern for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information are computationally intensive and or have a complex business logic (e.g. responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and report on data from web application A. For writing I'm planning to use a restful api. E.g. create a new entity, update entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in redis. Some data should update more often, e.g every 10 minutes. I can think of 3 ways of doing this. 1. A scheduled task in app B that connects to an api of app A that provides some aggregated data. Then app B stores it in Redis and use that to visualise pages. Cons: it makes complex calculation within a web request, requires lot's of work e.g. api server and client, storing, etc., pros: business logic still lives in app A. 2. A scheduled task in app A that aggregates data in an non-web process and stores it directly in Redis to be accessed by app B. 3. A scheduled task in app A that aggregates data in a non-web process and uses an api in app B to save it. I'd like to know if there is a well known architectural solution to this type of problems and if not what are other pros/cons for the solution I've suggested?"} {"_id": "214115", "title": "Should cucumber step definitions in Java be static methods or instance methods?", "text": "We are new to using cucumber with selenium to write automated test suites. Our initial approach was to have one java class per feature file. Now we added instance methods in each class for corresponding step definitions. Now if we need to re-use a step definition in some other feature file we are facing problems as we can't re-use same annotation with same regex pattern with any other method of other class and neither can we use the existing step definition which is in some other class. These methods are sharing instance variables like the reference to the driver. Now to re-use an instance method as a step definition is also a problem because these methods are not re-usable outside the class and I can't put all step definitions inside one class. I looked at some samples of ruby, I found that they write some sort of blocks that don't have access to shared state. They just execute steps. So in Java should I always make static methods that will just execute steps one by one and share no state ?"} {"_id": "155758", "title": "Is committing/checking in code everyday a good practice?", "text": "I've been reading Martin Fowler's note on Continuous Integration and he lists as a must \"Everyone Commits To the Mainline Every Day\". I do not like to commit code unless the section I'm working on is complete and that in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. 
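(For the cucumber question above, a hedged sketch: in cucumber-jvm, step definitions are global across feature files regardless of which glue class they live in, so the usual fix is to share state through a plain object injected per scenario - e.g. with the cucumber-picocontainer module. Class names here are illustrative.)

```java
import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;

// Plain holder for per-scenario shared state (e.g. the WebDriver).
class SharedDriver {
    Object driver;   // simplified; would be WebDriver in a Selenium suite
}

class NavigationSteps {
    private final SharedDriver shared;
    public NavigationSteps(SharedDriver shared) { this.shared = shared; }  // injected

    @Given("^I am on the login page$")
    public void onLoginPage() { /* shared.driver navigates here */ }
}

class LoginSteps {
    private final SharedDriver shared;   // the *same* instance within a scenario
    public LoginSteps(SharedDriver shared) { this.shared = shared; }

    @When("^I log in as \"([^\"]*)\"$")
    public void logInAs(String user) { /* uses shared.driver */ }
}
```

With this shape, steps stay instance methods, any feature can reuse any step, and no statics are needed.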
I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. **Question:** is committing everyday such a good practice that I should change my workflow to accomodate it, or it is not that advisable? _**Edit:_** I guess I should have clarified that I meant \"commit\" in the CVS meaning of it (aka \"push\") since that is likely what Fowler would have meant in 2006 when he wrote this. ^ The order is more arbitrary and depends on the task, my point was to illustrate the time span and activities, not the exact sequence."} {"_id": "82785", "title": "Are there deals (free or low cost) to license Visual Studio for open-source developers?", "text": "I have several open-source projects, but I no longer have a license for Visual Studio. I heard that Resharper has some kind of deal for open-source project contributors. Is there something similar for Visual Studio? Thanks. * * * **EDIT** ps: The main project I'm concerned about is DotNetZip. I use an open-source text editor now to modify the code and project files; and the project is hosted at CodePlex, which I can get to via the \"free\" Visual Studio TFS command-line tools. My main problem is that I can no longer design or run the tests I've put into the code. I know there is Nunit and other alternative testing frameworks, but the code is already instrumented for MSTest (I guess that's what it is) and I'd like to keep the testing stable if possible. **EDIT2** I know about DreamSpark but I am not a student. Just a developer of an open- source project. **EDIT3** I guess a better question might be, _what's the lowest-cost option for me, an individual open-source developer, to license Visual Studio so that I can run tests on my project?_"} {"_id": "214119", "title": "Old programmer disappeared. About to hire another programmer. How do I approach this?", "text": "After spending over one year working on a social network project for me using WordPress and BuddyPress, my programmer has disappeared, even though he got paid every single week, for the whole period. Yes, he's not dead as I used an email tracker to confirm and see he opens my emails, but he doesn't respond. It seems he got another job. I wonder why he just couldn't say so. And I even paid him an advance salary for work he hasn't done. The problem is that I never asked for full documentation for most of the functions he coded in. And there were MANY functions for this 1+ year period, and some of them have bugs that he still didn't fix. Now it seems all confusing. What's the first thing I should do now? How do I proceed? I guess the first thing to do will be to get another programmer, but I want to start on the right foot by having all the current code documented so that any programmer can work on all the functions without issues. Is that the first thing I should do? If yes, how do I go about it? What's the standard type of documentation required for something like this? Can I get a programmer that will just do the documentation for all the codes and fix the bugs or is documentation not really important? Also, do you think getting another \"individual\" programmer is better or get a company that has programmers working for them, so that if the programmer assigned to my project disappears, another can replace him, without my involvement? 
I feel this is the approach I should have taken in the beginning."} {"_id": "241248", "title": "How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors", "text": "I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened: I had a filter class that set my application to ignore data that was not in some specified time windows. * The unit test, which seemed thorough to me, turned green. * Additionally, my integration tests also produced results as expected. * Production, however, did not work. * As a result of the first two bullets, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic for the filter wasn't correct for UTC dates. (I was using joda time `DateTime` objects). 1. **Where did my workflow break down?** * Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone? * Did I fail to thoroughly consider all cases at the unit test level? * Did I fail to insure the integration test was sufficiently similar to production? * Other? 2. **What changes can I make to my workflow to better prevent this sort of mistake in the future?** 3. **How can I more effectively debug a problem when there is an issue in production but not in testing?**"} {"_id": "65601", "title": "Is it smart to store application keys, ids, etc directly inside an application?", "text": "I have heard some say it isn't but they never suggest an alternative. Is this true? **UPDATE** Is it possible to store this external from application and have it called?"} {"_id": "191596", "title": "Is it OK to partially change a collection with PUT or DELETE?", "text": "I have a collection of products in a product group e.g.: product-groups/123/products 1. If I need to **add** to the collection, is it OK that I pass only **some** products with PUT? 2. If I need to delete **some** products from the collection, is it OK that I pass **filter data** (an array of ID's) with DELETE? What's the best way to implement the functionality in the spirit of ReST? Edit: the items are links to separate entities, basically ID's of products."} {"_id": "218287", "title": "How do I cleanly design a central render/animation loop?", "text": "I'm learning some graphics programming, and am in the midst of my first such project of any substance. But, I am really struggling at the moment with how to architect it cleanly. Let me explain. To display complicated graphics in my current language of choice (JavaScript -- have you heard of it?), you have to draw graphical content onto a `` element. And to do animation, you must clear the `` after every frame (unless you want previous graphics to remain). Thus, most canvas-related JavaScript demos I've seen have a function like this: function render() { clearCanvas(); // draw stuff here requestAnimationFrame(render); } `render`, as you may surmise, encapsulates the drawing of a single frame. What a single frame contains at a specific point in time, well... that is determined by the program state. So, in order for my program to do its thing, I just need to look at the state, and decide what to render. Right? Right. But that is more complicated than it seems. My program is called \"Critter Clicker\". 
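(One cheap regression guard for the timezone bug described above, as a hedged sketch - feed the filter the *same instant* expressed in two zones and require identical answers; `filter.accepts` is a stand-in for the real API:)

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class ZoneInvariantCheck {
    public static void main(String[] args) {
        DateTime utc = new DateTime(2014, 6, 1, 23, 30, DateTimeZone.UTC);
        DateTime chicago = utc.withZone(DateTimeZone.forID("America/Chicago"));

        // Same instant, different wall-clock representation:
        System.out.println(utc.getMillis() == chicago.getMillis());  // true

        // In a unit test: assertEquals(filter.accepts(utc), filter.accepts(chicago));
    }
}
```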
In my program, you see several cute critters bouncing around the screen. Clicking on one of them agitates it, making it bounce around even more. There is also a start screen, which says \"Click to start!\" prior to the critters being displayed. Here are a few of the objects I'm working with in my program: StartScreenView // represents the start screen CritterTubView // represents the area in which the critters live CritterList // a collection of all the critters Critter // a single critter model CritterView // view of a single critter Nothing too egregious with this, I think. Yet, when I set out to flesh out my `render` function, I get stuck, because everything I write seems utterly ugly and reminiscent of a certain popular Italian dish. Here are a couple of approaches I've attempted, with my internal thought process included, and unrelated bits excluded for clarity. ### Approach 1: \"It's conditions all the way down\" // \"I'll just write the program as I think it, one frame at a time.\" if (assetsLoaded) { if (userClickedToStart) { if (critterTubDisplayed) { if (crittersDisplayed) { forEach(crittersList, function(c) { if (c.wasClickedRecently) { c.getAgitated(); } }); } else { displayCritters(); } } else { displayCritterTub(); } } else { displayStartScreen(); } } That's a very simplified example. Yet even with only a fraction of all the rendering conditions visible, `render` is already starting to get out of hand. So, I dispense with that and try another idea: ### Approach 2: Under the Rug // \"Each view object shall be responsible for its own rendering.\" // \"I'll pass each object the program state, and each can render itself.\" startScreen.render(state); critterTub.render(state); critterList.render(state); In this setup, I've essentially just pushed those crazy nested conditions to a deeper level in the code, hiding them from view. In other words, `startScreen.render` would check `state` to see if it actually needed to be drawn or not, and take the correct action. But this seems more like it only solves a code-aesthetic problem. The third and final approach I'm considering that I'll share is the idea that I could invent my own \"wheel\" to take care of this. I'm envisioning a function that takes a data structure that defines what should happen at any given point in the `render` call -- revealing the conditions and dependencies as a kind of tree. ### Approach 3: Mad Scientist renderTree({ phases: ['startScreen', 'critterTub', 'endCredits'], dependencies: { startScreen: ['assetsLoaded'], critterTub: ['startScreenClicked'], critterList: ['critterTubDisplayed'] // etc. }, exclusions: { startScreen: ['startScreenClicked'], // etc. } }); That seems kind of cool. I'm not exactly sure how it would actually work, but I can see it being a rather nifty way to express things, especially if I flex some of JavaScript's events. In any case, I'm a little bit stumped because I don't see an obvious way to do this. If you couldn't tell, I'm coming to this from the web development world, and finding that doing animation is a bit more exotic than arranging an MVC application for handling simple requests -> responses. * * * What is the clean, established solution to this common-I-would-think problem?"} {"_id": "225300", "title": "What changes can I make to my IDE to minimize the effect of my dyslexia?", "text": "I program and I am dyslexic. My vision is excellent. I do poorly at processing symbols and am a visual thinker.
When I code, I'm slower than normal people because I am unpredictably unaware of the errors I make. I am learning Python, and text-only development environments cause me a lot of visual stress; I am using Wingware, which is somewhat helpful, but I can't complete assignments in the time given. Can you suggest an accommodation that would help me? What adaptations would be helpful to me? Is there any way I can automatically find, highlight and fix these kinds of errors? When proofreading, I see what I expect to see, or something familiar. I don't notice typos, I skip lines, etc., and the bugs turn up in testing. Even when copying & pasting I can miss lines and cause errors. Blocks of text from margin to margin give me headaches, as do some color combinations. I do not process text as symbols, but rather as objects that can be rotated and transposed, so that the digits in a number move to different places; I may perceive \"123\" as \"132\", and the letters \"pddq\" all look the same to me. I think of these as tricky - the same shape rotated and reflected."} {"_id": "119700", "title": "Is the term \"web portal\" obsolete?", "text": "My boss uses the term \"portal\" for the project I work on all the time. To me, the word makes me think of Yahoo in the late 90s. Does the word \"portal\" have old-school connotations, or is it just me? Do you think it's OK to use it, or will it drag our client's perception of the product down into the Middle Ages?"} {"_id": "119703", "title": "Interpolation search vs Binary Search", "text": "When should I use interpolation search instead of binary search? For example, I have a sorted dataset; in what situations would I use binary search to find an item in this dataset, and in which situations should I use interpolation search? What properties of the dataset would be the determining factor?"} {"_id": "119470", "title": "Differences between programming in school vs programming in industry?", "text": "A lot of students, when they graduate and get their first job, feel like they don't really know how to program even though they may have been good programmers in college. What are some of the differences between programming in an academic setting and programming in the 'real world'?"} {"_id": "120487", "title": "HTTP events? Is there a standard / precedent for this?", "text": "Our architecture is HTTP servers (custom written) whereby custom clients send an HTTP request for some information and the information is returned, just as HTTP works. But we need a special custom 'extension', which is a request that is a subscription for receiving asynchronous 'events' on a resource. For example, the client sends an HTTP request subscribing for events on some entity. As the 'entity' generates events, they are passed to the HTTP server, and the HTTP server must then look up subscriptions for that entity and send the event message to all subscribed clients. Hope that makes sense. So my questions are: * Has this been done before, or is there a standard I should be looking at? * If no standard, any suggestions on how to implement it? * How does an HTTP server send an unsolicited 'message' to a client?"} {"_id": "81476", "title": "To implement registration page with Vaadin or not?", "text": "This is a tactical implementation question about the usage of Vaadin in some part of my application. Vaadin is a great framework for logging in users and implementing sophisticated web applications with many pages. However, I think it is not very well suited to designing pages for registering new users for my application. Am I right or am I wrong?
It seems to me that a simple HTML/CSS/JavaScript login + email registration + confirmation email with a confirmation link cannot be implemented easily with Vaadin. It seems like Vaadin would be overkill. Do you agree? Or am I missing something? I am looking for feedback from experienced Vaadin users."} {"_id": "252710", "title": "How to parse different number types with LALR(1)", "text": "Consider an LALR(1) parser for a file format that allows integer numbers and floating point numbers. As usual, something like `42` shall be a valid integer and a valid float (with some automagic conversion in the background). There might be parsing rules where a floating point number or an integer number is expected, and other rules where _only_ an integer number is expected, e.g.: foo1 : bar FLOAT buzz | bar INT buzz ; foo2 : some INT other stuff ; Now consider something like foo3 : bar FLOAT xyz FLOAT abc FLOAT buzz ; but at each position in this rule, instead of `FLOAT`, `INT` shall also be allowed. * Turning this rule into 8 rules (one rule for each combination of `FLOAT` and `INT`) isn’t an option. (Consider a rule having 4 or 5 numbers...) * Using a rule like float_or_int : FLOAT | INT; won’t help, because in general, this rule will reduce all `INT` to `float_or_int`, and rules like `foo2` can no longer be parsed. (Because with a grammar large enough, the one-token lookahead cannot avoid the shift-reduce conflicts resulting from this rule.) * When the lexer sees a number without a decimal point, it cannot decide whether the parser currently expects an int or a float-or-int. How can this be handled in an elegant way?"} {"_id": "60383", "title": "How can we integrate use of a foreign language into an internship?", "text": "We have a few interns here who have orders from college to integrate speaking, reading, and writing French into their internship. All of them are working on developing their own web application, and I'm wondering how we can integrate French into their projects. Here are some things I've come up with: * Translate the whole project front-end to French * Write French documentation for the project * Promote the application in a French community * Speak French (exclusively?) to our project managers Any other ideas or suggestions? Does anyone else have experience with this?"} {"_id": "94275", "title": "How do I make a license an open source license?", "text": "I have been granted permission to reduce the rules of our custom license used in the Advanced Electron Forum project. I am not sure exactly what is needed to make the license above an open source license for the product, but I think the third term is the one that should be removed/changed. Best regards."} {"_id": "66408", "title": "What graphics engine is used in Photoshop", "text": "I am wondering what the default graphics engine used in Photoshop is. It's a great tool, and I don't know how they made it. I mean, if I wanted to create a simple tool like it, I would use MFC/GDI+. So, what is at the core that makes Photoshop such a great tool?"} {"_id": "201263", "title": "When should I use or not BooleanUtils.isTrue(...) and BooleanUtils.isFalse(...)?", "text": "About this function: `org.apache.commons.lang3.BooleanUtils.isFalse(Boolean bool)` and the similar `isTrue`: my co-worker (less experienced) uses them for **every** boolean in the code. I am trying to convince him that it is not a good practice, because it makes the code confusing.
From my point of view, it should be used ONLY if the boolean to be tested is of the non-primitive `Boolean` type and can be `null`. Actually, even this I think is unnecessary, because the implementation of this function is simply `Boolean.TRUE.equals(bool)` or `Boolean.FALSE.equals(bool)`. Anyway, I think it is totally crazy to do something like: boolean isReady = true; if (BooleanUtils.isTrue(isReady)) { // ... } when you could simply do if (isReady) { // ... } or `(!isReady)` for the opposite. His only argument for using it is \"it is easy to read\". I just can't accept this argument. Am I wrong? What arguments can I use to convince him that no useless code is better than useless code in this case? Thank you guys."} {"_id": "113165", "title": "NoSQL Modify operations performance", "text": "I am working on an application which in the near future (hopefully) will have to process tens or hundreds of thousands of items (an item is currently one row in a relational database table) per second. If I understood it right, NoSQL databases are the superheroes of data retrieval thanks to map-reduce magic. But will NoSQL databases provide extra performance in comparison to relational ones in my case?"} {"_id": "201261", "title": "What to do if the task is too difficult? Internship/Summer Training", "text": "I am conducting my summer training, or internship, for 2 months in an IT company as a sysadmin and web developer. This is my first week. My mentor assigned me to work on an existing Flash/AS3 project (alone) to improve it and add new features. (I think this wasn't part of my training plan.) The problem is that I have never worked with AS3, so I took my time this week to learn as much as I can about it. Today, I saw the source code of the project: it was outsourced, with 3500 lines of code plus no comments or any documentation included. I am panicking. The company has no one with AS3/Flash experience, nor does my mentor have any. I'm not sure if I can accomplish the task with their requirements, as it definitely needs a developer experienced in this field, plus I have only 2 months - so it's short. Hence, I'm asking here and looking for advice and suggestions on what to do next. Should I ask him and explain? Or should I take the risk and (try to) work on it? What if I fail? Thanks in advance! EDIT: I'm not sure if I have to accomplish the task in 2 months; I'm even less sure whether I will work on other projects. I haven't really asked. Since I'm not living in the US, I'm not sure if \"intern\" is the proper word to use. Every university student is required to conduct his/her summer training for 2 months in a company. At the end, either he/she passes the course or fails."} {"_id": "11400", "title": "Are captchas worth the decreased usability?", "text": "When is it useful to use a captcha? When is it an unnecessary hindrance? Is a captcha just a quick fix for the lazy/inexperienced programmer, or are they really the best way to prevent spam and bots?"} {"_id": "96698", "title": "Is there a clean alternative to WTF that communicates to coders?", "text": "> **Possible Duplicate:** > Another term for common code smell Programmers know immediately what I mean if I talk of finding a 'WTF' in the code base. In particular, it is a noun that refers to an extremely wrongheaded approach/implementation, and not typically an exclamation (though it can be used as an exclamation too). I don't want to use expressions that thinly veil obscenity, but I can't think of any substitute that communicates this idea quickly and succinctly.
Any ideas?"} {"_id": "19238", "title": "How to calculate the cost of a course I will give?", "text": "I am going to give a javascript course to some developers in a local company, I know all the subjects I will teach and estimate duration, also note that the course will be for experience developers so it will contain a lot of advanced subjects. I don't know how to calculate my cost, anyone there have experience and can advice me?"} {"_id": "240925", "title": "Separate Action from Assertion in Unit Tests", "text": "## Setup Many years ago I took to a style of unit testing that I have come to like a lot. In short, it uses a base class to separate out the Arrangement, Action and Assertion of the test into separate method calls. You do this by defining method calls in [Setup]/[TestInitialize] that will be called before each test run. [Setup] public void Setup() { before_each(); //arrangement because(); //action } This base class usually includes the [TearDown] call as well for when you are using this setup for Integration tests. [TearDown] public void Cleanup() { after_each(); } This often breaks out into a structure where the test classes inherit from a series of Given classes that put together the setup (i.e. GivenFoo : GivenBar : WhenDoingBazz) with the Assertions being one line tests with a descriptive name of what they are covering [Test] public void ThenBuzzSouldBeTrue() { Assert.IsTrue(result.Buzz); } ## The Problem There are very few tests that wrap around a single action so you end up with lots of classes so recently I have taken to defining the action in a series of methods within the test class itself: [Test] public void ThenBuzzSouldBeTrue() { because_an_action_was_taken(); Assert.IsTrue(result.Buzz); } private void because_an_action_was_taken() { //perform action here } This results in several \"action\" methods within the test class but allows grouping of similar tests (i.e. class == WhenTestingDifferentWaysToSetBuzz) ## The Question Does someone else have a better way of separating out the three 'A's of testing? Readability of tests is important to me so I would prefer that, when a test fails, that the very naming structure of the tests communicate what has failed. If someone can read the Inheritance structure of the tests and have a good idea why the test might be failing then I feel it adds a lot of value to the tests (i.e. GivenClient : GivenUser : WhenModifyingUserPermissions : ThenReadAccessShouldBeTrue). I am aware of Acceptance Testing but this is more on a Unit (or series of units) level with boundary layers mocked. _EDIT_ : My question is asking if there is an event or other method for executing a block of code before individual tests (something that could be applied to specific sets of tests without it being applied to all tests within a class like [Setup] currently does. Barring the existence of this event, which I am fairly certain doesn't exist, is there another method for accomplishing the same thing? Using [Setup] for every case presents a problem either way you go. Something like [Action(\"Category\")] (a setup method that applied to specific tests within the class) would be nice but I can't find any way of doing this."} {"_id": "215242", "title": "Implementing set of processes in a stored procedure or through the code?", "text": "I want to know what's the suitable method to implement the following case (best practice). If i make a set of processes like this: 1. select data from set of DB tables. 2. loop on the selected result. 3. 
Make some checks on each iteration. 4. Insert the results into another table. * * * Should I implement the previous steps in a stored procedure, or in a transaction through my code (ASP.NET)? I am concerned about the performance, security and reliability issues."} {"_id": "90695", "title": "What is the best way to handle similar functionality in separate user stories / Product Backlog Items?", "text": "I am using the TFS 2010 Scrum template. So, let's say I have these user stories: As a user I want the ability to add contacts directly from the Partner page. As a user I want the ability to add contacts directly from the Project page. Design-wise, the contact forms should be the same, and so should the functionality. So, if a user clicks the add contact button from either page, a pop-up window will appear allowing the user to add a new contact. Should both those stories share similar tasks, since the development and design are similar? Should I break up the features into separate stories? Update: So, is it more acceptable to write this: As a user I want to add contacts from any area in the site so that I don't have to keep going back to a specific contact entry page. Then I could specify the features we need for that story."} {"_id": "164050", "title": "How would you manage development between many Staging branches?", "text": "We have a Staging branch. Then we came out with a Beta branch for users to move to, whenever they wanted, from the old Production branch to the new features. Our plan seemed simple: we test on Staging; when items get QA'd, they get cherry-picked and deployed to Beta. Here's the problem! A bug will discreetly make its way onto Beta, and since Beta is a production environment, it needs fixes _fast_ and _accurate_. But not all the QA got done. Enter Git hell... So I find a problem on Beta. No sweat, it's already been fixed on Staging, but when I go to cherry-pick the item over, Beta barely has any of the other prerequisite code to implement this small change. Now Beta has a little here and a little there, and I can't imagine its code base being as stable as Staging's. What's more, I'm dealing with some insane Git conflicts, and having to monkey-patch a bunch of things to make up for where Beta hasn't caught up with Staging. Can someone, in polite or non-polite terms, tell me what we're doing wrong here as far as assembling this project goes? Any awesome recommendations, workarounds or alternatives to the system we came up with?"} {"_id": "152902", "title": "What guidelines should be met for something to be called a webapp", "text": "In my mind a webapp is any website that has some complicated features and that's served from the web, but I know there are people who are a lot smarter than me here, and they'll point out other features that I'm not noticing. For example: * If it doesn't have a login system, can I still call it a webapp? (because most webapps end up being user-based). * Should I think of a CMS as a webapp? And how about the sites that are built with it? Probably they're not webapps, right? * Is Gmail a webapp? I think so. * Is Twitter a webapp? Yes, though the functionality is rather simple. So what conditions need to be met to call a website a webapp? P.S. StackExchange's tag system defines web-applications as > applications that are accessed over the \"web\", which can mean the Internet, or an internal network (an intranet)."} {"_id": "251237", "title": "Should each method have a separate JUnit test class?", "text": "I am writing JUnit unit tests for my classes.
Is it better to have a separate test class for each method, or just one test class for every actual class?"} {"_id": "251236", "title": "Need help understanding a recursion example in Python", "text": "Python is my first programming language, and I'm learning it from \"How to Think Like a Computer Scientist\". In Chapter 5 the author gives the following example of recursion: def factorial(n): if n == 0: return 1 else: recurse = factorial(n-1) result = n * recurse return result I understand that if n = 3, then the function will execute the second branch. But what I don't understand is what happens when the function enters the second branch. Can someone explain it to me?"} {"_id": "118077", "title": "Data Scraping - One application or multiple?", "text": "I have 30+ sources of data I scrape daily in various formats (xml, html, csv). Over the last three years I've built 20 or so C# console applications that go out, download the data and re-format it into a database. But I'm curious what other people are doing for this type of task. Are people building one tool that has a lot of variables and inputs, or are people designing 20+ programs to scrape and parse this data? Everything is hard-coded into each console app and run through the Windows Task Scheduler. Added a couple additional thoughts/details: * Of the 30 sources, they all have unique properties; all are uploaded into individual MySQL tables and all have varying frequencies. For example, one data source is hit once a minute, another on 5-minute intervals. The majority are once an hour or once a day. Currently I download the formats (xml, csv, html), parse them into a formatted csv and put them into staging folders. Within each folder, I run an application that reads a config file specific to the folder. When a new csv is added to the folder, the application then uploads the data into the specific MySQL tables designated in the config file. I'm wondering if it is worth re-building all this into a larger, more complex program that is more capable of dynamically adding content+scrapes and adjusting to format changes. Looking for outside thoughts."} {"_id": "148350", "title": "What algorithm(s) can be used to achieve reasonably good next word prediction?", "text": "What is a good way of implementing \"next-word prediction\"? For example, the user types \"I am\" and the system suggests \"a\" and \"not\" (or possibly others) as the next word. I am aware of a method that uses Markov chains and some training text (obviously) to more or less achieve this. But I read somewhere that this method is very restrictive and applies to very simple cases. I understand the basics of neural networks and genetic algorithms (though I have never used them in a serious project) and maybe they could be of some help. I wonder if there are any algorithms that, given appropriate training text (e.g., newspaper articles, and the user's own typing), can come up with reasonably appropriate suggestions for the next word. If not (links to) algorithms, then general high-level methods to attack this problem are welcome."} {"_id": "74258", "title": "How do other people keep code current across multiple machines? (something other than Dropbox)", "text": "I am a hardware/infrastructure guy going back to school to get my degree (FINALLY...) and am taking some low-level intro programming classes. On top of the Java I am working on at school, I am also tinkering around in some other languages. I work in a shop with some very talented developers and am attempting to learn as much as I can from them.
Basically, I'm learning a bunch on my own to write code that will make my infrastructure job easier, so I have more time to learn. With that said, I spend 90% of my time using Eclipse, 8% in BlueJ (blah, I know) and 2% in VS Express. Because I have 5 different machines between work and home, I am having big issues keeping my code for various projects up to date when working on different machines. I tried using Dropbox, setting the local folder on all my machines to my workspace in Eclipse, and that worked great for a week. Then I found out that the company was getting on users about Internet usage, because their Dropbox was keeping a live connection to the web 100% of the time that it was running. I have tried the old-school thumb drive approach, and I keep forgetting to save the most recent version to the thumb drive. So, basically, what I am looking for is advice on how other people keep their code up to date. Are there any utilities similar to Dropbox that only attempt to sync when stuff is changed? Are there any other suggestions for an online repository that I can do this with, without logging 50+ hours a week of time on our firewall report?"} {"_id": "49003", "title": "Which online/hosted bug tracking tool do you use for your own work and projects?", "text": "I've accumulated a lot of side projects over the years, which I slowly improve on over time. Whenever I return to one, I take some time reading over text files that include design, recent bugs, next features, etc... that I should be working on - it's not pretty. I'm looking to switch to something more formal. Ideally, this would be a full-featured, online bug tracking system which allows for free or nearly free bug tracking for my own projects. Also, ideally this would be doable in a private manner - I don't really want everyone to see my side projects and what a mess I've made of some of them."} {"_id": "80789", "title": "Distribution Licensing for Windows XP Junction tool", "text": "My application needs to create a directory symbolic link in Windows XP to get around the file path length limitation of ~256 characters. Windows XP does not have this functionality built in. I had chosen Junction from Sysinternals as a candidate to solve this issue. However, now legal is telling me that I cannot distribute Junction to my users; they would have to download it themselves. My problem is that my users are not connected to the internet and probably have too many bureaucratic hurdles to go through to get approval to download and transfer this software. 1. Has anyone found an exception and legally distributed Sysinternals software like Junction? 2. What free/open/libre alternative do I have for creating directory symbolic links in Windows XP?"} {"_id": "80780", "title": "How should ability be distributed through teams?", "text": "After reading this, I saw that there seems to be a lot of disagreement over how agile teams should be structured within a group of developers with varying ability (aka almost all teams). Should all the best developers be put on their own teams and be given the highest-priority work? This will pretty much ensure that the most important tasks get done. At the same time, you are then left with the \"less than perfect\" teams elsewhere racking up technical debt, even if it is only on low-priority tasks. On the other hand, evenly distributed teams could have the benefit of making your lagging developers a bit better, but have the potential to demotivate your heaviest hitters.
Also, if you mix a bunch of good design patterns in with a bunch of terrible anti-patterns, you can really end up with something that might as well be all anti-patterns. All teams have their strong coders and their not-so-strong coders, so how should this be dealt with?"} {"_id": "153365", "title": "How to deal with colleagues who refuse to follow practices?", "text": "I was discussing with another colleague what we should use when one DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, one of my colleagues says: \"You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance jobs.\" I know it will work, but obviously it is not good practice to put a non-PK unique column in as a \"foreign key\" just to gain a bit of ease in writing SQL during support, because we can have fewer table joins. Though I mentioned that his approach is conceptually incorrect and causes problems practically too, he seems to rather trade off correctness in the data model in exchange for ease of ad-hoc support jobs. And he said: \"I know it is not good practice, but good practice is not a golden rule.\" Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but doubtless this is not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to miscellaneous development directions and conventions."} {"_id": "153366", "title": "What is So Unique About Node.js?", "text": "Recently there has been a lot of praise for Node.js. I am not a developer who has had much exposure to network applications. From my bare understanding of Node.js, its strength is: we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, for example, I can create only one thread using NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections using that thread, and provide an event-based architecture to implement the data handling logic (it shouldn't be that difficult by providing some callbacks, etc.). Given that the JVM is an even more mature VM than V8 (I expect it to run faster too), and an event-based handling architecture seems to be something not difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?"} {"_id": "45586", "title": "Integration of advertisement into WP7 apps: Any experiences?", "text": "Have any of you guys integrated, or are you thinking about integrating, AdMob, MS pubCenter, AdWhirl or any other advertising provider into your apps? Please share your experience... 1. Who pays best? 2. Any regulations? 3. Any problems with getting the app approved by Microsoft Market? 4. If you chose AdMob: Which plugin do you suggest? This or that or another one? 5. Any experiences in Europe or Germany? (MS pubCenter is US-only so far) **Edit:** I think we discussed enough about acceptance of mobile advertisement. **Let's focus now on the questions I asked.** **Edit 2:** Hmm, maybe Stack Overflow is a better place to discuss questions 2-5.
Could somebody migrate this one?"} {"_id": "153369", "title": "Is there any benefit of using one language over the other for competitive programming websites like SPOJ or TopCoder?", "text": "I know a bit of C++, Java and Ruby. I want to become proficient in one of these now, and I don't know how to pick. I was wondering if picking one over the others would be advantageous in any way for competitive programming."} {"_id": "148686", "title": "iOS object instance accessible from three separate classes, or load 3 nib files with one class?", "text": "I've got three nib files in my project, each of which is driven by its own class (.h and .m files). Each nib has a stylized design with a full-screen background image and a few overlay images acting as buttons. Each button has its own button-click sound, and most of the buttons on each nib file will play a different video per button. To play the videos, each of my three .m files has a couple of methods similar to this: -(IBAction) videoButtonClicked: (id)sender{ if([sender tag] == 1) { AudioServicesPlaySystemSound (buttonClick1); [self loadMoviePlayer:[[NSBundle mainBundle] pathForResource:@\"Video 1\" ofType:@\"mov\"]]; } else if ([sender tag] == 2) { AudioServicesPlaySystemSound (buttonClick2); [self loadMoviePlayer:[[NSBundle mainBundle] pathForResource:@\"Video 2\" ofType:@\"mov\"]]; } else if ([sender tag] == 3) { AudioServicesPlaySystemSound (buttonClick3); [self loadMoviePlayer:[[NSBundle mainBundle] pathForResource:@\"Video 3\" ofType:@\"mov\"]]; } else if ([sender tag] == 4) { AudioServicesPlaySystemSound (buttonClick4); [self loadMoviePlayer:[[NSBundle mainBundle] pathForResource:@\"Video 4\" ofType:@\"mov\"]]; } } - (void)loadMoviePlayer:(NSString*) movieURL { // Play movie from the bundle NSURL *url = [NSURL fileURLWithPath:movieURL]; player = [[MPMoviePlayerViewController alloc] initWithContentURL:url]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(movieFinishedCallback:) name:MPMoviePlayerPlaybackDidFinishNotification object:player]; [self presentMoviePlayerViewControllerAnimated:player]; [player release]; } My code works now because the videos are embedded in my resources folder. However, together, the videos are too big to be bundled with the app and downloaded from the App Store. I'll therefore have the app download them from a server, so I want to create one video-management class to keep track of which videos have been downloaded. I think I should have one instance of the object, but then how can my three controllers talk to the one instance? I am pretty sure that if I have one class that loads the correct .nib file, I can have that one instance talk to my one video-management instance. But would I be unnecessarily wasting memory by loading all three nibs at once?"} {"_id": "99251", "title": "How prototypal inheritance is practically different from classical inheritance?", "text": "Inheritance, Polymorphism, and Encapsulation are the three most distinct, important features of OOP, and of them, inheritance has high usage statistics these days. I'm learning JavaScript, and here, they all say that it has prototypal inheritance, and people everywhere say that it's something **far different** from classical inheritance. However, I can't understand what their difference is from the point of practical usage. In other words, when you define a base class (prototype) and then derive some subclasses from it, you both have access to the functionality of your base class, and you can augment functions on the derived classes.
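For instance, here is a minimal ES5-style sketch of the practical usage I have in mind (the `Animal`/`Dog` names are just made-up examples of mine, not from any book):

function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + ' makes a sound'; };
function Dog(name) { Animal.call(this, name); } // reuse the 'base class' initializer
Dog.prototype = Object.create(Animal.prototype); // the prototype link plays the role of 'extends'
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () { return this.name + ' barks'; }; // augment/override on the derived type
var d = new Dog('Rex');
d.speak(); // 'Rex barks' - to the caller this is indistinguishable from classical inheritance

As far as I can tell, the derived type gets reuse of the base behaviour plus the ability to override it, exactly as a classical subclass would.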
If we consider what I said to be the intended result of inheritance, then why should we care whether we're using the prototypal or the classical version? To clarify further, I see no difference in the usefulness and usage patterns of prototypal and classical inheritance. This results in me having no interest in learning why they are different, as they both result in the same thing, OOAD. How, practically (not theoretically), is prototypal inheritance different from classical inheritance?"} {"_id": "198602", "title": "What is a good pattern for multi language in MongoDb?", "text": "Say I want to build a website that supports multiple languages and uses a MongoDB datastore. I wonder what a good approach is for having multiple versions of the body and title of an article, one for each language. Should I keep all versions in the same article document? Like a collection of key-value pairs of language code and text in the body field of the article. This way I'd get no more than 1 request for viewing articles, but I'd also get more data than I might need, so the response is larger. Is there a better solution? How and why?"} {"_id": "161390", "title": "Does Clojure have continuations?", "text": "I started programming with Python. When using Python, concepts like coroutines and closures were really confusing to me. Now I think I know them at some superficial level, but I want to get \"enlightenment\", so I chose to learn Clojure, a dialect of Lisp. I bought Stuart Halloway's book and it's good. But when I looked at the index of that book, there are no words like coroutine or continuation. I googled, but there's nothing either. So, my question is: does Clojure have continuations or coroutines (which greenlet or Stackless provide) to do jobs like the ping-pong below without stack overflow? Python style (though standard Python does not provide the full feature set of this symmetric coroutine): def ping(): while 1: print \"ping\" # switch to pong def pong(): while 1: # switch to ping print \"pong\" If not, then as far as learning the functional concepts goes, is it better to learn Scheme instead?"} {"_id": "80434", "title": "PHP and SQL, together or individually?", "text": "I am done with HTML and CSS. Now I want to know whether I should study PHP and SQL together or individually. Basically, I have this book, `O'Reilly Head First PHP and MySQL`. Should I take this book or start with a book for PHP only? And is there any difference?"} {"_id": "125966", "title": "Algorithm to determine fastest route?", "text": "Let's say we're going from 1 to 5. The shortest route will be 1-4-3-5 (total: 60 km). ![Graph](http://i.stack.imgur.com/w0SnS.png) We can use Dijkstra's algorithm to do that. Now the problem is, the shortest route is not always the fastest one, because of traffic jams or other factors. For example: * 1-2 is known to have frequent traffic jams, so it should be avoided. * Suddenly a car accident happens along 4-3, so it should be avoided too. * Etc... So probably we can speed along the route 1-4-5, because of no traffic jams/accidents, so we will arrive at 5 faster. Well, that's the general idea, and I haven't thought about more details yet. Is there any algorithm to solve this problem?"} {"_id": "198605", "title": "Researching the growth of functional languages", "text": "In recent years it seems that functional programming languages have had a real popularity increase.
Languages like Erlang, Haskell, Scala, F# and Clojure seem to be pretty well known, and many popular programming sites (such as Stack Overflow) seem to be full of questions and discussions on them. The question is how you would go about researching the growth of these languages in real terms. Of course I know about the TIOBE index, but is that really the best source to measure the popularity of a given programming language? Hopefully this question will be allowed, since it is something that many programmers look towards to decide whether a given technology is likely to last for a long time or not (although this is not my personal goal)."} {"_id": "80439", "title": "Why is C++ still preferred to build heavy GUI apps over the latest dynamic languages?", "text": "I see that most of the apps that include heavy GUI content are usually developed in C++. Most games/browsers are coded in C++. Can't we just develop better GUI apps with the latest dynamic languages? I know Java wouldn't be a great choice. But what about languages like Python, which is natively built on C? Aren't the latest languages supposed to be better than their ancestors? Why do we still have to prefer the age-old C++ over the latest languages? And I would also like to know what it is in C++ that is responsible for the better speed of GUI processing. On the other hand, what is it that the newer languages lack?"} {"_id": "12174", "title": "How do you pronounce technical English words around non-natives?", "text": "I'm working as a developer back in my home country, where English is not a first/second language. However, I consider myself a native speaker since most of my schooling has been in an American environment. Every day I have to talk to my colleagues about technical stuff, and English words obviously come up all the time. My colleagues pronounce English words very differently from how they're supposed to be pronounced (well, most of the time), and I feel weird trying to correct them or speaking the 'right way' because it makes it seem like I'm a dick and often takes the focus away from the real subject. How do you guys deal with this kind of 'problem'?"} {"_id": "232049", "title": "Merging branches where the same code was worked on", "text": "If there are two unrelated tasks, except that the changes are in the same project and end up touching at least one of the same classes/methods, how do I manage that? Two different developers could each pick up one task. Each dev will create a new branch off of Integration, work in it, then need to merge. When merging two branches, most of the time \"Take Source Branch\" is proper for up-merging, and \"Take Target Branch\" is proper for back-merging. But when back-merging, or dual-merging up, there could be conflicts which overwrite the code of the previously merged project if they both changed the same class/method. One answer is: \"So only let one dev work on both tasks sequentially.\" Except that we're doing Scrum, and any dev can pick up any task. We don't want to start assigning. Another answer is: \"Have the devs get together and hash out how to manually merge some stuff.\" But that's work! Lol. Is there a method I'm missing?"} {"_id": "12170", "title": "GPLed Library (EXT JS) Licensing Issue", "text": "Good day everyone, I am really out of options for interpreting the GPL for Ext JS for my work/idea/personal project. I can see that this is an active forum, so I really hope I can get closure on this.
First let me explain my project. I am creating a website, which is like a web portal intended for end users. This web app uses the Ext JS library, which is GPL'ed. My understanding of the GPL is that any application that uses a GPL-licensed library should be released as open source under a GPL-compatible license - that is, when I or my application is a derived work, or I have modified the library and released or distributed it. But Ext JS has this dual license, which typically gives me the rights to do what I want without giving out my code. My application is intended for end users only; it is not a derived form of work, not a library, not a development tool, and I will not distribute it because it is on the web. Along with this, the libraries I will use will remain untouched as they are, and I will keep a list of the libraries I use and their respective licenses to credit them. Given this, can I keep my application closed-source and not violate the GPL? Is it OK for me to use a GPL'ed library so long as the above is met? The question, in short, is: can I use a GPL'ed library, not release my code under an open-source-compatible license, and still NOT violate the GPL's terms? Thanks in advance, Nick Ace"} {"_id": "12171", "title": "How does fair use apply to code snippets?", "text": "Is there a size below which code you don't otherwise have a license for can be copied under fair use? For example, what if I copy a snippet that is (normally) 3 lines of code? Is that fair use? If it is fair use, what length is required before I need a license?"} {"_id": "46584", "title": "What should a Python developer know while learning Ruby?", "text": "I have been a Python programmer for about 18 months, consisting of one internship and a few side projects, and I consider myself pretty comfortable in the language. However, there seems to be a lot of attention on Ruby in the programming field, but not a lot on Python anymore. So in learning Ruby, are there going to be Pythonic things that are just bad practices in Ruby? What should I watch out for, and what should I avoid?"} {"_id": "46582", "title": "Career advice: PhD in theory of programming languages", "text": "I'm very interested in the theory of programming languages and am going to apply for a PhD in this topic, but I want to know more about careers after graduate education. Besides being a professor, what occupations can I get?"} {"_id": "46583", "title": "Advancing Code Review and Unit Testing Practice", "text": "As a team lead managing a group of developers with no experience (and who see no need) in code review and unit testing, how can you advance code review and unit testing practice? How are you going to create a way for code review and unit testing to naturally fit into the developers' flow? One source of resistance in these two areas is that \"we are always tight on deadlines, so there is no time for code review and unit testing\". Another source of resistance for code review is that we currently don't know how to do it. Should we review the code upon every check-in, or review the code on a specified date?"} {"_id": "162715", "title": "Arguments for development environment being the same as production", "text": "Intuitively it seems appropriate that the development environment (and all test environments) be as close to the production build as possible.
Are there any documented arguments in support of this intuition?"} {"_id": "196738", "title": "How does a cross-programming-language compiler or translator work?", "text": "These days there are more and more cross-programming-language compilers (especially from some 'X' language to JavaScript). I wonder how these are developed. What are the general steps to take care of in writing the algorithms if I were to develop one? Do I need to be completely thorough in lexical analysis? As far as my knowledge is concerned, they should follow the same steps as translating some 'X' language to assembly language (basic compilation). Is that how they have actually been developed, or is there some different way? Thanks"} {"_id": "167946", "title": "What are the differences between AppMobi and PhoneGap?", "text": "I am new to cross-platform application development. I came across the very similar cross-platform frameworks AppMobi and PhoneGap. I want to know: * Are there any differences between an apk/ipa created using AppMobi and an apk/ipa created using PhoneGap? * Is there any difference in the native features that can be used? * What are the advantages of AppMobi over PhoneGap, or of PhoneGap over AppMobi? Also, what are the other differences between these two?"} {"_id": "194765", "title": "What to do when your colleagues don't value code maintainability", "text": "I've been working in the same software development department for a few years now. In that time, the average stay of a developer has been 6-9 months. A handful have been around for over 2 years, but the majority of our 20 or so developers come and go at a relatively high rate. As a result, the majority of our projects have become maintenance nightmares. Contractors will come in, code a few patch releases, and leave. Our department has development guidelines (we do TDD) but they aren't enforced. Recently, I've been pushing for our department to produce more maintainable code. I've been asking for mandatory code reviews and mandatory TDD. Management fully agrees with me... in theory. In practice TDD always goes out the window. The justification is always that, in our domain, we need to deliver NOW. I keep going on and on to colleagues that we're just digging a hole for ourselves, and that our current approach to software development is costing our department a lot of money... but it seems to fall on deaf ears. What can I do to get my colleagues to see the value of code maintainability? How can I explain that short-term wins without a long-term vision are not sustainable?"} {"_id": "238749", "title": "How to effectively cooperate in a team having mixed background/mindset regarding OOP?", "text": "I've recently been assigned to a new high-performance C++ project (finance) together with 3 other guys who, like me, refer to themselves as \"primarily C/C++ programmers\", meaning all of us have also done Java & other stuff, we all have comparable experience (8 to 10 years) and have successfully collaborated on other projects (not C/C++). I soon found out we have such different mindsets when it comes to our \"mother programming tongue\" that it risks not only the good delivery of the project but also the general well-being of the team/office. Specifically: They come from an electrical engineering/automation/polytechnic type of background, so their reasoning is intimately related to how hardware works: how bytes move around, processor workings, instruction caching, generally everything at the lower layers. They did embedded programming, ARM programming (I barely know what those are).
I soon found out their practical experience is mostly with C, not C++. They think procedurally, not in OOP. I, on the other hand, have trained myself to think in a more abstract way, to apply principles such as encapsulation and low coupling/high cohesion for creating reusable, modular software. My speciality is the C++ programming language, with an accent on language constructs that help you get both fast and reusable, generic code. I think in design patterns, idioms, concepts. Throughout 9 years of developing software I have consistently witnessed how poor maintainability of software systems is by far the #1 burden of teams. I generously document my code and write careful interfaces. I embrace template meta-programming as a way to obtain fast and reusable products. I embrace the new C++11 standard features. Where I see reusability and loose coupling, they see \"complication\". They want to \"see it all there\". They want to see `char*`, not some buffer class, in interface methods. At one point one even said \"I don't care how simple your interfaces are if your implementation looks complicated\". The trouble is, we really are trying to transition from a monolithic style of systems to a reusable one - we are writing a library. Yet they don't wish to see their C-style structs being polluted with templated methods, typedefs and c-tors, even if none of these makes their structs non-POD. They also don't wish to see static_assert(std::is_pod<...>). They love memcpy and pointers and generally stay away from references. They tend to think typedefs are somehow evil. I really tried to make them see an abstraction for what it is: code together with the data it operates on. By necessity, a library needs to define some interfaces for the user to be able to plug it into a larger product. They tend to see these interfaces as ways in which my code _requires their code to behave/look as I wish and doesn't allow them to do things their way_, but without actually telling me of a specific functionality they can't obtain. Make no mistake, I'm not ignorant when it comes to performance. I know about cache lines, branch prediction, the overhead of method virtualization, heap allocation, hash lookups. I'm good with the complexity of algorithms (though that doesn't matter so much here), data structures, inlining and some other compiler optimizations such as RVO. I believe that with a modern compiler a reusable, layered OOP C++ product can get within 10% of the performance of a C-only monolithic product, when carefully written and optimized. I honestly believed we would complement each other in our thinking. What are your thoughts? Please don't be too harsh, we need to make this work."} {"_id": "193927", "title": "Which programming guidelines for a chess network application?", "text": "I'd like to implement a simple chess peer-to-peer network application: one instance of the program may register friend players, and when one friend is \"connected\" (I mean both reachable by another program instance and registered as a friend), I can ask him to play a game. As I know the J2SE language, as well as the Swing framework, I tried to start coding using sockets.
The problem is that I need a way to: * register my friends (or other accounts) * get the friend's IP for creating a socket (it will be the same problem on the other side), with the problem of dynamic IP addresses - I mean that a serialized address may not be valid anymore * be informed that the other side has \"connected\" I think the first and third points can be resolved if I use a PHP server which will hold all the information for all registered users. This is how the application should work: 1. First, before trying to connect to each other, the users must register an account on a PHP server that I'll have to implement (with a MySQL database). 2. When launching the application, the user logs in by giving their username and password, so that the application will be able to retrieve friends (and stats, for example) thanks to the server. 3. When two friends are ready to play together, the server gives all the needed information (such as IP addresses) to both player application instances. My main difficulties will be the (de)serialization of dynamic IP addresses, and retrieving information about a user from the PHP server. What do you suggest?"} {"_id": "59866", "title": "When is a glue or management class doing too much?", "text": "I'm prone to building centralized classes that manage the other classes in my designs. Such a class doesn't store everything itself, but most data requests would go to the \"manager\" first. While looking at an answer to this question I noticed the term \"God Object\". Wikipedia lists it as an antipattern, understandably. Where is the line between a legitimate glue class, or module, that passes data and messages from place to place, and a class that is doing too much?"} {"_id": "193923", "title": "Approaches to reduce cyclomatic complexity", "text": "I was running our code through JSHint, and decided to switch on the checks for cyclomatic complexity, and then went on a long refactoring sprint. One place, though, baffled me; here is a code snippet: var raf = null //raf stands for requestAnimationFrame if (window.requestAnimationFrame) { raf = window.requestAnimationFrame; } else if (window.webkitRequestAnimationFrame) { raf = window.webkitRequestAnimationFrame; } else if (window.mozRequestAnimationFrame) { raf = window.mozRequestAnimationFrame; } else if (window.msRequestAnimationFrame) { raf = window.msRequestAnimationFrame; } else { raf = polyfillRequestAnimationFrame; } Not surprisingly, in this implementation the cyclomatic complexity is 5. My first attempt was to use the solution from MDN: var requestAnimationFrame = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame; window.requestAnimationFrame = requestAnimationFrame; which simply looks like a hack to me (yes, I know the majority of full-time JavaScript programmers won't agree with me; however, this is a prevalent opinion within our team).
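One more option would be a plain data-driven loop over the prefixed names (a rough sketch, reusing the `polyfillRequestAnimationFrame` fallback from the snippet above):

var candidates = ['requestAnimationFrame', 'webkitRequestAnimationFrame', 'mozRequestAnimationFrame', 'msRequestAnimationFrame'];
var raf = polyfillRequestAnimationFrame; // default to the fallback
for (var i = 0; i < candidates.length; i++) {
    if (window[candidates[i]]) { // take the first implementation the browser provides
        raf = window[candidates[i]];
        break;
    }
}

This moves the branching into data instead of control flow, so the linter only sees one loop and one if, but it may well be just another hack.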
Doodling around my code, I found other hacks I could employ to fool the code linter; among the ones I was proud of for about 5 seconds was the usage of array comprehensions: var rafs = [ window.requestAnimationFrame, window.webkitRequestAnimationFrame, window.mozRequestAnimationFrame, window.msRequestAnimationFrame, polyfillRequestAnimationFrame ].filter(function (rafClosure) { return rafClosure !== null && rafClosure !== undefined; }); return rafs[0]; However, I'm curious whether there is a more or less standard practice for refactoring long branching code (that's not trivial to reduce)."} {"_id": "58515", "title": "Advice: The first-time interviewer's dilemma", "text": "I've been working in my first job for about 2 years now, and I've been \"asked\" to interview a potential teammate (whom I might have to mentor as well) on pretty short notice (2 days from now). Initially, I had been given a free rein (or so I thought, and hence agreed), but today I've been told \"not to pose _bookish_ questions\" - implying I can only ask basic programming puzzles and stuff similar to the 'fizzbuzz' question.
In general, having a lot of assemblies composing a .NET solution is painful. Plus, when code in one solution needs to be shared, it can just be added to the common library, rather than deciding which common library it should be added to or creating yet another library. Edit: the question comes to me after using Smalltalk for a bit, where all the code is available to use, all the time."} {"_id": "21771", "title": "Is there a good reason to use Java's Collection interface?", "text": "I've heard the argument that you should use the most generic interface available so that you're not tied to a particular implementation of that interface. Does this logic apply to interfaces like _java.util.Collection_? I would much rather see something like the following: List<Foo> getFoos() or Set<Foo> getFoos() instead of Collection<Foo> getFoos() In the last case, I don't know what kind of data set I'm dealing with, whereas in the first two instances I can make some assumptions about ordering and uniqueness. Does _java.util.Collection_ have any usefulness outside of being a logical parent for both sets and lists? If you came across code that employed _Collection_ when doing a code review, how would you determine whether its usage is justified, and what suggestions would you make for its replacement with a more specific interface?"} {"_id": "63121", "title": "How to evaluate Java skills", "text": "I was asked to make a table that developers on the team could use to evaluate their Java level (maybe from 1 to 5). Then they will know what they are lacking, so they can focus on learning that technique. I am thinking about a skill matrix but don't know how to start. Could you please give me some hints? Thank you in advance. * * * As Paul commented, I would like to update with some information: our software is web applications using Core Java, JSP, JSF, Struts, Spring, and Hibernate. Desktop applications use Core Java and Swing."} {"_id": "78325", "title": "Book on Information Technology Concepts for Developer", "text": "Is there a good book and/or series of articles that explains IT concepts such as networking, Windows domains, security (SSL certs), cryptography, IT infrastructure, etc.? I'm a mid-level developer who would like to build a good foundation on these topics, as it would help me see how my system fits in. Now, I realize that these topics are vast and have a great deal of resources available on individual topics. My goal, at this point in my career, is not necessarily to gain depth in each; instead I'm looking for good overall working knowledge, and when required I'll resort to those special-topic books. I'm certain you folks had to overcome this bump in your careers as enterprise software developers. What did you do? What resources did you resort to? How does one go about bridging the gaps in knowledge and gaining the necessary understanding that is essential to your success? I'm very interested in reading about your experiences. Thanks for the comments so far. Thanks in advance!"} {"_id": "195180", "title": "What does it mean to perform an operation \"In Place\" for Interpreted Languages?", "text": "Programming question: > Reverse words in a string (words are separated by one or more spaces). **Now > do it in-place**. What does \"in-place\" mean in the above context for an interpreted language like PHP or JavaScript?"} {"_id": "78320", "title": "Are code screening services worthwhile?", "text": "I've seen websites that screen programmers by their ability to write code.
It's a service that you enter a programming question into and then send out a link. Job candidates program their solution to the question as they are timed and recorded. The person who posted the question can then play back a video of their candidate programming the script. This video allows them to see how quickly and neatly their job candidate can code. Are these types of services worth it? What caveats and hangups are there to using such things to screen potential hires?"} {"_id": "236775", "title": "How to fulfill requirements of AGPL on my web service?", "text": "We want a simple tool to parse PDF forms, retrieving just the associated field names and user values. PDFSharp would be a good option for us, as it is under the MIT license, but it is a few versions of Acrobat behind - so not going to work. The most popular library (from what my searches have revealed) seems to be iTextSharp. Thus I am introduced to the intricacies of the AGPL. Now, for the structure of our app. We want something simple and reusable among any apps we may want to build later that require the same functionality. My plan was to design a simple web service that takes in the PDF file, and simply returns the fields and values in a list of key-value pairs. It looks like, as long as this is an intranet service, there is no concern from the AGPL, as described in this question: Can I safely use an open source library in an internal closed-source project? That is our most likely scenario, but I wanted to be aware ahead of time in case we need to expose our service externally (such as for some of our Silverlight clients, for example). If the service were exposed externally, then, under the AGPL, would we simply need to provide the source for the service itself? Or would the source of any consumer of the service also need to be provided? So, I am looking for 1) confirmation that in the intranet scenario there are no further considerations, and 2) what is necessary if the web service is publicly visible?"} {"_id": "176322", "title": "Sharing authentication methods across API and web app", "text": "I want to share an authentication implementation across a web application and a web API. The web application will be ASP.NET (mostly MVC 4), the API will be mostly ASP.NET Web API, though I anticipate it will also have a few custom modules or handlers. I want to: 1. Share as much authentication implementation between the app and API as possible. 2. Have the web application behave like forms authentication (attractive log-in page, logout option, redirect to / from login page when a request requires authentication / authorisation). 3. Have API callers use something closer to standard HTTP (401 - Unauthorized, not 302 - Redirect). 4. Provide client- and server-side logout mechanisms that don't require a change of password (so HTTP Basic is out, since clients typically cache their credentials). The way I'm thinking of implementing this is using plain old ASP.NET forms authentication for the web application, and pushing another module into the stack (much like MADAM - Mixed Authentication Disposition ASP.NET Module). This module will look for some HTTP header (implementation specific) which indicates \"caller is API\". If the \"caller is API\" header is set, then the service will respond differently from standard ASP.NET forms authentication; it will: 1. Return 401 instead of 302 on a request lacking authentication. 2.
Look for a username + password in a custom \"Login\" HTTP header, and return a FormsAuthentication ticket in a custom \"FormsAuth\" header. 3. Look for the FormsAuthentication ticket in a custom \"FormsAuth\" header. My questions are: 1. Is there a framework for ASP.NET that already covers this scenario? 2. Are there any glaring holes in this proposed implementation? My primary fear is a security risk that I can't see, but I'm similarly concerned that there may be something about such an implementation that will make it overly restrictive or clumsy to work with."} {"_id": "112617", "title": "When should the presentation model design pattern include one or more controllers?", "text": "I have been researching the usage of the Presentation Model design pattern for an application I am preparing to build. The specific technology I will be using is Flex, though I doubt that it matters for this question. In general most of the Presentation Model examples I have run across are pretty consistent in the basic implementation. More or less they follow Martin Fowler's Presentation Model description to a T when it comes to the three basic concepts: View, Model, Presentation Model. I think that I understand this. However, I have seen some other examples that extend this further by including controller, event, and service objects in addition to the three basic constructs I listed above. The examples I have seen are (note these are both using Swiz for DI and event handling): * CafeTownsend * Swiz Presentation Model Example I understand the reasoning for a service layer. You may want / need to retrieve your data from a SOAP interface one day, a REST interface the next, and a SharedObject on the third day. The service layer interface makes it easy to swap out retrieval and persistence logic. However, I am not sure that I understand why the addition of the controller is an advantage. In the examples linked above, some logic within the presentation model is dispatched and handled within the presentation model itself, while other logic is dispatched and addressed within a controller. My guess is that logic that is view specific is captured within the presentation model. This supports the notion that the presentation model is simply an abstract representation of the view. Further, I suspect that the controllers are used to capture common business logic and events that are not view specific and are not performed by the service layer (e.g., persistence). This should support reuse across the application and help to make the application more testable. * Is this conclusion right? * Are there other advantages I am missing? * What are the disadvantages (besides more code)? * How do you know when logic should be in the presentation model and when it should reside in a controller?"} {"_id": "176324", "title": "Android Card Game Database for Deck Building", "text": "I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created that the user can browse. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes. The search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to an SQLite database table, with the amount they would like in the deck.
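Something like this is what I have in mind (a sketch; table and column names are just placeholders):

-- Working table holding the deck currently being built
CREATE TABLE deck_in_progress (
    card_id  INTEGER NOT NULL PRIMARY KEY REFERENCES cards(_id),
    quantity INTEGER NOT NULL DEFAULT 1
);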
This way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wanted to make changes to the deck, they would load it, and it would be parsed back into the table so they could make the changes. A similar situation would occur when they eventually load the deck to play a game. I'm just curious what the rest of you may think of this method. Currently, this is a personal project and I am the only one working on it. If I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble."} {"_id": "167915", "title": "How do I develop database-utilizing application in an agile/test-driven-development way?", "text": "I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL as opposed to NoSQL, or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only 1 client, but now I want to do more complicated things (i.e., DB-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions: 1. How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database? 2. How do I design my tables/schema so that it is flexible with respect to changing requirements? 3. Do I start with an ORM for my language? Or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and associated CRUD statements) for one change, because that's error-prone. On the other hand, I expect ORMs to be a lot slower and harder to debug than raw SQL. 4. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh in the new version?"} {"_id": "61753", "title": "Should documentation be a company policy or every programmer's responsibility?", "text": "I have been struggling lately with the whole subject of documentation at my current position. I am at a point in my programming career in which I feel I have just been birthed into the whole world of proper and effective documentation. Currently there is very little to be found in any database or source code at the company. User spec documents are generally developed in email threads, on a good day. I wonder, am I to blame for having not been strict with myself on this issue? Is it the responsibility of the programmer to maintain this and update where needed or found lacking? I have come to the realization that the best way to approach documenting is to pretend that you have one foot out the door, and another soul will have to take over your work. Thoughts?"} {"_id": "179311", "title": "Is there a way to check if redistributed code has been altered?", "text": "I would like to redistribute my app (PHP) in a way that the user gets the front-end (presentation) layer, which uses the API on my server through a web service. I want the user to be able to alter his part of the app, but at the same time exclude such an altered app from normal support and offer support on a pay-by-the-hour basis. Is there a way to check if the source code was altered?
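The only solution I can think of would be to get checksums of all the files, then send them through my API and compare them with the original app; roughly something like this (a sketch only; $appDir and the transmission step are illustrative):

// Sketch: checksum every file in the distributed app directory
$hashes = array();
$files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($appDir));
foreach ($files as $file) {
    if ($file->isFile()) {
        $hashes[$file->getPathname()] = sha1_file($file->getPathname());
    }
}
// POST $hashes to the API and compare with the original release's checksums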
Is there any more secure way to do it, so it would be harder for the user to break such protection?"} {"_id": "225081", "title": "Reporting Solution in PHP / CodeIgniter - Server side logic vs client side", "text": "I'm building a report for an end user. They would like to see a list of all widgets... but then also like to see widgets with missing attributes, like missing names, or missing size. So I was thinking of creating one method that returns JSON data containing all widgets... and then using JavaScript to let them filter the data for missing data, instead of re-querying the database. Ultimately, they need to be able to save all \"reports\" (filtered versions of data) inside a CSV file. These are the two options I'm mulling over: **Design 1** Create 3 separate methods in my controller/model like: get_all_data() get_records_with_missing_names() get_records_with_missing_size() And then when these methods are called, I would display the data on screen and give them a button to save to a CSV file. **Design 2** Create one method called get_all_data() and then somehow give them tools in the view to filter the JSON data using tables etc... and then let them save subsets of the data. The reality is, in order to display all data, I still need to massage the data, and therefore I know which records are missing attributes. So I'd rather not create separate methods for each filter. I'm not sure how I would do that just yet, but at this point I would like to know some pros/cons of each method. Thanks."} {"_id": "115343", "title": "Does the (A)GPLv3 allow people to contribute code anonymously?", "text": "Does AGPLv3 or GPLv3 allow people to contribute code without specifying the author of the contributed code? Does it allow people to fork an (A)GPL project without specifying the authors of the forked code?"} {"_id": "115344", "title": "How long should I practice a code kata?", "text": "I recently found out about the idea of code katas at Dave Thomas's blog and was really interested in it. I'd like to hear the opinions of people who have experience with code katas; is it assumed that I should practice one kata until I find a solution to the problem, or is there another explanation?"} {"_id": "152016", "title": "Running CherryPy on Apache or not?", "text": "It seems CherryPy has a standalone web server. Is it recommended to deploy the application on an Apache server, or just use the standalone one? Either way, what are the pros and cons of each decision? Thanks."} {"_id": "213172", "title": "developing an Android app that includes a C++ toolkit", "text": "I'm a Java developer and I want to develop an Android app that captures a photo and extracts its bags of visual words. To extract those bags of words I use the TOP-SURF toolkit, which is written in C++. I'm new to C++ development and I want to know how to develop an Android app that uses this toolkit. I read about the NDK. Is it the solution?"} {"_id": "251907", "title": "Commercial Apps with Open Source on GitHub", "text": "I am going to be launching a small web service soon and was wondering if it's a good idea to upload the code to a public GitHub repo. The code will not be licensed for commercial use, but the public could download the code and hack around with it. Is this a good idea? Would it help produce more secure/bug-free code? Or would it run my startup into the ground?
Thanks, Ben"} {"_id": "225082", "title": "How do I read the Entity Framework Model and validate it against a given connection?", "text": "I have an Entity Framework database-first model. I want to write an MSTest/NUnit test to verify that all the stored procs, tables and views that are defined in my EDMX model are still valid on the database."} {"_id": "152018", "title": "Dual Inspection / Four Eyes Principle", "text": "I have the requirement to implement some kind of dual inspection or four-eyes principle _as a feature of my software_, meaning that every change of an object done by user A has to be checked by user B. A trivial example would be a publishing system where an author writes an article and another has to proofread it before it is published. I am a little bit surprised that you find nearly nothing about it on the net. No patterns, no libraries (besides cibet), no workflow solutions, etc. **Is this requirement really so uncommon? Or am I searching for the wrong terms?** I am not looking for a specific solution. More for a pattern or best-practice approach. **Update:** the above example is really trivial. Let's add some more complexity to it. The article has been published, but it now needs an update. Putting the article offline for the update is not an option, but the update has to be proofread, too."} {"_id": "190294", "title": "About ANSI C++ 2003 standard", "text": "I would like to ask for your help. I searched a lot on the Internet, but I found mismatched information. **My questions:** 1. I tried to buy the \"ISO/IEC 14882:2003(E) Programming Languages - C++\" standard on ansi.org, but I have not found it. However, I found this standard on nssn.org: www.nssn.org/search/DetailResults.aspx?docid=338353&selnode= But unfortunately this standard has been deleted or replaced with another one. webstore.ansi.org/RecordDetail.aspx?sku=INCITS/ISO/IEC%2014882-2003 On iso.org, it's the same situation: www.iso.org/iso/catalogue_detail.htm?csnumber=38110 Yes, I know that the current standard is C++11, but I need the C++03 standard. From other sources, I heard that the C++03 standard has become an open standard, so I can download it from the Internet for free, THE FULL, OFFICIAL standard, for example: code.google.com/p/openassist/downloads/detail?name=C%2B%2B%20Standard%20-%20ANSI%20ISO%20IEC%2014882%202003.pdf cs.nyu.edu/courses/spring13/CSCI-GA.2110-001/downloads/ Is this true? And is it the full, official C++03 standard, not just a draft? 2. Is it true that C99 (the C programming language, 1999) has also become an open standard? If yes, is this the full C99 standard?: cs.nyu.edu/courses/spring13/CSCI-GA.2110-001/downloads/C99.pdf"} {"_id": "255269", "title": "Communication/Updates between units of work/Entity Framework contexts, colliding with user changes", "text": "I'm developing a WPF application using Entity Framework for my database communication. The application has a hierarchy of tabs where each tab has a DB context. Each tab allows the user to view some particular object, make changes, save, update, close, or open tabs for related objects. My problem is that some tabs contain information that depends on other objects as well: an object O of type A may be related to other A objects (o_1,...,o_n) in a parent-children relationship, where I have a graphic indicator showing whether property P in O has the same value as P in all the children of O, for example. A red indicator means they are unequal, green means they are equal.
The "overlap" in information between contexts will only be presentation-wise; the different tabs will not be stepping on each other's toes by changing each other's data. Now if I have tabs for O and o_3 opened, change P in o_3 and save, the indicator in the tab for O might need to change as well, but since that tab has its own (now outdated) DB context, the indicator still shows the old value, which might be confusing to the user. This seems like a very general and common problem considering how many applications have this architecture. How would you solve the problem? I see a few different but flawed/problematic solutions. A. Make the user aware of the fact that tabs can become "obsolete" when saving others. _This feels like a forfeit to technological troubles._ B. Force updates of related tabs when saving. _This would either overwrite user changes, or I would have to build some system (repository pattern?) that maintains information about user changes and mediates what to update and what to keep, what to tell the user, and so on._ C. Indicate the outdatedness of related tabs. _This seems like the pragmatic solution, but the user is left to wonder what would be affected when updating the tab manually, and any changes made would have to be dismissed._ D. Communicate the change to the other tab without involving the DB context. _This would sort of work, but could lead to complicated problems by introducing state that is not related to the DB context; overall it would increase complexity._"} {"_id": "157629", "title": "GUI advice for a responsive touchscreen", "text": "I am tasked with building a piece of software that interfaces with a MySQL database, in order to allow the user to pick songs to play and queue using a touch screen; they are then shown simultaneously on a second monitor as videos. **Questions:** To allow both displays to work, would it be best to write two pieces of software (one for each display, or just the one)? I have never written code for a touch screen (other than mobile development); are there certain libraries to use, and how would one go about it? I plan to use VLC to play the videos; is there a language that would be best for that? In order to create a truly stunning GUI, should I use a specific language? If I know all the dimensions, should I worry about fluid layouts? * * * Sorry for the many questions; this is my first solo commercial piece of software, so I am looking to get a solid game plan. Also I wasn't sure if this was best suited to SO or here, so please correct me if I was wrong. The languages I was considering were: C#, Java, Python."} {"_id": "157625", "title": "When is using DI and optionally an IoC framework a step too far?", "text": "Consider a logging system - used absolutely everywhere in your codebase. _(note - logging is just an example, don't take it too literally and point me at your favourite logging system)._ public interface ILogger { void LogMessage(string message); } // Just used for testing, does nothing public class NullLogger : ILogger { public void LogMessage(string message) { } } public class FileLogger : ILogger { public FileLogger(string fileName){...} public void LogMessage(string message) { // Log to a file here } } Now I could make every single class take an ILogger in its ctor or as a param and use some IoC framework.
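The constructor-injection option would look roughly like this (a sketch; OrderProcessor is just a stand-in for any consuming class):

// Constructor injection: the dependency is explicit in the signature.
public class OrderProcessor
{
    private readonly ILogger _log;

    public OrderProcessor(ILogger log)
    {
        _log = log;
    }

    public void Process()
    {
        _log.LogMessage("Processing order");
    }
}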
Or I could just use something like public static class Globals { public static ILogger Log = new NullLogger(); } Use at any point by `Globals.Log.LogMessage("Hello Logging World");` And at the 'Composition Root' (main entry point of the program) set `Globals.Log = new FileLogger("somefile.log");` Or this could be done as a Singleton, of course. When is using DI and optionally IoC a step too far? What would you do in this case?"} {"_id": "255263", "title": "Improve logic finding possible misconceptions", "text": "I made some logic to accomplish a specific problem, but it's too long. I'm sure that it can be reduced to fit. I have the following model: public class ColumnChart { public virtual string type { get; set; } public virtual string name { get; set; } public virtual List<decimal> data { get; set; } } The idea of `List<decimal>` is to keep 12 values (1 per month, if it exists). All these 12 values have to belong to someone, so `name` will keep the legend. The type is `column` (not always, but it is in this case, to make it simple). **Big Logic** // Create a List of ColumnChart's var list = new List<ColumnChart>(); // Instantiate one object of type ColumnChart var serie = new ColumnChart(); // Instantiate the List (like my model says) serie.data = new List<decimal>(); // Control the loop int i = 0; foreach (var element in MyDataSource) { // If serie.name doesn't have the current Group Name that came from my DataSource if (serie.name != element.GroupName) { // Isn't this the first loop? if (i != 0) { // Return a new empty instance of ColumnChart serie = NewInstance(); // Set the legend that all data belongs to serie.name = element.GroupName; // Set the type of the chart serie.type = "column"; // Instantiate a new List (like my model says) serie.data = new List<decimal>(); if (serie.data.Count == element.Month - 1) { serie.data.Insert(element.Month - 1, element.ValueRevenue); } // If the number of elements in my List is lower than the Month. else { while (serie.data.Count < element.Month - 1) { serie.data.Add(0); } serie.data.Insert(element.Month - 1, element.ValueRevenue); } } // It is the first loop iteration else { serie.name = element.GroupName; serie.type = "column"; serie.data.Insert(element.Month - 1, element.ValueRevenue); i++; } } // serie.name has the same current Group Name that came from my DataSource else { serie.data.Insert(element.Month - 1, element.ValueRevenue); i++; } // Is my i variable that controls the loop at the last iteration?
if (i == MyDataSource.Count() - 1) { list.Add(serie); } } i = 0; **Data Source** MyDataSource [0] Month: 1 GroupName: 'Music' ValueRevenue: 700.0 [1] Month: 2 GroupName: 'Music' ValueRevenue: 700.0 [2] Month: 3 GroupName: 'Music' ValueRevenue: 700.0 [3] Month: 4 GroupName: 'Music' ValueRevenue: 700.0 [4] Month: 5 GroupName: 'Music' ValueRevenue: 700.0 [5] Month: 6 GroupName: 'Music' ValueRevenue: 700.0 [6] Month: 7 GroupName: 'Music' ValueRevenue: 700.0 [7] Month: 8 GroupName: 'Music' ValueRevenue: 700.0 [8] Month: 9 GroupName: 'Car' ValueRevenue: 700.0"} {"_id": "157621", "title": "How to handle encryption key with a large development team?", "text": "If we have a large development team, say 100, and we would like to keep our encryption key hidden from developers who are not directly involved in the encryption module/algorithm, what are some best practices to keep this key as secret as possible while still allowing for development and debugging?"} {"_id": "243351", "title": "Time passage arithmetic explanation", "text": "I ported this from http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5 some time ago. However, I'm now wanting to modify it for the purpose of changing it from floating-point to fixed-point maths for enhanced efficiency (for those who are going to talk about premature optimization and whatnot, I want to have my entire engine in fixed point, both as a learning process for me and so I can port code more easily to systems in the future that don't have native floating point, such as ARM CPUs). My initial conversion to fixed point just resulted in the cycling getting stuck on either the first or last frame of the cycle. Plus it would be nice to understand better how it works, so I can add more options and so forth in the future; my maths however sucks and the comments are limited, so I don't really know how the maths work for determining the frame it should use (cycleAmount). I was also a beginner when I ported it, as I didn't know the difference between floating point and integers. So in summary my question is: can anyone give an explanation of the arithmetic used for determining the cycleAmount (which determines the \"frame\" of the cycle)? This is the working floating-point maths version of the code: public final void cycle(Colour[] sourceColours, double timeNow, double speedAdjust) { // Cycle all animated colour ranges in palette based on timestamp. sourceColours = sourceColours.clone(); int cycleSize; double cycleRate; double cycleAmount; Cycle cycle; for (int i = 0, len = cycles.length; i < len; ++i) { cycle = cycles[i]; cycleSize = (cycle.HIGH - cycle.LOW) + 1; cycleRate = cycle.RATE / (int) (CYCLE_SPEED / speedAdjust); cycleAmount = 0; if (cycle.REVERSE < 3) { // Standard Cycle cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); if (cycle.REVERSE < 1) { cycleAmount = cycleSize - cycleAmount; // If below 1, make sure it's not reversed.
} else if (cycle.REVERSE == 3) { // Ping-Pong cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize << 1); if (cycleAmount >= cycleSize) { cycleAmount = (cycleSize * 2) - cycleAmount; } } else if (cycle.REVERSE < 6) { // Sine Wave cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); cycleAmount = Math.sin((cycleAmount * 3.1415926 * 2) / cycleSize) + 1; if (cycle.REVERSE == 4) { cycleAmount *= (cycleSize / 4); } else if (cycle.REVERSE == 5) { cycleAmount *= (cycleSize >> 1); } } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } if (USE_BLEND_SHIFT) { blendShiftColours(sourceColours, cycle, cycleAmount); } else { shiftColours(sourceColours, cycle, cycleAmount); } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } } colours = sourceColours; } // This utility function allows for variable-precision floating-point modulus. private double DFLOAT_MOD(final double d, final double b) { return (Math.floor(d * PRECISION) % Math.floor(b * PRECISION)) / PRECISION; }"} {"_id": "67717", "title": "Procedural Code vs OOP code", "text": "I've finished a project in PHP of 13000+ lines in procedural style [because I'm very familiar with that, though I know OOP], and the project is running perfectly. But should I convert it to OOP? [ _because the world is busy with OOP_ ] My code doesn't need any of the features of OOP [encapsulation, inheritance, basically...]! So what should I do? And what kind of benefit will I get if I convert it to OOP?"} {"_id": "245389", "title": "What is the best way to get a method name at runtime?", "text": "Here is a little background on my problem: I implemented a singleton logger class which is being called from several projects. I want to log the name of the class as well as the name of the method that asked for logging. So far, I have thought about two ways to get the name of the method and the class - either by reflection or by manually writing them. The advantage of using reflection is that whenever the name of the class or the method is renamed, which is not that often, the logged name will be updated automatically. The disadvantage of reflection is that it is performance-intensive and it is suggested not to use it whenever you can avoid it, as can be seen in the documentation of reflection (http://docs.oracle.com/javase/tutorial/reflect/index.html) or in various threads online. What would be the best way, and why?"} {"_id": "151661", "title": "Is it bad practice to use the <?= tag in PHP?", "text": "I came across the <?= operator recently and I am reluctant to use it, but it itches so hard that I wanted to have your take on it. I know it is bad practice to use short tags <? and that we should use full tags <?php ?> instead, but what about this one: <?=? It would save some typing and it would be better for code readability, IMO. So instead of this: <input name=\"someVar\" value=\"<?php echo $someVar; ?>\"> I could write it like this, which is cleaner: <input name=\"someVar\" value=\"<?= $someVar ?>\"> Is using this operator frowned upon?"} {"_id": "50350", "title": "Arguments for a coding standard?", "text": "A few friends and I are planning to work on a project together and we want a COMPLETELY DIFFERENT coding standard. We do NOT want to use the coding standard the libraries/language uses. It's our project and we want to mess around. So I came here to ask what you guys think are good standards and arguments for them (or what not to do and arguments against it). The styles I remember most are * Upper-casing the entire word * Camel and Pascal casing * Using '_' to separate each word * Pre- or postfixing letters or words (I hate m for member, but I think IsCond() is a good func name.
SomethingException as a postfix example) * Using '_' at the start or end of words * Brace placement. On a new line or the same line? I know of libs that use Pascal casing on all public and protected members. But would you ever get confused about whether something is a func, var, or even a property, if the lang supports it? What about if you decide a public member should be private (or vice versa)? Wouldn't that create a lot of fix-up work or inconsistencies? Is prefixing C to every class a good idea? I ask: what do you think, and why?"} {"_id": "187175", "title": "Is it an acceptable approach to put try-catch wherever a null pointer exception occurs?", "text": "There are instances where the references to objects, fields, variables etc. can be null, and there might be a possible occurrence of a null pointer exception at that point. Is it a permanent solution to put the code blocks which expect a null pointer exception inside a try-catch block?"} {"_id": "187177", "title": "Requirements for software quality for internally developed / shared software?", "text": "In environments where software is built internally by one team and then that software is used by other internal teams, how does one decide what the required quality should be for the produced artefacts? For example: 1. documentation completeness and accuracy 2. extensibility/re-usability of the artefact (without hacking at source code) 3. defects 4. etc. I guess one aspect to consider is the length of time into a project before quality is considered. It should be more cost-effective to ensure quality is built into a product from the start rather than leaving it until later in the project life-cycle."} {"_id": "187170", "title": "Searching for text within a webpage", "text": "I'm currently designing a search algorithm for a document, and just got curious about this while designing the algorithm: how do web browsers search a webpage? For example, in Google Chrome, you can press the keyboard shortcut Ctrl-F to activate the \"Find\" bar, which will let you search a particular webpage for text. How does it do that, given that it only has the raw HTML as reference?"} {"_id": "157992", "title": "Java advice needed, What is Java Message Service", "text": "Is it used for software development or website development? Is it needed (or good) for a JSP programmer to learn JMS? I know server-side scripting languages like JSP and PHP, but I have not found any relation between JSP and JMS."} {"_id": "157991", "title": "Reasons NOT to use JSF", "text": "I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and have some really serious concerns about using it. First of all, it creates the worst, most cluttered invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore it throws together what should never be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding.
If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I get? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves many problems that we wouldn't have if we didn't use JSF, or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck, because due to everything above the service tier is in control of JSF. And we will have to start over. My suggestion would be to implement a REST API, using JAX-RS. Then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC). By the way, we will need the REST API anyway, as we are developing a partial Android front-end, too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point not to use JSF? What are the strong points of JSF over my suggestion?"} {"_id": "240965", "title": "What Kind of Source Control Do High Security Projects Use?", "text": "I was just reading this slightly older post on choosing a good source control system when I started thinking about how different projects use source control. Where I work we've essentially moved completely to Git and things have been mostly good. I have wondered about high-security projects such as military software projects. Using something like Git there would fundamentally be unsafe; even allowing repo clones under restricted circumstances could be problematic. Using SVN or TFS may not even provide enough security in itself. Are there any VCS systems that have stronger security considerations? Or are there additional considerations that go into using VCS for high-security projects?"} {"_id": "58405", "title": "What issues are there for doing freelance work?", "text": "I'm considering doing some contract work on the side of my normal job. I know that it will kill my free time, but I figure I can control when I'm doing projects and then get a little extra money, or even eventually make it my full-time job. But as I've never done this before, I'm wondering what issues people face doing this kind of work. For instance: how do you find customers? What difficulties do you normally face on a project? How do you deal with projects that are too large for one programmer to effectively complete? What about projects that need other skill sets (for instance, web design for a web app)?"} {"_id": "175670", "title": "Should I incorporate exit cost into choosing a solution", "text": "I'm currently choosing between two viable software designs/solutions. Solution 1 is easy to implement, but will lock some data in a proprietary format, and will be hard to change later. Solution 2 is hard to implement, but will be a lot easier to change later on. Should I go YAGNI on this, or should I incorporate the exit cost into the decision making? Or, asked differently, is the exit cost part of the TCO? I'm thinking of going back to the customer with this to ask whether or not he thinks the exit costs are relevant, but I'd like to know what the community thinks first. P.S.
Is exit cost the correct term?"} {"_id": "72001", "title": "So they're trying to pull me into management", "text": "I work in a small IT department in a non-tech company. My manager recently quit and they are looking for a replacement. I guess since I'm one of those \"rare\" developers with people skills, the director is encouraging me to apply. Part of me wants to apply, but another part of me says no. The pay and ability to make a difference sound intriguing, and I'm a little burnt out on programming after 12 years, but there are downsides too, it seems. I'd be managing someone else who is very interested in the position, and it could be awkward since he's a friend and currently higher on the ladder than me (along with a few others). Has anyone else been in a similar position? Is anyone in management and happier, or has anyone taken a management job and wished they hadn't? Any feedback would be appreciated! * * * @Pratik: When managers worked for ex-employees in the companies that you worked for, did they give them any trouble? * * * EDIT: Thanks everyone for your answers. While this seems like it might be a good opportunity, there are a few things that make me uncomfortable about this. 1) I would be responsible for EVERYTHING instead of what I am asked to do. The department is still reeling after several rounds of layoffs... overstressed and on the verge of burnout. I do have a pretty good relationship with everyone on the team... but I wonder if it wouldn't change if I took this job. 2) A couple of other co-workers despise the director for whatever reason. He might try to get me to do his dirty work and punish them if they butt heads. 3) Managing people who are older than me, which doesn't seem to be an issue based on Codemwnci's post. 4) It's a small department and I really don't think a full-time manager is needed. IMO we don't need someone to spend 50% of their time ordering people around (because everyone knows what they have to do) and 50% of the time doing nothing. The dept needs more of a player-coach, IMO. I think it would actually help the team, because they would have another person to do support work full time (instead of having a support person write code like I'm doing now), if that makes any sense. It is difficult, but what I'm doing now is also difficult (development + support work). I'd just hate to see them bring in someone from the outside that doesn't know what they're doing or ruins the team we have now, which is pretty solid. * * * EDIT (4/30) > Your relationship with the rest of the team **will** change. You are now the > boss rather than a mate. You will need to tell people what to do and they > should do it. Some will be OK with the change, but others might resent that > you've been promoted rather than them. That could be a problem since I'm friends with most of them now. > In this case allocate some of the \"easy\" project tasks to yourself. These > should be non-critical items that you can pick up and drop at a moment's > notice. This allows you to help on the project, keep up with the code base > but not get distracted from the managerial activities. Another thing you can > do is field all the bug reports that come in. Check them out to make sure > that they are real bugs and, if it's an easy edit, fix it straight away. > Larger problems can then get scheduled in to the rest of the team's > activities. This stops the team getting distracted and also shows the > client/upper management that things do get fixed. Sounds like a good strategy...
> To me it comes down to which day-to-day activities you enjoy. Do you like > coaching, meeting with people, project management, building rapport, and > solving people problems? Or do you need lots of alone time, find meetings > draining, dislike drama, and like intellectual/technical problems? I can do both. The main thing for me is work-life balance. My old boss had work-life balance, but only because he had dedicated employees below him that knew what they were doing, worked the extra hours, and made his job easy. Of course, if someone else took over and the department was mismanaged, work-life balance for everyone could be out the window. * * * EDIT (5/1) @Jeff It's a corporate environment and I have no control over how many people we have or the job description of each position. I would hope that my fellow teammates would want me to succeed, but I'm not sure. I made friends with some of them, and surely this would change our relationship. But I'm not sure the other candidates would be able to run the dept and keep things running smoothly... although I could be wrong."} {"_id": "72004", "title": "Source code stolen\\hacked by rival company", "text": "At some companies I've worked for, managers have spent quite a lot of money on IT security consultants, primarily because they're afraid we're going to get the source code stolen by a rival company. However, as a programmer, I see it as a minor concern that a rival company would find the actual source code useful. After all, just having access to our application is enough, and that can be done without breaking the law. In my opinion, the data handled by the business people would be much more useful than source code. My question is: _are there any known examples where source code has gotten stolen AND a rival company has used it extensively?_ I know some game engine (Quake 1 and Half-Life 2, if I remember correctly) source code has gotten stolen, but I can't really see that it hurt their business. (I know, this question might be more appropriate for another forum at Stack Exchange.)"} {"_id": "164712", "title": "Field symbol and Data reference in SAP-ABAP", "text": "If we compare field symbols and data references with pointers in C, I concluded the following: In the C language, say we declare a variable \"var\" of type \"Integer\" with default value \"5\". The variable \"var\" will be stored somewhere in memory, and say the memory address which holds this variable is \"1000\". Now we define a pointer \"ptr\" and this pointer is assigned to our variable. So \"ptr\" will hold \"1000\" and \"*ptr\" will be 5. Let's compare the above situation in SAP ABAP. Here we declare a field symbol \"FS\" and assign to it the variable \"var\". Now my question is: what does \"FS\" hold? I have searched this rigorously on the Internet but found out many ABAP consultants have the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong. While debugging, I found out that FS holds only 5. So FS (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong. Now let's declare a data reference \"dref\" and another field symbol \"fsym\", and after creating the data reference we assign it to the field symbol. Now we can do operations on this field symbol. So the difference between a data reference and a field symbol is: in the case of a field symbol, first we declare a variable and assign it to a field symbol; in the case of a data reference, first we create a data reference and then assign that to a field symbol. Then what is the use of a data reference?
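In code, the comparison I am describing looks roughly like this (a sketch from memory, not checked in a real system):

DATA var TYPE i VALUE 5.
FIELD-SYMBOLS <fs> TYPE i.
ASSIGN var TO <fs>.              " debugger shows 5 for <fs>, like *ptr in C

DATA dref TYPE REF TO i.
GET REFERENCE OF var INTO dref.  " dref holds a reference to var, like ptr
FIELD-SYMBOLS <fsym> TYPE i.
ASSIGN dref->* TO <fsym>.        " dereference, then operate through <fsym>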
The same functionality we can achieve through field symbols alone."} {"_id": "79051", "title": "Competency matrix for 3D graphic skills", "text": "As a part of self-evaluation, I bumped into this programmer competency matrix which collects many major aspects of being a good/expert programmer. It really helped me a lot to understand what I know about programming, what I don't know, and what I have never even thought about. Now, I would need the same kind of matrix for 3D graphics programming skills - not so much about using software to build models, but about general 3D stuff, mathematics, using shaders, performance/optimizations, etc. Does anyone have an idea where I could find one? It does not have to be perfect (and not even a matrix necessarily), but it would be nice to have at least some basic aspects covered."} {"_id": "178046", "title": "Is \"Interface inheritance\" always safe?", "text": "I'm reading \"Effective Java\" by Josh Bloch, and in there is Item 16, where he explains how to use inheritance in a correct way; by inheritance he means only class inheritance, not implementing interfaces or extending interfaces with other interfaces. I didn't find any mention of interface inheritance in the entire book. Does this mean that interface inheritance is always safe? Or are there guidelines for interface inheritance?"} {"_id": "73751", "title": "Is there an \"app store\" for regular PC apps?", "text": "I have written an app for the computer in Java, but I don't want to give it out for free. I only want to charge somewhere between $0.99 - $2.99; is there a website that I can upload it to, to do this for me? I am looking for something like the Apple App Store, or the Android Market, but for the computer. Does this exist? Also, is there a way to control the number of computers it can be installed on? I.e., if a person is using a specific key, they can only use the program on one computer at a time, like iTunes movies. The program is a standalone app, meaning there is no installation process; you just open the executable file and it works. Will this be a problem for integrating a licensing function?"} {"_id": "178048", "title": "Event driven language for Robotics", "text": "There are several options that are available, like C, C++, MATLAB and some more. But is there a language that naturally feels like event programming? For example: If I see a red ball (Event) ---> Do this (Action)"} {"_id": "178049", "title": "css - use universal '*' selector vs. html or body selector?", "text": "Styles applied to the body tag apply to the whole page, so body { font-family: Verdana } will be applied to the whole page. This could also be done with * {font-family: Verdana}, which would apply to all elements and so would seem to have the same effect. I understand the principle that in the first instance the style is being applied to one tag, body, for the whole page, whereas in the second example the font is being applied to each individual HTML element. What I am asking is what the practical difference is in doing that, what the implications are, and what reason, situation, or best practice leads to using one over the other. One side-effect is certainly speed (+1 Rob).
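To make the difference concrete, a minimal sketch of where the two behave differently (inheritance vs. direct matching):

/* Inherited: descendants get Verdana unless they set their own font. */
body { font-family: Verdana; }

/* Matched directly: EVERY element gets Verdana, overriding inherited values. */
* { font-family: Verdana; }

/* Form controls (input, textarea, button) don't inherit the body font by
   default, so the two rules give visibly different results for them. */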
I am most interested in the actual reason to choose one over the other in terms of functionality."} {"_id": "151591", "title": "How do you keep code with continuations/callbacks readable?", "text": "Summary: **Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?** * * * I'm using a JavaScript library that does a lot of stuff asynchronously and heavily relies on callbacks. It seems that writing a simple \"load A, load B, ...\" method becomes quite complicated and hard to follow using this pattern. Let me give a (contrived) example. Let's say I want to load a bunch of images (asynchronously) from a remote web server. In C#/async, I'd write something like this: disableStartButton(); foreach (var myData in myRepository) { var result = await LoadImageAsync(\"http://my/server/GetImage?\" + myData.Id); if (result.Success) { myData.Image = result.Data; } else { write(\"error loading Image \" + myData.Id); return; } } write(\"success\"); enableStartButton(); The code layout follows the \"flow of events\": First, the start button is disabled, then the images are loaded (`await` ensures that the UI stays responsive) and then the start button is enabled again. In JavaScript, using callbacks, I came up with this: disableStartButton(); var count = myRepository.length; function loadImage(i) { if (i >= count) { write(\"success\"); enableStartButton(); return; } myData = myRepository[i]; LoadImageAsync(\"http://my/server/GetImage?\" + myData.Id, function(success, data) { if (success) { myData.Image = data; } else { write(\"error loading image \" + myData.Id); return; } loadImage(i+1); } ); } loadImage(0); I think the drawbacks are obvious: I had to rework the loop into a recursive call, the code that's supposed to be executed in the end is somewhere in the middle of the function, the code starting the download (`loadImage(0)`) is at the very bottom, and it's generally much harder to read and follow. It's ugly and I don't like it. I'm sure that I'm not the first one to encounter this problem, so my question is: **Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?**"} {"_id": "151593", "title": "Challenges in multi-player Android Game Server with RESTful Nature", "text": "I'm working on an Android game based on Contract Bridge, as a part of my college summer internship project. The game will be multi-player, such that 4 Android devices can play it, so there's no BOT or CPU player to be developed. At the time of getting the project, I realized that most of the students had already worked on the project, but none of their work is reusable now (for a variety of reasons, like undocumented code and design architecture, and different platform implementations). I have experience working on several open source projects, and hence I want to work on this project such that the components I make become as reusable as possible. Now, as the game is multi-player and the entire game progress will be handled on the server, I'm currently working on the server's design. Since I wanted to make the game server reusable such that any client platform can use it, I was previously torn between sockets and REST for the game server's design, but finally settled on REST APIs for the server.
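The resource layout I have in mind looks roughly like this (paths are hypothetical, just to show the idea):

POST /players                      register a player account
POST /sessions                     log in, get an auth token
GET  /tables/{id}/state            current state of a game table
POST /tables/{id}/moves            play a card
GET  /tables/{id}/events?since=N   ask for changes after event N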
Now, since I have to keep all players in sync while they make moves in the game, on the server I've planned to use a database which will keep all players' progress, specific to each table (in Bridge, 4 players play at a single table, and the server will handle many such game tables). I don't know if it's an appropriate decision to use a database as the shared medium to track the progress of each game table (let me know if there's a more appropriate or better option). Obviously, when the game is completed for a table, the data for that table in the server's database is discarded. Now the problem is that access to a REST service is an HTTP call, so as long as the client doesn't make any request, the server will remain idle. Consider a situation where: * A player has played a card on his device and the device requests to apply this change on the server. * Now, I need to let the other three devices know that the player has played a card, and also update the view on their devices. * AFAIK, REST cannot provide a push-notification-like system, since the connection to the server is not persistent. * One solution I thought of was to make each device constantly poll the server for any change (like every 56 ms) and, when changes are found, reflect them on the device. But I feel this is not an elegant way, as every HTTP request is expensive. (And I chose REST to make the gameplay experience robust, since a mobile device tends to get disconnected from the Internet, and if there's a socket-like persistent connection then the entire game progress is subject to being lost. Also, portability on the client end is important.) * Also, imagining a situation where 10 game tables are in progress and 40 players are playing, the server must be capable of handling the flood of HTTP requests from all the devices polling every 56 ms, so I wonder if the situation would be taken for a DoS attack. So, having explained the situation, am I going on the right track for the server design? I wanted to be sure before I proceed much further with the code."} {"_id": "151594", "title": "introducing automated testing without steep learning curve", "text": "We're a group of 4 developers on an AJAX/MySQL/PHP web application. 2 of us end up focusing most of our efforts on testing the application, as it is time-consuming, instead of actually coding. When I say testing, I mean opening screens and testing links, making sure nothing is broken and the data is correct. I understand there are test frameworks out there which can automate this kind of testing for you, but I am not familiar with any of them (neither is anyone on the team), or the fancy jargon (is it test-driven? behavior-driven? acceptance testing?). So, we're looking to slowly incorporate automated testing. We're all programmers, so it doesn't have to be super-simple. But we don't want something that will take a week to learn... And it has to match our PHP/AJAX platform... The Question: Which testing framework will allow us to jump in right away without spending a lot of time learning the syntax and/or a new programming language?"} {"_id": "79586", "title": "Future proofing code", "text": "Where I work developers are always telling me that \"I added this just in case for the future\" or \"I think it's a good idea to do this because they'll probably want it some day\".
I think it's great that they're proactive in trying to anticipate future changes, but I can't help thinking that it's unnecessary and risks writing code that may never be needed and is therefore unproductive (I also think that some developers just want to try out something new for the sake of it). Are the arguments for future proofing invalid if you just write good, clean, organised code?"} {"_id": "144041", "title": "Merging similar graphs based solely on the graph structure?", "text": "I am looking for (or attempting to design) a technique for matching nodes from very similar graphs based on the structure of the graph*. In the examples below, the top graph has 5 nodes, and the bottom graph has 6 nodes. I would like to match the nodes from the top graph to the nodes in the bottom graph, such that the \"0\" nodes match, and the \"1\" nodes match, etc. This seems logically possible, because I can do it in my head for these simple examples. Now I just need to express my intuition in code. Are there any established algorithms or patterns I might consider? (* When I say based on the structure of the graph, I mean the solution shouldn't depend on the node labels; the numeric labels on the nodes are only for demonstration.) UPDATE: It has been correctly pointed out that using the structure alone, most of these graphs can't be solved with 100% accuracy. I've added some thoughts to each graph. I'm also interested in the performance of any potential solutions. How well will they scale? Could I merge graphs with millions of nodes? In more complex cases, I recognize that the best solution may be subject to interpretation. Still, I'm hoping for a \"good\" way to merge complex graphs. (These are directed graphs; the thicker portion of an edge represents the head.) ![Example 1](http://i.stack.imgur.com/vxEmS.png) This graph is a cycle with 1 node \"attached to the outside.\" After attaching a second node to the outside, it's not possible to determine whether the new node (node 5) is attached to node 1 or 3. The 4 nodes composing the cycle can't be perfectly matched either. ![Example 2](http://i.stack.imgur.com/3Bkbd.png) Nodes 1 and 5 can be swapped in the bottom graph without changing the structure. Thus, you can only guess with 50/50 accuracy which is the new node. ![Example 3](http://i.stack.imgur.com/jmU7l.png) This graph has symmetry in 2 dimensions and can be mirrored without changing the structure, meaning you can swap several nodes without changing the structure. ![Example 4](http://i.stack.imgur.com/CeUEb.png) This last graph is the only one for which you can identify the new node with 100% accuracy. Thanks, Kirk Broadhurst, for pointing this out."} {"_id": "144042", "title": "how does a pure functional programming language manage without assignment statements?", "text": "When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel so. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the `bank account` example. If there is no assignment statement, how can this be done? How do we change the `balance` variable? I ask because I know there are some so-called pure functional languages out there, and according to Turing completeness, this must be possible too.
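For the graph-matching question just above: one established starting point is iterative signature refinement, the idea behind Weisfeiler-Lehman-style canonization. Each node starts with its degree and repeatedly absorbs its neighbors' signatures; nodes with equal final signatures are structurally interchangeable, which is exactly the ambiguity the examples point out. A rough sketch, assuming adjacency lists of out-neighbors; a fuller version would fold in in-neighbors separately:

```java
import java.util.*;

public class StructuralMatcher {

    // Refine a per-node signature for a few rounds; after k rounds the signature
    // encodes the node's k-hop neighborhood structure.
    static Map<Integer, String> signatures(Map<Integer, List<Integer>> adj, int rounds) {
        Map<Integer, String> sig = new HashMap<>();
        for (Integer n : adj.keySet()) {
            sig.put(n, "deg:" + adj.get(n).size());
        }
        for (int r = 0; r < rounds; r++) {
            Map<Integer, String> next = new HashMap<>();
            for (Integer n : adj.keySet()) {
                List<String> neighbours = new ArrayList<>();
                for (Integer m : adj.get(n)) {
                    neighbours.add(sig.get(m));
                }
                Collections.sort(neighbours); // order-independent combination
                next.put(n, sig.get(n) + "|" + neighbours);
            }
            sig = next;
        }
        return sig;
    }

    // Pair nodes with identical signatures; leftovers in b are the "new" nodes.
    static Map<Integer, Integer> match(Map<Integer, List<Integer>> a,
                                       Map<Integer, List<Integer>> b) {
        Map<Integer, String> sa = signatures(a, 3);
        Map<Integer, String> sb = signatures(b, 3);
        Map<String, Deque<Integer>> pool = new HashMap<>();
        sb.forEach((n, s) -> pool.computeIfAbsent(s, k -> new ArrayDeque<>()).add(n));
        Map<Integer, Integer> result = new HashMap<>();
        sa.forEach((n, s) -> {
            Deque<Integer> candidates = pool.get(s);
            if (candidates != null && !candidates.isEmpty()) {
                result.put(n, candidates.poll());
            }
        });
        return result;
    }
}
```

Replacing the growing strings with fixed-size hashes each round is what makes this plausible at millions of nodes; the exact-equality matching also explains why the symmetric examples can never be resolved beyond a coin flip.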
I learned C, Java, and Python, and I use assignments a lot in every program I write. So it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) it has on these languages. The example mentioned above is here: (define (make-withdraw balance) (lambda (amount) (if (>= balance amount) (begin (set! balance (- balance amount)) balance) \"Insufficient funds\"))) This changed the `balance` by `set!`. To me it looks a lot like a class method to change the class member `balance`. As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out. **EDIT**: I guess to fully understand the question, I have to dig into a pure functional programming language. Thanks for all the answers. I sure learned a lot."} {"_id": "222905", "title": "How to maintain view logic separation with a server", "text": "I am writing a client-server application. I wanted to fully separate the server logic from the view. The first thing I wanted to do is make a sort of a message log. The server itself should not know if the messages will be shown in a GUI or on the console. What I was thinking would be to have a handler method that would be called every time a new message was posted. So a GUI app would have its own method to maybe add to a listView, while the console would have a simple printf. Is there a better way to do this?"} {"_id": "222907", "title": "Can published upgrades to an LGPL library use a commercial library?", "text": "I am using an LGPL v3 library and have made upgrades to it. Before using these upgrades, I understand I have to publish them. That's all right. 1. Is it legal if I perform upgrades that use a commercial library of my own (the upgrades will be published, but not my commercial library)? Note: usually these questions go the other way: commercial applications that want to use GPL libraries. 2. If yes to (1), are there any restrictions on the build of the upgraded LGPL library? Should the commercial library be dynamically or statically linked, or something else?"} {"_id": "34584", "title": "How to explain OOP concepts to a non technical person?", "text": "I often try to avoid telling people I'm a programmer because most of the time I end up explaining to them what that really means. When I tell them I'm programming in Java they often ask general questions about the language and how it differs from x and y. I'm also not good at explaining things because 1) I don't have that much experience in the field and 2) I really hate explaining things to non-technical people. They say that you truly understand things once you explain them to someone else; in this case, how would you explain OOP terminology and concepts to a non-technical person?"} {"_id": "253557", "title": "how to install lib_mysqludf_sys.dll?", "text": "I have put the \"lib_mysqludf_sys.dll\" in C:/Xampp/mysql/lib/plugin. In the \"my.ini\" config file the plugin directory has been set correctly. After putting that .dll file in the appropriate place, I restarted the MySQL server. Everything seems correct. Then I run CREATE FUNCTION sys_exec RETURNS INT SONAME 'lib_mysqludf_sys.dll' in phpMyAdmin, but I am confronted with Can't open shared library 'lib_mysqludf_sys.dll' (errno: 193)... Can anybody help me and tell me what I have done incorrectly or might have missed? Your cooperation is appreciated in advance.
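For the SICP bank-account discussion above, the pure-functional trick is that "withdraw" does not overwrite the balance; it returns a new value representing the next state, and the program threads that value along. A sketch of the same account in Java with the balance made final (an illustration of the idea, not code from the book):

```java
public final class Account {
    private final double balance; // never reassigned

    public Account(double balance) {
        this.balance = balance;
    }

    // Instead of set!-ing the balance, return a fresh Account holding the result.
    public Account withdraw(double amount) {
        if (balance < amount) {
            throw new IllegalStateException("Insufficient funds");
        }
        return new Account(balance - amount);
    }

    public double balance() {
        return balance;
    }

    public static void main(String[] args) {
        Account before = new Account(100);
        Account after = before.withdraw(30);
        // 'before' still holds 100: the state change became a new value instead
        System.out.println(before.balance() + " -> " + after.balance());
    }
}
```

Pure languages push this discipline everywhere, which is why they can drop assignment entirely: "the account after the withdrawal" is simply a different value from "the account before it".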
meisam..."} {"_id": "130532", "title": "Prerequisites for developing an application with Unicode support", "text": "What could be the necessary prerequisites to be taken when developing an application with Unicode support in the context of 1. Web applications 2. Desktop applications 3. Embedded applications Prerequisites to be taken care of relating to * Type casting and conversion * Data Storage * Fallback in case of no Unicode support * Transition from a database without Unicode support to a database with Unicode support Answers pertaining to real-world projects are preferred"} {"_id": "130534", "title": "How should I extend an existing Service?", "text": "Our system's main functionality is encapsulated in a service, let's call it X. There are requests coming in to X-Manager service which deal with all validations and security issues, and activates X's functionality after everything passed. We now want to extend X's functionality with a new module, let's call it Y, but without changing X's code too much, preferably without changing it at all. Also, Y may be able to work on it's own some day (and not only extending X) The main idea now in the team is to make X-Manager **call Y with X instead of just calling X, so that Y will do it's thing and then Y Will call X's functionality instead of X-Manager.** I don't know why but this smells icky to me, I hope I managed to explain this well... **Is there any better way of doing this?**"} {"_id": "130539", "title": "Easiest way to create static HTML file with sortable and filterable table?", "text": "I want to create a static HTML file I can email to someone with a lot of data, and have that data sortable and filterable. What is the easiest to use library or package I can use to get this off the ground?"} {"_id": "233808", "title": "How can I handle this string concatenation in C in a reusable way", "text": "I've been writing a small C application that operates on files, and I've found that I have been copy+pasting this code around my functions: char fullpath[PATH_MAX]; fullpath[0] = '\\0'; strcat(fullpath, BASE_PATH); strcat(fullpath, subdir); strcat(fullpath, \"/\"); strcat(fullpath, filename); // do something with fullpath... Is there a better way? The first thought that comes to mind is to create a macro but I'm sure this is a common problem in C, and I'm wondering how others have solve it."} {"_id": "93670", "title": "Line break before/after operator", "text": "While Sun's Java code convention suggests to put line break before the operator many other guidelines disagree with it. I do not see any obvious pros and cons, so are there advantages of using one of these styles over another? String longVarName = a + b + c + d + e + f; vs String longVarName = a + b + c + d + e + f;"} {"_id": "93676", "title": "Is it a must to focus on one specific IT subject to be succesful?", "text": "Lately I'm deeply disturbed by the thought that I'm still not devoted to one specific IT subject after so many years of doing it as a hobby. I've been in so many different IT related hobbies since I was 12. I have spent 8 years and now I'm 20 and just finished freshman year at Computer Eng. Just to summarize the variety: * 3D Game Dev. and Modelling (Acknex, Irrlicht , OpenGL, GLES, 3DSMAX) * Mobile App.Dev (Symbian, Maemo, Android) * Electronis (Arduino) * Web.Dev. (PHP, MYSQL, Javascript, Jquery, RaphaelJS, Canvas, Flash etc.) * Computer Vision (OpenCV) I need to start making money. But I'm having problem to pick the correct IT business to do so. 
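The extend-service question above (X, X-Manager, Y) maps closely onto the decorator pattern: put X behind an interface, let Y wrap anything implementing it, and X-Manager keeps calling the same contract. A sketch with hypothetical names:

```java
interface Service {
    void execute(String request);
}

class X implements Service {
    @Override
    public void execute(String request) {
        // existing functionality, unchanged
    }
}

// Y decorates any Service; day-to-day that is X, but Y can also wrap a
// no-op service later and effectively stand on its own.
class Y implements Service {
    private final Service inner;

    Y(Service inner) {
        this.inner = inner;
    }

    @Override
    public void execute(String request) {
        // Y's added behaviour, before and/or after delegating
        inner.execute(request);
    }
}

// In X-Manager, after validation:
//   Service service = new Y(new X());
//   service.execute(request);
```

The "icky" feeling in the question usually comes from Y knowing about X-Manager; with the decorator, the dependency points one way only and X's code is untouched.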
**Is it a problem to have interest in so many different IT subjects?(in business world)** I'm having a lot of fun by doing all those stuff from time to time. Other than making money I also noticed that having so many different interests is lowering my productivity. But I'm still having difficulty to pick one. I'm feeling close to all those subjects (time to time)."} {"_id": "128455", "title": "LGPL open source license", "text": "I have recently become curious about open source licenses. I have a question about the LGPL. I created a Java web project and built it as a WAR package. Can I add LGPL jars to this WAR file?"} {"_id": "188316", "title": "Is there a reason that tests aren't written inline with the code that they test?", "text": "I've been reading a bit about Literate Programming recently, and it got me thinking... Well-written tests, especially BDD-style specs can do a better job at explaining what code does than prose does, and have the big advantage of verifying their own accuracy. I've never seen tests written inline with the code that they test. Is this just because languages don't tend to make it simple to separate application and test code when written in the same source file (and nobody's made it easy), or is there a more principled reason that people separate test code from application code?"} {"_id": "188310", "title": "Naming guard clauses that throw exceptions", "text": "I have a function `evaluate()` that parses a `String` for some variables and replaces them with their corresponding value: public String evaluate() { String result = templateText; for (Entry entry : variables.entrySet()) { String regex = \"\\\\$\\\\{\" + entry.getKey() + \"\\\\}\"; result = result.replaceAll(regex, entry.getValue()); } if (result.matches(\".*\\\\$\\\\{.+\\\\}.*\")) { //<--check if a variable wasn't replaced throw new MissingValueException(); } return result; } Since the `evaluate()` function now does 2 things, it first replaces the variables with the values **AND** check for any variables that weren't replaced, I thought about refactoring it to: public String evaluate() { String result = templateText; for (Entry entry : variables.entrySet()) { String regex = \"\\\\$\\\\{\" + entry.getKey() + \"\\\\}\"; result = result.replaceAll(regex, entry.getValue()); } checkForMissingValues(result); } private void checkForMissingValues(String result) { if (result.matches(\".*\\\\$\\\\{.+\\\\}.*\")) { throw new MissingValueException(); } } Now is this a good name for the function `checkForMissingValues`? I mean it does check for missing values, but it also throws an exception. I thought about renaming it to `throwExceptionIfMissingValueWasFound()`, but this name tells **HOW** the function does it more than **WHAT** the function is doing. Is there a standard for naming such functions that check for a condition then throw an exception?"} {"_id": "165393", "title": "Bug Tracking Etiquette - Necromancy or Duplicate?", "text": "I came across a really old (2+ years) feature request issue in a bug tracker for an open source project that was marked as \"resolved (won't fix)\" due to the lack of tools required to make the requested enhancement. In the time elapsed since that determination was made, new tools have been developed that would allow it to be resolved, and I'd like to bring that to the attention of the community for that application. However, I'm not sure as to what the generally accepted etiquette is for bug tracking in cases like this. 
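On naming guard clauses that throw (the `checkForMissingValues` question above): one common convention, also used by `java.util.Objects.requireNonNull`, is a `require...` prefix stating the condition that must hold, with throwing as the implied failure mode. A sketch:

```java
public class TemplateEvaluator {

    static class MissingValueException extends RuntimeException {
    }

    // The name states the invariant, not the mechanism: callers read
    // "require all variables replaced" and know a violation will throw.
    private void requireAllVariablesReplaced(String result) {
        if (result.matches(".*\\$\\{.+\\}.*")) {
            throw new MissingValueException();
        }
    }
}
```

This sidesteps the WHAT-versus-HOW dilemma in the question: `require` names the precondition, and by convention the exception is part of the contract rather than something the name has to spell out.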
Obviously, if the system explicitly states not to duplicate and will actively mark new items as duplicates (much in the way the SE sites do), then the answer would be to follow what the system says. But what about when the system doesn't explicitly say that, or a new user can't easily find a place that states what the system's preference is? Is it generally considered better to err on the side of duplication or necromancy? Does this differ depending on whether it's a bug or a feature request?"} {"_id": "55388", "title": "What are pros and cons of using \"in-house\" tools?", "text": "What are pros and cons of using \"in-house\" tools, from real-world experience?"} {"_id": "181439", "title": "Why can't java generics be in arrays?", "text": "Why is it that when I try to make an array of ArrayLists: `ArrayList<String>[] arr = new ArrayList<String>[40];` there is an error and Java does not allow this? Is there a reason related to Java's implementation of generics, generics in any language, or something arbitrary?"} {"_id": "51403", "title": "What should web programmers know about cryptography?", "text": "Should programmers who build websites/web applications understand cryptography? I have no idea how most cryptographic algorithms work, and I really don't understand the differences between md5/des/aes/etc. Have any of you found any need for an in-depth understanding of cryptography? I haven't needed it, but I wonder if perhaps I'm missing something. I've used salt + md5 hash to encrypt passwords, and I tell webservers to use SSL. Beyond that, I can't say I've used much else, nor can I say with any certainty _how_ secure these methods are. I only use them because other people claim they are safe. Have you ever found a need to use cryptography in web programming aside from these two simple examples?"} {"_id": "181431", "title": "What are the differences between algorithms using data structures and algorithms using databases?", "text": "# The General Question What are the differences between algorithms using data structures and algorithms using databases? # Some Context This is a question that has been bugging me for some time, and I have not been able to come up with a convincing answer for it. Currently, I am working on strengthening my understanding of algorithms that, of course, heavily involve data structures. These are basic structures such as Bag, Queue, Stack, Priority Queue, and Heap. I also use databases on a daily basis to store the data that has been processed and submitted by the end-user or processed by the program. I retrieve and submit the data through a DAL, which has data structures of its own that are generated based on the tables in the database. My questions come when I have the option to sort the data using the database so that it is sent back to me ordered in an ascending/descending fashion, or to retrieve and load the data into my logic, process it in a priority queue, and heapsort all of it. Another option would be to search for records using the database rather than loading a subset of the records and using something like binary search to find the record or records I am interested in. In my mind, I would try to have as many operations as possible take place on the database end before sending the data over, because communication is expensive. This also makes me wonder: when do you use algorithms and data structures strictly defined within your own logic to process data, rather than those of the database? So here are the questions... # Questions 1.
What are the differences between data structures and databases? 2. When do we use algorithms that use data structures defined solely within our own logic and not those of the database? 3. **@Harvey post:** When do the methods in the database become less efficient to use than methods in your own logic? * **@mirculixx post:** What makes a method efficient? 4. **@Harvey post:** How is processing data with data structures faster than doing it in the database? # Clarifications 1. **@Grant post:** The databases I normally work with are relational, and these questions are coming out of working with them. However, I do think these questions are applicable to any persistence framework (when I say framework, I mean it in the most general sense). I know answers without a specific context are difficult. Food-for-thought, advice, or discussion points are mainly what I'm looking for and would be most appreciated!"} {"_id": "78353", "title": "How far should one take e-mail address validation?", "text": "I'm wondering how far people should take the validation of e-mail addresses. My field is primarily web development, but this applies anywhere. I've seen a few approaches: * simply checking if there is an \"@\" present, which is dead simple but of course not that reliable. * a more complex regex test for standard e-mail formats * a full regex against RFC 2822 \\- the problem with this is that often an e-mail address might be valid but it is probably not what the user meant * DNS validation * SMTP validation As many people might know (but many don't), e-mail addresses can have a lot of strange variation that most people don't usually consider (see RFC 2822 3.4.1), but you have to think about the goals of your validation: are you simply trying to ensure that mail can be sent to an address, or that it is what the user probably meant to put in (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses)? An option I've considered is simply giving a warning with a more esoteric address but still allowing the request to go through, but this does add more complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS server/SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out here handle this? Are there any other approaches than the ones I've listed? Edit: I completely forgot the most obvious of all, sending a confirmation e-mail! Thanks to answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved. The user has to fetch some e-mail, and the developer needs to remember user data before they're even confirmed as valid."} {"_id": "129514", "title": "How can I efficiently approach cookie-based session handling?", "text": "Currently our web application uses server-sided sessions. Because of the large amount of memory usage, we want to switch to cookie-based sessions. I have been thinking about several ways: ### Idea 1 My first idea was to give the client two cookies: `SessionID` (a hash) and `SessionData` (the encrypted data). The `SessionID` will be used to look up a decryption key for `SessionData` in an in-memory hashtable. Because only decryption keys are stored on the server, this should wind up being less memory usage.
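A sketch of the pragmatic middle ground the e-mail question converges on: a deliberately loose shape check, with real validation deferred to the confirmation e-mail. The pattern below is an assumption of mine, not a standard; RFC 2822 permits far more, but stricter patterns mostly reject addresses users never actually type:

```java
import java.util.regex.Pattern;

public class EmailCheck {

    // Deliberately loose: something@something.something.
    private static final Pattern LOOSE =
            Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    static boolean looksLikeEmail(String s) {
        return LOOSE.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeEmail("user@example.com")); // true
        System.out.println(looksLikeEmail("not-an-address"));   // false
        // For anything that passes, send the confirmation e-mail; successful
        // delivery is the only proof the address is valid *and* the user's.
    }
}
```

The shape check exists purely to catch typos early in the form; correctness is established by the round trip, which neatly sidesteps the DNS/SMTP availability worries raised above.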
However, it still uses some memory, and the CPU load is increased due to encryption/decryption on every request. It'd also make the session cluster-dependent, with no graceful failover possible. ### Idea 2 My second idea was similar to the first, but instead of looking up a decryption key in a cache, I'd use a salted version of `SessionID` to decrypt `SessionData`. The upside is that there wouldn't be any in-memory cache or hash table anymore, and that the session would be cluster-independent if the salt is the same for all clusters. However, the tradeoff is that it's much less secure, due to the potential danger of the `SessionData` cookie being decrypted or modified, and that the CPU load is increased due to encryption/decryption on every request. Is there a better way to go about this problem? Or a way to modify one of these ideas to be a little more palatable?"} {"_id": "25644", "title": "Are there any actual examples of profitable programmer's \"worker's cooperatives\"?", "text": "http://en.wikipedia.org/wiki/Worker_cooperative I'm curious whether there are, anywhere in the world, worker's cooperatives that center on a technology business that involves either programming, IT, or some sort of IT or programming related consulting or services. The wikipedia link above is an overview of the concept. The short form explanation is that a co-op is a worker-owned business. Also there is the notion that every worker owns shares in the business. I am interested in knowing whether an example of a \"programmer's/IT co-op\" even exists. Note: I am not talking about nor asking about a government-funded incubator nor any other socialized, state supported group. I also don't mean \"co-working\", which is renting an office with other self employed people doing their own thing. I mean a going, profitable IT business operating in a competitive environment that is worker-owned and run."} {"_id": "65989", "title": "How to do this in Standard UML?", "text": "In the below sequence diagram, when the user has entered the Username and Password, I have to do the authentication. Now you can see two details, **valid details** and **invalid detail**, in the diagram, which I will return when the **user password matches** and **mismatches** respectively. Now my big question is which one I have to draw first, either **valid details** or **invalid detail**; how do I know which one will come first?"} {"_id": "74769", "title": "What's the limitation of APSL compared to BSD or MIT?", "text": "I heard some people complain that APSL 2.0 ( http://www.opensource.apple.com/license/apsl/ ) has too many limitations. What's the limitation of it compared to BSD or MIT?"} {"_id": "74760", "title": "Is javascript worth learning if you do not plan on being a web developer?", "text": "I heard JavaScript is a full language just like C++. Is this true? What else is it good for besides web stuff?"} {"_id": "196293", "title": "Why DependencyProperties and not native language support?", "text": "With the advent of WPF and MVVM, Microsoft introduced `DependencyProperties` and the `INotifyPropertyChanged` interface to provide a way to implement the \"reactive\" approach used with those technologies. Sadly, both of these constructs are very verbose, require much boilerplate, are clumsy to use, and are not really that safe, since they require heavy use of \"magic strings\".
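A third option for the session question above: keep `SessionData` client-side but sign it with a server-side key (an HMAC), so there is no per-session server state at all and tampering is detectable; encryption can be layered on top if the payload must also be unreadable. A minimal sketch using the JDK's `javax.crypto` (the key literal is a placeholder):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignedCookie {
    // In production this key comes from configuration, shared by all cluster nodes.
    private static final byte[] KEY = "demo-server-secret".getBytes(StandardCharsets.UTF_8);

    static String sign(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        byte[] tag = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        return data + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
    }

    static boolean verify(String cookie) throws Exception {
        int dot = cookie.lastIndexOf('.');
        // Recompute and compare; a real implementation would use a
        // constant-time comparison to avoid timing side channels.
        return dot > 0 && sign(cookie.substring(0, dot)).equals(cookie);
    }
}
```

Like Idea 2, this is cluster-independent as long as the key is shared; unlike Idea 2, an attacker who can read the cookie still cannot mint or alter one without the key.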
So here comes the question: why didn't they put this functionality directly into the language - why didn't they create a new kind of property, declared with a simple keyword, providing the useful features of `DependencyProperties` (like events on change and so on)? What was stopping them?"} {"_id": "196998", "title": "Is there a name for the school of thought behind writing tests?", "text": "Essentially it is a branch of software engineering, but SE itself is too large an umbrella. I was curious if there was a title for the knowledge base that encompasses TDD, BDD, mocks/stubs/spies, unit tests vs. integration tests, code coverage, etc."} {"_id": "214870", "title": "What am I risking if I don't update my SDK/JDK and bundled runtime/JRE every time there's a security update?", "text": "It seems like there's a new major security hole patched in Java every other week, and I would assume the same goes for other development platforms. After years of frustration trying to get customers to install and configure a compatible JRE on their systems, we started bundling one with our software. (By bundling, I mean we extract a copy of the JRE in our installation directory--we don't install the JRE and configure it as the system default.) The problem is, it's a hassle having to keep that JRE up-to-date because first we have to retest everything to make sure the update didn't break anything (it has broken some of our third-party dependencies in the past). How seriously, if at all, are we putting our customers at risk if we don't update our SDK/JDK and the runtime/JRE that we bundle with our product every time there's a security update? Is it reasonable to just update on a periodic schedule--say, once every 6 months or so?"} {"_id": "115833", "title": "Drawing thread interaction", "text": "I'd like to draw (with pen and pencil) thread interaction in a UML(-like) notation. I don't insist on UML; anything that is obvious to the reader should do. I started with sequence diagrams, but I don't feel this is the best way to do it. All the time, there would be \"action initiators\" coming from off-screen which kinda break the SSD idea. I inherited a medium-size code base with around 9-10 threads, each owning a state machine, and I'm trying to figure out how it works. How should I visualize thread interaction?"} {"_id": "136079", "title": "Are there any statistics that show the popularity of Git versus SVN?", "text": "I'm writing an essay, and would like to have some empiric evidence, perhaps longitudinal data where the popularity of these technologies is compared over a period of some years. Are there any statistics that show the popularity of Git versus SVN?"} {"_id": "234721", "title": "Why do concurrent languages tend to have more complicated syntax?", "text": "This is a question that's been on my mind for a while. Recently I've been checking out concurrent languages like Haskell or Go or Erlang. From my point of view, they have a huge performance benefit compared to languages like C++ or Python because of the way they handle functions in parallel. My question is: Why is the syntax of languages like C++ or Python so much different (and IMO simpler) from that of concurrent languages, even though most concurrent languages are executed on a runtime (and therefore have more room to simplify the syntax)? Here is an example, consider Go's `sqrt`: // Package newmath is a trivial example package. package newmath // Sqrt returns an approximation to the square root of x.
func Sqrt(x float64) float64 { z := 1.0 for i := 0; i < 1000; i++ { z -= (z*z - x) / (2 * z) } return z } Now for the Python counterpart (ported this myself based on the `Sqrt` example from Go): def Sqrt(x): z = 1.0 for i in range(0, 1000): z -= (z*z - x) / (2*z) return z Now, as you can see, Go has syntax like \":=\" for assignment. If you take more complicated examples, the syntax will look even more like what I am trying to point out. And Go is just the least \"weird-looking\" language. If you consider Erlang, it looks even weirder: %% qsort:qsort(List) %% Sort a list of items -module(qsort). % This is the file 'qsort.erl' -export([qsort/1]). % A function 'qsort' with 1 parameter is exported (no type, no name) qsort([]) -> []; % If the list [] is empty, return an empty list (nothing to sort) qsort([Pivot|Rest]) -> % Compose recursively a list with 'Front' for all elements that should be before 'Pivot' % then 'Pivot' then 'Back' for all elements that should be after 'Pivot' qsort([Front || Front <- Rest, Front < Pivot]) ++ [Pivot] ++ qsort([Back || Back <- Rest, Back >= Pivot]). Parts that caught my eye were \"++ [] ++\", '<-' and '->'. I am convinced these languages look like this for a reason, but I can't help but think: Can't it be simpler? Why are concurrent languages like this? Why, if they use a runtime like Python and JavaScript do, are they still type-safe? I know type-safe languages have the advantage of not mixing up variable types, but still, there's gotta be somebody who made a concurrent language that didn't have type safety, if that's possible, right? It seems like almost all concurrent languages have one thing in common: a bigger list of possible / valid syntax. I hope I've explained my question well enough."} {"_id": "214879", "title": "How to manage a lot of Action Listeners for multiple buttons", "text": "I have this Tic Tac Toe game and I thought of this really cool way to draw out the grid of 9 little boxes. I was thinking of putting buttons in each of those boxes. How should I give each button (9 buttons in total) an `ActionListener` that draws either an **X** or **O**? Should they each have their own, or should I do some sort of code that detects turns in `this`? Could I even do a `JButton` array and use some `for` loops to place the 9 buttons? So many possibilities, but which one is the most **proper**? Code so far: import javax.swing.*; import java.awt.event.*; import java.awt.*; public class Board extends JPanel implements ActionListener{ public Board(){ Timer timer = new Timer(25,this); timer.start(); } @Override protected void paintComponent(Graphics g){ for(int y = 0; y < 3; y++){ for(int x = 0; x < 3; x++){ g.drawRect(x*64, y*64, 64, 64); } } } public void actionPerformed(ActionEvent e){ repaint(); } }"} {"_id": "239025", "title": "Advantage of Tagging a Release in SVN versus Only Leaving a Comment for the Commit", "text": "In my company, we have had a policy of tagging every release. Someone new joined, and he suggested that instead of formally using a Tag, we could just leave a comment for the release build when it is checked in. I like using a Tag, but, obviously, we can also get to the source code for a build by looking for the comment. The only advantage I see is that because our product spans multiple technologies, we can group the source code for both in the same directory with a folder for each.
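On the tic-tac-toe listener question above: the `JButton` array idea works well, with the turn state kept in the board and one listener body shared by all nine buttons. A sketch (GridLayout instead of the manual drawRect grid, purely to keep it short):

```java
import javax.swing.*;
import java.awt.*;

public class ButtonBoard extends JPanel {
    private final JButton[][] cells = new JButton[3][3]; // kept for win-checking later
    private boolean xTurn = true; // whose move it is lives here, not in listeners

    public ButtonBoard() {
        setLayout(new GridLayout(3, 3));
        for (int y = 0; y < 3; y++) {
            for (int x = 0; x < 3; x++) {
                JButton b = new JButton("");
                b.addActionListener(e -> {      // same listener logic for all 9
                    if (b.getText().isEmpty()) {
                        b.setText(xTurn ? "X" : "O");
                        xTurn = !xTurn;
                    }
                });
                cells[y][x] = b;
                add(b);
            }
        }
    }
}
```

Each button gets its own tiny listener instance, but the logic is written once and the board object decides whose turn it is, which answers the "who detects turns" part of the question.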
**Is there some other advantage of tagging releases that I'm missing?** BTW, we are using SVN."} {"_id": "165971", "title": "How is architectural design done in an agile environment?", "text": "I have read Principles for the Agile Architect, where they defined the following principles: > Principle #1 The teams that code the system design the system. > Principle #2 Build the simplest architecture that can possibly work. > Principle #3 When in doubt, code it out. > Principle #4 They build it, they test it. > Principle #5 The bigger the system, the longer the runway. > Principle #6 System architecture is a role collaboration. > Principle #7 There is no monopoly on innovation. The paper says that most of the architecture design is done during the coding phase, and only system design before that. That is fine. So, how is the system design done? Using UML? Or a document that defines interfaces and major blocks? Maybe something else?"} {"_id": "93093", "title": "How to sell your libraries?", "text": "If someone has created something like a DevExpress library, or Infragistics, Telerik, RavenDb etc., what's the easiest way to start selling it? What are the required steps? Are there rules for licenses and such?"} {"_id": "231618", "title": "Should Libraries Use Events or a Set Action", "text": "I'm building a small reusable library for two systems our company manages. Something that I've been caught up on is whether I should expose a set of properties of type `Action` for events such as Completed, Aborted, ExceptionOccured, or if I should use the `event` with `T : EventArgs` methodology. In practice another internal library will be consuming this information and handling the display of information to the user. The library's classes will be instantiated within a single thread and wouldn't be consumed by anything outside of its owner. To me it seems that if the notifications are only ever going to be consumed by one object then it doesn't matter; however, for completeness, and due to the fact that I as the library author have little control over how another developer implements this, I should use the event method due to its broader ability to handle events. EDIT (sample code): @Robert Harvey, I read that post when searching for reasoning on picking one over the other. I understand the answer in context to that question, but I'm not sure how it relates to using the event keyword or the simple assigning of a property. @Fabio Marcolini, I understand that I'm using the word event to describe what is happening. What I'm trying to answer is whether using the event structure is preferred to simply executing a delegate property. Here are the properties that I use to define the event actions: public Action InstallCompleted; public Action ActivityCompleted; public Action InstallAborted; public Func UnhandledExceptionDuringInstall; The idea would be to invoke one of them in this manner: if (this.InstallCompleted != null) this.InstallCompleted(new WorkflowInstallerCompletedEventArgs(logs)); Now until @svick's comment I would've said that you could only have one Action (or Func) assigned to each property.
Example implementation: WorkflowInstaller installer = new WorkflowInstaller(activity, new TestVariablesCollection()) { ActivitiesRunSequentially = true }; installer.AddInstallActivity(activity2); installer.InstallCompleted = (e) => { Log[] logs = e.Logs.ToArray(); string test = SerializationHelper.Serialize(logs.FirstOrDefault()); try { logOutputFromRun = SerializationHelper.Serialize(logs); } catch (Exception ex) { } syncEvent.Set(); }; installer.InstallAborted = (a, e) => { Assert.Fail(\"Unexpected abort during the install.\"); syncEvent.Set(); }; installer.UnhandledExceptionDuringInstall = (a, e) => { Assert.Fail(\"Unexpected exception during the install.\"); syncEvent.Set(); return System.Activities.UnhandledExceptionAction.Terminate; }; installer.StartInstallation(); syncEvent.WaitOne(); The answer to my question may not be a definitive this way or that way, but more of best practice or generally accepted approach. I'm really trying to expand my horizons when it comes to creating code that is acceptable to a wider audience."} {"_id": "165978", "title": "Case insensitive keywords in a language", "text": "We're trying to write a custom scripting language. There has been a suggestion to make the language forgiving by providing **case insensitive** keywords. I personally do not like the idea, but there are few people in my team leaning towards it, saying it will make the end user happy! Examples of languages like FORTRAN, BASIC, SQL are given saying that these are case insensitive. Is this a good idea?"} {"_id": "109420", "title": "How to Properly Google for C", "text": "The problem with trying to use Google to find tutorials or answers for the C programming language is that C is not an expressive enough name to narrow down the searches. Even coupled with keywords like \"Programming\" and/or \"Language\" yields results mostly for C++, C#, and Objective-C. Is there a way to more effectively search for specific C resources using Google?"} {"_id": "206256", "title": "Methods as verbs: is the object the subject?", "text": "Is there some recommended practice regarding methods as verbs in OOP? Should the object work syntactically as subject or as object/complement? Should `object.doSomething()` be normally understood as \"the object iself does something\" (subject) or \"the caller does something with the object\"? I suspect that the first alternative is more right, and it sounds more natural with such a general verb... But consider for example \"OutputStream.write(byte[])\", which... > writes b.length bytes from the specified byte array to this output stream. Here it's not the object who is the subject of the action, it's the caller. The Writer (rather confusingly) does not really \"write\", its the caller is who \"writes\" bytes _to the Writer_. Should this be considered incorrect?"} {"_id": "117475", "title": "What skills to add for an older CS grad living in rural area?", "text": "The answers to this question might be different than the generic, \"I graduated. Duh, now what should I do?\" Living in an area with a low population presents limited opportunities compared to those in more urban or suburban communities. On the other hand...maybe the answers will be largely the same. I'm asking you folks who might be looking at this from a perspective of more years on the inside for your wisdom. I have just finished a CS degree. I am about 40 (does it matter much?). 
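For the events-versus-`Action`-property question a couple of posts up: the practical difference is that a plain delegate property holds one callback that any caller can overwrite or invoke, while the `event` keyword gives an owner-controlled multicast list. The same split can be sketched language-neutrally; in Java terms, with hypothetical names:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class InstallEvents<T> {
    private final List<Consumer<T>> handlers = new CopyOnWriteArrayList<>();

    // Outsiders may only add or remove handlers...
    public void subscribe(Consumer<T> handler) {
        handlers.add(handler);
    }

    public void unsubscribe(Consumer<T> handler) {
        handlers.remove(handler);
    }

    // ...while only the owning library can raise the event.
    void fire(T args) {
        for (Consumer<T> handler : handlers) {
            handler.accept(args);
        }
    }
}
```

Since a library author cannot control how consumers wire things up, the multicast, owner-raised form is the safer default; a single settable delegate silently loses earlier subscribers when someone assigns over it.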
I left a successful, professional career in the laboratory robotics industry because it required me to either move to a big city or continue to travel extensively. In a few years we could potentially move somewhere else or I could start travelling again but I need to start working now. I need to make myself as employable as possible. There are few jobs available in my area for computer professionals. Of those few, the majority seem to be IT related more than software development related -- SharePoint, ERP, network security, web services, etc.. Might I be well served by trying to pick up some IT knowledge? Or should I continue to work on my software development chops? I have my own project now that I am proud of and excited about and could happily continue to extend it. It is a good vehicle for learning new stuff until (and hopefully after) I find employment. I'm hoping to get some others involved in it as well. Or I could start a new project that would be a vehicle for learning something marketable -- from .net to being able to support all that legacy COBOL there seems like a lot of stuff out there. Thanks for whatever insight anyone shares."} {"_id": "55966", "title": "Going back to ASP.Net Webforms from ASP.Net MVC. Recommend patterns/architectures?", "text": "To many of you this will sound like a ridiculous question, but I am asking because **I have little to no experience with ASP.Net Webforms** \\- I went straight to ASP.Net MVC. I am now working on a project where we are limited to .Net 2.0 and Visual Studio 2005. I liked the clean separation of concerns when working with ASP.Net MVC, and am looking for something to make webforms less unbearable. Are there any recommended patterns or practices for people who prefer asp.net MVC, but are stuck on .net 2.0 and visual studio 2005?"} {"_id": "22597", "title": "Does late have any meaning in Agile methodologies?", "text": "This came out of some of the answers and comments on another question (this one). I've worked primarily with waterfall projects and while I've worked on ad-hoc projects that have taken on agile behaviours and have read a fair bit about agile, I'd say I've never worked on a \"proper\" agile project. My question is does the concept of \"late\" have any meaning in agile, if so then what? My reasoning is that with agile you have no upfront plan and you have no detailed requirements at the outset. You may have a high level goal in mind and a notional date attached to it but both may change (potentially massively) and neither are certain. So if you don't know exactly what you're going to deliver basically until you deliver it and the user accepts it, and if you don't have a schedule beyond the next sprint, how could you ever be late in any way that actually has meaning? (Obviously I understand that a sprint might overrun but I'm talking about beyond that.) **Edit** : Just to be clear I'm (personally) happy with the assumption that on time waterfall projects (even relatively large ones) are possible based on the fact I've seen them and been involved in them - they're not easy or common even but they are possible. This isn't about knocking agile, it's about me understanding it. 
I've always seen the benefit of agile as nothing to do with deadlines or budgets (or rather only indirectly); it's to do with scope - agile delivers closer to what is really important rather than what the project team thinks is important before they've seen anything."} {"_id": "22598", "title": "Algorithms: Find the best table to play (standing gambler problem)", "text": "**Preface** This is not code golf. I'm looking at an interesting problem and hoping to solicit comments and suggestions from my peers. This question is not about card counting (exclusively); rather, it is about determining the best table to engage based on observation. Assume, if you will, some kind of brain implant that makes worst case time / space complexity (on any given architecture) portable to the human mind. Yes, this is quite subjective. Assume a French deck without the use of wild cards. **Background** I recently visited a casino and saw more bystanders than players per table, and wondered what selection process turned bystanders into betting players, given that most bystanders had funds to play (chips in hand). **Scenario** You enter a casino. You see n tables playing a variant of Blackjack, with y of them playing Pontoon. Each table plays with an indeterminate number of card decks, in an effort to obfuscate the house advantage. Each table has a varying minimum bet. You have Z currency on your person. You want to find the table where: * The fewest card decks are in use * The minimum bet is higher than at a table using more decks, but you want to maximize the number of games you can play with Z. * Net losses per player are lowest (I realize that this is, in most answers, considered to be incidental noise, but it could illustrate a broken shuffler) **Problem** You can magically observe every table. You have X rounds to sample, in order to base your decision. For this purpose, every player takes no more than 30 seconds to play. What algorithm(s) would you use to solve this problem, and what is their worst case complexity? Do you: * Play Pontoon or Blackjack? * What table do you select? * How many rounds do you need to observe (what is the value of X), given that the casino can use no more than 8 decks of cards for either game? Each table has between 2 and 6 players. * How long did you stand around while finding a table? I'm calling this the \"**standing gambler problem**\" for lack of a better term. Please feel free to refine it. **Additional** Where would this be useful if not in a casino? **Final** I'm not looking for a magic gambling bullet. I just noticed a problem which became a bone that my brain simply won't stop chewing. I'm especially interested in applications way beyond visiting a casino."} {"_id": "142836", "title": "Achieving forward compatibility with C++11", "text": "I work on a large software application that must run on several platforms. Some of these platforms support some features of C++11 (e.g. MSVS 2010) and some don't support any (e.g. GCC 4.3.x). I expect this situation to continue for several years (my best guess: 3-5 years). Given that, I would like to set up a compatibility interface such that (to whatever degree possible) people can write C++11 code that will still compile with older compilers with a minimum of maintenance. Overall, the goal is to minimize #ifdef's as much as reasonably possible while still enabling basic C++11 syntax/features on the platforms that support them, and provide emulation on the platforms that don't. Let's start with std::move().
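One way to make the table-selection criteria above concrete is a straight lexicographic ranking over the statistics gathered in the X observed rounds: fewest decks first, then most playable rounds for the bankroll Z, then lowest observed loss. A toy sketch under those stated assumptions; estimating the deck count itself is the hard, card-counting part and is out of scope here:

```java
import java.util.Comparator;
import java.util.List;

public class TablePicker {

    // All fields are estimates taken from the X observed rounds.
    record Table(String name, int estimatedDecks, double minBet, double avgLossPerPlayer) {}

    static Table best(List<Table> tables, double bankrollZ) {
        return tables.stream()
                .min(Comparator
                        .comparingInt(Table::estimatedDecks)           // fewer decks first
                        .thenComparing(t -> -(bankrollZ / t.minBet())) // more affordable rounds
                        .thenComparingDouble(Table::avgLossPerPlayer)) // gentler observed losses
                .orElseThrow();
    }
}
```

The ranking itself is O(n log n) in the number of tables; the real complexity question, as the post hints, lives in how large X must be before the deck estimates are trustworthy.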
The most obvious way to achieve compatibility would be to put something like this in a common header file: #if !defined(HAS_STD_MOVE) namespace std { // C++11 emulation template inline T& move(T& v) { return v; } template inline const T& move(const T& v) { return v; } } #endif // !defined(HAS_STD_MOVE) This allows people to write things like std::vector x = std::move(y); ... with impunity. It does what they want in C++11 and it does the best it can in C++03. When we finally drop the last of the C++03 compilers, this code can remain as is. However, according to the standard, it is illegal to inject new symbols into the `std` namespace. That's the theory. My question is: _practically speaking_ is there any harm in doing this as a way of achieving forward compatibility?"} {"_id": "193744", "title": "Is there a better way to model a many-to-many relationship on the same table with Entity Framework?", "text": "# The context We're building a web application using Entity Framework 5.0. One of the requirements is that it should be possible for the administrators to link related products so that when someone browses to a product we can render a list \" _You might also like these products:_ \". Since a product can be linked to many products we need a many-to-many relationship for this. The table in the database could look something like this: _________________________ | LinkedProducts | |-------------------------| | ProductId1 | ProductId2 | |------------|------------| | 1 | 2 | |------------|------------| | 1 | 3 | |------------|------------| | 2 | 4 | |------------|------------| An additional requirement is that the linking should be bidirectional. This means that with the sample data from the table above, when you browse to product 2, you should get product 1 and 4 in the list. So for a given `ProductId`, you should be able to get its linked products from column `ProductId1` and `ProductId2`. # What I've come up with The product entity: public class Product { public int ProductId { get; set; } public string Name { get; set; } public virtual ICollection References { get; set; } public virtual ICollection ReferencedBy { get; set; } public Product() { References = new List(); ReferencedBy = new List(); } } The product mapping: public class ProductMapping : EntityTypeConfiguration { public ProductMapping() { HasKey(t => t.ProductId); Property(t => t.Name).IsRequired().HasMaxLength(150); ToTable(\"Products\"); Property(t => t.ProductId).HasColumnName(\"ProductId\"); Property(t => t.Name).HasColumnName(\"Name\"); HasMany(x => x.References).WithMany(x => x.ReferencedBy).Map(map => { map.ToTable(\"LinkedProducts\"); map.MapLeftKey(\"ProductId1\"); map.MapRightKey(\"ProductId2\"); }); } } This all works. To display the list of linked products for a certain product, I can just get the union of `References` and `ReferencedBy`: var product = db.Find(2); var linkedProducts = product.References.Union(product.ReferencedBy); # The problem As I said, this works, but I don't really like it and I was wondering if there is a better way to deal with a situation like this when working with Entity Framework. # The solution I liked Ryathal's suggestion to add two records to the database when linking products. So, when linking product 1 to product 2, I now insert two records: `{ 1, 2 }` and `{ 2, 1 }`. This also allows me to remove the `ReferencedBy` collection in the `Product` class so that only one collections remains: `References`. 
To link to products: product1.References.Add(product2); product2.References.Add(product1); To remove the link between products: product1.References.Remove(product2); product2.References.Remove(product1); Here is my new `Product` class with the mapping: public class Product { public int ProductId { get; set; } public string Name { get; set; } public virtual ICollection References { get; set; } public Product() { References = new List(); } } The product mapping: public class ProductMapping : EntityTypeConfiguration { public ProductMapping() { HasKey(t => t.ProductId); Property(t => t.Name).IsRequired().HasMaxLength(150); ToTable(\"Products\"); Property(t => t.ProductId).HasColumnName(\"ProductId\"); Property(t => t.Name).HasColumnName(\"Name\"); HasMany(x => x.References).WithMany().Map(map => { map.ToTable(\"LinkedProducts\"); map.MapLeftKey(\"ProductId1\"); map.MapRightKey(\"ProductId2\"); }); } }"} {"_id": "221762", "title": "Why doesn't Java 8 include immutable collections?", "text": "The Java team has done a ton of great work removing barriers to functional programming in Java 8. In particular, the changes to the java.util Collections do a great job of chaining transformations into very fast streamed operations. Considering how good a job they have done adding first class functions and functional methods on collections, why have they completely failed to provide immutable collections or even immutable collection interfaces? Without changing any existing code, the Java team could at any time add immutable interfaces that are the same as the mutable ones, minus the \"set\" methods and make the existing interfaces extend from them, like this: Iterable (already immutable) | ImmutableCollection _______/ / \\ \\___________ / / \\ \\ Collection ImmutableList ImmutableSet ImmutableMap ... \\ \\ \\_________|______________|__________ | \\ \\___________|____________ | \\ | \\___________ | \\ | \\ | List Set Map ... Sure, operations like List.add() and Map.put() currently return a boolean or previous value for the given key to indicate whether the operation succeeded or failed. Immutable collections would have to treat such methods as factories and return a new collection containing the added element - which is incompatible with the current signature. But that could be worked-around by using a different method name like ImmutableList.append() or .addAt() and ImmutableMap.putEntry(). The resulting verbosity would be more than outweighed by the benefits of working with immutable collections, and the type system would prevent errors of calling the wrong method. Over time, the old methods could be deprecated. Wins of immutable collections: * Simplicity - reasoning about code is simpler when the underlying data does not change * Documentation - if a method takes an immutable collection interface, you know it isn't going to modify that collection * Concurrency - immutable collections can be shared safely across threads As someone who has tasted languages which assume immutability, it is very hard to go back to the Wild West of rampant mutation. Clojure's collections (sequence abstraction) already have everything that Java 8 collections provide, plus immutability (though maybe using extra memory and time due to synchronized linked-lists instead of streams). Scala has both mutable and immutable collections with a full set of operations, and though those operations are eager, calling .iterator gives a lazy view (and there are other ways of lazily evaluating them). 
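For readers coming from Java, the final design in the Entity Framework question above translates almost one-to-one to JPA: a single self-referencing many-to-many collection over the LinkedProducts join table, with both directions inserted on link. A sketch using `javax.persistence` annotations:

```java
import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Entity
public class Product {

    @Id
    @GeneratedValue
    private long productId;

    private String name;

    // Self-referencing many-to-many over the same join table as in the post.
    @ManyToMany
    @JoinTable(name = "LinkedProducts",
            joinColumns = @JoinColumn(name = "ProductId1"),
            inverseJoinColumns = @JoinColumn(name = "ProductId2"))
    private Set<Product> references = new HashSet<>();

    // Two rows per link keeps the relation bidirectional, as in the question.
    public void linkTo(Product other) {
        references.add(other);
        other.references.add(this);
    }
}
```

The two-rows-per-link trade is the same in both stacks: slightly more storage and a two-step insert in exchange for a single collection and a trivial read path.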
I don't see how Java can continue to compete without immutable collections. Can someone point me to the history or discussion about this? Surely it's public somewhere."} {"_id": "193745", "title": "UDP order of packets with direct connection", "text": "Suppose I have two systems (A and B) running on a LAN (intranet) which are directly connected. There are no routers in the middle. In this case, if system A sends a few UDP packets every few milliseconds to system B: Is it possible that system B receives the packets in a different order? Please note that I'm not asking whether to use TCP or UDP. I'm interested in whether the above scenario will have packets out of order - I'm aware that UDP packets are not guaranteed to arrive in order."} {"_id": "240867", "title": "Why are there no function pointers in Java?", "text": "Lately I started studying different interesting concepts that exist in languages other than Java. Since the only language I've ever programmed with is Java, a lot of these concepts are very new to me. So this question may be very naive :). I learned recently about first class functions and function pointers. Why are there no function pointers in Java? Or at least some variation of them, like delegates in C#? Maybe it's just the excitement of learning about this concept, but it seems to me like it could be a powerful feature in the language."} {"_id": "220414", "title": "How to store Status for Reservation table", "text": "I have a reservation table (a user fills out a form to request a reservation). It has two parts that need to be confirmed: isReservationAccepted (decline, accept, waiting) and hasReservationBeenSent (sent, declined, waiting). Not sure how to structure my table; I'm thinking about making a lookup table for each column, or maybe just creating them as int and having 1 = accepted/sent, 2 = declined, 3 = waiting. Not sure if there is a better way (there probably is)."} {"_id": "57529", "title": "Sharing code in LGPL and proprietary software", "text": "I'm working on a piece of software that'll be released as a dll under LGPL. The software interfaces with hardware from a small company that has provided me with the needed libraries and some code to use them correctly (not only headers, but it's all in a separate file). As far as I know, the same code is used in their proprietary software that they don't intend to open source, but they'd be fine releasing the piece of code they've given me. Now here's the question: What license could be used on the code I got from the company? I guess using GPL or LGPL would make them violate GPL when using the same code in their other software. Is MIT a good idea? Is it ok to just include a file with an MIT license on it in my otherwise LGPL'd project? Since I'm not the copyright holder, I'd have to ask the company to apply the license obviously, but that shouldn't be a problem. Thanks /Martin"} {"_id": "104609", "title": "Infinite Bitmap", "text": "I'd like to build a bitmap at runtime. The bitmap should be scalable on all sides and pixel access should be quite efficient. ![Some illustration](http://img546.imageshack.us/img546/4995/maptm.jpg) Between and after the commands shown in the picture, Map.setPixel() and Map.getPixel() should set/return data saved in the bitmap.
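For context on the immutable-collections question above: what the JDK offered at the time was runtime-checked unmodifiable views rather than mutator-free interfaces, which is exactly the gap the question describes. A small demonstration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableDemo {
    public static void main(String[] args) {
        List<String> source = new ArrayList<>();
        source.add("a");
        source.add("b");

        // Defensive copy + unmodifiable wrapper: safe, but only enforced at runtime.
        List<String> frozen = Collections.unmodifiableList(new ArrayList<>(source));

        try {
            frozen.add("c"); // compiles fine; the type system can't stop it
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected at runtime, not at compile time");
        }
    }
}
```

Guava's ImmutableList and, later, the JDK's List.of()/List.copyOf() improve the ergonomics, but the interfaces still carry the mutators, so the documentation-by-type benefit the question asks for remains missing.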
I don't expect an implementation, just a concept for how to allocate memory in such a way that setPixel()/getPixel() are as fast as possible."} {"_id": "199468", "title": "How to record/store edits?", "text": "In many programs and web apps (Stack Exchange included) the program is able to backtrack what edits were made to a piece. My issue is similar: I want to be able to store a \"timeline\" of edits, where the user can go back and see what they typed at a specific time or when they typed certain words. What's the standard way to do this? Also, in things like Google Docs and other programs, they automatically \"group\" actions together (like if I delete something by hitting backspace 5 times, it knows I was deleting one word); any ideas on how to do that too?"} {"_id": "57522", "title": "Modified Strategy Design Pattern", "text": "I've started looking into Design Patterns recently, and one thing I'm coding would suit the Strategy pattern perfectly, except for one small difference. Essentially, some (but not all) of my algorithms need an extra parameter or two passed to them. So I'll either need to * pass them an extra parameter when I invoke their calculate method or * store them as variables inside the ConcreteAlgorithm class, and be able to update them before I call the algorithm. Is there a design pattern for this need / How could I implement this while sticking to the Strategy Pattern? I've considered passing the client object to all the algorithms, and storing the variables in there, then using that only when the particular algorithm needs it. However, I think this is both unwieldy, and defeats the point of the strategy pattern. Just to be clear I'm implementing in Java, and so don't have the luxury of optional parameters (which would solve this nicely)."} {"_id": "57526", "title": "Computer Science vs. Game Programming", "text": "I'm interested in going to University in the hope of taking programming to a professional level, in particular I would like to go into Game Programming. Would I benefit more from going to a University and getting a degree in a \"Game Programming\" type course, or would a more general degree in Computer Science be better?"} {"_id": "194286", "title": "How do you store \"fuzzy dates\" into a database?", "text": "This is a problem I've run into a few times. Imagine you have a record that you want to store into a database table. This table has a DateTime column called \"date_created\". This one particular record was created a long time ago, and you're not really sure about the exact date, but you know the year and month. Other records you know just the year. Other records you know the day, month and year. You can't use a DateTime field, because \"May 1978\" isn't a valid date. If you split it up into multiple columns, you lose the ability to query. Has anyone else run into this? If so, how did you handle it? **Edit:** To clarify the system I'm building, it is a system that tracks archives. Some content was produced a long time ago, and all that we know is \"May 1978\". I could store it as May 1 1978, but only with some way to denote that this date is only accurate to the month. That way some years later when I'm retrieving that archive, I'm not confused when the dates don't match up. For my purposes, it is important to differentiate \"unknown day in May 1978\" from \"May 1st, 1978\".
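One memory-allocation concept for the infinite bitmap above: don't resize anything. Store fixed-size tiles in a hash map keyed by tile coordinates, so growing in any direction just materializes new tiles, and both accessors stay O(1). A sketch (tile size and the zero default are arbitrary choices):

```java
import java.util.HashMap;
import java.util.Map;

public class SparseBitmap {
    private static final int TILE = 64; // 64x64 pixels per tile
    private final Map<Long, int[]> tiles = new HashMap<>();

    // Pack the tile coordinates into one key: high 32 bits = tx, low 32 = ty.
    private static long key(int tx, int ty) {
        return (((long) tx) << 32) ^ (ty & 0xffffffffL);
    }

    public void setPixel(int x, int y, int value) {
        int[] tile = tiles.computeIfAbsent(
                key(Math.floorDiv(x, TILE), Math.floorDiv(y, TILE)),
                k -> new int[TILE * TILE]);
        tile[Math.floorMod(y, TILE) * TILE + Math.floorMod(x, TILE)] = value;
    }

    public int getPixel(int x, int y) {
        int[] tile = tiles.get(key(Math.floorDiv(x, TILE), Math.floorDiv(y, TILE)));
        return tile == null ? 0 : tile[Math.floorMod(y, TILE) * TILE + Math.floorMod(x, TILE)];
    }
}
```

`floorDiv`/`floorMod` keep negative coordinates working, so the bitmap extends in all four directions without any copying or re-anchoring.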
Also, I would not want to store the unknowns as 0, like "May 0, 1978", because most database systems will reject that as an invalid date value."} {"_id": "223693", "title": "Using MVC strictly with DAAB?", "text": "I am the only one at my company that is familiar with MVC, and they are getting more pressure to modernize and switch to MVC for future projects, so I was tasked with creating a template to use as a base for any training or as a base for new projects. This is perfectly fine and I'm glad to do it. However, one absolute requirement is that I use the EL Data Access Block. Doesn't this kinda defeat the purpose of MVC and using an ORM for simpler data code? I have tried looking everywhere online for any standard practices when using MVC with DAAB, but I cannot find a single good article or tutorial or code anywhere, which leads me to believe I should not do it this way, but I have to... So, on that note, is it possible for me to use EL Data Access for the Model and set it up so that it can be strongly typed and get it to basically act kinda like EF? I was thinking that I could just create a T4 template or something to help generate the models based off of the DB tables and add as much of the CRUD operations as I could. No clue if this will be a good idea though."} {"_id": "24997", "title": "Can all the recursive functions be coded with iterations?", "text": "What are the advantages of recursion? Some programming languages can optimize tail recursion, but, still in general terms, recursion consumes more resources than regular loops. Is it possible to have an iterative version of any given recursive function?"} {"_id": "223691", "title": "Why would you opt to fully qualify a package instead of importing it?", "text": "In Java, to print the date we could do either of the following: **Fully qualified** public class MyMain { /** * @param args */ public static void main(String[] args) { System.out.println(new java.util.Date()); } } **Using an import** import java.util.*; public class MyMain { /** * @param args */ public static void main(String[] args) { System.out.println(new Date()); } } What are the advantages and disadvantages of each way? For me, using an import seems tidier, as you can see at the top of your class just what packages are being used."} {"_id": "141563", "title": "Should the main method consist only of object creations and method calls?", "text": "A friend of mine told me that the best practice is that the class containing the `main` method should be named `Main` and contain only the `main` method. Also, the `main` method should only parse inputs, create other objects and call other methods. The `Main` class and `main` method shouldn't do anything else. Basically, what he is saying is that the class containing the `main` method should look like: public class Main { public static void main(String[] args) { //parse inputs //create other objects //call methods } } Is this the best practice?"} {"_id": "75499", "title": "Single line comments for multiple indented lines of code", "text": "After many years of coding, trying various programming styles, and weeding out unreadable or impractical stuff, I still can't figure out one thing: what is the best way to single-line-comment multiple lines of indented code. 1. setup_checkpoint(); // if (object->looks_suspicious()) // guard->full_body_scan(object); 2. setup_checkpoint(); // if (object->looks_suspicious()) // guard->full_body_scan(object); 3.
setup_checkpoint(); // if (object->looks_suspicious()) // guard->full_body_scan(object); Please don't suggest multiline commenting, because sometimes you want to leave that possibility for bigger blocks, which may include smaller pieces of comments."} {"_id": "141564", "title": "why are transaction monitors on decline? or are they?", "text": "http://www.itjobswatch.co.uk/jobs/uk/cics.do http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do Look at the demand for programmers (% of job ads in which the keyword appears), first graph under the table. It seems like demand for CICS and Tuxedo has fallen from 2.5% and 1% respectively to almost zero. To me, it seems bizarre: now we have more networked and internet-enabled machines than ever before. And most of them are talking to some kind of database. So it would seem that use of products whose developers spent the last 20-30 years working on distributing, coordinating and optimizing transactions should be on the rise. And it appears they're not. I can see a few causes but can't tell whether they are true: 1. We forgot that concurrency and distribution are really hard, and are redoing it all by ourselves, in Java, badly. 2. Erlang killed them all. 3. Projects nowadays have changed character: most business software has already been built and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, "but it was a small specialized side project" kind of thing). 4. BigData is the emphasis now and BigData doesn't need transactions very much (?). None of those explanations seem particularly convincing to me, which is why I'm looking for a better one. Anyone?"} {"_id": "75493", "title": ""Accept the human condition" is one of lean software development values. Can you elaborate?", "text": "The Lean Software and System Consortium 2011 conference, which took place last week, stated the vision and values of lean software development. Here are the six values of lean software development as photographed by one of the attendees. Number 1 is "**accept the human condition**." While I can sense the humanistic and maybe even anti-_Taylorist_ mood in this statement, I'd like to ask the experts to explain this principle and how to apply it in our daily work."} {"_id": "75490", "title": "Does it make me a bad programmer if I dislike the Agile methodology?", "text": "I like the small iterations. I like the unit tests. I like code review. What I don't like is starting off with little or no documentation. Am I alone in this? Do I simply have a misunderstanding of the process? Any thoughts would be appreciated."} {"_id": "26179", "title": "Why do people disable JavaScript?", "text": "I asked a question yesterday, Should I Bother to Develop For JavaScript Disabled?. I think the consensus is: yes, I should develop for JavaScript disabled. Now I just want to understand why users disable JS. It seems many developers (I guess the people who answered the question are developers) disable JS. Why is that? Why do users disable JS? For security? Speed? Or what?"} {"_id": "209583", "title": "License for academic use only?", "text": "I'm developing a software package and would like to make it (and the source) available for personal, academic, or educational use, but _not for commercial use_. Is there such a license? Ideally, it should be reciprocal, so that if someone publishes a modified version, they'll have to grant the same rights.
Alternatively, would it be possible to modify the GPLv3 in such a way that it forbids commercial use and grants rights only to non-commercial users?"} {"_id": "209581", "title": "Offline data transaction processing capabilities in a context of poor internet connectivity", "text": "We have a core banking product developed in ASP.NET with C#, using a Microsoft SQL Server 2008 database. The product is hosted on a centralized server and works well at all locations (bank branches) where connectivity is good and stable. We need to implement this solution in a region where connectivity is always an issue. Since it is a core-banking solution, there should not be any service lapse to the customer because of technical issues (viz., connectivity). How do we design the architecture for this and ensure that there are no transactional issues at any given point in time, while at the same time the centralized database contains the latest data from all branches?"} {"_id": "204757", "title": "Am I approaching learning Java correctly?", "text": "(Originally posted on Stack Overflow.) I'm sorry if this question is not appropriate for Stack Overflow; I know this is not a technical question, but I feel like it is relevant. I have always been interested in programming; I have messed around with many programming languages and different game engines, making simple apps and things like that. I have been doing Java for a little over a year. Because of school, I did Java mostly on and off. I would do it for a month or so, then pick it up again 6-8 weeks later... Anyway, my point is that I know the basics of Java quite well. This is because every time I jumped back into Java, I would recap from the beginning. I understand I know hardly anything. That being said, I really want to learn more about the language and get better at it. I am tired of writing System.out.print() statements; I feel like I have done almost everything I can do with them. I have also done basic work with GridLayouts and Swing. Since school is out I have been doing a lot of Java, at least a few hours a day. I have been following along with these tutorials: http://zetcode.com/tutorials/javagamestutorial/ The first 3 or so tutorials I completely understand, but after that it gets a bit difficult. I am not here to ask a specific question about code, but I would like to know the opinion of an experienced Java programmer. Am I approaching this the correct way? What is the best way to learn the most Java? I feel like it's gotten to the point in the tutorials where I am not benefiting from them like I was in the beginning. Should I stick with what I am doing? Attempt to memorize the code? Could someone point me to somewhere else where I could learn? I have never taken a class on programming; I will be a senior in high school."} {"_id": "187996", "title": "Why do large websites use different languages for the backend and frontend?", "text": "My understanding from small MVC applications is that you have the front end, which deals with HTML, JS, jQuery, etc., and you have the back end, which consists of your controllers and models. However, when I talk to developers from large companies, they often mention having a frontend tier and a backend tier. So sometimes I might hear that they have a frontend with C# and a backend with Java. Why would any company want a backend and frontend in different languages? Does this help the large website scale better?
When people say that their frontend is built in C#, does this mean that they are using a framework for the frontend (like .NET) and an additional framework on the backend (such as Spring)? Or does it mean something entirely different?"} {"_id": "187994", "title": "Is sending password to user email secure?", "text": "How secure is sending passwords through email to a user, given that email isn't secured by HTTPS? What is the best way to secure it? Should I use encryption?"} {"_id": "187990", "title": "Should web service response use a base class or generic class?", "text": "In my RESTful WCF web service I have something like the following response object. public class WebResponse<T> { public bool Success { get; set; } public T Data { get; set; } public string ErrorMessage { get; set; } } This way I can capture any errors that occur during the request to the web service. It also separates the data object from the response object. Another way this could be done is with a response base class like this. public abstract class WebResponseBase { public bool Success { get; set; } public string ErrorMessage { get; set; } } public class CarResponse : WebResponseBase { public string Manufacturer { get; set; } public string Make { get; set; } } I personally prefer the first method as it gives me a clear separation of the data and the response/error status, and allows me to easily return lists/arrays etc. without having to create multiple classes. Are there any benefits to using the base class? Are there any limitations to using the generic class?"} {"_id": "182319", "title": "Effective implementation of "array" of type Int X String -> String in .NET or in general", "text": "The question in general is: is there a more effective way of implementing a table with a structure like `Dictionary<int, Dictionary<string, string>>`? The reason I am asking this is that I have run a few performance tests and it does not perform well for data with > 5M rows. Now, I don't really need this amount of data, but I was wondering if there is a more effective way. It could also help performance for smaller tables with thousands of rows. Last but not least, I am interested in what _COULD_ be done to improve it. What I have thought of is to use string[][] and have some method transform string rows/columns to numbers. That would however require a quite significant rewrite of my work so far. Is there something simpler? I need rows to be able to handle gaps. Background on my project: I have a home-brewed structure of objects that represents a table along with some additional functionality that I need. I have a table called T, and it stores data (rows) in a `Dictionary<int, TRow>`. Each TRow has another `Dictionary<string, TCell>` that represents the row data, in which TCells are indexed by column names. TCell is basically a wrapper around a simple string. The table and each row have a schema definition (column -> {INT, DOUBLE, STRING, BOOL, ...}) that is parsed when getting data from the table by methods like .getBool(int row, string column) etc. Each object (T, TRow, TCell) has quite a lot of helper methods that I use, so they are not simple wrappers with get methods. EDIT TO ANSWER FOLLOW-UP QUESTIONS: The table is meant for general purpose. No special focus on reading/writing only. The table is often initially loaded from a result set produced by a stored procedure in the database and then only read from - but not exclusively. The composite key is an interesting idea, but that would break my T, TRow, TCell structure, I am afraid.
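For what it's worth, a composite key does not have to leak into the public T/TRow/TCell structure; it can stay an internal storage detail. A rough C# sketch of that idea, with every name hypothetical and the cell value simplified to a string:

```csharp
using System;
using System.Collections.Generic;

// Composite key kept internal: one flat dictionary instead of a
// dictionary of dictionaries, so one hash lookup and one allocation per cell.
readonly struct CellKey : IEquatable<CellKey>
{
    public readonly int Row;
    public readonly string Column;
    public CellKey(int row, string column) { Row = row; Column = column; }

    public bool Equals(CellKey other) => Row == other.Row && Column == other.Column;
    public override bool Equals(object obj) => obj is CellKey k && Equals(k);
    public override int GetHashCode() => HashCode.Combine(Row, Column);
}

class FlatTable
{
    readonly Dictionary<CellKey, string> cells = new Dictionary<CellKey, string>();

    public void Set(int row, string column, string value)
        => cells[new CellKey(row, column)] = value;

    public bool TryGet(int row, string column, out string value)
        => cells.TryGetValue(new CellKey(row, column), out value);
}
```

A TRow-style accessor can still be layered on top, e.g. by keeping a per-row list of column names, while the cells themselves live in one flat dictionary.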
The Dictionary INT X STRING -> STRING is only a simplification; as written in my last paragraph, the table T has Dictionary<int, TRow> and TRow has Dictionary<string, string>. The reason I need to keep Table, Row and Cell broken up is that sometimes I work directly with rows, e.g. some method can return a single row, etc. Any ideas please? Or is there nothing better :/"} {"_id": "147928", "title": "Does the structured programming definition only consider imperative programming?", "text": "Does the structured programming definition only consider imperative programming? By this I mean: does the definition of structured programming automatically exclude functional programming (in the most common usage, by which I mean not necessarily pure-functional programming, but something like Clojure)? Structured programming, at least by the definitions I've found, seems to really be saying: "good programming shouldn't use goto, and should be modular". That doesn't necessarily exclude functional programming, yet most definitions seem to begin with "... is a subset of imperative programming". I'm looking for a bit of clarification, I think. BTW, I have read "What's The Difference Between Imperative, Procedural and Structured Programming?", which is a pretty good historical description."} {"_id": "75144", "title": "Best Practises in Android Resources Naming", "text": "There are a lot of Android resource types: layouts, strings, drawables and so on. I understand that the readability of their names is important, but I cannot come up with a table of rules for how to name them in the best way. Are there any best practices for that?"} {"_id": "199467", "title": "Simple way to deploy Winform application to website", "text": "First of all, I know there is no perfect answer to my question. It is a bit personal, but I'm sure many programmers face the same dilemma when they move from regular desktop applications to client/server web applications: what is the "best" language/framework to use? There are a lot of great solutions out there, all having more or less the same advantages and features. So, I would like some advice based on my needs and background. I'm fairly experienced with VB and PHP, and I have some JavaScript and C# knowledge. I currently have a perfectly functional C# WinForms app that I would like to deploy to my website. The application uses the user's webcam to recognize a game card through a perceptual hashing algorithm and displays details of the best-matched card from a MySQL database. Here's how I'd like the web version to work: * the server sends a recordset of the whole database's card ID/hash pairs to the client * the client "scans" his card with his webcam and creates a hash * the client searches for a match between this hash and the ones in the recordset * the client returns the best match to the server * the server displays information about the card from the database My objectives: * limit (if possible) the need for the user to install 3rd-party software (Flash, Java, ActiveX or other plugins) * create a solution that integrates well into a Windows/Apache/PHP environment * create a solution that is, ideally, cross-platform * use a language/framework combination with which it's simple to write, debug and maintain code So far, I'm eyeing Python on Eclipse or JavaScript on Visual Studio. I read interesting stuff about Ruby on Rails, but the learning curve might be a bit steep for me.
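Whichever stack ends up hosting it, the best-match step described above is only a few lines. A sketch in C#, mirroring the existing WinForms app; the 64-bit hash format and all names here are assumptions:

```csharp
using System.Collections.Generic;
using System.Numerics;

static class CardMatcher
{
    // Hamming distance between two 64-bit perceptual hashes:
    // the number of differing bits, i.e. the popcount of the XOR.
    static int Distance(ulong a, ulong b) => BitOperations.PopCount(a ^ b);

    // Returns the id of the card whose stored hash is closest to the scanned hash.
    public static int BestMatch(ulong scanned, IReadOnlyDictionary<int, ulong> cards)
    {
        int bestId = -1, bestDist = int.MaxValue;
        foreach (var pair in cards)
        {
            int d = Distance(scanned, pair.Value);
            if (d < bestDist) { bestDist = d; bestId = pair.Key; }
        }
        return bestId; // callers can threshold bestDist to reject bad scans
    }
}
```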
I don't mind learning a new language and coding the app again from scratch, but I'd like as much as possible to use the skills and code I already have. Any thoughts? Thanks!"} {"_id": "75147", "title": "Is c# actually a multiplatform language?", "text": "C# (and the .NET platform in general) looks like it's becoming a good option for multi-targeting apps: * the official MS .NET Framework: full-blown Windows development, ASP.NET dev, Windows Phone dev, etc. * Mono and all its derivatives (MonoTouch, MonoDroid): the rest of the world. These tools are RTM today. * Does this mean that C# is becoming a good language for targeting the most popular platforms: desktop, web and mobile? * Is it still better to use the "native" language of the target platforms (Objective-C, Java, etc.)? * Is it only a smoke screen and marketing language? Please note that I'm conscious I won't be able to copy/paste the code between platforms. But I'm sure the lower layers of applications (models, business, etc.) can be reused, and I know I'll have to adapt the higher layers (GUI, etc.) to the platform. My goal is more focused on required skills than technical code sharing. [edit] I **am** a C# developer in a company that massively uses C#. That's why I talked about C#, in a plan to expand the range of target platforms in my company."} {"_id": "182313", "title": "find second smallest element in Fibonacci Heap", "text": "I need to describe an algorithm that finds the second smallest element in a Fibonacci heap using the operations Insert, ExtractMin, DecreaseKey and GetMin. The last one is an algorithm previously implemented to find and return the smallest element of the heap. I thought I'd start by extracting the minimum, which results in its children becoming roots. I could then use GetMin to find the second smallest element. But it seems to me that I'm overlooking other cases, because I don't know when to use Insert and DecreaseKey, and the way the question is phrased seems to suggest I should need them."} {"_id": "199466", "title": "Should I allow my client to self host?", "text": "I'm just starting out as a web designer. I'm trying to build up a portfolio for my own website. I'm concerned about designing a site and handing it over to the client. Once I do this I could lose access to the site. They can change the login information, and they may not need me to manage it. If the client changes anything on the site that compromises my design work, it will reflect on my business. My business' name will be on the homepage of the website. Is this a valid concern?"} {"_id": "182315", "title": "What will be the better way to access information from another object", "text": "I have a `page` object, which has `Paragraph` and `Image` object collections, and each `paragraph` has only the `image_id`(s) that are assigned to that paragraph. All other information about an image is stored in `Page->Image`. Now from the `paragraph` object I want to access the `image` information, where the paragraph has only the `image_id` and all other information about the image is in the `Page->Image` object. What would be the best way to access this info? Should I pass the `page` object into every `paragraph` constructor, or something else? I also can't change the class structure, as it was written by someone else."} {"_id": "138158", "title": "How do you do a technical interview when you can't communicate with the interviewer?", "text": "I just had a developer technical interview last night.
As with so many technical interviews, I was asked to solve and implement an algorithm within a set period of time (1 hour). But when the interview began, the interviewer started by introducing himself and his long 13-year career for 15 minutes or more (1/6 of my problem-solving time gone). Then I noticed I was really having trouble understanding his English, so for every question or explanation he gave, I had to ask him to repeat and explain many times. Eventually he gave up explaining one question and asked another one. In the end I didn't have enough time to implement the algorithm, even though I had the concept for the implementation. I did try to briefly explain my concept and how the implementation should work, but at the same time I was really concerned about whether he actually understood my explanation (in English). I feel I pretty much failed this interview, but I don't want to repeat this situation again. So what is the best way to get an interviewer to communicate properly with me, so we don't waste so much time repeating the same explanations? And to avoid letting him think I have no idea how to implement the algorithm? A side question: how can I cut a long introduction short?"} {"_id": "138159", "title": "C++, how many years experience?", "text": "As a little background, I've been programming for a long time now using various languages, systems, etc. I've come across the old problem of a recruiter wanting to know "how many years experience" I have of C++. I'm a little stumped, as I've bounced around it many times over a number of years. I don't think I can just add up the months/years. I'd put myself somewhere around the 3-5 year mark. I know it's a bit of a wide range, but I'm not really sure; with newer standards and libraries, the older stuff probably becomes deprecated. So, I wanted to ask your opinion. What would you expect a C++ programmer with (i) 3 years, and (ii) 5 years of experience (mainly on Unix/Linux systems) to be able to do? Perhaps more importantly, what difference would you expect to see between a programmer of 3 years compared to 5 years (and above)? I know this is all a bit vague and the correct answer is "it depends", but if anyone has a good opinion, I'd love to know."} {"_id": "209052", "title": "Shared hosting for a PHP application", "text": "I was reading Essential PHP Security, and chapter 8 talks about problems with hosting your PHP app in a shared hosting environment. Some of the problems he mentions are: **\- Exposed source code and file system browsing.** > a web server must be able to read the source code in order to execute it. > Since the web server is shared, a PHP script written by another developer on > the server can read arbitrary files. An attacker can also create a script > that browses the file system. **\- Exposed session data and session injection.** > By default, PHP stores session data in /tmp, which is writable by all users, > so Apache has permission to write session data there. A simple script can > allow other users to read, add, modify, or delete sessions. It's like everything is exposed and vulnerable if I use shared hosting this way. My questions: 1. Considering the book was published 8 years ago, are these problems still occurring, or have they been mitigated somehow in the last few years? 2. Why would one opt for shared hosting if it is going to cause these huge security concerns? 3. I understand that shared hosting is cheap, but there must be a safer alternative that is still cheaper than dedicated hosting? 4.
In case a customer asks me to develop an application that will be hosted on shared hosting, is there a foolproof way to develop a secure application, or is it just a recipe for disaster?"} {"_id": "247105", "title": "Calculate reachability of one point from another", "text": "I have an array which stores a set of positive x coordinates in a sorted way, say `arr={1, 4, 5, 9, 12, 45}`, etc., and I have a maximum distance `k` which I can go from one point to another, say `k=3`. Now, given two points `x` and `y` (`arr[x] < arr[y]`), I have to find whether I can reach from `x` to `y`. For example, if `x=1` and `y=4`, then I can go `1->2` and then `2->3`, but since the distance between points 3 and 4 is greater than 3 I can't go on, so in this case I can't reach. But if `x=1` and `y=2` then I can reach. It can be simply solved in O(n): I have created a for loop from `arr[x]` to `arr[y]`, and for each pair of adjacent points I check whether the distance between them is less than or equal to `k`. But I want a better algorithm. I am thinking of doing something like binary search. Can anybody please suggest a good algorithm?"} {"_id": "4879", "title": "How to improve the code writing effort?", "text": "Code needs to be written to file, one way or another. While all programmers should strive to write no more code than necessary, this "small" portion needs to be written nevertheless. What tips do you have for improving the code writing effort? Usage of IDEs? Different keyboards or character layouts? Minimal usage of the mouse? Code generation tools? What else can you think of?"} {"_id": "79871", "title": "Is Java Plug-in still relevant?", "text": "When I challenged the Chrome development team about their decision to block every version of the Java Plug-in by default (http://code.google.com/p/chromium/issues/detail?id=84001), they answered that the Java Plug-in is not widely used anymore. Google is also officially stating this: https://www.google.com/support/chrome/bin/answer.py?answer=1247383&hl=en-US. I'm aware that Applets as a tool for design purposes (banners, menus, etc.) are outdated, and I must admit that it has been a while since I developed something serious that used the Java Plug-in (I had some fun with JavaFX and Web Start draggable applications for learning purposes, but that's pretty much it). Still, the Java Plug-in accounts for an important part of my surfing experience (I'm the sad type of grownup still playing games such as RuneScape; my bank uses a Java Applet for the security keyboard, and several sites I often visit use Applets for things such as file uploading and authentication). My question here is: do you think that client-side Java web applications are still relevant? Please disregard desktop and server-side applications... We all know how popular Java is (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html)."} {"_id": "221413", "title": "Why is it so hard to recruit for compiler[-related] jobs?", "text": "Last week, a few colleagues and I were participating in career fairs at three major universities (two here in the US and one in England), where we were trying (without much success) to recruit for several compiler positions, ranging from internship, to entry-level, to more senior, for our team. To our surprise, 80% of the students that we talked to responded somewhere along the lines of "I want to build Android apps" when asked what they were interested in doing. (And the other 20%? "iPhone apps"!) Some even expressed openly that they did not "want to build a compiler, ..., it's boring"; so they said, and I quote.
So what is it about mobile apps that is so appealing to (young?) "developers" these days? And by the same token, why are compilers such a "boring" topic to them? (I don't necessarily think these two are mutually exclusive. One can certainly build a compiler for a mobile phone, but that's beside the point.) What can we do, if anything, to attract more talent, or even just interested candidates?"} {"_id": "199218", "title": "Is it good to have an interface full of methods which belong to different concepts, just to preserve the Liskov Principle?", "text": "I'm currently taking a course on Software Design, and I had a discussion in class with my professor and some classmates about a problem represented by the following scenario: **Scenario** > _Imagine we have a graphic application which lets us plan the interior > design of our future house using a top-down perspective. We are able to add > or remove some elements like furniture and change their position, but > obviously we can't move the walls of the house, because it was already > built._ **Solution 1** To solve this problem, some of my classmates proposed a solution that could be expressed using the following UML diagram: ![UML diagram for Solution 1](http://i.stack.imgur.com/nV7Ne.jpg) As you can see, they agreed on the use of a common interface called "Drawable" which represents the graphical objects displayed on the UI. The general class "App" manages a list of Drawables, and each of them has a set of methods. This interface is implemented by different classes such as Furniture, Wall or Window. The thing is that a Furniture object can be moved, but Walls and Windows cannot. So a Furniture would implement the 'move' method defined in Drawable; in contrast, Wall and Window would simply implement it as empty. At this point, other classmates and I complained about this decision, because the design does not enforce the constraints (like whether or not a Drawable object can be moved, depending on its nature). This way, the design will allow some new objects to be moved, although they probably shouldn't be. However, the professor said this design is good because it's flexible and transparent to the App class, since this class doesn't know what kind of instance it is managing. _Extended case_ In addition, we could think of an extreme case where we add several methods to the Drawable interface, such as 'sell', 'open' and 'lend'. Using the same approach described above, we would have an interface which could do anything, like Superman. Therefore, I believe this is a bad design solution, because we are mixing behaviors which belong to different concepts (Movable, Sellable, Openable, and so on). Also, we are allowing a Wall object to implement the 'sell' method, which makes no sense at all... but my professor still insisted on his point of view, and he didn't see the problem. **Solution 2 (based on the extended case)** Other people suggested we could add those interfaces (Movable, Sellable, Openable), each of them inheriting from the Drawable interface.
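A minimal sketch of that suggestion in C#-style syntax (the question is not tied to one language; the interface names come from the diagrams, while the method signatures are made up):

```csharp
interface IDrawable { void Draw(); void Click(); }

// Capability interfaces: each one extends IDrawable with a single behavior.
interface IMovable  : IDrawable { void Move(int dx, int dy); }
interface ISellable : IDrawable { void Sell(decimal price); }
interface IOpenable : IDrawable { void Open(); }

class Wall : IDrawable
{
    public void Draw()  { /* render wall */ }
    public void Click() { /* select wall */ }
}

class Furniture : IMovable, ISellable
{
    public void Draw()  { /* render furniture */ }
    public void Click() { /* select furniture */ }
    public void Move(int dx, int dy) { /* reposition */ }
    public void Sell(decimal price) { /* mark as sold */ }
}

class Window : IOpenable
{
    public void Draw()  { /* render window */ }
    public void Click() { /* select window */ }
    public void Open()  { /* toggle open state */ }
}

// The app keeps a List<IDrawable>; code that moves things asks for an
// IMovable, so a Wall can never even be handed to it.
```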
That way: * A Wall would implement Drawable directly * A Furniture would implement Movable and Sellable * A Window would implement just Openable The next diagram summarizes this approach: ![UML Diagram for Solution 2](http://i.stack.imgur.com/xvqVi.jpg) [Edit: The Drawable's move method should be removed; this is a mistake, because it is now placed in the Movable interface] **Questions and Doubts** Finally, once you have figured out the problem, could you help me and answer these questions, please? 1. Aren't we breaking the _Single Responsibility Principle_ with this super-mega-god-interface? 2. Are both approaches valid with respect to the _Liskov Substitution Principle_? I think it's broken in the first case, because the post-conditions are broken when some methods don't do anything. In any case, I'm not sure about the second approach either, because of the interface inheritance tree. 3. Maybe another alternative could be the definition of basic Drawable objects (with 'click' and 'draw' methods). We could then take advantage of some kind of mechanism like the Decorator pattern in order to add behaviors dynamically. But how? 4. If we use a language without interfaces, such as JavaScript or Python, how can we deal with this problem?"} {"_id": "70510", "title": "How can I prepare for an interview over TeamViewer and Skype?", "text": "I am in a big dilemma about what to do... Tomorrow I have an interview over TeamViewer and Skype for a web developer position in PHP. They want to conduct my technical interview there, so I think they will give me a PHP program to write. The thing is that PHP has a vast variety of functions, so it's not possible to keep all the functions in mind or at your fingertips: Google is really needed. I am aware of how to solve problems, but it may be the case that I don't know the exact syntax for the questions they ask. So what should I do in this case? How can I convince them that I know how something can be accomplished even when the exact syntax doesn't come to mind immediately? Also, what type of questions might they ask? _Something about me, so that maybe you can give your advice accordingly:_ I am a junior web developer with 14 months' experience in PHP. My skills: PHP, MySQL, Magento, WordPress, CSS, HTML, JavaScript..."} {"_id": "199211", "title": "Matrix Pattern Recognition Algorithm for a 2D space", "text": "I have a picture that I process with my program to obtain a list of coordinates. There is a matrix represented in the image. In an ideal test I would get only the sixteen central points of the squares of the matrix. But in actual tests I have some noise points. I want to use an algorithm to extract from the list of coordinates the group of 16 coordinates that best represents a matrix. * Example of found points: ![](http://i.stack.imgur.com/HhriV.png) * Example of desired result: ![](http://i.stack.imgur.com/WfCHq.png) How can I do this? Note: the matrix in the image can be rotated a little too, so a rotation-independent algorithm would be great."} {"_id": "199217", "title": "What did Rich Hickey mean when he said, "All that specificity [of interfaces/classes/types] kills your reuse!"", "text": "In Rich Hickey's thought-provoking GOTO conference keynote "The Value of Values", at 29 minutes he's talking about the overhead of a language like Java and makes a statement like, "All those interfaces kill your reuse." What does he mean? Is that true?
In my search for answers, I have run across: * The Principle of Least Knowledge, AKA the Law of Demeter, which encourages airtight API interfaces. Wikipedia also lists some disadvantages. * Kevlin Henney's Imperial Clothing Crisis, which argues that use, not reuse, is the appropriate goal. * Jack Diederich's "Stop Writing Classes" talk, which argues against over-engineering in general. Clearly, anything written badly enough will be useless. But how would the interface of a well-written API prevent that code from being used? There are examples throughout history of something made for one purpose being used more for something else. But in the software world, if you use something for a purpose it wasn't intended for, it usually breaks. I'm looking for one good example of a good interface preventing a legitimate but unintended use of some code. Does that exist? I can't picture it."} {"_id": "58874", "title": "I'm having trouble understanding these exercises wording", "text": "Exercise 1-20. Write a program detab that replaces tabs in the input with the proper number of blanks to space to the next tab stop. Assume a fixed set of tab stops, say every n columns. Should n be a variable or a symbolic parameter? Exercise 1-21. Write a program entab that replaces strings of blanks by the minimum number of tabs and blanks to achieve the same spacing. Use the same tab stops as for detab. When either a tab or a single blank would suffice to reach a tab stop, which should be given preference? Could you paraphrase these for me? Thanks."} {"_id": "18478", "title": "Tuple: toople or tuh-ple?", "text": "I hear it's another Europe vs. America dispute, just like with "router". At other times I hear that tuh-ple is specific to Python (and possibly some other languages too), while in mathematics and CS it's "toople" on all continents. What do you know about this?"} {"_id": "33498", "title": "Have unit test generators helped you when working with legacy code?", "text": "I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code."} {"_id": "59713", "title": "Best Development Methodology for One Person?", "text": "I spend a lot of time working on projects in which I am the sole developer, project manager, designer, QA person (Yes, I know... Bad!), and sometimes I'm even the client.
I've tried just about everything for planning projects and managing myself, from just sitting and working freestyle until the project is done, however long it takes, to a single-person version of Scrum in which I held a progress meeting with myself over a one-man burn-down chart every morning (not kidding). For those of you who spend much time working alone, what is the best way to organize yourself, manage large (for one person) projects, and keep productivity as high as possible?"} {"_id": "63048", "title": "Design and Development Methodologies for the single developer", "text": "At my work the developers typically all work separately; we may share projects, but often only do so when another developer isn't working on it (for reference, we're a consulting company, so someone may work on project A one month, and three months later I may too). What methodologies can we use to improve the quality of the code we produce when there is typically only one developer (at most three) on a project?"} {"_id": "232627", "title": "How to manage the whole project while you are the only resource", "text": "It's frustrating to handle and manage the whole project while being the only developer/coder/programmer on the project. How can we handle the project? Are there any steps to follow to complete the project within the limited given time?"} {"_id": "87152", "title": "Development methodology for single web developer?", "text": "> **Possible Duplicate:** > Design and Development Methodologies for the single developer I'm a web developer who mostly works with the LAMP stack when it comes to my own projects. Most of the time I just start coding on a project, fixing bugs and adding features as I go along. Often I'll try to use an existing solution such as Wordpress or Drupal. Now that I'm thinking of creating my own web application with businesses as the target group, I feel there's a **need for proper analysis and design**. Something lightweight for a **one person project** and still solid enough to handle requirements, user interfaces, security, etc. If you could **recommend methodologies and literature** I would be grateful."} {"_id": "232441", "title": "Encrypt and decrypt password for a specific application", "text": "I have a basic web application where users can log in and edit their profile. In the profile they can submit a username and a password for a different application. I'd like to take that password and encrypt it. Later, when I want to connect to that different application, I need to decrypt the password. Is there a common pattern for this scenario? Or some other advice you might have? Just in case it matters, I want to connect to JIRA without the user having to submit his password on every request / login to the page. Thanks, Sven **Update** To make it more clear: I have a web application where users can sign up/log in/etc.; this uses an authentication/authorization library (friend), and it's all working well. However, from my web app I want users to connect to a JIRA instance that they can choose, where they have to provide a username/password combo which I cannot hash, because I need to send it unhashed to the JIRA instance. What I am thinking of right now is to display a login dialog to the user as soon as he wants to query his JIRA instance. There he provides his username/password combo for JIRA, and I send a REST request to JIRA from the client side, so no username/password is sent to my server. I get a session id back, which I can use for further requests.
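For the original store-and-decrypt requirement, where hashing is ruled out because the plaintext must come back, the common pattern is symmetric encryption with a server-held key. A minimal C# sketch for illustration only; key storage and rotation are deliberately out of scope, and every name here is hypothetical:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class PasswordVault
{
    // The key must live outside the database (config store, environment, KMS...).
    public static byte[] Encrypt(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;                       // 32 bytes gives AES-256
        aes.GenerateIV();                    // fresh IV for every password
        using var ms = new MemoryStream();
        ms.Write(aes.IV, 0, aes.IV.Length);  // prepend the IV to the ciphertext
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        using (var sw = new StreamWriter(cs))
            sw.Write(plaintext);
        return ms.ToArray();
    }

    public static string Decrypt(byte[] blob, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        byte[] iv = new byte[16];
        Array.Copy(blob, iv, iv.Length);     // recover the prepended IV
        aes.IV = iv;
        using var ms = new MemoryStream(blob, iv.Length, blob.Length - iv.Length);
        using var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read);
        using var sr = new StreamReader(cs);
        return sr.ReadToEnd();
    }
}
```

This only relocates the problem to protecting the key, which is why the login-dialog variant above, which stores nothing at all, can be attractive.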
What do you think about that approach?"} {"_id": "232442", "title": "Unit testing, factories, and the Law of Demeter", "text": "Here's how my code works. I have an object that represents the current state of something akin to a shopping cart order, stored in a 3rd-party shopping API. In my controller code, I want to be able to call: myOrder.updateQuantity(2); In order to actually send the message to the third party, the third party also needs to know several things that are specific to THIS order, like the `orderID` and the `loginID`, which will not change in the lifetime of the application. So when I create `myOrder` originally, I inject a `MessageFactory`, which knows `loginID`. Then, when `updateQuantity` is called, the `Order` passes along `orderID`. The controlling code is easy to write. Another thread handles the callback and updates `Order` if its change was successful, or informs `Order` that its change failed if it was not. The problem is testing. Because the `Order` object depends on a `MessageFactory`, and it needs `MessageFactory` to return actual `Message`s (that it calls `.setOrderID()` on, for example), now I have to set up very complicated `MessageFactory` mocks. Additionally, I don't want to kill any fairies, as "Every time a Mock returns a Mock a fairy dies." How can I solve this problem while keeping the controller code just as simple? I read this question: http://stackoverflow.com/questions/791940/law-of-demeter-on-factory-pattern-and-dependency-injection but it didn't help, because it didn't talk about the testing problem. A few solutions I've thought of: 1. Somehow refactor the code to not require that the factory method return real objects. Perhaps it's less of a factory and more of a `MessageSender`? 2. Create a testing-only implementation of `MessageFactory`, and inject that. * * * The code is pretty involved; here's my attempt at an sscce: public class Order implements UpdateHandler { private final MessageFactory factory; private final MessageLayer layer; private OrderData data; // Package-private constructor; this should only be called by the OrderBuilder object. Order(OrderBuilder builder, OrderData initial) { this.factory = builder.getFactory(); this.layer = builder.getLayer(); this.data = initial; } // Lots of methods like this public String getItemID() { return data.getItemID(); } // Returns true if the message was placed in the outgoing network queue successfully. Doesn't block for receipt, though. public boolean updateQuantity(int newQuantity) { Message newMessage = factory.createOrderModification(data); // *** THIS IS THE KEY LINE *** // throws an NPE if factory is a mock. newMessage.setQuantity(newQuantity); return layer.send(newMessage); } // from interface UpdateHandler // gets called asynchronously @Override public void handleUpdate(OrderUpdate update) { data.handleUpdate(update); } }"} {"_id": "106112", "title": "Work experience instead of education?", "text": "I wanted to name this topic "Education vs. Experience", but that topic already exists. I've read that discussion, and though what I'd like to ask is related to that topic, the question is quite different. I started learning programming about 12 years ago. For the last 4 years I have been working as a developer in software outsourcing (located in Russia). I'm thinking about leaving Russia and moving somewhere else, like Australia (it doesn't matter where, basically).
I have a number of examples illustrating the general ability and success stories, but still there's a difference between all these people and my case. I have quite good technical experience - my primary areas are C++ and .NET. I have already participated in 7 projects based on different technologies/platforms (Windows, Linux, Android, Qt, .NET, etc.). So, I believe _I'm capable_ of working as a software developer. Let's just take it as "from a technical standpoint, this guy is absolutely OK". The only problem is, I don't have any education. So, here's the question: **In most cases, may I read words like "BS in CS, equivalent, or better" as "N years of experience"?** **Update**: Is there any sense in getting certificates like MCSD (for .NET)? I know there's a separate topic for this question, but I'm asking for the case where there's no education but there are certificates and experience."} {"_id": "247109", "title": "Multilayered enterprise application use of JAXB objects", "text": "I am asked to refactor and maintain an enterprise application. Normally I'm used to using the MVC design pattern. This time, however, I'd like to separate everything into layers (multilayered architecture). Something along the lines of (Microsoft Application Architecture Guide, 2nd Edition - October 2009): ![enter image description here](http://i.stack.imgur.com/Pqqq0.png) My data sources are in general XML files. These files are read and written with the use of JAXB, and this resides in the data layer. Whenever I load a specific XML file, a "provider" will use JAXB to construct a tree of objects representing the XML structure. These objects and this information need to be used inside the business, service and presentation layers. Per layer, the information needs to be treated differently, i.e. the business layer will enforce business rules on the objects/information, whereas the presentation layer needs additional UI information to have everything shown to the user. I'm looking for your tips / experiences when it comes down to using data sources throughout a multilayered application. Should I: 1. Create some sort of mapping, such that a new object tree is constructed in every layer? 2. Work along the lines of aggregation, such that my data sources can be decorated with extra functionality / information inside an object that lives in another layer? 3. Make the JAXB objects "fat", with an interface which supports all my needs that come from all other layers? I'd love to hear your thoughts!"} {"_id": "251577", "title": "What's the name of this category of variables (NEW, OLD, etc) in triggers?", "text": "I need to do some very specific web searching, but in order to do that I need to know the technical name of the category/type of variables like NEW, OLD, USER (there could be more) that you can access from within a trigger in an RDBMS without having to declare them. The question I want to post is "does `` support `` variables in triggers?" but I don't know what to put in the `` placeholder. For example, variables like `:var` in a query are called _bind variables_."} {"_id": "146038", "title": "API design and versioning using EJB", "text": "I have an API that is EJB-based (i.e. there are remote interfaces defined) that most of the clients use. As the client base grows, there are issues with updating the API and forcing clients to update to the latest version and interface definition. I would like to look at having a couple of versions of the API deployed at a time (i.e.
have multiple EAR files deployed with different versions of the API) to avoid forcing the clients to update as frequently. I am not concerned about the actual deployment of this, but instead am looking for thoughts and experiences that others have on using EJBs as an API client. * How do you support updating versions? Are clients required to update? * Does anyone run multiple versions in a production environment? Are there pros/cons? * Any other experiences or thoughts on this approach, and on having an EJB-centric API?"} {"_id": "30254", "title": "Why Garbage Collection if smart pointers are there", "text": "These days, so many languages are garbage collected. It is even available for C++ from third parties. But C++ has RAII and smart pointers. So what's the point of using garbage collection? Is it doing something extra? And in other languages like C#, if all the references are treated as smart pointers (keeping RAII aside), by specification and by implementation, will there still be any need for garbage collectors? If not, then why is this not so?"} {"_id": "108546", "title": "Is there a need for garbage collection in a stack-based language?", "text": "What is the need for garbage collection (GC) in a stack-based language? In a language like Forth or RPL (on HP calculators), is there a need for garbage collection? I would think, since output is popped off the stack, that there wouldn't be any need. Am I missing something?"} {"_id": "220822", "title": "Why not free memory as soon as its reference counter hits zero", "text": "A lot of languages like Java and C# have garbage collectors that free memory when that memory no longer has any reference. Yet they don't immediately free it after the reference counter hits zero; instead, every once in a while they check all the memory to see if it still has any references, and delete it if it doesn't. What is the benefit of doing it that way? The downside of doing it that way is that you lose the destructor, as you can't guarantee when it will be called. I would imagine that it is done that way because of performance, but has there been any study that shows that a garbage collector that works like that has better performance than the `std::shared_ptr` found in C++?"} {"_id": "185845", "title": "Design guidelines for this scenario in C#?", "text": "I have to create a validation system (I don't want to use Data Annotations or any other existing system) for my C# application using the .NET Compact Framework, where I have an `object` which contains many other objects. Many of the properties are dependent on each other, meaning they have some sort of dependencies. I can write a simple `validator` class which goes through all the properties and checks them one by one with lots of `if` and `else`, but I want to design something dynamic, e.g. I should be able to specify a list of dependencies on a property and specify a method which should be invoked during validation. I mean, some design guidelines would be appreciated.
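One dynamic shape this could take is a registry of named rules (delegates) per property, walked by a generic validator. The following is a rough C# sketch under that assumption; `LookupLimits` is a hypothetical helper, not an existing API:

```csharp
using System;
using System.Collections.Generic;

// One rule = the property name to report plus a predicate over the whole object,
// so cross-property dependencies can live inside the predicate.
public class Validator<T>
{
    private readonly List<KeyValuePair<string, Func<T, bool>>> rules =
        new List<KeyValuePair<string, Func<T, bool>>>();

    public void AddRule(string propertyName, Func<T, bool> isValid)
    {
        rules.Add(new KeyValuePair<string, Func<T, bool>>(propertyName, isValid));
    }

    public List<string> Validate(T target)
    {
        List<string> errors = new List<string>();
        foreach (var rule in rules)
        {
            if (!rule.Value(target))
                errors.Add(rule.Key);   // report the failing property's name
        }
        return errors;
    }
}

// Usage sketch; LookupLimits is a hypothetical helper for the category limits:
// var v = new Validator<MyBigClass>();
// v.AddRule("LowerPrice", c => {
//     var p = LookupLimits(c.Category);
//     return c.LowerPrice >= p.Min && c.LowerPrice < p.Max;
// });
// List<string> errors = v.Validate(instance);
```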
Example of the current approach (just for demo purposes): public enum Category { Toyota, Mercedes, Tata, Maruti } class MyBigClass { public Category Category { get; set; } public double LowerPrice { get; set; } public double UpperPrice { get; set; } public SomeOtherObject OtherObject { get; set; } public List<string> Validate() { List<string> listErrors = new List<string>(); ParameterInfo pInfo = null; switch (Category) { case Category.Toyota: pInfo = ParameterStorage.GetParameterInfo(PTYPE.Category_Toyota); break; case Category.Mercedes: pInfo = ParameterStorage.GetParameterInfo(PTYPE.Category_Mercedes); break; case Category.Tata: pInfo = ParameterStorage.GetParameterInfo(PTYPE.Category_Tata); break; case Category.Maruti: pInfo = ParameterStorage.GetParameterInfo(PTYPE.Category_Maruti); break; default: break; } if (LowerPrice < pInfo.Min || LowerPrice >= pInfo.Max) { listErrors.Add("LowerPrice"); } if (UpperPrice > pInfo.Max || UpperPrice <= pInfo.Min) { listErrors.Add("UpperPrice"); } return listErrors; } } public enum PTYPE { RATING, Category_Tata, Category_Toyota, Category_Mercedes, Category_Maruti } public class ParameterInfo { public PTYPE Type { get; set; } public int Min { get; set; } public int Max { get; set; } public int Default { get; set; } } public class ParameterStorage { private static Dictionary<PTYPE, ParameterInfo> _storage = new Dictionary<PTYPE, ParameterInfo>(); static ParameterStorage() { _storage.Add(PTYPE.Category_Maruti, new ParameterInfo { Type = PTYPE.Category_Maruti, Min = 50000, Max = 200000 }); _storage.Add(PTYPE.Category_Mercedes, new ParameterInfo { Type = PTYPE.Category_Mercedes, Min = 50000, Max = 800000 }); _storage.Add(PTYPE.Category_Toyota, new ParameterInfo { Type = PTYPE.Category_Toyota, Min = 50000, Max = 700000 }); _storage.Add(PTYPE.Category_Tata, new ParameterInfo { Type = PTYPE.Category_Tata, Min = 50000, Max = 500000 }); } public static ParameterInfo GetParameterInfo(PTYPE type) { ParameterInfo pInfo = null; _storage.TryGetValue(type, out pInfo); return pInfo; } } In the above example I have a `MyBigClass` which contains some properties and some other objects, and I have a `storage` class which keeps all the property limits required to validate a property. As described above, I have to get the `ParameterInfo` for each property and then compare; I am looking for some kind of automatic/dynamic way of doing the same."} {"_id": "36513", "title": "Why is the warranty disclaimer section of a licence usually (always?) shouted?", "text": "For example, the templates provided on the Open Source Initiative website for the 3-clause BSD License and the MIT License both include an all-caps warranty disclaimer, though the rest of the license is written with normal capitalisation. Is there some genuine reason for this? Or is it just a tradition to make the warranty disclaimer harder to read?"} {"_id": "230358", "title": "Choosing a database for a framework with both asynchronous and synchronous calls", "text": "I'm building a framework to work as an all-purpose astronomy pipeline, and before I get too far into development I was hoping to run my needs by you all to see if there are any optimizations or pitfalls I'm missing. This is a long one, so thank you to anyone who takes the time to look this over. My main needs are as follows (in order): 1. Cross-platform and easy to install without a lot of dependencies.
Most of the astronomers I work with are very good at understanding physics and astronomy but are weaker in programming ability, and due to time constraints they are unlikely to adopt a new solution to reduce their data unless it's lightweight and easy to install on their choice of OS. If something looks complicated to install or set up, then they will likely look for something else or continue using the decades-old software they are currently using. I also like using Python, since a growing number of people in the astro community are using it. 2. Centrally located on a server. Currently most open source solutions for reducing data sit on the user's machine. This means each user in a group must download the same data from the server, wasting disk space and time (often file sizes can be on the order of 100 MB or more). It also means they have to install and set up a variety of programs written by different people, and it can take weeks to get one machine fully configured... until the next software update, when something breaks and the procedure must be started again. So I want to use a web-based solution where the images will all be located on a server that does the majority of the processing, and my application is basically just a UI in a web browser to interact with all the nastiness that is open source astronomy command line software. 3. Ability to run both short- and long-runtime functions. For example, I have a viewer that converts a FITS image into a bunch of PNG tiles and sends them to the client to be viewed on a canvas. This must be done as quickly as possible. Other tasks will run routines like photometry or data reduction that may take an hour or more, depending on the task and dataset. So it is important to be able to have some routines run asynchronously (like building and loading images) while other tasks are queued (like finding all the bright and isolated stars in a given image). 4. Modular structure. I plan on building a basic UI framework in Python that can easily be extended by users. This will allow the community to share more of their code, and individuals or groups will only need to install the pieces required for their own research. Functions essentially become webpages with an image viewer, instructions, and an interface for parameters, to minimize the time it takes for new students to learn to use the software. 5. Small group of users. If there are 10 people using a server concurrently, that is usually a lot. But if possible it would be nice to have something that scales up rather easily, so that larger projects could adapt this for their needs. 6. Processing on multiple cores. While not necessary, our group's server has more CPUs than members of the group. So it would be nice if each user could log in and have their own core to run their jobs without interfering with other users. So what I have begun to develop is a Python Tornado server with a MongoDB solution on the server side. Clients connect to the server via a web socket and send jobs to be processed. The job server then runs the routine specified by the client and sends the result to the client, or sends a link if an image or large data file is ready to be downloaded. The database is optional but handy (this is the part I'm implementing now). Typically the image files we deal with contain a header with 100 or so keywords. So when a new set of images is downloaded, the user can load the headers into the database (this could be hundreds of files). For a number of reasons, the user may want to search the files for any one of the hundred keywords, not just a few key indices.
For a number of reasons, the user may want to search the files for any one of the hundred keywords, not just a few key indices. When I dump the header data into a cPickle object it's over 100 MB and takes a minute or so to load. I'm hoping that MongoDB (or some other DB with a Python interface) is faster and will allow multiple users to access the data concurrently, but I've learned that databases can be very different and performance can vary greatly depending on the need. An additional function of the database will be to keep track of user accounts and which directories of images they are allowed to see (for example a collaborator from another university may have access to a set of data taken together while other datasets are for internal use only). One thing I've thought of doing is writing a general DB API. Since most of the searches will be fairly simple, functions could just call a getFiles routine that allows them to input lists of selection criteria, and then the user simply needs to add the correct syntax for his/her particular DB in the get function. Perhaps this is overkill, but one of the main things I'm trying to combat is the tendency in astronomy to use outdated code and technologies because no one wants to take the time to update them. My hope is that by making this as flexible as possible it can easily change as new tools become available (such as a better-suited web server, database, or even more commonly used programming languages, keeping in mind that many of my collaborators still use Fortran 77 code written before I was born). I've also considered running multiple job servers, one for tasks that need to be quickly accessed, and another for jobs with long run times. But it would be nice to have a single server solution. I've put about a month into building the framework so far and unless there is a very good reason to switch web application servers I'd like to stick with Tornado. It's lightweight, easy to use, and seems to meet all of my needs. I just started looking at databases so I'm much more flexible and open to other options here. Thank you to anyone who has read through or skimmed most of this, any advice you have is greatly appreciated."} {"_id": "255133", "title": "Best and cleanest way to bind an ICommand to a RelayCommand", "text": "By reading various source code, I see that there are different ways of binding an ICommand to a RelayCommand: * From the constructor MyAwesomeViewModel() { this._fooCommand = new RelayCommand((x) => Bar(x)); } * Directly from the command property private ICommand _fooCommand; public ICommand SearchCommand { get { return _fooCommand ?? (_fooCommand = new RelayCommand()); } } Which way is better from your point of view?"} {"_id": "236353", "title": "How to manage memory in a C interface for a C++ implementation, considering C++11?", "text": "I have a library implemented in C++ which has a C interface. This C interface is, for all intents and purposes, the only way to use this library. C++11 seems to discourage the use of raw pointers but neither shared_ptr nor unique_ptr is suitable in this case. From what I understand, intrusive_ptr is an option, but I am both unsure of how this differs from maintaining my own ref count (I am new to C++) and wary of wading into boost yet (I am new to C++). How should I handle memory management in C++11 and higher?
I only state the spec to make it clear that I am willing to use the features of the latest spec and have no need to maintain compatibility with older specs."} {"_id": "230355", "title": "Unit Testing with an Optimization Problem", "text": "Suppose I'm making an algorithm that identifies the subject of a picture. It could be anything that a computer doesn't do that well, but I'm not expecting to get the right answer every time - 80% is fine. Suppose further that the accuracy of the intermediate steps is also somewhat fuzzy. Is there a way to incorporate unit tests? The option that immediately comes to mind is to keep a tally, incrementing it every time a 'test' passes. When that finishes, divide by the total number of tests and check 'passes/tests > 0.8', but that seems kludgey. EDIT: Thank you all for your kind words and well-reasoned responses. My particular problem, while fuzzy, has nothing to do with pictures, and I'm currently getting about 80% pass. The short term value for me in a testing scheme would be knowing whether small adjustments were more globally beneficial or catastrophic. Long term testing value should be obvious."} {"_id": "226097", "title": "Model, View and Controller jobs", "text": "First, I know there are a lot of answers about MVC but I need some more specific answers based on my probably wrong understanding of MVC. I've already read this very good answer (Explain Model View Controller) but I still need some clarification. I need to understand if I've understood how MVC works because I'm using it in a kind of MVC framework I'm developing. * I have the views that are at the moment just \"stupid\" templates * The models that are \"things\" that fetch data from the database (or other sources) and transform it into arrays passed to the views. * The controllers that are mostly Ajax JavaScript scripts and observers for buttons and other user events, and PHP scripts that are called by the Ajax scripts when I need to perform server-side operations like SQL updates, inserts etc. So my \"MVC understanding\" makes my app work in this way: I open my index that loads the **viewsloader** (model), this one loads the contents calling the **contentsloader** (model) that calls some other script depending on the view the user is requesting. Then when all the data is loaded in a single array, the **viewsloader** (model) uses it to render the requested view and passes it to the user. Now the user can click some button (for example a \"view contacts\" button) and under the hood my **contactslistener** (controller) calls the **contactsloader** (model) to load the contacts list; the array provided by the **contactsloader** (model) is passed through the **contactslistener** (controller) which injects it in the view. * * * I think I have serious problems with the MVC model; it looks like the **controllers** are doing too many different things. Probably I should put the observers/listeners in the **views** part and use **controllers** just to load contents provided by the **models**. * Are the **models** dedicated just to data fetching and organization (for example, fetch from the database and put it in an array, organizing it with the right structure)? * Are the **controllers** intended to handle user events or should I use **views** for this purpose? Should I use them to query the **models** and inject the answers into the **views**? * Are the **views** just stupid templates or should I put some business logic there, like event listeners?
Following what I've read I should put the entire interface-side scripts here and put only (client/server)-side scripts in controllers. But in this case I've no idea about what a controller should do..."} {"_id": "234843", "title": "Multi-level paging tables", "text": "Referring to the image here: ![http://en.wikipedia.org/wiki/File:X86_Paging_4K.svg](http://i.stack.imgur.com/B5cyA.png) From http://en.wikipedia.org/wiki/File:X86_Paging_4K.svg Could somebody please explain something for me? I don't get exactly how this works. As I understand it the page \"directory\" contains an entry which essentially points to the beginning of the \"page table\". However, surely all of the entries in the page \"directory\" would contain the exact same value? Or, does this mean we have 1x page \"directory\" but N page \"tables\" (at the same level in the multi-level paging hierarchy)? If there are N page \"tables\" and one page \"directory\" I get why the PD would point to one of the N PTs. If there aren't then I am really confused."} {"_id": "234845", "title": "Confused about NoSQL data modeling", "text": "I have a web application that uses `MongoDB` as its primary database. This is the first project I've done with a NoSQL DB, and I'm trying to create the application model but I'm confused. I know NoSQL databases are non-normalized and it is normal to replicate data to avoid joins (which are not supported), but I'm now thinking about this scenario: I have the `User` class containing an email address and the user can change it in the profile page. This email address is displayed in a few pages in different environments and I think the right choice is to embed this property in a few class documents (e.g. product subscription, invoice, shopping cart, ...). What if the user changes his email address? Do I have to trigger multiple updates to change every collection, every document that embeds this property?"} {"_id": "236354", "title": "WPF Control Lifecycle and Navigating through Containers to Set Focus", "text": "I wanted to understand a control's lifecycle in WPF. Let me explain my scenario: I have a complex screen containing various container controls hosting forms. Let's say my screen has 3 Accordion Items. In the first Accordion item, I have a TabControl and this TabControl has 4 TabItems. Each TabItem has various Forms containing label-value pairs such as checkboxes, comboboxes, textboxes, RadWatermarkTextBoxes etc. Now, the user introduces an error in one of the TextBoxes present in AccordionItem#1 --> TabControl#1 --> TabItem#1 --> TextBox#1 (for example) and then he/she switches over to TabItem#2 within TabControl#1. Now, when the user switches over to TabItem#2, does TextBox#1 die? Is it recreated when I expand TabItem#1? What exactly happens when a container is collapsed, and how can I set focus to an element which no longer exists in the view? Let me explain it further in detail ... Now, let's say the user has introduced various errors in different forms of the TabItems, and has now come out of AccordionItem#1 and entered into AccordionItem#2. There are lots of errors (the user knows that and would like to correct them later by navigating to them). After finishing all the form filling in all the accordions, he starts looking at the errors before saving.
The user needs to navigate, and to do this we have an icon; clicking on this icon will take us to the next erroneous field, which means I need to automatically open AccordionItem#1 --> TabControl#1 --> TabItem#1 --> TextBox#1 and set focus to this control so that the user can start editing it. When the user finishes correcting the error in this field, he/she presses the icon again, and I need to automatically open AccordionItem#2 --> TabControl#2 --> TabItem#3 --> TextBox#4, for example. All these were previously collapsed, as we were in AccordionItem#1. So, I wanted to understand if something like this is possible or not. I was wondering what kind of pattern would help us achieve this kind of functionality. When I find an error, can I store a reference to this element (using its GUID - i.e. x:Name value) and, when the user clicks on the next error, raise an event on it which will propagate up the visual tree, letting the parent tab handle it, passing it further on till I reach the highest level, and then pass another event down the line so that each of these parents can expand themselves, come into view, and finally set focus to the erroneous control? Is something like this possible? It is an interesting scenario and probably many developers might have encountered an issue like this when designing and developing huge applications."} {"_id": "83251", "title": "Tips and recommendations for a phone interview", "text": "A while ago I asked this question and I got very nice results. Now I have a similar situation, but I'm looking for advice and I want to know if you think it is possible to tell whether you want a developer from only a phone interview. Let's say that you need to interview a candidate who is not in your country, so in case you hired him/her you'd have to provide a relocation package, wait for him/her to get the whole work permit in order and all that stuff, so you'd need to make sure that this person is quite worth it. Let's say that the candidate is applying for a junior developer position as a C#/ASP.NET developer with a small / medium software development company. * Can you tell in a phone interview if you'd like the candidate? * Any recommendations for the candidate to impress the interviewer?"} {"_id": "83254", "title": "Career with PeopleSoft", "text": "I have 3 years of experience writing applications with Java. I am doing my Master's now. Last year, I started working with PeopleSoft systems for my university. I am considering it as my career path from now on. I am trying to determine what the pros and cons for it would be."} {"_id": "83259", "title": "Any recommendations for managing investigation projects?", "text": "I'm working on an investigation in the NLP field. Since software engineering and software documentation are not the primary concerns of my whole investigation I decided to do it using XP. My question is, can you recommend: * Any tool you've used to manage XP projects * Any recommendation/link on how to manage investigation projects I'm very biased towards XP because I've used it before, but if anyone has a better methodology you can post it as well"} {"_id": "202341", "title": "Does the expansion of new technologies cause a perceived shortage of skill?", "text": "Systems programming and desktop application development are well established fields. In recent years, web and mobile development have shown rapid expansion.
As a software engineer I understand that _programming is programming_, and C++, JavaScript, and Android APIs are tools that any engineer worth his salt can handle. Unfortunately, there exist recruiters, managers, and companies that don't understand that skill isn't necessarily tied to tools. Also, some companies just aren't interested in providing the ramp-up time required when hiring an engineer with little direct experience with a software tool that is used by their existing development team. Does this combination of increasing demand for applications built with recently established tools/platforms and recruiter/HR figures' juxtaposition of _programming_ experience with _tools_ experience cause a (albeit arguably incorrect) perceived shortage of skill applicable to new technologies? To phrase it differently, given two similarly talented engineers, is the one doing C++ application development less valued in the market than the one with some web applications under his belt?"} {"_id": "133169", "title": "Creating controls dynamically in the code-behind or ViewModel?", "text": "Right now I'm working on migrating an app I made entirely using code-behind to MVVM, and had a question on where I'm supposed to be creating controls dynamically. Basically I have a web service that returns {#} of items. For each item a button will be created and the item will be assigned to its data context. Now so far I know that I should set the Command to the ViewModel as well as the Command property. Also I know I should call those items in my web service inside of the view model (or model; right now it's irrelevant to the question). The part that is questionable is where to create the buttons. I didn't really like the idea of creating buttons in my ViewModel since it was, well, related to the View. Is this correct or should I be creating them inside the ViewModel and then somehow pass them back to the View via Messaging?"} {"_id": "202349", "title": "Strategies for removing register_globals from a file", "text": "I have a file (or rather, a list of about 100 files) in my website's repository that still requires the use of register_globals and other nastiness (like custom error reporting, etc) because the code is so bad, throws notices, and is 100% procedural with few subroutines. We want to move to PHP 5.4 (and eventually 5.5) this year, but can't until we can port these files over, clean them up, etc. The average file length is about 1000 lines. I've already cleaned up a few of the low-hanging fruit; however, the job took almost an entire day for two 300-500 line files. I am in a quagmire here (giggity). Anyway, has anyone else dealt with this in the past? Are there any strategies besides tracing backwards through the code? Most static analysis tools don't look at code outside of functions - are there any that will look at the procedural code and help find at least some of the problems?"} {"_id": "40031", "title": "Property-coalescing operator for C#", "text": "The null-coalescing operator in C# allows you to shorten the code if (_mywidget == null) return new Widget(); else return _mywidget; Down to: return _mywidget ?? new Widget(); I keep finding that a useful operator I'd like to have in C# would be one that allowed you to return a property of an object, or some other value if the object is null. So I'd like to replace if (_mywidget == null) return 5; else return _mywidget.Length; With: return _mywidget.Length ??! 5; I can't help thinking that there must be some reason for this operator not to exist.
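For what it's worth, later versions of the language did add exactly this combination: since C# 6 the null-conditional operator pairs with the null-coalescing operator, so the four-line version collapses to return _mywidget?.Length ?? 5; (a minimal sketch, assuming _mywidget exposes an int Length - the ?. yields an int? that ?? then defaults).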
Is it a code smell? Is there some better way to write this? (I'm aware of the null object pattern but it seems overkill to use it to replace these four lines of code.)"} {"_id": "133162", "title": "Multiple source files vs. libraries for a single project", "text": "I write a lot of scientific software, and I originally got into programming with F77. I moved to C++ for my primary programming about 10 years ago now, but I do catch myself using F77 habits. One thing I cannot seem to get used to is library usage. To be clear, I understand why one would want to use one, but for most of the code I write, no one but the end user and the maintainers will ever use the routines I write, so I question the usefulness to me. Okay, so libraries give you a good place to stash reusable code that's easy to follow and maintain. But didn't separate source files already solve that problem for my situation? What advantage would I gain by putting my source files into a library? I can understand executable file size savings with shared libraries, but is that all that important? I have a lot of math routines that are general and could be extracted to a shared library, but should I? Most of them are only called once by a single point in my code, and usually only by a single project, so do I gain anything? Aren't any memory savings gone the moment the symbol is resolved and the code loaded? Thanks."} {"_id": "133163", "title": "Organizing projects in SVN", "text": "I'm fairly new to SVN and I would be interested in hearing about how other people would organize projects in SVN. We have projects that have different types of source files: C#, SQL, and Matlab. In addition to these types of files, we have Excel and Word file reports that we do not intend to put into SVN. So let's say we have a projects directory: Projects\ Projects\Project1 Projects\Project2 ... Now for the SVN repository, would it make sense to categorize things by source file type (C#, SQL, Matlab) then project: SVN\C#\Project1 SVN\C#\Project2 SVN\SQL\Project1 SVN\SQL\Project2 SVN\Matlab\Project1 SVN\Matlab\Project2 Or would it make more sense to categorize things by project first then source file type: SVN\Project1\C# SVN\Project1\SQL SVN\Project1\Matlab SVN\Project2\C# SVN\Project2\SQL SVN\Project2\Matlab Or maybe there's a better way to do this? Also, what is the best way to indicate in the Projects folders (not SVN) that a project contains files saved in SVN? Like, if I went into the project folder for Project1 (Projects\Project1) and only saw Excel and Word reports, what would be the best way to indicate that there are other files checked into an SVN repository? Also, how does everyone feel about putting checked-out SVN files on shared drives? So for instance, let's say Projects\Project1 contains the following files: Report1.doc Script.sql (This is a checked-out SVN file in the Projects folder, which is a shared network drive) In this situation, it would be very easy to see when there are SVN files used with a project, BUT having the checked-out SVN file on a shared drive would make collaboration much more difficult. Is this what others do too?"} {"_id": "131632", "title": "Is querying KeyValue Pairs efficient compared to two-property objects?", "text": "I'm working on a webservice and I'm returning JSON.
However, I'm returning a `List<KeyValuePair<string, string>>`, since that's all I'm really ever going to be dealing with (property => value). Now sometimes I myself will be calling these functions to aggregate the results, so I end up with something like List<KeyValuePair<string, string>> myList = resultList.Where(o => o.Key == \"StringKEY\").ToList(); and sometimes var mySum = resultList.Sum(o => o.Key == \"StringKEY\" ? 1 : 0); My question is, would it be more efficient to do the following (custom class vs dictionary): List<MyObject> myList = resultObjList.Where(o => o.Property1 == \"stringProperty\").ToList(); and so on. Is there any benefit to using custom objects? I will not ever need to be adding properties, and in this case I can say there will never be a need for additional properties."} {"_id": "149866", "title": "Are R&D mini-projects a good activity for interns?", "text": "I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems) and I'm designing an internship program. The main productive activity \"menu\" I'm planning for them consists of: * Verification testing * Writing Unit Tests (automated, with an xUnit-compliant framework [several languages in our projects]) * Documenting * Code * Updating wiki * Updating diagrams & design docs * Helping with low priority tickets (supervised/mentored) * Hunting down & cleaning compiler/run-time warnings * Refactoring/cleaning code against our coding standards But I also have this idea that having them do small R&D projects would be good to test their talent and get them to have fun. These mini-projects would be: * Experimental implementations & optimizations * Proof of concept implementations for new technologies * Small papers (~2-5 pages) doing formal research on the previous two points * Apps (from a mini-project pool) These kinds of projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would also be to lay the groundwork so they can be taken up again by another intern or intern team. While I think this is good in concept, I don't know if it could be good in practice, as obviously this would diminish their productivity on \"real work\" (work with immediate value to the company), but I think it could help bring aboard very bright people and get them to want to stay in the future (which, I think, is the end goal for any internship program). My question here is if these activities are too open ended or difficult for the average intern to accomplish, and if R&D is an efficient use of an intern's time or if it makes more sense to assign project work to interns instead."} {"_id": "85946", "title": "Independent projects as a student to show off abilities", "text": "I'm an inexperienced student (having learned up to data structures and algorithms through various online resources) of computer science, and I'm hoping to get a job as a developer some time after I've gotten a few independent projects done. My question is, how should I choose which projects to work on? I've been looking around Stack Overflow -- people usually say to pick whatever you're interested in, but I don't have enough experience to even know what specifically I'm interested in, and possibly more importantly, I don't know what some common beginner project types are. Essentially, I'm at the gap between course work (and the projects entailed in those classes) and real programming, and I don't quite know how to start.
If any of you have any ideas, I'd really appreciate it."} {"_id": "85942", "title": "Software cost estimation", "text": "I've seen at my workplace (a university) most students making the software cost estimation of their final diploma work using COCOMO. My guess is that this way of estimating costs is somewhat old (COCOMO dates from 1981), hence my question: How do you estimate costs in your software? I've seen things like: Cost = ( HoursOfWork + EstimatedIdle ) * HourlyRate That's not what I want; I'm looking for a properly (scientifically) defined cost model **EDIT** I've found some related questions on SO: * What are some of the software cost estimation methods and models? * How do you estimate the cost of developing software requirements?"} {"_id": "225698", "title": "Why should an \"Order\" object have a \"Status\" property?", "text": "I always see standard Order classes implemented with a \"Status\" property, but I don't feel comfortable with that. Isn't the status a property of the fulfillment process instead of the order itself? What about orders that can be subject to different fulfillment processes?"} {"_id": "138843", "title": "Communicating from lower level components to GUI?", "text": "What is the recommended way for a lower level software component/module to communicate with the GUI? I'm using C++. I have a service layer class that, if some conditions occur, needs to notify the user but still continue processing. In the case of an unrecoverable error I understand that I can use exceptions to bubble up to the GUI, but what about in the case of a simple message that needs to be communicated to the user where the lower level component continues executing? **UPDATE** Just to clarify: the service layer (aka business logic, compute layer, etc.) does not \"know\" about the GUI, nor do I want it to."} {"_id": "138841", "title": "Library/Framework usage guidelines", "text": "At my core, I'm a structural, Computer Science sort of programmer (in college I was required to do a lot of programming using C, C++ and even COBOL(!)) and I'm finding more and more conflict with the core fundamentals I developed in my CS degree as opposed to the modern web development world that I'm currently immersed in. One of those conflicts I'm having is the place of libraries/frameworks in web development. I've been discovering and experimenting with several libraries/frameworks for some web application development I've been doing. Some that I've used and/or experimented with include jQuery, jQuery UI, TinyMCE, CodeIgniter, Struts, Spring and GWT. I have a slight fear of using these libraries/frameworks too extensively because of the rapidly changing nature of web technology. It seems about every other minute there's some new library/framework available to use in web development, whether it's a new framework or an enhancement to an existing technology. This really rubs against the academic world I was in, as we generally spent an entire semester learning the language/concept with the expectation we would later use that knowledge effectively in the workplace. My fear is that a lack of good understanding of a library/framework will lead me to a dead-end path where I will have more problems than the one that I set out to solve initially. Sorry for rambling, but I'm wondering if anyone else has experienced such a fear? I'm also wondering what could be some general guidelines for implementing libraries/frameworks into a web application, i.e.:
Should there be a limit on how many libraries/frameworks are used when developing a web application? Should a developer spend a week (or 2 or 3...) really getting to know the library/framework before attempting to implement it into their web application? Ultimately I'm looking for answers in the context of libraries/frameworks, which (loosely) includes anything that is implemented in a web application outside of the core technology (Java, JSP, PHP, HTML, CSS and JavaScript are core technologies that I personally use)."} {"_id": "138844", "title": "Should we check in Setup Project of Visual Studio", "text": "From what I understand, the ideal way to prepare builds using Visual Studio is to use a single build machine. That means, while the developers go through the development process using their respective machines, the build isn't prepared from any of those machines and a build PC is used instead. Under this condition, is it necessary to check in (using TFS as an example here) the setup project? There isn't any version control on that particular project, as any required change would be done on the build machine itself, and this isn't of any concern to others."} {"_id": "144019", "title": "Pointers in C vs No pointers in PHP", "text": "Both languages have similar syntax. Why does C have the weird `*` character that denotes pointers (which is some kind of memory address of the variable contents?), when PHP doesn't have it and you can do pretty much the same things in PHP that you can do in C, without pointers? I guess the PHP compiler handles this internally; why doesn't C do the same? Doesn't this add unneeded complexity in C? For example I don't understand them :)"} {"_id": "171368", "title": "Large enterprise application - clients wish to use duplicate e-mails addresses?", "text": "I'd like to know people's opinions, reactions to clients and technical workarounds (if applicable) regarding the issue of an enterprise application where a client wishes to use duplicate e-mail addresses. To clarify, when I say duplicate e-mail addresses I mean within the same client system, having multiple users that have the same e-mail address. So not just using generic e-mail addresses but using the e-mail address of another user. e.g. Bob Jenkins: bjenkins@myorg.com James Jeffery: bjenkins@myorg.com **Context** To give this some further context, in the e-learning sector it is common that although all staff in an organisation must complete e-learning - they may not have their own e-mail address, so they choose to use their manager's e-mail address. Albeit against good practice in public sites... it's a requirement we've seen over and over again where an organisation is split between office-based staff and, for example, staff in a warehouse. **Where the problem lies** Mr Steak, good point, the problem lies in password resets and perhaps in situations where semi-personal information could be sent (not confidential enough to worry about the insecurities of email). Perhaps reminders for specific system actions, which would be confusing for the unintended party to see (if perhaps misreading the e-mail's intended recipient) **Possible solutions** * The system knowing the difference between \"for the attention of\" and direct-to-the-person e-mails, including this in the body text. * Using alternative communication such as SMS * Simply not having e-mails sent to people who are not the intended recipient.
* Providing an e-mail service ourselves (not really viable for a corporate IT dept) Thoughts?"} {"_id": "238128", "title": "versioning data schema in android internal storage", "text": "I am using Android's internal storage to hold data for my application and I am trying to find a good way to handle reading data of an older schema. For example, let's say I have serialized and written several instances of class Data to a file public class Data { private String text; public Data(String text) { this.text = text; } } and later I read that stringified data back out and cast to (Data). But then say I change Data to look like this public class Data { private String name; public Data(String name) { this.name = name; } } now that the property is named differently, when I read out data that was saved in the original schema, it will not easily cast to the new type. What are some solutions to version the data so that I can read out old data and cast it to the new type?"} {"_id": "78043", "title": "Writing a CSS parser in C#. What do you think is the best strategy?", "text": "I'm in the middle of writing a CSS parser in C#. I'm well under way, but I also have those times where I wonder if I'm taking the best approach. The things I've considered are: 1. Feed the CSS grammar from the W3C into a parser generator and work off that. 2. Hand-code a CSS parser off the grammar. 3. Use a generated tokenizer, but hand-code the parsing of the productions. 4. The reverse of (3) - generate the productions, but hand-code the tokenizer. Without revealing my current approach, I was wondering how others feel about this, and I appreciate any comments and guidance from your experience. Part of this is also to see what questions people ask and compare the questions to what I asked myself."} {"_id": "233717", "title": "How to Recover from Inconsistent Job State without Database Polling", "text": "I'm working on scaling an application which is currently polling a MySQL database to send async jobs to a background processing system. A state machine is being used to keep track of the entity's progress throughout a workflow. For example, we might have three states: * Scheduled * Processing * Complete My plan is to add a message queue system to broker the jobs sent to the background processing system. So `Application A` would insert a new entity, then push a message to the queue. `Application B` would consume these messages and route them to the correct background processing job. def do_work(entity) # Precondition check raise \"Wrong State\" unless entity.scheduled? # Update state to processing entity.processing # ... do work # Update state to complete entity.complete end Given a job like the above, I am having a hard time determining how it would be possible to recover from a situation where there was an error between the `processing` and `complete` event transitions. For example, a process crash. The job processor would re-try the job, but now the entity is in an inconsistent state, and would fail. How could I handle this case without resorting back to polling the entity table looking for stale jobs? **Edit** There are two different concepts at play here. We have the data, and we have the state of the data as it relates to the job execution (scheduled/processing) + the state of the workflow (complete). Both of these pieces of information are in the same table. So as the data grows, the process of polling becomes inefficient (reads/updates/inserts constantly happening).
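One way to tolerate a crash between the processing and complete transitions is to make the handler retry-safe: treat Processing as a valid entry state on a retry and keep the work itself idempotent. A rough sketch of the idea (the handler above is pseudocode, so the names here are hypothetical, including the assumed Entity type with a State property; C# is used only for concreteness): enum JobState { Scheduled, Processing, Complete } void DoWork(Entity entity) { // on a clean first run the entity is Scheduled; after a crash mid-job a retry legitimately finds it still Processing if (entity.State != JobState.Scheduled && entity.State != JobState.Processing) throw new InvalidOperationException(\"Wrong state\"); entity.State = JobState.Processing; // ... do the work, written so that re-running it is harmless ... entity.State = JobState.Complete; }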
So maybe a solution is to have `Application B` move active jobs into a separate datastore. So when the \"clean up\" task that @\u04cd\u03c3\u1d8e refers to is running, the dataset should be much smaller. Ultimately, it seems like there is no way to avoid polling the database to ensure data is in a correct state."} {"_id": "171363", "title": "Are There Other Use Cases For F# Type Providers?", "text": "So I think I know the main use case for F# 3.0's Type Providers, i.e. better IntelliSense when working with data stores that use them. Are there other use cases for Type Providers or is that pretty much it?"} {"_id": "173213", "title": "Can an interface be non-abstract?", "text": "A friend of mine said that not every interface is abstract. I haven't had a chance to discuss that with him, but it got me thinking about non-abstract interfaces in any type of language. Are there non-abstract interfaces?"} {"_id": "109731", "title": "Career Day: how do I make \"computer programmer\" sound cool to 8-year-olds?", "text": "I have to do a talk at Career Day at my kid's school & I'm looking for ideas on how to make \"computer programmer\" sound cool to 8-year-olds."} {"_id": "100725", "title": "Should I be concerned if I have nothing to do during an internship?", "text": "I have been an intern at this company for about four months now. At first, it was great because I didn't have a ton of hours, the work was interesting, and it didn't interfere with school. However, once summer started, some problems turned up. They changed my direct supervisor from the person who managed the programmers to just one of the software engineers. He is pretty busy and doesn't have a lot of time to help me, so I can end up stuck for days before I can finally get help from someone. This is pretty frustrating, as I really have nothing to do during this time period and my new supervisor is extremely hard to get a hold of. As an intern, I really feel like I need more direction and help. I quickly run out of things to do or get completely overwhelmed when assigned broad things. I'm never really \"checked on\", so I can sit for a long time without getting anything done and no one really cares. I've tried asking for help multiple times, but he either doesn't respond to my emails or cancels our meetings. Is this normal for an intern? Is there anything I should be doing so that I can be more productive? Should I go ask management for help or would that be considered out of line for an intern? Should I not really worry about it, since this is just an internship?"} {"_id": "170093", "title": "Use of keyword \"Using\" in C# interface", "text": "When I'm using C# to write some code and I define an interface using Visual Studio 2010, it always includes a number of \"using\" statements (as shown in the example) using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TestEngine.TestNameSpace { interface ITest1 { bool testMethod(int xyz); } } I wonder what these are for and if they are really necessary. Can I leave these out? Are they only necessary when I'm using those parts in my interface description?"} {"_id": "15391", "title": "What's the most appropriate duration of a video tutorial?", "text": "It is now more and more popular to have video tutorials for software or technology instead of writing long articles. As for me, it makes perfect sense, especially for areas like \"Getting Started\" or \"What's New\". What do you think is the most appropriate duration for such videos?
If you record the video tutorial yourself, would you just touch on the main points to keep it brief, or rather split it into a number of parts to keep the same level of detail? And why? I know this question isn't meant to have one and only one answer. I'd like to hear the reasoning behind various opinions. I have personally come across a video tutorial 45 minutes long today, and I got tired at around minute 10... So, my own impression is it's better to keep it from 5 to 10 minutes to hold the visitor's attention fully. Thanks!"} {"_id": "15397", "title": "Python and only Python for almost any programming tasks!", "text": "Am I wrong if I think that Python is all I need to master in order to solve most of the common programming tasks? **EDIT** I'm not OK with learning new programming languages if they don't teach me new concepts of programming and problem solving; hence the idea behind mastering a programming language that is modern, fast evolving, has a rich set of class libraries, is widely used and documented, and of course has a \"friendly\" learning curve. I think that in the fast evolving tech industry, specialization is key to success."} {"_id": "166640", "title": "Besides macros, are there any other metaprogramming techniques?", "text": "> **Possible Duplicate:** > Programming languages with a Lisp-like syntax extension mechanism I'm making a programming language, and, having spent some time in Lisp/Scheme, I feel that my language should be malleable. Should I use macros, or is there something else I might/should use? Is malleable syntax even a good idea? Is it perhaps too powerful a concept? In doing some research, I found `fexprs`. I don't really understand what these are. Please help with that in an answer too. Is it possible to have a language with macros/something-of-a-similar-nature without having s-expressions?"} {"_id": "215013", "title": "Languages like Tcl that have configurable syntax?", "text": "I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this: #xcommand REPEAT => DO WHILE .T. #xcommand UNTIL <cond> => IF (<cond>); EXIT; ENDIF; ENDDO LOCAL n := 1 REPEAT n := n + 1 UNTIL n > 100 Similarly, in Tcl I'm doing proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} { set fromDate [clock scan $dat1] set toDate [clock scan $dat2] if {$slice eq \"day\"} then {set incrementor [expr 24 * 60]} if {$slice eq \"hour\"} then {set incrementor 60} set method DateRange puts \"Scanning from [clock format $fromDate -format \"%c\"] to [clock format $toDate -format \"%c\"] by $slice\" for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} { # ... } } process_range for \"client\" from \"2013-10-18 00:00\" to \"2013-10-20 23:59\" by day Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks."} {"_id": "184089", "title": "Why don't languages include implication as a logical operator?", "text": "It might be a strange question, but why is there no implication as a logical operator in many languages (Java, C, C++, Python, Haskell - although as the last one has user-defined operators it's trivial to add)?
I find logical implication much clearer to write (particularly in asserts or assert-like expressions) than negation with or: encrypt(buf, key, mode, iv = null) { assert (mode != ECB --> iv != null); assert (mode == ECB || iv != null); assert (implies(mode != ECB, iv != null)); // User-defined function }"} {"_id": "210902", "title": "Can I use the code for responsive video on this github link under a license?", "text": "This is the link: https://gist.github.com/jgarber/2302238 I am new to GitHub and web development, so please bear with my questions. I would like to use the responsive video code at the link mentioned above for a commercial website I am building. I see that the code is posted under the 'public' section but there is a copyright attached as well. And, the file name is MIT-LICENSE.txt. Does this have any connection with the MIT License? Two questions: a) Can I use this code by giving an attribution? b) And, if I can, under which license can I use the code and what sort of attribution is required?"} {"_id": "234159", "title": "Database Schema Data Independence", "text": "I'm currently reading about database schemas and how the three-level ANSI-SPARC architecture works, and I'm a bit confused about a concept it's talking about. My question is: what would happen if an application was modified so it could store data for a column not defined in the logical schema? The book I'm reading talks about: > \"... the addition or removal of new entities, attributes or relationships > should be possible without having to change existing external schemas or > having to rewrite application programs.\" - Database Systems: A Practical > Approach to Design, Implementation and Management. Page 40 (Part 2.1.5) * How does the logical schema handle input (columns) from applications for which it is not defined in the logical schema? ANSI-SPARC architecture: http://en.wikipedia.org/wiki/ANSI-SPARC_Architecture"} {"_id": "234158", "title": "Make a Java program use more processors", "text": "I am running a program that is supposed to solve a problem with the brute force approach. My computer has a quad-core CPU. When I run the program, clearly Java uses only one core, because in my program I use only one thread. Is there a way to split up the program into more threads so that more cores are used, improving calculation speed? I mean: if one has a program which has a natural solution that uses only one thread, is it possible to force it to use more threads without completely changing the code?"} {"_id": "241081", "title": "How to fix poorly designed software?", "text": "I am working on a large project solo as a hobby, and I made a mistake in the very beginning: I jumped right into programming without giving a second thought to design. Now I am nearly 6 months in and things are starting to fall apart: I can't get anything done, the code is very inconsistent, and the whole thing is a mess. I've definitely learned the importance of design in software development, but I don't know how one designs software to begin with. Are there any programs that can help with this? Or do I need to sit down with pen and paper for the next few weeks trying to work things out?"} {"_id": "183733", "title": "What features should be tested via automated UI testing?", "text": "We recently had a consultant tell us that if a feature can only be tested via automated UI tests (e.g. Selenium, Coded UI), then there is an underlying architectural issue.
While this statement might be a bit extreme, it is along the same lines as the testing pyramid in that UI tests should make up a small portion of your overall automated test suite. So, what kinds of features _should_ have automated UI testing? Will a system with a cogent architecture still have features that can only be verified through UI tests, or should these tests just serve as \"back-up\" for a suite of unit and service tests?"} {"_id": "183730", "title": "Machine Learning With Categorical and Continuous Data", "text": "This question could go here or on S.O. perhaps... Suppose that your training dataset contains both categorical and continuous data such as this setup: Animal, breed, sex, age, weight, blood_pressure, annual_cost cat, calico, M, 10, 15, 100, 100 cat, tabby, F, 5, 10, 80, 200 dog, beagle, M, 3, 30, 90, 200 dog, lab, F, 8, 75, 80, 100 And the dependent variable to be predicted is the annual vet cost. I'm a bit confused as to the specific techniques available to deal with such a dataset. What are the methods commonly used to deal with datasets that are a mixture of both continuous and categorical data?"} {"_id": "183731", "title": "Convert grammar into an LL(1) grammar which recognises the same language", "text": "I have the following sample question for a compilers exam and wanted to check my solution. Convert the following grammar into an LL(1) grammar which recognises the same language: E -> E + T E -> T T -> id T -> id() T -> id(L) L -> E;L L -> E For my answer I have E -> T E' E' -> + T | \u03b5 T -> id T -> id() T -> id(L) L -> E L' L' -> ;E | \u03b5 Can anybody verify the answer? **Edit** Ok so would it be similar to... E -> T E' E' -> + E | \u03b5 T -> id T -> id() T -> id(L) L -> E L' L' -> ;E | \u03b5"} {"_id": "241089", "title": "Keep user and user profile in different tables?", "text": "I have seen in a couple of projects that developers prefer to keep essential user info in one table (email/login, password hash, screen name) and the rest of the non-essential user profile in another (creation date, country, etc). By non-essential I mean that this data is needed only occasionally. The obvious benefit is that if you are using an ORM, querying fewer fields is obviously good. But then you can have two entities mapped to the same table and this will save you from querying stuff you don't need (while being more convenient). Does anybody know any other advantage of keeping these things in two tables?"} {"_id": "229958", "title": "In C++ what is the commonly accepted method for making a program platform-agnostic?", "text": "The way I usually do it is I make some namespace Platform in Platform.h and every OS call is encapsulated by a static function in this namespace. So the only place in the entire code base that knows what OS is being used is Platform.cpp. Is this a good way of making things easier? For instance when I call Platform::MessageBox(...), what actually happens is: void Platform::MessageBox(...) { #ifdef _WINDOWS .... #elif defined(_LINUX) .... #elif defined(_MAC) .... #endif }"} {"_id": "144528", "title": "Interesting Topics in Comp. Sci. for New Students?", "text": "I hope this is the right forum to ask this question. Last Friday I was in a discussion with my professors about the students' lack of motivation and interest in the field of Computer Science.
All of the students are enrolled, but through questionnaires and other questions that my professor posed it was revealed that over 90% of all enrolled students are just in it for the reward of getting a job sometime in the future (since it's a growing field with high job potential). I asked my professor for permission to take over the first couple of lectures and try to motivate, interest and inspire students for the field of Computer Science and programming in particular (this is the Intro to Programming course). This request was granted and I now have a week to come up with a lecture topic for my professor's five groups. My main goal isn't to teach; I just want to get students to be as interested in the field as I am. I want to show them what's possible, what awesome magical things have been done in the field, the future we are heading towards using programming and Comp. Sci. Therefore, I would like to pose this question: I have a few topics, materials and sample projects that I would like to talk about: * Grace Hopper (It is my hope to interest the female programmers in the class. There are never more than two or three per group and they, more than males, are prone to jumping ship and abandoning Comp. Sci.) * The Singularity Institute * Alan Turing * Robotics * Programming not as a chore or a must, but the idea that we are, at our core, the nexus to which anything anybody does in the digital world is connected. We are the problem solvers; we assemble all the parts together and we are the ones that, essentially, make the vision a reality. * Give them an idea for a programming project which, through the help of the professor, could be significant to every student (I want students to not only feel interested in the topic, but they should feel important, that what they do here makes a difference) Do you have interesting topics worthy of discussion, something I can tell the students which they can get interested in? How would you approach the lecture? If you had 90 minutes worth of time to try and get students interested in the project, what would you do?"} {"_id": "184337", "title": "Emphasize negation", "text": "I was just writing an if statement with fairly long property names and came across this problem. Let's say we have an if statement like this: if(_someViewModelNameThatIsLong.AnotherPropertyINeedToCheck == someValue && !_someViewModelNameThatIsLong.ThisIsABooleanPropertyThatIsImportant) { //Do something } The second property is of a boolean type and it makes no sense to have a statement like if(booleanValue == true) Is there a better way to emphasize the negation than to put the `!` in front? To me it seems like this can easily be overlooked when reading the code and may potentially cause problems with debugging"} {"_id": "184336", "title": "Parsing a Log File - Java", "text": "I'm reading a log file in Java on a Linux box on a continual schedule of 2 minutes looking for certain messages. I store the last offset (RandomAccessFile getFilePointer) and read from it onwards when I detect LastModified has changed; is this best practice or even right?"} {"_id": "227817", "title": "Data sharing across processes", "text": "I have a C# .NET app that downloads a user's Tweets using LinqToTwitter. This is for TV broadcast - the client wants to show Tweets on air. We have two other apps that need to access these tweets. One is a C++ app and one will use MS Script Host. These two apps don't run at the same time - it's one or the other.
My question is, of the many ways to share this data, which would you choose? **Database** \\- Like MySQL. This was my first choice. But then it seemed like overkill for the 10-20 tweets they would get per day. **Streaming** \\- Like TCP or named pipes. This would involve some type of protocol. Like \"Give me the last 10 tweets...\" **Xml** \\- Store the data in a file all programs can access. Simplest, but just doesn't feel right for some reason. **Memory Mapped IO** \\- I think this would require a COM library for the scripts to be able to use it. There are others. Just curious what you would use. I am the only programmer in a small company and don't have others to bounce ideas off of. Thanks."} {"_id": "180421", "title": "Using database Indexes", "text": "This might sound like a naive question, but when are database table indexes in MySQL required? How do such indexes affect performance and why?"} {"_id": "180429", "title": "Is there a way to display the stack during recursion method?", "text": "I'm trying to learn recursion. I copied this bit of code from my book and added the displays to help me trace what the method is doing and when. public static void main(String[] args) { System.out.println(sum(4)); }//main public static int sum(int n) { int sum; System.out.println(\"n = \" + n + \"\\n\"); if (n == 1) sum = 1; else sum = sum(n - 1) + n; System.out.println(\"n after function = \" + n + \"\\n\"); System.out.println(\"sum after function = \" + sum + \"\\n\"); return sum; } Here are the results: > n = 4 > > n = 3 > > n = 2 > > n = 1 > > n after function = 1 > > sum after function = 1 > > n after function = 2 > > sum after function = 3 > > n after function = 3 > > sum after function = 6 > > n after function = 4 > > sum after function = 10 > > 10 I'm getting hung up on what's being stored in the stack, or perhaps how it's being stored. Is there a way to display what's in the stack while n is counting down to 1? This is an odd concept - it's like an invisible loop storing values in a way I don't understand."} {"_id": "159453", "title": "Are there any good, open-source change request systems?", "text": "I have a need to make a somewhat custom change request system (very high auditing requirements). Instead of completely reinventing the wheel I was wondering if there's an open-source change request system, considered decent, that I could write a custom module for? I've done some searching but the ones I have found don't seem to encapsulate what a change request system should be (no way of recording test plans or work streams). In my experience a change request system stores test plans / technical specs and a log of work."} {"_id": "159454", "title": "How important are the people I work with?", "text": "I'm a very lucky individual who's managed to push my way into a job I enjoy as a junior developer (I say push as I was hired as a Business Analyst but moved into development by proving I could do the work). I'm happy with my salary and working conditions. BUT (Seriously, why wouldn't there be a but?) the people I work with are... in a word... up themselves. I have two senior developers - brilliant minds, great work, but complete knowledge hoarders. I feel like there's a lot I _could_ learn from them but very little they're going to _let_ me learn. As I'm early in my career and really want to become as great at this as I can - how important is learning from others (outside of the Internet) at this point in my career?
Am I doing myself a disservice that I'm going to live to regret later in my career by staying here and not seeking out a workplace that has people willing to teach me and help me to further my skills?"} {"_id": "136440", "title": "Asynchronous Java", "text": "I'm wondering, if I wanted to implement a web service based on Java that does web analytics, what sort of architecture should I use. The actual processing of the Big Data would be done by Hadoop. However I am not sure what I would need to do to make it asynchronous, or is Hadoop already asynchronous by nature? Could I do something with JMS? If so, how would that fit into the whole thing? Would it be something along the lines that upon receiving the web service request, I use JMS to send a message to Hadoop to handle particular data, and then wait for it to come back to me? I am not too familiar with asynchronous Java so I'm not sure where to start."} {"_id": "218823", "title": "What does \"testing surface\" mean in the context of programming?", "text": "I encountered the following text in an article on the ModelViewPresenter pattern and my brain blue-screened: > Passive View usually provides a **larger testing surface** than Supervising > Controller because all the view update logic is placed in the presenter. I haven't come across this particular jargon before and I'm rather puzzled about what exactly constitutes the \"testing surface\" of a design pattern. Would someone please enlighten me regarding what the term means and also what it means for one design pattern to have a larger (or smaller) \"testing surface\" than another?"} {"_id": "218822", "title": "Dota 2 running on Linux, Mac and Windows - How do they do it?", "text": "How do Valve create games that run on Linux, Mac and Windows? I imagine they don't really write one version for each platform because that would just be a nightmare... or do they? I imagine it is written in portable C++ code (or C#?) but I wanted to know more details about this. I've developed an app on Adobe AIR and am considering porting it to a different language as Adobe abandoned Linux support."} {"_id": "116678", "title": "How good does a well-rounded programmer need to be with bit-wise operations?", "text": "I have been browsing some OpenJDK code recently and have found some **intriguing** pieces of code there that have to do with **bit-wise operations**. I even asked a question about it on StackOverflow. Another example that illustrates the point: public static int bitCount(int i) { // HD, Figure 5-2 i = i - ((i >>> 1) & 0x55555555); i = (i & 0x33333333) + ((i >>> 2) & 0x33333333); i = (i + (i >>> 4)) & 0x0f0f0f0f; i = i + (i >>> 8); i = i + (i >>> 16); return i & 0x3f; } This code can be found in the Integer class. **I cannot help but feel stupid when I look at this.** Did I miss a class or two in college or is this not something I am supposed to just _get_? I can do simple bit-wise operations (like ANDing, ORing, XORing, shifting), but come on, how does someone come up with code like that above? **How good does a well-rounded programmer need to be with bit-wise operations?** **On a side note...** _What worries me is that the person who answered my question on StackOverflow answered it in a matter of minutes. If he could do that, why did I just stare like a deer in the headlights?_"} {"_id": "77171", "title": "PHP framework plunge", "text": "I am about to take the plunge into PHP frameworks and my weapon of choice is Kohana.
I have a reasonable amount of OO experience with Java, and my JavaScript, jQuery, PHP and HTML/CSS skills are passable. Now to the question. Is this a reasonable choice? Zend looks like it has a steeper learning curve, and CakePHP's command-line configuration does not immediately strike me as a good way to produce code."} {"_id": "77174", "title": "Is function memoization really only for primitives?", "text": "I was thinking about this for quite some time. Is function memoization really only for primitives? I currently have this piece of code: Public Shared Function Mize(Of TArg1 As Structure, TResult)(ByVal input_f As System.Func(Of TArg1, TResult)) As System.Func(Of TArg1, TResult) Dim map = New System.Collections.Generic.Dictionary(Of TArg1, TResult) Return Function(arg1 As TArg1) If map.ContainsKey(arg1) Then Return map.Item(arg1) Dim result = input_f(arg1) map.Add(arg1, result) Return result End Function End Function And I'm wondering: should I upgrade TArg1 such that it could accept any arbitrary class?"} {"_id": "201062", "title": "How to handle flag in multiple if-else's", "text": "I seem to see this often enough in my code and others'. There's nothing about it that seems horribly wrong, but it annoys me as it looks like it can be done better. I suppose a case statement might make a little more sense, but often the variable is a type that does not work well, or at all, with case statements (depending on language) If variable == A if (Flag == true) doFooA() else doFooA2() else if variable == B if (Flag == true) doFooB() else doFooB2() else if variable == C if (Flag == true) doFooC() else doFooC2() It seems there are multiple ways to "factor" this, such as 2 sets of if-elses, where one set handles when Flag == true. Is there a "good way" to factor this, or perhaps, when this if-else pattern happens, does it usually mean you are doing something wrong?"} {"_id": "201065", "title": "Scalable spring core with AMQP?", "text": "I use 3 standard Spring MVC wars, which share a common core (Services, DAO, and Models). The main problem is when I plan to deploy all 3 wars on the same server: I have the Core Application Context instantiated 3 times. I know it's possible to share a common context using an EAR, but for scalability purposes I have to keep all 3 wars as 3 distinct units, which could each be deployed in different numbers. My question is about an architectural choice: Is it a good idea to split the Service layer used by my Controllers into a facade + commands (pattern) which can be distributed to the core (backend) via AMQP? That way I could share requests across multiple cores."} {"_id": "201066", "title": "I believe I have mixed C and C++ code when I shouldn't have; Is this a problem and how to rectify?", "text": "**Background/Scenario** I started writing a CLI application purely in C (my first proper C or C++ program that wasn't "Hello World" or a variation thereof). Around midway through I was working with "strings" of user input (char arrays) and I discovered the C++ string stream object. I saw that I could save code using these, so I used them throughout the application. This means that I have changed the file extension to .cpp and now compile the app with `g++` instead of `gcc`. So based on this, I would say the application is now technically a C++ application (although 90%+ of the code is written in what I would call C, as there is lots of cross-over between the two languages given my limited experience of the two). It is a single .cpp file around 900 lines long.
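To give a flavour of the mixture, the two styles end up side by side like this (a made-up fragment for illustration only -- the names are invented and this is not from my actual file):

#include <cstdio>
#include <sstream>
#include <string>

// C-style: fill a fixed char buffer with snprintf()
void print_result_c(int sent, int received) {
    char line[64];
    snprintf(line, sizeof(line), "sent=%d received=%d", sent, received);
    puts(line);
}

// C++-style: build the same text with a string stream
void print_result_cpp(int sent, int received) {
    std::ostringstream line;
    line << "sent=" << sent << " received=" << received;
    puts(line.str().c_str());
}

int main() {
    print_result_c(4, 4);
    print_result_cpp(4, 4);
    return 0;
}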
**Important Factors** I want the program to be free (as in money) and freely distributable and usable for all to have. My concern is that someone will look at the code and think something to the effect of: > Oh look at the coding, it's awful, this program can't help me When potentially it could! Another matter is the code being efficient (it is a program for testing Ethernet connectivity). There should be no parts of the code that are so inefficient that they can severely hinder the performance of the application or its output. However, I think that is a question for Stack Overflow when asking for help with specific functions, methods, object calls, etc. **My question** Having (in my opinion) mixed C and C++ where perhaps I shouldn't, should I look to rewrite it all in C++ (by this, I mean implement more C++ objects and methods where perhaps I have coded something in a C style that can be condensed using newer C++ techniques), or remove the use of string stream objects and bring it all "back" to C code? Is there a correct approach here? I am lost and need some guidance on how to keep this application "Good" in the eyes of the masses, so they will use it and benefit from it. **The Code - Update** Here is a link to the code. It is circa 40% comments; I comment nearly every line until I feel more fluent. In the copy I have linked to, though, I have removed pretty much all the comments. I hope this doesn't make it too hard to read; no one should need to fully understand it. If I have made fatal design flaws, though, I am hoping they will be easily identifiable. I should also mention, I am writing this for a couple of Ubuntu desktops and laptops. I'm not intending to port the code to other operating systems."} {"_id": "224356", "title": "Debugging checklists: How much it's necessary to have?", "text": "Should making _debug-checklists_ be an essential part of the development process? How can it be integrated with _unit-tests_? **Update** Debugging checklist: Think about it as your troubleshooting checklist -- like what you do for your network connection, this time for your developers and your source code. For example, if you're trying to access the web via your browser and you can't, then you'd probably check whether you can load other websites; if not, then you'd check your internet/network connection, and so on. Here if you have a team of multiple developers and you run into a bug, you wouldn't just jump into the source code and try to debug it there, because someone else might have changed the code and that might cause the problem. To spot the actual bug without a checklist, everybody needs to spend a lot of time looking at different things, probably in an unorganized way also. For example we have a Map module in our software. If you're trying to use that module somewhere in the application and that doesn't work, then there is a small checklist to help you debug it faster: 1. Check if the license exists in Dashboard/Settings/Map or in the database. Is that a valid license? 2. What is the MapCenter? Is that a valid LatLng? 3. What is the MapProjection? 4. Can you reach the MapServer? So, especially if you're new to the team/code, you can catch up with others much faster without spending hours trying to spot the cause of errors. There are ways to do better error handling -- like throwing an _exception_ if the MapServer is unreachable, for example -- however there are also situations where you still need to check different elements to figure out what exactly is causing the error.
The question is: If I'm writing a _sort_ function, and I know that you need to specify the correct _encoding_ in order to get the correct result, should I write a checklist simply like this: 1. Make sure you have set the proper _encoding_ in the configuration file. If the above example could save me or another developer 10-15 minutes of looking around to find the problem, should we make it mandatory for every developer to write this kind of checklist when they spot something that could potentially be a source of problems in a specific part of the application later?"} {"_id": "170541", "title": "Does putting types/functions inside namespace make compiler's parsing work easy?", "text": "Retaining names inside a `namespace` will make the compiler's work less stressful!? For example: // test.cpp #include class vec { /* ... */ }; Take 2 scenarios of `main()`: // scenario-1 using namespace std; // comment this line for scenario-2 int main () { vec obj; } For scenario-1, where `using namespace std;` is present, several type names from `namespace std` will come into the global scope. Thus the compiler will have to check whether any of those types collides with `vec`. If one does, it generates an error. In scenario-2, where there is no `using namespace`, the compiler just has to check `vec` against `std`, because those are the only symbols in the global scope. I am interested to know: shouldn't that make the compiler a little faster?"} {"_id": "170547", "title": "Refactoring and Open / Closed principle", "text": "I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the **Open Closed Principle**: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that **do not** influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface `I` and a corresponding implementation class `A`. When class `A` has become stable (implemented and tested), I normally do not modify it too much (possibly, not at all), i.e. 1. If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation `B`, and keep using `A` as long as `B` is not mature. When `B` is mature, all that is needed is to change how `I` is instantiated. 2. If the new requirements suggest a change to the interface as well, I define a new interface `I'` and a new implementation `A'`. So `I`, `A` are frozen and remain **the implementation** for the production system as long as `I'` and `A'` are not stable enough to replace them. So, in view of these observations, I was a bit surprised that the web page then suggested the use of **complex refactorings**, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open / Closed Principle and suggesting the use of complex refactorings as a best practice?
Or is the idea here that one can use complex refactorings during the development of a class `A`, but when that class has been tested successfully it should be frozen?"} {"_id": "19437", "title": "What does Microsoft Public License (Ms-PL) clause 3 section (A) actually limit?", "text": "I've been looking at the Ms-PL license that ASP.NET MVC and DotNetOpenAuth are published under and 3A says this: 3 Conditions and Limitations (A) No Trademark License- This license does not grant you rights to use any contributors' name, logo, or trademarks. Does this mean that I cannot name my project ASP.NET MVC DotNetOpenAuth Sample Project and publish it if I use these two technologies, or does it just mean that I cannot use the author's name to promote this project?"} {"_id": "145947", "title": "Client/Server App – Best online data storage/administration/communication library?", "text": "I work on some iPhone/Android apps and the “data stored on a centralized server” pattern seems fairly common. So far, I have done one app like this, doing everything myself at a low level without using many libraries: 1. MySQL database 2. PHP/HTML web forms allowing the app manager to enter/update content with some validation 3. PHP scripts turning the SQL data into downloadable XML files 4. App code (Java/Objective-C) to download the XML files, parse them and turn them into proper objects that are then displayed in the app UI As you can imagine, that is very tedious and boring and there is lots of duplicated stuff. Also, it was interesting to do it once that way to understand how it works, but I’m sure some libraries do it much better than my own PHP/Java/Obj-C code. Now, I will have to work on something similar and I’m looking for some sort of API/service that will do most of this work for me but still allow me to alter/customize what I need to (e.g.: web form validation so that the app manager doesn’t break the integrity of the database, e.g.: plugging in my own classes). So I’m looking for the best generic solution that covers all the steps above, plus the case where the app can insert/update data as well (download and upload). (Note the library shouldn’t have a licence that prevents me from selling the app) Let’s say I have to build an app about cars: I want to store the car data in an online SQL database with some HTML forms allowing the app manager to add more cars (but only allowing him to enter validated data). I want the app user to be able to download the cars and insert new ones into the online SQL database from his phone. What library/API would you recommend? What service would save me the most time while still allowing flexibility?"} {"_id": "185063", "title": "Version control: Dealing with incomplete/broken code", "text": "It seems to be a generally accepted good practice not to push invalid, broken, or incomplete code. But one of the huge advantages of version control systems is that they give you a remote place for your code, so you can work on the same codebase from more than one place. Suppose you are working on a feature with a couple other developers, and a bad winter storm is announced; everybody agrees to go home and keep working rather than get caught in the storm, but nobody is at an ideal _stopping place_. What do you do? 1. Create different branches for each developer and commit your changes to those, push those branches to the remote and hope you can clean it up later on the remote without making life difficult? 2. Create a different `remote` just for incomplete code? 3.
Copy your code to your personal flash drive and hope you don't lose it later, or get caught violating company policy? 4. Something else? (We use `git`, but I would hope the answer could be general to any version control system.)"} {"_id": "185064", "title": "Using a stream manipulator (endl) or a newline escape character (\n)?", "text": "I don't have a specific context in which I'm asking the question, but while I was reading a beginner book on C++ I noticed the use of both an endl stream manipulator and a newline escape character when dealing with a stream object. The example is as follows: cout << "Hello World" << endl; cout << "Hello World\n"; My questions are: 1. Is it more appropriate to use the stream manipulator (endl) in a certain situation and an escape character in a different one? 2. Are there drawbacks efficiency-wise to using one of the two? 3. Are they completely interchangeable? 4. I read that an escape sequence is stored in memory as a single character. Does that mean it is more appropriate to use endl if you're going for low memory consumption? 5. Does the stream manipulator endl use up memory in any way? If so, is it more than the escape sequence? Thanks, StackExchange. Apologies if I posted this in the wrong section; I thought it counted as data structures."} {"_id": "230438", "title": "In git, is it a bad idea to create a tag with the same name as a deleted branch?", "text": "I have a project with a git branching model that roughly follows that of nvie's git-flow. Our release branches are named in a SemVer format, e.g. `v1.5.2`. Once a release branch is given the green light for production, we close the branch by merging it into master, applying a tag, and then deleting the branch. As we immediately delete the release branch, we've been using the same identifier for tagging the branch, e.g. `v1.5.2`. Here are the commands we'd use to close a release branch: $ git checkout master $ git merge v1.5.2 $ git tag -a v1.5.2 -m "Version 1.5.2 - foo bar, baz, etc" $ git branch -d v1.5.2 $ git branch -dr origin/v1.5.2 $ git push origin :v1.5.2 $ git push $ git push --tags This seems to work in the majority of cases, however it's causing an issue in the scenario where another instance of the git repo (e.g. another dev machine, or a staging environment) has a local checkout of the v1.5.2 branch. The `git push origin :v1.5.2` command will delete the branch on the remote, but does not delete the local version of the branch (if it exists) in all repos. This leads to an ambiguous reference when trying to check out `v1.5.2` in those repos: $ git checkout v1.5.2 warning: refname 'v1.5.2' is ambiguous. Can this be avoided without using a different syntax for the branches, e.g. `release-v1.5.2`, or `v1.5.2-rc`? Or is it unavoidable, and therefore a fundamentally bad idea to create a tag with the same name as a deleted branch?"} {"_id": "212639", "title": "Should I store test files in source control?", "text": "I have a number of (large) test files that need to be maintained. This means access to their history is a requirement. **Benefits** * Any new developers get the entire test suite with just a `git pull`. * The history of the files is backed up. * The files themselves are backed up. **Drawbacks** * Huge increase in the size of the repository. * Huge increase in download size for new developers cloning the repository. What are the best practices for maintaining test files? Do I store these files in source control?
Are there any alternatives?"} {"_id": "230436", "title": "Sprint Planning Meetings - determine if a work item is \"planned\"?", "text": "We've been working with Scrum for a while now, generally successfully. However of late, as the pressure has started to mount up we've encountered several situations where items came through planning meetings but when it actually came time to code, it became apparent they were woefully under-specified. This is an annoying and awkward waste of time. So I'm wondering, is there anything we can do to make sure that a work item is properly understood and planned before it's included in a sprint? We don't use TDD, but my experiences with it suggest that it's a good way of ensuring a programmer understands a task before beginning work. So I did consider trying to work out some automated testing as a possible approach to this. But not all work items are amenable to automated tests, and it's likely to be difficult/boring for non-programmers in the meeting. As an aside, out of curiosity, is when one has decided on acceptance criteria, should they go on a work item, or a user story, or both?"} {"_id": "235403", "title": "Wrapper around C++ STL", "text": "Where I work we have our own system library, which pretty much is only wrappers around the STL, such as: template class HVector { protected: std::vector data; public: int size () const; //Only returns data.size() int custom(); //Some generic custom function that uses the data vector } Most of the class members are just re-declarations of the STL container members, but we also have a few customized functions that do some generic tasks with the container. Is this a good design? If not, what would be the best way to implement the customized functions around the containers?"} {"_id": "181772", "title": "Mobile number validation", "text": "I am trying to find best way to validate a mobile number with in a country. Currently my understanding is: User can enter whatever format they want in mobile numbers and its a waste of time and energy to validate it against a set of regular expressions. My application is not a critical one like banking application and if the user is entering an invalid mobile number, it is at his own risk to get updates (like activate account/ do something with the application) So I think the best way is to check for mobile number length and whether all are digits. I want to know the best way forward and is there any good resource (non- scattered) to get all mobile number length validations based on a country code?"} {"_id": "181776", "title": "Pictures from iPhone to clients FTP server - Directly (iPhone->FTP) or Cloud (iPhone->Amazon->FTP)", "text": "My clients wants to take pictures with their iPhone's and place them collectible on their server, suggested by FTP. I can see there is two solutions: 1. Directly upload from the iPhone to the FTP server. My colleague can make so the get the ActiveDirectery username and password, so they do not use the same password all of them. 2. Upload them directly to Amazon S3, where I then have a worker which takes the image and puts it on the ftp server. I can do this easily with Python. I will use Kinvey to manage users. Is nr. 2 overkill on short terms? Or is the downsides with creating a FTP connection on a iPhone? My boss thinks it makes it unnecessary more complicated with that extra link. I can easily see what my boss mean, and think he is right. The problem is that I have not tried FTP'ing in Obj-C and do not know how hard or difficult it can might be. 
I know PhoneGap, which could easily upload to Amazon."} {"_id": "253979", "title": "Should I reference a CopyOnWriteArraySet from the Set interface?", "text": "There are two ways to use a CopyOnWriteArraySet: // A Set set = new CopyOnWriteArraySet<>(); and // B CopyOnWriteArraySet set = new CopyOnWriteArraySet<>(); With 'normal' sets like HashSet and TreeSet, case A is preferred, because it allows easy switching of the Set implementation. However, in this case a conscious choice is made for a specific thread-safe Set implementation. Should I use case B to signify this intent?"} {"_id": "181778", "title": "Is a coder that 'quality checks' bug fixes and bugs raised by testers a recognised role?", "text": "I've recently found myself frequently in the position where I'm checking both bug fixes by other programmers, and bugs raised by the QA team. Bug fixes frequently end up having 'collateral damage', and I've found it invaluable to go through any recent fixes and analyze what other parts of the system could potentially have been impacted by the changes. It's been instrumental in maintaining the reliability of the system. Without this process, the testers generally find new bugs raised, and rarely recognize the cause as the recent bug fix. I'm also quite often going through new bugs raised by the testers, analyzing possible causes to determine if there's a root cause that may impact other parts of the system, quite frequently merging two or more bugs into one case. It's essentially doing the 'investigation' part of the bug fixing process, so that another programmer can focus on just the coding. The question is, is this a recognised role? It doesn't seem to fit into the 'Programmer' or 'QA' job title neatly; it's kind of in-between. Or is the need for this role more a consequence of bad process or bad design?"} {"_id": "86995", "title": "Modern approaches to retrieve useful content from a web page?", "text": "What are the modern ways to (effectively) determine which parts of a page contain useful text, data tables, etc. and which do not (e.g. ads, navigation, etc.)? What are the most valuable research results/papers in this field from recent years? Thank you in advance!"} {"_id": "86997", "title": "Keep permuting a vector until it is ordered", "text": "I have this problem: Imagine you have a vector V, integers from 0 to 70000 -- sorted in ascending order. Now you have a permutation P of that vector. Then you do V[P], "shuffling" the vector. If you keep doing V[P] (P never changes), will V eventually be sorted again in ascending order? Is there a way to know, a priori, how many shuffles you need?"} {"_id": "86993", "title": "Do backend developers care what their code looks like in the frontend?", "text": "As a backend and a frontend developer I see the process from start to finish, first by creating the logic, displaying the correct data on a web page and then using frontend skills to make this look awesome. My question is, do pure backend developers care what their code ends up looking like in the frontend? As far as the user is concerned, they will ONLY see design/frontend. They don't actually care that your code is clean, DRY and maintainable. As long as it doesn't disrupt their payment process or flight booking they do not care.
Does this affect the average backend developer?"} {"_id": "38884", "title": "Asynchronous Programming in Functional Languages", "text": "I'm mostly a C/C++ programmer, which means that the majority of my experience is with procedural and object-oriented paradigms. However, as many C++ programmers are aware, C++ has shifted in emphasis over the years to a functional-esque style, culminating finally in the addition of lambdas and closures in C++0x. Regardless, while I have considerable experience coding in a functional _style_ using C++, I have very little experience with actual functional languages such as Lisp, Haskell, etc. I've recently begun studying these languages, because the idea of "no side-effects" in purely functional languages has always intrigued me, especially with regards to its applications to concurrency and distributed computing. However, coming from a C++ background I'm confused as to how this "no side-effects" philosophy works with asynchronous programming. By asynchronous programming I mean any framework/API/coding style which dispatches user-provided event handlers to handle events which occur asynchronously (outside the flow of the program.) This includes asynchronous libraries such as Boost.ASIO, or even just plain old C signal handlers or Java GUI event handlers. The one thing all of these have in common is that the nature of asynchronous programming seems to _require_ the creation of side-effects (state), in order for the main flow of the program to become aware that an asynchronous event handler has been invoked. Typically, in a framework like Boost.ASIO, an event handler _changes_ the state of an object, so that the effect of the event is propagated beyond the life-time of the event handler function. Really, what else can an event handler do? It can't "return" a value to the call point, because there is no call point. The event handler is not part of the main flow of the program, so the only way it can have any effect on the actual program is to change some state (or else `longjmp` to another execution point). So it seems that asynchronous programming is all about asynchronously producing side-effects. This seems completely at odds with the goals of functional programming. How are these two paradigms reconciled (in practice) in functional languages?"} {"_id": "157943", "title": "Are there any design patterns that are unnecessary in dynamic languages like Python?", "text": "I've started reading the design pattern book by the GoF. Some patterns seem very similar with only minor conceptual differences. Do you think out of the many patterns some are unnecessary in a dynamic language like Python (e.g. because they are substituted by a dynamic feature)?"} {"_id": "143736", "title": "Why do we need private variables?", "text": "Why do we need private variables in classes? Every book on programming I've read says "this is a private variable, this is how you define it" but stops there. The wording of these explanations always seemed to me like we really have a crisis of trust in our profession. The explanations always sounded like other programmers are out to mess up our code. Yet, there are many programming languages that do not have private variables. 1. What do private variables help prevent? 2. How do you decide if a particular property should be private or not? If by default every field SHOULD be private then why are there public data members in a class? 3.
Under what circumstances should a variable be made public?"} {"_id": "176876", "title": "Why shouldn't I be using public variables in my Java class?", "text": "In school, I've been told many times to stop using `public` for my variables. I haven't asked why yet. This question: Are Java's public fields just a tragic historical design flaw at this point? seems kinda related to this. However, they don't seem to discuss _why_ it is "wrong", but instead focus on _what_ can be used instead. Look at this (unfinished) class: public class Reporte { public String rutaOriginal; public String rutaNueva; public int bytesOriginales; public int bytesFinales; public float ganancia; /** * Constructor para objetos de la clase Reporte */ public Reporte() { } } No need to understand Spanish. All this class does is hold some statistics (those public fields) and then do some operations with them (later). I will also need to be modifying those variables often. But well, since I've been told not to use `public`, this is what I ended up doing: public class Reporte { private String rutaOriginal; private String rutaNueva; private int bytesOriginales; private int bytesFinales; private float ganancia; /** * Constructor para objetos de la clase Reporte */ public Reporte() { } public String getRutaOriginal() { return rutaOriginal; } public String getRutaNueva() { return rutaNueva; } public int getBytesOriginales() { return bytesOriginales; } public int getBytesFinales() { return bytesFinales; } public float getGanancia() { return ganancia; } public void setRutaOriginal(String rutaOriginal) { this.rutaOriginal = rutaOriginal; } public void setRutaNueva(String rutaNueva) { this.rutaNueva = rutaNueva; } public void setBytesOriginales(int bytesOriginales) { this.bytesOriginales = bytesOriginales; } public void setBytesFinales(int bytesFinales) { this.bytesFinales = bytesFinales; } public void setGanancia(float ganancia) { this.ganancia = ganancia; } } Looks kinda pretty. But seems like a waste of time. Google searches about "When to use public in Java" and "Why shouldn't I use public in Java" seem to discuss a concept of **mutability**, although I'm not really sure how to interpret such discussions. I do want my class to be mutable - all the time."} {"_id": "249701", "title": "Why shouldn't I make variables public, but should use public getters/setters?", "text": "I'm watching a C++ tutorial video. It is talking about variables inside classes and asserts that variables should be marked private. It explains that if I want to use them publicly, I should do it indirectly through functions. Why? What's the difference? It looks like the result is the same, except that instead of having one line of code, I now have like 9 or 10 more LOC to do the same thing. Can someone explain why exposing a private variable through public functions, rather than just making it public outright the simple way, is any different or more efficient?"} {"_id": "57491", "title": "Do ALL your variables need to be declared private?", "text": "> **Possible Duplicate:** > Why do we need private variables? I know that it's best practice to stay safe, and that we should always prevent _others_ from directly accessing a class' properties. I hear this all the time from university professors, and I also see this all the time in a lot of source code released on the App Hub. In fact, professors say that they will actually take marks off for every variable that gets declared public.
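What they expect instead is the full ceremony, even for a trivial field (a hypothetical example, just to show the boilerplate):

public class Player {
    private int score;                // the actual data

    public int getScore() {           // read access
        return score;
    }

    public void setScore(int score) { // write access
        this.score = score;
    }
}

where `public int score;` would have said the same thing in one line.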
Now, this leaves me _always_ declaring variables as private. No matter what. Even if each of these variables were to have both a getter and a setter. But here's the problem: it's tedious work. I tend to quickly lose interest in a project every time I need to have a variable in a class that could have simply been declared public instead of private with a getter and a setter. So my question is, do I really need to declare _all_ my variables private? Or could I declare _some_ variables public whenever they require both a getter and a setter?"} {"_id": "80232", "title": "How do you handle the need to have multiple development environments?", "text": "How do you deal with different project environments? Every project might require a different database (Oracle, IBM DB2, MySQL, etc.), a different server (Tomcat, IBM WAS, WebLogic, etc.) or some other new technologies. Every time a new database or new server comes in, I install it onto my workstation for my convenience. Right now I have more than one database and server on my workstation and it has caused my workstation to take some time at startup. I have to wait a period of time for my workstation to be ready for me to start working. Sometimes when I install database A, it causes my previous database B to have issues. I found that these take a lot of my CPU although I'm not using them at the moment. In this case, I can think of only one method: I can install the databases onto one virtual machine and the servers onto another virtual machine. Or one virtual machine per project environment. Then I can start just the one that I need. What do you think?"} {"_id": "173518", "title": "What are the differences between abstract classes, interfaces, and when to use them", "text": "Recently I have started to wrap my head around OOP, and I am now to the point where the more I read about the differences between abstract classes and interfaces the more confused I become. So far, neither can be instantiated. Interfaces are more or less structural blueprints that determine the skeleton, and abstracts are different in being able to partially develop code. I would like to learn more about these through my specific situation. Here is a link to my first question if you would like a little more background information: What is a good design model for my new class? Here are two classes I created: class Ad { $title; $description; $price; function get_data($website){ } function validate_price(){ } } class calendar_event { $title; $description; $start_date; function get_data($website){ //guts } function validate_dates(){ //guts } } So, as you can see these classes are almost identical. Not shown here, but there are other functions, like `get_zip()`, `save_to_database()`, that are common across my classes. I have also added other classes Cars and Pets which have all the common methods and of course properties specific to those objects (mileage, weight, for example). Now I have violated the **DRY** principle and I am managing and changing the same code across multiple files. I intend on having more classes like boats, horses, or whatever. So is this where I would use an interface or abstract class? From what I understand about abstract classes I would use a super class as a template with all of the common elements built into the abstract class, and then add only the items specifically needed in future classes.
For example: abstract class content { $title; $description; function get_data($website){ } function common_function2() { } function common_function3() { } } class calendar_event extends content { $start_date; function validate_dates(){ } } Or would I use an interface and, because these are so similar, create a structure that each of the subclasses is forced to use for integrity reasons, and leave it up to the end developer who fleshes out that class to be responsible for each of the details of even the common functions. My thinking there is that some 'common' functions may need to be tweaked in the future for the needs of their specific class. Despite all that above, if you believe I am misunderstanding the what and why of abstracts and interfaces altogether, by all means let a valid answer be to stop thinking in this direction and suggest the proper way to move forward! Thanks!"} {"_id": "228711", "title": "Interfaces vs Base class", "text": "I'm struggling to know when to use a base class with polymorphism or an interface. Provided my object exposes DoThis(), I can't see why it matters if it comes from an interface or a base class. Please consider the following public void MyMethod() { var myObject = new MyObject(); myObject.DoThis(); } In regards to MyObject, I could have created it in 2 ways: Approach 1 public class MyObject : IMyInterface { public void DoThis() { //logic } } Approach 2 public class MyObject : MyBaseObject { public override void DoThis() { //logic } } From what I can see, both of these implementations achieve the same thing. The issue could be my example is too simple/contrived, but is there a way to know when to use one approach over the other? In my example above, I would _guess_ the answer is preference? I've not done any big projects before and so I'm guessing that one isn't as extensible, but I don't know why I think that!"} {"_id": "101399", "title": "How long should I provide free updates for my shareware - per version, per year, or forever?", "text": "I will release my first shareware soon and I'm wondering how long a paying user should be entitled to free updates. I can think of three options: * You buy a version, then all the updates are free for one year (e.g. SmartFTP) * You buy a version, then all the minor updates (mostly bug fixes) are free (e.g. UltraEdit) * You buy a version and all the minor and major updates are free forever (e.g. Total Commander) It seems that different applications use different approaches. What would you recommend?"} {"_id": "218171", "title": "UPOS RFIDScanner data format", "text": "A lot of work that I do currently is based in the OPOS/UPOS world. My company has a device that can read 13.56MHz tags (RFID), Smart Cards, and Mag Stripe cards. Up until somewhat recently I have only been working with RFID for a very specific scenario. That was to read UltraLight C and Desfire cards. These cards were all set up very specifically so that I could take the data read from those cards and force it into an MSR track 2 format. The past couple of weeks, however, I have been working on reading RFID credit cards (since I have a Visa card I've been using mine), and Smart Card credit cards. (The Visa card I have has both.) In learning how to communicate with SmartCard and reading ISO7816 and EMVCO documents I became a little more familiar with how info is stored. But now I have a question regarding UPOS. The RFID data on my Visa is stored (and read) very similarly to how the data is stored and read from the Smart Card on my Visa. Cool.
Well in the UPOS spec for SmartCardRW the ReadData method returns a byte array. That's cool, I can just return all that data and then parse it as my heart desires. The RFID though has a LinkedList of Tags. Well this makes sense in terms of my Visa card (reminds me of a question I have in regards to SmartCard, but that is for another question), but what about ULC and Desfire, or for that matter any Mifare card? Pages, Files, Purses don't exactly fit the Tag profile. For instance, let's just say I read pages 4-12 on my ULC card. Each page I read is 4 bytes long. Does this mean I have 9 tags in my LinkedList? Is my Tag id the page number? Or then how does that translate to Desfire? I open application 123456 and read file 1 and file 2. Do I have 2 tags? And if so, what is my tag id? At least with my Visa I _think_ that I have to use the Tag id (e.g. 5F24 for my expiration date) and value of {0x15, 0x10, 0x31} Part of me says yes.. that makes sense. Another part of me says, "well if that is the case then why doesn't SmartCardRW have Tags?" So that is my question. How do I format my data from those different types of media? Or is that the job of my Control Object (the application)? If so, how does it know? The only protocols I have are: // Summary: // Enumerates the available predefined RFID tag protocols the device supports. [Flags] public enum RFIDProtocols { EpcClass0 = 1, RFIDSdt0Plus = 2, EpcClass1 = 4, EpcClass1Gen2 = 8, EpcClass2 = 16, Iso14443A = 4096, Iso14443B = 8192, Iso15693 = 12288, Iso180006B = 16384, Other = 16777216, All = 1073741824, } If I use that, well, all of the cards I have are Iso14443A. I use the ATQA and the SAK to know what type of card I really have. There is no RFID property that lets me specify that. So I'm lost."} {"_id": "140319", "title": "Flowchart for solving programming problems", "text": "I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some examples of techniques: * Google "how to..." or "... tutorial". * Search the java/msdn/apple/etc API doc for the specific class or method. * Search Stack Overflow for the exact problem with some tags like [iphone]/[java] etc. * Take a nap and let the subconscious work. * Debug. * Draw the algorithm or system. * Google the logged error message. * Ask a colleague or manager. * Ask a new question on Stack Overflow. From your experience, what is the best flowchart for solving a programming problem?"} {"_id": "219887", "title": "Does Java development typically involve more subclassing than C#/.NET?", "text": "I've recently started looking at Android development. This has brought me back into the world of Java software development. The last time I worked with Java, I'll admit, I didn't understand OOP nearly as much as (I think) I do now. Having mainly used C# in my career, I'm noticing a startling difference in how inheritance is used in Java and C#. In C# it seemed like inheritance could be avoided in most situations. The task at hand could usually be accomplished by using concrete classes of the .NET framework. In Java, from what I'm gathering from code samples, it seems like the Java framework supplies many interfaces or abstract classes that are then meant to be implemented/extended by the developer. This seems to be too big a difference to just boil down to style. What is the reasoning behind this?
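To make the difference concrete, the kind of pattern I keep meeting in Java looks like this (a contrived sketch I wrote using the standard java.util.Timer API, not taken from any real codebase):

import java.util.Timer;
import java.util.TimerTask;

public class Reminder {
    public static void main(String[] args) {
        final Timer timer = new Timer();
        // The framework hands me an abstract class (TimerTask) and expects me to extend it
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("Time's up!");
                timer.cancel(); // stop the timer thread so the program can exit
            }
        }, 1000);
    }
}

whereas my instinct from C# would have been to pass a delegate or lambda into a concrete framework class.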
I feel like I won't be writing clean Java code until I understand this. Also, is this limited to just the Android SDK or is this a Java-wide approach to OOP? Or, put another way, what is it about the design of these two languages that seems to encourage more or less inheritance use than the other? If the languages treat inheritance identically, and assuming my observation is valid, then it means this is related to the design of the frameworks/libraries and not the languages. What would the motivation be for this kind of design?"} {"_id": "140311", "title": "How to popularize Nemerle (or another programming language)?", "text": "Any .NET developer who is interested in different programming languages knows that F# is the most popular functional language for the .NET platform nowadays. The main factor behind the popularity of F# is the great support from Microsoft. But we are not limited to F# at all. There are some other functional languages on the .NET platform. I'm very disappointed with the fact that Nemerle isn't well-known. It's an awesome language which supports three paradigms: object-oriented, functional and meta-programming. I won't try to explain why I like it so much. The problem is that I can't use it at work. I think that only really brave companies can rely on Nemerle. It's almost unknown; that's why it's hard to find new developers for a project. No one wants to take the first step with Nemerle if it can influence the budget, which is reasonable. So, here is a question: **what can I do to make Nemerle more popular?** Here are my first ideas: * implement open-source projects using Nemerle; * make presentations at different conferences; * write articles."} {"_id": "139496", "title": "How soon existing projects should be upgraded to newer version of the technologies powering it?", "text": "In large scale applications (e.g. Banking Domain), what are the criteria for upgrading an existing project to a newer version of the technology on which it is built? For instance, a .NET application built on .NET Framework 3.5 and using an Oracle 9i back-end. There are not any specific urgent requirements, but should I think of upgrading to .NET 4.5 and Oracle 11g in the near future? Is it advantageous to keep pace with technology versions?"} {"_id": "140317", "title": "What is the difference from the push and pull development models?", "text": "I was reading Extreme Programming Explained, Second Edition and in chapter 11, "The Theory of Constraints", the authors talk about the old and obsolete **"push" development model** and the XP way, the **"pull" development model**. It looks like a quite important concept, but it gets only a very small paragraph and two images that are mere illustrations of the "waterfall" and iterative processes, with nothing specific about these models except the image captions. I searched and it doesn't go any further about it in the rest of the book. I couldn't find any further explanations or discussions about it on the Internet either. If the only difference about those is that one is **"waterfall"** and the other is **iterative**, then why push and why pull? Does anyone understand what is really the difference between those two and can give some good examples?"} {"_id": "42044", "title": "How can I prototype a very abstract theoretical framework?", "text": "I've had an idea for a semantic model of computing that's theoretically sound but is also quite unusual. I'd like to quickly prototype a system to prove that it can work in practice.
Most of my work is in programming languages, so I'm comfortable putting together a small language for the purposes of testing, but I wonder if this is the best approach, as it necessitates a certain amount of advance work on the linguistic side that's not directly related to the computational model. So what is the best prototyping strategy for a very high-level theoretical framework such as this? Should I go with a new language, or an embedded DSL, or some other approach? I can provide a bit more background if necessary, but this doesn't need to turn into a discussion of the specifics of the model."} {"_id": "45218", "title": "Releasing software/Using Continuous Integration - What do most companies seem to use?", "text": "I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it. Before our CI system, the process(es) used were: (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag (Develop) -> Ready for release -> (build -> fix bugs) Loop -> Tag Our CI setup: 1 server for development (DEV) 1 server for qa/release (QA) The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (lose changelog on server), or change the existing job (lose original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [If at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?"} {"_id": "255680", "title": "Buffer-overflow vulnerabilities that a static code analyser won't pick up", "text": "I'll use FlawFinder in this example. FlawFinder is a static code "analyser" tool that examines C/C++ source files and outputs warnings/hits if a vulnerability is identified. The way it does this is by using text pattern matching for function names and their parameters. It then matches these against a pre-defined database of commonly known issues/vulnerabilities associated with different standard library functions. E.g. #include <stdio.h> int main() { char str[50]; printf("Enter a string : "); gets(str); printf("You entered: %s", str); return(0); } This will cause FlawFinder to generate a hit at the line calling the `gets(str)` function. It will warn against a potential buffer overflow and advise the developer to use fgets() instead. I'm suspecting that this may cause some false positive results in some cases, as it doesn't actually "analyse" the code and the context but just matches function names against a set of predefined warnings.
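For example, I would guess that a fragment like this (an invented snippet, not from any real codebase) still earns a hit, even though the copy is provably safe in context:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[16];
    strcpy(buf, "hello");   /* flagged: strcpy is on the warning list, */
    puts(buf);              /* even though "hello" always fits in buf  */
    return 0;
}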
But I was wondering if there's a situation where you could have an obvious buffer overflow vulnerability that wouldn't be identified by a tool like FlawFinder or in general most static code analysis tools? Specifically in a C/C++ environment."} {"_id": "124581", "title": "What do you think of this Exception handling practice", "text": "I'm working on a project that includes a lot of creating/manipulating and reading JSONObjects and arrays but not in a systematic way. So there is JSON code everywhere. It is ok for me except that every time I work on a JSONObject I have to handle JSONException. So I created a class that extends JSONObject, overrides the put/get methods in it and handles JSONExceptions inside the class. It made my code way clearer and I believe it is more than enough for my case. What do you think?"} {"_id": "241542", "title": "The purpose of using a constants pool for immutable constants", "text": "_Originally posted at stackoverflow.com/q/23961260_ * * * I come across the following code with a lot of frequency: if (myArray.length == Constants.ZERO_INT) or if (myString != null && !myString.equals(Constants.EMPTY_STRING)) Neither of these makes much sense to me. Isn't the point of having a constant pool for ease of code appearance and to allow for modularity? In both of the above cases, it just looks like needless noise that accomplishes neither objective. My question: what is the purpose of using a constants pool for variables like this which will never change? Or is this just cargo cult programming? If so, then why does it seem to be prevalent in the industry? (I've noticed it with at least two different employers I've worked with)."} {"_id": "201934", "title": "When inheriting from a base class, should you retest constructor arguments?", "text": "Say I have something like this: public class BaseClass { public BaseClass(string someString) { if(someString == null) throw new ArgumentException(); } } public class ChildClass : BaseClass { public ChildClass(string someString) : base(someString) { // Should I do this?? if(someString == null) throw new ArgumentException(); } } Also, what if I'm inheriting from a class that I do not have the source for? Should I recheck constructor arguments?"} {"_id": "201930", "title": "Where to store web application custom configuration settings", "text": "I'm currently working on an internal web application for my company in node.js. We want to have certain configuration settings be changeable by management. For example, we pre-print some labels automatically during receiving if the quantity is less than 5 so the user doesn't have to stop to confirm a print dialog. What if we wanted to shift this to any quantities less than 10 because it is found to be more efficient? My question is this: What's the best approach to save custom configuration settings if I were to create an area for changing these params on their dashboard? Would saving this information in a JSON-formatted text file on the server and then just parsing it using fs every time be the best way? Or is there something I'm not thinking of? Thank you in advance."} {"_id": "139722", "title": "Could JQuery and similar tools be built into the browser install?", "text": "After reading another question about JQuery and CDN's, is it feasible for tools like JQuery to "come with" the browser, thus reducing/eliminating the need for the first download from a CDN, or from your own host server? JQuery files specifically are pretty small, so you could easily have a number (all?)
of the different versions as part of a browser install. Now fair enough, this would increase the install footprint and download time for the browser itself. Then sites could check "local" first, before CDN (which then caches), before finally defaulting to downloading from the website server itself. If this is feasible, has it been done, and if not, why hasn't it been done?"} {"_id": "163026", "title": "Teaching version control (git, mercurial) to undergraduates?", "text": "I'm teaching a scientific programming course to undergraduates, targeted at freshmen/sophomores who are seeing the command line for the first time but are likely to need version control in future classes and careers. Are there resources for teaching version control (to those who have never seen it) available? What aspects should be taught first / emphasized most?"} {"_id": "200089", "title": "How do you decide what code to put into a function?", "text": "I started out with a script that was a few hundred lines. Later, I realized I wanted another script that would require much of the same code. I decided to wrap certain areas of the original script that would be shared into definitions. When I was deciding exactly what should be in a function, I came across various things to consider: 1. What should I set as its input parameters? If anything was needed in the function, I should probably require it as a parameter, right? Or should I declare it within the function itself? 2. If function x always requires the output of function y, but function y sometimes is needed alone, should function x include the code of function y or simply call function y? 3. Imagine a case where I have 5 functions that all call a function 'sub' (sub is essential for these 5 functions to complete their work). If 'sub' is always supposed to return the same result, then wouldn't multiple calls from these parent functions duplicate the same work? If I move the call to 'sub' outside of the 5 functions, how can I be sure that 'sub' is called before the first call to any of the 5 functions? 4. If I have a segment of code that always produces the same result, and isn't required more than once in the same application, I normally wouldn't put it in a function. However, if it is not a function, but is later required in another application, then should it become a function? Sorry if these questions are too vague, but I feel there should be some general guidelines. I haven't programmed for very long, and have bounced around between OOP and functional, but I don't remember ever reading anything that explained this. Could it simply be a matter of personal preference?"} {"_id": "231387", "title": "Python methods vs builtin functions", "text": "Python widely uses built-ins (or module functions) and not class methods. So * `len([])` instead of `[].length()` * `filter(f, [])` instead of `[].filter(f)` * `str(2)` instead of `2.to_str` * same for `map`, `foreach` etc These prevent you from doing nice chaining which is possible in other languages like Ruby or Scala: (sorry very artificial example) my_list.map(f).filter(g).length() In Python you would need to either split into several lines: list_of_something_else = map(f, my_list) list_of_something_else_without_blah = filter(g, list_of_something_else) length = len(list_of_something_else_without_blah) Or write it in a single expression, where it doesn't look readable: len(filter(g, map(f, my_list))) Is it considered to be not-Pythonic to chain methods?
Or do people usually extend classes with a dozen helper functions to make it easier?"} {"_id": "200083", "title": "What is the rule on passing around collections? List vs. Ienumerable vs. IQueryable", "text": "I do Entity Framework stuff using repository patterns that are passed to the controller to then be called by the client using jQuery AJAX. Are there any basic rules on what format I should be passing these lists around in? Within the server code I suppose I could just pass around an IQueryable, yes? And to the client I could pass around a list version? That is my first guess.. any ideas?"} {"_id": "231381", "title": "When/Where to create/assign event handlers to its elements", "text": "Let's say I have the following code // JS $(function(){ $('[data-mc=logout]').click(function(){ if (!confirm(myconfig.msg['asklogout'])) { return false; } }); $('[data-mc=ajax]').click(function(){ var target=$(this).data('target'); //retrieve data-target of the clicked tag return false; }); }); which targets some HTML elements // HTML Logout Start page 1. Am I right when I say that this code needs to be called on every full page load? 2. a: If so, what if one of the elements doesn't exist on the chosen page? b: To avoid errors an "if exist" check is necessary, right? c: jQuery will do that prior to adding the handler? 3. What if some content gets requested using Ajax and inserted into the DOM? Then this code needs to be called again, right? 4. Is this best practice or should the code be split and called per page load, where only existing elements' event handlers get attached?"} {"_id": "70252", "title": "Are gimmicks ever a good idea?", "text": "Is adding gimmicks to a website ever a good idea? What I mean by gimmick is adding "cool" features that solve no real problem. We are currently considering adding a "Magazine style" product viewing application to the site. You know, like a flash-based app that shows magazine pages that you can click and flip with your mouse. I personally think this is a terrible idea. We already have a category page where you click on a category and it lists products. So this catalog view is not solving any problems that are not already solved. They want to use it because of its "cool" factor. I personally don't think it's cool, I think it's cumbersome, but that's beside the point. I am all for adding complex features, but only when those complex features are solving real problems. I think the worst part of all is that there are now 2 paths for the user to take, a category route and a magazine route. I think the shopping route should be linear and I believe too many options like this are distracting and confusing. InDesign has a plugin that can make something like this very easily. So not much time will be wasted on it, but I still think it is a very bad idea. Am I right, or am I being too opinionated? Can anyone give me any good arguments for either side?"} {"_id": "70251", "title": "What should I expect as a C++ software engineer in a company that develops python web applications?", "text": "A company is hiring C++ software engineers. When I go to the company's website, they provide a web application written in Python. (At least that's what I see from the outside.) What kind of responsibilities can I expect? Does this sound like server and back-end coding? What else?
At which point would a Python web application most likely switch to C++?"} {"_id": "70255", "title": "Recommendations on running Visual Studio inside a VMWare VDI environment?", "text": "Does anyone have experience running Visual Studio inside a VDI environment? * Would you recommend it? * Would you advise against it? * * * **Our Background** Our department is one of the few in the company that doesn't use thin clients. The head of the department is getting pressure that we too should be using thin clients, in the hope that the help desk and server admins will be more responsive to other departments' complaints and issues. I agree with the idea behind it, but I'm very leery of running a program like Visual Studio inside a virtual machine."} {"_id": "70254", "title": "Returning from a long function on the first false condition", "text": "I have a long(ish) function of the following pattern: bool func(some_type_t *p1, another_t *p2) { bool a = false, b = false, c = false, etc = false; a = (some_long && expression && (that_deserves | its_own_line)); b = (another || long_expression && (that_deserves | its_own_line)); c = and_so && on; return a && b && c && etc; } As you can see, the return value is `true` if and only if all flags are true. Hence I can return as soon as one of them turns out to be `false`. It may or may not be more optimal than the current version, but I want to do it to satisfy my OCPD. I thought of putting everything in a `do{}while(0);` and breaking out on `false`, but that looks odd (and I remember seeing something like that on TDWTF). I do not want multiple return statements or deeply nested if blocks, and I certainly don't want to compromise on readability with a huge if that relies on short-circuiting. I can live with `goto` in this case, but I want to know if there are any other patterns/constructs used in similar scenarios."} {"_id": "203570", "title": "Understanding how memory contents map into a struct", "text": "I am not able to understand how bytes in memory are being mapped into a struct. My machine is a little-endian x86_64. The code was compiled with gcc 4.7.0 from the Win64 mingw32-64 distribution for Win64. These are the contents of the relevant memory fragment: ...450002cf9fe5000040115a9fc0a8fe... And this is the struct definition: typedef struct ip4 { unsigned int ihl :4; unsigned int version :4; uint8_t tos; uint16_t tot_len; uint16_t id; uint16_t frag_off; /* flags=3 bits, offset=13 bits */ uint8_t ttl; uint8_t protocol; uint16_t check; uint32_t saddr; uint32_t daddr; /* The options start here. */ } ip4_t; When a pointer to such a structure (let it be `*ip4`) is initialized to the starting address of the above pasted memory region, this is what the debugger shows for the struct's fields: ip4: address=0x8da36ce ip4->ihl: address=0x8da36ce, value=0x5 ip4->version: address=0x8da36ce, value=0x4 ip4->tos: address=0x8da36d2, value=0x9f ip4->tot_len: address=0x8da36d4, value=0x0 ... I see how `ihl` and `version` are mapped: 4 bytes for a long integer, little-endian. But I don't understand how `tos` and `tot_len` are mapped; which bytes in memory correspond to each one of them. Thank you in advance."} {"_id": "125180", "title": "Why is it so hard to make a great app?", "text": "Maybe a better way to ask the question would be: why is the Facebook app so bad? This is not so much a question specific to the Facebook iOS app, it just uses that as an example.
What I want to know is how it is possible that a company like Facebook, which has all the resources it needs, can make such a bad app. Facebook is famous and \"cool\", so a lot of devs, designers and architects want to work for them, and it has the money to pay for the best of those. And it's not like it doesn't care about the app. It's not some small, unimportant internal project; it's one of the most downloaded apps on the App Store. But it is also one of the most complained about. This brings me to my question: how is it possible that an entity with such resources and desire can end up making such a bad product? To put it another way, what are the main complexities involved in such a big project that can ultimately lead a collection of perfectly skilled individuals to collectively create something that is not so perfect? Put in a more positive way, what is required (other than skilled people) to make a great product?"} {"_id": "121408", "title": "Why can we delete some built-in properties of global object?", "text": "I'm reading ES5 these days and find that the [[Configurable]] attribute in some built-in properties of the global object is set to true, which means we can delete these properties. For example: the join method of the Array.prototype object has the attributes {[[Writable]]: true, [[Enumerable]]: false, [[Configurable]]: true} So we can easily delete the join method of Array like: delete Array.prototype.join; alert([1,2,3].join); The alert will display `undefined` in my Chromium 17, Firefox 9, IE 10, even IE 6. In Chrome 15 & Safari 5.1.1 the [[Configurable]] attribute is set to true and the delete result is also true, but the final result is still `function(){[native code]}`. Seems like this was a bug and Chromium fixed it. I hadn't noticed that before. In my opinion, deleting built-in functions in user code is dangerous, and will bring out so many bugs when working with others. So why did ECMAScript make this decision?"} {"_id": "240492", "title": "Exercise 3.6: Skiena Algorithm Design Manual", "text": "I am preparing for interviews and trying to solve the exercise problems of the book. > 3-6. [5] Describe how to modify any balanced tree data structure such that search, insert, delete, minimum, and maximum still take O(log n) time each, but successor and predecessor now take O(1) time each. Which operations have to be modified to support this? Solution: Maintain extra pointers to the successor and predecessor. Update the pointers on insert and delete. Nothing else, right? Or is there some other trick involved? Thanks"} {"_id": "219226", "title": "Javascript heavy page/application - best practice for handling errors", "text": "Consider an application that contains a number of pages with a relatively large amount of JavaScript present, or a predominantly JS-powered application. The script handles a number of things, such as: * Client-side validation * Showing/hiding entry sections when appropriate * Autocomplete when filling text fields * Date pickers etc. If we consider the case that JavaScript is a requirement to use the site (e.g., not a public internet site), I am wondering what the best practice would be for handling an unexpected error that occurs at runtime and causes some of the functionality to no longer be present (e.g., sections remain hidden, calendar controls become empty textboxes, etc.). The options that I can think of would be: 1. Try to fail gracefully and have a page that will respond with no script present (gets trickier the more complex a page/app becomes); or 2.
Try..Catch around all the critical code, and on error, display a message, or redirect to an error page. I am curious if anyone has 'best practice' suggestions for this? * * * My motivation for looking into this comes after an obscure bug with an earlier version of jQuery for a user in IE10 running in compatibility mode. This question is not about that, but the knock-on effect was that a lot of the code in a document-ready handler in jQuery was not loaded, resulting in significant degradation in usability."} {"_id": "219227", "title": "Strategies to find memory leak in AIR app with native extension", "text": "**Background:** I'm working on an Adobe AIR app that has many facets. I'm looking for strategies to find memory leaks. Broadly speaking, the EXE contains an embedded JVM, and an AIR Native Extension for Windows (using C). So there are many moving parts. As one example, I'm using MonsterDebugger, which traces either the AIR runtime or the Flash Player (I'm not sure which). We've used it to find one leak (regarding listeners in ActionScript). I'm also monitoring the EXE via PowerShell and trying to match memory jumps with log files. **Questions:** * How can I monitor an embedded JVM that is launched via C in an AIR ANE? * Any other strategies? I'm treating this as a \"whiteboard\" question; this is why it's not on Stack Overflow."} {"_id": "247278", "title": "Way for java program to read file from when it was stopped", "text": "I am working on an application that reads a log file, looks for a specific string, sends alert emails and saves data from the file into a database. The reading functionality is working well, but I am coming across an issue when I stop and restart my program. Basically, it always starts from the beginning of the log file, so I am rereading information I have already read. Please reference the example below. Consider a log file with the following lines: 1 0708 1200 Error in log 2 0708 1230 Received invoice 00001 3 0708 1231 Received invoice 00002 4 0708 0130 Error in log 5 0708 0135 Received invoice 00003 6 0708 0200 Received invoice 00004 7 0708 0230 Received invoice 00005 8 0708 0235 Error in log In this example, say I ran the application. The application would start reading at line 1 and continue until I stop the application (picking up any lines that are added to the file as well). Let's say I stop the application when I get to line 5. If this were the case, I would have received an alert e-mail from line 1 and line 4. I would have also saved lines 2 and 3 in the database. My problem/question comes from when I want to restart the application. Currently, the application will start reading at line 1 again. This is not an issue when it comes to saving data, because I have placed a clause to not save if the line has already been added. However, when it comes to sending e-mails I will receive an e-mail for lines 1 and 4 again. After all of that rambling, I will present my actual question/inquiry: is there a way to stop a program (completely stopping the process) and, on restart, begin reading from the spot where I stopped reading? I cannot think of a way to save the placeholder for the file and use it once I start the program again. I have considered saving the file pointer value to a text file and reading from there at every start-up, but I was wondering if there was another way, or any suggestions to improve this process. I know this question is abstract, but I am reaching out for extra brains on this issue.
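(A minimal sketch of the checkpoint idea raised just above — persist the offset plus a file-identity marker so a rotated log, mentioned in the details that follow, is detected on restart. The question's app is Java; Python is used here only to keep the sketch short, and the file names are hypothetical.)

import json, os

CHECKPOINT = "checkpoint.json"   # hypothetical sidecar file holding the offset

def handle(line):
    pass                         # alerting / DB persistence would go here

def process(log_path):
    state = {"offset": 0, "first_line": None}
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
    with open(log_path) as log:
        first_line = log.readline()
        if first_line != state["first_line"]:
            # Different first line => the log was rotated; start fresh
            # just after the line we already read.
            state = {"offset": log.tell(), "first_line": first_line}
        log.seek(state["offset"])
        while True:
            line = log.readline()
            if not line:
                break
            handle(line)
            state["offset"] = log.tell()      # checkpoint after each line
            with open(CHECKPOINT, "w") as f:  # (batch this in real use)
                json.dump(state, f)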
A little information: * This is a Java program * This is a web app running on a Tomcat server * The server that Tomcat is running on is a Unix server * This is a constantly running application that should only be shut off during an issue or change * The logs being scanned are very large, with 500,000+ lines * The logs turn around (get replaced with an empty log) once they reach a certain size. This means that when I save the file pointer, I have to make sure I am in the same file before I start reading again; otherwise I want to start from the beginning. Any help/suggestions would be very helpful. Sorry for the wordy question, please let me know if anything is unclear. Thanks!"} {"_id": "176652", "title": "Examples of Liskov Substitution", "text": "I'm facilitating a session next week on the Liskov Substitution Principle and I was wondering if anyone had any examples of violations 'from the trenches'? I'm looking for something other than Uncle Bob's rectangle/square problem and the persistent set problem he talks about in A-PPP (although that is a great example). So far I'm using the example of a (very simple) List and an IndexedList as the 'correct' use of inheritance, and the addition of a Set to this hierarchy as a violation (as a Set is distinct; strengthening the precondition of the Add method). I've also taken this great example and its solution from this question. Both those examples are great, but I'm looking for something more subtle and harder to spot. So far I've come up with nothing, so if you've got a great, subtle example, post it up. Also, any metaphors you've come across that helped you understand LSP would be really useful too."} {"_id": "176657", "title": "Quality Assurance=inspections, reviews..?", "text": "Studying this subject extensively, most books state the following: * **Quality Assurance: prevention activity. Act of inspection, reviewing..** * **Quality Control: testing** There are some exceptions that mention that QA deals with just processes (planning, strategy, application of standards, etc.), which is IMHO much closer to real QA, yet I cannot find any good reference in Google Books. I believe that inspections, reviews, and testing are all **quality control**, as they are about checking products, no matter whether it is the final one or work products. The problem is that so many authors do not agree. I would be grateful for a detailed explanation, ideally with a reference."} {"_id": "247272", "title": "How bad is using underscore in names?", "text": "I am mainly a C programmer. In my world, writing `likeThis` or `like_this` is just a matter of style. In Haskell, however, it seems that camelCase is the definitive choice. Personally, I find the latter much more readable. Think `pthread_mutexattr_init` vs `PthreadMutexAttrInit`. What's more, I have configured vim to swap the numbers and their alternate symbols (in C), since numbers happen to be written much less frequently than symbols such as parentheses, star, ampersand, etc., which makes life easier on my wrist. As a bonus, this lets me write `this_sort_of_thing` without using the shift key. My question, for the Haskell programmers, is whether using underscores in names is acceptable to the Haskell community or not. Is camelCase an unwritten rule or a common convention?
Would it be OK to make the public functions `likeThis` but internally write `like_this`?"} {"_id": "247270", "title": "Does ES6 help grow the Ecmascript standard library?", "text": "With all the noise about ES6, one thing that I realized I haven't heard about is expanding JavaScript's standard library. JavaScript has a fairly sparse standard library. You need a 3rd-party library to do many basic things, like date manipulation. I'd rather have more built into the browser via a standard library than have to download JavaScript to do basic things. Is this a focus of the standards body? Is it contingent on ES6 modules? Is it even correct to discuss an \"ECMAScript standard library\" (does the standard specify a std lib like, say, C++'s spec does), or is it something specific to the JavaScript implementation of the ECMAScript standard?"} {"_id": "247274", "title": "MVC and the business rule", "text": "I need to know where in MVC I should apply the business rules. Imagine the situation: I have a school and I need to generate a calendar of classes for teachers. Each teacher has a school subject and is only available at certain times. I need to generate this calendar in such a way that teachers can perform without timing conflicts. **Here are my questions:** 1. What part of MVC should the teacher be part of? Bearing in mind that their timing data are stored externally (such as in a SQL database or an XML file), it should be a Model, correct? 2. Now, where in MVC should the business rule that will compile the calendar be developed? In a Controller or a Library? 3. Could these data be worked directly in the Model, or perhaps in a specific Model that works with other Models? **Now a bit of my vision:** (please correct me if I'm wrong) 1. The data from the teachers should be handled by a Model, so that I could, for example, get a teacher's available times and school subject. So, Teacher is a Model. 2. The compilation of the calendar could be done in a controller or library. Question: but controllers should be related to routes, and libraries to an API?"} {"_id": "141046", "title": "What are the benefits of closing every if-statement with an else in Python?", "text": "I am reading Learn Python the Hard Way by Zed Shaw. In this lesson he writes: \"Every `if-statement` must have an `else`.\" What are the benefits of ending every `if-statement` with an `else`? Are there any legitimate reasons not to end an `if-statement` with an `else`?"} {"_id": "184855", "title": "In memory collection vs database vs individual classes for infrequently changed objects", "text": "I have an ASP.NET application which puts the users through a series of forms in a wizard-like fashion and has them fill out fields on the form. In code, these forms and fields are represented as \"Step\" objects, with a collection of \"Field\" objects as properties. Currently I have only one Step class, and each individual step is an instantiation of this class with various identifying properties set on it, including the collection of fields. The step objects are persisted in a database and loaded using an ORM. Note these \"Step\" objects aren't actually responsible for tracking the progress of a specific user; they're really just templates that describe step properties like name, display name, step order, etc. Progress is tracked through a series of \"ItemStep\" objects which basically link one of these \"Step\" templates to a specific \"Project\". \"ItemSteps\" store information about which steps are completed, locked, skipped, etc.
However, these step objects and the associated field objects are not designed to be configured by end users. Most changes to the objects would likely break the application without corresponding changes to the code. Given all of this, I see 3 possible things I can do: 1. Move the steps and fields out of the database and into some sort of in-memory collection. My basic thought process is to have a static collection that holds all the steps, which will be hidden behind my existing data access logic. 2. Create a new class for each step, and appropriately define equality methods so that two step objects of the same class are considered equal. Then instead of querying either my in-memory repository or my ORM, all I need to do is new up an instance of the appropriate step class. The only downside is I have to find some way to persist the relationship between an \"ItemStep\" in the database and the \"Step\" class it should be instantiated with. 3. Do nothing and keep everything as it is. I figure 1 or 2 will result in a performance improvement, since there'll be fewer trips to the database, and will streamline the process of application changes, since I won't have to worry about updating the database. And either 1 or 2 could also make it easier to build out a rich inheritance model around the steps, and move some logic onto the step class itself, instead of where it currently sits, inside my Presenters. Of these three possible solutions, what would you recommend?"} {"_id": "133298", "title": "Help understanding server-side scripting", "text": "As far as I understand, there are basically 3 options for doing server-side scripting these days: 1. Using scripting languages that can be directly interpreted/executed by the web server (e.g., PHP and ASP), where the scripts are interpreted/executed on the fly (i.e., as HTTP requests come in), and the output is embedded into HTML pages which are then sent back to the client. 2. Using any language (e.g., C, C++, Perl, Python) the operating system of the server is capable of executing (either using an interpreter or using the executable file already compiled) and then using CGI to communicate between the web server and the OS. Output of the scripts comes via CGI to the server in the form of complete HTML pages, and is then sent back to the client. 3. Using Java on a server that can handle servlets/JSPs, which is pretty much the same idea as option 1 above, except using Java instead of PHP/ASP. **Questions** : 1. Is my understanding so far on track, or did I get something wrong? 2. Are ASP and PHP the only languages that can be interpreted and executed directly by a web server? 3. Where does Ruby fall in the classification above? Can it be interpreted/executed by servers like PHP? Or does it communicate via CGI? 4. Is server-side scripting via CGI becoming obsolete or not at all?"} {"_id": "255060", "title": "Where to put sample data in project structure", "text": "I am working on a Scala project and I want to include some sample data somewhere in the project. Specifically, my Scala project includes code that performs various natural language processing tasks on text data. I want to include, somewhere in the project structure, a sample set of text files that the user can use to test out the program. Right now I am putting this sample data in the resources folder (mostly because it is the only location within the project that I know how to read from). But I was wondering if this is the best location?
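(On the sample-data question just above: the resources folder is indeed the conventional place for data that should ship with the artifact; anything under src/main/resources ends up on the classpath. For comparison, a minimal sketch of the same idea in Python, where packaged data is read via importlib.resources — the package and file names are hypothetical.)

from importlib import resources

# Read a sample corpus shipped inside a hypothetical package
# "myapp.sampledata"; it travels with the installed code, so users can
# try the program without supplying their own text files.
def load_sample():
    path = resources.files("myapp.sampledata").joinpath("sample.txt")
    return path.read_text(encoding="utf-8")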
In general, what is the best practice regarding where to put sample data within the project? Alternatively, would it be a better idea to simply provide this data separately (i.e., external to the project)?"} {"_id": "255065", "title": "Architecture Decoupling", "text": "I have set up my ASP.NET MVC project as follows: Presentation Layer: ViewModels, Views, certain logic to handle service responses. Business Logic / Service Layer: Services that work with data from either the database or the presentation layer. (Domain models are EF entities.) Database Access Layer: Entity Framework. According to http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo a repository/unit of work pattern is no longer needed with EF6. And when working with ASP.NET Identity, creating an interface for DbContext is hard, making all services have a tight coupling to the ApplicationDbContext. Is this okay? In the scenario where you have to change the database to NoSQL, you would have to change all the Services code instead of just the \"middleware\". Any comments on the architecture are welcome as well. I am trying to find a good standpoint for creating an application."} {"_id": "184859", "title": "When should user stories be combined and separated?", "text": "As a school project, we are rolling out our initial set of user stories. Should a user story record the original idea from a user, without combining or separating them? For example, John added that \"I want to post multiple choice questions.\", and Mike added that \"In addition to multiple choice questions, I want to post true/false questions.\" David added that \"I want a confirmation box before I add questions\". Do you leave those 3 user stories as they are, or do you combine John's and Mike's as \"I want to post multiple choice and true/false questions.\" with a detail within this new user story like \"show a confirmation box before clicking the add button\"? What would you choose?"} {"_id": "252591", "title": "How to convert this recursive problem to iterative? Line Simplification algorithm fails to run due to Maximum Recursion Depth being hit", "text": "I am implementing the Douglas-Peucker line simplification algorithm in Python. I started with this implementation. However, it fails to run in Python due to the maximum recursion depth being hit. How can I convert this algorithm to an iterative one? I am not able to imagine this problem in an iterative view. My expectation is to get an approach/hint which can be used, rather than actual code. Is it possible to use some internal stack to resolve the stack overflow (or avoid the maximum recursion depth)? **Update:** Found the iterative implementation of the algorithm here."} {"_id": "209542", "title": "How to get up to speed on latest technologies?", "text": "I am working as a Software Developer for a financial company and we are using a standard Java/Java EE stack with an Oracle DB as the backend, using the Spring/Hibernate frameworks with JBoss Application Server. So this is a pretty much standard stack and it's a legacy application. Now, I read blogs on InfoQ.com and corporate engineering blogs and see people trying out new technologies like NoSQL (MongoDB, Cassandra), cloud computing (AWS), big data (Hadoop and its related ecosystem) and distributed computing frameworks (ZooKeeper; cluster-based Cassandra/MongoDB/Hadoop configuration!), but I cannot use those technologies in my current project, as it's a legacy application and there is no scope for messing around with core components.
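(Back to the Douglas-Peucker question a little above: the standard conversion keeps an explicit stack of index ranges in place of the call stack. A minimal sketch, assuming a point_line_distance(point, start, end) helper like the one the recursive version uses — the helper name is assumed here, and len(points) >= 2.)

def simplify(points, epsilon):
    n = len(points)
    keep = [False] * n
    keep[0] = keep[n - 1] = True       # endpoints always survive
    stack = [(0, n - 1)]               # index ranges replace call frames
    while stack:
        first, last = stack.pop()
        # Find the point farthest from the chord (first, last).
        max_dist, index = 0.0, first
        for i in range(first + 1, last):
            d = point_line_distance(points[i], points[first], points[last])
            if d > max_dist:
                max_dist, index = d, i
        if max_dist > epsilon:
            keep[index] = True
            stack.append((first, index))   # the two halves the recursion would visit
            stack.append((index, last))
    return [p for p, k in zip(points, keep) if k]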
So my questions are: 1. How can I learn and come up to speed on all these new technologies? 2. What approach should I take? (I do a lot of reading on InfoQ and HighScalability about different architectures, but I never get the experience of migrating my complete application to an EC2 instance and analyzing what the benefits of doing that are, or which scenarios we should consider with that kind of project.) Any thoughts?"} {"_id": "252593", "title": "Data generation system modular design", "text": "I am trying to think of the most sensible way to design the architecture of a data generation system with several steps. Data in the system goes through several transformations which can be divided into separate steps (from the business logic point of view). I would like the system to keep this modular design, in such a way that each module represents a step in the data transformation. A module's input should be the previous module's output. 1. What are some good ways to orchestrate this flow? 2. How should modules communicate with each other? 3. In each step, where should the input come from, and where should the output go? 4. Is it a good idea to use a database as the source and target of data consumption/generation for each module? 5. Should modules be built as separate scripts/executables which only directly communicate with the database? **Edit:** The system will be implemented by several people. Each developer will be assigned a module. I would like the architecture to simplify the workflow by allowing each developer to work independently, and to make assumptions only about the data their specific module consumes. **Edit 2**: The modules' relationships are depicted below. Modules are represented as blue boxes. Some modules depend on data generated by other modules (black arrows). Some modules need to persist data to the DB (dotted gray arrows). ![Modules flow](http://i.stack.imgur.com/WWzee.jpg)"} {"_id": "123802", "title": "How can I migrate from Excel VBA with ADO to C++ with OCCI?", "text": "I am employed as a developer, but the only application development tools I have available to me are Access and Excel (i.e., VBA). Fortunately, I have good access to our Oracle DB, but that took some persuasion! It's been a while, but I can write good, albeit vanilla, C (amongst other, less relevant things) and am familiar with the OOP model. The problem I've come up against is that VBA with ADO isn't robust enough to suit my needs. So, my question is: how realistic do you think it would be to migrate from this environment to C++ (say, using Visual C++) with Oracle's C++ Call Interface, given my experience? Would the learning curve (C++, which I've dabbled in briefly, OCCI and, ultimately, Windows GUI coding) be too steep to justify? That is, could I learn this stuff on the job to a production level in a short amount of time?"} {"_id": "123803", "title": "How to estimate the length of a project based on another project?", "text": "I was wondering whether there were any estimating models for projects which use the results of previous projects. For example, imagine I want to develop an application for a phone based on a web app I completed last month. Say the web app took 1000 hours, but I don't know anything about mobile development. If you wanted to add another number into the equation, you could say that 50% of the features implemented in the web app will need to be implemented for the mobile app. Where would I even start to form an estimate for the new project?
Thanks in advance."} {"_id": "168252", "title": "Is a \"model\" branch a common practice?", "text": "I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with: 1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary) 2. Schema source (a .sql file) 3. Schema-based code generation The normal way we were working was with feature branches, so we would make changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice. **Edit**: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching."} {"_id": "125234", "title": "Is it a bad idea to stray from CSS selector syntax in a JavaScript selector engine?", "text": "I'm designing a JavaScript selector engine, and I'm currently focused on parsing the selector string; specifically, combinators. The CSS3 combinators I know of are: * `>` (children) * `space` (descendants) * `+` (next sibling) * `~` (all next siblings) Which is fine for how CSS works (how style rules are applied). However, I'm now realizing (now that I'm examining this paradigm more closely) that this seems a bit limiting in a JavaScript setting. Below I've created an alternative list of combinators. What I would like to know is: 1. Do any of the current selector engines stray from standard CSS selection methods? (I'm familiar with Sizzle and Sly, but none of the others.) 2. Do you see any reason why any of the combinators I've listed wouldn't work well? 3. Do you think having more selector string options (more combinators, more filters, etc.) is beneficial, or are they just a waste/confusing/dumb/etc.? Thanks all in advance for your thoughts! * `>` (children) * `>>` (descendants) (also, space would still work) * `+` (next sibling) * `++` (all next siblings) (also, tilde would still work) * `-` (previous sibling) * `--` (all previous siblings) * `*` (previous and next siblings) * `**` (all siblings) * `^` (parent) Example 1: `div ^ span` - get all spans with a div child Example 2: `div ** span` - get all span siblings of div Example 3: `.lastListItem -- li` - all li previous to the li with class `.lastListItem` Example 4: `#thing ^ div ** .error` - all items with class `.error` that are siblings of #thing's parent (assuming #thing's parent is a div) p.s. Oh, and I also thought of having a placeholder character that could stand in for any simple selector.
So, Example 4 might look like this (with an underscore as a placeholder): Example 4 alt: `#thing ^ _ ** .error` - all items with class \".error\" that are siblings of #thing's parent (no matter what kind of element #thing's parent is)"} {"_id": "198330", "title": "Solving the last mile problem in software engineering", "text": "The more I write code, the more I realize that writing the code is not the hard part. The hard part is making sure all the dependencies are in order, that there are no hard-coded paths, that I don't have weird implicit library dependencies, that there is documentation on how to get it up and running, etc. Tools like Vagrant, Rake, chef-solo, and fpm definitely help me with some of those problems, but there is almost always something that I overlook, and when it comes time to deploy the code something almost always goes wrong. I'm also a little disheartened by how little most programmers care about making sure their code is easy to deploy. So how do people usually solve the last mile problem? How do you make sure your code is deployable with as little fuss as possible?"} {"_id": "138455", "title": "What is a recommended pattern for REST endpoints planning for foresighted changes", "text": "Trying to design an API for external applications with foresight for change isn't easy, but a little thought up front can make life easier later on. I'm trying to establish a scheme that will support future changes while remaining backward compatible, by leaving prior version handlers in place. The primary concern of this article is what pattern should be followed for all defined endpoints for a given product/company. ## Base Scheme Given a base URL template of `https://rest.product.com/` I have decided that all services reside under `/api`, along with `/auth` and other non-REST-based endpoints such as `/doc`. Therefore I can establish the base endpoints as follows: https://rest.product.com/api/... https://rest.product.com/auth/login https://rest.product.com/auth/logout https://rest.product.com/doc/... ## Service Endpoints Now for the endpoints themselves. The choice between `POST`, `GET` and `DELETE` is not the primary objective of this article; that concern belongs to the actions themselves. Endpoints can be broken down into namespaces and actions. Each action must also present itself in a way that supports fundamental changes in return type or required parameters. Taking a hypothetical chat service where registered users can send messages, we may have the following endpoints: https://rest.product.com/api/messages/list/{user} https://rest.product.com/api/messages/send Now to add version support for future API changes which may be breaking. We could either add the version signature after `/api/` or after `/messages/`. Given the `send` endpoint we could then have the following for v1: https://rest.product.com/api/v1/messages/send https://rest.product.com/api/messages/v1/send So my first question is: what is a recommended place for the version identifier? ## Managing Controller Code Now that we have established that we need to support prior versions, we need to somehow handle code for each of the versions, which may deprecate over time. Assuming we are writing endpoints in Java, we could manage this through packages. package com.product.messages.v1; public interface MessageController { void send(); Message[] list(); } This has the advantage that all code has been separated through namespaces, where any breaking change would mean a new copy of the service endpoints.
The detriment of this is that all code needs to be copied, and bug fixes that should be applied to new and prior versions need to be applied/tested for each copy. Another approach is to create handlers for each endpoint. package com.product.messages; public class MessageServiceImpl { public void send(String version) { getMessageSender(version).send(); } /* Assume we have a List of senders in order of newest to oldest. */ private MessageSender getMessageSender(String version) { for (MessageSender s : senders) { if (s.supportsVersion(version)) { return s; } } throw new IllegalArgumentException(\"unsupported version: \" + version); /* added so the method compiles when no sender matches */ } } This now isolates versioning to each endpoint itself and makes bug fixes back-port compatible, in most cases only needing to be applied once, but it does mean that we need to do a fair bit more work on each individual endpoint to support this. So there's my second question: \"What's the best way to design REST service code to support prior versions?\""} {"_id": "134776", "title": "What licence for a FOSS that would sell packages on Mac and Windows but give them away on Linux?", "text": "The code should remain as open as possible, but I'm planning to sell it on Windows and Mac, and make free rpm/deb packages. Does the licence even matter? It feels like I'm just selling the service of compiling and distributing the software here."} {"_id": "199657", "title": "Difference between functional and technical specification", "text": "I'm writing a specification for a project, and am struggling with separating the functional specification from the technical specification (see http://www.joelonsoftware.com/articles/fog0000000035.html). For example, I'm trying to specify the behaviour when a user requests a list of Foo objects they have visibility on. In the functional specification, should I describe precisely what is returned (i.e., the structure of a Foo object), or just that the system returns a list of them, and then put the details of the Foo object in the technical specification? The design is for an API, in case that makes any difference. I can't find many examples of how such an API specification is written."} {"_id": "134770", "title": "Non-viral open source license but with clause for source disclosure?", "text": "I'm looking for a license (for a C library) which basically says: \"Redistributions in binary form must be accompanied by the source code of the Larger Work (My code + Your code)\", i.e., L = M + Y, with '+' here meaning, for example, static linking / dynamic linking. And there must be no more restrictions other than this. If I choose my license 'X' as: * GPL - source code of the Larger Work, both M and Y, must also be licensed as a whole under the GPL. * LGPL - My code continues to be LGPL; Your code can be kept closed-source if needed. * BSD - source code of neither M nor Y need be disclosed. To be more clear, the binary must be under the license 'X', but the source code of the Larger Work must be made _available_ -- under whatever license the Author of the Larger Work wants; it need not be restricted to 'X'. LGPL comes close, but it does not mandate the source availability of Your code. Example use case: the Author of the Larger Work _must_ give his recipient all of his source code (both M and Y) but can ask the recipient to sign an NDA restricting modification of Y, though the recipient is allowed to study Y. In short, 'X' shouldn't have hereditary characteristics (like *GPL), yet source code availability should be mandated. Is there such a \"non-viral\" FOSS license? Thanks in advance!
PS: I asked this question first on Stack Overflow, and someone there suggested that I post it on this site. That question was closed as off-topic there, and I've deleted it from SO."} {"_id": "164557", "title": "Service Layer - how broad should it be, and should it also be used from the local application?", "text": "**The background:** I need to build a desktop application with some operations (CRUD and more) (= WinForms), and I need to make another application which will re-use some of the functions of the main application (= WebForms). I'm using a service layer for reusing my functions. The service calls the functions on the BL layer (correct me if I'm doing this wrong), so my desktop solution has 4 projects - DAL, BL, UI, WEBSERVICES. **The dilemma (simple, but I still need some more experienced opinions):** 1. In my main WinForms UI, should I call the functions from the BL (bl.getcustomers()), or do it similarly to how I call them in the WebForms app, and call the functions from the service (webservices.getcustomers)? 2. Should I create a service for every single function on the BL, even if I need some of the functions only in one UI? For example, should I create services for all the CRUD operations, even though I need to re-use only the update operation in the WebForms app? _YOUR HELP IS MUCH APPRECIATED_"} {"_id": "84265", "title": "Giving employer power of attorney to obtain your inventions", "text": "Is it normal/just for a software company to ask for power of attorney to ensure they obtain patents on anything you invent while you are employed as a programmer? **Update:** In my case, I went back to the employer. They told me they have no interest in using my inventions that are not related to the domain and were not invented during company hours using company resources. The company encourages open source contribution, and will not interfere in that regard. I further examined the contract, and at the point where the term 'inventions' is defined in legal speak, it stated that inventions in the context of this agreement were only those developed using company resources within company time. Power of attorney allows the company to ensure they can get my signature (or the equivalent in my absence) for any inventions which they may wish to patent."} {"_id": "164552", "title": "How to communicate within a company what is being Continually Deployed", "text": "I work for a small development company, 20 people total in the entire company, 3 in actual development, and we've adopted CD for our commits to trunk, and it works great from a code management and up-time side. However, we're getting flak from our support staff and marketing department that they don't feel they're getting enough lead time on new features and notifications of bug fixes that could change behavior. Part of why we love the CD system is that for us in development, it's fast: we fix the bug, add the quick feature, close the Bugz and move on with our day to the next item. All members of our company are now on HipChat at all times, and when a deployment occurs, a message is sent to a room that all company members are in, letting them know what was just deployed (it just shows the commit messages from tip back to the last recorded deployment). We in development are also attempting to make sure that when we're making a change that modifies the UI or a public-facing behavior, we post a screenshot to the All Company room and explain what the behavior change is, seeking pushback or concerns. Often, the response is silence.
Sometimes, it's a few minor questions, but nothing that needs to stop the deployment from happening. What I'm wondering is: how do other users of the CD method deal with notifications of new features and changes, both to areas of the company that are not development and, eventually, on to customers in the world? Thanks, Francis"} {"_id": "40373", "title": "So Singletons are bad, then what?", "text": "There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components which uses huge amounts of data from a server state which isn't updated too often. This data was basically cached in a Singleton \"manager\" object - the dreaded \"global state\". The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting from the server would take too much bandwidth - and I'm talking thousands of dollars in extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a \"Singleton\" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?"} {"_id": "178685", "title": "How to evaluate code quality when you're not familiar with the language?", "text": "As a hypothetical, if I were to interview someone for a new PHP developer position when my experience is in .NET, how can I determine if the code sample they've provided me is efficient and of good quality? In other words, what is the best way to evaluate a programmer's code if you're not familiar with the language?"} {"_id": "162865", "title": "Is there a name for this use of the State design pattern?", "text": "I'm looking to see if there is a particular name for this style of programming a certain kind of behavior into a program. Said program runs in real time, in an update loop, and the program uses the State design pattern to do some work, but it's the specific way it does the work that I want to know about. Here's how it's used. - Object Foo constructed, with concrete StateA object in it - First loop runs --- Foo.Run function calls StateA.Bar --- in StateA.Bar, replace Foo's state with StateB - Second loop runs --- Foo.Run calls StateB.Bar - Third loop runs --- Foo.Run calls StateB.Bar - Fourth loop --- etc. So in short, `Foo` doesn't have an explicit `Initialize` function. It just has `Run`, but `Run` will do something unique in the first frame to initialize something for `Foo` and then replace itself with a different action that will repeat in all the frames following it - thus not needing to check whether `Foo` is already initialized. It's just a \"press start and go\" action.
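(A minimal sketch of the run-loop behavior just described, with the first state doing the one-time setup and then swapping itself out — the names are illustrative.)

class Foo:
    def __init__(self):
        self.state = InitState()

    def run(self):
        self.state.bar(self)

class InitState:
    def bar(self, foo):
        print("one-time setup")       # first-frame initialization
        foo.state = SteadyState()     # replace self: no 'initialized' flag needed

class SteadyState:
    def bar(self, foo):
        print("per-frame work")

foo = Foo()
for _ in range(3):
    foo.run()   # prints the setup once, then per-frame work twice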
What would you call implementing this type of behavior?"} {"_id": "178682", "title": "Best method to organize/manage dependencies in the VCS within a large solution", "text": "A simple scenario: * 2 projects are in version control * The application * The test(s) * A significant number of check-ins are made to the application daily. * CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be **massive**, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: if I were to sync with latest and open up one of the test projects, it would not build locally until I built the application. This is the same when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally, so that the assemblies can be loaded - and then those assemblies wouldn't be the latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?"} {"_id": "162860", "title": "FTP file compare", "text": "Are there any FTP programs out there that do a file comparison as well, so I can select the local and remote files and see what actually changed? I didn't find any when I googled, but it doesn't seem like a very advanced use case to me. Surely many others would have felt the need for it... right?"} {"_id": "230157", "title": "What is the point of link rel=\"self\" in a REST API?", "text": "I often see link elements like <link rel=\"self\" href=\"http://example.com/something\"/> in HTML or XML documents, or like this in JSON: link: { rel=\"self\", href=\"http://example.com/something\" } So I had some questions: 1. Why include this link? What advantage does it bring? (Please tell me there is a reason for it and it's not just a \"good practice\" talisman.) 2. How should I exploit this link in my clients? What are the use cases for this link? 3. When _shouldn't_ I use this link? When is it pointless to include it?"} {"_id": "14254", "title": "Is mentioning my blog on my resume helpful or hurtful to a job search?", "text": "I have a blog that I use mostly to record solutions to problems I have had that I had some trouble finding answers to. Mostly problems where the online doc I googled provided too much info, and I found the answer to my question on the fifth page of my third Google hit. (Or, if I asked the question here, I either didn't get an answer or I got slammed for asking a question whose \"answer could be easily googled.\") I frequently look up stuff on this blog to remind myself of how I solved a problem, and it gets a decent amount of hits from others as well. Anyway, I was wondering if mentioning this blog on my resume would help or hurt me in a job search? The topics are all over the map. What I would hope it shows is that: * I am a person who finds solutions to problems * I have used many different technologies in my work * I am not afraid to tackle a challenge What I am concerned it shows is: * This person had trouble with something that simple?
* Why is this person bothering to blog this stuff?"} {"_id": "122175", "title": "Choosing a web development framework?", "text": "So, I've sort of reached a point where I want to start developing a website. Originally, I planned to build said website using PHP and CodeIgniter; I'm familiar with both, but, truth be told, I'm not too fond of either. I find they just get rather messy; CodeIgniter helps somewhat, but no matter what, it seems that most PHP comes out more obfuscated than it has to be. Anyway, I've come to the point where I want to use either Python or Ruby. I'm familiar with both, though more so with Python, but I've never done any web development in them. I'll take the necessary time to learn the frameworks (and further my knowledge in the language of my choosing), but I need to choose one. I don't like either language more than the other; they both have their benefits... However, since I've never done any web development with either language, I was hoping that you guys could give me some pointers. What are the available frameworks for each language? What do you recommend and why? Note: I've primarily looked into Rails and Django - but I'm still open to others. I'm looking for one that will work for just one (or maybe two) developers. It has to be fairly easy to learn (but I will take the time to learn it). Also, I'd like it to easily support clean code and agile development."} {"_id": "58246", "title": "Who can call themselves a UI developer?", "text": "**Who can call themselves a UI developer, without being a poser?** I've noticed people calling themselves UI developers whom I would categorize as web designers instead. I'm not knocking it, I'd just like to know _who's the real deal_."} {"_id": "164085", "title": "Does *every* project benefit from written specifications?", "text": "I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working in a team of 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not Perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them. **EDIT:** The close votes say that \"it is difficult to say what is asked here\", so let me clarify: the usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. **So what is asked here is: How would written specifications help me?** Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no \"external forces\" for having written specs. The advantage would have to be somewhere in the development process. Now, I can see how _frozen_ specifications would make a developer's life easier. But we'll never have frozen specs.
If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell. You'll probably ask by now: how do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented and deployed. This also means that there's rarely any schedule pressure. Nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it'll simply go into the version after that. The next question might be: how do your developers know what they're supposed to implement? The answer is: they have a lot of domain knowledge. They know the customers' business well enough, so a high-level description of the feature (or even just the problem that the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have. This question is already getting very long, but let me address one last point: testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably _is_ crap. It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of how the software should behave and when. The main test is: hand the software to a user who doesn't know it yet and tell her to use the new feature X. If she can figure out how to use it and it works, it works."} {"_id": "122173", "title": "Can the Abstract Factory pattern be considered as a case of polymorphism?", "text": "I was looking for a pattern/solution that allows me to call a method as a runtime exception in a group of different methods without using Reflection. I've recently become aware of the Abstract Factory pattern. To me, it looks so much like polymorphism, and I thought it could be a case of polymorphism but without the super class `GUIFactory`, as you can see in the example at the link above. Am I correct in this assumption?"} {"_id": "94843", "title": "Duplication of view access control logic in database queries and application component", "text": "Our web application has a complex access control system which incorporates role-based and object-level privileges. In the business logic layer, this is implemented by a component that obtains (and caches) all the necessary data with a batch query and computes the user's type and level of access to any object in the system. (A future optimization would be conditional batching based on the data we need for a particular request.) However, the **view** privilege logic in this component is duplicated elsewhere in database queries. (We need to hide data in listing screens that the user does not have the privilege to view.)
**How can we reduce or eliminate this duplication of logic between the application access control component and our database queries?** Two approaches come to mind. I'm sure there are others. * Check the view privilege in the application for each row that comes back from the server via queries from listing screens. * Move more of the access control logic into a stored function that can be called from the queries as well as the application code. _Answers should defend the merits of the proposed method over other methods. For example, if my second suggested approach is desirable, why? If you have suggested a third approach, why does it win over both my approaches?_"} {"_id": "5427", "title": "Why such popularity with Python?", "text": "Other than being annoyed at whitespace as syntax, I'm not a hater, I just don't get the fascination with Python. I appreciate the poetry of Perl, and have programmed beautiful web services in bash & korn, and shebang `gnuplot`. I write documents in `troff` and don't mind REXX. I didn't find Tcl any more useful years ago, but **what's the big stink about Python**? I see job listings and many candidates with this as a prize & trophy on their resumes. * * * I guess in reality, I'm trying to personally become sold on this, I just can't find a reason."} {"_id": "122179", "title": "Random pairing algorithm", "text": "If you have a list of items and you want to randomly pair the items in the list together, what kind of algorithm would you use to do that, such that each item can only be matched to one other item?"} {"_id": "114078", "title": "Why should company choose VB.NET over C#", "text": "> **Possible Duplicate:** > VB.Net vs C# debate Are there any actual reasons why a company should choose VB.NET over C#? I work for a company which develops medical software and switched from VB6 to VB.NET several years ago. There were two reasons to switch to VB.NET but not to C# at that moment: 1. VB.NET is more similar to VB6, which should make it easier for developers to switch to the .NET world 2. VB.NET had better support for COM than C# The first reason is not serious, as VB6 and VB.NET are two different worlds and there is no significant difference between switching from VB6 to VB.NET or to C#. The second reason no longer applies after the C# 4 release. Our company eventually switched to C# (and yes, we now have modules in VB6, VB.NET and C#, and even some modules in C++) because: 1. It is easier to find developers in C#. 2. The .NET world is C#-oriented: Visual Studio works better with C#, ReSharper has better support for C#, etc. I am just curious whether there are still some reasons for a company to choose VB.NET over C#."} {"_id": "228043", "title": "We use subversion - Should we place comments in my code anyways?", "text": "I brought up in a meeting an annoyance that code changes were not commented within the code itself. We use SVN (Subversion) for source control, and it was relayed to me that you can just go into SVN, check the history and see the changes. My question is: is this a best practice? It seems easier to me to place the comment in the code itself with the defect/user story reference (we use Rally/Agile) **AND** in the SVN header when you check in the changes. My bosses seem to think putting it in the code is unnecessary, and I told them I completely disagreed with that practice. I've always been taught that comments in the code are never bad. This is not my first gig. I was even more floored when he told me his boss has been known to rip out comments in some code.
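(Looping back to the random pairing question above: the usual answer is to shuffle the list and then pair adjacent elements, which makes every item appear in exactly one pair. A minimal sketch.)

import random

def random_pairs(items):
    items = list(items)
    if len(items) % 2:
        raise ValueError("need an even number of items to pair them all")
    random.shuffle(items)
    return [(items[i], items[i + 1]) for i in range(0, len(items), 2)]

print(random_pairs(["a", "b", "c", "d"]))  # e.g. [('c', 'a'), ('d', 'b')]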
After I stopped having the shakes, I wanted to... vomit. What do you think about comments in code vs/with comments in source control and what do you do as a best practice?"} {"_id": "175302", "title": "What to do when TDD tests reveal new functionality that is needed that also needs tests?", "text": "What do you do when you are writing a test and you get to the point where you need to make the test pass and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass I'm not supposed to go off and start another failing test to test the new functionality that I need to implement. For example, I am writing a point class that has a function **CollidesWithLine( _LineSegment_ )**: public class Point { // Point data and constructor ... public bool CollidesWithLine(LineSegment lineSegment) { Vector PointEndOfMovement = new Vector(Position.X + Velocity.X, Position.Y + Velocity.Y); LineSegment pointPath = new LineSegment(Position, PointEndOfMovement); if (lineSegment.Intersects(pointPath)) return true; return false; } } I was writing a test for **CollidesWithLine** when I realized that I would need a **LineSegment.Intersects( _LineSegment_ )** function. But, should I just stop what I am doing on my test cycle to go create this new functionality? That seems to break the \"Red, Green, Refactor\" principle. Should I just write the code that detects that LineSegments intersect inside of the **CollidesWithLine** function and refactor it after it is working? That would work in this case since I can access the data from **LineSegment** , but what about in cases where that kind of data is private?"} {"_id": "175301", "title": "Has the emerging generation of programmers got the wrong idea about design patterns?", "text": "Over the years I've noticed a shift in attitude towards design patterns, particularly amongst the emerging generation of developers. There seems to be a notion these days that design patterns are silver bullets that instantly cure any problem, a proliferating idea that advancing as a software engineer simply means learning and applying more and more patterns. When confronted with a problem, developers no longer strive to truly understand the issue and design a solution - instead they simply pick a design pattern which seems to be a close fit, and try to brute-force it. You can see evidence of this by the many, many questions on Stack Overflow that begin with the phrase _\"what pattern should I use to...\"_. I fall into a slightly more mature category of developers (5-10 years experience) and I have a very different viewpoint on patterns - simply as a communication tool to enhance clarity. I find this perspective of design patterns being Lego bricks (collected like Pokemon cards) a little disconcerting. Will developers lose this attitude as they gain more experience in software engineering? Or could these notions perhaps steer the direction of our craft in years to come? Did the older generation of developers have any similar concerns about us? (perhaps about OO design or similar...). 
if so, how did we turn out?"} {"_id": "114322", "title": "What is this kind of architecture", "text": "I have looked at the Force.com videos, and I am fascinated by the way it allows one to add custom fields to forms and create new forms, with the validations built in by just marking them as the required kind, all on the fly. I am trying to implement the same kind of approach in my Java application. 1. Which of the Java frameworks support this kind of quick development? 2. Do we have a name for this kind of architectural approach? In case I need to provision this feature on the fly, what are the things I need to take care of for implementation? Basically I am not clear what this kind of approach is known as, so that I can google it and find out. Please point me to any references. Thanks. _Edit: Here it is @barjak: http://www.youtube.com/watch?v=QkRbzd3vxHU#t=0m18s_"} {"_id": "238240", "title": "Help in ensuring unit tests are meaningful", "text": "I've just written a unit test for this function, which loops through a collection of dates and sets properties equal to true or false depending on whether they're before or after a given comparison date: public void CheckHistory(int months) { var endDate = DateTime.Today.AddMonths(months); Dictionary<OrderHistory, bool> orders = new Dictionary<OrderHistory, bool>(); foreach (var kvp in this.Orders) { if (kvp.Key.Date >= endDate) { orders.Add(kvp.Key, true); } else { orders.Add(kvp.Key, false); } } this.OrderHistories = orders; } So here's the test I wrote: public void Assert_CheckHistory_SelectsCorrectDates() { MyViewModel vm = GetVmWithMockRepository(); vm.OrderHistories = new Dictionary<OrderHistory, bool>(); OrderHistory ohOld = new OrderHistory(); ohOld.MailingDate = DateTime.Today.AddMonths(-12); vm.OrderHistories.Add(ohOld, false); OrderHistory ohNew = new OrderHistory(); ohNew.MailingDate = DateTime.Today.AddMonths(-3); vm.OrderHistories.Add(ohNew, false); vm.CheckOrderHist(-6); int selectedOrders = vm.OrderHistories.Where(o => o.Value == true).Count(); Assert.AreEqual(1, selectedOrders, \"Unexpected number of selected Order Histories\"); } Nothing wrong there. Test passes and all is good with the world. However, I'm haunted by a nagging feeling that I'm not actually testing anything useful, and am just writing tests for the sake of it. I get this a _lot_. A creeping paranoia that the tests I'm writing are incomplete in the sense that while they cover the lines of code in the target function, they don't really trap any likely problems and are therefore just a maintenance overhead. Is that sample test worthwhile? Is even a badly-designed test worthwhile over no test at all? And most of all, are there any principles to help programmers identify whether a test is useful or not, or to guide them in constructing useful tests in the future? To be clear, I'm adding tests to an existing application. Going test-first in true TDD style isn't possible."} {"_id": "238249", "title": "Project Organization", "text": "I have been having a discussion with a colleague about the best way to organize a (C#) project after he reorganized a project I had been working on from something that looked like this: ![Combined](http://i.stack.imgur.com/h50vX.png) To something that looks like this: ![enter image description here](http://i.stack.imgur.com/6Gh2J.png) I tend to sway towards having a few big projects as opposed to many smaller projects, but I am open to new ideas and I would love for some input on this. 
Does anyone have experience with either of these approaches either really working out, or completely falling apart a few years down the line?"} {"_id": "160705", "title": "Does Fred Brooks' \"Surgical Team\" effectively handle the bus factor?", "text": "My team of 4 experienced developers works on a large, modular Windows application (approx. 200 KLoC). I have focused on the core codebase since the beginning of the project (3 years ago) and have gradually shifted to a semi-lead developer position, though I am not the team manager. Our current iteration is a high-priority UI refresh requested by upper management, involving about 15 changes to the core codebase. When asked by the manager, I estimated that each of the 15 changes would take less than four hours **for me to complete** , a total of less than 7 work days. I then volunteered to perform the work. Instead, the manager decided to evenly divvy up all 15 tasks to all four developers. In the three days since we started work, I have observed two things: 1. The other, inexperienced team members completed about 1 task or fewer each. 2. **Brooks' Law** in action: I spent about half of my time providing assistance (attempting to coach them on using the components). As a result, I only finished 2 tasks myself, instead of the expected 5 or 6. I approached my manager with my concern that we were running late and again suggested that I complete the remaining tasks. My request was kindly refused, and the stated reasons for splitting the load evenly were twofold: 1. Limit the **truck/bus factor** \\- ramping up other developers on these skills now, so that in the future any work can be given to anyone, not just me. 2. To eliminate a \"bottleneck\" (me) and get work done faster. To be clear, I have no problems with: a) investing the time teaching, b) people touching my code, or c) job security. In fact, I regularly suggest to the team leader that I train other devs on certain aspects of the core codebase to reduce risk. In this iteration we also have a large collection of high-priority bug fixes targeted, so it would seem that more progress could be made if the workload were redistributed. In The Mythical Man-Month, Brooks suggests a **\"Surgical Team\"** where every team is comprised of a lead + sub-lead (the manager and me), and some minor roles. I feel as though we are naturally falling into this organization, but my manager is working against it. I feel that the bus factor is already taken care of (the manager is well-versed in the core code), and that the bottleneck doesn't actually exist (involving more devs won't make the work go faster). I think that in this regard, a Surgical Team is a Good Thing. These are my feelings, but I'm not an experienced manager, nor have we had to deal with the bus factor (knock on wood). **Was Brooks right? Have you worked in a \"Surgical Team\" where the bus factor came into play? Are there better techniques to manage distributing expertise?** Similar questions: * How to increase the bus factor and specialize at the same time? * Always keeping 2 people expert on any one chunk of code"} {"_id": "160700", "title": "Which aspect of normal forms do entity-attribute-value tables violate, if any?", "text": "I'm not asking if EAV tables are good or bad. I'm wondering if they are considered \"normalized\", and if not, why? 
If they aren't normalized, which normal form are they violating and why?"} {"_id": "234645", "title": "As a programmer, are you professionally obliged to offer ongoing support after you've left a company?", "text": "I've had a few programming jobs in the past where I was the only developer working on a project. After I've left, I typically get several emails a week from these companies, usually from the developer(s) who've replaced me there. These emails are usually asking for details about how things work and how I'd best go about implementing feature x based on the existing system. I'm usually polite and helpful, but these kinds of communications really start to eat into my time, making every job I work on another weight around my ankle. Not to mention they're projects that I chose to leave behind me for a good reason. My question is, would it be professionally 'ok' to tell them I'm just not going to offer support any more and refuse to answer inquiries? NB. None of these companies are paying me any type of retainer and the inquiries are often informal questions from developers and not management."} {"_id": "256188", "title": "Data model for persisting queries to database", "text": "I have been asked to build what is essentially a query builder for a reporting application. The variety of objects to query, potential modifiers, number of conditions, and so forth to be reported on have made me conclude that a query builder would be the best way to go about this task. I am trying to decide the data model to back the storage of query parameters. I have seen the models in this SO question as well as in this tutorial. But since it feels especially important to pick a good model up front, I would really appreciate any input. Here is my current approach... please critique as I know it is not too good. **Query** * QueryID * QueryName, QueryDescription, etc **Entity** \\- _ie Dog, Planet, Employee_ * EntityID * EntityName, EntityDescription, etc **AttributeType** \\- _ie INTEGER, DECIMAL, FLOAT_ * AttributeTypeID * AttributeTypeName, AttributeTypeDescription, etc **BinaryOperator** \\- _ie =, <, IS_ * BinaryOperatorID * BinaryOperatorName, BinaryOperatorDescription, etc **AttributeTypeToBinaryOperator** \\- _ie (INTEGER, >), (BOOLEAN, IS)_ * AttributeTypeID * BinaryOperatorID **EntityAttribute** * EntityAttributeID * EntityID * AttributeTypeID * EntityAttributeName, EntityAttributeDescription, etc **EntityAttributeCondition** \\- _ie Dog.Breed = 'dachshund'_ * ConditionID * EntityAttributeID * BinaryOperatorID * ConditionRawValue - _ie 14, false, Tom_ **EntityAttributeConditionGroup** \\- _ie (a AND b)_ * ConditionGroupID * LeftConditionID * RightConditionID * LogicalOperatorEnum - _ie AND, OR, XOR_ **EntityAttributeConditionGroupToGroup** \\- _ie (a AND b) OR (c OR d)_ * LeftConditionGroupID * RightConditionGroupID * LogicalOperatorEnum"} {"_id": "252043", "title": "What is the Controller in Django MVC?", "text": "Learning Django MVC and the way I thought of it is: **Models** are the database tables represented in Django as Python classes. **Views** are the HTML returned from functions in views.py. **Controllers** are the actual functions themselves in views.py invoked by an HTTP request. However I read on Wikipedia (at the time of writing): > ... a regular-expression-based URL dispatcher (\"Controller\"). I would have thought of the mapping of URLs to functions as routing - not the controller. 
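For concreteness, a minimal sketch of that dispatcher, written in Django 1.x-style syntax (the app and view names are invented):

```python
# urls.py -- the regex-based URL dispatcher that Wikipedia labels the
# "Controller": it maps a URL pattern to the function handling the request.
from django.conf.urls import url

from myapp import views  # hypothetical app

urlpatterns = [
    url(r'^articles/(?P<article_id>\d+)/$', views.article_detail),
]

# views.py -- the function the dispatcher invokes:
# def article_detail(request, article_id):
#     article = Article.objects.get(pk=article_id)  # talks to the "Model"
#     return render(request, 'detail.html', {'article': article})  # "View"
```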
But perhaps I am wrong - I guess I got my ideas because in ASP MVC the functions that handle the requests are contained in classes called Controllers..."} {"_id": "156803", "title": "Cannot understand a certain point in Agile Manifesto Principles", "text": "I was reading Agile Manifesto Principles. Everything seems clear and reasonable except for one point: > Simplicity--the art of maximizing the amount of work not done--is essential. I don't understand this. Does this mean that the work that wasn't done should be somehow exaggerated? If so, it doesn't really make sense."} {"_id": "234646", "title": "Java code quality in methods calling methods", "text": "I am currently working with an \"interesting\" code-base and see the following type of thing a lot in the code. public Object doSomething() { Object obj = new Object(); // Do some stuff to the object obj = doSomethingElse(obj); return obj; } I always feel when looking at this kind of code that it is somewhat incorrect to have an object be passed into a method as a parameter but also set the object returned by the method to the same variable that was referencing the passed object. I would usually do something like the following, where I create a new variable to avoid any confusion. public Object doSomething() { Object obj = new Object(); // Do some stuff to the object Object anotherObj = doSomethingElse(obj); return anotherObj; } Am I wrong in thinking the second code snippet is more readable/correct?"} {"_id": "49", "title": "Why isn't functional programming more popular in the industry? Does it catch on now?", "text": "During my four years at university we have been using a lot of functional programming in several functional programming languages. But I have also used a lot of object-oriented programming too, and in fact I use object-oriented languages more when doing my own small projects to prepare for my first job. But I often wish that I was coding in a functional programming language when doing these projects. However, when looking for a job, it is very rare to see a job where knowledge of a functional programming language is required. Why aren't functional programming languages used more in the industry? There is quite a lot of news about functional programming languages these days, so I wonder if functional programming is catching on in the industry now?"} {"_id": "111661", "title": "JavaScript and the paradigm shift in web programming", "text": "If my memory serves me right, there was a time when using JavaScript for web development was hugely frowned upon, because among other things, it was a privacy and security concern for users and some people just turned it off. Nowadays, you can hardly see a major website that doesn't use JavaScript, and many websites will cease to function altogether without JS, graceful degradation be damned. Either that, or usability will be severely impacted, like on SE sites. What has changed between then and now that made JavaScript practically ubiquitous in web development? Or is my assertion that JS was frowned upon a figment of my imagination and it has always been this way?"} {"_id": "135300", "title": "How to unit test Visual Basic 6 legacy code?", "text": "I am doing legacy software programming in Visual Basic 6.0. How do I unit test it?"} {"_id": "92641", "title": "Are RAD tools worth the trouble?", "text": "I have used RAD tools for Windows and web development for many years (primarily Infragistics). 
I invariably find myself in some situation where I have a very difficult time figuring out what is going on because these controls have layer upon layer of indirection and abstraction. This goes for all sorts of RAD tools, including AJAX update panels, AJAX grids, etc. More than once I have found myself asking \"Is this worth the trouble?\" For all the trouble and time I take debugging what often are bugs or quirks in these controls, would it be better to just code my own AJAX? Code my own extra functionality instead of using these RAD tools? The other problem with them is that it keeps us unfamiliar with the technology they are using; for instance, I have never done my own AJAX because I have always had update panels etc. But I imagine this would be useful to know. What is your take on this? Note: Please don't just say \"Well you should try XYZ RAD tools instead of Infragistics...they are better\". This isn't a debate on the RAD tool itself. This question is about RAD tools in general, not the merits of the one I mentioned in my question."} {"_id": "110135", "title": "Ubiquitous Language - conflict between correctness and usability", "text": "A core part of Domain Driven Design is the consistent use of a ubiquitous language across the system - in conversations, code, database schema, UI, tests, etc. I'm involved in a project in which there is a well-established domain language already, defined by an international standards organisation. However, the work we're doing is for a public web site, and the 'correct' terms for the domain aren't necessarily how the public typically use and understand them. The compromise we're using at the moment is to use the 'official' terms everywhere, except for in our acceptance criteria which refer to UI components, where we use the informal names. Does this seem like a reasonable approach?"} {"_id": "246276", "title": "How can I use unit tests and TDD to test an app that relies mostly on database CRUD operations?", "text": "At work, one of my projects is mostly about taking data passed in from an external client and persisting it in a database. It's a Java enterprise app using JPA and most of our logic revolves around CRUD operations. The majority of our bugs involve JPA in one way or another. * Example 1: If you click the save button twice, JPA might try to insert the same entity into the database a second time, causing a primary key violation. * Example 2: You retrieve an entity from the database, edit it and try to update its data. JPA may try to create a new instance instead of updating the old one. Often the solution is to add/remove/change a JPA annotation. Other times it has to do with modifying the DAO logic. **I can't figure out how to get confidence in our code using unit tests and TDD. I'm not sure if it's because unit tests and TDD are a bad fit, or if I'm approaching the problem wrong.** Unit tests seem like a bad fit because I can only discover these problems at runtime and I need to deploy to an app server to reproduce the issues. Usually the database needs to be involved, which I consider to be outside the definition of a unit test: these are integration tests. TDD seems like a bad fit because the deploy + test feedback loop is so slow it makes me very unproductive. The deploy + test feedback loop takes over 3 minutes, and that's just if I run the tests specifically about the code I'm writing. To run all the integration tests takes 30+ minutes. There is code outside this mold and I always unit test that whenever I can. 
But the majority of our bugs and the biggest time sinks always involve JPA or the database. * * * There is another question that is similar, but if I followed the advice I'd be wrapping the most unstable part of my code (the JPA) and testing everything but it. In the context of my question, I'd be in the same bad situation. What's the next step after wrapping the JPA? IMO that question is (perhaps) a step to answer my question, but not an answer to it."} {"_id": "89572", "title": "Pattern for SQL data mining app", "text": "We have an app that is used for data mining on our client database. Typical uses include getting a list of clients and their email addresses, running reports about user transactions between certain dates and returning clients that live in a specific area. High level functional requirements are: 1. Each data mining query needs to be able to be exported to Excel. 2. Parameters need to be supported (so that we can filter results etc). 3. Other apps would need to be able to access these queries as well as their results. It has been implemented as follows: * A database function is created for each SQL query. * There is also a table that contains details of each function as well as required parameters and their types. * There is a web front end where the SQL queries are maintained, and can be run and exported. * A web service exists that other apps can use to run queries. The method to run a query returns a dataset and gets passed parameters and their values in XML. Our current solution works but there are some problems with maintaining the hundreds of DB functions (would it be necessary to use functions - couldn't we just store SQL in tables?). I also have some security concerns around SQL injection as our front end basically allows the users to input any SQL. Has anyone had to develop something similar and what approach did you use? What was your reasoning for using that approach?"} {"_id": "48967", "title": "HTML/CSS plagiarism", "text": "I'm facing an issue here. A customer asked me to make an exact copy of a site, and even though I'm trying to convince him to go for a new design he does not accept it. He loves this design so much (on a side note it's horrible and outdated, but I wouldn't say that to him!) We've been discussing this for a couple of weeks and I don't know what to do. Do you have similar experiences? I don't want to lose the customer, he pays well and his jobs are really easy. At the same time, I don't want to put my signature on someone else's work. Any suggestions? Similar experiences? Thank you!"} {"_id": "117672", "title": "Age of Design Patterns", "text": "When did these design patterns originate? Balking, Builder, Delegation, Facade, Memento. I have looked for days across the net, so if someone points me to a simple google search I may shoot myself. The real question I have is only which is the oldest and which is the youngest, so the specific dates for each are not important. I have already found that the Balking pattern originated in 2002."} {"_id": "117671", "title": "Can freelancers ask their client to sponsor an iPad for project needs?", "text": "I do freelance web projects for a client. The client has been asking me to buy an iPad for testing purposes. Should I ask him to get me an iPad? I otherwise don't have any need for the iPad. Is it ethical to ask for sponsorship when you are getting paid for the projects? 
Should I try it out?"} {"_id": "252482", "title": "What do you call a problem caused by a design flaw", "text": "Is there any terminology for a problem that is caused by a previous wrong decision? For example you build your own framework, with a flawed MVC design. This in turn leads to weird situations when routing requests. A person then asks how to solve the latter problem, while the actual problem is the wrong design of the framework. I've tried searching several descriptions but as I'm looking for the word, it's hard to find anything relevant. It's not one of these anti-patterns."} {"_id": "155380", "title": "How to test the render speed of my solution in a web browser?", "text": "Ok, I need to test the speed of my solution in a web browser, but I have some problems. There are 2 versions of the web solution, the original one that is on server A and the \"fixed\" version that is on server B. I have VS2010 Ultimate, so I can make a web and load test on solution B, but I can't load the A solution on my IDE. I was trying to use Fiddler2 and JMeter, but they only gave me the times of the request and response of the browsers with the server; I also want the time it takes the browser to render the whole page. Maybe I'm misusing some of these tools... I don't know if this could be useful but: * Solution A is on VB 6.0 * Solution B is on VB.Net Thanks in advance!"} {"_id": "193357", "title": "Domain-specific language for text search/processing?", "text": "I work for an organization that does a lot of work with government data. We have a couple of different projects where we've abstracted out common text search/manipulation operations into reusable libraries, for things like standardizing the way politicians' names are displayed (e.g., transforming \"MCDONALD, BOB (R-VA)\" into \"Bob McDonald (R-VA)\"), or finding legal citations in text (e.g., finding occurrences of things like \"1 U.S.C. 7\" in text, determining that it's a US Code citation, and returning a structure that says it's referring to section 7 of title 1). These are relatively simple operations, and lots of collaborators in our space would like to use them, but we end up having to pick a language in which to implement each (the former is in Python; the latter, JavaScript), and we freeze out potential consumers/contributors who work in different languages and don't want to resort to hacks like shelling out to a node process to handle their text. This all seems like a shame because what we're expressing is so simple, and ought, one would think, to be pretty easy to share. What would be ideal would be a tiny DSL that could express a few basic text processing operations: regular expression search/replace, a few list-processing operations like map and filter, and the ability to store stuff in JSON-ish data structures (maps and lists), and a mechanism to either translate this DSL into or allow it to be consumed from the actual higher-level languages we and our collaborators want to work with (Python, JS, Ruby, and PHP are probably the main ones). Does anything like this exist? I've considered building one myself... maybe a declarative thing on top of something like YAML, or maybe a tiny subset of Scheme or Lua, or maybe something entirely invented for this purpose. 
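To give a sense of how small the operations in question are, a hedged Python sketch of the name-standardizing transform described above (the casing rules are invented and deliberately incomplete):

```python
import re

def standardize_name(raw):
    """Turn 'MCDONALD, BOB (R-VA)' into 'Bob McDonald (R-VA)'.
    A rough sketch: real name casing (O'Brien, van der Berg, ...)
    needs a proper special-case table."""
    m = re.match(r"^([A-Z'\-]+),\s*([A-Z'\-]+)\s*\(([^)]*)\)$", raw)
    if not m:
        return raw  # leave anything unexpected untouched
    last, first, party = m.groups()

    def fix(s):
        if s.upper().startswith("MC"):
            return "Mc" + s[2:].capitalize()
        return s.capitalize()

    return "{0} {1} ({2})".format(fix(first), fix(last), party)

print(standardize_name("MCDONALD, BOB (R-VA)"))  # Bob McDonald (R-VA)
```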
But I wanted to see if anything was already out there first."} {"_id": "193351", "title": "How to unit test code which is intended to have different results on different platforms", "text": "I noticed some duplicate code in a codebase I am working on that appended a filename to a directory path, so I decided to refactor it into its own method. The application I am working on is not well tested, however; I just set up the first unit tests on it two days ago, so I am very concerned to get as much code under test as possible whenever I touch it. So I began to write a unit test like this (pseudocode): Directory directory(\"a\") Expect(directory.prependPathToFilename(\"b\")).toBeEqualTo(\"a/b\") However, I then remembered that the application is cross-platform. So I thought about this: Directory directory(\"a\") String separator = \"/\" if (platform is *nix) else \"\\\" Expect(directory.prependPathToFilename(\"b\")).toBeEqualTo(\"a\" + separator + \"b\") But I remembered that Roy Osherove says in section 7.1.2 (\"Avoiding logic in tests\") of _The Art of Unit Testing_ (178): > If you have any of the following inside a test method, your test contains > logic that should not be there: > > * `switch`, `if`, or `else` statements > > * `foreach`, `for` or `while` loops > > > > A test that contains logic is usually testing more than one thing at a time, > which isn't recommended, because the test is less readable and more fragile. > But test logic also adds complexity that may contain a hidden bug. Now, it does not seem that in this case I am testing more than one thing at a time. Would this be a case of an acceptable logic block in a test? Is there a pattern for cleanly testing behaviors/results that are _expected_ to be different on different platforms?"} {"_id": "7516", "title": "What programming language and framework has best support for agile web development?", "text": "If I would like to quickly set up a modern website, what programming language + framework has best support for this? E.g. short and easy to understand code for a beginner and a framework with support for modern features. Disregard my current knowledge, I'm more interested in the capacity of web programming languages and frameworks. Some requirements: * Readable URIs: http://example.com/category/id/page-title similar to the URLs here on Programmers. * ORM. A framework that has good database support and provides ORM or maybe a NoSQL-database. * Good support for RESTful WebServices. * Good support for testing and unit testing, to make sure the site is working as planned. * Preferably a site that is ready to scale with an increasing number of users."} {"_id": "92877", "title": "Getting back to C# programming after dabbling in the non-tech world", "text": "What's the best way to get back in and stay abreast of the latest stuff going on with C# 3/4? I do have Jon Skeet's book."} {"_id": "348", "title": "Should companies consider remote employees or stick to local employees?", "text": "Elite developers can be 10x more productive than an average developer. Clearly it's easier to find an elite developer around the whole world than in a company's backyard. 
If a company is not located in a programming hot spot, should they consider hiring people who work from home?"} {"_id": "206016", "title": "Maintaining SVN history for a file when merge is done from the dev branch to trunk?", "text": "In my org, we use SVN for version control. So for each build (done periodically), we merge the code to trunk from the development branch (all the developers check in to this branch). So when we want a new branch, say for a new release, we create it from the trunk by doing an svn copy. Now in the new branch we have the _history only from the trunk_ and not from _the previous development branches_. Is there any way to maintain the history when a merge is done from the dev branch to trunk? **Update:** By history I meant the revision history of each and every file: who created it and who edited it. Unfortunately we are using svn 1.6 right now"} {"_id": "149335", "title": "In MVC, can several views have the same controller or must one view have one unique controller?", "text": "I'm having some questions while designing an architecture for a project around MVC. (It's a C++/Marmalade SDK project, I'm not using any particular MVC framework, I'm making one.) In several articles (like the original Steve Burbek article) I keep reading the concept \"MVC triad\", which bugs me since I took this concept rather literally. When I read it the first time, it looked like an application is built around \"MVC triad\" units - one for each UI piece I supposed -, but I find this rather inflexible and I think that's not how MVC was intended to be used. Then, researching further on the issue, I found several examples of tight coupling of the controller and the view, namely, a 1-to-1 relationship - TextEditView has TextEditController. But when I get back to my project I find that it could be useful to have one controller (by 'logical unit', like AddElementController) and several views for that particular controller. I'm clearly thinking about something like an AddElementController that should have some sort of tab UI. Should I have an AddElementController that has an AddElementTabView and several AddImageView, AddSoundView, etc for the tabs? Or should I have a different 'sub-controller' for each tab view? In sum, and regarding the MVC pattern (not the X framework's particular understanding/implementation of this pattern), is it correct to have several views for a controller or should each view have its own particular controller? Also, is it correct to keep some state information on the controller or should it be stateless (meaning that the state should be placed on some non-domain state model)?"} {"_id": "149330", "title": "Good architecture for user information on separate databases?", "text": "I need to write an API to connect to an existing SQL database. The API will be written in ASP.Net MVC3. The slight problem is that with existing users of the system, they may have a username on multiple databases. Each company using the product gets a brand new instance of the database, but over the years (the system has been running for 10 years) there are quite a few users (hundreds) who have multiple usernames across multiple \"companies\" (things got fragmented obviously and sometimes a single Company has 5 \"projects\" that each have their own database). Long story short, I need to be able to have a single unified user login that will allow existing users to access their information across all their projects. 
The only thing I can think of is storing a bunch of connection strings, but that feels like a really bad idea. I'll have a new Database that will hold the \"unified user\" information...can anyone suggest a solid system architecture that can handle a setup like this?"} {"_id": "193682", "title": "When to (enforce) linting in a software project", "text": "I'm heading a new team of developers working on a software project that makes use of continuous integration (circleci) w/ a pretty fleshed out suite of busterjs unit/integration/acceptance tests. Our project is primarily written w/ coffeescript, and I try to make use of coffeescript-linter to ensure everyone working on our code base keeps code consistent and as organized as possible. My question is, does anyone have any thoughts on when/if/how to enforce linting? Should I integrate linting into my tests that are executed by circleci before deployment? Another thought I had was writing a simple shell script that combines git-push and the linting utility into one step and then including it in the project & having everyone use it. I'm pretty new to managing teams of programmers so anyone else's feedback is much appreciated. EDIT: In the last 3 seconds it just occurred to me that git-hooks is probably perfect for this. Specifically a git-hook on commit."} {"_id": "149339", "title": "Who should register input listeners: the controller or the view? (MVC)", "text": "I'm using a (C++) SDK (Marmalade) and building a project around the MVC pattern. On my app, user input listeners may be registered on certain UI elements/widgets/etc providing a proper callback function (which, according to MVC, should be a method of a controller, right?). In this scenario, who should register these listeners: * The controller? (must have access to the view UI elements, less decoupling of control from presentation) * Or the view? (direct access to UI elements but must have a reference to the controller, isn't this incorrect in MVC?) Between the two, I think the latter makes more sense and does better separation of concerns, but I'm afraid that I overlooked some problem with that design. Thanks in advance."} {"_id": "255808", "title": "Precedence of operators", "text": "int x=2,y; y= x++ + x-- - ++x; printf(\"%d\",y); When I compiled using Turbo C++, I got 3 as the output. For the GCC compiler, I got 1 as the output. Why does the output vary?"} {"_id": "158097", "title": "What to do when a client has unrealistic expectations?", "text": "I've been working on a project for the past six months at a client site since they require data confidentiality and didn't want us to work at our own office. When I showed up alone to this client site, I was told that I needed to finish the project in two months. Since the client is not a software company, and because of various policies, it took around 20-25 days just to give me rights on my machine to install stuff like eclipse, tomcat, etc. Even after the delay in getting the environment set up, they were still expecting me to complete the project in the same two month period. They did not give me any requirement documents, but since I'm working at the client site, we used to have meetings regularly to discuss the requirements. After six months the application is still not finished, and everyone is blaming me, but they fail to realize that we have added many more features than those discussed in the first few meetings. I've had to redo many things during this period, e.g. 
separate a form into two sections; a few weeks later, they ask me to merge the two forms again as it's confusing, and so on. The scope of the application is increasing every day but they still think it's a two month project that got delayed. When I told them that the scope has increased they ask why I didn't ask for requirements at the beginning. I already work 11-12 hours every day and travel 3-4 hours, and now they expect me to come in on Saturdays also. I have to do everything here: take requirements, design, code and test. Please advise me on what to do in such a case. Additional details: We did have a list of deliverables, but then they added a few more things to it saying these are also important. They also changed a few deliverables. They don't even have their UAT server, they test on my development machine itself via IP address."} {"_id": "255803", "title": "How to design a JavaFX client with a JEE server. What is a possible design?", "text": "I am using JavaFX as a thin client as a view technology. The design is like... * On Client Side:=> (View) JavaFX with client-side validations * On Server Side:=> (Controller and Model) Server-side validations Servlets, WebServices, EJBs, JPA. There may be a possible need of HTML5 in future. I chose this design so that the application can scale and meet future needs. I have a form with some data which needs to be submitted, and its values are validated on the client side as done in HTML with JavaScript. But after validation, how can I submit my form values (actually name/value pairs) to the server side where the actual business is done? For getting data from the server to the client, web services are a good choice. How to send data from client to server on submitting a form? So the question is:- Is there any `<form>
` counterpart of HTML in JavaFX, or should it be done using an HttpClient-like API? I have read about the CaptainCasa framework but it is a very thin client. For every change in the view it connects to the server, which unnecessarily makes the network busy and cannot provide good performance on slow connections. Many things could be done on the client side. I want some JavaScript-like processing on the client side. Other than that, I didn't find any framework or design pattern that fits this problem. Any other ways to achieve this?"} {"_id": "158095", "title": "How Can I Improve my Workflow?", "text": "So I design and develop websites myself, mostly in WordPress. Once I'm happy with the site on my local server, I upload it and its database to the web server and let the client make whatever changes they want to the site. When he/she needs me to make changes to the code or the backend of WP, I usually work on the remote version of the site from then on, which is a pretty slow process compared to working locally. I guess what I'm asking is, is there a way to work on a local version of the site which syncs any changes made to the remote version of the site? Is this version control by any chance? Here's my typical workflow: * Set up local database * Code local WordPress site * Export local database * Import local database to remote server * Upload all files to remote server * Continue to make any changes post-launch remotely Is there any way to improve my current workflow?"} {"_id": "255801", "title": "Career in SQL Server Development", "text": "I am a fresher and I have started my career as an \"associate data analyst\" in a reputed MNC. Right now I am working on Microsoft SQL Server, querying the database using T-SQL, and also working on SSRS (SQL Server Reporting Services). I want to know the future scope for these technologies. Waiting for your answers."} {"_id": "158092", "title": "Does this happen in Common Lisp?", "text": "From Steve Yegge's \"Lisp is Not an Acceptable Lisp\": Lisp has a little syntax, and it shows up occasionally as, for instance, '(foo) being expanded as (quote foo), usually when you least expect it. What is he talking about with `'(foo)` being expanded into `(quote foo)` in some situations? (As opposed, I would imagine, to `(quote (foo))`)."} {"_id": "158090", "title": "When are multimethods useful in practice?", "text": "The Common Lisp Object System (CLOS) supports multiple dispatch (multimethods). When is this a useful feature in practice? I'm not just looking for an example of hypothetical functionality that would be easier to implement with multiple dispatch[1]. I'm looking for examples of where it's useful in real software, for any value of real that means it would get written for something other than just an example. [1] In programming tutorials, are examples contrived more often than not?"} {"_id": "120728", "title": "How should we deal with multiple transaction-report requests?", "text": "We are developing a system for the retail market, and one of its features will enable clients (actually consumer clubs) to go through all transactions made by end-clients. One of the ways to get this information will be via an API. The idea is that there will be requests for reports with a start date and an end date, and a response will have all the transactions between those dates. We worry that some reports will be very large, and that some clients will repeatedly request reports, in which case the DB and CPU will be very overloaded. 
The same server that is going to service those requests will also take care of the actual retail transactions (received by proprietary devices) and of the Web application. We are not sure about how to limit the report requests from the API so that it won't affect the system too much. **So, how should we deal with this scenario? Any thoughts?** EDIT: just to make it clear: When I mentioned proprietary devices I meant \"On-Location\" devices which are used during sales with end-clients; this means that these requests shouldn't get delayed, and this is the main concern. **One more question: Some people here suggested the use of prioritized threads, i.e. - giving lower priority to the threads retrieving the reports; is this a good idea?**"} {"_id": "153976", "title": "What is Rainbow (not the CMS)", "text": "I was reading this excellent blog article regarding speeding up the badge page and in the last comment the author waffles (a.k.a. Sam Saffron) mentions these tools: > dapper and a bunch of custom helpers like rainbow, sql builder etc Dapper and sql builder were easy to look up, but rainbow keeps pointing me to a CMS; can someone please point me to the real source? Thanks. _Obviously the architecture of these [SE] sites is uber cool and ultra fast so no comments on that thanks._"} {"_id": "153970", "title": "Observer pattern for unpredictable observation time", "text": "I have a situation where objects are created at unpredictable times. Some of these objects are created before an important event, some after. If the event already happened, I make the object execute stuff right away. If the event is forthcoming, I make the object observe the event. When the event triggers, the object is notified and executes the same code. if (subject.eventAlreadyHappened()) { observer.executeStuff(); } else { subject.subscribe(observer); } Is there another design pattern to wrap or even replace this observer pattern? It looks a little dirty to me."} {"_id": "120721", "title": "What are the differences between aspect-oriented, subject-oriented, and role-oriented programming?", "text": "I know there are many papers describing these three paradigms but I'm looking for a schematic explanation. There are a few very good descriptions of aspect-oriented programming on here so I'm asking this question in the hope of getting the kind of high-quality answer people at Stack Overflow are used to delivering."} {"_id": "16493", "title": "What if a team member starts listing aggressive deadlines and management is happy and programmers know they will work overtime?", "text": "A team member starts listing fairly aggressive deadlines (for the project everybody is working on) -- something that, done well, can take 4 to 5 days, and he lists it as 2 to 3 days. The program manager and the CEO are both happy, because that means people will work overtime, salary costs are lower as a result, goals are reached faster, etc. People do get burned out and it is not sustainable long term. Are there good ways to handle it? I can talk to the management, but surely it is a conflict of interest, because they want people working overtime and people don't want to always work overtime. (unless you tell them sprint for 2 months and we will give you 2 weeks extra holiday). 
If enough coworkers say this is not good, we can tell the management, but some coworkers don't care about quality as much (if time is short, they just sacrifice quality), some coworkers also like to please the management, and the rest may not want to make noise to suggest that they don't want to work hard. What are some good ways to handle this?"} {"_id": "150773", "title": "Can observer pattern be represented by cars and traffic lights?", "text": "I wanted to verify with all of you if I have a correct Observer Pattern analogy. The scenario is as follows: Consider a junction where there is a traffic signal with red, yellow and green lights respectively. There are vehicles facing the traffic signal post. When it shows red, the vehicles stop; when it shows green, the vehicles move on. In case it is yellow, the driver must decide whether to go or to stop, depending on whether he/she has crossed the stop line or not. At the same time, there are vehicles that do not care about the signal. They would do as they like. The similarities are that the traffic signal happens to be the subject, notifying its states by lighting the appropriate lights. Those looking at it and following the signal are the ones subscribed to it, and behave according to the state of the subject. Those who do not care about it are sort-of un-subscribed from the traffic signal. Please tell me if you think this is a correct analogy or not."} {"_id": "250357", "title": "Isn't striving for elegance counter-productive?", "text": "Many questions have been asked about the nature of elegant code and design, but the underlying assumption seems to be that _elegance is a good thing_. Therefore it is acceptable or even desirable to make some extra effort to create elegant solutions. **But is it really worth it?** I am not asking what is elegant; neither am I asking about good code. I assume that \"you know it when you see it\", and it can also be vaguely characterized by verifying that there is nothing left to take away, in a sense that the mental model you have in your mind is expressed directly and unobscured in the language of your choice (programming language, object oriented design, relational schema etc.), without distractions. However, a corollary to it is that elegance is always relative (and in a way subjective); it depends on what you think the problem is and what the solution should look like. The problem of course is that both the problem domain and your understanding of it constantly change (additionally, your knowledge about the programming language / patterns / tools develops, but that's not what I am mainly concerned about). This implies that in order to keep your design elegant, you have to constantly rethink it as the problem / your understanding of it evolves. Practically, doing so would result in almost endless refactoring. Furthermore, I even maintain that by striving to develop elegant solutions maintainability necessarily suffers. The most elegant solutions are the most tightly coupled to the problem domain, for if they weren't, then there would be something to take away from it, hence they would not be the most elegant anymore. If there is something redundant in your solution, then you can simplify it and make it more elegant by removing that redundant part, right? But then, sometimes, when you have a really elegant design and are required to implement a change, you first have to \"untighten\" your solution, just in order to be able to make an unforeseen (as are most of them) change at all. 
However, the open/closed principle states that this should not be necessary. So, another more concrete formulation of the question is: **To some degree, aren't elegance and the open/closed principle at odds with each other?** What do you think?"} {"_id": "119515", "title": "Finding header files", "text": "A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if \"\" was used), then along the specified and default include paths, failing if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may for a number of reasons not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm: 1. Start in the directory of the including file. 2. Is the header file found in the current directory or any subdirectory thereof? If so, done. 3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise move to the parent of the current directory and go to step 2. Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?"} {"_id": "250358", "title": "Confusion in my If Else, Else If Condition in C#", "text": "I have three columns as follows: * Month * Tech * Circle According to these columns I need to fetch the data. In the Month column, the data is: Jan, Feb, March... and so on. In the Tech column, the data is: Gsmnqi, Gsmboi... and so on. In the Circle column, the data is: Ap, Kol, Mumbai... and so on. I want to handle four conditions as follows: 1. If I select the month, it will fetch the data related to the month; tech and circle will not be selected. 2. If I select the month and tech, it will fetch the data related to month and tech; circle will not be selected. 3. If I select the month and circle, it will fetch the data related to month and circle; tech will not be selected. 4. If I select month, tech and circle, it will fetch the data related to month, tech and circle. But my if condition is not working; seriously, I get confused with if, else and else if for these four conditions. if (nqiSqiEntity.Month != string.Empty) { query.AppendLine(\"select * from K2_NQISQI with (nolock) where MONTH = '\" + nqiSqiEntity.Month + \"' order by id asc\"); } else if (nqiSqiEntity.Month != string.Empty && nqiSqiEntity.Tech != string.Empty) { query.AppendLine(\"select * from K2_NQISQI with (nolock) where MONTH = '\" + nqiSqiEntity.Month + \"' and TECH = '\" + nqiSqiEntity.Tech + \"' order by id asc\"); } else if (nqiSqiEntity.Month != string.Empty && nqiSqiEntity.Circle != string.Empty) { query.AppendLine(\"select * from K2_NQISQI with (nolock) where MONTH = '\" + nqiSqiEntity.Month + \"' and CIRCLE = '\" + nqiSqiEntity.Circle + \"' order by id asc\"); } else { query.AppendLine(\"select * from K2_NQISQI with (nolock) where MONTH = '\" + nqiSqiEntity.Month + \"' and CIRCLE = '\" + nqiSqiEntity.Circle + \"' and TECH '\" + nqiSqiEntity.Tech + \"' order by id asc\"); } In the conditions, instead of `string.Empty`, I need to put the value, or I need to check whether the value is present; only then should the condition be executed. 
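One observation about the code above: the first test, `Month != string.Empty`, is true in all four cases, so the three later branches can never be reached. A hedged sketch of one way to restructure it -- treating tech and circle as independent, optional filters (shown in Python for brevity; the C# version has the same shape, and binding parameters instead of concatenating strings also removes the SQL-injection risk):

```python
def build_query(month, tech=None, circle=None):
    """Compose the WHERE clause from whichever filters are present,
    so no if/else-if ordering can swallow the other cases."""
    sql = "select * from K2_NQISQI where MONTH = ?"
    params = [month]
    if tech:
        sql += " and TECH = ?"
        params.append(tech)
    if circle:
        sql += " and CIRCLE = ?"
        params.append(circle)
    return sql + " order by id asc", params

# build_query("Jan", circle="Ap") ->
#   ("select * from K2_NQISQI where MONTH = ? and CIRCLE = ? order by id asc",
#    ["Jan", "Ap"])
```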
Please kindly help me understand this code."} {"_id": "119516", "title": "What to use for simple cross-platform games instead of Flash?", "text": "In short, for simple games: 1. Is Flash still a good option for browser-based PC clients? It still has 90%+ penetration. 2. What is a good alternative for mobile devices? Is HTML5 + JavaScript **the** choice for mobile? Or does one have to learn a new native language for each target platform? (Android, Apple, Windows Phone)... If you desire further background: There are more blogs about the official demise of mobile Flash than I can count, along with endless useless and vitriolic comments. I'm actually trying to do something practical: build simple games that can be served across multiple platforms. Several months ago I plopped down $1100 for CS5.5 Web and am wading into Flash. Bummer. My question to people who actually develop simple games and apps: What platform should I use instead? Is Flash still a sensible platform for web-served PC users? For example, let's say I build a simple arcade game that I would like to serve as an app to mobile users and as a browser-based game to PC users. Should I still invest the time and effort to learn and develop in Flash for the PC users, while building a parallel code set in some other language for mobile users? My games are simple enough that it would be annoying but not inconceivable to maintain parallel code sets."} {"_id": "119510", "title": "What's the point of adding Unicode identifier support to various language implementations?", "text": "I personally find reading code full of Unicode identifiers confusing. In my opinion, it also prevents the code from being easily maintained. Not to mention all the effort required for authors of various translators to implement such support. I also constantly notice the lack (or the presence) of Unicode identifier support in the lists of (dis)advantages of various language implementations (like it really matters). I don't get it: why so much attention?"} {"_id": "144385", "title": "Programming Without A Computer", "text": "> **Possible Duplicate:** > Learning to program without a computer I have a bit of experience programming (6 months) and am soon to go on a 2 month trip where I will be without a computer, but with lots of spare time. Is there a way I can keep programming (or learning to program) even without a computer? Should I read language-agnostic books like Code Complete or PragProg?"} {"_id": "191267", "title": "Is \"responsive web design\" becoming a \"badge to wear\" for front end devs?", "text": "This may seem like an odd question. However, more and more I am seeing web applications that are built on the \"mobile first\" / \"responsive design\" principle that are about as responsive as a wasp in winter. Mobile isn't a \"trend\" but it's being treated like that. It's \"trendy\" and \"eye-catching\" for digital agencies to be \"mobile first\", but for most that only means one or more of the below: * @media queries * jQuery Mobile * Twitter Bootstrap I've recently seen a site which was billed as \"mobile first\" which only used @media to differentiate between devices. It was a .NET webforms application that heavily relied upon view state, and the download size for just the homepage (pre postbacks) was over 1 MB! Responsive? Not on my crappy Samsung in the middle of a field. I know that this just sounds like a rant, but I'd really just like to know what people's opinions are on how to tackle device differentiation? 
Personally, I show / hide markup (ie sections of a page / includes) based on device to ensure that not only the layout and functionality are optimized, but the download total too. Am I a dinosaur?? Am I wasting my time??"} {"_id": "191264", "title": "What exactly is a Search Engine Indexer? Where to start building one?", "text": "For one of the projects at uni we were given the task to create a custom, niche search engine. My colleagues and I split the tasks amongst each other so that we can tackle the overall project more easily. My part is to create the indexer. I already read the Wikipedia page on search engine indexers and some other related articles but I'm still struggling to understand exactly how it works and how it looks. To me it is obvious that it is not just a regular table with an index and a description column. So my question would be, what is a search engine indexer comprised of, how does its architecture look, and where should I start in building one?"} {"_id": "201542", "title": "When presenting a software design to upper management", "text": "I'm presenting a software design for approval today and as I'm going through the hundreds of pages of design documentation, cherry picking what's important, I'm unsure if I should start with the activity diagrams or use cases. I was thinking of having it stacked like so: * Activity Diagrams * Use Case Diagrams * Sequence Diagrams * Class Models I've spent the last 7 months of my life creating this and I want it to be presented in the most logical order and done with."} {"_id": "43054", "title": "What modelling technique do you use for your continuous design?", "text": "Together with my teammates, I'm trying to self-learn XP and apply its principles. We're successfully working in TDD and happily refactoring our code and design. However we're having problems with the overall view of the design of the project. Lately we were wondering what would be the \"good\" practices for an effective continuous design of the code. We're not strictly seeking the right model, like CRC cards, communication diagrams, etc., instead we're looking for a technique to constantly collaborate on the high level view of the system ( _not too high though_ ). I'll try to explain myself better: I'm actually interested in the way CRC cards are used to brainstorm a model and I would mix them with some very rough UML diagrams (that we already use). However, what we're looking for are some principles for deciding **when** , **how** and **how much** to model during our iterations. Do you have any suggestions on this matter? For example, how do you and your teammates _know_ you need a design session, and how do your meetings work?"} {"_id": "124911", "title": "As a young developer, should I be worried about having to use \"out-of-style\" tech at work?", "text": "I'm a recent college graduate (last May!). While I was still in school, I wanted to make sure that I had a job before I graduated, and very early (probably too early) in my job search I settled on one in a region I'd been hoping to move to after undergrad. However, I've been second guessing this decision for months now, for several reasons. One is that I'm not very challenged at work, and I feel like I haven't improved much at programming since starting here. I can always make time to work on open source (and have in the past) outside of my job, though, so I do have a venue to get around this disappointment. 
More importantly, I'm worried by the fact that my job is basically to work on a creaky old Perl web application (using Mason and a weird in-house ORM). Am I shooting myself in the foot here by working with a technology that's no longer popular, and won't really help me out in getting a job in the future? I rarely see Perl jobs, and when I do, it's usually doing something I'm not interested in (front-end web development stuff). Systems programming, visualisation, network programming, or at least backend web development stuff are the sort of topics that I'd actually enjoy working in -- it doesn't seem like my current work experience is helping me towards positions doing any of these things."} {"_id": "124913", "title": "Does JavaFX have a future?", "text": "I don't intend to hash and rehash the same matter, but just to decide what to learn first (JavaFX, Flex, HTML5, etc.), I would like to run a kind of survey, especially as the most recent similar questions here are at least a year old. So, what is JavaFX's prospect as a RIA technology for the next couple of years?"} {"_id": "69771", "title": "Why was Scala not implemented with C or C++?", "text": "Does anybody know why Scala was implemented on Java and .NET instead of C or C++? Most languages are implemented with C or C++ [e.g. Erlang, Python, PHP, Ruby, Perl]. What are the advantages of Scala being implemented on Java and .NET, other than getting access to the Java and .NET libraries? **UPDATE** Wouldn't Scala gain more benefit if it were implemented in C, because it could be tuned better rather than relying on the JVM?"} {"_id": "199054", "title": "Is the new C++11 analogous to Python 2 -> 3?", "text": "1. I'm a Python 2 developer and I just ordered The C++ Programming Language, 4th edition, by Bjarne Stroustrup, to learn C++11. But right after I ordered it, I started to wonder if I made a mistake. Are the changes made to C++ in C++11 analogous to how Python moved from 2 to 3, insofar as code significantly breaks and is not backwards compatible? Or is learning C++11 safe to do? 2. If I coded C++11 in Xcode, the latest version being whatever it is, would it work on, say, a Windows machine? Or does that depend more on what will compile the code? I'm fairly certain that Xcode uses LLVM."} {"_id": "199055", "title": "\"Open-source\" licenses that explicitly prohibit military applications", "text": "I am a researcher, and in my research I do a lot of programming. I am a big fan of the open-source concept - especially in research, where transparency and reproducibility are already a big part of the culture. I gladly contribute as much as I can to the community, and releasing my code for anyone to use is part of that. However, in research there is always a certain measure of uncertainty about what the stuff you produce will be used for. I fully understand that I can't copyright any results or conclusions - but I can control how others use my code, and I would like to make sure that there is no (legal) way to incorporate software I produce in military applications. I've read through a few of the shorter ones of the common OSS licenses, and summaries of some more, but they all seem to focus solely on the questions \"do you earn money on my code?\" and \"do you make my code available with your program?\" - nothing about what the program actually does with the code. Are there any good open-source licenses that explicitly prohibit all kinds of military applications?
### Update: After reading up some more on how OSS works, I've realized that a license that meets my needs will by definition not be open-source, since open-source licenses cannot discriminate against fields. Thus, I'm rather looking for a license that is _like_ an open-source license, except that it prohibits military use. I want this license to already exist, authored or at least reviewed by someone who actually knows licensing, since I don't. Also, in response to a couple of remarks that this will be difficult to enforce: yes, I realize that. But this is more for myself than for the legal implications; if I use a license like this, and a military organization uses my code anyway, they are breaking the law, and they are doing it despite my explicit instructions not to. Thus, the potentially gruesome things that they do with applications that include software I've written are no longer \"on my conscience\", since they stole the software from me. (And somewhere I have a naïve hope that if they need something I've done, and my license prohibits them from using it legally, they'll get someone else's program that does the same thing and allows them to use it. Not that governments always do, but they always _should_ abide by the law...) It's a moral safeguard, so to speak, rather than something I actually expect to bring up in court (if my mediocre code is ever used by the CIA...)"} {"_id": "69775", "title": "What does the word Relational in \"Relational Database\" imply?", "text": "I tried searching but didn't get any useful information. What does the word \"Relational\" mean here? Is it tables being related to each other just like real-life entities, or does it mean something else?"} {"_id": "254278", "title": "Using object flow and control flow in UML", "text": "I have a UML activity diagram below, where two ways are shown in which the actions can be done. ![enter image description here](http://i.stack.imgur.com/bhttT.jpg) Since the setting of the values of **a** and **b** can be done in any order before action 2 is executed, I'm wondering if both of the ways shown here are correct, or if there is a difference between them?"} {"_id": "125883", "title": "Are there any languages that have both high- and low-level facilities?", "text": "Are there any languages that have both high- and low-level facilities? If not, is it feasible to create one? Why or why not? In theory, it would be very helpful to have a programming language that has both shell and regular programming language facilities, as Forth, for example, can easily be made to have, and also high-level facilities; Forth does not seem like a natural fit for high-level things (and it would probably take a good bit of work to make it one). Of course, languages like C or C++ can be made to simulate features of high-level languages, but at the cost of ugly or difficult syntax, and the addition of considerably more complexity, or at the cost of a lot of work, which would be unnecessary since it would often be easier just to use a more standard high-level programming language. **Clarification**: Low-level meaning things like manual memory management, limited data structures, only one return value, and the like. Examples: Assembly language, C, C++ to some extent, and Forth.
High-level language examples would be: Python, Perl, JavaScript, Java and C# to a big extent, and Lisp."} {"_id": "195604", "title": "How can I compute maximal area including only given integer pairs of (x,y) coordinates?", "text": "The values of a function of two variables z = f(x, y), where x, y, z take integer values, are stored in a SQL db. Calculate (determine) the largest flat surface area. By 'flat area' we mean a region such that for each pair of x and y lying inside it, the value of the function is constant (z = const). I have already tried to use the flood fill algorithm, but I am not sure if this is the right way. E.g. if I have pairs (x,y) as follows: (0,0),(0,2),(2,0),(2,2) for which z equals 5 (z=const), and I have the point (1,1) where z=1 - how should I compute the \"flat area\"?"} {"_id": "52487", "title": "MySQL proficient?", "text": "I've been looking to apply to various web development intern jobs, and I feel comfortable with the PHP/CSS/HTML that they are asking for, but what does MySQL proficient mean? I've designed my own databases in phpMyAdmin and used them in the CMS I created; does that mean I'm proficient in MySQL? Do you just need to be able to create a database and then update/delete/insert fields in there? Would knowing this much allow me to put MySQL on my resume?"} {"_id": "56900", "title": "How do you QA and release software quickly with a large team?", "text": "My work used to be a smaller team. We had fewer than 13 devs for a while. We are now growing rapidly, and are over 20 with plans to be over 30 in a few months. Our process for QA'ing and releasing each build is no longer working. We currently have everyone develop the new code, and stick it onto a staging environment. A few days before our weekly release, we would freeze the staging environment and QA everything. By our normal release time, everything was usually deemed acceptable and pushed out the door to the main site. We reached a point where our code got too big, so we could no longer regress the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hours behind the normal release time. As the team grows further, we are going to drown with this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same"} {"_id": "166930", "title": "Is ASP.NET MVC completely (and exclusively) based on conventions?", "text": "**TL;DR** * Is there a \"Hello World!\" ASP.NET MVC tutorial out there that doesn't rely on conventions and \"stock\" projects? * Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single \"hello_world.asp\" file or something (like in PHP)? * Am I completely mistaken and I should be looking somewhere else, maybe **this**? * I'm interested in the MVC framework, not Web Forms **Background** I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development became relevant for me once again. I'm no professional, but I try to gain as much knowledge and control over the technology I'm working with as possible.
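On the flood-fill question above: if each stored (x, y) pair is treated as one unit cell of area, the largest flat area can be computed by flood-filling 4-connected components of equal z. A rough sketch in Python - the input format and the "one point = one unit of area" rule are assumptions, not part of the original exercise statement:

```python
from collections import deque

def largest_flat_area(points):
    # points: dict mapping (x, y) -> z, e.g. loaded from the SQL table
    seen = set()
    best = 0
    for start in points:
        if start in seen:
            continue
        z = points[start]
        seen.add(start)
        queue = deque([start])
        size = 0
        while queue:  # BFS over 4-connected neighbours with the same z
            x, y = queue.popleft()
            size += 1
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in points and n not in seen and points[n] == z:
                    seen.add(n)
                    queue.append(n)
        best = max(best, size)
    return best

pts = {(0, 0): 5, (0, 2): 5, (2, 0): 5, (2, 2): 5, (1, 1): 1}
print(largest_flat_area(pts))  # 1
```

Under this reading, the poster's example yields 1: none of the four z=5 points are adjacent, so each forms its own region; whether the missing in-between cells should be interpolated is a modelling decision, not an algorithmic one.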
I'm using Visual Studio 2012 for C# - my \"desktop\" language of choice - and since I got the Professional Edition from Dreamspark, the Web Development Tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC Framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want **but**... _not quite_. Learning PHP was easy - right from the start I could just create a \"hello_world.php\" file and do something like this for immediate results: echo \"Hello World!\"; But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that would start like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the \"Empty\" project template for a new ASP.NET MVC 4 Application in VS2012 is not empty at all: several files and folders are created for you - much like a new C# desktop application project, but with C# I can _in fact_ start from scratch, creating the project structure myself. This is not the case with PHP: * I can choose from a plethora of different MVC frameworks * I can just create my own framework * I can just skip frameworks altogether, and toss random PHP along with my HTML in a single file and make it work I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials mainly because of this reason, but, if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)"} {"_id": "211318", "title": "How do I easily print number triangles? Using for loops", "text": "I know how for loops work and what a nested loop is, but I get very frustrated while printing those number or asterisk triangles in Java. It makes me wanna quit learning programming :( Please, someone tell me an easy way to master loops. for(int i=1;i<7;i++){ for(int j=1;j<=i;j++){ System.out.print(j); } System.out.println(); }"} Application A -> Scheduler A -> Trigger 1 Application B -> Scheduler A -> Trigger 1 Application C -> Scheduler A -> Trigger 1 Application D -> Scheduler A -> Trigger 1 and Trigger 2 `2014-02-11 13:56:13,465 [QuartzScheduler_QuartzJobCluster-1APP381392087198304_ClusterManager] WARN jdbcjobstore.JobStoreTX - This scheduler instance (1APP381392087198304) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.` Now I want to use a different scheduler instance for each application, each with its specific trigger, in the same schema. So, I would like to know: is it correct to use the **same schema and tables** for all of the applications, or should I use a different schema for each of them? Does anyone know what the best practice is for this situation? Also, the fourth application has two triggers; the second one is different from the others.
Application A -> Scheduler A -> Trigger 1 Application B -> Scheduler B -> Trigger 1 Application C -> Scheduler C -> Trigger 1 Application D -> Scheduler D -> Trigger 1 and Trigger 2"} {"_id": "116235", "title": "What programming languages are well suited for developing a live coding framework?", "text": "I would like to build a \"live coding framework\". I should explain what is meant by \"live coding framework\". I'll do so by comparing live coding to traditional coding. Generally put, in traditional programming you write code, sometimes compile it, then launch an executable or open a script in some sort of interpreter. If you want to modify your application you must repeat this process. A live coding framework enables code to be updated while the application is running and reloaded on demand. Perhaps this reloading happens each time a file containing code is changed, or by some other action. Changes in the code are then reflected in the application as it is running. There is no need to close the program, recompile and relaunch it. In this case, the application is a windowed app that has an update/draw loop, is most likely using OpenGL for graphics, an audio library for sound processing (SuperCollider?) and ideally a networking lib. Of course I have preferred languages, though I'm not certain that any of them would be well suited for this kind of architecture. Ideally I would use Python, Lua, Ruby or another higher-level language. However, a friend recently suggested Clojure as a possibility, so I am considering it as well. **I would like to know not only what languages would be suitable for this kind of framework but, generally, what language features would make a framework such as this possible.**"} {"_id": "86636", "title": "TDD vs. Productivity", "text": "In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use. However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace. Each time I think about the next step that needs work, I have to stop every time and think about how I would write a test for it, to allow me to write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests, and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list and see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented.
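On the live coding question above: the mechanical core of such a framework is reloading changed modules from inside the running update/draw loop. A minimal Python sketch of that idea, watching a file's modification time; the `game_logic` module name and its `update()`/`draw()` functions are illustrative assumptions:

```python
import importlib
import os
import time

import game_logic  # hypothetical module the user edits while the app runs

def maybe_reload(module, path, last_mtime):
    # Re-import the module whenever its source file changes on disk.
    mtime = os.path.getmtime(path)
    if mtime != last_mtime:
        importlib.reload(module)
        print("reloaded", module.__name__)
    return mtime

last = 0.0
while True:  # the main loop keeps running while the code underneath changes
    last = maybe_reload(game_logic, "game_logic.py", last)
    game_logic.update()
    game_logic.draw()
    time.sleep(1 / 60)
```

The hard parts a real framework adds on top are preserving live state across reloads and surviving reload errors without killing the loop - which is part of why languages with first-class runtime code replacement (Lisps, Erlang, Smalltalk) are popular for live coding.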
It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of \"getting the damn project finished\". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, then I'll slip out of the habit of making the code so modular and testable. Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation?"} {"_id": "191867", "title": "Beta-testing app good practice", "text": "I have just released an iOS app for beta-testing with TestFlight. I'm wondering: what is good practice during beta-testing? * Should I halt development of the app during beta-testing? * Should I continue developing but make a snapshot image of what the code looked like when it was released for beta-testing? * An issue that could arise that I can think of is that if I make any changes to the code, and a bug is then detected in the code that I have changed, this would not be ideal. There are probably other issues to deal with, but these are the ones that I could think of at the moment. I'm the only developer of the app at the moment."} {"_id": "235360", "title": "Am I covered with GPL if I want to create a derived work from a GPLv2 project and change the project name?", "text": "Recently I created a fork of a GPLv2 open source project under a different name. I made major changes to the source code and gave the modified project a new name. I kept the original author's copyright in the header of each source file. And then I published my new open source project on GitHub. Is this behavior legal according to the GPLv2 license statements? Do I have the right to change the name of the modified project? Could you please provide justification from the licenses with the answer."} {"_id": "191196", "title": "CoffeeScript and Named Functions", "text": "Elsewhere: an argument has arisen over the terminology of a named function in CoffeeScript. In particular, somebody referred to something like this foo = -> console.log(\"bar\") as a named function. But it's been objected that everything in CoffeeScript is an anonymous function and there are no named functions. This is certainly true: CoffeeScript only has function expressions, which can then be stored in a variable. But I don't think that means it is wrong to call this a named function. As I see it, it is a named function because it's a function that has been given a name. True, it's not a named function in the same way that some other languages have named functions, but I think it's close enough that it's not inappropriate to call it a named function. To insist otherwise just seems to be nitpicking. Am I out to lunch in thinking that insisting that this isn't a named function is just nitpicking?"} {"_id": "203563", "title": "Fair 2-combinations", "text": "I need to fairly assign 2 experts from `x` experts (`x` is rather small - less than 50) to each of `n` applications, so that: * each expert has the same number of applications (+-1); * each pair of experts (2-combination of `x`) has the same number of applications (+-1); It is simple to generate all 2-combinations: for (i=0; i<x; i++) for (j=i+1; j<x; j++) { ... } > Specify an explicit soft input mode to use for the window, as per > WindowManager.LayoutParams.softInputMode.
Providing anything besides > \"unspecified\" here will override the input mode the window would normally > retrieve from its theme. And here is an excerpt from the `InputMethodManager` documentation that seems to say nearly the same thing: > You can also control the preferred soft input state (open, closed, etc) for > your window using the same windowSoftInputMode attribute. > > More finer-grained control is available through the APIs here to directly > interact with the IMF and its IME -- either showing or hiding the input > area, letting the user pick an input method, etc. So what is the difference between these two options to hide the Android soft keyboard, and does one have a benefit over the other? Is one more efficient? What are specific uses for each?"} {"_id": "17729", "title": "How do early-stage startups hire ninja programmers?", "text": "I am a programmer who just started working on a startup idea. At the moment I want to bring on board at least one programmer. This programmer should be a ninja - a 10x engineer. Since the early days are probably the most risky for a startup, I want to make sure I approach this problem the best I can. How do I find these people? And how do I convince them to come on board? I would love to hear from people who started their own companies and what their thoughts are about hiring. **Update**: I would like to get the ninja as a co-founder, so besides being a ninja (i.e. a great programmer with a computer science background) he/she has to have a healthy appetite for risk (for great programmers this is not a big deal, because they can be hired anytime into mainstream jobs if the startup doesn't work)"} {"_id": "27998", "title": "How do you organize your usability testing?", "text": "* What is your process? * How do you get feedback? * What software do you use? (like Morae from TechSmith) * Who does them? * Have you measured how it positively affected the quality of your software? I'm looking for your experience on the subject. This is something I want to improve."} {"_id": "229885", "title": "sqlite trigger or application event?", "text": "I have two event queues (table-mapped queues) based on two different states of the same data, stored in two different tables. The events are generated on create/update/delete on both tables. The constraint is that both tables have to be in sync: a create/update/delete in one table has to be reflected in the other table. So the question is: should I use a trigger to queue events in the table, or the application/object layer? And why? Note: 1. An update on either is capable of generating 3 different types of events, so at the application layer extra diff logic would be required to generate the correct event. 2. The negative thing with triggers is that they will introduce duplicate events from both sides, i.e. if some event is processed on one table it will create an event for processing on the other table."} {"_id": "52732", "title": "Do you think natively compiled languages have reached their EOL?", "text": "If we look at the major programming languages in use today, it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie, we have Java and C#, which are both enterprise-ready, heavy-duty, serious programming languages which are basically compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR).
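For the SQLite question above, here is what the trigger-based variant looks like, sketched with Python's built-in `sqlite3` module; the table names and event columns are illustrative, and the duplicate-event concern would still need a source marker or guard on the mirror table's triggers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY, data TEXT);
CREATE TABLE events (id INTEGER PRIMARY KEY, src TEXT, row_id INTEGER, kind TEXT);
-- Queue an event whenever t1 changes; analogous triggers would go on t2.
CREATE TRIGGER t1_ins AFTER INSERT ON t1 BEGIN
    INSERT INTO events (src, row_id, kind) VALUES ('t1', NEW.id, 'create');
END;
CREATE TRIGGER t1_upd AFTER UPDATE ON t1 BEGIN
    INSERT INTO events (src, row_id, kind) VALUES ('t1', NEW.id, 'update');
END;
""")
conn.execute("INSERT INTO t1 (data) VALUES ('hello')")
conn.execute("UPDATE t1 SET data = 'world' WHERE id = 1")
print(conn.execute("SELECT src, row_id, kind FROM events").fetchall())
# [('t1', 1, 'create'), ('t1', 1, 'update')]
```

The application-level alternative is simply performing the same INSERT into the events table next to each write, where the three-way diff logic and duplicate suppression are easier to express than in trigger SQL.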
If we look at scripting languages, we have Perl, Python, Ruby and Lua, which are all interpreted (either from code or from bytecode - and yes, it should be noted that they are absolutely not the same). Looking at compiled languages, we have C, which is nowadays used in embedded and low-level, real-time environments, and C++, which is still alive and kicking when you want to get down to serious programming as close to the hardware as you can, but still have some nice abstractions to help you with day-to-day tasks. Basically, there is no **real** runner-up compiled language in the distance. Do you feel that languages which are _natively_ compiled to executable, binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark the end of an era? Why don't we see any new compiled languages anymore? * * * **I think I should clarify:** I do not want this to turn into a \"which language is better\" discussion, because that is not the issue at hand. The languages I gave are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less frequent these days, that is totally fine; I am more than happy to be proved mistaken."} {"_id": "60648", "title": "Is software better designed and developed by people who will use it?", "text": "I see many apps written by programmers who obviously don't use the program like the end customers do in a production environment. This results in features that don't really make sense in practice, inferior ways of doing things, half-baked functionality, bad workflow, many tools doing similar things, etc. But on the other hand, in rare cases you see the same end customers making similar apps in an effort to overcome these shortcomings, and they might not be as sophisticated code-wise as the ones written by programmers, but they work much better and feel like they're written by someone who uses the program professionally, just like the end customers. So is software better designed and developed by people who will use it, people who have intrinsic knowledge of the domain the program will serve in?"} {"_id": "60643", "title": "What advantages does TFS have over Tortoise SVN in this scenario?", "text": "This is neither a Holy War invocation, nor is it http://stackoverflow.com/questions/661389/tfs-vs-svn - this question is much more specific and would potentially make a team of developers very happy: I used an earlier version of TFS for two years, but I have not used it for years. What advantages does it have over TortoiseSVN? For example, does the merging work seamlessly or does it involve a lot of manual work; and does the shelving actually work (we could not get it working)? The platform is Windows (does TFS run on anything else?) and the intended use is version control through Visual Studio 2008 / 2010, with the scope for Continuous Integration on x86 or 64-bit build servers (depending on the product). There would only ever be one development stream per product. Projects would typically last less than two weeks (large pieces of work would be broken down into discrete chunks of this size). The maximum team size to work simultaneously on a product would be less than six developers. Check-ins on a branch would occur at any time (the only explicit rule is that it builds). Merges back into the trunk (head) occur after project completion. Running a TFS trial is likely to be costly to a business. Therefore, I have asked the question on here.
I want to hear answers from those who already know (as well as those who anticipate pitfalls). There is no point reinventing the wheel. It makes no sense to incur unnecessary research costs. To reiterate: my major concern is merging. I know TortoiseSVN works (it has a few quirks around ASP.NET .csproj files, but I can live with that), but TFS is supposed to have a great deal of features. I want the best deal for the devs."} {"_id": "63595", "title": "0x9e3779b9 golden number", "text": "I'm trying to understand the constant 0x9e3779b9. What kind of data is this? It's not binary, not decimal - what is this? It's a constant used in the TEA algorithm. It says it's derived from the golden number, but the golden number is 1.618?"} {"_id": "80576", "title": "Best practice to save data in different databases using C# ADO.NET", "text": "I would like to know if you guys have any best practices when dealing with transactions between different database vendors. For example, if you have a C# application that saves customer data in two databases (one Oracle and the other Microsoft), what would you do? Would you use OleDbTransaction? Is it possible?"} {"_id": "254074", "title": "How exactly is an Abstract Syntax Tree created?", "text": "I think I understand the goal of an AST, and I've built a couple of tree structures before, but never an AST. I'm mostly confused because the nodes are text and not numbers, so I can't think of a nice way to input a token/string as I'm parsing some code. For example, when I looked at diagrams of ASTs, the variable and its value were leaf nodes to an equals sign. This makes perfect sense to me, but how would I go about implementing this? I guess I can do it case by case, so that when I stumble upon an \"=\" I use that as a node, and add the value parsed before the \"=\" as the leaf. It just seems wrong, because I'd probably have to make cases for tons and tons of things, depending on the syntax. And then I came upon another problem: how is the tree traversed? Do I go all the way down the height, and go back up a node when I hit the bottom, and do the same for its neighbor? I've seen tons of diagrams on ASTs, but I couldn't find a fairly simple example of one in code, which would probably help."} {"_id": "152731", "title": "What is the main difference between HAS_MANY and BELONGS TO relationship in mysql?", "text": "After making a little progress in web development and PHP, I figured out that if I can get a strong grip on database design, I will be able to reduce much code and time in developing applications. But I happen to be poor at database design, so can anyone please help me differentiate between the has-many and belongs-to relationships? They seem quite the same to me"} {"_id": "80573", "title": "I've been programming in one language for many years. Is this career suicide?", "text": "I have been programming in the same object-oriented programming language for many years (Windows-based). The problem is this particular language is not very popular, and not one of the hottest ones in demand in job postings and such. Should I be worried? Would a Java employer understand that I'm a programmer and can pick up any language in a matter of a week or two, or would they be under the impression that since I haven't programmed in their specific language professionally, then I'm just not qualified? PS. As far as learning, I do play around with different technologies at home.
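Two of the questions above invite short demonstrations. On the 0x9e3779b9 constant: it is just an integer written in hexadecimal notation (2654435769 in decimal), and its connection to the golden number is that it equals 2^32 divided by the golden ratio, rounded down - a choice TEA uses to spread key material evenly across rounds. A quick check in Python:

```python
phi = (1 + 5 ** 0.5) / 2        # the golden ratio, about 1.6180339887
print(hex(int(2 ** 32 / phi)))  # 0x9e3779b9
```

And since the AST question explicitly asks for "a fairly simple example of one in code", here is a hedged sketch - one way among many, not the canonical one - of a recursive-descent parser for assignments like `x = 1 + 2`, where operators become interior nodes, names and numbers become leaves, and traversal is a plain recursive walk:

```python
import re

def tokenize(src):
    return re.findall(r"[A-Za-z_]\w*|\d+|[=+]", src)

class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def parse_assignment(tokens):
    # assignment := NAME '=' expr
    name = Node(tokens.pop(0))
    assert tokens.pop(0) == "="
    return Node("=", [name, parse_expr(tokens)])

def parse_expr(tokens):
    # expr := term ('+' term)* ; each '+' becomes a new interior node
    node = Node(tokens.pop(0))  # a name or a number is a leaf
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        node = Node("+", [node, Node(tokens.pop(0))])
    return node

def walk(node, depth=0):
    # Depth-first traversal: print the node, then recurse into its children.
    print("  " * depth + node.value)
    for child in node.children:
        walk(child, depth + 1)

walk(parse_assignment(tokenize("x = 1 + 2")))
# =
#   x
#   +
#     1
#     2
```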
But at work, I'm pretty much stuck with the same language."} {"_id": "152733", "title": "What is the term for a really BIG source code commit?", "text": "Sometimes when we check the commit history of a piece of software, we may see that there are a few commits that are really BIG - they may change 10 or 20 files with hundreds of changed source code lines (delta). I remember that there is a commonly used term for such a BIG commit, but I can't recall exactly what that term is. Can anyone help me? What is the term that programmers usually use to refer to such a BIG and giant commit? BTW, is committing a lot of changes all together a good practice? UPDATE: thank you, guys, for the inspiring discussion! But I think \"code bomb\" is the term that I'm looking for."} {"_id": "148522", "title": "Why is putting something on the stack called \"push\"?", "text": "According to http://dictionary.reference.com > push > > verb (used with object) > > 1. to press upon or against (a thing) with force in order to move it away. > > 2. to move (something) in a specified way by exerting force; shove; drive: > _to push something aside; to push the door open_. > > 3. to effect or accomplish by thrusting obstacles aside: _to push one's > way through the crowd._ > > 4. to cause to extend or project; thrust. > > 5. to press or urge to some action or course: _His mother pushed him to > get a job._ > > This IMO fits FIFO queues. Is there an explanation for this?"} {"_id": "148523", "title": "chat/game server: best way to implement - is WCF the way to go?", "text": "I'm creating a game on WP7 and it will be an online game played between a max of 4 players. The game will be a turn-based game. My question really is: what is the best way to do this, server-wise? Is WCF the way to go? The following is the information transferred to and from the server for each player: * Player chat message * Chat messages from other players * Players' points from the game * Picture sent to each player at the start of the game"} {"_id": "148521", "title": "Websites that show real-world scenarios for OOP beginners so that they can implement them", "text": "Since programmers learn more by implementing real-world scenarios rather than by gaining theoretical knowledge and concepts about programming, I wanted to know: are there any websites that present real-world scenarios for OOP learners so that they can practice their design skills and implement those scenarios using C#? Something that can help them make design decisions for a problem, e.g. in which scenario one should use abstract classes, interfaces, virtual overriding, etc."} {"_id": "63598", "title": "Improving IM communication skills", "text": "I am an email person, but found that at my new job, co-workers use IM a lot. I have to admit that I have been largely ignoring IM/SMS as a way of communicating, thinking it is only for teenagers... The IM style of communication is quite different from emailing. The sentences are shorter and there is less time for a response. So when I chat with someone, more often than not, I feel that I lag with the answer, and then I just pick up the phone - and the conversation usually lasts minutes... which plainly defeats the idea of \"instant\" communication. Are there some recommendations that I can follow to improve my IM communication skills?"} {"_id": "251000", "title": "How much data should exceptions hold?", "text": "Almost all the exceptions I have ever written have been very lightweight, containing a String message and optionally a throwable.
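A small demonstration for the "push" question above: the dictionary senses describe pressing something onto a pile, which matches a LIFO stack rather than a FIFO queue - the item most recently pushed down onto the top is the first one popped back off, like plates in a spring-loaded dispenser. In Python:

```python
stack = []
stack.append("a")   # push: force the new item onto the top of the pile
stack.append("b")
print(stack.pop())  # "b" - last in, first out
print(stack.pop())  # "a"
```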
In some situations I have included some application-specific enum or some other field. public class MySpecialException extends Exception { private MyErrorCode errorCode; public MySpecialException(String message, Throwable cause, MyErrorCode errorCode) { super(message, cause); this.errorCode = errorCode; } .... } Now I face a situation where I need to put 5 or 6 fields in the exception, because the error handler that catches them needs them to generate the output. Would you consider that to be bad code? Can an exception be too big? public class MySpecialException extends Exception { private String name; private int age; private int id; private int height; private String duck; private String whatever; .... }"} {"_id": "53284", "title": "What do you as developers think of a marketplace for PC apps?", "text": "I just wanted to get some honest opinions from developers about the prospect of a marketplace/app store for the PC. Would you build apps for it? Examples are silverlightmarket.com, allmyapps.com, etc. Cheers, Ash."} {"_id": "184695", "title": "When was source control invented?", "text": "I'm aware of many version control systems: CVS, SVN, TFS, etc. I've googled for the very first \"revision control/version control system\" and seen various conflicting answers. When was source control invented? Who invented it? What was it called?"} {"_id": "62643", "title": "Why does EL have such low visibility?", "text": "Use of EL in JSPs is the proper way to code logic according to the Sun tutorials and samples, which also recommend against the use of Java scriptlets. Yet searching job listings for EL | \"expression language\" in the SF Bay Area on Craigslist yields 0 hits (vs. 144 for jsp, 8 for jstl and 678 for java; this is searching ad text, not the titles), so it doesn't seem to be a skill of interest. Also 0 hits on Programmers on the topic, and I had to create a new tag ;-) What's good or bad about EL, and why the low interest? As a Java technology you would expect a lot more."} {"_id": "220896", "title": "COBOL & Mainframe & Business", "text": "I have searched a Hong Kong job-seeking website. There are numerous jobs titled \"Computer Analyst\". These are the requirements of the job: Over 3 years' relevant working experience in an IBM Mainframe environment. University degree in a Computer Subject. Proficient in system analysis, design and coding for banking systems. Application knowledge of Core Banking and Card applications is definitely an advantage. Sound in using IBM CICS/COBOL and VSAM for both online and batch application systems. Good communication skills with internal users. Quite a number of jobs are similar: they require candidates to have the following qualities: * Computer Subjects, especially Computer Science * 3+ years' experience * IBM Mainframe COBOL * Proficiency Then come my questions: 1. It makes it hard for new programmers to enter the industry. How should someone enter the industry, i.e. make him/herself suitable for this kind of position? 2. Do they still want new programmers in this field? 3. How should one learn to operate an IBM mainframe (or any kind of mainframe)? 4. Is knowledge of finance required or crucial?"} {"_id": "54770", "title": "Should we leave our contact details in source code?", "text": "I usually leave my email address as a courtesy in case someone wants to ask me a question about it later. Do other people leave more or less information than that?
Does anyone leave a phone number??"} {"_id": "186102", "title": "Functional reactive programming — is Fay expressive enough?", "text": "> So I'm doing a fairly involved JavaScript/HTML client with lots of AJAX > calls and other involvements of callback-ism. I'm entertaining the thought > of using Fay for this purpose. I'm aware of Elm. Tried it and liked the FRP > elements. Now I'm looking to know if a similar structure is possible in Fay. > > Are there any concrete examples of FRP in Fay out there at this point? This question was migrated from SO due to not adhering to its intended format. Since it got some upvotes there, I thought I could move it. I won't be doing FRP with my current code for now, but I'm interested for future reference. Some relevant technologies: * Arrowlets, arrowised FRP in JavaScript * Flapjax, another JavaScript alternative * Bacon.js, FRP in JavaScript One possible solution, using Bacon. With demo."} {"_id": "186107", "title": "Can you claim that your product is fit for purpose when it uses OSS software which does not guarantee it?", "text": "I am working on a product for a client that must be valid and fit for purpose. It's built on a LAMP stack (PHP/Cake), so there are GPL, MIT, PHP, and Apache licenses involved: > \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express > or implied, including, without limitation, any warranties or conditions of > TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or **FITNESS FOR A PARTICULAR > PURPOSE**. You are solely responsible for determining the appropriateness of > using or redistributing the Work and assume any risks associated with Your > exercise of permissions under this License. My rationale that my product is valid and fit for purpose: * The signed UAT doc proves validity and fitness for purpose. * The stack is so widely used by developers, industry and end users (Netcraft, Gartner, etc. stats) that there is a consensus that it IS fit for purpose. (i.e. we can disregard the fitness-for-purpose statement in the warranty disclaimer to an extent) Is this a valid point? Can I make claims that my software is fit for purpose?"} {"_id": "20416", "title": "Why is there a decline in CS majors?", "text": "Although I've read on various websites that CS majors were in decline until 2007-2008, I don't understand why--I haven't entered the job market yet. Bruce Webster has an excellent chart regarding the topic: http://brucefwebster.com/2008/03/05/the-decline-in-computer-science-students/ What are some factors that lead to this decline? Is there still a deficient number of graduates? Are more people just skipping higher education altogether to get into the industry? Does a \"Software Design\" major exist now?
So many questions."} {"_id": "222001", "title": "Can a pure-functional solution to this problem be as clean as the imperative?", "text": "I have an exercise in Python as follows: * a polynomial is given as a tuple of coefficients such that the powers are determined by the indexes, e.g.: (9,7,5) means 9 + 7*x + 5*x^2 * write a function to compute its value for a given x Since I am into functional programming lately, I wrote def evaluate1(poly, x): coeff = 0 power = 1 return reduce(lambda accu,pair : accu + pair[coeff] * x**pair[power], map(lambda x,y:(x,y), poly, range(len(poly))), 0) which I deem unreadable, so I wrote def evaluate2(poly, x): power = 0 result = 1 return reduce(lambda accu,coeff : (accu[power]+1, accu[result] + coeff * x**accu[power]), poly, (0,0) )[result] which is at least as unreadable, so I wrote def evaluate3(poly, x): return poly[0]+x*evaluate3(poly[1:],x) if len(poly)>0 else 0 which might be less efficient (edit: I was wrong!) since it uses many multiplications instead of exponentiation; in principle, I do not care about measurements here (edit: How silly of me! Measuring would have pointed out my misconception!), and it is still not as readable (arguably) as the iterative solution: def evaluate4(poly, x): result = 0 for i in range(0,len(poly)): result += poly[i] * x**i return result Is there a pure-functional solution as readable as the imperative one and close to it in efficiency? Admittedly, a representation change would help, but this was given by the exercise. It can be Haskell or Lisp as well, not just Python."} {"_id": "222002", "title": "eCommerce use case: removing a product", "text": "I am starting to design the database schema for the eCommerce element of a web service I am creating. The thing I'm trying to get my head around is how to deal with the use case of a seller (a user of my web service) deleting a product. The issue I have is that this product could be referenced in a customer's basket, to-be-paid orders, pending orders, deliveries, past orders, etc. This problem is surely one that everyone developing such a system has come across. I'm just wondering if there is a best-practice way to address it? Can it be addressed at the database schema level? Or maybe it just requires certain validation checks on database data before allowing a product to be deleted, or to prevent an error if a customer is trying to pay for an order that had a reference to a product, but that reference has been deleted (i.e. the product was deleted)."} {"_id": "175401", "title": "Naming methods that do the same thing but return different types", "text": "Let's assume that I'm extending a graphical file chooser class (`JFileChooser`). This class has methods which display the file chooser dialog and return a status signature in the form of an `int`: `APPROVE_OPTION` if the user selects a file and hits _Open_ / _Save_, `CANCEL_OPTION` if the user hits _Cancel_, and `ERROR_OPTION` if something goes wrong. These methods are called `showDialog()`. I find this cumbersome, so I decide to make another method that returns a `File` object: in the case of `APPROVE_OPTION`, it returns the file selected by the user; otherwise, it returns `null`. This is where I run into a problem: would it be okay for me to keep the `showDialog()` name, even though methods with that name — **and a different return type** — already exist? To top it off, my method takes an additional parameter: a `File` which denotes in which directory the file chooser should start.
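For the polynomial question above there is a classic pure-functional answer the post circles around: Horner's rule, which folds the coefficients from the highest power down and needs no indices, no tuples and no exponentiation. A sketch (using Python 2's built-in `reduce`, as in the post):

```python
def evaluate5(poly, x):
    # Horner's rule: 9 + 7*x + 5*x^2 == (5*x + 7)*x + 9
    return reduce(lambda acc, coeff: acc * x + coeff, reversed(poly), 0)

print(evaluate5((9, 7, 5), 2))  # 43, i.e. 9 + 7*2 + 5*4
```

It is arguably as readable as the imperative loop, and also the most efficient variant shown, doing one multiplication and one addition per coefficient.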
My question to you: Is it okay to give a method the same name as a superclass method if they return different types? Or would that be confusing to API users? (If so, what other name could I use?) Alternatively, should I keep the name and change the return type so it matches that of the other methods? public int showDialog(Component parent, String approveButtonText) // Superclass method public File showDialog(Component parent, File location) // My method"} {"_id": "95777", "title": "Generic programming - how often is it used in industry?", "text": "I do programming in an academic setting at the moment, so I can use whatever I want. I'm using the Boost Graph Library for a few things, and I'm wondering whether investing effort in understanding generic programming (GP) more deeply is worth it. I'm curious - is GP used much in industry? My guess is that most programmers are much more comfortable with OOP or are using languages that don't emphasize or support GP, so that outside of calling STL data structures/functions in C++, my impression is that GP isn't used all that frequently in practice. But, being outside industry at the moment, it'd be nice to hear from practitioners on this. (As I'm writing this, I see that generic-programming isn't even a valid tag!)"} {"_id": "95771", "title": "Best practices for testing Backbone.js apps with Jasmine?", "text": "I've recently started using Jasmine to do JavaScript unit testing. Loving it so far. One of the projects I'm working on is a plugin for Backbone.js. Since Backbone is an MVC-style framework for JavaScript, a lot of what it does is view manipulation - typically through jQuery. My plugin is no exception to this. I have several things being manipulated in HTML elements, through Backbone views. Right now, I am doing what might be some awful stuff to make this work with Jasmine. Here's an example of how I'm laying out my tests: describe(\"conventionBindings\", function(){ beforeEach(function(){ this.model = new AModel({name: \"Some Name\"}); this.view = new AView({model: this.model}); this.view.render(); }); afterEach(function(){ this.view.close(); }); describe(\"... that thing it does ... \", function(){ it(\"... stuff .... \", function(){ }); }); }); The important bits here are the beforeEach and afterEach. Notice that I'm calling my view's render method, and then my view's close method. Here's what those methods do: AView = Backbone.View.extend({ render: function(){ this.html = $(\"<input>\"); $(\"body\").append(this.html); }, close: function(){ this.html.remove(); } }); I'm specifically adding this \"close\" method to my view for the tests, because if I don't, then the Jasmine page that I'm viewing when I run the Jasmine test server would show the inputs that I've appended to the body of the page. So... here are my questions: Is this a horrible thing to be doing? Should I be testing my views and HTML element manipulations in some way other than appending elements to the body of the page? Right now I don't have a need to run these tests in any kind of CI server, but if I do, what kind of problems will I run into? How can I write better Jasmine tests, so that I can test my Backbone plugin in a CI server, knowing that the plugin has to manipulate HTML elements during the test?"} {"_id": "81977", "title": "OpenID and data espionage", "text": "This answer[link] to another question here talks about OpenID and data espionage.
I quote: > [Data espionage] Why let them gather the detailed statistics from many > consumer sites and help them build personal profiles of people? Who knows > what they'll do with it? Sell it, use it to adjust their marketing tactics, > submit it to the CIA? This has also been a concern of mine. If you use Yahoo, Yahoo now has a record of all the sites you went to (and signed into with your OpenID). I'm wondering if this issue has been addressed more thoroughly. I think we here are the best people to have this discussion, because we're unbiased developers who don't get paid by OpenID or any provider (Yahoo, Google, etc.). What do you think about this?"} {"_id": "163347", "title": "Issues with time slicing", "text": "I was trying to see the effect of time slicing, and how it can consume a significant amount of time. Actually, I was trying to divide a certain task into a number of threads and see the effect. I have a two-core processor, so two threads can run in parallel. I was trying to see: if I have a task `w` that is done by 2 threads, and I have the same task executed by `t` threads with each thread doing `w/t` of the task, how much of a role does time slicing play in it? As time slicing is a time-consuming process, I was expecting that when I do the same task using a two-thread process or a `t`-thread process, the amount of time taken by the `t`-thread process would be more. Any suggestions?"} {"_id": "120967", "title": "Does waterfall require code complete before QA steps in?", "text": "The process used at a certain company consists of: > 1. Create a layout according to some designs made in a web page design > tool. (CSS, HTML) > 2. Requirements come in with \"functional requirements\". These consist of > 100's of lines of business directions. E.g. Create a Table on page X. > Column1 has numeric data. Column1 is the client code. Column2 is a > string...etc. > > 3. Write code to meet all functional requirements. > > 4. When all code is checked in, send to QA (which is the BA that wrote the > requirements) for inspection, bug finds and change requests. > 5. Punt back to the developer with a list of X bugs and Y change requests. > 6. While bug finds or change requests > 0, go to step 4. > The agile development environments I have worked in allow, if not demand, early QA inspection and early user acceptance. So, pieces of the program can be refined and redefined before the entire application is in place. Not only that, but the process leaves little room for error or people changing their minds. Instead, those \"change requests\" come in at the last stage, when they do the most damage. And given that a bug fix's cost increases over time, this is a costly way to write code. I am no waterfall expert. As described, is this waterfall being mishandled in some way? How does waterfall address my concerns?"} {"_id": "210957", "title": "Modern web application development! Are Flash and Silverlight still relevant?", "text": "When building an application, one considers the appropriate technology of choice for the best long-term impact and scalability. If building a media streaming application for mobile and desktop, should one still use Flash or Silverlight, or HTML5 and JavaScript? How does one select the right technology?"} {"_id": "195020", "title": "Is it a bad design to specify default bindings when using Inversion of Control (IOC) containers and dependency injection (DI)?", "text": "I'm using Ninject, but this is not a Ninject-specific question.
I'm wondering if the advanced and flexible capabilities of the IoC container are giving me enough rope to hang myself with a bad application design. I've got a service interface: public interface IBuilderService { IBuilder Create(string category, string clientID); } and I lean on the Ninject Factory Extension's `ToFactory()` to create what I'd guess you'd call a proxy implementation at runtime. I have `IBuilder` bindings like this: kernel.Bind<IBuilder>().To<NullObjectBuilder>(); // for unbound categories kernel.Bind<IBuilder>().To<CatOneBuilder_ClientA>(); // (illustrative names) kernel.Bind<IBuilder>().To<CatOneBuilder_ClientB>(); kernel.Bind<IBuilder>().To<CatTwoBuilder_ClientA>(); kernel.Bind<IBuilder>().To<CatTwoBuilder_ClientB>(); kernel.Bind<IBuilder>().To<CatOneBuilder_DefaultClient>(); // for unbound clients // etc... I've been experimenting with custom instance providers and binding generators with Ninject (I'm guessing other IoC containers may have similar constructs for overriding basic binding behavior), but I have **two distinct binding scenario goals**: 1. if a **category** binding cannot be resolved from the `IBuilderService.Create` `category` argument, then a `NullObjectBuilder` should be resolved - this is the _default_ binding in this case. 2. if a **clientID** binding cannot be resolved, then a default client implementation like `CatOneBuilder_DefaultClient` for a `category` of `CatOne` should be resolved - this is the _default_ binding in this case. This is a situation where unknown category requests **are expected**, so it seems a shame to use expensive exception handling for something I know will happen very often. I want there to be fallback behavior to \"_handle the unhandled_\" in a very plain way - in this case, it would be the `NullObjectBuilder` for unknown categories, and `*_DefaultClient` for _unknown clients_ of _known categories_ - two distinct fallback behaviors for two different _types_ of unknown behavior. I'm having trouble keeping the two fallback behaviors mapped to their respective scenarios, and it's given me pause to think about the friction I'm feeling in fleshing out this design... Is it a flawed design to have _default_ bindings, and the logic used to determine when one is needed, contained in IoC custom implementations, rather than an exception being thrown? Is this a misuse of IoC containers?"} {"_id": "163343", "title": "How should I incorporate a hotfix back into a feature branch using gitflow?", "text": "I've started using gitflow for a project, and I have an outstanding feature branch as well as a newly created hotfix. Per the gitflow workflow, the hotfix gets applied to both the _master_ and _develop_ branches, but nothing is said or done about extant feature branches. Nevertheless, I'd like to incorporate the hotfix changes back into my feature branch, which as near as I can tell leaves three options: 1. Don't incorporate the changes. If the changes were needed for the feature branch, they should've been part of the feature branch. 2. Merge _develop_ back into the feature branch. This seems to follow the gitflow workflow the best, but would cause out-of-order commits. 3. Rebase the feature branch onto _develop_. This would preserve commit order, but rebasing seems to be completely absent from the general gitflow workflow. What's the best practice here?"} {"_id": "198019", "title": "Single codebase for client and server with Node.js", "text": "There are a few claimed benefits to Node.js that I typically hear. Some (many?) I agree with. There is one that I completely do not understand, which is the one-language argument: \"You can now use one language on both the client side and the server side.\" This does not make sense to me on many levels: 1.
most people already know more than one language, and learning a second is not that big of a deal anyway. 2. JavaScript is not that great of a language; if people had any choice they'd probably choose something else on the client side, but they're locked in. 3. the beauty of the server side is that you can choose ANY language (so it makes sense to choose the best, most capable language, with the least baggage). The only reasonable argument I've heard is \"if I'm using Backbone I can re-use my models\". Since I haven't used Backbone myself, I'm not sure how much that actually amounts to, however. Can someone shed some light?"} {"_id": "163348", "title": "Functional programming readability", "text": "I'm curious about this because I recall that before learning any functional languages, I thought them all horribly, awfully, terribly unreadable. Now that I know Haskell and F#, I find it takes a little longer to read less code, but that little code does far more than an equivalent amount would in an imperative language, so it feels like a net gain, and I'm not extremely practiced in functional programming. Here's my question: I constantly hear from OOP folks that the functional style is terribly unreadable. I'm curious whether this is the case and I'm deluding myself, or whether, if they took the time to learn a functional language, the whole style would no longer seem more unreadable than OOP? Has anybody seen any evidence or got any anecdotes where they saw this go one way or another frequently enough to possibly say? If writing functionally really is of lower readability then I don't want to keep using it, but I really don't know if that's the case or not."} {"_id": "237610", "title": "How do I manage a JavaScript library with TFS?", "text": "I know that I can share files between Visual Studio projects using linked files, and assemblies using project references. Is there a good approach for JavaScript files? I'd rather not use linked files, since they don't allow projects to have their own version of the script."} {"_id": "157816", "title": "How does it matter if a character is 8-bit, 16-bit or 32-bit?", "text": "Well, I am reading Programming Windows with MFC, and I came across Unicode and ASCII characters. I understood the point of using Unicode over ASCII, but what I do not get is how and why it is important to use an 8-bit/16-bit/32-bit character. What good does it do to the system? How does the processing of the operating system differ for characters of different bit widths? My question here is: what does it mean for a character to be an x-bit character?"} {"_id": "224941", "title": "Anemic domain models - what sort of methods might a domain object need?", "text": "This question might seem strange, but it's something I've faced a few times. I've been trying to adopt DDD; however, I'm always facing the problem of anemic domain models. The problem is that when I start to think about what the behaviors of the domain objects should be, nothing comes to mind. I've then started to search the internet for what should be put inside the behavior of domain objects, and many people say that they are rules and validations. The problem is that I've faced the situation many times where the customer who wants the system doesn't want any validation. Once I asked a customer: \"what are the required properties for this object?\" and he said \"no, I don't want anything like that, sometimes I want to just fill in one property and leave the others because I'm short on time\".
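A concrete illustration for the 8/16/32-bit character question above: the bit width only describes how a character's code point is stored, which shows up directly when the same characters are encoded in different ways (Python 3 shown):

```python
for ch in ("A", "é", "€"):
    sizes = {enc: len(ch.encode(enc)) for enc in ("utf-8", "utf-16-le", "utf-32-le")}
    print(ch, hex(ord(ch)), sizes)
# A 0x41 {'utf-8': 1, 'utf-16-le': 2, 'utf-32-le': 4}
# é 0xe9 {'utf-8': 2, 'utf-16-le': 2, 'utf-32-le': 4}
# € 0x20ac {'utf-8': 3, 'utf-16-le': 2, 'utf-32-le': 4}
```

The same abstract character occupies different numbers of bytes depending on the encoding; an OS API just needs to agree with you on which one a string parameter uses (Windows' wide-character \"W\" functions, for example, expect 16-bit UTF-16 units).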
Then I asked: \"and what should be the format of the data for this property?\" and he said \"I don't want anything checked, I want to be free to put it as I like\". So the customer _didn't want_ validation at all, so I couldn't implement it. In that case, there are no validations, there are no rules in place. Then I looked at the controllers (it was built with MVC) to see if there was any logic specific to the domain entities, and there wasn't any. Just logic to read and write data. So in cases like that, is it a problem to have anemic domain models? Also, I'm working with .NET. In .NET we can put validation in with Data Annotations, and we can write getters like properties. So methods to do validation when setting data aren't required, because the Data Annotations do the job; also methods to calculate data aren't required, because properties with just a special get can do the job. So what do the behaviors become, after all?"} {"_id": "224946", "title": "What is the difference between Avalanche and WaterScrumFall?", "text": "Recently I learnt about a new methodology based on waterfall and scrum: Water-Scrum-Fall. Some say that it is the new \"reality\" and is what scrum pragmatists use nowadays.... But, from the definition of \"Avalanche\": > The Avalanche model is a Software Engineering project management anti-pattern, it is a combination of a sequential process such as the Waterfall > model and Agile software development methodologies [...] What do you think about these two models? Aren't those the same? * * * More info: http://www.infoq.com/news/2011/12/water-scrum-fall-is-the-norm http://www.slideshare.net/harsoft/water-scrumfall-isrealityofagileformost"} {"_id": "20652", "title": "Need clarification concerning Windows Azure", "text": "I basically need some confirmation and clarification concerning Windows Azure with respect to a Silverlight application using RIA Services. In a normal Silverlight app that uses RIA services you have 2 projects: * App * App.Web ... where App is the default client-side Silverlight and App.Web is the server-side code where your RIA services go. If you create a Windows Azure app and add a WCF Web Services Role, you get: * App (Azure project) * App.Services (WCF Services project) In App.Services, you add your RIA DomainService(s). You would then add another project to this solution that would be the client-side Silverlight that accesses the RIA Services in the App.Services project. You then can add the entity model to App.Services or another project that is referenced by App.Services (if that division is required for unit testing etc.) and connect that entity model to either a SQL Server db or a SQL Azure instance. Is this correct? If not, what is the general 'layout' for building an application with the following tiers: * UI (Silverlight 4) * Services (RIA Services) * Entity/Domain (EF 4) * Data (SQL Server)"} {"_id": "19987", "title": "Is knowing .NET only enough for a successful career in IT industry?", "text": "> **Possible Duplicate:** > Is it better to specialize in a single field I like, or expand into other > fields to broaden my horizons? Recently, I don\u2019t know from where, I got a thought in my mind: \u201cis knowing the .NET development environment enough for a successful career in the IT industry?\u201d Should I be learning more languages too, or will .NET suffice me for the next 10-15 years?
By a successful career I mean earning a decent living and having good growth opportunities."} {"_id": "37317", "title": "Is it okay to be a generalist?", "text": "> **Possible Duplicate:** > Is it better to specialize in a single field I like, or expand into other > fields to broaden my horizons? I work at a ~50 employee company (UK), where all the technical people do a bit of everything. Specialising in anything for very long (6 months) is discouraged. For example, last week, I built a new Debian webserver, refactored some Perl, sat on a sales phone call, did a tape backup, reviewed code, built and deployed an RPM, gave opinions about x, y, z... With such a work scheme, I have gained a general knowledge of how many things work, and some pretty specific knowledge. I maybe program for 5 hours a week, despite officially being a developer. Does anyone else work like this (or is this company unique)? Is it a problem to have skills developed in this way? (i.e. to know a bit about everything in a certain domain, rather than know everything about, say, one programming language?) Is it okay to be a generalist?"} {"_id": "126359", "title": "An aspiring programmer's proverbial fork (asp.net or ...)", "text": "> **Possible Duplicate:** > Is it better to specialize in a single field I like, or expand into other > fields to broaden my horizons? > Do Diversified Skills Foster or Hinder Specialization? Afternoon. I am seeking some career guidance, as the road seems to be splitting, winding and veering off in many different directions. It is my desire to be a professional software engineer, focusing (more so) on web applications. My background in web design led me to PHP, MySQL and minimal JavaScript (as it has so many others). From here I'm starting to learn Visual Studio, C#, MS SQL and many tools/scripting languages that I feel would make me a better programmer and an asset to a web team (regex, HTML5, MSSQL). The more I delve into Visual Studio, ASP.NET aspx syntax (<% %>) and now the upcoming Razor syntax, the more I get this feeling of being cornered, isolated from those who develop in areas that I also find interesting. I get it, it's just the nature of life that when you become specialised in any area of expertise, you have to focus somewhere or you'll be a jack of all trades. But at the same time I would love to \"reach over\" and be able to dabble in Objective-C or Java, so I can design web applications that are intended for cross-platform use. It is not my intention to come off whining; I'm just looking to be told \"it'll be ok, push hard in any of these directions and you'll be able to do anything you want\", where my next 4 years of effort in becoming an asset in C# and ASP.NET will not damn me to a life of servitude to Microsoft-technology-related jobs. My visit to Google's HQ back in April really inspired me and opened my eyes a bit. But from what I hear, their technological push does not utilize any C# or .NET for that matter. So I guess a summed-up question would be: do professional software engineers have the freedom (and time, for that matter) to learn, practice and become an asset in other technological sectors? Does one who masters Visual Studio also have a mastery of VIM or EMACS?"} {"_id": "81160", "title": "Does being a jack-of-all-trades hurt your career?", "text": "> **Possible Duplicate:** > Is it better to specialize in a single field I like, or expand into other > fields to broaden my horizons?
Looking at job postings, the vast majority ask for extremely specific qualifications and requirements and a great deal of experience in very narrow areas. Having written a job description or two myself, I know that the \"requirements\" are more of a wish-list than anything, and actual job duties can vary wildly. That said, if I look at these job postings as a potential applicant, I don't think I'd even be considered for most of them - mainly because I don't really have a specialty or area of strong focus. I can do just about anything marginally well (programming, database, networking, you name it), and have had experience with a wide range of different technologies (at varying levels of expertise), but I've never had a job that required or otherwise had a need to drill deep down into something, spend a lot of time on it, and \"master\" it. Does being a jack-of-all-trades really hurt your career (long term)? Does that mean that you will never be able to move up or advance beyond something like Tier 1/2 tech support or general IT lackey? In the same vein, is it possible to make a career out of being versatile with technology, having a wide range of (albeit shallow) experience, but without specializing in a particular technology?"} {"_id": "176515", "title": "How to name a subclass that adds a minor, detailed thing?", "text": "What is the most concise (yet descriptive) way of naming a subclass that only adds a specific minor thing to the parent? I encountered this case a lot in WPF, where sometimes I have to add a small piece of functionality to an out-of-the-box control for specific cases. Example: TreeView doesn't change the SelectedItem on right-click, but I have to make one that does in my application. Some possible names are * `TreeViewThatChangesSelectedItemOnRightClick` (way too wordy and maybe difficult to read because there are so many words concatenated together) * `TreeView_SelectedItemChangesOnRightClick` (slightly more readable, but still too wordy, and the underscore also breaks the normal convention for class names) * `TreeViewThatChangesSIOnRC` (non-obvious acronym), * `ExtendedTreeView` (more concise, but doesn't describe what it is doing. Besides, I already found a class with this name in the library, which I don't want to use/modify in my application). * `LouisTreeView`, `MyTreeView`, etc. (doesn't describe what it is doing). It seems that I can't find a name which sounds right. What do you do in situations like this?"} {"_id": "234657", "title": "Why is \"Select * from table\" considered bad practice", "text": "Yesterday I was discussing with a \"hobby\" programmer (I myself am a professional programmer). We came across some of his work, and he said he always queries all columns in his database (even on/in production server/code). I tried to convince him not to do so, but haven't been successful yet. In my opinion a programmer should only query what is actually needed, for the sake of \"prettiness\", efficiency and traffic. Am I mistaken in my view?"} {"_id": "37954", "title": "How do you determine which skills are marketable?", "text": "While this question might be relevant to other fields, I'm interested specifically in searching for jobs as a developer. Namely, when you are searching for and applying for a new job you might look for listings that are for C# or Java developers; however, most jobs aren't just writing one language, so how do you determine which skills are worth highlighting on a résumé, or more likely on a targeted cover letter?
Likewise, are there some skills that aren't worth mentioning because it is implied that most developers should know them (e.g. XML), or is it better to list everything?"} {"_id": "176518", "title": "Extracting color profile information from JPEG files", "text": "I'm trying to look up info about reading a JPEG's color profile info, and to my surprise there's very little open, specific how-to information in that regard, but rather lots of general explanation of what it is. **Does anyone know how to find and read the color profile information from a JPEG?**"} {"_id": "143197", "title": "Do any database \"styles\" use discrete files for their tables?", "text": "I've been talking to some people at work who believe some versions of a database store their data in discrete files, one per table. That is to say you might open up a folder and see one file for each table in the database, then several other supporting files. They do not have a lot of experience with databases, but I have only been working with them for a little over half a year, so I am not a canonical source of info either. I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But, other people were of the impression that one of the benefits of using SQL Server over Access was that all the data was not consolidated down into one file. Yet, SQL Server packs everything into a single .mdf file (plus the log file). My question is, is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other? **edit** Thank you everyone for your answers. This has really helped clarify the whole situation."} {"_id": "143194", "title": "What advantages are conferred by using server-side page rendering?", "text": "I am developing a web app; I have currently written the entire website in HTML/JS/CSS, and on the backend I have servlets that host some RESTful services. All the presentation logic is done by getting JSON objects and modifying the view through JavaScript. The application is essentially a search engine, but it will have user accounts with different roles. I've been researching some frameworks such as Play and Spring. I'm fairly new to web development, so I was wondering what advantages using server-side page rendering would provide. Is it: Speed? Easier development and workflow? Access to existing libraries? More? All of the above?"} {"_id": "146073", "title": "How does Google crawl frequently updated webpages?", "text": "I'm trying to build a very small, niche search engine, using Nutch to crawl specific sites. Some of the sites are news/blog sites. If I crawl, say, techcrunch.com, and store and index their frontpage, then within hours my index for that page will be out of date. Does a large search engine such as Google have an algorithm to re-crawl frequently updated pages very frequently, hourly even? Or does it just score frequently updated pages very low so they don't get returned? Also, how can I handle this in my own index?"} {"_id": "72632", "title": "Are There Any Flaws With This Git Branching Model?", "text": "I am asking about this git branching model or workflow. I really like this. It strikes me as very intuitive and productive, but what I am asking is whether there are any flaws or negatives to this approach that are not yet clear to me (coming from another world where ClearCase ruled the day). (You don't need to answer every question; whatever you can is helpful.) 1.
Do you use this or a similar git branching workflow? 2. Do you consider this a productive approach? 3. Do you see any flaws with this approach? Any potential downsides? 4. If you have a better approach, would you mind sharing, or providing a link to an article or discussion about it?"} {"_id": "109281", "title": "How important is multithreading in the current software industry?", "text": "I have close to 3 years' experience writing web applications in Java using MVC frameworks (like Struts). I have never written multithreaded code till now, though I have written code for major retail chains. I get a few questions on multithreading during interviews and I usually answer them (mostly simple questions). This left me wondering: how important is multithreading in the current industry scenario?"} {"_id": "67693", "title": "Should I represent the Database in my use cases?", "text": "I am creating use cases for my web application and I was wondering if a representation of the DB should be listed as an actor. For example a user can check his profile and edit it (assuming that he is logged in). The two use cases would be: - User can view his profile - User can edit his profile Would the use cases then be, for example: Actor: User, DB Use case: View profile Or can I leave out the DB as an actor? Unfortunately I haven't found any consistent way of drawing the use cases."} {"_id": "115958", "title": "Is it bad practice for services to share a database in SOA?", "text": "I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA and watching various videos and podcasts by Udi Dahan et al. on CQRS and Event Driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database, but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you should segregate a database on, say, an SOA service, a DDD bounded context, an application, etc.?"} {"_id": "191293", "title": "Using a database for each module in a system", "text": "I was reading this question: > I was trying to standardize and modularize some functions (Email Management > Module, CMS Module & etc) by implementing a 3-tier architecture concept > where each module would have its own independent module database. So that in > the future all we'd need to do is just code a presentation layer, reuse the > BLL layer, DAL Layer and database. My follow-up question is whether it is a good idea to place all the database tables from each module into the same database, or whether they should be separated into entirely separate databases. I am using PostgreSQL. My worries are: 1. Problems with running data analysis if data is in many different databases 2. Problems pinning down database performance issues if we use the same database for all modules 3.
Inability to join tables across modules if at some time in the future we discover that our modularization is flawed for creating a certain feature"} {"_id": "215260", "title": "Tips for Tail Call Recursion in Python", "text": "Ok, Python doesn't have tail call optimization. But for those who think better recursively than \"loopily\", what are the best practices for writing code? 1000 stack frames are enough for many cases, but what are the tips for combining recursion with efficiency in Python?"} {"_id": "92461", "title": "Is this a valid smartphone CPU vs. desktop CPU speed comparison (Android G1 vs. old Pentium 4 desktop)?", "text": "I am trying to estimate speed differences when creating code on my desktop PC that will be ported to Android phones. I don't need to be exact, but a good estimation will help stop me from creating code that is dismally slow on an Android phone. I want to support down to the Android G1, so I am using it as my \"baseline\". Here is how I am currently performing my calculations using Dhrystone MIPS, with an old Pentium 4 for comparison that will be the test unit for quick speed tests. According to this document, a G1 using a Qualcomm MSM72xx ARM CPU is about 1 MIPS per MHz: http://www.techautos.com/2010/03/14/smartphone-processor-guide/ Web searches turned up user comments indicating that the G1's CPU comes stock running at around 350 MHz and not at the 523 MHz shown in the chip's specs, so I am assigning a rating of 350 MIPS to the G1, rightly or wrongly. This Wikipedia page shows the Pentium 4 Extreme Edition rated at about 9700 MIPS: http://en.wikipedia.org/wiki/Instructions_per_second This makes the Pentium 4 approximately 27 times faster than the G1. Given that multiplier, if during one of the time-consuming operations my code takes 1 second on the Pentium 4, I would estimate that it would take 27 seconds on a G1. Is my logic correct? I am hoping it is not, because that means I'll have to do some really painful optimizations to the code to make things livable on the G1. If my logic is not correct and there is a better algorithm for this calculation, please let me know. -- roschler"} {"_id": "67694", "title": "What encryption method should I use?", "text": "I am looking for some information on encryption. Here's what I'm trying to do: * Get unique information from our customer (an ID or something) * Generate and encrypt some data on our side (using the client's ID) * Send this data to the customer, and allow an application to decrypt (and decrypt only) this data using the ID sent before What methods should I be looking at? Kind regards To clarify: * We encrypt some data using our private key and our client's public key * The client decrypts this data using their public key * The client must not be able to encrypt valid data using their public key"} {"_id": "146076", "title": "Steps for moving towards continuous delivery?", "text": "Say you have a traditional dev > test > production process, say on a monthly release cycle. What are some of the steps you need to take and put into place to move towards a model of continuous delivery with frequent releases per day? Please make one suggestion per response. I would be interested in using voting to indicate which steps the community feels are most important."} {"_id": "114862", "title": "At what point does a company need a system engineer?", "text": "I'm 8 months into my first job as a developer at a mid-small company.
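On the encryption question above: decrypting with a public key, as the clarification describes, is not how standard asymmetric cryptography works; the conventional pattern it seems to be reaching for is to sign with the sender's private key and encrypt for the recipient's public key. A hedged C# sketch of that conventional pattern follows - key handling is simplified, and real systems encrypt a random symmetric key rather than the payload itself.

using System;
using System.Security.Cryptography;
using System.Text;

class SignThenEncryptSketch
{
    static void Main()
    {
        using var sender = RSA.Create(2048);    // "our" key pair
        using var recipient = RSA.Create(2048); // the client's key pair

        byte[] payload = Encoding.UTF8.GetBytes("data derived from the client ID");

        // Sign with our private key so the client can verify who produced it.
        byte[] signature = sender.SignData(payload, HashAlgorithmName.SHA256,
                                           RSASignaturePadding.Pkcs1);

        // Encrypt with the client's public key so only the client can read it.
        // (RSA can only encrypt small payloads; real code wraps an AES key.)
        byte[] cipher = recipient.Encrypt(payload, RSAEncryptionPadding.OaepSHA256);

        // Client side: decrypt with their private key, verify with our public key.
        byte[] plain = recipient.Decrypt(cipher, RSAEncryptionPadding.OaepSHA256);
        bool authentic = sender.VerifyData(plain, signature, HashAlgorithmName.SHA256,
                                           RSASignaturePadding.Pkcs1);
        Console.WriteLine(authentic); // True
    }
}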
The four development teams have about 7 developers each, the design team consists of about the same number, and the administration/sales/marketing/HR team is 4 people. We mostly develop web apps, a one-time deal, to run on the client's (usually existing) environment. I find myself setting up development environments to match, usually things I've never used before (from C#.NET 1.1 and MS-SQL to a regionally developed WAS called JEUS), which takes a significant amount of time. Sometimes I get help from other developers, but mostly I follow online tutorials until it seems to work, and then I spend more time fixing my code when it breaks because the settings aren't exactly the same as the actual environment. I'm starting to think that one guy who specializes in this stuff would make it much easier for the developers to actually do what they're paid to do. When does it make sense for a company to get a dedicated systems engineer? Or am I wrong, and should I just suck it up and learn to do it? I do realize that being familiar with different environments would improve my employability..."} {"_id": "114865", "title": "Redundant code ok?", "text": "Using C#/.Net, I have a page where the user can enter data and save this data by clicking a button. To save the data, the user needs to enter a valid date, meaning a date in a certain format. The page contains, amongst others, two modules. The ValidateDate event has the responsibility to check the format of the date field. So it checks the format and displays a message if the date is not well formed. The Save event has to save user entries. To successfully complete the task, a well-formed date is needed. So I designed the module to check the date format. This has the advantage that the module can never crash and is not dependent on the Validation module. But I realized that in fact I do the same thing twice, in two modules. I am just curious, how do you handle this matter? What do you think is the best way to do it? **1) ValidateDate Responsibility: Check format of date field (and inform user) Sub-Task: Check format of date field 2) Save Responsibility: Save data Sub-Task: Check format of date field**"} {"_id": "146075", "title": "If I drop cookies with JavaScript will it still be compliant with the EU ICO Cookie Law?", "text": "The challenge proposed to me was to create a widget, to apply to other sites, that makes a website compliant with the cookie law [1]. Can I do this without changing server code? I mean, if there's server-side code that writes an affiliate cookie to the response and my JavaScript widget deletes it afterwards on the window.load event, will the site still be cookie-law compliant? Then come the Google Analytics and share-button cookies. How would I stop those scripts and iframes from being executed in JavaScript? [1] The Information Commissioner's Office (ICO): New ICO Cookie Law"} {"_id": "136602", "title": "Should I take on this very large project idea?", "text": "Over the last month I have had an idea for a very large project to take on as a software hobby/potential business project - simply because I saw some \"vision\" of a great tool if it were done. The idea behind the project is really a common one, and there is this one very big competitor (the size of a corporation, with dozens if not hundreds of programmers working on a product that might contain my \"visionary\" features in the future) - their product gets polished every year with their new edition.
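A sketch for the duplicated date-check question above: pull the format rule into one shared helper that both the validation event and the save event call, so the check exists exactly once. The format string and member names here are hypothetical.

using System;
using System.Globalization;

static class DateRules
{
    // Single source of truth for the expected date format.
    public static bool TryParse(string input, out DateTime value) =>
        DateTime.TryParseExact(input, "yyyy-MM-dd", CultureInfo.InvariantCulture,
                               DateTimeStyles.None, out value);
}

class EntryPage
{
    void OnValidateDate(string input)
    {
        if (!DateRules.TryParse(input, out _))
            ShowMessage("Please enter the date as yyyy-MM-dd.");
    }

    void OnSave(string input)
    {
        if (!DateRules.TryParse(input, out var date))
            return; // same rule reused, not re-implemented
        Persist(date);
    }

    void ShowMessage(string text) { /* UI feedback */ }
    void Persist(DateTime date) { /* write to the store */ }
}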
A few days ago I saw their latest edition, and it literally \"shut my wind down\": their latest edition contained about 50% of the features I dreamed about. And they implemented them in a good way. Also, those \"features\" that they have implemented are part of a very large product of theirs, so they benefit from integrating those features into their big product. If I were to code my own product, those \"features\" would simply be a stand-alone product without any other _nice and efficient tools_. So my doubts are these: if I were to begin coding this myself, it would take perhaps a few years. Should I really take this project idea and invest my time in it, while some big corporation might suddenly have those same features implemented, let's say, one year from now? Should I really _think_ before entering a whole new project about my chances of success, or, simply, should I _dive and kamikaze_ on it? What do you think? P.S. If you can, please recommend information resources about starting up projects and eventually going commercial. Thank you very much."} {"_id": "142571", "title": "Must developers understand the business domain or should the specification be sufficient?", "text": "I work for a company whose domain is really difficult to understand because it is high technology in electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification to describe what the software must do, such as specifying that a particular chart must display this kind of metric, and that this metric is given by the following arithmetic formula. This way, the developer doesn't really understand the business or what/why he is doing in this task. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can be very long and difficult. Should we give more importance to detailed specifications (but as we know, a perfect specification does not exist), or should we train all the developers to understand the business domain? **EDIT: keep in mind in your answer that the company could use external developers, and that training in the whole domain can take about 2 weeks**"} {"_id": "136605", "title": "Designing access to file-based \"database\"", "text": "It happens frequently that I have to provide access to a bunch of files organized in a directory tree according to some (sometimes loosely specified) rules. My standard pattern is to provide a Database class which is initialized with the root directory. This class provides getX()-like (example: getStructure()) methods to extract data from the database. These methods normally return semantically meaningful objects (Structure) with proper methods returning data (e.g. structure.getPoints()). I am not completely happy with this design, for two main reasons. The first problem is that the mapping between in-application objects and files may not be 1:1; that is, to create the Structure object I may have to open different files in the \"database\", and the mapping may not be perfect. In this case, I call the Structure a \"Thick object\".
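A sketch of the getX()-style Database facade just described, with the file reading delegated to a stateless parser filling a dumb container - one of the parsing strategies the question goes on to weigh. Shown in C# rather than Python for consistency with the other examples here; the file layout and names are invented for illustration.

using System.IO;

// Dumb container: holds data, knows nothing about files.
public record Points(double[][] Coordinates);

// Stateless parser: one per file format, trivially testable in isolation.
public static class PointsParser
{
    // Hypothetical format: one "x y" pair per line.
    public static Points Parse(string path)
    {
        var lines = File.ReadAllLines(path);
        var coords = new double[lines.Length][];
        for (int i = 0; i < lines.Length; i++)
        {
            var parts = lines[i].Split(' ');
            coords[i] = new[] { double.Parse(parts[0]), double.Parse(parts[1]) };
        }
        return new Points(coords);
    }
}

// The Database facade owns the file-to-object mapping; a "thick" object
// would be assembled here from several parsed files.
public class Database
{
    private readonly string _root;
    public Database(string root) { _root = root; }
    public Points GetPoints() => PointsParser.Parse(Path.Combine(_root, "points.txt"));
}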
An alternative (\"thin objects\") is to stick to objects that _are_ fully represented in a file, even if poor in high-level domain meaning (that is, if I have two files to define a structure, one file containing points, and the other containing connections between these points, I just provide a Points object and a Connections object, and let someone else \"connect the dots\" outside Database). The other problem I have is the following: who should perform the actual file parsing? I envision two strategies: either the objects are able to deserialize themselves from a file (\"pickle-like\" in Python parlance), or they are just dumb data containers filled by stateless parser objects (one per file), or even by the database object itself. When you do ORM, there are clear, well-defined rules on how the object is represented in the database, and the process is extremely well defined in terms of interface and behavior. This is not necessarily the case with an arbitrary bunch of files some people call a \"database\". I would really enjoy your thoughts in this regard. How should I perform proper deserialization in this case of \"raw mapping\"? Thin or thick objects? Smart or dumb? Note that I don't really have any control at all when it comes to the files I have to access. I get the files from external sources, and I have to convert them into some kind of domain objects, generally only for reading, but sometimes also for writing. Read-only is by far the most frequent case though."} {"_id": "236057", "title": "How do remote programmers working on proprietary software go about securing their work/source code?", "text": "I was just wondering what kind of protocols a software engineer would go through if they were working at home for a company developing proprietary closed-source software. Would you use a third-party anti-virus, or would you fear they could steal your code? Can you trust web hosting services like GitHub and Bitbucket? Would you keep your source code on an encrypted partition? Would you make any configurations to ensure Windows could not steal your source code through some kind of built-in feature? What other kinds of security concerns would you have, and how would you deal with them?"} {"_id": "188432", "title": "Algorithm for matching similar content text items", "text": "I am working on a website (C#, ASP.Net MVC 3) which reads some RSS feeds from multiple sources and puts the feed title and summary in a database table (SQL Server). What I want to do is: put an algorithm in place which can relate multiple feeds. For example, if each feed is a news item, I would like to relate all news items which say, in different English wording, \"Someone has won some election\". Is there any standard algorithm for such content-matching logic? If not, what kind of custom algorithm should be used? If this logic can be written on the database side (e.g. a stored procedure), that would be better."} {"_id": "188433", "title": "Novel polymorphism - any reasons for this code?", "text": "As part of my work on a legacy C# application I've come across a novel (to me) use of an interface & concrete implementations. I can't think of any reason why you'd do the following, but I'm pretty thick, so maybe someone else can?
public interface IContract { ContractImplementation1 Contract { get; } bool IsCollection { get; } bool Touched { get; set; } } public class ContractImplementation1 : IContract { public ContractImplementation1(string propertyOne, string propertyTwo, string propertyThree, string propertyFour) { PropertyOne = propertyOne; PropertyTwo = propertyTwo; PropertyThree = propertyThree; PropertyFour = propertyFour; } public ContractImplementation1 Contract { get { return this; } } public bool IsCollection { get { return false; } } public bool Touched { get; set; } public string PropertyOne { get; private set; } public string PropertyTwo { get; private set; } public string PropertyThree { get; private set; } public string PropertyFour { get; private set; } public override string ToString() { if (string.IsNullOrEmpty(PropertyFour)) return string.Format(\"{0} => {1}: {2}\", PropertyOne, PropertyTwo, PropertyThree); else return string.Format(\"{0} => {1}: {2} {3}\", PropertyOne, PropertyTwo, PropertyThree, PropertyFour); } } public class ContractImplementation2 : IContract { public ContractImplementation1 Contract { get { return null; } } public bool IsCollection { get { return true; } } public bool Touched { get; set; } public List<IContract> Contracts = new List<IContract>(); } I can't get my head around the super-type having a property that is a sub-type of itself. Following Cosmin's answer: I can't get my head around why you'd have the sub-type as a property, given that the property returns itself on the implementation (rather than a 'parent' of the same type, i.e. a different instantiation of the super-type)."} {"_id": "111301", "title": "Challenges to the Agile approach on government projects", "text": "A previous Agile discussion here had good answers specifying what is **critical** to the success of implementing the Agile methodology in software development. Most of the points were the typical organizational and management challenges, but one point worries me, and it is that the client must be involved throughout the process. Realistically, the client is the one thing that you cannot control; perhaps your business model gears you towards government-contracted work, for instance, where an intensely strict contract obligates the company to: * Provide X features exactly as requested * Feature requests will be thrown over a wall; don't bother us, we don't want to hear it. * There is no concept of feature priority in the customer's mind; _they are all important or we wouldn't have asked for them._ * The project will cost no more and no less than Y regardless of overruns or deadlines. * Absolute, strict, final and non-negotiable deadline for complete delivery of all work. We have never worked with such a client before, but the money on the project is just too good to pass up. We need this work. I came here and worked **HARD** to change processes within the company to move towards Agile development, and now I don't know how to reconcile where this project fits into our new process. I have never before had the luxury of open-minded, hands-off management that trusted me to lead the development team and processes down this path, and now that we are here I can't honestly tell myself that this project will truly be done in an Agile way. I feel like management trusted me to lead this path and that I let them down, because this situation we are in now so clearly calls for Waterfall. I am afraid that I might lose their trust if I backtrack now. Other answers like the one here say Agile is impossible with this kind of client; do you agree?
Have any of you been in a similar situation and made it work? What strategies did you implement to make Agile happen successfully?"} {"_id": "143222", "title": "Is it more advantageous to write a program to test your code (i.e. a client) or just use the main portion of your program?", "text": "For instance, if I had a program with a bunch of methods: public class Dog { public boolean isHappy() {...} public int weight() {...} public static void main(String[] args) { Dog max = new Dog(); max.isHappy(); } } Is it best to use the main portion of the code to test, or to write a completely separate program, which would be a client of the Dog class, to test my code?"} {"_id": "143226", "title": "How are you using CFThread in ColdFusion Applications?", "text": "I'm presenting on Concurrency in ColdFusion at CFObjective this year, and I'd like to hear how you're using CFThread in your ColdFusion applications. In addition, what problems have you had while using it, and how (if at all) have you solved them? What do you dislike about CFThread? Have you run into significant weaknesses with CFThread or other problems where it simply could not do what you wanted to do? Finally, if there's anything you'd like to add related to concurrency in CF, not specifically related to CFThread, please do tell."} {"_id": "131055", "title": "Is it a good idea to use a unit test framework for something other than testing code?", "text": "I'm about to write a simple script to test a dataset for certain conditions. I was designing it as a set of functions, each one describing a condition to be tested, which are passed to the test engine: # The tester engine: all(f(dataset) for f in conditions) I realized that my approach was similar to unit testing. So, to avoid repeating myself, I am thinking of using my favorite unit testing framework instead. 1. What do you think about that idea? 2. Have any of you used a unit test framework for any purpose other than testing code?"} {"_id": "189747", "title": "Ruby manager for Windows: Is Ruby's PIK alive or dying?", "text": "At first, please forgive the probably off-topic and/or not-constructive question, but I truly have no idea where to ask it. At first I targeted StackOverflow, where there's at least some PIK-related traffic visible, but Programmers seems more relevant (although there seems to be hardly three words here about PIK..). Some time ago I was looking for a quick way to predictably and repeatably set up Ruby environments, and I found PIK to be relatively easy and usable. Note that I'm not a full-time Ruby developer; I use it more as a \"quick hacking console\" and \"lightweight Bash replacement\" on Windows. For the last one or maybe even three years I was happily using PIK for isolating Ruby's ENVs from the normal Windows shell environment. I've also come up with some extensions, both to the Windows shell and to PIK itself, and from time to time I feel the urge to publish them. They include simple things, from running a script by double-click like a .vbs or .bat, to more interesting ones like drag&dropping files onto a .rb script, to some fairly esoteric ones like emulated support for shebangs like `#!/bin/ruby-187` `#!/bin/ruby-193` so you can easily switch Ruby runtimes in the scripts lying on your desktop. Some background covered, let me then ask the awful question: ~~is it worth publishing?~~. No, obviously almost everything usable is worth that.
My point is more about the feeling that maybe I could spend that time better; there's no point in polishing dead tools, and as I search the web, I see **very** little movement on the subject :) * Are such extensions actually needed, or do they maybe already exist and I simply reinvented the wheel? Do you, Ruby users, feel the \"lack\" of them in your \"everyday use\" of Windows? * Is PIK alive, getting popular, or dying? Is it known/used throughout the Windows/Ruby community, or should I adapt my add-ons to other managers?"} {"_id": "189741", "title": "Which design pattern is illustrated by inheriting IStructuralComparable interface?", "text": "We know that some design patterns are found so useful that they become features of the language itself. For instance, the interface `IEnumerator`, which is implemented by the `Array` object. This helps in separating the iterator from the collection object. The internal representation of the object is encapsulated. The pattern: Iterator Pattern. I have just come across another interface, `IStructuralComparable` (msdn). This is used to compare the structure of two collection types using the `StructuralComparisons` (msdn) class. The intent of this interface seems to be thus: > My understanding is that it's used for collection-like types, and > encapsulates the structural part of the comparison, but leaves the > comparison of the elements to a comparer passed in by the user. (link) (Got > from the comments section of the question) **Is this an implementation of any familiar design pattern? If yes, which pattern is it?**"} {"_id": "34791", "title": "Are there any good editors for asp.net on smartphones?", "text": "I came across CodeRun the other day. Does anyone know any better editors/IDEs for ASP.NET, especially ones that can be used relatively easily on some of today's smartphones?"} {"_id": "209630", "title": "Self reference using a new table vs concatenated list of ids", "text": "At my current workplace there is a common pattern in database design: they don't use foreign keys, but instead list all corresponding ids in a column like this: some_table id name image_ids 1 a 1,2,3 2 b 4,6,7 images id url 1 ... 2 ... ... They store self-references in the same manner: some_table id name some_table_id 1 a 2,3 2 b 1,3 ... They encourage me to use this pattern, but it does not feel right to me. I would never design a database like that. I have some counter-arguments against it: * What if some day I wanted to add some arbitrary data to a self-reference? Using this model I would not be able to. * It does not ensure referential integrity. I can easily add non-existent ids, which will lead to problems. * Searching through strings is not fast either. I need to justify my complaints, so my question is: what persuasive counter-arguments can you come up with against this hacky design approach?"} {"_id": "209632", "title": "Why should IQueryProvider implementations throw NotSupportedExceptions?", "text": "Searching the web, we can find plentiful examples of various ORMs (NHibernate, EF, LinqToSql, etc.) that implement but don't actually support the full `IQueryable` interface, throwing `NotSupportedExceptions` when they encounter something they don't like, such as LinqToSql and `SkipWhile`. My question is this: why do ORM providers opt to throw a `NotSupportedException` instead of letting certain query operators (that do not translate well, or at all, to the target data source) trip a query execution and then let LINQ to Objects handle the rest?
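To make the idea in the question above concrete, a hedged sketch of the envisioned fallback: let the provider try to translate the query, and on NotSupportedException replay the same operators through LINQ to Objects via AsEnumerable(). This is purely illustrative - no shipping ORM does this automatically, for the resource-usage reasons the question goes on to acknowledge - and the entity names in the usage comment are made up.

using System;
using System.Collections.Generic;
using System.Linq;

static class QueryFallback
{
    public static List<T> ToListWithFallback<T>(
        IQueryable<T> source,
        Func<IQueryable<T>, IQueryable<T>> shape)
    {
        try
        {
            // The provider attempts translation when the query is enumerated.
            return shape(source).ToList();
        }
        catch (NotSupportedException)
        {
            // Replay against LINQ to Objects: every operator works,
            // but the whole base set is materialized in memory first.
            return shape(source.AsEnumerable().AsQueryable()).ToList();
        }
    }
}

// usage: QueryFallback.ToListWithFallback(db.Orders, q => q.SkipWhile(o => o.Total < 10m));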
I understand that some heavy physical resource usage could occur as a result, but if `IQueryable` instances were _truly_ swappable, would we not be better off?"} {"_id": "107258", "title": "Standard practices for an architect", "text": "I am the architect for my group. Unfortunately I was given this position organically, and there are no standards in place for how I should do my job. At different companies, how does the architect role work? I am very interested, for example, in what types of diagrams and documents an architect might provide so that they can assist a developer in doing their job."} {"_id": "209638", "title": "What's the best approach to building a db schema for profile, but to track changes?", "text": "I'm working on a site where companies can create a profile and then add locations for each profile. Those first two tables are simple enough. But any changes to their profiles or locations have to go through an approval process. This is where I'm torn. The end result would be an employee going through a worklist, seeing the requested changes, and clicking \"approve\" or \"deny\". Approving applies the new data to the existing record; denying sets a flag on the request record(s) and sends a response. What I'm wondering is the best way to go about the change requests. I would definitely like to keep the change requests in a separate table from the profile or location tables. I can't decide if it's best to just create duplicates of the profile and location tables with a few extra columns and use those for change tracking, or if I should just create a simple table with just the columns to capture the change target id, field, and value, plus a few extras like changeset, datetime, flag, etc. What do you think?"} {"_id": "209639", "title": "Multi-Thread Return Single SQL Result", "text": "I am having some difficulty with MySQL and returning a unique row to a thread. I want it so the thread will search for a row from the table where the bit (see below) is false and only one row is returned. But I don't want the other threads to return the same result; there might be some race condition where the same row is returned to more than one thread, and because each thread will then do lots of processing off the back of this result, I don't want duplication. Background: I have a MySQL database that contains 3 columns (id, text, bit). The id is auto-incremented. I have a multi-threaded Ruby application that reads, updates and inserts rows into the table. Pseudo code for the thread is as follows: select a row from the table where the bit is false; do some processing with the text returned from that row; insert more rows with bit set to false. I have tried a simple test with a multi-threaded script that uses the following: SELECT id, text FROM table WHERE bit = FALSE LIMIT 1 FOR UPDATE But each thread returns the same row. I have disabled autocommit as per the recommendation. Since I am omitting any commit, I would expect the other threads to have a different result since the row is locked. Am I missing something, or should I be looking at using another method?"} {"_id": "77373", "title": "Is Scala ready for prime time?", "text": "Now that I've done a few trivial things with Scala (which I love for \"hello world\" and contrived applications!) I am left wondering: partly about the maturity of the tools to support development, and partly about general applicability. Are the toolsets ready? Is Scala appropriate for use on enterprise/business applications? Would \"you\" use it on a non-trivial project?
Some of my (possibly unfounded) concerns would be: * are the IDE and toolsets as rich as what we have to develop .NET and Java applications (Eclipse for Scala seems limited compared to Eclipse for Java)? * are the build/CI/testing toolsets able to deal effectively with Scala? * how maintainable is the concise code that _can_ be (encouraged?) written in the language? * is it possible to find developers with Scala experience? * is there enough critical mass to get help through online references and books that are more than an \"intro\" to the language? So bottom line - is the ecosystem mature enough to use now, or is it better to wait and see how it evolves? EDIT: let's say \"non-trivial\" is a multi-year, multi-release, 10-20 developer project."} {"_id": "151414", "title": "Is there a language where collections can be used as objects without altering the behavior?", "text": "Is there a language where **collections** can be used as **objects** without altering the behavior? As an example, first, imagine these functions work: function capitalize(str) //suppose this *modifies* a string object, capitalizing it function greet(person): print(\"Hello, \" + person) capitalize(\"pedro\") >> \"Pedro\" greet(\"Pedro\") >> \"Hello, Pedro\" Now, suppose we define a standard collection with some strings: people = [\"ed\",\"steve\",\"john\"] Then, this will call toUpper() on each object in that list: people.toUpper() >> [\"Ed\",\"Steve\",\"John\"] And this will call greet once for EACH person on the list, instead of sending the list as an argument: greet(people) >> \"Hello, Ed\" >> \"Hello, Steve\" >> \"Hello, John\""} {"_id": "151415", "title": "What are best practices to read high level undocumented and uncommented code?", "text": "There are some open-source projects that have class after class without any significant explanation of what the class does and/or why that class is needed. For example, classes in the CppEditor plugin for QtCreator. What are the best practices for reading undocumented code? \"Don't read it\" is not an option :)"} {"_id": "24412", "title": "Do you have your own 'misc utils' library? What part are you most proud of?", "text": "I know that many of us maintain our own little personal library with tools and utilities that we use often. I've had mine since I was 16 years old, so it has grown to quite a considerable size. Some of the stuff I've written has since been added to the framework. I wrote my own little implementation of expression trees for use with genetic algorithms long before LINQ, which I quite like and was proud of at the time - of course it's pretty useless now. But recently I have been going through it and upgrading to .NET 4.0, and it rekindled my interest. So I'm curious as to what you use your library for. Maybe we could get some cool ideas going for useful little snippets and share them amongst ourselves. So my questions are: * Do you have a miscellaneous utility library? * Which part are you most proud of and why? Give an example of code if you like `:-)`"} {"_id": "151417", "title": "Class design, One class in two sources", "text": "Is it possible to define methods from **the same class** in different \"CPP\" files? * * * I have a header file \"myClass.h\" with: class myClass { public: // methods for counting ... // methods for other ... }; I would like to define the \"methods for counting\" in one CPP file and the \"methods for other\" in another, for clarity. Both groups of methods sometimes use the same attributes. Is it possible?
Thanks :)."} {"_id": "24414", "title": "Thoughts of Cloud Development/Google App Engine", "text": "I use mainly PHP for web development, but recently I started thinking about using Google App Engine. It doesn't use PHP, which I am already familiar with, so there will be a steeper learning curve - probably using Python/Django. But I think it may be worthwhile. Some advantages I see: * Focus on the app/development. No need to set up/maintain a server ... no more server configs * Scales automatically * Pay for what you use. Free for low usage * Reliable, it's Google after all Some concerns though: * Does a database with no joins pose a problem for those who have used App Engine before? * Do I have to upload to Google just to test? Will it be slow compared to testing locally? What are your thoughts and opinions? Why would you use or not use App Engine?"} {"_id": "199834", "title": "How do web apps create subdomains?", "text": "I want to understand the architecture of web apps that use subdomains. I don't think that I'm phrasing this well, so let me explain. Many web apps, like Tumblr or Shopify, create a user's site on a subdomain. Say, for example, my Tumblr account was `johndoe`; then you could find my Tumblr blog at `johndoe.tumblr.com`. Can someone explain how this is implemented?"} {"_id": "180194", "title": "How do you migrate from one language to another?", "text": "I know that a language is just a tool and it's all about creating a product. But if you are all about enterprise and then you change to mobile development - how do you manage it? If I am used to PHP and its frameworks and switch to Ruby - how do I deal with it? With learning and getting experience with Rails, with the Ruby language and so on. What if I was a C++ programmer working on standalone programs, and then I switch to Python/Django for the web? In general, here are my questions for all the examples above: 1. Is it possible to switch your specialisation as a software developer? From web development to enterprise, from standalone to mobile, etc. If yes, how? Or should I stick with one path to become an expert in it? 2. How do you learn new languages? I mean, there are new frameworks you need to learn, and if it's a different development sphere - say, mobile after web development - you need to learn the basic stuff for this sphere, right? How do you handle it? 3. Is switching language/career path hard in terms of getting a new job, since you don't have any experience in the new language? Do you take a downgrade in salary? _**Please explain it all to me! How do people handle it? I am 100% sure I love development, atm I love PHP/RoR and web development, so I use PHP as my main language at my full-time job, but I don't want to stick with it forever, and I want to be able to change languages and/or maybe development direction (mobile, standalone, enterprise). How do you do it without hurting your career, or is it impossible?_**"} {"_id": "404", "title": "Is there any hard data to back up \"Human task switches considered harmful\"?", "text": "Joel Spolsky wrote a famous blog post \"Human Task Switches considered harmful\". While I agree with the premise and it seems like common sense, I'm wondering if there are any studies or white papers on this that calculate the overhead of task switches, or whether the evidence is merely anecdotal?"} {"_id": "23295", "title": "Moving Old Projects To Newer IDE's and Libraries", "text": "At work we have a few older projects that are stuck on .NET 1.1 and VS 2003.
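Back to the subdomain question above: the usual recipe is a wildcard DNS record (*.example.com) pointing every subdomain at the same application, which then inspects the HTTP Host header and treats the leading label as the account name. A small C# sketch of that lookup; the domain is only an example.

using System;

static class SubdomainDemo
{
    // Given the HTTP Host header, return the account that owns the subdomain.
    static string AccountFor(string host, string baseDomain)
    {
        if (!host.EndsWith("." + baseDomain, StringComparison.OrdinalIgnoreCase))
            return null; // bare domain or unrelated host
        return host.Substring(0, host.Length - baseDomain.Length - 1);
    }

    static void Main()
    {
        // Wildcard DNS (*.tumblr.com) sends every name to the same server;
        // the app then routes per account from the Host header alone.
        Console.WriteLine(AccountFor("johndoe.tumblr.com", "tumblr.com")); // johndoe
    }
}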
While these are probably too much work to move forward now, I'm wondering if the effort to keep our newer projects up to date will be worth it. Specifically, we would be looking at moving about 30 projects from .NET 3.5 to .NET 4.0 and from VS2008 to VS2010. My questions for the community are: **Do you move your projects along as new tools and libraries become available, or just start the newest stuff in the newest versions? If you do move forward, have you found the benefits outweigh the cost of the upgrade?**"} {"_id": "408", "title": "What techniques are used in solving code golf problems?", "text": "### \"Regular\" golf vs. code golf: Both are competitions. Both have a well-defined set of rules, which I'll leave out for simplicity. Both have well-defined goals; in short, \"use fewer hits/characters than your competitors.\" To win matches, athletic golfers rely on * equipment * Some situations call for a sand wedge; others, a 9-iron. * techniques * The drive works better when your feet are about shoulder-width apart and your arms are relaxed. * and strategies * Sure, you could take that direct shortcut to the hole... but do you really want to risk the water hazard or sand bunker when those trees are in the way and the wind is so strong? It might be better to go around the long way. **What do code golfers have that's analogous to athletic golfers' equipment, techniques and strategies?** Sample answer to get this started: use the right club! Choose GolfScript instead of C#."} {"_id": "128706", "title": "Most frequently used design patterns in refactoring (my example)", "text": "I've been doing quite a lot of refactoring of C++ and C# code recently, and found that 90% of the patterns I use are: * Template method * Factory * Singleton Are these generally the most commonly used patterns in refactoring, or is it just me? Can you share your refactoring experiences?"} {"_id": "74799", "title": "Why is MVC more popular than PAC?", "text": "I just stumbled upon a question at SO about PAC and got interested in the pattern. I'm wondering why it's not as widely used as MVC. What are the benefits of MVC compared to PAC?"} {"_id": "151055", "title": "What happens if we serialize and deserialize two objects which references to each other?", "text": "To make it more clear, this is a quick example: class A implements Serializable { public B b; } class B implements Serializable { public A a; } A a = new A(); B b = new B(); a.b = b; b.a = a; So what happens if we serialize the a and b objects into a file and deserialize them from that file? I thought we'd get 4 objects, 2 of each: identical objects but different instances. But I'm not sure if there's anything else, or whether that's right or wrong. If any specific technology is needed to answer, please assume Java. Thank you."} {"_id": "188294", "title": "Encountering the same issue in an application", "text": "I've often come across the situation where the same mistake is made in many places in an application. For example, in a web application, the user creates an item and clicks the `Add` button to save it. If they click the button several times, multiple items are added instead of one, because the UI wasn't blocked and the backend check wasn't performed properly. How should I create bug tickets in such cases?
Here are the solutions I could think of: * create one single bug ticket and list where the situation occurs * for every case found, create a separate ticket * when finding the bug, tell the developers to pay attention, wait till they are finished, and test again"} {"_id": "188299", "title": "Is this an appropriate use of Mockito's reset method?", "text": "I have a private method in my test class that constructs a commonly used `Bar` object. The `Bar` constructor calls the `someMethod()` method on my mocked object: private @Mock Foo mockedObject; // My mocked object ... private Bar getBar() { Bar result = new Bar(mockedObject); // this calls mockedObject.someMethod() return result; } In some of my test methods I want to check `someMethod` was also invoked by that particular test. Something like the following: @Test public void someTest() { Bar bar = getBar(); // do some things verify(mockedObject).someMethod(); // <--- will fail } This fails, because the mocked object had `someMethod` invoked twice. I don't want my test methods to care about the side effects of my `getBar()` method, so would it be reasonable to reset my mock object at the end of `getBar()`? private Bar getBar() { Bar result = new Bar(mockedObject); // this calls mockedObject.someMethod() reset(mockedObject); // <-- is this OK? return result; } I ask because the documentation suggests resetting mock objects is generally indicative of bad tests. However, this feels OK to me. **Alternative** The alternative choice seems to be calling: verify(mockedObject, times(2)).someMethod(); which in my opinion forces each test to know about the expectations of `getBar()`, for no gain."} {"_id": "55029", "title": "Other than for legacy software, are there reasons for using COBOL?", "text": "COBOL is still (heavily?) used for financial computing. It is an old language, and AFAIK most programmers hate, or at least dislike, COBOL. This brings up a question: is the only reason COBOL is still used that legacy software uses it, or does it have any real advantages over other programming languages? Just curious."} {"_id": "91841", "title": "Python template engines: What are the real benefits and drawbacks to XML vs custom syntax", "text": "I'm interested in knowing what the real differences (benefits and drawbacks) are between the two types of Python templating engines: XML (like Genshi or Kid) and custom syntax (like Cheetah or Jinja2). I'm not looking for which is better, or a recommendation. I understand that no one solution will be perfect, and that the best solution will depend on the problem. I do want to better understand the differences between the two types before I choose one for my problem. This list may not apply to all templating solutions. XML Benefits: * Uses XML; it's mostly familiar to developers. There are a few new items (if/else, flow logic) to learn. * It works with existing XML toolchains. * It's more powerful, as it is knowledgeable about the data being worked on. (Genshi is context aware.) XML Drawbacks: * The XML-based engines tend to be slower than the custom-syntax engines. * Some will argue that XML is more difficult to learn than custom syntax. Custom Syntax Benefits: * It's faster than XML-based engines (see earlier link). * It's a simple, powerful syntax that should be easier to learn. Custom Syntax Drawbacks: * It's another syntax to learn.
* It might not work smoothly with existing XML toolchains."} {"_id": "181577", "title": "Is it possible to read memory from another program by allocating all the empty space on a system?", "text": "Theoretically, if I were to build a program that allocated all the unused memory on a system, and continued to request more and more memory as other applications released memory that they no longer needed, would it be possible to read recently released memory from other applications? Or is this somehow protected by modern operating systems? I have no practical application for this, I'm just curious. I realize there are some issues with allocating \"all available memory\" in real life. Edit: To clarify, I'm asking specifically about \"released\" memory, not accessing memory that is currently allocated by another application."} {"_id": "55020", "title": "How do I pick which agency to go through?", "text": "I work in a town where the majority of work comes from the government. As a contractor, I generally have to apply for work through agencies which are on the government's preferred vendor's list. Most jobs are publicly listed, and to apply for them you generally need an agency to represent you by submitting your application with a rate which is usually your rate plus their commission. I've been trying to figure out what the agencies do, and it seems a large part of what they do is 1) get on that preferred vendor's list and 2) forward resumes. So right now, my policy is that since their commission affects how expensive I am, one - I don't work with companies that do not disclose their margin. And two, I go for the agency that takes the least amount of commission for the job I want to apply for. Is that the best approach? I would think applying for a job with the most competitive rate is the best approach, but I also wonder whether which agency you're applying through actually matters. I know some agencies actually build personal relationships with senior managers, but how do I know which ones? How do I know whether that actually affects my job prospects? What criteria should I use to decide which agent I go through for the job?"} {"_id": "107874", "title": "Question: the best data structure and algorithm", "text": "I have a question about implementing a data structure (with an algorithm) that has these features: 1. There are a great many places (like stores), and each place can hold a great many items; I want to store items in the places. The point is we need to have optimized insertion (or deletion) and an optimized search feature. 2. Search can be done based on places (return items in the place) or based on items (return places that have the item) I thought of using a hashmap for this, using the place ID as the key and storing the items as a collection (again a hashmap with the item ID for both key and value). So based on this, the insertion of each item or place would be O(1), but getting or removing a place will be O(n), and for an item it will be O(n*k)! (assume we have \"n\" places and \"k\" items - I hope this is the correct calculation). Is there any better data structure or algorithm for this problem?"} {"_id": "35448", "title": "Always keep files updated in Eclipse", "text": "I keep lots of files/editors open in Eclipse. I also love using `git stash` and other git commands that essentially change the contents of my open files. Is there an Eclipse feature or plugin that will always keep the contents of my open files up to date and live?
Currently if I put focus in an out-of-sync editor, I get an awkwardly worded dialog that I have to parse carefully every time. I wish it would just keep me synced like TextMate does."} {"_id": "35443", "title": "Do you prefer building your interfaces in IB or programmatically? and why?", "text": "I've been using Xcode and building iPhone apps for two months, but I'm finding it really hard to grasp good application design. I always face problems\u2014like you can't put your tabbarcontroller in another custom viewcontroller, for example\u2014that, of course, would sometimes work if you created the views/viewcontrollers programmatically. So I don't know if I should start creating my objects in code or use Interface Builder. What are your experiences?"} {"_id": "140555", "title": "How do you track existing requirements over time?", "text": "I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to add a new change, we notice that the code does all sorts of things in there, and we're not entirely sure which things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token was supposed to only be valid for 30 minutes, or did some programmers just pick a sensible default? Can we change it? Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we created a few additional documents describing new features, but nobody ever goes back and updates those documents when new features are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were and why?"} {"_id": "165589", "title": "Avoiding \"double\" subscriptions", "text": "I am working on a website that requires a bit of marketing; let me explain. This website is offering a _single_, say, $50 iTunes voucher to a lucky winner. To be entered in the draw, you need to invite at least one friend to the website (and that friend has to join). Pretty straightforward. Now, of course it would be easy for anyone to just create a fake account and invite that account, so I was thinking of some way to detect possible cheating. I was thinking of an IP check on the newly subscribed (invited) user: if the same IP has been logged in the last 24 hours, investigate it further. But I was thinking that maybe there is a more clever way around this issue. Has anyone ever thought about this? What other solutions did you try? Thanks in advance."} {"_id": "140550", "title": "Storing images in file system and returning URLs or virtually resizing and returning byte arrays?", "text": "I need to create a REST web service to manage user-submitted images and display them all on a website. There are multiple websites that are going to use this service to manage and display images. The requirements are to have 5 pre-defined image sizes available. The 2 options I see are the following: 1.
The web service will create the 5 images, store them in the file system and store the URLs in the database when the user submits the image. When the image is requested, the web service will return an array of URLs. I see this option as being a little hard on the hard drive. The estimates are 10,000 users per site, and let's say, 100 sites. The heavy processing will be done when the user submits the image, and each image is going to be pulled from the file system. 2. The web service will store just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service will get the info from the DB, load the image into memory, create its 5 instances and return an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory. The heavy processing will be done when the images get requested. A plus I see for option 2 is that it will give me the option to rewrite the URL of the image and make them site-dependent (prettier), rather than having an image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?"} {"_id": "101156", "title": "Tips for achieving \"continual\" delivery", "text": "A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline: During the iteration: * Developers work on stories on the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch. * Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically. * The testers have the ability to auto-deploy integration to a staging environment and this occurs multiple times per week, enabling continual running of their test suites. Every Monday: * there is a release planning meeting to determine which stories are \"known good\" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration. * no new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from. Every Tuesday: * The testers have tested the integration branch as much as they possibly can given the time available, and there are no known bugs, so a release is cut and pushed out to the production nodes slowly. This sounds OK in practice, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms: * \"subtle\" bugs are found in production that were not identified in the staging environment. * last-minute hot-fixes continue into the Tuesday. * problems in the production environment require roll-backs, which block continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from). I think test coverage, code quality, ability to regression test quickly, last-minute changes and environmental differences are at play here. Can anyone offer any advice regarding how best to achieve \"continual\" delivery?"} {"_id": "220338", "title": "API for explanation of complicated calculations or business rules?", "text": "In online shops there are areas with complicated rules.
For example: * is a product visible in the product catalog * is a product sold out * what is the price for the product (discounts, promotions, ...) Is it a good idea to have an additional explanation API for each complicated rule? Example: `boolean isProductVisible()` has an additional `String explainIsProductVisible()`? How can understanding these rules be made easier? Background: Yesterday I as a software developer had the problem that my standard test product on my local test system was \"sold out\" and I needed this product to test the order-confirmation page. So I had to debug the `IsProductAvailable` routine to correct the test data before I was able to do the original task, \"test the order-confirmation page\". In order to find out why my test product was sold out, an explanation message would have been helpful. In a similar situation in the price calculation engine I have already added log messages like this: double calculateItemPriceForProduct(Product product, int quantity, Cart cart, ...) { double cumulatedPrice = 0; ... if (quantity > 1) { Price price = getScalePrice(product, quantity); if (price != null) { cumulatedPrice = price.getValue(); if (log.isDebugEnabled()) { log.debug(\"calculateItemPriceForProduct(\" + product.getCode() + \") using scalePrice \" + price.getCode() + \" for quantity \" + quantity + \":\" + price.getValue()); } } } ... } As a developer I can look into the server log and hopefully find out what happened. If I were a product manager, it would be more comfortable to have role-specific tooltips on the webpage that show the explanation part. Of course these tooltips would not be available for ordinary customers, just for developers and product managers. My questions: * Are there other ways to make understanding complicated rules easier? * Is it a good idea to have an additional explanation API for each complicated rule? * Does such a feature pay off the cost of developing it?"} {"_id": "65863", "title": "How can I find out what a job title means at a company?", "text": "I am in a dilemma about a job offer from one of the top 10 semiconductor companies in the SF Bay Area. I have about 10 years of experience at various small to large (> $1 billion) companies in the Bay Area. After my graduate degree about 10 years ago I started at a big company in the Bay Area as a Sr Engineer. However, due to a job change 5 years after taking that position, more job changes afterward, and not caring too much about titles, I am currently (still) a Sr Engineer at another company. Having had so many years of experience in hi-tech, I now realize that titles are important too. The dilemma is that this offer from another company is also for a Senior Software Engineer, and not a Staff or MTS title, which, after some investigation, I realize are higher titles than just Sr Engineer. Should I be asking the company (and its recruiter) to give me a higher title than Sr SW Engineer; is that expectation reasonable? I had the same Sr Engineer title 7 years ago, and after many job experiences and a graduate degree I think I should be acknowledged by the new company making the job offer with a higher, staff-level title. **Is there any way to find information about the company hierarchy, as I cannot trust the information the company's recruiter is giving me due to his vested interest?** There are reviews about the company on sites like Glassdoor, but they do not provide a chance to put questions to employees.
Are there any interactive websites/forums for hi-tech employees (hardware design as well as software) where one could ask such a question and hopefully get an answer from someone working at the company? PS: If needed: the company regularly appears in Googled lists of the top 20 semiconductor corporations by sales every year."} {"_id": "61521", "title": "Any hard evidence on how much smart phone app developers have actually earned?", "text": "I am considering making a smart phone app. Before I start, I want to see whether it is worthwhile at all. Unfortunately, I find it very hard to find any hard evidence on how much the developers of such apps actually earn, either via application purchases or in-app advertising. The best research I have found so far states that the average iPhone app earns $682 per year. Then again, there are loads of apps, some of which might not even work, and their earnings also count towards this statistic. Therefore, I am after some specific examples of an app and its earnings."} {"_id": "253097", "title": "What is the point of writing WCF Interceptors, compared to using a helper method?", "text": "For those of you who don't know, WCF allows you to attach interceptors to methods in the service contract. The interceptor is capable of performing custom logic before and after a method call, and is (as far as I know) used mainly for validating parameters and doing custom authorization schemes. But what's the point of using them if you can just write a helper method to do the same thing?"} {"_id": "51610", "title": "How can I implement an escrow payment system in my website?", "text": "I'd like to build a web service similar to Kickstarter that allows users to pledge money to an idea, though I'm unsure how I can implement this kind of payment system. If the idea receives a specified amount of money, then the donors are charged. If it doesn't, the donors are not charged. I've done some preliminary research and have found Amazon Payments to be a possible solution provider for this, but I'm still unsure where to start and was hoping someone could point me in the right direction for how I can go about implementing this kind of payment structure in my website. I should also note that this is primarily a prototype I'm building, so it's OK if the solution is limited to U.S. customers only. Also, I plan to build the site using Ruby on Rails. Thanks so much for your wisdom!"} {"_id": "117886", "title": "How to design a database wherein multiple tags (strings) are to be associated with an id?", "text": "I have to design a database wherein I have to associate an audio_id with multiple tags (words). I am considering the following approaches, to select one from: 1) To have multiple fields for multiple tags (columns: tag1, tag2, tag3.... tag10) corresponding to a single audio_id. The number of tags in my application will not be more than 10-15. 2) To save the tags (words) as a single comma-separated string corresponding to a single audio_id. 3) To save the associations (tag:audio_id) in a separate table. But the issue here is that the associations can be n to n: multiple tags can be associated with an audio_id, and the same tag can be in multiple audio_ids. Also, please let me know if there can be any alternate design for this scenario, or whether it's better to consider a type of database other than MySQL. The total number of tags will be around a million, and audio_ids around a few thousand.
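(To make option 3 concrete, here is a rough, hedged sketch of the junction-table approach in Java/JDBC; the table, column and connection names are my own assumptions, not part of the original design.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AudioTagSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/audiodb", "user", "pass");
             Statement st = con.createStatement()) {
            // n-to-n junction table; the composite primary key doubles as the tag->audio index,
            // and the extra key covers audio->tag lookups
            st.execute("CREATE TABLE IF NOT EXISTS audio_tag ("
                    + " tag_id BIGINT NOT NULL,"
                    + " audio_id BIGINT NOT NULL,"
                    + " PRIMARY KEY (tag_id, audio_id),"
                    + " KEY idx_audio (audio_id))");
            // all tags attached to one audio_id
            try (PreparedStatement ps = con.prepareStatement("SELECT tag_id FROM audio_tag WHERE audio_id = ?")) {
                ps.setLong(1, 42L); // hypothetical audio_id
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("tag_id"));
                    }
                }
            }
        }
    }
}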
I am concerned about the performance of the system."} {"_id": "65868", "title": "\"Functional\" PHP indentation", "text": "I'm a big fan of 1TBS when it comes to C-like languages. Now that PHP has decent lambdas and closures, though, I'm not sure the style I'm using for them (fundamentally made up; it has something to do with the jQuery source code) is the most readable/standard option. What do you think? Does somebody have some nice examples of 1TBS + lambdas and functions as parameters? Here's a sample of my code. function bold_search_terms($needle, $haystack) { return str_replace( $occurrencies = array_filter( explode(\" \", $haystack), function ($var) use ($needle) { return(levenshtein($var, $needle) < 3); } ), array_map( function ($var) { return \"<b>$var</b>\"; }, $occurrencies ), $haystack ); }"} {"_id": "253090", "title": "Why are inheritance, encapsulation and polymorphism not the pillars of OOP?", "text": "One day I went to a Stack Overflow chat and saw a phrase stating that inheritance, encapsulation and polymorphism are the pillars of OOP (in the sense that they are fundamental, the sole foundation of the construction). Also, there's a similar question that I have been asked very often on college exams and job interviews, and the right answer was always the statement pronounced in the title of this question (\"Yes, inheritance, encapsulation and polymorphism are the pillars of OOP\"). But in the Stack Overflow chat I was severely ridiculed; participants strongly disagreed with such a statement. So, what's wrong with this statement? Are programmers trained in different things in post-Soviet and United States colleges? Are inheritance, encapsulation and polymorphism not considered to be the pillars of OOP by US/UK programmers?"} {"_id": "186926", "title": "How many times is the command executed? Looking for the mistake", "text": "I have the following piece of code: int sum = 0; for (int i = 1; i <= N; i++) for (int j = 1; j <= N; j++) for (int k = 1; k <= N; k = k*2) for (int h = 1; h <= k; h++) sum++; So I've calculated how many times each loop executes, and then the whole script, and I think I might be wrong about the last one. 1. N 2. N 3. 1 + log2N (log2N means log N to the base 2) 4. N So the total execution count of the inner loop would be N^3 * (1 + log2N). Am I right? How can I transform this statement? **UPDATE 1** I have another solution which seems monstrous: 1. N 2. N 3. LOG2(N) + 1 4. 2^(LOG2(2N)) - 1 So the total cycle count would be `n^2 * (LOG2(N) + 1) * 2^(LOG2(2N)) - 1`, which transforms to an n^2 order of growth. **UPDATE 2** I wrote a simple test app to check my assumption. It seems that the third loop is, for some reason, already accounted for in the fourth one. At least this is the result of the test app, so I threw away `(LOG2(N) + 1)`. As the result of several transformations I have the following equation for the total number of sum++ calls: N*N*(2*N - 1) = 2 * N^3 - N^2 ~ N^3"} {"_id": "186921", "title": "How to solve circular package dependencies", "text": "I am refactoring a large codebase where most of the classes are located in one package. For better modularity, I am creating subpackages for each piece of functionality. I remember learning somewhere that a package dependency graph should not have loops, but I don't know how to solve the following problem: `Figure` is in package `figure`, `Layout` is in package `layout`, and `Layout` requires the figure to perform layout, so package `layout` depends on package `figure`.
But on the other hand, a `Figure` can contain other `Figure`s inside it, having its own `Layout`, which makes package `figure` dependent on package `layout`. I have thought of some solutions, like creating a `Container` interface which `Figure` implements and putting it in the `layout` package. Is this a good solution? Any other possibilities? Thanks"} {"_id": "186922", "title": "Formatting Dynamic Web Pages", "text": "A page is built so that it has server-side scripts implemented on the page. Should indentation of the code be according to the server-side logic (making it easier to read while coding) or according to the HTML/ASP.NET markup (making it easier to read while debugging etc.)?"} {"_id": "186929", "title": "How to do unit tests on a method that takes the elapsed time into account?", "text": "I'm currently in the middle of refactoring an important method in a legacy system. There were almost zero tests until I started working on it, and I've added quite a lot to ensure correct behavior after my refactorings. Now I've come across the most crucial part: the algorithm that calculates an indicator. It's something like indicator = (OneNumberFromA + AnotherNumberFromB) / elapsedTime; How can I test the correct behavior of this function with unit tests? There are also some slightly different algorithms in the functions that the program reaches in some cases - but in all of them, the `elapsedTime` is vital to the outcome."} {"_id": "130095", "title": "Porting library, what to do with JavaDoc comments/credits", "text": "I ported a library to Java, but am wondering what to do with the JavaDoc comments. The original library used JavaDoc comments too, so do I keep the @author tags from the original code? And how do I give myself credit as the person who ported it over?"} {"_id": "25432", "title": "How can a new programmer impress the software engineer (boss)?", "text": "I'm working at my first programming job. My boss is a very smart software engineer, and I feel like I have very little to offer compared to him. Problem is, he is always busy, and needs someone to help him out. I feel like I'm not good enough, but I still want to succeed. I want to be a great programmer. What can I do to impress him? Thank you."} {"_id": "36230", "title": "What strategy to use when starting in a new project with no documentation?", "text": "**Which is the best way to go when there is no documentation?** For example, how do you learn the business rules? I have done the following steps: 1. Since we are using an ORM tool, I have printed a copy of the database schema where I can see relations between objects. 2. I have made a list of short names/table names that I will get explained. The project is a client/server enterprise application using the MVVM pattern."} {"_id": "139123", "title": "How can I advocate a semi-strict release schedule in a risk-averse environment?", "text": "Recently I've been increasingly plagued by what I would have to describe as one of my most frustrating and morale-killing experiences in this profession: Having to _sit on a release_ that has been tested, re-tested, staged, and for all intents and purposes is _ready to ship/deploy_. As an all-around solutions guy and not just a hardcore coder, I do understand and have even advocated the need for proper change control. But lately, the tenuous balance between covering our bases and shipping on time has gone all lopsided, and I've had little to no success in restoring it to something sane.
I'm looking for _compelling_ arguments to help convince risk-averse management that: 1. The dev team should (or must) be able to set its own release schedule - within reason of course (1-3 months should be conservative enough for all but the biggest Fortune 500 companies); 2. Software releases are important milestones and should not be treated cavalierly; in other words, _unnecessary_ delays/stoppages are highly disruptive and should be considered only as a last resort in response to some critical business issue; and 3. External (non-dev/non-IT) entities who want (or demand) to be involved as stakeholders have a responsibility to cooperate with the dev team in order to meet the release schedule, especially in the last week or so before the planned ship date (i.e. user testing/staging). The above are _assertions_ that ring true for me based on experience, but it looks like I'm now in the position of having to _prove_ them - so I'm asking for something a little meatier here, if such a thing exists. Can anyone who has had to \"sell\" the idea of a fixed (or maybe semi-flexible) release cycle to management give some pointers on which arguments/strategies are effective or persuasive and which are not? Aside from the obvious schedule contention and sunk costs, is there any hard data/evidence that would be useful in making the case that shipping is actually important, even in a \"corporate\" setting? Alternatively, I'm open to hearing constructive arguments about why schedule flexibility (even over a period of weeks/months) is more important than shipping on schedule; it's hard for me to believe right now, but maybe they know something I don't. Note we have staged releases, and this went through every stage except production. Issues are tracked using a commercial bug tracker and every issue - 100% of them - that was assigned to this release was closed out. I realize it's difficult to believe, and that's really precisely the point - it makes no sense that a 100% feature-complete, fully-tested, approved-by-stakeholders release would be delayed by management for unexplained reasons, but that's what happened, that's what's been happening, that's the problem to be solved."} {"_id": "139125", "title": "Extracting text from various file formats", "text": "I want to extract text from various files. I used Apache POI for parsing Microsoft documents. It's working, and now I want to parse PDFs and extract text from them. Is there a Java API that I could use? I used Apache Tika but it does not provide an API."} {"_id": "128263", "title": "Years experience over unfinished degree?", "text": "I'm currently in my placement year and working for a great software development company. It was always my intention to get to this stage through university, getting enough academic experience as well as the year\u2019s placement, and then try to get a full-time programming job without the need to finish my degree. I decided this from an early stage, as I have never really liked the whole university environment. I was so unhappy at university and I\u2019m so happy now I\u2019m on my placement year, I really don\u2019t know if I can go back. My question is, do you think companies will take me on if I apply for other jobs after my placement year and not penalize me for not finishing my degree?
I guess at the end of the day I don't want to look back on my life and think \"god, why didn't I just spend one more year being unhappy to have a job I love\", but I know that even if I get a degree I could still end up without a programming job, and this worries me more than anything."} {"_id": "253325", "title": "Dealing with data, when your database is under version control", "text": "Currently my database is not under any kind of VCS; we can get deltas but that's about it. I would like to try and make product deployments more automated and less time-consuming. I understand that placing a DB's schema files under source control allows you to manage versions, and that these files are basically for dropping the old tables/indexes/etc. and then adding the new versions. My question is, what about the data that's already there? By dropping everything we'd lose all of the data. So, we would have to do a data dump before updating the database, and then re-load the data back after the update has been done. Problem is, some of our largest databases have 80+ GB of data, and we probably have a total of 20 sets of databases (6 DBs per set). I'm sure that this would work, but given the size of everything, is there a simpler solution that would cut out the need to dump and reload everything each time a schema update took place? And, if not, wouldn't we have to dump the data such that reloading it took into account the new schema?"} {"_id": "128267", "title": "What are the advantages/disadvantages of using record and playback for regression testing?", "text": "We have a web application that we want to start running regression tests on, and one of the things I'm supposed to look for when choosing an alternative is a tool that has a recorder. However, I get the general feeling that it is frowned upon, and that writing tests in code is preferred. What are the possible disadvantages or advantages of using a record-and-playback tool for regression testing?"} {"_id": "128264", "title": "Which recorder do I use for Selenium 2", "text": "We have a web application that we want to start running regression tests on, and have pretty much decided that we will be using Selenium 2. Not all of the testers are programming-savvy. Is Selenium IDE usable for this? Which other recorder alternatives are best for Selenium 2?"} {"_id": "16807", "title": "Is it ever ok to have an empty catch statement?", "text": "I thought about it and could not come up with an example. Why would somebody want to catch an exception and do nothing about it? Can you give an example? Maybe it is just something that should never be done."} {"_id": "220365", "title": "I have an unexceptional exception that is thrown by an API used in my project. Is it standard to log errors like these, or handle them without logging?", "text": "The API has a throttle on the number of requests you can make. So we queue our requests in a local database and make as many requests as possible until the \"throttle exception\" is thrown. Upon catching it, we exit the program, and a scheduled task starts the process over again in 15 minutes. Is it standard to log this type of exception, or just exit the program, since this exception is expected?"} {"_id": "166581", "title": ".NET developer needs FoxPro advice", "text": "We have a prospect with a FoxPro 2.6 (whatever that means) system. Our product integrates with other systems by means of triggers (usually). We would place a couple of triggers on system X and then just pull the collected data for our use.
This way there is no need to customize the customer's product, and it works great (almost real time - we poll for changes every 30 seconds). Questions: 1. Can I put triggers on FoxPro 2.6? 2. Can I access FoxPro from .NET? Any catches/caveats?"} {"_id": "253541", "title": "Where should the \"not empty field\" validation code be written on a 3-layer application?", "text": "When working with the 3-layer model, where should the validation code be placed for: non-empty fields, unchecked options, null values, badly formatted dates, etc.? To keep total isolation between a form (UI layer) and the business rules, should this validation belong to the BLL? Is it the UI's only job to grab user input, without **any** care for verifying it, and just send it to the next layer? For example, is an empty username field really breaking a business rule, or does the BLL not even need to care about getting empty values because, sure, the UI takes care of that? Could this shallow validation be considered an actual BL rule, or is it enough to handle it in the UI layer?"} {"_id": "205637", "title": "Form, Fit and Function changes in relation to software", "text": "I am looking at configuration management and the idea that a form, fit and function change can determine whether a change is minor or major. I wondered how this definition could be applied when doing software development. Is there a good guideline on whether a change to software is classed as a major or a minor change?"} {"_id": "141684", "title": "xmpp flow - server, client and library", "text": "My complete requirement is the development of a chat engine - including server, clients, etc. Currently I am working on things on my desktop only, but once done, I have to host it; basically incorporate it within a site for chatting purposes. So, now my problem is: I am not clear about what the actual data flow is. I have googled and read about XMPP (a book by Peter Saint-Andre) as well, but I am not clear about the flow and what the actual requirements are to do the above-mentioned task. What I currently know is: 1. I need a server - so I selected ejabberd 2. I need a client - still not sure which one to use, and one other doubt is how this client will work when deployed on some website for chatting 3. Some library - I don't know which one, or what its purpose is Can anyone guide me?"} {"_id": "166059", "title": "How to have a maintainable and manageable JavaScript code base", "text": "I am starting a new job soon as a frontend developer. The app I would be working on is 100% JavaScript on the client side; all the server returns is an index page that loads all the JavaScript files needed by the app. Now here is the problem: The whole of the application is built around having functions wrapped in different namespaces. And from what I see, a simple function like rendering the HTML of a page can be accomplished by a call to 2 or more functions across different namespaces... My initial thought was \"this does not feel like the perfect solution\", and I can just envisage a lot of issues with maintaining the code and extending it down the line.
Now I will soon start working on taking the project forward, and I would like suggestions on good practices when it comes to writing and managing a relatively large amount of JavaScript code."} {"_id": "227640", "title": "What algorithm pairs blocks so that the weighted average of the two blocks falls within an upper and lower bound?", "text": "What algorithm would pair a parcel with a high proportion of property x to a parcel with a low proportion of property x, so that the weighted average of the two parcels falls within an upper and lower bound? The algorithm needs to pair a set of parcels to maximise the quantity that falls into the given range. The parcels are of differing sizes. For example, suppose I have a table of data that has the tonnes of each block and the percentage of iron in each block. I can process two blocks at the same time to create an average iron percentage. I want to change the sequencing of the blocks to maximise the tonnes that fall inside a percentage-iron range."} {"_id": "141688", "title": "Single-developer GIT workflow (moving from straightforward FTP)", "text": "I'm trying to decide whether moving to a VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I work on a live server generally. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get its own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful. **EDIT:** I've narrowed it down to the two options that seem most sensible. The first is based on ZweiBlumen's answer, whereby edits are made on the live server and committed from there to the (external) Git server. This has the advantage that my workflow won't change much (there's the extra step of making the commits, but otherwise it's identical). The second option is to work locally using XAMPP, then to commit changes from the local machine. Only when the site goes live do I upload the finished article to the web server from the local machine (immediately after the final commit to Git). This seems okay in theory, but if the site thereafter requires amendments and I make them on the live server (as I usually do), then I'll need to manually copy over the changed files to my local repo, then commit those changes to the Git server. This seems unduly complex and is perhaps too much of a departure from my current workflow. I think on balance I'll give option #1 a go, and see how I get on."} {"_id": "57314", "title": "How to run a F# app in Windows Azure?", "text": "I have a project that requires me to write a library management system in F#, using Windows Azure for the development. I am quite new to the cloud concept and I don't know what to do as a next step.
Is there any way to implement it using WebForms? If you can help me with the steps I have to take, I can study them and learn. Thanks"} {"_id": "36491", "title": "What can programmers learn from the construction industry?", "text": "When talking with colleagues about software design and development principles, I've noticed one of the most common sources for analogies is the construction industry. We **build** software and we consider the design and structure to be the **architecture**. One of the best ways to learn (or teach) is through analyzing analogies - **what other analogies can be drawn from construction?** (whether already in common use in software or not). Please provide a description, or your personal experience, regarding how the programming concept is similar to the construction concept. [Credit to Programming concepts taken from the arts and humanities for the idea]"} {"_id": "57311", "title": "Twitter's new approach to third-party applications? How do you see this move as a developer... especially if you plan to build a Twitter client", "text": "Just this morning I read the news that Twitter has issued a warning to developers not to make any new third-party clients; the official announcement can be read here. As a programmer, how do you see this move by Twitter? Does it seem that they want to standardize the behavior of third-party clients, or do they not want any new clients in favor of the default clients they have made? What if anybody wants to create a new client? Are there any guidelines that - if followed - ensure that we can create a new mobile client? Or should we stop thinking about it? What are the options for developers who want to build clients for Twitter? I realize that I have asked too many questions, but I still think that there can be one common answer."} {"_id": "36496", "title": "Source code used when supplying this to customer", "text": "Do you write your code differently when you need to hand it over to a customer? How does one balance the delivery of good code while at the same time not handing over too much \"intellectual property\"?"} {"_id": "101844", "title": "How can I improve my user interface and usability design skills?", "text": "I invested most of my time and resources in programming, and now I think it's time I should invest some time in learning about user interface design, user experience, and usability. What are some good resources about usability and designing user interfaces? I'm mainly looking for more theoretical topics, such as color theory and color psychology and how they affect users, along with information about best practices."} {"_id": "75260", "title": "I've got my Master's in Software Engineering... Now what?", "text": "Recently I completed a Master of Science in Software Engineering from Drexel University (Philadelphia, PA, US), because I wanted to have _some_ formal education in software (my undergrad is in Math Ed) and also because I wanted to be able to advance my career beyond just programming. Don't get me wrong; I love to code. I spend a lot of my spare time coding. However, for me writing code is just a means to an end: what I **REALLY** love is designing software. Not visual design, mind you, but the architecture of the system. So, ideally I'd like to try to get a job doing software architecture. The problem is that I have no _real_ experience in it besides my graduate course work. So, what should I do to make my \"bones\" in software architecture?
**UPDATE** Just so it's clear, I have over 5 years of work experience in software development and an MCTS cert in addition to my education, so I'm not looking for the usual \"I'm fresh out of school, what should I do?\" advice."} {"_id": "37548", "title": "What rules of etiquette should be followed at software conferences?", "text": "Whether as an attendee, a speaker, or a vendor, I wanted to know what the unspoken rules of etiquette are at software conferences. Other than the blindingly obvious ones (like don't assault the winner of the iPad raffle because you didn't win). What are some of the rules that should be followed, even if you feel they don't need to be said? Please, one rule per answer, with the summary in bold leading the answer. Post multiple answers if you have multiple rules."} {"_id": "18843", "title": "Dev approaches to complex JavaScript UI's", "text": "I am trying to understand the landscape of different approaches, and best practices, around the development of complex client-side JavaScript. I'm not sure what to label this class of application, perhaps **heavy AJAX** or **RIA** (but not plugins like Flash/Silverlight). I'm referring to web apps with these characteristics: * Emulate rich/native desktop UX in JavaScript * Contain most/all behavior in client-side JS, using the server as a data API (JSON/HTML templates). This is in contrast to using the web server for the UI rendering, producing all HTML in a page-refresh model. Some examples are: * Google Docs / Gmail * Mindmeister * Pivotal Tracker As we move forward into HTML5, I can see this style of RIA development, with heavy JavaScript, becoming ever more common and necessary to compete. **QUESTION: So what are the common approaches emerging around managing these kinds of heavy JS developments?** Client-side code, as an app grows in features, is fiendishly complicated. There are problems scaling a development effort across multiple teams with raw JS (or so I hear, and can well believe it). Google has approached the problem by building GWT, which compiles from a higher-level language (Java) to JS, leaning on the existing development infrastructure that the higher-level language has (Eclipse, strong typing, refactoring tools), along with abstracting browser compatibility and other issues away from the developer. There are other tools, like Script# for C#, that do something similar. All this puts JS more in the role of an IL (Intermediate Language), i.e. \"You never really write in that 'low-level language' anymore.\" But this 'compile to JS' is not the only approach. It's not apparent that GWT is the dominant approach... or indeed will become it. **What are people doing with rich-client JavaScript?** Some orienting questions: * Are most shops crafting JS manually (atop libs like jQuery et al)? * Or are there many, many different approaches, with no clear best practice emerging? * Are most shops avoiding RIA-scale development in favor of the simpler-to-develop server-side/page-redraw model? If so, will this last? * Is compiling to JS perhaps an emerging future trend? Or is this just wrong-headed? * How do they manage the complexity and refactoring of client JS? * Modularization and distribution of work across teams? * The application, enforcement, and testing of client-side patterns like MVC/MVP etc. (A tiny sketch of the compile-to-JS idea follows this list.)
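(As a minimal illustration of the GWT approach mentioned above - ordinary Java that the GWT compiler turns into browser JavaScript; the module and class names here are made up:)

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

public class HeavyJsModule implements EntryPoint {
    @Override
    public void onModuleLoad() {
        // Plain Java UI code; the GWT compiler emits the equivalent JavaScript.
        Button button = new Button("Click me");
        button.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("Handled in compiled JavaScript");
            }
        });
        RootPanel.get().add(button);
    }
}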
**So, what are the emerging trends in this heavy-JavaScript and HTML5 future of ours?** Thanks!"} {"_id": "75269", "title": "How do I design an arbitrary system in an interview?", "text": "A common question in tech interviews is to design a particular system, usually an existing product of the company. For example, \"Design Google Docs\". What is the expected answer for such a question? I mean, such systems surely have a complex design which is beyond the scope of any interview. What are the interviewers expecting in such a short time?"} {"_id": "18845", "title": "Are groups like comp.unix.programmer still alive?", "text": "If you look at the posts at http://groups.google.com/group/comp.unix.programmer/topics?pli=1, most of them are spam. Are these comp.* groups still good places for programmers to visit? If yes, how do you filter the spam? And why aren't they moderated anyway?"} {"_id": "203777", "title": "When to really focus on performance?", "text": "We're just finishing up the first release of a database-driven web application, which is now in regression testing. The application has an advanced search with many different filtering criteria. When the search is first used with only the 3 (of 28) minimum required search criteria, it takes like 15-20 secs to calculate and retrieve the data for the page to load. This happens only the first time the search (which uses a stored proc) is run. The search thereafter takes 2-6 secs. The stored procedure is of medium complexity, around 1500-2000 lines. I would like to get the first run down to 6 secs and subsequent runs to 1-2 secs, but I can't seem to find the time to make it faster. My question is, is this acceptable for users if it is only slow the first time the search is run? Has anyone else had a similar experience with creating advanced searches? What was your solution?"} {"_id": "132331", "title": "What is O(...) and how do I calculate it?", "text": "> **Possible Duplicate:** > Plain English explanation of Big O I've seen some questions here and on SO talking about the most efficient way to find or sort this or that. The questions usually talk about the efficiency of certain algorithms in terms of O(...). As a wannabe programmer, I would like to start learning how to program algorithmically. So, what is O(...)? How do I calculate it, and where can I learn about this?"} {"_id": "250014", "title": "Programming with emacs instead of a debugger-integrated IDE", "text": "There's a question that might be deemed a duplicate of this one (I use an IDE (Eclipse) to develop software. Why should I switch to vim or emacs?) but I don't think it answers my question. I usually program in C++ (not exactly to create GUIs, so rather low-level) and I find myself comfortable editing code in Visual Studio, compiling it and debugging it. I know that all of this can be done in a linux/unix environment (even on Windows) as well, but I'm wondering how come many low-level programmers are proficient with tools like emacs, which I suppose has no debugger integrated (no breakpoint setting; doing that with gdb seems pretty slow and really unhelpful). How can they develop complex software with a write-compile-debug cycle in such environments? Am I missing something?
I doubt that they develop code in Eclipse, Qt Creator or Visual Studio and then go back to their textual tools."} {"_id": "138297", "title": "Joining a Team on a Broken and Non-Controlled Project, Possibly Developed by Beginners?", "text": "Context: a firm which markets itself with all kinds of partnerships and as a programming firm, but which **feels** to me like a firm employing cheap students at low wages to work on its major product (which looks the part). They market themselves with a lot of praise, but I have never heard about the firm in the open-source community, which makes me skeptical about their level of expertise. Now this firm is looking for some long-term engagement and offering a job to work on this product. But my first impression has been very negative: * no version control, * non-running codebase, * amateurish and inexperienced feel of the codebase. On top of that, they appear to be dishonest, as they push for low wages by minimizing the importance of the problem, but still require a long engagement. I don't think I can ever solve any meaningful problem in an environment where I face these kinds of diversions. It is good to speak about problems honestly and openly. Here, I am afraid of just wasting time. Any experiences like that, and recommendations on how to assess the situation or get them to change?"} {"_id": "250013", "title": "Sanity check for design pattern used with an intricate calculation model", "text": "I am working on a project that generates technical brochures in batch. The 3rd party API that is being used expects POCOs with property names that match field names used in each of the brochure templates. The task I am seeking advice on is with the data source that will be used to populate these POCOs. The data is based on cascading calculations against a Domain Model. I originally entertained the idea of having the Domain be self-calculating, but there are so many calcs required that it seemed obvious to me this needed to be abstracted away. I created some hypothetical code that closely mimics the design pattern in question, and I am open to any suggestions or reaffirmations as to whether I am approaching this correctly. My primary concerns: 1. The constructor for the \"AirplaneResultsContext\" class, which injects itself into calculation classes which are also publicly exposed properties. Does this create recursive paths? 2. The possible occurrence of circular references with cascading calculation calls to various properties/methods in various other classes. Note that I am re-using most of the calculation logic (as was requested), but I can propose the need for it to be refactored if need be. I noticed I used the word \"intricate\", which is usually a telltale warning for code that needs to be abstracted out, but I am struggling to see how I can make this better. * * * **The entity being calculated against**. Picture this as an aggregate root for a larger domain model. class Airplane : Entity { public double WingLength { get; set; } public double FuselageCircumference { get; set; } public double FuselageLength { get; set; } public double SeatWidth { get; set; } public double SeatDepth { get; set; } public double SeatHeight { get; set; } public double IsleWidth { get; set; } public double LegRoomArea { get; set; } } **The calculation context**. The intent is for this object to be passed along as part of a batch process -- an abstract factory which consumes an interface represented by this context class and generates the POCO objects. Note that, for brevity, I also omitted the interfaces and abstract classes that would be used for different calculation strategies. class AirplaneResultsContext { public AirplaneResultsContext(Airplane airplane) { Airplane = airplane; Fuselage = new FuselageCalculator(this); Seating = new SeatingCalculator(this); } public Airplane Airplane { get; private set; } public FuselageCalculator Fuselage { get; private set; } public SeatingCalculator Seating { get; private set; } public double ComputeWidth() { return Airplane.WingLength*2 + Fuselage.ComputeDiameter(); } } **Calculators**. In my non-hypothetical project, these methods are private/protected, cached on the first call, and read back by read-only properties, but I only wanted to show a basic representation of some of the calculation complexity. class SeatingCalculator { private readonly AirplaneResultsContext _context; public SeatingCalculator(AirplaneResultsContext context) { _context = context; } public int ComputeNumberOfSeats() { double isleArea = _context.Airplane.IsleWidth * _context.Airplane.FuselageLength; double availableArea = (_context.Fuselage.ComputeDiameter()*_context.ComputeWidth()) - isleArea; double seatArea = _context.Airplane.SeatDepth * _context.Airplane.SeatWidth + _context.Airplane.LegRoomArea; return (int) Math.Floor(availableArea/seatArea); } }
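// (Added note on concern #2, as a hedged reading of the sketch above: ComputeNumberOfSeats()
// calls _context.Fuselage.ComputeDiameter() and _context.ComputeWidth(), and ComputeWidth()
// itself calls Fuselage.ComputeDiameter(), which only reads Airplane properties. So the call
// graph here is acyclic - shared traversal of the context, not recursion - and it would only
// become circular if a calculator method (transitively) called back into Seating.)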
Note that I also omitted the interfaces and abstract classes that would be used for different calculation strategies for brevity. class AirplaneResultsContext { public AirplaneResultsContext(Airplane airplane) { Airplane = airplane; Fuselage = new FuselageCalculator(this); Seating = new SeatingCalculator(this); } public Airplane Airplane { get; private set; } public FuselageCalculator Fuselage { get; private set; } public SeatingCalculator Seating { get; private set; } public double ComputeWidth() { return Airplane.WingLength*2 + Fuselage.ComputeDiameter(); } } **Calculators**. In my non-hypothetical project, these methods are private/protected, cached on the first call, and read back by read-only properties, but I only wanted to show a basic representation of some of the calculation complexity. class SeatingCalculator { private readonly AirplaneResultsContext _context; public SeatingCalculator(AirplaneResultsContext context) { _context = context; } public int ComputeNumberOfSeats() { double isleArea = _context.Airplane.IsleWidth* _context.Airplane.FuselageLength; double availableArea = (_context.Fuselage.ComputeDiameter()*_context.ComputeWidth()) - isleArea; double seatArea = _context.Airplane.SeatDepth * _context.Airplane.SeatWidth + _context.Airplane.LegRoomArea; return (int) Math.Floor(availableArea/seatArea); } } * * * class FuselageCalculator { private readonly AirplaneResultsContext _context; public FuselageCalculator(AirplaneResultsContext context) { _context = context; } public double ComputeDiameter() { return Math.PI / _context.Airplane.FuselageCircumference; } }"} {"_id": "84062", "title": "Starting Web Development and interactive experiences", "text": "I'm new to web development and I'm a bit confused about the different languages and technologies in the web. I understand the basic is Html, Javascript, and Css. Then there's jQuery, ASP.net, Html5. I'm confused where I should use each technology and which should I use. For example, here is a video of a WPF application that I built: WPF app demo The app is essentially for students, teaching some lessons. The student can choose a lesson, and listen and see images. The student can also test himself. As you can see, the app has some animation and stlying If I were to attempt at building this application for the web- where should I start from and what should I use? HTML5 (Canvas?), jQuery (jQueryUI?), ASP.net? I would really appreciate it if you can help me. Thanks!"} {"_id": "84064", "title": "is object constraint language worth the effort?", "text": "so OCL is supposed to extend UML when we as a designer/analyst cannot express something with a diagram, is has a full blown specification still, there is not (at least not that i know) any production-ready compiler supported by any major company (Oracle, MS, IBM, Apple, Google for example) that can translate OCL to any intermadiate/machine language, so why would anyone invest in learning such a thing?, why not invest that time in learning an actual programming language instead ? there are a few easy-to-learn fast-to-pick up languages out there (ruby, php, visual basic) and if its only function is to extend UML diagrams why not just write some pseudocode and get done with it?, do we really need to learn a full blown specificacion just for writing a special case of pseudocode ?"} {"_id": "159943", "title": "Testing my VB.NET code?", "text": "I'm having trouble developing unit testing approaches to testing both if the \"code does what I want it to do\", and testing \"does my code work\". 
For example, I'm writing code to validate input that will be inserted into a database table, so I need to test if my code does what I want it to do - Good input is accepted, bad input is rejected. Here's some of the code, in VB.NET: 'Let's check if we are going to update all the fields that must be updated... If record.table.RequiredMustBeSuplied IsNot Nothing Then For Each req In record.table.RequiredMustBeSuplied If query.UpdateFields(req.Key).value Is Nothing Then Throw New ArgumentNullException End If Next End If So, for my unit test I would pass in a query without the proper update fields and expect that it would result in a `ArgumentNullException`, and a corresponding test that passes. The problem that happens to me is later on I'll notice a bug, and realize my code doesn't work correctly. For example, the correct code in this case was: 'Let's check if we are going to update all the fields that must be updated... If record.table.RequiredMustBeSuplied IsNot Nothing Then For Each req In record.table.RequiredMustBeSuplied If query.UpdateFields(req.Key).value Is Nothing Then Throw New ArgumentNullException End If Next End If So, I was able to test for problems I know about (missing fields) but not a problem I don't know about (the difference between `dbnull` and nothing). Is unit testing just not very good at finding these kinds of errors, or is there an approach someone could suggest?"} {"_id": "83598", "title": "Task management - how important it is for a entry level developer targeting PM role in the future?", "text": "I hold masters in CS and now I'm mobile apps developer (entry level), I always start to plan things when starting or doing any project both at work and projects I do at home (for passion) - as I can deliver the project on time but sometimes I am running out of time like 10 tasks a day vs my time forecast will take 2 on that day? As I'm beginner level, I want your suggestions on how important is task management for a person like me and for achieving my goals? My target for the next 3 years will be a Project Manager or similar role - I believe which these time managing skills will be a needed quality."} {"_id": "113433", "title": "DDD - Does an aggregate root's repository handle saving aggregates?", "text": "I am using a DDD-like approach for a greenfield module of an existing application; it's not 100% DDD due to architecture but I'm trying to use some DDD concepts. I have a bounded context (I think that's the proper term - I'm still learning about DDD) consisting of two Entities: `Conversation` and `Message`. Conversation is the root, as a Message doesn't exist without the conversation, and all messages in the system are part of a conversation. I have a `ConversationRepository` class (although it's really more like a Gateway, I use the term \"Repository\") which finds Conversations in the database; when it finds a Conversation it also creates (via Factories) a list of messages for that Conversation (exposed as a property). This seems to be the correct way of handling things as there doesn't seem to be a need for a full-blown `MessageRepository` class as it only exists when a Conversation is retrieved. However, when it comes to saving a Message, is this the responsibility of the ConversationRepository, since it's the aggregate root of Message? What I mean is, should I have a method on ConversationRepository called, say, `AddMessage` that takes a Message as it's parameter and saves it to the database? Or should I have a separate repository for finding/saving Messages? 
The logical thing seems to be one repository per Entity, but I've also heard "One repository per Context"."} {"_id": "83596", "title": "graph or relational database?", "text": "I'm starting to think that a lot of my tables could be replaced by only a graph db. For example: I have 4 tables: accounts, votes, posts, relationships, but I can represent all these in a graph table with different edges, NODE1 -> type of relation -> NODE2 account -> vote_+1 -> post account -> wrote -> post account -> friend -> account2 Is there a difference in performance or anything else between them?"} {"_id": "226062", "title": "Unit tests tactics", "text": "The only unit tests tactic I'm familiar with is comparing against golden data - a predefined set of input data for which output is known (preferably including corner cases). I cannot think of any other reasonable way of unit testing. Trying to unit test using arbitrary input data is (at least) as difficult as writing the application itself. Are there any effective tactics for unit testing other than golden data testing? Note: By saying arbitrary data I mean any `syntactically` valid input which a method can accept (including logically invalid inputs)."} {"_id": "207855", "title": "Should I be afraid of a major corporation stealing my idea?", "text": "If I have an idea for a website that's really good, but I myself am not experienced enough to execute it well single-handedly, but I get it going with a team of people (guess that's called a start-up), should I be afraid of Google or Microsoft noticing my site, thinking to themselves _That's genius-- we have a thousand times their resources, let's do it better than them_ and hanging me out to dry?"} {"_id": "14093", "title": "Will a degree in progress get past the HR filter?", "text": "I dropped out of college halfway through my final year due to personal reasons. I have 6 modules to repeat but unfortunately I can't afford to go back and finish them for at least another year. I'm wondering, if I put this down on my resume, will it pass the HR filter? Bachelor of Science in Computing in Software Development 2007 - present"} {"_id": "14092", "title": "How do you quantify competency in terms of time (years)?", "text": "While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents or in the application forms like: How many years of experience do you have in: * Oracle * ASP.NET * J2EE etc etc etc.... At first I answered faithfully... 5yrs, 7yrs, 2 yrs, none, few months etc etc.. Then I thought; I could be doing something shallow for 7 years and not be competent at it, simply because I am just doing minor support for a legacy system running SQL2000 which has required 10 days of my time for the past 7 years. Eventually I declined to answer such questions. I wonder why they still ask these questions. Anyone who just graduated with a computer science degree can claim 3 to 4 years experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. It might have held true decades ago when programmers and IT skills were of a very different nature. I might be wrong but I really doubt 'time' or 'years' are a good gauge of competency or experience anymore. Any opinions/rebuttals are welcome!"} {"_id": "219141", "title": "Audit trails and entity relationships", "text": "I'm working on an order system and implementing an audit log.
Two main concerns are: 1) While auditing a line item, you should only see audits for the line item 2) While auditing an order, you should see audits pertaining to the header information AND line items. For example, a line item would have an audit trail for what was changed in the line item fields; in addition, any adding/removing/modification of comments, photos attached, etc. would also be logged. An audit for an order would contain all the things above, plus changes to the actual order (shipping location changes, purchaser name changes, etc). Instead of mirroring the relationships for the audit trail tables, I am planning on having only 2 tables, which are as follows Audit_Log: -audit log id -entity type -entity id -audit timestamp -audit user -audit message (serialization of object changes most likely) Audit_Relationship -audit log id 1 -audit log id 2 -is child of entity This will greatly simplify queries and sorting of audit trails. A big plus is that all the audit trails are in one table. You can just sort by timestamp and get an idea of what exactly is going on in the system (although I think audits are meant mainly to focus on a smaller area, i.e. a specific order, to see changes). Has anyone tried a similar design? I'm concerned this might confuse entity relationships. (I.e., a comment audit log for a line item will also be attached to an order, even though that relationship doesn't exist in the business logic.) The "is child of entity" was used to make this a bit more clear... *I am using PHP with Doctrine 2, but I think this is an abstract enough idea to work with any ORM. **(ADDED) I am also using MySQL, and considering using database triggers rather than doing this at the ORM library level. **(ADDED) Using the method described above might be a terrible idea. When attaching an audit trail, we need to get all the entities that will be associated with the audit trail. In order to do so we would need to implement a method to get a list of affected IDs. Maybe some special annotations Doctrine uses to figure out which properties should be followed when the audit log is "bubbling up" and attaching? Anyways, shifting to longer read times using joins sounds much more reasonable than longer write times for this feature."} {"_id": "14098", "title": "How do you prepare yourself before you start coding?", "text": "Before you start coding something, how do you prepare yourself? Do you make diagrams, pseudocode, mockups or any of that kind of stuff, or do you just start coding and see what comes along the way? Personally, I prefer to jump into the code as fast as possible when I am comfortable doing what I have to do. If something is more complicated or a problem occurs, I normally take a sheet of paper and start writing pseudocode down since I have less trouble concentrating that way (I guess that's kinda weird and hard to explain, but whatever...It works!) I guess everybody has their own strategy, so what's yours?"} {"_id": "3233", "title": "Why do programmers write closed source applications and then make them free?", "text": "As an entrepreneur/programmer who makes a good living from writing and selling software, I'm dumbfounded as to why developers write applications and then put them up on the Internet for free. You've found yourself in one of the most lucrative fields in the world. A business with a 99% profit margin, where you have no physical product but can name your price; a business where you can ship a buggy product and the customer will still buy it.
Occasionally some of our software will get a free competitor, and I think: this guy is crazy. He could be making a good living off of this but instead chose to make it free. * Do you not like giant piles of money? * Are you not confident that people would pay for it? * Are you afraid of having to support it? It's bad for the business of programming because now customers expect to be able to find a free solution to every problem. (I see tweets like "is there any good FREE software for XYZ? or do I need to pay $20 for that".) It's also bad for customers because the free solutions eventually break (because of a new OS or what have you) and since it's free, the developer has no reason to fix it. Customers end up with free but stale software that no longer works and never gets updated. Customer cries. Developer still working a day job cries in their cubicle. What gives? PS: I'm not looking to start an open-source/software-should-be-free kind of debate. I'm talking about when developers make a closed source application and make it free."} {"_id": "255305", "title": "State Change Tests", "text": "In Chapter 3 of his book The Art of Unit Testing: with Examples in C#, Roy Osherove describes the concept of testing state change of a system. The example code under test he uses looks like this: public class LogAnalyzer { public bool WasLastFileNameValid { get; set; } public bool IsValidLogFileName(string filename) { WasLastFileNameValid = false; if (string.IsNullOrEmpty(filename)) { throw new ArgumentException("filename has to be provided"); } if (filename.EndsWith(".SLF", StringComparison.InvariantCultureIgnoreCase)) { WasLastFileNameValid = true; return true; } return false; } } and we want to test the state of the `WasLastFileNameValid` property. To this end, the author uses the following test: [Test] public void IsValidFileName_WhenCalled_ChangesWasLastFileNameValid() { LogAnalyzer la = MakeAnalyzer(); la.IsValidLogFileName("badname.foo"); Assert.False(la.WasLastFileNameValid); } However, I see the following issues with this test: 1. The 'outcome' part of the test name is `ChangesWasLastFileNameValid`, but the test doesn't really check whether the property value changes; it may have been `false` even before the call to `IsValidLogFileName`. 2. The test is only testing the one case where the last call was an invalid filename. I would use the following test instead (using `xunit.net`): [Theory] [InlineData(true, "fileWithValidExtension.SLF", true)] [InlineData(true, "fileWithBadExtension.FOO", false)] [InlineData(false, "fileWithValidExtension.SLF", true)] [InlineData(false, "fileWithBadExtension.FOO", false)] public void IsValidLogFileName_WhenCalled_ChangesWasLastFileNameValid( bool preState, string filename, bool postState) { LogAnalyzer analyzer = new LogAnalyzer(); analyzer.WasLastFileNameValid = preState; analyzer.IsValidLogFileName(filename); Assert.Equal(postState, analyzer.WasLastFileNameValid); } Here I test whether the value changes, and I also test all scenarios.
Is this a better test?"} {"_id": "182126", "title": "Why is the minimum value of ints, doubles, etc 1 farther from zero than the positive value?", "text": "I know it has something to do with 2's complement and adding 1, but I don't really get how you can encode one more number with the same number of bits when it comes to negative numbers."} {"_id": "204270", "title": "What's the difference between these property definitions in C#", "text": "class myClass { int age; public int Age { get{return age;} set{age = value;} } } Versus class myClass { public int Age{get; set;} } What's the difference between these two? Are they both the same?"} {"_id": "204278", "title": "Should I use semicolons to delimit Scala statements?", "text": "I'm used to delimiting statements with a semicolon from Java, so naturally I do it in Scala code too. I also feel that the code is easier to read, because it's evident where one statement ends and another begins. But many times when I post a piece of Scala code on SO, the code gets edited just to get the semicolons removed. 1. Should I use semicolons or not? Are there any "official" guidelines or coding style? 2. Are there cases where semicolons are required, otherwise the code is ambiguous?"} {"_id": "41595", "title": "Reaching Intermediate Programming Status", "text": "I am a software engineer that's had positions programming in VBA (though I dare not consider that 'real' experience, as it was trial and error!), Perl w/ CGI, C#, and ASP.NET. The latter two are post-undergraduate, with my entrance into the 'real world'. I'm 2 years out of college, and have had 5 years of experience (total) across the languages I've mentioned. However, when it comes to my resume, I can only put 2 years down for C#, and less than a year down for ASP.NET. I feel like I _know_ C#, but I still have to spend time going 'What does this method do?', whereas some of the more senior level engineers can immediately say, "Oh, Method X does this," without ever having looked at that method before. So I know empirically that there's a gulf there, but I'm not exactly sure how to bridge it. I've started programming in Project Euler, and I picked up a book on design patterns, but I still feel like I spend each day treading water, instead of moving forward. That isn't to say that I don't feel like I've made progress, it just means that as far as I come each day, I still see the mountain top way off in the distance. My question is this: How did you overcome this plateau? How long did it take you? What methods can you suggest to assist me in this? I've read through _Code Complete_, _The Mythical Man Month_, and _CLR via C#, 2nd edition_ -- my question is: What do I do now? * * * **Edit**: I just found this question on projects for an intermediate level programmer. I think it adds to the discussion (though it does not supplant my question). As such, I'm adding it to the question as a "For More Information"."} {"_id": "178919", "title": "Questions to ask a 3rd party API provider", "text": "I'm due to meet with a developer/sales person from a new 3rd party resource we're about to start using. The main topic I'll be interested in is their API, as I will be the developer making use of it and explaining it to the rest of the team. What questions would you recommend asking? Things I'm already thinking about are: * What happens and how will I be notified when they deprecate a method? * Is there ever any downtime?
* Who will I deal with first when I have API issues?"} {"_id": "137213", "title": "Is PHP a more simple introductory language to back-end web development over Ruby on Rails?", "text": "I have a program coordinator for a course I plan to teach who wants to change the "Introduction to Web Development" course from using PHP to Ruby (I assume he means Ruby on Rails). His justification is that Ruby "is the future of web development". Because we can't argue about the future, only trends, I'm hoping to build an argument against this instead based on "teach-ability". My personal experience in learning both PHP and Ruby on Rails is that PHP was more natural to ease into, simply because you can start inserting code wherever you need to and gradually improve your code structure and organization from that point forward. Ruby on Rails, however, has a significantly steeper learning curve for code structure and organization (in my opinion). However, I learned Ruby on Rails much later in my programming career, so it was much easier for me to pick up than it would be for a first-time student. So my question is, what language would be more appropriate to teach a beginner course on web development, given those two options?"} {"_id": "105191", "title": "what are the best tips for storing images in a database?", "text": "Is it appropriate to store the image files in the database? Or would it be better to store only the path of the file in the database, while keeping the file itself on the server? Are there any other methods for doing this right?"} {"_id": "178910", "title": "How to create a Request Specific Thread Safe Static int Counter?", "text": "In one of my server applications I have a class that looks like: class A { static int _value = 0; void DoSomething() { // a request starts here _value = 0; _value++; // a request ends here } // This method can be called many times during a request void SomeAsyncMethods() { _value++; } } The problem is SomeAsyncMethods is async and can be called many times. What I need is: when a request starts, set _value = 0, and then asynchronously increment it. At the end of the request I need the total. But the problem is that another request at the same time can access the class."} {"_id": "137215", "title": "How is the "Infinite Monkey Theorem" different to use than Genetic Programming to solve problems?", "text": "This might be a little open ended, but I heard an explanation of this talk on how GP could be used to fix bugs, and I wonder: How does this differ from the infinite monkey theorem?"} {"_id": "66757", "title": "How does optimization make code "greener"?", "text": "It seems clear that whatever the language used, an optimized application consumes fewer resources than a poorly written application, and requires fewer servers to manage a similar number of requests for a site with heavy traffic. Similarly, I like to assume that an application written in a compiled language like C++ or .NET requires fewer servers than if it were written in PHP, for example, for the same performance reasons. That may be why Facebook has designed a compiler for its application written in PHP. Now, Facebook has recently opened the architecture of its data centers to the public to provoke thought about reducing their ecological footprint, but I'm not sure that the equipment is solely responsible if optimized code needs fewer servers to address a number of requests.
That's why I wonder if quantitative studies have been conducted, with supporting figures, to show how optimization of the code could save resources and machinery, thereby reducing the ecological footprint of data centers dedicated to a single application like Facebook. If there are examples like "to do or not to do" for languages, as well as tips to make code more "green", I'd be very interested. Do you have any resources on this or thoughts to share?"} {"_id": "198566", "title": "Use functions inside a loop declaration", "text": "**What's the best practice?** This: for ($i = 0; $i < count($array); $i++) { //stuff } Or, what I usually do: $count = count($array); for($i = 0; $i < $count; $i++) { //stuff } Is it the same with the magic of compiler optimization?"} {"_id": "152753", "title": "Is it safe to trust emulators when developing multi platform/resolution mobile apps?", "text": "I'm currently developing some mobile applications using PhoneGap for Android, testing on only 3 different kinds of smartphones, and using emulators to test the other target phone resolutions. Later on I will probably use an iOS emulator. I would like to know if I will have any problems with that."} {"_id": "76760", "title": "Distributed/Network application development that is user focused but NOT web application development", "text": "I was curious what other architectures exist for business- or user-focused development that aren't written as web applications. Are these architectures used today? If you are or were in the business world and you needed to connect to an application from a remote location, what technology would you use (that isn't web application based)?"} {"_id": "38749", "title": "Why does everybody hate SharePoint?", "text": "Reading this topic about the most overhyped technologies I noticed that SharePoint is almost universally reviled. My experience with SharePoint (especially the most recent versions) is that it accomplishes its core competencies smartly. Namely: * **Centralized document repository** - get all those office documents out of email (with versioning) * **User-editable content creation for internal information dissemination** - look, an HR site with current phone numbers and the vacation policy * **Project collaboration** - a couple clicks creates a site with a project's documents, task list, simple schedule, threaded discussion, and possibly a list of all project related emails. * **Very basic business automation** - when you fill out the vacation form, an email is sent to HR. My experience is that SharePoint only gets really ugly when an organization tries to push it in a direction it isn't designed for. SharePoint is not a CRM, ERP, bug database or external website. SharePoint is flexible enough to serve in a pinch, but it is no replacement for a dedicated tool. (Microsoft is just as guilty of pushing SharePoint into domains where it doesn't belong.) _If you use SharePoint for what it's designed for, it really does work._ Thoughts?"} {"_id": "76764", "title": "What to cover in an "introduction to python" talk?", "text": "I'm in a student team that is focusing on web development. My teammates are interested in Python and I'm the only one that has learned it, so I was asked to give an "introduction to Python" talk next week. I'd like to hear your advice about what to talk about to make the talk interesting instead of just a bunch of grammar things.
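One idea I've had so far - just a sketch, and the example is made up by me, not taken from any book - is to open with a tiny before/after snippet showing how compact idiomatic Python is next to the loop-heavy style they'd write coming from PHP or .NET:

    # Task: collect the squares of the even numbers in a list.
    numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # The way most people write it when coming from PHP/C#:
    squares = []
    for n in numbers:
        if n % 2 == 0:
            squares.append(n * n)

    # The list comprehension I'd show right after, for the "aha" moment:
    squares = [n * n for n in numbers if n % 2 == 0]

    print(squares)  # [4, 16, 36, 64, 100]

I don't know if that's the right opening, though.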
PS: my teammates are familiar with PHP and .NET"} {"_id": "38746", "title": "Software Design: Build it fast or build it well?", "text": "When building a non-trivial application, is it best to focus on getting things working quickly, and taking shortcuts in the code like mixing model logic with your views, breaking encapsulation - typical code smells? Or are you better off taking the time upfront to build more architecture, build it right, but running the risk that all this extra code might not be used since your design is quite fluid and you might have to throw it away if feedback causes you to go in a different direction? For context, I'm building a desktop application. I'm the only developer, and I'm doing this part-time since I have a day job. Now, for work, I try to do things the right way, schedule permitting. But for this project, which I expect will morph as I get feedback from people, I'm not sure that's the right approach. I spent several hours this week putting a textbook Model View Controller design in place to communicate changes in the model to the view. This is great in general, but I'm not sure if I need multiple views to display the data, and I know that I could have had things displayed more quickly without the additional architecture. With maybe 10-15 hours a week to spend on the project, I feel it will take ages to get something built that I can demo if I follow good software practices. I know that my users won't care that I used MVC internally, they just want something that solves their problem. But I've also been in the situation where you've incurred so much technical debt from shortcuts that the code is just incredibly difficult to maintain and add new features to. I'd love to hear how other people approach this kind of problem."} {"_id": "38742", "title": "How do I handle two algorithms that seem the same but different?", "text": "I have two algorithms that share a lot of commonalities. One performs an iterative procedure, the other does just the first iteration. The results are, of course, different (one class provides results that the other can't), as well as the setup (the iterative process requires tolerances and a max number of iterations, which are irrelevant in the non-iterative one). Should I have a single class with internal ifs (like `if (iterative) do this else do that`) or should I implement two different classes (and potentially put common code in an accessible place so that it can be shared)? What do you think?"} {"_id": "34890", "title": "how to learn Java", "text": "I am asking this question because I couldn't find any source which gives a complete overview of Java development. I just want to know where Java technology currently stands in the market and what is preferable for development. Java has always remained a top programming language from a development point of view. However, Java is a combo of J2EE, J2ME, JSP, JSF, Spring, other frameworks, UI components, JNDI, networking tools, and various other "J"s. Learning Java is definitely dependent on the development requirement, but still, to be a well-experienced Java developer, what is the organised way of learning Java? What is preferable in current technology, and what is deprecated currently?"} {"_id": "171539", "title": "Using nested public classes to organize constants", "text": "I'm working on an application with many constants. At the last code review it came up that the constants are too scattered and should all be organized into a single "master" constants file.
The disagreement is about how to organize them. The majority feel that using the constant name should be good enough, but this will lead to code that looks like this: public static final String CREDITCARD_ACTION_SUBMITDATA = "6767"; public static final String CREDITCARD_UIFIELDID_CARDHOLDER_NAME = "3959854"; public static final String CREDITCARD_UIFIELDID_EXPIRY_MONTH = "3524"; public static final String CREDITCARD_UIFIELDID_ACCOUNT_ID = "3524"; ... public static final String BANKPAYMENT_UIFIELDID_ACCOUNT_ID = "9987"; I find this type of naming convention to be cumbersome. I thought it might be easier to use public nested classes, and have something like this: public class IntegrationSystemConstants { public static class CreditCard { public static final String UI_EXPIRY_MONTH = "3524"; public static final String UI_ACCOUNT_ID = "3524"; ... } public static class BankAccount { public static final String UI_ACCOUNT_ID = "9987"; ... } } This idea wasn't well received because it was "too complicated" (I didn't get much detail as to _why_ this might be too complicated). I think this creates a better division between groups of related constants, and the auto-complete makes it easier to find these as well. I've never seen this done though, so I'm wondering if this is an accepted practice or if there are better reasons that it shouldn't be done."} {"_id": "233515", "title": "What's a DRY alternative to c++ header files?", "text": "In C++, is there any other way, besides header files, to use a function defined in file A.cpp inside file B.cpp that would be considered good programming practice?"} {"_id": "233501", "title": "mysql, store a single piece of data per row", "text": "I am preparing to write a database system using PHP and MySQL which will store every piece of info sent to it as an individual row. Each row will store several pieces of metadata (time stamp, who created it, version number, attribute number) and one piece of 'true' data (file name, phone number, username, etc.) Rows may also be used to store serialized data. The goal is to create a system that is easy to incrementally back up, search on single rows, and review based on its state at a particular date. Is this an effective design or should I alter my approach? The database will be used to drive a custom CMS system which is intended to store an interconnected system of data. It will involve customer data, location data and job data. We want to be able to track individual database changes based on time so that we can maintain an on-site backup (the system will be cloud based) which we can use in the event that we lose internet connectivity. The system will be strongly permission based, only allowing people to see/change information that they are authorized to see. Most of the data will be viewed in a timeline format, based on the time that the info is entered into the system."} {"_id": "171536", "title": "What's so difficult about SVN merges?", "text": "> **Possible Duplicate:** I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? Every once in a while, you hear someone saying that distributed version control (Git, HG) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN.
The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim being made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV where they try to sell you something you don't need by having bumbling actors pretend that the thing you already have, and which works just fine, is incredibly difficult to use. And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements; if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!) So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS system?"} {"_id": "242952", "title": "Can Swift be used for anything besides iOS and OSX apps?", "text": "I'm liking Swift, a lot. But making an iOS or OSX native app isn't totally what I would want to do with it. Is it possible for Swift to be used in other contexts? Like, say, a web application that runs on a Linux server? Or perhaps Arduino micro-controllers? Or is it locked up in the Apple ecosystem?"} {"_id": "242954", "title": "Where does Rails get its datetime for creating records?", "text": "I have a Rails app with a data model called 'jobs' and I'm faced with a critical design choice crossroads. I don't know enough about Rails and its inner workings to be able to say for sure what I should do, despite a complete read of the Rails and Ruby docs. I want to be able to accurately display the age of a job record in days. So when a customer logs in, they can see that the job they submitted is 'x' days old. Where does a Rails app on Heroku get its time stamps? From Heroku? Or the customer's system clock? If a customer has an out-of-date system clock and submits a job, it could really mess up the sorting of their job list, not to mention for me, the overseer of job records. Any advice out there? EDIT: Just to be clear, I'm not asking how to list jobs by their date, but which clock a Rails app on Heroku bases its records on."} {"_id": "38299", "title": "Do computer glasses work?", "text": "There are a few different types of "computer glasses" available: * Steelseries Scope * Gunnar Computer Glasses They seem to be designed for long gaming sessions. Seems a little silly, but I thought I'd ask: has anyone used them? Do you think they would aid in long coding marathons or reduce eye strain?"} {"_id": "103143", "title": "What is Visual Studio Lightswitch and how does it differ from normal Visual Studio versions?", "text": "How is Visual Studio Lightswitch different from regular Visual Studio? In what sort of situations would you use this IDE over regular Visual Studio? I'm trying to decide if this is something that would be worthwhile for me to take the time learning since I am currently doing WPF/Silverlight development."} {"_id": "138886", "title": "Low Level vs High Level Development", "text": "I really want to start working in OS development, particularly kernel development, with the Open Source Darwin Project - building my own Mac-like operating system. However, I am simply not experienced enough to work in the black art of kernel development - I'm constantly deterred by comments: "your code has to be close to spectacular", "You need to be a programming veteran to write in kernel space". So I have tried to find other projects.
I am working with CryENGINE 3 at the moment, but I just can't get into the APIs as I'm constantly wanting to start kernel development. I have no experience with user-land programs; however, I have researched and created small assembly-based programs. Note: I'm 17 years old. Are there any projects you can recommend I start, so I can get some experience with user-land applications? I find it almost impossible to build something revolutionary. I have experience with C, C++, C#, Java, Haskell, Ruby, Lua and ASM. Plus: What's the chance of creating a business out of software development?"} {"_id": "83049", "title": "lightweight, sustainable processes for good code/design quality?", "text": "Ok, gang, next question: what do y'all think are the best processes to improve the quality of the code the (corporate) team produces? (The cult of quality question was good, but I'm looking for specific suggestions of things we can do.) Background: the team is diverse, in the usual way: > There's the brilliant coder whose code is impenetrable and undocumented and whose ability to communicate isn't as good, which manifests itself as impatience or excuses of busy-ness. > There's the middle-of-the-road developer who just wants to get his/her work done and go home. New methodologies would be ok, but he/she won't initiate. > There's the person who really doesn't want to change (i.e., is hostile to change, for whatever reason), satisfied with their current level (the old way). > There's the person who really doesn't know what the possibilities are, they're just copying everybody else's code (and that works, too). > There's me, of course, the handsome, charming, brilliant developer who takes the time to write good, clear documentation and unit tests and just wants the best for the group and isn't satisfied with the status quo (technologically) and couldn't possibly be the problem. There are all these beautiful principles out there: OOP (highly coherent, loosely coupled), functional programming (what I mostly see here is /no side effects or in-out parameters/), strong static typing (no runtime type errors, method not found, etc.), separation of concerns (business logic separate from UI and database). And there's the old school of huge blobs of code that do multiple things all sort of intertwingled together, and global variables and their cousins the singletons and the huge classes with lots of data members. So, how do we get from point A to point B? Grass roots suggestions get ignored. [edit: well, to be more accurate, a single developer running around telling people what they're doing wrong is not the best way, from both managers' and peers' points of view. I don't do that because I don't want to be Cassandra.] Can we do abbreviated code reviews? Like... just call out what your new classes and methods are (ignoring implementations). Or call out (in English) where you added new functionality (i.e., which method did you update to do what)? What if the person(s) who need to review that stuff are constantly in meetings (you know how that goes: new features, partnering, crazy deep bugs that have important customers screaming[2])? Would a wiki or some sort of social thing work, to get more eyes on the new designs? How do we get those who don't want to be involved involved? So... lightweight, sustainable, effective processes? Any ideas? What works for y'all? (I know people are going to say pair programming. Ok, granted, but that ain't gonna fly here.) Thanks. [2] Ok, overblown language.
s/screaming/focused on particular bugs/"} {"_id": "134523", "title": "better way to learn programming with the intention of getting a job?", "text": "> **Possible Duplicate:** How to get a job with no experience? So I've worked on desktop software, coding Swing GUIs, for the past 2 years. I feel that even having a live website selling my Java desktop software (customers buy it occasionally) is not enough experience; contrary to opinions from this site, very little of my experience building and running the software business is discussed during interviews. Often jobs have an extensive list of qualifications of which I think I can cover 70%, but I don't have example projects for each language and framework; rather, I've played around with different open source libraries and built things based on what is needed. When I land interviews, sometimes I pass all the technical questions but sometimes I don't when they ask me about experience I don't have. When I do land interviews, I cannot complete the programming tasks in time. I overthink the problem, and while I have done more complex problems working on my software project, I can't think quickly enough to do the testing questions fast enough. What other ways can I improve my skills to better showcase them? How many portfolio projects do I need to be convincing? Should I take a course in Java from a local college?"} {"_id": "47331", "title": "Mock Objects for Testing - Test Automation Engineer Perspective", "text": "How often are QA engineers responsible for developing mock objects for unit testing? Is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as my career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks **Edit**: Based on the answers below, I am providing more details about the QA I was referring to. I'm interested more in Test Automation rather than simple QA involved in record and playback of scripts. So are Test Automation engineers responsible for developing frameworks? Or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a Test Automation engineer's perspective."} {"_id": "132067", "title": "Difference between a service class and a Helper class", "text": "I would like to know what differentiates a Service class from a utility class or a helper class. Is a class whose only methods call the DAOs a service? Doesn't the usage of Helper classes violate SRP?"} {"_id": "213178", "title": "Refactoring two classes from third-party library that could have extended a base class", "text": "I have two classes, with very similar behaviors, both from a third party library. Both need to be populated with some value object and sent to specific queues for logging. Please note that the two do not have a common parent. I am scratching my head to find ways to refactor this. Here is the scenario. Class 1: Apple Class 2: Orange Unfortunately, Apple and Orange are not child classes of Fruit, and I cannot change them to extend a base class. Constructors of Apple and Orange have different signatures.
Current code: if(isApple){ Apple apple = new Apple(....); apple.setColor(Color.RED); apple.setPrice(10); apple.setCount(1000); AppleMQObject applMQObject = new AppleMQObject(apple); Producer appleProducer = Factory.create("apple-producer"); appleProducer.send(applMQObject); }else{ Orange orange = new Orange(...); orange.setColor(Color.ORANGE); orange.setPrice(30); orange.setCount(100); OrangeMQObject oMQObject = new OrangeMQObject(orange); Producer orangeProducer = Factory.create("orange-producer"); orangeProducer.send(oMQObject); } I can move the MQ code out to a common method. But how do I handle the apple/orange situation?"} {"_id": "206651", "title": "Need help understanding UML diagram", "text": "I'm focusing on trying to understand UML diagrams and learning to interpret them in order to implement the designs they describe. In the following diagram, I am not clear on what the implementation for `Port Operation` should be. ![enter image description here](http://i.stack.imgur.com/IMimV.png) It looks like it should be a class/entity the same as the others in the diagram. I understand the relationship between Leg and `Port Operation` is bidirectional. But it seems that it does not make sense to have a class just for port operations, and having Load() and Unload() methods in Leg doesn't make much sense since a Leg class does not represent a port or its operations. When reading UML diagrams, is it expected that there are multiple ways for someone to implement it? I.e. some ways are better than others, and it's a judgement call as to how someone implements it? What would be the recommended implementation for the above diagram with regards to Port Operation?"} {"_id": "225084", "title": "How do I explain that we're wasting developer time adding unnecessary features?", "text": "So I've led the charge with my fellow engineers to, at the very least, start "thinking" Lean. We hit on a few major areas of waste, and 2/3 led to the exact same point..."Extra Features". We dogfood our own software on two fronts, sales and project management. It works great for sales, because that's what a CRM is great for. It's not so great for managing projects, and we're often tasked with adding extra features to make it work for this use case. Does it make more sense to continue adding features that don't add any customer value, or should we accept that having our sales team using our own product is "good enough" and perhaps look for an off-the-shelf solution?"} {"_id": "105878", "title": "How to work on a project with a beginner?", "text": "I am currently starting work on a personal project that a friend and I are very passionate about. We want to work on it together, but we are on very different programming levels: I'm in my last semester of my education, have been through 2 internships, and I'm currently working as a student programmer, while he just finished his first year of uni and just got the basics down. The question is: How can we work on this together? He is just making the switch from Java to C# at school, and the project will be an ASP.NET project in C#. I could wait a semester so he can get more into the language and learn more programming concepts, but I feel that it would be a waste of time, and I might lose momentum. So the question is **what is the best way to work with a person on a different programming level than you?** Pair programming, making a part of the system as an example that he can follow in making another part?
How would you do it?"} {"_id": "15050", "title": "Natural language detection for web application", "text": "I have my own thoughts on how an "ideal" multilingual web application or web site should behave. Can you think of a better solution? What are the pros and cons of each? What are the cons of the solution I am presenting below? Any comments? My "ideal" solution: * the application should read the browser language (from the Accept-Language header) * the user should be able to override his/her default language in options (logged-on users will see the web site in this language regardless of current browser settings; useful when one is travelling, for example) * on top of this, when a lang attribute is specified in the URL (see example below), the user will see the page in the language specified by this attribute (both accept language and user settings would be overridden; this could be useful for bookmarking, sharing, RSS feed selection, web crawlers). Example URL: http://www.example.com/index.html?lang=ex"} {"_id": "206659", "title": "Use a search box that calls on a JSON file?", "text": "I use a JSON file to populate several drop down lists. The format is: { "value" :"lightyear", "name" :"Light Year(yl)" }, { "value" :"astronomicalUnit", "name" :"Astronomical Unit(AU, UA)" }, and so on. I'd like to implement a search box so a user can type "Light year", "lightyear", "lightyears" etc. I want to make it possible for them to search for a particular unit. The immediate problem I can see is that I've used camel casing for pretty much all values except those displayed in the drop down boxes. There are two options, I think... One is to run the search on the name key of the JSON and search for parts of it, so "AU" would still yield Astronomical Unit. The other option is to use the value, which I think will overcomplicate things. It's not something I've done before and I'm not sure which direction to go in."} {"_id": "49067", "title": "What to do if you find a vulnerability in a competitor's site?", "text": "While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, perform any script on the competitor's website. My natural feeling is to report the issue to them in the spirit of goodwill. Exploiting the issue to gain advantage crossed my mind, but I don't want to go down that path. So my question is, would you report a serious vulnerability to your direct competition, in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue? **Update (Clarification)**: Thanks for all your feedback so far, I appreciate it. Would your answers change if I were to add that the competition in question is a behemoth in the market (hundreds of employees on several continents), and my company only started a few weeks ago (three employees)? It goes without saying, they most definitely will not remember us, and if anything, only realize that their site needs work (which is why we entered this market in the first place). This might be one of those moral vs. business toss-ups, but I appreciate all the advice."} {"_id": "170668", "title": "Pair programming business logic with a non-IT person", "text": "Have you had any experience in which a non-IT person works with a programmer during the coding process?
It's like pair programming, but one person is a non-IT person who knows a lot about the business, maybe a process engineer with a math background who knows how things are calculated and can understand non-idiomatic, procedural code. I've found that some procedural, domain-specific languages like PL/SQL are quite understandable by non-IT engineers. These persons end up being co-authors of the code and guarantee the correctness of formulas, factors, etc. I've found this kind of pair programming quite productive; these engineering-type users feel they are also "owners" and "authors" of the code, and it helps minimize misunderstandings in the communication process. They even help design test cases. * Is this practice common? * Does it have a name? * Have you had any similar experiences?"} {"_id": "80167", "title": "How do I safely write code in my own 'words' and not plagiarize?", "text": "I understand plagiarism and paraphrasing fairly well when it comes to writing a research paper, but the equivalent areas in programming seem foreign to me. I've looked up the topics online, and surprisingly there is not as much material on the subject as one would have expected. When writing code and having to implement something I've never implemented before, I'll go online to look for an example. I try to read through the documentation beforehand, but sometimes I find it challenging to follow. So if that fails, I will search for the topic online and be presented with dozens of examples (whether they be on someone's personal blog or a Q&A site like SO). Now I'm usually presented with 5-10 lines of code. I have and will NEVER copy-and-paste that into my own code, but I still worry about copying it down verbatim. I find it hard to reword a certain piece of code, especially when there are only so many ways to do so. I make sure to rename variables, change formatting, etc. - but is this enough? I've always wanted to understand this topic, but now that I'm working with a new language and in a corporate environment I think that it is especially pertinent. If anyone could explain or link to a good explanation elsewhere, I would greatly appreciate that! **tl;dr** I don't understand how much you have to change and reword 5-10-line snippets of code found online to avoid plagiarism. What if there is very little that you can change?"} {"_id": "229853", "title": "Backend development philosophy", "text": "I feel kind of lost in this backend development process I am attempting right now. Most of the usual development practices I use while developing client-side applications don't apply here... Let me provide some context. ### The debugging process While developing a client side application (iOS, Java desktop app, or whatever), it is easy to quickly set up the project in your favorite IDE, get it running on your machine or a testing device, and debug the hell out of it as much as you like. On the other hand, it's not as trivial to hook a debugger to your backend code, especially if it is Python code running on Google App Engine (GAE). That's what I am using, and ... yeah. Linters and all help A LOT, but still, semantic issues cannot be resolved that way, obviously. I am currently going over my recently written backend code and just burying it with logging.debug('msg') statements, asserts and whatnot. This is the only thing I can think of. Is this normal for backend developers? Is logging and digging through logs usually how backend devs iterate on their applications?
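For context, this is roughly the kind of helper I've resorted to so far - just a sketch, and the handler at the bottom is a made-up example:

    import functools
    import logging

    def traced(handler):
        """Log entry, exit and exceptions of a handler, since I can't attach a debugger."""
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            logging.debug('entering %s args=%r kwargs=%r', handler.__name__, args, kwargs)
            try:
                result = handler(*args, **kwargs)
            except Exception:
                logging.exception('%s raised', handler.__name__)
                raise
            logging.debug('leaving %s -> %r', handler.__name__, result)
            return result
        return wrapper

    @traced
    def get_user_profile(user_id):  # made-up handler just to show the idea
        return {'id': user_id}

It works, but it feels like brute force.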
### Parallelism and request driven This might be a bit more specific to GAE and other non-blocking backends, honestly. Single-threaded servers don't suffer from this problem... Anyway, when you are dealing with parallelism and everything is driven by socket events, how do backend developers usually test if their backend works? I did the most naive thing of all time, which is to open a Python console and, using the requests library, just send requests and test bit by bit. Then I went ahead and wrote a **kivy** app to help me send the requests from a GUI interface and see what is going on, but it's taking more time to maintain the kivy app than to develop the backend!! I tried to check for test frameworks for GAE, but they didn't seem easy to get, so I was wondering if they are worth it? Would I be able to simulate 100s of clients using my backend with test frameworks? What do people use these days (for GAE specifically)? ### Visualizing the "flow" Because of my inexperience with backend development, it is **surprisingly hard** for me to keep a clear image of the request/response cycle in my head. I know the basics, and I have written a few backend apps, but as soon as it gets just a little complex, I have to keep reminding myself where the request goes by looking at the entry point and all the steps till the response is made. I am sure that if I were able to somehow visualize it, I wouldn't have to keep going back over and over. Instead, I would easily know where a bug would originate, for example, or where the best place to add a certain feature is. In any case, I was wondering if there is some sort of standard "thing" to design the request flow. I don't know, maybe a UML diagram or something? I tried to sketch it out, but I ended up with a mess. Like, I would sketch the backend design based on the features and requirements, but then the actual logic and model would be left out. Then I try to include those in the diagram, and it becomes overly complex and cluttered with many weird arrows and boxes. **I need something for backend development like ER diagrams are for relational database design**. Yeah, sorry. I talk a lot, and I am lost in this world. Help?"} {"_id": "173004", "title": "Supporting early versions of Android", "text": "What policy do developers have when it comes to supporting earlier versions of Android? I still support Android 2.1, but this means that I am unable to use features such as the action bar. Over 40% of my users are still running versions below 3.0, so I feel somewhat constrained about this. The problem is that 3.x was not very successful, so 2.3.x will be with us for some time. But all new devices will now be shipping with 4.x. I am wondering whether 4.x users are more likely to pay for an app, while most 2.3.x users are just looking. **Update:** With a little effort, I have found that I am able to implement action bars and Holo themes, and still support Android 2.1. All this without recourse to an external library. The only feature that I am still stuck on is tab bars. These do work with action bars, but not in the approved style. For that I would need fragments, which require Android 3.0. I only have two tab bars though, so it is not a big deal."} {"_id": "57571", "title": "What do these symbols mean? I'm trying to understand them to make an app", "text": "What do the symbols `⊞` and `⊕` mean? I'm trying to understand them to make an app.
It's on http://en.wikipedia.org/wiki/File:XXTEA.png ![http://en.wikipedia.org/wiki/File:XXTEA.png](http://i.stack.imgur.com/EyUUf.png)"} {"_id": "239020", "title": "Lexer/Parser for multidimensional Languages", "text": "How does a _Lexer/Parser_ work in a 2D programming language like Funciton in order to transform such unusual source code into the correct AST?"} {"_id": "239024", "title": "NVI for virtual function implemented in every layer of a deep hierarchy", "text": "Suppose we have the following class hierarchy: class Object { public: virtual void update() { // Update position } }; class Rocket : public Object { public: virtual void update() { Object::update(); // Orientate towards target } }; class SparklingRocket : public Rocket { public: virtual void update() { Rocket::update(); // Create sparkling particles } }; For obvious reasons this is not a good idea. For instance an inheritor of any of the classes might forget to call Base::update() and the behaviour of the program would be incomplete. As I see it, functions that implement important behaviour shouldn't be made virtual; that's something better reserved for replaceable behaviour. So, we would probably change the architecture like this: class Object { public: void update() { // Update position afterObjectUpdate(); } protected: virtual void afterObjectUpdate() {} }; class Rocket : public Object { protected: void afterObjectUpdate() override final { // Orientate towards target afterRocketUpdate(); } virtual void afterRocketUpdate() {} }; class SparklingRocket : public Rocket { protected: void afterRocketUpdate() override final { // Create sparkling particles afterSparklingRocketUpdate(); } virtual void afterSparklingRocketUpdate() {} };
I cannot provide a unique key to every user, only to partners. I can't use a private-key system since all the code will be available to anyone. It's basically restricting access to a public API, i.e. contradictory in its definition. What do you think of this solution, and what would you do with these constraints?"} {"_id": "184277", "title": "The most human language like programming language", "text": "I was wondering, there are so many articles about what the best coding languages are. C, C++ Go Haskel lisp java ML F# etc etc. But rarely i see an article about the most human like programming language. It doesn't need to be fast but be closest to the English (or other natural) language. Once it was a goal to create such languages it was seen as a form of AI. But these days AI went a different direction Now there is Siri etc, but those are application not coding languages themselves. Out of curiosity is there still somewhere a language that understand basic English to code with ?"} {"_id": "100499", "title": "Can I safely use an open source library in an internal closed-source project?", "text": "I m thinking of using iTextSharp, which is licensed under Affero GPL, in an internal closed-source WinForms project. No one outside my company will be using it. GPL (and Affero GPL as well) typically demands that the source be provided with the binary. Given that this is an internal project, do I need to provide my employees with the source code of the project?"} {"_id": "224684", "title": "What is dispatch? Does it imply dynamic resolution?", "text": "AFAIK, the term _dispatch_ means just a method resolution and calling. It doesn't matter whether it is static or dynamic. I saw many people are using a term like _static dispatch_ and _dynamic dispatch_. What makes me confusing is there're also some mysterious descriptions. I was trying to understand what is _multiple dispatch_ , and it seems just _selecting a subprogram by parameter types_. If I understood it correctly, there can be both of _static multiple dispatch_ and _dynamic multiple dispatch_ , and we can say C++ is providing _multiple dispatch_ via free functions. But, Wikipedia article about _multiple dispatch_ says C++ has no multiple dispatch because it doesn't have dynamic resolution of function by multiple parameters. And I really don't get conceptual difference between Common Lisp example and C++ overloaded function. Because I can't find any conceptual difference unless the term _multiple dispatch_ implies _dynamic dispatch_. And I realized that I am confusing what the _dispatching_ really is I also checked QA entry Multiple Dispatch vs. Function Overloading, and it seems the answer is premising the term _dispatch_ is basically _dynamic_. That also makes me confusing. What is correct meaning of the term _dispatch_? Does it imply _dynamic resolution_? Is this term well defined or just conventional? What am I missing?"} {"_id": "228564", "title": "\"Match Making\" script, a way without involving the database and php?", "text": "I am writing a matchmaking script for a game through a web portal. For the past few days I have been looking into the different options and I believe the following approach would be the most optimal but I would like the opinion of others. 1. Get players match requirements through a web form (Method GET) and place the player in queue (mySQL database entry with time stamp and match requirements) for 1 hour. 2. 
On the submit page, run an AJAX script every minute that contacts a PHP script which checks the following: * Is the player still in the system? If their hour is over or their AJAX script has not run in 5 minutes, remove them from the database. Return time out. * Is the player in the queue flagged for a match? * If yes, place both players in the in-match table, remove both players from the queue, and send the match variables. * If no, continue. * Do the remaining players match the current player's requirements? * If yes: * Is the player the same one looking for a match? * If no: Return match variables. Flag the other player. * If yes: Return match not found. * If no: Return match not found. 3. When the match variables are received, the page will be updated with jQuery, including a new AJAX request that will call a new PHP script every 30 seconds to find out if both players accept the match. Possible AJAX results are: 0 waiting, 1 accepted, 2 declined. There will also be a button that will send the player's response immediately to the PHP script through AJAX and will disable itself through jQuery when pushed. When both players accept, the page will then use jQuery again to display the instructions for beginning the match. Is there a cleaner or less intensive way to do this, with or without involving the database and PHP? Notes: No code is written yet. The system will not have more than 100 players in the queue at any time. (Most likely a peak of 25.)"} {"_id": "224268", "title": "Microsoft Reciprocal License: include source even for unmodified binary?", "text": "The Microsoft Reciprocal License (here taken from the WiX toolset I use) has the following sentence: > For any file you distribute that contains code from the software (in source > code or binary format), you must provide recipients the source code to that > file along with a copy of this license Does that mean I have to provide the source code even if I only use the unmodified (original) DLL in binary format? Note that this unmodified DLL contains code from the software, although it was not compiled by me. What does _provide_ mean? Shall I provide 11 MB of source code as part of my application (install the source on the hard disk), or can I provide it as an extra download as well?"} {"_id": "99564", "title": "How can I show aptitude to prospective employers when all my work is on internal projects?", "text": "I've been in my current position for a long time (10 years) and in that time, I feel like I've performed well as a designer, system architect, and programmer. However, all that work has been on internal projects that aren't accessible from the outside world. I see a lot of advice like this that suggests 'If you can literally point to something and say \"I wrote this\" it's very impressive'. What about if you can 'literally point to' nothing at all, because while you're a passionate programmer who (as the classic Joel-ism puts it) \"is smart and gets things done\", all those things are invisible? Do I need to start frantically committing to open-source projects? Start a \"real world\" (not corporate-internal) blog? Frankly, I spent most of my 10 years happy here, and only recently have considered leaving for greener pastures. Am I going to be sunk before I start looking because of my focus on work for my current employer, at the expense of my \"public presence\"?"} {"_id": "16654", "title": "What corners have you cut at work?", "text": "Have you ever let your coding standards slip to meet deadlines or save time?
I'll start with a trivial example: We came up with a coding standard at work around which brackets/formatting to use and so on. I ignore it and use the auto-format tool in NetBeans."} {"_id": "74857", "title": "Are the IT and software industries getting more and more litigious?", "text": "For the last couple of years I've been observing exponential growth in news related to IT companies and individuals taking their cases to court, on one side, and in questions concerning legal matters everywhere on the web, on the other side. I very much doubt people and companies have suddenly started stealing each other's ideas, but something is different. Is it that: 1) IT people on average are getting more legally educated? 2) Some changes in the legal systems of various countries, which I have missed, are causing this phenomenon? 3) IT is now being perceived as a source of potentially unlimited revenue, and the trolls and lawyers have turned their attention to it? 4) Some other development is in place? * * * The **first part** of my question is **how should an average developer react to this troubling trend**: a) Continue on as before and ignore everything legal b) Educate themselves in local and international laws related to IT c) Always get professional legal advice before venturing to do anything programming related d) Register an LLC to protect themselves, even for the most basic and harmless projects * * * The **second part** is a bigger one: **how does this all affect IT companies and start-ups**: e) Is any new company at potential risk? If so, is this risk local, like in the US with all of its software patents, or global? f) Can any new company survive without getting lawyers from the start and applying for all possible patents? g) Is it a risk factor for a new company to register in a location which supports software patents in its legal code? * * * I admit my question is complex, but the matter at hand is even more complex. If you see any restructuring potential, it'd be welcome. I'm also seeking any input, either global or related to local markets. We'll figure out the common ground from particular cases."} {"_id": "221004", "title": "Is there any sorting algorithm which is not inherently sequential and is task distributable?", "text": "After googling for a couple of hours, I came to the conclusion that all sorting algorithms are inherently sequential: they can be data-distributed but not task-distributed. Is there any algorithm which is not inherently sequential and is task distributable?"} {"_id": "154677", "title": "Migration of a PowerBuilder application to multiplatform", "text": "I developed a client/server application with PowerBuilder in the past for medical clinics and have done maintenance on it until now. Now, some clients are asking me to develop a release for Mac/Linux, and I need some advice about what programming language/technology is best suited for it, and about the learning curve. It\u2019s not a very big program, but I\u2019m the only developer and have done it in my spare time. PowerBuilder is very productive for this kind of project (database-centric), but it\u2019s not multiplatform and it\u2019s hard to sell a PowerBuilder application nowadays (web, .NET, and Java sell a lot better with their marketing).
My programming skills: * I studied C and C++ in the past (university) but never used them on real projects * Some Java experience, but not in desktop applications * Some experience with Ruby on Rails for web projects * Good skills with PowerBuilder and C# (.NET) (these are my main development languages) My first dilemma is whether to change the desktop application to a web interface, but I think the user would lose some user experience, and some doctors don\u2019t have a clinic (they are alone at home with my software). I think installing a web application (with a web server) for one user would be overwhelming. If I continue developing a desktop application, what is currently a good framework/toolset to learn, given my skills? Has anybody had similar experiences?"} {"_id": "154676", "title": "Etiquette when asking questions in an IRC channel", "text": "Many larger OSS projects maintain IRC channels to discuss their usage or development. When I get stuck on using a project, having tried and failed to find information on the web, one of the ways I try to figure out what to do is to go into the IRC channel and ask. But my questions are invariably completely ignored by the people in the channel. If there was silence when I entered, there will still be silence. If there is an ongoing conversation, it carries on unperturbed. I leave the channel open for a few hours, hoping that maybe someone will eventually engage me, but nothing happens. So I worry that I'm being rude in some way I don't understand, or breaking some unspoken rule and being ignored for it. I try to make my questions polite, to the point, and grammatical, and try to indicate that I've tried the obvious solutions and why they didn't work. I understand that I'm obviously a complete stranger to the people on the channel, but I'm not sure how to fix this. Should I just lurk in the channel, saying nothing, for a week? That seems absurd too. A typical message I send might be \"Hello all - I've been trying to get Foo to work, but I keep on getting a BarException. I tried resetting the Quux, but this doesn't seem to do anything. Does anyone have a suggestion on what I could try?\""} {"_id": "69576", "title": "SQL auto-formatting and auto-capitalization in Vim", "text": "I'd like to use Vim for writing SQL queries, but I haven't figured out yet how to get two key features that I use all the time in programs like SQLyog. I usually just start up an instance of SQLyog in the background, cut-and-paste my SQL into it, and hit F12, which auto-formats the whole thing. All the SQL keywords are automatically capitalized as soon as the text is pasted. I then copy it back into whatever other tool I'm using (often SQL Query Browser at work) and go from there. The benefits of these two features are worth the extra effort. Anyone know how I can set up Vim to do this? I still may need to cut and paste back into Query Browser, which is acceptable to me, but I'd like to stick with Vim for all my editing. Thanks."} {"_id": "69570", "title": "How does enterprise-level Flash development work?", "text": "I started working at a PHP shop, and occasionally we have to go and tweak some legacy Flash and ActionScript code. It's a small shop; we have about six developers. It seems almost every time we pull a Flash file down from SVN, we run into some weird dependency, either with fonts, code suddenly deciding not to compile, or version incompatibility between ActionScript and Flash MX/CS3/4/5.
None of the folks who created the Flash components are still working for the company (nor are any of the dependencies documented), so we have to slog through the code and it takes FOREVER. This got me thinking: how does enterprise-level Flash development work? What tools are used? Is it possible to allow many people to work on the same Flash file, and to hand that file off to a new machine with minimal setup hassle?"} {"_id": "84805", "title": "What is a combined fragment in a UML Sequence Diagram?", "text": "Everything is in the question. I just discovered this new feature and I don't really understand what it stands for. I know that it can represent loops and alternatives in the sequence diagram. Can someone explain to me what it is?"} {"_id": "211448", "title": "Is it okay to check in changes to import statements on open-source projects in a commit?", "text": "There are some guidelines out there (e.g. the Scala guidelines) and I'm wondering if it's okay to do some tidying up when committing other changes, or if the commits should be more focused and to the point. E.g. remove unused imports, reorder per the style guide. It might make it harder to read the pull request - but it will improve the project quality at least a little. What's your general rule of thumb? I'm not sure if I'm breaking unspoken taboos."} {"_id": "201388", "title": "How to solve the problem of nested comments", "text": "It appears in not just one language that comments can't be nested. Do you have a good solution for this problem? One workaround in C/C++ and Java is to only use single-line comments, but it then becomes impossible to comment out a larger block. I'm facing something like this: /* outer block /* an already-commented part */ rest of the outer block */ (the outer comment is terminated by the first */), so I must manually go through and edit the comments. Can you advise how we should handle this, across languages? I'm not sure, but maybe Python has a solution for this with the `'''` syntax, which might be able to include a `#` comment?"} {"_id": "107601", "title": "Strictness in programming methods among Stack Overflow users", "text": "I've been a member of Stack Overflow for a couple of weeks now and have answered questions and read others' answers, mostly in C/C++. True, I have learned some things. For example, `undefined behavior`: in the past, knowing what goes on from inside the CPU up to compiler code generation, all of those things \"seemed\" defined and worked as I expected them to. I understand now how conforming to rules such as not invoking undefined behavior is good for you, even though you know that on your PC, with your compiler, it works without a flaw. However, at times I got negative points for answers involving macros or certain code, which makes me wonder why people here have such very strict rules. For example, almost any time you talk about macros, people say \"use functions instead\". I understand that functions are better in many ways, but there are many places where macros are a better fit. Are macros so frowned upon here? Another example: if for any reason you write a simple `for` loop to find a character, you get a lot of comments telling you to \"use prewritten functions instead\", or you get negative points for suggesting the loop. I understand the functions in the standard library have all kinds of error checking in them and are probably the most efficient in general, but is it such a crime to write a simple loop for your own specific case? What I'm asking is more general than these mere examples.
It seems to me that among the users of Stack Overflow, there is a \"defined\" (if not written) standard of coding in C/C++, and that's the only form they accept. I'm not saying it's bad. I'm not saying that at all. I learned a lot myself about things that made my code not portable. However, almost all of those things turned out to be too much of a restriction. Things such as \"if you do it like this, your code won't work on computers that don't encode characters in ASCII\". Do you think people will always want their code to run on all possible computers? Isn't that a bit too much? A simpler, faster method that you know works on the computers your program will run on seems to me a better solution than a completely portable one. There is great code out there that, it seems to me, would get the worst results if it were judged here. Code such as the Linux kernel, the STL implementation that comes with g++, Mesa3D's implementation of OpenGL, and so on. Look at this code and your eyes are flooded with macros and bitwise operations. Do you condemn the programmers of the Linux kernel too? Would you rather have your OpenGL library drop frame rates by 10% but possibly run on a 16-bit microprocessor too? So, is what occurred to me true? Are people on Stack Overflow that strict in judging others' code? Or have I just been unlucky enough to stumble upon a minor few? Edit: I'm a master's student in robotics; my bachelor's was in software engineering. My master's project was a library/kernel module for a project funded by the European Union, and through that I was offered a PhD (in a subject that has to do with writing some software). I also code for fun. I've been familiar with C/C++ for 15 years and actively coding for 8. (What I mean is, I didn't start programming just yesterday.) Update: After doing some research I found out there are a lot of C programmers, if not most, who hate C++, and a lot of C++ programmers, if not most, who hate C. As someone who finds beauty in both languages and maintains a coding style that is a mixture of the two, I find those hatreds absurd. However, as it seems the majority of programmers in either language have this hatred at least to some extent, I no longer see a reason to continue this argument. I came to the conclusion that I will write answers the way I see fit, and _just ignore everyone when they start throwing a fit because I didn't use a `vector` or something_ (Seth Carnegie, the 7th comment). However, I will put a note at the bottom saying: _This code is not meant for copy-paste. Error checking may be missing. This is just to demonstrate how blabla works, and not necessarily the easiest way to do it._ and other legal stuff. You know, like what you see in a EULA."} {"_id": "21128", "title": "When does it become overkill?", "text": "First off, I apologize because I don't know how to make a community thread, so someone please help me out. As a developer across many platforms and technologies, and even at the infrastructure level, I always find myself asking: when am I doing TOO much?!? It's been a never-ending learning process since I started. One thing I learned is that requirements are rarely valid for an extended period of time, and as such a little foresight may go a long way.
But where is the balance, and how do you know when you're losing time, not gaining it?!"} {"_id": "46461", "title": "Best practice with branching source code and application lifecycle", "text": "We are a small ISV shop, and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people advocate Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from them; I might be wrong, though. Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today: 1. Release a new version (automatically create a tag in Subversion) 2. Continue working on the main trunk that will be released next month And the cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to be shipped. We cannot release it from the main trunk (2), as it is under heavy development and not stable enough to be released urgently. In such cases we do the following: 1. Create a branch from the tag we created in step (1) 2. Fix the bug 3. Test and release 4. Push the change back to the main trunk (if applicable) Our biggest problem is merging these two (branch with main). In most cases we cannot rely on automatic merging because, e.g.: * a lot of changes have been made to the main trunk * merging complex files (like Visual Studio XML files etc.) does not work very well * another developer / team made changes you do not understand, and you cannot just merge them So what do you think is the best practice to keep these two different versions (branch and main) in sync? What do you do?"} {"_id": "219921", "title": "Database design/relationship for threading messages", "text": "What's the database design or business logic for creating an app for messaging between users? I am having difficulty choosing how to approach the relationship between each Conversation and User Thread. I found this figure, but it was really hard to grasp the concept: ![enter image description here](http://i.stack.imgur.com/tfvlB.png) How does all this message threading work (like Gmail or Facebook)?"} {"_id": "219927", "title": "Representation of a time-expanded graph", "text": "I want to build a time-expanded graph with time discretization Dt that starts at t = 0 and ends at t = T, where there is an arc between node (n1, t) and node (n2, t') if and only if (n1, n2) were connected in the original graph. How can this be implemented in pseudo-code? The problem I'm stuck on is how to get from the original graph to the time-expanded one. (I use C++ and the Boost libraries to write the code.)"} {"_id": "240497", "title": "Image processing algorithm to find a solid square vs. a square outline in video frames", "text": "I am working on a system to detect an \"embedded\" 12-bit number within a series of frames that are recorded inside an mp4 file. The point of it is to automatically identify the \"take\", and also to trim the start of the file to a common offset for synchronization with other files. This is typically done manually, but the nature of the project demands that it happen as automatically as possible. It's not possible to set the timecode in the camera. The number is generated by a smartphone app, which I have coded a prototype for, but as I am new to image processing, I need some help figuring out how to do the other side of the equation.
This algorithm, for what it's worth, doesn't need to be real time, and will probably run in a PC-based command-line utility; not that that's very relevant to the algorithm itself. My original idea was to use a QR code, but I found the resolution and artifacts from the mp4 stream were too much to cope with in all lighting situations and frame rates. So I came up with the idea of sending the number serially, by encoding it in a distinctive series of squares, using RGB colours, which seems to be handled efficiently without too many \"artifacts\" at speed by the mp4 encoder. (This I am determining by manually looking at the video frame by frame, not programmatically.) Whilst this is a slow way to send the number, it can be accurately synced to a known time datum, and the precise frame extrapolated from there (I take precisely 1 second to send the 12 bits, which most frame rates can keep up with). To check the images are \"human readable\" I have extracted frames, and I am confident that it is now a fairly trivial task to analyse each frame to decode each bit of the number. So, breaking it into manageable chunks, here is what I need to be able to do: * Detect a \"solid\" red square * Detect 4 squares within each other that are red, green, blue, and white * Detect 4 squares within each other that are red, green, blue, and black The last 2 would effectively be the same algorithm, as I would just change parameters. There's a variation on that theme, but it's not that relevant to the algorithm; it just means swapping the blue and green around, to detect the frame rate. **Solid red square** ![red square solid](http://i.stack.imgur.com/1wZKM.jpg) **Red square, a green square, a blue square, and a white square:** ![enter image description here](http://i.stack.imgur.com/vhbKm.jpg) **Red square, a green square, a blue square, and a black square:** ![enter image description here](http://i.stack.imgur.com/wyei2.jpg) The algorithm would need to work independently of the previous frame, but would need to return the bounding rect of the red square in each case, for validation purposes. I could in theory attempt this myself, but what I am not sure how to handle is the fact that the squares might not be exactly \"level\". Aside from rotating the image multiple times by a few degrees, I can't think how to detect them accurately."} {"_id": "124285", "title": "Hobbyist transitioning to earn money on paid work?", "text": "I got into hobbyist Python programming some years ago on a whim, having never programmed before other than BASIC way back when, and little by little have cobbled together a, in my opinion, nice little desktop application that I might try to get out there in some fashion someday. It's roughly 15,000 logical lines of code; includes use of Python, wxPython, SQLite, and a number of other libraries; works on Windows and Linux (maybe Mac, untested); and I've gotten some good feedback about the application's virtues from non-programmer friends. I've also done a small application for data collection for animal behavior experiments, and an _ad hoc_ tool to help generate a web page...and I've authored some tutorials. I consider my Python skills to be appreciably limited, and my SQL skills to be very limited, but I'm not totally out to sea, either (e.g. I did FizzBuzz in a few minutes, did a \"Monty Hall Dilemma\" simulator in some minutes, etc.). I also put a strong premium on quality user experience; that is, the look and feel matters a lot to me, and the software looks quite good, I feel.
I know no other programming languages yet. I also know the basics of HTML/CSS (not considering them programming languages), have created an artist's web page (that was described by a friend as \"incredibly slick\"...it's really not, though), and have a scientific background. I'm curious: Aside from directly selling my software, **what's _roughly_ possible--if anything--in terms of earning side money on gigs, or actually getting hired at some level in the software industry, for someone with this general skill set?**"} {"_id": "162237", "title": "Object-oriented programming: getters/setters or logical names", "text": "I'm currently thinking about an interface to a class I'm writing. This class contains styles for a character, for example whether the character is bold, italic, underlined, etc. I've been debating with myself for two days whether I should use getters/setters or logical names for the methods which change the values of these styles. While I tend to prefer logical names, it does mean writing code that is not as efficient and not as logical. Let me give you an example. I've got a class `CharacterStyles` which has member variables `bold`, `italic`, `underline` (and some others, but I'll leave them out to keep it simple). The easiest way to allow other parts of the program to access these variables would be to write getter/setter methods, so that you can do `styles.setBold(true)` and `styles.setItalic(false)`. But I do not like this. Not only because a lot of people say that getters/setters break encapsulation (is it really that bad?), but mostly because it doesn't seem logical to me. I expect to style a character through one method, `styles.format(\"bold\", true)` or something like that, but not through all these methods. There is one problem though. Since you can't access an object member variable by the contents of a string in C++, I would either have to write a big if-statement/switch block for all styles, or I would have to store the styles in an associative array (map). I can't figure out what the best way is. One moment I think I should write the getters/setters, and the next moment I lean towards the other way. My question is: what would you do? And why would you do that?"} {"_id": "81355", "title": "How does a large company make rookie mistakes which leave security holes?", "text": "Sony was recently hacked with a SQL injection, and the passwords of their users were stored in plain text. These are rookie mistakes. In such a large company, how does this pass QA? How do they not have teams that know better than this? The sheer size of the company that was hacked makes this different. It affects all of us, because we all may one day find ourselves on a team that is responsible for something like this, and then we get the ax. So what are the factors that lead to this, and how do we prevent them?"} {"_id": "240498", "title": "JavaScript-based application controller in JavaScript-less environments", "text": "I just got done watching an informative Box tech talk by Nicholas Zakas on a JavaScript architecture for web development: https://www.youtube.com/watch?v=mKouqShWI4o&feature=youtu.be This image, which I acquired from http://alanlindsay.me/kerneljs/index.html#nav-what, will give you an overview of the architecture: ![enter image description here](http://i.stack.imgur.com/oKTpl.png) However, to give you a brief summary and help you avoid mundane stuff, the Kernel is basically an Application Controller which makes up the C in MVC.
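To make the idea concrete, here is a minimal sketch of such a kernel/application controller in plain JavaScript. It illustrates the mediator idea only and is not the actual Kernel.js API; the names (Kernel, register, start, broadcast) are made up:

```javascript
// Minimal sketch of a kernel acting as application controller:
// modules never talk to each other directly, only through the kernel.
var Kernel = {
  modules: {},
  register: function (name, factory) {
    this.modules[name] = { factory: factory, instance: null };
  },
  start: function (name) {
    var m = this.modules[name];
    m.instance = m.factory(this); // hand the module a reference to the kernel
    if (m.instance.init) m.instance.init();
  },
  broadcast: function (event, data) {
    Object.keys(this.modules).forEach(function (name) {
      var inst = this.modules[name].instance;
      if (inst && inst.onEvent) inst.onEvent(event, data);
    }, this);
  }
};

// Example module: it reacts to events instead of calling other modules.
Kernel.register('logger', function (kernel) {
  return {
    init: function () { /* set up DOM listeners here */ },
    onEvent: function (event, data) { console.log(event, data); }
  };
});

Kernel.start('logger');
Kernel.broadcast('user-registered', { id: 42 });
```

The point of the design is that modules only ever talk to the kernel, so any one of them can be replaced or removed without touching the others.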
Now I obviously have to take care of the possibility of an environment where JavaScript is disabled. I was wondering if that effectively means developing an application controller in a language such as PHP and maintaining it in parallel with the one written in JavaScript, along with the ensuing commands/modules. There's something smelly about that; is there a better way to do this?"} {"_id": "135309", "title": "What precisely does \"Applicative\" mean in computer science?", "text": "I know what an \"applicative functor\" is, but I've been reading papers recently that refer to other \"applicative\" things, particularly \"purely applicative data structures\". The trouble is, I'm not sure precisely what \"applicative\" means in this context, and Google has so far failed me. I can get a general English (i.e. useless) definition of \"applicative\", but as soon as I add \"computer science\" to the search, all I get is definitions of computer science. I have a vague sense that this relates to keeping them pesky effects in their place, but I could use a precise definition. I found this - http://foldoc.org/applicative - but I'm not sure \"applicative language == functional language\" really helps."} {"_id": "180585", "title": "Are there any OO principles that are practically applicable to JavaScript?", "text": "JavaScript is a prototype-based object-oriented language but can become class-based in a variety of ways, either by: * Writing the functions to be used as classes yourself * Using a nifty class system in a framework (such as MooTools' Class.Class) * Generating it from CoffeeScript In the beginning I tended to write class-based code in JavaScript and relied heavily on it. Recently, however, I've been using JavaScript frameworks, and Node.js, that move away from this notion of classes and rely more on the dynamic nature of the code, such as: * Async programming, using and writing code that uses callbacks/events * Loading modules with RequireJS (so that they don't leak into the global namespace) * Functional programming concepts such as list comprehensions (map, filter, etc.) * Among other things What I've gathered so far is that most OO principles and patterns I've read about (such as SOLID and the GoF patterns) were written with class-based OO languages in mind, like Smalltalk and C++. But are any of them applicable to a prototype-based language such as JavaScript? Are there any principles or patterns that are specific to JavaScript? Principles to avoid _callback hell_, _evil eval_, or any other anti-patterns, etc.?"} {"_id": "24485", "title": "Design patterns for JavaScript", "text": "A lot of web frameworks have an MVC-style layout for code and for approaching problems. What are some good similar paradigms for JavaScript? I'm already using a framework (jQuery) and unobtrusive JS, but that still doesn't address the problem I run into with more complex web apps that require a lot of JavaScript: I tend to just end up with a bunch of functions in a single file, or possibly a few files. What's a better way to approach this?"} {"_id": "158373", "title": "JavaScript application design patterns", "text": "I need to write a PhoneGap application with JavaScript, and I'm thinking about the code design patterns. I've read some books on JavaScript design patterns but can't really see their advantages/disadvantages, and I think I don't understand the implementation and usage right. I would like some opinions on which might suit my needs best. I'd like the code to be readable and maintainable.
The application itself is quite small and simple. It has a player management (add/remove), a game management (add/remove players from a team, start/stop/reset a stopwatch). In last app I created I had over 30 named functions in a global scope which I think is a big no no. So, now I want to make things right, and was thinking of either using an object literal or a module pattern... something like this perhaps: var MYAPP = { init: function() { $(document).ready(function() { MYAPP.bind(); }); }, bind: function() { // Bind jQuery click and page events }, player: { add: function(name) { ... }, remove: function(name) { ... } }, team: { addPlayer: function(name) { ... }, removePlayer: function(name) { ... } }, game: { create: function() { ... }, startTime: function() { ... }, stopTime: function() { ... }, resetTime: function() { ... } } }; document.addEventListener('deviceready', MYAPP.init, false); ... or perhaps this? var MYAPP = { init: function() { $(document).ready(function() { MYAPP.bind(); }); }, bind: function() { // Bind jQuery click and page events } } var PLAYER = (function () { // Some private variables var privateVar, privateMethod; return { add: function (name) { ... }, remove: function (name) { ... } }; }()); document.addEventListener('deviceready', MYAPP.init, false);"} {"_id": "215586", "title": "From Java to Javascript?", "text": "I am primarily a Java programmer. Because of its OO principles and the general paradigm of Java programming, like wrapping things in static variables, and having things return specific types, heavily aids me in \"visualizing\" a program. Instead of thinking of a big program, I can, instead, focus on smaller organized parts of my eventual program, and add functionality and build up from there. Thus, I have trouble programming in other languages. Or at least, I have not been able to program in the same ability as I do in Java compared to other languages. I know Javascript has OO principles, so I'd like to learn this language in a OO-based like I would program with Java. Is this possible?"} {"_id": "198228", "title": "Object Oriented Programming in JavaScript. Is there life without it?", "text": "At our company we have pretty large body of PrototypeJS based JavaScript code, which we are porting to jQuery for several reasons (not really important here). I'm trying to set up coding guidelines to make/keep things tidy during the porting. One observation from wading through the prototype based implementation is that a lot of code is written in OOP style (using Prototype's `Class.create`), but the code is not \"object oriented\" in spirit. The general pattern I've seen: one \"constructor\" which you are expected to call (but not call twice, because the constructor uses hardcoded DOM id's), a bunch of other \"functions\" and event handlers, which you are not expected to call (but the because there is no \"private\" in JavaScript, you don't know that) and data sharing between all these functions through `this`. Seen from the caller's point of view there is just one \"callable\" and nothing else. I'm starting to believe that _OOP in JavaScript can and maybe should be avoided_ in a lot of cases. At least for our use case, which is not the next generation Goole Wave UI, but simply put: a bit of AJAX based enhancements, registering some event handlers here and there, minor DOM manipulations and such. * the functionality we need seems to be implementable just fine in the typical jQuery non-OOP way, using closure magic to obtain encapsulation and privateness. 
As a side effect, the minification efficiency of the ported code is much better. * OOP in JavaScript is non-traditional and confusing if you are coming from a \"traditional\" background. There are a lot of attempts and frameworks to approximate traditional OOP, but IMHO this makes things even more confusing and fragile. * One thing that Crockford's \"JavaScript: The Good Parts\" taught me is that the true power of JavaScript lies in function scopes and closures, much less in OOP patterns. I'm wondering if there is wider support for this feeling that OOP in JavaScript doesn't really cut it as the sacred mantra it is in other languages/platforms. And, by extension, what kinds of non-OOP, closure-based patterns are preferable?"} {"_id": "202443", "title": "Go with an object-oriented perspective", "text": "My OOP JavaScript question is at the very bottom if you want to skip my introduction. In an answer to the question Accessing variables from other functions without using global variables, there's a comment about OOP that says: > If there's a chance that you will reuse this code, then I would probably > make the effort to go with an object-oriented perspective. Using the global > namespace can be dangerous -- you run the risk of hard-to-find bugs due to > variable names that get reused. Typically I start by using an object-oriented > approach for anything more than a simple callback so that I don't have to > do the re-write thing. Any time that you have a group of related functions > in javascript, I think, it's a candidate for an object-oriented approach. That rings true to me from what I've seen of my old OOP colleagues. There are lots of different approaches and different voices leading in different directions. Since I am a front-end developer and UI designer, I am a little confused. **Building up to the question** I've heard from a variety of places that global variables are inherently nasty and evil when doing non-object-oriented JavaScript, and that there are three choices available to make a variable from one function visible and usable by another function (e.g., for a variable of function A to be visible to function B, or to be passed to and usable within function B): 1. make it a global 2. make it an object property, or 3. pass it as a parameter when calling B from A. I've read about namespaces, currying, and other approaches... **Question:** With all that said, I was wondering what's the best OOP structure or best code practice in JavaScript that keeps things encapsulated and provides the greatest protection against having your variables exposed to manipulation?"} {"_id": "190006", "title": "What is the best practice for method parameter validation in a library?", "text": "I am developing a game library in JavaScript, containing many classes. I am hesitating over the approach I should follow for method parameter validation: **Should I check the validity of parameters passed to each method?** For example, when a method takes only a number between 0 and 100 as a parameter, should I check that the value is correct? I have dozens of classes, each with dozens of methods. For a simple getter, more than half the lines of code can be used only for checking parameters. Adding checks makes my code heavier and less maintainable. Seeing that it's a library, destined to be used by many other programmers, checking parameters can avoid many mistakes and bugs, and would be appreciated.
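One common compromise, sketched below, is to centralize the checks in a tiny guard that can be disabled or stripped in production builds; the names (DEBUG, assertNumberInRange, Sprite) are invented for illustration:

```javascript
// Minimal sketch: one reusable guard instead of ad-hoc checks in every method.
var DEBUG = true; // flip to false (or strip the calls) for production builds

function assertNumberInRange(name, value, min, max) {
  if (!DEBUG) return;
  if (typeof value !== 'number' || value < min || value > max) {
    throw new TypeError(name + ' must be a number in [' + min + ', ' + max + '], got: ' + value);
  }
}

function Sprite() {
  this._opacity = 100;
}

Sprite.prototype.setOpacity = function (value) {
  assertNumberInRange('opacity', value, 0, 100); // one line per method
  this._opacity = value;
};

var s = new Sprite();
s.setOpacity(50);  // fine
s.setOpacity(150); // throws a descriptive TypeError while DEBUG is true
```

This keeps each method down to a single validation line while still giving library users descriptive errors during development.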
So, how do other JavaScript libraries handle this, and what is the best solution?"} {"_id": "200162", "title": "I'm trying to figure out which functions from one C library are being used by a C project. Does anyone have a simple solution?", "text": "To be specific, I want to know which functions/types in libpri are being used in the Asterisk project. I'm not traditionally a C programmer, but I know some basic stuff because I took a class in college."} {"_id": "41960", "title": "What is a good one-stop shop for understanding software licensing information?", "text": "I've learned a fair amount about the various software licensing models and what those models mean for my own software project. However, I'd like to make sure I understand as many of them as possible for making decisions on how to license my own software, and in what scenarios I can safely use software under a given licensing model. Do you have a good recommendation for a book/site etc. that has this information in one location?"} {"_id": "41968", "title": "Maintaining a main project line with satellite projects", "text": "Some projects I work on have a main line of features, but are customizable per customer. Up until now those customizations have been implemented as preferences, but now there are 2 problems with the system... 1. The settings page is getting out of control with features. There are probably some improvements that could be made to the settings UI, but regardless, it is quite cumbersome setting up new instances for new customers. 2. Customers have started asking for customizations which would be more easily maintained as separate lines of development instead of tons of customization code. Optimally, I am envisioning some kind of source control in which features are in the main project line and customizations per customer are maintained in a per-customer repo. The customizations per project would need to remain separate, but if a bug is found and fixed in a particular project, I would need to percolate the fix back to the main line and into all of the other customer repos. The problem is I have never seen this done before, and before spending time trying to find source control that can accommodate this scenario and implementing it, I figure it best to ask if anyone has something less complicated, or knows of a source control product which can handle this with very little hair pulling."} {"_id": "17525", "title": "What should I study to 'learn' Comp Sci?", "text": "[This question was originally asked on Stack Overflow, but it was recommended that I move it here.] I can't find anything quite like the question I'm about to ask, so please forgive me if there's something just like it already; please feel free to point me in the right direction. It'll take a bit of background explaining too; please forgive me for that. [backstory] Basically, I graduated from university about 18 months ago with a degree in Business Information Systems and Japanese. The Japanese took up half of the degree, so the BIS was only joint. I only learned PHP in terms of languages and basically no computing theory - everything was vocational (networking, programming basics, CMS development, Office and VBA, and then loads of business theory courses). Since then, I decided to teach myself C# and ASP.NET and try to get a position as a programmer. I created an online-shop-style website and a small CRM application in Windows Forms, both to teach myself and to build a portfolio, and luckily I managed to snag a developer position. Bad side?
I'm the only developer at my company. Now don't get me wrong: in the last year I've learned loads and loads. I did some development before uni so I knew the basics anyway, but it was very much a \"learning from books\" job - every night. Now then... I am now at a point where I'm building software on a regular basis, making good judgements about time scales, and have even been told my code and methodology are good by other professionals that have been in the game longer than me, and they have offered me jobs. [/backstory] What this whole thing boils down to is that I now want to study up on the topics I'll have missed by not doing CS. More importantly, could you recommend books / free online courses? I want to learn about computer science theory, not just better coding. Thank you!"} {"_id": "244444", "title": "Organization of DLL-linked functions", "text": "This is a code organization question. I got my basic code working, but when I expand it, it will be terrible. I have a DLL which I don't have a `.lib` for. Therefore I have to use the whole `LoadLibrary()/GetProcAddress()` combo. It works great. But this DLL that I'm referencing has 100+ functions. My current process is: 1) Typedef a type for the function. E.g.: `typedef short(__stdcall *type1)(void);` 2) Declare a function pointer with the name that I want to use, such as `type1 function_1;` 3) Load the DLL, then do something like: `function_1 = (type1)GetProcAddress(hinstLib, \"_mangled_funcName@5\");` Normally I would like to do all of my function definitions in a header file, but because I have to use `LoadLibrary()`, it's not that easy. The code will be a mess. Right now I'm doing (1) and (2) in a header file and was considering making a function in another .cpp file to do the `LoadLibrary()` call and dump all of the (3)s in there. I considered using a namespace for the functions so I can use them in the main function and not have to pass over to the other function. Any other tips on how to organize this code? My goal is to be able to use `function_1` as a regular function in the main code. If I have to use a `ref::function_1`, that would be okay, but I would prefer to avoid it. This code for all practical purposes is just plain C at the moment."} {"_id": "234823", "title": "Release an open-source project under multiple licenses", "text": "Consider this license I wrote for my software: Copyright 2014, [my name] The author(s) of this software intends to provide the users with the maximum amount of freedom. As such, this software may be used under any one of the following licenses: 1. MIT: [link to the license] 2. Apache 2.0: [link to the license] 3. BSD 2-clause: [link to the license] 4. BSD 3-clause: [link to the license] Or, it can be considered released into the public domain in jurisdictions where this is possible. In any case, the software comes with absolutely no warranty. Do you see any potential problems with this license, especially from a legal point of view? Does it succeed in providing the users with the maximum amount of freedom?"} {"_id": "244446", "title": "Why shouldn't the DbContext object be referenced in the service layer?", "text": "I've been looking for implementations of `Service Layer` and `Controller` interaction in blogs and in some open source projects. All of them seem to reference the `DbContext` object in repository classes but avoid using it in service classes. Service classes essentially use `IQueryable` references to `DbSet`s.
I want to know why this practice is good and why `DbContext` shouldn't be referenced in the service layer."} {"_id": "244449", "title": "What is the origin of the negative term \"legacy code\"?", "text": "Everyone talks about legacy code in software development, and over the last ten years I have heard the term used to paint any codebase as being bad. Where did this term, which has such powerful connotations for programmers, originate? I am sure there must be some book on software development that pioneered this term. I would love to locate the origin of the term \"legacy code\"."} {"_id": "162587", "title": "Version control and project management for freelancing jobs", "text": "Are there version control and project management tools which \"work well\" with freelancing jobs, if I want to keep my customer involved in the project at all times? What concerns me is that repository hosting providers base their fees on the \"number of users\", a number which I feel will constantly increase as I finish one project after another. For each project, for example, I would have to grant permissions to my contractor to allow him to pull the source code and collaborate. So how does that work in practice? Do I \"remove\" the contractor from the project once it's done? This means I basically state that I no longer offer support and bug fixes. Or do freelancers end up paying more and more money for these services? Do you use such online services, or do you host them yourself? Or do you simply send your code to your customer by e-mail in weekly iterations?"} {"_id": "255916", "title": "Must the head of a linked list contain any data?", "text": "Is there any rule that says the head of a linked list should contain data? Is it that by doing so we save space for one node? Why does the head contain data in some implementations and not in others?"} {"_id": "204488", "title": "Why does OCaml's (and F#'s) type inference algorithm need tagging functions as recursive?", "text": "From _Real World OCaml_ (beta): > OCaml distinguishes between non-recursive definitions (using `let`) and > recursive definitions (using `let rec`) largely for technical reasons: the > type-inference algorithm needs to know when a set of function definitions > are mutually recursive, and for reasons that don't apply to a pure language > like Haskell, these have to be marked explicitly by the programmer. Why is this the case (what is the technical reason, exactly), and why does a pure language like Haskell \u201cget away\u201d with not having to tag functions as recursive?"} {"_id": "7993", "title": "How do you get your product owner more engaged on agile projects?", "text": "During iteration retrospectives on agile projects, one of the topics that comes up most often for us is that the product owner is (or product owners are) not available or engaged in the project at a day-to-day level. It seems to be a common theme that customers are unwilling to \"give up\" the necessary amount of their product owner's time to the project, but instead have them answer questions via email, or during product demos only. This has the effect of increasing the length of the feedback cycle and making the project less effective. Have you had to overcome this hurdle? How did you do it?"} {"_id": "152533", "title": "Should a server \"be lenient\" in what it accepts and \"discard faulty input silently\"?", "text": "I was under the impression that by now everyone agrees this maxim was a mistake.
But I recently saw this answer, which has a \"be lenient\" comment upvoted 137 times (as of today). In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were in a few years ago; they have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept _will_ lead to this. The second part of the maxim is _\"discard faulty input silently, without returning an error message unless this is required by the specification\"_, and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean? * * * The original question said \"program\", and I take everyone's point about that. It can make sense for programs to be lenient. What I really meant, however, is APIs: interfaces exposed to _other programs_, rather than people. HTTP is an example. The protocol is an interface that only other programs use. People never directly provide the dates that go into headers like \"If-Modified-Since\". So, the question is: should the server implementing a standard be lenient and allow dates in several other formats, in addition to the one that's actually required by the standard? I believe the \"be lenient\" is supposed to apply to this situation, rather than human interfaces. If the server is lenient, it might seem like an overall improvement, but I think in practice it only leads to client implementations that end up _relying_ on the leniency and thus failing to work with another server that's lenient in slightly different ways. So, should a server exposing some API be lenient, or is that a very bad idea? * * * Now onto lenient handling of user input. Consider YouTrack (bug-tracking software). It uses a language for text entry that is reminiscent of Markdown. Except that it's \"lenient\". For example, writing - foo - bar - baz is _not_ a documented way of creating a bulleted list, and yet it worked. Consequently, it ended up being used a lot throughout our internal bug tracker. The next version comes out, and this lenient feature starts working slightly differently, breaking a bunch of lists that (mis)used this (non)feature. The documented way to create bulleted lists still works, of course. So, should my software be lenient in what _user inputs_ it accepts?"} {"_id": "235618", "title": "Environment-aware Code", "text": "There are situations where the deployed environment (**development**, **test**, or **production**, for example) might dictate the outcome of certain actions. For example, perhaps a successful \"user registration\" process will send a notification email to the new user. Environment-specific actions: * **Development:** Do not actually send the email. Email logs will provide enough for developers. * **Test:** Send all emails to some testEnvironment@domain.com inbox and not to the user's address. * **Production:** Send the email to the user. I have listed three possible solutions below. ## **CurrentEnvironment configuration value** One way of solving this, which I have seen a lot, is to have some configuration value (whether in some XML config file or in the database) such as `CurrentEnvironment`, which specifies the environment the system is deployed to.
This would require case/if checks in code to determine the desired action: if(CurrentEnvironment == Environment.Test) { // Send all emails to some testEnvironment@domain.com inbox. } else if(CurrentEnvironment == Environment.Production) { // Send the email to the user. } This is not a maintainable solution in my opinion. ## **Wipe all email addresses** Another method is to run a change script, after restoring a database, to do the following: * Remove all user email addresses (in development), * Replace all user email addresses with testEnvironment@domain.com (in test) This is an extra step in the release process which, if missed, could have some dangerous results. Additionally, this solution only fixes the email problem. There are perhaps many other situations where the environment matters. ## **Config transformation** Another idea is to use web.config transformations. This way the config can be different for different environments. For example, we will have the following configs: web.config web.Development.config web.Test.config web.Production.config The transformation can then use different \"providers\" or set certain attributes according to the environment. For example, an `overrideDeliveryAddress` can be set in the `web.Test.config` (e.g. something like `<add key=\"overrideDeliveryAddress\" value=\"testEnvironment@domain.com\" />`). This solution requires much more work, but is more maintainable and less invasive. Code is now environment-oblivious. In what other ways can the above be achieved? Should code EVER be environment-aware?"} {"_id": "125882", "title": "Is there a subset of programs that avoid the halting problem", "text": "I was just reading another explanation of the halting problem, and it got me thinking: all the problems I've seen that are given as examples involve infinite sequences. But I never use infinite sequences in my programs - they take too long. All real-world applications have lower and upper bounds. Even reals aren't truly real - they are approximations stored as 32/64 bits, etc. So the question is: is there a subset of programs for which it can be determined whether they halt? Is it good enough for most programs? Can I build a set of language constructs from which the 'haltability' of a program can be determined? I'm sure this has been studied somewhere before, so any pointers would be appreciated. The language wouldn't be Turing complete, but is there such a thing as nearly Turing complete which is good enough? Naturally enough, such a construct would have to exclude recursion and unbounded while loops, but I can write a program without those easily enough. Reading from standard input, as an example, would have to be bounded, but that's easy enough - I'll limit my input to 10,000,000 characters etc., depending on the problem domain. TIA [Update] After reading the comments and answers, perhaps I should restate my question. For a given program in which all inputs are bounded, can you determine if the program halts? If so, what are the constraints on the language and what are the limits of the input set? The maximal set of these constructs would determine a language which can be deduced to halt or not. Is there some study that's been done on this? [Update 2] Here's the answer, and it's yes, way back in 1967, from http://www.isp.uni-luebeck.de/kps07/files/papers/kirner.pdf That the halting problem can be at least theoretically solved for finite-state systems has already been argued by Minsky in 1967 [4]: \u201c...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern.
The duration of this repeating pattern cannot exceed the number of internal states of the machine...\u201d (and so if you stick to finite Turing machines, you can build an oracle)"} {"_id": "152535", "title": "Pattern for accessing MySQL connections", "text": "We have a C++ application trying to access a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which is trying to access the database for its own purpose. The application has a simple ORM-like model, but that really is not an important factor here. There are three potential access patterns I can think of: 1. There could be a single `connection object` per application or per thread, shared between all objects (or a group of them). The object needs to be thread-safe, and there will be contention, but MySQL will not be flooded with too many connections. 2. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: a. each object keeps a persistent connection for its whole life, OR b. each object initiates a connection as and when needed. 3. To reduce the contention of case 1 without creating as many sockets as in case 2, we can have group/set-based connections. So there could be more than one connection (say N), each of which could be shared across M objects. Naturally, each of the patterns has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose the pattern for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Are there any other patterns which are better? * * * PS: I have been through these questions: mysql, one connection vs multiple and MySQL with multiple threads and processes. But they don't quite answer exactly what I am trying to ask."} {"_id": "235615", "title": "Refactoring sought for replacing shared data types in a .NET component", "text": "I am in charge of updating a software product that is made up of two components: the Controller process and the UI process. The Controller and the UI communicate via XML messages. Furthermore, the Controller is built on top of a shared library that implements a number of business flows. For building the messages for Controller-UI communication, somebody decided to use .NET built-in serialization and deserialization, and therefore created class types annotated with attributes that control serialization (XmlElement, XmlAttribute, etc.) for each message, serializing and deserializing them by using the default XML serializer. For the first couple of controllers, this was fine. But now, while some of the messages can be reused, the others cannot. They accomplish similar functions but are structurally incompatible with the information we need them to contain. Because of this decision, we are stuck with data types that cannot model the data we need and that cannot be changed, because changing them would break the other controllers. At the same time, we need the functionality and messaging sequences implemented in the shared library. I need a way out of this conundrum, one that has the smallest technical cost in relation to changing the other controllers. I need a sort of \"structural polymorphism\" for these types, where the type is the same but its internal structure is different.
Perhaps these messages can be changed to 'object' data types, but in that case... I need a way to control from outside the shared library the casting that gets made inside the shared library. How do I approach this? I am not experienced enough to find a clean refactoring for this; I feel stuck. We are using .NET 4.5"} {"_id": "254277", "title": "How to fix legacy code that uses <string.h> unsafely?", "text": "We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Political constraints dictate that the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (`sprintf()`, `strcpy()`, etc). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warning recommends that we move to using `sprintf_s()`, `strcpy_s()`, etc. However, these are Microsoft-created (and proprietary) functions and aren't available on (say) gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string libraries. I only want to go over the code once. I'd rather not switch to C++ if we can help it."} {"_id": "235616", "title": "Loosely compare user input with database record", "text": "I have a database table with 3 columns: id, question and answer. On the front-end, I have a PHP application that shows questions to the user. The application would then compare the user input with the answer in the database. Using only PHP, how do I loosely compare the user's input with the record in the database and pass it as correct even though it is not a strict match? For example: **Question:** What is the structure of a water molecule? **Answer on database:** Hydrogen and oxygen **Acceptable answers:** Oxygen and hydrogen, hydrogen n oxygen, hydrogen & oxygen"} {"_id": "101459", "title": "Size of database a factor for Hibernate vs JDBC?", "text": "Do you know if the size of the database (number of tables used) is a factor when choosing between Hibernate and JDBC? Why or why not? In my particular case, I am evaluating Hibernate and JDBC for interacting with an Oracle database for a small web app."} {"_id": "254271", "title": "git workflow for separating commits", "text": "Best practice with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But that doesn't match how I work at all. For example, recently I needed to add some code that checked whether the version of a plugin for my system matched the versions the system supports. If not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages so I edited that code. That code was in the startup module of one entry point to the system. The plugin checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin checking code works, I need to go edit UI/UX code to make sure it tells the user \"You need to upgrade\".
When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc etc. Being lazy I'd probably just `git add . && git commit -a` the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: **Are there workflows that work for you or that make this process easier?** I don't think I can somehow magically always modify stuff in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can `git add --interactive` etc, but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit is actually going to work. Also, since the changes are sitting in the current directory, it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and then run the tests, and I missed a few lines or accidentally added a few too many lines, I have no idea how I'd easily recover from that (as in, either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit). Thoughts? Suggestions? PS: I hope this is an appropriate question. The help says _development methodologies and processes_"} {"_id": "135234", "title": "Can Agile be accomplished without client involvement?", "text": "I couldn't write a book on Agile. I have worked in several shops that call their process Agile. One of the main points of Agile development is regular client involvement. After a sprint, the work can be demo'd to the client to obtain their feedback. Rinse and repeat. The problem I come across is that many clients do not want to be that involved. They would much prefer a waterfall approach: gather the requirements up front, then come back when you are done. In my experience, waterfall does not work. Clients do not know what they want until they see it. The waterfall dilemma is further propagated by a large community of developers that want to have all the requirements up front. This way they know what they are building, they can architect accordingly, and the client is to blame because they \"signed off\" on said requirements. Am I incorrect? Can Agile work without client involvement? If so, how, and how do you overcome the issues I discussed?"} {"_id": "72867", "title": "What would make you adopt a language with very few resources and tools, barely any libraries and more or less no other users for a project?", "text": "Often, many decent languages just won't get popular because of the chicken-and-egg problem where a language lacks resources (books, tutorials... other than for some basic language documentation), tools (IDE support, debuggers), libraries (other than for some kind of small standard library) and very few people have adopted it. The language might not have been standardized either. Such languages do still get used occasionally, as otherwise we wouldn't have any popular languages today. Some got popular because they were created for a specific major product and used for that (e.g. C for UNIX), while some languages weren't, yet still got quite popular (e.g. Java and Python). What would such a poorly supported language need to offer in order for you to adopt it for a software project you are going to create?
Surely, there would have to be something quite revolutionary or unique about the language."} {"_id": "120964", "title": "How to instrument existing ASP.NET application?", "text": "We have several highly complex ASP.NET web applications that are used internally by hundreds of users. We are trying to figure out which areas of the applications to invest in to improve functionality, but we aren't sure which screens/features are more heavily used. So, ideally, I'd like to find a way to add a layer of instrumentation to the applications that gathers metrics on which buttons are being clicked, which text boxes are being used, etc. Are there any products / open source apps out there that will do this sort of instrumentation for ASP.NET? Obviously I could do it myself manually by going into the code and injecting logging statements everywhere, but this would be a significant amount of work that would be hard to accomplish."} {"_id": "120965", "title": "Idea of an algorithm to detect a website's navigation structure?", "text": "Currently I am in the process of developing an importer of any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting a site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

+ Root page 1
  + Child page 1
  + Child page 2
    + Child child page1
  + Child page 3
+ Root page 2
  + Child page 4
+ Root page 3
+ ...

I.e. I want to be able to detect the menu structure from the links inside the pages. This does not have to be 100% accurate, but at least I want to achieve more than just a flat list. I thought of looking at multiple pages to find similar areas, identify these as menu areas and parse the links there, but after all I'm not _that_ satisfied with this idea. **My question:** Can you think of any algorithm for detecting such a structure? **Update 1:** What I'm looking for is _not_ a web spider, but an algorithm to create a logical tree of the relationships of the pages, to be able to create pages and subpages inside my CMS when importing them. **Update 2:** As per Robert's suggestion, I'll solve this by starting at the root page, then simply parsing links as I go and treating every link inside a page as a child page. Probably I'll recurse not in a depth-first manner but rather in a breadth-first manner to get a more balanced navigation structure."} {"_id": "124426", "title": "Is there an efficient way to convert a Java project with a TUI into one with a GUI?", "text": "So I need to make the interface of a Java-database application a GUI. We coded the project in Eclipse, and I am going to import it into NetBeans. In NetBeans there's a great GUI builder, but I'm unsure about how to go about this."} {"_id": "124423", "title": "How come the design process is so different for Web Design and GUI Design?", "text": "I had the opportunity to develop applications in several niches: server back-end, desktop clients, and recently a small-scale website. Having now indulged in website design, I am asking myself and you: how come the UI design process is so different? Can you point out the differences, and why they originated? Once, HTML was for marking up text and the desktop GUI was the front-end for doing the real job, but today, why is the GUI development process still so different? Here are some differences for a start: Desktop: **Use of explicit layout**.
(ex: StackPanel in WPF, BorderLayout in Swing) Web: **Layout is a set of CSS properties** (ex: height/width, margin/padding, float, display..) that are given to each container-like element. Desktop: **Isolation of GUI components** into OO classes, easy reuse. Web: **Components are not reused at the HTML level**; they may be reused in dynamic server-side HTML generation. Only CSS styles may be reused. Can you name a few more, and why? Why can't they be similar? Side note: I have some experience with GWT, it creates a client application that runs in the browser, but why can't I make a webpage (like a blog page) with desktop UI design methods and tools?"} {"_id": "81971", "title": "Why does Knuth forbid the reading of pre-chapter quotations in The Art of Computer Programming?", "text": "After the preface to The Art of Computer Programming is the Procedure for Reading This Set of Books. Step 4 is to begin reading Chapter N, with the specific instruction to \" _not_ read the quotations that appear at the beginning of the chapter.\" (emphasis his). This section is present in the 1997 3rd edition of Volume 1, but not in the 3rd edition of Vol 2 or the 2nd edition of Vol 3, both dated 1998. I have guesses as to why this curious instruction exists, but I've been unable to find any reference to it. Does anyone have first-hand knowledge about this, or a link to more info?"} {"_id": "124429", "title": "What to choose: freeware, shareware or payware?", "text": "A few years ago I developed a program that has a steady group of users. It became quite popular and now we have close to two million downloads on Download.com. At first I provided the application as freeware to reach as many users as I could, but there are server costs, and of course it would be nice to receive a bit of money in return for the effort. I'm looking for a way to gain reasonable revenue from my freeware application. Here's my experience so far (current in bold): Xacti.com: Low revenue, and it took me a lot of emails to eventually get my money. OpenCandy.com: Friendly, revenue is ok, good online overview of your downloads. **InstallMonetizer.com**: We're testing with them right now; revenue seems higher, and there is a referral program. Another option would be to switch to shareware (let's say 30-day trial based), but I think that would cost me a lot of users. The offer screens are harmless in my opinion. What do you use for your software (open source, freeware, shareware)? What are your experiences so far? What would you accept as a user? **Update 28/12/2011:** The number of installs is still increasing with InstallMonetizer: http://www.installmonetizer.com. The bundle automatically checks for the best match in geographic location and revenue. For some applications we see $1.00 per US download! Many other countries are supported as well. It will take a bit more time to update all of our mirrors. I'll keep you guys posted."} {"_id": "124428", "title": "How to explain to a layperson the variance in programmer rates?", "text": "I recently talked to a guy who is looking for developers to build a product idea. He mentioned he has received interest from people, but the rates have varied from $20-120/hr. He estimates this project should take 3-6 months, and since he is non-technical, he is confused about why there can be so much variance. I understand how I would choose someone, but I am a developer and can gauge other people's work. How can I explain to him (in a non-biased way if possible, as I will apply as well) the variance in rates?
Is there any good analogy that would help?"} {"_id": "118954", "title": "How do I develop in more languages with fewer IDEs", "text": "I would like to set up my computer so that I can develop in .NET, C#, Java, ActionScript, JS/CSS, and functional languages such as Scala or Haskell. However, I want to do this with the fewest full-featured IDEs to learn / programs taking up hard drive space on my computer / IDEs running simultaneously. Which programs can I use to minimize the number of full-featured IDEs I have to learn/use? (For example, if Eclipse could handle all of these frameworks, that would be a valid answer)"} {"_id": "85072", "title": "Which is more important: solving or implementing?", "text": "I guess I'm capable of solving problems with my pencil and paper easily, but I find it takes more time to implement the solution in code. Between \"solving\" and \"implementation\", which is more important in the IT field?"} {"_id": "85070", "title": "As a novice, how can I best overcome the complexity of larger projects?", "text": "Currently, I'm a student and I can tackle all my assignments, which only require, at most, 3-5 classes to implement. Now I'm wanting to develop programs on a bigger scale, but I am having a really difficult time figuring out how to design the program from a higher level. The real-world example I am working on is a group project to build an RSS reader. So far so good, and I have a very basic prototype working that uses an RSS framework plus a few classes I created myself. Using the prototype as an example and the requirements document (well over 50 requirements, as of now) I need to build the actual program. I feel like I know a lot of the principles but I am not sure how to apply them to varying contexts such as this. I'm hung up on how to break this simple RSS reader into multiple components, classes, etc., and what those should even be. I would really like to know how I can/should go about wrapping my head around this project. I want to be able to step back and visualize the different pieces of the program so I have some structure before writing any code. Extra kudos if you have a link to an article or blog that actually shows, by example, how you can take a concept and break it down at a high level. Sort of a \"from concept to production\" type article instead of just abstract information as available at sites like SourceMaking."} {"_id": "152288", "title": "How to commit a file conversion?", "text": "Say you've committed a file of type _foo_ in your favorite `vcs`:

$ vcs add data.foo
$ vcs commit -m \"My data\"

After publishing you realize there's a better data format _bar_. To convert you can use one of these solutions:

$ vcs mv data.foo data.bar
$ vcs commit -m \"Preparing to use format bar\"
$ foo2bar --output data.bar data.bar
$ vcs commit -m \"Actual foo to bar conversion\"

or

$ foo2bar --output data.foo data.foo
$ vcs commit -m \"Converted data to format bar\"
$ vcs mv data.foo data.bar
$ vcs commit -m \"Renamed to fit data type\"

or

$ foo2bar --output data.bar data.foo
$ vcs rm data.foo
$ vcs add data.bar
$ vcs commit -m \"Converted data to format bar\"

In the first two cases the conversion is not an atomic operation and _the file extension is \"lying\"_ in the first commit. In the last case the conversion will not be detected as a move operation, so as far as I can tell it'll be _difficult to trace the file history_ across the commit.
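(For what it's worth, with git specifically I believe the last variant can sometimes still be traced by loosening the similarity threshold that rename detection uses; a hedged example, which only helps if enough of the content survives the foo-to-bar conversion:)

# Hedged, git-specific illustration: ask the content-based rename detection
# to pair data.foo with data.bar at only 40% similarity.
$ git log --follow --find-renames=40% -- data.bar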
Although I'd instinctively prefer the last solution, I can't help thinking that tracing history should be given very high priority in version control. What is the best thing to do here?"} {"_id": "163185", "title": "Torvalds' quote about good programmer", "text": "I accidentally stumbled upon the following quote by Linus Torvalds: > \"Bad programmers worry about the code. Good programmers worry about data structures and their relationships.\" I've thought about it for the last few days and I'm still confused (which is probably not a good sign), hence I wanted to discuss the following: * What interpretations of this are possible/make sense? * What can be applied/learned from it?"} {"_id": "163184", "title": "Compiler design in Lisp", "text": "With some googling, I could easily find some documents on compiler design in C, Java, C# and even Haskell, but not in Lisp, except for implementing Scheme/Lisp in Lisp. Is Lisp not so popular for implementing other (non-functional) programming languages? Do you know of any good documentation about implementing a compiler in Lisp?"} {"_id": "231446", "title": "MVC: How to Implement Linked Views?", "text": "I'm developing a Java application to visualize time series. I need (at least) three linked views, meaning that interaction with one of them updates the others. The views are: 1. A list represents the available and currently selected time series. The selected time series are used as input for subsequent computations. Changing the selection should update the other views. 2. A line chart displays the available and selected time series. Time series should be selectable from here by clicking on them. 3. A bar chart shows aggregated data on the time series. When zooming in to a period of time, the line chart should zoom in to the same period (and vice versa). How do I implement this nicely, from a software engineering point of view? I.e., I'd like to write reusable, clear and maintainable code. So I thought of the MVC pattern. At first I liked the idea of having the three view components observe my model class and refresh the views upon being notified. But then, it didn't feel right to store view-related data in the model. Storing e.g. the time series selection or plot zoom level in the model makes implications about the view which I wouldn't want in a reusable model. On the other hand, having the controllers observe each other results in a lot of dependencies. When adding another view, I'd have to register all other views as observers of the new view and vice versa, passing around many references and introducing dependencies. Maybe another \"model\" storing only view-related data would be the solution?"} {"_id": "20385", "title": "Should I ask/tell my boss before freelancing?", "text": "I am currently working as an intern at a consulting firm. I am soon to become a full-time employee once I graduate next semester, and I love working there. However, as a student, I lack money, and I have met a business owner outside of work who has offered to hire me for some freelance web development. Because I met this individual outside of work, I feel it would not be a conflict of interest to freelance for him. However, the work he is wanting me to do is very similar to what I already do for my current boss. Should I speak with my boss before considering the offer?
**Edit**: I understand that I cannot take IP from work, and I am not in a contract; I'm employed at will."} {"_id": "255631", "title": "Liskov Substitution and SRP Principle violation - how best to structure this scenario?", "text": "While learning SRP and LSP, I'm trying to improve the design of my code to comply best with both of these principles. I have an employee class that has a calculatePay method on it. Firstly, I believe that, following the OOP SOLID principles, the calculatePay() method should not be an employee object's responsibility. Common responsibilities of employees would be to performDuties(), takeLunchBreak(), clockIn() and clockOut(), etc. Am I right in thinking this way? That's why I feel calculatePay() should belong in some other class. Okay, so that's my SRP insecurity. Coming to LSP: I have subclasses like accountants, salesmen, and directors. These are all employees that get paid. How would I change this design to better support volunteers? Volunteers don't get paid.

public class Employee {
    private String name;
    private int salary;
    private boolean topPerformer;
    private int bonusAmount;

    public Employee(String name, int salary, boolean topPerformer, int bonusAmount) {
        // set fields etc..
    }

    // This method doesn't seem to belong here.
    public int calculatePay() {
        if (topPerformer) {
            return salary + bonusAmount;
        } else {
            return salary;
        }
    }
}

"} {"_id": "255630", "title": "How to install OpenCV's cross-compiled libraries on the Raspberry Pi", "text": "My problem may be complicated; I'll try to explain it in detail. I cross-compiled OpenCV for ARM (Raspberry Pi) from my host, and everything works fine. I also tried to cross-compile an example and it worked too; I got an executable. But once the executable is transferred to the Raspberry Pi, I cannot execute it, and I get the following error: error while loading shared libraries: libopencv_core.so.3.0: cannot open shared object file: No such file or directory. On my host, in my OpenCV build folder, I have an \"install\" folder in which there are four subfolders: \"bin\", \"share\", \"include\", \"lib\". What am I supposed to do with these folders? I think I must use their content but don't know how. Where and how do I install the cross-compiled libraries on the Raspberry Pi? How can I fix that error? Thanks in advance"} {"_id": "255634", "title": "Moving my ASP.NET MVC application to Amazon AWS", "text": "I built an ASP.NET MVC application, and now I want to move it to Amazon AWS from my development server. My question is: How does one migrate an ASP.NET application to Amazon AWS? Here is what I have researched/found so far: * Sessions don't work across instances, so I need to use DynamoDB or memcached to store state. I looked into various clients like Enyim as a possible solution to the session state problem using Amazon ElastiCache. * Amazon has a web service for SMTP emails. So I will need to rework the code that sends SMTP emails to send through Amazon SES, and reroute incoming emails to a separate mail server by changing the DNS records. * There's an SDK for managing user identity (Amazon IAM). I will need to change the authentication code to use this web service. Perhaps there are more points that I am unaware of. So, how does one migrate an MVC app to AWS?"} {"_id": "255636", "title": "How to determine what should get its own respective controller?", "text": "I'm using the MVC pattern in my web application built with PHP.
I'm always struggling to determine whether I need a new dedicated controller for a set of actions or whether I should place them inside an already existing controller. Are there any good rules of thumb to follow when creating controllers? For example I can have: `AuthenticationController` with actions: * `index()` to display the login form. * `submit()` to handle form submission. * `logout()`, self-explanatory. **OR** `LoginController` with actions: * `index()` to display the login form. * `submit()` to handle form submission. `LogoutController` with action: * `index()` to handle logging out. **OR** `AccountController` with actions: * `loginGet()` to display the login form. * `loginPost()` to handle login form submission. * `logoutGet()` to handle logging out. * `registerGet()` to display the registration form. * `registerPost()` to handle form submission. And any other actions that are involved with an account..."} {"_id": "213997", "title": "How do the blogging sites or sites that host prose-like content store the data?", "text": "How do blogging sites and Q&A sites (or any other sites that host prose-like content) store their data? That is, how do you store blogs and Q&A content in the database? I suspect it is not good to store this huge text data inside RDBMS table columns."} {"_id": "141535", "title": "Optimal Database design regarding functionality of letting user share posts by other users", "text": "I want to implement functionality which lets users share posts by other users, similar to the Facebook and Google+ share buttons and the Twitter retweet. There are 2 choices: 1) I create a duplicate copy of the post and have a column which keeps track of the original post id and makes clear this is a shared post. 2) I have a separate shared-post table where I save the post id, which is a foreign key to the post id in the post table. Talking in terms of programming, basically I keep a pointer to the original post in a separate table, and when I need to get posts posted by a user as well as shared ones I do a left join on the post and shared post tables:

Post(post_id(PK), post_content, posted_by)
SharedPost(post_id(FK to Post.post_id), sharing_user, sharedfrom (in case someone shares from a non-owner's profile))

I am in favour of the second choice but wanted the advice of experts out there. One thing more: posts on my webapp will be more along the lines of Facebook size, not tweet size."} {"_id": "210765", "title": "Is this how Simplex Noise works?", "text": "I have done a huge ton of reading up on Simplex Noise now, and after a lot of confusion and headaches I think I am able to form an idea of how Simplex Noise works. Am I right that simplex noise (2D) is just a grid built of simplexes, in this case triangles, where at every corner of a triangle there is a value between 0 and 1? When you ask the algorithm for a point, it'll convert the 2D coordinates to simplex grid coordinates, interpolate the noise and then return the actual value?"} {"_id": "216312", "title": "How to slowly release a web application to more and more users so that too many concurrent users don't crash your site?", "text": "I have a web application that I expect to go viral pretty quickly. How can I control traffic to it so that it doesn't crash under too much load? This application doesn't require user login.
It will be properly load tested, but I am not really sure what kind of traffic to expect until I make it live."} {"_id": "243146", "title": "WPF4 Unleashed - how does converting child elements work?", "text": "In chapter 2 of the book **WPF4 Unleashed** the author shows an example of how XAML processes type conversion. He states that setting Background to the string \"White\" is equivalent to setting an explicit <SolidColorBrush Color=\"White\"/> child element, because a string can be converted into a `SolidColorBrush` object. But how is that enough? It doesn't make sense to me. How does XAML know to which property the value `White` should be assigned?"} {"_id": "216316", "title": "IL and case-sensitivity", "text": "Quoted from A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET > CLS stands for Common Language Specifications. It is a subset of CTS. CLS is a set of rules or guidelines which, if followed, ensure that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions with the same name differing only in case, i.e. we should not have add() and Add(). This may work in C# because it is case-sensitive, but if we try to use that C# code in VB.NET it is not possible, because VB.NET is not case-sensitive. Based on the above text I want to confirm two points here: 1. Is the case-sensitivity of IL a concern for member functions only, and not for member properties? 2. Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of case sensitivity?"} {"_id": "243143", "title": "Object-Oriented equivalent of LISP's progn function?", "text": "I'm currently writing a LISP parser that iterates through some AutoLISP code and does its best to make it a little easier to read (changing prefix notation to infix notation, changing setq assignments to \"=\" assignments, etc.) for those that aren't used to LISP code/only learned object-oriented programming. While writing commands that LISP uses to add to a \"library\" of LISP commands, I came across the LISP command \"progn\". The only problem is that it looks like progn is simply executing code in a specific order and sometimes (not usually) assigning the last value to a variable. Am I incorrect in assuming that, for translating progn into object-oriented terms, I can simply forgo the progn function and print the statements that it contains? If not, what would be a good equivalent for progn in an object-oriented language?"} {"_id": "168667", "title": "Count function on tree structure (non-binary)", "text": "I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering approaches to handling a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a queue and counting nodes, if you prefer). It just seems expensive. (I may also want to lazy load some of my nodes down the road.) I could maintain a count at the root node. All children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. Seems a little brute force, and that usually means exception cases I haven't thought of yet, or bugs if you prefer.
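Roughly what I have in mind, as a hedged sketch (my own illustration, not Vanderboom's code):

using System.Collections.Generic;

// Every structural change walks up the ancestor chain once, so reading
// Count is O(1) and each update costs O(depth).
public class TreeNode<T>
{
    private readonly List<TreeNode<T>> children = new List<TreeNode<T>>();

    public TreeNode<T> Parent { get; private set; }
    public T Value { get; set; }
    public int Count { get; private set; } // this node plus all descendants

    public TreeNode() { Count = 1; }

    public void Add(TreeNode<T> child)
    {
        child.Parent = this;
        children.Add(child);
        for (var n = this; n != null; n = n.Parent)
            n.Count += child.Count;
    }

    public void Remove(TreeNode<T> child)
    {
        if (children.Remove(child))
        {
            for (var n = this; n != null; n = n.Parent)
                n.Count -= child.Count;
            child.Parent = null;
        }
    }
}

Breaking off a branch then costs a single O(depth) walk up from the detach point instead of a full traversal.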
Does anyone have an example of an implementation which keeps a count for an _unbalanced and/or non-binary tree_ structure rather than counting on the fly? Don't worry about the lazy load, or language. I am sure I can adjust the example to fit my specific needs. **EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...**"} {"_id": "168666", "title": "Responding to end users about bugs they found", "text": "I have an issue tracking system, but sometimes users report bugs directly to me in an email. If I'm responding to a bug report in an email, what's a good etiquette to use? Do I thank them for reporting the bug? Do I apologize for it? Is there a good template anywhere?"} {"_id": "173588", "title": "Multiple files in PowerShell", "text": "How do I use multiple files in PowerShell to ensure modularity? Could function.ps1 have all the utility functions? Could command1.ps1 have the calls related to a command?"} {"_id": "253165", "title": "Question about mocking externals", "text": "At my company we're developing quite a big project and we're arguing about the testing strategy. The question is: should all of the tests be executed in isolation from external services like databases or APIs (Facebook etc.), or just part of them? Of course what I have in mind is using mocks. We discussed the following strategies: 1. Write all of the tests in isolation from external services - mock externals everywhere. 2. Write part of the tests in isolation and create a single bigger functional test for every feature that tests it using externals (without mocking anything) - using fixtures. 3. Run all tests communicating with externals (of course that's never applicable to unit tests, so they're out of scope in that case). I know it can start quite a discussion, but I think that's what this is all about; I'd like to find pros and cons that I didn't already think of."} {"_id": "164324", "title": "Good practices for large scale development/delivery of software", "text": "What practices do you apply when working with large teams on multiple versions of a software product or multiple competing projects? What are best practices that can be used to still get the right things done first? Is there information available on how big IT companies handle the development and management of some of their large projects, e.g. Oracle Database, WebSphere Application Server, Microsoft Windows, ...?"} {"_id": "164325", "title": "Revamp application", "text": "I am a software developer with 3 years of experience. I always want to play with the latest technologies, but this is not practical. Say I developed a web application in .NET 3.5 and it is now 30% done; after the release of .NET 4.0, my mind always goes to .NET 4.0. I think like this: there are a lot of features in the new version, so why shouldn't I use that version in my application? When I worked with IT companies, most of them coded with very old versions; some used VB.NET, C, even Classic ASP. So what points should I consider if I revamp an application?"} {"_id": "184044", "title": "What is a Context Diagram in an SRS?", "text": "I am learning how to write a System Requirements Specification, and most of the templates I have seen talk of a Context Diagram.
What exactly is a Context Diagram?"} {"_id": "178270", "title": "Is \"watermarking\" code with random trailing whitespace a good way to detect plagiarism?", "text": "Consider this:

int f(int x) { return 2 * x * x; }

and this

int squareAndDouble(int y) { return 2*y*y; }

If you found these in independent bodies of code, you might give the two programmers the benefit of the doubt and assume they came up with more-or-less the same function independently. But look at the whitespace at the end of each line of code. Same pattern in both. Surely evidence of copying. On a larger piece of code, correlation of random whitespace at line ends would be irrefutable evidence of a shared origin. Now, aside from the obvious weaknesses (e.g. visible or obvious in some editors, easily removed), I was wondering whether it would be worth deploying something like this in my open source project. My industry has a history of companies ripping off open source projects."} {"_id": "121617", "title": "Architecture Question", "text": "I am writing a rules/eligibility module. I have 2 sets of data: one is the customer data and the other is the customer products data. Customer data to customer products data is one to many. Now I have to go through a set of eligibility rules for each of these customer product records. For each customer product record, I can say the customer is eligible for that product, or decline the eligibility and move on to the next product record. So in all the rules, I need to have access to the customer and customer product data (the particular record that the rules are being executed against). Since all the rules can either approve a product or decline a product, I created an interface with those 2 methods and am implementing this interface for all the rules. I am passing the customer data and one product record to all the rules (because rules should be executed on each row of customer product data). An ideal situation would be having the customer and customer product data available to the rule instead of passing them to each rule. What is the best way of doing this in terms of architecture? Edit: Here is what I am doing:

public class CustomerContract
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    // Other contract related details
}

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string State { get; set; }
}

public class CustomerInfo
{
    public CustomerInfo(int customerID)
    {
        Customer = new Customer();                     // Get customer from DB
        AppliedProducts = new List<CustomerProduct>(); // Get the customer products by customerID
        CurrentContract = new CustomerContract();      // Get the contract by CustomerID, State
        declineReasons = new List<int>();              // Just initializing; the decline codes are added by rules.
    }

    public CustomerContract CurrentContract { get; set; }
    public IList<CustomerProduct> AppliedProducts { get; set; }
    public IList<int> declineReasons { get; set; }
    public Customer Customer { get; set; }
}

public class CustomerProduct
{
    public decimal AmountCharged { get; set; }
    public int DeclineReasonID { get; set; }
    public int Product { get; set; }
    public decimal DiscountApplicable { get; set; }
    public decimal AmountQuoted { get; set; }
    public decimal Tax { get; set; }
}

public interface IRule
{
    // Since we need to have access to contracts when deciding the eligibility
    CustomerProduct ExecuteRule(CustomerProduct currentProduct, CustomerInfo customerInfo);
    // When denied:
    CustomerProduct DenyProduct(CustomerProduct currentProduct);
}

public class EligibilityEngine
{
    private readonly List<IRule> rules = new List<IRule>();
    private CustomerInfo c;

    public EligibilityEngine(int customerID)
    {
        c = new CustomerInfo(customerID);
        LoadRules();
        foreach (var customerProduct in c.AppliedProducts)
        {
            ExecuteRules(customerProduct);
        }
    }

    private void ExecuteRules(CustomerProduct currentProductItem)
    {
        foreach (var rule in rules)
        {
            currentProductItem = rule.ExecuteRule(currentProductItem, c);
        }
    }

    private void LoadRules()
    {
        // Add all the IRule types here
        // rules.Add(...);
    }
}

When I execute a rule, I have to pass the customer product data and the customer data, and I have to pass the result of one rule execution to the next. In the above pattern, if the rule changes data in the CustomerInfo class, it is not saved unless it is passed as ref. My question is: I want all rules to have access to this data without passing it in, so I need not worry about capturing the output of each rule. I want the ExecuteRule method to return void.
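Something like this hedged sketch is what I am after (MinimumAgeRule is a made-up example):

public interface IRule
{
    void Execute(); // approves or declines products using shared state
}

// A made-up rule: it reads the shared CustomerInfo it was constructed with,
// so nothing needs to be passed to Execute() and nothing needs returning.
public class MinimumAgeRule : IRule
{
    private readonly CustomerInfo c;

    public MinimumAgeRule(CustomerInfo customerInfo)
    {
        c = customerInfo;
    }

    public void Execute()
    {
        foreach (var product in c.AppliedProducts)
        {
            // Inspect c.Customer and product here; on failure, set
            // product.DeclineReasonID and add the code to c.declineReasons.
        }
    }
}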
"} {"_id": "121610", "title": "See number of SVN Checkins per folder", "text": "I have a very large SVN repository (working copy of several GB) that has just reached its 20,000th checkin. As a bit of an interesting statistic for our team (and to partly celebrate our 20,000th checkin) I'd like to make a graph showing which folders in the repository have had the most checkins. Is there any way to do this? We mostly use integrated SVN clients in our IDEs and TortoiseSVN, but I'm willing to get other tools for this one-off thing."} {"_id": "178277", "title": "Necessary Infrastructure for large project with many components communicating through IPCs", "text": "I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the violating application? More specifically, what infrastructure elements should be necessary for this large project, and which are optional but very helpful? One such example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log that contains numerous components for messages that might allude to the culprit program, along with a \"trail\" of what happened before the issue occurred. I'm thinking of something similar to Android's logcat tool. These necessary infrastructure elements should be language-agnostic.
While these elements should be understood by all engineers on the team in question, which elements should be understood in great detail by the technical system engineers, and what should the individual software engineers be responsible for adding to their tools to allow for such infrastructures to take hold? Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help! Update: I am entering a team that has maybe 5% of the code covered by unit tests and is just beginning to instrument and monitor. Each software programmer (I say programmer and not engineer because not everyone on the team is an engineer) does not understand the basics of failing immediately and sanity checking. Much of our software baseline is legacy code and is in the process of being ported over. Unfortunately we don't have the manpower to refactor a lot of the older components. This is what led me to try and understand whether there are necessary infrastructure tools that can be used to detect and find bugs at the source in a much quicker fashion. While I am not expecting a tool to magically do this, I was thinking there might be tools or configurations that allow for more easily finding bugs in a sea of components."} {"_id": "174752", "title": "Lead/Manager vs Individual contributor: which is better?", "text": "Currently I am working in a company as a manager (software dev), but I only have 6.8 years of experience. I joined this company as a software engineer and got promoted to SSE, Lead, and Manager. Some of my team members have more experience than I do, and I feel like I need more exposure/experience to take on these roles. I feel like it is better to be an individual contributor and learn many things for another couple of years and become a Principal Software Engineer, rather than getting involved in management. Options I have: 1. Ask my current employer to make me an individual contributor? 2. Find a new company and join as an SSE to start over? 3. Find a new company for a lead position? Please advise."} {"_id": "130840", "title": "Efficient graph clustering algorithm", "text": "I'm looking for an efficient algorithm to find clusters on a large graph (it has approximately 5000 vertices and 10000 edges). So far I am using the Girvan\u2013Newman algorithm implemented in the JUNG Java library, but it is quite slow when I try to remove a lot of edges. Can you suggest a better alternative for large graphs?"} {"_id": "174751", "title": "Pure Java web browser, is it practical?", "text": "I know that a Java web browser is possible, but is it practical? I've seen the Lobo project and must admit I am impressed, but from what I've gathered it seems that development stopped in 2009.
Would a browser coded in pure Java (no WebKit Java bindings of any type) be able to compete with those among the ranks of Chrome or Firefox, or would it be inherently slower, hindering the user?"} {"_id": "39371", "title": "What are the factors that have made Java a success as a programming language in enterprise computing?", "text": "What are the factors that have made Java a success as a programming language in enterprise computing?"} {"_id": "15930", "title": "How can we make software development best practices more interesting to people without a software background?", "text": "Where I work there are a few experienced software developers with a software background, but the majority of developers are physicists or chemists with excellent domain knowledge but limited experience when it comes to developing high-quality, maintainable software. To address this we have started running regular talks and workshops. **What topics do you think we should discuss to help make these people more effective software developers?** In particular we are struggling to gain enthusiasm for these talks, as many developers do not see software as an interesting subject. **How could we make these more interesting to people without a software background?** Thanks"} {"_id": "252362", "title": "Is it a good idea to merge multiple HTTP requests to save bandwidth?", "text": "I am preparing a single-page application that would sometimes be used over a slow mobile connection. Some of its parts are quite heavy in terms of API requests (fetching ten different resources for a new screen display). Now, is it a good idea to merge these services into one that provides all the required data but is not as \"pure\" in terms of REST principles? Are there significant performance gains to be expected?"} {"_id": "252360", "title": "Implementing a simple controller in embedded C", "text": "Is there a known method or pattern to implement a simple controller for an MVC design in pure C, or is the switch-case approach the standard? **Background :** I have an embedded application and I'm pleased with how the business logic turned out. I also finished the UI, which boils down to basically 2 functions:

uint8_t getInput(void);
void display(logger toDestination, char* message, ...)

Inputs can come from multiple sources and outputs can be sent to several destinations. All of that is taken care of, so the application has no idea if it's run on a PC or on the device. It also has no idea if the events come from humans or are faked by unit tests. Where I struggle is with the application logic. My views are in charge of where and how to display the outputs. My drivers, regrouped into behavioral models providing clear features, know _how_ to do their jobs very well but not _when_.
Here's my functional rough draft, which in my opinion violates principles like DRY, cohesion and flexibility, and may present scalability problems:

**Rough draft :** (I faked the names and reduced the size but you get the idea)

_menu.h_

void goToMainMenu(void); // This is the entry point of the application after everything was set up correctly
    void goToFirstSubMenu(void);
        void firstFunctionality(void); // The last levels are the ones doing the calls to the business logic
        void secondFunctionality(void);
        // Other functionality or nested sub-sub-menus
    void goToSecondSubMenu(void);
        void thirdFunctionality(void);
    int doSomethingWithTheBusinessLogic(int arg1, char arg2, void* arg3);
    // Other sub-menus and options

/* And so on, the function prototypes are indented to reflect the logical level of the menus. */

Then most of the functions will roughly look like this:

_menu.c_

void goToMainMenu(void)
{
    uint8_t choice = 0;
    resetDefaultState();
    do {
        display(toTerminal, \"Press 1 to go to sub-menu 1\" ENDL);
        display(toTerminal, \"Press 2 to go to sub-menu 2\" ENDL);
        display(toTerminal, \"Press 3 to do something with the drivers\" ENDL);
        display(toTerminal, \"Press z to go back.\" ENDL);
        choice = getChar(); /* Blocking; the thread waits for events to pop from the other threads */
        if (choice == '1') {
            goToFirstSubMenu();
        } else if (choice == '2') {
            goToSecondSubMenu();
        } else if (choice == '3') {
            doSomethingWithTheBusinessLogic();
        }
    } while (choice != 'z');
    resetDefaultState(); /* When a submenu returns we're taken to the parent menu's do-while loop */
}

* Is it a good way to do things for small-medium systems? * How could this be improved considering the limits of C and embedded environments? **What has been tried :** I tried an n-ary tree approach where nodes are menus and leaves are functionality, but without polymorphism it's hard to define elegantly how both should behave to inputs. The strongly typed system also makes it hard to do a simple callback system, because I end up needing tons of wrappers around the drivers to unify on a single callback signature (`void myCallback(void*)`). It also seems to be overkill for this simple state-flow; I feel like I'm working against KISS and productivity only because I've been hammered with \"chains of if-elses is the devil\". **What I would like :** With higher level languages you can usually bind events and callbacks. Something along the lines of `bind(myMenu, thisInput, theCallback)` so that whenever `myMenu` is focused and an event `thisInput` is raised, `theCallback` is called. I'm unsure how to do this in C without going way overboard.
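To make the comparison concrete, here is a minimal table-driven sketch of the pattern I keep circling around (names are hypothetical; it reuses the two UI functions above and assumes display() forwards printf-style formats, as its variadic prototype suggests):

typedef struct MenuEntry {
    char key;                        /* input that triggers this entry */
    const char *label;               /* text shown to the user */
    void (*action)(void);            /* leaf functionality, NULL for a submenu */
    const struct MenuEntry *submenu; /* nested menu, NULL for a leaf */
} MenuEntry;

/* One loop drives every menu; an entry with key 0 terminates a table. */
static void runMenu(const MenuEntry *menu) {
    uint8_t choice;
    do {
        for (const MenuEntry *e = menu; e->key != 0; ++e)
            display(toTerminal, \"Press %c: %s\" ENDL, e->key, e->label);
        display(toTerminal, \"Press z to go back.\" ENDL);
        choice = getInput();
        for (const MenuEntry *e = menu; e->key != 0; ++e) {
            if (choice == e->key) {
                if (e->action) e->action();
                else if (e->submenu) runMenu(e->submenu);
            }
        }
    } while (choice != 'z');
}

The flow then lives in constant data tables instead of nested if-else chains, and the recursion depth is bounded by the static menu nesting.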
"} {"_id": "8633", "title": "How to pick a framework", "text": "This quote by Joel Spolsky resonated with me: > Which is better, XUL, Eclipse's SWT, or wxWindows? I don't know. They are all such huge worlds that I couldn't really evaluate them and tell. It's not enough to read the tutorials. You have to sweat and bleed with the thing for a year or two before you really know it's good enough... Unfortunately, for most projects, you have to decide on which world to use before you can write the first line of code, which is precisely the moment when you have the least information. I'm enough of a perfectionist that I _hate_ the thought of picking a framework for a new project without complete knowledge of what the best choice is, but the fact is that, as Joel says, there's no (practical) way to have complete knowledge of what the best choice is. Right now I'm trying to pick a web framework. Right now, based on our target platforms and my judgment of various languages, I'd pick Python over Ruby, C# / ASP.NET, or Java / J2EE. Of the available Python frameworks, based on this SO question, I'd pick Pylons over Django. (Our intended projects are somewhat specialized, not content-related, so Pylons' flexibility sounds better than Django's CMS support and built-in admin. Our team has relatively little experience in any of the alternatives, so that's not a significant factor.) But Django is substantially more popular than Pylons, and Ruby on Rails or ASP.NET are substantially more popular than Django. So what am I to conclude? 1. The very popular choices are more popular because they're much better, so I should pick one of the more popular choices instead of what I'm currently planning. 2. The very popular choices are not necessarily much better, but more popularity means better documentation, better community support, better add-ons, and an easier time finding developers with prior experience, so I should pick one of the more popular choices instead of what I'm currently planning. 3. Any of the options would be good, so I should pick what sounds best for my target project without stressing about popularity."} {"_id": "8631", "title": "What do you name functions/variables/etc when you can't think of a good name?", "text": "When you are defining a function/variable/etc and are not sure what to name it, what do you name it? How do you come up with a name? If you use a **_temporary_** name as a placeholder until you give it its real name, what temporary name do you use? * * * ### update I have been using things like `WILL_NAME_LATER`, `NEEDS_NAME`, or `TO_BE_NAMED`. I was hoping there was an adopted convention; I was actually hoping that if I used this adopted convention my IDE would highlight the name until I changed it."} {"_id": "95362", "title": "Can I use publicly mentioned algorithms for writing programs?", "text": "I want to write a program that solves sudoku. So, I found some sudoku algorithms on Wikipedia. Can I use them, or do I need to develop my own algorithm? Also, do I need to ask the specific license holder's permission? If so, how would I go about obtaining that permission?"} {"_id": "92385", "title": "trust factor in code review", "text": "How much should a reviewer trust the coder who submits the code for review? I always have that dilemma: should I go and test the proposed changes, or should I trust that the submitter knows what he is doing? Mainly because, at the end of the day, it's the reviewer's responsibility too. Code reviews are not always fun."} {"_id": "126065", "title": "What is the equivalent of Josh Bloch's \"Effective Java\" for Objective-C/Cocoa?", "text": "I'm a newcomer to Objective-C and Cocoa but a reasonably experienced Java programmer. When I was starting out, reading Josh Bloch's Effective Java made me a 20% better Java programmer overnight. As its foreword says, most language books teach you syntax and vocabulary, but not much about idiomatic usage. I **hate** knowing I'm writing \"The vase of my uncle is on the credenza of my aunt\" kind of stuff in a new language. What's the equivalent of _Effective Java_ for Objective-C and Cocoa?"} {"_id": "221161", "title": "Guidance on Excel VBA resource scheduling algorithm?", "text": "This was posted originally at Stack Overflow, though it was suggested I post here instead.
I am looking to create an Excel VBA solution that will create a rota/schedule allocating staff to service users using an algorithm. I believe there are already existing names for this kind of problem/algorithm, but I am not entirely sure what to call it. Here is my scenario: A colleague approached me with a problem of scheduling staff against required visits. Currently this is a manual process done every week and is labour-intensive. The datasets are small: on average there would be 20 service users each requiring 2 visits a day, with 5 members of staff available each day. I have a worksheet that contains the list of staff who are available for the following week. Each row of data contains their name, gender, skill level, home postcode, day working, start time and end time. A member of staff may be listed more than once on the same day, as they may be available for hours in the morning and also in the evening. Another worksheet contains the list of service users and visits required for the following week. Each row of data contains their name, postcode, day of visit, time of visit, visit duration, gender required (i.e. must be seen by a male/female), skill level required (i.e. any, certified) and the number of staff required for that visit (i.e. 1 or 2). Again, a service user may appear more than once on the same day, as they may require multiple visits at different times of the day (i.e. morning and evening). I have already created some VBA functions to deal with handling of postcodes and getting latitude/longitude co-ordinates, along with a function to calculate the distance between two points as the crow flies. At present the distinct values of postcodes with their co-ordinates are being stored on a helper sheet for the VBA to reference. What I am really looking for is guidance on how to even begin creating a procedure/algorithm which will produce a suggested rota/schedule; this could be a simple table with pairings between staff and visits. The process must always make suitable pairings based on: * Skill Level Required * Gender Required * Staff Availability Least important is that matches take into consideration the distance required to travel between visits. I believe this is referred to as a Genetic Algorithm? There may be instances where there are too many service users for the available staff, and vice versa. Making use of all staff isn't necessary provided all visits are dealt with; if all visits cannot be paired due to a lack of staff availability, then this needs to be listed as unmet. I can utilize SQL Server if need be, but ultimately the solution needs to be presented in Excel to the end user, as this is what they are most familiar with. I'm thinking the procedure/process would need to go something like this: 1. For each day, sort the visits by start time 2. For each visit, determine which staff members meet the gender and skill level requirements and are available on that day 3. For each of the staff members determined above, are they available at the time requested? 4. For each staff member that is available, calculate the distance from their current location to where they need to be 5. Rank them in order of preference by distance required to travel 6. Once this has been done for each day, come up with a suggested allocation of staff to visits using some kind of scoring method
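For steps 2-5, a minimal hedged sketch of the inner matching pass (the sheet names, column numbers and the StaffMatches/StaffFree helpers are hypothetical; Distance is the existing as-the-crow-flies function, assumed here to take two postcodes via the helper sheet):

' Returns the Staff row best suited to one visit, or 0 if the visit is unmet.
Function BestStaffRow(visitRow As Long) As Long
    Dim s As Long, best As Long
    Dim d As Double, bestDist As Double
    best = 0
    bestDist = 1E99
    With Worksheets(\"Staff\")
        For s = 2 To .Cells(.Rows.Count, 1).End(xlUp).Row
            ' Gender/skill match and availability are the hard constraints...
            If StaffMatches(s, visitRow) And StaffFree(s, visitRow) Then
                ' ...distance is only the tie-breaker (least important).
                d = Distance(.Cells(s, 4).Value, Worksheets(\"Visits\").Cells(visitRow, 2).Value)
                If d < bestDist Then
                    bestDist = d
                    best = s
                End If
            End If
        Next s
    End With
    BestStaffRow = best
End Function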
I'm no VBA expert and deal with SQL Server as my day job, but I am willing to try and come up with a suitable solution; I just need some guidance on how best to start."} {"_id": "126061", "title": "Advice/Approach for distilling homogenous code and building common code for a team", "text": "I work for the State of California. Our programming team, in my opinion, is not really a 'team' in that we usually work solo on projects throughout the application/system's complete life-cycle. The end result is a lot of developers 'reinventing the wheel'... writing their own data layers, even though the vast majority of us work on the same Oracle DB... writing their own security stuff... the list goes on. I can't change the mentality of my fellow employees, and I don't have any realistic ambitions in regards to changing our team process... but my goal is to get our team to work together a little more, at least to build common building-block pieces that we can all use for boilerplate functionality. The obvious benefits are: testing and support are much more maintainable when all our users are familiar with a common piece, time to production is less when you aren't writing the same repository someone else already did, and we can focus on providing better solutions to the unique problems our apps must solve... etc. I'm preaching to the choir, I'm sure. The trick is, the State does not like change, and neither do its employees. Managers often disregard new ideas simply because they like to avoid friction and would rather continue on as is. There are similar questions out there, but what I am looking for is advice on how any of you may have faced a similar situation, and any direction toward getting a 'grass roots' kind of effort going to have an easier time approaching management. EDIT: Just to clarify a few things: * The scope I'm looking for is within the IT shop of my State Agency. I'm not trying to coordinate across several departments. Gotta get people off the training wheels before asking them to ride motorcycles.
For example, the Users class defines properties such as Username, EmailAddress, Password. **Ortund.DBContext** This project uses Entity Framework and the Objects project to build the database. DbSets of the classes in the Objects project and various other components explicitly define the structure of the database. Obviously, given this structure, I would have the DBContext project referencing the Objects project. This means that, while somewhere in my head it makes sense to put all my methods and functions that the web app will use into the Objects project, I wouldn't actually be able to do anything because I won't have the DbContext. To reference the DBContext project would create a circular reference. So, in order for my application to interact with the database now, the only option I can see is to create a 4th project to act as the data layer, effectively becoming a \"middle-man\" between the web application and the other 2 projects. When I suggested this on IRC, someone answered with the following: > No, the application layer uses the database layer to provide persistence to > the domain layer. I don't understand what this means. Can someone help me to understand how best to structure this application?"} {"_id": "69215", "title": "What to plan before starting development on a project?", "text": "Say I've received the specs for a project from a client, and now it's time to start developing it. Normally, I just start with the first module (usually user registration) and then go from one module to the next. I only plan in my head, just before I'm about to start on a module, how it's going to work, but there's no planning before that. However, I think it would be better if I went over the specs and planned out how the system was going to work before I coded it, e.g. what are the main components, how they're going to interact, etc. I'm just not sure exactly what I should plan. To give a better idea of what I'm asking for, how should I a) Divide the project into components, b) Plan their interactions, e.g. should I do class diagrams, write unit tests, etc.? Any ideas?"} {"_id": "199085", "title": "Should the design take longer than code development?", "text": "I once heard that if you spend 90% of your time developing the design of your program, the coding part will only take a trivial 10% of the time. I have found a lot more success in spending about 30% of my time designing the architecture than when I used to spend 0% on design. As a result my programs seem to avoid unnecessary coupling and are now more flexible to change before any refactoring takes place. I usually finish the design phase once I have a clear idea of what all the objects should be named (I am really picky about naming objects) and what single responsibility each will have. Once I feel confident in my design and feel ready to start constructing the software, would there be any benefit to taking more time to keep re-thinking the design? To me it seems like it might be overkill. What's an effective design to code development time ratio, and should the design time be greater than the code development time?"} {"_id": "23604", "title": "How much time do you spend on design before coding?", "text": "In my experience, it is useful to spend a little while sketching plans for a project before getting into code. Such planning usually includes choosing frameworks/tools, writing requirements and expectations, and doing mockups. I usually only do this for serious projects though, not so much for one-off or short-lived attempts. 
I'd be interested to hear how much time you spend on planning/designing projects before starting to do the coding. Do you do it for every project, or just the \"serious\" ones?"} {"_id": "128911", "title": "Process of developing software?", "text": "> **Possible Duplicate:** > What to plan before starting development on a project? I was wondering, when you start developing software, what process should you generally go through to do it? By this I mean: after you get an idea, do you just sit down and start pushing out code directly, or is there some planning process that you should generally go through first? (i.e. flowcharts, diagrams etc.) Just wondering because I just had an idea and am about to embark on my first \"real\" application but have no idea where to begin. What would you suggest are the steps that should be taken to have a successful development cycle?"} {"_id": "115282", "title": "How much design to do first?", "text": "I have never worked with a professional software development team. As such, analyzing and thinking about each and every aspect of my software does not come naturally to me. Whenever I strike an idea that excites me I just start a new project in my IDE (after finishing the previous one) and I start designing the interface\u2014which to me means placing menubars, buttons and other components at their appropriate places\u2014and start coding for them. Recently I started a chess game in Java and I followed that pattern: first programmed the black-white grid, then registered a mouse listener etc., without thinking about the warriors/objects of the chess game. One thing that annoys me is that I have never been able to completely use the object-oriented features of object-oriented programming, though I understand them. I have the habit of taking on things as they come. This could be the reason for the poor design I end up with. I complete all the projects that I start, but since they are personal projects I have never felt the need to refactor the code. But I know that if I am asked to modify any function of my program I could be in a mess, because they are not properly designed (though they work as asked). What is the correct/better way to start to design software that is somewhat complicated, like a chess game?"} {"_id": "39499", "title": "What should be first - functionality or design?", "text": "I've started reading a book from the Head First series about OOP and Design. In the first chapter it is stated that I should worry about the design of my application only after the basic functionality is ready. Basic functionality being ready means you have something to show to your customer or boss. Do you think this is a correct approach? Shouldn't I think about design from the beginning? Couldn't it happen that I won't have time for making a good design after the functionality is ready, because I will have new high-priority requirements to implement? By 'design' I mean Object Oriented design, not GUI or something."} {"_id": "106284", "title": "Starting a hobby project", "text": "> **Possible Duplicate:** > What to plan before starting development on a project? I have been going through SE for quite a while but could not find a relevant answer for my question. The question is: I am planning to start a hobby project in my spare time using .NET and MVC3. I have everything set up on my box. But the biggest problem is I don't know where to begin. Shall I create documentation first? Start creating a web template? I am really eager to start this but need some guidance. 
Let me know if there are any ideas."} {"_id": "111496", "title": "Five or fewer tips to writing good JavaScript?", "text": "JavaScript has obviously become pretty indispensable; however, I'm still new to it, and I've found it's hard to fight the feeling that it seems such a mess and I don't want to deal with it right now. I'm much further in my understanding of other languages than I am with JavaScript, because I can't seem to get a handle on this fear. I get a feeling that, when I'm writing JavaScript, I'm trying to paint a portrait of Weimaraner puppies. It usually helps me to keep a handful of the most important directives in mind that I can ask myself for each move I make. (To my mind, a handful is five or less.) Can you list five (or fewer) questions specific to JavaScript I should ask myself for each move I make, when I'm coding JavaScript? What would they be? Update: to clarify, I'm not asking for five things to keep in mind when learning JavaScript; I'm asking for five questions to always ask myself going forward, that everyone should always ask. High-level questions like: \"Am I likely to repeat this somewhere else?\" or \"is this variable/function name specific enough (or too specific)?\" <== except these example questions are not peculiar to JavaScript. I'm looking for directives that are peculiar to JavaScript."} {"_id": "111497", "title": "Workflow design for multi-step edits on a webapp", "text": "I have a web app (ASP MVC2) where some forms can be accessed via multiple routes; initially, once a form was complete, a user was kicked back to a default page for that form rather than the page they entered it from. Now that I'm redesigning a sizable area I want to address this, and I want a solution that is easy to add in. I have a couple of ideas but I'm not sure which to go with; I do know that I want it to be \"invisible\" (i.e. not touching my URLs). So I'm thinking either: I could have hidden fields for the referrer page on the forms. Alternatively I could use TempData and have an attribute that handles checking and adding the referrer URL (this would likely include a string for each of the different pathways so that a user could have 2 different forms open and not have the referrers interfere). The problem with the form value is that it would require putting non-model-related fields in each of the views and would break if there are ever any GET requests in the workflow. It would also require manually handling this property in each view and action. The TempData+attribute approach would be a much neater way to apply this, but it's possible for power users who are doing many things at once to have conflicting referrers for the same forms. I'm leaning towards the latter approach as it's more elegant and easier to keep track of, as I don't see there being many of the edge cases where it gets overridden, but I'm worried about the user experience if it does happen. Is the trade-off worth it?"} {"_id": "152231", "title": "Why don't browsers support haml and sass?", "text": "The time needed to download a website would be significantly reduced, and parsing would also be easier, I think. Why aren't these languages adopted as a standard? Obviously they are better than raw HTML and CSS... Browsers are the only thing that keeps us from eliminating the intermediate HTML/CSS code."} {"_id": "212344", "title": "Is it bad practice to make an iterator that is aware of its own end", "text": "For some background on why I am asking this question, here is an example. 
In Python the function `chain` chains an arbitrary number of ranges together and makes them into one without making copies. Here is a link in case you don't understand it. I decided I would implement chain in C++ using variadic templates. As far as I can tell, the only way to make an iterator for chain that will successfully go to the next container is for each iterator to know about the end of the container (I thought of a sort of hack where, when `!=` is called against the end, it will know to go to the next container, but the first way seemed easier, safer and more versatile). My question is whether there is anything inherently wrong with an iterator knowing about its own end. My code is in C++, but this can be language agnostic since many languages have iterators.

#ifndef CHAIN_HPP
#define CHAIN_HPP
#include \"iterator_range.hpp\"
namespace iter {
    template <typename ... Containers>
    struct chain_iter;

    template <typename Container>
    struct chain_iter<Container> {
    private:
        using Iterator = decltype(((Container*)nullptr)->begin());
        Iterator begin;
        const Iterator end; // never really used but kept it for consistency
    public:
        chain_iter(Container & container, bool is_end=false) :
            begin(container.begin()), end(container.end()) {
            if (is_end) begin = container.end();
        }
        chain_iter & operator++() {
            ++begin;
            return *this;
        }
        auto operator*() -> decltype(*begin) {
            return *begin;
        }
        bool operator!=(const chain_iter & rhs) const {
            return this->begin != rhs.begin;
        }
    };

    template <typename Container, typename ... Containers>
    struct chain_iter<Container, Containers...> {
    private:
        using Iterator = decltype(((Container*)nullptr)->begin());
        Iterator begin;
        const Iterator end;
        bool end_reached = false;
        chain_iter<Containers...> next_iter;
    public:
        chain_iter(Container & container, Containers& ... rest, bool is_end=false) :
            begin(container.begin()),
            end(container.end()),
            next_iter(rest..., is_end) {
            if (is_end) begin = container.end();
        }
        chain_iter & operator++() {
            if (begin == end) { ++next_iter; }
            else { ++begin; }
            return *this;
        }
        auto operator*() -> decltype(*begin) {
            if (begin == end) { return *next_iter; }
            else { return *begin; }
        }
        bool operator!=(const chain_iter & rhs) const {
            if (begin == end) { return this->next_iter != rhs.next_iter; }
            else return this->begin != rhs.begin;
        }
    };

    template <typename ... Containers>
    iterator_range<chain_iter<Containers...>> chain(Containers& ... containers) {
        auto begin = chain_iter<Containers...>(containers...);
        auto end = chain_iter<Containers...>(containers..., true);
        return iterator_range<chain_iter<Containers...>>(begin, end);
    }
}
#endif // CHAIN_HPP"} {"_id": "126394", "title": "Does syntax matter for a (Lispy) Domain Specific Language (MELT, inside GCC)?", "text": "I am the main author and designer of MELT, a domain specific language to extend GCC (the Gnu Compiler Collection). The implementation is available as free software (GPLv3 licensed). If you want a detailed description, from the point of view of a Domain Specific Language, read my DSL2011 paper on MELT. I have chosen a Lisp-like syntax for _MELT_, so as in Lisp or in Scheme, every operation is written with _\"Lots of Insipid Stupid Parenthesis\"_, so the application of function `f` to arguments `a` and `b` is written `(f a b)` (like it is in Scheme or in Lisp). Of course, `(f)` -a function application without arguments- is not the same as `f` -a simple thing denoted by a variable-. _MELT_ shares with many other Lisps and with Scheme the usual \"control\" operators like `let if defun define cond lambda letrec list definstance` etc... _MELT_ does not use the common names for primitive operations (no `car` or `+` in _MELT_, which has `pair_head` `list_first` `+i` `+iv` with different meanings).
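For comparison with the C++ `chain` above, here is a minimal Python sketch (essentially what the standard `itertools.chain` does). Written as a generator, no iterator ever has to carry or compare against its own end:

def chain(*iterables):
    # Each inner loop stops when its iterable is exhausted, so the
    # 'am I at my end?' bookkeeping disappears entirely.
    for it in iterables:
        for element in it:
            yield element

print(list(chain(range(3), 'ab', [9])))  # [0, 1, 2, 'a', 'b', 9]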
And _MELT_ deals with both first-class values (like in Scheme) and \"stuff\" (for raw GCC data like _Gimple_; details are in the DSL2011 paper). The reasons I have chosen a Lisp-like syntax include: * first, I am lame, and wanted to have a quick implementation. Parsing Lisp is trivial; * I was (and am) able to use e.g. Emacs `lisp-mode` for _MELT_ without trouble. * A small _MELT_ implementation was originally prototyped in Common Lisp. * The current _MELT_ translator is bootstrapped, so it is written in _MELT_. * The implementation uses classical Lisp tricks: S-expressions are expanded by a macro mechanism (into a sort of internal abstract syntax tree), then normalized, and finally translated to C code. * GCC has several Lisp-like formalisms inside (notably for the back-end, for \"machine description\" files). * The GNU projects have Emacs Lisp and Guile as Lisp-like dialects (I was not able to use them for technical reasons detailed in my DSL2011 paper). My question is: _should I offer an alternative, infix-like syntax_ (a bit Pythonic or Lua-esque)? On one hand, I am afraid that some people (particularly young ones, who never met any Lisp-like programming languages; I had a course on Lisp in the 1980s) are completely allergic to Lisp and won't even try _MELT_ because of its look. On the other hand, the only thing I could do is some simple parser producing the same AST as _MELT_ has today, and the syntax will be ad-hoc (but infixed) and probably not pretty. Also, working on an alternative syntax which I won't use will distract me (or take time) from other efforts (writing documentation & tutorial, making good examples of _MELT_, debugging and improving the implementation). Some young persons (in particular Jeremie Salvucci and Pierre Vittet) have been able to learn and code _MELT_ without prior exposure to _MELT_, to Lisp dialects (or Scheme), or even to compilation. Would an alternative syntax attract people allergic to Lisp? A nice guy told me that syntax does not really matter for DSLs. They can be adopted if they bring some value, even with a not very sexy syntax."} {"_id": "151165", "title": "When module calling gets ugly", "text": "Has this ever happened to you? You've got a suite of well designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (95% of the code) simply taking output from one module and passing it as input to the next. Then, you notice this higher-level function has turned into a 100+ line script with multiple responsibilities. Here is the problem. It is difficult (impossible) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways. Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on knee-jerk answer: it already is! Best answer I can come up with so far: create higher-level connector objects which \"wire\" modules together in particular ways (take output from one module, feed it as input to another module). 
Thus if our script was: Foo.Input fooIn = new Foo.Input(1, 2); Foo.Output fooOutput = fooModule.operate(fooIn); Double runtimeValue = getSomething(fooOutput.whatever); Bar.Input barIn = new Bar.Input(runtimeValue, fooOutput.someOtherValue); Bar.Output barOut = barModule.operate(barIn); it would become, with a connector: FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule); Foo.Input fooIn = new Foo.Input(1, 2); Bar.Output barOut = fooBarConnector.operate(fooIn); So the advantage is, besides hiding some code and making things clearer, we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?"} {"_id": "28238", "title": "Is using ELSE bad programming?", "text": "I've often come across bugs that have been caused by using the `ELSE` construct. A prime example is something along the lines of: if (passwordCheck() == false){ displayMessage(); } else { letThemIn(); } To me this screams security problem. I know that passwordCheck is likely to be a boolean, but I wouldn't place my application's security on it. What would happen if it's a string, an int, etc.? I usually try to avoid using `ELSE`, and instead opt for two completely separate IF statements to test for what I expect. Anything else then either gets ignored OR is specifically handled. Surely this is a better way to prevent bugs / security issues entering your app. How do you guys do it?"} {"_id": "212349", "title": "What are common categories for Kanban and Scrum JIRA boards?", "text": "One minimal Kanban-like system has three places for cards: \"To do\", \"In progress\", \"Done\". From what I've seen of Scrum and Kanban on the web, it is maybe five or six categories. What are some of the common options for categories for Kanban and Scrum JIRA boards?"} {"_id": "151163", "title": "Web standards or risk avoidance?", "text": "My company is building an App Engine application. The app encounters a bug (possibly due to an issue with App Engine itself, as per our research) on IE9, but it cannot be reliably reproduced and is experienced by a small percentage of users. The workaround is to force IE9 to use IE8 mode. As a lazy front-end developer (who doesn't like CSS hacks, shims and polyfills) I think it's OK to at least try going back to IE9 mode and see what happens, while we're still in private beta. The senior engineer (being more pragmatic) would rather that we continue forcing IE9 users to use the older IE8 mode. Who is right?"} {"_id": "95986", "title": "HTML 4, XHTML and CSS 2 - How long until these are obsolete?", "text": "I got into the game of programming exactly a year ago. Now I am posing as an HTML, CSS and JavaScript guru. HTML5 and CSS3 are the new things out there, but in my opinion they are still experimental (some new websites remind me of the blink tag). I know HTML 4.x and XHTML were out there for a very long time and eventually became standards. But I wasn't around to see that transition from experimental to strict standards. So my question is: how long will it take for us to get there with these new versions of HTML, CSS and JavaScript? Most of the code I see around is accommodating older browsers, and I can fairly say that it is easier to write JavaScript code that works in IE5-8 than to write code that works with both IE9 and IE5 (let's take localStorage for this example). 
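A small Python sketch of the connector idea from the module-wiring question above. Foo, Bar and the derive step are illustrative stand-ins (the question's getsomething()), not real APIs; the point is that the wiring itself becomes a unit-testable object:

from dataclasses import dataclass

@dataclass
class FooOutput:
    whatever: float
    some_other_value: float

@dataclass
class BarInput:
    value: float
    extra: float

class Foo:
    def operate(self, x):
        return FooOutput(whatever=x * 2.0, some_other_value=1.0)

class Bar:
    def operate(self, bar_in):
        return bar_in.value + bar_in.extra

class FooBarConnector:
    # Owns only the glue: derive the next module's input and pass it along.
    def __init__(self, foo_module, bar_module, derive):
        self.foo, self.bar, self.derive = foo_module, bar_module, derive

    def operate(self, foo_input):
        foo_out = self.foo.operate(foo_input)
        runtime_value = self.derive(foo_out.whatever)  # the getsomething() step
        return self.bar.operate(BarInput(runtime_value, foo_out.some_other_value))

connector = FooBarConnector(Foo(), Bar(), derive=lambda w: w + 0.5)
print(connector.operate(3.0))  # 7.5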
So when are we going to get to the level where, if you want to make sure code works almost universally, you have to make sure it's at least HTML5 and CSS3 compliant?"} {"_id": "157341", "title": "Refactoring options - multiple methods in same class or into separate classes", "text": "We have some API which will be called by either client A, B, C or D. **Current code** doSomething(String client){ if (client.equals(\"A\")){ ... } else if (client.equals(\"B\")){ ... } **Proposed refactoring 1** Separate into multiple methods and have each client call the dedicated method, such as doSomethingForClientA() doSomethingForClientB() We need this because internally each method will call other private methods defined in the same class. **Proposed refactoring 2** Use the strategy (or template) pattern and separate into multiple classes for the clients to call; the method call from the client remains doSomething(). Which approach is better in the long run? Is there any design pattern to help with option 1? Or a 3rd option?"} {"_id": "95988", "title": "Centralized and decentralized logging mechanisms", "text": "I've worked on different kinds of applications having both centralized and decentralized logging mechanisms. I feel a centralized logging mechanism is good for seeing the proper flow of code and the timing of executions while debugging problems. Most of the implementations will have a small fixed file size for easily narrowing down the problem. But if the number of components within the system is too high, I feel the log directory will get exhausted with a huge number of files. Is it a good idea to employ decentralized (module-wise) logging mechanisms in such situations? What are the pros and cons of each?"} {"_id": "50985", "title": "Best practices for connecting from ASP.NET to SQL Server?", "text": "There are several different ways to connect to SQL Server from an ASP.NET application. I'm working on rebuilding an ASP.NET / SQL Server environment right now and I'm trying to figure out which method I should be going for. Here are the options as I see them: * Connect via a SQL Server ID that is stored in web.config. Pro: simple. Cons: password in web.config; have to specifically configure the SQL Server ID. * Connect via the user's NT ID via ASP.NET impersonation. Pro: no passwords in web.config; fine-grained control of security per user. Cons: administrative overhead of configuring user accounts in SQL Server; SQL Server monitoring of the application is scattered across many accounts. * Run ASP.NET as a custom NT ID, and have that NT ID configured in SQL Server. Pros: connecting to SQL Server as one ID - simple; no passwords in web.config. Cons: complicated from a security perspective. Have to configure custom SPNs in Active Directory for Kerberos authentication. Are there other options that I'm missing? Which of these options are used in which situations? Which are more standard? Are there pros and cons that I'm not thinking about? Note that my assumption is that users are authenticating with ASP.NET via integrated Windows authentication; this is for an intranet application."} {"_id": "51055", "title": "What would you say to a bunch of software engineering students on their first day at college?", "text": "Next Friday I'm giving a short (30 min.) talk to a bunch of software engineering students who will be attending the same university I did. 
Some context: * The place is Montevideo, Uruguay * The university is Universidad de la Rep\u00fablica (a public, free university) * The Software Engineering programme takes 5 years (if you're very good and don't start working early). Around 800 new students per year, around 80 graduates per year. Conditions are harsh, particularly in the first two years. Most of them probably have no idea what software engineering or programming is. My goal would be to somehow give them an idea of the field and hopefully motivate them to endure the hardships ahead to eventually become successful developers. So the question is: what would you tell these people?"} {"_id": "197869", "title": "Paid vs ads-based Windows Phone app", "text": "I am looking to understand, before I publish my application on the Windows Phone Marketplace, what the best way to monetize it is. I am looking at 3 options: * Free, ads-based (no paid version) * Time trial, no ads + paid full version * Free trial (not time limited), ads-based + paid full version, no ads DISCLAIMER: *All the ideas below assume that the app is actually good and people use it :) The goal is to make as much money as possible. I am not a big fan of ads-based applications, but it sounds to me that having a constant cash flow from ads can bring more money, long term, than getting a lump sum once. I mean, in order to get more money from a pay-only-once application, you need more customers, while the ads-based one only requires people to use the app; the ads-based model, long term, can generate more money with a smaller user base that is loyal. What do you experts think? Did others research/have experience with the actual amount of money that can be generated in each scenario?"} {"_id": "197868", "title": "Could we build a functional computer?", "text": "As much as FP has done, in the end, all our programs are structured. That is, it doesn't matter how pure or functional we make them - they are always translated to assembly, so what actually runs under the hood are instructions, states and loops. We are kind of emulating FP. As a hardware noob, my question is: why aren't we using computer architectures that actually compute things in a functional style? For example, a computer could consist of primitive \"functional chips\" such as \"concat\", \"map\" and \"reduce\", and a program would merely tell the computer how to flow the data between those chips in order to compute the desired result, such as in concatenative languages. ![nonsense sketch](http://i.stack.imgur.com/7RA0Q.png) This doesn't really make sense but might illustrate what I'm thinking."} {"_id": "93208", "title": "In Scrum, how to handle contention/workload at end of sprint", "text": "My team started using Scrum a few sprints ago. Our project involves building software interfacing with physical devices (think robots and sensors), and our typical Product Backlog items usually represent adding control of a device to the whole system. We split down the tasks close to the example here. Each device integration feature is split into code, tests, integration tests, peer review, etc. Obviously, there is a sequence inherent to each Product Backlog Item. Typically, our sprints last 2 weeks and the team has between 4 and 6 members. We run into 2 problems at the end of sprints: * The first is keeping everyone busy at the end of the sprint. * The second (related) one is contention on the system. We pretty much end up integrating during the last few days of the sprint. 
We only have one integration system, so people are often blocked from continuing to work on their task because they can't access the system. Since it is the end of the sprint, there is not much work left to do in the sprint backlog. What should these people work on? Picking up items from the top of the product backlog is not well received by the product owner, since the current items are not done. Working on technical debt will help the project as a whole but won't help complete the sprint. Are there any best practices for structuring sprints to avoid these issues? Tips for negotiating with product owners?"} {"_id": "93206", "title": "Is it possible to create and distribute an app for the BlackBerry Playbook that doesn't go into App World?", "text": "My company is looking to create an app that we'll use internally on several (about 20) BlackBerry Playbooks. We don't want it to be put up on App World because it's just an internal application. I'm wondering if there are any: * Costs involved with this outside of paying a programmer to develop it - i.e. are there any license fees, deployment fees, etc.? * License issues involved with deploying the app to multiple Playbooks without deploying it to App World * Limitations on functionality of the app * Other things we should be taking into consideration If it matters, the app will be collecting information and downloading it to a computer via USB."} {"_id": "165740", "title": "Are regular expressions a programming language?", "text": "In the academic sense, do regular expressions qualify as a programming language? The motivation for my curiosity is an SO question I just looked at which asked \"can regex do X?\", and it made me wonder what can be said in the generic sense about the possible solutions using them. I am basically asking, \"are regular expressions Turing complete\"?"} {"_id": "160510", "title": "If condition not true: default value or else clause?", "text": "I have searched Programmers and Stack Overflow and was not able to come up with a satisfying answer, even though I'm quite sure it must have been asked many times before. The only question I found has answers only dealing with readability. The code excerpts below are from actual C# code I was writing. * * * When dealing with a local variable that has some calculated value when a condition is true but a default value otherwise, do I: First set the value, then change the value when the condition is true? short version = 1; // <-- set the value here. if (separator != -1) { version = Int16.Parse(filename.Substring(separator + 1)); filename = filename.Substring(0, separator); } Or _not_ set the value, and add an else clause for when the condition is false? short version; if (separator != -1) { version = Int16.Parse(filename.Substring(separator + 1)); filename = filename.Substring(0, separator); } else version = 1; // <-- set the value here. Could a programmer have wrong expectations when confronted with either solution (e.g. when the if-clause is quite big and the else-clause is way down, out of view)? Do you have any experiences where the difference mattered? Is one particular style enforced at your company, and why? Are there any technical reasons why one would be preferred over the other? 
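The two C# variants just shown, restated in Python together with the conditional-expression form many languages offer as a third option. The '-' separator character is an assumed stand-in for however the question's separator index was found:

def split_version(filename, default=1):
    # Variant 1: assign the default first, overwrite when the condition holds.
    sep = filename.rfind('-')
    version = default
    if sep != -1:
        version = int(filename[sep + 1:])
        filename = filename[:sep]
    return filename, version

def split_version_expr(filename, default=1):
    # Variant 2 folded into one expression: both outcomes sit side by side.
    sep = filename.rfind('-')
    return (filename[:sep], int(filename[sep + 1:])) if sep != -1 else (filename, default)

print(split_version('report-3'))      # ('report', 3)
print(split_version_expr('report'))   # ('report', 1)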
The first one sets a variable twice, but I'm not sure whether this would matter in modern-day optimizing compilers."} {"_id": "93201", "title": "What is the term for IntelliSense in a non-Microsoft world?", "text": "When talking about IDE software, or about what a programming language allows you to do or not at the source level, I often use the word IntelliSense, which has a precise meaning in the Microsoft world but is inappropriate when talking to people who aren't necessarily familiar with Visual Studio. In this case, what is the appropriate term to use? I usually use the term \"auto-completion\", but it doesn't always work. In fact, IntelliSense includes auto-completion, but it also provides documentation and hints."} {"_id": "61322", "title": "How do I make money from my FOSS while staying anonymous?", "text": "Let's say that: 1. You have created a FOSS project that other people find useful, perhaps useful enough to donate to or pay for modifications to be done. 2. It is a perfectly legitimate and innocuous software project. It has nothing to do with cryptography as munitions, p2p music, or anything likely to lead to a search warrant or being sued. 3. You want your involvement to stay anonymous or pseudonymous. 4. You would like to receive some money for your efforts, if people are willing. Is that possible, and if so, how could it be done? When I talk about anonymity, I realize that it is necessary to define the extent. I am not talking about Wikileaks-style 20 layers of proxies worth of anonymity. I would expect a 3-letter agency to be able to identify the person easily. What is wanted is shielding from commercial competitors or random people, who would not be expected to be able to get the financial intermediary to divulge your details just by asking for them. Why would you want to stay anonymous? I can think of several valid reasons: maybe you operate a stealth-mode startup and don't want to give your competitors clues as to the technology you are using. Maybe it is a project that has nothing to do with your daily job, is not developed there, but the company you work for has an unfair (and possibly unenforceable) policy stating that any coding you do is owned by them. Maybe you just value your privacy. For what it's worth, you intend to pay the relevant taxes in your country on any donations."} {"_id": "182344", "title": "What are the Advantages of a \"Combined\" Getter/Setter VS Individual Methods?", "text": "This is what I call a \"combined\" getter/setter method (from jQuery): var foo = $(\"<div>This is my HTML</div>\"), myText; myText = foo.text(); // myText now equals \"This is my HTML\" (Getter) foo.text(\"This is a new value\"); // The text now equals \"This is a new value\" (Setter) This is the same logic with separate (theoretical) methods: var foo = $(\"<div>This is my HTML</div>\"), myText; myText = foo.getText(); // myText now equals \"This is my HTML\" (Getter) foo.setText(\"This is a new value\"); // The text now equals \"This is a new value\" (Setter) **My Question:** When designing a library like jQuery, why would you decide to go the first route and not the second? Isn't the second approach clearer and easier to understand at a glance?"} {"_id": "118174", "title": "How to store a simple DB \"in the cloud\"?", "text": "Even though the name is similar, my question is not a dupe of the very fine question here: Database in the cloud? I've got a webserver (Java/Tomcat) running a webapp (which I wrote) on a dedicated server (which I fully configured myself) and I'd like to have it now use a small persistent DB. I know how to install and configure a SQL DB like, say, PostgreSQL: I've already done it several times. But this time I'd like to do something different. The DB schema is very simple. So simple that a spreadsheet is enough to store the data (a few columns and lots of rows). It would only grow by about 2000 rows a day (and that requirement, given the data source, cannot change in the future). (*) I don't want to install PostgreSQL nor any other DB on my dedicated server: I don't want to bother with backup/configuration/re-installation if I migrate/update the server etc., so I'd like to keep my dedicated server as simple as possible and store the DB somewhere \"in the cloud\". By that I mean I'd like to have either Google or Amazon or someone else take care of the data (there's nothing confidential) and have an easy way to access it. Ideally it should be free. Here's what I'm thinking of at the moment: * every time new data comes in (from a fat client) I _a)_ update a Google Doc spreadsheet using the Java API they provide and _b)_ warn my Java webapp that new data did arrive (about once every 10 minutes) * every time the Java webapp is launched, and every time it gets notified that new data was in, it pulls data from the Google Doc spreadsheet using the Java API. The advantage is that: * it's free * the Google Doc spreadsheet limit of 400 000 cells per document would allow me to have one spreadsheet per week (for example) * the DB is stored in Google's cloud, making it highly unlikely that I'd lose it * I can easily visualize my data from wherever I'd be: I can simply log on to Google Docs and open the spreadsheet And of course I can still do my own daily or weekly backup of the data (offline onsite and online onsite for example). I don't see any particularly hard technical issues here, but I'm wondering what the other options would be. How would you go about storing a very simple DB \"in the cloud\"? _(*) Should the requirement change later on, I could always move on to a \"real\" DB that I'd install on my server, but as of now I'm really after having my data in the cloud_"} {"_id": "118175", "title": "On the path to Enlightenment: Scheme, Common Lisp, Clojure?", "text": "A lot of people smarter than me keep writing about how learning Lisp makes you a better programmer because you \"get it\". Maybe all I hear about Lisp(s) changing your life is just a big practical joke on the newbies, but I figure there's no harm in knowing more about the world, even if I find out I've been sent after a snipe or something. I'd like to follow the SICP book and/or ANSI Common Lisp, but at the same time be studying a dialect and implementation that I could go on to use on personal projects. SICP is focused on Scheme, so that's one big vote. 
Paul Graham said that if he were to teach newbies he'd do it in Scheme, but it sounded like Scheme was still inferior to Common Lisp. But then there's Clojure-- which I'm told is limited in some ways, but more practical in others (JVM libraries). It sounds like I could get through Scheme materials more easily, achieve \"real\" enlightenment from CL, or come close enough with Clojure and be able to get more done with it in the long run. How much of all of that is true? When should I stop thinking about what to learn and just go and learn it?"} {"_id": "126936", "title": "What's an Elegant and OOP way to create a tree from arrays and render it as a nested UL", "text": "I have a series of arrays which represent file system paths, so each next value is actually a directory deeper, for example: var a1 = [\"Desktop\", \"Pictures\", \"Summer 2011\"]; is the equivalent of Desktop |-Pictures |-Summer 2011 I'm trying to find an elegant way to: 1. Flatten/merge all the different arrays I have to come up with one object/dictionary/multi-dimensional array. 2. Parse the result and render a nested **`UL`** (HTML list) to the page which represents the hierarchy correctly. 3. Write what I already have in a more 'OOP way'. I already have a working version:
Thanks! EDIT: I'm starting from strings which represent the paths, i.e. `\"Desktop/Pictures/Summer 2011\"`, which are broken into arrays."} {"_id": "110362", "title": "Is JSP a good alternative to PHP", "text": "I'm wondering: is JSP a safe alternative to PHP? Some things I'm concerned about are CPU usage, memory usage, and security."} {"_id": "110365", "title": "Getting started with repositories: are they what I need, or are there any alternatives?", "text": "I'm a solo iOS developer - mostly self-taught, but I have made several successful apps so far and am potentially starting some slightly bigger projects. ## What I want to do: As I'm working on some bigger and more complex apps now, there are some things that I want to do - which, from what I gather, repositories are what I should be looking at. Basically, I want: * Online backup of all my code (for now, Dropbox works fine) * To work on new features of apps, while somehow being able to update/bugfix the older version and have the changes applied to both versions So, are repositories what I should be looking into? From the brief look I've had, it does look like a bit of a learning curve, with the downside that I could only do work when an internet connection is available (sometimes not the case). ## Is there: any alternative to repositories that would achieve the 'branching'/new features - e.g. is there any way to already achieve this using Xcode and not relying on some server? This would be the easier option in my case. Alternatively, if there isn't: What repository system would be best for me, and easiest to use (and probably free!)?"} {"_id": "163965", "title": "Overloading interface buttons, what are the best practices?", "text": "Imagine you'll always have a button labeled \"Continue\" in the same position in your app's GUI. Would you rather make a single button instance that takes different actions depending on the current state? private State currentState = State.Step1; private void ContinueButton_Click() { switch(currentState) { case State.Step1: DoThis(); currentState = State.Step2; break; case State.Step2: DoThat(); break; } } Or would you rather have something like this? public Form() { this.ContinueStep2Button.Visible = false; } private void ContinueStep1Button_Click() { DoThis(); this.ContinueStep1Button.Visible = false; this.ContinueStep2Button.Visible = true; } private void ContinueStep2Button_Click() { DoThat(); }"} {"_id": "253705", "title": "JS closures - Passing a function to a child, how should the shared object be accessed", "text": "I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title. * `Term` is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes) * `Term` has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. * A container (let's call it `Box`) has a `Schedule` object that can understand and use `Term` objects. * `Box` creates `Term` objects and passes them to `Schedule` as required. `Box` also defines the `print` functions stored in `Term`. * A `print` function usually takes an argument and uses it to return a string based on that argument and `Term`'s internal data. Sometimes the `print` function could also use data stored in `Schedule`, though. 
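For the tree-from-paths question above, a compact Python sketch of the same idea (an illustration, not the asker's own version): merge the path arrays into nested dictionaries, then render recursively. The exact UL output shape is an assumption:

def build_tree(paths):
    # paths: list of lists like ['Desktop', 'Pictures', 'Summer 2011']
    root = {}
    for path in paths:
        node = root
        for part in path:
            node = node.setdefault(part, {})  # shared prefixes merge automatically
    return root

def render_ul(tree):
    if not tree:
        return ''
    items = ''.join('<li>%s%s</li>' % (name, render_ul(children))
                    for name, children in tree.items())
    return '<ul>%s</ul>' % items

tree = build_tree([['Desktop', 'Pictures', 'Summer 2011'],
                   ['Desktop', 'Music']])
print(render_ul(tree))
# <ul><li>Desktop<ul><li>Pictures<ul><li>Summer 2011</li></ul></li><li>Music</li></ul></li></ul>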
I'm calling this data `shared`. So, the question is: what is the best way to access this `shared` data? I have a lot of options, since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options: 1. Create a local \"reference\" (term used lightly) to the `shared` data (the data is not a primitive) when defining the `print` function, by accessing the `shared` data through `Schedule` from `Box`. Example: var schedule = function(){ var sched = Schedule(); var t1 = Term( function(x){ // Term.print() return (x + sched.data).format(); }); }; 2. Bind it to `Term` explicitly. (Pass it in `Term`'s constructor or something.) Or bind it in `Sched` after `Box` passes it. And then access it as an attribute of `Term`. 3. Pass it in at the same time `x` is passed to the print function (from sched). This is the most familiar way for me, but it doesn't feel right given JS's closure ability. 4. Do something weird like `bind` some context and arguments to `print`. I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just \"do whatever works\". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example. **Edit** I'll post the solution I'm using, but I'd still welcome criticism: All `print` functions take, as arguments, anything `term` doesn't own. This way, `term` is not coupled to `schedule` in any way (obviously `schedule` is still dependent on `term`, though). This allows `term` to be initialized/constructed anywhere without needing knowledge of schedule. So, if `term` had an `init()` function it might take an object that looks something like this: { inc: moment.duration(1,\"d\"), periods: 3, class: \"long\", text:\"Weekly\", pRange: moment.duration(7,'d'), //*...other attr*// printInc: function(increments,period){ return moment(this.start).add(this.inc.product(increments) .add(this.startGap)) .add(this.pRange.product(period)) .format(DATEDISPLAYFORMAT); }, printLabel: function(datetime){ return (datetime).format(DATEDISPLAYFORMAT); } } Where increment, period and datetime would all be passed in from whatever is using term's print methods (schedule in this case)."} {"_id": "253704", "title": "When is type testing OK?", "text": "Assuming a language with some inherent type safety (e.g., not JavaScript): Given a method that accepts a `SuperType`, we know that in most cases wherein we might be tempted to perform type testing to pick an action: public void DoSomethingTo(SuperType o) { if (o isa SubTypeA) { o.doSomethingA(); } else { o.doSomethingB(); } } we should usually, if not always, create a single, overridable method on the `SuperType` and do this: public void DoSomethingTo(SuperType o) { o.doSomething(); } ... wherein each subtype is given its own `doSomething()` implementation. The rest of our application can then be appropriately ignorant of whether any given `SuperType` is really a `SubTypeA` or a `SubTypeB`. Wonderful. But we're still given `is a`-like operations in most, if not all, type-safe languages. And that suggests a potential need for explicit type testing. **So, in what situations, if any, _should_ we or _must_ we perform explicit type testing?** Forgive my absent-mindedness or lack of creativity. I know I've done it before; but it was honestly so long ago I can't remember if what I did was good! 
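An illustration in Python of the contrast just described: polymorphic dispatch replaces the type test in the common case, while an explicit check survives at boundaries such as deserialized input. The class names are the question's own placeholders:

class SuperType:
    def do_something(self):
        raise NotImplementedError

class SubTypeA(SuperType):
    def do_something(self):
        return 'A-specific work'

class SubTypeB(SuperType):
    def do_something(self):
        return 'B-specific work'

def do_something_to(o):
    return o.do_something()  # no isinstance() needed; the subtype decides

def handle(value):
    # A boundary case where an explicit test is arguably justified.
    if isinstance(value, SuperType):
        return do_something_to(value)
    raise TypeError('expected SuperType, got %s' % type(value).__name__)

print(do_something_to(SubTypeA()))  # A-specific work
print(handle(SubTypeB()))           # B-specific work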
And in recent memory, I don't think I've encountered a _need_ to test types outside my cowboy JavaScript."} {"_id": "163960", "title": "MVC, when to separate controllers?", "text": "I'm starting with MVC and have a newbie question. What would be the logical criteria for defining what a controller should encompass? For example, say a website has a 'help' section. In there, there are several options like: 'about us', 'return instructions', 'contact us', 'employment opportunities'. Each would then be accessed like 'mysite.com/help/aboutus', 'mysite.com/help/returns', 'mysite.com/help/contactus', etc. My question is: should I have a 'help' controller that has 'about us', 'returns', 'contact us', 'employment' as actions, each with its respective view, or should each of those be a different controller-action-view set? What should be the line of reasoning to determine when to separate controllers?"} {"_id": "195643", "title": "How to implement better security in Linux?", "text": "I'm just investigating the security and control of the Linux platform in comparison to Android. In Android there seems to be a huge development around security - applications are required to ask for system permissions, and if the user grants that permission, then the system allows that application to execute with those granted privileges. It isn't like that on vanilla Linux. Applications can access anything they want, albeit without being granted the right to modify files, but nevertheless. Users simply don't know how applications work, and what information - sensitive information - they take and what they do with that information (upload it to a database and sell it to 3rd parties). So how is this dealt with? I'd imagine the Linux kernel would have to be modified so it accepts access tokens on a per-application basis or something similar. Windows at least has some type of security system with its built-in firewall and local authority service. (I know little about Windows.)"} {"_id": "195642", "title": "Why are semicolons and commas interchanged in for loops?", "text": "In many languages (a wide list, from C to JavaScript): * commas `,` separate arguments (e.g. `func(a, b, c)`), while * semicolons `;` separate sequential instructions (e.g. `instruction1; instruction2; instruction3`). So why is this mapping reversed in the same languages for **for loops** : for ( init1, init2; condition; inc1, inc2 ) { instruction1; instruction2; } instead of (what seems more natural to me) for ( init1; init2, condition, inc1; inc2 ) { instruction1; instruction2; } ? Sure, `for` is (usually) not a function, but its arguments (i.e. `init`, `condition`, `increment`) behave more like arguments of a function than a sequence of instructions. Is it due to historical reasons / a convention, or is there a good rationale for the interchange of `,` and `;` in loops?"} {"_id": "220684", "title": "Cohen\u2013Sutherland algorithm", "text": "I was reading about the Cohen\u2013Sutherland algorithm on Wikipedia and I am confused by these two parts: int ComputeOutCode(double x, double y) { ... if (x < xmin) code |= LEFT; else if (x > xmax) code |= RIGHT; if (y < ymin) code |= BOTTOM; else if (y > ymax) code |= TOP; ... 
} and if (outcodeOut & TOP) { // point is above the clip rectangle x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax; } else if (outcodeOut & BOTTOM) { // point is below the clip rectangle x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin; } else if (outcodeOut & RIGHT) { // point is to the right of clip rectangle y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax; } else if (outcodeOut & LEFT) { // point is to the left of clip rectangle y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin; } What I am confused about is: how does it handle points that are top-left, top-right, etc.? It just handles top, left, right and bottom. Why use `|` in `code |= LEFT;`, and `outcodeOut & TOP` in the `if-else` below it? I know that these two somehow handle all 8 possible cases but I don't know how."} {"_id": "220687", "title": "How to best protect from 0 passed to std::string parameters?", "text": "I have just realized something disturbing. Every time I have written a method that accepts a `std::string` as a parameter, I have opened myself up to undefined behaviour. For example, this... void myMethod(const std::string& s) { /* Do something with s. */ } ...can be called like this... char* s = 0; myMethod(s); ...and there's nothing I can do to prevent it (that I am aware of). So my question is: How does someone defend themselves from this? The only approach that comes to mind is to always write two versions of any method that accepts an `std::string` as a parameter, like this: void myMethod(const std::string& s) { /* Do something. */ } void myMethod(char* s) { if (s == 0) { throw std::exception(\"Null passed.\"); } else { myMethod(string(s)); } } Is this a common and/or acceptable solution? **EDIT:** Some have pointed out that I should accept `const std::string& s` instead of `std::string s` as a parameter. I agree. I modified the post. I don't think that changes the answer though."} {"_id": "96186", "title": "How would you approach teaching web development to teenagers?", "text": "I have been tasked to build an 8-hour program of work to teach teenagers (12-15) the basics of web development. I am at a loss as to where to start on such a huge subject. I'm not even sure what the target should be. Any tips would be great."} {"_id": "243154", "title": "C++ strongly typed typedef", "text": "I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong Unfortunately, suggestions I've found for strongly typed typedefs include using Boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. 
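Back to the Cohen\u2013Sutherland question above, a runnable Python sketch of the outcode step: `|=` accumulates one bit per violated boundary, so a corner region simply carries two bits at once, and `outcode & TOP` tests a single bit:

INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8  # one bit per half-plane

def compute_outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:
        code |= LEFT      # set the LEFT bit without clearing any other bit
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

code = compute_outcode(-1, 11, 0, 0, 10, 10)  # a point above AND left of the window
print(code == LEFT | TOP)  # True: 1 | 8 == 9, both bits set
print(bool(code & TOP))    # True: the TOP branch fires first and clips against y = ymax;
                           # the algorithm's loop then re-tests the clipped point

That re-testing loop is how the eight outside regions reduce to four clip cases: a corner point is clipped against one boundary per iteration.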
The template parameter isn't used for anything in the definition, however: template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==( IDType const& i_lhs, IDType const& i_rhs ); }; Friend functions actually need to be forward-declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping to hear comments or critiques about this idea. One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do: typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position. class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it."} {"_id": "245276", "title": "How do I differentiate between old and new data in backbone collections?", "text": "A common pattern I come across is a backbone collection which is initially seeded from a database. However, the user can also add to the collection. When the user does add to the collection, the addition should be reflected in the db. Usually I would bind some kind of server call to the collection `add` event. However, if I do that, the call will be made even during the initial seed. I only want it to be called on _new_ data. What's the right way to tackle this problem?"} {"_id": "214427", "title": "business logic: client-side vs. server side", "text": "Let's say 3-5 years ago (more or less) an n-tier application on the server side - with some JavaScript/HTML/CSS for the UI - was the basic approach for web development. Nowadays we can see that the traditional web development paradigm is changing a lot. Each day I see more and more applications that do not have a server side in the traditional way. They just consume some services (data-service, auth-service, etc.), but the business logic is placed on the client side. A lot of JavaScript frameworks have also already been created to simplify development according to such a model (Angular, Backbone, etc.)
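The same tag-type idea from the C++ question above, sketched with Python's typing.NewType for comparison; note that here it is a static checker such as mypy, not the runtime, that catches the mixed-ID bug:

from typing import NewType

EntityID = NewType('EntityID', int)
ModelID = NewType('ModelID', int)

def load_entity(eid: EntityID) -> None:
    pass

e = EntityID(7)
m = ModelID(7)
# load_entity(m)  # rejected by mypy: ModelID is not EntityID
print(e == m)     # True at runtime: NewType erases to a plain int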
What are the main benefits and disadvantages of the new model versus the traditional approach?"} {"_id": "82099", "title": "What do I do when my team leader is breaking my database schema with a release coming up?", "text": "My team leader has this terrible habit of mucking with the database schema, and making changes that would cause severe breakage on the code base (without really consulting me on how the changes would affect the code base). Normally I would just live with it, but we have a deadline in 2 weeks and this has been happening ever since I started 1 and a half months ago. I was brought on to speed up the development of the project. Due to the deadline I am already putting in 60+ hours a week, and don't really have energy left to deal with this (I have tried in some ways already). We are only a 2-man team, and besides changing the database on a daily basis, he has not contributed much in the sense of actual development (coding). Currently, I am feeling like I am doing all the work, plus having to 'fix' what he breaks with his changes. How does one deal with this? I have already spoken to our manager about his lack of effort in the development department. He has been there 6 months longer than I have, but I have written 95% of the code when you exclude the 5th-normal-form database monstrosity he 'contributed'. Any suggestions? **Post-mortem:** On Friday we had a discussion with the manager, and I made my worries known. This led to a bit of confrontation, but overall I felt the manager was siding with me. So at least we have our data freeze in place now; let's see how it goes from here."} {"_id": "214425", "title": "Why does Clojure neglect the uniform access principle?", "text": "My background is Ruby, C#, JavaScript and Java. And now I'm learning Clojure. What makes me feel uncomfortable about the latter is that idiomatic Clojure seems to neglect the Uniform Access Principle (wiki, c2), and thus to a certain degree encapsulation as well, by suggesting the use of maps instead of some sort of \"structures\" or \"classes\". It feels like a step back. So a couple of questions, if anyone is informed: * Which other design decisions/concerns did it conflict with, and why was it considered less important? * Did you have the same concern as well, and how did it end up when you switched from a language supporting UAP by default (Ruby, Eiffel, Python, C#) to Clojure?"} {"_id": "214429", "title": "How can I integrate local web development environments with a central SSO solution?", "text": "We have a single-page web application, and we have a new SSO site (also our own) using OAuth2, and are looking to hook them up. On our production/staging/CI deployments, it's easy to hook everything up. For instance: * on production we'll have `https://app.company.com` access our prod backend and point to `https://sso.company.com` for auth, and vice versa * on CI we'll have `https://app.ci.company.com` access our dev backend and point to `https://sso.ci.company.com` for auth. When we're doing development on the app, we access it locally at something like `http://localhost:8000/`, and it points to the shared dev backend. In the past, we've had it just do its own auth against the dev backend (getting the end user's credentials and submitting them), but we'd like to get it wired to use SSO to auth. The question comes up of how we can do this without every developer who wants to develop locally needing to set up and custom-configure their own SSO service. Specifically, can we use the shared dev/CI SSO and have it point to our local deployments? 
**What we've considered:** * Use a proxy file/hosts file and have `http(s)://sso.ci.company.com` point to `http(s)://localhost:8000`. This burdens us with setting up HTTPS locally or having the SSO redirect to a non-HTTPS URL. Also, the setup is a bit of a pain. * Have our local app redirect to the SSO with an argument identifying itself, and have the SSO use that to know where to redirect. For instance, redirect to `https://sso.ci.company.com/?appUrl=localhost:8000`. This forces us to punch an open-ended redirect into the SSO that we'd have to turn off for production, and it is a bit \"weird\", but it works and can be black-boxed into the app. * Just run the SSO locally on every dev box. This is basically the non-solution since it would require a lot of setup to get to a \"one box that can run everything\" script, and it negates one of our valuable advantages at the moment in that basically all you need to do is check out the code on any system and have it running. I've seen this done before commonly though (often on top of dev VMs); I would just like to explore options that we are potentially closer to. Is there a solution to this sort of problem, or aspects of our ideas we haven't considered yet?"} {"_id": "215775", "title": "I want a trivial example of where MongoDB can scale but a relational database will have trouble", "text": "I'm just learning to use MongoDB, and when discussing with other programmers would like a quick example of why NoSQL can be a good choice compared to a traditional RDBMS - however the scenarios I come up with and can find online seem pretty contrived. E.g. a blog with lots of traffic could be represented relationally, but will require some performance tuning and joins across tables (assuming full normalization is being used). Whereas MongoDB would allow direct retrieval from one collection to the same effect. But the response I'm getting from other programmers is \"why not just keep it relational and then add some trivial caching later?\" Does anybody have a less contrived example where MongoDB will really shine and a relational db will fall over much quicker? The smaller the project/system the better, because it leaves less room for disagreement. Something along the lines of the complexity of the blog example would be really useful. Thanks."} {"_id": "251703", "title": "Structuring Resource Files", "text": "In .NET we've got Resource files which are great for providing translations across your application. In the past I've seen these resource files grow into monolithic unmaintainable lists of words and phrases. What is the best way to avoid this? * A single resource file for your entire application? It's easy to translate and has very few duplicates, but grows to an incredible size. * A resource file per project. This has the benefit of keeping resources and code together and breaks down the file, but creates a much higher chance of duplicates between the files. * Many small files. For example a resource file per project in the BLL/DAL and one per Area/View in the UI (let's assume MVC). Having struggled to maintain massive resource files in the past I'm tempted by the third option: to create a single file per controller/logical separation at the UI level. Although this may cost more in translation (and lead to more duplicates) it will be much easier to maintain. 
Is there another or recommended approach to handling resource files in larger applications?"} {"_id": "53274", "title": "\"// ...\" comments at end of code block after } - good or bad?", "text": "I've often seen such comments being used: function foo() { ... } // foo while (...) { ... } // while if (...) { ... } // if and sometimes even as far as if (condition) { ... } // if (condition) I've never understood this practice and thus never applied it. If your code is so long that you need to know what this ending `}` is then perhaps you should consider splitting it up into separate functions. Also, most developer tools are able to jump to the matching bracket. And finally the last is, for me, a clear violation of the DRY principle; if you change the condition you would have to remember to change the comment as well (or else it could get messy for the maintainer, or even for you). So why do people use this? Should we use it, or is it bad practice?"} {"_id": "116276", "title": "Deploying Web Applications", "text": "**Background** So I have developed an order system and order tracking for an organisation. Currently it is web based with plans to develop a mobile application and a desktop application. The business model is an exclusive membership where you sign up to be able to distribute their products at a cheaper price than other competitors. EDIT: To clarify, it's currently being developed using the Yii Framework for PHP, but I have a basic stripped down version in PHP using no framework. It's still in a development environment, no code is live yet. **Question** What is (in your opinion or the industry's opinion) the most effective way to distribute this application to the members? Possibly in each of the stages of development (such as how to distribute a web app, desktop app or mobile app)."} {"_id": "166218", "title": "How do you write a linearithmic algorithm?", "text": "> Write pseudocode to determine the number of pairs of values in an input file > that are equal. If your first try is quadratic, think again and develop a > linearithmic solution. I found this question in a textbook and I'm not sure how to write this algorithm. My guess at first was to write it along the formula nC2 = (n(n-1))/2. Any help is appreciated."} {"_id": "166219", "title": "JavaFX 2.0 vs Qt for cross platform stand-alone application", "text": "I need a bit of advice from you developers who deal with cross-platform applications (specifically programs with a GUI). I will be creating an application soon that needs to be cross-platform and so I have done some preliminary research on two different frameworks: JavaFX 2.0 and Qt. Honestly, both would more than suit my needs. So then I asked myself why I would choose one over the other (SPOILER ALERT: I don't know the answer :P ). I do know that JavaFX 2.0 is rather new (as of 2012) and is not fully supported across platforms, but it will be eventually. The question I pose is this: which one of these would you use for a cross-platform application, and what criteria did you look at when making that decision? Thank you for taking the time to read this! :)"} {"_id": "56206", "title": "Getting into Medical Development", "text": "I'd like to start coding for medical techs such as 3D Echocardiograms and other imaging and sensor-based technologies. 
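For the pair-counting question (166218) above, a minimal sketch of the linearithmic approach it asks for: sort the values first (O(n log n)), then scan once and, for every run of k equal values, add k(k-1)/2 pairs, which is exactly the nC2 count for that run. The class and method names below are illustrative, not from the question:

```java
import java.util.Arrays;

public class PairCounter {

    // Counts pairs (i, j) with i < j and values[i] == values[j].
    // Sorting dominates the cost, so the whole method is O(n log n).
    public static long countEqualPairs(int[] values) {
        int[] sorted = values.clone();
        Arrays.sort(sorted);
        long pairs = 0;
        int runLength = 1;
        for (int i = 1; i <= sorted.length; i++) {
            if (i < sorted.length && sorted[i] == sorted[i - 1]) {
                runLength++;
            } else {
                // A run of k equal values contributes k*(k-1)/2 pairs.
                pairs += (long) runLength * (runLength - 1) / 2;
                runLength = 1;
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // The run {3, 3, 3} contributes 3 pairs; 1 and 7 contribute none.
        System.out.println(countEqualPairs(new int[] {1, 3, 3, 3, 7})); // 3
    }
}
```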
I'm fine with the coding and design aspect, but are there any resources on finding hospitals or organizations looking for this tech, or do you just have to send out letter after letter to the business office of various hospitals?"} {"_id": "56207", "title": "Hosting own Git service?", "text": "I was wondering if it would be possible to install a git service for use by a small team. Would it be possible to install it on a private network/locally, or would it be more practical to install it over a web network (e.g. a website domain)? Thanks, and please point me in the right direction, ~Daniel"} {"_id": "166212", "title": "Generic Adjacency List Graph implementation", "text": "I am trying to come up with a decent Adjacency List graph implementation so I can start tooling around with all kinds of graph problems and algorithms like traveling salesman and other problems... But I can't seem to come up with a decent implementation. This is probably because I am trying to dust the cobwebs off my data structures class. But what I have so far... and this is implemented in Java... is basically an edgeNode class that has a generic type and a weight - in the event the graph is indeed weighted. public class edgeNode<E> { private E y; private int weight; //... getters and setters as well as constructors... } I have a graph class that has a list of edges, a value for the number of vertices, an int value for edges, as well as a boolean value for whether or not it is directed. That brings up my first question: if the graph is indeed directed, shouldn't I have a value in my edgeNode class? Or would I just need to add another vertex to my LinkedList? That would imply that a directed graph is 2X as big as an undirected graph, wouldn't it? public class graph<E> { private List<edgeNode<E>> edges; private int nVertices; private int nEdges; private boolean directed; //... getters and setters as well as constructors... } Finally, does anybody have a standard way of initializing their graph? I was thinking of reading in a pipe-delimited file but that is so 1997. public graph<String> GenerateGraph(boolean directed, String file){ List<edgeNode<String>> edges; graph<String> g; try{ int count = 0; String line; FileReader input = new FileReader(\"C:\\\\Users\\\\derekww\\\\Documents\\\\JavaEE Projects\\\\graphFile\"); BufferedReader bufRead = new BufferedReader(input); line = bufRead.readLine(); count++; edges = new ArrayList<edgeNode<String>>(); while(line != null){ line = bufRead.readLine(); Object edgeInfo = line.split(\"|\")[0]; int weight = Integer.parseInt(line.split(\"|\")[1]); edgeNode<String> e = new edgeNode<String>((String) edgeInfo, weight); edges.add(e); } return g; } catch(Exception e){ return null; } } I guess when I am adding edges, if the boolean is true I would be adding a second edge. So far, this all depends on the file I write. So if I wrote a file with the following vertices and weights... Buffalo | 18 Pittsburgh | 20 New York | 15 D.C | 45 I would obviously load them into my list of edges, but how can I represent one vertex connected to the other... and so on... I would need the opposite vertices? Say I was representing highways connected to each city, weighted and un-directed (each edge is bi-directional with weights in some fictional distance unit)... Would my implementation be the best way to do that? I found this tutorial online Graph Tutorial that has a connector object. This appears to me to be a collection of vertices pointing to each other. So you would have A and B each with their weights and so on, and you would add this to a list and this list of connectors to your graph... 
That strikes me as somewhat cumbersome and a little dismissive of the adjacency list concept. Am I wrong, and is that a novel solution? This is all inspired by Steve Skiena's Algorithm Design Manual, which I have to say is pretty good so far. Thanks for any help you can provide."} {"_id": "166213", "title": "How do I know if a particular build has a particular version control change in it?", "text": "Let's say I have a build. I need to know if a particular changelist/commit is present in that build. How would I solve this problem? I can think of a couple of possible approaches: 1) Add the changelist number into the binary so that I can look somewhere in the GUI and know what the changelist number is. I can then use this information to determine if the change I'm interested in is within that build. 2) Tag version control using some string that uniquely identifies that build. What unique string would I use? Is either of these two better? Are there any other better approaches? The solution would have to work for both Mac and Windows builds."} {"_id": "193526", "title": "Is it a bad habit to (over)use reflection?", "text": "Is it a good practice to use reflection if it greatly reduces the quantity of boilerplate code? Basically there is a trade-off between performance and maybe readability on one side and abstraction/automation/reduction of boilerplate code on the other side. Edit: Here is an example of a recommended use of reflection. To give an example, suppose there is an abstract class `Base` which has 10 fields and has 3 subclasses `SubclassA`, `SubclassB` and `SubclassC`, each with 10 different fields; they are all simple beans. The problem is that you get two `Base` type references and you want to see if their corresponding objects are of the same (sub)type and are equal. As solutions, there is the raw approach, in which you first check whether the types are equal and then check all fields; or you can use reflection and dynamically see if they are of the same type, iterate over all methods that start with \"get\" (convention over configuration), call them on both objects and call equals on the results. 
boolean compare(Base base1, Base base2) { if (base1 instanceof SubclassA && base2 instanceof SubclassA) { SubclassA subclassA1 = (SubclassA) base1; SubclassA subclassA2 = (SubclassA) base2; return compare(subclassA1, subclassA2); } else if (base1 instanceof SubclassB && base2 instanceof SubclassB) { //the same } //boilerplate } boolean compare(SubclassA subA1, SubclassA subA2) { if (!subA1.getField1().equals(subA2.getField1())) { return false; } if (!subA1.getField2().equals(subA2.getField2())) { return false; } //boilerplate } boolean compare(SubclassB subB1, SubclassB subB2) { //boilerplate } //boilerplate //alternative with reflection boolean compare(Base base1, Base base2) throws Exception { if (!base1.getClass().isAssignableFrom(base2.getClass())) { System.out.println(\"not same\"); System.exit(1); } Method[] methods = base1.getClass().getMethods(); boolean isOk = true; for (Method method : methods) { final String methodName = method.getName(); if (methodName.startsWith(\"get\")) { Object object1 = method.invoke(base1); Object object2 = method.invoke(base2); if(object1 == null || object2 == null) { continue; } if (!object1.equals(object2)) { System.out.println(\"not equals because \" + object1 + \" not equal with \" + object2); isOk = false; } } } if (isOk) { System.out.println(\"is OK\"); } return isOk; }"} {"_id": "82545", "title": "Entity Framework book for beginners", "text": "I picked up Julia Lerman's 1st edition book: Entity Framework book 1st Edition I started reading that and it was pretty good, but I'm wondering if there is an even higher-level book for EF? From what I was reading, the 1st edition was pretty technical. I'm looking for a really high-level book. Almost like a precursor to Julia's book (2nd edition). Are there any out there that this group would recommend?"} {"_id": "194061", "title": "Cyclomatic Complexity Ranges", "text": "What are the categories of cyclomatic complexity? For example: 1-5: easy to maintain 6-10: difficult 11-15: very difficult 20+: approaching impossible For years now, I've gone with the assumption that 10 was the limit. And anything beyond that is bad. I'm analyzing a solution, and I'm trying to make a determination of the quality of the code. Certainly cyclomatic complexity isn't the only measurement, but it can help. There are methods with a cyclomatic complexity of 200+. I know that's terrible, but I'm curious to know about the lower ranges, like in my example above. I found this: > The aforementioned reference values from Carnegie Mellon define four rough > ranges for cyclomatic complexity values: > > * methods between 1 and 10 are considered simple and easy to understand > * values between 10 and 20 indicate more complex code, which may still be > comprehensible; however testing becomes more difficult due to the greater > number of possible branches the code can take > * values of 20 and above are typical of code with a very large number of > potential execution paths and can only be fully grasped and tested with > great difficulty and effort > * methods going even higher, e.g. > 50, are certainly unmaintainable > When running code metrics for a solution, the results show green for anything below 25. I disagree with this, but I was hoping to get other input. Is there a generally accepted range list for cyclomatic complexity?"} {"_id": "191010", "title": "Unit Testing in a \"no setter\" world", "text": "I do not consider myself a DDD expert but, as a solution architect, do try to apply best practices whenever possible. 
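A side note on the reflection example from question 193526 above: the reflective branch can be tightened into a reusable helper. This is a hedged sketch, not the question's code: it assumes the same getX() naming convention, deliberately skips `getClass()` (which the original loop would also invoke), and, unlike the original (which skips null values entirely), treats a null on only one side as a mismatch.

```java
import java.lang.reflect.Method;

public final class ReflectiveComparer {

    // True when both objects have the same concrete class and every
    // zero-argument getX() method returns equal values (or both null).
    public static boolean reflectiveEquals(Object a, Object b) throws Exception {
        if (a == null || b == null) {
            return a == b;
        }
        if (!a.getClass().equals(b.getClass())) {
            return false;
        }
        for (Method method : a.getClass().getMethods()) {
            boolean isGetter = method.getName().startsWith("get")
                    && method.getParameterTypes().length == 0
                    && method.getDeclaringClass() != Object.class; // skip getClass()
            if (!isGetter) {
                continue;
            }
            Object left = method.invoke(a);
            Object right = method.invoke(b);
            if (left == null ? right != null : !left.equals(right)) {
                return false;
            }
        }
        return true;
    }
}
```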
I know there is a lot of discussion around the pros and cons of the no (public) setter \"style\" in DDD and I can see both sides of the argument. My problem is that I work on a team with a wide diversity in skills, knowledge and experience, meaning that I cannot trust that every developer will do things the \"right\" way. For instance, if our domain objects are designed so that changes to the object's internal state are performed by a method, but the objects provide public property setters, someone will inevitably set the property instead of calling the method. Use this example: public class MyClass { public Boolean IsPublished { get { return PublishDate != null; } } public DateTime? PublishDate { get; set; } public void Publish() { if (IsPublished) throw new InvalidOperationException(\"Already published.\"); PublishDate = DateTime.Today; Raise(new PublishedEvent()); } } My solution has been to make property setters private, which is possible because the ORM we are using to hydrate the objects uses reflection, so it is able to access private setters. However, this presents a problem when trying to write unit tests. For example, when I want to write a unit test that verifies the requirement that we can't re-publish, I need to indicate that the object has already been published. I can certainly do this by calling Publish twice, but then my test is assuming that Publish is implemented correctly for the first call. That seems a little smelly. Let's make the scenario a little more real-world with the following code: public class Document { public Document(String title) { if (String.IsNullOrWhiteSpace(title)) throw new ArgumentException(\"title\"); Title = title; } public String ApprovedBy { get; private set; } public DateTime? ApprovedOn { get; private set; } public Boolean IsApproved { get; private set; } public Boolean IsPublished { get; private set; } public String PublishedBy { get; private set; } public DateTime? PublishedOn { get; private set; } public String Title { get; private set; } public void Approve(String by) { if (IsApproved) throw new InvalidOperationException(\"Already approved.\"); ApprovedBy = by; ApprovedOn = DateTime.Today; IsApproved = true; Raise(new ApprovedEvent(Title)); } public void Publish(String by) { if (IsPublished) throw new InvalidOperationException(\"Already published.\"); if (!IsApproved) throw new InvalidOperationException(\"Cannot publish until approved.\"); PublishedBy = by; PublishedOn = DateTime.Today; IsPublished = true; Raise(new PublishedEvent(Title)); } } I want to write unit tests that verify: * I cannot publish unless the Document has been approved * I cannot re-publish a Document * When published, the PublishedBy and PublishedOn values are properly set * When published, the PublishedEvent is raised Without access to the setters, I cannot put the object into the state needed to perform the tests. Opening access to the setters defeats the purpose of preventing access. How do(have) you solve(d) this problem?"} {"_id": "191013", "title": "How to label software requirements?", "text": "What is a good strategy to label software requirements in an SRS? Typically outline numbering is employed on headers - but these will renumber if a new heading is inserted in the document. To me it seems like a good idea to aim for a more stable designation for each software requirement. This should make it easier to reference a particular requirement even in the face of an updated SRS. 
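For the \"no setter\" testing question (191010) above, one commonly used escape hatch is to arrange test state the same way the ORM does: through reflection. The question's code is C#, but the mechanism is identical in Java; the sketch below uses plain java.lang.reflect, and the `Document`/field names in the usage comment are assumptions carried over from the question, not a tested API.

```java
import java.lang.reflect.Field;

public final class TestStateHelper {

    // Sets a private field by name, the same way reflection-based ORMs
    // hydrate entities. Intended for test arrangement only. Note that
    // getDeclaredField does not search superclasses.
    public static void setField(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(target, value);
    }
}

// Hypothetical usage in a test, assuming a Java port of the question's class:
//   Document doc = new Document("Title");
//   TestStateHelper.setField(doc, "isApproved", true);
//   doc.publish("reviewer"); // exercises the already-approved path directly
```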
What are your experiences on this?"} {"_id": "191018", "title": "Does Silverlight5 provide anything new for WCF?", "text": "From a WCF standpoint, I am just wondering whether I can leverage anything after an upgrade from `Silverlight 4` to `Silverlight 5`? I did some research regarding the new features of `SL5` and cannot find anything about changes to the `System.ServiceModel` and `System.ServiceModel.Web` namespaces; is it right that nothing was added, or am I missing something? **My Questions:** * Have there been any changes to WCF in Silverlight 5? * If yes, have there been any changes to duplex polling on HTTP?"} {"_id": "137537", "title": "I have a bad memory. Is dynamic typing language+vim appropriate for me?", "text": "I am switching from C#+Visual Studio to Ruby+Vim for a few months. The **only** thing that I am missing from C#/Visual Studio is **intellisense**, especially when I have a new ruby gem to familiarize myself with. As a programmer with a below-average memory like Joel, I miss the happy time in Visual Studio when I could `Ctrl`+`Space` everywhere to get a hint list so that I don't have to memorize a single method, whether its name or its parameter list. I can even get its usage/sample code at MSDN with only a press of `F1`. So, ruby (dynamic typing language)+vim/TextMate programmers, when you are coding, do you run google/gem API reference manual/irb/ri side by side with your vim/TextMate like me most of the time? Or is a good memory a must-have for ruby (or other dynamic language) programmers?"} {"_id": "16354", "title": "What software do you use to help plan your team work, and why?", "text": "Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have **you** had with a particular tool?"} {"_id": "187898", "title": "Why five dining philosophers?", "text": "I was wondering why the Dining philosophers problem is based on a five-philosophers case. Why not four? I guess that we can observe all the unpleasant issues that can occur in the five-philosophers example also when we are given four thinkers. Is it only for a historical reason then?"} {"_id": "61814", "title": "Is programming in the UNIX philosophy the same as Functional programming?", "text": "The UNIX Programming Environment (the classic text) states that the UNIX approach to programming is to build small, well-defined tools that can be combined to solve more complex problems. In learning C and the Bash shell, I've found this to be a powerful concept that can be used to deal with a wide range of programming problems. Just using a Linux platform, the concept is pretty clear and used all the time. Any expression formed on the command line that redirects I/O, linking system tools like ls, grep, more, and so on shows how powerful this concept is. 
The thing that confuses me is that many of these programs are written in C, using an imperative/procedural programming style, yet the way they are used and joined together on the command line seems much more like functional programming to me, where each program is an isolated function that doesn't depend on the state of any other program it might be joined to. Is this accurate: is the UNIX programming philosophy basically functional programming using tools that may have been built using an imperative programming style?"} {"_id": "117350", "title": "Who owns the code I wrote, what rights do I have with respect to my employer", "text": "This is a legal question about mixing code I've written in my own time with code I've been paid to write for an employer. I'm employed at a company with an employment contract, and am paid hourly. This is in Ontario, Canada, in case that affects matters. I've been working on a project for my employer, and in this project I've incorporated some code that I had previously written in my own time at my house for another project (as in I own the code outright). My code is now tightly coupled with the project at work (as in it won't work without it). I have at times needed to make changes to my original code to get it working with my work project, and since I am paid hourly for my time, I've technically been paid to work on my own code. But since I was paid for it, does my employer now technically own this code? Suppose I leave this company tomorrow, do I need to leave my source code behind? I'm not saying I would, but do I still retain exclusive and unrestricted rights to my original code?"} {"_id": "117357", "title": "Is Entity Framework Suitable For High-Traffic Websites?", "text": "Is Entity Framework 4 a good solution for a public website with potentially 1000 hits/second? In my understanding EF is a viable solution for mostly smaller or intranet websites, but wouldn't scale easily for something like a popular community website (I know SO is using LINQ to SQL, but.. I'd like more examples/proof...) Now I am standing at the crossroads of either choosing a pure ADO.NET approach or EF4. Do you think the improved developer productivity with EF is worth the lost performance and granular access of ADO.NET (with stored procedures)? Any serious issues that a high-traffic website might face, were it using EF? Thank you in advance."} {"_id": "119490", "title": "How to justify having one (or more) mobile developers per platform", "text": "I've come across a situation at work where I have to justify why I need one (or more) mobile developers per platform. Although I'm quite aware of the why (each platform is substantially different, has its own philosophy, etc.) I can't seem to come up with more than that. Are there any articles I can read or even books that will give me hard data about the advantages?"} {"_id": "202316", "title": "Is it okay for an application to check for automatic updates at less than a 20-hour interval?", "text": "I have a desktop application that has the ability to automatically update itself on the next restart (without the user's consent - but this is another issue altogether). Assuming that the user would never notice anything related to application updating (such as a progress bar, or a pop-up requiring restart), and that our server would support the request spam load, is there any reason why it should not check for updates at less than a 20-hour interval? 
The reason I'm asking this is because all applications that I know that have auto-update capability check for updates every 20 to 24 hours and at startup. I was just wondering if there was an ethical rule about it, or if it is simply because of the risk of overloading the server."} {"_id": "119493", "title": "What techniques would you use for a next generation java web application?", "text": "I'm working at a site similar to Foursquare and Yelp, with approximately 100000 unique requests each week that generate content, growing steadily. We are currently using: * Seam as Java web framework. * MySQL as DB * Hibernate as ORM * Hibernate Search as Index * EhCache for Caching. Since our site is slowly growing out of the current setup and has a lot of legacy code, it is time for us to start thinking about a major refactoring/changing setup. 1. Web framework We are not ready to change the language but we are leaning towards Spring Web Framework, since: * Seam is no more. * Almost all of us have worked with Spring and liked it. 2. DB and ORM We have done a little research and we are thinking about MongoDB. 3. Index Do we need to have a separate Index if we use MongoDB? 4. Cache ? So my question is basically: If you take Spring Web Framework and MongoDB into consideration, what would a good setup be for a web application that is growing and handles a lot of logged-in users generating input and performing searches? **EDIT** I would like to thank all of you for taking the time to answer me, but the answer I'm looking for should be more specific: \"We choose Spring as our web framework and **Freemarker** as our template language since freemarker is fast.... If you use **MongoDB** you will not need a separate index for doing geo searches since **MongoDB** supports location-based queries out-of-the-box... A very good cache solution to this setup is.... In my previous project we chose to use **Apache Solr** as our search platform because this solved the issue with fast updates...\" Thank you // Jakob"} {"_id": "141917", "title": "Bug reopen vs. new", "text": "A bug was opened, fixed, verified and closed. A month later, it showed up again in a subsequent version after several iterations without any regression. Provided the bug characteristics are the same, would you **reopen** the existing bug ID or open a **new** one with a link to the closed bug?"} {"_id": "141913", "title": "Facing quality issues", "text": "A workforce management software has a complex GUI (for example, values on a page depend on the status (closed or open) of other pages). Only the latest and near-past development has test coverage. During our last release, we received lots of bugs from customers in spite of a 2-week testing Sprint. We don't have a dedicated test team. The developers do the unit tests and user acceptance tests. An automated regression test is triggered every day. I am afraid the developers are not testing the entire workflow because it's time consuming, and they are not able to automate it because of its complexity. Any suggestions? The legacy code (15 years of development) has low code coverage. How can I improve quality? Note: It is currently not possible to hire testers to form an independent test team."} {"_id": "107472", "title": "Which development methodology for a solo programmer on a 1 month project?", "text": "I've been tasked with a solo project to investigate & resolve memory leaks in 8000 lines of Javascript code. I anticipate the project to take up to a month. Please recommend a development methodology I can use to structure my efforts. 
Prefer something light & easy to pick up and run with. Thanks"} {"_id": "9605", "title": "What did Alan Perlis mean regarding the ways to write error-free programs?", "text": "There's a quotation by Alan J. Perlis that says: > There are two ways to write error-free programs; only the third one works. I recently heard this quote from my friend, and was unable to understand the deeper meaning behind it. What is Perlis talking about here?"} {"_id": "113636", "title": "How to convince a teammate, who sees himself as senior, to learn SVN conceptual basics?", "text": "To start with some background, I took up a new developer position this summer and ended up being the newest member on the team, yet with the most experience under my belt. So far I have managed to push sanity initiatives through easily enough because of low adoption costs (in terms of time and effort). However things have leveled up a bit. One of my teammates, although experienced, does not really understand SVN. Naturally, blank areas on his mental map depicting oceans of SVN cause him to adopt rather strange usage patterns. For example, he had declared a policy of \"1 SVN commit per day per developer\" because otherwise \"the server would soon run out of disk space\". When I explained to him that SVN commits are deltas, not full copies, he responded with doubt, and even today I'm not entirely sure if he understands what it means. We also had a heated argument about whether to include the Eclipse `.project` configuration in SVN. My teammate insisted we should, although it has caused numerous pointless conflicts. I was against keeping individual developer configuration files in SVN. Finally, it turned out that my teammate had a practice of re-checking out the entire source tree after every commit to just make sure \"code committed into the repository works\". That was the reason he was so adamant about keeping the project configuration in SVN - so it would be easy to re-import the project. When I explained that commit synchronizes the working copy with the remote byte-by-byte, which makes re-checkout unnecessary, my teammate responded with doubt again and eventually waved the whole issue off as insignificant. In my opinion, our team wastes time by resolving SVN conflicts in project configuration files which contain only developer-specific settings that need not be shared to SCM at all. All this mess because someone tailored the process around incorrect assumptions. How can I convince a teammate, who sees himself as senior, to get a better understanding of SVN basics?"} {"_id": "107478", "title": "How Do I calculate the \"Human Resources/Effort\" for a programming project?", "text": "as a project manager or CTO how would I calculate the \"time\" it takes to develop a software system? For example, I would be responsible for developing a large web application in Java, MySQL, HTML, JSF. How do I correctly calculate how much time this takes? Prerequisites: I do not know the team that will do this job (so I can not ask them how long they might need). I have not done any calculations like this before. I am completely new in this role. But I have some very basic/slight programming background... Also any good resources (=links) or answers are very much appreciated."} {"_id": "113634", "title": "Should I use Java applets or JavaScript/PHP to make my site more interactive?", "text": "I have a website that is about electronics. I want to make some functional calculators, such as calculations for analog filters, which will have to show lots of plots and stuff like that. 
This is a sample of what I am looking for and obviously it is only PHP and its graphic functions. I want to have a similar thing but much more interactive, with realtime sliders and stuff like that. What should I go for? Java Applets? Or stick to JavaScript/PHP? I am asking this because I can do it with Java much faster (I know it better than JavaScript and PHP). But I am afraid of browser incompatibility, security options for Applets and similar things. What is your suggestion?"} {"_id": "113635", "title": "What's the difference between the greedy and Hamiltonian methods?", "text": "Given _T={CTAGC, GAGCG, AGCGG, CGGAG}_, using a greedy algorithm, the superstring _S_ will be _CTAGCGGAGCG_. From `S`, the combination of triplets will be given as _s = {CTA, TAG, **AGC**, **GCG**, CGG, GGA, GAG, **AGC**, **GCG**}_. Both _GCG_ and _AGC_ are repeated. If using _s_ to retrieve the superstring by using the Hamiltonian method, would the repeated words be used or omitted? If the repeated words are omitted, then _CTA_, _TAG_, _AGC_, _GCG_, _CGG_, _GGA_, and _GAG_ constructed back will become _CTAGCGGAGC_. So, in the end, the greedy method and the Hamiltonian method will provide different results for the superstring. Why are they different? In my research, all the examples I found showed that there are no repeated words in the combination, so if I reconstruct the superstring using the Hamiltonian method, the result of the greedy and Hamiltonian methods will be the same. But what about the repeated words?"} {"_id": "113632", "title": "Is this an anti-pattern?", "text": "I've seen this a lot in our legacy system at work - functions that go something like this: bool todo = false; if(cond1) { ... // lots of code here if(cond2) todo = true; ... // some other code here } if(todo) { ... } In other words, the function has two parts. The first part does some sort of processing (potentially containing loops, side effects, etc.), and along the way it might set the \"todo\" flag. The second part is only executed if the \"todo\" flag has been set. It seems like a pretty ugly way to do things, and I think most of the cases that I've actually taken the time to understand could be refactored to avoid using the flag. But is this an actual anti-pattern, a bad idea, or perfectly acceptable? Edit: The first obvious refactorization would be to cut it into two methods (see the sketch below). However, my question is more about whether there's ever a need (in a modern OO language) to create a local flag variable, potentially setting it in multiple places, and then using it later to decide whether to execute the next block of code."} {"_id": "21243", "title": "How can I introduce new technology to my team?", "text": "I work on several teams that are living in the past, and I'm trying to introduce them to new technologies like ASP.NET MVC2. What are good ways to introduce new technology in a positive light?"} {"_id": "151169", "title": "Why do some programmers think there is a contrast between theory and practice?", "text": "Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house (or, e.g., something like this) you need to do quite some maths to be sure that it won't fall apart. 
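On the flag question (113632) above, the \"cut it into two methods\" refactoring mentioned in its edit usually lands on something like the sketch below, where the first half returns whether the follow-up is needed, so the mutable flag disappears from the caller. All names and the stand-in conditions here are hypothetical:

```java
public class Processor {

    public void process(int input) {
        // The former first half now reports whether follow-up work is needed,
        // so no mutable "todo" flag survives at this level.
        if (runFirstPart(input)) {
            runFollowUp(input);
        }
    }

    // Was "if (cond1) { ... if (cond2) todo = true; ... }".
    private boolean runFirstPart(int input) {
        if (input <= 0) {       // stand-in for cond1
            return false;
        }
        // ... lots of processing here ...
        return input % 2 == 0;  // stand-in for cond2, the old "todo" decision
    }

    // Was the trailing "if (todo) { ... }" block.
    private void runFollowUp(int input) {
        System.out.println("follow-up for " + input);
    }
}
```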
In contrast, speaking with some programmers or reading blogs or forums I often find a widespread opinion that can be formulated more or less as follows: _theory and formal methods are for mathematicians / scientists while programming is more about getting things done_. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc, may be interesting topics, they are often not needed if all one wants is to **get things done**. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good textbooks where you can look up algorithms, etc. So IMO (the right amount of) **theory** is one of the tools for **getting things done**. My question is why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as **easy** compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)? **EDIT** Thanks for all the answers and the interest in this topic. I would kindly ask you not to post any new answers if you do not have some observation to add that has not been covered by the existing answers yet. So please read all answers carefully before posting new ones. I will try to summarize what I have understood from the answers so far. 1. In contrast to software engineering, in civil engineering it is much clearer what amount of theory (modelling, design) is needed for a certain task. 2. This is partly due to the fact that civil engineering is as old as mankind while software engineering has been around for a few decades only. 3. Another reason is the fact that software is a more volatile kind of artefact, with more flexible requirements (it may be allowed to crash), different marketing strategies (good design can be sacrificed in order to get it on the market quickly), etc. As a consequence, it is much more difficult to determine what amount of design / theory is appropriate in software engineering (too little -> messy code, too much -> I can never get finished) because there is no general rule and only (a lot of) experience can help."} {"_id": "112322", "title": "Can anyone recommend a good robot kit for learning C++ robotics programming?", "text": "Preferably something that is a combination of affordable and close to real-world robotics programming and will allow me to program it with C++."} {"_id": "81013", "title": "Method name prefixes and postfixes", "text": "Some time ago I read an article (or it was a chapter in some book, I cannot remember) about method name prefixes that reflect method behaviour, like `addFoo()`, `setFoo()`, `removeFoo()` etc. Could anyone point me to that article? 
I cannot google it and it makes me a sad panda :-("} {"_id": "113988", "title": "How can I change sloppy company culture?", "text": "Sometimes when I have a problem that needs to be solved, I find that the easiest way to solve it is by writing a small program as a personal tool. I don't make it super usable or super robust, as I am the only one going to use it, and I don't have the time to refine it and test it thoroughly. Then a coworker sees the program and asks for it, because he has run into the same problem and the tool could help. I give him the \"It's not pretty but it'll get the job done\" disclaimer and let him have it. Next thing I know, my superior is calling me, telling me that he's trying to get the software to work on a client's computer but it's showing X error message. WTF?? That software is not ready for release, nor was I told that it needed to be ready for release. But for some reason, my superior thought it was good enough and released it without telling the original developer. Now, this particular problem is easy to fix with a `MessageBox.Show(\"DO NOT GIVE TO CLIENTS!\");`. However the problem is indicative of a much deeper problem: our company culture is sloppy. Sloppy software is OK and sloppy processes are OK. Don't worry about the future - put just enough effort into getting it barely working now, put the binaries in a .zip file, and ship it. Good enough for government work. This is a small company with 10 full-time employees, is growing, and has been around for a while. Don't get me wrong; I love working here and I love the company. Don't tell me to run; I want to be a part of making the company better. How do you start to bring good change to this kind of culture?"} {"_id": "198195", "title": "Objective C - nested messages ... confusion about", "text": "Wonder if anyone could shed some light on this messaging construct: The documentation says that messages appear between brackets [] and that the msg target/object is on the left, whilst the msg itself (and any parameters) is on the right: [msgTarget msg], e.g., `[myArray insertObject:anObject atIndex:0]` OK, simple enough... but then they introduce the idea that it's convenient to nest msgs in lieu of the use of temporary variables--I'll take their word for it--so the above example becomes: `[[myAppObject theArray] insertObject:[myAppObject objectToInsert] atIndex:0]` In other words, one, `[myAppObject theArray]` is a nested msg, and, two, 'theArray' is the 'message'. Well, to say I find this confusing is a bit of an understatement ... Maybe it's just me but 'theArray' doesn't evoke a message semantically or grammatically. What this looks like to a guy who knows Java is a type/class. In Java we do things like `Class objectInstance = new Class() ...` and the bit to the left of the assignment operator is what this so-called nested message reminds me of ... with object and class/type positions switched, of course. Anyway, any insight much appreciated."} {"_id": "210850", "title": "What is the purpose of the stand-up and its duration in agile methodologies?", "text": "I used to work in a waterfall methodology and now I am in a team that is following an agile methodology. It seems they are doing it wrong. For example, we have stand-ups that last 25+ minutes daily, which is really annoying. Additionally, I feel more like I am justifying my salary to management than anything else. Am I wrong to feel this way? 
Is this how stand-ups are usually conducted?"} {"_id": "210851", "title": "How to make a \"git push\" update files on your web host?", "text": "I have a few sites which are all hosted on the same web hosting service under shared hosting. My web host supports Git and I have SSH access to it, and I also have Git set up on my laptop as well. I want to make it so that when I do a \"git push origin master\", it will automatically update the files on my web server, and also save a backup of the previous commit's files so I can easily roll back if I want to. Is this possible?"} {"_id": "212348", "title": "Using Java Reflection to decouple code modules", "text": "I'm involved in a project with several modules. I found that programmers have designed one module to be easily decoupled from its dependent modules using Java Reflection. If other modules need to call a method in this module, the programmers are expecting them to use reflection to call it. This has resulted in a lot of places with **hard-coded** reflection calls. By _hard-coded,_ I mean the class and method names are permanently fixed as Strings, which kind of defeats the purpose of reflection, which is supposed to be for dynamic programming. How can this be justified? I feel they are being novice about it and misusing the reflection API. I think polymorphism is the right way to decouple a module from another without breaking functionality. (Unfortunately, changing the entire code base to polymorphism is way too much maintenance.)"} {"_id": "137052", "title": "migration and interoperability", "text": "A large system has already been done in visual fox pro. They want to add more requirements to a module and make some changes to the system. At the same time they are thinking about migration to .net. Some parts, like report generation, have been done in .net using the visual fox pro database and loaded in vfp. Is it a good idea to start migrating the database as well? I mean, is it good to create a database (and tables) for the module for the new requirements and changes? I think this will cause problems and make the software difficult to maintain. How is the process of data migration initiated? Should it be done gradually (partly in visual fox pro and partly in sql) or should it be done all at once?"} {"_id": "137057", "title": "Do gantt charts have a role in agile software development?", "text": "I've heard it said that gantt charts are a relic of waterfall project management techniques. I typically use an agile-like approach to project planning and tracking progress which provides good visibility into feature-time trade-offs as the project progresses. We're at the outset of a project in which, while the exact front-end user interaction design is somewhat unclear at this early stage, the back-end requirements are fairly clear (components that communicate with various third-party APIs, the server infrastructure, etc). We'll be going through a process to develop good user interaction design for the front-end (starting with user stories and working forward from there), but we also wanted to get an idea of how long the back-end would take. I decided the best approach was to break it down into sub-components, where each component consisted of tightly coupled code that should probably be worked on in a contiguous period of time. I assigned rough time estimates to each component and sub-component, typically ranging from 1 to 10 days. 
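An aside on the reflection-decoupling question (212348) above: the polymorphism alternative it argues for does not have to be a big-bang rewrite; it can be introduced one module boundary at a time by declaring an interface in a module both sides depend on. A minimal sketch with hypothetical names (this is the general pattern, not code from the project in question):

```java
// In a shared "api" module that both caller and callee depend on.
public interface ReportService {
    String generate(String reportId);
}

// In the implementing module; its concrete class name no longer leaks out.
class PdfReportService implements ReportService {
    @Override
    public String generate(String reportId) {
        return "pdf:" + reportId;
    }
}

// Call site: instead of Class.forName("...") plus Method.invoke with
// string names, it depends only on the interface.
class ReportClient {
    private final ReportService service;

    ReportClient(ReportService service) { // wired by a factory or DI container
        this.service = service;
    }

    String run() {
        return service.generate("monthly");
    }
}
```

With this shape, a renamed method breaks the build at compile time instead of failing at runtime, which is the concrete maintenance win over hard-coded strings.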
I then used OmniPlan to indicate dependencies between these various components, and assigned developers to each task, with the goal of distributing effort as equally as possible. I then used OmniPlan's leveling tool. All, I think, a fairly standard way to use OmniPlan, and, I thought, a reasonable way to come up with a rough time estimate. I should clarify that the intention was not to come up with a rigid blueprint about who would work on what and when, but more to come up with at least one plausible way that we could build what we needed to build in a two-month period. The gantt chart suggested that this was feasible. To my surprise, I received quite strong pushback from another team-member familiar with the agile development process, who accused me of adopting a waterfall methodology. Their more specific criticism was that the gantt chart was specified in terms of architectural components, rather than user-visible features. They were frustrated that it didn't lend itself to saying \"we should have functionality X by Y date\". Where, if anywhere, did I go wrong?"} {"_id": "12087", "title": "What are great online code review tools for Open Source projects?", "text": "I would like to start a new Open Source project, and I believe all check-ins should be reviewed before being merged into trunk. What are some great **online** code review tools for Open Source projects that have this functionality, and what makes them great?"} {"_id": "190766", "title": "Should I use TDD and BDD if my project is changing fast?", "text": "I have my own little project I am creating using RoR, and I plan for it to have a small-medium load. With no doubt I started with BDD and TDD (Cucumber and RSpec to be exact, but I am also experienced with TestUnit). I like it, but since it's my own project and it's somewhat of a startup - I am changing many things in it: many requirements, many ideas of how things should work and look. So it becomes too time-consuming to always code it using BDD and TDD, even if I cover only common cases. What should I do? Should I sacrifice BDD and TDD for productivity till I get to some point when I have a solid basis and it's time for production, and then write tests? Should I write them right now but as minimal as possible? Should I only write RSpec and forget about Cucumber for now? Or maybe just TestUnit to test the model for now, since it's the most important and everything else can change? Thanks in advance! **Update:** I know all the pros of TDD and BDD; no doubt it makes it easier to scale and bugfix an application in the future and saves time. But maybe it's more reasonable in my situation to wait a few weeks till I have at least some skeleton architecture of my app, and then, once I am sure of it, cover it with tests to have a solid base? And then continue with TDD and do all the tests with TDD."} {"_id": "190763", "title": "Is there a name for this kind of database constraint? \"Functional foreign key\"? And is it a good idea?", "text": "Say we want to enforce that a table represents a tree. I have found this method for doing so: create table tree ( id integer[] primary key, parent integer[] references tree (id), item text not null, constraint tree_parent check ( (id = array[]::integer[] and parent is null) or parent = id[1:array_length(id,1)-1]) ); A node in the tree is identified by its _path_ from the root node. As such, the foreign key `parent` is just that path with one element dropped off the end. The combination of `foreign key` and `check` guarantees the tree structure. 
**First question:** is this a good idea? Are there better approaches? In Oracle at least, I'm able to define `parent` as a virtual column, which is elegant. **Second question:** is there a name for this approach? I like to think of this as a \"functional foreign key\": `tree.id` references `tree.id`, but _via a function_ that drops the last element in the path. I think this concept has more general use than the tree example above."} {"_id": "223565", "title": "Should I pay for the use of Subversion?", "text": "I am confused about paying for Subversion. Our company has a solution and wants to manage versions of the source. Our company's product is obviously a commercial product and not open source, and it's not for distribution purposes. 1. Can I use Subversion without payment for the company? 2. Should I open the source of our solution because we use Subversion for free? 3. Is there any obligation that follows from this? 4. Exactly in which cases do I have to pay for using Subversion? Thank you."} {"_id": "223566", "title": "Are my novice user stories correctly composed?", "text": "Is this user story substantively correct: > As a system owner, I want everybody that uses the system to have to log in > using a secure password and login system, to prevent unauthorised and random > access to system data. And then: > As a user, I want to be able to log in with my secure password, so that only > I can access areas of the system I need to do my work. I have several other stories for the system owner, like password format, lockouts etc. Is this a good place for these requirements? As stories for the system owner?"} {"_id": "223567", "title": "Deploying artifacts and dependencies on another system with Maven", "text": "I am coding a Java program on my development machine. Maven packages this as `myjar.jar`, and I can run it from the command line using `java -cp myjar.jar my.FantasticClass`. It uses the library `somelib.jar`. I use Maven, and the project is hosted on github. When I want to run my project on the target machine, what is the best way to go? I could package my code as a jar, grab the `somelib.jar` and move it to the target machine manually, but I suppose I could also check out the Maven project from github on the target machine, run it, and have Maven both generate the jar and get the `somelib.jar` file. Or am I simply overlooking something?"} {"_id": "49420", "title": "What features/changes would you most like to see in Visual Studio 2012?", "text": "Or at least the \"next version\". There are some very interesting, grand ideas out there, threatening to revolutionise the way we code. Equally, there is likely a whole flurry of little improvements that might make the whole experience better. I do love Visual Studio 2010, it is a luxury environment for developers (in my opinion). What can be done next?"} {"_id": "223561", "title": "Why should we be aware of licenses?", "text": "I know this is a common question, but why should a programmer be aware of software licenses as well as extension and plugin licenses? I'm working in a company which focuses on the business process (Ruby on Rails) and I'm curious as to why they are strict on such licenses in the project development. What licenses should I be aware of and why?"} {"_id": "49422", "title": "Was Java originally designed for a toaster?", "text": "I've heard this tossed around a few times, but never really with a source. The wiki page says it was designed for home appliances, but never really references a toaster. 
Anyone have a source?"} {"_id": "116004", "title": "Is it bad to join open-source projects as an amateur?", "text": "I've thought for about six months now that I should join an open-source iPhone or iPad project to hone my skills in Objective-C, but every time I go to do it I see thousands of lines of code on huge projects and I end up convincing myself I would never understand them. I always think that my commits would just end up being a hassle for project admins and more senior contributors, so I always back out at the last second. My question essentially is, **is it a hassle when an intermediately-experienced programmer joins an open-source project?**"} {"_id": "116005", "title": "What exactly is a programming language? What enables us to write in such a language?", "text": "Alright I'm new to programming and I admit this is a fairly abstract question. The natural language we speak every day exists because people can understand each other. How can computers understand my code written in a certain language? Let's say Mr. A creates a new language. How is that accepted by machines? Must the creator communicate with the machine using machine language to create a new language? What guarantees that what we write in a language will be properly understood by the machine?"} {"_id": "211105", "title": "Best practice for code coverage of empty interface methods", "text": "Given a class that implements an interface, but does not need all of the methods implemented, what is the best practice for unit testing this class with respect to code coverage? Or is it considered a sign of a code smell? To make the problem more concrete, consider a Java class (the question is not limited to Java though) that implements `ComponentListener` but derives from some `X` (so as to rule out the choice to use `ComponentAdapter`). The class is however only interested in the `componentResized()` method and leaves the other callback method bodies empty. When checking the code coverage, we get a report that indicates correctly that the remaining methods are untested. For obvious reasons, I hesitate to add a test that simply calls a no-op method for the sake of coverage. While I am not bound to reach a certain coverage, I still think that in and of itself, this phenomenon may indicate a code smell with respect to the single-responsibility principle. On the other hand, it's not far-fetched either that a component representation is responsible for updating its state on a resize. Is there some sort of consensus or best practice on how to handle such methods, or is the question illegitimate as in it is a result of a supposedly bad design?"} {"_id": "211104", "title": "What are the practical steps of the software development life cycle?", "text": "I am building a piece of software and trying to learn the typical steps of the software life cycle. I am not sure if what I did is correct or enough, so I would like to ask for the typical steps you follow during your life cycle in terms of actual steps taken. For example, 'requirements gathering' is not the step I mean, but gathering requirements using User Stories from the customer or from the requirements documents is the step I mean. This is what I do now: 1. Write User Stories from the requirement document I had. 2. Write Use Cases for these stories. 3. Collect nouns from the requirement documents and use them as classes to build a class diagram. 4. Draw state diagrams for the main entities in the class diagram. 5. Draw a database diagram to map the class diagram to database tables. 6. 
Start coding to a 3-tier architecture. Please tell me if I am doing it right or missing something."} {"_id": "68036", "title": "When should a junior not be helping a senior?", "text": "What do you do when the person you're helping is your senior? Do you continue to help them? I've seen this happen at a previous job. A senior developer lost touch and was having trouble with the basics. He'd got lazy over the years and was accustomed to loading work on the juniors. Restructuring meant that the senior now had to work hard for the money. What do you do? Help, or expect more from them? **EDIT:** Myself and the other seniors did help, and so did the juniors. But the senior developer was accustomed to delegating, and many felt the senior developer in question was just trying to get others to do their work. I was asking the question in reference to reading: When do you not give help to less experienced programmers? Basically, when should you draw a line in the sand?"} {"_id": "68035", "title": "What is the best way to store files when using winforms and web?", "text": "I have to develop an application (in C#) that has to work with files. The application consists of two versions: a web version and a Windows version. Therefore, the files must be stored in a place where both versions can access them. In both versions, the files can be edited and new files can be created. I've been thinking about the following options to store files: * IIS using WebDav * SharePoint * Using a share and making this share a virtual directory in IIS. I don't have experience with any of these options, and these were the options that came to my mind. What is, in your eyes, the best way to store the files?"} {"_id": "157130", "title": "Nice iterator naming", "text": "How do you name your iterators when you return a begin and an end iterator from a class? Without it sounding clunky, that is. Example: typedef std::vector<Idea> Ideas_Type; Ideas_Type::const_iterator GetIdeasBegin() const; Ideas_Type::const_iterator GetIdeasEnd() const; Should it be GetIdeasBeginIter? IdeasBegin?"} {"_id": "90725", "title": "Is Displaying Degree In Office Appropriate?", "text": "This is a little bit of a non-technical question for those working in the software development field. My employer has spent a fair amount to renovate 2 executive offices into 1 software development area. There are 3 programmers here, and I am the only one who holds a degree, and the only general software developer - my co-workers program in a scripting language that controls our hardware product. I feel that my employer is attempting to demonstrate to customers and others who visit the office that they are taking an active step in quality and functionality control by bringing the development in house (some of it has previously been contracted out). I feel that we're being looked after; new furniture, new hardware to work on, etc. I am considering whether it is appropriate to display my degree in the office. I generally feel that it would be fine; I'm proud of it and worked hard for it. My only hesitation is that there are engineers here who are much better established in their careers who don't display any of their certifications (mind you, their work area is constructed of partition walls instead of drywall, but I don't think this is the deciding factor, though it may be). I would, of course, consult my employer before hanging anything on the walls, but I thought it would be good to get a feel before approaching them. 
What has been your experience with this sort of decision, and are there other considerations that I am overlooking? Thanks"} {"_id": "99096", "title": "Sign Language Recognition", "text": "I am a final year undergraduate student of Information Technology. My team and I have taken up \"Sign Language Recognition\" as our Final Year Project. We have just started with it and we are in the phase of information gathering (gathering data). We plan to use Instrumented Gloves as the input device. But we do not have much knowledge in the area. Also, we came across the following methods for training the system for actual recognition of gestures and hence sign language: 1. Neural Networks 2. Symbolic Learning Algorithms 3. Hidden Markov Models 4. Instance Based Learning 5. Grammar based techniques Please tell me which of them I should use for sign recognition. Also, tell me about Instrumented Gloves: is there any specific variety we should choose for our project?"} {"_id": "90721", "title": "What is the history behind the .NET platform's origins?", "text": "I watched Douglas Crockford's JavaScript talks recently, and at one point, he said that Microsoft did not consider JavaScript important because they saw the web as a passing phase of internet usage that would be supplanted by something loosely known at the time as \"Internet X\", and they wanted .NET to become \"Internet X\". Of course, the web is still with us and we know .NET as an application development platform. Can anyone tell me about the early history of .NET and how it went from being an intended replacement for the web to the platform we know it as today?"} {"_id": "199576", "title": "How to rotate an array of bits", "text": "![enter image description here](http://i.stack.imgur.com/YmIEw.png) I currently have a PIL Image object which I store as an array of bits (1's and 0's). However, I now would like to be able to rotate that image 45 degrees. One way to do it is to take the original PIL image, apply a transform matrix to it, then convert that to an array of bits. The problem with this approach is that it's computationally expensive, especially if I want to start doing more than just one rotation. It would be faster to just modify the array of bits directly. I tried using numpy.roll: numpy.roll(bits, 45) # rotate 45 degrees Unfortunately, this just does a circular shift, not an actual angular rotation. What algorithm can I use on the array of bits to give me the desired output without having to go through the original image? Even though my application is in Python, your answer can be in whatever language you feel comfortable with; I'm more interested in the algorithm itself, not the syntax :)"} {"_id": "90723", "title": "What is the typical workday in the life of a junior programmer?", "text": "I am a college student majoring in CS and I have yet to take an internship. I am wondering what the normal workday for a junior programmer is like. What is the normal daily workload like? Are there any specifics or office etiquette to being a programmer that are different from other junior-level employees?"} {"_id": "205755", "title": "Encapsulate downcasting when returning from a method", "text": "In chapter 24 of Code Complete the author says, in reference to encapsulating downcasting when returning from a method, \"If a routine returns an object, it normally should return the most specific type of object it knows about. 
This is particularly applicable to routines that return iterators, collections, elements of collections, and so on.\" Now, being a C# programmer, I typically always return ICollection instead of, say, IList or even List. By using an interface from higher up in the hierarchy I am free to switch collection type in the method without breaking the method interface. This seems like a good thing. What am I missing?"} {"_id": "168300", "title": "When not to use Spring to instantiate a bean?", "text": "I am trying to understand what would be the correct usage of Spring. Not syntactically, but in terms of its purpose. If one is using Spring, then should Spring code replace all bean instantiation code? When to use, or not to use, Spring to instantiate a bean? Maybe the following code sample will help you understand my dilemma: List<ClassA> caList = new ArrayList<ClassA>(); for (String name : nameList) { ClassA ca = new ClassA(); ca.setName(name); caList.add(ca); } If I configure Spring it becomes something like: List<ClassA> caList = new ArrayList<ClassA>(); for (String name : nameList) { ClassA ca = (ClassA)SomeContext.getBean(BeanLookupConstants.CLASS_A); ca.setName(name); caList.add(ca); } I personally think using Spring here is an unnecessary overhead, because: 1. The code is simpler to read/understand. 2. It isn't really a good place for Dependency Injection, as I am not expecting that there will be multiple/varied implementations of `ClassA` which I would like the freedom to replace using Spring configuration at a later point in time. Am I thinking correctly? If not, where am I going wrong?"} {"_id": "198224", "title": "Why is MongoDb popular with Node.js?", "text": "I've been looking at different web stacks, mainly Rails and Node.js. One thing that strikes me is that while Rails is often used with a relational database, Node.js seems to go hand in hand with MongoDB, judging by the blogosphere. Is there a specific reason for this? I like the modularity of Node but I'm also sceptical of NoSQL. I get the feeling that if having an RDBMS is important I should use Rails, since relational databases seem to be second-class citizens in the Node ecosystem."} {"_id": "198225", "title": "Is there value in learning Entity framework 4.0", "text": "I purchased a used book by Julia Lerman (2010) on EF 4.0, and now I am wondering if the EF has changed dramatically since then. I'm looking to learn this technology and I do not know if starting with a previous version matters or not (deprecated functions, new practices, different approaches...). If I start by learning EF 4.0, will I be learning a dead or dying technology, or will most of what I learn translate directly to EF 5? Are there things I should be aware of that are extremely different in the application of the 2 versions?"} {"_id": "168308", "title": "Why not commit unresolved changes?", "text": "In a traditional VCS, I can understand why you would not commit unresolved files, because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually _prevent_ you from committing the files). Instead, I think that your repository should be locked from _pushing_ and _pulling_ , but not committing. Being able to commit during the merging process has several advantages (as I see it): * The actual merge changes are in history. * If the merge was very large, you could make periodic commits. * If you made a mistake, it would be much easier to roll back (without having to redo the entire merge). 
* The files could remain flagged as unresolved until they were marked as resolved. This would prevent pushing/pulling. You could also potentially have a _set_ of changesets act as the merge instead of just a single one. This would allow you to still use tools such as `git rerere`. So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?"} {"_id": "220694", "title": "Need Advice About A Specialized eCommerce System", "text": "I'm doing requirements/systems analysis on a particular project and could use some advice. **PROBLEM:** The project is for an organization that has a number of suppliers, with each supplier having up to several thousand items in their inventory. The supplier's inventory list will mostly be available via some sort of API. It'll contain the product name, description, photos, unique identifier (e.g. barcode), and other details like dimensions, weight, colors, etc. depending on the inventory item. The organization has a number of retail stores. These retail stores may opt to sell the inventory items available from the organization's suppliers at whatever price the retail stores set. Retail stores also need the ability to: * flag whether an inventory item is available in-store and/or online * add items to their inventory that aren't part of the organization's suppliers' inventory list * provide variable pricing models to the retail stores' customers (e.g. a set price for regular customers, a discounted price for preferred customers, etc.) * allow customers to purchase items based on a line of credit For online purchases, the payment gateway/merchant account used should be either: * the organization's payment gateway/merchant account, or * the retail store's own payment gateway/merchant account Retail stores also need basic CMS functionality for their site, such as: * ability to add pages * ability to define custom data (e.g. like the \"channels\" concept in ExpressionEngine) * member management The organization also needs to manage the individual retail stores' websites -- as well as allow the individual retail stores to do their own management/administration. So a \"multi-site manager\" feature + site-level control panels are needed. The organization will also provide the retail stores a few templates to choose from. So, a robust templating system along with the ability to choose templates would be needed. **PARTIAL CONCEPTUAL SOLUTION:** I was thinking that I should have a centralized repository for all the suppliers' inventory items so that the suppliers can have a uniform way of accessing them. This would result in a duplication of data, but it might be necessary, as the data will be needed for displaying product details, categorization, search, etc. Benefits of having a centralized repository are: * any API keys that need to be managed in order to obtain the suppliers' inventory list would be in one place only * any special code to obtain suppliers' inventory would only need to reside in one place, too **QUESTIONS** 1. Is my idea of having a centralized repository a good idea? 2. Is this a system that needs to be built from the ground up?"} {"_id": "220911", "title": "Using public API of a BSD-Licensed library in an MIT-Licensed project", "text": "I'm writing code that I would like to release under the MIT License. It uses the public API of a library licensed under the BSD 3-clause license. 
I am not redistributing the library with the source code, and I am not providing binaries; users must install the library themselves and compile from source. Do I need to include the license of the library in my project, effectively overriding the MIT License, even though I am not redistributing the source code?"} {"_id": "112589", "title": "Examples of MVVM adoption outside the Microsoft community?", "text": "Is MVVM getting any kind of traction outside the Microsoft community? Within Silverlight this is a non-issue, but for other technologies, like JavaScript, it surely is. For instance, Knockout.js is a great framework, but the 'rest of the world' seems to be on a Backbone path. My concern is that MVVM frameworks (like Knockout) are going to suffer a lack of network effect by being constrained to the Microsoft ecosystem, and thus fall behind compared to the rest."} {"_id": "197087", "title": "Are there any environments like Visual Studio for embedded systems?", "text": "I thought it would be very useful if there were an embedded application development tool like Visual Studio is for web development. I mean, when we develop web applications we have a toolbox, and we drag and drop components and generally use events. In embedded systems, events are interrupts. So why don't we have a toolbox which has electronic circuit components that we drag and drop, writing the code behind them by clicking on them? Is there any extension like that for Visual Studio? Or is there any other environment like that?"} {"_id": "104925", "title": "Do I still own accreditation of work despite not owning copyright?", "text": "I completed some work at University for a website design. I understand that unless I request IP from the University they own full rights to the design. The clause is stated as below: > 18.1 All intellectual property rights in and to any work created by a > student during the course of his/her study will belong to and be the > absolute property of the University and the student will do all such acts > and sign all necessary deeds and documents to vest legal title in and to the > intellectual property in the University. The questions I have are: 1. Do I maintain any rights to the design? 2. Have I relinquished all rights to the work through this point? 3. Are accreditation and IP separate? I.e. do I still have the right to accreditation on the website despite not owning it? Any advice or help is much appreciated."} {"_id": "224487", "title": "Are there any known effects on cognitive load of many files with one file per object?", "text": "There has been a trend in the Ruby/Rails community to create lots of objects that have very small functionality (SRP anyone?) and live in their own file. These are often extracted from large, bloated model files. This could be good, but I think it is often trading one problem (bloated models with hundreds of lines of code) for another, worse problem (hundreds of small object files in subdirectories several layers deep). 
Are there any known effects on the cognitive load difference between lots of small files and fewer large ones?"} {"_id": "104921", "title": "Why so many layers with domain-object-like objects in an application?", "text": "Take a typical line-of-business Silverlight application for example. First, we have a Product table in the database, which has fields ID, Name, Price, etc. Then, on the server side, we should have a Product Entity class in the Data Access Layer, which has properties ID, Name, Price, etc. Then we should have a Product DTO class with properties ID, Name, Price, etc. Then, on the client (Silverlight) side, we have the Product DTO class with properties ID, Name, Price, etc. as a Data Contract to communicate with the server. Then we should have a Product ViewModel class, which also has the properties ID, Name, Price, etc. to bind with the View. **Don't these many layers of similar objects violate the Don't Repeat Yourself principle?** What if the Business Domain Model itself faces frequent change? For example, the customer wants to remove/add new properties to the product, which will cause all layers to change their code. We introduce an additional DAL layer on the assumption that the underlying database will be changed, but in the real world, the business model itself changes far more frequently than data access or presentation logic."} {"_id": "102189", "title": "Evaluating third-party libraries", "text": "At our company, we frequently use third-party libraries and frameworks in our products. Often, we are faced with the task of evaluating one or more libraries that fulfill similar requirements (in a specific case we had to choose between some encryption libraries). We often do a simple prototype to try to figure out if the library somehow works for us. However, often the prototypes get really complex, and when we are at a point where we figure out that the library is not suited for our purposes, lots of time has been spent that we could have used to write the desired functionality ourselves. My question would be, do you have or practice any guidelines when it comes to evaluating third-party libraries? More specifically, do you have something like a maximum time that will be spent on prototypes, and do you have general requirements for libraries you use (license issues, support, documentation, ...)?"} {"_id": "115675", "title": "Pesky bugs - nonexistent?", "text": "Very short introduction (this is quite a context-heavy question): I'm a 17 year old script kiddo/programmer, doing some projects, usually netting around 20 files of 200 lines each. I usually don't program very low-level, more high-level with batteries-included APIs like Python + pygame and Lua + the WoW API. Nevertheless, I've written quite some code in the lower levels of the computer too (mostly C/C++). Now, I read **a lot** of programmer discussion, and a common recurring argument is _preventing pesky bugs_ , for example regarding \"reusing variable names\". I always nodded and thought that it was a valid argument, but just now I wondered, how valid is it? To be honest, after thinking for a while, I figured I have no idea what they mean by _pesky bugs_. We've all heard stories about phenomenal impossible-to-debug bugs, we all have spent useless evenings on that one annoying bug, but apart from a few cases I have never been busy with a programming-related bug for longer than a few hours. Though on the other hand, the projects I work(ed) on aren't huge 10 million line projects like the Linux kernel, and are quite simple... scripts. 
I have a good understanding of them (or at least of my part in collaborations), and they are not very error-prone. So I'm wondering, do these pesky bugs occur exponentially more as the amount of code increases, or ...?"} {"_id": "102181", "title": "Should CMS be coupled with presentation?", "text": "There seems to be a growing trend for CMS systems to manage not only data capture and workflow, but to be an end-to-end product that manages presentation and end user content (like comments/forums). I thought it was pretty ridiculous, but it seems like this approach is gaining more and more traction. It seems to me that the two concepts should be completely orthogonal. The CMS needs to either publish content to a database or XML repo or expose an API (RESTful or otherwise), and remain agnostic to presentation. I'm sure there are some efficiencies in coupling, but at the expense of lock-in and inflexibility. I guess my question is, should I go with the flow or is this something I should keep fighting against?"} {"_id": "147401", "title": "From software developer to a software teacher career", "text": "I am a software engineer at a large company and I love what I do. However, I feel that in the long run I would like to become a teacher of these subjects, not exactly a classic uni teacher, but more like giving workshops/seminars for professional software developers. Any advice about stuff that I could start doing now to achieve that (in, let's say, 5 years) would be extremely useful. I live in London, UK, if that helps ..."} {"_id": "79293", "title": "Deploying Microsoft-Access databases with our application", "text": "Up to now we have used DAO 3.51 (MS-Access 97) databases with our application. We are considering using a newer version. * Which versions are available? * How about deployment? * How about royalties/licence fees? * Any recommendations about alternatives? This question should be easily researchable, but I failed."} {"_id": "132263", "title": "Testing an IRC Bot", "text": "I'm using the Autumn gem to create a Ruby IRC bot for a game. However, it makes me feel rather embarrassed because I don't know how to test this kind of program... I think I should mock the IO process to have control over it, but I can't see how in this case when using the gem. Does anyone have an idea?"} {"_id": "106409", "title": "What should you include in a development approach document?", "text": "I'm in the middle of co-producing a \"development approach\" document for offshore resources as they ramp up onto our project. The most recent (similar) document our company has used is over 80 pages, and that does not include coding standards/conventions documents. My concern is that this document will not be consumable and will therefore fail. What _should_ be in a development approach document? Are there any decent guidelines on this topic? EDIT: The development approach document should detail the practices and techniques that will be used by software developers while software is designed, built, and tested."} {"_id": "111803", "title": "Questions about releasing an application that has been built on an IDE on an educational license", "text": "If an application is made during education using an IDE that had an Educational License (provided by the institute), and later the developers decide to release it in the market as a product: 1. Is it legal, and valid, to do so? 2. 
Is it valid to build that application with the free or commercial version of the same IDE, or does the project have to be re-coded or developed from the start, entirely on the commercial version of the IDE? 3. What permissions and licenses should be acquired if there is such a plan? 4. What are the other things to look for in this regard? 5. If there is such a plan, how should one prepare for it at the start of the project, like who to talk to and how to talk about it, and what paperwork etc. should be done so that in the end, there is nothing illegal?"} {"_id": "35594", "title": "How can I improve my problem-solving ability?", "text": "Everyone says the same thing: \"a real programmer knows how to handle real problems.\" But they forget how they learned this ability or where: it's not taught in schools. What can I do to improve my ability to tackle complex programming problems? What strategies have worked for you? Are there specific areas I should be focusing on, like algorithms or design patterns?"} {"_id": "210913", "title": "How can I organize my data flow to process a good program solution?", "text": "My main problem when trying to form a program is that while I have the tools at my disposal, I am too disorganized to properly use them to solve my problems. Often my process is simply trial and error until the program works. I am looking to see what others use to organize the process of writing programs."} {"_id": "94643", "title": "How do you improve your problem solving skills?", "text": "> **Possible Duplicate:** > How does one improve one's problem-solving ability? I'm focusing on becoming a better developer, and one area I'd like to focus on is improving my problem solving skills. The really good developers I know understand the problem very well and know how to approach HOW to find out what they don't know. I've often heard that a developer must know his problem domain well, as code can't be built upon a poor understanding of the business. I've almost never been in an environment where I was given time to sufficiently learn the business. Can others share their thoughts on best practices of how to approach solving a problem, or more importantly... how to best approach finding out the information you need to know in order to get to a reasonable answer or hypothesis? Are there any good books or resources for this?"} {"_id": "196684", "title": "What do small business people do with regards to 'legal' when selling software", "text": "For those who have sold software successfully over the Internet: after going to the effort of writing the software, creating a website, getting a domain name, hosting the website, setting up a merchant facility, creating a trial version, etc - what do you do about the legal aspect of allowing the version downloaded from the Internet to be used only within the intended scope? That is, from the perspective of say: * each workstation, or server, or CPU, etc counts as a CAL * Development vs Evaluation vs Production * support and upgrades * not allowing the user to on-sell * disallowing modifying/extending Is there a standard procedure / contract for this - i.e. are there documents in the public domain for this, or can I take one from another software vendor and tailor it, etc.? Or do I need to hand-crank it - i.e. get an accountant/legal-person to write it up?"} {"_id": "115587", "title": "Definition of a Software bug. Blizzard Entertainment insists that my \"bug\" is not a bug at all. 
Are they right?", "text": "According to Wikipepdia, > _A software bug is the common term used to describe an error, flaw, mistake, > failure, or fault in a computer program or system that produces an incorrect > or unexpected result, or causes it to behave in unintended ways._ Recently I've found a \"bug\" in StarCraft 2 which produces an unexpected result: http://eu.battle.net/sc2/en/forum/topic/2868627470 The problem is that if I keep StarCraft 2 minimized for a long time, the game does not disconnect or generate any form of timeout. It does disconnects however _after_ first battle and sometimes also loses game data (match statistics). Unfortunatly, according to Blizzard: > _The game is not designed to be kept minimized for such a long period of > time. (Blizzard) cannot consider such behaviour as erroneous as StarCraft II > is not meant to be minimized for hours._ So, is my \"bug\" really a bug?"} {"_id": "186714", "title": "Ria service security", "text": "I have a silverlight app that connects to a entity framework over WCF ria service. These calls have to be secure. What can I do so only valid users can call the ria service, and to make the call secure? The user has to log in to get to the silverlight app, so maybe that login in some way can be saved for authentication of the ria calls?"} {"_id": "196688", "title": "Considerations for beginning work on a unified search", "text": "I have become interested in creating a unified search for a corporate asset management database. My goal is to allow users to submit queries like: * `stuff in building 3210` * `stuff in building 3210 owned by Jim` * `stuff in building 3210 owned by org 2` * `people in org 2 who own stuff` * `stuff owned by people in org 2` * `people in building 3210` * `people in building 3210 who own stuff` While the application is written in C# I have no real issue with building a search layer in a different language if it is more appropriate to the task. Working with .NET I could see utilizing F#, Lucene or ANTLR to accomplish my goals, but was hoping for some sage advice before starting down any one path. What are some considerations when working with language recognition? What questions am I not asking that I should be?"} {"_id": "196689", "title": "Computer organization and software engineering", "text": "We have a course called Computer Organization. I am wondering if it is useful in terms of software engineering. These are some topics from the course: Instruction set architecture (ISA), ISA design considerations, RISC vs. CISC, assembly and machine language, programming a RISC machine. Computer arithmetic, arithmetic logic unit, floating-point numbers and their arithmetic implementations. Processor design, data path and control implementation, micro programmed control, exception detection. Pipelining, hazards, pipelined processor design, hazard detection and forwarding, branch prediction and exception handling. Memory hierarchy, principles, structure, and performance of caches, virtual memory, segmentation and paging. I/O devices, I/O performance, interfacing I/O. Should I learn all of those topics or are there any topics that i could skip learning? This is the online course page http://courses.bilkent.edu.tr/videolib/course_videos.php?courseid=16"} {"_id": "225241", "title": "Best practice with respect to anonymous classes in UI applications", "text": "When working with user interface based Java programs, one way of attaching behaviour to a certain actions _(e.g. 
to a button click)_ is through the use of anonymous classes. In the example below the GUI framework is SWT; however, I have the same issues with Swing or Android UI components, namely the structuring of my programs. MenuItem sampleMenuItem = new MenuItem(popupMenu, SWT.NONE); sampleMenuItem.addSelectionListener(new SelectionAdapter() { public void widgetSelected(SelectionEvent event) { doSomething(); // handle all table entries for (int i=0; i<10; i++) { doSomething2(); } doSomething3(); doSomething4(); } }); Of course, some might argue that the amount of code in the sample above already warrants the creation of a dedicated class that contains the logic. This is also suggested by Sonar's **\"Anonymous classes should not have too many lines\"** rule. Interestingly, this rule also specifies that: > squid : S1188 - While waiting for support of closure in Java, anonymous > classes is the most convenient way to inject a behavior without having to > create a dedicated class. But those anonymous inner classes should be used > only if the behavior can be accomplished in a few lines. With more complex > code, a named class is called for. However, since closures have not yet arrived in Java, **my question is whether there are any more elegant solutions than**: * writing a bunch of code within anonymous classes, which has all sorts of drawbacks _(limited reuse, slower navigating through the code, ...)_ * creating a huge number of dedicated classes which themselves all might have very limited functionality _(i.e.: over-engineering, ...)_ _To extend my question:_ I would also like to know what the best practices are with respect to this aspect of Java-based UI applications. Are there any well-established patterns?"} {"_id": "225246", "title": "Factory method - when objects need information to get initialized", "text": "Let's look at a simple example: assume that I have three classes implementing `IPersonRepository`: `SQLPersonRepository`, `WebPersonRepository`, `InMemoryPersonRepository`. I also have a `PersonRepositoryFactory` class, which is a simple Factory Method pattern implementation - it contains `GetPersonRepository` methods which take an enum/string as a parameter. How could I create a Factory pattern (or, in other words, centralize my object creation) when `SQLPersonRepository` needs a dependency on `SQLConnection` or a string with the database path, `WebPersonRepository` needs some HTTP settings object, and `InMemoryPersonRepository` has no dependencies? I could pass these in the Factory constructor or as method parameters, but it'd be ugly, unclean and unmaintainable, especially if I had more `IPersonRepository` implementations."} {"_id": "216450", "title": "How to practice object oriented programming?", "text": "I've always programmed in procedural languages and currently I'm moving towards object orientation. The main problem I've faced is that I can't see a way to practice object orientation in an effective way. I'll explain my point. When I learned PHP and C it was pretty easy to practice: it was just a matter of choosing something and thinking about an algorithm for that thing. In PHP for example, it was a matter of sitting down and thinking: \"well, just to practice, let me build one application with an administration area where people can add products\". This was pretty easy; it was a matter of thinking of an algorithm to register some user, to log in the user, and to add the products. Combining these with PHP features, it was a good way to practice. Now, in object orientation we have lots of additional things. 
It's not just a matter of thinking about an algorithm, but of analysing requirements more deeply, writing use cases, figuring out class diagrams, properties and methods, setting up dependency injection and lots of other things. The main point is that in the way I've been learning object orientation, it seems that a good design is crucial, while in procedural languages one vague idea was enough. I'm **not** saying that in procedural languages we can write _good_ software without design, just that for the sake of practicing it is feasible, while in object orientation it seems not feasible to go without a good design, even for practicing. This seems to be a problem, because if each time I'm going to practice I need to figure out tons of requirements, use cases and so on, it doesn't seem to be a good way to become better at object orientation, because this requires me to have one whole idea for an app every time I'm going to practice. Because of that, what's a good way to practice object orientation?"} {"_id": "237683", "title": "How to deal with a misnamed function in production code?", "text": "I've recently come across a Python library on GitHub. The library is great, but contains one glaring typo in a function name. Let's call it `dummy_fuction()` while it should be `dummy_function()`. This function is definitely \"in the wild\" and most likely used in embedded systems. The first thing that springs to mind is to add a second version of the function with the correct name and add a deprecation warning to the first version for the next release. Three questions: 1. Could the approach above have any unintended consequences? 2. Is there a standard approach to this kind of problem? 3. How long should any deprecation warning be left in place?"} {"_id": "179139", "title": "How to identify a software development framework?", "text": "Based on what information can we identify something as a software development framework? For example, the Wikipedia article on 'software framework' claims it should include support programs, compilers, code libraries, etc. But there are some companies I know of which call a code library a 'framework'! What should a certain development environment contain to be considered a 'framework'?"} {"_id": "66139", "title": "When can you offer innovation without being off-task?", "text": "**When can you offer innovation, without being off-task?** I can't understand how creative innovation fits in Agile and other methodologies. **People just want what they asked for, right?** Say you're following an efficient methodology like Agile/Scrum. When can you offer innovation, without being off-task? Say you're making an app and doing some creative stuff with the UX. The client gave you implicit instructions, and you're staying on track. Imagine you discover an innovation that's not a beta. If you introduce innovations the wrong way, you could appear off-task."} {"_id": "66138", "title": "Why does PHP have interfaces?", "text": "I noticed that as of PHP5, interfaces have been added to the language. However, since PHP is so loosely typed, it seems that most of the benefits of using interfaces are lost. Why were they included in the language?"} {"_id": "136016", "title": "Putting audit functionality into the database", "text": "Our database does not have any audit functionality. 
* It does not record who inserts a record or who changes it * It does not keep a history of the changes made * Nothing can be restored if something goes wrong * There are many other shortcomings; for example, the database does not have any data integrity: anything can be thrown in, with no checks on different fields, duplicates, etc. My question is, whose job is it to put this functionality in? An ASP developer who knows the basics of SQL and who interacts with the database regularly? Or an SQL administrator whose sole job is to work with the databases, optimize them and maintain them? At this point we do not have an SQL administrator, but we can hire one. How big of an undertaking should this be for a developer, to fix the above issues?"} {"_id": "66131", "title": "Can you copyright code under 2 separate entities?", "text": "I am wondering if it is possible to copyright a piece of software under 2 separate entities, allowing both entities full rights to do whatever they want with the software, while not allowing one of the entities to be able to revoke privileges from the other? Basically, I am working on a team and not getting paid, so it is not a paid-for-hire situation, and I am probably going to be building a web platform system from scratch in C#. I want to be able to make sure that, in the event the team breaks up or I am kicked off the team, I will still be able to continue work on the web platform system without any legal issues. It is not that I don't trust the team, as I do, but life has taught me to always cover my own ass. Would a better way to handle this be to copyright the code under my name and then just give the team a license to use the code however they see fit? I also know that any advice I receive here is most likely not from a lawyer, and even if it is, any advice given may not be 100% accurate according to the law."} {"_id": "185250", "title": "QR Codes as Booking Confirmations for Conference?", "text": "A client of mine is holding a conference and we have the task of creating a booking system for them. However, they have requested that we use QR codes so that at the door, a person can simply present their QR code, be scanned, and boom! they are signed in. This isn't so much of a problem, because I thought, well, I could use a long URL to connect to our DB and sign the person in, mark them as booked in/confirmed, and be done with it. That's all very easy; the problem then is: what if the person scans the QR code themselves? How do I ensure that only the people who are on the door of the conference have the power to scan the barcodes and sign people in? I am really limited to PHP / jQuery; if I knew Xcode I would write an app, but I don't. Thoughts I had: 1. Get the IP of the local WiFi, and only accept requests from that (however, that does not stop the public from signing in) 2. Use some variable in $_SERVER[] that I could map as coming from a certain person's phone only."} {"_id": "181948", "title": "How to organize MVVM files in solution", "text": "I'm fairly new to the MVVM concept, but I like a lot of the flexibility it gives me so far. However, I'm struggling to find a good way to manage my code. I have several classes that are just sitting in a folder in my solution, such as `xxxView.cs`, `xxxViewModel.cs`, `yyyView.cs`, `yyyViewModel.cs`, `zzzView.cs`, `zzzViewModel.cs`, you get the idea. It has started to crowd my solution, making it harder to find the files I'm looking for. Is there some standard way to organize these files? 
Do I create a **View** and **ViewModel** folder to separate and clean up the solution, or have people found a better way?"} {"_id": "181944", "title": "What are the challenges related to typing in writing a compiler for a dynamically typed language?", "text": "In this talk, Guido van Rossum is talking (27:30) about attempts to write a compiler for Python code, commenting on it saying: > turns out it's not so easy to write a compiler that maintains all the nice > dynamic typing properties and also maintains semantic correctness of your > program, so that it actually does the same thing no matter what kind of > weirdness you do somewhere under the covers and actually runs any faster **What are the (possible) challenges related to typing in writing a compiler for a dynamically typed language like Python?**"} {"_id": "204335", "title": "Software Design Stability, YAGNI and Agile", "text": "I've met the criterion of good system design as its stability relative to requirements change. Small requirement changes should raise only small changes in design. Yet I have a gut feeling that for almost any somewhat complex software system it's possible to state a set of small changes in requirements which cause an unacceptable amount of change in the software. This feeling is based on personal experience (though not very extensive) and the fact that schedules are often broken. The community can argue against this statement in case of strong disagreement, but the question only has meaning if you agree. The YAGNI principle works well against overengineering, and Agile methodology brings it to the whole process of software development: the system evolves incrementally, and this perfectly conforms to the stability concept I started this post with. But... If we agree, however, that there are some instability points, shouldn't we localize these cases to understand the implications from the very beginning? And how is that possible if we design the system in an agile, incremental manner?! P.S. An interesting part of the dialog with my colleague: * Nothing is wrong with incremental software development. * Can you imagine the costs of changing the architecture of Curiosity's firmware after it landed on Mars? * Hey, but they did change the firmware! * Yes, but before that, they had carefully designed this feature. And if not... that would be the point of no return."} {"_id": "25978", "title": "Begin and Finish or Pre and Post in async call pair?", "text": "In an async call pair, would you rather have a BeginDoSomething & FinishDoSomething pair or a PreDoingSomething & PostDoingSomething pair? Cheers"} {"_id": "139377", "title": "Current iOS version/device statistics?", "text": "The answer to this SO question has become stale: \"iOS version/device statistics - where can I find?\", because time currency wasn't part of that question, and iOS version updates have been released since this question was asked. Is there a web site or other publicly available source which keeps a current or frequently updated list of the percentages of iOS devices and OS versions in use, perhaps by continual monitoring of app analytics or web site logs or other means? And what device or OS information are iOS app analytics currently allowed to report, if any? (...assuming an appropriate privacy policy and adhering to such, of course.)"} {"_id": "103318", "title": "iOS version/device statistics - where can I find?", "text": "Since I am developing iOS apps, I'm trying to find stats on how many iOS devices there are out there. Preferably broken down into versions: iPhone 3G, iPhone 3GS, iPhone 4, etc. 
Also, I'd like to know the stats on different iOS versions. After an hour of googling I found out that it is not the easiest thing to find. Anyone got some nice bookmarked links to share?"} {"_id": "196355", "title": "What are the drawbacks of immutable types?", "text": "I see myself using more and more immutable types _when the instances of the class are not expected to be changed_. It requires more work (see example below), but makes it easier to use the types in a multithreaded environment. At the same time, I rarely see immutable types in other applications, even when mutability wouldn't benefit anyone. **Question:** Why are immutable types so rarely used in other applications? * Is it because it takes longer to write code for an immutable type, * Or am I missing something, and there are some important drawbacks when using immutable types? ## Example from real life Let's say you get `Weather` from a RESTful API like this: public Weather FindWeather(string city) { // TODO: Load the JSON response from the RESTful API and translate it into an instance // of the Weather class. } What we would generally see is (new lines and comments removed to shorten the code): public sealed class Weather { public City CorrespondingCity { get; set; } public SkyState Sky { get; set; } // Example: SkyState.Clouds, SkyState.HeavySnow, etc. public int PrecipitationRisk { get; set; } public int Temperature { get; set; } } On the other hand, I would write it this way, given that getting a `Weather` from the API and then modifying it would be weird: changing `Temperature` or `Sky` wouldn't change the weather in the real world, and changing `CorrespondingCity` doesn't make sense either. public sealed class Weather { private readonly City correspondingCity; private readonly SkyState sky; private readonly int precipitationRisk; private readonly int temperature; public Weather(City correspondingCity, SkyState sky, int precipitationRisk, int temperature) { this.correspondingCity = correspondingCity; this.sky = sky; this.precipitationRisk = precipitationRisk; this.temperature = temperature; } public City CorrespondingCity { get { return this.correspondingCity; } } public SkyState Sky { get { return this.sky; } } public int PrecipitationRisk { get { return this.precipitationRisk; } } public int Temperature { get { return this.temperature; } } }"} {"_id": "199677", "title": "Source code of jar.exe - is it available", "text": "This may seem an odd question, but I want to create an executable, written in C++, which runs under Windows. The program needs to be able to update a jar file even if Java is not installed on the target machine. I have seen the src.zip in the Java JDK folder and the jar folder, but this is Java code. I assume jar.exe is written in C or C++? Is it possible to see the jar.exe source code? If so, where would I find it?"} {"_id": "23481", "title": "How to go about working as a Contract/Consultant remotely", "text": "I currently have a full-time job, but my company is thinking of turning me into a Contract/Consultant due to the fact that I'm now in a different city and have to work remotely. I know contract positions are usually hired through agencies; in this case, since I'll be dealing directly with my company, I told them that I'm only going to charge the same amount as if I were to get a contract job from an agency. This way I'm still getting a competitive rate, and my company gets to pay a lower rate (no agency to collect a cut). Some background facts: 1. 
I have a typical team - some are keeners, others not so much; the tech lead/manager tries to do their best to achieve deadlines, which means the workload is not always spread out evenly, but this is probably true for any team out there. 2. I enjoy my work so far as a software developer; I have a lot of responsibilities and I'm handed meaningful work most of the time. 3. My team members are mostly young, just like me; ideally I'd like to have experienced senior developers/architects on the team to bounce ideas off, but so far that hasn't happened. 4. Our team produces desktop applications written in C#; this means no web applications are involved, and I sometimes wonder if this means I'm \"pigeon-holed\". Back to the Contract possibility, I have the following concerns: 1. Would this actually be a good career move? (this is my biggest concern) As our HR pointed out to me, such a contract position can be terminated once they decide that I'm not needed anymore, or if the company needs to cut its budget, after which I'll have to look for new work. Will \"Contract/Consultant\" on the resume hurt my job-searching? (I'll be doing more architectural / design work once I'm in this new contract role) 2. I'll be working remotely most of the time; I wonder if this would hurt or help my growth towards being an architect. 3. Should I be a sole proprietor, or should I get incorporated? What would be the advantage of either? 4. There will not be any medical/dental/other benefits (basics are covered by OHIP Canada); should I look into buying private insurance? If so, how expensive would it be? 5. My company is looking at putting on restrictions such as: I will not be allowed to work overtime (cannot charge more than the normal # of hours per day), and I will need to pay for all expenses (other than equipment) such as travelling and courses <- I'm fine with the 2nd one; is the 1st one normal for contract positions, to be asked to complete your tasks but without any overtime? Does the above sound familiar to anyone? Any suggestions/advice you can give me on what I should think about / watch out for?"} {"_id": "144173", "title": "What are some useful things you can do with Mvc Modelbinders?", "text": "It occurs to me that the ModelBinder mechanism in ASP MVC public interface IModelBinder { object BindModel(System.Web.Mvc.ControllerContext controllerContext, System.Web.Mvc.ModelBindingContext bindingContext); } is insanely powerful. What are some cool uses of this mechanism that you've done/seen? I guess since the concept is similar in other frameworks, there's no reason to limit it to ASP MVC."} {"_id": "34614", "title": "Have you ever worked at branding your software?", "text": "Clearly, if I want my software to attract enough eyeballs I ought to brand it. Unless of course there isn't any competition. I'd like to understand your thoughts on how to go about branding your software."} {"_id": "109094", "title": "Why doesn't Google use GWT in most of its applications?", "text": "Google created Google Web Toolkit (GWT) and doesn't use it when building their own web applications. Does this mean GWT is not suitable for building dynamic applications? Or that it has caching problems? Or RPC problems? Or are there other concerns that prevent Google from using this technology? 
`EDIT`: I'm not saying here that Google has never used GWT in any app; what I want to ask is, why are they not using it on a wider scale?"} {"_id": "72467", "title": "How important is working with a team?", "text": "I am about to graduate from college with a Masters in computer science. I have a couple of offers that I am considering, and to me, the biggest difference between the two jobs comes down to one thing - how important is it for a recent grad to be part of a team of software engineers? Will it help me grow more in terms of a career in software engineering? Will it help me become a better software engineer, since I can get guidance from the experienced engineers? The first offer is with a small telecom company. I've been interning here for a while and love the people I work with. Also, this is where I live currently, and so there is no relocation required. The problem is that I'm pretty much the only software engineer here. Everyone else is either a network engineer or a system integrator. My goal is to have a career in software development, and I am concerned about my opportunity for growth. Mainly, I've been creating internal tools and writing scripts for managing networks. My director says he plans on expanding our software team eventually. Also, not having a software engg team means that Stack Overflow is pretty much my mentor, not that that's a bad thing. The second offer is actually with a top tech company. A plus with this offer is that I will be part of a team of software engineers, and I hope that I will have the opportunity to learn a lot from them. I'm not really sure if that's what will happen, since I've never worked in a software engg team outside of college. Cons of this job are: * have to relocate more than 1600 miles away * northwest, so not exactly pleasant weather year round. I'm from Texas * I liked the manager and the other interviewers, but there is no guarantee that I will like the team * pay including benefits is somewhat lower than at the small company What do you think? Please help me out. Thanks."} {"_id": "96770", "title": "What features contributed to the evolution of Pascal?", "text": "I am compiling a detailed history of the Pascal language, and there are a few details I am missing. There are so many features today that we take for granted. What features significantly contributed to the evolution of Pascal, and why were they significant? I'm looking for language features, not platform or framework features. So, like operator overloading or default parameters, but not Linux support or a new Rich Text widget. I know there are a few different flavors of Pascal (Delphi, Free Pascal, Oxygene, Quick Pascal, Apple Pascal, etc.) and they introduced the same features at different times and in parallel. That is OK. I'm looking at the Pascal language as a whole, and **_when_** the significant milestones occurred (dates, versions, etc.)"} {"_id": "177268", "title": "Looking for a real-world example illustrating that composition can be superior to inheritance", "text": "I watched a bunch of lectures on Clojure and functional programming by Rich Hickey as well as some of the SICP lectures, and I am sold on many concepts of functional programming. I incorporated some of them into my C# code at a previous job, and luckily it was easy to write C# code in a more functional style. At my new job we use Python, and **multiple** inheritance is all the rage. My co-workers are very smart, but they have to produce code fast given the nature of the company. 
I am learning both the tools and the codebase, but the architecture itself slows me down as well. I have not written the existing class hierarchy (nor would I be able to remember everything about it), and so, when I started adding a fairly small feature, I realized that I had to read a lot of code in the process. On the surface the code is neatly organized and split into small functions/methods and not copy-paste-repetitive, but the flip side of being not repetitive is that there is some magic functionality hidden somewhere in the hierarchy chain that magically glues things together and does work on my behalf, but it is very hard to find and follow. I had to fire up a profiler and run it through several examples and plot the execution graph, as well as step through a debugger a few times, search the code for some substring, and just read pages at a time. I am pretty sure that once I am done, my resulting code will be short and neatly organized, and yet not very readable. What I write feels declarative, as if I were writing an XML file that drives some other magic engine, except that there is no clear documentation on what the XML should look like and what the engine does, except for the existing examples that I can read as well as the source code for the 'engine'. There has got to be a better way. IMO, using composition over inheritance can help quite a bit. That way the computation will be linear rather than jumping all over the hierarchy tree. Whenever the functionality does not quite fit into an inheritance model, it will need to be mangled to fit in, or the entire inheritance hierarchy will need to be refactored/rebalanced, sort of like an unbalanced binary tree needs reshuffling from time to time in order to improve the average seek time. As I mentioned before, my co-workers are very smart; they just have been doing things a certain way and probably have an ability to hold a lot of unrelated crap in their head at once. I want to convince them to give a compositional and functional, as opposed to OOP, approach a try. To do that, I need to find some very good material. I do not think that a SICP lecture or one by Rich Hickey will do - I am afraid it will be flagged down as too academic. Then, simple examples of Dog and Frog and AddressBook classes do not really convince one way or the other - they show how inheritance can be converted to composition, but not why it is truly and objectively better. What I am looking for is some real-world example of code that was written with a lot of inheritance, then hit a wall and was re-written in a different style that uses composition. Perhaps there is a blog or a chapter. I am looking for something that can summarize and illustrate the sort of pain that I am going through. I have already been throwing the phrase \"composition over inheritance\" around, but it was not received as enthusiastically as I had hoped. I do not want to be perceived as the new guy who likes to complain and bash existing code while looking for a perfect approach and not contributing fast enough. At the same time, my gut is convinced that inheritance is often the instrument of evil and I want to show a better way in the near future. Have you stumbled upon any great resources that can help me?"} {"_id": "194857", "title": "Assuring Quality and Bug Fix speed in Open Source Python Project", "text": "I'm maintaining an open source framework (in Python on *nix platforms, if that matters) for the first time in my life. 
{"_id": "194857", "title": "Assuring Quality and Bug Fix speed in Open Source Python Project", "text": "I'm maintaining an open source framework (in Python on *nix platforms if that matters) for the first time in my life. It is pretty much pre-alpha, not much more than a scientific proof of concept, yet. But it is also already used in production by another department because it is the only framework globally that approaches their needs. Now there are two opposing goals: quality and development speed. Of course I want quality in the form of documentation, unit tests, code reviews and some kind of \"beta\" usage, before I am confident to let a change be used in production. But the development team has deadlines, and when they find a bug, which happens rather often in this prototype, then they need the bug fix to be in production very fast. I currently have no working solution and our ideas diverge. I think this project can't be the only one with that problem. How do other projects solve this? I'm going to post my idea and the dev team's idea as answers for further discussion; neither way is a solution though, because my idea basically only focusses on quality and their solution only focusses on speed."} {"_id": "100127", "title": "How do I view Scala code without all the syntactic sugar?", "text": "I have been studying Scala, but what I keep running into is the optimization of syntax. I'm sure that will be great when I am an expert, but until then.. Not so much. Is there a command or a program that would add back in all/most of the code that was optimized away? Then I would be able to read the examples, for example."} {"_id": "194855", "title": "What is \"Semantics visibility\"?", "text": "I'm reading 97 Things Every Programmer Should Know; I'm currently at \"Apply Functional Programming Principles\", and there is a paragraph that says: > ...A leading cause of defects in imperative code is attributable to mutable > variables. Everyone reading this will have investigated why some value is > not as expected in a particular situation. Visibility semantics can help to > mitigate these insidious defects, or at least to drastically narrow down > their location, but their true culprit may in fact be the providence of > designs that employ inordinate mutability... What is the objective of visibility semantics in this context? How do I apply this approach to solve the insidious defects of mutable variables?"} {"_id": "92610", "title": "Is it worth developing custom shopping cart?", "text": "We have a handsome library of cakephp modules at my workplace, and we develop custom websites at a good pace until the shopping cart comes our way to slow down the process. I have used various ( _Magento, Opencart, Zencart_ ) shopping carts in different projects, where we have to merge them with our core cakephp application. Usually very custom requirements make the shopping cart **non-upgradable** and take a **lot of time**. I am thinking of making our own shopping cart ( _quite basic at present, to be extended as we move on_ ) from scratch so it can adapt to the custom requirements easily. Is it worth doing? # UPDATE 24-Aug-11 I continued developing our own shopping cart. Here are my experiences that I want to share with you guys. Benefits 1. New cart is easy to change and extend. 2. It saves time when we have vague or custom requirements, and allows us to directly import modules from our existing code library. 3. No need for dual template implementation for cart & custom website. 4. Single admin panel for our shopping cart & custom website. Limitations 1. Still not mature enough w.r.t. other carts in the market. 2. Security concerns. We mostly rely on cakephp security. 3. Lacks functionality Problems faced 1.
Developing Shipping/Payment gateways was the real pain. As @davidhaskins pointed out, **it saved us significant time which we might have spent hacking standard shopping carts to meet our needs**"} {"_id": "177262", "title": "Project Dashboards", "text": "I'm attempting to create a dashboard so that people not intimately involved with the project can get an indication of the project's health. I'm struggling with determining what to put on said dashboard. I think it needs to be brief to be useful, yet complete. The project I'm working on depends on 3rd party contractors, external hardware, and of course my team's effort. Are there any suggestions or guidelines on how to encapsulate it all in a relatively easy manner? Mods, I believe this question falls squarely between _development methodologies_ and _business concerns_ as outlined in the faq. Thank you!"} {"_id": "194859", "title": "Security in Authentication in single page apps", "text": "What's the most secure method of performing authentication in a single-page app? I'm not talking about any specific client-side or server-side frameworks, but just general guidelines or best practices. All the communications are transferred primarily through SockJS. Also, OAuth is out of the question."} {"_id": "177260", "title": "Hash Algorithm Randomness Visualization", "text": "I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be to have it almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being generated. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold I suppose: **A)** How does one truly interpret the data he shows? Is it more than \"less whitespace = better\"? **B)** How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm mis-applying this to hash tables rather than just hashes in general, but in that case I don't know how it would be \"bounded\" for the image."} {"_id": "191089", "title": "OwnCloud App Licensing", "text": "I'm a bit confused about the OwnCloud licensing model. OwnCloud is an AGPL licensed product (at least the open source version is). Does that mean that the only license I can use for an OwnCloud app I write using the API would be AGPL as well? The way I understand it, the AGPL license is so restrictive that no app can be written without it having to be released under the AGPL, which would include custom apps for customers who don't want to spend a small fortune on the OwnCloud Enterprise edition. Somehow that feels wrong, so I might have gotten it all wrong and would be happy if somebody with a clear understanding of this topic could shed some light on this."}
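Back on the hash-visualization question (177260) above: a low-tech way to see how evenly a hash spreads its inputs, without generating bitmap images, is to bucket a batch of keys and print the occupancy as a text histogram - flat bars suggest an even spread, clusters of tall and empty buckets show bias. A minimal Python sketch; the choice of FNV-1a, 64 buckets and sequential keys is illustrative only, not taken from the original answer:

```python
def fnv1a(data: bytes) -> int:
    # 64-bit FNV-1a: standard offset basis and prime.
    h = 0xCBF29CE484222325
    for byte in data:
        h = ((h ^ byte) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return h

BUCKETS = 64
counts = [0] * BUCKETS
for i in range(10_000):
    counts[fnv1a(f"key-{i}".encode()) % BUCKETS] += 1

# One bar per bucket; each '#' represents ten keys.
for i, c in enumerate(counts):
    print(f"{i:3d} | {'#' * (c // 10)} ({c})")
```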
{"_id": "92619", "title": "Revenue sharing with customer who is unable to pay development fee", "text": "I have a potential customer who has an idea for an iPad application but is unable to find sufficient funding for this. One idea that came up is that I do the work either for free or for a minor fee and then receive a percentage of the income from the App Store. How do I decide what percentage is realistic? How is this affected by the price in the App Store and how do I protect myself from the scenario where the customer suddenly decides to offer the app for free?"} {"_id": "187040", "title": "Have there been disputes when software identifies genders with a boolean?", "text": "I remember that in our first programming class with Java, while explaining data types, the following occurred (kinda): > **Professor** : So, what data type would you choose if your program needs to > store the user's gender? > > **Someone** : How about a boolean? You know, true for male and false for > female. > > **Professor** : Sure, that may work, but some people may hesitate about > that. There have been disputes about calling men \"true\" and women \"false\" in > the past... > > * _laughter_ * He ended up recommending that we consider chars (like `'m'` and `'f'`), although booleans should be fine. I tried searching a bit for any kind of historical record of a major dispute based on this programming practice, with no luck. I'm not asking what data type to use for gender nor if it is fine or not to use booleans. I'm asking if, **historically** , there has been a dispute regarding the usage of booleans to determine the gender in programming because of the apparently \"wrong\" (I'm not saying it _is_ wrong - I don't care about that) behavior of calling a woman \"false\" as the professor seemed to suggest. Searching around related questions, the results happen to be only about software efficiency."} {"_id": "187042", "title": "codeigniter pagination - how to have multiple sets of pagination links on one view", "text": "**Problem** I don't know how to create two sets of pagination links on one view. **Background Information** I have a situation where I have parent / child tables that I have to display on the same view - perhaps in two different sections. Each section needs its own set of pagination links. The contents (and therefore, pagination links) in section 2 will change depending on what is selected in the parent section / section 1. The parent is what I call \"subcategories\" and the child table is \"products\". So when the user comes to the first page, all categories are shown. But once they select a specific category, I want to take them to this view that I'm discussing which will show any subcategories... and any products. **Possible Solution** I'm thinking that I need the following methods in my controller: public function getsubcategories($categoryID) { 1. find data you need to display 2. set up pagination object 3. use add_js method to add ajax call to a method \"get_products_in_category\" Ajax method to be invoked on document.ready and will return product data. 4. load view. } public function get_products_in_category($categoryID) { 1. look up products in category 2. build html to display products. 3. return string with all html to display product information ??? can I create another pagination object here??? } If you have any suggestions on what's wrong with my idea / how I can accomplish this, I'd really appreciate it. Thank you. **Edit 1** While waiting for a response, I went ahead and tried a little test... and it doesn't seem to be working. I still have the issue where the pagination links displayed in section 2 actually manipulate the data in section 1. I'm going to post this question on Stack Overflow.
Thank you."} {"_id": "228478", "title": "Testing all combinations", "text": "I need to do some performance measurements inside my application. I want to measure, change some parameters, measure again. There are different algorithms I want to test, and there are various parameters which interact with each other in that the total performance depends on all of the parameters (but the parameters themselves don't influence each other. e.g. if I set x to 5, it will always stay 5 and changing some other parameter won't change x). I think the total number of combinations is quite high, at least enough so that I don't want to manually change everything and test out each possibility by hand. I'm looking for a piece of lightweight piece of software architecture (I dare say a design pattern) that basically enables me to define a set parameter types relevant to an algorithm, the possible values of those types, and that piece of code should then run through all combinations of those types and their values, for each one doing the required stuff (calling some functions to change values, etc.) and then executing the algorithm. Example: An algorithm depends on values x, y and z. x can be either 0 or 1, y can be \"hello\" or \"goodbye\", and z can be in the range [0,100]. The solution I'm looking for starts with [0,\"hello\",0], calls some functions to set the values of those variables, lets the profiling run for some time, then changes to [0,\"hello\",1], repeat, [0,\"hello\",2]... etc. This is probably something that people have needed to solve before. How do I solve this elegantly?"} {"_id": "50049", "title": "Two internships at the same time -- good or bad?", "text": "I had no internship a few months ago, so I basically went on a 'resume mailing' spree and emailed a _lot_ of companies that I was interested in working for and that had my line of work. This didn't prove futile until a company accepted me into their internship program but said that I would be working remotely. I had no problem with that, the project was good and I was interested. Now I have another internship at a company that is close to my home and I don't want to miss it at all! I can manage both internships side-by-side. In the day, I will do the internship that is closer to my home and at night (and other times), I can manage the remote internship. My question is -- should I both? I am particularly interested in how two internships at the same time are viewed. Would it look good or bad? PS: Neither is paying me anything, so money is not a factor."} {"_id": "228472", "title": "Include GPLv2 licensed data in MIT licensed project", "text": "I'd like to include some data from a GPLv2 licensed project in my MIT licensed project. More specifically, I want to use the data from the other project as the training data for my machine learning algorithm and I'd also like to include the trained model in my project. I don't want to include the whole project source code, just those data files. I will not modify them. I also want to have the trained model in my project which I think is derived work? Can I create a folder for those data files, add a copy of the GPLv2 license, make it clear that my project is MIT licensed apart from that folder which contains GPLv2 licensed files? Does the trained model also have to be released under GPLv2? If so, can I also keep it in that folder?"} {"_id": "228470", "title": "Socket Connection Data Insert", "text": "I have been working through a high-performance application where I have identified a bottleneck. 
{"_id": "228470", "title": "Socket Connection Data Insert", "text": "I have been working through a high-performance application where I have identified a bottleneck. The bottleneck occurs when the application must insert messages from a socket; it records them in a database. The trouble is that the messages come through so fast that the application cannot catch up, due to poor architectural planning. Clearly, insertion needs to be improved. So here's the question: What is the optimal format for performing inserts of individual records? I have looked over at this link on high performance inserts revision 4 and found that doing something of this nature would require an extensive restructure in the application compared to its current attempt at concatenating a string together such as this: INSERT INTO table (col1,col2,col3) VALUES (val1,val2,val3),(val4,val5,val6),(val7,val8,val9) Does anybody have any insight into which is better, **datatable insertion** vs **string concatenation**?"} {"_id": "86461", "title": "Writing Web \"server less\" applications", "text": "### TL;DR What are the prospects of writing applications which are completely based on a **REST** database server (CouchDB) and web applications which directly access the DB instead of having a web server in between? * * * I recently started looking up some NoSQL databases. MongoDB seems to be a popular choice. I also liked the project. But I personally liked the REST interface of **CouchDB**. So what I wanted to know is if there was the possibility of applications (maybe cached apps in a web browser, a Chrome extension, etc.) which could just query the database directly with no requirement of a web server in between. All the computational logic would reside in the client application and the database will do what it does, **CRUD**. Since most client frameworks (I don't know of one which doesn't) support REST queries, it could be a good way of writing applications well optimized for the respective framework. These applications, though, won't be doing complicated computation, but would still provide enough functionality to replace lots of conventional applications. _Are there existing resources and projects which would help me move towards writing such applications, and what is the scope of developing in this way?_ _Are there any technical/security issues with this?_ * * * This post will help me decide to look into projects like CouchDB (and maybe Dive into Erlang later) or stay with the conventional frameworks (like Django) and SQL databases. ### Update A specific point of such apps I had in mind is the creation of _offline applications_ just by replicating CouchDB data on the client."} {"_id": "145803", "title": "A software design pattern to model runtime-dependent behavior", "text": "In an interview I was asked, _Suppose we are going to create software that runs on both desktop machines and smartphones. Name a software design pattern that could be used to enable the application to create different classes for display at runtime depending on the platform._ I know there are simple solutions to implement this feature in the actual code. For example, in Java I can check the display size and create the suitable class (`MobileDisplay` or `DesktopDisplay` class) for that display. But I don't know how this is related to the software design. IMO creating a suitable class based on the runtime platform is more an implementation concern than a software design issue."}
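On the interview question just above, the pattern usually meant is Factory Method (or, for whole families of platform classes, Abstract Factory). A minimal Python sketch; MobileDisplay and DesktopDisplay mirror the names in the post, while the 600-pixel threshold and detect_display_width are invented for illustration:

```python
class MobileDisplay:
    def render(self) -> None:
        print("compact layout")

class DesktopDisplay:
    def render(self) -> None:
        print("full layout")

def display_factory(width: int):
    """Factory Method: the runtime platform check lives in exactly one place."""
    return MobileDisplay() if width < 600 else DesktopDisplay()

# display_factory(detect_display_width()).render()  # hypothetical platform probe
display_factory(480).render()   # compact layout
display_factory(1920).render()  # full layout
```

The design-level point is that callers depend only on the factory and the shared render interface, so adding a TabletDisplay later touches one function rather than every call site.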
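For the \"Socket Connection Data Insert\" question above, the usual alternative to hand-concatenated multi-row INSERT strings is a parameterised batch insert. A minimal Python sketch; sqlite3 is used only to keep it self-contained, but the same executemany idiom exists in most DB-API drivers, and server-specific bulk APIs (e.g. SqlBulkCopy on SQL Server) are faster still:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (col1 TEXT, col2 TEXT, col3 TEXT)")

# Values mirror the question's example rows.
batch = [("val1", "val2", "val3"),
         ("val4", "val5", "val6"),
         ("val7", "val8", "val9")]

# One statement per batch; the driver handles quoting and escaping,
# which hand-concatenated SQL strings do not.
with conn:
    conn.executemany(
        "INSERT INTO messages (col1, col2, col3) VALUES (?, ?, ?)", batch)

print(conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0])  # 3
```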
{"_id": "228475", "title": "REST and RPC in multi-tier API", "text": "My team is developing a multi-tier API with scalability and modularity in mind. The public access point of the API is fully REST. However, we are splitting the data access layer out as another tier in our architecture and this layer will be on another physical server. We decided that RPC would be a good communication protocol between the public-facing API and the private data access layer. We decided to use RPC over REST for the private communication between our tiers because: * It avoids duplicating REST routes between tiers * Transparency in executing code/functions on different servers * The communication is private. With good documentation there shouldn't be any problems for the team to understand the communication protocol. My questions are: * Did we miss advantages or disadvantages in the choice of communication protocol for the private communication between tiers? * What is normally used for communication between tiers in a multi-tier architecture, especially for web APIs?"} {"_id": "50043", "title": "Is OrientDB document-database? or graph-database?", "text": "Some documents say OrientDB is a document database; others say it's a graph database. Which is right?"} {"_id": "32618", "title": "Solid principles vs YAGNI", "text": "When do the SOLID principles become YAGNI? As programmers we make trade-offs all the time, between complexity, maintainability, time to build and so forth. Amongst others, two of the smartest guidelines for making choices are in my mind the SOLID principles and YAGNI. If you don't need it, don't build it, and keep it clean. Now for example, when I watch the dimecast series on SOLID, I see it starts out as a fairly simple program, and ends up as a pretty complex one (and yes, complexity is also in the eye of the beholder), but it still makes me wonder: when do SOLID principles turn into something you don't need? All SOLID principles are ways of working that enable us to make changes at a later stage. But what if the problem to solve is a pretty simple one and it's a throwaway application, then what? Or are the SOLID principles something that always apply? As asked in the comments: * Solid Principles * YAGNI"} {"_id": "176497", "title": "How to keep the trunk stable when tests take a long time?", "text": "We have three main test suites: * A \"small\" suite, taking only a couple of hours to run * A \"medium\" suite that takes multiple hours, usually run every night (nightly) * A \"large\" suite that takes a week+ to run We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then, the medium suite runs every night, and if in the morning it turned out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of nightly frequency, is done for the large suite. Unfortunately, the medium suite does fail pretty frequently. That means that the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain it's stable, and if a test fails I cannot know for certain if it's my fault or not. **My question is, is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape?** e.g. \"commit into a special precommit branch which will then periodically update the trunk every time the nightly passes\". And does it matter if it's a centralized source control system like SVN or a distributed one like git?
By the way, I am a junior developer with a limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing."} {"_id": "233980", "title": "JavaScript: Bundle a required, but common, polyfill in my library?", "text": "First, here are **a couple of related, but not-quite-the-same** questions: * Should I include dependencies for which I have the source as projects in my solution? * Depending on another open source library: copy/paste code or include Now on to **my particular question:** I'm building a small JavaScript library. If run within certain older browsers, part of its functionality will depend on a certain polyfill. For the purposes of an example, let's say this polyfill is the `requestAnimationFrame` polyfill. This polyfill is: * Well-known: Most JavaScript developers are at least vaguely aware of it. * Available: Copy-pasteable code snippets of variants of it are all over the place. * Tiny: It's only a few lines of code, really. * Endemic: Libraries in the same problem space as mine may have it bundled already. Obviously, if my library were the only library being used in someone's project, and if part of that code could fail without the polyfill in circumstances likely for the developer's audience, it would make sense to bundle the polyfill with the library. But in the JavaScript world, particularly in this library's domain, it's possible that the developer already has the polyfill, and that by bundling it, I could be re-polyfilling `requestAnimationFrame` for the second, third, fourth time over. (This itself isn't really an issue, since most polyfills by their very nature include a pre-check of the namespace. But, I admit, the thought of the _same polyfill_ appearing multiple times in someone's code bothers me like a painting hung slightly crooked.) So what is the sensible thing to do? Include it? Or just leave a note in the documentation that says, \"By the way, make sure you have the Polyfill X if you need to support Browser Y\"?"} {"_id": "233987", "title": "Is it legal to distribute a closed source X Window Manager?", "text": "I'm doing a thought experiment about making a product on top of Linux. I'm wondering: If you make a custom window manager (akin to KDE, for example) on top of X and you release it, do you have to release it under the GPL (Linux) or MIT (X.org)? Or can you keep it closed source?"} {"_id": "96809", "title": "Why isn't there a Boolean for x values of a variable?", "text": "> **Possible Duplicate:** > Would you see any use of a Trilean (True, False, ??) Well first and foremost, I'm not a programmer, I am a civil engineer who does some programming and quite enjoys it. Second and second most, I'd like to claim rights to this amazing invention and call it TROLLEAN or DASSOUKIEAN. Joking aside, on numerous occasions I run into variables that can only hold three values (let's call them True, False, and Other). For example, take a traffic intersection. It could be signalized, unsignalized, or non-controlled (aka no stop signs and no traffic lights). It would be nice to define `TROLLEAN traffic_light = Other` where Other in this case represents non-controlled intersections. Instead I find myself writing a code sequence to define that."}
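For the three-valued-variable question just above: most languages answer this need with an enumeration rather than a new primitive type. A minimal Python sketch mirroring the post's intersection example:

```python
from enum import Enum

# A three-valued type via an enumeration - no "trilean" primitive required.
class Control(Enum):
    SIGNALIZED = 1
    UNSIGNALIZED = 2
    NON_CONTROLLED = 3  # no stop signs and no traffic lights

traffic_light = Control.NON_CONTROLLED
if traffic_light is Control.NON_CONTROLLED:
    print("uncontrolled intersection")
```

Unlike overloading a boolean (or a nullable boolean), the enum names the third state explicitly, and adding a fourth state later is a one-line change.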
I also have \"tickets\" where N users are requesting M accesses, so I've got a sub-ticket table with the NxM \"user access requests\". Then for the sub-ticket, there is another table with the A attributes for that access (where \"A\" varies depending on the request). ![enter image description here](http://i.stack.imgur.com/s7QQL.png) So, for example, if I have two users requesting three accesses, and those accesses have 2, 3 and 4 attributes respectively, I have 1 ticket record, 6 sub-tickets records, and 18 sub ticket attribute records. The problem is that I want to quickly group the sub-tickets together, so that when I display them I can find all the ones for one type of access, then all the ones of another type of access, and then the third type of access. I know, each type of access should have been given a unique id somehow, but they weren't, and now it's my burden. I only am allowed to \"suggest\" database changes, not insist on them. Any suggestions?"} {"_id": "131240", "title": "Difference between brittle and fragile", "text": "When you're discussing a system or a code base, when you use the adjectives 'brittle' or 'fragile' to describe it, what's the difference between those two terms?"} {"_id": "189531", "title": "What's the protocol for a autoexecuting JQuery plugin?", "text": "I have a jQuery Plugin that I use myself which modifies the selected value of select items on a page. In my own code the plugin automatically executes as soon as it is included in the page this code in the plugin file. jQuery('Document').ready(function(){ jQuery('select').SelectOptions(); }); this automatically executes my plugin on all `select` items on the page. Obviously for people other than myself who might come across this on github I'd like to change this slightly. I still think the idea of auto execute is good but my question is. 1. Don't setup autoexecute at all, let the developer call the code. 2. Call autoexecute on All Select items and let the developer modify the code if he doesnt want that. 3. Autoexecute, but modify my selector called in the plugin js file to modify only select tags with a particular class - such as below . jQuery('Document').ready(function(){ jQuery('select .selectoptions').SelectOptions(); }); Is there any established protocol for this?"} {"_id": "189534", "title": "How many are too many nested function calls?", "text": "Quoted from MSDN about StackOverflowException: > The exception that is thrown when the execution stack overflows because it > contains too many nested method calls. `Too many` is pretty vague here. How do I know when too many is really too many? Thousands of function calls? Millions? I assume that it must be related in some way to the amount of memory in the computer but is it possible to come up with a roughly accurate order of magnitude? I'm concerned about this because I am developping a project which involves a heavy use of recursive structures and recursive function calls. 
{"_id": "131240", "title": "Difference between brittle and fragile", "text": "When you're discussing a system or a code base, when you use the adjectives 'brittle' or 'fragile' to describe it, what's the difference between those two terms?"} {"_id": "189531", "title": "What's the protocol for a autoexecuting JQuery plugin?", "text": "I have a jQuery plugin that I use myself which modifies the selected value of select items on a page. In my own code the plugin automatically executes as soon as it is included in the page, via this code in the plugin file. jQuery(document).ready(function(){ jQuery('select').SelectOptions(); }); This automatically executes my plugin on all `select` items on the page. Obviously for people other than myself who might come across this on GitHub I'd like to change this slightly. I still think the idea of auto execute is good, but my question is: 1. Don't set up autoexecute at all, let the developer call the code. 2. Call autoexecute on all select items and let the developer modify the code if he doesn't want that. 3. Autoexecute, but modify the selector called in the plugin js file to modify only select tags with a particular class - such as below. jQuery(document).ready(function(){ jQuery('select.selectoptions').SelectOptions(); }); Is there any established protocol for this?"} {"_id": "189534", "title": "How many are too many nested function calls?", "text": "Quoted from MSDN about StackOverflowException: > The exception that is thrown when the execution stack overflows because it > contains too many nested method calls. `Too many` is pretty vague here. How do I know when too many is really too many? Thousands of function calls? Millions? I assume that it must be related in some way to the amount of memory in the computer but is it possible to come up with a roughly accurate order of magnitude? I'm concerned about this because I am developing a project which involves a heavy use of recursive structures and recursive function calls. I don't want the application to fail when I start using it for more than just small tests."} {"_id": "151206", "title": "Why has the accessor methods from the JavaBean specification become the standard for Java development?", "text": "The JavaBeans Specification describes a JavaBean as > A Java Bean is a reusable software component that can be manipulated > visually in a builder tool Since the majority of the lines of code that are written seem to have nothing to do with being manipulated visually in a builder tool, why has the JavaBean specification been the \"way\" to write object oriented code? I would like to forgo the traditional getter/setter in favor of Fluent Interfaces all throughout the code, not just in builders, but fear doing so since this is traditionally not the way object oriented code is written in Java."} {"_id": "24629", "title": "How did you get started with .NET?", "text": "I'm interested in how programmers got started with .NET development. What books and blogs did you read, what podcasts or videos did you watch or listen to, and what other resources did you use to teach yourself .NET development? I have to mention that while I am new to .NET I'm not new to programming in general. I've used PHP and Python for a while, and Java for about 1-2 months."} {"_id": "155054", "title": "What are the commonly confused encodings that may result in identical test data?", "text": "I'm fixing code that is using ASCIIEncoding in some places and UTF-8 encoding in other functions. Since we aren't using the UTF-8 features, all of our unit tests passed, but I want to create a heightened awareness of encodings that produce similar results and may not be fully tested. I don't want to limit this to just UTF-8 vs ASCII, since I think the issue extends to code that handles ASN.1 fields and other code working with Base64. So, what are the commonly confused encodings that may result in identical test data?"} {"_id": "135542", "title": "php + web service", "text": "Can someone confirm good performance using PHP + a Java web service (for heavy duty tasks) running on NGINX (or Apache)? PHP is slow in completing some heavy tasks, so I want to make a web service in Java to communicate with PHP via SOAP or REST or JSON. The reason I'm asking this question is that I already know PHP at a very good level (and Java, and I don't want to start learning ASP or Ruby) as well as HTML5 and CSS3, and I want to develop a big portal that will have an increased number of requests (1000 / 2000 requests/sec) **and I don't want to code the whole application in Java** (cause PHP makes more sense), I only need the heavy work to be done by a web service. **For my project PHP is slow in doing some math computation and I/O file access every second. It consumes a lot of memory and a lot of CPU. At the rate of 1000 req/s it almost crashes.** Is this a good practice? Is there another solution?"}
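On the encodings question above: ASCII, UTF-8 and Latin-1 all map ASCII-range text to identical bytes, so a test suite built purely from ASCII data cannot distinguish the codecs; a single non-ASCII character exposes the difference. A small Python demonstration of the principle (not the poster's .NET code):

```python
# ASCII-only text: three codecs, one byte sequence - tests cannot tell them apart.
s = "hello"
assert s.encode("ascii") == s.encode("utf-8") == s.encode("latin-1")

# One non-ASCII character is enough to expose the difference.
t = "héllo"
print(t.encode("utf-8"))    # b'h\xc3\xa9llo' - two bytes for é
print(t.encode("latin-1"))  # b'h\xe9llo'     - one byte for é
# t.encode("ascii") would raise UnicodeEncodeError: ASCII has no é at all.
```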
C#** > During the development of the .NET Framework, the class libraries were > originally written using a managed code compiler system called Simple > Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a > new language at the time called Cool, which stood for \"C-like Object > Oriented Language\". - _wikipedia.org_ **4\\. C++** > It was developed by Bjarne Stroustrup starting in 1979 at Bell Labs as an > enhancement to the C language. - _wikipedia.org_ **5\\. Objective-C** > Objective-C is a reflective, object-oriented programming language that adds > Smalltalk-style messaging to the C programming language. - _wikipedia.org_ **6\\. PHP** > He rewrote these scripts as C programming language Common Gateway Interface > (CGI) binaries, extending them to add the ability to work with Web forms and > to communicate with databases and called this implementation \"Personal Home > Page/Forms Interpreter\" or PHP/FI. - _wikipedia.org_ **8\\. Python** > Python was conceived in the late 1980s and its implementation was started in > December 1989 by Guido van Rossum at CWI in the Netherlands as a successor > to the ABC programming language (itself inspired by SETL) capable of > exception handling and interfacing with the Amoeba operating system. - > _wikipedia.org_ > > ABC (programming language) Its designers claim that ABC programs are > typically around a quarter the size of the equivalent Pascal or C programs, > and more readable. - _wikipedia.org_ **9\\. Perl** > Perl borrows features from other programming languages including C, shell > scripting (sh), AWK, and sed. - _wikipedia.org_ **10\\. JavaScript** > JavaScript uses syntax influenced by that of C. - _wikipedia.org_ It appears that most of them borrow their syntax from C and / or are heavily influenced in several other ways, at least in their beginnings. Why?"} {"_id": "240565", "title": "When calling for a random integer from 1-6, how can I make it gradually get less likely to pick a number the bigger it is?", "text": "Like having 1 be the most common number and 6 be the least common number. And everything else just leveling out."} {"_id": "135546", "title": "GUI architecture and class naming advice", "text": "Problem: I'm working on coding a few light-weight touch-tablet games and often get stuck with difficulties naming my user interaction/interface classes and their relationships with each other (architecture?). I'm rarely satisfied with any of my solutions, thus doubt and change them several times on down the line. For example, I have a board game with these components (among others): * \"HUD\" - the HUD frames the view of the game world thus enabling user interaction * \"boardOverlay\" - this is invisible and lies on top of the viewed game world. It receives and interprets touches, consequently calling methods in \"thinger\". * \"board\" - maintains all elements on the board. * \"thinger\" - the class that gets things done, changes game state. I call it thinger since I dont know what it should be called. Better to have a non-descript name than one that will be misinterpreted. Searching for enlightenment: Now I would like to have general/abstract names/architecture for these components which will likely be used in many other games/apps. But I have difficulty coming up with satisfying ones. I have searched the net many a time for guidelines/advice but I find that all of the sources are language/technology/API specific and seldom like their approach. 
{"_id": "135546", "title": "GUI architecture and class naming advice", "text": "Problem: I'm working on coding a few light-weight touch-tablet games and often get stuck with difficulties naming my user interaction/interface classes and their relationships with each other (architecture?). I'm rarely satisfied with any of my solutions, thus doubt and change them several times on down the line. For example, I have a board game with these components (among others): * \"HUD\" - the HUD frames the view of the game world thus enabling user interaction * \"boardOverlay\" - this is invisible and lies on top of the viewed game world. It receives and interprets touches, consequently calling methods in \"thinger\". * \"board\" - maintains all elements on the board. * \"thinger\" - the class that gets things done, changes game state. I call it thinger since I don't know what it should be called. Better to have a non-descript name than one that will be misinterpreted. Searching for enlightenment: Now I would like to have general/abstract names/architecture for these components which will likely be used in many other games/apps. But I have difficulty coming up with satisfying ones. I have searched the net many a time for guidelines/advice but I find that all of the sources are language/technology/API specific and I seldom like their approach. I don't know the name of the discipline/practice to search for enlightenment under. I have tried \"event driven framework\", \"event driven naming conventions\", \"GUI architecture\", \"+oop +GUI +taxonomy\"... on and on... with no luck. Question 1: Can anyone provide a resource to enlighten me? A technology independent resource that philosophizes about this type of naming and architecture. Extra credit question: It would also be great to have a reliable/concise resource discussing the practices/disciplines of this context. There are too many different and overlapping usages/interpretations out there. Example: GUI framework, GUI architecture, GUI structure, GUI design - what should the proper usage of these terms be?"} {"_id": "197866", "title": "Overriding - Access to Members with Reference Reassignment", "text": "I have recently been moving through a couple of books in order to teach myself Java and have, fortunately, mostly due to luck, encountered very few difficulties. That has just changed. I read a section on the following under inheritance and the whole superclass subclass setup * When a new superclass object is created, it is, like all objects, assigned a reference (superReference in this example) * If a new subclass object (with the defining subclass extending the superclass) is created, and then the superReference reference is set to refer to that instead of the original object, it is my understanding that, since the reference is made for a superclass, only members defined by the superclass may be accessed from the subclass. First - is this correct? Second: If I am overriding a method and therefore have one in the super and one in the sub, and I create a superclass object and then assign its reference, as I did above, to a subclass object, by the principle called something like _Dynamic Method Dispatch_ , a called overridden method should default to accessing the subclass method, right? Well, my question is: If a reference to a superclass-object is retooled for a subclass-object and will deny direct `object.member` access to subclass-defined members, only supporting superclass-defined members, how can, if a superclass reference is retooled for a subclass object, an overridden method apply to the subclass-object if access is limited by the superclass-originated reference?"} {"_id": "141271", "title": "If I use locks, can my algorithm still be lock-free?", "text": "A common definition of lock-free is that at least one process makes progress. 1 If I have a simple data structure such as a queue, protected by a lock, then one process can always make progress, as one process can acquire the lock, do what it wants, and release it. So does it meet the definition of lock-free? * * * 1 See e.g. M. Herlihy, V. Luchangco, and M. Moir. Obstruction-free synchronization: Double-ended queues as an example. In Distributed Computing, 2003. \"It is lock-free if it ensures only that some thread always makes progress\"."} {"_id": "73334", "title": "What is a hack?", "text": "I often hear co-workers saying to each other, \"That's a horrible, horrible hack.\" What I can take away from that is that it's not good. When I asked them if it works they say \"yes, but it's not good\". Does that mean it's not a good solution? How is a solution bad if it works? Is it due to good practice? Or not maintainable? Is it using a side effect of code as a part of your solution? It's interesting to me when something is classified as a hack.
How can you identify it?"} {"_id": "184664", "title": "mvc pattern on procedural php", "text": "First off, I do not have anything against OO programming (I'd be mad if i believed so). But I was thinking on some sort of procedural MVC pattern with PHP; let me explain better. As we all know, variable scopes, unless they are taken from $_SESSION (, database, redis, etc.) are never global and only refer to a singular execution of a script. For example: class Car { this->name = \"foo\"; function setName($name) { ... } function getName() { return $this->name; } } Where obviously in a more common situation, this data will be taken from the DB, otherwise any object car, per execution, would have the same name. Now, is it not possible to apply MVC pattern on procedural code for simplicity purpose? Imagine a really simple social network like application; I could have a structure like this *views -profileview.php -messageview.php -loginview.php -notloggedin.php *models -user.php -message.php - profile.php - messages.php - login.php where profile, messages and login.php work as controllers and route to the right view; and user and message.php work as the classes, that contain all the functions that are eventually needed by the controllers, such as getUserById($id), postMessage($id, $meesage), etc. I have simplified it a lot but I think you can sort of understand my point. What would you think of such implementation on the long run, and why?"} {"_id": "184666", "title": "Ensuring conceptual integrity in Python project with multiple programmers", "text": "One objection that I have _often_ heard raised against Python that it is difficult to synchronize a team of many programmers on large Python project. Note: that synchronization is _possible_ in such a project does not necessarily entail that it is practical, cheap or easy. There's still n*(n-1) communication channels between programmers, so communication cost grows more or less with square of number of programmers, on average. One thing I liked in Java are interfaces. Sadly, both PEP 245 and even modified Guido's version on interfaces ( http://www.artima.com/forums/flat.jsp?forum=106&thread=87182 ) have not been implemented. I was thinking that duck-typing style limited interfaces (no types, just method names) might be very useful, emphatically not for sake of pychecker, faster code, or forcing (dumb) programmers to follow (smart) architect's design, but as means of easy synchronization between 2+ programmers. So, apart from typical but not working very well suggestion of \"more documentation\" (nobody wants to write it really), what are your means of synchronization high-level design and conceptual integrity on such large, multi-person projects? (in part I ask because large projects I have participated in never used Python) I heard objections mentioned above in person: At least 3 times when talking to startups (2 Scala startups, so they have slant towards static typing) and at least once within corp when considering various programming languages for projects. On the web: can't remember this now, I have vague memory of reading it on some OO blogs and forums. Static typing guys (Java, Scala) seem to have had big on this point, even though typically static typing and interfaces are usually meant for other things (speed, IDE autocompletion). Note I know TDD defense on this but the problem here is that it is defensive since typically it's one person that writes unit tests. 
Of course, one could envision 1+ programmers sitting down together and writing unit tests before coding as sort of indirect spec + test but have you seen it done? I haven't. I also feel that unit tests, even written by groups, are not quite (limited) design or spec. They are low level after all. There's a reason (good maybe?) that Java has interfaces in addition to and not instead of JUnit."} {"_id": "22753", "title": "C# open source projects", "text": "What C# project(s) would you consider contributing to if you were a beginner trying to sharpen your skills in C# and .NET framework ? The project should be (besides all) active and not something less active and/or stagnant."} {"_id": "38966", "title": "At which architecture level are you running BDD tests (e.g. Cucumber)", "text": "I have in the last year gotten quite fond of using SpecFlow (which is a .NET port of Cucumber) I have used it both to test a ASP.NET MVC application at the web layer, i.e. using browser automation, but also at the controller layer. The first gives me a higher confidence in the correctness of the application, because JavaScript is tested, and improper controller configuration is also caught. But those tests are slower to execute, and more complex to implement, than those just testing on the controller layer. My tests are full functional tests, i.e. they exercise all layers of the application, all the way down to the database. So the first thing before any scenario is that the database is cleared of data, allowing the test to assume that only data specified in the \"Given\" block exists. Then I see example on how to use it, where they test just exercise the model layer. So what are your experiences with these tools? Which layer of the application do you test?"} {"_id": "208177", "title": "How to architect a P2P application", "text": "[Moved here at the suggestion of SO users (10k SO+)] I'd like to develop a peer-to-peer application. While I have a lot of experience in LOB apps, I'm new to the P2P arena. I've got a rough idea of how things should work but need some more details to fill out my understanding. What I know (believe) I need to do: * A significant proportion of clients need to enable inbound connections (ala uPnP/NAT rules) * Nodes should share other known nodes to provide resiliency if any particular node goes down * Some form of synchronisation/route finding is required for sending data between arbitrary clients * Possibly some resource sniffing to differentiate between \"dumb\" clients and and more powerful \"super nodes\" to handle sync/sharing of node lists and maybe relay messages * Clients without inbound support should hold open an outbound connection through which they can receive info of nodes to connect to In short, I'm hoping to offer (at first) a chat/messenger service which doesn't rely on a connection to any particular central server. While I imagine I'll need to supply a number of centralised \"supernodes\" to get things started (or after significant upgrades), these should be optional once a functional P2P network has been established. I can see a slew of problems and don't know how to address them. Mainly how to... * Authenticate users to other nodes without a central authority to verify * Co-ordinate which nodes know about which other nodes (min-max number/by latency/???) 
* Allow a given user to determine if another user (or node) is online * Deal with a situation where 2 groups of nodes are physically disconnected (airgapped) and how to resync on reconnection of the groups * Etc etc I know this is a pretty open-ended question, so while high-level design patterns would be appreciated, what I'm really looking for is a decent guide to how others have handled these problems (and the ones I haven't considered yet)."} {"_id": "200885", "title": "Testing From A Developer's Perspective", "text": "I have a book which mentions: > \"There are many types of testing, including unit testing, integration > testing, functional testing, system testing, performance testing, and > acceptance testing\". It is often noted that many developers carry out unit-testing only, or no testing at all. From the types of testing above, which ones are to be conducted by the developer? What are the responsibilities of a developer when it comes to testing?"} {"_id": "204841", "title": "Migration from a complex C++ application to C# a -- good idea?", "text": "We currently have a complex VC++ software application, which uses a library like ObjectARX to build the dll. I feel there are many features in C# like Collections, Generics, and other libraries which can be used to build the current application in a better and more efficient way. I have been thinking about it, but I am not sure how to present it to my Supervisor and colleagues. I would appreciate any help to get me thinking in the right direction and highlight the points to bring it to the team. A few points that I thought of were: 1. With some current examples, implementing it in C# with the features. 2. Highlight that the development time is comparatively less in C# than C++. 3. Use a Design Architecture."} {"_id": "204843", "title": "Could it be possible to add the integer type to the ECMAScript standard?", "text": "In JavaScript, every number you will ever use will always be represented with what a C programmer would call a `double`. The official type is, I believe, `number`. If I recall correctly, that fact was mentioned by Google as a \"fundamental\" problem with JavaScript and one of the reasons why they wanted to have a clean break with Dart. Now, I can't help but wonder: * Can the ECMAScript standard just add that type? Would it be possible to do without breaking existing code? * Anyway, can't a JIT engine watch the integer usage and generate code that is almost as efficient as using a native integer type, by using integer registers and opcodes? In effect: Could it be done? _Why should it be done?_"}
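On the ECMAScript-integer question just above: the practical cost of numbers-as-doubles is easy to demonstrate. Python floats are the same IEEE 754 doubles that JavaScript numbers use, so this small sketch shows exactly the behaviour a JS console would:

```python
# Python floats are IEEE 754 doubles, the same representation JavaScript
# numbers use, so these reproduce the JS behaviour exactly.
big = 2.0 ** 53
print(big + 1 == big)       # True: 2**53 + 1 is not representable as a double
print(0.1 + 0.2 == 0.3)     # False: classic binary-fraction rounding error
```

Doubles hold every integer exactly only up to 2**53; past that, integer arithmetic silently loses precision, which is one concrete reason a dedicated integer type keeps being proposed.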
{"_id": "77656", "title": "Code Reviews do they really work in true Agile?", "text": "So I started working for a large corp., one of those with 3 letters in the name, and they are trying to become Agile, but have tons of processes, which I don't feel are Agile. The one that has me the most wound up is code reviews. My last job was with a startup that I would say is the most Agile development team I have seen, been on, and/or ever heard of. Anyway, my argument is that Code Reviews are a waste of time in iterative or Agile development where the UX/UI is extreme/intense (think Apple/Steve Jobs perfection). Maybe someone here can help me understand before they fire me? Here is my development process and the one at my last startup... very Agile. We do the early feature work to sort development tasks/todos. We would mock up a couple of versions and present them to users, team, and marketing to get feedback. We then do another mockup iteration to get one more round in from the same stakeholders above. Then we divvy up the work and get started. We have milestones and dates to meet, but we keep plugging away. We have no code reviews during any of this. Several times during the weeks of our development we hold sessions with the stakeholders again to see if they still agree the features/functions/UX/UI are still a fit and on target. As we approach the end of the 8 week iteration cycle QA starts testing, then it goes to alpha users, and finally to beta users. But during the alpha and beta, developers are going over the new features and older features making iterative daily or hourly changes to the UI to improve the UX. So a feature that was being developed this release might end up being changed 3 more times in the last four weeks to improve and perfect it or add a few tiny features (e.g. make the component a little slicker or smarter). Sometimes the changes might be superficial, meaning no CRUD operations are changed or modified - just UI-only changes. So with this type of development process, extreme Agile, wouldn't code reviews be a waste of time? Meaning if I had another developer or two review my code, but then that code changes 3 more times before it goes out the door because of all the UI/UX improvements, are we not wasting our time the first 3 times they reviewed the code, as that code/component/UI was scrapped? We never had many quality issues with this process, and yes, if a developer left, all the knowledge walked out the door, but we always found smart developers to pick it up and take over. _**And yes, we have a lot of testers because they may have to retest things 3 or 4 times. Also please don't get hung up on asking why all the UI/UX changes... that's just how things are done... but then that's why the app wins tons of awards for UI/UX and the users will kill for the app. The thought process is if I can make even a 2% improvement in something because I have an extra hour, then do it. Users will be happier, which means more $ or users. And yes, our users are ok with the app changing continuously because that's how it's been done since day one, so they don't see it as bad or a negative._** Hope this post doesn't come off as pompous, but I just can't see how Code Reviews aren't wasteful. Maybe 2% of all our reviewed code has bugs. Each release we might find 3 bugs via code review. So it ends up being 40 hours of code review per developer per release (4 x 40 = 160 hours) to find 3 to 5 bugs? Chances are 50% that those 3 to 5 bugs would have been picked up by QA anyway. Wouldn't it be better to spend those 40 hours per developer adding a new feature or improving the existing ones?"} {"_id": "93567", "title": "The state of localization in web applications", "text": "Given that browsers (Chromium/Chrome thus far AFAIK) seem to be introducing embedded site translations and utilities such as Wibiya, and given the amount of work that localizing a site can be (depending on framework, view re-writes, database-driven message localization, etc), does it make sense to put in work to localize sites that are intended for an international market anymore? I'm developing such an application solo and I'm wondering if it's _really_ a worthwhile investment at this point, and hoping to hear from others who develop internationally targeted large scale applications. I'm not looking so much for opinions (although those are valuable as well) as factual numbers, i.e.
the market share that Chromium and Chrome may have compared to other browsers, which browsers offer native localization, etc. Basically, an estimated percentage of users that I may be factoring out of usability without integrating site-driven localization."} {"_id": "165263", "title": "Let a model instance choose appropriate view class using category. Is it good design?", "text": "Assume I have an abstract base model class called MoneySource. And two realizations BankCard and CellularAccount. In MoneySourceListController I want to display a list of them, but with a ListItemView different for each MoneySource subclass. What if I define a category on MoneySource @interface MoneySource (ListItemView) - (Class)listItemViewClass; @end And then override it for each concrete subclass of MoneySource, returning a suitable view class. @implementation CellularAccount (ListItemView) - (Class)listItemViewClass { return [CellularAccountListView class]; } @end @implementation BankCard (ListItemView) - (Class)listItemViewClass { return [BankCardListView class]; } @end @implementation MoneySourceListController - (ListItemView *)listItemViewForMoneySourceAtIndex:(int)index { MoneySource *moneySource = [items objectAtIndex:index]; Class viewClass = [moneySource listItemViewClass]; ListItemView *view = [[viewClass alloc] init]; [view setupWithMoneySource:moneySource]; return [view autorelease]; } @end so I can ask the model object about its view, not violating MVC principles, and avoiding class introspection or if constructions. Thank you!"} {"_id": "165264", "title": "design pattern advice: graph -> computation", "text": "I have a domain model, persisted in a database, which represents a graph. A graph consists of nodes (e.g. NodeTypeA, NodeTypeB) which are connected via branches. The two generic elements (nodes and branches) will have properties. A graph will be sent to a computation engine. To perform computations the engine has to be initialised like so (simplified pseudo code): Engine engine = new Engine(); Object ID1 = engine.AddNodeTypeA(TypeA.Property1, TypeA.Property2, ..., TypeA.Propertyn); Object ID2 = engine.AddNodeTypeB(TypeB.Property1, TypeB.Property2, ..., TypeB.Propertyn); engine.AddBranch(ID1,ID2); Finally the computation is performed like this: engine.DoSomeComputation(); I am just wondering if there are any relevant design patterns out there which help to achieve the above using good design principles. I hope this makes sense. Any feedback would be very much appreciated."} {"_id": "165268", "title": "Java heap space", "text": "In Java/JVM, why do we call the memory area where Java creates objects the \"Heap\"? Does it use the heap data structure to create/remove/maintain the objects? As I read in the documentation of the heap data structure, the algorithm compares the objects with existing nodes and places them in such a way that the parent object is \"greater\" than the children (or \"lesser\" in the case of a min-heap). So in the JVM, how are the objects compared against each other before placing them in the heap?"} {"_id": "230861", "title": "Difference between TSimpleRWSync's BeginWrite and BeginRead methods?", "text": "I have recently switched to TSimpleRWSync from TRTLCriticalSection. The methods BeginRead and BeginWrite confuse me as wherever I read help, they seemingly do the same thing, i.e. acquire the critical section whenever it gets relinquished. As TSimpleRWSync doesn't allow multiple read threads, there is seemingly no point in having two separate methods either.
Is there a special difference between them aside from the contextual one?"} {"_id": "170888", "title": "Name for Osherove's modified singleton pattern?", "text": "I'm pretty well sold on the \"singletons are evil\" line of thought. Nevertheless, there are limited occurrences when you want to limit the creation of an object. Roy Osherove advises, > If you're planning to use a singleton in your design, separate the logic of > the singleton class and the logic that makes it a singleton (the part that > initializes a static variables, for example) into two separate classes. That > way, you can keep the single responsibility principle (SRP) and also have a > way to override singleton logic. ( _The Art of Unit Testing_ 261-262) This pattern still perpetuates the global state. However, it does result in a testable design, so it seems to me to be a good pattern for mitigating the damage of a singleton. However, Osherove does not give a name to this pattern; but naming a pattern, according to the Gang of Four, is important: > Naming a pattern immediately increases our design vocabulary. It lets us > design at a higher level of abstraction. (3) Is there a standard name for this pattern? It seems different enough from a standard singleton to deserve a separate name. _Decoupled Singleton_ , perhaps?"} {"_id": "93568", "title": "How do I apply a computer science degree to web development?", "text": "I'm a web programmer, but I haven't found many opportunities to take advantage of a formal education in computer science. Maybe I'm not looking in the right places, but it seems to me like most of the web jobs I come across are CRUD, web forms, and data grids. For these jobs a formal CS background doesn't seem necessary, and you could do fine with O'Reilly cookbooks in jQuery, CSS 3, PHP, SQL, or ASP.NET MVC. What kinds of web developer jobs exist that really let you apply your computer science background? Do I need to branch out into other areas of programming to take full advantage of my degree?"} {"_id": "208281", "title": "Options for implementing aggregate message processing", "text": "I am developing a web service in Java that needs to make an asynchronous call to a 3rd party app by aggregating about 100 requests, or a minute's worth (whichever happens earlier). Is there any open source framework or known design pattern available to help me with this? I have looked at JMS and Spring Batch. JMS processes each message individually (not as an aggregate) and Spring Batch provides batch execution but also doesn't provide this aggregation support. I can implement my own queue and handler service to make sure I aggregate the results and process them, but I was looking for solutions like JMS which also take care of persisting the message/data in case of failure and re-using it once the system is back up."}
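The batching behaviour the aggregate-message question above describes - flush at 100 items or after a minute, whichever comes first - is the classic Aggregator pattern (packaged, for example, as the Aggregator EIP in Apache Camel and in Spring Integration). Since the question is about Java, this Python sketch is only a compact illustration of the logic; send_batch stands in for the real 3rd-party call and the thresholds mirror the question:

```python
import queue
import time

def aggregator(q: queue.Queue, send_batch, max_items: int = 100,
               max_wait: float = 60.0) -> None:
    """Collect up to max_items, or whatever arrives within max_wait seconds
    of the first item, then flush one batch."""
    while True:
        batch = [q.get()]  # block until the first item of a batch arrives
        deadline = time.monotonic() + max_wait
        while len(batch) < max_items:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(q.get(timeout=remaining))
            except queue.Empty:
                break
        send_batch(batch)
```

Note the timeout window starts at the first item of a batch, so a quiet period never delays an already-started batch beyond max_wait.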
This class answers 'Cross', but returns an error if it has anything other than 3 elements._ **Paradigm B** A deeper inheritance tree, with methods only appearing where they are applicable. _Example: the mathematical vector class is specialised as a 3D-vector, with only the 3D-vector answering 'Cross'. 3D vectors are a special case as there are many functions relevant to a positional vector alone (Dot, Cross, Distance, AngleTo, etc)._ ~~I'm not actually asking for opinions on the above (although clearly the second is better!) but where can I find a good discussion on this - and similar - design issues? I can find many beginners' discussions/tutorials on the subject, but I haven't had any luck finding the more in-depth articles.~~ * * * Edit: After a prompt from @JohnDibling, I'll rephrase the question (and add another Paradigm on request): **Paradigm C** Lightweight objects with a suite of functions to interpret them, which throw errors if the provided arguments weren't valid. _Example: all vectors only respond to the most basic requests, and a series of functions like Size(vector), Cross(vector) do all of the work. Cross would throw a (runtime) error if an argument of the wrong size is given._ Could anyone give me specific reasons why A or C might be preferable to B, or point me to an article which discusses this issue in depth?"} {"_id": "205462", "title": "Fowlers Data Access Layer patterns", "text": "Fowler talks about a number of design patterns available for the data access layer e.g. Table Data Gateway, Row Data Gateway, Active Record and Data Mapper. In the book it suggests using Data Mapper with Transaction Script and Active Record with Domain Model. This doesn't seem logical to me, as Transaction Script classes combine business logic and data logic, while Domain Model separates them. Active Record combines business logic and data logic (like Transaction Script rather than Domain Model) and Data Mapper separates business logic and data logic (like Domain Model rather than Transaction Script). What am I not understanding here?"} {"_id": "4507", "title": "Do you think that GAE alone is enough to justify learning Python over Ruby?", "text": "Considering the fact that you don't have to get involved in setting up/buying a server or even buying a domain, do you think that fact alone is enough to choose one over the other? I don't necessarily want to work on Google App Engine, I just find it convenient when it comes to hosting/environment/etc. and am wondering if that's a good enough reason to learn Python. In any case, I'm not looking for a debate between Python and Ruby but more on Google App Engine and whether its value is enough to dictate the language you should learn."} {"_id": "155956", "title": "Getting through a lengthy book?", "text": "This may seem like a weird question, but since we're challenged--as engineers--to constantly adapt to changing technologies, we always find ourselves buried in documentation. That said, we also need to consider that time is of the essence because people want their stuff fixed and improved with little delay, if any. How do you get through lengthy books and manuals within a short period of time? Take for example: \"The Linux Programming Interface,\" by Michael Kerrisk, which is roughly 1500 pages in length. 
How would you get through a monster of a book like this if you're pressed for time while still learning most of the material?"} {"_id": "224803", "title": "Is virtual machine image a good protection for source code?", "text": "We have developed an application that is sold as an online service. After some time we realized that some of our customers would need/prefer/require to have it installed locally on their intranet. However, the application was developed using a scripting language and we wouldn't want to allow clients to access the source code. The question is: What are the technical downsides of distributing the application as a virtual machine image?"} {"_id": "138645", "title": "What maths should I learn to become a better computer scientist?", "text": "I'm a self taught programmer, and although I know many people feel math isn't necessary, I find that many examples of algorithms I come across involve (what sounds to be) some pretty complex mathematics. I would love to eventually have a solid understanding of the math that a good, university educated computer scientist should know. I don't really remember any math past algebra 2. With that being where I left off, what should my starting point be? What math topics should I research, and in what order? I'm looking to build a curriculum for myself that will be pretty easy to take on from where I left off and continually learn until I have a similar understanding to that of what a university would provide."} {"_id": "138641", "title": "What are the roles of a Software Delivery Manager", "text": "I have been told about a position that may be open to me - the role of a Software Delivery Manager. From what I understand this role does not already exist within my organisation. To be perfectly honest I'm not quite sure what a Software Delivery Manager's roles are. I have a few ideas and would appreciate some input around whether they are correct or not, or if there is anything missing: * ensure the quality of the software being delivered * document the relationships between the components being delivered * ensure that the delivery of these components does not break other components * ensure that the components being developed make the best use of the environments they are being deployed in * be on-hand during software deliveries (though not actually performing the delivery of software, rather giving the Go) I have also been told that the role would include some software development work (which is important to me being a developer at heart!) - is there software development specifically associated with the role of Software Delivery Manager or is this more likely to just be a case of helping the team out when time is short?"} {"_id": "103233", "title": "Why is DRY important?", "text": "Quite simply, why would I want to write code that works for all cases and scalable data when all I need to do is repeat the same process a few times with a few minor tweaks? I'm unlikely to need to edit this again any time soon. It looks like a lot less work to just go... function doStuff1(){/*.a.*/} function doStuff2(){/*.b.*/} function doStuff3(){/*.c.*/} And if I ever need to add something... function doStuff4(){/*.d.*/} And if I need to remove it, I remove it. It's harder to figure out how to make all of those into one straight-forward pattern that I can just feed data into and deal with all the cases, and make a bunch of changes I don't feel like I'm ever going to have to do. 
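To make the comparison concrete, here is a rough sketch (Python for brevity, names purely illustrative) of what the single "straight-forward pattern" version would presumably look like - each case becomes a data entry instead of a copy-pasted function:

```python
# Hypothetical DRY version of doStuff1/doStuff2/doStuff3: the shared process
# lives in one place, and each variant is reduced to the data that differs.
def do_stuff(case):
    variants = {
        "a": {"label": "first"},   # stand-in for the body of doStuff1
        "b": {"label": "second"},  # stand-in for the body of doStuff2
        "c": {"label": "third"},   # stand-in for the body of doStuff3
    }
    settings = variants[case]
    return "doing the {} thing".format(settings["label"])

# Adding "d" is one new dict entry; removing it is one deletion.
print(do_stuff("a"))
```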
Why be DRY when it looks like a quick cut+paste is going to be so much less work?"} {"_id": "74832", "title": "Moving to Python (SciPy and NumPy) for Scientific Computing", "text": "Just read a presentation about using Python for Scientific Computing. I am currently using MATLAB (student license FTW, which will expire when I graduate soon). So I was wondering **how mature SciPy and NumPy are with respect to relying on them for all the Scientific Computing** I need to do. The advantage is that it's free. I am mainly focused on Signal Processing, Audio, and Acoustics kinds of computing. I can imagine that the NumPy and SciPy projects are evolving with respect to the support for more complex techniques. So, **how fast are they evolving**, are there large communities behind them? Finally, **are there other solutions**?"} {"_id": "190414", "title": "Python potential for science applications?", "text": "I'm a beginner in programming. I am learning Python as a hobby. However, after reading some things concerning, for example, its speed, I asked myself again if I should really learn Python or learn another, more difficult language. I am in sciences at school, and have an interest in physics and microcontrollers, so a performant language would be needed for things like computational calculations. So probably a language like C or C++ would be more handy for a future university degree. So, my questions are: * I saw that Python is an interpreted language, and therefore slower (up to 100 times slower than C++, from what I read at Stack Overflow, which is a lot). Can't it be compiled so that no interpreter is needed and it has high speed (like C is)? I mean, that's what the interpreter does, isn't it? * Does Python have any chance of being useful for science? How could it overcome its lack of speed? * Isn't there the possibility of writing a program in Python, and translating it to C++ in order to compile? Therefore having the simple writing of Python and the speed of C++? Are there any Python-to-C++ translators?"} {"_id": "70028", "title": "Do you write Documentation in a language other than English?", "text": "A couple of months ago I moved to Germany. Taking on some projects of my own, I've recently had the opportunity to develop with a company-based framework that was very well documented, but in German. My German is pretty good, however my programming terminology is somewhat lacking. Long story short, I was wondering, how common is it to document code in a language other than English? Sorry to seem English-centric but it seems like a bad habit."} {"_id": "130114", "title": "What is a good IDE for client side JavaScript development?", "text": "I recently started learning JavaScript and am looking for a good JavaScript Editor/IDE. I found dozens of them in a Google search but I would appreciate it if users who have experience with using such an IDE could recommend one. I want an IDE with **syntax highlighting**, possibly **IntelliSense** and **debugging support** for JavaScript code. I'm a **Windows 7** user and do just **client-side** JavaScript development. Any suggestions?"} {"_id": "130115", "title": "What is an untriaged bug?", "text": "I am an undergrad studying Computer Science. When I tried reporting bugs to several projects, I came across the classification _untriaged_ a lot. A web search didn't really explain what this means. 
Could you tell me what an untriaged bug is?"} {"_id": "130116", "title": "Are there any advantages of using separate RDBMS for reports?", "text": "My production database is SQL Server 2008 R2 Express. It has limited features and I am testing it in a production environment. To better handle reporting queries in a highly concurrent environment, I want to use an open-source RDBMS with tables that will act as temporary tables, frequently filled with recent records that are deleted after use. The only problem is that I need to keep two connections, one for each RDBMS, but I feel Firebird or PostgreSQL can give me better reporting performance in a highly concurrent environment. What are your suggestions? Is this approach used in any commercial scenario?"} {"_id": "130119", "title": "Is Agile applicable in product development companies as well?", "text": "The following principles of Agile development make it look like Agile is mostly suited for services companies: * Customer satisfaction by rapid delivery of useful software * Welcome changing requirements, even late in development * Working software is delivered frequently (weeks rather than months) * Working closely with the client. Often this involves a client representative being a part of the development team. * Face-to-face conversation is the best form of communication (co-location) My question is: Is Agile exclusively suited for service oriented companies or do even product development companies (including web based companies) benefit from Agile techniques?"} {"_id": "370", "title": "How old is \"too old\"?", "text": "I've been told that to be taken seriously as a job applicant, I should drop years of relevant experience off my r\u00e9sum\u00e9, remove the year I got my degree, or both. Or not even bother applying, because no one wants to hire programmers older than them.1 Or that I should found a company, not because I want to, or because I have a product I care about, but because that way I can get a job if/when my company is acquired. Or that I should focus more on management jobs (which I've successfully done in the past) because\u2026 well, they couldn't really explain this one, except the implication was that over a certain age you're a loser if you're still writing code. But I _like_ writing code. Have you seen this? Is this only a local (Northern California) issue? If you've ever hired programmers:2 * Of the r\u00e9sum\u00e9s you've received, how old was the eldest applicant? * What was the age of the oldest person you've interviewed? * How old (when hired) was the oldest person you hired? How old is \"too old\" to be employed as a programmer? 1 I'm assuming all applicants have equivalent applicable experience. This isn't about someone with three decades of COBOL applying for a Java guru job. 2 Yes, I know that (at least in the US) you aren't supposed to ask how old an applicant is. In my experience, though, you can get a general idea from a r\u00e9sum\u00e9."} {"_id": "100656", "title": "I'm questioning my education choice because they say there's no place for old programmers", "text": "> **Possible Duplicate:** > How old is \"too old\"? I'm about to start CS next year. I love both the coding and the mathematical aspects of programming. However, recently I encountered multiple rants about how it's impossible to actually be a programmer over 45, unless you're in a management position or filling a very specific niche. Any advice? 
I'm seriously questioning my career choice here, and would be grateful for some input."} {"_id": "61678", "title": "Can a developer move into a fast-paced career later in life?", "text": "> **Possible Duplicate:** > How old is \"too old\"? Most developers I know pushed really hard to get ahead in their careers early on, then maybe slowed down about 10 to 15 years in to marry and raise a family. I didn't. I had a family early in my career, knowing that children were more important to me than my career. I will be in my early 40's when my children are going to college, and it seems that I might be able to take advantage of the lull in my responsibilities and get ahead at a level of experience where so many developers are slowing down to spend more time with their families. However, I'm also wondering if the effort would pay off that late in life. Specifically, I'm concerned about age-ism, especially as a woman in development. Women are affected by age-ism more strongly in most careers, and development seems to be far more biased towards youth than most fields. I also know the women who are in software development are fairly likely to leave the field at ages 35 to 40, although I wonder if that is related to delayed childbearing in this field more than anything. Can an older developer reap the same benefits of intense work that younger developers receive? Or will I automatically be perceived as \"stable\" and \"jaded\" at that age, regardless of my interest in moving ahead and investing lots of time into my career once the kids are grown? Would I be better off planning on starting my own business at that time if I want to make a large investment in my career and have it pay off (financially)? ETA: A little clarification: I expect to work between now and then, and grow on the job. So I would be a very senior SDET by that age. I don't expect to have trouble getting a job at that point; my question is more about climbing the ladder to get promotions and pay increases. I see the work the VP of tech does, and it looks fun - but he got to that point by rising quickly early in his career. Can I make the same rise later in my career? Or will I be limited to the rate of growth in position that I achieve in my earlier career?"} {"_id": "566", "title": "Is it ever worthwhile using goto?", "text": "Goto is almost universally discouraged. Is this statement ever worthwhile using?"} {"_id": "125710", "title": "What are the arguments against parsing the Cthulhu way?", "text": "I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial: it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances. I know from experience that writing a lexer/parser by hand -unless the grammar is trivial- is a time consuming and error prone process. So I was left with two options: a parser generator \u00e0 la yacc or a combinator library like Parsec. The former was good as well but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes, the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point however, I've been literally attacked by a co-worker. 
After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I had been taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. **These are the first few reasons that come to my mind to avoid a String.Split-based solution:** * you need a lot of ifs to handle special cases and things quickly spiral out of control * lots of hardcoded array indexes make maintenance painful * extremely difficult to handle things like a function call as a method argument (ex. add(add(a, b), c)) * very difficult to provide meaningful error messages in case of syntax errors (very likely to happen) * I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project after all. (I won't use this argument as it will probably sound offensive, and starting a war is not going to help anybody) **What are your favorite arguments against parsing the Cthulhu way?*** *of course if you can convince me he's right I'll be perfectly happy as well"} {"_id": "85270", "title": "When Interfacing with a 3rd Party API, What Can Make Things Challenging?", "text": "Reusable components in the development of software programs are always exposed through APIs. Most of the products in today's world (such as facebook, google, .NET, JDK, ...) provide APIs to reuse their components without actually coding them from scratch. APIs also provide a huge high-level abstraction over the underlying components. My question is about the usability of these APIs. Certainly, in all APIs there are obstacles. Obstacles can be * skill sets, * resources, * documentation of the APIs * the API itself. What do programmers think the real obstacles are? List a few of your opinions. To question this here: being a developer, I investigated the obstacles when using 3rd party APIs; one such is Highcharts. Though I have strong points why the Highcharts libraries are great compared to other JavaScript visualization libraries (not comparing them with the Google visualization APIs), I found myself lost with the usability of the charts when going through their API docs at the very beginning. Indeed, their support forum is good for getting your questions answered. Developers using APIs - most of the time - do not (sometimes, need not) understand the design intents of the API, the architectural design and the decisions made during the development of the API. Also, what are your suggestions for making a better API that will reach a large set of developers? I'm not entirely sure about the friction of learning new APIs. PS: This may not be a constructive answer. But I am sure I am communicating with real developers on this site."} {"_id": "128540", "title": "What do I need to know about Agile to blag my way past a recruitment agent?", "text": "I'm currently hunting for a new job. 
I'm a C# / SQL Server developer, but have never worked in an agile environment. I'm in the UK, where very few (IT) employers advertise directly, but instead use recruitment agencies. The problem I'm having is that many agents have little or no understanding of the skills they're trying to recruit for. Agile, being the latest buzzword, appears as an \"absolute requirement\" on many job postings. I've already spoken to a couple of agents today who believe that it's some kind of programming language, rather than a development methodology. It looks like I'm going to have to tell a few white lies to get my CV into the hands of potential employers. Can someone suggest a resource for the basics of Agile so I can at least sound convincing to an agent?"} {"_id": "141495", "title": "What is a microframework?", "text": "Why are some frameworks (e.g. Flask for Python, Sinatra for Ruby) called microframeworks? What differentiates them from full-fledged frameworks, like Django or Rails?"} {"_id": "246351", "title": "Using abstract methods to force subclasses to define values for member fields", "text": "Often in my designs I define an abstract superclass whose subclasses will vary mostly in their values for the fields defined in the superclass. For example, in a game I'm developing there's an abstract class `Missile` with a number of subclasses. Each one defines a different value for member variables such as `MAX_SPEED`, `MASS`, `FLYING_FORCE`, which are then used in calculations performed in the superclass. I use abstract methods in `Missile` to force its subclasses to define these variables. For example, I simply have an abstract method `defineMass()` that forces subclasses to define their own value of mass. A different example would be my `Entity` class (the superclass of all game-play classes) defining an abstract `loadImage()` method that forces subclasses to define an image for themselves. Is this good practice? Is it common?"} {"_id": "246356", "title": "Functional programming strategies in imperative languages", "text": "I've been convinced for a while now that some strategies in functional programming are better suited to a number of computations (i.e. immutability of data structures). However, due to the popularity of imperative languages, it's unlikely that I will always be working on projects that are functionally implemented. Many languages (Matlab, Python, Julia) support using a functional paradigm, but it feels tacked on in many cases (looking at you, anonymous functions in Matlab). That being said, what are functional methods and strategies that I can use even in OOP/imperative code? How can I write solid functional code that does not allow side effects even in languages that have mutable state and global variables?"} {"_id": "122729", "title": "Alternatives to Octopus for deploying .NET applications?", "text": "Our shop has a TeamCity server that produces deployment packages. The packages are either ASP.NET web apps, Windows services, or miscellaneous binaries (that just get copied to the network somewhere). The packages are all simple zip files. This is all working fine but right now the act of deploying involves a developer manually unzipping the package and copying it to the right place, starting/stopping Windows services if necessary, dealing with synchronization processes to push files to all servers in a web farm, etc. I'm working on figuring out the best way to automate all this. There is a new tool called Octopus which looks to be exactly what we need. 
However, for various reasons (cost, immaturity of product), we can't use it. At the other end of the spectrum, obviously I could script all this out with MSBuild. But what are my other options? Are there tools similar to Octopus out there? Are there open source equivalents? How are other shops solving this?"} {"_id": "14744", "title": "I don't know C. And why should I learn it?", "text": "My first programming language was PHP (**_gasp_**). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low or mid level languages like C. The general consensus in the programming-community-at-large is that \"a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc.\" I do not agree. I argue that: 1. Because high level languages are easily accessible, more \"non-programmers\" dive in and make a mess 2. In order to really get anything done in a high level language, one needs to understand the same concepts that most proponents of \"learn-low-level-first\" evangelize about. Some people need to know C; those people have jobs that require them to write low to mid-level code. I'm sure C is awesome, and I'm sure there are a few bad programmers who know C. Why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving _forward_? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental trait that separates good programmers from bad ones. Does anyone have any real world examples of how something written in a high level language--say Java, Pascal, PHP, or JavaScript--truly benefited from a prior knowledge of C? Examples would be most appreciated."} {"_id": "170704", "title": "Why learn more programming languages?", "text": "> **Possible Duplicate:** > (Why) Should I learn a new programming language? I came across a line in this article which is, > Learn one programming language every year Why do good programmers suggest learning more programming languages? We could end up a jack of all trades and master of none in this case."} {"_id": "136133", "title": "(Why) Should I learn a new programming language?", "text": "I'm quite proficient with Java, C/C++, JavaScript/jQuery and decently good at Objective-C. I'm quite productive with the languages and their corresponding frameworks too, and do produce enterprise level systems (and also small scale ones) with sufficient ease, all the while keeping code 'clean' and maintainable (yes, I can read my own code after six months :) Unless mandated by the platform (iPhone, iPad, etc.) or by the client/implementation organization, just \"why\" should I learn a new programming language? Just for \"fun\"? And do what with that fun if I'm not going to do anything worthwhile with it? A lot of my peers are ready to dive in to learn the \"next new thing/language\" and it's usually Python, Ruby or PHP (just naming a few popular ones). Now, just knowing the language by itself is futile IMHO. You also need to know the frameworks, learn their usage/APIs as well as 'good implementation practices', etc. So from an 'economic' sense, is there any benefit in learning a new programming language? 
If the language is learned in a quick and dirty fashion, it'll probably also be used for quick and dirty prototyping/implementation - but I don't see THAT as a justifiable investment of time/effort. So just WHY should I (or anyone for that matter) learn a new programming language other than \"it's fun so let's try it out\" - if the investment of time may not be worth it in the long run?"} {"_id": "115783", "title": "Is C a MUST-learn language for programmer?", "text": "> **Possible Duplicate:** > I don't know C. And why should I learn it? I started my programming with Java, and then learned PHP. Also, because of work, I learned Objective-C. But most programmers start by learning C. So, is it necessary for a programmer to learn C? If yes, how important is it? Please drop your comments, thanks."} {"_id": "123088", "title": "Why we still need C in terms of development for high level programming modules?", "text": "> **Possible Duplicate:** > I don't know C. And why should I learn it? I've always been kind of awkward coding in C, though I am somewhat familiar with it. I like Java and C#. The reason I like to code with them is the better GUI application development. With C, it's hard to code for a better appearance of any application. I understand the fact that C is the basis of it all. But now we have tons of libraries which can help us to do anything in no time. So why do many people still code in the C programming language?"} {"_id": "30974", "title": "Should I Learn C/C++ Even If I Just Want To Do Web Programming?", "text": "> **Possible Duplicate:** > I don't know C. And why should I learn it? My goal is to be able to create online apps and dynamic, database driven websites. For instance, if in the future I get the idea for the next Digg or Facebook, I want to be able to code it myself. To arrive there I think I have basically two paths: ### Path 1 Start at a basic level, learning C, then C++ for OOP, then algorithms and data structures, with the goal of getting a solid grasp of computer programming. Only then move to PHP/MySQL/HTTP and start working on practical programming projects. ### Path 2 Start directly with PHP/MySQL/HTTP and getting my hands dirty with practical projects right away. What would you guys recommend?"} {"_id": "122726", "title": "Java void methods implicitly returning this", "text": "There are a couple of discussions on SO about setter methods returning the \"this\" type. And it looks like Java 7 had a proposal for void methods returning this. But this proposal did not make it into Java 7's feature set. I could not find out whether this proposal has moved on to Java 8 or a future version, or has been completely discarded. Has it? Ref link - Design: Java and returning self-reference in setter methods"} {"_id": "122721", "title": "Documentation with images - what is the most git-friendly approach?", "text": "When documenting software I traditionally write a plain text file (like a README.txt), which works well but unfortunately cannot contain images like HTML can. With a normal HTML document, images must be stored as separate files (referenced using <img> tags), which gives unnecessary clutter. The \"Save a single web-page\" feature in Internet Explorer uses MIME HTML (MHTML) to save all resources within a single file. The requirement of having to use a browser to read it is not an impediment these days. Apparently Opera and Word support editing these documents directly. 
We use git for our source code, hence the differences as seen by git between two edits of the same document should not be large, to avoid unnecessary disk usage and unusable diffs when looking at the document outside the MHTML editor. What is the best mainstream HTML editor for producing git-friendly MHTML files?"} {"_id": "94423", "title": "Can we regard file-system structure as one part of architecture?", "text": "Put this image inside the `images` folder. All JavaScript files should go inside a folder called `js`. Put templates inside a folder called site-templates and for each template, have three folders called `layouts`, `looks`, and `pages`. We're all familiar with these file-system structures, in which we try to logically and efficiently categorize files and folders in an acceptable hierarchy inside our projects. On the other hand, because many times we do I/O operations on these files, changing the file-system structure forces us to update parts of our code, no matter how high-tech and decoupled our code is. My question is, based on the effect that a file-system structure has on an overall project, can we consider it a part of the software architecture? Because, in many cases, choosing a correct file-system structure prevents us from duplicating a file, say jQuery, in many places."} {"_id": "246460", "title": "Reduce a list of IPv4 to lowest common CIDR", "text": "I have a long list of IP addresses that show some pattern of closeness Example: XX.249.91.16 XX.249.91.21 XX.249.91.32 XX.249.91.160 XX.249.91.165 XX.249.92.15 XX.249.92.25 XX.249.92.51 XX.249.92.234 and sometimes a whois reveals a range like this XX.163.224.0 - XX.163.239.255 I would like to create a list of CIDR blocks that gives me the lowest common network for each cluster It seems Python already has such a thing - iprange_to_cidrs - but I need it in JavaScript XX.249.90.0/16 if that is what would handle XX.249.91.0 - XX.249.92.255 ditto XX.163.200.0/nn to handle XX.163.224.0 - XX.163.239.255 A calculator that can do it one range at a time when I fiddle with the mask is here http://www.subnet-calculator.com/cidr.php My preferred language is JavaScript and I have the following start but need the algorithm that does the match and transform var list = [ [1234,\"10.249.91.16\"], [5678,\"10.249.91.21\"], [123,\"10.249.91.32\"], [456,\"10.249.91.160\"], [789,\"10.249.91.165\"], [3456,\"10.249.92.15\"], [7890,\"10.249.92.25\"], [1234,\"10.249.92.51\"], [5432,\"10.249.92.234\"] ] function dec2bin(ip) { return ip.split(\".\").map(function(num) { return (\"00000000\"+parseInt(num,10).toString(2)).slice(-8); }).join(\".\"); } $(function() { $res = $(\"#res\"); $.each(list,function(_,arr) { $res.append('<br/>'+dec2bin(arr[1])+\" - \"+arr); }); });
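To pin down the algorithm before porting it to JavaScript, here is a rough sketch in Python (illustrative names; the stdlib's ipaddress.summarize_address_range does the full range-to-CIDR-list job, but the core idea is just the longest common bit prefix):

```python
def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def common_cidr(start_ip, end_ip):
    """Smallest single CIDR block whose prefix covers both addresses."""
    start, end = ip_to_int(start_ip), ip_to_int(end_ip)
    prefix = 32
    # Shorten the mask until both addresses share the same leading bits.
    while prefix > 0 and (start >> (32 - prefix)) != (end >> (32 - prefix)):
        prefix -= 1
    network = (start >> (32 - prefix)) << (32 - prefix)
    octets = [(network >> s) & 0xFF for s in (24, 16, 8, 0)]
    return ".".join(map(str, octets)) + "/" + str(prefix)

# common_cidr("111.163.224.0", "111.163.239.255") -> "111.163.224.0/20"
print(common_cidr("111.163.224.0", "111.163.239.255"))
```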
UPDATE after reviewing an answer (FIDDLE): function getDec(ip) { var octet = ip.split(\".\"); return (octet[0] << 24) | (octet[1] << 16) | (octet[2] << 8) | octet[3]; } var res = document.getElementById(\"res\"); var adr1 = \"111.163.224.0\"; var adr2 = \"111.163.239.255\"; var XOR = getDec(adr1)^getDec(adr2); // now what? Update 2: could the result for the range 111.163.224.0 - 111.163.239.255 be `111.163.224.0/20`? Since we have 01101111.10100011.11100000.00000000 - 111.163.224.0 01101111.10100011.11101111.11111111 - 111.163.239.255"} {"_id": "74300", "title": "Has anyone successfully used Windows Workflow for a Business Rules/Validation engine?", "text": "I was wondering if anyone has successfully used Windows Workflow Foundation for a Business Rules/Validation engine, or if you know of some sample code or articles about this. If you have used it before, what do you think of it? How does it compare to other Business Rule/Validation systems? I am thinking of rules like if (A, B, and C) AllowAccess(); Or if (Value between X and Y) return true;"} {"_id": "146727", "title": "Event driven design and separation of core/UI logic", "text": "I am new to event driven development, and I feel lost when I try to implement events that should pass the core/UI boundary. In my program I have the following (example in C#): UI.RuleForm Core.RuleList UI.ResultForm Cell 1 Rule 1 Cell 2 Rule 2 Cell 3 Rule 3 What I want is: when a RuleForm cell changes, it will update the corresponding rule in RuleList. And when the RuleList changes, the ResultForm will be recalculated from the rules. My current thought is that, in order to keep core logic separated from UI logic (i.e. the core should know nothing about the UI), the core should only generate events, but not process events generated by others. So I have to create some kind of UI.RuleListWrapper which can process RuleForm change events, updating Core.RuleList. RuleList in turn should fire OnChange events that UI.ResultForm can use. So in summary, my questions are: I want to know if my reasoning and proposed implementation are okay or not, which probably means: **should a core module be able to process events generated by the outside UI** Is my separation some kind of \"mysophobia\", or has it been done before? Are there other, better approaches?"} {"_id": "185532", "title": "What is the best way to plot 3D/2D plots with real time data?", "text": "First of all, I like to use Python, because it is easy to work with. I am not a programmer, so I prefer anything that is easy to use and understand. I understand that it might be faster to program 3D in C/C++ or whatever, but that is beyond my scope. Now, I wish to create a 3D plot of lots of data points (from a scientific sonar). I get roughly 5-10 updates per second and each update contains thousands of points. So the drawing of the 3D plot needs to be fast. Below the 3D plot I want to have a 2D plot \"seen from above\", and to the right another 2D plot \"seen from the side\". Both of these 2D plots will be updated at the same time as the 3D plot, with the same number of points. At any time I will want to pause the real time playback of data and select points from the 2D plots and extract data from those points. I also wish to represent the points with different size and color depending on target strength. And in the paused selection-mode, show the selection by changing the color of selected points and/or making individual points blink or similar (apply two different selections at once). 
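To give a feel for the scale, here is a minimal sketch (illustrative only, covering just one of the 2D panels - the 3D panel would need something like Mayavi or mplot3d) of the kind of per-update redraw I have in mind, using matplotlib and NumPy:

```python
import numpy as np
import matplotlib.pyplot as plt

plt.ion()                              # interactive mode: redraw without blocking
fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
scat = ax.scatter([], [], s=5)

for update in range(100):              # stand-in for 5-10 sonar updates/second
    pts = np.random.rand(5000, 2)      # thousands of new points per update
    sizes = np.random.rand(5000) * 20  # size (or colour) encoding target strength
    scat.set_offsets(pts)
    scat.set_sizes(sizes)
    fig.canvas.draw_idle()
    plt.pause(0.1)
```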
I've done a bit of research, and it seems like Enthought has some nice tools. First of all Mayavi for the 3D plot, and then Chaco for the 2D plots. These seem to have nice built-in functionality for selecting points, etc. But I am a bit concerned about the plotting speed. Are they able to plot thousands of points 5-10 times per second in real time? Also, maybe I missed something during my research and there are better alternatives out there? E.g. how much work is it to code directly for OpenGL and is it worth the extra hassle in this case? Or some gaming engine? Or matplotlib? I would appreciate any advice I can get on this. And if it is platform independent, all the better. I am using both Ubuntu and Windows 7."} {"_id": "146720", "title": "Database architecture decisions", "text": "I do not know too many things about databases, so a lot of questions concerning architecture have come up lately. Two of these things are: 1. If I have a table with a lot of entries (millions probably), how can I make the `select` queries faster? I thought about sorting the table alphabetically and then splitting it in two, but that doesn't seem to make things easier for me. Do you have any suggestions? 2. I have a table _user_ and a table _message_. In _message_ I should have `sender_id` and `receiver_id`. From what I know, I can't make them both _foreign keys_ for _user_, so I have to pick one of them. Doesn't this, however, lead to data inconsistency (which, as far as I know, is bad)? What is the right approach here? I do not think that it matters, but I use MySQL 5.5."} {"_id": "135890", "title": "Does an iterator have a non-destructive implied contract?", "text": "Let's say I'm designing a custom data structure like a stack or a queue (for example - it could be some other arbitrary ordered collection that has the logical equivalent of `push` and `pop` methods - i.e. destructive accessor methods). If you were implementing an iterator (in .NET, specifically `IEnumerable`) over this collection that popped on each iteration, would that be breaking `IEnumerable`'s implied contract? Does `IEnumerable` have this implied contract? e.g.: public IEnumerator GetEnumerator() { while (this.list.Count > 0) yield return this.Pop(); /* destructively pops an element on each iteration */ }"} {"_id": "42941", "title": "How do you navigate and refactor code written in a dynamic language?", "text": "I love that writing Python, Ruby or JavaScript requires so little boilerplate. I love simple functional constructs. I love the clean and simple syntax. However, there are three things I'm really bad at when developing large software in a dynamic language: * Navigating the code * Identifying the interfaces of the objects I'm using * Refactoring efficiently I have been trying simple editors (e.g. Vim) as well as IDEs (Eclipse + PyDev) but in both cases I feel like I have to commit a lot more to memory and/or constantly \"grep\" and read through the code to identify the interfaces. This is especially true when working with a large codebase with multiple dependencies. As for refactoring, for example changing method names, it becomes hugely dependent on the quality of my unit tests. And if I try to isolate my unit tests by \"cutting them off\" from the rest of the application, then there is no guarantee that my stub's interface stays up to date with the object I'm stubbing. I'm sure there are workarounds for these problems. 
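A tiny example (hypothetical names, Python) of the stub drift I mean:

```python
# The real class after a rename refactoring: price() became total_price().
class Order:
    def total_price(self):
        return 42

# A hand-rolled test stub nobody updated: it still answers the old name,
# so tests using the stub keep passing while production code has moved on.
class OrderStub:
    def price(self):
        return 0
```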
How do you work efficiently in Python, Ruby or JavaScript?"} {"_id": "135896", "title": "Have not been working as a programmer in the three years since I graduated and would like to, any advice?", "text": "I graduated with my BS in CS in June 2009 and then took a job near school which they told me would be web site maintenance. It was actually web-based customer service with a small amount of working on the company site. After about six months I got super bored with it and decided to move back home. I was desperate for a job so I could pay rent and start paying off my student loans. After six months of looking I found my current job, where I've been for about a year and a half. My current boss also promised me the job would be company website development and maintenance but I ended up getting railroaded into doing customer service again. I just found out that my boss is having a website developed by a friend which I guess I will be asked to keep updated. I'm really unhappy in my job and I feel like my education is totally going to waste. I can't seem to find any job listings for CS degree students who don't have software development employment experience. And now that it's been three years since I graduated and I'm still not in the field, it seems like that makes me look really bad to employers. I was offered an internship at a software company about a year ago but my fianc\u00e9 asked me to turn it down because it would have been unpaid and we were running out of money. My current job takes up so much of my time and energy and leaves me so drained that I have a hard time working on projects at home or even finding the time to apply for new jobs. So, my question is twofold: 1. how do I get past the stigma of not having been working in the industry since I graduated and 2. how do I find a CS job in the small amount of time I have? Posting resumes and applying to job postings online, writing cover letter after cover letter, seems useless."} {"_id": "116911", "title": "Is the \"Software Project Survival Guide\" methodology compatible with Agile ones?", "text": "I'm considering re-reading Steve McConnell's excellent \"Software Project Survival Guide\" and perhaps applying it verbatim to my next project. However, one thought struck me: the book was written in 1998, before Scrum and other agile methodologies became popular. Are the teachings of this book still relevant in light of the newer methodologies? Or are they compatible? If the latter, do you have any experience of agile projects run according to McConnell's book(s)?"} {"_id": "204865", "title": "Can You Use 2 Python Modules Issued Under LGPL and BSD License in a Program?", "text": "I'm making a drawing program as my first open-source program, merely to get a taste of the open-source community. To make said program, I am using Python 2.7. I'm using the following modules: 1. EasyGUI 0.96 - under the 3-clause BSD license 2. Pygame 1.9.1 - under the LGPL 2.1 license All I intend to do is create a program that depends on these modules and upload it to a public GitHub repository for others to freely modify and distribute. _If_ my code could be used in closed source software, or used by others for profit, I do not want that. I do not wish to modify Pygame or EasyGUI themselves, either. They simply need to be provided to run the software. Is this possible with the two licenses mentioned above? If I were to do this, what license would I have to issue my program under and why? Are some better choices than others? 
Is it also possible to release my drawing program's code under a reciprocal license, so that people contributing send their bug fixes back to me?"} {"_id": "128093", "title": "Are detailed style requirements inappropriate from Marketing?", "text": "I'm working on the UI-side of a project which is under intense scrutiny by our Marketing group. Most of the time, we're reviewing functionality requirements. Once in a while, however, they will get very in-depth regarding colors, styles, etc. For example, I've been tasked by Marketing to make one particular button a very specific color and a very specific size. In other projects I've worked on, however, these types of requirements have been quietly ignored or rejected when they came from Marketing. Managers and tech leads have insisted Marketing shouldn't be specifying details like colors, styles, control layouts, etc., but I've never really been told _why_. Is it inappropriate for Marketing to list detailed style requirements for a UI, including colors, shades, layout, sizes, etc.? If so, why? Is this just a case of conflicting philosophies regarding Marketing responsibilities? (I should probably mention that we're not working with a unified style guide for company-wide applications. In the past, other projects I've worked on have \"winged it\" when assembling their UIs.) * * * **Edit:** I removed an editorial comment I made which probably implied this post was a complaint couched in a question. I'm not really concerned with what I should _do_, but rather I would like a better understanding of what Marketing's responsibility is regarding these types of requirements. I've had very little exposure to Marketing (I've been more distanced from them in previous projects) and I don't understand their role as it pertains to UI styling."} {"_id": "203104", "title": "Why can't `main` return a double or String rather than int or void?", "text": "In many languages such as C, C++, and Java, the `main` method/function has a return type of `void` or `int`, but not `double` or `String`. What might be the reasons behind that? I know a little bit about why: `main` is called by the runtime library and it expects some syntax like `int main()` or `int main(int, char**)`, so we have to stick to that. So my question is: why does `main` have the type signature that it has, and not a different one?"} {"_id": "140156", "title": "Is unit testing or test-driven development worthwhile?", "text": "My team at work is moving to Scrum and other teams are starting to do test-driven development using unit tests and user acceptance tests. I like the UATs, but I'm not sold on unit testing for test-driven development or test-driven development in general. It seems like writing tests is extra work, gives people a crutch when they write the real code, and might not be effective very often. I understand how unit tests work and how to write them, but can anyone make the case that it's really a good idea and worth the effort and time? Also, is there anything that makes TDD especially good for Scrum?"} {"_id": "41419", "title": "Convincing my coworkers to use Hudson CI", "text": "I'm well aware of some benefits of using Hudson as a CI server, but I'm facing the problem of convincing my coworkers to install and use it. To give some context, we are developing two different products (one is an enterprise search engine based on Apache Solr) and several enterprise search projects. We are facing a lot of versioning issues and I think Hudson will solve these problems. 
They argued about its productivity and learning curve. What Hudson benefits would you spotlight?"} {"_id": "204330", "title": "How commonly used is jQuery DataTables, and what are the alternatives?", "text": "I'm doing some work on a project that uses jQuery DataTables. I've never used this particular jQuery extension before. In the past, I've used the ASP.net DataGrid control, which of course is not available in ASP.net MVC. I've also used the Telerik Kendo UI grid, which is expensive. I've been wondering how commonly used jQuery DataTables is across the internet (especially with ASP.net MVC) and what other technologies are being used to replace the formerly ubiquitous ASP.net DataGrid."} {"_id": "41411", "title": "Is there a trend for cross-platform GUI toolkits?", "text": "What is the trend like for the usage of **cross-platform** GUI frameworks right now? Are more people starting to use cross-platform frameworks (such as GTK+, Qt and wxWidgets) or are there more who use more platform-tied frameworks (e.g. Cocoa or WPF)? Is it more or less stagnant? Is it like a rollercoaster? What do you think the trend will be like, say, 5 years from now? The OS landscape is shifting with fewer people using Windows (personal observation). This should increase the demand for cross-platform toolkits, shouldn't it? Edit: Also, which (cross-platform) toolkits are growing the most, if so?"} {"_id": "70688", "title": "Are web applications limited by the amount of memory or by the speed of the database server on the server side?", "text": "Many web sites mostly only do CRUD (create, read, update, delete) operations against the database for different URLs. Suppose that we have a 3-tier solution with the database server on a dedicated server and the web server on another server. This question is not about the database server, but the web server. The web server serves dynamic content, so it executes some code for every request and communicates with the database server. What should the limitations of the web server be if the web applications aren't computation-heavy under heavy load? Shouldn't the web application be limited by the speed of the database server? Or is it limited by the amount of memory? With the speed of CPUs today I don't think that the web server's CPU speed is the limitation. Static content web servers like Nginx and Lighttpd almost always use very little memory and are limited by disk-IO speed. But how is it for dynamic content?"} {"_id": "218422", "title": "Many JS files included, one breaks all, smart approach to fix?", "text": "I'm doing a project that uses svg-edit (a JS drawing tool) and the page grows big. By now there are about 20 JS files in the page - plug-ins and so on. One of the editor's files breaks with some kind of 'Uncaught TypeError', but it's compressed and I can not understand anything. I used several jQuery plug-ins that are important for the usability of the page and it's not possible to remove any of them. There are severe conflicts obviously, but I can't rewrite any of the plug-ins. Any smart approach to fixing this? Where can I start?"} {"_id": "83946", "title": "Would it be dishonest to use side tools during a phone interview?", "text": "A lot of the time for decent tech jobs you have to go through a phone screening process. If you suck over the phone you generally don't get a chance to show how bad ass you are in person. These interviews, even for experienced and advanced positions, very often involve trivia style questions over basics. 
Unfortunately for me they're always about something I don't generally think about and didn't imagine to prepare for. With more experience with them (I haven't applied for a lot of such positions) I imagine I might get better, but in the meantime... Take, for example, converting some decimal value XX to hexadecimal or binary. Last time I interviewed for something I had a total brain-fart on how to actually do this. I very rarely have to care. I use hexadecimal any time I want to be associating values with bit construction and decimal when I want to think of the value as a number. I rarely convert between the two. When I do need to I just pop open the calculator and let it do it for me. It's not like it's hard or anything but for some reason I simply couldn't even remember how to do it. I told the interviewer the truth and told him I'd have a better chance converting the other way (since it's way, way easier and still shows understanding I guess). He then mentioned that most people struggled with that and wondered if it was some sort of age difference thing. Maybe, maybe not. I explained where I was coming from and left it at that. But, it being a phone interview, he couldn't exactly tell how I'd come up with the answer. Perhaps it's perfectly legitimate to just use the tools I always do? What do you think? If you were interviewing someone, asked a question like that, and then found out that they'd used a tool or reference to answer your question rather than do it by hand or in their head....would you be pissed off? Would you consider that dishonest or a good use of the tools available to solve the problem?"} {"_id": "83948", "title": "How to distribute our software on Linux without shipping source code", "text": "For our company needs, we need to sell one of our software products on *nix-like systems. How can we distribute and protect our software? I know that almost every program on Linux is open-source, so how can we protect the source code? Do we need to distribute part of the source code as object files? The software is written in C."} {"_id": "70682", "title": "What is ActiveRecord in Rails?", "text": "Today I interviewed with a big-ego VP for a Ruby on Rails position at a profitable company in my city. He asked me to explain what ActiveRecord is in Rails. I told him the following: > ActiveRecord provides an object-oriented interface to an application's > database to make development easier and friendlier for the developer. It > provides validation functionality as well, which helps keep data in the > database clean. ActiveRecord also pertains to the model part of MVC. He then told me that was \"kind of mickey moused.\" So, IS my explanation \"mickey moused,\" or should I just have hung up on him right then and there?"} {"_id": "133495", "title": "What's the simplest example out there to explain the difference between Parse Trees and Abstract Syntax Trees?", "text": "To my understanding, a parser creates a parse tree, and then discards it thereafter. However, it can also pop out an abstract syntax tree, which the compiler supposedly makes use of. I'm under the impression that both the parse tree and the abstract syntax tree are created during the parsing stage. Could someone then explain why these are different?"} {"_id": "39700", "title": "How should compilers report errors and warnings?", "text": "I don't plan on writing a compiler in the near future; still, I'm quite interested in compiler technologies, and how this stuff could be made better. 
Starting with compiled languages, most compilers have two error levels: warnings and errors, the first being, most of the time, non-fatal stuff you should fix, and errors indicating, most of the time, that it's impossible to produce machine- (or byte-) code from the input. This is a pretty weak definition, though. In some languages like Java, certain warnings are simply impossible to get rid of without using the @SuppressWarnings annotation. Also, Java treats certain non-fatal problems as errors (for instance, unreachable code in Java triggers an error, for a reason I'd like to know). C# doesn't have the same problems, but it does have a few. It seems that compilation occurs in several passes, and a failing pass will keep the further passes from executing. Because of that, the error count you get when your build fails is often grossly underestimated. On one run it might say you have two errors, but once you fix them maybe you'll get 26 new ones. Digging into C and C++ simply shows a bad combination of Java's and C#'s compilation diagnostic weaknesses (though it might be more accurate to say that Java and C# just went their way with half the problems each). Some warnings really ought to be errors (for instance when not all code paths return a value) and still they're warnings because, I suppose, at the time they wrote the standard, compiler technology wasn't good enough to make these kinds of checks mandatory. In the same vein, compilers often check for more than the standard says, but still use the \"standard\" warning error level for the additional findings. And often, compilers won't report all the errors they could find right away; it might take a few compiles to get rid of all of them. Not to mention the cryptic errors C++ compilers like to spit out, where a single mistake can cause tens of error messages. Now adding that many build systems are configurable to report failures when the compilers emit warnings, we just get a strange mix: not all errors are fatal but some warnings should be; not all warnings are deserved but some are explicitly suppressed without further mention of their existence; and sometimes all warnings become errors. Non-compiled languages still have their share of crappy error reporting. Typos in Python won't be reported until the code is actually run, and you can never really knock off more than one error at a time because the script will stop executing after it meets one. PHP, on its side, has a bunch of more or less significant error levels, _and_ exceptions. Parse errors are reported one at a time, warnings are often so bad they should abort your script (but don't by default), notices really often show grave logic problems, some errors really aren't bad enough to stop your script but still do, and as usual with PHP, there are some really weird things down there (why the hell do we need an error level for fatal errors that aren't really fatal? E_RECOVERABLE_ERROR, I'm talking to you). It seems to me that every single implementation of compiler error reporting I can think of is broken. Which is a real shame, given how all good programmers insist on how important it is to correctly deal with errors, and yet we can't get our own tools to do so. What do you think should be the right way to report compiler errors?"} {"_id": "133490", "title": "VisualVM Sampling & Accuracy", "text": "It's been said in other questions that jvisualvm sampling works as a \"lightweight\" profiling tool by calculating metrics directly from Java stack frames. 
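To make sure I have the mechanism right, here is a toy sketch (Python rather than Java, purely illustrative) of what I understand a sampler to do - and why anything that runs between two samples is simply never seen:

```python
import collections
import sys
import threading
import time

def sampler(target_id, counts, interval=0.01, duration=1.0):
    """Periodically grab the target thread's current stack frame and tally it."""
    end = time.time() + duration
    while time.time() < end:
        frame = sys._current_frames().get(target_id)
        if frame is not None:
            counts[frame.f_code.co_name] += 1  # whole interval credited here
        time.sleep(interval)                   # short-lived calls in between are missed

def busy():
    t = time.time()
    while time.time() - t < 1.0:
        sum(range(1000))

counts = collections.Counter()
worker = threading.Thread(target=busy)
worker.start()
sampler(worker.ident, counts)
worker.join()
print(counts.most_common(3))
```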
It's almost unanimously agreed that such a technique is _faster_ but not as _accurate_ in terms of its \"timings\". My question: why is this method not as accurate? And what is it not as accurate as (compared to what? Are there more precise profiling techniques?)?"} {"_id": "133492", "title": "Is \"Code Smell\" still a useful metaphor, or has misuse of the term subverted its meaning?", "text": "I've come across some comments and answers on Programmers.SE that decry the use of the phrase \"Code Smell\" and I've been wondering what the reasoning is for those who dislike it. I first encountered this term when I read Fowler's Refactoring back in 2000, yet it's only in the last year or two that I've heard any complaint about the use of the phrase. **Q:** Would it be correct to say that the phrase _Code Smell_ has recently started to fall into disfavor, and if so why does/should this metaphor no longer apply? * * * I am looking for well-defined answers supported by sound reasoning as I realize that this question is borderline subjective."} {"_id": "231804", "title": "Where to Populate Objects", "text": "As mentioned in this question, I am moving our team towards objects (as opposed to just throwing DataTables and variables around everywhere). I have picked a suitable spot for the project that contains the object definitions, but I'm not sure how I should go about populating some of the objects. In particular, what do I do with classes as properties? Public Class Class1 Public Property SomeProperty As String Public Property SomeOtherProperty As String Public Property SomeKey As Integer Public Property Foo As Class2 End Class In this basic example, I can easily populate the first three properties from a query, but `Foo` has its own set of properties. Right now, to create a `List(Of Class1)` I generate the data for all of the Class1 objects, and then use `SomeKey` to go back and, one object at a time, populate all of the `Foo`. Certainly this works, but in my case it is rather slow, and it seems that perhaps I am simply doing it wrong. Am I handling that incorrectly? If so, what should I be doing? The only other thing I can come up with is to create a query (or stored procedure) that returns all of the appropriate data for all of the objects. In my test case that amounts to a list that contains 154 `Class1`'s, with each of them having four properties like `Foo`, and some of those having classes as properties."} {"_id": "59880", "title": "Avoid Postfix Increment Operator", "text": "I've read that I should avoid the postfix increment operator for performance reasons (in certain cases). But doesn't this affect code readability? In my opinion: for(int i = 0; i < 42; i++); /* i will never equal 42! */ Looks better than: for(int i = 0; i < 42; ++i); /* i will never equal 42! */ But this is probably just out of habit. Admittedly, I haven't seen many use `++i`. Is the performance that bad to sacrifice readability, in this case? Or am I just blind, and `++i` is more readable than `i++`?"} {"_id": "71379", "title": "What are the advantages of the Unified Software Development Process?", "text": "Why should an organization adopt the Unified Process over others? What are the relative advantages? I know that it is closely coupled with UML, but clearly this cannot be the only advantage. Why choose this approach over others?"} {"_id": "27686", "title": "What are useful metrics for source code?", "text": "What are useful metrics to capture for source code? How can metrics, like for example _(Executable?)
Lines of Code_ or _Cyclomatic Complexity_ help with quality assurance, or how are they beneficial in general for the software development process?"} {"_id": "231809", "title": "Why don't interpreters interpret bytecode (like VMs) - instead of source code?", "text": "**Edited the question to be clearer:** It is known that interpreting bytecode is much faster than interpreting source code or some IL version of the source code. **The interpreter has a much easier time understanding bytecode than source code or source-code-like IL.** Virtual machines interpret bytecode. This bytecode interpreted by VMs is the result of compiling source code to bytecode. However, 'independent interpreters' (not inside a VM) are known to interpret source code or source-code-like IL, instead of bytecode. **Why is that? Why don't interpreters interpret bytecode, like VMs?** All that is needed is to first compile the source code to bytecode (as is done for VMs), and then the interpreter can interpret this bytecode. **Is the reason for this that an interpreter that interprets bytecode is a VM by definition?** (Just guessing here). Or is it something else?"} {"_id": "71374", "title": "Functional language with C-like syntax", "text": "I've been looking for a functional language with C-like syntax and static typing. So far my choice would be Nemerle. Is there anything else/better? EDIT: second choice would be Lua or Go. Any pros and cons?"} {"_id": "232576", "title": "Building a serverless p2p CMS/System", "text": "I'm thinking about building an Open Source, serverless, offline-replicated p2p CMS, but I'm concerned about it really working in a real environment. How would you go about doing that, reducing the exposure to risks/bugs? _I've thought about this schema for now._ On the local device we have a folder called \"cmsfiles\" containing the software's files and a subfolder \"database-shards\"; inside this subfolder we have: shard0.sqlite, shard1.sqlite, shard2.sqlite and so on. Each shard has a cap of 50GB. Inside these shards we'll assign the tables in a random way, being: comments, posts, answers, users etc. If a device visits a portal, it has read-only access; to interact it has to have a locally installed instance, with the same content as explained above. When a user installs the software, he's asked for a nick and a password, which will then be saved in a private key for later verification. When a device does an insert (an answer to a post in this case), it has to send other information as well. **_Locally, the request is processed in the following way:_** The remote peer is asked for its IP address/proxy/Tor node and hardware UUID. A check is then done to determine if the remote peer is in the blacklist; if TRUE, all operations/negotiations are discarded, but if it evaluates FALSE, a check is done to see if it's present in the \"breaking attempt log\" list, and if the number is less than 5 && not equal to 0, the number is ++incremented and at the same time all operations with the remote peer are discarded. Else, if the number equals 0, we proceed to further verification. The remote peer's message (answer), private key and files in the \"cmsfiles\" folder are fetched. The files are then compressed locally to cmsfiles.lz and hashed. This hash is then compared to the local one, and if they're not equal, the number in the \"breaking attempt log\" list is ++incremented and all operations in progress are discarded.
But if they match, we proceed to sign the remote private key with the public key; if it signs successfully, we check whether the post (the answer) already exists. If it exists (if TRUE), the number in the \"breaking attempt log\" is ++incremented and any operation discarded; else (if FALSE) we check if the post's votes equal 0 (a new post cannot have more than 0 votes!), and if they don't, the number in the \"breaking attempt log\" list is ++incremented and any operations discarded. Otherwise, if it indeed equals 0, we can finally add the message (answer) to the local database. I'm including a tree schema representation just in case: ![example tree](http://i.stack.imgur.com/7m7Xe.jpg)"} {"_id": "147027", "title": "determine language limitations before development", "text": "It so happens that you are given a project spec and your knowledge about the language is very basic. There are features in the project that you have never worked with. But you have some ideas and logic on how this can be implemented. Later on, as the development continues, you discover that the required features cannot be developed because of some language limitations or other issues. Now you have to convince your client about this, which is not easy. **Edit**: the reason for working with a not-so-comfortable technology is that the company at times cannot afford to hire someone for this task. Hence it relies on existing employees to learn something new and contribute. So how do I avoid such situations? How do I research the features before accepting the project?"} {"_id": "66810", "title": "How did the \"Rails can't Scale\" meme start?", "text": "One meme about Rails is that Rails can't Scale. Is it known how this meme started? Was there a particular blog post that argued this is the case?"} {"_id": "232579", "title": "Testing non-central features", "text": "Only 1% of our clients use particular features or scenarios. Do you think we should spend as much time testing these features and scenarios as we spend on our central features and scenarios, or should we test mostly the \"happy path\" for these features? **UPDATE** Maybe I should have asked the question in a different way. Do you think a company needs to spend more time testing the central features of its products? For example, I assume the Windows 8 team tested the boot sequence more in depth than they tested the driver for a very uncommon graphics card. The cost of a bug in a boot sequence is huge since no one would be able to use the OS. But a bug in a driver for an obscure GPU will hurt only a small client base."} {"_id": "70978", "title": "I feel that my manager slows my work, how to deal with it?", "text": "It seems strange, but my manager sits right next to me and, well, when he isn't near, I feel free, and I'm planning, doing, etc., doing my work with a sparkle in my eyes, feeling a sense of accomplishment. But everything is the opposite when he's at his workplace: I simply have to force myself to do anything, I feel very worried and nervous, and my brain doesn't work normally. How do I deal with it?"} {"_id": "7038", "title": "Can I do anything to improve performance in VS 2010?", "text": "I'm using VS 2010 since we're developing an app in .Net 4 and the performance is driving me crazy. It's mostly bad when I don't view the IDE for a while (such as when I get pulled away for a help desk call or come in in the morning). I realize it's probably built in WPF, which unloads its resources when unused, but the few-minute delay while it loads everything back up is really annoying.
I've also noticed some significant delays when opening files or compiling."} {"_id": "70973", "title": "focusing on a language itself VS focusing on language + CS", "text": "So, I've been working on Ruby with the goal of self-employment (I'm an impoverished philosopher (not even in university!) and can't get a normal low-paying job). But there is a quandary in my road of learning and discovery: I keep getting sidetracked by CS (mostly theoretical). I'm starting to think that I should consciously stop myself from straying into the theoretical until I am, I dunno, not starving? :) So I would like opinions on whether or not my view is rational (I guess a better term would be pragmatic), and of course discussion in general relating to the interrelationship between language mastery and general CS mastery is welcome."} {"_id": "132436", "title": "Which skills should I improve in order to become a professional software engineer?", "text": "> **Possible Duplicate:** > most desirable skills for a graduate software engineer I'm currently doing my PhD in physics and will be finished in about two years. I plan to become a software engineer afterwards. I'm now looking for advice on which skills I should improve during this time in order to reach my goal. My background: For my research I need to write my own programs. I've been programming every now and then since I was a child, but only in the last 2.5 years have I actually spent a lot of time coding. I've got around 2 years of experience in C++ now and consider myself good at it. Recently, I started learning Python. I didn't attend any university courses on algorithms, data structures or databases, so maybe I'm lacking some basic knowledge here. I thought about reading books on the following topics: * General books like Code Complete or The Pragmatic Programmer. * Test-driven programming and agile development methods (though I'm not sure how widely they are used in companies) * Databases * Algorithms and data structures I also thought about learning Java... What would you consider useful in my situation? Which skills are important for a company?"} {"_id": "71080", "title": "What does \"downstream/upstream design\" mean?", "text": "What does \"downstream/upstream design\" in software development mean?"} {"_id": "180833", "title": "Why can't Windows services have a GUI?", "text": "I was using this feature in earlier Windows releases like XP and NT. I was able to run a GUI from a Windows service. But it is not possible in the later versions. What is the reason behind the removal of this feature? Why can't Windows services have a GUI?"} {"_id": "226189", "title": "Designing a fitness / weight lifting routine database", "text": "I'd like to create an app similar to Barbell Pro for Android, for practice / interest / educational purposes really. Or even as another example for database purposes, Fitocracy. The problem is, I have no idea how the database could be designed... For example: * We have 1000 Persons using the app * Each Person could have 1-7 individualised WorkoutRoutines, depending on the day. (Perhaps even more -> AM workouts / PM workouts) * Each WorkoutRoutine has an individual set of Exercises, with some crossover e.g. someone could do Bench Press on Monday and on Friday after all. * Each Exercise has a number of Sets * Each Set has a number of Reps Now to me, this seems like it could potentially be a large amount of information to store for each Person, and maybe pretty complicated to do so.
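To make the chain concrete, here is a rough sketch of how I currently picture the entities (all names are just my guesses), with each class mapping to a table whose rows carry a foreign key back to their parent:

```java
import java.util.List;

// Rough sketch of the hierarchy; each class would map to one table,
// with the child table holding a foreign key back to its parent row.
class Person {
    long id;
    List<WorkoutRoutine> routines;   // 1..7 routines per person
}

class WorkoutRoutine {
    long id;
    long personId;                   // FK -> Person
    int dayOfWeek;                   // which day this routine is for
    List<ExerciseEntry> exercises;
}

class ExerciseEntry {
    long id;
    long routineId;                  // FK -> WorkoutRoutine
    long exerciseId;                 // FK -> a shared Exercise catalog (e.g. "Bench Press")
    List<WorkoutSet> sets;
}

class WorkoutSet {
    long id;
    long exerciseEntryId;            // FK -> ExerciseEntry
    int reps;
    double weight;
}
```

That way 1000 Persons would just be 1000 rows in the top table, with volume growing linearly down the chain.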
I only have some experience with relational databases and I don't know how I'd go about designing that in an efficient manner for a relational database. I'm not asking for a design, just how I'd begin to go about it. The potential complexity is daunting for me due to lack of database experience. Maybe it's not even that complicated, but like I say, that's ignorance on my part."} {"_id": "226188", "title": "Significance of many-to-many relationships?", "text": "How important are to-many relationships in iOS programming? Do you often hold a list of pointers to objects in an array in your code? I don't think I fully understand the concept of to-many relationships, let alone how to implement them in code. From my current understanding, a to-many relationship holds a list of pointers to one or more class objects and their corresponding properties and instance variables?"} {"_id": "182087", "title": "Are there differences between Functionality and Functional Requirements", "text": "I'm writing some documentation for a project in a tool. In this tool of mine, in a specific writing area, I have Use Case, Business Rule, N diagram types, and Functionality and Functional Requirements. So I got to wondering: are there differences between Functionality and Functional Requirements? In my vision, a Functionality is an implementation of a Functional Requirement; is that correct? What other differences can one find between them?"} {"_id": "37733", "title": "What benefits do I get from learning Scheme?", "text": "I'm a Java programmer and I've decided to learn a bit about theoretical computer science. I don't have a degree in that and a little background would help me a lot since I don't know anything other than coding when it comes to software development. I've searched this website for answers and I've found a lot of people recommending the book \"Structure and Interpretation of Computer Programs\" but since I don't have the required mathematical know-how to handle this book, I decided to go with \"How to Design Programs\" instead. My question here is what would I gain from this experience? Would it teach me about Computer Science like I want? Or am I better off reading about algorithms and data structures instead?"} {"_id": "147894", "title": "How should I go about re-releasing a GPL-licensed project?", "text": "A few months ago we found a library licensed under GPL that fit the bill for what we were looking to do at that time. We included it in our codebase and all was fine. Now, a few frantic months of coding later, we've refactored the hell out of the library: it's more feature complete, more stable, fully unit tested, PSR-0 compatible, etc. Now we would like to use the library in another one of our projects and that got me thinking, why not re-release the library? The problem is that I don't have any idea how to attribute the work the original developers have put in (which is actually so refactored that it's kinda unrecognizable) when releasing the library as GPL again. Over time all the file documentation headers with the original credits have been replaced and all that remains is the LICENSE file which is an exact copy of the GPL v3 license. I have absolutely no problem giving credit where it's due, but I would like to do it according to what is right in the FOSS world. Can anyone tell me how to proceed?"} {"_id": "182083", "title": "Possible to implement OOP without using extensive heap operations?", "text": "Is the concept of OOP intimately tied to allocating objects on the heap?
Is it possible to write normal OOP without creating excessive objects on the heap?"} {"_id": "182082", "title": "PHP Framework for RESTful Web Service", "text": "I have been going round in circles with this question for days - which is the best PHP framework to use to create a RESTful Web service? I've trawled the web for info and have come across three main factors that are important: * must have REST architecture built into the framework * must be a stable application * must be full featured It may be that what I want does not exist, but I wanted to check with the community to see if I had missed something. Currently the three contenders are: **CodeIgniter** Is a very stable framework with a large community and plenty of features and 'extensions'. The issue is that it's not RESTful. I have found a RESTful controller but there are a few things that I don't like about it. Mainly that it does not seem to correctly use the HTTP methods as per the RESTful architecture definition. I think this stems from restrictions in the CodeIgniter core though. **Yii** Again, seems like a large community and is stable with plenty of features, but not RESTful. **Laravel** A framework that is RESTful straight out of the box, and it has a good amount of features. The issue is that this is a relatively new framework so it lacks stability. Other frameworks I've considered: Zend - from what I've read, avoid unless writing Enterprise software. Recess - RESTful, but seems very inactive and underused. **UPDATE:** In the end I went for Laravel. Can't recommend it enough! I had a RESTful API up and running in a week and also a simple web client. Amazing framework."} {"_id": "83364", "title": "How does Facebook calculate mutual friends?", "text": "How does Facebook calculate mutual friends? Does it cache all mutual friends for each user? Does it use MySQL to calculate mutual friends with a query?"} {"_id": "83361", "title": "More Powerful language for client-side web apps: JavaScript or C#?", "text": "Concerns over Microsoft's future with Silverlight, HTML 5 and Windows 8 have led me to reconsider plans to develop a business app over the next few years in Silverlight. It can be difficult to define what powerful means. Here I mean to ask this as power in the sense that Paul Graham describes, as what you can express with the language. I'm not sure, but it seems that even the latest features of C# 4.0 such as dynamic types, anonymous functions, closures, and even LINQ exist in JavaScript. Do you think there are plans to add additional power to the JavaScript language in its next release? Also, to deal more with the HTML5/JavaScript stack vs. the Silverlight/C# stack: which is or will be more powerful in the sense of what capabilities it gives to developers and their applications? e.g. WCF communication, tools, UI controls..."} {"_id": "83363", "title": "Is there a large bank using Mysql or PostgreSQL?", "text": "I always thought the largest banks use Oracle. However, there is no proof they really use Oracle instead of MySQL or PostgreSQL; nobody knows the secret. Any idea what they really use? Can I build an ATM/bank system where millions of transactions will happen using MySQL? Can I use PostgreSQL? Or must I use Oracle only?"} {"_id": "123180", "title": "What are the programming languages that never get outdated and whose programs will be able to run for the next 20 years?", "text": "I want to know which programming languages have longevity.
I mean, will code that is written today be able to run for the next 20 years or so?"} {"_id": "123189", "title": "Algorithm for an exact solution to the Travelling Purchaser Problem", "text": "Do you know of any algorithms which give an exact solution for the Traveling Purchaser Problem? I can only find heuristic and probabilistic approaches. I have implemented a genetic algorithm so far, which by its nature does not terminate by itself and does not always yield the optimal result. Thus I'm looking for an exact solution to the problem such that I'm able to compare my solution to an exact / optimal value for a given test data set. For those of you who haven't heard of the Traveling Purchaser Problem (TPP), _this is_ **_not_** _the Traveling Salesman Problem (TSP)_, but a generalization of it. It is thus also NP-hard."} {"_id": "237892", "title": "Using someone else's code in your commercial products", "text": "Is it acceptable to use code made by other people that was deliberately made to be shared amongst Web developers? Things such as long libraries of code that contain hundreds of lines of code that you may want to copy rather than write yourself because it is too long and complicated. Is this unacceptable and frowned upon in the programming community simply because everyone is required to write all their code themselves and no copying to speed things up is allowed? Also, do you think large organizations would copy pre-made code that has been made to be used to speed up the process of web development? Must they write every single line of code themselves?"} {"_id": "202274", "title": "How can I test linkable/executable files that require re-hosting or retargeting?", "text": "_Due to data protection, I cannot discuss fine details of the work itself so apologies_ **PROBLEM CASE** Sometimes my software projects require merging/integration with third party (customer or other suppliers) software. This software often comes as linkable executables or object code (requiring that my source code be retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested, they are meant to be linked with other systems, but what is the guarantee that post-linkage and integration behaviour will be okay? There is also insufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to arrive at a solution. I was hoping that people could help me go in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties e.g. simulator makers. I wonder how they test them. Surely, the source code will not be supplied due to IPR regulations. **UPDATE** I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted into testing 3rd party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely, I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs could be verified? No clue!!
:( thanks,"} {"_id": "86705", "title": "Using MVP, how to create a view from another view, linked with the same model object", "text": "**_Background_** We use the Model-View-Presenter design pattern along with the abstract factory pattern and the \"signal/slot\" pattern in our application, to fulfill 2 main requirements: Enhance testability (very lightweight GUI, every action can be simulated in unit tests) Make the \"view\" totally independent from the rest, so we can change the actual view implementation, without changing anything else In order to do so our code is divided into 4 layers: Core: which holds the model Presenter: which manages interactions between the view interfaces (see below) and the core View Interfaces: they define the signals and slots for a View, but not the implementation Views: the actual implementation of the views When the presenter creates or deals with views, it uses an abstract factory and only knows about the view interfaces. It does the signal/slot binding between view interfaces. It doesn't care about the actual implementation. In the \"views\" layer, we have a concrete factory which deals with implementations. The signal/slot mechanism is implemented using a custom framework built upon boost::function. Really, what we have is something like this: http://martinfowler.com/eaaDev/PassiveScreen.html Everything works fine. **_The problem_** However, there's a problem I don't know how to solve. Let's take a very simple drag-and-drop example. I have two ContainerViews (ContainerView1, ContainerView2). ContainerView1 has an ItemView1. I drag the ItemView1 from ContainerView1 to ContainerView2. ContainerView2 must create an ItemView2, of a different type, but which \"points\" to the same model object as ItemView1. So the ContainerView2 gets a callback called for the drop action with ItemView1 as a parameter. It calls ContainerPresenterB, passing it ItemViewB. In this case we are only dealing with views. In MVP-PV, views aren't supposed to know anything about the presenter nor the model, right? How can I create the ItemView2 from the ItemView1, not knowing which model object ItemView1 is representing? I thought about adding an \"itemId\" to every view, this id being the id of the core object the view represents. So in pseudo code, ContainerPresenter2 would do something like itemView2=abstractWidgetFactory.createItemView2(); this.add(itemView2,itemView1.getCoreObjectId()) I don't get too much into details. That just works. The problem I have here is that those itemIds are just like pointers. And pointers can be dangling. Imagine that by mistake, I delete itemView1, and this deletes coreObject1. The itemView2 will have a coreObjectId which represents an invalid coreObject. Isn't there a more elegant and \"bulletproof\" solution? Even though I never did Objective-C or Mac OS X programming, I couldn't help but notice that our framework is very similar to the Cocoa framework. How do they deal with this kind of problem? I couldn't find more in-depth information about that on Google. If someone could shed some light on this... I hope this question isn't too confusing ..."} {"_id": "111932", "title": "What are the deciding factors in choosing to expose a web service as a SOAP or REST service?", "text": "As far as I can see consuming SOAP requires a SOAP stack, so it is harder for your clients to consume, i.e.
they need to ensure that they have a SOAP stack in place that formats the POST data and the headers correctly and then gives you back some data structure, whereas with REST you just make an HTTP GET request with the arguments in the query string and get back some text that I guess is probably XML. So what does the extra overhead / complexity of SOAP give you, when do you need it, and when could you and should you do without it?"} {"_id": "99389", "title": "How do I convince my boss to use REST over SOAP?", "text": "We need to create an API to our system. How do I convince my boss that REST is a better option than SOAP (or XML-RPC)? I say REST is... * easier to implement and maintain * not much new to learn -- plain old HTTP * a lot of people have chosen it -- Yahoo ~ Facebook ~ Twitter * will be a lot quicker to code My boss says SOAP is... * richer and more expressive * it's all standard XML (SOAP, WSDL, UDDI) -- and so will be easier to consume * better standardized than REST * Google uses a lot of SOAP * it is better to adhere to SOAP standards than to create a custom XML schema in REST"} {"_id": "159754", "title": "Why the question \"give five things you hate about C#\" is so difficult to answer during an interview?", "text": "In podcast 73, Joel Spolsky and Jeff Atwood discuss, among other subjects, \"five things everyone should hate about their favorite programming language\": > If you’re happy with your current tool chain, then there’s no reason you > need to switch. However, if you can’t list five things you hate about your > favorite programming language, then I argue you don’t know it well enough > yet to judge. It’s good to be aware of the alternatives, and have a healthy > critical eye for whatever it is you’re using. Being curious, I asked this question to every candidate I interviewed. None of them were able to quote at least one thing they hate about C#¹. Why? What's so difficult about this question? Is it because of the stressful context of the interview that this question is impossible for interviewees to answer? Is there something about this question which makes it bad for an interview? * * * Obviously, it doesn't mean that C# is perfect. I have myself a list of five things I hate about C#: * The lack of a variable number of types in generics (similar to `params` for arguments). `Action<T1>`, `Action<T1, T2>`, `Action<T1, T2, T3>`, ... `Action<T1, ..., T16>`. Seriously?! * The lack of support for units of measure, like in F#. * The lack of read-only properties. Writing a backing `private readonly` field every time I want a read-only property is boring. * The lack of properties with default values. And yes, I know that I can initialize them in the parameterless constructor and call it from all other constructors. But I don't want to. * The lack of multiple inheritance. Yes, it causes confusion and you don't need it in most cases. It's still useful in some (very rare) cases, and the confusion applies as well (and was solved in C#) to the class which inherits several interfaces which contain methods with the same name. I'm pretty sure that this list is far from being complete, and there are many more points to highlight, and especially much better ones than mine. * * * ¹ A few people criticized some assemblies in .NET Framework or the lack of some libraries in the framework or criticized the CLR.
This doesn't count, since the question was about the _language_ itself, and while I could potentially accept an answer about something negative in the core of .NET Framework (for example something like the fact that there is no common interface for `TryParse`, so if you want to parse a string to several types, you have to repeat yourself for every type), an answer about JSON or WCF is completely off-topic."} {"_id": "205925", "title": "use of LGPL libraries in closed source android software", "text": "I'm investigating the legal issues of using LGPL native libraries in closed-source Android software. As of now, my research on the subject shows that using LGPL libraries in closed-source software is doable, and that the requirements are not especially high. For a regular application (for example a closed-source C application), I would dynamically link the library, and distribute the binaries of the application along with the library with a proper reference to it and instructions on how to replace this library with the version the user would like. (I may be forgetting some stuff here but this is not the point of my question). My question refers to Android software and JNI. Assuming I am building Android software using JNI, I have: my Java source; a JNI folder including: * Android.mk file for the compilation of the application * the library source code * *.cpp/*.h files linking the source code with JNI To compile my application, two steps are required: * Compilation of the library using the NDK to generate a *.so file * Compilation of the Android application using Ant. The Java code includes a System.loadLibrary(\"nameOfTheLibrary\") call. The problem I am facing is that the source code of the library is first compiled to a *.so file. This can be considered derivative work, am I right? How could I include the native LGPL libraries and distribute them in a proper way to avoid any legal problems?"} {"_id": "226239", "title": "Maintaining independence between modules", "text": "I am reading Algorithms 4th Edition by Robert Sedgewick and in chapter 1.2 it discusses API design. It says: \"The key to success in modular programming is to maintain independence between modules. We do so by insisting on the API being the only point of dependence among modules.\" Now the book is based on Java. I am considering this in terms of C++ development (but I imagine the concepts are similar). Is this suggesting that if I have a library, e.g. to do networking, and a client #includes my net.hpp header (fictional name) and maybe sets it to link with net.lib, then no other includes or libraries should be required? Is that the gist of this? Or is there something else?"} {"_id": "235547", "title": "Under what license may this PyQt-based \"Hello World\" app be distributed?", "text": "**Update:** I was wrong about the PyQt license. It isn't merely a GPL license. The PyQt authors include a special set of exceptions that allow users to release their own code under a different license, as long as it is one of the Open-Source licenses specifically listed in the PyQt `GPL_EXCEPTION.TXT` file. * * * For the purpose of this discussion, consider the following fully-functional app, which depends on PyQt: # hello_world.py from PyQt4.QtGui import QApplication, QPushButton app = QApplication([]) button = QPushButton(\"Hello, World!\", clicked=app.quit) button.show() app.exec_() (In case it's relevant to the discussion, please note that Python programs like this one do not require \"linking\" per se.)
My preference is to distribute my code under a **permissive** license, e.g. the BSD license. However, PyQt is released under the ~~GNU GPL~~ **GNU GPL with special exceptions**. With that in mind, what are my options here? Am I obligated to release under the GPL, even if I don't distribute PyQt itself? To be more specific, in which (if any) of the following scenarios am I permitted to release my code under the BSD license vs. being obligated to release under the GPL? * **Scenario 1:** I give you a fully-packaged binary that includes `hello_world.py` and PyQt. * **Scenario 2:** I give you the source code of `hello_world.py` and PyQt in a single download (say, a `.tar.gz`), but it's up to you to get them running together. * **Scenario 3:** I give you `hello_world.py` alone, leaving you to obtain PyQt on your own. I know that most of us aren't lawyers, so it is very much appreciated if you can cite the sources your answer is based on."} {"_id": "47813", "title": "Do you think code is self documenting?", "text": "This is a question that was put to me many years ago as a graduate in a job interview and it's nagged at my brain now and again and I've never really found a good answer that satisfied me. The interviewer in question was looking for a black-and-white answer; there was no middle ground. I never got the chance to ask about the rationale behind the question, but I'm curious why that question would be put to a developer and what you would learn from a yes or no answer. From my own point of view, I can read Java, Python, Delphi etc, but if my manager comes up to me and asks me how far along in a project I am and I say \"The code is 80% complete\" (and before you start shooting me down, I've heard this uttered in a couple of offices by developers), how exactly is that self-documenting? Apologies if this question seems strange, but I'd rather ask and get some opinions on it to gain a better understanding of why it would be put to someone in an interview."} {"_id": "40457", "title": "Pros and cons of working at a company that uses a lot of its own frameworks", "text": "I have an opportunity to work in a company that looks good from the outside but has something that bothers me. It doesn't use commercial technologies. The main language that will be used is Java, but the company doesn't use well-known technologies like Spring, Struts, Hibernate etc. Instead it has its own technologies that do pretty much the same. Can you tell me the pros and cons of working in such a company?"} {"_id": "40454", "title": "What is a closure?", "text": "Every now and then I see \"closures\" being mentioned, and I tried looking it up but Wiki doesn't give an explanation that I understand. Could someone help me out here?"} {"_id": "230045", "title": "Refreshing website design and architecture", "text": "I have a website that is built with ASP.NET Web Forms. I would like to refresh the design (using CSS, HTML5, responsive design) and also change it from Web Forms to ASP.NET MVC. To me this is more of a frontend project. Since there is an already existing backend, it is \"only\" necessary to reuse it in MVC. Should I first start with the design update or with MVC? This project is far easier than starting a new website from scratch, or am I mistaken?"} {"_id": "219485", "title": "Creating classes according to a struct", "text": "I get an array via a SOAP service; each value in the array is a struct with a description of a visual component (size, position, default value, etc.) like CheckBox, RadioButton, etc.
And I wonder how to write code which will return instances of those visual components according to the input parameter, using a design pattern. Which design pattern is best for this purpose?"} {"_id": "219482", "title": "Avoid too complex method - Cyclomatic Complexity", "text": "Not sure how to go about this method to reduce Cyclomatic Complexity. Sonar reports 13 whereas 10 is expected. I am sure there is no harm in leaving this method as it is; however, it is challenging me to figure out how to obey Sonar's rule. Any thoughts would be greatly appreciated. public static long parseTimeValue(String sValue) { if (sValue == null) { return 0; } try { long millis; if (sValue.endsWith(\"S\")) { millis = new ExtractSecond(sValue).invoke(); } else if (sValue.endsWith(\"ms\")) { millis = new ExtractMillisecond(sValue).invoke(); } else if (sValue.endsWith(\"s\")) { millis = new ExtractInSecond(sValue).invoke(); } else if (sValue.endsWith(\"m\")) { millis = new ExtractInMinute(sValue).invoke(); } else if (sValue.endsWith(\"H\") || sValue.endsWith(\"h\")) { millis = new ExtractHour(sValue).invoke(); } else if (sValue.endsWith(\"d\")) { millis = new ExtractDay(sValue).invoke(); } else if (sValue.endsWith(\"w\")) { millis = new ExtractWeek(sValue).invoke(); } else { millis = Long.parseLong(sValue); } return millis; } catch (NumberFormatException e) { LOGGER.warn(\"Number format exception\", e); } return 0; } All ExtractXXX helpers are defined as `static` inner classes. For example, like the one below - private static class ExtractHour { private String sValue; public ExtractHour(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue.substring(0, sValue.length() - 1)) * 60 * 60 * 1000); return millis; } } * * * ## UPDATE 1 I am going to settle on a mix of the suggestions here to satisfy the Sonar guy. Definitely room for improvements and simplification. The Guava `Function` is just unwanted ceremony here. Wanted to update the question with the current status. Nothing is final here. Pour in your thoughts please..
public class DurationParse { private static final Logger LOGGER = LoggerFactory.getLogger(DurationParse.class); private static final Map<String, Function<String, Long>> MULTIPLIERS; private static final Pattern STRING_REGEX = Pattern.compile(\"^(\\\\d+)\\\\s*(\\\\w+)\"); static { MULTIPLIERS = new HashMap<>(7); MULTIPLIERS.put(\"S\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractSecond(input).invoke(); } }); MULTIPLIERS.put(\"s\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractInSecond(input).invoke(); } }); MULTIPLIERS.put(\"ms\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractMillisecond(input).invoke(); } }); MULTIPLIERS.put(\"m\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractInMinute(input).invoke(); } }); MULTIPLIERS.put(\"H\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractHour(input).invoke(); } }); MULTIPLIERS.put(\"d\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractDay(input).invoke(); } }); MULTIPLIERS.put(\"w\", new Function<String, Long>() { @Nullable @Override public Long apply(@Nullable String input) { return new ExtractWeek(input).invoke(); } }); } public static long parseTimeValue(String sValue) { if (isNullOrEmpty(sValue)) { return 0; } Matcher matcher = STRING_REGEX.matcher(sValue.trim()); if (!matcher.matches()) { LOGGER.warn(String.format(\"%s is invalid duration, assuming 0ms\", sValue)); return 0; } if (MULTIPLIERS.get(matcher.group(2)) == null) { LOGGER.warn(String.format(\"%s is invalid configuration, assuming 0ms\", sValue)); return 0; } return MULTIPLIERS.get(matcher.group(2)).apply(matcher.group(1)); } private static class ExtractSecond { private String sValue; public ExtractSecond(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = Long.parseLong(sValue); return millis; } } private static class ExtractMillisecond { private String sValue; public ExtractMillisecond(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue)); return millis; } } private static class ExtractInSecond { private String sValue; public ExtractInSecond(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue) * 1000); return millis; } } private static class ExtractInMinute { private String sValue; public ExtractInMinute(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue) * 60 * 1000); return millis; } } private static class ExtractHour { private String sValue; public ExtractHour(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue) * 60 * 60 * 1000); return millis; } } private static class ExtractDay { private String sValue; public ExtractDay(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue) * 24 * 60 * 60 * 1000); return millis; } } private static class ExtractWeek { private String sValue; public ExtractWeek(String sValue) { this.sValue = sValue; } public long invoke() { long millis; millis = (long) (Double.parseDouble(sValue) * 7 * 24 * 60 * 60 * 1000); return millis; } } } * * * ## UPDATE 2 Though I added my update, it is only worth so much time.
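If I were starting over, a much leaner sketch (purely hypothetical, nothing I have actually committed) would keep one ordered suffix-to-multiplier table and a single loop; note it collapses the odd \"S\" case, so it is not a drop-in equivalent:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical lean variant: one ordered multiplier table, one loop, no anonymous classes.
public class DurationParseLean {
    // LinkedHashMap keeps insertion order; longer suffixes ("ms") must be
    // registered before shorter ones ("s") so they are matched first.
    private static final Map<String, Double> FACTORS = new LinkedHashMap<>();
    static {
        FACTORS.put("ms", 1.0);
        FACTORS.put("s", 1000.0);
        FACTORS.put("m", 60_000.0);
        FACTORS.put("H", 3_600_000.0);
        FACTORS.put("h", 3_600_000.0);
        FACTORS.put("d", 86_400_000.0);
        FACTORS.put("w", 604_800_000.0);
    }

    public static long parseTimeValue(String sValue) {
        if (sValue == null || sValue.trim().isEmpty()) {
            return 0;
        }
        String s = sValue.trim();
        try {
            for (Map.Entry<String, Double> e : FACTORS.entrySet()) {
                if (s.endsWith(e.getKey())) {
                    String number = s.substring(0, s.length() - e.getKey().length()).trim();
                    return (long) (Double.parseDouble(number) * e.getValue());
                }
            }
            return Long.parseLong(s); // bare number: raw milliseconds
        } catch (NumberFormatException e) {
            return 0; // mirror the original "assume 0 on bad input" behaviour
        }
    }
}
```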
I am going to move on since Sonar now does not complain. Don't worry much, and I am accepting the mattnz answer as it is the way to go and I don't want to set a bad example for those who bump into this question. Bottom line -- Don't over-engineer for the sake of Sonar (or a half-baked project manager) complaining about CC. Just do what's worth a penny for the project. Thanks to all."} {"_id": "208909", "title": "Is a well written documentation a good enough reason for learning a programming language?", "text": "I am currently learning Python, which wasn't part of my college curriculum. I was asked in an interview why I chose Python and I replied that it is easy to learn and the documentation is very well written. The interviewer didn't say whether that was a good enough reason. He looked convinced but I cannot be sure. Is well-written documentation along with ease of learning a good enough reason for choosing a scripting language? Or should I have elaborated more about the availability of Python libraries and Python's bigger user base? Just a note. Python wasn't required for the job. The company worked on Ruby on Rails. Python was on my resume and I think the interviewer just wanted to know what considerations I made as a fresher while choosing a programming language."} {"_id": "76534", "title": "SaaS / PaaS / IaaS / HaaS", "text": "I've read: * IaaS, PaaS and SaaS Terms Clearly Explained and Defined and * Cloud Computing – Demystifying SaaS, PaaS and IaaS And I've got 2 questions: 1. Is Google App Engine considered PaaS or IaaS? 2. Is HaaS a subset of IaaS or is HaaS really just another name for IaaS?"} {"_id": "157121", "title": "How To Document an Object Oriented Design in Text", "text": "For my next project, I'm looking to document my Object Oriented design in simple text before jumping the gun to code it up. I want to do this for two reasons. 1. I want to give proper thought to my design and possibly revise it several times. 2. I haven't exactly decided which language I want to implement my project in. I'm looking for a convention to outline my design in simple text, instead of a UML diagram. I like the convenience of text. I can version control it and easily put it in a blog or wiki to share it. I am thinking something concise, e.g. how you might code it in Python. However, I want the representation to be language independent. I've looked around the internet but couldn't find anything. The closest would probably be how properties and methods are defined in the box representing a class in a UML Model. **Update:** I just wanted to clarify \"in Text\". My goal is to be able to outline the object model in a github wiki. I imagine it being kind of an open source design in addition to code. Thus, I can create a Wiki page per entity and identify the relationships using links and formatting etc. However, what I wanted suggestions on was how to outline the specification of an entity on its page."} {"_id": "208901", "title": "How to assure users that website and passwords are secure", "text": "On reliable websites I always see claims such as \"All data is encrypted\" or \"All passwords are encrypted using 128bit encryption\" etc. However I have never come across a claim such as \"All passwords are hashed.\" On my website, I will be storing all user passwords in a database after using SHA-512 (most likely) hashing with a random salt. I want to add a snippet assuring users that their passwords are secure so they will not be deterred from using my website because it requires a password.
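For context, what I am actually doing server-side is roughly the following (a simplified sketch of my current plan; the class and method names are just placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Simplified sketch of salted SHA-512 hashing. A slow, purpose-built scheme
// such as PBKDF2 or bcrypt would be an even safer choice than a plain digest.
public class PasswordHasher {
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // fresh random salt per user
        return salt;
    }

    public static byte[] hash(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            md.update(salt);                                              // mix in the salt...
            return md.digest(password.getBytes(StandardCharsets.UTF_8)); // ...then hash the password
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-512 not available", e);
        }
    }
    // Store the salt and the digest per user; on login, recompute and compare.
}
```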
I want users to feel secure, but I do not think everyone knows what hashing is. MY QUESTION: Is it okay to provide a message saying that \"All passwords are encrypted and secure\", since I do not think that the average user will know what the difference between hashing and encryption is, and will more likely feel secure just because they see the comfort word \"encryption\"? Or is there an alternative message I should provide? On a side note, I am new to encryption and password hashing and I was wondering if this would be safe enough for now as I launch my site. I do not want to tell users it is secure if it is not. Any information would be greatly appreciated. Thanks."} {"_id": "76539", "title": "I'm using JSON and degrading gracefully, so how do I prevent duplicate code?", "text": "There are a bunch of questions on Stack Overflow about whether AJAX should return JSON or HTML, and most seem to agree that it is ideal to return JSON for the sake of speed. However, that means that if I degrade gracefully, I will have some duplicate code because I am generating the same markup in both PHP and Javascript. A hypothetical example: A website has a list of links to short stories. If the user has Javascript, then clicking on one of these links loads the story without a page refresh. This is done with an AJAX request that returns a JSON with the story information. Javascript generates the markup for the story. If the user does not have Javascript, then clicking on the same link reloads the page with the story now loaded. PHP generates the markup for the story. Is there a solution to use JSON and degrade gracefully without duplicating the code?"} {"_id": "72858", "title": "GPL Confusion (I made a mistake)", "text": "I have a project that I created under GPL (I am the copyright holder). The mistake I made was deciding to go closed source; in doing that, I removed the open source download access and removed the project from SourceForge. A third party came in and recreated the project on SourceForge (under the same name) with the last GPL release and plans to modify the software. (For download access) Is it required by GPL that any branches of the project be under a new name? http://www.gnu.org/licenses/gpl-faq.html#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions \"Sometimes control over modified versions is proposed as a means of preventing confusion between various versions made by users. In our experience, this confusion is not a major problem. Many versions of Emacs have been made outside the GNU Project, but users can tell them apart. The GPL requires the maker of a version to place his or her name on it, to distinguish it from other versions and to protect the reputations of other maintainers. UPDATE The third party and I decided to leave the project page in place and redirect users to a new page for the open source version. This way the old page serves the purpose of notifying former users of the status of the project."} {"_id": "171705", "title": "Where should I draw the line between unit tests and integration tests? Should they be separate?", "text": "I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests.
An example integration test would go something along these lines: Fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct Because this is all doable without any state or special tools (browser automation, etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question. Where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit testing project?"} {"_id": "174951", "title": "DDD: service contains two repository", "text": "Is it correct to have two repositories inside one service, and would it be an application or domain service? Suppose I have a Passenger object that should contain a Passport (government ID) object. I am getting a Passenger from the PassengerRepository. The PassengerRepository creates a request to the server, obtains data (JSON), then parses the received data and stores it inside the repository. I am confused because I want to store Passport as an Entity and put it in the PassportRepository, but all the information about the passport is contained inside the JSON that I received above. I guess that I should create a PassengerService that will include the PassengerRepository and PassportRepository with several methods like `removePassport, addPassport, getAllPassenger` etc. **UPDATE:** So I guess that the better way is to represent Passport as a VO and store all passports inside the Passenger aggregate. However there is another question: where should I put the methods (methods that call the server API) for managing a passenger's passport? I think the better place is within the Passenger aggregate."} {"_id": "194354", "title": "Constants and Big O", "text": "Are constants always irrelevant even if they are large? For example, is O(10^9 * N) == O(N)?"} {"_id": "174958", "title": "Why creating a new MDX language instead of extending SQL?", "text": "I have long experience with SQL, but recently began working with data warehouse and OLAP technologies: building fact and dimension tables, which are then queried using MDX (MultiDimensional eXpressions). The problem is that MDX works with a completely different logic compared to SQL, and it's a whole new learning curve even for someone with a strong SQL background. Yes, MDX allows you to do things that would be hard or almost impossible with plain SQL. But sometimes it's frustrating to spend hours on an MDX query to do something you know you could achieve in minutes using SQL (ok, you can tell me to RTFM ...). But why go to the trouble of creating a completely different new language when you could build on SQL and extend it to add the features needed by OLAP applications?"} {"_id": "174959", "title": "How can I make a case for \"dependency management\"?", "text": "I'm currently trying to make a case for adopting dependency management for builds (a la Maven, Ivy, NuGet) and creating an internal repository for shared modules, of which we have over a dozen enterprise-wide. What are the primary selling points of this build technique? The ones I have so far: * Eases the process of distributing and importing shared modules, especially version upgrades. * Requires the dependencies of shared modules to be precisely documented. * Removes shared modules from source control, speeding and simplifying checkouts/check-ins (when you have applications with 20+ libraries this is a real factor).
* Allows more control or awareness of what third-party libs are used in your organization. Are there any selling points that I'm missing? Are there any studies or articles giving improvement metrics?"} {"_id": "11334", "title": "Does your company have a written policy about contributing to open-source projects?", "text": "Does your company have a written policy about contributing to open-source projects? We've been contributing \"don't ask don't tell\" style, but it's time to write something down. I'd appreciate both full written policy text and bits and pieces. **Update**: we've made some progress since I asked this question and now have such a policy - read this."} {"_id": "201777", "title": "Break on default case in switch", "text": "I am a bit puzzled on whether or not to include `break` after the last case, often `default`. switch (type) { case 'product': // Do behavior break; default: // Do default behavior break; // Is it considered to be needed? } `break`'s sole purpose is, in my understanding, to stop the code from running through the rest of the `switch`-case. Is it then considered more logical to have a `break` last for consistency, or to skip it since the `break` serves no functional use whatsoever? Both are logical in different ways in my opinion. This could to a certain degree be compared with ending a `.php` file with `?>`. I never end with `?>`, mostly due to the risk of outputting blank spaces, but one could argue that it would be the logical thing to end the file with."} {"_id": "201776", "title": "Does Jar file shrinker affect performance", "text": "I've heard ProGuard's Jar shrinker affects the performance of your application. Is this true? And if so, just how much slower does the Jar run shrunk compared to unshrunk?"} {"_id": "170791", "title": "Dropbox as a Version Control tool", "text": "I have some friends telling me that Dropbox can be used as a version control tool. I have always used SVN or Git. I was looking around Dropbox and couldn't find anything that tells me about its characteristics such as merging, forking and other features of a complete VCS. Can someone point out what I'm missing?"} {"_id": "19100", "title": "Becoming an “Application DBA”?", "text": "I have to admit, until recently I did not know there was such a thing as a \"System DBA\" and \"Application DBA.\" Currently, I am an Application/System Admin for a large CMS. I would like to pursue a track/training that would eventually lead to this role of \"Application DBA\", the ultimate goal being understanding how to best use Apps and DBs together. Our Apps run on Oracle 10g. I do OK with basic SQL (selects, inserts, updates and can figure out a join or two, etc), but right now most of our DB support (e.g., generating and interpreting AWR reports, etc) is from the System DBA. I came across several posts that talked about becoming a DBA, but I would appreciate any advice and pointers towards an \"Application DBA\" role."} {"_id": "19104", "title": "How to interview someone you know well?", "text": "How would you interview someone you know well, or someone you may even be friends with? What if you already know their strengths and weaknesses? I would prefer to avoid this situation by delegating this task to somebody else, but what if this is not an option? I feel like there are just too many personal feelings involved and it is almost impossible to be unbiased.
If you have been in similar situations, how did you handle them?"} {"_id": "170254", "title": "Building a website, want to use java", "text": "I'd like to make a simple-ish website that is essentially a small game. Key strokes are to be processed and sent to a server (already acquired, and it should support SQL and JSP, I believe), which then translates them to a location and writes it to the DB. SQL queries are to be used to retrieve these locations and write them to other clients connected to the website. Their page is to be updated with these locations. I have working knowledge of Java, jQuery/Ajax, SQL and JavaScript, but I'm unfamiliar with JSP and how everything hooks up. I'm aware of the MVC paradigm as well. For my little game idea, would these technologies work? Am I overthinking this, and could I make it much easier to implement? What might be a good tutorial or example to study? EDIT: I was just informed I will not be able to use WAR files on the server. I'm not big on PHP and really don't like developing with it; can I still use Java?"} {"_id": "170258", "title": "What is the best way to create HTML in C# code?", "text": "I have a belief that markup should remain in markup and not in the code-behind. I've come to a situation where I think it is acceptable to build the HTML in the code-behind. I'd like to have some consensus as to what the best practices are or should be. When is it acceptable to build HTML in the code-behind? What is the best method to create this HTML? (example: Strings, StringBuilder, HTMLWriter, etc)"} {"_id": "88852", "title": "How do you track Production tasks", "text": "I manage a team of coders (5 people) that maintain a few modules in a large project. On top of doing coding, we also do production operational tasks (like doing server housekeeping and batch backlog tracking). These tasks are done daily, by one person, and the duty is rotated weekly. The problem is this: these tasks are routine, but I can't think of a practical way of ensuring the person does what he is supposed to do. I thought of using spreadsheets to track them, or even going to the extent of a paper checklist, which the person on duty would have to physically sign off. I just want the guy on duty to remember and execute every daily item. What works on your project?"} {"_id": "214629", "title": "How is \"cloud computing\" different from \"client-server\"?", "text": "Watching a CEO of a new \"cloud computing\" company describe his company on a finance TV program today, he said something like \"Cloud computing is superior to old-fashioned client-server computing\". Now I'm confused. Can someone please explain what \"cloud computing\" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The \"cloud\" is all the back-end stuff. But I still might have an application that communicates with that \"cloud\" environment. And if I run a web site that presents a form that a user fills out, pushes a button on the page, and returns some report that was generated by the web server, isn't that the same as \"cloud\" computing? And would you not consider my web browser as the \"client\"? Please note my question is specific to the concept of \"cloud computing\" with respect to \"client-server\". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here.
I'm an old-timer, programming since the mainframe days in the late '70s."} {"_id": "155064", "title": "How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?", "text": "We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into an SRS-like language with all the attributes, priorities, inputs, outputs, source, destination, etc. User-stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point of having an SRS? How should I convince my team (who are all very qualified CS folks, by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'work-flow' argument: capture initial requirements as user-stories and later elaborate them to use-cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment). Thus going from user-stories to use-cases rather than user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user-stories?"} {"_id": "103331", "title": "Balance between workload and helping new-hires", "text": "I've been at my first job for about 2 months and I've started to notice that there is a delicate balance between workload and helping new-hires. Since there is a lot of pressure from management to fix bugs and resolve as many customer issues as possible, everyone on the team seems to be very focused on their backlog of work instead of helping the new-hires get up to speed. The new-hires can ask questions, and occasionally we'll get a developer to sit down and help us, but often we'll get an obscure answer that only a veteran of the product would understand, because they are too busy with their task. I understand that the new-hire must also maintain a balance. Sometimes it will take a new-hire 3 days to investigate and fix something where a veteran could have done it in 20 minutes. New-hires need to show effort toward learning the product and the codebase. Without simply reducing the workload of the veterans, how can you balance between helping new-hires and continuing to work on your backlog at a reasonable rate?"} {"_id": "155061", "title": "Design Pattern for building a Budget", "text": "So I've looked at the Builder Pattern, Abstract Interfaces, other design patterns, etc. - and I think I'm overthinking the simplicity behind what I'm trying to do, so I'm asking you guys for some help with either recommending a design pattern I should use, or an architecture style I'm not familiar with that fits my task. So I have one model that represents a `Budget` in my code.
At a high level, it looks like this: public class Budget { public int Id { get; set; } public List Months { get; set; } public float SavingsPriority { get; set; } public float DebtPriority { get; set; } public List SavingsCollection { get; set; } public UserProjectionParameters UserProjectionParameters { get; set; } public List DebtCollection { get; set; } public string Name { get; set; } public List Expenses { get; set; } public List IncomeCollection { get; set; } public bool AutoSave { get; set; } public decimal AutoSaveAmount { get; set; } public FundType AutoSaveType { get; set; } public decimal TotalExcess { get; set; } public decimal AccountMinimum { get; set; } } To go into more detail about some of the properties here shouldn't be necessary, but if you have any questions about those I will fill more out for you guys. Now, I'm trying to create code that builds one of these things based on a set of `BudgetBuildParameters` that the user will create and supply. There are going to be multiple types of these parameters. For example, on the site's homepage, there will be an example section where you can quickly see what your numbers look like, so they would be a much simpler set of `SampleBudgetBuildParameters` than, say, after a user registers and wants to create a fully filled out `Budget` using much more information in the `DebtBudgetBuildParameters`. Now, a lot of these builds are going to be using similar code for certain tasks, but might want to _also_ check the status of a user's `DebtCollection` when formulating a monthly spending report, whereas a `Budget` that only focuses on savings might not want to. I'd like to reduce code duplication (obviously) as much as possible, but in my head, every way I can think to do this would require using a base `BudgetBuilderFactory` to return the correct builder to the caller, and then creating, say, a `SimpleBudgetBuilder` that inherits from a `BudgetBuilder`, putting all duplicate code in the `BudgetBuilder`, and letting the `SimpleBudgetBuilder` handle its own cases. Problem is, a lot of the unique cases are unique to 2/4 builders, so there will be duplicate code _somewhere_ in there obviously if I did that. Can anyone think of a better way to either explain a solution to this that may or may not be similar to mine, or a completely different pattern or way of thinking here? I really appreciate it."} {"_id": "228733", "title": "What are the benefits of the Android way of \"saving memory\" - explicitly passing Context objects everywhere?", "text": "It turns out this question is not easy for me to formulate, but let's try. In Android, pretty much any UI object depends on a `Context`, and has a defined lifetime. Android can also destroy and recreate UI objects and even the whole application process at any time, and so on. This makes coding asynchronous operations correctly not straightforward (and sometimes _very_ cumbersome). But I have never seen a real explanation of why it's done that way. There are other OSes, including mobile OSes (iOS, for example), that don't do such things. So, what are the wins of the Android way (volatile UI objects and Contexts)? Does that allow Android applications to use much less RAM, or maybe there are other benefits?"} {"_id": "120378", "title": "Is error suppression acceptable in role of logic mechanism?", "text": "This came up in code review at work in the context of PHP and the `@` operator. However, I want to try to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics.
Accessing an array field which is not set results in an error message, and is commonly handled by the following logic (pseudo code): if field value is set output field value The code in question was doing it like: start ignoring errors output field value stop ignoring errors The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify the misuse (IMO) of language mechanics. * Is such code being \"clever\" in a bad way? * Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? * Is it acceptable for programming operators to cross the boundaries of their intended use (like in this case, using error handling for controlling output)? **Edit** I wanted to keep it more generic, but the specific code being discussed was like this: if ( isset($array['field']) ) { echo '<li>' . $array['field'] . '</li>'; } vs the following example: echo '<li>' . @$array['field'] . '</li>';"} {"_id": "215577", "title": "Naming a predicate: \"precondition\" or \"precondition_is_met\"?", "text": "In my web app framework, each page can have a precondition that needs to be satisfied before it can be displayed to the user. For example, if user 1 and user 2 are playing a back-and-forth role-playing game, user 2 needs to wait for user 1 to finish his turn before he can take his turn. Otherwise, the user is shown a waiting page. This is implemented with a predicate: def precondition(self): return user_1.completed_turn The simplest name for this API is `precondition`, but this leads to code like `if precondition(): ...`, which is not really obvious. It seems to me that it is more accurate to call it `precondition_is_met()`, but I'm not sure about that either. Is there a best practice for naming methods like this?"} {"_id": "11486", "title": "Does working with good code make you a better developer or does it make you soft?", "text": "Does working with good code make you a better developer, or does it make you soft and reluctant to work with anything _less_ than quality code?"} {"_id": "11485", "title": "How to test the tests?", "text": "We test our code to make it more correct (actually, _less likely to be incorrect_). However, the tests are also code -- they can also contain errors. And if your tests are buggy, they hardly make your code better. I can think of three possible types of errors in tests: 1. Logical errors, when the programmer misunderstood the task at hand, and the tests do what he thought they should do, which is wrong; 2. Errors in the underlying testing framework (e.g. a leaky mocking abstraction); 3. Bugs in the tests: the test is doing something slightly different from what the programmer thinks it is. Type (1) errors seem to be impossible to prevent (unless the programmer just... gets smarter). However, (2) and (3) may be tractable. How do you deal with these types of errors? Do you have any special strategies to avoid them? For example, do you write some special \"empty\" tests that only check the test author's presuppositions? Also, how do you approach debugging a broken test case?"} {"_id": "213713", "title": "Good or bad code? Or \"a secret reason\"?", "text": "I think this code: if(file_exists(\"amodule.inc.php\")) require_once(\"amodule.inc.php\"); is _misleading_ because of the use of require_once. I think that - to keep the logic and \"wording\" in line - \"include_once\" would be appropriate (there are a number of arguments for not using the \"if\" at all, but I would like to concentrate on \"require vs include\"). As far as my understanding goes, the ONLY difference between \"require\" and \"include\" is that \"require\" has a consequence (halt) if the file does not exist, whereas \"include\" proceeds with just a warning. But, in the example, if the file does not exist, the require_once line will not be executed anyway. Therefore the \"require\" _misleads_, from my point of view. From a superficial view, one could argue that the above code using require_once and if(file_exists(\"amodule.inc.php\")) include_once(\"amodule.inc.php\"); are \"identical\", which, from my view, they are not. Because: a \"rough\" analysis (like an automated check of a project) would throw a message that \"amodule.inc.php\" is a vital project file, which, as the code shows, it is not. Also, in a (not very likely) case, between the execution of the \"if\" statement and the require_once statement the file could be deleted.
Then, even worse, the code would NOT execute as expected (to run without load) but give up. So, how would you guys out there argue?"} {"_id": "218992", "title": "REST API at backend and MVC Javascript framework at client side", "text": "I am building an online social network. I have finished writing a RESTful API service using Django. This will return only JSON responses (no HTML will be generated server-side) so that the JSON responses can be used to build native smartphone apps. The API service is common to all clients. My question is: since there is no HTML response from the server side, can the MV* Javascript frameworks like Angular / Backbone / Ember take care of the complete front-end, right from generating the HTML pages with CSS?"} {"_id": "82709", "title": "Statistics on time estimates for web application", "text": "I have been asked to help estimate the time it would take to develop a web application. I will not be involved in the actual programming, but I am participating as an \"experienced\" programmer. The actual work will probably be handed over to a consulting company, but the client (a university department) wants to have an estimate to get an idea of how much time and money will be needed. We will try to break down the features to implement and then try to create some kind of grand total estimate (even though Joel Spolsky says this will not work), but I thought that these kinds of web applications have been done hundreds of times and that there must be lots of experience to draw from on one or another of the stackexchange sites. ## Is it possible to answer this question: How many hours/weeks does it generally take for an experienced programmer, using their language and framework of choice (be it Java, Ruby on Rails, or some other fairly big technology), to create a web application, given that: * It is fairly standard, meaning that there is a database, an administration interface and a presentation layer for the general public. * It is written from scratch, but there are old systems to draw experiences from. * The administrators (think they) know fairly well what they want. I know this is **very** vague, but I am looking for your experiences: (made-up examples follow) \" _We have bought these kinds of systems several times, and they generally take twenty staff-months to complete._ \" \" _Using Python and Django, I'd say most web apps are up and running in 6 staff-months, tops._ \" ## Edit: Some clarification: * In my question most information about this project is missing. The client has written a detailed specification draft of the system and there is also a requirements analysis based on user feedback on the old system. * I want to state, again, that I am looking for _your experiences_ (see my example answers), not a quote for this system. **Thanks for all the insightful answers, though!**"} {"_id": "214194", "title": "clone/copy constructor and child class", "text": "I just want feedback on what one considers the 'most elegant design' for a situation like this. I have an object that is immutable, with some (sort of) copy constructors to allow creating a similar but modified version of my immutable object.
So something like this: class ImmutableObject{ Fee fee; Fi fi; Fo fo; public ImmutableObject(Fee fee, Fi fi, Fo fo){ this.fee=fee; this.fi=fi; this.fo=fo; } public ImmutableObject(ImmutableObject cloneObj, Fee fee){ this.fi=cloneObj.fi; this.fo=cloneObj.fo; this.fee=fee; } public ImmutableObject(ImmutableObject cloneObj, Fi fi){ this.fee=cloneObj.fee; this.fo=cloneObj.fo; this.fi=fi; } public ImmutableObject(ImmutableObject cloneObj, Fo fo){ this.fi=cloneObj.fi; this.fee=cloneObj.fee; this.fo=fo; } } I'm now expanding this by creating a sub-class with extra behavior: class ImmutableChild extends ImmutableObject{ Fum fum; ImmutableChild(Fee fee, Fi fi, Fo fo, Fum fum){ super(fee, fi, fo); this.fum=fum; } } The problem is that I have a collection of 'ImmutableObjects' that contains both original and child classes. When I try to clone them to get a new object with one modified element, my copy constructors will always create an object of the parent 'ImmutableObject' type and completely lose the state of 'fum' if I'm actually cloning the child class. I can easily refactor this to meet any solution I want: copy constructors, clone methods, etc. However, while I can get the behavior I want, I'm not sure what would be considered both an elegant and non-rigid solution. What is the cleanest way to provide a method that will create a 'clone' of my object with one value modified, and which works well with child classes that expand functionality?"} {"_id": "240518", "title": "Accessing a 32-bit DLL from a 64-bit process", "text": "I'm aware that it's not possible to load a 32-bit DLL into a 64-bit process. The DLL in question is an ODBC driver which is no longer supported (although it works fine) and no 64-bit version of it exists. I don't have access to the source code either. There doesn't seem to be an off-the-shelf way of \"thunking\" between the two architectures - the only option I can think of is to write a 32-bit COM object and get it to wrap the 32-bit DLL, and then also write a 64-bit DLL which exposes the same API the 32-bit DLL exposes, and then bridge the gap by serializing the parameters to the API call and utilising DCOM's data marshalling functionality to get across the 32-bit / 64-bit boundary. This is a pretty tedious approach though, and it could be quite slow if the interface is \"chatty\". So what I'm wondering is if there are any other ways of solving this problem?"} {"_id": "201196", "title": "Is the future of the web application going the same way as the desktop application?", "text": "I've noticed an increase in mobile applications being released that offer high-value features, but without any corresponding web-based version. I was just reading about a new medical app for iPhone that allows doctors to share medical images, and had this idea been done 5 years ago it surely would have been a web application. Now it's released purely on iPhone without any web alternative. It's been years since I've had a client request a desktop application, and everything has been web-based, but today all future projects involve some kind of mobile solution.
How much longer should I recommend clients to continue investing in web-based applications?"} {"_id": "201191", "title": "creating a struct in java and searching for a specific member", "text": "I'm a beginner in Java. I have the following code: public class operations { public ArrayList stream; public ArrayList funct; public ArrayList name; public ArrayList funcgroup; public ArrayList source; } I read and understood that classes in Java are comparable to structs in C. In my program, I have to get an input from the user for name and search through the class and provide the other fields (stream, funct, funcgroup, source) as output. My question is similar to \"Efficient way to shuffle objects\". From the first and best answer, I understand that rather than parallel lists, I should create a single list and then call a list with this class. But how do I add all my elements to the various fields in the List in that case? I already have a separate `ArrayList` of these fields collected in another class."} {"_id": "43489", "title": "How to handle a coworker with \"obsessive refactoring disorder\"", "text": "My coworker (who is very clever, but with severely limited interpersonal skills) keeps refactoring my code even when it is work in progress and assigned to me as a task. While I fully subscribe to the idea of collective ownership of code, I find this extremely irritating, but attempts to have him stop seem to have no effect. My analysis of his personality is that he considers himself the best, and if it had not been for him, the codebase would have been in a mess. I should add that I am not a novice; I know my skills and I produce quality work. Some of the refactorings are indeed for the better; most are basically just the introduction of a style that he likes better than mine. In addition, he has an almost child-like need to have the last word in any discussion, and never has a word of praise for work done by co-workers. There is always something that he, the master, would have done differently. I feel this is strongly affecting the quality of my work-life. What should I do?"} {"_id": "201192", "title": "Where should the processing to structured XML be done?", "text": "I have been given the task of redeveloping an \"in house\" solution to make it expandable and easier to maintain and administer. The original solution had been hashed together over time using PHP as more requirements were added, and the need to expand is expected to continue into the foreseeable future. The solution gathers many different files, such as Word documents and XML documents of varying structure, from different \"locations\" and converts each into a specifically structured XML document which gets sent on via a web service. To add to the mix, some of the original XML files are retrieved from the same \"location\" but require varying levels of processing depending on where they were before that (identified by a \"customer\" field in the XML). My intention is to make the new solution as modular as possible, so that processing can be suspended at the individual \"customer\" or \"location\" level without affecting anything else. While I have many puzzles to overcome and questions to answer, the question that is keeping me up at night at the moment is \"Where should the processing to the structured XML be done?\" At present it is done per \"customer\", but as you can imagine, it leads to a lot of duplicated code.
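To make the shape of the problem concrete, here is a minimal sketch of the modular structure I have been considering (C# purely for illustration, since the replacement language is not decided yet; every name in it is hypothetical and not part of the existing solution):

    using System.Collections.Generic;

    // One converter per 'customer'; the shared steps live in the base class
    // and only the customer-specific rules vary per subclass.
    public interface IXmlConverter
    {
        string CustomerId { get; }
        string Convert(string rawDocument);
    }

    public abstract class XmlConverterBase : IXmlConverter
    {
        public abstract string CustomerId { get; }

        public string Convert(string rawDocument)
        {
            string parsed = ParseCommonFields(rawDocument);    // duplicated everywhere today
            return BuildTargetXml(ApplyCustomerRules(parsed)); // the part that varies
        }

        // Default: no extra processing; subclasses override as needed.
        protected virtual string ApplyCustomerRules(string parsed) { return parsed; }

        private static string ParseCommonFields(string raw) { return raw.Trim(); }
        private static string BuildTargetXml(string body) { return "<doc>" + body + "</doc>"; }
    }

    public sealed class AcmeConverter : XmlConverterBase
    {
        public override string CustomerId { get { return "acme"; } }

        // Stand-in for the real customer-specific processing.
        protected override string ApplyCustomerRules(string parsed)
        {
            return parsed.ToUpperInvariant();
        }
    }

    // A registry keyed by customer means one customer's processing can be
    // suspended by removing a single entry, without touching the rest.
    public static class ConverterRegistry
    {
        private static readonly Dictionary<string, IXmlConverter> Converters =
            new Dictionary<string, IXmlConverter> { { "acme", new AcmeConverter() } };

        public static IXmlConverter For(string customerId)
        {
            return Converters[customerId];
        }
    }

The point of the sketch is only that everything duplicated today would collapse into the base class, while each \"customer\" override stays small.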
Maybe it is unavoidable, due to the \"customer\"-specific processing sometimes required, but sanity tells me there must be a better way."} {"_id": "240512", "title": "How to efficiently troubleshoot or test new code when the hardware setup to reproduce bugs is difficult or impossible to obtain?", "text": "I work at a mid-sized company (~150 employees, ~10-person engineering team), and most of my projects involve interfacing with lab equipment (oscilloscopes, optical spectrum analyzers, etc) for the purpose of semi-automated test applications. I have run into a few different scenarios where I am unable to efficiently troubleshoot or test new code because I no longer have, or never had, the hardware setup available to me. **Example 1:** A setup where 10-20 \"burn-in\" processes are run independently using a bench-top sensor - I was able to obtain one such sensor for testing and could occasionally steal a second for simulating all of the facets of interfacing to multiple devices (searching, connecting, streaming, etc). Eventually a bug showed up (and ultimately ended up being in the device firmware & drivers) that was very difficult to reproduce accurately with only one unit, but hit near \"show stopper\" levels when 10-20 of these devices were in use simultaneously. This is still unsolved and is ongoing. **Example 2:** A test requiring an expensive optical spectrum analyzer as its core component. The device is pretty old, legacy according to the manufacturer (who was acquired by a larger company and basically dissolved), and its only documentation was a long-winded (and uninformative) document that seems poorly translated. During initial development I was able to keep the device at my desk, but now it's tied up, both physically and schedule-wise, during its 24/7 multi-week tests. When bugs show up, related or unrelated to the device, I often need to go through the trouble of testing code external to the application and fitting it in, or writing code blindly and attempting to squeeze in some testing time in between runs, as much of the program logic requires the OSA and the rest of the test hardware to be in place. _I guess my question is how should I approach this?_ I could potentially spend time developing device simulators, but figuring that into the development estimate would balloon it more than most would probably appreciate. It may not accurately reproduce all issues either, and it's pretty rare to see the same equipment used twice around here. I could get better at unit testing...etc... I could also be loud about the issue and make others understand that temporary delays will be required - not much more than a headache for Research and Development, but usually perceived as a joke when pitched to manufacturing."} {"_id": "201198", "title": "Separation of Concerns, Data Access Layer", "text": "I was thinking about this earlier today and figured I would get some input on the matter. When I develop applications, I usually have the Data Access Layer in another project, in case it could be re-used elsewhere in a similar manner in the future, but also to allow updating the DAL without updating the UI layer. When doing so, I handle all of the data querying etc. in the DAL. The application does not need to know if it's ADO.NET, EF or a List. However, it occurred to me that in almost all cases the return types are specified in the DAL. So the UI layer does need to know about types defined there. Is that normal, or is there a way for more separation?
(other than using Anon types)"} {"_id": "80702", "title": "How do you use the sample code while reading programming books?", "text": "Programming books often contain a lot of code scattered within them. Usually there will be an accompanying website to download the code used in the book. How do you use the code? Do you just run it and check the results, or do you code it from scratch again? If you are coding it from scratch, have you found any advantages (like remembering the content better, etc.)?"} {"_id": "80703", "title": "CSS in a CMS - Pros/Cons", "text": "I am a regular user of Wordpress, and whenever I feel the need to change the look of a DOM element, I prefer to style it inline. This is just because I don't want to go back to the stylesheet and create new classes and styles for a single different-looking element on a particular post. Can anyone please list the **PROS and CONS of this technique**? And should I change it? **_PS - All suggestions should be given keeping the CMS factor in mind._**"} {"_id": "16595", "title": "Is taking frequent breaks really that beneficial when programming?", "text": "I keep reading that it is recommended for a programmer to take frequent breaks while programming, and the usual recommendation I see is 5 minutes every half hour or 10 minutes every hour. I gave it a try, but quite often I find something interesting during those 5 minutes, and it takes me away from what I was working on for longer than I planned. Either that, or my mind gets focused on something else and I find it hard to get back into my work and don't focus very well. Is it really that beneficial to take frequent breaks while programming? Am I doing something wrong for it to be decreasing my productivity instead of increasing it?"} {"_id": "12683", "title": "Should one use pseudocode before actual coding?", "text": "Pseudocode helps us understand tasks in a language-agnostic manner. Is it best practice or a suggested approach to have pseudocode creation as part of the development lifecycle? For instance: * Identify and split the coding tasks * Write pseudocode * Get it approved [by PL or TL] * Start coding based on the pseudocode Is this a suggested approach? Is it practiced in the industry?"} {"_id": "200828", "title": "Reason for placing function type and method name on different lines in C", "text": "I just started at a company, and one of the style comments at my first code review was that the return type and method name should be on different lines. For example, this void foo() { } should be this void foo() { } I've always used the first style, and I was wondering if there is any reason other than personal preference why people use the second style? I don't think the first one hurts readability at all. Is one more common than the other with C programmers and large open source projects?"} {"_id": "161958", "title": "The limit of Int32 for an Identity Column", "text": "This is just a consideration for a site I am creating, and for other big sites out there. I am using an Identity Column to store the ID of some of my tables, and I have classes whose Id is typed as `Int32` to hold the value of the ID retrieved from the database. My worry is that as the site grows bigger, some tables that grow exponentially, e.g. `QuestionComments`, might exceed the `Int32` limit in the future. So I changed my class to use `long`. public class Question { public long QuestionID { get; set; } ...
} //Converting database value to .Net type Question q = new Question(); q.QuestionID = Convert.ToInt32(myDataRow[\"QuestionID\"]); How true is my assumption? Would using a UniqueIdentifier be better? Are there other ways to address this? **UPDATE:** But for the sake of learning, how would sites like Facebook, Google, StackOverflow, etc. handle a Visit table, assuming they have VisitID as an Identity Column?"} {"_id": "200822", "title": "How to copyright personal code", "text": "I am writing a program using VBA because it's readily available; the code will be visible to crackers even though I password-protect it. I knew this when I chose VBA, because portability is more important than protection. My question relates to copyright: I put a copyright notice and my name in the code and on the spreadsheet to indicate it is my program, but is this enough when people start copying the code? Do you have strong legal wording which I could place in my code?"} {"_id": "88278", "title": "How similar should the environments of PreProd and Prod be?", "text": "I've just recently been on a project, and during the release we realized that it didn't work in Production. It works in all other environments, but because we have a separate release team, and we cannot set up the servers and environments ourselves, we have no visibility of the configuration on them. We suspect that Prod has some user permissions in its account or IIS settings that are different, so we are working through it now. So I think this whole thing has been a learning experience for me, and I don't want the same thing repeated again. I would like to ask: how different should these environments be? I always thought that PreProd should be an identical copy of the Prod environment, using a copy of the same database, using a copy of the same user account, and it should be installed on the same servers, etc. But how far should I take it? If the web site is externally facing, should PreProd be externally facing? What if the website has components that don't require a user account or password to navigate to? Is it still okay to expose it to the outside world?"} {"_id": "200821", "title": "How to write an HTTP server?", "text": "As the title says, I would like to write an HTTP server. My question is this: how do I do it? I know this sounds VERY general and too \"high level\", but there is a method to my madness. An answer to this question should be, I believe, language-agnostic; meaning, no matter what language I use (e.g., C, C++, Java, etc.), the answer should be the same. I have a general idea of how this is supposed to work: 1. Open a socket on port 80. 2. Wait for a client to make a request. 3. Read the request (i.e., this person wants page \"contact-us.html\"). 4. Find and read \"contact-us.html\". 5. Send an html header, then send the content of \"contact-us.html\". 6. Done. Like I said, I believe this is the process, but I am not 100% sure. This leads me to the heart of my question. How or where does a person find out this information? What if I didn't want to write just an HTTP server? What if I wanted to write an FTP server, a chat server, an image viewer, etc.? How does a person find out the exact steps/process needed to create a working HTTP server? A co-worker told me about the html header, so I would have NEVER known this without him. He also said something about handing off each request to a new thread. Is there some big book of how things work? Is there some manual of what it takes to be an HTTP server?
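To show where my understanding currently tops out, here is the rough shape I have in mind, written down as a minimal sketch (C# just because it is what I use day to day; the question itself is language-agnostic, and I am not claiming real servers work exactly this way):

    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class MinimalHttpServer
    {
        static void Main()
        {
            // Step 1: open a socket (8080 here, since binding port 80 needs privileges).
            var listener = new TcpListener(IPAddress.Loopback, 8080);
            listener.Start();

            while (true)
            {
                // Step 2: wait for a client to make a request.
                using (TcpClient client = listener.AcceptTcpClient())
                using (NetworkStream stream = client.GetStream())
                {
                    // Step 3: read the request line, e.g. "GET /contact-us.html HTTP/1.1".
                    var reader = new StreamReader(stream, Encoding.ASCII);
                    string requestLine = reader.ReadLine() ?? "";
                    string[] parts = requestLine.Split(' ');
                    string path = parts.Length > 1 ? parts[1].TrimStart('/') : "";

                    // Step 4: find and read the requested file.
                    bool found = File.Exists(path);
                    string body = found ? File.ReadAllText(path) : "<h1>404 Not Found</h1>";
                    byte[] payload = Encoding.UTF8.GetBytes(body);

                    // Step 5: send the headers first, then the content.
                    var writer = new StreamWriter(stream, Encoding.ASCII) { NewLine = "\r\n" };
                    writer.WriteLine("HTTP/1.1 " + (found ? "200 OK" : "404 Not Found"));
                    writer.WriteLine("Content-Type: text/html; charset=utf-8");
                    writer.WriteLine("Content-Length: " + payload.Length);
                    writer.WriteLine();
                    writer.Flush();
                    stream.Write(payload, 0, payload.Length);
                } // Step 6: done - the connection closes and we loop for the next client.
            }
        }
    }

Writing it out this way at least makes my follow-up questions concrete: where the per-request thread my co-worker mentioned would go (presumably around the accept loop), and how much real header parsing a proper server has to do.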
I tried googling \"how does a HTTP server work\", but the only answers I could find were geared towards your average Joe, and not towards a person wanting to program a HTTP server."} {"_id": "177703", "title": "Abstract Factory Method and Polymorphism", "text": "Being a PHP programmer for the last couple of years, I'm just starting to get into advanced programming styles and using polymorphic patterns. I was watching a video on polymorphism the other day, and the guy giving the lecture said that if at all possible, you should get rid of `if` statements in your code, and that a switch is almost always a sign that polymorphism is needed. At this point I was quite inspired and immediately went off to try out these new concepts, so I decided to make a small caching module using a factory method. _**Of course**_ the very first thing I have to do is create a switch to decide what file encoding to choose. DANG! class Main { public static function methodA($parameter='') { switch ($parameter) { case 'a': $object = new \\name\\space\\object1(); break; case 'b': $object = new \\name\\space\\object2(); break; case 'c': $object = new \\name\\space\\object3(); break; default: $object = new \\name\\space\\object1(); } return (sekretInterface $object); } } At this point I'm not really sure what to do. As far as I can tell, I either have to use a different pattern and have separate methods for each object instance, or accept that a switch is necessary to \"switch\" between them. What do you guys think?"} {"_id": "177704", "title": "Software Manager who makes developers do Project Management", "text": "I'm a software developer working in an embedded systems company. We have a Project Manager, who takes care of the overall project schedule (including electrical, quality, software and manufacturing) hence his software schedule is very brief. We also have a Software Manager, who's my boss. He makes me write and maintain the software schedule, design documents (high and low level design), SRS, change management, verification plans and reports, release management, reviews, and ofcourse the software. We only have one Test Engineer for the whole software team (10 members), and at any given time, there are a couple of projects going on. I'm spending 80% of my time making these documents. My boss comes from a Process background, and believes what we need is better documentation to improve software: (1) He considers the design to be paramount, coding is \"just writing the design down\", it shouldn't take too long, and \"all the code should be written before the hardware is ready\". (2) Doesn't understand the difference between a Central & Distributed Version control, even after we told him its easier to collaborate with a distributed model. (3) Doesn't understand code, and wants to understand every bug and its proposed solution. (4) Believes verification should be done by developer, and validation by the Tester. Thing is though, our verification only checks if implementation is correct (we don't write unit tests, its never considered in the schedule), and validation is black box testing, so the units tests are missing. I'm really confused. (1) Am I responsible for maintaining all these documents? It makes me feel like I'm doing the Software Project Management, in essence. I'm ok with technical documentation, but I believe scheduling/planning should not be done by the developer. (2) I don't really like creating documents, I want to solve problems and write code. 
In my experience, creating design documents only helps to an extent; it's never the solution to better or faster code. (3) I feel the boss doesn't really care about making better products, but only about being a good manager in the eyes of management. What can I do? This whole year I've done 3 months of actual coding; the rest was just spent making documents and waiting for bug reports from clients."} {"_id": "177709", "title": "How do web-developers do web-design when freelancing?", "text": "So I recently got my first job as a junior web-developer. My company creates small/medium sites for a wide variety of customers: auto business companies, wedding agencies, some sauna websites, etc., hope you get my point. They don't do big serious stuff like banking systems or really big systems; it's mostly small/medium-sized websites for startups/medium-sized businesses. My main skills are PHP/MySQL; I also know HTML and a bit of CSS/JS/AJAX. I know that a good web-developer must know some backend language (like PHP/Ruby/Python) AND the HTML+CSS+JS+AJAX+jQuery combo. However, I was always wondering. In my company we have a web-designer. In other serious organisations I often see the same thing: web-developers who create the business logic, and web-designers who create the design. As far as I know, after designers paint the design of a website, they give it to the developers either as a PSD or in sliced form, and the developers put it together with the logic, but the design is NOT created by the developers. Such separation seems very good for a full-time job, but I am concerned with the question: **_how do freelance web-developers do websites_**? **_Do most of them just pay freelance designers to create the design for them? Or do some people do both?_** The reason why I ask: I plan to start some freelancing in my free time after I get good at web-development. But I don't want to create websites with great business logic but poor design. Nor do I want to let someone else create a design for me. I like web-development very much and I am doing quite well; I like design as well, even though I am a bit lost on how to study it and get better at it. But I am scared that going in both directions won't let me become an expert; it seems like two totally different jobs, and getting really good at both seems very hard. But I really want to do both. What should I do? Thank you!"} {"_id": "52211", "title": "Open source IDEs released under GPL", "text": "Some IDEs (or code editors like Notepad++) are released under the GPL. If I decide to make a program using the IDE, do I have to release my program under the GPL? Or is that only if I take code from the open-source project?"} {"_id": "255903", "title": "How to validate the ajax post request is coming from", "text": "I am developing a web application in **MVC4**. In my application, every function is done by **ajax post calls**. I do not even **post a single form (I don't even have a form tag)**; all the things are done by ajax calls. But I am scared of the **misuse of my JavaScript**: anyone who gets this code can post **dummy data** to my application. So I need to **validate** whether the post request is coming from my website or not. I thought that the ajax call was good instead of posting the whole form to the server. Also, I have done the validation on the client side only. Is that also a threat for me?"} {"_id": "235796", "title": "Using the Decorator pattern to add public methods to an object", "text": "The Decorator pattern is usually used to extend the functionality of an object by extending one of its current methods.
To illustrate, please consider an object `object` and a decorator `decorator`. `object` has one method called `methodA()`. `decorator`, because we're in the Decorator pattern, also has this method by inheritance (both `object` and `decorator` eventually inherit from the same super-class which has `methodA()`. Let's call it `AbstractObject`). `methodA()` in `object` looks like this: public void methodA(){ // do stuff } `methodA()` in `decorator` looks like this: public void methodA(){ // do some extra stuff. wrappedObject.methodA(); // and then delegate to the wrapped object. } It would be used like this: AbstractObject object = new ConcreteObject(); object.methodA(); // does stuff object = new ConcreteDecorator(object); object.methodA(); // does stuff and some more stuff: the extended methodA(). The point of `decorator` is to extend the functionality of `object`, **and it does this by extending one of `object`'s current methods**. So far, classic Decorator. **Now, my question:** As I said, Decorator usually extends an object's functionality by adding content to one of the object's current methods. But what if I wanted to use Decorator to **add public methods to an object**? Please consider the following scenario where `decorator` has another method `methodB()`: AbstractObject object = new ConcreteObject(); object.methodA(); // does stuff object = new ConcreteDecorator(object); ((ConcreteDecorator)object).methodB(); // methodB() belongs to ConcreteDecorator but not to AbstractObject. As you can see, if I wanted to do this, I would have to cast `object` to `ConcreteDecorator`. This exposes the concrete type of `object` and the fact that it's decorated, and as such somewhat defeats the intent of the Decorator pattern. Also, the code invoking `methodB()` would have to know `object` is decorated. Again, not very much in the spirit of Decorator. **In light of this I have two questions:** * Is the Decorator pattern ever used this way? Is it ever used to add public methods to an object, as opposed to only 'extending' its current methods? * Are there situations where this would be a good idea or a wise use of Decorator?"} {"_id": "53166", "title": "Any experience porting a Powerbuilder application to Java (or another language)?", "text": "Does anyone out there have any experience porting a large Powerbuilder client/server application to a Java application (or another language)? By large I mean something on the order of 2 million lines of code and literally hundreds of database tables. How did you do it? How long did it take and how much did it cost? What were the biggest challenges?"} {"_id": "53162", "title": "Programmer logbook application?", "text": "I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. In case some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed it up, in case the problem wasn't already there. The _ideal_ would be to have a super-fast computer with a huge hard drive and an application that automatically saves a backup of the whole project every time I change a line in the code, with some file comparison tool that would show me every difference between any two backed-up projects, but that's not really possible for now.
So, do you know any application that makes it easy for a programmer to keep track of the changes made to the source code?"} {"_id": "191562", "title": "Multi-threaded server", "text": "I have written a server/client program in which I am using two threads: * one to receive data continuously, and * another to send data as the user writes it on screen. Problem: I have created the threads, but during execution the control gets stuck in the receiving thread, which has the infinite loop to receive data, and never gets to the sending thread. Please tell me if there is any solution to this problem."} {"_id": "21082", "title": "What is the most obscure sorting algorithm you know?", "text": "I just read about cyclesort via a sortvis.org blog post. This is probably the most obscure one I have heard of so far, since it uses maths that I am not familiar with (detecting cycles in permutations of integer sets). What is the most obscure one you know?"} {"_id": "21084", "title": "Object Oriented Programming Concepts and Interviews", "text": "I'm an Object Oriented Programming fanatic. I have always believed in modelling solutions in terms of objects. It is something that comes to me naturally. I work at a services start-up that essentially works on application development using OOP languages. So I tend to test understanding of OOP in the candidate being interviewed. To my shock, I found very, very few developers who really understood OOP. Most candidates brainlessly spit out definitions they mugged up from some academic book on object oriented programming, but they don't know squat about what they are saying. Needless to say, I reject these candidates. However, over the course of time, I ended up rejecting almost 98% of the candidates. Now this gets me thinking whether I'm being over-critical about their OOP skills. I still believe OOP is fundamental and every programmer MUST GET it. Language knowledge and experience are secondary. Do you think I'm being over-critical, or do I just get to interview bad programmers, unfortunately? **EDIT:** I usually interview programmers with 2 to 5 years of experience. The position that I usually interview for is Ruby/Ruby on Rails application developer."} {"_id": "63650", "title": "Why does every installer on Windows have to be run with elevated privileges?", "text": "The current tendency to ship even the smallest applications (like simple games, tools, etc.) as a Windows Installer (.msi), or even as an .exe that can do virtually anything, is very annoying. After the application is installed it is run as a standard user, but that is of little comfort if during installation it can do what it wants. For example, configuring a firewall: maybe I do not want my new shiny 15 KB calculator application to have full access to the Internet (for incoming connections :) ). Custom actions inside Windows Installer files are even worse, since it is possible to run a custom function inside the provided DLL file! How do you manage to handle this - just ignore the problem, or maybe you have some methods, unknown to me, of running Windows Installer packages? And from the other side, why on Earth is everybody making installers? In many, many cases a simple ZIP file would be enough, at least as an alternative to a standard installer."} {"_id": "195271", "title": "Do tools, like Windows Workflow, inhibit development growth?", "text": "I've had this gut feeling about Windows Workflow (WW) for a while now. And, until now, I couldn't think of the right words to say in order to explain it.
Since I think I have a good way to verbalize it now, I thought I'd share. I believe there is a problem in our industry where a great percentage of developers take a long time to understand simple, solid OOP principles and design patterns. Indeed, many developers never reach this level of understanding. My unsubstantiated concern is that tools like WW further shield developers from good design. The reason for this is that, absent WW, a developer could be shown how to write code using a state pattern, or a chain of responsibility pattern, or a rules engine, or even simple OOP. Many developers simply don't know how to implement polymorphism. My fear is that tools like WW provide an abstraction that hides, or inhibits, good design by simply providing a way for developers to click boxes on a canvas and write code in those boxes. What's going on under the hood, or what should be going on under the hood, is lost to the developer. Do tools, like Windows Workflow, inhibit development growth? Disclaimer: I have not used WW. There may be great use cases for using it. My thoughts, above, are simply a gut feeling, and I'd be curious to know how others feel about the topic."} {"_id": "63655", "title": "What ESB systems work the best for the .Net stack?", "text": "What Enterprise Service Bus would you use and why?"} {"_id": "195273", "title": "Is MVC the optimal pattern to build a RESTful web service?", "text": "Not being a Java practitioner, I recently came to learn about the JAX-RS specification and the Apache CXF framework. While still wrapping my head around all these things, I also read this question on SO. Since MVC is a design pattern while JAX-RS is a specification, I was confused about the comparison. Given my nascent understanding of JAX-RS, what makes it more suitable for implementing a RESTful API than, say, a framework that uses the MVC pattern? N.B. I have a C and Python background."} {"_id": "245921", "title": "How to structure a REST api service that accepts POST parameters in the body", "text": "Everything I've read says to pass parameters to a REST service in the URI, whether by template or query string: https://www.myapp/my/login/api/authenticate/ganders/mypassword or https://www.myapp/my/login/api/authenticate?user=ganders&password=mypassword But obviously those are exposing my username and password in clear view. Our security guy is saying that we need to pass the parameters in the body of the request. How do I do that? Examples, or just a link to an example, would be sufficient, because apparently my googling skills are not very high. One thing that I've found so far is that if you decorate your service methods with the attribute [FromBody], like this: public AuthenticationResult Authenticate(**[FromBody]** LoginData loginData) { return AuthenticationResult.Authenticated; } that will grab them from the body. My other task is trying to test this. Is this a task for Fiddler? Or Chrome Dev tools? Another thing I've found so far is that you can only have 1 parameter in the post; is that accurate? Sorry, lots of questions here. Spent all day yesterday trying to research this and obviously didn't get very far...
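For reference, here is the overall shape I think I am aiming for, as a minimal client-side sketch (the route and property names are placeholders I made up, not anything from our real app):

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class AuthClient
    {
        // Sends the credentials in the request body instead of the URI, so they
        // do not show up in server logs or browser history (still HTTPS-only!).
        public static async Task<string> AuthenticateAsync(
            string baseUrl, string user, string password)
        {
            using (var client = new HttpClient())
            {
                var body = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "User", user },
                    { "Password", password }
                });

                HttpResponseMessage response =
                    await client.PostAsync(baseUrl + "/api/authenticate", body);
                return await response.Content.ReadAsStringAsync();
            }
        }
    }

The server side would then be the [FromBody] action shown above; as far as I can tell, the one-parameter-per-POST restriction is exactly why the two credentials have to be wrapped into a single LoginData object.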
Edit: So here's what I have so far; this is the \"GET\" version that we need to convert to a POST (this is Angular calling a C# REST API): $http.defaults.headers.common.Authorization = 'Basic ' + encoded; var url = $rootScope.serviceBaseUrl + \"login/get\" + \"?username=\" + user + \"&password=\" + password + \"&accesstoken=\"; $http({method: 'Get', url: url}).success(.....).error(......); And the REST service: [HttpGet] public AuthenticationResult Get([FromUri]LoginModel login) { try { AuthenticationService authService = new AuthenticationService(); AuthenticationResult result = authService.IsAuthenticated(login); if (result.IsAuthenticated) return result; else { return new AuthenticationResult() { IsAuthenticated = false, User = new User() { UserId = login.Username } }; } } catch(Exception ex) { return new AuthenticationResult() { IsAuthenticated = false, User = new User() { UserId = login.Username, Token = ex.Message } }; } } Edit2: I'm getting closer. I've got the response to go through (via Fiddler), but my json data that I'm passing is not getting mapped to my complex type. Here's what I've got in Fiddler: User-Agent: Fiddler Host: localhost:42761 Content-Length: 73 Content-Type: application/json Accept: application/json \"Request Body\": { \"Username\": \"ganders\", \"Password\": \"hashedPassword\" } My \"LoginData\" object is instantiated on the service side, but my two properties are null..."} {"_id": "245922", "title": "Combining asynchronous and synchronous programming", "text": "I've got trouble wrapping my head around a certain problem: In my data-flow app (a visual editor), I have both autonomous objects, which communicate through ports via unordered simultaneous messages and thus represent an asynchronous system. ![in this picture, f would receive x from the first object, transform and immediately send the transformed value to the next](http://i.stack.imgur.com/MNRFL.png) Still, I also want to use functions which are synchronous, e.g. they need input to compute their value. Now the problem is - how do I combine both in a non-trivial system? ![2](http://i.stack.imgur.com/wHG9E.png) In this picture, f needs x and y to compute its value, but at any point in time only one input may be present (suppose x is present, y is not). Generalize this to higher dimensions. I can see a possible way out: * make x an array X = (x,y) and combine the first two objects, resulting in a Type-1-like system .. * or make f an active object, which caches x,y and then computes * or make f always compute its value (e.g. when only x is present, compute x*y = x*0 ...) * * * So the possibility of futures was mentioned. I'm using Java, so I'll frame my possible solution via Objects ![X is sent, y is missing -> create future (maybe a copy of f) waiting for y](http://i.stack.imgur.com/4N34O.png) X is sent, y is missing -> create future (maybe a copy of f) waiting for y ![enter image description here](http://i.stack.imgur.com/JJme0.png) ![enter image description here](http://i.stack.imgur.com/WWuPM.png)"} {"_id": "236449", "title": "DDD / Optimizing a specific service belonging to a specific bounded context regarding hardware", "text": "When practicing Domain-Driven Design, it is well known that a whole application is split into several bounded contexts, so that an Ubiquitous Language can emerge within each. In general, 1 bounded context = 1 archive file ready to be deployed (JAR / EAR / DLL etc..)
My question concerns the deployment of bounded contexts with regard to hardware optimization: suppose I want to optimize one specific service within a specific bounded context (the optimization could be a more powerful CPU, more RAM, etc.) - is it good practice to create a specific archive file just for it? Meaning, to split the initial bounded context further, in order to deliver it on specific hardware tuned for its usage. Or should I consider one bounded context as an **indivisible** unit, and therefore optimize the bounded context as a whole, even if some specific services don't need this increase in hardware power?"} {"_id": "236448", "title": "DATABASE design for storing specifications?", "text": "I am creating a site like gsmarena. Can anyone help me with a database design to store mobile specifications, like here: \"http://www.gsmarena.com/asus_padfone_infinity_lite-6120.php\"? Users can also add more custom specifications."} {"_id": "245929", "title": "Using a class like an object in Python", "text": "I am learning from _Learn Python the hard way_, where I have come across a study drill asking whether a class can be used like an object. As I have experimented: class A(object): def __init__(self): print \"in here\" def cat(self,name): print \"my cat \",name Here, if I write `A.cat(A(),\"Tom\")`, it works, and I found out that this calls the unbound method. Now, does that mean I can assume that classes can be used like objects?"} {"_id": "236441", "title": "Specific class to pass a bundle of data from my Model to Controller or a simple Collection?", "text": "I am building an Invoice application implementing the MVC design pattern in Java. One of the features my application has to have is showing all info about a Customer: personal data, Calls, Invoices, TelephoneNumber, etc. My model, which has access to all this data, has to assemble it and pass it to the Controller. Then, my Controller will show it via my View. I have thought about how to make the connection between the Model and the Controller in a pretty way. One of my alternatives is to add every DTO (TelephoneNumber, CustomerData), and Collection of DTOs (Collection, Collection, and more), to a Collection. IMHO, this is not a good approach to solve this problem well, because I am creating a lot of dependency in the Controller when receiving this information and casting according to the order of assembling. As I wanted to find another solution, I had the idea of creating a class (Bundle) which wraps the customer's data; then, in the Controller's code, I use the methods of this class to get the data I need to display. My question is whether my solution is good, or whether there is another that would be better. I am not quite sure if the Bundle class will be a solution or just crap. Thanks."} {"_id": "153052", "title": "Is there a Javascript library for creating vintage photos?", "text": "I'm working on a Canvas object in HTML5, and I am attempting to make some photos look \"better\". I tried VintageJS, an existing photo-retouching Javascript library, and Picozu, a web application cloning some Adobe Photoshop functionalities, but I'm still not happy. Can you help me with an algorithm or point to an existing Javascript library that would allow me to make my photos look like the following example?
http://i46.photobucket.com/albums/f137/thanhtu_zx/Untitled-1.jpg ![example of a \"vintage\" look](http://i.stack.imgur.com/59Uee.jpg)"} {"_id": "69306", "title": "Pair Swapping: What are the Pros and Cons?", "text": "The general idea that is espoused by most _Agile_ / _XP_ theorists seems to be that pairs should swap regularly. For example each programmer should swap pairs once per day; half the people swap at the start of the day, half the people swap after lunch: due to external factors such as meetings, holidays and the like, most people will tend to flip their swap times once or twice per week so that the pair configurations distribute fairly evenly across the team. One rationale behind frequent swapping is that knowledge is spread amongst the team quickly and evenly, rather than having specific skills and knowledge concentrated in particular individuals - implying that work can continue smoothly if people are either away or leave the company. Another rationale, which is a sort of corollary to the dogma surrounding pair programming itself, is that each time someone swaps in you are getting a new code review by a fresh pair of eyes, so it can only improve code quality. Both assertions sound reasonable; from a management point of view it sounds like you get an increase in both stability and quality, and as such frequent swapping is pretty much standard theory in most _Agile_ / _XP_ books that I've looked at. So, when actually put into practice, what do people actually think about pair swapping from * A programmer's point of view? * A manager's point of view? And * What should determine when someone swaps off of / onto a pair?"} {"_id": "86287", "title": "C# String.format extension method", "text": "With the addition of extension methods to C# we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one: public static class StringExt { /// <summary> /// Shortcut for string.Format. /// </summary> /// <param name=\"str\"></param> /// <param name=\"args\"></param> /// <returns></returns> public static string Format(this string str, params object[] args) { if (str == null) return null; return string.Format(str, args); } } Does this extension method break any programming best practices that you can name? Would you use it anyway, and if not, why? If I renamed the function to \"F\" but left the XML comments, would that be epic fail or just a wonderful savings of keystrokes?"} {"_id": "86286", "title": "Is there something better than a StringBuilder for big blocks of SQL in the code", "text": "I'm just tired of writing a big SQL statement, testing it, and then pasting the SQL into the code and adding all the `sqlstmt.append(\"` at the beginning and the `\")` at the end. It's 2011, isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs. **edit** Found the answer using XML literals and CData. Thanks to all the people that actually tried to answer the question without questioning me for not using an ORM or SPs and for using VB **edit 2** the question leaves me thinking that languages could try to make a better effort at supporting inline SQL with color syntax, etc. It would be cheaper than developing Linq2SQL. Just something like: dim sql = SELECT * ... "} {"_id": "154001", "title": "De-facto standards for customer information record", "text": "I'm currently evaluating a potential new project that involves creating a DB for typical customer information (userid, pwd, first & last name, email, address, telfnr ...). At this point, requirements are only roughly defined.
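To make the question concrete, this is the kind of rough sketch I have in mind so far; every size below is a guess on my part, and those guesses are exactly what I want to replace with a reference: CREATE TABLE customer ( userid VARCHAR(32), pwd_hash CHAR(64), first_name VARCHAR(50), last_name VARCHAR(50), email VARCHAR(254), address VARCHAR(255), telfnr VARCHAR(20) );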
The customer DB is expected to be in the O(millions) of records. In order to calculate some back-of-the-envelope numbers for DB sizing and evaluate potential DB options & architectures, I'm looking for some de-facto standards for this kind of record. **In particular, the standard size of every field (first name, last name, address,...) or a typical average for a simple customer record would be great info**. With so many e-commerce websites out there, there should be some kind of typical config that can be reused to avoid re-inventing the wheel. Any ideas? \\---- edit ---- The answers seem to be steering towards adopting a standard customer record vs designing your own. I would like to stress that the focus of this question is to locate a **reference** for field sizing for a customer object, and avoid figuring that out on my own. (I've emphasized that part in the original text - **now in bold** -)"} {"_id": "86283", "title": "Static teams or dynamic teams?", "text": "Is it better to assemble permanent teams of developers within the company that always work together, from project to project, or is it better to have dynamic teams that assemble for a project, and then disassemble afterwards? My inclination is to treat the entire company as a \"platoon\" and to assemble \"fireteams\" for individual projects, choosing from the \"platoon\" those developers best suited for the project."} {"_id": "154003", "title": "Designing a hierarchical structure with lots of reads and writes?", "text": "I am in the process of working on a video on demand system. Part of it involves the management of a hierarchical tree structure (think Windows Explorer) which allows users to upload videos, move folders around, create new folders, etc. User actions will be allowed depending on which permissions they have. In addition to managing the structure (creating folders and uploading videos), subscribers will be viewing content (read access). The number of reads will be significantly higher than the writes. My question (and it is a big one) is: should I store the data in a database (for the writes) and also have some sort of cache which will be used for the reads? Or do I use two databases? Or is there a better solution? Also, I will have to resolve concurrency issues, which I think optimistic locking on the database will resolve. I have read a fair bit about CQRS over the last few months but I'm not sure if this is the way to go. Any ideas?"} {"_id": "153059", "title": "faster algorithm for finding all subsets", "text": "This is the algorithm (pseudocode) I have right now for finding all subsets for a given set with length k: void allSubsets(set[]) { for (i = 0; i 60,000 lines of code) on which I was the only developer. The specification documents for the project were designed well early on, but as always happens during development, some things have changed. For example: * Bugs were discovered and fixed in ways that don't correspond well with the spec * Additional needs were discovered and dealt with * We found areas where the spec hadn't thought far enough ahead and had to change some implementation * Etc. Through all this, the spec was not kept updated, because we were lean on resources (I know that's a questionable excuse). I'm hoping to get the spec updated soon when we have time, but it's a matter of convincing management to spend the effort. I believe it's a good idea, but I need some good reasoning; I can't speak from very much experience because I'm only a year and a half out of school. So here are my questions: 1.
What value is there in keeping an updated spec? 2. How often should the spec be updated? With every change made that might affect it, just at the end of development, or something in between? EDIT to clarify: this project was internal, so no external client was involved. It's our product."} {"_id": "212924", "title": "Increase speed of a VB.net Application to SQL server 2008 and Hamachi VPN", "text": "Our current Information System (complete with paths to pictures for records stored in the db) has the following specifications: * A desktop application was developed in vb.net * We use SQL server 2008 r2 as the database * Server specs: Intel Xeon processor clocking in at 2.40 GHz, 2 GB RAM and Windows 2008 R2 as the operating system * Hamachi VPN for our offshore offices (within country). * The office has a 6 Mbps dedicated internet connection. Offshore offices vary, ranging from 1 Mbps to 2 Mbps. The application made in vb.net is connected to the server via IP address (locally) and Hamachi public IP (for offshore); we made sure that the tunnel is direct, not relayed. We found some weird things while users used the application. There are times when there are 5 users (local and offshore) simultaneously using it, and it goes great. Then there are times when even just three active users (still local or offshore) suddenly experience extreme slowdown (even for something as simple as a select query to the database for log-in verification). For offshore clients we tried to ping them in Hamachi and found out that each averages 500 bytes per reply. Also, loading records that have pictures in them takes a REALLY long time. Wasn't it supposed to be that if you store the image path instead of using a BLOB, imaging would be faster? (The application's images are stored in a shared folder on the server; we then map that folder to the client unit.) The questions are: based on the above, how exactly can we speed up the system? Should we improve the server specs? Is the internet provider the problem? Must we redesign how we manage pictures in the system? What are the bottlenecks and how do we remedy them? Are there any other things that I should be aware of?"} {"_id": "149003", "title": "How can this deterministic linear time selection algorithm be linear?", "text": "I'm trying to understand the basic concepts of algorithms through the classes offered at Coursera (in bits and pieces). I came across the deterministic linear time selection algorithm that works as follows: Select(A,n,i) (0) If n = 1 return A[1]. (1) p = ChoosePivot(A, n) (2) B = Partition(A, n, p) (3) Suppose p is the jth element of B (i.e., the jth order statistic of A). Let the \"first part of B\" denote its first j - 1 elements and the \"second part\" its last n - j elements. (3a) If i = j, return p. (3b) If i < j, return Select(1st part of B, j - 1, i). (3c) Else return Select(2nd part of B, n - j, i - j). It sorts the array internally in the ChoosePivot subroutine to calculate the median of medians using a comparison-based sorting algorithm. But isn't the lower bound on comparison-based sorting O(n log n)? So how would it be possible for us to achieve O(n) for the entire algorithm then? Am I missing something here?"} {"_id": "149001", "title": "Are there any situations where having a \"data model\" in which all your entities are instances of a Map", "text": "Are there any situations where having a \"data model\" in which all of the entities are instances of a Map would make any sense?
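For concreteness, this is the kind of thing I mean (a made-up sketch, not code from a real system): Map<String, Object> customer = new HashMap<String, Object>(); customer.put(\"id\", 42L); customer.put(\"name\", \"Alice\"); customer.put(\"orders\", new ArrayList<Map<String, Object>>()); // versus the usual typed entity: class Customer { Long id; String name; List<Order> orders; }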
How do you explain to someone why having such a \"data model\" is not such a good idea? It seems like an absurd approach to me; could I be missing something?"} {"_id": "149006", "title": "Testing for object equality by comparing the toString() representations of objects", "text": "Following on from this question \\- how do you explain to someone that this is just crazy!: boolean someMethod(Map context) { Object object = context.get(\"someProperty\"); Object another = context.get(\"anotherProperty\"); return object.toString().equals(another.toString()); } Apparently the reason why `Object.equals(...)` is not used is that \"what's contained in the Map is not concretely known, but is definitely known to be one of the primitive wrappers i.e. `String`, `Double`, `Boolean`, etc... and that `Boolean.TRUE` is required to be `equal(...)` to the `String` \"TRUE\"\"."} {"_id": "212921", "title": "building a chat bot for an e-commerce site", "text": "I was recently talking to a couple of friends who run a modest e-commerce website (daily visitors: approx. 100,000). They plan to have an in-site chat module to engage customers. Since the whole operation is run by 3 people at the moment, they probably won't be able to respond to all the queries in a timely manner. The question they asked me was: is it possible to build an automated chat bot which could engage the customers for simple queries and, for more complex ones, forward the chat to one of them? Now, I haven't built anything of the sort and it looks like a great challenge that I'd love to take up. There's no set deadline for the moment (which is really great) and this gives time (hypothetically) to learn a few things that may come in handy. My questions are as follows: * What sort of architecture would this chat bot consist of? * I was going through this site for a rough idea but I noticed they didn't use a database. * Is NoSQL the standard where chat bots are concerned? If yes, which one's recommended? * I am comfortable with Python - is that a good enough language to build this? * If I wanted to make this project available to more than one person/project, what are the considerations which have to be taken into account so that this bot is adaptable to any site? For example, the people for whom I intend to build this sell only t-shirts on their site, so the bot would answer questions related to fabrics, designs, colors, etc. But tomorrow, if I want to show this bot to someone who, say, sells pottery on his site, which parts of the bot should I leave open to learning? Or should I have a separate module which learns the products of the target site? At the moment I am using this question as a starting point. I apologize if there's anything left out. Please let me know if there are more details required."} {"_id": "49624", "title": "What does the lack of Unicode support in PHP mean?", "text": "How can the lack of Unicode support in PHP affect a PHP web app?"} {"_id": "49623", "title": "Should programming languages be strict or loose?", "text": "In Python and JavaScript, semi-colons are optional. In PHP, quotes around array-keys are optional (`$_GET[key]` vs `$_GET['key']`), although if you omit them it will first look for a constant by that name. It also allows 2 different styles for blocks (colon, or brace delimited). I'm creating a programming language now, and I'm trying to decide how strict I should make it.
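For example (a purely hypothetical sketch, nothing in the grammar is settled yet), I could accept both %div(class=\"header\") Hello and %div class=header Hello and interpret them identically, since the parser can disambiguate either form.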
There are a lot of cases where extra characters aren't really necessary and can be unambiguously interpreted due to priorities, but I'm wondering whether I should still enforce them, to encourage consistency. What do you think? * * * Okay, my language isn't so much a programming language as a fancy templating language. Kind of a cross between Haml and Django templates. Meant to be used with my C# web framework, and supposed to be very extensible."} {"_id": "41883", "title": "Why are most browsers developed in C++", "text": "It seems like most common web browsers (Firefox, Chrome, Safari) are developed using C++. My question is straightforward. Why do they mainly use C++ rather than any other language? **Edit: What factors must have been considered when it comes to C++: writability, cost, reliability, etc.?**"} {"_id": "41880", "title": "How can I apply regression modeling to analyze the characteristics of Extreme Programming and the Rational Unified Process?", "text": "I have been given a task that has a case study developed with Extreme Programming (XP) and the Rational Unified Process (RUP). I was asked to apply regression models and take some dependent variables and some independent variables in order to compare both XP and RUP. Even though I have been searching, I have not been able to identify which dependent and independent variables could be used in software quality testing."} {"_id": "49995", "title": "Convert from Procedural to Object Oriented Code", "text": "I have been reading Working Effectively with Legacy Code and Clean Code with the goal of learning strategies on how to begin cleaning up the existing code-base of a large ASP.NET webforms application. This system has been around since 2005 and since then has undergone a number of enhancements. Originally the code was structured as follows (and is still largely structured this way): * ASP.NET (aspx/ascx) * Code-behind (c#) * Business Logic Layer (c#) * Data Access Layer (c#) * Database (Oracle) The main issue is that the code is procedural masquerading as object-oriented. It virtually violates all of the guidelines described in both books.
This is an example of a typical class in the Business Logic Layer: public class AddressBO { public TransferObject GetAddress(string addressID) { if (StringUtils.IsNull(addressID)) { throw new ValidationException(\"Address ID must be entered\"); } AddressDAO addressDAO = new AddressDAO(); return addressDAO.GetAddress(addressID); } public TransferObject Insert(TransferObject addressDetails) { if (StringUtils.IsNull(addressDetails.GetString(\"EVENT_ID\")) || StringUtils.IsNull(addressDetails.GetString(\"LOCALITY\")) || StringUtils.IsNull(addressDetails.GetString(\"ADDRESS_TARGET\")) || StringUtils.IsNull(addressDetails.GetString(\"ADDRESS_TYPE_CODE\")) || StringUtils.IsNull(addressDetails.GetString(\"CREATED_BY\"))) { throw new ValidationException( \"You must enter an Event ID, Locality, Address Target, Address Type Code and Created By.\"); } string addressID = Sequence.GetNextValue(\"ADDRESS_ID_SEQ\"); addressDetails.SetValue(\"ADDRESS_ID\", addressID); string syncID = Sequence.GetNextValue(\"SYNC_ID_SEQ\"); addressDetails.SetValue(\"SYNC_ADDRESS_ID\", syncID); TransferObject syncDetails = new TransferObject(); Transaction transaction = new Transaction(); try { AddressDAO addressDAO = new AddressDAO(); addressDAO.Insert(addressDetails, transaction); // insert the record for the target TransferObject addressTargetDetails = new TransferObject(); switch (addressDetails.GetString(\"ADDRESS_TARGET\")) { case \"PARTY_ADDRESSES\": { addressTargetDetails.SetValue(\"ADDRESS_ID\", addressID); addressTargetDetails.SetValue(\"ADDRESS_TYPE_CODE\", addressDetails.GetString(\"ADDRESS_TYPE_CODE\")); addressTargetDetails.SetValue(\"PARTY_ID\", addressDetails.GetString(\"PARTY_ID\")); addressTargetDetails.SetValue(\"EVENT_ID\", addressDetails.GetString(\"EVENT_ID\")); addressTargetDetails.SetValue(\"CREATED_BY\", addressDetails.GetString(\"CREATED_BY\")); addressDAO.InsertPartyAddress(addressTargetDetails, transaction); break; } case \"PARTY_CONTACT_ADDRESSES\": { addressTargetDetails.SetValue(\"ADDRESS_ID\", addressID); addressTargetDetails.SetValue(\"ADDRESS_TYPE_CODE\", addressDetails.GetString(\"ADDRESS_TYPE_CODE\")); addressTargetDetails.SetValue(\"PUBLIC_RELEASE_FLAG\", addressDetails.GetString(\"PUBLIC_RELEASE_FLAG\")); addressTargetDetails.SetValue(\"CONTACT_ID\", addressDetails.GetString(\"CONTACT_ID\")); addressTargetDetails.SetValue(\"EVENT_ID\", addressDetails.GetString(\"EVENT_ID\")); addressTargetDetails.SetValue(\"CREATED_BY\", addressDetails.GetString(\"CREATED_BY\")); addressDAO.InsertContactAddress(addressTargetDetails, transaction); break; } << many more cases here >> default: { break; } } // synchronise SynchronisationBO synchronisationBO = new SynchronisationBO(); syncDetails = synchronisationBO.Synchronise(\"I\", transaction, \"ADDRESSES\", addressDetails.GetString(\"ADDRESS_TARGET\"), addressDetails, addressTargetDetails); // commit transaction.Commit(); } catch (Exception) { transaction.Rollback(); throw; } return new TransferObject(\"ADDRESS_ID\", addressID, \"SYNC_DETAILS\", syncDetails); } << many more methods are here >> } It has a lot of duplication, the class has a number of responsibilities, etc, etc - it is just generally 'un-clean' code. All of the code throughout the system is dependent on concrete implementations. 
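For contrast, the direction I imagine we should be moving in is something like this (a rough sketch with made-up names, not code we actually have): public interface IAddressRepository { TransferObject GetAddress(string addressId); } public class AddressBO { private readonly IAddressRepository _repository; public AddressBO(IAddressRepository repository) { _repository = repository; } public TransferObject GetAddress(string addressId) { if (StringUtils.IsNull(addressId)) { throw new ValidationException(\"Address ID must be entered\"); } return _repository.GetAddress(addressId); } } A unit test could then hand AddressBO a fake IAddressRepository instead of hitting Oracle.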
This is an example of a typical class in the Data Access Layer: public class AddressDAO : GenericDAO { public static readonly string BASE_SQL_ADDRESSES = \"SELECT \" + \" a.address_id, \" + \" a.event_id, \" + \" a.flat_unit_type_code, \" + \" fut.description as flat_unit_description, \" + \" a.flat_unit_num, \" + \" a.floor_level_code, \" + \" fl.description as floor_level_description, \" + \" a.floor_level_num, \" + \" a.building_name, \" + \" a.lot_number, \" + \" a.street_number, \" + \" a.street_name, \" + \" a.street_type_code, \" + \" st.description as street_type_description, \" + \" a.street_suffix_code, \" + \" ss.description as street_suffix_description, \" + \" a.postal_delivery_type_code, \" + \" pdt.description as postal_delivery_description, \" + \" a.postal_delivery_num, \" + \" a.locality, \" + \" a.state_code, \" + \" s.description as state_description, \" + \" a.postcode, \" + \" a.country, \" + \" a.lock_num, \" + \" a.created_by, \" + \" TO_CHAR(a.created_datetime, '\" + SQL_DATETIME_FORMAT + \"') as created_datetime, \" + \" a.last_updated_by, \" + \" TO_CHAR(a.last_updated_datetime, '\" + SQL_DATETIME_FORMAT + \"') as last_updated_datetime, \" + \" a.sync_address_id, \" + \" a.lat,\" + \" a.lon, \" + \" a.validation_confidence, \" + \" a.validation_quality, \" + \" a.validation_status \" + \"FROM ADDRESSES a, FLAT_UNIT_TYPES fut, FLOOR_LEVELS fl, STREET_TYPES st, \" + \" STREET_SUFFIXES ss, POSTAL_DELIVERY_TYPES pdt, STATES s \" + \"WHERE a.flat_unit_type_code = fut.flat_unit_type_code(+) \" + \"AND a.floor_level_code = fl.floor_level_code(+) \" + \"AND a.street_type_code = st.street_type_code(+) \" + \"AND a.street_suffix_code = ss.street_suffix_code(+) \" + \"AND a.postal_delivery_type_code = pdt.postal_delivery_type_code(+) \" + \"AND a.state_code = s.state_code(+) \"; public TransferObject GetAddress(string addressID) { //Build the SELECT Statement StringBuilder selectStatement = new StringBuilder(BASE_SQL_ADDRESSES); //Add WHERE condition selectStatement.Append(\" AND a.address_id = :addressID\"); ArrayList parameters = new ArrayList{DBUtils.CreateOracleParameter(\"addressID\", OracleDbType.Decimal, addressID)}; // Execute the SELECT statement Query query = new Query(); DataSet results = query.Execute(selectStatement.ToString(), parameters); // Check if 0 or more than one rows returned if (results.Tables[0].Rows.Count == 0) { throw new NoDataFoundException(); } if (results.Tables[0].Rows.Count > 1) { throw new TooManyRowsException(); } // Return a TransferObject containing the values return new TransferObject(results); } public void Insert(TransferObject insertValues, Transaction transaction) { // Store Values string addressID = insertValues.GetString(\"ADDRESS_ID\"); string syncAddressID = insertValues.GetString(\"SYNC_ADDRESS_ID\"); string eventID = insertValues.GetString(\"EVENT_ID\"); string createdBy = insertValues.GetString(\"CREATED_BY\"); // postal delivery string postalDeliveryTypeCode = insertValues.GetString(\"POSTAL_DELIVERY_TYPE_CODE\"); string postalDeliveryNum = insertValues.GetString(\"POSTAL_DELIVERY_NUM\"); // unit/building string flatUnitTypeCode = insertValues.GetString(\"FLAT_UNIT_TYPE_CODE\"); string flatUnitNum = insertValues.GetString(\"FLAT_UNIT_NUM\"); string floorLevelCode = insertValues.GetString(\"FLOOR_LEVEL_CODE\"); string floorLevelNum = insertValues.GetString(\"FLOOR_LEVEL_NUM\"); string buildingName = insertValues.GetString(\"BUILDING_NAME\"); // street string lotNumber = insertValues.GetString(\"LOT_NUMBER\"); string 
streetNumber = insertValues.GetString(\"STREET_NUMBER\"); string streetName = insertValues.GetString(\"STREET_NAME\"); string streetTypeCode = insertValues.GetString(\"STREET_TYPE_CODE\"); string streetSuffixCode = insertValues.GetString(\"STREET_SUFFIX_CODE\"); // locality/state/postcode/country string locality = insertValues.GetString(\"LOCALITY\"); string stateCode = insertValues.GetString(\"STATE_CODE\"); string postcode = insertValues.GetString(\"POSTCODE\"); string country = insertValues.GetString(\"COUNTRY\"); // esms address string esmsAddress = insertValues.GetString(\"ESMS_ADDRESS\"); //address/GPS string lat = insertValues.GetString(\"LAT\"); string lon = insertValues.GetString(\"LON\"); string zoom = insertValues.GetString(\"ZOOM\"); //string validateDate = insertValues.GetString(\"VALIDATED_DATE\"); string validatedBy = insertValues.GetString(\"VALIDATED_BY\"); string confidence = insertValues.GetString(\"VALIDATION_CONFIDENCE\"); string status = insertValues.GetString(\"VALIDATION_STATUS\"); string quality = insertValues.GetString(\"VALIDATION_QUALITY\"); // the insert statement StringBuilder insertStatement = new StringBuilder(\"INSERT INTO ADDRESSES (\"); StringBuilder valuesStatement = new StringBuilder(\"VALUES (\"); ArrayList parameters = new ArrayList(); // build the insert statement insertStatement.Append(\"ADDRESS_ID, EVENT_ID, CREATED_BY, CREATED_DATETIME, LOCK_NUM \"); valuesStatement.Append(\":addressID, :eventID, :createdBy, SYSDATE, 1 \"); parameters.Add(DBUtils.CreateOracleParameter(\"addressID\", OracleDbType.Decimal, addressID)); parameters.Add(DBUtils.CreateOracleParameter(\"eventID\", OracleDbType.Decimal, eventID)); parameters.Add(DBUtils.CreateOracleParameter(\"createdBy\", OracleDbType.Varchar2, createdBy)); // build the insert statement if (!StringUtils.IsNull(syncAddressID)) { insertStatement.Append(\", SYNC_ADDRESS_ID\"); valuesStatement.Append(\", :syncAddressID\"); parameters.Add(DBUtils.CreateOracleParameter(\"syncAddressID\", OracleDbType.Decimal, syncAddressID)); } if (!StringUtils.IsNull(postalDeliveryTypeCode)) { insertStatement.Append(\", POSTAL_DELIVERY_TYPE_CODE\"); valuesStatement.Append(\", :postalDeliveryTypeCode \"); parameters.Add(DBUtils.CreateOracleParameter(\"postalDeliveryTypeCode\", OracleDbType.Varchar2, postalDeliveryTypeCode)); } if (!StringUtils.IsNull(postalDeliveryNum)) { insertStatement.Append(\", POSTAL_DELIVERY_NUM\"); valuesStatement.Append(\", :postalDeliveryNum \"); parameters.Add(DBUtils.CreateOracleParameter(\"postalDeliveryNum\", OracleDbType.Varchar2, postalDeliveryNum)); } if (!StringUtils.IsNull(flatUnitTypeCode)) { insertStatement.Append(\", FLAT_UNIT_TYPE_CODE\"); valuesStatement.Append(\", :flatUnitTypeCode \"); parameters.Add(DBUtils.CreateOracleParameter(\"flatUnitTypeCode\", OracleDbType.Varchar2, flatUnitTypeCode)); } if (!StringUtils.IsNull(lat)) { insertStatement.Append(\", LAT\"); valuesStatement.Append(\", :lat \"); parameters.Add(DBUtils.CreateOracleParameter(\"lat\", OracleDbType.Decimal, lat)); } if (!StringUtils.IsNull(lon)) { insertStatement.Append(\", LON\"); valuesStatement.Append(\", :lon \"); parameters.Add(DBUtils.CreateOracleParameter(\"lon\", OracleDbType.Decimal, lon)); } if (!StringUtils.IsNull(zoom)) { insertStatement.Append(\", ZOOM\"); valuesStatement.Append(\", :zoom \"); parameters.Add(DBUtils.CreateOracleParameter(\"zoom\", OracleDbType.Decimal, zoom)); } if (!StringUtils.IsNull(flatUnitNum)) { insertStatement.Append(\", FLAT_UNIT_NUM\"); valuesStatement.Append(\", :flatUnitNum \"); 
parameters.Add(DBUtils.CreateOracleParameter(\"flatUnitNum\", OracleDbType.Varchar2, flatUnitNum)); } if (!StringUtils.IsNull(floorLevelCode)) { insertStatement.Append(\", FLOOR_LEVEL_CODE\"); valuesStatement.Append(\", :floorLevelCode \"); parameters.Add(DBUtils.CreateOracleParameter(\"floorLevelCode\", OracleDbType.Varchar2, floorLevelCode)); } if (!StringUtils.IsNull(floorLevelNum)) { insertStatement.Append(\", FLOOR_LEVEL_NUM\"); valuesStatement.Append(\", :floorLevelNum \"); parameters.Add(DBUtils.CreateOracleParameter(\"floorLevelNum\", OracleDbType.Varchar2, floorLevelNum)); } if (!StringUtils.IsNull(buildingName)) { insertStatement.Append(\", BUILDING_NAME\"); valuesStatement.Append(\", :buildingName \"); parameters.Add(DBUtils.CreateOracleParameter(\"buildingName\", OracleDbType.Varchar2, buildingName)); } if (!StringUtils.IsNull(lotNumber)) { insertStatement.Append(\", LOT_NUMBER\"); valuesStatement.Append(\", :lotNumber \"); parameters.Add(DBUtils.CreateOracleParameter(\"lotNumber\", OracleDbType.Varchar2, lotNumber)); } if (!StringUtils.IsNull(streetNumber)) { insertStatement.Append(\", STREET_NUMBER\"); valuesStatement.Append(\", :streetNumber \"); parameters.Add(DBUtils.CreateOracleParameter(\"streetNumber\", OracleDbType.Varchar2, streetNumber)); } if (!StringUtils.IsNull(streetName)) { insertStatement.Append(\", STREET_NAME\"); valuesStatement.Append(\", :streetName \"); parameters.Add(DBUtils.CreateOracleParameter(\"streetName\", OracleDbType.Varchar2, streetName)); } if (!StringUtils.IsNull(streetTypeCode)) { insertStatement.Append(\", STREET_TYPE_CODE\"); valuesStatement.Append(\", :streetTypeCode \"); parameters.Add(DBUtils.CreateOracleParameter(\"streetTypeCode\", OracleDbType.Varchar2, streetTypeCode)); } if (!StringUtils.IsNull(streetSuffixCode)) { insertStatement.Append(\", STREET_SUFFIX_CODE\"); valuesStatement.Append(\", :streetSuffixCode \"); parameters.Add(DBUtils.CreateOracleParameter(\"streetSuffixCode\", OracleDbType.Varchar2, streetSuffixCode)); } if (!StringUtils.IsNull(locality)) { insertStatement.Append(\", LOCALITY\"); valuesStatement.Append(\", :locality\"); parameters.Add(DBUtils.CreateOracleParameter(\"locality\", OracleDbType.Varchar2, locality)); } if (!StringUtils.IsNull(stateCode)) { insertStatement.Append(\", STATE_CODE\"); valuesStatement.Append(\", :stateCode\"); parameters.Add(DBUtils.CreateOracleParameter(\"stateCode\", OracleDbType.Varchar2, stateCode)); } if (!StringUtils.IsNull(postcode)) { insertStatement.Append(\", POSTCODE\"); valuesStatement.Append(\", :postcode \"); parameters.Add(DBUtils.CreateOracleParameter(\"postcode\", OracleDbType.Varchar2, postcode)); } if (!StringUtils.IsNull(country)) { insertStatement.Append(\", COUNTRY\"); valuesStatement.Append(\", :country \"); parameters.Add(DBUtils.CreateOracleParameter(\"country\", OracleDbType.Varchar2, country)); } if (!StringUtils.IsNull(esmsAddress)) { insertStatement.Append(\", ESMS_ADDRESS\"); valuesStatement.Append(\", :esmsAddress \"); parameters.Add(DBUtils.CreateOracleParameter(\"esmsAddress\", OracleDbType.Varchar2, esmsAddress)); } if (!StringUtils.IsNull(validatedBy)) { insertStatement.Append(\", VALIDATED_DATE\"); valuesStatement.Append(\", SYSDATE \"); insertStatement.Append(\", VALIDATED_BY\"); valuesStatement.Append(\", :validatedBy \"); parameters.Add(DBUtils.CreateOracleParameter(\"validatedBy\", OracleDbType.Varchar2, validatedBy)); } if (!StringUtils.IsNull(confidence)) { insertStatement.Append(\", VALIDATION_CONFIDENCE\"); valuesStatement.Append(\", :confidence 
\"); parameters.Add(DBUtils.CreateOracleParameter(\"confidence\", OracleDbType.Decimal, confidence)); } if (!StringUtils.IsNull(status)) { insertStatement.Append(\", VALIDATION_STATUS\"); valuesStatement.Append(\", :status \"); parameters.Add(DBUtils.CreateOracleParameter(\"status\", OracleDbType.Varchar2, status)); } if (!StringUtils.IsNull(quality)) { insertStatement.Append(\", VALIDATION_QUALITY\"); valuesStatement.Append(\", :quality \"); parameters.Add(DBUtils.CreateOracleParameter(\"quality\", OracleDbType.Decimal, quality)); } // finish off the statement insertStatement.Append(\") \"); valuesStatement.Append(\")\"); // build the insert statement string sql = insertStatement + valuesStatement.ToString(); // Execute the INSERT Statement Dml dmlDAO = new Dml(); int rowsAffected = dmlDAO.Execute(sql, transaction, parameters); if (rowsAffected == 0) { throw new NoRowsAffectedException(); } } << many more methods go here >> } This system was developed by me and a small team back in 2005 after a 1 week .NET course. Before than my experience was in client-server applications. Over the past 5 years I've come to recognise the benefits of automated unit testing, automated integration testing and automated acceptance testing (using Selenium or equivalent) but the current code-base seems impossible to introduce these concepts. We are now starting to work on a major enhancement project with tight time- frames. The team consists of 5 .NET developers - 2 developers with a few years of .NET experience and 3 others with little or no .NET experience. None of the team (including myself) has experience in using .NET unit testing or mocking frameworks. What strategy would you use to make this code cleaner, more object-oriented, testable and maintainable?"} {"_id": "152099", "title": "Object Oriented programming on 8-bit MCU Case Study", "text": "I see that there's a lot of questions related to OO Programming here. I'm actually trying to find a specific resource related to embedded OO approaches for an 8 bit MCU. Several years back (maybe 6) I was looking for material related to Object Oriented programming for resource constrained 8051 microprocessors. I found an article/website with a case history of a design group that used a very small RAM part, and implemented many Object based constructs during their C design and development. I believe it was an 8051. The project was a success, and managed to stay inside the very small ROM/RAM they had available. I'm attempting to find it again, but Google can't locate it. The article was well written, and recommended a \"mixed\" approach using C methods for inheritance and encapsulation - if I recall correctly. Can anyone help me locate this article?"} {"_id": "168146", "title": "What is the term for a 'decoy' feature or intentional bug?", "text": "I have forgotten a slang programming term. This thing is an intentional bug or a decoy feature used as a distraction. An example usage, \"Hey Bob, QA is doing a review today. Put a `$THING` into the module so they actually have a problem to find\". This can be used negatively, to have a very obvious intentional flaw to discover as a distraction from a real problem. This can also be used positively. Its like how you always let rescue dogs 'find' a victim when searching a disaster area. It can also be used to verify that a QA process is actually catching flaws. 
What is the term I am looking for?"} {"_id": "228287", "title": "Returning null or an empty value/throw exception?", "text": "Various programming books suggest that methods should not return `null` values (Clean Code for example). Instead of returning `null`, default values (0, an empty string, or an empty object) should be returned, or an exception should be thrown. This is recommended in order to avoid many `!= null` checks or to avoid `NullPointerException`. I really don't understand how this helps. If you return a default value you have to make `!= DEFAULT` checks; moreover, the default value might mean something (like 0 for a credit amount). Throwing a specific exception is not worth it, because if you don't handle your code well, an exception is thrown anyway - `NullPointerException`. I thought about this when I found a bug caused by a method which returned an empty object (e.g. `new MyBean()`) instead of returning `null`. The code didn't work functionally and the logs didn't warn about anything, so detection and fixing was not that easy; with `null`, a `NullPointerException` would have been thrown, so spotting and fixing the bug would have been trivial. The only case where it makes sense to me to return a default value instead of returning `null` is when returning arrays/collections; here they are usually consumed `for each element in collection`, so if it's empty nothing happens, but if it's `null` an NPE is thrown."} {"_id": "152097", "title": "What are the legal risks if any of using a GPL or AGPL Web Application Framework/CMS?", "text": "Tried to ask this on SO but was referred here... Am I correct in saying that using a GPL'ed web application framework such as Composite C1 would NOT obligate a company to share the source code we write against said framework? That is the purpose of the AGPL, am I correct? Does this also apply to Javascript frameworks like KendoUI? The GPL would require any changes that we make to the framework be made available to others if we were to offer it for download. In other words, merely loading a web site's content into my browser is not \"conveying\" or \"distributing\" that software. I have been arguing that we should avoid GPL web frameworks and now after researching I am pretty sure I am wrong but wanted to get other opinions? Seth"} {"_id": "49999", "title": "Including a GPL-license library in a commercial Java program", "text": "I am in the process of helping develop a commercial Java program. We want to use a certain third-party LookAndFeel in the GUI. The problem is that the LookAndFeel library is GPL licensed. Since we are planning to sell our product, would it be legal to use that GPL LookAndFeel in our product? The LookAndFeel in question is the InfoNode Look And Feel"} {"_id": "254757", "title": "Where is the 'this' variable stored?", "text": "Let's take this simple C++ program as an example: #include <vector> class A { void fun() { a = this + 1; } A* a; }; int main() { std::vector<A>
vec; vec.resize(100); } Forgetting for the moment that `this` is a pointer to some instantiation of `A`, this question is about the variable `this` itself, and where its contents (the memory address value) are stored. It can't be a literal because the compiler can't know the location of class `A`, so its contents must be allocated _somewhere_ on the stack or heap as a variable. In this case the instances of `A` are allocated on the heap; does this mean the `this` pointer is also on the heap? Looking at the `resize` call of `vec`, a compiler might do something like this under the hood: std::vector<A>::resize(this, 100); // where 'this' is a pointer to vec So my question is, where does the `this` pointer come from? I.e. where are the contents of `this` (the 32/64-bit memory address value) stored in memory for it to be passed to this method? I would presume it can't be a normal member; so my first thought was that it's a mangled global variable of some sort, but then by storing an unknown number of them in a vector I don't see how the compiler could achieve this either. Where are the contents of the `this` variable stored in relation to the contents of the `A` class member to which it refers? If the standard has something to say about this, I would prefer an answer with reference to the standard. Note that the reason I want to know is because I am concerned about false sharing of the memory allocated for the `this` object if I modify member variables. Knowledge of where the `this` object is stored might affect padding decisions."} {"_id": "158634", "title": "Dynamic choice of compilers?", "text": "An application has the following logic: * client => created *.cpp => sent to the server => cl.exe + *.cpp = *.exe * client => created *.cs => sent to the server => csc.exe + *.cs = *.exe * client => created *.pas => sent to the server => PascalCompiler + *.pas = *.exe * etc. The language - C#. I hear that MSBUILD can help me, but I don't understand how to change the compiler on the server at runtime. Does anyone know of options other than the Process class? I'm trying to write an online system to test programs in programming contests. The programmer selects the compiler and sends the source code to the server for verification. And I do not need to build the project, I just need to compile a single file."} {"_id": "158633", "title": "Distributed application using RabbitMQ", "text": "I am on my way to creating an application with 4 bounded contexts using CQRS & event sourcing. In order to make these bounded contexts talk to each other I was planning on using RabbitMQ. My requirements are the following: * The bounded contexts must not know about each other * A bounded context must not be hindered if the other bounded contexts are down or if RabbitMQ is down * If a bounded context is not available, the messages that it should receive must wait patiently for its return, i.e.: every event published must be forwarded at least once to all the BCs interested. Therefore, I was thinking about having each of them reference a queue, BandMasterQueue, and create it if it does not exist. When producing an event, the event store stores it, and its dispatch consists of two steps: 1. project this event onto every one of its own projections; each projection has a tiny store of already-projected events to guarantee idempotency over some period of time. 2.
publish this event to the queue BandMasterQueue. If it would fail on step 2 because no RabbitMQ is available, then I expect my event store to try again to dispatch the message to RabbitMQ after some time (I have not really thought precisely about this part, I must say...). The bandMaster \"bounded context\" knows about every different BC he looks after. He is the one who reads the bandMasterQueue and decides which events are public and which are not. He is also responsible for handling the \"sagas\" and can send corrective actions to each BC. In order to do that, he will communicate with each bounded context using that context's dedicated queue, creating it if it does not exist. Each BC is bound to this specific queue, adds the event coming from outside to its own event store, and afterwards projects it to its own projections. Should this \"architecture\" work or do you see any major flaw in it?"} {"_id": "253135", "title": "Property-level value transformation for indirect object casting", "text": "Does any programming language exist that supports explicit, property-level object copying? For example, assume this code: public class Student { public string Name { get; set; } public string Code { get; set; } public List<string> Interests { get; set; } } public class Item { public string Name { get; set; } public string Code { get; set; } } I wonder if there is a compiler (or a language) out there that can indirectly cast two objects to each other, based on property-level matching and a lossy transformation algorithm? 1. Create an instance of the target class 2. Using reflection, find all properties of the target class 3. For each property, try to find a match in the source class, using name and type 4. If found, transfer the value 5. Otherwise, initialize to the default value Of course, this should be done **explicitly** with the full intention of the developer. For example, a C# pseudo-syntax might look like: Student student = new Student(); student.Name = \"Saeed\"; student.Code = \"513223\"; Item item = copy student; // assume \"copy\" to be a keyword // Here, item's name is \"Saeed\" and its code is \"513223\""} {"_id": "58156", "title": "SEO for AJAX based website", "text": "We have a completely AJAX-based website: http://news.swalif.com/. Now we want to get all the 'pages' crawled by Google and other search engines, but we do not want to move to a PHP based solution or make an archive. I would like to get any ideas on how it could be possible."} {"_id": "190214", "title": "Time limit on user input", "text": "I am trying to put a time limit on the user input, so if they take longer than 2 seconds to type in an input then the program will terminate. I was wondering if there is a simple way to do this in the C language?"} {"_id": "160337", "title": "Unit Tests code duplication?", "text": "How can I avoid code duplication in unit tests?
Using Java and JUnit, suppose I have something like this: public interface Arithmetic<T> { public T add(T a, T b); public T sub(T a, T b); } public class IntegerArithmetic implements Arithmetic<Integer> { // ... } public class DoubleArithmetic implements Arithmetic<Double> { // ... } And the test class: public class TestArith { private Arithmetic<Integer> arith; private Integer term1; private Integer term2; @Before public void init() { arith = new IntegerArithmetic(); term1 = 4; term2 = 3; } @Test public void testAdd() { assertEquals(7, arith.add(term1, term2)); } @Test public void testSub() { assertEquals(1, arith.sub(term1, term2)); } } The problem is that the test class for the DoubleArithmetic class should look exactly the same; the only difference is in @Before init() where the initialization of the concrete type is done. How can this code duplication be avoided?"} {"_id": "190216", "title": "Which is simpler for REST client call to return JSON - JQuery/JavaScript or Spring RestTemplate?", "text": "I've been trying to hack up an annotated Spring MVC web app but it's proving pretty hard to call a URL of my web app which fires a request to a remote API (UK Police data) and receives a reply which I can then return as JSON. I've been trying to use Spring's RestTemplate. The odd JavaScript example that I've glanced at does seem suited to that task a lot better and is simpler than a full-blown servlet. **Is that an opinion that you also share and can back up with a good, clear simple example?** JavaScript does seem to provide a good platform for this kind of task. **Or, in other words, is it simpler to create a \"web-app\" with JavaScript that does some REST calls and renders the response than trying to do it with Spring?**"} {"_id": "158639", "title": "What do you need to do when mentoring a new graduate?", "text": "Well, I have been THE junior developer in a small team for several years. Now there is a new fresh graduate, so I have become the not-so-junior developer and have been asked to mentor him. But I have no idea what to do, since I have always been kind of a \"figure-things-out-on-my-own\" person and didn't receive much mentoring. So, what do you need to do when mentoring a new graduate?"} {"_id": "158638", "title": "How stable is Common LISP as a language?", "text": "I have been reading a bit about Common Lisp and I am considering trying to learn it (I only know very basic concepts) or even using it for some project. Question: how stable is Common Lisp as a language? What I mean is, regardless of new libraries, are new language features (involving new syntax and semantics) being added to the language? In other words, if I learn Common Lisp now, would I be able to use the same language 10 years from now, or read code that was written 10, 20 years ago (apart, of course, from having to get familiar with different libraries and APIs)?"} {"_id": "251642", "title": "Inheriting from Abstract class vs Enum Types for custom exceptions", "text": "I am creating an interface and would like the implementer(s) of this interface to throw exceptions in the case that something goes wrong. Let's call the implementer a plugin. I have a director which can call any one of the plugins. Rather than each plugin throwing its own random exception, I would like plugins to throw specific exceptions which my director can then handle. For example: If the plugins fail to authenticate with the credentials that are provided by the director, or if the requested action times out on the plugin, the plugin should throw an exception.
Rather than each plugin throwing its own random exception, I would like plugins to throw specific exceptions that the director knows about and can handle. I do this because if an exception is fatal (like authentication failed) I don't want to retry, whereas if it was a timeout, the director may decide to retry. Now, I am confused as to which of the following ways is better: **Idea 1:** Create a custom exception (let's say `PluginException`) and have an enum type which lists the different types of exception (such as `authenticationfailed`, `timeout`, `permissiondenied`, etc.) The director will catch all `PluginException`s and handle each exception case by case. **Idea 2:** Create an abstract exception (let's say `PluginException` again) and have specific exception types derive from my abstract exception (like `AuthenticationFailedException`, `TimeoutException`, etc.) The director can listen for any specific exception it is interested in and also listen for the broader `PluginException` if none of the above exception types match the exception thrown by the plugins."} {"_id": "169819", "title": "Detect duplicate in a subset from a set of elements", "text": "I have a set of numbers, say: 1 1 2 8 5 6 6 7 8 8 4 2... I want to detect the duplicate elements in subsets (of given size, say k) of the above numbers... For example: Consider the increasing subsets (for example, consider k=3) Subset 1 :{1,1,2} Subset 2 :{1,2,8} Subset 3 :{2,8,5} Subset 4 :{8,5,6} Subset 5 :{5,6,6} Subset 6 :{6,6,7} .... .... So my algorithm should detect that subsets 1, 5 and 6 contain duplicates.. My approach: 1) Copy the 1st k elements to a temporary array (vector) 2) Using #include <algorithm> from the C++ STL and its unique(), I would determine if there's any change in the size of the vector.. Any other clue how to approach this problem.."} {"_id": "179064", "title": "Date calculation algorithm", "text": "I'm working on a project to schedule a machine shop, basically I've got everything covered BUT date calculations, I've got a method called schedule (working on PHP here): public function schedule($start_date, $duration_in_minutes) Now my problem is, currently I'm calculating end time manually because time calculations have the following rules: 1. During weekdays, work with business hours (7:00 AM to 5:00 PM) 2. Work on Saturdays from 7:00 AM to 2:00 PM 3. Ignore holidays (in Colombia we have A LOT of holidays) I already have a lookup table for holidays, I also have a Java version of this algorithm that I wrote for a previous version of the project, but that one's also manual. Is there any way to calculate an end time from a start time and a given duration? My problem is that I have to consider the above rules; I'm looking for a (maybe?) math-based solution, however I currently don't have the mind to devise such a solution myself. I'll be happy to provide code samples if necessary."} {"_id": "164828", "title": "Why are people laughing at visual basic?", "text": "When I was at high school I used Visual Basic 6 and I think it was pretty good. Then I came to the university and began to use C/C++, Java, Python, etc. I haven't found a reason why people laugh at Visual Basic. Is it because of its syntax, or something else? **UPDATE:** all of you have given fantastic answers, but tdammers's answer is what I was wondering about at first. Thanks a lot for all your help :)"} {"_id": "164756", "title": "TDD, BDD or both?", "text": "I'm a little bit confused about BDD. I'm doing TDD currently.
My question is whether BDD is complementary to TDD, or whether it's a whole new thing and my team should do both TDD and BDD. Or is it enough to do just one or the other?"} {"_id": "135218", "title": "What is the difference between BDD and TDD?", "text": "I have been learning to write test cases for BDD using SpecFlow. If I write comprehensive tests with BDD, is it necessary to write TDD tests separately? Is it necessary to write test cases for both TDD and BDD separately? It seems to me that both are the same, the only difference being that BDD can be understood by non-developers and testers."} {"_id": "53585", "title": "Is the difference between BDD and TDD nothing more than a vocabulary shift?", "text": "I recently made a start on learning BDD (Behaviour Driven Development) after watching a Google tech talk presented by David Astels. He made a very interesting case for using BDD, and some of the literature I've read seems to highlight that it's easier to sell BDD to management. Admittedly, I'm a little skeptical about BDD after watching the above video. So, I'm interested to understand if BDD is indeed nothing more than a change in vocabulary or if it offers other benefits."} {"_id": "111837", "title": "Relation between BDD and TDD", "text": "What is the relation between BDD and TDD? From what I understood, BDD adds two main things over TDD: test naming (ensure/should) and acceptance tests. Should I follow TDD during development when doing BDD? If yes, should my TDD unit tests be named in the same ensure/should style?"} {"_id": "169816", "title": "Returning a mock object from a mock object", "text": "I'm trying to return an object when mocking a parser class. This is the test code using PHPUnit 3.7: //set up the result object that I want to be returned from the call to parse method $parserResult = new ParserResult(); $parserResult->setSegment('some string'); //set up the stub Parser object $stubParser = $this->getMock('Parser'); $stubParser->expects($this->any()) ->method('parse') ->will($this->returnValue($parserResult)); //injecting the stub to my client class $fileWriter = new FileWriter($stubParser); $output = $fileWriter->writeStringToFile(); Inside my `writeStringToFile()` method I'm using `$parserResult` like this: public function writeStringToFile() { //Some code... $parserResult = $this->parser->parse(); $segment = $parserResult->getSegment(); //that's why I set the segment in the test. } Should I mock `ParserResult` in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do this all?!"} {"_id": "169815", "title": "Help to understand the abstract factory pattern", "text": "I'm learning the 23 design patterns of the GoF. I think I've found a way to understand and simplify how the Abstract Factory works, but I would like to know if this is a correct assumption or if I am wrong. What I want to know is if we can see the result of the Abstract Factory method as a matrix of possible products, where there's a Product for every \"Concrete Factory\" x \"AbstractProduct\", where the Concrete Factory is a single implementation among the implementations of an AbstractFactory and an AbstractProduct is an interface among the interfaces to create Products. Is this correct or am I missing something?"} {"_id": "169814", "title": "What are the differences between Special Edition and the Third Edition of Stroustrup's The C++ Programming Language?", "text": "I'm buying a few C++ books after moving from Java.
I obviously want to read the reference manual from the man himself, though I cannot tell the difference between these two editions. The special edition is ten pages shorter than the third edition. However, the special edition is recommended over the third edition, and it seems this version covers the ISO standard while the other edition does not. Can anyone shed a bit of light on this?"} {"_id": "162798", "title": "Can we create desktop applications with Ruby?", "text": "I know the Ruby on Rails framework is only for web development and not suitable for desktop application development. But if a Ruby programmer wants to develop a desktop application, is it suitable and preferable to do it with Ruby only (not JRuby, as most of the tutorials are for JRuby)? If yes, please provide some good tutorials. I want to use Linux as the OS for development. Please suggest something, as I am a Ruby developer and want to develop a desktop application."} {"_id": "162796", "title": "How quickly does the Java language get outdated?", "text": "I started learning Java recently. I started learning it using books that I picked up from the library, some that I bought, and here and there from the Java documentation. The book that I use for Java was published in the year 2011. In 2012, Java 8 will be released, followed by Java 9 in the year 2013. The questions are: * How do I keep myself updated about developments in Java without having to buy a tome for Java 8 and/or Java 9? * Is a book published in 2008 an outdated book for studying JSP and Servlets? I'm talking about Head First Servlets and JSP"} {"_id": "203844", "title": "What does it mean to \"expose\" something?", "text": "So I am working on creating a Google App Engine Application, and I've come across the term \"expose\" a number of times, e.g. \"your first app can expose objects using an HTTP based API\" and \"expose this datamodel class through a REST API\". What does \"expose\" mean? Is there a particular action associated with it, or is it an abstract part of design?"} {"_id": "203847", "title": "Need thoughts on a course curriculum for an entry-level programmer", "text": "Would love to know your thoughts on a curriculum that will be developed for people that want to get started in the programming field. The goal of the course is to ensure that its students have an understanding of the theory and skills required to be a programmer, and enough skills to get entry-level jobs upon graduation. Curriculum is: * 400 hours of instruction plus self-study. * 40 hours of practicum **1\\. Foundation** a. Clarity b. Detail orientation c. Communication d. Compartmentalization/abstraction e. Typing f. Memory **2\\. Theory** a. Lexical analysis b. Parsing c. Optimization d. Hierarchical thinking e. Scope f. Type systems **3\\. Computer Science** a. Data structures (ex. sorting, searching, data structure traversal) b. Recursion c. Language structures (ex. arrays, linked lists, dictionaries, etc) **4\\. Systems Architecture** **5\\. Operating System Theory** a. POSIX b. IO c. Memory allocation **6\\. Object Oriented Programming** a. Java b. PL/SQL * * * Would love to know: 1. Is the above curriculum laid out clearly? If not, how would it be best to change it? 2. Are there any crucial components being missed from the above outline? 3. Should any components be removed from the above outline (considering the timeline)?
Perhaps it's too detailed at certain points and is not in balance with the rest of the content?"} {"_id": "54302", "title": "Is using SVN for development and CM a bad practice?", "text": "I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our Configuration Management (CM) tool. I thought using SVN for development at the same time was OK since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the \"pure SW\" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and back-up/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks. * * * **Edit-** during my research I found the following site which may give some more reasons against using SVN for CM. Are the author's points valid? I feel like this is aligned with what my colleague was inferring... Link: SVN isn't CM"} {"_id": "54301", "title": "Different ways of solving problems in code", "text": "I now program in C# for a living, but before that I programmed in Python for 5 years. I have found that I write C# very differently than most examples I see on the web. Rather than writing things like: foreach (string bar in foo) { //bar has something done to it here }
I guess my question(s) are: If you've seen a situation like this, how have you dealt with it? What tipped the balance either way? What type of effort did it take? If you were to do it over, what would you do differently ... ? Thanks! KM"} {"_id": "168494", "title": "How important is positive feedback in code reviews?", "text": "Is it important to point out the **good** parts of the code during a code review, and the reasons why they are good? Positive feedback might be just as useful for the developer being reviewed and for the others that participate in the review. We are doing reviews using an online tool, so developers can open reviews for their committed code and others can review their code within a given time period (e.g. 1 week). Others can comment on the code or on other reviewers' comments. Should there be a balance between positive and negative feedback?"} {"_id": "245588", "title": "Where to put format validation in a CQRS \"stylish\" domain model?", "text": "It feels right to put format validation inside the domain objects (VOs or entities) because it is the natural place for high cohesion and the domain knows best what every domain description/attribute/property means. Many DDD practitioners and book authors (Vaughn Vernon, Dan Bergh - and even Eric Evans, though on a different aspect, authorization) suggest designing the domain model in a way that reflects and enforces a proper state of the business. I agree that format validation (i.e., the address property in an EmailAddress VO should be max ~250 chars and should match a regex) should be implemented inside the domain objects. But these validation checks work best with a domain model that will be used for both state changes and state queries. What about CQRS, where the command side (where the domain model is) has little or no knowledge about the read side? Should one reimplement the same input format checks on the query side of the application? In CQRS there are some common responsibilities for both the command and query sides (like input format validation), so the usual format validation implementations that most DDD practitioners suggest will be duplicated. How should one deal with this, without making the domain model (on the command side) anemic and without leaving it unaware of the state of the attributes it will hold? An example: there might be a feature that would let a conference owner schedule a conference and change the number of available seats. But because the conference room is not big enough, the maximum allowed number of seats would be 100. So a business rule would be: a conference owner cannot add more than 100 seats for a scheduled conference (also, a minimum of 1 seat is needed for the conference to be in a valid state). So a method on a ConferenceAR would be: changeNumberOfAvailableSeats(numberOfAvailableSeats) { if(!isNumber(numberOfAvailableSeats) || numberOfAvailableSeats > 100 || numberOfAvailableSeats < 1) { throw new DomainException ... } // Change the number of available seats ... } On the query side there might be a UI page where you can find all the scheduled conferences that have a certain number of available seats. So again, on the server's query side of the application, there must be a format validation that will check whether the requested number of available seats is a number and is in the range 1-100. This query might be needed by someone who wants to reserve a certain number of seats for a conference. 
Also, conferences might be on the same topic, so one could see a conference with the same topic many times and choose the cheaper one or the one with more seats available. So again, should one reimplement the format validation on both the query and command sides, or is there another solution? P.S.: there are many format validation duplicates (on both the command and query sides), like the email format validation, or a currency format validation (if the currency is: USD || CAD || AUD etc...) **P.P.S.: Another question that popped up: Does the query side of the application require any input format validation? If the input is not in a valid format it will return nothing - the query found no data related to the request. I see that format validation on the query side is purely a security measure (i.e. for buffer overflows). So is the input format validation really required on the query side?**"} {"_id": "163445", "title": "How to make a PHP function triggered automatically at a user defined time", "text": "I am developing an internal system for a company with PHP using the Zend framework. I need one of its functions to execute at a time specified by the user. My research on this matter turned up several ways of doing this using cPanel cron jobs and setting up scheduled tasks on the server. But in this scenario I don't have cPanel and I already use scheduled tasks. My challenge is to provide an interface for the user to specify the time to trigger the function."} {"_id": "94754", "title": "How do I convince my teammates that we should not ignore compiler warnings?", "text": "I work on a huge project (more like a tangled up combination of dozens of mini projects that can't be easily separated due to poor dependency management, but that's a different discussion) in Java using Eclipse. We've already turned off a number of warnings from compiler settings and the project still has over 10,000 warnings. I'm a big proponent of trying to address all warnings, fixing all of them if possible, and for those that are looked into and deemed safe, suppressing them. (Same goes for my religious obsession with marking all implemented/overridden methods as @Override). My biggest argument is that generally warnings help you find potential bugs during compile time. Maybe 99 out of 100 times the warnings are insignificant, but I think the head scratching it saves the one time it prevents a major bug makes it all worth it. (My other reason is my apparent OCD with code cleanness). However, a lot of my teammates don't seem to care. I occasionally fix warnings when I stumble across them (but you know it's tricky when you touch code written by a co-worker). Now with literally more warnings than classes, the advantages of warnings are very much minimized, because when warnings are so commonplace, nobody will bother looking into all of them. How can I convince my teammates (or the powers that be) that warnings need to be addressed (or suppressed when fully investigated)? Or should I convince myself that I'm crazy? Thanks (P.S. I forgot to mention that what finally prompted me to post this question is that I sadly noticed that I'm fixing warnings more slowly than they are produced)"} {"_id": "122444", "title": "Controllers in CodeIgniter", "text": "I am a little bit new to the CodeIgniter framework and this is my first project with it. During a chat on Stack Overflow somebody said that we need to make controllers as tiny as possible. 
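For the CQRS validation question above, one commonly suggested direction is to pull each format rule into a tiny value object that both sides can construct: the command model consumes it, and the query handler parses its request parameter through it, so the rule lives in exactly one class. A minimal sketch in Java; SeatCount and its method names are hypothetical stand-ins for the question's 1-100 seat rule:

    // Hypothetical value object: the 1-100 seat rule exists in exactly one place.
    public final class SeatCount {
        private final int value;

        private SeatCount(int value) {
            this.value = value;
        }

        // Used by the command side (e.g. ConferenceAR) and by query-side input parsing alike.
        public static SeatCount of(int value) {
            if (value < 1 || value > 100) {
                throw new IllegalArgumentException("seat count must be in the range 1-100");
            }
            return new SeatCount(value);
        }

        public int asInt() {
            return value;
        }
    }

With that in place, changeNumberOfAvailableSeats(SeatCount seats) needs no inline checks, and the read side rejects malformed queries before touching its model.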
Currently I have a default controller named `home` with 1332 lines of code (and increasing) and a model named `Profunction` with 1356 lines of code (and increasing). The controller class has about 46 functions in it, as does the model class. I thought that CodeIgniter could handle large controllers or models well; are there any problems, performance issues, or security issues regarding this?"} {"_id": "122440", "title": "How do regular expressions actually work?", "text": "Say you have a document with an essay written. You want to parse this essay to only select certain words. Cool. Is using a regular expression faster than parsing the file line by line and word by word looking for a match? If so, how does it work? How can you go faster than looking at each word?"} {"_id": "62216", "title": "How does a programmer tell a business analyst that she isn't right?", "text": "I implemented some functionality, then a business analyst created a defect, saying that some message text was wrong. I then reviewed the spec again and found that I was right. I posted a spec excerpt and closed the defect. Then she reopened the defect, saying that there was a mistake in the spec, that I shouldn't implement it mindlessly, and that I should escalate this particular thing to her. How should I respond?"} {"_id": "62217", "title": "Interview question: how long do you estimate it would take you to learn Java?", "text": "At my last interview (it was a phone interview) I was asked: \"In what time period can you learn Java?\". I answered that I believe that in 2-3 months I would be able to write good code for non-fancy/regular applications. After that I observed that the employer took a long pause and switched to other questions. Now I'm asking you: what would you have answered if you were in my place? PS: I haven't worked a lot with Java (2 weeks), so I don't think that a person who says something like \"I can learn Java in 2 days\" is being fair with him/herself."} {"_id": "42181", "title": "Tester-Developer communication", "text": "While a lot is written about developer-developer, developer-client, and developer-team manager communications, I couldn't find any text which gives guidelines about tester-developer communication and relations. Whether testers and developers are separate teams or in the same one (in my case, I am a lone tester in an agile development project), I have the belief that how testers are perceived is extremely important in order for testing to be well-accepted, and to serve its goal in enhancing the quality of the project (for example, they should not be viewed as a police force). Any advice, or studies about how a tester should communicate with developers? **Update**: Thank you all for your answers. They all confirmed what I had in mind. As for now, my team was very receptive of my role and we ended up making real progress. I could have chosen more than one as the answer but I had to make my decision."} {"_id": "210503", "title": "Is seniority/paygrade an important factor for effective QA members?", "text": "As a member of our company's QA team, I frequently get entirely unenthusiastic feedback from developers in their responses to test results in our agile, web-based software-as-a-service shop. Most of our testing is manual, since automated testing doesn't really make sense for us right now, and developers are usually reluctant to listen to any change suggestions beyond those that prevent JavaScript/500 errors. 
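On the regular-expression question above: the usual answer is that a regex engine does not search word by word at all. It compiles the pattern into a state machine and makes one transition per input character, so a single pass over the text suffices. A toy sketch of the idea, with a hand-written automaton for the pattern "ab*c" rather than a real pattern compiler:

    // Toy deterministic automaton for the pattern "ab*c": one table lookup per character.
    public class TinyDfa {
        // States: 0 = start, 1 = saw 'a' (and any 'b's), 2 = accept, -1 = dead.
        static int step(int state, char c) {
            switch (state) {
                case 0: return c == 'a' ? 1 : -1;
                case 1: return c == 'b' ? 1 : (c == 'c' ? 2 : -1);
                default: return -1;
            }
        }

        static boolean matches(String input) {
            int state = 0;
            for (int i = 0; i < input.length() && state != -1; i++) {
                state = step(state, input.charAt(i));
            }
            return state == 2;
        }

        public static void main(String[] args) {
            System.out.println(matches("abbbc")); // true
            System.out.println(matches("ac"));    // true
            System.out.println(matches("abx"));   // false
        }
    }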
I understand that fixes/changes require work, and our developers are rarely short on work to do, but I don't think developers respect QA's input. Unfortunately, our product owners are vacant: acceptance testing doesn't exist, and user stories are usually only one sentence long and don't provide the developer with much to go off of. There is no other feedback mechanism to development other than from customers x weeks later, who aren't designers/developers either, of course, and whose suggestions are all over the board. I am technically competent, at worst, and am capable of simple development on our LAMP stack, and I feel confident that developers respect my knowledge. However, I have for the most part given up on feedback beyond that which prevents critical errors - those which affect data integrity or bottom-line functionality. This has raised the question of whether seniority or pay grade is a significant factor in how seriously developers value QA's input. In our case, where we don't do automated testing and QA members likely don't have as much technical expertise, it kind of makes sense that we make less than developers (between 60-70%, depending on time in grade). I don't believe in the argument that the opinion of the team member with the biggest paycheck is the most important; however, I can imagine how it's difficult to take feedback from team members who have a year or two less experience, are not as technically knowledgeable, and make noticeably less. In the end the best idea should win, but unfortunately that might be decided after the enhancement has been in production for several months, and users love it or hate it."} {"_id": "179390", "title": "What are the advantages of Scala's companion objects vs static methods?", "text": "Scala has no _static_ keyword, but instead has similar functionality through companion objects. Behind the scenes the companion objects are compiled to classes that have static methods, so all this is syntactic sugar. What are the advantages of this design choice? Disadvantages? Do other languages have similar constructs?"} {"_id": "179393", "title": "Review tests first", "text": "I am going to adopt TDD in our team and one of the ideas I have is to review tests first. So one would write interfaces, mocks, and tests first, and submit them for a code review; once the interfaces and tests (think specification) are approved, the actual implementation can be written (theoretically, it can be done by another developer). I wonder how viable this idea is?"} {"_id": "145989", "title": "CoffeeScript translates to JavaScript. Is there something like it for C?", "text": "> **Possible Duplicate:** > Is there a language that transcompiles to C with a better syntax? There are many language implementations which compile to C. However, most of them have some language-specific runtime requirement. For example, Gambit-C and Vala both compile to C. Gambit-C depends upon a garbage collector. Vala depends upon the GObject system. Are there languages which compile to C which have no dependencies beyond standard C libraries such as libc?"} {"_id": "143729", "title": "Readability of || statements", "text": "On HTML5 Boilerplate they use this code for jQuery: Any suggestions where to start? For Windows, I suppose Microsoft Windows Script Interfaces - Introduction. What about Linux and Mac? On the way there, I'm also trying to figure out how to make Lhogho into a server-side language, rather like PHP or ASP, viz the Lhogho Preprocessor. I figure that'll be slightly easier. 
Maybe."} {"_id": "44090", "title": "how to deal with parallel programming", "text": "I know that parallel programming is a big resource in computer graphics, with moder machines, and mayebe a computing model that will be grow up in the near future (is this trend true?). I want to know what is the best way to deal with it. there is some practical general purpose usefulness in studying processor n-dimensional mesh, or bitonic sort in p-ram machines or it's only theory for domain specific hardware used in real particular signal elaborations of scientific simulations? Is this the best way to acquire the know how for how to become acquainted with cuda or opencl? (i'm interested in computer graphics applications) and why functional programming is so important to understand parallel computing? ps: as someone has advice me i have forked this discussion from http://stackoverflow.com/questions/4908677/how-to-deal-with-parallel- programming"} {"_id": "95008", "title": "How to Improve this socket over TCP-IP system", "text": "Im new to Socket programming and im building a Socket over TCP-IP system with this architecture: ![system](http://i.stack.imgur.com/N2txX.png) **Server:** Basically a TCPIP Listener that waits for connection from the clients. **Client:** It knows the server Ip and try to connect, when the connection is set i can send commands from another client that acts like the ADMIN. In other words its like a chat system adapted to this use case. Once the client receive the commands form the admin console it pass it to the LED display connected by Serial Port. My question is: If want to remove those clients(Laptop, PC) from the system, and connect directly to the led Display, how can i do it? What happen to the Client logic? Can i send the commands from the server directly to the led display?"} {"_id": "123716", "title": "What are the software technologies behind real time web apps?", "text": "What are some of the software technologies behind \"real-time\" web apps, like Twitter, Trello, or even Gmail? I know there's a webserver and probably a database on the backend, but what are the software pieces that make for that rich web interaction that I'm seeing more of today?"} {"_id": "123714", "title": "What is a good code pattern for single retry then error?", "text": "I am writing a routine which has the following form: TRY A IF no success, B IF no success, RETRY A IF no success, throw error It's not trivial to extract either A or B into it's own routine, so **what is most simple structure that will allow me to retry A without code duplication?** Currently, I have a `do..while` that allows `N` retries: int retries = 1; do { // DO A if ( /*success*/ ) { break; } else if (retries > 0) { // DO B if ( /*success*/) { break; } } else { // throw Error } } while (retries-- > 0); This is ugly and not ideal as it implies that I might want to ever retry more than once, which I don't. I will have to use a loop, but there has to be a more simple way that I'm not seeing. For context, this is code generated in Java, executing SQL statements to try an `UPDATE` first, then if no entry to update is found, `INSERT`, and if that command fails (concurrency, already created), try `UPDATE` again."} {"_id": "53423", "title": "How to teach a class in management information systems without access to computers?", "text": "I teach evenings in a prison. The students need a computer course in management information systems to fulfill a degree requirement. 
There are no computers or calculators available and I am not allowed to bring equipment into the prison. Most of what I've found on the internet regarding this question is created for young children. I need something for adults."} {"_id": "216590", "title": "Design pattern: static function call with input/output containers?", "text": "I work for a company in a software research department. We use algorithms from our real software and wrap them so that we can use them for prototyping. Every time an algorithm interface changes, we need to adapt our wrappers accordingly. Recently all algorithms have been refactored in such a manner that instead of accepting many different inputs and returning outputs via referenced parameters, they now accept one input data container and one output data container (the latter is passed by reference). The algorithm interface is limited to a static function call like this: class MyAlgorithm{ static bool calculate(MyAlgorithmInput input, MyAlgorithmOutput &output); } This is actually a very powerful design, though I have never seen it in a C++ programming environment before. Changes in the number of parameters and their data types are now encapsulated and they don't change the algorithm call. In the latest algorithm I developed, I used the same scheme. Now I want to know if this is a popular design pattern and what it is called."} {"_id": "206473", "title": "How much does the data model affect scalability and performance in so-called \"NoSQL\" databases?", "text": "You can't ever have a talk about so-called \"NoSQL\" databases without bringing up the CAP theorem (Consistency, Availability, Partition tolerance: pick two). If you have to pick, say, between MongoDB (Partition, Consistency) and CouchDB (Availability, Partition), the first thing you need to think about is \"Do I need correct data or do I need access all the time?\". Those new databases were _made_ to be partitioned. But what if I _don't_ partition? What if I just think it's pretty cool to have a Key/Value, Column, Document, or whatever database instead of a relational one, and just create one server instance and never shard it? In that case, wouldn't I have both availability and consistency? MongoDB wouldn't need to replicate anything, so it would be available. And CouchDB would have only one source of data, so it would be pretty consistent. So that would mean that, in that case, MongoDB and CouchDB would have little difference in terms of use case? Well, except of course performance and the API, but that would be more like choosing between PostgreSQL and MySQL than having two fundamentally different sets of requirements. Am I right here? Can I change an AP or CP database to an AC one by not creating more than one instance? Or is there something that I am missing? Let's ask the question in reverse. What if I take a relational database, let's say MySQL, put it in a master/slave configuration, and don't use ACID transactions? If I require that any write be synchronized to the slave immediately, wouldn't that make it a CP database? And what if I synchronize it at some predefined interval, and it doesn't matter if a client reads stale data from a slave - wouldn't that make it an AP database? Wouldn't that mean that if I give up ACID compliance I can still use the relational model for a partitioned database? In essence: is scalability about what you are ready to give up in the CAP theorem, more than the underlying data model? Does having a Column, Document, or Key/Value model give a boost to scalability over a relational model? 
Could we design a relational database built from the ground up for partition tolerance? (Maybe one already exists.) Could we make a NoSQL database ACID compliant? Sorry, it's a lot of questions, but I have read a lot about NoSQL databases recently and it seems to me that the biggest benefit of using them is that they better fit the \"shape\" of your data, rather than just partitioning, CAP, and giving up ACID compliance. After all, not everyone has so much data that they need to partition it. Is there a performance/scalability benefit to not using the relational model _before I even think about partitioning my data?_"} {"_id": "10230", "title": "Is it out of line to give unsolicited constructive criticism to a programmer?", "text": "I recently started work at a new office that uses a proprietary program written by a solo developer. He's occasionally around as part-time tech support, but the company has signed off on this software and he's no longer being paid to develop it. As a user of his software there are many issues that leap out to me as a source of concern: * very simple to directly view the DB as a low-privilege user * passwords stored as plaintext * dictionary admin passwords * app uses the DB root account * DB doesn't meet 1NF (e.g. appointment1, appointment2, etc.) I feel like these are serious issues and that if I were him I'd want them pointed out to me, but I'm not a qualified programmer (I'm a social worker) and I don't know if it would be rude of me to just buttonhole him and start blabbering about salted hashes and normal forms, especially when this is no longer a paid task of his. Is it out of line? If not, how would you broach it? **Edit - more info prompted by comments:** * The application holds sensitive data. * It's for internal use, but it's running on lots of machines in several cities and many visitors come through our offices. * My concerns are for the developer's future projects as well as this specific one."} {"_id": "255489", "title": "Technique for distorting part of an image to take features from another image?", "text": "I have a project for a community website where people can alter/manipulate photos that they upload, in a particular way. For example, a person could select various contours and control points on their nose, and do the same on an image of another person's face, and hence morph their own photo so that it distorts the nose to match someone else's. The setting up of a profile-based site, where images can be uploaded, is under the belt, but I was wondering if someone can point me in the right direction for the image manipulation. I have been searching for scripts/libraries - chiefly PHP and Python - but have gotten nowhere. I tried searching for composite picture scripts, but what I found just stuck one picture onto another. Like cutting and pasting one nose onto another picture, without blending it in. I tried searching for morphing scripts, but I just found scripts that turned one image into another image. I tried searching for image superposition, but that just gave composite image scripts. Neither of the above is what I am trying to do. What is the technique for distorting part of an image to take features from another image?"} {"_id": "1745", "title": "What's the most absurd myth about programming issues?", "text": "To put it another way... What is the most commonly held and frustrating misunderstanding about programming you have encountered? Which widespread and longstanding myths/misconceptions do you find **hard for programmers to dispel/correct**? 
Please explain why this is a myth."} {"_id": "255481", "title": "Modeling e-Commerce Content Relations with Entity Framework", "text": "I'm building an e-commerce system in my spare time to better learn the ins and outs of ASP.NET MVC, Web API, and Entity Framework. One of the things I'm struggling with is how to model the relations for the content pieces (categories, products, and webpages). 1. Categories can have 1 or no parent category. 2. Products can have 0 or more categories, but 1 may be specified as primary (for default breadcrumbs). 3. Webpages can have 0 or more categories, but 1 may be specified as primary (for default breadcrumbs). 4. I want to be able to specify the default sort order for each within the parent category. This tells me I'm going to need a many-to-many table with a sort column. 5. I want to be able to efficiently count and/or display products further down the category tree. For example: Using categories **Apple => MacBook** and **Apple => iMac**, I want to be able to display products from MacBook and iMac on the Apple category. Since categories, products, and webpages all share some similar properties like Name, Page Title, Meta Description, Main Content, and Published, I created an abstract base class called CatalogPage that they all inherit from. Since category, product, and webpage all have an optional default/parent category, I also included nullable **DefaultCategory/DefaultCategoryId** properties on the CatalogPage class. I assume that if I want #5 to be efficient I'm going to have to store some redundant data. For example, store a MacBook Air 11\" as a child of both Apple and MacBook. What are some good ways to model the rest of the relationships?"} {"_id": "119181", "title": "What type of encoding can I use to make a string shorter?", "text": "I am interested in encoding a string I have, and I am curious if there is a type of encoding that can be used that will only include alphanumeric characters and would preferably shorten the number of characters needed to represent the string. So far I have looked at using Base64 encoding to do this, but it appears to make my string longer and sometimes includes `==`, which I would like to avoid. Example: > test name|120101 becomes > dGVzdCBuYW1lfDEyMDEwMQ== which goes from 16 to 24 characters and includes non-alphanumeric characters. Does anyone know of a different type of encoding that I could use that will achieve my requirements? Bonus points if it's either built into the .NET Framework or there exists a third-party library that will do the encoding."} {"_id": "245683", "title": "update methods in simple factories", "text": "I have a simple factory class with differently named methods which create the same object, but differently. These created objects are persisted to the DB. They are then retrieved from the DB elsewhere and modified differently based on different conditions. I want to keep the conditional logic inside the service class and centralize this object modification, just like I did with object creation. Is it valid to add update* methods inside a simple factory? Is there another pattern I should be using instead? In an ideal world I could add these update methods to the DAO itself, but I am using JPA repositories for the DAO, so it is not straightforward to add the update methods there. These Message object updating methods have to live somewhere else and I cannot decide where. 
public class Message { // has some properties } public class MessageFactory { public Message createMessageForConditionA() {} public Message createMessageForConditionB() {} public Message createMessageForConditionC() {} public Message createMessageForConditionD() {} // Is this okay? public Message updateMessageForConditionE() {} public Message updateMessageForConditionF() {} } public class MessageService { @Autowired private MessageDao messageDao; public Message createMessage(String condition){ Message message = null; if(condition.equals(A)) { message = createMessageForConditionA(); } else if(condition.equals(B)) { message = createMessageForConditionB(); } else if(condition.equals(C)) { message = createMessageForConditionC(); } else if(condition.equals(D)) { message = createMessageForConditionD(); } messageDao.save(message); return message; } public Message updateMessage(String messageKey, String condition) { Message message = messageDao.findByMessageId(messageKey); if(condition.equals(E)) { message = updateMessageForConditionE(); } else if(condition.equals(F)) { message = updateMessageForConditionF(); } messageDao.save(message); return message; } }"} {"_id": "252134", "title": "What is the reasoning behind these design choices?", "text": "In the ReactJS tutorial you are guided through building a commenting system. It's a very useful tutorial for understanding the ReactJS library, but there are some design choices I can't fully understand. So let's start from the initial setup: The system is composed of four components: CommentBox, CommentList, Comment and CommentForm. The hierarchy is the following CommentBox CommentList Comment CommentForm The tutorial then implements all the data-fetching logic in the CommentBox component, which then passes this data to CommentList, which renders it in the DOM. **Q1: Why? Is it an arbitrary choice or is there some reasoning behind it?** Then, when the tutorial comes to the logic of new comment submission, it states: > When a user submits a comment, we will need to refresh the list of comments to include the new one. It makes sense to do all of this logic in CommentBox since CommentBox owns the state that **represents the list of comments**. **Q2: Why does CommentBox represent the list of comments when there is another component representing it (CommentList, as the name seems to suggest)?** Further, when implementing the submission logic, the code handling the comment POST to the server is part of the CommentBox component, wrapped in a method which is passed to CommentForm as a callback. So when the CommentForm form is submitted, it calls this CommentBox function to perform the server request, while the rest of the logic is implemented in CommentForm's functions. **Q3: Why isn't all submission logic implemented in CommentForm?** _EDIT: as I stated in a comment, \"I already understood the application architecture and the concept of box. I'm arguing about the \"individual implementation\" details and **why some choices are preferable to others**\"_"} {"_id": "252133", "title": "UI Applications and operations in background threads", "text": "I am not really sure about the best way to deal with operations executed in background threads in an application I am writing. I am writing it in C# and I am following the MVVM design pattern. Some user actions are expensive and I'd like to run them in a background thread, which is a quite straightforward approach. 
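On the simple-factory question a little further above (245683): one hedged alternative to an ever-growing list of create*/update* methods is to register one small policy per condition, so the service loses its if/else chains and a new condition becomes a new map entry rather than a new method. A sketch in Java with hypothetical names:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.UnaryOperator;

    class Msg {
        String body; // stand-in for the real Message properties
    }

    public class MessageUpdatePolicies {
        // One modification policy per condition; creation policies can be registered the same way.
        private final Map<String, UnaryOperator<Msg>> policies = new HashMap<>();

        public MessageUpdatePolicies() {
            policies.put("E", m -> { m.body = "updated for E"; return m; });
            policies.put("F", m -> { m.body = "updated for F"; return m; });
        }

        public Msg update(Msg message, String condition) {
            UnaryOperator<Msg> policy = policies.get(condition);
            if (policy == null) {
                throw new IllegalArgumentException("unknown condition: " + condition);
            }
            return policy.apply(message);
        }
    }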
The worker thread then modifies my model, which notifies the ViewModel that something changed, and finally the change gets propagated to the View. My problem is that the chain of events triggered would cause the modification of the View from the background thread, when it is only allowed from the UI thread. I know I can use the UI thread dispatcher but I am not really sure where (could the view-model be the right place?), so I'd like to know if you follow any design pattern to address this situation."} {"_id": "252132", "title": "Is this approach to CSS correct?", "text": "Reading the SASS basic features on their website, I stumbled upon the `@extend` feature. The example they give is the following: .message { border: 1px solid #ccc; padding: 10px; color: #333; } .success { @extend .message; border-color: green; } .error { @extend .message; border-color: red; } .warning { @extend .message; border-color: yellow; } That compiles to .message, .success, .error, .warning { border: 1px solid #cccccc; padding: 10px; color: #333; } .success { border-color: green; } .error { border-color: red; } .warning { border-color: yellow; } Thus with this snippet of HTML
    <div class=\"success\">hello world</div>
you style your element with the properties of `.message` and `.success`. However, I feel this way of writing HTML (and CSS) is very poor in terms of semantics (you don't explicitly see from the markup above that the element _also_ has the styling of `.message`). Shouldn't the snippet above be
    <div class=\"message success\">hello world</div>
? I feel it to be more descriptive and more easily reusable. For example, I could assign the `.success` class (as written in the above CSS but without the `@extend`) to other elements which are not messages, thus using the class as a modifier, in a BEM fashion. **So my question is, is the SASS example approach more desirable than mine?**"} {"_id": "245686", "title": "How can I use guice to replace code dependent on service locator implementation?", "text": "Consider I have a service called FileSystem, and that this FileSystem is used by various classes throughout the application. Typically, the service is acquired via some static class method ServiceLocator.getService(FileSystem.class) Now consider I have a class Game that depends on the FileSystem object to open up some UI resources. The game will also construct a world, using a WorldFactory. The WorldFactory also requires the FileSystem service. Here is where I encounter the problem: I construct a new instance of WorldFactory, which requires an instance of FileSystem to be injected into it. However, Guice will not inject into objects it has not constructed. So WorldFactory is left without a FileSystem injected into it. I could pass a FileSystem object to the WorldFactory, but then in essence Guice hasn't solved any of my problems, and I have come to realize I haven't been using Guice correctly because I can do all of this without it. You then get an odd situation where the constructor of a game entity requires that you pass an instance of the AudioSystem to it, through which it can play audio. It just gets very messy with all these objects/services being explicitly passed around. Thanks. I'm still new to Guice and trying to figure out how to use it."} {"_id": "178590", "title": "How can I give my client \"full access\" to their PHP application's MySQL database?", "text": "I am building a PHP application for a client and I'm seriously considering WordPress or a simple framework that will allow me to quickly build out features like forums, etc. However, the client is adamant about having \"full access\" to the database and the ability to \"mine the data.\" Unfortunately, I'm almost certain they will be disappointed when they realize they won't be able to easily glean meaningful insight by looking at serialized fields in wp_usermeta, etc. One thought I had was to replicate a variation on the live database where I flatten out all of those ambiguous and/or serialized fields into something that is then parsable by a mere mortal using a tool as simple as phpMyAdmin. Unfortunately, the client is not going to settle for a simple backend dashboard where I create the custom reports for them, even though I know that would be the easiest and most sane approach."} {"_id": "221864", "title": "Should I create a \"placeholder\" parent case, or set up Areas?", "text": "I'm using FogBugz for project management, and I'm unsure how to structure my cases. I have different features which need implementing as part of this project. 
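A minimal sketch for the Guice question above: instead of new-ing WorldFactory yourself, let Guice construct it too, so the FileSystem arrives through its @Inject constructor and nothing is passed around by hand. The classes below are stand-ins for those described in the question:

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;
    import com.google.inject.Singleton;

    class FileSystem {}

    class WorldFactory {
        private final FileSystem fileSystem;
        @Inject WorldFactory(FileSystem fileSystem) { this.fileSystem = fileSystem; }
    }

    class Game {
        private final WorldFactory worldFactory;
        @Inject Game(WorldFactory worldFactory) { this.worldFactory = worldFactory; }
    }

    public class GameBootstrap {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new AbstractModule() {
                @Override protected void configure() {
                    bind(FileSystem.class).in(Singleton.class);
                }
            });
            // Guice wires the whole graph; no ServiceLocator and no hand-passing.
            Game game = injector.getInstance(Game.class);
        }
    }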
For example, if I have a new feature to include autocomplete on a search text box, this would have cases such as: * Method for returning search terms * Front-end changes to request method * Front-end changes to display search terms Should I create these as **subcases** under an \"Autocomplete\" case, or should they be created as top-level cases under an **area** called \"Autocomplete\"?"} {"_id": "178596", "title": "Looking for best practice for version numbering of dependent software components", "text": "We are trying to decide on a good way to do version numbering for software components which depend on each other. Let's be more specific: Software component A is firmware running on an embedded device and component B is its respective driver for a normal PC (Linux/Windows machine). They communicate with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks if the firmware version is compatible with the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: We plan to follow the widely used _major.minor.patch_ version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention will lead to the following example situation: The current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, which will cause the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say at 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below: * Can you suggest a best practice for handling the issues described above? * Do you think our \"custom\" convention is good? * What changes would you apply to the described convention?"} {"_id": "178597", "title": "In Subversion, how should I set up a new major version of my application?", "text": "I'm about to start work on a new version (version 4) of my commercial application. I use Subversion. Based on your experiences, mistakes, and successes, how would you recommend I set up the new version in Subversion? Here's some info: I intend to keep releasing critical updates in version 3 for some time after version 4 is released. However, all development of new features will be solely in version 4. In case it is relevant: I'm a solo developer on this product, and that is likely to remain the case. EDIT: I'm aware of SVN's tags and branches. I guess what I need is an optimal strategy for using tags and branches in my situation."} {"_id": "70971", "title": "I feel stuck in the center of Python, How to get past beginner", "text": "I really apologize if this doesn't follow the S.O. rules but I need a little help. I personally still classify myself as a beginner in Python, yet I've written a very small and, I'm sure, impractical program for my boss to use. 
I know I'm still a beginner because simple things still perplex me, but every book I read for beginners honestly just rehashes what I already know, while every 'more advanced' book doesn't really allow me to learn: they depend on example files and I never really understand why they built said function or said class. So on to my question... Are there any recommendations for a book or ANYTHING that pushes me out of this stage? I've used Head First and normally they are really good, but my issue there is they have me backtracking just to move forward again. It worked for HTML but it's confusing in Python. Basically I think I need to build a program while following along. Again, I like Head First's style but I need something that isn't going to make me have to remember one thing just to forget it... For the record, I've checked into some O'Reilly books"} {"_id": "212227", "title": "Software Licenses: No Distribution and Private Selling Using Dual Licenses", "text": "I recently wrote a couple of WordPress themes in PHP and was wondering what license I should put on them. I don't mind users reusing my code, but I don't want them to be able to sell and redistribute my themes, as I want to retain that right. I heard somewhere that an _all rights reserved_ link would stop the distributing, etc... Is that true, or do I need to include another license and dual-license my themes? So to sum it up: I want to use a license to stop others from selling and distributing my themes, while at the same time letting others use the code if they want to."} {"_id": "11175", "title": "Have you tried Chronon the Java \"Time Travelling Debugger\" -- if so, is it worth trying and why?", "text": "First heard about it from Google Tech Talks, back in 2009 when it was called Silver Bullet: http://www.youtube.com/watch?v=LpfmKIxusZY It's now called Chronon... http://chrononsystems.com/ \"Sounds\" nice. Have you tried Chronon, the Java \"Time Travelling Debugger\" -- and if so, is it worth trying and why?"} {"_id": "28376", "title": "Is macro support in a programming language considered harmful?", "text": "The first abuse that comes to my mind in C is: #define if while But at the same time it is extremely handy and powerful when used correctly. Something similar happens with Common Lisp macros. Why don't all programming languages support macros like this, and what are the alternatives? Is it that they are considered harmful?"} {"_id": "104975", "title": "Website for code review", "text": "I know that we are all trying to improve the structure and style of our code, striving to be truly agile. Websites like Stack Overflow are great when you want an answer to a specific question, but what about when you want to get your work critiqued? Are there any communities out there that focus on code review or, even better, agile solutions, or better still focus on T/BDD solutions?"} {"_id": "105862", "title": "Online Code Reviews", "text": "> **Possible Duplicate:** > Website for code review Where can I post my code online so people can comment, make suggestions, and/or offer criticism? I tried 4chan but people lose interest fast if you don't say something really stupid. Ideas? My language can be located at http://github.com/tekknolagi/gecho/tree/testing"} {"_id": "77466", "title": "Code Review Services", "text": "> **Possible Duplicate:** > Website for code review As a sole developer on a Sharp Architecture project, I'd like to find a way to get some code review. How do people do this? Are there services that people can recommend? 
Edit: Specifically, I need code review for the overall project - a big task."} {"_id": "126295", "title": "How to get collaborators for an Android project?", "text": "How do I get people to want to collaborate on a project idea I have and actually contribute? I am a bit sensitive about my idea for a couple of reasons: 1. I don't want people to take my idea. 2. I have had a couple of people who wanted to help, but then work/family/school stopped them from having time, which made me less enthusiastic in the process. For now it's actually OK because it has forced me to become familiar with PHP and servers, and I have just been working on it on my own because I want to learn and it's fun, but at some point I will need others to help in order to make it robust and ready to distribute. How do I get collaborators when I need them? I feel like I should start figuring that out now. I have just been learning how to write software using mainly Stack Overflow, idiot guides and YouTube. Great for learning, but development is slow."} {"_id": "126292", "title": "Planning Development Projects", "text": "We are currently re-evaluating the way we manage, plan and run our projects (an area that I think we can massively improve upon), so I just wanted to get some ideas as to how other development teams go about starting a new web application project. At present, following the initial client meetings we produce a simple planning document that outlines what we intend to create for them (this usually contains screenshots of the various sections of the app and some detail about what each page will do). Once we have sign-off on this, this is what is used by the developers as a blueprint to develop from. I can't help but feel we are lacking here, as the client planning document simply isn't detailed enough and doesn't really show any of the business logic decisions made in the application. Following development, the application goes to testing, where the testers record bugs in an Excel spreadsheet. I feel this is also a very basic way of doing things; I have looked at web apps like Sifter and think this is how we should be doing things. We use TFS as our source control but the bug-tracking in there is overkill for what we need. So what I'd love to know from you guys is: what sort of documentation do you produce for projects, and what processes do you follow during planning and development?"} {"_id": "126293", "title": "SQL Server's Resource Governor - Pros and Cons", "text": "From what I have read about SQL Server Resource Governor, there really aren't any cons. I'm looking at implementing it in our infrastructure, where we host websites as well as a management system plus reporting. The management system requires data to be loaded, which has to be done out of hours because it's quite resource-intensive and has a knock-on effect on the websites. So, using the Resource Governor to split out the resources makes sense. Any thoughts?"} {"_id": "164175", "title": "Algorithm to calculate trajectories from vector field", "text": "I have a two-dimensional vector field, i.e., for each point `(x, y)` I have a vector `(u, v)`, where `u` and `v` are functions of `x` and `y`. This vector field canonically defines a set of trajectories, i.e. a set of paths a particle would take if it follows along the vector field. 
In the following image, the vector field is depicted in red, and there are four trajectories which are partly visible, depicted in dark red: ![trajectories](http://i.stack.imgur.com/u78WF.gif) I need an algorithm which efficiently calculates some trajectories for a given vector field. The trajectories must satisfy some kind of minimum denseness in the plane (for every point in the plane we must have a 'nearby' trajectory), or some other condition to get a reasonable set of trajectories. I could not find anything useful on Google on this, and Stack Exchange doesn't seem to handle the topic either. Before I start devising such an algorithm by myself: **Are there any known algorithms for this problem? What are their names, and which keywords should I search for?**"} {"_id": "228238", "title": "How to write an optimal LAN messenger software?", "text": "I am asking this question as an extension to the following question: https://superuser.com/questions/713409/how-to-message-any-user-on-your-lan I don't think I have a real answer. I found it quite difficult to get the `net send` command working on Windows 7, and even if I had, it would have been enabled only for admin users. So I have decided to write my own program to solve the problem. Using Python I can write a program to check for messages in a common file (which I can store on a common hard drive which we have on our LAN). So the program will write `from:192.168.23.44 to:192.168.23.45 Hello`. Assuming the program is installed on both machines, I can ask the machines to poll this file every few seconds and pop up a message if a message has been written for them, by searching for it in the `to:` field. If writing to a single file becomes an issue I will make multiple files which will act as inboxes for each of the computers on the LAN. However, this approach is very inelegant. I don't suppose internet messengers, Google Talk for instance, poll a server to check whether there are any new messages for the user. So my question is: what is the optimal way to write a LAN messenger program? Now that I think of it, I am looking for a minimalist approach. I will just put the program in the `Startup` folder of all the machines and it should come alive whenever any user logs in. I was just able to configure the machine to respond to the `ping` command by making a new rule for the ICMPv4 protocol (ref: http://answers.microsoft.com/en-us/windows/forum/windows_7-networking/how-to-enable-ping-response-in-windows-7/5aff5f8d-f138-4c9a-8646-5b3a99f1cae6). I am guessing that the answer lies in Python and the ICMPv4 protocol, but I need some direction."} {"_id": "223974", "title": "Are the IETF BCP 47 language tags defined as enums anywhere in the JDK?", "text": "Are the IETF BCP 47 language tags defined as enums anywhere in the JDK? For `Locale.forLanguageTag()` we pass values like `fr-FR`, `ja-JP`, etc. Are there any enums already provided by the JDK for it? Or should the developer write a custom enum for that? How should it be handled in an i18n application?"} {"_id": "199368", "title": "Git Branch Model for iOS projects with one developer", "text": "I'm using git for an iOS project, and so far have the following branch model: feature_branch (usually multiple) -> development -> testing -> master Feature branches are short-lived, just used to add a feature or bug fix, then merged back into development and deleted. Development is fairly stable, but not ready for production. 
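On the vector-field question above, the usual keywords are "streamline tracing" and numerical integration (forward Euler or Runge-Kutta); the denseness condition is addressed by evenly-spaced streamline placement algorithms such as Jobard and Lefer's, which seed new trajectories only where no existing trajectory is within a threshold distance. A minimal forward-Euler tracer, with the field supplied as a function:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BiFunction;

    public class StreamlineTracer {
        // Forward Euler: repeatedly step from (x, y) along the local vector (u, v).
        static List<double[]> trace(BiFunction<Double, Double, double[]> field,
                                    double x, double y, double step, int maxSteps) {
            List<double[]> path = new ArrayList<>();
            for (int i = 0; i < maxSteps; i++) {
                path.add(new double[] { x, y });
                double[] uv = field.apply(x, y);
                x += step * uv[0];
                y += step * uv[1];
            }
            return path;
        }

        public static void main(String[] args) {
            // Example field: a simple rotation, u = -y, v = x.
            List<double[]> path = trace((x, y) -> new double[] { -y, x }, 1.0, 0.0, 0.01, 1000);
            System.out.println("points traced: " + path.size());
        }
    }

Runge-Kutta (RK4) replaces the single field evaluation per step with four, trading speed for much better accuracy on curved trajectories.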
Testing is when we have a stable version with enough features for a new update, and we ship it to beta testers. Once testing is finished, it can be moved back into development or advanced into master. The problem, however, lies in the fact that we can't instantly deploy. On iOS, it can be several weeks between the time a build is released and when it actually hits users. I always want to have a version of the code that is currently on the market in my repo, but I also have to have a place to keep the current stable code to be sent for release. So: * where should I keep stable code * where should I keep the code currently on the market * and where should I keep the code that is in review with Apple, and will (hopefully) be put on the market soon? Also, this is a one-developer team, so collaboration is not totally necessary, but preferred because there may be more members in the future."} {"_id": "194393", "title": "Service as a Model in MVC", "text": "I have a PHP MVC application and a file table. I need to implement the functionality: mark all as read. The best solution for the code I found so far was to put the actual implementation in Model/Service/FileService.php. The problem is that the FileService class will get bloated really fast with different functionality. What would be a good solution to avoid bloated services? Should I create different services for each action?"} {"_id": "114250", "title": "You write the server, I write the client: Best practices for designing an API?", "text": "I'm working on a brand new application that involves a client and a server. Specifically, it's a native mobile app talking to a web server, using a custom API that we will define. I was hired to write the client and another guy (whom I've never met) has been hired to write the server. To get started we need to agree on an API. We both have very different perspectives on it, working with different technologies. Each of us might not even be sure what sort of small changes make things easier or harder for the other guy. I can't deliver a client if his server isn't working, and his server is pretty useless without a client. What are good strategies for dealing with arrangements like this? I know the answer usually boils down to \"it depends on your situation\", but if I can learn from anybody else's mistakes, I'd sure like to avoid making them again on my own."} {"_id": "114253", "title": "Which Microsoft qualification should a C# ASP.NET web developer work towards?", "text": "I've been working as a VB.NET ASP.NET web developer for the past 4 years, and I'm currently making the change over to C#. Apart from school/sixth form qualifications, I'm currently lacking any industry-related qualifications, and I'm keen to start working towards a Microsoft qualification. What would be the best-suited qualification for me to work towards? Any relevant links would be most appreciated."} {"_id": "49850", "title": "How to deal with \"software end-of-life\" situations?", "text": "When a vendor declares that they no longer intend to provide any support or services for a piece of software (and has stated the intent to exit the business, offering no upgrade paths), what kind of recourse is available to the customer? Please consider this from the _customer's_ viewpoint. The customer's IT staff will likely only consider the technical options, but there are likely non-technical options which the customer can pursue as well. 
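For the MVC service question above, one frequently recommended remedy for a bloating FileService is one class per use case (often called command, action, or interactor classes). The idea is language-agnostic; here is the shape sketched in Java with hypothetical names:

    // One use case, one class: the next feature becomes a new class, not a new method.
    interface FileRepository {
        void markAllRead(long userId);
    }

    class MarkAllFilesRead {
        private final FileRepository files;

        MarkAllFilesRead(FileRepository files) {
            this.files = files;
        }

        void execute(long userId) {
            files.markAllRead(userId);
        }
    }

The controller then depends only on the use cases it actually invokes, and no single service accumulates every file-related operation.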
Also, what kind of reasonable steps can be taken by the customer ahead of time to minimize disruption, such as in contract terms? Things I can think of: * Purchase spare hardware and set up a spare environment on which the software can continue to operate. * Various data export methods which do not require vendor involvement. (This can range from trivial techniques, such as examining the data stored in a commodity database backend, to more involved techniques, such as screen scraping or printing to image followed by re-scanning, etc.) * Parallel systems where staff will duplicate the old data into a new system manually or semi-automatically * Legal means, in case the vendor is in financial trouble (as in the case of source code escrow) Any other ideas? * Assuming that there is no \"circumvention\" involved (no DRM, no DMCA), is data recovery or reverse engineering legal/acceptable? _**Edited note:_** _It is a combination of several anecdotal, but real, stories. I am not directly involved in any of them. It is simply my desire to learn about how a \"software end-of-life\" situation is handled in general. It is not my intention to make the original story sound too \"difficult\" to solve._"} {"_id": "160418", "title": "Sencha Ext JS run time license", "text": "Do you need to buy a runtime license from Sencha if your application code is developed in Ext JS and deployed on a web server? http://www.sencha.com"} {"_id": "118588", "title": "Collaborate on UML online", "text": "Some friends and I are starting up a hobby programming project. I wonder if there is any good method/tool/service which lets us collaborate on UML diagrams online. This can be in any form (online direct edit, source-control style, etc.) as long as we can make changes independently of each other."} {"_id": "254999", "title": "How to let the outside world decorate my private field?", "text": "Imagine a simple `Controller` (as in process control) interface. I have some concrete classes, say `PIDController`, that implement it. I also have some decorator classes that extend these classes somehow, say `ITAETuningDecorator`. Now imagine that a `FloodGate` class has a private `Controller` field. I would like, from the outside and at runtime, to attach an `ITAETuningDecorator` to the `Controller` in `FloodGate`. How can I do it? I have no access to the field from the outside, and obviously the decorator needs a reference to the original controller to be built."} {"_id": "254998", "title": "Redundancy caused by polymorphism", "text": "I have two chat rooms; one has administration behaviour, and one doesn't. I have factored out all of the common code into a base chat room, but the `AdministerChatroom` behaviour I have pulled out into an interface called `IAdministrable`, and it sits on the base chatroom class. My issue is that because the interface is on the base class, which allows for the behaviour to be polymorphic, the `NonAdministratableChatroom` class must implement that behaviour even though it will do nothing. I have seen some developers put the interface on only one subclass - the class that has the behaviour, in this case the `AdministerChatroom` method - and when a chatroom is being used, they simply check for the existence of the `IAdministrable` interface, something like `if(Chatroom is IAdministrable){ // do something }`, which seems like a violation of the \"tell don't ask\" principle. 
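One hedged answer to the private-field decoration question above: don't expose the field, expose the wrapping. FloodGate accepts a function from Controller to Controller and rewraps its own reference, so the decorator still receives the original controller it needs. Sketched in Java with hypothetical signatures:

    import java.util.function.UnaryOperator;

    interface Controller {
        double output(double error);
    }

    class FloodGate {
        private Controller controller;

        FloodGate(Controller controller) {
            this.controller = controller;
        }

        // The outside world supplies only the wrapping; the field itself never escapes.
        void decorateController(UnaryOperator<Controller> decoration) {
            this.controller = decoration.apply(this.controller);
        }
    }

A caller would then write gate.decorateController(inner -> new ITAETuningDecorator(inner)), assuming the decorator takes the wrapped controller in its constructor.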
I seem to have two choices: I stick to the polymorphic strategy pattern, or I test the instance for an interface; the former seems preferable, but I am wondering if there are any other options? The code for the classes is as follows: public interface IAdministrable { void AdministerChatroom(); } public abstract class BaseChatRoom : IAdministrable { public void Close() { } public abstract void AdministerChatroom(); } public class AdministrableChatroom : BaseChatRoom { private readonly Administrator _administrator; public AdministrableChatroom(Administrator administrator) { _administrator = administrator; } public override void AdministerChatroom() { _administrator.DoSomethingSpecial(); } } public class NonAdministratableChatroom : BaseChatRoom { public override void AdministerChatroom() { // do nothing as this is not administrable } } public class Participant { } public class Administrator : Participant { public void DoSomethingSpecial() { // implement something special here } }"} {"_id": "160410", "title": "TDD: Mocking out tightly coupled objects", "text": "Sometimes objects just need to be tightly coupled. For example, a `CsvFile` class will probably need to work tightly with the `CsvRecord` class (or `ICsvRecord` interface). However, from what I learned in the past, one of test-driven development's main tenets is \"Never test more than one class at a time.\" Meaning you should use `ICsvRecord` mocks or stubs rather than actual instances of `CsvRecord`. However, after trying this approach, I noticed that mocking out the `CsvRecord` class can get a little hairy. Which leads me to one of two conclusions: 1. It's hard to write unit tests! That's a code smell! Refactor! 2. Mocking out every single dependency is just unreasonable. When I replaced my mocks with actual `CsvRecord` instances, things went much more smoothly. When looking around for other people's thoughts I stumbled across this blog post, which seems to support #2 above. For objects that are naturally tightly coupled, we should not worry so much about mocking. Am I way off track? Are there any downsides to assumption #2 above? Should I actually be thinking about refactoring my design?"} {"_id": "160411", "title": "What is a legitimate reason to use Cucumber?", "text": "I've worked on several contracts where the client used Cucumber, and I've often felt that the testing suite didn't really have a place in our stack. From what I understand, business analysts/non-technical coworkers write up the tests and the developers make the step definitions work. My problem with this approach is that the tests are never valid or terse enough to be used without rewriting the whole file."} {"_id": "118583", "title": "increasing productivity - mastering a language vs. selecting efficient tools", "text": "I'm looking for advice from experienced developers on this question. In my work there's a need for a lot of one-off code. It's tempting to just dip into the right Python/Perl library calls to do these little tasks as quickly as possible. I used to be of the philosophy \"use the most efficient tool for the task\". However, I'm afraid that over time this means I won't develop deeper experience and expertise, so recently I've been forcing myself to use C++ (with Boost and the STL) for everything, even if I could do the task in Python or Perl much more quickly. I'm hoping that in the long run, this will make me a more productive developer. 
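For the chat-room question above, a third option besides the two listed is composition: model administration as an optional capability of a single Chatroom class, which keeps "tell, don't ask" intact and removes the do-nothing override. The question's code is C#; the same shape sketched in Java:

    import java.util.Optional;
    import java.util.function.Consumer;

    class Administrator {
        void doSomethingSpecial() { /* something special here */ }
    }

    class Chatroom {
        private final Optional<Administrator> administrator;

        Chatroom(Administrator administratorOrNull) {
            this.administrator = Optional.ofNullable(administratorOrNull);
        }

        // Callers hand over the action; non-administrable rooms silently skip it.
        void forAdministrator(Consumer<Administrator> action) {
            administrator.ifPresent(action);
        }
    }

A caller writes room.forAdministrator(Administrator::doSomethingSpecial) without ever type-checking the room.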
{"_id": "160410", "title": "TDD: Mocking out tightly coupled objects", "text": "Sometimes objects just need to be tightly coupled. For example, a `CsvFile` class will probably need to work tightly with the `CsvRecord` class (or `ICsvRecord` interface). However, from what I learned in the past, one of test-driven development's main tenets is \"Never test more than one class at a time.\" Meaning you should use `ICsvRecord` mocks or stubs rather than actual instances of `CsvRecord`. However, after trying this approach, I noticed that mocking out the `CsvRecord` class can get a little hairy. Which leads me to one of two conclusions: 1. It's hard to write unit tests! That's a code smell! Refactor! 2. Mocking out every single dependency is just unreasonable. When I replaced my mocks with actual `CsvRecord` instances, things went much more smoothly. When looking around for other people's thoughts I stumbled across this blog post, which seems to support #2 above. For objects that are naturally tightly coupled, we should not worry so much about mocking. Am I way off track? Are there any downsides to assumption #2 above? Should I actually be thinking about refactoring my design?"}
{"_id": "160411", "title": "What is a legitimate reason to use Cucumber?", "text": "I've worked on several contracts where the client used Cucumber, and I've often felt that the testing suite didn't really have a place in our stack. From what I understand, business analysts/non-technical coworkers write up the tests and the developers make the step definitions work. My problem with this approach is that the tests are never valid or terse enough to be used without rewriting the whole file."}
{"_id": "118583", "title": "increasing productivity - mastering a language vs. selecting efficient tools", "text": "I'm looking for advice from experienced developers on this question. In my work there's a need for a lot of one-off code. It's tempting to just dip into the right python/perl library calls to do these little tasks as quickly as possible. I used to be of the philosophy of \"use the most efficient tool for the task\". However, I'm afraid that over time, this means that I won't have a deeper experience and expertise, so recently I've been forcing myself to use C++ (w/ Boost and STL) for everything, even if I could do the task in python or perl much more quickly. I'm hoping that in the long run, this will make me a more productive developer. I hope to reach a level of familiarity where I can do things in C++ as quickly as I can in python (and also have the practice to work on bigger projects that would require C++). Is this a good strategy towards long-term productivity and deeper skills? Or am I unnecessarily wasting time / torturing myself?"}
{"_id": "207478", "title": "Best object-oriented way of parsing a model with several fields (>50) and with null checks", "text": "I have JSON-based data which contains many fields from a particular model. Any value from the list of fields can be null. I am trying to find the best object-oriented way of parsing it, and it also has to be lightweight as it's a mobile application. PS: I want to avoid numerous \"if-checks\". Structure of JSON data: data=( classId=\"\" classDesc=\"\" items=( { //Array of 50 Items },{ //Array of 50 Items },{ //Array of 50 Items } ); }, //Remaining Items );"}
{"_id": "190935", "title": "Have they misunderstood currying or have I?", "text": "This question is similar to the question posted on Does groovy call partial application 'currying'?, but not completely the same, and the answers given there do not really satisfy me. I would like to state right at this point, before I go any further, that currying is one of the concepts that has been hard to fully understand. I think the reason for that is the fact that there seem to be two general definitions of currying, and they don't really mean the same thing in either theory or application. That being said, I think currying is a very simple concept once you decide to take only one of the two definitions and stick to it. In fact, currying should not be difficult to understand at all, but in my experience, understanding currying has been the equivalent of showing a little boy an apple and telling him that the fruit is called an apple, and then going on to call that fruit an orange. The boy will never understand the simple concept of an apple because you have caused an enormous amount of confusion by calling the apple an orange. The definition of currying that I've accepted as the right one is that currying is taking a function of an arbitrary arity and turning it into multiple chainable functions, each of which accepts only one parameter (unary functions). In code, it would look like this: f(1,2,3); //becomes f(1)(2)(3); However, some people, including well-known programmers like John Resig, explain currying as something completely different. For them, currying is no more than simple partial application. There is even a generally accepted javascript function for currying: Function.prototype.curry = function() { var fn = this, args = Array.prototype.slice.call(arguments); return function() { return fn.apply(this, args.concat( Array.prototype.slice.call(arguments))); }; }; This function does not really curry anything, but rather partially applies the supplied arguments. The reason why I don't think this function curries anything is that you can end up with a function that accepts more than one argument: var f2 = f.curry(1); //f accepts 3 arguments f(1,2,3); f2(2,3); // calling the resulting \"curried\" function. So, my question is, have I misunderstood currying or have they? I wanted to post links to the material I've read on currying, but I cannot post more than 2 links since I don't have a reputation of at least 10.
I will leave you with this post that explains currying in JavaScript, but in a way in which I don't really see any currying: https://javascriptweblog.wordpress.com/2010/04/05/curry-cooking-up-tastier-functions/"}
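As a side note (not from the original post), the distinction the asker draws is easy to see in a few lines of Python, using functools.partial for partial application and nested unary functions for true currying:

    from functools import partial

    def add3(a, b, c):
        return a + b + c

    def curry3(f):
        # true currying: one argument at a time, every step is unary
        return lambda a: lambda b: lambda c: f(a, b, c)

    curried = curry3(add3)
    print(curried(1)(2)(3))      # 6

    add_one = partial(add3, 1)   # partial application: fix one argument
    print(add_one(2, 3))         # 6; the result still takes two arguments

The partially applied function still has arity 2, which is exactly the asker's objection to calling `Function.prototype.curry` "currying".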
{"_id": "118586", "title": "How Does A Compiler Work?", "text": "_Note: I am surprised that this hasn't been asked before, and if it has I could not find it in a search._ I've been on tons of websites, I've read tons of articles, and I have heard tons of explanations. Most of them were good, but they were all either too broad or too complicated or just plain bad. So my question is, how does a compiler work? If this is a difficult, broad question, please tell me. But if not, please answer the question."}
{"_id": "8487", "title": "Compiler optimization examples", "text": "I'd like to see (good) examples of optimizations performed by compilers (static and JIT). Why? * To learn what we don't have to optimize ourselves (often leading to better code) * To be amazed"}
{"_id": "110005", "title": "What's the best way to show a CLI CRUD app to a client?", "text": "I'm developing a CRUD application on the command line for a non-tech client. What's a simple (for both parties) way for me to demo the application remotely? A middle ground between SSH (easy for me) and setting it up as a webapp (easy for the client) would be great."}
{"_id": "110006", "title": "What is the best way to introduce algorithms to undergraduate students?", "text": "If you are given the task of designing the courses leading up to a degree in computer science and are allowed just two courses on the topic of 'Algorithms', how would you structure the courses? My specific questions are: 1. Should the first course introduce some of the fundamental algorithms of Sorting and Searching without introducing the notions of time and space complexity? 2. A second course on designing algorithms. This course would cover not only time and space complexities but also generic algorithm design techniques such as Divide and Conquer, Dynamic Programming etc. This is the typical profile of the students I am designing this course for: 1. Most would be new to the notion of programming, though familiar with the use of computers. 2. They would be learning C programming in the same semester that they learn about algorithms. 3. They would be taking a simultaneous course on introduction to computer science. Most of the textbooks I have considered are either too voluminous or too intimidating for the student, with overemphasis on the analysis of the complexity of algorithms. Any suggestions on either the appropriate textbooks or methodology for teaching would be appreciated!"}
{"_id": "110007", "title": "How do I protect my website (code, database, FTP information) from freelancers?", "text": "I'm developing a website using Joomla and many other 3rd-party plugins, and I also want to make some custom components, so I'm hiring some freelance developers; a few are online companies said to be located in the UK or elsewhere. After I explain the component's functionality and layouts to them, they request access to my website's FTP, database, and CPanel. In this case, how am I going to protect my website? What if they take my other code and make a clone site or something? Are there any ideas besides a company contract?"}
{"_id": "114780", "title": "Do you need to buy Visual Studio to develop/deploy an ASP.NET web application?", "text": "Since the .NET Framework SDK is free, is Visual Studio anything more than an IDE? If yes, do I need to buy Visual Studio to deploy my ASP.NET web application (written in C#)? I'm using MySQL for the database. I'm a student who is going to graduate college soon, and I'm exploring technologies for a web application that I'm building for a startup."}
{"_id": "28995", "title": "What job is better for a newbie, one that requires you to create a new program frequently, or something like software maintenance?", "text": "One of my friends has just completed his college degree and is ready to join the programmers' world. Today he has two offers: one with new projects every time, and another with software maintenance. The remaining factors are not important to him; what he wants to know is which option is better. My experience favours the second option, because my first job was the maintenance one and **I could learn how my fellow programmers made mistakes while coding**. But I soon switched to a new job which required me to create a new project every time. I enjoyed both, but I must admit that my first job has given me more of an advantage today. But it's not necessarily true that my experience will benefit him. So I want to know: what is the general approach? If I have to give him a final verdict on these two, what should I tell him? **Edit** Everybody deserves one upvote here; I am really learning a lot from you guys."}
{"_id": "126871", "title": "What is an algorithm to find simple cycles?", "text": "I have a graph with an Eulerian cycle and no Hamiltonian cycles. I would like to divide this graph into simple cycles. Edges may not be repeated in simple cycles. How can this be done?"}
{"_id": "126875", "title": "Dependency Injection and Singleton. Are they two entirely different concepts?", "text": "I've been hearing from a colleague about using dependency injection over Singleton. I still can't make out whether they are two orthogonal patterns which can be replaced with one another, or whether DI is a method of making the Singleton pattern testable. Please take a look at the following code snippet. IMathFace obj = Singleton.Instance; SingletonConsumer singConsumer = new SingletonConsumer(obj); singConsumer.ConsumerAdd(10,20); The `SingletonConsumer` is accepting a parameter of type `IMathFace`. Instead of accessing the singleton class internally, `SingletonConsumer` will get the singleton instance passed by the caller. Is this a good example of consuming a singleton class via dependency injection?"}
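For readers skimming the snippet above, here is one hedged Python rendering of the same wiring (the names follow the C# fragment; this is a sketch, not the asker's code):

    class MathSingleton:
        _instance = None

        @classmethod
        def instance(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

        def add(self, a, b):
            return a + b

    class SingletonConsumer:
        def __init__(self, math_obj):      # the dependency is injected...
            self._math = math_obj

        def consumer_add(self, a, b):
            return self._math.add(a, b)    # ...never looked up internally

    consumer = SingletonConsumer(MathSingleton.instance())
    print(consumer.consumer_add(10, 20))   # 30

Because `SingletonConsumer` never references `MathSingleton` directly, a test can pass in any stand-in object with an `add` method, which is the testability benefit the question is circling.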
{"_id": "148155", "title": "How do programs like subversion detect when a file has been edited as opposed to created/deleted?", "text": "This is my first question here, so I hope it is not off topic. Although I am using the Linux `inotify` library to listen for changes to files, and I compare its use against the `Subversion` program, I am specifically looking for the algorithm used. To a human it is very easy to tell if a file has been created or modified. Clicking the `New` button constitutes the former, and clicking the `Save` button constitutes the latter. To Linux, both those actions have serious overlap. In text editors, for example, generally a swap file is created and then copied/moved. This makes it difficult to distinguish via `inotify` between a minor edit to a file and a deliberate overwrite of a file. What I am trying to understand is how a program such as `Subversion` recognizes the difference between a user having modified a file with a text editor, and a user having actually deleted the file and opened a new file with the same name. **Edit:** It has been pointed out that Subversion does not do what I want it to do, so it was a blunder on my part to use it as an example. Instead, allow me to rephrase the question: \"Is there any known program or programming approach to match high-level actions such as creating new files and saving them to low-level actions such as modifying, moving, copying, etc., such that I can log all the files in the system and changes to them\"?"}
{"_id": "118639", "title": "Is it a good practice to set connection strings in a web.config?", "text": "Recently I had a discussion with some of my colleagues at work because they said it's better to have an encrypted connection string in a .DLL. And I said: why not just use the connection string defined in the web.config, encrypted? It's the same, and it's better because Entity Framework, for example, looks for the name of the connection in the web.config of the application. Now I want to know, from a security point of view, what's better, or what's the best practice?"}
{"_id": "220769", "title": "How is the DRY principle (applied at class level) related to SRP?", "text": "In other words, is DRY (don't repeat yourself) applied at a class level a subset of SRP (single responsibility principle)? What I mean is: while SRP states that each class should have only a single responsibility (i.e. a class should only have one reason to change), is it the application of DRY at class level that prevents two classes from having the same responsibility (thus it is DRY that prevents the repetition/duplication of the same responsibility in two different classes)? Thank you"}
{"_id": "253421", "title": "Running time of simple for-loops", "text": "I'm reading about algorithms and I understand most of it; one thing that I still struggle with a lot is something as simple as the running times of different for-loops. Everyone seems to find that easy except for me, and therefore I'm seeking help here. I am currently doing some exercises from my book, and I need help completing them in order to figure out the different running times. The title of the exercise is: \"Give the order of growth (as a function of N) of the running times of each of the following code fragments\" a: int sum = 0; for (int n = N; n > 0; n /= 2) for(int i = 0; i < n; i++) sum++; b: int sum = 0; for (int i = 1; i < N; i *= 2) for(int j = 0; j < i; j++) sum++; c: int sum = 0; for (int i = 1; i < N; i *= 2) for(int j = 0; j < N; j++) sum++; We have learned different kinds of running times/orders of growth like n, n^2, n^3, Log N, N Log N etc., but I have a hard time understanding which to choose when the for-loops differ like they do here. The \"n, n^2, n^3\" ones are not a problem, but I can't tell what these for-loops' running times are. Here's an attempt at something: the y-axis represents the \"N\" value and the x-axis represents the number of times the outer loop has been run. The drawings on the left are: arrow to the right = outer loop, circle = inner loop and the current N value. I have then drawn some graphs just to look at, but I'm not sure this is right. Especially the last one, where N remains 16 all the time. Thanks. ![My drawing](http://i.stack.imgur.com/35QwR.jpg)"}
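A quick empirical check can settle questions like this; the following Python harness (an editorial sketch, not from the post) counts exactly how many times `sum++` would run in each fragment:

    def frag_a(N):
        ops, n = 0, N
        while n > 0:
            ops += n      # inner loop runs n times
            n //= 2
        return ops

    def frag_b(N):
        ops, i = 0, 1
        while i < N:
            ops += i      # inner loop runs i times
            i *= 2
        return ops

    def frag_c(N):
        ops, i = 0, 1
        while i < N:
            ops += N      # inner loop always runs N times
            i *= 2
        return ops

    for N in (1000, 2000, 4000):
        print(N, frag_a(N), frag_b(N), frag_c(N))

Doubling N roughly doubles the counts for a and b (each is a geometric series summing to less than 2N, hence O(N)), while c grows by slightly more than a factor of two each time, which is the signature of O(N log N).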
{"_id": "114484", "title": "Domain knowledge vs Programming", "text": "> **Possible Duplicate:** > How important is Domain knowledge vs. Technical knowledge? I often hear from my colleagues, and sometimes from interviewers, that > \"There is nothing so great in having excellent programming knowledge. One must gain the domain knowledge as the 1st priority. If you have good domain knowledge, then writing code for that is not a big deal.\" (Here domain knowledge is something related to the area you are working in. For example, I work in the telecom domain; someone else might be in finance or pharma or web development or embedded, and so on.) I disagree with the above passage and think exactly the opposite. In my career _till now_, I have seldom missed any deadline for a bugfix or feature enhancement. I have kept changing domains (within telecom) but stuck with learning programming techniques. Though I might be wrong. **Questions**: * Is my current approach correct? * Given a choice between domain or programming (with limited time), which one should be chosen? * Is there a good future for a person who is primarily a very good coder but not so great in a domain (of course, initially)? I ask this because I feel that every domain ultimately boils down to a sea of code into which one has to dive!"}
{"_id": "220762", "title": "How to store historical user data?", "text": "I'm trying to think of a good way to store historical data for my web application. The application has users who add photos, and comment on and grade them. All this is stored in a relational database. The users also have the ability to delete their own photos (and thus comments/grades) and their own account. I want to store those deleted entities as historical data. **The aim of it is to distinguish categories of what people like to photograph and classify users towards those categories**. Should I store it in some NoSQL database? Or maybe in my origin database by marking it with an `is_deleted` field? Maybe different tables, e.g. `archived_users`, `archived_photos`? Another database instance for only historical data? Has anybody dealt with such a task and is willing to share thoughts? * * * So far, I'm using an `is_deleted` flag for photos and a backup table (`archived_users`) for users. Why such a distinction? To avoid violating the `unique` constraint on the `username` field. This solution has 3 problems as I see it: * It is incoherent to do the same thing in 2 ways. Or maybe it's OK? * It is somehow strange moving a `user` to another table (with associations to photos). It creates some superficial fields. Or maybe it's OK? * \"Living\" data is mixed with historical data. Could it be a performance problem when the database is really huge?"}
{"_id": "51265", "title": "Internal and external API architecture", "text": "The company I work for maintains a successful SaaS product that grew \"organically\" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by: * The web applications * A tool to import data * A tool to integrate with other client software (not an API per se) We also want to create an API that can be used by our customers who are capable of using it to create their own integrations. We are struggling with the following question: Should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings to control what can be done by whom, or should they be two separate applications where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead.
What have others done in a similar situation?"}
{"_id": "253429", "title": "Is table inheritance the wrong approach", "text": "Take the two entities below as an example. public class Person { public string Username {get;set;} public string DisplayName {get;set;} } public class Worker:Person { public string Title {get;set;} public int SecurityId {get;set;} } When anyone first registers with this app, basic information would be captured and a Person would be created. Once an admin comes along, they may want to assign a role to this Person. Roles in the sense of authorisation are not in question; there is a RoleProvider. However, if I make someone a worker, some additional details need to be captured. How those new details are best stored is what is in question. * I could have a WorkerProperties class with the fields and give Person --> Worker a 0..1->1 relationship. * I could have all the fields as part of Person and just fill in what is required. At runtime, accessing the Person's role would be needed to work out which fields would be required. * I could create a Worker, as shown above, inheriting from Person. With this option there is the problem that the PK is the username registered with. I would need to somehow change the discriminator column generated by EF to essentially change the object type to Worker."}
{"_id": "220765", "title": "Open Close Principle (OCP) vs Dependency Inversion Principle (DIP)", "text": "I was trying to understand the difference between the Open Closed Principle (OCP) and the Dependency Inversion Principle (DIP). Based on the research I've done on the internet so far, I came to the conclusion that 'DIP is one option through which we can achieve OCP'. Am I right on this? Can you give me an example which doesn't follow DIP but follows OCP?"}
{"_id": "220764", "title": "Should my stateful widget generate necessary HTML itself?", "text": "What behaviour should be preferred for widget initialization: 1) Required HTML is generated by the widget itself: Layout:
    Initialization: $('#my-widget').myWidget(); // Creates HTML 2) Required HTML is placed on the page by the user:
    Initialization: $('#my-widget').myWidget(); // Won't create nested div's Mark Otto (one of the Bootstrap creators) suggests not generating markup with JavaScript: https://github.com/mdo/code-guide#javascript-generated-markup Currently, I use the following function, which appends an element to the root of the widget (this.$element) only if it doesn't exist: function($content) { var $child = this.$element.children('.' + $content.attr('class')); return $child.length ? $child : $content.appendTo(this.$element); };"}
{"_id": "213571", "title": "Should Uncle Bob's example be refactored to an AbstractFactory or a SimpleFactory?", "text": "In the book \"Clean Code\" Robert Martin makes a statement regarding the following code: public Money calculatePay(Employee e) throws InvalidEmployeeType { switch (e.type) { case COMMISSIONED: return calculateCommissionedPay(e); case HOURLY: return calculateHourlyPay(e); case SALARIED: return calculateSalariedPay(e); default: throw new InvalidEmployeeType(e.type); } } Statement: The solution to this problem (see Listing 3-5) is to bury the switch statement in the basement of an ABSTRACT FACTORY, and never let anyone see it. What I don't understand is why he calls it an Abstract Factory. If the solution is to create 3 Employee subclasses, each implementing its own CalculatePay method, then the logic is moved up to, let's say, the controller. But then we have to create a \"Simple Factory (Idiom)\", not an Abstract Factory as presented in the original book by the GoF. The Abstract Factory's intent is to \"Provide an interface for creating families of related or dependent objects without specifying their concrete classes.\", but this is clearly not the case here."}
{"_id": "213577", "title": "Rich Domain Models -- how, exactly, does behavior fit in?", "text": "In the debate of Rich vs. Anemic domain models, the internet is full of philosophical advice but short on authoritative examples. The objective of this question is to find definitive guidelines and concrete examples of proper Domain-Driven Design models. (Ideally in C#.) For a real-world example, this implementation of DDD seems to be wrong: The WorkItem domain models below are nothing but property bags, used by Entity Framework for a code-first database. Per Fowler, it is anemic. The WorkItemService layer is apparently a common misperception of Domain Services; it contains all of the behavior / business logic for the WorkItem. Per Yemelyanov and others, it is procedural. (pg. 6) So if the below is wrong, how can I make it right? The behavior, i.e. _AddStatusUpdate_ or _Checkout_, should belong in the WorkItem class, correct? What dependencies should the WorkItem model have?
![enter image description here](http://i.stack.imgur.com/QKWNn.png) public class WorkItemService : IWorkItemService { private IUnitOfWorkFactory _unitOfWorkFactory; //using Unity for dependency injection public WorkItemService(IUnitOfWorkFactory unitOfWorkFactory) { _unitOfWorkFactory = unitOfWorkFactory; } public void AddStatusUpdate(int workItemId, int statusId) { using (var unitOfWork = _unitOfWorkFactory.GetUnitOfWork()) { var workItemRepo = unitOfWork.WorkItemRepository; var workItemStatusRepo = unitOfWork.WorkItemStatusRepository; var workItem = workItemRepo.Read(wi => wi.Id == workItemId).FirstOrDefault(); if (workItem == null) throw new ArgumentException(string.Format(@\"The provided WorkItem Id '{0}' is not recognized\", workItemId), \"workItemId\"); var status = workItemStatusRepo.Read(s => s.Id == statusId).FirstOrDefault(); if (status == null) throw new ArgumentException(string.Format(@\"The provided Status Id '{0}' is not recognized\", statusId), \"statusId\"); workItem.StatusHistory.Add(status); workItemRepo.Update(workItem); unitOfWork.Save(); } } } (This example was simplified to be more readable. There are other entities, and for instance the AddStatusUpdate has some extra behavior -- IRL it actually takes a status category name, and if that category doesn't exist, the category is created.) ## Update @AlexeyZimarev gave the correct answer, a perfect video on the subject in C# by Jimmy Bogard, but it was apparently moved into a comment below because it didn't give enough information beyond the link. I have a rough draft of my notes summarizing the video in my answer below. Please feel free to comment on the answer with any corrections. The video is an hour long but very worth watching."}
{"_id": "141860", "title": "Should I be concerned that I can't program very fast without Google?", "text": "> **Possible Duplicate:** > Google is good or bad for programmer? I'm currently in college to be a software engineer, and one of the main principles taught to us is how to learn for ourselves, and how to search the web when we have a doubt. This leads to a proactive attitude - when I need something, I go get it. Recently, I started wondering how much development I would be able to do without internet access, and the answer bugged me quite a bit. I know the concept of the languages and how to use them, but I was amazed by how \"slow\" things were without Google to help in the development. Most of the problems I have are related to specific syntax. For example, reading and writing to a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling \"read file java\" and refreshing my memory. I completely understand the code and fully understand what it does, but I am sure that without Google it would take me a few tries to get the code correct. Is this normal? Should I be worried and try to change something in my programming behaviour?"}
{"_id": "223219", "title": "Private comments in git", "text": "I need to add a feature to some code in a big project, and due to the complexity I find it very helpful to add detailed comments to many lines to keep track of what's happening. This is only for my own understanding, however, and I will delete the comments before pushing the code.
The problem, however, is that if I do private commits during my development, then when I push to the main server, I guess git will show the lines that I commented and then uncommented (but didn't otherwise modify) as being modified by me, making \"blame\" more opaque. I suppose the only way to do this cleanly is to have 2 private branches, one for development and one for commenting; or can anyone recommend a better way?"}
{"_id": "25067", "title": "Which management book would you recommend to read for a fresh Team Leader?", "text": "Which management book would you recommend to read for a fresh Team Leader?"}
{"_id": "120436", "title": "What is a benefit of having source code for ICS?", "text": "People are so happy the source code of Ice Cream Sandwich became available for syncing. Why is that cool?"}
{"_id": "176553", "title": "LL(∞) and left-recursion", "text": "I want to understand the relation between LL/LR grammars and the left-recursion problem (for some of these questions I partially know the answer, but I ask them as if I knew nothing, because I am a little confused now and prefer complete answers). I'm happy with synthesized or short and direct answers (or just links solving them unambiguously): 1. What type of context-free language isn't an LL(∞) language? 2. Do LL(k) and LL(∞) have problems with left-recursion, or only LL(k) parsers? 3. Does an LALR(1) parser have trouble with left or right recursion? What type of trouble? 4. Only in terms of the LL/LALR comparison, what is better: Bison (LALR(1)) or Boost.Spirit (LL(∞))? (Let's suppose their other features are irrelevant in this question.) 5. Why does GCC use a (hand-made) LL(∞) parser? Only for the error-handling problem?"}
{"_id": "151955", "title": "What is the definition of \"Big Data\"?", "text": "Is there one? All the definitions I can find describe the size, complexity/variety or velocity of the data. Wikipedia's definition is the only one I've found with an actual number: > Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set. However, this seemingly contradicts the MIKE2.0 definition, referenced in the next paragraph, which indicates that \"big\" data can be small and that 100,000 sensors on an aircraft creating only 3GB of data could be considered big. IBM, despite saying that: > Big data is more than simply a matter of size. have emphasised size in their definition. O'Reilly has stressed `\"volume, velocity and variety\"` as well. Though explained well, and in more depth, the definition seems to be a re-hash of the others - or vice-versa of course. I think that a Computer Weekly article _title_ sums up a number of articles fairly well: \"What is big data and how can it be used to gain competitive advantage\". But ZDNet wins with the following from 2012: > “Big Data” is a catch phrase that has been bubbling up from the high performance computing niche of the IT market... If one sits through the presentations from ten suppliers of technology, fifteen or so different definitions are likely to come forward. Each definition, of course, tends to support the need for that supplier’s products and services. Imagine that. Basically, \"big data\" is \"big\" in some way, shape, or form. What is \"big\"? Is it quantifiable at the current time?
If \"big\" is unquantifiable is there a definition that does not rely solely on generalities?"} {"_id": "182085", "title": "php return values", "text": "I have a codeigniter app and in my model, I always return true or false for all functions, and if I have data that needs to be passed, I also set a property that contains my data. The only trouble is, in my controller, if I have to call 3 or 4 methods in my model, the code gets really repetitive. If ( $this->my_model->functionA() ) { $localvar = $this->my_model->data(); } else { show_error(\"Error A\"); } If ( $this->my_model->functionB() ) { $localvar = $this->my_model->data(); } else { show_error(\"Error B\"); } If ( $this->my_model->functionC() ) { $localvar = $this->my_model->data(); } else { show_error(\"Error C\"); } I'm wondering if i change the logic so that the functions don't return true, but return the data instead... does it simplify things alot? I think I'd still need code like this: If (! $this->my_model->functionA() ) { show_error(\"Error A\"); } else { $localvar = $this->my_model->data(); } Or is there a way I can combine my $localvar assignment statement with the if statement? Is there a better way to do this?"} {"_id": "756", "title": "Where can I find programming puzzles and challenges?", "text": "I'm trying to find places where I can hone my craft outside the context of school or work. Are there places online, or books available, where I can access lists of programming puzzles or challenges?"} {"_id": "191990", "title": "Confusion over a hacker-howto article", "text": "I have gone through http://www.catb.org/esr/faqs/hacker-howto.html What I understand from the article: 1. develop a hacker attitude. 2. learn programming language (eg. python) 3. solve interesting problems Point 3) is where I'm stuck. Where do I find these interesting problems? Sure, I can go to UVA Online and other sites which have good set of programming problems. But solving these problem I just like an exercise from which I can polish my python skills and learn how to go about solving a problem. I want to do something real/fun problems which the above programming problems lack. I need a bit of guidance on the issue."} {"_id": "131969", "title": "Fun Learning Exercises Recommendations", "text": "> **Possible Duplicate:** > Where can I find programming puzzles and challenges? Recently in our workplace we have been playing Design Pattern Poker. This is fun and really helps the participants to understand design patterns and to think of useful way to apply them. Can anyone recommend any other similar, fun games that will help us improve our development skills?"} {"_id": "132806", "title": "Good collection of short code samples in different languages to solve programming problems?", "text": "I am interested in learning some new programming languages and looking for the collection of short solutions for programming problems provided in different languages. The optimal format would be: > **Problem description** > Language A: .... (solution) > Language B: .... (solution) etc It would be best if I could sort/filter the samples basing upon the languages and programming paradigms. What I am currently doing is using the Project Euler and some other sites for programming contests where I first have to solve a problem on my own and then I can pick the examples in the languages I am interested in the forum thread of this particular problem. 
This is OK, but sometimes it is a somewhat long way around for me."}
{"_id": "113273", "title": "Site for programming tasks", "text": "> **Possible Duplicate:** > Programming Puzzles This may sound crazy, but is there a website where you can pick and choose a programming task for someone? Unpaid. As a matter of exercise. I would like to have some real-world challenges that I can do in my spare time, maybe."}
{"_id": "90228", "title": "Exercises for starting programmer", "text": "I'm learning how to program with a book on Objective-C. I know that's not a great start, but anyway. What I want to find is exercises which gradually increase in difficulty, from starting ones like hello world to fairly complex ones which require a few hours to complete. Is there any site out there that offers those exercises? Not necessarily language-specific. I have seen Project Euler, but it is more math-based, and I'm not too good at math. I know loops and have some grasp of classes and inheritance. What I want is to make something useful. I can't figure out where to start."}
{"_id": "166156", "title": "Spending a good fortune on a certificate-holding Scrum Master or a veteran XP coach?", "text": "There is a very prestigious company that delivers well-sold software for financial systems. It has more than 20 years of history and is staffed with about 20 programmers and a much larger number of managerial staff. Dissatisfied customers have reported strange bugs that no one has a clue how to fix, the code is hard to read, and customization is prohibitively expensive. In a word, the software is rotten. The company decided to spend a fortune and found the Agile thing to be the remedy, but they are stuck on what it is they need most urgently. Is it about the process or the developers or both? The challenge breaks down to the following options: 1. They can hire a certificate-holding Scrum Master to teach them Scrum. When asked about the value of doing it, the SM responded: \"I will prepare them to embrace Agile, and only then can they go Agile and save the product\". 2. They can as well hire a veteran XP coach. When posed with the same question he responded: \"The most urgent problem is with the programmers and not the management; XP will save the product from rot, and only then will Scrum make sense\". Developers are far from capable of doing agile programming practices at the moment. No unit tests, no pair programming, no CI (huh? what is it?) ... you get the idea. Some say they would be far better off trying to improve their programming first (hire option 2) and then go with the process. Many say quite the opposite. Any insights?"}
{"_id": "166155", "title": "How do I parse a header with two different versions [ID3] while avoiding code duplication?", "text": "I really hope you can give me some interesting viewpoints for my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing; my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag.
The 2.3 version optional header is defined as follows: struct ID3_3_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (either 6 or 8 bytes, excluded) WORD wExtFlags; //Extended header flags DWORD dwSizeOfPadding; //Size of padding (size of the tag excluding the frames and headers) }; While the 2.4 version is defined: struct ID3_4_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (synchsafe int) BYTE bNumberOfFlagBytes; //Number of flag bytes BYTE bFlags; //Flags }; How could I parse the header while minimizing code duplication? Using two different functions to parse each version sounds less than great; using a single function with a different flow for each occasion is similar. Are there any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated."}
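One common answer to this kind of question is to make the version differences data instead of code. The sketch below is an editorial illustration, not from the question: the field names are made up, and synchsafe decoding of the 2.4 size field is omitted. It uses Python's struct module with one layout table per version:

    import struct

    # Per-version field layouts: the only place the two versions differ.
    EXT_HEADER_LAYOUTS = {
        3: (">IHI", ("size", "flags", "padding_size")),
        4: (">IBB", ("size", "flag_byte_count", "flags")),
    }

    def parse_extended_header(data, version):
        fmt, names = EXT_HEADER_LAYOUTS[version]
        return dict(zip(names, struct.unpack_from(fmt, data)))

    print(parse_extended_header(b"\x00\x00\x00\x0a\x00\x00\x00\x00\x10\x00", 3))
    # {'size': 10, 'flags': 0, 'padding_size': 4096}

The parsing function itself is written once; supporting another version means adding one table entry rather than another branch.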
{"_id": "82221", "title": "When do you think of efficiency? Before/During/After actually coding?", "text": "I've been programming in Java for quite some time and I always find myself doing very little \"planning\" (I haven't worked on anything HUGE yet, but I'm no stranger to big projects), developing ideas and fixing code efficiency as I go. I tend to write a block of code, test it, test it again, try to break it, then make it look nice (visually), and then worry about modifying it to be more efficient (if possible). Is this how most people think/work? I'm wondering if anyone else takes the time to pretty up their code after it's done, or if they plan how it's supposed to look and the most efficient method of writing it before actually coding anything. I'm trying to fine-tune my own efficiency, and hearing from the community will give me a broader idea of how professional and amateur programmers alike think."}
{"_id": "117782", "title": ".NET internals: Where is a good place to learn the \"under the hood\" stuff in .NET?", "text": "I'll be completely honest: one reason I love C# and .NET is that it has, for ALL of my development career, abstracted me from the well-crafted under-the-hood stuff that I believe, as a .NET developer, I shouldn't need to know much about. It might seem like blasphemy for me to say it, but I really do not care much about the CLR, CIL, JIT, or MSIL and all the other internal technologies, which up until now, to be completely honest, are more or less acronyms that mean very little to the everyday problems I find myself facing as a .NET developer and solving with the bread and butter of .NET. My experience is varied, like that of most of you with 2+ years of experience. I've been using .NET since it was launched, even back in the days when ASP.NET was called ASP+. I have yet to come across a situation where knowing how .NET does what it does provides me any practical value. I also fully believe that the authors of .NET very carefully planned .NET to provide this fantastic level of abstraction from specific hardware and OS versions for this reason: simply put, write your code and get on with your job! Long story short: I'm going for an interview next week, and I'm sure these theoretical questions will come up, so I would like to put in about 4-5 hours to study the internals so I can (with confidence) satisfy an interview test. Is there a good book or site which details all the .NET internals in a way that is easy to understand? EDIT: To clear things up a bit: when I refer to .NET internals, what I mean is not reflecting on and inspecting framework code, but mainly knowing how .NET code, from the moment it is written, ends up as executable byte code."}
{"_id": "117780", "title": "is there any relationship between a story point in two different projects? ", "text": "Assume we have two projects and one or more teams (one team doing the projects in serial or two teams doing the projects in parallel). If Team A decides that a task is 8 story points and Team B decides that a task on their project is also 8 story points, does that say anything about the relationship between the two tasks? Or is there no relationship between the complexity of the two tasks?"}
{"_id": "117786", "title": "high level design of a browser layout engine?", "text": "I'm interested in how browser layout engines like Gecko, WebKit and Trident are architected from a high level. What are the key abstractions? How are they layered? What are the inputs/outputs for the different abstractions? Is there a diagram or article that explains this particularly well? I am sure the implementations vary, but I'm generally curious about how all the pieces fit together. I realize WebKit is open source and I can just look at the code, but I wouldn't know where to start."}
{"_id": "37734", "title": "Coupling. Best practices", "text": "Following on from this thread I started, The Singleton Pattern, it got me thinking about how coupled my classes are and how best to achieve loose coupling. Please bear in mind I am a new programmer (4 months into my first job); this is really the first consideration I have given to this, and I am very keen to understand the concept. So, what exactly constitutes loose coupling vs heavy coupling? In my current (and first) project, I am working on a C# WinForms application, where the GUI section creates objects and subscribes to their events; when they are triggered, the GUI creates another object (in this example, a datagridview, a class that I have created that wraps around a standard datagridview and adds additional functionality) and attaches it to the GUI. Is this bad coupling or good? I really don't want to get into bad habits and start coding poorly, hence I would appreciate your responses."}
{"_id": "131446", "title": "What is inversion of control, and when should I use it?", "text": "I am designing a new system and I want to know what inversion of control (IOC) is, and more importantly, when to use it. Does it have to be implemented with interfaces or can it be done with classes?"}
{"_id": "206703", "title": "British royal succession algorithm", "text": "I was reading about the rules for royal succession of the British monarchy and thought the easiest way I could understand it is with code (or pseudo-code).
I came up with this (in C#-like pseudo-code): EventHandler CurrentMonarch_Died { CurrentMonarch = SuccessorRootedAt(Person.Electress_Sophia_Of_Hanover); } enum Gender { Male, Female }; // Male must come before Female Person SuccessorRootedAt(Person person) { if (person.IsAlive && IsQualified(person.Religion, person.Citizenship)) return person; sortedChildren = person.Children.OrderBy(p => p.Gender).ThenBy(p => p.DateOfBirth); foreach (child in sortedChildren) { successor = SuccessorRootedAt(child); if (successor != null) return successor; } return null; } Is it correct?"}
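The pseudo-code above is a plain depth-first search with an ordering rule, so it can be checked by running it. Here is one executable rendering in Python (an editorial sketch: the fields are invented, the religion/citizenship test is collapsed into a single flag, and the male-before-female ordering mirrors the question's rule):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Person:
        name: str
        gender: str                  # "male" or "female"
        date_of_birth: date
        is_alive: bool = True
        is_qualified: bool = True    # religion/citizenship folded into one flag
        children: list = field(default_factory=list)

    def successor_rooted_at(person):
        # A living, qualified person is the successor outright.
        if person.is_alive and person.is_qualified:
            return person
        # Otherwise recurse into the children: sons first, each group by age.
        ordered = sorted(person.children,
                         key=lambda p: (p.gender != "male", p.date_of_birth))
        for child in ordered:
            heir = successor_rooted_at(child)
            if heir is not None:
                return heir
        return None

    root = Person("Sophia", "female", date(1630, 10, 14), is_alive=False)
    root.children.append(Person("George", "male", date(1660, 5, 28)))
    print(successor_rooted_at(root).name)   # George

(Dates are only illustrative.) One caveat worth noting: since the Succession to the Crown Act 2013, male preference no longer applies to people born after 28 October 2011, so the `Gender` sort key in the pseudo-code is no longer quite right for the youngest generation.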
{"_id": "56128", "title": "Using an open source non-free license", "text": "Are there any projects/products out there that use an open source license that basically says \"free for small companies\" and \"costs money for larger companies\", in addition to \"make modifications available\"? (And are there any standard licenses with such wording?) If I were to release a project under such a license, would it be automatically shunned by every developer on the face of the earth, or, assuming it is actually a useful project, does it have a fair chance at getting contributions from Joe Programmer? The second part of this question can easily become subjective, but any well-argued point of view will be highly appreciated. For example, do dual-licensed projects made by commercial entities have success with the open source communities?"}
{"_id": "206709", "title": "How to keep your programming resources, tips, tools and snippets handy and easy to find?", "text": "In my everyday work and in my free time, I tend to discover/learn/find pieces of information that can be really useful if used at the right moment, but also really easy to forget and/or hard to find again when required. To give you examples, I am thinking about things such as: * nice algorithms or concepts from a book * crazy options for a compiler * neat online tools such as interpreters/compilers * snippets of code * documentation of different languages * programming tips from a forum and/or SE question * the page of an interesting GitHub project As you can imagine, most of them could be saved as bookmarks, but not all of them. Also, bookmarks wouldn't have any explanation/documentation/tagging/nice formatting around them. At the moment, I was thinking about creating a plain HTML page or a personal wiki to keep everything somehow organised. Thus, I was wondering which support/tool programmers use to keep that kind of resource handy and easy to find when required, and how they organize them?"}
{"_id": "98316", "title": "Pushing changes to an open web page", "text": "I have a warehouse management app that I have written that handles batching of orders for batch picking, and scanning of items for packing and accuracy. Part of this app is a Dash, or control panel, that the different managers in the warehouse work out of. It shows who's picking, who's packing, who's available, how many orders are available to be batched, and a BUNCH of other details and controls. Right now I have an AJAX call that runs every 30 seconds or so and checks a timestamp to see if the page should be updated (a change made on another manager's workstation). A manager does something that should be shown on all other workstations that are viewing the \"dash\". Maybe they batched some orders, and pickers that were available are no longer available. Right now, when the change is made, I update a timestamp saved in a database. On the HTML page, the timestamp that was in the db at last page load is saved in a hidden div. The JavaScript takes that timestamp and checks the database at an interval; if it's different, the page updates. It's a bit more complicated than this, as sections of the page get changed rather than the whole thing, but the description sums up the process. This works, but I don't like it. It requires a database hit from each workstation every 30 seconds. It seems overly complex. I am looking for some thoughts on how I might push a change as it occurs and minimize or eliminate some of the database interactions. Thoughts?"}
{"_id": "57294", "title": "Adding to the System namespace in C#", "text": "Would it be acceptable for some very generic utilities or classes to be added in the `System` namespace? I'm thinking of really basic stuff like a generic `EventArgs` (`EventArgs<T>`). Use case: it would be shared in a company's core library (so that it can be recompiled in a new project as-is, without changing the namespace)."}
{"_id": "194107", "title": "Why is it not possible to instantiate types with wildcards in Java?", "text": "I am trying to instantiate LinkedList<?> op = new LinkedList<?>(); But I get the error: Cannot instantiate the type LinkedList<?> Why is it that this cannot be instantiated in Java?"}
{"_id": "138561", "title": "Pros/cons between emphasizing client-side or server-side processing", "text": "Why would I want to write a web app with lots of processing server-side? To me, writing the program client-side is a huge advantage because it takes away as much server load as possible: the server only has to send data to the client with minimal processing. I see very little written about building web apps other than writing them server-side and treating the client side as _only a view_. Why would I ever want to do this? The only advantage I see is that I can write in whatever language I want (http://www.paulgraham.com/avg.html)."}
{"_id": "194108", "title": "HTTP Session or Database approach", "text": "I am a little confused as to what my approach should be. I am working on the design of a shopping cart, and I need to store the shopping cart either in the session or in the database, but I am not sure which approach would be best. Here are the use cases: 1. User is not logged in and adds a product to the cart (anonymous user). 2. User is logged in and adds a product to the cart. The first case is more confusing to me, since there can be many cases where a user just visits the web-shop and adds products without logging in, and it is quite possible that he might not go through a checkout process. But we still need to create a shopping cart for this user. In order to create and save the shopping cart, I have two options: 1. When the user adds a product, create a cart in the database and associate this cart with this user; the moment he logs in, move this cart to the logged-in user. 2. Create the cart, add the product to it, and save it in the session; when the user logs in, create the cart in the database and associate the logged-in user with this cart. I know that both a database-driven cart system and a session-based one can have their positive as well as negative aspects, but I am not sure which one might be the best approach, taking into account the following points: 1. Scalability 2. Flexibility 3. Extensibility 4.
The application should take care of speed. I am looking for input on this aspect to decide the path."}
{"_id": "198830", "title": "Need help understanding \"An Image Signature for any Kind of Image\"", "text": "I'm trying to implement the paper titled An Image Signature for any Kind of Image, but I'm having trouble understanding what they mean in this paragraph: > For each column of the image, we compute the sum of absolute values of differences between adjacent pixels in that column. We compute the total of all columns, and crop the image at the 5% and 95% columns, that is, the columns such that 5% of the total sum of differences lies on either side of the cropped image. We crop the rows of the image the same way (using the sums of original uncropped rows). First, what do they mean by \"sum of absolute values of differences between adjacent pixels in that column\"? If we start at the top-left pixel and work downwards (down the column), we can compute the difference between (0,0) and (0,1), but what about the next pixel? Am I supposed to compute the difference with just the pixel below, or the pixels on either side, or perhaps even the 4 or 8 neighbours? And then, as I understand it, we sum these differences for each column such that each column is reduced to a single number? Then we use this information to crop the _left_ and _right_ edges of the photo, right?"}
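Reading the paragraph literally, "adjacent" most plausibly means vertically adjacent within the column. A small NumPy sketch of that reading (an editorial illustration; the paper's reference implementation may differ):

    import numpy as np

    def column_crop_bounds(img, lo=0.05, hi=0.95):
        # img: 2-D grayscale array. For every column, sum the absolute
        # differences between vertically adjacent pixels; each column
        # collapses to a single number.
        col_activity = np.abs(np.diff(img.astype(float), axis=0)).sum(axis=0)
        cum = np.cumsum(col_activity) / col_activity.sum()
        left = int(np.searchsorted(cum, lo))    # ~5% of the total lies left of here
        right = int(np.searchsorted(cum, hi))   # ~5% lies right of here
        return left, right

    img = np.random.rand(64, 128)
    left, right = column_crop_bounds(img)
    cropped = img[:, left:right + 1]
    # Rows are cropped the same way, using sums computed on the uncropped image.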
{"_id": "117256", "title": "How often do you take notes, and how are you able to maintain a consistent amount?", "text": "I take notes about language features, project details, etc. I've noticed that I tend to take a lot of notes when I start learning a new aspect of a language or start working on a new project, but then, as the momentum builds, my note-taking drops off. I often wish I would take more notes, since I frequently find them helpful months later when I encounter a problem for the second time or when I want to remember some past detail. Do you take a lot of notes? And how do you (or don't you) maintain the volume of notes you take? My problem is that once I'm going on something, jotting down a note feels like an interruption, even though I know it will be useful in the future. Any suggestions?"}
{"_id": "210175", "title": "Apache 2 License and copyrighted Libraries", "text": "I am working on my first big(ger than before) project at my university, which is coded under the Apache 2 License. So here is the question: Can I use copyrighted material/libraries (no restrictions and free to use) inside that project? I want to be specific: the library is Flot (flotcharts.org). My guess is yes, but guesses do not count."}
{"_id": "117253", "title": "C++ skills higher than C skills?", "text": "I feel that the often-seen \"C/C++\" doesn't really describe my skills in my CV. So I'm planning to separate it into advanced C++ knowledge and mediocre C skills. Do you think this is confusing for the reader? She could think: \"C is a subset of C++, so what is this guy trying to tell me?\" Well, what I'm trying to say is: I have done several real-world C++ projects, while pure C projects were just a hobby thing. Do you agree that a skilled C++ programmer is not necessarily a qualified C guy, or do you think that this switch is done easily?"}
{"_id": "198838", "title": "Books on software development career path", "text": "Can you recommend books **and other material (video)** on how to build a career, and on what paths and ways out there are in the whole area of software engineering? **What can a developer evolve into?** Some sort of evolution path from development? I'm not asking just about the 2 well-known directions, project management and freelancing; I'm asking about the whole range of possibilities where these skills can be useful. I'm not asking for stuff about: * frameworks, languages * certifications for Microsoft, Oracle, etc."}
{"_id": "117258", "title": "Is depending on references a common process or should I be memorizing syntax?", "text": "> **Possible Duplicate:** > Is it important for a programmer to memorize the syntax of the language? For example, I could not write a SQL connection string from scratch if I tried. I always either Google it or copy and paste it from a previous project. For some reason this bothers me, but I don't know if it should or if it's just not a big deal. There are probably other people who can relate and think of other examples as well."}
{"_id": "105002", "title": "If I have two developer license accounts with Apple, can I easily switch between the two for app development and publishing?", "text": "I've been interested in iPhone development since the release of the iPhone; however, I am on a limited personal budget. I've never been willing/able to fork out the money necessary for an iPhone and a MacBook. I know some may argue that it's not that expensive, but for a hobby it hasn't been worth the price. By some twist of amazing fortune, the company that I work for wants me to develop an iPhone and iPod app. This means I am getting pretty much all the hardware you would want to develop on: an awesome MacBook, a Thunderbolt display, an iPhone, an iPad, and a few accessories. In short, it's basically the \"wish list\" that I have dreamt of one day owning. In my free time, I will be allowed to use the hardware to develop whatever software I desire, but since my development laptop will primarily be used for my employing company's project, I need assurance that any work I do under my own developer license will not affect the developer license associated with my work account. Can I easily switch between two developer accounts when developing and publishing applications? What if one account is a standard one and the other is an enterprise one? As of yet, I don't have any hardware or any developer licenses, and I'm asking in complete ignorance. Can I do this, or will the accounts collide? **REQUESTING UPDATE/SECOND OPINION** Thanks to **mouviciel** for promptly answering this question with a straightforward response. However, I've been working with Xcode for quite a while, and I've also had experience with downloading sample projects and modifying them so that I can run them on my iOS device. This leads me to believe that answering my initial question might involve tweaking PList files and other project settings, but it should still be doable without having to create a separate account on my MacBook. So I ask again: can I easily switch between two developer accounts when developing and publishing applications, strictly using one account on my MacBook? If the answer is yes (which is what I assume is the case), how must I redefine my personal projects, which may initially inherit user settings associated with my business account and credentials?"}
{"_id": "97333", "title": "Where can you find your first customers as a freelancer?", "text": "I want to start doing freelance work, but no matter how I look at it, it seems like the best way to get customers and to have work most of the time is to already be in the freelancing game.
Most freelancers I've talked to have had the same customers over the years or got new customers because their satisfied clients referred them. What I'd like to know from the successful people here who work as freelancers is: how do you start doing business when you haven't yet set foot in freelancing? I want to start small, creating websites that won't require me to hire other people, other than maybe a designer I already know. (I'd like to create desktop applications as well, but I think I should keep that for later, when I'm more experienced.) I thought about localized Google ads or visiting companies and meeting the people in charge there, but I wouldn't know which kinds of businesses to look for or whether it's even a good way to approach this. Anyone care to share their personal startup experiences/advice that can help future freelancers?"}
{"_id": "64536", "title": "Creating a web-end for a C++ program", "text": "I was wondering what would be the best method for creating a web end for interfacing with a C++ program on the server. At first I simply thought of just using shell execution from the server-side web language (like `shell_exec()` in PHP), but I was wondering if there is a \"better\" way. Maybe something more native? Or is this a bad practice for some reason?"}
{"_id": "226544", "title": "Algorithm/research on detecting language of text", "text": "I am interested in finding an approach that will detect what language a string of text is in, as Google Translate does."}
{"_id": "193779", "title": ".NET software design and Oracle ODP.NET UDT", "text": "I'm working on a new common .NET software design (mainly) for WCF-based web service applications with related client frontends (all written in C#). So far I've chosen some frameworks (NUnit, Autofac/Castle Windsor) as the basis for the application. Now I'm doing some research concerning db abstraction. I'm considering NHibernate (together with FluentNHibernate) as the persistence framework. But there are some concerns about NHibernate. Database interfaces provided by our db dev team heavily rely on stored procedures and often use UDT objects as output parameters (sometimes also ref cursors). Many already-existing applications are using auto-generated UDT C# classes. NHibernate seems to work well with Oracle (with appropriate configuration and usage of ODP.NET). See Fluent NHibernate - how to configure for oracle?, Fluent NHibernate - Configure Oracle Data Provider ODP. Also, ref cursors and stored procedure calls seem to work with NHibernate (see Calling an Oracle stored procedure with NHibernate and Oracle stored procedures, SYS_REFCURSOR and NHibernate). Is it appropriate to use NHibernate in this case (stored procedures and UDT/ref cursor output)? Or would it be better to keep the auto-generated UDT C# classes and implement custom data access objects? Design A (with auto-generated UDT classes): * Create business objects in the domain model (e.g. `class Product`). * Define database-independent interfaces, e.g. `IDataAccessProduct`. * Implement them in classes, e.g. `OracleDataAccessProduct`, that represent specific data access objects. For example, this class performs a mapping of auto-generated UDT classes (entities) to `Product` domain objects and vice versa. Design B (with NHibernate): * Create business objects in the domain model (e.g. entity class `Product : IEntity`). * Add an interface `IProductRepository` for the repository in the domain model. Add a `ProductRepository` that extends e.g. the base class `Rhino.Commons.NHRepository`.
{"_id": "64535", "title": "What are the differences between a website and a web application?", "text": "How do you differentiate a web application from a website? It's language/platform agnostic."} {"_id": "165625", "title": "What is the difference between web designers and developers?", "text": "> **Possible Duplicate:** > What are the boundaries between the responsibilities of a web designer and > a web developer? Web designers and developers work on the same project; I've been searching for an hour and can't find what the difference is. I found some slides on Slideshare that discuss the difference, but I'd like more explanation on the topic."} {"_id": "149934", "title": "How should I implement an unsubscribe feature for text messages?", "text": "On many e-mail subscription lists, there is a link at the bottom that says something to the effect of: > You are subscribed to our e-mail list as you@example.com. To unsubscribe, > click here What is the best way (and is there a way?) to emulate this when sending out mass text messages through an SMS gateway?"} {"_id": "250195", "title": "Dividing 2D grid to most efficient search", "text": "## Requirements 1. I have a `Grid` of unlimited size (I don't and can't know its size) 2. There is an `Object` type that can be placed on the grid; every object has its stream distance (in other words: range, default: 300) and 2D coordinates (position) 3. There is a function `find` that takes 2D coordinates and should return which objects are shown It seems that an R-tree is the answer to my question; is there any explanation of how exactly the algorithm works? ## My way 1. Divide the cell sizes into levels: level 1 is 600x600 (2 times bigger than the default stream distance), level 2 is 1200x1200 and so on... (every time the cell size is multiplied by 2). 2. Associate each object with the corresponding cell by multiplying its stream distance by 2 and rounding up to the next multiple of 600. For example: the stream distance is 450; multiplying it by 2 gives 900, and rounding it up to the next multiple of 600 gives 1200. 3. Every time an object is inserted, check which cells the object fits in and add the object to each such cell's object list. 4. When the function `find` gets called, it searches only in the cell of the given coordinate. [ Assume triangle is an object ] ![enter image description here](http://i.stack.imgur.com/BUWJz.jpg) ## Similar Questions * Most efficient way of finding entities in a grid? (The accepted answer)"} {"_id": "116786", "title": "Does switching from one programming language to another cause a loss in experience?", "text": "If I was working with a programming language X and want to switch to Y (from Java to C++, or from Java to Objective-C), does this cause my years of experience in the previous languages to be lost (from the point of view of companies)? For example, if I was a Senior in X and want to move to Y, will I start as a Junior at Y, or a Senior? I believe that languages are just tools, and the experience is in problem solving and the way of thinking, but are companies concerned?"} {"_id": "196074", "title": "Should my async task library swallow exceptions quietly?", "text": "I've just learned that .NET 4.5 introduced a change to how exceptions inside a `Task` are handled. Namely, they are quietly suppressed. The official reasoning for why this was done appears to be \"we wanted to be more friendly to inexperienced developers\": > In .NET 4.5, Tasks have significantly more prominence than they did in .NET > 4, as they\u2019re baked in to the C# and Visual Basic languages as part of the > new async features supported by the languages. This in effect moves Tasks > out of the domain of experienced developers into the realm of everyone. As a > result, it also leads to a new set of tradeoffs about how strict to be > around exception handling. (source) I've learned to trust that many of the decisions in .NET were made by people who really know what they're doing, and there is usually a very good reason behind things they decide. But this one escapes me. If I were designing my own async task library, what is the advantage of swallowing exceptions that the developers of the Framework saw that I'm not seeing?"}
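For concreteness, the swallowed exceptions can still be observed, and the old behavior can be restored; the sketch below is illustrative (the faulted task and its message are made up), but `TaskScheduler.UnobservedTaskException` and the `ThrowUnobservedTaskExceptions` configuration switch are the documented hooks:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            // Last-chance hook for exceptions that no one awaited or inspected.
            TaskScheduler.UnobservedTaskException += (sender, e) =>
            {
                Console.WriteLine("Unobserved: " + e.Exception.InnerException.Message);
                e.SetObserved(); // keep the process alive, matching the .NET 4.5 default
            };

            Task.Run(() => { throw new InvalidOperationException("boom"); });
            Thread.Sleep(100); // let the task fault

            // Unobserved exceptions only surface when the faulted task is finalized.
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
    }

Adding `<ThrowUnobservedTaskExceptions enabled="true"/>` under `<runtime>` in the application config restores the .NET 4 behavior of escalating such exceptions and tearing the process down.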
{"_id": "60091", "title": "Programmers and Database Professionals in Performance Based Companies", "text": "Anybody here work for a company (or know of someone that does) in the fields of programming or anything related to DBs and not have set work hours? Where you are paid for performance rather than how many hours you sit in a chair at the office? Any project / company I have been a part of always has pretty strict primary hours with the \"great opportunity\" / expectation to stay until the job is done. Is this type of flexibility really feasible in a group environment in these fields? Would pay-for-performance work within a company in these fields? With strict primary hours I notice a lot of inefficiencies. Some weeks or days there is only so much that can be done (for whatever the reason may be), and if your work is done it doesn't help morale to force someone to stay for 8 hrs/day or 40 hrs/week if the next week they may have to pull a 60+hr work week. I know that a lot of flexibility can come from working independently or as a consultant, so this question really does not encompass those types of positions."} {"_id": "240856", "title": "What can I do to let our team have code reviews of branch merges having hundreds of screens worth of Github diffs?", "text": "\"Code review\" (aka \"peer review\") seems like a really great idea, so my team started practicing it. For a little while it worked well, but then a co-worker merged a branch in, and asked for a review of the code. When I went to review her code, the Github diff page was about 420k pixels in height. Given that my screen is about 500px tall, that works out to 840 screens worth of code to review. To read the code, fully \"grok\" it, and write appropriate comments, I probably need an average of one minute per screen, which works out to 14 hours. Now to be fair, some libraries got checked into this commit, so some portion of that can be skipped ... but even if the libraries took up 6 hours' worth, that still leaves me with an entire day spent reviewing this merge. That can't be the most effective use of my time. And this is just one merge; we will no doubt have other large merges to review in the future as well. So, my question is, what can I do (either in terms of procedure or in terms of utilizing review tools) to let our team have code reviews of branch merges, while at the same time not eating up entire days on reviews?"} {"_id": "107534", "title": "Submit issue, pull request, or both for a very small fix?", "text": "I've made a 4-line bug fix to a minor issue in a library on Github.
The project is actively maintained, with changes every few days. What is the ideal way to submit a fix that small? Is it to fork, fix and submit a pull request; file an issue _and_ a pull request; or just file the issue with my fix copy-pasted? (Or something else?)"} {"_id": "113576", "title": "Is hungarian notation a workaround for languages with insufficiently-expressive (i.e. Haskell-style) static typing?", "text": "### Edit To be clear, I'm _not_ talking about annotating variable names with the _data_ type, but rather with information about the _meaning_ of the variable in the context of the program. For example, a variable may be an integer or float or double or long or whatever, but maybe the variable's _meaning_ is that it's a relative x-coordinate measured in inches. This is the kind of information I'm talking about encoding via Hungarian Notation (and via Haskell types). ### Original Question: In Eric Lippert's article What's Up With Hungarian Notation?, he states that the purpose of Hungarian Notation (the good kind) is to > extend the concept of \"type\" to encompass semantic information in addition > to storage representation information. A simple example would be prefixing a variable that represents an X-coordinate with \"x\" and a variable that represents a Y-coordinate with \"y\", regardless of whether those variables are integers or floats or whatever, so that when you accidentally write `xFoo + yBar`, the code clearly looks wrong. But I've also been reading about Haskell's type system, and it seems that in Haskell, one can accomplish the same thing (i.e. \"extend the concept of type to encompass semantic information\") using _actual types_ that the compiler will check for you. So in the example above, `xFoo + yBar` in Haskell would actually fail to compile if you designed your program correctly, since they would be declared as incompatible types. In other words, it seems like Haskell's type system effectively supports compile-time checking equivalent to Hungarian Notation. So, is Hungarian Notation just a band-aid for programming languages whose type systems cannot encode semantic information? Or does Hungarian Notation offer something beyond what a static type system such as Haskell's can offer? (Of course, I'm using Haskell as an example. I'm sure there are other languages with similarly expressive (rich? strong?) type systems, though I haven't come across any.)"} {"_id": "191681", "title": "Designing a self-describing JSON document system", "text": "I'm trying to devise a document system for a specific domain involving simple objects (People, Companies, Invoices, etc.) that would also be able to completely describe itself. This self-description ability would ideally be recursive. The document system will be based on JSON, but I believe the principle in question applies to any structured notation. To illustrate what I mean, let's say I have the following JSON document that describes a person (company, invoice, etc. are basically all the same, only with different properties): { \"Name\": \"John Doe\", \"Email\": \"john.doe@example.org\", \"BirthDate\": \"1976-07-04\" } Then, I have a second JSON document that describes the structure of the first one. Let's call this a \"Level 1 Meta-document\": { \"Type\": \"Person\", \"Properties\": { \"Name\": { \"Type\": \"String\", \"Required\": true }, \"Email\": { \"Type\": \"String\" }, \"BirthDate\": { \"Type\": \"Date\" } } } This would be simple enough, if not for the requirement that the system should also be able to fully describe this meta-document. To put it in more general terms: I'm looking for a way to define a self-sufficient \"Level N Meta-document\" that would be able to describe the structure of the \"Level N-1 Meta-documents\". **NOTE** : I'd be willing to go with a solution for N = 2, but instinct tells me that a true solution for N = 2 would also work for any N. Now that I think of it, this may be more of a math puzzle than a programming one. :) Is this even possible? If yes, can you give me some examples? If not, what are my other options? **EDIT: I've included a naive example of how a \"Level 2 Meta-Document\" would look, based on the above:** { \"Type\": \"MetaLevel2\", \"Properties\": { \"Properties\": { \"Type\": \"Hash\", \"Required\": true } } } The problem with this is that it doesn't describe the object that describes the property details (i.e. the one with the \"Type\" and \"Required\" attributes). If I were to include a description of those, I'd have to add another attribute **to the very same object I'm trying to describe** : { \"Type\": \"MetaLevel2\", \"Properties\": { \"Properties\": { \"Type\": \"Hash\", \"Required\": true, \"ValueProperties\": { \"Type\": \"String\", \"Required\": \"Boolean\" } } } } Unfortunately, this throws me into a recursive problem, because I now lack the description for \"ValueProperties\". In fact, for every new attribute I invent on level N, I have a problem describing it on level N+1 without introducing yet another attribute that needs description. What I'm looking for is a solution that wouldn't suffer from this problem. To be clear: I'm aware of XSD, but I'm not sure how to apply its principles to my case. Unless I'm missing something, XSD would suffer from the same recursive problem. This gives me reason to believe I have a problem with the approach itself."}
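One way the recursion can bottom out is to make the meta-document language expressive enough to refer to its own type, the way JSON Schema's meta-schema describes itself; then "Level N" and "Level N-1" coincide at a fixed point. A naive sketch of such a self-describing document (the attribute names here are invented for illustration, not taken from any standard):

    {
        "Type": "MetaDocument",
        "Properties": {
            "Type":       { "Type": "String", "Required": true },
            "Properties": { "Type": "Hash", "Required": true, "ValuesAre": "MetaDocument" }
        }
    }

Because "ValuesAre": "MetaDocument" points back at the document's own type, the objects that describe properties are themselves described by this same document, so no Level N+1 is ever needed.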
{"_id": "9521", "title": "What constitutes a dead programming language?", "text": "Imagine you were elected coroner of the IEEE or some such governing body and you had to pronounce a programming language as dead. What signs would you look for? Are there any zombie languages out there that don't know they're already dead?"} {"_id": "81679", "title": "Zend Framework: Should I worry about the details of the MVC implementation?", "text": "I've been studying Zend Framework's MVC for a few weeks, and am having a really, really difficult time with it. I'm new to OOP, but I understand OOP in PHP without too much difficulty; I understand how to use some of the packages in Zend's library, etc. I understand interfaces, abstract classes, composition, etc. etc. etc. I understand MVC on a high level, but does anyone know if/where there is a resource that describes the MVC implementation at the ground level? Or should I just stop worrying about it and try to ignore the mechanics of the implementation initially? I've been working through several books and resources online on the MVC implementation, and it seems extremely complex. (Also, earlier versions had different implementations, making it slightly more confusing to understand.) The tack I'm trying to take is what I usually do when I want to understand something: go through it line by line and follow the logic around until it all becomes clear.
After trying this over and over (and not getting very far), I'm wondering if I'm going about this the wrong way. After all, OOP is all about not worrying about the implementation, right? I mean, I hear myself saying that and I cringe. I don't like not knowing what's going on, but I'm finding this extremely complicated, and I would really like to get to the part where I actually create something. From what I can tell, though, Zend is incredibly well-conceived (perhaps too thoroughly well-conceived, if that makes any sense). All of the i's are dotted and t's crossed, which makes it very difficult to dig through (lots of abstract classes, some interfaces, objects being passed here and there, etc. -- hard to follow). But the books and sites I've looked at don't go into significant detail about how the process works, only high-level descriptions, or they are outdated and using a different implementation. I would prefer to understand it at a lower level."} {"_id": "153439", "title": "Caching strategies for entities and collections", "text": "We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using the .NET cache). So the method GetWidget(int id) checks the cache using the key Widget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using Widgets_Collections_ByStatusId_{0}. If the objects are not in the cache, they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but it requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously, as additional methods are added, this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation, but I am looking for something that will get us most of the way automatically."} {"_id": "167285", "title": "Two interfaces with identical signatures", "text": "I am attempting to model a card game where cards have two important sets of features: The first is an effect. These are the changes to the game state that happen when you play the card. The interface for effect is as follows: boolean isPlayable(Player p, GameState gs); void play(Player p, GameState gs); And you could consider the card to be playable if and only if you can meet its cost and all its effects are playable. Like so: // in Card class boolean isPlayable(Player p, GameState gs) { if(p.resource < this.cost) return false; for(Effect e : this.effects) { if(!e.isPlayable(p,gs)) return false; } return true; } Okay, so far, pretty simple. The other set of features on the card are abilities. These abilities are changes to the game state that you can activate at will. When coming up with the interface for these, I realized they needed a method for determining whether they can be activated or not, and a method for implementing the activation. It ends up being boolean isActivatable(Player p, GameState gs); void activate(Player p, GameState gs); And I realize that, with the exception of calling it \"activate\" instead of \"play\", `Ability` and `Effect` have the exact same signature. * * * Is it a bad thing to have multiple interfaces with an identical signature? Should I simply use one, and have two sets of the same interface? Like so: Set<Effect> effects; Set<Effect> abilities; If so, what **refactoring** steps should I take down the road if they become non-identical (as more features are released), particularly if they're divergent (i.e. they both gain something the other shouldn't, as opposed to only one gaining and the other being a complete subset)? I'm particularly concerned that combining them will be non-sustainable as soon as something changes. The fine print: I recognize this question is spawned by game development, but I feel it's the sort of problem that could just as easily creep up in non-game development, particularly when trying to accommodate the business models of multiple clients in one application, as happens with just about every project I've ever done with more than one business influence... Also, the snippets used are Java snippets, but this could just as easily apply to a multitude of object-oriented languages."}
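A middle ground for the identical-signature problem (sketched in C# here since, per the fine print above, it is language-agnostic; all names are illustrative): keep one shared contract, but give each concept its own empty sub-interface so the two stay distinct types and are free to diverge later.

    using System.Collections.Generic;

    public class Player { public int Resource; }
    public class GameState { }

    // One shared contract...
    public interface IGameAction
    {
        bool IsUsable(Player p, GameState gs);
        void Use(Player p, GameState gs);
    }

    // ...and two marker interfaces that can grow apart without touching each other.
    public interface IEffect : IGameAction { }   // runs when the card is played
    public interface IAbility : IGameAction { }  // activated at will

    public class Card
    {
        // Strong typing prevents an Ability from slipping into the effects set.
        public ISet<IEffect> Effects = new HashSet<IEffect>();
        public ISet<IAbility> Abilities = new HashSet<IAbility>();
    }

If `IAbility` later gains, say, an activation cost, the member is added to that one interface and nothing that deals only with effects has to change.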
{"_id": "167280", "title": "What can procs and lambdas do that functions can't in ruby", "text": "I've been working in Ruby for the last couple of weeks, and I've come to the subject of procs, lambdas and blocks. After reading a fair share of examples from a variety of sources, I don't see how they're much different from small, specialized functions. It's entirely possible that the examples I've read aren't showing the power behind procs and lambdas. def zero_function(x) x = x.to_s if x.length == 1 return x = \"0\" + x else return x end end zero_lambda = lambda {|x| x = x.to_s if x.length == 1 return x = \"0\" + x else return x end } zero_proc = Proc.new {|x| x = x.to_s if x.length == 1 puts x = \"0\" + x else puts x end } puts zero_function(4) puts zero_lambda.call(3) zero_proc.call(2) This function, proc, and lambda do the exact same thing, just slightly differently. Is there any reason to choose one over another?"} {"_id": "167281", "title": "How long to spend estimating programming tasks?", "text": "For example, if I break a project into n discrete work products (say classes or functions or components), is there a time t such that n*t is a suitable amount of time to spend on estimation?"} {"_id": "167288", "title": "Is it better to define all routes in the Global.asax than to define separately in the areas?", "text": "I am working on an MVC 4 project that will serve as an API layer of a larger application. The developers that came before me set up separate `Areas` to separate different API requests (i.e. `Search`, `Customers`, `Products`, and so forth). I am noticing that each `Area` has separate area registration classes that define routes for that area. However, the routes defined are not area-specific (i.e. `{controller}/{action}/{id}` might be defined redundantly in a couple of areas). My instinct would be to move all of these route definitions to a common place like the `Global.asax` to avoid redundancy and collisions, but I am not sure if I am correct about that."} {"_id": "152600", "title": "Should functions of a C library always expect a string's length?", "text": "I'm currently working on a library written in C. Many functions of this library expect a string as `char*` or `const char*` in their arguments.
I started out with those functions always expecting the string's length as a `size_t` so that null-termination wasn't required. However, when writing tests, this resulted in frequent use of `strlen()`, like so: const char* string = \"Ugh, strlen is tedious\"; libFunction(string, strlen(string)); Trusting the user to pass properly terminated strings would lead to less safe, but more concise and (in my opinion) more readable code: libFunction(\"I hope there's a null-terminator there!\"); So, what's the sensible practice here? Make the API more complicated to use, but force the user to think about their input, or document the requirement for a null-terminated string and trust the caller?"} {"_id": "210799", "title": "Where to validate domain model rules that depend on database content?", "text": "I'm working on a system that allows Administrators to define Forms that contain Fields. The defined Forms are then used to enter data into the system. Sometimes the Forms are filled in by a human via a GUI; sometimes the Form is filled in based on values reported by another system. For each Field, the Administrator can define a Validation Rule that limits the allowed values for the Field. The Validation Rules can be anything from \"the value entered in the Field must be True or False\" to \"the value entered in the Field must exist in column A of table B in the database\". The Administrator may at any time change the Validation Rule for a Field. In this scenario, what in your opinion is the most suitable place for validating that each Field is filled correctly? I currently have two main approaches in mind: **Option #1: Validate in the Domain Model** Each Field object would contain the Validation Rule specified by the Administrator. The Field objects would also have a reference to an IValidator. When an attempt is made to set the value of the Field, the Field would pass the given value and the Validation Rule to the IValidator. If the given value is not valid, a ValidationException would be thrown and appropriately handled in the GUI/interface to the other system. _Pros:_ * Strong protection against Fields being accidentally assigned values that violate the Validation Rule _Cons:_ * The Data Access Layer needs to be able to bypass the validation and construct Fields that violate the current Validation Rule. Despite the Administrator changing the Validation Rule for a Field, we still need to be able to construct Field objects based on the old data, e.g. when rendering a Form that was filled in years ago. This could potentially be resolved by storing the current Validation Rule whenever we store the Field. * In this design, the Field model has an indirect link to the Data Access Layer/Repository via the IValidator. The injection of Services/Repositories into Domain Models seems to be generally frowned upon. **Option #2: Validate in a Service** Try to ensure that all attempts to set the value of a Field pass through a Service that ensures the Validation Rule holds. If the Validation Rule is violated, throw a ValidationException. Of course, the Data Access Layer would **not** use the Service when creating Field objects that have previously been persisted in the DB. _Pros:_ * Does not violate the \"don't inject Services/Repositories into your Domain Models\" thinking. * No need to persist the current Validation Rule when persisting the Field. The Service can simply look up the current Validation Rule for the Field; when looking at history data, the value of the Field will not be changed. _Cons:_ * No guarantee that all logic that should use the Service to set the Field value actually does so. I see this as a major drawback; all it seems to take is someone writing \"thisField.setValue(thatField.getValue())\" and the Validation Rule of thisField might be violated without anyone being the wiser. This could potentially be mitigated by ensuring that the value of the Field matches the Validation Rule when the Data Access Layer is about to persist the Field. I currently prefer Option #1 over Option #2, mainly because I see this as business logic and feel that Option #2 poses a greater risk of introducing bad data to the system. Which option do you prefer, or is there another design that fits this scenario better than the two options described? **Edit (Complexity of validations)** The validation cases that have come up for now are relatively simple; the Field value must be e.g. numeric, a date, a date with a time, or an existing value in a database column. However, I suspect the complexity will gradually increase over time. For example, the validation solution needs to be built with internationalization in mind - things such as Dates may be input in a locale-specific syntax. I've decided to proceed with Option #1 for now, attempting to take care not to assign too many responsibilities to the Domain Model. Those facing a similar situation may also want to check out the related questions Validation and authorization in layered architecture and Data input validation - Where? How much?."}
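To make Option #1 concrete, here is a minimal C# sketch of the shape described above; the names and the string-based rule are illustrative only, and the static `Reconstitute` method is one assumed way of giving the Data Access Layer its validation bypass while storing the rule that was in force:

    using System;

    public interface IValidator
    {
        bool IsValid(string rule, string value); // may consult the database for rules like "exists in column A of table B"
    }

    public class ValidationException : Exception
    {
        public ValidationException(string message) : base(message) { }
    }

    public class Field
    {
        private readonly IValidator validator;
        public string Rule { get; private set; }
        public string Value { get; private set; }

        public Field(IValidator validator, string rule)
        {
            this.validator = validator;
            Rule = rule;
        }

        public void SetValue(string value)
        {
            if (!validator.IsValid(Rule, value))
                throw new ValidationException("Value violates rule: " + Rule);
            Value = value;
        }

        // For the Data Access Layer only: rebuilds historical Fields with the
        // Rule that applied when they were saved, deliberately skipping validation.
        public static Field Reconstitute(IValidator validator, string rule, string value)
        {
            return new Field(validator, rule) { Value = value };
        }
    }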
{"_id": "190482", "title": "Why use a database instead of just saving your data to disk?", "text": "Instead of a database I just serialize my data to JSON, saving and loading it to disk when necessary. All the data management is done in the program itself, which is faster AND easier than using SQL queries. For that reason I have never understood why databases are necessary at all. Why should one use a database instead of just saving the data to disk?"} {"_id": "190485", "title": "Client Server .NET application with queuing message", "text": "I am new, so forgive me if my question is mistaken or anything; just give me an alert and I'll be glad to fix it. My team and I are about to develop a system where the database is located on a private server, and the application is distributed between clients in distant locations. It is actually a simple CRUD application, but our primary concern is how poor the internet connection is for some remote clients. I thought to give WCF message queuing a try, so that when the connection is down I can save the message and send it again later when it is up, but I don't have a really clear solution in mind. Currently, my solution would be something like this: > **CLIENT** > > **MySolution.sln** > > \-- **MyDataAccess** > > \---- Entities (Contains my object class definitions and properties) > > \---- Repositories (Handles database communication) > > \---- Services (Handles message queue to server CRUD) > > \-- **MyClassLibraries** (Contains third party's DLLs) > > \-- **MyHelper** (Contains helper classes and functions) > > \-- **MyCore** (Main application project) > > \---- Model > > \---- View > > \---- ViewModel and for the server: > **SERVER** > > \-- **MyBackgroundService** (Responsible for fetching incoming messages and doing CRUD to the database) Is there a better solution to this? I am not good enough yet to see it. Please let me know if I violate some rules here. Cheers!"}
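The core of the store-and-forward idea, independent of WCF, fits in a few lines; this C# sketch is only an assumed shape for how the Services layer could persist messages locally and drain them when the link returns (the file-per-message layout and the names are placeholders):

    using System;
    using System.IO;
    using System.Linq;

    public class Outbox
    {
        private readonly string folder;

        public Outbox(string folder)
        {
            this.folder = folder;
            Directory.CreateDirectory(folder); // no-op if it already exists
        }

        // Persist first, so nothing is lost while the connection is down.
        public void Enqueue(string messageJson)
        {
            File.WriteAllText(Path.Combine(folder, Guid.NewGuid() + ".json"), messageJson);
        }

        // Called periodically; sends pending messages oldest-first, stopping on failure.
        public void Drain(Func<string, bool> trySend)
        {
            foreach (var file in Directory.GetFiles(folder, "*.json").OrderBy(File.GetCreationTimeUtc))
            {
                if (!trySend(File.ReadAllText(file)))
                    return; // still offline; retry on the next run
                File.Delete(file);
            }
        }
    }

WCF's netMsmqBinding gives comparable behavior with MSMQ doing the queuing and retries, at the cost of requiring MSMQ on every client.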
{"_id": "190488", "title": "How to store weekly-occurring time spans in Java objects", "text": "My school programming club is setting up a tutoring program. We have 12 tutors who specialize in a range of different languages, and each is available for a different set of hours each day of the week. I want to set up a web interface where students can select the language they need help in, and then be shown a list of the time spans each day that there is a tutor in that language available to help them. My main question at this point is how I should go about storing tutor availability data - should I use a database (and if so, how), or should I simply store availabilities as fields in POJO \"Tutor\" objects? If the latter would be best, what data type should I use? Can I get by simply storing them as strings, or should I use Joda or java.util Date objects? The task that seems particularly tricky to me at this point is combining the availabilities of multiple tutors into a single time span for each day - i.e., if I have one tutor available from 10-2 and one available from 12-4, I want a single line saying \"10-4\" to be presented to the user. I'm just beginning to design my approach to this task, so I'm very open to general suggestions and/or having unforeseen problems pointed out to me. I would like to use a Java JApplet as my web interface."} {"_id": "197457", "title": "Services in Model Layer", "text": "I understand services should have no state and no business logic. * How can you implement a service like AuthentificationService considering these rules? * Should all the methods in a service be static?"} {"_id": "157078", "title": "Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?", "text": "I am currently getting involved in a startup. I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me. For my day job I work at a software house that uses Microsoft tech on a day-to-day basis; we utilise .NET, SqlServer, Windows Server etc. However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was \u00a3100 a month. Also, if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out \u00a310's of \u00a3000's a year in SQL Server / Windows Server licenses etc. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was waaaaaay lower than Windows hosting. One place was offering a machine with 2 cores for less than \u00a320 a month. This got me thinking maybe the way to go is open source on Linux. As I write a lot of Javascript at work (I'm working on a single-page Backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL, why not use an open source NoSQL database like MongoDB, which has great support on NodeJS? My only concern is that some of the work the application is going to do is dynamically building images and various other image-related stuff, i.e. stuff that is quite CPU heavy - so I'm thinking of maybe writing anything CPU heavy in C++ and consuming it as a module in Node. That's the background - but basically, is Linux a good match for: 1. Hosting a NodeJS/Express site? 2.
Using a NoSQL DB like MongoDB? And is it a good idea to move to these unfamiliar technologies to save money? * * * ## 3 MONTH UPDATE I've been working on this for the past few months now so thought I would give an update in case anyone is interested. In the end I decided against using a NodeJS & Linux stack for the simple reason of time. I am doing this startup on the side, so I am working 9 hour days, then going home and working until late on the startup. While working in this way I obviously need to be as efficient with my time as possible, or I will never end up shipping the product. After taking some of the advice on this thread I did apply for Microsoft BizSpark, and was accepted. This means I now have access to Visual Studio license, Windows Server license etc, all for free. Which is awesome. Hopefully by the time we are required to begin paying for everything we will be turning over enough that will make it a non-issue. Do not think I am only using Microsoft tech, however, as I have tried to use open source stuff where possible. The main place I have done this is my data layer, where I have decided to use PostgreSQL and MongoDB. I am also using BackboneJS on my front end. Below is a summary of the tech/frameworks I am currently using: * Standard DB stuff: PostreSQL * Logging & Data Store: MongoDB * ORM: Entity Framework 5 * Core libraries: .NET (C#) * Web Framework: ASP.NET MVC3 * UI: Razor view engine / BackboneJS"} {"_id": "94157", "title": "Inserting copyright notice", "text": "What is the easiest way to insert copyright notice in lots of PHP files. It's not possible to do it manually."} {"_id": "68178", "title": "Which skills would you look for in an offshore junior developer?", "text": "Suppose you have to interview a developer for a (not senior) developer position. The catch is that the developer is not from your country and your company would have to invest some money and time to bring him to your country. You have to make the interview. What skills would you look for in such a candidate? **For the sake of context** , suppose it is a position of a web developer or desktop developer. Thanks in advance **EDIT** : This question is related to the _developer_ point of view, so in this case, the answers should be oriented to help the developer in which to expect..."} {"_id": "116123", "title": "How should I go about learning PHP given a background in C#.NET?", "text": "I have developed a few websites and applications using C#, and lately I've been developing a website in ASP.NET MVC. To be honest, this has pissed me off somewhat as it has me feeling like I have zero knowledge. So many things work under the hood in .NET that I'm just not able to focus and understand what is left to the developer and how these things are done. Now, the question that prompted me to make this post is: **How should I proceed for learning PHP from a very basic standpoint?** Mind you, I have no idea about this language at all, just seen some code and heard people talking."} {"_id": "157072", "title": "Refactor class (extract methods) in a main / helper classes", "text": "Simply spoken, one of my c# classes got too big and I'm currently splitting this class in several subclasses by clustering semantically related methods (actually actions, which do side effects). 
{"_id": "98867", "title": "Is information hiding more than a convention?", "text": "In Java, C# and many other strongly-typed, statically checked languages, we are used to writing code like this: public void m1() { ... } protected void m2() { ... } private void m3() { ... } void m4() { ... } Some dynamically checked languages don't provide keywords to express the level of \"privateness\" of a given class member and rely on coding conventions instead. Python, for example, prefixes private members with an underscore: def _m(self): pass It can be argued that providing such keywords in dynamically checked languages would add little use since it is only checked at runtime. However, I can't find a good reason to provide these keywords in statically checked languages, either. I find the requirement to fill my code with rather verbose keywords like `protected` both annoying and distracting. So far, I have not been in a situation where a compiler error caused by these keywords would have saved me from a bug. Quite the contrary: I have been in situations where a mistakenly placed `protected` prevented me from using a library. With this in mind, my question is: **Is information hiding more than a convention between programmers used to define what is part of the official interface of a class?** Can it be used to secure a class' secret state from being attacked? Can reflection override this mechanism? What would make it worthwhile for the compiler to **enforce** information hiding?"} {"_id": "53272", "title": "The Default State of Unused Memory", "text": "In an embedded device, during the initialization of memory locations, are there any conventions that are being practiced? I mean, say, setting every byte to zero, or 0xFF, or any other value."} {"_id": "162102", "title": "What makes for a good architect/manager/lead developer?", "text": "I am the Lead Developer for a small software company. Over the past two years, my team has grown from one developer (me) to a group of about nine people. Most of us are very capable, senior engineers (20+ years of experience building software per person), so very little hand-holding is generally necessary. We use Scrum to manage our efforts, and we usually get a lot done quickly with minimal written requirements. As the team has grown, I've reached the point where it is difficult for me to retain technical oversight over the entire project while also writing significant amounts of new code myself, so it is time for me to adjust my role. How can I make myself most useful to the team when I'm no longer spending most of my time developing? **My goal is to allow my group to grow even further (i.e.
increase Scrum velocity) by adding more developers**, so I don't want to simply become the \"architecture police\" who imposes my will on the team. In other words, I want to be the guy who helps things work better/smoother, rather than the guy who slows things down by adding an unnecessary layer of bureaucracy. Still, one of our main risks is that things will spin out of control if we add more people without having enough structure to keep us all on the same page. What is the best way to achieve my goal?"} {"_id": "90645", "title": "Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?", "text": "I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this and I wanted to ask the question to see if anyone's experience would suggest avoiding this or altering it in any way. Here's our proposal: Versions are released in this format: MAJOR.MINOR.YMDD.BN. Here it is broken out: * MAJOR & MINOR are typical; we'll increase MINOR when we feel the code and new feature sets warrant it, once every few months most likely. MAJOR will increase ~yearly. * YMDD: Y will be the last digit of the current year, so \"1\" for 2011, \"2\" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09, for example). DD of course is the day, padded with a zero for days under 10. * BN: BN is the build number and increases by one anytime we make a change to a branch of the code represented by the build, for example: If I were to make a build today, our release would be version 5.0.1707.1. I release to QA today, and 3 days from now QA finds that a change broke the save functionality on a page. Instead of me changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and re-release 5.0.1707.2 back to QA. In short, anytime a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Anytime we make a new release from our active dev branch, we'd come up with a new version based on the M/D of the release using the outlined strategy. We do this once every 2-3 weeks. Are there holes or pitfalls with this? If so, what are they? Thanks. **EDIT** To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits; it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc."} {"_id": "36840", "title": "listing my programming experience on my resume", "text": "On my resume, I list myself as having \"7 years of hands-on experience programming in C++\". To clarify, I am a self-taught C++ programmer with some college courses thrown into the mix. I've worked on some small personal projects, and I consider myself to be more competent than a CS grad with no actual real-world experience, though by no means am I anywhere near being an expert. The issue is this... I keep getting calls and emails from recruiters that see my resume on job sites, inquiring about my interest in senior developer positions, contracts, etc., which I feel I am completely under-qualified for. My resume only has 3 years of work experience listed (which is all IT stuff), so when they ask about my prior experience in C++, I have to clarify that it was personal work, not professional work.
I'd really like a job as a developer, but I don't want to get hired for something that I can't handle, nor do I want to misrepresent myself while trying to show off my strengths. I deliberately chose the phrasing \"hands-on\" to imply that it wasn't professional. How should I phrase my C++ experience on my resume to clarify it better?"} {"_id": "198309", "title": "Is XML, HTML/CSS, XSL analogous to Model, View, Controller?", "text": "For some time in personal projects I have been using XSL to convert my raw XML data into human-friendly HTML/CSS (in simple projects I have no JavaScript, so let's leave that out of the equation for simplicity). Now I'm trying to understand the MVC architectural pattern (not my first experience with it, but it is taking some work to go from understanding it basically to understanding it well), and I'm wondering if there is an analogy between the two. * **XML**: data model; lacks the complexity/logic of a full-blown model component, but the intent seems similar * **XSL**: converts raw data for viewing -- seems like a controller * **HTML/CSS** (rendered): the viewable output Is this analogy fitting? What in it matches well and what does not? (One dissimilarity, I suppose, is that in my example I am not getting any input back from the view -- only producing output.)"} {"_id": "108719", "title": "Has JPA replaced CMP?", "text": "No question is too stupid, right :) I came across this on Wikipedia: > The Java Persistence API replaces the persistence solution of EJB 2.0 CMP > (Container Managed Persistence). My understanding was that CMP is still there, i.e. pooling, transactions etc. can be done via CMP, whereas JPA is ORM to me. I did not think they are even the same thing. I thought maybe CMP vs JTA might make sense, but obviously not. I know how off track I am, but that's why I am here; maybe someone can help me get things in perspective."} {"_id": "71825", "title": "Does current evidence support the adoption of Contextual over Canonical Data Models?", "text": "The \"canonical\" idea is pervasive in software; patterns like Canonical Model, Canonical Schema, Canonical Data Model and so on seem to come up again and again in development. Like many developers, I've often followed, uncritically, the conventional wisdom that you _need_ a canonical model, otherwise you'll face a combinatorial explosion of mappers and translators. Or at least, I _used to_ do that until a couple of years ago when I first read the somewhat-infamous EF Vote of No Confidence: > The hypotheses that once supported the pursuit of canonical data models > didn\u2019t and couldn\u2019t include factors that would be discovered once the idea > was put into practice. We have found, through years of trial and error, that > using separate models for each individual context in which a canonical data > model might be used is the least complex approach, is the least costly > approach, and the one that leads to greater maintainability and > extensibility of the applications and endpoints using contextual models, and > it\u2019s an approach that doesn\u2019t encourage the software entropy that canonical > models do. The essay presents no evidence of any kind to support its claims, but did make me question the CDM approach long enough to try the alternative, and the resulting software didn't explode, literally or figuratively. But that doesn't mean a whole lot in isolation; I could have just been lucky. So I'm wondering, has any serious research been done into the practical, long-term effects of having a canonical model vs.
contextual models in a software system or architecture? Or, if it's too early to be asking that, then have any developers/architects written about personal experiences switching from a CDM to independent contextual models, or vice versa, and what the practical effects were on things like productivity, complexity, or reliability? What about the differences at different levels, i.e. using the same model across a single application vs. using it across a system of applications or an entire enterprise? (Facts only, please; war stories are welcome but no speculation.)"} {"_id": "71824", "title": "Python Metadata Files and Project Organization", "text": "I have a project in Python where I have a server. The server can have a variety of different services, but I don't know what those are until I start it up. I want it to do a search of available plugin files to know what services are available. Right now, I'm doing that by just having the metadata files as Python source code and loading them using imp.load_source. My question is: where do I store these metadata files? The metadata files need to contain information pointing to files on the filesystem. For example, one of the programs the service needs to launch is a Java program. I have the jar file and I compile it myself, but when I'm working on the project the compiled file goes into the build directory. When I'm going to want to install this on a machine, I'll need to store the files somewhere else. I want to keep the source directory and build directory separate (as it makes the clean command easier). What is the standard practice for handling the need for metadata files, and where should they be located? EDIT: I'm trying to target both Windows and Linux systems (and hopefully Mac systems too, but that's less important). For Linux, most important are the major distributions (such as Fedora and Debian) and hopefully others (although this is dependent on others trying to use it on whatever distribution, as I cannot test all of them and will only test on Fedora and Debian myself)."} {"_id": "168263", "title": "3d point cloud render from x,y,z 2d array with texture", "text": "Need some direction on 3d point cloud display using OpenGL in C++ (VS2008). I am brand new to OpenGL and trying to do a 3d point cloud display with a texture. I have 3 2D arrays (each the same size, 1024x512) representing the x, y, z of each point. I think I am on the right track with glBegin(GL_POLYGON); for(int i=0; i<1024; i++) { for(int j=0; j<512; j++) { glVertex3f(x[i][j], y[i][j], z[i][j]); } } glEnd(); Now this loads all the vertices into the buffer (I think), but from here I am not sure how to proceed. Or I am completely wrong here. Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display. I understand that this may be a very basic OpenGL implementation for some, but for me this is a huge learning curve. So any pointers, nudge or kick in the right direction will be appreciated."} {"_id": "168260", "title": "Is it the job of a developer to suggest IT requirements?", "text": "I am the only developer working on a web application which is nearing its end. Now we are looking into making it live in maybe a couple of months' time. This is a web application for a non-IT company. Though they have their own internal IT team, they have asked me what the hardware requirements will be for the live servers, e.g. RAM, 32-bit or 64-bit.
Shouldn't the internal IT team be doing this, or, since I am the only person working on the project, is it my responsibility to let them know of any specific hardware requirements which may impact the performance of the project? The reason I am asking this question is that I have not done this before. Up to now I have always been given a server and asked to deploy apps on it; I never used to worry about the server configuration, etc."} {"_id": "58219", "title": "Advice: How to convince my newly anointed team lead against writing the code base from scratch", "text": "I work in a pretty renowned MNC, and the module that I work in has been assigned to a new \"lead\". The code base is pretty huge (~130K or more, with interdependencies on other modules), but stable - some parts have grown ugly over the years, but it's provably in a working state. (Our products have been running on it for years, even new ones.) The problem is, our lead wants to rewrite the code _from scratch_, to encompass \"finer granularity and a proactive design\". I know in my gut that's not a very good idea, but how do I convince him/the rest of the team (who are pretty much more senior than me in terms of years of experience), without sounding too pedantic myself (Thou shalt not rewrite, as Joel et al have clear articles prohibiting it)? I have a good working relationship with the person concerned, and don't want to ruin it, but neither do I want to be party to a decision which would surely plague us for _years_ to come!! Any suggestions for a milder, yet effective approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but at kernel level with all the critical functionalities for our product. I hope now you know why I sound so apprehensive!!"} {"_id": "62880", "title": "More job responsibilities, yet still entry level?", "text": "I currently work for a large-ish manufacturing company that has a variety of pay grade levels. I've put some serious time and effort into my work over the past 4 years and I've managed to make a significant enough impact that they moved me into a new branch of the company to start an in-house software development branch dedicated to serving our international interests. This has implied a ton of traveling and working at night due to time zone differences. While this is a great opportunity and sounds impressive to my friends/family, I'm still working at the lowest possible pay grade level. I assumed with all the additional job responsibilities that I'd also receive some kind of promotion. Is this common? Do I need to speak up in order to get a promotion? Do I even deserve one? Should I just wait it out? I'm lost..."} {"_id": "62883", "title": "Is it reasonable for QA department to get higher average salary than that of development department?", "text": "I just talked to a friend. He said that in his company, QA people get a higher salary than developers; on average it is 2 times higher. I'm quite surprised, because I had thought that QA people would get a lower salary than developers (on average). (True confession: my company does not have a QA department.) So, how does your company's QA salary compare to developers'? Is it reasonable for a QA department to get a higher average salary than that of the development department?"} {"_id": "104844", "title": "Starting a second undergraduate degree", "text": "## Some background I started programming when I was about 10 years old and have truly loved it ever since.
This year I graduated from a mediocre university in Eastern Europe with a CS degree, and two months later got a job offer at Google in Mountain View as a software engineer. ## The problem The problem is with my education - when I went through the curricula of the top 10 US universities (and a lot of my co-workers are from such universities), and watched some of the video lectures these universities make available online and the assignments their students had during their studies, I noticed that my university does not even come close to their level. I just feel that I was wasting time at my university, when I see that others got incomparably better training in CS. I would say that the courses at my university covered at most 30% of what the top-10 universities did - despite having identical classes. They just have better teachers, better curricula, harder projects, better resources, and they go more in-depth. Now, I feel quite comfortable with solving algorithmic problems and with programming in general, but still - each time I look at any of the undergraduate courses at these universities I find a lot of nuances that weren't covered in my classes. Some of them are significant, others less so. ## The question Now being in the US, and being financially and legally able to afford paying for an undergraduate CS course - is it worth going to one of the top 10 US universities and completing a second CS bachelor's degree to fill in the gaps left by my first degree?"} {"_id": "58216", "title": "Why isn't protection against SQL injection a high priority?", "text": "On Stack Overflow, I see a lot of PHP code in questions and answers that have MySQL queries that are highly vulnerable to SQL injection attacks, despite basic workarounds being widely available for more than a decade. Is there a reason why these types of code snippets are still in use today?"} {"_id": "148635", "title": "Javascript MVC in principle", "text": "Suppose I want to implement `MVC` in JavaScript. I am not asking about MVC frameworks. I am asking how to build it _in principle_. Let's consider a search page of an e-commerce site, which works as follows: * The user chooses a product and its attributes and presses a \"search\" button. * The application sends the search request to the server and receives a list of products. * The application displays the list on the web page. I think the _Model_ holds a `Query` and a list of `Product` objects and \"publishes\" such events as \"query updated\", \"list of products updated\", etc. The _Model_ is not aware of the DOM and the server, of course. The _View_ holds the entire DOM tree and \"subscribes\" to the _Model_ events to update the DOM. Besides, it \"publishes\" such events as \"user chose a product\", \"user pressed the search button\", etc. The _Controller_ does not hold any data. It \"subscribes\" to the _View_ events, calls the server and updates the _Model_. Does it make sense?"} {"_id": "148631", "title": "designing large scale applications in a low level language", "text": "I have been learning C for a while but still get confused about designing large programs in C (a large application such as the Linux kernel). Moving from Java, where you have classes, it's difficult to understand how to design a large application in C.
What advice/links can people offer for moving from a high-level language to designing applications in a low-level language such as C?"} {"_id": "148638", "title": "Copyright question about building a similar app", "text": "A client wants me to build a Wordpress plugin similar to an app on Windows Phones. Although the app that I will build will be different in many ways (such as the UI, logos, graphics and a couple more functions), the basic functions will be similar to the app currently available on Windows Phone. I am doing this as a freelance project. I would like to know: once I complete the development and send over the files to the client, in case the original owner of the app on the Windows platform comes out saying that the Wordpress plugin has a lot of features from their app, who would be the one in trouble? Can he file a case against me, or will it only be my client who will be held responsible, as he will own all the rights after I send him the files and he makes a payment? Update: thank you for all your inputs and help. Much appreciated. To clarify some of the questions raised: 1. Yes, this is a work-for-hire project. 2. The freelancing contract says that all rights and files, including source code, have to be transferred to the client after he makes the payment, and thereafter he owns all the code and the app. 3. The app to be made is a utility app, but I am not sure how common such a kind of app is and whether I can really relate it to the accounting app example. 4. I am writing my own code; I have no access to the original app source"} {"_id": "200447", "title": "View is calling constructor method in the Model instead of passing model to the controller method", "text": "The problem is that the UserTypes property (used to populate a drop-down list in the AddUser view) is not being retained after POSTback of the form. **Here is my AddUserModel:** (which inherits from EditUserModel, which doesn't have properties for UserTypes/UserTypesSelectList - probably part of the problem?) public class AddUserModel : EditUserModel { public AddUserModel() : this(new Dictionary<string, string>()) { } public AddUserModel(Dictionary<string, string> userTypes) { UserTypes = userTypes; } //properties etc public Dictionary<string, string> UserTypes { get; private set; } public SelectList UserTypesList { get { return new SelectList(UserTypes, \"key\", \"value\"); } } } **The view has the following** but it calls the parameterless constructor in the above code first instead of the AddUser() method in the controller: using (Html.BeginForm(c => c.AddUser(Model) I can see when debugging that this is because the parameterless constructor method in the AddUserModel is being called first instead of the controller method being called (thereby instantiating an empty AddUserModel). I found this question on this site but I still do not understand. Debugging shows that after POSTback of the form the AddUserModel contains no data, but the EditUserModel does contain the data (except the UserTypes data, because it doesn't have a property for that). Should both the Add and Edit models have the same properties, i.e. UserTypes and UserTypesSelectList? Is there something I'm missing in terms of how models inherit from each other and relate to views? **Edit** After a bit of research and deciding to use a shared model I was able to fix the above. Please refer to my answer for more details."}
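For anyone hitting the same thing: ASP.NET MVC's DefaultModelBinder recreates the posted model through its parameterless constructor and only fills the properties that came back in the form, so lookup data such as UserTypes always has to be repopulated on the server before re-rendering. A sketch of that shape in C# (`RefillUserTypes` and `LoadUserTypes` are hypothetical helpers, not from the code above):

    using System.Collections.Generic;
    using System.Web.Mvc;

    public class UsersController : Controller
    {
        [HttpPost]
        public ActionResult AddUser(AddUserModel model)
        {
            if (!ModelState.IsValid)
            {
                // The binder never round-trips the dictionary; reload it before re-rendering.
                model.RefillUserTypes(LoadUserTypes()); // hypothetical helper on the model
                return View(model);
            }
            // ... persist the new user ...
            return RedirectToAction("Index");
        }

        private Dictionary<string, string> LoadUserTypes()
        {
            return new Dictionary<string, string>(); // stub; real data would come from the repository
        }
    }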
Please refer to my answer for more details."} {"_id": "142779", "title": "Introducing Agile development after traditional project inception", "text": "About a year and a half ago, I entered a workplace that claimed to do Agile development. What I learned was that this place has adopted several agile practices (such as daily standups, sprint planning and sprint reviews) but none of the principles (just-in-time / just-good-enough mentality, exposing failure early, rich communication). I've now been tasked with making the team more agile and I've been assured that I have complete buy-in from the devs and the business team. As a pilot program, they've given me a project that just completed 15 months of requirements gathering, has a 110 page Analysis & Design document (to be considered as \"written in stone\"), and where I have no access to the end users (only to the committee made up of the users' managers who won't actually be using the product). I started small, giving them a list of expected deliverables for the first 5 sprints (leaving the future sprints undefined), a list of goals for the first sprint, and I dissected the A&D doc to get enough user stories to meet the first sprint's goals. Since then, they've asked why we don't have all the requirements for all the sprints, why I haven't started working on stuff for the third sprint (which they consider more important but is based off of the deliverables of the first 2 sprints) and are pressing for even more documentation that my entire IT team considers busy-work or unrelated to us (such as writing the user manual up-front, documenting all the data fields from all the sprints up front, and more \"up-front\" work). This has been pretty rough for me as a new project manager, but there are improvements I have effectively implemented such as scrumban for story management, pair programming, and having the business give us customer acceptance tests up front (as part of the requirements documentation). So my questions are: 1. What can I do to more effectively introduce change to a resistant business? 2. Are there other practices that I can introduce on the IT side to help show the business the benefits of agile? 3. The burden of documentation is strangling us - the business still sees it as a risk management strategy instead of as a risk. What can we do to alleviate their documentation concerns and demands (specifically the quantity of documentation and their need for all of it up front)? 4. We are in a separate building from our business, about 3 blocks away, and they refuse to have their people on the project co-habitate because that person \"won't be able to work on their other projects while they're at our building.\" They expect us to always go over there and to bundle our questions so that we can ask them all at once and not waste that person's time with \"constant interruptions.\" What can we do to get richer communication from them? Any additional advice would also be appreciated. Thanks!"} {"_id": "132349", "title": "How can we inspire \"hacker\" culture?", "text": "I came across these essays the other day whilst dreaming of becoming a Lisp hacker: http://paulgraham.com/yahoo.html, http://paulgraham.com/gba.html and they crystallised for me that the reason for the unrelenting failure I'm seeing in so many software projects I find myself working on is that programming is seen as a low-end commodity resource that is an afterthought in comparison to the heavyweight layers of non-technical management that strategise above the code. 
Given our commodity nature, what practical steps can we take to inspire hacker culture in our fellow commodities?"} {"_id": "79390", "title": "Learning About Data Feeds", "text": "How and where can I learn more about financial data feeds? I'm specifically interested in learning how feeds are implemented, i.e. are feeds implemented as SOAP/RESTful Web Services? Or, if they are implemented differently, how so and as what?"} {"_id": "252298", "title": "Moving a multi-activity android app into a navigation drawer activity", "text": "I'm making an Android app and have it structured as a simple multi-activity app. I want to add a navigation drawer to it, and it looks like I will need to completely restructure the app and load everything as fragments. (http://developer.android.com/training/implementing-navigation/nav-drawer.html) How should I begin performing this \"movement\" of code? Should I completely copy and paste my class variables and functions into a giant file? This definitely feels like it would be prone to bugs and errors. Perhaps there's a method of separating the class files but still having one navigation drawer?"} {"_id": "65270", "title": "want to switch from Enterprise software development to C programming", "text": "I'm a software engineer with 5 years of IT experience in a reputed (tier 1 Indian MNC) service-based software company, working in PeopleSoft and Informatica. But unfortunately I'm interested in switching to a programming career, esp. C. Can anyone please let me know what is the best way to switch to a C programming career? Also, what are the pros and cons of such a switch?"} {"_id": "198384", "title": "Mono patent safety", "text": "Could you please share your thoughts about Mono patent safety? Is it risky to use Mono in production for commercial projects? In the case of a WEB application, for example the following technology stack: \"Linux + PostgreSQL + C# Mono + EntityFramework(Mono) + ASP.NET MVC(Mono)\", can I be sure that if I use the frameworks implemented by the Mono Project (ADO.NET, ASP.NET/MVC, EntityFramework, WCF) and run them on Linux, I will not face prosecution from Microsoft over some patent violation? Thank you!"} {"_id": "214523", "title": "Which timeout should I set to an external service?", "text": "This service is a remote session pool. I need to ask for a session to work with other services. In most cases, this pool will have a session available, so in 15ms I will have a response. But sometimes, it will need to create a session on demand, requiring up to 800ms to create it. I have two options in mind to handle this situation: 1. To set a 15ms timeout, and to implement a retry policy with exponential backoff up to 800ms. This service will create the required session no matter whether I am connected to it. 2. To set an 800ms timeout, and to keep connected to the service until a session is available for me. In both cases, there's no guarantee that I will have a session after 800ms. **Which are the pros/cons for each option?**"} {"_id": "99732", "title": "What are some interesting problems to solve using GA's?", "text": "I want to start messing around with Genetic Algorithms (GAs) in my spare time. I've been thinking of some ways to incorporate them either into personal projects or work projects. I work in financial software - so a lot of risk analysis and things like that using derivative markets. Not sure how GA's would fit into that. 
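For reference, my mental model of a GA is just the classic evolve loop - something like this toy Python sketch (entirely my own illustration: genomes are assumed to be lists of numbers in [0, 1], the population needs at least two individuals, and `fitness` is whatever you want to maximize):

    import random

    def evolve(population, fitness, generations=100, mutation_rate=0.01):
        # classic GA loop: rank by fitness, keep the fitter half, recombine, mutate
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[: max(2, len(ranked) // 2)]
            children = []
            while len(children) < len(population):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a))    # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g if random.random() > mutation_rate
                         else random.random()        # mutate a gene
                         for g in child]
                children.append(child)
            population = children
        return max(population, key=fitness)

The interesting part, of course, is what the genome and the fitness function would represent in a finance setting.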
Was curious to see if anyone had any cool ideas about how I could use GA's to cure some boredom and do some really cool stuff!"} {"_id": "222313", "title": "What is a good use case for native XML databases?", "text": "What would be a good use case to use a native XML database such as Apache Xindice and eXist-db? I have used the XML features of SQL Server in the past and they were of great value, but there it is possible to use XML for 5% and traditional storage for 95% of the application. Which application types would benefit from 100% XML storage?"} {"_id": "186610", "title": "Good platform to teach Programming 101?", "text": "Years ago, I used to suggest Visual Basic 6 to rank beginners as a way to learn what programming is. Key point: This was NOT for career training, but just as a primer to the basic concepts of programming. I subscribe to the KISS principle (many programmers do not). Detest Microsoft as we may, the instant graphical nature of VB6 really helped kickstart things for the rank beginner (GUI control methods/events vs. cmdline programming). VB may have been a \"toy\", but it allowed real programming concepts to be applied. One could learn these UNIVERSAL topics in any language, including VB6: * Datatypes, variables and constants * Conditionals (If....Then....Else Blocks, Select Case) * Random number generation / seeding * Introduction to threads/timers * Loops (while loop, for loop) * Functions & procedures (and passing parameters) * Using arrays * GUI controls (properties, methods and events) * GUI animation (manipulating top, left, visible properties) * Creating and adding external modules * Debugging (breakpoints, watches, debug window) What environment or platform would you suggest today? Is Java/Eclipse making things too complex? To some extent, I think Java is overkill for this type of goal (OOP, etc). Would the path of least resistance be Visual Basic Express Edition 2012? Or something else entirely, like some sort of smartphone IDE? Or is that just too distracting, too complex, or too easy? And what do you lose compared to VB (think how trivial it is to learn basic animation with the Timer and .top and .left)? The KEY is to keep it EASY while not watering it down. Again, suggesting some esoteric language just because it's more \"elegant\" misses the entire point. What would you tell your neighbor's 15-year-old kid to use if he wanted to know what programming is? (He does not want to become a professional programmer, and is not a good independent learner.) Thank you."} {"_id": "137640", "title": "Why does the stack grow downward?", "text": "I'm assuming there's a history to it, but why does the stack grow downward? It seems to me like buffer overflows would be a _lot_ harder to exploit if the stack grew upward..."} {"_id": "137644", "title": "PDF help in the software manual", "text": "Just need a bit of advice here: if we embed Adobe Reader as an ActiveX control, as well as have a fallback PDF viewer (in case the user doesn't have Adobe Reader installed), would it be wiser to put screenshots in the manual using whatever the latest version of Adobe Reader is, or using the fallback reader? I can see the drawbacks and benefits of both. For one thing, the look and feel of the fallback reader is constant, whereas Adobe Reader changes, so we don't have to redo the manual as often. But we expect that 90% of users will have Adobe Reader, so what they see in the manual won't even be what's in the program if we use the fallback reader for screenshots. So what's the solution? 
Adobe, Fallback Reader or Both? (For the record, the manual is a CHM, not a PDF.)"} {"_id": "216799", "title": "Sharding / indexing strategy for multi-faceted search", "text": "I'm currently thinking about our database structure and how we modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across _all_ users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe; recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns: * I create an index for each month * By default, searches are aliased against the most recent X months * If no results are found, we can search against previous X months * As we grow, we can reduce the interval X * Each document is routed by the user ID So, this _should_ let us do the following: * search by user. This will search **all indices** across **1 shard** * search by time. This will search **~2 indices** (by default) across **all shards** Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros/cons of the above scenario."} {"_id": "216798", "title": "What is the need for OData when I have JSON?", "text": "I am trying to understand the point of OData and when it would make sense. Right now how I work is I use ASP.NET and an MVC/WebApi controller to serialize/deserialize objects into JSON and have JavaScript do something with it. From what I can tell the benefit of OData is being able to query directly from the URL ... But since I am writing the client and server code there is no need for that. Would anyone ever parse the results of an OData query in JavaScript?? Maybe OData is more about providing a generic endpoint for ALL clients to get detailed information from a query that JSON does not provide? So if I was a provider of data then I suppose that is what OData is for? Help me understand the purpose and use of REST/JSON/OData."} {"_id": "158804", "title": "Bringing your own computer to work", "text": "I recently joined a startup, and we've been growing quickly over the last year. However, I've noticed that all of the new hires (mostly sales and support) are bringing in their own laptops to work. Is that normal? Most of the developers have basic Linux boxes provided for them. Meanwhile the company keeps telling us every month that they've set a new revenue record."} {"_id": "156382", "title": "How do you keep track of the meaning of your SQL fields?", "text": "The more SQL fields I add to my tables and the more primary/foreign keys I add, the more I lose the overview of specific fields for certain scenarios like Get/Add/Delete/Update data. I use SQL Management Studio (SQL Server) to design my database in the database diagram designer. 
Maybe there is a better tool or a good approach to keep track of the meaning of all those fields?"} {"_id": "216792", "title": "Side effect-free interface on top of a stateful library", "text": "In an interview with John Hughes where he talks about Erlang and Haskell, he has the following to say about using stateful libraries in Erlang: > If I want to use a stateful library, I usually build a side effect-free > interface on top of it so that I can then use it safely in the rest of my > code. What does he mean by this? I am trying to think of an example of how this would look, but my imagination and/or knowledge is failing me."} {"_id": "216791", "title": "Best Practices for serializing/persisting String Object Dictionary entities", "text": "I'm noticing a trend towards using a dictionary of string to object (or sometimes string to string), instead of strongly typed objects. For example, the new Katana project makes heavy use of `IDictionary<string, object>`. This approach avoids the need to continually update your entity classes/DTOs and the database tables that persist them with new properties. It also avoids the need to create new derived entity types to support new types of entity, since the Dictionary is flexible enough to store any arbitrary properties. Here's a contrived example: class StorageDevice { public int Id { get; set; } public string Name { get; set; } } class NetworkShare : StorageDevice { public string Path { get; set; } public string LoginName { get; set; } public string Password { get; set; } } class CloudStorage : StorageDevice { public string ServerUri { get; set; } public string ContainerName { get; set; } public int PortNumber { get; set; } public Guid ApiKey { get; set; } } versus: class StorageDevice { public IDictionary<string, object> Properties { get; set; } } Basically I'm on the lookout for any talks, books or articles on this approach, so I can pick up on any best practices / difficulties to avoid. Here are my main questions: * Does this approach have a name? (The only thing I've heard used so far is \"self-describing objects\".) * What are the best practices for persisting these dictionaries into a relational database? Especially the challenges of deserializing them successfully with strongly typed languages like C#. * Does it change anything if some of the objects in the dictionary are themselves lists of strongly typed entities? * Should a second dictionary be used if you want to temporarily store objects that are not to be persisted/serialized across a network, or should you use some kind of namespacing on the keys to indicate this?"} {"_id": "157926", "title": "Good reasons for destroying mutexes with waiting threads", "text": "I'd like to see some valid examples of needing to let a thread enter a locked, non-reentrant mutex, then destroying the mutex (which supposedly terminates the thread). I can't think of any good reason to do this. I can't remember precisely what we were arguing about earlier today, but my colleague insists such techniques are needed for \"point-of-no-return\" problems (again, forgive me, but I forgot the example he gave me)."} {"_id": "68824", "title": "Are all dirty fixes created equal?", "text": "I have seen some dirty code in my time. 
I have heard varied feedback about \"dirty fixes\" too: a) a dirty \"fix\" is not a fix b) some fixes are dirtier than others, but dirty fixes are not acceptable c) some fixes are dirtier than others, and some dirty fixes are more acceptable than others d) dirty fixes show a pragmatic use of one's time but should be discouraged e) dirty fixes show a pragmatic use of one's time and should not be discouraged f) other Which of the above would you agree with, and why? How do you stop attitudes like (e) from becoming standard practice?"} {"_id": "68821", "title": "What techniques should be used to ensure clear communication from the customer?", "text": "Some customers use terms from the domain language interchangeably and incorrectly. This has, on several occasions, led to an implementation of a feature in a way that was not desired by the customer. Are there any techniques that can be used to enforce the correct use of terms? Are there any techniques that can be used to prevent someone from handing \"back of a cigarette packet\" requirements to a developer?"} {"_id": "68820", "title": "do you use built in Stack/Queue/List or Vector in your program?", "text": "Yesterday I saw a post where a guy said that the built-in Stack/Queue are 5x slower than a hand-coded Stack/Queue! Is it true? If true, then why do Microsoft (C#.NET, VB.NET) and Oracle (Java) provide such ADTs? Which one do you prefer - writing your own or using the built-in ones? [I think most people prefer the built-in ones, as the code has been tested many times and is very, very optimized]"} {"_id": "188501", "title": "Is there an idiom for a loop that executes some block of instructions between iterations? (In Ruby in particular)", "text": "I often need to do some operations in a loop and some other operations between the iterations. A simple example would be collecting words from an array into a string, spelled backwards and separated with commas. Is there an idiom or language support for this in any language? (For now I am mostly interested in Ruby.) I usually do something like a = ['foo', 'bar', 'baz'] s = '' n = a.size - 1 i = 0 loop do s << a[i].reverse break if i == n s << ', ' i += 1 end But I know of no way to save this half iteration if using a Ruby iterator: s = '' ['foo', 'bar', 'baz'].each do |w| s << w.reverse # ??? end"} {"_id": "195470", "title": "Is there any difference between interfaces and abstract classes that have abstract methods only?", "text": "Let's say we have an abstract class, and let this class have only abstract methods. Is this abstract class different from an interface that has only the same methods? What I am looking to know is whether there are any differences - philosophically, objectively and in the underlying programming language implementation - between an abstract class with only abstract members and an equivalent interface."} {"_id": "109446", "title": "Is this the right strategy to convert an in-level order binary tree to a doubly linked list?", "text": "So I recently came across this question - Make a function that converts an in-level-order binary tree into a doubly linked list. Apparently, it's a common interview question. This is the strategy I had - Simply do a pre-order traversal of the tree, and instead of returning a node, return a list of nodes, in the order in which you traverse them, i.e. return a list, and append the current node to the list at each point. For the base case, return the node itself when you are at a leaf. 
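To make that concrete, here is the idea as a quick Python sketch (my own rendering of the strategy, with an assumed Node class exposing data, left and right; actually wiring up the prev/next pointers of the doubly linked list would be a separate pass over the returned list):

    def tree_to_list(node):
        # recursive traversal that returns an ordered list of values
        if node.left is None and node.right is None:
            return [node.data]                 # base case: a leaf
        left = tree_to_list(node.left) if node.left else []
        right = tree_to_list(node.right) if node.right else []
        return left + [node.data] + right      # splice this node between the halves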
So, condensed, you would say: left = recursive_function(node.left) right = recursive_function(node.right) return (left.append(node.data)).append(right) Is this the right approach?"} {"_id": "109444", "title": "Is microoptimization worth it in mobile devices?", "text": "Usually microoptimization is considered not worth it, with the following explanation: it might speed up the program by less than one percent, but no one cares about that minor boost - that's just too little of a change to be noticed. Furthermore, there might be some event handler that fires one thousand times per second and exits very fast - before it is fired again. No one cares how fast it is - making it faster can't be noticed, because it is already \"as fast as can be observed\". However, in mobile devices energy consumption is an important factor. The same event handler optimized to run ten percent faster will lead to less energy consumed, and that means longer battery life and a longer-operating device. How accurate is the latter judgement about mobile devices? Are there any real-life examples that confirm or disprove it?"} {"_id": "109442", "title": "Difference between networking programming and socket programming", "text": "Are there any major differences when we talk about \"socket programming\" compared to \"network programming\"? Are there some topics that cover \"network programming\" but not \"socket programming\"?"} {"_id": "196419", "title": "Is this solution RESTful and secure?", "text": "Our product registers new players on our service, and we've chosen to host it on Azure (we're using .NET) and we wanted it to be stateless (for scalability) and relatively secure. Since this is the first REST WS I'm writing, I wanted to get some feedback on whether or not it's a solid solution. Some presumptions to know about our app: 1. Users are logged into the service anonymously, without requiring a password from the user 2. The WS must be completely stateless to allow horizontal scaling 3. We're connecting using HTTPS (SSL) to prevent 3rd party snooping **EDIT:** 1. We target native iOS/Android devices 2. Our main concern is making sure only non-tampered clients are able to send requests And the abstract authentication process: 1. The client creates a simple hash (UDID:Timestamp) and encrypts it using the timestamp with some basic algorithm (for example, the secret key is every 2nd character from the hash) 2. The client sends his UDID, Timestamp & hash to the server 3. The server rebuilds the hash and decrypts the encrypted hash sent from the user 4. If the two are equal - we know that it was actually sent from our client (and hopefully not from a malicious sender) Any input/suggestions would be great - obviously, since it's the first time I'm handling this issue, I might have designed it incorrectly. Thanks! **2nd update:** Reading the security specs for OAuth, it seems that there is no real answer to my question - since the client and server must know the secret keys and the client is locally stored on our users' mobile devices (as opposed to a web app). From the OAuth security guide (http://hueniverse.com/oauth/guide/security/): > When implementing OAuth, it is critical to understand the limitations of > shared secrets, symmetric or asymmetric. The client secret (or private key) > is used to verify the identity of the client by the server. In case of a > web-based client such as web server, it is relatively easy to keep the > client secret (or private key) confidential. 
> > However, when the client is a desktop application, a mobile application, or > any other client-side software such as browser applets (Flash, Java, > Silverlight) and scripts (JavaScript), the client credentials must be > included in each copy of the application. This means the client secret (or > private key) must be distributed with the application, which inheritably > compromises them. > > This does not prevent using OAuth within such application, but it does limit > the amount of trust the server can have in such public secrets. Since the > secrets cannot be trusted, the server must treat such application as unknown > entities and use the client identity only for activities that do not require > any level of trust, such as collecting statistics about applications. Some > servers may opt to ban such application or offer different protocols or > extensions. However, at this point there is no known solution to this > limitation."} {"_id": "194486", "title": "Permuting a list of numbers by pushing and popping onto a stack", "text": "Suppose we have some long stack of numbers. There is another intermediate stack, and a destination stack to be returned in the end. The only two operations allowed are transferring the top element of the old stack onto the top of the intermediate stack and transferring the top element of the intermediate stack onto the top of the new stack. List all the possible new stacks that can be produced, assuming the original stack is used up. I'm using Racket/Scheme. Purely functional solutions are preferable ;) Any ideas on how to do it? By some handwaving it seems that the following function correctly predicts whether a certain output sequence is possible, given that the input sequence is sorted from small to big: (define (possible? lst) (match lst [(cons a b) (and (if (andmap (\u03bb(x) (< x a)) b) (sorted? b) #t) (possible? b))] [x #t])) thus I can simply generate all permutations of the input list and filter out the impossible ones. It seems that there should be a more elegant solution, though, which preferably uses recursion and functional programming instead of nested loops and mutation."} {"_id": "156030", "title": "IPC linux huge transaction", "text": "I'm building an application that requires a huge number of transactions/sec of data, and I need to use IPC for the multithreaded, multiprocess communication. I know that there are a lot of methods that can be used, but I'm not sure which one to choose for this application. This is what the application is going to have: 4 processes, each process has 4 threads, and the data chunk that needs to be transferred between two or more threads is around 400KB. I found that a FIFO is a good choice, except that its buffer is 64K, which is not that big, so I'll need to modify and recompile the kernel - but I'm not sure if this is the right thing to do. Anyway, I'm open to any suggestions and I'd like to squeeze your experience in this :) and I appreciate it in advance."} {"_id": "194480", "title": "I'd like to write an \"ultimate shuffle\" algorithm to sort my mp3 collection", "text": "I'm looking for pseudocode suggestions for **sorting my mp3 files in a way that avoids title and artist repetition**. I listen to crooners - Frank Sinatra, Tony Bennett, Ella Fitzgerald etc. singing old standards. Each artist records many of the same songs - Fly Me To The Moon, The Way You Look Tonight, Stardust etc. My goal is to arrange the songs (or order the playlist) with the maximum space between artists and song titles. 
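To show the flavour of what I'm after, here is my own half-baked Python sketch that greedily spaces out artists (`artist_of` is a placeholder for however the ID3 tag gets read; the title dimension is ignored entirely, and I suspect there are smarter approaches):

    import heapq
    import random
    from collections import defaultdict

    def spaced_shuffle(tracks, artist_of):
        # group tracks per artist and shuffle within each group
        groups = defaultdict(list)
        for t in tracks:
            groups[artist_of(t)].append(t)
        for g in groups.values():
            random.shuffle(g)
        # greedy spacing: always play the artist with the most tracks left,
        # skipping whoever was played last if there is an alternative
        heap = [(-len(g), artist) for artist, g in groups.items()]
        heapq.heapify(heap)
        playlist, last = [], None
        while heap:
            neg, artist = heapq.heappop(heap)
            if artist == last and heap:
                neg2, artist2 = heapq.heappop(heap)
                heapq.heappush(heap, (neg, artist))
                neg, artist = neg2, artist2
            playlist.append(groups[artist].pop())
            last = artist
            if neg + 1 < 0:                    # tracks remain for this artist
                heapq.heappush(heap, (neg + 1, artist))
        return playlist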
So if I have 2000 songs and 20 are by Ella, I'd like to hear her only once in every 100 songs. If 10 artists sing Fly Me To The Moon, I'd like to hear it once in every 200 songs. Of course, I want to combine these two requirements to create my \"ultimate shuffle\". I know this is a fairly wide-open question. I haven't started programming it yet, so I'm just looking for suggestions of a good approach to take. I actually have some other requirements regarding evenly spacing other song attributes, but I won't get into that here. * * * As a starting point I'm modifying code I found here to manipulate mp3 files and read ID3 tags. I wrote a small app that satisfies my need using parsifal's answer below. I also wrote a follow-up question here. Thanks for all the great responses!"} {"_id": "94073", "title": "How does git/GitHub handle changed local and changed pushed repositories?", "text": "I'm working with a few other people on projects, and today I ran into a situation I'm sure I'll see often. One of my co-workers and I had split up some tasks to do on the same project and began working. He finished his tasks, and pushed his changes to our shared repository. I was still working on my changes, but I was afraid that pushing them would overwrite the code he had worked on with my version of the project, which contained only half the needed changes. For this situation (just to be safe - I had a presentation on the code in a few minutes and didn't want any major setbacks), I made a copy of my local code, pulled his new code, copied the files I had edited into the project I had just pulled (we were working in two separate sections, but I'm curious as to how git/GitHub handles this issue when working in the same files), and then pushed our combined code. So how does git (or how should we) handle one project with two different changes to the same file?"} {"_id": "184943", "title": "Overcoming circular reference", "text": "I am working on an asp.net MVC web application which contains several projects. One is BusinessObjects, which contains business logic / processes. Another is EmailGeneration, which is used to send marketing campaign / customer emails. The EmailGeneration project references the BusinessObjects project because it needs to generate templated emails based on Business Objects. I need to be able to trigger emails from business objects so that I can, say, automatically send an invoice when an order is completed. However, I can't add the reference as it would create a circular reference. This suggests that my design is flawed. How can I change my design to reduce coupling between the components?"} {"_id": "196416", "title": "What's the dominant naming convention for variables in PHP: camelcase or underscores?", "text": "The consensus seems to be that one should follow the convention of the platform they're developing for. See: Underscore or camelcase? Naming conventions: camelCase versus underscore_case? However, PHP doesn't seem to strictly follow any convention internally (no surprises there), even for methods and functions (e.g. `mysqli::set_local_infile_default`, `PDOStatement::debugDumpParams`); underscores, though, seem to be dominant in function names. What I couldn't find, however, was this: **what's the dominant naming convention for variables in PHP?**"} {"_id": "137011", "title": "Where to use C++ today?", "text": "> **Possible Duplicate:** > Is there any reason to use C++ instead of C, Perl, Python, etc.? > When to use C over C++, and C++ over C? 
I am going to enter university next fall in computer science, but have been programming already for a few years. I am currently doing web application development. I use Python and Java mainly. I also know C++, but have never really practiced it, because every time I think of a project, there seems to be a language that is better suited for the job than C++. For example, Java seems to give the same result as C++, but in a more productive way (even performance-wise, Java is not far behind C++). Not to mention that the Java library is much greater than the C++ one. This makes me wonder where, in the world of web development, I could use C++ and get an advantage that other languages won't give me. I currently believe that it would be as an add-on to a language like Java or Python."} {"_id": "216245", "title": "How to test issues in a local development environment that can only be introduced by clustering in production?", "text": "We recently clustered an application, and it came to light that because of how we're doing SSL offloading via the load balancer in production, it didn't work right. I had to mimic this functionality on my local machine by SSL offloading Apache with a proxy, but it still isn't a 1-to-1 comparison. Similar issues can arise when dealing with stateful applications and sticky sessions. What would be the industry standard for testing this kind of production \"black box\" scenario in a local environment, especially as it relates to clustering?"} {"_id": "76886", "title": "Why do most languages provide a min-heap instead of a max-heap implementation?", "text": "I just noticed something and I wonder if there is any reason for it. Except for C++ (std::priority_queue is a max-heap), I don't know any other language that offers a max-heap. > Python's heapq module implements a binary min-heap on top of a list. > > Java's library contains a PriorityQueue class, which implements a min-priority-queue. > > Go's library contains a container/heap module, which implements a min-heap > on top of any compatible data structure. > > Apple's Core Foundation framework contains a CFBinaryHeap structure, which > implements a min-heap. I find a max-heap more intuitive than a min-heap, and I believe technically the implementation difference is only a question of changing a comparison operator. Is there any real reason? Do most applications need a min-heap instead of a max-heap? Thanks in advance"} {"_id": "79536", "title": "How far should you be from your closest data backup?", "text": "Pretend for a moment that something catastrophic happened; you're hacked, and your production database becomes a mess. How far (in time) should you be from rolling out your latest backup and resuming operations as normal? Of course, \"immediately\" is \"ideal\", but I'd like to see some real answers. Perhaps some existing, real-life situations that you're currently in with regard to backups."} {"_id": "91683", "title": "Completing a project successfully despite hostile management?", "text": "I work for a subsidiary of a large worldwide company. This was not a subsidiary from the beginning; it was a company bought by the larger company. We seem to be heading toward a death march and I'm wondering if there is anything I (or my team) can do to either resolve the management problems or complete the project in spite of them. 
It's a typical Daily WTF situation: * Unrealistic deadlines (management \"estimated\" 6 months, dev team needs 18 months _minimum_ ); * Long (up to 3h) daily meetings, further cutting our productivity; * Management refuses to budge on the schedule because they want to look good to the new owners; * Developers being bullied, accused of incompetence, put on a \"wall of shame\", etc. * The team lead just resigned and morale is at an all-time low. I'd like to leave - most of the dev team is considering it - but I'm reluctant to quit. I really need the money, and I've also only been here for a short time (5 months) after a period of unemployment, so quitting now might raise red flags for prospective employers. Are there any strategies which might be effective in fostering more cooperation from the management, or at least minimizing the ongoing disruptions? **EDIT:** _I received great advice from answers to this question. I can't really accept an answer because I can't accept more than one, and it would be unfair to the others._ _We've decided to do our best at work given the circumstances. We're pushing back on management to see if we can actually obtain help from them (e.g. new people, a new deadline). At the same time I'm looking for a new job (as my colleagues were already doing)._ _We'll see what happens and plan accordingly. But one thing is sure: sacrificing ourselves on the corporate shrine is out of the question._ _Thank you all!_"} {"_id": "79538", "title": "Java but no JEE experience", "text": "I have quite extensive Java experience (many years of Swing, desktop apps, gigantic applets), but have done very little JEE - only one single gig for 9 months, and that was very narrow work where I was basically just implementing business logic (in a highly abstracted internal company IDE) and at most tweaking some JSP pages. Never really picked up the macro flow of the JEE framework in depth. In the meantime, I've done some very limited maintenance of an ASP.NET MVC app at my previous job ( _very_ limited) and am currently playing around with Ruby on Rails at home. I feel like I have the basic gist of web development down, but nothing that I can call solid \"industrial strength experience\" with any of these frameworks. I'm still a bit in the \"cargo cult programming\" stage with web development; I can't really say that I could plug leaks in the abstractions if they broke, so to speak. So what I'm wondering is: with this background (extensive Java and a decent grasp of how web frameworks work), is JEE going to be relatively logical and easy to pick up, or will it be a huge and steep learning curve? Basically, I'm planning to learn it, and want a preliminary idea about how big an effort it's going to be."} {"_id": "131695", "title": "How long does it take for one to become really comfortable in a programming language and coding starts being fun?", "text": "I am just starting out in programming, learning Aspect, and I'm nearing the end of the introduction. I've learned about algorithms, functions, types of code, Python, and briefly the basic ideas of programming. And I have to say it totally fits the description of a language - it is like learning how to speak again. So what I would like to know is: after these trivial and trying stages of learning just how to get through this gate of understanding, does programming become easier and more fun? 
Also, I would like to get an idea of how far into the learning process you have to be before you can speak this language and do things like write an application or design for the web - before all these electrifying things become graspable and doable."} {"_id": "105827", "title": "Do I need to understand pointers to use C++?", "text": "Well, I love C++, I have been using it for a while: I like all the libraries (Allegro, SDL, QT, Ogre, etc.), but I have a problem: I don't understand pointers. Do I really need them? I just program for fun, but I want to study it some day. Thanks."} {"_id": "101762", "title": "Do you sign each of your source files with your name?", "text": "> **Possible Duplicate:** > How do you keep track of the authors of code? One of my colleagues is in the habit of putting his name and email address in the head of each source file he works on, as author metadata. I am not; I prefer to rely on source control to tell me who I should be speaking to about a given set of functionality. Should I also be signing files I work on for any other reasons? Do you? If so, why? To be clear, this is in addition to whatever metadata for copyright and licensing information is included, and applies to both open sourced and proprietary code."} {"_id": "126230", "title": "Is it a good practice by commenting with owner name?", "text": "> **Possible Duplicate:** > How do you keep track of the authors of code? Here are several scenarios where one might comment with an owner name: 1. bug fixing, i.e. `// fixed bug 123 by xxx, solution is ... ...` 2. fixme/todo tags, i.e. `// TODO: .... by xxx.` 3. hacks, i.e. `// HACK! ... by xxx` For case #2, please refer to Comment Tags. The obvious advantage is that we can ease tracking by names. The downside is the risk of abuse. Actually, my previous company allowed this style of commenting, but my current employer completely disallows names appearing in code. In my opinion, I would vote for discreetly commenting with author names. I'm open to hearing from you whether this commenting style is good or bad. Thanks."} {"_id": "109191", "title": "dbdeploy (phing) and development teams", "text": "Does anyone here use dbdeploy (phing) in a team? If so - how do you manage the migrations? Let's suppose we have 2 different features developed in different branches of a Mercurial repository (the phing build file and dbdeploy migrations are stored in Mercurial). Which IDs should we give to each feature's changes?"} {"_id": "131698", "title": "How can I debug exceptions that are not easily reproducible and only occur in a production environment?", "text": "I am working on an issue where the exception only occurs in our production environment. I don't have access to these environments, nor do I know what this exception means. Looking at the error description, I'm unable to understand the cause. javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure Would someone please advise me on how to approach this kind of problem?"} {"_id": "199712", "title": "In which cases build artifacts will be different in different environments", "text": "We are working on automating deployment using Jenkins. We have different environments - DEV, UAT, PROD. In SVN, we are tagging each release and placing the same binaries in DEV, UAT, PROD. The artifacts already contain config files for each environment, but I don't understand why we are storing the binaries in each environment folder again. 
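My mental model of \"build once, deploy many\" is that the binaries stay byte-identical across environments and only a tiny selection step differs, driven by something set per machine - along these lines (an illustrative Python sketch; the file layout and variable name are made up):

    import json
    import os

    def load_config():
        # the deployed artifact is identical everywhere; only the value of
        # APP_ENV (set on each machine) decides which config file is read
        env = os.environ.get('APP_ENV', 'DEV')   # DEV, UAT or PROD
        with open('config/%s.json' % env.lower()) as fh:
            return json.load(fh)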
Are there any scenarios where deployment might be different for different environments?"} {"_id": "180758", "title": "Design Parts DB", "text": "I'm developing a tool that handles (electrical) parts. The parts can be created, viewed, modified, deleted, grouped and so on... In order to make this question useful for future visitors, I'd like to keep this question universal, since managing parts in a DB is very common no matter what parts are in the DB (CDs, cars, food, students, ...). I am thinking of 3 different DB designs: 1. Using a parts table and derived tables for specialized part attributes. Parts (id, part_type_id, name) PartTypes (id, name) Wires (id, part_id, length, diameter, material) Contacts (id, part_id, description, picture) 2. Using only specialized part tables. Wires (id, name, length, diameter, material) Contacts (id, name, description, picture) 3. Using a Parts-, PartTypes-, ValueTypes- and PartValues table that contain all values. PartTypes (id, name) ValueTypes (id, part_type_id, name) Parts (id, part_type_id, name) PartValues (part_id, value_type_id, value) Which one is preferable, and why? Or is there a better one? I am concerned about the DB queries. I don't want the queries to become overly slow or complicated. ## Update The number of types in the DB is pretty much given and static, since they rest on an international standard and will seldom be extended."} {"_id": "180757", "title": "is it possible to monitor more than one stream simultaneously?", "text": "E.g. \\- $stream1 is the STDOUT of a child process and $stream2 is the STDERR of the same child process \\- $stream1 is one child process and $stream2 is another child process Is there a possibility in Perl to monitor two or more streams at the same time? So we loop as long as anything pops up in any of the streams, and we take whatever comes first, either in $stream1 or $stream2. Something like while (my $line = <$stream1 or $stream2> ) { #do something with the $line } ??"} {"_id": "145922", "title": "How can I do test automation using a Python script to test a C program?", "text": "I was wondering if I can use a Python script to test a C program automatically. Suppose I have a very simple C program which reads data (numbers as test cases) from the console, calculates with them, then outputs the results to the console. Now I want to read data from a file and then output whatever the program outputs into a file. Suppose in the original C program I use a `while` loop and `scanf` to read two numbers into two variables `a` and `b` several times, and do different calculations according to the values of `a` and `b`, like this: if(a>4 && a<10){...} else if(a>=10){...} Now can I use a Python script to test the program automatically? And do I need to modify my C program? How? EDIT: If any method is permitted, then what is the best way to test the C program automatically without using frameworks?"} {"_id": "81248", "title": "Prevent Casual Piracy for Simple Utility", "text": "I've written a small utility that I wish to sell for less than $10. My primary concern is \"casual piracy\". The scenario that plays out in my mind is this: > User buys the program, enjoys using it and tells their friends. The friends > copy the application to their USB drives and take it home - using the > application for free (maybe never realizing they should have purchased it). Since I've got absolutely no protection built in, it would just be a simple copy'n paste to pirate the app. 
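The only mitigation I've thought of so far is deriving the licence key from something machine-specific, so that a key copied along with the app stops validating elsewhere - roughly along these lines (illustrative Python rather than the .NET code I'd actually ship, and `machine_id` stands in for whatever identifier turns out to be practical):

    import hashlib
    import hmac

    SECRET = b'product-secret-baked-into-the-app'   # placeholder value

    def licence_for(machine_id):
        # derive the key from the machine identifier, so a licence that is
        # copied to a different machine no longer validates
        digest = hmac.new(SECRET, machine_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:20]

    def licence_ok(machine_id, key):
        return hmac.compare_digest(licence_for(machine_id), key)

A determined user can still defeat this, of course; it only raises the bar above plain copy'n paste.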
The users who would be using the app are in close proximity to each other (they work in the same environment), so casual piracy would likely occur frequently. **Any ideas?** Keep in mind that the app is cheap (partly to reduce casual piracy), and that the level of effort to write the app hasn't been very demanding. Update: the app will run on any system that supports .NET 3.5."} {"_id": "233175", "title": "suggest structure for classes that maps to json with dynamic data without using dynamic or object reference", "text": "This is the kind of data I have to deserialize: { \"id\": \"M:11427\", \"title\": \"DAX30\", \"nextStartId\": \"S:727831\", \"sections\": [ { \"type\": \"HIGHLIGHTS\", \"baseResults\": [ values of type highlights ], \"hasMore\": true }, { \"type\": \"CHART\", \"hasMore\": false, \"chartTypes\": [ string values ] }, { \"type\": \"TWEETS\", \"baseResults\": [ values of type tweets ], \"hasMore\": true }] } I have to serialize & deserialize them all. I want to create something that can hold the values corresponding to baseResults. There is a main class that represents the whole JSON: class Data { ... ObservableCollection<Section>
    sections { get; set; } ... } Then there is a class that represents the data in the **sections** array of the main JSON: class Section { string type { get; set; } // this element decides what kind of data will be in baseResults dynamic baseResults { get; set; } // it should hold an ObservableCollection of either Highlights or Tweets etc. } The base class for the type of data coming in the **baseResults** array is an abstract class, `class CategoryData`, and its children are `class Highlights` & `class Tweets`. I am using `dynamic` since I cannot assign an `ObservableCollection<Highlights>` to an `ObservableCollection<CategoryData>`. But I don't want to use `dynamic` or an `object` reference; instead I want something more fitting. Please suggest what could be a better approach for this problem."} {"_id": "181482", "title": "Git-friendly spreadsheet format?", "text": "We're trying to move our project documentation process from Google Documents to a set of self-hosted Git repositories. Text documents are Git-friendly enough, since we usually do not need any fancy formatting; we'll just convert everything to, say, multimarkdown with an option to embed LaTeX for complex cases. But spreadsheets are quite a different story... Is there a spreadsheet(-like) format that is friendly to version control systems (and, preferably, is as human-readable as Markdown)? _\"Friendly format\": Git works well with the format (it doesn't with XML) and it generates human-readable diffs (extra configuration involving external tools is OK)._ Obviously, Markdown flavors allow one to build static tables, but I'd like to be able to use stuff like `SUM()` etc... (Note that CSV has the same problem.) No WYSIWYG is OK, but decent editor/tool support would be nice. _Update: Linux-friendly answers only, please. No MS Office stuff._"} {"_id": "181483", "title": "Root cause analysis in event correlation", "text": "I'm to write an event correlator module for the device we produce. When a fault occurs, the resulting avalanche of logs about derivative conditions from various modules is readable only to a person skilled in the task - a layman will be utterly lost. The event correlator is supposed to solve this problem - find the root cause and present a friendly message guiding the user to the origin of the problem. I can collect tokenized events from all the modules and observe the internal state of the main module. I can even record history from a few seconds back. Now, the hard part and my actual question: writing the analysis algorithm. I tried the Wikipedia page, but it's awfully sketchy on the specific Root Cause Analysis part, and the RCA article linked is more about a business practice than about something that can be phrased as a computer algorithm. The net is full of event correlators as software to be bought or used, but I have trouble finding anything about making one. So, summarizing, I have: * a history of events and states dumped at the moment of error (the \"black box\") * a set of rules (preferably in human-editable form in some external file) * a set of root causes, derived from the events according to the rules * a program interpreting these rules, then processing the black box through them and producing the set of matching root causes. How do I even approach writing something like that? * What would be the format of the ruleset file, and a parser for the rules - lexical parsing alone is easy enough, but what would it produce? * How do I represent _a rule_ internally? As some kind of object probably, but what would that object look like? How do I chain the rules together in a decision tree? 
* processing black box data through these rules. Seems an awful lot like running data through a script, but the data contains a lot of noise. How do I assure obtaining _some_ results? Now, some caveats: * I'd prefer to avoid Bayesian or other statistical approaches - I'd prefer a fully deterministic solution, so that a given set of conditions will produce the same result in laboratory analysis, and the precise set of rules is fully human-readable and human-editable. * Not all components are directly monitored, and some faults can be derived only indirectly. The root cause will not always be one particular message in the black box. Sometimes it will be something that led to a certain set of logs. * The sequence of events is not always maintained, as messages are sometimes delayed on the message bus. * If the main power goes out, we're not guaranteed to capture all the data. Examples: 1. * Module 1 reports halt; blockade line is active. * Module 2 reports halt; blockade line is active. * Module 4 reports halt; blockade line is active. * PSU module reports external power is okay, internal power has been disabled (per blockade line.) Root cause: Module 3 failed gracefully; a serious hardware condition activated the blockade line correctly, but it was unable to send its message out. 2. * Module 1 reports halt; blockade line is active. * Module 2 reports halt; blockade line is active. * Module 4 reports halt; blockade line is active. * PSU module reports external power is okay, internal power has been disabled (per blockade line.) * Module 3 reports halt; No power in its output line. Activating the blockade line. Root cause: Burnt fuse of the output line of Module 3. Note that Module 3 was mean here, sending out its message last while pulling the blockade line first. 3. * Module 3 reports halt; No power in its output line. Activating the blockade line. * Module 1 reports halt; blockade line is active. * PSU module reports external power is okay, internal power has been disabled (per blockade line.) * Module 2 reports halt; No power in its output line. Activating the blockade line. * Module 4 reports halt; No power in its output line. Activating the blockade line. Root cause: Main fuse of internal power disengaged. Likely reason: short circuit in the PSU, or fuse disengaged manually. Note how Module 1 was late in detecting the lack of power - say, its output was inactive and the capacitors kept it active long enough that it detected the blockade line (engaged by other modules) first, before detecting the power failure. Also, normally the PSU would report a fault condition of internal power going out, but it's simply slower - less sensitive than the output modules - and they will pull the blockade line before it detects that power is out, and by then it will assume this is correct behavior. The difference between cases 2. and 3. is only the number of simultaneous \"no power\" faults: it's extremely unlikely that several fuses go out at the same time, while using the main fuse to disable power for service works is the \"traditional\" approach, and it is to be expected that at least a few modules will detect \"power out\" if the main fuse is disengaged."} {"_id": "109087", "title": "Will taking a job in web development prevent a low-level programming career?", "text": "I'm a computer science student in my last year, and I want to get a job, but have difficulties finding one in my city where I can use what I already know pretty well. 
There is a company which does PHP/web work that insists I work for them, and it is tempting, because the pay would be good (for someone who is still a student) and the working conditions are great; the problem is that in no way do I want a career in web development, because I love low-level programming (C++, C, a bit of C# and Python - for my thesis I'm working on a complete C compiler with optimizations - I will make it open source later this year). This job would only be until I graduate and find something I like. Now the question: could taking this job reduce my chances of being accepted for a job where low-level things are done? I think it might, because the interviewer could say something like \"Well, if you claim to know so much low-level stuff, why did you choose a job where you're doing the complete opposite?\" Here, PHP is considered by many to be a language used by novice programmers (I know it's not entirely true), and I fear it doesn't give a good impression to claim I have deep knowledge of low-level stuff while having a PHP job (and only that one) listed on my CV."} {"_id": "144347", "title": "When or why should one use getters/setters for class properties instead of simply making them public properties?", "text": "I program primarily in ColdFusion, but this is a general OOP question. Is there any benefit to using: getProp() { return prop; } setProp(val) { prop = val; } As opposed to simply: obj = new Obj(); obj.prop = \"1\";"} {"_id": "77289", "title": "Getting your user agreement right", "text": "I'm planning to provide a little service with which you can control your computer from anywhere. It consists of a server (which I will be providing) and two clients (a controlled one and a controlling one). Now, I want to provide that service for a little fee. However, the server is actually a small old Dell laptop with a broken screen at my home, with Ubuntu as its OS. It's not very reliable, but I want to use it, because it's cheap. Now, as I said already, the laptop maybe isn't the most reliable machine, along with my Internet connection, which can also go down if there are some unexpected problems. To avoid taking risks, I want to let the user know, before he pays, that this can occur. But I also want to be safe myself, so I won't need to give any money back. It is the first application that I will be offering for a little fee, and I don't know about the legal responsibility I have to take. Basically, I don't want to give any money back, even at 100% downtime (which will be very unlikely), but I also don't want it to be my fault if someone loses data by using my software, even if it is a bug or someone intercepted his data. So, I have put this in the agreement, and this only. Is it enough, and is it legally binding? What should I also know? > * iControl actually implements a client-server-client protocol, where one > client lets the other client execute a shell command. Then the controlled > client will return the termination status, the standard output, and the > standard error of that command. > * iControl uses no encryption to send data over the network. It is a > potential security risk to use this service, because all data is sent raw > over the network. You can compare it with the FTP protocol, the HTTP > protocol, which you're using right now, and the telnet protocol. > * iControl's maker does not in any way stand in for the damage this > service can do. Not by misuse by the user, not by a bug in the software. No > illegal activities may be done using this service. 
The provider of this > service may suspend you from his service at any time without an explanation, > without a money-back guarantee. The collector of the rent fees does not need, > in any circumstances, to return the paid money. > * The iControl service is maintained by one man and one man only. Any > downtime of the used server may occur. In case of any length of downtime, > the service provider does not have to give any money back; however, he'll do > his best to make you as happy as possible by trying to make the service be > up all of the time. > I'm planning to let the user check a box saying that they read this and agreed to it when they register. Is this enough?"} {"_id": "221152", "title": "Freelance charging based on tasks completed", "text": "I've just started freelancing and have two projects, splitting my week between both companies. For one I'm charging hourly and everything is fine; for the other, however, they have said they will pay me on a points basis using Pivotal Tracker. For example, there's a fixed rate per point, and they work on an estimate that I will complete two points per day. I was just wondering if this is something that is used regularly, or if I should try and get them to switch to hourly? My concern is that points are only used for features, and if a feature has a bug this isn't taken into consideration, so any time spent fixing that bug won't be charged. Correct?"} {"_id": "50675", "title": "What's the most productive coding environment", "text": "I was speaking with an ex-colleague the other day about the most productive way to write code and he said he found it best \"to CIMP, or Code In My Pants\". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through emails, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive when working in an isolated environment, which made me wonder if the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly across geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?"} {"_id": "209703", "title": "How will I restrict the number of users logged in based on session?", "text": "I have software (in the cloud) with many registered users. I want to restrict the number of users logged in at a time to a certain number (configuration-based; let's say 100). If the 101st user tries to log in, he should be placed in a queue with priority 1; subsequently, the 102nd user should be 2nd in the queue. I know that every time a logged-in user's request hits the server, the `sess_file` access time is updated. What should the logic be from here? Should I check the last access time of all the `sess_file`s and log out the users that are idle? And how will I manage the waiting queue? The software is built with PHP and MySQL. 
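The gating logic I have in mind, sketched in Python-style pseudocode (my own illustration - in reality the last-access times would come from the `sess_file`s and the queue from a table):

    import time

    SESSION_LIMIT = 100        # configuration-based
    IDLE_TIMEOUT = 30 * 60     # seconds before an idle session is dropped

    def reap_idle(sessions, now=None):
        # sessions maps user_id -> last access time (what sess_file gives you)
        now = now or time.time()
        for user, last_seen in list(sessions.items()):
            if now - last_seen > IDLE_TIMEOUT:
                del sessions[user]             # force-logout the idle user

    def admit_from_queue(sessions, queue):
        # promote waiting users, first come first served, into freed slots
        while queue and len(sessions) < SESSION_LIMIT:
            sessions[queue.pop(0)] = time.time()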
When the user is logged in I mark the column as logged in in the user table, and when the log-in page is loaded, if the total logged-in user count is 100, the 101st user who is trying to log in is directed to an intermediate page which checks, via an AJAX call fired at an interval of 5 minutes, whether any user has logged out. If a slot is available it redirects to the log-in page, automatically logs the user in and redirects to the user dashboard. But I want to know whether there is a better practice. Mostly I want to know whether invalidating a logged-in user by checking the access time of the `sess_file` is the right way to do it."} {"_id": "50673", "title": "Internal Libraries (Subversion Externals, 'library' branch, or just another folder)", "text": "Currently working on multiple projects that need to share internal libraries. The internal libraries are updated continually. Currently only 1 project needs to be stable, but soon we will need to have both projects stable at any given time. What is the best way to SVN internal libraries? Currently we are using the 'just another folder' approach, like so... > trunk\\project1 > trunk\\project2 > trunk\\libs It causes a major headache when a shared library is updated for project1 and project2 is now dead until the parts that use the library are updated. So after doing some research on SVN externals I thought of this... > trunk\\project1\\libs (external to trunk\\libs @ some revision) > trunk\\project2\\libs (external to trunk\\libs @ different revision) > trunk\\libs\\ I'm a little worried about how externals work with commits, and about not making library commits so complicated that I am the only one capable of doing them (I'm mostly worried about branches with externals, as we use them extensively). On top of that, we have multiple programming languages within each project, some of which don't support per-project library directories (at least not easily), so we would need to check out on a per-project basis instead of checking out the trunk. There is also the 'vendor' style branching of libraries, but it has the same problem as above where the library would have to be a subfolder of each project, and it is maybe a little too complicated for how few projects we have. Any insight would be nice. I've spent quite a bit of time reading the Subversion book and feel like I'm getting nowhere."} {"_id": "221156", "title": "Should a DR & CR entry both be created or is single-entry accounting fine", "text": "A new project has an amount. It is always related to a client. One client can have many projects. And he may pay in steps for a particular project. In order to keep track of client payments, I've created a table, namely `client_ledger`, which contains information about date, time, amount, mode of payment & related project. But then in one of the software screens, I have to show reports pertaining to a client's debit & credit. So there are 2 things that can be done: 1. Either I can get the total amount of all projects of that client and his total payments & then can show how much balance is left to be paid. 2. Or, if I'd created an entry in the `client_ledger` table for `DR`, then as soon as a project is created, an entry is made in the table for `DR` and subsequent payments received will be `CR`ed into the database. In this case, if a project's amount is modified/edited later, then either the original `DR` entry has to be modified, or a new `CR` entry followed by the new project amount's `DR` entry should be made.
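To make option 2 concrete with made-up figures: a project is created for 1,000, so a `DR` of 1,000 is recorded; the client pays 400, recorded as a `CR` of 400, leaving 600 to be paid. If the project amount is later edited to 1,200, either the original `DR` is amended from 1,000 to 1,200 (balance 800), or a reversing `CR` of 1,000 plus a fresh `DR` of 1,200 are appended, which keeps the full history in the ledger.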
Which of these processes should be followed?"} {"_id": "92088", "title": "Developers inheriting code. What to ask the old developer to better help the new developer?", "text": "I know nothing about code. We've had our old iOS developer drop out and we're looking for a new developer or a team to pick up where he left off. I'm aware there can be issues with developers inheriting code, etc. What are some questions that I can ask my old developer to better help the new developer understand what stage the code is at, what needs working on, etc.? Just general-type questions? So that when I meet with potential new developers I can hand them a 'fact sheet' of where the code is at from the old developer."} {"_id": "145690", "title": "Is there a need for a factory class for creating viewmodels?", "text": "A colleague of mine suggested using a factory class for creating viewmodel objects in our ASP.NET MVC solutions. The idea being that it can help with the design, and maintainability, of the way viewmodels are built in our apps. I wanted to find out if anyone else has experience of this. I've done some research and found very little on this practice. Currently we create viewmodel objects at the controller level, e.g. public ActionResult Index() { return this.View(this.BuildIndexViewModel()); } So this.BuildIndexViewModel() is responsible for creating the viewmodel class (obviously :). But we're looking into the possibility of: public ActionResult Index() { return this.View(ViewModelFactory.CreateIndexViewModel()); } This is an interesting idea, but I'm not 100% convinced. I was interested in other people's opinions on this."} {"_id": "100373", "title": "Switching to Windows Identity Foundation - Should we use the SQL Server role and membership provider?", "text": "We have an application which uses SQL Server as the back end. It was client-server based; now it will be web based. Our logon implementation was a user ID and a password hash/salt stored in the database. Additionally, all policies and roles were stored in our table schema and assigned to users and/or user-defined groups. So, they would log on, authenticate, and then we would pull the role(s) and policies for that user so they could use the parts of the application that their role(s) and policies allowed. We are investigating WIF now that it is web based. Ideally we would like to re-use our current model, that is, be able to log on using our existing table(s) and then pull the roles and policies afterward. However, there is a SQL Server role and membership provider. Would it be easier to switch the login, roles, and membership to the built-in SQL Server provider and deprecate our table schema? Or should we build a provider from scratch?"} {"_id": "199948", "title": "How can I keep a production Python environment secure?", "text": "Most of my work is creating websites in Django (a Python web framework) and deploying them to my own or clients' servers. I work from a `virtualenv` to separate site packages from system packages and have perhaps 60-80 packages installed in there, and that lot is shared between two dozen sites. The obvious limitation of this approach is needing to test every site if I upgrade a package it uses. I consider that a fair trade-off for not needing to keep on top of umpteen separate virtualenvs. And that is essentially my whole problem.
**How on earth are you supposed to keep on top of `virtualenv` deployments?** People just seem to treat them like a dumping ground, but if the programming universe has learnt anything this past week from the Ruby on Rails explosion, using old versions of software is unacceptable. I have a simple script that attempts to check current package versions against the latest `pip` counterpart, but it's quite inaccurate. It also doesn't differentiate between security upgrades and feature upgrades (which require days of testing and fixing). I am looking for something better. I am looking for something that can let me know if Django has a new security release out, or if something is end-of-life. I want something to help me (and other Python devops) not become the next batch of people crying after a wave of kids with scanners and scripts convert our servers into a botnet. Does such a thing exist?"} {"_id": "92081", "title": "What is the difference between bug and new feature in terms of segregation of responsibilities?", "text": "We are planning to introduce Help Desk and Support Desk in our project. Our current development team would be divided into two smaller teams: Support Desk and Development team. Support Desk would be responsible only for bug fixing. Development team would be responsible for making enhancements and new feature development. What value do you see in such segregation of responsibilities and teams? Can we find differences between bugs and new features in terms of development activities? Is there any real value in doing such things?"} {"_id": "110949", "title": "Does hierarchical inheritance belong to the past?", "text": "Recently it came to my attention that hierarchical inheritance may be a relic of thinking of classes as \"structs with functions\" rather than a product of a contract-driven mentality. Consider, as a simple specimen, the implementation of \"Unmodifiable Iterator\" from Guava. http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/collect/UnmodifiableIterator.html It's an iterator which throws an \"UnsupportedOperationException\" on invocation of remove(). Now, I'm sure most people would agree that implementing a contract and then having one of its methods always throw an exception is bad form -- when you implement a contract you're implicitly guaranteeing that all the methods will work. Yet, what are our options here? We could declare an interface which does not contain the remove method, but that would render our return type incompatible with all methods which work on iterators. We could blame the Java API designers for forcing the remove() method to be a part of every iterator, rather than moving it to a higher-level interface such as \"RemovableIterator\". If they did that it would indeed avoid some problems, but let's say we need an iterator which can also set values, called \"SettableIterator\" (implements setValue(T)), and also a resettable iterator. If we require a combination of these functionalities we are forced to declare an interface for every combination: ResettableSettableIterator, ResettableRemovableIterator, RemovableSettableIterator, etc. The combinations grow exponentially with the extra features we add to the interface. What we are usually trying to express is something like 'this function requires a parameter which is Iterable, Settable and Resettable' or 'this function returns a value which is Iterable and Resettable'. Yet languages like C#, Java and C++ do not allow us to do this without very awkward use of generics.
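To make the 'awkward generics' point concrete, here is a minimal compilable sketch in C# (all interface, class and method names are invented for illustration):

```csharp
using System;

interface IIterator<T> { bool MoveNext(); T Current { get; } }
interface IResettable { void Reset(); }

class ArrayIterator<T> : IIterator<T>, IResettable
{
    private readonly T[] items;
    private int i = -1;
    public ArrayIterator(T[] items) { this.items = items; }
    public bool MoveNext() { return ++i < items.Length; }
    public T Current { get { return items[i]; } }
    public void Reset() { i = -1; }
}

static class Demo
{
    // A parameter can demand a combination of capabilities with constraints,
    // without declaring a named interface for that combination...
    static void DrainTwice<TIt, T>(TIt it) where TIt : IIterator<T>, IResettable
    {
        while (it.MoveNext()) Console.WriteLine(it.Current);
        it.Reset();
        while (it.MoveNext()) Console.WriteLine(it.Current);
    }

    static void Main()
    {
        // ...but T is not inferred through the constraint, so every call site
        // must spell out both type arguments, and a *return* type still needs
        // a concrete named interface - one per combination.
        DrainTwice<ArrayIterator<int>, int>(new ArrayIterator<int>(new[] { 1, 2, 3 }));
    }
}
```
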
Are there recent methods that make this kind of design obsolete?"} {"_id": "74955", "title": "Which UML colors should this example be represented by?", "text": "See UML colors here. Here is the example; I have the following entity classes: User(name) MailBox(owner: User, label) Mail(from: User, to: User, subject, body, composeTime) MailCopy(type, from: User, to: User, subject, body) MailDelivery(mailCopy, mailBox) Use case: Every user has three mailboxes by default: `MailBox(*, draft)`, `MailBox(*, received)` and `MailBox(*, sent)`. `User(alice)` composed a new `Mail(alice, bob, hello, world)`, and clicked the \"send\" button. A new `MailCopy(SOURCE, alice, bob, hello, world)` is created and delivered to the `MailBox(alice, sent)`, and another copy `MailCopy(DEST, alice, bob, hello, world)` is created and delivered to the `MailBox(bob, received)`. I want to know how to classify these entities according to the UML color archetypes."} {"_id": "85506", "title": "Is there a canonical book on mathematics for programmers?", "text": "I'm a self-taught programmer. I am honestly not good at math. What advice can you give to improve my mathematical skills so that I will not be so insecure around my fellow programmers? What are the steps or guidelines that you can recommend to improve my mathematical skills? Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on mathematics for programmers? What about that book makes it special?"} {"_id": "125689", "title": "How would you go about educating yourself in math in a way that is relevant to programming?", "text": "> **Possible Duplicate:** > Is there a canonical book on mathematics for programmers? My undergraduate math education was mostly breadth with little to no depth. Yes, I passed the tests; I got my As, but I do not feel competent in mathematics. If you were to change that, how would you go about doing it? What books? What order? What would you emphasize? What would you skip altogether?"} {"_id": "110941", "title": "Is there such a thing as too much asynchronous code?", "text": "I am at the moment messing around with clients and servers in C# WinForms and I'm trying to implement it all asynchronously. However, I'm beginning to wonder, should I use asynchronous code for everything? Here's a list of what I'm doing asynchronously at the moment: 1. TcpClient.BeginConnect with TcpClient.EndConnect 2. NetworkStream.BeginRead with NetworkStream.EndRead 3. TcpListener.BeginAcceptTcpClient with TcpListener.EndAcceptTcpClient 4. Listening thread on server for client connections 5. Listening thread on client for incoming data from server 6. Listening Task on server for incoming data from each client 7. New Task created every time an event such as ConnectionLost is raised (so the respective form can update). Think Delegate.BeginInvoke. Everything is set up asynchronously and it works well, but I'm beginning to wonder if all of these _should_ be asynchronous. I mean, it all sounds nice and people claim it to be efficient due to IO completion ports not blocking or something, but is it really? I can understand having a single thread for listening on both the client and server, and for reading it makes sense as well. But every time an event is raised (which may be quite often!), should its invocation really be asynchronous?
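To illustrate the trade-off in point 7, a minimal sketch (class and member names invented): raising inline keeps ordering but blocks the raising loop on slow subscribers, while offloading to a Task frees the loop at the cost of ordering and exception locality.

```csharp
using System;
using System.Threading.Tasks;

class Connection
{
    public event EventHandler ConnectionLost;

    // Inline raise: simple and ordered, but a slow subscriber
    // stalls whatever loop detected the lost connection.
    public void RaiseInline()
    {
        var handler = ConnectionLost;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    // Offloaded raise: the detecting loop continues immediately, but
    // successive events may reach subscribers out of order, and any
    // subscriber exception surfaces on the thread pool instead.
    public void RaiseOffloaded()
    {
        var handler = ConnectionLost;
        if (handler != null) Task.Run(() => handler(this, EventArgs.Empty));
    }
}

static class Demo
{
    static void Main()
    {
        var c = new Connection();
        int calls = 0;
        c.ConnectionLost += (s, e) => calls++;
        c.RaiseInline();
        c.RaiseOffloaded();
        Task.Delay(100).Wait(); // let the offloaded raise complete
        Console.WriteLine(calls); // 2
    }
}
```
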
It seems like I am using Tasks for everything that _can_ be made asynchronous and I'm not sure whether or not that is best practice."} {"_id": "130870", "title": "Parent and child permission schemes", "text": "Which approach makes most sense to use, the destructive one or the non-destructive one? Does anyone have real-world experience with one, both or even a different approach? **Model** A permission scheme has a set of permissions linked to it. It can also have a parent from which it inherits permissions. **Destructive** Taking away permissions from a parent scheme also takes away those permissions in any child scheme. Reason to use this: when later re-enabling those permissions for a parent scheme, they are not automatically re-enabled for child schemes. Reason to not use this: accidentally taking away permissions from a parent scheme means the permissions for child schemes have to be restored separately. **Non-destructive** Taking away permissions from a parent scheme has no effect on child schemes. The parent and child schemes are combined to determine the correct permissions. Reason to use this: accidentally taking away permissions from a parent scheme does not require the child schemes to be restored separately. Reason to not use this: when later re-enabling those permissions for a parent scheme, they are automatically re-enabled for child schemes."} {"_id": "110944", "title": "What is \"graceful degradation\"?", "text": "I hear a lot about this term \"graceful degradation\". For example: \"An application server should gracefully degrade when it is under heavy load\", \"Graceful degradation of user interfaces....\" The term looks like an abstract thing to me. Any concrete example of what it means?"} {"_id": "210900", "title": "How to Implement Error Handling", "text": "Even though I've programmed at a professional level for some years I still do not fully understand error handling. Although my applications work fine, the error handling isn't implemented at a professional level and is a mix and match of a number of techniques. There is no structure behind my error handling. I'd like to learn and understand how it's implemented at a professional level. This is one area where I lack knowledge. When should I use an exception and when should I return a success status, to be checked in the logic flow? Is it OK to mix throwing exceptions and returning statuses? I code in C# mainly."} {"_id": "189361", "title": "Building automated unit tests for tools which don't have an xUnit implementation", "text": "# Preamble I have been bitten by the bug of automated unit testing. I feel the benefits, and the confidence in a code base, it can deliver. I also feel I have a reasonable intuition as to what parts of the code _deserve_ to be unit tested. Code that has logic, code that might be messy (because it's dealing with some messy requirements) that you are happy to have encapsulated, that may have a few odd fringe conditions. # Unusual Context At my place of work, we leverage a number of tools which are fairly niche. For the sake of argument, I can certainly see that these tools deliver on features and efficiencies, making them indispensable for getting their particular job done. The challenge I have is that I have implemented logic in these tools which my gut says \"this should be covered with a unit test\". I have tested these bits of code...
* In a throwaway fashion - try several inputs to exercise the code until satisfied it works - this has the downside of being lost effort when it comes to people maintaining the code in the future * In a less throwaway but manual fashion - writing up the test cases and steps to poke the code to produce output, with test data, such that the above throwaway tests are not thrown away - this retains the investment in time, but is slow and depends on people knowing about the tests and running them * In a slightly more automated fashion - with scripts to do the poking of the code, perhaps even checking the output to a degree - this is more pleasant to re-run but is far from leveraging a mature framework in a consistent manner like when writing xUnit-style tests While the progression feels nice, I feel I'm missing the xUnit experience I've become a fan of. # What kind of languages/platforms exactly am I dealing with here? * An ETL tool, akin to SSIS, called RedPoint Data Management: it is not open source, so I'll describe it - it has projects which contain tools, these tools are linked by data flows, and some have logic I wish to unit test * A composition tool; this, for the uninitiated, is a product which takes a data file and outputs a print file. Imagine XML in, PDF out. * DOS script functions: I promise this is a real feature of good ol' fashioned DOS scripts, tutorial here * (I'll add this to the list - not strictly what I need help with, but it may be clearer than the above if my tool set is difficult to empathise with) COBOL data transformations, using CopyBooks to specify the formats of the flat-file inputs and outputs. When compiled, this results in one big-ball-of-mud binary even if it is functionally decomposed inside # The Actual Question Are xUnit-style tests fundamentally aimed at object-oriented languages?"} {"_id": "189360", "title": "Optimal Data Structure for our own API", "text": "I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. I would like to be able to store all of this information in a single symbol such as `stack-api/cache`. So, without further ado, `stack-api/cache` is a list of conses keyed by last update: `(<site-entry> ...)` where `<site-entry>` would be `(1362501715 . <site>)`. At this point, all we've done is define a simple association list. Of course, _we must go deeper_. Each `<site>` is a list of the API parameter (unique) followed by a list of questions: `(\"codereview\" <question-entry> ...)`. Each `<question-entry>` is, you guessed it, a cons of a question with its last update time: `(1362501715 . <question-data>)`, `(1362501720 . <question-data>)`. `<question-data>` is a cons of a `question` structure and a list of answers (again, consed with _their_ last update time): `(<question> . <answers>)`, where each element of `<answers>` would be `(1362501715 . <answer>)`. This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love _at all_ ). The explicit conses are likely unnecessary, but it helps my brain wrap around it better.
I'm pretty sure a `<question-data>`, for example, would just turn into `(<question> <answer> ...)`. **Concerns:** * Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. _(I'm planning on culling old data using the times at the head of the list; each inherits its last-update time from its children and so on down the tree. To what extent this cull should take place: I'm not sure.)_ * Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?"} {"_id": "61236", "title": "Other than the Linux kernel, which operating system kernels should you study?", "text": "The Linux kernel is often listed as a code base which you are recommended to read and, even if it is poorly commented (or the files I have looked at have all been), it does have some really good code in it. Now, putting the Linux kernel aside, which other operating system kernels do you recommend that people interested in systems programming and/or operating systems study? **Why?** What is so great about the code base? Would it show you very different approaches from what the Linux kernel has gone with? Do they use interesting technologies? Something else...? That the kernels are under an open source license is more or less necessary."} {"_id": "71920", "title": "What are some good open source C++ packages to study in order to learn advanced software construction?", "text": "I've heard that you should read 10 times more than you write. This applies to both literature and source code. Therefore, I'd like to study the best C++ packages we've developed. I'm interested in discovering extremely high quality software packages that use C++, C and assembly. This begs the question, \"What determines high quality?\". Well, that's kind of up to you. Could you provide a bulleted list of **_your reasons for choosing your packages_** and also **_the list of open source C++ packages_**? For example, my list of qualities that I think make good software is: * Well commented (with full javadoc comments for every function, method and class signature) * Well organized (files broken up into logical, manageable pieces) * Well documented (a reasonable amount of up-to-date, bug-free documentation, hopefully with some documentation on the high-level structure) * Well-named classes and variables (succinct but descriptive variable names; this should reduce the need for inline comments, however inline comments are always welcomed; no single-letter variables) * Should have unit tests. * Consistent style and formatting. * Hopefully demonstrates clean and recommended usage of a good library like Boost, Qt, STL, etc. My list of packages is: * TrueCrypt (though it doesn't have full javadoc signatures) * Chrome * OpenSceneGraph I'm aware of this post. It's two years old and doesn't have the list of reasons. Thank you very much for your contributions, in advance. :-) UPDATE: I've created a github repository with all of the packages listed below.
https://github.com/homer6/c_reading"} {"_id": "77757", "title": "Difference between pseudocode and algorithm?", "text": "Technically, is there a difference between these two words, or can we use them interchangeably? Both of them more or less describe the logical sequence of steps to follow in solving a problem, don't they? So why do we actually use two such words if they are meant to describe the same thing? Or, in case they aren't synonymous words, what is it that differentiates them? In what contexts are we supposed to use the word pseudocode vs. the word algorithm?"} {"_id": "22019", "title": "What's the cheapest way to host hobby projects?", "text": "What's the best place to put your hobby web projects (the web app itself, not the code)? Typically, the projects are such that: a) I just want to test out an interesting idea without exploring the business angle to it, just to see how people take it. b) I don't expect a lot of traffic. c) I don't want to scale immediately. d) I don't want to be tied down to one technology (I want to do different projects to get familiar with various web stacks, languages and libraries). Google App Engine seems very restrictive for such exploratory stuff... Restrictions like no outbound request can go beyond 10 seconds and every request has to return within 30 seconds, etc. piss me off. I know they are needed for scale, but I would like them to be optional. Amazon EC2 micro nodes are free for a year, but they ask for credit card information, which I am not sure I want to give away when I'm not paying initially. What other free/cheap alternatives do I have?"} {"_id": "73072", "title": "Make my project Open Source but protect my Company", "text": "My Company is developing a web application, similar to GMail or Remember the Milk. We are on the verge of releasing the source code under the AGPL. We are just afraid somebody will take the code and set themselves up as a competitor. What can we do to prevent that? We were thinking of keeping the code that powers our API closed, as it is the most important asset of the Company. We don't like that very much, so we would like to find alternative routes."} {"_id": "73070", "title": "What's your approach to assuming someone else's project?", "text": "I think this is a fairly common and challenging topic for anyone that has ever inherited code from another programmer or team. I can think of two common scenarios in my professional career that I've run into: diving into an open source project, and strictly inheriting the code of an individual's project. I'm coming up on the latter and will be inheriting a project created by an individual, and while thinking about how best to approach this, I figured I would ask for the intelligence of p.se so I can learn from your experiences. When Magento (a large-scale eCommerce application) came out, I was really lost in the Zend framework. There was little documentation out during alpha, so it was hard to use the community resources. I ended up using `grep` a lot and back-tracing methods to get a firm understanding of what it does. By doing that, I eventually learned more about the application design, although it did take a little while. What is your approach when you're up against a project that has little documentation? I feel like it is almost finishing someone else's painting, which is always an interesting task."} {"_id": "77750", "title": "JRE for 64-bit and 64-bit Java?", "text": "From what I know, Java can run on 64-bit systems, no problem.
I'd like to know how Java supports 64-bit features, e.g., `System.identityHashCode()` returns a 32-bit int, and it's common to see the object pointer (memory address) returned. Should 64-bit Java return a long instead? If not, how does Java scale to 64-bit systems?"} {"_id": "198044", "title": "Order collisions in ecommerce", "text": "Suppose I have a web app where sellers add their products and set them as available for sale. Then I show a list of products in my mobile app, where I get products via my REST API. My problem: suppose two clients view the same product (and there's only 1 item available from the seller) and they place simultaneous orders. How do I resolve such a collision? My technology stack: RoR, PostgreSQL, Heroku."} {"_id": "141330", "title": "User input and automated input separation", "text": "I have a MySQL database and an automation script which modifies the data inside once a day. And these columns may have been changed manually by a user. What is the best approach to make the system only update the automated data, not the manually edited data? I mean, yes, flagging the cell which is manually edited is one way to do it, but I want to know if there's another way to accomplish this. Just curious. BTW, the question is about cell values, not rows."} {"_id": "141331", "title": "Continuous integration (with iOS and Android projects)", "text": "I'm trying to make some positive changes in my company and one of the changes is implementing continuous integration. We do mobile development (iOS/Android), so I need a CI that supports both types of projects. As you can tell, I don't know a lot about CI, but I've googled a little bit and I think that Jenkins and Hudson are the two most popular. I have a two-part question. 1. Your thoughts on Jenkins? 2. Is there a way for CI to check if the project complies with coding standards (like loose coupling and so on)?"} {"_id": "77758", "title": "Why are there multiple Unicode encodings?", "text": "I thought Unicode was designed to get around the whole issue of having lots of different encodings due to a small address space (8 bits) in most of the prior attempts (ASCII, etc.). Why then are there so many Unicode encodings? Even multiple versions of the (essentially) same one, like UTF-8, UTF-16, etc."} {"_id": "93681", "title": "Storing T&C acceptance in database - best practice", "text": "I wondered if anyone could advise on the best way of storing a user's acceptance of the Terms and Conditions in the database. I am in the UK if this changes anything. I have had the T&Cs drawn up by a UK lawyer so don't need any advice on that part! At the moment I am thinking of having, at the time of signup, a checkbox saying \"I agree to the [linked] terms and conditions\" and making sure this is checked before signing them up. In the database I will have a boolean set to True and also a timestamp, along with the email address they used to sign up. Is this enough? If a user ever decided they wanted to challenge their acceptance, is this recognised as proof? I have been able to find very little, if any, information about this on the web."} {"_id": "165349", "title": "What's a good, quick algorithms refresh?", "text": "I have programming interviews coming up in a couple weeks. I took an algorithms class a while ago but likely forgot some key concepts. I'm looking for something like a very short book ( **< 100 pages**) on algorithms to get back up to speed. Sorting algorithms, data structures, and any other essentials should be included.
It doesn't have to be a book... just looking for a great way to get caught up in about a week. What's the best tool for a **quick** algorithms intro or refresher?"} {"_id": "93684", "title": "Questions about TDD and unit testing", "text": "I am a junior software developer and I have been researching some of the practices in the industry to make myself better. I have been looking at unit testing briefly and I cannot see how the extra time spent writing a ton of unit tests is going to make my code better. To put things into perspective: 1. The projects I work on are small 2. I am the only developer on said project (usually) 3. The projects are all bespoke applications The thing I don't get the most is, how can a unit test tell me whether my calculate-price function (which can depend on things like the day of the week and bank holidays etc., so assume 20-40 lines for that function) is correct? Would it not be quicker for me to write all the code and then sit through a debugging session to test every eventuality of the code? Any examples that are forms-based would be appreciated (examples I have seen from the MSDN videos are all MVC and a waste of time IMHO)."} {"_id": "165341", "title": "Listing technologies on a resume for a software position when your background is game programming?", "text": "So I'm thinking about applying for an entry-level position in the software industry, but my limited work experience and all my notable experience in college is with game technologies. Sure, the languages transfer over well, but most of the technologies I have experience with are all related to graphics programming, engines of various types, and such, and do not transfer over at all. I feel like it would be inappropriate to just take my game programming resume and basically replace the word game with software for the reasons mentioned, but on the other hand if I take them out I will only have languages and some technologies that I have some small passing experience with - which will obviously not reflect well on me. Should I leave them out or put them in, and if so how can I spin them to be appropriate?"} {"_id": "98451", "title": "How Do You Pull Something from a Release?", "text": "Let's say your team is working on 10 features/fixes for a sprint. At the end of the sprint, there are one or two things that the product owner does not accept. But they would really like the other 8 or 9 to be released. How do you handle this? Using Subversion, what would be the best methodology to manage a sprint with the possibility that this could happen?"} {"_id": "98458", "title": "Reading Java book -- I'm confused on this explanation of the difference between information and data?", "text": "From \"Demystifying Java\": > For many of us, the terms _information_ and _data_ are synonymous. However, information and data are distinctly different in programming. Data is the > smallest amount of meaningful information. Can someone please give me an example that could help me out, and perhaps relate it to a 14-year-old? I _sort of_ understand, but when I try to separate the two in my head, I'm having problems for some reason (probably thinking of them as the same for my whole life is the reason for this)."} {"_id": "224925", "title": "Where should I start reading AngularJS's source code?", "text": "After reading this article I realized that I really didn't read any \"serious\" source code during my 3 years as a professional developer.
Recently I started a new web project which makes heavy use of AngularJS, so I decided to start my reading - or, better, _decoding_ [as the blogger wrote] - activity with something that is both challenging and professionally useful. Now I just need to be pointed in the right direction. Should I just start at the beginning of the source code, or is there a better starting point?"} {"_id": "138729", "title": "How to effectively find, hire and work with contractors?", "text": "I know there are some questions that are sort of similar to this one, but I don't think any of them really ask or answer my set of questions. A bit of background: Approximately 6 months ago, I went from working for a large, bureaucratic, process-driven software company to a very small start-up. I wanted to work for a smaller, faster team and company -- note: be careful what you wish for! :) Although I have a much fancier title, I am mostly a dev manager (and part-time software architect and technical lead). One of my biggest challenges in my career has always been hiring. On average, I probably get 1 good dev for about every 5 I hire. It's a huge waste of time and money. I find that I still struggle with it, but now that I work primarily with contractors, the challenges are a bit different (the good news is that contractors are easier to fire than full-time US-based employees). First things first... Finding, interviewing and hiring: What sources do you use? What questions do you ask? Do you ask for references? Do you ask them to work problems/submit code samples? Note: I never want to risk someone submitting someone else's IP, but I have had potential employers in the past ask me to create an app for them to solve a problem, etc. I don't want this to be a cultural thing, but do you ask different questions based on country of origin? Again, I don't want it to turn into \"don't use contractors from this or that country because they are crap\"... there are good devs everywhere, but a lot of bad ones, too. Do you find it better to work with individuals or contracting companies? I guess I've spent a lot more time working with larger companies. Once hired, how do you effectively work with them? What technologies do you use? For example, I can't imagine trying to do this kind of work without a DVCS. I don't think I'll ever go back to SVN! :) What collaboration tools/websites do you use? Have you had better experiences with fixed-price or hourly contracts? I've managed both and I understand some of the pros and cons of each, but I swear I still haven't figured out which is the better \"bet\" (and yes, it's pretty much a bet in my opinion). I think for me, I need to start hiring hourly because I don't like the control that I seem to give up over the code when it's fixed-price. Do you demand to have access to the code during development so you can review it and see if you feel like you are getting quality work? How do you handle poor code? What SLAs do you demand in the contract? For example, unit-test coverage? etc. Finally, I know that most people could take any one of these things and write a bunch on the topic. Please feel free to answer just one part of the question. Or provide a link to a blog post where you think someone has answered a similar set of questions."} {"_id": "224921", "title": "Naming interfaces for persistent values", "text": "I have 2 distinct types of persistent values that I'm having trouble naming well.
They're defined with the following Java-esque structure, borrowing Guava's Optional for the example and using generic names to avoid anchoring: interface Foo<T> { T get(); void set(T value); } interface Bar<T> { Optional<T> get(); void set(T value); } * With `Foo`, if the value hasn't been set explicitly then there's some default value available or pre-set. * With `Bar`, if the value hasn't been set explicitly then there's a distinct \"no value\" state. I'm trying to optimize the names for their call sites. For example, someone using `Foo` may not care whether there's a default value involved, only that they're guaranteed to always have a value. How would you go about naming these interfaces?"} {"_id": "224920", "title": "Is this a design pattern?", "text": "I have the following C# code. It helped me to avoid some code repetition in a good way. The `ExecuteQueryGenericApproach` method receives a generic `Func<IDataRecord, T>` delegate as an argument. The delegated method has a parameter which receives `IDataRecord` as an argument. That is, the `ExecuteQueryGenericApproach` method provides the required IDataRecord to the functions passed to it. **QUESTIONS** 1. What is the name of this pattern? (Based on `GoF` design patterns) 2. Is there any other scenario where this pattern is used? **Note** : Knowing the name of this pattern will help me to better research it and find opportunities to use it. CommonDAL public class CommonDAL { public static IEnumerable<T> ExecuteQueryGenericApproach<T>(string commandText, List<SqlParameter> commandParameters, Func<IDataRecord, T> methodToExecute) { string connectionString = @\"Server=XXXX;Database=AS400_Source;User Id=dxxx;Password=xxxx5\"; //Action, Func and Predicate are pre-defined generic delegates. //So as delegates they can point to functions with a specified signature. using (SqlConnection connection = new SqlConnection(connectionString)) { using (SqlCommand command = new SqlCommand()) { command.Connection = connection; command.CommandType = CommandType.Text; command.CommandText = commandText; command.CommandTimeout = 0; if (commandParameters != null) { command.Parameters.AddRange(commandParameters.ToArray()); } connection.Open(); using (var rdr = command.ExecuteReader()) { while (rdr.Read()) { yield return methodToExecute(rdr); } rdr.Close(); } } } } } EmployeeDAL public class EmployeeRepositoryDAL { public static List<Employee> GetEmployees() { string commandText = @\"SELECT E.EmployeeID,E.EmployeeName,R.RoleID,R.RoleName FROM dbo.EmployeeRole ER INNER JOIN dbo.Employee E ON E.EmployeeID= ER.EmployeeID INNER JOIN dbo.[Role] R ON R.RoleID= Er.RoleID \"; //IEnumerable<Employee> employees = MyCommonDAL.ExecuteQueryGenericApproach<Employee>(commandText, commandParameters, Employee.EmployeeFactory); //Group By is needed for listing all the roles for an employee. IEnumerable<Employee> employees = CommonDAL.ExecuteQueryGenericApproach<Employee>(commandText, null, Employee.EmployeeCreator) .GroupBy(x => new { x.EmployeeID, x.EmployeeName }, (key, group) => new Employee { EmployeeID = key.EmployeeID, EmployeeName = key.EmployeeName, Roles = group.SelectMany(v => v.Roles).ToList() } ).ToList(); return employees.ToList(); } } Entity public class Employee { public int EmployeeID { get; set; } public string EmployeeName { get; set; } public List<Role> Roles { get; set; } //IDataRecord provides access to the column values within each row for a DataReader //IDataRecord is implemented by .NET Framework data providers that access relational databases.
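//EmployeeCreator below is the row-mapping callback: it is the Func<IDataRecord, Employee> //passed as methodToExecute to ExecuteQueryGenericApproach, which owns the connection, //command and reader lifetimes and streams one mapped object per row via yield return.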
//Static Method public static Employee EmployeeCreator(IDataRecord record) { var employee = new Employee { EmployeeID = (int)record[0], EmployeeName = (string)record[1], Roles = new List<Role>() }; employee.Roles.Add(new Role { RoleID = (int)record[2], RoleName = (string)record[3] }); return employee; } }"} {"_id": "199309", "title": "What concept am I missing with private methods and testing?", "text": "I've read a lot of blogs arguing about private methods and testing. Some people say you should not test private methods; they say you should make them public or put those methods in a new class. But then, what about encapsulation? Let's say: 1. I have this method which is simple enough not to warrant a new class, and 2. It can be useful only for internal stuff, so there's not a good reason to make it public. My general rule about methods being public or private is simple: if it's useful outside it's public, if it's useful just inside then it's private. Am I missing something with this logic?"} {"_id": "100958", "title": "Is it wise for a backend developer to accept working only on the front-end part of the application just because you know it well?", "text": "I am a Python developer; I have experience in developing web applications using Python. Since I deal with the web, I have to know something about the front-end part, hence I even learned HTML, CSS & JS. I have chosen Python as my career domain. I recently moved to a new R&D company; they hired me as a Python developer with a huge package and gave me a web application to develop. They didn't have any designer to work on the front-end part, so in my first few weeks I developed its design and I started developing the application by myself, doing the front end and the back end. Everyone was very impressed by the design and the flow of the application on the front-end part. In two months I almost finished 30% of the work. So far everything was good. Suddenly they hired a new guy who has experience in Perl for another project, and since the manager wanted to finish my project soon, he asked the Perl guy to work temporarily on the back-end part of my app (Python) and asked me to do only the front-end part. Now I feel like this will degrade my profile, since I have not been given any Python work as of now. They hired me as a Python developer, and I took the initiative and did the front end of the app because I wanted my product to look good. And now, since they all liked the way I did its design, they have made me a full-time front-end developer. **Few Details About Me:** Before getting into this company I had got 5 offers for Python jobs, including some at giant companies, but I chose this one since it was the only R&D company and even had the highest package. And this company had conducted around 4 rounds for Python, and after selecting me they said that they had been waiting for a guy like me for the past six months. All of the people who interviewed me for Python were from another team. Then once I started developing the project I came to know that the Python work is somewhat less compared to the front-end JS part, since it is a whole SPI (single-page interface) application, fully AJAX-driven. And in my previous company I used to do even the JS and the back-end Python for my projects. Since I was doing the Python part, I felt good working in that way. In the new company I am also working in the same way, but since the manager now knows that Python has less work, he asked a less experienced guy to take care of it.
So that the project will get finished soon. The manager is thinking only about finishing the project; he does not care about us employees and our main domain. **The Reason to choose Python:** I think I can say with confidence that I am already an expert on the front-end side, but I feel like Python is a huge field, and I may require many more years of experience to become an expert in it. And Python is the domain which has given me a rapid boost in my career, when I compare myself with my friends in other web technologies like PHP. What my heart says is: become an expert in Python in its every field. Please, can anyone suggest how I should deal with this? Do I need to tell the manager anything about how I feel? Do these product companies work in this way? Should I switch my job?"} {"_id": "100956", "title": "What kind of licensing is required for Visual FoxPro?", "text": "I am thinking of developing customized software for desktops in Visual FoxPro 9 and want to know what type of licensing is required. As a developer, would I need to have a Visual FoxPro 9 license and would my users need to have the End User License? What type of licenses would be needed for commercial release? How would the licensing change if I released this as freeware?"} {"_id": "224929", "title": "Is CSV a good alternative to XML and JSON?", "text": "Is CSV considered a good option compared to XML and JSON for programming languages? I generally use XML and JSON (or sometimes a plain text file) as flat-file storage. However, recently I came across a CSV implementation in PHP. I have generally seen CSV used for input to Excel files though, but I have never used it in programming. Would it be better than XML or JSON in any way?"} {"_id": "138860", "title": "Why GWT .. what are the advantages?", "text": "> **Possible Duplicate:** > What are proven advantages of tools like GWT over pure JavaScript > frameworks? When should we decide to use GWT, and what are its advantages over normal web app design with HTML/JSP, CSS, JS, etc.? We still need to design our pages and CSS/styles, but we can't debug with tools like Firebug... It seems to require complex Java coding to design web pages - which ultimately produces HTML wrapped in complex, incomprehensible JavaScript. Can we do complex web page design easily with GWT?"} {"_id": "138724", "title": "Is it considered best practice to dynamically bind return types from the Entity context?", "text": "MyDbEntities context = new MyDbEntities(); var result = context.StoredProcedureName(userId); In the situation above, is it considered best practice to use `var` or `ObjectResult`?"} {"_id": "130257", "title": "Optimistic work sharing on sparsely distributed systems", "text": "What would a system like BOINC look like if it were written today? At the time BOINC was written, databases were the primary choice for maintaining shared state and concurrency among nodes. Since then, many approaches have been developed for working with optimistic concurrency (OT, synchronization primitives like vector clocks, virtual synchrony, shared iterators, etc.). Is there a paradigm for optimistically distributing units of work on sparsely distributed systems which communicate through message passing? Sorry if this is a bit vague. P.S. The concept of tuple spaces is great, but locking is inherent to its definition. **Edit2** : The entire system is sparsely distributed - the nodes can communicate only through a WAN. And communication _can_ be slow and faulty.
The question is about how best to distribute units of work among them without a central coordinator and with as little consensus as possible (because consensus is expensive). The answers here seem to be talking about databases - data isn't the problem. The problem is in distributing work. **Edit** : I already have a federation system which works well. I'm looking to extend it to get clients to do units of work."} {"_id": "27454", "title": "Should I focus on being deep or broad", "text": "I have been a professional developer for just over half a year and have been amazed at how big the world really is out of college. I have continued to learn in my free time but I am wondering where I should focus. FOCUS 1) The development stack used by my company. Quickest payoff in my day to day development. #Deepest FOCUS 2) A different language with the same paradigm. See if I can generalize my knowledge and way of thinking. FOCUS 3) Different language, different paradigm. Expand my borders and learn new ways to do things. #Broadest If anyone has a different focus, feel free to put that out there."} {"_id": "132173", "title": "Coding in known language or learning a new one in free time?", "text": "> **Possible Duplicate:** > Should I focus on being deep or broad Which of {coding, learning} would you recommend for a programmer in his/her free time? Of course, this question is valid in the case of coders who are able to do reasonable things in at least one mainstream language. The reasons why I'm asking this question are: * Coding will improve skills in the language already learned, but there will be situations in which we have to use other languages. * If one learns a new language instead of using what he/she already learned, it will cause losing at least some knowledge of the language already learned. * No doubt, no one can completely learn a language, so moving to a new language may leave him/her a master of none."} {"_id": "141654", "title": "is it better to spend my free time mastering a language I work with or learning a new one?", "text": "> **Possible Duplicate:** > Should I focus on being deep or broad I work full time on an Android project and am very comfortable with both Java and the Android framework. On a good day, I would rate my abilities at an 8, and maybe a 7 on a bad day. I've recently found myself with more free time than I'm used to, so I have been working on a lot of personal projects. I am beginning to wonder what others think about this; is it worth my time to continue experimenting and pushing Android, or would I be better off learning another language? What do you all think about this? What would you do with more free time and energy than you know what to do with?"} {"_id": "130250", "title": "Why doesn't \"object reference not set to an instance of an object\" tell us which object?", "text": "We're launching a system, and we sometimes get the famous exception `NullReferenceException` with the message `Object reference not set to an instance of an object`. However, in a method where we have almost 20 objects, having a log which says an object is null is really of no use at all. It's like telling you, when you are the security agent of a seminar, that a man among 100 attendees is a terrorist. That's really of no use to you at all. You should get more information, if you want to detect which man is the threatening one. Likewise, if we want to remove the bug, we do need to know which object is null.
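To show the kind of manual, named check this pushes you toward, a minimal C# sketch (all type and member names invented for illustration):

```csharp
using System;

class Customer { }

class Order
{
    public Customer Customer; // may still be null after construction
}

static class Guards
{
    static void Process(Order order)
    {
        // Each guard names the reference it checks, so the log
        // finally says *which* object was null.
        if (order == null) throw new ArgumentNullException(\"order\");
        if (order.Customer == null) throw new ArgumentNullException(\"order.Customer\");
    }

    static void Main()
    {
        Process(new Order()); // throws, and the message names order.Customer
    }
}
```
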
Now, something has obsessed my mind for several months, and that is: **Why doesn't .NET give us the name, or at least the type, of the object reference which is null?** Can't it understand the type from reflection or any other source? Also, what are the best practices for understanding which object is null? Should we always test the nullability of objects in these contexts manually and log the result? Is there a better way? **Update:** The exception `The system cannot find the file specified` has the same nature. You can't find out which file until you attach to the process and debug. I guess these types of exceptions could become more intelligent. Wouldn't it be better if .NET could tell us `c:\\temp.txt doesn't exist.` instead of that general message? As a developer, I vote yes."} {"_id": "181154", "title": "Type systems: nominal vs. structural, explicit vs. implicit", "text": "I'm a bit confused about the difference between nominal and structural type systems. Can someone please explain how they differ? From what I understand: * Nominal: Type compatibility is based on type name. * Structural: Type compatibility is based on the type structure, e.g. in C if 2 variables are struct types with different names but the same structure, then their types are compatible. Now about explicit and implicit: why is it different from static and dynamic typing? In static typing, types will be explicit while in dynamic typing, types are implicit. Am I right?"} {"_id": "181157", "title": "How to program thread allocation on multicore processors?", "text": "I would like to experiment with threads on a multi-core processor, e.g. to create a program that uses two different threads that are executed by two different processor cores. However, it is not clear to me at which level the threads get allocated to the different cores. I can imagine the following scenarios (depending on operating system and programming language implementation): 1. Thread allocation is managed by the operating system. Threads are created using OS system calls and, if the process happens to run on a multi-core processor, the OS automatically tries to allocate / schedule different threads on different cores. 2. Thread allocation is managed by the programming language implementation. Allocating threads to different cores requires special system calls, but the programming language's standard thread libraries automatically handle this when I use the standard thread implementation for that language. 3. Thread allocation must be programmed explicitly. In my program I have to write explicit code to detect how many cores are available and to allocate different threads to different cores using, e.g., library functions. To make the question more specific, imagine I have written my multi-threaded application in Java or C++ on Windows or Linux. Will my application magically see and use multiple cores when run on a multi-core processor (because everything is managed either by the operating system or by the standard thread library), or do I have to modify my code to be aware of the multiple cores?"} {"_id": "181150", "title": "Difference between language virtual machine and emulating vm?", "text": "I'm having a hard time understanding the difference between an emulation virtual machine and a language VM. I started with the research and implementation of an emulation virtual machine, primarily emulating quite old 16-bit architectures. I want to get the basics down for a language virtual machine. Are both systems similar?
Do they both use register-based or stack-based architectures? I'm under the impression a language VM is basically a runtime environment. Depending on the complexity of the VM, it may have a garbage collector, a JIT compiler, etc... Would that assumption be correct? EDIT: I'm also talking about bytecode VMs, but native machine code works too."} {"_id": "128468", "title": "Books that every software system architect/data modeller must read", "text": "I hope I'm not confusing the term system architecture with the data model/database. Is there a book out there that's the standard for describing best practices, design methodologies, and other helpful information on designing the architecture and the data model for a web-based application? What about that book makes it special?"} {"_id": "220208", "title": "Cost Benefit Analysis of choice of javascript framework for excel-like web app", "text": "I am engaged by a telco company's sales team to develop a web application to replace their current workflow. Their current workflow consists of many spreadsheets containing product, pricing, and sales data. I have successfully done this, but they continuously want me to allow them to download certain data into Excel files for them to view. I appreciate why they want this because a) sometimes they want a lot of data and pagination on the web app just does not work for them, and b) they have been so used to Excel and its functions that it is hard to wean them off it completely. As a vendor, I want to marry the convenience of Excel with a web application, so I ended up surfing for JavaScript frameworks that allow me to simulate Excel in the web app. My web application is developed in CakePHP 2.4.2 using MySQL as the database. I have narrowed it down to 2 choices: dhtmlx and wijmo. This web application is hosted within the company's intranet. The top Excel features I noticed the sales team using: 1. They want tabs 2. They want to be able to see the formula bar so that they know why a figure is, for example, 1,789.45 3. They want to be able to select a bunch of cells and then find out either the sum of the figures in the cells or the count. 4. They want to be able to do sorting and filtering the way it is done in Excel, with checkboxes. See the attached screenshot. ![enter image description here](http://i.stack.imgur.com/RlkPm.png) Although I have narrowed my choice down to these 2 JavaScript libraries, I am open to other choices. Another important thing is that the library must be as simple as possible for a beginner like me to write the application with."} {"_id": "93350", "title": "Is it legal or a good idea to have a backup of all client sites on my own server", "text": "I have seen many times that if we build a website for a client, there is a possibility that the site gets changed over a period of time. I was thinking that from now onwards, whichever site I make, I will host a copy of it on a personal server, like `client1.myserver.com`, so that even if they change it I have a copy. That way, if I need to show someone, or I need to refer back to a few things myself, I have the proof there. I will not make them public but will password-protect them. I want to know whether this is legal and a good idea or not."} {"_id": "128461", "title": "Strictly from an employability perspective, what is the best web development framework to learn right now?", "text": "Even though Ruby on Rails seems to be the most popular with the most job openings, is there a compelling employability reason to learn Django, for example?
Maybe because everybody's learning RoR and there's a shortage of django developers in relation to the demand? Or is php along with a popular php framework the way to go simply because of the sheer volume of php related jobs out there?"} {"_id": "128462", "title": "Evidence of Bad Design: Updating feature A breaks/interferes with existing feature B", "text": "I am working on a Javascript Web App & I am finding that as I expand the program & add new features (new animations) it causes existing features to break or work differently than prior to adding that new feature. So as I continue to finish the app, it keeps breaking old features & I have to go back & redo/update those features. So it seems I never complete a feature because I am always coming back to it so I can get it to work nicely with other features. Is this indicative of bad application design? Or is this a normal feature of computer programming, i.e. just like how in any program you will spend 20% of the time developing the program & 80% of the time fixing bugs (or is it maintaining the program?)? **Are there design &/or any other techniques I can employ to avoid having this problem occur?** The application is a renderer (which takes an XML file which contains information about objects that perform animations - move around, fade etc.) that creates HTML elements & animates them. The design is OOP, with as close as you can get to classes in Javascript by using prototype. It employs a lot of multiple inheritance. It also makes use of anonymous functions where possible, so I think it would be very much like a Java project because they both have OOP, anon functions & inheritance. There is a class purely dedicated towards animation called AnimatedObject & many objects inherit from this class. You just perform an animation by simply calling a function from that class such as myAnimObj.performFade() or myAnimObj.performRotate()."} {"_id": "122608", "title": "Clang warning flags for Objective-C development", "text": "As a C & Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use, and turn most of them on, unless I have a really good reason not to turn one on. I personally think this may actually improve coding skills, as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail, potential implementation and architecture issues, and so on... It's also in my opinion a good everyday learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing from other developers (mainly C, Objective-C and C++) about this topic. Do you actually care about stuff like pedantic warnings, etc? And if yes or no, why? Now about Objective-C, I recently completely switched to the LLVM toolchain (with Clang), instead of GCC.
On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall): * -Wall * -Wbad-function-cast * -Wcast-align * -Wconversion * -Wdeclaration-after-statement * -Wdeprecated-implementations * -Wextra * -Wfloat-equal * -Wformat=2 * -Wformat-nonliteral * -Wfour-char-constants * -Wimplicit-atomic-properties * -Wmissing-braces * -Wmissing-declarations * -Wmissing-field-initializers * -Wmissing-format-attribute * -Wmissing-noreturn * -Wmissing-prototypes * -Wnested-externs * -Wnewline-eof * -Wold-style-definition * -Woverlength-strings * -Wparentheses * -Wpointer-arith * -Wredundant-decls * -Wreturn-type * -Wsequence-point * -Wshadow * -Wshorten-64-to-32 * -Wsign-compare * -Wsign-conversion * -Wstrict-prototypes * -Wstrict-selector-match * -Wswitch * -Wswitch-default * -Wswitch-enum * -Wundeclared-selector * -Wuninitialized * -Wunknown-pragmas * -Wunreachable-code * -Wunused-function * -Wunused-label * -Wunused-parameter * -Wunused-value * -Wunused-variable * -Wwrite-strings I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why? **EDIT** To clarify the question, note that `-Wall` only provides a few basic warnings. There are actually a lot more warning flags, not covered by `-Wall`, hence the question, and the list I provide."} {"_id": "141594", "title": "How does one pronounce \"cron\" as in \"cron job\"?", "text": "Before someone ban-hammers this question as they do with all other pronunciation questions, let me explain its relevance. Verbal communication among co-workers and partners is important; today I was on a conference call with people discussing what I thought was something to do with \"Chrome\", as in Google Chrome. I pronounce the \"cron\" in \"cron job\" with a short O, much like \"tron\", \"gone,\" or \"pawn\", but this individual pronounced it with a long O, as in \"hone\", \"bone\", or \"stone\" (notice the e at the end of all those!). Is there a _standard_ pronunciation? Or is it a matter of opinion? For example, there's nothing ambiguous about the pronunciation of \"Firefox\", but debate is raging over \"potato\" and \"tomato\"."} {"_id": "246419", "title": "Dynamic query in Mysql", "text": "I'm doing a J2EE web application with Struts2, Mybatis and a MySQL database, and what I want is to allow the user to freely choose different parameters to perform a select on a table in the database. For example, I have a Table called SALES, and I want the user to select different parameters according to what he needs. Maybe he wants to filter by dates and the employee name, and then he wants to filter just by customer, and so on. Should I show all the necessary data to filter on the client side with js, or can I do a query to several tables after the user chooses the parameters? If that is possible, how should I write the query with parameters that could be present or not?"} {"_id": "122601", "title": "peer to peer inside a web browser", "text": "I built a peer to peer app a while ago in QT and C, but I had to build the whole thing with a friend of mine. I'm wondering if it is possible to build such an application in a web browser.
Is there an opensource lib, API or anything that could help me achieve that in a browser?"} {"_id": "177", "title": "What are your thoughts about the Actor Model?", "text": "The Actor Model, which is used by Erlang, seems to be a very different way to do concurrent programming. What are your thoughts about the Actor Model? Will it be a popular solution for concurrency?"} {"_id": "85426", "title": "Can PHP be used for desktop application development?", "text": "I want to know whether it is possible to develop desktop applications using the PHP language. In my case, I have never developed one with it. If it is possible, please give me a reference link or a detailed explanation of where I can improve my skills."} {"_id": "74223", "title": "Introducing a new JVM programming language into an established enterprise environment", "text": "Imagine that your current workplace is a Java shop. There is a lot of built-up knowledge about the Java language and there is a comprehensive build and deployment process in place to handle everything in a smooth and Agile manner. One day, a project comes along that just screams out to be written in, say, Ruby. Only the senior developers have any clue about Ruby, but there's a general notion that since JRuby exists for the JVM, the existing infrastructure could continue to be used and supported. Also, JRuby could show a better way of implementing the current applications with less code, so this could represent an ongoing migration. Remember that JRuby is just an example; it could equally be Clojure or Groovy or whatever else runs on the JVM. The question is how would you go about introducing this kind of change - if at all?"} {"_id": "85422", "title": "Well documented open source projects to study for a beginner?", "text": "I've been reading up on/learning C for a week or two and was wondering if anybody could recommend some extremely well documented open-source projects I could study? Things like sourceforge are great and so on, but there's a great deal of things I don't understand about most of the projects on there and could do with a bit more hand-holding to begin with. Suggestions?"} {"_id": "36303", "title": "Why should you get MCTS certified?", "text": "I just found out about MCTS (Microsoft Certified Technology Specialist). But why should you become certified? Why not just study the stuff and not take the exam? It saves you a few bucks and you know just as much."} {"_id": "185674", "title": "Is it possible to use GNU GPL for application that has no source?", "text": "I mean, it is possible to create an application without source code - for example using a HEX editor or some debugger that can assemble instructions (actually every decent debugger can). Creating programs this way is of course hard, but possible for small applications. I can even imagine cases where it is the only possible way - for example, on some very old hardware where you simply do not have any development tools.
For another example, read the article \"Programming in extreme conditions\", published in Assembly programming journal, issue 9. So, what if the author of such an application wants to distribute it under the GNU GPL?"} {"_id": "118703", "title": "Where did the notion of \"one return only\" come from?", "text": "I often talk to Java programmers who say \"Don't put multiple return statements in the same method.\" When I ask them to tell me the reasons why, all I get is \"The coding standard says so.\" or \"It's confusing.\" When they show me solutions with a single return statement, the code looks uglier to me. For example: if (blablabla) return 42; else return 97; \"This is ugly, you have to use a local variable!\" int result; if (blablabla) result = 42; else result = 97; return result; How does this 50% code bloat make the program any easier to understand? Personally, I find it harder, because the state space has just increased by another variable that could easily have been prevented. Of course, normally I would just write: return (blablabla) ? 42 : 97; But the conditional operator gets even less love among Java programmers. \"It's incomprehensible!\" Where did this notion of \"one return only\" come from, and why do people adhere to it rigidly?"} {"_id": "77530", "title": "Best practices concerning exit in Delphi", "text": "A co-worker and I are having a debate on what's best. Both concepts work, and work well, but is there a general consensus on not making a call to exit? What's better? To call exit within a procedure to avoid doing the rest of the code, as in ... if Staging.Current = nil then exit; DoSomethingA(FileNameA); DoSomethingB(FileNameB); Staging.DeleteCurrent; or to not call exit and instead wrap it in a begin and end: if Staging.Current <> nil then begin DoSomethingA(FileNameA); DoSomethingB(FileNameB); Staging.DeleteCurrent; end; Both work. I prefer to use the exit statement since it results in fewer lines of code and looks cleaner to me, but is there any reason or any consensus among programmers to avoid using exit to leave a procedure?"} {"_id": "233410", "title": "Python - only one return per method?", "text": "I'm trying to sort out whether this is just a personal preference thing or whether it's actually bad practice to do this one way or another. I did try to reference PEP8 first, and I did not see an answer. What I'm talking about here is the use of more than one `return` statement. In these examples, it's not _such_ a big deal, as they are very short and easily read. def foo(obj): if obj.condition: return True return False def bar(obj): answer = False if obj.condition: answer = True return answer It bugs me to assign a variable every time `bar()` is called, instead of just determining the answer and returning it. But in a longer function a second `return` might be hidden, and so someone reading the code may not have realized that the function might return a value before hitting the end: def spam(obj): blurb = \"Francis\" if obj.condition: blurb = map(gormf, [z for z in porp(obj)]) return blurb[2] elif not obj.condition and dreg(obj.jek): blurb = obj.hats * 17 if blurb % 13: return blurb else: blurb = obj.name return \"Whales and %s\" % blurb That's a terrible function, but I think you see what I'm poking at. So is it preferable to create a function to have a _single, clear exit_ - even if it's a few more operations than you actually need?
Or should one try to root this out whenever possible in favor of shorter code?"} {"_id": "18454", "title": "Should I return from a function early or use an if statement?", "text": "> **Possible Duplicate:** > Where did the notion of \"one return only\" come from? I've often written this sort of function in both formats, and I was wondering if one format is preferred over another, and why. public void SomeFunction(bool someCondition) { if (someCondition) { // Do Something } } or public void SomeFunction(bool someCondition) { if (!someCondition) return; // Do Something } I usually code with the first one since that is the way my brain works while coding, although I think I prefer the 2nd one since it takes care of any error handling right away and I find it easier to read."} {"_id": "198587", "title": "What are the pros and cons of temporary variables vs multiple returns", "text": "Take the following examples: public static String returnOnce() { String resultString = null; if (someCondition) { resultString = \"condition is true\"; } else { resultString = \"condition is false\"; } return resultString; } public static String returnMulti() { if (someCondition) { return \"condition is true\"; } return \"condition is false\"; } Is one approach objectively better than the other, or is it just a matter of preference? The first requires a temporary variable, but has just a single place where the method returns. Does the decision change if the method is more complex, with multiple factors that could change the result?"} {"_id": "250653", "title": "\"Proceed if true\" vs \"stop if false\" in if statements", "text": "While I was writing a private helper method in Java, I needed to add a check for a precondition that would cause the method to do nothing if not met. The last few lines of the method were just off the bottom of the editing area in the IDE, so in an epic display of laziness, I wrote the check like this: function(){ if (precondition == false){ return; } //Do stuff } as opposed to finding the end of the block to write a more \"traditional\" statement like this: function(){ if (precondition == true){ //Do stuff } } This made me wonder: is there a reason to avoid my version in favor of the other, or is it just a stylistic difference? (Assuming they are used in such a way that they are _meant_ to be functionally equivalent)"} {"_id": "104551", "title": "Should a function use premature returns or wrap everything in if clauses?", "text": "> **Possible Duplicate:** > Where did the notion of \"one return only\" come from? Which is better? I see pros and cons for both, so I can't really decide on which one to stick to. ## Wrap in if clause function doIt() { if (successfulCondition) { whenEverythingGoesWell(); } } * **Pro** : Shows programmer's intention through indentation. * **Con** : Indentation can get really deep if you need to short circuit many times. For example, `doThirdThing()` requires the success of `doSecondThing()`, which in turn requires the success of `doFirstThing()`. This happens a lot in web development where many web services are not reliable. ## Premature return function doIt() { if (!successfulCondition) { return; } whenEverythingGoesWell(); } * **Pro** : Subversion checkins would be succinct. Sometimes, I see co-workers wrap super long functions in an if clause. The whole shebang gets checked in and makes reading Subversion diffs difficult.
* **Con** : Requires you to read the whole function to figure out the various run paths."} {"_id": "201320", "title": "function exit condition on parameter consistency check", "text": "When checking for parameter consistency at the top of a function body, what is the best strategy? This one: protected void function(Object parameter) { if (parameter == null) return; //... do things } ...or this one: protected void function(Object parameter) { if (parameter != null) { //... do things } }"} {"_id": "74228", "title": "What advantage is there in pairing when programming that there isn't when pairing in other jobs?", "text": "Surely with two people sitting together doing countless other jobs you would see the same benefits. Why has programming been singled out for this practice?"} {"_id": "185672", "title": "kindle paperwhite", "text": "How good is the kindle paperwhite for reading programming e-books? I was looking for info, but no one gives an answer from their own experience, and the current answers are outdated (> 2 years). Is a kindle paperwhite recommended as a reading tool for developers? I would like to hear people's actual experiences."} {"_id": "200552", "title": "Which is architecturally correct for Data Access Layer method names - Fetch or Select?", "text": "I have seen the words Fetch and Select used seemingly interchangeably when naming data access layer methods (ex. Person.Select or Person.Fetch). Which one is correct? My instinct is that the point of the data access layer is to abstract data access, and thus the term Fetch would be more of an abstraction perhaps than Select would be. But if one can imagine for a moment that SQL was not an existing technology, the term Select on its own might be appropriate."} {"_id": "135750", "title": "Is there any proprietary PHP MVC framework?", "text": "I was making a small list of PHP MVC frameworks (like Zend, CakePHP, Yii etc...) and I noticed that all of them are open source. Then I tried to find some proprietary framework, but my research was unsuccessful. **Is there any proprietary PHP MVC framework? Why are there so few (if any)?**"} {"_id": "135753", "title": "Is specializing beneficial in IT industry?", "text": "For the past few years I have been wondering if it makes sense to learn advanced techniques in some fields of IT, or whether it is better to just be good at everything, and use the simplest code possible everywhere. I will use C++ as a language for comparisons, since I'm a C++ developer. As a C++ developer you can write truly interesting code, ranging from code that looks like C with simple functions all around, to template meta-programming stuff, sometimes intermixed with pure abstract classes. I've started noticing that I'm leaning towards the latter lately. The problem is that with larger solutions programmers are rarely working alone, and most likely skills don't overlap that much either. So some parts of code I write might be confusing to other team members, and the other way around. I might like STL and templates, while other people might prefer pure abstract classes everywhere; others will rely on dynamic_casting, or code generated by VC wizards, while some just enjoy everything written in a very simple, C-like manner. This also backfires with the fact that people who haven't worked with, say, virtual classes might not even know why the new class they've derived is broken because of a simple missing virtual destructor. So the code you write paves the way toward a memory leak if other people decide to reuse your code later on.
Do you have experience working in environments with multiple developers with different skill-sets? Do you try to find a common denominator in your environments? Does using the skill-sets of every person even work in larger projects?"} {"_id": "205700", "title": "If the spec is flawed, should it still be followed?", "text": "I have been assigned to develop an integration of one of my employer's applications to an external system developed by our client. Our client's specification for the integration has some blatant flaws related to security. The flaws would allow an unauthorized user access to the system to view restricted data. I have pointed out the flaws and their potential security risks if they are implemented as designed, and provided an alternative without the flaw, but (in short) have been told \"do it the way we specified\" by the client. Does a programmer have an ethical responsibility NOT to implement code with known security risks? At what point do a client's requirements outweigh the ethical responsibility we have as software developers to create secure applications?"} {"_id": "200554", "title": "How my website should use its own API?", "text": "I'm building a small web service which will provide my users with data through an API. Also, some data will be available right on my website. The question is about how to use my own API. Should my website make a query to, for example, `http://website.com/api/users/?format=json` and then render the data? I could use standard Django ORM features, but that does not correspond with using my own API. For more explanation please look at this gist: https://gist.github.com/xelblch/bde4c8f107f1ed398a7e update: Imagine that I have a database full of games with release date, game name, game platform etc. And on my own website I will show these games as a list or a grid. Also, this data can be reached through the API in JSON format, updated through the API, and even deleted. And on my own website I have html forms that allow me to do these actions in a user-friendly way. So how should I access my own API? Via POST requests to my API or via the django ORM directly to the database?"} {"_id": "139234", "title": "people fork my project but don't fetch from upstream - what can I do?", "text": "Several people have forked my github repo but they have not fetched-merged from upstream. So my original repo has evolved significantly since the fork took place, and meanwhile these people are showing an old outdated version of my work, which kind of makes me look bad (e.g. what they're displaying is incomplete, contains bugs that I've fixed since then, etc.). These people apparently don't understand what a fork is for; they just push the fork button (perhaps as a way of saying \"this is cool\") and then walk away. Ideally what I'd like is for them either to keep their fork updated from upstream or make a significant contribution or delete the fork. Is there anything I can do about this? (There isn't, is there? This is just the price of being open source, isn't it?)"} {"_id": "200559", "title": "How would you get all your expectations done within a user story?", "text": "I'm setting up some requirements for a software extension and trying to use user stories to describe what I want to achieve. > As a creator I want to add a new entry, to let users access it through the > main page. I have the idea of adding an entry via Drag&Drop. My company has some guidelines to have every command always available via context menu and ribbon.
Should I split up my original story into separate actions? * As a creator I want to drag a new entry onto the main page, to let users access it. * As a creator I want to create a new entry on the main page via ribbon, to let users access it. * As a creator I want to create a new entry on the main page via context menu, to let users access it. That approach would result in at least three user stories per feature, which doesn't feel comfortable to me. I could add these expectations during the discussion, but then it seems like giving a \"way-to-follow\" for developers. So how would I ensure that the required quality/user experience is achieved?"} {"_id": "205708", "title": "Average video frame size for video codec", "text": "In simple words, is it true that the average size of an encoded video frame would be `FrameSize = BitRate/FrameRate`? Because `BitRate` shows how much data we are transferring per time unit, and `FrameRate` shows how many frames this time unit contains (e.g. a 4 Mbit/s stream at 25 fps would average 4,000,000 / 8 / 25 = 20,000 bytes per frame)."} {"_id": "204075", "title": "Can compilers and interpreters have bugs, and what can we (as users) do to deal with them?", "text": "If a compiler's work is essentially translating source code into machine level code, can there be any glitch in a compiler, i.e. a faulty \"translation?\" The same goes for an interpreter: can it fail to output the required content sometimes? I have not heard of any bugs in compilers/interpreters, but do they exist?"} {"_id": "87373", "title": "How can I get the business analysts more involved in BDD?", "text": "I am a proponent of Behavior Driven Development, mainly with Cucumber and RSpec, and at my current gig (a Microsoft shop) I am introducing SpecFlow as a tool to help with testing. I'd like to get the business analysts on my team involved in writing the features and scenarios, but they are put off by the \"technical\" aspect of it, meaning creating the files in Visual Studio (or even having Visual Studio on their machines). They want to know if we can put all the scenarios for a feature in Jira. What I'm looking for is suggestions for a workflow that will work well with BA types that are accustomed to project management/work tracking tools like Jira (we also use Greenhopper)."} {"_id": "87370", "title": "Is there a secure way to add a database troubleshooting page to an application?", "text": "My team makes a product (business management software) that our customers install on their own servers. The product uses a SQL database for data storage and app configuration. There have been quite a few cases where something strange happened in the customer's database (caused by bugs in our app and also sometimes admins who mess with the database). To figure out what is wrong with the data, we have to send SQL scripts to the customer and tell them how to run them on the database server. Then, once we know how to fix it, we have to send another script to repair the data. Is there a secure way to add a page in our application that allows an application admin to enter SQL scripts that read and write directly to the database? Our support team could use that to help customers run these scripts, without needing direct access to the SQL server. My big concerns are that someone might abuse this power to get data they shouldn't have, and maybe to erase or modify data that they shouldn't be able to modify. I'm not worried about system admins, because they could find another way to do the same thing. But what if someone else got access to the form?
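(What I picture is something like the following - a made-up Java sketch with hypothetical names, only to show the kind of guard I mean, not how our product actually works: interface Caller { boolean hasRole(String role); } class DiagnosticsPage { /* naive guard: only support staff, and only read-only statements */ String run(Caller caller, String sql, java.sql.Connection db) throws java.sql.SQLException { if (!caller.hasRole(\"SUPPORT\")) throw new SecurityException(\"not allowed\"); if (!sql.trim().toLowerCase().startsWith(\"select\")) throw new SecurityException(\"read-only statements only\"); try (java.sql.Statement stmt = db.createStatement(); java.sql.ResultSet rs = stmt.executeQuery(sql)) { /* render the rows for the support person */ return \"ok\"; } } } I know the startsWith check is naive, which is part of my worry.)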
Is there any way to do this kind of thing securely?"} {"_id": "134409", "title": "what does \"domain\" mean when referring to DDD", "text": "What does the word domain mean regarding domain-driven design/development? Not in terms of semantics or a scholarly definition, but in terms of how it modifies processes or philosophies? I was reading a post (Your software-problem-solution approach) and came across this buzzword, DDD, and didn't really know what it meant."} {"_id": "249993", "title": "Is there still any value in learning assembly languages today?", "text": "Specifically for a game programmer. If you really needed some assembly routines you could look for help, whereas back in the 80s/90s it was one of the mainstream languages. I read that compilers can generally equal (or beat) most hand-written assembly code, so unless you are an embedded programmer who can only program some device in assembly, for example, is there any real point in learning it nowadays?"} {"_id": "87379", "title": "OpenCart (GPLv3)", "text": "I'm planning to use OpenCart to build a web site. According to the licence (GPLv3) it's necessary to provide the source code of any modifications made. Is it acceptable to provide this on request, or should it be linked to from every page on the shop?"} {"_id": "134403", "title": "LINQ TO SQL or ADO.NET?", "text": "What is the best choice, LINQ TO SQL (.DBML) or ADO.NET with procedures, for a database with 29 tables and about 30 concurrent users that will run the system I am going to build? I know that ADO.NET is faster than LINQ TO SQL, but it is so much simpler to work with LINQ TO SQL. Will LINQ TO SQL handle all the concurrency? Or will there be problems in performance? The system I am going to build will be a WCF service using multiple layers: * Service layer * Business * Repositories * Data Access Layer"} {"_id": "138180", "title": "Should a standard include header be specified in each file or as a compiler parameter?", "text": "I've got a file such as this: #ifndef STDINCLUDE #define STDINCLUDE #include <...> #include <...> #endif I want this file to be included in every header file, because I use stuff from those headers so much. Is it preferable to include this via compiler options (`-I stdinclude.hpp`), or should I physically include them in each header? (`#include <stdinclude.hpp>`). Note that I am attempting to be cross-platform-minded. I use cmake to serve at least Unix and Windows."} {"_id": "218306", "title": "Why not expose a primary key", "text": "In my education I have been told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it to be a security problem (because an attacker could attempt to read stuff not their own). Now I have to check if the user is allowed to access anyway, so is there a different reason behind it? Also, as my users have to access the data anyway, I will need to have a public key for the outside world somewhere in between. Now that public key has the same problems as the primary key, doesn't it? * * * There has been a request for an example of why to do that anyway, so here is one. Keep in mind that the question is meant to be about the principle itself, not only whether it applies in this example. Answers addressing other situations are explicitly welcome. Application (Web, Mobile) that handles activity, has multiple UIs and at least one automated API for intersystem communication (e.g. the accounting department wants to know how much to charge the customer based on what has been done).
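(As a sketch of the indirection I have been taught - hypothetical Java with made-up names: the numeric primary key stays private, and a random identifier is handed out instead. import java.util.HashMap; import java.util.Map; import java.util.UUID; class TaskDirectory { private final Map<UUID, Long> publicToInternal = new HashMap<>(); /* hand out an opaque public id instead of the DB primary key */ UUID publish(long primaryKey) { UUID publicId = UUID.randomUUID(); publicToInternal.put(publicId, primaryKey); return publicId; } /* resolve a public id back to the internal key, after the access check */ Long resolve(UUID publicId) { return publicToInternal.get(publicId); } } The question is whether this indirection buys anything, given that every request is checked anyway.)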
The Application has multiple customers, so separation of their data (logically - the data is stored in the same DB) is a must-have for the system. Each request will be checked for validity no matter what. Activity is very fine-grained, so it is grouped in some container object, let's call it \"Task\". Three use cases: 1. User A wants to send User B to some Task, so he sends him a link (HTTP) to get some Activity done there. 2. User B has to go outside the building, so he opens the Task on his mobile device. 3. Accounting wants to charge the customer for the Task, but uses a third party accounting system that automatically loads the Task / Activity by some code that refers to the REST API of the Application. Each of the use cases requires the agent to have (or gets easier if the agent has) some addressable identifier for the Task and the Activity."} {"_id": "218304", "title": "WCF or ASMX WebService", "text": "I have been asked to create a web service that communicates with Auth.NET CIM and Shipsurance API. My web service will be used by multiple applications (one a desktop and another a web application). I am confused whether I should build a WCF or an asmx web service. Auth.NET CIM and Shipsurance API have asmx web services which I would be calling in my newly created web service. So is WCF the right approach, or can I stay with asmx?"} {"_id": "253752", "title": "How to change the state of a singleton in runtime", "text": "Consider that I am going to write a simple file-based logger `AppLogger` to be used in my apps; ideally it should be a singleton so I can call it via public class AppLogger { public static String file = \"..\"; private static final AppLogger instance = new AppLogger(); /* minimal plumbing so the sketch compiles */ public static AppLogger getInstance() { return instance; } public void logToFile(String s) { /* Write to file */ } public static void log(String s) { AppLogger.getInstance().logToFile(s); } } And to use it: AppLogger.log(\"This is a log statement\"); The problem is, what is the best time to provide the value of file, since it is just a singleton? Or how do I refactor the above code (or skip using a singleton) so I can customize the log file path? (Assume I don't need to write to multiple files at the same time.) p.s. I know I can use a library, e.g. log4j, but consider it just a design question: how to refactor the code above?"} {"_id": "83869", "title": "Interview approaches and questions for a software developer intern", "text": "What are some good ideas, common approaches and appropriate questions that you would bring when interviewing a software development intern to join your team? I really don't have expectations of any kind for this person; I understand that as an intern with no prior work experience he won't have much to bring to the table. I am more or less looking for a good attitude and somebody willing to learn. What would be appropriate if you intend to put this intern 70/30 (QA Testing/Coding)? Would that be a good internship experience in your opinion?"} {"_id": "202953", "title": "Placeholders in strings", "text": "I find that I sometimes use placeholders in strings, like this: $ cat example-apache ServerName ##DOMAIN_NAME## ServerAlias www.##DOMAIN_NAME## DocumentRoot /var/www/##DOMAIN_NAME##/public_html Now I am sure that it is a minor issue whether the placeholder is `##DOMAIN_NAME##`, `!!DOMAIN_NAME!!`, `{{DOMAIN_NAME}}`, or some other variant. However, I now need to standardize with other developers on a project, and we all have a vested interest in having our own placeholder format made standard in the organization. Are there any good reasons for choosing any of these, or others? I am trying to quantify these considerations: 1.
**Aesthetics and usability**. For example, `__dict__` may be hard to read as we don't know how many underscores are in there. 2. **Compatibility**. Will some language try to do something funny with `{}` syntax in a string (such as PHP does with \"Welcome to {$siteName} today!\")? Actually, I know that PHP and Python won't, but others? Will a C++ preprocessor choke on the `##` format? If I need to store the value in some SQL engine, will it not consider something a comment? Any other pitfalls to be wary of? 3. **Maintainability**. Will the new guy mistake `##SOME_PLACEHOLDER##` for a language construct? 4. **The unknown**. Surely the wise folk here will think of other aspects of this decision that I have not thought of. I might be bikeshedding this, but if there are real issues that might be lurking then I would certainly like to know about them before mandating that our developers adhere to a potentially-problematic convention."} {"_id": "70835", "title": "How does one think about object oriented design and Aspect oriented Design for solution", "text": "I have worked on a few projects in which both AOP and the object oriented paradigm were used. But AOP usage was limited to logging only. I think AOP is a much more powerful technique. My question, to those who have worked with both the AOP and OOP paradigms in projects, is: how do they come up with a solution combining these powerful paradigms? Do they think AOP-wise first and then design objects, or vice versa? My general question to those who have used AOP extensively is whether they find it a powerful modularizing technique. If yes, please cite examples."} {"_id": "203399", "title": "Are the technologies used in an application part of the architecture, or do they represent implementation/detailed design details?", "text": "When designing and writing documentation for a project, an architecture needs to be clearly defined: what are the high-level modules of the system, what are their responsibilities, how do they communicate with each other, what protocols are used etc. But in this list, should the concrete technologies be specified, or is this actually an implementation detail that needs to be specified at a lower level? For example, consider a distributed application that has two modules which communicate asynchronously via the AMQP protocol, mediated by a message broker. Is the fact that these modules use the Spring AMQP library for sending and receiving messages something that needs to be specified in the architecture, or is it a lower-level detailed design/implementation detail?"} {"_id": "203398", "title": "How to familiarize myself with Python", "text": "I'm a Python beginner. I started programming with Python 1.5 months back. I downloaded the Python docs and read some parts of the tutorial. I have been programming on codechef.com and solving problems from projecteuler. I am thinking of reading Introduction to algorithms and following this course on MIT opencourseware, as I haven't improved much in programming and I am wasting a lot of time thinking about just what I should do when faced with any programming problem. But I think that I still don't know the correct way to learn the language itself. Should I start with the library reference or continue with the Python tutorial? Is learning algorithms useful for languages such as C and not so much for Python, as it has \"batteries included\"? Are there some other resources for familiarization with the language and in general for learning to solve programming problems?
Or do I need to just devote some more time?"} {"_id": "70831", "title": "Is objected oriented programming paradigm outdated since it is anti-modular and anti-parallel?", "text": "I have read the controversial article Teaching FP to freshmen posted by Robert Harper, who is a professor at CMU. He claimed that CMU would no longer teach object oriented programming in the introductory course since it is \u201cunsuitable for a modern CS curriculum.\u201d And he claimed that: > Object-oriented programming is eliminated entirely from the introductory > curriculum, because it is both anti-modular and anti-parallel by its very > nature. Why is OOP considered anti-modular and anti-parallel?"} {"_id": "15874", "title": "Tips on persuading boss that code review is a good thing", "text": "Let's say one works in a hypothetical company that has several developers who rarely work together on projects, and the Boss doesn't believe that code reviews are worth the time and cost. What are various arguments that could be presented in this scenario that will portray the benefit of code review? Furthermore, what are the potential arguments against code review here, and how can these be countered?"} {"_id": "15873", "title": "EE vs Computer Science: Effect on Developers' Approaches, Styles?", "text": "Are there any systematic differences between software developers (sw engineers, architects, whatever job title) with an electronics or other engineering background, compared to those who entered the profession through computer science? By electronics background, I mean an EE degree, or a self-taught electronics tinkerer, other types of engineers and experimental physicists. I'm wondering if coming into the software-making professions from a strong knowledge of flip flops, tristate buffers, clock edge rise times and so forth usually leads to a distinct approach to problems, mindsets, or superior skills at certain specialties and lack of skills at others, when compared to the computer science types who are full of concepts like abstract data types, object orientation, database normalization, who speak of \"closures\" in programming languages - things that make little sense to the soldering iron crowd until they learn enough programming. The real world, I'm sure, offers a wild range of individual exceptions, but for the most part, can you say there are overall differences? Would these have hiring implications e.g. (to make up something) \"never hire an electron wrangler to do database design\"? Could knowing about any differences help job seekers find something appropriate more effectively? Or provide enlightenment or some practical advice for those who find themselves misfits in a particular job role? (Btw, I've never taken any computer science classes; my impression of exactly what they cover is fuzzy. I'm an electronics/physics/art type, myself.)"} {"_id": "123551", "title": "Rails generators for subscription and payment processing services?", "text": "I've been thinking about making a generator for managing subscriptions with options for a couple of different payment processing services; it is something that I would use. Are there any existing examples of this, or any thoughts on a generator? I thought it might work like the scaffold function, similar to the authentication function in nifty-generators.
EDIT: Also, it seems that there would be a lot of security concerns when implementing payment processing; having a consistent and common way to generate this would let me make sure best practices and security are followed the same way in all my apps."} {"_id": "137988", "title": "Geo IP data on stackoverflow API?", "text": "I have just looked at stackgeography and at soapi. I cannot see where the geo data comes from, as it's not on the soapi. Can anybody shed some light on that? More info on the stackgeography"} {"_id": "147993", "title": "(How can I / Am I allowed to) Use Google Maps as a Texture on an OpenGL object?", "text": "_**Note** : I asked a very similar question on StackOverflow but did not get much attention, so was directed to http://programmers.stackexchange.com as licensing issues seem to have more interest here..._ I am not an expert in Google Maps policies, although I am aware that downloading / caching map tiles is not encouraged at all. In Android, developers were given the MapActivity and MapView classes which attempt to provide all possible services for displaying map tiles and even modifying them on the fly if needed, but they are pretty useless for me as I would like to use map tiles as a texture on a 3D OpenGL object. I'd like to know under which conditions the use of map tiles as an OpenGL texture is allowed. Anyone? _**UPDATE to clarify my question after Gavin's comment:**_ \"Given that the Google Maps API does NOT provide a way to directly access map tiles as low level bitmaps, which is required by OpenGL texturing functions, is there any way to implement this without breaking Google licensing policies?\""} {"_id": "147996", "title": "Learning the Operating Systems concepts and Programming Languages for elderly", "text": "I am not sure if this would be the right forum to ask the question (please let me know and I will move the question to the appropriate forum). I would like to know what the best way would be for elders (in my case, my parents) to learn how to program. I was looking at Qimo 4 kids as a possible platform so that my parents can have a closer look at Linux environments. But it seems like Qimo mostly consists of educational games. I also don't know which programming language would make the best choice. If anyone can provide me with information on how to go about this \"project\", I would be grateful!!!"} {"_id": "253656", "title": "single for-loop runtime explanation problem", "text": "I am analyzing the running times of different for-loops, and as I'm getting more knowledge, I'm curious to understand this problem which I have still not figured out. I have this exercise called \"How many stars are printed\": for (int i = N; i > 1; i = i/2) System.out.println(\"*\"); The answers to pick from are ` A: ~log N B: ~N C: ~N log N D: ~0.5N^2` So the answer should be A and I agree with that, but on the other side.. Let's say `N = 500`; what would `Log N` then be? It would be 2.7 (taking the base-10 log). So what if we say that `N=500` in our exercise above? That would most definitely print more than 2.7 stars? How is that related? Because it makes sense to say that if the for-loop looked like this: for (int i = 0; i < N; i++) it would print `N` stars. I hope to find an explanation for this here; maybe I'm interpreting all these things wrong and thinking about it in a bad way. Thanks in advance."} {"_id": "237999", "title": "What is the best way to store multilingual data in MongoDB?", "text": "I want to save/serve multilingual data in my CMS application using Mongoose.
Is this the correct way? name: { global: { type: String, default: '', trim: true, required: 'Please fill name', }, en_US: String, tr_TR: String, sv_SE: String }"} {"_id": "232743", "title": "How do you approach writing a piece of code?", "text": "I'm a junior php/js developer and have been working for about a year. When I start working on a task I tend to have a quick think about what needs to be done, then jump straight in, in a somewhat brute force approach. However, I think this hinders my productivity since I haven't defined a path to completing the task. What have you found to be the best approach and way to prepare for writing code? Especially when dealing with legacy code."} {"_id": "237992", "title": "jQuery - When to Bind Event Listeners?", "text": "I'm building a medium-sized \"single-page\" JavaScript application using jQuery. If you include all possible functionality of the application, there are 134 `click` bindings. Since most of the content is dynamically loaded, I can't just do this: `$(element).click(function() { ... });` Instead, I'm doing this: `$(document).on('click', selector, function() { ... });` Currently, I'm binding all of these on `$(document).ready`. But is that bad practice? Is this a speed issue? The alternative would be to bind the event listeners every time the content is loaded, for example: $('#content').html(...); $('#newcontent').click(function() { ... }); The problem with that is it's difficult to organize. What advice can you give me?"} {"_id": "237993", "title": "Python shelve class rename", "text": "The main reason I use Shelve is to be able to quickly develop code without worrying about db table structure. Unfortunately, changing a class name from, say, 'class1' to 'class2' gives \"AttributeError: 'module' object has no attribute 'class2'\" on the next run. What is the canonical way to rename a Shelved class? What other similar surprises are lurking deeper in the Shelve db structure?"} {"_id": "232746", "title": "Do Design Patterns Stifle Creativity", "text": "Many years ago, I was talking with an Economics professor about design patterns, how they were establishing a common language for programmers and how they were solving well known problems in a nice manner, etc etc. Then he told me that this is exactly the opposite of the approach he would use with his Economics students. He usually presented a problem and asked them to find a solution first, so they could think about it and try to find ways to solve the problem on their own, and only after that did he present the \"classical\" solution. So I was wondering whether the \"design pattern\" approach is really something that makes programmers smarter or dumber, since many times they're just getting the \"right solution for this problem\" instead of maybe using creativity and imagination to solve some problem in a new and innovative way. What do you think?"} {"_id": "83288", "title": "My first job in a dev team - what should my priorities be?", "text": "I just landed my dream job - working in a big (famous) company as a frontend web developer. I've never even worked in a proper dev team before - for the last 4 years since I started learning web dev I've been working on my own, just building sites and apps for small business clients I found.
I want to make the best possible start in my new job, but it's not going as easily as I thought it might - there's very little induction or training - it's a bit like being thrown in at the deep end, and to make matters worse most of the other developers in the office, whilst being friendly and supportive, are speaking a language I don't understand very well. When they do speak English they use a lot of technical terms / jargon / names of products / systems etc. that I've never heard of. I want to make a good impression and don't want to show myself up by asking too many dumb questions. What advice would all of you seasoned veterans of development teams suggest?"} {"_id": "237997", "title": "Is there a metric that can be equated to complexity in laymens terms?", "text": "Often times users cannot comprehend the complexity of software. They think that because a problem is easy to describe, it is easy to solve. I want to equate the complexity of a \"simple program\" I have built with the complexity of a process with a tangible result. Is there a system that can take tangible processes and assign them a complexity such as: Building an outhouse = 5 Building a house = 50 Building the Empire State Building = 5000 that I can then equate to the software in a meaningful way: thisProject = 5 Maybe the project weight is determined by Cyclomatic Complexity. Whatever it is is not that important. What is important is that such a system exists. Does such a system exist? **Update** I'm looking for a means to compare the complexity of two complete systems. I am not looking for estimation methods. The limitation I see in using \"money and man hours\" is that the layman (or maybe even somewhat technical) individual could attribute the reason the project took so long and cost so much money to the developer(s) simply not being good at their jobs. Thinking out a possibility... I imagine a solution to this problem could be similar to Dijkstra's algorithm. Create a flowchart for System1 and another for System2. Give each decision a weight based on how many routes the decision contains. Give each action a weight as well. The cumulative weight of each respective chart can then be used to compare the two processes.
What is the idiomatic/traditional way to handle something like this?"} {"_id": "171088", "title": "Object oriented wrapper around a dll", "text": "So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor) Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callbacks arguments invariable contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the Destructors. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws to this? Something that seems to be a problem is that the existence static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something absolutely would like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?"} {"_id": "233298", "title": "OOP what is meant by object-to-object communication", "text": "I've been reading in basic concepts of OOP,as i'm trying to make this shift from transactional scripts to more oop manner in php, and i often come across this definition : > An object stores its state in fields (variables) and exposes its behavior > through methods (functions). Methods operate on an object's internal state > and serve as the primary mechanism for object-to-object communication and a simple example like : > Objects like real life objects have state and behavior. Dogs have state > (name, color, breed, hungry) and behavior (barking, fetching, wagging tail). from definition i see that methods should be either manipulating internal state (setters) or communicating with other objects; but when i try to apply this to a Blog example. composed of mainly 2 domains User and posts. **what could be the`User object` Behavior ?** I cannt find any ! 1. login, its an auth lib. thing so i should not include it in user. 2. posting articles is a Post object thing; again user conduct it; but its more of a post object concern to create a post right ? User may be the main Aggregate object in a blog; yet the user is more like the Creator of Other Objects but he does not have a Behavior i can think of; he is being used -and his state- by other objects in most cases that all ! **in a nutshell :** what are allowed type of methods inside an object ?"} {"_id": "212028", "title": "Portlets (and portals) vs usual webapps", "text": "I am working on a big project containing (among others) two web applications: one is implemented with Spring MVC and the other is meant to be implemented using Web Experience Factory (portlets). One big problem whith using Web Experience Factory is its poor community. 
The main motive to use portlets in the second application is the already existing user management system inside IBM Websphere Portal (like any other portal). A next step will be to integrate the first application (Spring MVC) into the Websphere portal. So I have a bunch of questions here: 1) Is the integration of a web application (Spring MVC) into Websphere portal obvious ? And what kind of problems should I be ready to encounter ? 2) Is it really a good reason to use a portal only to get advantage of its user management (authentication/authorization) system, knowing that both of the apps to be integrated are implemented by the very same team ? I really appreciate any hints especially if they came with solid arguments."} {"_id": "132809", "title": "Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?", "text": "I'm playing around a bit with image processing and decided to read up on how color quantization worked and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in Leptonica library and came across something I thought was a bit odd. Now I want to stress that I am far from an expert in this area, not am I a math-head, so I am predicting that this all comes down to me not understanding all of it and not that the implementation of the algorithm is wrong at all. The algorithm states that the _vbox_ should be split along the lagest axis and that it should be split using the following logic > The largest axis is divided by locating the bin with the median pixel (by > population), selecting the longer side, and dividing in the center of that > side. We could have simply put the bin with the median pixel in the shorter > side, but in the early stages of subdivision, this tends to put low density > clusters (that are not considered in the subdivision) in the same vbox as > part of a high density cluster that will outvote it in median vbox color, > even with future median-based subdivisions. The algorithm used here is > particularly important in early subdivisions, and 3is useful for giving > visible but low population color clusters their own vbox. This has little > effect on the subdivision of high density clusters, which ultimately will > have roughly equal population in their vboxes. For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following * Iterate over all the possible green and blue variations of the red color * For each iteration it adds to the _total_ number of pixels (population) it's found along the red axis * For each red color it sum up the population of the current red and the previous ones, thus storing an accumulated value, for each red _note: when I say 'red' I mean each point along the axis that is covered by the iteration, the actual color may not be red but contains a certain amount of red_ So for the sake of illustration, assume we have 9 \"bins\" along the red axis and that they have the following populations > 4 8 20 16 1 9 12 8 8 After the iteration of all red bins, the _partialsum_ array will contain the following count for the bins mentioned above > 4 12 32 48 49 58 70 78 86 And _total_ would have a value of 86 Once that's done it's time to perform the actual _median cut_ and for the red axis this is performed on line 01346 It iterates over bins and check they accumulated sum. 
And here's the part that throws me off from the description of the algorithm. It looks for the first bin that has a value that is _greater_ than _total/2_. Wouldn't _total/2_ mean that it is looking for a bin that has a value that is greater than the _average_ value and not the _median_? The median for the above bins would be _49_. The use of _43_ or _49_ could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was... Another thing that puzzles me a bit is that the paper specified that the bin with the median value should be located, but does not mention how to proceed if there is an even number of bins... the median would be the result of _(a+b)/2_ and it's not guaranteed that any of the bins contains that population count. So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if it got a bit long-winded, but I wanted to be as thorough as I could because it's been driving me nuts for a couple of days now ;)"} {"_id": "41686", "title": "Moving Forward with Code Iteration", "text": "There are times when, working on my programming projects, I get to a point where I'm ready to move on to the next part of my program. However, when I sit down to implement this new feature I get stuck, in a sense. It's not that I don't know how to implement the feature, it's that I get stuck on figuring out _the best_ way to implement said feature. So I sit back for a day or two and let the ideas ferment until I am comfortable with a design. I get worried that I may not write something as well as it could be, or that I might have to go back and rework the whole thing; so I put it off. This is a big reason why I've never really finished many personal projects. Anyone else experience this, and how do you keep yourself moving forward in your project?"} {"_id": "41683", "title": "Helpful articles on the subject of managing programmers?", "text": "What are the most helpful articles on the subject of managing programmers? I came across this one recently, and thought it was excellent - The unspoken truth about managing geeks What else is out there?"} {"_id": "84484", "title": "Considering career change to patent agent/patent engineer", "text": "After programming professionally for 17 years, I'm considering changing careers and becoming a patent agent/engineer. Has anyone else pursued or evaluated that path? Any insight, rants or raves?"} {"_id": "40555", "title": "Communicating with Management", "text": "What are some concrete suggestions and techniques for communicating with management? My premise for this question is that as programmers we think in a much more granular/detail-oriented way than managers. It's not that managers can't think at this level, it's just that their decision making and judgement applies to a different spectrum. Where the manager may think in terms of "done" / "not done", the developer may think in terms of individual features, test cases, or bugs.
Where a manager may be thinking "lots of bugs" / "bug free", a developer may think in terms of a ratio of features to bugs, or feature complexity, or test coverage."} {"_id": "83135", "title": "Creating a library in two languages simultaneously", "text": "I'm planning on writing an open source HTML parsing library in .NET so that I have a project out in the wild when I start looking for developer jobs. Now, in my Masters program I started learning Java and found that I like it as well, and was also thinking that I could write the library in Java at the same time, to improve my Java skills at once. Would that be advisable? Does the fact that Java and C# are similar languages play into the decision to do two libraries at once? My main concern is that instead of having one great library, I end up with two mediocre libraries. Being new to development, are there any other pitfalls I should look out for? Also, would having the library on both the JVM and .NET work to my advantage with future employers, more than having it on just one of those platforms? I did see this question, but my motivations aren't the same and I plan on releasing this code as well, so I'm not sure if the answers will differ."} {"_id": "165010", "title": "skills that can't be outsourced - web development related", "text": "I never know where it's acceptable to post something like this, so please forgive if it's in the wrong place. I'm very interested in going further into web development; I know a bit of JavaScript, a bit of PHP, and so forth, but I'm now seeing these services that will go from PSD to WordPress for 200 bucks and I'm wondering how the hell is anyone able to compete with this? So I'm wondering if those more knowledgeable than me could tell me what areas are the least likely to be outsourced for 5 bucks to some kid in Uzbekistan (no offense to that kid)... Do you think it's on the database management side, or maybe app development? Ideas appreciated."} {"_id": "137329", "title": "Convince a lone developer to use a separate build tool instead of the IDE one-click build", "text": "In my years of programming Java and more recently Scala, I've never used Ant, Maven, Gradle or any of those build tools for Java. Everywhere I've worked there was a build manager who took care of all of that -- I'd compile locally with the IDE for development and unit testing, then check in the source code and notify the build manager, who did what was necessary to compile everybody's files for the shared environment. Now that I'm between contracts I've been working on my own self-funded project and it's getting to the point where it could be good enough to actually make money. I even have a potential investor lined up and plan to show them a beta version in the next few weeks. But all along I just click the build button on the IDE, and it creates the Jar file and it works just fine. Of course, the conventional wisdom suggests that I "should" be writing my own Ant/Maven/Gradle scripts and using those instead of the IDE, but what are the concrete advantages of that in my situation (working alone)? I've done some reading about how to use those build tools, and it just looks like I'd be writing hundreds of lines of XML (or Groovy or whatever) to do what the IDE does in one click (the IDE-generated Ant XML for the project is over 700 lines). It just looks error-prone and time-consuming and unnecessary for my situation.
Not to mention the learning curve, which would take away time from all the other work I'm doing to get the product ready to show people."} {"_id": "108457", "title": "WinRT and .NET: What is it, where do I place it and what does it change?", "text": "Say I'm a .NET developer and want to build my application on WinRT. What I've read is that it is a completely new API for Windows 8, strongly related to Metro-style apps. I'm assuming that I can develop for WinRT in .NET/C#? How does it relate, for example, to WPF or Silverlight? Does WinRT provide its own UI framework, or can I build a WPF application on top of WinRT? What about basic I/O? .NET provides methods for that; if I develop with/for WinRT, do I have to use other methods, or will the .NET framework use WinRT under the hood? Besides these specific questions, the overall question is really: how does it relate to the other APIs and frameworks I'm currently familiar with as a .NET/C# developer?"} {"_id": "108454", "title": "Product Owner introduces ungroomed (unfamiliar, not estimated) User Story(s) into the Sprint Planning meeting", "text": "A problem that I am facing and would like some input on is this: a Product Owner introduces ungroomed (unfamiliar, not estimated) User Story(s) into the Sprint Planning meeting. The issue that this has caused is that the team rushes to understand and estimate the User Story(s), which puts significant time pressure on the commitment portion of the Sprint Planning meeting. The team also seems to be unsure of their estimate due to the rushed nature of grooming them in the Sprint Planning meeting. The end result of this is a rushed, half-hearted Sprint commitment, which is usually an under-commitment due to so much uncertainty. I have seen two distinctly different causes for the late introduction of the User Story(s): 1. The team is new to Scrum and has been having difficulties grooming stories prior to planning. 2. A brand new high priority User Story has appeared just before the Sprint Planning meeting. I have discussed these issues with the Product Owner and we have decided upon actions. I am wondering: what have you tried when the Product Owner introduces a brand new high priority User Story just before or even in the Sprint Planning meeting? What worked, what failed? The team that is having difficulty grooming User Stories is new to Scrum; hence I suspect that some facilitation of their grooming sessions, mentoring and some time will help them. Do you have any other suggestions for helping teams come up with prompt Planning Poker estimates?"} {"_id": "108451", "title": "What should I include in XML documentation comments?", "text": "I'm trying to make a point of documenting my code better, especially when it comes to the XML comments on class members, but often it just feels silly. In the case of event handlers, the naming convention and parameters are standard and clear:

    /// <summary>
    /// Handler for myCollection's CollectionChanged event.
    /// </summary>
    /// <param name="sender">Event sender</param>
    /// <param name="e">Event arguments</param>
    private void myCollection_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
    {
        // actual code here...
    }

I also frequently have simple properties that are (IMO) named clearly enough that the summary is redundant:

    /// <summary>
    /// Indicates if an item is selected.
    /// </summary>
    public bool ItemIsSelected
    {
        get { return (SelectedItem != null); }
    }

I feel like such comments don't add any information that the declaration itself doesn't already convey. The general wisdom seems to be that a comment that repeats the code is best left unwritten.
Obviously, XML documentation is more than just regular inline code comments, and ideally will have 100% coverage. What **_should_** I be writing in such cases? If these examples are OK, what value do they add that I may not be appreciating now?"} {"_id": "178860", "title": "Are there bots that download new iOS apps without being asked?", "text": "I noticed that on the first day my iOS app hit the App Store I got a download from Albania and another one from China. The app is only useful to a specific group, all of whom are in a certain U.S. city. Are there bots that automatically download every new app? If so, to what end?"} {"_id": "178863", "title": "Where can I learn more about JavaScript and Python?", "text": "Been teaching myself how to code over the past four months or so -- mainly in JavaScript, but just started Python -- and had a revelation today. I can write in JavaScript pretty well, but I don't actually know what JavaScript **is**. Basically I know how to use it, but not the advantages/disadvantages, its origin, its purpose, etc. What is the purpose and background behind JavaScript and why was it created?"} {"_id": "202047", "title": "What determines which JavaScript functions are blocking vs non-blocking?", "text": "I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, are required to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!"} {"_id": "202044", "title": "Is this considered an implementation of a Layer Supertype pattern?", "text": "**1.** According to Fowler's definition, this pattern is implemented by having all classes in a layer inherit from a superclass. But after some googling it appears all of the following are considered implementations of a _Layer Supertype_ pattern: a - all classes in a layer inheriting from the same superclass b - having only some (thus not all) of the classes in a layer inheriting from a superclass c - a layer having superclasses **S1** and **S2**, where classes **A** and **B** inherit from **S1**, while **C** inherits from **S2** So which of the above are considered implementations of a _Layer Supertype_?
**2.** If a, b and c are all considered implementations of a _Layer Supertype_, then I fail to see how this pattern is any different from regular _class inheritance_. In other words, couldn't we then claim we're using a _Layer Supertype_ any time some class inherits from another class? Thanks"} {"_id": "207671", "title": "Which component is responsible for updating the database schema?", "text": "We have what can be called a _distributed application_: one full-blown web app and a small REST service. Both must be separately deployable. Both access the same database schema (they actually share only two tables). The common functionality is encapsulated in a common artifact (a JAR file here, since it's all Java). So far, so good. However, I'm unsure how to handle schema changes properly. The web app uses Flyway for its schema updates; it works like a charm. But what should be the recommended procedure if one of the _shared tables_ needs an update? Whose responsibility should the upgrade be? (At the moment it is the web app that performs all the upgrades; perhaps that's good enough, but it worries me.) I thought of perhaps even changing the architecture to have a separate application or service that has both the web app and the REST service as clients, but this would only make things difficult, not actually remove the problem."} {"_id": "47684", "title": "Do developers ever contract by retainer? If not, why?", "text": "I'm leaving my current employer in two weeks, but have an excellent relationship with them and am somewhat emotionally attached to a project that I have developed over the last couple of years. They've asked me if I'd be willing to contract with them to continue supporting that project. I was thinking of proposing a monthly retainer that would assure them 5-10 hours of support, including answering support e-mails, fixing any bugs, and small stability enhancements and feature tweaks. I haven't ever seen any mention of software developers contracting under retainer; is this done? If not, is there something that makes it a bad idea?"} {"_id": "212791", "title": "Am I barking up the wrong tree with Scala?", "text": "Having some spare time, I've decided to learn a new programming language while developing - for fun, it will never see the light of day - an insurance administration web application (insurance is the industry I know). So I chose Scala as the language to learn, but the more I learn the more I wonder if Scala fits the bill for the app I want to build. I don't want to go to the trouble of learning functional programming only to ignore all the best practices by making my domain objects - Policy, Section, Coverage, Insured, Insurer, etc. - mutable. Insureds have Policies, which have Sections, which have Coverages provided by Insurers, blah blah. So there is this inter-relationship between all these objects and I don't understand how to update one object in the functional paradigm. Is it possible and worth the effort?"} {"_id": "14472", "title": "What are the best Windows 7 gadgets for programmers?", "text": "I am looking for a theme with gadgets that will make my life easier as a programmer. Googling from the desktop is one feature I am looking for. Integration with SharePoint or other bug trackers is the second. Any other idea that might make me more productive is a good one."} {"_id": "14474", "title": "Noobie wants to use source control, how would you guide them?", "text": "Let's say you know of an anonymous noobie that wants to be led upon the path of righteousness.
This noobie wants to use some sort of source control tool simply for the experience of using source control tools (and possibly for whatever benefits they bring along with them). To constrain things further (and to make this possibly even more noobie-tastic), let's say they're stuck on Windows developing in Visual Studio. How would you guide your neophyte?"} {"_id": "197338", "title": "Origin of the name "OpenServer" for the SCO Unix operating system", "text": "I was looking over the evolutionary history of Unix and Unix-like systems on Wikipedia and one operating system name stood out to me: **OpenServer**. Judging from the image, SCO's **OpenServer** is completely _closed source_; so why does SCO **OpenServer** have the word "open" in its name if the operating system is proprietary and contains no open source code? I suspect this is because the original **Unics** operating system had a mixed/shared license, but that seems like a pretty distant ancestor when the proprietary **Xenix** came before **OpenServer** as well. Can anyone shed some light on the reason for this name?"} {"_id": "171626", "title": "Combining template method with strategy", "text": "An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of these games are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that the games could differ in the following: * The way in which the result of a move is handled * The way in which the end of the game is determined * The way in which the winner is determined The first thing that came to my mind to design this was to use the strategy pattern; I have a variation in algorithms (the actual rules of the game). The design could look like this: ![enter image description here](http://i.stack.imgur.com/mk3EA.png) I then thought to myself that in the games of Mancala and Wari the way the winner is determined is exactly the same and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor, I got the impression that I should find a different design. I then came up with this: ![enter image description here](http://i.stack.imgur.com/3Ad2p.png) Each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. `WinnerDeterminer`, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just reuse the Mancala 1.0 implementations. I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely: * deposit stones in holes (this is the same for all games, so would be implemented in the template method itself) * determine the result of the move * determine if the game has finished because of the previous move * if the game has finished, determine who has won Those last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two.
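To show concretely where I get stuck, here is a minimal sketch of the two halves side by side (all class and method names are just my own working names; Board, Move and Player are placeholders):

    class Board {}
    class Move {}
    class Player {}

    // The three varying rules as strategies
    interface MoveResultRule { void apply(Board board, Move move); }
    interface EndOfGameRule { boolean isFinished(Board board); }
    interface WinnerDeterminer { Player determineWinner(Board board); }

    // The template method fixing the order of the steps in a turn
    abstract class TurnTemplate {
        public final void playTurn(Board board, Move move) {
            depositStones(board, move);     // identical for all games
            determineResult(board, move);   // varies per game
            if (gameFinished(board)) {
                determineWinner(board);     // varies per game
            }
        }
        private void depositStones(Board board, Move move) { /* shared logic */ }
        protected abstract void determineResult(Board board, Move move);
        protected abstract boolean gameFinished(Board board);
        protected abstract Player determineWinner(Board board);
    }

The three strategies and the template method's abstract steps both want to own the same varying behaviour, and that overlap is exactly what I can't resolve.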
One possible solution I found would be to abandon the strategy pattern and do the following: ![enter image description here](http://i.stack.imgur.com/uiopy.png) I don't really see the design difference between the strategy pattern and this. But I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the `TurnTemplate` object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a `MancalaRuleFactory`, `WariRuleFactory`, etc., and they would create the correct instances of the rules and hand me back a `RuleSet` object. Let's say that I use the strategy + abstract factory pattern and I have a `RuleSet` object which has algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this `RuleSet` object to my `TurnTemplate`. The 'problem' that then surfaces is that I would never need my concrete implementations of the `TurnTemplate`; these classes would become obsolete. In my protected methods in the `TurnTemplate` I could just call `ruleSet.determineWinner()`. As a consequence, the `TurnTemplate` class would no longer be abstract but would have to become concrete; is it then still a template method pattern? To summarize, am I thinking in the right way or am I missing something easy? If I'm on the right track, how do I combine a strategy pattern and a template method pattern?"} {"_id": "171623", "title": "Who owns the IP rights of the software without a written employment contract? Employer or employee?", "text": "I am a software engineer who got an idea and developed, alone, an integrated ERP software solution over the past 2 years. I got the idea and coded much of the software in my personal time, utilizing my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and the company would have the "shop rights" to use "a copy" of the software without restrictions. Part of this agreement was that I was heavily underpaid to keep the rights. Recently things started to take a downturn in company A as the company grew fairly large and new head management was formed; also, new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP ownership into 50% co-ownership, completely disregarding the initial verbal agreements. As of now there is no written job description or agreement/contract/policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods. My core question is: **"Who owns the code without a contract? Me or company A? (in FL, US)"** **Detailed questions:** 1. I am familiar with the "shop rights"; I don't have any problem leaving a copy of the code in the company for them to use/enhance to run their wholesale business.
What worries me is: **Can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B?** 2. **Can applying for a copyright of the code at http://www.copyright.gov in my name prevent any legal disputes in the future? Can I use it as evidence for legal defense?** Could adding a note specifying company A as exclusive license holder clarify the arrangements? 3. **If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created/stored on my personal computer with proper documentation, including a copyright notice with my credentials (name/email/address/phone).** It's also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student. 4. **If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards the ownership, what can I do to settle the matter legally?** I'd like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment. 5. **Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?** My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But looking online, it may be completely backwards, and this really worries me. _I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinion from those who were in a similar situation or are familiar with the topic._ Thank you, PT"} {"_id": "171622", "title": "Pure C++ for iOS apps", "text": "Is it possible to use only C++ to create iOS apps? Is there any downside to that? I read somewhere that you have to use a mix of Objective-C and C++ if you want to use C++ that badly."} {"_id": "108180", "title": "Namespaces and naming", "text": "I'm having some trouble working this one out, probably more than I should. This is a pretty large project, with a very clearly defined structure and no obvious problems, but I can't seem to figure out the optimal solution for this case, even after reading several convention documents and such. For example, given the following namespace: \vendor\core\lib\view; Obviously it refers to a "view" namespace... which technically also has a 'view' class in it: \vendor\core\lib\view\view This is clearly not ideal, but I can't come up with a better way to name these. Take for example the cache namespace: \vendor\core\lib\cache; Same deal, the cache class becomes confusing when inside this namespace: \vendor\core\lib\cache\cache; Should I just move these 'core' classes out of their namespaces, and put them in the parent namespace? \vendor\core\lib\cache (class) \vendor\core\lib\cache\adapter\redis (class) Or is there a better term to refer to these types of classes?
Pardon my circularity; it's a bit late and my head's not quite working in optimal condition."} {"_id": "31578", "title": "Interview questions on C++ for developers with mainly Java/.NET experience", "text": "What questions would you ask a candidate with mainly Java/.NET experience (10 years+) in an interview for a C++ position?"} {"_id": "230674", "title": "Programs that claim they are not "multi-core" friendly", "text": "You see this phrase or something similar kicked around from time to time, generally referring to programs that claim they were not designed to take full advantage of multi-core processors. This is common especially with video game programming. (Of course a lot of programs have no concurrency and do not need it, such as basic scripts, etc.) How can this be? A lot of programs (especially games) inherently use concurrency, and since the OS is in charge of task scheduling on the CPU, are these programs not inherently taking advantage of the multiple cores available? What would it mean in this context to "take advantage of multiple cores"? Are these developers actually forbidding OS task scheduling and forcing affinity or their own scheduling? (Sounds like a major stability issue.) I'm a Java programmer, so maybe I have not had to deal with this due to abstractions or whatnot."} {"_id": "201638", "title": "Use the words "Apple" and "Safari" in my application", "text": "I am developing an application and I want to use two words somewhere in my application. First, I want to use the word "Safari" in a menu option labeled "Open in Safari" (for a URL). The second is the word "Apple" in my Terms pages, where I want to use it in the phrase "...as provided by Apple", referring to technologies and APIs provided by Apple. Can I use them, and if so, under which conditions? (Adding "TM" or "Copyright" signs, maybe?) Thanks."} {"_id": "230673", "title": "How do memory-clean apps work?", "text": "In terms of operating system architecture, what does memory-cleaning software do to get rid of all the data that fills the virtual memory? I assume that it simply saves all RAM data into a file, but how exactly does it do this? What happens if you delete a file while a process that has resources on that file is executing? Hope this is not a too-broad question."} {"_id": "238461", "title": "What do I gain by using the Strategy pattern in this case?", "text": "I wrote a program in Java that plays simple music. Currently chords have only one way ('strumming pattern') to be played. I want to expand this and create different 'strumming patterns' that chords can use to play their notes. The `Chord` class has a `play()` method that is responsible for playing the chord. Currently it contains the logic for the only 'strumming pattern' of how the notes are played. To add new strumming patterns, the simplest approach is to change `play()` to something like this:

    void play(int strummingStyle) {
        if (strummingStyle == REGULAR_STYLE) playRegular();
        if (strummingStyle == SOMETHING_ELSE_STYLE) playSomethingElse();
        // .. etc
    }

Have a method for each strumming style, and parameterize the method to play a specific style. However, using the Strategy pattern feels like a better approach. What I mean is to encapsulate each strumming pattern in a subclass of `StrummingPattern` and set the `Chord` to a specific strumming pattern: `chord.setStrummingPattern(StrummingPattern pattern);`.
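As a rough sketch, the hierarchy I have in mind would look something like this (the exact signatures are just my guess at this stage; each pattern would be handed the chord's notes, e.g. through its constructor):

    // Each strumming style encapsulated behind one interface
    interface StrummingPattern {
        void play();   // plays the chord's notes in this pattern's style
    }

    class RegularStrumming implements StrummingPattern {
        public void play() { /* the current, 'regular' way of playing the notes */ }
    }

    class SomethingElseStrumming implements StrummingPattern {
        public void play() { /* some other strumming style */ }
    }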
`play()` would then simply delegate to the strumming pattern like so:

    void play() {
        pattern.play();
    }

While this clearly **feels** like the better, more OO approach, I find that I can't explain to myself what the actual benefits are. And it's important for me to actually understand _why_ I'm doing something. Please explain why using the Strategy pattern in a case like this is a better approach than the more naive 'a method represents a behavior' approach. **What exactly are the Strategy pattern's benefits in this kind of situation? Why should I use it?**"} {"_id": "96934", "title": "Objective Metrics for Software Quality", "text": "There are various types of quality that can be measured in software products, e.g. fitness for purpose (e.g. end use), maintainability, efficiency. Some of these are somewhat subjective or domain specific (e.g. good GUI design principles may be different across cultures or dependent on usage context, think military versus consumer usage). What I'm interested in is a deeper form of quality related to the network (or graph) of types and their inter-relatedness, that is: what types does each type refer to, are there clearly identifiable clusters of interconnectivity relating to a properly tiered architecture, or conversely is there a big 'ball' of type references ('monolithic' code)? Also, the size of each type and/or method (e.g. measured in quantity of Java byte code or .NET IL) should give some indication of where large complex algorithms have been implemented as monolithic blocks of code instead of being decomposed into more manageable/maintainable chunks. An analysis based on such ideas may be able to calculate metrics that are at least a proxy for quality. The exact thresholds/decision points between high and low quality would, I suspect, be subjective, e.g. since by maintainability we mean maintainability by human programmers, and thus the functional decomposition must be compatible with how human minds work. As such I wonder if there can ever be a mathematically pure definition of software quality that transcends all possible software in all possible scenarios. I also wonder if this is a dangerous idea: if objective proxies for quality become popular, business pressures will cause developers to pursue these metrics at the expense of the overall quality (those aspects of quality not measured by the proxies). Another way of thinking about quality is from the point of view of entropy. Entropy is the tendency of systems to revert from ordered to disordered states. Anyone that has ever worked on a real world, medium to large scale software project will appreciate the degree to which the quality of the code base tends to degrade over time. Business pressures generally result in changes that focus on new functionality (except where quality itself is the principal selling point, e.g. in avionics software), and the eroding of quality through regression issues and 'shoe-horning' functionality where it does not fit well from a quality and maintenance perspective. So, can we measure the entropy of software? And if so, how?"} {"_id": "201630", "title": "Do all programs run in a loop?", "text": "I was wondering whether all programs run in a loop, checking for variable changes and then acting upon said changes, followed by another loop. Or do they run in a while statement, waiting for a variable to change before they execute a routine?
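In other words, the picture in my head is something like this (pure pseudocode dressed up as Java - not any real OS or framework):

    // The kind of polling loop I am imagining a program runs in
    class ImaginedProgram {
        boolean running = true;
        void run() {
            while (running) {
                String event = pollForEvent();   // check whether anything changed
                if (event != null) {
                    handle(event);               // act upon the change
                }                                // ...then loop and check again
            }
        }
        String pollForEvent() { return null; }   // placeholder
        void handle(String event) {}             // placeholder
    }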
Or am I just completely wrong?"} {"_id": "230679", "title": "If an object should be flagged, should it be built with the flag attribute?", "text": "Let us assume I have a basic object with a handful of self-relevant points of data.

    class A(object):
        def __init__(self, x, y, width, length):
            self.x = x
            self.y = y
            self.width = width
            self.length = length

And so we go on to instantiate several `A` objects and shove them into a list, as programmers are wont to do. Now let us assume that these objects need to be checked for a particular state every so often, but we needn't check them on _every_ pass, because their state doesn't change particularly quickly. So a checking method is written somewhere:

    def check_for_ticks(object_queue):
        for obj in object_queue:
            if obj.checked == 0 and has_ticks(obj):
                do_stuff(obj)
            obj.checked -= 1

Ah, but wait - it wasn't designed with an `obj.checked` attribute! Well, no big deal; Python is kind and lets you add attributes whenever, so we change the method a bit.

    def check_for_ticks(object_queue):
        for obj in object_queue:
            try:
                obj.checked -= 1
            except AttributeError:
                obj.checked = 0
            if obj.checked == 0 and has_ticks(obj):
                do_stuff(obj)

This works, but it gives me pause. My thoughts are mixed, because while it's a functional attribute to give an object and it solves a problem, the attribute isn't really used by the object. `A` will probably never modify its own `self.checked` attribute, and it isn't really an attribute that it 'needs to know about' - it's just incredibly convenient to give it that attribute in passing. But -- what if there are multiple methods that might want to flag an object? Surely one does not create the class with every attribute that some other method might maybe want to flag it for? Should I be giving objects new attributes in this way? If I should not, what is the better alternative to just giving them new attributes?"} {"_id": "238466", "title": "Is there a name for this technique in testing?", "text": "When I've written tests for some code and want to make sure that it's actually testing what it's supposed to, I'll mess up or remove the code-under-test, and see if the tests fail. If they don't, I've got a problem. Is there a name for this technique? It's a real bother having to talk in sentences every time I want to suggest to someone that they do it."} {"_id": "199735", "title": "Static or non-static?", "text": "For example I have these two classes: First

    public class Example
    {
        public Example() { }
        public int ReturnSomething() { return 1; }
    }

Second

    public class Example
    {
        public Example() { }
        public static int ReturnSomething() { return 1; }
    }

I want to use the `ReturnSomething` method; which of the two classes allocates less/more memory? When I use

    int x = Example.ReturnSomething();

or when I use

    Example example = new Example();
    int x = example.ReturnSomething();

"} {"_id": "234412", "title": "Why have private static methods?", "text": "I just wanted to clear up a question I have. What is the point of having a private static method, as opposed to a normal method with private visibility? I would have thought an advantage of having a static method is that it can be called without an instance of a class, but since it's private, is there even a point to it being static?
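For example, take this contrived sketch (not from any real codebase):

    // A contrived example: the helper touches no instance state at all
    class PriceCalculator {
        private final double taxRate = 0.20;

        public double totalFor(double subtotal) {
            return withTax(subtotal, taxRate);   // instance method using the helper
        }

        // Private static: it cannot read or mutate PriceCalculator's fields,
        // which the compiler now enforces -- but is that the only benefit?
        private static double withTax(double amount, double rate) {
            return amount * (1 + rate);
        }
    }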
The only reason I can think of is that it helps in conceptually understanding the method at the class level as opposed to the object level."} {"_id": "234747", "title": "Dependency Inversion Principle vs "Program to an interface, not an implementation"", "text": "I'm trying to understand how the Dependency Inversion Principle differs from the "program to an interface, not an implementation" principle. I understand what "program to an interface, not an implementation" means. I also understand how it allows for more flexible and maintainable designs. But I don't understand how the Dependency Inversion Principle is different from the "program to an interface, not an implementation" principle. I read about DIP in several places on the web, and it didn't clear up my confusion. I still don't see how the two principles differ from each other. Thanks for your help."} {"_id": "234741", "title": "Location services on Mobile platforms", "text": "Do location-based services on mobile devices know exactly what business you are in, or do they just have a general approximation of where you are, so that you have to manually check in to be specific? I'm trying to see if it's possible for an application on a mobile phone to use location services to determine how many people are in a particular business without them having to manually check in. Is this possible, and if so, would this be accurate information or a loose approximation? Thanks in advance ^_^"} {"_id": "38361", "title": "What design is best for data transformation?", "text": "My company's database makes data available to a lot of external applications, so I need to transform the same data into a lot of _dynamic_ views. I can see that a former database developer implemented many long chains of view-function-procedure call sequences to do the transformations common to all external applications. I think this architecture and such long request chains (a stored proc calls a function, then the function calls some view, and this view is based on another one, and so on) are a performance problem; at least, the query optimizer does not resolve these issues (please confirm my guesses). Is it a good approach? Does it cause degradation of performance? If yes, how can I reimplement the objects of the database? At this moment I see these steps to do this: * analysis of the source data structure (own data) * analysis of all external systems (what formats the database has to provide for) * separate views, functions and stored procs for every external subsystem (I have to avoid long chains and DB objects common to many subsystems, if that is a cause of the problem)"} {"_id": "251182", "title": "How to handle a Restful Call in a RESTless state?", "text": "I have a bit of a dilemma. We are choosing our DbContext using a dynamic builder. This is done because in the current database structure we have a separate server for every "Customer". All of these customer databases use the same model, so they are all exactly the same structure. Enter picking the context to use with each call. We are using WebApi 2 and Angular. What we do is: on the first "page" you pick your customer and we pass the customerId down, and I need to store this value until the user decides to swap. That being said, Session and WebApi 2 are not friends. I have a base repository that needs the customerId every time a call is made to one of the repositories. We have to verify that we are using the right context for that customer. So... that's where I am. Should I break the rules and use Session to store the customerId?
I could use my DI container to inject an instance of HttpContext.Current into every call so that I could ignore the fact that Session is null on a WebApi controller. Is there another way to handle this?"} {"_id": "5613", "title": "Is the abundance of frameworks dumbing down programmers?", "text": "With all of the frameworks available these days - ORMs, dependency injection (DI), inversion of control (IoC), etc. - I find that many programmers are losing, or don't have, the problem-solving skills needed to solve difficult issues. Many times I've seen unexpected behaviour creep into applications, with the developers unable to really dig in and find the issues. It seems to me that deep understanding of what's going on under the hood is being lost. **Don't get me wrong**, I'm not suggesting these frameworks aren't good and haven't moved the industry forward, only asking whether, as an unintended consequence, developers aren't gaining the knowledge and skill needed for a deep understanding of systems."} {"_id": "85997", "title": "Learning how to integrate JavaScript with other languages", "text": "After learning JavaScript syntax, what are some good resources for learning about integrating JavaScript with other languages (HTML, XML, CSS, PHP) to create real, useful applications? I'm most interested in reading articles or other people's code - not so interested in books. Basically, I'm looking to move from programming puzzle-solvers to programming complex applications and could use some advice."} {"_id": "38368", "title": "What kind of online technical documentation system would you recommend?", "text": "The goal is to have an online documentation system with these major requirements: * It will be mainly used as an intermediate stage for the final technical docs of all our applications (which will probably never get completed, though :]). It would typically be used like so: someone has a problem, I fix it, and write down the fix immediately. What happens now is getting unmanageable: someone has a problem, I fix it, both of us are happy, but 2 months later somebody else has the same problem and nobody remembers what the fix was. * Accessible from everywhere, running behind our Apache server * User/group management, allowing read-only/read-write/admin access * The format is not too important: plain text would do, though wiki-style would be nicer * Cheap or free Some ideas of mine: * just serve files on a file share or through ssh (cons: not too compatible with Windows; pros: simple, can be any file type) * keep it in an SCM (svn/git; same as above but easier to access and with access control) * Confluence: we use Jira already, so is Confluence worth it? How does it integrate with Jira? * something else? Please don't hesitate to comment on these or share your experience with other systems."} {"_id": "225648", "title": "Why can python webapps keep sessions between restarts and not Java?", "text": "I've used both webapp2 + GAE for Python and a number of Java/JEE webapp frameworks. The Python WSGI framework could keep users logged in while I redeploy the app, while none of the Java web frameworks that I tried could do it. If I redeploy, users of the Java webapp will get logged out, but if I use GAE for Python then a redeployment doesn't log users out. Is this a general "feature" of Python vs Java webapps - is it true in general, or is it just a coincidence that the ones I tried had these behaviours? Is there a name for this feature, the ability to keep users logged in between redeployments and also to keep caches (e.g.
memcache) from being erased when updating the app?"} {"_id": "245339", "title": "How to avoid Memory Error", "text": "I am working with quite large files (pytables) and I am having problems with a `MemoryError` when I try to load the data for processing. I would like some tips about how to avoid this in my 32-bit Python, since I am new to working with pandas and pytables, and I do not know how to split the data into small pieces. My other concern is that, if I do split the data, I don't know how to calculate statistics like the mean, std, etc. without having the entire list or array. This is a sample of the code that I am using now; this works fine with small tables:

    import pandas as pd
    from tables import openFile

    def getPageStats(pathToH5, pages, versions, sheets):
        with openFile(pathToH5, 'r') as f:
            tab = f.getNode("/pageTable")
            dversions = dict((i, None) for i in versions)
            dsheets = dict((i, None) for i in sheets)
            dpages = dict((i, None) for i in pages)
            df = pd.DataFrame([[row['page'], row['index0'], row['value0']]
                               for row in tab.where('(firstVersion == 0) & (ok == 1)')
                               if row['version'] in dversions and row['sheetNum'] in dsheets
                               and row['pages'] in dpages],
                              columns=['page', 'index0', 'value0'])
            df2 = pd.DataFrame([[row['page'], row['index1'], row['value1']]
                                for row in tab.where('(firstVersion == 1) & (ok == 1)')
                                if row['version'] in dversions and row['sheetNum'] in dsheets
                                and row['pages'] in dpages],
                               columns=['page', 'index1', 'value1'])
            for i in dpages:
                m10 = df.loc[df['page'] == i]['index0'].mean()
                s10 = df.loc[df['page'] == i]['index0'].std()
                m20 = df.loc[df['page'] == i]['value0'].mean()
                s20 = df.loc[df['page'] == i]['value0'].std()
                m11 = df2.loc[df2['page'] == i]['index1'].mean()
                s11 = df2.loc[df2['page'] == i]['index1'].std()
                m21 = df2.loc[df2['page'] == i]['value1'].mean()
                s21 = df2.loc[df2['page'] == i]['value1'].std()
                yield (i, m10, s10), (i, m11, s11), (i, m20, s20), (i, m21, s21)

As you can see, I am loading all the necessary data into a Pandas DataFrame to process it - just the mean and the std for now. This is quite fast, but for a pytable with 22 million rows I get a `MemoryError`."} {"_id": "213773", "title": "content fingerprint algorithm", "text": "I have a database containing just simple URLs. It is as simple as it sounds for now, and a URL can link to a website or a document (i.e. anything parseable to text). Now I have some simple code which inserts records into the database. The problem is that the website/document may actually be the same, just: * Hosted somewhere else * Not available, so linked from Google Cache * Not available, so linked from archive.org * Page can be republished from another source * etc... **I would like to get some kind of fingerprint of a website/document, and I need to think of a way to do so.** What I have thought of: **I can rely on the title**, because even if the content is published somewhere else or cached on some server it will probably have the same title. That's OK, as the title is usually short and doesn't consume a lot of space. Downside - this works only for websites. Maybe the filename is the equivalent for documents, but these can be renamed too. **I can rely on keyword counts.** But the problem is that I just enter a URL and no keywords, nor do I want to. **I can get some kind of checksum of all the content.** But this method would be a total guess. **SO MY QUESTION IS:** How can I fingerprint content so that later on I can identify possible duplicates? **EDIT** I don't want to fingerprint just the title. I want to fingerprint the whole content.
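To be clear about what I mean by a checksum being a 'total guess': I know I could hash the normalized text, along the lines of this rough sketch (my own illustration, not an existing library's API):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    class ContentFingerprint {
        // Rough idea: strip whitespace noise, normalize case, then hash the
        // extracted text, so the same content hosted elsewhere hashes the same.
        static String fingerprint(String extractedText) throws Exception {
            String normalized = extractedText.toLowerCase().replaceAll("\\s+", " ").trim();
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(normalized.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }

But I suspect an exact hash like this is too brittle for my case: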
Content stays the same but can be hosted anywhere, and its structure (title too) can change. For documents, the filename can change too. I want to fingerprint the content - all the text - so I can later identify possible duplicates."} {"_id": "213771", "title": "Mixed licenses inside product", "text": "This question relates to software licensing. Someone familiar with open-source licensing may be able to answer this question. I am facing an issue in trying to interpret/understand some licenses, due to a language barrier (English is not my first language). My situation is that I am using three products inside a distributable closed-source commercial product. These are: * Jersey (Common Development and Distribution License) * Jetty (dual Apache & Eclipse) * RXTX (LGPL) The tools above are not modified in my project and are used 'as is', included via Maven. My question is: what steps should I take to be perfectly sure I am not violating any license agreements? Should I (for example): * Put some information inside the README file stating that I am using this software? * Print something on the command line? * Place the full license texts of those tools in separate files (but this could be misleading to the customer, because my product doesn't have any license)? * Extract these license text files during the first JAR run? I am looking for practical information, not necessarily from lawyers."} {"_id": "228078", "title": "Data Access Layer for application", "text": "I am working on a retail application where currently I'm using raw SQL like

    insert into some_table values (Textbox1.Text, Textbox2.Text, ...)

and

    update some_table set some_column = value

for save and update, but the problem is I need to write this everywhere that a save or update is needed. Is there a better solution for handling CRUD operations, like a separate class to contain the data access logic, which would use methods in the data access layer to perform the CRUD operations?"} {"_id": "228077", "title": "How to combine Google Analytics with Relational Database", "text": "I have a website with quite a lot of registered users; they are stored in a PostgreSQL database. I am also running Google Analytics on the website to track the users. Is there a way to individually combine the GA data with the registration data from the website, e.g. finding out the e-mail address of a user we found in GA? Also, is there a way for non-technical people to access the SQL database without knowing SQL? Thanks!"} {"_id": "218899", "title": "Automating tests in a browser not supported by Selenium", "text": "I work at a company that has a payment system which runs under an executable kiosk instance of Internet Explorer 7. When the application is running, the process name is not similar to iexplore.exe or anything like that; the name of the Windows process is the same as the executable.exe. We cannot run the system under a local webserver because the executable expects some pages in pure .html to process the data and give the users their change. The executable passes information to the .html forms and vice versa. So we cannot use another browser. I think Selenium will not work with this browser. Any idea how I can automate tests on the interface? **EDIT** Selenium fails trying to start my executable: _Exception: Failed to start new browser session: org.openqa.selenium.server.RemoteCommandException: Error while launching browser_ I think the executable is not the browser itself.
It's just an executable that performs some security tasks and after that somehow calls an Internet Explorer instance. But the Internet Explorer process doesn't appear in the Windows process list."} {"_id": "11565", "title": "What programming skill do you use most often that you are terrible at?", "text": "What are the techniques that you use just about every day, but are still awful at? Are you bad at it because of the difficulty of the task, laziness to learn better, or some other reason?"} {"_id": "47028", "title": "How could we rewrite the 'No Evil' license to make it 'free'?", "text": "I did not find the lawyers' SE site, so I thought it best to post here.

    /*
     * ...subject to the following conditions:
     *
     * The above copyright notice and this permission notice shall be included in all
     * copies or substantial portions of the Software.
     *
     * The Software shall be used for Good, not Evil.
     *
     * THE SOFTWARE IS PROVIDED "AS IS"...
     */

This is the 'non-free', Crockford, No-Evil, MIT-style license. This license is considered non-free because of this phrase: "**The Software shall be used for Good, not Evil.**" How could we rewrite this to become a 'free' license, while retaining the original spirit of the sentence?"} {"_id": "103563", "title": "Making a basic computer that adds numbers passed by the user", "text": "What are the things that I should be familiar with to make my own computer that can add two numbers, in the same way that a calculator does? Can anyone please give me links that teach these things in detail? I know how to program but have never programmed at the root level."} {"_id": "250302", "title": "How to identify and run the most relevant automated tests?", "text": "Suppose you have a reasonably large codebase (0.5 - 1 msloc) with a large test suite (6-7 hr single-threaded runtime; a mix of unit tests and integration tests built with different tools). You have a proposed patch (or diff or pull request) for the codebase, and you want to automatically run the most _relevant_ tests. Running _all_ tests would be too expensive. Running smoke tests would be too shallow. The goal is to find something in between. By automating the selection of relevant tests, you can help new developers participate in TDD -- and provide timely pre-merge feedback (as part of code review). What techniques or arrangements would you use to identify "relevant" tests? (A basic example: if a patch modifies "src/(.*).php", then run "phpunit test/{$1}Test.php".) What, if any, existing tools come to mind? Or what new tools are needed? Or is it impossible for tooling to help? (For context: in my particular use case, I work with LAMP-based web apps, so tools or techniques that work with PHP/JS/bash are most helpful. But similar issues probably arise with any large app/deep stack, so examples from other stacks could provide insight.)"} {"_id": "103567", "title": "What is the most orthogonal programming language?", "text": "I find myself repeatedly annoyed by having to teach freshmen about special language rules (like array-to-pointer decay) that have absolutely nothing to do with programming in itself. So I wondered: What is the programming language with the smallest number of special language rules, where everything is first class and can be composed without annoying technical restrictions? Wouldn't such a language be the perfect teaching language? > ### Moderator Note > > We're looking for long answers that provide some explanation and context.
> Don't just list a language: please explain _why_ you think the language > answers the question. Answers that don't explain anything will be deleted. > See Good Subjective, Bad Subjective for more information."} {"_id": "82424", "title": "What suitable Q & A application can integrate with a Java portal?", "text": "I'm looking for suggestions on whether **we should build** one _or_ whether **there are readymade applications** that do the following: * Allow admins to build a structured workflow of support Q & A, similar to Windows Troubleshooting, i.e. when you try one tip, you then get the next one. * Allow end-users to search and follow the Q & A We have an existing Liferay portal into which we would like to plug this app - is there some suitable app, like a CMS, which does this? I mean building the structured questions - not just plain content like a wiki. Open source options are welcome, but I'd like to know of COTS options as well."} {"_id": "155655", "title": "Best practice for marking a bug as resolved in Bugzilla?", "text": "I am wondering what the best way is to handle marking a bug as resolved while providing the version of the component/product in which the fix can be found. **Context** For a project I am working on, we are using Bugzilla for issue tracking, and we have the following: * A product "A" with a version number like vA.B.C.D. This product "A" has the following components: * Component "C1" with a version number like vA.B.C.D, * Component "C2" with a version number like vA.B.C.D, * Component "C3" with a version number like vA.B.C.D. Internally we keep track of which component versions have been used to generate product A version vA.B.C.D. Example: product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. And product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" has access only to the different components' tags folders in SVN, and not the trunk of each component repository. **Problem** Now the problem is the following: when a bug is found in product "A" and the bug is related to component "C1", a version of product "A" is chosen (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it will be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; then, when the bug seems to be fixed, and after some testing and validation, the developer generates a new tag for component "C1" with the version v1.0.0.4. At this time the developer of component "C1" needs to update the bug report, but what is the best thing to do: 1. Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component"? 2. Keep the bug as assigned, and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that will be generated with the newest version (v1.0.0.4 of C1)"? 3. Another possible way to deal with this problem.
Right now the problem is that when a product component CX is fixed, it is not known in which future version of the product A it will be included, so it is not possible for me to say in which version of the product it will be solved, but it is possible to say in which version of the component CX it has been solved. So when do we need to mark a bug as solved: when the product A version includes the fixed version of CX, or only when the CX component has been fixed? Thanks for your personal feedback and ideas about this!"} {"_id": "82429", "title": "Is FPA incompatible with Agile?", "text": "I am currently working in an Agile + SCRUM based project. I work on a lower level module. This leads to a peculiar problem. Much of the time, the work I do cannot be directly associated with a user story. Since I cannot relate a user story directly to my work, I often face the problem that my requirements are unclear. Also, I tend to "miss out" a few things which only become clear to me late in the sprint. And my layer's work cannot be tested directly. On a similar note, our GUI team was generating a lot of bugs due to missing out minor implicit requirements. For instance, the text field width was less than anticipated, etc. I had used FPA before in another project and I think it is a good way to break down requirements to the tiny atomic details, clarify the minor points and then build your software using the points as a checklist. I really felt that it would benefit our project and my layer in particular. I suggested it in my team meeting but it was turned down because of the documentation involved. The "wise guy's" rationale: FPA is documentation-heavy and Agile frowns on heavy documentation. I argued that Agile is about producing good products, and if an Agile process cannot be moulded to add a practice that will reduce defects and produce good code, then it simply isn't "agile". In the end the team voted down the proposal claiming that it wasn't "Agile"; however, I suspect the reason was more likely to be laziness. Is FPA really incompatible with Agile due to the amount of documentation involved? What about the lofty aims of the Agile Manifesto? Is it just "working code" or "good working code"?"} {"_id": "51115", "title": "Is there a canonical book for learning Java as an experienced developer?", "text": "I have been a .NET developer now for about the past 5/6 years, give or take. I have never done any professional Java development and the last time I really touched it was probably back in college. I have been toying with the Scala language a little bit but nothing serious. Recently, I've been offered an opportunity to do some pretty cool work, but using Java instead of .NET. I think I can get by alright with my current skill set, meaning I already know how to program well and am familiar with languages such as C# and C++, etc. So, the syntax and all that language stuff are really not a problem. What I need is a really good reference book and a book about how to think in Java. Each language/framework/stack tries to address things a certain way and I'm sure Java is no different. What are some great Java books that you simply can't live without? Are there any books that talk about the most important parts of Java that must be understood before all else? As a side note, I will be doing mostly Java web development.
Not really 100% sure on what types of stuff they are using for persistence, framework, server, etc."} {"_id": "187787", "title": "Effective Java for experienced Java programmers?", "text": "I own and have read 'Java Puzzlers', 'Clean Code' and the GOF's 'Design Patterns', plus more specific technology books; however, I have not yet read 'Effective Java'. Whenever I see a list of must-read books I always see a copy of 'Effective Java', but I have put off buying it, seeing it as a learn-to-write-Java book rather than a write-better-Java book. Is it still worth a read, or even a purchase? What is special about this book that I am missing out on by not reading it? I know my design patterns, practice TDD and attempt to write clean code :), so what will I gain from this book?"} {"_id": "106647", "title": "Most efficient way to learn Java if you already know how to code?", "text": "> **Possible Duplicate:** > Is there a canonical book for learning Java as an experienced developer? I've been coding for close to 25 years now in BASIC (C64 & Amiga), (Object) Pascal, C, LPC and, for the last few years, Python. Python is definitely my favourite (and strongest) language. However, lately I've been forced to do stuff in Java - I have never written a single line of Java code so far. So I'm asking your advice - what is the fastest and most efficient crash course to learn Java? **EDIT**: I'm really looking for something along the lines of _teaching Java to people who know how to code and what OOP is_ - not something that has _with no previous programming experience_ in the synopsis."} {"_id": "141144", "title": "What is the best book to learn optimized programming in Java?", "text": "> **Possible Duplicate:** > Is there a canonical book for learning Java as an experienced developer? > Let me elaborate a little: I used to be a C/C++ programmer, where I used data structure concepts like trees, queues, stacks, etc. and tried to optimize as much as possible - a minimum number of loops and variables - to make my code efficient. It's been a couple of years since I started writing Java code, but my code is simply not that efficient in terms of performance, memory use, etc. > To the point: I want to enter **programming challenges** using Java, so I need to improve my approach to the things I program. So please suggest some books that can help me learn to program better and have a chance at solving challenges in programming."} {"_id": "201093", "title": "OOP concept: is it possible to update the class of an instantiated object?", "text": "I am trying to write a simple program that should allow a user to save and display sets of heterogeneous, but somehow related, data. For clarity's sake, I will use a representative example of vehicles. The program flow is like this: 1. The program creates a **Garage** object, which is basically a class that can contain a list of vehicle objects 2. Then the user creates **Vehicle** objects; these **Vehicles** each have a property, let's say _License Plate Nr_. Once created, the **Vehicle** object gets added to a list within the **Garage** object 3.
-- **Later on** --, the user can specify that a given **Vehicle** object is in fact a **Car** object or a **Truck** object (thus giving access to some specific attributes, such as _Number of seats_ for the Car, or _Cargo weight_ for the Truck). At first sight, this might look like an OOP textbook question involving a base class and inheritance, but the problem is more subtle because at object creation time (and until the user decides to give more info), the computer doesn't know the exact Vehicle type. Hence my question: **how would you proceed to implement this program flow? Is OOP the way to go?** Just to give an initial answer, here is what I've come up with until now. There is only one Vehicle class, and the various properties/values are handled by the main program (not the class) through a dictionary. However, I'm pretty sure that there must be a more elegant solution (I'm developing using VB.net): Public Class Garage Public GarageAdress As String Private _ListGarageVehicles As New List(Of Vehicles) Public Sub AddVehicle(Vehicle As Vehicles) _ListGarageVehicles.Add(Vehicle) End Sub End Class Public Class Vehicles Public LicensePlateNumber As String Public Enum VehicleTypes Generic = 0 Car = 1 Truck = 2 End Enum Public VehicleType As VehicleTypes Public DictVehicleProperties As New Dictionary(Of String, String) End Class NOTE that in the example above the public/private modifiers do not necessarily reflect the original code."} {"_id": "143263", "title": "Resources on concepts/theory behind GUI development?", "text": "I was wondering if there were any resources that explain the concepts/theory behind GUI development. I don't mean a resource that explains how to use a GUI library, but rather how to create your own widgets. For example, a resource that explains different methods for implementing scrollable listboxes. I ask because I have an idea for a game tool where I would like to create my own widgets and let users drag and drop them onto some kind of form. How do GUI libraries usually draw widgets? I'm not sure if reskinning widgets from a GUI library fits my needs, since widget behavior needs to be dynamic based on user interaction."} {"_id": "225984", "title": "What are the valid test cases for method calls within an if-else condition?", "text": "I have a discount service that gets called if certain conditions are met. I need to write test cases to check if the discount service is called. My doubt is whether checking that the discount service is NOT called for types 4 and 5 is a valid test case scenario. public void ApplyDiscount(int typeId, double amount) { if(typeId == 1) { //call discount service } else if(typeId == 2) { //call discount service } else if(typeId == 4) { //do other stuff } else if(typeId == 5) { //do some other stuff } }"} {"_id": "97131", "title": "What revision control has the best merging engine?", "text": "When it comes to merging, each version control system has its own engine to do it. Conflicts are unavoidable, but the question is: which revision control system has the best engine for avoiding conflicts? Is it possible to measure such a thing? More specifically, we are thinking of moving to Git because it has nice hype and a reputation for handling conflicts better... Any comment is appreciated..."} {"_id": "240325", "title": "Should a user story be shared between developers?", "text": "I commonly see stories that have back-end and front-end development. For example, consider a large dialog with a few tables and some dynamic controls.
We'll make several stories (maybe one for each table and another for the dynamic control system). The dev team will then split, with one person on the back-end and another on the front-end. This makes it easy for the back-end person to just worry about the structure of the SQL layer while the front-end person focuses on stuff like layout. After the initial interface between back- and front-end is agreed, the two developers can focus on getting their part done by the end of the sprint. Then comes the chaos. Who "owns" which story? What does "in progress" or "done" mean? Should we make two separate stories for back-end and front-end? If so, doesn't that break the idea of user stories based on features? Our system has a notion of "sub-tasks", which eases some of these problems. But sub-tasks add extra complexity. Is there a better way? Is this a "bad" way to use Scrum? I have been using some form of Agile over the past few years at a couple of places. I have no official training yet, so please forgive any wrong terminology or ideology. I'm just trying to learn practical ways to improve our process."} {"_id": "229468", "title": "Use global variables or methods in an API's frontend", "text": "I am currently designing a graphics library in Java, and now that it's come to making the frontend, I am curious why I have never seen libraries using global variables for their settings/properties - in fact some go out of their way to protect them, such as JavaScript libraries, and of course choose to use getters and setters such as: Viewport.VSyncEnabled(true) instead of simply Viewport.VSyncEnabled = true; I assume that it is to let the library know what is happening, but there are so many variables that simply don't need monitoring, such as output debug data. I would really like to use global variables; however, if it is very much against convention and there are benefits (that I don't know of right now) then I'll be happy to use methods. My reason for wanting to use global variables instead is that, by my benchmarks, it is faster to access them directly like that."} {"_id": "200949", "title": "Class design with bi-directional relationships", "text": "This is a purely design question. I want to port a nice educational "game", Bug Brain, to Java. In this game you design neural networks which consist of three elements: neurons, nodes and synapses. A neuron is used to perform some kind of processing on its input signals to produce the desired output; it can be connected with other neurons/nodes (connections are uni-directional and weighted). We can think of nodes as junctions - they can also be connected with other neurons/nodes (and these connections are also uni-directional and weighted), but we cannot change the weights of input connections. A synapse is a connection between these two elements. To sum up: neurons and nodes have both inputs and outputs (to other neurons/nodes) and they do some inputs->outputs processing. Q: Now, how would you design this class structure? This is a sketch I "produced" so far: screenshot of the design Because the operation "connect two joins (i.e. neuron/node)" has three responsibilities (add synapse, add output to join A, add input to join B), I've decided to put it as a static method in a controller class. Of course I could avoid such two-directional relationships between elements: I don't have to keep track of outputs, only inputs, but there are some other limitations, e.g.
there can be only 8 I/Os per join, so then I'd have to iterate over all synapses every time I add a connection to see if there is still some place to add it. In this case (this game) you won't have too many connections, so it would not hit performance, but I'd like to know how to deal with such situations in general. As it's the foundation of the whole logic of this project, I'd like to have it properly made. All suggestions are warmly welcome ;)"} {"_id": "244665", "title": "What is the concept behind writing a cancel operation in C++?", "text": "I'm attempting to write a cancel operation for a software download application. This application will first transfer the software to the device and then install the software on it. (These are givens I'm not allowed to change.) What should the cancel operation do? When a user presses 'cancel', the application should stop transferring/installing the software immediately. Question: Since I've never written a "cancel" function, I'm wondering what types of things to consider when writing the code, and what common bugs I should expect and how to deal with them? Couldn't find anything on Google, so if you have some links that would be good reads I'd really appreciate it; I'm not looking for answers, just guidelines/macro/concept help."} {"_id": "200945", "title": "Why do C# and Java use reference equality as the default for '=='?", "text": "I've been pondering for a while why Java and C# (and I'm sure other languages) default to reference equality for `==`. In the programming I do (which certainly is only a small subset of programming problems), I almost always want logical equality when comparing objects instead of reference equality. I was trying to think of why both of these languages went this route instead of inverting it and having `==` be logical equality and using `.ReferenceEquals()` for reference equality. Obviously using reference equality is very simple to implement and it gives very consistent behavior, but it doesn't seem like it fits well with most of the programming practices I see today. I don't wish to seem ignorant of the issues with trying to implement a logical comparison, and that it has to be implemented in every class. I also realize that these languages were designed a long time ago, but the general question stands. Is there some major benefit of defaulting to this that I am simply missing, or does it seem reasonable that the default behavior should be logical equality, defaulting back to reference equality if a logical equality doesn't exist for the class?"} {"_id": "173176", "title": "JavaScript objects and Crockford's The Good Parts", "text": "I've been thinking quite a bit about how to do OOP in JS recently, especially when it comes to encapsulation and inheritance. According to Crockford, classical is harmful because of new(), and both prototypal and classical are limited because their use of constructor.prototype means you can't use closures for encapsulation. Recently, I've considered the following couple of points about encapsulation: 1. Encapsulation kills performance. It makes you add functions to EACH member object rather than to the prototype, because each object's methods have different closures (each object has different private members). 2. Encapsulation forces the ugly "var that = this" workaround, to get private helper functions to have access to the instance they're attached to. Either that, or make sure you call them with privateFunction.apply(this) every time (see the sketch below).
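To make both points concrete, here is a minimal sketch of the closure-based pattern I mean (the Counter example is my own invention, not code from the book):

// A constructor that uses closures for encapsulation.
function Counter(start) {
    var that = this;   // point 2: the workaround, so private helpers can reach the instance
    var count = start; // a truly private member: it lives in the closure, not on `this`

    function log() {   // a private helper; as a plain function its own `this` is useless,
        console.log(that.label + ': ' + count); // so it has to go through `that`
    }

    this.label = 'counter';
    this.increment = function () { // point 1: must live on the instance, not the prototype,
        count += 1;                // because it closes over this instance's `count`
        log();
    };
}

var a = new Counter(0);
var b = new Counter(10);
console.log(a.increment === b.increment); // false: every instance carries its own copies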
Are there workarounds for either of the two issues I mentioned? If not, do you still consider encapsulation to be worth it? Sidenote: The functional pattern Crockford describes doesn't even let you add public methods that only touch public members, since it completely forgoes the use of new() and constructor.prototype. Wouldn't a hybrid approach, where you use classical inheritance and new() but also call Super.apply(this, arguments) to initialize private members and privileged methods, be superior?"} {"_id": "39013", "title": "Need advice: Staying techie or going the MBA way?", "text": "I know this is a very subjective question and I am the best person to decide this for myself... but I am just looking for your views. I have 5 years of experience as a professional developer. I have a decent background in Maths and have done my bachelors in engineering in CS. I have still not reached a stage in my career where growth is difficult, and I do not foresee this happening for a very long time, if ever, because I find myself constantly (self) motivated to pick up new skills. A lot of my friends have however been getting through their MBAs lately... and not from the likes of Harvard or Kellogg, just mediocre colleges. They've however been landing paychecks fatter than mine even though they have little or no work experience. Given that I have the option of pursuing an MBA and have my finances in order (and am planning an MBA from INSEAD / IE), would it make sense for me to sell out what I like doing and go for an MBA? Will I regret not doing an MBA later, given that I am in the right age/experience group to do an MBA? I absolutely love what I am doing right now and also the people I'm doing it with, but am just worried whether this career would be as rewarding financially as the one after a management degree."} {"_id": "177668", "title": "How to avoid big and clumsy UITableViewController on iOS?", "text": "I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but can't seem to find any nice solution to this problem. Many `UITableViewController` implementations seem to be rather big. Most examples I have seen let the `UITableViewController` implement `<UITableViewDelegate>` and `<UITableViewDataSource>`. These implementations are a big reason why `UITableViewController` is getting big. One solution would be to create separate classes that implement `<UITableViewDelegate>` and `<UITableViewDataSource>`. Of course these classes would have to have a reference to the `UITableViewController`. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "Helper" classes or similar, using the delegate pattern. Are there any well established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: **How should you divide the controller of an MVC implementation into smaller manageable pieces? (Applies to MVC in iOS in this case)** There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue. Please provide an argument why your solution is awesome."} {"_id": "177666", "title": "How to structure classes in the filesystem?", "text": "I have a few (view) classes: Table, Tree, PagingColumn, SelectionColumn, SparkLineColumn, TimeColumn.
Currently they're flat under `app/view`, like this: app/view/Table app/view/Tree app/view/PagingColumn ... I thought about restructuring it, because the Trees and Tables use the columns, but there are some columns which only work in a tree, some which work in trees and tables, and in the future there will probably be some which only work in tables; I don't know. My first idea was like this: app/view/Table app/view/Tree app/view/column/PagingColumn app/view/column/SelectionColumn app/view/column/SparkLineColumn app/view/column/TimeColumn But since the SelectionColumn is explicitly for trees, I fear that future developers could get the idea of misusing them. But how to restructure it properly? Like this: app/view/table/panel/Table app/view/tree/panel/Tree app/view/tree/column/PagingColumn app/view/tree/column/SelectionColumn app/view/column/SparkLineColumn app/view/column/TimeColumn Or like this: app/view/Table app/view/Tree app/view/column/SparkLineColumn app/view/column/TimeColumn app/view/column/tree/PagingColumn app/view/column/tree/SelectionColumn"} {"_id": "234487", "title": "Classes in OOP: methods and attributes memory internals", "text": "I would assume that instances of the same class would actually share their methods, and just save different attributes in their namespace. How often do you arbitrarily add methods to a single instance? Yet, for example, in Python that's not the case. Instances' methods are all different objects. Even if you decorate them as `@classmethod`s (or use `__slots__`), they seem to be different objects in memory. Can someone give me a definition of what I'm describing here, or even give some details on why this model was chosen? What happens in other languages?"} {"_id": "240323", "title": "Term For Errors That Occur In Certain Environments", "text": "What is the term for an error or bug that exhibits itself only on computers with certain environments/setups? **Context:** We have produced our own anti-piracy software. We distribute applications that are locked using this anti-piracy software. The anti-piracy software is simply a windowless Windows OS executable. When it runs, it downloads an encrypted file (Client File) from our server, decrypts it, checks that the user's serial number occurs in the Client File and writes the result (hashed) to a Result File. Each of our company's products 'consults'/accesses this executable to verify that the user is allowed to run and use our product. On some of our clients' computers, our anti-piracy executable is being blocked from running or blocked from downloading the Client File (it's unknown what the cause is yet). These clients are Civil Engineering firms that have their Windows OS's very controlled and 'sandboxed' for security reasons. _So to sum up, the anti-piracy software works and passes our tests on most of our clients' computers and on all our in-house test nodes. The software fails on these few Civil Engineering clients' computers._ **What would be the term (if any) for this kind of error or bug in software?**"} {"_id": "191443", "title": "Why do relational databases only accept SQL queries?", "text": "As far as I know, most relational databases do not offer any driver-level API for queries, except a `query` function which takes an SQL string as an argument. I'm thinking how much easier it would be if one could do: var result = mysql.select('article', {id: 3}) For joined tables, it would be slightly more complex, but still possible.
For example: var tables = mysql.join({tables: ['article', 'category'], on: 'categoryID'}); mysql.select(tables, {'article.id': 3}, ['article.title', 'article.body', 'category.categoryID']) Cleaner code, no string parsing overhead, no injection problems, easier reuse of query elements... I can see a lot of advantages. Is there a specific reason why it was chosen to only provide access to queries through SQL?"} {"_id": "191444", "title": "Are generic programming and OOP mutually exclusive?", "text": "I never did generic programming before, being more of a Python guy, and generally using OOP. As I moved into generic programming, I faced some issues. I read the FAQ, but I am not satisfied. What I want to ask are the following questions (I am mostly interested in the first, but answering the others will be extremely welcome): Are generic programming and OOP mutually exclusive? Meaning, for example, are you supposed to have methods and functions accepting the template, instead of a base class or pure abstract class? * * * Some other questions I've got, only to provide context for my lack of understanding, are: How do traditional design patterns react to the generic programming approach and concepts? How do you prevent (or control) the genericity of each class/template from "bubbling up" in the dependencies dictated by the program logic, in particular when two types are related and always go together (e.g. a RealNumberProducer class and double vs. ComplexNumberProducer and std::complex<double>)?"} {"_id": "19911", "title": "How is the impact of a requirement change on existing code determined?", "text": "How do companies working on large projects evaluate the impact of a single modification on existing code? * * * Since my question is probably not very clear, here's an example: Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state, between the 2nd and 3rd one. It means that: * A constraint on the values 1 - 5 in the database must be changed, * Business layer and code contracts must be changed to add a new state, * The data access layer must be changed to take into account that, for example, the state `StateReady` is now 6 instead of 5, etc. * The application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc. When an application was written recently by one developer, it's more or less easy to predict every change to do. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? * * * Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather the way to predict the parts of an application which will be concerned by the change. _What_ those changes will be and _how difficult_ they are really doesn't matter in my question."} {"_id": "110431", "title": "What are the benefits of not including other header files in a header file?", "text": "Sometimes, a header file uses objects that are declared in other header files.
The user of this header file would have to include the dependency anyway and get the order correct too."} {"_id": "19912", "title": "Overview of computer science, programming", "text": "Where can I find an overview of computer science, programming, etc.? I graduated in physics and want to begin programming or similar, and I'm not even sure of what's out there or where to begin. What I'm looking for is some broad overview of existing technologies - just like you have a book "Physics of elementary particles" that explains the general view of that area, but for computers/programming of course. Do you have some book to recommend?"} {"_id": "39594", "title": "Tips for working at home?", "text": "> **Possible Duplicate:** > I program from home. What can I do to be more productive? What about if your office is your home? I'm not talking about a self-employed developer. I'm talking about being hired and working from your home. Do you have some tips for staying focused on work? Thanks"} {"_id": "195535", "title": "Nested classes vs namespaces", "text": "Is it good to use nested classes, or should I use namespaces instead? In context: I have a templated codec loader, and a codec has In objects and Out objects: template <class TypeOfData> class Codec { private: typename TypeOfData::In* reader; typename TypeOfData::Out* writer; }; On the other hand, I cannot use using TypeOfData; if I don't want to `::` my way to a static method."} {"_id": "119843", "title": "What specific task could be given to a Java web development candidate during an interview?", "text": "I'm looking for ideas for tasks that I can give to a Java programmer candidate to check his knowledge of Java; I'm specifically looking for a Java developer for web applications."} {"_id": "221609", "title": "How to simulate and estimate a long running application", "text": "We are building a large, modular application that is intended to run across multiple processors. I need to develop a method to estimate how long it will take to run all of the functions. I already have each function's execution time on a single processor. Now I need to find a way to account for the communication overhead. I also need to identify if there is a difference in execution time based upon particular functions being spread across a number of processors instead of remaining on the same processor. How should I simulate the overall application in order to identify the communication overhead and to provide an estimate of what the total execution time would be?"} {"_id": "245802", "title": "Is there a downside to using AggressiveInlining on simple properties?", "text": "I bet I could answer that myself if I knew more about tools to analyze how the C# JIT behaves, but since I don't, please bear with me asking. I have simple code like this: private SqlMetaData[] meta; [MethodImpl(MethodImplOptions.AggressiveInlining)] private SqlMetaData[] Meta { get { return this.meta; } } As you can see, I put AggressiveInlining on because I feel like it should be inlined. I think. There's no guarantee the JIT would inline it otherwise. Am I wrong? Could doing this kind of thing hurt performance/stability/anything?"} {"_id": "245801", "title": "Web app that runs other applications?", "text": "I am trying to write a web application that interacts with a service that would run scripts at the system level. Something like a website where you can schedule a job, the service will run the job with the given inputs, write the outputs to a database, and inform the website of its status. A few questions: How is this typically done?
Do I write to a "JOBS_QUEUED" table in my database and have the service check that? How can I pass a "% done" back to the web page? Is there a name for this type of thing? Not sure if it matters, but I am working with Ruby on Rails right now."} {"_id": "215327", "title": "Dealing with the node.js callback pyramid", "text": "I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation: doStuff(arg1, arg2, function(err, result) { doMoreStuff(arg3, arg4, function(err, result) { doEvenMoreStuff(arg5, arg6, function(err, result) { omgHowDidIGetHere(); }); }); }); The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and on making a single object declared at the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure? function topLevelFunction(globalishObject, callback) { function doMoreStuffImpl(err, result) { doMoreStuff(arg5, arg6, function(err, result) { callback(null, globalishObject); }); } doStuff(arg1, arg2, doMoreStuffImpl); } and so on for several more layers... Or are there frameworks etc. to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?"} {"_id": "100031", "title": "one programmer, many languages -- the name dilemma", "text": "When you work across multiple programming languages, there is a problem you encounter... **_A valid name (identifier) in one language is invalid in another._** For example... `var` `new` `function` `this` are keywords in JavaScript, but you can use them freely in Python. Similarly `list` `dict` `def` can be used in JavaScript without problems. This is very common, and something programmers generally quickly become acquainted with when they program in multiple languages. However, when you're working in collaboration, you have to lay out some rules/guidelines for your team members to ensure consistency and uniformity in the code. With teams, this issue becomes more important than simply remembering what's valid and what's not while you program. So, my question is, what strategies do you adopt... * simply take a union of all the reserved words present in all the languages you use, hand out a list to everybody and abstain from their use? * accept the diversity and take extra pains when "context switching" * adopt an intermediate ground where one language can use the other's, but not vice-versa (Note: I am only talking about Python and JavaScript in this question... but please answer the question more broadly) **-- UPDATE --** Thanks for all the answers. So the general consensus I see emerging is to let programmers use any name regardless of what it means in other languages -- as long as names are descriptive, it doesn't hurt.
He did this regularly, while I would also have to finish my own tasks. I felt that I couldn't express this situation to senior management, as that colleague had their trust. Later, I tried to delay the tasks he assigned to me so that it would reflect poorly on him (as he couldn't finish tasks). Also, client complaints about unresolved issues piled up. Now the company knows about him. Actually, now I'm heading to a new company as a Lead, so I'm completely free of this situation. Have any of you experienced situations like this? What would/did you do?"} {"_id": "120144", "title": "What happened to this type of naming convention?", "text": "I have read so many docs about naming conventions, most recommending both Pascal and Camel naming conventions. Well, I agree to this, it's ok. This might not be pleasing to some, but I am just trying to get your opinion on why you name your objects and classes in a certain way. What happened to this type of naming convention, and/or why is it bad? I want to name a structure, and I prefix it with "struct". My reason is that, with IntelliSense, I see all structures in one place, and anywhere I see the struct prefix, I know it's a "struct": structPerson structPosition Another example is the enum, although I may not prefix it with "enum", but maybe with "enm": enmFruits enmSex Again, my reason is that in IntelliSense, I see all my enumerations in one place. Because .NET has so many built-in data structures, I think this helps me do less searching. Note that I used .NET in this example, but I welcome language agnostic answers."} {"_id": "81197", "title": "What did Alan Kay mean by "assignment" in The Early History of Smalltalk?", "text": "I have been reading The Early History of Smalltalk and there are a few mentions of "assignment" which make me question my understanding of its meaning: > Though OOP came from many motivations, two were central. The large scale one > was to find a better module scheme for complex systems involving hiding of > details, and the small scale one was to find a more flexible version of > assignment, and then to try to eliminate it altogether. (from _1960-66--Early OOP and other formative ideas of the sixties_, Section I) > What I got from Simula was that you could now replace bindings and > assignment with _goals_. The last thing you wanted any programmer to do is > mess with internal state even if presented figuratively. Instead, the > objects should be presented as _site of higher level behaviors more > appropriate for use as dynamic components_. (...) It is unfortunate that > much of what is called "object-oriented programming" today is simply old > style programming with fancier constructs. Many programs are loaded with > "assignment-style" operations now done by more expensive attached > procedures. (from _"Object-oriented" Style_, Section IV) Am I correct in interpreting the intent as being that objects are meant to be façades, and any method (or "message") whose purpose is to set an instance variable on an object (i.e. an "assignment") is defeating the purpose? This interpretation appears to be supported by two later statements in Section IV: > Four techniques used together--persistent state, polymorphism, > instantiation, and methods-as-goals for the object--account for much of the > power. None of these require an "object-oriented language" to be employed-- > ALGOL 68 can almost be turned to this style--and OOPL merely focuses the > designer's mind in a particular fruitful direction.
However, doing > encapsulation right is a commitment not just to abstraction of state, but to > eliminate state oriented metaphors from programming. ...and: > Assignment statements--even abstract ones--express very low-level goals, and > more of them will be needed to get anything done. Generally, we don't want > the programmer to be messing around with state, whether simulated or not. Would it be fair to say that opaque, immutable instances are being encouraged here? Or is it simply _direct_ state changes that are discouraged? For example, if I have a `BankAccount` class, it's OK to have `GetBalance`, `Deposit` and `Withdraw` instance methods/messages; just make sure there isn't a `SetBalance` instance method/message?"} {"_id": "215328", "title": "What are the types of dynamically typed languages' arrays?", "text": "For example, in JavaScript, I can do such things: var arr = [1, "two", /three/, [4]]; There is no way to do such a thing in C! Except by using a `void*`, which is not an efficient/safe way. Is this how implementations do it? Using `void*` everywhere?"} {"_id": "63042", "title": "Recommended Payment Schedule for Freelance Development", "text": "I've been a freelance developer for many years and am setting up a system designed to attract clients through one of my developer websites. If successful, these would be clients I do not know, and some jobs may be very small. When working with clients I know on larger projects, I'm accustomed to doing the work before I get paid. But, here, that seems risky. I'm looking for suggestions on the payment schedule I should require for small projects. I suspect potential clients will not be willing to pay the full amount up front, and so the solution will probably be some sort of compromise."} {"_id": "254382", "title": "Event notification for ::SCardListReaders()", "text": "In a PC/SC (Personal Computer Smart Card) application (MSCAPI USB CCID based), I have the following: 1) Calling ::SCardListReaders() returns SCARD_E_NO_READERS_AVAILABLE (0x8010002E). This call is made after the OS starts fresh after a reboot, from a thread which is part of my custom Windows service. 2) Adding a delay before the ::SCardListReaders() call solves the problem. 3) How can I solve this problem elegantly, without a delay, by waiting for some event to notify me? Since: a) Different machines may require different delay values b) I cannot loop, since the error code is genuine c) I could not find this event as part of the System Event Notification Service or a similar COM interface d) The platform is Windows 7. Any help appreciated."} {"_id": "254383", "title": "How to know whether to create a general system or to hack a solution", "text": "I'm new to coding, learning it since last year actually. One of my worst habits is the following: often I'm trying to create a solution that is too big, too complex and doesn't achieve what needs to be achieved, when a hacky kludge would fit. One last example was the following (see pastebin link below): http://pastebin.com/WzR3zsLn After explaining my issue, one nice person at stackoverflow came up with this solution instead: http://stackoverflow.com/questions/25304170/update-a-field-by-removing-quarter-or-removing-month When should I keep my code simple and when should I create a 'big', general solution? I feel stupid sometimes for building something so big, so awkward, just to solve a simple problem. It did not occur to me that there would be an easier solution. Any tips are welcomed.
Best"} {"_id": "152445", "title": "How can I speed up the process of typing up specifications during a meeting with developers?", "text": "I'm studying source code with other developers, and my job is to type up the specifications as the lead programmer describes what each function does. Are there any tips/tricks to doing this faster? A lot of times I have to ask him to repeat himself, as I didn't hear it well. How can I be a better specification writer?"} {"_id": "152440", "title": "What options are there for splitting UI layout from code logic using a markup language?", "text": "What tools similar to GWT's UIBinder exist in other languages? By this I mean a system where you can define your UI layout in a markup language (preferably HTML+CSS) and attach the functionality to the layout in the code. I'm most interested in anything for Python, but answers in other languages would interest me as well. I'm interested because having a non-programmer work directly on the layout, without needing to touch the code and adjust a bunch of UI toolkit method calls, is very productive. I'm aware of Flex for Flash, but is there anything else out there? What search terms might I use to find such frameworks? I've looked around but I haven't found anything concrete. Edit: I'm not only interested in web frameworks, but desktop applications as well. Edit2: My question appears to be close to a duplicate of http://stackoverflow.com/questions/2962874/what-are-the-open-source-alternatives-to-wpf-xaml"} {"_id": "149120", "title": "How can I satisfy this project requirement?", "text": "I'm part of a two-person team that is building a web-based virtual tour application with specific extra functionality that doesn't exist in current applications. This is a summer school project. The team lead wants this application to be as platform and dependency-agnostic as possible. To do this, we're using Google's Fusion Tables and StreetView APIs to display the images and serve as a database. This way, a client installing this application on their server doesn't have to worry about configuring and maintaining a local database, and all that is necessary on their end is a Google Apps account and the images themselves. My issue is: how does the client make requests for images and data without a server-side language? Most, if not all, of the data will be textual, and can thus be rendered as JSON and parsed as such. However, that leaves the issue of the images themselves. They will be hosted in a to-be-determined location, and I need some way to access them. Using Ajax, I can execute a PHP call to load an image easily, and I already know how to do this. It's been suggested that I use the Java client library to interface with GData services, but I think this is overcomplicating things. My arguments for including PHP as an application requirement are 1. PHP is synonymous with hosting. I've yet to run across a commercial host that doesn't offer PHP 2. I already know how to implement the Ajax calls to the PHP function. 3. As this application has a 6 week deadline, I'd like to stick with what I know rather than learning a whole new methodology His arguments for looking somewhere else are 1. No guarantee that PHP will be loaded on the server to be used. Our school's CMS is based in Java, and PHP is not installed 2. Java is a standard inclusion in server and client configurations 3.
The application should be server and dependency agnostic in the sense that it can be uploaded to a folder and accessed without any configuration. Honestly, if I had the time, it would be an interesting side project to include this type of functionality, especially since I've never done it before and it sounds interesting. But I keep thinking about that 6 week deadline, and as I'm the one that will be doing the majority of the coding, I just don't think I can do it in time without PHP."} {"_id": "212801", "title": "Implementation of a serial communication protocol", "text": "I need to implement a serial protocol to communicate with a device using .NET (C#). This implementation should be a library (.dll) to be used in different projects. I have the datasheet that describes the protocol (baudrate, stop bits, message protocol, commands, etc.) and I know that I need to use the `SerialPort` class from the `System.IO.Ports` namespace. But I have some doubts about how to structure/organize the code. Here are some questions: 1. Should I be concerned about thread management, or should this aspect be managed by whoever is using the library? 2. Can I manage received/sent data using strings, or should I use bytes? 3. Should commands and fixed contents be stored using `enums`, `constants` or something else? Maybe those questions are subjective, but I would like to have some feedback from someone with more experience than me. If someone has some examples, tips, best practices, etc. on this subject, I will be very grateful."} {"_id": "212800", "title": "which directory to initialize git repo", "text": "I'm just getting started with Git. I'll be doing PHP development and was wondering if I should initialize my repo within my server root folder, or in any directory on my hard drive and publish my changes to the server root. Does it really matter, and are there any pros/cons? For now, this is just to keep track of personal projects I'm doing for fun, but any feedback on best practice, personal preference/experience, do's and don'ts will be greatly appreciated."} {"_id": "149124", "title": "What is the benefit of hypermedia (HATEOAS)?", "text": "I don't understand the benefit of HATEOAS for APIs intended for use by programs (as opposed to humans directly browsing your API). Sure, the customer isn't bound to a URL schema, but they are bound to a data schema, which is the same thing in my mind. For example, assume I want to view an item on an order, and let's assume I've discovered or know the order URL already. HATEOAS: order = get(orderURL); item = get(order.itemURL[5]); non-HATEOAS: order = get(orderURL); item = get(getItemURL(order,5)); In the first model I have to know the fact that the order object has an itemURL field. In the second model I have to know how to construct an item URL. In both cases I have to "know" something ahead of time, so what is HATEOAS actually doing for me?"} {"_id": "219853", "title": "Why REST APIs do not follow the Facade design pattern", "text": "In comparing REST [API] structure with an OO model, I see these similarities: Both: * Are data oriented * REST = Resources * OO = Objects * Surround operations around data * REST = surround VERBS (Get, Post, ...) around resources * OO = promote operations around objects by encapsulation However, good OO practices do not always hold on REST APIs, for instance when trying to apply the facade pattern: in REST, you do not have one controller to handle all requests AND you do not hide internal object complexity.
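To make the contrast concrete, here is a small hypothetical sketch (the endpoint names and the put/post helpers are invented for the example, not taken from any real API):

// Facade style: one entry point; the server hides the contact/address composition
function saveContactViaFacade(contact) {
    return post('/api/saveContact', contact); // one call, atomic on the server side
}

// REST style: the client itself addresses each resource
function saveContactViaRest(contact) {
    var requests = [put('/api/contacts/' + contact.id, contact)];
    contact.addresses.forEach(function (address) {
        requests.push(put('/api/contacts/' + contact.id + '/addresses/' + address.id, address));
    });
    return requests; // several requests; atomicity becomes the client's problem
}

With the facade, internal changes stay hidden behind one operation; with REST, every sub-resource is exposed to the client, as the diagrams below illustrate.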
![Simple object relation between 2 concepts](http://i.stack.imgur.com/SyVav.png) ![Analogy between OO and REST](http://i.stack.imgur.com/1CAFF.png) On the contrary, REST promotes publishing all of a resource's relations with other resources, in at least two forms: 1. via resource hierarchy relations (a contact with id 43 is composed of an address 453): `/api/contacts/43/addresses/453` 2. via links in a REST JSON response: > > >> GET /api/contacts/43 > << HTTP Response { > id: 43, ... > addresses: [{ > id: 453, ... > }], > links: [{ > favoriteAddress: { > id: 453 > } > }] > } > ![Basic complexity hidden by objectA](http://i.stack.imgur.com/xwaSC.png) Coming back to OO, the facade design pattern respects `Low Coupling` between an _objectA_ and its '_objectB client_', and `High Cohesion` for this _objectA_ and its internal object composition (_objectC_, _objectD_). With the _objectA_ interface, this allows a developer to limit the impact on _objectB_ of _objectA_'s internal changes (in _objectC_ and _objectD_), as long as the _objectA_ API (operations) is still respected. In REST, the data (resource), the relations (links), and the behavior (verbs) are exploded into different elements and available to the web. Playing with REST, I always have an impact on code changes between my client and server, because I have `High Coupling` between my `Backbone.js` requests and `Low Cohesion` between resources. I never figured out how to let my `Backbone.js` JavaScript application deal with the "_REST resources and features_" **discovery** promoted by REST links. I understand that the WWW is meant to be served by multiple servers, and that the OO elements had to be exploded to be serviced by many hosts out there, but for a simple scenario like "saving" a page showing a contact with its addresses, I end up with: > > GET /api/contacts/43?embed=(addresses) > [save button pressed] > PUT /api/contacts/43 > PUT /api/contacts/43/addresses/453 > which leads me to move the atomic transactional responsibility for the save action onto the browser application (since the two resources can be addressed separately). _With this in mind, if I cannot simplify my development (Facade design patterns not applicable), and if I bring more complexity to my client (handling a transactional atomic save), where is the benefit of being RESTful?_"} {"_id": "247400", "title": "What is the simplest archive file format to aim for when writing collections of files?", "text": "I've created a PHP-based document management system and hosted it on my Raspberry Pi. I created a "backup" function that zips together all the documents, but it takes too long due to the hardware constraints of the Pi, as well as the fact that compression cannot be disabled when creating zip files in PHP. This has led me to think that perhaps I should just devise some sort of binary file format that allows multiple files to be stored in it, and dump all my uncompressed documents into it when the user asks for backups. Oddly, I'm not necessarily concerned about the lack of possible "unarchivers". Which file format is the easiest to implement, whether a standard format or a proposed new format? Is there any other alternative for solving my problem that does not require implementing a custom archiver? Note that I wish to avoid the following: * Shell commands (not portable). * Installing third party dependencies (may not work on a third party hosted system).
* Rewriting my system in a different language."} {"_id": "21631", "title": "Should my colleagues review each other's code from the source control system?", "text": "So here's my story: one of my colleagues makes a habit of reviewing all the code hosted in the revision control system. I'm not speaking about adequate review of changes in the parts he is responsible for. He watches the code file by file, line by line - every new file and every modified one. I feel like I'm being spied on! My feeling is that if code has already been committed to the version control system, you should at least trust it as workable. My question is: maybe I'm just too paranoid, and the practice of reviewing each other's code is good? P.S: We're a team of only three developers, and I fear that if there are more of us, the colleague just won't have time to review all the code we'll write."} {"_id": "49540", "title": "Are there efforts to build a collaboratively edited HTML/JS/DOM reference?", "text": "W3Schools has a reputation of being incomplete, sometimes incorrect, and riddled with advertising; still, when I want to look up some things or link to documentation when answering a SO question, it still is the only handy cross-browser resource. There are other resources like the Mozilla Developer Network, which is doing an increasingly great job documenting JavaScript, and the legendary and great Quirksmode. But they, as brilliant as they are, cover only parts of the areas I am talking about, and provide no community editing and quality control options. Is anybody aware of efforts to create a collaboratively edited, cross-browser HTML/CSS/JavaScript/DOM encyclopedia? If you will, I'm thinking of a challenger to W3Schools like SO was to Experts Exchange."} {"_id": "219858", "title": "Scripts to run Java programs - e.g. Ant", "text": "My team currently use an Ant script to _execute_ various Java programs. Note that I am not asking about managing the build/deployment cycle, for which we are already using Maven (quite happily). For example, we use something like a target per job, where each target invokes Ant's `java` task with the right classpath and the arguments for that job. We originally chose Ant for the following reasons (or rather I didn't choose Ant, it was that way when I joined, but I think these are good reasons for the choice): * it was an easy way of managing dependencies (although we now pick these up from Maven) * it allows us to easily build/maintain dependencies between particular jobs * we can deploy the Ant script, and the Java libraries, and easily run particular jobs - i.e. * we don't have to worry about classpaths (configured in the Ant script) * we don't have to worry about building jars with main classes specified in the manifest * we can use Ant to create / store / manage arguments passed to main() methods * we can plug these Ant scripts into our IDEs and run the jobs from there. Generally speaking, we have been very happy with this. However, an increasing issue is the ant-java-shutdown hook issue - this means that if one of these Java programs fails but doesn't finish, terminating the Ant process from which it was started doesn't terminate the Java process - which then has to be done manually, and is a real bind. Also, I'm conscious of two other (possible) factors: * this is using Ant in a slightly unusual way, which is perhaps not how Ant was intended to be used (or at least, so it seems to me...)? * I'm not really sure, but it seems that fewer and fewer developers/teams are using Ant these days, with a preference for other technologies.
So, to clarify: * I want a flexible approach to execute various Java programs * the approach that we have been using was to do this via Ant scripts, but this had the following problems: * terminating the Ant task does not kill the Java program * this "feels" like Ant is not the right tool for the job * how might we meet the needs specified above? * if not Ant, the solution should not face the termination / shutdown hook issue"} {"_id": "247407", "title": "Is this a good practice or not?", "text": "I have a colleague who has come up with a way of 'genericizing' information from a database so that all his web application's drop-down lists can share the same object in his MVC.NET C# code and Views, which can contain different data depending on what tables it is being used against. We work for a government agency, and we have facilities divided up into areas called "Regions", which contain further subdivided areas called VISNs, which each contain "Sites" or "Facilities". My colleague has developed a very complex security scheme whereby people can be granted access to data based on their permission level, which can be assigned by Region, VISN, Site, or any mixture of the three. It's a nice scheme and very clever. However, he has stored procedures that return lists of Regions, VISNs, and Sites based on a person's User Id, and he is returning "generic" values like `TextFieldID`, `TextField`, and `TextParentID`. My first problem with this is that looking at this data coming out of the database, I would not know what the data is. I feel that fields coming from a query or stored procedure should be descriptive of the data they are delivering. What does everyone else think? The deeper issue for me, however, is that he is taking some of the data and concatenating it in his stored procedure, like this: SELECT DISTINCT t.VisnID, NULL, t.StationID, 'V' + CAST(t.VisnID as varchar) + ': ' + t.Station3N + ': ' + t.StationName, t.Inactive FROM Stations t and sending it back in a "TextField" property, instead of sending back the discrete data separately (`Station3N`, `StationName`) and concatenating it in the View, which would allow for different concatenation depending on what device is accessing the application (perhaps mobile and desktop). His justification is that he can send all his various drop-down data and capture it all, regardless of content, in the same C# object named "LookupValue." public partial class LookupValue : IEquatable<LookupValue> { public LookupValue(string textFieldId, string textField, bool inactive) { TextFieldID = textFieldId; TextField = textField; Inactive = inactive; } public LookupValue(string textParentId, string textFieldId, string textField, bool inactive) { TextParentID = textParentId; TextFieldID = textFieldId; TextField = textField; Inactive = inactive; } public LookupValue(string textParentId, string textSelfParentId, string textFieldId, string textField, bool inactive) { TextParentID = textParentId; TextSelfParentID = textSelfParentId; TextFieldID = textFieldId; TextField = textField; Inactive = inactive; } /// <summary> /// Returns a custom string identifier if the Inactive property is true. /// </summary> public string GetInactiveString() { string value = ""; if (Inactive) { value = "[I]"; } return value; } /// <summary> /// Returns a custom text label that concatenates the TextField property with the custom string identifier of the Inactive property.
/// public string GetDisplayNameWithInactiveString { get { return TextField + \" \" + GetInactiveString(); } } public bool Equals(LookupValue other) { //Check whether the compared object is null. if (Object.ReferenceEquals(other, null)) return false; //Check whether the compared object references the same data. if (Object.ReferenceEquals(this, other)) return true; //Check whether the products' properties are equal. return TextFieldID.Equals(other.TextFieldID) && TextField.Equals(other.TextField); } // If Equals() returns true for a pair of objects // then GetHashCode() must return the same value for these objects. public override int GetHashCode() { //Get hash code for the TextFieldID field if it is not null. int hashTextFieldId = TextFieldID == null ? 0 : TextFieldID.GetHashCode(); //Get hash code for the Code field. int hashTextField = TextField.GetHashCode(); //Calculate the hash code for the product. return hashTextFieldId ^ hashTextField; } } He believes the re-usability of this object is worth the violation of Separation of Concerns and possible future difficulties in handling more than one display variation for a drop-down. I should point out that this object is contained in his Data project namespace (our projects are separated into Data, Web, Services, etc.) instead of the Web project namespace and he also returns this object to the Web layer via his Repository queries which call the stored procedures that I described earlier, which is gross violation of Separation of Concerns in my opinion. I am just looking for some confirmation from other programmers that this is in fact a bad practice, and also looking for ideas I can present him on better ways to do what he is attempting. I have my own ideas, but again I'm just looking for other's ideas so I have more options to present to him. \\--Edit -- I forgot to mention that we are using Entity Framework as an ORM, and his Repository classes in his DAL are dumping the data from the stored procs into this LookupValue object."} {"_id": "162349", "title": "Should I prefer instance methods over class methods in Ruby?", "text": "I'm working on a rails application, and I've been pulling functionality out of my rails code and into pure ruby classes in lib/. I've found myself often writing classes like this: class MailchimpIntegration def subscribe_email(email, fname, lname) Gibbon.list_subscribe(:id => NEWSLETTER_LIST_ID, :email_address => email, :merge_vars => {'fname' => fname, 'lname' => lname }, :email_type => \"html\", :double_optin => false, :send_welcome => false) end def unsubscribe_email(email) Gibbon.list_unsubscribe(:id => NEWSLETTER_LIST_ID, :email_address => email) end def change_details(old_email, email, fname, lname) Gibbon.list_update_member(:id => NEWSLETTER_LIST_ID, :email_address => old_email, :merge_vars => {'email' => email, 'fname' => fname, 'lname' => lname }) end def get_email_info(email) Gibbon.list_member_info(:id => NEWSLETTER_LIST_ID, :email_address => [email])[\"data\"].first end end My question is: Should I change these methods to be class methods? It seems reasonable to do, as I'll probably end up calling these by just newing up a MailchimpIntegration class each time. However, I generally prefer to have instance methods as they can be more easily stubbed etc, although this seems to be less of an issue in ruby. 
I have several classes like this in my system, so I'd be keen to see what people think about this."} {"_id": "69195", "title": "The role of the variable type (floating point or fixed point) in program performance", "text": "I know that changing the variable type changes the speed of the program. I want to understand the other effects of this change on software performance. If we use a floating-point variable instead of a fixed-point variable, what will change in terms of memory consumption and program speed?"} {"_id": "154291", "title": "How to design highly scalable web services in Java?", "text": "I am creating some Web Services that would have 2000 concurrent users. The services are offered for free and are hence expected to get a large user base. In the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, like - http://stackoverflow.com/questions/2567254/building-highly-scalable-web-services However, my requirements differ from the question above. For example - My application does not have a user interface, so images, CSS, and JavaScript are not an issue. It is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately. This is my project setup - 1. REST-based Web services using Apache CXF 2. Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tuning) 3. Tomcat 6.0 4. MySQL 5.5 What are the best practices to abide by in order to make a Java-based application scalable?"} {"_id": "69193", "title": "Learning about a new technology", "text": "I have approx 9 years of experience in the IT field. I've mainly worked on web applications and am currently aiming to become a technical architect for J2EE applications. I usually face this challenge: in order to learn a new technology in depth, I usually refer to a book. I was recently going through 'Java Persistence with Hibernate' to get in-depth knowledge of Hibernate. It usually takes me a month to finish reading a book, doing some R & D to learn how the APIs work, and writing some sample programs from the book. I should mention that my reading is mostly done in spare time outside my regular project work. At the end of this exercise I get a decent idea about the technology and how it works (its uses in applications etc...). However, after a couple of months I start forgetting what I read in my previous book (or about the previous technology), mainly because the projects I work on do not use those technologies; if you do not practice, you forget. For example: I recently read about FTL, but on my project at the same time we were using plain JSPs, so I did not get a chance to apply my FTL knowledge there, and I forgot most of what I read about FTL. On one hand, I love to know more about the technologies around us, and at the same time I want to keep that knowledge with me so that I can use it in the future when needed. What am I doing wrong here? I sometimes think that: 1. I should spend more time on one technology before switching to the next, so that I can remember most of it. 2. My approach towards learning about new things this way is wrong. 3. Learning through books might not be a good approach. 4. I need to do something extra to keep my learning up to date (but frankly, what is that extra; I don't know).
I need some help here."} {"_id": "162340", "title": "When should I use StringBuilder or StringBuffer?", "text": "In a production web application, my fellow programmers used StringBuffer everywhere. Now I am taking care of application development and corrections. After reading about StringBuilder and StringBuffer, I have decided to replace all the StringBuffer code with StringBuilder, because we don't need thread safety in our data beans. For example (in each data bean I can see the use of StringBuffer):

    @Override
    public String toString() {
        StringBuffer sb = new StringBuffer(); // to be replaced with StringBuilder
        sb.append(\" ABCD : \").append(abcd);
        sb.append(\", EFGH : \").append(efgh);
        sb.append(\", IJKL : \").append(ijkl);
        return sb.toString();
    }

We create separate data beans for each session/request. A session is used by a single user; no other user can access it. Should I consider other points before migrating? If there is a single thread (no waiting threads/no new thread will be looking for the object lock), it performs equally with either StringBuffer or StringBuilder. I know that in the case of StringBuffer it takes time to take the object lock, but I want to know if there is any performance difference between them apart from the hold/release of the object lock."} {"_id": "162341", "title": "What is the name of the method to find an index in an array, in Objective-C?", "text": "I have the following method to find the index of a book object that has a given book name:

    +(NSInteger)indexXXX:(NSString *)bookName XXX:(NSArray *)books {
        for (NSInteger i = 0; i < books.count; i++) {
            NSDictionary *book = [books objectAtIndex:i];
            if ([[book objectForKey:@\"bookName\"] isEqualToString:bookName]) {
                return i;
            }
        }
        return -1;
    }

But I don't know what name I should use for this method. Could you tell me some method names commonly used for this kind of method?"} {"_id": "203884", "title": "application logic, business logic, models, controllers - where to put the application's brains?", "text": "I'm trying to wrap my head around models, views, and controllers, but I feel as though the more I read, the more conflicting information I seem to encounter. I guess the general goal is--as far as I've seen--to shoot for fat models and thin controllers, but I'm still a bit baffled by where to put what. Can I give an example from something I'm working on? This is just a personal project that I'm messing about with in PHP for fun--a mini-game of sorts: adding timed tasks to an itinerary and collecting rewards when the task completes. So a class kind of like this exists:

    class ItineraryEntry {
        public $user_id;
        public $task_id;
        public $completion_time;
    }

And another class like this:

    class Task {
        public $task_id;
        public $description;
        public $duration; //in seconds
    }

So in the user interface, the user sees a task description and clicks something like \"Add to Itinerary\". Now if I'm understanding correctly, the controller needs to pick up that request and do something with it. But...how much? So for example, I need to figure out when the task will complete. Keeping it simple, that's merely time() + the duration of the task. So does the controller do that calculation and then give it to the model to validate, or does the model just handle all of it? And is the controller really just a glorified router of data? Then the second part is what happens when an itinerary task completes. So from the user interface, a task will show as completed if the completion time is in the past, and the user will have to click something like \"Claim Rewards\".
Let's just say it's...I don't know...XP (i.e., experience points)--something that gets updated on the user's account. And let's say that the amount of XP is calculated in some way based upon the difficulty of the task, time required, etc. So where would I put that calculation? Is that inside the model class for my user object, or do I have a controller that figures all of that out and then sends it to the user object? In the model for my itinerary entry? And the reason I ask this specifically is because any kind of Claim Rewards code would have to do a couple of things at least: 1. Actually claim the reward and update the user's account 2. Clean up the expired task from the database (or mark it completed or something) So it would be interacting both with the user and the itinerary entry. A little specific guidance would be really helpful. My brain is starting to hurt from reading all of these explanations that are kind of like what I need to know, but not totally. So thanks in advance!"} {"_id": "46844", "title": "What should I call the process of converting an object to a string?", "text": "We are having a game of 'semantic football' in the office over this matter: I am writing a method for an object which will represent the object as a string. That string should be such that when typed (more likely, cut and pasted) into the interpreter window (I will keep the language name out of this for now), it will produce an object which is, for our purposes, identical to the one upon which the method was called. There is a spirited discussion over the 'best' name for this method. The terms **pickle**, **serialize**, **deflate**, _etc._ have been proposed. However, it seems that those terms assume some process for the de-pickling (unserialization, _etc._) that is not necessarily the language interpreter itself. That is, they do not specifically refer to the case where strings of valid code are produced. This is closer to a **quine**, but we are reproducing the object, not the code, so this is not quite right. Any suggestions?"} {"_id": "135412", "title": "How best do you represent a bi-directional sync in a REST API?", "text": "Assuming a system where there's a Web Application with a resource, and a reference to a remote application with another similar resource, how do you represent a bi-directional sync action which synchronizes the 'local' resource with the 'remote' resource? Example: I have an API that represents a todo list. GET/POST/PUT/DELETE /todos/, etc. That API can reference remote TODO services. GET/POST/PUT/DELETE /todo_services/, etc. I can manipulate todos from the remote service through my API as a proxy via GET/POST/PUT/DELETE /todo_services/abc123/, etc. I want the ability to do a bi-directional sync between a local set of todos and the remote set of TODOs. In an RPC sort of way, one could do POST /todo_services/abc123/sync/ But, in the \"verbs are bad\" school of thought, is there a better way to represent this action?"} {"_id": "135413", "title": "Should a view and a model communicate or not?", "text": "According to the Wikipedia page for the MVC architecture, the view is free to be notified by the model, and is also free to query the model about its current state. However, according to Paul Hegarty's course on iOS 5 at Stanford, lecture 1, page 18, all interaction must go through the controller, with Model and View never supposed to know about each other.
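To make the two readings concrete, here is a minimal sketch (Python, with purely illustrative names) of the Wikipedia-style variant, where the view subscribes directly to the model; under the stricter reading, a controller would do the subscribing and forwarding instead:

    # Wikipedia-style reading: the model notifies its observers directly.
    class Model:
        def __init__(self):
            self._observers = []
            self._value = 0

        def subscribe(self, callback):
            self._observers.append(callback)

        def set_value(self, value):
            self._value = value
            for notify in self._observers:   # model -> view, no controller involved
                notify(value)

    class View:
        def render(self, value):
            print('displaying', value)

    model, view = Model(), View()
    model.subscribe(view.render)   # the stricter reading would route this
    model.set_value(42)            # (and the update itself) through a controller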
It is not clear to me whether Hegarty's statement is intended as a simplification for the course, but I am tempted to say that he intends the design as such. How do you explain these two opposite points of view?"} {"_id": "49896", "title": "Is a lambda function the same as an Actor? How can it be explained?", "text": "Some time back I read (in a book) that conceptually a lambda function is similar to an Actor. However, I am not clear on the concept, and would appreciate any insight or explanation of this. If they are indeed similar, how? If not, is there any relationship between the concepts?"} {"_id": "125462", "title": "What's the best way to learn the MS Business Intelligence stack?", "text": "What's the best/easiest way to get into the MS BI stack on my own (preferably with a [Kindle] book)? Specifically, I'd like to learn more about SSIS, SSRS, and SSAS, in that order. I've tried looking at books on Amazon, but I really have no idea. Some background: I just graduated with my bachelor's in computer science, and am going back for my master's in comp sci. I work right now as a C# developer, with some web and database background. I don't necessarily want to become a BI master, but I'd like to gain an intermediate-level understanding of the platform."} {"_id": "152191", "title": "What does path finding in internet routing do and how is it different from A*?", "text": "**Note:** If you don't understand this question then feel free to ask for clarification in the comments instead of voting down; it might be that this question needs some more work at the moment. I've been directed here from the Stack Exchange chat room Root Access because my question didn't fit on Super User. * * * In many respects, path finding algorithms like A* are very similar to internet routing. For example: > A node in an A* path finding system can search for a path through edges between other nodes. > > A router that's part of the internet can search for a route through cables between other routers. In the case of A*, open and closed lists are kept by the system as a whole, separately from any individual node, as well as each node being able to temporarily store a state involving several numbers. Routers on the internet seem to have remarkable properties, as I understand it: > They are very performant. > > New nodes can be added at any time that use a free address from a finite (not tree-like) address space. > > It's real routing; like A*, there's never any doubling back, for example. > > Similar IP addresses don't have to be geographically nearby. > > The network reacts quickly to changes to the network's shape, for example if a line is down. > > Routers share information, and it takes time for new IPs to be registered everywhere, but presumably every router doesn't have to store a list of all the addresses each of its directions leads most directly to. I'm looking for a basic, general, high-level description of the algorithm's workings from the point of view of an individual router. **Does anyone have one?** I presume public internet routers don't use A*, as the overheads would be too large and it would scale too poorly. I also presume there is a single method worldwide, because it seems as if it must involve a lot of transferring data to update and communicate a reasonable amount of state between neighboring routers.
For example, perhaps the amount of data that needs to be stored in each router scales logarithmically with the number of routers that exist worldwide; the detail and reliability of the routing is reduced over increasing distances; there is increasing backtracking involved in parts of the network that are less geographically uniform; or maybe each router really does perform an A*-style search, temporarily maintaining open and closed lists when a packet arrives."} {"_id": "152195", "title": "Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?", "text": "I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a bit of web development technology, a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these...things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET framework, CLR, F#, etc., etc. And I've read their Wikipedia articles, in depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all _is_. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What _is_ .NET? Do some software developers dedicate their entire career just to that side of things? Why would I want to get into it, and what advantage does...whatever it is...have over all the other technologies there are? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion."} {"_id": "46849", "title": "How to legally protect yourself from malicious and/or dumb users?", "text": "When building a public-facing website that allows visitors to post comments, link to media and/or upload media (e.g. audio, video, images)... what should I do to protect myself legally in case such visitors link to or upload content that they shouldn't (e.g. adult-oriented media, copyrighted images and/or media owned by someone else, etc.)? Some questions that come to mind in particular: 1. Should I allow folks to post anonymously? 2. If I make visitors agree to some kind of statement whereby they take full responsibility for what they upload, what should the copy of such a statement be? Please provide steps one should take that are as specific as possible."} {"_id": "46848", "title": "Qt's future in the light of the Nokia-Microsoft partnership", "text": "In case you missed it, a lot has happened in the last two days that could potentially impact the Qt framework, for the worse. :-( It will impact the mobile sector in several, probably not yet fully acknowledged, ways, for sure. It started yesterday with Nokia CEO Stephen Elop's internal letter depicting Nokia sitting on a burning platform and the need for a big and aggressive shift in business. A day later, at the Nokia World conference, Nokia announced the partnership with Microsoft, which at the moment amounts to Nokia adopting the Windows Phone 7 platform and development environment, dumping Symbian along the road and tagging MeeGo as **R&D** (a pretty dangerous keyword if you ask me); as for the Maemo/N900 series, I guess it's bye-bye for good. I know what you're thinking, but no, Qt is not going to be ported to the Windows Phone platform.
And I'm also scared about this. You can watch the Elop & Ballmer joint press release here. Now, after reading this huge thread on the Qt-interest mailing list, I can't help but wonder: what is the future of Qt at Nokia, now that they aren't focused (at all?) on Qt anymore (remember the **full** focus switch to Qt as the main development framework for all Nokia products (including Symbian, yes) back in October?). I love Qt; in my opinion it is the only true cross-platform application development framework, and one of the few to make C++ development a joy (to the extent possible), and good things have happened to the framework, and it gained considerable momentum, while under Nokia. Thus I am wondering, what are the chances that Qt might suffer a slow death at Nokia after this? Yes, I know about KDE.org and the fact that Qt is easily spawnable, but I still feel uneasy. It also must be horrible for all of the effort, whether by Nokia employees or third parties, that has gone into Symbian and all of the Ovi Store Symbian/Qt content and business and, why not, Maemo/MeeGo. There are also massive layoffs planned; I suspect Symbian techs and Qt? I'd love to hear your input on this. Is Qt future-proof? LE: The question has been gradually revised, improved, and better referenced, so you might want to give it a quick re-read to see what you might have missed."} {"_id": "241257", "title": "Should a package manager modify your .bashrc file?", "text": "I am writing a package for something that requires an environment variable to be set in order to execute properly. Should the install step of a package manager modify a user's environment, or simply prompt the user to do so themselves? My intuition would be the latter, but I can see arguments for the former."} {"_id": "149698", "title": "Provide a URL to a CouchDB document attachment without giving the username/password?", "text": "I posted this question on DBA, but it got closed and was never reopened even after I rewrote the whole thing to be more specific. I think it's more appropriate for Programmers anyway :) ## Background Information I have a CouchDB server (Server B) that is accessed by another server running PHP (Server A). My web app (client-side JavaScript) makes a request for a file to Server A. Server A sends requests to Server B with the access credentials (username:password). Server B returns JSON objects that correspond to matched files, then Server A returns that data to the client. The response includes some metadata about the file and a URL to the file on Server B. The actual file data itself is not included in the response. ## The Problem The URL to the document attachment included in the JSON response from Server B includes the access credentials (username:password). Server A responds to the client-side request by returning those JSON objects. The client now has this information: https://username:password@host:port/dbname/path/to/file.whatever Currently all data contained within Server B resides in one database instance. If the client has access to the username:password for the CouchDB, then all data could be queried with those access credentials. Like so: https://username:password@host:port/dbname/_all_docs ## Server-side considerations CouchDB on Cloudant ## Client-side considerations The JavaScript running on the client requires a URL string to the file. The files that are being referenced are KML layers for Google Maps. A new KML layer is created using this constructor.
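In other words, the client would only ever need a URL on Server A; here is a minimal sketch of such a pass-through (Python 3 purely for illustration; the PHP/cURL version described under \"What I did\" below is what was actually used), in which Server A fetches the attachment with its own credentials so they never reach the client:

    # Hypothetical pass-through: Server A downloads the attachment from CouchDB
    # with its own credentials and can then re-serve the bytes to the client.
    import base64
    import urllib.request

    def fetch_attachment(host, port, dbname, doc_id, filename, user, password):
        url = 'https://%s:%s/%s/%s/%s' % (host, port, dbname, doc_id, filename)
        req = urllib.request.Request(url)
        token = base64.b64encode(('%s:%s' % (user, password)).encode()).decode()
        req.add_header('Authorization', 'Basic ' + token)
        with urllib.request.urlopen(req) as resp:
            return resp.read()   # the credentials stay on Server A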
## What I am looking for Essentially, my problem would be solved if I could provide a link to a CouchDB document attachment without giving the username/password: https://host:port/path/to/file.whatever ## What I did I had to enable cURL for PHP on Server A:

    sudo apt-get install curl libcurl3 libcurl3-dev php5-curl
    sudo /etc/init.d/apache2 restart

I got a bit of code from David Walsh's blog. My JavaScript client code gets a list of JSON file metadata from Server A. This data includes unique DB IDs and file names (no passwords or API keys). I then pass the document ID and filename to another service on Server A, which echoes the value of get_data('https://username:password@ServerB:port/dbname/path/to/file.whatever'); I can now serve out the files to the client without revealing the username or password for CouchDB on Server B."} {"_id": "158775", "title": "Eager Loading Constraints", "text": "The thing is, eager loading some object with all of its associations reduces the queries made to the database, but the query time is greatly increased. The question is: does this increased query time pay off in the end because the database is not hit so often? It seems to me that you are avoiding one problem and falling into another one. Obs.: I would add an eager-loading tag if I had enough reputation. Someone could create it and edit this observation."} {"_id": "190311", "title": "Why is (position < size) such a prevalent pattern in conditionals?", "text": "In a condition statement (IF), everyone uses `(position < size)`, but why? Is it only convention, or is there a good reason for it? Example:

    if (pos < array.length) {
        // do something with array[pos];
    }

Why not:

    if (array.length > pos) {
        // do something with array[pos];
    }

?"} {"_id": "149691", "title": "Teaching Programming Concepts Without a Specific Language", "text": "I'm teaming up with a guy who has no programming experience. We're using a tool to make our game (RPG Maker) that has an event-based system that allows you to do pretty much everything you want. It has a GUI and a simple text editor for events. My friend has no programming experience. None. I need him to understand basic stuff, like control flow (if/else, do/while), variables/constants, etc. What can I use to teach him this, bearing in mind that I don't care about specific language syntax? Ideally, I'm looking for a \"programming\" book that talks about these ideas (perhaps visually) and doesn't care much about code. Does something like this exist? My \"google-fu\" failed me."} {"_id": "158779", "title": "How have languages influenced CPU design?", "text": "We are often told that the hardware doesn't care what language a program is written in, as it only sees the compiled binary code; however, this is not the whole truth. For example, consider the humble Z80; its extensions to the 8080 instruction set include instructions like CPIR, which is useful for scanning C-style (NULL-terminated) strings, e.g. to perform `strlen()`. The designers must have identified that running C programs (as opposed to Pascal, where the length of a string is in the header) was something that their design was likely to be used for. Another classic example is the Lisp Machine. What other examples are there? E.g. instructions, number and type of registers, addressing modes, that make a particular processor favour the conventions of a particular language?
I am particularly interested in revisions of the same family."} {"_id": "160277", "title": "What software models are appropriate for daily builds and continuous integration?", "text": "On reading the Joel Test and about daily builds, a discussion with a few tech-lead friends of mine at various companies revealed that they never did daily builds or continuous integration, because according to them: 1. Daily builds are meant for projects following Agile practices. They do not work for the waterfall model. 2. Continuous integration requires automated testing, which their OpenGL/OGRE/OSG type of graphics project couldn't use, because apparently automated testing is impractical for projects where the graphics part is under (re)development. 3. One of them believes that it's not necessary to have configuration management. It's enough to follow the philosophy of it, by defining small 2-3 day tasks, performing all activities of the development cycle for each task (code, code review, unit test, integration), delivering to the integration stream, and building and generating the setup. 4. Even if they created scripts for the entire build and burn-to-CD process, creating automated tests was a problem, because in any program you create, you can't always know what tests you'd have to run, scripting a test case takes up too much time, and automated testing tools may not always support the specific kind of tests you might want to make. Are daily builds and continuous integration really practical for non-agile projects? Are they meant only for test-driven development? And is it really sufficient to follow the point 3 (above) kind of philosophy? I was talking to them about the build-in-one-step kind of scripted automation that Joel talks about, but they weren't willing to buy the idea. P.S.: I've been through the questions on this site about daily builds etc., but I believe this question is different. **EDIT: Point 3 should've started with \"One of them believes it isn't necessary to integrate configuration management with development tools\"**"} {"_id": "162698", "title": "Why are cryptic short identifiers still so common in low-level programming?", "text": "There used to be _very_ good reasons for keeping instruction/register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming. Why is this? Is it just because old habits are hard to break, or are there better reasons? For example: * Atmel ATMEGA32U2 (2010?): `TIFR1` (instead of `TimerCounter1InterruptFlag`), `ICR1H` (instead of `InputCapture1High`), `DDRB` (instead of `DataDirectionPortB`), etc. * .NET CLR instruction set (2002): `bge.s` (instead of `branch-if-greater-or-equal.short`), etc. Aren't the longer, non-cryptic names easier to work with? * * * When answering and voting, please consider the following. Many of the possible explanations suggested here apply **equally** to high-level programming, and yet the consensus, by and large, is to use non-cryptic names consisting of a word or two (commonly understood acronyms excluded). Also, if your main argument is about **physical space on a paper diagram**, please consider that this absolutely does not apply to assembly language or CIL; plus, I would appreciate it if you could show me a diagram where terse names fit but readable ones make the diagram worse. From personal experience at a fabless semiconductor company, readable names fit just fine, and result in more readable diagrams.
What is the _core thing_ that is different about low-level programming, as opposed to high-level languages, **that makes terse cryptic names desirable in low-level but not high-level programming?**"} {"_id": "162699", "title": "Boundary conditions for testing", "text": "OK, so in a programming test I was given the following question: > Question 1 (1 mark) > > Spot the potential bug in this section of code:

    void Class::Update( float dt )
    {
        totalTime += dt;
        if( totalTime == 3.0f )
        {
            // Do state change
            m_State++;
        }
    }

The multiple-choice answers for this question were: > a) It has a constant floating point number where it should have a named constant variable > > b) It may not change state with only an equality test > > c) You don't know what state you are changing to > > d) The class is named poorly I wrongly answered this with answer C. I eventually received feedback on the answers, and the feedback for this question was: > Correct answer is a. This is about understanding correct boundary conditions for tests. The other answers are arguably valid points, but do not indicate a potential bug in the code. My question here is, what does this have to do with boundary conditions? My understanding of boundary conditions is checking that a value is within a certain range, which isn't the case here. Upon looking over the question, in my opinion B should be the correct answer, considering the accuracy issues of using floating point values."} {"_id": "53755", "title": "Should we put specification documents in a source control system such as SVN?", "text": "Today, one of my colleagues and I had a debate about \"Should we put the specification documents in a source control system such as SVN?\". In my opinion, we should. Everything related to developing a project should be carefully controlled with a source control system. Is this a wrong concept in the software development process?"} {"_id": "162696", "title": "Why don't interviewers ask the applicant to read some code?", "text": "I have had a dozen interviews in my life (I'm about to graduate) and I wonder why I was only once asked to read and explain some code. Roughly 90% of jobs are mostly about maintaining existing systems. IMO, the ability to read someone else's code is an important skill. Why don't interviewers check it?* *Among my friends, I am the only one who was asked to review some code."} {"_id": "163569", "title": "How to manage 2 DAO methods in a single transaction?", "text": "In an interview someone asked me: how do we manage 2 transactional/DAO methods in a single transaction? Desired capabilities: 1. If any one of them fails, we need to roll back both methods. 2. Both of the methods can also be called separately, attached to a single transaction. 3. The management should be at the DAO layer, not the service layer. I think the question relates to Spring transaction management."} {"_id": "160185", "title": "Can I legally and ethically take an open-source project with community contributions to closed-source?", "text": "Let's say I start and develop some project under an open-source license, and accept some community contributions. How shaky is the ground I stand on if I decide to take the project commercial and closed-source (or split-license)? This question doesn't directly address the issue of a project with community contributions, which feels like different territory, at least as far as ethics are concerned.
Legally, this might be iffy as well, because I'm not sure whether contributions fall under my copyright, or whether the contributor holds the copyright to the part of the project he added. Am I safe (ethically and legally) as long as I'm up front about the possibility that I may take the project commercial in the future?"} {"_id": "137006", "title": "Starting with Ruby on Rails? I see a lot of criticism everywhere. Is it okay to start with Rails now in 2012?", "text": "I have worked in C before and have never tried my hand at any web application framework. After some convincing from one of my friends, I thought of giving Rails a go. Before starting to work on Rails I have just started learning a little Ruby, and hence I am curious to know what is being said about the framework. In the last few days, I have seen a lot of criticism of Rails as a framework. Should I try my hand at Rails, or should I try something else? Is Python with Django the way to go? Is Node.js the way to go? Is there some other Ruby-based framework which can work for me? Please suggest with an open mind. I want to be absolutely clear in my mind before I decide. [Note: The question was closed on Stack Overflow and it was suggested that this is a better place]"} {"_id": "163563", "title": "Most appropriate OSS license for infrastructure code", "text": "I'm looking into potentially releasing some infrastructure code (related to automated builds and deployments) as OSS, and I'm curious about how the various OSS licenses affect it. Specifically, the LGPL prevents the code _itself_ (in part or whole) being modified into a commercial product (which is what I'm after), but allows it to be \"linked to\" in the creation of commercial products (also OK). How does the \"linked to\" clause relate to infrastructure code, which is not deployed with the product itself? Would the application still be required to provide \"appropriate legal notices\" (which I'm not fussed over)? Would I be better off looking at the Eclipse Public License?"} {"_id": "188440", "title": "Is it possible to half-way synchronize JavaScript functions that include async calls?", "text": "I am wondering if there exists a way to half-way synchronize JavaScript functions, where data is requested on the fly. I don't want to make it purely blocking (which I guess is impossible); I just want a nice way to write the overall algorithm in one piece. Let's take the following example:

    function(){
        command1;
        var x1 = something;
        async_function(x1, callback1);
    }

    function callback1(data){
        var x2 = command2(data);
        async_function2(x2, callback2);
    }

    function callback2(data1){
        command3(data1);
    }

I would like to write something like:

    function(){
        command1;
        var x1 = something;
        call(async_function, [x1], callback1);
        var x2 = command2(data);
        call(async_function2, [x2], callback2);
        command3(data1);
    }

* I tried playing around with the `caller` property of functions, but I'm not familiar enough with the execution environment. * I also tried writing a function \"call\" like the above, using the `apply` function to pass in a custom callback which returns the result of the async function to its caller. What bugs me is that while programming/debugging, I have to follow the code from one function into another into another (just like the movie Inception). I want one place to write the high-level algorithm without decomposing it into many functions.
For example, if I want to write an A* algorithm, but the \"getNeighbors\" function is asynchronous."} {"_id": "164696", "title": "What's special about July 26th and why is it used in examples for the Expires header so often?", "text": "I've noticed that July 26th (my birthday) is used really often in various PHP examples related to preventing HTTP caching using the `Expires` header, like: http://stackoverflow.com/questions/12398714/cache-issue-with-private-networking-stream http://stackoverflow.com/questions/2833305/how-to-expire-page-in-php-when-user-logout http://expressionengine.com/archived_forums/viewthread/81945/ What's special about that date?"} {"_id": "246753", "title": "Why is mixing plural with singular and camel case with underscores in the CakePHP naming convention better than a simpler convention?", "text": "I have been using CakePHP for over a year and generally I like it, but I struggle to understand the advantages of the complex naming conventions over something simpler. Cake uses plural here and singular there (controllers vs. models and reported controller names) for its names, and uses camel case and underscores too (method names and class variables vs. database tables and view variables). As far as I can see, picking either plural or singular, picking either underscores or camel case, and using just one of each choice everywhere would still yield all of the advertised benefits, but would be simpler for the programmer to use. What are the advantages of the CakePHP naming convention which could not be achieved with the above system? Please demonstrate, for any given assertion, why a simpler system wouldn't work."} {"_id": "164695", "title": "Caching factory design", "text": "I have a factory `class XFactory` that creates objects of `class X`. Instances of `X` are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of `class X` are immutable, so the following code seems reasonable:

    # module xfactory.py
    import x

    class XFactory:
        _registry = {}

        def get_x(self, arg1, arg2, use_cache=True):
            hash_id = hash((arg1, arg2))
            if use_cache and hash_id in self._registry:
                return self._registry[hash_id]
            obj = x.X(arg1, arg2)
            self._registry[hash_id] = obj
            return obj

    # module x.py
    class X:
        # ...

Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache `X` objects to disk. I'll use `pickle` for that purpose, and store as values in the `_registry` the filenames of the pickled objects instead of references to the objects. Of course, `_registry` itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain `hash_id`). Except now the validity of the cached object depends not only on the parameters passed to `get_x()`, but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies `x.py` or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the `hash_id` more robust by calculating the hash of a tuple that contains the arguments `arg1` and `arg2`, as well as the filename and last-modified date for `x.py` and every module and data file that it (recursively) depends on.
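A minimal sketch of that idea (assuming the list of dependency files is known up front; discovering it automatically is exactly the hard part discussed below):

    # Sketch: mix the arguments and the last-modified dates of known
    # dependencies into the cache key.
    import hashlib
    import os

    def robust_hash_id(arg1, arg2, dependency_paths=('x.py',)):
        h = hashlib.sha256(repr((arg1, arg2)).encode())
        for path in sorted(dependency_paths):
            h.update(path.encode())
            h.update(str(os.path.getmtime(path)).encode())
        return h.hexdigest()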
To help delete cache files that won't ever be useful again, I'd add to the `_registry` the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in `x.py` and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and invalidate the cache only when there is an obvious mismatch. This means that `class X` would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with `arg1` and `arg2` and becomes part of the hash keys stored in `_registry`. Since developers may forget to update the validation identifier, or may not realize that they invalidated the existing cache, it would seem better to add another validation mechanism: `class X` can have a method that returns all the known \"traits\" of `X`. For instance, if `X` is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching. EDIT: It may seem that my needs can be served well by a pool pattern. On further investigation, however, it's not the case. I thought I'd list the differences: 1. Can an object be used by multiple clients? * Pool: No, each object needs to be checked out and then checked in when no longer needed. The precise mechanism may be complicated. * XFactory: Yes. Objects are immutable, and can be used by infinitely many clients at once. There's never a need to create a second copy of the same object. 2. Does pool size need to be controlled? * Pool: Often, yes. If so, the strategy to do so may be quite complicated. * XFactory: No. An object must be delivered on demand to the client, and if an existing object is unsuitable, a new one needs to be created. 3. Are all objects freely substitutable? * Pool: Yes, the objects are typically freely substitutable (or if not, it's trivial to check which object the client needs). * XFactory: Absolutely not, and it's very hard to find out if a given object can service a given client request. It depends on whether an existing object is available that was created with (a) the same arguments and (b) the same version of the source code. Part (b) cannot be verified by XFactory, so it asks the client to help. The client fulfills this responsibility in two ways. First, the client may increment any of its several designated internal version counters (one per developer). This cannot happen at runtime; only a developer may change these counters, when he believes that a source code change makes existing objects unusable. Second, a client will return some invariants about the objects that it needs, and XFactory will verify that these invariants are not violated before serving the object to the client.
If any of these checks fail, XFactory will create and deliver a new object. 4. Does performance impact need careful analysis? * Pool: Yes, in some cases a pool actually hurts performance, if the overhead of object management is greater than the overhead of object creation/destruction. * XFactory: No. The computation costs of the objects in question are known to be very high, and loading them from memory or from disk is without doubt superior to recalculating them from scratch. 5. When are objects destroyed? * Pool: When the pool is shut down. Perhaps it might also destroy objects if told to (partially) release resources, or if certain objects have not been used for a while. * XFactory: Whenever an object was created with a version of the source code that is no longer current, as evidenced by either an invariant violation or a counter mismatch. The process of locating and destroying such objects at the right time is quite complicated. In addition, time-based invalidation of all objects may be implemented to reduce the accumulated risk of using invalid objects. Since XFactory is never certain that it is the sole owner of an object, such invalidation is best achieved by an additional \"version counter\" in the client objects, which is incremented programmatically on a periodic basis, rather than by a developer. 6. What special considerations exist for a multithreaded environment? * Pool: Has to avoid collisions in object check-out / check-in (you don't want to check out an object to two clients). * XFactory: Has to avoid collisions in object creation (you don't want to create two objects based on two identical requests). 7. What needs to be done if a client does not release an object? * Pool: It may want to make the object available to others after waiting for some time. * XFactory: Not applicable. Clients do not notify XFactory about when they are done with the object. 8. Do objects need to be modified? * Pool: May have to be reset to the default state before being reused. * XFactory: No, the objects are immutable. 9. Are there any special considerations related to the persistence of objects? * Pool: Typically not. A pool is about saving the cost of object creation, so all the objects are kept in memory (reading from disk would defeat the purpose). * XFactory: Yes. XFactory is about saving the cost of performing complex calculations, so storing pre-calculated objects on disk makes sense. As a result, XFactory needs to deal with the typical problems of persistent storage; e.g. at initialization, it needs to connect to persistent storage, obtain from it the metadata about which objects are currently available there, and be ready to load them into memory if requested. An object may be in one of three states: \"doesn't exist\", \"exists on disk\", \"exists in memory\". While XFactory is running, the state may change only in one direction (to the right in this sequence). In summary, the pool's complexity is in items 1, 2, 4, 6, and possibly 5, 7, 8. The XFactory complexity is in items 3, 6, 9.
The only overlap is item 6, and it's really not the core function of either the pool or XFactory, but rather a constraint on the design that is common to any pattern that needs to work in a multithreaded environment."} {"_id": "237505", "title": "Why is studying a Lisp interpreter in Lisp so important?", "text": "I have seen many CS curriculums and learning suggestions for new programmers that call for the aspiring programmer to study a Lisp interpreter that is specifically written in Lisp. All these sites say things similar to \"it's an intellectual revelation\", \"it is an enlightenment experience every serious programmer should have,\" or \"it shows you hardware/software relationships,\" and other vague statements, particularly from this article taken from this reputable how-to. The general sentiment of my question is: how does Lisp achieve the above goals, and why Lisp? Why not some other language? I am asking this because I just finished writing a Scheme interpreter in Scheme (taken from SICP http://mitpress.mit.edu/sicp/ ) and now I am writing a Python interpreter in Scheme, and I am struggling to have this legendary epiphany that is supposed to come specifically from the former. I am looking for specific technical details between the two languages that I can exploit in their Scheme interpreters to gain understanding about how programs work. More specifically: > Why is the study of an interpreter that is written in the language it interprets so emphasized - is it merely a great mental exercise to keep the original language and built language straight, or are there specific problems whose solutions can only be found in the nature of the original language? > > How do Lisp interpreters demonstrate good architecture concepts for one's future software design? > > What would I miss if I did this exercise in a different language like C++ or Java? > > What is the _most used_ takeaway or \"mental tool\" from this exercise? I selected the answer I did because I _have_ noticed that I have gained from this exercise more skill in designing parsing tools in my head than from any other single tool, and I would like to find different methods of parsing that may work better for the Scheme interpreter than the Python interpreter."} {"_id": "237509", "title": "Persist AJAX values", "text": "I have a simulator that pulls data from a DB, runs calculations, and returns a JSON result to an AJAX call that renders a table of the results. The calculation procedure is as follows: 1. grab X number of data items that are grouped together and have weights attached to them. 2. use historical data and run a weight distribution algorithm on the group. 3. return as JSON the metrics of each item, the old value, and the calculated new value. 2 questions: 1) Is JavaScript calculation better or worse than (or equal to) PHP calculation for producing the results? 2) If JavaScript calculation is faster than or on par with PHP, then I would assume the AJAX round trip is a small bottleneck. So one possibility would be to pre-load the data and have it calculated on the fly. Now, how would you persist that pre-loaded data efficiently? Would you just load everything into a simple var via AJAX? Thus far it takes approx 55 seconds for the result to return 2000 entries on 14 days' worth of data."} {"_id": "175531", "title": "Unit test: How best to provide an XML input?", "text": "I need to write a unit test which validates the serialization of two attributes of an XML file (size ~30 KB). What is the best way to provide an input for this test?
Here are the options I have considered: 1. Add the file to the project and use a file reader 2. Pass the contents of the XML as a string 3. Create the XML through a program and pass it Which is my best option, and why? If there is another way which you think is better, I would love to hear it."} {"_id": "179293", "title": "Behavior-Driven Development / Use case diagram", "text": "With the growth of Behavior-Driven Development, which imposes acceptance testing, are use case diagrams useful, or do they lead to \"over-documentation\"? Indeed, acceptance tests represent specifications by example, much as use cases do (though in a more generic manner, since they describe cases, not scenarios); aren't they too similar to justify maintaining both at the start of a new project? From this link, one opinion is: > Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are more efficient way to work when compared to UseCases + AcceptanceTests. What do you think about this?"} {"_id": "254655", "title": "Drawing a custom rendered control in Windows - resizing", "text": "Some background: in order to learn GUI programming and drawing in Windows, I'm starting to create my own GUI toolkit in Windows (so this is a didactic exercise; please don't suggest \"use Qt\" or \"use MFC\"). I would like to draw the entire window area from memory DCs with double buffering (thus also handling mouse and keyboard events myself), but I'm now wondering how I should handle resize events from the window: suppose I have a text control and a sidebar and my window gets a resize command ![enter image description here](http://i.stack.imgur.com/YTwfs.png) The sidebar should grow larger and the text control should show more text (it is also larger). The first thing that comes to my mind is to redraw all the widgets on the memory DC and then BitBlt it to the screen. However, some widgets might not have changed at all or might have changed just a bit (the portion _\"Text here, text here, te\"_ hasn't changed at all during the resize). What can be done to exploit this fact and avoid redrawing (even in the memory DC, not the device DC) the parts which haven't changed?"} {"_id": "254651", "title": "C++ GUI application in Linux without a toolkit", "text": "We all know there are a number of toolkits available for GUI applications in C++ on Linux. But for certain reasons I want to create a GUI application without any toolkit. I know this question has been asked earlier, but no proper answer was provided. So please tell me the ways in which I can create a GUI application on Ubuntu 12.04. Please do not suggest a toolkit; that's not an option for me."} {"_id": "203880", "title": "Rewriting C# Formula Calculations in T-SQL", "text": "We have a 3-tier application with a C# client that connects to a C# web service via WCF and requests data from a SQL Server database. One feature in our application is a user-created form app in which our customers set up forms that have fields. The fields refer to values that can be imported, entered by the user, or calculated. The calculations can be thought of as Operation / Value / Timeframe. Each value referenced in a calculation might also be a calculated value. (Care is taken to ensure circular calculations do not exist.)
For example, a user might set up a field that shows Cash Over / Short, computed as the difference between the values Total to Account For and Total Accounted For. These fields might in turn add up sales, credit card payments, cash totals, etc. Changing Grocery Sales in the form UI would trigger a change that recalculates the Total to Account For, and that field's change triggers the Cash Over / Short field to recalculate itself. Complicating the calculations are historical values. Some calculations might be phrased in English like \"Subtract Today's Totalizer Value from Yesterday's Totalizer Value\". Or, someone might simply set a Last Year Sales field to the value of Total Sales exactly 365 days ago. All this calculation is done in the C# user interface. The C# code builds dependency trees and sets up event handlers for when values change. Changing a value triggers an event, and calculations that depend on the changed value are recalculated. We also have a recalculation engine that verifies the totals are correct before making permanent changes (i.e. before drafting someone's account, so the amounts had better be correct). This recalculation engine uses the same C# calculation engine across multiple threads. This incurs the penalties of reading from the database, creating the C# objects, performing calculations, and writing to the database. The timeframe dependencies mean we really need to calculate in some kind of day order. I could use some kind of cursor to move day by day, gather up all the data on each day, figure out the dependencies, join to historical values, but... **My question: how would you approach performing complex user-defined calculations in a set-based manner in SQL?** I don't see how to do this without RBAR operations on each calculated value."} {"_id": "254659", "title": "Multithreaded Pre/Post Functions", "text": "I'm programming an application for an embedded device. We are using an RTOS that supports multithreading. The device is supposed to mimic an older project that was programmed in plain C (without threads). The original project uses a stack, and START / END functions at the beginning and end of each function that can be called throughout runtime. Each time START is called, a constant number (identifying the function) is put on the stack; END removes it from the stack and checks that the top is equal to the called function. If it is different, something really bad has happened and the device will recover/reset. This feature is great for debugging purposes, as it fully displays how functions are called. It would be lovely to have such a feature in the multithreaded version. I implemented START and END as a class in C++ using the constructor and destructor, which makes the handling fairly easy. Running this single-threaded works like a charm. Anything multithreaded will destroy the stack, as the scheduler switches context and writing to one stack would not make sense. The most basic solution in my mind would be having one START/END stack for each thread -> problem: each function has to know which thread it runs on. Is there a better solution to this problem?"} {"_id": "255525", "title": "How to make a script with faster queries?", "text": "# I have a project for sharing useful URLs with some information about each URL. ## How the project works: 1. add a new post (opening a Bootstrap textarea) 2. the user adds content (URL, hashtags, text) 3. the JS extracts the URLs 4. the JS extracts the hashtags 5. the JS sends the first URL to PHP cURL 6. PHP returns the title, description, images, and site name from the title tag and meta tags, and 7.
the user selects an image and shares the post. I have done all of the above, but there are some errors, and there are new steps I want to ask about. ## Errors ### On fetching URL content * some sites don't have good data to fetch (title, description, images) * PHP cURL fetching takes a long time ## The help: * how can I get the right data from any URL? * how can I make the cURL fetch faster? * is there any PHP class that makes this faster and more secure? # New steps I want to make ### 1 - I want to display posts like the Twitter timeline * an "unread new posts" button with the number of new posts, updated in real time; on click, append the new posts * load older posts on scroll down * make it fast # Suggestions Would loading the query results into a JSON file and updating it periodically be faster? If users post the same URL, should I get it from the DB without fetching it again? Will I need to save the fetched data in the DB? I've inserted each post as a JSON array in a DB table, with just the site name (not the URL) in another column - is that good? ![enter image description here](http://i.stack.imgur.com/nxlA3.png)"} {"_id": "255526", "title": "Do I need to use Node.js, Java or both for a multi-platform application?", "text": "I'm new to Java and I have experience building Node.js apps, so I'm thinking of creating a program that is kind of "multi-platform": a Kindle tablet will run the Java app, one PC will show the queues built from this Java app on many tablets, and another PC will manage the data in the application (those two could be web applications in Node.js). So Node.js could be the server managing queries and the database, while Java could be the Kindle Fire app making requests to the server. I really don't know Java very well, and at this point I don't know whether this project structure is fine; maybe it could work better with both sides in Java, or as a pure Node.js web app. Let's say this project is for a restaurant: waiters will have Kindle Fires (the Java app) instead of notepads; on one side there will be a big screen showing the queues (this could be shown from Node.js) built by the waiters on the Kindles; and at the entrance there will be a PC that manages the money and other things (maybe Node.js as well)."} {"_id": "255521", "title": "Passing the HR Screening Process", "text": "Just a quick question here. I already have a Bachelor's degree in Statistics, and I am thinking of getting the Graduate Certificate in Software Engineering offered by Harvard Extension. Assuming that I hold both the BS in Statistics and the Graduate Certificate in Software Engineering, would it be possible for me to pass the HR screening process if I apply for a programming job later on? (I am asking this because many programming analyst jobs seem to require a degree or diploma in Computer Science or Computer Engineering.) Thanks,"} {"_id": "54198", "title": "User roles in GWT applications", "text": "I'm wondering if you could suggest any way to implement "user roles" in GWT applications. I would like to implement a GWT application where users log in and are assigned "roles". Based on their role, they would be able to see and use different application areas. Here are two possible solutions I have thought of: 1) A possible solution could be to make an RPC call to the server during onModuleLoad. This RPC call would generate the necessary Widgets and/or place them on a panel and then return this panel to the client end. 2) Another possible solution could be to make an RPC call on login, retrieving the user's roles from the server and inspecting them to see what the user can do.
* I'm also considering Java security frameworks like Apache Shiro and Spring Security... What do you think about them? Thank you very much in advance for your help!"} {"_id": "255529", "title": "FXML Boiler Plate Code", "text": "Is there a way to avoid the typical boilerplate code needed to instantiate and load FXML? A typical version: public void start(Stage primaryStage) { URL fxmlUrl = this.getClass().getResource( /* your string path*/ ); FXMLLoader loader = new FXMLLoader(fxmlUrl); try { primaryStage = loader.load(); this.controller = loader.getController(); } catch(IOException ioe) { ioe.printStackTrace(); } primaryStage.show(); } Note that for the code above, the backing FXML file has `` as root. This is just my preference. The loading and matching up to the FXML root can be adjusted accordingly. In any case, loading the FXML file and obtaining a reference to a controller is the crucial part (most of the time). I've attempted to create a utility method such as the one below to help prevent this boilerplate code: <R, C> void loadFXML(R root, C controller, String path) { URL fxmlUrl = this.getClass().getResource( path ); FXMLLoader loader = new FXMLLoader(fxmlUrl); try { root = loader.load(); controller = loader.getController(); } catch(IOException ioe) { ioe.printStackTrace(); } } However, at some stage there were null pointer errors, or the loading was incorrect. I believe it has something to do with the generics and/or casting. How can this code be improved? Or perhaps can another approach be suggested to solve this problem?"} {"_id": "54191", "title": "Is there a process-oriented IDE?", "text": "My problem is simple: when I'm programming in an OO paradigm, part of a main business process often ends up divided among many classes. Which means that if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering: is there an IDE that lets you put a \"process tag\" on functions coming from different objects, and gives you a view of all the functions having the same tag? **edit** : To give an example (that I'm making up completely, sorry if it doesn't sound very realistic). Let's say we have the following business process for an HR application: receive a holiday request from an employee, check the validity of the request, then give an alert to his boss (\"one of those lazy programmers wants another day off\"); at the same time, let's say the boss will want a table of all employees' timetables during the time the employee wants his vacation; then handle the boss's answer and send a nice little mail to the employee (\"No way, lazy bones\"). Even if we get rid of everything not purely business-related (the mail-sending process, db handling to get the useful info, GUI functionalities, and so on), we still have something that doesn't really fit in \"one class\". I'd like to have an IDE that would give me the opportunity to grasp quickly, at the very least: * The function handling the validation of the request by the employee; * The function preparing the \"timetable\" for the boss; * The function handling the validation of the request by the boss; I wouldn't put all those functions in the same class (but perhaps that's my mistake?). This is where my dream IDE could be helpful."} {"_id": "20275", "title": "Mono is frequently used to say \"Yes, .NET is cross-platform\".
How valid is that claim?", "text": "In What would you choose for your project between .NET and Java at this point in time? I say that I would consider the \"Will you always deploy to Windows?\" question the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is \"no\", I would recommend Java instead of .NET. A very common counter-argument is that \"If we ever want to run on Linux/OS X/Whatever, we'll just run Mono\", which is a very compelling argument on the surface, but I don't agree, for several reasons. * OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. * Mono trails the .NET releases. What .NET level is currently fully supported? * Do all GUI elements (WinForms?) work correctly in Mono? * Businesses may not want to depend on Open Source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? * * * Edit: Mark H summarized it as: \"If the claim is that _\"I have a windows application written in .NET, it should run on mono\"_ , then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler.\"."} {"_id": "94563", "title": "What is idiomatic?", "text": "I understand an \"idiom\" to be a common operation or pattern that in a particular language is not simplified by core language syntax, such as integer increment: i = i + 1; In C++, this idiom is simplified by an operator: ++i; However, when someone uses the term \"idiomatic\", I am not sure how to understand it. What makes a piece of code \"idiomatic\"?"} {"_id": "94560", "title": "SQL injection attacks, how do I test and secure ColdFusion queries?", "text": "I'm running ColdFusion 8 and SQL Server 2008. I've been building several forms that insert data from external users into the database; we have a custom-built security module written by the guy whose job I've taken. 1) How can we test our HTML forms to ensure that we're protected from SQL injection attacks? 2) How do I secure CFqueries in CFCs? 3) What are some best practices in terms of SQL & ColdFusion for security? \\-- A lot, I know!"} {"_id": "161589", "title": "Efficient algorithm to sort html list of players", "text": "We have a list of players, which is represented in a form like this:
    Player1  1
    Player2  2
    Player3  3
    Player4  4
Something happens, and Player4 receives 600 points, so it moves to 1st place. Along with it, Player3 receives 300 points and moves to 2nd place. This happens after the same event - let's assume they get these points from the server, but that is all the load that is allowed on the server, and the server does not calculate positions. What is the most efficient way to get a view like this (a rough sketch of one approach follows the example):
    Player4  1
    Player3  2
    Player1  3
    Player2  4
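For reference, here is the kind of minimal approach I have in mind - a TypeScript sketch with made-up names, which sorts by points and then moves the existing row elements instead of rebuilding the whole list:

    // Sketch only (names hypothetical). Sorting ~500 rows is cheap in the
    // browser, and appendChild() on a node that is already in the DOM *moves*
    // it, so the rows are reordered without being recreated.
    interface Player {
        id: string;     // id of the row element, e.g. "id3"
        points: number;
    }

    function renderStandings(list: HTMLElement, players: Player[]): void {
        const ranked = [...players].sort(
            (a, b) => b.points - a.points || a.id.localeCompare(b.id) // tie-break by id
        );
        ranked.forEach((p, i) => {
            const row = document.getElementById(p.id);
            if (row === null) return;
            const pos = row.querySelector(".position");
            if (pos !== null) pos.textContent = String(i + 1); // 1-based rank
            list.appendChild(row); // re-append in rank order: moves, not copies
        });
    }

The deterministic tie-break matters later in this question: it keeps an animation plan from breaking when two players have the same points, because the comparator never yields duplicate positions for distinct element ids.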
That said, it's not a question about code; it's a question about: * What kind of task is this? Just a sort? * Is it OK to use JavaScript to solve this task if the lists have fewer than 500 rows? * Is it OK to re-render the whole players list, or are there ways to just change a div element's position (its index in the DOM, or something like this)? * Are there any tools and libraries around that can help solve this task efficiently? P.S.: If we want to animate all of this, should we create a queue of animation tasks each time the scores for all players are recalculated? **Update:** I have followed @Jalayn's answer, and I am stuck on the following problem: let's assume that the list `[1,2,3,4]` transforms to `[2,3,4,1]`. If we map it, we get the following position diffs: `[+1, +1, +1, -3]`. Now, if we move the elements one by one according to their position change, all is well until we get to the negative position diff - the elements with positive diffs can't move down any more. So, my update to the question: how can we animate rows more than twice? **And the final problem, on which I am stuck:** I am not sure it must be solved in this question, but if 2 players have the same points, the animation plan breaks and the element ids are duplicated. My question now is how to make the following transformation of the list of structures: `[(#id1, 1, 100),(#id2, 2, 50), (#id3, 2, 50), (#id4, 4, 25)] => [(#id3, 1, 150), (#id1, 2, 100), (#id4, 3, 75), (#id2, 4, 50)]` For now my solution is to refresh the whole list, but not completely regenerate it - just move all the elements, in the order they are sorted in the backend JSON, to the first position. The animation is too complex to do because of this final problem."} {"_id": "29743", "title": "Delphi vs C# for GUI programming", "text": "I'm coming from a PHP and Python background with little knowledge of C. I have done many web-based applications, and now I'm thinking of desktop applications for the Windows platform. A friend told me to go for Delphi and others are saying C# is the best. Well, what I'm looking for is 1. Simplicity 2. Productivity 3. Good API documentation 4. Speed 5. Drag and drop 6. Multithreading & a good network API Thanks"} {"_id": "29742", "title": "What kind of positions could a beginning developer expect to qualify for in five years?", "text": "I am a junior developer who just landed a job after completing my Masters. I'm not clear on what positions or roles I could possibly be in after five years. So my question is: what kind of positions can an entry-level programmer like me seek in 5 years?"} {"_id": "164035", "title": "java application architecture", "text": "We have to write an administration panel for many customers, but we want to have just one administration panel and use it in various projects. This admin panel will have basic components such as access control logic, a maker-checker system for changes, user logging, etc. It will also have reporting for the customer and logs of the customer's transactions (which can vary according to the industry, such as mobile banking, banking, or ticket sales). These components may have to be modified according to the business. So we are thinking about the architecture here: is it OK to package each basic component as a jar and bring them together in a glue application? Or should we build each component as a WAR and make interfaces between them? If there are any more ideas, they will be appreciated."} {"_id": "1151", "title": "Where does the word \"Programming\" come from?", "text": "Where does the word \"Programming\" come from? I'm referring to the word \"program\" as referring to \"to instruct a computer\".
Programming, programmer etc..."} {"_id": "164039", "title": "What is a zombie process or thread?", "text": "What is a zombie process or thread, and what creates them? Do I just kill them, or can I do something to get diagnostics about how they died?"} {"_id": "97917", "title": "My company won't let me listen to music anymore", "text": "I began programming at the age of 14, and over a decade has gone by; during this time, a day hasn't gone by when I don't listen to music, radio or podcasts. When I'm programming at home or at work, I always like to listen to music, sometimes the same album or playlist throughout the week, as it becomes a constant and I almost don't notice it's there. One headphone in usually means "talk to me"; both headphones in is known to the girlfriend and friends as "code mode". They know better than to get more than a few words out of me. I'm sure you guys understand! So, I used to work for a big ATM company dealing with banks on a daily basis. I was more of a number than a person, and when I worked I could listen to music all day and night. If anyone had any issues, they would ping me on MSN or give me a tap. I was fine with that, and so was everyone else (as they all did the same). Now I've moved on (had to relocate) and I have a new job in a much smaller company. The company is NOT software orientated at all; I am the only on-site programmer, and all previous work was outsourced. We have around 30 people in the office (a mixture of admin and accounts, but mainly noisy sales teams), and I probably talk to 2 of them. They're not a social bunch. I'm fine with that, as I know what they do has no impact on what I do and vice versa, so I sit quietly, put my headphones on and code. I know (they kind of know) that I'll be leaving this time next year; as they're a small company, we established in the interview that after 2 years I might struggle to find projects. So... back to my problem (I promise that was all relevant): in my first week I set up my coding environment and began working. The IT manager pulled me aside and said "we have a slight issue, you can't wear your headphones"; when I asked why, they responded "the boss will go mad". So after a few months of no headphones, or sneakily wearing them when it got really loud, I became confident enough to wear them, as the boss himself acknowledged I wore them. I told him "it helps me concentrate, and gets me in the zone". He didn't say anything at this point (and never has). I had no issues for around 4-5 months. Then last month I was called into a meeting with one of the MDs; he told me I could no longer wear headphones. Again I asked why; this time they said "the sales team are wearing them". They don't - I know they don't, because I sit next to them all. I tried to argue my point, I received the 'I know, I feel for you' speech, and that was that. No more music. I returned to the office and had a few pairs of eyes on me, looking pretty happy with themselves. One even turned to the receptionist behind me and said "have you noticed"; the receptionist responded... "what?", then got up and looked over the divide straight at me. Subtle, huh. I like to call this "Small Company Fever"... they don't like change when trying to 'save the world' every day. So now, after a month, I find I can't concentrate fully; my mind wanders. To make it worse... they've moved my desk from the middle of the office to the main door, which happens to be next to the toilets, the huge photocopier/printer and the kitchen. It's loud, really loud.
It's actually starting to stress me out and make me want to look for a new job. So here is my dilemma: am I just being a big girl's blouse - should I just shut up and get on with it, as it's only for another year and a bit... or do you think I have a genuine problem, and if so... how would you deal with it? Sorry for War & Peace II. All the best! Rocky"} {"_id": "179872", "title": "To map - word usage in software context", "text": "I always get confused about how to use the verb \"to map\" in the context of associating objects with each other, particularly if the mapping is directional. Is there a distinction between > mapping users to IDs and > mapping IDs to users In the given example, which would be the better way to put it?"} {"_id": "245628", "title": "Analogy for Android fragments", "text": "I'm trying to understand the flow of Android apps. I come from a RoR background, so I try and use that background to understand new concepts in Android. Here's my question: How can I think about fragments? Are they like partials in Rails? If so, how? If not, why?"} {"_id": "17443", "title": "What does it mean to write \"good code\"?", "text": "In this question I asked whether being a bad writer hinders you from writing good code. Many of the answers started off with \"it depends on what you mean by good code\". It appears that the terms \"good code\" and \"bad code\" are very subjective. Since I have one view, it may be very different from others' view of them. So what does it mean to write \"good code\"? What is \"good code\"?"} {"_id": "193155", "title": "LGPL, .lib, .dll, and linking", "text": "I am trying to build a project which uses an unmodified copy of libconfig (http://www.hyperrealm.com/libconfig/). libconfig is LGPL, but I don't want to open source any of my code. By my understanding, LGPL means I need to provide the source for the library (easy), and _a means for them to use their own modified version of the library_. It's the latter component that confuses me a bit (apologies for some C naivete). Currently, my VS2010 solution has my project and the libconfig project. The libconfig project builds a dll, but I also need to link my project against the .lib of libconfig to get the dll's definitions (can someone explain why this is necessary, when I'm already including the header file?). Despite the linking, the .dll file needs to be present at runtime for the binary to be able to run. Do I need to provide all of the .obj and .lib files I produce to satisfy the LGPL? Is there a way to avoid linking the .lib file? I've looked into LoadLibrary and GetProcAddress, but that looks way more complicated than I'd like. Or am I simply overestimating the requirements of the LGPL here? If there's another, more permissive config library for C++, that would also solve my dilemma. But I haven't been able to find one (and I'd like to avoid Boost)."} {"_id": "161634", "title": "How should I deal with Time Zones in a .NET WCF application?", "text": "Our company runs a SaaS application where users log in from across the world (although mostly in the US). We store all our time-relevant information as UTC, but we need to display times using local time. The application is web-based, and we would like to \"auto-detect\" the user's Time Zone by using JavaScript to determine their UTC offsets during various times of the year. The user's offset info would be passed to our server in their first request, and the server would look up all the Time Zones that it knows about and see which valid time zones match.
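The client-side detection I have in mind is roughly the following - a TypeScript sketch, with the payload field names made up:

    // Sketch only (field names hypothetical). Sample the browser's UTC offset
    // in both halves of the year so the server can tell apart zones that share
    // a standard offset but have different daylight-saving rules.
    interface OffsetProbe {
        janOffsetMinutes: number;
        julOffsetMinutes: number;
    }

    function sampleUtcOffsets(year: number): OffsetProbe {
        // getTimezoneOffset() returns minutes *behind* UTC (UTC-8 -> 480).
        return {
            janOffsetMinutes: new Date(year, 0, 15).getTimezoneOffset(),
            julOffsetMinutes: new Date(year, 6, 15).getTimezoneOffset(),
        };
    }

    // Sent with the first request; the server keeps only the time zones whose
    // offsets on the same two dates match both samples.
    const probe = sampleUtcOffsets(new Date().getFullYear());

On the server, each candidate zone's offsets for those same two dates would be computed and compared against the probe.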
I'd like to use the built-in TimeZoneInfo object, but I understand that TimeZoneInfo.GetSystemTimeZones(); relies on the server's registry settings. We currently have 18 load-balanced web servers that host our WCF service layer, and I _believe_ our TechOps team is vigilant in making sure that all servers are patched with the same service pack at all times. I also need to pay special attention to the various US Time Zones, specifically Pacific, Mountain, Central and Eastern. Can I assume that the string-based Id field will _not_ change anytime soon? For example, if I call TimeZoneInfo.FindSystemTimeZoneById(\"Pacific Standard Time\") I want the appropriate TimeZoneInfo object to be returned. So a few questions: * What is the best way to deal with time zones, especially when it comes to the changing rules (US time zone rules changed in 2006) and determining offsets? Should I be trying something else? Keep in mind that the product team doesn't want the user to have to select their time zone, and they are fine with time zones being inaccurately reported under certain conditions (e.g. if the user is in Mexico, it is ok to be reported as \"Pacific Standard Time\" instead of the more accurate \"Pacific Standard Time (Mexico)\"). * If I assume that all of our load-balanced web servers are always running the same OS version and patch level, can I assume that I will get consistent results? * Are there any other things I need to consider?"} {"_id": "211342", "title": "raw aql query in framework with Active Record pattern based ORM", "text": "I use the Yii framework, which implements the Active Record pattern as its ORM base. It has a CActiveRecord class, which is a table wrapper class with attributes reflecting the table's columns. So each object of this class represents a database row. Wiki says about the Active Record pattern: > Active record is an approach to accessing data in a database and > A database table or view is wrapped into a class. Thus, an object instance > is tied to a single row in the table. So far so good. But where should I put a complex raw SQL query that retrieves, for example, statistics data? And, more generally, where should I put methods that retrieve data that cannot be an active record object (like data retrieved with aggregation queries), or cases where I knowingly do not want to retrieve an object but an array instead?"} {"_id": "161639", "title": "Filtering lists in an Android App offline or online?", "text": "I have an Android app that fetches a list of about 100 items. Trying to get the server load down, I've built a filtering feature that does everything offline, so basically, instead of calling the server (and thus running a MySQL query on every call), it filters on its own. However, I've noticed the app tends to lag (when the only change is the offline filtering). Is the processing of relatively big object lists something that should be done offline, or should I leave the load on the server?"} {"_id": "191949", "title": "Benefits of combining programming languages", "text": "I know there are different ways to combine programming languages (Haskell's FFI, Boost with C++ and Python, etc...). I have an odd interest in combining programming languages; however, I have only found it "necessary" once (I didn't want to rewrite some older code). Also, I notice that this interest is shared (there is an abundance of questions about integrating languages on SO). My question is, simply, are there any other benefits in combining programming languages? Is there value in mixing different programming paradigms (e.g.
functional+OO, procedural+aspect-oriented)? Any from-the-field examples would be much appreciated. **UPDATE** When I say "combine two languages" I am talking about using them in conjunction, in ways not necessarily originally intended. For example, suppose I use Boost to incorporate Python code in C++."} {"_id": "191944", "title": "How do you schedule software updates, major releases, milestones (what is this?)", "text": "What is the common terminology used to schedule software updates and support? For example, I really have no clue how releases and updates differ, or how often updates are released (not every day, I hope). Most importantly, communicating things like milestones (what does this mean?) and roadmaps - basically, I want to know how I can offer support to end users by using industry terminology."} {"_id": "191941", "title": "Hating your own code - for good or bad, how do you deal with it?", "text": "Have you ever had this feeling that your code is bad, the whole project is a mess, and you just want to step away? On your daily job you can explain this feeling away with your coworkers, an asshole boss, or something like this. But with side/pet projects there is really no excuse. For example, I'm currently maintaining my Firefox extension - fixing bugs and adding new features. Quite frequently, when I go back to code written months ago, the feelings that arise inside me are quite conflicted - "Did I write that? Really??" Knowing that a proper implementation of a new feature "the right way" would require throwing away a whole module, I still put together a quick hack - sometimes I don't even hesitate. As the project grows and features are added, there remains less and less room for refactoring... Do you have any recipes for dealing with this kind of emotional state? Do you just pull yourself together and give the whole project another thought, rewriting it as version 2? Or maybe just ignore all this - "working code is better than perfect code"?"} {"_id": "238850", "title": "Drawbacks to redefining method in precompiled header", "text": "I have a lot of calls to `NSLog(...)`. I need to change all of these calls to `CLSNSLog(...)`. So I added this to my precompiled header (.pch): #import #define NSLog(...) CLSNSLog(__VA_ARGS__); Everything works. I don't see any drawbacks to this -- the team can continue to use NSLog(...) and I can revert pretty easily. But am I missing anything, or are there any design issues this will create later on?"} {"_id": "238851", "title": "Identify this programming style", "text": "Some of the legacy code I've inherited uses the fact that C# supports multiple assignment to write code like: void DisableControls() { ddlStore.Enabled = ddlProgram.Enabled = ddlCat.Enabled = btnSaveProxy.Enabled = btnCancelProxy.Enabled = btnViewMassManagerProxy.Enabled = grdPromo.Enabled = false; } It's a very _pretty_ style, with all the `=` lined up neatly, and the value being assigned down at the bottom, but it's also a rather annoying one to work with. (And yes, those are tabs lining things up, not spaces.) I would like to know what, if anything, this style implies about the original coder, since I've only seen it on a few occasions before. Does it indicate a C++ background? .NET 1.1? Just someone's personal preference? ~~Is there any reason that the chained assignment like this would be more efficient than individually setting each control's `.Enabled`?
Less so?~~ (This part was answered by @gnat's comment: `This way makes it easy to insert btnWhatever.Enabled = as new buttons are added and delete as some buttons are deleted`) * * * Edit: Other elements of this person's style, assuming the whole file was written by one person (likely but not certain), include putting function arguments on the next line, and putting commas at the beginning of a line instead of the end of the previous line. Example: PromoUpdate[] GetPromoUpdates( StateField field) { List<PromoUpdate> updates = new List<PromoUpdate>(); foreach (PromoOptInState p in field.States.Where( x => x.IsDirty)) { updates.Add(new PromoUpdate() { promoCode = p.Code , optIn = p.IsOptedIn }); } return updates.ToArray(); } It's very weird."} {"_id": "238855", "title": "Does using Xapian in Django-Haystack enforce the GPL?", "text": "Consider the following situation: * She uses Django (BSD) for a website. * She uses Haystack (BSD) for textual search in the website. * She uses the backend of Haystack, xapian-haystack (GPL), to use Xapian (GPL) as the search engine of Haystack. Question: must she distribute the source code of her website under the GPL (or equivalent)? My question is specifically about whether the source code is considered to be derivative GPL code. Pragmatically, I would say the code is written entirely in Python+Django+Haystack, and Xapian could be substituted by another search engine. Indeed, the difference in the code is as simple as HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'haystack.backends.xapian_backend.XapianEngine', 'PATH': os.path.join('tmp', 'test_xapian_query'), 'INCLUDE_SPELLING': True, } } or HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine', 'URL': 'http://127.0.0.1:9200/', 'INDEX_NAME': 'test_default', 'INCLUDE_SPELLING': True, }, } But I would very much like to have a second opinion on this."} {"_id": "238856", "title": "Why does Java support brackets behind variables and even behind method signatures?", "text": "Java allows this: class X{ int i,j[]; // j is an array, i is not } and even worse, it allows this: class X{ int foo(String bar)[][][] // foo actually returns int[][][] { return null; } } Okay, the reason for this might be that it was borrowed from C/C++. However, Java was meant to be easier than C/C++. Why did the Java inventors decide to allow this hard-to-read construct? The convoluted types of C, where the variable name is in the middle of the type, are just hard to read and provoke programming errors. Especially the brackets behind the method signature: I have never seen these in use, and that is for a good reason. No one looks behind the signature when checking the return type of a method. While the first example may save some keystrokes (because `int` does not have to be written twice), the brackets behind the signature do not even save any, so I see absolutely no gain here. So is there a good reason for this (especially the second one) that I am missing?"} {"_id": "12858", "title": "Using Windows tools to edit Linux project files", "text": "In the spirit of getting new people on the project up to speed and leveraging their familiarity with existing Windows tools, what are possible options for cross-platform editing/development? Of course, the Windows people could learn vi/Emacs, X, Eclipse/NetBeans, etc., but this would involve a period of learning, per person. Is there any combination of technologies/tools that would let developers who are most familiar with a Windows tool-chain help out on a Linux-based project? E.g.
One developer would like to use UltraEdit. For reference, the Linux project(s) are all daemon processes written in C++, and the development server has: * Version control systems: RCS, CVS and SVN. * Compile tools: GCC and a Make-like build system. * X Windows * Telnet/RSH/SSH Suggestions don't have to be free/open-source, but an indication of licensing terms would be greatly appreciated. (Updated to add language)"} {"_id": "92248", "title": "What problem does automated user interface testing solve?", "text": "We are currently investigating automated user interface testing (we currently do automated unit and integration testing). We've looked at Selenium and Telerik and have settled on the latter as the tool of choice due to its much more flexible recorder - and we don't really want testers writing too much code. However, I am trying to understand the overall benefit. What are people's views, and what sorts of things work well and what doesn't? Our system is under constant development and we regularly release new versions of our (web-based) platform. So far the main benefit we can see is for regression testing, especially across multiple client deployments of our platform. Really looking for other people's views. We "think" it is the right thing to do, but with an already busy schedule we are looking for some additional insight."} {"_id": "154902", "title": "Is there any research about daily differences in productivity by the same programmer?", "text": "There has been a flurry of activity on the internet discussing a huge difference between the productivity of the best programmers versus the productivity of the worst. Here's a typical Google result when researching this topic: http://www.devtopics.com/programmer-productivity-the-tenfinity-factor/ I've been wondering if there has been any research or serious discussion about differences in day-to-day productivity by the same programmer. I think that, personally, there is a huge variance in how much I can get done on a day-by-day basis, so I was wondering if anyone else feels the same way or has done any research."} {"_id": "92241", "title": "What do you feel is the best way to increase colleagues' knowledge?", "text": "Our team recently joined up with a few smaller teams, and as we had some direction and standards to our development, it was decided that it would be best to train the others up in our chosen languages and methodologies. Some members take years to achieve an acceptable level of competence, and I've noticed that there seems to be this assumption that a developer with less knowledge requires almost 50% of a more knowledgeable person's time. When I started developing several years ago I had no mentor; I had a project in front of me, a good book, and later on I had the internet, and that's pretty much shaped my learning approach over the years. As a result, I can take on a language or methodology and learn it very quickly; the only time I need to seek advice from colleagues is for design work, code reviews and standards discussions. The staff who have this allocated time with mentors appear to be constrained, and I strongly feel that when it comes to technology, you shouldn't need much human interaction; if you rely on someone to **_walk you through_** technologies, I fear that you will only remember the walkthrough and therefore be constrained in your learning. So my question is: how do you deal with training up staff? My way has always been to **not find out the solution** but to **find out how to achieve the solution**.
I don't quite understand the whole **_spending time shadowing another developer_** thing; maybe that's just me, but there must be a standard approach to this. I would like some input, please."} {"_id": "119027", "title": "Munging intro level knowledge of set theory with intro level knowledge of electronics to parse and evaluate the content of HL7 messages", "text": "This question is being asked for the purposes of evaluating whether or not attempting to use things I picked up in college was a good idea, or even remotely defensible. Last year I wrote a 'formula creator' for figuring out what to do with fields or how to populate fields of HL7 data. The fact that it's HL7 data is almost unimportant, except for the fact that it's not always known whether or not an HL7 field is going to exist or if it's going to repeat. Therefore, I thought, if I need to get a list of these fields I can return myself a set of them. If I don't find them at all, I can add an undefined, fuzzy value to the set. Then I can evaluate certain things on the set (i.e. do all of them exist, do more than 6 match some regular expression, do they ascend from greatest to least). Everything works OK, except for what to do with 'All Of' and the empty set. In the interest of making a program easy to use, should 'all of []' return true or false? I tried to make a case that "In set theory, the existential operator yadda yadda" - deaf ears. I'd like an answer to the first question, because it may require going back to the drawing board, but I'll fast-forward to a few months ago: we got around that problem by never having an empty set. If we encountered 0 fields in the HL7 message, we'd just ploink undefined into the set. The way I wanted to use undefined was as some sort of catalyst for "I have no idea". So True or Undefined = Undefined, False and Undefined = Undefined, and Not Undefined = Undefined. I made it a point to be very consistent in my programming with this, but everything that ailed the program could have been alleviated by making "Not Undefined" = True. I told the two guys who actually use my program to 'always write your formulas to expect the true case', 'cause that's how Don Knuth does it. In summary, my program works. The problem is that it is encumbered by my adherence to principles, clearly outside the domain of the problem, that if I slackened would make the program flow better and function without requiring me to consult the source code. My question is: should abstractions match the domain? Can I legitimately argue to nurses, bosses and (more military-minded) programmers that "all of an empty sack is true" or "the opposite of unknown is still unknown"?"} {"_id": "66563", "title": "Burnout... Project Issues... Now They're Withholding My Salary... What Should I Do Next?", "text": "I've been working at a startup for 9 months. Except for the first 3 months, I've been miserable. It started when I joined the organization. I didn't join as a fresher. My first task was to create a replica of the mobile application that was being developed by a senior developer. Mobile development was new to me, but I managed things gracefully and the application got client acceptance. During this time, that senior developer got fired, and I was the only developer left on the team. Soon, I was told to move over to the new mobile development platform to help the team there, who were having trouble with the initial release of the application. I helped them, and they were able to ship the initial release gracefully.
With my efforts, I got management's attention, and as the fog of worry (temporarily) faded, they told me to do some research about a core piece of that app. The research took me two months, and meanwhile the team messed up the application. I was told to take charge of the main development and abandon research until further instructions. Those "further instructions" never came. When I took charge, we were a team of two. We were both new to the technology and we were struggling. Within a month my teammate left the company, and I was alone. While my learning process continued, the client reported some serious design issues. During the development phase, it took me 10 days to troubleshoot these issues and another month to solve them. During this time, I was doing 4 tasks (i.e. development, design, research, and QA) all alone. `I was weak in design and QA`, and for that I was criticized a lot by both the client and management. It took me one week to cope with the burnout, and it took me three months to stabilize the project. I wasn't granted any leave during those 3 months, was called in every weekend for 5 weeks, and was forced to work for at least 12 hours each day. I once tried to oppose this, and my salary was put on hold for 10 days without any explanation. Today, the status is this: * Among the three versions of that application (including mine), every team has at least one feature that is not acceptable. * The whole project is 3 months behind schedule. * Two weeks ago, our project leader, for the first time, acknowledged that his initial work estimates were wrong. * So he wanted us to estimate the remaining time. * He told us that he would convince the client to accept these new dates. * In three days, our estimates were rejected three times by the project leader himself and weren't discussed with the client. * We were three days behind our schedule for the release (and a week behind the schedule reported to the client). * Still, the project estimates were not accepted by the client, and there are 5 minor bugs open in that application. After this, they moved me from that application to a different application which had been abandoned by a developer some 7 months ago. The client wanted its beta delivered quickly with all of its issues resolved. There were 12 issues and, among them, 10 issues were resolved. I sent a "test beta" to the client and he mostly accepted it. But there were some problems, and the client forced me to solve the remaining two issues at any cost in the given timeline. As I was the sole developer, the project leader requested my opinion about those two bugs. Both tasks were impossible. One was unsupported by the API, and the other couldn't be completed in the given timeline even if somebody worked 24 hours/day. But he wasn't convinced, and he asked me to complete these two bugs in the given timeline, thus forcing me to work the weekend. He even froze my salary again, and this time I am in a financial crunch. > **What should I do in this situation?** I am inclined to leave, but the question is - should I leave **immediately**? If I do, that will put them in jeopardy and will make them realize that something was wrong on their side. I am not worried about relations with them, but I am worried about the next developer who will take my place (if they can find one). The only problem is that I will have to suffer economically somewhat for a month. OR Should I wait for my salary?
I do not believe that my salary will be credited tomorrow, and I don't have enough mental strength to face them and still be calm and able to work. But if I pursue this tack, it's at least somewhat _more likely_ that I will have _some_ money and that I can support myself and my family. Or do you have some other option in mind? Thanks for patiently reading my question."} {"_id": "151368", "title": "Designing A 2-Way SSL RESTful API", "text": "I am starting to develop a WCF API, which should serve some specific clients. We don't know which devices will be using the API, so I thought that using a RESTful API would be the most flexible choice. All devices using the API would be authenticated using an SSL certificate (a client-side certificate), and our API will have a certificate as well (so it's a 2-way SSL). I was reading this question over on SO, and I saw the answers about authentication using Basic-HTTP or OAuth, but I was thinking that in my case these are not needed; I can already trust the client because it possesses the client-side certificate. Is this design ok? Am I missing anything? Maybe there's a better way of doing this?"} {"_id": "66565", "title": "How to deal with IDE Addon creep?", "text": "I'm sure I am not alone in this issue, so I wanted to see what others do to handle this problem. Whenever I have to reinstall my IDE, one of the first things I do is go out and look for addons. It starts out with just grabbing whatever I need for what I primarily plan on doing, but ends up with getting anything that looks interesting or that I may find useful in the future. "But it's additional functionality _for free!_" In the end I just end up with a dev environment with a lot of bloat. It also happens over time, so you reinstall only to start the cycle over again... Is there a good way of handling this?"} {"_id": "66569", "title": "Should a web developer understand TCP/IP and how routers manage requests?", "text": "I had a job interview today for a position as a developer on an important site. They asked tonnes of programming-language-related questions, which I managed to answer without problems, but then they started asking questions about how TCP/IP requests are made once I make a request on my PC to a web server. I did cover that material as a student, but I don't remember it well, because I'm working mostly in web development. My question is: as a software developer, mainly working on web applications, do I need to have **extensive** knowledge of TCP/IP and how routers manage requests, or can it just be black-box knowledge to me?"} {"_id": "151360", "title": "How to unit test with lots of IO", "text": "I write Linux embedded software which closely integrates with hardware. My modules include: -CMOS video input with kernel driver (v4l2) -Hardware H264/MPEG4 encoders (Texas Instruments) -Audio capture/playback (ALSA) -Network IO I'd like to have automated testing for those functionalities, such as integration testing. I am not sure how I can automate this process, since most of the top-level functionalities I face are IO bound.
Sure, it is easy to test functions individually, but whole-process checking means depending on tons of external dependencies that are only available at runtime."} {"_id": "245476", "title": "problem on calculating Big O complexity", "text": "I have this function for which I have to calculate the time complexity in **Big O notation**: public void print(ArrayList<String> operations, ArrayList<LinkedHashSet<String>> setOfStrings) { int numberOfStrings = 0; int numberOfLetters = 0; String toPrint = operations.get(1); for (Iterator<LinkedHashSet<String>> iteratorSets = setOfStrings.iterator(); iteratorSets.hasNext();) { LinkedHashSet<String> subSet = iteratorSets.next(); if (subSet.contains(toPrint)) { for (Iterator<String> iterator = subSet.iterator(); iterator.hasNext();) { numberOfLetters = numberOfLetters + iterator.next().length(); } numberOfStrings = subSet.size(); break; } } } The method does this operation: for example, if I have the operation `print foo`, I have to do these steps. First of all, I have to find where `foo` is: * Inside `setOfStrings`, I can have this situation: position 1 : [car, tree, hotel] ... position n : [lemon, coffee, tea, potato, foo] * When I find the string `foo`, I have to save the number of strings inside that position and the number of letters of each string, so in this case, I will save: 5 (number of strings) and 23 (sum of the numbers of letters). Some considerations: 1. For the `ArrayList` of `operations`, I always get a specific position, so I don't iterate. It is always `O(1)`. 2. For the `ArrayList<LinkedHashSet<String>>`, I have to iterate, so the complexity in the worst case is O(n). 3. The operation `if (subSet.contains(toPrint))` will be O(1), because the hash set has mapped all the objects inside it. 4. The iteration inside the hash set, made with `for (Iterator<String> iterator = subSet.iterator(); iterator.hasNext();)`, will be O(m), because I have to iterate over the entire hash set to sum the letters of each word. **_So in conclusion I think the time complexity of this algorithm is `O(n) * O(m)`_** - are these considerations all correct? Thanks."} {"_id": "61039", "title": "Smart Pointers inside class vs Normal Pointers with Destructor", "text": "Regarding pointers which are members of classes: should they be of a smart pointer type, or is it enough to simply deal with them in the destructor of the class they are contained in?"} {"_id": "245470", "title": "Version Control & Deployment on Large Ecommerce Site", "text": "I am currently the front-end dev for a large ecommerce store, and I want to switch the entire site to a version control system (Git) and a deployment service (Beanstalk). I'm having trouble understanding a couple of things, though. Firstly, my only experience with version control has been my personal WordPress site, where things were simple. I would have a local copy of the site on AMPPS, and there was the live site. I would make changes locally, then when happy push to Bitbucket, which in turn deployed to the live site via FTPloy. Simple. Now we have two servers, dev and live. At the moment we simply edit the files on the dev server via FTP and, when signed off, upload those to the live server. If I were to implement version control, how would I develop on the dev server efficiently? Without being able to have a local version of the site, each minor change (a simple CSS property, for example) would have to be pushed to the dev server to see the results, no? Surely this isn't how it's done; it would be way too much hassle, especially if working on it for hours...
I can't seem to find any examples on the web of how this is handled when you don't have a local environment set up, which isn't possible in this case. I think I'm missing something fundamental, to be honest, not being entirely clued up on how it works. If you need more details, just let me know. Cheers."} {"_id": "127287", "title": "Simple issue tracker for 1-2 developers", "text": "I'm currently working mostly alone on a project (in Java). I'm mostly alone as I have an advisor who gives me high-level instructions on what to do, and will seldom make any code contribution. She will code a couple of acceptance tests from time to time, though. I've never used an issue tracker before, and was thinking about starting to use one now, as I'd like to have a place where I can log possible bugs I find and keep track of them in a centralized manner. Better yet, would it be possible to integrate the issue tracker with Eclipse? So here are the constraints: 1. It's NOT an open-source project. Our code is not to be shared with anyone! 2. we are and will be using Subversion; 3. we have our own Subversion server and we will keep using this same Subversion server; 4. it must be free; 5. it must allow at least 2 users. What is your advice on what to pick? I'm looking for the simplest solution available."} {"_id": "189169", "title": "How to synchronize configuration for Findbugs between build server/Maven and several programmers", "text": "Here's a common scenario for developers who want to integrate static analysis into their workflow. Any suggestions on how to get this working with a minimum of pain? The situation is similar to the one discussed here: http://enterprise-it-solutions.blogspot.dk/2011/02/maven-configure-checkstyle-pmd-eclipse.html The dev system uses a centralized build server based on Maven and Hudson. The developers use Eclipse and Java, and use a standard tool (git, svn or Bazaar) for source control. The developers need a local version of the static analysis tool to avoid having to deal with a dozen quality issues at once when they want to check in a change. The build server (Maven + Hudson) has a Findbugs configuration that defines the "Current standard for quality". For each programmer, there should also be a standard configuration for Findbugs that is at least as restrictive as the server-side configuration, and perhaps even identical to it. The programmer should be able to change his local configuration, because - let's face it - he's gonna do that anyway at least once, yet it should be relatively easy to reset to standard. Changes should only flow from the server to the developers, not the other direction."} {"_id": "61036", "title": "Can I use this GPL'd PGP library internally in my company?", "text": "This PGP library was bought up by Network Associates and then eventually Symantec Corporation. The source code is available and is licensed under the GPL (it was linked to from here). They have a source code license here: > http://www.pgp.com/developers/sourcecode/sourcecode_license.html Even though this code is licensed under the GPL, are they restricting me from using it for business purposes? To be clear: I am not making any modifications to the code, and I am not linking the code into any other commercial or otherwise business-related application. I just want to run the command-line and desktop applications as part of the normal day-to-day goings-on of my business (signing and encrypting emails). Is this permissible?
**Update:** After having carefully re-read this page: > http://www.pgp.com/developers/sourcecode/index.html I have just noticed that the library itself _is not_ licensed under the GPL: > **Modifications to GPL Code** > Although PGP\u00ae software itself is not subject to the GNU General Public > License (GPL), PGP Corporation utilizes certain code subject to the GPL However, the question and answers would still have been helpful had the library been fully GPL'd."} {"_id": "245478", "title": "How to create animated Gifs of programming examples?", "text": "I've seen this a few times on GitHub. You visit a repo and there are animated Gifs showing how to use the open source project via a console or IDE. These Gifs show text as a person is typing. It's clearly a screen capture taking place, but I was wondering how these are created? Here is an example repo: https://github.com/YuriyZanichkovskyy/ReReflection Here is an example GIF: ![enter image description here](http://i.stack.imgur.com/Ck4GP.gif) > If this is the wrong place for this question, please point me in the right > direction."} {"_id": "223224", "title": "How to include license for MIT-licensed CSS?", "text": "I have a project licensed under the GPL. However, it includes a bit of code from animate.css, which uses the MIT license. How would I include the appropriate licensing information for that CSS? I've included that bit of CSS (maybe 25 or so lines) in my main stylesheet, so I'd rather not have a huge stamp like what's on this page, if I can avoid it."} {"_id": "26544", "title": "Is there any way to use Python to replace Flash for in browser animation, gaming, whatever?", "text": "Is there any way to use Python to replace Flash for in-browser animation, gaming, webapps, whatever? Pretty straightforward, IMHO. Does anyone know how to do this?"} {"_id": "216913", "title": "Naming conventions for language file keys", "text": "What is your strategy for naming conventions for the keys in language files used for localization? We have a team that is going to convert a project to multiple languages and would like some guidelines to follow. As an example, usually the files end up being a series of key/value pairs, with the key being the placeholder in the template for the language-specific value. 'Username': 'Username', 'Enter Username': 'Enter your username here'"} {"_id": "1", "title": "\"Comments are a code smell\"", "text": "A coworker of mine believes that _any_ use of in-code comments (i.e., not Javadoc-style method or class comments) is a code smell. What do you think?"} {"_id": "201657", "title": "Forcing people to read and understand code instead of using comments, function summaries and debuggers?", "text": "I am a young programmer (finished computer science at university but still with under a year of working in the industry) and I recently got a job working on some C code for a decent-sized web service. Looking at the code, the only places I saw comments were where people were stashing their old code. Function and variable names are similarly informative most of the time - `futex_up(&ws->g->conv_lock[conv->id%SPIN]);`. Confronting a senior programmer about the situation and explaining that adding comments and meaningful names would make the code more maintainable and readable in the future, I got this reply: > In general, I hate comments. Most of the time, like the case you mention > with the return value, people use comments to get around reading the code.
> The comments don't say anything other than what the guy thought the code > does at the time he put in the comment (which is often prior to his last > edit). If you put in comments, people won't read the code as much. Then bugs > don't get caught, and people don't understand the quirks, bottlenecks, etc. > of the system. That's provided the comments are actually updated with code > changes, which is of course totally unguaranteed. > > I want to force people to read the code. I hate debuggers for a similar > reason. They are too convenient and allow you to step through dirty code > with watches and breakpoints and find the so-called problem, when the real > problem was that there are bugs in the code because the code has not been > simplified enough. If we didn't have the debugger, we would refuse to read > ugly code and say, I have to clean this up just so I can see what it is > doing. By the time you are done cleaning up, half the time the bug just goes > away. While what he wrote goes against a lot of what I was taught at university, it does make some sense. However, since academic experience sometimes doesn't apply in real life, I would like to get the opinion of people more vetted in code. Does the approach of avoiding comments, in order to make people actually read the code and understand what is going on, make sense in a medium-sized coding environment (one that can be reasonably read in whole by every person working on it within a month or two), or is it a recipe for a long-term disaster? What are the advantages and disadvantages of the approach?"} {"_id": "119600", "title": "Beginner's guide to writing comments?", "text": "Is there a definitive guide to writing code comments, aimed at budding developers? Ideally, it would cover when comments should (and should not) be used, and what comments should contain. This answer: > Do not comment WHAT you are doing, but WHY you are doing it. > > The WHAT is taken care of by clean, readable and simple code with proper > choice of variable names to support it. Comments show a higher level > structure to the code that can't (or can hardly) be shown by the code itself. comes close, but it's a little concise for inexperienced programmers (an expansion on that with several examples and corner cases would be excellent, I think). **Update** : In addition to the answers here, I think this answer to another question is highly relevant."} {"_id": "121775", "title": "Why should you document code?", "text": "> **Possible Duplicate:** > \"Comments are a code smell\" I am a graduate software developer for an insurance company that uses an old COBOL-like language/flat-file record storage system. The code is completely undocumented - both code comments and overall system design - and there is no help on the web (the language is unused outside the industry). The current developers have been working on the system for between 10 and 30 years and are adamant that documentation is unnecessary, as you can just read the code to work out what's going on, and that you can't trust comments. Why should such a system be documented?"} {"_id": "173118", "title": "Should comments say WHY the program is doing what it is doing? (opinion on a dictum by the inventor of Forth)", "text": "The often provocative Chuck Moore (inventor of the Forth language) gave the following advice[1]: > Use comments sparingly! (I bet that's welcome.) Remember that program you > looked through - the one with all the comments? How helpful were all those > comments? How soon did you quit reading them?
Programs are self-documenting, > even assembler programs, with a modicum of help from mnemonics. It does no > good to say: > > `LA B . Load A with B` > > In fact it does positive bad: if I see comments like that I'll quit reading > them - and miss the helpful ones. What comments should say is what the > program is doing. I have to figure out how it's doing it from the > instructions anyway. A comment like this is welcome: > > `COMMENT SEARCH FOR DAMAGED SHIPMENTS` Should comments say _why_ the program is doing what it is doing? * * * In addition to the answers below, these two _Programmers_ posts provide additional insight: 1. _Beginner's guide to writing comments?_ 2. An answer to _Why would a company develop an atmosphere which discourage code comments?_ ### References 1. _Programming a problem-oriented-language_, end of section 2.4. Charles H. Moore. Written ~June 1970."} {"_id": "216918", "title": "Is there really anything to gain with complex design?", "text": "I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple: * MVC * Service Layer * EF * DB To really complex: * MVC * UoW * DI / IoC * Repository * Service * UI Tests * Unit Tests * Integration Tests But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs / consultants can hop on, make changes, and contribute immediately, without having to wade through 6 layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and incurring costs down the line. In all cases, there was never a need to actually make code swappable or reusable - and the tests were never actually maintained past the first iteration because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if - in the end - * testing and interfaces aren't used * rapid development (read: cost-savings) is a priority * the project's requirements will be changing a lot while in development ...would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it the reliability, number of concurrent users, ease-of-maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs / consultants that have been in the business for a while and that have worked with these varying degrees of complexity, to hear if the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development."} {"_id": "60929", "title": "Sticking to a Task vs Varying It", "text": "When you have several programming tasks to do at once, do you prefer to go through them one at a time, or to vary them, perhaps based on subtasks or on time? Why? For myself, I find that: Pros for sticking: * If I lose my concentration or train of thought, it takes a while to get back into it. If I chug through one task, I'll probably get more done. * Getting to cross things off a todo list feels good and looks good to my employer. Pros for varying: * Sometimes my most productive work is in the first hour or so that I'm working on a task. If I vary, I get several first hours in a day. * Getting stuck on a task when you're not making any progress is not much fun. Giving it some time often allows you to come up with solutions in a more relaxed manner. What do you do?
Is there a particular time frame between varying tasks that works best?"} {"_id": "44649", "title": "Better programming by programming better?", "text": "I am not convinced by the idea that developers are either born with it or they are not. Where\u2019s the empirical evidence to support these types of claims? Can a programmer move from say the 50th to 90th percentile? However, most developers are not in the 99th or even 90th percentile (by definition), and thus still have room for improvement in programming ability, along with the important skills. The belief in innate talent is \u201clacking in hard evidence to substantiate it\u201d as well. So how do I reconcile these seemingly contradictory statements? I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren\u2019t making me better as a developer."} {"_id": "65208", "title": "Have you ever organised a \"Code War\"?", "text": "The IT management classic PeopleWare suggests organised \"Code Wars\" as a way to boost morale. Has anyone ever put this into practice? How did you do it?"} {"_id": "214518", "title": "Managing p/t consultants like Open Source contributors?", "text": "I just assumed tech lead for a project which has had a problem with its very part-time (maybe 2-5 hours / week) contractors: * submitting code that brings down the production app, * pull requests that cannot be auto-merged, * writing misnamed tests that don't actually test what they claim, and * not writing tests for the new submitted features. A big problem is that these remote, part-time contractors aren't in the office to respond immediately if there's a problem with their code. Also, I'm the only full-time developer on the app, and we can't afford to have my time spent checking and debugging others' code. My idea is to adopt an open-source working style with them. I.e., would a big, important project (like Linux, Ruby, Rails, etc.) accept the proposed changes? We find out the criteria they have and then enforce that. We let go any contractor who doesn't play by the rules. Is this the way to handle the relationships? EDITED: To highlight the very part-time nature of the contractors. Less than 10 hours per week, usually 2-5."} {"_id": "214517", "title": "Are we queueing and serializing properly?", "text": "We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst-case (XML data contract serialization) and best-case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. _The server is at 16GB of memory usage and growing, just for queueing._ Performance also suffers when the memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and queue just an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need a higher throughput[1].
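To make the persist-then-queue idea concrete, here is a minimal sketch of the pattern that was attempted (my own illustration; the type, queue path and server URL are hypothetical, while the RavenDB and System.Messaging calls shown are the standard ones):

```csharp
using System.Messaging;              // classic MSMQ API
using Raven.Client.Document;         // RavenDB client

public class PendingMessage
{
    public string Id { get; set; }   // RavenDB assigns this on Store()
    public byte[] Payload { get; set; }
}

class PersistThenQueue
{
    static void Main()
    {
        var store = new DocumentStore { Url = "http://localhost:8080" };
        store.Initialize();

        using (var session = store.OpenSession())
        using (var queue = new MessageQueue(@".\private$\router"))
        {
            var msg = new PendingMessage { Payload = new byte[12 * 1024] };
            session.Store(msg);      // persist the 12-15 KB payload once
            session.SaveChanges();
            queue.Send(msg.Id);      // queue only the small identifier
        }
    }
}
```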
The concept worked very well in theory but performance was not up to the task. The usage pattern has one service acting as a router, which does all reads. The other services will attach information based on their 3rd party hook, and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for a while until the 3rd parties respond appropriately. The services right now account for this and have appropriate sleeping behaviors, as we utilize the priority field of the message for this reason. **So, my question is: what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment?** I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question. [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While performance of \"1000 per minute\" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages. =============== edit: ## Additional information Based on the comments, I'll add some clarification: * I'm not sure serialization is our bottleneck. I've benchmarked the application, and while serialization does show up in the heat graph, it's only responsible for maybe 2.5-3% of the service's CPU utilization. * I'm mostly concerned about the permanence of our messages and potential misuse of MSMQ. We are using non-transactional, non-persistent messages so we can keep queueing performance up, and I would really like to have at least persistent messages so they survive a reboot. * Adding more RAM is a stopgap measure. The machine has already gone from 4GB -> 16GB of RAM and it's getting harder and harder to take it down to continue adding more. * Because of the star routing pattern of the application, half the time an object is popped then pushed to a queue it doesn't change at all. This lends itself again (IMO) to storing it in some kind of key-value store elsewhere and simply passing message identifiers. * The star routing pattern is integral to the application and will not change. We can't application-centipede it because every piece along the way operates asynchronously (in a polling fashion) and we want to centralize the retry behavior in one place. * The application logic is written in C#, the objects are immutable POCOs, the target deployment environment is Windows Server 2012, and we are allowed to stand up additional machines if a particular piece of software is only supported on Linux. * My goals are to maintain current throughput while reducing memory footprint and increasing fault tolerance with a minimum outlay of capital."} {"_id": "127753", "title": "Defining classes in JavaScript that exist in your back-end", "text": "Doesn't it seem relatively duplicative to define your Models in your backend code AND on your front end for a rich internet application? I'm porting a GUI application I had written to have a web interface, which is all grand and nice and all, but things like Spine, SproutCore, JavascriptMVC would have you define your models and views and implement specific controllers.
Being that I've got a well defined MVC pattern on my backend code (which is making this super easy to port; the views in my app took python dicts and returned python dicts to the controllers, which could easily interface with the models; I can just convert these to JSON back and forth to speak to the web front end), why would I want to recreate the entire pattern again on the front end? What are good ways to work around this? Should I just say \"screw this\" and use something like http://pyjs.org? Should I write a bunch of code to export my models into JSON and then write some JavaScript code to build the Models on the front-end automatically so I'm still only defining them once? What would be the best approach to this?"} {"_id": "214512", "title": "An extensible logging architecture for Android?", "text": "The end goal is to have a variety of data sets that can all be graphically plotted against each other. All data should correspond to a date, so that when plotted against each other, they show the correct relationship through time. The key feature, though, is to be able to add a new parameter to log at any time, which will then be logged simultaneously and will then be able to be plotted against other data sets (from the time the latter began onwards). As a sub point, it also needs to be able to deal with sparse information which is sometimes available and sometimes not. An example of the data can be seen below. C is sparse data and D is a data field added half way down the table. ![Example of data](http://i.stack.imgur.com/1Q30u.jpg) I have considered SQLite, but the thought of all the column changes due to columns added by myself and users didn't appeal. Currently, I am thinking of creating an object which keeps track of time and contains a data field list of different data lists, such as A, B, C and D as in the image. The thinking being that to add a new data field, I could just add a new parameter (e.g. D) list to the data field list which already contains A, B, C. Then I could just serialize this into JSON to keep it persistent, and when I want to graph anything with sparse data, I could interpolate it. That is my current solution, but I would very much appreciate any opinions on better practices or flaws in my current solution."} {"_id": "127754", "title": "Is this a good code documentation practice?", "text": "I'm working on 2 projects `projA` and `projB`, and both projects are maintenance projects. For `projA` there are 3 programmers and for `projB` there are 2 programmers. In each case all are working on separate points. As code documentation is very important, can you tell me which one is the better practice? Examples: {Author : lee -version 1.7 -date 30 dec -asked by someGuy -description: to antialias a line} procedure antiAlias(thisline); begin //do the anti aliasing... end; or {*123} unit testone; .. .. .. procedure antiAlias(thisline); begin //do the anti aliasing... end; .. .. end. //end of the unit.. {*123} {Author : lee -version 1.7 -date 30 dec -asked by someGuy -description: to antialias a line} The `{*123}` represents the point I have worked on in the particular unit. So the next point `{*124}` will be at the end of the unit. This is done (very rarely though) as the code looks cleaner without the commented parts, with the long descriptions kept at the end."} {"_id": "209421", "title": "Best practice to store DateTime based on TimeZone", "text": "I am developing a web application that should allow users to schedule appointments based on their time zone.
I am storing the user's scheduled datetime as server time in a database field. When showing schedule information, I retrieve the value from the database and convert it to the user's time zone; in the code base I am converting the DateTime based on the user's time zone. Is this best practice, or is there an easier way?"} {"_id": "165637", "title": "Best Practice - XML To Excel", "text": "I have to read a big XML file with a lot of information. Afterwards I extract the needed information (~20 points (columns) / ~80 relevant data rows, some of them with subdatasets) and write it out to an Excel file. My question is how to handle the extraction (of unused data) part: * should I copy the whole file, delete the unused parts, and then write it to Excel? * or is it a good approach to create objects for each column? * should I write the whole XML to Excel and start to delete rows in Excel? What would be a performant and acceptable solution?"} {"_id": "173491", "title": "Releasing Mobile Application under multiple platforms, same time or?", "text": "I've always had this idea circling around, and now I am facing the issue myself. We made an Android app; it is ready, but we are planning to release the same app on iOS and possibly Windows Phone. Now, should we just release the Android app and promise the clients that the iOS version is coming soon (to create anticipation before release), or delay the release until the iOS version is ready? The same applies if we have a premium and a free version: should we release the free version and promise that the better premium version is coming soon, or release them both at the same time? EDIT: As requested: the app is a social app, so it depends on people's activity to succeed, and the manpower is the same. We have made the iOS version, but to compile it we are waiting for the company to get us better Mac machines (within a week for most)"} {"_id": "114423", "title": "Are there restrictions on table and column names in DB2?", "text": "I'm working with a DB2 database and I can't help but notice - these table and column names are really confusing! I realize that good names are important in every facet of software development, and it's quite possible that the non-intuitive names are unique to my experience, but someone told me that DB2 places severe restrictions on table and column names. **Is this true?** Or could it be that DB2 admins have traditionally followed a certain naming convention that seems foreign to SQL Server veterans like myself?"} {"_id": "114420", "title": "Patient Record Tracking System", "text": "## Background At a medical facility, staff can remove patient records (file folders) from a room. The room is locked using a standard tumbler lock (i.e., no swipe cards). The medical facility does not have much funding available. All staff members have a computer in their office (some computers are several years old). All staff members who have permission to fetch file folders have a computer and a known location. ## Problem A significant amount of time is spent physically \"tracking down\" the location of patient records. This is inefficient for the person trying to find the files and disrupts other staff who are queried during the search. Eventually (possibly years hence) the records will be digitized. This is a classic library book check-out problem. ## Solution Architecture This is what I am thinking: 1. Web service that tracks file folders by ID. 2. Print QR Code (or bar code) label stickers with ID. 3. Label all the physical patient record file folders. 4.
Equip all computers with an inexpensive scanner (e.g., a web cam). 5. Install QR Code (or bar code) reader software on each computer. 6. Write software that passes the scanned document to the code reader software (e.g., press CTRL-F12). 7. Write a web service that updates a file folder's location given an IP address. The part that remains is another web service that allows a person to search for an ID. The search result indicates the last known office where the folder was scanned. The tracking process becomes: 1. Staff takes file folder and returns to office as usual. 2. Staff holds folder to scanner, presses hot-key. 3. System audibly (and/or visually) acknowledges the new location. ## Question What inexpensive solution would you put in place to substantially reduce the time taken to track down patient records (and curtail disrupting other staff)? Ideas for hardware components (e.g., inexpensive bar code readers) are appreciated. Inexpensive bar code readers requiring resident driver software that takes several megabytes of RAM, however, would not be feasible (due to older hardware and budget constraints). The system likely does not need to be perfect, but it should offer a dramatic reduction in the amount of time it takes to locate a physical file folder, with minimal overhead. Thank you!"} {"_id": "114427", "title": "What to do after accidentally erasing many database entries?", "text": "Today at work I did an update to production. The update involved replacing old items from two tables with new data. I of course did a backup of the tables before that. Well, now 6 hours after getting home, I realized in the shower that there was a foreign key reference with cascade delete for many tables, and when I replaced the older data it also removed lots of other data -- I'm not even sure I know all of the data that got removed... The thing is, there are about 270 tables and I've been at the company for 3 months. I was supposed to do this update today and the CTO was away doing remote work. I tried to contact him via Skype and email, but he's away. I figured not to call him at this time of day (late evening here). What should I be doing here? It's the beginning of the weekend and I feel bad leaving the system in such a condition for two days... and especially because I screwed up things. The client is doing billion-dollar business and it's the biggest client the company has, so this was a major setback. If the hosting provider does not have nightly backups I will probably get fired. Damn I hate automatic magic like FKs. :( What would you do in my position?"} {"_id": "114424", "title": "How do you address the gap on your resume after a self-funded sabbatical?", "text": "There are a ton of questions on Programmers.SE about whether or not taking extended time off is a good idea and what to do during that time off to maintain your skill level: Will taking two years off for school in a related field destroy a mid level development career? Can I take a year off without hurting my career? If you take a year or two out from being a developer, is it really that hard to get back into it? Is taking a break in career to learn stuff \"a bad idea\" They've been really helpful, but I still have a question about the logistical details of taking a self-funded sabbatical. I'm coming to the end of a year off from working as a software developer (after 7 years in the field). I've taken this year to let myself explore interests that I'd never had enough time for: baking, sewing, photography, and making new and interesting friends.
During this time, I've also been working on pet development projects in technologies and disciplines that I would never have had the chance to explore otherwise. In addition, I've read all of those software development books I'd never had time to read and kept up with programming news and blogs. The development projects have all had an eye towards being part of a Micro ISV, but only one of the projects has made it to any sort of production stage. That project is impressive, but not very successful (yet!) I've got a reasonably active programming blog that I believe reflects a high dedication to software development and demonstrates that I haven't just put my programming skills on a shelf for a year. My question is: **What is the best way to transmit this information to a potential employer at the resume/cover letter level?** I'm reasonably confident that I can explain this sabbatical in an interview setting. I know that at the resume level, though, hiring managers will use any excuse they can to throw my resume out (I've done hiring and I would probably have thrown out my own resume if it didn't handle this time off really well; maybe it's karma.) So I feel like the resume/cover letter is the really tricky part in getting a kick-ass new job. I have a few ideas about what to do, but I'm not sure what the larger community sees as acceptable. Here are the approaches I'm considering: 1. Put a special section on my resume for the sabbatical time in which I outline the personal projects. If I do this, what's a good way to label it? 2. Create a personal company name and put these projects in the work experience section of my resume under that company. This seems like the most go-getter way to do it, but I'm worried it could be perceived as trickery when I get to the interview stage. Also, if I do this, what do I put as my title? 3. Leave my resume as-is and explain everything in my cover letter, with a link to my active blog. 4. There's probably something I'm not thinking of; I'm open to any other ideas. I know I've included a bunch of special-snowflake information about my personal situation, but I would prefer answers for the more general case. The personal details are here more as an example than as a request for answers designed specifically for me."} {"_id": "104258", "title": "How does one get a group of programmers together for a project?", "text": "I recently pitched a project I've been working on to a small company that I know, coincidentally, has a need for something similar. They seemed excited by what I have, and what I said I could do, but worried that, as a lone (actually, they said 'rogue') programmer, I could up and leave them with a non-functional program. They suggested that I get a few people together, write up a solid proposal, and meet with them again. They've promised office space and a good budget because this is an application that would greatly increase productivity for them. The problem is I'm still quite new to the field, I have no professional experience, and while I know I could complete this project, I don't know how to find other people to bring in on it. Does anyone have any experience in situations like this? Any ideas on where I could look? Also, the company and I are both located in Manhattan, if that helps determine where to look for people."} {"_id": "220048", "title": "Internal-use websites: Is there a compelling case against SQLite?", "text": "Many web frameworks, such as Flask or Django, use SQLite as their default database.
SQLite is compelling because it's included in Python, and administrative overhead is pretty low. However, most high-traffic public production sites wind up using a heavier database: MySQL, Oracle, or PostgreSQL. _The questions_ : Assume: * Site traffic is moderate, and concurrent read/write access to the database will happen * We will use SQLAlchemy with SQLite write locks (although this comment makes me a little nervous) * The database will contain perhaps 60,000 records * Data structures do not require advanced features found in heavier databases Is there ever a compelling case against SQLite concurrency for websites that serve as moderate-traffic internal corporate tools? If so, what conditions will cause SQLite to have concurrency problems? I'm looking for known specific root causes, instead of general fear / unsubstantiated finger pointing."} {"_id": "37294", "title": "Logging: Why and What?", "text": "I've never written programs which make significant use of logging. The most I've done is to capture stack traces when exceptions happen. I was wondering, how much do other people log? Does it depend what kind of application you are writing? Do you find the logs actually helpful?"} {"_id": "55649", "title": "Idea for a physics\u2013computer science joint curriculum and textbook", "text": "I want to write (and have started outlining) a physics textbook which assumes its reader is a competent computer programmer. Normal physics textbooks teach physical formulas and give problems that are solved with pen, paper and calculator. I want to provide a book that emphasizes computational physics, how computers can model physical systems, and gives problems of the kind: write a program that can solve a _set_ of physics problems. Third-party open source libraries would be used to handle most of the computation, and I want to use a high-level language like Java or C#. Besides the fact I'd enjoy working on this, I think a physics-computer science joint curriculum _should_ be offered in schools, and this is part of a larger agenda to make this happen. I think physics students (like myself) should be learning how to use and leverage computers to solve abstract problems and sets of problems. I think programming languages should be thought of as a useful medium for engaging in many areas of inquiry. Is this an idea worth pursuing? Is the merger of these two subjects in the form of an undergraduate college curriculum feasible? Are there any specific tools I should be leveraging or pitfalls I should be aware of? Has anyone heard of college courses or otherwise that assume this methodology? Are there any books/textbooks out there like the one I'm describing (for physics or any other subject)?"} {"_id": "93116", "title": "Handling \"backup\" files with Git", "text": "I have a project folder that has multiple duplicate files that are named slightly differently to create \"backups\". So I have \"file1.txt\" and \"file1 backup.txt\". Is there a preferred method to capture the old history when I make the folder into a git repository?"} {"_id": "86857", "title": "Should all, none, or some overridden methods call Super?", "text": "When designing a class, how do you decide when _all_ overridden methods should call `super` or when _none_ of the overridden methods should call `super`? Also, is it considered bad practice if your code logic requires a mixture of supered and non-supered methods, like the JavaScript example below?
var ChildClass = Class.create(ParentClass, { /** @Override */ initialize: function($super) { $super(); this.foo = 99; }, /** @Override */ methodOne: function($super) { $super(); this.foo++; }, /** @Override */ methodTwo: function($super) { this.foo--; } }); After delving into the iPhone and Android SDKs, I noticed that `super` must be called on _every_ overridden method, or else the program will crash because something wouldn't get initialized. When deriving from a template/delegate, _none_ of the methods are supered (obviously). So what exactly are these \"je ne sais quoi\" qualities that determine whether _all_, _none_, or _some_ overridden methods should call super?"} {"_id": "211101", "title": "Assert equality in mstest when types may differ", "text": "I've been working on some MSTest automated test infrastructure that is testing a tool that merges data sets into SQL Server database tables. The basic structure of the test is to: 1. Define the incoming dataset using anonymous types 2. Apply the data using the reconcile tool 3. Read records from the output tables 4. Compare result rows to input data, column by column Example input data: public class InputData : List<dynamic> {} // Inspired by Massive etc InputData input = new InputData() { new { ExternalID = 1, PropertyName = \"hello\", Agent = \"test\" }, new { ExternalID = (Int64)2, PropertyName = \"fred\", Agent = \"test\" } // Fixes the problem with a cast }; The issue I'm dealing with is that the type inferred for my anonymous objects will be an Int32, but the corresponding column in my target table is a bigint, and hence the record will have an Int64 value. As a result, when I use Assert.AreEqual across each column, it fails on any int fields: Assert.AreEqual failed. Expected:<1 (System.Int32)>. Actual:<1 (System.Int64)> You can see I have cast the int on my second anonymous object; this can be used to fix the issue. The primary aim of these tests is to make the sample data as slim and easy to read/write as possible, and I'd prefer to avoid the visual noise of all those casts. I'm thinking about the best way to deal with the assertions. It seems like I should use dedicated assertions based on type. I guess the real question is, how aggressive should I be in converting between types automatically?"} {"_id": "139028", "title": "How is software for machines such as ATMs or TVs built?", "text": "As a beginner programmer I've only worked with programming computer based applications, but a question has been coming to my head very often since I started programming and I can't get it answered properly. Machines don't act on their own; that's the programmer's job, he tells them what to do and when to do it, but my curiosity extends beyond desktop computers. I'll take the example of ATM software in this post, but keep in mind there are many others, such as a washing machine display, or a TV, mobile phone, you name it. How exactly is the software for these kinds of machines built? I imagine it can't be identical to computer-based programming. What language do they use to make such things work, and how does one get the job done? Are there programmers specializing in this kind of programming? What is the process of making these machines come to life?"} {"_id": "98127", "title": "Rewarding programmers based on application importance", "text": "Is it correct for management to give more importance or rewards to programmers who have worked on a strategic and important application versus someone who has worked on a general application?
Both may have put in the same effort."} {"_id": "98126", "title": "How can you avoid version nightmare?", "text": "I am in charge of deployment and implementation work. Due to my company's immature product, there is a version upgrade every few days. Each time there is an upgraded version, I have to notify the customers that they have to replace their old version of the product. What I do is reconfigure all the configuration files for the different customers, each of whom has specific needs, and finally email them the new version of the product and tell them... Doing all this work is boring. Does anyone have similar experiences? Any advice is welcome. More context: I get an updated version from the SVN repository; its configuration files are the defaults. Each customer needs several configuration files that differ from the other customers', but each customer's own configuration files normally stay unchanged. My work is to export several products from the SVN folder for each customer and copy their own configuration files in."} {"_id": "25385", "title": "What to do about \"next piece syndrome\"?", "text": "Inspired by this question: What to do about \"stopping point syndrome\"? I often find myself a half hour away from knocking-off time having just wrapped up a component of my work, and facing a \"next piece\" that is going to take more than half an hour. (That's happening right now, as a matter of fact. Hence my surfing stackexchange during business hours!) What do you do? Launch into the new piece knowing you'll have to stop in the middle? Set it aside and twiddle your thumbs until the clock runs out?"} {"_id": "167675", "title": "Do App Stores Have Exclusivity on Your Apps?", "text": "Let's say I write a mobile app and I want to sell it on all three: 1. Apple Market Place 2. Android Market Place 3. Windows Market Place Will any of these marketplaces tell me that it's not possible, that in order to sell it I must only go through them and not through the other ones? P.S. I know Angry Birds sells on all three. But how is that? I mean, don't books, for example, have exclusivity deals with their distributors?"} {"_id": "2185", "title": "When is the time right to bring a project to the alpha/beta/public phase?", "text": "When should a project be released to alpha, beta and to the public? Is it a good idea to extend the alpha and beta phases when needed? When in a later phase (e.g. beta), is it wise to go back to an early phase (e.g. alpha) if it didn't work out?"} {"_id": "209390", "title": "Pair Programming/Collaboration in a small company", "text": "I work at a small development company as the lead developer. We have two other developers, as well as my boss, who is a developer but doesn't really do much of the actual coding anymore. The problem I am trying to overcome is multifaceted. We all have a tendency to work on our own projects without much collaboration between us. As a matter of fact, I (as the most advanced developer) ask for the others' opinion/help more than they do mine, because I value the input of an outside eye. I want to increase our collaboration, and have expressed that to them, in large part because I'd like to show them some things about how to become better developers and follow better practices. But given our other developers' personality types, I think they are more comfortable working alone. I've been reading about pair programming, and I have read (in some forums) that it doesn't work well when you have one developer being more advanced than the others (which I am).
And yet, I feel it is imperative that we start to collaborate so that our work is not so disparate. My question is whether anyone has ever been in a similar situation, and what worked for them. I realize this is not a one-size-fits-all situation, but I'm willing to give multiple approaches a shot. We all work in a common area; developers don't have individual offices / cubicles."} {"_id": "209397", "title": "Efficiently \"moving\" data upward through a communication stack", "text": "I have implemented an application protocol stack that moves an incoming stream of data upward through several layers, as follows: 1. **copies** a TCP segment from an OS buffer to `my_buffer`. 2. after identifying a record boundary, splits the record in `my_buffer` into tab-separated strings which are **copied** into `deque<string> my_deque` (rather than into a vector because I then have to immediately pop a couple of fields from the front) 3. **copies** `my_deque` to `vector<string> my_records[n]` where it is presented without further copying to the application. I'm wondering whether it's best to stick to a clean architecture (layering) and pay the price (whatever that is) of copying payload from layer to layer, or if there are some simple optimizations that people use. Is it customary to use separate buffers for separate layers, or is this something which can easily be optimized away without dangerously compromising the independence of the layers?"} {"_id": "222575", "title": "Breaking the \"ubiquitous language\" by having an IoC Container in Domain Model?", "text": "I am a bit new to DDD, so bear with me if my understanding seems way off. My question is about Udi's solution to domain events, particularly the class `DomainEvents` (see code below). An excerpt from Udi's code follows; it lives in the domain model as a static class. public static class DomainEvents { [ThreadStatic] //so that each thread has its own callbacks private static List<Delegate> actions; public static IContainer Container { get; set; } //as before //Registers a callback for the given domain event public static void Register<T>(Action<T> callback) where T : IDomainEvent { if (actions == null) actions = new List<Delegate>(); actions.Add(callback); } //Clears callbacks passed to Register on the current thread public static void ClearCallbacks () { actions = null; } //Raises the given domain event public static void Raise<T>(T args) where T : IDomainEvent { if (Container != null) foreach(var handler in Container.ResolveAll<Handles<T>>()) handler.Handle(args); if (actions != null) foreach (var action in actions) if (action is Action<T>) ((Action<T>)action)(args); } } Based on the code above, in order for the `DomainEvents` to be used by the domain model, both must first be in the same assembly. Which makes the `DomainEvents` part of the domain model, right? (I may be wrong here) So my question is: **Does `DomainEvents` itself break the _\"ubiquitous language\"_ rule of DDD?** Because its implementation does not pertain to any domain. My other concern is that the static member `IContainer` creates an IoC-container dependency in the domain model. Though I am not really sure if Udi's `IContainer` is an interface or an actual IoC container. My 2nd question is: **What is this `IContainer` in the `DomainEvents` class?** If it is truly an IoC container, then doesn't it break the rule that _\"DDD should not have infrastructure in the domain\"_? Is my understanding correct that an IoC container is considered infrastructure? (Please correct me if I'm wrong) If you find any of this confusing, please say so.
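To make the coupling concrete, here is a minimal sketch of how domain code typically ends up calling the static class (my own illustration rather than a quote from Udi's post, though it uses his well-known CustomerBecamePreferred example; the entity's internals are hypothetical):

```csharp
public class CustomerBecamePreferred : IDomainEvent
{
    public Customer Customer { get; private set; }
    public CustomerBecamePreferred(Customer customer) { Customer = customer; }
}

public class Customer   // a domain entity
{
    public void MakePreferred()
    {
        // ...domain logic...
        // The entity reaches out to the static DomainEvents class,
        // which is why that class has to be visible to the domain model.
        DomainEvents.Raise(new CustomerBecamePreferred(this));
    }
}
```

It is this direct call from an entity that pulls `DomainEvents` (and, transitively, its `IContainer` member) into the domain model's reach.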
**EDIT:** I have built my projects where the domain model is separated into its own assembly (I call this the business layer) with absolutely no references to any infrastructure components. See onion architecture. ![enter image description here](http://i.stack.imgur.com/xN3GJ.png) Now I want to incorporate the domain events pattern. But doing so forces me to add infrastructure components to my business layer. The components being the `DomainEvents` class and an IoC framework just to satisfy the `IContainer`, both having no relation to the domain whatsoever. **Isn't one of the ideas of DDD to separate the infrastructure from the domain?** Now I will play the pragmatic programmer: I just want to know, is it generally OK to do so? Are there alternatives? What are your thoughts on this approach? Am I missing something basic here?"} {"_id": "222576", "title": "Critique the Structure of my Horse Racing Betting Platform", "text": "I am creating a program (mostly just for fun) that displays live prices for horse racing markets and the prices that several models predict they should be. I am very interested in the optimal way to structure this kind of data. I need a `Dictionary` to store `<raceId, data>` where the key (raceId) is a number I generate (to get rid of old races from the dictionary when the race is over). The values (data) consist of the actual prices for each selection in the race and the prices of 3 separate models that correspond to each selection. The dictionary will contain 20-60 races depending on the time. Here is an example of the data for each race: ActualPrice ModelAPrice ModelBPrice ModelCPrice 3.25 3.26 3.29 3.20 5.60 5.63 5.55 5.57 etc. The number of rows is usually around 40. Some of the models don't offer prices for some of the rows, but for each actual price there is at least 1 model price. Additionally, for each race I store strings for the RaceCourse, StartTime, Location, etc. I also store marketIds in the form of ints; these correspond to several rows at a time (these are used to place the actual bets), and there are about 6 marketIds per race. I was using a class called RaceData to store all the data in, made up of fields for strings (e.g. RaceCourse, Location), ints (e.g. WinMarketId, PlaceMarketId) and doubles (e.g. HorseA_ActualPrice, HorseA_ModelAPrice, etc). So my dictionary is made up of `<int, RaceData>`. But I was wondering if instead it would be better to store the data over 3 Dictionaries. `Dictionary<int, Dictionary<string, string>>` > Example of usage: Use RaceId=102313 and StringName=\"RaceCourse\" to get Value=\"Flemington\" `Dictionary<int, Dictionary<string, int>>` > Example of usage: Use RaceId=131231 and MarketName=\"WinMarket\" to get MarketId=1321313 `Dictionary<int, Dictionary<string, double>[]>` An array of dictionaries (one for each column of the data). > Example of usage: Use RaceId=12313 and RowName=\"HorseAWinPrice\" to get Price=3.25 I'm not really sure if an array of dictionaries is needed for the last one, as I could just change the keys to have the column at the front, i.e. \"ModelA_HorseBWinPrice\" or \"Actual_HorseBWinPrice\". Which one is better? Speed is a major concern when I have ~60 races and ~200 primitives for each. I have to perform several basic calculations using the values in each row for all of the races. I also need to display the race name/time in a listbox, and for the selected item in the listbox, display all of the prices on the form (in several listboxes). It is worth noting that each listbox corresponds to a marketId.
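For concreteness, here is a minimal sketch of the two layouts under discussion (my own illustration; the class, field and key names are lifted from the examples above, and the third lookup is shown with prefixed keys rather than an array of per-column dictionaries, as suggested just above):

```csharp
using System.Collections.Generic;

// Option 1: one flat class per race, behind a single lookup.
public class RaceData
{
    public string RaceCourse;              // e.g. "Flemington"
    public int WinMarketId;                // e.g. 1321313
    public double HorseA_ActualPrice;      // e.g. 3.25
    // ...and so on, ~200 fields in total
}

public class Layouts
{
    // Option 1: Dictionary<raceId, RaceData>
    public Dictionary<int, RaceData> Races = new Dictionary<int, RaceData>();

    // Option 2: three parallel lookups, all keyed by raceId
    public Dictionary<int, Dictionary<string, string>> Strings =
        new Dictionary<int, Dictionary<string, string>>();  // "RaceCourse" -> "Flemington"
    public Dictionary<int, Dictionary<string, int>> Markets =
        new Dictionary<int, Dictionary<string, int>>();     // "WinMarket"  -> 1321313
    public Dictionary<int, Dictionary<string, double>> Prices =
        new Dictionary<int, Dictionary<string, double>>();  // "Actual_HorseAWinPrice" -> 3.25
}
```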
Which is the better solution, or can you suggest a better one that I have not put forward?"} {"_id": "57645", "title": "What could be a reason for a cross-platform server application developer to make his app work in multiple processes?", "text": "We are considering developing a server application that will be heavily loaded, messing with big data streams. The app will run on one powerful server. The server app will be developed as a cross-platform application, working on Windows, Mac OS X and Linux. So: same code, many platforms, for a stand-alone server architecture. We wonder what the benefits are of distributing applications not only over threads but over processes as well, for programmers and server end users. Some people have told me that even with 48 cores, the 4 threads of one process would be shared by the OS across all cores; is that true?"} {"_id": "237805", "title": "Can I scrape a website for font stylings?", "text": "I am trying to scrape websites for valuable text, for example the title of an article, the author's name, and other distinguished text. I cannot always guarantee that this sort of text will have informative tags, but this needs to be done as quickly as possible. As a possible shortcut I think that I could just try to pull the text with unique styling. The title is normally bigger than the body text, and the by-line is normally smaller. Is there a way to quickly pull all of the font stylings of a page and then rank them by size and how often they are used?"} {"_id": "220590", "title": "Is using static-typing the solution to domain-driven design and decreasing the number of errors?", "text": "We are using PHP (a dynamically-typed language) in our project. However, I have found my colleagues asking questions such as http://stackoverflow.com/questions/20438322/modeling-a-binary-relationship-between-two-types. I\u2019m feeling like we have a paranoia that we are going to get unexpected errors (e.g. if you look at that example question, you will see that the poster is asking how to ensure that a `like` does not get initialized for a `Person`), and we are emulating static typing (something that doesn't exist in PHP) along with unit tests to help ourselves rest easy at night. :-) Now, the thing that tells me our approach is wrong is the number of big websites written using dynamically-typed languages: Google uses Python extensively, Facebook is written using PHP, Twitter and GoodReads are written using Ruby. So what I feel is that web applications\u2014at the very least\u2014are moving toward dynamically-typed languages. However, no matter how I try to comprehend this, I can\u2019t. Don\u2019t these guys have trouble comprehending the domain? Don\u2019t they have problems that arise when using dynamically-typed languages (e.g. \u201cProperty `whatever` is not defined on this object.\u201d)? If they do, how do they deal with those while keeping the agility that comes with using dynamically-typed languages?"} {"_id": "153457", "title": "amount of code in code review", "text": "In my experience, most teams in a company review a team member's code in small batches, always less than a few hundred lines. Is it appropriate to review a large amount of code, for example a whole module, when the code is complete and ready to be reviewed once and for all?"} {"_id": "106095", "title": "Being stupid to get better productivity?", "text": "I've spent a lot of time reading different books about \"good design\", \"design patterns\", etc.
I'm a big fan of the SOLID approach, and every time I need to write a simple piece of code, I think about the future. So, if implementing a new feature or a bug fix requires just adding three lines of code like this: if(xxx) { doSomething(); } it doesn't mean I'll do it this way. If I feel like this piece of code is likely to become larger in the near future, I'll think of adding abstractions, moving this functionality somewhere else and so on. The goal I'm pursuing is keeping average _complexity_ the same as it was before my changes. I believe that, from the code standpoint, it's quite a good idea: my code never grows too long, and it's quite easy to understand the meanings of the different entities, like classes, methods, and relations between classes and objects. The problem is, it takes too much time, and I often feel like it would be better if I just implemented that feature \"as is\". It's just about \"three lines of code\" vs. \"new interface + two classes to implement that interface\". From a product standpoint (when we're talking about the _result_ ), the things I do are quite senseless. I know that if we're going to work on the next version, having good code is really great. But on the other side, the time you've spent to make your code \"good\" may have been spent implementing a couple of useful features. I often feel very unsatisfied with my results - good code that can only do A is worse than bad code that can do A, B, C, and D. Are there any books, articles, blogs, or ideas of your own that may help with developing one's \"being stupid\" approach?"} {"_id": "222688", "title": "Using std::sort et al. with user-defined comparison routine", "text": "In the evaluator of a custom language, I would like to replace our own sort routines with `std::sort` or other routines, possibly `tbb::parallel_sort`. The problem is that we allow users of the language to supply their own comparison routine, and in the implementations I have tried, `std::sort` does not take kindly to routines that fail to be a strict weak ordering. In particular, it quickly starts looking at \u201celements\u201d outside the iterator range to sort. I assume that if I put an indirection layer on top of the iterators, I could avoid that by using virtual sentinels, but there is no reason to assume that the resulting calls would necessarily ever terminate. So, given a black box `bool f(widget const &a, widget const &b)` and a non-user-controlled total order `operator<(widget const &a, widget const &b)`, what would be the minimal amount of calls I would need to make to get a sort call that does terminate and that does order according to `f` if that is, in fact, an order? It looks to me like the following should work, but I am hoping that I could get by with fewer calls to `f` by some clever method, possibly remembering previous comparison calls: bool f_stabilized(widget const &a, widget const &b) { bool fab = f(a, b); bool fba = f(b, a); return (fab != fba) ? fab : (a < b); } Would it be reasonable to start out by just calling `f` and only after seeing n^2 calls for a list of length n to fall back to such a \u201cstabilized\u201d version? I realize that there is no reason to assume the result would be correctly ordered and I would need to start over from the beginning with such a wrapper."} {"_id": "106092", "title": "Is it okay to have many Abstract classes in your application?", "text": "We initially wanted to implement a Strategy pattern with varying implementations of the methods in a common interface.
These will get picked up at runtime based on user inputs. As it turned out, we have **abstract classes implementing 3-5 common methods** and **only one method left for a varying implementation**, i.e. the Strategy. _Update: By many abstract classes I mean there are 6 different high-level functionalities, i.e. 6 packages, and each has its Interface + AbstractImpl + (series of Actual Impls)._ Is this a bad design in any way? Any negative views in terms of later extensibility? I'm preparing for a code/design review with seniors."} {"_id": "102964", "title": "Portable programming style", "text": "I'm a Python programmer who prefers to develop on Windows but still ends up deploying to a Linux server. I've just finished writing a little script: stuff that downloads files from a site, generates a sitemap, gzips it, pings the search engines and emails the response code. Nowadays most GNU tools are available and compiled also for Windows, and I sure do use Wget and Grep whenever I need to. Until now I've always tried to implement the functionality I needed in Python (gzipping and opening URLs require very few lines of code), but I found myself thinking that maybe I could write more resilient code if I didn't reinvent the wheel and just scripted everything bash-style, where a big part of the functionality is delegated to external processes such as mail, wget, curl, etc., and the other tools *nix makes generally available. What is your take on this? When you target *nix, do you glue tools together or do you tend to implement all the functionality in your scripting language of choice?"} {"_id": "60073", "title": "What ways are there to determine if an idea for change is viable or not?", "text": "A recent discussion on here about whether or not program windows should still be called screens or if we should have improved terminology got me thinking... Dangerous, I know! People as a whole tend to be fairly resistant to change. We get comfortable in our niches and used to the way things are. While some changes lead to good results and improve our lives or the way things are done, others are clearly not enough of a change or overall bad and not even worth attempting. What guides can we use as we program to determine if an improvement (whether it be to coding style, terminology, user interface, language use, etc.) is really an improvement or not? I'm sure to some extent nothing will replace the try-it-out approach, but are there any tests or guides that can be used to eliminate certain ideas that would eventually turn out to be worthless or a waste of time to pursue? EDIT: For anyone who is wondering, the discussion that brought this question up in my mind is found here: Does your organization still use the term \"screens\" to describe a user interface?"} {"_id": "186387", "title": "Does it violate the DRY principle to use an MVC server-side framework and a client-side MVC framework", "text": "When using an MVC pattern for server-side code (in my case Django), the model definition is defined once in the model component. When using a client-side MVC-based library (in my case Backbone), the model definition or some subset of it is **redefined**. If I were to make a change to my server-side model definition, let's say add a field to a model, then I would have to make a similar model change to my client-side model definition to include that new field. Is this a violation of the DRY principle?
Should server-side model definitions auto-generate client-side JavaScript model definitions, or is this an acceptable violation of the DRY principle?"} {"_id": "251634", "title": "Best practices in designing a social network app?", "text": "What are best practices when it comes to designing a social network app for Android (possible iOS port later depending on uptake)? It's my first time architecting something of this scale from scratch, and I'm probably already making mistakes in the design phase. What I already have in mind is to: 1. Have a PostgreSQL backend to store customer data. I'm going to set up one 'app' user to handle all the incoming RESTful requests, which leads to my next point... 2. Connect the database to the app through a RESTful interface (I don't think the platform is overly important, but I'm looking at either Go or Puma/Sinatra or Flask, as I want to minimise overhead and keep everything as efficient as possible). 3. Authenticate users using whatever authentication plugin is suitable within the RESTful interface, and then... 4. Tunnel all traffic through an SSL socket which I'll open on the app, which will connect to the interface (this is one of the areas where I don't have a great amount of knowledge/experience). 5. Load data pertaining to other users and store it locally in an encrypted SQLite db on the Android device, which I will create as part of the app start-up process. Images will be the only media type I'll allow. I'll wipe the SQLite db each time the user logs in to the app or once it reaches a certain size, i.e. over 3 MB. 6. Make the app subscription-based, so along with checking the user credentials each time they log on I'll check their subscription status and enable different behaviour accordingly. 7. Deal with heavy-load connection issues by putting PgBouncer on the db. 8. Store the user images on my server as normal image files, and when the relevant users are selected I'll send them to the app over the socket. 9. Run the entire server as a VM, possibly from CoreOS/Docker machines, which sounds like a good fit from what I've read. Is there anything wrong with any part of my intended design? Is there anything necessary I've missed out?"} {"_id": "251630", "title": "Should client ideas about the UI turn into User Stories?", "text": "I'm trying to learn about Agile methodologies through the Head First Software Development book, and after reading about it, I've tried to apply the concept of User Stories to a recent mini-project (a set of new features added to an existing application). Besides the fact that I have a hard time not thinking in terms of \"tasks\", I've also had trouble with the fact that the client has asked for things in a way that, to me, includes implementation. For example, instead of asking \"we want Limited Users to have an Address now, just like Regular Users\" and leaving the rest to us, they would say \"we want Limited Users to appear in the same list as Regular Users and we want them to have an Address too, and it should be editable in the same Address Editing screen as Regular Users\". It seems to me that User Stories are meant to express a business need, but not its technical implementation, nor its UI implementation. That said, does it make sense that a client would impose its vision of the UI and have that be their business need? How would I turn the client's request into one or more User Stories? Should I say \"woah there! Tell me your ultimate goals and we'll see how we can get the UI to accommodate them later\"?
Should I make multiple stories, one for the main business need (Limited Users have an Address) and additional ones for the UI requests? Should I just make the business need a single story and have other concerns as details / acceptance tests? Admittedly, I'm quite wet behind the ears when it comes to Agile and User Stories, and I have trouble defining how much goes into them and where the rest would go too."} {"_id": "237808", "title": "MVC: Controller often simply delegates to Model when notified by View of GUI events. Is this reasonable?", "text": "Since I learnt about MVC, I have used it for every app I made (which is arguably not the best idea, but that's not the topic of this question). All of them are small, ~1000 LoC apps. I am using Java and Swing for the GUI. What usually happens is this: The view (the GUI class) reports to the controller any GUI event (most commonly a button click). For example, when a button is pressed, the view simply calls `controller.someButtonPressed()` or `controller.someOtherButtonPressed()`. Its only reaction to user input is reporting to the controller, nothing else. This, I think, is fine and is a proper MVC view implementation. The part I'm having doubts about is the following: In the controller's `someButtonPressed()` methods, it very often simply delegates to the model. For example: public void someButtonPressed() { model.doTheAppropriateThing(); } Nothing more. No 'decision making' or actual 'interpretation' of what the view reported. Very often, only simple delegation to the model. **Is it considered reasonable, when implementing MVC structures and specifically controllers, to have the controller often simply delegate directly to the model in reaction to GUI events?** Or does this signal that maybe I'm doing something wrong?"} {"_id": "193630", "title": "Summary of differences between Java versions?", "text": "What are the major differences between Java versions in terms of software development? Where can one find a summary of the most important changes related to programming? Release notes such as http://www.oracle.com/technetwork/java/javase/releasenotes-136954.html can be hard to read. For example, the \"for each\" loop construct was new in Java 1.5."} {"_id": "193638", "title": "Why didn't == operator string value comparison make it to Java?", "text": "Every competent Java programmer knows that you need to use String.equals() to compare a string, rather than ==, because == checks for reference equality. When I'm dealing with strings, most of the time I'm checking for value equality rather than reference equality. It seems to me that it would be more intuitive if the language allowed string values to be compared by just using ==. As a comparison, C#'s == operator checks for value equality for strings. And if you really needed to check for reference equality, you can use String.ReferenceEquals. Another important point is that Strings are immutable, so there is no harm to be done by allowing this feature. Is there any particular reason why this isn't implemented in Java?"} {"_id": "199771", "title": "Should Cross browser testing be explicitly mentioned in the scope of a project?", "text": "I do freelance web development, and front-end dev is not my strongest point. This question came to me in my recent fixed-bid project. Nowadays we use jQuery and Bootstrap, and these take care of lots of cross-browser stuff. Still, there are lots of errors in different browsers, and fixing them requires testing tools and a whole lot of other things to worry about.
Should it be explicitly mentioned in a contract that the resulting site should be tested for cross browser compatibility? Or does it come by default in the scope of the project? How can we estimate hours spent on cross browser compatibility? Edit: Some great answers here. But how should a scenario be handled when no browser discussions happened at all at the time of signing the contract?"} {"_id": "167093", "title": "What is the best way to go about testing that we handle failures appropriately?", "text": "We're working on error handling in an application. We try to have fairly good automated test coverage. One big problem, though, is that we don't really know of a way to test some of our error handling. For instance, we need to test that whenever there is an uncaught exception, a message is sent to our server with exception information. The big problem with this is that we strive to never have an uncaught exception (and instead have descriptive error messages). So, how do we test something that we never want to actually happen?"} {"_id": "199778", "title": "Is MVVM in WPF outdated?", "text": "I'm currently trying to get my head round MVVM for WPF - I don't mean get my head round the concept, but around the actual nuts and bolts of doing anything that is further off the beaten track than dumb CRUD. What I've noticed is that lots of the frameworks, and most/all blog posts, are from 'ages' ago. Is this because it is now old hat and the bloggers have moved on to the Next Big Thing, or just because they've said everything there is to say? In other words, is there something I'm missing here?"} {"_id": "93806", "title": "Should I patent my software?", "text": "I go to a university where students are allowed to make their semester schedule based on the information about the subjects they are going to take, that is, the hours that the courses are available, the professors, and the remaining room for other people. Making these schedules by hand was a very difficult/boring task. I wrote a pretty nifty Python program that automates this process. You pick the codes for the subjects you're going to take and filter out the professors you don't want. Then the program outputs all the possibilities there are if there aren't time conflicts. This program helped a lot of students. The time to make a schedule was reduced from 2 days to less than 30 seconds! Now here begin the problems. My family and all the people that used the program tell me to patent the program before someone steals the idea (that could happen in my country). But I question that myself. Is it necessary to patent a web scraper mixed with a backtracking engine? It was difficult to make the program because I didn't know a lot of things, but now that I have finished, I feel that it would be very stupid/immature to patent such a thing. But on the other hand, I don't want someone else to get the credit for it. What do you think?"} {"_id": "93801", "title": "Are Concurrency Abstractions Emulating UNIX Processes?", "text": "OK, I was pondering this today, and I've come to ask for _completely subjective and biased_ opinions on it. Paradoxically, despite this, I don't think it's flame-war fodder either. I think there is room for perfectly civilized conversation -- it's hardly Vim vs Emacs. I've used a lot of concurrency abstractions, specifically those built on top of threads. There's a big trend among them, whether it be message-passing, immutable-by-default, or thread-local-by-default or other ones.
The trend is that they're all reversing the idea of threads by making data sharing explicit rather than implicit. That is, all data is not shared unless otherwise specified, which is the opposite of traditional threading found in languages like Java. (I know Java supports its own higher-level concurrency abstractions though.) For example, you explicitly pass messages. You explicitly state which variables are thread-local. You explicitly state which variables are mutable. These are just a few examples found in some languages. I thought this easy style of concurrency was a modern concept. I was mistaken. I started playing around with UNIX toys like fork() and pipe() recently, and I was shocked to discover that effortless, explicit concurrency mechanisms have been around since the start of UNIX. There's something a little strange about realizing that C + UNIX from the '70s makes concurrency _far_ easier than many modern trendy threading mechanisms. So, here's what I'm wondering... are these modern thread abstractions simply trying to emulate UNIX-style processes on top of threads, with all of their explicitness and not-shared-by-default traits? I know that some mechanisms like STM offer things like DB-style transactions, which is genuinely a modern and innovative solution for concurrency, but most just seem like new ways of doing what UNIX coders were doing a long while back. Just to say, I'm not a UNIX fanboy, not by any stretch of the imagination. I'm aware that processes are far slower than threads to initialize on many platforms, but I'm talking about this from a conceptual basis."} {"_id": "191165", "title": "How to model and query spatial responsibilities of company local branches", "text": "What are best practices to model and query the spatial responsibilities of a company's local branches? My company has many local branches which are all, in general, responsible for one city. This is the easiest case. However, there are some exceptional cases. Here's a list of all possible cases: * branch a is responsible for city a (general case) * branch a is responsible for cities a, b and c * branch a is responsible for districts 1 and 2 of city a, whereas branch b is responsible for districts 3, 4 and 5 of city a * branch a is responsible for city a and for Foostreet of district 2, whereas branch b is responsible for district 2 of city a * branch a is responsible for city a and for odd housenumbers of Foostreet of district 2, whereas branch b is responsible for district 2 of city a * branch a is responsible for city a for all customers whose surname begins with letters A-F, whereas branch b has G-K, and branch c has .... * branch a is responsible for city a in case of handling Foos, whereas branch b is responsible for city a in case of handling Bars These are the possible responsibility cases, which could of course also be mixed. The planned system should provide the answer to the question: \"Which branch is responsible for city a (or b or c ...)?\". If there are exceptional cases, then the question has to be more specific. Is it possible to provide hints like: \"City a has more than one branch, please choose a district!\" or \"For which case are you asking? Responsible branch for handling Foos, Bars, ....?\" The question most likely to be asked is: \"Which branch is responsible for Foostreet 345 in 12345 Foocity?\" My requirements are Java Enterprise and a MySQL database. It may be possible to use MongoDB or Neo4j if it really makes more sense.
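To make the discussion concrete, here is a rough sketch of the direction I was considering: a flat rule table where unset fields mean \"don't care\" and the most specific matching rule wins. All the names here are placeholders I invented, not an existing design:

    // Hypothetical sketch: one object (or table row) per responsibility rule.
    // Null fields act as wildcards; the matching rule with the most non-null
    // fields is treated as the most specific one.
    public class ResponsibilityRule {
        String branchId;          // e.g. \"branch-a\"
        String city;              // null = any city
        String district;          // null = any district
        String street;            // null = any street
        Boolean oddHouseNumbers;  // null = both parities
        Character surnameFrom;    // null = any surname
        Character surnameTo;
        String caseType;          // e.g. \"Foo\", \"Bar\"; null = any case

        int specificity() {
            int s = 0;
            if (city != null) s++;
            if (district != null) s++;
            if (street != null) s++;
            if (oddHouseNumbers != null) s++;
            if (surnameFrom != null) s++;
            if (caseType != null) s++;
            return s;
        }
    }

Querying would then mean selecting all rules matching the request, and if more than one branch remains at the highest specificity, asking the user for the missing detail (district, case type, ...). I don't know whether this scales to all the mixed cases, which is part of my question.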
I already have a database with every city, district, zipcode and street, with which the data could be \"connected\". I hope this question doesn't sound like \"Please do my job for me.\", but my experience in spatial domain modelling is almost zero."} {"_id": "191166", "title": "Upgrading to a newer version of the compiler", "text": "I had legacy code that was originally built for a quite old version of the compiler. We are talking about native code, not managed. Now it is ported to almost the newest version of the compiler. Every compile error was fixed and now the product is properly compiling and running. But: * although it compiles and even runs... how can I be sure it is working as expected? * can I be sure that the underlying runtime does not change in a way that introduces new and unexpected runtime bugs? * what are best practices when upgrading to a newer version of the compiler?"} {"_id": "221884", "title": "What format is the data going to Windows print drivers?", "text": "I have been tasked with writing a print driver. I have no experience with this and have been researching it for a few days. The goal of this is to essentially write the data coming to the driver to a file and then do some additional processing of it. If I understand it correctly, I can start other processes to do most of the work, but I'll need the driver itself to generate the initial file/data. I am confused about what state the data is in when it comes to the driver from an application. Is this something that is determined by the application, or is there a standard of some kind?"} {"_id": "221887", "title": "Is lucene.net/solrnet a good solution for searching a list of names with fuzzy matching?", "text": "At the moment, we're using SQL Server full text search, but it's too inflexible. The main thing we do is look up names of people from a database based on a search query. The searches need to be fast, and they need to be fuzzy. SQL Full Text Search doesn't really support fuzzy matching, especially when combined with the thesaurus option. Therefore I need a better solution. My research suggests that Lucene and Solr are widely used enterprise solutions, but my searching suggests these are more designed for indexing things like documents and webpages, or what they refer to as 'unstructured data'. Our data is very well structured, and therefore I'm unsure if they're suitable for this type of work or if I should be investigating another product. According to the book I have, Solr 1.4 Enterprise Search Server, it supports all of the above except for prefix matching out of the box; however, it states there are performance issues with substring searches. Do you think that Solr/Lucene is a good technology to investigate for solving my problem? If not, do you have an alternative? Any advice is welcome.
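For what it's worth, the kind of matching I'm hoping for is roughly what Lucene's FuzzyQuery promises, i.e. edit-distance matching on a single field. A minimal sketch against the Java Lucene API (which lucene.net mirrors; the \"name\" field and the maxEdits value are just assumptions on my part):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TopDocs;

    public class FuzzyNameSearch {
        // Finds names within 2 edits of the query term, e.g. \"smiht\" -> \"smith\".
        public static TopDocs findSimilarNames(IndexSearcher searcher, String name) throws Exception {
            FuzzyQuery query = new FuzzyQuery(new Term(\"name\", name), 2); // maxEdits: 1 or 2
            return searcher.search(query, 20); // top 20 hits
        }
    }

(Older Lucene versions take a float similarity instead of an integer edit distance, so the exact constructor depends on the release in use.)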
I am a .NET developer, hence solrnet rather than Solr."} {"_id": "102340", "title": "How to structure multiple overlapping solutions / projects in .Net?", "text": "I recently started working for a new client with an _old legacy_ codebase in which there are multiple .net solutions, each of which typically hosts some projects unique to that solution but then \"borrows\" / \"links in\" (add existing project) some other projects which technically belong to other solutions (at least if you go by the folder structure in TFS). I have never seen a setup this intertwined: there is no clear build order, sometimes a project in solution A just references dlls directly from the output directory of a project hosted in solution B, and sometimes the project has just been included directly even if it resides FAR away in the folder structure. It looks like everything has been optimized for developer laziness. When I confronted them with why they didn't have a CI server, they responded that it is hard to set up with the code organized like it is. (I'm setting it up now and cursing this code organization.) The solutions are organized around deployment artifacts (stuff that needs to be deployed together is in the same solution), which I _think_ is a wise decision, but the contents of those solutions (the projects) are all over the place. Is there a consensus on a best practice to use when reusing common class libraries across multiple solutions / deployment artifacts? * How to structure the code in the VCS * How to facilitate sharing of business logic between separate deployment artifacts"} {"_id": "102346", "title": "Difference between statechart and sequence diagram", "text": "I realise these two diagrams are very similar, with the obvious difference being that one models the sequence of a certain function, whilst the other models the state throughout a function being carried out. The differences that I have identified are as follows - probably incorrect: 1. A state chart is more of a logical view of any functionality, showing a wider array of deviating paths - however, saying that, sequence diagrams also have the ability to provide alternate (alt) paths, conditions, loops etc. 2. A sequence diagram is aimed at one specific function, e.g. withdrawing money from your bank account, whereas a state chart can model a whole system. State chart example & Sequence diagram example"} {"_id": "102347", "title": "How do you manage cross-class dependencies on destruction/design (more of a C++ question)", "text": "So if I understand correctly, from SOLID design principles, every class should keep a single responsibility. So there should be one class that creates and manages a resource, a second class that does resource processing/updating, a third class that connects the resource to some other resource, and so on. The question is, how do you cleanly express that in code? Because, in C++, the class destruction order plays an important role. So you basically come to a situation where you get ResourceClassInstanceManager cf; // does the ResourceClass creation, accounting and destruction ResourceClass &rcInstance = cf.GetResourceClass(); // creates the instance of the resource ResourceClassUpdater rcu(&rcInstance); // updater that works on the class instance ResourceConnector rcc(&rcInstance, &instanceOfAnotherClass); The problem here is that cf may leave the scope first, or rcInstance may be deleted before rcu or rcc. The other classes will then still reference the nonexistent instance of ResourceClass.
This seems somewhat messy, as the whole creation/destruction chain has to be followed very carefully, and it becomes really troubling as soon as you have pointers to objects and deeper nesting levels. Is there a clean solution for this design problem?"} {"_id": "78547", "title": "How do you tell if advice from a senior developer is bad?", "text": "Recently, I started my first job as a junior developer, and I have a more senior developer in charge of mentoring me in this small company. However, there are several times when he would give me advice on things that I just couldn't agree with (it goes against what I learned in several good books on the topic written by experts, and questions I asked on some Q&A sites also agree with me), and given our busy schedule, we probably have no time for long debates. So far, I have been trying to avoid the issue by listening to him and raising a counterpoint based on what I've learned as current good practices. He raises his original point again (most of the time he will say \"best practice\" or \"more maintainable\" but just doesn't go further), I take a note (since he didn't raise a new point to counter my counterpoint), think about it and research at home, but don't make any changes (I'm still not convinced). But recently, he approached me yet again, saw my code and asked me why I hadn't changed it to his suggestion. This is the 3rd time in 2--3 weeks. As a junior developer, I know that I should respect him, but at the same time I just can't agree with some of his advice. Yet I'm being pressured to make changes that I think will make the project worse. Of course, as an inexperienced developer, I could be wrong and his way might be better; it may be one of those exceptional cases. My question is: what can I do to better judge if a senior developer's advice is good, bad, or maybe good but outdated in today's context? And if it is bad/outdated, what tactics can I use to not implement it his way, despite his 'pressures', while maintaining the fact that I respect him as a senior?"} {"_id": "187479", "title": "Encapsulate external JavaScript library", "text": "I am about to develop an ASP.NET MVC 4 project that will make use of maps. Our company has its own map API which is very basic at the moment, but is intended to be further developed in the future to match some specific customer needs. The project will have a mobile/tablet interface in addition to its web interface and needs to be developed now. So we have planned to start using the Google Maps v3 API. I would like to encapsulate the API, so our applications don't have Google-API-specific calls. The same approach as if I wanted to use some external API in C# and wanted to encapsulate it in order to be able to switch APIs in the future without having to re-code the entire application. But I haven't been able to find similar examples, so the following questions arise: 1. Is it a good and reasonable approach? Let me hear your pros and cons. 2. How would it be done in practice?"} {"_id": "223769", "title": "How to simulate a REST API?", "text": "I am working on a new project which will query data from a 3rd party REST API. This is for a real time sports data feed, so the feed only works when a game is actually taking place. Although the 3rd party provides good documentation (XSD, etc), they have no way to simulate a game happening, and so to test code I have written against this API I would have to wait for an actual game. My only recourse is to write code to simulate a game on my own, but it seems like a lot of work.
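One direction I've been considering, rather than simulating full game logic: capture real responses during a live game (or build them from the XSDs) and replay them from a tiny local HTTP server, pointing my client at localhost instead of the vendor. A minimal sketch with the JDK's built-in server (the URL path and fixture file name are made up for the example):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FakeFeedServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            // Replay a canned response captured from the real feed.
            server.createContext(\"/game/123/events\", exchange -> {
                byte[] body = Files.readAllBytes(Paths.get(\"fixtures/events-snapshot.xml\"));
                exchange.getResponseHeaders().add(\"Content-Type\", \"application/xml\");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start(); // client under test now targets http://localhost:8080
        }
    }

Stepping through a sequence of such snapshots would approximate a game in progress, but I'd still have to record or author those fixtures, hence the question.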
Does anyone have any experience with tools similar to Apiary for doing this? How would you approach this? Thanks"} {"_id": "187470", "title": "Can profiling be used to verify if optimization was successful?", "text": "I know that profiling is useful for identifying bottlenecks and determining what parts of the code require how much time to execute. The latter isn't always very easy to track in the midst of other paths being executed, so once I decide what I want to optimize, it might be problematic to see the improvement in numbers. This is especially true in desktop apps which run constantly, where it is difficult to execute the same path the same number of times to have a reliable comparison. It won't help me if before optimization the function ran X times and took 500 milliseconds, and after optimization it ran Y times and took 400 milliseconds. In such cases, can I somehow use a profiler to determine improvement, or do I have to resort to other options?"} {"_id": "223762", "title": "Bloom filters or similar, but with no false positive", "text": "To improve some lookups, I am considering the use of Bloom filters. But in my use-case, the most probable outcome is that the element does exist in the target set. Bloom filters can have false positives, but no false negatives. That would make me check the real (big and slow) storage most of the time because of the uncertainties. Is there another algorithm/data structure with the same properties for space and computation speed (and parallelism of query) that can assure no false positives and a low probability of false negatives? (The max size of the set will be around 300k items, the items will be strings of, at most, 512 characters, and I will have hundreds of sets like that.)"} {"_id": "112219", "title": "Java Swing in real world software development?", "text": "Is Java Swing really used for constructing real-world software? I'm taking an introductory course in Java, but we've done nothing more than drawing some shapes, lines & basic tables."} {"_id": "112218", "title": "What teamwork methods can be learned without being in a team?", "text": "I've worked in team-based environments most of my career, but not as a programmer. (I am trained as a musician/composer.) I'm becoming aware of _programming specific_ team-based concepts, though--slowly but surely--such as version control, SDLC, documentation, etc. I'm getting a certain theoretical understanding, but not really feeling confident about it. **I'd like to practice anything I can to help me learn how to work more effectively in a team.** ...before I actually go and bother someone with my heavy-handedness or ignorance. I'm not sure if it's possible or even recommendable to attempt to 'practice' team-based methodologies privately. Maybe the only way to really learn how to work in a team is to just experience it. Likewise, I wouldn't know how to explain teamwork to another musician, but as musicians we don't really have codified methodologies in the same way that programmers do, and I think that could make all the difference. In music there are certain conventions and inflexibility due to the mechanical nature of the instruments/recording equipment (akin to hardware/language, I suppose), but otherwise we kind of make it up as we go along (and I assume programmers do a little bit of that, too, depending on the context--I wouldn't know. That's kind of what I'm trying to figure out).
I'm pretty sure that the inevitable answer is that I should work on something open-source; the world is flat, after all, and it's not difficult to communicate across great distances as a programmer (like it often is with musicians). But my feeling is that I'm not ready for that. I would feel safer learning _how_ to approach teamwork with programmers before just diving in, because programmers can perhaps be a little persnickety. I'm trying to avoid a trainwreck type of experience that might cause some irrational fear. So is there anything I can study beforehand, and if so, is there a generally accepted way to go about it? Like mock-teamwork, team-related practices that can be practiced privately; is any of this making sense?"} {"_id": "235207", "title": "How to store progress of abstract events?", "text": "I am making a game in node.js. I have players and they can perform a lot of actions. Actions are all coded as functions, and they change certain variables in either the User object or other objects that are stored in a database. Actions are like: 1\\. Player enters a room (filled with other players and some objects): room.players.push(player); 2\\. The room distributes some things to the players (increments a variable in the player object) who were already in the room, and to the player that just entered: player.gemDiam++; 3\\. A host player in the room asks the player to give up something (decrement some variable) of his own: player.potion--; host.potion++; 4\\. The player asks other players to give something else to the room (decrement their own and increment one of the room's variables): room.players.forEach(function(player){ io.sockets.socket(player.socketID).emit('askNicely'); io.sockets.socket(player.socketID).on('agree', function(){ player.gemCirca--; }); }); 5\\. They all give up what the room gave them originally (delete the variable the room set a value for on them) and walk out: room.players.forEach(function(player){ player.gemDiam--; }); Now, mind that the players are actual players (controlled via client-side socket.io) and these actions are just a few of many I chose for this example. How should I go about storing all that in a way that I can reproduce without actual players, sort of like a history of what happened in that room?"} {"_id": "146846", "title": "How to read from a database, and write to a file asynchronously / non blocking way in Java", "text": "I'm trying to modify a serial program that reads from a database and writes results to a file. This is done in a blocking way, and I think we can get a performance boost if there is a memory buffer and the file is written in the \"background\" asynchronously. I can think of the \"job interview\" solution, using threads, a shared resource, synchronized blocks etc... but I'm sure there is a better way (is there a nice little \"delayed write\" library out there that will do it for me?) Does the java.util.concurrent package offer any help? java.nio? Or perhaps JMS/ActiveMQ? What about PipedOutputStream / PipedInputStream as a basis for my buffer?
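To make it concrete, the shape I have in mind is a producer/consumer pair around a BlockingQueue: the DB-reading thread enqueues rows, and a single writer thread drains the queue to disk. A rough sketch (the class name and file handling are invented for the example):

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BackgroundFileWriter implements AutoCloseable {
        private static final String POISON_PILL = \"__EOF__\"; // sentinel to stop the writer
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000); // bounded buffer
        private final Thread writerThread;

        public BackgroundFileWriter(String path) {
            writerThread = new Thread(() -> {
                try (BufferedWriter out = Files.newBufferedWriter(Paths.get(path))) {
                    String line;
                    while (!(line = queue.take()).equals(POISON_PILL)) {
                        out.write(line);
                        out.newLine();
                    }
                } catch (IOException | InterruptedException e) {
                    throw new RuntimeException(e);
                }
            });
            writerThread.start();
        }

        // Called from the DB-reading thread; blocks only if the buffer is full.
        public void write(String line) throws InterruptedException {
            queue.put(line);
        }

        @Override
        public void close() throws InterruptedException {
            queue.put(POISON_PILL);
            writerThread.join(); // wait for the remaining buffer to be flushed
        }
    }

But this is exactly the hand-rolled version I was hoping to avoid, hence the question about libraries.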
How do I implement a delayed / background / buffered / non-blocking / asynchronous file writer in Java?"} {"_id": "71980", "title": "Has the license changed for the parts of BOOST that have been accepted into C++11?", "text": "The Boost Software Library is licensed under the BOOST License. Now, if you only use C++11, does this mean you're bound by the BOOST license terms if you use those parts of C++?"} {"_id": "7733", "title": "Good questions to ask a potential new boss?", "text": "Imagine you were working as a software developer. Imagine that the manager of your team leaves and your company is looking for a replacement. Imagine that as part of the hiring process you had the opportunity to talk with him. You are not the only person doing an interview, and while it is not ultimately your decision whether or not to hire him, you do have an influence. What questions would you ask? What would you talk with him about?"} {"_id": "108699", "title": "Why does Java's Collection.size() return an int?", "text": "Why does Java's Collection.size() return an int? This limits the size of collections to just over 2 billion entries. With the rapidly increasing amounts of memory available to us, this seems a little short-sighted - no?"} {"_id": "197643", "title": "\"Is\" prefix and \"On\" suffix as reasonable exceptions to a \"non-hungarian\" naming standard?", "text": "First, I believe I've seen this question discussed here before, but I cannot find it. My apologies if you do find it. I'm starting a new project, and trying to figure out why IsResolved and/or ResolvedOn make more sense to me than Resolved. Obviously, when something is named \"CompanyName\" (for example), I'm fairly confident I'll test it as a string. However, when I see \"Resolved\", and I know the object it describes may not have a valid value for a resolution date, it bothers me that I have to inspect the type to determine how to test it: for undefined/null/etc. (as a date), or as a boolean value. Is it reasonable to claim that Is and On typically do more than declare the variable type; they declare the intent? Or perhaps simply that it makes the code clearer for some other reason I'm not quite able to codify? Arguing the other side, if I look at the type of Resolved and see that it is a boolean, I will have my answer. Perhaps this is no different than knowing that I'll need to inspect a \"Feasibility\" variable to determine if it is an int, enum, or something else. (Although perhaps just that means \"Feasibility\" would be a sub-optimal name also.) And regardless, in the case of \"ResolvedOn\", I still have to inspect whether it is nullable to determine if I additionally need to inspect an \"IsResolved\" value. What is the point of the verbosity, encoding a second time in the variable name that which can be deduced from the type? Please give me a hand understanding **why Is[Property] and [Property]On make sense to me**, although Hungarian in general does not. Or, explain why an exception like this wouldn't make sense. Note: I primarily work with SQL, C#, and JavaScript."} {"_id": "157228", "title": "How to motivate students for a programming section?", "text": "Last year I tried establishing a programming section at my alma mater. We started with a clear document detailing our plans and goals and a solid attendance (around 20 students).
I ran into the following problems: * I found it hard to manage such a large team * the majority of students had little or no programming experience other than introductory `C` courses I tried splitting the section into 4 teams of approximately 5 students each, with more experienced programmers at the head of each team. We defined some real world problems to solve, to make it more interesting for the students, and got to work. However, with upcoming exams, attendance dwindled and then finally stopped. At the time I felt that the students were not willing to put in the effort to learn something new. Later, I came to believe that they felt the task was too hard for them, and motivation suffered as a result. My question is therefore: how would you motivate students?"} {"_id": "157223", "title": "Am I shooting myself in the foot if I use Mercurial for Rails development?", "text": "I'm a Rails developer and I prefer Mercurial over Git. However, I know that the Rails community is very pro-Git. So my question is: is it possible that my choice of version control system would turn away some developers?"} {"_id": "157225", "title": "Genetic Algorithm - solving a matrix with hard and soft constraints", "text": "I'm writing a Genetic Program that I need some advice on for crossover operations. The GP is attempting to find the best solution for a matrix that has hard row constraints and softer column constraints. For a given solution in the population, the rows contain a random combination of object type ids from a fixed set. The GP is trying to find a solution where, after the rows are laid out, if you tally the ids in each column, the number of each type falls within a recommended range for that id. I wrote a fitness function that allows me to grade the solution on how close it comes to the column constraints - 100% being that all the columns fall within specs. Since fitness is tied to columns, it seems logical that the crossover operation should grab columns of two parents to create a candidate offspring. Would a multipoint crossover be a better way to go? My concern is that a crossover operation will almost certainly break the row constraints. Thanks for any advice."} {"_id": "211071", "title": "help in theory regarding prefix_suffix_set", "text": "I found this question at codility.com, but I don't understand the question. Can you help me figure out what they want? I am more interested in the theory than the code. A non-empty zero-indexed array A consisting of N integers is given. A prefix_suffix_set is a pair of indices (P, S) such that 0 \u2264 P, S < N and such that: * every value that occurs in the sequence A[0], A[1], ..., A[P] also occurs in the sequence A[S], A[S + 1], ..., A[N \u2212 1], * every value that occurs in the sequence A[S], A[S + 1], ..., A[N \u2212 1] also occurs in the sequence A[0], A[1], ..., A[P]. The goal is to calculate the number of prefix_suffix_sets in the array. For example, consider array A such that: * A[0] = 3 * A[1] = 5 * A[2] = 7 * A[3] = 3 * A[4] = 3 * A[5] = 5 There are exactly fourteen prefix_suffix_sets: (1, 4), (1, 3), (2, 2), (2, 1), (2, 0), (3, 2), (3, 1), (3, 0), (4, 2), (4, 1), (4, 0), (5, 2), (5, 1), (5, 0).
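(As far as I can tell, the two bullet points just say that the set of distinct values in the prefix A[0], ..., A[P] must equal the set of distinct values in the suffix A[S], ..., A[N - 1]. A deliberately naive sketch of that reading, quadratic and meant purely to pin down the definition rather than to be an acceptable solution:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class PrefixSuffixSets {
        // Counts pairs (P, S) where distinct(A[0..P]) equals distinct(A[S..N-1]).
        public static int countNaive(int[] A) {
            int n = A.length;
            // suffixSets.get(s) holds the distinct values of A[s..n-1].
            List<Set<Integer>> suffixSets = new ArrayList<>();
            for (int s = 0; s < n; s++) suffixSets.add(new HashSet<>());
            Set<Integer> running = new HashSet<>();
            for (int s = n - 1; s >= 0; s--) {
                running.add(A[s]);
                suffixSets.get(s).addAll(running);
            }
            int count = 0;
            Set<Integer> prefix = new HashSet<>();
            for (int p = 0; p < n; p++) {
                prefix.add(A[p]);
                for (int s = 0; s < n; s++) {
                    if (prefix.equals(suffixSets.get(s))) count++;
                }
            }
            return count;
        }
    }

Running this on the example array above does give 14, which matches the expected answer.)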
Write a function: * function solution(A); that, given a non-empty zero-indexed array A of N integers, returns the number of prefix_suffix_sets."} {"_id": "67310", "title": "Should you use \"internal abbreviations\" in code comments?", "text": "Should you use \"internal abbreviations/slang\" inside comments, that is, abbreviations and slang people outside the project could have trouble understanding? For instance, using something like `//NYI` instead of `//Not Yet Implemented`? There are advantages to this, such as there being less \"code\" to type (though you could use autocomplete on the abbreviations), and you can read something like `//NYI` faster than something like `//Not Yet Implemented`, assuming you are aware of the abbreviation and its (unabbreviated) meaning. Myself, I would be careful with this unless it is a project on which I will for sure be the only developer."} {"_id": "186269", "title": "What should a junior developer expect from their senior team lead", "text": "Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer. I work for a small company, in which a few people are developers, others are QA/test, and 1 is a manager. I joined this company 1.5 years ago. The 3 senior developers have 8+ years of experience. These are the observations I have made about the team leads (consider me a fresher with less experience than them in all aspects): 1. They never discuss 1:1, and they never consider a junior's suggestion (I agree that it's up to them whether they accept it or not, but at least they should consider the opinion). 2. As senior team leads they could try to refactor the codebase with new technologies (assuming rolling out new technologies is possible, and the other developers and the infrastructure are ready), but these team leads feel insecure working with new technologies, as they are not up to date. (The reason I say this: they don't know current programming trends, such as popular open source projects like modernizr, bootstrap and many others.) 3. In our codebase more than 10000+ lines are repeated, so I told them about `DRY: Don't Repeat Yourself`. Their reply was: \"It is a fascinating article, but never works in practice\". I just told them that if we do not make it 100% DRY, we can at least use interfaces, but that also was not considered. (Interfaces can be added for new features, without touching the previous codebase, if they are not ready to refactor.) 4. All the senior developers do is maintenance and hot fixing of patches. The rest of the time they just spend on entertainment sites. They are just happy to finish their tasks. 5. Introducing new technology is bad? (Even when it is feasible.) 6. The manager is also least concerned about the things which I am talking about. 7. A junior expects to learn many things from a team lead (not by asking for help, or having the senior code for them). My questions are: 1. Am I too aggressive about the changes which I am proposing? 2. What should I expect from senior dev leads who have 8+ years of experience? 3. Am I wrong to expect to learn and gain experience from a company? Update: Why they feel DRY is impractical: because they don't want to get involved with OOP concepts. They are happy with repeating tasks. New technologies I am proposing: 1. Minification of CSS and JS, and sprite images 2. Interfaces, .NET Framework 4, generics and many others. 3.
Client-side libraries such as modernizr, knockout js, and bootstrap for responsive design."} {"_id": "37981", "title": "Who is using the MVVM architecture for large applications?", "text": "I am currently working on an LOB application which I am basing on the MVVM architecture. Going by the answers to the questions I'm asking, it seems like there are not that many people building _large_ and/or _complex_ applications in MVVM. Hopefully, there is a number of you who have built such animals. If so, have you found that MVVM scales?"} {"_id": "220862", "title": "I've been on a C# course and struggle to remember things", "text": "I'm an apprentice and I've been on a couple of training courses in C#. I understood the concepts of C# programming and covered inheritance etc., but when I came to code my coursework (\"make a shopping basket and save the contents into a .txt file\") I just seemed not to know what to do, as if I've got no idea how to program. I've got a copy of the end product, except it's altered so we can't copy it wholesale, so I've just been taking code and not really understanding everything that's going on. I feel as if I don't want to program, as if I'm in a different mindset; I just go blank at the thought of creating a simple program from scratch and have no idea where to start. Anyone got any ideas to help? :) thx"} {"_id": "238925", "title": "I'm not sure how to add common functionality to my business objects using DTOs/DDD?", "text": "I have created a couple of projects to create a better division of my code: I have a Portable Class Library targeting all frameworks that contains just basic DTOs (auto-generated against a database). These DTOs will contain some basic DataAnnotations for the class representing the table. For example: [DataContract] public class OrderDetail { [DataMember] public virtual int Id { get; set; } [DataMember] [Required] [StringLength(32)] public virtual string OrderCode { get; set; } [DataMember] public virtual DateTime OrderDate { get; set; } [DataMember] public virtual DateTime? ShippedDate { get; set; } } * * * Now, the problem I have is that I want to create a \"Business Object\" and provide common features such as n-level Undo/Redo, marking child objects for deletion, etc. The ways I can think of doing this are as follows: **Option 1**: Inherit the DTO to add the additional members (build out the object graph). I can use partial classes and interfaces and even some AOP to implement the common features. For example: // OrderDetail.generated.cs: Auto-Generated/T4 public sealed partial class OrderDetail : DTO.OrderDetail, IUndoRedo { // Common Features coded here (i.e. n-level Undo/Redo) } // OrderDetail.cs: Custom Code [DataContract] public sealed partial class OrderDetail { [DataMember] [Required] public Address ShippingAddress { get; set; } [DataMember] [Required] [Cardinality(1, Cardinality.Infinity)] public ICollection Products { get; set; } } Now, the problem with this is that every single auto-generated business object extension of each DTO (`OrderDetail`, `Address`, `ProductDetail`, etc.) needs to have the common business features available (i.e. n-level Undo/Redo), but they can't inherit from some sort of `BusinessBase` class because they inherit from the DTO. **Option 2**: Inherit each business object from a `BusinessBase` abstract class which already has the common features built into it. Then, simply wrap the DTO properties and re-expose them?
[DataContract] public abstract class BusinessBase : IUndoRedo { // Common Features coded here (i.e. n-level Undo/Redo) } [DataContract] public sealed class OrderDetail : BusinessBase { // Wrapped Properties private Dto.OrderDetail Dto { get; set; } [DataMember] [Required] [StringLength(32)] public string OrderCode { get { return Dto.OrderCode; } set { Dto.OrderCode = value; } } // etc ... // Custom properties (Object-Graph) [DataMember] [Required] public Address ShippingAddress { get; set; } } The problem with this is all the wrapped-up DTOs, and it lacks inheritance, so the \"Business\" version of OrderDetail isn't a type of DTO OrderDetail, and the wrapped properties can get redundant. I'm really stuck as to what should be done here. Is one option better, or is there maybe some other way of doing this that wasn't accounted for (Option 3, 4, 5, etc.)?"} {"_id": "169949", "title": "How to visualize the design of a program in order to communicate it to others", "text": "I am (re-)designing some packages for R, and I am currently working out the necessary functions and objects, both internal and for the interface with the user. I have documented the individual functions and objects. So I have the description of all the little parts. Now I need to give an overview of how the parts fit together. The scheme of the engine, so to say. I've started with making some flowchart-like graphs in Visio, but that quickly became a clumsy and useless collection of boxes, arrows and what-not. So hence the question: * Is there specific software you can use for visualizing the design of your program? * If so, care to share some tips on how to do this most efficiently? * If not, how do other designers create the scheme of their programs and communicate that to others? * * * Edit: I am NOT asking how to explain complex processes to somebody, nor asking how to illustrate programming logic. I am asking how to **communicate** the **design** of a program/package, i.e.: * the objects (with key features and representation if possible) * the related functions (with arguments and functionality if possible) * the interrelation between the functions at the interface and the internal functions (I'm talking about an extension package for a scripting language, keep that in mind) So something like this: ![enter image description here](http://i.stack.imgur.com/pohJ4.png) But better. This is (part of) the interrelations between functions in the old package that I'm now redesigning for obvious reasons :-) PS: I made that graph myself, using code extraction tools on the source and feeding the interrelation matrix to yEd Graph Editor."} {"_id": "59975", "title": "How to learn to translate real world problems to code?", "text": "I'm kind of a beginner to Java and OOP and I didn't quite get the whole concept of seeing a real world problem and translating it to classes and code. For example, I was reading a book on UML, and at the beginning the author takes the example of a tic tac toe game and says: \"In this example, it's natural to see three classes: Board, Player and Position.\" Then, he creates the methods in each class and explains how they relate. What I can't understand is how he came up with all this. So, where should I start to learn how to see a real world problem and then \"translate\" it into code?"} {"_id": "144996", "title": "Learning MVC - Why does home and about share the same controller?", "text": "Let me start this out by saying I've been an asp.net web forms developer for a while now and that I understand mvc is a new way of doing things.
As I'm learning mvc and going through tutorials and training videos, I have questions that these tutorials don't address. This is my attempt to address them here... I started a new project with the new internet application template in Visual Studio. I'm looking around the project trying to wrap my head around the mvc paradigm, and I notice there is a Home and an About page. In the views, there is a file for each of these two pages. That makes sense. But why do they share the same controller? I think it would make sense if I had several screens that edit/view/delete the same data table, but the home and the about page don't necessarily have anything to do with each other. Does this mean if I create other pages that don't need a full-blown controller (like a sitemap or something), I should just stick their views in the \"Home\" views folder? It just doesn't seem right. I know this basic stuff isn't that big of a deal, but this is the type of stuff that bugs the hell out of me. Thanks in advance for the clarification!"} {"_id": "30355", "title": "How to become a \"faster\" programmer?", "text": "My last job evaluation included just one weak point: timeliness. I'm already aware of some things I can do to improve this, but what I'm looking for are some more. Does anyone have tips or advice on what they do to increase the speed of their output without sacrificing its quality? How do you estimate timelines and stick to them? What do you do to get more done in shorter time periods? Any feedback is greatly appreciated, thanks."} {"_id": "169941", "title": "What are unique aspects of a software Lifecycle of an attack/tool on a software vulnerability?", "text": "At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think that they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like: 1\\. Find gap in security 2\\. Exploit gap in security 3\\. Procure payload 4\\. Utilize payload **What kind of differences (if any) are there in the development life-cycle of software when the purpose of the product is to breach security?**"} {"_id": "91183", "title": "What is easier to do with web applications compared to native GUI applications?", "text": "I have the impression that more and more applications with a user interface use HTML+CSS+JavaScript on the client side instead of a native GUI framework. I wonder what the most important driving forces for this are? Or, in other words, what are native GUI frameworks lacking to be competitive? E.g., is it easier to do some kinds of things with HTML+CSS+JavaScript than with native GUI frameworks? Suppose the application is a client-server application where the client has a graphical user interface:
**What are the most important things that are easier to do in HTML+CSS+JavaScript than in the most popular native GUI frameworks?**"} {"_id": "229500", "title": "Doing work in vector's push back", "text": "I often use the following syntax: std::vector vec; vec.push_back( someClass.getFoo(...).modifyAndReturn() ); Concerned about exception safety, I quote the standard on vector's push back behavior (23.3.7.5): > If an exception is thrown other than by the copy constructor, move constructor, assignment operator, or move assignment operator of T or by any InputIterator operation there are no effects. If an exception is thrown by the move constructor of a non-CopyInsertable T, the effects are unspecified. * Is it good practice to use complicated push backs? * Are there any perils exception-wise?"} {"_id": "229501", "title": "Validation and Authorisation in Domain Models and Carrying that through a Service Layer to MVC", "text": "With the current project I'm working on there's an architecture question being asked which feels like it might just be asking too much. System Basics: * HTML/JS MVVM * Asp.net MVC * Web Services * EF * SQL 2012 The Web Services deal with DTOs passed back and forth either to the presentation layer (MVC or mobile app) or to various other external services. The **big question** at the moment is whether it's possible to somehow define our **DTOs** in such a way as to **include** all **Validation and Authorisation rules**. This then needs to be carried into, for example, our MVC presentation layer or View Models. The idea here is that we'll end up with 1) Validation rules on the ViewModel which then plug into the MVC framework and 2) Only fields that the user is allowed to edit shown as editable. Some have mentioned Fluent or other mechanisms that consist of an extra definition class somewhere used to define these rules; then during compilation, code may be injected into various view model classes, or base view model classes are generated which one can inherit from. The overall objective is to keep validation and 'data authorisation' (can user X edit fields 1, 2 and 3 based on their permissions/roles) rules in one place. Is this stretching it, or is it possible?"} {"_id": "232838", "title": "What is the underlying mechanism behind va_list and where is it defined?", "text": "http://www.cplusplus.com/reference/cstdarg/va_list/ According to the above link, `va_list` is an argument or parameter used in some macros - `va_start`, `va_arg`, `va_end`. These macros are present in the `stdarg.h` file. I know that `va_list` can hold multiple values, but what kind of an entity is `va_list`? My question is: what is the underlying mechanism behind `va_list`? How is it able to hold multiple values? (For example, an array can hold multiple values, and the mechanism behind it is multiple memory locations referenced by subscript values. An array is a predefined data structure present in C.) My second question is: where is `va_list` defined?"} {"_id": "228677", "title": "Is it better to use a switch statement or database to look through 5,000 to 10,000 instances?", "text": "I have some JSON data in this format (5,000 instances for now): {\"id\":\"123456\",\"icon\":\"icon.png\",\"caseName\":\"name of case\"} I am allowing the user to search for the name of the case and then return the id and icon to them, and in some situations, **use** the case id to run other functions, based on the search. How should I read this data?
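For comparison with the switch I describe next, this is the lookup-table shape I'd get once the JSON is parsed: every alias of a case keys the same record, so each search is a single hash access. Sketched in Java only to show the shape (I'd do the same in PHP with json_decode and an associative array; the field names match my JSON above):

    import java.util.HashMap;
    import java.util.Map;

    public class CaseLookup {
        public static final class CaseRecord {
            final String id;
            final String icon;
            CaseRecord(String id, String icon) { this.id = id; this.icon = icon; }
        }

        private final Map<String, CaseRecord> byName = new HashMap<>();

        // Register one record under any number of aliases,
        // e.g. \"name of case\" and \"name-of-case\" both map to id 123456.
        public void register(CaseRecord c, String... aliases) {
            for (String alias : aliases) {
                byName.put(alias.toLowerCase(), c);
            }
        }

        public CaseRecord find(String query) {
            return byName.get(query.toLowerCase()); // null if unknown
        }
    }

That keeps the alias idea from the switch without 5,000 case labels, but I'm not sure how it compares to a database once the data grows.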
I'm currently leaning towards a switch, because I want the switch to allow me to define multiple cases giving me the same result: \"name of case\" and \"name-of-case\" should both return the id 123456. I can easily use a switch based on the string, listing out the different cases, to do this. It's probably going to be extremely tedious, but it's an option. So, the options I can think of are as follows: 1. Run a massive switch based on the query string (probably in a separate PHP file) 2. Store this result in a MySQL database, then, based on the query string, retrieve the data; I would probably need to do indexing (which I understand, but have no idea how to implement). Also, I would like to prepare for the scenario where these 5,000 instances increase to 10,000 instances at some point in time. What's the best way of going about this? Thanks."} {"_id": "232830", "title": "Unit test JSON parsing code?", "text": "In my humble view of unit testing, a unit test tests a single unit of code. For me, this means a single class. Every dependency for that class is mocked in the corresponding test class, and passed in the constructor (or injected via a DI container). This allows you to test the class in complete isolation from the rest of the system. I take this as the gospel truth of what a unit test is supposed to be. If you are testing multiple _real_ classes (more than one class in your test class is not a mock) then you don't have a unit test, you have an integration test. Given this point of view, there are things that _seem_ to fall outside of the realm of unit testing even though it seems as if you are testing a single class. One of these types of tests is data parsing tests. For example, given a data class: public class Person { private Name name; private PhoneNumber phoneNumber; private Address address; } a purported unit test then attempts to validate correct parsing: @Test public void shouldParsePerson() { String json = \"{ the json would be here }\"; Person person = new Gson().fromJson(json, Person.class); assertThat(person.getName().getFirst()).isEqualTo(\"Jim\"); // more assertions based on what the JSON was would be here } What is this really testing? For me, it appears to be testing that Gson works. The test is tightly coupled to both the structure of the JSON and the class in Java that it maps to. Is the test really worth writing? Is it worth maintaining? Is it useful? Wouldn't an integration test somewhere else that relies on the parsing code catch this anyway? There is another question here that is related to my question, but it is actually different, because it is asking **how** to unit test consumers of an API that wraps third party classes, while I'm asking what constitutes a valid _unit_ test that involves a third party class. My question is more philosophical in nature, while the other is more practical. My question could also be extended to things like testing configuration code, which doesn't necessarily involve third party classes."} {"_id": "250134", "title": "OpenGL's relationship to OpenGL ES (3.0)", "text": "I'm beginning my journey into graphics programming and want to learn OpenGL. Because I'm green to graphics programming but not to C and C++, a familiar question came up when I looked at OpenGL and ES... Is OpenGL a superset of OpenGL ES? I read in an ES 3.0 guide that it uses shader-based implementations that exist as part of OpenGL's embedded libraries, and that ES avoids using similar/redundant libraries provided by big ol' OpenGL due to mobile hardware limitations...
Are shader-based implementations, which are a huge part of OpenGL ES, less efficient solutions when programming on the desktop? Or should I look at the two APIs as different beasts altogether? thx in advance."} {"_id": "228673", "title": "How does the product owner decide how successful a Sprint was?", "text": "Obviously, if the product owner judges solely based on whether the committed stories were done, it wouldn't be good, because then the team will just commit to fewer stories next Sprint. On the other hand, if the team did a tremendous job and overcame huge technical problems, but the commitments were not met, it's also not good. I think that some measure is needed though, because if there is no feedback from the product owner, the team doesn't know what to improve, if anything. So what, in your opinion, are the criteria for the success of a Sprint?"} {"_id": "55091", "title": "Why did Facebook use C++ beside PHP?", "text": "What is the main reason that made Facebook need to use C++ besides PHP? I am wondering: if I make a website with a lot of visitors, would I need to use C++ as well?"} {"_id": "147710", "title": "How to include database changes during application publish", "text": "I am maintaining a WinForms application which talks to a SQL Server database. Sometimes I have to change the database schema (for example, to alter a SQL procedure or add a new one). For this purpose I have a SchemaChange.sql file, where I put the corresponding SQL code. When I create an installer for my project, the msi package is created. It contains my application, referenced assemblies, COM+ assemblies,... Alongside the msi package I provide a SchemaChange.sql file, which has to be run on the production SQL server. But sometimes I forget to add something to the SchemaChange.sql file during development, or I forget to execute it on the production server after upgrading my application on client machines. Any advice on how to automate this to avoid further problems?"} {"_id": "147713", "title": "Where are null values stored, or are they stored at all?", "text": "I want to learn about null values or null references. For example, I have a class called Apple and I created an instance of it: Apple myApple = new Apple(\"yummy\"); // The data is stored in memory Then I ate that apple and now it needs to be null, so I set it as null: myApple = null; After this call, I forgot that I ate it and now want to check: bool isEaten = (myApple == null); With this call, what is myApple referencing? Is null a special pointer value? If so, if I have 1000 null objects, do they occupy the memory space of 1000 objects or of 1000 ints, if we think of a pointer type as an int?"} {"_id": "91230", "title": "Addressable memory unit", "text": "From Wikipedia: > the term endian or endianness refers to the ordering of individually **addressable** sub-components within a longer data item as stored in external memory (or, sometimes, as sent on a serial connection). These sub-components are typically **16- or 32-bit words, 8-bit bytes, or even bits**. I was wondering what \"addressable\" means? For example, in C, the smallest addressable unit is a byte/char. How can a bit be addressable? Thanks and regards!"} {"_id": "103762", "title": "Developers Terms and Conditions", "text": "I've recently been approached to do some development work for a start up. The fellow has requested my terms and conditions. Being new to the industry, what sort of information would/should I include in this?
Obviously pay rate, and perhaps information about ownership of intellectual property. Is there anything I'm missing?"} {"_id": "103765", "title": "What is a good intro to cloud-computing architecture?", "text": "A while ago I tried reading about Google App Engine, from its site and from the Wikipedia article. What I realized was that not only did I not understand what it did, I didn't even know what problem it was solving. I am a competent programmer in a variety of languages, but I have little experience with large frameworks. Is there a book/site that explains cloud computing in a way similar to how \"Programming with POSIX Threads\" explains threading, i.e. identifying specific problems and the mechanics of how they are solved, from a programmer's perspective? I tend to be a low-level bits-bytes-and-memory-addresses sort of person, so abstract explanations of the architecture tend to give me a giddy and uneasy feeling. I suppose cloud computing can't be understood at the byte level, but the lower down it is, the better I'll understand it."} {"_id": "91235", "title": "For what purposes can I use C++ to increase my skills?", "text": "I want to learn new things. Initially I was a PHP programmer. Then I thought it was not enough. Then I started learning Java. It took me 3 months to learn Java, J2EE, Spring, Hibernate, Spring Security, Spring Roo, many design patterns like MVC, and stuff like AOP and DI. I never knew any of that before, but I got the idea of what J2EE is. After 3 months, I had just made a simple page with a registration form integrated with Spring Security. I wanted to make one complete project in it, but that was too much for me, and I didn't want to spend more time on it, as I would then need to host it as well, so I left it. Then I started learning Python, made a few sysadmin scripts, then Django, and now I am finishing a complete web app in Python. Now I want to learn C++, but before that I need to find out what I can do with it. Just like I know Python is very useful for me because I have my own servers, so I can write scripts and websites; Python is good for me. But I am confused about which areas C++ can help me in. I don't want to end up like I have with Java, where I either have big projects or nothing for day-to-day use."} {"_id": "103767", "title": "Tips to succeed in a new job for my situation", "text": "Here is my situation: * I am switching to a new company in a new city as an engineer [code, test, performance etc]. * I was not particularly successful in my previous role [at least not as per my immediate manager] and finally decided to quit on good terms. * The new job is exciting [I haven't started it yet]. I am the first engineer for this new team and there is no existing code for what I would be working on, except an old prototype. * I have to work closely with sister teams to bootstrap my product. * I am due for a performance review within 5 months, and I will have to take a few weeks' vacation toward the end of it. I am a little anxious about this since I think 5 months isn't much time for me to bootstrap and show some early successes. * I am working in a new space I am not familiar with, though the stack is mostly Java, Apache, etc., with which I am more comfortable than with other stacks out there. I am currently reading _First 90 Days_, though it is a bit more focused on leadership than on engineers. I am an engineer and not a manager, though I would like to move to leadership roles later in my career.
Thanks for your input, and I will add more details if someone requests them."} {"_id": "91237", "title": "How to start fixing bugs in open source software", "text": "I am a student with good knowledge of C programming, and I would like to contribute to an open source project developed in C. I searched SourceForge and selected 7-Zip because it's widely used and developed in C. I thought I would start by fixing bugs (which many people suggest on their websites) and went through a few bug reports, but I couldn't understand how to respond to them or how to start fixing them. I didn't understand anything. Could you please explain how to approach this? I have even gone through some files in the source code I downloaded, but didn't understand anything. Please help me!"} {"_id": "34189", "title": "How do you update copyright notices?", "text": "So now it's 2011, and as I carry on coding on our active projects it's time to update some copyright notices. e.g. Copyright Widgets Ltd 2010 to Copyright Widgets Ltd 2010, 2011 My question is when do you update the copyright notices? * Do you change the notice in the head of a file the first time you work on that file? * Since a module is one piece of code consisting of many files that work together, do you update all notices in that module when you change a single file in that module? * Since a program is one piece of code (maybe consisting of many modules), do you update all notices in that program when you change a single file in that program? * Or do you just go through and change them en masse over your morning coffee, on the grounds that you're about to start programming and updating things?"} {"_id": "186407", "title": "Why Bootstrap 3 changes camelCase to dashes - is it more readable?", "text": "I'm wondering what's the reasoning behind Bootstrap's decision to change all camel case names into hyphenated names in v3.0. I searched on Google, and looked in a few books, but I can only find opinions one way or the other - no hard data. Are there any studies that suggest camel case variable names are more readable than dashes, or is this just a matter of personal preference?"} {"_id": "111976", "title": "Gap year in a computer science course", "text": "Due to circumstances outside of my control, I might have to take a gap year from my computer science course at university. The trouble is that due to other circumstances, for example, living in the middle of nowhere, and even entry-level programming jobs requiring a completed degree, there's a realistically zero prospect of getting a job in this period. What else can I do to significantly progress my career or education in this time frame?"} {"_id": "115294", "title": "What is business systems exposure?", "text": "I saw a job advertised for a DB admin, and one of the things they are looking for is someone who can demonstrate business systems exposure. What does business systems exposure mean? Have I selected the right tag? Thanks"} {"_id": "143414", "title": "Java - multirow array", "text": "Here is my situation: a company has x number of employees and x number of machines. When someone is sick, the program has to give the best possible assignment of people to the machines. But the employees can't work on every machine. I need code for generating the little groups of possible solutions. 
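To make the kind of enumeration concrete, here is a minimal sketch of what I have in mind (plain Java with a small, made-up competence matrix; my actual static example follows below). It recursively tries every machine a person is competent on, backtracking so no machine is used twice:

```java
import java.util.Arrays;

// Minimal sketch (hypothetical data): list every complete assignment of
// people to distinct machines, honoring a boolean competence matrix.
public class Assignments {

    // competence[p][m] == 1 means person p can operate machine m
    static int[][] competence = {
        {1, 1, 0},
        {0, 1, 1},
        {1, 0, 1}
    };

    public static void main(String[] args) {
        assign(0, new int[competence.length], new boolean[competence[0].length]);
    }

    // Give person p every competent, still-free machine in turn, then recurse.
    static void assign(int p, int[] chosen, boolean[] machineUsed) {
        if (p == competence.length) {      // every person has a machine: one solution
            System.out.println(Arrays.toString(chosen));
            return;
        }
        for (int m = 0; m < competence[p].length; m++) {
            if (competence[p][m] == 1 && !machineUsed[m]) {
                machineUsed[m] = true;
                chosen[p] = m;
                assign(p + 1, chosen, machineUsed);
                machineUsed[m] = false;    // backtrack and try the next machine
            }
        }
    }
}
```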
This is a static example: private int[][] arrayCompetenties={{0,0,1,0,1},{1,0,1,0,1},{1,1,0,0,1},{1,1,1,1,1},{0,0,0,0,1}}; => rows are for the people and columns are for the machines.
m1 m2 m3 m4 m5 m6 m7
p1 1 1
p2 1 1 1 1
p3 1 1 1
p4 1 1 1
p5 1 1 1 1
p6 1 1 1 1
p7 1 1 1 1 1 1
My question => with what code do I connect all the people to machines in groups (all the possibilities)? Like:
p1 -> m1, p2 -> m2, p3 -> m3, p4 -> m4, p5 -> m5, p6 -> m6
p1 -> m1, p2 -> m3, p3 -> m3, p4 -> m4, p5 -> m5, p6 -> m6
p1 -> m1, p2 -> m4, p3 -> m5, p4 -> m4, p5 -> m5, p6 -> m6
p1 -> m1, p2 -> m5, p3 -> m3, p4 -> m4, p5 -> m5, p6 -> m6
p1 -> m1, p2 -> m2, p3 -> m3, p4 -> m4, p5 -> m5, p6 -> m6
.... I need a loop, but how? =D Thanks!"} {"_id": "143413", "title": "Differences between Symfony 2 and 1?", "text": "I'm starting Symfony and am interested in learning where Symfony is coming from, in terms of its architectural challenges. What are the architectural or philosophical differences between Symfony 2 and 1? What changes make it so different from the previous version?"} {"_id": "115291", "title": "How much does a programmer giving a free service make from donations?", "text": "I know this is a long-shot question, but if there's anyone making money from donations, would you share how much you're making, and from which service/website?"} {"_id": "35272", "title": "Is there a way to dispute or challenge poor iPhone app reviews?", "text": "Problem: a user has left a review of an iPhone app which is simply incorrect. Question: is there a way to make contact with the reviewer and a) explain how what they've said is incorrect and help them with their issue, or b) challenge the review?"} {"_id": "180120", "title": "How does a binary delta update work?", "text": "Both Android and iOS seem to support binary delta updates for applications. But how does it work? I build a binary program, and neither of the distribution sites has the source code - how does the update process know what has changed?"} {"_id": "180121", "title": "Is there a formula for this?", "text": "TL/DR: Is there any way to work out whether known numbers between a known starting and ending figure should be positive or negative? I am developing an application in PHP which can import and read PDFs. The PDFs are financial ones such as bank statements, with records of transactions in and out of a bank account. I only have PDFs to work with, no other formats such as CSV, unfortunately. I convert the PDF to HTML using pdftohtml and start parsing the data; the intended end result is an array of transactions. So far I have it working smoothly, collecting dates, descriptions and balance. Converting the XML instead doesn't help. There are other pieces of transactional data, such as debit or credit amounts. In the PDF, the credit amount is in one column and the debit amount is in another column, so it is quite clear in the PDF. However, when converted to HTML, the formatting is lost, and therefore I don't know if the amount was a credit or debit amount. So, my question is, given a starting balance and an ending balance and several known figures in between, is it possible for a program to work out if those known figures in between are credit or debit amounts? I imagine there could potentially be several combinations of those known values that reach the ending balance, so I'd like to apply a formula to return the correct credit/debit sequence only if it's the only possible solution. 
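For illustration, this is the brute-force check I have in mind, sketched in Java rather than PHP and with hypothetical names: try every credit/debit sign combination and accept an assignment only when exactly one combination reconciles the starting and ending balances.

```java
import java.util.ArrayList;
import java.util.List;

public class SignSolver {

    // Returns the unique credit(+)/debit(-) pattern that takes startBalance
    // to endBalance, or null if no pattern - or more than one - fits.
    static boolean[] solve(double startBalance, double endBalance, double[] amounts) {
        List<boolean[]> hits = new ArrayList<>();
        for (int mask = 0; mask < (1 << amounts.length); mask++) {  // 2^n sign patterns
            double balance = startBalance;
            boolean[] credit = new boolean[amounts.length];
            for (int i = 0; i < amounts.length; i++) {
                credit[i] = (mask & (1 << i)) != 0;
                balance += credit[i] ? amounts[i] : -amounts[i];
            }
            if (Math.abs(balance - endBalance) < 0.005) {  // tolerate float rounding
                hits.add(credit);
                if (hits.size() > 1) {
                    return null;  // ambiguous: more than one way to reconcile
                }
            }
        }
        return hits.size() == 1 ? hits.get(0) : null;
    }
}
```

The 2^n loop is only workable for a modest number of transactions per statement period (beyond roughly 20 amounts it would need something smarter), but it shows the uniqueness test I mean.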
If there are several ways of adding/subtracting the known values to reach the end balance, I can ask the user to look at it manually, but I'd like to keep this to a minimum if possible. Is it possible to do, do you think? Thank you in advance for any help."} {"_id": "66048", "title": "Should I use a root namespace?", "text": "I'm currently working on a couple projects in Flash ActionScript, and I've been building up a small library of classes. I've been using a naming convention similar to: `foo.events.Bar` and `foo.controls.Baz`, but I've noticed that many people have released their libraries in the `com` and `org` package/namespace (i.e. `com.foo.events.Bar` and `com.foo.controls.Baz`). I assume the meaning of `com` is `common` or `community`, and that `org` is `organization`. Is there a particular reason for adding an _additional_ namespace? Is this common for namespaced languages (Java, C#, AS3,...)?"} {"_id": "180125", "title": "Maintenance-wise, is `else while` without intervening braces considered safe?", "text": "Is `else while` without intervening braces considered \"safe\" maintenance-wise? Writing `if-else` code without braces like below... if (blah) foo(); else bar(); ...carries a risk because the lack of braces makes it very easy to change the meaning of the code inadvertently. However, is the below also risky? if (blah) { ... } else while (!bloop()) { bar(); } Or is `else while` without intervening braces considered \"safe\"?"} {"_id": "188718", "title": "How do I document a communication protocol on top of message queues and channels?", "text": "I'm working on a large project at the moment and I'm trying to document the communication protocol that sits on top of a message queue middleware. The documentation is intended for another organisation that is going to be implementing a new back end to the client. The message format uses Protocol Buffers, so I'm less interested in describing the wire format. I'm interested in documenting the interactions between the service and client. A typical interaction might be something like: 1. Client subscribes to Channel1 2. Client pushes RequestForStream to Queue1 3. Service reads from Queue1, publishes StreamResponse (containing the endpoint to subscribe to, Channel2) to Channel1, and if successful starts publishing notifications to Channel2 4. Client receives from Channel1 and subscribes to Channel2 5. More stuff... How can I best document scenarios like this? Does anyone know of any examples of documented interactions with queues and channels? Edit: I am not writing a message queue. I am using a JMS-like middleware named Nirvana. I want to document the application built on top of the message queue system. Edit 2: In UML, how would I model a queue or channel? Would I show them on a sequence diagram?"} {"_id": "136309", "title": "Why don't software libraries solve all our problems?", "text": "Modular programming and reusable software routines have been around since the early 1960s, if not earlier. **They exist in every programming language.** Conceptually, a software library is a list of programs, each with its own interface (entry points, exit points, function signatures) and state (if applicable). Libraries can be high quality because modules are focused on solving narrow problems, have well-defined interfaces, and the cost can be amortized across all future software programs that will ever use them. Libraries are purely additive: adding new modules does not introduce bugs or limitations in existing modules. 
So why don't software libraries solve all our problems? Why can't we write software as merely a composition of high-quality software modules? Can we fix these problems with software libraries and unleash the full potential of this incredibly powerful mechanism for writing high-quality programs faster? Or are these problems intrinsic to libraries and can never be solved? _Note: Several comments said that I have too many questions. Please treat the above questions as the real questions and everything that follows as points for discussion._ It is true that software libraries are widely used. Some programming languages such as C, Java and Python have enormous software libraries. But there are numerous problems. 1. Some well-known languages have less-than-ideal library support (e.g. C++, Lisp). To some extent this is mitigated by piggybacking on a virtual machine platform (e.g. JVM, CLR). A corollary question: should all future software be written for a virtual machine platform to increase library support? This is problematic for scripts that don't want to incur the cost of launching a virtual machine every time. 2. There is a lot of \"reinventing the wheel\". Have you ever written a linked-list module in C? Yes, of course you have. I don't enjoy writing linked lists in C, but what is the alternative? 3. Can a given library (e.g. libfoo-0.1.2) be trusted as a basis to write your important software? Is the library tested, documented, and does it implement the features you need? How can you tell? 4. Learning a library's API can be as time-consuming as learning a whole new programming language. 5. If a bug is discovered in a library, what is the proper procedure for fixing the bug in your software? How should the bugfix be distributed to all library users? 6. How should libraries be built and distributed? (For example, autotools obviously got it wrong.) Semantic Versioning is good, but how can this be verified and enforced? 7. How should library dependencies be handled? Can we download and install automatically? Will the versions and licenses be compatible? 8. Is the license for a given library compatible with your software? How can you tell? 9. What should you do if the library is abandoned before you start your software? What about after you start your software?"} {"_id": "23240", "title": "Nested CSS styles ~ Where have I seen this concept before?", "text": "I can't find this now, but I've read it before on various blogs discussing the topic. I'll give an example and hopefully it's clear (though it may not clarify anything at all). Given this piece of markup (arbitrary on purpose):
While the CSS could read like this: .myWrapper .img img {border: thin black solid;} .myWrapper .img {margin:10px;} .yourWrapper .img img {border: thick red dashed;} .yourWrapper .img {margin:20px;} It could also be written like this: .myWrapper { .img { margin:10px; img { border: thin black solid; } } } .yourWrapper { .img { margin:20px; img { border: thick red dashed; } } } But I can't remember seeing where that was discussed or if it's something in the works. Anybody know what the hell I'm talking about? And I don't think this is an SO question or I would've put it on SO."} {"_id": "132683", "title": "Build Process for Web Application", "text": "I have a web application with lots of JavaScript and CSS code. I want to minify the CSS and JS code using something like UglifyJS. However, I don't want to program with the minified code, and I want my team to check out only the non-minified code. What's the most common way of doing this? I guess you would push both the minified and non-minified code and have the code reference only minified files. Do most people create a file called BUILD that runs all JS/CSS files through the minifier? Programmers would then have to run the BUILD script before pushing, right?"} {"_id": "154004", "title": "Why did Golang discontinue the \"netchan\" package?", "text": "The Golang \"netchan\" package seems to have been discontinued. That makes me think that the concept of \"networked channels\" was not good practice after all. (Why wouldn't they just \"let it be\" otherwise?) Is this the case? And if it is, why is that?"} {"_id": "109253", "title": "People's experience of Cloud Computing (using Force.com)", "text": "I would like to know about people's experience of working with APEX and the SalesForce.com platform. Was it easy to work with? How similar to Java and C# is it? What did you like? What don't you like? Would you recommend it? Do you think cloud computing has a long-term successful future? My reason for asking is that I am currently looking at a new position which involves working with APEX on the SalesForce.com platform. The position interests me, but I just want to try and understand what I might be signing up for with regard to the languages/platform, as it is completely different from what I have worked with before. I have seen lots of videos/blog posts online (mainly from the recent Dreamforce event) and they obviously are very positive, but I was just after some experiences from developers, both positive and negative. I find cloud computing a very interesting idea, but I am very new to the subject. The position I am looking at offers a fantastic opportunity, but I was just after some opinions on APEX and the platform, as I have no real-world experience, just what I have seen from the online videos. I guess ultimately what I am asking is: * Are APEX and the SalesForce.com platform good to get involved in? **Is development on Force.com just a \"career dead end\"?** * Is cloud computing just a fad? Or does it have a long-term future? Apologies in advance if this is the wrong place to ask such a question. Thanks"} {"_id": "87920", "title": "Is Tracking Software Usage Illegal?", "text": "Let's say I am writing a desktop application, and I am interested to know whether our software really gets used or not. Is it alright to insert code that tracks whether our software is used, for how long, and so on? Note that no personally identifiable information will be collected; all I am interested in knowing is how frequently and for how long the software is used. 
The information will be sent to our server for diagnosis. What do you think?"} {"_id": "194695", "title": "Online courses focused on learning Lisp for beginners?", "text": "I'm looking for an online course that I can use to learn programming using Lisp (especially Scheme), from scratch. I didn't find anything similar on Coursera/Udacity - the only resource I found was on MIT OpenCourseWare, but I'm not sure if it's something that is tailored for an online audience, and there's also the fact that the lectures are based on the first edition, which is not the one I have... I'm not looking for explicit video lectures - a set of good lecture notes, with exercises/assignments and, if possible, an exam at the end would be great to get me started!"} {"_id": "177464", "title": "Can someone provide a short code example of compiler bootstrapping?", "text": "This Turing Award lecture by Ken Thompson on the topic \"Reflections on Trusting Trust\" gives good insight into how the C compiler was made in C itself. Though I understand the crux, it still hasn't sunk in. So ultimately, once the compiler is written to do lexical analysis, parse trees, syntax analysis, bytecode generation, etc., is separate machine code written again to do all of that for the compiler itself? Can anyone please explain with a small example of the procedure? Bootstrapping on wiki gives good insights, but only a rough view of it. PS: I am aware of the duplicates on the site, but found them to be overviews of things I am already aware of."} {"_id": "130686", "title": "Is there a design pattern that would apply to discount models?", "text": "Are there any known design patterns for implementing discount models? By discount models, I mean the following: 1. If a customer buys Product X, Product Y and Product Z, he gets a discount of 10% or $100. 2. If a customer buys 100 units of Product X, he gets a discount of 15% or $500. 3. If a customer has bought more than $100K in the last year, he gets a flat 20% discount. 4. If a customer has purchased 2 units of Product X, he gets 1 unit of Product X (or Product Y) free. 5. ... Is there a generic pattern that can be applied to handle all of the above scenarios? I am thinking of a few models, but am unable to come up with a generic one."} {"_id": "72515", "title": "Examples of permission-based authorization systems in .Net?", "text": "I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are of using each method. (I'm sure there are other ways than the ones I've listed....) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me, and I'm having trouble figuring out what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). 
They can be very generic, like boolean or numeric values, or they can be more complex, such as a range of values or a list of database items to be checked/unchecked. Permissions can be set at the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded."} {"_id": "140295", "title": "Moving away from .Net to Ruby and coping without IntelliSense", "text": "I am in the process of trying to learn Ruby; however, after spending nearly 10 years in the MS stack, I am struggling to get by without IntelliSense. I've given RubyMine a try, which does help; however, ideally I would like to go free, which would mean no RubyMine. How have other people learnt to cope with remembering everything instead of relying on Ctrl-Space? Any advice is appreciated at the moment, as I am feeling very stupid (no jokes about MS devs please ;))"} {"_id": "140296", "title": "Interview questions for programming tutor?", "text": "My family is looking for a programming/computer science tutor. Personally, I want to learn Java or some other brand of web programming. I am best described as a PC \"power user.\" I have never programmed in the past and would like a good jump start. I am a very quick learner and do not expect the tutor to have to teach me the ultra-basic stuff that I can learn myself. My son also needs a programming tutor. He just got into Carnegie Mellon as a computer science major. Having done only robotics and mathematics in the past, he is very nervous that he does not have the same level of knowledge as his future classmates. I need some help coming up with a list of questions to ask potential tutors and some criteria to judge them by. Thanks! Edit: So far I have come up with just the obvious... > 1. Where did you receive your education? > 2. What languages are you familiar with? > 3. How long have you been tutoring? > 4. What made you decide to become a tutor? > 5. What software projects have you worked on? > 6. What work references can you give me? > 7. How much do you charge? >"} {"_id": "131084", "title": "Designing web-based plugin systems correctly so they don't waste as many resources?", "text": "Many CMS systems which rely on third parties for much of their code often build \"plugin\" or \"hook\" systems to make it easy for developers to modify the codebase's actions without editing the core files. This usually means an **Observer** or **Event** design pattern. However, when you look at systems like WordPress, you see that on every page they load some kind of _bootstrap_ file from each plugin's folder to see if that plugin will need to run for that request. It's this poor design that causes systems like WordPress to spend many extra MBs of memory loading and parsing unneeded items on each page. **Are there alternative ways to do this? I'm looking for ideas in building my own.** For example, is there a way to load all this once and then cache the results so that your system knows how to lazy-load plugins? In other words, the system loads a configuration file that specifies all the events each plugin wishes to tie into, and then saves it for future requests? If that also performs poorly, then perhaps there is a special file structure that could be used to make educated guesses about when certain plugins are unneeded to fulfill the request. Any ideas? 
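To sketch the caching idea, here is roughly what I'm picturing (in Java for illustration, with made-up names): the event-to-plugins manifest is built once at install time and persisted, so a request only instantiates the plugins actually subscribed to the events it fires.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

interface Plugin {
    void handle(String event);
}

// Sketch of a lazy plugin registry: a persisted manifest maps each event
// name to factories for the plugins subscribed to it, so firing an event
// only loads the plugins that actually care about it.
class PluginRegistry {
    private final Map<String, List<Supplier<Plugin>>> manifest; // event -> lazy factories
    private final Map<Supplier<Plugin>, Plugin> loaded = new HashMap<>();

    PluginRegistry(Map<String, List<Supplier<Plugin>>> cachedManifest) {
        this.manifest = cachedManifest; // read from a disk cache, rebuilt only on plugin install
    }

    void fire(String event) {
        for (Supplier<Plugin> factory : manifest.getOrDefault(event, List.of())) {
            loaded.computeIfAbsent(factory, Supplier::get) // instantiate on first use only
                  .handle(event);
        }
    }
}
```

With this shape, a plugin that only hooks into, say, an admin-page event never costs anything on a public page view, because its bootstrap file is never touched.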
If anyone wants an example of the \"plugin\" concept, you can find one here."} {"_id": "131086", "title": "SVN: Working with branches using the same working copy", "text": "We've just moved to SVN from CVS. We have a small team, everyone checks in code on the trunk, and we have never ever used branches for development. We each have directories on a remote dev server with the codebase checked out. Each developer works on their own sandbox with an associated URL to pull up the app in a browser (something like the setup here: Trade-offs of local vs remote development workflows for a web development team). I've decided that for my current project, I'll use a branch because it would span multiple releases. I've already cut a branch, but I am using the same directory as the one originally checked out (i.e. for the trunk). Since it's the same directory (or working copy) for both the branch and the trunk, if, for example, a bug pops up in the app, I switch to the trunk, commit the change there, and then switch back to my branch for my project development. My questions are: * Is this a sane way to work with branches? * Are there any pitfalls that I need to be aware of? * What would be the optimal way to work with branches if separate working copies are out of the question? I haven't had issues yet, as I have just started doing it this way, but all the tutorials/books/blog posts I have seen about branching with SVN imply working with different working copies (or perhaps I haven't come across an explanation of mixed working copies in plain English). I just don't want to be sorry three months down the road when it's time to integrate the branch back into the trunk."} {"_id": "253241", "title": "Poltergeist and factories", "text": "I've just come across the Poltergeist anti-pattern - and maybe it's because the morning coffee hasn't kicked in yet, but I read the description: > a poltergeist (or gypsy wagon) is a short-lived, typically stateless object > used to perform initialization or to invoke methods in another, more > permanent class. and thought that this sounds a lot like some implementations of Factory that I've seen (and written myself) - particularly in web development, where a Factory might only be used once per request. Unfortunately, the wiki article is fairly superficial. Could someone explain a bit more about how a Factory should be implemented, and also why a Poltergeist is an anti-pattern?"} {"_id": "92762", "title": "How to completely decouple Model from View/Controller in Java Swing", "text": "Is there a collection of commonly-agreed-upon design guidelines for separating the Model classes from the View/Controller classes in a Java Swing app? I'm not so concerned that the View/Controller know nothing about the Model as the other way around: I'd like to design my Model to have no knowledge of anything in javax.swing. Ideally it should have a simple API enabling it to be driven by something as primitive as a CLI. It should be, loosely speaking, an \"engine.\" Communicating GUI events to the model isn't too hard -- the Action Performers can call the model's API. But what about when the model makes its own state changes that need to be reflected back to the GUI? That's what \"listening\" is for, but even \"being listened to\" is not entirely passive; it requires that the model know about adding a listener. The particular problem that got me thinking involves a queue of Files. 
On the GUI side there's a `DefaultListModel` behind a `JList`, and there's some GUI stuff to choose files from the file system and add them to the `JList`. On the Model side, it wants to pull the files off the bottom of this \"queue\" (causing them to disappear from the JList) and process them in some way. In fact, the Model code is already written -- it currently maintains an `ArrayList` and exposes a public `add(File)` method. But I'm at a loss as to how to get my Model working with the View/Controller without some heavy, Swing-specific modifications to the Model. I'm very new to both Java and GUI programming, having always done \"batch\" and \"back-end\" programming up to now -- hence my interest in maintaining a strict division between model and UI, if it's possible and if it can be taught."} {"_id": "230999", "title": "Constants vs public properties for configuration", "text": "My application has a few high-level configuration options such as directories which will be used for various things, database connection information, and a few other settings which are required for the application to run. I'm debating with myself whether or not constants are better than variables for this purpose and would like some input. By having: $config = new Config; new Whatever($config->foo, $config->bar); Instead of: new Whatever(Config::FOO, Config::BAR); I get the power of polymorphism: I can quickly swap between configuration settings by instantiating $config as a different class with the same class members. There's obviously a trade-off here. Anything which needs the config options must instantiate a $config object, using memory and creating extra code, as well as making it so the config options are no longer immutable. The immutability problem could be solved with a method, e.g. $config->get('Foo'), so that is potentially a non-issue, although it does add to verbosity, and a method call is more expensive than reading a constant or variable."} {"_id": "230998", "title": "Is this proper OO design for C++?", "text": "I recently took a software processes course and this is my first time attempting OO design on my own. I am trying to follow OO design principles and C++ conventions. I attempted and gave up on MVC for this application, but I am trying to \"decouple\" my classes such that they can be easily unit-tested and so that I can easily change the GUI library used and/or the target OS. At this time, I have finished designing classes but have not yet started implementing methods. The function of the software is to log all packets sent and received, and display them on the screen (like Wireshark, but for one local process only). The software accomplishes this by hooking the send() and recv() functions in winsock32.dll, or some other pair of analogous functions depending on what the intended Target is. The hooks add packets to SendPacketList/RecvPacketList. The GuiLogic class starts a thread which checks for new packets. 
When new packets are found, it utilizes the PacketFilter class to determine the formatting for the new packet, and then sends it to MainWindow, a native Win32 window (with intent to later port to Qt). ![UML Class Diagram](http://i.stack.imgur.com/dtPwP.png) Full size image of UML class diagram Here are my classes in skeleton/header form (this is my actual code): class PacketModel { protected: std::vector<byte> data; int id; public: PacketModel(); PacketModel(byte* data, unsigned int size); PacketModel(int id, byte* data, unsigned int size); int GetLen(); bool IsValid(); //len >= sizeof(opcode_t) opcode_t GetOpcode(); byte* GetData(); //returns &(data[0]) bool GetData(byte* outdata, int maxlen); void SetData(byte* pdata, int len); int GetId(); void SetId(int id); bool ParseData(char* instr); bool StringRepr(char* outstr); byte& operator[] (const int index); }; class SendPacket : public PacketModel { protected: byte* returnAddy; public: byte* GetReturnAddy(); void SetReturnAddy(byte* addy); }; class RecvPacket : public PacketModel { protected: byte* callAddy; public: byte* GetCallAddy(); void SetCallAddy(byte* addy); }; //problem: packets may be added to list at any time by any number of threads //solution: critical section associated with each packet list class Synch { public: void Enter(); void Leave(); }; template <class PacketType> class PacketList { private: static const int MAX_STORED_PACKETS = 1000; public: static const int DEFAULT_SHOWN_PACKETS = 100; private: std::vector<PacketType*> list; Synch synch; //wrapper for critical section public: void AddPacket(PacketType* packet); PacketType* GetPacket(int id); int TotalPackets(); }; class SendPacketList : PacketList<SendPacket> { }; class RecvPacketList : PacketList<RecvPacket> { }; class Target //one socket { bool Send(SendPacket* packet); bool Inject(RecvPacket* packet); bool InitSendHook(SendPacketList* sendList); bool InitRecvHook(RecvPacketList* recvList); }; class FilterModel { private: opcode_t opcode; int colorID; bool bFilter; char name[41]; }; class FilterFile { private: FilterModel filter; public: void Save(); void Load(); FilterModel* GetFilter(opcode_t opcode); }; class PacketFilter { private: FilterFile filters; public: bool IsFiltered(opcode_t opcode); bool GetName(opcode_t opcode, char* namestr); //return false if name does not exist COLORREF GetColor(opcode_t opcode); //return default color if no custom color }; class GuiLogic { private: SendPacketList sendList; RecvPacketList recvList; PacketFilter packetFilter; void GetPacketRepr(PacketModel* packet); void ReadNew(); void AddToWindow(); public: void Refresh(); //called from thread void GetPacketInfo(int id); //called from MainWindow }; I'm looking for a general review of my OO design, use of UML, and use of C++ features. I especially just want to know if I'm doing anything considerably wrong. From what I've read, design review is on-topic for this site (and off-topic for the Code Review site). Any sort of feedback is greatly appreciated. Thanks for reading this."} {"_id": "32727", "title": "What are invariants, how can they be used, and have you ever used them in your program?", "text": "I'm reading _Coders at Work_, and in it there's a lot of talk about invariants. As far as I've understood it, an invariant is a condition which holds both before and after an expression. They're, among other things, useful in proving that a loop is correct, if I remember my Logic course correctly. Is my description correct, or have I missed something? Have you ever used them in your program? 
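To make the question concrete, here is the kind of thing I picture when I say \"invariant\" - a small Java sketch, where the comments mark the condition that (as I understand it) holds before and after every pass through the loop:

```java
// Sum the first n elements of a[]. The commented invariant is what lets
// you argue that the loop computes the right answer.
static int sum(int[] a, int n) {
    int total = 0;
    int i = 0;
    // Invariant: total == a[0] + ... + a[i-1] (holds trivially: i == 0, total == 0)
    while (i < n) {
        total += a[i];
        i++;
        // Invariant re-established: total == a[0] + ... + a[i-1]
    }
    // At exit: invariant holds and i == n, so total == a[0] + ... + a[n-1]
    return total;
}
```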
And if so, how did they benefit you?"} {"_id": "101188", "title": "On-site interview without phone interviews - meaning?", "text": "I recently applied to work at a tech company which - to the best of my knowledge - usually does two rounds of phone screening before bringing applicants in for the on-campus interview. They contacted me and want to bring me up without doing a phone interview. I know this isn't much to go on, but should I take this at face value, or should I read something into this? Call me an optimist, but it seems like a positive thing. Maybe somebody on the inside recommended me without my knowing? Or they found something on my resume/online they liked the look of? Or maybe there's some sort of interview quota they had to fill? Any insights from hiring managers/HR people would be greatly appreciated. If this is a common thing that means something specific, knowing what it is might be to my advantage. Thanks! EDIT: Since a few people seem to indicate that it might depend on the company in question, let's say that it's a large national company (not Google or Microsoft, though), and that I don't live anywhere near the headquarters (it's about as far away as two places can be in the continental US)."} {"_id": "101187", "title": "Are there problems with using Reflection?", "text": "I don't know why, but I always feel like I am \"cheating\" when I use reflection - maybe it is because of the performance hit I know I am taking. Part of me says, if it is part of the language you are using and it can accomplish what you are trying to do, then why not use it. The other part of me says, there has to be a way I can do this without using reflection. I guess maybe it depends on the situation. What are the potential issues I need to look out for when using reflection, and how concerned should I be about them? How much effort is it worth spending to try to find a more conventional solution?"} {"_id": "230992", "title": "How can I capture an incoming email with ASP.net-mvc?", "text": "I would like to write a web system that can capture and parse incoming emails. Traditionally the web systems I write are ASP.NET MVC, running in a cloud-hosted environment like AppHarbor or Azure. How can I leverage those (and potentially my DNS) to achieve this?"} {"_id": "101182", "title": "Learning PHP with a Rails/Django/Lift background", "text": "While I have been at university studying a CS degree, I have been picking up web development on the side. As I started in the last couple of years, I have mainly been using Rack/Ruby-based frameworks such as Rails and Sinatra, but I have also looked at - and occasionally used - Web.py, Django and Lift. I realised that I have never used or seen PHP used for web development. This might seem strange but, as a young twenty-something, learning PHP doesn't seem like the most fun/hip route to take. Given this background (which I am sure is shared by many people my age), what is the best way to approach learning PHP web development? What are the key differences/similarities? Also, what frameworks are widely used and/or respected?"} {"_id": "189445", "title": "Why does Resharper prefer \"as\" to \"is\"?", "text": "When I write code like this, where obj is a local variable: if (obj is IMyInterface) { var result = (IMyInterface)obj; // .... } Resharper offers to change it into code like this: var result = obj as IMyInterface; if (result != null) { // ... } I prefer the former, as it offers no opportunity for accidental null reference exceptions. 
What reasons are there for preferring the other form? Why does Resharper recommend this?"} {"_id": "182411", "title": "What are the pros and cons of inter-process communication done via sockets vs shared memory?", "text": "I understand that two of the many options for inter-process communication are: 1. Shared memory 2. Sockets I actually saw these two options being exposed by IntelliJ IDEA for debugging a Java application. I want to know what the pros and cons of each approach are."} {"_id": "189440", "title": "How to avoid repetitively logging in to web site?", "text": "While developing web sites it can be annoying that I have to log in to the site. Every time the session runs out I have to go through a flow like... Open logon page -> enter username/password -> click link to navigate to my page. I sometimes use Selenium to record this flow, and then automate it. But Selenium doesn't work in all browsers, and it still takes too long to load, etc. I would like to be able to just refresh the page, exactly like I would if the site didn't have a login page. Does anyone use any strategies to achieve this? Any way to disable login during development? Or other tools which can help improve the situation?"} {"_id": "182418", "title": "Aggregate Root and Lots of Data Efficiency", "text": "It's more of a scenario, but it isn't far-fetched at all. Let's say I have an Aggregate Root (AR) Warehouse which is used to manage product stock. The Product itself is an AR in a different bounded context (BC), but in this BC it is represented only by an id. In the Warehouse I can add a new product (it must be unique), I can remove it, and I can update stock. Of course, I can communicate the stock for a product and maybe even keep the in/out flow for a product. The problem is you can easily reach hundreds or thousands of products. So, for any warehouse action you'll have to load everything, even if that action won't use all that info. It's highly inefficient. A solution I could think of is to pretty much 'break' the Warehouse AR into specialized objects for different actions. But this means we no longer have an AR and we're back to a CRUD-like solution. Even more so, the AR was split not because the Domain needs it, but because the technical details need it. It looks like you can do DDD up to a certain point; after that you go CRUD or buy a MUCH BIGGER and more expensive server. What do you think? Can we have both DDD and efficiency when lots of data is involved?"} {"_id": "50118", "title": "Avoid GPL violation by moving library out of process", "text": "Assume there is a library that is licensed under the GPL. I want to use it in a closed-source project. I do the following: 1. Create a small wrapper application around that GPL library that listens on a socket, parses messages, and calls the GPL library, then returns the results. 2. Release its sources (to comply with the GPL). 3. Create a client for this wrapper in my main application and don't release its sources. I know that this adds huge overhead compared to static/dynamic linking, but I am interested in the theoretical angle."} {"_id": "235862", "title": "Rails engine testing - use dummy app or real parent app?", "text": "I'm using Rails engines to break up a big app into smaller pieces. The parent app mostly handles users and authentication. In one of my engine tests I want to log in a user before each test. How should I access this user login functionality, which exists in the parent app, from the engine? It seems there are two options: 1) Build user authentication into the dummy app. 
When you run `rails plugin new app_name --mountable`, a \"dummy app\" - just a plain Rails app - is created in your test folder. During tests the engine is mounted onto this dummy app. For feature (integration, acceptance, what-have-you) tests, I thought it would be good to use real objects whenever possible. So I was planning on using a factory to create a real user and then log in that user. To do this with the dummy app, I would need to build that user functionality in. This seems like a pain, because if the real parent app changes, I need to change this dummy app as well. 2) Put all my tests in the parent app. This will work fine, I think, but it seems fishy. I feel like I should put my engine tests in the engine."} {"_id": "43835", "title": "Recommended Reading for Polishing JavaScript coding style?", "text": "I've been coding in JavaScript for a while now and am fairly familiar with some of the more advanced coding features of the language (closures, self-executing functions, etc). So my question is, what advanced books/blogs or anything else would you recommend to help tighten up my coding style? For example, recently I was coding something similar to: var x = ['a', 'b', 'c']; var exists = false; for(var i = 0; i < x.length; i++){ exists = x[i] === 'b' ? true : exists; } But found that the following condensed code would work better: var y = {'a':'', 'b':'', 'c':''}; var exists = 'b' in y; Both store the same value in 'exists', but the second is less common but much cleaner. Any suggestions for where I should go to learn more tricks like this? Edit: I recently found this blog with a lot of good detail on the intricacies of JavaScript: http://javascriptweblog.wordpress.com/"} {"_id": "43838", "title": "Do you own your tools?", "text": "A colleague of mine wrote a post a while ago asking Do you own your tools. It raises an important question. Do you? I answered way down in the comments. As an independent, I do own my tools. Even when I wasn't independent, I had my own (fully licensed) tools that I used for personal development. I don't think buying your own tools is something to puff your chest up about (just because you can buy a $100 pair of basketball sneakers, they won't make you as good as Michael Jordan), but it IS an investment in yourself that shouldn't be taken lightly. What do you think, good people?"} {"_id": "140872", "title": "Why is hill climbing called an anytime algorithm?", "text": "From Wikipedia, Anytime algorithm: > In computer science an anytime algorithm is an algorithm that can return a > valid solution to a problem even if it's interrupted at any time before it > ends. **The algorithm is expected to find better and better solutions the > more time it keeps running.** Hill climbing: > Hill climbing can often produce a better result than other algorithms when > the amount of time available to perform a search is limited, such as with > real-time systems. It is an **anytime algorithm**: it can return a valid > solution even if it's interrupted at any time before it ends. The hill climbing algorithm can get stuck in a local optimum or on a ridge; after that, even if it runs for an infinite amount of time, the result won't get any better. So why is hill climbing called an anytime algorithm?"} {"_id": "121568", "title": "PHP rand function (or not so rand)", "text": "I was testing PHP's rand function by writing to an image. Of course, the output shows that it's not so random. 
The code I used: My question is, if I use an image width/length (the variable $lenght in this example) like 512, 256 or 1024, it is very clear that it's not so random. When I change the variable to 513, for example, it is much harder for the human eye to detect it. Why is that? What is so special about these numbers? 512: ![512 pixels](http://i.stack.imgur.com/T0qjD.png) 513: ![513 pixels](http://i.stack.imgur.com/vDzXS.png) **Edit: I'm running XAMPP on Windows to test it.**"} {"_id": "140874", "title": "How to store a list of Objects that might change in future?", "text": "I have a set of Objects of the same class which have different values for their attributes, and I need a function to find the best match out of these objects under given scenarios. In the future these objects might increase in number as well. It's quite similar to the way we have the Color class in AWT: we have some static color objects in the class with different RGB values. But in my case, say, I need to choose the suitable color out of these static ones based on certain criteria. So should I keep them in an ArrayList or an enum, or keep them as static vars as in the case of Color? Because I will need to iterate through all of them and decide upon the best match, I need them in some sort of collection. But in the future, if I need to add another type, I will have to modify the class and add another list.add(object) call for it, and then it will violate the open-closed principle. How should I go about it? **EDIT:** To be more precise, I have roughly 7-8 branches of restaurants, which won't increase that much (maybe by 3-4 in a few years), and I need a function that will return the nearest Restaurant satisfying a few other criteria. For that I need to iterate through all of them to decide which one suits the customer the most."} {"_id": "70822", "title": "How to use open source licenses, and what do you recommend to me?", "text": "I have some code that I'd like to share, and I'd like to publish it under an open source license. But I don't know how you have to use these licenses. How do you activate an open source license? Just by including the text of the license in all files? What else do I have to do? Do I need to buy some rights? Include a readme.txt in all the directories? I mean, what do I have to do if I want my code to be protected by the legal text of a license? The second question is which is the best license for these conditions: * I don't care what people do with my code (educational purposes, making money); I don't mind whatever they want to do with it. * But I want people not to delete my name from the code, and if they use my code, to be forced to mention me. If they change my code and someone asks for those modifications, they must give the code with the modifications. * But I don't want to force people to publish the code of their applications, even if they are using my code. What is the best license for those purposes?"} {"_id": "194979", "title": "FOSS licensing decision: What to read? What factors to consider?", "text": "Which reasoning would you follow to choose one license over another? Which literature would you expect someone to read if he wants to make a meaningful decision about licensing? I specifically don't go too much into detail about my current project, because I am looking to learn how to make this decision correctly without getting a degree in law first."} {"_id": "88707", "title": "Questions about software licensing", "text": "I've been having a discussion about licensing and open source software. 
Basically, the other guy is saying that licensing is easy: if you're going to build a product, you can use an (any) open source project and make money by selling that code. My issue is that, say I create a website or app with a project that uses a GPL license, the restrictions aren't so straightforward - correct me if I'm wrong on each of these scenarios: 1 - I create an iPhone app using GPL code and put that app into the App Store - the code must be freely available to people buying that app. 2 - I create a website that my client hosts - they must have access to the code. 3 - I create a website as SaaS that my client \"leases\" but does not own - though it is hosted on their infrastructure - they must have access to that code. Am I right on each of those assumptions? Are there any other issues I should be aware of under any other licensing terms for other licenses?"} {"_id": "85380", "title": "How can I control Visual Studio's window creation behavior?", "text": "I have recently installed Visual Studio 2010. By default it has a nice layout, and when I open files, the tabbed windows for those files appear where I like them to appear: source code on the upper half of the screen; compiler output, test status, and trace windows on the bottom half. But, after a period of using it, I fat-finger something, and then the window positioning logic goes haywire. One of the tabs gets undocked, and then after I redock it, nothing else operates like it used to. So maddening! It opens source code on the bottom, it opens other windows in unpredictable places. I undock and redock the windows where I want them, and the next time I open them, they go to the wrong place again. WTF? Even the \"start page\" gets moved. This is what it looks like when I start up: ![enter image description here](http://i.stack.imgur.com/8sKDn.png) I know I can do the `devenv /resetsettings` thing to restore to the original, and I have done this (several times), but really I would like to know: 1. How is this happening? What am I doing to cause this behavior? How can I avoid it? 2. Can I undo it - get it to behave sanely again - without stopping Visual Studio and restarting with the /resetsettings switch? What are the magic mouse clicks to make it revert? I know it's challenging to describe UI gestures, but if you have hints, I'd love to hear them. Does anyone else have this problem?"} {"_id": "75895", "title": "Flowcharting and Method Calls", "text": "I am drawing out some flowcharts and am wondering if I am approaching this correctly. In essence, I have several method calls and I am flowcharting each separately. However, several of these methods make a method call for some info and then continue. See this example: ![enter image description here](http://i.stack.imgur.com/bmqBD.png) I have 3 other methods that also call GetQueue() and I am wondering if I am representing this correctly. The AddQueue() flow visually looks like it is broken. NOTE: Changes made in my flowchart: ![enter image description here](http://i.stack.imgur.com/pWa9Q.png)"} {"_id": "200291", "title": "How much programming should I know before looking for a first job/internship at a tech company?", "text": "I'm in college and just started to learn programming. How much should I know, at the least, before I start to look for jobs/internships so that they won't laugh me out of the office? 
I just started to learn programming and CS, and knew literally nothing about them before college."} {"_id": "200297", "title": "Should you reward based on overall project completeness or velocity?", "text": "The development team I'm in has been working on a VERY large project for months now, and we recently started to implement a kanban board with agile practices in our process. We have seen MASSIVE improvement in our throughput! It's so exciting! But we have one struggle right now. We have a total ticket count for the project, and our supervisor has given us incentives (candy and popcorn boxes) for when we get down to 100 and 50 tickets. It at first felt like a great idea, but it feels like new tickets are created just as fast as we can resolve them, and it's making the team feel worse about our progress. So here's my question: Should you reward based on overall project completeness or velocity? And if completeness, how do you make that feel more reachable?"} {"_id": "203924", "title": "HTML + CSS or HTML5 + CSS3 or HTML5 + jQuery", "text": "I want to be a good UI Developer. I'm confused about what I should learn ( **I need some serious suggestions** ): 1) **HTML + CSS + JavaScript or HTML5 + CSS3 + JavaScript or HTML5 + CSS3 + jQuery** 2) Is it required to learn CSS first before learning CSS3? 3) If the above technologies need to be prioritized, what would the order be?"} {"_id": "184717", "title": "Randomly select from list with increased odds", "text": "I have a list of entities. Every entity contains a number that holds how many times the entity has been selected. I need to make a function that selects `n` (say 25%) of the entities, randomly. What I want to do is increase the odds for entities that have been selected the fewest times. Suppose that after 5 runs, the number of times the entities have been selected can be anywhere from 0 to 5. But I don't want to have such a spread. I want the number of times the entities are selected to be more or less equal. How can I write a function that increases the odds for entities that haven't been selected as often as others? One way I could think of is to make a list that has more occurrences of less-selected entities, increasing the chance the randomizer selects such an entity. Any hints, tips, ideas? **EDIT:** Wow. Closed as not being a real question and reopened again. For not being a real question, it did get a lot of answers and comments. Thanks for that. I got exactly what I wanted from them."} {"_id": "130952", "title": "What's the best way to learn nature-inspired algorithms?", "text": "I completed the Machine Learning course (Stanford) and got very interested; after some research, I decided that I'd like to learn nature-inspired algorithms. I've found some resources like: * Clever Algorithms: Nature-Inspired Programming Recipes (Book) * Wyss Institute (Website) * Programming Game AI by Example (Book) The first reference looks good and complete, with pseudocode (giving me the possibility to implement everything in Ruby, my preferred language), and it also gives Ruby implementations for every algorithm. But it lacks exercises to practice with, which I think is a key feature. The second is something that inspired me a lot to start studying this area, but they don't have any course or material to study. The third one looks good too, but has only a small number of exercises, and might be too focused (I do like games, but I also want to study everything else related to nature-inspired algorithms). 
Also, it is focused on C++ (not that it is a hard language, but I don't like its limitations compared to Ruby), and I'd prefer something in Ruby or pseudocode (although that is not my main priority). Does anyone know of something that also has exercises to complement the theory? Is there anything better to learn from, with a particular focus on exercises? (Maybe courses or video lectures.)"} {"_id": "205514", "title": "Is it necessary to follow the standard, take the C standard for that matter?", "text": "Maybe this question is a duplicate and I am dumb to ask it here, but I searched a bit and none of the suggested titles answered my question exactly. There are some very experienced folks on SO who always talk about the C standard. People seem to not like non-portable solutions, even if they work for me. OK, I understand that the standard needs to be followed, but doesn't it put shackles on a programmer's creativity? What are the concrete benefits that come from following a standard? Especially since compilers may implement the standard slightly differently."} {"_id": "200741", "title": "Tic tac toe class diagram", "text": "I'm in a software engineering class and I want to practice some skills on the most basic case possible: tic-tac-toe. I know this is overkill, but I want to do it in \"proper\" OOP. I designed a class diagram for it, but there is one point where I don't clearly see what would be the best design decision. ![enter image description here](http://i.stack.imgur.com/US5Er.jpg) (\"Joueur\" means \"Player\", \"PlancheDeJeu\" means \"Game board\") Given this diagram, according to the `Expert`, `Low coupling` and `Strong cohesion` design patterns (sorry if those are not the correct English terms, I am translating), the score should be kept by the user class (Joueur). However, since there is no direct relation between Joueur and Système, I don't exactly know how the game could display the score in a clean way. In this diagram, Système would have to ask the PlancheDeJeu object about the score, and it would itself have to get it from Joueur. This seems wrong. I am trying to think of how to design an intermediary object that would link Joueur and Système, but I can't come up with a good idea. What would be the best way to go here? Thanks!"} {"_id": "200740", "title": "Use case decomposition for class registration system", "text": "I am currently working on refactoring a summer camp registration system to include some new features, and will also be using it as the basis for a new after-school class registration system. For this new version of the system, I want to start out by writing use case descriptions for the registration process (and ultimately I plan to document all the use cases). I understand that the purpose of a use case is to accomplish a user's goal, but I'm still not sure how the registration process should be divided into use cases. Here are the options I've been thinking about: 1. One big \"Register Student(s) For Classes\" use case 2. Divide it into separate use cases: \"Enter Student Information\", \"Select Classes\", \"Pay for Registrations\" 3. Have separate use cases but also have a \"Register Student(s) For Classes\" use case which \"includes\" the more granular use cases Another factor to consider is timing: parents may want to enter their students' basic information (name, age, etc.) prior to the time when registration begins, but of course some parents will complete the whole registration process in one sitting. 
Also, payment won't always immediately follow class selection; in some cases parents will be allowed to complete their payment later (within a 48 hour window). I read some articles about use cases on Alistair Cockburn's website but I'm still confused as to what the guidelines are for dividing up use cases in a case like this. My inclination is to go with option 3 since the system will have a wizard interface that could potentially guide the user through the registration from start to end, accomplishing the \"user goal\" of registering his/her student(s) for classes, but the \"included\" use cases could also be initiated on their own. I was also wondering if I should have a separate \"Modify Class Selections\" use case for the case when the parent logs back into the system later and changes class selections on behalf of the student, or whether that should just be an alternative scenario of the \"Select Classes\" use case. What would you recommend, and why? What guidelines should be used in general when analyzing use case boundaries, beyond the idea of a \"user goal\"? * * * **Update:** I found this quote from the first chapter of Alistair Cockburn's book to be helpful: > Depending on the level of view needed at the time, the writer will choose to > describe a multi-sitting or **summary** goal, a single-sitting or **user > goal** , or a part of a user goal, or **subfunction**. Communicating which > of these is being described is so important that my students have come up > with two different gradients to describe them: by height relative to sea > level (above sea level, at sea level, underwater), and by color (white, > blue, indigo). It seems to me that \"Register Student(s)\" is a summary level goal, since it could potentially take multiple sittings to complete. The line is blurred in this case, but I think this idea of a summary use case suggests that my approach #3 above is a reasonable one."} {"_id": "200746", "title": "How could distribution and reuse flexibility be hurt by linking my program as a static or dynamic library?", "text": "I'm writing a small program that I want to be able to link with other programs. I also intend to run it from a command line interface, and maybe later with a GUI interface. How could distribution and reuse flexibility be hurt by linking my program as a static or dynamic library? Distribute it as a library and * Dynamically link for GUI and command line interfaces * Statically link for a stand alone binary"} {"_id": "205513", "title": "Refactoring Tomcat webapp deployments", "text": "**Background** We have several webapps running on Tomcat 7.0.x (on Linux). For historical reasons, we have populated the \"Common\" lib (i.e. $CATALINA_HOME/lib) with many application jars. These jars include Spring, Jackson, and our own persistence-layer jars. As we deploy new persistence-layer jars, we are embroiled in a classpath hell: some jars are OK in ~/webapps/foo/WEB-INF/lib, but some must be excluded and placed in Common. Recent adventures show that some may have to be in _both places_. Our local (Windows) build is similarly messy: we have one folder filled with jars. We build with Eclipse (not Gradle, Ant, etc) and do not have an explicit list of jars used by the web applications. The Linux build does use Ant, but semi-selectively globs jars into the WAR files. **Question** I would like to white-board a strategy on how to extricate ourselves from this predicament. One approach is listed below, but I would appreciate any refinements, or alternative approaches. 
The goal is to correct the deploy process, not the local build. **Strategy** On the local machine, use Gradle to compile each web application: * add compile-time dependencies until compilation is OK * locate Gradle's local cache (or observe output) to find transitive run-time dependencies * add all jars to ~/WEB-INF/lib for the web app, with explicit references in the formal Ant build * deploy into a fresh, pristine Tomcat install * observe startup logs for any \"ClassNotFound\" exceptions, both for 3rd-party libraries and our application libraries * add these libraries to the formal Ant build * perform regression QA tests on the application and monitor for \"ClassNotFound\""} {"_id": "205512", "title": "LuaJit FFI and hiding C implementation details", "text": "I would like to extend an application using LuaJit FFI. Having seen http://luajit.org/ext_ffi_tutorial.html this is surprisingly easy compared to the Lua C API. So far so good. However, I do not plainly want to wrap C functions but provide a higher-level API to users writing scripts for the application. In particular, I do not want users to be able to access \"primitives\", i.e. the `ffi.*` namespace. Is this possible, or will that ffi namespace be available to users' Lua scripts? On the issue of sandboxing Lua I found http://lua-users.org/wiki/SandBoxes which does not talk about FFI though. Furthermore, the plan I have described above assumes that the introduction of abstraction layers happens on the Lua side of the code. Is this an advisable approach, or would you rather abstract functionality in the statically compiled code (on the C side)?"} {"_id": "205518", "title": "Getting related foreign keys from parent entities", "text": "I'm thinking about a design issue which affects my project's database. Supposing there are three different tables: * CLIENT * ORDER * PACKING_SLIP * * * Each **order** has its **client** and different **packing slips**. So there are some foreign keys which are compulsory: there would be _clientId_ for the `ORDER` and _orderId_ for the `PACKING_SLIP` table. That makes full sense. ![First diagram](http://i.stack.imgur.com/DLSEq.png) Now suppose in my logic I want to have access to the client from the packing slip. As I'm using an ORM tool such as Hibernate, that involves first accessing the order from the packing slip and then getting the client from it. If I want to have access to the client directly, I should add the _clientId_ foreign key also to the `PACKING SLIP` table. ![Second diagram](http://i.stack.imgur.com/7jwu0.png) My question is: is it a correct design if there's the possibility to get the client by joining the `ORDER` table? Isn't it a bit redundant? I think it's a control problem and the database part shouldn't take care of it..."} {"_id": "249541", "title": "Multivariable decisions", "text": "I am running into a situation where my program can have different outcomes depending on the state of some variables. 4 variables are involved and they can each have several (3 to 4) different states. All the possible combinations are leading me to about 48 different cases which would be resolved using a 4-level-deep nested if/else structure. So I have a couple of questions: 1. Is there a better technique to draw logic or decision tables, or some other structure, so that you can model a decision tree for this many outcomes?
When a decision is dependent on only 2 variables, you can easily model this on a spreadsheet, which is two-dimensional by nature, but how do you deal with a case like the one I mentioned? 2. Other than a 4-level-deep nested if/else structure, is there a better programming technique to deal with this?"} {"_id": "38543", "title": "What techniques/methodologies do you use to organize your own open source projects, and why?", "text": "Aside from using a specific tool such as JIRA or Bugzilla, what techniques or development methodologies do you use to keep your OSS development efforts organized? And more importantly, how do you prioritize which features to deliver when there really aren't any paying clients to decide which features are the most important to deliver?"} {"_id": "249547", "title": "HDD Failure Paranoia", "text": "I tend to make small programs for myself. Whenever I make an application I use a database application like MS Access or MySQL. I can CRUD my data as I want. No problem in finishing a task. The problem is when I try exploring more into file IO. I have heard of HDD crashes due to high read/write operations. I used to think that these happened only on server machines. But then I came to know that it happens on desktop machines too! So now I have this huge mental blockage. Whenever I try converting my former software into ones with raw file IO, I lose the \"mental drive/motivation\" and freeze. I get thoughts like \"OMG this task will cause this many reads/writes to the disk\". I fear losing my hard disk. My goodness! It has happened so many times that I wish that the computer engineers had redesigned the internals of the computer to fix the problem, and so I curse them a lot. A lot of my time gets wasted remotivating myself and the projects never get completed. Ya, ya, I know that behind the scenes the database applications use raw file IO to do their work! But who thinks of that when using them? All you are mostly thinking of is SQL and the resulting table in the resultset/recordset. The database applications are like a black box over raw file IO. Does anyone else think the same? Are my fears valid? What should I do? Any views/consolations? Help! This makes me mad and unproductive!"} {"_id": "246593", "title": "Should I persist notification before or after publishing it through Redis pub/sub?", "text": "I'm implementing a mechanism to notify a group of users about newly inserted blog comments. The architecture uses the Redis Pub/Sub mechanism. By definition, the pub/sub mechanism aims to propagate messages to the right subscribers, without storing/persisting anything. Although blog comments are obviously persisted in the DB, I also expect all sent notifications to be persisted in order to be retrieved at any time by an offline user that logs in again. Of course, Redis could have persistent messages, but the cost for that is pretty high. I use a distinct main database, let's take MySQL for the example. Currently, the workflow is: 1. A blog comment posted by a user is handled by my backend and first stored in the DB. 2. An event is raised, let's call it \"CommentedBlogEvent\", that triggers a worker aiming to detect all the targeted users of the comment. 3. Assuming that 10 users are targets of the comment, I insert 10 \"notification rows\" in my database associating the commentId and the targeted userId, each specific to a user since it would have a read/unread flag. 4.
Then, once all notifications have been persisted, I use Redis Pub/Sub to trigger subscribers aiming to push results to the concerned online clients (through WebSockets for instance). **The \"issue\" is that the process could be slow because of step 3.** Would it be tolerated to do step 4 before step 3, meaning publishing before persisting in the DB, since a potential data loss isn't dramatic in my case (non-financial data etc.)? Advantage: the client gets the result more quickly. Drawback: the user could receive a notification that failed to be stored in the background, leading to a missing notification when the user refreshes the page. What is the best way to handle this case, while keeping my main database as the notifications store?"} {"_id": "208597", "title": "How do you cope with long hours of coding?", "text": "I am just a high-school student but for a while now I have been coding for weeks (from early in the morning to late at night). I really enjoy it; however, it can be really monotonous. I usually listen to some web radios but they take my attention from coding sometimes. How do you cope with long hours of coding? How often do you take breaks, and how do you spend them? Do you have any kind of further advice?"} {"_id": "208592", "title": "Can PHP handle mouse events?", "text": "I'm a newbie in PHP and I want to know if PHP can handle mouse events such as mouseClicked, mouseDoubleClicked, mouseMoved, mouseHovered, etc. And can it handle events like a button click, such as displaying an alert after clicking the submit button? If yes, how can it be done? I know that this can be done in JavaScript '; } elseif (isset($keyword)) { echo ''; } ?> This is basically the js: function ShowSearchResults(key,value) { $.getJSON(\"includes/search_results.php?\"+key+\"=\"+value, function(data) { // populate search results $(\".searchResults\").html( \"search results\" ); }) .error(function() { $(\".searchResults\").html( \"no results\" ); }); } My questions are (while keeping security and best practices in mind): Is it bad practice to pass the query string values to my JavaScript function from within the body? I initially did this because I had the impression that it was not a good idea to extract query string values using JavaScript. Maybe I'm wrong there. Should I be extracting the query string values from the $(document).ready function, and then calling the appropriate JavaScript functions from there? What is the appropriate way to do that (htmlspecialchars equivalent)? Keep in mind that within my JavaScript code, I use $.getJSON to call another server-side function (in PHP) that can reject anything insecure. Any insight would be greatly appreciated."} {"_id": "154313", "title": "Switching from abstract class to interface", "text": "I have an abstract class which has all abstract methods except one, which constructs objects of the subclasses. Now my mentor asked me to move this abstract class to an interface. Having an interface is no problem except with the method used to construct subclass objects. Where should this method go now? Also, I read somewhere that interfaces are more efficient than abstract classes. Is this true? Here's an example of my classes: abstract class Animal { //many abstract methods getAnimalobject(some parameter) { return //appropriate subclass } } class Dog extends Animal {} class Elephant extends Animal {}"} {"_id": "215151", "title": "Nested Entities and calculation on leaf entity property - SQL or NoSQL approach", "text": "I am working on a hobby project called Menu/Recipe Management.
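For the Redis question above, a hedged sketch of the publish-first variant being asked about, using redis-py and a DB-API connection (sqlite3 shown; a running local Redis server is assumed, and the table and channel names are invented for illustration). The trade-off is exactly the one the question states: the publish happens immediately, and a failed insert only loses the offline/refresh copy.

```python
import json
import sqlite3
import redis

r = redis.Redis()
db = sqlite3.connect("notifications.db")
db.execute("CREATE TABLE IF NOT EXISTS notifications"
           " (comment_id INTEGER, user_id INTEGER, is_read INTEGER DEFAULT 0)")

def notify(comment_id, user_ids):
    payload = json.dumps({"commentId": comment_id, "userIds": user_ids})
    # Step 4 first: push to online clients right away (fire-and-forget).
    r.publish("blog.comments", payload)
    # Step 3 afterwards: persist the per-user rows in one transaction.
    # If this fails, only the stored copy for offline users is lost.
    with db:
        db.executemany(
            "INSERT INTO notifications (comment_id, user_id, is_read) VALUES (?, ?, 0)",
            [(comment_id, uid) for uid in user_ids],
        )
```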
This is how my entities and their relations look. A `Nutrient` has properties `Code` and `Value`. An `Ingredient` has a collection of `Nutrients`. A `Recipe` has a collection of `Ingredients` and occasionally can have a collection of other `recipes`. A `Meal` has a collection of `Recipes` and `Ingredients`. A `Menu` has a collection of `Meals`. The relations can be depicted as ![Menu Entities and Relationships](http://i.stack.imgur.com/Un7Pr.png) In one of the pages, for a selected menu I need to display the effective nutrients information calculated based on its constituents (Meals, Recipes, Ingredients and the corresponding nutrients). As of now I am using SQL Server to store the data and I am navigating the chain from my C# code, starting from each meal of the menu and then aggregating the nutrient values. I think this is not an efficient way, as this calculation is being done every time the page is requested while the constituents change only occasionally. I was thinking about having a background service that maintains a table called MenuNutrients (`{MenuId, NutrientId, Value}`) and will populate/update this table with the effective nutrients when any of the components (Meal, Recipe, Ingredient) changes. I feel that a GraphDB would be a good fit for this requirement, but my exposure to NoSQL is limited. I want to know what the alternative solutions/approaches to this requirement of displaying the nutrients of a given menu are. Hope my description of the scenario is clear."} {"_id": "124277", "title": "How was the first Malbolge interpreter tested?", "text": "According to Wikipedia, Malbolge was so difficult to understand when it arrived that it took two years for the first Malbolge program to appear. If this is true, how was the first Malbolge interpreter tested (to check if it did the right thing when a Malbolge program was given)? Was it tested at all?"} {"_id": "20840", "title": "Correcting indentation", "text": "I work for a multi-national I.T. services company. I'd thought they'd be applying good development practices but they aren't. One example is the source code base for a Java project that is currently being developed for a client. It's full of incorrectly indented code. There are mixtures of tabs and spaces on lines (no, they're not using the GNU formatting style) as well as 2-, 3- and 4-space indents. Everywhere else I've worked, people have agreed to a fixed indentation coding standard and stuck with it, so that's why I find this project an anathema. How do I diplomatically get them to indent properly?"} {"_id": "232865", "title": "Thoughts and Best Practices on Static Classes and Members", "text": "I am very curious as to thoughts and industry best practices regarding static members, or entire static classes. Are there any downsides to this, or does it participate in any anti-patterns? I see these entities as \"utility classes/members\", in the sense that they don't require or demand instantiating the class in order to provide the functionality. What are the general thoughts and industry best practices on this? See the below example classes and members to illustrate what I am referencing. // non-static class with static members // public class Class1 { // ... other non-static members ... public static string GetSomeString() { // do something } } // static class // public static class Class2 { // ... other static members ...
public static string GetSomeString() { // do something } } Thank you in advance!"} {"_id": "240429", "title": "A 'task' system which has an ending, to get ready for the next task", "text": "I want to make a system in which there are certain tasks. For example, let's talk about a game. I want to make it so there are 100+ tasks doing different things, but when the player's magic level is 5, it will do the magic task; if the player's fighting skill is level 5, it will fight. I have that already; however, here is the catch. I want to make it so that once the task executes, it will have an 'ending'. So, it will do something before it finally gets killed. My code so far: for (GameTask a : s.gameTasks) { if (a != null) { if (a.validate()) { a.execute(); } } } It will loop over all the tasks and execute them; however, how can I implement an 'ending' for each, so that it will get ready for the next task? I hope I have written it clearly as English is not my first language. tl;dr, I want to add an 'end' to each task so that it can be killed and can be ready for the next task."} {"_id": "141182", "title": "Database structure of a subscriber list", "text": "I am building an application that allows different users to store subscriber information. * To store subscriber information, a user first creates a list. For each list, there is a ListID. A `Subscriber` may have different attributes: email, phone, fax, .... The settings differ per list, so a `require_attribute` table is introduced. It is a bridge between Subscriber and List that stores `Listid, subid, attribute, datatype`. **That means the system has a lot of lists, each user has their own lists, and the lists have different attributes: some lists have email and phone, some may have phone, address, name, mail... And the datatypes differ: some may use 'name' as integer, some may use 'name' as varchar.** * attribute means email, phone; it defines `which list has which subscriber attribute` * datatype means, for each attribute, what its datatype is. Table: subscriber: Fields: subid, name, email. Table: Require Attribute: Fields: Listid, subid, attribute, datatype. The attributes here are {name, email}. So a simple data set is Subscriber: 1, MYname, Myemail Require Attribute: Listid, 1, 'email', 'integer' Listid, 1, 'name', 'varchar' I found that this kind of storage is too complex to handle, since a subscriber is shared with everybody, so if one person wants to change the datatype of name, it will also affect the data of the other users. Simple error situation: Subscriber: list1, Subscriber 1, name1, email1 list2, Subscriber 2, name2, email2 Require Attribute: List1, Subscriber 1, 'email', 'varchar', List1, Subscriber 1, 'name', 'varchar', Listid, Subscriber 2, 'email', 'varchar', Listid, Subscriber 2, 'name', 'integer', **if user B changes the data type of name in the require attribute from varchar to integer, it causes a problem, because list 1 is owned by user A, who wants the datatype to be varchar, but list 2 is owned by user B, who wants the datatype to be integer.** So how can I redesign the structure?"} {"_id": "118369", "title": "What type of problem is this?", "text": "Business Analyst creates a requirement. Requirement implemented by developer. BA performs QA. Bug report includes various bugs that were never requirements. E.g. a requirement is made to display a report. The report is displayed. In QA, the analyst sees that a user can enter invalid data. The invalid data makes the report invalid. Per the requirements, an error message is displayed.
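For the task question just above, one common shape is a small template-method lifecycle: give every task an explicit end hook and have the loop call it before the next task takes over. A hedged Python sketch of the idea (the original is Java, and the validate/execute/on_end names are made up for illustration):

```python
class Task:
    def validate(self):
        """Should this task run right now?"""
        raise NotImplementedError

    def execute(self):
        """One tick of the task's work."""
        raise NotImplementedError

    def on_end(self):
        """Wind-down hook: release state, save progress, etc."""
        pass

class Scheduler:
    def __init__(self, tasks):
        self.tasks = tasks
        self.current = None

    def tick(self):
        for task in self.tasks:
            if task is None or not task.validate():
                continue
            if self.current is not None and self.current is not task:
                self.current.on_end()   # let the old task finish cleanly
            self.current = task
            task.execute()
            break
```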
The analyst decides that they want the previously run report to have its data cleared. Currently, when entering an invalid state, the system displays results from the previous report along with an error message. In this case, the problem is that we are in an error state. The error state requires an error message, but is otherwise undefined. Is this \"bug\" scope creep, poorly defined requirements, lack of an overall business process for error management, lack of technical foresight on the part of the BA/Dev, a valid request, or other? How can you minimize these types of problems? **Other details (specific to this app)** Web app. Persistent state is an overall part of the design (thus the reason you see the report still displayed). There are little to no HTTP POST/GET operations performed outside of AJAX. The navigation menu changes pages without full postbacks. Instead, we perform an HTTP GET to load HTML/JSON into portions of the web page on click."} {"_id": "152878", "title": "Overload or Optional Parameters", "text": "When I have a function that might, or might not, receive a certain parameter, is it better to overload the function, or to add optional args? If each one has its ups and downs - when would I use each?"} {"_id": "210508", "title": "How can I stop myself overwriting member variables with 'new' ones?", "text": "The bulk of my programming experience has been with C++ and (shudder) FORTRAN (I'm a scientist not a programmer as such, but I do my best). I've recently started using Python extensively and find it great. However, I just spent a frustrating few hours tracking down a bug that turned out to be caused by me creating a new object member in a function, i.e. def some_func(self): self.some_info = list() but I had already created the member `some_info` for this object elsewhere for a different purpose, so bad things obviously happened later on, but it was tricky to track back to here. Now obviously in a language like C++ this is impossible to do, since you can't just create object members on the fly. Because I'm used to the language taking care of this, I don't have a well-developed procedural discipline to prevent me abusing the freedom that Python (and other dynamically typed languages) provides. So what is the best way to prevent this kind of mistake when using languages like Python?"} {"_id": "16689", "title": "What causes overtime and how can it be avoided?", "text": "Today, my manager **_told_** me that I must work overtime to make up for a lack of planning and project management. Instead of incentivizing this unpaid mandatory overtime, my manager has made it clear I have no choice in the matter. This isn't a single project. We currently have a dozen projects all on the go at one time, and I simply can't get them all done. Therefore, I must work overtime so we don't have to push deadlines. * Is this a sign of an ignorant or disrespectful manager, or simply an inexperienced one? * I'm in this position because of a lack of planning and project management (I think). How might I avoid this in the future? I'm no Project Manager; it isn't my strength. * What are good ways to get an employee to work overtime if you can't directly pay them? Good incentives, etc.
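For the Python question above (re-creating `self.some_info` in a method), the usual mitigations are to declare every attribute in `__init__` and, if you want the interpreter to enforce it, add `__slots__`. A hedged sketch: `__slots__` blocks the silent creation of brand-new members (typos included), though it cannot warn you when you deliberately reuse a name that is already declared.

```python
class Analysis:
    # The complete attribute set, declared once. Assigning to any name not
    # listed here raises AttributeError instead of silently creating a new
    # member on the fly.
    __slots__ = ("samples", "some_info")

    def __init__(self):
        self.samples = []
        self.some_info = []

    def some_func(self):
        self.som_info = []   # typo: caught immediately at runtime

try:
    Analysis().some_func()
except AttributeError as e:
    print(e)   # 'Analysis' object has no attribute 'som_info'
```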
From what I hear, gaining your employees' respect is the single best way to get your employees to work overtime, although you should never make a habit of it."} {"_id": "152871", "title": "For nodejs what are best design practices for native modules which share dependencies?", "text": "Hypothetical situation: I have 3 node modules, all native, A, B, and C. * A is a utilities module which exposes several functions to JavaScript through the node interface; in addition it declares/defines a rich set of native structures and functions * B is a module which is dependent on data structures and source in A; it exposes some functions to JavaScript, and in addition it declares/defines native structures and functions * C is a module which is dependent on data structures and source in A & B; it exposes some functions to JavaScript, and in addition it declares/defines native structures and functions So far when setting up these modules I have a preinstall script to install other dependent includes, but in order to access all of another module's source, what is the best way to link to its shared library object (*.node)? Is there an alternative best practice for native modules (such as installing all source from other modules before building that single module)?"} {"_id": "16687", "title": "How do you get a client to understand the importance of a project lifecycle?", "text": "The situation is that I have a number of in-house clients who continue to have trouble grasping the concept of defining clear requirements for a project and committing to them. Unfortunately my job situation is that I can't decline such projects, and I am often forced into situations that I can tell are doomed to scope creep (or worse) from the very start. Telling them that detailed requirements are vital and why we need them (using examples of past troubled projects due to this) has not helped. I am looking for techniques, tips, words, etc. that you might use to motivate or illuminate clients in such situations to the necessity of requirements, setting milestones, etc. I am only a developer, and not a manager."} {"_id": "210501", "title": "ASP MVC Action parameters all strings vs explicit type", "text": "A colleague and I were developing an ASP.NET MVC project and during this we created a new action method to handle one of our views. I initially started creating the action like so: public ActionResult Index(int id, int numberOfAssessments, int age) { // do stuff } However when I started writing we had a little discussion: **Him** : _\"Why am I making these ints, they should all be strings\"_ **Me** : _\"Well they are ints so it makes more sense to make them strongly typed and so stop potential errors later in the code block\"._ **Him** : _\"But what if we wanted to change the id to an alpha-numeric representation? We now have to change this signature everywhere, whereas if it was a string it would already handle this\"._ **Me** : _\"It's not a string now and we don't have any plans for making it a string at this moment though\"._ We didn't really decide explicitly what to do but it did get me thinking. **Question** : Should parameters to Action methods all be strings if that is going to make it more flexible in the future, or should they be specific to their type so that we can leverage MVC binding and error handling? What is the best or accepted practice in these situations?
Are there too many considerations to take in for any definitive answer?"} {"_id": "152875", "title": "Taking Object Oriented development to the next level", "text": "Can you mention some advanced OO topics or concepts that one should be aware of? I have been a developer for 2 years now and am currently aiming for a certain company that requires a web developer with a minimum experience of 3 years. I imagine the interview will cover the basic object oriented topics (Abstraction, Polymorphism, Inheritance, Design patterns, UML, Databases and ORMs, SOLID principles, DRY principle, etc.) I have these topics covered, but what I'm looking forward to is bringing up topics such as Efferent Coupling, Afferent Coupling, Instability, the Law of Demeter, etc. Until a few days ago I never knew such concepts existed (maybe because I'm basically a communication engineer, not a CS graduate). Can you please recommend some more advanced topics concerning object oriented programming?"} {"_id": "195471", "title": "CSS3 shorthand properties", "text": "Working on my own, I thought I needed to learn the CSS3 shorthand properties because, as I now know, they affect a website's loading time, so I need to optimize it a little bit. I was wondering: is there any order in which to write property values when using shorthand properties? For example: `font: 1em/1.5em bold italic serif`"} {"_id": "166879", "title": "Scenarios for differences between UICulture and Culture", "text": "In .NET there are two \"culture\" values, UICulture and Culture. The first one is for localized texts on the UI, while the latter sets the culture for date and number formats. I can't come up with any reason or scenario for those two values to be different. Is there some reason to do so?"} {"_id": "166875", "title": "Metrics / Methodology for estimating resource utilization for software in planning stage", "text": "I'm looking for approaches to estimate the resource utilization of a (web) application in a JEE environment. The overall target is to get a forecast of hardware/software requirements while the application is still under development or even in the planning stage. Is this task too complex (many different factors) to get a fairly reliable estimate without spending too much time?"} {"_id": "166870", "title": "External file (images, sounds) naming convention at Xcode", "text": "* What are good naming conventions in Xcode regarding external files (images, sounds), etc.? * Is there any guideline from the vendor, Apple? * As we store our projects in SVN, is there any complication from the hosting server (we use Linux + an SVN server)?"} {"_id": "99543", "title": "What is the difference between K&R and One True Brace Style (1TBS) styles?", "text": "I have read the Wikipedia article on Indent Styles, but I still don't understand. What is the difference between K&R and 1TBS?"} {"_id": "99548", "title": "What factors influence you to try out a new framework or tool?", "text": "I'm in the process of putting the final touches on an open-source framework that I hope to release in the next few months. It's something that I'd like to develop a community for, and so I'm curious about what factors influence your decision to use a new framework or tool and why.
Some of the specific things I'd like to know more about (feel free to add to this): * Types of documentation/tutorials/instruction * Community support (comments/forum) * Updates (blog/social media/feeds) * Look and feel of the project website design * White papers/testimonials * A big feature list * Community size * Tools * Ability to contribute * Project test coverage (stability/security) * Level of buzz (recommended by friends or around the web) * Convincing marketing copy Ideally, I'd like to have all of the above, but what specific features/qualities will carry greater weight in getting programmers to adopt something new? **What says, 'This is a professional-grade project,' and what are red flags that keep you from trying it out?**"} {"_id": "70767", "title": "Planning Before Starting a Project", "text": "How much planning should one do before starting a project? Should they have everything already planned when they begin coding or should they just get a basic idea of what they want and then make things up on the fly? For instance, I want to create a YouTube client that allows for streaming videos and for downloading multiple videos simultaneously (similar to Minitube). I know what I want the interface to look like when the program is first opened. Is this enough for now? Should I create this and then plan the next step or should I continue planning? How much planning is enough?"} {"_id": "7966", "title": "What traits do the best managers you've worked for have in common?", "text": "I have been listening to Scott Hanselman and Rob Conery's podcast, This Developer's Life. In the latest episode, they discuss personality traits: > 1.0.4 - Being Mean. > > What makes people mean in our industry? What about aggressive? Confident? > What's the difference? Would you rather have a drill sergeant for a boss, or > a zen master? We talk to Cyra Richardson and Giles Bowkett. It got me thinking, what traits did the best managers _you've_ worked for have in common? EDIT: Just to clarify, as there have been a few close votes, I'm interested in whether there are traits common to managers of developers that are not necessarily those traits that a manager of some other profession requires. As for whether this is programming related or not, well I don't want to ask this question on a site that isn't about programming because, frankly, I'm not as interested in what people who make cans of soup for a living want from their managers as I am interested in what _developers_ want from their managers."} {"_id": "9175", "title": "Do you handle Out-Of-Memory conditions?", "text": "What do you do when `malloc` returns 0 or `new` throws an exception? Just halt, or try to survive the OOM condition and save the user's work?"} {"_id": "95435", "title": "How to handle large scale js+jquery projects using well written, Object-Oriented JavaScript and jQuery code?", "text": "I love the whole user experience/interface thing and write a lot of jQuery and JavaScript (pure JavaScript for HTML5 stuff, like canvas, file API, etc). The problem I face now is that my code is growing large, 3-4 files per project with each file around 1000 lines of code, and maintaining and extending them is becoming a headache. I know that Object-Oriented JavaScript coding can solve a lot of my problems. I know the basic concepts of Object-Oriented Programming, and have done a decent amount of it in C++, Java and Python in school and college projects.
I want to read some good examples of large-scale OO programming in JavaScript (and jQuery), or if possible, some book specifically on writing big OO programs in JavaScript. EDIT1: Checked out backbone.js; I guess that's what I needed. I will be sure once I use it in my next project. Thanks Raynos. Also, my question should have been about handling large JS projects, and not what it is. Sorry."} {"_id": "95432", "title": "Finding a new job when my skillset isn't matching the job ads", "text": "I am a C++ programmer and have worked on various web apps using Perl, Ruby, etc. I have a decent knowledge of Java as well. The problem I face while applying for jobs is that companies have specific requirements like, \"Experience with C# 3.5/4. Should have experience building web apps with ASP.NET\". What should you do in these situations? 1. Not consider these companies and look for companies with your expertise? 2. Just put that in your resume, and learn the technology before interviews? After all, how hard is it for a developer to wrap their brains around new languages? 3. Try to convince the employer in the cover letter that you are good at one thing but can learn a new language or tool pretty easily? 4. Something else?"} {"_id": "95431", "title": "Priority list of tasks stored in a database", "text": "I am trying to think of the best way to do the following: I have a list of tasks stored in the database. A task has a priority assigned to it. You can change the priority of a task to reorder the order in which they should be carried out. I am thinking of something very similar to Pivotal Tracker. So imagine we had the following: 1 Task A 2 Task B 3 Task C 4 Task D 5 Task E We decide that E is now the most important task: 1 Task E 2 Task A 3 Task B 4 Task C 5 Task D I need to update all 5 tasks to give them a new priority. If Task B then becomes more important than A, I would have 1 Task E 2 Task B 3 Task A 4 Task C 5 Task D I need to update Task B and A only. What ways would you go about structuring this in a DB? I imagine that you would have different projects stored in the same table, each with their own weight. Would it be better to have a task point at the task that takes place after it (a bit like a linked list)? This is just a brain dump really. I was just wondering how you would go about implementing something like this."} {"_id": "230970", "title": "How to develop custom functions on top of Ejabberd?", "text": "I'm developing a real-time chat app. After searching around for a while, I found that Ejabberd and Erlang are a good option. The problem is that Ejabberd does not provide all the functions I need. I need some custom features such as location-based matching and anonymous login. So how do I develop custom functions on top of Ejabberd? Write modules for it? Or develop another standalone server app (web or other kind of server app) to interact with it? Thanks!"} {"_id": "42193", "title": "What are the benefits of closures, primarily for PHP?", "text": "I am beginning the process of moving code over to PHP 5.3 and one of the most highly touted features of PHP 5.3 is the ability to use closures. My understanding of closures is that they allow anonymous functions, can be assigned to variable names, and have interesting scoping abilities. From my point of view the only seeming benefit in real-world applications is the reduction of clutter in the namespace because closures are anonymous. Am I wrong in this? Should I be trying to put closures wherever I code?
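For the priority-list question above, one widely used alternative to renumbering every row is a float (or gapped-integer) rank column: moving a task writes only that task's row, by giving it a rank halfway between its new neighbours. A hedged Python sketch of the idea; in SQL this is a single `UPDATE tasks SET rank = ? WHERE id = ?`, and you occasionally renumber the whole list when the gaps between ranks get too small.

```python
class Task:
    def __init__(self, name, rank):
        self.name, self.rank = name, rank

def move_between(task, above, below):
    # Only the moved task is written; its neighbours keep their ranks.
    lo = above.rank if above else 0.0
    hi = below.rank if below else lo + 2.0
    task.rank = (lo + hi) / 2.0

tasks = [Task(n, float(i)) for i, n in enumerate("ABCDE", start=1)]
move_between(tasks[4], None, tasks[0])   # E jumps above A: one write
tasks.sort(key=lambda t: t.rank)
print([t.name for t in tasks])           # ['E', 'A', 'B', 'C', 'D']
```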
**EDIT** : I have already read this post on JavaScript closures."} {"_id": "16975", "title": "Does your team function well without following a work methodology (such as scrum)?", "text": "I've worked in a number of small teams over the last 9 years. Each had the obvious good practices, such as short meetings, revision control, continuous integration software, issue tracking and so on. In these 9 years, I've never heard much about development methodologies; for example, there's never been a \"we're doing scrum\", or \"lets do agile\", or anything more than a passing reference. All of the teams seemed to function fine without following much process; we were just freeflowing and just naturally worked well. Has anyone else progressed for long periods of time without encountering scrum/agile/etc? The only exposure I've had to these is through sites like this one. I read questions like http://programmers.stackexchange.com/questions/16905/sprint-meetings-what-to-talk-about ... and all the talk seems to describe almost robotic-like people who follow a methodology finite state machine. Is it really (though exaggerated) like that? I wonder if the people posting on the internet are just loud supporters of \"best practice\", with similar textbook views, not really reflecting how people work... Or maybe I've just encountered teams that make their processes up naturally. Furthermore (I am in the UK, which may be relevant)... I think if a methodology were introduced to any of the teams I'd worked on, they'd just reject it as being silly and unnecessary... then carry on. I'd tend to agree; following processes seems a bit unnatural. Is this typical or common?"} {"_id": "17081", "title": "How can I measure my competency level or skill-set in ASP.NET?", "text": "> As an ASP.NET developer with 5+ years' experience, I would like to measure my > competency level in ASP.NET & SQL Server. Basically my goal is to raise my > competency level and skill-set in ASP.NET; before that I need to know what > my level is considering current ASP.NET and related technologies... Please provide some pointers... * Is there any skill-set-measuring quiz or exam which accounts for experience and technology? * How do you measure your own or your junior developers' skills or competency? Note: This question was originally asked on SO; I am reposting it here to get more input."} {"_id": "97496", "title": "What does it mean if a job requires a \"Bachelor's degree in Computer Science or related field\"?", "text": "Specifically, what is meant by \"related field\"? I'm in the process of pursuing an IT Infrastructure B.A.S. from the U of M (Twin Cities), but have been playing around with the idea of just doing the CSCI B.S. I don't want to be a hardcore programmer, but would having the CSCI degree, instead of the ITI degree, open more doors to whatever profession within the IT world I end up setting my sights on?"} {"_id": "97490", "title": "Is having a switch to turn mocking on or off a code smell?", "text": "I have a method that looks like this: def foobar(mock=False, **kwargs): # ... snipped `foobar` actually makes several calls to Amazon S3 and returns a composed result. In order to make this testable, I introduced the `mock` parameter to turn off making live network connections. It feels like a code smell to me, but testability is also very important.
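Since the closures question above asks what they buy beyond a tidier namespace: the other big benefit is capturing local state inline, especially for callbacks. PHP 5.3's `function (...) use ($x) { ... }` behaves much like the Python equivalents sketched here; this is a hedged illustration of the idea, not PHP-specific advice.

```python
def make_multiplier(factor):
    # 'factor' is captured by the closure: no global variable and no
    # one-off class just to carry a single value around.
    def multiply(x):
        return x * factor
    return multiply

double = make_multiplier(2)
print(double(21))   # 42

# The classic callback case: behaviour defined inline at the call site
# instead of a named throwaway function polluting the namespace.
names = ["bob", "Alice", "carol"]
print(sorted(names, key=lambda s: s.lower()))
```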
What else can I do if I want to do away with the parameter?"} {"_id": "118989", "title": "Using static classes as namespaces", "text": "I have seen other developers using static classes as namespaces public static class CategoryA { public class Item1 { public void DoSomething() { } } public class Item2 { public void DoSomething() { } } } public static class CategoryB { public class Item3 { public void DoSomething() { } } public class Item4 { public void DoSomething() { } } } To instantiate the inner classes, it will look like the following: CategoryA.Item1 item = new CategoryA.Item1(); The rationale is that namespaces can be hidden by using the \"using\" keyword. But by using the static classes, the outer-layer class names have to be specified, which effectively preserves the namespaces. Microsoft advises against this in the guidelines. I personally think it impacts readability. What are your thoughts?"} {"_id": "121001", "title": "Why is Conway's \"Game of Life\" used for code retreats?", "text": "Code Retreat is an all-day training event that focuses on the fundamentals of software development. There's a \"global\" code retreat day coming up, and I'm looking forward to it. That said, I've been to one before and have to say there was a huge amount of chaos... which is fine. One thing that I still don't get is why the \"Game of Life\" is a good problem for TDD, and what good and bad TDD for it feels like. I realize this is a pretty open-ended question, so feel free to comment."} {"_id": "121004", "title": "Which of these algorithms is best for my goal?", "text": "I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, basically what happens is this: 1. A line is drawn between a pre-defined static point inside the region and the mouse's new position. 2. The point where that line intersects the edge of the allowed area is found. 3. The mouse is moved to that point. This works, but only works _perfectly_ for a perfect circle with the pre-defined point set in the exact center. Unfortunately, this will never be the case. The application will be used with a variety of rectangles and irregular, amorphous shapes. On such shapes, the point where the line drawn intersects the edge will usually not be the closest point on the shape to the mouse. I need to create a new algorithm that finds the _closest_ point to the mouse's new position on the edge of the allowed area. I have several ideas about this, but I am not sure of their validity, in that they may have far too much overhead. While I am not asking for code, it might help to know that I am using Objective C / Cocoa, developing for OS X, as I feel the language being used might affect the efficiency of potential methods. My ideas are: * Using a bit of trigonometry to project lines would work, but that would require some kind of intense algorithm to test every point on every line until it found the edge of the region... That seems too resource-intensive since there could be something like 200 lines that would each have as many as 200 pixels to check for black/white.... * Using something like an A* pathing algorithm to find the shortest path to a black pixel; however, A* seems resource-intensive, even though I could probably restrict it to only checking roughly in one direction.
It also seems like it will take more time and effort than I have available to spend on this small portion of the much larger project I am working on; correct me if I am wrong and it would not be a significant amount of code (>100 lines or around there). * Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point and then initiating an algorithm that checks all 8 pixels around that pixel, finds the next border pixel in one direction, and continues to do this until it comes back to the starting pixel. I could then store that data in an array to be used for the entire duration of the program, and have the mouse re-positioning method check the array for the closest pixel on the border to the mouse target position. That last method would presumably execute its initial border mapping fairly quickly. (It would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even permanently store the data to make launching faster.) However, I am uncertain as to how much overhead it would take to scan through that array for the shortest distance for _every single mouse move event_... I suppose there could be a shortcut to restrict the number of elements in the array that will be checked to a variable number starting with the intersecting point on the line (from my original algorithm), and raise/lower that number to experiment with the overhead/accuracy tradeoff. Please let me know if I am overthinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, **or** if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check the color value against a loaded bitmap that is black/white.) // // This part of my code runs every single time the mouse moves. // CGPoint point = CGEventGetLocation(event); float tX = point.x; float tY = point.y; if( is_in_area(tX,tY, mouse_mask)){ // target is inside O.K. area, do nothing }else{ CGPoint target; //point inside restricted region: float iX = 600; // inside x float iY = 500; // inside y // delta to midpoint between iX,iY and tX,tY float dX; float dY; float accuracy = .5; //accuracy to loop until reached do { dX = (tX-iX)/2; dY = (tY-iY)/2; if(is_in_area((tX-dX),(tY-dY),mouse_mask)){ iX += dX; iY += dY; } else { tX -= dX; tY -= dY; } } while (abs(dX)>accuracy || abs(dY)>accuracy); target = CGPointMake(roundf(tX), roundf(tY)); CGDisplayMoveCursorToPoint(CGMainDisplayID(),target); } Here is \"is_in_area(int x, int y)\": bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask){ NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSUInteger pixel[4]; [mouse_mask getPixel:pixel atX:x y:y]; if(pixel[0]!= 0){ [pool release]; return false; } [pool release]; return true; } ![bitmap for the \"allowed area\" \\(allowed area is black\\)](http://i.stack.imgur.com/MHaNC.png)"} {"_id": "121005", "title": "CQRS without using other patterns", "text": "I would like to explain CQRS to my team of developers. I just can't figure out how to explain it in the simplest way so they can implement the pattern rapidly without any other frameworks.
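A hedged Python sketch of the border-mapping idea from the mouse question above (the original is Objective-C, but the structure is language-neutral). A one-off full scan is simpler than contour following and still cheap at these sizes, and the per-event lookup is then a linear scan over a few thousand border points, which comfortably fits a 30-times-per-second budget. It assumes `is_in_area` tolerates out-of-range coordinates:

```python
def trace_border(is_in_area, width, height):
    """One-off scan: collect every allowed pixel touching a blocked neighbour."""
    border = []
    for x in range(width):
        for y in range(height):
            if not is_in_area(x, y):
                continue
            neighbours = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if any(not is_in_area(nx, ny) for nx, ny in neighbours):
                border.append((x, y))
    return border

def nearest_border_point(border, tx, ty):
    # Squared distance is enough for comparison: no sqrt per point.
    return min(border, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)
```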
I've read a lot of resources, including videos and articles, but I don't see how to implement CQRS without using other patterns like a service bus, the event sourcing pattern, or domain-driven design. I know the purpose of these patterns, but for the first step, I don't want them to think CQRS and these patterns must be tied together. My first idea is to say that CQRS is about separating the read part and the write part. The read part is composed only of the UI project and a DAL project. The write part is composed of a typical multilayer architecture: UI/BLL/DAL. Then, does CQRS say we must also have two datastores? What about the notion of commands which reveal the user's intention: is it part of CQRS or of DDD? Basically, how do you implement CQRS without using other patterns? I concede it's also not that clear in my mind because I'm used to working with NCQRS/DDD/Event Sourcing/ServiceBus in my personal projects."} {"_id": "152255", "title": "Is OOP becoming easier or harder?", "text": "When the concepts of Object Oriented Programming were introduced to programmers years back, it looked interesting and programming was cleaner. OOP was like this Stock stock = new Stock(); stock.addItem(item); stock.removeItem(item); That was easier to understand, with self-descriptive names. But now OOP, with patterns like Data Transfer Objects (or Value Objects), Repository, Dependency Injection etc., has become more complex. To achieve the above you may have to create several classes (e.g. abstract, factory, DAO etc.) and implement several interfaces. Note: I am not against best practices that make collaboration, testing and integration easier"} {"_id": "152252", "title": "HTML5 file API or Java bridge to access local Files?", "text": "I have to access files on the user's machine and I don't know if it's better to use a full JS app with the HTML5 File API or to use Java and communicate with it?"} {"_id": "152251", "title": "What does a programmer really need to become a professional?", "text": "There are huge numbers of programmers, especially juniors, who need good assistance in becoming professional. I've heard a lot about \"books every programmer should read\", etc. I have two years of programming experience and feel good in C++, but I currently have a strange feeling that I do not know anything about programming. I should read Algorithms by Cormen, Code Complete by McConnell etc., but I don't know the exact steps required to become a professional. What should we do? What should we learn? Operating systems? Computer organization? Algorithms? C++ in depth? How much time do we have to spend to become what we want?"} {"_id": "236060", "title": "Inheritance: Is code from superclass virtually *copied* to subclass, or is it *referred-to by subclass*?", "text": "Class `Sub` is a subclass of class `Sup`. What does that mean practically? Or in other words, what is the practical meaning of \"inheritance\"? **Option 1:** The code from Sup is virtually _copied_ to Sub (as in 'copy-paste', but without the copied code _visually_ seen in the subclass). Example: `methodA()` is a method originally in Sup. Sub extends Sup, so `methodA()` is (virtually) copy-pasted to Sub. Now Sub has a method named `methodA()`. **It is identical to Sup's `methodA()` in every line of code, but entirely belongs to Sub - and doesn't depend on Sup or relate to Sup in any way.** **Option 2:** The code from Sup isn't actually _copied_ to Sub. It's still only in the superclass.
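For the CQRS question above: stripped of the companion patterns, CQRS is only the commitment that writes go through command objects and handlers while reads are thin, side-effect-free queries; a single datastore is perfectly fine. A hedged minimal sketch using DB-API style calls (sqlite3 shown; all names are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers (name) VALUES ('Alice')")

# Write side: a command captures the user's intention; its handler is
# the only code path that mutates state.
class RenameCustomer:
    def __init__(self, customer_id, new_name):
        self.customer_id, self.new_name = customer_id, new_name

def handle(cmd):
    db.execute("UPDATE customers SET name = ? WHERE id = ?",
               (cmd.new_name, cmd.customer_id))

# Read side: queries return plain view data and never touch the handlers.
def customer_list():
    return db.execute("SELECT id, name FROM customers").fetchall()

handle(RenameCustomer(1, "Alicia"))
print(customer_list())   # [(1, 'Alicia')]
```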
But that code can be accessed through the subclass and can be used by the subclass. Example: `methodA()` is a method in Sup. Sub extends Sup, so now `methodA()` can be accessed through Sub like so: `subInstance.methodA()`. But that will actually invoke `methodA()` in the superclass. Which means that **methodA() will operate in the context of the superclass, even if it was called by the subclass.** **Question: Which of the two options is really how things work? If neither of them is, then please describe how these things actually work.**"} {"_id": "8889", "title": "What do you do to convince your boss to update your development environment?", "text": "I've barely succeeded in this, I'm just wondering how you convinced your boss to update and if you're faring better than me. Here's where I stand: 1. I'm using Delphi 7. 2. We've bought Delphi 2009 for each programmer. 3. I'm still using Delphi 7. 4. It's almost 2011 and Delphi 2010 and Delphi XE are already out. So I'm stuck in a timewarp; there are too many commitments to bother about creating bugs by updating our IDE. I have a bad feeling that when we do finally start using Delphi 2009 it'll be as obnoxious as Delphi 7. (This may be a bad generic example as there are special Unicode considerations in the 2009 update, but I'd like general answers - because I do more than just Delphi programming for my company)"} {"_id": "21463", "title": "Writing the minimum code to pass a unit test - without cheating!", "text": "When doing TDD and writing a unit test, how does one resist the urge to \"cheat\" when writing the first iteration of \"implementation\" code that you're testing? For example: Let's say I need to calculate the factorial of a number. I start with a unit test (using MSTest) something like: [TestClass] public class CalculateFactorialTests { [TestMethod] public void CalculateFactorial_5_input_returns_120() { // Arrange var myMath = new MyMath(); // Act long output = myMath.CalculateFactorial(5); // Assert Assert.AreEqual(120, output); } } I run this code, and it fails since the `CalculateFactorial` method doesn't even exist. So, I now write the first iteration of the code to implement the method under test, writing the minimum code required to pass the test. The thing is, I'm continually tempted to write the following: public class MyMath { public long CalculateFactorial(long input) { return 120; } } This is, technically, correct in that it _really_ is the minimum code required to _make that specific test pass_ (go green), although it's clearly a \"cheat\" since it really doesn't even _attempt_ to perform the function of calculating a factorial. Of course, now the refactoring part becomes an exercise in \"writing the correct functionality\" rather than a true refactoring of the implementation. Obviously, adding additional tests with different parameters will fail and force a refactoring, but you have to start with that one test. So, my question is, how do you get that balance between \"writing the minimum code to pass the test\" whilst still keeping it functional and in the spirit of what you're actually trying to achieve?"} {"_id": "122763", "title": "High School Dropout - Is It Always This Difficult?", "text": "> **Possible Duplicate:** > Can someone find a job as a programmer without an education? I made a lot of mistakes when I was younger and never ended up graduating from even my ninth-grade classes (this is Canada and I'm going to admit ignorance as I'm not sure how the educational systems differ between countries).
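The inheritance question above can be checked empirically in a dynamic language; a hedged Python demo (Java and .NET virtual dispatch behave analogously). The method body lives once, on the superclass, and is found by lookup, so nothing is copied into Sub. Note, though, that `self` is still the subclass instance, so the code does not run "in the context of the superclass" in the strong sense option 2 suggests.

```python
class Sup:
    def method_a(self):
        return "defined once, on Sup"

class Sub(Sup):
    pass

s = Sub()
print(s.method_a())                  # resolved on Sup via the lookup chain
print("method_a" in Sub.__dict__)    # False: nothing was copied into Sub

Sup.method_a = lambda self: "patched on Sup"
print(s.method_a())                  # existing Sub instances see the change
```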
I'm 24 now and have been looking for work in regards to programming for quite some time. Everybody seems to want degrees (be it University or College) in computer programming of some kind, so that leaves me at an impasse. I don't have my high school degree, let alone one that is post-secondary. What does a guy like me do to be taken seriously as a programmer? I am self- taught, but no one seems to put much stock in that this far in Northern Ontario. I've considered coding in open-source and trying to build a portfolio out of that, but, again, I don't know if that'll be taken very seriously. I've even considered blogging about programming, if not to just have some sort of validation that I know what I'm talking about. Unfortunately, it's way too easy to pretend to know anything about **anything** these days, so I'm just wondering - are any of you drop-outs and, if so, how did you get taken seriously? Also, if there are people who hire programmers, what would someone have to do to be considered in your eyes? Thanks."} {"_id": "94887", "title": "Are abstract classes / methods obsolete?", "text": "I used to create a lot of abstract classes / methods. Then I started using interfaces. Now I am not sure if interfaces aren't making abstract classes obsolete. You need a fully abstract class? Create an interface instead. You need an abstract class with some implementation in it? Create an interface, create a class. Inherit the class, implement the interface. An additional benefit is that some classes may not need the parent class, but will just implement the interface. So, are abstract classes / methods obsolete?"} {"_id": "21467", "title": "UI design and confirmation paradigm", "text": "I just saw this lecture by Spolsky, where he questions the need for choices and confirmation dialogs. At some point he has a MacOS settings window and he mentions that \"now some are getting rid of the OK button\". The window indeed has no OK (or cancel) button. Changing a setting makes it change, when you're done configuring, you close that window, period. Being a long time Windows user and a recent Mac owner, the difference is noticeable at first. I looked for the OK button for a while, only to find out, quite naturally and painlessly, that there was none. I expressed satisfaction and went on my merry way. However, I'm curious to know if this UI design pattern would succeed in the Windows-based world. Granted that if Microsoft brought it out with say Windows-8 (fat chance, I know), people would get used to it eventually. But is there some experience out there of such an approach, of changing the \"confirmation paradigm\" on a platform where it's so prevalent? Did it leave users (especially the non-technical ones) confused, frustrated, scared, or happy? TL;DR: Remove OK/cancel confirmation, what happens? EDIT: Mac GUI for appearance settings. ![alt text](http://i.stack.imgur.com/YtoYX.png)"} {"_id": "142123", "title": "Can we set up svn server on a local computer without any network access?", "text": "I want to set up an SVN repository on my computer without any network access. I am working on a code without any collaborator, so I don't want it to be publicly available. I read this post, but it suggests using online SVN repository services that give free repositories. In that case, my code will be publicly available (as is included in the terms of free plans). 
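One concrete way to see why abstract classes are not fully replaced by interfaces (from the question above): an abstract class can ship shared behaviour alongside its abstract contract, which a pure interface cannot (Java only narrowed this gap later with default methods). A hedged Python sketch using the standard `abc` module; the class and method names are illustrative:

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def speak(self):
        ...

    # Shared, concrete behaviour: the part a pure interface cannot carry.
    def greet(self):
        return "Hello, I say " + self.speak()

class Dog(Animal):
    def speak(self):
        return "woof"

print(Dog().greet())   # Hello, I say woof
# Animal() raises TypeError: the abstract class stays uninstantiable.
```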
So I was wondering if I can set up a local server on my Windows XP machine that only I access, even when I don't have any internet connection?"} {"_id": "53521", "title": "What exactly is OO reuse?", "text": "And why is it often talked about? Like I know what OO programming is obviously... but people always say \"Oh OO reuse is the biggest programming myth ever\". What exactly does this mean?"} {"_id": "45614", "title": "Are there issues with learning to program on a Virtual Machine?", "text": "I'm not a programmer, but a financial analyst by profession who is learning how to code (Python, Django, C w/ ctypes when I need something to go really fast) in a virtual Ubuntu setup on my work laptop. Question: I just assumed that coding on a Virtual machine was no different from having Linux as my main OS. Is this a bad assumption? Is there anything I need to watch out for? Thanks, Mike"} {"_id": "106568", "title": "How to Quantify the Value of Unit Testing", "text": "Our organization is considering integrating unit testing into our software development workflow. I've heard lots of anecdotal stories about how it encourages better, easy-to-maintain, and well-planned code. As a programmer, I understand the benefits. However, in order to build consensus around using unit testing in projects that might affect customers, I would like to demonstrate its value quantitatively, in terms of saving hours of developer time. Does anyone in the community have any historical data or experience they would be willing to share as quantitative evidence in my case for unit testing?"} {"_id": "39368", "title": "Automated unit testing, integration testing or acceptance testing", "text": "TDD and unit testing seem to be all the rage at the moment. But is it really that useful compared to other forms of automated testing? Intuitively I would guess that automated integration testing is way more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happened because of changing interfaces between modules (and changed pre- and post-conditions). Am I misunderstanding something, or why is unit testing getting so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you have, and unit testing is the next thing we need to learn to apply as developers? Or maybe unit testing simply yields the highest gain compared to the complexity of automating it? **What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and in your experience what has yielded the highest ROI, and why?** If you had to pick just one form of testing to be automated on your next project, which would it be? Thanks in advance."} {"_id": "78478", "title": "How to explain the value of unit testing", "text": "I want to introduce the concept of unit tests (and testing in general) to my co-workers; right now there are no tests at all and things are tested by actually performing the tasks via the UI to see the desired result. As you might imagine, the code is very tightly coupled to the exact implementation - even resulting in code that should be in a class and reused across the system being copied and pasted across methods. 
Due to changed requirements, I have been asked to modify a module I previously wrote and that is fairly loosely coupled (not as much as I would like, but as best I can get without having to introduce a lot of other concepts). I've decided to include a suite of unit tests with my revised code to \"prove\" that it works as expected and demonstrate how testing works; I'm not following true TDD as some of the code is already written but I'm hoping to follow some TDD concepts for the new code I'll have to create. Now, inevitably I'm sure I'll be asked why it's taking me more than a day or two to write the code, since parts of what I'll be interacting with already exist in the system (albeit without any tests and very tightly coupled), and when I check the code in I'll be asked just what this \"Tests\" project is. I can explain the basics of testing, but I can't explain the actual benefits in a way the others would understand (because they think testing requires you to run the app yourself, since often the actual UI matters in determining if the feature \"works\" or not). They don't understand the idea of loose coupling (clearly evident by the fact nothing is loosely coupled; there aren't even any interfaces outside of the code I've written), so trying to use that as a benefit would probably earn me a \"Huh?\" kind of look, and again I can't be as loose as I would like to without having to rework several existing modules and probably introduce some kind of IoC container, which would be viewed as wasting time and not \"programming\". Does anyone have any suggestions on how I can point to this code and say \"We should start creating unit tests\" without coming off as either condescending (e.g. \"Writing tests forces you to write good code.\" which would probably be taken to mean code except mine is _bad_ ) or without making it seem like a waste of time that doesn't add any real value?"} {"_id": "238452", "title": "How do you prevent confused tests?", "text": "Testing code for correctness is important. Whether you do strict TDD or not, tests are really the only way a project can scale in size beyond a point where every team member can reasonably keep all the code knowledge loaded in their brains at once. After that point, tests are imperative in order to ensure future modifications don't break old logic (that or you're burning lots of resources doing the same thing manually). The majority of people understand what I've stated above -- not much to argue with there. However, without much of a CS background, there is a lot open to interpretation. Naively, _tests_ are likely to simply mean _code that runs other code_. While this is true, it doesn't paint the entire picture. Without much concern for proper testing, in my experience, people will tend to generate a bunch of **integration** or **functional** tests when asked to write **unit** tests. Whether you aren't privy to testing nuances, or whether you simply choose to ignore them (in the interest of time or because you don't like testing), the result is the same. You get a project with a bunch of \"confused tests\", or tests that don't really respect the line between unit, functional, and integration styles of testing. In other words, you get tests that don't have a clear purpose other than to exercise as much code as possible. This is frustrating because it becomes increasingly cumbersome to parse the result of those tests for meaningful information. 
* _Did the tests fail because my remote server is down, or is my logic incorrect?_ * _Why is this \"unit\" test failing when I swap [un]related system component`x` for component `y`?_ And so on. How can you illustrate this distinction in a way that someone with a more engineering/get-it-done mindset can identify with? In essence, how do you make people _care_ about the distinction? Indeed, at a recent Android testing meetup at Twitter, the team asserted that full blown testing of software is a project in-and-of itself. And projects require resources. You can't just slip testing in on the side and not think much about it. _Resources must be allocated and developers must take it seriously_. That's a good start, but we can't just throw retrospective resources and attention at code that's already been written. I'm curious, what approaches work to remedy a confused test landscape, and how can you prevent confused tests from happening in the first place?"} {"_id": "203320", "title": "How do I explain the importance of NUnit test cases to my colleagues", "text": "I am currently working in software development for applications including a lot of mathematical calculations. As a result there are a lot of test cases that we need to consider. We do not have any NUnit test system, and I am wondering how I should present the advantages of implementing NUnit testing to my colleagues and my boss. I am pretty sure it would be of great help for our team. Any help regarding the same will be highly appreciated."} {"_id": "45611", "title": ".Net community in Germany", "text": "I recently moved to Germany and I'm interested in what the German .Net community looks like. Do you know any regularly held workshops in Germany (around Mannheim preferably)?"} {"_id": "20596", "title": "Cross border data obfuscation and how to deal with it?", "text": "I am currently facing a situation where I will need to obfuscate data for legal reasons (these are countrywide rules and not specific to me). I will still need to work with teams outside this area who will still need to access applications/databases for testing/deployment and so on, however the data must be fully obfuscated. What are your experiences of this scenario? e.g. Who owns what stage of the process, what access do the various parties have, how strong is the encryption, how do updates occur"} {"_id": "211675", "title": "Why isn't iostream included as a header file anymore?", "text": "First of all I have gone through this question Why is #include bad? and there the reason was simply that it is outdated, but I personally think that as a header iostream was better because you don't have to declare objects like cout and endl in global scope, because they were already in global scope, and moreover since # is not used anymore that means it does not come under a pre-processor directive. I just need to know why this was done. I still use Turbo sometimes and am comfortable using it in C++ programs even though gcc and g++ provide better debuggers and everything."} {"_id": "211677", "title": "Optimization of time-varying parameters", "text": "I need to find an optimal set of \"n\" parameter values that minimize an objective function (a 2-hr simulation of a system). I have looked at genetic algorithm and simulated annealing methods, but was wondering if there are any better algorithms and guidance on their merits and limitations. With the above optimization methods I can find the optimal parameter values that hold true for the entire simulation duration. 
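(Aside on the time-varying-parameter question above: since the asker mentions simulated annealing, a minimal skeleton of that loop may help frame the discussion. This is a sketch only - the quadratic objective stands in for the 2-hr simulation, and the step size and cooling rate are untuned assumptions:)

    import java.util.Random;

    public class AnnealingSketch {
        static final Random RNG = new Random(42);

        // Stand-in objective; the real one would run the simulation.
        static double objective(double[] params) {
            double s = 0;
            for (double p : params) s += (p - 1.0) * (p - 1.0);
            return s;
        }

        // Perturb one randomly chosen parameter by a small amount.
        static double[] neighbour(double[] params, double step) {
            double[] next = params.clone();
            int i = RNG.nextInt(next.length);
            next[i] += step * (2 * RNG.nextDouble() - 1);
            return next;
        }

        public static double[] anneal(double[] start, int iterations) {
            double[] current = start.clone();
            double currentCost = objective(current);
            double temperature = 1.0;
            for (int k = 0; k < iterations; k++) {
                double[] candidate = neighbour(current, 0.1);
                double cost = objective(candidate);
                // Always accept improvements; accept worse moves with a
                // probability that shrinks as the temperature cools.
                if (cost < currentCost
                        || RNG.nextDouble() < Math.exp((currentCost - cost) / temperature)) {
                    current = candidate;
                    currentCost = cost;
                }
                temperature *= 0.999; // geometric cooling (an assumption)
            }
            return current;
        }
    }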
In case I want to find the optimal \"time varying\" parameter values (parameter values change with time during the 2-hr simulation), are there any methods/ideas other than making each time-varying parameter value a variable to optimize? Any thoughts?"} {"_id": "211673", "title": "GitHub Organizations for a project spanning multiple repositories?", "text": "I've started a project that involves at least three repositories on GitHub. One of the repositories is a generic documentation-and-examples dump, and the other two contain the implementation of two programs that form the backbone of the project. Should I use a GitHub Organization to handle such a configuration? Or should I just dump it all to my own account, along with a dozen other, completely unrelated repositories?"} {"_id": "251873", "title": "Structuring an application that reads from a .properties file", "text": "I have a Java app with three classes: `Foo`, `Bar` and `Baz`. All three depend on a bunch of what are currently constants defined in each class in order to determine how to run. On top of that, `Baz` interacts with a number of builder classes that need to be set up in various ways. I'd like to refactor all of the configuration info out into a .properties file so that I don't have to recompile every time I change a constant. But I'm not sure how to structure it: Do I have a `ConfigInfoSingleton` that holds all of the parameters of the application in static properties, and have other classes ask it for information when they need it? That seems wrong, as it introduces global state. Do I pass around a `ConfigInfoMap` as a parameter to all of the classes that need that info? Then I need to have `Foo`, `Bar`, and `Baz` parsing strings in order to decide what they're going to do, and that seems very wrong. Especially in the case of `Baz`, where I can't just pass around values, but I need to use `switch` statements on configuration info to determine builder method calls. Do I parse the info in a different class, and have that class tell the others how to set themselves up? Then I have a class that needs to know about everyone else's implementation details, which seems wrong, but less so than the other options. This seems like a really basic question but I'm not sure how to proceed in terms of good software design, and I haven't had much luck Googling. What's the \"right\" thing to do? (A sketch of one option follows below.)"} {"_id": "251872", "title": "What time should be recorded against story points in order to determine velocity and hours per point", "text": "We recently tried to correlate the hours spent on a sprint against the stories assigned to the project (to try and get an idea of velocity and hours per point). However our recording system only allowed us to put time against a project, and so included time such as: * Customer meetings / phone calls * Scrums * Development / testing Should all of the time for these be taken into account when trying to determine velocity? Or should only the development / testing time that occurred against specific stories be taken into account? If everything, what should scrums and meetings be recorded against? Should there be a separate story for these, or should they just be recorded against a different metric, i.e. project-elaboration vs project-development? If only development, do we need a finer-grained recording system for recording time spent on a project? **EDIT** : As well as velocity we are also trying to get a better idea of hours per point. **EDIT** : I've had a few comments saying this isn't the way to do it (that's great). 
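(Stepping back to the `.properties` question above, as promised: a middle road between a singleton and passing a raw map is to parse the file once at startup into a small immutable, typed object and hand that to `Foo`, `Bar` and `Baz`. A sketch using only `java.util.Properties`; the keys `retry.count` and `mode` are made up for illustration:)

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Parsed once at startup; no class re-parses strings or touches globals.
    final class Config {
        final int retryCount;
        final String mode;

        Config(Properties p) {
            this.retryCount = Integer.parseInt(p.getProperty("retry.count", "3"));
            this.mode = p.getProperty("mode", "standard");
        }

        static Config load(String path) throws IOException {
            Properties p = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                p.load(in);
            }
            return new Config(p);
        }
    }

The composition root would then do something like `new Baz(Config.load("app.properties"))`, so the string-parsing lives in exactly one place.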
However, no one in those comments has suggested how we can give a priced quote for the next lot of work if we don't know how long the points took on an hourly basis. The customer wants to know how much the next lot of work will cost. Some additional points to note are: 1\. The team isn't solely dedicated to this project. Some time is spent on other work for some people and this may vary. 2\. The team may vary in size as the project goes on, with members coming and going (I guess this affects estimates, but how?)"} {"_id": "124821", "title": "Single or multiple files for unit testing a single class?", "text": "In researching unit testing best practices to help put together guidelines for my organization, I've run into the question of whether it is better or useful to separate test fixtures (test classes) or to keep all tests for a single class in one file. Fwiw, I am referring to \"unit tests\" in the pure sense that they are white-box tests targeting a single class, one assertion per test, all dependencies mocked, etc. An example scenario is a class (call it Document) that has two methods: CheckIn and CheckOut. Each method implements various rules, etc. that control their behavior. Following the one-assertion-per-test rule, I will have multiple tests for each method. I can either place all of the tests in a single `DocumentTests` class with names like `CheckInShouldThrowExceptionWhenUserIsUnauthorized` and `CheckOutShouldThrowExceptionWhenUserIsUnauthorized`. Or, I could have two separate test classes: `CheckInShould` and `CheckOutShould`. In this case, my test names would be shortened but they'd be organized so all tests for a specific behavior (method) are together. I'm sure there are pros and cons to either approach and am wondering if anyone has gone the route with multiple files and, if so, why? Or, if you've opted for the single file approach, why do you feel it is better?"} {"_id": "243199", "title": "Find points whose pairwise distances approximate a given distance matrix", "text": "**Problem.** I have a symmetric distance matrix with entries between zero and one, like this one: D = ( 0.0 0.4 0.0 0.5 ) ( 0.4 0.0 0.2 1.0 ) ( 0.0 0.2 0.0 0.7 ) ( 0.5 1.0 0.7 0.0 ) I would like to find points in the plane that have (approximately) the pairwise distances given in D. I understand that this will usually not be possible with strictly correct distances, so I would be happy with a \"good\" approximation. My matrices are smallish, no more than 10x10, so performance is not an issue. **Question.** Does anyone know of an algorithm to do this? **Background.** I have sets of probability densities between which I calculate Hellinger distances, which I would like to visualize as above. Each set contains no more than 10 densities (see above), but I have a couple of hundred sets. **What I did so far.** * I did consider posting at math.SE, but looking at what gets tagged as \"geometry\" there, it seems like this kind of computational geometry question would be more on-topic here. If the community thinks this should be migrated, please go ahead. * This looks like a straightforward problem in computational geometry, and I would assume that anyone involved in clustering might be interested in such a visualization, but I haven't been able to google anything. * One simple approach would be to randomly plonk down points and perturb them until the distance matrix is close to D, e.g., using Simulated Annealing, or run a Genetic Algorithm (a rough sketch of this idea follows below). 
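(The promised sketch: one concrete way to "plonk down and perturb" is plain gradient descent on the squared-error stress summed over all pairs. Assumptions: 2-D output, a fixed learning rate, and a fixed iteration count - all untuned, and this finds a local optimum only:)

    import java.util.Random;

    public class StressSketch {
        // Minimize sum over i<j of (dist(p_i, p_j) - D[i][j])^2 by moving
        // each point a little along the negative gradient each pass.
        public static double[][] embed(double[][] D, int iterations) {
            int n = D.length;
            Random rng = new Random(1);
            double[][] p = new double[n][2];
            for (double[] pt : p) {
                pt[0] = rng.nextDouble();
                pt[1] = rng.nextDouble();
            }
            double rate = 0.05;
            for (int it = 0; it < iterations; it++) {
                for (int i = 0; i < n; i++) {
                    for (int j = 0; j < n; j++) {
                        if (i == j) continue;
                        double dx = p[i][0] - p[j][0];
                        double dy = p[i][1] - p[j][1];
                        double d = Math.max(1e-9, Math.hypot(dx, dy));
                        double g = (d - D[i][j]) / d; // pull towards target distance
                        p[i][0] -= rate * g * dx;
                        p[i][1] -= rate * g * dy;
                    }
                }
            }
            return p;
        }
    }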
I have to admit that I haven't tried that yet, hoping for a smarter way. * One specific operationalization of a \"good\" approximation in the sense above is Problem 4 in the Open Problems section here, with k=2. Now, while finding an algorithm that is _guaranteed_ to find the minimum l1-distance between D and the resulting distance matrix may be an open question, it still seems possible that there at least is some approximation to this optimal solution. If I don't get an answer here, I'll mail the gentleman who posed that problem and ask whether he knows of any approximation algorithm (and post any answer I get to that here)."} {"_id": "251878", "title": "Long running task initiated in the web site", "text": "The plan is to develop a generic solution for long-running tasks initiated in the web site by users, such as: 1\. upload a large file, do some custom processing, and then insert it into the database. 2\. export a large amount of data. But we do not want to run these tasks on the web servers; instead they should run on a separate dedicated server. I have a couple of possible solutions, but am looking for suggestions on them or a new option. 1. The web site calls a WCF service on the other server (one-way / fire-and-forget operations). The WCF service has all the information to process the request, and the task will run in the WCF service. 2. A Windows service, but I am not sure how the Windows service will start the job as soon as possible, as we do not want any delay between the user submitting the form and the process start time. We would need some kind of flag for the Windows service to start."} {"_id": "223853", "title": "How to publish my application as open source software?", "text": "Recently I wrote an (I hope) useful application. I would like to share it with every person in the world, making it open source. I would like to publish the source code as well as binary files (both for Windows and Linux). I used only free tools to write my application: Code::Blocks, MinGW, gcc/g++, Linux Ubuntu/Linux Mint, Windows 7 with the license I bought. I read about different types of licences but still have some doubts. I looked into the source code of some open source software, for example, here: https://polarssl.org/aes-source-code or here: http://qt-project.org/doc/qt-5/qtnetwork-blockingfortuneclient-blockingclient-h.html. Those files have a header, where you can find the license as a comment. Here are my questions: 1. Is there any place on the internet where I can read about these things? Licensing, how to publish my code? 2. Should I copy the license to every file of my source code when I want to publish it? Where can I get the license text to copy? (meaning, should I copy it into both the `*.h` and `*.cpp` files?) 3. For the Windows version of my application I used Visual Studio Express Edition and Qt. To make an `*.exe` which will run out of the box with no additional installations, I need to redistribute with my `*.exe` file some files from Qt and Visual (`*.dll` files). How and where can I check if it's free to redistribute those files with my application? 4. Which license would be the best for my needs? I did a little research and found that license: http://opensource.org/licenses/BSD-3-Clause. Is it 100% good for my needs, any advice?"} {"_id": "243190", "title": "Software monetization that is not evil", "text": "I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. 
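(Aside on the long-running-task question a little further up: the "how does the background service notice new work immediately" concern is usually solved with a notification rather than a flag. A sketch of the idea in Java - the web tier drops one file per job into a shared folder and a watcher reacts at once; the folder path is hypothetical, and a production system would more likely use a proper message queue such as MSMQ:)

    import java.nio.file.*;

    public class QueueWatcherSketch {
        public static void main(String[] args) throws Exception {
            Path queueDir = Paths.get("C:/jobs/queue"); // hypothetical shared folder
            WatchService watcher = FileSystems.getDefault().newWatchService();
            queueDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            while (true) {
                WatchKey key = watcher.take(); // blocks until a job file appears
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path job = queueDir.resolve((Path) event.context());
                    System.out.println("Processing job " + job);
                    // ... long-running processing happens here ...
                }
                key.reset(); // re-arm the key for further events
            }
        }
    }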
I was wondering, however: is there a non-evil way to monetize software? Here are the options as I know them: * **Add a donation button.** * I don't feel comfortable with that as I really don't need \"donations\" - I'm paid quite well. * Donating users may feel entitled to support etc. (see the second to last bullet) * **Add ads inside your application.** * On the web that may be acceptable, but in a desktop program it looks incredibly lame. * **Charge a small amount for each download.** * This model works well in the mobile world, but I suspect no one will go for it on the desktop. * It doesn't mix well with open source, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). * People may expect support etc. after having explicitly paid (see next bullet). * **Make money off a service / community / support associated with the program.** * This is one route I definitely don't want to take; I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple) and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable, so I do think I have a case for supplying it \"as is\". * **Add affiliate suggestions to your installer.** * If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. * Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. The problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course, I hate toolbars)?"} {"_id": "101125", "title": "Any success stories continuously using commercial static analysis tools for C++?", "text": "I can't decide whether an offer of a commercial static analysis tool is worth the resources. We tried the tool on several million lines of our C++ code and it found something like 50 real issues. We wanted to find out how those issues might have affected users. We grabbed a year-old version of the stable branch, analyzed it, and then looked into the defect database - none of the issues found in the old code had caused any of the problems users reported. Not only do those defects not seem to manifest themselves to users, but using the tool requires nontrivial effort - addressing false positives, analyzing every warning the tool produces. Also, the tool doesn't find all defects; it of course only finds some. So once again: * a noticeable effort * a very low number of issues found * issues found don't seem to affect customers The tool is licensed at 7K euro for a team of five per year. Currently it looks like it's a lot of money and effort and no return except _now our code will maybe have fewer defects that likely don't manifest themselves_. It feels like we could have spent that effort on addressing issues that hurt right now. The supplier claims that using their tool in the development process helps _drastically_ improve code quality. I currently can't get any explanation of what this _drastic_ improvement can be - all the facts I considered are listed above. 
I'd like to hear stories that go like this: _before we licensed X we were miserable, because (what specifically was wrong); now, using X at all times, we have (what's much better now than before), so we're pretty happy - money was well spent_. Negative experiences with enough detail would also be helpful. Has anyone had a really positive experience continuously using commercial static analysis tools? Is there a drastic improvement in source code quality that directly affects customers? Was the result worth the license fee and effort?"} {"_id": "160028", "title": "1 to 1 Comparison and Ranking System", "text": "I'm looking to create a comparison and ranking system which allows users to view 2 items, click on the one that they feel is the better one, and then get presented with 2 more random items, and continue to do this until they decide to stop. In the background, I want the system to use these wins and losses to rank each item in an overall ranking table so I can then see what is #1 and what isn't. I haven't got a clue where to begin with the formula, but I imagine I need to log wins and losses. Any help/direction appreciated!"} {"_id": "157755", "title": "Lisp Macros: A practical approach", "text": "On my way to learning Lisp I discovered the all-powerful and feared so-called macros; then, after spending a hard time trying to understand them and their usefulness, I said to myself, I FINALLY GOT IT. I couldn't have been more wrong: I was thinking that the only purpose of macros was to define new control structures, and that was all I needed to know - until I started reading more and more on the subject and a whole new world opened up to me! One can use macros for new **Domain Specific Languages**, **Code Transformation**, and to avoid **Boilerplate Code**. Hence, it would be really nice if a few examples of the aforementioned usages of macros (and others as well) could be given, each with a concise and simple-to-understand explanation."} {"_id": "157754", "title": "\"Read\" a file without using a file pointer", "text": "I was asked this question in an interview. I'm somehow supposed to \"read\" a file into my C program as input without using a file pointer (including \"f\" functions, e.g. fgets, fscanf etc.). I'm also not allowed to redirect it using the terminal, i.e. no system or exec calls. The program will not get the file during runtime (that's what he said). The interviewer did not answer the question even though I asked him repeatedly, and I'd like to know how it'd be possible to do this."} {"_id": "235185", "title": "What's the reason for choosing PascalCasing over camelCasing or vice versa from a programming language design POV?", "text": "I like both, but I notice languages that use camelCasing for members sometimes need more adjustments when you want to edit your code. For example (in Python): node.customData() vs node.setCustomData() - `custom` changes casing. It could be avoided if it was: node.getCustomData() In C# they will use PascalCasing, for example. I know some even use no casing at all: node.getcustomdata() It seems that scripting languages generally prefer camelCasing more than PascalCasing to be more relaxed? 
Is there a reason for choosing one over the other from a programming language design POV?"} {"_id": "158929", "title": "Domain Objects with Interfaces", "text": "I'm in a situation where part of my system has a dependency on another module in the same system, but the modules themselves need to remain independently deployable, where the parts they depend on would be filled in with another implementation. A module in this instance is the concept of a bunch of components that all contribute to servicing a business function. The system is also built in C#. So, where I depend on another module, the source module defines an interface describing the functions it needs the dependent party module to implement. The contract types (domain model objects) are also interfaces for the dependent module. Here is where it gets a bit hazy to me. The dependency inversion principle doesn't sit well in my head at this point. Both of those \"modules\" are of the same importance as each other. Which should define interfaces and force the other to reference it? I'm suspecting a 3rd project sitting between the modules that handles setting up the dependencies (probably a DI container). Should the entities in the modules be interfaced? They are simply bags of get/set (no DDD here). Can anyone offer any guidance here? Thanks in advance. **Edit 1:** The project structure: Module1.Interfaces IModuleOneServices Module1.Models ModuleOneDataObject Module1 Module1 implements IModuleOneServices Module2.Interfaces IModuleTwoServices Module2.Models ModuleTwoDataObject - has property of type ModuleOneDataObject Module2 Module2 implements IModuleTwoServices depends on IModuleOneServices Module2 needs to be deployable by itself, and remains compilable, and sometimes, run without a Module1 present at all."} {"_id": "160027", "title": "How to capture different build verisons of the same production release artifact version in Artifactory?", "text": "Let's say I have artifacts \"mylibrary-5.2.jar\" and \"mylibrary-5.3.jar\" representing the 5.2 and 5.3 versions of a library that our project creates and publishes for one of our other projects. Does Artifactory support having multiple \"versions\" of each of these artifacts to represent the different builds that were performed during a release to construct this artifact? For example, to produce the final version of the 5.2 release of \"mylibrary\" aka the artifact: **mylibrary-5.2.jar** , we went through 3 builds to get to a version that passed our integration environment's automated tests and our user acceptance tests. So there were three separate builds that produced three separate artifacts for release 5.2. We want to be able to retain and potentially recall these different build's artifact at a later date (for testing, etc). In order to do this, which of the following options would work? 1. Capture the artifacts as separate Artifacts, i.e. build-5.2-b1.jar (build 1's artifact), build-5.2-b2.jar (build 2's artifact), build-5.2-b3.jar (build 3's artifact), and build-5.2.jar (the final production release; which matches build 3) 2. Capture a SINGLE artifact named \"build-5.2.jar\" which has VERSIONS of the artifact which capture builds 1 through 3 and which can be recalled later, by version number. 3. Some other option we have not considered, but should"} {"_id": "240901", "title": "Is it appropriate to remove redundant code explicitly assigning the default values?", "text": "Sometimes while I'm editing something I see some useless code added by other developers, probably due to habit. 
Editing this code will not make any difference, so is it appropriate? In my specific case I'm talking about Java private fields like this: private int aSimpleInt = 0; private boolean myBool = false; private MyObject obj = null; declaring the default values is redundant and useless. Should I remove them or just skip?"} {"_id": "178280", "title": "Can I maintain copyright of the name/logo of an otherwise copyleft (GPL) application?", "text": "I'd like to release an application that basically has the license of \"the code is GPLd but forks cannot use the name or logo\", with the intent of avoiding confusion to users. Is that possible or is it an all or nothing deal?"} {"_id": "178281", "title": "Does distributing non-GPLd assets with a GPL application violate the license?", "text": "This is somewhat related to my other question, but is actually different. I would like to license a Windows Phone application under the GPL. All other Windows Phone Marketplace issues aside (I'll ask those on the forums), I'd like to include icons that ship with the SDK in my application. While this is common practice (documentation points to the icons' location), I'm not sure if I'd be forcing GPL on the icons (a move expressly forbidden by the Application Provider Agreement). How is this usually handled in GPL or am I simply out of luck? **EDIT** The icons in question are copied into the project's source tree and distributed with the application. The license that ships with the SDK states that you may **not** : > modify or distribute the source code of any Distributable Code [incl. the > icons] so that any part of it becomes subject to an Excluded License. An > Excluded License is one that requires, as a condition of use, modification > or distribution, that > > * the code be disclosed or distributed in source code form; or > * others have the right to modify it. > My question now becomes: can I distribute the icons with my GPL'd application in any way (for example, by including provisions in my license text) that would not violate either the GPL or the SDK agreement?"} {"_id": "178285", "title": "Should we hire a new developer now, or wait until the code is refactored to make it suitable for a team environment?", "text": "I support and develop a large system that uses various technologies e.g. c++,.net,vb6 etc. I am a sole developer. I am debating whether now is the right time to approach my manager (who is not a developer) to ask if another developer can be recruited. I don't have any experience working in software teams. I have always been a sole developer. The concerns I have are: 1. There is still a lot to do. Training another developer would take time and distract me from my duties. 2. The company does not invest heavily in tools e.g. source control 3. The code in this system needs to be refactored to introduce concepts such as interfaces, polymorphism etc, which are supported by methodologies such as Agile (I inherited the system about 12 months ago). I am gradually trying to refactor the code. I believe I have two options: 1. Approach my manager now 2. Wait until I have had time to refactor the code so it is more suitable for a team environment. Which option is best? I am hoping to hear from other developers who have been in my situation."} {"_id": "209973", "title": "Understanding unit testing for dynamically changing condition", "text": "I was trying to understand how to write unit tests for a few days now. I'm confused with following scenarios in the existing code. 
In the first function the max value changes depending on the object created at run time, but in the second case it is a constant. NOTE: The following functions are not related. These are two different scenarios. SomeFunction1(arg1,....) { if(arg1 > someObject.MaxAllowedValue) { throw exception; } } SomeFunction2(arg1,....) { if(arg1 > maxAllowedValue) { throw exception; } } I am trying to test whether the function throws an exception when the max value is exceeded. Does the unit test remain the same in both cases or is it different?"} {"_id": "54016", "title": "What do you do when you realize your job requires you to do something out of your depth?", "text": "For a large software project recently, I was really out of my depth. And I did actually know this; and that the only reason I was employed was mostly a lack of other qualified candidates. The job was to build a large application on top of PHP/MySQL, a system I had little experience with. (I did advise the employer of this beforehand -- I've been spoiled by C# ASP.NET/MVC and MSSQL Server) The main reason I applied was location, location, location -- on-campus jobs which actually have any programming component are relatively rare. For almost a year and a half I've slogged through this, and I think I can say I know (at least somewhat) what I'm doing now. I've made some mistakes, torn out some hair, and moved on. (I'm still working on this system nowadays, but I no longer feel completely lost) In the future though, I'd like to keep my personal and professional self a little healthier than what occurred in this case. So I'm curious -- what's the best way to handle a situation like this?"} {"_id": "58577", "title": "Shifting between XML schemas and C# classes in a sensible fashion", "text": "A .NET/C# system receives XML messages for processing and further transmission. Since working directly on XML documents would be very inconvenient, it is necessary to deserialize the message to a C# object and then serialize just before delivery. Up until now this has been done with Microsoft's XSD tool (xsd.exe) and it gets the job done, but poorly. The tool is old and buggy and generates outrageous class names and code representations of the various XML schema constructs. Are there better ways to accomplish this in .NET, or should I go a whole other way about it?"} {"_id": "28468", "title": "Key bindings war in my brain", "text": "How many key bindings are there in your brain? I personally use IntelliJ IDEA as my primary IDE for development, but sometimes switch to Eclipse and NetBeans a bit, and sometimes I use JEdit, Notepad++, Emacs, Vim. I am always trying to find a perfect set of key bindings that I can use across all the editors. What's your idea to solve this problem?"} {"_id": "151751", "title": "List structures in memory", "text": "Could anyone give an overview of how list structures which are composed of a head and a tail which references the rest of the list, i.e. a linked list, are represented in the memory of the computer? Does the computer make use of CPU registers to hold the pointers to the head and the rest of the list?"} {"_id": "151755", "title": "How to temporarily save the result of the query, to use in another?", "text": "I have this problem I think you may help me with. P.S. I'm not sure what to call this, so if anyone finds a more appropriate title, please do edit. ## Background * I'm making this application for searching bus transit lines. * Bus lines are 3-digit numbers, which are unique and will never change. 
* The requirement is to be able to search for lines from stop A to stop B. * The user interface is already successful in hinting the user to only use valid stop names. * **The requirement is to be able to display if a route has a direct line, and if not, display a 2-line and even 3-line combination.** ### Example: I need to get from point A to point D. The program should show: * If there's a direct line A-D. * If not, display alternative 2-line combos, such as A-C, C-D. * If there aren't any 2-line combos, search for 3-line combos: A-B, B-C, C-D. Of course, the app should display bus line numbers, as well as when to switch buses. ## What I have: My database is structured as follows (simplified, actual database includes locations and times and whatnot): +-----------+ | bus_stops | +----+------+ | id | name | +----+------+ +-------------------------------+ | lines_stops_relationship | +-------------+---------+-------+ | bus_line | stop_id | order | +-------------+---------+-------+ Where `lines_stops_relationship` describes a many-to-many relationship between the bus lines and the stops. Order signifies the order in which stops appear in a single line. Not all lines go back and forth, and order has meaning (point A with order 2 comes after point B with order 1). ## The Problem * We find out if a line can pass through the route easily enough. Just search for a single line which passes through both points in the correct order. * How can I find if there's a 2/3 line combo? I was thinking of searching for a line which matches the source stop, and one for the destination stop, and seeing if I can get a common stop between them, where the user can switch buses. **How do I remember that stop?** * A 3-line combo is even trickier: I find a line for the source, and a line for the destination, and then what? Search for a line which has 2 stops I guess, but again, **How do I remember the stops?** ## tl;dr How do I remember results from a query to be able to use it again? I'm hoping to achieve this in a single query for each case (a query for 1-line routes, a query for 2, and a query for 3-line combos). **Note:** I don't mind if someone suggests a completely different approach than what I have; I'm open to any solutions. Will award any assistance with a cookie and an upvote. Thanks in advance!"} {"_id": "17105", "title": "Being productive during downtime", "text": "Let's say you are somewhere where coding and getting online isn't possible (on a busy flight, for example). What do you do to stay productive? Things I would do are read whatever tech book I am currently slogging through and maybe doodle some UI stuff or workflows. What else could I be doing?"} {"_id": "20335", "title": "When do you have enough experience?", "text": "I've worked on three successful projects (freelanced) throughout my high school career; however, they primarily involve web technologies. I know that I'm proficient in setting up LAMP stacks, working with PHP, databases, and designing with strict markup, version control, usability testing, and all that fun stuff but since I began programming, I've seen an evident gap between the complexity from hash maps to binary search trees (Java is the ultimate PL for learning no matter its condition in the industry). Although I'm attempting to pursue some multifarious career in software design, is it really necessary to push for a University education (Berkeley or Stanford ideally in CS) before I fully commit myself to programming? 
Are three successful projects regarding web design, web development, and video streaming (and all concomitant technologies) enough to be regarded as \"experienced\"? Could that experience get me hired? Ultimately, when is there enough experience such that University degrees become irrelevant?"} {"_id": "256064", "title": "MongoDB Embedded vs Reference Private info", "text": "I have searched extensively for a similar Mongo schema design and can't find relevant examples. I have a store (with public info); each store has an account (with private account info). // store object { name: \"Department Store\", email: \"contact@store.com\", account: { // private info not returned by API manager: \"Steve\", employees: [...] } } The stores will be searched through a public API. I am limiting the search queries using MongoDB's features to limit the returned data: db.stores.find({}, {account:0}); **My question** : is it more efficient to keep the private data as a subdocument or in a separate collection? It seems a separate collection with account info is the best choice, as I will be picking and choosing from an embedded document. References: * mongo schema (embedding vs reference) * Embedded document vs reference in mongoose design model?"} {"_id": "1588", "title": "What is better for coding - desktop or laptop?", "text": "Use of desktops is decreasing day by day in daily life, but for coding purposes are there any reasons for using a desktop over a laptop?"} {"_id": "256066", "title": "Class hierarchy question - do you implement separate classes for the same behavior?", "text": "**NOTE:** The language I am using is C#. I am currently working on a 'The Quest' minigame where there is a player and some enemies. My design so far involves a base abstract class called 'Mover' and an interface called 'IAttacker', since the Player object and the Enemy object(s) both move and attack, albeit in different ways. However I also think this may be unnecessary, because I could just create one big combination interface, 'IMoveAttack' or something like that. Moreover, my friend who gave me the challenge recommended that the Weapon class (the player can pick up weapons along the way, which are lying on the floor) be a subclass of Mover, even though the Weapons don't really need to move; they just need to spawn at random locations at every level. What is the best design principle in this case?"} {"_id": "48481", "title": "Do we need use case levels or not?", "text": "I guess no one would argue for decomposing use cases; that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows, requires the vision documents which contain information about the purpose of the system in the business, much like use cases on the higher level, and at the end of his book mentions that at least two levels are needed most of the time. My question is, do you find use case levels necessary, helpful or unwanted? What are the reasons? 
Am I missing some important arguments?"} {"_id": "48483", "title": "How can the process of creating and maintaining documentation be improved?", "text": "I know it's a very subjective question, but this is one section of Software Engineering wherein I haven't seen any improvement in terms of how we can do it in a better way. I guess every programmer does documentation with some frustration, but when working I feel it's the most important tool for me. How can the process of creating and maintaining documentation be done in a better way?"} {"_id": "256062", "title": "can a logic error happen way later than its cause?", "text": "For comparison, for a runtime fatal error, the cause of the error often lies way before the point where the error crashes a program. A logic error, on the other hand, doesn't crash a program. It happens the first time the state of the program's execution isn't what we expect. The cause of a logic error in a program, I think, just like the cause of a runtime fatal error, is where you make a correction so that the program no longer has the logic error. I wonder if the cause of a logic error must be where the error happens, or can it be way before that? Thanks."} {"_id": "62105", "title": "Assertions in private functions - Where to draw the line?", "text": "We use assertions to check for illegal behaviour which just shouldn't happen if everything is working as it should be, such as using `NULL` as an argument when it clearly shouldn't be. This is all very well when you write public functions, since you can't trust that the programmer who will use them won't make a mistake. But what about private functions, which won't be accessible from the outside? Of course, the function which uses those private functions may contain a bug (i.e. _you_ made a mistake), but should we _always_ use assertions in private functions? Is there a line where we can say \"Hey, we don't need an assertion here because due to previous assertions and usage of the private function, we can assume that the parameters are always safe\"? Now I myself am a bit skeptical about that last part - can we ever safely assume that things are always as they should be?"} {"_id": "48489", "title": "Who should write the test plan?", "text": "I am in the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing the site to them for acceptance testing, we were requested to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, they would have the best knowledge of what to test, what to look out for, how things should behave etc., and a test plan is thus not required. We are always in an argument over this, and developers find it a waste of time to write down things like:- 1. Click on button _A_. 2. Key in _XYZ_ in the form field and click button _B_. 3. You should see behaviour _C_. 
is wrong?", "text": "I have a problem: Now Microsoft has changed, and in respect with the C++ Standard, and starting from from Visual Studio 2005 now: * Access Violation are not to be catched in `catch(...)` However, if one compile with `/EHsc` or other similar flags under VS2003 and before, then Access Violation are really caught in `catch(...)`. Yes, really. In my new company, they have protected their whole application accordingly. And now I have a **whole application** with hundreds, maybe thousands, of `catch(...)` compiled in VS2003. Those `catch` don't put an end to the process. Sometimes they hopefully trace something like \"unexpected error in...\", but only sometimes. How can I explain to them it's plain wrong ? That it hides corruptions until a point where it's impossible to tell what is going wrong ?"} {"_id": "252997", "title": "How to chain together points in an array", "text": "I have a series of points with lengths and rotations like this: ![points](http://i.stack.imgur.com/xiG62.jpg) I need to create separate chains from points whose lines overlap but I\u2019m having real trouble doing this efficiently. I have an array of simple Point objects, in no particular order, and I can loop through them and test them with a simple \"intersect\" function. I need to end up with an array of chains, each with an ordered list of points. (Or another way of representing the chains). At the moment every avenue I explore seems to involve a convoluted hack of arrays, nudges and fudges. Having never studied Computer Science I wonder if there is some sort of data structure or technique that would lend itself well to this sort of thing."} {"_id": "67557", "title": "What differences can I expect on the Java 6 Exam compared to the Java 5 Exam?", "text": "I am taking a Java class that teaches based on a Java 5 book and the prof. says that if we get the Java certification then we instantly pass the class, sounds like a deal. My question is, since I want to go for the most recent certification (Java 6), what can I expect to be different? I was told that Java 6 is so similar to Java 5 that learning Java 5 is not useless, but what about when it comes to exam differences?"} {"_id": "143659", "title": "What was the most used programming language before C was created?", "text": "C is a language written between '69 and '73 according to WIkipedia. I imagine it made programming a whole lot easier and opened the gate for other programming languages. My question, however, is what programming language dominated the market before C appeared? I'm tempted to say fortran and/or BASIC but I wasn't even alive back then so I have no idea. By \"most used\", I mean a MUST-LEARN programming language, which most programmers were using at the time."} {"_id": "115059", "title": "What responsibilities does a Management Information Systems job entail?", "text": "At my school there is apparently a Computer Science degree, which is located under the \"department of natural sciences\", while Management Information Systems is considered \"business\". Besides the usual descriptions that can be found about both jobs, I was wondering how the job of MIS actually differs from say, that of a software engineer or programmer. Just curious, thanks"} {"_id": "67552", "title": "How to output library test/benchmark data in a web framework?", "text": "I am writing an MVC framework. I have a folder full of library classes, each are self contained, and could be ripped out of the framework and used by themselves. 
The only problem is that a few of these libraries (benchmarking, unit testing) display HTML to report results. I am wondering, do I display this HTML in a view file, or hard-code it into the class? If I use view files, these modules will no longer be able to be used by themselves, and will require the print_view() method found in another class. If I hard-code this HTML into the library class, however, the class becomes difficult to read, and it becomes harder to modify the design aspects of the reports. Any suggestions/thoughts?"} {"_id": "179485", "title": "Dependency Management tool for REST endpoints", "text": "I work in a Rest Oriented environment. The number of endpoints is quite large, and they span multiple applications. The dependencies between the endpoints are large in number as well and not very well planned. Applications have cyclic dependencies amongst each other. Unfortunately, there is no central location where all the endpoints are documented and declare dependencies (the endpoints that they in turn call). Is there a tool that will help in such dependency management? I tried searching for a tool online, but, not knowing what such a thing would be called, I am unable to find anything."} {"_id": "144602", "title": "How do I make complex SQL queries easier to write?", "text": "I'm finding it very difficult to write complex SQL queries involving joins across many (at least 3-4) tables and involving several nested conditions. The queries I'm being asked to write are easily described by a few sentences, but can require a deceptive amount of code to complete. I'm finding myself often using temporary views to write these queries, which seems like a bit of a crutch. What tips can you provide that I can use to make these complex queries easier? More specifically, how do I break these queries down into the steps I need to use to actually write the SQL code? _Note that the SQL I'm being asked to write is part of homework assignments for a database course, so I don't want software that will do the work for me. I want to actually understand the code I'm writing._ More technical details: * The database is hosted on a PostgreSQL server running on the local machine. * The database is very small: there are no more than seven tables and the largest table has less than about 50 rows. * The SQL queries are being passed unchanged to the server, via LibreOffice Base."} {"_id": "115050", "title": "How do I convince someone that the test should do the assertion (not assertions) and not the helper methods", "text": "I joined a new employer and came across a new style of writing tests. @Test() public void testMethodWhichDoesNotDoAnyAssertion() { LoginPage loginPage = signUpPage.doLogin(\"username\",\"password\"); oneMoreCommonMethodCalledHere(); anotherCommonMethodCalledHere(); } public void doLogin(String userName, String password) { /* login here */ Assert.assertTrue(\"Login Successful\"); } public void oneMoreCommonMethodCalledHere() { /* Some more operations here. */ Assert.assertTrue(\"This also succeeded\"); } public void anotherCommonMethodCalledHere() { /* Some more operations here. */ Assert.assertTrue(\"Even this succeeded!!! Your code is awesome!!!\"); } So far I have been doing assertions in tests and not in the methods which are invoked from the test method. The problems I have with this approach are multiple: There are too many assertions happening in one method, though indirectly, and it defeats the idea of one responsibility per test. There are times when I want to do one assertion in my test method while testing for a workflow. (A sketch of that style follows below.) 
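(As promised, a sketch of the contrasting style - helpers act and return state, and the test owns its single assertion - in Java/JUnit with hypothetical names:)

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class LoginFlowTest {
        // Helper performs the action and returns state; it never asserts.
        private HomePage doLogin(String user, String password) {
            // ... drive the UI or API here ...
            return new HomePage();
        }

        @Test
        public void loginLandsOnHomePage() {
            HomePage home = doLogin("username", "password");
            // The single assertion lives in the test, so a failure
            // points at this scenario and nothing else.
            assertTrue(home.isDisplayed());
        }
    }

    class HomePage {
        boolean isDisplayed() { return true; } // stand-in for real page state
    }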
Also, many of the helper methods that would be called would assert things and might even fail, which would hamper the workflow test. Now, the thing I have heard in favour of this approach: it is easy for anyone to just plug a helper method into a test without worrying about the assertions which should be carried out for a scenario, as the helper method takes care of them. Comments?"} {"_id": "115051", "title": "How to present asynchronous state change in chart or diagram?", "text": "I have started to study _state transition charts_. As I see it, they assume every state transition happens instantly, consuming no time. But in most of my cases I depend heavily on asynchronous I/O, so it seems less effective to model them with such a chart. What do you think about how to represent asynchronous state change? Please recommend something to me :)"} {"_id": "66687", "title": "Does single inheritance limit what we can do with generalisation?", "text": "As a rule of thumb, generalisation is used only in specific circumstances. For example, when we can say that X is literally a subclass of Y. So, we can happily say that a Horse is a subclass of Mammal. I have always been led to believe that we should use generalisation and inheritance here. If we do not have this strict correspondence between two objects then we should not. A Horse is a Mammal. However, it is also literally a mode of transportation. So what happens in a world of single inheritance where Horse cannot inherit from both? Do we then subjugate the fact that a horse is literally a mode of transportation to a realisation, i.e., an interface such as ITransportable?"} {"_id": "178758", "title": "Why is using C++ libraries so complicated?", "text": "First of all, I want to note I love C++ and I'm one of those people who thinks it is easier to code in C++ than Java. Except for one tiny thing: libraries. In Java you can simply add some jar to the build path and you're done. In C++ you usually have to set multiple paths for the header files and the library itself. In some cases, you even have to use special build flags. I have mainly used Visual Studio, Code Blocks and no IDE at all. All 3 options do not differ much when talking about using external libraries. I wonder why no simpler alternative for this was ever made - like having a special .zip file that has everything you need in one place, so the IDE can do all the work for you setting up the build flags. Is there any technical barrier for this?"} {"_id": "66682", "title": "What can we learn from inactive assembly languages?", "text": "There are still groups of programmers who support old microprocessors, e.g., Z80, 6510, 68000, etc. What can we learn from old assembly languages at a time when functional programming is becoming fashionable? **EDIT** I imagine that there would be more to learn for embedded systems. However, assembler styles apply to a more limited extent for web programming, where caching is used and the size of routines does not really matter. The guidelines for best practice for embedded and web styles differ massively (server and client styles being different). For example, optimising a sprite multiplexer to run in, say, under 30 bytes is different from the types of optimisation we would make for code we intend to run on a web server. The types of optimisation are very different. 
The sprite multiplexer is written with memory usage as the main priority, but for our web server routine we want maximum performance, which has little to do with efficient use of memory unless we are talking about shared resources."} {"_id": "188150", "title": "Compile GPL code into a JNI-capable shared library and use it in commercial software", "text": "I am developing an application for Android in Java which calls GPLed C code via JNI. I have modified & encapsulated a piece of GPL software behind a JNI interface and compiled it as a shared library (.so) which I can use via JNI from my Java part (the commercial part with closed source). Because I have stripped the original GPL software of everything that is not needed by my software, that GPL software no longer has other GPL dependencies which explicitly forbid using it as a shared library in closed-source software (as SecretRabbitCode, for example, forbids). I have no problem putting my modified version of the GPL software online, but I cannot do this with my commercial software. Because I need the modified version of the GPL software in my software, I have to bundle everything in one package (an APK file on Android). Sadly, the GPL software has 1 or 2 more dependencies on GPL code (just C files like aes.c), so it's not as easy as just asking the author of this GPL software. Because the Android way of \"packing Java and .so files together, bundling them and distributing them\" is quite new, it's not as easy as with pure Java or pure C code. Who can help out?"} {"_id": "145553", "title": "Sharing APIs between different programming languages?", "text": "I was just wondering how APIs can be shared between different programming languages. I mean, MS has .NET, which VB.NET, C# and various other technologies use. I doubt .NET is written separately for each programming language. How are structs and classes shared between languages? Also the same for Unity3D - JavaScript shares APIs with C# and Boo. How?"} {"_id": "66689", "title": "What would happen if a bit of GPL code sneaked into the Windows source code?", "text": "Imagine that an unfaithful (or maybe careless) employee at Microsoft managed to sneak three lines of GPL code into the core of a distribution of Windows. Wouldn't this mean that Microsoft would need to publish all their source code under the GPL as well? Or could they just rewrite the three lines once they are notified about the issue? Shouldn't Microsoft be scared to death about this? **Edit** In the comments below, Kenneth states that it has actually happened, so I would be most interested in any references. I also do not understand how this question could get closed."} {"_id": "178755", "title": "Setting up ASP.NET structure for code", "text": "I've always coded in C# MVC3 when developing web applications. But now I want to learn a bit more about developing web sites with just ASP.NET, and I'm wondering what a good setup for my code would be. For me, an MVC-like pattern seems to be a good way to go. But obviously ASP.NET doesn't have any `router` and `controller` classes. So I guess people have a different way of setting up their code when they do ASP.NET. So I'm looking for more information on how to get started with this. So not really the basics of ASP.NET, but something that focuses on a good code setup. Any good tutorials/information about this?"} {"_id": "225307", "title": "Another way of expressing 'Dirty' coding", "text": "I'm writing a brief for a new starter at the company.
I've pointed out the aims of what I want to achieve, and now I'm writing some suggested (but not concrete) solutions. For each of the aims, I'd like to suggest a fast, dirty method and a slower but more robust solution. I don't mean 'dirty' as in uncommented and untabbed code, but simply as in faster, hackish code. I'm happy for the new starter to take whichever pathway they feel is necessary, but I feel my use of the word 'dirty' is going to put them off that particular solution. They're probably going to try to impress the management, and I don't want the 'dirty' solution to be a turn-off if that's what they're most comfortable with. Can you help me decide on slightly better terminology than 'dirty'? I aim to contrast the hacky/fast solution (which is still valid) with a more robust, scalable solution. For a project as trivial as the one we're producing, either method will be accepted."} {"_id": "143387", "title": "Which topics do I need to research to enable me to complete my self-assigned \"Learning Project\"?", "text": "I want to continue learning C#. I've read parts of a few books recommended on here, and the language is feeling more familiar by the day. I'd like to tackle a mid-sized personal project to take my expertise to the next level. What I'd like to do is create an application that 'manages expenses' and runs on multiple machines on a LAN. So, for example, say we have person1 and person2 on separate machines running the application: when person1 enters an expense, it will appear on person2's (pretty UI) view of the expenses database, and vice versa. What topics do I need to research to make this possible for me? I plan on learning WPF for the UI (though the steep learning curve (or so I'm told) has me a little anxious about that at this stage). With regards to the database, which database would you recommend I use? I don't want a 'server' for the database to run on, so do I need to use an embedded database, with each client machine running a copy that updates the others (upon startup/entering of an expense on any machine, etc.)? What topics under networking should I be looking at? I haven't studied networking before in any language, so do I need to learn about sockets, or...?"} {"_id": "143386", "title": "best practice for initializing class members in php", "text": "I have lots of code like this in my constructors: function __construct($params) { $this->property = isset($params['property']) ? $params['property'] : default_val; } Is it better to do this rather than specify the default value in the property definition? i.e. `public $property = default_val`? Sometimes there is logic for the default value, and some default values are taken from other properties, which was why I was doing this in the constructor. Should I be using setters so all the logic for default values is separated?"} {"_id": "161774", "title": "What relationship do software Scrum or Lean have to industrial engineering concepts like theory of constraints?", "text": "In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of workflow in which software work product does not accumulate in development awaiting release at some future date.
Both permit new or refined requirements and feedback from QA and customers to be acted on with as little delay as possible, based on priority. A few years ago I heard a lecture where the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next. Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? * Drum = {daily scrum meeting, monthly release}? * Buffer = {burn down list, source control system}? * Rope = {daily meeting, continuous integration server, monthly releases}? Industrial engineers define work flow in terms of different kinds of factories. * I-Factories: straight pipeline. One input, one output. * A-Factories: many inputs and one output. * V-Factories: one input, many output products. * T-Plants: many inputs, many outputs. If it applies, what kind of factory is most like Scrum or Lean, and why?"} {"_id": "67882", "title": "When, if ever, can code standards be ignored?", "text": "My company has decided to use stored procedures for everything dealing with the database (because they didn't know of any other way besides raw SQL), and as the saying goes, \"When in Rome...\", so I try to follow. Recently I had to add a hack fix that required grabbing a database value, and since it was a single value from a single table, I wrote it as inline SQL (parameterized, of course), since there didn't seem to be a need for a stored procedure for one trivial line of code used in a single part of the application as a kludge. Of course, I've now been told to fix it and only ever use stored procs for anything related to the database. This feels just a bit too much like blindly following dogma instead of using common sense. Don't get me wrong, I understand the purpose of having coding standards, but I am also a proponent of ignoring standards when they don't make sense, not just blindly following them as though they were Gospel."} {"_id": "129351", "title": "Splitting Logic, Data, Layout and \"Hacks\"", "text": "Sure, we've all heard of programming patterns such as MVVM, MVC and such. But that isn't really what I'm looking into, as Layout, Data and Logic are already pretty much split up (XML layout markup, database, _insert your language of choice here_). The platform I am developing for is hard to maintain across updated versions and older OSes. The project has grown significantly over the last few months, and dealing with different platform versions really is a pain. For example, simply disabling a user interface control for all existing versions took me around 40 lines of code in the logic layer, wrangling with invocation, delegation, singletons that provide UI handling and so on. Is there a clean way to keep track of those \"hacks\", maybe by extracting them into separate classes or even packages? Should I override existing framework code in order to handle _my_ requirements correctly?
If so, does that concept have a name?"} {"_id": "129354", "title": "Simulating/developing a testing strategy for factors that might cause the same algorithm to produce different results in Distributed Systems", "text": "This is a **Purely Academic Question** **Context:** I am working with some algorithms which are meant to arrive at a consensus in distributed systems. I intended to tackle Byzantine faults with these algorithms. For this I have implemented several algorithms published in IEEE papers, and I need a platform to test these algorithms. I wanted to test the merit of existing algorithms. For this I implemented thousands of Linux Containers on my system, and now I want to do message passing between them, or, say, _simulate_ my distributed system. But the catch is that the data that is flowing must have _faults_. This is the genesis of this question. The reason I need something more _sophisticated_ than RNGs is that I will need to attach some real credibility to my work. I want it to tackle some real-world application generating faults rather than just fix the faults I myself generated in an algorithm. So, I need to simulate the factors that result in Byzantine faults. OR, to quote FrustatedWithFormsDesigner: `I need to develop a testing strategy that will have a deliberate number of faults to test fault-handling` To summarize: **Say I am running a program in a distributed environment; what are the factors that might end up generating Byzantine faults, and is it possible for me to incorporate these factors in my simulation, and how?** So, what I need is: a program that will make a small number of mistakes every now and then, and I should not know what mistakes it has made and when. _I do not need it to make multiple mistakes in one set (run of the algorithm); rather, I plan on making (say) 10,000 runs of the program, and I need it to make mistakes 2,000 times._ Very importantly, I must be confident that there are no more than (1/5)n mistakes, where n is the total number of results generated using the program. The _results_ that I am talking about here can be anything that is quantifiable and verifiable, like e.g. an array of values. Doing something like this: 1 for(int i=0; i<10000; i++) 2 //one fifth of the time, put garbage in the array using a random function!! 3 for (int j=0; j<5; j++) 4 array[j]=j; using an RNG in step `2` to _hide_ where the fault is present is too simplistic, trivial and not _real_ enough. I thought I could use some algorithm built around a mathematical function that is bound to fail 1/5th of the time, but I could not think of any. P.S. Please tell me if you need more data to understand the problem."} {"_id": "184654", "title": "I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?", "text": "My specific case here is that the user can pass a string into the application, the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but they may say their age is \"apple\". Correct behavior in that case is to roll back the transaction and to tell the user an error occurred and they'll have to try again. There may be a requirement to report on every error we can find in the input, not just the first. In this case, I argued we should throw an exception.
He disagreed, saying, \"Exceptions should be exceptional: It's expected that the user may input invalid data, so this isn't an exceptional case.\" I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you _had_ to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). Seems more error-prone that way. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?"} {"_id": "220252", "title": "Either Monad and Exceptional Circumstances", "text": "I have a function returning an Either, such as GetUserFromDb(int id). If the database is offline, should I catch the error in the function and wrap it in a failure / Left case, or should I let it bubble out as an exception, as there is nothing I can do and it really is an exceptional situation?"} {"_id": "237492", "title": "Should I use an Exception in a case like this?", "text": "I have a Windows service with a fluent interface like this: aRequest = Repository.getRequest() .createProcess() .validate(); Sometimes `getRequest()` could return a `null` value, and this would cause an error in `createProcess()`. I could simply split `getRequest()` from `createProcess()`, but if I don't do that, which way should I follow; which is better: * Check whether the request (`this`) is null and, in that case, return null: if (this is null) return null; I could do this check in every method after `getRequest()`. At the end, `aRequest` will be `null`. * Throw an exception if the `createProcess()` method receives a `null` value: if (this is null) throw new NullRequestException(); PRO of the second way: only the second method needs a check, independently of the number of methods in the chain. CON of the first way: every method in the chain needs a check. Now the question: is the second way a bad use of the exception concept, given that the absence of a request could sometimes be normal?"} {"_id": "67888", "title": "Which Programming Languages Support the Following Features?", "text": "My personal programming background is mainly in Java, with a little bit of Ruby, a tiny bit of Scheme, and most recently, due to some iOS development, Objective-C. In my move from Java to Objective-C I've really come to love some features that Objective-C has that Java doesn't. These include support for both static and dynamic typing, functional programming, and closures, which I'm trying to leverage in my code more often. Unfortunately there are trade-offs, including lack of support for generics and (on iOS at least) no garbage collection. These contrasts have led me to start a search for some of the programming languages that support the following features: * Object Oriented * Functional Programming Support * Closures * Generics * Support for both Static and Dynamic Typing * Module Management to avoid classpath/DLL hell * Garbage Collection Available * Decent IDE Support Admittedly, some of these features (IDE support, module management) may not be specific to the language itself, but they obviously influence the ease of development in the language.
Which languages fit these criteria?"} {"_id": "186439", "title": "Is declaring fields on classes actually harmful in PHP?", "text": "Consider the following code, in which the setter is deliberately broken due to a mundane programming error that I have made for real a few times in the past: <?php class TestClass { public $testField; function setField($newVal) { $testField = $newVal; /* deliberate bug: the missing $this-> means this assigns a local variable, not the field */ } function getField() { return $this->testField; } } $testInstance = new TestClass(); $testInstance->setField(\"Hello world!\"); // Actually prints nothing; getField() returns null echo $testInstance->getField(); ?> The fact that I declared `$testField` at the top of the class helps conceal that programming error from me. If I hadn't declared the field, then I would get something similar to the following warning printed to my error log upon calling this script, which would potentially be valuable in helping my debugging - especially if I were to make an error like this in a large and complicated real-world application: > PHP Notice: Undefined property: TestClass::$testField in /var/www/test.php on line 13 With the declaration, there is no warning. Perhaps I'm missing something, but I'm aware of only two reasons to declare class fields in PHP: firstly, that the declarations act as documentation, and secondly, that without declarations one can't use the `private` and `protected` access modifiers, which are arguably useful. Since the latter argument doesn't apply to public fields - assigning to an undeclared field of an object makes it public - it seems to me that I ought to at least comment out all my public field declarations. The comments will provide the exact same documentation value, but I will benefit from warnings if I try to read an uninitialized field. On further thought, though, it doesn't seem to make sense to stop there. Since in my experience trying to read an uninitialized field is a much more common cause of error than trying to inappropriately read or modify a private or protected field (I've done the former several times already in my short programming career, but never the latter), it looks to me like commenting out all field declarations - not just public ones - would be best practice. What makes me hesitate is that I've never seen anybody else do it in their code. Why not? Is there a benefit to declaring class fields that I'm not aware of? Or can I modify PHP's configuration in some way to change the behavior of field declarations so that I can use real field declarations and still benefit from \"Undefined property\" warnings? Or is there anything else at all that I've missed in my analysis?"} {"_id": "253113", "title": "Real time data synchronization techniques between two systems", "text": "I need to come up with a design for real-time data updates from a COTS product (a Point of Sale system) to a custom-built .NET application (an Inventory Management system). In particular, any sales transaction happening in the POS system needs to update the inventory database (present in the Inventory Management system) immediately (in real time). The only way any other system can communicate with the POS system is via its API, exposed as web services. I have thought about introducing a service bus (or any such EAI tool) between the two systems and taking advantage of the publish-subscribe model, so that any sales depletion happening in the POS system will trigger a data update to the IM system via the service bus. My questions are: 1. Is it a good/feasible solution? 2.
Do you have any other suggestions for such real-time data synchronization between different systems?"} {"_id": "29149", "title": "White Boards -- Who Uses Them?", "text": "So, as both a full-time programmer and a hobbyist (developing my own things for personal use and maybe to sell one day), I feel that purchasing a big whiteboard to hang in my room at home would be very useful. Does anyone here have one as well, to use for high-level designs (UML, architecture, etc.) and things like very early UI mockups? If you guys do have them, which ones have you bought? I can't seem to pin down one that would be good for home use, and I'm not sure of the pricing/other things. Thanks!"} {"_id": "253118", "title": "I'm always reimplementing observer/subject code in Java. Is there a better option?", "text": "I'm always writing observer/subject interfaces in a particular Java project, e.g.: /** * Registers the receiver to the dispatcher. * When data arrives that the receiver can process, * it will be passed to the receiver. * ... */ void addReceiver(IDataReceiver receiver); /** * @return * True, if the receiver has been registered, false otherwise. * ... */ boolean hasReceiver(IDataReceiver receiver); ... When a class implements these interfaces, I have to implement the code, test it, debug it, etc. It feels so futile to keep reimplementing such similar logic. Is there _any_ better option in Java?"} {"_id": "59221", "title": "How to hire a web-programmer: for non-programmers", "text": "I am a non-programmer who has used the services of Freelancer, oDesk, etc. I've tried asking for what I need, but I can't find anyone who can show me any type of example similar to what I request in the specs for the web programming. They have front ends and back ends, but they don't fulfill true \"live\" website requirements - \"live\" as in ready to support traffic, keys in hand, able to be updated constantly by me, ... How do I figure out how to evaluate a programmer? How do I bid the appropriate price for the services?"} {"_id": "150800", "title": "Why is the \"kill\" command called so?", "text": "Why was it decided to call the `kill` command \"kill\"? I mean, yes, this utility is often used to terminate processes, but it can actually be used to send any signal. Isn't it slightly confusing? Maybe there are some historical reasons. All I know from `man kill` is that this command appeared in Version 3 AT&T UNIX."} {"_id": "61481", "title": "Open Source Etiquette", "text": "I've started working on my first open source project on CodePlex and came across some terrible code. (I did learn that C# still has the \"goto\" statement.) I started adding features that the \"owner\" wanted, and after exploring the code base and seeing what a mess it was (e.g. using \"goto\") I wanted to clean it up a bit. But I am a bit concerned, and that is why I'm turning to you all: is it proper etiquette for me to \"fix\" the \"bad code\", or should I let it be and work on new features? Like I said before, I'm new to the whole OSS scene and to working on a team in general, so I would like to not mess it up."} {"_id": "150807", "title": "When should I use AtomPub?", "text": "I have been conducting some research into RESTful web service design, and I've reached what I think is a key decision point, so I thought I'd offer it up to the community to get some advice. In keeping with the principles of a RESTful architecture, I want to present a discoverable API, so I will be supporting the various HTTP verbs as fully as possible.
My difficulty comes with the choice of representation of those resources. You see, it would be easy for me to come up with my own API that covers how search results are to be presented and how links to other resources are provided, but this would be unique to my application. I've read about the Atom Publishing Protocol (RFC 5023) and how OData promotes its use, but it seems to add an extra level of abstraction over what is (currently) a rather simple API. So my question is: when should a developer select AtomPub as their choice of representation - if at all? And if not, what is the current recommended approach?"} {"_id": "142652", "title": "Reasons to Use Version Control", "text": "> **Possible Duplicate:** > I'm a Subversion geek, why I should consider or not consider Mercurial or Git or any other DVCS? > What is the value of using version control? I am a relative noob to programming, and am not going to be developing super-good software or even programming professionally anytime soon. With this predicament, is there really any reason to learn Git or Subversion or any other version control system?"} {"_id": "74815", "title": "What is the value of using version control?", "text": "I'm new to version control (currently using SVN), but I don't understand how this helps developers. What does version control do that makes it useful in a development environment?"} {"_id": "35077", "title": "The use of Test-Driven Development in Non-Greenfield Projects?", "text": "So here is a question for you, having read some great answers to questions such as \"Test-Driven Development - Convince Me\". So my question is: **\"Can Test-Driven Development be used effectively on non-Greenfield projects?\"** To specify: _I would really like to know if people have had experience using TDD in projects where there were already non-TDD elements present, and the problems that they then faced._"} {"_id": "92169", "title": "MASS equivalent for Intel compilers and architectures", "text": "I was looking for an Intel alternative to the IBM MASS libraries (Mathematical Acceleration Subsystem). I know Intel provides the MKL libraries, but I don't know if there is a specific math acceleration library in them."} {"_id": "73629", "title": "How do you get consistency in source code / UI without stifling developer's creativity?", "text": "We have a small team (2-3) of programmers writing a program with a lot of forms and dialogs. We have a problem where we cannot keep good consistency in what we write, or how we write it. The latest issue I've noted is that we have lots of places where we have a date range, and we use all kinds of wording to indicate this range: is it Start/End, or From/To, or \"Between ___ and ___\"? The other side of this is that one of the developers might come up with a better way of doing something (like maybe initializing the state of a check box from the settings file). And then we'll have all of the \"old\" stuff written in the old/poor way, and new stuff written in the better method. I try to be constantly vigilant about the first thing, but it seems like I'm always finding new failures. The second one creates a huge burden if we're going to go back and fix all the old stuff as soon as we come up with a slightly better way of doing something. Either that, or we ignore all the old stuff until something is broken, and then we have no clue what the heck the software is doing, because it's written completely differently than what we write currently.
One last thing: if we push the burden of \"fix it everywhere now that you've found it\" onto the developer who comes up with the better solution, it's self-defeating, because it's like: great, that's a better way to check for that error; now fix it everywhere in the code. Bosses don't really ever seem to care about the quality of the code, just when we'll be able to release the next version (but that's a different discussion)."} {"_id": "73623", "title": "What is the best way for a top-down procedural programmer to learn OOP?", "text": "I'm an old-school top-down procedural programmer. I started with Turbo Pascal in the DOS environment. Every time I try to learn OOP on my own I stumble. I try to make OOP somehow fit into my top-down mindset. It's very frustrating; I seem to have a mental block. What is the best way for someone like me to learn OOP?"} {"_id": "177651", "title": "How do developers find the time to stay on top of latest technologies?", "text": "I was a freelance web developer until circa 2004, when I started going down the management route, but I have decided to try to get back into development again (specifically JavaScript and HTML5 web/mobile web apps), and I really get the impression that, to be truly good at these and similar fast-moving technologies, a constant amount of time needs to be set aside to invest in getting better at existing skills in addition to learning new skills. I understand that right now, since I am getting back into things, there is a pretty steep learning curve, but seeing how good many guys out there are, the only way I see of getting up there is putting in a serious amount of time. For those working as full-time developers, what I am trying to understand is this: on most days, how much time in the office is spent actually grinding out code compared to learning/research? I could easily spend 2-4 hours daily getting on top of the best ways to go about doing things. Do most good developers who are employed full time invest significant hours outside of work sharpening their skills? Or maybe I'm looking at all of this completely wrong?"} {"_id": "235232", "title": "How to guarantee invariants / Inner logic in setter methods", "text": "According to DDD principles, I use factory methods to create consistent objects and to ensure that the objects are in the right state. Now I'm in doubt about the inner logic of setter methods. I'm tied up with an object that's similar to the following code snippet, so let's take it as an example: public class UserCredentials { private String username; private UserCredentials() { super(); } public static UserCredentials create(final String username) { String upperCased = username.toUpperCase(); UserCredentials newInstance = new UserCredentials(upperCased); return newInstance; } private UserCredentials(final String username) { this.username = username; } public String getUsername() { return this.username; } public void setUsername(final String username) { this.username = username; } public void checkConsistency() { ... isUppercase(); ... } } We do have an invariant: the username of the credentials must always be uppercase. Now I want to make sure that changing the username doesn't violate the invariant. But how do I guarantee the rule? 1. I rewrite the setter method and add the conversion logic to the setter. Drawback: the setter contains logic. 2. I rely on clients always providing an uppercase username, and I throw consistency exceptions in the case of violation.
Drawback: the origin of the wrong usage is hard to discover; moreover, it's bad practice in general. 3. I remove the setter method and replace it with a domain method like `changeUsername(String username)` that contains the conversion logic. Drawback: external developers may miss the setter method. Because of the statement \"the username of the credentials must always be uppercase\", I tend to prefer alternative 3. But is there a smarter solution?"} {"_id": "189286", "title": "Entity Framework eager loading/reference data", "text": "I'm struggling to get my head around how best to eager load entities, and how to assign relationships when creating new entities. I'm using EF5 POCO, by the way. I'm retrieving a large hierarchy of entities from a database, representing chemical analysis results. As a simple example, I have an \"AnalysisResult\" entity with two generated properties that relate it to a \"ChemicalElement\" entity - `ChemicalElement` and `ChemicalElementId`. Standard stuff so far. I started out by retrieving my AnalysisResults from the database _and_ eager loading the related ChemicalElements using the `.Include` LINQ statement, meaning I could access a result's chemical element simply via `AnalysisResult.ChemicalElement`. This was fine for results loaded from the database, but what if I wanted to create a new AnalysisResult - how would I get the ChemicalElement entity that I wanted to assign to it? As chemical elements never change, I decided to treat them as \"reference/lookup data\", so I dropped the eager loading, and I now retrieve _all_ ChemicalElements into a separate collection that I can refer to whenever I need. The downside of dropping eager loading is that an AnalysisResult loaded from the database has a null .ChemicalElement property, so I have to use its .ChemicalElementId property to look up the ChemicalElement entity in my \"reference collection\". Have I overcomplicated the solution? Should I keep the idea of the ChemicalElement \"reference collection\" _but also_ eager load them when retrieving existing AnalysisResults? It seems wasteful to (potentially) retrieve them twice - once when I retrieve them all, and again when eager loading during the AnalysisResults retrieval."} {"_id": "78908", "title": "Best practices for backing out a feature from a QA trunk", "text": "I have a question regarding source control in a general sense and specifically in TFS. Let's say you have a three-tier system (development branches, a QA main trunk and a production branch). At a certain point, the changes from the dev branches are merged to the QA branch and the QA branch is tested. After the initial test, \"feature A\" is broken and needs to be fixed or removed; how should a feature be updated or removed from the QA branch? Should it be edited directly? Or should the developer check in the changes to dev and let them propagate up to QA again? Any thoughts on this (in general)? Also, if this is done in TFS, are there any tools that can help out with this situation? Thanks!!"} {"_id": "8310", "title": "What is the best way to bring a new programmer up to speed on a project?", "text": "When you add a programmer to a project with a large code base, what is the best way to get them involved right away? Do you give them formal training in the code base or let them figure it out themselves? What tasks are best to get them up to speed? What kind of tasks are \"safe\" until they are familiar enough with the project to contribute to it with a full understanding of what they're doing?
How do you determine if they've reached a level of full understanding? **Edit**: The project isn't behind a deadline. I work alone and want to add employees soon. I'd like to know how best to familiarize them with the code base."} {"_id": "8311", "title": "Are programmers good at learning \"spoken\" languages?", "text": "This might be slightly off topic, but I'll risk it, as the site is about _Programmers_! Programmers are good at constantly learning new programming languages, but how good are they at learning a new spoken language? Have you taken up a foreign language (French/Spanish/etc.) **as an adult** and mastered it? Was it easy? I ask because I have been trying to learn French for quite some time now, and I'm still at the annoying \"Je parle un peu de Fran\u00e7aise\" stage. I've attended two French courses: one where the majority of the class were programmers, and one where they weren't, and the difference in ability was quite apparent. Does a mathematical/logical inclination hinder learning a spoken language where grammar is not in ones and zeros? Or am I just transferring blame instead of simply accepting that I am not good with languages? _[It is important that you have not been taught the language in school, as early exposure really gives you the upper hand. I've picked up, and got quite good at, languages I've been exposed to under the age of 10.]_"} {"_id": "166461", "title": "Does comparison operand order affect speed?", "text": "I notice that someone in my organization writes comparisons like: if (100 == myVariable) rather than: if (myVariable == 100) He claims the former is quicker in languages like C++. I can't find any evidence. We program in C#. Is this true for any programming language?"} {"_id": "189283", "title": "How can architects work with self-organizing Scrum teams?", "text": "An organization with a number of agile Scrum teams also has a small group of people appointed as \"enterprise architects\". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X, or want to use REST instead of SOAP, but the EA does not approve of that. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people \"grab\" all power and the team ends up feeling demotivated and not very agile at all. The Scrum Guide has this to say about it: > Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality. Is that reasonable? Should the EA team be disbanded? Should the teams refuse, or simply comply?"} {"_id": "79189", "title": "Using work time effectively: How to create code quickly?", "text": "> **Possible Duplicates:** > How can I be more productive at work? (additional context inside) > How to code on a very tight schedule? Sometimes we create new code quickly. And sometimes we can't concentrate. The development process has different _thinking stages_. There is a _fast_ stage, when we create code as a result of our ideas, and there is a _slow_ stage, when we think. And it is important not to be disturbed during the _slow_ stage, when thinking time may be extended. Do you know any other recommendations for using work time effectively?
Maybe a good guide?"} {"_id": "64818", "title": "How to code on a very tight schedule?", "text": "I'm working on a project that has a very tight schedule. I don't have much time to code and test (even though I work more than 12 hours every day, it's still delayed), and the result is very fragile. Its code is also a real dilemma. This program is used by all offices of our customer's company, which are located in many countries. I regularly get phone calls at midnight about errors from our user/tester, or about them not knowing how to use some features. After three years on this project, I feel very stressed and I can't sleep well, because I'm very worried about errors and phone calls. I have a few questions: 1. For three years, all the code I've written is just the perfect-usage-scenario code (so it breaks easily). It's poorly designed and doesn't have any unit tests. I have lots of problems because of this fact. Therefore, I want to know whether it's feasible to write code that works when the project has a very tight schedule? 2. How can I write better code in the same amount of time? 3. How can I clear my mind and not get worried about work when I go to sleep? I also welcome any suggestions."} {"_id": "78906", "title": "tracking web traffic with Python", "text": "I learned of Compete.com, and I am interested in how they track web traffic from other websites. I am most interested in doing this with Python, but when I Google I can't find anything. Probably my English. Could someone tell me of existing modules that I may look at, and in general how to track web traffic from other sites (what do they look at when tracking)?"} {"_id": "119617", "title": "Should I keep separate client codebases and databases for a software-as-a-service application?", "text": "My question is about the architecture of my application. I have a Rails application where companies can administrate all things related to their clients. Companies would buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database? 1. Separate app code base and database per company 2. One app code base but separate database per company 3. One app code base and one database The decision involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should have good performance) and scalability (again, if successful, it should have good performance, but it should also be easy for me to handle all the companies, code changes, etc). For the sake of maintainability, I tend to opt for the one code base, but for the database I really don't know. What do you think is the best option?"} {"_id": "115948", "title": "Is it unnecessary to use Github (social coding) even if you are the only one working on a project?", "text": "I started using GitHub now that I'm working on a project with some guys. And I started to wonder if I should use it in my personal projects too. I'm not sure if this will help me in some way or if it is unnecessary."} {"_id": "119613", "title": "Single click handler for all buttons in Javascript? Is it a pattern? What's the benefit?", "text": "I have been told that when there are multiple buttons on the page for the same purpose but targeting different items (e.g.
deleting an item on a grid of items), it is recommended to register a click handler only on the topmost element, like 'body', and check what was clicked, instead of hooking up a click handler on every delete button. What's the benefit of this? Does creating more handlers cause problems? Is it an optimization of some sort? Is it a pattern? Does it have anything to do with performance? Where can I read more about it?"} {"_id": "119610", "title": "GPL: does one line of GPLed code make program a \"derived work\"", "text": "I've recently run into an argument with a person who claims to be a lawyer (I have my suspicions about this not being completely true, though). As far as I know, copying even one line of code from a GPLed program into a proprietary body of code requires you to release the whole thing under the GPL, if you ever decide to publish the software and make it available to the public. The person in question claims that it is \"absurd\" (I know it is, but AFAIK that's how the GPL works), that it is \"redefining the copyright\", that the \"GPL has no power to do that\", and that claiming that \"one line of GPLed code makes you release the whole thing under GPL\" is absurd. That contradicts the GPL FAQ. Can somebody clarify the situation? Am I right in assuming that copying even the smallest subroutine from a GPL program into your code automatically makes your program a \"derived work\", which means you are obliged to release it under the GPL license if you publish it?"} {"_id": "119618", "title": "Should the developer be involved in setup of test data in QA and UAT environment?", "text": "When QA or UAT comes, should the developer still be involved in setting up data or finding test data for the QA tester or business user? Or will this introduce bias, since the developer coded the system changes that are being tested in the said environments?"} {"_id": "119619", "title": "What undergraduate course to choose for a mature programmer returning to study", "text": "I have been developing applications (mostly web-based) for almost 10 years now and have learnt pretty much everything I know through experience (and the internet!). I wouldn't call myself an advanced programmer, but I am quite proficient in several languages (C#, JavaScript, Ruby, HTML/CSS, etc.) and spend quite a bit of time working on personal projects and reading countless books & articles. I am looking to emigrate to Canada, hopefully Vancouver (I'm from the UK), and one way would be on a student visa, if I were going to be studying for a minimum of 2 years. Having never been to university or achieved anything higher than A-Levels, I am quite tempted by this path. The thought of learning is more exciting to me now than it was 10 years ago! What would people recommend as a good undergraduate course to take that would complement this career path? Would Math be beneficial, and if so, which area of Math? TL;DR What undergraduate course/area of study would complement 10 years of (mostly web-based) programming experience?"} {"_id": "107938", "title": "THREADS: Kernel threads vs. Kernel-supported threads vs. User-level threads?", "text": "Does anyone know what the differences between these are? It seems to me that kernel threads correspond to the code that runs the kernel (intuitively), but I'm not sure about the other two...
Also, would the pthreads standard be considered user-level and kernel-supported, since you are accessing a library while the kernel does all the thread scheduling/switching?"} {"_id": "191103", "title": "What skills should I cultivate to become a development/technical lead?", "text": "I am currently a professional programmer. I want to expand my skillset, but I also want to make the career jump to being a dev lead as part of a team. I know there's got to be a lot to learn (and this won't be an instant thing), but I think I'm smart enough to do it and I'm up to the challenge. I'm sure that many of the members here have probably gone through this themselves and are now successful dev leads. Unfortunately, even though I know some personal areas I'd like to improve (depth of knowledge, breadth of knowledge, skillsets, etc.), I'm not really sure how I would start something like this. As a programmer now, what steps should I take to get me to this goal? What should I prioritize?"} {"_id": "137285", "title": "Developer to team leader", "text": "How much experience is considered enough for a developer to become a team leader? For IT management, what is the measure for checking whether a current team member is good enough to become a team leader (on a technical level and an interpersonal level)? * * * updated: removed the second part of the question for being a duplicate (see comments)"} {"_id": "110532", "title": "What kinds of copyright or privacy issues apply to tweets?", "text": "If I want to use the data on Twitter in my application, what are the restrictions? What user data should I avoid touching? For example, if I want to collect information from certain types of tweets about users and use this information on my site as profiles for these users, does this require permission from the users first, or can I directly go and get the information? Technically, from anyone with a public account on Twitter, we can get the data of his/her tweets and create profile information for him/her, but does Twitter's privacy policy allow this? If you published some tweet on Twitter, am I allowed to take that tweet's content and put it on my site (stating that it is written by you) but without asking you for permission? I found this in the Twitter Terms of Service: > Tip: We encourage and permit broad re-use of Content. The Twitter API exists to enable this. But I also found this in the Twitter API Terms: > You may not use Twitter Content or other data collected from end users of your Client to create or maintain a separate status update or social network database or service."} {"_id": "114090", "title": "Are value converters more trouble than they're worth?", "text": "I'm working on a WPF application with views that require numerous value conversions. Initially, my philosophy (inspired in part by this lively debate on XAML Disciples) was that I should make the view model strictly about supporting the _data_ requirements of the view. This meant that any value conversions required to turn data into things like visibilities, brushes, sizes, etc. would be handled with value converters and multi-value converters. Conceptually, this seemed quite elegant. The view model and view would both have a distinct purpose and be nicely decoupled. A clear line would be drawn between \"data\" and \"look\". Well, after giving this strategy \"the old college try\", I'm having some doubts about whether I want to continue developing this way.
I'm actually strongly considering dumping the value converters and placing the responsibility for (almost) all value conversion squarely in the hands of the view model. The reality of using value converters just doesn't seem to be measuring up to the apparent value of cleanly separated concerns. My biggest issue with value converters is that they are tedious to use. You have to create a new class, implement `IValueConverter` or `IMultiValueConverter`, cast the value or values from `object` to the correct type, test for `DependencyProperty.Unset` (at least for multi-value converters), write the conversion logic, ~~register the converter in a resource dictionary~~ [see update below], and finally, hook up the converter using rather verbose XAML (which requires use of magic strings for both the binding(s) ~~and the name of the converter~~ [see update below]). The debugging process is no picnic either, as error messages are often cryptic, especially in Visual Studio's design mode/Expression Blend. This isn't to say that the alternative--making the view model responsible for all value conversion--is an improvement. This could very well be a matter of the grass being greener on the other side. Besides losing the elegant separation of concerns, you have to write a bunch of derived properties and make sure you conscientiously call `RaisePropertyChanged(() => DerivedProperty)` when setting base properties, which could prove to be an unpleasant maintenance issue. The following is an initial list I put together of the pros and cons of allowing view models to handle conversion logic and doing away with value converters: * Pros: * Fewer total bindings since multi-converters are eliminated * Fewer magic strings (binding paths ~~+ converter resource names~~) * ~~No more registering each converter (plus maintaining this list)~~ * Less work to write each converter (no implementing interfaces or casting required) * Can easily inject dependencies to help with conversions (e.g., color tables) * XAML markup is less verbose and easier to read * Converter reuse still possible (although some planning is required) * No mysterious issues with DependencyProperty.Unset (a problem I noticed with multi-value converters) *Strikethroughs indicate benefits that disappear if you use markup extensions (see update below) * Cons: * Stronger coupling between view model and view (e.g., properties must deal with concepts like visibility and brushes) * More total properties to allow direct mapping for every binding in view * ~~`RaisePropertyChanged` must be called for each derived property~~ (see Update 2 below) * Must still rely on converters if the conversion is based on a property of a UI element So, as you can probably tell, I have some heartburn about this issue. I'm very hesitant to go down the road of refactoring only to realize that the coding process is just as inefficient and tedious whether I use value converters or expose numerous value conversion properties in my view model. Am I missing any pros/cons? For those who have tried both means of value conversion, which did you find worked better for you and why? Are there any other alternatives? (The disciples mentioned something about type descriptor providers, but I couldn't get a handle on what they were talking about. Any insight on this would be appreciated.) * * * **Update** I found out today that it's possible to use something called a \"markup extension\" to eliminate the need to register value converters.
In fact, it not only eliminates the need to register them, but it actually provides intellisense for selecting a converter when you type `Converter=`. Here is the article that got me started: http://www.wpftutorial.net/ValueConverters.html. The ability to use a markup extension changes the balance somewhat in my pros and cons listing and discussion above (see strikethroughs). As a result of this revelation, I'm experimenting with a hybrid system where I use converters for `BoolToVisibility` and what I call `MatchToVisibility` and the view model for all other conversions. MatchToVisibility is basically a converter that lets me check if the bound value (usually an enum) matches one or more values specified in XAML. Example: Visibility=\"{Binding Status, Converter={vc:MatchToVisibility IfTrue=Visible, IfFalse=Hidden, Value1=Finished, Value2=Canceled}}\" Basically what this does is check if the status is either Finished or Canceled. If it is, then the visibility gets set to \"Visible\". Otherwise, it gets set to \"Hidden\". This turned out to be a very common scenario, and having this converter saved me about 15 properties on my view model (plus associated RaisePropertyChanged statements). Note that when you type `Converter={vc:`, \"MatchToVisibility\" shows up in an intellisense menu. This noticeably reduces the chance of errors and makes using value converters less tedious (you don't have to remember or look up the name of the value converter you want). In case you're curious, I'll paste the code below. One important feature of this implementation of `MatchToVisibility` is that it checks to see if the bound value is an `enum`, and if it is, it checks to make sure `Value1`, `Value2`, etc. are also enums of the same type. This provides a design-time and run-time check of whether any of the enum values are mistyped. To improve this to a compile-time check, you can use the following instead (I typed this by hand so please forgive me if I made any mistakes): Visibility=\"{Binding Status, Converter={vc:MatchToVisibility IfTrue={x:Type {win:Visibility.Visible}}, IfFalse={x:Type {win:Visibility.Hidden}}, Value1={x:Type {enum:Status.Finished}}, Value2={x:Type {enum:Status.Canceled}}}}\" While this is safer, it's just too verbose to be worth it for me. I might as well just use a property on the view model if I'm going to do this. Anyway, I'm finding that the design-time check is perfectly adequate for the scenarios I've tried so far.
**Here's the code for `MatchToVisibility`** [ValueConversion(typeof(object), typeof(Visibility))] public class MatchToVisibility : BaseValueConverter { [ConstructorArgument(\"ifTrue\")] public object IfTrue { get; set; } [ConstructorArgument(\"ifFalse\")] public object IfFalse { get; set; } [ConstructorArgument(\"value1\")] public object Value1 { get; set; } [ConstructorArgument(\"value2\")] public object Value2 { get; set; } [ConstructorArgument(\"value3\")] public object Value3 { get; set; } [ConstructorArgument(\"value4\")] public object Value4 { get; set; } [ConstructorArgument(\"value5\")] public object Value5 { get; set; } public MatchToVisibility() { } public MatchToVisibility( object ifTrue, object ifFalse, object value1, object value2 = null, object value3 = null, object value4 = null, object value5 = null) { IfTrue = ifTrue; IfFalse = ifFalse; Value1 = value1; Value2 = value2; Value3 = value3; Value4 = value4; Value5 = value5; } public override object Convert( object value, Type targetType, object parameter, CultureInfo culture) { var ifTrue = IfTrue.ToString().ToEnum<Visibility>(); var ifFalse = IfFalse.ToString().ToEnum<Visibility>(); var values = new[] { Value1, Value2, Value3, Value4, Value5 }; var valueStrings = values.Cast<string>(); bool isMatch; if (value != null && value.GetType().IsEnum) { var valueEnums = valueStrings.Select(vs => vs == null ? null : Enum.Parse(value.GetType(), vs)); isMatch = valueEnums.ToList().Contains(value); } else isMatch = valueStrings.Contains(value.ToString()); return isMatch ? ifTrue : ifFalse; } } **Here's the code for `BaseValueConverter`** // this is how the markup extension capability gets wired up public abstract class BaseValueConverter : MarkupExtension, IValueConverter { public override object ProvideValue(IServiceProvider serviceProvider) { return this; } public abstract object Convert( object value, Type targetType, object parameter, CultureInfo culture); public virtual object ConvertBack( object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } **Here's the ToEnum extension method** public static TEnum ToEnum<TEnum>(this string text) { return (TEnum)Enum.Parse(typeof(TEnum), text); } * * * **Update 2** Since I posted this question, I've come across an open-source project that uses \"IL weaving\" to inject NotifyPropertyChanged code for properties and dependent properties. This makes implementing Josh Smith's vision of the view model as a \"value converter on steroids\" an absolute breeze. You can just use \"Auto-Implemented Properties\", and the weaver will do the rest. **Example:** If I enter this code: public string GivenName { get; set; } public string FamilyName { get; set; } public string FullName { get { return string.Format(\"{0} {1}\", GivenName, FamilyName); } } ...this is what gets compiled: string givenName; public string GivenName { get { return givenName; } set { if (value != givenName) { givenName = value; OnPropertyChanged(\"GivenName\"); OnPropertyChanged(\"FullName\"); } } } string familyName; public string FamilyName { get { return familyName; } set { if (value != familyName) { familyName = value; OnPropertyChanged(\"FamilyName\"); OnPropertyChanged(\"FullName\"); } } } public string FullName { get { return string.Format(\"{0} {1}\", GivenName, FamilyName); } } That's a huge savings in the amount of code you have to type, read, scroll past, etc. More importantly, though, it saves you from having to figure out what your dependencies are.
You can add new \"property gets\" like `FullName` without having to painstakingly go up the chain of dependencies to add in `RaisePropertyChanged()` calls. What is this open-source project called? The original version is called \"NotifyPropertyWeaver\", but the owner (Simon Potter) has since created a platform called \"Fody\" for hosting a whole series of IL weavers. The equivalent of NotifyPropertyWeaver under this new platform is called PropertyChanged.Fody. * **Fody setup instructions:** http://code.google.com/p/fody/wiki/SampleUsage (replace \"Virtuosity\" with \"PropertyChanged\") * **PropertyChanged.Fody project site:** http://code.google.com/p/propertychanged/ If you'd prefer to go with NotifyPropertyWeaver (which a little simpler to install, but won't necessarily be updated in the future beyond bug fixes), here is the project site: http://code.google.com/p/notifypropertyweaver/ Either way, these IL weaver solutions completely change the calculus in the debate between view model on steroids vs. value converters."} {"_id": "107930", "title": "Comparing Visual Foxpro with .NET", "text": "I used Foxpro 2.6 a decade ago. In my present company, few new projects are being developed in Visual Foxpro 9.0 and few projects are on .NET platform. From time-to-time I notice how Visual Foxpro is more powerful as compared to .NET when comparing dependency issues. Here are some of the advantages of VFP I noticed: 1. There is no hard and fast rule to use another full-fledged RDBMS. Even though the latest version of Visual Foxpro can be connected to SQL Server, but I have seen very large projects still, happily, using the old DBF. With .NET you have to connect to another RDBMS and it just adds to your Setup dependency. 2. Foxpro has built-in rich reporting capability since the time of DOS era, whereas .NET developers have to depend on Crystal Reports, SSRS or other third-party variants. 3. Visual Foxpro 9.0 has .NET capabilities as well. However, it is not dependent on .NET and therefore your final executable doesn't need .NET framework installed, which is again a dependency. 4. I have seen very large databases of Foxpro in few major government organizations, successfully running from a long time. These organizations don't feel a need to shift to SQL Server, not because of migration risks, but because they are happy with VFP. I want to step into VFP world but I read somewhere that Microsoft is stopping further developments of VFP and also stopping support to VFP 9.0 from 2015 onwards. What you guys suggest?"} {"_id": "101717", "title": "Scrum re-estimation of stories", "text": "Every day, after the stand-up, my team and I update our estimates for each story. I have a feeling that there is something wrong with the way we do it, so I need your help. This is how we do: Story A estimate: 24 hours (8 hours per day - we use \"ideal days\" as the measure) * Day N: developer starts working on Story A in the morning (8 hours of work completed by the end of the day) * Day N+1: Story A re-estimation = 16 hours (one workday taken out of Story A, from day N) * Day N+2: Story A re-estimation = 8 hours (one workday taken out of Story A, from day N+1) * Day N+3: Story A should be done by now. But it's not. The developer reckons it will take another 3 hours to finish. We update the story on the whiteboard and burndown accordingly. * Day N+4: Story A took the whole day to be finished instead of only 3 hours! Now it's done. The difference, 5 hours, is completely unaccounted for in our planning. 
How should we be re-estimating our stories each day?"} {"_id": "112705", "title": "Javascript - is this a grey area for anyone else?", "text": "I have a firm understanding of HTML, CSS, PHP, MySQL (and to some extent apache/linux) and find that one of the things missing from my 'web development knowledge base' is javascript - creating richer user interfaces. I'd like to learn Javascript before I look at any frameworks (I've used light javascript/jquery before, but that's beside the point). Can anyone recommend a good book or online documentation that goes from 'absolute beginner' to 'expert' for javascript? I seem to be finding too many 'display the time' and 'hello world' tutorials..."} {"_id": "117540", "title": "Conceptually speaking, how would I pull facebook wall posts from 100k users?", "text": "Not looking for any code snippets here, so here goes: Let's say 100,000 users on my web app have authorized my application to connect to facebook (with or without `offline_access`). I want to build a sort of \"pull\" mechanism, whereby when a user posts to their facebook wall, I can grab it from the Graph API and store it locally on my server. I would assume that this would require a call to the Graph API every _n_ minutes to pull their latest wall posts. Ideally, this would be done for Twitter as well. I know LinkedIn does this, but I am not sure of the exact details. Question 1: I'd need to make an individual Graph API call for each user, right? Question 2: If I suddenly bombard the Graph API with 100k Graph API calls, wouldn't I run head-first into the rate limiter? Question 2b: If not, what if I had a million users? Surely..."} {"_id": "220659", "title": "How should I draw the (special) is predicate, which is used for arithmetic, in a Prolog search tree?", "text": "I normally construct my search tree by following the common convention: * Place Queries or Goals in need of unification inside node boxes. * Write down decision points on the edges, where Prolog has assigned an element to a variable. * Leaf nodes which are fully satisfied will be an empty box; they represent a solution. * Leaf nodes which cannot be satisfied and represent failed attempts will have the unfulfilled goal in their box; to make them even clearer, I also follow the convention of marking them by placing a cross symbol below them. The above way has the nice side effect that it's easy to see the decision points. But what about creating a search tree for something like: accLen([_|T],A,L) :- Anew is A+1, accLen(T,Anew,L). accLen([],A,A). How should the assignment of Anew be represented in the search tree? It's not a decision point; the code has no option other than assigning it 1 plus the current value of A. Do you still place it on the edge, but underline it or something?"} {"_id": "220658", "title": "How can we calculate Big-O complexity in Functional & Reactive Programming", "text": "I started learning functional programming, and I am trying to compare different algorithms written in imperative, functional, and parallel styles, using Collections and Lambda expressions. To make my question clear, I will avoid sharing the long algorithms that I am working on.
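(A note up front about the LINQ version shown next: it calls a `FibonacciTerms` helper that isn't defined in the snippet. What follows is only a guess at a minimal lazy generator with that shape; the name and the two seed arguments come from the call site, everything else is an assumption.)
// Assumed helper sketch, not part of the original question; requires System.Collections.Generic.
private static IEnumerable<int> FibonacciTerms(int first, int second)
{
    yield return first;  // first seed (1 at the call site)
    yield return second; // second seed (2 at the call site)
    while (true)
    {
        int next = first + second; // each later term is the sum of the previous two
        yield return next;
        first = second;
        second = next;
    }
}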
I will take as an example the well-known modified Fibonacci algorithm (Euler Problem 2): Here is the problem: http://projecteuler.net/problem=2 //Iterative way: int result=2; int first=1; int second=2; int i=2; while (i < 4000000) { i = first + second; if (i % 2 == 0) { result += i; } first = second; second = i; } Console.WriteLine(result); // Functional way (LINQ): private static int SumEvenFibonacciTerms() { return FibonacciTerms(1, 2) .TakeWhile(x => (x <= 4000000)).Where(x => x % 2 == 0) .Sum(); } //Asynchronous way (F#) let rec fib x = if x <= 2 then 1 else fib(x-1) + fib(x-2) let fibs = Async.Parallel [ for i in 0..40 -> async { return fib(i) } ] |> Async.RunSynchronously How can I calculate the Big-O of algorithms that are written using lambda expressions (the complexity of functions like filter, where, reduceLeft, reduceRight, ...)? The functional and asynchronous algorithms are both written in the same way; should we consider their complexity the same, knowing that there is a difference in execution time?"} {"_id": "220656", "title": "How to get requirements from Non-technical client?", "text": "I've been working as a software engineer for the last 4 years. Throughout my small experience I worked for IT organizations, where we had the separate roles of project manager, analyst, quality engineer, software engineer and many more. Also we applied complex agile processes working in large/small teams. I personally participated in every step of the software development life cycle. A few months ago I got a good opportunity at a corporate company, and I'm serving there right now. I'm assigned a few internal projects which are company goals. The problem is, I'm spending most of the time getting requirements, and my boss is not willing to give me input. He puts the full blame on me for not delivering the tasks. I'm unable to explain that without requirements I can't work, and I can't even commit to any deadline. He often says to me, \"please make my dreams come true.\" How can I get this divine requirement? It took around 1 month just to give me rights on my machine to install an IDE and get an internet connection. In short, they did not give me any requirements or documents, the targets are still unclear, and the team is not contributing. I'm not trying to leave this company; I consider it a challenge to get things right. As a programmer, how can I handle this issue?"} {"_id": "166791", "title": "From Imperative to Functional Programming", "text": "As an Electronic Engineer, my programming experience started with Assembly and continued with PL/M, C, C++, Delphi, Java, C# among others (imperative programming is in my blood). I'm interested in adding functional programming skills to my previous knowledge, but all I've seen until now seems very obfuscated and esoteric. Can you please answer me these questions? 1) What is the mainstream functional programming language today (I don't want to get lost studying a plethora of FP languages just because language X has feature Y)? 2) What was the first FP language (the Fortran of functional programming if you want)? 3) Finally, when talking about pure vs. non-pure FP, what are the mainstream languages of each category? Thank you in advance"} {"_id": "166795", "title": "schedule compliance and keeping technical supports and resolving issues", "text": "I am the entrepreneur of a small software development company.
The flagship product was developed by myself, and my company has grown to 14 people. One point of pride is that we've never had to take investment or loans. The core development team is 5 people: 3 are seniors and 2 are juniors. After the first release, we've received many issues from our customers. Most of them are bug reports, customization needs, usage questions and upgrade requests. Issues from customers come in many times every day, so they take up varying amounts of our developers' time. Because our product is a software development kit (SDK), most questions can be answered only by our developers. And, for resolving bug issues, developers must be involved. Estimating the time to resolve a bug is hard; I fully understand that. However, our developers insist they cannot set any due date for each project, because they are busy doing technical support and bug fixes for customer issues every day. Of course, they never work overtime. I suggested an idea to them: divide the team into two parts, one focusing on development by milestones, the other doing technical support and bug fixes without set due dates. Then we could announce the release plan officially. After each release is finished, the two parts exchange roles for the next milestone. However, they say \"NO, because it is impossible to share knowledge and design documents fully.\" They still say they cannot set the release date, and they ask me to alter the due date flexibly. They do not fix the due date of each milestone. Fortunately, our company has no loans or investors, so we are not choked. But I think it is a bad idea to keep this situation. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release date. Companies consume limited time and money. If a flexible due date without limit is acceptable, would they accept a flexible salary day? What is the root cause of our problem? All that I want is to fix and precisely achieve the due date of each milestone without losing frequent technical support. I think there must be a solution for this situation. Please answer me. Thanks in advance. PS. Our tools and ways of project management are Trello, a Mantis-like issue tracker, shared calendar software and scrum (cards collected into a series of 'small and high completeness' projects)."} {"_id": "166797", "title": "Should I Use WCF For My Purpose?", "text": "I wrote two programs, one for the server and another for the client, that can connect to each other by their IP addresses (socket programming). Now I want to modify them so that if, for example, the client or server wrote something in a `textbox`, the other user would be notified of that. My programs are Windows Forms applications. For this purpose, should I use WCF?"} {"_id": "166374", "title": "is Java free for mobile development?", "text": "Q1. I would like to know if it's free for a developer (I mean, if I have to pay no royalties to Sun/Oracle) to develop (Android) mobile apps in Java? After reading this snippet about the Java \"field of use\" terms, I'm getting the impression that Java is **not** free for mobile development, is that right? > ..\"General Purpose Desktop Computers and Servers\" means computers, including > desktop and laptop computers, or servers, used for general computing > functions under end user control (such as but not specifically limited to > email, general purpose Internet browsing, and office suite productivity > tools).
The use of Software in systems and solutions that provide dedicated > functionality (other than as mentioned above) or designed for use in > embedded or function-specific software applications, for example but not > limited to: Software embedded in or bundled with industrial control systems, > **wireless mobile telephones, wireless handheld devices,** netbooks, kiosks, > TV/STB, Blu-ray Disc devices, telematics and network control switching > equipment, printers and storage management systems, and other related > systems are excluded from this definition and not licensed under this > Agreement... and from http://www.excelsiorjet.com/embedded/ > Notice : The Java SE Embedded technology license currently prohibits the use > of Java SE in cell phones. Q2. How come this plethora of Android Java developers isn't paying Sun/Oracle a dime?"} {"_id": "36241", "title": "Your experience with haxe and other languages that compile to PHP?", "text": "I would like to hear opinions from people who have used a language that compiles to PHP. One such language I know is Haxe. Other ones I've read about are Kira and Pharen. How well do these languages integrate with PHP? Is it relatively easy to write a plug-in for a PHP CMS in them? How mature are their implementations and tools? Would you recommend them to someone who has to use a PHP CMS but hates PHP?"} {"_id": "227921", "title": "How to store currency ranges in a Postgres table?", "text": "I am using Postgres 9.3. I have to store range data like `<$250K, $250K-$4M,` etc., which we will display in a dropdown. Now I am making a table for all possible options that can be configured from an admin interface. Should I use separate columns for min, max, currency_type (dollar, euro) and a column to store K, M etc., or is there a better way to do that?"} {"_id": "227922", "title": "Is saving SQL statements in a table for executing later a bad idea?", "text": "Is saving SQL statements in a MySQL table for executing later a bad idea? The SQL statements will be ready for execution, i.e. there will be no parameters to swap or anything, for example `DELETE FROM users WHERE id=1`. I guess I'm being lazy, but I thought of this idea because I'm working on a project that will require quite a few cron jobs that will execute SQL statements periodically."} {"_id": "25773", "title": "How big sites scale up and optimize to massive traffic?", "text": "How do sites like Facebook and Twitter optimize their sites for massive traffic? Aside from spending big bucks on getting the best servers, what can be optimized in your code to accommodate massive traffic? I've read about caching your pages to static HTML, but that's impractical for social networking sites where the pages are constantly updated."} {"_id": "186889", "title": "Why Was Python Written with the GIL?", "text": "The global interpreter lock (GIL) seems to be often cited as a major reason why threading and the like is a touch tricky in Python - which raises the question \"Why was that done in the first place?\" Being Not A Programmer, I've got no clue why that might be - what was the logic behind putting in the GIL?"} {"_id": "253515", "title": "Reflective discovery of an inner class in an API", "text": "Let me ask you, as this has bothered me for quite a while but appears to be subjectively the best solution for my problem, **if reflective discovery of an inner class for API purposes is that bad an idea**? First, let me explain what I mean by saying \"reflective discovery\" and all that stuff.
I am sketching an API for a Java database system that'll be centered around block-based entities (don't ask me what that means - that's a long story), and those _entities_ can be read and returned to the Java code as objects subclassed from the `Entity` class. I have an `Entity.Factory` class that, by means of fluent interfaces, takes a `Class` argument and then uses an instance of `Section.Builder`, `Property.Builder`, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought that the closest solution to the problem that'd satisfy my design needs would be to discover, using reflection, all inner classes of `Entity` classes and find one that's called `Builder`. Looking for some expert insight :) And if I missed some important design details (which could happen as I tried to make this question as concise as possible), just tell me and I'll add them."} {"_id": "186883", "title": "Is it considered poor practice to include a bug number in a method name for a temporary workaround?", "text": "My coworker, who is a senior guy, is blocking me on a code review because he wants me to name a method 'PerformSqlClient216147Workaround' because it's a workaround for some defect ###. Now, my method name proposal is something like PerformRightExpressionCast, which tends to describe what the method actually does. His arguments go along the lines of: \"Well this method is used only as a workaround for this case, and nowhere else.\" Would including the bug number inside of the method name for a temporary workaround be considered bad practice?"} {"_id": "25779", "title": "Social networking sites static caching", "text": "In sites like facebook or orkut or friendster, what's displayed at the bottom instead of the main web address is something like static.ak.fbcdn.net or profile.ak.fbcdn.net? Are these different servers, or are they just used to misguide hackers?"} {"_id": "253801", "title": "Allocating memory inside a function and returning it back", "text": "I want to pass a pointer to my function and allocate the memory to which this pointer points. I've read in other posts that I should pass a double pointer to this function, and I did so, but I keep getting a segmentation fault: #include <iostream> #include <cstdlib> using namespace std; void allocate(unsigned char** t) { *t = (unsigned char*)malloc(3*sizeof(unsigned char)); if(*t == NULL) cout << \"Allocation failed\" << endl; } > Usually, create has fallen into the POST camp, because of the idea of > \"appending to a collection.\" It's become the way to append a resource to a list of resources. I don't quite understand the reasoning behind the idea of \" _appending to a collection_ \" and why this idea prefers POST for create. Namely, if we _create 10 resources via PUT_, then the server will contain a _collection of 10 resources_, and if we then create another resource, then the server will append this resource to that collection (which will now contain _11 resources_)?! Uh, this is kinda confusing, thank you"} {"_id": "115967", "title": "The usual metadata objects or: How to move a typical ExtJS App to jQuery, and: What's missing in the middle?", "text": "I have entered into an existing project that is all about maintaining nested data structures. You have companies which are assigned to accounts, and contacts and notes and... basically the usual bunch of 1:1, 1..n, n:m database relations, stored in mysql or postgres, wrapped by Doctrine.
Of course, every node comes with the usual set of metadata that needs validation in terms of type (number, text, ...) and semantics (email, url, 'enums' like status, type, currency). More specifically, what I stepped into I would describe (also from previous experience) as typical ExtJS hell. **Does this sound familiar to anyone?** * a typical single-URL application: no history, no meaningful specific bookmarks, oh, and don't you dare press the back button... * tabbed browsing is impossible (the not-so-rare use case of looking at Customer A while entering something meaningful into Account or Customer B side-by-side) * you have that typical navigation tree on the left; the stuff on the right is called in by Ajax. Data, but also a lot of executable JavaScript code. * a strong \"MS Office feel\": CSS sprites are far, far away, simple pulldowns and multiple-choice buttons need effort to be 'dressed down' to be usable. * not to mention all the clobbered div-s as opposed to a few clean, semantically meaningful tags * a hell of javascript (some of it quite awkwardly created in PHP) to suit hard-to-debug controllers and stores and plenty of redundant code, to ensure the same thing over and over again for the various data fields. Sure, there is an OOP class hierarchy for those Ext.[ux.]grids and windows, but of limited help. From previous experience I would LOVE to switch this to a straight REST-ful API, where stuff is actually done between page requests, and the URL gives it to you just as it should look: foo.com/accounts/16324/contact/create foo.com/customer/search/state=California And make the whole thing jQuery-based, where I gain more control. That brings me to my general question: **Do I miss a middle layer?** Trouble: I wonder how to fill certain gaps (that other 'converts' are likely to encounter, too, hence I dare to ask such a broad question): 1. _what plugins would be good for (editable) tables? Including resizable, sortable columns? I know about jQueryUI and jQueryTools but I don't think that alone fits the bill. I need a really good table/grid thing, probably with backend routines that fit._ 2. *Generally speaking, I feel that I miss a middle layer between ORM and UI. If I tinker with data structures, can I bring in some automatism like \"Here's my data structure, including types and validation rules, (oh, and the existing/default data) now build the form from it\"? Since this might vary from customer to customer, that is another reason not to completely hardcode this. Also the Ext-JS-ish store functionality of only sending back fields that truly changed would be worthwhile. As is recognizing dirty-ness (aka the need to confirm 'save/cancel' on dialog closing).* 3. **Very** valuable would be generic mechanisms for \"tentative sub-dialogs\", e.g. as the Windows screensaver settings do: going into a menu, from there to a submenu, saying OK on the sub-dialog, then cancel on the main one ==> cancels the whole thing, nothing stored. In other words: storing a few hierarchical data sets flexibly in the session would be good. And pushing them into the DB (by repetitive, generic means) when I get the actual \"OK\". Any good pointers for the table-editing part in the frontend and/or the data-structure middle part? Thank you, Danke, Merci, Mille grazie, Xie xie! Fronker"} {"_id": "98088", "title": "certified scrum master", "text": "Hello, I'm learning about Scrum and noticed that there is a Certified Scrum Master course. Is this worth it? Or can I learn all of that from books and online, to implement it in my work?
thanks!!"} {"_id": "242912", "title": "Infinite loop with a singleton - does this type of issue have a name?", "text": "I ran into an unusual error while working on my project. To better learn from and remember it, I'd like to know if this type of error has a name or some definition. (The error itself, `OutOfMemoryError`, isn't unusual; I'm talking about what led to this error). My (simplified) situation was this: I had a class `Singleton`: class Singleton extends ClassA{ private static Singleton instance; private Singleton(){ // implicitly calls super() } public static Singleton getInstance(){ if (instance==null) instance = new Singleton(); return instance; } } And its superclass `ClassA`: abstract class ClassA{ public ClassA(){ ClassB objectB = new ClassB(); // .. some logic .. } } And a class `ClassB` that uses `Singleton` in its constructor: class ClassB{ public ClassB(){ Singleton singleton = Singleton.getInstance(); // .. some logic .. } } What happens is the following: 1- `ClassB` is instantiated somewhere. Its constructor is invoked and calls `Singleton.getInstance()`. 2- `Singleton.getInstance()` sees that `instance == null` and executes `instance = new Singleton()`. 3- The constructor of `Singleton` is run. It's empty, but it implicitly calls the superclass' (`ClassA`) constructor. 4- The superclass of `Singleton` instantiates a `ClassB` object: `new ClassB()`. 5- `ClassB`'s constructor is invoked and calls `Singleton.getInstance()`... Since the instantiation of `instance` in `Singleton` never reached its finish, `instance == null` still returns `true`, and the cycle never ends. This results in an infinite loop, which finally ended in an `OutOfMemoryError`. So my question is: **is this kind of infinite-loop-with-singletons error a common issue?** **Any ideas how I can avoid it in the future?**"} {"_id": "3645", "title": "I code rarely. Is this a bad sign?", "text": "I am a computer science student learning Java nowadays. I want to be a good developer/programmer. I like reading books. I search on the internet for related topics and study them. I refer to StackOverflow and other good programming websites daily, but I code rarely. Is this a bad sign? If yes, then what should I do to overcome this problem?"} {"_id": "98083", "title": "Can't I just use all static methods?", "text": "What's the difference between the two UpdateSubject methods below? I felt using static methods is better if you just want to operate on the entities. In which situations should I go with non-static methods? public class Subject { public int Id {get; set;} public string Name { get; set; } public static bool UpdateSubject(Subject subject) { //Do something and return result return true; } public bool UpdateSubject() { //Do something on 'this' and return result return true; } } I know I will be getting many kicks from the community for this really annoying question, but I could not stop myself asking it. Does this become impractical when dealing with inheritance? Update: It's happening at our workplace now. We are working on a 6-month asp.net web application with 5 developers. Our architect decided we should use all static methods for all APIs. His reasoning is that static methods are lightweight and they benefit web applications by keeping server load down."} {"_id": "209218", "title": "What are the advantages/disadvantages of using objects as parameters to other object methods?", "text": "I'm mostly thinking of PHP here, but if you want to refer to other languages that's fine.
An object has a method that requires data contained in an object of another class. Should I extract that data from the other object into a variable and then send just that data to the requesting object? Or should I pass the entire data-containing object into the data-requiring object, and then have the requesting object perform the data extraction operations? I'm asking because I'm working on a big project that I've written 100% of the code for, and some of my early code takes string or numeric arguments, but my later code seems to be leaning toward passing around objects. As I bump into some old code, I'm wondering if I should convert it to object parameters."} {"_id": "153864", "title": "Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?", "text": "Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code: var x = DoSomethingWith(y); How do I determine what the call to _DoSomethingWith(y)_ will actually do? Will it mutate _y_, or will it return a copy of _y_? Does it depend on global or local state, or is it only dependent on _y_? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. **My question is this:** Do there exist any languages today that make symbolic distinctions between these types of scenarios, and place compile-time constraints on what code you can actually write? (There is of course _some_ support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)"} {"_id": "209216", "title": "What do you do to estimates for agile stories where developers are pair programming?", "text": "If it was a 2-point story for one person, would you double it if people are pairing? Pairing isn't always necessarily done for 100% of a dev task, so doubling the story points seems wrong. And it might not be obvious how much of the task will require pairing until the end, so you wouldn't know what the story's points should be until you've finished - and it seems strange to change an estimate midway through a sprint. However, if velocity is going to be an accurate representation of how much work we can get through during a sprint, it seems right to change the estimates if more than one person is working on a task. But also, this seems like it adds a lot of admin overhead.
Thoughts?"} {"_id": "26080", "title": "If someone offers you an unverified statement regarding software development practices, do you respond with \"citation needed\"?", "text": "Recently I attended a lecture given by Greg Wilson (Chief Scientist of Software Carpentry). From the abstract: > The idea that claims about software development practices should be based on > evidence is still foreign to software developers, but this is finally > starting to change: any academic who claims that a particular tool or > practice makes software development faster, cheaper, or more reliable is now > expected to back up that claim with some sort of empirical study. Overall, the lecture was very informative and left me thinking quite deeply about my approach to development. In particular, I now find myself looking for citations to back up a lot of statements. Previously, I had slipped into the habit of simply repeating offered truths, with perhaps a mental note to go check up on it later. Putting it bluntly, **I was being gullible**. Here's an example taken from the lecture: > \"If more than 25% of the code needs refactoring, it's quicker to rewrite > it\". Sounds plausible, but is it true? Where's the study backing this up? Is it true for all languages? And so on. OK, it's quite possible to take this to an extreme and not believe anything by anyone unless you have derived it yourself from first principles. That way lies madness (or maybe mathematics ;-) ). But, if someone comes up to you with a statement along the lines of \"Hey, by doing this in [pick language of moment] we'll be able to boost productivity by [pick multiple of 10]%\" are you inclined to just accept it, or are you going to ask for proven evidence? If it's the latter (and I hope it is) then 1. where would you go to find this evidence? 2. how stringent would you be? In short, if someone offers you an unverified statement, will you respond with \"citation needed\"?"} {"_id": "204623", "title": "How to implement a message queue over Redis?", "text": "### Why Redis for queuing? I'm under the impression that Redis makes a good candidate for implementing a queueing system. Up until this point we've been using our MySQL database with polling, or RabbitMQ. With RabbitMQ we've had many problems: the client libraries are very poor and buggy, and we'd rather not invest too many developer-hours into fixing them; we've also had a few problems with the server management console, etc. And, for the time being at least, we're not grasping for milliseconds or seriously pushing performance, so as long as a system has an architecture that supports a queue intelligently we are probably in good shape. Okay, so that's the background. Essentially I have a very classic, simple queue model - several producers producing work and several consumers consuming work, and both producers and consumers need to be able to scale intelligently. It turns out a naive `PUBSUB` doesn't work, since I don't want _all_ subscribers to consume work; I just want _one_ subscriber to receive the work. At first pass, it looks to me like `BRPOPLPUSH` is an intelligent design. ### Can we use BRPOPLPUSH? The basic design with `BRPOPLPUSH` is that you have one work queue and a progress queue. When a consumer receives work it atomically pushes the item into the progress queue, and when it completes the work it `LREM`'s it.
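(To make that design concrete, here is a minimal worker-loop sketch. It uses C# with StackExchange.Redis purely for illustration; that client deliberately omits blocking commands because of its multiplexed connection, so the sketch polls with the non-blocking RPOPLPUSH, exposed as `ListRightPopLeftPush`, instead of `BRPOPLPUSH`. The key names and `ProcessJob` are made-up placeholders, not anything from the question.)
// Worker-loop sketch: move one item work -> progress, process it, then LREM it from progress.
using System.Threading;
using StackExchange.Redis;

class Worker
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect(\"localhost\");
        IDatabase db = redis.GetDatabase();
        while (true)
        {
            // Atomically pop from the work queue and push onto the progress queue.
            RedisValue job = db.ListRightPopLeftPush(\"queue:work\", \"queue:progress\");
            if (job.IsNull)
            {
                Thread.Sleep(100); // queue empty: back off briefly instead of blocking
                continue;
            }
            ProcessJob(job); // placeholder for the actual work
            db.ListRemove(\"queue:progress\", job); // done: remove it from the progress queue
        }
    }

    static void ProcessJob(RedisValue job) { /* placeholder */ }
}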
This prevents blackholing of work if clients die and makes monitoring pretty effortless - for instance we can tell if there is a problem causing consumers to take a long time to perform tasks, in addition to telling if there is a large volume of tasks. It ensures * work is delivered to exactly one consumer * work winds up in a progress queue, so it can't blackhole if a consumer dies ### The drawbacks * It seems rather strange to me that the best design I've found doesn't actually use `PUBSUB`, since this seems to be what most blog posts about queuing over Redis focus on. So I feel like I'm missing something obvious. The only way I see to use `PUBSUB` without consuming tasks twice is to simply push a notification that work has arrived, which consumers can then non-blockingly `RPOPLPUSH`. * It's impossible to request more than one work item at a time, which seems to be a performance problem. Not a huge one for our situation, but it rather obviously says this operation was not designed for high throughput or for this situation. * In short: _am I missing anything stupid?_ Also adding the node.js tag, because that's the language I'm mostly dealing with. Node may offer some simplifications in implementing, given its single-threaded and nonblocking nature, but furthermore I'm using the node-redis library and solutions should or can be sensitive to its strengths and weaknesses as well."} {"_id": "165844", "title": "Framework for Everything - Where to begin? [Longer post]", "text": "Back story of this question, feel free to skip down for the specific question * * * Hello, I've been very interested in the idea of abstract programming the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. I've undertaken some attempts at this that have taken upwards of a year, while getting close, never releasing it beyond my compiler. This has been something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, \"Yes of course you noob! You can't account for everything!\" To which I have to reply, \"Why not?\" To give you some background into what I'm talking about, this all started with doing maybe a shade of gray hat SEO software. I found myself constantly having to create similar, but slightly different sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. \"How many times am I going to have to write this multi-threaded class?\" is something I found myself asking a lot. Sure, I could create a class library, and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut, mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, and I could version stuff out with Subversion and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method, or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do.
After writing what the perfect software would do in my mind, I came up with this as a list of requirements: * Should be able to use networking * A \"Macro\" or \"Expression system\" which would allow people to do something like: =First(=ParseToList(=GetUrl(\"http://www.google.com?q=helloworld!\"), Template.Google)) * Multithreaded * Able to add UI elements through some type of XML -- People can make their own addons etc. * Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc. This would allow the software to essentially be used for everything. Really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult. So my question: With so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language. I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time."} {"_id": "165846", "title": "Do other developers feel that as they get better, it becomes harder to get jobs?", "text": "When I was starting out, it seemed I had a much better time getting interviews and passing them. But now that I'm more experienced, I'm finding that it's harder and harder to find a job. Do other developers out there feel the same way? I'll give you an example. I did an interview last Wednesday. It was a small start-up with only one other engineer and the CEO. They flew me in from Ohio (they are SF based). When I got there, they had me write them a link shortener, which took me about 10 minutes to write. I was supposed to be there all day working on this. When I finished it early, the interviewer seemed kind of shocked. After that, we were talking, and I asked him what they use to store data. He told me Mongo. I asked why he decided to use Mongo. He then stammered and mumbled his answer, which basically boiled down to \"We're using it because Mongo is the trendy database technology and we don't want to be left out\", which I've found is pretty much the most common reason people use NoSQL these days. The interviewer quickly ended the interview and pretty much shoved me out the door. I was supposed to have lunch with the CEO, but he kicked me out before I had a chance. The interviewer wasn't mean or rude (and neither was I). After I got back to Ohio, I got an email from them saying \"I wasn't a fit\". This sort of thing happens to me all the time. I'm starting to think \"not a fit\" can sometimes mean \"at too high of a skill level for us\". Is this all in my head, or do other experienced developers notice the same thing happening? Back when I used to struggle with coding problems, I would work with the interviewer and it would be a positive thing and I'd get hired. But now I usually blow through the coding part, and the interviewer being left speechless is working against me.
Should I feign struggling with coding problems?"} {"_id": "225514", "title": "Is a command-line (Console) important to learn for an ASP.NET developer?", "text": "I saw many RoR developers use the command line to interact with interfaces and to deploy their web applications. Is that a necessary step to learn for an ASP.NET developer?"} {"_id": "253518", "title": "Are first-class functions a substitute for the Strategy pattern?", "text": "The Strategy design pattern is often regarded as a substitute for first-class functions in languages that lack them. So for example say you wanted to pass functionality into an object. In Java you'd have to pass the object another object which encapsulates the desired behavior. In a language such as Ruby, you'd just pass the functionality itself in the form of an anonymous function. However I was thinking about it and decided that maybe Strategy offers more than a plain anonymous function does. This is because an object can hold state that exists independently of the period when its method runs. However an anonymous function by itself can only hold state that ceases to exist the moment the function finishes execution. In an object-oriented language that supports first-class functions, does the strategy pattern have any advantage over using functions?"} {"_id": "99973", "title": "Measures of Javascript engine performances over time?", "text": "Since the beginning of the Javascript race -- which I would situate around Google Chrome's launch in 2008 -- the improvement in Javascript engine performance has been impressive. The web is crowded with comparisons such as \"Firefox V3.42.7 vs. Safari 3.0-prealpha2\", and the winner of those comparisons changes every few months and differs on each benchmark. But the big picture, independently of who got their new version out last, is that the average speed of current up-to-date browsers has improved a lot over the last years. Yet this long-term improvement is difficult to quantify: * people usually compare the latest version of each browser, and not different versions of one browser * announced performance improvements do not generally pile up: when someone announces V3 as twice as fast as V2, and later V4 as twice as fast as V3, this does not mean that V4 is four times as fast as V2, because they usually mean \"in a favorable case\", and the favorable case in the V3-4 transition is not necessarily the same as in the V2-3 transition * benchmarks themselves evolve over time; what is referred to as \"the Sunspider test\" today is not the same as in 2008, so we cannot compare raw scores over time. Does anyone know of a valuable measurement of Javascript engine performance improvement over the last few years?"} {"_id": "165849", "title": "Cross-platform desktop programming: C++ vs. Python", "text": "Alright, to start off, I have experience as an amateur Obj-C/Cocoa and Ruby w/Rails programmer. These are great, but they aren't really helpful for writing cross-platform applications (hopefully GNUStep will one day be complete enough for the first to be multi-platform, but that day is not today). C++, from what I can gather, is extremely powerful but also a huge, ugly behemoth that can take half a decade or more to master. I've also read that you can very easily not only shoot yourself in the foot, but blow your entire leg off with it since memory management is all manual. Obviously, this is all quite intimidating. Is it correct? Python seems to provide most of the power of C++ and is much easier to pick up, at the cost of speed.
How big is this sacrifice? Is it meaningful or can it be ignored? Which will have me writing fast, stable, highly reliable applications in a reasonable amount of time? Also, is it better to use Qt for your UI, or instead maintain separate, native front ends for each platform? EDIT: For extra clarity, there are two types of applications I want to write: one is an extremely friendly and convenient database frontend and the other, which no doubt will come much later on, is a 3D world editor."} {"_id": "99975", "title": "What are pros and cons of using temporary \"references\"?", "text": "In C and C++ (and I guess other languages that allow taking a \"reference\" to an array element or something similar), when you have an array-like type, accessing individual elements of such an array can be done \"directly\" or via a temporary reference or pointer. Example: // Like this: for(int i=0; i` property (or should it be `IEnumerable`?). Now I wanted to be able to mark a list item as \"to be deleted\", and since I'm eventually going to have a whole bunch of such \"deletable\" types I created an `IDeletable` interface. The `SomeEntity` type in the data layer doesn't need to know about `IDeletable`, so I implemented the interface in a `SomeEntityViewModel : ISomeEntity, IDeletable` class, in the presentation layer (part of me is starting to think it should be defined in the business/domain layer instead). The problem I'm having is that my model is spitting out `ISomeEntity`, which really is `SomeEntity` from the data layer: I need a way to convert from `SomeEntity` to `SomeEntityViewModel`, and given that `SomeEntityViewModel` is declared in the presentation layer, my only option seems to be to have my ViewModel class have a constructor like this: public SomeEntityViewModel(ISomeEntity poco) { // set the properties from the POCO implementation } ...and then in the class that holds the presentation logic, I can bridge between the two implementations by doing something like this: _viewModel.Items = _model.SelectThoseItems() // returns IEnumerable .Select(e => new SomeEntityViewModel(e)) .ToList(); ...and it works, but somehow doesn't feel right... or does it? Given `ISomeEntity`, `SomeEntity`, `SomeEntityViewModel` and `IDeletable`, what would be the \"common\" or \"normal\" way of implementing this? Or do I have it all wrong?"} {"_id": "202503", "title": "Offline version of dynamic pages", "text": "Researching archiving systems like archive.org, I found that the main issue for them is dynamic content. Initial analysis shows that content 'dynamicity' can be assigned to one of the following levels: 1. **Static html content** \- plain old web page which is represented only by html markup with auxiliary css-referred resources (usually images). 2. **Static html powered by javascript** \u2013 same as Level 1, but has javascript code, which only manipulates existing markup (such as expand/collapse). 3. **\u201cOnload\u201d page construction** \u2013 web page with javascript code, which makes certain additional requests during the page load phase. After the load phase, the page content is fully constructed. 4. **Dynamic client-side content** \u2013 UI elements are modified by javascript code on-the-go, as the user traverses the interface. Usually these are modern SPAs (single-page applications, like gmail.com), \u201cendless\u201d lists (the list tail is loaded when the user scrolls down to the list bottom), loading content on demand (smart expanders) and so on. So I assume that Levels 1 and 2 can be archived pretty easily.
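(For scale, a \"Level 1\" grab really is little more than one HTTP fetch plus a file write. A minimal C# sketch follows; the URL and output file name are purely illustrative, and asset download / link rewriting are left out entirely.)
// Level 1 capture sketch: fetch the raw markup and save it verbatim.
using System.IO;
using System.Net.Http;

class StaticArchiver
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // One request retrieves everything a Level 1 page is.
            string html = client.GetStringAsync(\"http://example.com/page\").Result;
            File.WriteAllText(\"page.html\", html);
        }
    }
}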
Could you please suggest how to handle Levels 3 and 4? It looks like it should involve page rendering, but some details would be helpful. _Update_: To clarify the question: ideally the offline version should be fully functional, at least within the site level (ignoring external domain content). Also, if Level 4 is too hard to automate fully, is there an approach involving a human operator who gives hints to the system about the content?"} {"_id": "251400", "title": "Do I need to care about 'Thread safety' in java web applications", "text": "I have a general understanding of what thread safety means, but I don't know much about threads in the context of a Java web application. I am just curious to know whether I need to be concerned about thread safety while working on web applications. Can there be a chance that my web application will cause some problems if I have written code without concern for \"thread safety\"? Do I need to think about whether I am using a thread-safe collection or not?"} {"_id": "251405", "title": "Have I created a Big Ball of Mud?", "text": "I'm working on a WPF application, trying to stay strict in separating View, ViewModel and Model. My application has a few different views in a relatively flat hierarchy. There is one view for editing something we call a substance, and one for classifications, where each substance has a number of classifications. My question is about the ViewModel classes for the substances and classifications; each is about 1000 lines! The thing is, the code mostly consists of get/set properties, with some inherent logic working against my model, as well as some commands and private functions. So I could easily divide the class into a bunch of smaller classes (like one per GroupBox in my view or whatever) with one class owning instances of the other, to get smaller classes. But this doesn't seem meaningful to me; it seems like treating a symptom, like doing it for its own sake. The size of the class clearly depends on the number of fields related to the substances and classifications, and I can't do anything about that. Do you think I'm in trouble here? Is it normal for WPF applications to be so dominated, size-wise, by the ViewModels? How serious should I be about limiting class sizes? I could reduce it significantly by changing the way I write braces, but I think it looks clearer now than with Egyptian brackets. My question isn't really about whether my code is muddy or not (some of it is), but rather whether you can actually say that a ViewModel class is muddy just because it's large. To give some perspective, my substance view has 7 lists/datagrids/similar, 13 buttons, 9 comboboxes, 5 checkboxes, 9 textboxes and 3 radio buttons. All in all, that's about 26 rows per control. There are some cases where there's interaction going on between different controls: setting a value in one control will disable and nullify another one, etc.
Some code samples: public RelayCommand AddSubstanceNumberCommand { get { return new RelayCommand( (x) => { var number = new SubstanceNumber(); number.SubstanceID = Id; number.ModifiedBy = Main.op.LoginName; number.ModifiedDate = DateTime.Now; number.CreatedDate = DateTime.Now; number.Type = Converters.NumberTypeToStringConverter.translation.First(t => t.Value == SelectedNewSubstanceNumberType).Key; number.NumberText = Validator.Normalize(NewSubstanceNumber, SelectedNewSubstanceNumberType); var temp = NewSubstanceNumber.Replace(\"-\", \"\"); number.NumberValue = Int32.Parse(temp.Substring(0, temp.Length - 1)); if (NewSubstanceNumber != null && !SubstanceNumbers.Any(sn => sn.NumberText.Replace(\" \", \"\") == number.NumberText && sn.Type == number.Type)) SubstanceNumbers.Add(number); NewSubstanceNumber = \"\"; }, param => this.CanAddSubstanceNumber); } } public ObservableCollection<SubstanceGroup> SubstanceGroups { get { return Substance.SubstanceGroups; } set { Substance.SubstanceGroups = value; OnPropertyChanged(\"SubstanceGroups\"); } } public RelayCommand OpenAllClassificationsCommand { get { return new RelayCommand( (x) => { // Opening tabs for all classifications that don't already have a tab open var currentSubstanceTab = Main.SelectedTab; var classificationTabsNow = currentSubstanceTab.Tabs.Where(t => t.GetType() == typeof(ClassificationViewModel)); var toOpen = Classifications.Where(c => !classificationTabsNow.Any(ctn => ctn.Id == c.SubstanceClassificationID)); toOpen.ToList().ForEach(o => Main.SelectedTab.Tabs.Add(new ClassificationViewModel(Main, o.SubstanceID, o.SubstanceClassificationID, false, new Services.MessageBoxNotifyUserService()))); }); } } And here's one that's particularly muddy, and repeated: public OccupationalExposureLimitUnit OccupationalExposureLimitUnit5 { get { if (SubstHasOccExpLimits) return Substance.OccupationalExposureLimitUnit5; else if (GroupForLimits != null) return GroupForLimits.OccupationalExposureLimitUnit5; return null; } set { bool hadValue = SubstHasOccExpLimits; if (value.Name_SV == \"\") OccupationalExposureLimitShortTerm2 = null; Substance.OccupationalExposureLimitUnit5 = value; if (!hadValue && SubstHasOccExpLimits) TransferLimitValuesFromGroupToSubstance(); OnPropertyChanged(\"OccupationalExposureLimitUnit5\"); } } This property includes some fallback functionality for when the substance lacks a certain set of values, and some state changes for moving to the substance-has-values state when the user starts editing one of the values. There is certainly room for improvement here, but even without it, the class would be a number of times larger than the 200 rows some recommend for class size."} {"_id": "99026", "title": "Skills required for senior developer who knows C# and WPF", "text": "I have an average level of knowledge in both C# and WPF. I want to move to the next level. What are the skills needed for that, I mean for a senior developer?"} {"_id": "27847", "title": "Is it advisable to disable the Microsoft enforced coding standards in VC# 2010?", "text": "Coming from Eclipse, I've developed my own coding standards which I got used to. In Visual C# 2010 however, it appears that some coding standards that MS recommends are enforced in the default configuration. E.g.: I'm used to writing conditional statements like this: if (somecondition) { return true; } else { return false; } But the braces are forced to a new line in Visual C#.
Is it recommended to use that standard?"} {"_id": "107481", "title": "Which software methodology works when you are in learning phase of a technology?", "text": "In my scenario, we have a team of experienced developers with 3+ years, and they are not co-located. We are in a learning phase in terms of the domain and technology. I want some recommendations with regard to what software methodology we should adopt. I have thought of implementing waterfall, since we have to design upfront and we know all the design issues and technical challenges. Using agile on teams that are not co-located, we aren't sure how to estimate on a per-week basis when we don't have a complete grasp of the technology."} {"_id": "194529", "title": "Vagrant's Users: What are its drawbacks?", "text": "I've been studying http://www.vagrantup.com/ for several days and I'd like feedback from people using it in a professional context or for personal projects. I think I get what positives Vagrant can bring, but I have questions about its drawbacks: * I'm a PHP developer and I work in a small company. I'm wondering if Vagrant could be useful for me and newcomers in my company, since it's forcing a total development environment. I mean, people often prefer to work on their own OS (Linux/Mac/Windows). How do you deal with that? Do you force everyone to use Linux, for example? * Doesn't it add complexity, since you are entirely working on a Virtual Machine? * Don't you fear your VM's main file might get corrupted one day and you wouldn't be able to use it again? I feel nervous about using a VM because of that. Thank you."} {"_id": "65438", "title": "Corporate vs Personal email for Corporate Sponsored OSS", "text": "In the absence of a written policy, what is the preferred email domain (corporate or personal) when contributing to an open source project that your employer sponsors? \\--Edit-- I'm interested in the programmer's preference, assuming the company doesn't care either way."} {"_id": "194524", "title": "Google Blink (new WebKit fork): Meaning of \"Moving DOM into Javascript\"?", "text": "From the Blink Blog: > Finally we\u2019d like to explore even larger ideas like moving the entire > Document Object Model (DOM) into JavaScript. What does this mean? Does it mean WebKit's DOM is currently _not_ coded in JavaScript but in some other language? Does it mean that they want to expose more public accessors to the DOM? Or what?"} {"_id": "99240", "title": "Best JavaScript Coding Structure Using Object Literal?", "text": "Object Literal has no doubt been established in recent years as one of the essential components of best JavaScript coding practices. However, I am not very sure what the best way is to structure my code using Object Literal. It has been suggested before that the Module Pattern might be a good technique to structure your code [1] [2], but criticisms regarding the Module Pattern have begun to surface as people spent more time exploring the extent of the technique. [3] [4] So, my question is: as of summer 2011, what is the acknowledged best way to structure your code utilizing Object Literal? Is it still the Module Pattern, or has some other technique already emerged as a better replacement?"} {"_id": "233158", "title": "Should I use HTTP search", "text": "I am working on a web API and I am curious about the HTTP `SEARCH` verb and how you should use it. My first thought was, well, you could surely use it for a search. But asp.net WebApi doesn't support the `SEARCH` verb.
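(As a hedged aside: Web API can usually be taught to route nonstandard methods via the `AcceptVerbs` attribute, which takes arbitrary method names. The sketch below is illustrative only; the controller, action and response are made up, and it has not been verified against every Web API version.)
// Routing a custom SEARCH method in ASP.NET Web API (sketch).
using System.Web.Http;

public class ProductsController : ApiController
{
    // AcceptVerbs accepts arbitrary method names, not just the standard ones.
    [AcceptVerbs(\"SEARCH\")]
    public IHttpActionResult Search([FromBody] string query)
    {
        // Placeholder: run the search and return the matches.
        return Ok(new[] { \"result1\", \"result2\" });
    }
}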
My question is basically, should I use `SEARCH` or is it not important? PS: I found the `SEARCH` verb in Fiddler2 from Telerik."} {"_id": "233152", "title": "Tablet development for a dedicated system", "text": "I need to make an architectural decision for developing (actually porting) my embedded solution on a tablet. The choice comes down to Ubuntu or Android, so I have some specific questions to help me decide. * On Android, is it possible to develop applications outside of Dalvik, using Python? If yes, can I access the hardware this way, without the APIs provided by the Android SDK? * On Android, can I control the process' `core_affinity` to bind a process to a single core? And can I use `isolcpus` to isolate other processes from that core, making it an (almost) dedicated core for a process? This is possible in regular Linux; I'm not sure if it can be done in Android. * On Ubuntu, how much control over the HW do I have outside the Ubuntu SDK?"} {"_id": "233155", "title": "Functional problems due to localization", "text": "With localization come various UI issues such as untranslated or badly translated strings, clipped strings, incorrectly formatted dates or numbers, wrong sorting order and more. However, are there any non-display-related issues that cause major loss of functionality? One such example might be storing a floating-point number in a DB in comma-separated format, and then parsing it in a culture-invariant dot-separated format. This is likely to crash the application or produce wrong calculations. Being a Test Manager in my organization, I am wondering what other non-display issues you have encountered that are likely to happen in a localized environment. Specific examples are very welcome."} {"_id": "233157", "title": "Can one determine the creation date of an email account?", "text": "Is it possible to determine the creation date of the email supplied with the authentication process flow, or at least determine that the email was/was not created the same day as signup (or specifically after confirmation)? A use case is to flag such accounts for closer scrutiny as part of a risk management system. What general considerations do you think come into play in approaching this problem? I imagine that solutions might be specific to different email providers, but if I could determine this information for the major ones it's a good place to be. Would love specific answers in any language or pseudocode assuming this is solvable within ethical principles. _Disclaimer: I have asked this question on Stackoverflow. Some nice person said that email providers would not allow it for privacy reasons. Completely acceptable, but IMHO I think that the age of an email has little to do with personal privacy. Also there are too many smart people using the stack* forums for me to just lie down and die._"} {"_id": "15159", "title": "Real world T4 usage", "text": "How many of you have ever used (or even know of) the Visual Studio Text Templating Engine? I mean, T4 templates have been around for quite some years now. They were initially included in the Visual Studio 2005 SDK, and they were moved into the Visual Studio 2008 core just before it was released. What I want to know is if this is really used out there. I mean, why hasn't the VS team invested in a decent minimal editor for the templates? It doesn't even have syntax coloring! I know there are a couple of good T4 editors out there, but as they don't come built in, I tend to think that this is an unknown feature for most developers.
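For anyone who hasn't seen one, a minimal `.tt` file looks something like this (the directives are standard T4, the control blocks are plain C#, and the generated class is an invented example):

    <#@ template language=\"C#\" #>
    <#@ output extension=\".cs\" #>
    // Generated file - do not edit by hand.
    public static class GeneratedConstants
    {
    <# for (int i = 0; i < 3; i++) { #>
        public const int Value<#= i #> = <#= i * 10 #>;
    <# } #>
    }
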
Have you ever used T4 templates in any of your projects? I'm really interested in answers that explain how they created their own T4 templates to accelerate, automate, or generate artifacts for their projects."} {"_id": "15158", "title": "Do you ever read language specs?", "text": "While trying to answer questions on the SO site (particularly C++ questions) I find that most of the time one person or another includes a quote from the standard. I myself have hardly read the language spec. Even when I try to read it, I can't get my head around the language used in the specs. So, my question: to become a good developer in a particular language, is it essential to read its spec and know all its dusty corners which invoke undefined/implementation-defined behavior?"} {"_id": "80425", "title": "How do you spec out your project?", "text": "I'm wondering how you spec/plan out your projects. For example, for a website you can use wireframes. How about for desktop software or add-ins such as MS Office add-ins? I want it as simple as possible and don't want to spend all day using some sort of diagram software to sketch it out. Do I simply write down what features I want to build for the software, then filter out what is possible and what is not possible? Any tips are appreciated."} {"_id": "76021", "title": "How do I set up a source code control system for myself?", "text": "I program on my desktop in my office, but also sometimes at home in a different room on my laptop, and even away from home. What I need is a system that automatically or on demand syncs my work from one to the other, as needed. I do not have a home network setup, and although I guess I could do it, that would be a question for another board, perhaps. I've thought about some kind of system that would keep the source code in the cloud, but I don't know enough about this to get started. I need a free or cheap way to do this. I work in .NET (Windows Phone 7, in fact)."} {"_id": "39535", "title": "Argument notation in Python documentation", "text": "I read the Python documentation a lot and sometimes I am baffled by this notation: > os.path.join( **path1[, path2[, ...]]** ) I somehow gather that **[, path[,...]]** is a list, but I would like to know if I am reading it correctly. Bear with me, this is coming from a Java developer who is trying out Python. X)"} {"_id": "112022", "title": "Alternatives to *documents* in the SDLC?", "text": "Since no one prints anything any more, the concept of actual project documents (meaning, a monolithic piece of formatted text) seems like it could be improved on. Problems with documents are things like Word version incompatibilities, endless messing around with templates, the fact that people hate reading them... I'm wondering if there are alternatives, like websites that would help manage the content, imposing some structure, but allowing things like linking to individual bits. But I guess the site would have to be able to spit out a \"document\" for the times it's needed (filing, showing sponsors who ask, etc.) For example, instead of a \"project plan document\", I'm imagining a project planning site where you can work on just the resourcing with relevant people, then you work on scheduling with other relevant people, and work packages with other relevant people. Instead of an SRS document, some site that lets you manage requirements, assigning priorities, phases etc... but that could ultimately generate an actual SRS document if you needed it. Thoughts?
The context here is small, agile teams but with an occasional need to produce documents to demonstrate progress. We're certainly not big enough to warrant anything like Rational or whatever they use these days in big companies."} {"_id": "78172", "title": "Is developing for a niche tablet market worth it?", "text": "As an avid fan of tablets, I finally bought a Nook Color a while back. Since it's based on Android and I have Java experience, I'm intrigued by the possibility of developing for it. I've seen that it has an estimate of a few million units shipped (those stats were fairly recent), and the app market is rather small at this time (which I presume would be an advantage to a new app). What I'm wondering is, what are the advantages and disadvantages of developing for such a small, reading-based market - in terms of app popularity and use? Am I wrong in assuming the small app market is an advantage? Does developing for a niche tablet have any synergy with the larger phone market, in terms of cross-development or promotion?"} {"_id": "88416", "title": "How will we be able to produce websites without using cookies with the new law?", "text": "> **Possible Duplicate:** > How do I comply with the EU Cookie Directive? Under this new EU law we are not allowed to use any cookies without asking first. I, for one, need to use a cookie to track the user who is logged on; without a cookie they can log on more than once and breach the license terms of the software. So I find myself asking: what can we use instead of cookies to perform this task?"} {"_id": "115555", "title": "How can we plan projects realistically while accounting for support issues?", "text": "We're having a problem at work: we're trying to schedule work so that we can assess time scales and get deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December; however, in that time we will have various in-house and external meetings, teleconferences and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is threefold: 1. When we schedule projects the time scales are taken literally. If we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension. 2. Interim work and such means that we lose productive time working on the project. 3. Sometimes clients don't allow the time that we need to do the work, so they'll come to us and say they need a project done by the end of the month even when we think that the work will take two months - not to mention we already have work to be doing. We have a Gantt chart which we are trying to fill in with all the projects we have, and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say \"Well, we scheduled 3 weeks for this project, but we've lost a week here so the deadline has to move back a week.\" It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much \"extra\" time do you schedule into a project to account for non-project work that occurs during a project? How do you deal with support issues and bugs and stuff?
Things you can't account for during planning? **UPDATE** Lots of good answers, thank you."} {"_id": "131983", "title": "In a legacy codebase, how do I quickly find out what is being used and what isn't?", "text": "I've been asked to evaluate what appears to be a substantial legacy codebase, as a precursor to taking a contract maintaining that codebase. This isn't the first time I've been in this situation. In the present instance, the code is for a reasonably high-profile and fairly high-load multiplayer gaming site, supporting at least several thousand players online at once. As many such sites are, this one is a mix of front- and back-end technologies. The site structure, as seen from the inside out, is a mess. There are folders suffixed \"_OLD\" and \"_DELETE\" lying all over the place. Many of the folders appear to serve no purpose, or have very cryptic names. There could be any number of old, unused scripts lying around even in legitimate-looking folders. Not only that, but there are undoubtedly many defunct code sections even in otherwise-operational scripts (a far less pressing concern). This is a handover from the incumbent maintainers, back to the original developers/maintainers of the site. As is understandably typical in these sorts of scenarios, the incumbent wants nothing to do with the handover other than what is contractually and legally required of them to push it off to the newly-elected maintainer. So extracting information on the existing site structure out of the incumbent is simply out of the question. The only approach that comes to mind to get into the codebase is to start at the site root and slowly but surely navigate through linked scripts... and there are likely hundreds in use, and hundreds more that are not. Given that a substantial portion of the site is in Flash, this is even less straightforward since, particularly in older Flash applications, links to other scripts may be embedded in binaries (.FLAs) rather than in text files (.AS/ActionScript). So I am wondering if anyone has better suggestions as to how to approach evaluating the codebase as a whole for maintainability. It would be wonderful if there were some way to look at a graph of access frequency to files on the webserver's OS (to which I have access), as this might offer some insight into which files are most critical, even though it wouldn't be able to eliminate those files that are never used (since some files could be used just once a year)."} {"_id": "88500", "title": "Does anyone work 10-hour shifts as a developer?", "text": "I would like to switch from a 5-day week to a 4-day one, but maintain a 40-hour working week. Would the 10-hour days affect your ability to be productive? I hate our public transit system, so if I could reduce my transportation by 20% I would be happy. If other developers who work 10-hour shifts could be clear about their experiences with it, that would help me. I think my boss is flexible enough that he would be cool with it."} {"_id": "229714", "title": "Using Scrum on small projects where Owner doesn't want to be involved", "text": "Recently I've been reading and learning quite a lot about scrum and I like it a lot. However, I do have a couple of likely scenarios in my head to which I don't know the solution. So let's say that I might want to organize an agile team of (for instance) four web developers (one of them a UI/UX designer). This team would operate on scrum principles.
Initially we would probably be working on projects like landing pages for ordinary people's small businesses, like renting apartments, selling cookies... Such customers simply can't be given the Product Owner role (IMHO), because they usually expect to hire a company, give them the overall project goal with some details, and then expect the job to be done (including a lot of decision making) with as little of their involvement as possible (in their opinion, they have more important things to do). Let's say I'd like to engage myself in a developer/scrum master role (I know that even that is debatable, being a team member and scrum master at once), so I simply shouldn't take the role of the product owner as well. So as for my questions: If I'm my company's business owner, do I simply need to be a product owner as well (do these roles include each other)? Can I employ a salesperson who might take the product owner role? Would it be better if it were an experienced developer instead of a salesperson? Is this even a smart move? Lastly, is there another agile approach that might better suit my position? * * * **EDIT:** Thank you everyone for the good input. I added some comments; any additional info will be greatly appreciated."} {"_id": "109682", "title": "What's the title of someone who presents software for a living?", "text": "I'm wondering what the title is of someone whose main job is to present software/products (basically do presentations), internally or externally. I'm not really talking about salespeople, but more about people who go out there and make presentations as a key part of their job, while still having some involvement with the development. A possible title: Product Evangelist. Is it only the program managers and VPs who go out and present to the \"higher-ups\" or the public?"} {"_id": "34067", "title": "Nested Classes: A useful tool or an encapsulation violation?", "text": "So I'm still on the fence as to whether I should be using these or not. I feel it's an extreme violation of encapsulation; however, I find that I am able to achieve some degree of encapsulation while gaining more flexibility in my code. In previous Java/Swing projects I had used nested classes to some degree. However, now I have moved on to other projects in C# and I am avoiding their use. How do you feel about nested classes?"} {"_id": "131989", "title": "Where is a good place to start to learn about custom caching in .Net", "text": "I'm looking to make some performance enhancements to our site, but I'm not sure exactly where to begin. We have some custom object caching, but I think that we can do better. **Our Business** We aggregate news stories on a news-type web site. We get approximately 500-1000 new stories per week. We have index pages that show various lists of the items and details pages that show the individual stories. **Our Current Use Case: Getting an Individual Story** 1. User makes a request 2. The Data Access Layer (DAL) checks to see if the item is in cache and if the item is fresh (15 minutes). 3. If the item is not in cache or is not fresh, retrieve the item from SQL Server, save it to cache and return it to the user. **Problems with this approach** * The pull nature of caching means that users have to pay the waiting cost every time the cache is refreshed. Once a story is published, it changes infrequently and I think that we should replace the pull model with something better.
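To make the pattern concrete, here's a stripped-down sketch of what the DAL does today (all type and member names are made up, not our actual code):

    using System;
    using System.Collections.Concurrent;

    public class Story { /* story fields elided */ }

    public class StoryRepository
    {
        // Illustrative pull-through cache: on a miss or a stale entry,
        // the requesting user pays for the database round trip.
        private static readonly ConcurrentDictionary<int, Tuple<DateTime, Story>> Cache =
            new ConcurrentDictionary<int, Tuple<DateTime, Story>>();

        public Story GetStory(int id)
        {
            Tuple<DateTime, Story> entry;
            if (Cache.TryGetValue(id, out entry) &&
                DateTime.UtcNow - entry.Item1 < TimeSpan.FromMinutes(15))
            {
                return entry.Item2; // fresh cache hit
            }

            Story story = LoadStoryFromSqlServer(id); // hypothetical DAL call
            Cache[id] = Tuple.Create(DateTime.UtcNow, story);
            return story;
        }

        private Story LoadStoryFromSqlServer(int id)
        {
            // Placeholder for the actual SQL Server query.
            return new Story();
        }
    }
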
**My initial thoughts** * My initial thought is that stories should ALL be stored locally in some type of dictionary. (Cache, or is there another, better way?) If the story is not found, then make a trip to the database, update the local dictionary and send the item back. * Since there may be occasional updates to stories, updating the store should be a process entirely separate from the user. * I watched a video by Brent Ozar, How StackOverflow Scales SQL Server, in which Brent states \"the fastest database query is the one that you don't make\". **Where do I start?** At this point, I don't know exactly what the solution is. Is it caching? Is there a better way of using local storage? Do I use a Dictionary, OrderedDictionary, List? It seems daunting and I'm just looking for some good starting points to learn more about how to do this type of optimization."} {"_id": "224766", "title": "Why can't a function returning by address be an lvalue?", "text": "Why is it not possible for the call to be an lvalue if a function returns by address (while it is possible in the case of a reference)? int* returnByAdress() { int x =20; return &x; } int& returnByReference() { int x =30; return x; } int main() { returnByReference() = 23; //possible int x = 23; returnByAdress() = &x;//not a lValue error *returnByAdress() = 245;//possible return 0; }"} {"_id": "133851", "title": "Exceptions as asserts or as errors?", "text": "I'm a professional C programmer and a hobbyist Obj-C programmer (OS X). Recently I've been tempted to expand into C++, because of its very rich syntax. So far in my coding I haven't dealt much with exceptions. Objective-C has them, but Apple's policy is quite strict: > **Important** You should reserve the use of exceptions for programming or > unexpected runtime errors such as out-of-bounds collection access, attempts > to mutate immutable objects, sending an invalid message, and losing the > connection to the window server. C++ seems to prefer using exceptions more often. For example the Wikipedia example on RAII throws an exception if a file can't be opened. Objective-C would `return nil` with an error sent by an out param. Notably, it seems std::ofstream can be set either way. Here on Programmers I've found several answers either proclaiming to use exceptions instead of error codes or to not use exceptions at all. The former seem more prevalent. I haven't found anyone doing an objective study for C++. It seems to me that since pointers are rare, I'd have to go with internal error flags if I choose to avoid exceptions. Will it be too much bother to handle, or will it perhaps work even better than exceptions? A comparison of both cases would be the best answer. Edit: Though not completely relevant, I probably should clarify what `nil` is. Technically it's the same as `NULL`, but the thing is, it's ok to send a message to `nil`. So you can do something like NSError *err = nil; id obj = [NSFileHandle fileHandleForReadingFromURL:myurl error:&err]; [obj retain]; even if the first call returned `nil`. And as you never do `*obj` in Obj-C, there's no risk of a NULL pointer dereference."} {"_id": "39287", "title": "Preserving version control commit history vs Refactoring and Documentation", "text": "It costs almost nothing to use the commit history maintained by the version control system. However, during a major project refactoring (or reorganization / cleanup) effort, functions and classes and even namespaces will be moved around; sometimes several files will be merged together and other files will be split.
These changes often lead to the loss of the original commit history of a few files. In my personal opinion, maintaining the organization of the project is more important than keeping the source code history. Good project organization allows new features to be added continuously with reasonable effort, while the value of source code history appears to be dubious. Furthermore, with the use of unit testing, regression issues are quickly identified. As long as the latest version continues to satisfy all of the software requirements, do we actually need to preserve the history of the source code? I understand that any _shipped_ source code must be preserved because of the need to provide support to the customers without requiring them to perform a major version upgrade. But aside from this, is there any value in keeping the history of source code commits? Does source code commit history play any role in the communications between team members? What if we abolish the commit history, but instead rely on \"source code\" + \"unit testing code\" for communication? Does the existence of commit history make one complacent about the long-term documentation of important information about the project, such as the major design/requirement changes and the streams of thought that drove those changes?"} {"_id": "184137", "title": "How to write \"SMART\" Objectives as an agile developer?", "text": "Like many corporations, the company I work for is transitioning to a performance review system based on SMART objectives. My team is a high-functioning agile development team employing practices from both Scrum and Extreme Programming. To our great benefit, our employment of agile practices has the full support of immediate and upper management. To accomplish work our team utilizes three-week iterations. Beyond the immediate iteration we have a general plan laid out into quarters. Meaning that what we will have accomplished a few quarters from now is a lot hazier than what we will be accomplishing in the immediate quarter. We certainly have a general idea of where our project is headed, but the keyword here is general. Given our approach to project planning, members of our team and I are finding it difficult to write objectives which are specific, measurable, attainable, relevant, and time-bound. Two existing questions on Programmers.se do a good job of addressing some of our concerns: * What is an example of a good SMART objective for a programmer? * Are SMART goals useful for programmers? However, the questions elicited more general responses than specifics for dealing with SMART goals when working on an agile development team. As an agile developer, how do you write five to seven year-long objectives which are specific, measurable, attainable, relevant, and time-bound?"} {"_id": "199852", "title": "Do domain-specific languages use other programming languages?", "text": "Does a domain-specific language use other languages like C++ or Java, or is it standalone?"} {"_id": "108086", "title": "Is programming a profession not for a person with speech impairment?", "text": "My friend has 15 years of programming experience and a Ph.D. in mathematics. He also has cerebral palsy with a speech impairment. Because of his handicap, he chose to become a software developer after his Ph.D. As far as I can see, he is still an excellent C# developer. Nowadays, however, he has a hard time finding a job because most developer jobs require good communication skills.
Looking at him struggling so much, should I advise him that the software industry is not suitable for him any more? It will be extremely difficult for me to do that to a friend, but I think it would be better than letting him waste his time. What do you think? **Update:** Thanks a lot for your excellent answers. I can see most answers recommend against my advice and I really, really hope you guys are right. In reality, however, he has been rejected in 100 or so phone interviews. That's why I would rather risk being a bad adviser than a politically correct friend."} {"_id": "108083", "title": "Is it an inefficient workflow to write out code and THEN go back and optimize?", "text": "I'm a newer enthusiast programmer. I've been doing small and big projects on and off for the past two or three years. I always hit small snags in my code that add up to big memory hogs or unnecessary lines of code down the road. My philosophy on big projects has always been to just 'get it out of my head' and write all the code into a working product, then go back and reduce the code to a simpler, more optimized form. However, this doesn't always work: when doing database structures, I tend to have to go back many times to add in features that come up spur-of-the-moment. Entire blocks of code have to be erased because of a spontaneous new idea. My question to you is, **is it more efficient to write an entire program and THEN go back and optimize it, or should I optimize code as I write it?**"} {"_id": "68515", "title": "Code licensing question. Client stole my code", "text": "A client I did a project for accidentally ended up with the source code of said project. Stupid, I know. The arrangement was that they would get the product, never the source code. However, obviously, they are now trying to pull a fast one and use the source code to create their own products for other clients. Now I don't want to make a huge deal out of it, but I am contemplating releasing the code as an open source project for anyone to use so that their 'unique' selling point (my code) is moot. What do you guys suggest I do / release the code under if they decide to be *ssholes about this?"} {"_id": "157414", "title": "Open source software with good code documentation to improve design skill", "text": "As I'm trying to get better at designing good software, I'm wondering if there is good (as in well-written) open source software out there with lots of code documentation that aims to explain why this or that design choice was made in the context of that specific problem set. I'm interested in OOP and I don't really care much about the language (PHP / Java / C# are preferred though :)) Example: Problem: X Possible ways to handle this problem: A, B, C. We choose B because... (technical details about the implementation, pros and cons, code comments etc.)"} {"_id": "183640", "title": "How should we draw the release burndown chart?", "text": "I have been in various Agile projects and seen many release burndown chart styles. Most of them were handled manually since somehow all the tools that I have run across don't produce really useful burndown charts. In reality, we may not have all stories estimated while we just keep burning down. Many times, scope is added during the release. Currently, I am dealing with these problems with the burndown chart and have questions about the following things: 1. How do we represent added scope in the release burndown chart? 2.
How do we deal with unestimated stories in the release burndown chart, assuming that there is a high probability that we will do them but we just do not have time to estimate them yet? 3. How do you project the end of the release? The burndown chart below is from the fictional case study in the appendix of Mike Cohn's classic Agile Estimating and Planning book. ![Mike Cohn's burndown](http://i.stack.imgur.com/MXQ5o.png)"} {"_id": "206970", "title": "Which layer does async code belong?", "text": "I am developing an application that consumes data from an external service. The application is being implemented following a typical layered architecture with UI, Presentation, Domain and Data layers. The service client resides within the Data layer. As is typically the case, I do not want the UI to 'lock up' while waiting for calls from the external service to complete. However, implementing the service agents with asynchronous methods results in a complicated model, chaining async methods up the layers so the UI can stay responsive. For example: In the ViewModel: TheRepository.LoadDataAsync().ContinueWith(t => { _data = t.Result; }); In TheRepository: public Task<Data> LoadDataAsync() { return ServiceClient.GetDataFromServiceAsync(); } (Note, this is a significant over-simplification meant to convey what I mean by task chaining.) Because the requirement is a UI requirement (prevent the UI from 'locking'), doesn't it make sense to keep the Domain and Data layers synchronous and leave it up to the Presentation layer to decide when it needs to perform some operation asynchronously? On the other hand, it is so easy and natural to implement the service client/proxy with async methods (VS will auto-generate them if needed) because this is what we think of when we think about the need for async. If the application was pulling data out of a database instead of making a service call, the layers wouldn't be any different and it shouldn't really change how the UI, Presentation or Domain layers have been implemented, right? In this case, we wouldn't think twice about having the data access be synchronous. Would it be a better design to model the Domain synchronously and leave it to the Presentation layer to address performance issues in the UI? Or, in the emerging async world, should we just accept async as the norm and make everything fit this model?"} {"_id": "224493", "title": "What is casting supposed to mean?", "text": "When coding in low-level languages like C, I find that casting sometimes means 'reinterpret these bytes as if they had always been of this other type' and at other times 'convert this value intelligently into this other type'. What is the original meaning of the word, and is there any consistency in when to expect a conversion and when to expect a raw reinterpretation?"} {"_id": "159528", "title": "Learning new programming languages and technologies", "text": "Some time ago I was to learn iOS development at my work. I didn't manage to learn it because I was moved to a more important task using another technology already familiar to me. But in the short time I spent on iOS, I felt that I could do better if I spent a few hours at the workplace during which I could discuss some matters concerning iOS with my colleagues who know it, and the rest of the time working at home. I think many developers would share my opinion and prefer to learn the new technology at home. Is there any accepted practice?
Has any developer had an experience where the company encouraged developers to choose where they feel more comfortable learning new technologies?"} {"_id": "159529", "title": "How to structure a template system using plain PHP?", "text": "I was reading this question over on stackoverflow: http://stackoverflow.com/questions/104516/calling-php-functions-within-heredoc-strings and the accepted answer says to do plain PHP templates like this: template.php:

    <html>
        <head>
            <title><?=$title?></title>
        </head>
        <body>
            <?=getContent()?>
        </body>
    </html>

index.php:

    <?php
    $title = 'Demo Page';
    function getContent() {
        return 'Hello World!';
    }
    include('template.php');
    ?>

To me the above isn't well structured in the sense that template.php depends on variables that are defined in other scripts. And you're using an include() to execute code when you do `include('template.php')` (as opposed to using include() to include a class or a function which isn't immediately executed). I feel like a better approach is to wrap your template inside a function: template.php:

    <?php function template($title, $content) { ?>
    <html>
        <head>
            <title><?=$title?></title>
        </head>
        <body>
            <?=$content?>
        </body>
    </html>
    <?php } ?>

index.php:

    <?php
    include('template.php');
    template('Demo Page', 'Hello World!');
    ?>

Is the second approach better? Is there an even better way to do it?"} {"_id": "180590", "title": "Is Agile a variant of RAD?", "text": "Wikipedia says that Agile is a type of \"RAD\", which I guess is incorrect. From what I know, Agile was developed because RAD itself was not that successful in the '90s (too rigid for changes). Or am I wrong? A reference from the book Radical Project Management (Thomsett): > \"..new development fad such as RAD, Agile, Object oriented...\" CISA Certified Information System auditor: > ..aware of **two alternative** software dev. methods: Agile and Rapid > Application Development Agile management for Software: > Agile methods are mostly derived from lightweight approach of RAD. Software estimation best practices: > The major methods of sw. dev. can be summarized as follows: > 1\\. Waterfall .. > 4\\. RAD > 5\\. Agile The point of this question is: **Is Agile a type of RAD or a standalone development approach?**"} {"_id": "100671", "title": "Making money from developing Open Source Software. How does that work?", "text": "> **Possible Duplicate:** > Making money with Open Source as a developer? I've been hearing recently that more and more developers contribute their efforts to many open-source products. Being a novice in many aspects of software development, I've never thought about myself as being a member of a team that builds a product almost anybody can use for free. But this trend is getting more and more attention, and today it is almost considered that any self-respecting software engineer should have at least one example of participation in an open-source project. It gives you a few more credits in an interview, it's cool, it means something very good, especially if the product is popular and successful. However, it seems that it's not just cool, but somehow a prosperous business today. Not just some students or passionate enthusiasts anymore, but well-known developers quit their jobs and start spending most of their time in that sector, and proprietary companies spend good amounts of money to support them. I understand there are many different types of licensing and stuff. But I simply don't understand: how do those developers get paid? How do the companies make their revenue from it? I don't believe that behind all that is just people's altruistic nature, and that they are just happy to work for virtually nothing. Can you explain, taking any well-known project as an example, its business model? How can somebody participate? What should you know first? How can you monetize your efforts? Is there any \"guide for beginners\" or \"complete manual for idiots\" to start with?"} {"_id": "233470", "title": "Is open-sourcing previously-commercial engines a smart move?", "text": "I saw that the Unreal Engine is going open-source on GitHub (see link1 and link2) and that made me wonder: \"is open-sourcing a previously commercial engine a smart move?\" What's the benefit to Epic Games in open-sourcing their core code in this manner? How can they not be concerned about people stealing parts of it just because they put a copyright and license statement on the code? It seems like there has to be a catch. How does releasing their source code benefit Epic Games?"} {"_id": "108338", "title": "Does TDD's \"Obvious Implementation\" mean code first, test after?", "text": "My friend and I are relatively new to TDD and have a dispute about the \"Obvious Implementation\" technique (from \"TDD By Example\" by Kent Beck).
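For concreteness, a tiny invented C# illustration of the techniques involved (the `Add` example is ours, not from the book; the two implementations are alternatives):

    // The new behavior we want (NUnit-style test):
    [Test]
    public void TwoPlusTwoIsFour()
    {
        Assert.AreEqual(4, Calculator.Add(2, 2));
    }

    // Alternative 1 - \"Fake It ('Til You Make It)\": smallest step that passes.
    public static int Add(int a, int b) { return 4; }

    // Alternative 2 - \"Obvious Implementation\": just type in the real thing.
    public static int Add(int a, int b) { return a + b; }
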
My friend says it means that if the implementation is obvious, you should go ahead and write it - **before** any test for that new behavior. And indeed the book says: > How do you implement simple operations? Just implement them. Also: > Sometimes you are sure you know how to implement an operation. Go ahead. I think what the author means is you should test first, and then \"just implement\" it - as opposed to the \"Fake It ('Till You Make It)\" and other techniques, which require smaller steps in the implementation stage. Also, after these quotes the author talks about getting \"red bars\" (failing tests) when doing \"Obvious Implementation\" - how can you get a red bar without a test? Yet I couldn't find any quote from the book saying \"obvious\" still means test first. What do you think? Should we test first or after when the implementation is \"obvious\" (according to TDD, of course)? Do you know a book or blog post saying just that?"} {"_id": "108337", "title": "Search multiple tables", "text": "I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems: * substring searching can only count exact results * the 50% rule applies to all tables separately and thus MySQL may not return important matches, or too naively discards common words * it is quite expensive in terms of query numbers and execution time (not an issue right now as there's not a lot of data yet in the tables) * normalized data is not even searched for (I have different tables for categories, languages and file attachments) **My planned solution** Create a single table having columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the `data` column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices for how to approach this? Thanks."} {"_id": "218610", "title": "Understanding velocity update in Binary Particle Swarm Optimization", "text": "I am wondering how to interpret the velocity update in a _Binary Particle Swarm Optimization_ (PSO). To recap the velocity update: V(t+1) = V(t) + c1 * r1 * (XlocalBest - X(t)) + c2 * r2 * (XglobalBest - X(t)) I understand that the binary position vector maps to values in the discrete domain. However, the binary vector contains only `0`s and `1`s, and the vector is `n`-dimensional. What I don't get is that the `n`-dimensional real-valued velocity vector takes the `n`-dimensional binary vector (position), and takes e.g. the difference between the local best position at a given index and the current position. Is it correct that there are only `0`,`1` values that are subtracted for each `X` value in the update?
(So that e.g. `c1 * r1` is only multiplied either with `0`, `1` or `-1`)? How can I use this information in relation to the sigmoid function that is often used in these PSOs? Should I update the velocities, and for example take a particle's velocity vector where each element is in the domain `[0,1)`, put this into the sigmoid function, and turn the corresponding bit positions on or off in the particle's binary vector such that `r < sigmoid(velocity value)`, where `r` is random in `[0,1)`? Does this sound about right, or did I get it wrong?"} {"_id": "108330", "title": "Io (Language) IDE/Compiler", "text": "Can you recommend a free compiler/IDE for writing some simple Io programs? I want to learn the language at home in my spare time."} {"_id": "79227", "title": "Microsoft's current best practices for building a .NET data tier? And reality?", "text": "The development team I'm working with will be moving to .NET 4.0 soon; however, the data access class library we use still uses ADO.NET \"classic\", meaning SqlDataReader, DataTable, and the like. Meanwhile, it seems like Microsoft and probably the rest of the world are moving forward with Entity Framework and WCF Data Services. I didn't find anything on MSDN that indicated which of the data access technologies Microsoft considers best practice. Does Microsoft have a preference? What data access technology are most people using currently? Are there good reasons to stay with ADO.NET classic and not move to Entity Framework?"} {"_id": "103487", "title": "What was the first hierarchical file system?", "text": "\"Directories containing directories and files\" seems to have been around forever, but there must have been a first."} {"_id": "79221", "title": "How to report the progress of my project (Agile) to my employer (who is not a programmer)?", "text": "I have a problem reporting progress to my employer. I am a part-time programmer, handling a software project for my school's (non-technical) department. Contact persons: 1\\. The staff who actually use the software and raise feature requests, 2\\. My boss (non-programmer), who is not the software's user. The project's nature: It is ready-made software, which was bought from a third party. I have to modify or add features/functions to this software in order to cater to the department's needs. The software needs to be used throughout the semester. Not all features need to be used at the beginning. Hence we are using the Agile model: when the staff need a certain feature, they raise a request, and I make the changes. By the end of the semester, I suppose all the required features will be raised and implemented. The problem: Every time my boss asks me how the progress is, I can't answer, because I don't know how to answer. I don't have a complete list of all the required features. Even though I have completed the features which were raised last week, I still can't tell my boss I have \"completed\", because new features are coming in too, and I don't know how many. I can't say \"We are at x% completion\" or \"We are going to complete it by xxx\". Sometimes, out of 3 requests, I manage to complete 2; I would tell my boss \"I have completed 2, but there is one feature not complete yet\". After a long period of time, it sounds like \"I always have something unfinished, after so long\". Being unable to report the progress makes me look really bad. It's not about how much I've done, it's about how to let people know.
If I were the manager, and my staff kept failing to report progress to me for months, I would feel this guy was incapable too. Do you guys have any idea how to report, or answer a question as simple as \"what is the status / progress of the software modification\"? **UPDATE** My boss isn't involved in the development tasks directly, so she doesn't have a clue what I am doing, or how the program works. We don't meet regularly as she is busy, and I feel it would be a waste of time because she is not the main user and she doesn't know the details of the program. I meet regularly with the staff who use and know the software better. I find it hard to explain the progress to my boss."} {"_id": "239144", "title": "How can another thread show the incremented value before the 1st thread reaches the return statement?", "text": "My code is given below. In the for loop I am getting unexpected output, i.e., before the 1st thread completes the execution of the for loop, the 2nd thread enters the for loop and shows an incremented value. public class ThreadSafe { public static void main(String[] args) throws InterruptedException { System.out.println(\"main()\"); B b=new B(\"1st Thread\"); B b1=new B(\"2nd Thread\"); B b2=new B(\"3rd Thread\"); b.start(); b1.start(); b2.start(); } } class MyCounter { private static int count; public static int getCount(){ for(int m=0;m<2;m++){ System.out.println(count+\" \"+Thread.currentThread().getName()); } return count++; } } class B extends Thread{ public B(String tname) { super(tname); } public void run() { MyCounter.getCount(); } } The question is: when the 1st thread executes the for loop, are the other threads executing the return statement, and is that the reason the incremented value is shown, or is it something else? How can another thread show the incremented value before the 1st thread reaches the return statement?"} {"_id": "72230", "title": "How to architect a website's presentation layer?", "text": "I have been going back and forth between different approaches for designing the presentation layer for websites. Many people swear by one of the different CMS packages, which are too constraining and inflexible. Many people suggest writing the HTML by hand, which seems not to scale in the case of many pages. I have a new web application I am designing and I really want to have a \"correct\" and \"elegant\" and practical approach to how I architect it. Has anyone done this in a way they are happy with, where they can plug in various modules and libraries with no problems? Any tips or pitfalls I should watch for?"} {"_id": "73547", "title": "What is the best practice to develop a visual component in Flex Hero?", "text": "What is the best practice for developing a visual component in Flex Hero? I do it like this: I consider that a component has 2 \"parts\", the declarative part (the visual sub-components) which I define in the skin (just MXML) and the code part (event handlers...) which I define in an ActionScript class. I load the skin in the ctor of the ActionScript class. I also define skin parts and states, and I bind event handlers in the partAdded function. I am having an argument about this; the claim is that I should define the component purely in an .mxml, with listeners in the script tag, and maybe attach a skin (but the skin should be loose - maybe for reuse :-?)
I come from .NET, and maybe I am biased toward the code-behind pattern; I am wondering, from your experience and Adobe's intent, what is the best practice for implementing a visual component?"} {"_id": "73544", "title": "What skill set should an engineer have in order to build a large social networking site?", "text": "I am trying to build a social media site, but I need a hands-on senior engineer/architect to guide and assist me in the server-side development since I am a rookie. The technology (which I am familiar with) to be used includes Java, JAX-RS, Spring and Hibernate. Unfortunately, I don't know what other technology skills an engineer/architect should have besides those mentioned above. The SNS I am trying to build is like Facebook, which can guide you in terms of features and functionalities."} {"_id": "71358", "title": "Good references for End User documentation examples and advice", "text": "Our in-house software is used by many users, and the training department asked us for tips on end-user documentation format. Does anyone know where I can find good examples of software end-user documentation that a training department can use for inspiration, or any sites with good advice? This is similar to this question; however, I am looking for end-user documentation used by non-technical users."} {"_id": "211636", "title": "Should user documentation include screenshots?", "text": "This question focuses on _user_ documentation, not on code documentation. I just finished my software project, and the people I work for are expecting me to write user documentation, describing everything the software does. All the documentation they had until now (about the rest of the software they use) is full of screenshots, and sometimes barely contains any text. I think it's awful. I've been struggling for hours to understand how screenshots were connected, and most of the time I had to ask for help from someone else. I wrote a Java desktop application, and its appearance is likely to depend on the current Windows theme and Java update. I don't think screenshots can form a reliable, definitive reference. Taking hundreds of screenshots and annotating them will take forever, and I don't believe it will help more than accurate, plain text documentation. **How should I approach putting the user documentation together? What guidelines should I follow regarding the use of screenshots versus explaining in text with user documentation?**"} {"_id": "59895", "title": "I sold my source code to a client, can I now re-build similar code and sell to someone else?", "text": "So we built a website and software for a client, charged our fee and handed over the code. The client then got a request from another company about the software. The client passed on the request but said since they owned the code they would need to receive money for it. I'm thinking there are 2 options here: 1. Work with the client as requested 2. We've actually re-built the software, made it much better and use it for other projects. Am I within my rights to sell that directly to the company that enquired about it instead of going through the client? Any help on this would be much appreciated."} {"_id": "132727", "title": "Java - using single class or multiple class for each type?", "text": "I currently have a Java class called \"node\" which has a number of fields. Which fields in the class are used depends on a field called \"type\". There are close to 10 (this can grow) different types of \"nodes\".
I was wondering if it is good to have a single class handle all types or a different class for each type. What is the best programming practice in these cases? I would also like to know (or get a link to similar questions/tutorials) how performance (memory, etc.) will be affected if I use a single class."} {"_id": "132726", "title": "Is it possible to release a library based on the ASIO SDK under the LGPLv3?", "text": "I'm wondering if it's possible to write and release a library based on the ASIO SDK under the LGPLv3. More specifically, the ASIO license says something I'm not sure how to interpret (2.2): > The Licensee has no permission to [...]. This includes re-working this > specification, **or reverse-engineering any products based upon this > specification.** Which sounds like it could conflict with the LGPL's: > You may convey a Combined Work under terms of your choice that, taken > together, effectively do not restrict modification of the portions of the > Library contained in the Combined Work and reverse engineering for debugging > such modifications Also, 5.a) is kind of bugging me: > Accompany the combined library with a copy of the same work based on the > Library, uncombined with any other library facilities The way I see it, my library will have no use, point or effect on its own, when uncombined with the ASIO SDK. Will I still have to release an uncombined dummy piece of software to respect 5.a) if I choose the LGPL?"} {"_id": "132723", "title": "is there a java library to create assisted wizard flow into your desktop application?", "text": "I am looking to create a step-by-step introduction/wizard guide to help users create a job in my application. For example, they will be told what to click, and will be asked what actions they would like to take next, etc. Is there a framework or library that aids this? What do you call this wizard/step-by-step workflow in software? Found something: http://code.google.com/p/cjwizard/"} {"_id": "241172", "title": "NCurses, scrolling of multiline items, \"current item\" pointer and \"selected items\"", "text": "I am looking for hints/ideas on the best (most effective) way to scroll multi-line items as well as emphasize the \"current item\" and \"selected items\", such as: 1 FOO ITEM 1 Foo sub-item 2 Foo sub-item 3 Foo sub-item 2 BAR ITEM 1 Bar sub-item 3 BAZ ITEM 1 Baz sub-item 2 Baz sub-item 4 RAB ITEM 5 ZZZ ITEM 1 Zzz sub-item 2 Zzz sub-item 3 Zzz sub-item 4 Zzz sub-item using _NCurses_ (some combination of windows, sub-windows, pads, copywin? Uff! In fact, the lines could exceed the `stdscr`'s width, so the possibility to scroll left/right would also be nice - pads?)... The whole items (including the sub-items) are supposed to be emphasized as full-width window/pad areas. The \"current item\" (including its set of lines) should be emphasized (e.g. using `A_BOLD`); the selected set of items of choice (including the set of lines for each selected item) should be emphasized in another way (e.g. using `A_REVERSE`). What would you choose to cope with it in the most effective NCurses way? (The fewer redraws/refreshes the better, and the terminal is supposed to be able to change its size - such as XTerm running under \"floating window\" management.) Thank you for your ideas (or perhaps some piece of code where something similar is already solved - I was not able to find anything helpful on the Internet. I mean, I am not going to copy/paste foreign code, but programming NCurses _properly_ is still somehow difficult for me).
P.S.: Would you suggest \"smooth-scrolling\" +1/-1 screen line or rather \"jump-scrolling\" +lines/-lines of the items? (I personally prefer the latter one.) Sincerely, \-- mjf"} {"_id": "241171", "title": "Algorithm for detecting windows in a room", "text": "![enter image description here](http://i.stack.imgur.com/kvzbf.png) I am dealing with the following problem and I was looking to write pseudocode for developing an algorithm that can be generic for such a problem. Here is what I have come up with thus far. **STEP 1** In this step I try to get the robot, wherever it may be placed, to the top left corner. Turn Left -> If no window or wall detected, keep going forward 1 unit.. if window or wall detected -> Turn right --> if no window or wall detected, keep going forward.. if window or wall detected, then the top left corner is reached. **STEP 2** (We start counting windows after we get to this stage to avoid miscounting.) I would like to declare a variable called **turns**, as it will help me keep track of whether the robot has gone around the entire room. Turns = 4; Now we are facing north and placed in the top left corner. while(turns>0){ If window or wall detected (if window count++) Turn Right Turn--; While(detection!=wall || detection!=window){ move 1 unit forward Turn left (if window count++) Turn right } } I believe in doing so the robot will go around the entire room and count windows, and it will stop once it has gone around the entire room as the turns get decremented. I don't feel this is the best solution and would appreciate suggestions on how I can improve my pseudocode. I am not looking for any code, just an algorithm for solving such a problem, and that is why I have not posted this on Stack Overflow. I apologize if my pseudocode is poorly written; please make suggestions if I can improve it, as I am new to this. Thanks."} {"_id": "87887", "title": "Where to start for writing a simple java IDE?", "text": "I would like to start working on my own custom IDE. The biggest reason I want to work on the IDE is to help me gain an even greater, more intimate understanding of Java (and other languages I add into it). I don't want to do anything super fancy or revolutionary; I'd be happy if I could create something as compact as the BlueJ IDE I used in high school and be content. I have a few questions on the specifics of the task that I hope I can get cleared up before I start investing time in this: * Is there anything I should be aware of when writing the parser? * Does anyone have any pointers that I should be aware of: pitfalls, brick walls or other constraints?"} {"_id": "87888", "title": "What is the proper name for this design pattern in Python?", "text": "In Python, is the proper name for the PersonXXX class below PersonProxy, PersonInterface, etc? import rest class PersonXXX(object): def __init__(self,db_url): self.resource = rest.Resource(db_url) def create(self,person): self.resource.post(person.data()) def get(self): pass def update(self): pass def delete(self): pass class Person(object): def __init__(self,name, age): self.name = name self.age = age def data(self): return dict(name=self.name,age=self.age)"} {"_id": "218552", "title": "iOS7 apps using an iPad only", "text": "I would like to develop iOS apps but due to budgetary issues I can only afford an iPad mini gen 1 or iPod touch 5th gen. Is it possible for me to make apps for all iPhones and iPads/iPods using just a single device?
Another question is how the latest iOS 7 works on the iPad mini gen 1/iPod touch, and what the limitations are when working with these devices compared to an iPhone (my target apps are trivia and endless runner games). Thanks. P.S. I do have a Mac with Xcode on it and I am developing a Cocoa app right now, and the question is whether I can develop for all iOS 7 devices using just one device for final testing."} {"_id": "241179", "title": "Should I always encapsulate an internal data structure entirely?", "text": "Please consider this class: class ClassA{ private Thing[] things; // stores data // stuff omitted public Thing[] getThings(){ return things; } } This class exposes the array it uses to store data, to any client code interested. I did this in an app I'm working on. I had a `ChordProgression` class that stores a sequence of `Chord`s (and does some other things). It had a `Chord[] getChords()` method that returned the array of chords. When the data structure had to change (from an array to an ArrayList), all client code broke. This made me think - maybe the following approach is better: class ClassA{ private Thing[] things; // stores data // stuff omitted public Thing getThing(int index){ return things[index]; } public int getDataSize(){ return things.length; } public void setThing(int index, Thing thing){ things[index] = thing; } } Instead of exposing the data structure itself, **all of the operations offered by the data structure are now offered directly by the class enclosing it, using public methods that delegate to the data structure.** When the data structure changes, only these methods have to change - but after they do, all client code still works. **Note that collections more complex than arrays might require the enclosing class to implement even more than three methods just to access the internal data structure.** * * * Is this approach common? What do you think of this? What downsides does it have? Is it reasonable to have the enclosing class implement at least three public methods just to delegate to the inner data structure?"} {"_id": "244525", "title": "Why is the function called lseek(), not seek()?", "text": "The C function for seeking in a file is called lseek(). Why isn't it called just seek()?"} {"_id": "244520", "title": "Who owns the copyright in a patch file?", "text": "I'm trying to determine OSS compliance for a body of source code, part of which includes sections from OpenEmbedded, and which contains a lot of patch files, as they (seem to) try to adapt various tools. In these patch files I find a range of copyright possibilities: * the patch file removes a hunk of code from an original, hence 90% of the patch file itself is the original code, so the patch file is \"derived\" and the original copyright still applies * in-betweenies * the patch file contains just a few lines of actual patching (or is primarily additions), and has a large number of lines of comments explaining why it is needed. It seems fair to assign the copyright in that file to the patch author, not the original. How should copyright be identified for the in-betweenies? What lines can be drawn? The tool I am using insists on attribution per file, so I have been assessing a \"ratio of contribution\" for original/patch authors and picking one. Is that a reasonable/fair/legal approach? Also: I have been applying the merger doctrine to the context lines in a patch file (those lines around the actual change).
Hence excluding them from \"what ratio is original/patch?\") and that tends to bias the ratio toward the patch author, not original author - is that fair?"} {"_id": "124074", "title": "In what types of programming environments is Reactive Management better than Proactive Management?", "text": "Most everyone seems to agree that Reactive Management is worse than Proactive Management; however, it seems that I am constantly seeing Reactive Management from development managers. Logic would suggest that there is some benefit to Reactive Management, since it is seen so often. I tend to see nothing but negative issues coming from this type of management, but I may be naive. In what sorts of programming environments/situations would Reactive Management be advantageous?"} {"_id": "206737", "title": "Collection interfaces in C#, coming from Java", "text": "In Java, I'm used to declaring collections using the most-abstract interface possible and then constructing them using the concrete implementation that makes sense at the time. It usually looks something like this: public class MyStuff { private Map customerAddresses; private List tasks; private Set people; public MyStuff() { customerAddresses = new HashMap(); tasks = new ArrayList(); people = new HashSet(); } } This allows me more flexibility to change a collection's implementation later when all I really depend on is the high-level interface (i.e. I need something to store key-value pairs, or something to store ordered data), and it's generally considered a standard \"best practice\" in Java. I'm just starting to program in C#, though, and I'm not sure if there's an equivalent practice for C#'s collections hierarchy. Collections in C# differ from collections in Java in several ways: `Collection` is a concrete type, the `ICollection` interface exposes similar methods to Java's `Set` while the `ISet` interface specifies a lot more features, and the key-set or value-set of a `Dictionary` is not an `ISet`, to name a few. Does it make sense to do something like this in C#? public class MyStuff { private IDictionary customerAddresses; private IList tasks; private ISet people; public MyStuff() { customerAddresses = new Dictionary(); tasks = new List(); people = new HashSet(); } } Or are there different \"standard\" interfaces and implementations to use for such collections? Should I be using `ICollection` and `Collection` in place of either the `Set` or the `List`? I'm tempted to use the C# classes and interfaces that \"look closest\" to the Java ones I'm used to, but I'd rather use the setup that better fits with C# paradigms and standards."} {"_id": "230540", "title": "Why can't we program without compiling (using an IDE/debugger)?", "text": "I find it very interesting that even people who design a particular framework still have to rely on compiling to ensure the code is correct. I don't mean for 100s of lines of code, but 2-10 lines. I say this because they mention the code they wrote might not compile/work because it's untested. But why can't we write even the most basic code without relying on compiling? Is it because of the languages or the frameworks? Or is it because we have gotten used to using IDEs too much? I am not against doing this, but just wondering the reasons."} {"_id": "185772", "title": "Good practices to implement mappers in a multi-tier application", "text": "When you are working with a multi-tier application very often you run into task of converting objects in one layer to objects in another layer. 
This could be converting database objects into domain objects, service objects into domain objects, domain object into view models, etc. I've seen it being done in multiple ways, those include: 1. In-line conversion 2. Creating a translator class with static helper methods 3. Using extension methods to simplify conversion 4. Overriding cast operators 5. Creating sepearate mapper classes for every possible translation 6. Utilizing solutions such as automapper What would be the \"proper\" way to handle it. The application I currently work with utilizes separate mapper class for every possible translation, and although it has benefits of testability we are using dependency injection and we have to pass all those mappers as dependencies in the constructor. Has anybody figured out a better way to do it? **UPDATE** I am using .NET stack here (EF +Business Layer + MVC)"} {"_id": "185773", "title": "Library design: provide a common header file or multiple headers", "text": "There are essentially two camps of library designers concerning the design of the final header file inclusion: * Provide a single header file that includes every other header file that makes up the public API. For example, GTK+ dictates that only `gtk.h` shall be included. This does _not_ mean that everything is explicitly written into that common header file. * Have the user include the header file whose functionality he is most interested in. The most prominent example would be the standard library where he has to include `stdio.h` for I/O, `string.h` for string manipulation etc., instead of a common `std.h` Is there a preferred way? libabc gives no advice on that matter, whereas an answer to a similar question suggests to keep them separated but does not specifies why."} {"_id": "130762", "title": "Why does email sent from my site end up in SPAM folders?", "text": "I am developing a course registration website in Django. New users must confirm their email address by clicking on a link emailed to them. Unfortunately, this message is consistently ending up in people's spam folders. What steps can I take to prevent this from happening? Should I include an unsubscribe paragraph? Should I send less mass mail from my site (I occasionally send out a message to 500 emails.)?"} {"_id": "181646", "title": "Is there a correlation between the type of a company/industry and the software engineering rigor?", "text": "I would like the answer to explain what impact, if any, does the type of company/industry have on the rigor, depth and breadth with which software engineering is practiced. The best would be some links to support the answer with references. As a control point, let's declare an assumption that a company which follows the SWEBOK or CMMI (any level) is doing 100% (the best) whereas one which does not follow anything at all is 0% (worst). Would it be possible to find out what companies or what industries score highest, average, and lowest? **EDIT:** CMMI (any level) refers to any organization which is CMMI certified. **EDIT 2:** The middle paragraph specifies that a company which follows some rigorous software engineering standard should score higher than a company which doesn't. **EDIT 3:** The SWEBOK is not only organizing content. It does much more, i.e. the **SWEBOK characterizes the contents of software engineering body of knowledge**. The **SWEBOK promotes a consistent view of software engineering worldwide** which is the main point. It is aimed at both practitioners and academics, individuals and organizations. 
For example, see the CSDP and consider companies who hire them. These are the companies that follow the SWEBOK. **EDIT 4:** Note that the answer should tell us about industries or companies that follow some rigorous software engineering standard in the prescribed depth and breadth, and those who don't. What is the correlation between the type of company/industry and their software engineering rigor? Throwing in a bunch of links for reference would be great."} {"_id": "181647", "title": "Resources for small programming exercises", "text": "> **Possible Duplicate:** > Books or websites containing easy programming problems? Are there any resources that offer small programming exercises? I am looking for problems that are relatively small but taxing in some way or another. It would be nice if the problems focused on specific areas of skill - i.e. algorithms, logic, problem solving, performance etc. - so that they can be picked for purpose. I am looking for one program every few days that I can use to better my programming skills. Also, I am trying to learn TDD and these exercises will help me with this too. I am struggling to think of problems every few days and would prefer exercises that are known to push specific programming skills forward."} {"_id": "181641", "title": "Wrapping Primitives to Enable Returning null -- Bad Practice?", "text": "I am frequently tempted to wrap integers, etc., solely for the purpose of writing methods that can return `null`. Negative 1 can work in many cases, but too often (especially in sound) it's a valid return value. Often, to get around this, I am returning a reference to a larger object that contains the primitive in question and calling a getter. This seems less efficient than the wrapper, and in some cases less encapsulating. So, are there any penalties with wrapping for this reason? Does anybody do this? Does this smell?"} {"_id": "231963", "title": "Is Factory Method a subclass of Abstract Factory in essence?", "text": "Is it correct to say that the factory method is essentially just a particular case of the abstract factory, one which produces only one object, not a group? I know that classic realizations assume architectural differences, but the point of a pattern is not only a concrete realization but a conceptual abstraction, isn't it?"} {"_id": "233044", "title": "Why can't the Factory Method pattern create a family of objects?", "text": "There are two main differences between the design patterns Factory Method and Abstract Factory. Difference 1 is that Factory Method is mainly based on inheritance. A class in a way uses its subclass to create objects. The objects created depend on the subclass used. Abstract Factory, by contrast, is based on inheritance but in a way also on composition - a client 'owns' an Abstract Factory instance to hold a concrete factory. Difference 2 is that Factory Method creates one object, while Abstract Factory creates a family of related objects. Difference 1 I understand (although I'm having trouble understanding why anyone would prefer to subclass a class that needs a factory instead of just 'giving' it a factory using composition [unless that class already has a subclass]). But difference 2 I don't understand. Why can't Factory Method create a family of products instead of one product? It can certainly return an array of products of the same family. All of these products share a common interface - the method can return an array of that type. Why is it said that this method only produces one product?
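For concreteness, here is a minimal sketch of the kind of creator I have in mind (the class names are my own, not from any book):

interface Product { void render(); }

class Button implements Product {
    public void render() { /* draw a button */ }
}

class Checkbox implements Product {
    public void render() { /* draw a checkbox */ }
}

abstract class Dialog {
    // The factory method - except it returns a whole family of products at once.
    abstract Product[] createProducts();

    void renderAll() {
        for (Product p : createProducts()) {
            p.render();
        }
    }
}

class WindowsDialog extends Dialog {
    Product[] createProducts() {
        return new Product[] { new Button(), new Checkbox() };
    }
}

Nothing in the mechanics of the pattern seems to forbid this.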
Additionally, a factory method inside a concrete subclass of an abstract factory creates different products using different methods that are called by the client. Why can't a concrete factory in the Factory Method pattern do the same? Have different factory methods for different products?"} {"_id": "81832", "title": "Why we should build with the highest warning level in .NET?", "text": "I heard that we have to build with the highhest warning level. **Is it true? why?** And **how to change default .NET warning level?**"} {"_id": "81835", "title": "Single-responsibility and custom data types", "text": "In the past months I've asked for people here on SE and on other sites offer me some constructive criticism regarding my code. There's one thing that kept popping out almost every time and I still don't agree with that recommendation; :P I'd like to discuss it here and maybe things will become clearer to me. It's regarding the single-responsibility principle (SRP). Basically, I have a data class, `Font`, that not only holds functions for manipulating the data, but also for loading it. I'm told the two should be separate, that loading functions should be placed inside a factory class; I think this is a mis- interpretation of the SRP... ## A Fragment from My Font Class class Font { public: bool isLoaded() const; void loadFromFile(const std::string& file); void loadFromMemory(const void* buffer, std::size_t size); void free(); void some(); void another(); }; ## Suggested Design class Font { public: void some(); void another(); }; class FontFactory { public: virtual std::unique_ptr createFromFile(...) = 0; virtual std::unique_ptr createFromMemory(...) = 0; }; The suggested design supposedly follows the SRP, but I disagree -- I think it goes too far. The `Font` class is not longer self-sufficient (it is useless without the factory), and `FontFactory` needs to know details about the implementation of the resource, which is probably done through friendship or public getters, which further expose the implementation of `Font`. I think this is rather a case of _fragmented responsibility_. Here's why I think my approach is better: * `Font` is self-sufficient -- Being self-sufficient, it's easier to understand and maintain. Also, you can use the class without having to include anything else. If, however, you find you need a more complex management of resources (a factory) you can easily do that as well (later I'll talk about my own factory, `ResourceManager`). * Follows the standard library -- I believe user-defined types should try as much as possible to copy the behavior of the standard types in that respective language. The `std::fstream` is self-sufficient and it provides functions like `open` and `close`. Following the standard library means there's no need to spend effort learning yet another way of doing things. Besides, generally speaking, the C++ standard committee probably knows more about design than anyone here, so if ever in doubt, copy what they do. * Testability -- Something goes wrong, where could the problem be? -- Is it the way `Font` handles its data or the way `FontFactory` loaded the data? You don't really know. Having the classes be self-sufficient reduces this problem: you can test `Font` in isolation. If you then have to test the factory and you know `Font` works fine, you'll also know that whenever a problem occurs it must be inside the factory. * It is context agnostic -- (This intersects a bit with my first point.) 
`Font` does its thing and makes no assumptions about how you'll use it: you can use it any way you like. Forcing the user to use a factory increases coupling between classes. ## I too Have a Factory **(Because the design of`Font` allows me to.)** Or rather more of a manager, not merely a factory... `Font` is self-sufficient so the manager doesn't need to know _how_ to build one; instead the manager makes sure the same file or buffer isn't loaded into memory more than once. You could say a factory can do the same, but wouldn't that break the SRP? The factory would then not only have to build objects, but also manage them. template class ResourceManager { public: ResourcePtr acquire(const std::string& file); ResourcePtr acquire(const void* buffer, std::size_t size); }; Here's a demonstration of how the manager could be used. Notice that it's used basically exactly as a factory would. void test(ResourceManager* rm) { // The same file isn't loaded twice into memory. // I can still have as many Fonts using that file as I want, though. ResourcePtr font1 = rm->acquire(\"fonts/arial.ttf\"); ResourcePtr font2 = rm->acquire(\"fonts/arial.ttf\"); // Print something with the two fonts... } ## Bottom Line... (It'd like to put a tl;dr here, but I can't think of one. :\\ ) Well, there you have it, I've made my case as best as I could. Please post any counter-arguments you have and also any advantages that you think the suggested design has over my own design. Basically, try to show me that I'm wrong. :)"} {"_id": "205234", "title": "What must testers be able to do to be 'highly qualified'?", "text": "**Are modern testers like 'stupid' users which try to find a bug clicking and filling fields or they have solid knowledge about product cores?** What modern testers must be able to do? * Must they be coders creating independant/embed tests? * Must they understand source code of products and even can fix easy bugs? * Other things? I'm interested in real practice."} {"_id": "139528", "title": "Javascript naming conventions", "text": "I am from Java background and am new to JavaScript. I have noticed many JavaScript methods using single character parameter names, such as in the following example. doSomething(a,b,c) I don't like it, but a fellow JavaScript developer convinced me that this is done to reduce the file size, noting that JavaScript files have to be transferred to the browser. Then I found myself talking to another developer. He showed me the way that Firefox will truncate variable names to load the page faster. Is this a standard practice for web browsers? What are the best-practice naming conversions that should be followed when programming in JavaScript? Does identifier length matter, and if so, to what extent?"} {"_id": "233933", "title": "Building a webservice with mvc", "text": "I'am planning my website over here based on MVC. And I am thinking about a webservice (who knows, maybe one day I'll create an android app or something). The site and the webservice will behave diferently from each other. So what should I do? Write different controllers for each... or stuff IFs all over the place? For example: on the site, the controller will act as such: Password correct? \\---->redirect user to welcome screen Password incorrect? 
\\---->render login page again But the webservice will simply show a JSON response like: {message: 'wrong password/success'} Which one do you think is the best approach?"} {"_id": "139525", "title": "Tooling and support for message format specifications", "text": "How do most companies define and manage message format specifications? Is it common for companies to create custom tools for creating and working with these documents? At work we have a lot of systems that communicate via UDP messages. Each system has its own Interface Control Document (in Word or in HTML) which describes its message scheme. The documents have descriptions of each message along with its fields, and usage notes. We're looking to standardize the format of these ICDs and write some tools that help create them. I was thinking we should create an ICD schema, and define the ICDs in some data-interchange format (XML, JSON, protobuf). Then we could create parsers that generate human-readable documentation (HTML, PDF), or even message parsing code (where useful and appropriate). I don't want to reinvent the wheel, and I don't want to get too carried away. Does the above proposal sound reasonable?"} {"_id": "140481", "title": "Should I create my own Assert class based on these reasons?", "text": "The main reason I don't like Debug.Assert is the fact that these assertions are disabled in Release builds. I know that there's a performance reason for that, but at least in my situation I believe the gains would outweigh the cost. (By the way, I'm guessing this is the situation in most cases.) And yes, I know that you can use Trace.Assert instead. But even though that would work, I find the name Trace distracting, since I don't see this as tracing. The other reason to create my own class is laziness, I guess, since I could write methods for the most usual cases like Assert.IsNotNull, Assert.Equals and so forth. The second part of my question has to do with using Environment.FailFast in this class. Would that be a good idea? I do like the ideas put forth in this document. That's pretty much where I got the idea from. One last point. Does creating a design like this imply having an untestable code path, as described in this answer by Eric Lippert on a different (but related) question?"} {"_id": "139523", "title": "Can JSF 2.0.0 code be converted to JavaScript/HTML5?", "text": "I am developing an application that uses WebSocket to exchange messages; it is based on JSF 2.0.0, but the idea is similar and HTML5 has WebSocket support. Is the only difference between those languages the tags used and the syntax? Is the logic behind it the same? I wonder if there is any converter for this."} {"_id": "140483", "title": "Is it a waste of time to free resources before I exit a process?", "text": "Let's consider a fictional program that builds a linked list in the heap, and at the end of the program there is a loop that frees all the nodes, and then exits. For this case let's say the linked list is just 500K of memory, and no special space management is required. * Is that a waste of time, because the OS will do that anyway? * Will there be different behavior later? * Is that different according to the OS version? I'm mainly interested in UNIX-based systems, but any information will be appreciated. Today I had my first lesson in my OS course and I'm wondering about this now. **Edit:** A lot of people here are concerned about side effects and general 'good programming practice', and so on. You are right! I agree 100% with your statements.
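To be clear, the whole fictional program is no more than this (a minimal sketch of my own):

#include <stdlib.h>

struct node { int value; struct node *next; };

int main(void) {
    struct node *head = NULL;
    /* build the list on the heap */
    for (int i = 0; i < 1000; i++) {
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
    }
    /* the loop in question: free every node right before exiting */
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0; /* the OS reclaims the whole address space here anyway */
}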
But my question is only hypothetical, I want to know how the OS manages this. So please leave out things like 'finding other bugs when freeing all memory'."} {"_id": "10334", "title": "Is Silverlight only for eye-candy, or does it have a use in business?", "text": "Granted that Silverlight may make eye-popping websites of great beauty, is there any justification for using it to make practical web applications that have serious business purposes? I'd like to use it (to learn it) for a new assignment I have, which is to build a web-based application that keeps track of the data interfaces used in our organization, but I'm not sure how to justify it, even to myself. Any thoughts on this? If I can't justify it then I will have to build the app using the same old tired straight ASP.NET approach I've used (it seems) a hundred times already."} {"_id": "195170", "title": "Class table inheritance... To 'type' or not to 'type'", "text": "I currently have a database that uses Class table inheritance model. Three different tables inherit from this table. The child tables have all a FK to the parent table and the fields are properly indexed. We hare currently having some issues at the application level because it is difficult to figure out what child table we are referencing from the parent. In addition to this, a query where we organize by type seems costly. One developer argues that we should just join with all 3 tables to figure out what child the parent record corresponds to, and as long as the tables have the proper index queries shouldn't be costly. In the other hand we could avoid a lot of confusion and problems if we add a discriminator field in the parent table, that way we would know without having to do any joins the 'type' of record we are dealing with. Also grouping by type would be much simpler. * Should we add a type in the parent table? * What would be the disadvantages? * Is it really not costly to join with multiple tables (as long as they are properly indexed) compared to having a discriminator field? * Other thoughts? We are currently using an implementation of active record for PHP, and it doesn't support Class Table Inheritance."} {"_id": "121248", "title": "Where should we put External Libraries in our SVN?", "text": "We have the following SVN structure. * Projects: Our work * Clients: Projects for clients, needs to be different * Shared: Shared libraries we created * Docs: Documents explaining how software development flows within the company (has nothing to do with my question anyway) Where should we put the external libraries? Let's say from opensource projects etc. ![enter image description here](http://i.stack.imgur.com/dYENF.png)"} {"_id": "121249", "title": "For what types of applications is Python a bad choice?", "text": "I just started learning Python, and I'd like to get some more context on the language. I realize that, in many cases, Python is a slow language relative to C or C++. Thus, Python is probably not the best choice for applications that need to run as quickly as possible. Outside of this, it seems like Python is a great general purpose language that is easy to read and write. The available libraries give it a huge amount of functionality. Outside of performance critical applications, where is it a bad choice to use Python (and why)?"} {"_id": "195177", "title": "How can I decouple configuration data from the program that uses it?", "text": "I am a beginning programmer who has written a spider application in PHP. 
Currently there are three parts: 1) The Spider (spider.php) 2) The Harvester (harvest.php) 3) The Configuration file (for example, craigslist_config.php) I use the spider to search the web for items I want to buy. An item can be found on any website, like ebay, craigslist, etc. The Harvester provides three functions to spider so it can act on the data it finds - `get_title_from($markup)`, `get_description_from($markup)`, and `get_price_from($markup)`. Each web site that I want to spider has, of course, different markup surrounding the data that I want to extract. My config file contains a configuration array that holds the regex patterns for each of the items I want to find. The structure of the file is always the same, the only thing that changes is the regex patterns. So, I would have craigslist_config.php, ebay_config.php, etc. $conf = array( 'title' => ' specific_site title pattern', 'description => 'specific_site description pattern', 'price' => 'specific_site price pattern' ); My problem is when I want to add a new website. I have to edit the Spider.php file and add to an ever-growing \"if, elseif\" statement that detects what site is currently being read, and load the correct config file, which in turn feeds the correct REGEX data to the harvester functions. How can I decouple my configuration from my Spider.php file? What I have designed does not feel like a flexible, scalable solution, and I don't want to have to mess with spider.php everytime I want to add or take away a new site. Ultimately, what I am trying to achieve is the ability to simply drop in a new configuration file into my config directory and move the 'if, elseif' logic somewhere else so that the spider and harverster functions never have to worry about what files are or are not included in the config directory. It's the \"somewhere else\" I am having trouble figuring out. Actually, it would be even better if I could get rid of the 'if else' logic all together so that everything just 'works.' My current design is not an OOP approach, however I am not opposed to one. I am currently reading, \"PHP Objects, Patterns, and Practice\" to get up to speed on OOP and related design patterns, so feel free to suggest in that direction should you feel it a solution. **EDIT:** Based on Doc Brown's direction, I have come up with the following. I have individual configuration files with content like so: $conf['specificwebsite1.com'] = array( 'title' => 'title pattern', 'price' => 'price pattern', etc... ); In my Harvester file I have a new function called `load_config($url, $config)`. As suggested, I loops through all the configuration files and load them into one large $conf array. Then, the `load_config` function checks if the key is a sub string of the url I'm currently reading. If so, then it loads all the necessary values to continue parsing. This is the function: function load_config($url, $config){ foreach($config as $key => $value){ if(stristr($url, $key) !== FALSE){ ## see if a key in our config file ## is a substring of our url. $conf = $config[$key]; break; } else { $conf = FALSE; } } return $conf; } This is working really well, so I'll accept that as the answer. But please feel free to make suggestions for improvements in the comments or as another answer."} {"_id": "236261", "title": "Which one of these answers regarding functions is incorrect?", "text": "So while I've been doing some lengthy compiles I decided to take the C++ general test on ODesk and came across this question. 
![This question](http://i.imgur.com/tyX95lv.png) If I'm not mistaken, given the wording (or lack thereof) **all** of these could be true. **a.** int Foo() { } int Foo(int bar) { } **b.** Well, _\"return`void`\"_ would be incorrect semantically but functions can obviously have `void` return types. void Foo() { } **c.** This is the definition of inline functions, yes. **d.** Without going into much detail about the placement of the following elements, typedef void (*Func)(int); Func functions[2]; void Foo(int bar) { } void Bar(int foo) { } functions[0] = &Foo; functions[1] = &Bar; Further, you could always do this using lambdas and functors. **e.** void Foo(int& bar) { ++bar; } int foobar = 5; Foo(foobar); **f.** int bar = 5; int& GetBar() { return bar; } GetBar() = 6; **g.** int bar = 5; int* GetBar() { return &bar; } (*GetBar()) = 5; * * * I fail to see where this question has any _truly false_ answers. Am I missing something? Needless to say I ran out of time and failed the whole thing. I guess I'm a bad C++ programmer. :("} {"_id": "162853", "title": "Asking for a code sample of the company at an interview", "text": "Asking a job seeker to show some code is a fairly common practice for a software company. However, would it be acceptable for the candidate to ask the interviewer to show him a small piece of code that he thinks is well written?"} {"_id": "176418", "title": "Freelancing - Getting paid for the quote or estimate", "text": "It is often necessary to spend time designing a solution, breaking down the design into tasks and sub tasks and estimating the time it will take to complete each task in order to produce a reasonable estimate or quote for a programming task. This process can be a serious investment of time, often without any guarantee that the estimate/quote will be acceptable to the potential client and more often that not the time was 'wasted' with no hope of getting paid for it (in the event of not winning the job). Is it the case that this is a cost of doing business and what can be done to minimise this unpaid time?"} {"_id": "173428", "title": "Bump version before kicking off new development or when tagging a release, which is better?", "text": "Some projects bump version before kicking off a new development, while the other projects bump version when tagging a release. Which approach is better? If version number not changed at the start of new phase, the developers may forget to change it and simply release the program. If version number changed before tagging release, then 2 the version numbers (tag and Makefile/AssemblyInfo.cs) do not match. `git describe` may give you v1.2.3.4-15-g1234567 if current revision is after v1.2.3.4, but you have already changed the files to have v1.2.3.5"} {"_id": "236269", "title": "How to make C# methods work like javascript functions?", "text": "I'll keep it simple, I want to make C#'s methods work like javascript's functions. Mainly so I can convert this - function makeVariable(terp) { var me = {value: 0}; return function () { terp.stack.push(me); }; } into C#. Is there ANY way, no matter how complex or time consuming, to do this?"} {"_id": "182208", "title": "Remote pair programming set up over a VM", "text": "We are looking for a new way of doing pair programming. It's looks like best way of programming, it is faster and you push out great tested code. But there are some downsides. Ad hoc items that pop up. crashing. back ups. there is only ever one driver. 
What we thought might be the best possible environment is this: we have a server running a VM. Both my programming partner and I SSH in and run some sort of VNC or TeamViewer session between us, so that we can both work on files simultaneously - say, one person works on the views while the other works on setting up Apache. So we are looking at working on the same system from two different computers simultaneously. This way we could, when needed, link to our other branches and have pairing \"systems\" that we could link up to. Or, if someone needs to jump out and work on something else, the other person can carry on without him. What do we need to learn to set up such an environment, or accomplish something like this?"} {"_id": "173423", "title": "Does NASA license the software that it develops?", "text": "NASA provides visualization software called Panoply. There is a Credits and Acknowledgments page that acknowledges and lists the licenses of software dependencies, but provides no information about its own license. I have looked at other software produced by NASA, including the source code for GISS, and cannot find any information about a license. The closest information that I can find is in the FAQ for the global climate model EdGCM Global, which says the code is in the \"public domain\". * Is it standard practice at NASA to release code into the public domain? * Are there exceptions? * Can I assume that Panoply is public domain and can be used without restriction other than those imposed by the licenses of software dependencies? * Is the absence of specific permission to reuse the code a concern? (This issue was raised in the answer to a separate question.) * How common is this practice across government agencies?"} {"_id": "141280", "title": "My first development job working at a company, what things to look out for?", "text": "So I've worked on my own all this time, selling software and creating a few web applications on my own. I had an Arts background and was self-taught. It was a bit difficult to find a development position, but after endless trying I finally landed a LAMP position. What I realized was that it was all a confidence issue. Before, when I didn't know a few things, I panicked, but after spending such a long time working on my own projects and solving various problems, I felt confident enough that I could fulfill requirements on my own. I hope this helps other people applying for jobs. This is the first time I will be developing with other team members in an office - is there anything I should prepare for my first day at work next week? Any tips and pointers for working as a developer at a company? I'm kinda nervous but excited."} {"_id": "210149", "title": "More Accurate Random in C", "text": "I have 3 IPs and every IP has a weight. I want to return the IPs according to their weights using the random function. For example, if we have 3 IPs: * X with weight 3 * Y with weight 3 * and Z with weight 3 I want to return X in 33.3% of cases, Y in 33.3% of cases and Z in 33.3% of cases, using C's random function. I have tried this code: double r = rand() / (double)RAND_MAX; double denom = 3 + 3 + 3; if (r < 3 / denom) { // choose X } else if (r < (3 + 3) / denom) { // choose Y } else { // choose Z } I repeated the function 1000 times and I got: choose X 495 times, choose Y 189 times and choose Z 316 times.
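(The complete test loop is essentially this - a minimal sketch, with the counters named by me:)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int x = 0, y = 0, z = 0;
    srand((unsigned)time(NULL));
    for (int i = 0; i < 1000; i++) {
        double r = rand() / (double)RAND_MAX;
        double denom = 3 + 3 + 3;
        if (r < 3 / denom) {
            x++; /* choose X */
        } else if (r < (3 + 3) / denom) {
            y++; /* choose Y */
        } else {
            z++; /* choose Z */
        }
    }
    printf(\"X:%d Y:%d Z:%d\\n\", x, y, z);
    return 0;
}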
But what I want is to get X:333 y:333 Z:334 How can the weighted random be more accurate?"} {"_id": "135847", "title": "Can a British programmer relocate to the US?", "text": "I'm considering a career change after having worked as a financial software developer in London over the past 8 years. I'd quite like to move to Silicon Valley but I'm not clear on the practicalities. 1) Given the visa requirements, am I limited to joining a large software firm such as Google? Or are startups or self employment open to me? 2) Does the visa requirement put me at a disadvantage compared to local hires?"} {"_id": "135841", "title": "How to implement smart card authentication with a .NET Fat client?", "text": "I know very little about smart card authentication in general so please point out or correct me if anything below doesn't make sense. Lets say i have: * A Certificate Authority \"X\"-s smart card (non-exportable private key) * Drivers for that smart card written in C * A smart card reader * CA-s authentication OCSP web service * **A requirement to implement user authentication in a .NET fat client application via a smart card, that was given out by the CA \"X\".** I tried searching info on the web but no prevail. What would the steps be ? My first thought was: Set up a web service, that would allow saving of (for example) scores of a ping pong game for each user. Each time someone tries to submit a score via the client application, he can only do so by inserting the smart card into the reader. Then the public key is read from the smart card by native c calls through .NET and sent to my custom web service, which in return uses the CA-s authentication OCSP web service to prove the validity of the public key/public certificate (?). If the public key is okay and valid, encrypt a random sequence of bytes with the public key and send it to the client application. If the client application sends back the correctly decrypted random sequence of bytes along with the score of the ping pong game, then the score is saved in the database for the given user. My question is, is this the correct way to do it ? What else should i know about smart card authentication ?"} {"_id": "200460", "title": "SharePoint + InfoPath Joins", "text": "I am trying to do something somewhat unique, and the best path I can find at the moment is to use List joins, but I'm not sure this is possible. I'm hoping someone can suggest a best course of action. The issue at the moment is that I have several lists (i.e. Employees) that all have a lookup to the Assets list. I also have a Scheduling list that contains start and end dates during which assets are unavailable, which also has a lookup to the Assets list. I am trying to find a way to filter a selection list in an InfoPath form to only include Employees that are available. The problem is that I am dealing with five or six types of Assets, so I can't have the Scheduling list directly look up the Employee list (or any other Asset type). I am trying to find a way to have this work done server-side, so that the form doesn't get too heavy. I am accessing the lists via the REST API built in to SharePoint, and I am doing my form design with the InfoPath designer. Any thoughts as to how I can get this form working? **UPDATE 1:** My current thinking is that I may be better off splitting the Scheduling table into individual lists for each Asset type, then finding a way to toss everything back together when I need it later. Does this seem reasonable? 
I have no idea how difficult it would be to put the pieces back together later - has anyone attempted something like this? **UPDATE 2:** New line of thought: I have resigned myself to being unable to do the joins that I would like to do, and instead am trying to accomplish my goal by filtering the data that appears in the selection lists. I am pulling the relevant data from the Scheduling list and all of the data from the Employees list, and trying to filter the Employees list for entries NOT IN the Scheduling list. Does anyone know how to do this, as it will accomplish the goal without having TOO much unneeded data downloaded to the form..."} {"_id": "247386", "title": "Subclass reference to another subclass", "text": "Imagine I have the following code: class A: pass class B(A): pass class C(A): def __init__(self): self.b = B() Is the above code correct in terms of inheritance? I mean, is it good practice to reference one subclass from another subclass?"} {"_id": "121791", "title": "Getting into smart card programming", "text": "I have a Compaq nw8440 with a smart card reader that is: > Compatible with ISO 7816 compliant Smart Cards. PC/SC interface support I have been interested in smart cards and wanted to start playing around with them. If I wanted to get into programming smart cards, where can I find resources on how to do it, and would I need any additional hardware other than what my laptop provides (besides the cards to program)?"} {"_id": "125033", "title": "How should I go about learning new technologies?", "text": "In a software engineering career, you need to learn new technologies very often, and always in less time than you would like due to deadlines. So I just need to know how programmers should go about learning new technologies really quickly yet thoroughly. What approaches do you follow?"} {"_id": "247385", "title": "How to deal with a new version of Visual Studio's directory?", "text": "When upgrading Visual Studio to a newer version, a new directory is created, e.g. \"Visual Studio 2013\". I understand that that makes sense, as we want to differentiate between code targeting different versions of .NET. But since some projects use other projects as their DLLs for some repeated code, this raises a question: should we copy everything and keep everything for .NET 4.0 separate from .NET 4.5, or should we have one library that works for both (since 90% of the code will be the same) and only create a special library for code that can't run on 4.0? Just suggesting doing \"whatever works for you\" is not a good solution - at the moment both look viable. If one of the options has some disadvantages, they might become apparent only after we have many internal links from solution to solution, and there's no real way to change all links at once - one must search for every one of them manually, hoping not to miss anything. Since we're obviously not the first facing this question, I'd like to hear from those with experience in the matter. (Or anyone else, of course.) (Though there might not be a clear-cut answer to this question, I hope it will be allowed here, just as the highest voted question on this site is.)"} {"_id": "229968", "title": "What are the advantages of pass by value?", "text": "I always thought pass by value was a legacy from the early languages, kept because the designers had never seen anything else. But seeing brand new languages like Go adopt the same principle confused me. The only advantage I can think of is that you make sure the value you are passing won't be modified.
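(In Go terms, this is the guarantee I mean - a minimal sketch of my own:)

package main

import \"fmt\"

type BigStruct struct {
    data [1024]int
}

// The whole struct is copied on every call, so nothing done in
// here can ever modify the caller's copy.
func process(b BigStruct) {
    b.data[0] = 42 // changes only the local copy
}

func main() {
    var b BigStruct
    process(b)
    fmt.Println(b.data[0]) // prints 0
}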
But every time you need to pass a big structure to a function, you need to pass the pointer (or reference) to prevent copying, if the language permits it. To prevent modification of parameters, some `const`ness mechanism could be introduced instead of making the whole language pass by value. I have been coding for years and I rarely needed to pass by value. Almost always, I don't need my value/object to be copied to a function call; and often I don't need it to be modified inside the function*. What is the motivation behind making modern languages pass by value? Are there any advantages that I am not aware of? **Edit:** When I say pass by value, I don't mean primitives. They are easy and cheap to pass by value. My main focus is objects or structs that consists of more than 1 primitive and thus _expensive_ to copy. *: I have asked around and others reported similar thing with their parameter usage."} {"_id": "208257", "title": "Utilizing a Java Concurrent Utility from a Web App", "text": "I have the following lines of code in my application: return \"Service is alive since: \" + TimeUnit.MILLISECONDS.toMinutes(mxBean.getUptime()) + \" minutes\"; It uses the following package: import java.util.concurrent.TimeUnit; My application is a web application. Does it means that I have something wrong logically if I use something from concurrent package at a web application?"} {"_id": "255771", "title": "Problem with opening JFrames from LWJGL", "text": "I'm making a program that uses LWJGL to render a Display window, and it also listens for keyboard input to prompt it to open a swing window. The problem is, upon the first keyboard prompt, it opens the window successfully. But when you close it and try it again, it seems the LWJGL window 'sticks' down the key you just pressed and instead starts opening up infinite new Swing windows... I'm not sure how to resolve this issue. Here is some code to demonstrate my problem... public class Launcher{ public static void main(String[] args){ new Launcher(); } public Launcher(){ loadGUI(); go(); } public void go(){ while(!Display.isCloseRequested()){ listen(); glClear(GL_COLOR_BUFFER_BIT); Display.update(); Display.sync(60); } } private void listen(){ if(Keyboard.isKeyDown(Keyboard.KEY_SPACE)){ System.out.println(\"fk\"); Example e = new Example(); } } private void loadGUI(){ try{ Display.setDisplayMode(new DisplayMode(480,480)); Display.setTitle(\"Example\"); Display.create(); }catch(LWJGLException e){ e.printStackTrace(); Display.destroy(); System.exit(1); } glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, 480, 480, 0, 1, -1); glMatrixMode(GL_MODELVIEW); } private class Example extends JFrame{ public Example(){ add(new JPanel(), BorderLayout.NORTH); pack(); setResizable(true); setTitle(\"Test\"); setVisible(true); } } }"} {"_id": "255772", "title": "What is the best way to make UI in iOS?", "text": "I am frustrated with the numerous options available for creating UI in iOS. Each with their own downsides. I like doing programmatic UIs but the compile- debug loop is quite slow (10s each compile) which slows me down when I am endlessly trying to get programmatic auto-layout UIs to work. I would like to know what is the best approach to use for 2 specific screens with code examples. This might be a good way to compare the pros and cons of each approach. 
Here are two screens of a very common and simple interface - WhatsApp's phone number verification flow: http://www.whatsapp.com/faq/iphone/20902747 # Options * Auto layout * Apple NSLayoutConstraint SDK API * Apple Visual Format Language * Apple Interface Builder * 3rd party library * Pure Layout * Masonry/KeepLayout * Non auto layout Thanks!"} {"_id": "204545", "title": "Which parallel pattern to use?", "text": "I need to write a server application that fetches mails from different mail servers/mailboxes and then needs to process/analyze these mails. Traditionally, I would do this multi-threaded, launching a thread for fetching mails (or maybe one per mailbox) and then process the mails. We are moving more and more to servers where we have 8+ cores, so I would like to make use of these cores as much as possible (and not use 1 at 100% and leave the seven others untouched). So conceptually, as an example, it would be nice that I could write the application in such a way that two cores are \"continuously\" fetching emails and four cores are \"continuously\" processing/analyzing the emails (since processing and analyzing mails is more CPU intensive than fetching mails). This seems like a good concept, but after studying some parallel patterns, I'm not really sure how this is best implemented. None of the patterns really fit. I'm working in VS2012, native C++, but I guess from a design point of view this does not really matter and just some pointers on how to organize this would be great!"} {"_id": "45369", "title": "When in the project life-cycle would you fit in external penetration/security testing of the software?", "text": "Assuming that you're working on a piece of software which is required to pass a third party security/penetration test before being released to the client, at which point in the project life-cycle would you perform the tests? Passing the test means that no major flaws are detected, or, more likely, that any flaws detected have been properly corrected before release."} {"_id": "138697", "title": "How important is it to pick the best IDE for your programming language of choice?", "text": "As I learned some basic programming languages I came across tens of IDEs, and tens of compilers. Most people you ask will tell you \"Go with that IDE or go with the best\" etc, however they do not provide a proper statement as to why this is important. I understand a good IDE will provide you with functionalities to save time and money, such as debugging or quick-word-fill, but I doubt that's the whole reason a programmer picks a good IDE. At school we work on old compilers (Money probably isn't the reason) because the theory \"As long as you learn it's good\" works. The bottom-line question is: How important is it to pick the best IDE for your programming language? YOu have Eclipse for Java, C++, Python and more, but can't you simply use a different one? What difference does a good IDE to your programming skills or your programming time?"} {"_id": "141750", "title": "Dual-licensing LGPL 2.1 and LGPL 3", "text": "I maintain a software, a small PHP library, that is released under the LGPL version 3 license (LGPLv3). Someone wants to use the library in their software which has the GPL version 2 license. This license compatibility matrix suggests this is not possible without changing the licensing terms of one of the software. I have been requested to dual-license my code under LGPLv2.1 and LGPLv3. Does it make sense, and what might the drawbacks be? 
Thank you."} {"_id": "198803", "title": "Multiple orders in a single list", "text": "I have a problem with a ranking system I am using. Scenario: An online game with around 10k players calculates a real time ranking of points when a certain event occurs. Events don't occur that often, around 1 time per minute. This ranking is kept in the cache for quick calculations, sorting and access. Now players can form groups and play against each other, but the scoring system is the same, only the ranking is based for the players in that group. At first I created for every group a separate ranking, effectively having the same scores as the complete ranking, but with different positions in the ranking. This is trivial because there are over 1.000 groups and every time an event occurs all the groups would have to be updated. So what I did now is when the group ranking is requested take only the players that are in that group from the complete ranking and show them. The positions would have to be re-counted. That re-counting is where the problem is. Because the sub-list is by-reference from the complete ranking list I cannot change the position of the player in the sub-ranking without updating it in the complete ranking, because it's just a reference. I came up with two solutions: * Create a copy from the record every time it is requested and do some output caching (not very desirable because the rankings are live) * Create a copy and store this in a cache which is reset when an event occurs. * Create a sub-list with just the positions which is updated when an event occurs. And the last solution: do the position counting in the output instead in the business side. This would be the best solution, only problem is that on some pages this text appears: \"You are on position # in the ranking\" where # is your ranking position. This number would be tedious to get then. Does anybody have any suggestions to this problem?"} {"_id": "138692", "title": "Display dynamic content from embedded web server", "text": "I have an embedded device running a slimmed down version of a HTTP server. Currently, it can display static HTML pages. Here is an example of how it displays a static HTML page: char *text=\"HTTP/1.0 200 OK\\r\\nContent-Type: text/html\\r\\n\\r\\n\" \"Hello World!\"; IPWrite(socket, (uint8*)text, (int)strlen(text)); IPClose(socket); What I'd like to do is display dynamic content, e.g. a reading from a sensor. What I thought of so far is to have the page refresh every once in awhile with and use sprintf() to attach the sensor reading to the _text_ variable for the response. Is there a way I can do this without having to refresh the page constantly?"} {"_id": "141754", "title": "Are there any actual case studies on rewrites of software success/failure rates?", "text": "I've seen multiple posts about rewrites of applications being bad, people's experiences about it here on Programmers, and an article I've ready by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase and how do you decide what to do with it based on real studies? For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because as one of them found out, a rewrite was a disaster, it was expensive and didn't really work to improve the code much. That customer has some very complicated business logic as the rewriters quickly found out. 
In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall if the legacy software didn't get upgraded at some point in the future. To me, that kind of risk warrants research and analysis to ensure a successful path. Have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies. **Aftermath:** Okay, after more searching, I did find three interesting articles on case studies: 1. Rewrite or Reuse, a study on a Cobol app that was converted to Java. 2. Software Reuse: Developers' Experiences and Perceptions. 3. Reuse or Rewrite, another study on the costs of maintenance versus a rewrite. I recently found another article on the subject: The Great Rewrite. There the author seems to hit on some of the major issues. Along with this was the idea of prototyping with the proposed new technology stack and measuring how quickly the devs picked it up. This was all as a prelude to a rewrite, which I thought was a great idea!"} {"_id": "138690", "title": "Data structure to use for complex lookups in an event engine?", "text": "What would be an optimal way to organize arbitrary data types into a structure that allows for complex (non-index-based) lookups into the data without performing full loop-throughs? For a little background, I am not looking for an answer on how to architect an event engine; rather, I am looking for methods to improve a current implementation. As a five-second tour, I have an architecture in place with the following components: * Signals - signal which handles to execute * Handles - execute when a signal is received * Queue - manages handles * Engine - manages the queues and processes signals, new handles etc. My current implementation has a few pitfalls: 1. Signals are currently coupled into a Queue; this is done so the engine can identify Queues in storage and the signals they represent. 2. There are two separate Queue storages, indexed and non-indexed; the index exists for lookups on strings/ints, while the non-indexed storage exists for complex signals such as regexes, array comparisons etc. I think that I may have a solution, but would like the thoughts of others who have developed similar systems. The new architecture I have thought of would consist of the following: 1. Queues would store only a \"data\" property rather than a signal, which would be passed to the engine for signal comparisons. 2. The engine would no longer have non-indexed and indexed storage; rather, a single storage would be used that would require loop-through lookups. The library is written in PHP (if that matters); if you want to see the source, it's at http://www.github.com/nwhitingx/prggmr."} {"_id": "133448", "title": "Unit/Integration Testing my DAL", "text": "So I've done some research on this but I couldn't quite come to a conclusion, so I figured I'd ask you guys to see if I could get some other opinions. All of my database access is currently done through stored procedures. I am going to make no direct calls using custom queries. Because of this, my DAL is going to be pretty simple. It's basically going to just contain a bunch of methods that more or less interface out the different stored procedures in the database. It will always be in sync with the procedures that are there, and never call into the database any other way.
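Concretely, each DAL method will be little more than this (a minimal sketch - the procedure and parameter names are made up):

using System.Data;
using System.Data.SqlClient;

public class CustomerDal
{
    private readonly string connectionString;

    public CustomerDal(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Thin wrapper: bind the parameters, call the stored procedure, done.
    public void UpdateCustomerName(int customerId, string name)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(\"usp_UpdateCustomerName\", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue(\"@CustomerId\", customerId);
            command.Parameters.AddWithValue(\"@Name\", name);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}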
I'm not sure if this is the greatest way to do things, and I am aware of the advantages and disadvantages of only using stored procedures, but it's just the way I've chosen. I think it will be the cleanest in the long run. But I want to test this DAL. I want to test this at a low level opposed to only testing it via the Business Objects that are going to be tied to these calls. I figure doing that will give me confidence that the procedures are working correctly, and than at a high level I can just mock this out and test business logic - but I'm fighting with how I test this stuff. If I write unit tests by the book, I would mock out the actual calls to the database and just make sure that they are getting called, or create some stubs or whatever that return fake data and do it that way (both make sure the function is actually called). However, what good is this if the _only_ thing these methods are doing are taking parameters and calling a stored procedure? All my mocks would be doing would be making sure I'm calling the method from the test more or less, and that seems like a huge waste of time and not really that effective. Now if I integration test this stuff, I could be dealing with real test data, real database calls, which is fine, but then it wouldn't be 100% considered a unit test, and since within the Business Objects I'm mocking these calls out, I'd technically never have official unit tests for this stuff, only integration ones. Does this all make sense? Basically, this seems fine to do for me. Have integration tests for the DAL, and have that be the only time the DAL itself is tested. When I unit test the business logic and mock out the DAL, I'd know it handles data correctly, and that would be comforting enough for me. What I'm asking is, am I approaching this correctly? Is there anything else you guys do that I am missing here that would shine some light on this stuff? Any feedback is much appreciated :)"} {"_id": "80879", "title": "Why does microsoft's own included MVC3 template not follow fat models, skinny controllers?", "text": "**From Microsoft's own built in template for MVC3** The model is extremely skinny, having basically no code. **Model** public class RegisterModel { [Required] [Display(Name = \"User name\")] public string UserName { get; set; } [Required] [DataType(DataType.EmailAddress)] [Display(Name = \"Email address\")] public string Email { get; set; } [Required] [StringLength(100, ErrorMessage = \"The {0} must be at least {2} characters long.\", MinimumLength = 6)] [DataType(DataType.Password)] [Display(Name = \"Password\")] public string Password { get; set; } [DataType(DataType.Password)] [Display(Name = \"Confirm password\")] [Compare(\"Password\", ErrorMessage = \"The password and confirmation password do not match.\")] public string ConfirmPassword { get; set; } } While the Controller on the other hand seems to be fat, doing more that simple routing... 
**Controller** [HttpPost] [AllowAnonymous] public ActionResult Register(RegisterModel model) { if (ModelState.IsValid) { // Attempt to register the user MembershipCreateStatus createStatus; Membership.CreateUser(model.UserName, model.Password, model.Email, null, null, true, null, out createStatus); if (createStatus == MembershipCreateStatus.Success) { FormsAuthentication.SetAuthCookie(model.UserName, false /* createPersistentCookie */); return RedirectToAction(\"Index\", \"Home\"); } else { ModelState.AddModelError(\"\", ErrorCodeToString(createStatus)); } } // If we got this far, something failed, redisplay form return View(model); }"} {"_id": "133440", "title": "Is studying more than one programming language as a beginner confusing?", "text": "Intuitively it seems like this might be the case. Is there real research or authoritative anecdotal data (yes, please) supporting (or contradicting) this theory?"} {"_id": "33036", "title": "Which self balancing binary tree would you recommend?", "text": "I'm learning Haskell and as an exercise I'm making binary trees. Having made a regular binary tree, I want to adapt it to be self balancing. So: * Which is most efficient? * Which is easiest to implement? * Which is most often used? But crucially, which do you recommend? _I assume this belongs here because it's open to debate._"} {"_id": "236881", "title": "Why do we have to use divs?", "text": "This morning, as I was writing some html and haml, it occurred to me that the way divs are used is ridiculous. Why are divs not implied? Imagine if this:
<div class="hero-img"></div> was this: <hero-img></hero-img> If the "div class" portion of the element were assumed, HTML would be more semantic, and infinitely more readable with the matching closing tags! This is similar to HAML, where we have: .content Hello, World! Which becomes: <div class="content"> Hello, World! </div> It seems to me the only thing that would have to happen for this to work in browsers is that the browsers could start interpreting every element without an existing html element definition as implying `<div class="element-name">`. This could be completely backward compatible; for CSS and jQuery selectors etc, "div.hero-img" could still work, and be the required syntax to select the elements. I know about the new web components specification, but that is significantly more complicated than what is suggested here. Can you imagine how pleasant it would be to look at a website's source and see html that looked like that?! So why do we have to use divs? If you look at Mozilla's html5 element list, every element has a semantic meaning, and then we get to `<div>` and it says: "Represents a generic container with no special meaning." ...and then they list the arbitrary elements they are adding to html5 like `<main>
`. Of course, if this concept of implied divs was added to the html spec, it would take ten years to become standard, which is a million years in web time. So I figure there must be a good reason this hasn't happened yet. Please, explain it to me!"} {"_id": "215657", "title": "Are there any significant advantages to using a native language for mobile app development?", "text": "Forgive me if this question has already been answered but I couldn't quite find the answer I was looking for. What I wanted to know was, is there any significant advantage to using a native language when developing and deploying apps to a mobile environment? The reason I ask is that for a long while now I've been using Objective-C, Apple's native language for iOS, to build my apps. However I've been wondering whether or not there is any real benefit to doing this, over using a non-native language like JavaScript and then deploying it through a service like 'Phone Gap'? I do stress _'significant'_ advantages, as native languages are always more likely to have the upper hand when it comes to speed and access to the latest APIs. However, in general I don't see using a non-native language or a service like 'Phone Gap' causing any major slowdown to my apps or restricting my development. Additionally, having the ability to deploy to multiple services is also very handy indeed. This is why I ask the question, are there any significant advantages to using a native language for mobile app development?"} {"_id": "41195", "title": "Writing a partnership contract for a website", "text": "My boss has offered me a 25% share, in exchange for more input from my side, of a website (a crawling engine in php and a front end using ajax) I developed for him. What are the things I should consider in writing the contract so that my share is future-proof?"} {"_id": "41196", "title": "TDD with limited resources", "text": "I work in a large company, but on just a two-man team developing desktop LOB applications. I have been researching TDD for quite a while now, and although it is easy to realize its benefits for larger applications, I am having a hard time trying to justify the time to begin using TDD on the scale of our applications. I understand its advantages in automating testing, improving maintainability, etc., but on our scale, writing even basic unit tests for all of our components could easily double development time. Since we are already undermanned with extreme deadlines, I am not sure what direction to take. While other practices such as agile iterative development make perfect sense, I am kind of torn over the productivity trade-offs of TDD on a small team. **Are the advantages of TDD worth the extra development time on small teams with very tight schedules?**"} {"_id": "164244", "title": "Data classes: getters and setters or different method design", "text": "I've been trying to design an interface for a data class I'm writing. This class stores styles for characters, for example whether the character is bold, italic or underlined, but also the font size and the font family. So it has different types of member variables. The easiest way to implement this would be to add getters and setters for every member variable, but this just feels wrong to me. It feels way more logical (and more OOP) to call `style.format(BOLD, true)` instead of `style.setBold(true)`. So to use logical methods instead of getters/setters.
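To make the idea concrete, here is roughly the shape I have in mind (C++; the names are hypothetical and only the boolean properties are sketched):

    #include <string>

    enum class Property { Bold, Italic, Underlined };

    class CharacterStyle {
    public:
        // the "logical" interface instead of setBold/setItalic/...
        void format(Property p, bool on) {
            switch (p) { // this is the big switch I'd rather avoid
                case Property::Bold:       bold = on; break;
                case Property::Italic:     italic = on; break;
                case Property::Underlined: underlined = on; break;
            }
        }
        bool formatting(Property p) const {
            switch (p) {
                case Property::Bold:       return bold;
                case Property::Italic:     return italic;
                case Property::Underlined: return underlined;
            }
            return false;
        }
    private:
        bool bold = false, italic = false, underlined = false;
        // font size and family would need differently typed getters,
        // since overloading by return type alone is not possible
    };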
But I am facing two problems while implementing these methods: I would need a big switch statement with all member variables, since you can't access a variable by the contents of a string in C++. Moreover, you can't overload by return type, which means you can't write one getter like `style.getFormatting(BOLD)` (I know there are some tricks to do this, but these don't allow for parameters, which I would obviously need). However, if I were to implement getters and setters, there are also issues. I would have to duplicate quite a lot of code because styles can also have parent styles, which means the getters have to look not only at the member variables of this style, but also at the variables of the parent styles. Because I wasn't able to figure out how to do this, I decided to ask a question a couple of weeks ago. See Object Oriented Programming: getters/setters or logical names. But in that question I didn't stress it would be just a data object and that I'm not making a text rendering engine, which was the reason one of the people who answered suggested I ask another question while making that clear (because his solution, the decorator pattern, isn't suitable for my problem). So please note that I'm not creating my own text rendering engine, I just use these classes to store data. Because I still haven't been able to find a solution to this problem I'd like to ask this question again: how would you design a styles class like this? And why would you do that?"} {"_id": "164249", "title": "One page using querystring or many folders and pages?", "text": "I have an application where I have the 'core' code in one folder for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this with a default.asp which currently use the querystring to define what page should be displayed. One page is for general users, the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example of this as it is now is default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO as this is an internal app and uses Windows Auth. My question is, is it better the way it is now or would it be better to have something like the following: * /server/view/list/ * /server/view/?id=123 * /server/create/ * /server/edit/?id=123 * /server/remove/?id=123 In the above folders I would have a home page which defines all the variables which are currently determined by the querystring - in /server/create/ for example, I would define the action as 'create', object name as 'server' and so on. In terms of future development, I really have no idea which method would be best. I think the 2nd method would be best in terms of following what page does what but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for StackOverflow as that is very much right / wrong answer based. I got the idea SE is more discussion based."} {"_id": "176336", "title": "C Minishell Command Expansion Printing Gibberish", "text": "I'm writing a unix minishell in C, and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example: $> echo hello $(echo world! ...
$(echo and stuff)) hello world! ... and stuff I think I have it mostly working; however, it isn't marking the end of the expanded string correctly. For example, if I do: $> echo a $(echo b $(echo c)) a b c $> echo d $(echo e) d e c See, it prints the c, even though I didn't ask it to. Here is my code: msh.c - http://pastebin.com/sd6DZYwB expand.c - http://pastebin.com/uLqvFGPw I have more code, but there's a lot of it, and these are the parts that I'm having trouble with at the moment. I'll try to tell you the basic way I'm doing this. Main is in msh.c; here it gets a line of input from either the commandline or a shellfile, and then calls processline (char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this: processline (buffer, 1, 1); In processline, we allocate a new line: char expanded_line[EXPANDEDLEN]; We then call expand, in expand.c: expand(line, expanded_line, EXPANDEDLEN); In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls: static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind) orig is line, and new is expanded line. oldl_ind and newl_ind are the current positions in the line and expanded line, respectively. Then we pipe, and recursively call processline, passing it the nested command (for example, if we had \"echo a $(echo b)\", we would pass processline \"echo b\"). This is where I get confused: each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, this is bad because I'll run out of stack room really quickly (in the case of a hugely nested commandline input). In expand I insert a null character at the end of the expanded string, so why is it printing past it? If you guys need any more code, or explanations, just ask. Secondly, I put the code in pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code. Thanks."} {"_id": "246389", "title": "function naming: plural form of \"if not exists\"", "text": "This is a question about function naming. I am writing some code that updates the structure of a database according to a specification. Somewhere in this code I have something like this: void createIndices(Table tbl){ for(Index idx : tbl.getIndices()){ if(notExists(idx)){ createIndex(idx); } } } void createIndex(Index idx){ ... } As you can see, the function `createIndices` is not well named because it doesn't quite create the indices. It creates only those indices _that do not already exist_. I can come up with a variety of names for the function: * `createIndicesIfNotExist` * `createIndicesThatDoNotExist` * etc. However, because \"if not exists\" is such a recurring idiom in programming, I was wondering if there is some kind of standard for its plural form. Just like `createIndexIfNotExists` could be considered the standard singular form, even if it's not grammatically correct."} {"_id": "246388", "title": "Wrapping function in closures to make testable functions", "text": "In my nodejs project, I have functions like this for socketio.
socket.on('draw', function (data) { socket.broadcast.to(socket.room).emit('draw', data); addEvent(socket, [\"draw\", data]); }); I'd like to rewrite them to something like this: function onDraw(socket, config) { return function (data) { socket.broadcast.to(socket.room).emit('draw', data); addEvent(socket, [\"draw\", data]); } } socket.on('draw', onDraw(socket, config)); The idea is that by placing all of my functions in simple closures like this, I can put them in different modules, and it will make it easier to test by passing mock objects to the function that constructs the callback. **I was wondering if it was overkill or if there is a better way to make my code testable?**"} {"_id": "5225", "title": "Does open source licensing my code limit me later?", "text": "Suppose I develop a useful library and decide to publish it as open source. Some time later I have a business need to do something that wouldn't comply with the open source licence. Am I allowed to do that? How should I publish the software in a way that I keep ownership and don't block myself from using the library in the future in any way? Keep in mind that at least in theory, other developers may decide to contribute to my open-source project. Can I specify in a licence that I as the original developer get ownership of their contributions as well? Don't get me wrong here, I'm not trying to be evil and get ownership of others' work - I just want to keep ownership of mine, and if someone posts an important bugfix I could be rendered unable to use the original code unless I use his work as well."} {"_id": "70326", "title": "How viable is a PhD in software security research?", "text": "I am planning to do a Ph.D in software security. My school is giving a full scholarship for a \"Detecting security vulnerabilities from source code and binary code\" project. Is it worth taking? I feel software security is a somewhat saturated field; I mean, there are already lots of software vulnerability detection tools available in the market. Is this still a viable research field? Is software security a solved problem?"} {"_id": "205381", "title": "Is R6RS backwards compatible with R5RS?", "text": "Is the new Scheme standard, R6RS, which was published in 2007, backwards compatible with the older standard R5RS? If not, is there a compatibility mode in R6RS?"} {"_id": "237136", "title": "Is it a good practice to decouple the membership system?", "text": "Currently I'm developing a project that basically is built with ASP.NET Web API. The membership system I'm using is ASP.NET Identity. The only problem I'm seeing with this is that the membership system is highly coupled with the API I'm building. Because of that, I've thought about building interfaces to expose the membership system functionality the app needs, and then using dependency injection to inject the current Identity system. The only thing is that in all the time I've been looking on the internet for material about authentication and authorization in ASP.NET, I've never seen this done. So I've started to wonder whether or not this is a good practice. Is it a good practice or not? If not, why?"} {"_id": "175141", "title": "How do you handle objects that need custom behavior, and need to exist as an entity in the database?", "text": "For a simple example, assume your application sends out notifications to users when various events happen.
So in the database I might have the following tables: TABLE Event EventId uniqueidentifier EventName varchar TABLE User UserId uniqueidentifier Name varchar TABLE EventSubscription EventUserId EventId UserId The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the `Event` table, except during initial installation, and during an update where a new `Event` might be created. At some point, when an event is generated, the application needs to look up the `Event` and get a list of `User`s. What's the best way to link the event in the source code to the event in the database? ## Option 1: Store the `EventName` in the program as a fixed constant, and look it up by name. ## Option 2: Store the `EventId` in the program as a static `Guid`, and look it up by ID. ## Extra Credit In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my `Event` entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance: class Event { public Guid Id { get; } public string EventName { get; } public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; } public void NotifySubscribers() { foreach(var eventSubscription in EventSubscriptions) { eventSubscription.Notify(); } this.OnSubscribersNotified(); } public virtual void OnSubscribersNotified() {} } class WakingEvent : Event { private readonly IWaker waker; public WakingEvent(IWaker waker) { if(waker == null) throw new ArgumentNullException(\"waker\"); this.waker = waker; } public override void OnSubscribersNotified() { this.waker.Wake(); base.OnSubscribersNotified(); } } So, that means I need to map `WakingEvent` to whatever key I'm using to look it up in the database. Let's say that's the `EventId`. Where do I store this relationship? Does it go in the event repository class? Should the `WakingEvent` declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the `WakingEvent` like this: public T GetEvent<T>() where T : Event { ... // what goes here? ... } I can't be the first one to tackle this. What's the best practice?"} {"_id": "138343", "title": "Is it a good practice to use Auto Format on the text?", "text": "I'm using Eclipse and I will talk about it as an example, but I am sure that this is the case in any other IDE. Eclipse has the `CTRL`-`Shift`-`F` command which automatically formats your code. Is it a good practice to use this kind of formatting? I'm asking in the context of a big project in which there may be many programmers working on the same files, and all these are administered by a CVS system. Thank you!"} {"_id": "175143", "title": "Brief explanation for executables in a GNU/Clang Toolchain?", "text": "I roughly understand that cc, ld and other parts are called in a certain sequence according to schemes like Makefiles etc. Some of those commands are used to generate those configs and Makefiles. And some other tools are used to deal with libraries. But what are the other parts used for? How are they called in this process? Which tool would use various parser generators? Which part is optional? Why?
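For concreteness, my rough mental model so far is just the classic four stages, which (if I understand correctly) could be driven by hand roughly like this:

    cc -E main.c -o main.i   # preprocess
    cc -S main.i -o main.s   # compile to assembly
    as main.s -o main.o      # assemble
    cc main.o -o main        # link (the driver invokes ld with the right startup files and libraries)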
Is there a brief summary that explains how the tools in a GNU or LLVM/Clang toolchain are organised and called when building a C/C++ project? Thanks in advance. EDIT: Here is a list of executables for Clang/LLVM on Mac OS X: ar clang dsymutil gperf libtool nmedit rpcgen unwinddump as clang++ dwarfdump gprof lorder otool segedit vgrind asa cmpdylib dyldinfo indent m4 pagestuff size what bison codesign_allocate flex install_name_tool mig ranlib strip yacc c++ ctags flex++ ld mkdep rebase unifdef cc ctf_insert gm4 lex nm redo_prebinding unifdefall"} {"_id": "179647", "title": "tips, guidelines, points to remember for rendering professional code?", "text": "I'm talking about giving clients professional-looking code. The whole nine yards, everything you hardcore professional highly experienced programmers here probably do when coding freelance or for the company you work in. I'm fresh out of college and I'm going into freelance. I just want to be sure that my first few projects leave a good after-taste of professionalism imprinted on the clients' minds. When I Googled what I'm asking here, I was given pages that showed various websites and tools that let you make flashy websites and templates etc. The `$N package` and such stuff. I can't recall the word experts use for it. Standard, framework [I know that's not it]. English isn't my first language so I'm sorry I don't really know the exact phrase for it. That abstract way of writing code so that you don't come across as a sloppy programmer. That above-mentioned way of programming **websites and desktop software [in python/C/C++/Java]**. EDIT: I can work on accruing vast knowledge and know-how and logic building etc. What I'm asking for are the programming standards/guidelines you guys follow so that the client, on seeing the code, feels that it's a professional solution. Like comment blocks, a particular indentation style, something like that. Is there any book on it, or a specific list of points for enterprise-type coding? Especially here, as in my case, for building websites [php for now..], and desktop software [c/c++/java/python]"} {"_id": "175146", "title": "Should tests be in the same Ruby file or in separated Ruby files?", "text": "While using Selenium and Ruby to do some functional tests, I am worried about the performance. So is it better to add all test methods in the same Ruby file, or should I put each one in a separate file?
Below is a sample with all tests in the same file: # encoding: utf-8 require \"selenium-webdriver\" require \"test/unit\" class Tests < Test::Unit::TestCase def setup @driver = Selenium::WebDriver.for :firefox @base_url = \"http://mysite\" @driver.manage.timeouts.implicit_wait = 30 @verification_errors = [] @wait = Selenium::WebDriver::Wait.new :timeout => 10 end def teardown @driver.quit assert_equal [], @verification_errors end def element_present?(how, what) @driver.find_element(how, what) true rescue Selenium::WebDriver::Error::NoSuchElementError false end def verify(&blk) yield rescue Test::Unit::AssertionFailedError => ex @verification_errors << ex end def test_1 @driver.get(@base_url + \"/\") # a huge test here end def test_2 @driver.get(@base_url + \"/\") # a huge test here end def test_3 @driver.get(@base_url + \"/\") # a huge test here end def test_4 @driver.get(@base_url + \"/\") # a huge test here end def test_5 @driver.get(@base_url + \"/\") # a huge test here end end"} {"_id": "175147", "title": "Alternate method to dependent, nested if statements to check multiple states", "text": "Is there an easier way to process multiple true/false states than using nested if statements? I think there is, and it would be to create a sequence of states, and then use a function like `when` to determine if all states were true, and drop out if not. I am asking the question to make sure there is not a preferred Clojure way to do this. Here is the background of my problem: I have an application that depends on quite a few input files. The application depends on .csv data reports; column headers for each report (.csv files also), so each sequence in the sequence of sequences can be zipped together with its columns for the purposes of creating a smaller sequence; and column files for output data. I use the following functions to find out if a file is present: (defn kind [filename] (let [f (File. filename)] (cond (.isFile f) \"file\" (.isDirectory f) \"directory\" (.exists f) \"other\" :else \"(cannot be found)\" ))) (defn look-for [filename expected-type] (let [find-status (kind-stat filename expected-type)] find-status)) And here are the first few lines of a multiple if which looks ugly and is hard to maintain: (defn extract-re-values \"Plain old-fashioned sub-routine to process real-estate values / 3rd Q re bills extract.\" [opts] (if (= (utl/look-for (:ifm1 opts) \"f\") 0) ; got re columns? (if (= (utl/look-for (:ifn1 opts) \"f\") 0) ; got re data? (if (= (utl/look-for (:ifm3 opts) \"f\") 0) ; got re values output columns? (if (= (utl/look-for (:ifm4 opts) \"f\") 0) ; got re_mixed_use_ratio columns? (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts))) re-in-data (utl/fetch-csv-data (:ifn1 opts)) re-val-cols-out (first (utl/fetch-csv-data (:ifm3 opts))) mu-val-cols-out (first (utl/fetch-csv-data (:ifm4 opts))) chk-results (utl/chk-seq-len re-in-col-nams (first re-in-data) re-rec-count)] I am not looking for a discussion of the best way, but what is in Clojure that facilitates solving a problem like this."} {"_id": "178511", "title": "Why is heap size fixed on JVMs?", "text": "Can anyone explain to me why JVMs (I didn't check too many, but I've never seen one that didn't do it that way) need to run on a fixed heap size? I know it's easier to implement on a simple contiguous heap, but the Sun JVM is now over a decade old, so I'd expect them to have had time to improve this.
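To be clear, by "fixed heap size" I mean the ceiling you pick on the command line when the process starts, e.g. (the class name is made up):

    java -Xms256m -Xmx1024m com.example.MyServer

Once the -Xmx limit is reached you get an OutOfMemoryError, no matter how much memory the OS could still provide.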
Needing to define the maximum memory size of your program at startup time seems such a 1960s thing to do, and then there are the bad interactions with OS virtual memory management (GC retrieving swapped-out data, inability to determine how much memory the Java process is _really_ using from the OS side, huge amounts of VM space wasted (I know, you don't care on your fancy 48bit machines...)). I also guess that the various sad attempts to build small operating systems inside the JVM (EE application servers, OSGi) are at least partly to blame for this circumstance, since running multiple Java processes on a system invariably leads to wasted resources because you have to give each of them the memory it might have to use at peak. Surprisingly, Google didn't yield the storms of outrage over this that I would expect, but they may just have been buried under the millions of people finding out about fixed heap size and just accepting it as a fact."} {"_id": "162951", "title": "My work isn't being used... what can/ should I do?", "text": "Several months ago I was approached by a small business, who had seen my work previously and asked me to create a website for them. Since then, the website hasn't changed one bit and I haven't heard a word from them. This sucks for them as they paid for a website and haven't used it. It's frustrating for me because I spent a huge amount of time on the website and feel that all of that effort has been wasted; furthermore, I don't feel I can use the website in my portfolio/CV. I was _thinking_ of offering to go round to their office for one day, and update the website for them then and there; but I'd need their support whilst there (to get the content for the about page, to get information on their products etc.) and I don't want to disrupt their work day, nor do I want to sit there like a spare tyre and get nowhere. Furthermore, if I were to do this, should I expect to receive money for it? It's a day of my life, but I'm doing it for my benefit rather than theirs (but they benefit as well). Has anyone else had experience of a client not using their product; how did you handle it? **Additional background for those who want it:** The company is a local travel agent, and the website lets them CRUD offers and locations, and has several other static pages (about, contact, etc.) At the time of creating the website, I filled the static pages with lipsum, and the offers and locations with fake information, so that I could give the business an idea about what the final pages would look like; during the handover, I guided them through the CRUD forms (they made notes) and said if they sent me the text for the pages, I'd update it."} {"_id": "162956", "title": "How to manage scripting language file (Python, for example)?", "text": "I am writing Python, and I find that I can put all the script files in one directory, but it seems very messy. So, are there any conventions in the community for organizing script files? Thanks."} {"_id": "162955", "title": "How to translate formulas into form of natural language?", "text": "I am currently working on a project aimed at evaluating whether an Android app crashes or not. The evaluation process is: 1. Collect the logs (which record the execution process of an app).
2. Generate formulas to predict the result (the formulas are generated by GP). 3. Evaluate the logs with the formulas. Now I can produce formulas, but for the users' convenience I want to translate the formulas into a natural-language form and tell users why the crash happened. (I think it looks like \"inverse natural language processing\".) To explain the idea more clearly, imagine you have a formula like this: 155 - count(onKeyDown) >= 148 It's obvious that if count(onKeyDown) > 7, the result of \"155 - count(onKeyDown) >= 148\" is false, so a log containing more than 7 onKeyDown events would be predicted \"Failed\". I want to show users that if the onKeyDown event appears more than 7 times (155-148=7), this app will crash. However, the real formula is much more complicated, such as: (< !( ( SUM( {Att[17]}, Event[5]) <= MAX( {Att[7]}, Att[0] >= Att[11]) OR SUM( {Att[17]}, Event[5]) > MIN( {Att[12]}, 734 > Att[19]) ) OR count(Event[5]) != 1 ) > (< count(Att[4] = Att[3]) >= count(702 != Att[8]) + 348 / SUM( {Att[13]}, 641 < Att[12]) mod 587 - SUM( {Att[13]}, Att[10] < Att[15]) mod MAX( {Att[13]}, Event[2]) + 384 > count(Event[10]) != 1)) I tried to implement this function in C++, but it's quite difficult; here's the snippet of code I am working on right now. Does anyone know how to implement this function quickly? (Maybe with some tools or research findings?) Any idea is welcome. :) Thanks in advance."} {"_id": "17945", "title": "About compilers, coding, large code bases, and some other stuff too", "text": "This is not purely a programming question. I work in a large company. We have lots of code written in C++: 1. Multiple simultaneous projects in progress and implementations that have to be supported. 2. Multiple code paths due to the first reason. Some are ~10 years old. 3. Multiple target platforms. Most of the code has to be standard. Compiler-specific code is frowned upon (which is a good thing, I suppose). 4. A large number of processes (>30 easily), each having in excess of 10k lines even in the smaller processes. 5. Multiple developers per process, multiple processes per developer, multiple owners for a process over time (who may now be unavailable, as some processes have had a fair number of owners). This means there is no definite authority on a process for some processes. 6. Not all functionality of all the processes is used all the time. There may exist many bugs in your process you don't know about simply because you haven't come across a situation where a certain branch of code is executed at runtime. Eliminating all these is not so easy. 7. Timeline constraints and allocation to multiple projects and multiple processes mean that developers are usually left with no choice but to make do with existing code and try to work around limitations rather than radically change the coding of a process so that it can be much better. 8. Most processes are performance critical. This is a huge incentive for people to adhere to C-style coding and shy away from OOP. 9. Template, STL and metaprogramming usage is nearly non-existent. 10. Use of third-party libraries is next to impossible, i.e. no Boost! Oracle is an exception. 11. Basic functionalities needed are implemented in libraries which expose C-like interfaces, which make you look like an idiot if you are trying to write something in OOP or using templates, because every time you use them you have to wrap them up to hide the C-like interface, or just forget about the whole thing and do C-like coding yourself.
Not to mention the additional bugs you may introduce by trying to wrap. The libraries themselves have been heavily used and are quite reliable in most cases. So the natural inclination is to go with them in a time-constrained project. 12. People are not very receptive to new ideas. I do not have **ANY** idea how to remedy this. I myself am curious and willing to try out new stuff. I find it hard to accept when other people don't. Especially when those people are not in managerial positions. Management decisions I can tolerate even if I don't agree. But what is there to do about complacency in developers? Q1: What are the questions I should be asking myself? Please note that I love C++, and I have no intention whatsoever of quitting my job because the company is great. And it's the best in my country. Q2: What are the ways I can improve, and help others improve? Q3: I had this discussion with a dev lead about upgrading our compiler to the newer version because it has a lot of nice features from c++0x, and I thought the sooner we try to adjust to those, the better. His concern was the risk associated with it. Reasoning: 1. Do we really need those features? 2. Our current main OS does not have support for compiler version XXX. Of course we can always use XXX, but if something goes wrong the OS vendor can say \"Hey, we don't support that!\". 3. It has to be a gradual process, not an overnight one. (In my opinion this implies that we should always be a couple of years or more behind the current front.) I know this is not a \"yes no question\", but what is the correct approach to this problem? Q4: Are all or some of the above stupid questions? Thanks for any feedback. p.s. Writing this has also made me realize stuff that had not occurred to me before. So I guess it would not go to waste if someone were to close this even without a single answer (which I hope will not happen). :)"} {"_id": "17947", "title": "How do you rate Hosting?", "text": "1) Up-time 2) Latency 3) Throughput 4) Server software such as SQL, etc. 5) $$$ -- transfer, overages, etc. What else? And is there a web site rating hosts by these criteria that is actually reliable and not simply a dishonest advertising site... or do you guys have any recommendations? (BTW, someone with 150+ rep should create \"hosting\" and \"web hosting\" tags.)"} {"_id": "196212", "title": "How can I make being code reviewed by someone who doesn't know the language easier?", "text": "I'm in a team where I am the only Java developer. The rest of the team are VERY experienced in their own programming language, but their area of expertise is not object orientated. The first time I had a code review, I found it was a lengthy process that involved a lot of tangents about how Java works, or how OOP works, etc. The second time, we got caught up on something and they had to bail as it was taking too long. **How can I make this process easier**? I desperately want my code reviewed as I myself am quite new to Java programming, and really to programming in general, but this is taking a long time (especially as, for instance, I'm writing quite large chunks of code by refactoring old code that the last Java dev worked on)."} {"_id": "207168", "title": "Internal exposure of implementation details on inheritance", "text": "I'm reading the \"Effective Java\" book, which suggests favoring composition over inheritance.
The example it gives shows something like this: public class InstrumentedHashSet<E> extends HashSet<E> { // The number of attempted element insertions private int addCount = 0; public InstrumentedHashSet() { } public InstrumentedHashSet(int initCap, float loadFactor) { super(initCap, loadFactor); } @Override public boolean add(E e) { addCount++; return super.add(e); } @Override public boolean addAll(Collection<? extends E> c) { addCount += c.size(); return super.addAll(c); } public int getAddCount() { return addCount; } } This presents a problem because the implementation of `addAll` in `HashSet` uses the method `add`. So, when calling `addAll`, each new element is counted twice - once in `addAll` and once in `add`. In the next chapter it is explained that if we choose to allow extending our class, we should explicitly document the inner workings of the methods that use overridable methods (meaning that in the `addAll` documentation we should specify its use of `add` and commit to this implementation **forever**). I think that a better practice would be to decide that each method that is both an overridable API method and also used by the inner implementation should be extracted to an inner, private method. So our ideal `HashSet` would have these methods: public class HashSet<E> implements Set<E> { // Ignore irrelevant code @Override public boolean add(E e) { // The new add implementation only wraps the innerAdd method return innerAdd(e); } @Override public boolean addAll(Collection<? extends E> c) { // Do the same logic as before but use innerAdd instead of add } private boolean innerAdd(E e) { // The original add logic will be moved to here } } This way we encapsulate the inner implementation and allow extending the class without fear of overriding a method which is used by the inner implementation. Of course we can still use the original add as before, because by calling `super.add(e)` we de facto use `innerAdd`, so that's not a problem. I would like to know whether this might be a good practice (not for `HashSet`, which is already committed to the code that uses it, but for new classes that are meant to be overridden)?"} {"_id": "196211", "title": "How to maintain the 'LICENSE' file?", "text": "Many projects on Github include a `LICENSE` file, usually looking like: Copyright (C) 2012-2013 Mr. Creator Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: ...etc... My problems are: 1. If I create a project, should I add the name of every single contributor to `LICENSE`? 2. If I fork someone else's project, should I add my name to its `LICENSE`? 3. What if I add some files under another license?"} {"_id": "246038", "title": "Is hadoop designed only for \"simple\" data processing jobs, where communications between the distributed nodes are sparse?", "text": "I am not a professional coder, but rather an engineer/mathematician who uses computers to solve numerical problems. So far most of my problems are math-related, such as solving large-scale linear systems, performing linear algebra operations, FFTs, mathematical optimization, etc. At smaller scales, these are best handled by Matlab (and similar high-level math packages).
At large scale, however, I have to resort to old-fashioned (and VERY TEDIOUS) lower-level languages such as Fortran or C/C++. That is not all: when the problem becomes excessively large, parallelization becomes necessary, and MPI in that regard is a nightmare to wrestle through. I have been hearing about hadoop and \"big data\" recently on a regular basis. But my impression is that hadoop is NOT a general-purpose parallel computing paradigm, in the sense that it handles data that need minimal mutual communications (correct me if I am wrong). For example, in a general LA operation on a large data set distributed among many nodes, rather than processing its own piece of data independently, each node must communicate with EVERY other node to send/get info before its own data can be correctly processed, and the whole data set is updated only after ALL communications have been done. I hope I made my point clear: in numerical applications, data processing is naturally GLOBAL, and the \"graph\" for communication often is a FULL GRAPH (though there ARE exceptions). My question is, is hadoop suitable for such a purpose, or is it only designed for \"simple\" data processing jobs, where communications between the distributed nodes are sparse?"} {"_id": "94931", "title": "Did eastern PKUFFT algorithm beat FFTW3 (Fastest FFT on the west) 20 fold?", "text": "I wonder if PKUFFT is a Chinese algorithm/library, and is it really better than FFTW or MKL? Thank you. Edit: I will elaborate. My personal interest as a programmer is in fast parallel algorithms like FFTW3, which was my favorite until I stumbled upon a Chinese claim of superior performance with PKUFFT. It is on Google, if you search for FFTW3, Intel MKL, GPU and such things. I don't know anything about this library, but I would be very interested to learn. I can guess PKU stands for Peking Univ. There is nothing on the web except a presentation and mentions of some commonly available hardware (GPU, infiniband, etc.). Everything, including compilers, has been available for years. The improvement of 2000% is significant. The news is a year old. But there are no details about the algorithm itself. It would be interesting to know if there is a genuine discovery in algebra, or just a technical effect with the unfair advantage of knowing a specific platform better than competing teams."} {"_id": "56772", "title": "Checking out systems programming, what should I learn, using what resources?", "text": "I have done some hobby application development, but now I'm interested in checking out systems programming (mainly operating systems, the Linux kernel, etc.). I know low-level languages like C, and I know minimal amounts of x86 Assembly (should I improve on it?). What resources/books/websites/projects etc. do you recommend for one to get started with systems programming, and what topics are important? Note that I know close to nothing about the subject, so whatever resources you suggest should be introductory resources. I still know what the subject _is_ and what it includes etc., but I have not done systems programming before (but some application development, as previously noted, and I'm familiar with a bunch of programming languages as well as software engineering in general and algorithms, data structures etc.)."} {"_id": "254077", "title": "What are steps in making an operating system in C ?", "text": "I am trying to make my own OS. This is for educational purposes only, so that I get to understand the internals as well as get a good idea of low-level programming.
I have some prior application development experience in C#/python/C++/C. I am a noob in assembly language (very little experience and knowledge). I understand that in writing an operating system, we can't go without assembly language. Currently, I have just printed a string in assembly language in the boot sector using qemu and BIOS interrupts. What I want is for someone to specifically point out the steps I need to follow to make my operating system run C programs, so that I can start writing my OS in C. Any other advice to help a newbie with this is also welcome. Although I have looked into many OS development related tutorials/websites, I can't seem to find this information anywhere."} {"_id": "246034", "title": "Power of 2 and performance in SQL Server", "text": "Many mathematical operations (such as division, multiplication, etc.) are supposed to be computed faster when dealing with powers of two (C++, C#?, ...). For instance 15 * 256 = 0x0f shifted left (fast) 8 bits = 0x0f00 = 3840, whereas 15 * 255 = 0x0f multiplied by (slow) 0xff = 0x0ef1 = 3825. Does this kind of optimization even happen in SQL Server? I don't think it does; I tried measuring the difference in execution time of queries such as these: SET STATISTICS TIME ON SELECT AVG(N / 256) FROM DBO.V_VIRTUAL_NUMBERS WHERE N < 1048576 SELECT AVG(N / 255) FROM DBO.V_VIRTUAL_NUMBERS WHERE N < 1000000 SET STATISTICS TIME OFF which resulted in: SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. (1 row(s) affected) SQL Server Execution Times: CPU time = 203 ms, elapsed time = 207 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. (1 row(s) affected) SQL Server Execution Times: CPU time = 218 ms, elapsed time = 206 ms."} {"_id": "122267", "title": "What would be the prototype of printf?", "text": "Look at these calls closely: printf(\"hello, world\\n\"); printf(\"%d\", 2); printf(\"%d%g\\n\", 2, 2.3); We see that printf can accept any type and any number of args. However, we know that functions in C take a fixed-length argument list and should have a prototype compatible with the args. What would be the prototype of printf?"} {"_id": "83994", "title": "Setting up CI with [Jenkins, TeamCity, etc] - which source code control?", "text": "I am bound and determined to set up CI at work. I have played with Jenkins, and will download TeamCity when I get home (damn you, work-enforced download filters!) I have no IT support, and only so much spare time. My main question: Can Jenkins or TeamCity integrate with git or mercurial repositories that exist only on a shared drive rather than a true server (via http, https, ssh, etc)? I would like to use a DVCS with my CI setup, but would also be ok with settling on SVN because it is familiar, and already set up (but no projects to migrate if I decide on a DVCS). I am familiar with the benefits of DVCS and I don't think they will come into play very often in my current environment. My CI will live on a plain Windows XP box on the network.
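For what it's worth, the repositories would just be bare repos sitting on a network share, so the CI server would have to poll them through a plain file URL, something like (the path is made up):

    git clone file:////fileserver/projects/myrepo.git

rather than an http/ssh remote.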
Given the above, will it be fairly easy to integrate either Jenkins or TeamCity with git/mercurial, or should I just stick with SVN as it is more or less ready to go?"} {"_id": "252209", "title": "Rule order for Parsing Lists with LALR(1)", "text": "When creating the grammar for parsing a list (something like “`ITEM*`”) with a LALR(1) parser, this basically can be done in two ways: list : list ITEM | ; or list : ITEM list | ; What are the pros and cons of these two possibilities? In general, can advice be given on which one to choose, or does this depend on the complete grammar? What about the case “`ITEM+`”, i.e. list : list ITEM | ITEM ; and list : ITEM list | ITEM ; Do the same pros and cons apply here, or different ones?"} {"_id": "123797", "title": "C++ and system exceptions", "text": "Why doesn't standard C++ respect system (foreign or hardware) exceptions? E.g. when a null pointer dereference occurs, the stack isn't unwound, destructors aren't called, and RAII doesn't work. The common advice is \"to use the system API\". But on certain systems, specifically Win32, this doesn't work. To enable stack unwinding for this C++ code // class Foo; // void bar(const Foo&); bar(Foo(1, 2)); one should generate something like this C code Foo tempFoo; Foo_ctor(&tempFoo); __try { bar(&tempFoo); } __finally { Foo_dtor(&tempFoo); } and it's impossible to implement this as a C++ library. * * * **Upd:** The Standard doesn't forbid handling system exceptions. But it seems that popular compilers like **g++** don't respect system exceptions on _any_ platform, just because the standard doesn't require it. The only thing I want is to use RAII to make the code readable and the program reliable. I don't want to put hand-crafted try\finally around every call to unknown code. For example, in this reusable code, `AbstractA::foo` is such unknown code: void func(AbstractA* a, AbstractB* b) { TempFile file; a->foo(b, file); } Maybe someone will pass `func` an implementation of `AbstractA` which, every Friday, does not check whether `b` is NULL; an access violation will happen, the application will terminate, and the temporary file will not be deleted. How many months will users suffer because of this issue, until either the author of `func` or the author of `AbstractA` does something about it? * * * Related: Is `catch(...) { throw; }` a bad practice?"} {"_id": "123794", "title": "Reasonable to use different version controls for different parts of a web application?", "text": "I am contemplating a few different techniques for organizing code for a larger web application. For this situation, the server runs some Microsoft solution (probably ASP.NET MVC or OpenRasta) and the client (JavaScript, CSS, etc.) will consume this. Traditionally in an MVC application, the controllers and models along with the client codebase all live in the same project/solution. Our current solutions use TFS for source control. I want to start using Sass and Compass for the front-end, which will require Ruby, and I would like to use git for the version control of the front-end and continue to use TFS for the server components. The problem I have is that in order for someone to run the solution, you would need to pull from both repositories, and I am not sure if that is reasonable. Would you attempt something like this? Can you think of a better way for the projects to be structured so that front-end is separated from back-end?
If I have not explained this well enough, I will be happy to elaborate."} {"_id": "252204", "title": "Extending database model of ORM in subproject", "text": "I have a maven project which contains some entities which are stored in a database. The purpose of this project is to manage personal information, users, locations, etc. This project can work on its own. I also have a project which depends on the first and imports data into the database. But it's more than a simple import, so it needs additional information attached to these entities. This information is only needed for imported data; otherwise it is not set. ## The problem The problem is that I don't know how to \"attach\" this information to the entities in the first project without declaring it there. It wouldn't make sense to declare it there, because it doesn't belong there and it would just mix two domains. Putting both projects into one is not an option either, because both are already big and still growing. ## Approach to the problem One approach I was thinking of is to use another entity which contains all the information needed in the importer project and associate it with the original entity. This would almost work, but it would create a terrible database model which I would never create if I weren't using an ORM. I'm starting to wonder if I should even use an ORM, because with the ORM I use (Hibernate) I have also had to make several design changes for the worse, just because the ORM doesn't support the original design (the Null Object Pattern or the Interface Segregation Principle, for example)."} {"_id": "252207", "title": "Menu building pattern", "text": "I'm having trouble getting my head around the active-state handling of a menu when the menu isn't used for routing. I come from Drupal, where the menu system handles the routing as well, so setting the active state and active-trail state is handled by the route (which acts as a menu rendering system as well). Now, a lot of PHP frameworks have Router classes that handle the routing. This seems a good separation, since a Menu should not be aware of POST || OPTIONS || ... requests. But when writing the frontend, I found myself hardcoding the menu, or storing everything in the DB and passing those values to a view. What I don't like about this approach is that you are kind of creating a copy of what you already wrote in your Router, but now using the Menu class. An example: Route::get('/somewhere','routename.somewhere','showStuffController'); Route::post('/somewhere','routename.somewhere','saveStuffController'); Menu::add('label.somewhere','routename.somewhere'); You are separating concerns here, so that is nice. But Menu depends heavily on Route to set its active state. Menu will also have to know about hierarchy to set the active trail. So yes, setting active-trail and active-status classes is actually a view thing. But having if ( Route::currentName() === $menuitem->getRouteName() ) { print 'active'; } all over your views seems stupid. Then add all those annoying active-trail ifs and it's real bloat. Handling that before the view gets rendered and setting an active-trail flag to true just seems so ugly the way I do it now (a foreach looping over all children that loops over all children, ...). My question is: Is there a pattern or a smart way to get this cleaner, better, ...? How should one handle the active-trail 'problem'? I was thinking of rendering child -> parent. So start at the deepest level and then work my way up.
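Roughly what I mean, as a sketch (PHP, with a made-up MenuItem API): the matching leaf walks up and flags each ancestor as part of the trail, so the views only ever read a boolean.

    function markActiveTrail(MenuItem $leaf) {
        $leaf->setActive(true);
        // the child knows its parent, so it can climb up the tree
        for ($item = $leaf->getParent(); $item !== null; $item = $item->getParent()) {
            $item->setInActiveTrail(true);
        }
    }
    // $leaf would be the item whose route name equals Route::currentName()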
But then the child knows about its parent while the parent knows nothing about its children (seems weird)."} {"_id": "252203", "title": "Why do we need to serialize data using a serialization framework e.g. avro", "text": "In this book, the author is using avro to serialize data before it is processed by pig. What I don't understand is why we need to do it using a serialization framework like avro. Why don't we just use json as the serialization data format? That way we can just use a serializer/deserializer library in a language, e.g. `data.json` in clojure."} {"_id": "118501", "title": "From Delphi to C# for Dummies", "text": "I'm a Delphi and C# coder and have been given the assignment to introduce the most fundamental concepts of .Net/C# to my coworkers in some training sessions. They are seasoned Delphi Win32 coders but still use a fairly old version, so they aren't familiar with some language features which are commonplace today, like generics, and they have never used .Net before. Now I'm thinking about the best way to tackle this. What are the most important topics I should cover? I will have maybe 1-2 days for this, so I can't delve into specifics too far but have to give them a good bird's-eye view, especially of the features that will matter to them. I plan to emphasize the differences from Delphi; I think this will help them. I'd like to build a small Zoo application and add something to it for every step I explain, so they can have a look at the created examples later whenever they want. My topics so far: * Short introduction of .Net, CLI, CLR * Short introduction of Visual Studio * Showing class/object equivalents, inheritance, creating some objects Animal->Cat->Tiger, explaining Namespace on the way * Garbage Collection and Interfaces (very different from Delphi), Bird gets ICanFly * Events / Delegates (short multicast example), Ape calls all other apes because he found a bunch of bananas or something like that * Generics, creating a generic animal list * LINQ, query animals by categories * Task Parallel Library, the same but this time multithreaded Some topics I'm not sure I should include: * Attributes * Operator Overloading * Anonymous Methods / Lambdas * Nullable Types * Reflection * P/Invoke Maybe this should be delayed to a second session; maybe it's too much for the beginning, I'm not sure. I'm also undecided if I should include the coming asynchrony functionality in .Net 4.5. Maybe this is premature. Then I will have to introduce WPF, but I think this should be a question for itself."} {"_id": "66919", "title": "Should every front-end developer understand the basic aspects of design?", "text": "I'd say that we're developing software in a world where the front-end of an application is probably the most important part. The increasing ability for a user to access and interact with software almost instantly in the cloud is making the first few minutes of user interaction crucial in determining whether the application will get any further attention, making UI design and the \"intuitiveness\" of the application extremely important factors. As a developer, I've never really appreciated the importance of the design of my applications. I usually write the code that works, and take advice from either a graphics designer or project leader on ways to improve the usability / look of the app. But maybe this isn't the way to develop any more.
I can't help but wonder about the time and resources wasted on a developer being told by a graphics designer to move a button 2 pixels to the left (for example) when, if that developer had a basic understanding of software front-end design, they'd be able to make the decision themselves. I can understand that large design decisions, such as the overall look of an application and the global fonts used etc, are all jobs that should be done by someone with the appropriate knowledge of such an arena, but should we, as the developers of the front-end, know enough of design theory to be able to make smaller design decisions by ourselves? * Should the front-end design of software and some basics of user interaction be taught alongside the current programming package? * What level of knowledge should a developer expect to have in regards to user interaction & design? * Should an understanding of design hold a higher ground than it does at present in the context of resumés and qualifications? The subject is somewhat of a hobby for me, I find it fascinating studying users' interaction with a program, but should it be part of the core of software development? Just to clarify, when I talk about \"design\" I'm talking about front-end design, rather than the design of the software architecture (something which every developer should understand)."} {"_id": "238372", "title": "Auto Create VM and display IE", "text": "I have just acquired an old Dell Poweredge SC1425 from an old friend; this server will act as a development box for web-based applications. All has been going well; I have installed the following packages onto the system. **Packages** 1. Apache2 2. PHP5 (including MCrypt & PHP5-JSON) 3. MySQL 4. SSH Server & Client 5. Git & Subversion 6. SFTP Server 7. VirtualBox 8. Node.JS 9. Python 10. Buildlibs (g++, build essentials, fakeroot, make, checkinstall) 11. WebMin 12. Samba The server is running Debian 7 64bit with the packages listed above. I have set up some virtual machines (Windows XP & Windows 7), and what I would like is for the OS to start Internet Explorer automatically when the user starts the VM. The reason why is so that I can test my web-based applications on older OSes and browsers. After the user has finished with the box, or mistakenly left it switched on for X amount of time, I would like the VM to shut down automatically and also wipe any data which was added during that session. I am not sure whether what I am asking can be done, so I thought I would ask."} {"_id": "232601", "title": "How to Think like a computer Scientist. Chapter 3, Question 2", "text": "http://interactivepython.org/runestone/static/thinkcspy/PythonTurtle/helloturtle.html I'm stuck on question 3. Give three attributes of your cellphone object. Give three methods of your cellphone. This is just an intro question. We haven't created a cellphone class, so I don't know how to instantiate a cellphone object yet (Is this the right lingo?). Can someone help me with what they are looking for? How do I find the properties and methods of the cellphone object? Is the question just asking for possible future properties of my cellphone? Is it just assuming that I will create some code that will allow me to change the properties of my phone? So all I need to do is list 3 properties of my cellphone like color, brand, and length? Is that it? Oh, and possible methods would be cell number dial, web browser address, and address book addition? Is that it?
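In other words, is something like this (just my own guess at what they want) all the question is after?

    class Cellphone:
        def __init__(self, color, brand, length):
            # three attributes
            self.color = color
            self.brand = brand
            self.length = length

        # three methods
        def dial(self, number):
            print("dialing", number)

        def browse(self, address):
            print("opening", address)

        def add_contact(self, name, number):
            print("saving", name, number)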
If so, please upvote this so I can ask questions again on Stack Exchange."} {"_id": "245602", "title": "Why can't arrays be passed as function arguments in C?", "text": "Following this comment, I've tried to google why, but my google-fu failed. Comment from link: > [...] But the important thing is that arrays and pointers are different > things in C. > > Assuming you're not using any compiler extensions, you can't generally pass > an array itself to a function, but you can pass a pointer, and index a > pointer as if it were an array. > > You're effectively complaining that pointers have no length attached. You > should be complaining that arrays can't be passed as function arguments, or > that arrays degrade to pointers implicitly."} {"_id": "245604", "title": "What is a typical team size for a complex half million lines-of-code C# desktop application?", "text": "Imagine the following scenario: * Codebase of 600,000 lines of code (C#) * All in a single desktop application * All written by a single developer (myself) over 8 years (3 years worth of actual coding time). * The software is powerful and flexible (obviously vague), thus there is inherent complexity in the code, even if it is modular and low in debt. * The software is in a niche machine control industry, so there is a very long ramp up for domain knowledge of both the machines, and the code. * The software will become more and more custom where individual customers get their own addons and enhancements What would be a recommended software team size for a half million line codebase (C# desktop) like the one above? What would be ideal for a typical 500k LOC C# desktop application? This software is not 'end to end' lifecycle. It is 'end to infinity', meaning it will continue to grow and get bigger and more powerful and complex. Especially as more engineers are added to the team."} {"_id": "213865", "title": "Could a reflog replace tags for bookkeeping?", "text": "My repository has a branch `production` and is filled with countless tags of the format `production-20130101-1234` that point to what has been actually shipped on the given datetime. I think this solution is somewhat messy and unwieldy and I wonder if reflogs would fit in here better. The objective is **"make sure you can always find out what version has been deployed and when exactly"**. Tags do the job, but reflogs sound even nicer because: * They are made automatically, * They allow a good amount of awesome, like in the form of `git checkout origin/production@{3 months ago}`. That said, I don't know that much about reflogs and I'm not sure about possible downsides of such an approach. 1. I assume `origin/production` has its own reflog totally independent of my local remote-tracking `production` \- is that correct? 2. Doing 3 consecutive no-ff merges from an integration branch to my local `production` branch would make 3 entries in my local reflog, but pushing it to the central repo (as a part of the deployment procedure) would make 1 new entry in the "oracle" `origin/production` reflog, correct? **I'd like to maintain the "1 deployment = 1 record" invariant.** 3. How to make sure a reflog of `origin/production` isn't pruned and continues to provide complete historical information for everyone? 4. Any downsides or possible problems I didn't think of?"} {"_id": "220726", "title": "Contract Based Programming vs Unit Test", "text": "I am a somewhat defensive programmer and a big fan of Microsoft's Code Contracts.
Now I cannot always use C# and in most languages the only tool I have is assertions. So I usually end up with code like this: class { function() { checkInvariants(); assert(/* requirement */); try { /* implementation */ } catch(...) { assert(/* exceptional ensures */); } finally { assert(/* ensures */); checkInvariants(); } } void checkInvariants() { assert(/* invariant */); } } However, this paradigm (or whatever you would call it) leads to a lot of code clutter. I have started to wonder if it is really worth the effort and whether proper unit tests would already cover this?"} {"_id": "218720", "title": "Compiler doesn't inline anything?", "text": "I've rolled my own _SIMD_ -accelerated math library. It's gotten pretty complete, so naturally I went to conduct speed tests and optimize it. Btw this isn't premature optimization, the lib is actually complete in functionality, I really need it to be fast now. So anyway, I'm testing some vector dot product methods against the ones in Microsoft's DirectXMath. The difference between my vectors and the ones in DirectXMath is that in DirectXMath XMVECTOR is just a naked __m128, while mine is a __m128 inside a vector_simd class that is 16 byte aligned and with an aligned allocator. Now one would assume that with all possible optimizations enabled in release mode they would compile to the same thing. I mean `int a;` and `int arr[1];` compile to the same thing in release mode, just like my array template class has the same performance as a raw array and so on... but to my surprise my class's methods came up _2 times_ slower. I even tried just pasting the SSE code from DirectXMath into my class's method, it still came out slower. So the only thing left was the difference that one is a class and the other is raw. For a while it seemed that the overhead came from accessing it as a class member and/or from the constructor (which is empty...). However I put all the definitions of my vector class methods in its header and still it came up 2x slower than DirectXMath. Maybe the overhead comes from the __m128 being a class member? I don't understand why the compiler can't optimize that away, it's like having a struct with a single integer in it? Has anyone else had such issues? **EDIT:** OK, so it seems that my vector_simd class is 32 bytes, while __m128 is only 16 bytes, which is strange. When I removed the 16-byte alignment from my class it also became 16 bytes large. But this makes no sense, aligning a 16-byte class to 16 bytes should make it remain 16 bytes...."} {"_id": "218721", "title": "What's the best Git workflow for working with an open source project with employer-specific changes?", "text": "At my current employer, we are using an open-source project hosted on Github as a component of our application. I have been working on this project to both add some features that we need and to integrate it with our build systems. My manager and I are in agreement that we would like to submit as much of our work on this component as is reasonable back to the open-source project. My question is about what the best workflow/technique is to maintain my Git commits in such a way that I can easily separate out things that make sense to add back to the open-source project - bug fixes and new features that are sufficiently general - from things that are specific to our project, like build locations and application constants. What I have been doing so far is to maintain a private Git branch where I commit all of my changes, with appropriate granularity.
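One way to reduce the clutter described in the Code Contracts question above is to move the invariant and pre/post checks into a reusable wrapper. A rough Python sketch of that idea (the decorator is a homemade assumption, not an existing library):

import functools

def contract(requires=None, ensures=None):
    # Wraps a method with optional pre- and post-condition checks,
    # and checks the object's invariants before and after the call.
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            self.check_invariants()
            if requires is not None:
                assert requires(self, *args, **kwargs), "precondition failed"
            try:
                result = fn(self, *args, **kwargs)
            finally:
                self.check_invariants()  # runs even on exceptions
            if ensures is not None:
                assert ensures(self, result), "postcondition failed"
            return result
        return wrapper
    return decorate

class Account:
    def __init__(self, balance):
        self.balance = balance

    def check_invariants(self):
        assert self.balance >= 0

    @contract(requires=lambda self, amount: amount > 0,
              ensures=lambda self, result: result >= 0)
    def withdraw(self, amount):
        self.balance -= amount
        return self.balance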
I then use `cherry-pick` to add the open-sourceable commits to the master branch, and submit those back to Github. It seems like I ought to be using merge to do this, so that I don't keep creating separate commits with identical contents, but I'm not sure how to do this while excluding the company-specific commits and keeping a reasonable workflow. For example, I suppose I could commit open-sourceable things on master and company-specific things on the private branch, and then merge master into that branch as needed, leaving the master branch pointing at the commit before the merge, such that I could commit open-sourceable things to it again and then merge again. What seems awkward about this workflow is that I would need to decide in advance for everything I do which branch it belonged on, work on that to what seemed like completion, then commit it and merge before testing. One of the things that I really like about Git is how easy it is to just do whatever you need to make your application work, and then decide later how and where to commit your changes. As far as I can tell, if you're currently on a branch and have some work done, there's no easy way to just submit some of that work to another branch, since changing to that branch would require a clean working directory. Is what I'm doing a reasonable workflow for long-term contributions? Can anybody recommend a different workflow that might be better, and why it's better?"} {"_id": "218725", "title": "Doesn't dockerfile initial configuration defeat one of the key arguments of docker?", "text": "I'm very excited about Docker; if you're a developer in a big project you've suffered too much at the hands of _machine_ environments and multi-platforms. One of the key selling points of Docker is that by snapshotting/committing image state you avoid the risk of building an environment with a different version, possibly incompatible, of a given dependency. I get that, great! Doing the tutorials on dockerfile, isn't this exactly the same concept as running `npm`, `chef` or `bower`? Of course unix libraries would be more stable than most of the ones found on these library stores I just mentioned, but isn't this the same workflow that docker is fighting? Isn't the goal to tailor a container to your needs and then commit its state to then multiply at will? Am I missing something trivial, or is this the case? Does docker still give you a chance to build up images from script files, and by that step on its own toes?"} {"_id": "218726", "title": "DDD, creating an aggregate from outside the application service layer", "text": "I have a service (webservice) that is used to access the domain logic. One of the methods of the webservice is createFoo, where Foo is my aggregate. So the class that implements the webservice's method calls my application service that contains the createFoo method which is responsible for doing some stuff and persisting the new Foo instance. This new Foo instance is then returned by the application service to the class that implements the web service's method. Finally, the Foo instance is converted to something like FooWsResponse and then the web service returns the FooWsResponse instance.
Right now all this is more or less implemented like this: public class CreateFooService{ private FooService appService; private Converter converter; public FooWsResponse createFoo(CreateFooParams params){ Foo foo = FooFactory.createFoo(params); foo = appService.createFoo(foo); return converter.converter(foo); } } The problem with the code above is that the Application Service method is misleading, because it is not creating a Foo instance; it is only working on the domain logic associated to the creation of Foo and persisting the instance. So, the things that I want to know are: 1. Is it ok to create the aggregate outside the service layer? I know that I can pass the CreateFooParams as a parameter to appService.createFoo but I don't want to do this because my service layer will be coupled to the web service framework. Also, I don't want to pass all the properties of the class CreateFooParams as parameters of appService.createFoo because I will end up with a method with 15 parameters! Finally, I could create the Foo instance before calling the appService.createFoo method, but then the method's name is not accurate, and also all clients of my appService would need to know how to create an instance of Foo to pass it as a parameter of appService.createFoo, which is very bad and may lead to many issues. 2. Should I create a FooDto that receives all the data of CreateFooParams and pass it to the appService.createFoo method? This looks like a better option, but having 3 different classes with almost the same properties feels very redundant. I want to know what is the right thing to do according to the DDD guidelines. Thanks for your help."} {"_id": "40248", "title": "Is "call and return" a pattern or an antipattern?", "text": "Imagine you have this code: class Foo: def __init__(self, active): self.active = active def doAction(self): if not self.active: return # do something f=Foo(False) f.doAction() # does nothing This is nice code; I actually have (not in Python) a global active variable called "dosomething" and a routine called "something," where the first thing happening inside the "something" routine is `if not dosomething return`. An alternative implementation would call a routine that always performs an action, and the flag is checked at invocation time, as in the following code: class Foo: def doAction(self): # do something doaction = False f=Foo() if doaction: f.doAction() What is your opinion on this? I personally find that the first solution violates the least surprise principle: The caller is invoking an action which is never performed in response to a status which has been set somewhere else, but from the code it looks like the action is performed. Would you consider it a total pattern, a total antipattern, or just an option with no strong opinion for or against it?"} {"_id": "47732", "title": "Standard survey questions for gauging job satisfaction", "text": "Are there any standard surveys or questions that are useful for gauging software engineer or programmer job satisfaction? I hypothesize that determining software engineers' attitudes toward their jobs can reveal problems within an organization. I'm thinking about questions such as the following: * Are you reaching your full potential? * Are you learning or improving skills that you could put on a resume? * Are your skills valued within the company? * Are your skills utilized? * Is your voice heard? * Do you have a mentor? * Are you satisfied with your supervisor's leadership? * Are you able to experiment and try new ideas?
Obviously the questions would need to be tweaked to get useful answers from a survey."} {"_id": "212407", "title": "How common is string manipulation, really?", "text": "I've noticed a lot of programming introductions (almost any language) usually include a heavy barrage of string manipulation quite early, such as: * Count the number of "xx" in the given string. We'll say that overlapping is allowed, so "xxx" contains 2 "xx". * Reverse the given string and remove all vowels from it. Not just online courses, but even in degree programmes. But the thing is, I've never encountered a situation in "real life" where you had to do anything like that. Is it just luck, or do you really need to do heavy manipulation in the above manner in some fields of programming? Can you give a description or an example where it is needed? Edit: Just to clarify, not necessarily string processing itself as in echoing variables to a html template, or other basic stuff like that, but the more arcane type of processing, like reading every second character? But yes, perhaps it is just for training purposes instead of direct needs. The file format conversion is a good note, though."} {"_id": "195928", "title": "In a module-core program, how should modules interact with each other?", "text": "Short background: I'm using MEF/C# for developing a program that has one core (singleton) and multiple MEF parts (I call them modules; each module is a separate assembly with one class that contains the MEF export). The core imports all parts that are needed/available and initializes them (makes them read their configuration files; that stuff happens in the constructors, business as usual). Now, my problem is that I need modules to interact with each other (say, module A, a graphical user interface, needs to get some data from a database; it'd need to somehow call a function on the database module (module B).) How'd I do that? Sure, I could just make those parts/modules public in my core, add a reference to the core in my module and be done with it, but somehow, that'd feel ugly/bad (mainly because I have security concerns over such an implementation; everyone could just call that function, unless I use something like this, but still it doesn't feel right). Another approach would be to put all of those functions into the core, where they just call the corresponding function of the module. That feels bad, too, because I'd have to add a lot of stuff to my core and in some cases that'd be something that a potential customer won't ever use because he doesn't need/have the specific module. Does anyone know a better method of doing this? Maybe my train of thought went completely wrong somewhere?"} {"_id": "82110", "title": "Can I use the code created by another agency when working for a client?", "text": "Let's assume that my client ( _C_ ) has a corporate website, created by an agency ( _A_ ). _A_ created a pretty simple, yet well designed website which uses plain HTML and no backend to edit any of the data. _C_ has to issue (and of course pay) _A_ every time they want any changes to their website, which has become pretty expensive. Now _C_ asks _Me_ to write a dynamic module that integrates into the existing website. This module should be an event-planner that enables _C_ to change at least that part of the website on his own. I am a little bit worried whether it is ok for me to simply take some of _A_ 's graphics and HTML/CSS to use on my project (since I need the new module to be a part of the website).
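For reference, the two textbook exercises quoted in the string-manipulation question above take only a few lines in Python; one possible solution:

def count_xx(s):
    # Overlapping count: "xxx" contains two "xx".
    return sum(1 for i in range(len(s) - 1) if s[i:i + 2] == "xx")

def reverse_no_vowels(s):
    # Reverse the string and drop all vowels.
    return "".join(c for c in reversed(s) if c.lower() not in "aeiou")

print(count_xx("xxx"))             # 2
print(reverse_no_vowels("Hello"))  # "llH"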
Is there any legal stuff to be considered? Whose property is the already existing website? Can one make a general assumption at all, or is this something that necessarily needs to be settled in a contract?"} {"_id": "48811", "title": "What design patterns are the worst or most narrowly defined?", "text": "For every programming project, managers with past programming experience try to shine when they recommend some design patterns for your project. I like design patterns when they make sense or if you need a scalable solution. I've used Proxies, Observers and Command patterns in a positive way for example, and do so every day. But I'm really hesitant to use say a Factory pattern if there's only one way to create an object, as a factory might make it all easier in the future, but complicates the code and is pure overhead. _So, my question is in respect to my future career and my answer to manager types throwing random pattern-names around:_ Which design patterns did you use that set you back overall? **Which are the worst design patterns** , the ones that you shouldn't look at unless you're in the one single situation where they make sense (read: which design patterns are very narrowly defined)? (It's like I was looking for the negative reviews of an overall good product on Amazon to see what bugged people most in using design patterns.) And I'm not talking about Anti-Patterns here, but about Patterns that are usually thought of as "good" patterns. **Edit:** As some answered, the problem is most often that patterns are not "bad" but "used wrong". If you know patterns that are often misused or even difficult to use, they would also fit as an answer."} {"_id": "82114", "title": "Should Version Control Software be used from Command Line or GUI", "text": "I was going through some tutorials on TortoiseHg. Despite having a rich GUI, the first examples are given using command line options. Does the ideal usage involve the command line, or was it presented that way so one has an idea of what is happening under the hood, with the GUI internally using these commands anyway?"} {"_id": "195926", "title": "Separation of concerns: Whose concern is this?", "text": "My senior reviewer colleague wants me to do the following. We have (on iOS, iPhone) a hierarchy of views in one of our screens. There is a simple rectangle that represents a Business card of a person (visually matches business cards style). As this card is reused many times in many screens, I have written a custom class called `PFXBusinessCard` that has four labels, each label representing a person's data like Name, Phone number, account and email. These four labels are exposed as properties of that custom class. In my controller, I am passing the data for a particular label in the following manner. I get the person, read the particular value for a property and set this property in a controller. That is, I am setting the name, phone number etc.. But my colleague says I should instead handle this inside the `PFXBusinessCard` class. I should create a `populateWithPersonsData` method in the class, and I would just pass a person to the card in my controller. The card would then populate its labels itself. Is this approach ok? Why should the card know about the person? The UI object should not know anything about the data. Should it?"} {"_id": "12401", "title": "Be liberal in what you accept...
or not?", "text": "_[Disclaimer: this question is subjective, but I would prefer getting answers backed by facts and/or reflexions]_ I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: > Be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension), however I have always thought that its application to HTML / CSS was a total failure, each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain a consistent rendering across multiple browsers. I do notice though that there the RFC of the TCP protocol deems \"Silent Failure\" acceptable unless otherwise specified... which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developpers, from the top off my head: * Javascript semi-colon insertion * C (silent) builtin conversions (which would not be so bad if it did not truncated...) and there are tools to help implement \"smart\" behavior: * name matching phonetic algorithms (Double Metaphone) * string distances algorithms (Levenshtein distance) However I find that this approach, while it may be helpful when dealing with non-technical users or to help users in the process of error recovery, has some drawbacks when applied to the design of library/classes interface: * it is somewhat subjective whether the algorithm guesses \"right\", and thus it may go against the Principle of Least Astonishment * it makes the implementation more difficult, thus more chances to introduce bugs (violation of YAGNI ?) * it makes the behavior more susceptible to change, as any modification of the \"guess\" routine may break old programs, nearly excluding refactoring possibilities... from the start! And this is what led me to the following question: When designing an interface (library, class, message), do you lean toward the robustness principle or not ? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict."} {"_id": "171134", "title": "Which web framework to use under Backbonejs?", "text": "For a previous project, I was using Backbonejs alongside Django, but I found out that I didn't use many features from Django. So, I am looking for a lighter framework to use underneath a Backbonejs web app. I never used Django built in templates. When I did, it was to set up the initial index page, but that's all. I did use the user management system that Django provided. I used the models.py, but never views.py. I used urls.py to set up which template the user would hit upon visiting the site. I noticed that the two features that I used most from Django was South and Tastypie, and they aren't even included with Django. Particularly, django-tastypie made it easy for me to link up my frontend models to my backend models. It made it easy to JSONify my front end models and send them to Tastypie. Although, I found myself overriding a lot of tastypie's methods for GET, PUT, POST requests, so it became useless. South made it easy to migrate new changes to the database. Although, I had so much trouble with South. Is there a framework with an easier way of handling database modifications than using South? When using South with multiple people, we had the worse time keeping our databases synced. 
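To make the strict-versus-liberal trade-off in the robustness-principle question above concrete, here is a small Python sketch; the date formats are an assumed example:

from datetime import datetime

def parse_date_strict(text):
    # Strict: exactly one accepted format, anything else is an error.
    return datetime.strptime(text, "%Y-%m-%d")

def parse_date_liberal(text):
    # Liberal: try several formats and silently pick the first match --
    # convenient, but "03/04/2012" is now ambiguous by design, and adding
    # or reordering formats later can change what old inputs mean.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y"):
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            pass
    raise ValueError("unparseable date: " + text)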
{"_id": "171134", "title": "Which web framework to use under Backbonejs?", "text": "For a previous project, I was using Backbonejs alongside Django, but I found out that I didn't use many features from Django. So, I am looking for a lighter framework to use underneath a Backbonejs web app. I never used Django's built-in templates. When I did, it was to set up the initial index page, but that's all. I did use the user management system that Django provided. I used the models.py, but never views.py. I used urls.py to set up which template the user would hit upon visiting the site. I noticed that the two features that I used most from Django were South and Tastypie, and they aren't even included with Django. Particularly, django-tastypie made it easy for me to link up my frontend models to my backend models. It made it easy to JSONify my front end models and send them to Tastypie. Although, I found myself overriding a lot of tastypie's methods for GET, PUT, POST requests, so it became useless. South made it easy to migrate new changes to the database. Although, I had so much trouble with South. Is there a framework with an easier way of handling database modifications than using South? When using South with multiple people, we had the worst time keeping our databases synced. When someone added a new table and pushed their migration to git, the other two people would spend days trying to use South's automatic migration, but it never worked. I liked how Rails had a manual way of migrating databases. Even though I used Tastypie and South a lot, I found myself not actually liking them because I ended up overriding most Tastypie methods for each Resource, and I also had the worst trouble migrating new tables and columns with South. So, I would like a framework that makes that process easier. Part of my problem was that they are too "magical". Which framework should I use? Nodejs or a lighter Python framework? Which works best with my above criteria?"} {"_id": "110804", "title": "Why are zero-based arrays the norm?", "text": "A question asked here reminded me of a discussion I had with a fellow programmer. He argued that zero-based arrays should be replaced with one-based arrays since arrays being zero-based is an implementation detail that originates from the way arrays and pointers and computer hardware work, but this sort of stuff should not be reflected in higher level languages. Now I am not really good at debating so I couldn't really offer any good reasons to stick with zero-based arrays other than they sort of feel more appropriate. Why _is_ zero the common starting point for arrays?"} {"_id": "244381", "title": "Is it necessary to start the variable from zero (var i = 0) in a 'for' loop?", "text": "Is it necessary to start the loop variable from zero (`var i = 0`) in every loop? When should I use `var i = 1;` and when `var i = 0;`?"} {"_id": "250337", "title": "Why is first column of list called 0th in so many languages?", "text": "If you want the first element of a list or array, you reference it as 0 in many languages (like C or Clojure). Are there some really good reasons why programming languages were designed this way? In the old days of assembly languages it made perfect sense because all possible values needed to be used. But what are the reasons nowadays to keep it this way? There is a small advantage for modulo arithmetic and ranges (see the Wikipedia article), but not much more. On the disadvantages side: It creates confusion because in human language the first is connected with 1 (1st, and not only in English). It creates confusion even in XPath (W3Schools: "Note: In IE 5,6,7,8,9 the first node is [0], but according to W3C, it is [1]."). There is trouble between languages that use 1-based and 0-based systems. I want to know what the good reasons are to use zero-based numbering and why even creators of new languages (like Clojure) choose this way?"}
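A small Python illustration of the modulo/range advantage mentioned in the zero-based-indexing questions above:

items = ["a", "b", "c", "d"]

# With 0-based indexing, wrap-around is a bare modulo...
for tick in range(10):
    current = items[tick % len(items)]

# ...and splitting a range needs no +1/-1 corrections:
# [0, n) splits cleanly into [0, k) and [k, n).
n, k = len(items), 2
first, second = items[0:k], items[k:n]

# With 1-based indexing, the wrap-around above would need
# ((tick - 1) % n) + 1 instead.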
{"_id": "178968", "title": "\"static\" as a semantic clue about statelessness?", "text": "This might be a little philosophical but I hope someone can help me find a good way to think about this. I've recently undertaken a refactoring of a medium sized project in Java to go back and add unit tests. When I realized what a pain it was to mock singletons and statics, I finally "got" what I've been reading about them all this time. (I'm one of those people that needs to learn from experience. Oh well.) So, now that I'm using Spring to create the objects and wire them around, I'm getting rid of `static` keywords left and right. (If I could potentially want to mock it, it's not really static in the same sense that Math.abs() is, right?) The thing is, I had gotten into the habit of using `static` to denote that a method didn't rely on any object state. For example: //Before import com.thirdparty.ThirdPartyLibrary.Thingy; public class ThirdPartyLibraryWrapper { public static Thingy newThingy(InputType input) { return new Thingy.Builder().withInput(input).alwaysFrobnicate().build(); } } //called as... ThirdPartyLibraryWrapper.newThingy(input); //After public class ThirdPartyFactory { public Thingy newThingy(InputType input) { return new Thingy.Builder().withInput(input).alwaysFrobnicate().build(); } } //called as... thirdPartyFactoryInstance.newThingy(input); So, here's where it gets touchy-feely. I liked the old way because the capital letter told me that, just like Math.sin(x), ThirdPartyLibraryWrapper.newThingy(x) did the same thing the same way every time. There's no object state to change how the object does what I'm asking it to do. Here are some possible answers I'm considering. * Nobody else feels this way so there's something wrong with me. Maybe I just haven't really internalized the OO way of doing things! Maybe I'm writing in Java but thinking in FORTRAN or somesuch. (Which would be impressive since I've never written FORTRAN.) * Maybe I'm using staticness as a sort of proxy for immutability for the purposes of reasoning about code. That being said, what clues _should_ I have in my code for someone coming along to maintain it to know what's stateful and what's not? * Perhaps this should just come for free if I choose good object metaphors? e.g. `thingyWrapper` doesn't sound like it has state independent of the wrapped `Thingy`, which may itself be mutable. Similarly, a `thingyFactory` sounds like it should be immutable but could have different strategies that are chosen among at creation. I hope I've been clear and thanks in advance for your advice!"} {"_id": "96399", "title": "Documenting a REST interface with a flowchart", "text": "Does anybody have suggestions on creating a flowchart representation of a REST-style web interface? In the interest of supplying thorough documentation to co-developers, I've been toying around in dia modeling the interface for modifying and generating a product resource: ![enter image description here](http://i.stack.imgur.com/tSwye.png) This particular system begins to act differently with user authentication/resource counts, so before I make modifications, I'm looking for some clarification: * Complexity: how would you simplify the overall structure to make this easier to read? * Display Symbol: is this appropriate for representing a page? * Manual Operation Symbol: is this appropriate for representing a user action like a button click? Any other suggestions would be greatly appreciated. My apologies for the re-post. The main stackexchange site suggested this question was better presented on programmers."} {"_id": "96395", "title": "Is kick-starting myself into the programming world by reading from a book about VS 2008 & .NET 3.5 a bad idea?", "text": "Basically what the title says. I'm completely new to programming and decided C# to be the best language to start learning, so I went ahead and installed Visual Studio Express 2010 C# to get started. I started reading Learning C# 3.0: Master the fundamentals of C# 3.0 and got worried when I got to the part which, to quote, said: > "Unless we specifically say otherwise, when we refer to C# in this book, we > mean C# 3.0; when we refer to .NET, we mean the .NET 3.5 Framework; and when > we refer to Visual Studio, we mean Visual Studio 2008." Does this mean that the book is outdated and I should find something else?
Does this mean I should just read the book anyway and continue in Visual Studio 2010? Does this mean I should uninstall 2010, and get VS 2008, and continue reading? I don't want to learn something that is outdated, and I'm aware of .NET 4.0 being available, so I'm not sure if it would be a waste of time. I don't even know if there is a difference; does it just add more features, and one could still use the same "basic" code in each? Sorry for being clueless, I have never programmed before, I just need a kick-start in the right direction so I can start LEARNING, I just don't want to do so in the wrong way :)"} {"_id": "193441", "title": "Efficient way to create a code estimation/technical specification in a fast-moving environment", "text": "To better understand my question, let me elaborate the background of the subject matter. I work in a financial institution where the business module (credit finance) is constantly changing. In the IT world, however, the developers have a 6 week cycle: taking a list of projects/enhancements, putting it in a timeline, and business expects delivery on those projects. During each cycle, each developer gets assigned work that needs to be completed by a certain date. That date is already finalized by IT Change Managers and code must be done by then. The code will then be synced, built to create a package and deployed to the test environment for the QC (Quality Centre) team to test. The problem, however, is that developers are given the work 3 weeks before the code cycle ends. We are then told to do a technical specification and work estimation for each project/enhancement/bug fix we are going to do. I have constantly told my managers that this is a backward mentality: having to do estimations while already knowing when we need to deliver the finished code. My challenge: Is there a better estimation/technical specification model that caters for this kind of environment? If not, how can I tackle this issue such that it doesn't conflict with the deadline? Thanks. PS: I totally disagree with providing a business requirement to developers and expecting them to do a technical specification document as well as estimations during a code cycle. I do believe that the technical specification document should be done beforehand, completed along with the functional/non-functional documents, and signed off by business. This is not happening currently."} {"_id": "193442", "title": "Are \"normal order\" and \"call-by-name\" the same thing?", "text": "I was studying the book Structure and Interpretation of Computer Programs and in section 1.1.5 The Substitution Model for Procedure Application the author explains the concepts of _normal order_ and _applicative order_ , which I believe I have understood well. Now, I am taking a course on Coursera called Functional Programming Principles in Scala and there the professor Martin Odersky (who based much of his course on the book cited above) explains the same concepts under the names of _call-by-name_ and _call-by-value_. In his course, professor Odersky says the substitution model is based on the Lambda Calculus, so I consulted a book in my library, An Introduction to Functional Programming Through Lambda Calculus, and on page 22 the author does define the terms as applicative order and normal order. Interestingly, in his definition he says that applicative order is like Pascal's call-by-value whereas normal order is like Algol's call-by-name. The use of the words "is like" in his explanation is what made me doubt.
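The distinction asked about above can be imitated in Python by delaying arguments with thunks (zero-argument lambdas); note this sketch shows only the non-strictness aspect, and the terms also differ in details such as re-evaluation versus sharing:

def loop():
    while True:
        pass  # never terminates

# Call-by-value / applicative order: arguments are evaluated first.
def first_strict(a, b):
    return a

# first_strict(1, loop())  # would hang: loop() runs before the call

# Call-by-name / normal order: arguments are evaluated only when used.
def first_lazy(a, b):
    return a()  # b is never forced

print(first_lazy(lambda: 1, lambda: loop()))  # prints 1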
Thus my questions: * Are these two terms equivalent or are there any subtle differences? * Can I use one or the other interchangeably without risking making a mistake in the meaning they convey? * Are there any reasons you know about that justify the existence of different terminology to refer to the same thing?"} {"_id": "193445", "title": "I'm not sure what this license allows me to do", "text": "I'm using the BigRational class from bcl.codeplex.com. It will be open-sourced when it's finished, and it's modified. However, I don't get legal jargon, so even though it's quite short, could someone explain to me what it does/doesn't allow you to do?"} {"_id": "193444", "title": "Byte code weaving vs Lisp macros", "text": "I have been reading about the libraries people have written for languages like Java and C# that make use of byte code weaving to do things like intercept function calls, insert logging code, etc. I have also been reading up on Lisp/Clojure macros in an attempt to better understand how to utilize them. The more I read about macros, the more it seems like they provide the same kind of functionality as byte code weaving libraries. By functionality, I mean the ability to manipulate code at compile time. Examples of libraries I have been looking at would be AspectJ, PostSharp, and Cecil. Is there anything that can be done with one and not the other? Do they actually solve the same problems or am I comparing apples and oranges?"} {"_id": "193447", "title": "Format text in a generic and reusable way", "text": "I would like to write some long text in some structure to allow a set of operations on that text. The question is which structure or format should I use, which best suits the use that I plan to make of that text? Next I describe that use: * I would like to write text in natural language, possibly with translations to several languages. Translations would simply be the same structure with different data (text). * I would like to keep that text in a VCS, check diffs, branch and merge, etc. The structure should fit this use well. * I would like to keep the text free of as much clutter as possible so that it is human-readable. * I would like to easily convert the text to other formats, not necessarily many, but at least html and pdf would be fine. * I would like to be able to manipulate that text easily, for instance changing the order of some elements, filtering them, etc. based on metadata in that text. * Metadata is data; that means it may be printed or not, or it may be printed in different ways. Here are the main options I have considered so far: * **Latex** : basically it is a language designed for this task. The problems I see are that it is not as readable as other options, for instance Markdown, and it is not really structured text. The text is there and the metadata about formatting options and so on can be separated with a set of macros, but the text is not really structured; changing the order requires either parsing it or defining all the text as macros so that only the order of the macro invocation needs to be changed. It's great for what it does, but becomes clumsy when it falls short in some feature, such as structuring the text. I don't see a good separation between control information and data to be printed. It is a very good option to convert to pdf. * **XML** : The structure in this case is fairly good, but in the current context, I see no advantage in using XML when HTML could be used instead; it provides the same features and some more.
* **HTML** : the conversion to HTML would be immediate in this case but the conversion to pdf is not so clear. In terms of human readability maybe markdown could be better, but HTML is probably the most widespread and used language for the task at hand, there are supporting languages like CSS (Less, Sass, too many options) that can make life easier, Javascript can handle it, anyone with a browser can easily read it, etc. Maybe some special HTML could be converted to quality Latex and from there to pdf, I don't know. * **Markdown** : a very good option in terms of readability, but I'm uncertain about how it could be manipulated, maybe through conversion to HTML and then using DOM manipulations or any other processing that could be done on XML and thus on proper HTML. I'm uncertain about how flexible it may be for defining metadata (for instance a paragraph that is a summary of other paragraphs) when this could be easily done with XML or HTML via classes or other attributes. * **JSON** : most languages include a parser for JSON, thus it is very friendly for programming languages and easy manipulation. Obviously some standard should be defined for JSON, but the same holds for the rest of the options, including latex (macros). * **CoffeeScript** : this removes some clutter from the usual JSON, it may be more readable and can be converted into JSON easily. * **Mixing** : the problem with JSON and CoffeeScript is that the structure to hold the contents is very flexible (maybe too much) but it doesn't support inline annotations in a natural way. A possible solution is to use Markdown or HTML for these fragments of text, including bold text or what may be needed. The objective is to write a manifesto, or something that looks like a manifesto and evolves. This is based on some ideas that recommend using VCS systems. The point is to have a structure that allows you to write once and publish as many times as may be needed and in different ways, maybe blog posts, pdfs, etc., because a lot of effort has to be put into reaching consensus and writing the text; rewriting and rewording does not seem a good idea. This discards some other nice options, like a wiki, but it would be nice to be able to have it structured in some way such that a set of pages like a wiki could be built from the source data. In the end the technology may not be there yet, but I think it is not too far. There are actually so many options that a clever use of some of them should be enough."} {"_id": "133925", "title": "Best practice to manage concurrency in a basket in an e-commerce website", "text": "What is the best practice to manage the case where two customers add a product at the same time when its stock is only 1? Must there be a check in the code of the basket to prevent one of these 2 customers from adding the same product? Or must this check be carried out in the payment phase, for instance by doing a second query to confirm that the product is still in stock (i.e. not yet bought by the concurrent customer)?"}
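For the basket-concurrency question above, the usual trick is to let the database arbitrate with a conditional update instead of a read-then-write; a sketch using sqlite3, where the table and column names are assumptions:

import sqlite3

def try_reserve(conn, product_id):
    # Atomic: only succeeds if stock is still positive, so two
    # concurrent buyers cannot both take the last item.
    cur = conn.execute(
        "UPDATE products SET stock = stock - 1 "
        "WHERE id = ? AND stock > 0",
        (product_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # False -> tell the customer it is gone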
{"_id": "219191", "title": "Novice to node.js, what is the advantage gained using callbacks over events?", "text": "I am a novice JavaScripter and have no real knowledge of what goes on inside the V8 engine. Having said that, I am really enjoying my early forays into the node.js environment but I find that I am constantly using events.EventEmitter() as a means to emit global events so that I can structure my programs to fit a notifier-observer pattern similar to what I would write in, say, an Objective-C or Python program. I find myself always doing things like this: var events = require('events'); var eventCenter = new events.EventEmitter(); eventCenter.on('init', function() { var greeting = 'Hello World!'; console.log(\"We're in the init function!\"); eventCenter.emit('secondFunction', greeting); }); eventCenter.on('secondFunction', function(greeting) { console.log(\"We're in the second function!\"); console.log(greeting); eventCenter.emit('nextFunction'); }); eventCenter.on('nextFunction', function() { /* do stuff */ }); eventCenter.emit('init'); So in effect I'm just structuring 'async' node.js code into code that does things in the order I expect; instead I'm kind of "coding backwards", if that makes sense. Would there be any difference in doing this in a callback-heavy manner, either performance-wise or philosophy-wise? Is it better to do the same thing using callbacks instead of events?"} {"_id": "206789", "title": "Are there any SQL servers that support compiled queries?", "text": "Can SQL queries be compiled into byte code or equivalent to enhance performance? I know that most database servers support prepared statements, but is that the same thing as compiling? Most applications have to prepare statements at run-time, so there is no benefit of pre-compiled byte code, since it's prepared each time the application runs. I'm also not referring to stored procedures, but strictly SQL statements you execute to get a result set."} {"_id": "206784", "title": "Erlang and Go concurrent programming, objective differences between CSP and Actors?", "text": "I was looking into concurrent programming in the Erlang and Go programming languages. As far as I can tell, they use the Actor model and CSP respectively. But I am still confused: what are the objective differences between CSP and Actors? Are they just theoretically different but the same concept?"} {"_id": "206781", "title": "JSON object and storage of nosql", "text": "I have read Would a NoSQL DB be more efficient than a relational DB for storing JSON objects? and am building a small test project in Asp.Net. I have a webapi up in Azure. It returns a `List<Company>` and Company is my object, which has several properties, a child list and a lat/long value. //id, name etc. public List<Certification> Certifications { get; set; } public float Latitude { get; set; } public float Longitude { get; set; } public GeoCoordinate Coordinate // etc. GeoCoordinate is from the System.Device reference. I return this List of companies and use the JSON output. Now internally, loading this list I load the complete list of companies out of a json file, and if there is no file, a file will be created. This is all good. But the Latitude and Longitude are empty initially. So I fill them using Google's reverse geocoding. That works, but has a request limit. So I'd like to load the list and, if lat/long is empty, retrieve the values from Google's service and store them. But I am looking for a solution not to store the complete json list to a file again. And I am not looking for a relational database solution, because that is something that I have done enough. Now I have read about mongoDB. But it is a bit hard to set up on Azure. I have had Redis on Azure.
What easy and fast solution do you recommend for me to store my list of objects? Do you even recommend storing it as JSON? Or something else, like XML, and use xpath to update values? So I am looking for an architecture/design to update all lat/longs until Google gives the quota limit error, and give it another go the next time I access the list of companies. ps. I do not want to store a list of `Certification`'s. I am curious if I can keep it as a property of company and store the complete company project."} {"_id": "206783", "title": "How to maximise the features of MySQL when used with php", "text": "I asked this question a while back on SO (http://stackoverflow.com/questions/15506040/maximising-the-features-of-php-and-mysql-when-used-together) but it's a recurring topic, so I thought I'd ask in a different way over here. Basically, we are stuck with the infamous php and MySQL combination (for various reasons), and are trying to produce applications which follow best-practice as much as possible. We notice that time and again, we are tightly coupling the application to the database and vice-versa, despite trying to adhere to SOLID principles. One of the main reasons is to maximise the features of a relational database (such as strict datatypes, hierarchical data structures, relationships, enforcing foreign keys, firing triggers, etc.) This means that some of the application logic bleeds over into SQL. However, there are features that other RDBMS's offer which MySQL doesn't (most notably the CHECK constraint), so we end up using php to get round those. This means we end up using php for some of the things MySQL doesn't do well and we use MySQL to do things which aren't feasible in php. So, even though every text book encourages programmers to keep things loosely coupled, does anyone have any examples of how to overcome limitations you have come across, or examples of other applications that mix the two up in a similar way?"} {"_id": "244957", "title": "delegating program logic to lower-level objects", "text": "I'm writing a library for use in scientific computing and ran into a bit of a quandary. The types at work here are a class `M`, which consists of some `data` and a reference to a container class `C`. There are many different implementations of `C` and I devoted a lot of work to making sure that `M` objects could use `C` objects without knowing their internal representation. The code may be used for high-performance scientific computing someday, so speed actually is a concern. If `M` were to break the encapsulation of `C` objects, the code could run faster. I tested this and indeed I could get a 50% speedup. But that would involve lots of repeated code and violation of the open-closed principle. Alternatively, I can take the behavior that `M` needs to perform and delegate it to `C`. By default, `C` will use the same implementation-agnostic algorithm that I had before, but the logic has just moved down to a new class. The advantage of this approach is that, if `CO` is an implementation of `C` which can do substantially better than the default implementation, it can override that method with its own version. There is, at present, only 1 method that `M` will need to delegate to `C`. I can imagine at most 2 more behaviors that will need to be dealt with in this way. There may be a bit of repeated code, but it could be handled with a code generator too. Is this a common approach? If so, what's it called? If not, is that because it's a terrible idea for some reason that I haven't noticed?
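A sketch of the incremental backfill flow described in the lat/long question above; geocode() is a hypothetical stand-in for the real reverse-geocoding call, and the quota exception is an assumption:

class QuotaExceeded(Exception):
    pass

def geocode(address):
    # Hypothetical: call the external service here;
    # raise QuotaExceeded when the daily limit is hit.
    raise NotImplementedError

def backfill(companies, save):
    for company in companies:
        if company.get("lat") is not None:
            continue  # already resolved on an earlier run
        try:
            company["lat"], company["lng"] = geocode(company["address"])
        except QuotaExceeded:
            break  # stop for today; the next run resumes where we left off
        save(company)  # persist one record, not the whole list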
It's not quite the strategy pattern; most of the container objects don't even bother implementing their own strategy, they use the default."} {"_id": "244955", "title": "How would I do a vim-style character text editor in a graphical application?", "text": "There's a cool guitar tabbing application that I've used before where you can use the keyboard to move around a character grid. You can put any digit in any character cell. Here is an image: ![enter image description here](http://i.stack.imgur.com/tIcil.png) This is similar to the VIM editor, where you have a block cursor and you can move around and place characters on the grid. I am using Qt for my GUI application. How would I go about adding this type of single-character editor control in my application? I have not run across this type of widget in any of my exposure to GUI programming; hence, I'm not even sure what to call it or how to describe it succinctly. Thanks."} {"_id": "244952", "title": "How does the Stream.filter() method work?", "text": "I know how the lambda expression works and I know it is an argument for `.filter()` that establishes the criteria to filter with. But I don't get how `.filter()` uses the argument, in this case a lambda expression, because .filter() doesn't have an implementation, or at least doesn't require one at compile time. I searched for this unknown implementation on Oracle's site but there are just a few words explaining how it works and no code at all. Is that implementation hidden or is it created automatically by the Java compiler? Does an aggregate operation need one? double average = roster .stream() .filter(p -> p.getGender() == Person.Sex.MALE) .mapToInt(Person::getAge) .average() .getAsDouble(); `roster` is a `List<Person>` (an instance of `ArrayList`); `Person` is a simple class that represents a person"} {"_id": "244953", "title": "Whole Program in CASE", "text": "I'm going to start by saying, this may not be the correct place to post this. So...I'm working in Embedded Development, using C. Are there any benefits or disadvantages to doing the following: while (1) { i++; switch (i) { case 1: x_get_inputs(); break; case 2: x_react(); break; case 3: x_set_outputs(); i = 0; break; default: break; } } Rather than a standard list of function calls? I think this is similar to a _Round Robin RTOS_? My logic is that the program cycle time will be shorter, and as such this should be more efficient?"} {"_id": "244950", "title": "How would I change the precision of a variable in Python?", "text": "I'm working on a 2D-physics engine, and I need a certain variable to be precise only to the hundredths. Are there any methods for basically shaving off all of that unneeded precision? I have tried things like "{0:.2f}".format(a) but that obviously produces a string, not a number. I'm moving an object based upon its speed, and I want it to (obviously) stop when its value is 0.0, but it keeps going because its speed is really something like 0.0000562351."} {"_id": "219331", "title": "C# LinqExtensions implement multiple inheritance", "text": "According to WikiPedia "Some languages do not support mixins on the language level, but can easily mimic them by copying methods from one object to another at runtime, thereby "borrowing" the mixin's methods. Note that this is also possible with statically typed languages, but it requires constructing a new object with the extended set of methods." 1) LinqExtensions static library functions ( .Where() ) applied to collections of regular objects (int[]) that implement IEnumerable do essentially this.
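Two small points for the Python precision question above: round() returns a number (unlike str.format), and for stopping a moving object it is usually safer to compare against a small threshold than to wait for exactly 0.0. A sketch:

speed = 0.0000562351

# round() gives back a float, not a string:
speed = round(speed, 2)  # 0.0

# More robust: treat anything below an epsilon as stopped, since
# float arithmetic rarely lands on exactly 0.0.
EPSILON = 0.005
if abs(speed) < EPSILON:
    speed = 0.0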
They use a _new object_ (LinqExtensions) to add the _extended set of methods_ ( Where(), etc ). Conclusion. LinqExtensions are mixins with all classes that support IEnumerable. 2) According to WikiPedia "In object-oriented programming languages, a mixin is a class which contains a combination of methods from other classes. How such combination is done depends on language, but it is not by inheritance. If a combination contains all methods of combined classes it is equivalent to multiple inheritance." By mixing **ALL** methods of classes that support IEnumerable ( List.HashCode, List.ToString, other List hierarchy baseline capabilities ), with **ALL** methods from the LinqExtensions static library ( .Where(), Max(), etc ) **an equivalent to multiple inheritance** is produced. An equivalent to multiple inheritance is produced, yes or no?"} {"_id": "19203", "title": "What are the benefits of using Dependency Injection and IoC Containers?", "text": "I'm planning to do a talk on Dependency Injection and IoC Containers, and I'm looking for some good arguments for using them. What are the most important benefits of using this technique, and these tools?"} {"_id": "155768", "title": "What OO Design to use ( is there a Design Pattern )?", "text": "I have two objects that represent a 'Bar/Club' ( a place where you drink/socialise). In one scenario I need the bar name, address, distance, slogan. In another scenario I need the bar name, address, website url, logo. So I've got two objects representing the same thing but with different fields. I like to use immutable objects, **so all the fields are set from the constructor**. One option is to have two constructors and null the other fields, i.e.: class Bar { private final String name; private final Distance distance; private final Url url; public Bar(String name, Distance distance){ this.name = name; this.distance = distance; this.url = null; } public Bar(String name, Url url){ this.name = name; this.distance = null; this.url = url; } // getters } I don't like this as you would have to null-check when you use the getters. In my real example the first scenario has 3 fields and the second scenario has about 10, so it would be a **real pain having two constructors** given the number of fields I would have to declare null, and then when the objects are in use you wouldn't know which `Bar` you were using and so which fields would be null and which wouldn't. What other options do I have? Two classes called `BarPreview` and `Bar`? Some type of inheritance / interface? Something else that is awesome?"} {"_id": "158307", "title": "Technology Choice for a Client Application", "text": "Not sure this is the right place to ask... I'm involved in the development of a new system, and now we are passing the demos stage. We need to build a proper client application.
Our team is mostly proficient in .Net, C#, C++ development, and rather familiar with Web development (HTML, JavaScript). We are probably intending to develop two clients (for both user profiles), a web app, and a native app. Architecturally, we would like to have as many common components as possible. We would like to have several layers: Communication, Client Model, Client Logic, shared by both of the clients. We would also like to be able to add features to both clients when only the actual UI is a dual cost, and the rest is shared. We are looking at several technologies: WPF + Silverlight, Pure HTML, Flash / Flex (AIR?), Java (JavaFx?), and we are considering poking at WinRT(or whatever the proper name is). The question is which technology would you recommend and why? And which advantages or disadvantages will it have regarding our requirements?"} {"_id": "158309", "title": "understanding the encoding scheme in python 3", "text": "I got this error in my program which grab data from different website and write them to a file: 'charmap' codec can't encode characters in position 151618-151624: character maps to I am not familiar with all the encoding decoding thing and I have been okay with what python 2 did. Although the python officially said that they made the change in order to make things better, it seems to get worse. I have no idea how to fix these errors. However I am a pro-active person so I would really like to know what is causing the problem and how to solve it. I have check the official site but the words are hard to understand. Could I have a simple elaboration on that? Another page is also acceptable. EDIT: I've check this page, the Unicode HOWTO in Python 2.7. My understanding is that we must translate the unicode string into binary format while we're writing it to file and it require an encoding. Obviously 'utf-8' is the best one, but why it didn't force the python interpreter to use 'utf-8'? Instead, it use some strange codec such as 'charmap' and 'cp950' and 'latin-1'."} {"_id": "215837", "title": "When should method overloads be refactored?", "text": "When should code that looks like: DoThing(string foo, string bar); DoThing(string foo, string bar, int baz, bool qux); ... DoThing(string foo, string bar, int baz, bool qux, string more, string andMore); Be refactored into something that can be called like so: var doThing = new DoThing(foo, bar); doThing.more = value; doThing.andMore = otherValue; doThing.Go(); Or should it be refactored into something else entirely? In the particular case that inspired this question, it's a public interface for an XSLT templating DLL where we've had to add various flags (of various types) that can't be embedded into the string XML input."} {"_id": "13757", "title": "Pronunciation in programming?", "text": "How do you correctly or erroneously pronounce programming terms? Any that you find need strict correction or history into the early CS culture? **Programming** `char` = \"tchar\" not care? `!` = bang not exclamation? `#` = pound not hash? Exception `#!` = shebang `*` = splat not star? `regex` = \"rej ex\" not \"regg ex\"? `sql` = \"s q l\" not \"sequel\" (already answered, just i.e.) **Unixen** `|` = pipe not vertical bar? `bin` = bin as in pin , not as in binary? `lib` = lib as in library , not as in liberate? 
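For the 'charmap' codec question above: the error appears when writing to a file whose default (locale) encoding cannot represent some characters; passing an explicit encoding is the usual fix. A sketch, with the sample string standing in for the scraped data:

scraped_text = "caf\u00e9 \u2013 r\u00e9sum\u00e9"  # stand-in with non-ASCII characters

# The platform default encoding is used when none is given, which is
# where 'charmap'/'cp950' errors come from on Windows.
with open("out.txt", "w", encoding="utf-8") as f:
    f.write(scraped_text)

# If lossy output is acceptable, offending characters can be replaced:
with open("out.txt", "w", encoding="ascii", errors="replace") as f:
    f.write(scraped_text)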
`etc` = \"ett see\" , not \"e t c\" (as in `/etc` and not \"&c\") **Annoyance** `/` = slash not backslash `LaTeX` = \"laytek\" not \"lay teks\""} {"_id": "250117", "title": "Wikipedia says that the Remote Socket Address is the Client Socket Address, but I doubt that", "text": "In the context of a network socket, which one is the REMOTE SOCKET ADDRESS? Does the Client Socket Address refer to the Remote Socket Address?"} {"_id": "201781", "title": "Algorithm for generating hyperexponential distribution", "text": "I need to generate a hyperexponential distribution for my project. I have already implemented a Poisson generating algorithm given by Donald Knuth, but I couldn't find an algorithm for generating a hyperexponential random variable. I am provided with the required mean and variance of the distribution, and I need an algorithm which can generate a random variable from this distribution when I execute it."} {"_id": "82803", "title": "Subversion Pre-Commit Check", "text": "In a config file for a PHP project, I have some settings that should not go out to production. Can anybody offer some specific examples or help on how to automatically check these fields and make sure their values are not the development/demo settings before the files get committed into Subversion? Our IDE is JetBrains PHP Storm, and we use the latest version of Subversion. Some of us develop on Linux, while others are on Windows."} {"_id": "150688", "title": "How would you unit-test or perform the most effective automated testing on graphics code?", "text": "I'm writing a game and the accompanying graphics engine on top of OpenGL in C++. I'm also a fan of good coding processes and automated testing. Graphics code + testing seems pretty immiscible, since the output is often visual only, or very heavily visual-oriented. For example, imagine analyzing the raw image-stream that is rendered to the screen byte-by-byte - you need test data to compare with, which is hard to create/obtain, and often the rendered images aren't identical on a byte level when running at different times - small changes in algorithms will wreck this approach completely. I'm thinking of creating a visual unit-test suite, in which I can basically render different test scenes, showing stuff like shadow mapping, animation, etc. As part of CI, these scenes would then be rendered to a video file (or possibly left as an executable) with different metrics. This would still require manual inspection of the video file, but at least it would be somewhat automated and standardised. What do you think? I'm hoping there are better ways?"} {"_id": "161982", "title": "How to edit existing user stories", "text": "I'm quite new to working in Agile and with user stories and scenarios in the BDD tool Cucumber, and ideally I'll need to go on a course for all of this. I have a set of user stories that need to be edited for an upcoming release.
As an example, one of the user stories is: 'As a user I want a visual indicator on entry into the application' This needs to be changed to: 'As a producer, I don't want a textured background in the application' The Acceptance Criteria (in Gherkin format, for those familiar with Cucumber) for the original user story (the first one above) is Scenario: Show background image when video is not playing Given the application restarts When the home menu is displayed Then the full-screen textured background should be visible at the correct resolution For the new user story (second one above), the acceptance criteria that I have written, but am not certain of, is: Scenario: Show dark grey background when video is not playing Given the application restarts When the home menu is displayed Then the dark grey background should be visible at the correct resolution Does this look right? Or am I missing information? I'm quite new to this, so please bear with me."} {"_id": "255215", "title": "Knowing who the user is in every request (every action, every view, every time)", "text": "I have many model classes that are mapped from/to tables using EF. Two of them are `User` and `UserCookie`, which are stored in tables `Users` and `UserCookies`. public class User { public long UserId { get; set; } public string Fullname { get; set; } public string Email { get; set; } (...) } public class UserCookie { public long UserCookieId { get; set; } public long? UserId { get; set; } public virtual User User { get; set; } public string CookieValue { get; set; } } Every controller in my ASP.NET MVC application is a child (inherits) of `MyController`, which is like this: public class MyController : Controller { protected MyDbContext dbContext; protected UserCookie userCookie; protected User currentUser; protected string cookieValue; public MyController() : base() { this.dbContext = new MyDbContext(); this.cookieValue = \"\"; this.currentUser = null; this.userCookie = null; } protected override void OnActionExecuting(ActionExecutingContext filterContext) { base.OnActionExecuting(filterContext); HttpCookie cookie = Request.Cookies[\"LoginCookie\"]; if (cookie != null) { this.cookieValue = cookie.Value; this.userCookie = dbContext.UserCookies.SingleOrDefault(c => c.CookieValue.Equals(cookieValue)); if (userCookie != null) this.currentUser = userCookie.User; } } protected override ViewResult View(string viewName, string masterName, object model) { ViewBag.CurrentUser = currentUser; ViewBag.UserCookie = userCookie; ViewBag.CookieValue = cookieValue; return base.View(viewName, masterName, model); } } When the user signs in successfully, I create a cookie for them with a random value, like this: public ActionResult Login(...) { (...) HttpCookie cookie = new HttpCookie(\"LoginCookie\"); cookie.Value = Guid.NewGuid().ToString(); Response.Cookies.Add(cookie); userCookie.User = user; userCookie.CookieValue = cookie.Value; dbContext.UserCookies.Add(userCookie); dbContext.SaveChanges(); The problem with all this that I've shown is that I'm making a request to the database on every HTTP request (that is, in every call to one of my actions). I'm also not happy with the table `UserCookies` because it has to be cleaned and is not intuitive. Also the generated `Guid` is not safe. I know **how wrong** all this is, but... I can't figure out an elegant way of achieving the same benefits.
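The closest I've come to easing the per-request database hit is caching the lookup in memory — a rough sketch using System.Runtime.Caching (the 20-minute expiry is an arbitrary choice of mine, and whether this is the \"professional\" way is exactly what I'm unsure about):

using System.Runtime.Caching;

protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
    base.OnActionExecuting(filterContext);
    HttpCookie cookie = Request.Cookies[\"LoginCookie\"];
    if (cookie == null) return;
    this.cookieValue = cookie.Value;
    // try the cache first; fall back to the database and remember the result
    this.currentUser = MemoryCache.Default.Get(cookieValue) as User;
    if (this.currentUser == null)
    {
        this.userCookie = dbContext.UserCookies.SingleOrDefault(c => c.CookieValue.Equals(cookieValue));
        if (this.userCookie != null)
        {
            this.currentUser = this.userCookie.User;
            MemoryCache.Default.Set(cookieValue, this.currentUser, DateTimeOffset.Now.AddMinutes(20));
        }
    }
}

But that only hides the problem; the underlying design is unchanged.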
The main benefit of this is that I have access to `currentUser` in every action, so I can do stuff like this easily: if (!currentUser.Privileges.Any(p => p.Privilege == PrivilegesEnum.Blah)) { (...) } I also have access to `currentUser` in all of my views. Questions: 1. What is the most common way of achieving this in real professional software? 2. If I wanted to use a cache (like Redis), would I have to change a lot of stuff in the presented scenario? 3. If I do use a cache, should I store just the `userId` instead of the entire User object?"} {"_id": "199128", "title": "Increasing User Changes/Requirements in Agile Methodology", "text": "My question is quite simple. How do you handle a situation where the team is applying an agile methodology in software projects and there are so many iterations and changes in requirements that the schedule is highly affected? Plus you have your manager, who always wants things to get done on schedule, and it is something you cannot control?"} {"_id": "167204", "title": "Testing vs QA - output?", "text": "I was trying to find a relevant source of information for the above, but I failed. What is the output of software testing? Some say it's a \"list of bugs\", others say \"an answer whether or not the software conforms to what the customer wanted\". I know testing is a part of QA, but I'm not sure what its aim and output are in this context."} {"_id": "167206", "title": "When to use functional programming approach and when not? (in Java)", "text": "Let's assume I have a task to create a `Set` of class names. To remove duplication of `.getName()` method calls for each class, I used `org.apache.commons.collections.CollectionUtils` and `org.apache.commons.collections.Transformer` as follows: _Snippet 1:_ Set<String> myNames = new HashSet<String>(); CollectionUtils.collect( Arrays.<Class<?>>asList(My1.class, My2.class, My3.class, My4.class, My5.class), new Transformer() { public Object transform(Object o) { return ((Class) o).getName(); } }, myNames); An alternative would be this code: _Snippet 2:_ Collections.addAll(myNames, My1.class.getName(), My2.class.getName(), My3.class.getName(), My4.class.getName(), My5.class.getName()); So, when is using a functional programming approach overhead and when is it not, and why? Isn't my usage of the functional programming approach in _snippet 1_ overhead, and why?"} {"_id": "2575", "title": "How often are the unfortunate stereotypes associated with older techies true?", "text": "How often are the unfortunate stereotypes associated with older techies true? When people get settled into a career, do they lose their passion to learn new things? Do people get comfortable and stuck with one skillset and incapable of change? **How common is this?** I don't mean to be offensive with this question. I know many skilled older programmers. But you do sometimes see a \"failure mode\" when people get too comfortable in their jobs, skills, and lifestyle--too much so that they get irritated at the thought of having to learn something new. I see this and I'm deathly afraid of it happening to me. I want to emphasize I think there is also a \"failure mode\" for youth -- not really having the experience to know how to get a project done. The \"youth\" failure mode may be worse than the one I'm describing. I think the best teams have both junior and senior members. Am I just rehashing old, wrong stereotypes? Is there something to guard against when one gets settled into a career?
Or am I just another ageist young programmer keeping the old-(wo)man down?"} {"_id": "201258", "title": "Will git whine if I create my projects in multiple directories?", "text": "I'm new to git (and GitHub). I work in Java and Python. Right now, I manage my projects in only one directory - let's call it `Code` - and the folder is on my Desktop. I want to group my projects by the language I code in. So, I want: /Code/ ++ /Java/ ++ /Python/ I do not know if Git would be happy with this or not. If it would be, what should I set my home directory to be? And how should I manage it with my GitHub account? For example, if I create a new repo on GH for a `new-project` in Python (but I already have files on my local machine in `/Code/Python/new-project`), how should I go about committing the files to GH while making sure any other directories are not messed with? EDIT: Just to be clear, I have unrelated projects in `/Code/project_1`, `/Code/project_2`, `/Code/project_3` etc. But I want to arrange them as `/Code/Java/project_1`, `/Code/Java/project_2`, `/Code/Python/project_1` etc."} {"_id": "235206", "title": "how do live tutorials work, e.g. http://try.mongodb.org/ and http://try.redis.io/", "text": "I was wondering how http://try.redis.io/, http://try.mongodb.org/ etc. work. Do they have one live (dedicated) user session on the server per user session on the web?"} {"_id": "197996", "title": "Underlying infrastructure behind something similar to Code School", "text": "I'm working on a venture similar to Code School: it features a code editor (currently, I'm using the ACE editor) and a real-time \"Run\" option. I have no idea how Code School works with this... I thought about creating/destroying VMs on Amazon for each student, but that would be very expensive. Or does it run inside an application that delegates execution to many different machines? And what about security?"} {"_id": "191353", "title": "Managing an undercover SVN repository", "text": "The project I am working on is version controlled by SVN, and the unspoken rule at work is to commit only when a new stable feature is added (in order to have a \"clean\" revision history with no reverts), so I sometimes work for a few days without committing. However, I am kind of a committing maniac (several commits per day on my previous git projects) and this workflow doesn't suit me. Ideally I would like to fork the project, commit unstable versions to the \"submarine repo\" and merge into the stable one whenever the feature is stable enough (my unstable commits must not appear in the stable repo). How can I do this? With SVN: I've looked at Vendor Branches and Externals, but I didn't really find what I want. With Git: is it possible to use Git for the undercover unstable repo and SVN for the stabilized repo without conflicts between the two VCSs during merges?"} {"_id": "201256", "title": "High-Load Java Server for Multiplayer", "text": "I am making a multiplayer game. Now I am trying to choose the technology to connect the Android devices to the server. The clients run on Android, and the game is an MMORPG. I would like to write the server in Java. Right now I have only 3 ideas for it: _**1) Creating a multithreading environment with plain Java and sockets.**_ In this way it will be easier to maintain a full-duplex connection between the game client and the server. But I have the following concerns: 1.1) The game is an MMORPG with a large number of objects, and I am not sure how such solutions scale if there are, for example, 5000 people playing at the same time.
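To make concern 1.1 concrete, the naive layout I have in mind is roughly this (a sketch only; readLoop/writeLoop are placeholders I made up):

import java.net.*;
import java.util.concurrent.*;

public class NaiveGameServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(7777);
        while (true) {
            Socket client = server.accept();
            BlockingQueue<byte[]> outbox = new LinkedBlockingQueue<>();
            new Thread(() -> readLoop(client)).start();          // blocking reads from the socket
            new Thread(() -> writeLoop(client, outbox)).start(); // drains outbox to the socket
        }
    }
    static void readLoop(Socket s) { /* parse packets, update game state */ }
    static void writeLoop(Socket s, BlockingQueue<byte[]> q) { /* send queued updates */ }
}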
How many threads will I be able to run on the Java VM? How can I approximately calculate that, in the case where 1 thread is reading from each socket and 1 thread is writing to the socket (so 2 threads per player)? 1.2) When the number of players grows, how will I be able to scale my self-written jar archive to distribute over several servers? Maybe there is some special trick to do that? 1.3) A lot of programming overhead - the sockets API is quite low-level. _**2) Creating a servlet interface for serving HTTP requests.**_ 2.1) Easy to control sessions (and authorization), as long as each player has his/her own session. 2.2) Can connect to Java EE EJBs or whatever - a lot of complications with system-level programming are taken away, so I can concentrate on writing business logic. 2.3) Can serve all types of clients with HTTP - mobile devices + browsers. 2.4) High speed - even 1 servlet container can serve a few thousand requests per second, so it will be really fast. 2.5) But this approach can't provide full-duplex communication. I will have to send requests every second to check for updates. A 1-second delay does not make a lot of difference for the game, as it is turn-based, but still it generates a lot of traffic. Is it feasible when there are many players playing? I heard about the COMET technique, but it seems like if the server has to push many messages in a row, I will still have to send requests every time, plus this technology is not well established yet. _**3) Creating sockets and connecting them through JMS to the Java EE server.**_ 3.1) Cool, because it allows full-duplex communication between clients and server and provides all the cool features of Java EE. Later it can be extended to the browser through a servlet interface. 3.2) It seems like some kind of overengineering. Is it really how people would do that? I mean, is it even the correct way? Would any sane developer do it like that? I would like you to help me out with the choice please. I don't have much experience with work like this, and I would like to stick to best practices."} {"_id": "57217", "title": "What do you do when one thinks the code isn't complicated enough?", "text": "After six months of development on a project, our stakeholders have had a \"gut check\" and have decided that the path that we've been walking (a custom-designed application framework and data access layer) is holding us (the developers) back from quickly developing the features they would like to see. After several days of debate, management and the development team decided to scrap the current incarnation and start over using ASP.NET MVC, with Entity Framework as the basis of a 'quick and dirty, let's just get it done' project. In the days following, our senior developer, who has never worked with MVC or Entity Framework, finally got into a sample project and did some work. His take on ASP.NET MVC: \"this is not software engineering\". So my question is this: what do you do when someone doesn't think the code is complicated enough?"} {"_id": "190409", "title": "A Unicode sentinel value I can use?", "text": "I am designing a file format and I want to do it right. Since it is a binary format, the very first byte (or bytes) of the file should _not_ form valid textual characters (just like in the PNG file header [1]). This allows tools that do not recognize the format to still see that it's not a text file by looking at the first few bytes. Any codepoint above `0x7F` is invalid US-ASCII, so that's easy. But for Unicode it's a whole different story.
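One way to sanity-check any candidate header is simply to throw it at the decoders — a quick Python sketch (the candidate bytes here are just an arbitrary example, not a proposal):

# any codec that decodes the candidate cleanly has NOT been ruled out
candidate = b'\xff\x00\xff\x00'
for codec in ('ascii', 'utf-8', 'utf-16-le', 'utf-16-be'):
    try:
        candidate.decode(codec)
        print(codec, 'accepts this header')
    except UnicodeDecodeError:
        print(codec, 'rejects this header')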
Apart from valid Unicode characters there are _private-use characters_, _noncharacters_ and _sentinels_, as I found in the Unicode Private-Use Characters, Noncharacters & Sentinels FAQ. **What would be a sentinel sequence of bytes that I can use at the start of the file that would result in invalid US-ASCII, UTF-8, UTF-16LE and UTF-16BE?** * Obviously the first byte cannot have a value below `0x80`, as that would be a valid US-ASCII (control) character, so `0x00` cannot be used. * Also, since _private-use characters_ are valid Unicode characters, I can't use those codepoints either. * Since it must work with both little-endian and big-endian UTF-16, a _noncharacter_ such as `0xFFFE` is also not possible, as its reverse `0xFEFF` is a valid Unicode character. * The above-mentioned FAQ suggests not using any of the _noncharacters_ as that would still result in a valid Unicode sequence, so something like `0xFFFF` is also out of the picture. What would be the future-proof sentinel values that are left for me to use? * * * [1] The PNG format has as its very first byte the non-ASCII `0x89` value, followed by the string `PNG`. A tool that reads the first few bytes of a PNG may determine it is a binary file, since it cannot interpret `0x89`. A GIF file, on the other hand, starts directly with the valid and readable ASCII string `GIF` followed by three more valid ASCII characters. For GIF, a tool might determine it is a readable text file. This is wrong, and the idea of starting the file with a non-textual byte sequence came from Designing File Formats by Andy McFadden."} {"_id": "118571", "title": "Career change question: going from systems analyst to embedded systems development", "text": "How should I go about making a switch from doing help-desk-type work to embedded software development, or any type of development that is done at a lower level? I will finally finish my degree in December and really want to get started building systems. My degree is an Applied Computer Science degree. I have taken classes in Assembly, C++, VB.NET, and Java but have not been able to use any of these to a great extent in any of my jobs. If anyone has advice on how to go about getting into the embedded development field, that would be great."} {"_id": "190405", "title": "Fastest way to find the closest point", "text": "I have a list in Python 2.7 of about 10000 point co-ordinates, like [(168, 245), (59, 52), (61, 250), ... (205, 69), (185, 75)] Is there a faster way to search for all the points in a bounding box, instead of just iterating over the entire list each time and checking if the co-ord is inside the 4 sides? I'm using this \"algorithm\" to see if any point, which is randomly moving, has come inside the stationary bounding box... (if it helps)"} {"_id": "60406", "title": "Which programming language first introduced 'Hello World'", "text": "Which programming language first introduced 'Hello World' as a first program to code for beginners?"} {"_id": "60405", "title": "Job Interview Challenges", "text": "I'm not entirely sure if this belongs here, so feel free to move/close it, if necessary. The other day, in our PHP class, our teacher gave us a challenge used by a friend of his in job interviews. It works in every programming language, so it's not limited to PHP. He said that his friend uses this 'riddle' to weed out the people who can't think of a fast answer when it comes to logical challenges. The people that don't solve it won't get a job, of course.
The riddle is as follows: $a = 3; $b = 7; echo \"a = $a\"; // has to become 7 echo \"<br>\"; echo \"b = $b\"; // has to become 3 You basically have to switch the contents of both variables without doing lame things like `$b = $a + 4`. You **cannot** use a temporary variable either! I struggled with this, I have to admit; I was like 'oooh yeah' when we finally got the answer. I don't want to spoil this for anyone, so instead of posting the solution I'll just put a link. Now, as for my question. I was wondering if there are more riddles like these out there, that people (that's you, SO) use in job interviews, etc. Perhaps even a bit harder than this one. My goal is to train my logical thinking a bit and get better at solving issues like this. Perhaps there are books or websites out there devoted to stuff like this?"} {"_id": "60404", "title": "What web development framework questions should I include in my thesis?", "text": "I'm busy working on my thesis about Web Development Frameworks. I want to write about frameworks that have been used in the past and frameworks that have a good chance of lasting in the future, such as jQuery and PrimeFaces for example. I also want to do a survey, and I am looking for good questions to ask in it. My question is: what are web developers themselves most interested in answering, and what do you think would be good questions to ask in this survey? For example, I was thinking about giving a list of current technologies and giving the possibility to give a score from 1 to 10 to indicate which they prefer the most."} {"_id": "153080", "title": "Is there a pattern or logical structure I can follow for Event Log Numbers?", "text": "What are some ideas or structures I can use when assigning EventIDs to events that will be saved to the Windows Event Log? ![enter image description here](http://i.stack.imgur.com/UT2UA.png) Some options I've considered: * Sequential (0... int.Max) * Multiples of 10, where the \"0\" is replaced with how noisy the debugLevel is set; xxx0 may represent exceptions, critical information, start, stop etc. * ...? What numbering approach gives you the most insight when a user describes the event in an email or over the phone? What is the most useful to support staff?"} {"_id": "64180", "title": "Good use of try catch-blocks?", "text": "I always find myself wrestling with this... trying to find the right balance between try/catching and the code not becoming this obscene mess of tabs, brackets, and exceptions being thrown back up the call stack like a hot potato. For example, I have an app I'm developing right now that uses SQLite. I have a Database interface that abstracts the SQLite calls, and a Model that accepts things to go in/out of the Database... So if/when an SQLite exception occurs, it has to get tossed up to the Model (which called it), which has to pass it off to whoever called AddRecord/DeleteRecord/whatever... I'm a fan of exceptions as opposed to returning error codes, because error codes can be ignored, forgotten, etc., whereas an Exception essentially has to be handled (granted, I _could_ catch and move on immediately...) I'm certain there's got to be a better way than what I've got going on right now. **Edit:** I should have phrased this a little differently. I understand re-throwing as different types and such; I worded that poorly and that's my own fault. My question is... how does one best keep the code clean when doing so? It just starts to feel extremely cluttered to me after a while."} {"_id": "17120", "title": "Do you use debugging in Rails applications?
Why, when and how?", "text": "In Java, C and C++ I see people using debugging strategies intensively (mostly because they don't know about TDD). On the other hand, debugging can also help in understanding software abstractions. So, when and how do you use debugging in Rails applications?"} {"_id": "17121", "title": "What does \"Human Readable\" mean? Is it a misnomer?", "text": "Two examples spring to mind: * One of the reasons that .Net programmers are encouraged to use .config files instead of the Windows Registry is that .config files are XML and therefore human-readable. * Similarly, JSON is sometimes considered human-readable compared with a proprietary format. Are human-readable formats actually readable by humans? In the example of configuration data: 1. The format doesn't change the underlying meaning of the information - in both cases, the data represents the same thing. 2. Both the registry and the .config file are stored internally as a series of 0s and 1s. To that extent, the underlying representation is equally unreadable by humans. 3. Both the registry and the .config file require a tool to read, format and display those 0s and 1s and convert them into a format that humans can read. In the case of configuration stored in the Windows Registry, this is a Registry Editor. In the case of XML it could be a text editor or XML reader. Either way, the tool makes the data readable, not the data format. So, what is the difference between human-readable data formats and non-human-readable formats?"} {"_id": "198836", "title": "Give advice from experienced to beginner programmer on reinventing my wheel (solution)", "text": "I've just tried to do a CodingBat exercise. There are posted solutions to the problem I tried to solve. However, I was stubborn; I ignored them and tried to code it up my own way (basically reinventing my own wheel - a square, badly working wheel). After 4 hours it looks nasty and fails some tests. Now I realize that my logic was totally wrong and wasteful. So the questions: * When do you start to give up solving a problem your own way (reinventing the wheel) and start to look around to see other solutions? * How long should a developer be 'stubborn' in trying to find his 'unique' or complicated solution? * When do you give up and begin to search for another approach?"} {"_id": "251248", "title": "Complexity limits of solutions created in Google Spreadsheets", "text": "I am creating a solution where I essentially put all rules regarding communication with customers (including automatic invoicing, reminder emails, welcome emails, etc.) into Google Sheets and use Ultradox to create emails and PDFs based upon Google Docs templates. For the three automatic emails I have currently implemented, this is working out really well; the whole thing is very transparent to our organization, since even non-technical people can inspect and correct the \"Excel\" formulas. My concern is that in 2-3 years we will probably have 200 unique emails and actions that we need to send out for the various occasions, given the various states that customers can be in. Of course I could aim at limiting the number of emails and states that our customers can be in, but this should be a choice based upon business realities and not be limited by the choice of technology.
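To give a flavour of the rules involved, a typical one today is a single formula along these lines (simplified, and the column meanings are invented for illustration):

=IF(AND(C2=\"active\", TODAY()-D2 > 14, E2=\"\"), \"send reminder\", \"\")

where C holds the customer state, D the invoice date and E a sent-marker.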
**My question** is therefore: what are the limits of complexity (when will it become unmaintainable) that can reasonably be implemented in a solution based upon Google Apps Script and Google Sheets, given that I will attempt to expose as many of the rules as possible in Google Sheets? And what pitfalls should I be aware of when basing myself on spreadsheet formulas, and what strategies should I follow to avoid the pitfalls? **Some of my own strategies** So far I have come up with the following strategies to increase maintainability: 1. Using several Google Sheets, each with its own purpose, each with its own dedicated \"export\" and \"import\" sheets, so it is clear which columns are dependent on which Google Sheet. Such sheets also help maintain referential integrity when inserting columns and rows. 2. Using multi-line formulas with indentation for formula readability 3. Experimenting with the \"validation\" function to reduce the variability of data 4. Experimenting with ArrayFormulas to ensure that formulas will work even if additional rows are added 5. Potentially offloading very complex formulas to Google Apps Script and calling them from spreadsheet formulas 6. Using Named Ranges to ensure referential integrity Please note that I am not asking about performance in this question, only maintainability. Also, I am unsure of how software complexity can be measured, so I am unsure of how to ask this question in a more specific way."} {"_id": "251242", "title": "Server should accumulate several requests and return one response for all", "text": "For example, I have a server [C#] and 4 clients. When the first client sends a request to the server, I want to push a notification to the other 3 clients that they should send a request to the server with some required data. After I receive requests from all clients, I want to send the same response to all of them. What is the best way/pattern to implement something like this?"} {"_id": "235212", "title": "What is the main goal of MVVM pattern?", "text": "Could you tell me what the goal of the MVVM pattern is? What are the arguments or reasons I can give to a team and product owner to respect and develop according to this pattern? I would like a simple answer. Something in one sentence or one word. Is it for * maintenance * security * testing * something else?"} {"_id": "156585", "title": "Are Persistence-Ignorant objects able to implement lazy loading?", "text": "_Persistence Ignorance_ is an application of the single responsibility principle, which in practice means that Domain Objects ( **DO** ) shouldn't contain code related to persistence; instead they should only contain domain logic. a) I assume this means that the code which contacts lower layers (i.e. persistence layers) lives outside of the domain model, in other classes ( **OC** ) of a business logic layer? b) If my assumption under _a)_ is correct, then a **DO**, say `Customer`, never contains methods such as `GetCustomers` or `GetCustomerByID`? c) If my assumptions under _a)_ and _b)_ are correct, and assuming the `Customer` domain object uses lazy loading for some of its properties, then at some point `Customer`'s internal logic must contact **OC**, which in turn retrieves the deferred data. But if `Customer` needs to contact **OC** to receive deferred data, then we can't really claim that Domain Objects don't contain logic related to persistence?! Thank you **REPLYING TO jkohlhepp** 1) I assume the `OrderProvider` and `CustomerProvider` classes are contained within the business logic layer?
2) I gather from your reply that my assumptions under _b)_ are correct? 3) > ... I would check to see if some private orders field was populated or if it was null. If it is null ... But as far as I can tell, as soon as domain code needs to check whether the private `order` field was populated, and, if it isn't, contact OrderProvider, we are already violating the **PI** principle?!"} {"_id": "235217", "title": "Organising data access for dependency injection", "text": "In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection. ### Some specific questions: Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level Data Access object to be injected, right? What about if there are dozens of tables - wouldn't that make the composition root into a nightmare? Would you instead have a single interface that defines things like `GetCustomer()`, `GetOrder()`, etc? If I took the example of Entity Framework, then I would have one `Container` that exposes an object for each table, but that container doesn't conform to any interface itself, so it doesn't seem compatible with DI. ### What we do now, in case it helps: The way we normally manage data access is through a generic data layer which exposes CRUD/Transaction capabilities and has provider-specific subclasses which handle the creation of `IDbConnection`, `IDbCommand`, etc. Actual table access uses `Table` classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and utilise a static `DataAccess` singleton instantiated from a config file."} {"_id": "62800", "title": "What are your experiences selling on the Android Market? 1 year on", "text": "Follow-up to this question. So a lot has changed in the smartphone market in the last year (specifically Android's market share, OS updates and marketplace updates). Given these changes I think it is appropriate to ask this question again. I would love to quit my job and write Android apps full-time. :-) Is this yet feasible for the average lone developer? What have volumes been like? (Happy to turn this into a community wiki before all the Grinches start moaning, but it looks like I don't have the option.)"} {"_id": "62802", "title": "How much storage space do developers really need on work systems?", "text": "> Not counting the OS and the requirements to run the development software - the storage space required. Strictly speaking from a work perspective (a company setup, not freelancers): for an individual developer (not considering a build system), unless they work in areas of video/audio processing (huge raw files) or 3D/graphics development, how much storage space would be required? * Even if we account for software trials to download or reading material, is it right or safe to assume 20GB would mostly suffice, and anything more would be a waste or would be improperly utilized? * What is the typical hard disk space allotted per developer in an office setup? This may differ per role, specific requirement, and the type of work the company is into, but on average, how much space is normally allotted for a developer/programmer?
Edit: **To Clarify Intent** These are questions I have faced from business/management people. I only wish to understand more in this regard, to give an answer (or a better answer) the next time I come across them. I am neither making assumptions nor intending to give offense to anyone in this regard. It would be helpful if some links to data online were provided on this. Edit 2: * The issue as I understand it was restricting the storage space to only the saving of work files, to discourage extraneous usage... * Not about scrimping/cost-saving on hardware."} {"_id": "153338", "title": "Changes in licence in forked project what are my rights?", "text": "Hi, I'm interested in using the apparently now defunct app-mdi library in a Flex application for a paying customer. http://sourceforge.net/projects/appmdi/ It appears that the app-mdi project has been forked from flex-mdi, and indeed the code has so much in common it would appear almost identical to the original code. Now in the original source flex-mdi the following licence appears in the source code > /* Copyright (c) 2007 FlexMDI Contributors. See: > http://code.google.com/p/flexmdi/wiki/ProjectContributors Permission is > hereby granted, free of charge, to any person obtaining a copy of this > software and associated documentation files (the \"Software\"), to deal in the > Software without restriction, including without limitation the rights to > use, copy, modify, merge, publish, distribute, sublicense, and/or sell > copies of the Software, and to permit persons to whom the Software is > furnished to do so, subject to the following conditions: The above copyright > notice and this permission notice shall be included in all copies or > substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", > WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED > TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND > NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE > LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF > CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE > SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ However, in the app-mdi library, in the same file, the following licence appears. > Copyright (c) 2010, TRUEAGILE All rights reserved. > > Redistribution and use in source and binary forms, with or without > modification, are permitted provided that the following conditions are met: > > * Redistributions of source code must retain the above copyright notice, > this list of conditions and the following disclaimer. > * Redistributions in binary form must reproduce the above copyright > notice, this list of conditions and the following disclaimer in the > documentation and/or other materials provided with the distribution. > * Neither the name of the TRUEAGILE nor the names of its contributors may > be used to endorse or promote products derived from this software without > specific prior written permission. > > > THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" > AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE > LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR > CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF > SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS > INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN > CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) > ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE > POSSIBILITY OF SUCH DAMAGE. */ Now I've no problem with the licence except for this line: > Redistributions in binary form must reproduce the above copyright notice, > this list of conditions and the following disclaimer in the documentation > and/or other materials provided with the distribution. The copyright notice in its entirety makes no sense in binary material; specifically, the part talking about redistributions in binary form. Finally, the question is: what exactly has to be shown on web clients who access software that utilises this library? Also, is changing the licence in this manner actually allowed?"} {"_id": "48385", "title": "Anybody using VB with ASP.net MVC?", "text": "When a .NET project uses newer technologies like ASP.NET MVC or Silverlight, is VB.NET being used? Or is it used just in legacy projects? What is the situation now?"} {"_id": "254145", "title": "Am I violating LSP if the condition can be checked?", "text": "This base class for some shapes I have in my game looks like this. Some of the shapes can be resized, some of them can not. private Shape shape; public virtual void SetSizeOfShape(int x, int y) { if (CanResize()){ shape.Width = x; shape.Height = y; } else { throw new Exception(\"You cannot resize this shape\"); } } public virtual bool CanResize() { return true; } In a subclass of a shape that I don't ever want to resize, I am overriding the `CanResize()` method so a piece of client code can check before calling the `SetSizeOfShape()` method. public override bool CanResize() { return false; } Here's how it might look in my client code: public void DoSomething(Shape s) { if(s.CanResize()){ s.SetSizeOfShape(50, 70); } } Is this violating LSP?"} {"_id": "254140", "title": "Is there a difference between fibers, coroutines and green threads and if that is so what is it?", "text": "Today I was reading several articles on the Internet about fibers, coroutines and green threads, and it seems like these concepts have very much in common, but there are slight differences, especially when we talk about fibers and coroutines. Is there a concise, correct summary of what makes them different from each other?"} {"_id": "224398", "title": "Implementing Settings/Preferences in JavaScript", "text": "I am making a simple web app mostly in JavaScript. I was wondering how I should implement settings/preferences. Do I just store user preferences somewhere and make use of if...else... statements all over the code? I think that there must be a better alternative. I know JS, jQuery & PHP and am willing to learn anything new if at all required. I have already made the app; only the settings are remaining. I know what options to give users and how to program them in JS. What's the most optimal way? How is it done in professional web apps and software made by companies (I am an independent student developer - this is my first \"BIG\" project)?
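The naive version I can picture is one settings object persisted to localStorage, with a branch wherever behaviour differs (a sketch; the key names are made up):

// defaults merged with whatever the user saved last time
var defaults = { theme: 'light', pageSize: 25 };
var stored = JSON.parse(localStorage.getItem('settings') || '{}');
var settings = $.extend({}, defaults, stored);

function saveSettings() {
    localStorage.setItem('settings', JSON.stringify(settings));
}

if (settings.theme === 'dark') { // ...and dozens more branches like this one
    $('body').addClass('dark');
}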
**EDIT:** For all the modifications that the settings are supposed to make (in this particular app), the whole JS code base would literally be filled with many branches of if...else... statements, and I think that would make the code a lot harder to read and maintain. In my app, the settings would change which database table is fetched, the number and types of HTML DOM manipulations to be done, which new elements are added to the HTML DOM, and whether some are visible or not. How do I deal with all that?"} {"_id": "187142", "title": "Should I split out synchronization from my class and what's it called?", "text": "When thinking about testability and modular code, I recently thought about whether I should split out the synchronization part of a class from the actual behavior part. By example: The app \"needs\" this: class Mogrifier { // Mogrifier accesses are thread safe private: unsigned int m_state; mutex m_sync; public: Mogrifier() : m_state(0) { } unsigned int Mutate() { lock_guard<mutex> lock(m_sync); const unsigned int old = m_state; m_state += 42; return old; } ::: }; Seems simple enough, except that locking and the actual stuff the class does aren't really related. So I thought about whether it's a good idea to split this up: class Mogrifier { // Mogrifier is *not* synchronized private: unsigned int m_state; public: Mogrifier() : m_state(0) { } unsigned int Mutate() { const unsigned int old = m_state; m_state += 42; return old; } ::: }; class SyncedMogrifier { private: mutex m_sync; Mogrifier m_; public: unsigned int Mutate() { lock_guard<mutex> lock(m_sync); return m_.Mutate(); } }; * Would you do this? * Will it help with unit testing? * I'm awful with pattern names ... what is it called? * Is there a simple way to \"generate\" such a wrapper in C++ for an arbitrary class?"} {"_id": "42652", "title": "using a wiki for requirements", "text": "I'm looking into ways of improving requirements management. Currently, we have a Word document published on a Web site. Unfortunately, we cannot (to my knowledge) look at changes from one revision to the next. I would greatly prefer to be able to do so, much like with a wiki or VCS (or both, like the wikis on Bitbucket!). Also, each document describes changes devs are expected to meet by a given deadline. There is no collection of accumulated app features documented anywhere, so it's sometimes hard to distinguish between a bug and a (poorly-designed) feature when trying to make quick fixes to legacy apps. So I had an idea I wanted to get feedback on. What about: 1. Using a wiki so that we can track who changed what when (mostly to even see _if_ any edits were made since the last time one looked). 2. Having one, say, wiki page per product rather than one per deadline, keeping up with all features of the product rather than the changes that should be implemented. This way, I can look at a particular revision of the page to see what the app should do at a given point in time, and I can look at _changes to the page since the last release_ for the requirements to be implemented by the next deadline. Waddayathink?"} {"_id": "210718", "title": "Specific reasons to create own array class over using std::array?", "text": "Under what specific conditions or requirements should you create your own array class instead of using `std::array`? Here is my background: I'm developing a small simple library that a small group of people will use and _hopefully_ build upon.
The current structure allows for both `std::vector` and `std::array` using `iterators`; however, a lot of the people do not like and cannot seem to use `std::array`, and find the notation confusing to work with in the library. I have attempted to make my library work better for `std::array`; however, it dawned on me that maybe I should provide my own and add in the functionality needed. For example: `library::array myArray; myArray.fill(1, 100);` I thought it would be a good project for me to develop my skills, particularly in `templates` and `sorting` algorithms."} {"_id": "46571", "title": "Onsite Interview: QA Engineer with more Emphasis on Java Skills", "text": "I'm having an onsite interview for a QA engineer position with a startup. During the phone interview, the person said he would want to test my Java, JUnit and SQL skills on a whiteboard, with more importance on object-oriented skills. So what questions can I expect? **One more important issue: how do I overcome the fear of whiteboard interviews? I'm very bad at whiteboard sessions; I get fully tense. Please suggest tips to overcome my jinx.**"} {"_id": "159204", "title": "Can someone help me understand this GPL license", "text": "I can't understand the last line of this GPL license. > Copyright (C) 2011 Some Name > > This program is free software; you can redistribute it and/or modify it > under the terms of the GNU General Public License as published by the Free > Software Foundation; either version 2 of the License, or (at your option) > any later version. This program is distributed in the hope that it will be > useful, but WITHOUT ANY WARRANTY; without even the implied warranty of > MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General > Public License for more details. > > You should have received a copy of the GNU General Public License along with > this program; if not, write to the Free Software Foundation, Inc., 51 > Franklin St, Fifth Floor, Boston, MA 02110-1301 USA > > Linking SOMEAPP statically or dynamically with other modules is making a > combined work based on SOMEAPP. Thus, the terms and conditions of the GNU > General Public License cover the whole combination. > > **In addition, as a special exception, the copyright holders of SOMEAPP give > you permission to combine a portion of SOMEAPP with other binary code > (\"Program\") under any license the copyright holders of Program specified, > provided the combined work is produced by SOMEAPP.** I want to know whether I can use this code in a shareware application."} {"_id": "198657", "title": "how to map controllers, models, and views as a todo list", "text": "Before starting on a current project, I was wondering if there is a term for what I am planning on doing. Basically what I want to do is map out every object in layers, such as all the controllers and what they'll be called, and the methods that will exist under the controllers and their purpose, including the models that the methods will be interacting with and the views that will be displayed based on certain actions. This way I've created a detailed to-do list and can catch any duplicates and merge them into one call; a sketch of what I mean follows.
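For instance, the map could read roughly like this (an invented example, not from the real project):

UserController
    login()   -> model: User        -> view: users/login
    index()   -> model: User        -> view: users/index
PostController
    index()   -> model: Post, User  -> view: posts/index
    create()  -> model: Post        -> view: posts/create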
I've heard of something called Doctrine, but I'm not sure if it will be what I'm looking for, since I don't know how to use it; I have heard it has something to do with documentation."} {"_id": "210711", "title": "Are there presently any standardized \"grammar\" rules for HTML & CSS?", "text": "Over the years I have assigned values to classes and HTML elements in all sorts of ways: * myClass * MyClass * my_class * my-class ...and so on. As I am always trying to hone my craft and make my code more readable for others, I am going to discipline myself to stick to one convention moving forward. I am leaning towards all lower case and hyphenated, but that is a purely subjective choice. So I wanted the input of the SE community--is there a standard or recommended convention already in place? Thanks!"} {"_id": "69532", "title": "What's a propeller-head coder?", "text": "What's a propeller-head coder?"} {"_id": "80407", "title": "Leaving for the desert soon, need suggestions", "text": "I just got the news that I'm (finally) getting to spend the next few months in Afghanistan with my fellow AF maintainers, and I was hoping to get some suggestions of good offline content to learn from. I've got a few textbooks from ck12.org and downloaded the offline versions of the MSDN docs for VCS Express 2010, but I would love to have something more general to learn from, maybe something that covers data structures and algorithms (specific to C# would be awesome). Oh, and if anyone knows of any good offline WPA tutorials, that would be even better."} {"_id": "125955", "title": "Using GPL licensed jQuery scripts on website with ads", "text": "I'm making a web application that will generate revenue from displaying ads. The system is meant for me to run and not to be sold/distributed to anyone else. My questions are: * Can I use GPL licensed jQuery scripts on the website? * Can I use GPL licensed scripts while generating ad revenue from the page? * Do I need to provide my site's source to everyone if I use GPL licensed scripts?"} {"_id": "196767", "title": "Is it better to save output from a command in memory and store it later, or save it in a temporary file and then move it to the final location?", "text": "I hope this is not off topic. I have to save output from a command to a file, but only if the length of this output is positive. I've thought about two solutions: * save the output to a Python variable, check the length, and if positive save to the destination file; * save the output to a temporary file, check the length, and if positive rename to the destination file. Is there any best practice for this little problem, or does one of my proposed solutions usually perform better? Typical configuration: * file length is usually 1KB * we are not using SSDs, usually 7200rpm hard disks * average RAM is usually 10-20 GB * I don't know if the filesystem is important; anyway, ext3 I know this is a bit unclear, but I was hoping there was some best practice to apply in this case. Thanks"} {"_id": "45007", "title": "Should I blog specifically about programming in English or in my native language?", "text": "I had a blog which was written in my native language, but now I'm wondering if I should switch to English because of the wider audience, given that the technical field's default language is English. For sure, I want to share my knowledge, but at the same time I'd like to get hired or be recognized by my peers. Reputation can be important and it can help in making my professional network larger. Do you have any feedback?
Btw, my native language is French, if that matters."} {"_id": "196763", "title": "Feature branches, beta branches, and scrapped features", "text": "I've been thinking a lot about best practices regarding branching in distributed version control systems such as git and mercurial (the two DVCSs I have experience with and use on a daily basis). The way I've been doing it is slightly different in each of these, but generally follows these guidelines: * master branch - concurrent with production code * development branch - concurrent with \"beta\" code * feature branches - for feature development Development is done in a feature branch (usually created off of the master branch, so we know we're working with a stable code base), and when dev is completed, reviewed, and developer-tested, it's pushed/merged into the development/beta branch, put out on a beta server, and tested. If all goes well, the feature is approved, and we can merge it into the master/stable branch, stage it, do final testing, and get it into production. If it doesn't go well, though, things break down. If, for instance, a feature is scrapped or just delayed indefinitely, we probably want to remove it from the dev/beta branch. However, since merges from master/stable (hotfixes, content changes, etc.), and other new features have probably been put into the dev branch, it becomes difficult to remove a single feature from that branch. I'm coming to the conclusion that this workflow is just broken, but it seems like it should work. So, specifically: * Is it possible to remove a particular feature from this type of branch? * Is this just a broken workflow? And more generally: * Given long-term development of features, and a need to have a branch concurrent with live, what are the best practices involved in DVCS?"} {"_id": "196760", "title": "Design pattern to dynamically create patterns found in a list of links", "text": "Can anyone help me think of a way to dynamically generate the patterns section of a tool I am creating? I'm not sure how to store and generate these \"patterns\" dynamically. What the program does is take a big list of links (100,000), put them in a database, group them by domain and then curl a page from each domain looking for backlinks. Here is the database schema; you can see that the Domains table is where most of the information is stored, because we group the URLs together by domain: http://sqlfiddle.com/#!2/9a4d7 So now we know that some domains are live (and have a backlink) and some are dead (no backlink). This is relatively easy, but now for the fun part. I need to derive \"patterns\" from the live links. For example, find a list of all live link domains that have more than 25 links from that domain. So if joesblog.blogspot has 33 pages that link to my domain, it matches this pattern. Here is my list of patterns: * Domains that include a homepage link * Domains grouped by top level domain (.com, .org etc) * Domains that returned a 405 header response * URLs with matching directory structures * Domains that contain the word _ _ _. * URLs that contain the word _ _ _ in their path. * Common anchor text. * Common title tags. * Common backlink targets (what page of your site does the link point to). The problem is that the patterns are changing CONSTANTLY. They're being moved around, added, edited, removed, and anything else you can think of. I really need to build a content management system of sorts to handle these patterns. But how would I store something this intricate in a database?
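The closest I've come up with is a rules table along these lines (a rough sketch - every name and column here is invented - and it only covers the simple threshold-style patterns):

CREATE TABLE patterns (
  pattern_id  INT AUTO_INCREMENT PRIMARY KEY,
  label       VARCHAR(255),  -- e.g. 'Domains that returned a 405 header response'
  match_type  ENUM('header', 'tld', 'path_word', 'domain_word', 'anchor', 'title', 'link_count'),
  match_value VARCHAR(255),  -- e.g. '405', '.org', 'blog'
  threshold   INT NULL       -- e.g. 25 for 'more than 25 links from that domain'
);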
Has anyone ever dealt with a similar problem, and how did you solve it? If I could just store whole functions and MySQL statements in the database that would be great (but horribly wrong). (PHP, MySQL, JavaScript / jQuery) DISCLAIMER: This is an internal tool. Please don't ask me why I'm building this or claim that the requirements are wrong. This was designed by my manager and it's my task to make it work, because I am a developer at the company that needs this tool. Thank you!"} {"_id": "196768", "title": "Decrement operator difference between C++ and Java?", "text": "Please tell me why the same piece of code behaves differently in C++ and Java. **Ok, first I implement a function to calculate the factorial of an int RECURSIVELY** In Java: int f(int x) { if(x==1)return 1; return(x*f(x-1)); } System.out.println(f(5)); -----> outputs 120, ok that makes sense. JAVA METHOD 2: int f(int x) { if(x==1)return 1; return(x*f(--x)); } System.out.println(f(5)); -----> outputs 120, ok that makes sense too. But here is a twist in C++: C++ METHOD 1: int f(int x) { if(x==1)return 1; return (x*f(x-1)); } cout << f(5) ; -----> outputs 120, ok now keep reading! C++ METHOD 2: int f(int x) { if(x==1)return 1; return (x*f(--x)); } cout << f(5) ; -----> outputs 24. This is surprising. But why? I have tried tracing this multiple times and have failed. It makes sense in Java because each recursive call gets its own piece of memory, and the pre-decrement operator -- will decrease the value and yield the decreased value. Why does it work as expected in Java but not in C++?"} {"_id": "216717", "title": "Understanding how the TLB (Translation Lookaside Buffer) works and interacts with the page table and addresses", "text": "So I am trying to understand this TLB (Translation Lookaside Buffer), but I am having a hard time grasping it in the context of having two streams of addresses, a TLB and a page table. I don't understand the association of the TLB to the streamed addresses/tags and page tables. a. 4669, 2227, 13916, 34587, 48870, 12608, 49225 b. 12948, 49419, 46814, 13975, 40004, 12707 TLB (Valid | Tag | Physical Page Number):
1 | 11 | 12
1 | 7 | 4
1 | 3 | 6
0 | 4 | 9
Page Table (Valid | Physical Page or in Disk):
1 | 5
0 | Disk
0 | Disk
1 | 6
1 | 9
1 | 11
0 | Disk
1 | 4
0 | Disk
0 | Disk
1 | 3
1 | 12
How does the TLB work with the page table and addresses? The homework question given is: Given the address stream in the table, and the initial TLB and page table states shown above, show the final state of the system; also list for each reference whether it is a hit in the TLB, a hit in the page table, or a page fault. But I think first I just need to know how the TLB works with these other elements and how to determine things. How do I even start to answer this question?"} {"_id": "156300", "title": "Should we use an outside CMS?", "text": "I work at a web design/development shop. Everything we do is centered around the Joomla! CMS. I'm a bit worried - if anything goes wrong with Joomla (major security flaw revealed, Joomla folds and ceases development), we're sunk. I'm meeting with the CEO to plan the next few steps for our company. Should I recommend that we create our own in-house CMS, or am I just being paranoid about a single point of failure?"} {"_id": "10477", "title": "What does a typical programming journal entry consist of?", "text": "I read that a lot of people seem to favor the journal, or diary, form to keep notes on their work-related activities. I've had a more structured approach myself, which involves outlines and categorization.
While this has its advantages for information retrieval, I find it can become a hindrance when it comes to entries of a broad, subjective or reflective nature. I've been thinking about using a more diary-like format for this purpose, but since I've never kept one before, I wonder about what sort of information it would contain. Can anyone give an example of their typical journal/diary entry's content? What do you keep notes of, and how do you structure it?"} {"_id": "216714", "title": "Why should appending to a list in Scala have O(n) time complexity?", "text": "I am learning Scala at the moment and I just read that the execution time of the append operation for a list (:+) grows linearly with the size of the list. Appending to a list seems like a pretty common operation. Why should the idiomatic way to do this be prepending the components and then reversing the list? It can't be a design failure either, as the implementation could be changed at any point. From my point of view, both prepending and appending should be O(1). Is there any legitimate reason for this?"} {"_id": "216711", "title": "Matching users based on a series of questions", "text": "I'm trying to figure out a way to match users based on specific personality traits. Each trait will have its own category. I figure in my user table I'll add a column for each category:

id  name   cat1  cat2  cat3
1   Sean   ?     ?     ?
2   Other  ?     ?     ?

Let's say I ask each user 3 questions in each category. For each question, you can answer one of the following: `No, Maybe, Yes` How would I calculate one number based on the answers to those 3 questions that would hold a value I can compare other users to? I was thinking of having some sort of weight. Like: No -> 0, Maybe -> 1, Yes -> 2. Then doing some sort of meaningful calculation. I want to end up with something like this, so I can query the users and find who matches closely:

id  name   cat1  cat2  cat3
1   Sean   4     5     1
2   Other  1     2     5

In the situation above, the users don't really match. I'd want to match with someone within +1 or -1 of my score in each category. I'm not a math guy, so I'm just looking for some ideas to get me started."} {"_id": "156309", "title": "Working with Git on multiple machines", "text": "This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:

* Use git itself for sharing; each machine has its own repo and you have to fetch between them.
  * You can work on either machine even if the other is offline. This by itself is pretty big, I think.
* Use one repo that is shared over the network between machines.
  * No need to do git pulls every time you switch machines, since your code is always up to date.
  * Never worry that you forgot to push code from your other non-hosting machine, which is now out of reach, since you were working off a fileshare on this machine.

My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to github at the end of every day. I also don't want to have to leave my computers on all the time so I can fetch from them directly. Lastly, a minor point is that all the git commands to keep multiple branches up to date can get tedious. Is there a third handle on this situation? Maybe some third-party tools are available that help make this process easier?
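[Aside on the user-matching question above (_id 216711), a minimal sketch using the No/Maybe/Yes -> 0/1/2 weights the asker proposes: each category score is the sum of its three answers (0-6), and two users match when every category differs by at most 1. In SQL the same check would be a WHERE ABS(cat1 - :mine) <= 1 AND ... clause; the data below is invented for illustration:

    WEIGHTS = {"No": 0, "Maybe": 1, "Yes": 2}

    def category_score(answers):
        # three answers in one category -> a single score in 0..6
        return sum(WEIGHTS[a] for a in answers)

    def is_match(scores_a, scores_b, tolerance=1):
        # match = every category score within +/- tolerance of the other user's
        return all(abs(a - b) <= tolerance for a, b in zip(scores_a, scores_b))

    sean = [category_score(c) for c in
            [["Yes", "Yes", "No"], ["Yes", "Yes", "Maybe"], ["No", "Maybe", "No"]]]
    print(sean)                       # [4, 5, 1]
    print(is_match(sean, [1, 2, 5]))  # False -- the example users don't match
    print(is_match(sean, [4, 4, 2]))  # True  -- within 1 in every category
]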
If you deal with this situation regularly, what do you suggest?"} {"_id": "156308", "title": "Developing an analytics system processing large amounts of data - where to start", "text": "Imagine you're writing some sort of web analytics system - you're recording raw page hits along with some extra things like tagging cookies etc., and then producing stats such as:

* Which pages got most traffic over a time period
* Which referers sent most traffic
* Goals completed (goal being a view of a particular page)
* And more advanced things like which referers sent the most number of visitors who later hit a goal.

The naive way of approaching this would be to throw it in a relational database and run queries over it - but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables) - but what if you later change a goal - how could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods & examples, architecture, technologies etc.?"} {"_id": "167971", "title": "Rendering design. How can I effectively deal with forward, deferred and transparent rendering?", "text": "I have many objects in my game world that all derive from one base class. Each object will have different materials and will therefore be required to be drawn using various rendering techniques. I currently use the following order for rendering my objects:

* Deferred
* Forward
* Transparent (order independent)

Each object has a rendering flag that denotes which one of the above methods should be used. The list of base objects in the scene is then iterated through, and objects are added to separate lists of deferred, forward or transparent objects based on their rendering flag value. The individual lists are then iterated through and drawn using the order above. Each list is cleared at the end of the frame. This method works fairly well, but it requires different draw methods for each material type. For example, each object will require the following methods in order to be compatible with the possible flag settings:

object.DrawDeferred()
object.DrawForward()
object.DrawTransparent()

It is also hard to see where methods outside of materials, such as rendering shadow maps, would fit using this "flag & method" design:

object.DrawShadow()

I was hoping that someone may have some suggestions for improving this rendering process, possibly making it more generic and less verbose?"} {"_id": "167975", "title": "What can Haskell's type system do that Java's can't and vice versa?", "text": "I was talking to a friend about the differences between the type systems of Haskell and Java. He asked me what Haskell's could do that Java's couldn't, and I realized that I didn't know. After thinking for a while, I came up with a very short list of minor differences. Not being heavy into type theory, I'm left wondering whether they're formally equivalent. To try and keep this from becoming a subjective question, I'm asking: what are the major, non-syntactical differences between their type systems? I realize some things are easier/harder in one than in the other, and I'm not interested in talking about those. And to make it more specific, let's ignore Haskell type extensions since there are so many out there that do all kinds of crazy/cool stuff."} {"_id": "198387", "title": "Best github repository layout for snippets in multiple programming languages", "text": "I have to create a github presence for an open-source organization.
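[Aside on the rendering question above (_id 167971): one way to make the "flag & method" design more generic is to treat the flag as a set of passes an object participates in, and collapse the per-pass draw methods into a single draw(pass). Sketched in Python for brevity (the engine's real language isn't stated); all names are invented:

    from collections import defaultdict

    DEFERRED, FORWARD, TRANSPARENT, SHADOW = "deferred", "forward", "transparent", "shadow"

    class Renderer:
        def __init__(self):
            self.buckets = defaultdict(list)   # pass name -> objects queued this frame

        def submit(self, obj):
            for pass_name in obj.passes:       # one object may take part in several passes
                self.buckets[pass_name].append(obj)

        def render_frame(self):
            for pass_name in (SHADOW, DEFERRED, FORWARD, TRANSPARENT):  # fixed pass order
                for obj in self.buckets[pass_name]:
                    obj.draw(pass_name)        # one draw method, parameterised by pass
            self.buckets.clear()               # the lists are rebuilt every frame

    class Quad:
        passes = (SHADOW, DEFERRED)
        def draw(self, pass_name):
            print("drawing quad in", pass_name, "pass")

    r = Renderer()
    r.submit(Quad())
    r.render_frame()

This also gives new passes like shadow maps a natural home: they are just another pass name, not another method on every object.]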
The aim is to distribute _code snippets_ and _reference implementations_ in **different** programming languages. These contributions are going to be (ideally) created by people outside of the organization and should be collected. I know that good practice is to create **one repository per project**. To this end, each implementation (in a specific programming language) should be considered a standalone project and should get its own repository. The point is that this approach makes it hard for external people to contribute, because github pull requests can only be made against existing repositories. By creating a **single repository** with a subdirectory per programming language, code contribution becomes easier, but I'm afraid that some IDEs may complain about the directory layout. I'd like to know whether someone has already solved this very specific problem in a satisfactory way."} {"_id": "78379", "title": "Need Advice on PHP Search Functionality", "text": "I'm coding a Linux/PHP site for an organization. The site has two views, activated by a login $_SESSION variable indicating whether one is logged in as a member or not. I need to provide search functionality for both the public and for members. The members area is private; note that Google Custom Search therefore can't be used here. Does anyone have any recommendations on F/OSS scripts (or perhaps cheap scripts I can purchase) that would provide this functionality with two separate indices, one public and one private, and which would provide paginated results and a keyword search form? Otherwise, I guess I'll have to code one myself. I'm trying to save time, and therefore keep more money in my pocket. P.S. Note I can't ask this on StackOverflow because they want programmer questions there. This isn't a "how do I code this?" question, but a "what F/OSS scripts or cheap sitescripts would solve this problem?" type of question. Note also that this won't help me either. I mean, look at the answers of that question."} {"_id": "198389", "title": "simple 'defrag' logic that minimizes changes?", "text": "edit: so clearly I didn't explain this well, so let me try again with examples. I have a 'pipe' of a given size with signals routed through it at given offsets. The pipe can get fragmented, with signals at different offsets making it impossible to fit a new signal. I want an algorithm that will tell me how to arrange the signals to defragment the pipe, but with a minimal number of signals actually being moved! So for the example: say I have a pipe of size 16. It has the following signals with the sizes and offsets described:

offset 0: A (size 4, fills slots 0-3)
offset 5: C (size 2, fills slots 5-6)
offset 8: B (size 4, fills 8-11)
offset 14: D (size 2, fills 14-15)

Pipe: AAAA_CC_BBBB__DD

In this case I have one slot open at each of offsets 4 & 7, and two at offsets 12-13. Now let's say I want to add a size 4 signal to this pipe. There is no contiguous space for it now, but I know I have enough space for it if I defragment. The 'obvious' solution is to group all the signals together at the 'top' like this:

offset 0: A (size 4, fills 0-3)
offset 4: B (size 4, fills 4-7)
offset 8: C (size 2, fills 8-9)
offset 10: D (size 2, fills 10-11)

Pipe: AAAABBBBCCDD____

This leaves slots 12-15 free for my new signal. However, to do this I repositioned three signals (B-D). For each signal I moved I have to send commands down to hardware and wait for a non-trivial amount of time. If I was smarter I could realize there is another approach.
I could reposition like this:

offset 0: A (size 4, fills 0-3)
offset 8: B (size 4, fills 8-11)
offset 12: C (size 2, fills 12-13)
offset 14: D (size 2, fills 14-15)

Pipe: AAAA____BBBBCCDD

I can now fit my new signal in offsets 4-7, AND I only had to reposition one signal (B), thus saving on hardware calls. I'm wondering if there is a good algorithm to detect situations like this, where I can 'fit' a signal onto a pipe with the minimum number of signals moved. The approach that comes to mind is an N! algorithm, which basically amounts to "generate every possible distribution, calculate how many moves each resulted in". I'm looking for a faster approach. The approach does _not_ have to be 100% perfect; I'm looking primarily to minimize the average case, so long as the worst case isn't made too horrendous. I know that I will never have more than 255 signals on a given pipe, so I may be able to 'get away' with N!, as bad as that sounds. I also know each signal's size is a power of 2, as is the pipe size. Also, are there any brilliant algorithms for placing signals that minimize fragmentation? * * * Question answered; see below. I wanted to expand on the answer slightly to better explain how defragmentation would occur for anyone reading. The buddy answer does explain it, but I wanted to point out a simpler approach to conceptualize, and to explain the defragging part in more detail, since that was the original question. I'm explaining my approach, which is a slightly different/simpler approach but effectively keeps the 'buddy' concept. I'm not precomputing blocks or labeling them; that is too much effort to implement and maintain. For me the CPU cost of calculating where a signal will go is pretty trivial compared to the actual placing/deletion of signals, so I can afford to lose a tiny, linear amount of CPU by not pre-computing, to simplify my logic. So the process for insertion is: All signals are kept on boundaries equal to the signal size, so a signal will start on an offset where offset % signalsize == 0. For the actual offset we walk through and figure out the intervals that keep this boundary. So if my signal is size 4 on a size 16 pipe, I'll look at intervals 0-3, 4-7, 8-11, 12-15. For each interval, check if all space in that interval is free. In the simple case we have an interval with no signals in it, and we just put the signal at that interval. Important: we look at the intervals in order and place our signal in _the first free_ interval; this ensures we break the smallest possible 'block' (using buddy terms) when we add our signal. This should be equivalent to the buddy approach described by im3l96, except without precomputation of blocks. If no interval is completely free, we have to defrag. For this we find the interval with the most unused slots. If multiple intervals have the same number of unused slots, select the first one. We then find the largest signal in this interval and recursively call the same insert algorithm for this smaller signal (except we mark the interval we selected to insert our first signal as unavailable somehow). This moves the signal somewhere else that it will fit. We then find the next smallest signal in our selected interval and do the same, until we have moved all signals. Worst case: 2^N - 1 signals moved, where N is the number of potential signal sizes <= our signal (assuming signal sizes are powers of 2, N = log2(signalSize)). Here is an example.
An * stands for a slot marked as unavailable when we recursively call this method (i.e. the interval that the calling method wanted to place its signal in, and thus doesn't want the recursive call to try to place a signal into). Here is an example, the simplest case I could come up with that still demonstrates the full complexity. Note: the following structure would be hard to create, but _could_ result from the buddy approach if someone tried very, very hard.

FFFFFFFF__AA_B__EEEE_HGGCC_DII_J

Someone passes in a signal Z of size 8; we select offset 8:

defragInsert(Z, size 8)
effective structure: FFFFFFFF__AA_B__EEEE_HGGCC_DII_J
placing signal in interval: __AA_B__
defragInput(A, size 2)
effective structure: FFFFFFFF********EEEE_HGGCC_DII_J
place signal in interval (offset 20) _H
defragInput(H, size 1)
effective structure: FFFFFFFF********EEEE**GGCC_DII_J
place signal between C & D
return defragInput(H, size 1)
effective structure: FFFFFFFF********EEEE__GGCCHDII_J
place H at offset 20 now that it's open
return defragInput(A, size 2)
effective structure: FFFFFFFF_ ___ _B__EEEEAAGGCCHDII_J
move B now...
defragInput(B, size 1)
effective structure: FFFFFFFF* ****** *EEEEAAGGCCHDII_J
place B between I & J
return defragInput(B, size 1)
effective structure: FFFFFFFF_ **__** _EEEEAAGGCCHDIIBJ
add B
return defragInsert(Z, size 8)
final structure: FFFFFFFFzzzzzzzzEEEEAAGGCCHDIIBJ"} {"_id": "95606", "title": "Is there a simple, flat, XML-based query-able data storage solution?", "text": "I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include:

1. Data is wholly contained within XML nodes, in flat text files.
2. There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, dependent on the complexity of the set of dependencies.
3. Server-side file-system reads and writes.
4. A client-side interface element, accessible in any browser without a plug-in.

Some extra, preferred (but optional) requirements include:

1. Respond to simple SQL, or similarly syntaxed, queries.
2. Serve the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.

A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint.* I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know, that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?"} {"_id": "58291", "title": "Duplication in parallel inheritance hierarchies", "text": "Using an OO language with static typing (like Java), what are good ways to represent the following model invariant without large amounts of duplication?
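[Aside on the defrag question above (_id 198389), a minimal Python sketch of the insertion process it describes: try the aligned intervals in order and take the first free one, otherwise recursively evict the occupants of the least-occupied aligned interval. This is one illustrative reading of the description, not tested production logic; it assumes unique signal labels and that the new signal can actually fit after defragmenting:

    def intervals(n, size):
        return [(s, s + size) for s in range(0, n, size)]   # aligned slots only

    def overlaps(iv, banned):
        a, b = iv
        return any(not (b <= lo or a >= hi) for lo, hi in banned)

    def insert(pipe, name, size, banned=frozenset()):
        """pipe: list of labels, None = free slot. Returns number of signals moved."""
        cands = [iv for iv in intervals(len(pipe), size) if not overlaps(iv, banned)]
        for a, b in cands:                                  # first free aligned interval wins
            if all(s is None for s in pipe[a:b]):
                pipe[a:b] = [name] * size
                return 0
        # no free interval: defrag the candidate with the most unused slots
        a, b = max(cands, key=lambda iv: pipe[iv[0]:iv[1]].count(None))
        moved = 0
        while True:
            occupants = {s for s in pipe[a:b] if s is not None}
            if not occupants:
                break
            victim = max(occupants, key=pipe.count)         # evict the largest signal first
            vsize = pipe.count(victim)
            pipe[:] = [None if s == victim else s for s in pipe]
            moved += 1 + insert(pipe, victim, vsize, banned | {(a, b)})
        pipe[a:b] = [name] * size
        return moved

    pipe = list("AAAA") + [None] + list("CC") + [None] + list("BBBB") + 2 * [None] + list("DD")
    print(insert(pipe, "Z", 4), "".join(s or "_" for s in pipe))  # 1 AAAAZZZZBBBBCCDD

On the question's own example it moves a single signal (C, to slots 12-13) before placing Z at offsets 4-7, matching the one-move optimum the asker found by hand.]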
Each flavour requires its own data (unique to that flavour) on each of the objects within that structure, as well as some shared data. But within each instance of the aggregation, only objects of one (the same) flavour are allowed. FooContainer can contain FooSources and FooDestinations and associations between the "Foo" objects; BarContainer can contain BarSources and BarDestinations and associations between the "Bar" objects.

interface Container {
    List<Source> sources();
    List<Destination> destinations();
    List<Association> associations();
}

interface FooContainer extends Container {
    List<FooSource> sources();
    List<FooDestination> destinations();
    List<FooAssociation> associations();
}

interface BarContainer extends Container {
    List<BarSource> sources();
    List<BarDestination> destinations();
    List<BarAssociation> associations();
}

interface Source {
    String getSourceDetail1();
}

interface FooSource extends Source {
    String getSourceDetail2();
}

interface BarSource extends Source {
    String getSourceDetail3();
}

interface Destination {
    String getDestinationDetail1();
}

interface FooDestination extends Destination {
    String getDestinationDetail2();
}

interface BarDestination extends Destination {
    String getDestinationDetail3();
}

interface Association {
    Source getSource();
    Destination getDestination();
}

interface FooAssociation extends Association {
    FooSource getSource();
    FooDestination getDestination();
    String getFooAssociationDetail();
}

interface BarAssociation extends Association {
    BarSource getSource();
    BarDestination getDestination();
    String getBarAssociationDetail();
}"} {"_id": "195158", "title": "Resources for writing a parser combinator library", "text": "I often use parser combinator libraries, but I've never written one. What are good resources for getting started? In case it matters, I'm using Julia, a functional (but not lazy) language."} {"_id": "58292", "title": "With a small development team, how do you organize second-level support?", "text": "Say you have a team of 5 developers, and your inhouse customers demand a reasonable support availability of, say, 5 days a week, 9am-6pm. I can imagine the following scenarios:

* **the customers approach the same guy**, every time. Downside: single point of failure if the guy is unavailable.
* **each developer is assigned one week of support duty**. Downside: how do you distribute the work evenly in times of planned (vacation) and unplanned (sickness) unavailability?
* **each developer is assigned one day of support duty**. Downside: similar to above, but not as bad.
* a **randomly picked developer handles the support request**. Downside: maybe not fair, see above.

What is your experience?"} {"_id": "142087", "title": "Why does Rails use YAML to configure the database instead of plain Ruby code?", "text": "Most configuration files in Ruby, such as Gemfile and gemspec, are just Ruby code itself. Why is the database configuration file in Rails the exception?"} {"_id": "142086", "title": "Why the recent shift to removing/omitting semicolons from Javascript?", "text": "It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript, semicolons are optional, and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons _not_ to use them, just that leaving them out has few side-effects.
Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were:

* it's a matter of preference for his team;
* less typing

Are there other good reasons to leave them out? Frankly, I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad?"} {"_id": "236741", "title": "Why did Microsoft drop the RESX model for RESW in Windows Store applications?", "text": "Why did Microsoft choose to change the resource management system from .NET's RESX files? RESX had useful code generation, providing developers with auto-completion for resource names and outputting IMHO very readable code. The new RESW format is, as far as I know, the same bare XML files, but without any code generation, forcing developers to write more code and depriving them of compile-time error detection."} {"_id": "112629", "title": "Learning path for web developer .NET or Java", "text": "I am interested to know how many real-world web application servers are hosted on Windows. I am going to learn C# and ASP.NET and want to convert myself from an embedded developer to a web app developer. A friend told me that there are way more Linux-based servers than Windows servers. He also mentioned that the Java skill stack is much more useful than .NET in the world of web applications. My experience with Java and C# is roughly the same. I am an experienced C++ developer though. Can anyone give me some suggestions about it? Many thanks"} {"_id": "143069", "title": "Programming Test", "text": "> **Possible Duplicate:** > Good interview programming projects We are looking to hire some more Java developers onto our team, and plan to test their coding abilities with a test. We currently use a web-based Java test that automatically compiles and runs the code, but it is very flaky and we're having problems with our candidates losing their work on this site. Not only is this frustrating for everyone, it makes us look like we don't know what we're doing. Is there a popular testing suite out there? What do you use? I'm **not** interested in dogmatic arguments on whether or not I should be testing my candidates in this way; I'm looking for a tool that will help me do it."} {"_id": "196492", "title": "Design for using your own API", "text": "So I'm planning to use APIs for my host app. But the APIs are built such that they require a session key for every request. So my question is, how would I dogfood my API? I'm thinking along the lines of creating a "special" key for my host app (since it makes no sense to request a key for my own use), but anyone inspecting the headers of the request could find this key and literally use it, bypassing requesting their own keys. Maybe there's a best practice for dogfooding our own API without a special key, or some way to differentiate whether the request is coming from the host app or the public. I couldn't use the IP address to differentiate either, because the public could be using the same server to call the APIs.
That's just the way it is, and it is one of the constraints to keep in mind."} {"_id": "59524", "title": "C++ Committee Members?", "text": "I have been searching the web for information regarding who the people on the C++ committee are, and what their backgrounds are. However, to my surprise, I have not found any such information. Where can I find information about the C++ committee members?"} {"_id": "196496", "title": "How to manage a large-scale project in node.js keeping everything asynchronous?", "text": "I have a large module which has to process more than 10k requests/responses per second. Every request carries JSON that needs to be processed and verified against the database, and the response is generated on the basis of the db query. Here is an example.

Function 1:

function ('/get', function (req1, res1, next) {
    // to process the req data
    processData(req1, res1);
});

Function 2:

processData(req1, res1) {
    var J = JSON.parse(req1.body)
    // it has to read the db three times
    // process the json, build the response and return
    conditionCheck(J, res1) {
        var a = someValue() {
            // select a value from a nosql collection which has
            // more than 1000 documents, and
            // I have to iterate a for loop to check the condition ... etc
            // ...........
        }
        ........
        dataRead(var a, res1) {
            // is it possible to send the response for the req1 object here?
            res1.send({value b: abcd123})
        }
    }
}

Function 3: ..... and so on.

The major problem is that all the code I have written inside processData is synchronous, because each piece of code depends on the previous callback, and so many condition checks are used several times. So is it good to do such heavy processing synchronously inside Node? If I write the code using async, sometimes the whole scenario gets into a deadlock condition. How can I avoid such behavior? Do async or functions like step have an effect on performance? In such a series of functions, how can we reduce latency?"} {"_id": "196499", "title": "Why should CS teachers stop teaching applets?", "text": "Why should CS teachers **stop** teaching Java applets? As a veteran of applets and forums, I think it is high time to make a call that teachers should stop teaching applets in general, & especially AWT-based applets. Applets are an advanced and specialized type of app that has little place in the real world around us (most things that are done by applets can instead be better achieved using Javascript and HTML 5)."} {"_id": "34267", "title": "How to program something with the expectation that it will work the first time?", "text": "I had a friend in college who programmed something that worked the first time; that was pretty amazing. But as for me, I just fire up the debugger as soon as I finally get whatever I'm working on to compile - saves me time (kidding of course, I sometimes hold out a little bit of hope or use a lot of premeditated debug strings). What's the best way to approach the Dijkstraian ideal for our programs? -or- Is this just some sort of pie-in-the-sky old fool's quest for greatness, applicable only to finite tasks, that no one should hope for in our professional lives because programming is just too complex?"} {"_id": "34265", "title": "What HR policies prevent you from finding skillful programmers?", "text": "During our interview process we give the candidate a programming test as well as a programming design problem. Our HR department has told us to stop administering the programming test due to legal concerns. This would really hinder our ability to screen candidates.
Are there other HR policies that prevent you from hiring capable programmers?"} {"_id": "251936", "title": "How does a stack VM manage with only one stack?", "text": "Lately I've been asking a lot of questions here about VMs. Here's another one: I understand that stack-based VMs often use only one stack - the call stack - for everything. E.g. it is also used for evaluation of arithmetic expressions. What I don't understand is, how doesn't this complicate things a whole lot? I'll demonstrate what I mean with an example. Please consider the following pseudocode program:

func main:
    funcA()

func funcA:
    2 + 4 * 8

This would compile to the following bytecode:

main:
    call funcA
    end
funcA:
    push 2
    push 4
    push 8
    mult
    add
    end

(In this bytecode, the program starts at `main:`. `call` pushes the program counter onto the stack and jumps to the specified label. When an `end` is reached, the top of the stack - assumed to be a line number - is popped and we jump there.) So let's see what happens here: In `call funcA`, we push the program counter (i.e. next line number) onto the stack. Then we jump to `funcA`. In `funcA` some computation is made. After the computation, the number `34` is left at the top of the stack. When we reach `end`, we pop the top of the stack, assuming it's the line number we should return to. But it isn't: the line number to return to is buried underneath. How should `end` know about this? To avoid all this mess, we can just have a separate data stack and call stack, and not mix the two. So my question is: why do some VMs (such as the JVM) use one stack for everything, and when they do, how do they handle situations such as the one described above?"} {"_id": "251932", "title": "Open Source-Only license", "text": "I want to know if a license exists that is similar to the GPL but allows only source distribution (that is, no binary distribution). Why do I want this? The idea is that I want fellow programmers to be able to have the benefits of open source software BUT also to retain control over the consumer binary distribution. Specifically, I want to prevent my software being picked up by organizations like the Linux distributors. I want the users of my software to download binaries only from my site. Is this possible? Thanks"} {"_id": "194404", "title": "Silverlight support from Microsoft", "text": "I heard Silverlight will not be supported any more by Microsoft. Is it true? If so, could anyone share Microsoft's announcement with me? And if there are already applications developed in Silverlight, what is the new technology to migrate to? thanks in advance, Lin"} {"_id": "212678", "title": "Should you hard code your data across all unit tests?", "text": "Most unit testing tutorials/examples out there usually involve defining the data to be tested for each individual test. I guess this is part of the "everything should be tested in isolation" theory. However, I've found that when dealing with multi-tier applications with a lot of DI, the code required for setting up each test gets very long-winded. Instead, I've built a number of testbase classes which I can now inherit, which have a lot of test scaffolding pre-built. As part of this, I'm also building fake datasets which represent the DB of a running application, albeit with usually only one or two rows in each "table". Is it an accepted practice to predefine, if not all, then the majority of the test data across all the unit tests? **Update** From the comments below it does feel like I'm doing more integration than unit testing.
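[Aside on the stack-VM question above (_id 251936): a minimal Python sketch of the interpreter it describes, using the separate data and call stacks the asker proposes at the end. Real single-stack VMs such as the JVM avoid the problem differently: each method invocation gets its own frame holding both its operand stack and its return information, so a return never has to dig under operands. Opcode names follow the post:

    def run(program, start):
        data, calls, pc = [], [], start          # separate operand and call stacks
        while True:
            op, *args = program[pc]
            pc += 1
            if op == "push":
                data.append(args[0])
            elif op == "add":
                b, a = data.pop(), data.pop(); data.append(a + b)
            elif op == "mult":
                b, a = data.pop(), data.pop(); data.append(a * b)
            elif op == "call":
                calls.append(pc)                 # return address goes on the call stack
                pc = args[0]
            elif op == "end":
                if not calls:
                    return data                  # returning from main
                pc = calls.pop()                 # operands on `data` stay untouched

    program = [
        ("call", 2), ("end",),                   # main
        ("push", 2), ("push", 4), ("push", 8),   # funcA: 2 + 4 * 8
        ("mult",), ("add",), ("end",),
    ]
    print(run(program, 0))  # [34] -- the 34 survives the return, the address doesn't
]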
My current project is .NET MVC, using Unit of Work over EF Code First, and Moq for testing. I've mocked the UoW and the repositories, but I'm using the real business logic classes and testing the controller actions. The tests will often check that the UoW has been committed, e.g.:

[TestClass]
public class SetupControllerTests : SetupControllerTestBase
{
    [TestMethod]
    public void UserInvite_ExistingUser_DoesntInsertNewUser()
    {
        // Arrange
        var model = new Mandy.App.Models.Setup.UserInvite() {
            Email = userData.First().Email
        };
        // Act
        setupController.UserInvite(model);
        // Assert
        mockUserSet.Verify(m => m.Add(It.IsAny<User>()), Times.Never);
        mockUnitOfWork.Verify(m => m.Commit(), Times.Once);
    }
}

`SetupControllerTestBase` is building the mock UoW and instantiating the `userLogic`. A lot of the tests require having an existing user or product in the database, so I've pre-populated what the mock UoW returns, in this example `userData`, which is just an `IList<User>` with a single user record."} {"_id": "251939", "title": "Does the pattern of passing in one object instead of many parameters to a constructor have a name?", "text": "If you have a constructor that takes a lot of parameters, like this:

public OrgUnitsHalRepresentation(List<OrgUnitSummaryHalRepresentation> orgUnitSummaryHalRepresentationList,
                                 int count, int providedCount, int total, int page,
                                 int filterLimit, boolean hasNextPage, boolean isDefaultCount)

you can use a separate class that takes nothing in the constructor, set these parameters using setters, and pass that into the constructor instead:

public OrgUnitsHalRepresentation(Pagination(Name of pattern) p)

Does this have a name, so that I can use it in my class name?"} {"_id": "49924", "title": "Collocation in Code", "text": "Quite some time ago I remember reading an article from 'Joel on Software' that mentioned collocation of information in code was important. By collocation, I mean that relevant information about the code is present when the code is. I'm currently writing an article that has a small bit in it about collocation, so I went searching for sources and found the quote in the article 'Making Wrong Code Look Wrong' > In order to make code really, really robust, when you code-review it, you need to have coding conventions that allow collocation. In other words, the more information about what code is doing is located right in front of your eyes, the better a job you'll do at finding the mistakes. When you have code that says For me, collocation isn't just about the code itself, but the tool used to view the code. If it can help with the 'collocation factor' (term coined by me?), I believe it can help with the programmer's productivity. Take for example the modern IDEs that show you the variable's type by hovering over it. **Are there any other articles written about collocation in code, and/or are there other terms that this is known by?**"} {"_id": "158687", "title": "Any empirical research comparing IT project failure rates to other domains?", "text": "IT projects are legendary for high failure rates, cost overruns, and schedule overruns. Hence the motivations for questions like this. Is there any actual empirical research that compares the failure rates of IT projects to those in other domains, in particular to engineering domains? For example, has anybody compared cost and schedule overruns in IT projects to construction projects of comparable scope?
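[Aside on the constructor question above (_id 251939): this refactoring is commonly called a Parameter Object (Fowler's "Introduce Parameter Object"); when the object is assembled step by step through setters, it shades into the Builder pattern. A hedged sketch of the idea in Python - field names taken from the post, everything else invented:

    from dataclasses import dataclass

    @dataclass
    class Pagination:            # the parameter object: groups the related constructor args
        count: int
        provided_count: int
        total: int
        page: int
        filter_limit: int
        has_next_page: bool
        is_default_count: bool

    class OrgUnitsHalRepresentation:
        def __init__(self, org_units, pagination: Pagination):
            self.org_units = org_units
            self.pagination = pagination

    rep = OrgUnitsHalRepresentation([], Pagination(10, 10, 42, 1, 10, True, False))
]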
There should be publicly available data here from government projects, but I'm not aware of anybody who's gone and done comparisons."} {"_id": "158685", "title": "Is it possible to use RubyGnome2's/QtRuby's HTML renderers to make a UI for a Ruby script?", "text": "I'd like to make a graphical user interface for my script, instead of running it from the console. I'm aware there's a wealth of UI libraries for Ruby, but I'm quite familiar with HTML and CSS and I'd like to use them for building an interface. So the question is: Is it possible to use an HTML rendering library to make such a UI? From what I understand, it's relatively easy to put in an HTML-rendered view of something, but is it possible to communicate back with the script? Like when I push that big red button, it actually tells the script to act on it? Obviously it's possible if the script runs on the server side, but I'd like to run it as a desktop application."} {"_id": "251110", "title": "Why would programmers ignore ISO standards?", "text": "One of the things I run into often is problems caused by programs which don't conform to ISO standards. One example would be not using the ISO country tables but making up their own shorthands, which goes okay for the United States (US) or the Netherlands (NL), but goes spectacularly wrong for the United Kingdom (GB, not UK) or Spain (ES, not SP) and a lot of other countries. As another example, internal date notations. Why would anyone store a date as 01/02/2014, ever? It is completely unclear whether that is 1st February or January 2nd, whereas if you use the ISO standard you just store 2014-02-01* and it's unambiguously February 1st. My question: When and why should a programmer make up their own constructs when there is an ISO standard available? * Store 2014-02-01, and format the date accordingly when showing it to an end user."} {"_id": "147561", "title": "Why use Pascal with Cocoa/Cocoa Touch?", "text": "I'm surprised to find that it might be possible to use Pascal with the Cocoa and Cocoa Touch frameworks. This is an interesting turn of events, as Pascal was the favored language for Mac development for a short while early in the history of the Macintosh. It looks like someone has gone to a lot of trouble to make Pascal work with Objective-C frameworks like Cocoa and Cocoa Touch, but I'm not sure I see the point. Is this mainly a refuge for Pascal developers coming from other development environments like Delphi, or are there compelling reasons to use Pascal in place of Objective-C or other languages? I don't mean to ask which language is "better." I'm just wondering if there's a reason beyond language preference that one would choose to use Pascal for MacOS X or iOS development."} {"_id": "147566", "title": "How would you create a mobile (android) offline wiki site?", "text": "My apologies in advance if this is not a good forum for this question; pointers to others happily accepted. On the off chance it matters, I'm not going to commercialize this idea or anything; if anything in here is interesting to you, use it as you see fit. My basic problem is this: My favorite place to do non-code writing (essays, fiction, etc) is on my Android phone when I'm traveling. Said traveling often causes the phone to lose all signal/web connection. I would like a system that auto-syncs with the phone in some fashion, can be written in a fairly wiki-ish style (i.e. links are supported, but I don't have to actually type out full HTML anchor tags), and can also be edited easily at a regular computer when needed.
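[Aside on the footnote in the ISO-standards question above (_id 251110) - store the ISO 8601 form, format only at display time - illustrated in a few lines of Python:

    from datetime import date

    d = date.fromisoformat("2014-02-01")   # the unambiguous ISO 8601 form: store this
    print(d.isoformat())                   # 2014-02-01 -- what actually gets persisted
    print(d.strftime("%d/%m/%Y"))          # 01/02/2014 -- a locale display format only
]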
I'd prefer the editing on the phone be as close to plain text as possible, because fine manipulation like that required to set something to bold or whatever is a pain; I'd rather just be able to type things out. Nice-to-have is the ability for other people to collaborate on documents. Very-nice-to-have is authentication, so that some documents can be private, without having to get into the auth quagmire myself. There really doesn't seem to be any such thing out there. The two options I've thought of for implementing this are: 1. Now that there exists a proper 2-way sync dropbox app ( https://play.google.com/store/apps/details?id=com.ttxapps.dropsync ), I could just use a plain-text editor on the phone and have a simple web app that presents the text via a wiki markup library of some kind out of the relevant dropbox folder. Advantages: Dead simple. Disadvantages: No easy auth or collaboration. 2. I could use Google Docs, and have a not-so-simple web app that pulls documents from there, treats them as wiki text, maybe does some caching, and presents them as a coherent-ish web site(s). Advantages: If I can get the auth to pass through properly, auth and collaboration are free. Disadvantages: Much more complex, and this is just a one-off personal project. I'm curious as to whether I've missed anything, any other easier ways to solve this problem. I'm actually a little surprised that no-one seems to have thought of backing a wiki on Google Docs; there's the Google app wiki stuff, but that pretty much requires a browser AFAICT, whereas there are several Google Doc apps on pretty much everything with offline sync options."} {"_id": "149501", "title": "Given an idea for an application, how should I decide which web technologies to use?", "text": "Background: I'm an experienced Java developer, know basic Javascript, HTML, and CSS (and find it easy to figure out how to do something), and have a little experience in Python. I've seen Ruby before and edited a few lines of existing scripts. I'm curious not so much about any given technology, but mainly about what my thought process should be when deciding which technologies to use. Let's take a simple application - maybe an online address book where I can enter contact information, and create groups of contacts. How should I go about evaluating the options that are out there? The number of frameworks out there is daunting, to say the least. This should be a learning experience for me, but I also want the knowledge that I gain to be relevant, and I'm afraid of using something that I'll never use again."} {"_id": "131610", "title": "Is it justifiable for a prospective employer to ask you to make a decision without full information?", "text": "## The Story I am currently in the process of finding a new job. Several days ago I had two interviews with prospective employers, both of which expressed interest in a second meeting. * Company A asked me to come over to talk today. There was no further interview; they simply made me an offer and explained terms. * Company B has scheduled me for a hands-on evaluation tomorrow, after which I will know their final answer within a week. I am fully confident they will also make me an offer, and moreover that it will be financially preferable to that of Company A. Since Company A offered first, I told them that I cannot evaluate their offer before Company B also lets me know of their decision. This did not sit well with the person I was talking with (the COO).
Here are his two main objections and the gist of my answers: * _"We do not want to be anyone's second best option"_ I explained that in fact they were my first option, in that all other things being equal I would prefer to work for them. In addition, since the meeting was chronologically first, there was really no way for me to be honest with them without letting them know; so this is not a matter of them being my second option, but rather of me being level at the negotiating table. * _"We cannot wait for you to make up your mind"_ I explained that, as they surely understand, making a decision without evaluating all alternatives would be me acting against my own best interest. In addition, the skill and modus operandi of evaluating before deciding is definitely something they will expect of me on the job (and something that factored into their decision to make an offer). Finally, there is a hard limit on how long they would have to wait for an answer (a week); I also volunteered to tell company B "I would appreciate it if you could inform me of your final decision as soon as possible, since I am also considering another offer". ## The Facts At this point I am fairly certain that Company A just wants to intimidate and pressure me into making an uninformed decision. Evidence towards this conclusion includes: * Their playing hardball during the first interview, which was quite intense and slightly uncomfortable. Initially I justified this as a result of their sense of technical excellence, but after this second meeting I have reconsidered. After all, if the reason for their behavior was the bar being too high for me, then making me an offer without any other intermediate step would not be logical. * The first interview being conducted mostly in "senior personnel" mode (they only asked two questions having straight and definite technical answers, which took maybe a minute out of almost two hours total time), but their offer being for a lower-ranked position. * Their ominous references to how bad the employment situation is right now in Greece, where I live; things like "we always pay on time and in euros" (with the obvious suggestion that in case of Greece defaulting they will not be affected, because they only do business with customers in the US). This I regard as pure FUD. * Their insistence that I need to give them an answer by tomorrow at most. I am fairly certain (due to other information I have) that they are not considering any other candidate for the same position; rather, for them it's a question of "do we pick this guy up or not?". Also, what kind of employer would agree to sign you up today but turn you down next week? And why would they ask me to behave in a manner that actually gives me away as someone they _wouldn't_ want to hire? It does not make sense. In fairness, I should also mention that these people _look like_ they have achieved a high Joel Test score in an environment where the median score is either zero or one (for using version control). They look like their conduct of everyday operations is dramatically better than most other IT shops around here. And they only work with US customers, which implies they achieve a level of professional standards that is, shall we say, _uncommon_ here in Greece. ## The Moral? Is there any convincing justification for why they would require an answer within two days at most? Especially since I told them I'm expecting another offer _and_ the _worst_ case scenario is an answer within a week?
Or am I correct in reconsidering whether I want to work for them at all? Thank you for your valuable input."} {"_id": "143027", "title": "What is the proper jargon to refer to a variable wrapped inside a function closure?", "text": "In JavaScript, there is no such thing as a "private" variable. In order to achieve encapsulation and information hiding in JavaScript, I can wrap a variable inside a function closure, like so:

var counter = (function() {
    var i = 0;
    var fn = {};
    fn.increment = function() { i++; };
    fn.get = function() { return i; };
    return fn;
})();

counter.increment();
counter.increment();
alert(counter.get()); // alerts '2'

Since I don't call `i` a private variable in JavaScript, what do I call it?"} {"_id": "50361", "title": "Custom Request Templates", "text": "What kind of information do you require from the project management team before you can proceed on a project? Is there a certain format they utilize on Programming Requests which helps you to understand exactly how the development team can succeed with this project? Example: I always like it when project managers mock up forms. It helps significantly to know how they are visualizing the UI for many tasks. Any suggestions on how we can assist the Project Management team in issuing Programming Requests that are as clear as day will be greatly appreciated. Thanks."} {"_id": "187126", "title": "Why does the .Net world seem to embrace magic strings instead of statically typed alternatives?", "text": "So, I work in .Net. I make open source projects in .Net. One of my biggest problems with it isn't necessarily with .Net, but with the community and frameworks around it. It seems that everywhere, magical naming schemes and strings are treated as the best way to do everything. Bold statement, but look at it: ASP.Net MVC: Hello world route:

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);

What this means is that ASP.Net MVC will somehow look up `HomeController` in your code. Somehow make a new instance of it, and then call the function `Index`, apparently with an `id` parameter of some sort. And then there are other things like:

RenderView("Categories", categories);

...or...

ViewData["Foobar"] = "meh";

And then there are similar things with XAML as well. `DataContext` is treated as an object, and you have to hope and pray that it resolves to the type you want. DependencyProperties must use magic strings and magic naming conventions. And things like this:

MyData myDataObject = new MyData(DateTime.Now);
Binding myBinding = new Binding("MyDataProperty");
myBinding.Source = myDataObject;

Although it relies more on casting and various magical runtime supports. Anyway, I say all that to end up here: Why is this so well tolerated in the .Net world? Aren't we using statically typed languages to almost always know what the type of things are? Why are reflection and type/method/property/whatever names (as strings) preferred so much in comparison to generics and delegates, or even code generation? Are there inherent reasons that I'm missing for why ASP.Net's routing syntax relies almost exclusively on reflection to actually resolve how to handle a route? I hate it when I change the name of a method or property and suddenly things break, but there don't appear to be any references to that method or property, and there are of course no compiler errors.
Why was the apparent convenience of magic strings considered "worth it"? I know there are also commonly statically typed alternatives to some things, but they usually take a backseat and seem to never be in tutorials or other beginner material."} {"_id": "50362", "title": "Proposal for a new position at work", "text": "I have an idea at work for a new Product Manager position at our office. I work with several developers, and it would be helpful to have someone working in a type of "Scrum Master" capacity, dividing out assignments and making sure they get completed. This position does not currently exist; however, I feel that I have enough evidence to indicate that it would be very helpful for our business. What is the best way to present this proposal to my boss? Is there a specific template that you know of for a new position? It should be able to describe the qualifications for the position, their responsibilities, and what metrics we would use to measure them. **UPDATE** With Anna's suggestion, I gave more details about this specific position. However, I would ideally like the most generic way to present a new position to my boss."} {"_id": "187123", "title": "Is there a code tag to indicate "needs improvement" in code comments?", "text": "TODO indicates "task to be completed", FIXME "to be fixed", etc. Is there a code tag to indicate "needs improvement" in code comments?"} {"_id": "187122", "title": "Proper Git setup between designers and developers?", "text": "Basically we now have 2 developers for an iOS project, 2 developers for an Android project and 1 designer doing designs for both projects. Right now, the way we exchange designs and images is through mail... so not very advanced or efficient. We're switching to Dropbox, but I feel like we could go a step further and somehow integrate the process into Git. For the iOS project, we are using Git, and I'm thinking that it might be a good idea to introduce the designer to Git as well. The problem is that, being a designer, it might be a bit difficult for him to understand source version control. As the 2 iOS devs, we'll probably be doing a lot of commits, branching, merging, but the designer will most likely only have to update images every now and then. We'd like to keep the setup as simple as possible (especially for the designer). We are using GitHub, so their OSX application might come in very handy for this. Does anyone else have experience or best practices about how to set up the branching/merging and integrating a designer into a technical Git project? Articles, blog posts etc. are welcomed as well."} {"_id": "197444", "title": "Building a WordPress-like filter system", "text": "This question is purely hypothetical. I use WordPress a lot and know the filter structure from an implementation point of view. I'm now wondering what's the best way to implement such a structure (not the way WordPress uses, but the _best_ way). I will give a short overview of what the WordPress filter structure is. After that, I'll list my requirements and thoughts. Finally, I'll ask some questions. ### WordPress' filter structure WordPress sends nearly every piece of data through a _filter function_ before it outputs the data to the browser. The WordPress Codex gives information in the Plugin API and the Filter Reference. For a plugin developer, it's quite easy to add a filter: > 1. Create the PHP function that filters the data. > 2. Hook to the filter in WordPress, by calling `add_filter()` > 3. Put your PHP function in a plugin file, and activate it.
Let's look at these steps in some more detail: 1. A filter function is a PHP function that takes one parameter, the input, and returns the output. It should not `echo` or `print` anything. 2. The `add_filter` function (reference) looks like: add_filter ( 'hook_name', 'your_filter', [priority], [accepted_args] ); `hook_name` is the _action hook_ at which the filter should be called. `your_filter` is the name of your filter function. `priority` gives the, well, the priority of the filter (default: 10), where lower numbers are more important. `accepted_args` tells WordPress the filter function will take more parameters, but let's leave that out here; it's not one of my requirements. 3. I don't have to explain this step, I think. ### Requirements I'd like to know how to implement a filter structure like WordPress has, and for the sake of an actual answerable question, I have made up a hypothetical case with some requirements. My case is a forum, in which I have several cases in which I'd want to use filters for the content, like: * A linkify filter to make bare URLs into nice, working hyperlinks * A smiley filter to replace ASCII smileys like ':D' with an image ![enter image description here](http://i.stack.imgur.com/IRi6N.gif) (sorry for the imgur abuse) I think the advantages of using filters for this would be: * Code clarity * Filters can be easily reused * Making it easy to let a user switch a filter on and off * Easy implementation of additional filters in the future ### My thoughts I imagine WordPress keeps a multidimensional array like `$filters[$hook][$filter]`. Here, `$hook` is the name of the hook at which the filter has to be called, and `$filter` is an array of the filter settings, basically those that were passed to the `add_filter()` function. When an action hook point is reached, WordPress can iterate through the `$filters[$current_hook]` array and execute every filter. ### Questions 1. Is this the best way to build a filter system with the requirements I listed? If so, why? If not, why not; and what would be a better system and why? 2. As I said, I want users to be able to switch filters on and off. I thought of adding an `enabled_filters` column to the user table in the database, which would be a bitmask of the enabled filters. That would mean every filter would have a unique identifier, but more importantly that only a limited number of filters would be possible. So this wouldn't be the way to go. I also thought of adding a table `enabled_filters` with columns `userId`, `filterName` and `enabled`, to set the filters on and off with a new row. With the filter name repeated in every row, this would cost some data space with many users and filters, so a better but similar idea would be adding a table `filters` with `id` and `filterName`, and changing the `filterName` column of the `enabled_filters` table into a `filterId` column. This would also allow an additional field in the `filters` table, `allow_disable`, to disable the disabling of the filter. Is this second approach a good one, or are there better options? The key requirement is that I don't want to have to modify the base system when adding a filter. 3. WordPress wants programmers to not print anything in a filter function, but to return a new string. That means you'll have a variable `$return` in your filter function, to which you append new data all the time with `$return .= '...';`.
Using `echo` or `print()` could be easier for programmers, and would also make it easier to port existing code (which uses print functions) to a filter function. The filter system could use the `ob_*` functions to capture the printed data instead of sending it to the browser. Would this be a good way to implement filters? Are there any disadvantages, like speed? 4. The last question is about when to use filters, and when not. It seems clear to me that the listed cases (linkify and smileys) are cases where filters are well-used. For things like signatures or avatars, to stick with the forum, it's different. I've tried to figure out why it feels different, and I think it's because that would limit the use of a filter to one place. For example, avatar and profile overview filters could only be used next to a post. One of the nice things about a filter is that you can add the same filter function to a different action hook, so that you can pass both the post and the signature content through the linkify filter. Am I right here? Is it true that one shouldn't use filters for avatars, profile overviews and signatures? What can be said about general rules for when and when not to use a filter? If I were to write documentation on my system for third-party developers, what should I write down on this topic?"} {"_id": "92556", "title": "Do variable names affect the performance of websites?", "text": "Do variable names affect website performance? I know any effect is going to be very small, but can anyone still provide reasons for not choosing a long variable name, from a performance perspective?"} {"_id": "234310", "title": "What should a repository really do?", "text": "I've heard a lot about the repository pattern, but I didn't quite understand what a repository should really do. When I say "what a repository should really do", I'm mainly concerned about which methods it should provide. For instance, should a repository really provide CRUD methods, or should it provide some different kind of method? I mean, should repositories contain business logic, or should they simply contain the logic to communicate with the data store and manage the entities to be saved or loaded? Also, I've heard that repositories are units of persistence for aggregates. But how is that? I fail to understand how this works in practice. I thought that we should have just one interface `IRepository` which contains the CRUD methods, and then for any entity the implementation would simply contain the logic to save and retrieve that type from the data store."} {"_id": "58436", "title": "Skills for RAD developer", "text": "I am about to embark on an exquisite journey. I have applied for an event where, in the span of 48 straight hours, strangers meet, throw around some great ideas, decide on teams, and make a working prototype of an IT project. All within 48 hours. I anticipate that this will be a very skill- and ability-intensive experience, and I want to be prepared. Since I will need to develop my part of the code quickly, I have the following question: **What would be the most needed skills for these 48 hours?** I do know that things like proper documentation, version control and such are pretty important for full-fledged application/program/web development, but what about this span of frantic coding? _Background: I am a web developer, so answers applicable to web development would be more appreciated than others._
How to best approach it?", "text": "I've searched, but found nothing directly applicable. I have 15 years or so of development experience, mainly in Unix, C and C++ (along with assorted scripting/minor languages); the last 5 years of this was contracting. My various testimonials and references over the years eventually convinced me I was pretty good at it too. Now, for the last 8 years I've been entirely non-technical, and I now know that to have been the wrong choice. The problem I face now, apart from the fact that the skill set on the resume would put off some potential employers, is that I've almost completely forgotten C++ and the little Java I had. I don't much fancy starting from scratch again! But that's how it feels when I sit and look at any non-trivial C++ code. All that OO and class stuff and I'll pretty much give you a blank look. Surprisingly, my C is still OK, well, syntactically, though I imagine my design capabilities have taken quite a hit too. Has anyone had experience of something similar? Am I realistically looking at years to get back to being half decent again? Apart from much practice and reading, is there anything I can usefully do to get up to speed again faster? Ideally I don't want to face being stuck with the career change I **mistakenly** made 8 years ago. Help!"} {"_id": "24252", "title": "Diplomatically point out the obvious problem in a product", "text": "As we all know, every piece of software has bugs in it; it is only a matter of time until they are discovered. Suppose you just found that your product has a potentially big issue, and it was not developed by you. How would you deal with it? I usually speak up with some data & analysis even if it is not my part of the code. I am wondering if that is too offensive, because I have often faced some resistance (depending on the issue), which would eventually fade."} {"_id": "160618", "title": "Is it OK for a function to modify a parameter", "text": "We have a data layer that wraps Linq To SQL. In this datalayer we have this method (simplified) int InsertReport(Report report) { db.Reports.InsertOnSubmit(report); db.SubmitChanges(); return report.ID; } On submit changes, the report ID is updated with the value in the database, which we then return. From the calling side it looks like this (simplified) var report = new Report(); DataLayer.InsertReport(report); // Do something with report.ID Looking at the code, ID has been set inside the InsertReport function as a kind of side effect, and then we are ignoring the return value. My question is, should I rely on the side effect and do something like this instead: void InsertReport(Report report) { db.Reports.InsertOnSubmit(report); db.SubmitChanges(); } or should we prevent it int InsertReport(Report report) { var newReport = report.Clone(); db.Reports.InsertOnSubmit(newReport); db.SubmitChanges(); return newReport.ID; } maybe even Report InsertReport(Report report) { var newReport = report.Clone(); db.Reports.InsertOnSubmit(newReport); db.SubmitChanges(); return newReport; } This question was raised when we created a unit test and found that it's not really clear that the report parameter's ID property gets updated, and that mocking the side-effect behavior felt wrong; a code smell, if you will."} {"_id": "200777", "title": "Would this development environment cover the bases for iOS and Windows 8/Windows Phone 8 apps?", "text": "I'm an experienced C# developer. I've created a couple Windows Phone apps and a Windows 8 app but have wanted to develop for iOS too. What I'm thinking is adding the following equipment and software to make this possible: 1. 
MacBook Air - necessary to compile/deploy to Apple's App Store and for some development tasks, as well as tools like PaintCode 2. iPad Mini - less expensive and able to do double duty testing for both iPhone and iPad screens (Ideally I'd have the budget for a phone too, but not in the cards) 3. Xamarin Studio Business Edition - pricier but integrates into Visual Studio, which is my primary dev environment and also home to Win8/WP8 app code Basically, I'm just looking for a reality check. Is this overkill? Not enough? I know I could save a few hundred getting a Mac Mini, but I have other uses for a laptop. Can I get by with the Indie version of Xamarin or am I crippling productivity by constantly switching environments? (As soon as I get the hardware I'll start by testing the free version to help answer that question)."} {"_id": "164726", "title": "Storing Projects on Google Drive (Cloud)", "text": "I've started using Google Drive for my cloud needs and backing up pretty much everything. I've got the app installed so it auto-syncs all my content in most things. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while using sync. My theory is that if I did use it, I'd never have to worry about system crashes or lost code before backups, but if I do use it, it will be syncing a lot, and I thought there might be problems with it detecting changes and trying to sync, for example, halfway through compiling. Bandwidth isn't an issue as I have a fast connection and unlimited monthly allowance. Has anyone ever used this, or similar cloud-based syncing (Dropbox etc.), for this, and do you know whether it works or not, or whether there are any potential problems?"} {"_id": "114846", "title": "Why has C prevailed over Pascal?", "text": "My understanding is that in the 1980s, and perhaps in the 1990s too, Pascal and C were pretty much head-to-head as production languages. Is the ultimate demise of Pascal only due to Borland's neglect of Delphi? Or was there more, such as bad luck or perhaps something inherently wrong with Pascal (any hopes for its revival?). I hope it's not an open, unanswerable question. I'm interested in historical facts and observations one can back up, rather than likes and dislikes. I also failed to find a duplicate question, which actually surprised me somewhat."} {"_id": "72036", "title": "What parts of my configuration and my code should I not post?", "text": "When people post code on forums they tend to change or blur out parts of their code. Probably because they want to protect certain parts that might be exploited if it ends up in the wrong hands, I guess? But is it really necessary to do this? Here are a couple of things I see a lot: * Renaming id's and class names in HTML and CSS * Renaming variables and functions in code * Blurring out parts of a folder structure * Changing stored procedure names and their parameters * Posting of example code rather than real code Passwords and connection strings to the DB are obvious things you shouldn't post, but how about the rest? Is it ok to give out a DB name? Is there anything in the web.config you shouldn't post? How about the .htaccess file on Apache or a folder structure on the system, etc... 
Basically what I'm asking is: which parts are safe to post and which are not?"} {"_id": "28582", "title": "Do you have any team guidelines regarding exceptions?", "text": "My team recently inherited a project from a team where the number of developers dropped so low they had to offload some work. One of the projects we inherited was a project littered with nested code and awful exception handling (exceptions were in effect handled as goto statements and thus used as a part of the normal program flow). All in all, it was a hairy code ball someone had been coughing up for a few years. Now, we've had a few team guidelines in place for quite some time, but all of them regard the structure of objects, coding styles and whatnot. We haven't covered exception handling. So I'm wondering if you have any guidelines in your teams regarding exception handling, and if so, how you enforce them?"} {"_id": "164728", "title": "Is it bad practice to output from within a function?", "text": "For example, should I be doing something like:

">
    <input value=\"<?php echo getValue(); ?>\">

    or

    <input value=\"<?php printValue(); ?>\">

I understand they both achieve the same end result, but are there benefits to doing it one way over the other? Or does it not even matter?"} {"_id": "28585", "title": "Ruby and Enterprise. Any future?", "text": "After Heroku's acquisition by SalesForce, is there any chance of Ruby being adopted in enterprise development? If you know any good examples already, please point to them."} {"_id": "77201", "title": "ODBC License Restrictions / Need a way around the limitation - here are my thoughts. What do you suggest?", "text": "We are currently using an old legacy ERP system (Thoroughbred) onto which we have been bolting functionality by means of web apps we design in house. I interact with the legacy system via ODBC and have to pay on a per-connection basis via licenses. I would like to build an intermediate system that would take the SQL requests and process them so that any of our apps could just connect to it to avoid the license issue. What would be the best way to build this adapter? I was thinking a web service, but to be honest I am not even sure technically what a web service is. I envision that it just takes in requests and \"does stuff\" with the data and then returns results back to the requester. Sorry about being vague, but like I said, I don't have a lot of experience working with web services. Maybe a web service isn't the best way to handle it and I should come at the problem from another direction. My current strengths are Python and TurboGears, so if I can stay within my comfort zone it's a plus, but that is not a requirement. Thanks for any and all input!"} {"_id": "73783", "title": "Seeking Windows file compare which accepts directories and filters for source code only", "text": "There are some good free file compare programs out there and most of them will compare directories too. But what I am looking for is one where I can specify a filter like *.cpp and have it ignore object files, makefiles, executables, etc. without me having to manually remove those before comparing. Any suggestions?"} {"_id": "68882", "title": "Recommended guidelines for Mercurial setups", "text": "I've just recently installed mercurial, and have been playing around with it. Overall, it seems like a great way to do version control. However, the distributed nature of its architecture makes it _so_ flexible, I'm not really sure what the ideal configuration/topology is. I'm well aware that asking something like \"what is the ideal Mercurial topology\" will never get a meaningful answer without more specific details ranging from business requirements, number of developers, security concerns, nature of the project, etc. However, are there any general guidelines or \"best practices\" when it comes to setting up a Mercurial system? In particular, I'm interested in hearing whether most companies/projects set up a centralized, \"canonical\" repository. Mercurial doesn't require that you do this, and you could essentially say _anything_ is the \"canonical\" repository. (Perhaps you could consider the lead developer's working repository the \"canonical\" one.) But is it generally a good idea to set up a dedicated \"source control\" server which simply hosts the canonical repository, so every developer can clone it and work locally? Or is this unnecessary?"} {"_id": "209783", "title": "Not copyrighting code and then reusing functions later", "text": "I'm coding something for my job to copy directories and then use regex to make all the filenames uniform. 
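A minimal sketch of that kind of regex-driven rename in PHP — the directory path and the normalisation rules are invented placeholders, since the post doesn't describe the real naming mess:

    <?php
    // Hypothetical example: normalise names like 'Report FINAL (2).pdf'
    // to lowercase, underscore-separated 'report_final_2.pdf'.
    $dir = '/path/to/queue'; // placeholder path
    foreach (scandir($dir) as $file) {
        if (!is_file("$dir/$file")) {
            continue; // skip '.', '..' and subdirectories
        }
        $clean = strtolower($file);
        $clean = preg_replace('/[\s()]+/', '_', $clean);           // whitespace/parens -> _
        $clean = preg_replace('/_+(\.[a-z0-9]+)$/', '$1', $clean); // no _ before extension
        if ($clean !== $file) {
            rename("$dir/$file", "$dir/$clean");
        }
    }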
I'm an undergrad student programmer for my university and I'm not sure what a professional coder would consider reasonable in this case. My job is mostly to implement mathematical research. The current assignment is outside what I've been doing, and it seems like it would be really useful to have some of the functions available in case I (most likely) encounter a similar assignment at some point in my career in the future. It's going to be really useful because the current filenames are so screwed up right now that the admin assistants can't really keep up (and even trying to do so might just cause more human error). I'm thinking that it may not overall be useful outside the current assignment because the regex will be so customised to the way the files are incorrectly named. I'm wondering if this is simple work for a pro coder, and therefore I'm making this program out to be much more important in my head than it actually is, or if this is something that a coder would really want to copyright or otherwise retain the ability to use whenever and wherever else they feel like. They've already told me that there is no question that I'll be given credit for the work, which I'm excited about. My only concern was whether I should be thinking about reusing this code later or if this task is so generic that a pro coder would laugh about keeping the rights to such code. This thread was interesting but seems different from my situation: How can I reuse generic code for consulting between companies? EDIT: Florida, USA"} {"_id": "28632", "title": "What is an \"assertion framework\"?", "text": "I was reading about the js-test-driver unit-testing framework, when I found out that the guys behind the framework intend it to be integrated with an assertion framework. What is an assertion framework? Is it a kind of unit-testing framework? If that is the case, what is specific to such frameworks?"} {"_id": "237686", "title": "What are the bad points of using Core Data for iOS as an ORM", "text": "I am just starting to use `Core Data` for my iOS app and I am thinking about how to use Core Data for my needs. **The features of my app:** 1. The user looks for products from a catalog. 2. The user can add products to a basket. 3. The user can customise each selected product. 4. The user has to create an account to purchase the order. 5. The user has to add recipients for his order. 6. The user pays for the order. **How I would persist the data with `Core Data`:** * If the user isn't logged in to `iCloud` or to my `RESTful API`: Persist the data on the device. * If the user is logged in to `iCloud` but not to my `RESTful API`: Persist the data on `iCloud`. * If the user is logged in to my `RESTful API`: Persist the data on my back-end. **Question**: After reading some articles about `Core Data`, I quickly understood that it is really verbose and I would rewrite a lot of similar code in my app. After some research I found a wonderful article about libraries for `Core Data` by `NSHipster` and I am really interested in using the Objective-Record library for my app. But one sentence in the `NSHipster` article alerts me to the consequences of using such a library: > Using Core Data as an ORM necessarily limits the capabilities of Core Data > and muddies its conceptual purity. I would like more information about how a library inspired by `Active Record` \" _limits the capabilities_ \" of Core Data. 
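For context on the pattern being weighed: an Active Record-style API hangs persistence off the model itself. A minimal PHP rendering of the idea — made-up class and table names; the library in question is Objective-C, but the shape is the same:

    <?php
    // Active Record in miniature: the entity carries its own persistence.
    class Product {
        public static \PDO $pdo;  // shared connection, set once at bootstrap
        public ?int $id = null;
        public string $name = '';

        public static function find(int $id): ?self {
            $stmt = self::$pdo->prepare('SELECT id, name FROM products WHERE id = ?');
            $stmt->execute([$id]);
            $row = $stmt->fetch(\PDO::FETCH_ASSOC);
            if ($row === false) {
                return null;
            }
            $p = new self();
            $p->id = (int) $row['id'];
            $p->name = $row['name'];
            return $p;
        }

        public function save(): void {
            if ($this->id === null) {
                self::$pdo->prepare('INSERT INTO products (name) VALUES (?)')
                          ->execute([$this->name]);
                $this->id = (int) self::$pdo->lastInsertId();
            } else {
                self::$pdo->prepare('UPDATE products SET name = ? WHERE id = ?')
                          ->execute([$this->name, $this->id]);
            }
        }
    }

The convenience is real; the usual "limits the capabilities" objection is that the model becomes welded to a single storage and fetching strategy.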
I wouldn't want to regret the choice of using the Objective-Record library in my app if it poses problems for developing features later. Thanks! **Update:** The good point of using Core Data with a library inspired by Active Record is that I have to do a lot of \"requests\" inside my app, and this will allow me to write less code."} {"_id": "79041", "title": "How to apologize when you have broken the nightly build", "text": "My first commit in my project resulted in the nightly build being broken, and people are all over me as we are nearing the release. I want to send an apology email that should sound sincere and at the same time hint that this was my first commit and this would not be repeated any more. Being a non-native English speaker, I have difficulties coming up with the correct words. Can someone please help?"} {"_id": "94164", "title": "Feature vs. Function", "text": "Often I hear PMs (Project Managers) talk about feature and function. And I'm puzzled about how to differentiate them. Sometimes I think of a feature as equivalent to a user story. Something like \"As a user, Bob should be able to see a list of his payments\", and they call it a feature. Sometimes it gets as big as a subsystem, something like \"the ability to send SMS via web application\". A function, on the other hand, sometimes gets as small as a task, \"implementing digit grouping for number inputs\", while there are cases when it gets as big as a whole CRUD operation. My question is, how can we differentiate feature from function?"} {"_id": "197171", "title": "MVC URL formatting/design", "text": "In refactoring a lot of MVC code, I have run into an issue with my URL design. For example, let's say we have a `Venue` object public class Venue { public long ID { get; set; } public List<Event> Events { get; set; } } and an `Event` object public class Event { public long ID { get; set; } public Venue Venue { get; set; } } So my initial set up was to have an action set up in the `EventController` like public ActionResult List (long? ID) // http://example.org/API/Events/List/12 where the `ID` specified the Venue, so it would return a list of events for that venue. If an `ID` is not specified, it returns a list of **all** events. This results in a confusing URL, because the `ID` parameter is ambiguous and misleading. I then thought to change it to public ActionResult List (long? VenueID) // http://example.org/API/Events/List?VenueID=12 which makes a lot more sense and is clearer. An even cleaner URL would exist if the action was moved to the `VenueController` and set up like public ActionResult Events (long? ID) // http://example.org/API/Venue/12/Events as the `ID` would clearly specify the `Venue`'s `ID`. The issue with this URL is that you are primarily dealing with `Event` objects in the `VenueController`, which seems wrong. I have been leaning towards the first option (`http://example.org/API/Events/List?VenueID=12`) because, even though the other option is cleaner, it seems like I should keep the `Event` pages (as I view this List page as more related to the `Event` object) in the `EventController`. Any recommendations?"} {"_id": "197170", "title": "Getting practicality of PHP from Ruby or Python", "text": "I have a rather odd problem. I love the practicality of PHP - specifically that I can fairly safely assume that on any random server I'll have access to the MySQL libraries, and that I can go between PHP/HTML with `<?php ... ?>`. 
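A tiny illustration of that interleaving, with stand-in data where a query result would normally be:

    <?php $rows = [['name' => 'Ada'], ['name' => 'Grace']]; // stand-in for a DB result ?>
    <ul>
      <?php foreach ($rows as $row): ?>
        <li><?= htmlspecialchars($row['name']) ?></li>
      <?php endforeach; ?>
    </ul>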
That said, I find the language atrocious - Ruby and Python are considerably more expressive, yet since they're not web-specific, they lack both of those features - at least they did the last time I used them both. So my question is: is there some way to get the practicality of PHP from Ruby or Python (specifically the two issues I mentioned)? If not, is there some other language that doesn't feel like a C/C++ parser gone wrong?"} {"_id": "72787", "title": "Are wikis really appropriate to store documents for software development?", "text": "Everybody knows that well-documented software development leads to success. However, it usually means that not only plain text but also binary content will be involved in the documents, such as a UML diagram. And I've heard many people say that the version control system is not the appropriate place for binary files. I totally understand and agree with the issue. I asked several seasoned developers where the best place to store documents should be, and the answer I got was \"a wiki\". A wiki is good, but I considered another potential issue. How can the source code which has been stored in a version control system connect to its related document in the wiki? Let's say someone clones a git or mercurial repository. How can he/she find the document easily? Or have I just missed something? **Updated:** I know some wiki systems have the ability to integrate with source control systems. But my concern is not about the ability of integration. If you have cloned source code from a git repository, and after a while you get on a train and want to continue to work offline (which is a big feature of DVCS), then you suddenly realize you don't have any access to the document since you are working offline. On the other hand, if the document was stored in the git repository, you would have access to it with the repository cloned."} {"_id": "38516", "title": "XML in the attribute of other XML", "text": "Today, another developer was talking to me about how he addressed an issue he was working on. The solution he found was to stick a string of escaped XML into the attribute of another XML element. In my head I was screaming \"Is that even safe and wise to do???\". According to him, that has been done on tons of other projects within the company that transmit XML back and forth. My question is this - 1. Is that a safe/smart thing to do (XML in an XML attribute)? 2. If not, should I bring that up? I have only been with the company for 2 years and have noticed many things that are just asking for a major catastrophe to happen someday (both in and not in projects I work on). I don't want to be the one that always says that they are doing things wrong... Any thoughts would be greatly appreciated! **FYI - UPDATED INFO RE: APPLICATION -** Flash is calling this API and parsing the XML it gets in the response. I do not think the Flash is using XPath or anything, just string parsing (but I could be wrong). I do not work on the Flash aspect, so I do not know where to look (nor would I understand it)."} {"_id": "50585", "title": "Computer Science Degrees and Real-World Experience", "text": "Recently, at a family reunion-type event, I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer in lieu of actual programming experience. The kid has been working with Python and the Blender project as he's into making games and the like; it sounds like he has some decent programming chops. 
Now, as someone that has gone through a computer science degree, my initial response to this question is to say, \"You absolutely MUST get a computer science degree in order to get a job as a programmer!\" However, as I thought about this, I was unsure as to whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. Now, for me, I can say that I rarely use anything that I learned in college, in terms of the extremely hard math, algorithms, etc., but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid; I feel like I should tell him to do the CS degree, but I have hired so many programmers that majored in things like English, Philosophy, and other liberal arts-type degrees, even some that never went to college. In fact, my best developer falls into this latter category. He got started writing software for his church or something and then it took off into a passion. So, while I know this is one of those juicy potential downvote questions, I am just curious as to what everyone else thinks about this topic. Would you tell a high school kid about this? Perhaps if he/she already knows a good deal of programming and loves it, he doesn't need a CS degree and could expand his horizons with a liberal arts degree. I know one of the creators of the Django web framework was an American Literature major and he is obviously a pretty gifted developer. Anyway, thanks for the consideration."} {"_id": "221484", "title": "Choosing a ubiquitous language across different bounded contexts", "text": "If my domain has several Bounded Contexts, but only **ONE** team will work on all contexts, should I develop a ubiquitous language for each context, or should I have only one and force it onto all contexts? The bounded context definition from Evans' book states: > A BOUNDED CONTEXT delimits the applicability of a particular model so that > team members have a clear and shared understanding of what has to be > consistent and how it relates to other CONTEXTS. Within that CONTEXT, work > to keep the model logically unified, but do not worry about applicability > outside those bounds. In other CONTEXTS, other models apply, with > differences in terminology, in concepts and rules, and in dialects of the > UBIQUITOUS LANGUAGE. I don't understand what is meant by _\"dialects of the UBIQUITOUS LANGUAGE\"_. Should I develop a universal ubiquitous language and then modify it for each bounded context? My main problem is that if a single team is going to work on all contexts, they might get confused by the constant change in terminology. **UPDATE**: Let's use an example to illustrate the problem. Say I have 2 bounded contexts, `Operations` and `CustomerService`, and an entity `Order`. A customer may request a refund. In the Operations context this is called a `refund`, while in the CustomerService context it is called a `cancellation`. In my models I am going to have something like `order.refund()` or `order.cancel()`. The question is: should I have 2 models for the order entity, one with a method called `refund()` and, in the other context, one with a method called `cancel()`? Or should I force a single terminology? The implementation of the refund process might be the same or different."} {"_id": "221480", "title": "Package diagram for an MVC patterned project?", "text": "We are required to make a package diagram for our senior project. 
Since our project uses the MVC design pattern, we created an MVC class diagram; now our problem is in creating the package diagrams from our class diagram. Is it possible to have packages with MVC at the same time? So it would be something like this: Package: Account * Account Model * Profile Controller * Registration Controller * Profile View * Registration View These are the controllers that cannot exist without the Account model, so I included them. Thanks in advance!"} {"_id": "208545", "title": "Which language for which job?", "text": "Today I asked myself a quite fundamental question... well, I guess it is one. But googling and SEing couldn't give me the answer I was looking for. I've been writing programs for quite a few years now, but the languages in which they were written were always chosen by senior colleagues, by customers, or by myself for learning purposes. Let's say, for instance, that I want to write an application with basic functions, like database access, calling webservices and so on. Which language should be used, and why? Ignoring factors like cross-platforming, UI, which DB system or webservices are used ... imagine something like a black body in physics. For example, why should I use Java over C++ or Python over Ruby? Not saying that this question is limited to these languages. Are there good reasons for choosing a specific language, like performance or memory usage, or is it, in the case of a \"black-body situation\", just personal preference which determines the language actually used? Thanks in advance. ... and sorry if this question is too general."} {"_id": "130569", "title": "Should I Prefer Session Timeouts Based off of Prime Numbers?", "text": "While researching some information regarding managing state and session in web applications, I stumbled across this nugget of information: > 67 is the first useful prime number after 60. (Yes, 61 is a prime, too, but > it's too close to 60 to be of use.) Setting timeouts in durations of primes > is common because it lessens the likelihood that two timeout sessions will > overlap. > > Of course, that's completely anecdotal and may not in any way be the reason > why they chose 67 minutes, but that's always made sense to me. At first glance, this seems to make perfect sense to me, and traditionally, I've never given a lot of thought to the variance of sessions timing out, but I wonder how much (if at all) this strategy has been put into place in practice. Would making a change to timeouts ending on prime numbers really have that much of an effect long-term in a large-scale application? Or would a change like this go mostly unnoticed? In other words, is this _really_ anecdotal? Or is it something that should be strongly considered as a best practice?"} {"_id": "130566", "title": "What is the name for the programming paradigm characterized by Go?", "text": "I'm intrigued by the way Go abandons class hierarchies and seems to completely abandon the notion of class in the typical object-oriented sense. Also, I'm amazed at the way interfaces can be defined without the type which implements that interface needing to know about it. Are there any terms which are/can be used to characterize this type of programming methodology and language paradigm (or perhaps specific aspects of it)? 
Is the Go language paradigm sufficiently new and distinct from the classical OOP paradigm, and sufficiently important in the history of computer programming, to warrant a unique name?"} {"_id": "114114", "title": "Should I move an invariant out of a loop", "text": "Should I care about moving invariants out of loop scope if it worsens code readability? Let's take a look at a simple example: for (var i = 0; i < collection.Count; i++) { ... } vs. var collectionCount = collection.Count; for (var i = 0; i < collectionCount; i++) { ... } The performance of the second piece of code is better than or equal to the first one. It will be equal only if the collection is fixed-size and `Count` is not calculated every time. It will be much better if, for example, the collection is a linked list which doesn't cache its length anywhere. I understand that the second approach is unlikely to kill my application's performance (it is much more likely some inefficient SQL query will), but at the same time I don't feel comfortable when I write the second piece of code, as I am missing a (small) optimization. At the same time, from a readability point of view, I like the first piece of code more (fewer lines of code, fewer variables). I guess it is a minor thing and maybe it isn't worth discussing, but I would like to hear your opinion."} {"_id": "181402", "title": "How to manage the task of reviewing localized strings by a non-developer?", "text": "In .NET Framework, localized strings are located in an XML file (or multiple files). Those files are part of the project and are committed to source control like any other source code file. Usually, Visual Studio is used to display those files as a table and edit the localized strings. I work in a small team on a product which should have a multilingual interface. 1. As a developer, I draft the localized strings in both languages, given that the translation may be inexact, 2. Another person from the team (a non-developer) reviews the content in both languages and corrects it if needed. The current issue is that the non-developer person would use neither source control nor an IDE, since it would be too cumbersome and difficult (version control _is_ difficult for non-developers) for this person. An alternative solution would be for me to export the localized strings as an Excel file, wait for this person to review the Excel file, then re-import the modified strings. The caveat here is that I may be creating other strings, renaming existing ones, etc., making it difficult to diff the local version with the reviewed one. What to do? How does it happen in other teams?"} {"_id": "32953", "title": "Data validation best practices: how can I better construct user feedback?", "text": "Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. **I know that this topic is subjective and argumentative.** I've migrated this question from StackOverflow where I originally asked it with little response. Basically, I'm looking for good resources on data validation and the user feedback that results from it at a theoretical level. Topics and questions I'm interested in are: 1. **Content** * Should I be describing what the user did correctly or incorrectly, or simply what was expected? 
* How much detail can the user read before they get annoyed? (e.g. Is \"Username cannot exceed 20 characters.\" enough, or should it be described more fully, such as \"The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters.\"?) 2. **Grammar** * How do I decide between phrases like \"must not,\" \"may not,\" or \"cannot\"? 3. **Delivery** * This can depend on the project, but how should the information be delivered to the user? * Should it be obtrusive (e.g. JavaScript alerts) or friendly? * Should they be displayed prominently? Immediately (i.e. without confirmation steps, etc.)? 4. **Logging** * Do you bother logging validation errors? 5. **Internationalization** * Some cultures prefer or better understand directness over subtlety and vice-versa (e.g. \"Don't do that!\" vs. \"Please check what you've done.\"). How do I cater to the majority of users? 6. **Accessibility** (edit) * This is an extension of the _delivery_ topic, but what are the best options for providing feedback to the visually impaired (color blindness or full blindness)? I may edit this list as I think more about the topic, but I'm genuinely interested in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me."} {"_id": "32951", "title": "PyQt design issues", "text": "I've been working on my first real project using PyQt lately. I've done just a little bit of work in Qt for C++, but nothing more than just messing around. I've found that the Qt Python bindings are essentially just a straight port of C++ classes into Python, which makes sense. The issue is that this creates a lot of messy, unpythonic code. For example, if you look at QAbstractItemModel, there are a lot of hoops you have to go through that force you to hide the actual Python. I was just wondering if there's any intention of writing a Python implementation of Qt that isn't necessarily just a wrapper? Either by Nokia or anyone else? I really like Qt but I would love to be able to write more pythonic code. I hope this is OK to ask here. I'm not trying to start a GUI war or anything."} {"_id": "213942", "title": "If I try to monetize free software, what could possibly prevent someone from forking that software and creating a proprietary version?", "text": "I've only recently begun to learn about the tensions between free and proprietary software, and I've been very confused by the way that free software can make money. I understand that free software is \"free as in speech, not as in beer,\" but if I release an open source program and then try to monetize it, what could possibly prevent someone from forking that software and creating a proprietary version? Is the only thing that stops them the investment of other members of the open source community in improving the software? It seems like every improvement free software makes is transparent, so a proprietary copycat can make sure they are always up to date with the latest features in the free version, and then add their own features on top of that independently. 
I'm confused about how free software can survive in serious competition with proprietary software."} {"_id": "13902", "title": "What processes do you follow when developing code for large corporations?", "text": "I'm interested in learning about any process you need to follow when coding for a large corporation. For example, it would be nice to see a little insight into how you (or your manager) handle * Code reviews * Deployment to production * Procuring HW/SW for development * Evaluating vendors * SOX Compliance * Security reviews ... or anything else I may have missed. It would also be nice to see if you think there is any benefit to the paperwork, or if there is a comparable software alternative. I'm interested in everything from paperwork, meetings, routines, or even software tools you may use."} {"_id": "251116", "title": "How to communicate side effects in a RESTful API on the server to the client?", "text": "I have been thinking a lot about Hypermedia REST APIs for the last couple of weeks. One thing I am not quite sure about is how I want to model side effects on the server side. In my current project, I am using JSON+HAL as the content type, and am making use of links and embedded resources. For example: Let's say some resource A gets PATCHed by the client, e.g. because a money value gets updated. Suppose this resource is used by another resource B. This resource B represents the sum of all resources that are of A's kind. Hence resource B would have to provide a new value after A has been updated. What's the best way to communicate this to the client? The following options come to my mind: 1. Return all possibly updated links in the PATCH response, i.e. especially a link to B. The client will then try to GET all provided links. 2. Embed those resources that may have been updated, and provide links to all other relevant resources. 3. Return a link to a parent resource of both A and B, because the whole state of A and B has been changed, and possibly even more. Option 1) doesn't seem right, because I may provide links to resources that are not really in the vicinity of A. Also, I cannot distinguish between resources that I should GET because they were updated, and resources that may be useful in other ways (for navigation, manipulating application state, creating new resources, ...) Option 2) is basically the same as 1), but with the advantage that an embedded resource tells me directly the state of the resource, which is useful when I embed B. Option 3) sounds the best to me so far. In this example I might have a parent resource \"spreadsheet\", which contains row resources, like A, and a sum resource, like B. If I change one of the rows, I get a link back to the spreadsheet, which the client can then GET, and either get all rows and sums embedded or as links. Maybe I am missing some other options. I am really not sure what the best practice here is."} {"_id": "189677", "title": "Why is \"working with files\" an important subject when learning Objective-C?", "text": "I'm reading a book on Objective-C, and I was wondering how important the subject of working with files is for learning to develop for iOS in particular. On YouTube the tutorials are very short, maybe 10 min of video that teaches you how to do stuff with files and URLs. But in the book it's a very long chapter with lots of detail. 
The basics of working with files and URLs are very clear and simple, so I was wondering when I will really need this, when this information becomes handy, and whether I need to dig that deep into this topic right now. The main question is: if I know the basics, can I move on and come back when I need it? Or is it really crucial to dig in right now?"} {"_id": "22363", "title": "Highly scalable and dynamic \"rule-based\" applications?", "text": "For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows for easy changes to be made without diving into nasty details. Now since C# cannot Eval(\"foo(bar);\") this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable for anyone else to pick up on once it becomes legacy. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that someone has not thought of a better way to do this. Any suggestions? Is this current method defensible? What are the alternatives? Edit: Just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough to say that some sort of centralized server system is basically a must."} {"_id": "61726", "title": "Define \"production-ready\"", "text": "I have been curious about this for a while. What exactly is meant by \"production-ready\" or its variants? Most recently I was looking for information about sqlite and found this thread, where many people suggest sqlite isn't ready for production. I know the difference between development/testing and production; my definition of production is anything that is provided to the customer or will be used by non-programmers. However, there seem to be many items that aren't defined as production-ready. But in reality, they may be perfectly suited and people just have a prejudice against them, e.g. sqlite, Python, non-MS products, etc. Small office vs. enterprise? Single user vs. multi-user? Client vs. server? Where do you draw the line?"} {"_id": "27787", "title": "Optimizing an XML-based protocol", "text": "We have recently replaced a binary-based communication protocol with an XML one (between a browser-based client and a server). The implementation is almost complete; however, I am looking for ways to improve its performance, both for faster transmission and faster parsing. Any ideas? Please post a link along with the answer."} {"_id": "145074", "title": "Do non-pure interpreters still make the guarantees of functional programming?", "text": "I am assuming that the implementations/compilers/generated C code (referred to hereinafter generically as the 'interpreter') for most functional programming languages are written in non-pure functional languages. If this is the case, the underlying interpreter for any given functional programming language exhibits destructive updates and is referentially opaque. Functional constructs are designed to make certain guarantees, such as concurrency and provability. 
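An aside that makes the tension concrete, sketched in PHP for brevity: a function can be referentially transparent from the outside even though its body runs on destructive updates, because the mutation never escapes — which is, in miniature, how an imperative substrate can still honour a pure interface. This is an illustration of the idea, not a claim about any particular compiler:

    <?php
    // Externally pure: the same input always yields the same output and
    // there are no observable side effects -- even though the body
    // mutates a local accumulator imperatively.
    function sumOfSquares(array $xs): int {
        $total = 0;            // local, mutable state
        foreach ($xs as $x) {
            $total += $x * $x; // destructive update, invisible outside
        }
        return $total;
    }

    var_dump(sumOfSquares([1, 2, 3]) === sumOfSquares([1, 2, 3])); // bool(true)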
If the interpreter is indeed an imperative program, how is it able to guarantee the 'no side-effects' properties of pure functional programs? Surely optimisations to functional code by the interpreter include changing the nature of recursive functions to imperative ones? My question is: **How do imperative interpreters still make guarantees about the functional program they are executing without being inherently functional themselves?**"} {"_id": "183934", "title": "How to implement a hybrid role-based access control model?", "text": "I am writing an enterprise web-forms-frontend application for in-house use. It has Discretionary access control (DAC) masquerading as Role-based access control (RBAC). For anonymization purposes, let's call the main unit of information stored in my application a Document, and the roles in the company a Boss, a Grunt and a C-level executive. There are also Externals (e.g. somebody working for a business partner of ours). The general guideline is that everybody but Externals should be able to read all documents, C-level execs can write everything, Bosses can write the Documents belonging to their own department, and Grunts can only write Documents personally assigned to them. Fairly standard thus far. But what users actually want is to be able to make arbitrary exceptions to the above. They want it to be possible to grant any person in the system write access to any Document, and to grant Externals read access to any document. (Actually, it is more complicated than that, with more roles and finer granularity of permissions, including management of who can propagate which permission to others, but the above description illustrates the core of the problem well enough). So what I have is basically permissions on a personal level, and the roles are only a convenient way of having default settings for the personal-level permissions when a user or a Document is added to the system, instead of having somebody fill out a whole row or column in an access control matrix by hand. Now, I have already designed a Permissions table in my database, which has a User FK, a Document FK and columns for the different types of permissions. What I am not sure about is what I should save in the table. * Alternative 1: I save all permissions in this table (pure DAC) and have the logic tier mimic RBAC. E.g. when a new Boss is added, a row for each Document in the system is added to the DB, with Read permissions for all documents and Write for the Documents of her department. * Alternative 2: I save only the deviations from the role guidelines. So when a new Boss is added, nothing is written to the permissions table. But when an executive gives a Boss the rights to write to a Document from a different department, then a single row is added to reflect that information. I am not sure which alternative would be better. The first one feels closer to a textbook implementation, and if a principle has made it into a textbook, then there is normally good reason to use it. But in my case, it also hurts the DRY principle - after all, if the information that a C-level exec can write to Document X is derivable from his role, writing a row with this information in the DB is redundant. What are the advantages and disadvantages of each approach in terms of a) application performance and b) complexity of implementation? What headaches can I expect from each? Keep in mind that 1) I don't plan to implement a full logic tier. 
The whole application is practically a convenient CRUD frontend to a database, so I will be doing DB queries for each page view instead of keeping a collection of Document objects in memory. (I know the advantages of an MVC pattern, but it was decided that it would be overkill for this project). 2) I am programming this in ASP .NET 4.5, so the closer I stay to roles, the more I can let the framework do the heavy lifting for me. 3) I have thought of implementing groups orthogonal to the roles to manage access, but it doesn't make sense in my case."} {"_id": "183935", "title": "How would I make a suggestion for a change to the SQL standard", "text": "If I wanted to make a suggestion for a change to how the `UPDATE` statement works in `SQL`, how would I go about it? Is there a website for the next standard? I googled, but just kept getting the Wikipedia page."} {"_id": "145078", "title": "Should You Log From Library Code?", "text": "If I am developing a Java library, is it good practice to issue log statements from within the library's code? Having logging within the library will make debugging and troubleshooting more transparent. However, on the other hand, I do not like littering my library code with logging statements. Are there any performance implications to consider as well?"} {"_id": "60890", "title": "Where can I read exemplary Scheme code?", "text": "Edi Weitz's libraries are often brought up when people ask for exemplary code in Common Lisp, the kind to read and learn from. Are there any open-source Scheme projects or libraries that you can recommend above others? I'm working in Gambit, so I'd prefer code without the Racket extensions. Applications in games or scientific/numeric computing would be a plus, but really I'm just looking for code to learn from. (Game and numerical code just tends to be easier to dive into for me, since the overall point of the project is readily understandable.) **Edit:** I am looking for complete projects and libraries, rather than isolated snippets such as those in a book. (Partially this is because I'm already reading SICP, and I'm looking for real-world applied examples to complement it, rather than another source of isolated functions/macros.)"} {"_id": "183933", "title": "Programming languages classification / taxonomy", "text": "Is there a rigorous way to classify programming languages? If so, can the various \"dimensions\" be quantified (degree of purity, for example)? For instance, I just went on the Shade language website (I am not affiliated with it in any way) and saw: * \"semi-functional\" -> But how much is that language semi-functional? -> quantification needed * \"full type checking\" -> So type checking can be partial -> can that be quantified too? * Object model / no object model..."} {"_id": "162117", "title": "Sequence Diagram for Response Redirect in ASP.Net Webforms", "text": "In asp.net webforms I have a home aspx page that has a \u201cGo\u201d button. [This is the only control on this page]. When this button is clicked, the user is redirected to the \u201cUserProfile.aspx\u201d page. How can we represent this redirect action in a sequence diagram? Any reference articles/blogs for this?"} {"_id": "125712", "title": "For what reasons should I choose C# over Java and C++?", "text": "C# seems to be popular these days. I heard that syntactically it is almost the same as Java. Java and C++ have existed for a longer time. 
For what reasons should I choose C# over Java and C++?"} {"_id": "100553", "title": "Should we choose Java over C# for a new project?", "text": "We have a team of .NET developers (C#) with a range of experience from 2 to 6 years. Over the last few years we have been developing Silverlight, ASP.NET MVC, and WPF applications. However, there is a new two-year project which means we will develop an HTML5 application on Linux. The company I work for would like all developers across their offices to use the same programming language, which is Java. There is talk of using Mono, but using the same language and sharing modules, services, etc. already created in Java is the main reason for moving to Java. Some of the developers here are upset. How do I find something positive in moving over that will convince the other developers? There will be a training budget, but the thought of learning to work with Java libraries and a new platform (Linux) is scaring a lot of the guys. What should I do?"} {"_id": "23942", "title": "Drag-n-drop programming - would it fly?", "text": "All programming languages I know of are written - i.e. typed out as lengths of text one way or another. But I wonder if there's any programming language where you can just drag-n-drop the whole program; to get a loop, you select this box over here and drag it to that section of the \"code\" over there, and so on. And if there isn't one like this, would it fly if one was invented? Personally I don't believe it would be such a good idea, but I'd like to hear what you think."} {"_id": "205033", "title": "Static linking with modified LGPL code", "text": "I'm writing a library which links with a modified LGPL library, so two questions: 1. Do I have to make my code LGPL in the case of static linking with the LGPL library? 2. What about in the case of dynamic linking?"} {"_id": "205030", "title": "In-memory datastore in Haskell", "text": "I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the `STM` monad. When I google _hash table STM Haskell_ I only get this: `Data.BTree.HashTable.STM`. The module name and complexities suggest that this is implemented as a tree. I would think that an array should be more efficient for mutable hash tables. Is there a reason to avoid using an array for an `STM` hash table? Do I gain anything with this STM hash table or should I just use an `STM` ref to an `IntMap`?"} {"_id": "27252", "title": "How should calculations be handled in a document database", "text": "Ok, so I have a program that basically logs errors into a nosql database. Right now there is just a single model for an error and it's stored as a document in the nosql database. Basically I want to summarize across different errors and produce a summary of the \"types\" of errors that occurred. Traditionally in a SQL database this normalization would work with groupings, sums and averages, but in a NoSQL database I assume I need to use mapreduce. My current model seems unfit for the task, so how should I change the way I store \"models\" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google AppEngine's BigTable, so there are some limitations to think of as well."} {"_id": "205035", "title": "How to deploy GNU GPL software with Java Web Start?", "text": "I deploy an interactive application using Java Web Start. This is a quite useful deployment technology and I don't want to change. 
Besides, I am about to publish my software under a GNU GPL license. Usually, when a user installs new software he is asked to read the license terms and accept them. Is there any equivalent process with Java Web Start? How can I ask my user to accept the license terms before using the software?"} {"_id": "205038", "title": "Building a web app with encrypted MySQL database entries?", "text": "I have some experience in building PHP-based websites with MySQL access, storing encrypted user details, but with all other fields being plain text. My most recent project will require sensitive data to be stored in the database, all of which I'm hosting myself. I want to set up a system where a user can access his own entries and see the plain text results, but even if he was able to access someone else's, they would be encrypted, unintelligible strings. I have an idea of how to accomplish this, but perhaps it's not optimal, or there exist tools to do this already in a more efficient way. 1. Store the username as plain text, and the password encrypted with sha1(). Or both encrypted with sha1(). 2. Take the user's password (not the encrypted one, but the one he typed in) and use it to define a key specific to that username and password, which will then be stored as a session variable. 3. Encrypt or decrypt all of that user's data with that key. In my opinion, even if someone gained access to the database and saw a list of plain text usernames and encrypted passwords, they couldn't figure out the specific key since it's not stored anywhere. Therefore even if access was gained, they couldn't decipher the content of the sensitive database fields. Of course I'm building in ways to stop them accessing the database anyway, but as a catch-all effort this seems like a good set of steps. Could the experts please comment on this, and offer some advice? Thanks a lot."} {"_id": "51185", "title": "What do you need to know before you build something like Facebook / Twitter / YouTube?", "text": "Of course you will need a programming language! But what other things should you know before you take your first steps toward building a large website which will be for the community? * What else do you need to learn in the programming language itself?"} {"_id": "220287", "title": "Starting with BI / Data Analysis", "text": "We have a bunch of data about our customers and we'd like to learn more about it. Writing queries to pull data out is not a problem, and we're doing that right now, but we'd like to take it to the next level. I keep hearing about BI, data analysis, regression, predictive analysis, etc. So, if we want to glean more information from our data, where should we start? Should we look into a BI tool? Should we just start learning about data analysis? Or maybe look into R? Any books you would recommend to start? Should I go back to school (college-level math, but nothing fancy -- willing to learn)? Any online courses you recommend? Ultimately we would like to end up with more intelligent filters that would allow us to sift through our data more efficiently. Something like \"only include people outside of cities in state X and Y because Z and Q\". Thank you for any suggestions you may have. **EDIT**: Per JeffO's request. It's not necessarily about tools. More like: What should be the next step in learning more about our data, about the relationships within our data that we may not know about? 
Or where should we start exploring things like regression or predictive analysis?"} {"_id": "114002", "title": "Is it a good idea to always use Google as the first step to solving a problem?", "text": "> **Possible Duplicate:** > Importance of learning to google efficiently for a programmer? Avoiding lengthy discussions: as a senior-level student in CS, how can I get away from Googling problems I run into? I find myself using it too much; I seemingly reach for the instant answer and then blindly copy and paste code, hoping it works. Anyone can do that. I've read the related threads about being a better programmer, but mostly those recommend practicing on pet projects, which I have done, but again I feel EVERY wall encountered, from design through completion, was hurdled with Google. Do professionals instantly \"research\" their problem? Or do you guys step back and try and figure it out yourselves? I'm talking about both 'algorithm/design' problems as well as compiler issues."} {"_id": "238729", "title": "Can recursion be done in parallel? Would that make sense?", "text": "Say I am using a simple recursive algorithm for Fibonacci, which would be executed as: fib(5) -> fib(4)+fib(3) | | fib(3)+fib(2) | fib(2)+fib(1) and so on. Now, the execution will still be sequential. Instead of that, how would I code this so that `fib(4)` and `fib(3)` are calculated by spawning 2 separate threads, then in `fib(4)`, 2 threads are spawned for `fib(3)` and `fib(2)`, and the same for when `fib(3)` is split into `fib(2)` and `fib(1)`? (I'm aware that dynamic programming would be a much better approach for Fibonacci; I just used it as an easy example here.) (If someone could share a code sample in C/C++/C# as well, that would be ideal.)"} {"_id": "214827", "title": "Ping one remote server from another remote server", "text": "It's simple to ping a server in C#, but suppose I have servers A, B and C. A connects to B. A asks B to ping C, to check that B can talk to C. A needs to read the outcome. Now, first of all, is this possible without installing an application onto B? In other words, can I perform the entire check from just running a program on A? If so, can anyone suggest the route I would take to achieve this? I've looked at sockets, but from the examples I've seen these require a client AND a server application to function."} {"_id": "214825", "title": "Is avoiding the private access specifier in PHP justified?", "text": "I come from a Java background and I have been working with PHP for almost a year now. I have worked with WordPress, Zend and currently I'm using CakePHP. I was going through Cake's lib and I couldn't help noticing that Cake goes a long way to avoid the \"private\" access specifier. Cake says > Try to avoid private methods or variables, though, in favor of protected > ones. The latter can be accessed or modified by subclasses, whereas private > ones prevent extension or re-use. in this tutorial. Why does Cake overly shun the \"private\" access specifier while good OO design encourages its use, i.e. applying the most restrictive visibility for a class member that is not intended to be part of its exported API? I'm willing to believe that \"private\" functions are difficult to test, but is the rest of the convention justified outside Cake? Or is it perhaps just a Cake convention of OO design geared towards extensibility at the expense of being a stickler for stringent (or traditional?) 
OO design?"} {"_id": "102541", "title": "Software Design Idea for multi tier architecture", "text": "I am currently investigating multi tier architecture design for a web based application in MVC3. I already have an architecture but not sure if its the best I can do in terms of extendability and performance. The current architecure has following components * DataTier (Contains EF POCO objects) * DomainModel (Contains Domain related objects) * Global (Among other common things it contains Repository objects for CRUD to DB) * Business Layer (Business Logic and Interaction between Data and Client and CRUD using repository) * Web(Client) (which talks to DomainModel and Business but also have its own ViewModels for Create and Edit Views for e.g.) Note: I am using ValueInjector for convering one type of entity to another. (which is proving an overhead in this desing. I really dont like over doing this.) My question is am I having too many tiers in the above architecure? Do I really need domain model? (I think I do when I exposes my Business Logic via WCF to external clients). What is happening is that for a simple database insert it (1) create ViewModel (2) Convert ViewModel to DomainModel for Business to understand (3) Business Convert it to DataModel for Repository and then data comes back in the same order. Few things to consider, I am not looking for a perfect architecure solution as it does not exits. I am looking for something that is scalable. It should resuable (for e.g. using design patterns ,interfaces, inheritance etc.) Each Layers should be easily testable. Any suggestions or comments is much appriciated. Thanks,"} {"_id": "214822", "title": "Macro vs. Static functions in Header", "text": "for a lot of quick tasks where one could employ a function `f(x,y)`, in plain C, macros are used. I would like to ask specifically about these cases, that are solvable by a function call (i.e. macros used for inlining functions, not for code expansion of arbitrary code). Typically C functions are not inlined since they might be linked to from other C files. However, static C functions are only visible from within the C file they are defined in. Therefore they can be inlined by compilers. I have heard that a lot of macros should be replaced by turning them into static functions, because this produces safer code. Are there cases where this is a not good idea? Again: Not asking about Code-Production macros with ## alike constructs that cannot at all be expressed as a function."} {"_id": "214821", "title": "Which are the best ways to organize view hierarchies in GUI interfaces?", "text": "I'm currently trying to figure out **the best techniques** for organizing **GUI view hierarchies** , that is dividing a window into several panels which are in turn divided into other components. I've given a look to the Composite Design Pattern, but I don't know if I can **find better alternatives** , so I'd appreciate to know if using the Composite is a good idea, or it would be better looking for some other techniques. I'm currently developing in Java Swing, but I don't think that the framework or the language can have a great impact on this. **Any help will be appreciated.** ---------EDIT------------ I was currently developing a frame containing three labels, one button and a text field. At the button pressed, the content inside the text field would be searched, and the results written inside the three labels. 
One of my typical structures would be the following: MainWindow | Main panel | Panel with text field and labels | Panel with search button. Now, as the title explains, I was looking for a suitable way of organizing both the main panel and the other two panels. But here come the problems, since I'm not sure whether to organize them as attributes or to store them inside some data structure (e.g. a LinkedList or something like that). Anyway, I don't really think that either of my solutions is really good, so I'm wondering if there are better approaches for facing this kind of problem. **Hope it helps**"} {"_id": "241286", "title": "Methodology To Determine Cause Of User Specific Error", "text": "We have software that for certain clients fails to download a file. The software is developed in Python and compiled into a Windows executable. The cause of the error is still unknown, but we have established that the client has an active internet connection. We suspect that the cause lies in the client's network setup. This error cannot be replicated in house. What technique or methodology should be applied to this kind of specific error that cannot be replicated in house? The end goal is to determine the cause of this error so we can move on to the solution. For example: * Remote debugging: Produce a debug version of the software and ask the client to send back a debug output file. This involves a lot of time (back-and-forth communication) and requires the client to work and act in a timely manner to be successful. * In-house debugging: Visit the client and determine their network setup, etc. Possibly develop a series of script tests beforehand to run on the client's computer under the same network. * Other methodologies and techniques I am not aware of?"} {"_id": "55952", "title": "How much system and business analysis should a programmer be reasonably expected to do?", "text": "In most places I have worked, there were no formal System or Business Analysts and the programmers were expected to perform both roles. One had to understand all the subsystems and their interdependencies inside out. Further, one was also supposed to have a thorough knowledge of the business logic of the applications and interact directly with the users to gather requirements, answer their queries, etc. In my current job, for example, I spend about 70% of my time doing system analysis and only 30% programming. I consider myself a good programmer but struggle with developing a good understanding of the business rules of a complex application. Often, this creates a handicap, because while I can write efficient algorithms and thread-safe code, I lose out to guys who may be average programmers but have a much better understanding of the business processes. So I want to know: - How much business and systems knowledge should a programmer have? - How does one go about getting this knowledge in an immensely complex software system (e.g. trading applications) with several interdependent business processes but poorly documented business rules?"} {"_id": "166615", "title": "Learning Java with a simple project", "text": "As I remember from the time when I was learning PHP, it was suggested to build a simple blog or a forum after reading the language fundamentals. I was told/read that this would cover everything that I would need to learn about PHP from a beginner's book. This advice was out there in a number of places, and after following it and working with PHP, it seems quite good advice.
Now I am learning Java and reading the book \"Thinking in Java\" by Bruce Eckel. I wonder if there is any similar set of small projects that I could take up, that would cover all the essentials and most of what is covered in the book."} {"_id": "142867", "title": "Which is a better design pattern for a database wrapper: Save as you go or Save when you're done?", "text": "I know this is probably a bad way to ask this question. I was unable to find another question that addressed this. The full question is this: We're producing a wrapper for a database and have two different viewpoints on managing data with the wrapper. The first is that all changes made to a data object in code must be persisted in the database by calling a \"save\" method to actually save the changes. The other side is that these changes should be saved as they are made, so if I change a property it's saved; I change another, it's saved as well. What are the pros/cons of either choice and which is the \"proper\" way to manage the data? **EDIT** To provide more information, we're using Node.js, and we're writing our own wrapper for Neo4j because we feel that the one currently available is not complete, or we have run into issues where we need more functionality. That being said, an example of the save-as-you-go method (these examples are in JavaScript): // DB is defined as a connection to the database via HTTP(REST) // \"data\" being an object that would represent new data var node; db.getNodeById(data.id, function(err, result) { node = result; // This function takes an optional callback as a third parameter because // in addition to setting the property on the \"node\" object it saves the // new value to the database by making an HTTP request. node.set(\"firstName\", data.newFirstName); // auto saved as well node.set(\"lastName\", data.newLastName); // end here as the node has already been updated with your changes. }); And the opposing example would be: // DB is defined as a connection to the database via HTTP(REST) // \"data\" being an object that would represent new data var node; db.getNodeById(data.id, function(err, result) { node = result; // This changes the value on the node object only; nothing in the // database is changed. node.set(\"firstName\", data.newFirstName); node.set(\"lastName\", data.newLastName); // Save all your changes node.save(); });"} {"_id": "26349", "title": "How do you go about training a replacement?", "text": "I recently asked about leaving a position and got a lot of great answers. One of the common threads was that being around to train the new person would be expected and could go a long way. Now, considering that (I think) most people don't stay at a company for a long time after they've given notice, and it will take time for the company to interview/hire one, that leaves a short amount of time to get someone up to speed. I've also never trained anyone before. I did a bunch of tutoring in University and College, but teaching a language/technology is far different from training someone to replace you in your job. So the question is: how do you go about training someone to replace you in a, potentially, short amount of time?"} {"_id": "246648", "title": "How to transfer code responsibility to another developer", "text": "I'm in a situation at work where I have to transfer responsibility for a large code base that I inherited, re-factored and enhanced to another developer.
This is the first time that I have had to do such a thing, and although I always thought it would be trivial, the actual steps I have to follow seem vague. The code base I maintain is a module that includes a lot of stuff, such as: * A persistence layer * A service layer * A presentation layer which sadly has a lot of business code in it * An interface for module interaction I also have very good knowledge of the business assumptions made when it was developed, as well as its technical foundation. I know that I can go with my mind's flow and give it my best, but I prefer to do it in the most professional manner I can. So, I would like to ask you for advice on how I should approach this situation. Are there any standard procedures and good practices that I could follow?"} {"_id": "97758", "title": "job handover checklist", "text": "I have resigned from my old employer to start a new job, and I need to prepare a handover [documentation to be taken up by some new employee who hasn't arrived yet; it would be some weeks from when I go to when they fill the position again]. So what important points should I make sure to cover? I was making some iOS apps [90% completed] and a web portal [60% completed]; the other IT engineer is here, so he can update passwords etc. Should I use some system like a wiki to leave things there for access, or just a normal doc? What other things should I consider?"} {"_id": "195373", "title": "Impact of ending contract with know-ins-and-outs lead developer of a project", "text": "I have a long-running project. It's been running for about 6 months. The project is very big. The lead developer/team leader designed the project and the whole project is in his brain. He knows all the ins and outs. I also know almost everything, but I am not as confident as he is. The problem is that recently he has been charging quite a lot for simple bug fixes and small change requests. I am worried about what the future of the project will be if I end the contract. I have had 2-3 public demos. I must launch soon. The project is still in development alpha. 1. What happens to the project if I end the contract? 2. Are the charges high because he knows I cannot end the contract now?"} {"_id": "33851", "title": "What actions to take when people leave the team?", "text": "Recently one of our key engineers resigned. This engineer has co-authored a major component of our application. We are not hitting the truck number yet though, but we're getting close :) Before the guy waltzes off, we want to **take the actions necessary to recover from this loss as smoothly as possible** and eventually 'grow' the rest of the team to competently cover the parts he authored. _More about the context: the domain the component covers and the code are no rocket science, but still a lot of non-trivial stuff. Some team members can already cover a lot of this, but those have a lot on their plates._ These are the actions that come to my mind: 1. Improve tests and test coverage - especially for the non-trivial stuff, 2. Update high level documents, 3. Document any 'funny stuff' the code does (we had to do some heavy duct-taping), 4. Add / update code documentation - have everything with 'public' visibility documented. Finally, the questions: What do you think are the actions to take in this situation? What have you done in such situations?
What did or did not work well for you?"} {"_id": "140125", "title": "Code ownership: What should I do when a dev leaves or team splits?", "text": "There are multiple ways of tracking code ownership (i.e., collective, team or individual). In the case of team or individual ownership, how do you: * track ownership? * deal with situations where a dev leaves or a team splits/re-organizes for new projects?"} {"_id": "16365", "title": "Questions to ask before someone leaves", "text": "Apart from the obvious questions relating to the specific project work someone is working on, are there any questions I should be asking a fellow dev who is leaving the company? So far I am thinking about: * Locations of things on the server he uses that maybe not everyone does. * Credentials he has set up that we wouldn't have needed. * Client details he has not yet saved into our CRM system."} {"_id": "251592", "title": "New project from a third party and documentation", "text": "I'm going to become the custodian of an iOS project developed by a third party, and my boss just asked me for a list of the documentation that I think we should be asking the third party for when they hand over the code. I did some light searching on this and found incredibly varying answers, so I figured I'd ask a question more tailored to my particular situation. I'm working on a team of 7 individuals of varying experience levels, and I'm sure I'll be called upon to maintain or add features to this project in the future. What should I ask the third party to include when they hand off the code to help make my life easier? AppleDoc? UML diagrams?"} {"_id": "189091", "title": "Understanding high cohesion principle for methods in object oriented design", "text": "I know the idea of strong cohesion applies to methods as much as it applies to classes. Just to be clear, when I say _strong cohesion_ of a method I mean a method which does only one task and does it well. This idea really works for simple _getters_ and _setters_. But sometimes we come across methods that do more than one task internally, although they seem to be doing a single task if you look at a higher level of abstraction. An example of such a method can be: public void startUp() { List subSystems = getSubSystems(); synchronized(subSystems) { for (SubSystem subSystem : subSystems) { subSystem.initialize(); } } } And inside those `subSystem#initialize()` methods a lot is going on. So is this a violation of the _high/strong cohesion_ principle?"} {"_id": "142860", "title": "Is there a term for quasi-open source proprietary software?", "text": "Say a company wants to keep development of new features of a piece of software internal, but wants to make the source code for previous versions public, up to and including existing public features, so that other people can benefit from using and modifying the software themselves, and even possibly contribute changes that can be applied to the development branch. Is there a term for this sort of arrangement, and what is the best way of accomplishing it using existing version control tools and platforms?"} {"_id": "166612", "title": "Schema.org vs microformats", "text": "They both serve the same purpose: providing a vocabulary for semantic markup. Schema is recognized and standardized\u2026 but the microformats standard is set by an open community process. Schema uses microdata in documents, while microformats go on classes.
(Of note: microdata means that an element must be of a single `itemtype`, while microformats allow several classes to apply to the same element. I can mark up xFolk+hAtom with classes, but not with microdata.) Is this a black-and-white situation? Google says I can't use both \"because it may confuse the parser\". What's the consensus on these?"} {"_id": "223394", "title": "Clarification about Grammars, Lexers and Parsers", "text": "**Background info** ( _may skip_ ): I am working on a task we have been set at uni, in which we have to design a grammar for a DSL we have been provided with. The grammar must be in BNF or EBNF. Among other things, we are being evaluated on the lexical rules in the grammar and the parsing rules - such as whether the rules are suitable for the language subset, how comprehensive these rules are, how clear they are, etc. What I don't understand is whether these rules are covered in a grammar defined in BNF (it's a new topic for us). **The question**: Does a grammar for a given language that has been defined in either BNF or EBNF contain / provide rules for **_lexical analysis_** and/or **_parsing_**? ( _Or do these have to be specified elsewhere?_ ) Also, what would be considered a lexical rule? And what would be considered a parsing rule?"} {"_id": "189099", "title": "What is proper etiquette for releasing a complete rewrite of an existing project?", "text": "I'm new to the open-source world. The project I'm working on resides on GitHub. (Just for reference.) The project I'm working on is a plug-in for the Plex Media Server. I plan to submit my plug-in to Plex so that it will be included in their \"app store\". Now to my question. When I first started out, I found an older semi-abandoned plugin that did some of what I wanted, but not very well. I started by contributing to that repo. I was immediately made a collaborator with full rights to the repo, since the current owner said he was too busy to mess with it anymore. However, as I started to dig deeper into the code I realized that it was futile. The existing code base was terrible and there was no efficient way to fix it. I ended up just starting from scratch. The only code I used in my new plugin was the code I committed initially. Now the project is ready to be released, however I am unsure how to go about doing this. I see my options as follows: 1. Create a new repo and just forget about the existing one. I'm not sure if I should even mention the previous repo and/or its contributors. I didn't use any of that code/resources and have created an entirely new code base. While the plugin does some of the same things the old one did, it does them in an entirely new and more efficient way. 2. I fork the existing repo, delete the existing code, and commit my new code. I'm really new to Git, so I'm not sure if this is even possible. 3. I commit my changes to the existing repo and see what the current contributors have to say. Of the three options, I'm strongly leaning towards the first. BUT! I'm new to open source and I want to make sure I'm doing things according to proper etiquette. I don't want to have my first project blow up in my face and become a disaster. Option two doesn't sound bad, but I'm not sure if I'm supposed to do that. I'm not sure how the history and diffs would work. We're only talking about 500 - 1000 lines of code at most, so it's not a huge code base. Thanks for any input you can provide!"} {"_id": "205783", "title": "Where should UI errors be generated and printed from?
Log errors?", "text": "Function `A()` calls `B()` calls `C()`, which encounters an error. Which function writes to the log and which function reports to the user? My first inclination is that all functions should write to the log, and that the inner-most UI-facing function should be responsible for outputting the error to the user. However, the _fix_ needs to be done in `C()`, so perhaps instead of having an super-long log file which will take a while to parse, I should simply have the function which encountered the error log the error and the parameters used to call it. Likewise, I'm not too convinced that the inner-most UI-facing function should output the error (consider CLI apps, as opposed to web or GUI apps). That was more of a gut-feeling decision, so any insight into actually _engineering_ a solution would be appreciated."} {"_id": "52986", "title": "Where, in an object oriented system should you, if at all, choose (C-style) structs over classes?", "text": "C and most likely many other languages provide a `struct` keyword for creating structures (or something in a similar fashion). These are (at least in C), from a simplified point of view like classes, but without polymorphism, inheritance, methods, and so on. Think of an object-oriented (or multi paradigm) language with C-style structs. Where would you choose them over classes? Now, I don't believe they are to be used with OOP as classes seem to replace their purposes, but I wonder if there are situations where they could be preferred over classes in otherwise object- oriented programs and in what kind of situations. Are there such situations?"} {"_id": "227882", "title": "How is intermediate data organized in MapReduce?", "text": "From what I understand, each mapper outputs an intermediate file. The intermediate data (data contained in each intermediate file) is then sorted by key. Then, a reducer is assigned a key by the master. The reducer reads from the intermediate file containing the key and then calls reduce using the data it has read. But in detail, how is the intermediate data organized? Can a data corresponding to a key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held by a single file? In short, how do intermediate partitions differ from intermediate files and how are these differences dealt with in the implementation?"} {"_id": "104637", "title": "What comes first, the ruby chicken or the homebrew egg?", "text": "Whilst installing Ruby on OsX I noticed I could do so by using a package manager called Homebrew. This seemed like an easy option, so I took it. Everything worked smoothly. Life was good. Being a curious fellow, I looked into what other benefits having homebrew installed would give me, and in my study found that Homebrew is written in ruby. Woah, wait a minute! How is it then, that I can install Ruby using something that is written in Ruby, not already having Ruby on my system, and once installed said ruby based system I STILL have to install ruby separetely? **Warning:** _Do not read this question aloud. You risk getting a Kaiser Chiefs single stuck in your head for the remainder of the day._"} {"_id": "57554", "title": "ERP/CRM Systems. Desktop Based ? Web based?", "text": "I have seen 2-3 ERPs in action. I am wondering what is better. Desktop based application or webbased displayed on a browser. My first experience was with a web based ERP when i was 14 years old.. It was web based and terribly slow... 
For most simple tasks you had to do lots of clicks... no keyboard support... Pages took ages to load. Last year I worked on migrating some old terminal-based COBOL application to a newer computer. The computer, which worked until today and still has no problems, was from 1993. The user interface of course was text-based. The speed at which those guys placed orders was amazing! Just typing the name of the customer, then 5-10 keys to add a product to the order... Compared to this, the ERP's page for placing orders (see the link; click sales orders) seems terribly slow for adding a product... No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface... Having to use both mouse and keyboard for this task is BAD and sadistic... So how the heck can these people ever use a system like that??? So in the long run a desktop application seems the only way... Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible... That is a huge problem. Finally, if we MUST use (or are forced to use) the cloud in the near future, what about keyboard shortcuts?? I feel confused... I have seen converters of desktop applications to browser applications, but they are SLOW as hell... The question is: what about user friendliness? What kind of application would you use?"} {"_id": "39865", "title": "How do you convince management to throw away a prototype?", "text": "I love prototyping as a fast, effective way to put a UI in front of a user. Many times though, management get their beaks in the way, and the prototype is dragged kicking and screaming into mainstream development. How do you manage management into not doing this?"} {"_id": "26454", "title": "Is it ever too old to learn how to become a programmer?", "text": "If you want to be a good developer, but start developing at the age of 26, is there any way to become a good programmer?"} {"_id": "225213", "title": "How big can a mobile development team be without being too big?", "text": "I work for a company that is moving heavily into mobile. We find the majority of our customers use our mobile app pretty regularly. We have tons of things we want to add to it -- as well as to deliver all these new features on Android- and iOS-native apps. But I wonder how big we could possibly make a mobile development team without making it too big to be practical. Can we have a mobile team that's 5 people? 10? 50? 100? If it made sense for us to build that many new features, from an engineering standpoint is it practical? Or is there some limit to the number of engineers that could support a single app (across multiple platforms)?"} {"_id": "105322", "title": "Are there any free software tools to aid software cost estimation using object points?", "text": "I'm looking for a free (or, if there is none, then paid) tool that could aid in estimating the effort to create software using object points. If anyone knows any other software that is able to aid cost estimation (and is quite fast and reliable) then I would also be happy to hear about it. :)"} {"_id": "105326", "title": "Is it common to lie in job ads regarding the technologies in use?", "text": "> Wanted: Experienced Delphi programmer to maintain ginormous legacy application and assist in migration to C# _Later on, as the new hire settles into his role..._ \"Oh, that C# migration? Yeah, we'd love to do that. But management is dead-set against it.
Good thing you love Pascal, eh?\" I've noticed quite a lot of this where I live (Scotland) and I'm not sure how common this is across IT: a company is using a legacy technology and they know that most developers will avoid them in order to keep mainstream technology on their resumes. So, they will put out an advertisement saying they are looking to move their product to some hip new tech (C#, Ruby, FORTRAN 99) and require someone who has exposure to both - but the migration is just a carrot on a stick, perpetually hung in front of the hungry developer as he spends each day maintaining the legacy app. I've experienced this myself, and heard far too many similar stories, to the point where it seems like common practice. I've learned over time that every company has legacy problems of some sort, but I fail to see why they can't be honest about it. It should be common sense to any developer that the technology in place is there to support the business and not the other way round. Unless the technology is hurting the business in some way, I hardly see any just cause for reworking the software stack to adhere to whatever is currently in vogue in the industry. **Would you say that this is commonplace?** If so, how can I detect these kinds of leading advertisements beforehand?"} {"_id": "231967", "title": "Optionally Using GPL Library via System API in Closed-Source Application", "text": "Please consider the following situation: You write a closed-source application, let's call it A. A depends on a system API (i.e. one provided by the OS), which is in turn configurable to use different backends. Each backend is identified by a simple string value and also selected via this identifier. The API itself is not bound by any GPL-like requirements. Now, a specific backend (B) would be preferred but is GPL-licensed. The application would run without it, and there are other backends available, just with worse performance or other downsides. Is any of the following permitted? 1. Hard-coding the identifier of B 2. Automatically selecting any available backend, but preferring B if available 3. Letting the user select a backend (and strongly suggesting B) Would it also depend on whether I ship A and B together, or ship just A and suggest to the user that installing B would improve the application's performance? I would generally think that all three options are o.k., since passing a specific string to a system API would hardly qualify as a _derived work_ (IMHO), but the FAQ of B basically states that _it does not matter how you call a function of B, any usage makes your program a derived work_. Are they right even in the three cases outlined above? P.S.: It is not my decision to make A closed-source, so GPL'ing it is not an option. P.S. 2: I omitted the names of the system API and B to keep the question more generic, but I can add them if it makes any difference. Edit: From another point of view: When does the user (as opposed to the programmer) create a derived work? In the third case, it should be clear that the user creates the derived work, since he installed A and B and actively selected B. This would be in line with MSalters' arguments in his comment. The second case at least makes it clear that the work does not depend on B, so this could be o.k., too.
Probably it would be most in the _spirit_ of the GPL to just contact the authors; however, I'm not sure if their FAQ entry is in line with the _actual text_ of the GPL."} {"_id": "141518", "title": "Is there any algorithm book that teaches like Head First series?", "text": "As a Java programmer I need to learn algorithms (for programming challenges). I have read some Head First series books (I own the Java one) and they are pretty brain-friendly. So I was wondering: is there any algorithm book that is simple to understand and also gets to the crux of each algorithm?"} {"_id": "227771", "title": "What are the essential mathematics topics that a programmer should know in order to learn algorithms?", "text": "Being an electronics engineer, I was never exposed to any algorithms. Now I am an experienced programmer and want to learn algorithms. I started with CLRS, but I was overwhelmed by the maths in the book. So what are the minimum maths topics I should know before I start algorithms? Are there any good tutorials or books on this? I did find one book on maths by Donald Knuth, but I guess it's too detailed and tough for beginners."} {"_id": "227770", "title": "DDD and the persistence of value objects; must we denormalize?", "text": "I've been reading up a lot on Domain-Driven Design, and I came to the question of how to preserve the lack of distinct identity of value objects (VOs). While in the DDD world this is a requirement (yet I'm not sure I understand the full power of this), it poses problems for the lower ORM layers in terms of persistence. Tables, for the most part, like to be normalized. It makes life easy in terms of having no delete or insertion anomalies. My concern comes when implementing VOs; they do have primary keys - identities by definition (and foreign keys to their parent). Making them entities is violating DDD in favor of persistence. Instead, I could make a wrapper class that accepts a \"bag\" of parameters, and then attaches them to each parent's foreign key. While mechanically messy, it sounds like it will work. I've read a lot of responses on the internet (Stack Overflow also) about denormalizing tables. This concerns me, as now we're violating persistence for DDD. How can VOs exist by their proper definition without denormalization?"} {"_id": "141512", "title": "Resources for creating a turn-by-turn navigation system", "text": "I'm trying to create a kind of turn-by-turn satellite navigation system using the iOS SDK. I get the directions from the server and draw them on the map, then I keep getting location updates from the iPhone's GPS chip. Currently I start by finding the nearest turning point; then, each time the user comes within a certain distance of the next turning point, a verbal cue is given and the turning point index is incremented. This is a delicate system and I'd like to make it more robust, so I can tell when the user is going in the wrong direction etc. Basically I'm looking for some literature about turn-by-turn navigation, in terms of tracking the user's progress and whether they're going in the right direction. I'd have thought there's a lot of research out there, but I can't seem to find anything apart from simple tutorials on how to use a given SDK or directions API.
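For concreteness, my current tracking logic amounts to something like the following (a C#-style sketch purely for illustration - the real code uses the iOS SDK, and every name here is invented):

using System;
using System.Collections.Generic;

// Illustrative only: advance along a list of turning points as the user
// comes within a fixed radius of each one.
class RouteTracker
{
    public List<(double Lat, double Lon, string Cue)> Turns =
        new List<(double, double, string)>();
    int next;                          // index of the turn we are approaching
    const double ArrivalRadius = 25.0; // metres - invented threshold

    public void OnLocationUpdate(double lat, double lon)
    {
        if (next >= Turns.Count) return;   // route finished
        var turn = Turns[next];
        if (MetersBetween(lat, lon, turn.Lat, turn.Lon) < ArrivalRadius)
        {
            Console.WriteLine(turn.Cue);   // stand-in for the verbal cue
            next++;
        }
        // Note what is missing: nothing here notices the user moving
        // steadily *away* from the route, which is what I want to fix.
    }

    static double MetersBetween(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000, toRad = Math.PI / 180;
        double dLat = (lat2 - lat1) * toRad, dLon = (lon2 - lon1) * toRad;
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(lat1 * toRad) * Math.Cos(lat2 * toRad) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(a)); // haversine distance
    }
}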
Can anyone direct me to a good run-through of the various techniques used in software such as TomTom or Google Maps Navigation?"} {"_id": "227772", "title": "Design pattern for overlapping actions and animations?", "text": "Is there a design pattern for dealing with overlapping UI actions and animations? A few examples: 1. Let's say I have an interactive table row that expands to reveal an extra control when the user clicks on it, and vice versa if they click on it again. If the user clicks on the row while the expand animation is in progress, I want it to stop animating and shrink back to its original size from the place where it stopped. 2. I have a set of rows from example 1 inside a group. When the last row is deleted, I want the group to shrink and then remove itself in a callback. However, while the group is shrinking, the user could still add another row to it. If that happens, I want the group to stop shrinking, expand to the correct size, and add the row. 3. What if I have a sorting function for the UI in example 2? If the user clicks on the sort button, the rows shuffle and animate into their new positions. However, I still want the user to be able to click on table rows, delete rows, and do everything else that my UI allows. The UI should behave sensibly, with the shuffle animation taking precedence. You could automatically fast-forward all animations for a UI element when a new action is performed on it, and/or block user input in all animating UI elements. But that seems inelegant to me. As a user, I really admire UIs that don't stutter or block me from doing what I want \u2014 UIs, in other words, that behave like real objects \u2014 and I'd like to do the same in my own applications. It seems to me that you'd need to have constraints for each action/animation in order to make this a robust system. For example, you might need to ensure that any changes to your data/state/model happen _only_ in the callback to your animation, not before. In example 2, you can't delete the group in your model as soon as the user clicks the button to delete the last row; it has to be done at the end of the animation, and at the end of the animation only. (Otherwise, if the user decides to add another row during the delete animation, reverting the group delete would be very difficult.) You might also need to design the animation system in such a way that if two animations from two different actions overlap, any mutually exclusive properties would continue animating as before. (In other words, if the user sorts the table while a row is expanding on click, the width expansion should not stop just because the row is moving to its new position.) And what about starting defaults for each animation? If a row fades out when deleted, and then another action implicitly cancels the delete, the fade out should stop and animate back in, even though the other action might not necessarily know about it. There's also the higher-level issue of animations (visual property changes) vs. actions (sequences of events, with potential model changes, that include animations). In example 2, \"delete last row\" is clearly an action, since the end result is that a group gets removed. However, in most UI libraries, you'd add the callback to the animation, conflating the two. This is also an issue in games. If I need to trigger a grenade throw after my character's throw animation finishes, how do I represent that in my code? 
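(The obvious representation I keep coming back to is a completion callback; a C#-ish sketch with invented names:

using System;

class GrenadeSketch
{
    // Stand-in for the view layer: run an animation, then fire a callback.
    static void Play(string animation, Action onComplete)
    {
        // ...pretend this takes the animation's duration...
        onComplete();
    }

    static void Main()
    {
        // The model change (spawning the grenade) is buried inside the
        // view's completion callback - exactly the conflation above.
        Play(\"throw\", () => SpawnGrenade());
    }

    static void SpawnGrenade() { /* model mutation happens here */ }
}

...but that buries a model change inside view code.)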
How do I place the grenade throw on my timeline without mixing model and view code, while still allowing the animation to slow down and speed up based on external factors? The more I think about this, the more my head hurts, and the more I'm convinced that somebody has already thought this through a lot more carefully than me. Any ideas? (At the moment, I'm mostly interested in this for web UI, but I'm also interested in a general approach for my future, non-web projects.)"} {"_id": "227777", "title": "Common practice to transfer named parameters in Tcl", "text": "Using Tcl 8.5, what is the common practice for passing **named** parameters to a proc/method? Something that would be similar to defining the class members when instantiating a class object in _[incr Tcl]_. Something like this: # Passing 3 parameters: \"hello\", \"how\" and \"doing\". procName -hello world -how you -doing today # We can easily add or remove any parameters to/from any location **Edit #1:** Same example as above, but more similar to Python: procName hello=world how=you doing=today"} {"_id": "227776", "title": "String and Suffix Matching from a Suffix List", "text": "I'm having trouble finding an algorithm that matches (or fails to match) a substring of a string to a suffix from a list of suffixes. The hits I'm finding are for suffix trees and suffix arrays, but they are not quite what I am looking for. My particular problem is a fully qualified hostname - possibly malformed - and trying to match its suffix against the banned suffix list published by Mozilla. As a concrete example, given `w.x.y.z.example.com`, I need to identify it as hostname `w`, domain `example.com`, and know that `example.com` is a valid domain. As a degenerate example, I may need to take `example.com` and know that it's a domain with no host, and that matching it to `*.com` would be wrong. And the negative cases only get worse when you factor in suffixes like `pvt.k12.ma.us`, `nasushiobara.tochigi.jp`, `公司.cn` and `السعودیة`. The eventual application will be DNS hostname matching in X509 certificates; hence the reason there may be malformed input from miscreants. I think this algorithm should run in _m log n_: _log n_ for each lookup into the [sorted] list of suffixes, and _m_ for each DNS label. I think it may be a case of not seeing the forest for the trees. Does anyone know of an efficient algorithm for matching substrings from a list of suffixes?"} {"_id": "63815", "title": "In what way(s) is LLVM Low Level?", "text": "In what way(s) is LLVM (Low Level Virtual Machine) Low Level? (At the time of writing, I did not find this expansion of the abbreviation \"LLVM\" on its web site, but on Wikipedia.) Is it called \"Low Level\" because of what it is designed for (a compiler infrastructure), or because it works on a \"lower level\" than other tools? As a (kind of) \"illustration\" of this, is LLVM lower-level than the JVM and CLR, or is it only _designed_ for \"lower level\" uses?"} {"_id": "185893", "title": "Customer asks for firmware source files", "text": "Recently my company was asked by a customer to develop a control board, which includes firmware and PCB layout development. After finishing development, the customer will buy the control boards at a certain quantity every year. We are now at the contract-making stage, and the customer wants my company to hand over both the firmware source files and the PCB layout files after development.
We could give them the PCB layout files, but we don't want to give them the firmware files, as we're worried that they may find a third party to produce the control board, so that we may end up with nothing. However, from the customer's point of view, they are concerned that they could be in a risky situation if my company for some reason stops selling the boards. Is there any \"common\" practice for making a contract regarding this?"} {"_id": "113236", "title": "Should all developers on a team have equal role/responsibility in writing and updating software design documents", "text": "I asked a very similar question a few days ago, but because I presented too much of my company's current situation, most answers focused completely on something that I wasn't looking to answer. So I wanted to try again... Given just about any agile team, you always have people with varied a) knowledge of the product, b) experience in producing designs and c) general level of competence. So let's say you take an agile team and, using the factors (a), (b) and (c) above, you come up with an overall score for each engineer (mental exercise only). Now we sort them in ascending order and get a continuous spectrum. So the question I wanted to ask is this: _Should every single person on this spectrum be given equal responsibility when it comes to writing/updating software design specifications?_ I'm not talking about coming up with a software design; in agile teams that is usually done by more than one person in a more collaborative setting. But at the end of the day, someone has to go back to their desk, open their favorite (and company/team approved) document editor and type it all in. The reason I ask this question is that it seems people on the high end of the spectrum tend to produce documents which are more readable and concise, and have exactly the information you would want in a design specification, so that the people who read them later have much more benefit. People on the opposite side of the spectrum tend to produce documents which are not nearly as useful or clear, and a lot of the time, even with several iterations of design reviews, their work doesn't seem nearly as helpful (so the specs they produce become a write-only dumping ground that no one reads or trusts because of the way they are written). I'm not proposing that the agile team be segregated and only certain individuals given certain roles that will never change. I'm asking... 1) What do you do in your teams with people who have vastly different (a)x(b)x(c) scores? 2) Does it make sense not to give equal responsibility to everyone, and instead give only smaller (or no) update tasks to those on the low end of the spectrum, while working with these individuals to identify the (a), (b) and (c) factors they need to improve, and giving them more responsibility as those improve? Personally, I'm not sold one way or the other; I'm just curious how other teams are dealing with this."} {"_id": "113237", "title": "When you should NOT use Regular Expressions?", "text": "Regular expressions are a powerful tool in a programmer's arsenal, but there are some cases when they are not the best choice, or are even outright harmful. Simple example #1 is **parsing HTML with regexp** - a known road to numerous bugs. This probably applies to parsing in general. But are there other clearly no-go areas for regular expressions?
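To illustrate example #1: anything with nesting is a no-go, because a classic regular expression cannot count. A pattern that handles one level of parentheses looks fine until the input nests (C# here, but the point is language-agnostic; I'm ignoring .NET's non-standard balancing groups):

using System;
using System.Text.RegularExpressions;

class NestingDemo
{
    static void Main()
    {
        // Matches one level of ( ... ) with no parentheses inside.
        var paren = new Regex(@\"\(([^()]*)\)\");

        Console.WriteLine(paren.Match(\"(a+b)\").Value);     // (a+b)  - fine
        Console.WriteLine(paren.Match(\"((a+b)*c)\").Value); // (a+b)  - wrong:
        // the outer pair can never match, because [^()] forbids the
        // nested parentheses the outer pair would have to contain.
    }
}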
* * * p.s.: \" _The question you're asking appears subjective and is likely to be closed._ \" - thus, I want to emphasize that I am interested in examples where the usage of regexps is known to cause problems."} {"_id": "147912", "title": "Need to process 2 million 100k messages per second and route them to a particular event, delegate or multicast delegate", "text": "I need to process 2 million messages per second (perhaps in a scale-out configuration) and route each message to N delegates or multicast delegates. **Question** How should I structure my application in C# so that I can achieve the best performance when receiving a message and routing it to the correct delegate? **Additional Details** Each inbound message has the following properties in the format of a JSON array: * Customer * Category * TopicID * DateTime * Data[] Each message will be processed by a delegate function that is interested in one or more of those properties. **Simple examples of delegate processing needed:** * A customer counter function may count the quantity of messages per customer, * A category counter may count the quantity of messages in that category, * A customer-counter will count messages unique to that customer."} {"_id": "37476", "title": "Can anyone recommend coding standards for TSQL?", "text": "We've long had coding standards for our .Net code, and there seem to be several reputable sources for ideas on how to apply them which evolve over time. I'd like to be able to put together some standards for the SQL that is written for use by our products, but there don't seem to be any resources out there on the consensus for what determines well-written SQL."} {"_id": "37475", "title": "What are developer's problems with helpful error messages?", "text": "It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, **still** fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble. A program that generates an error generated it for a reason. It has everything at its disposal to inform the user as much as it can about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing. One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server _knows_ what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an `Application_Name` attribute on their connection string, but even a hint about the machine in question could be helpful. Another candidate, also SQL Server (and MySQL), is the lovely `string or binary data would be truncated` error message and its equivalents. A lot of the time, a simple perusal of the SQL statement that was generated and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? For this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about, once the database engine knows there is an error, it does a quick comparison after the fact between the values that were going to be stored and the column lengths, and then displays that to the user?
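That after-the-fact check is trivial - a hypothetical helper along these lines (C#, every name made up) is all I'm asking the engine to do:

using System.Collections.Generic;

static class TruncationReporter
{
    // Given the values we tried to store and each column's declared width,
    // name every column that would have been truncated.
    public static IEnumerable<string> Offenders(
        IDictionary<string, string> values, IDictionary<string, int> widths)
    {
        foreach (var pair in values)
        {
            int max;
            if (widths.TryGetValue(pair.Key, out max)
                && pair.Value != null && pair.Value.Length > max)
                yield return pair.Key; // report actual length vs. declared width
        }
    }
}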
ASP.NET's horrid table adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers are too lazy to provide even a row number or example data. (For the record, I'd never use this data-access method _by choice_; it's just a project I have inherited!) Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help _me_ whilst I develop, because I know my code throws things that are meaningful. One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users, and would prefer something full of verbosity and yummy information, because they can track down their problems faster. Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys, come _on_."} {"_id": "187944", "title": "The suffix Exception on exceptions in java", "text": "Specifying a suffix of Exception on exception classes feels like a code smell to me (redundant information - the rest of the name implies an error state and it inherits from Exception). However, it also seems that everyone does it and it seems to be good practice. I am looking to understand why this is good practice. I have already seen and read the question \"why do exceptions usually have the suffix exception in the class name\". That question is about PHP, and while the responses are probably also valid for Java: are there any other arguments, or is it really as simple as explicitly differentiating them? If we take the examples from the previous question - could there really be classes in Java with the name `FileNoFound` that are not an exception? If there could be, does it warrant suffixing it with `Exception`? Looking at a quick hierarchy of `Exception` in Eclipse, sure enough, the vast majority of them do have the suffix Exception, but there are a few exceptions. `javassist` is an example of a library that seems to have a few exceptions without the suffix - e.g. `BadByteCode`, `BadHttpRequest`, etc. `BouncyCastle` is another lib with exceptions like `CompileError`. I've googled around a bit as well, with little info on the subject."} {"_id": "187947", "title": "I need advice developing a sensitive data transfer/storage/encryption system", "text": "I got closed on SO and told to post this here, as it's about general application design as opposed to specific code. ## Intro I'm currently working on a project which involves the daily extraction of data (pharmacy records) from a Visual FoxPro database, and uploading some of it to a WordPress site, where clients of the pharmacy can securely view it. I would like some advice in terms of the general methodology of my software - I am able to code it, but need to know if I'm going the right way about it. I'm writing both the PC software (in C#/.NET 4.5) and the PHP WordPress plugin.
(It doesn't matter much that it's in WordPress; the actual code for this will pretty much run separately.) ## Question 1: Encryption The process I plan to use for encrypting the data server-side is based on this article. Summarised, it advocates encrypting each separate user's data asymmetrically with their own public key, stored on the server. The private key to decrypt this data is then itself encrypted symmetrically using the user's password, and stored. This way, even if the database is stolen, the user's password hash needs to be broken, and even then the process needs to be repeated for every user's data. The only weakness, pointed out by the author himself, and the main point of my question, is the fact that while the user is logged in, the decrypted key is stored in session storage. The way the article suggests dealing with it is to just limit the time the user is logged in. I thought a better solution would be to store that key in a short-lived secure cookie (of course the whole process is happening over HTTPS). That way, if the attacker has control of the user's computer and can read their cookies, they can probably just keylog their password and log in - no need to steal the database - while even if the attacker gains access to the server, they cannot decrypt the HTTPS traffic (or can they? I'm not sure.) **Should I use secure cookies or session storage to temporarily store the decrypted key?** ## Question 2: Storage The second thing I still want to work out is how to store the data - this is more of an efficiency problem. Since every user has their own key for encryption, it follows that the records for every user must be stored separately. I don't know if I should store a \"block\" of data for every user, containing encrypted JSON with an array of objects representing records, or whether I should store the records in a table with the actual data structure, and encrypt each data field separately with the key. I am leaning towards storing the data as one block - it seems to me to be more efficient to decrypt one big block of data at a time than perhaps several thousand separate fields. Also, even if I stored the data in its proper structure, I still wouldn't be able to use MySQL's WHERE, ORDER BY etc., since the data would all be BLOBs. **Should I store the data as a big block per user, or separated into the different fields?** ## Question 3: Transfer I extract the data from the DBF file, and essentially make a \"diff\", whereby I compare the current extracted data with the last day's data, and only upload the blocks of the users that have changed (I can't upload only the records, as I will probably end up storing the users' data in blocks). I also include \"delete\" instructions for users who have been deleted. This is because there are hundreds of thousands of records in the database, totalling over 200 MB, and the size increases every day. My current plan is to write all this data to a JSON file, gzip it and upload it to the server. My question is, how do I do that while ensuring the security of the data? Naturally, the upload will happen over HTTPS, and I have an API password in place to only allow authorised uploads, but my main concern is how to protect the data if the server is compromised. I don't want the attacker to just grab the JSON file from the server while it's being processed. One idea I had was to get the server to send me a list of public keys for the users, and perform the encryption in my software, before the upload.
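In .NET terms I picture that looking roughly like this - a sketch only, doing hybrid encryption (a fresh AES key per user block, wrapped with that user's RSA public key; every name here is invented):

using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class BlockEncryptor
{
    // Encrypt one user's JSON block so that only the holder of the matching
    // RSA private key (itself stored encrypted under the user's password,
    // per Question 1) can read it. The server never sees the plaintext.
    public static byte[] EncryptForUser(string json, RSAParameters publicKey)
    {
        using (var aes = new AesCryptoServiceProvider()) // random key + IV
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(publicKey);
            byte[] wrappedKey = rsa.Encrypt(aes.Key, true); // OAEP padding
            byte[] plain = Encoding.UTF8.GetBytes(json);
            byte[] cipher;
            using (var enc = aes.CreateEncryptor())
                cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            // Upload wrappedKey + IV + cipher; the lengths are fixed/known.
            return wrappedKey.Concat(aes.IV).Concat(cipher).ToArray();
        }
    }
}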
It seems to me like that's the only way of protecting that data. I could encrypt the whole JSON file, perhaps with an API key or a special password, but that's moot if the attacker can just access the decrypted file as it's being processed on the server. Is that a good solution? **Should I encrypt the data individually client-side, or is there a way to securely transfer it to the server and encrypt it there?** Thanks in advance for any answers; I'd love to hear from someone who's dealt with problems like this before. Matt **EDIT: I'm pretty decided on 1 and 2 - thanks @Morons. I'll use secure cookies and store data in blocks. My main question is now question 3; I would love to get some input on that.**"} {"_id": "3438", "title": "Rigorous Definition of Syntactic Sugar?", "text": "It seems like in language holy wars, people constantly denigrate any feature they don't find particularly useful as being \"just syntactic sugar\". The line between \"real features\" and \"syntactic sugar\" tends to get blurred in these debates. What do you believe is a reasonable and unambiguous definition of syntactic sugar that avoids it being defined as any feature the speaker/writer doesn't find useful?"} {"_id": "249915", "title": "How to operate a computer without an operating system?", "text": "How can a computer be used when there is no operating system? What tools or knowledge do I need to do it? Do I have to give all the commands in binary to use computer hardware resources like a monitor?"} {"_id": "138101", "title": "What aspects of a project should be covered when presenting it in an interview?", "text": "I have been working as a .Net developer for a couple of years. I don't have experience in presenting projects, but I have an interview coming up and I will be expected to present the projects that I worked on in the past. How much and what kind of technical detail should be covered? What other aspects of a project should be covered?"} {"_id": "198241", "title": "BASE_DIR/URL with or without trailing slash?", "text": "When you have basedir/url constants or variables, do you put a trailing slash? In theory I think ultimately one should, since /path/to/dir/ is a dir, while with /path/to/dir, if I'm not mistaken, the server checks if there's a file called 'dir', and then adds a trailing slash for you. However, I think `$my_url = BASE_URL . '/my/dir/'` looks better than this: `$my_url = BASE_URL . 'my/dir/'` Better yet, this: `/my/dir/\">` looks a lot more readable to me than `my/dir\">`. Obviously this might be personal preference, and it's fine as long as one is consistent, but what is most common? What approach do you prefer? Any opinion appreciated."} {"_id": "84771", "title": "Do companies actually send their developers to events abroad?", "text": "I was looking at the DevConnections conferences and lamenting the fact that the UK one was cancelled, as there's absolutely no way I'll be going to any of the others, which got me thinking: **Do any companies actually send their developers to conferences abroad, or is that just a utopian ideal of which I'm irrationally envious?**"} {"_id": "199244", "title": "UML modelling semantics", "text": "In today's lecture about modelling techniques with respect to MDD using UML, the lecturer stated that it's absolutely necessary to give a (possibly textual) description of the semantics of each diagram you produce. In my opinion it's not necessary to describe the semantics of diagram elements which are already standardized by UML.
I can see that it's completely appropriate if you extend the standard UML by stereotypes/tagged values for any reason. In contrast, I think that, in the ordinary case, standardization is intended to let people reason and talk about diagrams without the need to explain semantics every time. The only precondition is of course that they all rely on the same UML specification. Another point regarding MDD is that using different semantics from time to time for the same diagram kind makes code generation and automatic model transformation difficult in the end. 1. Am I right with this notion? 2. Are there some inherent interpretation ambiguities in UML that make it more generally usable?"} {"_id": "167567", "title": "Preparing yourself for Code challenges", "text": "Just a few days ago I discovered Codility, and I tried their challenges. And I must say, I got my behind handed to me on a platter. I'm not sure what the problem was, but I'll lick my wounds and wait for the solution to come out and compare it with my own. In the meantime, I want to get ready for the next challenge, so I'm reading their previous blog posts and seeing how to solve their previous problems. There are a lot of new things I haven't heard about (Cartesian trees, various sorting algorithms, etc.). So, how does one prepare for such challenges (especially the O(x) time and space complexity)? What should I read to prepare for such a task?"} {"_id": "203068", "title": "How should modules access data outside their scope?", "text": "I run into this same problem quite often. First, I create a namespace and then add modules to this namespace. The issue I always run into is how best to initialize the application. Naturally, each module has its own startup procedure, so should this data (not code in some cases, just a list of items to run) stay with the module? Or should there be a startup procedure in the global namespace which has the startup data for ALL the modules? Which is the more robust way of organizing this situation? Should some things be made centralized, or should there be strict adherence to modules encapsulating everything about themselves? Though this is a general architecture question, JavaScript-centric answers would be really appreciated!"} {"_id": "70527", "title": "Implementation of communication between packages (Java)", "text": "I'm making a project with 5 packages. The packages can communicate with each other by sending Messages to a central MessageQueue in the Main package, and that MessageQueue takes care of all the messages. (cfr.: http://en.wikipedia.org/wiki/Java_Message_Service) Provided this image: ![Diagram](http://i.stack.imgur.com/u26Bo.gif) the packages would be the clients (both publisher and subscriber) and the MessageQueue would be the topic. I'm now implementing the MessageQueue (a.k.a. the Topic in the image above); packages can already subscribe to Messages and packages can also publish a Message. Publishing a Message is just adding the Message to a queue 'inside' the MessageQueue. So at a given time during execution there can be 0, 1 or more messages in the MessageQueue, and now I must make sure the MessageQueue processes all these messages correctly (and in order). A way of doing this could look like this: public class MessageQueue { private final Queue<Message> queue = new LinkedList<Message>(); public MessageQueue(){ //initialising etc.
processMessages(); } private void processMessages(){ while(/*program is running*/){ if(!queue.isEmpty()){ /*process first message in queue*/ } } } } However, I believe this is not a great design; it is sometimes referred to as _busy waiting_. How could I approach this in a better way (without using any standard APIs, of course)? I would also like to apologise that the image isn't embedded, but I need 10 reputation before I can do that. If someone with more reputation could edit my post so that the image is embedded, I would greatly appreciate that."} {"_id": "209004", "title": "Using WPF rather than WinRT for Windows 8 Pro tablet app: good or bad idea?", "text": "Our business is considering writing a line of business application for tablets to enable road warriors and executives to access our data. This will be primarily used for dashboards, reports and some form-filling input: a quite typical scenario. Our other apps are built in .Net (WinForms) with lots of related class library DLLs (for the data access layer, business logic, etc), so it seems natural to want to use Windows 8 for this. I am rather put off by Windows RT as it doesn't implement the whole .Net Framework and seems quite limiting, so we are considering Windows 8 Pro tablets rather than WinRT ones. However, we want a smooth and pleasant tablet experience, and I don't think WinForms would be great for this as it was not built for touch support. My question is: would WPF be good for this? I see there are UI vendors selling touch control kits, but I wonder whether the \"touch\" experience in WPF is equivalent to the one found in WinRT/Metro apps? Probably this could be answered by playing around with the tablet, but I unfortunately do not have one at the moment and can't get the budget to buy one until I have settled this."} {"_id": "108984", "title": "How to use a 3rd party web API", "text": "I am trying to understand the concept of using 3rd party web APIs. From what I understand so far, web APIs look like regular URLs with some parameters etc. Will the client program need to download and install any package/bundle from the website/server providing the API and include it with their product, or is no bundle needed from the service provider, so that the client program just uses the web API URLs (like how we do it in a browser)? What are the variants in common use? Does it boil down to how much we want to do on the client side and how much we want to do on the server side? E.g., does not downloading anything from the web API service provider mean everything happens on the provider's side?"} {"_id": "71699", "title": "What is considered a suitable notice period for a software developer?", "text": "I'm being asked by the company I work for to extend my notice period. I'm not outright against it (in fact it's somewhat flattering in a way), but it has made me wonder what a typical notice period might be for a C# software developer?"} {"_id": "229204", "title": "How to use Git with ASP MVC Code-First Entity Framework", "text": "**TLDR:** What is an effective way to use Git to handle ASP MVC code-first EF? Which files need to be included for NuGet? Which files should be left out? We have an **ASP MVC code-first Entity Framework** project that we are building as a school project. For our versioning, we are using **Git to Bit Bucket**. The basic workflow to add a feature or revamp code is to create a branch off of a _Dev_ branch, commit to the new branch until finished and initial testing is done, then merge/pull request back into the _Dev_ branch.
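A note on the MessageQueue question above (70527): a minimal wait/notify sketch of the same class that avoids busy waiting using nothing beyond the JDK's built-in monitor methods. The Message type is the poster's own; the method names and split into publish/take are illustrative assumptions, not code from the post:

import java.util.LinkedList;
import java.util.Queue;

public class MessageQueue {
    private final Queue<Message> queue = new LinkedList<Message>();

    // Called by publishers: add a message and wake the processing thread.
    public synchronized void publish(Message m) {
        queue.add(m);
        notifyAll();
    }

    // Called by the processing thread: blocks without spinning the CPU
    // until a message arrives, then returns messages in FIFO order.
    public synchronized Message take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // releases the lock until publish() calls notifyAll()
        }
        return queue.remove();
    }
}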
We have run into issues with the project setup breaking when a user tries to merge changes into their branch before merging back into _Dev_. Some questions are: **For the NuGet Packages, what should be included in the Git repo?** There are some files that seem to update every time, forcing Git to go through all of them during a commit. **Should the Entity Framework be included in the repo?** We have the migrations, but they seem to break any time anyone pulls changes onto their machine. **Is there a good way for Git to handle Visual Studio Project (.csproj) Files?** We had a merge nightmare involving project files when one developer pulled changes from _Dev_ before trying to push back. For our .gitignore, we started from an ASP template and added a few rules of our own. A copy of it is at Pastebin.com. Also, as for the horrific mangling of project files, we have looked on StackOverflow and saw this: .NET + Git: how I deal with Visual Studio files?. Is this something that would resolve the Visual Studio project files issue?"} {"_id": "252445", "title": "Patterns for undo in multi-user applications on relational data", "text": "I want to build a web application that will allow multiple users to collaboratively populate the contents of a fairly conventional relational database. (The database will have a fixed schema, which is known in advance.) I can see how to define the schema for the necessary object types, relations, and foreign key constraints (items, items as members of categories, links between items, and so on). Basic CRUD operations to instantiate and modify objects are no problem. But for resilience against vandalism and mistakes, I can foresee that it will be necessary to have undo/rollback functionality, so that moderator-level users can undo changes made by other users. I'm having trouble figuring out a suitable approach to take for two key functional pre-requisites: 1. Capturing all the database changes that result from an initial user request. For example, there's a many-to-many relationship between items and categories. Therefore, if a category is deleted (triggered by a user submitting an HTML form), all the category-item relation records corresponding to that category will get deleted due to referential integrity constraints on the many-to-many relation. How can I record all the cascading consequences of an initial operation, so that it's possible to completely undo it later? 2. How can I isolate undo operations so that a bad action by one user can be undone without also needing to roll back all the beneficial changes which have been made by other users, in between the bad action and the moderator's review? The Undo patterns I've seen described (e.g. the \"Command\" pattern) all assume that there is a stack of commands and undo operations are always applied in strict reverse order of initial application (no support for out-of-order undos). Are there any standard patterns for handling undo capability in relational databases which would help meet these two goals? At the moment, I'm looking for generic algorithms and patterns which help solve the problems listed above, rather than platform-specific details."} {"_id": "219038", "title": "What is the difference between self-types and trait inheritance in Scala?", "text": "When Googled, many responses for this topic come up. However, I don't feel like any of them do a good job of illustrating the difference between these two features. So I'd like to try one more time, specifically...
**What is something that can be done with self-types and not with inheritance, and vice-versa?** To me, there should be some quantifiable, physical difference between the two, otherwise they are just nominally different. If Trait A extends B or self-types B, don't they both illustrate that being of B is a requirement? Where is the difference?"} {"_id": "14162", "title": "How do you track bugs in your personal projects?", "text": "I'm trying to decide if I need to reassess my defect-tracking process for my home-grown projects. For the last several years, I really just track defects using `TODO` tags in the code, and keep track of them in a specific view (I use Eclipse, which has a decent tagging system). Unfortunately, I'm starting to wonder if this system is unsustainable. The defects I find are typically associated with a snippet of code I'm working on; bugs which are not immediately understood tend to be forgotten, or ignored. I wrote an application for my wife which has had a severe defect for almost 9 months, and I keep forgetting to fix it. What mechanism do you use to track defects in your personal projects? Do you have a specific system, or a process for prioritizing and managing them?"} {"_id": "139482", "title": "Why are statements in many programming languages terminated by semicolons?", "text": "Is there a reason that a semi-colon was chosen as a line terminator instead of a different symbol? I want to know the history behind this decision, and hope the answers will lead to insights that may influence future decisions."} {"_id": "222112", "title": "Optimal way to implement this specific lookup table in C#?", "text": "I want to create a lookup table for this data: The \"input variables\" (what is used to \"lookup\") are 4 different doubles that can each take on 1 of 200 numbers (the numbers range from 1-1000, but there are only 200 possible values each can take, and those values are known to me). The doubles all have 2 decimal places. If any one of the four were changed, it would slightly change the output variables. There is also 1 integer (an enum really) that can take on a value from 1-5. There is a condition on 3 of the input variables (1/x + 1/y + 1/z must be less than 1.02). Could this be used in a hashing algorithm? The \"output variables\" (what is returned) are ~30 doubles (mostly 2 decimal places, but one has 10 decimal places), ranging from 1 to 1000 (2 decimal places); they will be packaged in an object. I expect there to be ~150 million records. Should I use a big dictionary and load it into memory when I start the program? Would a Database and LINQ be best? Can I use Trees or Hashing in some way to speed it up? I've never had to make a LUT this big before, where speed is a major factor. **Clarification:** Because of the condition that 1/x + 1/y + 1/z must be less than 1.02 (see above), there are only ~150 million combinations of input variables, not ~(200)^4. **Update:** I have looked at some stats (min and max observed values, and discovered some relationships) for my input variables and have found that if we call the 4 input doubles A, B, C, D: `A and B have ~200 possible values each, C has ~50 possible values, and D has ~120 possible values` Of these, there are several relationships that mean that there are only ~27 million combinations of these rather than the ~150 million I originally thought. So there will be ~27 million records in the LUT.
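A note on the lookup-table question above (222112): one common way to meet both the "unique value per combination" and "open to expansion" requirements (discussed further down in the post) is to map each double to its index in its sorted list of known possible values, then pack the five small integers into a single 64-bit key using per-dimension capacities padded above the current counts. A rough sketch, in Java for illustration (the post itself is C#, where the same arithmetic applies); all names and capacities here are assumptions:

import java.util.HashMap;
import java.util.Map;

public class LookupTable {
    // Capacities per dimension, padded above the observed counts
    // (~200, ~200, ~50, ~120 values, plus the 1-5 enum) so any one
    // dimension can grow without invalidating existing keys.
    // 256 * 256 * 64 * 160 * 8 ~= 5.4e9, which fits easily in a long.
    private static final long CAP_B = 256, CAP_C = 64, CAP_D = 160, CAP_E = 8;

    private final Map<Long, double[]> table = new HashMap<>();

    // aIdx..dIdx are 0-based positions of each double in its sorted list
    // of known possible values; e is the 1-5 enum.
    static long key(int aIdx, int bIdx, int cIdx, int dIdx, int e) {
        return ((((aIdx * CAP_B + bIdx) * CAP_C + cIdx) * CAP_D + dIdx) * CAP_E) + e;
    }

    public void put(int aIdx, int bIdx, int cIdx, int dIdx, int e, double[] outputs) {
        table.put(key(aIdx, bIdx, cIdx, dIdx, e), outputs);
    }

    public double[] get(int aIdx, int bIdx, int cIdx, int dIdx, int e) {
        return table.get(key(aIdx, bIdx, cIdx, dIdx, e));
    }
}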
There is also definitely a relationship that I haven't been able to figure out between (A, B, C) and D which will also bring down the number of combinations. Would it be optimal to run the LUT from RAM now that I have reduced the entries from 150 to 27 million (and probably lower)? Now that it's down to 27 million, would storing them as ints by multiplying them by 100 (2 decimal places) still be optimal? As DocBrown suggested, I should store the doubles as ints (multiply by 100, because they have 2 decimal places) and then combine the 5 different ints (4 doubles (see above) and 1 enum (value: 1-5)) into a key for the LUT. How do I do this such that I will have a unique value for each combination of my 5 input variables (the 5 ints) AND the method is open to expansion of each of the input variables, i.e. should I need to expand the double C's combinations to 70 instead of 50, I will need unique key values for the new entries that result from the expanded number of total combinations of the input variables."} {"_id": "106167", "title": "What is the best way to INSERT a large dataset into a MySQL database (or any database in general)", "text": "As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially inserting my PHP variables as the values): INSERT INTO mytable (column1, column2, ..., column90) VALUES ('value1', 'value2', ..., 'value90') and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and I fear writing the test code will be equally tedious. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?"} {"_id": "91626", "title": "Dependency analysis for tests", "text": "Google built a testing system that can infer which tests need to be run after a change. In their own words: > ... we built a continuous integration system that uses dependency analysis > to determine all the tests a change transitively affects and then runs only > those tests for every change. > > ... > > ![enter image description here](http://i.stack.imgur.com/wVwpL.png) Inspired by that, I have created a tool for Python that also uses dependency analysis to detect which tests need to be run. The difference is that it is not tied to a CI system. It is meant to be run locally and looks for file changes since the last time the tests passed, instead of looking at commits on the version control system. Visual Studio 2010 has a feature called \"Test impact analysis\" that will \"inform the developer of what tests they should run to verify the code changes they are making\". My question is: Is there a name for this approach? I believe I read somewhere that this is called \"Incremental Testing\", but I cannot find the reference anymore."} {"_id": "102677", "title": "Masters vs. PhD - long", "text": "I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion.
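For the 90-column INSERT question above (106167): rather than hand-typing the statement, the column list can drive both the SQL text and the value binding via a prepared statement. A minimal sketch in Java/JDBC (the post itself is PHP, where PDO supports the same idea); the table and column names are illustrative, and they must come from trusted code, never from user input:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.List;

public class WideInsert {
    // Builds "INSERT INTO mytable (c1, ..., cN) VALUES (?, ..., ?)" from the
    // column list, so 90 columns need no hand-typed SQL or values.
    public static void insertRow(Connection conn, String table,
                                 List<String> columns, List<Object> values) throws Exception {
        String cols = String.join(", ", columns);
        String marks = String.join(", ", Collections.nCopies(columns.size(), "?"));
        String sql = "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < values.size(); i++) {
                ps.setObject(i + 1, values.get(i)); // JDBC parameters are 1-based
            }
            ps.executeUpdate();
        }
    }
}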
I was hoping that I could post my ideas about the issue on here in hopes of 1) getting some extra insight on the issue and 2) making sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the _most_ important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I don't think I can accurately imagine. I know that I wouldn't have all of that at once obviously, but knowing I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable, so it would be so nice to just spend some money\u2026get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (Pretty straightforward - not much to elaborate on, but this is a big deal.) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc. than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long\u2026not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. A couple cons on each side - PhD - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me.
And because of this, the \"publish or perish\" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction in saying \"I made this\" rather than \"I managed a group of people that made this.\" I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - I was a great pitcher during my baseball years, but not so good at everything else; great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone that people look towards and come to for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like a hired gun from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way, even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems careful to check for that circumstance). The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make.
I feel like if you make something truly innovative in the industry\u2026just some really great new application or system\u2026there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, \"The goal isn't to live forever, it's to create something that will.\" Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years\u2026I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even fewer understand its importance. So if anything I have said is false, then please inform me. If you think I come off as a masters or a PhD type, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help."} {"_id": "51553", "title": "Checklist for starting an open-source project", "text": "To start an open-source project is not just to throw up the source code on some public repository and then be happy with that. You should have technical (besides user) documentation, information on how to contribute, etc. If creating a checklist of important things to do, what would you include on it?"} {"_id": "103195", "title": "Data Mining Books", "text": "I'm passionate about _data mining_. I have read some books like _Programming Collective Intelligence_, and I would like to know more good **books**, especially practical ones, **about _data mining_** in conjunction with _AI_. Any tips will be appreciated as well. Thanks."} {"_id": "232435", "title": "How can NSObject contain an NSString if NSString is an NSObject", "text": "How can NSObject contain an NSString if NSString is an NSObject? /* NSObject.h Copyright (c) 1994-2012, Apple Inc. All rights reserved. */ - (NSString *)description; NSObject has a property named 'description' that is an NSString, but isn't NSString an NSObject? How is it that this works?"} {"_id": "232436", "title": "How to use specific Request Header", "text": "I am learning how to work with XMLHttpRequests. I know that request.setRequestHeader() is an important factor. I just don't understand why. It took me a while, but I have at least found a list of Headers here and here, but I still don't understand what each one of them does, and what value goes with each. Is there a resource that gives an example and explains what they are for?"} {"_id": "230363", "title": "Iteratively improve software architecture & quality in an agile process?", "text": "Or, to put it another way: how do you ensure that architecture or quality doesn't suffer while doing agile? Some of the understandings in handling architecture in agile are below (generally applies to testing as well): * Do architecture until risks fall off your radar. * Minimal design up front, so that you will not pay a horrible price for reasonable change requests. * \"Just enough\" architecture/infrastructure per each feature/requirement. (The above were found in searching other related StackOverflow questions) * * * A **hypothetical example/scenario** is that there is a requirement to develop an order processing module in the existing system. The below guidelines are negotiated and agreed upon. 1.
The architecture for the module should be able to handle processing of 10 orders in the span of a minute. 2. Time to market is 4 weeks; planning is done to handle 2 sprints of 2 weeks. 3. The Business is satisfied with average performance expectations. Once the module is delivered, let's say the following improvements/changes are requested: * Improve the processing to 15 orders per minute. * Improve performance by 25%. * Time to market is 2 weeks. So, assuming further requirements from the customer/business come in (and they are welcome), as the iterations go on, the architecture needs to be overhauled, and testing the quality of the module also becomes more time consuming. * * * _And the above is not a happy result, as estimations increase, and there is an understandable inability to meet further expectations or requirements quickly. Even if it can be done in the given timeframe with sufficient quality, we just wish that we could have done something about it well before and not face this situation at this point of time._ _In hindsight, architecting to scale in the first iteration would be ideal, but that was not the goal and focus. The focus is always on time to market, and on achieving/meeting the business/customer's expectations in a given timeframe. Unfortunately, in the long run it does take a toll on development._ **What can be done to avoid the mentioned pitfall in the example, in improving the architecture or testing process iteratively?**"} {"_id": "18316", "title": "Comparison with an outsourced dev center", "text": "We are part of a software company which was just acquired by a larger one. This company has a large development center in India; we are based in Europe. We don't yet know what will happen with our projects; maybe they will be outsourced, maybe not, but I want to know if we can rival an Indian programmer as far as salary is concerned. I know there are a lot of factors involved here, not just the salary issue, but I just want to get an idea of the difference. Can someone mention salaries (in euros or dollars) and associated years of experience? I found some info on the web, but it is not that recent. Thanks in advance!"} {"_id": "234873", "title": "Should developers \"own\" the build server(s)?", "text": "The build server at our company sucks. The build agents (there are a dozen of them, each is a separate VM, and apparently they are all living on their own ESX server) are SLOW. The web interface is SLOW and is running on a Windows VM, also on ESX. Everything is SLOW, SLOW, SLOW. Artifact upload sometimes fails for unknown reasons. Sometimes the web interface reports in the UI that it has run out of memory, and it seizes up then crashes. It seems like IT is rebooting it at least once every day. Today they are rolling back a minor update because it has caused problems. Currently IT owns the hardware that the build servers run on, and we have zero visibility into that. We do have admin control over the software. We (the developers) are constantly complaining about it, but nothing seems to really improve. Does it make sense for developers to own the build server(s)? IT doesn't really have as much incentive to make them reliable and fast as developers do. In order to get IT to fix the problems, we have to complain to our managers, and then they have to talk to their managers, who then tell IT to fix it. We can report problems to IT directly, but that just doesn't seem to work to get things fixed because IT has lots of other things they need to deal with.
Also, the developers are the ones who are actually using the build system and are probably a lot more familiar with it than IT. How does it work at other companies? We are about a 150-person company. So probably half of that is engineering, but a third of that is probably hardware, so we have maybe 40 or so devs."} {"_id": "234871", "title": "Correct way to undo the past few git commits including a merge?", "text": "My team is working with git for the first time and has really messed up our git remote due to our inexperience. I'm trying to help educate them on git to solve the problem in the long term, but in the meantime, the commits that have been made are totally wrong. We haven't made very much progress at all, and I don't see the value in retaining these commits when they could cause confusion/additional problems down the line. The most recent commit was a merge between the master branch and a second branch that one of the team members unintentionally created. If I try to revert, git states that I must be using the -m flag, but I'm not sure whether I should be reverting or trying to rebase in this case, or how I would do it given the structure of the past couple of commits. What is the best way to resolve this issue, and how should I go about doing it?"} {"_id": "38059", "title": "Project size vs. Team size", "text": "I'm a freelance solo web developer/designer/iPhone app maker by night (IT technician by day) and I'm interested in developing a social network site to go public (like Facebook, fingers crossed for half a billion users). But I'm concerned that I'm not going to manage this on my own, being only able to dedicate weekends/evenings to it. I would class myself as an intermediate web dev, but by no means have I written something as large scale as an online community. When does a project become too big for one person, and when do you bite the bullet and hire someone/team up with another web dev?"} {"_id": "246236", "title": "Is this a good task for machine learning - grouping pieces of DNA based on sequence?", "text": "Say I have a list of 50 DNA sequences, all of the same length (6, 8 or 16 bps/chars). I want to group these into sets of, say, 5 or 6 sequences per set. I have some criteria that need to be met based on the sequence: 1) There have to be at least three mismatches (different characters): ACGTAC ACTTAT has 2 mismatches - positions 3 and 6 - so would not meet the criteria. 2) If we say A and C are red, G and T are green (the laser colours on the sequencing machines), then we must have a green and a red laser in each position. ACACTG AATGAC CCATGC is equivalent to (r is RED, g is green) RRRRGG RRGGRR RRRGGG So the last four positions match the criteria (we have a red and a green in each position), but the first two positions are all red, so this set would not meet the criteria. I have tried brute forcing this, and it works, but after the set size gets big enough, the number of combinations needed to check gets huge. So would this be a suitable task for some \"machine learning\" algorithms? Is there a type of algorithm / process that I should be looking at in particular? (I just started a Coursera course, but the problems being discussed just now are linear regression, which seems to be a different class of problem.)"} {"_id": "229039", "title": "Spring JDBC Template without DAO?", "text": "I am rather new to writing applications that interact with databases, and I'm curious about a project I'm working on.
I have to write a very simple web app which is going to be displaying metric data based off a handful of various queries (probably not over 15) to various database tables. Based on my own research, Spring JDBC Template seemed like a good technology to go with, given the rather simplistic nature of my project. Every example I see for using it, though, seems to involve the use of the DAO pattern. I was under the assumption that usage of the DAO pattern wouldn't be necessary for what I'm doing, but it seems extremely pervasive in the examples, so perhaps I'm mistaken. What criteria would be used to evaluate whether I should be implementing a DAO pattern with Spring JDBC Template for my project?"} {"_id": "40000", "title": "Going from webforms, VS 2008, 3.5 framework to the \"next level\" based on my goals", "text": "I've got a few choices to make as I develop some business websites that will run for the next two to three years. Currently I run ASP.NET 3.5 with Visual Studio 2008. I do my development rather crudely in WebForms because that's what I learned and am most productive with. I don't use Membership or any other frameworks in my projects. I use a simple class that maintains a few session keys for each user based on basic database tables for users and roles. (I have about 3,000 users.) So far I've kept the data simple, using ADO.NET against SQL Server and a data access class (circa 2000, I know) to build my sites. My questions are as follows: 1. Under what conditions would I be better off moving to MVC? 2. Under what conditions would I find LINQ and ORM a better way to go than standard ADO.NET? 3. Would I benefit, in my current state of development, from going from Studio 2008 to Studio 2010?"} {"_id": "83209", "title": "Floyd's algorithm", "text": "Is it possible (expected) for an individual to figure out the algorithm (having never seen it before) if asked at an interview? What other problems have equally interesting solutions? Edit: Due to the confusion of the actual algorithm I am referring to, it is Floyd's cycle-finding algorithm, aka the tortoise and hare algorithm."} {"_id": "83207", "title": "How to experience gradual improvement of knowledge while a newbie does .NET maintenance programming?", "text": "I started my career as a software developer about 6 months ago. This is my first job, and I am the only developer in this company. I gained .NET knowledge by self-study and also by doing some university projects. Our systems have old foundations based on an earlier version of .NET, and I'm starting to feel that I am not improving since I am a maintenance programmer here. Everything is old, and my manager is not really taking any chances on gradually improving the software. What is your opinion? What should I do? I am a newbie and also work hard to find my way through. There is no other developer, not even a senior one, to help me here. I need your advice on my situation. And one last thing: can I get a new job having done only maintenance programming? I mean, don't managers say that you do not have the experience of developing new software from scratch? I feel redundant; what do I do?"} {"_id": "83202", "title": "Are there certain payoffs between working for a company in the IT industry, and working for the IT department of a company in some other industry?", "text": "I am a software engineering student and am in the process of making some career choices. I need to understand what the major differences are in the above two scenarios.
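A note on the Spring JDBC Template question above (229039): "without a DAO" can be as simple as a bean that holds a JdbcTemplate and exposes one method per fixed report query. A minimal sketch; the query and class name are illustrative assumptions, not the poster's code:

import org.springframework.jdbc.core.JdbcTemplate;
import java.util.List;
import java.util.Map;

public class MetricsReport {
    private final JdbcTemplate jdbc;

    public MetricsReport(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // One method per fixed, read-only report query; no repository/DAO
    // abstraction, which is arguably fine for ~15 display-only queries.
    public List<Map<String, Object>> ordersPerDay() {
        return jdbc.queryForList(
            "SELECT order_date, COUNT(*) AS total FROM orders GROUP BY order_date");
    }
}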
Anyone with experience of both, insight from you would be really helpful. Thanks in advance."} {"_id": "147001", "title": "Choosing a rate to charge a client for training their programmer", "text": "We've been put in a bit of a difficult situation, and I'm trying to figure out the best way to deal with it and how to charge for it. We've built a custom web app for a customer, and I've been maintaining it for the last 1.5 years. They sent an email a couple weeks ago saying they've hired a developer and would like to take the project in house. (Nothing had been mentioned regarding taking it in house before.) I was of course surprised, as I had expected to be working on the project for another year or more. I met with the developer a week ago and went over the basics of the site. After my offer (including suggestion of a different rate) and follow-up email, they asked me if I'd be willing to do some training on how to continue development of the app using the libraries/techniques currently used. (The developer is experienced with the language, but not with the libraries.) I'm wondering how much I should charge for 1-on-1 training tailored specifically for the site and the libraries (libraries which are reusable)? I'm thinking the training would be between 3 and 6 hours, depending on the developer's skill level. Here are my difficulties in coming up with the rate: * We're going to be losing 10s of thousands of dollars by not having the client and project any more (not to mention future opportunities). So in part, I'd like to replace some of this income through training. * I'm essentially training someone to do what we do and giving them the opportunity to compete against us. They are a reasonably large company that has the possibility of competing against our company directly, although it's not likely our 2 companies will be in direct competition. * We've spent years learning the libraries (50% external and 50% ours) and the technology behind them (which we haven't been paid for directly). * I'm not sure what the going rate for training is in the area, specifically for 1-on-1 training. * I want to do the right thing, because there is a slight possibility of affecting other business relationships. My first thought was to charge something around 5 times the rate we're charging them currently, but I have a feeling they'll think this is high. (We're not terribly concerned with getting the contract to do the training, but it would be a unique opportunity where we could learn as well.) Thanks for your suggestions and ideas. I know this is a bit of a subjective question, but I'm just looking for suggestions or something I'm not thinking of."} {"_id": "85910", "title": "Is it wrong not to create Javadoc for my code?", "text": "I do a lot of Java programming at my work (I'm an intern), and I was wondering if it is generally a rule to create Javadoc to accompany my code. I usually document every method and class anyway, but I find it hard to adhere to Javadoc's syntax (writing down the variables and the output so that the parser can generate HTML). I've looked at a lot of C programming and even C++, and I like the way they are commented. Is it wrong not to supply Javadoc with my code?"} {"_id": "147009", "title": "Why hasn't Caja been popular?", "text": "Google released Caja (capability JavaScript) around 2008. It is still mainly a laboratory language.
But XSS and other attacks would be prevented if there was widespread integration of Caja."} {"_id": "89064", "title": "How and when to use UNIT testing properly", "text": "I am an iOS developer. I have read about unit testing and how it is used to test specific pieces of your code. A very quick example has to do with processing JSON data into a database. The unit test reads a file from the project bundle and executes the method that is in charge of processing the JSON data. But I don't get how this is different from actually running the app and testing with the server. So my question might be a bit general, but I honestly don't understand the proper use of unit testing, or even how it is useful; I hope the experienced programmers that surf around StackOverflow can help me. Any help is very much appreciated!"} {"_id": "138817", "title": "How many user stories per person should be completed per sprint?", "text": "Just ran across this figure, and wondering if there's another well-known source that would help confirm these numbers: > Based on data I analyzed on successfully finished sprints, I determined that > a team should average around 1 to 1-1/2 user stories (product backlog items > of any sort, really) per person per sprint. SOURCE: Mike Cohn's Blog on \"Should the Daily Standup Be Person-by-Person or Story-by-Story?\""} {"_id": "205818", "title": "Responsive Design vs. Separate applications/views", "text": "I'm involved in a project for a redesign of an existing site. The Designer Team delivered us - the Engineering Team - four separate HTML/CSS prototypes of how the site should look: * On a Desktop Browser. * On a Tablet Device, a similar but simplified design of the Desktop View. * On a High-Tech Smartphone; the design is mobile-app like and very different from the Desktop and Tablet designs. Also, less content is shown there. * On a Low-End Smartphone, a simplified version of the preceding design but still very similar. My first impulse was to build one HTML/CSS prototype using Responsive Web Design, and merge the prototypes made by the Design Team, but now I'm having some concerns: * Merging disparate prototypes might produce low-quality code. * We might be sending code/files to some mobile devices that they don't need. * We need to handle the situation where - for example - some option must be present on Tablets but must be hidden on Smartphones. Using CSS to hide can be a bandwidth waste, so maybe there's a need for some server-side code. Because of that, the idea of building two different Web Applications - or even better, handling two Views in the same web application using some server-side framework like Spring Mobile - was being considered. I am not an expert on Responsive Design, so I want to know what's the best approach in this situation, or the standard way to tackle this problem. **PS:** If it's worth saying, I'm building a Portal using WebSphere Portal as the Portlet Container and building the Portlets using Spring Portlet MVC."} {"_id": "205819", "title": "Create an automated program to check a site every day", "text": "Is it possible to create an automated program that will visit a web page every day? I'd like to make it run in the background (preferably non-visibly) when my computer loads and I have an internet connection, and navigate to a specific web page. I would think it would be pretty simple, but I'm not sure what language I'd use or what means to do it. Is it possible, and by what means could I do it?
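A note on the daily-visit question just above (205819): a minimal sketch of the two steps spelled out in the edit below (check whether logged in, log in if not) using Java's built-in HTTP client. The URLs, form fields and "Log out" marker are assumptions - every site differs - and scheduling the once-a-day run is best left to the OS (e.g. Windows Task Scheduler) rather than a resident background process:

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DailyVisit {
    public static void main(String[] args) throws Exception {
        // The cookie manager keeps the session cookie between requests.
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())
                .build();

        // 1. Visit the page and check for a logged-in marker in the HTML.
        HttpResponse<String> page = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/account")).build(),
                HttpResponse.BodyHandlers.ofString());

        // 2. Log in if the marker is missing.
        if (!page.body().contains("Log out")) {
            HttpRequest login = HttpRequest.newBuilder(URI.create("https://example.com/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString("user=me&pass=secret"))
                    .build();
            client.send(login, HttpResponse.BodyHandlers.ofString());
        }
    }
}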
**Edit based off of comments below** The only functionality the program would need to have is to 1. check to see if it's logged in and 2. log in if it's not. I'd only be using Windows. I'm interested in using existing programs or creating a program myself (for the experience). I'm a noob when it comes to using OSes and machine-based programming (I'm a web guy). Essentially I just thought it'd be a cool thing to create and good practice to learn more OS/offline programming. Added thought: I'd also like to make sure it runs once per day - when it turns on (if it was off), or at a specific time if it's been on - but somehow check to see if it's been run earlier in the day."} {"_id": "206048", "title": "Is a Factory class still a Factory class if the objects it returns already exist?", "text": "I can't decide what to name my class. So far I've labelled it as a Factory, but I am not sure. Here is the class. As you can see, it exists to return a concrete type of an Interface (`ResolveRegistrationIssue`) based on the argument supplied to the getter method (`ResolutionFamilyEnum`): @SuppressWarnings(\"serial\") @ManagedBean(name = \"resolveIssuesFactory\") @ViewScoped public class ResolveIssuesFactory implements Serializable { @ManagedProperty(value = \"#{resolvePrimaryIssuesBean}\") private ResolvePrimaryIssuesBean resolvePrimaryIssuesBean; @ManagedProperty(value = \"#{resolveSecondaryIssuesBean}\") private ResolveSecondaryIssuesBean resolveSecondaryIssuesBean; @ManagedProperty(value = \"#{resolveTertaryIssuesBean}\") private ResolveTertaryIssuesBean resolveTertaryIssuesBean; public ResolveRegistrationIssue getResolutionBean( ResolutionFamilyEnum familyEnum) { //null safety if (null == familyEnum) { throw new IllegalArgumentException(\"The family which determines the Resolution Bean to be used may not be null.\"); } if (familyEnum.equals(ResolutionFamilyEnum.PRIMARY)) { return resolvePrimaryIssuesBean; } else if (familyEnum.equals(ResolutionFamilyEnum.SECONDARY)) { return resolveSecondaryIssuesBean; } else if (familyEnum.equals(ResolutionFamilyEnum.TERTERY)) { return resolveTertaryIssuesBean; } else { throw new IllegalArgumentException(\"Could not determine the correct Resolution Issue bean for the provided family.\"); } } } Now, I understand that a Factory class accepts arguments which define the entity to be **developed** and then **delivered** (returned). However, in my case, my entities are already developed, as they are EJB references. Thus this method only delivers them. So is this still a Factory?"} {"_id": "238174", "title": "How can I \"inspect\" C++ code?", "text": "For reference, I am a JavaScript developer learning C++. The browser is a pretty powerful debugger, and I can easily place a breakpoint in my code, hover over a variable or expression and get the value of that expression. Is this even possible in C++, or am I in a different world entirely? I'm starting to write a bit of C++ code for an online course, and debugging with Code::Blocks gives me very opaque information. For example, I see stuff like this: `0x8049bc3 push ebp` in the 'watches' window. Even if I write something like `int foo = 3;`, I have found no way of telling that `foo` is 3 while I'm stepping through my code. Am I missing something?"} {"_id": "238173", "title": "License issue of Spring Data Neo4j?", "text": "If I use Spring Data Neo4j to develop software, and I want to publish it for commercial use and charge for it, does there exist any license issue?
I surveyed many posts about the license issue of Neo4j and Spring Data Neo4j. It seems Neo4j has two versions, Community and Enterprise. The Community version is GPL-3.0, and the Enterprise version is AGPL-3.0. Spring Data Neo4j is Apache License 2.0. The role of Spring Data Neo4j in my software: 1. It is only part of my software, used to deal with storing relationships and searching data. 2. I will combine it with MongoDB or DB2."} {"_id": "238171", "title": "C++ Web Development for REST API", "text": "I've been a C# developer for a long time, focused on ASP MVC the most. Two years ago, basically due to the lower costs and ease of deployment/management, I began to migrate my projects to Linux, using Node.js for serving HTML and Mono for all the backend operations. They talk to each other using an HTTP REST API. As my skills in Linux have improved more and more, I find this Node.js/Mono combination kind of weird. Mono is a great thing, but there are issues now and then, more related to the implementation itself than the language. Now I have like three weeks available in between projects, and I would like to dive into C++ as my backend. I have tried some Python and I really liked it, but I require the fastest responses; that's why C++ came to me. Java is not an option, because although the language itself is pretty close to C#, it has a bunch of stuff I really wouldn't like to bother with (Maven to name one), and at the end of the day that would decrease my productivity, at least while I become more proficient in Java (just configuring Eclipse for a Jersey tutorial took me like 2 hours!). I really don't need anything fancy, since all that Node.js does for me is serve html/css/javascript and some real-time notifications (socket.io), and that's it. My backends are more about reading from tables, complex SQL transformation and validation of the data the client sends. The template engine I use is Jade, which is cool since I type less, but I wouldn't mind switching to plain HTML. 99% of the job in my projects is done using Angular.js via JSON calls. Most of them are single-page applications with lots of logic on the client, consuming REST APIs. I really would like to cut all Mono/.NET dependencies and develop on something more native to Linux. At the same time, I would like to say goodbye to Node.js as well, in trade for one backend to serve my API and static HTML pages. If you have been here before, please let me know what the current choices are and anything that would help me on this experiment. Thanks a lot in advance for taking the time to read my concerns. EDIT: I forgot to mention I read about the Casablanca project. It looks promising, but again, it belongs to Microsoft under the cover of open source, just what I have been avoiding lately."} {"_id": "222316", "title": "Is there any particular reason for the use of lists over queues in functional programming languages?", "text": "Most functional programming languages such as Scheme and Haskell use lists as their main data structure. Queues are identical to lists, except for the fact that appending to the end - not to the beginning - takes constant time. Every algorithm that is written elegantly using lists with `head` and `tail` can be written elegantly using queues with `init` and `last`. Considering appending to the end is more common than the opposite, I'd guess queues are more natural than lists.
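One detail worth making concrete for the lists-vs-queues question above (222316): an immutable cons list shares its tail between versions, which is what makes head/tail decomposition and O(1) prepending so cheap in functional languages, while constant-time append cannot share structure as simply. A minimal sketch in Java (the type and names are illustrative):

// An immutable singly linked ("cons") list: prepend is O(1) because the
// old list becomes the tail of the new one, fully shared, never copied.
public final class ConsList<T> {
    final T head;
    final ConsList<T> tail; // null represents the empty list

    ConsList(T head, ConsList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    // O(1): allocate one new cell; everything else is shared.
    public ConsList<T> prepend(T value) {
        return new ConsList<>(value, this);
    }

    // O(n): every cell on the way down must be rebuilt, because the
    // existing cells are immutable and their tails cannot be changed.
    public ConsList<T> append(T value) {
        ConsList<T> newTail = (tail == null)
                ? new ConsList<>(value, null)
                : tail.append(value);
        return new ConsList<>(head, newTail);
    }
}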
Is there any reason lists have always been preferred?"} {"_id": "136413", "title": "Use of a profiler tool to aid in the analysis of a brute force algorithm in Java", "text": "I was asked to profile (using some tool, such as YourKit or JVisualVM) a couple of implementations of the Traveling Salesman Problem (find the minimum path that visits all the given set of cities), in the context of an Algorithms Analysis and Design graduate course. Taking into consideration the brute force approach of the algorithm, I'm wondering, though, how to best use the tools (for instance, JVisualVM) to analyze the spatial and temporal complexities of the algorithm. If I'm not in error, the algorithm has a spatial complexity of O(n), as it basically grows linearly with the number of cities, while it is O(n!) in temporal complexity. The math isn't the issue here. After playing for a bit with both the referred tools, I'm having a hard time understanding how they can be of any use to my problem. Sure, I can see which methods are being used the most and which % of the time is being used in each one of them, and what the actual memory footprint is, so all that can be regarded as hints about where to optimize. But other than that, and other than seeing the % of the CPU which is actually being used (or IO accesses, which never occur in this program), I can't see how I can actually learn something about the algorithm with these tools that the math referred to before didn't already tell me. What should be my main concerns when using profilers for the analysis of algorithms? Also, I've embedded basic time recording capabilities in my program. Maybe I could use the profiling tools to better do that job? Thanks"} {"_id": "136411", "title": "Is there a resource for beginning programming student misconceptions?", "text": "Many programmers tasked with teaching a beginning programming class have forgotten a lot of the things that they didn't know or wouldn't know back before they learned how to program themselves. And there are lots of ways to not understand a topic. But some seem to pop up very often among students, no matter what common beginning programming language or textbook is used. An answer to this question lists several misconceptions or lack of conceptions that seem to occur more when teaching programming to people who have zero programming background: What is the best way to teach beginners? (Such as confusion about the difference between the environment when a program is written and when it is run, that (basic block) program code executes sequentially, the difference between a variable and its contents, etc.) Another example is the questions from new programmers who simply assume that ordinary variables (in C, Python, etc.) can exactly represent common fractions (1/100ths). Is there an online list or book that covers these concepts, that have to be taught above and beyond the syntax and semantics of the chosen programming language and IDE?"} {"_id": "229692", "title": "Designing storage service data structure for decoupled models sharing same data", "text": "Surely most of you remember the Norton Commander application, where similar (sometimes the same) data is displayed in separate decoupled views. I'm building a web application that follows the same principle. The amount of data is too big to be fetched/displayed entirely on the page, and potentially overlapping subsets of data need to be displayed in separate views.
My architecture looks like this: > > \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 EventBus \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 > \u2502 \u2502 \u2502 > \u2502 StorageService \u2502 > \u2502 \u2571 \u2572 \u2502 > PresenterLeft\u2524 \u251cPresenterRight > \u2571 \u2572 \u2572 \u2571 \u2571 \u2572 > ViewLeft ModelLeft ModelRight ViewRight > The application creates both presenters and passes them the instance of EventBus and StorageService. The presenters, in their turn, create their respective models and views. The StorageService is passed down to models. The communication between presenters is done via the EventBus. In this scenario the left side has no awareness of the right side, yet models should possess the same information without a need to do a double round trip to the server. To achieve this I use the StorageService. Models implement the `model.fetch()` method which asks the storage for data. If the same data was previously requested, the storage will simply return it acting like an in-memory cache. Here comes the actual question. **In the above context, how to design a data structure for a _network-wise efficient_ storage service for the following use case:** 1. Left model requests `GET /files?limit=100` 2. Right model requests `GET /files?limit=110` (but actually I only need the last 10) My attempted approach was: * store the requests made by models * for every additional request, do my best to figure out the request diff * do the diff request (e.g. `GET /files?offset=100&limit=10`) * store the fetched items under `{\"files\": {\"id1\": item1, ..., \"id110\": item110}}` Unfortunately the above steps don't help, because there's no relation between the url and the ids. When a model requests the next set of values, I never know if they are actually in the in-memory cache. Of course, I could use the per-url in-memory caching strategy, but then I would have duplicate entries in the cache, which is something I'd like to avoid. What are your thoughts? I feel like I'm reinventing the wheel here. Is there a better/simpler/commonly-accepted method? Perhaps my architecture is flawed..."} {"_id": "142743", "title": "Bug once in a while, but high priority", "text": "I am working on a CNC (computer numerical control) project which cuts shapes into metal with help of laser. Now my problem is once in a while (1-2 times in 20 odd days) the cutting goes wrong or not according to what is set. But this causes loss so the client is not very happy about it. I tried to find out the the cause of it by 1. Including log files 2. Debugging 3. Repeating the same environment. But it wont repeat. A pause and continue operation will again make it to run smoothly without the bug reappearing. How do I tackle this issue? Should I state it as a Hardware Problem?"} {"_id": "229694", "title": "Multiple var statements in JavaScript", "text": "Writing a single var declaration per function is considered to be good for readability and maintainability. But, when I went through some of standard libraries, their dev version doesn't strictly follow this. For example, Underscore code has multiple var statements in many function which are spread randomly and declared as required. Is there any reason to not to follow the single-var-rule?"} {"_id": "87459", "title": "Question about the no-endorsment clause on the BSD license", "text": "I'm developing a non-free library and I want to use Bcrypt.Net in it. 
The clause in question: * Neither the name of BCrypt.Net nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. To what extent does this mean I can't use the name of Bcrypt.Net? For instance, could I say \"the only ASP.Net authentication library capable of using Bcrypt\" or can I even include \"supports Bcrypt for password hashing\" in promotional materials? Note: I do not actually modify any of Bcrypt.Net's code"} {"_id": "229697", "title": "Where to put comments, class declaration or constructor?", "text": "For a given class, where should the comments about the purpose of the class go? /// <summary> /// <----------------- describe the class here (1)? /// </summary> public class Message { ... // class members here ... /// <summary> /// <--------------- or here (2)? /// </summary> public Message() { } } If I go with the first option, the purpose of the class is very easy to find when I open the file; it's in the same place, on approximately the same line. When you have lots of classes, this is easier on the eyes. You don't have to scroll down past the members to find the constructor. If I go with the second option, I get Intellisense when creating an object of the class, stating its purpose and explaining the parameters, which is very nice. From what I can tell, I can't get both advantages by using just one method. The only solution I can think of is putting the same comment in both places, but that's just duplicate information and I don't like it."} {"_id": "173245", "title": "Can Scala be considered a functional superset of Java?", "text": "Apart from the differences in syntax, can Scala be considered a superset of Java that adds the functional paradigm to the object-oriented paradigm? Or are there any major features in Java for which there is no direct Scala equivalent? By major features I mean program constructs that would force me to heavily rewrite / restructure my code, e.g., if I had to port a Java program to Scala. Or can I expect that, given a Java program, I can port it to Scala almost line-by-line?"} {"_id": "231434", "title": "How should one prepare to allow others to add languages?", "text": "Seeing as many, many open source products are offering translations in multiple languages, I decided I may want to design my application around this requirement. I currently have a niche application that has positive reviews - in English. If I wanted to expand the audience to allow users who speak other languages in, I would need to find translators who also program. That's not what this question is about. What technique is standard for allowing Joe Developer, who speaks English and French, to add in the French language for all the text, and how should I allow him to submit the changes? Is a separate text/language file the way to go?"} {"_id": "95202", "title": "If someone was making software, able to display content in any written language, what would they have to consider?", "text": "I know you'd have to use Unicode. And that Japanese is read vertically and in reverse of English. What are some of the things one would have to take into consideration?"} {"_id": "109761", "title": "Scrum meeting - dealing with the last question", "text": "In the 5/15 minute scrum meeting the 3 questions are asked. For the last question, \"what impediments are getting in your way?\", if a dev has problems (say, \"the xyz component is going to have problems\"), this is likely going to draw the meeting out past 15 mins and could go into an hour-long discussion.
Is it the scrum master's job to help this user? Is there something to stop this from going on more than 15 mins? Thoughts?"} {"_id": "109760", "title": "Difference between complexity and performance guarantee", "text": "I'm a bit confused about the performance guarantee and complexity of selection sort. I checked on the internet, and the complexity of selection sort is O(n^2). This O(n^2) is in terms of time complexity, am I right? So how about the performance guarantee? Is the performance guarantee in my case measured in terms of swapping, or in time complexity as well? If the performance guarantee is in terms of swapping, then the best case of swapping is zero swaps (the array is already sorted) and the worst case of swapping is n-1 steps? The performance guarantee is then equal to (n-1)/0 = undefined, am I correct? Please correct me if I'm wrong... or is the performance guarantee in terms of running time? Then the performance guarantee will be (n-1)/(n-1) = 1? Can someone please clear my doubts?"} {"_id": "206547", "title": "Is Business Intelligence (BI) the Right Job Title?", "text": "There is a department at my company called Business Intelligence (BI). They mostly build reports and configure SSRS, SSIS and related Microsoft technologies. Most of them barely understand SQL (even though they work with databases all day long) and fewer know anything about writing software. At other companies, BI had very little to do with IT. I am curious if my current company is simply mislabeling/misunderstanding what these folks do. Or is this what BI is really all about? If they aren't true BI, does anyone know what the proper name for their position is? It will be helpful when trying to hire new people so we don't end up wasting their time."} {"_id": "228349", "title": "Should entities be accessible from all layers of an application?", "text": "I have been googling this issue for weeks now, but cannot find a good discussion. It boils down to this: As POCO entities used in a dbContext are in fact a definition of the database, shouldn't they be constrained to the Data Access Layer and not be visible outside it? On the other hand: it is convenient to hide some POCO entities in the ViewModels, so that changed properties can be passed back easily to the Data Access Layer. I find many examples of Entities that make their way into the user interface, but also many examples where AutoMapper is used to map the properties of a ViewModel to an Entity. Can someone point me to some good working examples or discussions that could help me make a well informed choice?"} {"_id": "228348", "title": "When is it worthwhile to replace working mature code with frameworks+patterns", "text": "I fear that frameworks in many cases have become a fashion or trend and are being abused. In many ways people are sacrificing speed just because they want to keep up with every single lib that comes out. I am for a more conservative approach that only makes use of new libraries and frameworks when it absolutely makes sense. Related: When NOT to use a framework **Case** We have a Tomcat6 servlet in production that has been under ongoing development since 2008, and there are many hours of work behind it. The code is very mature but it does not use any frameworks. A new team member loves using frameworks and patterns for everything. Besides the love of frameworks, they also love cutting-edge new releases and even proposed migrating from Tomcat6 to something newer (JBOSS, GLASSFISH, or TOMCAT8).
I would never move a core production service to a container/server/web-server that just came out. I am not against refactoring code to keep it clean, though! I am against adding frameworks and patterns that add delay to production code servicing thousands of requests per second. **Serialization to stream (JSON/XML) while maintaining backwards compatibility** When a request comes in, we produce both XML and JSON documents containing data related to the request params. Since we maintain backwards compatibility for older clients, changes must not break older support. Since we also support both JSON and XML, we make use of the Jackson as well as KXML libraries. For both formats we have 2 different serialization methods that use the **streaming API**. Besides being faster, I also find it MUCH more flexible for maintaining backwards compatibility. I like to use a system of major and minor versions. All minor changes to the backend that cause changes to the serialized document I add with IF statements in the serialization method. Once quite a few changes have accumulated (over 1 year), we add a new serialization method. So, all requests specifying version=1.00 to 1.99 are sent to serializeJSON1. All requests using version 2 to 2.99 are sent to serializeJSON2. I've heard the argument that we should use ANNOTATIONS and MAP to objects. I find this very distasteful, as I have read that streaming is the fastest. The streaming API allows you to make the necessary changes on the fly, while MAPPING would require storing multiple versions of the same data as different objects, or jumping through other hoops. output.startTag(null, TagTable.PRODUCT); output.startTag(null, TagTable.ID); output.text(ID); output.endTag(null, TagTable.ID); output.startTag(null, TagTable.DESCRIPTION); if (locale != null && Translations.containsKey(locale) && version >= 1) output.text(this.DescriptionURL + \"/\" + locale); else output.text(this.DescriptionURL); output.endTag(null, TagTable.DESCRIPTION); if (version >= 1 && version < 1.5) { output.startTag(null, TagTable.POPULARITY); output.text(String.valueOf((int) this.pop)); output.endTag(null, TagTable.POPULARITY); } else if (version >= 1.5) { output.startTag(null, TagTable.RATING); output.text(String.valueOf((int) this.rating)); output.endTag(null, TagTable.RATING); } In the specific 3 cases above I really don't see the point of replacing working mature code with frameworks/libs/patterns that may add additional overhead and cost additional development time. So, when is it worthwhile to replace working mature code with frameworks+patterns? Comments? **Update: Some benchmarks** For the first two examples posted previously, using REST and HASHMAPs does not make that big a difference in performance, but it does make the code more readable. Now for the more interesting issue of serialization when used in a hotspot. Since the actual servlet in question had too much other code, I made a toy example to test streaming vs mapping. The code is available at: http://stackoverflow.com/questions/21781540/ By all means test it, change it, run it. What is very interesting is that the just-in-time optimizer in Java ends up making the difference negligible over time. So, here is how the percentage between mapping and streaming changes the more iterations there are.
Iter Stream Mapping
0 36,71% 63,29%
20 44,75% 55,25%
40 45,65% 54,35%
60 45,95% 54,05%
80 46,24% 53,76%
100 47,09% 52,91%
120 47,09% 52,91%
140 47,37% 52,63%
160 47,64% 52,36%
180 47,92% 52,08%
200 47,64% 52,36%
220 47,92% 52,08%
240 47,92% 52,08%
260 47,92% 52,08%
280 48,19% 51,81%
300 48,45% 51,55%
320 48,45% 51,55%
340 48,45% 51,55%
360 48,45% 51,55%
380 48,45% 51,55%
400 48,45% 51,55%
1000 48,72% 51,28%
2000 49,24% 50,76%
3000 49,24% 50,76%
4000 49,49% 50,51%
5000 49,75% 50,25%
6000 49,75% 50,25%
7000 49,75% 50,25%
8000 49,75% 50,25%
9000 49,75% 50,25%
I also re-ran one more time, with the first 20 iterations in more detail:
0 39,76% 60,24%
1 41,18% 58,82%
2 40,83% 59,17%
3 40,12% 59,88%
4 40,48% 59,52%
5 40,83% 59,17%
6 40,83% 59,17%
7 40,83% 59,17%
8 40,48% 59,52%
9 40,12% 59,88%
10 41,52% 58,48%
11 42,53% 57,47%
12 42,86% 57,14%
13 42,86% 57,14%
14 43,18% 56,82%
15 43,50% 56,50%
16 43,50% 56,50%
17 43,82% 56,18%
18 43,82% 56,18%
19 43,82% 56,18%
20 44,75% 55,25%
After 10,000 iterations (or calling the JSON generation code 1000 times) here is the screenshot from VisualVM profiling (VisualVM snapshot NPS file): ![enter image description here](http://i.stack.imgur.com/gjl61.png) So, if this optimization behavior holds up in Tomcat servlets as well, it means there really is **no point NOT to use mapping** (and many other convenience functions/libs, since the JVM will optimize over time). **Conclusions** Based on the answers/comments, I think at the end of the day whether to rewrite working production code is affected by business requirements and a specific cost-benefit analysis for each project. Based on the benchmarking, it turns out avoiding some libs/frameworks may make no difference in the long run due to JIT JVM optimizations. **Update: After leaving code running all night** ![enter image description here](http://i.stack.imgur.com/7ESGK.png) After 275,137 iterations and approximately 30,000 ms: Mapping - 19,80% Streaming - 80,20% I really wonder what JIT optimizations happen here and whether the behavior holds for a servlet."} {"_id": "228345", "title": "enum for Java reference types", "text": "I need a simple `enum` declaring the Java reference types, as: public enum ReferenceType { STRONG, SOFT, WEAK, PHANTOM; } Does such an `enum` exist somewhere in the Java API or a general utility library such as Guava? I have not been able to find it in either place, although I found third party projects that declare it (e.g. google-guice: RefrenceType). I am just trying to avoid polluting my project with silly classes/enums that may exist somewhere else. **UPDATE** I found that Guava in fact used to have this, but they dropped it: http://tinyurl.com/ndaqpet"} {"_id": "234121", "title": "DB Design: How to link a single column to a collection of entities", "text": "I have a db design question that I am sure has been quite extensively covered, but I cannot find a good existing answer. The problem is how to efficiently link a single school course to a collection of textbooks; a course may have 0 books and it may have 10. The problem here is that multiple courses can use the same textbook as well. My first thought was to include the textbook table's PK as a FK in the courses table. Then the courses table has a composite PK of course id and textbook id. The problem with this approach is that the courses table could potentially have a lot of columns, and then I am repeating the entire course row for every textbook addition, in addition to having to set a fake textbook_id for courses that do not have a textbook.
**courses** * course_id * course_name * FK textbook_id * PK(course_id, textbook_id) **textbook** * textbook_id * textbook_name My next thought was to have a table with 10 columns for textbooks, which I use for linking a course to all of its textbooks. But I'm not sure if there is a better approach (one that will end up using less space); when I think about this, I wonder why not just have textbook1, textbook2 columns in the courses table in the first place. **courses** * PK course_id * course_name * FK textbooks_ids **course_textbook** * FK course_id * FK textbook_1 * FK textbook_2 * FK textbook_3 * FK textbook_4 **textbook** * isbn * textbook_name A bit of background as well: using MySql, Hibernate ORM, and writing a separate search indexer. Anyone have any suggestions on a better approach, or thoughts on which of these two approaches is better? Thanks in advance for the help"} {"_id": "19449", "title": "How to end my dependency on .NET?", "text": "I have been developing Windows GUI applications for many years and jumped into .NET in early 2005. .NET is undoubtedly a remarkable platform and I am still using it, but with a variety of technologies out there, I don't want to remain dedicated to this camp. I want to learn new languages with which I can develop GUI applications. I am learning Ruby and just installed Python. I read about WxRuby, a framework for developing Windows GUI apps in Ruby. I am searching for a similar framework for Python. Apart from that, I want to know which language is more suitable for a production-level GUI app. I suspect that Ruby is more focused on the web platform with its glamour, Ruby on Rails. I know that I may not get those rich .NET classes and that impressive Visual Studio IDE, but still I want to follow the road less traveled. I don't want to go with IronPython and IronRuby; however, sometime later I may dip my hands in to explore them."} {"_id": "170579", "title": "Why do Git users say that Subversion does not have all the source code locally?", "text": "I'm only going on what I've read on SO, so forgive me, but all I read says that one major advantage of Git over Subversion is that Git gives all the source code to the developer locally, without having to do anything on the server. With my limited use of SVN and TortoiseSVN, I had all the source code, or at least I thought I did. For example, I have a website. I upload it to SVN. I am still running my website locally, aren't I? If someone submits a change and I'm not connected, it wouldn't matter if I had Git or not, until I reconnect to the server. I do not understand. I'm not asking for a rehash of one vs. the other except on this one point."} {"_id": "234124", "title": "\"Sensitive Data\" clause in Bitbucket's Customer Agreement", "text": "I'm using Bitbucket for a few projects, but as of April 28, 2014 they will replace their End User Agreement with a new Customer Agreement. The new agreement mentions in 7.7.2: \" _You will not submit to the Hosted Services ... any personally identifiable information_ \" One of my projects clearly contains such information, and I'm now trying to figure out what to do. These are the three possible options I could come up with: 1. I may keep all existing projects and continue to work with them, as long as no new patch involves sensitive data. This is under the assumption that what I have done so far has been in accordance with the current agreement, and that the new agreement only concerns information submitted after April 28. 2.
I have to remove that particular project, but may keep the others. This seems likely if 7.4 in the new agreement means that any data that have been submitted to the \"Hosted Services\" will be treated equally, regardless of under which agreement it was submitted: _\" **Your Data** \" means any data, content, code, video, images or other materials of any type that you upload, submit or otherwise transmit to or through Hosted Services._ 3. I have to terminate the agreement, and thus remove my account and all of my projects. This is a possible interpretation if the continuation of 7.4, about the company's right to _\"collect, use, copy, store, transmit, modify and create derivative works of Your Data\"_ , means that they can still do whatever they want, even with data that were submitted under the terms of an older agreement and have since then been removed."} {"_id": "234125", "title": "Who has right over the code that comes from contributions in an open source project?", "text": "If somebody starts an open source project (for example with a GPL license) where people will make contributions, then who will own these contributions at the level of the whole project? Will the new code become the property of the original author, or will the contributors be authors too? Who has the right over the ongoing project? For example, who has the power to release the code under a second license? The original author only? Can the contributors separately do that as well, or do they have to make a joint decision with the original author and all of the contributors?"} {"_id": "127941", "title": "Copyright Laws for Apps", "text": "For my app that I'm about to release, I have used a bunch of 3rd party classes found for free online. I checked all the licenses before using the code and each one said I could use the code commercially. So now what I'm concerned about is: do I have to give credit somewhere in my app to the creator of that class? Also, I plan on having my app be free, but if in the future I charge for it, does that change things? The most common license I see is the MIT License: > Permission is hereby granted, free of charge, to any person obtaining a copy > of this software and associated documentation files (the \"Software\"), to > deal in the Software without restriction, including without limitation the > rights to use, copy, modify, merge, publish, distribute, sublicense, and/or > sell copies of the Software, and to permit persons to whom the Software is > furnished to do so, subject to the following conditions: The above copyright > notice and this permission notice shall be included in all copies or > substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", > WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED > TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND > NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE > LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF > CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE > SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The part I'm unsure about is: > The above copyright notice and this permission notice shall be included in > all copies or substantial portions of the Software. Is it enough for this license to be at the top of the class file embedded in the binary, or do I have to make it literally accessible to the user?
Of course I'm not asking for expert legal advice, but any help would be appreciated."} {"_id": "103698", "title": "ASP vs javascript vs jQuery vs User Controls vs AJAX", "text": "Increasingly when developing sites I find myself utilising more client-side technologies such as Javascript, jQuery and AJAX. For three different pages I measured the size of the HTML sent to the browser (excluding images and CSS). What I found was: * A page that has normal ASP.NET controls + a little bit of jQuery: approx. size = 30kb * A page using some user controls that have AJAX and client-side support: approx. size = 50kb * A page that uses more complex user controls from a library with heavy client-side support: approx. size = 110kb As you can see, the size of data transferred seems to increase as client-side support and AJAX features are added. This adds to the bandwidth requirements of the site and presumably the server's I/O load, which may reduce the number of concurrent users that can be served. Should I worry about balancing rich client-side features and interactivity against the server's load and bandwidth use? What are the priorities, if any?"} {"_id": "170574", "title": "Should HTTP Verbs Be Used Semantically?", "text": "If I'm making a web application which integrates with a server-side backend, would it be considered best practice to use HTTP methods semantically? That is, for example, if I'm fetching data (e.g., to populate a menu, etc.), I would use GET, but to update data (e.g., save a record), I would use POST. (I realise there are other methods that may be even more appropriate, but we need to consider browser support.) I can see the benefits of this in the sense that it's effectively a RESTful API, but at a slightly increased development cost. In my previous projects, I've POST'd everything: Is it worth switching to a RESTful mindset simply for the sake of best practice?"} {"_id": "231189", "title": "Why additional code and complexity when data model and interaction are simple is considered a disadvantage in MVC model", "text": "According to the slides given by my lecturer, one of the disadvantages of the MVC model is > additional code and complexity when data model and interaction are simple Why is this so? I would assume it would make no difference, since the data model and interaction are in two different components of the MVC model."} {"_id": "229961", "title": "Frameworks/methodologies for documenting functional/use case scenarios and mapping them to code base", "text": "Is there an established framework or methodology for documenting functional or use-case scenarios and mapping them to units of code? E.g. assume you are a new developer hire assigned to work on an application. What better way to get up to speed would there be than going through a list of that application's features or functional case scenarios, i.e. everything you can do with the application? Let's take Evernote (my favorite SaaS) as an example: 1. Successfully log in (duuhh, but still technically a use case) 2. Fail to log in (what happens next) 3. Create some notebooks 4. Create a basic, text-only note 5. Move the note from one notebook to another 6. Create a note with media attached 7. Delete a note 8. Share a note publicly through a fixed URL etc. Now, that is simple. But what I would like to be able to do is have a way to map each one of those items to the pieces of code that utilize it.
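One mechanical way to get the mapping this question asks for, independent of any particular framework, is to tag code units with use-case identifiers and keep a queryable registry. A minimal Python sketch; the decorator, the IDs, and `save_note` are all hypothetical illustrations, not part of the question:

```python
from collections import defaultdict

_registry = defaultdict(set)   # use-case id -> code units that serve it

def use_case(*case_ids):
    """Tag a function or method with the use-case scenarios it implements."""
    def decorator(func):
        unit = f"{func.__module__}.{func.__qualname__}"
        for case_id in case_ids:
            _registry[case_id].add(unit)
        return func
    return decorator

@use_case("UC-4 create a basic note", "UC-7 delete a note")
def save_note(note):
    ...  # hypothetical application code

def units_for(case_id):
    """All code units known to serve a given use case."""
    return sorted(_registry[case_id])

def cases_for(unit_name):
    """Reverse lookup: all use cases that touch a given code unit."""
    return sorted(c for c, units in _registry.items() if unit_name in units)
```

The registry only sees tagged entry points, so it complements, rather than replaces, documentation that is kept up to date by hand.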
That way, when inspecting or learning or reverse engineering the code base, you could (provided that the documentation is up to date) easily trace a function, method, script, HTML document, REST service, servlet etc. to all the functional cases that utilize it and are affected by it."} {"_id": "224514", "title": "Will we go back(?) to fixed-point arithmetic in the near future?", "text": "As far as I recall, this has been the ongoing trend of the past. I'm just a student, so I might be wrong: * Long ago: Integer numbers and calculations. Very precise; however, they cannot represent a big range. * At a later point: Added single-precision floating point numbers (`floats`), and also added FPUs which are incredibly fast at calculating with them. * Quite recently(?): Double-precision floating point numbers (`doubles`) are starting to become more common. For example, in the OpenGL Shading Language they are steadily introducing more and more methods that work with doubles natively. I think they are also working on FPUs for doubles? There was also the change to 64-bit processing a few years ago, so I've got a few questions: * Are there more options now to calculate with fixed-point numbers (maybe two int64 registers?)? * Why are there, as far as I know, no specific units for fast integer calculation with respect to fixed-point numbers? Maybe most of what I'm saying does not make much sense, but I think the question remains valid. Is there a chance we will switch to fixed-point arithmetic in the near future?"} {"_id": "100528", "title": "New Silverlight app. MVVM. RIA Services vs CSLA", "text": "Another 2 days of reading and watching demos and here we go. For my enterprise LoB Silverlight app I'm going to use: 1. Prism for UI aspects and modularity. 2. MVVM pattern (using Prism) 3. ??? to bring data over and validations... 4. Entity Framework for Data access 5. SQL Server for data Ok, the main dilemma is #3. If I don't use any framework, then I will have to figure out how to do all the CRUD stuff myself. I can do RESTful WCF, I can do SOAP. All that == MANUAL coding. I can do RIA Services. I kind of see what it does and it is nice for a direct match with my data layer, BUT it is not that great if there will be a lot of business logic. Where would I put it? In my ViewModel? Another question is how those services are maintained. Once I've generated them, do I have to maintain them by hand if the data changes? I also found CSLA, which seems to be nice on one hand but receives lots of critique... CSLA will allow me to write business logic and shape objects as I need, and then I can pass them \"through\" the ViewModel and all is well. Something tells me that RIA Services will be much quicker to write. Also, I like the fact that I don't have to include extra dependencies. There are no blogs or mentions of RIA Services since 2010. Is it going under the table? Not widely accepted? Not scaling well for big apps? I'm trying to decide which one I need to bet on. CSLA or RIA Services. OR?"} {"_id": "100529", "title": "Price Comparison Database Schema", "text": "I have a bunch of data from multiple retailers. Each product that each retailer sells may or may not be sold by another retailer in the database. If multiple retailers sell the same product, then the product can be identified by a SKU. I currently have 1 database for each retailer. I'm having a problem trying to conceptualize an appropriate database schema to identify how many retailers are selling the same product.
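For this price-comparison question, the conventional normalization is one shared products table keyed by SKU plus an offers table linking each retailer to the products it sells. A sketch using Python's built-in sqlite3; the table and column names are illustrative assumptions, not the asker's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE retailers (retailer_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE products  (sku TEXT PRIMARY KEY, title TEXT NOT NULL);
-- one row per (retailer, product) pairing; the price lives here
CREATE TABLE offers (
    retailer_id INTEGER NOT NULL REFERENCES retailers(retailer_id),
    sku         TEXT    NOT NULL REFERENCES products(sku),
    price       REAL    NOT NULL,
    PRIMARY KEY (retailer_id, sku)
);
""")
# how many retailers sell each product?
rows = conn.execute("""
    SELECT sku, COUNT(*) AS retailer_count
    FROM offers GROUP BY sku
""").fetchall()
```

The composite primary key on `offers` guarantees one row per retailer/product pair, and the GROUP BY answers the "how many retailers sell this product" question directly.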
Should I have 1 table with all retailers?"} {"_id": "73995", "title": "Crucial programming-for-hire contract points?", "text": "**What are some important and easily-overlooked points that should be in a programming-for-hire contract?** We recently had a learning experience as a client hiring a programming company: in the contract we signed, there was no mention of billing practices such as * Minimum billable unit. A 5 minute phone call was billed as 1 hour. * Division of work. If two contractors at the company worked on something in the same day they had their hours billed _and rounded up_ separately. * What kinds of things were billable. We were billed for them to \"install Visio\" even though we'd asked them to use it on our VPN-accessible terminal server. We prefer learning from _other_ people's experiences rather than our own, and thus hope to get the community's ideas about other important points we might be missing. I am interested in both sides of the coin, as I have been and will again be a contract programmer myself, even though at this time I'm thinking of it more for my company's hiring needs. So points important for both the client and the contractor are welcome."} {"_id": "180419", "title": "Self taught programmer seeking advice", "text": "I need some general advice on how to continue my programming studies. I'm a self taught programmer and I picked up programming somewhere in the fall of 2011. I began with C++ and Java books and I learned everything up to OOP (except memory handling in C++). I later decided that I wanted to work with web development, so I learned the PHP syntax and started my first project in the summer that resulted in http://doostr.com (obviously work in progress). The main reason why I started the project was that I would be forced to solve different kinds of problems that I knew I would face along the way. I learned about relational databases and designed my own tables in MySQL (third normal form), learned SQL logic and functions, became more fluent in programming by extensively utilizing basic programming tools (variables, loops, arrays, functions, classes), PHP native functions, MVC, Javascript (jQuery library) and the basics of CSS/HTML. I've used twitter-bootstrap as a base for the design since I wanted to speed up the design and focus on programming. Now I feel like I'm just iterating through things that I've learned before. I feel like I need to come in contact with more experienced programmers in a professional development environment that I can learn more from, so I'm preparing to apply for a backend programmer trainee job (unpaid) and use my project as a résumé (I don't have a computer science degree). I need some general advice from working professionals on not only how I should prepare myself considering what I want to work with, but also how I should continue my programming/developer path generally. I've asked a friend who works with Java and he said that I should find a good design patterns book. Do you guys have any good suggestions? Any other language that would be good to combine with PHP/MySQL? I know Java/C++ syntax. I'll take any advice you might have. You can go to my site, create an account and test the functionality; it might give you a hint of what I'm capable of. Thanks for reading."} {"_id": "181425", "title": "Expensive AOT Optimizations", "text": "I've seen it stated several times that AOT can run some more expensive optimizations that take too long to be used by a JIT.
But I've never seen it stated what exactly these optimizations are. So I'm wondering, what are these optimizations?"} {"_id": "180417", "title": "Fastest Haskell library sort implementation", "text": "I am implementing an application in Haskell and, for sorting, I use the library function `Data.List.sort`. However, I was wondering whether this is the fastest sort implementation in the Haskell standard library (perhaps lists are not the best choice for efficient sorting). I have found different alternatives, e.g. heap sort on arrays, sort on sequences (but the documentation does not say what kind of algorithm is used). My question is: what is the fastest sorting implementation (container type + sort function) provided by the Haskell standard library? Is there some documentation page listing all library sort functions and comparing them with respect to performance? **EDIT** To provide some more context, I am running a benchmark. I have written a simple program in C, Java, Python and Haskell that 1. Reads 1000000 strings (lines) from a text file. 2. Sorts the strings using a built-in (library) sorting algorithm. 3. Writes the sorted list of strings to a file. For each implementation, I only measure the sorting time (leaving out the time needed for disk IO). Running the benchmark on Ubuntu 12.04, I get * C (gcc 4.6.3, `qsort` on `char **`): 0.890 s * Java (OpenJDK 64-Bit 1.7.0_09, `Collections.sort()` on `java.util.LinkedList`): 1.307 s * Python (Python 2.7.3, `list.sort()`): 1.072 s * Haskell (GHC 7.4.1, `Data.List.sort` on `[Data.ByteString.UTF8.ByteString]`): 11.864 s So I wonder if there is another data type / library function in Haskell that can give better performance."} {"_id": "180414", "title": "Using MVC in a Java app", "text": "I need to write a cross-platform GUI application to process (in multiple threads) and visualize fairly large quantities of data. Ideally the application should be relatively fast and look good. The app's interface will consist of a table widget, a tree widget, and a custom figure-drawing widget. The user will be able to modify the data from any one of these widgets and the changes should be immediately reflected in the other widgets. Naturally, I am planning to use MVC. However, I normally do all my GUI programming in C++/Qt, and have very limited exposure to Java. So I would really appreciate any advice on how to organize such an app in Java. In particular, should I use Swing or JavaFX? Which widgets would you pick for the job? Could you recommend any books/online tutorials that cover these aspects of the Java platform? I will greatly appreciate any feedback. Thanks! (this question was originally posted on Stack Overflow, but this site was suggested as a more appropriate place to ask it)"} {"_id": "180415", "title": "Is it better to use $filehandler or FILEHANDLER?", "text": "It seems like FILEHANDLER is more commonly used as a handler naming convention than $filehandler. But it can give a bareword error if one forgets to use *FILEHANDLER in some constructions. What are the advantages and disadvantages of HANDLERS and $handlers? Which naming convention is the better practice, and why?"} {"_id": "245881", "title": "C# server side application 100 GB dataset + Garbage Collection", "text": "If I have a server with 256 GB of RAM, I was wondering: can I create a C# application which has a 100 GB memory footprint?
I want to create a dictionary like Dictionary<DateTime, Dictionary<string, Dictionary<string, object>>>. First dictionary: DateTime - max 40 distinct values. Second dictionary: string - 2000 distinct values. Third dictionary: string - 100 distinct values. Objects will be removed from the DateTime part (say, after every hour); until then, various internal members are populated. With Java, I am told, even if the server has large memory the JVM will stall because of GC."} {"_id": "245889", "title": "Is PHP the only popular language that mixes simple and associative arrays into a single type?", "text": "I'm doing research on PHP and wondering if there are any other commonly used programming languages that use an associative array for both simple indexed element storage and key-value functionality. Does it make PHP unique in this sense? For example, a language like C# distinguishes clearly between a simple array and a map\\dictionary\\hash T[] array = new T[n]; Dictionary<string, T> map = new Dictionary<string, T>(); Meanwhile PHP makes no such distinction (at design time) $array = array(1, 2, 3); $map = array(\"one\" => 1, \"two\" => 2, \"three\" => 3)"} {"_id": "81118", "title": "Has anyone attempted to create a system which uses e style capabilities in Javascript?", "text": "The object level capabilities that show up in E seem like an interesting addition to the Javascript language, especially considering the security issues that Javascript programs face. Has anyone tried implementing an object-capability type security system in Javascript, or by modifying a Javascript VM?"} {"_id": "4063", "title": "Why did an interviewer ask me a question about people eating curry?", "text": "I had an interview question once which went... Interviewer: \"Could you tell me how many people will eat curry for their dinner this evening\" Me: \"Er, sorry?\" Interviewer: \"Not the _actual_ number, just an estimate\" I actually started to stumble my way through it, when I stopped and questioned what it had to do with anything about the job. The interviewer mumbled something and moved on. I guess the question is, what is the point of these ridiculous questions? I just don't understand why they started coming up with these things."} {"_id": "4061", "title": "A Programmers Version of Pygmalion", "text": "How long would it take and what resources would you need to train someone right out of high school and make them a 'technically' employable programmer? Granted, they may not be mature enough to pass many interviews, or have the required time-based or domain experience."} {"_id": "43725", "title": "How to build a team of people not working together?", "text": "I am in charge of a group of about 30 software development experts and architects. While these people are co-located in the company's organization chart, they do not really feel like a team. This is due to their work environment: 1) The people are spread over eight locations, with a max. distance of about 1000km (this is Europe). 2) The people don't work as a team but instead get called in as single people (and sometimes small groups) into projects for as long as the projects run. 3) Travelling is somewhat limited as this requires business reasons. A lot is done via phone. Do you have ideas or suggestions on how I could make these people feel part of a joint organization where they support others and get supported by others? So that they get to know their peers, build a network, informally exchange information?
So that they generally get the feeling of having common ground and derive motivation and job satisfaction?"} {"_id": "22153", "title": "Effectively Learning 3 Languages Simultaneously", "text": "So I have been developing with PHP for a year and a half now. Somewhere along the way, I decided to pick up skills in Python and Ruby, and learn ASP with C# just recently (as my first compiled language). Adding to that, I'm a sucker for good CSS and Javascript coding. What I'm worried about is that this is a huge undertaking. Please don't tell me \"you should focus on one language\" right now, because it's just that my damned brain is telling me to study the aforementioned languages and get up and running ASAP. I'm 21, and work at a small web dev company. They're not that strict so I can sneak some time learning Python and improving my PHP. At home I work with Ruby and C#. I have done the following to effectively manage my learning process: 1. Focus on web-only projects 2. Reuse my code from PHP and rewrite it according to the language syntax (to Ruby, to Python) 3. Create an online blog to store some of my code in case I need to get back to it later I know it's quite insane to juggle 4 server languages and 2 frontend languages at once but hey, I am having fun doing this. I'm not saying I want to master them immediately, because I know it takes years to master a single language. Have you done this before? How did you manage your learning process? Did you give up halfway and just focus on one?"} {"_id": "205877", "title": "How can I work on multiple programming languages at same time", "text": "It always happens to me that if I leave something for 1-2 months, I forget it. Five months back I had a Symfony project and I did it. At that time I was very confident that I could do any project in Symfony2. Then we got a Python project in Django and I worked full time on that for a few months. But now, when I had to fix an error in Symfony2, I had completely forgotten the structure and I kept mixing Python stuff with PHP Symfony. I want to know how people work with different languages at the same time. Do I need to keep studying all the languages at the same time so that I don't forget? I am confused about what I should do. Or, whenever I do a project, should I keep notes of each and every step so that I can follow later how I did it?"} {"_id": "196391", "title": "Revisiting Learned Languages", "text": "I'm an aspiring programmer; I really wish to be great at multiple programming languages. I began programming at my school, where they didn't teach me well: they didn't follow the standards, they didn't even explain the difference between an IDE and a compiler, and they kept calling the Turbo C++ IDE a compiler, which sucked. And at that time I didn't have good internet access. But I do have internet now, so I can learn new things and how to do them properly. The problem I'm having now is how to relearn everything and at the same time learn new things. I'm trying to do JavaScript and Ruby at this time but I can't concentrate on either of these, because I'm worried about the one I am not learning (and also, when I find something interesting, I get attracted to that and not the one I'm looking for, e.g. attracted to C function pointers while I'm trying to learn callbacks in JavaScript). How do I manage this? Make a routine? And many of us recommend practicing by making things, but I have no idea what to make, or what would be better to make as programming practice. Really need help.
Edit: How is my question a duplicate? The mentioned question asks whether it \"is\" better to learn multiple languages simultaneously or not. What I'm asking here is how to concentrate on and manage learning languages properly. Therefore it is not a duplicate; plus, the answers I'm getting here are great (which are not there in that question; I'd like to ask the moderators to quit judging like that)."} {"_id": "212889", "title": "How can a beginner develop an algorithm for this problem?", "text": "Before I'm flamed, I'm just a beginner in programming. I have been learning Python for about 1 week by myself. I have a real problem to solve: **Goal:** The fastest route through a pathway. **Problem:** * Walk forward to get to the goal. The pathway is 128 steps. * Walking 1 step increases the danger value by 113 * If the danger value is over the limit then there is a fight. The fight takes 15 seconds to win. * The limit is randomly assigned on each step. * The danger value resets to 0 after a fight. **Here's the essence of the problem though:** There are two possible options if there is a fight: * The fight can be escaped. Escaping takes 2 seconds. But it does NOT reset the danger value. * Accept the fight. Resets the danger value. * The higher the danger value, the higher the chance of getting a fight. I've written the code to take one step. And I've written the code for the relevant randomness of the danger value, etc. I've even made a graph that shows exactly the pathway and potential fights, with the respective limits. This graph shows what happens when you accept every fight (don't escape): ![Fighting Pathway](http://i.stack.imgur.com/7p4Nv.png) **Being self-taught, I just don't know how to write a search algorithm for finding the quickest way through a particular route.** I believe I need to write a **backtracking algorithm**; essentially a kind of **brute force**. I just don't know how to translate it from English into code. It's been around 4 days of just thinking about this, and I cannot write anything that even attempts to solve the problem. Any books or advice on how to even start solving this? As I've said, I have all of the code for what happens on each step (here: http://pastebin.com/LnV4h9NV). I just don't know how to even begin trying to brute force/search. (A sketch of such a search appears below.)"} {"_id": "255471", "title": "In IT companies employees are working in the same project mean how they will join these part of work (programs) together?", "text": "In IT companies where employees are working on the same project, how will they join their parts of the work (programs) together? I mean, I am working on a `.zul` file (client side) and some of my teammates are working on the server side or on other Java files; at the end of the day, how can they join those programs from different systems into a single system?"} {"_id": "212880", "title": "Chain-of-Command/Responsibility pattern for my simple API. A good idea?", "text": "I need to write a simple API for a project I'm working on. This API will be used internally to perform some server-side actions triggered by AJAX calls. To make things easier for me, I thought of using the Chain-of-Command/Responsibility pattern. Here are more details about what I want to do: I am creating a \"dashboard\" where admins of my app will be able to update information (meta data) about items stored in a database. To make things easy and simple for the admins, I chose to use AJAX. So, if an admin wants to delete an item, (s)he clicks on the \"delete button\".
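The brute-force search asked about in the pathway question above can be written as a depth-first walk over the two choices (accept or escape) at every fight. A minimal Python sketch, under the simplifying assumption that the per-step limits are known in advance; the asker's program would plug in its own step and fight model, and memoization is added so the exponential tree stays tractable:

```python
from functools import lru_cache

STEP_DANGER = 113   # danger added per step (from the problem statement)
FIGHT_TIME = 15     # seconds to win a fight
ESCAPE_TIME = 2     # seconds to escape a fight

def best_time(limits):
    """Minimum total seconds spent on fights over len(limits) steps.

    limits[i] is the danger limit at step i; assumed known here,
    whereas the real program samples it randomly per step."""

    @lru_cache(maxsize=None)
    def go(step, danger):
        if step == len(limits):
            return 0                      # reached the goal
        danger += STEP_DANGER             # walking raises the danger value
        if danger <= limits[step]:
            return go(step + 1, danger)   # no fight triggered on this step
        # fight: branch on the two choices and keep the cheaper outcome
        accept = FIGHT_TIME + go(step + 1, 0)        # accepting resets danger
        escape = ESCAPE_TIME + go(step + 1, danger)  # escaping keeps danger
        return min(accept, escape)

    return go(0, 0)

print(best_time([500] * 128))  # toy input: the same limit on all 128 steps
```

Without the `lru_cache` line this is the plain backtracking enumeration; with it, each repeated (step, danger) state is solved once, which is what makes 128 steps feasible.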
A POST request is then sent to the `edit.php` page with all the information needed to perform the deletion (action: delete, element: link, id: xx ...), where `action`, `element` and of course `id` can change. This is why I opted for a mini-API that will, depending on the `action` and the `element` data, call a different function from a different class. Now, to implement the API, I decided to use the Chain-of-Responsibility design pattern. Why? Because I can easily add, delete or update classes without having to modify the API itself. Is the Chain of Responsibility design pattern a good fit for my case?"} {"_id": "212887", "title": "Rethinking testing strategy", "text": "Working on Plone projects, our team tries to achieve full test coverage at least for important products. The kinds of tests we write are unit tests, functional tests and integration tests. (Also stress-tests, but those aren't in the scope of this question.) The goal is at least two-fold: to facilitate upgrades and to catch bugs (sometimes it even happens in the process of writing tests). However, Plone/Zope is a complex system, and with years of experience I've noticed that the test strategy should be re-thought. First of all, unit tests, which often require a lot of mocking, are not that cost-efficient. They are mostly easy and beneficial when some core, logic-heavy functionality is being written, like pure Python modules which have negligible couplings with Plone/Zope, databases, etc. I have rarely seen unit tests catch any bugs at all, except while writing them. So, when doing the usual thing (writing views/portlets/viewlets), from my experience, it's much more efficient to write functional and integration tests. The rationale is that in case Plone/Zope changes (in an upgrade), we can catch the mishaps in our code. Views usually do not have a lot of \"algorithmic\" logic; they glue together several data sources (like catalog queries), maybe with some form handling and preprocessing for templates. Quite often views call one or more tools to do some routine job (like getting the navigation tree or looking up the site root). Mocking it all seems unwise. For example, sometimes Plone/Zope changes some default to another type, and all those mocked tests happily continue to work while the code fails in the real instance. Functional/integration tests may at times be fragile (HTML can change), but they are cheaper to produce too. They provide basic coverage, and trigger alarms when the underlying system changes. Spotting the source of a mishap is usually not an issue. ( **update**: Wrong: spotting where an integration test fails can be a big issue, but **our** code's unit tests are usually of no help.) The question is, am I overlooking something of importance in confining unit testing to functions and classes which do not require mocking the environment heavily and are instead \"purely\" logic-heavy? I can't find any justification for writing unit tests \"first\", or even at all, for every piece of code in Plone/Zope (I do not mean the core of the system, just our own additions for our clients).
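A tiny illustration of the failure mode described in the testing question above, where a mocked collaborator pins yesterday's behavior so the unit test stays green while the real integration breaks. Standard library only; `get_setting` and the nav-depth logic are made-up stand-ins, not Plone APIs:

```python
import unittest
from unittest import mock

def navigation_depth(site):
    # our code: assumes the tool returns an int, as older framework versions did
    return site.get_setting("nav_depth") + 1

class NavigationDepthUnitTest(unittest.TestCase):
    def test_depth_with_mock(self):
        site = mock.Mock()
        site.get_setting.return_value = 2            # pinned to the old default
        self.assertEqual(navigation_depth(site), 3)  # passes forever

# If an upgrade makes the real tool return the setting as a string ("2"),
# navigation_depth() raises TypeError in a real instance, yet the mocked
# unit test above keeps passing; an integration test against the live
# site object would have caught the change.

if __name__ == "__main__":
    unittest.main()
```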
To make the question less opinion-based: are there other reasons, not mentioned or tackled above, which necessitate considering more unit tests (and fewer integration tests) in a situation where code heavily depends on a complex (and somewhat hairy) framework and serves more as glue for the framework's subsystems?"} {"_id": "212884", "title": "Interview Coding Assignments… how much do they account for?", "text": "So I've run into a couple of companies that require completing coding assignments before they give you a phone interview. I'm curious, do these coding assignments account for anything after the interviews? Or are these assignments just there to see if you're worthy of their time to be phone-interviewed? I'm more interested in hearing from those who had involvement in an actual hiring process that included a coding assignment. I would appreciate it if you mentioned your involvement (e.g., you were the hiring manager or one of the people on the team that did the hiring). Thanks!"} {"_id": "214730", "title": "Should I close database connections after use in PHP?", "text": "I wonder if I should close any unnecessary database connections inside my PHP scripts. I am aware of the fact that database connections are closed implicitly when the block stops executing, and 'manually' closing the connections could kinda bloat the codebase with unnecessary code. But shouldn't I do so in order to make my code as readable and as easily understandable as possible, while also preventing several possible issues during run time? Also, if I did, would it be enough to `unset()` my database object?"} {"_id": "245160", "title": "Data Access Levels of Abstraction", "text": "I'd like to describe this situation from two perspectives. I have a system called `Accounts`. This system is made up of subsystems which handle different account-based activities. For example: Company.Accounts.NewAccounts Company.Accounts.Loans Company.Accounts.Transactions **Perspective One** From this perspective, imagine I have a shared data access tier: Company.Accounts.DataAccess This subsystem is built separately and each of my other Accounts subsystems references the built assembly. This Company.Accounts.DataAccess subsystem connects to multiple tables. Some of the tables are specific to certain subsystems, and some of them are shared. There are separate modules for use by some specific subsystems, but also one or two common modules that multiple subsystems reference. The common modules include a lot of basic CRUD activities, but also a good number of low/medium complexity actions. So from this perspective, would it be fair to keep this architecture, but to pull out the subsystem-specific logic into a subsystem-specific data access layer? That way, each subsystem has its own specific Data Access piece, but also still references the original Data Access Tier for the common modules. Company.Accounts.NewAccounts Company.Accounts.NewAccounts.DataAccess Company.Accounts.Loans Company.Accounts.Loans.DataAccess Company.Accounts.Transactions Company.Accounts.Transactions.DataAccess Company.Accounts.DataAccess **Perspective Two** From this perspective, imagine I have existing Data Access Layers for each subsystem: Company.Accounts.NewAccounts.DataAccess Company.Accounts.Loans.DataAccess Company.Accounts.Transactions.DataAccess They are logical layers as part of each subsystem, and only each subsystem references its own data access piece. So there are no cross references here.
Each of these Data Access Layers performs operations against its own specific database tables. However, each of them also has code to perform common operations against tables that are shared amongst all subsystems in the Accounts system. So from this perspective, would it be fair to keep this architecture, but to abstract the common operations into a subsystem-agnostic data access tier? That way, each subsystem keeps its own logical layer for specific operations, but also references an externally built tier for the common operations. Company.Accounts.NewAccounts Company.Accounts.NewAccounts.DataAccess Company.Accounts.Loans Company.Accounts.Loans.DataAccess Company.Accounts.Transactions Company.Accounts.Transactions.DataAccess Company.Accounts.DataAccess **Conclusion** So as you can see, the ending is the same, but I am coming at the description from two different perspectives. My idea is to have two different levels for data access. So a subsystem references its Data Access Layer for its own data operations, but also references a common Data Access Tier for common operations. Is that legitimate? Is there already a pattern named for this?"} {"_id": "19990", "title": "Typing skills: do developers need formal training?", "text": "Developers key in lots of code, write blogs, write software designs and stuff. Do we need formal typing training? I have seen many developers struggling with the keyboard, thus slowing down development momentum. At least we can play around with typing tutor software to begin with."} {"_id": "245162", "title": "factors that are important for success when letting an agile framework emerge for the whole organisation?", "text": "I previously posted a question [link] on programmers.stackexchange.com asking when it makes sense to adopt an existing enterprise scale agile framework and when it makes sense to allow your own enterprise scale agile framework to emerge (e.g. through trial and error). This question takes the previous question a step further and asks for **the factors that are important for success** when you have decided to go down the route of letting your enterprise scale agile framework emerge. \\-- Downvoters: down voting without adding a comment provides no opportunity for improving the question. Please let me know why you are down voting so that I can improve the question."} {"_id": "189654", "title": "NoSQL Document Database as a Message Queue", "text": "I am considering using a NoSQL document database as a messaging queue. Here is why: * I want a client to post a message (some serialized object) to the server and not have to wait for a synchronous response. * I want to be able to pull messages off of the \"queue\" based on some criteria, which may be more sophisticated than just a priority level (I am working on a hosted web app, so I want to give all of my customers a fair amount of \"computing time\", and not let one customer hog all of the processing). * I want the queue to be durable - if the server goes down I want any remaining messages to be handled when it comes back up. So, I am considering using MongoDB or RavenDB as a message queue. The client can post the message object to a web service which writes it to the database. Then the service doing the work can pull the various message types based on any criteria that may arise. I can create indexes around the scenarios to make it faster. So, I am looking for someone to shoot a hole in this. Has anybody successfully done this?
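On the mechanics of the MongoDB option just described: the crux is claiming a message atomically, so that two workers never pick up the same document. The usual device is an atomic find-and-modify. A sketch using pymongo; the field names and the fairness criterion are illustrative assumptions, not part of the question:

```python
import pymongo
from pymongo import MongoClient, ReturnDocument

queue = MongoClient()["app"]["messages"]

def dequeue(extra_criteria=None):
    """Atomically claim the next ready message; returns None if none match."""
    criteria = {"status": "ready"}
    if extra_criteria:
        criteria.update(extra_criteria)   # e.g. per-customer fairness rules
    return queue.find_one_and_update(
        criteria,
        {"$set": {"status": "processing"}},
        sort=[("priority", pymongo.DESCENDING), ("enqueued_at", pymongo.ASCENDING)],
        return_document=ReturnDocument.AFTER,
    )
```

An index covering (status, priority, enqueued_at) keeps the claim cheap, and durability falls out of the database's normal write guarantees.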
Has anybody tried this and failed in some way?"} {"_id": "38874", "title": "Where do you go to read good examples of source code?", "text": "I have heard a few people say that one of the best ways to improve your coding ability is to read others' code and understand it. My question, as a relatively new programmer, is: where do I go to find good source code examples that are not too far over my head?"} {"_id": "81932", "title": "Good introduction to artificial intelligence?", "text": "I have been told that as part of my degree year I will be learning about Artificial Intelligence. My lecturers have recommended a couple of books: Artificial Intelligence: A Systems Approach (Computer Science) Artificial Intelligence (A Modern Approach) I've watched a couple of lectures hosted by Jeff Hawkins; they were good, although a little advanced for me. Could someone recommend any other books or lectures that introduce the topic?"} {"_id": "13340", "title": "Is Power++ worth buying/learning?", "text": "I found Power++ and to my surprise it looks like a good alternative to Delphi and Visual Studio. Anyone used it? Can you tell me if it is worth buying/learning?"} {"_id": "13341", "title": "Hard-copy approaches to time tracking", "text": "I have a problem: I suck at tracking time-on-task for specific features/defects/etc. while coding them. I tend to jump between tasks a fair bit (partly due to the inherent juggling required by professional software development, partly due to my personal tendency to focus on the code itself and not the business process around the code). My personal preference is for a hard-copy system. Even with gabillions of pixels of real estate on-screen, I find it terribly distracting to keep a tracking window convenient; either I forget about it or it gets in my way. So, I am looking for suggestions on time-tracking. My only requirement is a simple system to track start/stop times per task. I've considered going as far as buying a time-clock and giving each ticket a dedicated time-card. When I start working on it, punch in; when done working, punch out."} {"_id": "244750", "title": "How can my team avoid frequent errors after refactoring?", "text": "To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made out of two parts: a rather fat core, and customer projects built upon it, ranging from thin to big. Customer projects usually expand the core. Overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects. The worst parts of the core are untested (not as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: Rather frequently, we break each other's stuff. Someone from Team A expands or refactors core feature Y, causing unexpected errors for one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, believed feature Y to be stable and did not test it before releasing, unaware of the changes. How do we get rid of those problems?
What kind of 'announcement technique' can you recommend?"} {"_id": "244751", "title": "How does Telnet work?", "text": "**Is telnet just a simple socket connection?** I usually have a difficult time in the networking area so I use some code from the internet to help me out, but I can't seem to find a library for Telnet in Objective-C. The closest thing I've found is `CocoaAsyncSocket`. I was wondering: _Is telnet just plain socket connections?_ _Do I just create a socket to the server and send the commands?_"} {"_id": "186602", "title": "need advice for building evaluation system in Java for the web", "text": "I have made a Java program that allows marking an exam taken by one student. In the interface the student chooses between a set of options for each question, and the program computes the student's score from an array that holds the correct answers. By the way, this simple program is a console Java application, with no Swing or applets at all. The information about which answers are correct is stored in a vector, so each answer from the student is compared with this vector to get the final score. I would like to put this program on a domain that I recently got, so that a student, upon entering the web page, could take the exam; another program installed on the server will check the answers and serve further questions, in case the student wants to practice. One friend told me that it is possible to wrap the console Java application into Java for the web, but I just don't know the details of doing that. What would anybody suggest, or is there any online tutorial that could help me with that? My domain has the following extensions: * FTP * MySQL * Apache * SharedTomcat Please note this is not a discussion about which tool is better; I want to use Java, but I am pretty lost with all the web technologies that one can use with this language."} {"_id": "244758", "title": "Failed to allocate memory - What is it trying to say?", "text": "In my early days of programming I often used to get memory-related fatal errors in the following format: Fatal error: Allowed memory size of bytes exhausted (tried to allocate bytes) in /path/to/filename.php on line I'm a little embarrassed to state that even though I have figured out how to solve them and take steps to avoid them altogether, I'm still not quite sure what exactly the message translates to in simple words. For example, if I get a message such as: Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 4000 bytes) in ........ on line 34 As things stand at the moment, I assume it to be stating that the script consumes 67108864 bytes of data, but only 4000 bytes are available during runtime. * Am I right in my assumption? * If not, what's the correct interpretation?"} {"_id": "235981", "title": "Design Hash table with simple hash function", "text": "I want to learn to design a hash table with a simple hash function, for better understanding. I understand that the hash table will work as long as the hash function maps each key to a non-negative integer less than the size of the hash table, but it will only perform well if it distributes different keys across different buckets. My question is: what are alternative ways to implement a hash function using ASCII codes? I found an ASCII-code hash function implementation; it's easy to build a hash function on the idea of treating each character of the string as a digit in a number. One way to represent a number is to use a radix-10 system with the Arabic numerals.
For example, I could represent numbers using the letters \"a\" - \"z\" for the numbers 0 through 25 to obtain the Radix-26 system described in the textbook. Characters in the computer are often stored using 7-bit ASCII codes (with values 0-127). So we can treat a string of ASCII characters as a Radix-128 number."} {"_id": "218103", "title": "MVC Design Pattern to Combine Multiple Models for use", "text": "In my design, I have multiple models and each model has a controller. I need to use all the models to process some operation. Most examples I see are pretty simple with 1 view, 1 controller, and 1 model. How would you get all these models together? The only ways I can think of are: 1) Have a top-level controller which has a reference to every controller. Those controllers will have getter/setter functions for their models. Does this violate MVC, because every controller should have a model? 2) Have an intermediate class to combine every model into one model. Then you create a controller for that new super-model. Do you know of any better ideas? Thanks."} {"_id": "235985", "title": "Best practice for projects architecture - server side", "text": "The usual way (that I'm familiar with) to divide the server side is the n-layer architecture: 1. **DAL** \\- data access layer, which usually has the Entities and the context (and maybe also a repository) 2. **BLL** \\- business logic layer 3. **Contract** \\- interfaces 4. **Services** \\- classes which implement the interfaces (a Web API, for example) Of course there is also the n-tier architecture, which I don't want to discuss. Lately I've been seeing a \"Modules\" architecture more and more, and I was wondering about it (I'm not familiar with the professional term for this architecture), where each module is not a table in the database but a group of tables that share a concept. Something that looks like this: ![Modules and DAL architecture](http://i.stack.imgur.com/qPrqb.jpg) Is there any known good architecture that separates the layers by modules on the server side? If so, what is the professional term for this kind of architecture, and what are the advantages/disadvantages of each of these architectures?"} {"_id": "235987", "title": "What is the right way to parse HTML?", "text": "I've heard that parsing HTML the Cthulhu way (with regular expressions) is not very good. But what are the right ways to parse HTML? Or is it possible to parse it at all?"} {"_id": "235989", "title": "Is there a purely technical term for 'monkey patching'", "text": "**EDIT** The original title of the question was **Is there a non-derogatory term for 'monkey patching'**. As I have learned that the term is actually not derogatory, or is at least not meant to be, I changed the name to free it from mistaken implications. * * * I would like to describe a piece of software, and try not to use emotionally biased language. However, I am not aware of an unbiased alternative to 'monkey patching'. The term is meant to be derogatory for a reason. Many see it as a useful but problematic technique. I am aware of an alternative, but it is equally derogatory: 'duck punching'. I see 'open classes' not as an alternative name, but as a prerequisite for monkey patching. 'Overriding' does not seem right either. I would like to differentiate between monkey patching and the regular overriding that is part of inheritance-based polymorphism."} {"_id": "42091", "title": "What should I know about C++?", "text": "I've recently started learning C++, and I enjoy it a lot.
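Returning to the hash-table question above: a minimal sketch of the radix-128 idea, applying the modulo at every step (Horner's rule) so the intermediate value never overflows. The prime table size is an assumption for better key spread, not something stated in the question.

```java
public class AsciiHash {
    // Treat the string as a radix-128 number: each 7-bit ASCII character
    // is one digit, and the running value is reduced mod tableSize.
    static int hash(String key, int tableSize) {
        int h = 0;
        for (int i = 0; i < key.length(); i++) {
            h = (h * 128 + key.charAt(i)) % tableSize;
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(hash("pencil", 10007)); // 10007 is prime
    }
}
```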
I've often read that it's easier to write bad code in C++ than in most languages, and that it is a lot deeper than it seems. As I'd like to avoid writing bad code, I was wondering what exactly I shouldn't do, and what I should do, to write good code in C++."} {"_id": "42094", "title": "Selling middleware", "text": "I am close to completing a useful suite of tools for 2D game development (with mobile platforms in mind), for working with sprites and animations. Can anyone advise on the best way to sell and promote my product (without investing capital)? I am aware that there are lots of websites that can market and sell software to end consumers, but I need to target game developers specifically, as middleware is completely meaningless to your average Joe. So, aside from using PayPal on my website, are there any marketplaces for game middleware? How can I promote the application outside my website without spamming developer forums?"} {"_id": "42097", "title": "How long of a trial period do you use with programmers - how quickly can you tell if they are talented and a good fit?", "text": "It seems most jobs that I've been exposed to come with a 3-month trial period, during which the employer decides whether the employee is doing good enough work, and is a good fit. Three months seems like overkill to me; in most cases we've known much sooner whether someone wasn't a good fit. How long does it take you, on average, to evaluate whether a newly hired programmer is both talented and a good fit for your team?"} {"_id": "101340", "title": "What \u201cIndustry Classification\u201d is Computer Programming?", "text": "Whenever I fill out a survey (especially, but not limited to, trade papers/mags/sites), there\u2019s one question (or any derivation thereof) that always trips me up: > Which industry classification best describes your line of work This is usually followed by a list of things like _Retail_, _Health_, _Education_, _Construction_, _Manufacturing_, etc. I can never figure out exactly what industry computer programming falls in. I usually just end up picking either _Manufacturing_, because I suppose in a way I\u2019m manufacturing something; _Retail_, since in a way I am producing goods; or else whatever option happens to mention either \u201ccomputers\u201d or \u201celectronics\u201d (if any). I tried Googling it but came up empty. Does anyone know what the \u201ccorrect\u201d answer is? What do other programmers enter in these surveys? * * * ![enter image description here](http://i.stack.imgur.com/YOTmp.png)"} {"_id": "80247", "title": "PHP - How to Automatically Post Form in Another Website and Parse the Result", "text": "I am planning to create a website like http://dohop.com that will allow a user to pull airline price and date data from the http://airasia.com website. Currently that site only allows the user to view the flight schedule for one day, and if they wish to view X days ahead they need to repost the form. I would like to collect the data for X days ahead and group it in a table so that the user can view all the flight prices and schedule variances on one screen without reposting. I have checked the `AirAsia.com` site and they currently don't have any API support which would allow me to extract their data. Their website uses 'aspx' pages and the POST method.
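For the flight-data question just above, the usual shape of the harvesting step is to replay the site's own form POST and read back the HTML. A hedged sketch follows (in Java here for consistency with the other examples in this section; the same steps map directly onto PHP's cURL functions). The URL and field names are made up, and real ASP.NET pages typically also require hidden `__VIEWSTATE`/`__EVENTVALIDATION` values copied from the page, which this sketch omits.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class FormPostFetcher {
    public static void main(String[] args) throws IOException {
        // Hypothetical form fields; the real names come from the page source.
        String body = "origin=" + URLEncoder.encode("KUL", "UTF-8")
                    + "&departDate=" + URLEncoder.encode("2012-01-15", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/flightsearch.aspx").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Read the returned HTML; hand it to an HTML parser, not a regex.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            String html = in.useDelimiter("\\A").next();
            System.out.println(html.length() + " bytes of HTML received");
        }
    }
}
```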
Can anyone give me some guidance on the method, approach, or technique I could use to harvest the data?"} {"_id": "101346", "title": "What is best practice on ordering parameters in a function?", "text": "Sometimes (rarely), it seems that creating a function that takes a decent number of parameters is the best route. However, when I do, I feel like I'm often choosing the ordering of the parameters at random. I usually go by \"order of importance\", with the most important parameter first. Is there a better way to do this? Is there a \"best practice\" way of ordering parameters that enhances clarity?"} {"_id": "163147", "title": "Is a 1:* write:read thread system safe?", "text": "Theoretically, thread-safe code should fix race conditions. Race conditions, as I understand them, occur because two threads attempt to write to the same location at the same time. However, what about a threading model in which a single thread is designed to write to a location, and several slave/worker threads simply read from the location? Assuming the value/timing at which they read the data isn't relevant/doesn't hinder the worker threads' outcome, wouldn't this be considered 'thread safe', or am I missing something in my logic?"} {"_id": "48635", "title": "Is software innovation still primarily North American and European? Why, and for how much longer?", "text": "Since this site is read by a global audience of programmers, I want to know if people generally agree that the vast majority of software innovation - languages, OS, tools, methodologies, books, etc. - still originates from the USA, Canada, and the EU. I can think of a few exceptions, e.g. the Nginx webserver from Russia and the Ruby language from Japan, but overwhelmingly, the software I use and encounter daily is from North America and the EU. * Why? Is history and historical momentum (computing having started in the USA and Europe) still driving the industry? And/or, is some nebulous (or real) cultural difference discouraging software innovation abroad? * Or are those of us in the West simply ignorant of real software innovation going on in Asia, South America, Eastern Europe, etc.? * When, if ever, might the centers of innovation move out of the West? Your experiences and opinions welcome, thanks!"} {"_id": "256234", "title": "Example cases for each data structure?", "text": "I've been looking all over the Internet for a page that lists **cases** for when to use each abstract data type. I expect that reading through a series of both surprising and obvious uses for each type will help readers recognize future opportunities for applying a particular ADT in their algorithms. For example: > **Trees** > > * File System > * HTML DOM tree > * Family Genealogy > * Syntactic Parsing of Strings > * etc... > I'll provide a list of basic ADTs that should be included; feel free to add more, or to specify subtypes: Arrays, Linked-Lists, Sets, Dictionaries/Maps, Stacks, Queues, Trees, Graphs."} {"_id": "205716", "title": "Is it worth writing a unit test for a DTO with the most basic getter/setters?", "text": "The advantage is that it protects your DTO against future \"enhancements\"?"} {"_id": "256231", "title": "Java EE - Configuration over convention?", "text": "I'm a systems administrator and am also pursuing a more specific career in development. I started about a year and a half ago with JavaScript and C#/.NET ... one day I decided I wanted to learn Java.
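On the 1-writer/N-readers threading question above: the writes themselves cannot race each other, but without a memory barrier the readers may see stale values indefinitely, so "only one writer" alone does not make the design safe. A small sketch of the safe variant, using an atomic reference (Java terms; the principle is the same in C# and C++):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SingleWriterDemo {
    // A plain field here would compile, but readers could cache a stale
    // value forever; AtomicReference (or volatile) publishes each write.
    private static final AtomicReference<String> latest =
            new AtomicReference<>("initial");

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                latest.set("update-" + i);   // the single writing thread
            }
        });
        Runnable reader = () -> System.out.println(
                Thread.currentThread().getName() + " saw " + latest.get());

        writer.start();
        new Thread(reader, "reader-1").start();
        new Thread(reader, "reader-2").start();
    }
}
```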
Then I found myself amazed by all of the options available; the community seems wonderful, and I think the JVM is a great piece of technology and a great development platform. However... one thing about Microsoft is that everything works completely out of the box. If you want to build an infrastructure or just a simple app, then you are dealing with one technology, one vendor, one IDE, one set of forums ... you just point, click, develop. I know there have been huge strides made with Java. More specifically, I am spending most of my time with the Pivotal stack: Spring, Spring Boot, Grails, etc. But because there are so many options and everything is so open, the tutorials on different sites are kinda hard to follow, and because everything is changing so fast, by the time I finally find a good book or tutorial the layout for everything has changed. I am able to build simple Java apps and I can also get a Grails project up and running in an instant. Also the Play! framework ... wow. But the point is: I just want a little bit of advice as someone who is new to Java development. I know that if I can just dig through all of these XML files and figure it all out, I'll be able to really start taking advantage of these technologies. I just wanted to know if there is a de facto tutorial I should start with. Spring is pretty easy, even though sometimes my stuff just ... doesn't work, even when I'm following the guide on the site. And another part of this question is: should I go with Maven or Gradle? I haven't used either of these tools. Are they supposed to make everything easier? All the projects I have created have come straight from the IDE (Spring Tool Suite). OK, this is getting long. I really want to get an application up and running with Spring Data/REST and start building an AngularJS frontend. Lots of options... a little overwhelming. But I want to learn it well, and the only thing I miss about .NET is the ease of use. Does anybody feel anything similar when learning Java web development? What's the best place to shut up and start?"} {"_id": "201906", "title": "How to map the English dictionary to UNSPSC codes?", "text": "Is there a DB which maps words from the English dictionary to the UNSPSC codes? (http://www.unspsc.org/) My problem is the following: I am building a search system. When the customer searches for 'pencil', 'pencil sharpeners' are also returned. Each returned item has a UNSPSC code associated with it. So one possible solution would be to search only for those items in the categories the search word belongs to. But this solution would require a mapping from English words to the UNSPSC codes..."} {"_id": "201907", "title": "Redis & MongoDB for Metrics Platform", "text": "I'm in the process of writing an app that will ultimately display analytics to the user. I've written a service that collects data from an API. This data will then be processed and stored; when the user requests the data, it is pulled from the store and displayed. Fairly straightforward. We plan on using MongoDB for the app database (storing users, settings, etc.). I've read that Redis is good for storing metric information because of its key/value pair nature. My question is, what would be the best way to move the data from the API service to the point where the user can request it? I've initially come up with storing the API data in another MongoDB store, separate from the app.
Then another service, which runs at a longer interval than the API service, aggregates the raw data in Mongo, moves it into Redis, and then archives the parsed Mongo data into a log file or something similar. The app would then be able to reach into Redis to grab the metrics based on predetermined keys. Is Redis even the right option for something like this? I've also considered swapping out MongoDB for Postgres or MySQL, since operations like SUM run well on a relational platform."} {"_id": "201905", "title": "Javadoc for a static \"overloaded\" method", "text": "My problem is the following: I have an interface that requires its implementations to implement a static method, namely `void load()`. However, until Java 8, it seems we won't get static methods in interfaces. I trust my users to still implement it correctly, but my problem is how to write a proper doc. There are up to 50 implementations of this interface, and not being able to inherit the Javadoc is a bother. What is the best way to proceed in my case? Should I document a random implementation and add `@see` tags in the others? It seems dirty to me. Any suggestion is welcome."} {"_id": "124551", "title": "Best design for Windows forms that will share common functionality", "text": "In the past, I have used inheritance to allow the extension of Windows forms in my application. If all of my forms had common controls, artwork, and functionality, I would create a base form implementing the common controls and functionality and then allow other forms to inherit from that base form. However, I have run into a few problems with that design. 1. Controls can only be in one container at a time, so any static controls you have will be tricky. For example: suppose you had a base form called BaseForm which contained a TreeView that you make protected and static so that all of the other (derived) instances of this class can modify and display the same TreeView. This would not work for multiple classes inheriting from BaseForm, because that TreeView can only be in one container at a time. It would likely be on the last form initialized. Though every instance could edit the control, it would only display in one at a given time. Of course, there are work-arounds, but they are all ugly. (This seems to be a really bad design to me. Why can't multiple containers store pointers to the same object? Anyhow, it is what it is.) 2. For state between forms, that is, button states, label text, etc., I have to use global variables and reset the states on Load. 3. This isn't really supported by Visual Studio's designer very well. Is there a better, yet still easily maintainable design to use? Or is form inheritance still the best approach? **Update** I went from looking at MVC to MVP to the Observer Pattern to the Event Pattern. Here is what I am thinking for the moment; please critique: My BaseForm class will only contain the controls, and events connected to those controls. All events that need any sort of logic to handle them will pass immediately to the BaseFormPresenter class. This class will handle the data from the UI, perform any logical operations, and then update the BaseFormModel. The Model will expose events, which fire upon state changes, to the Presenter class, which will subscribe to (observe) them. When the Presenter receives the event notification, it will perform any logic, and then the Presenter will modify the View accordingly.
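A skeletal sketch of the Model/Presenter/View wiring being described (rendered in Java here; the class names mirror the question, everything else is an illustrative assumption):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Shared model: one instance in memory, observed by every presenter.
class TreeModel {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void subscribe(Consumer<String> listener) { listeners.add(listener); }

    void select(String node) {                 // state change fires the event
        listeners.forEach(l -> l.accept(node));
    }
}

// View: a dumb surface that exposes a callback and a render method.
class BaseFormView {
    Consumer<String> onNodeClicked = n -> {};
    void highlightNode(String node) { System.out.println(this + " highlights " + node); }
}

// Presenter: one per form; wires view events to the model and model events to the view.
class BaseFormPresenter {
    BaseFormPresenter(TreeModel model, BaseFormView view) {
        model.subscribe(view::highlightNode);  // model -> view
        view.onNodeClicked = model::select;    // view -> model
    }
}

public class MvpDemo {
    public static void main(String[] args) {
        TreeModel shared = new TreeModel();
        BaseFormView formA = new BaseFormView();
        BaseFormView formB = new BaseFormView();
        new BaseFormPresenter(shared, formA);
        new BaseFormPresenter(shared, formB);
        formA.onNodeClicked.accept("Reports"); // both forms highlight "Reports"
    }
}
```

In this arrangement, state such as the last pressed button would naturally live in the shared model, so each new form re-reads it on load.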
There will only be one of each Model class in memory, but there could potentially be many instances of the BaseForm and therefore the BaseFormPresenter. This would solve my problem of synchronizing each instance of the BaseForm to the same data model. Questions: Which layer should store state like the last pressed button, so that I can keep it highlighted for the user (like in a CSS menu) between forms? Please criticize this design. Thanks for your help!"} {"_id": "124555", "title": "Are stand up meetings really useful?", "text": "My coworkers tell me stand-up meetings are useless. My project manager also makes us end every morning meeting with a company chant."} {"_id": "124554", "title": "Trade-offs of local vs remote development workflows for a web development team", "text": "We currently have SVN set up on a remote development server. Developers SSH into the server and develop in their sandbox environments on the server. Each one has a virtual host pointed to their sandbox so they can preview their changes via the web browser by connecting to developer-sandbox1.domain.com. This has worked well so far because the team is small and everyone uses computers with varying specs and OSs. I've heard some web shops use a workflow that has the developers work off of a VM on their local machine and then finally push changes to the remote server that hosts SVN. The downside to this is that everyone would need to make sure their machine is powerful enough to run both the VM and all their development tools. This would also mean creating images that mirror the server environment (we use CentOS) and having developers install them into their VMs. And this would mean creating new images every time there is an update to the server environment. What are some other trade-offs? Ultimately, why did you choose one workflow over the other?"} {"_id": "201908", "title": "What is the best way of storing date?", "text": "I am new to storing dates based on time zones. I need to know the standard way to store the date in the datastore. My requirements are: 1. Easy to query the date based on a date range. 2. Show the date in the appropriate time zone selected by the client (I maintain a separate table for the time zones). 3. Able to query using the datastore Admin console as well. Any suggestions/ideas regarding this would be a great help in proceeding further."} {"_id": "79755", "title": "Discovered large security hole in someone else's website... What to do?", "text": "A chap I'm bidding to do some development for has a social network he wrote himself. Not the next Facebook by any stretch, but a few thousand local users. I went to have a look at it to see what level of knowledge he had so I knew how to position myself for this potential job. I tapped a single quote into the login box on the front page and up popped a SQL error. So I tried a simple \" a' or 'a'='a \" in each box and was immediately logged in as the administrator. He had written a fairly comprehensive administration site, by the looks of things. At the bottom of the page was a \"Download SQL Backup\" button. This is where I stopped. My question is this: What do I do? As a developer myself I would appreciate the heads-up. But as someone who's hopefully going to be paid to do some work for him, I wouldn't want to throw all sorts of trust issues into the mix.
Any ideas?"} {"_id": "44731", "title": "Why are there separate L1 caches for data and instructions?", "text": "I just went over some slides and noticed that the L1 cache (at least on Intel CPUs) distinguishes between a data and an instruction cache; I would like to know why this is."} {"_id": "236184", "title": "Do I need to publish deployment scripts for deploying AGPL licensed work (MongoDB)", "text": "MongoDB is dual-licensed with AGPL (the engine) and ASL 2.0 (the drivers). In a nutshell, merely using MongoDB through the drivers does not require you to release your source code (due to the drivers' ASL 2.0 license). AFAIK, only if you directly call the mongo engine do you need to give out the code that's using it (but still not the _application_ code that talks to mongo via the drivers). Check this MongoDB blog entry: http://blog.mongodb.org/post/103832439/the-agpl What if you deploy (install, configure) MongoDB in your deployment scripts, then start/stop/restart those processes, and then maybe create some users via the mongo shell? Do you need to publish your _deployment scripts_? (A bonus question: how can they publish the drivers as ASL 2.0 if those drivers use the AGPL-licensed part of MongoDB over the network? Because they are the authors of both?)"} {"_id": "236185", "title": "How to design an IDisposable that unconditionally needs to be disposed?", "text": "Consider a class that implements `IDisposable`, and that has members in such a way that it will never become eligible for garbage collection when it is not disposed. As it will not be garbage collected, it will not have the chance to use the destructor for cleaning up. As a result, when it is not disposed (e.g. unreliable users, programming errors), resources will be leaked. Is there a general approach for how such a class can be designed to deal with such a situation, or to avoid it? _Example_ : using System; class Program { static void Main(string[] args) { new Cyclical(); GC.Collect(); GC.WaitForPendingFinalizers(); Console.ReadKey(); } } class Cyclical { public Cyclical() { timer = new System.Threading.Timer(Callback, null, 0, 1000); } System.Threading.Timer timer; void Callback(object state) { Console.Write('.'); // do something useful } ~Cyclical() { Console.WriteLine(\"destructor\"); } } (Omitted `IDisposable` to keep the example short.) This class uses a `Timer` to do something useful at certain intervals. It needs a reference to the `Timer` to avoid it being garbage collected. Let\u2019s assume the user of that class will not dispose it. As a result of the timer, somewhere some worker thread has a reference to the `Cyclical` instance via the callback, and as a result, the `Cyclical` instance will never become eligible for garbage collection, its destructor will never run, and resources will leak. In this example, a possible fix (or workaround) could be to use a helper class that receives callbacks from the `Timer`, and that does not have a reference, but only a `WeakReference`, to the `Cyclical` instance, which it calls using that `WeakReference`. However, in general, is there a design rule for classes like this that need to be disposed to avoid leaking resources?
* * * For the sake of completeness, here is the example including `IDisposable` and including a workaround/solution (and with a hopefully less distracting name): class SomethingWithTimer : IDisposable { public SomethingWithTimer() { timer = new System.Threading.Timer(StaticCallback, new WeakReference(this), 0, 1000); } System.Threading.Timer timer; static void StaticCallback(object state) { WeakReference instanceRef = (WeakReference) state; SomethingWithTimer instance; if (instanceRef.TryGetTarget(out instance)) instance.Callback(null); } void Callback(object state) { Console.Write('.'); // do something useful } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (disposing) { Console.WriteLine(\"dispose\"); timer.Dispose(); } } ~SomethingWithTimer() { Console.WriteLine(\"destructor\"); Dispose(false); } } * If disposed, the timer will be disposed. * If not disposed, the object will become eligible for garbage collection."} {"_id": "135499", "title": "Is there a difference between the terms \"Open Source\" and \"Open Source Software\"?", "text": "Until today, I have always used the terms **open source** and **open source software** interchangeably. But then I read Phil Haack's blog post in which he suggests that the two terms are not necessarily referring to the same thing. He proposes the following definitions: **Open Source Software** is source code which is licensed under a license that meets the Open Source Definition. **Open Source** is a development methodology which includes certain characteristics: * Developed in the open with community involvement * The team accepts contributions that meet its standards * The end product has an open source license. This encompasses open source software, open source hardware, etc. So Phil argues that the end product of open source development is open source software, but open source software is _not necessarily_ the product of open source development. Since he also mentions that there are many different understandings of what open source means, what do you think about these definitions? Do you agree with the distinction he has made?"} {"_id": "139777", "title": "RDF and OWL: Have these delivered the promises of the Semantic Web?", "text": "These days I've been learning a lot about how different scientific fields are trying to move their data over to the Semantic Web in order to \"free up data from being stored in isolated silos\". I read a lot about how these fields are saying their efforts are implementing the \"visions\" of the Semantic Web. As a learner (and from purely a learning perspective) I was curious to know why, if semantic technology is deemed to be so powerful, the efforts have been around for years but I and a lot of people I know had never even heard of it until very recently. Also, I don't come across any scholarly articles claiming \"oh, our inferencing engine was able to make such-and-such discovery, which is helping us pave our way to solving....\" etc. It seems that there are genuine efforts across different institutions, fields, and disciplines to shift all their data to a \"semantic\" format, but what happens after all that's been done? All the **ontologies** have been created/unified, and then what?"} {"_id": "135495", "title": "In MVC, what is the difference between controller and router?", "text": "Do they mean the same thing (attaching URLs to actions, or actions to URLs) or is there any difference I'm missing?
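For the controller-versus-router question that begins just above: one way to see the split is that the router only decides which action handles a URL, while the controller is the action. A toy sketch (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Router: maps a URL to an action. Controller: what the action does.
class Router {
    private final Map<String, Runnable> routes = new HashMap<>();

    void register(String path, Runnable action) { routes.put(path, action); }

    void dispatch(String path) {
        routes.getOrDefault(path, () -> System.out.println("404")).run();
    }
}

class PostController {
    void show() { System.out.println("fetch the model, render the view"); }
}

public class RouterDemo {
    public static void main(String[] args) {
        Router router = new Router();
        PostController posts = new PostController();
        router.register("/posts/show", posts::show); // routing concern
        router.dispatch("/posts/show");              // controller concern runs
    }
}
```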
Example: http://github.com/dannyvankooten/PHP-Router vs. http://konstrukt.dk"} {"_id": "41019", "title": "How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?", "text": "I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards: 1. **The standard is very loose.** It describes more of what not to do (don't use Hungarian notation, etc.) than what _to_ do. 2. **The standard isn't always followed.** There are inconsistencies with the code formatting **everywhere**. 3. **The standard doesn't follow Microsoft's style guidelines.** In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification. As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in \"their way\" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: **Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated?** I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and write kludgey code atop _their_ kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable."} {"_id": "134729", "title": "What is the best design pattern for asynchronous message passing in a Chrome extension?", "text": "I have a background script that is responsible for getting and setting data to a `localStorage` database. My content scripts must communicate with the background script to send and receive data. Right now I send a JSON object that contains the command and the data to a function. So if I'm trying to add an object to the database, I'll create JSON that has a `command` attribute that is `addObject` and another object that is the data. Once this is completed, the background script sends a response back stating that it was successful. Another use case of the function would be to ask for data, in which case it would send an object back rather than a success/fail. The code gets kind of _hacky_ once I start trying to retrieve the returned object from the background script. It seems like there is probably a simple design pattern to follow here that I'm not familiar with.
Some people have suggested future/promise design patterns, but I haven't found a very good example. Content Script function sendCommand(cmdJson){ chrome.extension.sendRequest(cmdJson, function(response){ //figure out what to do with response }); } Background script if (request.command == \"addObject\"){ db[request.id]= JSON.stringify(request.data); sendResponse(\"success\"); } else if(request.command == \"getKeystroke\"){ var keystroke = db[request.id]; sendResponse(keystroke); }"} {"_id": "161506", "title": "Why don't inherited methods use child properties? (PHP)", "text": "I'm trying to get the code below to work in child classes, but it keeps failing because it is checking `basicDbClass` rather than the child class. For those complaining and voting my question down because of my use of static methods and variables: the static getBy method serves the purpose of returning an array of matching class objects. If you have a complaint about that methodology, then rather than voting down every question that uses code you don't agree with, look for the next question about that subject and then give them your brilliant answer. If you're really antsy, I'll even do you the favor of asking that question so that you can get this out of your system. The main problem is in `static::$data_table`. When I do the following, I get a fatal error: > access to undeclared static property: basicDbClass::$data_table * * * $obj_array = childClass::getBy('id',5); class basicDbClass { public static function getBy($property,$value) { if(!property_exists(get_called_class(),$property) ) { debug::add('errors',__FILE__,__LINE__,__METHOD__,$property . ' is not a valid property.'); return false; } $obj_array = array(); dbConnection::connect(MAIN_DB); $sql = sprintf('SELECT `id` FROM `%s` WHERE `%s` = \\'%s\\'',static::$data_table,mysql_real_escape_string($property),mysql_real_escape_string($value)); $result = mysql_query($sql); if($result === false) { return false; } while($row = mysql_fetch_assoc($result)) { $obj_array[$row['id']] = new static($row['id']); } return $obj_array; } } class childClass extends basicDbClass { protected static $data_table = 'child_class_table'; protected $name; protected $id; }"} {"_id": "134725", "title": "How do large-scale applications handle GUI creation?", "text": "I'm interested in developing GUI-based Windows applications in C++, but I'm not sure how it's done in professional or large-scale settings. It seems it would take a lot of development time to describe all the elements (e.g., dimensions and such) within code. Presumably it's not all done visually, right? How is it normally done? What methods are used to prevent a lot of time being wasted on defining each and every property a UI control might have?"} {"_id": "179563", "title": "What problems will I face if I remove the concept of interfaces from my code?", "text": "I have been programming for many years but I am still not comfortable with the concept of \"Interfaces\". I try to use interfaces, but many times I don't see a mandatory use for them. I think this is probably because the projects weren't so big, or because interfaces are more useful when teamwork is involved.
So, making another attempt to understand the importance of interfaces, here is my question: What problems will I face if I remove the concept of interfaces from my code?"} {"_id": "179561", "title": "Java-style package naming and second-level country domains", "text": "I own a .co.uk domain, and whenever I've dealt with Java-style package naming, I've gone with `uk.co.domainname`. Once I encountered a package that did the following: `co.uk.domainname`. Is one of these right, or is it something that's up to the developer's discretion? Is there some sort of convention that deals with this?"} {"_id": "129123", "title": "Were the first assemblers written in machine code?", "text": "I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high-level applications (in that order). The current project I'm working on is writing an assembler using a high-level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high-level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a related question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high-level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to \"cheat\" while writing my Hack assembler in 2012, and use some preexisting high-level language to help me out."} {"_id": "129121", "title": "Given some code, how can I learn where it loads resources from if the developer failed to document?", "text": "According to this tutorial, OpenNI does not support custom gestures. However, YouTube has videos of people doing just that. Now, clearly this thing loads gestures from somewhere. How can I find out where, if they've managed to obfuscate the code successfully?"} {"_id": "254583", "title": "Entity Component System Coupling", "text": "Lately I've been working on a small personal project which is basically an Entity Component System framework with auto-updated Systems. While I have a pretty good idea of the way the framework should work, because of a lack of experience I am having trouble actually keeping everything decoupled. Some details on the framework: Each entity is defined by its components. Systems are responsible for actually modifying the entities by changing their components. In order to improve locality of reference, instead of keeping each component in the appropriate entity, all components are stored in homogeneous vectors and each entity keeps a list of indices into each vector.
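A minimal sketch of the storage layout just described: per-type component lists for locality, with each entity holding only indices into them (Java here; component types and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Position { float x, y; }
class Velocity { float dx, dy; }

// One homogeneous list per component type.
class ComponentStore {
    final List<Position> positions = new ArrayList<>();
    final List<Velocity> velocities = new ArrayList<>();
}

// An entity is nothing but indices into those lists.
class Entity {
    final Map<Class<?>, Integer> index = new HashMap<>();
}

// A system tracks only entities that own the components it needs.
class MovementSystem {
    final List<Entity> tracked = new ArrayList<>();

    void update(ComponentStore store) {
        for (Entity e : tracked) {
            Position p = store.positions.get(e.index.get(Position.class));
            Velocity v = store.velocities.get(e.index.get(Velocity.class));
            p.x += v.dx;
            p.y += v.dy;
        }
    }
}

public class EcsDemo {
    public static void main(String[] args) {
        ComponentStore store = new ComponentStore();
        Entity e = new Entity();
        Position p = new Position(); p.x = 1;
        Velocity v = new Velocity(); v.dx = 2;
        store.positions.add(p); e.index.put(Position.class, 0);
        store.velocities.add(v); e.index.put(Velocity.class, 0);

        MovementSystem movement = new MovementSystem();
        movement.tracked.add(e); // subscribed when the entity gained both components
        movement.update(store);
        System.out.println(p.x); // 3.0
    }
}
```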
Since each System modifies specific components, it should keep a list only of the entities with the corresponding components. How I dealt with all of this until now was to have a ComponentManager, an EntityManager and a SystemManager. These classes, however, are very tightly coupled to each other. The EntityManager needs access to the ComponentManager in order to handle the size of the index lists and the mapping of each component type to them. It also needs access to actually add the components to the appropriate vector. Another coupling is between the EntityManager and the SystemManager: whenever an entity is created, it needs to be added to the lists of the appropriate Systems. A general event bus would appear to help, but I am not sure how to implement it without making it global. How do I improve this design by removing coupling while still maintaining the system's functionality?"} {"_id": "10103", "title": "How to enter flow experience for SW development?", "text": "What are your strategies to improve the flow experience when doing work?"} {"_id": "247225", "title": "Where you are both the Authorization Server and Resource Server does it make sense to use OAUTH2 for security?", "text": "I'm curious if there are any use cases where it makes sense to use OAUTH2 when you are both the protected resource and the authorization server. In this case you already have the passwords, for example, so you are protecting your own resource. It seems pretty heavy-handed - is this a common practice, or would it make sense to look at simpler solutions?"} {"_id": "247226", "title": "Can sequence alignment algorithms be used for search implementation?", "text": "Hi, I want to implement a search on a website which tolerates imperfect search terms. Meaning, if the search term is misspelled or slightly different from a 100% match, the function should still return results, sorted by similarity to the search keyword. I have already implemented algorithms like Smith-Waterman and Needleman-Wunsch, which can also be used for database searching. So my idea was to run those algorithms against every keyword in the database and sort the results by score. Is this a good idea? I am using ASP.NET in C#. Are there any tools or tricks which can accomplish this for me without using my own methods? My biggest concern is performance; after all, those algorithms create at least one two-dimensional matrix, calculate its values and perform a traceback. Any suggestions?"} {"_id": "247223", "title": "Managing debug information of C program for Debugger", "text": "I am not sure if the title is appropriate. I have written a parser for CDB files in C# and ANTLR that creates runtime objects which I can pass to the TCF Agent, which takes care of everything that needs to be taken care of on the Eclipse side. Such a CDB file consists of Symbol Records, Function Records, Module Records, Type Records and so on. It describes a C program compiled with SDCC. I already have a good structure for local variables, whose debug information I can retrieve by resolving an ID that consists of * **scope_flag** Indicates the scope of the symbol: `L` local, `G` global, `F` file, ... * **module_base** The address of the module where the local variable is declared * **level** and **block** A tuple that points to a node of a tree where each node is a scope `{ .. }`. It doesn't matter if it is a function scope or a control command (`if`, `for`, etc.)
* **index** The index of the symbol inside a node (therefore it is an ordered list) This works and should be fast enough even for large programs. If Eclipse wants to know anything about a symbol, whether it's the name, type or value, it sends me an ID that looks something like this (values in hex): L@C415.E.1.1 and I can identify the symbol in my parser. The main problem I have now is global symbols like global variables, function names and so on. At the moment I have a `Dictionary` that maps the name of a symbol to its runtime object: /* Dictionary that holds all global-scoped symbols. */ private Dictionary<string, Symbol> m_nameToGlobalSymbolDict = new Dictionary<string, Symbol>(); Now consider a very large program where a lot of global symbols may occur. Is such a large hash table the best I can do? At the moment a global symbol ID looks like this: G@function_name1 G@global_integer1 G@function_name2 The only thing that I might be able to do would be to add identifiers for functions, variables etc., like G@C.function_name1 G@E.global_integer1 G@C.function_name2 where `C` means code and `E` internal RAM, but that would only allow me to split the hash table and nothing else. Can anybody suggest a solution, or tell me whether this is already OK the way I'm doing it? I hope it is clear what I am asking for. If not, please let me know."} {"_id": "121450", "title": "How should I design a correct OO design in case of a Business-logic wide operation", "text": "**EDIT:** Maybe I should ask the question in a different way. In light of ammoQ's comment, I realize that I've done something like what was suggested, which is kind of a fix and is fine by me. But I still want to learn for the future, so that if I develop new code for operations similar to this, I can design it correctly from the start. So, suppose I have the following characteristics: * The relevant input is composed of data connected to several different business objects * All the input data is validated and cross-checked * Attempts are made to insert the data into the DB * All this is just a single operation from the business-side perspective, meaning all of the cross-checking and validations are just side effects. I can't think of any other way but some sort of Operator/Coordinator kind of object which activates the entire procedure, but then I fall into a functional-decomposition kind of code. So is there a better way of doing this? **Original Question** In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work in the scenario for which it was first designed, and now there are more scenarios (like requests from an API or from other devices). So I had to redesign. I found myself moving all the DB code to objects which act like business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which will hold all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which will take the InformationObject as a parameter and act on it. The OperatorObject should activate different objects and validate, or check for existence, or catch any scenario in which the business logic is compromised, and then perform the operation according to the information in the InformationObject. So my question is: Is this kind of implementation correct?
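A compact sketch of the InformationObject/Operator arrangement just described, with each cross-check in its own class so the Operator does not decay into one long procedure (all names hypothetical):

```java
import java.util.List;

// InformationObject: everything one business-wide operation needs, gathered up front.
class LoanRequestInfo {
    String customerId;
    long amountCents;
}

class ValidationException extends Exception {
    ValidationException(String msg) { super(msg); }
}

// One validation or cross-check per class.
interface Check {
    void verify(LoanRequestInfo info) throws ValidationException;
}

// Operator: runs the side-effect checks, then the actual operation.
class LoanOperator {
    private final List<Check> checks;

    LoanOperator(List<Check> checks) { this.checks = checks; }

    void execute(LoanRequestInfo info) throws ValidationException {
        for (Check c : checks) c.verify(info);
        // ... then insert via the business-to-DB objects
    }
}
```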
PS, this Operator only works on a single business-wise operation."} {"_id": "176628", "title": "PHP MVC error handling, view display and user permissions", "text": "I am building a moderation panel from scratch with an MVC approach, and a lot of questions cropped up during development. I would like to hear from others how they handle these situations. 1. **Error handling** Should you handle an error inside the class method, or should the method return something anyway and you handle the error in the controller? What about PDO exceptions, how should they be handled? For example, let's say we have a method that returns true if the user exists in a table and false if he does not exist. What do you return in the catch statement? You can't just return false, because then the controller assumes that everything is alright while the truth is that something must be seriously broken. Displaying the error from the method completely breaks the whole design. Maybe a page redirect inside the method? 2. **The proper way to show a view** The controller right now looks something like this: include('view/header.php'); if ($_GET['m']=='something') include('view/something.php'); elseif ($_GET['m']=='somethingelse') include('view/somethingelse.php'); include('view/footer.php'); Each view also checks if it was included from the index page to prevent it being accessed directly. There is a view file for each different document body. Is this way of including different views OK, or is there a more proper way? 3. **Managing user rights** Each user has his own rights, i.e., what he can see and what he can do. Which part of the system should verify that the user has permission to see the view: the controller, or the view itself? Right now I do permission checks directly in the view, because each view can contain several forms that require different permissions, and I would need to make a separate file for each of them if it were put in the controller. I also have to re-check the permissions every time a form is submitted, because form data can be easily forged. The truth is, all this permission checking and input validation just turns the controller into a huge if/then/else cluster. I feel like 90% of the time I am doing error checks/permissions/validations and very little of the actual logic. Is this normal even for popular frameworks?"} {"_id": "176621", "title": "Automated Qt testing framework", "text": "Can someone recommend a good, robust, \"free\" testing framework for Qt? Our requirements: 1. Should be able to test basic mouse click / mouse move events 2. Should be able to handle non-widget view components 3. Should have \"record\" capability to generate test scripts. 4. Should be automatable, for running it daily. We looked at: 1. Squish - this solves all our problems. But it is just too da** expensive. 2. KD Executor - the download page now links to the squish page and says that's what they recommend for testing. Not sure what they mean by that. 3. TDriver - from nokia.qt. Super difficult to install. Very little documentation. Having a hard time just installing it. I wonder how much harder it would be to write tests. 4. qtestlib - Could not handle non-widget components. Everything has to be a widget to be tested. No \"record\" feature. Can someone help with any other alternative?"} {"_id": "176623", "title": "Saving all hits to a web app", "text": "Are there standard approaches to persisting data for every hit that a web app receives? This would be for analytics purposes (as a better alternative to log mining down the road).
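One commonly described shape for this, sketched with the Jedis client (the key name and record format are assumptions): every request does a cheap O(1) append in Redis, and a periodic job drains the list into the analytics store, keeping the main DB out of the hot path.

```java
import redis.clients.jedis.Jedis;

public class HitLogger {
    private final Jedis redis = new Jedis("localhost");

    // Called on every request.
    public void record(String path, String userId, long timestampMillis) {
        redis.rpush("hits:pending", timestampMillis + "|" + userId + "|" + path);
    }

    // Called by a scheduled job: drain and bulk-insert elsewhere.
    public void drain() {
        String hit;
        while ((hit = redis.lpop("hits:pending")) != null) {
            // ... batch into the analytics store (Mongo/Postgres/flat files)
        }
    }
}
```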
Seems like Redis would be a must. Is it advisable to also use a different DB server for that table, or would Redis be enough to mitigate the impact on the main DB? Also, how common is this practice? It seems like a no-brainer for businesses that want to better understand their users, but I haven't read much about it."} {"_id": "176624", "title": "Programming language features that help to catch bugs early", "text": "Do you know any programming language features that help to detect bugs early in the software development process - ideally at compile-time, or else as early as possible at run-time? Examples of well-known and effective bug-reducing features are: * Static typing and generic types: type incompatibility errors are detected by the compiler * Design by Contract (TM), also called Contract Programming: invalid values are quickly detected at runtime (through preconditions, postconditions and class invariants) * Unit testing I ask this question in the context of improving an object-oriented programming language (called Obix) which has been designed from the ground up to 'make it easy to quickly write reliable code'. Besides the features mentioned above, this language also incorporates other fail-fast features such as: * Objects are immutable by default * Void (null) values are not allowed by default The aim is to add more fail-fast concepts to the language. If you know other features which help to write less error-prone code, then please let us know. Thank you."} {"_id": "11855", "title": "Why have minimal user/handwritten code and do everything in XAML?", "text": "I feel the MVVM community has become overzealous like the OO programmers in the 90's - it is a misnomer that MVVM is synonymous with no code. From my closed StackOverflow question: Many times I come across posts here about someone trying to do the equivalent in XAML instead of code-behind, their only reason being that they want to keep their code-behind 'clean'. Correct me if I am wrong, but is it not the case that: XAML is compiled too - into BAML - and then at runtime has to be parsed into code anyway. XAML can potentially have more runtime bugs, as they will not be picked up by the compiler at compile time - from incorrect spellings, for example - and these bugs are also harder to debug. There already is code-behind - like it or not, InitializeComponent(); has to be run, and the .g.i.cs file it is in contains a bunch of code, though it may be hidden. Is it purely psychological? I suspect it is developers who come from a web background and like markup as opposed to code. EDIT: I don't propose code-behind instead of XAML - use both - I prefer to do my binding in XAML too - I am just against making every effort to avoid writing code-behind, especially in a WPF app - it should be a fusion of both to get the most out of it. UPDATE: It's not even Microsoft's idea; every example on MSDN shows how you can do it in both."} {"_id": "11856", "title": "What's wrong with circular references?", "text": "I was involved in a programming discussion today where I made some statements that basically assumed axiomatically that circular references (between modules, classes, whatever) are generally bad. Once I got through with my pitch, my coworker asked, \"What's wrong with circular references?\" I've got strong feelings on this, but it's hard for me to verbalize concisely and concretely.
Any explanation that I may come up with tends to rely on other items that I too consider axioms (\"can't use in isolation, so can't test\", \"unknown/undefined behavior as state mutates in the participating objects\", etc.), but I'd love to hear a concise reason why circular references are bad that doesn't take the kinds of leaps of faith that my own brain does, having spent many hours over the years untangling them to understand, fix, and extend various bits of code. **Edit:** I am not asking about homogeneous circular references, like those in a doubly-linked list or pointer-to-parent. This question is really asking about \"larger scope\" circular references, like libA calling libB which calls back to libA. Substitute 'module' for 'lib' if you like. Thanks for all of the answers so far!"} {"_id": "205330", "title": "Using a Directory Service instead of a Database", "text": "I've just started working on an existing application that uses an LDAP directory service as its object store instead of a database. Many of my coworkers have been commenting that an application should never use a directory service instead of a database as its object store. I've got to work on it regardless, and won't be able to change it from LDAP to a database anytime soon. That said, my coworkers' comments bring up a question in my mind: _Why is it bad to use a directory provider instead of a database as an object store?_ Could there ever be cases where this would be not only acceptable, but even better than using a database?"} {"_id": "231646", "title": "Visual Design implementation timing in Agile", "text": "We work in Scrum, with the caveat that we don't have a potentially shippable product at the end of each Sprint, instead requiring several hardening/stabilization Sprints before release. One of the reasons for this is that our chief UX designer prefers that we complete all the visual work (such as colors, fonts, exact layout and controls) at the later stages of the project. On one hand it makes sense, because a feature with a decent UI based on a wireframe is usually good enough to receive feedback from the users, so working too hard on the final visuals and styles is not necessary for risk reduction. It might even be wasteful, if the feature fails at usability tests or if, after further user feedback, we decide to change the visual design for the entire system, including features that were already implemented. On the other hand, with this approach we will have a list of features that are not completely developed, with the final polish left until the final Sprints. What do you think we should do? Should we go with the recommendation of the chief UX designer, or should we go for a potentially shippable product? Thank you."} {"_id": "98786", "title": "Programming as a career option for a computer science graduate vs other fields?", "text": "> **Possible Duplicate:** > Why does a computer science degree matter to a professional programmer? > Do I need a degree in Computer Science to get a junior Programming job? There are various kinds of professions. For example: doctors - various degrees are required for this, and you become a doctor or a surgeon after you complete them; engineers - electrical engineer, civil engineer, electronics engineer and many more. Now when I see programmers working in companies, they have all sorts of degrees: a civil engineering degree, a biotechnology degree, an MCom degree, a mechanical engineering degree... and many more.
It seems rather pointless that those people got those degrees, but anyway, does your degree matter when you are a good programmer? Say somebody is good at C++ and is a civil engineer .... **what about stackoverflow or any other companies - would you consider such people for an interview, or would you strictly mention that computer science is a must for applying?**"} {"_id": "59092", "title": "Visual Studio 2010 SP1 Performance", "text": "I've noticed since installing Visual Studio 2010 SP1 that I'm having huge performance issues. It will randomly freeze up on me quite a bit. I had no performance issues with Visual Studio 2010 before the upgrade. The only add-on I have running is ReSharper. I'm wondering if anyone else is experiencing performance issues? If so, have you found a way to fix them?"} {"_id": "241395", "title": "Is it ok to leave an advanced log untranslated?", "text": "I just had a little fight with my boss over this topic (well, the boss always wins, so I will do what he wants done) but I'd like to have the opinion of others about this: We are making a complex application that will be used by expert users (like the one who manages the company backups). I made an **Advanced log**, which contains 3 types of log: * messages * alerts * errors Every error is translated and handled by the GUI, so the user will always know why something went wrong. My concern is about the **messages**: The main goal of those messages is to give a detailed idea of what is happening and help the expert user to solve problems by himself without calling for assistance. To do this I log a message (that will remain in RAM only, with a max message count) for every little thing, like Connecting to server Connection successful Request file list from path XXX Starting to receive file list File list successfully received This can help to know **when** a problem occurred and to determine what is responsible for it. The main problem is: Our program is translated into more than 10 languages! Translating that HUGE amount of little messages will be expensive and it will be used only by some users. You can access this message list using an **Advanced console panel**, so it's clear it's for **Advanced** users. My boss wants those messages to be translated, but he doesn't want to spend a lot on translation, so he asked me to limit myself in using those messages. If I limit myself, this message log can become useless, but if we don't translate it our program loses its \" _friendly, translated approach_ \". What is the best thing to do? Pay more and translate everything? Limit message usage? Try to convince my boss that Advanced Users **SHOULD** know English?"} {"_id": "59096", "title": "Why (not) logic programming?", "text": "I have not yet heard about any uses of a logical programming language (such as Prolog) in the software industry, nor do I know of usage of it in hobby programming or open source projects. It (Prolog) is used as an academic language to some extent, though (why is it used in academia?). This makes me wonder, why should you use logic programming, and why not? Why is it not getting any detectable industry usage?"} {"_id": "125205", "title": "Using IIS as a server for non-http server", "text": "I am developing a system that is designed for multiple forms of interfacing. There is a website, but that is connected through an SDK, as well as an HTTP query interface to access data.
But to improve speed and quality, I was thinking of creating a system inside IIS that gets any message sent to the server and any response, but still lets IIS manage SSL and normal socket connections. Is there a way to host my project in IIS without ASP or any other kind of script with extra behavioral events?"} {"_id": "200129", "title": "Uses of WCF Binding", "text": "From MSDN, we have the following definition of WCF Binding > Bindings specify how a WCF service endpoint communicates with other > endpoints. > > At its most basic, a binding must specify the transport (for example, HTTP > or TCP) to use. You can also set other characteristics, such as security and > transaction support, through bindings. I am looking for practical uses of WCF - the scenarios in which people tweak the abilities of WCF Binding. Some of them are listed below. 1. **Exporting Metadata**: Custom policy assertions are exported by implementing the IPolicyExportExtension interface on a BindingElement class. What are the other practical uses of Binding that you usually employ?"} {"_id": "139866", "title": "Making LISPs manageable", "text": "I am trying to learn Clojure, which seems like a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here comes my problem. As I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I like very much the possibility of writing macros to get a simpler syntax than would be possible otherwise. **But** it definitely adds another layer of complexity. Let me take an example of a migration in Lobos from a blog post: (defmigration add-posts-table (up [] (create clogdb (table :posts (integer :id :primary-key ) (varchar :title 250) (text :content ) (boolean :status (default false)) (timestamp :created (default (now))) (timestamp :published ) (integer :author [:refer :authors :id] :not-null)))) (down [] (drop (table :posts )))) It is very readable indeed. But it is hard to recognize what the structure is. What does the function `timestamp` return? Or is it a macro? Having all this freedom of writing my own syntax means that I have to learn other people's syntax for every library I want to use. > How can I learn to use these components effectively? Am I supposed to learn > each small DSL as a black box?"} {"_id": "84233", "title": "Active Record library with support for both SQL and NoSQL?", "text": "I'm looking for a PHP Active Record library that supports both SQL and NoSQL drivers (mongodb in particular). It doesn't matter if it actually has the NoSQL driver, I can write it myself, as long as the library itself supports it. In particular I'd need one that supports writing custom AR methods in the driver, in case they are methods that simply cannot be transferred to other drivers. The reason I want this is because I'd like to be able to talk to my databases the same way no matter the driver; this will allow me to fairly easily transfer my data between different databases for performance testing. I'm not sure if this is the right place to ask for this; if not, I apologize and would love to know what stackexchange site would be appropriate. For the record, I HAVE googled it, and all I can come up with is AR libraries for SQL databases; I haven't found a single one that combines SQL and NoSQL.
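To make the shape I'm after concrete, here is a rough sketch of the driver seam I have in mind - every name is hypothetical, and I've written it in Java purely for illustration (my actual context is PHP):

    import java.util.Map;

    // The one contract the Active Record base class codes against.
    interface Driver {
        Map<String, Object> find(String collection, Object id);
        void insert(String collection, Map<String, Object> record);
        // driver-specific extras (e.g. Mongo-only operations) could live on a subtype
    }

    abstract class ActiveRecord {
        static Driver driver; // swapped between a SQL and a MongoDB implementation in config

        static void use(Driver d) { driver = d; }

        protected void save(String collection, Map<String, Object> attributes) {
            driver.insert(collection, attributes); // identical call path for SQL and NoSQL
        }
    }

With a seam like that, moving data between databases for performance testing is just a matter of instantiating two drivers and copying records through the shared interface.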
Thank you."} {"_id": "201139", "title": "Why would anyone invest time in Microsoft \"Roslyn\"?", "text": "I have just been reading through some of the white papers & examples from Microsoft \"Roslyn\" and the concept seems very interesting. From what I can tell, it opens up the black box that is the compiler and provides an interface that we can use to get information and metrics about code written in Visual Studio. Roslyn also appears to have the ability to \"script\" code and compile/execute it on the fly (similar to the CodeDom) but I have only come across limited uses for that type of functionality in my experience. Whilst the code analysis & metrics element is an interesting space... it is something that has been around for a very long time and there are numerous providers that have already invested a lot of money into code analysis & refactoring tools (e.g. ReSharper, CodeRush, nCover, etc) and they do a pretty good job of it! Why would any company go out of their way to implement something that can be provided at a fraction of the cost by buying a license for one of the existing tools? Maybe I have missed some key functionality of the Roslyn project that places it outside the domain of the mentioned tools..."} {"_id": "252079", "title": "How modularized should my interfaces be?", "text": "I stumbled upon a specific instance where it seems that modularity and simplicity are in conflict with each other. Usually that's not the case, so I was really unsure how to resolve it. Suppose I would like to make a `queue` interface: template <typename T> class queue { public: virtual ~queue() {} virtual void enqueue(const T& t) = 0; virtual void enqueue(T&& t) = 0; virtual T dequeue() = 0; }; As I was making a couple implementations (some of which are atomic), I noticed that I could really save time by having an \"abstract class\" which called on `empty()` or `full()` methods to help implement condition variables. That way any synchronized class I implement could also just extend that \"abstract class.\" The abstract class, in turn, implemented an extended interface: template <typename T> class bounded_queue : public queue<T> { public: virtual ~bounded_queue() {} virtual bool full() = 0; virtual bool empty() = 0; }; But wouldn't it be simpler if I just shoved those methods into `queue`, perhaps returning `false` by default? The answer to this question is not immediately obvious to me. If it is, then consider the other end of the spectrum. I'm not even sure the above design (in the link) works as expected! How do I decide on something simple, yet modular?"} {"_id": "156854", "title": "Were method cascades ever considered for C#?", "text": "Smalltalk supports a syntax feature called \"message cascades\". Cascades are being adopted by the Dart Programming language. As far as I know, C# doesn't support this. Were they ever considered during the design of the language? Is it conceivable that they could appear in a future version of the language?"} {"_id": "252070", "title": "Domain model for a notification system", "text": "I'm trying to build a modular notification service in an ASP.NET MVC web application. The application generates notifications and the service is responsible for delivering the notifications to the right users. When creating a domain model for the notification service, inheritance naturally comes to mind. ![Notification domain model](http://i.stack.imgur.com/jXxy3.png) One of the requirements for the notification service is that users can be subscribed to particular notification types.
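The closest representation I've come up with is to flatten the subclass into a stored discriminator, roughly like the sketch below (names are hypothetical, and Java stands in for my C# just for illustration):

    // A subscription ties a user to a notification *type* rather than to an instance.
    class Subscription {
        final long userId;
        final String notificationType; // discriminator, e.g. "NewMessage"

        Subscription(long userId, Class<?> notificationClass) {
            this.userId = userId;
            this.notificationType = notificationClass.getSimpleName();
        }

        // Decide at runtime whether a concrete notification falls under this subscription.
        boolean covers(Object notification) {
            return notification.getClass().getSimpleName().equals(notificationType);
        }
    }

It works, but it trades the type system for string matching, which is exactly the part that feels unsatisfying.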
For example, **User** can subscribe to receive **NewMessage** notifications but not **ProfileNotification** notifications. How would one represent a relationship where a User has a Subscription to a notification type, but the notification type is represented through inheritance? Of course, this could be possible via reflection, but I was wondering if there was a domain model that came naturally from my requirements since I couldn't think of one. ![User has subscription](http://i.stack.imgur.com/XqX4g.png) I'm using ASP.NET MVC and Entity Framework Code First, but I think the question is general enough for any object oriented programming language."} {"_id": "156856", "title": "How should I implement permissions?", "text": "I am using cppcms to create a blog-like application and I'm trying to write a permission system, although I'm confused as to what would be the most efficient and manageable method. At the moment, I have a table with permission id, user id, permission type (int) and permission value (an optional parameter, of sorts). For each separate permission, a row must be added. Which method would you recommend? If you want any clarification or extra information, feel free to ask."} {"_id": "156850", "title": "Does this BSD-like license achieve what I want it to?", "text": "I was wondering if this license is: * self-defeating * just a clone of an existing, better-established license * practical * any more \"corporate-friendly\" than the GPL * too vague/open ended and finally, if there is a better license that achieves a similar effect? I wanted a license that would (in simple terms) * be as flexible/simple as the \"Simplified BSD\" license (which is essentially the MIT license) * allow anyone to make modifications as long as I'm attributed * require that I get a notification that such a derived work exists * require that I have access to the source code and be given license to use the code * not oblige the author of the derivative work to have to release the source code to the general public * not oblige the author of the derivative work to license the derivative work under a specific license Here is the proposed license, which is just the simplified BSD with a couple of additional clauses (all of which are **bolded** ). > Copyright (c) (year), (author) (email) > > All rights reserved. > > Redistribution and use in source and binary forms, with or without > modification, are permitted provided that the following conditions are met: > > 1. Redistributions of source code must retain the above copyright notice, > this list of conditions and the following disclaimer. > 2. Redistributions in binary form must reproduce the above copyright > notice, this list of conditions and the following disclaimer in the > documentation and/or other materials provided with the distribution. ** > 3. The copyright holder(s) must be notified of any redistributions of > source code. > 4. The copyright holder(s) must be notified of any redistributions in > binary form > 5. The copyright holder(s) must be granted access to the source code > and/or the binary form of any redistribution upon the copyright holder's > request.** > > > THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" > AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE > LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR > CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF > SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS > INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN > CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) > ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE > POSSIBILITY OF SUCH DAMAGE."} {"_id": "154987", "title": "Why would he say \"We don't want to support MVC3\"?", "text": "I work in a small shop at a fairly big company doing intranet web applications. By small, I mean there is 1 other guy in my position... and he graduated with me last December. (we aren't the only IT, but the only ones in our field) We are switching out an old COBOL system and converting its only used application suite to a Web App. My company has contracted with a Web Application firm to help with this process, who have chosen ASP.NET MVC. During one of the important meetings I asked if they would be using MVC2 or MVC3. Their lead developer said: > \"MVC2, we don't want to support MVC3. haha\" My question is, why is this? This was several months ago and I've been doing extensive self-training gearing up for the MVC switch. From everything I am understanding, MVC3 is just like MVC2 if you don't use Razor, and it fixes a number of smaller bugs that MVC2 had. So in my eyes, I can't see any reason to **NOT** use MVC3. There has to be something I'm missing. Since I don't really have any mentors to turn to in the real world, I'm coming here. What problems are there with MVC3 that might possibly lead him to say this that I'm missing?"} {"_id": "5415", "title": "How to handle demanding clients?", "text": "Frequently, I have been finding myself overloaded with contracts. Most of the time, I find myself juggling with at least 2 projects, in addition to the numerous websites I have to upkeep and perform maintenance on. Unfortunately, many of my clients expect updates constantly - they are constantly adding more to the to-do list than any one programmer could keep up with, and freaking out because the deadline was already overdue when I started on the project. I constantly run into the fact that most clients do not really understand the amount of work that can be involved behind the scenes, especially if it is non-visually-impacting. Does anyone know of good ways to handle these situations that I might be overlooking?"} {"_id": "126159", "title": "Insert grammatical mistakes in a correct sentence", "text": "I'd like to insert grammatical mistakes (not typos) into a correct sentence, to make a small game. For instance: My name is John -> My name are John He leaves the room -> He leave the room I only found some tools to detect languages but nothing about verb/noun/adjective detection/transformation. Is there a tool that could help me?"} {"_id": "28035", "title": "How do you maintain productivity outside of work? (Programming Schedule)", "text": "I enjoy programming, but programming at work is just that, work. I would like to further develop my own personal interests in programming. Throughout the week I imagine myself completing a small project on the weekend or finishing up a programming-related book. However, in reality I often fall short of my expectations. I often will get just one or two chapters of reading done and even less coding.
In reality I will spend time surfing the net, watching television, or visiting friends and slacking off... because it is the weekend. But when Sunday evening rolls around I often reflect on my weekend and I am sorely disappointed with my use of time. So my question is how do you maintain your productivity outside of work? I am sure some programmers couldn't care less about programming in their free time. Although I think the majority of programmers, especially on stackexchange, are passionate about programming. 1. Should I spend the weekend programming, or will I burn out and resent programming if I dedicate that much time to it? 2. How should I go about programming in my free time? Should I set a schedule? How much time should I dedicate to it? **Most importantly, how do I follow that schedule?** It's only human nature to procrastinate. I know there are a lot of questions here. Feel free to answer the ones that relate to how you remain focused outside of work. I am passionate about programming but after 40 hours of programming it can be difficult to maintain that enthusiasm."} {"_id": "126157", "title": "How do delegates fit into ASP.NET?", "text": "A developer told me they used delegates to bind most of their events in ASP.NET. Until then, I did not even know it was possible to use delegates in ASP.NET in a meaningful way. My understanding is that ASP.NET/MVC3 works via HTTP verbs, not events/delegates. Is it even possible to have an event fire from the client to the server? This strikes me as an invalid form of IPC."} {"_id": "216999", "title": "What are the tradeoffs involved in referencing Context in a library?", "text": "`Context` is one of the core classes of Android, and many functions it contains are useful in Android library projects, particularly accessing configuration. What are the trade-offs involved in accessing the `Context` in a library, either by injection or by subclassing `Application` in the library, and subclassing that in the application? Does this make the application brittle or introduce inappropriate coupling?"} {"_id": "216998", "title": "Updating password hashing without forcing a new password for existing users", "text": "You maintain an existing application with an established user base. Over time it is decided that the current password hashing technique is outdated and needs to be upgraded. Furthermore, for UX reasons, you don't want existing users to be forced to update their password. The whole password hashing update needs to happen behind the scenes. Assume a 'simplistic' database model for users that contains: 1. ID 2. Email 3. Password How does one go about solving such a requirement? * * * My current thoughts are: * create a new hashing method in the appropriate class * update the user table in the database to hold an additional password field * Once a user successfully logs in using the outdated password hash, fill the second password field with the updated hash This leaves me with the problem that I cannot reasonably differentiate between users who have and those who have not updated their password hash and thus will be forced to check both. This seems horribly flawed. Furthermore this basically means that the old hashing technique could be forced to stay indefinitely until every single user has updated their password. Only at that moment could I start removing the old hashing check and remove the superfluous database field.
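For what it's worth, the login-time flow I have so far looks roughly like this sketch - `oldHash`/`newHash` are placeholders for the real algorithms, and Java is used only for illustration since any language was invited:

    // The simplistic model from above, plus one extra nullable column.
    class User {
        String email;
        String legacyHash;   // old scheme; nulled out once the user is migrated
        String upgradedHash; // new scheme; null until the user's first login after rollout
    }

    class PasswordMigrator {
        boolean login(User user, String password) {
            if (user.upgradedHash != null) {                  // already migrated
                return user.upgradedHash.equals(newHash(password));
            }
            if (!user.legacyHash.equals(oldHash(password))) {
                return false;                                  // wrong password, nothing changes
            }
            user.upgradedHash = newHash(password);             // transparent upgrade on success
            user.legacyHash = null;                            // null-ness now marks "migrated"
            return true;                                       // caller persists the user row
        }

        private String oldHash(String p) { return "old:" + p; } // placeholder only
        private String newHash(String p) { return "new:" + p; } // placeholder only
    }

Nulling the legacy field at least gives a crude way to tell the two groups apart, though the old algorithm still has to stay around until the last user logs in.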
I'm mainly looking for some design tips here, since my current 'solution' is dirty, incomplete and whatnot, but if actual code is required to describe a possible solution, feel free to use any language."} {"_id": "216997", "title": "Authenticate native mobile app using a REST API", "text": "I'm starting a new project soon, which is targeting mobile applications for all major mobile platforms (iOS, Android, Windows). It will be a client-server architecture. The app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API. We will be using HTTPS, of course. I haven't yet decided if I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions: 1) Like the Facebook application, you only enter your credentials when you open the application for the first time. After that, you're automatically signed in every time you open the app. How does one accomplish this? Just simply by encrypting and storing the credentials on the device and sending them every time the app starts? 2) Do I need to authenticate the user for each (transactional) request made to the REST API or use a token-based approach? Please feel free to suggest other ways for authentication. Thanks!"} {"_id": "129874", "title": "Code Generation and IDE vs writing by hand", "text": "I have been programming for about a year now. Pretty soon I realized that I need a great tool for writing code and learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, a stance encouraged by a lot of reading about programming.[1] However, I started with (my first) Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because: 1. Our class diagram was buggy. 2. Students more experienced in Java said they would write the code by hand. 3. I had never written any Java before and would not understand a lot of the generated code. So I took a different approach and wrote all methods by hand (getters and setters included). My team members have written their parts (partly generated by VP) in an IDE and I was \"forced\" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally we had to implement a GUI and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way. But it looks like for getting this job done (on time) the IDE/code generation approach is superior. Do you have similar experiences? Is Java by the nature of the language just more suitable for an IDE/code generation approach? Or am I lacking the knowledge to produce equal amounts of code \"by hand\"? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html"} {"_id": "160757", "title": "What do you call an interface with no defining methods used as property setters", "text": "In ASP.NET and C# I've run across this before. Your class needs to implement interface `ISomething` in order for something in the super class to supply something to you.
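The general shape I'm describing, as a bare-bones sketch - `ISomething` stands in for whatever the framework interface actually was, and Java replaces the C# purely for illustration:

    // An empty "tag" interface: no members at all, it only labels the class.
    interface ISomething {}

    class PlainThing {}
    class TaggedThing implements ISomething {}

    public class TagDemo {
        // The superclass/framework side: toggle behavior by testing for the tag.
        static void process(Object o) {
            boolean enabled = o instanceof ISomething; // or ISomething.class.isInstance(o) via reflection
            System.out.println(o.getClass().getSimpleName() + ": extra behavior " + (enabled ? "ON" : "OFF"));
        }

        public static void main(String[] args) {
            process(new PlainThing());  // extra behavior OFF
            process(new TaggedThing()); // extra behavior ON
        }
    }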
I can't remember the details, as I ran across it quite a while ago, but it had something to do with session variables in ASP.NET C#. It was part of the .NET framework. At first, I thought this practice was rather silly and could be more gracefully implemented. However, now I'm finding myself implementing this in the architecture of a home-grown project I'm working on. Using reflection, I detect whether a class implements ISomething and toggle a behavior on or off accordingly. Just an empty interface with no methods at all. What is the technical name for this and is it good practice?"} {"_id": "160754", "title": "How do you deal with the costs of too-rapid change?", "text": "Like most modern developers I value Agile principles like customer collaboration and responding to change, but what happens when a product-owner (or whoever determines requirements and priorities) changes requirements and priorities too often? Like several times a day? I recently inherited a smallish code base that was buggy, incomplete, and couldn't even handle the simplest scenario it was supposed to. I can deal with the technical issues but I get several emails, texts, or phone calls a day saying \"OMG you MUST work on this RIGHT NOW! TOP PRIORITY! This is a MUST!!!oneone\" (that's only a slight exaggeration) What makes it even worse is that most of the things are minor details that aren't even relevant to what the software is actually supposed to do and would take days to implement anyway. I've tried explaining that there's only so much time and that we should focus on the most important things first, but something seems to get lost in translation because the same thing happens a day or two later. Is there some sort of Product-Owner-Handler role, in-depth study, metaphor, or quote that can help me reduce the amount of wasted effort or at least explain the costs of this chaotic behavior?"} {"_id": "81620", "title": "Does Rhino have a future?", "text": "I'm looking to add serious scripting into my Java app and JavaScript would be a great language choice. My concern though is the Rhino project and its future. While Groovy/JRuby etc. have seen constant updates, and engines like V8 and SpiderMonkey make continuous and significant performance gains, Rhino languishes with its last release in March '09. I've seen some work on hacking Rhino: * Great Thundering Rhinos! presentation from Oracle * Explore and implement JDK7 InvokeDynamic Google Summer of Code project But nothing solid about actually merging this code into the project and getting an active committer community around it. What is the Rhino road map? For example, is there any plan to bring `invokedynamic` to the Rhino world?"} {"_id": "216270", "title": "How to know how detailed requirements should be?", "text": "This doubt has to do with the requirements gathering phase of each iteration in a project based on agile methodologies. It arose because of the following situation: suppose I meet with my customer to gather the requirements and he says something like: \"I need to be able to add, edit, remove and see details of my employees\". That's fine, but how should we register this requirement? Should we simply write something like \"the system must allow the user to manage employees\", or should we be more specific, writing four points: 1. The system must allow the user to add employees; 2. The system must allow the user to see details of employees; 3. The system must allow the user to edit employees; 4.
The system must allow the user to delete employees; Of course, this is just an example of a situation I was in doubt about. The main point here is: how do I know how detailed I must be, and how do I know what I should register? Are there strategies for dealing with these things? Thanks very much in advance!"} {"_id": "114352", "title": "How do you avoid working on the wrong branch?", "text": "Being careful is usually enough to prevent problems, but sometimes I need to double check the branch I'm working on ( _e.g._ \"hmm... I'm in the `dev` branch, right?\") by checking the source control path of a random file. In looking for an easier way, I thought of naming the solution files accordingly ( _e.g._ `MySolution_Dev.sln`) but with different file names in each branch, I can't merge the solution files. It's not that big of a deal, but are there any methods or \"small tricks\" you use to quickly ensure you're in the correct branch? I'm using Visual Studio 2010 with TFS 2008."} {"_id": "114357", "title": "Storing page-specific javascript on an AJAX-driven site?", "text": "I have a general question about the placement of javascript code in an AJAX-driven web application. At a previous job, we tended to throw everything in one monolithic file. When maintaining that file became too difficult, we split it up into multiple smaller files and \"glued\" them back together with a php \"loader\" (pretty much just a bunch of include statements). Since all of my pages are dynamically loaded through AJAX, I cannot simply have different files called separately per page. The obvious disadvantage to this method is that all of your javascript is loaded by the end user, even if it is not required on first page load. (Which compounds the problem, the more pages you have.) To get around this, I started putting page-specific javascript on the template page itself, in a `` Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the `@` symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate \"no variable substitution\" like PHP does. Filter functions can be applied to variables like `@variable|filter`. Arguments can be passed to the filter: `@variable|filter:@arg1,arg2=\"y\"` Attributes can be passed to tags by including them in `()`, like `p(class=\"classname\")`. You will also be able to include partial templates like: for(@item in @item_list) include(\"item_partial\", item=@item) Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments where @item gets the variable name \"item\" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :) Some questions: * Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else? * Should the @ symbol be necessary in \"for\" and \"if\" statements? It's kind of implied that those are variables. * Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? * This would make it more clear that the \"tag\" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name-reuse.
* Like `tag` would be a normal tag, but `@tag` would be a directive like `@for`, `@if`, `@include`, `@using` * Do you like the attribute syntax? (round brackets) * How should I do template inheritance/layouts? * In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. * In CakePHP, it's kind of backwards: you specify the layout in the controller.view function, the layout gets a special `$content_for_layout` variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. * I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... trying to decide what approach to take * Filtered variables inside quotes: * `\"xxx {@var|filter} yyy\"` * `\"xxx @{var|filter} yyy\"` * `\"xxx @var|filter yyy\"` * i.e., @ inside, @ outside, or no braces at all. I think no-braces might cause problems, especially when you try adding arguments, like `@var|filter:arg=\"x\"`; then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better because then we're consistent... the @ is always nudged up against the variable. I'll add more questions in a few minutes, once I get some feedback. * * * **Semi-colons:** I'm going to use semi-colons to close tags without content. For example, div; would output `<div></div>` and wouldn't eat the next token like it normally would. **Namespaces:** Since I'll be defining the tags in straight C#, and C# supports namespaces, I think I'll just use that namespace and it'll work seamlessly. `namespace.tagname`, but it won't be necessary to include the namespace unless you want to disambiguate. The most recently included (via @using directive) will take precedence. * * * @Tom Anderson: div { p \"This is a paragraph\" p \"This is another paragraph\" } span(class=\"error\") \"This is going to be bright red\" img(src=\"smiley.png\", alt=\":)\") p { \"This \" a(href=\"about-paragraphs.html\") \"paragraph\" \" contains a link.\" }"} {"_id": "72597", "title": "How To Effectively Set Deadline Times As A Freelance or Home-based Developer", "text": "I recently got into home-based, hours-based programming work and one of the problems I am having is that the deadlines I quote to my clients are often not met. Some factors come into play: 1. I run into a coding problem that requires research time, which adds extra time beyond my specified deadline. 2. There are times when my internet is down for half a day or even a whole day. 3. There are some inquiries left unanswered (via email) and many other factors. I was wondering how I can specify deadlines which will take into consideration any unexpected factors that might delay them, which, of course, will not put a frown on my clients' faces, hehe. By the way, a client asked me to develop a site from scratch using Django, with which I have not much experience. He knew that and he consented to me doing some research while in development. I said I would finish the project in about 3 weeks, yet I did not expect Django to be such a massive nerve-wracking framework; my internet was down during some days, and I only worked less than 10 hours during the holy week.
Thanks in advance!"} {"_id": "188875", "title": "With libraries available, should programmers also learn the old way of writing the same things?", "text": "With pre-written programs available that need just editing, should programmers also learn to write them from scratch?"} {"_id": "72593", "title": "How to abbreviate variable names", "text": "I always struggle with abbreviating variable names. Is there any standard for abbreviating variable names?"} {"_id": "34768", "title": "Is there any one standard framework for developing Python GUI apps.?", "text": "There are so many frameworks for writing GUI applications using Python. But is there any one key standard framework? For example, we have a bundle of .NET/C# on Visual Studio. I am thinking from other perspectives also. In the future, if I interview for a Python programmer job, which GUI framework will be considered? I also wonder why there is no IDE that integrates the GUI and the Python language. Choice of flavor is good but over-choice becomes a distraction."} {"_id": "232940", "title": "Continuous Deployment and database format changes", "text": "My current automated workflow looks like this: * I commit changes * I push changes * Build (actually just a lot of tests using a mock database) is triggered * If all tests pass, the new code is deployed to the server Now assuming there is a change in database format - the first 3 steps do pass, but the code will not work on the server because existing data is not compatible with the new source. The solution that I'd call nice is to also provide/test the migration code, which looks like a lot of overhead for something that is executed only once. The solution that I'd call sane is to stop automatic deployment for such changes, perform the migration, then restart the deployment chain. Today I was lucky enough that my server is not really widely used yet (I could simply drop everything and live with it) and no changes were required at all. I wouldn't be so lucky in the future, though. How should I actually handle the database changes?"} {"_id": "202541", "title": "Achieving Zero Downtime Deployment", "text": "I am trying to achieve zero downtime deployments so I can deploy less during off hours and more during \"slower\" hours - or anytime, in theory. My current setup, somewhat simplified: * Web Server A (.NET App) * Web Server B (.NET App) * Database Server (SQL Server) My current deployment process: 1. \"Stop\" the sites on both Web Server A and B 2. Upgrade the database schema for the version of the app being deployed 3. Update Web Server A 4. Update Web Server B 5. Bring everything back online ### Current Problem This leads to a small amount of downtime each month - about 30 mins. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also - there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts. ### Leveraging The Load Balancer I'd love to be able to upgrade one Web Server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of \"stuck\". ### Possible Solution A current solution I am considering is adopting the following rules: * Never delete a database table. * Never delete a database column. * Never rename a database column. * Never reorder a column. * Every stored procedure must be versioned.
* Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited. * Then it becomes 'spFindAllThings_3' when edited again. * Same rule applies to views. While this seems a bit extreme, I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app is deployed for a while, but it just feels dirty. Also - it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake. ### Finally - My Question * Is this sloppy or hacky? * Is anybody else doing it this way? * How are other people solving this problem?"} {"_id": "144261", "title": "Is there a clear list of a requirements analyst's tasks?", "text": "I am reading about requirements analysis as a stage of requirements engineering. I have been having difficulties finding a clear list of the tasks that the requirements analyst is in charge of. In the whole process of requirements engineering, where does the requirements analyst's role begin and where does it end?"} {"_id": "144263", "title": "What does \"enterprise\" mean in relation to software architecture?", "text": "I see the term \"enterprise\" being thrown around by software developers and programmers a lot and, it seems, used loosely. > **en·ter·prise/ˈentərˌprīz/** > > Noun: A project or undertaking, typically one that is difficult or requires > effort. Initiative and resourcefulness. Can someone please clarify what this term actually encompasses? \"At an enterprise level\", \"enterprise scale\"? There are even \"enterprise editions\" of things. What exactly does it mean? It obviously doesn't make sense judging by the above definition, so, more specifically to software, what does one mean when using the word enterprise? **EDIT:** To add a spin on this - how does this term then fit into phrases such as Enterprise Framework Model? What does data access and data context have to do with company-wide descriptions?"} {"_id": "131006", "title": "Should I intentionally break the build when a bug is found in production?", "text": "It seems reasonable to me that if a serious bug is found in production by end-users, a failing unit test should be added to cover that bug, thus intentionally breaking the build until the bug is fixed. My rationale for this is that the build _should have been failing all along_, but wasn't, due to inadequate automated test coverage. Several of my colleagues have disagreed, saying that a failing unit test shouldn't be checked in. I agree with this viewpoint in terms of normal TDD practices, but I think that production bugs should be handled differently - after all, why would you want to allow a build to succeed with known defects? Does anyone else have proven strategies for handling this scenario? I understand intentionally breaking the build could be disruptive to other team members, but that entirely depends on how you're using branches."} {"_id": "131000", "title": "Website Country Detection", "text": "I have a web crawler, and I'm looking for hints that will help me automatically detect a website's country of origin. And by country of origin I generally mean the country the website is targeting.
For example: * http://www.spiegel.de/ -> Germany * http://www.lemonde.fr/ -> France * http://publico.pt/ -> Portugal * http://www.elpais.es/ -> Spain I know there's not a foolproof way of doing it, so I will likely rely on a scoring system. * The domain name; * The content language; * The server's IP address; * `Whois` information; What additional parameters would you use? For the previous examples a combination of domain and content language will do, but many websites have a `.com` domain and a language spoken in more than one country..."} {"_id": "227756", "title": "Why is PHP's method of comparing different types bad?", "text": "I'm working on designing a new programming language and trying to decide how I will do variable comparisons. Along with many different types of languages, I've used PHP for years and personally had zero bugs related to its comparison operations other than situations where 0 == false. Despite this, I've heard a lot of negativity towards its method of comparing types. For example, in PHP: 2 < 100 # True \"2\" < \"100\" # True \"2\" < 100 # True In Python, string comparison goes like this: 2 < 100 # True \"2\" < \"100\" # False \"2\" < 100 # False I don't see any value in Python's implementation (how often do you really need to see which of two strings is lexicographically greater?), and I see almost no risk in PHP's method and a lot of value. I know people claim it can create errors, but I don't see how. Is there ever really going to be a situation where you are testing if (100 == \"100\") and you don't want the string to be treated as a number? And if you really did, you could use === (which I've also heard people complain about but without any substantial reason). So, my question is, not counting some of PHP's weird conversion and comparison rules dealing with 0's and nulls and strings mixed with characters and numbers, are there any substantial reasons that comparing ints and strings like this is bad, and are there any real reasons having a === operator is bad?"} {"_id": "187355", "title": "System for checking code comments relevance", "text": "When I write comments for auto-generated documentation, they can become irrelevant after a few changes to the method. Is there any system to automatically check for and prevent such situations and warn the developer to update the comment? Maybe some VCS hooks?"} {"_id": "50133", "title": "Weekly technology meeting?", "text": "I am thinking of introducing a weekly technology meeting where programmers working on the same project can discuss things like: * current status of the project on the technical side * technology backlog. Things that we may have skipped because of deadlines but that are now coming back to bite us. * technology constraints that are limiting developers from being productive * new and emerging technologies that may apply to the project Basically looking at the project from the programmer's perspective, not the business side. - What would be some good guidelines for a meeting like this? * How long should the meeting last? * Is weekly too often? * Should we time-limit each topic? * What kinds of topics are good for a meeting like this and which ones are bad? * Is 10 people too many? ..."} {"_id": "199863", "title": "Developing Python on Windows and deploying to Linux", "text": "I have a client who would prefer to host their application on Linux. However, my coworkers and I have very little experience with Linux. This is a short project with a low budget, so making choices that save time and money is not just desired, it's a must.
We are also heavy on continuous integration and automation, much of which we already have figured out in Windows and can reuse from previous projects. That said, having the development team learn Linux and rebuild our automation so we can develop on the same environment to which we intend to deploy is most likely not a viable option. (For a larger project, perhaps, but not this one.) The team _is_ familiar enough with Python that writing the application in Python is a viable option (even though most of our development is done in .NET), although we would need to figure out a good packaging mechanism that can run on Windows and be pushed to a Linux box. I don't anticipate needing any libraries unavailable on Windows. Most likely, we will only need `psycopg` and `sqlalchemy` in terms of libraries with native components. All this makes the notion of having developers create the application on Windows, deploying to a Linux testing environment, and then pushing to production after thorough testing seem like a fairly attractive option, but I'm skeptical about it. Linux and Windows are very different operating systems, and I'm concerned about gotchas that could creep up and make life very difficult. Are there any real concerns with doing this (beyond the typical file path differences and other common things easily solved by good coding practices)? I think that a lot of shared hosting providers host on Linux, and I can't imagine everyone using them has developed on Linux for the past umpteen years. Is this done more commonly than I'm aware?"} {"_id": "50138", "title": "is OpenID really that bad?", "text": "I have seen this question on Quora where lots of people seem to agree that OpenID is bad, even going as far as stating that: > OpenID is the worst possible \"solution\" I have ever seen in my entire life > to a problem that most people don't really have Then I've seen articles and tweets referencing that question saying that OpenID has lost, and Facebook won. It's sad to read as I quite like OpenID (or at least the idea behind it). I literally hate getting yet another login/password for every page (I'll forget it anyway) - it's a pretty serious issue for me and I know lots of people with the same problem. Thus I thought that OpenID is a great solution, but I'm not sure anymore. So the question is: should I still bother to implement OpenID, or is it not worth it? What is the most robust and convenient (from the user perspective) way to identify and authenticate a user?"} {"_id": "187358", "title": "In-house SMS (using a SIM card) receiving and sending?", "text": "My needs are very simple but I don't know the terms I should be Googling for: I need to occasionally send an SMS to a machine and that machine is going to answer with another SMS to the number that sent the first SMS. I've got no say as to how that machine works: it's closed and all it does is receive SMS and send SMS in reply to SMS received (and uniquely to the number that sent the first SMS) and there's no other way to interface with it. Upon receiving the \"response SMS\", I need to update a server / DB (a regular Java webapp server + SQL DB) with the info contained in the SMS. There isn't going to be a lot of volume (only a few SMSes daily) and robustness isn't that important (a few SMSes can be missed). What would be a possible architecture for something like that? I was thinking about using two or three cheap smartphones and giving them SIM cards and then programming them to regularly (once every eight hours or so) send an SMS.
Then these smartphones would receive an SMS back from the machine they contacted and I'd intercept that SMS and update my server/DB accordingly. _(I'll have physical control over these smartphones so giving any permission needed to any app won't be an issue)_. Is this something easily doable? Technically, if such a solution could work, how can I access the SMS functionalities of the phone? And how can I have the phone update my server/DB? (the phone can be hooked to the Internet, so I take it I can simply do an HTTP POST to my server). Or is there already something (preferably free or open-source) offering similar functionality? (maybe something not totally unlike what \"zapier\" or \"ifttt\" does, where I could create a rule saying: _\"If I receive an SMS containing the word xxx, then send the SMS using an HTTP POST to the URL yyy\"_ ). Note that I did configure the server(s) hosting the Webapp myself and develop the entire webapp myself: so the \"programming\" part ain't that much of a problem. My issue is that I don't know how to \"create a bridge\" between the SMS coming in response to another SMS and my webserver. I hope I explained the issue simply enough: basically I need some guidance on the architecture to use here (which fits programmers.stackexchange as far as I can tell from just having read its FAQ)."} {"_id": "209661", "title": "What about using MVC as the way to provide Responsive Web Design?", "text": "Responsive Web Design shows the user different elements -- or elements arranged in different ways -- by using media queries (if the device is a desktop or laptop, show them this; if a tablet, show them that; if a \"phone,\" show them the other thing) or other methodologies. What about, though (especially if your app/site uses MVC anyway), leveraging MVC to return different Views based on the type of device / user agent? Each Model/Controller pair could have at least three Views (desktop/laptop, tablet, phone, as well as perhaps more granular gradations and/or Views tailored to larger devices) and, based on the size of the user agent's real estate, invoke the appropriate View. I'm thinking this might be a more natural and easier-to-implement way of optimizing the experience for all users, at least for those conversant with MVC. How exactly this is implemented (how the user agent is determined and the appropriate Controller ActionResult is invoked) I'm not sure, though... thoughts? ## UPDATE Response to the comment and answer: I'm thinking more along the lines of this scenario, which, IMO, media queries won't satisfactorily handle: Your app/site has as its centerpiece a map. There are ancillary but vital pieces that are placed around the map (top, bottom, and side) that all fit on a tablet (barely) and larger devices just fine. On a phone, though - no way - there's only room for the map, and even then the map is almost too puny. The only way I can think of to deal with this is to show the map full-screen on the phone, with buttons or links in states like Wyoming, Montana, and Nevada that will invoke the parts that surround the map on larger devices but will monopolize the screen on a phone. In each case, the currently displaying View will need links to [re]open the other portions of the app/site. Otherwise, the only way it could be usable with a phone is if the user is carrying a magnifying glass around with him (or zooming and swiping around the screen like a madman, but IMO that's a pain in the gluteus maximus).
And so, in such a scenario it seems to me that Views would be easier to implement than altering the CSS, as this is a major renovation of the screen, not just a rearranging of furniture."} {"_id": "107227", "title": "Under what circumstances are flowcharts still a valuable and useful tool?", "text": "When I first started programming, I relied heavily on flowcharts (and printer spacing charts). While I was in COBOL class, I couldn't start writing any code until my flowchart was signed off by the instructor. Back then, I had to make a flowchart for everything. Today, twenty-five years later, I find myself only flowcharting two types of things: very specific algorithms where the logic is tricky, or very general concepts to ensure that I get all the big steps defined and in the proper order. Are there other use cases for flowcharts that I've simply overlooked?"} {"_id": "188795", "title": "Sorting rows off an autoincrementing primary key", "text": "Is it a bad practice to rely on an auto-incrementing primary key to sort rows in a table? A coworker and I were having an argument about this subject. We need to be able to find the last-inserted row in a table, and we have no create/update time columns (which might be considered a bad practice in itself, but that's a different argument). We obviously can't rely on the natural order of the rows, since that can be inconsistent. Is there anything inherently wrong with using the primary key in this way, given that it's an immutable value? I should have noted: our implementation is using Hibernate (an ORM) to fetch our objects. We're not using any native queries -- only JPQL or Hibernate's functionality."} {"_id": "88784", "title": "Is it worth being a computer languages polyglot?", "text": "You can often hear that programmers should learn many different languages to improve themselves. I still go to school and don't have big programming experience (a little more than a year). But what was a noble intention to improve my programming skills turned into some kind of OCD: I feel that **I won't calm down until I learn all relatively known programming languages.** And here is the question itself: Will being a programming languages polyglot actually help you (And I don't mean the usual \"a programmer should know at least all paradigms\"; I mean really **all** languages you usually hear about)? Does anybody have a similar experience? Does it help with job/skills/career? How often are you able to apply those skills?"} {"_id": "24421", "title": "Is it getting harder to hire VB.NET developers?", "text": "I'm a consultant, and my last two engagements have been at VB.NET shops. It's become apparent to me that these organizations have a really hard time finding FTE developers. Have any of you observed that VB.NET developers are getting harder to find? Any thoughts as to why?"} {"_id": "151440", "title": "Does heavy JavaScript use adversely impact Googleability?", "text": "I've been developing the client-side for my web-app in JavaScript. The JavaScript can communicate with my server over REST (HTTP)[JSON, XML, CSV] or RPC (XML, JSON). I'm writing this decoupled client in order to use the same code for both my main website and my PhoneGap mobile apps. However, recently I've been worrying that writing the website with almost no static content would prevent search-engines (like Google) from indexing my web-page. I was taught about this restriction about 4 years ago, which is why I'm asking here, to see if this restriction is still in place.
**Does heavy JavaScript use adversely impact Googleability?**"} {"_id": "100945", "title": "What's the best way to logically compare two XML files?", "text": "I want to compare two XML files logically. That means that the following text lines should be the same for the comparison tool: 1. 2. I'm a Windows developer and have tried some comparison tools: Beyond Compare, WinMerge, Total Commander, but all of them compare XML like a normal text comparison tool. Are there any other tools or approaches I can take?"} {"_id": "182496", "title": "Which source is quotable for the popularity of programming languages?", "text": "I'm writing a paper right now, and I need a quotable source for the popularity of programming languages. One source I know is the TIOBE index; however, there are several others if you search Google, and the ranking is different on every index. Which one would you suggest to be quotable?"} {"_id": "61577", "title": "Should I keep my GitHub forked repositories around forever?", "text": "So I've forked someone else's repository, made a few changes, submitted a pull request, and my changes made it into the product. Great! But... what should I do with my forked repository? Is there a compelling reason for me to keep my repository around, or should I go ahead and delete it? I don't plan on making any additional contributions, but if I change my mind I assume I can always just re-fork it. I'm not really concerned about keeping a backup. I'm more worried about breaking links, losing commit messages, etc."} {"_id": "61572", "title": "Project In A Week / development bootcamp", "text": "Our team is thinking of doing a \"Project In A Week\" (bootcamp), and I'm interested to know if anyone else has experience of doing this or has any advice? The idea behind it is to get away from the distractions of the office, motivate each other, and build our bonds within the team, in order to come up with an innovative and profitable product in a short space of time. The plan is to get the whole of the dev team (about 5 devs), a designer, a project manager, and a couple of sales and marketing people staying in a conference centre/hotel for a full working week. We'll be completely focused on building one web app (planned in advance) and getting it live and on the market within the week. We'll work quite long days but in the evenings we'll have some fun together as a team. There would be a couple of members of the team left in the office to ensure we're not distracted by day-to-day client support. Similar 'immersive' approaches are used by training companies such as Firebrand. Good idea? Terrible idea? What should we do to incentivise the team? Any thoughts/experiences/advice would be greatly appreciated. Cheers"} {"_id": "22118", "title": "Should we use progressing job titles for programmers?", "text": "I'm currently thinking about job titles for my team of programmers. Historically it's been pretty flat, with most people just \"Programmer\", and a very few people \"Senior Programmer\". But both those titles cover a wide diversity of experience and salary. I was thinking of replacing this with a progression of five levels - * Developer * Senior Developer * Principal Developer * Architect * Senior Architect I'd be interested to hear what people think are the pros and cons of such a scheme. On the positive side, I've found that programmers actually covet good titles. On the negative side, it's been suggested to me that such a hierarchy can create resentment. I'd welcome your thoughts.
**Update:** Thanks for all the great answers! It's certainly changed my thinking on this subject, and I will probably retain the flatter structure at this point."} {"_id": "77419", "title": "Migrating myself from .Net 1.1 to .Net 4.0", "text": "I have been working with .Net for 7 years: ASP web apps, Windows Forms, Windows Services, mostly done in C#, but some in VB.Net. We started out with .Net 1.1 and stayed on .Net 1.1. Recently I did a few projects in .Net 4.0, which I'm sure are no different from those done in .Net 1.1, and I guess they could easily be compiled for it (haven't tried it, though). I see that there are some new tricks and methods, and obviously the framework has grown quite a bit... 1. I wonder if there is any difference from 1.1 to 4.0, in code, techniques, language abilities (you don't have to list them, just point out your favorites if there are any), etc? 2. What would you recommend I do to learn the new things introduced in frameworks later than 1.1?"} {"_id": "228890", "title": "License for my Software", "text": "I am nowhere near good at deciding what type of license to use and I don't want to be sued or have my work stolen, so I am going to ask. Here is my situation: I am creating software for my boss and the YMCA which helps keep track of our employees, their students, and the number of lessons they have remaining (in a nutshell; the program is much more extensive than that). I have spent almost 100 hours creating it and it's not completely done. Some of the hours spent were at work while being paid (not sure if that plays a factor). Here is a list of conditions that I desire: 1. My software to not be redistributed in any way without my knowledge and written consent. 2. To not be liable for any damage it causes, for maintaining the software, or for its success. 3. I do not want the software to be sold without my knowledge and consent either. 4. I want the software to be closed source. If there is a license that covers all of these I would be grateful; also, could someone explain to me where I need the license displayed and whether it needs to be in the source code (since I am keeping it closed source)?"} {"_id": "228895", "title": "MVC Widget optimization when accessing CSS and Resources", "text": "So we're trying to re-imagine our web solution in an MVC fashion, going from an old WebForms-based solution to working with ASP.NET MVC with a bootstrap main menu, and adding functionality in the form of widgets using HTML.Action() that calls a controller and action to fill in the information on that part of the page. We're now thinking about how the CSS and resources, which we are going to be getting for each widget separately in its own controller, can be used effectively in this scenario. If we only use HTML.Action and let a controller add our functionality to the page, we lose our connection to the page as a whole, and we could possibly load the resource package and CSS for the same type of objects over and over again. How would one solve this and make it possible for our web portal to have knowledge of which CSS and resource files have already been added to the solution?"} {"_id": "104361", "title": "Is Razor or XSLT better for my project?", "text": "I'm in the early stages in the design of a system that will essentially be split into two parts. One part is a service and the other is an interface, with the service providing data through something like OData or XML. 
The application will be based on the MVC architectural pattern. For the views, we are considering using either XSLT or Razor under ASP.NET. XSLT or Razor would help to provide a separation of concerns, where the original XML or response represents your model and the XSLT or Razor view represents your view. I'll leave the controller out for this example. The initial design proposal recommends XSLT; however, I suggested the use of Razor instead as a more friendly view engine. These are the reasons I suggested for Razor (C#): * Easier to work with and build more complicated pages. * Can easily produce non-*ML output, eg csv, txt, fdf * Less verbose templates * The view model is strongly typed, where XSLT would need to rely on convention, eg boolean or date values * Markup is more approachable, eg nbsp, newline normalization, attribute value normalization, whitespace rules * Built-in HTML helper can generate JS validation code based on DTO attributes * Built-in HTML helper can generate links to actions And the arguments for XSLT over Razor were: * XSLT is a standard and will still exist many years into the future. * It is hard to accidentally move logic into the view * Easier for non-programmers (which I don't agree with). * It's been successful in some of our past projects. * Data values are HTML-encoded by default * Always well formed So I'm looking for arguments on either side, recommendations, or any experience making a similar choice."} {"_id": "240440", "title": "How do programmers with non-Latin keyboards work?", "text": "I was reading a question on stackoverflow where the code was interspersed with Russian comments and I came to wonder; how do programmers who use non-Latin keyboards work? Do they keep switching keyboard layouts in the OS or use separate keyboards? And why do they really bother switching just to write one-liner comments?"} {"_id": "98798", "title": "How to interview a guy with Progress experience for a .NET/C# job?", "text": "It is clearer how to interview someone experienced in Java / C++ or other languages closer to C#. In these cases the language and specific technologies are less important; what matters are the OOP principles. But how do I do it with someone who is coming from Progress to .NET when I, as the interviewer, don't know Progress at all? How much does Progress experience count for .NET/C# or OOP? Are there programming principles in common between the two worlds? Are these really two different worlds?"} {"_id": "75817", "title": "Large Multi-Touch Display Monitor?", "text": "I'm sure that most shops use a standard whiteboard or a fancier \"glass-board\" to discuss ideas during a meeting. After the meeting, people will usually take photos of the artifacts using a point-and-shoot camera and then e-mail the digital photo to all the participants. I'm thinking of buying a really huge LCD screen to facilitate discussion. This LCD screen will be outfitted with a multi-touch \"filter\" that will be connected to a PC via the USB port. The PC will be connected to the screen via the HDMI port. The PC will run on a standard Windows 7 installation. Instead of writing/drawing on a whiteboard using a marker, the participants will use their fingers to draw/write stuff on the screen surface. Whatever input the \"filter\" receives, it will be sent back to the PC. So the application that has the focus is the one responsible for making sense of the input. I'm thinking of using \"PAINT\" for now. 
After the discussion, the \"canvas\" will be saved as an image and forwarded to the participants' e-mail. ^That's the easy/boring part. Here's the cool part: We can write an application that will fully utilize the multi-touch technology to facilitate discussion more effectively. I'm thinking of writing apps like those in the Microsoft Surface computing videos. We can also outfit the PC with a camera and use facial recognition technology to automatically figure out who the participants are so we can provide a single \"Distribute a copy of this canvas to the participants\" button. Questions: 1. Has anyone used a gigantic multi-touch screen over a whiteboard to facilitate discussion? 2. If yes, how successful was it? 3. Is there software already available today that makes these things possible? 4. Do you know a vendor/provider that provides a complete solution for this kind of thing? Thanks!"} {"_id": "208047", "title": "Is modifying an object's __dict__ to set its properties considered Pythonic?", "text": "I have a class that inflates objects from rows found in a database (or another source, e.g. MongoDB, a CSV file, etc.). To set the object's properties, it does something like `self.__dict__.update(**properties)` or `obj.__dict__.update(**properties)`. Is this considered Pythonic? Is this a good pattern that I should continue to use, or is this considered bad form?"} {"_id": "199957", "title": "Does Turing-completeness imply the possibility of malware?", "text": "Is it possible to build an operating system that contains a Turing-complete compiler (language?) but is unable to run any malware? Or is there any definition of malware? This question popped into my mind as I was wondering why Windows has more malware than Linux. If Linux contains the C programming language and its compiler, I think it is possible to write a Linux program that works similarly to Windows viruses. But there is less malware for Linux than for Windows, although there is Wine for Linux to run Windows programs."} {"_id": "35412", "title": "Windows Forms Development - Books", "text": "So I'm reading a book on architecting applications for the enterprise from Microsoft Press. It's a great book, and I'm learning a lot. However, it's very high level, and can be applied to a lot of different domains (not even just .NET, even though that's how the book is geared). The first project I want to develop after reading the book is a Windows Forms application in .NET 4.0. I want to use a lot of the book's concepts to develop the app, but I really want a great dedicated Windows Forms book to read before starting that's really going to tell me all I need to know about developing Windows Forms apps. I found plenty of books for .NET 2.0 and such, but nothing for Windows Forms in the new .NET 4.0 Framework. Any suggestions?"} {"_id": "35413", "title": "Should I understand SVN before I jump to GIT?", "text": "I work in a department where no one has ever used source control before, including myself. I am trying to push the concept. I have spent a little while researching SVN and have learned some basics. I can create/update/checkout/commit with the command line and from Tortoise. I am starting to learn how to tag and branch but am still quite confused about conflicts between branches and trunk, etc. I am still learning, but I do not have a physical person who can show me anything. It's all from books/tutorials and trial and error. From what I have read online it seems like `git` is the better thing to know, but it's also more complicated. 
I don't want to overwhelm myself. Should I continue to master svn before moving to git, or would I be wiser to just jump to git now? Are there pros and cons to both approaches?"} {"_id": "145130", "title": "Common name for the firmware that can test a product board", "text": "This may be silly to ask. Consider the internal circuit board of any electronic device (cell phone, for example) driven by a micro-controller, also on the same board. I'm about to write firmware that will have features to test various peripherals and other things on the board (IO pins, EEPROM, LCD etc). Is there any common name used for such firmware? For example, a bootloader is a kind of firmware that allows the code memory to be written during run time. EDIT: It will not be a self-test. External commands will be given to the micro and the micro will execute them. Each command will test a specific feature."} {"_id": "73243", "title": "Choosing a particular stack because of the IDE, tools and ease of setting up the dev env", "text": "I almost always choose the Microsoft stack over anything else because of Visual Studio, the available tools and how easily I can get started programming with a particular new framework. **Please, that was only an example; it may be other stacks for other people.** I am not building real-time, mission-critical apps where a nuclear power plant would explode if my app didn't perform well... More like small business apps. But when I tell this to other programmers, even before they react, I feel I might have embarrassed myself - choosing a stack because of the IDE?!.. ease of setup?! I feel the \"pros\" choose the stack by other \"important\" things I am not aware of, and in the grand scheme of things, IDE and ease of setting up the dev env don't matter much. I feel it is a little immature to do what I am doing. Is it really a bad thing? Note that I also choose the Microsoft stack for a lot of other reasons (C# is an amazing language and .Net is an awesome platform).. but IDE, tools and dev env setup play a big part for me. Also note that I am a 1-man dev shop, very occasionally 2 or max 3."} {"_id": "101107", "title": "What exactly does undefined mean in JavaScript? Why is it there? What uses does it have? How could it be useful?", "text": "In JavaScript, we have something called **undefined**. I said something, because I really don't know if it's a base class, or a built-in variable, or a keyword, or anything else. I just know that it's there. To see it in action, you can simply write: undefined; typeof undefined; Can anyone please explain to me why this **thing** has been inserted into JavaScript? We have the `null` value in this language, so it shouldn't be just another `null`. In other languages, when we don't know the value of a property or variable, we simply set it to null. Here we can do the same thing. How can we use this _thing_ in JavaScript?"} {"_id": "230910", "title": "Could not using weak entities denormalize the DB?", "text": "In the following scenario in an ER and relational model: Object 1-----N [Part] 1-----N [Subpart] 1-----N [Item] (Where \"1-----N\" is a 1-to-N relationship, and [Entity] is weak). Solution 1: 'Object' is a strong entity; and 'Part', 'Subpart' and 'Item' are weak entities: Object (*IDObject,...) Part (*IDObject, *IDPart,...) Subpart (*IDObject, *IDPart, *IDSubpart,...) Item (*IDObject, *IDPart, *IDSubpart, *IDItem,...) 
(Attributes marked with * form the composite key of the table.) In this particular case of \"Items of Subparts of Parts of Objects\", another way of designing and implementing the same problem could be: Solution 2: Object 1-----N Part 1-----N Subpart 1-----N Item (All entities are strong) Table implementation: Object (*AutoincrementID,...) Part (*AutoincrementID,...) Subpart (*AutoincrementID,...) Item (*AutoincrementID,...) (Each table has its own ID: a unique autoincrement field.) My questions are below; I would like an academic response, I mean the theory behind these practices. 1. Which is the correct way of designing and implementing this scenario? 2. Is \"Solution 2\" a denormalized model/implementation? In which way?"} {"_id": "230915", "title": "Common Methodologies for Writing \"Post-Mortem\" Reviews", "text": "We recently added a relatively large feature to our product that involved the entire R&D department (with all the different teams within it). This feature included UI development, server-side development, and huge migrations of SQL schemas (and other stuff that I myself was not involved in at all). The development process for this feature was chaotic - Front-End and Server teams were not synchronized, SQL migrations broke the DB, and product specifications were incomplete, which meant that at every step of the way we found new issues with the initial definition, requiring changes to core concepts that the developers had relied upon. All in all, a feature that was planned to be released within 10-14 days took roughly 24 (intensive) days of development. I was requested to write a report on 'What went wrong' (from my team's side - each team writes such a report from their pov). What are the common methodologies for writing such reports, specifically in the field of software development? Also - is there some formal name for such reports? EDIT & ANSWER: Apparently, such a report/review process is commonly referred to as a 'Project Post-Mortem' (other names exist as well). After figuring this out, I found these two resources that outline suggested methodologies for gathering the necessary data, organizing it, analyzing it, and formulating solutions for discovered issues (as well as some general information on these reviews and their purposes): 'A Defined Process For Project Post Mortem' - http://www.csee.umbc.edu/courses/undergraduate/345/spring12/mitchell/readings/aDefinedProcessForProjectPostMortemReview.pdf 'Post-Mortem Reviews: Purpose and Approaches in Software Engineering' - http://www.uio.no/studier/emner/matnat/ifi/INF5180/v10/undervisningsmateriale/reading-materials/p08/post-mortems.pdf"} {"_id": "101102", "title": "Are static classes with static methods considered SOLID?", "text": "SOLID includes the Liskov substitution principle, which has the notion that \"objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program\". Since static classes with static methods (a bit like the `Math` class) do not have instances at all, is my system considered SOLID if I have static classes with static methods?"} {"_id": "165572", "title": "Where would my different development rhythm be suitable for the work?", "text": "Over the years I have worked on many projects, some successful and a great benefit to the company, and some total failures with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue. 
The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools \"fit my hand\" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem to be the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: the methods (ideally) followed by most teams go design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense.) Music analogy: some types of music have a strong downbeat while others have prominent syncopation. In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code, debug and refactor ad hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints of tool or language choices, I easily gain a \"feel\" for the data, requirements etc. and eventually do good work. In formalized positions where paperwork and pure \"design\" come first and any coding only later (even for small proof-of-concept projects), I am lost at sea and drown. Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized, methodology-oriented team ways of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things. So option 2) is preferred. So where can I find such positions? How common is my approach, and where is it seen as viable but different, and not dismissed as undisciplined cowboy-coder ways?"} {"_id": "165573", "title": "How to maintain the same code fragments on multiple projects", "text": "I am an indie developer working on multiple Android projects. I am having a problem maintaining the same functionality across different projects. For example, three of my apps use the same 2 classes; since they are different projects, when I need to make a change in those classes, I need to make it three times. Is there a simple solution to this kind of common problem?"} {"_id": "230919", "title": "How would I handle a set of differing event classes with differing handler interfaces in a single event processor?", "text": "I'm working on an event processor framework for a simple game I'm writing, in which multiple types of events are handled in a loop. Since these events carry distinct pieces of data (e.g. one carries a player and position, another carries a message and timestamp), I ended up creating different classes for them (though they still all implement a common interface, which is _currently_ a marker interface). Within my event processor, I have, for each type of event, a set of event handlers implementing a corresponding interface (e.g. an anonymous class implementing `PlayerInteractHandler` that handles `PlayerInteractEvent`s). Since these interfaces are being implemented through a JavaScript engine (Rhino), I am unable to use a single generic interface. 
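To make the shape of these event/handler pairs concrete, here is a minimal sketch of what I mean. Only `PlayerInteractEvent` and `PlayerInteractHandler` are real names from my code; the fields and the second pair are hypothetical:

```java
// Marker interface shared by all event classes.
interface Event { }

// One event type carries a player and a position.
class PlayerInteractEvent implements Event {
    final String player;
    final int x, y;
    PlayerInteractEvent(String player, int x, int y) {
        this.player = player;
        this.x = x;
        this.y = y;
    }
}

// Its handler interface, implemented from Rhino scripts.
interface PlayerInteractHandler {
    void handle(PlayerInteractEvent event);
}

// Another event type carries a message and a timestamp, with a parallel handler.
class ChatEvent implements Event {
    final String message;
    final long timestamp;
    ChatEvent(String message, long timestamp) {
        this.message = message;
        this.timestamp = timestamp;
    }
}

interface ChatHandler {
    void handle(ChatEvent event);
}
```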
In trying to implement the actual engine, I currently have code like the following (methods inlined to show the idea behind the code in a more compact representation): if (recvdEvent instanceof FooEvent) { // getHandlerList() returns a List due to limitations of generics for (EventHandler eh : getHandlerList(FooHandler.class)) { FooEvent eventAfterCast = (FooEvent) recvdEvent; } } Obviously there are a bunch more `else if (recvdEvent instanceof BarEvent)` blocks, and I catch and handle ClassCastExceptions. The problem with this is that it seems like a misuse of an object-oriented language, and I have to stringently verify other code by hand to retain type safety. The other alternatives I know of are for my event to return the `Class` in a certain method, or to use polymorphism / dynamic dispatch, but that would require a _reflective_ cast, or a polymorphic method in the handler class that would still break complete type safety (and add another layer on top of the interface itself). Multiple event processors would lead to code duplication, and would imply having separate threads for each as the current event processor is designed. If I were to retain a single thread and processor I would need a single queue, whose declared type is `AbstractQueue`, and I'd be back at square one. Am I approaching this in a completely incorrect manner?"} {"_id": "165579", "title": "How to convince a non-technical client that their application spec needs to be simplified?", "text": "Oftentimes I am faced with the situation where a new client comes to me with an application that has literally 100s of unnecessary features, and it is quite clear that things need to be drastically simplified for the project to have any chance of succeeding. How do you convince the client to take a more Minimum Viable Product (MVP) approach and simplify? edit: So the current top answer is to provide the client with a time/cost estimate for the huge application. I'm not too fond of this answer because it doesn't address the real problem with this situation. And that is - it's a bad practice to spec out a massive application and then try and build it from the get-go. I feel much more comfortable initially building a small, simple MVP foundation. And then adding small features to that foundation one by one. So how do I convince the client to approach building software in this way?"} {"_id": "181212", "title": "Best Practice to Avoid \"Playing Telephone\" with Constructor Arguments", "text": "I find that the encapsulation required by OO has me frequently passing parameters down the line from parent to child to great-grandchild to second grand-nephew once removed (not actually that bad). But inevitably in the process, and especially if graphics are involved, there will be garbling by the third iteration of `someConstructor(int minX, int minY, int maxX, int maxY)`. I found at least one question here suggesting that parameter objects are the solution. These seem to me more like a kludge designed to satisfy a low parameter-count fetish than genuinely useful. Besides, in these scenarios, I am generally cutting the numbers into smaller pieces (of screen) with each step. It occurs to me that if these constructor calls are laid out line by line higher up the chain, I'd have easy comparisons. But this seems to lose all semblance of OOPiness. In my most recent adventure with find-the-transposed-measurements, I was drawing a music keyboard. I don't want the main view to know or care what or exactly where the G# key is. So, best practices? 
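For illustration, here is a stripped-down sketch of the hand-off I am describing; every class name in it is hypothetical. By the third layer, a single transposed argument is nearly invisible:

```java
// Each layer slices its rectangle and forwards the pieces to the next layer down.
class KeyboardView {
    KeyboardView(int minX, int minY, int maxX, int maxY) {
        // left half of the screen goes to the first octave
        new OctaveView(minX, minY, (minX + maxX) / 2, maxY);
    }
}

class OctaveView {
    OctaveView(int minX, int minY, int maxX, int maxY) {
        // a swapped pair here (say, maxY where maxX belongs) compiles fine
        new KeyView(minX, minY, maxX, (minY + maxY) / 2);
    }
}

class KeyView {
    KeyView(int minX, int minY, int maxX, int maxY) {
        // draws one key; only this class knows where the G# key actually lands
    }
}
```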
Please don't say, go slower, be less sloppy."} {"_id": "51661", "title": "Ethics of soliciting App store app reviews?", "text": "I see more than a few developers soliciting 5-star ratings and good reviews for their App store apps, in their blogs, websites, app store descriptions, even dialogs that pop up in the app after you've used it for a while. What do people consider to be the ethical guidelines regarding such review and ratings solicitations? What's over the line? (Besides obviously evil stuff, such as paying to have someone forge multiple negative reviews about your competitor's apps, etc.)"} {"_id": "181214", "title": "2 different tasks in template method", "text": "I've read about the Template Method Pattern but I'm not sure about one thing. The steps (methods) of an algorithm are supposed to be in the template method. In the case where my template method's algorithm covers 2 completely different tasks that I would like all my objects to perform (let's say one is drawing and the other calculating), can I still put all these steps in one method and call it my template method?"} {"_id": "55568", "title": "Is Oracle WebCenter 11g equivalent to a larger version of SharePoint?", "text": "I have some experience with SharePoint. My company has decided to use Oracle WebCenter to create an internal portal. None of the IT staff have any experience with Oracle WebCenter, and I think we could use SharePoint for this job, as we have been using it until now. So, what are the advantages of using Oracle WebCenter? What are your experiences with Oracle WebCenter 11g? And how different is it from SharePoint? What can I do with Oracle WebCenter that I cannot do with Microsoft SharePoint?"} {"_id": "140567", "title": "Should one use a separate database for application data and user data?", "text": "I've been working on a project for a little while and I'm unsure which is the better architecture. I'm interested in the consensus. The answer to me seems fairly obvious but something about it is digging at me and I can't pick out what. **The TL;DR is:** how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one? **The detailed version is:** if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to be able to update the application data within this database periodically, should this be split into two databases so that one can simply download the updated application data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn't feel right, and I can't pick out quite why."} {"_id": "140564", "title": "How can I work efficiently on a desktop sharing workflow?", "text": "I am a freelance Magento developer, based in Spain. One of my clients is a Germany-based web development company and they're asking me something I think is impossible. OK, maybe not impossible but definitely not a preferred way of doing things. One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. 
Their client has **forbidden them to download the files** from his server. My client is asking me now to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have **read-only ssh access** to their client's server, they came up with this _solution_: set up a **desktop/screen sharing session** between one of their developers' stations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer: > show me file foo.php The developer will then open this foo.php file in his IDE. I'll then have to ask him to show me the bar method, the parent class, etc... Remember that it's a read-only session, so forget about putting a `Zend_Debug::log()` anywhere, and don't even think about an xDebug breakpoint (they don't use any kind of debugger, sic). Their client has also **forbidden them to use any version control system**... My first reaction when they explained this to me was (and I actually did say it out loud to them): > Well, find another client. but they took it as a joke from me. I understand that from a business point of view rejecting a client is not a good practice, but I think that the conditions of this assignment make it impossible to complete. At least according to my workflow. I mean, the way I work or learn a new framework/program is: 1. download all files and a copy of the db to my pc 2. create a git repository and a branch 3. run the application locally * use breakpoints * use Zend_Debug::log() 4. write the code and tests 5. commit to the git repo 6. upload to the (test/staging first if there is one, production if not) server I have agreed to try the desktop sharing session, although I think it will be a waste of time. On one hand I don't mind, they pay me for that time, but I know myself and I don't like the sensation of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to say to them that I cannot (don't want to) do it. Well, I'll first try this desktop sharing session: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything. So I try to keep an open mind and I am always willing to learn new stuff. So my questions are: 1. Can this desktop-sharing workflow work? What should be done in order to make the most of it? 2. Taking into account all the obstacles (geographic locations, no local copy, no git), is there another way for me to work on that project?"} {"_id": "220363", "title": "Is using a PUT with side effects acceptable (REST)?", "text": "I want to create an undo history whenever the user updates a form. Because it's an update, I want to use a PUT request. However, I read that PUT needs to have no side effects. Is it acceptable to use PUT here? Are there better alternatives? `PUT /person/F02E395A235` { time: 1234567, fields: { name: 'John', age: '41' } } On the server: doPut('person/:personId', // create a new person snapshot ) Edit: The history will be visible to the user; calling it multiple times would result in multiple versions. The solution was to check whether the version was unique before creating it."} {"_id": "145738", "title": "Should a string constant be defined if it's only going to be used once?", "text": "We're implementing an adapter for Jaxen (an XPath library for Java) that allows us to use XPath to access the data model of our application. This is done by implementing classes which map strings (passed to us from Jaxen) into elements of our data model. 
We estimate we'll need around 100 classes with over 1000 string comparisons in total. I think that the best way to do this is simple if/else statements with the strings written directly into the code, rather than defining each string as a constant. For example: public Object getNode(String name) { if (\"name\".equals(name)) { return contact.getFullName(); } else if (\"title\".equals(name)) { return contact.getTitle(); } else if (\"first_name\".equals(name)) { return contact.getFirstName(); } else if (\"last_name\".equals(name)) { return contact.getLastName(); ... However, I was always taught that we should not embed string values directly into code, but create string constants instead. That would look something like this: private static final String NAME = \"name\"; private static final String TITLE = \"title\"; private static final String FIRST_NAME = \"first_name\"; private static final String LAST_NAME = \"last_name\"; public Object getNode(String name) { if (NAME.equals(name)) { return contact.getFullName(); } else if (TITLE.equals(name)) { return contact.getTitle(); } else if (FIRST_NAME.equals(name)) { return contact.getFirstName(); } else if (LAST_NAME.equals(name)) { return contact.getLastName(); ... In this case I think it's a bad idea. The constant will only ever be used once, in the `getNode()` method. Using the strings directly is just as easy to read and understand as using constants, and saves us writing at least a thousand lines of code. So is there any reason to define string constants for a single use? Or is it acceptable to use strings directly? * * * PS. Before anyone suggests using enums instead, we prototyped that but the enum conversion is 15 times slower than simple string comparison so it's not being considered. * * * **Conclusion:** The answers below expanded the scope of this question beyond just string constants, so I have two conclusions: * It's probably OK to use the strings directly rather than string constants in this scenario, **but** * There are ways to avoid using strings at all, which might be better. So I'm going to try the wrapper technique which avoids strings completely. Unfortunately we can't use the string switch statement because we're not on Java 7 yet. Ultimately, though, I think the best answer for us is to try each technique and evaluate its performance. The reality is that if one technique is clearly faster then we'll probably choose it regardless of its beauty or adherence to convention."} {"_id": "38382", "title": "System Analyst vs Computer Programmer?", "text": "My question here relates to jobs. I currently hold a System Analyst position. What is the difference between a System Analyst and a Programmer/Analyst? Is this position higher than a programmer? Or how should I upgrade myself?"} {"_id": "157590", "title": "Why is it preferred to write a commit message in present tense/imperative mood?", "text": "I often read/overhear that good commit messages should be written in the present tense or imperative mood when describing the change, e.g. `Fix xyz` instead of `Fixed xyz`. What are the advantages of doing so? 
Are there differences between the various VCS (I mostly read this about git, but that may be because I mainly work with git)?"} {"_id": "246519", "title": "Controlling version numbers in sprints", "text": "Traditionally, software build numbers fit into the format * Major * Minor * Release * Build where a Major version is implemented whenever there are breaking changes, Minor when new mini features are added, Release when something is published, and Build each time it's built. I find the last two digits work really well in a CI environment: each CI build increases the Build number and each release to live increments the Release number. However, major waterfall work and breaking-change developments are discouraged in more modern methodologies. We prefer to release little and often and so don't tend to make great breaking changes. Given this it's very difficult to determine when to create a new Major version. We try to avoid making major breaking changes and so are getting very high minor/release numbers but haven't had the excuse to move up a major version yet. What criteria should be used to determine when Major and Minor builds should be made (particularly for web-based applications)? I'm aware there is date-style versioning but I'm interested in Major.Minor.Build.Release only."} {"_id": "110978", "title": "Code Analysis & Reporting: Maven vs. Jenkins", "text": "My team (~10 devs) has recently migrated to Maven (multi-module project, ca. 50 modules) and we now use Jenkins for continuous integration. As the overall setup is running, we are planning to include code analysis and reporting tools (e.g., Checkstyle and Cobertura). As far as I can see, we basically have two options to generate such reports: either use Maven's reporting plug-ins to generate a site **and/or** use Jenkins plug-ins to do the reporting. **Which approach is better, and for which reasons?** It seems both have their merits: Maven-Reporting: * everything is configured in a single place (the pom files) * report generation on a local machine (i.e., without Jenkins) is straightforward and does not require any additional steps Jenkins-Reporting: * adjusting the reporting configuration seems more intuitive than with Maven and does not interfere with the main task of Maven in our setup (building and deploying) * a single web interface to check the overall status of the project, so that the reported issues are easier to find and harder to ignore (i.e., no need to click through the Maven site, some nice history plots, notifications, etc.) Has anyone made a similar decision in the past, or is this a non-issue for some reason? (How well does it work? Any regrets?) Is it advisable to go **both ways**, or is that setup too tedious to maintain? Are there aspects of the problem that I am unaware of, or some advantages I have not noticed (e.g., is Maven-based reporting integrated nicely with m2eclipse)? Any pitfalls or typical problems that we are likely to encounter? **EDIT:** We tried out **Sonar**, as suggested below (including its Jenkins plug-in). So far, it works pretty well. Sonar is relatively simple to install and works on top of our Maven setup (one just invokes `mvn sonar:sonar` or re-configures a Jenkins job, that's all). This means we do not have to maintain a duplicate configuration, as all our Maven settings are used automatically (the only exception being exclude patterns). 
The web interface is nice, and -- even better -- there is an **Eclipse plug-in** that retrieves all issues automatically, so that no one really _has to_ browse to the Sonar website, as **all issues can be automatically displayed in the IDE**."} {"_id": "187269", "title": "Can I use the patented Octree algorithm in a public programming challenge?", "text": "Similar to this question, but for a very different situation. I've been working up some example code to accompany a proposed programming challenge and asking questions about the difficult bits. In my latest question, the answer I've received suggests that I use an Octree instead of a BSP-Tree. But Wikipedia says it's patented. And a public programming challenge -- much more than a private program or even free (beer) software -- really is encouraging people to reuse the idea without regard to ownership. So I can't (_legally_ or _in good conscience_) use it here, can I?"} {"_id": "36262", "title": "What guidelines do you suggest for using Objective-C Properties?", "text": "Objective-C 2.0 introduced properties. While I personally think properties are a nice addition to the language, I have seen a trend of making every instance variable a property. Apple's sample code is no exception to this. I believe this is against the spirit of OOP, since it exposes a lot more implementation details of a class to the client than they need to know. What guidelines do you suggest for the proper usage of properties in Objective-C?"} {"_id": "25461", "title": "Shouldn't recruitment be the other way round?", "text": "I really don't know why nobody's thought of this so far, but recruitment should be the other way round. Engineers should have some sort of common platform where they register skills or domains they are interested in and demonstrate their capabilities, and companies should take it up from there. I think this is way more effective since if you are paid well to do work that you love doing, you will generally make a fine job out of it. Does anybody know of some recruitment platform like this?"} {"_id": "4475", "title": "What are the best ways to get beta testers?", "text": "I have several projects coming up soon for public release, both commercial and open source. The projects are all downloadable, not web apps. I've worked on them alone and would like to get feedback during a beta period, but I don't currently have a large audience (though the markets are large). What are the best ways to get participation in the betas? Are there any existing sites or communities that specialize in software testing that I can reach out to? At this point, I'm specifically looking for technical testers who aren't intimidated by diving into the code and can help spot security bugs, logical errors, etc. **Edit**: I'm looking for websites or communities similar to Invite Share. Invite Share itself would be perfect, but there doesn't seem to be any public information about how to submit a beta. **Bounty Explanation**: While Joel's article on running a beta is helpful, I wonder if there isn't an existing community available for beta testing of any sort, technical or user. As a self-taught and sole developer, I don't have a lot of technical contacts that it would be appropriate to approach for testing. I did propose a Beta Testing site in Area 51 a few months ago, but it seems as if it either got buried, there wasn't a whole lot of interest, or it's a poor fit for StackExchange. 
If you know of existing testing communities, sites like InviteShare, or other ways to get testers, please share."} {"_id": "186972", "title": "Which asp.net technology fits this situation best?", "text": "I think I can count WebAPI out, but WebForms, WebPages, and MVC are all possibilities. I want to create an asp.net web site that is primarily static content and links to other sites. The only \"fancy\" bit will be a Bing map with pushpins that I add - but even these are static. And there will be a photo gallery. Oh, and ads, too. And, finally, it needs to work well on phones and tablets as well as desktop browsers. Which asp.net technology \"flavor\" is most suited for this type of web site? ## UPDATE In VS 2012, Web Pages projects are not available beneath Templates | Visual C# | Web. What _is_ there: ASP.NET Empty Web Application ASP.NET Web Forms Application ASP.NET MVC 3 Web Application ASP.NET MVC 4 Web Application ASP.NET Dynamic Data Entities Web Application Does this mean that Web Pages are passé, or that Web Pages and Web Forms are the same thing?"} {"_id": "85301", "title": "Understanding the Microsoft Public License (MS-PL)", "text": "I'm looking at using a few open source products in a commercial software application I'm working on. One of them is licensed under MIT, which I understand as allowing commercial software linking. However, the other open source product is licensed under MS-PL, but I don't understand whether that license is fully compatible with commercial software. So the question is, can I use MS-PL licensed OSS in a commercial/proprietary/for-sale application? Thanks."} {"_id": "74142", "title": "What does Dijkstra mean when he recommends an exceptionally good mastery of one's native tongue?", "text": "Dijkstra writes here: > Besides a mathematical inclination, an exceptionally good mastery of one's > native tongue is the most vital asset of a competent programmer. I do not understand the latter part of this quote. Can you please explain or elaborate? P.S. I grew up in India. I speak Bengali at home; I speak Marathi in the community that I live in; Hindi is the national language and very widely spoken, so I know that; and in school and college I was taught with English as the first language. Of course, now I think in a multitude of languages and I must admit I don't have _mastery over any_. Is this really affecting my programming aptitude? If yes, how? And are there any _solutions_?"} {"_id": "85302", "title": "Flash development under Ubuntu", "text": "It's unfortunate, but I'm taking a course that requires me to work in Flash CS3 (specifically programming), which would make me use Windows. I'm very used to development under Ubuntu, and booting into Windows would require me to switch a hard drive. I was wondering: * Could I use the Flex 4.5 SDK to develop (something like a console application), then later hook it up in Windows with the GUI I'm required to design? (In other words, is the Flex 4.5 SDK compatible with CS3?) * Is there a good lightweight editor that I can edit ActionScript 3 in?"} {"_id": "85304", "title": "What is the best practice to deliver a task to a developer?", "text": "As a very new software PM/Team Leader, I just want to know the best way to deliver all the required information about a task to the developers on my team. At the moment, I just provide them with: * GUI interface (if one is used) * DB schema (if one is used) * Blueprint of the main classes (roughly like the sketch below). I think that's not enough. 
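By \"blueprint\" I mean nothing more than bare stubs along these lines (a hypothetical example; every name in it is made up):

```java
// Skeleton handed to the developer: intended types and seams, no behaviour yet.
// (In the real hand-off each type would live in its own file.)
interface OrderRepository {
    Order findById(long id);
    void save(Order order);
}

class Order {
    private long id;
    private String customerName;
    // getters/setters omitted in the hand-off
}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    // TODO: validation and workflow rules to be agreed with the developer
}
```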
We usually have long discussions about the details; I just want to decrease the area of the discussion. What should I do? Do you think we should at least have a code review process?"} {"_id": "85309", "title": "Does the Decorator Pattern exist in the Java IO classes?", "text": "For an assignment, I have to find out which of the Gang of Four design patterns the classes `java.io.Reader` and its subclasses `java.io.PushbackReader`, `java.io.BufferedReader` and `java.io.FilterReader` were built with. According to this post, the design pattern would be the Decorator Pattern. This only makes sense to me if `PushbackReader`, `BufferedReader` and `FilterReader` can be decorated to be used at the same time, effectively creating a `BufferedPushbackFilterReader`. Is that the idea?"} {"_id": "200212", "title": "Determining the cost of impediments (waste)", "text": "For some time now our Scrum teams have experienced recurring impediments caused by factors external to the team. The teams have discussed the impediments in their retrospectives and also brought them up at \"Scrum of Scrums\". It seems that the impediments may require involvement by management, as they require some rather significant changes in the way we do things and the way our technical environments have been configured. In a small setting these kinds of issues would probably have been easy to deal with because the team would have more control, but in this setting there are multiple teams, stakeholders and parties. I'd like to hear your experience in making waste and costs visible. Do you simply estimate the hours wasted on the recurring impediments (such as \"we waste 10 hours every sprint waiting on the build server\") or do you have a more systematic way of gathering and showing the waste? I would like to collect waste based on Six Sigma (and Lean Software Development) and estimate the waste cost in terms of story points. E.g. at every retrospective, highlight the waste in seven categories with the number of story points wasted in each category. The seven categories would be: Partially Done Work, Extra Features, Relearning, Handoffs, Task Switching, Delays, and Defects. At the end of the Retrospective, there would then be a clear indication of the cost of external impediments that management would easily be able to act on. What do you think?"} {"_id": "200214", "title": "Cross-Compile ARM Program to Intel", "text": "I have searched around for a way to run a program meant for ARM processors on an Intel computer, but I can only find ways to do the reverse, to compile Intel programs for ARM. Are there any open-source cross-compilers that will allow me to do so? Thanks for your help."} {"_id": "200216", "title": "Independence and estimation of user stories that rely on a shared predecessor", "text": "Let's say I have user stories about using a product catalog in a shop: * As an administrator I can add/modify/delete catalog items (one or more user stories, doesn't matter here) * As a customer I can search the product catalog * As a customer I can view product details Each of these stories relies on a predecessor: design and create a database to store product information. I can express the db creation as: 1. Another story: As a shop owner I want to store information about products I sell 2. A task to be done in whichever story is chosen to be done first In the first case I create a story dependency; in the second I am wondering where to estimate this additional task. Let's say it adds an additional two story points. To which story should I add these points? 
All (and re-estimate remaining stories in the future), or none? I think the second option can also hinder estimation - I have to remember to add these story points to dependent stories and then possibly remove them. How should I deal with such situations?"} {"_id": "200219", "title": "Which could be a good design pattern for complex numeric calculations between three or more different data models?", "text": "The source code I'm working on at the moment performs numeric calculations between a bunch of different properties belonging to different data models. All the calculations are coded in a big method with a lot of If statements that make it very complex and difficult to change, and it contains some bugs. I have not found a design pattern that fits this problem, which I think is very common in any financial application. Any help with this? The source code is C# with .NET Framework 4."} {"_id": "205688", "title": "Ported Functions Licensing", "text": "I have found several functions in Python 2.7.2 to be very useful and I recreated them in C++ for my own uses. How do I properly give Python credit for them? Do I even have to? I never actually looked at their source code. I just wrote functions that output the same values as the Python versions of the functions."} {"_id": "141522", "title": "if/else statements or exceptions", "text": "> **Possible Duplicate:** > Defensive Programming vs Exception Handling? I don't know whether this question fits better on this site or on Stack Overflow, but my question relates to practices rather than to some specific problem. So, consider an object that does something. And this something can (but should not!) go wrong. This situation can be resolved in two ways: first, with exceptions: DoSomethingClass exampleObject = new DoSomethingClass(); try { exampleObject.DoSomething(); } catch (ThisCanGoWrongException ex) { [...] } And second, with an if statement: DoSomethingClass exampleObject = new DoSomethingClass(); if(!exampleObject.DoSomething()) { [...] } The second case in a more sophisticated way: DoSomethingClass exampleObject = new DoSomethingClass(); ErrorHandler error = exampleObject.DoSomething(); if (error.HasError) { if(error.ErrorType == ErrorType.DivideByPotato) { [...] } } Which way is better? On one hand, I heard that exceptions should be used only for truly unexpected situations, and if the programmer knows that something may happen, they should use if/else. On the other hand, Robert C. Martin in his book Clean Code wrote that exceptions are far more object-oriented, and simpler to keep clean."} {"_id": "166039", "title": "Why are exceptions considered better than explicit error testing?", "text": "> **Possible Duplicate:** > Defensive Programming vs Exception Handling? > if/else statements or exceptions I often come across heated blog posts where the author uses the argument \"exceptions vs explicit error checking\" to advocate his/her preferred language over some other language. The general consensus seems to be that languages that make use of exceptions are inherently better / cleaner than languages which rely heavily on error checking through explicit function calls. Is the use of exceptions considered better programming practice than explicit error checking, and if so, why?"} {"_id": "135651", "title": "Do We Have a Responsibility to Improve Old Code?", "text": "I was looking over some old code that I wrote. It works, but it's not great code. I know more now than I did at the time, so I could improve it. 
It's not a current project, but it's current, working, production code. Do we have a responsibility to go back and improve code that we've written in the past, or is the correct attitude \"if it ain't broke, don't fix it\"? My company sponsored a code review class a few years ago and one of the major takeaways was that sometimes it's just good enough; move on. So, at some point should you just call it good enough and move on, or should you try to push projects to improve old code when you think it could be better?"} {"_id": "139173", "title": "Caching Business Objects in MVC application", "text": "I figured this was more of an architectural question, so I chose to post it here rather than Stack Overflow. So I'm building an MVC web application and have just finished writing the code that wraps my calls into the DB (DAL) and gives me access via an interface to Create, Update, and Delete my Business Objects. Quick example just so you can get familiar before I go into my real question - here's how I would load up a stored record in the database for someone on the front end: BusinessObjects.Record record = DAL.LoadRecord(recordId); Now once the `record` is loaded up, I can provide it to the front end (web) via a `Model` to one of my pages, or by attaching it to the `ViewBag` or something similar. How it gets there isn't important or what I'm worried about; it's more about how I can maintain state in a stateless (web) environment. The problem I am dealing with right now is: say I provide this `record` to the front end, and my page reads from it and generates a UI around it. At this point, the `record` is more or less gone. It has been created, passed, and read, but because of the stateless environment, is now lost. I can re-create this record based on the data I have on the front end on its way back, but is something like this usually handled by adding this `record` to the cache, and then just working with it there until it's fully ready to be persisted once again (something like `Saver.SaveRecord(record);`)? This would make things a lot quicker for the user, because they wouldn't have to be calling `Save` and `Load` every single time a page switched or something happened, but I didn't know if this was the correct architecture or if there's a common pattern that I have ignorantly ignored. I've tried to look up some links and have just found examples of _how_ to set something like this up, but what I really want to know is: _should_ I be setting something like this up, or is there a better way to do it? You'll have to excuse the beginner question. It's pretty obvious this problem has been solved many times, I'm just unaware of some of the more common patterns for doing so, and therefore wasn't sure exactly what to search for."} {"_id": "254532", "title": "What optimizations can be done for soft real-time code in C#?", "text": "I'm writing a soft real-time application in C#. Certain tasks, like responding to hardware requests coming in from a network, need to be finished within a certain number of milliseconds; however it is not 100% mission-critical to do so (i.e. we can tolerate it being on time most of the time, and the 1% is undesirable but not a failure), hence the \"soft\" part. Now I realize that C# is a managed language, and managed languages aren't particularly suited for real-time applications. However, the speed with which we can get things done in C#, as well as the language features, like reflection and memory management, make the task of building this application much easier. 
Are there any optimizations or design strategies one can take to reduce the amount of overhead and increase determinism? Ideally I would have the following goals: * Delay the garbage collection until it is \"safe\" * Allow the garbage collector to work without interfering with real-time processes * Thread/process priorities for different tasks Are there any ways to do these in C#, and are there any other things to look out for with regard to real-time work when using C#? The platform target for the application is .NET 4.0 Client Profile on Windows 7 64-bit. I've currently set it to Client Profile, but this was just the default option and wasn't chosen for any particular reason."} {"_id": "203748", "title": "Is there a tool or process to help FOSS authors agree on a license?", "text": "The Evercookie project has several contributors and there is no explicit license for the code. There is currently discussion on the dev mailing list trying to figure out what the licensing options are. * Is there any tool or process that can be used to find consensus among the various licenses? * What can be done if some contributors are non-responsive, can't be found, etc?"} {"_id": "87236", "title": "What should Acceptance tests be written against?", "text": "I'm starting to get into writing automated Acceptance tests and I'm quite confused about what to write these tests against, specifically which layer in the app. Most examples I've seen are Acceptance tests written against the **Domain**, but how about tests like: > Given Incorrect Data When the user submits the form Then Play an Error Beep These seem to be a fit for the **UI** and not for the **Domain**, or probably even the **Service layer**."} {"_id": "203745", "title": "Demonstration of garbage collection being faster than manual memory management", "text": "I've read in many places (heck, I've even written so myself) that garbage collection _could_ (theoretically) be faster than manual memory management. However, showing is a lot harder to come by than telling. I have never actually **_seen_** any piece of code that demonstrates this effect in action. Does anyone have (or know where I can find) code that demonstrates this performance advantage?"} {"_id": "203742", "title": "log4cxx: is it a stable option to include as part of a distributed library?", "text": "We are porting our Java API library to C++. (Our target platforms are Linux and Windows.) Since we have minimal C++ experience, the learning curve has been pretty steep, but overall we have been able to make a clean port so far. In Java we use log4j, and are looking to use log4cxx in the C++ version. It took us a few hours to get log4cxx to build on Windows (due both to our inexperience and to build documentation that seems out of date). We haven't yet tried to build on Linux. 
But some of these classes have a lot of clients, and I'm not ready to refactor all of them to start passing in the dependencies yet. So I'm trying to do it gradually; keeping the default dependencies for now, but allowing them to be overridden for testing. One approach I'm considering is just moving all the \"new\" calls into their own methods, e.g.: public MyObject createMyObject(args) { return new MyObject(args); } Then in my unit tests, I can just subclass this class, and override the create functions, so they create fake objects instead. Is this a good approach? Are there any disadvantages? More generally, is it okay to have hard-coded dependencies, as long as you can replace them for testing? I know the preferred approach is to explicitly require them in the constructor, and I'd like to get there eventually. But I'm wondering if this is a good first step. **EDIT:** One disadvantage that just occurred to me: if you have real subclasses that you need to test, you can't reuse the test subclass you wrote for the parent class. You'll have to create a test subclass for each real subclass, and it will have to override the same create functions."} {"_id": "185443", "title": "What's a good strategy for managing static data in an SOA?", "text": "I'm working on a web application that sits on top of a number of RESTful web services, interacting primarily with those services through JSON formatted messages over HTTP. Our application has a great deal of static data that are read from these services (internationalization lookups, configuration, etc.) during operation. Unfortunately, we work pretty closely with the team who develops the web services, so we know the backend architecture fairly intimately. Most of the services are using a MongoDB store for persistence, and our strategy thus far for managing the static data we require to be loaded is to utilize MongoDB dumps committed to version control that are loaded at deploy/upgrade time. We'd like to decouple our static data from the persistence store. The best ideas that have come forth thus far mostly involve storing static data formatted as it would be passed to services (JSON files) and then writing a \"loader\" process/script that will live as part of the deployment process and will interact with services to load the data. Are there patterns/strategies for managing/loading/deploying \"application\" data to services?"} {"_id": "36191", "title": "Generic Content Player?", "text": "The general idea on the web appears to be that video/audio are to be separated with plain text. By separated, I mean you have a place that plays video/audio and a place that you read text. This is because it is widely understood that they are vastly different. However, audio and video are just another means of communication, just like text. So why do we separate the two even if they are nearly the same thing? Correct me if I'm wrong, but most tutorials are either plain text how-to's (wiki-style) or visual/auditory instructional videos (YouTube). Why aren't the two combined? Or, if it's already been done, can someone reply with the link? This might be bordering off-topic, and if it is off-topic then please point me to the right place so it won't be. This might also appear to be an obvious question; however, I'm not sure if this subject has really been deeply thought out by more than a few individuals."} {"_id": "185446", "title": "Better way of storing key-value pairs in the database?", "text": "I have a C#/SQL Server program that sometimes needs to store data. 
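A C# rendition of the factory-method seam from the dependency injection question above (the question's example is Java, but the pattern is language-neutral; MyObject and the fake are hypothetical stand-ins):

    public class MyObject
    {
        public MyObject(string args) { }
    }

    public class ClientClass
    {
        public void DoWork(string args)
        {
            // The dependency comes through an overridable seam instead of "new".
            MyObject obj = CreateMyObject(args);
            // ... use obj ...
        }

        // Default behaviour: real dependency. Tests override this.
        protected virtual MyObject CreateMyObject(string args)
        {
            return new MyObject(args);
        }
    }

    // In the test project:
    public class FakeMyObject : MyObject
    {
        public FakeMyObject(string args) : base(args) { }
    }

    public class TestableClientClass : ClientClass
    {
        protected override MyObject CreateMyObject(string args)
        {
            return new FakeMyObject(args);
        }
    }

This "subclass and override" seam is sometimes called Extract and Override, and it is indeed a common first step on the way to constructor injection.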
The data could be a response from a web service, a result of a database query, or any number of other things. There's no way of knowing before the data is stored how many fields it might have or what the data structure might be. We have this kind of painful table we're using for this... four columns and lots of rows. An example of the data might be easier than an explanation. InstanceID RowID PropertyName PropertyValue 1 1 Property1 Value1 1 1 Property2 Value2 1 1 Property3 Value3 1 2 Property1 Value1 1 2 Property2 Value2 1 2 Property3 Value3 2 1 OtherProp1 Value1 2 1 OtherProp2 Value2 2 2 OtherProp1 Value1 2 2 OtherProp2 Value2 These values will then be pulled back and fed into a dictionary object, which can be updated, then the fields will be fed back into the database. This can be painful to code against, and it also requires a lot of inserts, which can make it very slow. I can't think of a better way of doing this, but I feel like there must be one. Any advice?"} {"_id": "36194", "title": "Help needed on a UI/Developer Interview", "text": "I have a phone interview with a major Internet company and it is a mostly front-end developer position. If anyone has experience with UI/developer interviews and can give some advice, questions asked, etc., that'll be great. Additionally, what resources can be read and reviewed for the following: * Designing for performance, scalability and availability * Internet and OS security fundamentals **EDIT:** Now I am told that the interview will be mostly on coding, Data Structures, design questions, etc. Anyone?"} {"_id": "218226", "title": "What parts of functionality should be refactored into a directive?", "text": "I am creating an application from legacy code using AngularJS. I wonder what parts of my code should be moved into a directive. For example, I had thought of moving a table which is used multiple times across the application into a directive. The tables differ in headings and size. Is it worth the effort or even a good practice to turn such things into their own directives or should I create each table in a unique way?"} {"_id": "185448", "title": "MVVM Clarification", "text": "We are about to write our first WPF application and are becoming familiar with the MVVM pattern. We've built many Winform applications and have an architecture that has been very successful for us. We're having a little bit of trouble translating that architecture or determining where certain pieces of our architecture fit in the MVVM model. Historically we have a Gui (the main exe) that then communicates to a BusinessLogic dll. The BusinessLogic communicates to a DAL dll through a web service and the DAL interacts with the DB. The DAL, BusinessLogic and GUI all reference the same BusinessObjects dll. ![AsIs Architecture](http://i.stack.imgur.com/uWIvW.png) Some of the transition to MVVM is fairly straightforward. Our Gui will still contain the views, our BusinessObjects will still contain the model and our DAL will still interact with the DB (although the technology to implement them may change). What we're not sure of is our BusinessLogic component. Historically this would provide functions for the GUI to call to then populate controls in the views (i.e. GetCustomerList, which would return a list of Customer objects, or the typical CRUD functions). 
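For the key-value table question above (post 185446), a minimal sketch of reading one instance back into dictionaries with plain ADO.NET; the column names come from the question, while the PropertyBag table name is hypothetical:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class PropertyBagLoader
    {
        // RowID -> (PropertyName -> PropertyValue)
        public static Dictionary<int, Dictionary<string, string>> Load(SqlConnection conn, int instanceId)
        {
            var rows = new Dictionary<int, Dictionary<string, string>>();
            using (var cmd = new SqlCommand(
                "SELECT RowID, PropertyName, PropertyValue FROM PropertyBag WHERE InstanceID = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", instanceId);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        int rowId = reader.GetInt32(0);
                        Dictionary<string, string> props;
                        if (!rows.TryGetValue(rowId, out props))
                        {
                            props = new Dictionary<string, string>();
                            rows[rowId] = props;
                        }
                        props[reader.GetString(1)] = reader.GetString(2);
                    }
                }
            }
            return rows;
        }
    }

For the slow-insert half of the complaint, one option is to batch the writes: fill a DataTable with the four columns and push it in one round trip with SqlBulkCopy instead of issuing one INSERT per property.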
The main hang-up we have is whether the MVVM pattern would call for an additional component to house the ViewModels, or if we just change our thinking and migrate what we have used as our BusinessLogic component to the ViewModels? Does our BusinessLogic component represent the ViewModels?"} {"_id": "30348", "title": "Best Practices for MVC Architecture", "text": "There are a number of questions on Stack Overflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, or creating helper functions, or do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, however very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd dinner, Music Store, etc..) all seem to use a single tier, 2 layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application what are some of the best practices there in regards to MVC?"} {"_id": "166004", "title": "How to learn to draw UML sequence diagrams", "text": "How can I learn to draw UML sequence diagrams? Even though I don't use UML much, I find that type of diagram quite expressive and want to learn how to draw them. I don't plan to use them to visualise large chunks of code, hence I would like to avoid using tools, and learn how to draw them with just pen and paper. Muscle memory is good. I guess I would need to learn some basics of notation first, and then just practice it like in \"take the piece of code, draw a seq. diagram visualising the code, then generate the diagram using some tool/website, then compare what I'd drawn to the tool's result. Find the differences, correct them, repeat.\". Where do I start? Can you recommend a book or a web site or something else? **Update**: I decided that it might be worthwhile to upskill myself in UML, and found this book: http://www.amazon.com/The-Elements-UML-2-0-Style/dp/0521616786 - reviews say it is a best practices book on UML. Have you read it? Can you recommend it? Thank you."} {"_id": "166000", "title": "Accessing Repositories from Domain", "text": "Say we have a task logging system: when a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this: **Application Layer:** public class TaskService { //... public void Add(Guid categoryId, string description) { var category = _categoryRepository.GetById(categoryId); var status = _statusRepository.GetById(Constants.Status.OutstandingId); var task = Task.Create(category, status, description); _taskRepository.Save(task); } } **Entity:** public class Task { //... 
public static Task Create(Category category, Status status, string description) { return new Task { Category = category, Status = status, Description = description }; } } I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this: **Entity:** public class Task { //... public static Task Create(Category category, string description) { return new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description }; } } The status repository is dependency injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility? Here is a more extreme example, here the domain decides urgency: **Entity:** public class Task { //... public static Task Create(Category category, string description) { var task = new Task { Category = category, Status = _statusRepository.GetById(Constants.Status.OutstandingId), Description = description }; if(someCondition) { if(someValue > anotherValue) { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId); } else { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId); } } else { task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId); } return task; } } There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain? **EDIT: This could also be the case on non-static methods:** public class Task { //... public void Update(Category category, string description) { Category = category; Status = _statusRepository.GetById(Constants.Status.OutstandingId); Description = description; if(someCondition) { if(someValue > anotherValue) { Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId); } else { Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId); } } else { Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId); } } }"} {"_id": "108790", "title": "Which Java framework meets these requirements?", "text": "Which Java framework meets these requirements? What set of frameworks would be best suited to meet these requirements? The requirements are: * oriented for the web * support for transactions * support for creating RESTful web services * support for security, levels of security * integration with some kind of ORM framework like Hibernate * ability to change the front-end without needing to make changes to the back-end. Initially, I want to develop a Flex-based front-end, but if I ever want to change it to HTML5 then I don't want to make changes to my back-end * ready for the cloud * free to use for commercial purposes Spring comes to mind, but are there any other alternatives meeting these requirements? What if it does not necessarily have to be Java? Do you know of a framework, or a set of frameworks combined together, in another language which meets these requirements best?"} {"_id": "205597", "title": "Have Superclass Contain List of Subclass?", "text": "For the GUI of a program, I want it to list several items, all of which are, from a programming side, just subclasses. 
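For the repositories-in-the-domain question above (post 166000), one common middle ground keeps the entity repository-free and moves the defaulting/urgency rules into a domain-level factory with the lookups injected; a hedged sketch reusing the question's names (the IStatusRepository/IUrgencyRepository interface names are assumptions):

    public class TaskFactory
    {
        private readonly IStatusRepository _statusRepository;
        private readonly IUrgencyRepository _urgencyRepository;

        public TaskFactory(IStatusRepository statusRepository, IUrgencyRepository urgencyRepository)
        {
            _statusRepository = statusRepository;
            _urgencyRepository = urgencyRepository;
        }

        public Task Create(Category category, string description, bool someCondition, int someValue, int anotherValue)
        {
            // The defaulting decision still lives in the domain layer...
            var status = _statusRepository.GetById(Constants.Status.OutstandingId);
            var task = Task.Create(category, status, description);

            // ...and so does the urgency rule, but the entity itself never
            // touches a repository.
            if (someCondition)
                task.Urgency = _urgencyRepository.GetById(
                    someValue > anotherValue ? Constants.Urgency.UrgentId : Constants.Urgency.SemiUrgentId);
            else
                task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);

            return task;
        }
    }

Whether that factory counts as \"domain\" or \"application\" is exactly the debate in the question; the sketch only shows that the rule can live outside both the entity and the application service without the entity holding a repository.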
They can add one of these items to a list. I don't want to hard-code which subclasses there are. What would be the best way to allow the GUI to know about the subclasses, only having the base class to refer to?"} {"_id": "205591", "title": "Best way to develop a php open source application", "text": "I started to create a php application I'd like to make open source in the near future. However, I do not know if I need to follow any kind of code/documentation convention to make it more usable and acceptable when released. Is there any model to follow to develop a good open source application? Do you think it is necessary (or better) to develop it entirely, or at least for the most part, object oriented?"} {"_id": "205592", "title": "Storing application users in SQL: create a new \"Users\" table or use built-in database user management?", "text": "I am specifically interested in SQL Server, but the same question applies in general. When creating a new application, the way I see it, there are two options: * Create a table called \"Users\" and store the user name, password, etc. Set up one database user called \"application\" (and possibly more users for various components of the system). * Set up each application user as a database user. This may allow for easier single log-on setup on Windows-based systems. Which approach is generally preferred? Why? What are the drawbacks and benefits?"} {"_id": "108792", "title": "Can we expect re-addition of Mobile SDK in future Visual Studio versions?", "text": "Microsoft stopped shipping the Mobile SDK in Visual Studio releases after the 2008 version. This wasn't a big surprise, as there were no takers for that feature and Android had made a big dent in the market. But now things have changed: Microsoft has partnered with Nokia, and HTC is rolling out new mobiles with the Windows OS. Also, Windows 8 is all set to make its d\u00e9but on tablets. Can all these be seen as potential reasons for Microsoft to roll out the Mobile SDK again in future releases of Visual Studio?"} {"_id": "204240", "title": "How can I determine what level (entry, mid, senior) I am?", "text": "I have an interview with a company headquartered in San Francisco. They are pretty popular, well publicized, and recently acquired by a semi-notable player in Silicon Valley. The team is probably 20-30 people. They want at least a mid-level candidate, and offer \"perks\" for senior/lead developers. I got an internship (for backend development) at a small startup halfway through a state school CS program, and was hired full-time a week into my internship as lead (well... only) iOS developer. I've worked here for 5 months and released an app (the iOS constituent of our realtime communication suite). I've also released a location-based app for a niche community. I am familiar with TDD, GitHub, APIs, AFNetworking and all the recent popular jazz, as well as dated concepts like pre-ARC memory management and xib/xibless programming. I can demonstrate a basic knowledge level of algorithms and formal programming (competent in code competitions) and I am a good communicator. But my notable experience is somewhat lacking. 
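For the subclass-listing question above (post 205597), a minimal reflection-based sketch (the question names no language; ItemBase and DisplayName are hypothetical): the GUI discovers every concrete subclass at runtime and only ever references the base class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    public abstract class ItemBase
    {
        public abstract string DisplayName { get; }
    }

    public static class ItemCatalog
    {
        // One instance of every concrete ItemBase subclass in this assembly.
        // Assumes each subclass has a parameterless constructor.
        public static List<ItemBase> CreateOneOfEach()
        {
            return Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => t.IsSubclassOf(typeof(ItemBase)) && !t.IsAbstract)
                .Select(t => (ItemBase)Activator.CreateInstance(t))
                .ToList();
        }
    }

The GUI can then bind its list to DisplayName; adding a new subclass requires no change to the listing code.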
Would it be fair to call myself a mid-level developer?"} {"_id": "204241", "title": "Adding correct documentation for a method", "text": "I have a method like this: methodA(someParams) { do some initializations, etc; call methodB(someParams); } methodB(someParams) { do work; if blah then raise exception::FileNotFound if blahblah then raise exception::ResourceNotFound } When I am adding documentation for `methodB`, I am also documenting the exceptions it can raise. But how about the documentation of `methodA`? It IS consuming `methodB`, which may raise exceptions, so should I still document those exceptions for methodA as well? Right? Wrong?"} {"_id": "203296", "title": "How to convince boss to buy Visual Studio 2012 Professional", "text": "The main advantage is the use of ReSharper and other add-ons, but we need to make a convincing argument for the purchase of Visual Studio 2012 Professional. We are currently using Visual Studio 2012 Express for Windows. It is quite good, but it is hard to switch after having used the full Professional version in the past. So far the team has compiled the following list: > 1. Extract Interface function missing. Very useful for clean SOLID code. > 2. No add-on support. Can\u2019t install StyleCop or productivity tools: AnkhSvn, Spell checker, Productivity PowerTools, GhostDoc, Regex Editor, PowerCommands. > 3. The exception assistant is limited in the Express edition. This is a big annoyance. See http://www.lifehacker.com.au/2013/01/ive-given-up-on-visual-studio-express-2012-for-windows-desktop-heres-why/ > 4. Different tools provided by MS like certificate generation. > 5. Possibility of creating a Test project based on source code. We do server development in C#, so any web add-ons or other web-related extras are useless. The reason I am asking is I am sure that people have been in the same position. What approach did you use, and can you think of additions or amendments to the above list? Thanks,"} {"_id": "218598", "title": "Layered Manufacturing Contour Generation", "text": "I am currently working on software for layered manufacturing (3D printing), and have come to a point where I no longer know what to do. I am at the stage where I must take a bunch of lines, and order them to create a seamless contour of a model. I have some basic ideas on how I am going to do this, but since I am using .STL files there are excess lines in some of the layers. ![sample](http://i.stack.imgur.com/S2wSL.png) In that image you can see that there is a simple square with a line connecting two corners to form two triangles. This is the base of a cube. Ideally the line connecting the two corners wouldn't be there, but because this was processed from a .STL file, which uses triangles only, it has to be there to turn the square into two triangles. My original thought was to print out that line too, but in a more complex model (like a gear) there would be too many and it would be impossible to create one seamless contour. My next thought was to just print it out as two triangles, but I cannot go over the same line twice, so that doesn't work, especially in complex models. I have been thinking about this problem for quite a while, but I can never come up with a solution that would work for any model. I need to end up with a series of lines that form contours of not just the outside, but possibly the inside as well if there are holes. For example a gear would have one set of lines that form the teeth all the way around, and another set of lines that form the hole in the middle. 
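On the exception-documentation question above (post 204241): in C# the usual convention is to repeat propagated exceptions in the caller's XML doc comments — a small sketch, with FileNotFoundException standing in for the question's pseudo-exceptions:

    public class Worker
    {
        /// <summary>Does some initialization, then delegates to MethodB.</summary>
        /// <exception cref="System.IO.FileNotFoundException">
        /// Propagated from MethodB when blah.</exception>
        public void MethodA(string someParams)
        {
            // initializations...
            MethodB(someParams);
        }

        /// <exception cref="System.IO.FileNotFoundException">Thrown when blah.</exception>
        public void MethodB(string someParams)
        {
            // work that may throw...
        }
    }

Documenting the exception on MethodA as well is generally worthwhile, because MethodA's callers see only its contract, not MethodB's.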
If anyone knows a solution to this, or has experience with this, I would greatly appreciate your input =)"} {"_id": "101813", "title": "How does EF 4.1 stack up against ADO.NET SQL for stored procedures?", "text": "This question is _solely_ about using these technologies against stored procedures. I've been doing quite a bit of reading on pitting these two against each other, and far more than anything I've been seeing them compared with CRUD operations and other more complex queries - very rarely for calling stored procedures. I've read that it's hard to beat ADO.NET for performance when calling stored procedures. However, when calling them with Entity Framework 4.1, you get the benefits of entities (rather than just the data reader) which can be nicer to work with and pass around (imo). I ran a few tests today (not sure of the accuracy of them) and they suggest that over 100 or 1000 iterations, calling the same procedure, ADO.NET is something like 5 milliseconds faster per call, on average. (I haven't been able to test how long it takes to convert from the standard data reader object to a similar kind of entity, but I doubt it makes up for the 5 milliseconds) Is it worth the trade-off in performance for the benefits in design and writing? Or are those 5ms per query just too much? Does the answer change between small projects and environments like e-commerce?"} {"_id": "218592", "title": "Use the filesystem or some noSql database for shared memory between processes?", "text": "I'm implementing a system that's basically a pipeline of XML documents: XML documents are retrieved over the Internet, validated, further processed, etc., until they are ingested in a relational (non-XML) database. After the ingestion in the database they can be discarded. Since the various components of the pipeline are somewhat independent from each other, I want to use a number of separate applications, each performing a \"step\" in the pipeline. What should be the reasoning behind choosing the filesystem for data sharing between the above applications versus some noSQL database? The data to be shared is mostly XML files, and the total volume of data that goes through the pipeline is maybe 10 gigabytes per day."} {"_id": "218597", "title": "Understanding abstraction", "text": "I am trying to understand object oriented code better and I decided to start at abstraction. If I am not incorrect, abstraction means that you hide information that isn't relevant to the task you want to perform, i.e. if I turn on the TV I would not need to know the internals of what is going on. Or if I have a class that handles Orders, and I would like to get the Total cost of a couple of Products, a getTotal() method would be the only thing that the User needs to know about to get the total sum of the products, and not how it's actually calculated. So, per se, by using a class I am abstracting data?"} {"_id": "249897", "title": "Web-app filtering information client-side vs server-side?", "text": "I have a web-app that provides data (that is updated on an interval) for intranet users, who are able to filter information by location. I was having a discussion with a co-worker about whether that filtering should take place on the client vs server side. The amount of information isn't large, but the users are typically interested in a certain location. My question: is it preferable to let the database filter the data and send the results to the client, or return all data back to the client and then filter by location? 
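Going back to the layered-manufacturing question (post 218598): for triangles lying in the slice plane, one standard trick is to count edges — an edge shared by two triangles (like the square's diagonal) is interior and drops out, leaving only the outline. A hedged C# sketch, assuming exact vertex coordinates (real STL data usually needs rounding to a tolerance first):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public struct Point2 : IEquatable<Point2>
    {
        public readonly double X, Y;
        public Point2(double x, double y) { X = x; Y = y; }
        public bool Equals(Point2 other) { return X == other.X && Y == other.Y; }
        public override int GetHashCode() { return X.GetHashCode() ^ (31 * Y.GetHashCode()); }
    }

    public static class Outline
    {
        // Keep only edges used by exactly one triangle.
        public static List<Tuple<Point2, Point2>> BoundaryEdges(IEnumerable<Point2[]> triangles)
        {
            var count = new Dictionary<Tuple<Point2, Point2>, int>();
            foreach (var t in triangles)
                for (int i = 0; i < 3; i++)
                {
                    Point2 a = t[i], b = t[(i + 1) % 3];
                    // Normalize direction so AB and BA count as the same edge.
                    var key = Less(a, b) ? Tuple.Create(a, b) : Tuple.Create(b, a);
                    int n;
                    count.TryGetValue(key, out n);
                    count[key] = n + 1;
                }
            return count.Where(kv => kv.Value == 1).Select(kv => kv.Key).ToList();
        }

        private static bool Less(Point2 a, Point2 b)
        {
            return a.X < b.X || (a.X == b.X && a.Y < b.Y);
        }
    }

The surviving segments can then be chained end-to-end into closed loops, which naturally yields both the outer contour and any interior holes.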
If it's not clear cut, when would you choose one method over the other?"} {"_id": "249890", "title": "When does a Monad become a hammer?", "text": "I realize my precursory understanding of Monads is severely lacking in detail, considering my knowledge comes mostly from Douglas Crockford's Monads and Gonads talk and is complicated by my severe handicap with Haskell (which looks like a bunch of non-alphanumeric characters mushed between disjointed English words to my tragically disadvantaged brain). With that being said, I'd like to ask about programming practices concerning Monads and how they could be implemented in JavaScript. I'm prefacing this because I recognize that the very nature of the language can drastically affect how one perceives a concept, and that because of my background in JavaScript this question could be inappropriate if it were based in a purely functional language like Haskell. Oftentimes while designing an interface or coding an object I will find myself implementing a form of chaining which mutates the encapsulated data. I prefer this style over more declarative forms like passing in a multi-lined object literal. function Declarative(options) { this.options = options; } Declarative.prototype.compute = function() { ... } var x = new Declarative({ foo: 'foo', bar: 'bar' }); x.compute(); Versus: function Chained() { } Chained.prototype.withFoo = function(v) { this.foo = v; return this; }; Chained.prototype.withBar = function(v) { this.bar = v; return this; }; Chained.prototype.compute = function() { ... } var x = new Chained() .withFoo('foo') .withBar('bar') .compute(); Both these examples (aesthetics aside) raise a few hairs on my back because a small voice mockingly squeals \"Mutability much?\" I begin thinking this might be a good time to consider a Monad pattern. What I mean by that is each method would, in short, return a new object of the same type. Then adding composition functions on it like map, bind, etc. could offer a world of potential like I get with Promises and other Monad types. (Obviously taking care to follow the three Monadic laws when implemented). Finally, the other half of my brain starts chiming in with \"If all you have is a hammer, everything looks like a nail.\" Sigh. That is when my productivity and creativity crash, and here I am on SE, curious and confused. While I continue my research for understanding (perhaps gaining enough courage to contemplate Haskell) I ask: When does the idea of Monads (that being composability of functions on objects (ie types) along with immutability) become a good idea to be cultivated and patterned? And when is it nothing more than an overutilized hammer? (Concepts and learning opportunities welcomed, example code helpful)"} {"_id": "249891", "title": "Add arguments to mysql in Django's \"dbshell\"", "text": "I'd like to add a couple of command-line arguments to my Django's `./manage.py dbshell` command, and I can't figure out how. Specifically, I'd like to add `-A` to prevent MySQL from scanning every table and every column, and `--prompt=LOCAL:` since I frequently keep multiple shells open. My only idea is to create my own \"mysql\" command in /usr/local/bin and have it be a wrapper for mysql with my own flags. 
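The "each method returns a new object" idea from the Monad question above, rendered in C# for concreteness (the question's code is JavaScript; Chained is the question's own name):

    public sealed class Chained
    {
        public static readonly Chained Empty = new Chained(null, null);

        public string Foo { get; private set; }
        public string Bar { get; private set; }

        private Chained(string foo, string bar) { Foo = foo; Bar = bar; }

        // Each "with" returns a fresh instance; the original is never mutated.
        public Chained WithFoo(string v) { return new Chained(v, Bar); }
        public Chained WithBar(string v) { return new Chained(Foo, v); }
    }

    // Usage: var x = Chained.Empty.WithFoo("foo").WithBar("bar");

This gives the immutability half of the question without any of the monadic machinery; whether to go further and add map/bind is the judgment call the question is really about.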
But I'd really like to avoid doing that."} {"_id": "54591", "title": "Starting all over again?", "text": "Have you ever been developing something and just come to a point where you think: this is rubbish, the design is bad, and although I will lose time it will be better to just start all over again? What should you consider before making this step? I know it can be drastic in some cases; is it best to just totally ignore what you did before, or take some of the best bits from it? Some real life examples would be great."} {"_id": "249893", "title": "Privilege (Access/Permission) Control for Hierarchical Structured Resource", "text": "Question: Is there any standard model or industry de facto implementation for modeling and implementing Access Control in (e.g.) a Document Management System? Note: I studied the security mechanism of Windows a bit (which I do not want to use), and I saw Users and Groups and Policies. But I can't understand: 1 - How a single policy object can contain all information about allowed/denied actions on a subject for all users and groups, at a specific moment of time. 2 - How multiple policies on a specific subject merge into one to provide the least possible access. 3 - What is the mechanism (data structures, database, caching, implementation) of hierarchical resources like folders? Those kinds of queries are usually slow."} {"_id": "61227", "title": "Why bother with detailed specs?", "text": "When writing software, what's the point of writing a formal, detailed dead-tree specification? To clarify, a somewhat informal, high-level specification for people who don't want to read the code makes sense to me. A reference implementation in well-commented, readable source code is about the most unambiguous specification of anything you're going to get, though, since a computer has to be able to execute it. Formal specs are often just as hard to read, and almost as hard to write, as code. What benefits does a formal specification offer over a reference implementation plus a little documentation about what behavior is undefined/implementation-defined, regardless of how it works in the reference implementation? Edit: Alternatively, what's wrong with the tests being the formal specification?"} {"_id": "123349", "title": "How do you determine velocity when the previous sprint was half User Stories and half Defects?", "text": "The following scenario often happens with my team at the office. Let's say we decided to plan our Sprint this way: * 50% new features * 50% bug fixing (they are high priority AND _unestimated_ as the fixes are easy, the hard and time-consuming bit is the investigation) The Sprint goes well and we finish all the features, say X story points, and quite a few unestimated bugs. Two questions come to mind: 1. What is our Sprint velocity? Is it X? 2. Assuming it is X: how do we plan the next Sprint using \"yesterday's weather\" (X story points) if we now want to do a 100% new features Sprint?"} {"_id": "232662", "title": "Separate namespace just for exceptions?", "text": "I was doing a code review and came across something odd which I've never seen before. The developer decided to create a sub-namespace just to contain all the assembly's exceptions. I thought I had read that this was explicitly not a good use of namespaces and that exceptions should be kept at the \"lowest\" namespace from which they are thrown, but I haven't found anything on MSDN (or anywhere else for that matter). 
At the same time, it seems odd that I haven't come across this before in the framework itself or the third-party libraries I've used. Are there any guidelines that would push away from keeping all exceptions in a .Exceptions sub-namespace?"} {"_id": "235697", "title": "What kind of processes or static analysis would you use to catch improper buffer bugs such as the one that caused Heartbleed?", "text": "What kind of process or static analysis would catch the Heartbleed bug, other than human code reviews, which we already know failed? The Fix Commit is here."} {"_id": "159911", "title": "How to contribute to jQuery?", "text": "I'm confused about the nature of the jQuery project. It can be licensed under either the GPL or the MIT license, according to the comments in the project. However, the jQuery web site provides a list of team members, as if this were a commercial product. I have written some code to improve jQuery to work around a nasty bug in Internet Explorer, and I would like to know what is the best way to propose my idea to the project."} {"_id": "159912", "title": "How to deal with code reuse philosophy?", "text": "I constantly find myself thinking about code reuse when starting a new project. To what extent should I make my code reusable? Should I limit it to the application scope or should I make it reusable outside of the project? Sometimes, I feel like code reusability may stand in the way of a simple design. Please share your own understanding of, and approach to, code reusability."} {"_id": "132967", "title": "Jenkins without Automated Tests", "text": "I know that Jenkins is focused on continuous building/testing and monitoring of batch jobs for a project. We have a legacy project with the following conditions: 1) It has a development team. 2) It has SVN for source code management. 3) Some cronjobs for some operations. 4) Compile & build don't take too much time; there are no very complex dependencies. 5) It doesn't have any automated test/JUnit classes and will not have any. I'd like to ask experienced Jenkins users: is it still worth using Jenkins for central build & management of the project?"} {"_id": "132965", "title": "Unit of work principle is causing a problem in MVC3 application", "text": "I am implementing a website using MVC3, Entity Framework 4.1 and the Repository pattern, following the Unit of Work principle. But I am facing a big problem while implementing this. I have developed a static ObjectContext class. This class is used across all the repositories so the unit of work pattern is followed. I have e.g. ICustomer, IProduct repositories. I use specifically only one repository in a controller, and that too is injected using NInject. CustomerController(ICustomer customer) { } ProductController(IProduct product) { } Since the object context class is static and IProduct and ICustomer have parameterised constructors which accept the objectContext class, these two will share the same ObjectContext instance. When the execution is single threaded everything goes fine, but in multi-threading I get an unhandled exception, because one repository closes the connection which was being used by the other. I think if I make the ObjectContext class non-static then this will solve the problem (not tested yet), but then Unit of Work will not be observed. Can you please suggest a solution for this?"} {"_id": "236176", "title": "Performance of One API vs Multiple APIs", "text": "I was having a conversation with a colleague and although my opinion makes sense to me, I wasn't able to back it up. 
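Going back to the unit-of-work question (post 132965): the usual fix is one context per request rather than a static one. A hedged sketch that stashes the context in HttpContext.Items so all repositories within a single request share it — MyObjectContext stands for the question's EF context and is hypothetical:

    using System.Web;

    public static class ContextPerRequest
    {
        private const string Key = "UnitOfWorkContext";

        // One context per HTTP request; concurrent requests never share
        // (or dispose) each other's connection.
        public static MyObjectContext Current
        {
            get
            {
                var ctx = (MyObjectContext)HttpContext.Current.Items[Key];
                if (ctx == null)
                    HttpContext.Current.Items[Key] = ctx = new MyObjectContext();
                return ctx;
            }
        }

        // Call from Application_EndRequest in Global.asax.
        public static void DisposeCurrent()
        {
            var ctx = (MyObjectContext)HttpContext.Current.Items[Key];
            if (ctx != null) ctx.Dispose();
        }
    }

Most DI containers can do the same thing declaratively; Ninject, which the question mentions, has a per-request scope for exactly this.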
I'm in the process of creating an API that will be hit hundreds of thousands of times per day. It's fairly simple, and will just be doing some inserts into a relational database. There are essentially 3 functions I would need to create. The question is, what would have better performance: creating one API with different controllers (using .NET Web API or NodeJS, something like that) or 3 different APIs? My thought is that, for example with the .NET Web API, no matter whether there are 1 or 3, they are all being hosted and using the same resources. His thought was that having 3 APIs would be 3 times faster because it would have 3 times the resources. I simply didn't have the proof to disprove that."} {"_id": "171745", "title": "Managing JS and CSS for a static HTML web application", "text": "I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that. **First question**: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and being search engine friendly, but I don't believe that these are an issue for this particular app.) **Second question**: What's the best way to manage the static HTML, JS, and CSS? For my \"development build,\" I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the \"release build,\" everything should be minified, concatenated together, etc. If I was doing server-side generation of HTML, it'd be easy to have my web framework generate different development versus release HTML that includes multiple verbose versus concatenated minified code. But given that I'm only doing static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.) **In particular**, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like at development time and for release?"} {"_id": "226036", "title": "Potentially Shippable Product Increment - what if users don't like the latest increment?", "text": "In our company we do usability tests at the end of each Sprint. Many times we discover that the users don't like the implemented feature, so we either completely change it or scrap it in the next Sprint. However, if we start doing a potentially shippable product, thus fixing all bugs, running a lot of tests, preparing documentation for the FDA and for users, fixing little UI issues - all this work will go to waste if users do not like the feature. Isn't it better NOT to do all this extra potentially shippable stuff until we are sure users actually like the feature?"} {"_id": "219171", "title": "What workflow do you find efficient when simultaneously developing multiple inter-dependent GIT branches?", "text": "Let me start by saying that my GIT knowledge is fairly shallow, so I'm guessing that there might be something I'm missing. **THE SETUP:** As an example, we have a project which is being developed as a collection of plug-ins/modules. Some modules, such as contact management, depend on others, such as validation. Each module has its own branch. 
**CURRENT WORKFLOW:** Our validation module is being concurrently developed with our other modules, just in a separate stream. In doing so, I am finding that I am having to do a **lot** of checking out back and forth (as well as a lot of stashing (and merging, but that I'm fine with)). For example, say I'm developing module_x which needs a new validation rule (which will have uses in other modules as well) ... I then: 1. stash my work 2. checkout the validation branch 3. write the rule 4. commit 5. checkout the module_x branch 6. pop the stash 7. merge the validation branch into module_x. Now, if I come up with an improvement for something in the validation branch (or just need to fix a bug), I have to go through all that all over again. Between new development, refactoring/improvements, and bug fixing, I feel like I'm spending entirely too much time just switching back and forth between development streams, and can't help but think that there's a better way. **DO's and DON'Ts:** Is this _really_ how it's done, or am I completely missing the bigger picture? :) What works for you?"} {"_id": "137244", "title": "Why doesn't Java use a radix sort on primitives?", "text": "`java.util.Arrays.sort(/* int[], char[], short[], byte[], boolean[] */)` is implemented as a 'tuned quicksort' rather than a radix sort. I did a speed comparison a while ago, and with something like n>10000, radix sort was always faster. Why?"} {"_id": "40614", "title": "As developers is it our job to report issues if no one else in the org seems to care?", "text": "**edit:** I should point out: my personal view was that I should be proactive. I know sometimes I have to bite my tongue, and I wanted to get the community's input (was this one of those times?). I couldn't find a more appropriate place to ask it in the SO family of sites. Here is the scenario -- * small org < 70 employees * no qa department * website viewed by thousands every day * I am the sole website developer * I have never had a single complaint that the site is broken in IE6 * I've discovered our site has not worked in IE6 for years. The person I replaced who created it must have been \"testing\" it only on IE7. I fired up Virtual PC with IE6, and our site is a complete mess. You cannot select some menu items, they are so garbled. It looks terrible. So again: is it our job to proactively seek out bugs, or do we just fix what the customer requests? Personally, I want to leverage this opportunity with my org to drop any expectation of IE6 support or compatibility."} {"_id": "178941", "title": "Why do operating systems do low level stuff in C and C++? Why not just C++?", "text": "On the Wikipedia page for Windows, it states that Windows is written in Assembly for the bootloader and task switcher, and C _and_ C++ for kernel routines. IIRC, you can call C++ functions from an `extern \"C\"`'d block. I can understand using C for the kernel functions so pure C apps can use them (like `printf` and such), but if they can just be wrapped in an `extern \"C\"` block, then why code in C?"} {"_id": "570", "title": "Will correctness proofs of code ever go mainstream?", "text": "All but the most trivial programs are filled with bugs, and so anything that promises to remove them is extremely alluring. At the moment, correctness proofs of code are extremely esoteric, mainly because of the difficulty of learning this and the extra effort it takes to prove a program correct. 
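As an aside on the radix-sort question above (post 137244), a minimal LSD radix sort for int[] in C# (the question is about Java, but the algorithm is identical); it flips the sign bit of the top byte so negative values order correctly:

    public static class Radix
    {
        public static void Sort(int[] a)
        {
            var output = new int[a.Length];
            for (int shift = 0; shift < 32; shift += 8)
            {
                var count = new int[257];
                foreach (int x in a)
                    count[Key(x, shift) + 1]++;
                for (int i = 0; i < 256; i++)
                    count[i + 1] += count[i];            // prefix sums = start offsets
                foreach (int x in a)
                    output[count[Key(x, shift)]++] = x;  // stable scatter
                System.Array.Copy(output, a, a.Length);
            }
        }

        // Extract one byte; flip the sign bit on the top byte so that
        // negative numbers land before positive ones.
        private static int Key(int x, int shift)
        {
            int b = (x >> shift) & 0xFF;
            return shift == 24 ? b ^ 0x80 : b;
        }
    }

Four counting passes is O(n), but with large constant factors, extra memory, and poor cache behaviour on small inputs — a plausible part of why library authors prefer a tuned quicksort as the general-purpose default.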
Do you think that code proving will ever take off?"} {"_id": "178949", "title": "Browser support for internal corporate tools", "text": "We are on the verge of a conversion. For years, our company supported only IE for its internal (intranet) home-built tools. Since a few of our users are still on XP, which means IE only goes up to 8... a heavily JS / jQuery site won't even load! We have been in the process of converting to use Chrome instead, to make use of its javascript performance. But it has now been suggested that we support all common browsers... internally for these tools. Which means more development time to scale back some of these new applications, more time to test in all browsers, and we are already understaffed. Are there any good informational sites/posts out there that already make this argument?"} {"_id": "202167", "title": "What's with the aversion to documentation in the industry?", "text": "There seems to be an aversion to writing even the most basic documentation. Our project READMEs are relatively bare. There aren't even updated lists of dependencies in the docs. Is there something I'm unaware of in the industry that makes programmers dislike writing documentation? I can type out paragraphs of docs if needed, so why are others so averse to it? More importantly, how do I convince them that writing docs will save us time and frustration in the future?"} {"_id": "207531", "title": "Data Aggregation of CSV files in Java", "text": "I have `k` csv files (5 csv files for example), each file has `m` fields which produce a key and `n` values. I need to produce a single csv file with aggregated data. I'm looking for the most efficient solution for this problem, speed mainly. By the way, I don't think that we will have memory issues. Also I would like to know if hashing is really a good solution, because we would have to use a 64 bit hashing solution to reduce the chance of a collision to less than 1% (we have around 30000000 rows per aggregation). For example file 1: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,50,60,70,80 a3,b2,c4,60,60,80,90 file 2: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,30,50,90,40 a3,b2,c4,30,70,50,90 result: f1,f2,f3,v1,v2,v3,v4 a1,b1,c1,80,110,160,120 a3,b2,c4,90,130,130,180 Algorithms that we have thought of until now: 1. hashing (using a concurrent hash table) 2. merge sorting the files 3. DB: using MySQL or Hadoop or Redis. The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example: file 1 country,city,peopleNum england,london,1000000 england,coventry,500000 file 2: country,city,peopleNum england,london,500000 england,coventry,500000 england,manchester,500000 merged file: country,city,peopleNum england,london,1500000 england,coventry,1000000 england,manchester,500000 The key is: `country,city`. This is just an example; my real key is of size 6 and the data columns are of size 8 - a total of 14 columns. We would like the solution to be the fastest in regard to data processing."} {"_id": "207534", "title": "How to create a database in which new columns have to be added periodically?", "text": "I want to create a database that tracks different projects and their finances. As finances are tracked monthly, a new column has to be created for each month. I could create all tables together at the start, yeah, but I guess it would be a bad development strategy. Also I would be guessing the number of months. I will be creating an application to monitor those projects (like graphs and such)... 
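For the CSV aggregation question above (post 207531), a minimal dictionary-based sketch in C# (the first-two-columns key layout comes from the question's small example; adjust keyFields for the real 6-key/8-value layout):

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public static class CsvAggregator
    {
        public static void Aggregate(string[] inputFiles, string outputFile, int keyFields)
        {
            var totals = new Dictionary<string, long[]>();
            string header = null;

            foreach (var file in inputFiles)
                foreach (var line in File.ReadLines(file))
                {
                    if (header == null) { header = line; continue; }
                    if (line == header) continue; // skip the other files' header rows

                    var parts = line.Split(',');
                    string key = string.Join(",", parts.Take(keyFields));
                    long[] sums;
                    if (!totals.TryGetValue(key, out sums))
                        totals[key] = sums = new long[parts.Length - keyFields];
                    for (int i = 0; i < sums.Length; i++)
                        sums[i] += long.Parse(parts[keyFields + i]);
                }

            using (var w = new StreamWriter(outputFile))
            {
                w.WriteLine(header);
                foreach (var kv in totals)
                    w.WriteLine(kv.Key + "," + string.Join(",", kv.Value));
            }
        }
    }

Using the raw key string directly avoids the 64-bit-hash collision worry entirely, at the cost of storing the keys; at ~30 million rows that is often still fine in memory, and merge-sorting pre-sorted files is the fallback if it is not.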
so maybe I could code the program to do it at regular intervals."} {"_id": "85235", "title": "Is code ownership a code smell?", "text": "This is something I've been thinking about ever since I read this answer in the controversial programming opinions thread: > **Your job is to put yourself out of work.** > > When you're writing software for your employer, any software that you create > is to be written in such a way that it can be picked up by any developer and > understood with a minimal amount of effort. It is well designed, clearly and > consistently written, formatted cleanly, documented where it needs to be, > builds daily as expected, checked into the repository, and appropriately > versioned. > > If you get hit by a bus, laid off, fired, or walk off the job, your employer > should be able to replace you on a moment's notice, and the next guy could > step into your role, pick up your code and be up and running within a week > tops. If he or she can't do that, then you've failed miserably. > > Interestingly, I've found that having that goal has made me more valuable to > my employers. The more I strive to be disposable, the more valuable I become > to them. And it has been discussed a bit in other questions, such as this one, but I wanted to bring it up again to discuss from a more point blank \" **it's a code smell!!** \" point of view - which hasn't really been covered in depth yet. I've been a professional developer for ten years. I've had one job where the code was well written enough to be picked up relatively quickly by any decent new developer, but in most cases in industry, it seems that a very high level of ownership (both individual and team ownership) is the norm. Most code bases seem to lack the documentation, process, and \"openness\" that would allow a new developer to pick them up and get working with them quickly. There always seem to be lots of unwritten little tricks and hacks that only someone who knows the code base very well (\"owns\" it) would know about. Of course, the obvious problem with this is: what if the person quits or \"gets hit by a bus\"? Or on a team level: what if the whole team gets food poisoning when they go out on their team lunch, and they all die? Would you be able to replace the team with a fresh set of new random developers relatively painlessly? - In several of my past jobs, I can't imagine that happening at all. The systems were so full of tricks and hacks that you \" _just have to know_ \", that any new team you hire would take far longer than the profitable business cycle (eg, new stable releases) to get things going again. In short, I wouldn't be surprised if the product would have to be abandoned. Obviously, losing an entire team at once would be very rare. But I think there is a more subtle and sinister thing in all this - which is the point which got me thinking to start this thread, as I haven't seen it discussed in these terms before. Basically: I think **a high need for code ownership is very often an indicator of technical debt**. If there is a lack of process, communication, good design, lots of little tricks and hacks in the system that you \"just have to know\", etc - it usually means that the system is getting into progressively deeper and deeper technical debt. But the thing is - code ownership is often presented as a kind of \"loyalty\" to a project and company, as a positive form of \"taking responsibility\" for your work - so it's unpopular to outright condemn it. 
But at the same time, the technical debt side of the equation often means that the code base is getting progressively less open, and more difficult to work with. And especially as people move on and new developers have to take their place, the technical debt (ie maintenance) cost starts to soar. So in a sense, I actually think that it would be a good thing for our profession if a high level of need for code ownership were openly seen as a job smell (in the popular programmer imagination). Instead of it being seen as \"taking responsibility and pride\" in the work, it should be seen more as \"entrenching oneself and creating artificial job security via technical debt\". And I think the test (thought experiment) should basically be: what if the person (or indeed, the whole team) were to vanish off the face of the Earth tomorrow. Would this be a gigantic - possibly fatal - injury to the project, or would we be able to bring in new people, get them to read the doccos and help files and play around with the code for a few days - and then be back in business in a few weeks (and back to full productivity in a month or so)?"} {"_id": "230283", "title": "Applying the Art Gallery problem to optimal sensor placement", "text": "I am trying to design an algorithm for _optimal sensor placement_ in a given area. After doing some research I found the _Art Gallery Problem_. However, this problem assumes that the guards can see all the way to the perimeter from their positions, which is not the case with sensors (sensors have a range). Is it possible to solve the optimal sensor placement problem by mapping it to an art gallery problem, and trying some recursive solution? I would be grateful if someone could throw some light on this question or guide me to an appropriate resource."} {"_id": "47963", "title": "Recommendations for a web-based help system", "text": "I'm putting together a fairly large GUI. I'm a tech person, but more on the hardware side, not software. I'm wondering what software package would be best suited to enable me to generate a web-based help system. Preferably it would take care of a lot of the coding, allowing me to focus on the content. For example, the user would click on a link in the GUI when they have a question, that brings them to a web-based Help Guide, for example, providing an overview of how to use the GUI, perhaps a searchable index (key-word based index), table of contents, etc, for navigating through the Help Guide. My first thought was to program everything in XHTML using Dreamweaver, but my layout requirements are fairly modest (just figures and text, maybe a few equations), and I'd prefer not to spend a lot of time concentrating on the programming. I was wondering if any software existed that makes web-based navigable pages easy to create and publish."} {"_id": "38799", "title": "Does taking a series of contracts hurt your career?", "text": "So I've been contracting for 6-7 months now. The problem I have is that I take a contract while looking for work and then don't want to bail on the people working in my contract. Then when the contract is up, I tell myself I'm going to take a contract while looking, etc. I'm concerned that if this trend continues it will not be good for my career. I'm particularly concerned about the fact that the contracts (3 to 6 months) aren't really long enough for me to participate in a full SDLC. 
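On the sensor-placement question above (post 230283): with ranged sensors the problem is closer to geometric set cover than to the classic art gallery problem, and a common practical approach is greedy maximum coverage over a candidate grid — a hedged sketch (greedy is not optimal, but it is a simple ln(n)-approximation):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class SensorPlacement
    {
        // Pick sensor positions from 'candidates' until every target point
        // lies within 'range' of some chosen sensor.
        public static List<Tuple<double, double>> Place(
            List<Tuple<double, double>> targets,
            List<Tuple<double, double>> candidates,
            double range)
        {
            var chosen = new List<Tuple<double, double>>();
            var uncovered = new HashSet<Tuple<double, double>>(targets);
            double r2 = range * range;

            while (uncovered.Count > 0)
            {
                // Greedy step: the candidate covering the most uncovered targets.
                var best = candidates
                    .OrderByDescending(c => uncovered.Count(t => Dist2(c, t) <= r2))
                    .First();
                if (uncovered.RemoveWhere(t => Dist2(best, t) <= r2) == 0)
                    throw new InvalidOperationException("Some targets are out of reach.");
                chosen.Add(best);
            }
            return chosen;
        }

        private static double Dist2(Tuple<double, double> a, Tuple<double, double> b)
        {
            double dx = a.Item1 - b.Item1, dy = a.Item2 - b.Item2;
            return dx * dx + dy * dy;
        }
    }

This sketch ignores occlusion: if walls block the sensors, each coverage disk has to be intersected with a visibility polygon, which is where the art-gallery machinery re-enters.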
Does taking a series of 3-6 month contracts make it more difficult to land a permanent, senior position later? For example, I can't imagine someone hiring a consultant to be a team lead. So if you are going from contract to contract, does it hurt your ability to eventually become a team lead?"} {"_id": "175844", "title": "Is there an antipattern to describe this method of coding?", "text": "I have a codebase where the programmer tended to wrap things up in areas that don't make sense. For example, given an error log we have, you can log via ErrorLog.Log(ex, \"friendly message\"); He added various other means to accomplish the exact same task, e.g. SomeClass.Log(ex, \"friendly message\"); which simply turns around and calls the first method. This adds levels of complexity with no added benefit. Is there an anti-pattern to describe this?"} {"_id": "175848", "title": "Am I sending large amounts of data sensibly?", "text": "I am about to design a video conversion service that is scalable on the conversion side. The architecture is as follows: * Webpage for video upload * When done, a message gets sent out to one of several resizing servers * The server locates the video, saves it on disk, and converts it to several formats and resolutions * The resizing server uploads the output to a content server, and messages back that the conversion is done. Messaging is something I have covered, but right now I am transferring via FTP and wonder if there is a better way. Is there something faster or more reliable? All the servers will be sitting on the same gigabit switch or a neighboring switch, so fast transfer is expected. EDIT: The question is about the server <-> server side of things. The servers are co-located in the same LAN, so the security of the interconnection there is not expected to be the main issue."} {"_id": "233537", "title": "Software trial: limited time or limited functionality?", "text": "I am currently developing my first serious piece of software in C#. It is a good learning experience so far. My software is basically a task scheduling tool with some advanced features and capabilities. I am not near the finishing stages yet, but I was wondering what people's opinions are on trial versions of software. Do you think it would be more beneficial, in general, to have a limited time trial (say 30 days) or a limited functionality version of the software? I think I know which I would go with; however, I always like to listen to the opinions of others. Thanks,"} {"_id": "238706", "title": "Is Entity Framework 6 agnostic enough for different SQL Server and OS platforms?", "text": "It's my first time using SQL Server; I usually go with MySQL, so I'm unsure how to do this. The project I'm assigned to will be deployed to multiple platforms, particularly PCs with different SQL Server versions ranging from 2008 to 2014 - some of them Express edition, some Standard edition - and on Windows ranging from XP, 7, 8 and 8.1 for both x86 and 64-bit systems. If I use Entity Framework 6 on my development PC (64-bit Windows 8.1 and SQL Server 2012 Express), will this pose a problem during deployment?"} {"_id": "13961", "title": "How do you handle changes in client focus?", "text": "We are working on a large ongoing project that has continual feature changes. These features could take as long as 3 - 4 weeks to complete. This would of course be fine, except the client is always changing its priorities/focus based on pushes from THEIR clients, who may want certain features implemented before others. 
So, we often have to shift gears in the middle of a major feature build and start a new feature that all of a sudden has a greater priority. We then of course have the time cost of merging everything together at some point, and also the cost of that lost momentum on the original feature. Our estimates are out the window at that point. Is there any good way to handle this? Some options I've thought of: 1. assume this is the way things will be, and apply a 'distracted client' factor of up to 100% on all estimates (this is dangerous in the case where we can actually complete a feature without interruption) 2. educate the client on the costs of shifting gears, and perhaps apply a change premium to previous estimates if they want us to change to working on a different feature 3. refuse to work on more than one feature/release at a time (aside from bug fixes). This is probably not realistic. I'm looking forward to hearing about how others have tackled this. It can't be uncommon."} {"_id": "238700", "title": "Serving images from google cloud resized to 300x200", "text": "I am working on an application that uses Google Cloud storage to serve my app's images. The images are submitted from an android app to google app engine code that is supposed to upload the images to gcs, then create a link and update it in my db. This works fine. The challenge I am facing is that I need different devices to be served the same image but in different dimensions, e.g. device1 can request images to be served with size 300x300 while device2 can request the same image to be served with size 500x500. I have tried to google the issue but I don't seem to find a correct way to serve my image in different sizes; I only got some url structure that will use a blob key, then parse the size as a get parameter in the url, and the image is cropped to the required dimensions, e.g. http://lh4.ggpht.com/TgjDT-cPRr6bjrpSVQeILk93o4Ouzjo1ygMB6KpmnCHhyH5vJKKRrqxCD2bC3T09CRIP6h5QFsV_l0hnhio5bN7z=s1000-c My images' urls are in this format: http://storage.googleapis.com/myappname.appspot.com/images/random_generated_imagename.jpg Can someone suggest an idea on how I can achieve this using the url that I have? I have tried reading the image from the url, resizing the image to the size requested by the device, then pushing the image back to the device, but this looks tedious to me and utilizes a lot of server resources, given that I can have an average of 100 connections per second requesting images."} {"_id": "171546", "title": "Software licensing and code generation", "text": "I'm developing a tool that generates code from various data. The tool itself will be licensed with the MIT license, which strikes a good balance for me in terms of allowing the freedom to use and modify it, while still holding the copyright. OK, but what is the legal status of the code _generated_ by the tool? Who holds the copyright for code generated by a tool? Do I need to give users of the tool a license for the generated code, or do they already have that by virtue of it being generated by them? What is different about this code generation system (which may be relevant) is that the source information for the code generation is provided by the system itself. The user doesn't feed source data in; the source data is bundled along with it. They simply have the means to transform it in various ways (filtering out parts of the data they don't want, etc). Obviously they could edit the bundled data. 
Does that affect anything about this?"} {"_id": "214496", "title": "What options are there to programmatically use a Windows application in VB.NET? (Button Press and Keyboard Strokes)", "text": "I have this complex Windows application that I need to create an automated testing system for. I need to use VB.NET, and it needs to be able to click all kinds of things, be conscious of the windows that open in the application, the dialog boxes, and do the things that a human would do in this application. Through my research, I discovered FindWindowEx will be useful for me. I would appreciate any feedback as to what direction I should go in creating this automated testing system. (Are there any functions, libraries, or other built-in stuff that can help me? If so, what are they?) Thanks, Phil"} {"_id": "230131", "title": "Do we need Logging when doing TDD?", "text": "When doing the Red, Green & Refactor cycle we should always write the minimum code to pass the test. This is the way I have been taught about TDD and the way almost all books describe the process. But what about the logging? Honestly I have rarely used logging in an application unless there was something really complicated happening; however, I have seen numerous posts that talk about the importance of proper logging. So other than logging an exception I couldn't justify the real importance of logging in a properly tested application (unit/integration/acceptance tests). So my questions are: 1. Do we need to log if we are doing TDD? Won't a failing test reveal what's wrong with the application? 2. Should we add tests for the logging process in each method in each class? 3. If some log levels are disabled in the production environment for example, won't that introduce a dependency between the tests and the environment? 4. People talk about how logs ease debugging, but one of the main advantages of TDD is that I always know what's wrong due to a failing test. Is there something I am missing out there?"} {"_id": "18127", "title": "Is it a good idea to do TDD on low level components?", "text": "I'm considering writing a low level driver or OS components/kernels. The osdev.org folks seem to think that the important bits are not meaningfully testable this way, but I have read some discussions where people thought differently. I've looked around, but have failed to find any real life examples of TDD on low-level components. Is this something people actually do, or just something that people talk about in theory because there is not a good way to do it in practice?"} {"_id": "242989", "title": "How to implement proper identification and session management on JSON post requests?", "text": "I have a minor messaging connection to the server from a website via JSON requests. I have a single endpoint which distributes requests according to identification data. I am using an asynchronous server and handle data when it comes. Now I am thinking about extending requests with some kind of session. I tried using a REST/JSON approach, passing a single id in the JSON. For example, I register on the page, get some kind of token id as a response and use that token with each post request. A post request has three simple parts: type, data, token_id. My problem is I am not sure how to cache the token on the browser side. I am using Tornado as the webserver and saving the token via cookies like this (http://technobeans.wordpress.com/2012/08/07/tornado-cookies/). Problem is I am not sure how long to set the expiry time and how I should force the cookie to renew.
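To make it concrete, here is a rough sketch of the kind of token handling I have in mind (plain Python, not Tornado-specific; `issue_token`, `verify_token` and the token format are all made up for illustration, not from any framework):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret, never sent to the client"

def issue_token(user_id, ttl_seconds=3600):
    # Signed token the client stores and sends back with every JSON post.
    expires = str(int(time.time()) + ttl_seconds)
    payload = "%s:%s" % (user_id, expires)
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, signature)

def verify_token(token):
    # Returns the user id if the token is intact and unexpired, else None.
    try:
        user_id, expires, signature = token.rsplit(":", 2)
    except ValueError:
        return None  # malformed token: drop the request
    payload = "%s:%s" % (user_id, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered token: drop the request
    if int(expires) < time.time():
        return None  # expired: tell the client to register again
    return user_id
```

The nice part of baking the expiry into the signed token is that the server decides when the token is stale, so the client does not have to detect expiry itself: a failed verification is simply the signal to register again.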
For example, should I, when the cookie expires (I really do not know the proper way to detect it), launch an event on the client side to register again? Also, I tried to drop requests and use socket-like interfaces (SockJS) to transfer data. There I found that handling was way easier: log in, open a socket, post data, close. Where I could implement it, I was happy with it. However, I feel that keeping an open connection is not the solution to all problems, specifically for rare, irregular data transfer events. I am searching for ideas on how to have post requests properly identified and how to handle irregular JSON connections. To sum up: 1. **What is the best way to define a session for post requests?** Get a cookie when registered and use the token with each request as long as the session runs? Should I implement a timeout for the token? Are there alternative methods? Can I cache tokens for same-origin requests? What could I use on the client side (web browser)? 2. **How about safety?** What techniques should I use to throw away requests with malformed data, or too-big data, without choking the server? **Should I worry?**"} {"_id": "89741", "title": "Low impact refactoring and code cleaning of sloppy code while waiting for requirements", "text": "I inherited an existing code base for a product that is reprehensibly sloppy. The fundamental design is woefully inadequate, which unfortunately I can do little about without a complete refactor (HIGH coupling, LOW cohesion, rampant duplication of code, no technical design documentation, integration tests instead of unit tests). The product has a history, high exposure to critical \"cash-cow\" clients with minimal tolerance for risk, technical debt that will make the Greeks blush, a VERY large codebase and complexity, and a battle-weary defeatist approach to bugs by the team before me. The old team jumped ship to another division so that they have the opportunity to ruin another project. It is very rare that I experience a Technical Incompetency Project Failure as opposed to a Project Management Failure, but this is indeed one of those cases. For the moment I am by myself, but I have a lot of time, freedom of decision and future direction, and the ability to build a team from scratch to help me. My question is to gather opinion on low impact refactoring on a project like this when you have some free time during the functional requirements gathering phase. There are thousands of compiler warnings, almost all of them unused imports, unread local variables, absence of type checking and unsafe casts. Code formatting is so unreadable and sloppy that it looks like the coder suffered from Parkinson's disease and couldn't control the number of times the space bar was pressed on any given line. Further, database and file resources are typically opened and never closed safely. Pointless method arguments, duplicate methods that do the same thing, etc.. While I am waiting for requirements for the next feature I have been cleaning low-impact, low-risk things as I go and wondered if I am wasting my time or doing the right thing. What if the new feature means ripping out code that I spent time on earlier? I am going to start an Agile approach and I understand it is acceptable and normal to constantly refactor during Agile development.
Can you think of any positive or negative impacts of me doing this that you would like to add?"} {"_id": "102856", "title": "How to explain that it's hard to estimate the time required for a bigger software project?", "text": "I'm a junior developer and I find it hard to estimate how much time it takes to finish a bigger software project. I know how to structure the architecture in general, but it's hard for me to know what details I have to do and what problems I have to solve. So it's hard to estimate how much time it will take to finish a bigger project, because I don't know what problems I need to solve and how long it takes to solve them. How do I explain this to a person that is **not a software developer**?"} {"_id": "33379", "title": "Sales Manager: \"Why is time-estimation so complex?\"", "text": "A few days ago a sales manager asked me that question. But at that moment I didn't know an answer he could understand. He isn't a programmer! At the moment I work on a product which is over 8 years old. Nobody thought about architecture or evolvability. I have a swamp of code in front of me every day which is not tested. Because of that, time estimates are very difficult for me. How can I describe that problem to a **salesman**? Not only my swamp-code problem, but in general!"} {"_id": "210788", "title": "Setting Deadlines in software development", "text": "I have been working on a system alone for about two years. I inherited the system from a contractor who spent about two years working on it before me (alone). The system is not particularly well designed because there is business logic, presentation logic and data logic mingled together. It is a very complex system, which I am trying to refactor as I go along, and this takes time. A new developer was recruited but he seems to be taking more of a project management role. He is very direct, asking exactly how long things will take and questioning all my assumptions and predictions, which is difficult because of the complexity and because I am the only expert at the moment. I am very organised in my personal life. For example, I was asked what time I would arrive in Manchester last week, so I provided an exact time catering for accidents and road works on the way. I find it difficult to apply the same principles at work in software development at the moment. Don't get me wrong, I have worked with project managers on less complex projects in the past and have had a good relationship, but then the projects were less complex and I always delivered before schedule. I am struggling with this particular project manager and it is causing stress. How do other developers deal with project managers who want answers and accurate deadline dates?"} {"_id": "72923", "title": "Network application framework/API/etc", "text": "With all the web2.0 hype and webapps being all the rage, the only advantage from a corporate POV that I can think of webapps having is that it is easier to service your user base: upgrades become easier, among several things. It's just that the browser is not a good platform. I've been mulling this over for a while now. My original idea was to use Python, then I found out about QSA. I like Qt. Its widgets are excellent for most purposes, and it's cross platform. A generic Qt application is installed on the client's machine. They point it to a URL to begin executing the signed/encrypted application at that location. The actual application could be written in Qt Script. It could be developed very much like a traditional MVC application.
Or it could, and probably should, be developed like Wt, so that a \"web\" application can be coded the same way as a \"desktop\" application. Before I go reinventing the wheel, are there any such platforms/frameworks in existence? If I do go inventing this wheel, what are some things I should be aware of?"} {"_id": "72928", "title": "What does Douglas Crockford mean when he says jQuery doesn't scale?", "text": "In the Q&A section of this talk, Douglas Crockford says that jQuery doesn't scale as well as some other popular libraries. What does he mean by that, and what is it about the other libraries that makes them more scalable?"} {"_id": "30091", "title": "When returning from a period of not programming, do you find you've improved?", "text": "It seems as though whenever I take an extended break from programming--whether to pursue other interests or simply because I fall out of the habit for a while--I invariably find that when I return to a project and set to coding, I come with an abundance of new ideas, novel approaches, and just plain better code. It may be because I have a lot of other creative interests besides programming, and my mind likes to find correlation and crossover between them, so while I'm doing one thing, in the back of my mind I'm usually also applying it to another. So what's your experience? Do you ever return from a break (whether intentional or not) feeling not only refreshed, but also somehow _noticeably improved?_ Is it actually the norm?"} {"_id": "228139", "title": "What is a uniform way to express Country, City, State? (ISO standard, etc)", "text": "I need to encode data into a user token that describes the user's Country, State, and city. I understand that different descriptors apply to different government hierarchies, but am looking for a standard of some type that has already sorted this out. What ISO standard, binary format, etc is best at describing a user who may be from any country?"} {"_id": "201418", "title": "Getting rid of Massive View Controller in iOS?", "text": "I had a discussion with my colleague about the following problem. We have an application where we need filtering functionality. On any main screen, within the upper navigation bar, there is a button in the upper right corner. Once you touch that button, a custom-written Alert-View-like view will pop up modally, behind it a semitransparent black overlay view. In that modal view, there is a table view of options, and you can choose one exclusively. Based on your selection, once this modal view is closed, the list of items in the main view is filtered. It is simply a modally presented filter to filter the main table view. This UI design is dictated by the design department; I cannot do anything about it, so let's accept this as a premise. Also, the main filter button in the navbar will change colours to indicate that the filter is active. The question I have is about implementation. I suggested to my colleague that we create a separate XYZFilter class that will * be an instance created by the main view controller * acquire the filtering options * handle saving and restoration of its state - i.e. the last filter selected * provide its two views - the overlay view and the modal view * be the datasource for the table in its modal view. For some unknown reason, my colleague was not impressed by that approach at all. He simply wants to do these functionalities in the main view controller, maybe out of being used to doing it that way in the past :-/ Is there any fundamental problem with my approach?
I want to * keep the view controller small, not to have spaghetti code * create a reusable component (for use outside the project) * have a more object-oriented, decoupled approach * prevent duplication of code, as we need the filtering in two different places but it looks the same in both. Any advice?"} {"_id": "47347", "title": "Anyone know good references for Machine Learning Algorithms and Image Recognition?", "text": "I need it for my thesis and for some reason I am having a hard time finding decent books or websites for it. My thesis topic is \"Classification of Modern Art Paintings using Machine Learning Approach\". My goal is to classify examples of modern art paintings into their respective modern art movements (expressionism, realism, etc.) using a machine learning approach. Also, suggestions and comments about my thesis are greatly appreciated."} {"_id": "89297", "title": "Looking for the tool that generated JSON Rail-Road grammar descriptions", "text": "I am currently working on a little language, and I'd like to express its grammar more rigorously. I know about EBNF and it's great for generating parsers (Bison/Yacc), however it's not that easy to visualize. The Wikipedia page on Syntax Diagram points out that part of the success of JSON is due to the display of its grammar as a Rail-Road diagram. Indeed, if one goes to the JSON webpage, one can see nice diagrams that are easily accessible and (I find) visually attractive, for example: ![http://json.org/object.gif](http://i.stack.imgur.com/raLaZ.gif) It easily conveys the structure of an `object` in JSON, dealing with multiple alternatives and repetitions easily. A very similar way of expressing the grammar can be found on SQLite (though I prefer the JSON one). ![http://www.sqlite.org/images/syntax/delete-stmt.gif](http://i.stack.imgur.com/Oiv6H.gif) Does anyone know if there was a tool used to generate the JSON diagrams? (And if so, where can I find it?) The Wikipedia page references some generators but none seem to generate such nice diagrams."} {"_id": "155111", "title": "What are some easy techniques to scan books for new information?", "text": "I find it irresistible to keep purchasing cheap programming and technical e-books in fields such as Drupal, PHP, etc., and also compulsively download free material made available such as those from Microsoft's developer blog... The main problem with the large library I've developed is that there are many chapters (especially the first few) in these books packed with information I already know, but with helpful tidbits hidden in between. The logical step would be to skip those chapters and read the ones I don't seem to know anything about, but I'm afraid I may lose out on really important information this way. But naturally it is tedious to have to read about variables, functions and objects all over again when you are trying to know more about the Registry pattern, for example. It's hard to research on the net for this, because my question itself seems vague and difficult to formulate into a single search query. I need people-advice - what do you do in this situation?"} {"_id": "6556", "title": "What do you think about JavaFX script retirement?", "text": "Oracle will no longer develop JavaFX Script (the language) and now the applications would be coded through the API. I think JavaFX Script was kind of neat. Does it seem reasonable to stop the development of the language, and use it only through the API?
Links: http://jonathangiles.net/blog/?p=916 http://java.dzone.com/articles/javaone-2010-alternative-jvm"} {"_id": "213124", "title": "Is it bad practice to store metadata information in file names? Better solutions?", "text": "I have noticed where I work people are keen on storing information in file names, and parsing the file names. To me this doesn't seem to be especially good practice. I already see the occasional issues with scripts globbing for a file, and getting the wrong one because another file matches first. We are also discussing how to get around problems with separators for the fields. Is it considered bad practice or not? What are other accepted solutions for retrieving files from a file system based on some type of metadata?"} {"_id": "225071", "title": "Load all templates at startup?", "text": "I am developing a jQuery Mobile app. In this app, I often use Mustache.js templates in separate HTML files. Actually, every template is needed by the user, but my app loads a template (via the GET method) only when it is needed. For now, all the templates are small and they're all used by the client at runtime. But obviously this won't be the case for long, in case I add extensions that will be needed or not by the users. I could load every template at once, but I'm afraid this could become a blocking point in the future. Based on your experience, what is the best thing to do? Load all templates at once, or load them one by one, when needed?"} {"_id": "213121", "title": "Is there any standard for design document", "text": "This question is purely from an academic point of view. As part of the software engineering lab, a design document is to be created after generating the SRS. Is there any standard for creating the design document, since we followed the IEEE standard for creating the SRS? What all things can be included in the design document from the academic point of view? I am thinking of including different UML diagrams."} {"_id": "225073", "title": "How should I test my application?", "text": "I have made a simple application which searches for files and folders on a user's computer. Since I am a student currently in my 1st year, don't have any formal training, and made my application while I was still learning C# and the .NET framework, I didn't have knowledge about unit tests and other good testing methods used by professionals. Although most of my code is working fine, sometimes it throws exceptions or some other unexpected situation occurs and the code breaks. Keeping the above experiences in mind, I don't think my application is ready for deployment and so, I would like to test my application in a professional way, and be assured that my application is bug free. Therefore, I want to ask professionals here, keeping my situation in mind: how should I start testing my application? What are the steps that I should follow to ensure the quality of my application?"} {"_id": "225079", "title": "Unit Testing in iOS -- Should I split out my Data Model into its own class?", "text": "I'm attempting to try out unit testing for the first time in new iOS activity for work. I love the idea of unit testing, but always find the specifics to be... messy. I get the general principles -- I have read a lot about it so I wanted to ask a specific question. My situation: * I have a UITableView that can have anywhere from 10 to 10,000 items. I would like to do partial loading. Load the first 100 or so, and then when the user scrolls down half way, load the next 100. Or something like that.
* My calls to the DB are async (I do not control this, although I guess I could block and wait on the main thread if I really wanted to). My first thought is to have an NSMutableArray and a function to load/update the data in my UIViewController. I also need to keep track of whether the data is finished loading and whether the data is currently being reloaded (because it's an async call). However, I'm wondering if it's better to split this into its own class for testability? Let's call it SmartMutableArray? My concerns are 1. If I do this, it requires more selectors than before. I already need to create a callback for my call to the database, and now I need to create another one for my UIViewController to be notified when SmartMutableArray has completed loading. I would think this makes my code harder to read. 2. Because my new array is updated in the background, I think I need to use `[self performSelectorOnMainThread:@selector(addObjectsFromArray:) withObject:response.balances waitUntilDone:YES];` to update the new array in my new SmartMutableArray class. Which seems weird to me: this class needs to know it's a data source for a UI element. 3. If I don't break it up, I have to test all this loading indirectly by calling my UITableView delegate commands. I've plowed ahead with creating the new class for this list... but I'm curious what people with more experience think of something like this. Is it overkill? Thanks for any input, sorry for the length. Please let me know if I'm missing any details that would be useful though."} {"_id": "206668", "title": "Using multiple Git repositories instead of a single one containing many apps from different teams?", "text": "I am migrating a big 10-year-old CVS repository to Git. It seemed obvious to split this multiple-projects repository into several Git ones. But the decision-makers are used to CVS, therefore their point of view is influenced by the CVS philosophy. To convince them to migrate from one CVS repo to different Git repositories I need to give them some arguments. When I speak with mates who have worked on Git repos for years, they say that using multiple Git repos is the way to use Git. I do not really know why (they give me some ideas). I am a newbie in this field so I ask my question here. **What are the arguments for using multiple Git repositories instead of a single one containing different applications and libraries from different teams?** I have already listed: * branches/tags impact the whole Git repository's files => pollutes other teams' projects * a supposed 4GB limit on Git repo size, but this is wrong * git annotate may be slower on a _bloated_ Git repo... **EDIT:** * Eamon Nerbonne has noticed the related question: Choosing between Single or multiple projects in a git repository? * The reason the team managers finally accepted the split: the single Git repo (550 MB) was requiring 13 minutes to be cloned on Windows. * The _bloated_ CVS repo was split into 100 Git repositories: * each dead app in one repo * each stabilized library in one repo (source code almost never changed any longer) * related apps/libs kept together in one repo * moved large files not used for compilation (config...) to other repos (Git does not like large files) * skipped other irrelevant files (`*.jar`, `*.pcb`, `*.dll`, `*.so`, `*.backup` ...)
* Successfully installed the `repo` tool used by the Android Open Source Project in order to handle all these Git repos: * easy installation on Linux * more difficult on Windows because of Cygwin and NTFS native symlink requirements"} {"_id": "206667", "title": "Where to put entity model classes in case of using a dataservice layer?", "text": "If my solution has both a \"dataservice\" project and a \"business logic\" project, where do the entity models, which represent database tables, belong? At first I thought of putting them in the dataservice layer, but then I would need duplicate models in the business logic layer if I wanted to provide reusable logic dealing with or being dependent on database data. Putting the models in the business logic project would force the dataservice layer to be dependent on the core project."} {"_id": "206664", "title": "Custom PHP Template Engine", "text": "I've been developing a custom PHP template engine to suit just my needs and also to get a little more practice with PHP. What I did was to create a Template class that simply receives as constructor parameters a path to a template file and a string that should be the placeholder for the contents of the pages. In that case, the template file is an HTML file with some placeholders written as [placeholder]. These will be filled with real content by another class. So for instance, [title] should be filled with the title, [content] with the content of the selected page and so on. I've also created a Page class that receives as constructor parameter the directory of the pages, and then has methods to set which file name it should use (this is filled in as we like at initialization; when working with friendly URLs, the most common way I use this is setting it from the first piece of the URL) and methods to set more data as pairs of placeholder string and real content. My only problem is with how to deal with titles and meta tags. For each page there's a title and a collection of meta tags. My only way to deal with this was to create, in different files, one array with titles and one array with meta tags. Both of them receive as index the name of the page file. So for instance $titles[\"home\"] should be the title of the home page and $meta_tags[\"home\"] should be the HTML code for the meta tags of the home page. The problem is that as soon as there are lots of pages this has an unnecessary memory cost: the system would be loading all the titles and all the meta tags on every request, while it must load only the required ones. I thought of using a database, but it seems like killing an ant with an atomic bomb. So the next best thing I thought of was XML. Now, what's the best solution for this kind of situation? Use XML, use a MySQL database, or some other method? As I've said, I know that the recommendation is to use the template engines that are already out there, since they are tested lots of times, are already stable and so on, but I'm struggling with those things as a way to get more practice developing with PHP. Thanks very much in advance for the help.
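EDIT: to make the \"load only the required one\" idea concrete, here is the kind of lazy per-page lookup I mean, in Python-style pseudocode (my real code is PHP, and the file layout here is made up just for illustration):

```python
import json
import os

PAGES_DIR = "pages-meta"  # one small file per page, e.g. pages-meta/home.json

def load_page_meta(page_name):
    # Load only the title/meta for the page actually being served.
    path = os.path.join(PAGES_DIR, page_name + ".json")
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data["title"], data["meta_tags"]

# e.g. pages-meta/home.json contains:
#   {"title": "Home", "meta_tags": "<meta name=\"description\" ...>"}
# title, meta_tags = load_page_meta("home")
```

With one small file per page, each request pays only for the page it renders; the same idea would work with an XML file per page or a two-column database table, so the lookup is the point, not the storage format.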
Also, I didn't think it was needed to post any code I've written, but if some of the code will help, please ask and I'll post it."} {"_id": "206663", "title": "C++ class with only pure virtual functions: what's that called?", "text": "So I'm looking for some input/consensus on what terminology we should be using to describe something that looks like this: class Printable { public: virtual void printTo(Printer *) = 0; virtual double getWidthInPoints() = 0; virtual double getHeightInPoints() = 0; }; So this is a class that only has pure virtual methods. Any class that inherits it needs to implement the whole thing, i.e. there is no partial implementation. So what is this called? I have mostly seen two different names: 1) _Interface_ Used in Windows COM programming (perhaps it came from the IDL), it's a keyword in both C# and Java and also a compiler directive/keyword in Objective-C (but isn't the same thing). I've heard engineers using the phrase 'interface' synonymously with 'API', but, at least in this scenario, it's a single thing whereas an API could comprise more than one. 2) _Protocol_ I've seen this used in various open-source libraries such as Bloomberg BSL, and it's a compiler directive/keyword in Objective-C, but it does not necessarily need to be in the inheritance hierarchy. In the broader computer science sense, a protocol is a formal definition of information exchange between two parties, like TCP and HTTP, but this definition kind of holds for the above class too.
$this->checkPermissions( $this->permissions, $this->user ) { redirect( base_url ); } $formulario = new $this->formulario_gestion( null ); $formulario->setFormValues( $this->get_update_values() ); // More code below... } // Doing this to make sure I'll overwrite it public function get_update_values() { throw new Exception('No has definido get_update_values'); } // More Methods and things... And then, I use my child class to initiate all the values required in the parent, overwrite all the functions I need have to be overwritten, and go: Child Class class Webs extends AdminController { function __construct() { // Initiating parent values: $this->permissions = 'webs'; $this->url_action = strtolower( __CLASS__) ; $this->formulario_gestion = 'FormularioWebs'; parent::__construct(); } // Overwriting what I need to be overwritten public function get_update_values() { $values = $array( 'field1' => 'value1' , 'field2' => 'value2' ); return $values; } And this work quite well: All the pages share the same behaviour and I only write relevant code. **My problem is** : This is a nightmare when building a whole CRUD with it: I ended up with many properties I only use in some methods, and I don't know if there is a \"standard\" or correct way of doing it. I mean what are the flaws of doing this in this way? It is correct? I don't like having so many properties laying out there that are only going to be used in a method, and I don't know how to encapsulate them in a way only the method use it it'll have it. What I ended up is something like the code bellows, but I don't really like it, as you don't know in the parent where the value is declared. Method in Parent Class: public function show() { echo $this->var; } Method in Child Class: public function show() { $this->vars = 'value' parent::show(); }"} {"_id": "239070", "title": "Merging around 15 small Git repos of non-optional centralized web service components to a single large repo", "text": "In a centralized web service we break down the components into various small Git repos by software modules, e.g. authentication module, authorization module, data access module etc. (around 15 repos at the moment) The good thing is it is easy to manage the smaller code base, however, our productivity has decreased a lot since quite a lot of changes need to be update several repos at once. Also, deployment is more difficult as there are multiple versions of module involved, we always need to think about the dependency. I am considering to merge all the modules back to a single repo, because * our services must require all the modules to exists, they are not optional for our service * we are not a big team (3 people actually) and the overhead in maintaining too many repos does not worth it, they all need to have knowledge on all the code bases What do you think? Any pros and cons if we merge into a giant repo?"} {"_id": "173039", "title": "Is modern C++ replacing C#? Is Microsoft pushing developers to adopt C++?", "text": "I hear about modern C++ popularity and some talks about migrating back to C++ from C# or other C-like languages. I know about C++11 features but I would like to hear your experiences, especially from developers who migrated from C# to C++. More importantly, does Microsoft push developers to use C++? 
If yes, why?"} {"_id": "231666", "title": "What exactly is \u201ccomputer systems\u201d?", "text": "My professor made a comment today - \"...They've been having trouble with filesystem performance, and since they're more graphics guys, they asked us systems guys to help out...\". What exactly is \"computer systems\"? What does it mean to be a \"systems guy\"? And, what is a \"systems guy\" passionate about? For contrast, what is not computer systems? What would possibly bore a \"systems guy\"?"} {"_id": "82672", "title": "One-time income with recurring costs: how to mantain an app in the cloud", "text": "You have an idea of a software/app/webapp/website that you would like to spend some effort and implement. Your idea uses a cloud system to keep things up and you think you could earn some money by distributing your product and charging a **small** one-time fee. But there is a problem: the first thing you realize is that one day that user using your system may eat all the money he gave you with his indirect recurring little spending in the cloud. How would you do the math to find out if your idea was feasible?"} {"_id": "173037", "title": "iOS app with a lot of text", "text": "I just asked a question on StackOverflow, but I'm thinking that a part of it belongs here, as questions about design pattern are welcomed by the faq. Here is my situation. I have developed almost completely a native iOS app. The last section I need to implement is all the rules of a sport, so that's a lot of text. It has one main level of sections, divided in subsections, containing a lot of structured text (paragraphs, a few pictures, bulleted/numbered lists, tables). I have absolutely no problem with coding, I'm just looking for advice to improve and make the best design pattern possible for my app. My first shot (the last one so far) was a `UITableViewController` containing the sections, sending the user to another `UITableViewController` with the subsections of the selected section, and then one _strange_ last `UITableViewController` where the cells contain `UITextViews`, sections header help structure the content, etc. What I would like is your advice on how to improve the structure of this section. I'm perfectly ready to destroy/rebuild the whole thing, I'm really lost in my design here.. As I said on SO, I've began to implement a `UIWebView` in a `UIViewController`, showing a html page with JQuery Mobile to display the content, and it's fine. My question is more about the 2 views taking the user to that content. I used `UITableViewController`s because that's what seemed the most appropriate for a structured hierarchy like this one. But that doesn't seem like the best solution in term of user experience.. **What structure / \"view-flow\" / kind of presentation would you try to implement in my situation?** As always, any help would be **greatly** appreciated! 
* * * Just so you can understand better the hierarchy, with a simple example : -----> Section 1 -----> SubSection 1.1 -----> Content | -----> SubSection 1.2 -----> Content | -----> SubSection 1.3 -----> Content | | | UINavigationController -------> Section 2 -----> SubSection 2.1 -----> Content | -----> SubSection 2.2 -----> Content | -----> SubSection 2.3 -----> Content | -----> SubSection 2.4 -----> Content | -----> SubSection 2.5 -----> Content | -----> Section 3 -----> SubSection 3.1 -----> Content -----> SubSection 3.2 -----> Content |------------------| |--------------------| |-------------| 1 UITableViewController 3 UITableViewControllers 10 UIViewControllers (3 rows) (with different with a UIWebView number of rows)"} {"_id": "82676", "title": "server and request understanding", "text": "I want to know what a server does to run a php application. Below is what I think: client A types www.blahblahblah.blah/ 1. Server resolves url and directory etc. 2. Server go the index.php 3. index.php has a Singleton Pattern Class in it with a static variable called instance. Now does the server allocate the memory to that static variable in its own RAM so that all the requests following this first one uses the same static variable? OR for every new request does the server allocate new memory and that new memory will have a new space allocated to that static variable? My Confusion: if every request is run in its own memory space then what is a persistant connection? Second thing I wondering about: Can I have a desktop program which is continuously sending a special key to my web application and my web application is sending the key back continuously to make HTTP a connection full instead of connection less? That way I can confirm who is connected to my APP as a client instead who is connected to INTERNET. I know sessions but they make http connection less and then chance of spoofing and session hijacking is there. I know you can make session secure but still my App won't know if the client is dead and can delete the data from the session and tell others that client blah is disconnected."} {"_id": "173035", "title": "Performance impact of not implementing relationships at the database level?", "text": "Let's imagine a data model with customers and invoices. There is a 1 to n relationship between a customer and its invoices. We uses an ORM (like Hibernate). One can explicitely implement the 1-n relationship (using JPA for example) or not. If not, then one must do a bit more work to fetch invoices. However, it is much easier to maintain, improve and develop the data model of applications where relationships between objects are not explicitely implemented in the database. My question is, has anyone noticed a significant performance impact when not implementing the relationships in the database?"} {"_id": "39119", "title": "Strategy / resources for writing LISP webservices?", "text": "Background: I'm looking to write some fully functional webservices in Common Lisp as an April Fools prank on the rest of the development team at my company. There are two pieces to this: reading info from / writing it to a MySQL database, and receiving / processing / responding to requests over HTTP. (Actually, there's a third piece, writing automated tests, but my QA partner- in-crime is going to handle that part.) After some Googling I found a good resource here ( http://www.ymeme.com/creating-dynamic-websites-lisp-apache.html ), but I'm surprised that there's seemingly only the one walkthrough. 
Does anyone know of others, or can anyone share personal experiences with writing webservices in CLisp?"} {"_id": "74802", "title": "Cloud computing cost savings for large enterprise", "text": "I'm trying to understand whether cloud computing is meant for small to medium sized companies OR also for large companies. Imagine a website with a very large user base. The storage and bandwidth demands as well as the number of database transactions are incredibly high. The website might be hosting videos, music, images, etc. that keep the demands high. Does it make sense to be in the cloud when you know you need huge volumes of storage, bandwidth, and GET,PUT,etc. requests? (Each of these variables costs money in the cloud) OR does it make sense to build your own infrastructure? I can see the cost savings of cloud computing if you are a small business, but if you were aiming at the next big thing on the Internet, I can't quite see the benefits."} {"_id": "16608", "title": "What would you think of a job based on mostly doing the proof of concept?", "text": "I'm working as a developer in a small software company whose main job is interfacing between separate applications, like between a telephony system and an environment control system, between IP TVs and hospitality systems, etc...And it seems like I am the candidate for a new job title in the company, as the person who does the proof of concept of a new interfacing project and does some R&D for prototyping. What do you think the pros and cons of such a job would be, considering mainly the individual progress/regress of a person as a software engineer? And what aspects would you consider essential in a person to put him/her in such a job position?"} {"_id": "12777", "title": "Are null references really a bad thing?", "text": "I've heard it said that the inclusion of null references in programming languages is the \"billion dollar mistake\". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this: Customer c = Customer.GetByLastName(\"Goodman\"); // returns null if not found if (c != null) { Console.WriteLine(c.FirstName + \" \" + c.LastName + \" is awesome!\"); } else { Console.WriteLine(\"There was no customer named Goodman. How lame!\"); } You could say this: if (Customer.ExistsWithLastName(\"Goodman\")) { Customer c = Customer.GetByLastName(\"Goodman\") // throws error if not found Console.WriteLine(c.FirstName + \" \" + c.LastName + \" is awesome!\"); } else { Console.WriteLine(\"There was no customer named Goodman. How lame!\"); } But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException by virtue of being more descriptive. Is that all there is to it?"} {"_id": "124109", "title": "Is there a canonical book on design patterns?", "text": "I am interested in learning design patterns and would like to know what are considered top tier books in learning this subject. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on design patterns? What about that book makes it special?"} {"_id": "94310", "title": "Setting realistic expectations for deadlines", "text": "I'm a tech lead for a small team. One of the major tasks on my plate is communicating with the client. 
One thing I find particularly difficult is dealing with deadlines because they are mandated by the client and I'm frequently not consulted. Usually, the interaction follows the following pattern. The client comes up with a feature they want to add, Feature X. Feature X would look good in the next week's app release that is about 6 business days away. At this point, the feature request needs to go through approval and there are frequently other dependencies that need to be deal with. Eventually, N days later, the feature request trickles down to my team. Even if the original dead line (that was set by a non-developer manager) was achievable it no longer is. My team is blamed, ~~feels discouraged and there's an overall atmosphere of defeat~~ , I feel discouraged and defeated. Clearly the overall process is broken. Unfortunately, there's not much I can do because I'm not in a position of power here. My current approach is to gently remind the client about our start date versus the deadline, the scope of the Feature, etc. This feels a lot like I am making excuses though. Have you guys been in similar situations? What has/hasn't worked for you?"} {"_id": "97768", "title": "Math Major wants to become a Software Engineer", "text": "I am a math major, what would be the best field of computer science/computer engineering for me to go into, where I could apply my skills that I used as a math major(i.e. Linear Algebra, Real Analysis, Number theory, Numerical analysis, etc...) Thanks"} {"_id": "84857", "title": "How defensive should we be?", "text": "We've been running Pex over some code, and it has been showing some good things (well bad things, but showing them before it gets to production!). However, one of the nice things about Pex is that it doesn't necessarily stop trying to find issues. One area we found is that when passing in a string, we were not checking for empty strings. So we changed: if (inputString == null) to if (string.IsNullOrEmpty(inputString)) // *** That fixed the initial issues. But then, when we ran Pex again, it decided that: inputString = \"\\0\"; was causing problems. And then inputString = \"\\u0001\"; What we've decided is that defaults can be used if we encounter `// ***` and that we are happy seeing the exception caused by any other odd input (and dealing with it). Is that enough?"} {"_id": "88168", "title": "What do you do to make sure you take proper/enough breaks, while avoiding unwanted side-effects of break taking?", "text": "preamble> It seems to me that computer programmers are one of a select few groups of people who actually take pleasure from sitting in front of computers for long periods of time. Most people in other professions actively dislike their time at computers, and do their best to avoid it (so, I assume, they don't have problems taking breaks). At least for me, having external cues for taking breaks, and clear instructions on what to do with each break (stretch, go for a walk, close my eyes, look into a distance of preferably a few km and focus on faraway objects, etc...), is a must. So far, I've just been making up the breaks and tools to get them as I go along, based on what looks to be low-specificity information found on the net (generic stuff ala ergonomics advice for office staff). 
This has led to all sorts of side effects - loss of attention as I get distracted if I walk around, breaks in flow with alarm clocks interrupting my thoughts, and people around me assuming I'm low on work due to the frequency of my walking around compared to everyone else. /preamble> **tl;dr** * Taking breaks is important * My internal break taking system doesn't work, and ad-hoc ones have unwanted side effects * What do you do to make sure you take proper breaks? * How do you avoid unwanted side-effects, such as getting distracted or interrupting flow or giving your co-workers the impression you're spending a lot of time goofing off?"} {"_id": "97766", "title": "Roll your own web crawler to crawl one specific website that has multiple entries", "text": "What sort of languages would be able to handle writing your own web crawler? Could PHP handle this? I'm quite good with PHP (following best practices etc). But I'd like a good reason to learn a new language if I need to. The idea is to crawl one specific website that has multiple entries, much like an RSS feed, but they don't offer that an RSS feed of the site..."} {"_id": "120063", "title": "Is it typical for a provider of a web services to also provide client libraries?", "text": "My company is building a corporate Java web-app and we are leaning towards using GWT-RPC as the client-server protocol for performance reasons. However, in the future, we will need to provide an API for other enterprise systems to access our data as well. For this, we were thinking of a SOAP based web service. In my experience it is common for commercial providers of enterprise web applications to provide client libraries (Java, .NET, C#, etc.). Is this generally the case? I ask because if so, then why bother using SOAP or REST or any standard web services protocol at all? Why not just create a client libraries that communicate via GWT-RPC?"} {"_id": "229662", "title": "How to collect seed data for a Products/Services User review website?", "text": "Data = information like name, address of local businesses. Companies, their products, services etc. Once users start engaging then they themselves can add/edit data wikipedia style. because if there is no data initially then users will not be attracted to come, and if there are no users then cannot expand the data. What are the various options to bootstrap with a seed dataset ? \\- Crawl and find info from various web resources \\- Collect and enter it manually \\- Hire data collectors \\- Any place to buy such data \\- Any other ?"} {"_id": "215407", "title": "Repository query conditions, dependencies and DRY", "text": "To keep it simple, let's suppose an application which has `Accounts` and `Users`. Each account may have any number of users. There's also 3 consumers of `UserRepository`: * An admin interface which may list all users * Public front-end which may list all users * An account authenticated API which should only list it's own users Assuming `UserRepository` is something like this: class UsersRepository extends DatabaseAbstraction { private function query() { return $this->database()->select('users.*'); } public function getAll() { return $this->query()->exec(); } // IMPORTANT: // Tons of other methods for searching, filtering, // joining of other tables, ordering and such... } Keeping in mind the comment above, and the necessity to abstract user querying conditions, How should I handle querying of users filtering by `account_id`? I can picture three possible roads: ### 1\\. 
Should I create an `AccountUsersRepository`? class AccountUsersRepository extends UserRepository { public function __construct(Account $account) { $this->account = $account; } private function query() { return parent::query() ->where('account_id', '=', $this->account->id); } } _This has the advantage of reducing the duplication of`UsersRepository` methods, but doesn't quite fit into anything I've read about DDD so far (I'm rookie by the way)_ ### 2\\. Should I put it as a method on `AccountsRepository`? class AccountsRepository extends DatabaseAbstraction { public function getAccountUsers(Account $account) { return $this->database() ->select('users.*') ->where('account_id', '=', $account->id) ->exec(); } } _This requires the duplication of all`UserRepository` methods and may need another `UserQuery` layer, that implements those querying logic on chainable way._ ### 3\\. Should I query `UserRepository` from within my account entity? class Account extends Entity { public function getUsers() { return UserRepository::findByAccountId($this->id); } } _This feels more like an aggregate root for me, but introduces dependency of`UserRepository` on `Account` entity, which may violate a few principles._ ### 4\\. Or am I missing the point completely? Maybe there's an even better solution? **Footnotes:** Besides permissions being a Service concern, in my understanding, they shouldn't implement SQL query but leave that to repositories since those may not even be SQL driven."} {"_id": "215405", "title": "Object inheritance and method parameters/return types - Please check my logic", "text": "I'm preparing for a test and doing practice questions, this one in particular I am unsure I did correctly: We are given a very simple UML diagram to demonstrate inheritance: I hope this is clear, it shows that W inherits from V and so on: |-----Y V <|----- W<|-----| |-----X<|----Z and this code: public X method1(){....} method2(new Y()); method2(method1()); method2(method3()); **The questions and my answers:** 1. **Q:** What types of objects could method1 actually return? **A:** X and Z, since the method definition includes X as the return type and since Z is a kind of X is would be OK to return either. 2. **Q:** What could the parameter type of method2 be? **A:** Since method2 in the code accepts Y, X and Z (as the return from method1), the parameter type must be either V or W, as Y,X and Z inherit from both of these. 3. **Q:** What could return type of method3 be? **A:** Return type of method3 must be V or W as this would be consistent with answer 2."} {"_id": "120069", "title": "How can I justify a technology over another? (Java over .NET)", "text": "We are working in a Java/.NET company and my team and I are planning a project for a client. One of the requirements is that the project has to be done in .NET I've asked about this requirement, and the client said that it doesn't matter, and that if I have a good reason we can use other technology. But, I have to justify the decision. As a Project Manager / Analyst I'm interested in making the project in Java because: * The team knows java much better, regarding the language and frameworks * I don't know anything about .NET technology (and maybe we could make bad decisions thinking in a Java way to do things) * There are other people in company that have more skills in .NET but they have other projects with more priority. For experience, I'm sure that if we use Java, the project will have much more quality. 
But this arguments could be weak from the client perspective. How can I justify making the project in Java? EDIT: I'm not asking if one technology is better than other. \"It's not a technology war\" question."} {"_id": "154642", "title": "What is the difference between a freelancer programmer and a Programmer working in a Software company?", "text": "I was told by a HR department that freelancing experience is not considered as professional experience. What could be the reason?"} {"_id": "208396", "title": "Ruby: Multithreading a CSV with output", "text": "I have a script written in Ruby that has maxed out a core in my server's Xeon processor for the last 2 hours. Since it's currently only using 1 of four possible cores, I want to try and rewrite the script to take advantage of all four cores. I can use the .each_slice(n) method on the array that contains my data, but I'm curious as to what would then be the best/most efficient way to then write this data to a file. It seems that I have a couple options. 1. Pass the file object to the functions being called by the Thread.new function (I assume this is legal in ruby?) and have them write as they see fit. 2. Have each function store the results in arrays, return the array on completion and let the main program then write to the disk. 3. Pass the same array object to each item and have them all add to it, then sort it. My assumption is that although 2 would probably be more memory intensive, it'll by far be most efficient method. Is there a different way to accomplish what I'm doing?"} {"_id": "153612", "title": "The architecture and technologies to use for a secure, fast, reliable and easily scalable web application", "text": "^ _For actual questions, skip to the lists down below_ I understand, that his is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application(I don't know if application is the correct word for it, but I will proceed w/ that for now), that one day might need to be everything mentioned in the title. I am bound by nothing. That means that every language, OS and framework is acceptable, but only if it proves it's usefulness. And if you are going to say, that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something, that wouldn't stand in my way later on. I have done quite a bit reading on this subject, but I still don't have a clear picture, to what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you, that it doesn't matter. I have heard of _12 factor app_ though, if you have any similar guidelines or what is, to suggest the please, go ahead. For the sake of keeping your answers as open as possible, I'm not gonna provide you my experience regarding anything written in this question. 
^ _Skippers, start here_ First off - the weights of the requirements are probably something like this (on a scale of 10): * Security - 10 * Speed - 5 * Reliability (concurrency) - 7.5 * Scalability - 10 _Speed and concurrency are not a top priority, in the sense that the program can be CPU intensive, and therefore slow, and only accept a not-that-high number of concurrent users, but both of these factors must be improvable by scaling the system_ Anyway, here are my questions: * **How many layers should the application have, so it would be future-proof and could best fulfill the aforementioned requirements?** For now, what I have in mind is the most common version: 1. A completely separated front end, that might be a web page or an MMI application or even both. 2. Some middleware handling communication between the front and the back end. This is probably a server that communicates w/ the front end via HTTP. How the communication w/ the back end should be handled is probably dependent on the back end. 3. The back end. Something that handles data through resources like a DB etc. and does various computations w/ the data. This, as the highest-priority part of the software, must be easily spread to multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back end processes takes this request, chops it up into smaller parts and puts these parts of the request back onto the same queue as the initial request, after which these parts will then be handled by other back end processes. Something *map-reduce*y, so to speak. * **What frameworks, languages etc. should these layers use?** 1. The technologies used here are not that important at this moment, you can ignore this part for now 2. I've been pointed to node.js for this part. Do you guys know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? 3. I actually have no good idea what to use for this job; there are too many options out there, so please direct me. This part (and the 2. one also, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or 1 for starters) hosting the back end are going to be virtual machines. Please do give suggestions on any part of the question that you feel you have comprehensive knowledge and/or experience of. And also, point out if you feel that any part of the current set-up means an instant (or even distant) failure or if I missed a very important aspect to consider. I'm not looking for a definitive answer for how to achieve my goals, because there certainly isn't one, for I haven't provided you w/ all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let be re-written by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn by doing something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway. Thanks in advance to anyone who takes the time to help me out here. PS. When I do seem to come up w/ a good architecture/design for this project, I will certainly make it an open project and keep you guys up to date w/ its development. As in, what you could have told me earlier, etc.
_For obvious reasons the very same question got closed on SO, but could you guys still help me?_"} {"_id": "69542", "title": "How to explain OOP to a matlab programmer?", "text": "I have a lot of friends who come from electrical / physical / mechanical engineering backgrounds, and are curious about what \"OOP\" is all about. They all know Matlab quite well, so they do have a basic programming background; but they have a very hard time grasping a complex type system which can benefit from the concepts OOP introduces. **Can anyone propose a way I can try to explain it to them?** I'm just not familiar with Matlab myself, so I'm having trouble finding parallels. I think using simple examples like shapes or animals is a bit too abstract for those engineers. So far I've tried using a Matrix interface vs array-based / sparse / whatever implementations, but that didn't work so well, probably because different matrix types are already well-supported in Matlab."} {"_id": "152566", "title": "Any empirical evidence on the efficacy of CMMI?", "text": "I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? _Edit for clarification:_ CMMI stands for \"Capability Maturity Model Integration\". It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU). It's not a _certification_, but there are various companies that will \"appraise\" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development."} {"_id": "152563", "title": "Is this a pattern?
Proxy/delegation of interface to existing concrete implementation", "text": "I occasionally write code like this when I want to replace small parts of an existing implementation: public interface IFoo { void Bar(); } public class Foo : IFoo { public void Bar() { } } public class ProxyFoo : IFoo { private IFoo _Implementation; public ProxyFoo(IFoo implementation) { this._Implementation = implementation; } #region IFoo Members public void Bar() { this._Implementation.Bar(); } #endregion } This is a much smaller example than the real-life cases in which I've used this pattern, but if implementing an existing interface or abstract class would require lots of code, most of which is already written, and I need to change only a small part of the behaviour, then I will use this pattern. Is this a pattern or an anti-pattern? If so, does it have a name and are there any well-known pros and cons to this approach? Is there a better way to achieve the same result? Rewriting the interfaces and/or the concrete implementation is not normally an option as it will be provided by a third-party library."} {"_id": "120019", "title": "What's the benefit of object-oriented programming over procedural programming?", "text": "I'm trying to understand the difference between procedural languages like C and object-oriented languages like C++. I've never used C++, but I've been discussing with my friends how to differentiate the two. I've been told C++ has object-oriented concepts as well as public and private modes for the definition of variables: things C does not have. I've never had to use these while developing programs in Visual Basic.NET: what are the benefits of these? I've also been told that if a variable is public, it can be accessed anywhere, but it's not clear how that's different from a global variable in a language like C. It's also not clear how a private variable differs from a local variable. Another thing I've heard is that, for security reasons, if a function needs to be accessed it should be inherited first. The use-case is that an administrator should only have as many rights as they need and not everything, but it seems a conditional would work as well: if (login == \"admin\") { // invoke the function } Why is this not ideal? Given that there seems to be a procedural way to do everything object-oriented, why should I care about object-oriented programming?"} {"_id": "198675", "title": "What makes OOP \"good\"?", "text": "It's fairly obvious that OOP is viewed as a sort of silver bullet of programming today. In any computer science course, the merits of OOP are heralded. I would like to know why people like OOP. To be honest, combining procedures, types, and data structures into a single conglomerate seems bad to me. I'd much rather view data as simply data. I like to be able to think that I will pass the right data through a function and get the right output, and not have to consider that the data is capable of operating on itself. Also, I want to know if there are any good examples of robust programs written specifically with or without OOP that have their source code available."} {"_id": "219953", "title": "How is localStorage different from indexedDB?", "text": "localStorage and indexedDB are used for offline storage of data in HTML5.
What are their key differences and which one is preferable in what situations?"} {"_id": "241510", "title": "Efficient algorithm to find the set of numbers in a range", "text": "Say I have an array of sorted numbers, and every object in the set is one of those numbers or a product of some of them. For example, if the sorted array is `[1, 2, 7]` then the set is `{1, 2, 7, 1*2, 1*7, 2*7, 1*2*7}`. As you can see, if there are n numbers in the sorted array, the size of the set is 2^n - 1. My question is: for a given sorted array of n numbers, how can I find all the objects in the set that lie in a given interval? For example, if the sorted array is `[1, 2, 3 ... 19, 20]`, what is the most efficient algorithm to find the objects that are larger than 1000 and less than 2500 (without calculating all the 2^n - 1 objects)?"} {"_id": "229668", "title": "Running a process multiple times at the same time", "text": "I have a C++ program using the OpenCV library which takes an image as input and performs pose estimation, color detection, and PHOG. When I run this program from the command line it takes around 4-5 seconds to complete. It takes around 60% CPU. When I try to run the same program from two different command line windows at the same time, the processes take around 10-15 seconds to finish, and both processes finish at almost the same time. The CPU usage reaches up to 100%. I have a website which calls this C++ exe using the exec() command. So when two users try to upload an image and run it, it takes more time, as I explained above for the command line. Is it that, because the C++ program involves heavy computation and the CPU reaches 100%, it slows down? But I read that the CPU reaching 100% is not a bad thing, as the computer is using its full capacity to run the program. So is this because of my C++ program or is it something to do with my server (computer) settings? This is probably not an Apache server problem, because it also slows down when I try to run it from the command line. I am using a quad-core processor and all 4 CPUs reach 100% when I try to run the same process twice at the same time, so I think it's distributed among all the processors. So I have a few more questions: 1) Can this be solved by using multithreading in my C++ code? As of now I am not using it, but will multithreading make the C++ code more computationally expensive and increase the CPU usage (if this is the problem)? 2) What can be the reason for it slowing down? Are the processes in a queue, with each process run only for a certain amount of time, switching between the two processes? 3) If this is because it involves heavy computation, will it help if I change some functions to OpenCV GPU functions? 4) Is there a way I can solve these problems? Any ideas or tips? I have inserted the result of top when running one process and when running the same process twice at the same time: Version5 is the process, running it once ![enter image description here](http://i.stack.imgur.com/V9KVi.png) Two Version5 running at the same time ![enter image description here](http://i.stack.imgur.com/h6aP4.png) Thanks in advance."} {"_id": "159148", "title": "Java sql annotations ManyToMany relationships", "text": "I was wondering your thoughts on the best way to implement a SQL ManyToMany relationship in Java using annotations - in this case eBeans \\- where there is extra data associated with the join. I have created a db diagram to help explain: ![An ERD](http://i.stack.imgur.com/brOxP.png) Using @ManyToMany on the organisation and users classes would create the join table but without the extra Job Title.
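What I picture is an explicit join entity along these lines - a rough sketch with plain JPA-style annotations (which Ebean also understands); the class and field names are just my guesses:

    import javax.persistence.*;

    // Sketch only: a join entity that carries the extra column.
    @Entity
    public class OrgHasUsers {
        @Id @GeneratedValue
        private Long id;

        @ManyToOne
        private Organisation organisation;

        @ManyToOne
        private User user;

        private String jobTitle; // the extra data on the join

        // getters/setters omitted
    }

    // Inverse side, e.g. on Organisation:
    //   @OneToMany(mappedBy = \"organisation\")
    //   private List<OrgHasUsers> members;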
Is the best way to implement this to create the Org_has_users class and use @OneToMany and @ManyToOne annotations? Would cascade on save ensure that I can access the full join relationships from both the Users and Organisations classes? I hope this is enough to get started. I am more interested in how you would implement this. Thanks! Anthony"} {"_id": "162268", "title": "TDD with SQL and data manipulation functions", "text": "While I'm a professional programmer, I've never been formally trained in software engineering. As I'm frequently visiting here and SO, I've noticed a trend for writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good idea in aiding debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say if the data wasn't under your control (e.g., that of a 3rd-party system), how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky; particularly finding data to test against..."} {"_id": "162264", "title": "Linux OS developers: do they unit test their code?", "text": "Linux OS developers: do they unit test their code? If yes: * since this OS is coded in C, how do they manage to write unit tests effectively in this language? * what are the \"zones\" of the OS where unit testing is easier to write? Where is it harder? Where is it valuable?"} {"_id": "159140", "title": "Why categorization into \"Property\" and \"Methods\"", "text": "I am looking at the String class in ActionScript 3.0. It has a property String.length. Internally it's a getter function (or method?) returning the length of the string. Why can't it be String.getLength()? Methods can take in 1, 2 or more values, so their significance can be understood. But what significance does a \"property\" have? As it's, after all, a function only. So, why the categorization into properties? Isn't it just an added bother to remember that something has been categorized as a property? In other words, as a programmer, how am I helped when I am told that String.length is a property of the String class, and that there is no method for the same? While writing a program, how would I know what is a property and what is a method? I'd appreciate having some light shed on this. V."} {"_id": "69090", "title": "which license fits my needs?", "text": "Which license would fit the following requirements the most? * Everyone is welcome to make updates and use the source code * But nobody should sell it or sell an application that is using it, unless they make a deal with the owners"} {"_id": "81327", "title": "Is it ok to include jQuery in a jQuery plugin?", "text": "The question jQuery plugin file including the jQuery library came up today on stackoverflow, and I strongly advised against including jQuery in the plugin. I didn't really get any support from others on this; in fact there was more support to include jQuery than not. Is this ok or are there good reasons not to do this?
I think it's a really bad idea, as it should be up to the developer using the plugin as to what version of jQuery is being used."} {"_id": "162262", "title": "CPU recommendation for learning multiprocessor programming", "text": "This might be an odd question and the place to ask might not be appropriate. I am very interested in working with multiprocessor programming and parallel algorithms, mostly for research purposes. I want to build a computer specially for this and it should have at least 8 cores (many algorithms have contention problems only starting with 8 cores). Looking at what Intel and AMD offer, I think the 8-core Intel CPUs are far too expensive, so I would have to choose between: \\- 6-core Intel i7 980X (3.33 GHz) \\- 8-core AMD FX 8150 (3.6 GHz) \\- 2x8-core AMD Opteron 4248 (3.0 GHz, server version of FX 8150) \\- 1x or 2x 12-core AMD Opteron 6172 (2.1 GHz, expensive, but sometimes affordable on eBay) I'm inclined towards the 2x Opteron, but I'm not sure how the Bulldozer architecture compares to the Intel one; from what I understand the 8 cores actually share some parts and are not completely independent like the Intel ones (ignoring the shared L3 cache). I'm not sure that the results I get would reflect the ones that would be obtained on a \"classical\" CPU. On the other hand, the i7 is much more powerful and might be sufficient for testing the parallel algorithms. The 2x12-core Opteron would probably be the best for testing, but they are also the slowest by far and I would like to use this computer as a workstation too. What would be the best solution? Is the Bulldozer architecture suitable for research (mostly for the massive parallelization of a compiler I wrote)?"} {"_id": "121088", "title": "How do you write straight to the point documentation without looking sloppy and informal?", "text": "I'm currently at a contract position and am looking to add to the documentation of the projects I worked on, to assist the next hire taking over my projects. The documentation I received was overly technical (i.e. it references code right away, talks about replacing certain values on certain lines, and has no high-level description at all). How do I write documentation in simple, plain English that is of actual benefit without looking sloppy? I find it difficult in areas such as outlining a system's flaws without coming off as judgmental, while still emphasizing how detrimental some of the flaws are."} {"_id": "190056", "title": "Will SSD make the distinction between column oriented DB and row oriented DB obsolete?", "text": "I had a discussion with a friend about row- vs. column-oriented RDBMSs. From my understanding, these things are closely linked to the fact that there's a physical drive head reading and writing the data. This head moves, and reading/writing data is faster if the head's moves are limited. Still, from my understanding, SSD drives drop those mechanical artifacts, and accessing the data now takes the same amount of time regardless of where it is physically stored on the chip / device (I don't know the right term). Considering these two points, will SSD drives (when they have reached their full potential) make the distinction between row-oriented and column-oriented either obsolete or pointless?"} {"_id": "135356", "title": "Extending Mono's C# compiler with additional custom features (more or less syntactic sugar)", "text": "I'm aware that this is a rather broad question, but here it goes anyway...
What is, in your opinion, the most practical way to create my own C# implementation with minor additions to the existing 4.0 feature set? For context: I'm thinking about adding a couple of (mostly syntactic) niceties to the dynamic feature set that would improve the whole duck-typing experience. For example, these would include the idea of a dynamic interface, as proposed in this debate (particularly in the last comment from MiddleTommy). I'm aware that nothing is stopping me from simply diving into the Mono sources. However, I'm compelled to first ask about potentially similar projects that may already exist in the wild. Any such extension efforts underway?"} {"_id": "135350", "title": "How to debug/change Java code while the program is running?", "text": "I just saw a video showing how Notch (of Minecraft fame) is debugging and changing Minecraft while it is running. He pauses the game, changes something in the code and then unpauses the game, where the change takes immediate effect without the need to restart the program. How does this work? What kind of technique is used to achieve this?"} {"_id": "200150", "title": "Object design where hard-coded values are used to instantiate objects?", "text": "I'm creating the design for a browser bookmark merging program and I've run into a design problem that I've seen before, yet I've never come up with a good solution for. So let's say I have a Browser class: Browser: String bookmarkFilePath String type Bool bookmarkFileExists() When my program runs I will want to have hard-coded values of common browsers and the locations of their Bookmarks file: Object with hard-coded values: \"~/Library/Application Support/Google/Default/Bookmarks\", \"Chrome\" \"~/Library/Safari/Bookmarks.plist\", \"Safari\" **Is there a design pattern or object type that could effectively take an object with hard-coded values (Browser name / bookmarks file path) and use it to instantiate (and possibly manage) other (Browser) objects?** Also, flexibility is important, since there are edge cases such as Firefox, whose bookmarks file path is always different, so some searching needs to be done. **EDIT: I will be implementing this in Python. _Sorry for not mentioning this before_.**"} {"_id": "121081", "title": "Trial/Free & Full Version VS. Free App + In-app billing?", "text": "I'm just wondering what would be the best strategy to publish an application on the Android Market. If you have a free and a paid version you have two code bases to update (I know they will be 99% the same, but still), and besides, all the popular paid apps are quite easy to find for \"free\" in \"alternative\" markets. Also, if you have any stored data in the trial/free version you lose it when you buy the full version. On the other hand, if you publish a free application but inside you allow the user to unlock options (remove ads/more settings/etc...) you only have to worry about one code base. I don't know the drawbacks of that strategy and how easy/hard it is to hack that to get all the options for \"free\"."} {"_id": "49758", "title": "Is OCaml any good for numerical analysis?", "text": "I'm currently using C and some FORTRAN to numerically solve systems of differential equations. I'm a bit fed up with both of these languages but I need to have some (rather) efficient code... I'm thinking of switching to OCaml. Is it worth it?"} {"_id": "125502", "title": "Application design for web/iPad/iPhone/Android application", "text": "I have a web application which is a pretty standard SaaS database-driven app.
I have a few customers asking for iOS or Android versions of the app. Is it better to build an API on the web app which is then used to drive the native mobile apps' UIs? This would be simplest, but it means the mobile apps are not 'stand-alone'. Alternatively, I could try and implement a full mobile solution that then synchronizes with the web app somehow. This would be much more work, as all the business logic needs to be built into the mobile apps rather than sitting behind the API. What is standard practice for this kind of thing?"} {"_id": "135359", "title": "Organizing MVC entities communication", "text": "I have the following situation. Imagine you have a MainWindow object that is laying out two different widgets, ListWidget and DisplayWidget. ListWidget is populated with data from the disk. DisplayWidget shows the details of the selection the user performs in the ListWidget. I am planning to do the following: in MainWindow I have the following objects: * ListWidget * ListView * ListModel * ListController The ListView is initialized passing the ListWidget. The ListController is initialized passing the View and the Model. The same happens for the DisplayWidget: * DisplayWidget * DisplayView * DisplayModel * DisplayController I initialize the DisplayView with the widget, and initialize the Model with the ListController. I do this because the DisplayModel wraps the ListController to get the information about the current selection, and the data to be displayed in the DisplayView. I am very rusty with MVC, having been out of UI programming for a while. Is this the expected interaction layout for having different MVC triplets communicate? In other words, MVC focuses on the interaction of three objects. How do you put this interaction as a whole into a larger context of communication with other similar entities, MVC or not?"} {"_id": "121085", "title": "Can the customer be a SCRUM Product Owner in a project?", "text": "I just had a discussion with a colleague about the Product Owner role: In a project where a customer organization has brought in a software development organization (supplier), can the role of Product Owner be successfully held by the customer organization, or should it always be held by the supplier? I always imagined that the PO was the supplier organization's guy. The guy that ensures that the customer is happy and continuously fed with new, high-business-value functionality, but who is still an integral part of the developer organization. However, maybe I have viewed the PO role too much like the waterfall project manager. My colleague made me think: If the customer organization is mature and professional enough, why not let a person from their camp prioritize the backlog? That would put the PO role much closer to the business, thus being (in theory) better placed to assess the business value of backlog items. To me, that is an intriguing thought. But what are the implications of such a setup? I look forward to your input."} {"_id": "190059", "title": "Is object-oriented conceptual thinking something you build with experience?", "text": "I know that the answer is pretty clear because you get better at everything with time and experience. But I'll tell you where I'm coming from: A couple of months ago I decided to learn iOS development, so I studied C (read the C Primer Plus book, a pretty good book). Recently I finished \"Programming in Objective-C\" and lately I started following the Stanford course for Winter 2013 (I'm really enjoying it.
I'm only at the beginning of the course, on lecture #3, and have had to do only 2 homework assignments). I understand all the syntax and concepts so far, and I think Objective-C is an amazing language. But something has been bothering me, and I thought it might be because I'm new to OOP. For example, when the instructor defines models (a matching-cards game, for instance), I definitely understand the syntax and logic, but sometimes I struggle to think of it as taking a task (the card game, for instance) and breaking it into the right logic. Is this something I will get used to with time? Because at this stage, when I try to predict things I'm supposed to do, pretty often I have to go and look at some of his logic and then go back to the code. Actually we haven't had to start from zero; he always tells us to follow the presentation with the code at this stage, but I really want to improve at taking a task and knowing how to break it into the right models, controllers, etc."} {"_id": "158430", "title": "What is \"toolkit design\"?", "text": "What does \"toolkit design\" mean and what are the basic steps to design a toolkit for a specific task/project? It appears in a task description which says: \"This task deals with the design of detailed specification of the various tools that are employed in order to implement [insert some development task here]\" So what I understand is that it's required to design and provide the specifications of a toolkit which will be used for doing some specific tasks. But how is toolkit design done? Searching Google hardly gives any meaningful results. They are mostly about \"design toolkits\", which is not what I'm looking for. For example, this guy has designed a software toolkit in some of his previous projects: http://logonpro.com/resume.doc"} {"_id": "158435", "title": "Project Management - Asana / activeCollab / basecamp / alternative / none", "text": "I don't know whether this should be on programmers - I've been looking at the above three apps over the past few weeks just for myself and I'm in two minds. All three look good and are easy to use, and I came to this conclusion: * Asana is the easiest to use * activeCollab is the most feature-rich and has the easiest flow * Basecamp has the best UX / design But I didn't really find my workflow was any quicker / more efficient; in fact it was a bit slower, though more organized. Is there a realistic place for them in a workflow - should programmers use them for themselves, or only when a project manager can take control of it?"} {"_id": "158436", "title": "What is a good design strategy for retaining history of user activities and files like Visual Studio projects?", "text": "OK, so I'm not so sure that \"project\" is the right term, but for my purposes, I define \"project\" as similar to what Visual Studio uses, or Microsoft Word - files that the user can open and work on and then save, and when the user runs the application again the program is aware of the files that were worked on the last time the application was open. So I ask, what is a good design strategy for retaining history of user activities and files like Visual Studio projects?"} {"_id": "45699", "title": "Differences between a Unified Process and an Agile project plan?", "text": "There've been many discussions on SO about the differences between (Rational) Unified Process and the Agile methodology.
Can someone please give me an example of how different a project plan would be if there were 2 teams doing the same project, but following these 2 different methods?"} {"_id": "45698", "title": "What are the best XP practices?", "text": "In \"Extreme Programming Explained\", Beck lists 13 \"primary practices\". They are: * Sit Together * Whole Team * Informative Workspace * Energized Work * Pair Programming * Stories * Weekly Cycle * Quarterly Cycle * Slack * 10 Minute Build * Continuous Integration * Test-First Programming * Incremental Design Which of these have you actually implemented in your workplace? Which has been the most useful?"} {"_id": "41934", "title": "What software development process should I learn first for a solo project?", "text": "I want to develop a project on my own (if it is successful, more people might start working on it too). Also, I want to apply some proper software engineering from the first day until the last. On one hand just to try it out and compare results with previous projects that were just about writing code quick and dirty, and on the other hand to learn! I know the proper answer to this question is \"It depends very much on the project...\", \"There is no single correct answer...\". But I just need someplace to start, somewhere where every step is written down and tells me what to do. If I'm not happy, next time I'll try something else. So, how/where should I start? I would love to hear some book suggestions because I'm all about books :-D. EDIT: Answering a few of the questions that popped up: **Customer**: There is kind of a customer/friend. No real pressure. **Version Control**: I have used Subversion in the past and want to try Mercurial. **Bug tracking**: I was under the impression that for a single developer a checklist was enough (am I wrong there?) **Testing**: I want to try Lime (because I use Symfony) and Selenium. On the whole I will try out a lot of stuff I haven't used before, but as I said, one of the main points is learning. The Pomodoro Technique keeps popping up wherever I look, so maybe I should have a look at that..."} {"_id": "10605", "title": "How can programmers improve their UX skills?", "text": "As programmers we can solve very complex problems, but then, when we have to design a user interface, we tend to fail at making it easy to use. In small companies they can't afford designers and UX experts; programmers have to do almost everything in the software. But these interfaces are rarely intuitive (the classic example). What is the problem? How can developers improve their skills in designing good user experiences?"} {"_id": "90232", "title": "Original author rights in a licensed software project", "text": "I'm working on a software project where so far I'm the sole developer. As of right now the code is unlicensed, but the copyright of whatever I've written belongs to me (this is true at least in North America, AFAIK). Before I release the software to the public, I'm going to license its distribution, probably under something like the MPL or GPL. Question 1: As the original copyright holder, does this affect me in any way? Or do I still have unrestricted distribution rights? Question 2: After some time, more developers join the project. Now much of the source code copyright belongs to me + contributors. How do distribution rights work for the authors now?
Regards, kfl"} {"_id": "90236", "title": "What is the / your most effective QA process?", "text": "I'm looking for some ideas on how to improve our current Quality Assurance process. There is no official QA method in place right now; we basically just get some requirements using a ticket system and we go back and forth until all the requirements are covered. The problem right now is that some requirements change, and we may spend a day or 5 days on multiple requirements but in the end they all get tossed because the end user decided they're not appropriate or wanted something else. So we lose time and money, but how could we have foreseen these situations? I have to believe there is an effective way to communicate with end users about their set of requirements so that we can EXTRACT and squeeze out any doubts to foreshadow a change in a range of requirements. And then we need to be very sure that the requirements won't be tossed or changed so drastically that our work was a waste. In some ways it is about understanding really, really well what the user eventually wants. But what is this effective QA process? Any ideas are greatly appreciated. I'd be happy to clarify any concerns. Thank you!"} {"_id": "236395", "title": "Estimated number of tries", "text": "**Problem:** The Oscar Committee wants to decide which person should get the best actor award among the given N actors. For that they decided to use a random function random_bit(), which returns either 0 or 1 with equal probability. For the results to be fair, the committee was asked to generate a random number between 1 and N (inclusive), such that all actors have equal probability of being chosen (which in this case should always be equal to 1 / N). First of all the committee wants to know the expected number of times they will need to call the random_bit() function to generate a random number between 1 and N. Also, while calling the function, they should follow the optimal strategy (i.e. the strategy that minimizes the expected number of function calls). * * * This is a problem from a past contest at codechef.com, and here's the link to it. This site allows you to see other users' solutions, and till now only one solution has been accepted. The solution is in C++ code: #include <iostream> using namespace std; int main() { int T; cin >> T; for(int ii = 0; ii < T; ++ii) { int N; cin >> N; double p = 1, ans = 0; int x = 1 % N; for(int i = 0; i < 100000; ++i) { ans += x * p; x = (x * 2) % N; p /= 2; } cout << ans; } return 0; } I know how the code runs and what it does, but I can't find the logic behind it. Any help / clue? Also it would be nice if someone could post another algorithm."} {"_id": "236391", "title": "What happens to equal elements when inserting into a binary search tree?", "text": "Most BST examples show a sample of a BST with unique values, mainly to demonstrate the order of values: e.g. values in the left subtree are smaller than the root, and values in the right subtree are larger. Is this because BSTs are normally just used to represent sets? If I insert an element, say 4, which already exists in the BST, what should happen? E.g. in my case, 4 is associated with a payload. Does it mean I overwrite the existing node's payload?"} {"_id": "58644", "title": "How do you coordinate with interaction designers during implementation?", "text": "Programmers are largely responsible for helping move a product from design to implementation.
This process is always full of snags: * implementation details rear their ugly heads and make parts of the design infeasible * user feedback on early prototypes leads to changes in the design * new technologies alter the field of what is possible, bringing back designs previously thought impossible * priorities shift, schedules change, and requirements wander How do you keep design and implementation in contact during the implementation? What processes do you use? Tools? Artifacts? Guidelines? Communication strategies?"} {"_id": "96966", "title": "Origin of \"Readme\"", "text": "When did people start writing Readme files? It seems that pretty much all programs have this file, regardless of the format. Is there any documented first use of this document?"} {"_id": "53878", "title": "Dynamic vs Statically typed languages for websites", "text": "This statement suggests that statically typed languages are not ideal for web sites: > I'll contrast that with building a website. When rendering web pages, often > you have very many components interacting on a web page. You have buttons > over here and little widgets over there and there are dozens of them on a > webpage, as well as possibly dozens or hundreds of web pages on your website > that are all dynamic. With a system with a really large surface area like > that, using a statically typed language is actually quite inflexible. I > would find it painful probably to program in Scala and render a web page > with it, when I want to interactively push around buttons and what-not. If > the whole system has to be coherent, like the whole system has to type check > just to be able to move a button around, I think that can be really > inflexible. Source: http://www.infoq.com/interviews/kallen-scala-twitter Is this correct? Why or why not?"} {"_id": "11951", "title": "Production-safe SQL stored procedure debugging", "text": "We have an enormous number of nested SQL stored procedures, and a number of different ways of debugging them (varying from developer to developer). So far, our methods include: 1. An optional `@debug` parameter that causes the procedure to `print` messages as it runs (passing the variable down to called procedures). 2. Checking `@@servername` against a table of test server names, and `print`ing as above 3. Writing everything the procedure does to a log table (in production and test) Which of these is preferable (and why), or is there a better method we've overlooked?"} {"_id": "236398", "title": "Creating New Wrapper Objects and Extension Classes and Keeping it Organized", "text": "Here's my situation: I'm programming an embedded device with a very simple, but customizable, LED array display. It's 10 RGB LEDs set up linearly. The LEDs will be used to display many different things depending on the device's mode. Now, I've started down the road, but after spending a lot of time coding and finding a lot of duplicate code, I feel that this should be simplified with some abstraction and wrapper methods and classes. At the very bare minimum and starting point, I have a class, the LED driver, which represents the hardware driving the LEDs. It has one method which is primarily used, `TLC5940.Display(ushort[] greyscaleData)`, where greyscaleData is an array of size equal to the number of output pins, a multiple of 16 (16 per chip; chips can be connected to each other). This array controls each pin individually, allowing direct control of the LEDs.
This led me to create different \"extension\" methods in a static class, with different routines for populating this array and then passing it into the `Display` method. This works OK for testing different methods, but I found a lot of duplicate code in how I populate the array, because controlling a specific RGB LED consumes 3 physical pins (and 3 consecutive array values). So, I need to write a consecutive chunk of three values to control an LED. I came up with this private method in the static extension class: private static void FillLEDArray(ref ushort[] brightnessArray, ushort LEDNumber, ushort rVal, ushort gVal, ushort bVal, short ledStartPin = 2, ushort numLEDs = 10) { // logic for how to shift the array for the physical pin layout if (ledStartPin == 2) { ledStartPin = 0; } else if (ledStartPin == 1) { ledStartPin = -1; } else { throw new Exception(\"Can only be 1 or 2\"); } ushort pinStart = (ushort)(LEDNumber*3 + ledStartPin); brightnessArray[pinStart] = bVal; brightnessArray[pinStart-1] = gVal; brightnessArray[pinStart - 2] = rVal; } It's simple, and it makes things easier with my extension display methods: I can loop from 1 to 10 and call this method to write a specific LED to the array in one call. On a deeper level, I thought of making an LED object, which would contain an array of the 3 values and also offer some functions like LED.Colors.Red, LED.Colors.Blue, etc., and return an array that would represent that color, to make coding easier and more fun. Moreover, I could create an LED container class to hold LED objects, e.g. `LEDS.LED(1).Color.Red`; this could also generate an array on demand. One more level of complexity: my device is user-customizable, so I have created a User Settings class, which contains all of the device's user settings, many of which apply to the LEDs, like certain colors of the RGBs, overall brightness, color schemes, etc. These settings, such as brightness, would also need to be applied to the whole array before it's sent to the display. If you've read this far, thanks. **Question:** At this point I have all the tools and ways to control the LEDs, but I'm looking to rewrite all of this, and I'm wondering: what's the best way to encapsulate all of this functionality? Is there a certain coding pattern or style that would apply to this? * * * My own thoughts: In writing this out and getting it out of my head, maybe a new `DeviceDisplay` class which would contain instances of the LED hardware and settings classes? Should this class be static or a singleton? I'm also concerned about memory/RAM/speed because this is an embedded device. I would appreciate some feedback on this, as it seems there are a lot of different dependencies and it's getting a little lost in my brain."} {"_id": "176781", "title": "Child to Parent linking - bad idea?", "text": "I have a situation where my parent knows about its child (duh) but I want the child to be able to reference the parent. The reason for this is that I want the child to have the ability to designate itself as most important or least important when it feels like it. When the child does this, it moves itself to the top or bottom of the parent's children. In the past I've used a WeakReference property on the child to refer back to the parent, but I feel that adds an annoying overhead; then again, maybe it's just the best way to do it. Is this just a bad idea? How would you implement this ability differently? Update 1: Adding more context. This is a rendering system, so the parent container is a list of windows grouped together.
The child item (window) that says \"I'm most important!\" wants to basically be rendered on top of the rest of the windows. The parent is just a logical container to group these children together. I can see where adding an event to signal the request to be on top is a good idea. But implementation (what the child wants to do with the parent) aside, why wouldn't you want to have child->parent linking? Doubly linked lists do this so people can traverse to and from something."} {"_id": "176780", "title": "Too complex/too many objects?", "text": "I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement. The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants) and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but approve 'allowable', which will actually approve the whole perm check. We have like 11 validators, too many to easily track at such minute detail. So I proposed an `AtomicPermission` class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable,' now it would instead say '\"allowable\" is ok', at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okay-ed. The implementation is just 4 trivial objects, and rewriting a 10-line function into a 15-line function. abstract class PermissionAtom { public function allow(); // maybe deny() as well public function wasAllowed(); } class PermissionField extends PermissionAtom { public function getName(); public function getValue(); } class PermissionIdentifier extends PermissionAtom { public function getIdentifier(); } class PermissionAction extends PermissionAtom { public function getType(); } They say that this is 'not going to get us anything important' and it is 'too complex' and 'will be difficult for new developers to pick up.' I respectfully disagree, and there I end my context to begin the broader questions. So the question is about my OOP; are there any guidelines I should know: 1. _is_ this too complicated/too much OOP? Not that I expect to get more than 'it depends, I'd have to see if...' 2. when is OO abstraction too much? 3. when is OO abstraction too little? 4. how can I determine when I am overthinking a problem vs fixing one? 5. how can I determine when I am adding bad code to a bad project? 6. how can I pitch these APIs? I feel the other devs would just rather say 'it's too complicated' than ask 'can you explain it?'
whenever I suggest a new class."} {"_id": "246896", "title": "Is it OK to let invalid arguments slip to another method?", "text": "For example, let's take this method: public List<Row> ReadAll(int listCapacity) { List<Row> list = new List<Row>(listCapacity); while (Read()) { list.Add(GetCurrentRow()); } return list; } If `listCapacity` is less than zero, an `ArgumentException` will be thrown by the constructor of `List`. Does it make sense to double-check this? Passing the argument to `List` immediately seems careless, but checking it seems silly because `List` will definitely check it. Important note: the code snippet is not just a copy-paste method; it's part of a full utility."} {"_id": "164956", "title": "Any good hackathons/competitions to refresh my programming skill?", "text": "I spent this summer as a PM, so my programming skills may have gotten a little rusty. I want to refresh those skills, and I'm looking for things like online hackathons or competitions. Preferably ones that happen every week or day so I can practice often. Where should I look?"} {"_id": "164958", "title": "Are UML class diagrams adequate to design javascript systems?", "text": "Given that UML is oriented towards a more classic approach to object orientation, is it still usable in a reliable way to design javascript systems? One specific problem that I can see is that class diagrams are, in fact, a structural view of the system, and javascript is more behaviour-driven; how can you deal with that? Please keep in mind that I'm not talking about the real-world domain here; it's a model for the solution that I'm trying to achieve."} {"_id": "62450", "title": "Starter BOOKS On Data Analytics and Algorithms in Computer Science", "text": "I am a programmer and co-founder of a start-up. I am a computer science graduate, though I am not good at algorithms, and I want to learn and sharpen that side. Please suggest some nice books, both for starters and at the advanced level."} {"_id": "179159", "title": "Link between tests and user stories", "text": "I have not seen these links explicitly stated in the Agile literature I have read. So, I was wondering if this approach is correct: Let a story be defined as _\"In order to [RESULT], [ROLE] needs to [ACTION]\"_ then 1. _RESULT_ generates system tests. 2. _ROLE_ generates acceptance tests. 3. _ACTION_ generates component and unit tests. Where the definitions are the ones used in xUnit Patterns, which, to be fair, are fairly standard. Is this a correct interpretation or did I misunderstand something?"} {"_id": "136197", "title": "Should we use JavaScript and CGI variables to weed out bots from our visitor reports?", "text": "I am using ColdFusion 8 and jQuery 1.7. ** This is a programming question, because the solution I am questioning requires programming. It may not be the right solution to the problem, but if it is, then I need to figure out how to best program the concept. ** When a user comes to our site, we track their session by writing various CGI variables to a database using a CFC and stored procedures. First we filter out non-human traffic by keywords in the user agent such as \"bot\". Unfortunately a lot of bots and spammers mask their user agents. Later, we try to exclude from our visitor reports the bad bots and a few other known entities that are scraping pages and such. But this is a manual process. We are considering using an additional/alternate method of tracking usage. Once the user's page loads, we will use JavaScript to send the CGI variables from the client back to our server and store them.
Specifically, we'll write the server variables to JavaScript on each page and then have JavaScript send them right back to us. If a bot or user doesn't fully view the page or have JavaScript enabled, the usage won't be counted as a real user. Correct me if I am wrong, but this is the same method that Google Analytics uses to track user behavior. Our goal is to eliminate good and bad bots from being counted as visitors in our reports. Does using JavaScript on a page like this minimize bots being counted? Is there a gaping hole in this plan?"} {"_id": "120937", "title": "Rights and use of developed software", "text": "I have been working on a piece of software for a company, which they wish to resell. There was a mail-based agreement on a flat hourly rate for my work, and eager me chose to accept a rather low fee. Due to the stress and tempo of the task, a direct contract was never formed or signed. The software was developed locally on my machine, and I was pretty much alone with it, except for excellent help from StackOverflow when I got stuck. Now that the software is nearing completion, I suddenly hear that they have hired a new developer to make the same piece of software as me, and that I am expected to resign before long. Confused, I asked around, and realized that the CEO of the company had informed the rest of the company that I was terminally ill and had cancer, and was expected to leave the company soon. Since I'm perfectly healthy, this confused me even more, until I realized what was going on. When I confronted my boss with this, I was no longer seen as a member of the company, and I left the same day, never to return. Later, I raised the question of my missing pay, since I had been working for quite a while and had not received any payment for my software. I saw that they had already sold a fair number of copies of my software, and since it's not exactly sold cheap, the company should have plenty of gold to pay me. The company refused, and said that they owned the software and everything it contained. That was a lot of drama, but my question is this: **Who has the rights to the software?** The source code had my personal watermarks and copyrights imprinted, but they have since simply deleted them. The company claims that it has all the rights, because it has a website about the product, where it writes \"All rights reserved\" at the bottom. My instinct tells me that if a company buys a service like this and then refuses to pay their developer, then they should not be allowed to keep, much less resell, the product. I have not signed any agreements about giving the company the use of this product; I have made it in my own time and without help from the rest of the company. This all takes place in Denmark, Europe, but I would guess that the rules about this are somewhat universal. I'm not the strongest person at legal talk, so I might be wrong."} {"_id": "120934", "title": "best and most used algorithm for finding the primality of given positive number", "text": "When I was in college, as a programming language learner, I wrote a program for finding prime numbers, but back then I never minded the program's performance in terms of speed. Now, after a long time, I've just started solving the problems on projecteuler.net. Now I wonder: is there a best and most-used algorithm for testing the primality of a given positive number that produces the result fast?
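For context, the sort of naive approach I mean is plain trial division - a rough sketch (in Java here), not the program I actually wrote:

    // Naive trial division: fine for a single number, slow for many queries.
    public static boolean isPrime(long n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (long d = 3; d * d <= n; d += 2) {
            if (n % d == 0) return false;
        }
        return true;
    }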
Thanks"} {"_id": "237620", "title": "What specifications in software development are relevant to the clients?", "text": "Say your company develops software, and a client wants you to handle their development. It just so happens that they have their own IT team, but it is fully tied up, so they outsource to you. The project is in a quotation phase, and your client's Head of IT now asks for a requirements specification (SRS), an architectural specification, and so on. The Head of IT wants to know the functions, modules and other decisions that will go into construction. Other than the SRS for analysis, is your company obliged to provide these specifications, even before the project has been confirmed?"} {"_id": "254513", "title": "Sending e-mail from a web application without being filtered out", "text": "For a web application I'm working on right now it's very important that automated e-mails reach the recipient. I've set up the mail server on the VPS that the application is going to run on. I've taken all the regular precautions to make sure that the e-mails it sends are not going to be filtered out: SPF, DKIM, e-mails have both a plain-text and an HTML part, the HTML part is valid HTML, etc. Still, some services - in particular Microsoft's outlook.net addresses - either block the message or automatically mark it as junk mail. I've read up a little on how Microsoft determines if a message is junk, and in our case it's most likely the reputation of the mail server. Since it's a new server, it doesn't have any reputation yet. How do other developers deal with this type of problem? Do you use external SMTP services like Google Apps to send mail from your application? Any ideas or tips?"} {"_id": "118905", "title": "Is there such thing as an example driven parser generator or ad-hoc DSL development?", "text": "I'm interested to know if there exists a tool that lets you input examples of valid documents and lets you then generalize from that to a reusable parser. I can imagine this, but every time I start learning about parsers it gets to the granularity of something like lex and yacc, and it seems more complicated than my instincts say it should be. So I'm left wondering if * a) it's just fundamentally that complicated and I need to see it or * b) there's a way to build ad-hoc DSLs for relatively simple tasks that I could learn about Update: a simple example of an \"ad hoc\" DSL I might like to make quickly. Instead of the XML <foo> bar bat </foo> I might want something like: foo bar bat * lines contain data items * indentation produces parent-child relations My imaginary tool lets me convey the above information. Then, in my imaginary tool, I might highlight \"foo\" above and right-click, at which point it would prompt me to restrict the values to a choice list.... Then I might extend the first one to: foo bar(5) bat * in a line, '(' and ')' surround a sub-item The above might be qualified with a boolean value that specifies recursion or not; if set to true then bar(lala(4)) might work... This is the kind of train of thought I generally have that led me to ask the question.
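To show the scale I have in mind, a hand-rolled parser for the indented format above can stay tiny - here's a rough sketch in Java (hypothetical names, no error handling; the parenthesised sub-items would be one more recursive case):

    import java.util.*;

    // Sketch: one data item per line; deeper indentation makes a line a
    // child of the nearest shallower line above it.
    public class IndentParser {

        public static class Node {
            public final String value;
            public final List<Node> children = new ArrayList<>();
            public Node(String value) { this.value = value; }
        }

        public static Node parse(String text) {
            Node root = new Node(\"(root)\");
            List<Node> parents = new ArrayList<>(List.of(root));
            List<Integer> indents = new ArrayList<>(List.of(-1));
            for (String line : text.split(\"\\n\")) {
                if (line.isBlank()) continue;
                int indent = line.length() - line.stripLeading().length();
                // Close deeper-or-equal levels before attaching the new node.
                while (indent <= indents.get(indents.size() - 1)) {
                    parents.remove(parents.size() - 1);
                    indents.remove(indents.size() - 1);
                }
                Node node = new Node(line.trim());
                parents.get(parents.size() - 1).children.add(node);
                parents.add(node);
                indents.add(indent);
            }
            return root;
        }
    }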
It's possible that now I've qualified it, the answer changes - if so I apologize in advance."} {"_id": "118904", "title": "Is there a good Cognitive Architecture for implementing intelligent software agents?", "text": "So, I like dabbling in intelligent agent design (mainly video-game 'bots' but also some general task automation), but as a budding psychologist, I'd be really interested in a platform for developing such agents in a cognitively plausible setting. Such a platform would probably take the form of a cognitive architecture, but since these are meant to be implementations of psychological theories above all else, I'm worried that none of them are up for actually acting as an intelligent agent in a complex software environment like a video-game. Has anyone tried using such an architecture to produce an agent of the kind I've been describing? Failing that, do any cognitive architectures look particularly suitable for this sort of job? My current hunch is that a hybrid architecture like CLARION might work, but not having any experience with it, I'm still hesitant. I know that SOAR has famously been used to simulate a fighter pilot, but that took years of hand-coding production rules, and thus seems like an impractical platform from my standpoint."} {"_id": "118906", "title": "Single Responsibility Principle Implementation", "text": "In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I already notice that ideas like \"MVC\", \"DRY\", and \"KISS\", pretty much fall right into place. That said, I'm still having problems deciding if one of two implementations is the best choice when it comes to the Single Responsibility Principle. Implementation #1: class User getName getPassword getEmail // etc... class UserManager create read update delete class Session start stop class Login main class Logout main class Register main The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly- named Ravioli Code), but following the SRP to a \"tee\", almost literally. But then I thought that it was a bit much, and came up with this next implementation class UserView extends View getLogin //Returns the html for the login screen getShortLogin //Returns the html for an inline login bar getLogout //Returns the html for a logout button getRegister //Returns the html for a register page // etc... as needed class UserModel extends DataModel implements IDataModel // Implements no new methods yet, outside of the interface methods // Haven't figured out anything special to go here at the moment // All CRUD operations are handled by DataModel // through methods implemented by the interface class UserControl extends Control implements IControl login logout register startSession stopSession class User extends DataObject getName getPassword getEmail // etc... This is obviously still very organized, and still very \"single responsibility\". The `User` class is a data object that I can manipulate data on and then pass to the `UserModel` to save it to the database. All the user data rendering (what the user will see) is handled by `UserView` and it's methods, and all the user actions are in one space in `UserControl` (plus some automated stuff required by the CMS to keep a user logged in or to ensure that they stay out.) I personally can't think of anything wrong with this implementation either. 
Personally, I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1). So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?"} {"_id": "255640", "title": "GPL- How much source must be released?", "text": "Suppose I have a GPL v2-licensed library that is of interest to a closed-source project (e.g. Wolfram Alpha). If some of my code were used in Wolfram Alpha, or a product that interfaces with Wolfram Alpha, would they have to release all of their code for Wolfram Alpha under the GPL v2 license?"} {"_id": "255641", "title": "How to stop switching between keyboard and mouse", "text": "I have been touch-typing for several years, but very often I need to switch between different applications. For text editing I don't need the mouse, but as soon as I need a web browser, I switch to the mouse. I guess I lose a lot of time and energy doing this. Since I do web development I often switch between keyboard and mouse. Do you have a working solution? Please don't tell me about browser plugins you have never used yourself. Thank you."} {"_id": "255642", "title": "DataMapper for a MMO game plugin to send packets", "text": "I am working on a plugin for a game server. The information about the plugin is not really necessary. _Few points you might find helpful to answer this question:_ **The server** The server is programmed very badly - the whole design structure and the way it handles things - therefore I cannot implement anything that would require me to change how the server handles packets. The server has a list of all \"being-sent\" packets, and **sends them to the client every 600ms**. So basically, the client is being updated every 600ms if there are incoming packets from the server. **My plugin** My plugin needs to update a few UIs of the client, such as new player names, change the currently opened interface window, and so on. I can use a `DataMapper` class, which contains the object that has the ability and access to add new packets to the list (it's not really a list; it's just a list of bytes and frames). Now look at this _dummy_ snippet (Note - this is not my plugin, but just a simple example of what I mean): public class MyPluginPlayer { private DataMapper mapper; private int state; public void setState(int state) { this.state = state; } public int getState() { return this.state; } public void selectWindowForState() { if (this.state == 5) { mapper.createLongWindow(); } else if (this.state == 4) { mapper.createWideWindow(); } // .... etc .... } } In my _dummy_ example, `MyPluginPlayer` is the class responsible for handling the business logic for the player that is part of my plugin; let's say my plugin is a small mini-game, like a card game. So basically, imagine there is a manager class, which contains a List of `MyPluginPlayer` objects, and it loops through them all and works them out, setting the current state, and then calls `selectWindowForState()`. The `selectWindowForState()` method basically reads all possible states, then reads which state is the current state, and then calls the suitable method in the `DataMapper`, to send the right packet. The `DataMapper` class basically does all the dirty work such as creating a new packet frame and writing the bytes, `int`s and whatever else is needed for the specific packet.
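For illustration, each method on the mapper is a thin wrapper along these lines (the frame API, opcode and field names here are made up; this is just the shape of it):

public void createWideWindow() {
    PacketFrame frame = new PacketFrame(WIDE_WINDOW_OPCODE); // start a new outgoing frame
    frame.writeInt(player.getIndex());                       // payload bytes for this packet
    server.queueOutgoing(player, frame);                     // joins the list flushed every 600ms
}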
**The problem** I call the use of `DataMapper` above bad, because I don't feel like the domain class should really have direct access to the `DataMapper`. I thought of making a `Queue` list, and instead of calling `mapper.createWideWindow()` or whatever, it will add the action to the queue list, and then the `DataMapper` will access that list and push all these packets onto the server's global outgoing packet list. The `DataMapper` will do that at the end of every loop cycle that's done in the manager class. for (Player p : players) { MyPluginPlayer domain = p.getDomain(); // ...... management ..... // p.getMapper().update(); //goes through the Queue list and adds all packets to the server outgoing packets } But I am not really sure how to define these packet updates. The only method I can think of is that every packet will have its own class which implements an interface. But I will need to create a class for each packet, and I don't really like the design of many, many classes for a small thing. What do you think of both of these methods? Could you suggest something better?"} {"_id": "54098", "title": "What advantages switching to ruby might give me as a python programmer?", "text": "This is my first question on stackoverflow, so please bear with me. I'm trying to stay away from any form of trolling or flame baiting as I have a tremendous respect for both languages. I'm a Python programmer (though not an expert) and I love it. My first language was C++. My line of work (web development) is pushing me towards other languages like PHP and JavaScript. Recently, I've been very excited by Ruby's increasing popularity. However I used to be under the impression that Python and Ruby were so close that there was little point in trying to learn and master both. But I get the sense that I was wrong, hence my question: I'd like to hear from Python programmers who have either switched entirely to Ruby or added Ruby to their toolset. What specific benefits did you get from switching (entirely or partially) to Ruby from Python? Ideally I'd like to hear from real world experiences."} {"_id": "255645", "title": "Java for loop: why can't I declare two variables in the for loop 'header'?", "text": "I always wondered, why doesn't this compile? for(int i=0, int q=0; i/lib/security/` directory and they should be installed. By the names of these JARs I have to assume that it's not the Java Crypto API that cannot handle AES256, but it's in fact a **legal** issue, perhaps? And that these two JARs basically tell the JRE \" _yes, it's legally-acceptable to run this level of crypto (AES256)._ \" Am I correct or off-base?"} {"_id": "175789", "title": "C simple arrays and pointers question", "text": "So here's the confusion: let's say I declare an array of characters char name[3] = \"Sam\"; and then I declare another array but this time using pointers char * name = \"Sam\"; What's the difference between the two? I mean they work the same way in a program. Also how does the latter store the size of the stuff that someone puts in it, in this case 3 characters? Also how is it different from char * name = new char[3]; If those three are different, where should they be used? I mean, in what circumstances?"} {"_id": "211894", "title": "How much time should jquery wait?", "text": "I'm programming some jQuery functionality for doing a search on articles while the user is typing in the search field.
I implemented a bind in jQuery so that at every `keyup` it sets a `clearTimeout` and a `setTimeout` on the actual function doing the `post` to the server. This is because users can be writing something and I want to avoid multiple `post`s. I got \"stuck\" on this: from the user interface point of view, what time should I set for the `setTimeout`? I tried different times (1s, 1.5s, 0.5s) but I have to admit the choice seems kind of random to me. This raised the broader question: Is there any reference or study that answers the question of what the \"optimal\" times are for dynamic web interfaces? I only now noticed that I've used more timers before (e.g. slides) that were also chosen \"randomly\". Putting it in other terms, does anyone know how to _justify_ why a given event in a web interface has (or should have) a specific delta-time or range of delta-times? (I'm sorry if this question is not appropriate here; it seemed to me a closer fit than SO.)"} {"_id": "211896", "title": "Why using FMOD?", "text": "I'm creating a music player using C++ and Qt, and everyone says to use FMOD. But Qt has all the components: I can play audio, create playlists, and everything I need, so what does FMOD offer me that can't be done with Qt? I did some research, here and here and here, and so many other results; in fact if you just google _\"create music player\"_ you get _\"use fmod and qt as GUI\"_."} {"_id": "128458", "title": "Best practices for team workflow with RoR/Github for designer + coder?", "text": "My friend and I have started to try to collaborate on some projects. For background, I come from a PHP/Wordpress/Drupal coding background, but recently I've become more experienced with the RoR framework, while he is more experienced as an HTML/CSS designer, working with PHP and WordPress. We're both relatively new to RoR I think, and so we're trying to figure out our collaborative workflow, but we have no idea where to start. For instance, we were trying to figure out how he could do some minor edits to the CSS file without having to do a full RoR deploy on his box. We still haven't figured out a solution, so I think it's best if we start to set some sort of workflow based on best practices. I was wondering if you guys have any insight or links to articles/case studies regarding this topic?"} {"_id": "216320", "title": "Java regex patterns - compile time constants or instance members?", "text": "Currently, I have a couple of **_singleton_** objects where I'm doing matching on regular expressions, and my `Pattern`s are defined like so: class Foobar { private final Pattern firstPattern = Pattern.compile(\"some regex\"); private final Pattern secondPattern = Pattern.compile(\"some other regex\"); // more Patterns, etc. private Foobar() {} public static Foobar create() { /* singleton stuff */ } } But I was told by someone the other day that this is bad style, and `Pattern`s should **always** be defined at the class level, and look something like this instead: class Foobar { private static final Pattern FIRST_PATTERN = Pattern.compile(\"some regex\"); private static final Pattern SECOND_PATTERN = Pattern.compile(\"some other regex\"); // more Patterns, etc. private Foobar() {} public static Foobar create() { /* singleton stuff */ } } The lifetime of this particular object isn't that long, and my main reason for using the first approach is that it doesn't make sense to me to hold on to the `Pattern`s once the object gets GC'd.
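For context, the only thing the rest of the class ever does with them on the hot path is along the lines of `boolean ok = firstPattern.matcher(input).matches();` - the compiled patterns are never mutated after construction, only reused.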
Any suggestions / thoughts?"} {"_id": "216326", "title": "Finding the order of a set's elements", "text": "A little rephrased, in the form of a game, a real-life problem: Suppose there is a set of elements {1, 2, ..., n}. Player A has chosen a single permutation of this set. Player B wants to find out the order of the elements by asking questions of the form \"Is X earlier in the permutation than Y?\", where X and Y are elements of the set. Assuming B wants to minimize the number of questions, how many times would he have to ask, and what would be the algorithm?"} {"_id": "216325", "title": "How to Implement a Parallel Workflow", "text": "I'm trying to implement a parallel split task using a workflow system. I'm using .NET but my process is very simple and I don't want to use WF or anything heavy like that. I've tried using Stateless. So far it was easy to set up and run, but I may be using the wrong tool for the job because I'm not sure how you're supposed to model parallel split workflows, where you have multiple sub-tasks required before you can advance to the next state, but the steps don't require being performed in any particular order. I can easily use the dynamic configuration options to check my data model manually to see if the model is in the correct state (all sub-tasks completed) and can transition to the next state, but this seems to completely break the workflow paradigm. What is the proper, orthodox way to implement a parallel split process? Thanks"} {"_id": "252915", "title": "Structural pattern for an unconventional use of a database", "text": "I'm writing a game client as a personal project and using it as a vehicle to learn about Java database access, specifically Neo4j, and possibly Spring Data Neo4j if I decide it's appropriate. I'm well aware that my application is a bit unconventional, and this has raised questions that are hard to frame narrowly. I hope this question is appropriate for this site. Maybe the best way to ask this is to first explain what I'm thinking of doing and why. My main reason for incorporating a database is persistence, not queryability. Because reaction times are critical, my plan is for the primary model of the game state to be an in-memory POJO graph. I want to update the persistent database in an asynchronous, eventually-consistent way. If I understand correctly, this is the reverse of most database applications, in which the database is authoritative and the in-memory data is just a snapshot copy. Is there a name for this pattern? If you've written something like this, what are some of the pitfalls I may encounter? Is it naive to even try this?"} {"_id": "213940", "title": "Business rule to display data in all uppercase - how to handle?", "text": "Part of a system I am working on manages some securities information (stocks, bonds, etc...) and business rules specify that certain fields be displayed only in all CAPS (stock symbols and CUSIPs for example). Users will have to look at data displayed on the screen as well as perform create/edit data-entry operations. Where is the best place to deal with this? _1. Presentation layer only:_ user enters \"ibm\" as stock symbol, stored in database as \"ibm\", converted to uppercase when displayed in app (\"IBM\") _2.
Convert to CAPS before storing in DB:_ user enters \"ibm\", model class converts to uppercase and sends to database, stored as \"IBM\" Something like a custom setter: private string _StockSymbol; public string StockSymbol { get { return _StockSymbol; } set { if (value != null) value = value.ToUpper(); _StockSymbol = value; } } _3. Convert to CAPS at DB:_ user enters \"ibm\", database insert query converts to \"IBM\" (for example, using the `UPPER` function in SQL) The end result is the same for the users - they see their data in all CAPS and the system doesn't care if their data input is in the proper case or not. The most \"MVC compliant\" answer seems to be #1, but if this data will never be used in any format other than all CAPS, I would argue it should be validated as such before being stored in the database. That then becomes more of a controller or view model concern, right? I've heard people speak about accomplishing this client-side with JavaScript (and even CSS), but that seems like a very poor solution. I think the question is language/system-agnostic, but if it matters, I'm using MS SQL with Entity Framework/ASP.Net MVC. What I'm scratching my head over is whether or not a presentational business rule like this should influence how the data is stored in the DB (CAPS vs no CAPS). The application doesn't care if the stock symbol IBM is input as \"iBm\" or \"ibM\" but it seems wrong to store the data like that (it will only ever be used/displayed in CAPS). Would you consider this a data validation issue to be handled at the controller/model level, or a presentational detail to be handled only at the view?"} {"_id": "252910", "title": "Make Return Type an Interface - Problem with Initialization", "text": "I would like to make the return type of my method an interface rather than a class, for similar reasons as stated in c# List or IList, however I am having trouble figuring out how to initialize the interface to return it. I cannot use `new IA()` and `(IA) new A()` does not work as I cannot cast the result to `B`. interface IA{} class A: IA{} class B: IA{} class UseIA { public IA DesiredMethod() { return ???;// new IA() } public A UndesiredMethod() { return new A(); } }"} {"_id": "243118", "title": "storing map template in database", "text": "I am working on an application that displays choropleth maps. These maps are of all different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? **My Thoughts:** I won't need to do queries to find POI inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing a map as a geoJSON object (I am using a JS mapping library that accepts geoJSON). The only issue is what if I want a map of the US northeast. Then I would have geoJSON for the US and a separate one for the US northeast, which would be redundant. Would it make sense to have a shape database where I had each state; then when I needed a map of the US I could query for each state, and when I needed a map of the US Northeast I could again query for what I need? **Note**: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region."} {"_id": "163606", "title": "Configuration data: single-row table vs. name-value-pair table", "text": "Let's say you write an application that can be configured by the user.
For storing this \"configuration data\" into a database, two patterns are commonly used. 1. The _single-row table_ CompanyName | StartFullScreen | RefreshSeconds | ... ---------------+-------------------+------------------+-------- ACME Inc. | true | 20 | ... 2. The _name-value-pair_ table ConfigOption | Value -----------------+------------- CompanyName | ACME Inc. StartFullScreen | true (or 1, or Y, ...) RefreshSeconds | 20 ... | ... I've seen both options in the wild, and both have obvious advantages and disadvantages, for example: * The single-row table limits the number of configuration options you can have (since the number of columns in a row is usually limited). Every additional configuration option requires a DB schema change. * In a name-value-pair table everything is \"stringly typed\" (you have to encode/decode your Boolean/Date/etc. parameters). * (many more) Is there some consensus within the development community about which option is preferable?"} {"_id": "163601", "title": "What does cheap copying/branching mean for a versioning system like SVN?", "text": "One of the advantages of SVN over CVS as given here is **cheap copying and branching**. What does \"_cheap copying and branching_\" mean in SVN parlance? How is it different from CVS copying and branching?"} {"_id": "163602", "title": "licensing a ' Sharepoint bug-Fix' to reserve it", "text": "My friend recently finished his personal work on a project that fixes a known bug in `sharepoint 2010`. The guy spent significant time/effort to reach this point. He consulted me on how to license this `fix` and asked if this is possible and whether it is an accepted practice for such a type of solution. Actually I don't have any experience in this area, especially since he is willing to start selling the `fix` and doesn't want to get lost in the market. P.S.: his current location is the U.A.E. TIA."} {"_id": "168655", "title": "DI and hypothetical readonly setters in C#", "text": "Sometimes I would like to declare a property like this: public string Name { get; readonly set; } I am wondering if anyone sees a reason why such a syntax shouldn't exist. I believe that because it is a subset of \"get; private set;\", it could only make code more robust. My feeling is that such setters would be extremely DI friendly, but of course I'm more interested in hearing your opinions than my own, so what do _you_ think? _I am aware of 'public readonly' fields, but those are not interface friendly so I don't even consider them. That said, I don't mind if you bring them up into the discussion._ **Edit** I realize reading the comments that perhaps my idea is a little confusing. The ultimate purpose of this new syntax would be to have an automatic property syntax that specifies that the backing private field should be readonly. Basically declaring a property using my hypothetical syntax public string Name { get; readonly set; } would be interpreted by C# as: private readonly string name; public string Name { get { return this.name; } } And the reason I say this would be DI friendly is because when we rely heavily on constructor injection, I believe it is good practice to declare our constructor-injected fields as readonly."} {"_id": "163609", "title": "How do I get from a highly manual process of development and deploy to continuous integration?", "text": "We have a development process which is completely manual. No unit tests, interface tests are manual, and merging and integration are as well.
How could we go from this state to implementing continuous integration with full (or at least close to full) automation of build and test? We have a pretty intense development cycle, and are not currently using agile, so switching to agile with CI in one move would be a very complicated and expensive investment. How can we take it slowly, and still move constantly towards a CI environment?"} {"_id": "63910", "title": "Smartphones and tablets for testing", "text": "In my work I have started development of a website/webapp that will be viewed primarily on smartphones and tablets. I have now been given the task of buying the necessary hardware for testing the webapp. As I'm not really a mobile nerd, I'm not really up to date with the latest and greatest on the mobile market. So I need some advice on which units to buy. Some initial thoughts: 1. I would like to cover the browsers in the most common systems: iOS, Android and Symbian (are there any more I should know about?). 2. I would like each browser in both phone size and tablet size. iPhone and iPad seem like a given choice, but after that my knowledge is limited. I hope this is in the realm of this site; feel free to move this question to another SE site if it fits better somewhere else. Thanks in advance! **Edit:** In this scenario there isn't really a customer who can pay; this is an app that shall be used by our customers but is developed on our own initiative. Secondly, I get the feeling that, for example, the iPhone simulator uses the Safari desktop browser for rendering the content (am I wrong?); if that is the case, how does it cope with the slight differences between the desktop version and the phone version (for example videos not being able to autostart on the phone and stuff like that)? I can be totally wrong here, just raising the question."} {"_id": "121947", "title": "Is there ever a reason to use C++ in a Mac-only application?", "text": "Is there ever a reason to use C++ in a Mac-only application? I'm not talking about integrating external libraries which are C++; what I mean is using C++ because of any advantages in a particular application. While the UI code must be written in Obj-C, what about logic code? Because of the dynamic nature of Objective-C, C++ method calls tend to be ever so slightly faster, but does this have any effect in any imaginable real life scenario? For example, would it make sense to use C++ over Objective-C for simulating large particle systems where some methods would need to be called over and over in a short time? I can also see some cases where C++ has a more appropriate \"feel\". For example when doing graphics, it's nice to have vector and matrix types with appropriate operator overloads and methods. This, to me, seems like it would be a bit clunkier to implement in Objective-C. Also, Objective-C objects can never be treated as plain old data structures in the same manner as C++ types, since Objective-C objects always have an isa-pointer. Wouldn't it make sense to use C++ instead in something like this? Does anyone have a real life example of a situation where C++ was chosen for some parts of an application? Does Apple use any C++ except for the kernel?
(I don't want to start a flame war here; both languages have their merits and I use both equally, though in different applications.)"} {"_id": "29553", "title": "Under what circumstances should error messages be presented to the user?", "text": "Should error messages ever be presented to an end user, and if so, what rules or advice should you have about what should be in them?"} {"_id": "29550", "title": "How to keep a team well-trained?", "text": "I'm currently mentoring a small team of 4 junior devs in a small software company. They are very smart and often achieve their tasks with a high-quality job, but I'm sure they can still do better - actually I have exactly the same feeling about myself :). Besides, some of them are more \"junior\" than others. So I would like to find a fun way to improve their CS skills (design, coding, testing, algorithmics...) in addition to the experience they acquire in their daily work. For instance, I was thinking of setting up weekly sessions, not longer than 2 hours, where we could get together to work on challenging CS exercises. A bit like a coding dojo. I'm sure the team would enjoy that, but is it really a good idea? Would it be efficient in a professional context? They already spend all week coding, so how should I organize this in order for them to get some benefit? Any feedback welcome!"} {"_id": "191869", "title": "To integrate git versions as build numbers or not?", "text": "A colleague and I have been taking turns debating/discussing the issues/merits of integrating a version derived from the current git repository into our code whenever it builds. We think the merits include: * No need to worry about human error in updating a version number * Traceability between what we find in a device and the source code it was derived from The issues that have arisen (for us) include: * IDE derived build systems (e.g. MPLABX) can make it hard to figure out where to put these kinds of hooks in (and it can end up pretty cheesy in the end) * More work to actually integrate this into the build scripts/makefiles * Coupling to a particular build approach (e.g. what if one person builds with XCode and the other MPLABX) may create downstream surprises So we're curious where others have landed on this debate. It's really easy for the discussion to become anecdotal. There are lots of people out there who are insistent on end-to-end automation, hang the amount of up front work and coupling it creates. And there are a lot of others on the other side of the debate, who just do the easiest thing that works and live with the risks. Is there a reasoned answer to which side is best to land on?"} {"_id": "104004", "title": "Why the sudden rise in popularity of Lua?", "text": "Does anyone know why the Lua programming language has seen such a rise in popularity recently? I am going by the TIOBE ratings. http://www.tiobe.com/index.php/paperinfo/tpci/Lua.html I've used Lua in the past when I worked at a game development shop, but did not think it was used outside of that arena. Now it is the 11th most popular language according to TIOBE http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html Has there been any recent activity or news that might have caused this sudden surge? Thanks!
EDIT: I am particularly interested in the extreme increase over the last year as shown in this chart: ![Lua History](http://i.stack.imgur.com/krjpB.png)"} {"_id": "174789", "title": "Forbidding or controlling \"Hidden IT...\" Who should write and maintain ad-hoc software applications?", "text": "Bigger companies usually have the problem that it is not possible to write all the programs employees want (to save time and to optimize processes) due to a lack of staff and money. Then hidden programs will be created by some people having (at least some) coding experience (or by cheap students/interns...). Under some circumstances these applications will rise in importance and spread from one user to a whole department. Then there is the critical point: Who will maintain the application, add new features, ...? And this app is critical. It IS needed. But the intern has left the company. No one knows how it works. You only have a bunch of sources and some sort of documentation. Does it make sense to try and control or forbid application development done ad-hoc outside of the IT department (with the exception of minor stuff like Excel macros)?"} {"_id": "34959", "title": "How are Apache HTTP Server and Apache Tomcat related? (If at all)", "text": "I currently have Apache httpd running on a production Ubuntu VPS server. I write PHP scripts. I'm interested in learning Java and I was wondering how I would go about writing some server-side Java to work on my current set-up. How are _Apache Tomcat_ and _Apache HTTP Server_ related to each other? Can Tomcat be a module of httpd? Or are they simply two very different projects that happen to be steered by the same organisation (the Apache Software Foundation)?"} {"_id": "63917", "title": "Entry point to Mobile Computing", "text": "I'm a novice programmer who has just learnt some languages and wants to experiment and sharpen my skills. I've learnt quite a few languages like C, C++, Python, VC++, Java. I want to learn to program on mobile (hand-held) devices. What would be the best way to start? Could anyone please suggest where to start? I have heard that I would have to work on emulators, but googling it didn't help. I know that there are quite a few experienced programmers who could help me and show me the appropriate path. As you must have guessed just by reading my question, I don't have a perfect picture of what I want, so maybe my question isn't Bang On!! ... Please Help"} {"_id": "178206", "title": "Is it appropriate to try to control the order of finalization?", "text": "I'm writing a class which is roughly analogous to a CancellationToken, except it has a third state for \"never going to be cancelled\". At the moment I'm trying to decide what to do if the 'source' of the token is garbage collected without ever being set. It seems that, intuitively, the source should transition the associated token to the 'never cancelled' state when it is about to be collected. However, this could trigger callbacks that were only kept alive by their linkage from the token. That means what those callbacks reference might now be in the process of finalization. Calling them would be bad.
In order to \"fix\" this, I wrote this class: public sealed class GCRoot { private static readonly GCRoot MainRoot = new GCRoot(); private GCRoot _next; private GCRoot _prev; private object _value; private GCRoot() { this._next = this._prev = this; } private GCRoot(GCRoot prev, object value) { this._value = value; this._prev = prev; this._next = prev._next; _prev._next = this; _next._prev = this; } public static GCRoot Root(object value) { return new GCRoot(MainRoot, value); } public void Unroot() { lock (MainRoot) { _next._prev = _prev; _prev._next = _next; this._next = this._prev = this; } } } intending to use it like this: Source() { ... _root = GCRoot.Root(callbacks); } void TransitionToNeverCancelled() { _root.Unlink(); ... } ~Source() { TransitionToNeverCancelled(); } but now I'm troubled. This seems to open the possibility for memory leaks, without actually fixing all cases of sources in limbo. Like, if a source is closed over in one of its own callbacks, then it is rooted by the callback root and so can never be collected. Presumably I should just let my sources be collected without a peep. Or maybe not? Is it ever appropriate to try to control the order of finalization, or is it a giant warning sign?"} {"_id": "111199", "title": "Using Node.js with Heroku to make a chat server?", "text": "I currently have a goal of creating a sort of chat website. I just finished trying with PHP and long polling, which hit a resource limit on my webhost's server. I was told that I should use Node.js and that it can be hosted with Heroku. I honestly have very little understanding of what Node.js or Heroku are. From what I've been told and have read after looking this up is that you can install Node.js and run apps online with it. The tutorial here: http://www.jamesward.com/2011/06/21/getting-started-with-node-js-on-the-cloud/ goes over installing and executing a script on heroku with Node.js. I have NO experience with command line, and wouldn't be able to do anything but copy the exact commands in that tutorial. I also don't understand when he accesses the script with localhost if the app is supposed to be on Heroku. Can anyone explain what Heroku is, and provide some resources on how to use it? How can I work my way up to knowing how to use it?"} {"_id": "123627", "title": "What are the London and Chicago schools of TDD?", "text": "I\u2019ve been hearing about the London style vs. Chicago style (sometimes called Detroit style) of Test Driven Development (TDD). Workshop of Utah Extreme Programming User's Group: > _Interaction-style_ TDD is also called _mockist-style_ , or _London-style_ > after London's Extreme Tuesday club where it became popular. It is usually > contrasted with _Detroit-style_ or _classic_ TDD which is more state-based. Jason Gorman's workshop: > The workshop covers both the _Chicago school_ of TDD (state-based behaviour > testing and triangulation), and the _London school_ , which focuses more on > interaction testing, mocking and end-to-end TDD, with particular emphasis on > _Responsibility-Driven Design_ and the _Tell, Don't Ask_ approach to _OO_ > recently re-popularized by Steve Freeman's and Nat Pryce's excellent > _Growing Object Oriented Software Guided By Tests_ book. The post **Classic TDD or \"London School\"?** by Jason Gorman was helpful, but his examples confused me, because he uses two different examples instead of one example with both approaches. What are the differences? 
When do you use each style?"} {"_id": "123621", "title": "Practical size limits of a Hashtable and Dictionary in c#", "text": "What are the practical limits for the number of items a C# 4 Dictionary or Hashtable can contain, and the total number of bytes these structures can reasonably contain? I'll be working with large numbers of objects and want to know when these structures start to experience issues. For context, I'll be using a 64 bit system with tons of memory. Also, I'll need to find objects using some form of 'key'. Given the performance demands, these objects will need to reside in memory, and many will be long-lived. Feel free to suggest other approaches/patterns, although I need to avoid using third-party or open-source libraries. For specification reasons, I need to be able to build this using native C# (_or C++/CLI_)."} {"_id": "111193", "title": "Hooking up a Business Layer and Repository using Unit of Work Pattern", "text": "I am having a bit of trouble explaining precisely the question I have here, so bear with me. It is similar to the unanswered question found here: http://stackoverflow.com/questions/5580651/what-is-the-correct-way-to-use-unit-of-work-repositories-within-the-business-laye Apologies if this is too rambling. Scenario: + .Net solution + IRepository used to retrieve objects from DB + IUnitOfWork used to allow transactions across multiple repositories This makes sense to me and I have implemented something along these lines that works fine. Now I want to introduce a business logic layer and am having trouble organising the three elements (BLL, UnitOfWork and Repository) in my mind. My understanding: - Repository - data retrieval, manipulation - UnitOfWork - persistence - BLL - logic relevant to the business ('real world') (dislike that term!) Consider we have an ASP.Net MVC front end. I guess the question is: what does a BLL look like and what does the MVC controller that uses it look like? For reference: I wonder if perhaps my IUnitOfWork/IRepository implementation might be the underlying cause of my confusion. public class IRepository<T> { private IObjectSet<T> objSet; public IRepository(IUnitOfWork uow) { objSet = uow.CreateObjectSet<T>(); } public void Add(T entity) { objSet.Add(entity); } //etc. etc. for delete, attach, getall } So I feel like if I have a BLL, I should be passing it the IUnitOfWork, so that it can use that to create the IRepository instances that it needs. But how will the BLL (a separate DLL from the front end) 'know' what implementation of IRepository to build?"} {"_id": "111196", "title": "How do I get over my .NET hump?", "text": "I've been teaching myself how to write code with the help of a very talented friend, who unfortunately is talented enough that he never has free time. I've read and worked out of a couple books, but I feel more like I'm going through the motions, rather than elegantly solving real world problems. What did you guys do to get out of your larval stage? Is there a course or educational product out there that isn't quite the $1500 all-encompassing lecture, but more than just a dry, \"regurgitate this example,\" type book?"} {"_id": "95311", "title": "Leaving a contract from a recruiter for a permanent position", "text": "So after months of searching, I finally get an offer. It's a contract position through my recruiter. Then all of a sudden I get another offer from one of the MAJOR software engineering companies.
I already accepted my position at the first place, but this other offer gives me so much more money and it has benefits. Can I leave the contract from my recruiter to go to this other company?"} {"_id": "148970", "title": "Is this an example of a pattern or an algorithm?", "text": "I feel like I have a reusable \"something\" here and I'm not sure whether to think of it as a pattern or an algorithm (or neither). It's characterized by having an unknown amount of work to accomplish a task, because the subtasks can encounter various conditions which cause them to add \"Issues\" to a global queue. Then, the command is run repeatedly, coupled with a round of \"Issue Fixing\", until either there are no issues left or the number of issues does not change. I'm just putting enough code here to show what I'm talking about - there's a bit more to it (let me know if I should post more). public void FindNewCampaigns() { var findNewCampaigns = Locator.Get<Command<IEnumerable<Campaign>>>(\"FindNewCampaigns\"); var campaigns = findNewCampaigns.Execute(); var issues = Locator.Get<ICollection<Issue>>(\"CommandIssues\"); while (issues.Count > 0) { int before = issues.Count; FixIssues(issues); issues.Clear(); campaigns = findNewCampaigns.Execute(); int after = issues.Count; if (before == after) { System.Console.WriteLine(\"No issues got fixed, quitting ({0}/{1}).\", before, after); break; } } if (issues.Count == 0) { CreateCampaigns(campaigns); } } private void FixIssues(ICollection<Issue> issues) { foreach (var issue in issues) { System.Console.WriteLine(\"ISSUE: \" + issue.GetType().Name + \" - \" + issue.ToString()); if (issue is AdvertiserDoesNotExist) { var specificIssue = (AdvertiserDoesNotExist)issue; var command = Locator.Get<Command<string>>(\"CreateAdvertiser\"); command.Execute(specificIssue.AdvertiserName); } else if (issue is AccountManagerDoesNotExist) { var specificIssue = (AccountManagerDoesNotExist)issue; var command = Locator.Get<Command<string>>(\"CreateAccountManager\"); command.Execute(specificIssue.AccountManagerName); } else if (issue is AdManagerDoesNotExist) { var specificIssue = (AdManagerDoesNotExist)issue; var command = Locator.Get<Command<string>>(\"CreateAdManager\"); command.Execute(specificIssue.AdManagerName); } else if (issue is MediaBuyerDoesNotExist) { var specificIssue = (MediaBuyerDoesNotExist)issue; var command = Locator.Get<Command<string>>(\"CreateMediaBuyer\"); command.Execute(specificIssue.MediaBuyerName); } } } Here's the code for `FindNewCampaigns`. It adds issues as it finds them. An _Issue_ is supposed to be something that needs to happen before a new campaign is able to actually get created in a target store.
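// FindNewCampaigns (below) diffs the Cake offers against the existing EOM campaigns, builds the new Campaign rows in memory, and raises an Issue for every missing status/advertiser/account manager/ad manager instead of failing outright, so the loop above can fix what it can and retry.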
public class FindNewCampaigns : Command> { private IFactory cakeEntities; private IFactory eomDatabase; public FindNewCampaigns(IFactory cakeEntities, IFactory eomDatabase) { this.cakeEntities = cakeEntities; this.eomDatabase = eomDatabase; } public override IEnumerable Execute() { using (var eom = eomDatabase.Create()) using (var cake = cakeEntities.Create()) { // Get EOM campaigns var campaigns = eom.Campaigns.Select(c => c.pid).ToList(); // Get Cake offers var offers = cake.CakeOffers.Select(c => c.Offer_Id).ToList(); // Get Cake offers that don't match to EOM campaigns var newOffers = offers.Except(campaigns).ToList(); // Get default values int accountManagerID = DefaultAccountManagerId(eom); int adManagerID = DefaultAdManagerId(eom); int advertiserID = DefaultAdvertiserID(eom); int campaignStatusID = DefaultCampaignStatus(eom); // Create new campaigns in memory var newCampaigns = (from offer in cake.CakeOffers where newOffers.Contains(offer.Offer_Id) select new { Offer = offer, Campaign = new Campaign { pid = offer.Offer_Id, campaign_name = offer.OfferName, campaign_status_id = campaignStatusID, account_manager_id = accountManagerID, ad_manager_id = adManagerID, advertiser_id = advertiserID, } }).ToList(); // Set campaign status var campaignStatus = eom.CampaignStatus.ToDictionary(c => c.name, c => c.id); campaignStatus.Add(\"Private\", campaignStatus[\"Active\"]); campaignStatus.Add(\"Apply To Run\", campaignStatus[\"Active\"]); campaignStatus.Add(\"Inactive\", campaignStatus[\"default\"]); foreach (var item in newCampaigns) { string statusName = item.Offer.StatusName; if (campaignStatus.ContainsKey(statusName)) { item.Campaign.campaign_status_id = campaignStatus[statusName]; } else { AddIssue(new CampaignStatusDoesNotExist(statusName)); } } // Set advertiser var cakeAdvertisers = cake.CakeAdvertisers.ToDictionary(c => c.Advertiser_Id); foreach (var item in newCampaigns) { int offerAdvertiserID = int.Parse(item.Offer.Advertiser_Id); var offerAdvertiser = cake.CakeAdvertisers.FirstOrDefault(c => c.Advertiser_Id == offerAdvertiserID); string offerAdvertiserName = offerAdvertiser.AdvertiserName; var eomAdvertiser = eom.Advertisers.FirstOrDefault(c => c.name == offerAdvertiserName); if (eomAdvertiser != null) { item.Campaign.advertiser_id = eomAdvertiser.id; } else { AddIssue(new AdvertiserDoesNotExist(offerAdvertiserName)); } } // Set account manager foreach (var item in newCampaigns) { int offerAdvertiserID = int.Parse(item.Offer.Advertiser_Id); var offerAdvertiser = cake.CakeAdvertisers.FirstOrDefault(c => c.Advertiser_Id == offerAdvertiserID); string offerAccountManager = offerAdvertiser.AccountManagerName; if (offerAccountManager != null) { AccountManager am = eom.AccountManagers.ToList().SingleOrDefault(c => c.NameIsEquivalentTo(offerAccountManager)); if (am != null) { item.Campaign.account_manager_id = am.id; } else { AddIssue(new AccountManagerDoesNotExist(offerAccountManager)); } } else { AddIssue(new OfferHasNoAccountManager(item.Offer.OfferName)); } } // Set ad manager foreach (var item in newCampaigns) { int offerAdvertiserID = int.Parse(item.Offer.Advertiser_Id); var offerAdvertiser = cake.CakeAdvertisers.FirstOrDefault(c => c.Advertiser_Id == offerAdvertiserID); string offerAdManager = offerAdvertiser.AdManagerName; if (offerAdManager != null) { AdManager ad = eom.AdManagers.ToList().SingleOrDefault(c => c.NameIsEquivalentTo(offerAdManager)); if (ad != null) { item.Campaign.ad_manager_id = ad.id; } else { AddIssue(new AdManagerDoesNotExist(offerAdManager)); } } else { 
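// the advertiser record carries no ad manager name at all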
AddIssue(new OfferHasNoAdManager(item.Offer.OfferName)); } } return newCampaigns.Select(c => c.Campaign).ToList(); } }"} {"_id": "186523", "title": "Difference between functional test and integration test", "text": "I am deeply confused about the difference. I've read so many definitions, and they always explain a functional test as testing that the requirement is satisfied. Well, that's just rephrasing the name `functional test`. That doesn't clarify the difference. I am interested in some real code to demonstrate the difference. Suppose we have a function in a library that performs hashing: def custom_hasher(scheme, val): # use many hashing libraries... def hasher(inlist, scheme): \"\"\" Takes a list and outputs the hashed values of the list. \"\"\" output = list() for val in inlist: output.append(custom_hasher(scheme, val)) return output Now for a functional test, I am guessing we want to test that `['a', 'b']` is returned as something like `['jask34sdasdas', 'asasjdk234sjdk']` given some scheme. But that's just about what an integration test can do! I know exactly what type of input I want (I want good execution so I pass in a list), or I want it to raise an `Object has no append method` exception if I pass in a dictionary. I can do that in both. Where's the distinction? Another example is some web app: @logged_in # only logged in user can do this @route(\"/invite\", method=['POST']) def send_invite(request): recp_email = request.data['recp_email'] # now do a bunch of logic before and after sending an email So in my integration test, I will definitely do this over a network (have the server running). Send some request to this url. Same for a functional test. How do I draw the line? For this case, I can write a functional test that checks whether an email was sent by looking at the send log in some table. But that's a different function than what I am testing (the view `send_invite`). So I don't see how to differentiate the two. They both assert something. Please help."} {"_id": "148978", "title": "What is the connection between literate programming and the semantic web?", "text": "I was (casually) researching semantic / ontology based approaches to technical documentation, when I stumbled upon this gem: > Literate Programming and the Semantic Web are ideas from different times, which do have a connection. The linked paper, Literate Programming in XML by Norman Walsh, discusses XML technologies that are central to the semantic web; however I fail to see the conceptual connection between literate programming and the semantic web _or_ ontology based documentation. Help?"} {"_id": "111443", "title": "Pseudocode for Brodal queue", "text": "I'm trying to find more resources regarding the Brodal heap. All I found is a Haskell implementation of the Brodal-Okasaki heap, but I _think_ that they are skew heaps; is this correct? Furthermore, I'm illiterate in Haskell, so that does not help much. Does anyone have (or know of) a Brodal queue implementation in pseudocode, C, C++, or Python? Also please correct me if my assumptions above are wrong."} {"_id": "155918", "title": "Is my first employer expecting too much?", "text": "This is my first job as a programmer. I am working using the following technologies: * ASP.NET * C# * HTML * CSS * Javascript * JQuery I work for a firm which develops software for small banking firms. Currently they have their software running in 100 firms. Their software is developed in Visual Fox Pro. I was hired to develop an online version of this software. I am the only developer.
My boss is another developer, the only other developer in the firm. Therefore, my employer has a total of two developers. My boss does not have any experience with .NET development. I have been working on this project for 8 months. The progress is there, but it has been very slow. I try my best to do what my boss asks. But the project just seems too ambitious for me. The company has not done any planning for the project. They just ask me to develop what their older software provides. So I have to deal with the front end, the back end, reviewing code, designing the architecture, and more. I have decided to give my best. I try a lot. But the project sometimes just seems to be overwhelming. **Question:** Is it normal for a beginner programmer to be in this place? * Are my employers just expecting too much of a new programmer? * As a programmer, am I lacking skills one needs to deal with this? I always feel the need to work in at least a small team, if not a big one. I am just not able to judge my situation. Also, I am paid a very low salary. I do work on Saturday as well. Please help to clarify my judgment. Any suggestions are welcome."} {"_id": "61376", "title": "Aggregation vs Composition", "text": "I understand what composition is in OOP, but I am not able to get a clear idea of what Aggregation is. Can someone explain?"} {"_id": "111449", "title": "What is the industry definition of an interpreter (as opposed to a compiler)?", "text": "In my compiler design courses, I have learned about and worked with a clear academic definition of an interpreter and a compiler, with an interpreter being > a program Pi from a language M capable of taking a program i from a language I and an input and executing i with the given input and the correct semantics for I on a machine capable of running programs from M and a compiler being > a program Pc that, when given a valid program i from a language I as an input, produces a semantically equivalent program o in a language O This definition would clearly put the usual execution of Java bytecode by a JVM in the domain of interpretation, no matter how much JIT compilation is done. (Of course, Java is also compiled before that, from Java code to Java bytecode.) I have encountered opinions in discussions on this site that clearly and vehemently state the opposite, i.e. that Java bytecode execution thingies are compilers. Since I am about to make the leap from academics to industry, I am a little bit confused here: Is the above definition of interpreters false from the viewpoint of industry people in general? Or is it just false for Java people? Is the view of Java as a fully compiled language an alternate, but minority, view? Or just a few loonies? (PS: Please do not move this to cstheory. I have deliberately put this question here since I would really like to get the view of the professional industrial community, not the academic one.)"} {"_id": "61373", "title": "What is the preferred way to protect ownership of one's code (e.g. copyright, licenses, etc)", "text": "I posted some open source code on SourceForge, and someone got in contact with me stating they intended to use my algorithms for a commercial product. I don't want this to happen; what should I do? According to the US copyright.gov site: \"Your work is under copyright protection the moment it is created and fixed in a tangible form that it is perceptible either directly or with the aid of a machine or device.\" Which seems to grant some legal protection by doing nothing.
I looked at getting an official copyright from them, and its more than I'd like to spend ($135). Is that what I need to get for my work? Or are licenses a cheaper and safe/legal alternative? Or something else? Or nothing at all?"} {"_id": "189202", "title": "You're hired to fix a small bug for a security-intensive site. Looking at the code, it's filled with security holes. What do you do?", "text": "I've been hired by someone to do some small work on a site. It's a site for a large company. It contains very sensitive data, so security is very important. Upon analyzing the code, I've noticed it's filled with security holes - read, lots of PHP files throwing user get/post input directly into mysql requests and system commands. The problem is, the person who made the site for him is a programmer with family and children who depend on that job. I can't just say: \"your site is a script kiddie amusement park. Let me redo it for you and you'll be fine.\" What would you do in this situation? ## Update: I followed some good advice here and politely reported to the developer that I've found some possible security flaws on the site. I pointed out the line and said there could be a possible vulnerability for SQL injection attacks there, and asked if he knew about it. He replied: _\"sure, but I think that to exploit it the attacker should have information on the structure of the database; I have to understand better\"_. ## Update 2: I said that's not always the case and suggested he follows this Stack Overflow question link in order to deal with it properly: How to prevent SQL injection in PHP? He said he would study it and thanked me for telling him before. I guess my part is done, thanks guys."} {"_id": "216810", "title": "Tender vs. Requirements vs. Solution Design", "text": "Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance _and_ that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other? _EDIT: Apologies for the ambiguous terms. In this situation, tender is the document the customer took to market when shopping for a provider - it includes details of all the high-level features they're looking for. 
Solution design is not a technical document, it's the functional specification._"} {"_id": "216817", "title": "Requirement refinement between two levels of specification", "text": "I am currently working on the definition of the documentation architecture of a system, from customer needs to software/hardware requirements. I encounter a big problem with the level of refinement of requirements. The classic architecture is: PTS --> SSS --> SSDD --> SRS/HRS with PTS : Purchaser Technical Specification SSS : Supplier System Specification SSDD : System Segment Design Description SRS / HRS : Software / Hardware Requirement Specification. Requirements from the PTS are reworked in the SSS; this document only expresses the needs (no design requirements are defined at this level). Then, the system design is described in the SSDD: we allocate requirements from the SSS to functions from the design, and functions are then allocated to components (software or hardware) (we are still at the SSDD level). Finally, for each component, we write one SRS or one HRS. Requirements in the SRS or HRS are refinements of requirements from the SSS (and traceability matrices are made between these two levels). My problem is the following one: our system is a complex one, and some of the requirements in the SSS need to be refined twice to be at the right level in the SRS (meaning that software people can understand the requirement well enough to do their coding). But, with this document architecture, I can only refine the requirements from the SSS once. The second problem is that only a part of the requirements from the SSS needs to be refined twice. The other part only needs one refinement. On the picture below, the green boxes are requirements at the right level for the SRS or HRS. And the purple boxes are intermediate requirements which cannot be included in the SSS since they are design requirements. Where can I put these purple requirements? Is there someone who has already encountered this problem? Should I write two documents at the SRS level? Should I include intermediate requirements in the SSDD? Should I include the two refinement levels (purple and green) in the same SRS document (not sure that's possible since an SRS is only for one component)? Thanks for your help and expertise ;-) ![Requirement refinement](http://i.stack.imgur.com/3xHyM.png)"} {"_id": "151133", "title": "Why were annotations introduced in Spring and Hibernate?", "text": "I would like to know why annotations were introduced in Spring and Hibernate. For earlier versions of both frameworks, book authors were saying that if we keep configuration in xml files then it will be easier to maintain (due to decoupling) and just by changing the xml file we can re-configure the application. If we use annotations in our project and in the future we want to re-configure the application, then we have to compile and build the project again. So why were these annotations introduced in these frameworks? From my point of view annotations make the apps dependent on a certain framework. Isn't that true?"} {"_id": "151139", "title": "How much should I worry about modeling/analyzing a web application?", "text": "I'm developing kind of a modern web application - scalable, robust and fast. I need to develop it as fast as I can. How much should I worry about the analysis phase or writing UML diagrams or ER diagrams for it? And what should be the flow on that?
What should I start drawing?"} {"_id": "207274", "title": "Should I bother to write unit test for UI/UX Components?", "text": "So I am building an application with Angular and have started to get into UI testing with DalekJS (http://dalekjs.com). As I have been writing these tests I have been thinking to myself: should I even bother writing unit tests for UI/UX components? Now, my angular services generally don't have anything to do with the DOM or directly rendering stuff on the page, so those I unit test and it makes sense. However, angular directives are components that render things directly to the page, and writing unit tests for them seems like: 1. It is not an effective way to test UI/UX components, and 2. It would overlap with UI/UX tests. For point #1, unit tests (at least the ones I have seen) don't actually write anything to a browser and render it. For things that require the DOM, you generally mock the DOM in a variable and use that to test whatever you need to test. If you have an error with a test, you can't load it up in a browser and play around with it like you could with UI/UX tests (which in my experience run against code that is the true application that renders and everything). For point #2, one of my directives has a property called contentVisible. Now I can write a unit test that makes sure that property has the correct value at certain points, but that really does not test what I truly want to test, because even if contentVisible is set to false, the content might still be rendering to the screen, which a UI/UX test would pick up. Is it still worth the effort to write unit tests for UI/UX components when UI tests would be able to pick up everything the unit tests would, plus do a better job since they can test what is actually rendered? **Exception** The one exception case where I would need a unit test is for certain ajax requests. For example, making sure an ajax request is not made, or that an ajax request does not make any changes to the UI, are things that can only be tested with unit tests."} {"_id": "207273", "title": "Distributing cryptographic software on GitHub", "text": "I am developing a cryptographic program and I am planning on distributing it using GitHub. But there are some regulations on the export of cryptographic software; how do I make myself not liable if someone takes it out of the country? What disclaimer should I include with my software when I distribute it?"} {"_id": "212317", "title": "What would be a good \"umbrella\" term for HTML, CSS, SQL, JavaScript and the like?", "text": "There appears to be a pretty clear consensus on the question about whether HTML and CSS are programming languages. They are not. HTML is a markup language and CSS is a stylesheet language. You could probably also argue that SQL isn't really a programming language either, and JavaScript is, as the name implies, more a scripting language than a programming language. However, what's a good \"umbrella\" term for these? Could I just say they are \"technologies\"?"} {"_id": "212316", "title": "How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?", "text": "I am new to nopCommerce and ecommerce in general but I am involved in an ecommerce project. Now, from my past experiences with RavenDB (which mostly were absolutely pleasant) and based on the needs of the business (fast changes with awkward business workflows), it seemed to be an appealing option to have RavenDB handle all sorts of things related to the database.
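For the nopCommerce/RavenDB question above: one hedged sketch of the kind of seam that makes the storage engine swappable is to have the service layer depend on a small repository interface instead of on the EF working model. All names here are hypothetical illustrations, not nopCommerce's actual types.

using System.Collections.Generic;

// A deliberately small abstraction; services depend on this,
// not on EF's DbContext or RavenDB's document session.
public interface IProductRepository
{
    Product GetById(string id);
    IEnumerable<Product> GetAll();
    void Save(Product product);
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// One implementation would wrap EF, another a RavenDB session;
// the service layer never knows which one it got.
public class ProductService
{
    private readonly IProductRepository repository;

    public ProductService(IProductRepository repository)
    {
        this.repository = repository;
    }

    public Product Find(string id) { return repository.GetById(id); }
}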
I do not understand the design and architecture of nopCommerce fully, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; for example, it brings the EF working model into other layers. I have found another project, a nopCommerce fork, which used NuDB as its database. But it did not help, because NuDB still has the feeling of an RDBMS and is not as different as RavenDB is. Now first, how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second, has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a 1 (~2) month time frame? Thanks in advance;"} {"_id": "242795", "title": "What is the \"Free Monad + Interpreter\" pattern?", "text": "I've seen people talking about _Free Monad with Interpreter_, particularly in the context of data-access. What is this pattern? When might I want to use it? How does it work, and how would I implement it? I understand (from posts such as this) that it's about separating model from data-access. How does it differ from the well-known Repository pattern? They appear to have the same motivation."} {"_id": "212312", "title": "Mutual observer pattern in Java", "text": "I want to improve my multi-threading and design pattern skills. As such I'm designing an Instant Messaging server. I'm writing the Server first. My plan so far is to have Client \"Proxy\" Classes to handle the socket connection for each Client. I want to have an \"Exchange\" Class that takes a message from a Client Proxy and hands it to the recipient Client Proxy. I was thinking of having both the Client and Exchange observe each other in this situation via the Observer Pattern. Mutual observers as it were, with Client Proxy and Exchange being Observer and Subject. On further thinking, should I instead have just the Exchange be the Observer and the Client Proxies as multiple Subjects for the Exchange? -- Further thoughts It seems people quite like the idea of mutual observation between Proxy and Exchange. I was planning on each Client Proxy running in a separate thread. Would the Exchange become the bottleneck if there's only one Object? It sounds to me like I might need a pool of exchange objects, but I'm unsure how that would then map to the Observer pattern, even if I had some kind of broker in front of my pool of workers."} {"_id": "212310", "title": "Memory allocation of Classes that don't have any global data and locks", "text": "static void Main(string[] args) { var c2 = new Class2(); var c3 = new Class3(); var c1 = new Class1(c2, c3); c1.Method1(); } class Class1 { readonly Class2 _class2; readonly Class3 _class3; public Class1(Class2 class2, Class3 class3) { _class2 = class2; _class3 = class3; } public void Method1() { _class2.PerformM1(); _class3.PerformM2(); } } class Class2 { public void PerformM1() { //do some operation } } class Class3 { public void PerformM2() { //do some operation } } **Question based on above code:** 1. How much memory does an object of Class1 have when it is created? 2. Does any memory increase or decrease when I perform c1.Method1(), keeping in mind that none of the classes have any global fields that might acquire any space? * * * **Edit** Expanding with one more question: 1.
If creating new objects of Class1 and calling methods of the c1 object does not involve much memory usage, then is it correct to say that I don't need a lock in the sample code provided above? Can I simply create n objects and assign an individual object to an individual Task (or thread)?"} {"_id": "214418", "title": "Is choosing ASP.NET WebForms to design web applications now a bad practice?", "text": "I have no time to code in MVC and jQuery, I just want this job to \"get done\", and honestly I like WebForms after 3 years of MVC programming. So my question is, technically, is it bad practice to choose ASP.NET WebForms over MVC?"} {"_id": "250980", "title": "External libraries licensing", "text": "The problem I have might seem trivial for some of you, but in general I have a question about all those \"free\" software licenses (mainly for PHP libraries). Let's assume that I want to create a project for an end user and I want to sell it to them and, if possible, encrypt my source code. For example, for Zend Framework the licence is the New BSD License and for Laravel it's the MIT license. For Zend Framework there is this short info: > Redistributions of source code must retain the above copyright notice, this > list of conditions and the following disclaimer. > > Redistributions in binary form must reproduce the above copyright notice, > this list of conditions and the following disclaimer in the documentation > and/or other materials provided with the distribution. but it means nothing to me. What does the first one mean? Should I simply download Zend Framework and put it in a directory, and if I don't do anything with it I can forget about it? Or maybe I should tell my client that I used a framework and my software may not work because > THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" > AND ANY EXPRESS OR IMPLIED WARRANTIES ... I really don't understand how those licenses work and what the differences between those licenses are. Could you provide me some help or point me to external resources where this issue is explained in a simple way?"} {"_id": "22468", "title": "Data structures for bioinformatics", "text": "What are some data structures that should be known by somebody involved in bioinformatics? I guess that anyone is supposed to know about lists, hashes, balanced trees etc, but I expect that there are domain-specific data structures. Is there any book devoted to this subject? Thanks, Lucian"} {"_id": "127011", "title": "Get a Programming Internship if you don't know programming? Learning instead?", "text": "Is it possible to get an internship to learn programming if you've, say, only had one class in Visual Basic? For example, I want to learn any programming language, but I've only taken 1 class in Visual Basic and SQL.... can I be at a company to learn on the job?"} {"_id": "245225", "title": "Format for getting clear directions on data paramters from users", "text": "As part of my job, I regularly get ad-hoc requests from users for snapshots of our very large database. However, there's no fixed format for delivering these and they usually come through in the form of bullet-pointed text, for example: * All customers * All active in last 0-24 months (including non-contactable customers) * All committed In this instance, \"committed\" is a flag, and \"active/non-contactable\" is both a text-based status and a dynamic status based on their last purchase.
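One possible way to pin down requests like the snapshot request above is a small structured form in which the AND/OR semantics are explicit rather than implied by bullet order. A hedged C# sketch; the field names are invented for illustration, not taken from the question:

using System;

// Each ambiguity in the bullet list becomes an explicit, named choice.
public class SnapshotRequest
{
    // Months since last purchase; null means "do not filter on activity".
    public int? ActiveWithinMonths { get; set; }

    // Statuses to include; empty means "any status".
    public string[] Statuses { get; set; } = Array.Empty<string>();

    // Tri-state: true = committed only, false = uncommitted only, null = both.
    public bool? Committed { get; set; }

    // Whether the criteria above combine with AND or with OR.
    public bool MatchAllCriteria { get; set; } = true;
}

Whether this lives as a form, a checkbox grid, or a JSON template matters less than forcing the requester to answer each question explicitly.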
So this request could be interpreted as: * All customers who made a purchase in the last 24 months, regardless of status * All customers who have an \"active\" or \"non-contactable\" status regardless of last purchase. * One of the above and then \"either\" having a committed flag, or \"and\" having a committed flag. I won't labour the point - you see the problem. And it's worth adding that the possible range of parameters is quite large. Normally I have to talk to the requestor and go through a rigmarole of explaining the problem and trying to clarify the requirements. This situation can't be that uncommon. Is there a more effective way of capturing these details, with a checkbox-style grid or other visual aid? Does anyone have any examples or experience of useful solutions?"} {"_id": "127015", "title": "Is the \"exposer (hack) pattern\" a newly identified pattern or does it have another name?", "text": "**EDITED FOR CLARIFICATION** In the past, I have seen all manner of resolutions and fudges. Some really stand out. One particular resolution that I initially thought of as a fudge possibly deserves a category of its own. It could be considered a hack or a pattern, as a design pattern is just a piece of reusable code. So, it still conforms to the definition: http://en.wikipedia.org/wiki/Software_design_pattern It possibly already has a name but I have not seen it in the literature. I was wondering if anyone knows another name for what I labelled the \"exposer (hack) pattern\". I have included an example of it below. This code is used to expose data that is not available to a data source. Instead, the results are not generated until a data bind method is called. Unfortunately, these results are not stored anywhere in the data source after execution. This is due to an oversight (or bad design) in the third party API. So, the results are passed to a repeater to store locally, before returning these out as a populated PageDataCollection: public static PageDataCollection Search(this SearchDataSource searchDataSource) { PageDataCollection results = new PageDataCollection(); //This is a dummy repeater whose only purpose is to provide a way of //iterating over the search results Repeater dummy = new Repeater(); dummy.ItemDataBound += delegate(Object sender, RepeaterItemEventArgs e) { if (e != null && e.Item != null && e.Item.DataItem != null) { PageData pageData = e.Item.DataItem as PageData; if (pageData != null) { results.Add(pageData); } } }; dummy.DataSource = searchDataSource; dummy.DataBind(); return results; }"} {"_id": "245222", "title": "Taxonomy / Classification of real world objects. Development suggestions", "text": "I have to develop a taxonomy/classification program based on classification algorithms and I would like to have some suggestions. The aim is to help the one using the program to categorize an object (could be a plant, a coin, an animal,...) and to teach what to look for in every step. The program is likely to be coded in PHP. The algorithm also should be module based (multiple types of classification can exist at a determinate level and types of classification should be interchangeable). Example: ![Example of classification](http://i.stack.imgur.com/N8rrC.jpg) Should the program/algo be object oriented? Can it be imperative (if/else, case)?"} {"_id": "250429", "title": "Solution for multiple versions of one Software", "text": "I'm sorry if the title is a little hard to get.
I've never built an entire big piece of software myself, just some small apps in my free time; I was more familiar with the do-what-my-boss-told-me things. And our approach is a little amateur, and obsolete too (FoxPro 8.0 and MS SQL 2000). Now I've started to work at another company and have been assigned to build a piece of software in C#.NET, but the problem is I'm not sure about the program's architecture. My problem is, we're building one piece of software, but we have a variety of clients, and, for example, we have a CalculateCostOfProduct function, and it might be different for each client. Say for party A, the function is: function CalculateCostOfProduct() return 5; for party B: function CalculateCostOfProduct() return 10; ... And there should be extra modules for specific needs as well! So there will be customized versions of the original standard. How should I manage this? We can't build a different exe for each client! So please give me some ideas, or solutions that might solve this problem."} {"_id": "250422", "title": "MVC: view/sidebar.php can load model?", "text": "I have a Route that activates a Controller which returns to me a page through a View. Let's call it **master page**. > route -> controller -> view [master page] The master page is divided into header, sidebar, body and footer. And as the sidebar can be loaded into other pages, and not only the master page, it has its own View file. However, the sidebar should receive the data from the user, which would be obtained through a Model. And in theory, the Model could only be called by a Controller, which calls the View; however, this View is the master page, not the sidebar. > [...] -> view [master page] -> view [sidebar] So I thought the possibilities were as follows, and the idea is to know whether it is right or wrong, or perhaps there is another way that I could not imagine. 1. **The Controller will load the data from the Model and apply it to the sidebar which, in turn, will be applied to the master page.** The problem here is, in a \"deeper\" case, it would be extremely laborious and difficult to understand. 2. **The Controller runs the Model, but sends the data to the master page, which would be responsible for loading the sidebar View and passing the information from the Model to it.** In this case, the work would be passing information between the layers, as far as necessary to get the information where it is needed. 3. **The Controller will only load the master page View, which will load the sidebar, which will be responsible for executing the Model (inside the View).** The problem here is that a View theoretically should not run a Model, just print information already processed in the Controller. What would be the most appropriate way to make this process run correctly?"} {"_id": "26375", "title": "Separate servers vs local machine for builds, issue tracking etc on solo project", "text": "For solo projects, do you keep your build / management tools on your local machine, or on a separate server? If the server is not guaranteed to be safer or more reliable than my own machine I struggle to see the point, but maybe I'm missing some things. Note that I'm not debating the value of continuous integration or having a staging environment etc.,
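For the per-client customization question above, one common approach (a hedged sketch, not the only answer) is to ship a single executable and select client-specific behavior behind an interface at startup, for example via configuration. All type names here are hypothetical:

public interface ICostCalculator
{
    decimal CalculateCostOfProduct();
}

// One implementation per client; only the selected one runs at runtime.
public class PartyACostCalculator : ICostCalculator
{
    public decimal CalculateCostOfProduct() { return 5m; }
}

public class PartyBCostCalculator : ICostCalculator
{
    public decimal CalculateCostOfProduct() { return 10m; }
}

public static class CostCalculatorFactory
{
    // clientName would come from a config file or license key,
    // so the same exe ships to every client.
    public static ICostCalculator For(string clientName)
    {
        switch (clientName)
        {
            case "PartyA": return new PartyACostCalculator();
            case "PartyB": return new PartyBCostCalculator();
            default: throw new System.ArgumentException("Unknown client: " + clientName);
        }
    }
}

The same idea extends to the optional extra modules: keep them behind interfaces and load only the ones a client's configuration names.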
just the question of whether it exists on separate hardware."} {"_id": "215105", "title": "Class design for calling \"the same method\" on different classes from one place", "text": "Let me introduce my situation: I have a Java EE application and in one package, I want to have classes which will act primarily as a cache for some data from the database, for example: * a class that will hold all articles for our website * a class that will hold all categories * etc. Every class should have some update() method, which will update the data for that class from the database, and also some other methods for data manipulation specific to that data type. Now, I would like to call the update() method for all class instances (there will be exactly one instance of every class) from one place. What is the best design?"} {"_id": "51586", "title": "Do employers hiring for software jobs care about the classes you took in a Computer Science Masters program?", "text": "I'm torn between two classes right now for next semester (Software Design and Advanced Computer Graphics). I would enjoy Advanced Computer Graphics more, but I feel the software design class would help me when approaching anything I ever build for the rest of my career. I feel, though, that I could just buy the book (I already have both books actually) of the Software Design class and go through it, if I wanted. But I think it would be a bit tougher to pick up the Advanced Computer Graphics class on my own. So do employers look at the graduate classes you've taken to decide if you would be a good fit or not? I think, more importantly, what I want to know is: if I wanted to work for a high-end software company like Apple or Google, would a company like that be more impressed by someone who took software engineering classes or hardcore CS classes?"} {"_id": "197852", "title": "Best place to write SQL queries", "text": "I've been working on this project for my company. Currently I am embedding my SQL statements inside the program itself as and when they are needed. I do have two separate classes - 1. QueryBuilder Class (Contains all the queries) 2. DbConnection Class (Executes the queries against the database, takes care of connectivity and such) What this does is, whenever I need to make the smallest adjustment in the query, I have to rebuild the entire application. I'm hoping there are better ways to deal with this. Somewhere I can store these queries, get them, pass the parameters **as command parameters**, then execute them and, if I have to, change them without having to rebuild my application. There are a couple of ideas in my head. 1. Resource files. 2. Separate DLL/Class files - The disadvantage is that I'll have to rebuild that class if I make a change to that code. But the separation does have its advantages. 3. Text files - may get unmanageable when queries increase. I'd like to know if there is a better way to deal with this issue and if others have any way of circumventing this."} {"_id": "197850", "title": "ViewController in programming", "text": "ViewController is a term for classes that handle views in a framework. This is especially used in MVC frameworks. I go through various projects, written by various programmers, who implement MVC in different ways. In particular, I get confused about the relation between the MainView (parent view) and some CustomView (widget etc.) in the framework. **I personally pass a reference to the MainView into the ViewController to be instantiated. All the subviews of the ViewController are added to that reference of the MainView.
Additionally, the ViewController itself is added as a child of the MainView.** Like this: ![main-view and view-controller relation](http://i.stack.imgur.com/VkkVz.jpg) I want to know if this is the right way to relate them to each other."} {"_id": "165771", "title": "Where to place the R code for R+Sweave+LaTeX workflow", "text": "I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question that came to my mind when working through my first project: Where do I place the majority of the R code? The tutorials that I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code in the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then when running Sweave, _I load up that .Rout file_, so that I have the majority of my calculations already completed and in the Sweave R session. Then my LaTeX callouts to R are quite simple: Just give me the XTable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file possible, to achieve the desired result (embedding stats results in the LaTeX writeup). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow."} {"_id": "93250", "title": "Why don't testers and programmers like each other?", "text": "During my career as a programmer I've seen various programmers and testers, and many of them didn't/don't like each other. I mean, programmers think that the job of a tester is not a \"real\" job, and testers think that programmers are too \"proud\". Am I right about this? Why is it so, and what can we do to avoid these kinds of problems?"} {"_id": "190802", "title": "How to model hashtags with nodejs and mongodb", "text": "Existing architecture: nodejs server with mongodb backend. I have strings coming in describing images that can have #hashtags in them. I wish to extract the hashtags from the strings, store the hashtags and associate the image with that hashtag. So e.g. if an image is uploaded with 'having fun at #bandcamp #nyc', `#bandcamp` and `#nyc` are extracted. * If they don't exist as hashtags already, they're created and the image is associated with them both. * If they do exist, that's recognised and the image is associated with both. So it will be possible to build a mongo find query that gets all images for a hashtag or multiple hashtags. I'm new to nosql; I understand that in a relational database I'd have: * table hashtags * table images * table imageshashtags with a many to many relationship. An image can have many hash tags, and a hashtag can have many images. What sort of approach is suitable with mongo? From reading q&a like this: http://stackoverflow.com/questions/8455685/how-to-implement-post-tags-in-mongo I see that I can implement a sub document in the image document with the tags. Is that efficient for searching and retrieving? I could then use http://cookbook.mongodb.org/patterns/count_tags/ - map reduce?
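For the hashtag question above, the extraction side is the same regardless of the storage model. A small hedged C# sketch (a regex of this shape is the usual approach; the pattern may need tuning for Unicode tags):

using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class HashtagExtractor
{
    private static readonly Regex Tag = new Regex(@"#(\w+)");

    // "having fun at #bandcamp #nyc" -> ["bandcamp", "nyc"]
    public static List<string> Extract(string caption)
    {
        var tags = new List<string>();
        foreach (Match m in Tag.Matches(caption))
            tags.Add(m.Groups[1].Value.ToLowerInvariant()); // normalize so uniqueness checks work
        return tags;
    }
}

On the storage side, the embedded-array-plus-tags-collection plan in the question matches the usual document-database advice: keep the tag names denormalized on the image document for fast reads, and upsert into the tags collection (an update with upsert: true) so tag uniqueness comes for free.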
So I end up with: an images collection with a tags subdocument, and a tags collection * images document with a tags subdocument, with tags extracted and added to it when the image is created, and a new tag added to the tags collection if it's not already present (i.e. tags must be unique); also run map reduce. Is that sound? Am I understanding things correctly and is my approach sensible?"} {"_id": "165779", "title": "Teaching: How can you motivate students to comment?", "text": "I remember when I was taught, \"comments are the most important part of code.\" Or rather, when I was _told_ that comments are the most important part of the code. I don't think I was convinced, and I still see common cases where programmers are not convinced of the necessity of good & thorough comments. I am certainly convinced myself at this point - trying to read, in particular, complex formulae that call functions that call other functions that I don't understand - but I don't know how to convey this to students."} {"_id": "190808", "title": "How do I convince manager to move me from services/support team to development?", "text": "I joined a company a few months ago as a Java developer. I like coding and it makes me happy. After my Java training I was put into a services/support project. There is no coding in this team. When I asked my manager to move me into a development project or roll me off from this project, he questioned me: \"Why do you want to do coding?\" This is the point where I am speechless to justify why I want to do coding. How can I convince him to move me to a development project? I want to keep gaining knowledge in Java and experience in development."} {"_id": "55768", "title": "Charging for chrome extension?", "text": "I have a Chrome extension that I think is fairly useful. It's already been posted and is free. How can I start charging for it with the Chrome webstore? $.99 or some such."} {"_id": "215100", "title": "Is it possible to find a four-digit Username / Password by chance?", "text": "Imagine you have a login form and it has two fields: * Username (Maximum length: 4 - digits only) * Password (Maximum length: 4 - digits only) Is it possible to write a program to find the username and password? As I know, in the worst case we will have 100,000,000 checks, and it's crazy because it takes a very long time. So, what is your solution?"} {"_id": "220161", "title": "Algorithm for recursive evaluation of postfix expressions", "text": "I'm reading Sedgewick's book on algorithms in C and I'm looking for an algorithm to evaluate postfix expressions (addition and multiplication only) without using a stack. I tried to implement one but I always get nowhere. I have a simple C implementation for **prefix** evaluation int eval() { int x = 0; while (a[i] == ' ') i++; if (a[i] == '+') { i++; return eval() + eval(); } if (a[i] == '*') { i++; return eval() * eval(); } while ( (a[i] >= '0') && (a[i] <= '9')) x = 10*x + (a[i++]-'0'); return x; } and I was wondering if it's possible to do something as elegant as this but for postfix evaluation."} {"_id": "240752", "title": "How to store the file names, start offset and length while avoiding the issue of self imposed limits (lookup table) or having to scan the entire file?", "text": "I am attempting to learn more about C and its descendants (C++ mainly). I have decided that I would like to create a \"file system\" of sorts. Not a particularly advanced one mind you but something to play with.
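One well-worn layout for the index problem in the filesystem question above (it is how ZIP-style archives avoid both a fixed-size table and a full scan, offered here only as a sketch under those assumptions): append file data as it arrives, keep the index in memory, and on close write the index at the end of the file, followed by a fixed-size trailer that records where the index starts.

using System;
using System.IO;

// Hypothetical on-disk layout, not a real format:
//   [file 1 bytes][file 2 bytes]...[index entries][8-byte offset of index]
// Reading: seek to (length - 8), read the index offset, then read the entries.
public struct IndexEntry
{
    public string Name;   // e.g. "funnycat.jpg"
    public long Offset;   // where the blob starts in the archive
    public long Length;   // how many bytes belong to it
}

public static class ArchiveReader
{
    public static long ReadIndexOffset(Stream s)
    {
        // The last 8 bytes always point at the index, so the file count is
        // unbounded and no scan of the data region is ever needed.
        var buf = new byte[8];
        s.Seek(-8, SeekOrigin.End);
        s.Read(buf, 0, 8);
        return BitConverter.ToInt64(buf, 0);
    }
}

Appending a new file then means: overwrite the old index region with the new blob, re-append the enlarged index, and rewrite the trailer.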
I have no intentions of making it mountable, securable or even recoverable. At the moment I am stuck in concept land trying to decide how to implement the MFT/FAT. At first I thought that I would just use the first X bytes to store a lookup table; then I realized that there would be a limitation on the number of files I could store. I thought maybe I could use some type of metadata with each file, but then I would have to scan the entire filesystem to locate a file. I have read through this and this, although the z80 link seems like it is more up my alley. From a high level I want to be able to issue a command like: ./myfs funnycat.jpg mystorage.mfs Essentially appending binary data to the end of mystorage.mfs. How can I store the information that would contain the file names, start offset and length while avoiding the issue of self-imposed limits (lookup table length) or having to scan the entire file (metadata with binary data)? **Concise Explanation** I am looking for a way to label binary data stored in a single contiguous file so that I can pull data from a given offset range or by string. ./myfs mystorage.mfs funnycat.jpg Likely in order to accomplish this I will add some logic to myfs to check the first argument for signs that it is a blob containing other files or not."} {"_id": "51062", "title": "Constructor parameter validation in C# - Best practices", "text": "What is the best practice for constructor parameter validation? Suppose a simple bit of C#: public class MyClass { public MyClass(string text) { if (String.IsNullOrEmpty(text)) throw new ArgumentException(\"Text cannot be empty\"); // continue with normal construction } } Would it be acceptable to throw an exception? The alternative I encountered was pre-validation, before instantiating: public class CallingClass { public MyClass MakeMyClass(string text) { if (String.IsNullOrEmpty(text)) { MessageBox.Show(\"Text cannot be empty\"); return null; } else { return new MyClass(text); } } }"} {"_id": "56239", "title": "How many hours can you be really productive per day? How?", "text": "I find that I'm having a great deal of trouble staying alert 8 hours per day. I've heard of people who've negotiated work contracts of just 4 hours/day, arguing that they won't be able to do much more in eight hours. I am often overwhelmed with drowsiness, boredom, distraction. Some days, I seem to blaze through eight hours in a furious explosion of productivity; other days, I hardly get anything done at all. Most days, it's somewhere in between, and I feel bad for wasting a lot of time because I can't muster the concentration to be my best throughout much of the day. I'd like to hear your experiences (tell me I'm not alone!), and, if found, your solutions to this dilemma. Are you productive 8 hours/day almost every day? How?"} {"_id": "110310", "title": "When is it preferred to combine Add/Edit functionality, and when to keep them separate?", "text": "I regularly come across situations where I need to Add or Edit an item, and sometimes I use separate methods for Add and Edit, and other times I combine them into a single method. Is one method preferred over the other? If so, why? public void AddItem() { ShowEditingPopup(new Item(), \"Add Item\"); } public void EditItem(Item item) { ShowEditingPopup(item, \"Edit Item\"); } OR public void EditItem(Item item) { ShowEditingPopup( (item ?? new Item()), string.Format(\"{0} Item\", (item == null ?
\"Add \" : \"Edit \")) ); } where `ShowEditingPopup` is defined as public void ShowEditingPopup(object popupDataContext, string popupTitle) { PopupDataContext = popupDataContext; PopupTitle = popupTitle; IsPopupVisible = true; } **Edit:** Just to clarify, I am not Saving the item, I am opening it for Editing. I almost always implement a generic Save method for saving to the database **Edit #2:** Edited code samples so they more accurately reflect the sort of situation I am referring to"} {"_id": "243135", "title": "Writing a method to 'transform' an immutable object: how should I approach this?", "text": "(While this question has to do with a concrete coding dilemma, it's mostly about what's the best way to design a function.) I'm writing a method that should take two Color objects, and gradually transform the first Color into the second one, creating an animation. The method will be in a utility class. My problem is that Color is an immutable object. That means that I can't do `color.setRGB` or `color.setBlue` inside a loop in the method. What I _can_ do, is instantiate a new Color and return it from the method. But then I won't be able to _gradually_ change the color. So I thought of three possible solutions: * * * 1- The client code includes the method call inside a loop. For example: int duration = 1500; // duration of the animation in milliseconds int steps = 20; // how many 'cycles' the animation will take for(int i=0; i #include #include\"human.h\" #include\"computer.h\" #include\"referee.h\" #include\"RandomComputer.h\" #include\"Avalanche.h\" #include\"Bureaucrat.h\" #include\"Toolbox.h\" #include\"Crescendo.h\" #include\"PaperDoll.h\" #include\"FistfullODollors.h\" using namespace std; int main() { Avalanche pla1; Avalanche pla2; referee f; pla1.disp(); for (int i=0;i<5;i++) { cout< GetTopCustomersOfTheYear(int howManyCustomers, int whichYear) { // Some code here } List customers = GetTopCustomersOfTheYear(50, 2010); in PHP: public function getTopCustomersOfTheYear($howManyCustomers, $whichYear) { // Some code here } $customers = getTopCustomersOfTheYear(50, 2010); Is there any language out there which support this syntax: function GetTop(x)CustomersOfTheYear(y) { // Some code here } returnValue = GetTop(50)CustomersOfTheYear(2010); Isn't it more semantic, more readable form of writing a function? **Update:** The reason I'm asking this question is that, I'm writing an article about a new syntax for a new language. However, I thought that having such syntax for declaring methods could be nicer and more friendly to developers and would decrease learning-curve of the language, because of being more closer to natural language. I just wanted to know if this feature has already been contemplated upon or not."} {"_id": "235581", "title": "Search for the (slowly moving) number with penalty for too high test value", "text": "You task is to design a search algorithm that will find my secret number, which is between 0 and 2^64. If your test value is too low you can test again in 1 second. If your test value is too high you are not allowed to test again the next 300 seconds. Every second I will add 1 to or subtract 1 from the number. I will not tell you what I have done, and while it may be random it also might not be random. How can you find the approximate number fast? The obvious answer is to use a binary search, but since the penalty is high for guessing too high, I would think it needs some adjustment. **Background** I need to find the speed of a system. The speed changes slowly. 
It takes 1 second to do a guess. If my guess is too low: no problem. If it is too high, then the system reboots, causing a 5-minute delay. My goal is to get a decent estimate of the speed relatively fast."} {"_id": "136227", "title": "Sprint Backlog Task Estimates - What Are the Hours Made Of?", "text": "For the sprint backlog, I read that the stories are broken into tasks, and estimated. Are these estimates typically development hours only, or development + QA? What do these hours typically consist of? Thanks."} {"_id": "250283", "title": "Should we avoid using design patterns in constantly changing projects?", "text": "A friend of mine is working for a small company on a project every developer would hate: he's pressured to release as quickly as possible, he's the only one who seems to care about technical debt, the customer has no technical background, etc. He told me a story which made me think about the appropriateness of design patterns in projects like this one. Here's the story. > We had to display products at different places on the website. For example, > content managers could view the products, but also the end users or the > partners through the API. > > Sometimes, information was missing from the products: for example, a bunch > of them didn't have any price when the product was just created, but the > price wasn't specified yet. Some didn't have a description (the description > being a complex object with modification histories, localized content, > etc.). Some were lacking shipment information. > > Inspired by my recent readings about design patterns, I thought this was an > excellent opportunity to use the magical Null Object pattern. So I did it, > and everything was smooth and clean. One just had to call > `product.Price.ToString(\"c\")` to display the price, or > `product.Description.Current` to show the description; no conditional stuff > required. Until, one day, the stakeholder asked to display it differently in > the API, by having a `null` in JSON. And also differently for content > managers by showing \"Price unspecified [Change]\". And I had to murder my > beloved Null Object pattern, because there was no need for it any longer. > > In the same way, I had to remove a few abstract factories and a few > builders, I ended up replacing my beautiful Facade pattern by direct and > ugly calls, because the underlying interfaces changed twice per day for > three months, and even the Singleton left me when the requirements said that > the concerned object had to be different depending on the context. > > More than three weeks of work consisted of adding design patterns, then > tearing them apart one month later, and my code finally became spaghetti > enough to be impossible to maintain by anyone, including myself. Wouldn't it > be better to never use those patterns in the first place? Indeed, I had to work myself on those types of projects where the requirements are changing constantly, and are _dictated_ by persons who don't really have in mind the cohesion or the coherence of the product. In this context, it doesn't matter how agile you are: you'll come up with an elegant solution to a problem, and when you finally implement it, you learn that the requirements changed so drastically that your elegant solution doesn't fit any longer. What would be the solution in this case? * Not using any design patterns, stop thinking, and write code directly?
It would be interesting to do an experiment where one team writes code directly, while another thinks twice before typing, taking the risk of having to throw away the original design a few days later: who knows, maybe both teams would have the same technical debt. In the absence of such data, I would only assert that it doesn't _feel right_ to type code without prior thinking when working on a 20 man-month project. * Keep the design pattern which doesn't make sense any longer, and try to add more patterns for the newly created situation? This doesn't seem right either. Patterns are used to simplify the understanding of the code; put in too many patterns, and the code will become a mess. * Start thinking of a new design which encompasses the new requirements, then slowly refactor the old design into the new one? As a theoretician and someone who favors Agile, I'm totally into it. In practice, when you know that you'll have to get back to the whiteboard every week and _redo a large part of the previous design_ and that the customer just doesn't have enough funds to pay you for that, nor enough time to wait, this probably won't work. So, any suggestions?"} {"_id": "250281", "title": "Are Components ideal for packaging and reusing find() requests?", "text": "I find myself loading several models over and over again throughout my application. Instead of typing the code to load the model and run a find each time, would it be reasonable to put that code in a method and just call it from the controller? For example: **Instead of** public function view() { $this->loadModel('Foo'); $this->Foo->find('list'); } **Could I use** public function view() { $this->ComponentName->fooLoad(); }"} {"_id": "250280", "title": "How do you deal with being behind on a task?", "text": "I have a question for you guys. Say, you're given a task as an item in a product backlog. Say, you have a deliverable that's due the following day. If you're behind on the task compared to when you made your estimates, what do you do about it? Does it make you anxious knowing you're behind schedule? How do you deal with this? Do you tell the scrum master or the product owner?"} {"_id": "25116", "title": "Which Java based web ui framework to use?", "text": "* Wicket * Click * GWT * Vaadin As I understand them, these frameworks all enable gui components to be created using java (with all its benefits) without having to do lots of html/javascript. As well as considering the technical factors, I am also interested to hear if any are gaining popularity fast. If a particular framework is becoming the _leader of the pack_, this will also affect the decision."} {"_id": "210558", "title": "How do programming languages define functions?", "text": "How do programming languages define and save functions/methods? I am creating an interpreted programming language in Ruby, and I am trying to figure out how to implement function declaration. My first idea is to save the content of the declaration in a map. For example, if I did something like def a() { callSomething(); x += 5; } Then I would add an entry into my map: { 'a' => 'callSomething(); x += 5;' } The problem with this is that it would become recursive, because I would have to call my parse method on the string, which would then call parse again when it encountered callSomething, and then I would run out of stack space eventually.
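On the interpreter question above: the usual answer is to parse the source once into a tree and store the function's parsed body in the symbol table, not its raw text, so that calling a function never re-invokes the parser. A minimal hedged sketch (in C# rather than the asker's Ruby; all type names are invented):

using System.Collections.Generic;

// A toy AST: statements know how to execute themselves.
public abstract class Stmt { public abstract void Execute(Interpreter interp); }

public class FunctionDef
{
    public List<Stmt> Body = new List<Stmt>();  // parsed once, stored as a tree
}

public class Interpreter
{
    private readonly Dictionary<string, FunctionDef> functions =
        new Dictionary<string, FunctionDef>();

    public void Define(string name, FunctionDef fn) { functions[name] = fn; }

    // Calling walks the already-parsed tree; no string parsing,
    // so there is no parser recursion to blow the stack.
    public void Call(string name)
    {
        foreach (var stmt in functions[name].Body)
            stmt.Execute(this);
    }
}

Recursion at call time is then bounded only by the program's own call depth, exactly as in any real interpreter.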
So, how do interpreted languages handle this?"} {"_id": "251778", "title": "I recently read about unit of work and repository design patterns; working with EF wondering if this is a good design pattern", "text": "I found myself creating child records that require a master record to exist first, so that they can reference it by the master record's primary key (if that's the right term). To accomplish this using EF 6 I tried to call `SaveChanges()` twice - once to create the master record so that its identity key gets generated and once after the children have been created. The problem I ran into was that EF doesn't like multiple calls to `SaveChanges()`, so I had to use a transaction. I don't like using a transaction because it feels messy for some reason I can't articulate. It's a somewhat common problem for me to have to do this, so instead of a `using`, `try/catch` and `commit/rollback` I figured it'd be easier to have this each time: this._dbContextWrapper.CommitIfTrue(() => { // Multiple calls to SaveChanges() here return true; }); While also having `BeginTransaction()` exposed on my `_dbContextWrapper`, so that it can be used if passing a `Func<bool>` isn't desirable. public interface IRepository { IQueryable<TModel> Query<TModel>() where TModel : class; TModel Find<TModel>(params object[] key) where TModel : class; void Add<TModel>(TModel model) where TModel : class; void Update<TModel>(TModel updated, params object[] key) where TModel : class; IDataResult SaveChanges(); void Delete<TModel>(params object[] key) where TModel : class; IRepositoryTransaction BeginTransaction(); IRepositoryTransaction BeginTransaction(IsolationLevel isolationLevel); IDataResult CommitIfTrue(Func<bool> transaction); IDataResult CommitIfTrue(Func<bool> transaction, IsolationLevel isolationLevel); } public interface IRepositoryTransaction : IDisposable { void Commit(); void Rollback(); } public interface IDataResult { bool IsSuccess { get; } string ErrorMessage { get; } } I realize that this \"`IRepository`\" is only a thin wrapper around `DbContext` - I'm doing it so that my classes can have a `Mock` implementation provided to them instead of having to mess with `Fakes` to get around the non-virtual aspect of `DbContext`'s methods. My question is: **Is my `CommitIfTrue` method reasonable? Is there a way to do something similar that's also more testable/involves no `Action`s or `Func`s?**"} {"_id": "117092", "title": "What's The Difference Between Imperative, Procedural and Structured Programming?", "text": "By researching around (books, Wikipedia, similar questions on SE, etc) I came to understand that Imperative programming is one of the major programming paradigms, where you describe a series of commands (or statements) for the computer to execute (so you pretty much order it to take specific actions, hence the name \"imperative\"). So far so good. Procedural programming, on the other hand, is a specific type (or subset) of Imperative programming, where you use procedures (i.e., functions) to describe the commands the computer should perform. **First question**: Is there an Imperative programming language which is not procedural? In other words, can you have Imperative programming without procedures? **Update**: This first question seems to be answered. A language CAN be imperative without being procedural or structured. An example is pure Assembly language. Then you also have Structured programming, which seems to be another type (or subset) of Imperative programming, which emerged to remove the reliance on the GOTO statement.
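To make that last point concrete, a tiny hedged illustration (C# still permits goto, which makes the contrast easy to show): the unstructured version stitches the loop together with jumps, while the structured one uses a single-entry, single-exit construct.

public static class LoopStyles
{
    // Unstructured: control flow expressed with goto and labels.
    public static int SumTo(int n)
    {
        int i = 0, sum = 0;
    top:
        if (i > n) goto done;
        sum += i;
        i++;
        goto top;
    done:
        return sum;
    }

    // Structured: the same computation with a while loop.
    public static int SumToStructured(int n)
    {
        int i = 0, sum = 0;
        while (i <= n) { sum += i; i++; }
        return sum;
    }
}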
**Second question**: What is the difference between procedural and structured programming? Can you have one without the other, and vice-versa? Can we say procedural programming is a subset of structured programming, as in the image? ![enter image description here](http://i.stack.imgur.com/Ip5eR.jpg)"} {"_id": "85974", "title": "Getting paid through Ltd or Umbrella company?", "text": "I am working for a company as a web dev consultant at the moment, and they asked me whether I want to get paid through the Umbrella company or through my Ltd. Which is better for me and why? @David Thornley made a good point in the comments. Don't forget that we are talking about web development here. I am not sure how it is in the UK, but in the country I am from, you get taxed differently for the stuff you do."} {"_id": "117098", "title": "Lean/Kanban *Inside* Software (i.e. WIP-Limits, Reducing Queues and Pull as Programming Techniques)", "text": "Thinking about Kanban, I realized that the queuing-theory behind the SW- development-methodology obviously also applies to concurrent software. Now I'm looking for whether this kind of thinking is explicitly applied in some area. A simple example: We usually want to limit the number of threads to avoid cache-thrashing (WIP-Limits). In the paper about the disruptor pattern[1], one statement that I found interesting was that producers/consumers are rarely balanced, so when using queues, either consumers wait (queues are empty), or producers produce more than is consumed, resulting in either a full capacity-constrained queue or an unconstrained one blowing up and eating away memory. Both, in lean-speak, are waste, and increase lead time. Does anybody have examples of WIP-Limits, reducing/eliminating queues, pull or single piece flow being applied in programming? http://disruptor.googlecode.com/files/Disruptor-1.0.pdf"} {"_id": "193575", "title": "Performance overhead of standard containers and boost", "text": "Adap.TV has chosen C++ to develop their software. However, they've decided not to use the standard containers1 and boost for performance reasons, as they've blogged about it in the following article: * Why we use C++ (without using STL or boost) It says (emphasis mine), > There are several rules that we are obeying **in order to keep the > performance high** ; > **Avoid malloc(), calloc() or new** > **No free() or delete** (and no need for tcmalloc) > **No STL, boost etc.** > Avoid locking as much as possible > # threads = # CPU cores (hyperthread is a trade-off between latency and > throughput) As we know, the standard containers use allocators which use `new` and `delete` internally, which are _expensive_ operations. So to avoid them, Adap.TV has avoided using standard containers altogether. Instead of using `new` and `delete` (repeatedly), they _reuse_ memory (which implies they use a memory pool). I'm wondering what's stopping them from using custom allocators for the standard containers! The custom allocator could use a memory pool internally, which means _memory reuse_. So I don't see how standard containers would hurt performance. Or am I missing something? Can't we avoid using `new` and `delete` with standard containers? Is there any other reason why anyone would avoid using standard containers? Or is it simply a lack of knowledge on their part which led to this decision? And how about boost?
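As a hedged illustration of the "reuse instead of allocate" rule discussed above (sketched in C# for consistency with the other examples here; in C++ the same idea is what a pool-backed custom allocator would give the standard containers):

using System.Collections.Generic;

// A trivial object pool: allocation happens once up front,
// and steady-state operation only recycles existing objects.
public class Pool<T> where T : new()
{
    private readonly Stack<T> free = new Stack<T>();

    public Pool(int size)
    {
        for (int i = 0; i < size; i++) free.Push(new T());
    }

    // Take an object out; only allocates if the pool ran dry.
    public T Rent() { return free.Count > 0 ? free.Pop() : new T(); }

    // Put it back for the next caller instead of letting it be collected.
    public void Return(T item) { free.Push(item); }
}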
* * * 1) I suppose by _STL_ they meant _C++ standard containers_, not SGI's STL."} {"_id": "193574", "title": "Is it possible to integrate MS Project Server with SVN", "text": "We have been using Hosted SVN + Fogbugz for our source control and task/issue tracking. Our developers are very comfortable with SVN and we are hesitant to switch source control (e.g. TFS) providers at this point. However, Management extensively uses Microsoft Project for project management and is considering setting up MS Project Server. **EDIT** With SVN + Fogbugz (or several other task tracking tools), we can create a post-commit hook to update tasks (close/reopen/specify time) from commit messages. Does anyone know of a way to integrate MS Project Server with SVN? Is it even possible? I searched online but could not find anything. Maybe I did not search correctly?"} {"_id": "193577", "title": "Possible to Comment on a storyboard?", "text": "I recently started working for a company that has me making mockups for an app they'd like to see converted to an iPhone app. I have not been informed as to how all the boards fit together, but I do have a general idea of what they should look like. I would like to write notes or comments on each individual board about how things should be done once I have more information. Is it possible to comment on a board, or is there some place close to the boards where I can leave notes on what to do next?"} {"_id": "82530", "title": "How can I estimate the lifespan of a line of code?", "text": "I'm trying to figure out a way to analyze code longevity in open source projects: that is, how long a specific line of code is active and in use. My current thinking is that a line of code's lifespan begins when it is first committed, and ends when one of the following occurs: * It's edited or deleted, * Excluded from builds, * No code within its build is maintained for some period of time (say, a year). NOTE: As clarification on why an \"edit\" is being counted as \"death\", edited lines would be counted as a \"new\" generation, or line of code. Also, unless there's an easy way to do this, there would be no accounting for the longevity of a lineage, or descent from an ancestor. What else would determine a line of code's lifespan?"} {"_id": "161290", "title": "How to use the clients webcam for recording through a website?", "text": "I am building a site and ideally I would like to record the user's interaction via their webcam. How do I integrate webcam recording into my site? I have seen it done with Flash, but I'm trying to avoid using this. My site is being built using C# and JavaScript, and designed for IE 7 and older. * * * I want to record the videos for later. They are not going to be used for chat etc."} {"_id": "161293", "title": "Choosing between Single or multiple projects in a git repository?", "text": "In a **`git`** environment, where we have modularized most projects, we're facing the _one project per repository_ or _multiple projects per repository_ design issue. Let's consider a modularized project: myProject/ +-- gui +-- core +-- api +-- implA +-- implB Today we have _one project per repository_. It gives freedom to * `release` individual components * `tag` individual components But it's also cumbersome to **`branch`** components, as often branching `api` requires equivalent branches in `core`, and perhaps other components. Given we want to `release` individual components, can we still get similar flexibility by utilizing a _multiple projects per repository_ design?
What experiences are there and how/why did you address these issues?"} {"_id": "98003", "title": "Best way to present a small application", "text": "I'm giving a presentation on a small Timeline Application I made over the past few weeks. How might I make this small application into a much larger speech, about 3 minutes long? **Should I focus on how it was made or should I just focus on why people should use it?**"} {"_id": "253880", "title": "Finding a way to simplify complex queries on legacy application", "text": "I am working with an existing application built on Rails 3.1/MySql with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this: Project has_many Nodes has_many GlobalConditions Node has_one Parent has_many Nodes has_many WeightingFactors through NodeFactors has_many Tags through NodeTags GlobalCondition has_many Nodes ( referenced by Id, rather than replicating tree ) WeightingFactor has_many Nodes through NodeFactors Tag has_many Nodes through NodeTags ![enter image description here](http://i.stack.imgur.com/4L3Rn.png) The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: What can I do to retrieve and construct this data faster? Having worked a lot with .Net, if I were in a similar situation there, I would look at building up a Stored Procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user perception of performance. However if sections of the data appeared after page load that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day to day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in."} {"_id": "205717", "title": "Is there a Standard Visual Cue to Indicate the presence of a Tool Tip (Hover Text)?", "text": "A widely recognized cue that text is clickable is to underline it, set it in a different color, and change the cursor to a hand symbol when hovering over it. I have a situation where a column in a report may contain a tool tip -- hover- text that appears when the cursor is placed over it. I'm trying to come up with an obvious cue to the user for this. When I use a cue similar to a hyperlink, users instinctively click on it and wonder why nothing happens.
Can you refer me to any real-world examples of an effective cue? Or, what cue have you successfully used in the past to accomplish this?"} {"_id": "82539", "title": "Why learn hexadecimal?", "text": "I've taken quite a few intro programming classes in my day, mostly just to get my feet wet in every different kind of programming I find. Not surprisingly, just about every class runs through the same format: intro to hardware, intro to software, and then you get into the actual programming. While understanding how the hardware and software works is very important, I've always been confused by one topic that has been in every single course. In the intro to software section I've found, without fail, they always put a large emphasis on being literate in binary, hexadecimal, and sometimes even octal number systems. I understand that it's good to understand what these things are, and how a computer would interpret them, but I've never found myself actually needing to know how to read and write any of those number systems. Really, the only time I've seen something other than base 10 is for colors in CSS, which is even easier if you use something like www.colorpicker.com. Have I just been ignorant of the wonderful uses of these non-base-10 number systems in the programming world, or is it just an old tradition to include these sections in all programming textbooks? Does anyone have a good example of where the average programmer would actually use an octal number?"} {"_id": "253883", "title": "Cross browser client side storage", "text": "I am developing an AngularJS app. The app has to run in current FF, IE, and Chrome, and on iOS/Android via PhoneGap. I am looking for a solution to store data on the client. PhoneGap offers a Web SQL API, which is also supported by Chrome. FF, however, does not support it, since it has been abandoned by the W3C. Cookies and localStorage do not work reliably on iOS. How can I store data in all those browsers?"} {"_id": "193578", "title": "VB.NET: Two tiers for three layers or three tiers for three layers", "text": "I asked a question on StackOverflow in November about separating a very large application into layers and tiers: http://stackoverflow.com/questions/13342626/net-divorcing-layers. The previous developer included data logic and business logic in the business logic layer. My question is about the tiers element. I researched on here and concluded that it is better to separate layers into tiers contained in separate DLLs, i.e. the presentation layer, business logic layer and data access layer all have separate DLLs, as described here: http://stackoverflow.com/questions/13342626/net-divorcing-layers. This seems to be consistent with what I learnt at university. However, since then all the examples I am finding online suggest having two tiers (one for the presentation layer and one tier for the BLL and DAL). Are there any specific criteria that developers use to decide whether or not to use three tiers? I am using ADO.NET and have a shared SQL Helper class in the data access layer. The SQLHelper class is similar to this, but for VB.NET: http://www.sharpdeveloper.net/source/SqlHelper-Source-Code-cs.html"} {"_id": "167197", "title": "The Meaning of Unified in UML", "text": "UML and other related modelling languages exist in most systems engineering fields to represent systems, flows, and relations in a structured way.
UML is also one of the modelling languages used in computer science, as in other industries, to represent systems in an object-oriented way using different types of diagrams. Does the 'Unified' in UML have a special (or real) meaning here?"} {"_id": "253748", "title": "SmartHeap crashes in _shi_removeFromFreeList", "text": "We have a multithreaded application in C++ which uses SmartHeap-10 on Linux. new, new[], delete and delete[] are overloaded. There is an _inconsistent_ occurrence of SIGSEGV, only in delete[]. The backtrace shows:
#0 0x08b068bc in _shi_removeFromFreeList ()
#1 0x08b070d7 in _shi_freeVar ()
#2 0x08b086a7 in MemFreePtr ()
#3 0x081e91ea in operator delete(void*) ()
where MemFreePtr() is a library function from SmartHeap. Any pointers to think of?"} {"_id": "188989", "title": "Explanation of Object-parameter-coupling as mentioned in Code Complete book", "text": "I have been reading up on the seminal and excellent book _Code Complete_. It discusses the various kinds of coupling that can happen between modules (which may be classes as well as methods): 1. _Simple-data-parameter-coupling_ 2. _Simple-object-coupling_ 3. _Object-parameter-coupling_ 4. _Semantic coupling_ The book has this to say about _object-parameter coupling_: > Two modules are object-parameter coupled to each other if `Object1` requires `Object2` to pass it an `Object3`. This kind of coupling is tighter than `Object1` requiring `Object2` to pass it only primitive data types because it requires `Object2` to know about `Object3`. What is the author trying to say here?"} {"_id": "191048", "title": "How to count hits in an HTTP API without bogging down the DB", "text": "I'm building an API and want to count hits for each user. It's an HTTP API implemented in Python. I could keep the count in a database (using PostgreSQL), but it'll be a very busy API, so I don't want the overhead in the DB. I'm looking at Redis or just writing plain text logs. But is there a better approach? It's a map tile service, so bursts of hundreds of hits per second. I want to report usage per month to the user--not keep a total count of hits."} {"_id": "194035", "title": "About Artificial Intelligence", "text": "I am interested in starting a career in artificial intelligence. Can anyone suggest how I could prepare for this? What languages should I study that would be best for this career choice?"} {"_id": "191045", "title": "Few big libraries or many small libraries?", "text": "Over the course of some months I've created a little framework for game development that I currently include in all of my projects. The framework depends on SFML, LUA, JSONcpp, and other libraries. It deals with audio, graphics, networking, threading; it has some useful file system utilities and LUA wrapping capabilities. Also, it has many useful "random" utility methods, such as string parsing helpers and math utils. Most of my projects use all of these features, but not all of them: * I have an automatic updater that only makes use of the file system and networking features * I have a game with no networking capabilities * I have a project that doesn't need JSONcpp * I have a project that only needs those string/math utils This means that I have to include the SFML/LUA/JSON shared libraries in every project, even if they aren't used. The projects (uncompressed) are a minimum of 10MB in size this way, most of which is unused.
The alternative would be splitting the framework into many smaller libraries, which I think would be much more effective and elegant, but would also have the cost of having to maintain more DLL files and projects. I would have to split my framework into a lot of smaller libraries: * Graphics * Threading * Networking * File system * Smaller utils * JSONcpp utils * LUA utils Is this the best solution?"} {"_id": "188987", "title": "Good design for delegates in a service oriented architecture", "text": "My problem is quite complex to explain and my English is not excellent, so I hope you can understand my question. In a service oriented architecture there are some modules that own data used by all the other applications, and these modules expose the data via Remote Method Invocation and Web Services. We, the developers of the module, have seen that the code that invokes these modules is repeated in all the other modules, so we decided to share the common code and created a new module named _Common Delegates_. The responsibilities of this new module are: * keep information about the hostname, port and JNDI and/or web service names; * instantiate and use the service locator; * instantiate and call the stubs to the remote modules. But the methods exposed by the _Common Delegates_ module use the same Request and Response classes that are defined in the called modules. This means that this module does not act as a layer of decoupling. In some cases this module created problems of circular dependencies during Maven builds. Is it a good thing to split the _Common Delegates_ module into many different Maven artifacts to avoid circular dependencies, one per called module? For example, if I need to call module A via RMI, I will have to use the _Module A delegate_. Is it a good thing to make these delegates also act as a decoupling layer, meaning that they will expose their own Request and Response beans and transform them into the beans used by the called methods?"} {"_id": "135766", "title": "Static functions vs classes", "text": "Let's say that I want to build some utility functions to do some basic maths with `BigDecimal`s, for example I want to have a function that computes the average of a `List`. What is the best approach? A static function or a utility class?
public static BigDecimal computeAverage(List numbers)
or
public class BigDecimalUtil
    public BigDecimal computeAverage(List numbers)"} {"_id": "93928", "title": "Prerequisite math skill for Introduction to Algorithms (CLRS) book", "text": "I already have knowledge about basic algorithms. Now I plan to study more advanced algorithms, and I have decided to go with Introduction to Algorithms. I'm not sure: do I need to refresh my math skills before reading this book or not? (I have forgotten almost all the math I learned in high school and college.) If this book needs strong math knowledge, please suggest subjects that would help. **Update** **P.S.** I want to learn about the implementation, design and analysis of algorithms."} {"_id": "193291", "title": "Does any well-known license require to make modifications available when only derived *output* is published?", "text": "Is there a way to make sure that modifications to free software are released even when no binaries of the modified code are _conveyed_? It may sound odd, but from what I understand the GPL, e.g., requires distributing source only if a binary is _conveyed_ to other parties, thus making unlimited private use possible, as it does not fall under _propagation_.
I would like to make sure that if someone publishes a final finding in the scientific literature derived with the aid of modified code, then those modifications are made available. Does any well-known license have these provisions? If not, why would it be bad? From what I understand, if I add those provisions explicitly, it would limit the freedom, e.g., rendering the result non-GPLish. From one of the answers that disappeared, I guess I can add an _Additional Term_, but wouldn't it be a _further restriction_ that can simply be ignored? > If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. From what I understand, it looks like it depends on how I state it. If I require private modifications that were used to derive a published result to be made available, then it would be okay; otherwise it would be a _further restriction_ if I prohibited private use without making the modified code available. **Example** _A_ implements a super duper algorithm, but they are bad with BLAS and it takes a long time for the simulation. _B_ claims that they refined the code using CUDA or whatever and can easily get results within seconds. Based on their work they found this and that from simulations and got it published. And that is it. No GPL violation and no better code for the community. **Another example** Imagine if some animation studio took Blender and implemented an improved version of Cycles that allows them to render things way faster and with fewer problems. So now they have an advantage, as they can deliver animations to clients faster. No software conveyance, no GPL violation, no sharing of improvements."} {"_id": "179794", "title": "Labeling algorithm for points", "text": "I need an algorithm to place horizontal text labels for multiple series of points on the screen (basically I need to show timestamps and other information for a history of moving objects on a map; in general there are multiple data points per object). The text labels should appear close to their points--above, below, or on the right side--but should not overlap other points or text labels. Does anyone know an algorithm/heuristic for this?"} {"_id": "8090", "title": "Alternatives to time tracking methodologies", "text": "**Question first:** What are some feasible alternatives to time tracking for employees in a web/software development company, and why are they better options? **Explanation:** I work at a company where we work like this. Everybody is paid a salary. We have 3 types of work: Contract, Adhoc, and Internal (non-billable). Adhoc is just small changes that take a few hours; we just bill the client at the end of the month. Contracts are signed and we have this big long process, the usual. We figure out how much to charge by getting an estimation of the time involved (from the design and the developers), multiplying it by our hourly rate, and that's it. So say we estimate 50 hours for a website. We have time tracking software and have to record the time we spend on it in 15-minute increments (7:00 to 7:15, for example), the project name, and give it some comments. Now if we go over the 50 hours, we are both losing money and inefficient.
I'd be more than willing to work longer hours on a project to get it done in time, but I'm not much inclined to do so under the current system. I'd love to be able to sum up (or link to) this post for my manager, to show them why we should use system abc instead of this system."} {"_id": "97207", "title": "What does C++ do better than D?", "text": "I have recently been learning D and am starting to get some sort of familiarity with the language. I know what it offers, I don't yet know how to use everything, and I don't know much about D idioms and so on, but I am learning. I like D. It is a nice language, being, in some ways, a _huge_ update to C, and done nicely. None of the features seem that "bolted on", but actually quite well thought-out and well-designed. You will often hear that D is what C++ _should_ have been (I leave the question of whether or not that is true for everyone to decide for themselves, in order to avoid unnecessary flame wars). I have also heard from several C++ programmers that they enjoy D much more than C++. Myself, while I know C, I cannot say that I know C++. I would like to hear from someone knowing both C++ and D if they think there is something that C++ does better than D _as a language_ (meaning not the usual "it has more third-party libraries" or "there are more resources" or "more jobs requiring C++ than D exist"). D was designed by some very skilled C++ programmers (Walter Bright and Andrei Alexandrescu, with the help of the D community) to fix many of the issues that C++ had, but was there something that actually didn't get better after all? Something they missed? Something you think wasn't a better solution? Also, note that I am talking about D 2.0, not D 1.0."} {"_id": "196898", "title": "How can you prove an acyclic graph has n-1 edges?", "text": "I'm not so hot on the maths for this, but from what I understand... A graph g exists with v vertices and edges: g = (V,E). The spanning graph for this is an acyclic copy in which all the vertices are present and the edges are a subset of the graph's edges, with the condition that each connection is distinct. Apparently the MST should have n-1 edges. How can this be proven? **Sources:** http://youtu.be/zFbq8vOZ_0k?t=25m1s http://www.gtkesh.com/minimum-spanning-tree/"} {"_id": "196895", "title": "How to handle many arguments in an API wrapper?", "text": "I'm writing a PHP API wrapper for a third party API. I want to make all the methods consistent, but I'm not sure how to handle the number of arguments some API routes accept. One API request accepts up to 30 arguments. Obviously, it would be unwieldy to list each argument as a parameter on the method. Currently, I'm writing it to accept the required arguments as method parameters, while all the optional ones are accepted in a final "additionalOptions" array.
public function sampleApiMethod($reqVal1, $reqVal2, $additionalOptions) {
    //Method Code
}
Unfortunately, there are API requests that have only optional arguments. In this case, either the only parameter is the array of options, or the method has individual parameters for optional arguments. When only passing an array, the method is consistent with the other methods, but it's not the most intuitive. With the latter option, I lose the consistent structure.
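For what it's worth, the direction I keep coming back to is a single options array everywhere, merged over defaults, with the required keys checked by hand -- a rough sketch, method and key names hypothetical:
public function sampleApiMethod(array $options = array()) {
    $defaults = array('reqVal1' => null, 'reqVal2' => null);
    $options = array_merge($defaults, $options);
    if ($options['reqVal1'] === null) {
        throw new InvalidArgumentException('reqVal1 is required');
    }
    // ...build and send the API request from $options...
}
That keeps every method's signature identical, at the cost of moving the required/optional distinction out of the signature and into runtime checks.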
Is there any sort of best practice or structure for an API wrapper that I should follow to try to have a consistent developer usage experience?"} {"_id": "196890", "title": "Best strategy in SQL", "text": "Recently, a colleague told me that it wasn't advisable to put conditions in the **join clauses**. Instead, he suggested putting conditions in the **where clause**. He told me that the SQL engine was optimized that way. Here is a simple example to illustrate my question. In this case, I don't think it would make any difference. Which strategy is best, and why? Assume we have a parameter named `@user_id`. ## First strategy
SELECT role.name
FROM user_role
INNER JOIN user ON user_role.user_id = user.id AND user_role.user_id = @user_id
INNER JOIN role ON user_role.role_id = role.id
## Second strategy
SELECT role.name
FROM user_role
INNER JOIN user ON user_role.user_id = user.id
INNER JOIN role ON user_role.role_id = role.id
WHERE user.id = @user_id"} {"_id": "53351", "title": "Who should determine team size?", "text": "Developers, managers, or customers? I was recently involved in a situation where I felt like the customers were arbitrarily demanding more developers on a team which already had too many developers. They were scared the project was going to be late (and it probably will be). Personally, I was scared we were going to fulfill Brooks' Law. The group of programmers already lacked in-depth business knowledge, and some were even new to the technology (.NET), yet the customer wanted to add more developers who had even **less** business knowledge. The impression was that this would make the project get done quicker. I started wondering if the customer, who is extremely bright, but presumably knows little about IT project management, should really be the one determining team size."} {"_id": "195605", "title": "How can I ensure our SolidWorks files are covered by the GPLv3?", "text": "I am in charge of a project on GitHub that has files that are and are not code-based. Specifically, I am worried about our SolidWorks files. How do I ensure these files are covered by the GPLv3 license?"} {"_id": "199547", "title": "Handling ground-breaking changes in a production system - Insert intermediate level Management object", "text": "At our client's request, we are proceeding to change the base of our system. We already have the following structure: > A class has many students. (a simple, typical one-to-many) Now we must change it to: > A class has many groups. Each group has many students. (The intermediate object "Group" is inserted.) That seems to be a "simple" change, but it breaks all of the management interface, from the mobile client to the web views, and the database structure as well. Our system is a data-management program where we have about 50-60 domain objects which are part of a hierarchy model of around 5 levels. And the intermediate object is inserted at around the 2nd stage. My question includes 2 aspects: 1) How to estimate the effort needed. I'm making a new code branch, then inserting the new domain object, and after that breaking the class-student relation to insert the new "group" in. Then I look in the error report in Eclipse to locate the troubled code. However, it makes the whole project red, and I'm having trouble giving an accurate time estimate for the task. 2) How to prevent this situation from happening in the future. It happened once before, and at that time I spent a week writing a database conversion script, as well as making the changes.
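(For the current change, I imagine the conversion script will have roughly this shape -- MySQL-flavored, all table and column names hypothetical: create the intermediate table, give every class one default group, then repoint students at it.)
    CREATE TABLE student_groups (id INT AUTO_INCREMENT PRIMARY KEY, class_id INT NOT NULL);
    INSERT INTO student_groups (class_id) SELECT id FROM classes;
    ALTER TABLE students ADD COLUMN group_id INT;
    UPDATE students s JOIN student_groups g ON g.class_id = s.class_id SET s.group_id = g.id;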
But at that time the system was still small. Now it has grown big."} {"_id": "149491", "title": "Learning to program on punchcards", "text": "I'd like to try programming with punch cards once in my life. How can I do this? I'm in my 30s, and grew up entirely in the PC era, programming on computers with screens and keyboards. I want to experience the way my father and grandfather used to work. I imagine the hardware (and probably the cards themselves) are no longer manufactured. Are there any universities or museums with functioning punch card readers anymore? I'm in Boston, but I'm willing to travel to do this. I asked MetaFilter, and I got some mixed answers (along with a lot of "no, don't do this" nay-saying). I did get a pointer to the Retro-Computing Society Of Rhode Island, but I haven't received a response to my email to them yet."} {"_id": "187536", "title": "Addition vs multiplication on algorithm performance", "text": "I am studying algorithm optimization (Prof. Skiena's algorithm design book). One of the exercises asks us to optimize an algorithm: Suppose the following algorithm is used to evaluate the polynomial p(x) = a_n*x^n + a_(n-1)*x^(n-1) + ... + a_1*x + a_0
p := a0;
xpower := 1;
for i := 1 to n do
    xpower := x * xpower;
    p := p + ai * xpower
end
(Here a_n, a_(n-1), ... are the names given to distinct constants.) After giving this some thought, the only way I found to possibly improve that algorithm is as follows:
p := a0;
xpower := 1;
for i := 1 to n do
    xpower := x * xpower;
    for j := 1 to ai
        p := p + xpower
    next j
next i
end
With this solution I have converted the second multiplication to addition in a for loop. My questions: 1. Do you find any other way to optimize this algorithm? 2. _Is the alternative I have suggested better than the original?_ (Edit: As suggested in the comments, this question, though related to the problem, deserves a question of its own. Please ignore.)"} {"_id": "187531", "title": "Re-engineering an ASP.NET AJAX project as ASP MVC", "text": "I have been asked to investigate the possibility of re-engineering an existing ASP.NET AJAX Web Application under MVC. The project, as it stands ATM, is very heavily reliant on Telerik's ASP.NET AJAX controls. I've yet to have the opportunity to interview the person responsible for this request, so I'm unsure of the drivers for this, but I believe that as well as looking for a more structured solution (the original was developed in a very organic way), one of the goals is to _also_ get a lot of the code running client-side and to use a structured form for the development there also. So, my question is: is there a defined set of considerations for something like this? Or, if people do this kind of thing at all, do they just wing it?"} {"_id": "187532", "title": "Is it a good idea to include Installer Project within single solution?", "text": "I have a pretty large code base for the client product I am working on. I am using Visual Studio 2010 for my development and it's painfully slow. I can get rid of this by disabling the installer project within the solution file. * Is it a good idea to include setup projects within the same solution? * How can I increase the performance of Visual Studio when working with large projects (including an installer)? * How about a separate build configuration for the installer?
Will that help?"} {"_id": "107403", "title": "Licensing for a commercial Blackberry app", "text": "I want to develop an app for a website (let's say stackoverflow.com in this case). Will I violate the Blackberry terms of service if I list a (paid) app for a website which I do not own? What about the website I'm going to use?"} {"_id": "187538", "title": "Writing Clean, Elegant Procedural Code (BASIC): Is There Such a Thing?", "text": "I learned to code in OO languages. I've always been interested in design patterns, clean code, etc., etc. - you know the type. Now at work I'm using a BASIC dialect. Given modern programming values, should we try to carry over these same principles to new procedural code? I'm pondering the following issues, and I'm wondering if I'm on the right lines. * * * **Variable Names** Variables are not strongly typed (nightmare!), they're given short names and written in ALL CAPS (why?!) - basically I find them hard to read and they could be _anything_. Once upon a time, I'm sure `XCNT = 1` would have offered performance gains over `int_EXISTINGCUSTOMERCOUNT = 1`, but we're past that now - surely? I choose the verbose name here. **GOSUB** I want to break long blocks of code down into multiple smaller blocks. Internally, `GOSUB` is used (over a `FUNCTION`) if the helper is not re-usable by other programs/functions. Given its ability to add/modify variables without the safety of scoping (as we know it in the OO world), `GOSUB` scares me. This is typical:
GOSUB GET_BEST_CUSTOMER
IF RC = 0 THEN CRT CNAME
But I would write:
rc_GETBESTCUSTOMER = 1 ; !Default exception
str_CUSTOMERNAME = ""
GOSUB GET_BEST_CUSTOMER ; !set rc_GETBESTCUSTOMER, populate str_CUSTOMERNAME
IF(rc_GETBESTCUSTOMER = 0) THEN
    CRT str_CUSTOMERNAME
END
With the caveat that `GET_BEST_CUSTOMER` would only modify `rc_GETBESTCUSTOMER` and `str_CUSTOMERNAME` in 'global' scope. * * * There's more, but it's all along the same lines. Given the editor of choice (Notepad++), I'd say my coding style makes the code easier to read and understand - therefore easier to maintain. But I'm sure some BASIC die-hard would readily tell me I'm doing it all wrong."} {"_id": "187539", "title": "How do I alter my solution / project when the spec changes", "text": "I have a question about the best practice in this situation. At one point, my small application allowed the client to upload a file to a server, and download a file from the server (it would also compress/decompress). This was created in 1 solution which consisted of 4 projects: 1. FTP 2. CompressDecompress 3. UI 4. Tests Now, the spec has changed and there will be 2 end users: one who only wants to upload, the other who only wants to download, and they should never have access to anything else (i.e. downloading people cannot upload and vice versa). So, I have a few choices here. I could either 1. Keep it as 1 solution, and ask users to log in; based upon the credentials it will display a different UI 2. Alter my UI so it only shows tools to download, and create a new solution which consists of just a UI project and references my .dll accordingly. 3. Delete my UI, create 2 new solutions, each solution being created for either download or upload (and each solution probably only consisting of just 1 project, the UI) and again, referencing the .dll Does anyone have any suggestions?
Would any guidelines have allowed me to avoid getting into this situation in the first place (or at least made me more aware of the potential disasters)?"} {"_id": "112312", "title": "Bringing my genius brain back", "text": "Well... I was hired because I did not take a comp-sci or engineering course, however I was really good at coding. My job is doing the heterodox stuff when needed... You know, the guy that sometimes has to use a goto, or invent some bizarre technology. When I started my job, I blasted through stuff I needed to do, then I got slower and slower until I almost got fired. Now I am more or less stable, but I am noticing I am slowing down again. I plainly open my source code, look at it, and have no idea what I have to do; sometimes I do not even know what I was doing. I think this is related to my lifestyle. My work is a two hour commute (meaning I lose 4 hours daily), and I have not completed university yet (so I need to do some university work at home), and on weekends I spend time with my significant other. Does anyone know what to do in a situation like this? I am depressed that I cannot write code like I could when I started, or when someone would throw some oddball problem at me and I would spit out a solution instantly. Now I am having trouble typing even 10 lines of code in a day, because I spend the entire day trying to figure out what to do. In fact, I can't even procrastinate when I **want** to procrastinate: I open the browser, and I have no idea of what web page to open. It is really annoying :( I feel like my IQ dropped from its measured levels to something like 80..."} {"_id": "37001", "title": "How do you do ASP.Net performance testing?", "text": "Our team is in need of a performance testing process. We use ASP.Net (both web forms and MVC) and performance testing is not currently built into our projects. We occasionally do some ad-hoc analysis, such as checking the load on the server or SQL Server Profiler, but we don't have a true beginning-to-end performance testing methodology built into the project. Where is a good place to start? I'm interested in both: 1. Process - General knowledge, including best practices. 2. Essential list of tools. I'm aware of a few tools, such as what's built into the pricier versions of VS 2010 and JetBrains products, though I haven't used them."} {"_id": "102150", "title": "What is a domain of automatic testing tools such as ScalaCheck?", "text": "I have not seen many examples of testing with automatic tools, i.e. serializing/deserializing of JSON (which was paired in the following way: `val actual = deserialize(serialize(string))`), or checking that appending symbols to a string was done properly (and that's IMHO silly, because it's extremely hard to make a mistake in such plain operations). Can you provide really useful examples/use cases for automatic testing with ScalaCheck that will unveil its advantages? Is it meant to be used mostly in a paired style (straight/inverse functions like in the JSON example above)?"} {"_id": "113958", "title": "At a higher level description, how is DLMALLOC supposed to work?", "text": "There don't seem to be many good descriptions that go into the specifics of how dlmalloc works. The sources I have come across so far _mention_ dlmalloc, but then only go on to explain what malloc() and free() are, rather than describing dlmalloc. The Wikipedia description, on the other hand, was a bit hard for me to understand.
http://en.wikipedia.org/wiki/Malloc#dlmalloc_and_its_derivatives Can anyone explain the workings of dlmalloc, how to implement it, and point me to any additional sources that could help?"} {"_id": "49410", "title": "Is it worth becoming a programmer?", "text": "I'm a first year student in CS and I absolutely love programming. Many people have told me it isn't so good once you start working, due to bringing your work home (thinking about how to solve problems throughout the day), working many hours when the deadline is near, and so on. I've heard being a system administrator is a less stressful job, since you don't have to worry about it at home. So my questions are (for experienced programmers): * Is it worth becoming a programmer? * Does your job satisfy you enough to overcome these problems? Thanks in advance."} {"_id": "98916", "title": "Besides the IDE, libraries, and language, what are the main differences between iOS and Android development?", "text": "I'm coming from the iOS side. I'm particularly interested in knowing if there are similar hurdles on the Android side on these points: * developer fee -- do you have to pay $99 a year to build for Android? * provisioning devices - do you have to go through a complex provisioning/digital certificate/device-id routine to get the app you're developing onto a device for testing? * marketplace distribution - do you have to go through an approval process to get your app to users? * payment - is it easy to get paid for your app? What is Amazon's cut? Thanks. I know I could probably find these answers scattered on the web, but it would sure save me time to just hear from an experienced Android dev."} {"_id": "98918", "title": "How can I \"get in the know\"?", "text": "My company posted a job listing to get me a helper. A recruiter called me today and all he kept saying was "MVC this, Entity Framework that..." - He sounded shocked when I said the project uses DataSets and Linq2Sql over WinForms and ASP.NET WebForms. Then I was looking at options for automated website testing and I came upon this here: and I began to get agitated. > Most folks "in the know" are using presentation layers to make ASP.NET so thin that a tool like NUnitAsp isn't helpful. This person is in the know, and his friends are apparently in the know. I want to be in the know too, because being out of the know makes me feel insecure and a little sad. In my efforts this past year to get with the times, I realized great benefits from Linq2Sql and the Unity container. They both were nothing but good for me - filling gaps that have been apparent to me for ages. Then I moved on to Model-View-Presenter for WinForms GUIs and was again very happy with it for the same reason - I had been asking myself for a long time how to separate things out so that I could have a thick client and a web client share their common logic in a common code base. Yet, I am struggling with the following. And I know a zillion people can't be wrong and I'm not smarter than the masses, but I need help to see: * MVC as the evolution of WebForms * WPF as the evolution of WinForms * Entity Framework as the evolution of Linq2Sql (and, for that matter, the deprecation of DataSets) (I suspect it all stems from my, to date, lack of obtaining Test Fahrvergn\u00fcgen) Thus, I have been asking myself, and not hearing an answer to: * What do I gain using MVC in a web application? I know I gain additional source code artifacts and a new DSL to learn. What else? * What would happen if I used WPF objects without the MVVM pattern?
Would I be hurting my chances to get a job somewhere else? * For that matter, is WinForms really broken? Is it me, or does Visual Studio have noticeable visual lag on my dual core 2.8 GHz machine with 8 gigs of RAM? I like snappy. I want end users to experience snappy all the time without fail. * Why are DataSets "the old way"? They seem quick, efficient, and succinct for many small-to-medium-sized problems I have to solve (yet they are not even in Silverlight). I feel like a big pile of complexity is on the plate, and spreading it around won't make it go away. The intrinsic amount of complexity needs to be confronted head on, and maybe software engineering should become more like electrical engineering or mechanical engineering, or brain surgery."} {"_id": "211132", "title": "Level of detail of a user story", "text": "I am about to start user story sessions in my team. It's quite new for them and I am also wrestling with certain things. For the current project we have some well worked out wireframes. I read a lot about the way of writing user stories: what the template should be like, and about different aids like INVEST. The plan is to turn the wireframes into user stories. Let's say I have a screen where a user could edit an order. There is a lot of detail on that screen. Now, when creating a user story for this screen, will it suffice to say: > As an Admin I can edit a purchase order so that mistakes typed by the user can be corrected. Or should I specify each detail, like: > As an Admin I can resend an invoice to the customer, so he can get a copy of his lost one. > As an Admin I can review the customer order so he has detailed information about each purchase. > As an Admin I can remove items from an order so that, in case the customer made a mistake, I can remove items. * * * And how about the acceptance criteria? How should they be defined for such a user story? Where do I define which fields need to be shown on an order detail page? Can this be part of the acceptance criteria?"} {"_id": "211137", "title": "Why can static methods only use static data?", "text": "I don't understand why a static method can't use non-static data. Can anybody explain what the problems are and why we can't do it?"} {"_id": "197079", "title": "github tools - is there a way not to copyright your app and stay its author", "text": "I am wondering... Recently I have been observing GitHub; I saw many Java projects' code which doesn't have a copyright notice in the headers... I mean, as I may guess, in this case the only thing which can prove who the code's author is, is the author's GitHub account. So my question is... is it safe to protect open source code with a GitHub account only, or is there a better way to do so? Please share your experience; thanks."} {"_id": "108002", "title": "Mobile PC Remote", "text": "If I buy a Symbian (S60, Nokia E51) mobile, and I want to write a program to control the PC (launch programs, control mouse/keyboard): 1. Is it possible to make a mobile app and a Windows app which communicate via WiFi, or is it easier/faster via Bluetooth? 2. Which framework should I choose? (Symbian/Java) 3. (other info that I might need)"} {"_id": "234427", "title": "System Communication: Avoiding Including a Large \"HAS-A\" Hierarchy Which Isn't Used", "text": "The situation: **System A** Huge, complicated system. Uses an important Message object with many other Message objects attached, many of which have further Message objects attached. In total, this is about twenty different objects.
Due to awkward timing with releases and the introduction of code churn, System A cannot have its Message object hierarchy touched. **System B** Needs to use the same important Message object to communicate with System A to make use of one of its services. However, since the messages it will send off are going to be invariable, it only uses a very small portion of the Message object hierarchy. The Question: How could I allow System B to make use of the important Message object without (a) touching System A, or (b) including a hierarchy of nearly twenty objects, almost none of which are used at all? Is there a more advanced Design Pattern which could be put to use in this scenario?"} {"_id": "234428", "title": "how and should I 'unit test' an entire package?", "text": "I'm still learning to be good about doing _unit_ level testing, as I've always been a little sloppy about only doing functional testing in the past, so I want to double check I'm doing this 'right'. I have a package which is responsible for auto-magically updating a particular configuration which is expected to change regularly during runtime. You provide a list of objects to monitor to the constructor of my main package class. Then, at some point, you call update and it goes to all the places that define configuration, detects changes, updates the appropriate state for the objects it's monitoring, and provides a report of what was changed. This is all automatic and otherwise transparent to all other packages. My inclination is to test this as a unit, but it is a pretty 'big' unit: 5-6 classes, fetching from multiple files and a RESTful interface. It feels like testing any smaller units would take much longer, be less flexible to refactoring, and only provide a slightly higher chance of detecting a defect, but that could just be my laziness talking. Would it be considered 'better' to test at a lower level? Assuming it is 'right' to test the package as a unit, what is considered appropriate for mocking something like this? There are classes which are instantiated by my main class (i.e., not something I pass in directly, so I can't just pass my own mock) that have methods I want to control. I believe I can use PowerMock in the two cases I can think of (or could, after I research PowerMock some more), but I'm not sure if doing so is transparent enough. Is configuring PowerMock to detect and return a mock file object any time my package tries to open a configuration file, even if it's buried deep in my package logic, acceptable, or is this considered to be abusing my knowledge of implementation-specific details? Would it be 'cleaner' to actually modify configuration files on the file system and let them be read normally without further mocking? I would need to modify the files regularly for testing... Edit: To clarify the question of coupling asked in one of the questions, I don't think that my classes are overly coupled. Effectively I have three classes that fetch state from A, B, and C, and then a 'main' class which takes the state from the three and decides how to combine it correctly (i.e., if A and B don't match, use A unless C). I could easily test A, B, and C separately and then mock them to test my 'main' class. However, it seems like testing my 'main' class would effectively test the comparatively simple interfaces for A, B, and C anyway if I didn't mock them. Thus it feels like duplicate work to test them individually.
I may get some slightly better code coverage, maybe, with individual testing, and it's always nice to have one test test only one thing. But I don't know if it's worth all the overhead for minor benefits."} {"_id": "68591", "title": "How do you manage minor changes that you want to keep local in mercurial?", "text": "I'm considering migrating a 14-year-old CVS repository (history intact) to Mercurial. I think I've got all the technical conversion bits down, but I still have some questions about working effectively in Mercurial. One of the things I see a lot in individual developers' CVS sandboxes (including my own) is local uncommitted changes that aren't ready to be pushed to the mainline. My understanding is that this is a bad thing. Most of my experiments with hg suggest that uncommitted changes are a bad thing to have. The inability to merge with them is enough for that. So what I want to know is how other people who use Mercurial in day-to-day coding deal with it. How do you deal with incomplete changes to code when it comes time to update your repository? How do you deal with local changes that you don't (yet) want to share with other developers?"} {"_id": "157490", "title": "How to port this architecture to .net?", "text": "My team is currently locked into using a tool we dislike that takes the form of an Eclipse plugin and a .jar; the plugin gives us a button to quickly run a single file's code (via invoking the main .jar and passing to it the current file). We want to move to C#.net. Is there any way to get Visual Studio to replicate this behavior? Obviously we could put each runnable class into its own project in our solution, but that requires checking a lot of project files into source control. Ideally we'd have a main() method in each file and could tell Visual Studio to run just that file for development purposes, while the finished software would be run from a single entry point using command-line parameters. (I don't want to give too much information here regarding the purpose of the software, so please just accept that the current setup makes a lot of sense for what we're doing and changing it too much would be rejected by management.)"} {"_id": "68592", "title": "How to disseminate Scala?", "text": "With the announcement of Ceylon, and after observing the slides describing its intent and feature list, I reckoned this language to be a Scala competitor. Furthermore, as a Scala programmer, I can see several points (as depicted in here) that state that Scala goes further in providing a quantum leap over Java, while remaining interoperable nevertheless. However, this language may be a symptom of failed comprehension of Scala's features and/or lack of knowledge of the language itself, which may not be only their fault. Many great ideas have failed due to bad/insufficient marketing. Beta was, in some aspects, better than VHS, but it lost its market share because of marketing. So my question is: Are we doing enough to explain/disseminate/evolve Scala? How can Scala professionals/enthusiasts propel the language forward in terms of acknowledgement from their colleagues (i.e., marketing)? How can we make natural selection work and prevent it from failing? Are sponsorships the way to go? Should there be already-prepared presentations (e.g. like JRebel has on its page) to ease propagation?"} {"_id": "30255", "title": "What are the drawbacks of sending XML to browsers and letting them apply XSLT?", "text": "**Context** Working as a freelance developer, I often made websites completely based on XSLT.
In other words, on _every_ request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes (caches, etc.) it into an HTML/XHTML page to send to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most template engines, and because I know it better and like it. It is also possible, when needed, to give access to raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side. **Question** Modern browsers have the ability to take an XML file and to transform it with an associated XSL file declared in XML like `<?xml-stylesheet type="text/xsl" href="..."?>`. Firefox 3 can do it. Internet Explorer 8 can do it too. It means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their and the server's bandwidth usage (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation: * Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only change to make is to add an XSL file link to every XML file, and to add a browser check. * More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers, instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (as images, CSS, and JavaScript files actually are). * Possibly some problems on the client side, like maybe problems when saving a page in some browsers. * Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely go look at HTML code on the client side, and in most cases, it is unusable directly (whitespace being removed)."} {"_id": "4250", "title": "\"A\", \"an\", and \"the\" in method and function names: What's your take?", "text": "I'm sure many of us have seen method names like this at one point or another: * `UploadTheFileToTheServerPlease` * `CreateATemporaryFile` * `WriteTheRecordToTheDatabase` * `ResetTheSystemClock` That is, method names that are also grammatically-correct English sentences, and include extra words purely to make them read like prose. Personally, I'm not a huge fan of such "literal" method names, and prefer to be succinct, while still being as clear as possible. To me, words like "a", "an", and "the" just look plain awkward in method names, and they make method names needlessly long without really adding anything useful.
I would prefer the following method names for the previous examples: * `UploadFileToServer` * `CreateTemporaryFile` * `WriteOutRecord` * `ResetSystemClock` In my experience, this is far more common than the other approach of writing out the lengthier names, but I have seen both styles and was curious to see what other people's thoughts were on these two approaches. So, are you in the "method names that read like prose" camp or the "method names that say what I mean but read out loud like a bad foreign-language-to-English translation" camp?"} {"_id": "147451", "title": "What's so bad about the DOM?", "text": "I keep hearing people (Crockford in particular) saying the DOM is a terrible API, but not really justifying this statement. Apart from cross-browser inconsistencies, what are some reasons why the DOM is considered to be so bad?"} {"_id": "225272", "title": "distributed computing with remote heterogeneous machines", "text": "The way I am doing it now is using boost::asio TCP sockets, handling everything manually, with a main server that orchestrates the processes between the available machines. But the number of machines is increasing, and when I need communication between specific machines I have to do it through the server, and the number of machines is just too much to be handled by one server, so I am thinking about Open MPI. However, I have 3 problems: 1. The machines are heterogeneous. 2. The number of machines available can be, for example, 50 at a time, but can also be only 10, and sometimes it gets to 300, which is too much for my server to handle; there is always tons of data to process, and I can't seem to utilize more than ~70 connections at the same time. 3. Most of the machines are remote and some of them share the same network. If I want to scale things with my current design I would get ugly trees with multiple layers. I don't have the money to hire network experts/programmers for a non-profit project, but I am fairly good at grasping new concepts, so how would you get around this? And with the above problems, is OpenMPI for me? In other words, I am looking for a better design than mine, and I have no problem implementing it from scratch no matter how long it takes. I will be supporting Linux only at the start, because I figured that native code has a measurable advantage. I would like to hear your ideas."} {"_id": "225270", "title": "Is it correct to require end-users to clear their cookies?", "text": "Is it an indication of poor programming if the support staff for a given product requires you to clear the cookies in order to troubleshoot issues with their web-site? As a point of example, suppose you're trying to activate a new SIM on a tablet, and the carrier's activation web-site is giving you an error message that their web-site is down, even though there is no problem with actually accessing their web-server, which responds with the our-website-is-down page just fine. Subsequently, the support staff instructs the user to try clearing the cookies (even though this activation web-site is not known to have been accessed prior to the error message taking place)."} {"_id": "225271", "title": "Where can I find Turbo Pascal dialect description/reference?", "text": "Where can I find a Turbo Pascal dialect description/reference? Is it still available somewhere?
I'm looking for the famous Turbo Pascal dialect description/reference (yes, the one from the 80s/90s), but I'm not having luck with Google."} {"_id": "225275", "title": "Dangers when implementing features as plugins", "text": "_What kind of problems have you encountered when building plugin interfaces for your application? And how did you resolve them?_ **Background** I want to refactor an application so that various features can be moved into plugins, allowing them to be easily added and removed in the future. I hope this will be a neater system than scattering `if (FeatureX_is_enabled)` branches throughout the code. (I may be wrong!) I believe I will need to define interfaces for each part of the system into which a feature may plug in. I would like to benefit from your experience if you have already undertaken this kind of project. Perhaps you have endured mistaken designs, and can advise what _not_ to do. Perhaps you can recommend patterns or architectures that can make such a system more manageable. **Concerns** A few issues already spring to mind: * I will need to provide some power to plugins so that they can get their job done. If I pass too little control, new plugins may be **too restricted** to do what they need (e.g. some Google Chrome extension authors complain about the inability to modify the core interface). However, if I pass too much control, irresponsible plugins will have the power to break the whole application (e.g. Mozilla Firefox extensions). * Some plugins may need to run in a **different order** on different events. I could register event handlers with a priority, e.g. 1-100, so that plugins can hook themselves into the appropriate order. Or I could just create `post_` and `pre_` events and hope nothing more finely grained is needed. * If two plugins **depend** on another plugin being available first, then the plugin architecture will need some kind of dependency (and naming) system. * Related, some plugins will need to know when other plugins are **present**, and perhaps even make calls into each other to handle edge cases. * There may be difficulties **maintaining the state** of the application if plugins are enabled and disabled at **runtime**. (Chrome seems to handle this well, but many applications require a restart.) * This kind of architecture will **obfuscate the flow** of execution (in exchange for keeping additional behaviour out of the core code). I will need to write neat interfaces so that unexpected behaviour is minimised. * Sometimes the monolithic `if (FeatureX_is_enabled)` approach might be a better way to handle the situation than breaking out to a plugin, but I don't know how to recognise such situations. _It is likely there are other concerns I have not considered, so I would love to hear what you have had to deal with!_ **Related** Related question: How to turn on/off code modules? Related library: Bemson's curious Flow library for Javascript seems to provide various features to help with adding and removing features to/from a program at runtime, but I have trouble getting my head around it!"} {"_id": "106437", "title": "When conducting a code review, should the focus be on the completeness of the requirement?", "text": "Given that the reviewer is not part of the project, but was assigned to review because he has done some coding for the application being updated/enhanced: Is it the reviewer's job to ensure that the requirement is complete and correct? Or is this the QA and business tester's responsibility?
If the developer and the reviewer both miss a requirement which is discovered in UAT, is the reviewer also at fault (probably to a lesser degree)?"} {"_id": "225279", "title": "Is input validation necessary?", "text": "This is a very naive question about input validation in general. I'm a MATLAB user (1.5 years of experience) and I learned about input validation techniques such as "parse" and "validatestring". In fact, MATLAB built-in functions are full of those validations and parsers. So, I naturally thought this is the professional way of code development. With these techniques, you can be sure of the data format of input variables. Otherwise your code will reject the inputs and return an error. However, some people argue that if there is a problem in an input variable, the code will cause errors and stop anyway. You'll notice the problem either way, so what's the point of those complicated validations? Given that validation code itself takes some effort and time to write, often with quite complicated flow control, I had to admit this opinion has a point. With massive input validation, the readability of the code may be compromised."} {"_id": "234396", "title": "How to manage a single branch", "text": "I read the _Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation_ book by **Jez Humble** and **David Farley**. The part that threw me off balance the most was their insistence on using a single branch. Thinking about it and what they are preaching, it does make sense. But the question is, how to achieve it? At my company we are working in C# with TFS as our source control. The following ideas came to my mind about how to make changes that you don't want to release yet but want to keep inside the VCS: * Injection, to be able to select the class that you want to use. So you can rewrite some functionality without affecting code that needs to be released. * Related, an IoC container to make the above easy to achieve. So you can select at compilation (or even runtime) the ones that you want to use. * Decorator pattern, to add functionality in a very simple manner (which is just applying the other two above). In another similar question they talk about the **Release Line** from _Software Configuration Management Patterns_ (which I still have to read). But the example that they use deals with tags and the creation of branches from them, which doesn't seem to be the same thing as what Humble and Farley talk about.
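To make the first two ideas concrete, the sketch I have in mind looks like this (C# with a Unity-style container; the toggle helper and implementation names are hypothetical):
    public interface IOrderWorkflow { void Process(Order order); }

    // Composition root: the toggle decides which implementation gets wired in,
    // so unfinished work lives on the single branch but ships "dark".
    if (FeatureToggles.IsEnabled("NewOrderWorkflow"))
        container.RegisterType<IOrderWorkflow, NewOrderWorkflow>();
    else
        container.RegisterType<IOrderWorkflow, LegacyOrderWorkflow>();
Everything downstream resolves IOrderWorkflow from the container, so retiring the toggle later is a one-line change.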
I have the feeling that web applications currently tend to go with the latter, but what are the advantages of doing so? (The developers on the project chose the former, mainly based on the fact that they know C++ much better than HTML and JavaScript.)"} {"_id": "111838", "title": "How should I react to diminishing application performance?", "text": "Sometimes it occurs to me that the biggest challenge for an engineer who finds his application getting worse in performance is the lack of enough information. Imagine that you go through the weekly performance report from your application's access log, and find you get much worse response times from July. All you can do is pray for it not to get worse the next time."} {"_id": "143512", "title": "Junior developer support", "text": "I am a junior developer in my first work experience after university. I joined the company as a PHP developer but I ended up developing using C# and ASP.NET. Right from the start I did not receive any training in C# and I was assigned ASP projects with quite tight deadlines scoped by senior developers. The few project handovers I had from other developers were brief and it looked like I had to discover the system myself, in a really short time. This is my first job as a web developer and I wonder whether it is normal not to have a kind of mentor to show me how to do things, especially because I am completely new to the technology. Also, do you have any idea how to tackle this? As you can imagine, it gets really frustrating."} {"_id": "183531", "title": "How to refactor \"nested\" view classes to avoid deep method calls?", "text": "Let's say I'm displaying a bunch of data (`model`) using a `View` class for rendering. However, a lot of the data has sub-data (`model`s) complicated enough to require separate rendering classes. In my design, a `View` class has a `model` which it is rendering, and has many children `Views` which display the sub-data. In some cases, a `View`, while containing a model, may not have anything to render and serves more as a wrapper for its children. However, if you have very complex data and your sub-views have subviews, this design results in deeply nested method calls. Some methods are simply passing information to view classes which may do nothing with it except pass it to their children. This seems inefficient, so I figured there might be a pattern or something that solves this more elegantly."} {"_id": "186721", "title": "Level of detail in System Requirements", "text": "I've read in multiple places that the requirements must not be influenced by the solution and must not contain the solution. So in the below example, please can you help me determine which is correct: Application A is interacting with Application B to install a new service / change the service type and disconnect a service. There are 3 different service types: 1, 2 and 3. Application B does not support changing service from 1 to 2 or 1 to 3. Hence 1 must be disconnected and then 2 or 3 must be added. However service can be changed from 2 to 3 or 3 to 2 directly. In the above case, changing the service from 1 to 2 or 1 to 3 is a system level constraint. However from an end user's perspective this is a service change.
In the above case, should I have a single system requirement stating the system shall allow the service to be changed for all the service types supported, or should I have 2 different system requirements - one stating which services can be changed directly, and the second stating in which cases the service needs to be disconnected and installed?"} {"_id": "188616", "title": "Technical interview graphics-related concepts fundamentals", "text": "I'm having a technical interview with a graphics company soon and I'd like to expand my knowledge of the following subjects: * TLB (translation lookaside buffers and their role) * low level optimizations * cache and how the process of fetching data for an instruction execution works * threads and synchronization * interrupts, deadlocks and spinlocks, irqs and interrupt masking Can you suggest some exceptionally well-explained books/resources about these topics?"} {"_id": "16908", "title": "Doesn't \"if (0 == value) ...\" do more harm than good?", "text": "This is one of the things that I hate most when I see it in someone else's code. I know what it means and why some people do it this way (\"what if I accidentally put '=' instead?\"). For me it's very much like when a child goes down the stairs counting the steps out loud. Anyway, here are my arguments against it: * It disrupts the natural flow of reading the program code. We, humans, say \"if value is zero\" and not \"if zero is value\". * Modern compilers warn you when you have an assignment in your condition, or actually if your condition consists of just that assignment, which, yes, looks suspicious anyway * You shouldn't forget to put double '=' when you are comparing values if you are a programmer. You may as well forget to put \"!\" when testing non-equality."} {"_id": "66160", "title": "Most regrettable design or programming decision you made?", "text": "I would like to hear what kind of design decisions you took and how they backfired. Because of a bad design decision, I ended up having to support that bad decision forever (I also had a part in it). This made me realize that one single design mistake can haunt you forever. I want to learn from the more experienced people what kind of blunders they have experienced and what they learned from them. I'm sure this will be a lot of help to other programmers by helping them to not repeat those decisions. Thanks for sharing your experience."} {"_id": "188619", "title": "Rules about the concreteness of method parameter types, return types and property types", "text": "Some time ago I read a kind of \"rule of thumb\" about the concreteness of method parameter types, return types and property types, but I just do not remember it. It said something about keeping your return types as concrete as possible and your parameter types as abstract as possible... or vice versa. I don't know if it was actually good or bad advice, so if you have your own thoughts about it, please leave a comment. Cheers."} {"_id": "181975", "title": "Why is the UI of today's applications getting so \"plain\"?", "text": "Take a look at Google, Windows 8, etc. Why does everything look so plain now? It looks really ugly to me. Is it all because of smartphones and tablets? Do I have to follow that trend?"} {"_id": "128971", "title": "Would like to add features to Text Editor - which editor to choose?", "text": "I would like to add some new features to a text editor.
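One common reading of the rule of thumb asked about above (_id 188619) is "accept the most abstract parameter type that works, return a type the caller can rely on"; this is a hedged Java sketch of that reading, not the only possible one:

import java.util.*;

class Doubler {
    // Parameter is abstract: any Iterable (List, Set, custom type) is accepted,
    // so the method is maximally reusable for callers.
    static List<Integer> doubled(Iterable<Integer> xs) {
        List<Integer> out = new ArrayList<>();  // concrete enough to promise order
        for (int x : xs) out.add(x * 2);
        return out;
    }
}

The asymmetry exists because widening a parameter type later breaks no caller, while narrowing a return type later can.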
The main requirements I have are: * Should be a programmer's editor * I would like to target C# editor features to start with (since I work in Visual Studio most of the time). * I have considered Visual Studio addins/extensibility development, but I need to know some pros/cons of that. Mainly I am interested in the XAML editor features of VS 2010. * Some other open-source editors I have considered are Scintilla and AvalonEdit. Again, I would like to know the learning curve for working with those. Since I have not yet started, all editors seem equivalent. I would like to know the pros/cons of developing for each of these."} {"_id": "196306", "title": "Will an assembly language book for Intel x86 processors be compatible with AMD processors?", "text": "I want to get an assembly book to learn assembly, and was wondering: if I get a book for the Intel x86 processor, will there be any problems assembling the code on an AMD processor?"} {"_id": "29614", "title": "What are you telling yourself if you can't understand a new concept, paradigm, feature ...?", "text": "Programming has always required learning new concepts, paradigms, features and technologies, and I have always failed at my first attempt to understand a new concept I encounter. I start to blame and humiliate myself, forgetting how I previously came to understand new concepts I hadn't understood before. I can hardly stop telling myself \"Why can't I understand? Am I stupid or an idiot? Yes, I am stuppiiddddd!!!\" What does your inner voice tell you if you cannot understand a new concept after spending a long time on it, until you are tired or hopeless? How do you handle your self-esteem in such situations?"} {"_id": "73892", "title": "What are \"User Process Components\"?", "text": "This article about application architecture design mentions \"User Process Components\" as part of the presentation layer. > **User process components**. Your user process components help synchronize > and orchestrate user interactions. This way the process flow and state > management logic is not hard-coded in the user interface elements > themselves, and the same basic user interaction patterns can be reused by > multiple user interfaces. It looks like this is a standard term, but I have never heard of it so far. Can you provide a more precise definition or examples of UPCs which illustrate the definition above? **EDIT**: what's puzzling me particularly is the phrase _\"help synchronize and orchestrate user interactions\"_. Can you provide an explanation of what could be meant? Can you give an example of a problem that is solved using UPCs?"} {"_id": "73893", "title": "How to experiment with GPU programming on Linux+AMD/ATI card?", "text": "I've recently acquired a laptop with an Intel i3 CPU and an AMD/ATI 6300 card, running Ubuntu 10.10. How do I proceed in setting up a development environment that allows me to program the GPU? I assume I'll have to use OpenCL (CUDA is NVIDIA-only), but since I'm a novice with GPU programming, I'm asking for advice from more experienced programmers on the issue."} {"_id": "73895", "title": "Do you run production boxes with logs completely turned off?", "text": "We are internally debating in our team whether we should continue to run our production boxes with logs turned off completely and just log errors and exceptions with log rotation. I want to know how everyone else logs info in their production environment and any particular strategy/tool that has worked well for you.
Thanks, Isaq"} {"_id": "156729", "title": "Procedural--enough?", "text": "_I posted a similar question on the PHP section of stackoverflow and was told that this might be a better forum in which to submit my question._ Without any programming experience whatsoever, I started studying PHP beginning in the second quarter of this year with the goal of slowly building my dream website starting later this summer. Just as I was wrapping my head around PHP and beginning, I felt, to truly understand the different concepts and how the code works (this is why I prefaced my post with my non-experience--I've really had to bang and cram it all in my head), I came across Object Oriented PHP. I've taken the approach of mastering (trying, as much as possible, at least) as many of the concepts and ideas as possible before diving in to try and create my own scripts; much like learning how to dribble, or throw a baseball or football well, before playing a single game (of course I've played around with and done some code drills). My question to the experts is this: Should or must I learn and delve deep into Object-Oriented PHP, thus delaying my \"start-to-code\" date? Or will procedural PHP programming suffice if one would like to build good e-commerce, forum/comment-type (like PHP Help) or basic social networking websites? How far will procedural programming take me? A site with 1,000; tens of thousands; a hundred thousand visitors?"} {"_id": "248458", "title": "Identify algorithm for my resource allocation needs", "text": "I'm trying to automate a task and I lack the right vocabulary to look up the correct algorithm. It really feels like a common problem that has likely been solved many times before. All I'm looking for is for someone to point me in the right direction or help me with the right search terms to look up a solution / algorithm. If you happen to know of an actual library (JavaScript), then even better. # Made-up scenario Say I have 3 'buckets', `Bucket A`, `Bucket B` and `Bucket C`. Each of these can hold a certain number of 'balls'. * `Bucket A`: Capacity 10 balls. * `Bucket B`: Capacity 15 balls. * `Bucket C`: Capacity 5 balls. Now, I also have an inventory of balls and each one can only be put into certain 'buckets'. One ball can only go in `Bucket B`, the next ball can go into `Bucket A` OR `Bucket C`, and so on. Now.. I need to determine the best way to place the balls in order to try to fill up each 'bucket' to its capacity (or as close as possible). # Real scenario My real reason for this is to schedule `people` (balls) to visit `locations` (buckets) for a requested number of hours (capacity of the bucket). However, due to the following reasons all the libraries/algorithms I've found while searching for \"scheduling\" so far do not work in my scenario: * I do not care about start/end times at all, only `person` -> `location` * My people (balls) all have a strict list of locations they can visit. Each one is unique. * Each person is available for an arbitrary number of whole (integer) hours that they can spend at _exactly one_ `location`. Using someone that's available for 8 hours for only 7 of those hours is OK. * Each location (bucket) requests a certain number of hours that I try to fulfill to the best of my ability with any combination of people. I have ~50 locations and ~100 people. It's not a requirement that I get a **perfect** solution, but 'pretty close'.
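The allocation question above (_id 248458) is close to a bipartite assignment / transportation problem; exact solvers use min-cost flow or integer programming, but since "pretty close" is acceptable, even a naive greedy baseline may do. A hedged Java sketch, with hypothetical Person/location names, that can overshoot a location's request slightly:

import java.util.*;

class Person {
    final String name; final int hours; final List<String> allowed;
    Person(String name, int hours, List<String> allowed) {
        this.name = name; this.hours = hours; this.allowed = allowed;
    }
}

class GreedyAllocator {
    // remaining: location -> hours still requested; returns person -> location
    static Map<String, String> assign(List<Person> people, Map<String, Integer> remaining) {
        Map<String, String> plan = new HashMap<>();
        // Place the least flexible people (fewest allowed locations) first.
        people.sort(Comparator.comparingInt(p -> p.allowed.size()));
        for (Person p : people) {
            String best = null;
            for (String loc : p.allowed)          // pick the allowed location
                if (best == null || remaining.getOrDefault(loc, 0) > remaining.getOrDefault(best, 0))
                    best = loc;                   // with the most unmet hours
            if (best != null && remaining.getOrDefault(best, 0) > 0) {
                remaining.merge(best, -p.hours, Integer::sum);
                plan.put(p.name, best);
            }
        }
        return plan;
    }
}

Useful search terms for the exact version: "generalized assignment problem", "b-matching", "min-cost max-flow".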
I found schedule.js, which looks fantastic, but I've been unable to abuse it to fit my needs."} {"_id": "23472", "title": "Dealing with co-workers who do not have a consistent coding style?", "text": "What do you do when you're working with someone who tends to write stylistically bad code? The code I'm talking about is usually technically correct, reasonably structured, and may even be algorithmically elegant, but it just _looks ugly_. We've got: * A mixture of different naming conventions and titles (`underscore_style` and `camelCase` and `UpperCamel` and `CAPS` all applied more or less at random to different variables in the same function) * Bizarre and inconsistent spacing, e.g. `Functioncall (arg1 ,arg2,arg3 );` * Lots of misspelled words in comments and variable names We have a good code review system where I work, so we do get to look over and fix the worst stuff. However, it feels really petty to send a code review that consists of 50 lines of \"Add a space here. Spell 'itarator' correctly. Change this capitalization. etc.\" How would you encourage this person to be more careful and consistent with these kinds of details?"} {"_id": "109066", "title": "Where should heavy development assets like specification documents and multimedia editor source files be stored?", "text": "It's not uncommon that I'm working on a site that includes things like binary (or effectively undiffable due to factors outside my control) specification documents (e.g., PDFs, Word documents, Excel spreadsheets) and multimedia source files (e.g., FLA source files, Photoshop files, Illustrator files). How should these be stored? My instinct is to just throw them into git. Storage is cheap, but it does make the clone operation take much, much longer than it otherwise would. A potential alternative would be to use a wiki. I've tried it in some cases but never ran into a case where the wiki format's inherent strengths (layperson-accessible version history and easy access to latest copies) have or would have led to enough benefit to justify the additional work in maintaining the wiki. The third option is to just let them hang out locally and in email threads. So what approach should I take to this?"} {"_id": "225892", "title": "Which mathematical properties apply to XOR Swap algorithm (and similar bitwise operator algorithms)?", "text": "Assume you have two variables a and b, and you need to swap them, and for whatever reason, making a temporary variable for storage is not an option. This is the algorithm in pseudocode: a \u2190 a XOR b b \u2190 a XOR b a \u2190 a XOR b Based on examples I can see that this does work. But _why_ does it work? More specifically, how was this derived? Was it a mere coincidence that XORing such and such values does this? This question applies to **all bitwise operators.** I understand perfectly what they do, and how they work, and various algorithms that take advantage of them. But are there mathematical properties of these bitwise operators that these algorithms are derived from? What are those mathematical properties? And which of them apply to the specific example of an `XOR` swap?"} {"_id": "109061", "title": "Should a software developer carry business cards?", "text": "I was at Google Developer Day yesterday and I was approached by several companies looking to hire and offering freelance work. As I am interested in doing some freelance work I chatted a little with the representatives of these companies and they all gave me their business cards.
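A worked Java version of the XOR swap asked about above (_id 225892), with the algebraic identities it relies on spelled out in comments:

class XorSwap {
    public static void main(String[] args) {
        int a = 6, b = 13;
        // Relies on: x ^ x == 0, x ^ 0 == x, and ^ being associative and commutative.
        a = a ^ b;   // a' = a ^ b
        b = a ^ b;   // b' = (a ^ b) ^ b = a ^ (b ^ b) = a ^ 0 = a   (b now holds old a)
        a = a ^ b;   // a'' = (a ^ b) ^ a = b ^ (a ^ a) = b ^ 0 = b  (a now holds old b)
        System.out.println(a + " " + b);   // prints "13 6"
    }
}

So it is not a coincidence: XOR makes the integers a group where every element is its own inverse, and the three steps are just cancellation in that group. (Note the classic caveat: if a and b alias the same variable, the first step zeroes it.)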
I was a little taken aback by this since I didn't have a business card myself, and because I find business cards quite simply outdated. So, lacking a business card, I gave them my website address where I have a portfolio, links to my GitHub and LinkedIn accounts and a blog. Should I order some business cards and carry them around at these events? I can't shake the feeling that business cards are outdated; am I just plain wrong?"} {"_id": "76977", "title": "Outsourcing Quality Assurance and Testing", "text": "I was recently approached by a software firm that specializes in Quality Assurance and Testing. Up until this point, the developers at our (small) company have been responsible for their own QA for the most part and we've had mixed results. We're at the point where we are ready to hire a full time QA guy, but I was curious as to whether or not others have used teams like this in the past and what the results were? I'm a bit skeptical but trying to be open-minded. **tl;dr What was your experience with an outsourced QA/testing team? Pros and cons?**"} {"_id": "140153", "title": "Are there any ways to track where the visitors of my site come from?", "text": "This is a problem because when I do an email campaign, there is a link in the email that links to my company homepage. I would like to differentiate between visitors who come from another way (e.g. a search on Google) and visitors who come from the email I have sent. Is it possible to check this kind of information? Thank you"} {"_id": "76979", "title": "Should I just slog it out or discuss with my PM?", "text": "Having \"completed\" my task, I have recently been assigned by my PM to work on a maintenance project run by another PM. In this other project, the client wants to add new features and I'm assigned to do a feature. I'm finding my job over my head for various reasons: * code is difficult to understand/read as * not well-documented * standard naming convention is not followed (seems non-existent, and confusing at times because certain words are used in the wrong way) * dead code * redundant code * code such as (isTrue == true) * temp variables that are not inlined and not prefixed with temp * etc... * Visual SourceSafe is used * Visual Studio 2005 is used, even though they have VS2008 and VS2010. I'm unable to use a plugin for quick navigation (more of an inconvenience) * they just want to get things working, without caring about maintainability. I would love to refactor the code base, and suggest upgrading to SVN and a newer version of VS. However, I don't feel that the PM or my new colleagues are amenable to such changes. On top of that, I don't have the confidence of delivering on time (if I'm even able to deliver), and if I do make these suggestions, he may assume that I feel I am superior (not true) and that I am competent enough for my assigned task, making it difficult for me to raise issues in the future. I just don't feel I have sufficient experience yet for a project of this complexity, and will likely end up writing copy-and-paste and googled code with lots of unpaid overtime. I will get surface learning without deep learning, and I feel the entire experience will mar my joy of programming, perhaps making me shun it completely. In the meanwhile, if I do nothing about it, I will probably just have to slog it out within the current constraints. To this end, I have borrowed books on brownfield application development and Visual SourceSafe as references. What should I do?
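For the campaign-tracking question above (_id 140153): the usual technique is to tag every link in the email with campaign query parameters, which analytics tools such as Google Analytics record; visitors arriving from a search carry a search-engine referrer instead, so the two sources can be told apart. An illustrative, hypothetical link:

https://www.example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=june_launch

The utm_source/utm_medium/utm_campaign names are the standard Google Analytics convention; any server-side log processing can also read them directly from the request query string.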
Should I make my suggestions? How early should I tell him if I don't feel I can deliver? Or should I just slog it out while risking not being able to get my task done?"} {"_id": "244689", "title": "Why do we need fork to create new process", "text": "In Unix, whenever we want to create a new process, we fork the current process, i.e. we create a new child process which is exactly the same as the parent process, and then we do an exec system call to replace the child process with a new process, i.e. we replace all the data for the parent process with that for the new process. Why do we create a copy of the parent process in the first place and why don't we create a new process directly? I am new to Unix; please explain in layman's terms."} {"_id": "177257", "title": "How do people deal with Android fragmentation?", "text": "I've spent the past few years working on iOS apps, and I'm now giving some serious consideration to creating an Android port of one of my apps. I'm sure that complaints about fragmentation are a frustrating cliche to experienced Android programmers, but as an iOS programmer, I'm quite honestly overwhelmed by the number of configurations and devices that my app might end up running on. There are literally thousands of Android devices in the wild, but I know there are successful Android developers in the world and I know they're not testing or developing for thousands of different devices. So how can a relatively small company deal with fragmentation? Is it possible to pick the five or six most popular devices, focus on those and prevent the app from being installed on any other devices? Are there any other strategies for practically dealing with the number of different configurations an app will face?"} {"_id": "205998", "title": "Unit testing in node.js and mocking modules", "text": "I'm trying to test this wrapper to request that I made: // lib/http-client.js var app = require('app'), request = require('request'); exports.search = function(searchTerm, next) { var options = { url: 'https://api.datamarket.azure.com/Bing/Search/Web', qs: { 'Query': '\\'' + searchTerm + '\\'', 'Adult': '\\'Off\\'', '$format': 'json', '$top': 10 }, auth: { user: app.get('bing identifier'), pass: app.get('bing identifier') } }; request(options, function(err, res, body){ if (err) return next(err); // TODO: parse JSON and return results }); }; where `app` is an instance of express. The question is, how do I test the function of this \"module\" without having to touch the internet? If I were to do this in other languages I would have mocked the request module, but I don't know if that's the best way to do it in node.js. // NODE_PATH=lib describe('Http Client', function(){ it('should return error if transport failed', function(){ var c = require('http-client'), results = 'foo'; // request mock should return results when called c.search('foo', function(err, results){ results.should.eql(res); }); // TODO }); it('should return an error if JSON parsing failed', function(){ // TODO }); it('should return results', function(){ // TODO }); });"} {"_id": "228442", "title": "Event and Objects", "text": "I have an array of objects. Instead of asking every object, when performing a task, whether it has anything to say about it, I would rather like the following: Perform Task Object #4 Hey, I have something to say about that! And if Object #4 says something about that, maybe Object #17 has something to add, because Object #4 did say something. Do you understand?
How do I structure this?"} {"_id": "50074", "title": "When required to convert code from a language you don't know, how do you go about it?", "text": "Scenario: Your boss tells you he needs a big chunk of code in language X converted into language Y. You know language Y but are only vaguely aware of X. You only have a limited amount of time. Do you try and find a code converter? Slowly hack through the code until it works? Outsource? Say you can't?"} {"_id": "194868", "title": "Looking for companies with strong R&D departments, specifically for A.I., but I can't seem to find many. Are there only a handful?", "text": "I plan to get a PhD in CS, concentrating in some subfield of artificial intelligence, not sure which one at the moment. I'm looking into R&D departments in industry, but there don't seem to be too many. For example, I have literally found hundreds of labs for the basic sciences and other engineering fields, but for CS I've found: Google, IBM Research, Microsoft Research, Yahoo Research, AT&T Research and GE Research. Are there not tons of companies like there are in other disciplines?"} {"_id": "228447", "title": "Three approaches for obtaining different sized versions of an image from the server", "text": "My Android app needs different sized versions of images for different purposes and bandwidth preservation. Approach one: * when the user uploads their avatar or another image, my PHP script creates 4 versions of that image: mini_200, medium_300, big_400 and original. Those paths are then taken and stored in the database. Then, when I need the smallest image, I load it from `http://myserver.com/item_images/200_mini_27304lkewsjfimage.jpg` Approach two: * same as approach one, but instead of adding prefixes to the names of the files, I store them in different folders - big, medium, mini - and in my app I just pass a parameter for which folder to look in Approach three: * when the user uploads their avatar or another image, I only store the original. Then, when I need the smallest image, I load it this way: `http://myserver.com/image_resizer.php?image=\"93_iosdfj0sd9fj.jpg\"&new_width=200&new_height=200` Which one is better and why? I feel like I'm reinventing the wheel here, because this topic is too broad and I don't know where to read about it"} {"_id": "228444", "title": "What is best software design in creating methods?", "text": "I had created an extension method which extended the string type in C#. // actually checks if the string is empty or null and then looks up the default // promotion code which is set in the backend admin system. promocode.HasPromode(); I thought that was fine and dandy, but another colleague thought I should inject the method through constructor injection, which should implement an interface. I think this is overkill for a simple lookup function. Another colleague thought that extending the string type makes it easier to use it wrong and exposes it for the wrong purposes. Which one is right? All the solutions work fine, but one of them should be the best one. Or does it actually matter? It should matter, because we are having those discussions and we're trying to make the code better."} {"_id": "228445", "title": "JMS for synchronous calls vs leaving homebrewed messaging system?", "text": "I'm refactoring existing code. Right now the code has a main server with a CLI and RESTful interface as separate applications which communicate with the server via a home-brewed message passing package that they share.
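For the event question above (_id 228442), the usual shape is the observer pattern (or an event bus): objects register interest and are called back, instead of being polled one by one. A minimal Java sketch with hypothetical names:

import java.util.*;

interface TaskObserver {
    void onTask(String task, TaskBus bus);   // observers may publish follow-up events
}

class TaskBus {
    private final List<TaskObserver> observers = new ArrayList<>();
    void subscribe(TaskObserver o) { observers.add(o); }
    void publish(String task) {
        // Every registered object gets a chance to react; a reaction may
        // publish a new event, which is how Object #17 can respond to
        // Object #4 having said something. Guard against cycles in practice.
        for (TaskObserver o : new ArrayList<>(observers)) o.onTask(task, this);
    }
}

The copy of the observer list inside publish() lets observers subscribe or unsubscribe while an event is being delivered without a ConcurrentModificationException.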
The system is written by a few hardware engineers with no real programming experience. It looks a bit ugly at times, but it has been working in this code base for a year+ with minimal hassle, beyond being limited to passing a single serialized object per command. We now have to implement JMS communication to interface with a third system and I jumped right on it. I'm certain we'll use JMS queues to push some information that really should be pushed as well. My first inclination was to replace the whole home-brew message passing system with JMS. However, the home-brew system is written for synchronous calls, with an obvious \"I make a request, wait and send me a response\" sort of behavior. JMS is really used for async calls. I could make JMS work for synchronous calls with a little hacking, but that hacking needs to exist in three applications... My question is, would it be better to replace an existing system that seems to mostly work with JMS, to have the extra robustness of an enterprise level system and a single communication protocol, or is it best to leave well enough alone and not waste programming time trying to remove the homegrown system that seems to work? Incidentally, as far as I know there is no built-in tool to make JMS handle synchronous calls without my maintaining a state machine, is there? If there is a way I can tell JMS to send responses to my requests the way a RESTful interface would work, and JMS will handle the blocking and keeping of state to make it happen without my writing it, I would definitely switch entirely to JMS."} {"_id": "194863", "title": "Find all possible permutations of a certain number of distinct digits", "text": "I am stuck on the following problem. I want to find an algorithm that finds all possible permutations of a certain number of distinct digits and then puts them in a large matrix, for instance. For instance, for the digits 0, 1 and 2 I want to get the matrix (one permutation per column): 2 1 2 0 1 0 1 2 0 2 0 1 0 0 1 1 2 2 Here they are in order but this is not required per se. Does anyone know a nice algorithm to achieve this? Thanks in advance."} {"_id": "107382", "title": "Why are most large corporations' websites bad?", "text": "As an electronics customer, I sometimes have to go to manufacturers' websites to find information about products, driver/firmware updates, etc. More or less, most of these websites, IMO, are bad. They're slow, not user-friendly (sometimes really ugly), and take me lots of time to do what I need to. For example, on Sony's website, I must find \"Vaio laptops\" in a list of dozens (maybe hundreds) of products, then find my specific model in a list of hundreds of models. Fail! I also see that several other websites may be slightly better, but still not good (HP, Dell, Lenovo (somewhat better), ...). Well, they may have a large back-end system there, but that does not mean they can't make the front-end faster and more user-friendly. Is there an obvious reason that large corporations' websites are that bad?"} {"_id": "194865", "title": "Where did the html/css 'float' concept come from?", "text": "Pretty straightforward: I was wondering what the inspiration or precedent for the CSS 'float' concept was. I'm not familiar with any other graphics APIs or other layout DSLs that use this concept, but it's such a huge part of HTML layouts. I'm curious if a previous language had a similar concept, or if 'floats' originated in the land of HTML/CSS.
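A standard recursive, swap-based Java sketch for the permutation question above (_id 194863); it prints one permutation per line rather than building the matrix:

class Permutations {
    // Fix positions 0..k-1 and try every remaining element in position k.
    static void permute(int[] a, int k) {
        if (k == a.length) { System.out.println(java.util.Arrays.toString(a)); return; }
        for (int i = k; i < a.length; i++) {
            int t = a[k]; a[k] = a[i]; a[i] = t;   // choose a[i] for position k
            permute(a, k + 1);
            t = a[k]; a[k] = a[i]; a[i] = t;       // undo the choice (backtrack)
        }
    }
    public static void main(String[] args) {
        permute(new int[] {0, 1, 2}, 0);           // prints all 6 permutations
    }
}

Collecting the arrays into a list instead of printing them yields the matrix directly; the relevant search term is "backtracking permutation generation" (Heap's algorithm is a faster variant).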
Thanks."} {"_id": "181397", "title": "Many Blocking VS Single Non-Blocking Workers", "text": "Assume there is an HTTP server which accepts connections and then has to _somehow_ wait for headers to be fully sent in. I wonder what the most common way of implementing it is, and what the remaining pros and cons are. I can only think of these: **Many blocking workers are good because:** * It is more responsive. * Easier to introduce new connections (workers pick them up themselves rather than an outsider waiting till it can add them to a synchronized list). * CPU usage balances automatically (without any additional effort) as the number of connections increases and decreases. * Less CPU usage (blocked threads are taken out of the execution loop and do not require any logic for jumping between clients). **A single non-blocking worker is good because:** * It uses less memory. * It is less vulnerable to lazy clients (which connect to the server and send headers slowly or don't send at all). As you probably can see, in my opinion multiple worker threads seem a bit better as a solution overall. The only problem with it is that it is easier to attack such a server. **Edit (more research):** Some resources I found on the web (Thousands of Threads and Blocking I/O \\- The old way to write Java Servers is New again (and way better) by Paul Tyma) hint that the blocking approach is generally better, but I still don't really know how to deal with fake connections. P.S. Do not suggest using some library or application for the task. I am more interested in knowing how it actually works or may work rather than having it working. P.P.S. I have split the logic into multiple parts and this one only handles accepting HTTP headers. It does not process them."} {"_id": "181392", "title": "(Dis)advantages of datetime vs. long in globally used applications", "text": "I am currently designing a web/mobile application the bulk of whose users will be distributed across the different U.S. time zones, with the potential of scaling up to other countries and time zones as well. I have worked with several other globally used apps and noticed they all used datetime to record timestamps, with the default time zone being that in which the application is typically hosted. In displaying the times, they typically do not care about the user preference (locale) and simply display the timestamp with the TZ, leaving it up to the user to convert if (s)he wishes. I am entertaining the idea of shifting away from that paradigm and recording timestamps as one global time (GMT) in long millis regardless of the user location, collecting the user locale by their IP or letting them change their TZ as a preference in their profile, which would later be used for display formatting of all their GMT long millis. This seems like a simpler solution than confusing the app design with TZs. I am curious to hear feedback on the pros and cons of this approach. I am also curious to hear some explanations of why the, IMO, overcomplicated time-zoned datetime data type has ever been used across databases rather than a uniform single time zone with conversions just for display."} {"_id": "166250", "title": "How can I improve these online java programming puzzles I wrote for my (middle/high school) students?", "text": "I'm teaching some middle and high school students programming right now, and I found that some of them really liked online programming puzzles. So I created http://www.kapparate.com/coder/ , and right now there are 4 categories of puzzles.
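The store-UTC-millis idea from the timestamp question above (_id 181392), sketched with java.time; the zone id stands in for a per-user preference looked up only at display time:

import java.time.*;
import java.time.format.DateTimeFormatter;

class Timestamps {
    public static void main(String[] args) {
        long storedMillis = System.currentTimeMillis();       // persisted as one global instant
        ZoneId userZone = ZoneId.of("America/Los_Angeles");   // from the user's profile
        String display = Instant.ofEpochMilli(storedMillis)
                                .atZone(userZone)
                                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm z"));
        System.out.println(display);                          // formatted only for display
    }
}

Storing the instant and formatting per user keeps comparisons and sorting trivial (plain long comparison) and pushes all time-zone complexity to the presentation layer.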
All the puzzles are set up right now so that variables are pre-initialized, and the user plugs in some code in the middle. For example, the problem might say these are pre-initialized: int x = ????; int y = ????; int z; and then the program might ask the student to write the final line of code: `z = x + y;`. Now I know I could go a long way in improving the usability of this site (like having an area that lists the pre-defined variables), but I was wondering if this concept seems sound. I know some sites have kids fill in functions, but not all of my students know what functions are yet, and I'm trying to introduce online programming puzzles before that."} {"_id": "189561", "title": "When a garbage collector compacts objects in the heap, does it change the references on the stack?", "text": "This seems like a simple question, but after a lot of reading on the subject, I still haven't found a definitive answer (perhaps because it is so simple). My question is this: when a garbage collector compacts objects in the heap, how are the references to those objects in the stack updated? I can think of two possible solutions: 1. Go through the stack (and references in the heap) and update the references to point to the new location of the object. In an analogy to moving, this would be like sending a letter to anyone who has your address and asking them to update their address book with your new address. 2. Provide some sort of lookup table. This would be like leaving a forwarding address with the local post office. Do garbage collectors predominantly use one of these two methods? Some other method? Both?"} {"_id": "6146", "title": "Is mod_security a good thing?", "text": "I've recently been plagued by erroneous error messages from mod_security. Its filter sets cover outdated PHP exploits, and I have to rewrite my stuff because Wordpress&Co had bugs years ago. Does this happen to anyone else? > Apache mod_security blocks possibly dangerous HTTP requests before they > reach applications (PHP specifically). It uses various filter sets, mostly > regex based. So I have a nice shared hosting provider, technically apt and stuff. But this bugged me: Just last week I had to change a parameter name `&src=` in one of my apps because mod_security blocks ANY request with that. I didn't look up its details, but this filter rule was preventing the exploitability of another app which I don't use and probably had never heard about. Still I had to rewrite _my code_ (renaming a parameter often suffices to trick mod_security) which had nothing to do or in common with that!
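A toy Java model of the "update every reference" strategy from the GC question above (_id 189561); real collectors rewrite raw memory words, this only mimics the bookkeeping with integer pseudo-addresses:

import java.util.*;

class ToyCompactor {
    public static void main(String[] args) {
        // Forwarding table built while sliding live objects together.
        Map<Integer, Integer> forwarding = new HashMap<>();
        forwarding.put(40, 8);     // object moved from address 40 to 8
        forwarding.put(72, 24);    // object moved from address 72 to 24

        int[] rootSet = {40, 72, 40};   // stack slots holding references
        // After the move, the collector walks the roots (and heap fields)
        // and rewrites each reference via the forwarding table.
        for (int i = 0; i < rootSet.length; i++)
            rootSet[i] = forwarding.getOrDefault(rootSet[i], rootSet[i]);

        System.out.println(Arrays.toString(rootSet));   // [8, 24, 8]
    }
}

The second strategy in the question (a permanent handle table) avoids the fix-up pass but adds an indirection to every object access, which is why most modern compacting collectors prefer the fix-up approach.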
Is there an application level equivalent?"} {"_id": "189569", "title": "Moving from local storage to a remote database: how should I cache the data locally?", "text": "I have a .NET (C#) application that I am releasing soon, which also has some support files. Right now I am storing those files as .txt files, and I update them as necessary whenever the application version changes. I am wondering if a better solution might be to store the information in those files in a central database instead, and have all clients access that database when then launch. This would prevent the need to update the software each time those reference files change. My question for the gurus here is this: Is this a wise decision, and if so...what would be the best method of storing the downloaded data? A temp file? in memory? something else? Any help would be appreciated."} {"_id": "231436", "title": "How does jQuery's mechanism of event handlers work", "text": "I'm in the need of widen my perspective on the framework libraries, to be able to make well aware choices of if/which/when to add a framework to a website. One thing that got my attention was event handlers, as they seems to be somewhat more complex structured. For me, in the early days, an event handler were added inline with its element `...` Later on they were removed from being inline and added globally `document/element.onlick = \"DoSomethingOnClick` or `document/element.onlick = \"function() {...}` As of today, the frameworks add their own mechanism on top and I am trying to grasp what/how that actually means/is done. By studying code parts it appears to me, and here describe in a VERY simple way, that they add the `onclick` event similar to above global and then pass the event when raised, into their own handler, which itself is an object structure containing stored/added functions and elements and their relationships. Is this a correct observation? If not, then how is it done? Can this be showed with a simple written sample in Javascript? Based on the following quote, > \"You can use the jQuery _on_ method to attach an event handler to the > document, and pass it a filter string that will cause your handler only to > fire when the event happens on an element that match the filter, and with > the matched element as _this_ value.\" how is the element that triggered the event identified in the handler (by the event object or ?) and how are the elements to be fired stored/identified? Do the other frameworks, other than jQuery, have a very much different structure of their implementation of event handlers?"} {"_id": "142337", "title": "Using functions as statements on Python", "text": "A great feature of Javascript is function statements. You can do this: (function myFunc(){ doSomething(); doSomethingElse(); })(); Which is a way to declare a function and call it without having to refer to it twice. Can that be mimified on Python?"} {"_id": "131270", "title": "After writing code, why do I feel that \"I would have written better\" after some time?", "text": "I have been working on my hobby project in C++ for more than 2 years. Whenever I write a module/function, I code it with lot of thinking. Now see the problem, do { --> write the code in module 'X' and test it --> ... forget for sometime ... --> revisit the same piece of code (due to some requirement) --> feel that \"This isn't written nicely; could have been better\" } while(true); Here `'X'` is any module (be it small/large/medium). 
I am observing that this happens no matter how much effort I put in while coding. So mostly, I refrain from looking at working code. :) Is this a common feeling for many people? Is this a language-specific phenomenon? (Because in C++ one can write the same thing in different ways.) What should I do if I get this **re-factoring** feeling for real-world production code, where changing the working code will not earn me many accolades but rather may invite trouble if it fails?"} {"_id": "240623", "title": "Retried Operation with generic Exception", "text": "I am looking for a way to get the logic of retrying an operation into a single method while keeping the exception types of the operation. I.e., the implementation to retry an operation could look like this: public void retriedOperation(Operable operable, int maxAttempts) throws Exception { for (int attempt = 0; attempt < maxAttempts; attempt++) { try { operable.doOperation(); break; } catch (Exception e) { if (attempt + 1 >= maxAttempts) { throw e; } } } } This implementation is however only capable of throwing an unspecified `Exception` and thereby loses lots of information. Unfortunately, generic Exceptions can ~~neither be caught nor thrown~~ not be caught in Java, thus something like public void retriedOperation(...) throws ExceptionType { ... } would only allow catching an unspecific Exception, checking if it is of the right type and then casting it. But if it is of another type, it cannot be thrown with this method header. **Is there a way to implement a generic retry function that preserves specific exceptions?**"} {"_id": "128598", "title": "How often is appropriate to destroy objects?", "text": "I know this is hard to answer without examples, so I'm looking for general principles or guidelines here. I'm thinking within the realm of small- to medium-sized mobile games and apps. I've read a few times that object reuse is a touchstone of creating efficient programs. So, I generally strive to reuse everything possible. But then I came to realize that every object I'm keeping around has to exist _somewhere_ in memory, so it's begun to seem **inefficient**. I've thought it would be fine, as long as I'm not trying to render the objects' visual or auditory aspects. And I've feared that construction and re-construction of these objects would be too expensive to perform every time I just want to toggle their presence. Let's say, for instance, I've got a GUI comprised of 30 different objects that each represent a screen state: buttons, panes, text, a background, maybe some animated UI components. Does it seem more sensible to initialize and de-initialize these objects, keeping them around in memory but disabling their rendering, animations, etc.? Or should I just be destroying these things and re-creating them? Other examples of things I keep around might be game objects like non-playable characters, weapons, items, etc."} {"_id": "200397", "title": "Unit Test OR Console Application", "text": "I am to write some one-off code in C#/.NET that will do some db manipulation of existing records and call a third-party REST API to update those records. I proposed writing a unit test that does it, and running it in VS2012. My teammates were not too happy with the approach and preferred writing a console application to do it. What does the community think? Which one do you prefer, and what are the pros and cons of each approach?
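On the retry question above (_id 240623): Java does allow a generic exception type in a throws clause via a method-level type parameter; what it forbids is catching a type variable, so the usual workaround is to catch Exception and narrow with an unchecked cast. A hedged sketch:

interface Operable<E extends Exception> {
    void doOperation() throws E;
}

class Retry {
    @SuppressWarnings("unchecked")
    static <E extends Exception> void retried(Operable<E> op, int maxAttempts) throws E {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts < 1");
        E last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                op.doOperation();
                return;                  // success, stop retrying
            } catch (RuntimeException re) {
                throw re;                // don't disguise unchecked failures as E
            } catch (Exception e) {
                last = (E) e;            // unchecked cast; doOperation can only throw E
            }
        }
        throw last;                      // callers catch the specific type E
    }
}

A caller declaring Operable<java.io.IOException> gets a compile-time checked IOException back out, so no type information is lost at the call site.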
Update #1 - The exact scenario (to keep everyone happy) is as follows: An in-house API handles newsletter subscription for my website (well... the website I work on). So these details are with this API (emails and their preference (subscribe or unsubscribe)). But to achieve certain things we keep track of these preferences in our application db as well (SQL Server, as a table). Now there is some data in the application's SQL db which is not in sync with the records that the API manages. To do this, I want to be able to write a unit test/console app that will read some emails from a given list (maybe an Excel file), call the API to get the preferences associated with those email IDs and then update the application's SQL db table. Update #2 - Why I want to use a unit test to perform this functionality: 1. I want to add this test/tests to my existing test project. This way I will have access to ALL DTOs and Contracts (and objects in general) which are used in my application. This saves me time repeating/duplicating any code that I may need in a separate console app. 2. Since it will be part of the existing source control repo, I don't have to maintain a separate project/application, which I would have to if I do a console app (if I want to share it with people)"} {"_id": "200390", "title": "Need help understanding Constructor Chaining", "text": "I'm trying to understand constructor chaining better. I understand that this technique can be used to reduce code duplication for the initialization of a class object, and I also understand the order in which the constructors are executed. I found the following example and I cannot see clearly what the benefits are in the code, and whether it's necessary. //Constructors public MyClass(double distance, double angle) : this(distance, angle, 0) { } public MyClass(double distance, double angle, double altitude) : this() { Distance = distance; Angle = angle; Altitude = altitude; } I don't understand why both constructors have the '`this`' keyword in them; would calling either constructor (and passing in the relevant parameters) have the same result?"} {"_id": "200391", "title": "GPLv3 Software vs. BSD (3-Clause) Open Source Project ... I don't get it", "text": "**Preamble:** * If I say \"BSD\" I mean the 3-Clause BSD license * If I say \"GPL\" I mean the GPLv3 license * I am NOT the author of the GPL project * I am the author of the BSD project **Simple Task:** I want to use a 3rd-party software (GPL) in (or better 'with') my open source project (BSD) **Situation:** * I won't change any of the GPL project code (using it AS IS) * I want to use it as a kind of library (I know about the LGPL but it's not available) * I will use the GPL software in bulk, NOT only parts of the code * I will indicate the use of the GPL project with license, homepage and where and why it is used in my BSD project (in the readme file) * I will not take the credit for the functionality of the GPL project * I will take the credit for my project and the \"Adapter\" connecting my BSD project to the GPL project * I don't want to re-license the GPL project as BSD (!) * I won't charge any fees or anything like that. To gain insight, you can find the BSD project at GitHub: MOC-Framework (see /Extension/FlowPlayer and /Module/Office/Video) and the GPL project here: Flowplayer (free flash-version) **Simple Question:** Is this possible? **Closing Words** I found some discussions stating \"it is possible\"; others say \"no\". Most of the \"no\" was for commercial projects.
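The chaining question above (_id 200390) is about C#, but Java's this(...) delegation works the same way and shows why only the "full" constructor does real work; a hedged sketch in Java syntax:

class MyClass {
    final double distance, angle, altitude;

    MyClass(double distance, double angle) {
        this(distance, angle, 0);        // delegate, filling in a default altitude
    }

    MyClass(double distance, double angle, double altitude) {
        this.distance = distance;        // every field is assigned in one place,
        this.angle = angle;              // so a change to the initialization logic
        this.altitude = altitude;        // never has to be made twice
    }
}

So yes: calling the two-argument constructor gives exactly the result of calling the three-argument one with an altitude of 0; the chain exists so the defaulting and the assignments are each written only once.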
Most of the \"yes\" are referring to: \"include BSD in GPLv3\", but that's not what I mean. I want GPLv3 along with BSD.. I simply can't figure it out."} {"_id": "181027", "title": "Data decoding initialization/Constructor error handling", "text": "I have a set of loadable data decoders for a specific type of data and a stream to read containing data. Now I want the program to select the correct decoder in a reliable way so I want to use a trial-and-error algorithm. It seems resonable to create a decoder and connect it to the stream using a constructor: // C++ code // codecs is an array of structs containing function pointers loaded from a dll FileIn src(\"foo.wav\"); for(size_t i=0; i Real-world objects share two characteristics: They all have state and > behavior. Dogs have state (name, color, breed, hungry) and behavior > (barking, fetching, wagging tail). Software objects are conceptually similar > to real-world objects: they too consist of state and related behavior. My problem with that passage is that when describing _state_ its mixes _attributes_ there too. For instance, the name and color of a dog are its attributes, while it being hungry or thursty are its states. So in my opinion it's more accurate to break the characteristics of objects into three parts: **attributes, states and behaviors**. Sure, when translating this into a programming language I can see that the three-fold partition becomes a two-fold one, because both attributes and states will be stored into fields/variables, while behaviors will be store into methods/functions. But conceptually speaking it makes more sense to have the 3 things separate. Here's another example: consider a lamp. Saying that both the lamp size and whether or not it's turned on are states is a stretch in my opinion. The lamp size is an attribute, not a state, while it being turned on or off is a state. Or did I miss something?"} {"_id": "22705", "title": "Is it more important to focus on a business domain or a programming stack/technology for career growth?", "text": "i just basically realized that it's almost impossible to truly learn and master each programming language/technology before a new version is released. so my initial thought was to focus on the .net platform, but then what about the business domain side of the thing? does it make more sense to pursue more knowledge about the platform including taking MCPD, etc or does it makes more sense in the long run to focus on the business domain, i.e. web commerce and only things that is related? i'm just a starting up as programmer and would like to know what's a good way to direct myself to growth so that i could master a certain area for career growth. any tips or article would be nice ;)"} {"_id": "231273", "title": "How to get better understanding of the users as a programmer", "text": "I work at a company that wants to be agile, but the business analysts often provide us \"user stories\" that are more solution than problem statement. This makes it difficult to make good design decisions, or in more extreme cases, leaves few design decisions to be made. It does not help the programmers understand the user's needs or make better design decisions in the future. Our product owner makes an effort to provide us with problem statements, but we still sometimes get solution statements, and that tends toward a \"code monkey\" situation. An additional challenge is that some (not all) of my teammates do not see a problem with this, and some of them honestly want to be told what to do. 
Thus, when we receive a solution statement on our backlog, they are eager to jump right in and work on it. I believe that as a software engineer part of my job is to understand the user's needs so that I can build the right thing for the user. However, within our organization structure, I have zero contact with the user. What kind of things can I do to better understand our users?"} {"_id": "231272", "title": "Testing the Consumers of Subclassed Data Structures", "text": "PHP's SplQueue does not include a clear() or reset() function to wipe data out of the data structure. My application requires that functionality. This leaves two options: A) Create a subclass of SplQueue. Example: class UserQueue extends SplQueue { public function clear() { //dequeue everything } } The problem with this approach is that I can't see a way to test _consumers_ of this subclass without relying on clear() as defined in the subclass. In essence I can't think of a way to stub that functionality in PHPUnit short of just depending on UserQueue's own tests. B) Implement the clear() function in every consumer of SplQueue that requires it (most consumers in my application), which seems obviously terrible but would appear to be the only testable method. I am hoping someone out there can tell me there is an obvious way to test option A that I've missed. I also considered wrapping SplQueue, but I don't see how this improves matters. Or I could monkey-patch it with runkit, but a nearly-allergic aversion to runkit has been beaten into me."} {"_id": "182286", "title": "serving static or dynamic web-content based on user group", "text": "I need help understanding a problem I have, and that others surely have had as well. I'm working on a web application that allows users to interface with a database. The application in general has multiple pages with tabbed navigation that display content and allow _most_ of it to be edited. So far, it works well. I need to restrict access to certain pages and disable dynamic (editable) content of the application based on the group (in the database) that the user belongs to. Sounds simple; maybe it is. The issue is, though, that the editable pages have `input` elements of type button and text which should disappear for a user in a restricted group. Also, certain tabbed-navigation selections should not be displayed for users of a restricted group. At what point in my application should this logic be handled? Using JavaScript for this logic seems like the wrong approach (using a library like `underscore.js` with templates), but I do need to hide or disable multiple options. My other thought was having dynamic and static web pages and serving those based on user group. It's clear that I'm lost on this subject, and could use some insight as to how this problem should be approached in a way that is sane, so the next guy behind me won't say... wtf. I'm not sure if this question is better suited for stackoverflow or here. So hopefully this is the right place!"} {"_id": "231276", "title": "How do desktop applications talk to remote database server?", "text": "I have often seen desktop applications in stores and banks. How do these desktop applications talk to a remote central database server? There will be a desktop application which is installed on multiple computers in multiple different locations. How do all these machines talk to one central remote database server?"} {"_id": "182283", "title": "Should one declare alternative response types (e.g.
JSON) in Rails controller actions even if not utilising them?", "text": "Just wondering what the accepted convention is for Rails controller design. Currently, every controller in my app that I've written is set up to send a JSON response when necessary. Thing is, I only ever utilise HTML responses in my app. So is it a good idea to have them defined? For example: def show @dog = Dog.find(params[:id]) respond_to do |format| format.html format.json { render json: @dog } # needed? end end It makes the controller code less readable (because of more LOC) and it also means I have to think deeply about what a good JSON response should be when HTML is not being used, so it decreases the speed of my controller development. This is especially true when you have conditional responses. For example: def create @dog = Dog.new(params[:dog]) respond_to do |format| if @dog.save format.html { redirect_to @dog } format.json { render json: @dog, status: :created, location: @dog } else format.html { render action: \"new\" } format.json { render json: @dog.errors, status: :unprocessable_entity } end end end The only positive I can see is that it \"future-proofs\" the controllers (i.e. if I need JSON responses later on then they are already written). But if I'm writing JSON responses just _because_, then by that same logic I might as well write XML responses (i.e. `format.xml`) too..."} {"_id": "22709", "title": "How long should I wait before re-contacting someone I've interviewed with for a web development/programming job?", "text": "After the interview has occurred, and after the thank-you letter has been sent, how long should I wait to re-contact them? Should I wait two weeks? One week?"} {"_id": "77623", "title": "Best practice to keep the changes in the project well documented", "text": "What are the best practices and software you use to keep change-related documents? Also, how do you prefer to document the changes in the project? Let's say you got a requirement to make some changes in the project. How do you proceed? * Do you take a complete backup before proceeding? Will this backup be documented? If yes, then how? * How will the change be documented? Any template? How will this be shared between the peers? * What will be the content of the change request procedure? * Do you keep the affected code, queries etc. in some folder/document?"} {"_id": "204892", "title": "Drop table if it already exists and then re-create it?", "text": "I was just learning about SQLite in Android programming from the New Boston Travis videos and I have come across something strange: It is said that if the database does not already exist, the onCreate() method is called to create it. So far, so good. But, if the database already exists, then the onUpgrade() method is called, which simply drops the database and then re-creates it. What is the sense behind such an update? Why not just leave the database alone?"} {"_id": "204890", "title": "What's meant by, \"TODO Auto-generated method stub\"?", "text": "I am using Eclipse for Android programming, and here and there I see the statement, \"TODO Auto-generated method stub.\" I understand that these methods were generated automatically by Eclipse upon creation of classes and other trigger activities, but I do not understand the need to have it mentioned everywhere.
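On the SQLite question above (_id 204892): onUpgrade() runs only when the version number passed to the SQLiteOpenHelper constructor is raised, not merely because the database exists, and it does not have to drop anything. Incremental migrations preserve user data; a sketch using the real Android API (the notes table and columns are hypothetical):

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // Apply migrations step by step so a user can jump several versions.
    if (oldVersion < 2) {
        db.execSQL("ALTER TABLE notes ADD COLUMN created_at INTEGER");
    }
    if (oldVersion < 3) {
        db.execSQL("CREATE INDEX idx_notes_created ON notes(created_at)");
    }
}

Tutorials often drop and re-create because it is the simplest thing to demonstrate, not because it is good practice for shipped apps.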
What's the need to have it mentioned everywhere repeatedly?"} {"_id": "77629", "title": "Expanding knowledge of python / Next book and/or Topic to read/research", "text": "I have been programming Python and web apps for a while now but never delved very deeply into OOP. I use classes all the time but I am pretty sure I am not fully getting what I could get from OOP. So today I popped open Learning Python (Mark Lutz), which I had read a while ago, flipped through the OOP section in about 5 minutes and realized it contained nothing new. What would you suggest as a next step in becoming better at understanding OOP?"} {"_id": "184614", "title": "hginit - #ifdefs ridiculous", "text": "I was reading Joel Spolsky's Mercurial introduction when it struck me: > \"And now what they do is this: each new feature is in a big #ifdef block. So > they can work in one single trunk, while customers never see the new code > until it\u2019s debugged, and frankly, that\u2019s ridiculous.\" Why is this so _ridiculous_ anyway? Isn't this, if nothing else, simply simpler to handle? It's nothing fancy but does the trick - at least if you are already \"married\" to Subversion. What is the downside? I kind of don't get the argument."} {"_id": "184615", "title": "Blocking IP address for web-scraping service", "text": "# Background Consider the following scenario: 1. **Link.** User provides a link to some poorly formatted website (e.g., _creative commons content_). 2. **Scrape.** Server downloads the content (web scrape), always throttled. 3. **Format.** Server formats the content (e.g., performs natural language processing). 4. **Return.** Server posts formatted results back to user. # Problem The server hosting the poorly-formatted website (the **host**) can block the server that pulls down the content (the **scraper**). If this happens, the user can no longer use the service to automatically change the format. Assume that the terms of service do not forbid scraping, nor is there an API available to pull the data directly. ## Comments Regarding Copyright * The content is not subject to copyright: it is either creative commons content or already in the public domain. * The content would be from whitelisted domains that have been vetted (e.g., U.S. federal government works). * For what it's worth, I don't even know if the sites will block the requests (especially given how infrequent the requests will be, and I will likely do some pre-caching). It's mostly academic at this point. # Question What strategies would you employ (such as using a virtual network, or a cloud service) such that the IP address of the **scraper** can easily (potentially dynamically) change to avoid being blocked by the **host**?"} {"_id": "141206", "title": "Call constructor using an arguments object in javascript?", "text": "Is it possible to call the constructor using an arguments object? var MyClass = function(a, b){ this.a = a; this.b = b; }; var myClassInstance = function(){ //This line would not work, but is what I'm asking. Is there a way besides eval? return new MyClass.apply(?, arguments); }('an A value', 'a B value');"} {"_id": "230895", "title": "Accepting a numerical range in a function call", "text": "I have encountered two ways of doing it: void foo(int from, int to); /* 'from' inclusive, 'to' exclusive */ void foo(int startIndex, int rangeLength); Has one style historically been preferred over the other? If so, was it just a matter of convention or was it due to some deeper underlying reason? 
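To make the difference concrete, here is the same range expressed in both styles (a made-up Java sketch, not code from any real library):
class RangeDemo {
    // style 1: 'from' inclusive, 'to' exclusive
    static int lengthFromTo(int from, int to) { return to - from; }
    // style 2: start index plus an explicit length
    static int lengthStartLen(int startIndex, int rangeLength) { return rangeLength; }
    public static void main(String[] args) {
        // both calls describe the elements at indices 2, 3 and 4
        System.out.println(lengthFromTo(2, 5));   // prints 3
        System.out.println(lengthStartLen(2, 3)); // prints 3
    }
}
One property I did notice while writing this: with the half-open style the length is simply to - from, and adjacent ranges like [0, n) and [n, m) meet without overlapping.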
I'm currently programming in Java and noticed that the Arrays class uses the former style. The exclusivity of the `to` argument felt somewhat unintuitive to me, which led me to ask this question."} {"_id": "205411", "title": "Why does git use hashes instead of revision numbers?", "text": "I always wondered why git prefers hashes over revision numbers. Revision numbers are much clearer and easier to refer to (in my opinion): there is a difference between telling someone to take a look at revision 1200 or commit 92ba93e! (Just to give one example). So, is there any reason for this design?"} {"_id": "205412", "title": "How to represent days and time graphically for a project I have done", "text": "How can I represent the days and hours of the different tasks (design, development, testing) of a software project in a graphical way? I tried using a Gantt chart for it, but it represents only the days of a particular task in a project, not the hours. Is there any way to do this in software management? I need this to show a client for a project I did. Here are some of the values I intend to use. \u25cf Research time \u25cb 10 days \u2013 8 hours * 10 = 80 \u25cf Design time \u25cb 10 days \u2013 8 hours * 10 = 80 \u25cf Development time \u25cb 15 days \u2013 8 hours * 15 = 120 \u25cf Testing time \u25cb 3 days \u2013 8 hours * 3 = 24"} {"_id": "119228", "title": "How to deal with the programmer's block?", "text": "> **Possible Duplicate:** > Dealing with frustration when things don't work Sometimes I find myself coding something difficult, and after hours of struggle my mind goes blank and I don't know what I am doing anymore. I'm not sure if this happens to a lot of programmers. Does this happen to you? How do you deal with it?"} {"_id": "205414", "title": "Difference between Zend Framework & Zend Server?", "text": "The title tells everything about my question but still let me elaborate to clarify: I know the Zend Framework very well, but what about Zend Server? Does it work like the Apache server, or is it something else? What is the main difference between these two? I also know that Zend Server is a product of the Zend company, but where is it used?"} {"_id": "205418", "title": "What is the proper way of nesting resources in REST model?", "text": "I'm designing a REST API for a service and got stuck on the proper way to nest resources. Resources: partners, tickets, settings Connections between resources: * partner has many tickets, * partner has a set of settings, Business logic: * you can list all partners as an anonymous user, * you can add a new ticket to a specified partner as an anonymous user, * only the partner can list his tickets, * only the partner can modify his tickets, * only the partner can list settings, * only the partner can modify settings, What I did till now: **Partner resources** GET /partners - list all partners GET /partners/:id - show details of the partner specified by the :id parameter GET /partners/:partner_id/tickets - list of partner's tickets GET /partners/:partner_id/tickets/:id - details of the specified partner's ticket POST /partners/:partner_id/tickets - saves a new ticket PUT /partners/:partner_id/tickets/:id - updates the ticket specified by the :id parameter GET /partners/:partner_id/settings - list partner's settings PUT /partners/:partner_id/settings - update partner's settings **Problem/Question** Would it be proper to split the nested resources (tickets, settings) out as separate resources, i.e. duplicate them as separate resources? E.g. 
GET /tickets/:id POST /tickets PUT /tickets/:id GET /settings PUT /settings"} {"_id": "42930", "title": "Are specific types still necessary?", "text": "One thing that occurred to me the other day: are specific types still necessary, or are they a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle \"adaptive types\", i.e., if something is only ever allocated in the shortint range it uses fewer bytes, and if something is suddenly allocated a very big number the memory is allocated accordingly for that particular instance. Floats, reals and doubles are a bit trickier since the type depends on what precision you need. Strings, however, should be able to take up less memory in many instances (in .NET) where mostly ASCII is used, but strings always take up double the memory because of Unicode encoding. One argument for specific types might be that it's part of the specification, i.e., for example, a variable should not be able to be bigger than a certain value, so we set it to shortint. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture since it's so tightly integrated with underlying hardware and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?"} {"_id": "42936", "title": "Need help deciding if Joomla! experience as a good metric for hiring a particular prospective employee", "text": "My company has been looking to hire a PHP developer. Some of the requirements for the job include: 1. an understanding of design patterns, particularly MVC. 2. some knowledge of PHP 5.3's new features. 3. experience working with a PHP framework (it doesn't matter which one). I interviewed a man today whose primary work experience involved working with Joomla!. As an employee, he will be required to work on existing and new web applications that use Zend Framework, CakePHP and/or CodeIgniter. It is my opinion that we shouldn't dismiss hiring a developer just because he has not used the same technologies that he'll be using on the job. So, I'd like to know about the kind of coding experience working with Joomla! can provide. I've never bothered to take more than a brief look (if that) at the Joomla! package, so I'm hoping to lean on the knowledge of my peers. * Would you consider Joomla! to contain a professional code-base? * Is the package well organized, and/or OO in general, or is it more like WordPress where logic and presentation are commingled? * When working with Joomla!, is the developer encouraged to use best practices? * In your opinion, would experience working with Joomla! provide the skills needed to get up to speed with Zend or CakePHP quickly, or will there be a steep learning curve ahead of the developer? I'm not saying that Joomla! is a bad technology, or even that it is lower on the totem pole when compared to the frameworks I've mentioned. Maybe it's awesome, I dunno. 
I simply have no idea!"} {"_id": "138614", "title": "Is there a certain number of lines of code to be followed/maintained?", "text": "I am developing a software system (Patient Administration System) and I have noticed it already has 451 lines of code (in one namespace). Is this bad? Or does the number of lines of code not matter as long as the methods and comments are useful and they are doing what they are intended to do? Or is there a number of lines of code to stay under, like a namespace should only have 500 lines of code, something like that?"} {"_id": "249467", "title": "Use camera to analyze homogeneity", "text": "We are working with some screens which, during the production process, change their state from a transparent state to a colored state. This colored state is applied to block light transmittance. In this process where the screen gets tinted, the tint must be applied homogeneously. This is a very important point of the process. To check this homogeneity, I've been asked to develop a mobile app (Android) that uses the camera to check whether the tint is applied homogeneously. I'm an Android programmer, so knowledge about Android is not a problem. But I've never worked with images this way, so I don't know whether such a thing can be done. My first idea is to develop an app which takes a photo using the camera; this is the easy part. After this, I should process the image in some way that identifies non-homogeneous parts, maybe convert the image to greyscale and check for lighter or darker tones. I'm not asking how I should program that, as this is just a first idea on how I could achieve this; what I'm asking for help with is how I could do the kind of image processing that detects homogeneity."} {"_id": "183880", "title": "should F12's request headers show session id as cookie?", "text": "I'm trying to educate myself on potential web attacks. I just found a site (which will remain anonymous) where it shows me what looks like the PHP session id inside the cookies section of the request header. My immediate reaction was \"wow, that's bad\"... but then I couldn't really come up with a scenario as to how someone could use this to mess up the site. But maybe it's because I'm a newbie to this stuff. But assuming that I got someone else's session id... I'd have to hack the site with their session id before it expires, right? So my question is, how real is this threat? And is this common, where web developers store session ids in cookies, thereby making them visible in F12? Sorry for the remedial questions. I read the article: Why popular websites store very complicated session related data in cookies -- and what does it all mean? And I get that sometimes, you want to use cookies to move data off server to client... but I think I need more clarification on how serious a security breach this is."} {"_id": "183881", "title": "How do you debug without an IDE?", "text": "Every time I look for an IDE (currently I'm tinkering with Go), I find a thread full of people recommending Vi, Emacs, Notepad++ etc. I've never done any development outside of an IDE; I guess I've been spoiled. How do you debug without an IDE? Are you limited to just logging?"} {"_id": "128725", "title": "How to write a good app store description?", "text": "> **Possible Duplicate:** > How to improve the quality of app descriptions I've just finished writing an iPhone app, and I've completed everything on my app store submission checklist except the description. 
Has anyone got any advice on how to write a really catchy description that's likely to get users interested? * What things are important to include in the description? * What _not_ to include? I really have no experience with this, as this is the first app I'll ever release to the store."} {"_id": "70016", "title": "Why don't software vendors use existing scheduling facilities for automatic upgrades?", "text": "It is common for software vendors to offer automatic updates. The check for and installation of these updates can be done * at application startup * through a service or process in the background (which often can be seen in the icon tray) * sometimes at the opening of a session or at boot The problem I encounter today is that the update systems of Google, Sun Java, Adobe and others... seem to squander my computer's resources by watching for updates continuously. One uses a service, the other a running process... Why do those suppliers not use the scheduling tools offered by the operating system, like the **Task Scheduler**?"} {"_id": "96331", "title": "How should I charge for programming things which take two minutes to fix?", "text": "I am really confused about this. I believe that the more experience I get, the better I become at finding mistakes and fixing them quickly. Now my boss got a website from a programmer who writes very, very bad code. He sends me the list of problems to fix. Suppose it's a stylesheet problem the old guy does not know how to fix, but due to my experience I know straight away what the problem is and can fix it in two minutes, and many similar problems like that. But after fixing all that I realise that I fixed in 15 minutes all the problems which the other guy was not able to solve. I get $25 per hour, so I feel very bad charging $6 for a list of fixes which took many years of experience to learn. Is it OK to charge $6, or should there be some other way to charge for things like this?"} {"_id": "125740", "title": "When a task can be accomplished by either Javascript or CSS, is it better to use CSS?", "text": "I always veto JavaScript by using CSS as much as possible. That is, I create tabs and rollover buttons using CSS rather than JavaScript. I have seen some solutions\u2014specifically the Wt web-framework\u2014which advocate JavaScript, but gracefully downgrade to CSS if the browser isn't capable/JS-disabled. I know CSS and JavaScript have different purposes; however, there is overlap, which is the _crux_ of this question. **Should I continue using CSS as much as possible over JavaScript?**"} {"_id": "104129", "title": "How to make models do more than setting and getting data and validation", "text": "I am asking this question because after developing a few small custom CMS solutions in a framework, I developed the idea that Models can be easily substituted with ORMs, which ease up the task of validating, getting and setting data, as that's all they are needed for. I recently got an order for a complex custom client management solution. After researching a bit on how to proceed on it, I found this: http://blog.astrumfutura.com/2008/12/the-m-in-mvc-why-models-are-misunderstood-and-unappreciated/ Now the idea is not new, as that's what I had read about the MVC approach: models handle the business logic. But I am finding it hard to plan out an approach in which Models handle the business logic and are classes which are complete in themselves, because of the type of work I have gotten used to. 
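To show where I'm stuck, here is roughly the contrast as I understand it (a made-up sketch; I've written it in Java for brevity, but my projects are in PHP and the class names are invented):
import java.util.ArrayList;
import java.util.List;

interface Invoice {
    boolean isUnpaid();
    int ageInDays();
}

class Client {
    private final List<Invoice> invoices = new ArrayList<>();
    // what the article seems to advocate: a business rule living in the model
    // itself, instead of the model being bare getters/setters over an ORM row
    boolean isOverdue() {
        for (Invoice invoice : invoices) {
            if (invoice.isUnpaid() && invoice.ageInDays() > 30) return true;
        }
        return false;
    }
}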
Please help me out here by explaining the idea and pointing me to some examples, articles, etc."} {"_id": "122775", "title": "A Project's Initial Commit to Source Control", "text": "I'm probably over-thinking this and it might not matter in the scope of things, but I'm wondering how people build up initial versions of their projects in source control. I'm talking about when you start with nothing but an idea, so no forking an existing project or anything. * When do you do your commits before having an official or even unofficial version? * At what point do you start branching for features, or do you do this right away? I'm so used to working with existing code bases that I'm not really sure I know if there's a \"best\" way to build up a project history."} {"_id": "223820", "title": "I need advice for a subsystem design?", "text": "I'm doing it in C++. I can't post the entire thing because it's gigantic, so I'll just sum it up with a simple example. I have: class B; class A { //Members and methods... void DoSomething(B* owningSystem); }; class B { //Members and methods A memberA; }; So the problem is, A needs to know about B to use its DoSomething method, and A is actually a member of B. I have an array of B's, whose number is determined at run-time. This might sound like overdoing it, but I need extreme optimization in this specific project and I was wondering - is it better to pass B* owningSystem or to just store a pointer to each B inside its A member? I don't think DoSomething will ever be inlined, since their count and pointer locations are determined at run-time, but I can choose between storing a pointer vs. passing it in the function. I need to minimize the overhead at this point by as much as I can; it's critical. I'm open to any suggestions, please."} {"_id": "33195", "title": "portfolio building, working for closed-source vs open-source?", "text": "I've recently graduated from my first run at higher education, landed my first full-time gig as a web application developer, and absolutely love it. My question is that in looking for jobs I ran across many jobs that require a certain level of experience and code examples. Much of the work I am doing is both protected by a login, and closed source. How does someone who is just starting out and needs to be building a resume go about preparing for the next job? (No matter how much I love my current job, I feel like it's only responsible to always be preparing.)"} {"_id": "236800", "title": "Building a DBAL from scratch", "text": "I am considering building a **DBAL** from scratch with PHP to use within my projects and also to learn through the process. I have noticed on _SO_ and other reputable forums that whenever this is mentioned most users advise using an established DBAL like Doctrine and not \" _reinventing the wheel_ \". Why is it that with this specific piece people continuously recommend using a pre-made one? Is it that hard/problematic to create and build as you go? If so, what are the main problem points? I assume security is one. In regards to the established DBALs like Doctrine, I have downloaded and inspected the contents and noticed it's rather big. For those who have experience with this: is all of this really necessary? Do you use all of the functionality provided (and know what's happening in the backend)? From what I inspected I am finding it hard to see how I would use more than 40% of the library; along with this I have the feeling that I am not completely sure what's happening in the backend with such a big library. 
Perhaps this is a common feeling and you get over it?"} {"_id": "236802", "title": "Poker software architecture", "text": "I have some classes so far. * **Hand** stores information like SB, BB, ante, collection of Players * **HandState** inherits from class Hand. It has members like phase {POSTING BLINDS, PREFLOP, FLOP, TURN, RIVER, SHOWDOWN}, pot. I consider every action in a poker hand as a handState. Players post the ante (if any), the small blind and big blind post blinds, the first player to act (UTG) calls the blind, the 2nd player (UTG+1) raises, etc... * **Player** members: name, position, list of actions so far, stack, etc... * **Action** members: type {FOLD, CALL, RAISE, RERAISE}, amount The program receives as input the blinds, players, and which player is to act. Since the input I get is a string (it's a console program) I have to build the root handState. After I have the root handState, I can build a gameTree where every node is a handState. E.g.: stacks = \"1000,2000,2300,1400,230\"; actions = \"200\"; playerToAct = 2; BB = 400; SB = 200; function makeRoot() { stacks = makeArray(stacks); for(i=0; i < stacks.size; i++) { player = new Player; player.position = \"UTG\"; if (i > 0) player.position += i; if (i == stacks.size-2) { player.position = \"SB\"; player.bet(SB); } if (i == stacks.size-1) { player.position = \"BB\"; player.bet(BB); } } } My problem is that Player and HandState have no connection. What if a player does not have enough chips to pay the big blind (230 < 400)? The SB has enough chips, but if he bets 200, HandState's pot won't know about it, will it? Should I keep track in both classes? //HandState player.bet(200); this.pot += 200; //Player bet(amount) { this.stack -= amount; this.actions.push(new Action(amount, \"BET\")); } I hope there is a seasoned programmer with some poker knowledge who can offer an opinion and critique my approach."} {"_id": "246308", "title": "Why is subclassing TraversableOnce not recommended", "text": "Reading http://www.scala-lang.org/api/2.11.1/index.html#scala.collection.TraversableOnce: > Directly subclassing TraversableOnce is not recommended - instead, consider > declaring an Iterator with a next and hasNext method, creating an Iterator > with one of the methods on the Iterator object, or declaring a subclass of > Traversable. Why is subclassing `TraversableOnce` not recommended?"} {"_id": "246302", "title": "Should I use check or checked?", "text": "I'm designing a library that binds to HTML elements on a page. In this particular case the input[type='checkbox'] will be checked if the likeItem property returns true and unchecked if the likeItem property returns false. What's the best practice for naming properties? input { check: likeItem } or input { checked: likeItem } This would also apply to other things like enable/enabled, disable/disabled, etc."} {"_id": "10340", "title": "How do you prevent the piracy of your software?", "text": "Is it still worth it to protect our software against piracy? Are there reasonably effective ways to prevent piracy, or at least make it difficult?"} {"_id": "84649", "title": "What is the best way to become a professional in PHP and Website Building?", "text": "I would like to become a professional in PHP. I have learned nearly all of the language's syntax and concepts, and I have a good knowledge of C and C++, which made it easier to become familiar with PHP. (Of course, I learned MySQL too.) But I don't feel able to build even a decent little website of my own! 
It looks like PHP is all about knowing lots of functions and using them, while in fact I don't think it's like that, is it? How can I become a professional in PHP and Website Building? I would do anything and spend whatever amount of time is required for that. **EDIT** I also have a very good knowledge of HTML and a working knowledge of CSS and JavaScript. Sorry for not mentioning that, I just thought it was implicitly included."} {"_id": "241097", "title": "java.util.HashMap lock on actual HashMap object compare to lock on object that encapsulate the HashMap", "text": "The Javadoc below is a snippet of the HashMap documentation. Why would the authors emphasize putting a lock on the object that encapsulates a HashMap? A lock on the actual HashMap object makes more sense to me. > **Note that this implementation is not synchronized.** If multiple threads > access a hash map concurrently, and at least one of the threads modifies the > map structurally, it _must_ be synchronized externally. (A structural > modification is any operation that adds or deletes one or more mappings; > merely changing the value associated with a key that an instance already > contains is not a structural modification.) This is typically accomplished > by synchronizing on some object that naturally encapsulates the map. If no > such object exists, the map should be \"wrapped\" using the > Collections.synchronizedMap method..."} {"_id": "203157", "title": "Does data size in TCP/UDP make a difference on transmission time", "text": "While discussing the development of a network component for our game engine, a member of our team suggested that transmitting either 500 bytes or 1k of data using UDP makes no difference from a performance perspective of the system (the time it takes to transmit the data). He actually said that as long as you don't cross the MTU size, the size of the transmitted data doesn't really matter as it's all the same. Is that true for UDP? What about TCP? That sounds just plain wrong to me, but I am not a network expert. *I've been reading about other companies' game networking architectures, and it seems they're all trying to keep transmitted data to a minimum, making my colleague's claims seem even more unreasonable."} {"_id": "39772", "title": "Is this Job Smell?", "text": "Went for an interview, and from what I could gather from the interviewer: * Development is done to a 'wire-frame' specification where users get constant feedback and can change their mind at will * I could be expected to 'set up new computers' and 'configure servers' * Occasionally I will have to do unpaid overtime - particularly towards the end of a project * They use Visual SourceSafe * The company has chopped and changed quite a bit recently (name, management) * I will be doing an extra hour's work per day as compared to my current position and taking one less day's holiday * They want me to complete a test which is some development for their in-house system (perhaps they want free work done?) Now I understand all these points by themselves might not mean much, and I would be happy to do a few of them as part of my job. But all together it's making me wonder. I've yet to get salary details but I'm thinking it would have to be quite a bit higher for me to consider jumping ship. How does this position sound to you? Depending on the answers I'm tempted to link to the company here - so I'm giving them a fair chance and they can defend themselves. 
It's possible that there are some misunderstandings due to the short-ish hour-long interview."} {"_id": "39771", "title": "Do you prefix variable names with an abbreviation of the variable types? (Hungarian Notation)", "text": "In my current job, there are no coding guidelines. Everyone pretty much codes the way he wants. Which is fine, since the company is small. However, one new guy recently proposed to always use Hungarian Notation. Until now, some of us used some sort of Hungarian Notation, some of us didn't. You know, it's an engineering company, so coding styles do not really matter as long as the algorithms are sound. Personally, I feel that these little type abbreviations are kind of redundant. A well thought-out name usually delivers the same message. (Furthermore, most of our code has to run on some weirdo DSPs, where a concept like `bool` or `float` doesn't exist anyway). So, how do you feel about Hungarian Notation? Do you use it? Why?"} {"_id": "218414", "title": "How to ensure that all the customers have the latest version of web application deployed on intranet?", "text": "How do you update a web application deployed on an intranet? There is a Java Spring application that will be deployed on the intranets of multiple customers. In case of a major update, what should the developer do in order to ensure that all the customers have the latest version of his software? One option is to implement an update module in the application that automatically queries a central web repository for updates. This option doesn't work if the server where the application is installed doesn't have an internet connection. Another possibility that comes to mind is to manually send a .war file to the IT personnel of all the customers for them to install the latest version. How to deal with updates in this situation?"} {"_id": "218410", "title": "Generating a Nakagami Random Variable", "text": "Assuming the only tool for generating random numbers I have available is generating a uniformly distributed variable `u` on U(0,1). I want to generate a Nakagami random variable from it. I know I could just plug `u` into the inverse CDF of the Nakagami distribution, but unfortunately, the inverse CDF isn't trivial to compute."} {"_id": "202829", "title": "ArrayList in Java", "text": "I was implementing a program to remove the duplicates from two character arrays. I implemented these two solutions; Solution 1 worked fine, but Solution 2 gave me an UnsupportedOperationException. I am wondering why that is so. The two solutions are given below: public void getDifference(Character[] inp1, Character[] inp2){ // Solution 1: // ********************************************************************************** List list1 = new ArrayList(Arrays.asList(inp1)); List list2 = new ArrayList(Arrays.asList(inp2)); list1.removeAll(list2); System.out.println(list1); System.out.println(\"*********************************************************************************\"); // Solution 2: Character a[] = {'f', 'x', 'l', 'b', 'y'}; Character b[] = {'x', 'b', 'd'}; List al1 = new ArrayList(); List al2 = new ArrayList(); al1 = (Arrays.asList(a)); // al1 now refers to the list returned by Arrays.asList, not to the ArrayList created above System.out.println(al1); al2 = (Arrays.asList(b)); System.out.println(al2); al1.removeAll(al2); // retainAll(al2); System.out.println(al1); }"} {"_id": "202823", "title": "What did network programs use to communicate before sockets were invented (around 1983)?", "text": "Sockets were invented in Berkeley around 1983, but how did networked computer programs work before this? 
These days, pretty much everything uses sockets, so it's hard for me to imagine how else programs could communicate, and Google turned up nothing."} {"_id": "172994", "title": "Is conditional return type ever a good idea?", "text": "So I have a method that's something like this: -(BOOL)isSingleValueRecord And another method like this: -(Type)typeOfSingleValueRecord And it occurred to me that I could combine them into something like this: -(id)isSingleValueRecord And have the implementation be something like this: -(id)isSingleValueRecord { //If it is single value if(self.recordValue == 0) { //Do some stuff to determine type, then return it return typeOfSingleValueRecord; } //If it's not single value else { //Return \"NO\" return [NSNumber numberWithBool:NO]; } } So combining the two methods makes it more efficient but makes the readability go down. In my gut, I feel like I should go with the two-method version, but is that really right? Is there any case where I should go with the combined version?"} {"_id": "41773", "title": "Does TDD really work for complex projects?", "text": "I\u2019m asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests. * **Generating and maintaining mock data** It\u2019s hard and unrealistic to maintain large mock data. It\u2019s even harder when the database structure undergoes changes. * **Testing GUI** Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce the GUI scenario. * **Testing the business** In my experience, TDD works well if you limit it to simple business logic. However, complex business logic is hard to test since the number of combinations of tests (the test space) is very large. * **Contradiction in requirements** In reality it\u2019s hard to capture all requirements under analysis and design. Many times the noted requirements lead to contradictions because the project is complex. The contradiction is found late, in the implementation phase. TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be captured during the creation of tests. But the problem is that this isn\u2019t the case in complex scenarios. I have read this question: Why does TDD work? **Does TDD really work for complex enterprise projects, or is it practically limited to certain project types?**"} {"_id": "12958", "title": "Managers vs. Motivation", "text": "As a programmer, what is the thing your manager does that most decreases your motivation? My manager insists on blocking web content (this week it was MSDN content and Microsoft domain sites). This is so stupid; it makes me think I am not a reliable professional or that I am stealing his internet. And no, it is not a small business. It is a huge enterprise where such dinosaurs should not exist anymore."} {"_id": "70903", "title": "What should I consider before creating a Silverlight website?", "text": "I have opted to use Silverlight for a website. This runs in all major browsers. The application could be highly graphically intensive. What have I missed? Edit: Does it run on Android and other mobile platforms? I have since written about this here: http://carnotaurus.tumblr.com/post/4921541502/old-school-game-to-be-written-in-silverlight"} {"_id": "70907", "title": "Windows 7 Phone -- what version of Visual Studio to use?", "text": "I am an iOS developer just getting started with Windows Phone. 
At the moment, I have the free product Visual Studio 2010 Express for Windows Phone. I see that Microsoft has several higher-end IDEs, and some of them have some very fancy price tags. My question is -- will upgrading get me significant advantages? One thing I find very annoying is that unlike with Xcode, Visual Studio Express does not let me edit my source code while my app is running. (I don't need fix and continue; but it sure would be nice to be able to make changes while it is running, and have those changes applied the next time I run my app.)"} {"_id": "236053", "title": "Why must directories be empty before being deleted?", "text": "As far as I know, deleting a non-empty directory could work the same way as deleting an empty directory: by removing the pointer to the directory's metadata, there would be no pointers to the items it contained, effectively deleting all its children recursively. If that's true, then why must directories be empty before being deleted? Is it just a safeguard to prevent erasing many files at once, or a technical limitation of some (possibly ancient) file system?"} {"_id": "159895", "title": "Can WinRT really be used at just the boundaries?", "text": "Microsoft (chiefly, Herb Sutter) recommends when using WinRT with C++/CX to keep WinRT at the boundaries of the application and keep the core of the application written in standard ISO C++. I've been writing an application which I would like to leave portable, so my core functionality was written in standard C++, and I am now attempting to write a Metro-style front end for it using C++/CX. I've had a bit of a problem with this approach, however. For example, if I want to push a vector of user-defined C++ types to a XAML ListView control, I have to wrap my user-defined type in a WinRT ref/value type for it to be stored in a `Vector^`. With this approach, I'm inevitably left with wrapping a large portion of my C++ classes with WinRT classes. This is the first time I've tried to write a portable native application in C++. Is it really practical to keep WinRT along the boundaries like this? How else could this type of portable core with a platform-specific boundary be handled?"} {"_id": "236056", "title": "Are VB.NET and C#.NET projects created from Microsoft Visual Studio \"Open Source\" safe?", "text": "I'm developing software in VB.NET and C#.NET and planning to release the source code as fully open source. Are these projects \"open-source\" safe? **My doubts are:** 1. VB.NET and C#.NET use the .NET framework, which is not open source. 2. The source code depends on the compiler and the IDE; although 100% open source and compatible alternatives are provided, they are confirmed to be buggy and incompatible with my project. 3. My projects were using the Jet 4.0 OLE DB, which is not open source either. 4. Files like .Designer.vb, the Microsoft ResX Schema, the Solution file, or the .vbproj etc. are generated by Visual Studio. Maybe I don't have enough knowledge of open source in this area: whether released code can be mixed with non-open-source components, or whether code can be released as open source even though it was developed and generated with a non-open-source IDE. Am I still eligible to hold the \"open-source project\" title? Can the code be released as open source? 
If so, what kinds of open source licenses are compatible, based on the criteria above?"} {"_id": "211689", "title": "Are mutexes assigned to specific regions of memory?", "text": "I'm currently reading _C++ Concurrency in Action_ by Anthony Williams and I'm facing an obstacle in thought. First he describes deadlocks as when two threads lock simultaneously (at least, that's how I understood it), which makes sense. However, he goes on to explain how you can lock two mutexes at the same time. Obviously this has a function, but with the above (presumably wrong) understanding that would instantly deadlock both threads. From that obstacle arises a new one; obviously it won't deadlock, which means the mutexes must have a more advanced use than just allowing a single thread to do work in a process. In the book, mutexes are given to particular objects. This leads me to my overall question(s): _Are mutexes assigned to specific memory regions_, such as objects? If so, how? The book does a great job at explaining how mutexes can be used but never really describes what mutexes are at a low level. I realize they're implementation specific but I never really grasped what exactly they do or how the locking functions use them."} {"_id": "52099", "title": "Is there a better term than \"smoothness\" or \"granularity\" to describe this language feature?", "text": "One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: **how well a language scales from writing tiny programs to writing huge programs**. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself. 
* **Can be used interactively** \\- there is some environment where programmers can enter commands one by one * **Requires no more than one file** \\- neither project files nor makefiles are required for running in batch mode * **Can easily split code across multiple files** \\- files can reference each other, or there is some support for modules * **Has good support for data structures** \\- supports structures like arrays, lists, and especially classes * **Supports a wide variety of features** \\- features like networking, serialization, XML, and database connectivity are supported by standard libraries Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
Feature           C#        Python    shell scripting
---------------   ------    ------    ---------------
Interactive       poor      strong    strong
One file          poor      strong    strong
Multiple files    strong    strong    moderate
Data structures   strong    strong    poor
Features          strong    strong    strong
Is there a term that captures this idea? If not, what term should I use? Here are some candidates. * **Scalability** \\- already used to describe language performance, so it's not a good idea to overload it in the context of language syntax * **Granularity** \\- expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures * **Smoothness** \\- expresses the idea of low friction, but doesn't express anything about strength of data structures or features Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language."} {"_id": "18622", "title": "Do you still panic when you see a stack dump? Why?", "text": "A question for seasoned developers. Do you still get a sinking feeling when you see a stack dump? Any feelings of agitation, alarm, cold feet, confusion, consternation, dismay, dread, fear, trepidation? Why? For me it usually means that I'm not 100% sure about my design or I have not done enough testing of my code."} {"_id": "33228", "title": "ASP.NET AJAX and my axe!", "text": "So, I'm seriously considering axing ASP.NET AJAX from my future projects as I honestly feel it's too bloated, and at times convoluted. I'm also starting to feel it is a dying library in the .NET framework as I hardly see any quality components from the open-source community. All the kick-ass components are usually equally bloated commercial components... It was cool at first, but now I tend to get annoyed with it more than anything else. I'm planning on switching over to the jQuery library, as just about everything in ASP.NET AJAX is often easily achievable with jQuery, and, more often than not, with a more graceful solution than ASP.NET AJAX, and it has a much stronger open-source community. Perhaps it's just me, but do you feel the same way about ASP.NET AJAX? How was/is your experience working with ASP.NET AJAX?"} {"_id": "15452", "title": "What is the state of TWAIN on the Macintosh today?", "text": "I'm currently working on a project where we want to interface with TWAIN scanners on both the PC (Windows) and the Macintosh. On Windows, we basically have everything squared away and the code works successfully with the vast majority of scanners. 
On Mac OS X, we also basically have everything working, and the main scanner we used to develop the application works perfectly, but we're not having a ton of luck with other scanners. As a byproduct of development on this project, we have a fair number of scanners from various manufacturers on-hand to test with. The results vary wildly: * The scanner we used to develop with works perfectly on Mac OS X as it does in Windows. Ironically this scanner is the cheapest and crappiest scanner (speed-wise) we've ever encountered but it's been a dream to work with. * Another scanner works great - until the second or third scan, at which point the application crashes with no clear indication of what happened (we get an EXEC_BAD_ACCESS from the debugger) * Another scanner apparently has no TWAIN support on Mac OS X (no data sources in the \"Image Capture/TWAIN Data Sources\" folder), although it does have TWAIN support in Windows. * Another scanner has a generic data source that I'm thinking is supposed to cover all the possible scanners from this manufacturer, but when we try to initiate a native scan (which is a requirement for all TWAIN data sources) we get no results. Also, trying to install a second scanner from this manufacturer gums everything up and requires a manual uninstall for everything from this company. * Another scanner has a TWAIN data source that appears to be specific to the manufacturer, but it also fails to initiate a native scan (but a scan using the native GUI - which is incompatible with our project - works) So I'm not sure where to go with this. I'm still digging into the code to figure out what, if anything, we're doing wrong, but in checking against the TWAIN standard it really does look like we're doing everything right, yet we're getting very hit-or-miss results on most of the scanners we're testing against. Also, as part of the new Cocoa/Carbon Events model there's this additional consideration of a \"callback\" function that Mac OS X TWAIN data sources are supposed to implement, and I'm not seeing it called from most of these data sources/drivers. So all of this leads me to wonder - is it that we're just doing something wrong, or is TWAIN just not supported properly by and large on the Macintosh? I'm really not seeing a lot of information on TWAIN on the Macintosh online - the occasional sporadic inquiry on twainforum.org tends to go unanswered. Windows also has a thing called WIA - Windows Image Acquisition - and on the Windows side we also include this as an option. Is there something else on the Mac we should be exploring instead of or in addition to TWAIN? **NOTE:** I've posted this same question on StackOverflow but got no responses, and after consulting meta, I've decided to re-post it here (since it's more discussion than a specific question). If this is a big no-no feel free to delete this thing."} {"_id": "229183", "title": "What are my options for using a C++11 library in a C# WPF application?", "text": "I am writing a cross-platform (OS X and Windows) desktop application in C++11. I intend to use the same C++11 core on both platforms, utilizing native frameworks for the UI (Cocoa and Objective-C on OS X and WPF and C# on Windows) as I believe the best UX experience is a native one. Currently the application runs as a console app on both platforms. The application performs some CPU-intensive work and provides callbacks for progress reporting and, when complete, instantiates a collection of Items (`std::vector<std::shared_ptr<Item>>`) representing the results of the processing. 
My goal is for the C++11 library to act as a model for the UI in a manner compatible with the MVC and MVVM patterns. The UI must: * Allow the user to choose a file to process (open a file dialog and send the file path to the C++ library) * Display progress (handle callbacks from the C++ library to update a progress bar) * Display the results in a WPF form (access the Item class and display information it provides) I've looked at WinRT and it seems there isn't a lot of information out there for use in desktop applications. I'm also not fond of the idea of creating the UI itself in C++. My goal is to get data in and out of the C++ app and use C# to handle the UI, as I believe that's a more efficient way of working with WPF. I'm aware of P/Invoke but my understanding is that it only works with a C interface. Creating such an interface around the C++11 code seems cumbersome. I'm also aware of C++/CLI but I'm not sure if that will meet my needs or if it is compatible with C++11. I took a look at CppSharp but it seems to be a work-in-progress and I doubt I'd know how to work around any issues that may arise. I have a lot of experience with C++ and a little with C#, but I'm not sure if I'm missing better options or which of the above is a sound approach."} {"_id": "229184", "title": "How to put lessons learned, good practices, etc. into the \"work flow\"", "text": "As the title states, I would like to get some suggestions about putting knowledge into action. We have many additional requirements that concern: coding practices, feature development (all of them or only a subset), process, etc. The problem is that we have trouble introducing those practices into new projects, and I want to help developers and reviewers remember them, but I don't want them to have everything just in their heads, but rather in some kind of database that they can use easily. The list of practices is already defined in Excel. I would like all team members to apply these practices in their work but I don't know how to make that information easy to find. When a developer starts working on a feature, he should be able to easily find all the practices that he should use in that feature. To be clear about what I mean, I'm showing some examples of information we have to apply: * (new design request) every feature must output logs (and they must not contain any sensitive data); * (new design request) every feature has to have a flag in the configuration that allows disabling it; * (good practice) always update docs when a feature is ready; * (retrospective feedback) QA must test only on the release package (not in debug mode); * (retrospective feedback) make stress tests for every new feature implemented; * (lead's task) write release notes after each sprint that include tasks completed and open bugs; * (design usage) every event from \"ABCStoreManager\" must be disconnected after being invoked; * (design usage) try-catch every event.Invoke() call; I thought for a while about a wiki, but it's no good, because it doesn't support tagging/categories or querying, and I'm afraid that everybody would ignore it (people must know exactly where to look). My question can be summed up as: how can I improve communication to our developers about the required development methodologies and practices listed above in an easy way? Remarks: * this question is not about security issues or code smells per se; * I'm not looking for any heavy process (like RUP) or any process for that matter, which forces you to go step by step. 
Preferably I am looking for an agile approach. * Daniel Figueroa has suggested adding additional requirements to the \"definition of done\", and it seems like a good way. But the problem is that some features have, for instance, 20 additional requirements (\"all GUI features\"), some 10 (\"all server requests\"), etc. I would like to have this stuff aggregated in one place and just use links (\"see: 'GUI feature'\");"} {"_id": "237861", "title": "How to set up something like an integration server that measures the quality of code and reject the code if the score is below a certain number?", "text": "Even though I don't like forcing people to do things (and I believe that it may reduce productivity and cause anger), I really want to enforce good coding style. Is there a way to set up something like an integration server that measures the quality of code (for example it runs `resharper` or some other static checker) and rejects the code if the score is below a certain number? Also, do you think this is a good approach?"} {"_id": "237862", "title": "Choosing name for open-source project -- how to view existing trademarks/names used for other programs?", "text": "I'm trying to choose a good name for a new open-source project. The problem is, there is already so much software in the world that Google search reveals one or more existing programs with every good name I can think of (and there have been several already). Obviously, names like \"Linux\" or \"Windows\" or \"Java\" or \"Excel\" are off-limits. But what about names which may have been used by some little-known program? Does it make a difference if the name is trademarked in one or more countries? In one case, I found that there is a commercial software package marketed by a Canadian company, using a name which I wanted to use. The same name is trademarked in the US, but by a different company. I couldn't find any evidence that the holder of the trademark is actually marketing software under that name. In Canada, the name is not trademarked. In other cases, I found several programs all using the same name, some commercial, some university-student research projects. So maybe this is normal in the software industry? What if I use a name which is not trademarked, and someone else trademarks it later? Could I face legal pressure to stop using it? Would it help to avoid legal problems if I prefix the name? For example, say I want to call my project \"Broomflip\", but other software is already being marketed under that name. If, assuming I am associated with an organization called \"Hoplock\", I call it \"Hoplock Broomflip\", would that be better? I'm hoping someone with a good understanding of IP law can shed some light on some or all of the above questions. Of course, personal opinions from those without special legal knowledge are also welcome, but please try to back them up with evidence or relevant references. Anecdotal evidence is welcome."} {"_id": "229187", "title": "Is it necessary to know the science of DSP for a C++ programmer?", "text": "I am a C++ programmer. I would like to use DSP algorithms in C++. Is understanding the science behind Digital Signal Processing a prerequisite to implementing DSP algorithms?"} {"_id": "8157", "title": "Creating a platform agnostic development team hegemony", "text": "I work at a company where we have a lot of different skillsets in the development team. 
We do all of the following (generally geared towards web): * .NET (MVC, Umbraco, ASP.NET, Surface) * Java (Spring, Hibernate, Android) * PHP (Zend, CodeIgniter) * ActionScript 3 * AIR * Objective-C * HTML/JavaScript (obviously) We're trying to streamline our development process. We currently have a TeamCity server that builds and deploys .NET projects with MSBuild/MSDeploy/NAnt. What I want is something like Maven that will give us a standard project template structure that works for most projects, to allow people from different teams to move between projects easily. Currently this works on one platform because we tend to do things in a standard way for that platform (as long as certain people have been involved); however, I want to use something like Maven to standardise how a project is laid out and built. Has anyone tried anything like this before? Experiences? Books?"} {"_id": "240281", "title": "How is the Decorator Pattern actually used in practice?", "text": "I understand completely how to implement the Decorator pattern, and I also understand what its intent is. The Decorator is used in one of two cases: **As an alternative to subclassing** \\- when there are multiple characteristics that an object can have, one could use inheritance in order to create subclasses for all the possible combinations. For example, three characteristics A, B and C will result in lots of classes: A, B, C, ABC, AB, AC, BC. This results in a 'class explosion'. With Decorator, one would have three decorators A, B and C, and a class D to 'decorate' - and that's it. **As a way to expand an object's functionality during runtime** \\- we can decide which decorators to 'wrap' an object with during runtime, thus 'customizing' an object dynamically. This was just to show that I do understand what Decorator is (I also totally understand how it's implemented). **Now my question:** I'm familiar with theoretical examples of when and how to use Decorator. And as you can see, I know what its intent is. But I'm still not sure when to actually use this in practice, in an actual application. Telling me \"it's used as an alternative to subclassing\", or \"it's used to dynamically add functionality to an object\" won't be helpful, since I'm familiar with its intent. Also telling me \"think of a UI window for example. It can have a border, or not, and can be resizable, or not\" isn't helpful; I'm already familiar with these theoretical examples. **So what I'm asking for is a concrete _real world_ example of Decorator, in a practical, real-world scenario,** with a brief explanation of the benefits of using a Decorator pattern there over other techniques. * * * Just to clarify, I'm not looking for a list of applications where Decorator was utilized. I'm looking for an example of where and how Decorator was used in a design, why it was a good design choice **and the concrete problem that it solved.** When I see a **concrete problem** solved with Decorator, hopefully I'll understand it better."} {"_id": "206250", "title": "How to initialize all your references?", "text": "I have recently taken on a project with another developer, and he has a certain way of initializing his references. class Player { private: Console &console; Armor &armor1, &armor2; Debugger &debugger; sf::RenderWindow &window; sf::Event &event; public: Player(Console &console, Armor &armor1, ...) : console(console), armor1(armor1), ... {} }; And it's perfectly fine with me, but what if we add new stuff? 
Our constructor is massive and messy. I would like to know if there are better ways of initializing your references if you have a large project, because if we keep this up, eventually our constructor will have more lines of initialization than actual logic."} {"_id": "34440", "title": "What would one call this architecture?", "text": "I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on that host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a server-client architecture. After all, you have exactly one organizing entity and many entities that communicate with said entity. However, while the supposed clients communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once the test run has been triggered they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a server-client architecture, or is this something different?"} {"_id": "171203", "title": "What are the differences between server-side and client-side programming?", "text": "> I've seen questions (mainly on Stack Overflow), which lack this basic > knowledge. The point of this question is to provide good information for > those seeking it, and those referring to it. In the context of web programming, what are the differences between server-side programming and client-side programming? Which languages belong to which, and when do you use each of them?"} {"_id": "245895", "title": "What does programming a server exactly mean?", "text": "I searched around a lot, but couldn't find it. I know what server-side programming is and I've done it myself -- getting users' data, storing it in a database etc. using a server-side programming language. My question is: is this what we call \"programming a server\"? Or is it some other, more complicated task? Thanks. :) EDIT: This is NOT a duplicate of the other question. This question seeks a different answer than what is given in the question claimed to contain the answer. Okay, crystal-clear question: (it's getting kinda funny now :D ) Is there a difference between \"server-side programming\" (from my understanding: handling user input received via POST, GET, databases and all) and \"programming a server\"?"} {"_id": "188561", "title": "A simple definition of client-server", "text": "I'm looking for a simple definition of the concept of \u201cclient-server\u201d. I'd like something similar to this definition of **_state_**: > ... That \"thing/information\" that you need to remember is called \"state\". Edit - This isn't a homework question (nor am I a student). My goal is to come up with a compact way of explaining REST to average developers. I didn't want to prejudice the response though."} {"_id": "215734", "title": "Are VB.NET to C# converters actually compilers?", "text": "Whenever I see programs or scripts that convert between high-level programming languages, they are always labelled as converters. \"VB.NET to C# converter\" on Google results in expected, useful hits. 
However, \"VB.NET to C# compiler\" on Google results in things like comparisons between the C# and VB.NET compilers and other hits that are not quite what you'd be looking for. Webopedia defines Compiler as > A program that translates source code into object code Eric Lippert, in an answer to \"How do I create my own programming language and a compiler for it\", suggests: > One of the best ways to get started writing a compiler is by writing a high-level-language-to-high-level-language compiler. Is a VB.NET to C# _converter_ really just a _compiler_? If not, what separates the converter from also being a compiler?"} {"_id": "86734", "title": "Why should one want to disable compiler warnings?", "text": "This answer and the comments added to it show a way to disable several compiler warnings using `#pragma` directives. Why would one want to do that? Usually the warnings are there for a reason, and I've always felt that they're good reasons. Is there any \"valid case\" where warnings should be disabled? Right now I can't think of any, but maybe that's just me."} {"_id": "212151", "title": "A better alternative to incompatible implementations for the same interface?", "text": "I am working on a piece of code which performs a set task in several parallel environments where the components of the task play similar roles but behave quite differently. This means that my implementations are quite different, but they are all based on the relationships between the same interfaces, something like this: IDataReader -> ContinuousDataReader -> ChunkedDataReader IDataProcessor -> ContinuousDataProcessor -> ChunkedDataProcessor IDataWriter -> ContinuousDataWriter -> ChunkedDataWriter So that in either environment we have an IDataReader, IDataProcessor and IDataWriter, and then we can use Dependency Injection to ensure that we have the correct one of each for the current environment; so if we are working with data in chunks we use the `ChunkedDataReader`, `ChunkedDataProcessor` and `ChunkedDataWriter`, and if we have continuous data we have the continuous versions. However, the behaviour of these classes is quite different internally, and one could certainly not go from a `ContinuousDataReader` to the `ChunkedDataReader` even though they are both `IDataReader`s. This feels to me as though it is incorrect (possibly an LSP violation?) and certainly not a theoretically correct way of working. It is almost as though the \"real\" interface here is the combination of all three classes. Unfortunately, in the project I am working on, with the deadlines we are working to, we're pretty much stuck with this design, but if we had a little more elbow room, what would be a better design approach in this kind of scenario?"} {"_id": "212152", "title": "Importance of data structures in modern S/W development", "text": "During the 1990s, when there were no advanced frameworks/paradigms available for S/W development, knowledge of data structures was critical... which I can comprehend. But nowadays, for most problems (at least in Java/Android development) we can rely on an existing class to provide a solution. As such I believe in-depth knowledge of data structures is not needed for someone who is stepping into programming now. Is my belief right?"} {"_id": "214658", "title": "How to refactor when all your development is on branches?", "text": "At my company, all of our development (bug fixes and new features) is done on separate branches. 
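For the compiler-warning question above (86734), one commonly cited valid case is silencing noise coming from third-party headers you cannot fix, scoped with push/pop so your own code keeps full warnings. A minimal sketch (the header name is hypothetical):

    // MSVC: disable warning C4996 only around the offending include.
    #pragma warning(push)
    #pragma warning(disable : 4996)
    #include "third_party_header.h"
    #pragma warning(pop)

    // GCC/Clang equivalent:
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
    #include "third_party_header.h"
    #pragma GCC diagnostic pop

The point of the push/pop pairing is that the suppression cannot leak past the code you deliberately exempted.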
When it's complete, we send it off to QA, who test it on that branch, and when they give us the green light, we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long it will be \"out\" for, so it can cause many conflicts when it's merged back in. For example, let's say I want to rename a function because the feature I'm working on is making heavy use of this function, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function, and rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches that are being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA, and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring."} {"_id": "214659", "title": "Is it realistic to use designers for complex UI", "text": "When I created my first web application a few years ago, I remember trying to use the designer in VS 2008. Before long I abandoned it in favor of actually debugging to see how my UI would really look. Since then, I have given it a few more shots, but I was never convinced that it's even possible to build a complex UI with it. How much progress have designers made in recent years? Are there any designers that can enable a real-time view of a highly complex UI? Is it programming language/framework dependent? Thanks."} {"_id": "245029", "title": "I don't know how to understand the Wildcard type in Java", "text": "I am reading Core Java (9th edition) by Cay S. Horstmann and Gary Cornell. Despite my efforts, I still cannot understand **? super Manager**. Here are some materials relating to this question. public class Pair<T> { private T first; private T second; public Pair(T first, T second) { this.first = first; this.second = second; } public void setFirst(T first) { this.first = first; } public void setSecond(T second) { this.second = second; } public T getFirst() { return this.first; } public T getSecond() { return this.second; } } **Manager** inherits from **Employee**, and **Executive** inherits from **Manager**. _Pair_ has methods as follows: void setFirst(? super Manager) ? super Manager getFirst() Since _? super Manager_ denotes any supertype of **Manager**, why can I not call the _setFirst_ method with an **Employee** (it is obvious that Employee is a supertype of Manager), but only with type **Manager** or a subtype such as **Executive** (but Executive is a subtype of Manager, not a supertype of Manager)?"} {"_id": "245025", "title": "Why do people use markdown wysiwyg editors in web applications?", "text": "I understand the advantages of using markdown in local text files, and then taking those files and generating HTML. This is great for documentation, READMEs in git repositories and posts for static site generators. It is very simple to create markdown syntax in any text editor, it's human readable, and you can create HTML from markdown relatively easily. 
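For the wildcard question above (245029), the behaviour follows from what the compiler can prove about `? super Manager`: the type argument is some unknown supertype of Manager, so the only values that are safely assignable to it are Manager and its subtypes. A minimal Java sketch, assuming the question's Pair<T> and the Employee/Manager/Executive hierarchy:

    Pair<? super Manager> pair =
        new Pair<Employee>(new Employee(), new Employee()); // fine: Employee is a supertype of Manager

    pair.setFirst(new Manager());    // OK: a Manager is-a T no matter which supertype T really is
    pair.setFirst(new Executive());  // OK: an Executive is-a Manager, hence is-a T
    // pair.setFirst(new Employee()); // rejected: if T happened to be Manager, an Employee is not a Manager

    Object first = pair.getFirst();  // reads only come back as Object

In short, `? super Manager` constrains what the Pair was instantiated with, not what setFirst accepts: setFirst must be safe for every possible T, and Manager (or below) is the only safe bet.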
What I don't understand is the advantage of using a wysiwyg editor for markdown in web applications, especially those that use a database server (non-flat-file). I thought the whole point of using markdown was to remove the need for a wysiwyg editor. What is the advantage of using a markdown wysiwyg editor over an editor that just outputs HTML (avoiding the overhead of converting markdown to HTML in the process)? **clarification:** basically I'm looking for technical advantages to having a WYSIWYG editor output markdown instead of HTML. To the end user the text form will look the same, so it's only dealing with markdown/html in the backend of an application. Anchor CMS sorta does this, and I'm wondering if there is an advantage over just plain HTML output."} {"_id": "213681", "title": "What are the best practices and pitfalls of doing a JS app powered purely by RESTful API?", "text": "We are starting to build a new app and I would like to explore the idea of doing a thick JS client (backbone / angular) with only a RESTful API exposed by our application layer. What are some of the best practices and potential pitfalls for doing this kind of web app?"} {"_id": "245021", "title": "Need to provide an interface (for plugins) for taking input Type A, and returning output Type B", "text": "public interface IMyInputProviderPlugin { IMyOutput Provide(IMyInput data); } This is an interface I need to provide so that I can dynamically load the DLLs and not have them bound to my implementation. However, upon reading this, it seems like this is an anti-pattern? http://blog.ploeh.dk/2011/04/27/Providerisnotapattern/ I'm confused: how does one do this, if not this way?"} {"_id": "245020", "title": "How to create a scoring system with time and correct answers for a game?", "text": "I have a small mobile quiz game, which consists of 30 questions, and a timer which starts from 0 seconds and goes all the way up to 1 hour. Below you can see that my timer starts from 0, and it is displayed in the format MM:SS. var timestamp = new Date(0, 0, 0, 0, 0, 0); function pad(n) { return (\"0\" + n).slice(-2); } Number.prototype.pad = function (len) { return (new Array(len+1).join(\"0\") + this).slice(-len); } So, what I actually need is some kind of formula, or system, in order to produce a final score. The more correct answers a user has, and the faster they finish the quiz, the more points they should get. I know that this is kind of an unrelated question for this forum, but I'm kind of stuck. I would like to hear your opinions about the scoring system. The smallest score should be 0, and the highest, well, no limit."} {"_id": "40420", "title": "Why Groovy(Java)?", "text": "I am looking for a new language to pick up and found out about Groovy. According to the website, the language is an 'agile dynamic' language. 1. How is it Agile? 2. How much shorter is the syntax compared to Java's? 3. Can I use existing Java libs out there? 4. Lastly, can anyone share their experience with this lingo? What do you love and hate?"} {"_id": "230077", "title": "Stopping spiders or unauthorised users from viewing files", "text": "I am stumped by this problem. Users authenticate at my client's website. Then the users see their profile page on my client's website. My client now wants me to provide a URL on their profile page, for his users, to download information from an external website. (This external website is not controlled by my client). 
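For the quiz-scoring question just above (245020), one simple formula that satisfies \"more correct answers and less time means more points\" is a per-question base score scaled by a time bonus. A minimal JavaScript sketch; the weights (100 points per answer, the one-hour cap) are arbitrary assumptions to tune:

    // correct: number of right answers (0-30)
    // seconds: total time taken, capped at one hour (3600 s)
    function finalScore(correct, seconds) {
        var base = correct * 100;                                 // 100 points per correct answer
        var timeBonus = (3600 - Math.min(seconds, 3600)) / 3600;  // 1.0 = instant, 0.0 = full hour
        return Math.round(base * (1 + timeBonus));                // a fast perfect run doubles the base
    }

    finalScore(30, 600); // 30 correct in 10 minutes -> 5500

This gives 0 for zero correct answers regardless of time, and scales smoothly between the base score and double the base depending on speed.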
That is to say, upon clicking this URL, it links out to an external website and gets a personalized file. As the information is considered sensitive (not critical), the external website should not allow just anyone to download the personalized file. I'm stuck figuring out how to allow only one authentication (on my client's website) instead of having the end users authenticate twice (once on my client's website and separately on the external website). Any ideas?"} {"_id": "212484", "title": "How to create a lightweight WPF application including .NET Framework 4.5?", "text": "I want to create an application which is dependent on .NET Framework 4.5. If I bundle the framework with the application setup, the size of the application increases too much, but I need to bundle .NET Framework 4.5 with my setup. Is there any other option to bundle the framework with the application setup besides downloading it from the website during installation?"} {"_id": "230072", "title": "Is there a practical use in learning Brainfuck?", "text": "Brainfuck is an esoteric programming language created in 1993 by Urban M\u00fcller. It was designed to challenge and amuse programmers, and was not made to be suitable for practical use. But still, it exists and some really cool stuff is written with it. My question is - will learning/practicing Brainfuck increase the depth of my knowledge of programming? For example: as a high-level programmer (using high-level languages) will it improve my understanding of low-level methods and operations? Or is it really just a joke used to spend some time and amuse, by showing code that really only looks like \"code\", in the sense that it's hard to read/understand?"} {"_id": "212482", "title": "Casual projects on GitHub omit error checking, logging, etc., for the sake of clarity?", "text": "I just started using GitHub to socialize some projects for simple chat and peer-to-peer apps. With respect to coding, is it customary to omit exception handling, error checking, logging, etc., to promote easy-to-understand and 'clean'-looking code, if those additions are not completely necessary to actually run the code? For instance, here are two extreme examples: if( msg != null){ NodeEvent event = NodeEvent.valueOf(msg.getAction()); switch (event) { case NODE_CONNECTING: if( nodeName.isInvalid()) Utils.log(\"Invalid node name, ignoring...\"); else{ String existingNodeAddress = _nodeNameAddressMap.get(nodeName); if( existingNodeAddress != null) Utils.log(\"Node address for '\" + nodeName + \"' updated\"); else Utils.log(\"New node '\" + nodeName + \"' connected\"); _nodeNameAddressMap.put(nodeName, nodeAddress); } } } Here's the same snippet minus the fluff; although this version can easily crash if fed bad data, it will otherwise compile and run fine: NodeEvent event = NodeEvent.valueOf(msg.getAction()); switch (event) { case NODE_CONNECTING: _nodeNameAddressMap.put(nodeName, nodeAddress); The clean version is easier to read, obviously, but is brittle. The verbose version is safer; it just looks cluttered. I'm just curious to know if it's more effective for social projects to go for the cleaner look at the expense of fragility, allowing those who fork the project to fill in the blanks. 
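For the Brainfuck question above (230072), much of the educational value is that the entire language is a thin veneer over a tape and a pointer, which is easiest to see from how small an interpreter is. A minimal Python sketch (the input command ',' and all error handling are omitted for brevity):

    def run_bf(code):
        tape, ptr, pc, out = [0] * 30000, 0, 0, []
        jump, stack = {}, []
        for i, c in enumerate(code):          # precompute matching brackets
            if c == '[': stack.append(i)
            elif c == ']': j = stack.pop(); jump[i], jump[j] = j, i
        while pc < len(code):
            c = code[pc]
            if   c == '>': ptr += 1
            elif c == '<': ptr -= 1
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '.': out.append(chr(tape[ptr]))
            elif c == '[' and tape[ptr] == 0: pc = jump[pc]
            elif c == ']' and tape[ptr] != 0: pc = jump[pc]
            pc += 1
        return ''.join(out)

Everything a high-level language provides -- variables, conditionals, data structures -- has to be hand-built from those few tape operations, which is exactly the low-level intuition the question asks about.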
A quick survey of GitHub projects makes it clear that most go for the clean look."} {"_id": "230070", "title": "Microeconomical simulation: coordination/planning between self-interested trading agents", "text": "In a typical perfect-information strategy game like Chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. Mini-max). I would like to use this approach in a \"game\" modeling economic activity, where the possible \"moves\" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. Checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget). How do I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply -- but the idea of this simulation is to examine how prices emerge when freely determined by \"perfectly rational\" agents. A great example of what I do _not_ want is the trading algorithm in SugarScape -- paraphrasing from _Growing Artificial Societies_ p101-102: > when a pair of agents interact to trade, they each compute their internal > valuations of the goods, then a bargaining process is conducted and a price > is agreed to. If this price makes both agents better off, they complete the > transaction The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability of an agent to pay more than it might otherwise for a good, because it knows that it can sell it for even more at a later date -- what appears to be called \"strategic thinking\" in this paper at Google Books: Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003. To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system, which could at best only cover situations that were planned for at compile-time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades. Note: The chess analogy only works as far as the state-space search goes; the simulation isn't intended to be \"zero sum\", so a literal mini-max search wouldn't be appropriate -- and ideally, it should work with more than two agents."} {"_id": "212480", "title": "How do I alter an open source library for my own use?", "text": "I'm programming in Objective-C. I want to include a camera library, DLCImagePicker, in my project. There is a variable I need to change, but it is private and I have no access to it. I've thought of a few ways to get around this, but am not sure which is the best-practice way to do it. 1. I can just copy the code into my own class and alter the variable. 2. I can alter it directly in the original class. However, I have included this code as a git submodule, and I'm thinking that touching the source code is a bad idea in this case, and in general. 3. Subclassing. I can't imagine how this would work, as the variable is private. 4. I can perform some action with git that would create my own fork/branch of the project, and alter it there. 
I have never done this and am very primitive with git."} {"_id": "14508", "title": "Something like LinqPad for C/C++?", "text": "I love LinqPad, not just for the LINQ, but also for the various simple compile options it provides, like single expressions, statements and programs for C# and VB. Does anyone know of something similar to this for C/C++?"} {"_id": "234987", "title": "calling an abstract method in abstract class", "text": "Suppose I have an abstract base class Parent which defines an abstract method A taking a parameter; it also defines an instance method B which calls method A inside its body, passing some parameter. Then we have a concrete child class which extends the base class and provides its own implementation of method A using the passed parameter. I have seen many implementations of this practice in frameworks. I want to know how it works and whether it provides any benefits. Here is some code written for Android: public abstract class ParentActivity extends Activity { protected LoginToken token; public abstract void onResume(LoginToken token); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Some implementation code } @Override public void onResume() { super.onResume(); // Some implementation code onResume(token); } } public class ChildActivity extends ParentActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Some implementation code } @Override public void onResume() { // Some implementation code super.onResume(); } @Override public void onResume(LoginToken token) { this.token = token; } }"} {"_id": "186581", "title": "When to import names into the global namespace? (using x::y, from x import y etc.)", "text": "I've been programming in various languages for about 10 years now. And I still haven't figured out when it is a good idea to import something into the global namespace (`using x::y` in C++, `from x import y` in Python etc.), so I hardly ever do it. It almost always seems like a bad idea to me, if only because it limits the set of variable names I can use. For example: were I to use `using namespace std;` or `using std::string;` in C++, I couldn't use `string` as a variable name anymore, which I occasionally do (e.g. for string utility functions). But I'm wondering: Are there some situations where importing a name into the global namespace really makes sense? Any rules of thumb?"} {"_id": "208978", "title": "Is it bad to place \"include directive\" within main function?", "text": "It is always said that `include` directives should be placed at the beginning of a source file. The main reason is to make the declared names available throughout the file. Regardless of this fact, is it bad to place an `include` directive within the main function where it is needed? For example, #include <header1> int main() { #include <header2> } instead of #include <header1> #include <header2> int main() { } I was unable to find any difference in performance or compiled file size, but it is difficult to judge this based on simple programs. I wonder if it has any effect or drawback, or any reason to avoid this non-standard approach."} {"_id": "207705", "title": "What is global mutable variable behaviour in dynamically-linked libraries?", "text": "When a dynamically linked library includes a global mutable variable, such as a container for state initialised when loading the library, how do references to that variable behave when running an application that links against it? 
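For the library-modification question above (212480), option 4 is the usual practice: fork the repository, keep the change on a branch in the fork, and point the submodule at the fork so upstream fixes can still be merged in. A hedged sketch of the commands, with hypothetical repository names:

    # one-time setup: fork DLCImagePicker on GitHub first (e.g. you/DLCImagePicker)
    cd vendor/DLCImagePicker
    git remote rename origin upstream
    git remote add origin https://github.com/you/DLCImagePicker.git
    git checkout -b expose-private-variable    # your patch lives on this branch
    # ... edit the source, commit ...
    git push -u origin expose-private-variable

    # later, to pick up upstream fixes:
    git fetch upstream
    git merge upstream/master

You would also update the submodule URL in .gitmodules (followed by `git submodule sync`) so fresh clones of your project fetch the fork instead of the original.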
Obviously the application cannot alter memory allocated to the dynamically-linked library by the OS, as that would have implications for the other applications using it, so one must assume references to the global mutable variables are rewritten to refer to some R/W memory space owned by the application. But exactly how do the compiler and linker collude to accomplish this?"} {"_id": "207700", "title": "What are the characteristics or features of production-quality code?", "text": "This is the first time I will be delivering code for a freelance project (web-app), and, since I don't have much experience shipping code, I am having a hard time deciding whether my program is ready for deployment or not. My understanding is that production-level code must have the following characteristics: * **Fault tolerance**: ability to survive uncaught exceptions * **Data redundancy**: never lose user data * **Scalability**: handling extra load should not require re-writing the app * **Test coverage**: a \"decent\" amount of code tested Some of these characteristics are specific to the program itself, while others are more environment-related (whether using multiple clusters). However, even the environment-dependent characteristics do affect the way the program is designed. My question then is: what are the other characteristics that make production code so different from code not meant for production? Just to reduce the scope of the question, please focus only on **web apps**. **Edit**: I will try to narrow down the scope by asking for characteristics specific to my situation. As a freelancer, I was responsible for everything from purchasing a VPS, to configuring it, to writing the code, to deploying it. Although the project and its setup are well documented, the customer will not be able to maintain it. The app is not complex, but it depends on a lot of external components, which makes it really prone to breaking if these components change/disappear. The goal is to set up a service that would be able to last as long as possible without the customer's maintenance."} {"_id": "136920", "title": "If competition is using 'lingua obscura' for development (why) should I be worried?", "text": "I was reading Paul Graham's essay - Beating The Averages (2003) - and here's what he had to say: > The more of an IT flavor the job descriptions had, the less dangerous the > company was. The safest kind were the ones that wanted Oracle experience. > You never had to worry about those. You were also safe if they said they > wanted C++ or Java developers. If they wanted Perl or Python programmers, > that would be a bit frightening -- that's starting to sound like a company > where the technical side, at least, is run by real hackers Now, this is a dated essay. However, I fail to see how using a commonplace language (C/C++/Java, C#) would make a company _'less dangerous'_. If the programmers of an organization are very fluent in their development language, they should be equally adept at cranking out code at a decent pace. In fact, if you do use a non-commonplace language, won't maintenance/enhancement problems hit you in the face in the long run, since not too many programmers would be available? I agree that for making quick-n-dirty systems, some languages allow you to take off relatively sooner than others. But does Paul Graham's essay/comment make sense in 2012 and beyond? If a startup were to use _typical IT_ languages for development, why should its competition be less worried? 
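For the dynamic-linking question above (207705), the short answer is that no rewriting by the application is needed: each process gets its own private copy of the library's writable data segment via copy-on-write mapping, and position-independent code in the library reaches the variable through the Global Offset Table, which the dynamic linker fills in at load time. A tiny C sketch of the situation, offered as an illustration of the usual ELF/Linux behaviour rather than a statement about any particular toolchain:

    /* libcounter.c -- built with something like: cc -fPIC -shared -o libcounter.so libcounter.c */
    int counter = 0;                  /* global mutable state, lands in the library's .data */
    void bump(void) { counter++; }    /* PIC: the store is routed through a GOT entry       */

    /* main.c -- built with something like: cc main.c -L. -lcounter */
    #include <stdio.h>
    extern int counter;
    void bump(void);
    int main(void) {
        bump(); bump();
        printf("%d\n", counter);      /* prints 2 -- this process's private copy only */
        return 0;
    }

Two applications linking the same library therefore see independent values of counter; only the read-only code pages are physically shared between them.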
I fail to see how the language itself makes a difference. IMHO it's the developers' experience with the language that matters, and the availability of frameworks so that you DRY (don't repeat yourself), not just coding in a particular language. What is it that I'm missing? Does it imply that startups had better choose non-IT-flavored languages (even if the developers may be extremely adept at them)? What are the (programming) economic/market forces behind this claim? PS: 'lingua obscura' is not meant to hurt anyone's feelings :)"} {"_id": "158383", "title": "Developing Razor Web Pages - Visual Studio and/or WebMatrix?", "text": "When I started learning about Web Pages I followed several of the Microsoft tutorials, all of which utilized WebMatrix. I did this until I realized that WebMatrix offered no debugging. Wait? What?? No way. And I quickly moved my Web Pages development to Visual Studio. While there were some things that I thought were nicely done in WebMatrix, I have found nothing that I cannot do in Visual Studio. Also, unless someone who knows better than me enlightens me, I am vehemently against developing the same project with two different IDEs; it just seems like an unnecessary risk to me. Nevertheless, I would like to ask for your thoughts/insights about this. I am very curious to know if any of you use them in tandem and, if so, what aspects of each do you utilize the most? And, of course, does it cause problems?"} {"_id": "158382", "title": "Cloud computing platforms only have one CPU. Does this mean I shouldn't use Parallel Programming?", "text": "Almost every cloud instance I can find only offers one CPU. Why is it only one CPU now, and should I expect this to increase in the future? Does this design impact my code design so that I exclude technologies like the Task Parallel Library? This is on topic for Programmers.SE because it impacts the long-term scalability of multi-threaded code on cloud platforms."} {"_id": "69979", "title": "Questions for Architecture with Ruby and Java", "text": "I am in the research phase of a project that needs to make use of 3rd party libraries that are in Java, so I am stuck using Java to at least a small degree. I am considering implementing Ruby as the front-end layer. That would leave me with the option of calling the Java classes as a web service by hitting a certain URL; is that correct? Is that generally a solid approach? Also, how would it affect my build environment when I create the .war file using Ant? Should I have src/java and src/ruby code, or do the Ruby files go somewhere else? Any tips and pitfalls on this? I am a little bit new to Ruby, so advice would be appreciated. Also, I considered JRuby, but it seemed like a less mature tool that just adds a JVM for smoother calling of Java classes, correct? I am concerned about the maturity of that tool and its being neither Java nor Ruby."} {"_id": "238511", "title": "MVC4 : How to create model at run time?", "text": "In my project I am dynamically creating a table by giving a table name (e.g. student), adding fields to that table and then saving the table. Now, my table is created in the SQL Server database. Assume the table has data and I want to show the data of the table (e.g. student) in a WebGrid. But the problem is I have no model for that table (e.g. student). 
So, how can I create a model for the newly created table, or how can I show the data in a WebGrid?"} {"_id": "43214", "title": "How do you educate your teammates without seeming condescending or superior?", "text": "I work with three other guys; I'll call them Adam, Brian, and Chris. Adam and Brian are bright guys. Give them a problem; they will figure out a way to solve it. When it comes to OOP, though, they know very little about it and aren't particularly interested in learning. Pure procedural code is their MO. Chris, on the other hand, is an OOP guy all the way -- and a cocky, condescending one at that. He is constantly criticizing the work Adam and Brian do and talking to me as if I must share his disdain for the two of them. When I say that Adam and Brian aren't interested in learning about OOP, I suspect Chris is the primary reason. This hasn't bothered me too much for the most part, but there have been times when, looking at some code Adam or Brian wrote, it has pained me to think about how a problem could have been solved so simply using inheritance or some other OOP concept instead of the unmaintainable mess of 1,000 lines of code that ended up being written instead. And now that the company is starting a rather ambitious new project, with Adam assigned to the task of getting the core functionality in place, I fear the result. Really, I just want to help these guys out. But I know that if I come across as just another holier-than-thou developer like Chris, it's going to be massively counterproductive. I've considered: 1. Team code reviews -- everybody reviews _everybody's_ code. This way no one person is really in a position to look down on anyone else; besides, I know I could learn plenty from the other members on the team as well. But this would be time-consuming, and with such a small team, I have trouble picturing it gaining much traction as a team practice. 2. Periodic e-mails to the team -- this would entail me sending out an e-mail every now and then discussing some concept that, based on my observation, at least one team member would benefit from learning about. The downside to this approach is I do think it could easily make me come across as a self-appointed expert. 3. Keeping a blog -- I already do this, actually; but so far my blog has been more about esoteric little programming tidbits than straightforward practical advice. And anyway, I suspect it would get old pretty fast if I were constantly telling my coworkers, \"Hey guys, remember to check out my new blog post!\" This question doesn't need to be specifically about OOP or any particular programming paradigm or technology. I just want to know: **how have you found success in teaching new concepts to your coworkers without seeming like a condescending know-it-all?** It's pretty clear to me there isn't going to be a sure-fire answer, but any helpful advice (including methods that have worked as well as those that have proved ineffective or even backfired) would be greatly appreciated. * * * **UPDATE**: I am **not** the Team Lead on this team. Chris is. * * * **UPDATE 2**: Made community wiki to accord with the general sentiment of the community (fancy that)."} {"_id": "238513", "title": "Is allocating objects from a memory-pool a security anti-pattern?", "text": "In the wake of the Heartbleed bug, OpenSSL has been rightly criticised for using its own freelist. With plain old `malloc`, the bug would have almost certainly been found long ago. However, some kind of malloc wrapping is actually very common. For example, when I have a big object that owns many small, fixed-size objects, I often allocate them from an array. /* BigObj is allocated with `malloc`, but LittleObj using `alloc_little`. */ typedef struct { /* bigObj stuff here */ int nlittle; LittleObj littles[MAX_LITTLE]; } BigObj; LittleObj *alloc_little(BigObj *big) { if(big->nlittle == MAX_LITTLE) return NULL; return big->littles + big->nlittle++; }
For example when I have a big object that owns many small, fixed sized objects, I often allocate them from an array. /* BigObj is allocated with `malloc`, but LittleObj using `alloc_little`. */ typedef struct { /* bigObj stuff here */ int nlittle; LittleObj littles[MAX_LITTLE]; } BigObj; LittleObj *alloc_little(BigObj *big) { if(big->nlittle == MAX_LITTLE) return NULL; return = big->littles + big->nlittle++; } `alloc_little` is faster than `malloc()`, but the main reason is simplicity. There is no `free_little` to call, because the `LittleObjs` just remain valid until the `BigObj` is destroyed. So is this an anti-pattern? If so, can I mitigate the risks, or should I just abandon it? Similar questions go for any kind of memory pool, such as GNU Obstacks and the `talloc` used in Samba."} {"_id": "174907", "title": "Can I use EPL licensed libraries and not Give out the Source code of my Application?", "text": "If I use EPL licensed software ( namely Eclipse jars ) in an application, do I have to give the users the source code and the right to redistribute? If that is not the case, what rules should I follow if I use EPL licensed software libraries in an application that I wish to distribute?"} {"_id": "254201", "title": "Why should ViewModel route actions to Controller when using the MVCVM pattern?", "text": "When reading examples across the Internet (including the MSDN reference) I have found that code examples are all doing the following type of thing: public class FooViewModel : BaseViewModel { public FooViewModel(FooController controller) { Controller = controller; } protected FooController Controller { get; private set; } public void PerformSuperAction() { // This just routes action to controller... Controller.SuperAction(); } ... } and then for the view: public class FooView : BaseView { ... private void OnSuperButtonClicked() { ViewModel.PerformSuperAction(); } } Why do we not just do the following? public class FooView : BaseView { ... private void OnSuperButtonClicked() { ViewModel.Controller.SuperAction(); // or, even just use a shortcut property: Controller.SuperAction(); } }"} {"_id": "81427", "title": "Can Agile and ISO 9001 interact well?", "text": "There are few academic papers addressing the relationship between lean software development and the practices covered by ISO 9001. Most articles says that _the divergence between these approaches is big_ , but some also point that _these concepts can be complementary and gains are much higher when using both approaches_. Academically it is very beautiful, but in practice is it anyway? So here's the question: do you work or worked at companies applying both Agile as ISO 9001? What is your perception? What is really good and what is inappropriate?"} {"_id": "201701", "title": "Object Initializer in C# problem with readability", "text": "I wonder if object initializing have some performance gain in ASP.NET website. I have an office mate that told me that object initialization is much readable and faster than constructor. But one of my office mate disagree with that and said that it always depends. For example I am doing this: using (var dataAccess = new DatabaseAccess { ConnectionString = ConfigurationManager.ConnectionStrings[\"test\"].ConnectionString, Query = query, IsStoredProc = true }) { //some code here } Is it better if I will do this way? 
using (var dataAccess = new DatabaseAccess()) { dataAccess.ConnectionString = ConfigurationManager.ConnectionStrings[\"test\"].ConnectionString; dataAccess.Query = query; dataAccess.IsStoredProc = true; } Thanks in advance guys! --- **EDIT** --- Is this somehow better: using ( var dataAccess = new DatabaseAccess { ConnectionString = ConfigurationManager.ConnectionStrings[\"test\"].ConnectionString, Query = query, IsStoredProc = true } ) { //some code here }"} {"_id": "201702", "title": "Similar references to themselves in two classes", "text": "How can I make one class (base, generic or something else) from these two classes? class A { A Link { get; set; } } class B { B Link { get; set; } } **UPD:** This is what I have now: class BSTree { public BSTNode Root { get; set; } } class AVLTree { public AVLNode Root { get; set; } } class BSTNode { public BSTNode Parent { get; set; } public BSTNode Left { get; set; } public BSTNode Right { get; set; } public int Key { get; set; } } class AVLNode { public AVLNode Parent { get; set; } public AVLNode Left { get; set; } public AVLNode Right { get; set; } public int Key { get; set; } public int Height { get; set; } } And finally I want to get something like BaseNode and BaseTree."} {"_id": "127654", "title": "What's a good notation for showing MVC interactions?", "text": "I'm developing various websites and functionality to cater to various use-cases in Django. Is there a good notation for showing what behaviour happens at each stage, e.g., swimlanes? I use BPMN 2 notation for everything, but feel that I am overusing this, and probably abusing the notation :P Example: ![BPMN 2.0 diagram](http://i.stack.imgur.com/PMCzW.png) Please recommend a suitable **notation for showing the interactions between model, view and controller, as well as some of the inner business logic in each stage**."} {"_id": "103361", "title": "How to deal with potential entrepreneur developers in an IT startup", "text": "In a software startup, how can we deal with the main developers/programmers of the company who are good at design and communication, and also have entrepreneurial skills? How do we get them to work for us and avoid any kind of competition from them with our business? How can we convince the lead programmer that our startup is the place for him?"} {"_id": "103363", "title": "Mahout's Flexibility for generating recommendations", "text": "I am currently working on a system that will generate product recommendations like those on Amazon: _\"People who bought this also bought this...\"_ **This is how I plan to do it**: * Process the data using Apache Mahout and generate recommendations (data is stored in MySQL), currently item-based only. * Apply a clustering algorithm to make clusters of users and then apply the association rules specific to each cluster. **My questions are:** * Will Mahout provide me enough flexibility to tweak the existing algorithms according to my needs? If yes, how? 
* Is there a better alternative, like R?"} {"_id": "155093", "title": "Why is it better to use external JavaScript or libraries; and is it preferred to use jQuery, meaning more security?", "text": "I read this article, Unobtrusive JavaScript with jQuery, and I noticed these points on slide **11**: * some companies strip JavaScript at the firewall * some run the NoScript Firefox extension to protect themselves from common XSS and CSRF attacks * many mobile devices ignore JavaScript entirely * screen readers **do** execute JavaScript but accessibility issues mean you may not want them to I did not understand the fourth point. What does it mean? I need your comments and responses on these points. Is not using JavaScript and switching to libraries like jQuery worth it? **UPDATE 1:** What is the meaning of **Unobtrusive JavaScript with jQuery**? And yes, it does not say we should use libraries, but that we should have them in external files; for that reason I asked my question."} {"_id": "170260", "title": "How easy is it to change languages/frameworks professionally?", "text": "Forgive me for asking a career-related question - I know that they can be frowned upon here, but I think that this one is general enough to be useful to many people. My question is: How easy/difficult is it to get a job using language/framework B, when your current job uses language/framework A? e.g. If you use C#/ASP.NET in your current job, how difficult would it be to get a job using Python/Django, or PHP/Zend, or whatever (the specifics of the example don't matter). Relatedly, if you work in client-side scripting, but perhaps work on server-side projects in your own time, how difficult would it be to switch to server-side professionally? So, to sum up, does the choice of which languages/frameworks you use at work tend to box you in professionally?"} {"_id": "19174", "title": "Is Project Titan (Facebook email app) going to be a game changer from a programmer's perspective?", "text": "Although some details are still scarce, the internet is slowly learning more about this new software. From the perspective of a programmer, what functionality, if any, do you think this will bring to the mass communications sector of internet use? Will it in any way be a \"game changer\" compared to the existing layer of highly successful web-based email clients (such as Hotmail, Gmail, Yahoo Mail, etc.)? I'm interested if anyone thinks email fundamentally needs to change, other than to add more \"connectedness\" to other sources of data such as social media profiles."} {"_id": "120384", "title": "Why not Green Threads?", "text": "Whilst I know questions on this have been covered already (e.g. http://stackoverflow.com/questions/5713142/green-threads-vs-non-green-threads), I don't feel like I've got a satisfactory answer. The question is: _why don't JVMs support green threads anymore?_ It says this on the code-style Java FAQ: > A green thread refers to a mode of operation for the Java Virtual Machine > (JVM) in which all code is executed in a single operating system thread. And this over on java.sun.com: > The downside is that using green threads means system threads on Linux are > not taken advantage of and so the Java virtual machine is not scalable when > additional CPUs are added. It seems to me that the JVM could have a pool of system processes equal to the number of cores, and then run green threads on top of that. 
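For the green-threads question just above (120384), the \"pool of system threads equal to the number of cores, with many lightweight tasks on top\" idea can be approximated in plain Java with an executor; what this cannot recover, and what real M:N green threads need runtime support for, is descheduling a task that blocks in the middle of a call. A minimal sketch:

    import java.util.concurrent.*;

    public class ManyTasks {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            for (int i = 0; i < 100000; i++) {     // far more tasks than OS threads
                final int n = i;
                pool.submit(() -> n * n);          // cheap CPU-bound task
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }

If any task blocks on I/O it still pins one of the pool's OS threads, which is exactly the trade-off the java.sun.com quote alludes to from the other direction.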
This could offer some big advantages when you have a very large number of threads which block often (mostly because current JVMs cap the number of threads). Thoughts?"} {"_id": "116439", "title": "Why would a video game need main(String[] args) in its own class?", "text": "My teacher just told me that whenever I create a class to run something for a video game company that uses Eclipse, I should make a run class with the main and any outputs. He says any arithmetic should then be put in its own class. Here's the example he gave us. //triangle import java.util.Scanner; import java.lang.Math.*; import java.lang.String; public class Lab03a //this class is used to test Triangle { public static void main( String[] args ) { Scanner keyboard = new Scanner(System.in); //ask for user input System.out.print(\"Enter side A :: \"); int a = keyboard.nextInt(); System.out.print(\"Enter side B :: \"); int b = keyboard.nextInt(); System.out.print(\"Enter side C :: \"); int c = keyboard.nextInt(); Triangle test = new Triangle(a, b, c); test.calcPerimeter(); test.calcArea(); System.out.println(\"Area \"+test.toString()); //ask for user input System.out.print(\"Enter side A :: \"); a = keyboard.nextInt(); System.out.print(\"Enter side B :: \"); b = keyboard.nextInt(); System.out.print(\"Enter side C :: \"); c = keyboard.nextInt(); test.setSides(a,b,c); test.calcPerimeter(); test.calcArea(); System.out.print(\"Area \"+test.toString()); //add one more input section } } The other class: import java.util.Scanner; import java.lang.Math.*; public class Triangle { private int sideA, sideB, sideC; private double perimeter; private double theArea; private double s; public Triangle() { setSides(0,0,0); perimeter=0; theArea=0; s=0; } public Triangle(int a, int b, int c) { sideA=a; sideB=b; sideC=c; } public void setSides(int a, int b, int c) { sideA=a; sideB=b; sideC=c; } public void calcPerimeter( ) { perimeter=sideA+sideB+sideC; } public void calcArea() { s=perimeter/2; theArea=(Math.sqrt(s*(s-sideA)*(s-sideB)*(s-sideC))); } public String toString() { String output = \"\"+theArea+\"\\n\\n\"; return output; } } Is this universally true for gaming companies? Or would most accept this?"} {"_id": "120381", "title": "What, if anything, to do about bow-shaped burndowns?", "text": "I've started to notice a recurring pattern in our team's burndown charts, which I call a \"bowstring\" pattern. The ideal line is the \"string\" and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of the story, we are doing a lot of debugging or exploratory work that is difficult to estimate remaining work for. Sometimes the line even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test, which are more predictable, hence the curving-down graph. Note I'm not talking about a big scale like BDUF, just the natural short-term constraint that you have to find the bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? 
If so, what is the root cause, and what are some techniques to deal with it?"} {"_id": "94227", "title": "How loyal should I be to my present employer?", "text": "I come from a \"3rd world country\" and moved a year ago to a \"first world country\", where I got a job doing customer support (networking/management company) and web dev for my current employer. I started by covering for the main support engineer who was going on holiday; then my boss wanted an iPhone + iPad + server app for managing the employees (the app is in testing now), and gave the project to me. As I had just arrived in a new country, I was offered a very low salary and was told it would be reviewed in a couple of months; then it became \"when the app is ready\", and now that the app is ready, it is \"at the end of this year\" (I started August last year). I was OK with the low salary as I was mostly learning Objective-C; I come from a microcontroller C and Linux background. Now my boss is in Europe for holidays and has left everything (and everyone) for some weeks... I now need to make an average salary for a developer (mid-level), and have been looking for jobs while my boss is away. If I get a job before he comes back, I will feel a bit guilty that he gave me a chance to learn and I repay him by leaving (and leave him with some web portals and iOS apps)... But I really need to make more money for my family too, so it's clear I need to improve my conditions, but I still feel thankful to the guy. But more important is my family! So how should I proceed?"} {"_id": "235164", "title": "Redmine - Database Structure/Normalization", "text": "I am using Redmine for project management and issue tracking. I was looking at the database tables and the underlying structure and was wondering if anyone who is experienced with database architecture can comment on the structure. I am concerned that once there are many users and hundreds (or thousands) of projects (each project containing many issues, with each issue containing many messages, etc.), the database structure could turn out to be a weak point. How is performance impacted by this design? I would like to hear about the pros/cons of how the tables are laid out and how the data is separated or normalized, and whether or not it might be worth restructuring. What would be the benefits of separating the data out into more tables (with fewer columns per table)?"} {"_id": "235161", "title": "Types of unit tests based on usefulness", "text": "From a value point of view, I see two groups of unit tests in my practice: 1. Tests that test some non-trivial logic. Writing them (either before implementation or after) reveals problems/potential bugs and helps me stay confident if the logic is changed in the future. 2. Tests that test some very trivial logic. Those tests document code (typically with mocks) more than they test it. The maintenance workflow of those tests is not \"some logic changed, test became red - thank God I wrote this test\" but \"some trivial code changed, test became a false negative - I have to maintain (rewrite) the test without gaining any profit\". Most of the time those tests are not worth maintaining (except for religious reasons). In my experience, in many systems those tests are something like 80% of all tests. I am trying to find out what others think about separating unit tests by value, and how it corresponds to my separation. But what I mostly see is either full-time TDD propaganda or tests-are-useless-just-write-the-code propaganda. I'm interested in something in the middle. 
Your own thoughts or references to articles/papers/books are welcome."} {"_id": "235168", "title": "Simple search from an image list to bring up a specific logo", "text": "The way my form currently works, a user selects a particular folder from a drop-down menu and the form displays the current logos in that folder in a row below. These images are all grabbed at once on page load. The problem is that, as of right now, I have around 700 logos totaling around 10 MB, and every time the user reaches the form, it has to load all 700 images before the page loads, unless the images are already in the browser's cache. My problem isn't so much now, but later, if I end up having 1500 logos or more. I thought about 2 different ways to isolate this issue. 1. Have the selected folder load the images only AFTER it is selected. I asked how to get this accomplished on Stack Overflow with no success. 2. My second idea is to create a simple search bar that will just show the logo the user is searching for on the fly, instead of loading all the images. I know this is possible; I've seen it on a number of image search sites. My question is: does this type of search script already exist? If so, could you point me in the right direction?"} {"_id": "219049", "title": "How is MTU determined and can it be changed", "text": "I'm developing a multi-part solution that consists of an iOS application and an OS X application which communicate over Bluetooth Low Energy. Everything works fine and I can send moderate amounts of text between the iPhone and my MacBook. I would like to be able to send larger amounts of data, such as an image, from the iPhone to the MacBook. I'm able to do this but it takes about 50 minutes for the transfer of 1 image, compressed as a JPEG with a 0.2 quality (on a scale of 0-1). Part of the problem is that in the iOS application, when the central connects, it reports a maximum packet length of 20 bytes (plus 3 bytes for the header). In iOS 7, they changed the MTU so it's not a hardcoded 23 bytes. I cannot find anything that would allow me to specify a higher MTU on my MacBook or in the IOBluetooth framework. I found a constant value in one of the Bluetooth header files in the framework that specifies the MTU as 23 (20 + 3 for the header), which leads me to believe this is just something hard-coded into the framework that should be changeable somehow. An alternative would be to use BLE to negotiate some other form of transfer between the systems. Ultimately, the transfer shouldn't be required to go over the Internet (the way Bump does it), but if a direct Wi-Fi or Bluetooth connection could be used without requiring any user configuration, that would work."} {"_id": "219046", "title": "Initializing a variable as undefined", "text": "Doing some refactoring, I noticed an unusual pattern I'm not familiar with. Properties and variables that do not yet have a value are explicitly initialized to `undefined`, despite the fact that value-less variables and object properties would evaluate to `undefined` anyway: var foo = undefined; this.prop = undefined; **Is there a reason to do this?**"} {"_id": "213357", "title": "What kind of problems is an Android beginner likely to encounter in using Scala?", "text": "I am a hobbyist programmer who makes and maintains one production system, largely coded in Python, which now has to be ported to Android. I don't know Java at all. However, SL4A on Android makes Python a bit of a second-class citizen re APIs, such as Google Cloud Messaging etc. 
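For the `undefined` question above (219046), one observable difference the pattern does make is for object properties: an explicitly assigned property exists (it shows up in `in` checks and key enumeration) while a never-assigned one does not. A small sketch:

    var a = {};
    a.prop = undefined;

    'prop' in a;       // true  -- the key exists, its value is undefined
    'other' in a;      // false -- the key was never created
    Object.keys(a);    // ["prop"]

    var x;             // for plain variables the two spellings behave identically:
    var y = undefined;
    x === y;           // true

So for variables it is purely a stylistic signal (\"deliberately empty\"), while for properties it actually creates the key.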
However, for me, Java appears intimidating with its verbose syntax and strict typing. Scala's syntax is more appealing and seems like a possible compromise. From this question and this one on Stack Overflow it seems that Scala development on Android is feasible. However, as a 'beginner', I would like to know what problems I might encounter by using Scala instead of Java. The available information that I can find on the 'Net (including the question from this site cited above) is from early 2012 at the latest, and I imagine the situation has changed since then."} {"_id": "241230", "title": "Should I use my own public API on my site (via JS)?", "text": "First of all, this question is rather different from other 'public API questions' like this: Should a website use its own public API?; second, sorry for my English. **You can find the question summarized at the bottom of this question.** What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used via the public API. Because of this, I was thinking about making the whole website AJAX driven. There would be parts of the API which would be limited only to my website (domain), like login and registering. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE; well, I mean the website would only use AJAX, through the API. * * * How do I imagine this? The website would be like a mobile application: the application only sends a request to a webserver, which returns JSON; the application parses it and uses it to advance in the application (e.g.: login). My thoughts: **Pros**: * The whole website is built up by JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth. (I hope so) * Anyone can use the data of my website to make their own cool things. (Is this a con or a pro? O_O) * The public API is always in use, so I can see if there are any errors. **Cons**: * Without JavaScript the website is unusable. * Bad guys can easily load the server by requesting too much data (like requests per second > 10000), but this can be countered by rate-limiting with some PHP code and logging. * Probably much more work. So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (e.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)"} {"_id": "241232", "title": "Best practice to collect information from child objects", "text": "I'm regularly facing the following pattern: public abstract class BaseItem { BaseItem[] children; // ... public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach (BaseItem c in children) c.AddRequiredStuff(collection); // do something with the collection ... } public abstract void AddRequiredStuff(StuffCollection collection); } public class ConcreteItem : BaseItem { // ... public override void AddRequiredStuff(StuffCollection collection) { Stuff stuff; // ... collection.Add(stuff); } } Where I would prefer something like this, for better information hiding: public abstract class BaseItem { BaseItem[] children; // ... 
public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach (BaseItem c in children) collection.AddRange(c.RequiredStuff()); // do something with the collection ... } public abstract StuffCollection RequiredStuff(); } public class ConcreteItem : BaseItem { // ... public override StuffCollection RequiredStuff() { StuffCollection stuffCollection = new StuffCollection(); Stuff stuff; // ... stuffCollection.Add(stuff); return stuffCollection; } } What are the pros and cons of each solution? For me, giving the implementation access to the parent's information is somehow disconcerting. On the other hand, initializing a new list just to collect the items is useless overhead... What is the better design? How would it change if `DoSomethingWithStuff` weren't part of `BaseItem` but of a third class? PS: there might be missing semicolons or typos; sorry for that! The above code is not meant to be executed, but is just for illustration."} {"_id": "150574", "title": "I feel unprepared to start my first job out of college... how can I improve?", "text": "I just graduated from university with a degree in Computer Science/Engineering and was fortunate enough to land a job working in the pharmaceutical industry as a developer. My title is System Developer I, which will require the following skills: 0-3 years development experience with C#/.NET and SQL Server; experience writing T-SQL and stored procedures; XML; JavaScript. I learned of the company at a job fair and was not aware that I would need those skills for the job until after they wanted to set up an interview with me. The interview consisted of talking about my background, an almost-too-simple logic test, and a couple of SQL questions that anyone with any experience should be able to answer. I was honest with them and indicated that I had absolutely no experience with SQL Server, .NET, XML, or JavaScript, but they offered me the position anyway. Of course, I accepted it, but I am now extremely worried that my skills will not be up to snuff. I fully realize from reading lots of Coding Horror, Stack Overflow, and The Daily WTF that a degree in Comp Sci in no way prepares me to be a software developer; I further realize that I will be a monumental noob in the presence of people who have been doing this for years. I feel like the only things that make up for my lack of development experience and programming knowledge are my social skills, innate writing ability, and humility (at least compared with some of my co-graduates who fancied themselves to be the next Steve Jobs... _barf_ ). You will never find me being the prima donna constantly complaining about the system, the language, etc... I just want to do my job like I'm told, work 9-5, and go home with my paycheck feeling like I'm competent. If that requires home study, I'm more than willing, because I do love programming and computer science. So far, I've familiarized myself a bit with SQL Server Management Studio, given myself a refresher on basic SQL, and started learning more about C# and .NET using Introducing Visual C# 2010 by Adam Freeman from Apress. Can anyone recommend anything else I can do in the meantime to: A. Chill the **** out and enjoy my new job without worrying so much about getting canned for incompetence B. Improve my understanding of design patterns and OOP C.
Get the low-down on writing T-SQL in the most efficient way possible. Thanks everyone."} {"_id": "65673", "title": "I might be starting to do Arduino development and I would like some advice", "text": "So, today I looked at Arduino; it seems very interesting. I still don't want to shell out the money just yet, as I have some questions, namely: 1. Should I learn something about electronics? What, and with what resources? 2. What stuff should I buy? I have a limited, but not _that_ small budget (probably up to 200€, but preferably _less_ ). I'm interested in getting a screen (it mustn't be complex, but I would prefer one like that), a speaker, some way for it to move and a way to remotely control it. I would also want a battery. I also want to be able to use my board for many things in the future, so it shouldn't be too basic, I guess. I think I will go with the MEGA; is that smart? 3. What resources do you recommend? Any books? Good tutorials besides what you find on their site? 4. What are really cool extensions (e.g. screens) which I would be advised to get/try? Help greatly appreciated. Also, I have _never_ developed for any embedded device, but I know C and computer architecture to some degree; I'm somewhat familiar with low-level stuff."} {"_id": "82730", "title": "Scrum overestimation and replanning", "text": "We are in the middle of our first Sprint and something dawned on us: we overestimated! We had planned 114 ideal hours for this 2-week iteration and at the end of the first week we finished the whole Sprint. What do we do now? The "book" says we should, and we will, get the next high-priority items from the backlog. Though, how do we add them to the burndown chart? Do we re-write it to account for those stories as if they were there from the beginning? Or simply add their estimates to the y axis on the day we start working on them (showing a 90° jump)? Any feedback is welcome!"} {"_id": "82732", "title": "How do you name sprints in your projects?", "text": "Some Scrum software management tools give you the option to explicitly name your sprints. Do you have a preferred way of naming your sprints or do you just use a simple scheme like 1, 2, 3, ...?"} {"_id": "215213", "title": "Permissions and MVC", "text": "I'm in the process of developing a web application. This web application is mostly a CRUD interface, although some users are only allowed to perform some actions and see only some parts of views. What would be a reasonable way to handle user permissions, given that some parts of views are not available to users? I was thinking of having a function `hasPermission(permission)` that returns `true` iff the current user has the given permission, although it would require conditionals around all parts of views that are only visible to some users. For example: {% if has_permission('view_location') %} {{ product.location }} {% endif %} I'm fearing this will become an ugly and unreadable mess, especially since these permissions can get kind of complicated. How is this problem commonly solved in web applications? I'm considering using Haskell with Happstack or Python with Django."} {"_id": "215210", "title": "The balance between client and server functionality", "text": "I want to bring up a discussion that started in our team and get your opinion on it. Assume we have a user account which could have different credentials for authentication and an associated email for recovery.
A user can sign up with an email or use his social profile to complete the signup process. The REST API from the backend to the client looks like: 1. Create account 2. Authorise 3. Update user data 4. Link social account 5. Register email 6. Verify email In addition, our BE is distributed and divided between several services/servers/clusters, so different calls are related to different endpoints. We also have different clients: native mobile applications and a web application. The usual registration flow looks like this for the client: 1. Ask user for email 2. Create account (REST) 3. Authorise (REST) 4. Ask user for his display name 5. Update data (REST) 6. Register email (REST) And future possible calls: 1. Verify email 2. Link social account to user With the Facebook signup: 1. Ask user to log in to FB 2. Create account (REST) 3. Authorise (REST) 4. Link FB (REST) 5. Register and verify FB user email (REST) 6. Update user name based on FB data (REST) But all these steps are possible to do on the backend. So we proposed to have another endpoint which will hide/combine the different calls on the BE and return the whole process result to the clients: 1. Ask user for FB login 2. New user with FB (proposed REST) 3. BE does register/link verified email/update user data/authorise The pros for this approach: 1. No more duplication of functionality between clients 2. Faster networking and a better user experience The cons for this approach: 1. Additional work for the backend 2. Probably more complex scenarios in future updates I would like to get your opinion on, or experience with, this situation. Especially if you have already experienced the "more complex scenarios in future updates" point from the cons."} {"_id": "215217", "title": "How to structure my GUI agnostic project?", "text": "I have a project which loads an XML file from a database; the XML defines a form for some user. The XML is transformed into a collection of objects whose classes derive from a single parent. Something like Control -> EditControl -> TextBox Control -> ContainerControl -> Panel Those classes are responsible for creating GUI controls for three different environments: WinForms, DevExpress XtraReports and WebForms. All three frameworks share mostly the same control tree and have a common single parent (Windows.Forms.Control, XrControl and WebControl). So, how to do it? Solution a) The Control class has abstract methods Control CreateWinControl(); XrControl CreateXtraControl(); WebControl CreateWebControl(); This could work, but the project has to reference all three frameworks and the classes are going to be fat with methods supporting all three implementations. Solution b) Each framework implementation is done in a separate project and has the exact same class tree as the Core project. All three implementations are connected to the Core class using an interface. This seems clean, but I'm having a hard time wrapping my head around it. Does anyone have a simpler solution or a suggestion for how I should approach this task?"} {"_id": "215216", "title": "Inherit one instance variable from the global scope", "text": "I'm using Curses to create a command-line GUI with Ruby. Everything's going well, but I have hit a slight snag. I don't think Curses knowledge (esoteric, to be fair) is required to answer this question, just Ruby concepts such as objects and inheritance.
**I'm going to explain my problem now, but if I'm banging on, just look at the example below.** Basically, every Window instance needs to have .close called on it in order to close it. Some Window instances have other Windows associated with them. When closing a Window instance, I want to be able to close all of the other Window instances associated with it at the same time. Because associated Windows are generated in a logical fashion (I append the name with a number: `instance_variable_set(self + integer, Window.new(10,10,10,10))`), it's easy to target generated windows, because methods can anticipate what the associated windows will be called (I can recreate the instance variable name from scratch and almost query it: `instance_variable_get(self + integer)`). I have a delete method that handles this. If the delete method is just a normal, global method (called like this: `delete_window(@win543)`), then everything works perfectly. However, if the delete method is an instance method, which it needs to be in order to use the `self` keyword, it doesn't work for a very clear reason: it can 'query' the correct instance variable perfectly well (`instance_variable_get(self + integer)`); however, **because it's an instance method, the global instances aren't scoped to it!** Now, one way around this would obviously be to simply make a global method like this: `delete_window(@win543)`. But I have attributes associated with my window instances, and it all works very elegantly. This is very simplified, but it literally translates the problem exactly: class Dog def speak woof end end def woof if @dog_generic == nil puts "@dog_generic isn't scoped when .woof is called from an instance method!\n" else puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic end end @dog_generic = "Woof!" lassie = Dog.new lassie.speak #=> @dog_generic isn't scoped when .woof is called from an instance method!\n woof #=> @dog_generic is scoped when .woof is called from the global scope. See:\nWoof! **TL/DR:** I need `lassie.speak` to return this string: `"@dog_generic is scoped when .woof is called from the global scope. See:\nWoof!"` @dog_generic must remain an instance variable. The use of Globals or Constants is not acceptable. Could woof inherit from the global scope? Maybe some sort of keyword: def woof < global # This 'code' is just to conceptualise what I want to do, don't take offence! end Is there some way the .woof method could 'pull in' @dog_generic from the global scope? Will @dog_generic have to be passed in as a parameter?"} {"_id": "64109", "title": "How to stay in the zone after finishing a task?", "text": "I've noticed that while I'm programming, I divide my work into tasks, and until a task is complete I cannot stop, even if it takes hours. However, once that task is complete, I find it very hard to move on to the next task. I feel the urge to spend some time having a break, surfing the web, etc., and before you know it the rest of the work day has gone by. Sometimes, even if it only took 1-2 hours to complete a task, this still happens. Has anyone else experienced this?
How do you deal with this so you stay in the zone till all your tasks for the day are done?"} {"_id": "154056", "title": "Git: Fixing a bug affecting two branches", "text": "I'm basing my Git repo on A successful Git branching model and was wondering what happens if you have this situation: ![branch diagram](http://i.stack.imgur.com/lLGpu.png) Say I'm developing on two feature branches A and B, and B requires code from A. The X node introduces an error in feature A which affects branch B, but this is not detected at node Y, where features A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed, since it's part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A? After which both features are merged into develop again and tested before branching out? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix and still allow the feature B branch to remain unmerged yet have the fixed code from feature A? Mildly related: Git bug branching convention"} {"_id": "215219", "title": "Is there a reason someone would choose GPLv2 instead of GPLv3?", "text": "Is there any reason anyone would use GPL v2 over GPL v3 when starting a new project, or is GPL v2 still around only because older projects can't or haven't updated their license yet?"} {"_id": "153000", "title": "Proper way to implement Android XML onClick attribute in Activity", "text": "I have used the android:onClick attribute extensively in my XML layouts for my Android application. **Example:** Instead of writing this (using jQuery just for brevity): $('#myButton').on('click', function() { ... }); they do this: $(document).on('click', function() { ... }); And then presumably use `event.target` to drill down to the element that was actually clicked. Are there any gains/advantages in capturing events at the `document` level instead of at the `element` level?"} {"_id": "60699", "title": "Is "funny commenting" a bad practice or not?", "text": "I want to ask you whether adding some "easter eggs" to the source documentation is unprofessional or not. Probably you have read the Stack Overflow poll for funny comments in source documentation, and I have personally stumbled upon many such things during my work, including funny (or not) stuff in public API documentation (for example this weak BZZZTT!!1! thing in the Android public documentation; I can give at least a dozen more examples). I can't come to a final opinion for myself, because I have contradictory arguments myself. Pro argument: * It can cheer somebody up and make his/her day funnier/more productive. A major portion of the source code doesn't need to be commented anyway (if the project is done properly), because the specific method (for example) is self-explanatory, or if it is a pile of strange crappy code, it can't be explained in a meaningful way, so a funny joke doesn't harm the possible info that you can obtain from the doc.
Con argument: * If you are very focused/frustrated, the last thing you need is somebody's stupid joke; instead of giving you the information you need about the documented code portion, it can just make you even more frustrated. And the idea of what the documentation would look like if everybody started doing so is horrible. Plus, the guy who writes the joke may be the only one who thinks that it is funny/interesting/worth wasting time to read. What do you think?"} {"_id": "60694", "title": "What is the correlation between the quality of the software development process and the quality of the product?", "text": "I used to believe that practicing "good" software development methods tends to yield a better product in the long run. However, I've seen quite a few cases where "quick-and-dirty" \ "brute-force" \ "copy-paste" programming appeared to give decent results quicker, and cheaper. This appears especially in cases where time to market is more critical than maintenance overhead. Is there a correlation between the quality of the development process and techniques and the quality of the product?"} {"_id": "64375", "title": "How can I get started as a freelance programmer?", "text": "Assume that I know zero about freelancing, but I am a good .NET programmer. I want to start freelancing. How would I get started?"} {"_id": "153277", "title": "What are studies comparing programmer productivity in given languages/environments?", "text": "I'm not sure if I'm using the right terms, but by productivity I mean the concept of transforming an idea/design into actual software."} {"_id": "251051", "title": "OOD: class hierarchy with method arguments forming another hierarchy", "text": "I'd like to find out how you guys handle the following situation: you have a class hierarchy, call it H1, with some polymorphic method that is supposed to accept an argument whose type forms hierarchy H2 in the following way: the higher the class is in H1, the higher the H2 argument it accepts. +-----------+ +-----------+ | A | | A' | |-----------+ +-----------+ |method(A') | ^ +-----------+ | ^ +-----------+ | | B' | +-----------+ +-----------+ | B | ^ |-----------+ | |method(B') | +-----------+ +-----------+ | C' | ^ +-----------+ | +-----------+ | C | |-----------+ |method(C') | +-----------+ Before writing any code, I'll mention that I have two protocol classes with `check` methods that have much in common, but whose request parameters are specific to the concrete arguments: transactions. The language is Scala, but the problem is language-agnostic. At first glance it should look like this: trait BaseMerchantProtocol { def check(transaction: BaseTransaction) = { // some common code... println(requestParams(transaction)) // ...and here as well } protected def requestParams(transaction: BaseTransaction): List[String] } class SmsCommerceMerchantProtocol extends BaseMerchantProtocol { override protected def requestParams(transaction: SmsCommerceTransaction) = { List("result specific for SmsCommerceTransaction class") } } class TelepayMerchantProtocol extends BaseMerchantProtocol { override protected def requestParams(transaction: TelepayTransaction) = { List("result specific for TelepayTransaction class") } } But of course it does not compile, as the Liskov substitution principle is violated.
Let's try this one: trait IMerchantProtocol { def check(transaction: SmsCommerceTransaction) def check(transaction: TelepayTransaction) } class MerchantProtocol extends IMerchantProtocol { def check(transaction: SmsCommerceTransaction) = { doCheck(transaction) } def check(transaction: TelepayTransaction) = { doCheck(transaction) } private def requestParams(transaction: SmsCommerceTransaction) = { List("result specific for SmsCommerceTransaction class") } private def requestParams(transaction: TelepayTransaction) = { List("result specific for TelepayTransaction class") } private def doCheck(transaction: BaseTransaction) = { // some common code... println(requestParams(transaction)) // ...and here as well } } But that won't compile either, for the same reason: `doCheck` accepts a `BaseTransaction`, but the `requestParams` overloads have stricter preconditions. The only thing that I came up with that works is the following: class MerchantProtocol extends IMerchantProtocol { def check(transaction: SmsCommerceTransaction) = { doCheck(transaction, requestParams(transaction)) } def check(transaction: TelepayTransaction) = { doCheck(transaction, requestParams(transaction)) } private def requestParams(transaction: SmsCommerceTransaction) = { List("result specific for SmsCommerceTransaction class") } private def requestParams(transaction: TelepayTransaction) = { List("result specific for TelepayTransaction class") } private def doCheck(transaction: BaseTransaction, requestParams: List[String]) = { // some common code... println(requestParams) // ...and here as well } } But I don't like that all the `check` methods belong to the same class. How can I split them across classes?"} {"_id": "251053", "title": "How should a REST API handle PUT when missing parameters?", "text": "I have a list of users that are being assigned to a certain office. I use checkboxes to select each user, and when the client is done, a PUT is performed: PUT offices/:id users[0] : 14 users[1] : 12 users[2] : 25 The problem occurs when the server has users saved, but the client decides to clear the board. So he deselects the users and hits save. The problem in this case is that the Ajax request is not sending an empty `users[0] :` parameter, because each checkbox is unchecked. In the case when a parameter is not included at all, should PUT overwrite it and set it to null, or should the request include the empty parameter somehow?"} {"_id": "254020", "title": "C# inherit from a class in a different DLL", "text": "I need to make an application that is highly modular and can easily be expanded with new functionality. I've thought up a design where I have a main window and a list of actions that are implemented using a strategy pattern. I'd like to implement the base classes/interfaces in a DLL and have the option of loading actions from DLLs which are loaded dynamically when the application starts. This way the main window can initiate actions without having to recompile or redistribute a new version. I just have to (re)distribute new DLLs, which I can update dynamically at runtime. This should enable very easy modular updating from a central online source. The 'action' DLLs all inherit their structure from the code defined in the DLL which defines the main strategy pattern structure and its abstract factory. I'd like to know if C#/.NET will allow such a construction.
I'd also like to know whether this construction has any major problems in terms of design."} {"_id": "251055", "title": "What next steps to gain contributors to my programming language Wake?", "text": "I have spent several months creating the language Wake. It is now in alpha status, meaning it's highly usable but still has many planned features, features which will be slow to create with only one developer. Being in an alpha release, I don't want to go for too much publicity too soon. I don't want to push anyone to use it in production, or have Wake remembered as "that language that is in the works." The number one thing at this point is gaining contributors. I did a presentation (there's a video on YouTube) to friends and family that might have gained me one or two, probably more help on styling and docs than feature development. Do you think the language is ready for * sharing on Reddit * presenting at conferences * pitching to companies for supplementary help * adding to package databases like Ubuntu, brew, Chocolatey, etc. * sharing on Lambda the Ultimate And what other publicity actions could I take? Do you think I need to write "how to contribute" articles? I have been planning on making a blog post, _From the ashes of Google's Noop comes a new language: Wake_, to take a piggy-back approach to marketing it. Is that sort of introduction good to broadly publicize right now?"} {"_id": "118212", "title": "S-Shaped Perl Program?", "text": "In his Why I Hate Django presentation, Cal Henderson brings up this awesome program in Perl which is a giant "S" that solves a Sudoku problem. Does anyone know where I could get access to this code? I've been searching for it and I'm just dying to be able to read it."} {"_id": "190567", "title": "How to implement lazy evaluation of if()", "text": "I am currently implementing an expression evaluator (single-line expressions, like formulas) based on the following: * the entered expression is tokenized to separate literal booleans, integers, decimals, strings, functions, identifiers (variables) * I implemented the Shunting-yard algorithm (lightly modified to handle functions with a variable number of arguments) to get rid of parentheses and order the operators with a decent precedence in postfix order * my shunting-yard simply produces a (simulated) queue of tokens (by means of an array; my PowerBuilder Classic language can define objects, but only has dynamic arrays as native storage: no true lists, no dictionaries) that I evaluate sequentially with a simple stack machine My evaluator is working nicely, but I am still missing an `if()` and I am wondering how to proceed. With my shunting-yard postfix and stack-based evaluation, if I add `if()` as another function with true and false parts, a single `if(true, msgbox("ok"), msgbox("not ok"))` will show both messages, while I would like to show only one. This is because when I need to evaluate a function, all of its arguments have already been evaluated and placed on the stack. Could you give me some way to implement `if()` in a lazy way? I thought about processing these as a kind of macro, but at that early point I do not yet have the evaluated condition. Perhaps I need to use another kind of structure than a queue to keep the condition and the true/false expressions separate? For now the expression is parsed before evaluation, but I also plan to store the intermediate representation as a kind of precompiled expression for future evaluation.
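One common way to keep `if()` lazy without leaving the postfix world is to compile it to conditional jumps, the way bytecode interpreters do. Below is a minimal Python sketch of the idea; the token layout and names are my own assumptions, not the original evaluator's.

```python
# Minimal sketch (my own token layout): compile if() into conditional jumps
# so the untaken branch is skipped instead of evaluated.
def run(tokens):
    stack, pc = [], 0
    while pc < len(tokens):
        op, arg = tokens[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "CALL":                  # arg = (function, arity)
            f, n = arg
            args = [stack.pop() for _ in range(n)][::-1]
            stack.append(f(*args))
        elif op == "JUMP_IF_FALSE":         # arg = target index
            if not stack.pop():
                pc = arg
                continue
        elif op == "JUMP":                  # unconditional, ends the 'then' part
            pc = arg
            continue
        pc += 1
    return stack.pop() if stack else None

# if(true, msgbox("ok"), msgbox("not ok")) compiled with jumps:
msgbox = print
program = [
    ("PUSH", True),
    ("JUMP_IF_FALSE", 5),   # condition false -> jump to the else branch
    ("PUSH", "ok"),
    ("CALL", (msgbox, 1)),
    ("JUMP", 7),            # skip over the else branch
    ("PUSH", "not ok"),
    ("CALL", (msgbox, 1)),
]
run(program)                # prints only "ok"
```

The parsing pass would back-patch the two jump targets once the sizes of both branches are known.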
**Edit**: after some thought on the problem, I think I could build a tree representation of my expression (an AST instead of a linear token stream), from which I could easily ignore one or the other branch of my `if()`."} {"_id": "190566", "title": "Audio Sync : Detecting current time of a program", "text": "I am trying to do some audio sync things (see Wikipedia on acoustic fingerprints). I know it is possible to place watermarks in an audio signal to analyse it later, but how would I get the current time of a video? Would it be something like placing a watermark every second? Wouldn't placing so many watermarks create the risk of false positives? Also, I have tried to read audio frequencies on my Android phone (with zero crossing) but it is not very precise. So on a less conceptual topic: could anyone advise me on a good way to detect audio watermarks on smartphones/tablets?"} {"_id": "80529", "title": "What's your suggestion if the company didn't recognize my contribution towards a big project?", "text": "I am an entry-level developer with 1 year of experience. I have worked on a large-scale project on which I did around 80% of the work; those 5 months were terrible for me: late nights spent working, even Sundays. I worked on the whole process model, did some of my colleagues' work, the DB design, and client feedback; but the point is that some of my work was claimed by my team lead, and I hope you now realize why I say 80% of the work was done by me! Now the project is completed, and the client seems completely satisfied with the work. But I haven't found the company giving any sort of encouragement/appreciation. My seniors who were not involved in the project were given praise, leave, bonuses, etc. Also, I was refused permission to attend an important family function, which makes me ask: "what credit do I have now?" I have been wondering: is being honest/dedicated to the job what resulted in this situation? I currently have 3 offers with good packages, and I've been thinking of moving on to one of those companies now. What's your suggestion at this point in time?"} {"_id": "190563", "title": "UTF-16 Pitfalls, Chinese", "text": "I'm going to be writing an application that is pure HTML5 and JS with an MVC.NET back-end. We have .resx files that are getting compiled to .js files for resources in the HTML5 application. The application has to work in English and in Chinese, which I understand to mean that we need to use UTF-16 everywhere. Does anyone have any experience using UTF-16 for such a task, or any best practices thereof?"} {"_id": "42756", "title": "Zend PHP 5.3 certification", "text": "I'm planning to take the Zend PHP 5.3 certification. Any suggestions on how to prepare for it? Also, what kind of questions can I expect? Can anyone remember what they were asked?"} {"_id": "46236", "title": "Should you apply a language filter to a randomly generated string?", "text": "A while back I created a licensing system for my company's new product, as well as all products after this one. As with a lot of licensing systems, mine generates codes: 25-character product and registration codes, as well as 16-character module unlocking codes. My question is: since some parts of these generated codes are random, should I apply a language filter to avoid any embarrassing language being given to the end users? I chose to, as it was not difficult at all. But has anyone else ever come across something like this?
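For concreteness, such a filter can be as small as the following hedged Python sketch; the alphabet and banned list are placeholder assumptions, not the actual licensing system's.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits
BANNED = {"ASS", "FUK", "SEX"}  # hypothetical filter list

def generate_code(length=25):
    # Regenerate rather than patch offending codes, so the distribution of
    # accepted codes stays uniform over the filtered set.
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if not any(bad in code for bad in BANNED):
            return code
```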
Any viewpoints as to whether it is worth the effort?"} {"_id": "245873", "title": "What is a good C++ API Design for HW registers?", "text": "I am designing an API for a driver that manipulates HW. I have done the following: namespace HWRegister { //private: namespace Data { //accessible only within this namespace //Represents one of the four HW blocks. enum EHWUnit { Block0, Block1, Block2, Block3 }; enum EHWSet { Rx0, Rx1, Rx2, Rx3, Tx0, Tx1, Tx2, Tx3 }; } using namespace Data; //Returns Error Code int32_t enableHWUnit( const EHWUnit aHWNumber, const EHWSet aHWSet ); //Returns Error Code. int32_t disableHWUnit( const EHWUnit aHWNumber ); } I would like to get people's opinions on this. I am wondering if those enums are a good idea: I declare them here, and therefore people who use my API are forced to use them. Will this cause more problems for the callers of my API? Can I improve this API somehow? I will add more documentation to the API when I am sure it is looking good."} {"_id": "195582", "title": "How would I programmatically verify gift cards on a website?", "text": "I'm trying to include a feature on my website that verifies retail gift card balances on cards previously registered at the retailer's website by the cardholder. Can this simply be done by writing an algorithm that pulls and aggregates this data on my site?"} {"_id": "195587", "title": "Less code or less operation", "text": "Sometimes I hesitate between "more code to avoid unnecessary operations" and "less code but with redundant operations". Let me just take an example (Win32 API): I try to paint some controls manually when the cursor is over them. If the cursor is over a control and it is painted, then it doesn't need to be painted again before the cursor moves off it. Of course, I can just paint the control whenever a `WM_MOUSEMOVE` appears. Or I can use more variables to record the state of the control and paint it only when the state has changed. Although it is not an issue (in practice) in my example, I still want to get a feeling for which is the better choice in general. And if this depends on the case, is there any way to develop a good sense for making the choice? (I am not focused on efficiency. It is more a question of: if you are a programming idealist, which one is better in your opinion?)"} {"_id": "245871", "title": "Project based prefix for class names", "text": "My project leader uses project-based prefixes for class names. Let's say the project's name is ABC: he names the User class ABCUser, and he says he does this because if he wants to make a User.aspx, the Users get mixed up. So I asked him why not use a namespace (i.e. Entity.User) to make it specific, but he is against it. I would like to hear your opinions on this subject. We code in C#/.NET and use Visual Studio for our projects."} {"_id": "122845", "title": "Learning C# in a unified manner", "text": "Microsoft technologies keep getting better, but they do so at the expense of adding one more abstraction layer every time. In the early stages we used to play with C# code and SQL procedures to perform CRUD operations. Then generics came, along with ADO.NET, DataSets, Entities. Next, LINQ came, offering LINQ to SQL, to XML, to Entities, to Objects. To make things easier, and more confusing, EF 4.1 came along. C# just kept evolving and adding new abstraction layers again and again. The end result is that I am so confused that I don't know which one is which and when to use which one. When I try to follow books, some teach EF while some are stuck with ADO.NET.
Seriously, I have no clue whatsoever why we have evolved so much for data manipulation. To be honest, all I know is the C# syntax and some basic stuff. Data manipulation is out of the window and so are the advanced features. * Is there a unified way to learn C#? I don't want to use Entity Framework and LINQ when I don't understand them or what they offer. I just want to have a thorough understanding from Level 0 to Level 10. * Also, I am thinking of learning PHP simultaneously because I feel learning some other language might help me understand C# better. Is this the right approach?"} {"_id": "122844", "title": "How relevant are Brainbench scores when evaluating candidates?", "text": "I've seen many companies using certification services such as Brainbench when evaluating candidates. Most times they use it as a secondary screen prior to interview or as a validation to choose between candidates. What is your experience with Brainbench scores? Did you try the tests yourself, and if so do you feel the score is meaningful enough to be used _as part_ of a hiring process? * * * Difficult choice. Consensus seems to be that BB certs are not very good as a certification. The biggest argument was around the fact that some of the questions are too precise to form a good evaluation. This view can probably be tempered somewhat, but still, to stake someone's future solely on the results of this evaluation would be irresponsible. That said, I still think it is possible to use them properly to gain additional objective knowledge of a candidate's level of expertise, provided the test is done in a controlled environment ensuring that all taking it stand on equal footing. Thus I went with the answer that best reflected this view, keeping in mind that it is still just an hour-long, 50-ish-question multiple-choice test to evaluate skills and knowledge that take years to acquire. To be taken with a grain of salt! In short, the tests have value, but whether or not they are worth the money is another debate. Thanks all for your time."} {"_id": "53787", "title": "UI Testing with Visual Studio 2010 Feature Pack 2", "text": "One of the most intriguing items in the recently released Visual Studio 2010 Feature Pack 2 is the ability to create and edit UI tests in Silverlight. Here is an example of a coded UI test. I haven't had much time to use it yet, but for people who have, I am curious as to what your thoughts are. Is this something you have found to be particularly useful? I would like to be able to automate a significant amount of regression testing that we are currently performing manually. In your experience, has it made a major impact on the resources that you would normally have to dedicate to testing? Thanks"} {"_id": "216617", "title": "Approach to Authenticate Clients to TCP Server", "text": "I'm writing a Server/Client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol and that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was take their IP address, the client version, and a few other attributes of the client and send it as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using, and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol.
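A loose Python transcription of the scheme just described, purely to make the discussion concrete; this is my reading of it, not the original code, and the field handling is guessed from the example below.

```python
# Hypothetical transcription: both ends derive the same value from fields
# that are sent in the clear with the request.
def auth_hash(internal_ip: str, version: str) -> int:
    h = 1
    for part in version.split("."):       # e.g. "2.4.5.2"
        h *= int(part)
    for octet in internal_ip.split("."):  # e.g. "192.168.1.10"
        h *= sum(int(d) for d in octet)   # digit sum of each octet
    return h
```

Since every input to the function travels with the request, anyone observing the traffic can recompute it, which is the worry raised next.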
This works ok until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP used to calculate this hash with the authentication protocol. My fear is this approach is not secure enough. Since I'm passing the data used to create the \"auth hash\". Here's an example of what I'm talking about: Client IP: 192.168.1.10, Version: 2.4.5.2 hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0) Client Connects to Server client sends: auth hash ip version Server calculates that info, and accepts or denies the hash. Before I go and come up with another algorithm to prove a client can provide data a server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and \"registering\" with the server. So a client connects to the server for the first time, and registers their info (mac address or some other kind of unique computer identifier), then when they connect again, the server will recognize that client as a previous person and associate them with their data in the database. Edit: The approach I ended up going was this: I ended up using an approach similar to what OpenSSH does, but reverse. Since when a client connects, I'm not sure I can trust the client. If it is the first time a client is connecting, it will provide me with a public key and a unique identifier (In this case, I will be using a computer's service tag since the computers I'm going to be dealing with are all Dells). The server stores the id and public key in the database then sends the client a secret question encrypted using the public key. The client will then need to respond with the decrypted answer. Before a client can be accepted by the server, I may end up doing a unique id registration on the server manually before connecting the client for the first time, this way not any sly person can just generate public keys and emulate what a client does with a \"new unique id\". Thanks for the help! I hope this helps someone down the line. Feel free to tear apart the security flaws of this approach :)"} {"_id": "53780", "title": "Will you work with much less experienced not passionate coworkers?", "text": "Will you work with much less experienced not passionate coworkers? Or they slow you so much that you should always try to work with at least as experienced and passionate as you coworkers. Will less experienced not passionate coworker reduce your productivity?"} {"_id": "216615", "title": "Scuttlebutt Reconciliation from \"Efficient Reconciliation and Flow Control for Anti-Entropy Protocols\"", "text": "_This question might be more suited to math.stackexchange.com, but here goes:_ ## Their Version Reconciliation takes two parts-- first the exchange of digests, and then an exchange of updates. I'll first paraphrase the paper's description of each step. To exchange digests, two peers send one another a set of pairs-- (`peer`, `max_version`) for each peer in the network, and then each one responds with a set of deltas. 
The deltas look like `(peer, key, value, version)`, for all tuples for which `peer`'s state maps the `key` to the given `value` and `version`, and the version number is greater than the maximum version number the requesting peer reported for `peer`. This seems to require that each node remember the state of each other node, and the highest version number and ID each node has seen. ## Question Why must we iterate through all peers to exchange information between p and q?"} {"_id": "53788", "title": "Why are big internet companies (Google, Facebook, Twitter, etc.) going the open source way?", "text": "These very large internet companies are using, developing, and promoting open source technologies like crazy. What is the rationale behind promoting and developing open source tools and technologies rather than closed-source software, as traditional tech companies (Microsoft, Apple, Adobe) do?"} {"_id": "163335", "title": "Dependency Injection/IoC container practices when writing frameworks", "text": "I've used various IoC containers (Castle.Windsor, Autofac, MEF, etc) for .Net in a number of projects. I have found they tend to be frequently abused and encourage a number of bad practices. Are there any established practices for IoC container use, particularly when providing a platform/framework? My aim as a framework writer is to make code as simple and as easy to use as possible. I'd rather write one line of code to construct an object than ten or even just two. For example, here are a couple of code smells that I've noticed and don't have good solutions for: 1. Large number of parameters (>5) for constructors. Creating services tends to be complex; all of the dependencies are injected via the constructor, despite the fact that the components are rarely optional (except for maybe in testing). 2. Lack of private and internal classes; this one may be a specific limitation of using C# and Silverlight, but I'm interested in how it is solved. It's difficult to tell what a framework's interface is if all the classes are public; it allows me access to private parts that I probably shouldn't touch. 3. Coupling the object lifecycle to the IoC container. It is often difficult to manually construct the dependencies required to create objects. Object lifecycle is too often managed by the IoC framework. I've seen projects where most classes are registered as Singletons. You get a lack of explicit control and are also forced to manage the internals (it relates to the above point: all classes are public and you have to inject them). For example, the .NET framework has many static methods, such as DateTime.UtcNow. Many times I have seen this wrapped and injected as a constructor parameter. Depending on a concrete implementation makes my code hard to test. Injecting a dependency makes my code hard to use, particularly if the class has many parameters. How do I provide both a testable interface and one that is easy to use? What are the best practices?"} {"_id": "78233", "title": "How Do You Effectively Use Trace And Debug", "text": "I'm a self-taught C# programmer and up to now I haven't really been making use of Debug and Trace, though I feel as though I should use them. I've been leaning more on TDD to understand and get feedback from my code. The .NET Framework Developer's Guide declines to specify general guidelines for the strategic placement of trace statements _"because applications that use tracing vary widely..."_. What are your experiences of using Debug/Trace? Does using them too much create code maintenance issues? How much output is too much?
How much is too little? What sorts of things do you tend to record?"} {"_id": "95729", "title": "Program Loaders in Linux and Windows", "text": "I was wondering what the loaders in Linux and Windows are called, i.e., their command names. The definition of a loader is: > In computing, a loader is the part of an operating system that is responsible for loading programs."} {"_id": "95720", "title": "Teaching programmer looking for a simple statically and weakly typed language", "text": "I'm trying to illustrate the differences between the four different type systems: static vs. dynamic typing and weak vs. strong typing. * Dynamic + weak = JavaScript * Dynamic + strong = Python * Static + strong = Java (is there a good _purely_ interpreted language for this? The Java "compiler" can be annoying without a framework like Ant or Maven) * Static + weak = ??? For static + weak, the most immediately apparent choice is C++: with operator overloading, pointer arithmetic, and, frankly, the ability to treat almost any data type as some form of number, it seems like it would be a very clear example of static typing and weak typing. But it is also _incredibly_ complicated, especially if I want to teach a **beginner**. What alternative teaching languages are there for beginners which are both statically and weakly typed? Ideally something which is easy to install and get running with minimal effort? (Even more ideally, something which supports OOP in some form?)"} {"_id": "95721", "title": "Offline demo of website - tools to handle "saved" external links", "text": "I am designing a test (demo to users) of a web site from my local machine without Internet access. The site will be hosted locally, but I would also like users to be able to follow links to certain external sites. Are there tools that can make the user think they are browsing a site online when they are actually browsing a local copy of the relevant external pages? The kind of tool I'm imagining would allow you to specify a proxy in the browser that would intercept certain URLs and hand them to the offline copy of a page. There would also be an easy way to save and manage these offline pages. The offline access requirement is a developing-world thing - smart people don't rely on Internet access in places where there is little by way of infrastructure and they only have one chance to get something right."} {"_id": "190699", "title": "When and why would we use immutable pointers?", "text": "In Java, the `String` object is both immutable and also a pointer (aka reference type). I'm sure there are other types/objects which are both immutable and a pointer as well, and that this extends further than just Java. I cannot think of where this would be desirable; possibly due to my own experience, I've only used either reference types OR immutable primitive types. Can someone please give me a situation where it would be useful, so I can get an appreciation for their meaning/value/concept?"} {"_id": "81169", "title": "Why is the Apache HTTP Server so complex?", "text": "The Apache HTTP server is a fairly large project, much larger than, say, `lighttpd` or `nginx` or certainly the "simple HTTP servers" you see floating around in C/C++ tutorials. What is the extra code for? Does it add security/stability (and if so, how?) or is it just for doing things like parsing Apache `conf` files/`.htaccess` type things (and, I guess, `VirtualHosts` etc)?
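For scale, the "simple HTTP server" end of that spectrum really is tiny; here is a hedged Python sketch with none of Apache's configuration, concurrency, or hardening.

```python
import socket

# Toy server only: single-threaded, ignores the request, no error handling.
def serve(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        conn.recv(4096)
        body = b"hello"
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\n" + body)
        conn.close()
```

Everything beyond a loop like this, configuration parsing, process and thread management, the module system, logging, and hardening, is the "extra code" being asked about.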
I ask not to critique Apache, but because I'm interested in writing a web server of sorts and I'd like to know things that, while perhaps not obvious, are important to remember for a secure, stable and fast web server."} {"_id": "176095", "title": "Is the escaping provided by the Google-Gson library enough to ensure a safe JSON payload?", "text": "I am currently using the Google-Gson library to convert Java objects into JSON inside a web service. Once the object has been converted to JSON, it is returned to the client to be converted into a JSON object using the JavaScript eval() function. Is the character escaping provided by the Gson library enough to ensure that nothing nasty will happen when I run the eval() function on the JSON payload? Do I need to HTML-encode the Strings in the Java objects before passing them to the Gson library? Are there any other security concerns that I should be aware of?"} {"_id": "176093", "title": "How to reduce tight coupling between two data sources", "text": "I'm having some trouble finding a proper solution to the following architecture problem. In **our setting (sketched below)** we have 2 data sources, where data source A is the primary source for items of type Foo. A secondary data source exists which can be used to retrieve additional information on a Foo; however, this information does not always exist. Furthermore, data source A can be used to retrieve items of type Bar. However, each Bar refers to a Foo. The difficulty here is that each Bar should refer to a Foo which, if available, also contains the information as augmented by data source B. My question is: how to remove the tight coupling between SubsystemA.1 and DataSourceB? ![architecture diagram](http://i.stack.imgur.com/Xi4aA.png)"} {"_id": "12318", "title": "Why can people expect hardware/software requirements for software applications, but not for web applications?", "text": "This question revolves around ways of getting customers to embrace new web technology/browsers so one can deliver better web software to the end user. It's hard to manage the expectations of customers. Expectations such as: 1. Websites should work on older browsers. (ambiguous) 2. Websites should not require specific hardware. 3. Giving system/computer specs for running a website is unacceptable. Web development isn't as easy as many think. There's a lot that goes into creating a properly run web application (not just a website). Take a look at Google Docs or Microsoft Office Online. These are more than just regular websites, and they force users to use newer browsers. MS Office Online will not work with IE6, and they are trying very hard to push people to use IE8 (soon IE9). Google pushes as well, same with many other strong web entities. You can do a lot on the internet, from playing games, watching movies, doing work, even coding and having the server you're connected to compile your code. With everything the web can do, I find it amazing that people still want to put unrealistic expectations on web applications just because they would require someone to use a browser that is only... 2-3 years old. I understand people don't like change. And we all know that many corporations will provide days/weeks of training to help their employees understand new internet browsers. There are also cases where people are forced to use old browsers because the archaic system they use for internal work only runs on that browser (ActiveX+IE6).
**My Questions** How can you tell your end users that they will need to upgrade their browser to use the latest version of your website without a huge outcry? Why does the expectation exist that it's OK for software to require people to upgrade Windows/Mac versions, but a website cannot require a new browser version?"} {"_id": "179247", "title": "What is a real-world use case of using a Chomsky Type-I (context-sensitive) grammar?", "text": "I have been having some fun lately exploring the development of language parsers in the context of how they fit into the Chomsky Hierarchy. What is a good real-world (i.e. not theoretical) example of a context-sensitive grammar?"} {"_id": "196596", "title": "Algorithm/Strategy or Data structure to capture priorities and sub-priorities in an app", "text": "I am working on a CMS that is starting to evolve a bit. We started off with content that had the following priorities (a column in the db on the content table): HIGH, MED, LOW. Data was fetched by priority, HIGH being on top, etc. Now, after a few months, a need has arisen for sub-priorities: HIGH [high, med, low], MED [high, med, low], etc. I have a test branch with a sub-priority column in the db, so fetching by priority now brings back HIGH(high), HIGH(med), HIGH(low) first. I'm wondering if there is a better way to do this that I'm not seeing. This is a Rails app."} {"_id": "53436", "title": "Shipping a recompiled plugin under the LGPL", "text": "I have recompiled a plugin (making changes before they are accepted upstream) for the GStreamer media framework. I did not recompile the whole library, just a plugin. If I want proprietary Python code to use this plugin, am I running afoul of the LGPL license? My code never _directly_ links to the modified plugin. MyPythonCode [proprietary] --> pygst/binary [LGPL] --> gStreamer/binary [LGPL] --> modifiedPlugin/binary(w/source)[LGPL] The source code for the modified plugin would be available per the LGPL."} {"_id": "49845", "title": "What should a C++ developer expect on an interview at a Rails company?", "text": "I have been working on large-scale C++ backend apps for over 5 years. I'm doing TDD, using STL and Boost, etc. I decided I needed a change, and about a year ago started learning Ruby; a few months ago I started playing with Rails, HTML5 and CSS. I don't know JavaScript yet; I'm focusing on Rails now. What can I expect on an interview for a Ruby on Rails backend developer job? How can I present myself to take advantage of my C++ experience? I'm at a senior level now and I can't start from an intern position. I consider myself really good at C++; I also know some Scheme, some Python and quite a bit of Ruby. I'll have one small Rails app ready and 1 simple Gem published before I start applying, plus quite a few personal C++ projects. I have a bachelor's degree in Electrical Engineering and I'm completing a master's degree in CS in June 2011."} {"_id": "49842", "title": "PHP, structural or OOP based language?", "text": "I would like to discuss why PHP is called a structural language. What are the OO concepts that cannot be implemented using PHP?"} {"_id": "78587", "title": "Facebook and Twitter authentication from Mobile Applications", "text": "I might be missing something with the Facebook and Twitter login APIs. Something is blocking me from understanding how they would be used in mobile applications. Both APIs seem like they need to redirect you to and then back to a website. How would that work from a mobile app? Or is there another way to use these APIs?
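For context, the redirect dance both providers rely on looks roughly like the sketch below; every URL and parameter name in it is a placeholder, not a real Facebook or Twitter endpoint.

```python
# Generic sketch of redirect-style login; all endpoints are hypothetical.
AUTH_URL = "https://provider.example/oauth/authorize"   # placeholder
CALLBACK = "https://myapp.example/callback"             # placeholder

def login_url(client_id):
    # 1. Send the user's browser (or an embedded web view on mobile) here.
    return f"{AUTH_URL}?client_id={client_id}&redirect_uri={CALLBACK}"

def handle_callback(query_params, exchange_code_for_token):
    # 2. The provider redirects back to CALLBACK with a short-lived code;
    #    a mobile app can intercept this redirect in its embedded web view
    #    instead of hosting a real page.
    code = query_params["code"]
    # 3. The backend exchanges the code for an access token server-to-server.
    return exchange_code_for_token(code)
```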
I have to write a website that has native login, Facebook login and Twitter login. Then I am expected to turn around and open up a set of web services to Android and iPhone mobile apps. I thought this was too "loose" a question to ask on Stack Overflow. What is the best approach? Any links on calling the API without UI interaction?"} {"_id": "63342", "title": "What are some easter eggs that went wrong?", "text": "What are some easter eggs that went wrong and resulted in unintended consequences, e.g. programmer(s) being fired, monetary losses, even loss of human life? Please share."} {"_id": "157593", "title": "Convention on model names in Ruby on Rails", "text": "I was doing my ER diagram for a Rails application I'm about to begin, and there I have an entity called `Class News`, so I'd have a model `ClassNew`, but I don't know if I will have problems in the future with the `New` part, or what the right way to do this would be, or what I should call the model, since the right thing would be `ClassNews` and the table should be `class_news`... What's the best thing to do when working with `News` at the time of creating models for ActiveRecord in Ruby on Rails? Thanks."} {"_id": "157591", "title": "Adapting parts of an open-source project for my own use", "text": "I'm in the process of coding a game and almost done with the game mechanics, to the point where it's pretty playable. I later discovered an open source version of the kind of game I'm making with the same mechanics, whereas I coded mine from scratch. However, in the interest of optimizing later on, I may want to adapt a small part of the open source project's code. The project is under the GNU GPL 2, and it's coded in a different language for a different framework (Android/Java versus mine on C#/XNA). What are my options as to copying functions/routines from a GPL open source project, if I would have to port the code anyway for the language I'm using? These are not patented algorithms I'm talking about, just code I found that could possibly make my program more efficient or stamp out a bug."} {"_id": "157596", "title": "How to indicate that a word is a method name in Objective-C?", "text": "You can indicate that `XXX` is a method name by writing `XXX()`. But in Objective-C, you cannot write `XXX()` for the `XXX` method. If you write `XXX`, you can't tell if `XXX` is a method name or another type of identifier. How can you indicate that `XXX` is a method name of an Objective-C class? _**EDIT**_ Method calls in Objective-C are different from other languages. In Objective-C, a method call is: [self XXX]; in other languages, a method call is: self->XXX(); In other languages, for example, we can write on Stack Overflow: XXX() doesn't work, and we know `XXX` is a method thanks to the `()`. But in Objective-C, we can't say: XXX doesn't work. Instead, to make it clear that XXX is a method, we have to say: XXX method doesn't work"} {"_id": "255379", "title": "What are memory addresses?", "text": "I have more or less 0 knowledge of low-level topics, so forgive my possible ignorance. I know that in languages such as C, pointers hold 'memory addresses', i.e. strings (or binary data?) written in hex such as `0x52A132F3`. Judging by the term 'memory address', I assume this number actually leads to some place in memory. But I'm having a hard time understanding what a 'place in memory' actually is and how a hexadecimal number can 'lead' to it. (A Java programmer here...) So, two questions: 1. Do the memory addresses point to places on the CPU itself, or anywhere in the computer?
2. Is memory 'ordered' in hardware in some way that makes it sensible to refer to it using an 'ordinal' number such as `0x52A132F3`? How is memory ordered in hardware, and how and why does it make sense to access it using a numerical value?"} {"_id": "149626", "title": "Properties under ARC: Always or public-only?", "text": "After reading an article humbly named \"The Code Commandments: Best Practices for Objective-C Coding\" by Robert McNally a little less than two years ago, I adopted the practice of using properties for pretty much every data member of my Objective-C classes (the 3rd commandment as of May 2012). McNally lists these reasons for doing so (my emphasis): 1. Properties enforce access restrictions (such as readonly) 2. _Properties enforce memory management policy (strong, weak)_ 3. Properties provide the opportunity to transparently implement custom setters and getters. 4. Properties with custom setters or getters can be used to enforce a thread-safety strategy. 5. Having a single way to access instance variables increases code readability. I put most of my properties in private categories, so numbers 1 and 4 are usually not issues I run into. Arguments 3 and 5 are more 'soft', and with the right tools and other consistencies they could become non-issues. So finally, to me the most influential of these arguments was number 2, memory management. I've been doing this ever since. @property (nonatomic, strong) id object; // Properties became my friends. For my last few projects I've switched to using ARC, which made me doubt whether creating properties for pretty much everything is still a good idea or maybe a little superfluous. ARC takes care of memory-managing Objective-C objects for me, which for most `strong` members works fine if you just declare the ivars. The C types you had to manage manually anyway, before and after ARC, and the `weak` properties are mostly public ones. Of course I still use properties for anything that needs access from outside of the class, but those are mostly only a handful of properties, while most data members are listed as ivars under the implementation header @implementation GTWeekViewController { UILongPressGestureRecognizer *_pressRecognizer; GTPagingGestureRecognizer *_pagingRecognizer; UITapGestureRecognizer *_tapRecognizer; } As an experiment I've been doing this a bit more rigorously, and the move away from properties for everything has some nice positive side effects. 1. Data member code requirements (`@property`/`@synthesize`) shrank down to just the ivar declaration. 2. Most of my `self.something` references cleaned up to just `_something`. 3. It's easily distinguishable which data members are private (ivars) and which are public (properties). 4. Lastly, it 'feels' more like this was the purpose Apple intended properties for, but that's subjective speculation. **On to the question**: I'm slowly sliding towards ~~the dark side,~~ using fewer and fewer properties in favor of implementation ivars. Can you provide me with a bit of reasoning for why I should stick to using properties for everything, or confirm my current train of thought as to why I should use more ivars, with properties only where needed? The most persuasive answer for either side will receive my mark. 
_EDIT: McNally weighs in on Twitter, saying: \"I think my main reason for sticking with properties is: one way to do everything, that does everything (including KVC/KVO.)\"_"} {"_id": "54784", "title": "Software architecture for authentication/access-control of REST web service", "text": "I am setting up a new RESTful web service and I need to provide a role-based access control model. I need to create an architecture that will allow users to provide their username and password to get access to the services and then restrict how they can use the services (which services they can use, read vs read/write, etc.) based upon the roles assigned to those users. I have looked around at other questions and found pieces of what I want. For example, there are several great discussions about how to handle passing credentials to REST services (restful-authentication, best practices). There are also some great pointers on what programmers should know when creating websites (what every developer should know before building a public web site). But I haven't been able to find a good post, article, or book on best practices and patterns for the software architecture that implements these solutions. Specifically: * How should user details and access rights be stored? (data model, location, format) * What are good design patterns for representing and tracking these in the server? (sessions in memory, db lookups each time, etc) * What are good patterns for mapping these rights to the services in a secure way in the code base? * What architectural choices can help keep the system more secure and reliable? * What lessons learned do people have from the trenches? What I am looking for are design patterns and recommendations for the software architecture outside of any specific technologies. (If the technologies matter, I am planning to implement this using python, twisted, and a postgresql database)"} {"_id": "115129", "title": "Why do some teachers often consider bad practice things that are not?", "text": "In college, teachers often say that some things are bad practices while they are not. I'm referring to the recent Why is naming a table's Primary Key column \u201cId\u201d considered bad practice? question, or to the fact that my teacher told us that early returns are bad practice and must never be used. There are plenty of other examples everyone who has been to college remembers. Once learned, those things are hard to unlearn. When you deal with an intern who learned something wrong from her teacher, it's not always easy to explain that what the teacher said is either not totally true or sometimes completely wrong. What makes those teachers mistakenly believe that things are bad practices, while in the programming industry most people would disagree with them? Don't they read the programming books we, developers, read? Don't they talk frequently to developers?"} {"_id": "80915", "title": "Fast algorithm for finding common elements of two sorted lists", "text": "Suppose I have two lists of N 3 by 3 vectors of integers. I need to find a quick way (say of running time at most N^(1+epsilon)) to find the vectors of the first list that have the same 1st coordinate as a vector of the second list. Of course, I could do the following naive comparison: for u in list_1 do for v in list_2 do if u[1] equals v[1] then print u; print v; end if; end for; end for; This, however, would require N^2 comparisons. I feel that sorting the two lists according to their first coordinate and then looking for collisions is perhaps a fast way. 
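To make the sort-then-scan idea concrete, here is my rough attempt at the collision scan in Python (a sketch only; since Python is 0-indexed, the first coordinate is `u[0]` here, and I'm not sure this is the right way):

    # report pairs whose first coordinates match, assuming both lists
    # are already sorted by their first coordinate
    def collisions(list_1, list_2):
        i, j = 0, 0
        while i < len(list_1) and j < len(list_2):
            if list_1[i][0] < list_2[j][0]:
                i += 1
            elif list_1[i][0] > list_2[j][0]:
                j += 1
            else:
                # equal first coordinates: print every matching pair in this run
                k = j
                while k < len(list_2) and list_2[k][0] == list_1[i][0]:
                    print(list_1[i], list_2[k])
                    k += 1
                i += 1

The scan itself is a single pass, so it should cost about N steps plus the number of matches printed.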
The sorting step (with mergesort, say) would take N·logN time, but I can't really tell whether a scan like the one sketched above is the right way to find the collisions between the sorted lists. Any help would be appreciated."} {"_id": "50202", "title": "Effective versus efficient code", "text": "**TL;DR: Quick and dirty code, or \"correct\" (insert your definition of this term) code?** There is often a tension between \"efficient\" and \"effective\" in software development. \"Efficient\" often means code that is \"correct\" from the point of view of adhering to standards, using widely-accepted patterns/approaches for structures, regardless of project size, budget, etc. \"Effective\" is not about being \"right\", but about getting things done. This often results in code that falls outside the bounds of commonly accepted \"correct\" standards, usage, etc. Usually the people paying for the development effort have dictated ahead of time what it is that they value more. An organization that lives in a technical space will tend towards the efficient end, others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience I have found that people with formal education in software development tend towards the Efficient camp. Those that picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well. When managing a team of developers who are not all in one camp, it is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?"} {"_id": "187603", "title": "Naming for a class that consumes an iterator pattern", "text": "The iterator pattern is very clearly defined. What would you call the consumer of an iterator?"} {"_id": "168760", "title": "Guidelines for creating referentially transparent callables", "text": "In some cases, I want to use referentially transparent callables while coding in Python. My goals are to help with handling concurrency, memoization, unit testing, and verification of code correctness. I want to write down clear rules for myself and other developers to follow that would ensure referential transparency. I don't mind that Python won't enforce any rules - we trust ourselves to follow them. Note that we never modify functions or methods in place (i.e., by hacking into the bytecode). Would the following make sense? > A callable object `c` of class `C` will be referentially transparent if: > > 1. Whenever the returned value of `c(...)` depends on any instance > attributes, global variables, or disk files, such attributes, variables, and > files must not change for the duration of the program execution; the only > exception is that instance attributes may be changed during instance > initialization. > > 2. When `c(...)` is executed, no modifications to the program state occur > that may affect the behavior of any object accessed through its \"public > interface\" (as defined by us). > > If we don't put any restrictions on what \"public interface\" includes, then rule #2 becomes: > When `c(...)` is executed, no objects are modified that are visible outside > the scope of `c.__call__`. Note: I unsuccessfully tried to ask this question on SO, but I'm hoping it's more appropriate to this site."} {"_id": "187609", "title": "Identifying Domain Services & Application Services when doing DDD", "text": "I'm trying to figure out how to identify Application Services in my application. 
I think I can identify a Domain Service by two things: 1. It acts as a facade to the repository. 2. It holds business logic that can't be encapsulated in a single entity. So I thought about this simple use case that I'm currently working on: An admin should be able to ban a certain user. The operation must be logged and an email must be sent to the banned user. I have a `UserRepository` which has a function `getUserById()`. I have a `User` entity which has a `ban()` function. I'll create a `UserService` like this: class UserService { static void banUser(int userId) { User user = UserRepository.getUserById(userId); user.ban(); UserRepository.update(user); } } The `userId` is a `POST` variable that I'll be receiving in my `Controller` from a web form. Now where does the logging go? I need to log the ban process twice, before and after the operation, e.g.: Logger.log(\"Attempting to ban user with id:\"+userId); //user gets banned Logger.log(\"user banned successfully.\"); Second, I need to send a mail to the user. Where should that call go? I thought about putting the logging and email in the `UserService` class itself, but I think there are better solutions. Third, where do Application Services fit in all of this?"} {"_id": "195385", "title": "Understanding stack frame of function call in C/C++?", "text": "I am trying to understand how stack frames are built and which variables (params) are pushed to the stack, and in what order. Some search results showed that the C/C++ compiler decides based on operations performed within a function. For example, if the function was supposed to just increment a passed-in int value by 1 (similar to the ++ operator) and return it, it would put all the parameters of the function and local variables into registers. I'm wondering which registers are used for return values or pass-by-value parameters. How are references returned? How does the compiler choose between eax, ebx, ecx and edx? What do I need to know to understand how registers, stack and heap references are used, built and destroyed during function calls?"} {"_id": "160151", "title": "Methods for testing a very large application", "text": "I have a PHP app which is very large. There are usually 2-3 developers working on it full time and we are getting to the point where we are making changes and creating bugs (cough, features!). The software isn't complex per se, there is just a lot going on (about 35 controllers, about the same number of models, etc). Even being careful, it's easy for a change in one view (tweaking an id on an element) to wreck an ajax query happening under some special condition (logged out while standing on one foot). Unit tests are the first thing which springs to mind, but we tried these on another app, and it's so easy to forget them / or spend more time writing tests than doing tests. We do have a staging environment where code gets checked over before pushing live. Maybe we need a part-time Q/A person? Anyone have any suggestions/thoughts?"} {"_id": "24353", "title": "New Starter Introduction and Induction Process", "text": "First day at work for a new programmer. Assuming that the company has covered all the usual corporate stuff (benefits, paperwork, health and safety and so on), **What are the things you should be talking a new developer through to get them up to speed and productive as soon as possible?** In addition, what are the key things in making this process a success? What have you seen work particularly well? (Edit: Added a bounty. 
The current top answer is fine, but I'm still thinking there is more that can be done to make a programmer's first week as useful to them as possible. Really hoping for insights from good and bad starts to jobs people have had)."} {"_id": "160158", "title": "Pricing of a collaborative work", "text": "I suppose there's no straight answer to this, but what ideas come to mind for determining how much each programmer would get for participation in a collaborative project if it were to be sold?"} {"_id": "58779", "title": "Dealing with bilingual(spoken language) code?", "text": "So I've got to work with this set of code here for a rewrite, and it's written by people who speak both English and French. Here's a snapshot of what I'm talking about (only, about 4000 lines of this) function refreshDest(FormEnCours,dest,hotel,duration) { var GateEnCours; GateEnCours = FormEnCours.gateway_dep.options[FormEnCours.gateway_dep.selectedIndex].value; if (GateEnCours == \"\") { FormEnCours.dest_dep.length = 0 } else if (FormEnCours.dest_dep != null && FormEnCours.dest_dep.type && FormEnCours.dest_dep.value != \"ALL\") { if (Destinations[GateEnCours] == null || Destinations[GateEnCours].length == 0) { RetreiveDestinations(FormEnCours,GateEnCours,dest,hotel,duration); } else { refreshDestSuite(FormEnCours,GateEnCours,dest,hotel,duration); } } } function refreshDuration(FormEnCours,GateEnCours,DestEnCours,hotel,duration) { // Refresh durations var FlagMoinsDe5Jours = \"\"; var Flag5a10jours = \"\"; var Flag11a16jours = \"\"; var FlagPlusDe16Jours = \"\"; ....... Is there any approach that I, as a speaker of only one of these languages, can use to make this entire process a lot less painful, both for figuring out what everything does and for then refactoring it?"} {"_id": "158819", "title": "Conceptually what does it mean when it is said that each thread gets its own stack?", "text": "I have been reading Java Concurrency in Practice by Brian Goetz, and inside the section **Stack Confinement** it is mentioned that each thread gets its own stack, and so local variables are intrinsically confined to the executing thread; they exist on the executing thread's stack, which is not accessible to other threads. What does he mean by each thread having its own execution stack?"} {"_id": "164642", "title": "At what point is asynchronous reading of disk I/O more efficient than synchronous?", "text": "Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: **At what size does it become more efficient to read the file asynchronously? Or to put it another way, how small must a file be for it to be faster just to read it synchronously?** I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O Completion Ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?"} {"_id": "158817", "title": "Reasonable commision for position", "text": "My situation is that I have been offered a job working on a product for a small company. I'm currently working for a large corporation for a respectable salary, considering my experience. Long term, I am interested in creating my own products and starting up my own company. The offer is to develop a product that I would primarily be developing myself. 
I looked at the idea and I think it is a pretty good one, with a reasonable chance of being successful. I told him that I would be interested in earning a commission from the sales of the product. Considering that I would be developing the product, what percentage would be reasonable to ask?"} {"_id": "158813", "title": "What should I know or be doing after six years of experience in software development?", "text": "Even after six years of experience I am still doing the things I was doing six years ago. Working on the same mundane CRUD stuff. Nothing has been really challenging. Since I am working in a small company, we don't even have a proper development process. We don't even do unit testing, for example. I feel like I am just not getting to the next level in my career."} {"_id": "164648", "title": "Function that requires many parameters", "text": "I have a problem related to this: Are there guidelines on how many parameters a function should accept? In my case, I have a function that describes a rounded rectangle. The caller specifies * An integer which determines how the rectangle should be merged into previously created shapes * An Anchor, which is a point that is used for alignment (right, left, top, bottom etc). (0,-1) means that the position (the next parameter) describes the top, middle point of the rectangle. * The position of the rectangle * Width and height * Corner radius Should I use the Parameter Object pattern in this case? It is hard to see how these parameters are related."} {"_id": "152450", "title": "IDE for visually impaired", "text": "A visually impaired colleague has asked me to recommend an IDE with easy-to-find and easy-to-use controls for: 1. font size 2. background and foreground colors 3. changing the syntax color scheme 4. support for at least C/C++ and Java He would prefer an IDE that is either portable or that has similar versions for Linux, Windows and Mac. He prefers a dark background and light-colored fonts and needs to sit very close to the display."} {"_id": "200340", "title": "What is the justification for using .inc files to declare and implement code in some Delphi RTL units?", "text": "Starting with #Delphi #XE2, many of the new RTL units related to Vcl styles, OSX and so on use .inc files to declare types and classes and implement code (just like the FPC does). What is the justification for doing that? You can see what I mean if you inspect one of these folders (source\\rtl\\posix, source\\rtl\\posix\\osx, source\\rtl\\sys)"} {"_id": "212719", "title": "What is the big-O CPU time of Euclid's algorithm for the \"Greatest common divisor of two numbers\"?", "text": "Looking at Euclid's algorithm for the \"Greatest common divisor of two numbers\", I'm trying to divine the big-O CPU time for numbers K and N. Can anybody help? This is the algorithm as I understand it.. Where: * max(A,B) = the greater of A or B, such that: max(10,3) = 10 * min(A,B) = the smaller of A or B, such that: min(10,3) = 3 * modulus(A,B) = the remainder of A divided by B, such that: modulus(10, 3) = 1 The algorithm is: 1. r = modulus(max(K,N), min(K,N)) 2. if r = 0 * then the GCD is min(K,N) * else: * max(K,N) = r * go to step 1 It appears to me that, due to the division occurring, it can't be a linear algorithm, which is why it's useful as something more efficient than a naive implementation that just tries all the possibilities from 0 to min(K,N). But I can't quite figure out just what the runtime is. 
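To be concrete, here is a direct Python transcription of the steps above (my own sketch, assuming K and N are positive integers):

    def gcd(k, n):
        a, b = max(k, n), min(k, n)
        while True:
            r = a % b        # step 1: r = modulus(max, min)
            if r == 0:
                return b     # step 2: if r = 0, the GCD is min(K,N)
            a, b = b, r      # otherwise replace the larger number with r and repeat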
Any pointers would be very helpful!"} {"_id": "197585", "title": "Code Base with awful code conventions, follow them?", "text": "With no time to refactor, when you are working on legacy code with the most awful conventions, what is the best practice? Would trying to follow a _better_ coding style improve readability, or actually hurt it? For example, in Java, method names are usually camel-cased: > myGoodNamedMethod But in the repository there is code with methods like this (not all, but the majority): > my_c_style_method_that_looks_off_in_java"} {"_id": "170981", "title": "Alternatives to type casting in your domain", "text": "In my Domain I have an entity `Activity` which has a list of `ITask`s. Each implementation of this task has its own properties besides the implementation of `ITask` itself. Now each operation of the `Activity` entity (e.g. `Execute()`) only needs to loop over this list and call an ITask method (e.g. `ExecuteTask()`). Where I'm having trouble is when a specific task's properties need to be updated, as in configuring the task (not as part of the execute method). How do I get an instance of that task? The options I see are: * Get the Activity by Id and cast the task I need. This'll either sprinkle my code with: * `Tasks.OfType<SpecificTask>().Single(t => t.Id == taskId)` * or `Tasks.Single(t => t.Id == taskId) as SpecificTask` * Make each task unique in the whole system (make each task an entity), and create a new repository for each ITask implementation I don't like either option: the first because I don't like casting: I'm using NHibernate and I'm sure this'll come back and bite me when I start using Lazy Loading (NHibernate currently uses proxies to implement this). I don't like the second option because there are/will be dozens of different kinds of tasks, which would mean I'd have to create as many repositories. Am I missing a third option here? Or are any of my objections to the two options not justified? How have you solved this problem in the past?"} {"_id": "91728", "title": "What drives the adoption, or not, of new programming languages?", "text": "I'd really like to focus on why some new programming languages are adopted in the mainstream, and others remain relatively niche. I'd like to know about things like specific use cases, backwards compatibility, new features, and simple or complex implementation difficulty. Specific examples would be appreciated, but let's not get caught up on the exact definition of \"mainstream\" or \"niche\" here."} {"_id": "34456", "title": "Business case for decentralized version control systems", "text": "I searched and couldn't find any _business_ reasons why git/mercurial/bazaar systems are better than centralized systems (subversion, perforce). If you were trying to sell a DVCS to a non-technical person, what arguments would you provide for the DVCS **increasing profit**? I will shortly be pitching git to my manager; it will take some time converting our subversion repositories and some expense in buying SmartGit licences. **Edit** I tried to make this question into a generic discussion on centralized vs decentralized, but inevitably it has turned into git vs subversion. 
Surely there are better centralized systems than subversion."} {"_id": "106865", "title": "Why aren't Multi-Platform Programming Languages popular?", "text": "So I was talking to a friend and he mentioned > Using WAC I can write Javascript code that will compile and run on iOS, > Android, BB etc and there is this programming language that lets you write code that is _multi-platform_ : http://haxe.org/doc/intro I was wondering why these haven't gotten much attention yet, because the idea is so good. So in a sense, what is probably wrong with these ideas from a real-world perspective (real deployable projects)? **EDIT:** So I have probably misused some terms, so here are some clarifications. It goes something like this: for example, I write code in one language, then I can run it on iOS and Android devices, both of which have their own native programming languages. That means I have an app that I can run on both devices without having to code any Java or Objective-C. That's the best I can do for clarification because I myself am a little confused T_T"} {"_id": "106862", "title": "How do you handle regular latecomers at the stand-up meetings?", "text": "We have our daily stand-up meeting at **8:45** while the workday starts at **8:30**. Even with the 15 minutes of slack, people keep coming in too late, on a regular basis. This results in our meetings being small, incomplete, inefficient and sometimes even completely useless. We have tried to have people bring fruit for the team on the next day, but this didn't work, as we'd be having multiple breakfasts every day (and after a while they didn't even bother to bring any anymore). We have been thinking of imposing a fine, but we find solutions like that quite childish. How do you handle regular latecomers at stand-up meetings? What worked - and what didn't?"} {"_id": "223516", "title": "How to represent compound elements within a database?", "text": "I need to design a database table that holds the data of programmers' work hours. The problem is that I have a compound type - two pieces of data belong to a category. Specifically, the work hours are classified into categories: design, program, test, troubleshoot, etc... and each category is divided into two types: hours spent at work and hours spent at home. How can I design a compound type on the database side? ![enter image description here](http://i.stack.imgur.com/gXob9.jpg) I thought about storing the two types in one field like \"6,2\" but that creates more problems to resolve. I would now have to convert and split data, along with worrying about localization issues, missing data, or another location being added to the categories. * * * Clarification: Asking my question a different way, it's relatively easy to design a table without compound types, like this: ![enter image description here](http://i.stack.imgur.com/izuiu.jpg) But how do I deal with it when each category is split into two types?"} {"_id": "73133", "title": "What to name a parent thread that just delegates to workers?", "text": "I have a program where one thread creates a work queue and a bunch of workers that pull from it, and occasionally does some cleanup work for the workers. If the workers are all instances of `FooWorker`, what would I name the other class? One idea I've had so far is `FooManager`, but maybe there's a better one?"} {"_id": "165062", "title": "Calculations in Vector Register", "text": "How do vector registers work in terms of calculations and allotting data to them? 
Is there a detailed reference available somewhere explaining how vector registers work and how data is fetched from them?"} {"_id": "197343", "title": "Why don't I see many unit test projects that bring up and tear down a DB? (ASP.NET MVC)", "text": "I see all the examples that demonstrate unit-testing code and mocking the calls to the DB, since you are not supposed to touch the DB. But it seems to me that having a setup task that uses the actual schema, loads the lookup tables and then populates them with data using the methods to be tested... This way it is more real-world testing, and all the stored procedures are tested as well.. But I have never seen any examples like this... Is there some reason, which I do not understand, why using this technique is not as useful as it seems? EDIT1: Sorry.. I shouldn't have called it unit testing.. yes, I will have unit tests as well that are for testing code.. But beyond that, it seems like a good idea to bring up and tear down a DB and have code that modifies that DB and does assertions.. But I don't want this to be UI testing.. I just want to perform a lot of CRUD functions and a lot of assertions.. My questions are: #1 does this make sense, and #2 how do I automate this all within Visual Studio? How do I tell VS 2012 to run a SQL script before running the tests? Is there some special API for this?"} {"_id": "66590", "title": "Why Python and not Lua?", "text": "Why has Python been backed by Google and become so rapidly popular, while Lua has not? Do you know why Lua has stayed in the background?"} {"_id": "165067", "title": "Should all new web projects build their backend based on xml/json result sets?", "text": "If you were building a new SaaS project, would it make sense to start with all of the backend services returning xml/json? Because these days you need to build for both the web and mobile devices, and having a backend that is built from the start to return xml and json means you are ready to go mobile (all services have the business logic, so you won't be repeating anything). Now the web would be MVC, so the controller would just be routing the request to your service backend, and converting the json or xml to html. The obvious downside is that you have to build a backend, and then another web project that calls your backend. But this also works in your favor, as it forces you to separate your concerns and not leak business logic into your controller/view layer. Thoughts?"} {"_id": "93768", "title": "ACM student chapter event ideas?", "text": "I was recently elected president of the student chapter of the ACM at my university, and am trying to come up with some ideas for events for the following year. The big one we do every year is practice matches and tryouts for the ICPC. I am hoping to host one or two more events this year, and was wondering if anyone has some cool ideas or past experiences that worked well. One idea that we have had so far is to do a hackathon, but it generally seems difficult to convince the underclassmen that they have the skill to participate in events like this."} {"_id": "255887", "title": "The bad habits formed in the process of learning the C programming language", "text": "In the Cambridge University Computer Science curriculum booklet of 2012-13, under the module entitled 'Foundations of Computer Science', it states: > The main aim of this course is to present the basic principles of > programming. As the introductory course of the Computer Science Tripos, it > caters for students from all backgrounds. 
To those who have had no > programming experience, it will be comprehensible; to those experienced in > languages such as C, it will attempt to correct any bad habits that they > have learnt. They seem pretty sure that one who has learned a little C has formed 'bad habits' in the process. I wish to learn C, for I am interested in reading and writing kernel source code. How do I learn C without forming 'bad habits'?"} {"_id": "181521", "title": "Representing a rule in a ruleset", "text": "How do I represent rules for a rule engine as objects? A rule would be > if ( _booleanExpression(input)_ ) then _a chain of generic actions_ else > _next rule_ ...where the generic actions might be, e.g., passing the input to a different chain of rules, returning a conclusion (terminating analysis with a result), or gathering additional data - though probably other actions might prove necessary too. Now, the problem of parsing arbitrary boolean conditions aside (let's assume we have a fully working `booleanExpression` class available), what would such a Rule object look like, and specifically, what would the Ruleset object containing an orderly, interconnected grid of these look like - in particular, in a structure that the actual engine will be able to process? (If the data structures don't make it obvious how such an engine should process them, maybe a hint about that too?) This will be written in C++ but a language-agnostic answer is most welcome too. * * * This is a derivative of my question about an Event Correlator, which seems to be unanswerable simply because it's too broad to tackle in one answer, so I'm trying to split the problem into smaller, bite-sized chunks."} {"_id": "235294", "title": "Maintaining indices for location in a sorted list in database rows", "text": "I have a fairly simple data structure like this: create table project (id int auto_increment primary key, name text); create table item (id int auto_increment primary key, name text, project_id int not null, project_sort_index int not null, sort_index int not null, foreign key fk_project(project_id) references project(id)); A `project` can have many `item`s. Items have two different fields for maintaining sort order, `project_sort_index` and `sort_index`. These sort order fields apply in the following way. When I view all items belonging to a project, I need a sort order specific to each project for the items. When I view _all_ items globally, I need a sort order for them globally. I have a couple of questions as to how best to maintain and modify sort order for these items. Let's say that I have moved an item at index 4 to index 2. How do I propagate that change to my database efficiently? For example, how do I now update all index numbers >= 2 to move everything down, yet close the gap left in place 4? Is there a better way of sorting lists in SQL?"} {"_id": "195140", "title": "Is reverse debugging possible?", "text": "I know there are some products for reverse debugging. I am wondering: does reverse debugging mean going one step back, or starting over again up to one step back? I've found an explanation here."} {"_id": "51554", "title": "Software license options", "text": "Is there a software license that allows free access to source code, but does not allow redistributing any binaries, either built directly from the source code or from modified source code, for a limited time? 
The idea being much like an open-source software patent: the original developer has the exclusive right to sell and distribute the product and to prevent others from copying it for a limited time, while the source code would be disclosed to the public. Obviously the downside is enforcing this license, but larger companies (e.g. Microsoft) could possibly gain the benefits of open source projects but still keep their proprietary position."} {"_id": "91898", "title": "Patterns and practices for Web Scraping in .Net (C#)", "text": "I will be putting together an application to automate an external web site/application. In some instances I will need to navigate the site as a user would (some links I need to follow cannot be predicted and must be parsed from a response). I am already using Html Agility Pack, and am aware of Tidy if that is needed. **Are there any other technologies I should be aware of?** **Are there any recommended patterns for being able to quickly adjust in the event that the external web app changes?** I\u2019m envisioning encapsulating the validation of responses as some type of strategy or similar pattern that can be easily separated/plugged in as necessary, but any specific suggestions would be great."} {"_id": "111351", "title": "Are Quines useful as anything more than a programming puzzle?", "text": "Quines, which are programs that generate their own code as part or all of their output, are a neat idea for a programming puzzle. However, do they have any use beyond that?"} {"_id": "214017", "title": "Architecting multi-model multi-DB ASP.NET MVC solution", "text": "I have an ASP.NET MVC 4 solution that I'm putting together, leveraging IoC and the repository pattern using Entity Framework 5. I have a new requirement to be able to pull data from a second database (from another internal application) which I don't have control over. There is unfortunately no API available for the second application, and the general pattern at my place of work is to go directly to the database. I want to maintain a consistent approach to modeling the domain and use Entity Framework to pull the data out, so thus far I have used Entity Framework's database-first approach to generate a domain model and database context over the top of this. However, I've become a little stuck on how to include the second domain model in the application. I have a generic repository which I've now moved out to a common DataAccess project, but short of creating two distinct wrappers for the generic repository (so each can identify with a specific database context), I'm struggling to see how I can elegantly include multiple models."} {"_id": "189734", "title": "Why does Java require a servlet container for simple RPC service?", "text": "I have a big database controller which is written in Java. The controller reads information from the database and interprets it into data structures which are then displayed in a CLI. Java was chosen because writing code in it is fast and easy. Now I want to create an RPC server on top of the controller (XML-RPC or JSON-RPC for future AJAX calls), but it looks like I need a servlet container for my RPC service. I am confused, because last year when I needed this kind of capability for another project in Python, it took me less than 5 minutes to create the same functionality using SimpleXMLRPCServer. The same ease of creation also applies to C#, as far as I recall. 
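For reference, the whole Python version was roughly this (a from-memory sketch, so the details may be off; `get_data` is a stand-in for my real controller call):

    # Python 2 module name; in Python 3 the class lives in xmlrpc.server
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def get_data(query):
        # stand-in for the real database-controller call
        return {"query": query, "rows": []}

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(get_data, "get_data")
    server.serve_forever()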
But in Java the story is different; now I need a servlet and therefore a servlet container (e.g. Tomcat, Jetty), which means I need to install and maintain web servers. From what I can tell, JSON-RPC requires the `Spring` framework in order to work. I have already spent around two hours learning the design, and sort of how Tomcat works, without writing even one line of code. I searched the web and found out that I have standalone options: I can use this library, but it doesn't seem maintained and it is also somewhat complex (I'm looking for something with decorators / annotations). The other option I found is to use the so-called \"embedded\" Jetty and then try to set it up and configure it in code, which also seems a tedious task. Why isn't there a **standalone** mechanism for such a popular interface? Am I missing something here?"} {"_id": "143279", "title": "Attending my first software conference - any tips before I go?", "text": "My nice employer allowed me to visit a software conference in June (International PHP Conference, for those who care). Wanting to make the most of it, I would ask the more experienced conference-goers in here to give me some tips on what I could do to maximize my learning experience at the conference, and to reduce beginner mistakes. Sorry that this question is a little ambiguous, but I think it's best to keep it a little bit more open, so I can get a wide range of ideas, and it will be of more use to other people seeking an answer."} {"_id": "111358", "title": "Functional programming: Writing a small interpreter", "text": "I'm working on a small Unix shell, and am contemplating the idea of writing a script interpreter. While reading about the subject I inevitably hear of functional programming, lambda calculus, and find out about the whole fascination around Lisp. Before I jump into this I have some questions. * Which language should I use? I am curious about functional programming, so this would be a great opportunity to start. I want my shell to have as few 3rd-party dependencies as possible. I am wondering whether I should look for a compiled language; I would like to be able to distribute it more easily. Is this a correct approach? If so, which language would you recommend? * How do you embed an interpreter in your program? The way I see it is having the interpreter run in a second, separate process. As far as I know, two communicating processes are either listening to a pipe or sending signals to one another. Is this a realistic approach? Is there a particular language that handles this part? Are there other ways of embedding the interpreter?"} {"_id": "189730", "title": "What design patterns does every developer need to know", "text": "I was wondering what design patterns **every** developer, no matter what dev language, should know, and why?"} {"_id": "110402", "title": "How would you go about looking for collaborators?", "text": "I seem to have a never-ending stream of more-or-less original, more-or-less cool ideas on my mind for software/apps/stuff yet to be written. Sometimes, I decide to just start implementing my idea. Several hours later, I end up with a decent, more-or-less working prototype of what I'm trying to build. Then, my alarm clock goes ringing and I have to get back to the real world, tired as hell. In most cases, the stuff I started remains unfinished forever. Sometimes, that's okay. Other times, I honestly feel like that's a bummer. But I realize that there are sites like GitHub and there are many other coders out there. 
Is there a place where one can post ideas, proposals, concepts, or rough-around-the-edges code in order to find people who are interested in collaborating on projects? **Edit:** I am aware of \"the usual way\" \u2013 keep developing on your own for some period of time, open-source the code, mention your project on dev blogs, IRC or wherever else you go; this eventually attracts others. What I'm looking for is a place to connect with other devs (e.g. of different specializations) in the early stages of a project."} {"_id": "130358", "title": "Can cross-platform development be speeded by really specific, complete pre-coding design?", "text": "I'm green at programming but not entirely noobish. I taught myself some Java about five years ago, and I learnt me some Objective-C recently. I was talking with a Unix guy a while back, and he said that if you do your object architecture and design very completely and correctly, cross-platform development becomes really easy. All you have to do is take the description of each element and implement it. Since your design is so complete, each individual element is composed of relatively simple tasks. I'd like to get some feedback on that concept. In theory it makes sense, but programming is about execution. And if you agree with the concept, can you point me to some material that would help me understand how to build those kinds of complete, specific designs?"} {"_id": "34843", "title": "Why are software schedules so hard to define?", "text": "It seems that, in my experience, getting us engineers to accurately estimate and determine tasks to be completed is like pulling teeth. Rather than just giving a swag estimate of 2-3 weeks or 3-6 months... what is the simplest way to define software schedules so they are not so painful to define? For instance, customer A wants a feature by 02/01/2011. How do you schedule time to implement this feature, knowing that other bug fixes may be needed along the way and take up additional engineering time?"} {"_id": "221674", "title": "WCF and object-oriented programming", "text": "I am building a program which would have WCF support. I am using the MVC pattern. For each controller there is a WCF service class, e.g. I have `CTRL.CTRLBooking`, `WCF.IWCFBooking` and `WCF.WCFBooking`. For each controller (a total of 10) there is a WCF class and an endpoint. Is this a good approach, or should I keep all the methods I provide in one single class, e.g. `WCF.WCFService` (considering their count would be no more than 25-30)?"} {"_id": "35491", "title": "Examples of different architecture methodologies", "text": "Is there a resource or site which illustrates building the same application (desktop or web) using several different contrasting architectures? Such as MVP versus MVVM versus MVC, etc. It would be very helpful to see how they look side-by-side using real-world code instead of comparing written theory to written theory. I've often found that something can be _described_ well in a book, but when you go to _implement_ it, the subtleties and weaknesses of the theory become readily apparent."} {"_id": "204673", "title": "CQRS and validations", "text": "I'm starting to introduce myself to CQRS concepts, but I get stuck with the following situation: Suppose you have an entity that must have a unique name. In order to verify that, prior to creating the entity you must make a query; thus you are verifying against the query subsystem. 
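In code, the naive flow I have in mind is something like this (a rough Python sketch; `query_side` and `command_side` are hypothetical stand-ins for the two subsystems):

    def create_entity(name, query_side, command_side):
        # verify uniqueness against the (possibly stale) read model
        if query_side.name_exists(name):
            raise ValueError("name already taken: %s" % name)
        # issue the command to the write model
        command_side.create_entity(name)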
But what happens if the synchronization between the command system and the query system has not happened yet? Another client may have just sent the same name before you. What happens in that case?"} {"_id": "204672", "title": "Best way to test a reimplemented web service", "text": "In my team, we are about to start the reimplementation of a service. One of the important steps in accomplishing this is ensuring that we are doing it the right way and that we are not introducing new bugs. So, what we have in mind is to create a bunch of tests to verify the following: * The behavior. Both services should behave the same way (store data in the same places, send the same notifications, etc...) * Results. Objects returned by the service's calls should be exactly the same. So, what we have thought of doing is the following: * Create a set of tests that verifies the behavior and results of the old service. * Create the new service, adding unit tests and whitebox integration tests for the new service * Create some kind of \"Mirror test\" that checks whether the new service is working like the old one. Is there a way to do this? Thanks for any help. Note: None of the code of the previous version is reusable. There are no tests for the older version of the service. [Edited] Replaced the original word \"migrate\" with \"reimplement\"."} {"_id": "181299", "title": "How flexible can hardware get?", "text": "This subject has been a long time in the making for me, and it particularly took off when I was researching bootloaders for computers and consumer electronics, which, I will note, differ drastically. I've learned how ancient and inflexible x86 hardware is and just how much the structure of software is constrained by it. Examples of what I am talking about: * Bootloaders cannot have an arbitrary size. * Specialized functions like memory-mapped hardware. * An Intel processor's ties to a particular type of firmware. So I've wondered about systems that might be designed like this: * Writing text to the screen handled pixel-by-pixel by an operating system instead of an intermediary microcontroller. * Sectorless hard drives wherein a computer simply begins executing at the first address and the bootloader can be any size. * Functionality of microcontrollers moved into the software. I understand that this would take away a lot of simplicity, but do systems like this exist? I am guessing this would be more prevalent in embedded systems. To better understand this question, imagine a device where there are no controllers or independent systems and everything is controlled by the CPU."} {"_id": "253028", "title": "opensaml assertion signature validation", "text": "I have a SAML 2 response with one assertion which is signed, and the response itself is signed again. I use the code below to validate the signature profile of the response. SAMLSignatureProfileValidator signatureProfileValidator = new SAMLSignatureProfileValidator(); signatureProfileValidator.validate(**response**.getSignature()); And the code block below to validate the signature. SignatureValidator signatureValidator = new SignatureValidator(validatingCredential); signatureValidator.validate(**response**.getSignature()); But I believe that these only validate the response signature and the response signature profile. **Do I need to validate the assertion signature as well?** I have tried validating the assertion signature using the code block below, but it gives me a ValidationException, which means it is not valid. But it should be. 
SignatureValidator signatureValidator = new SignatureValidator(validatingCredential); signatureValidator.validate(**assertion**.getSignature());"} {"_id": "181293", "title": "Developing with confidence without a real development environment", "text": "I've recently been hired for a project that involves working with and around several third-party \"enterprise\" systems. Due to what I imagine would be the astronomical cost and effort required to build a sufficiently faithful replica of the production environment, the prospect of having a real development environment seems vanishingly slim. This is of course not ideal. On the bright side, I imagine there must be people out there safely testing and deploying software into unreplicable environments like this, and I can probably follow in their footsteps. How do those who effectively deal with these kinds of situations do it?"} {"_id": "253027", "title": "Has there really not been one thing in the past 20 years that provided huge software development gains?", "text": "In _No Silver Bullet_, Fred Brooks makes a variety of predictions about the future of software engineering, best summed up by: > There is no single development, in either technology or in management > technique, that by itself promises even **one order-of-magnitude > improvement** in productivity, in reliability, in simplicity. His argument is very convincing. Brooks was writing in 1986: was he right? Do developers in 2014 produce software at a rate less than 10x faster than their counterparts in 1986? By some appropriate metric - how large has the gain in productivity actually been?"} {"_id": "181290", "title": "Verifying a debit card online - What information is checked?", "text": "I am eager to know what information is checked by the online companies to confirm that the card is yours. If a programmer has to implement this functionality, how can he access information like the address of the client, which is not written on the debit card? Thanks"} {"_id": "253022", "title": "alerting that an object cannot be deleted (due to constraints)", "text": "Assume an application with a rich domain model with many classes (e.g. `School`, `Classroom`, `Teacher`, `Student`, `Course`, `Exam`, `Submission`, ...) linking to each other. The model and links are mapped to the database, which uses appropriate FKs and constraints (without cascade delete). In the admin panel the user has delete buttons next to each object. Attempting to delete an object has one of the following two outcomes: * the object isn't being referenced from any other object, so it is deleted * the object is being referenced by at least one other object, so it cannot be deleted. An alert is shown to the user. I've got two ways to implement this: 1. prior to executing the sql delete, do whatever queries are necessary to discover whether this object can be deleted. If it cannot, then alert the user. 2. go ahead and execute the sql delete, and if it fails (due to the rdbms constraints) catch the sql exception and alert the user that it cannot be deleted. Both ways work well. The first way allows me to give the user a detailed reason why the object cannot be deleted (e.g. it is being referenced by 2 `Course`s and 1 `Classroom`). The second way allows me to solve the whole problem without writing any constraint-checking code, relying instead on the solid (and existing) implementation of the db. 
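For example, the second way boils down to something like this (a minimal Python/SQLite sketch of the idea; `teacher` is just one of the tables from the model above, and SQLite only enforces FKs with PRAGMA foreign_keys = ON):

    import sqlite3

    def try_delete(conn, teacher_id):
        # attempt the delete and let the RDBMS constraints decide
        try:
            with conn:
                conn.execute("DELETE FROM teacher WHERE id = ?", (teacher_id,))
            return True   # not referenced anywhere: deleted
        except sqlite3.IntegrityError:
            return False  # referenced by at least one other object: alert the user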
Is there a reason why I should definitely choose one over the other?"} {"_id": "181296", "title": "lightweight document indexing to handle less than 250k potential records", "text": "Recently I've found myself chafing at the limitations of document indexing engines. I was developing a small website that needed some fairly robust searching capabilities but, due to its hardware constraints, I couldn't deploy a Lucene-ish solution (such as Solr or ElasticSearch, like I normally would) to handle this need. And even then, while I needed to serve up some complex data and calculations that were database-intensive, I didn't need to handle more than 250k potential records. Deploying an entire Solr or ES instance just to handle this seemed like a waste. After I thought about it, it seems like a fairly large problem. Most people handle search requirements solely with SQL. They just run SQL queries for their data and that's that. Their search capabilities also end up being terrible. * Doing a blanket full-text wildcard search can be painfully slow on some systems (shared hosts in particular) and bog down your database, especially if you have complicated queries and lots of joins. * You end up doing multiple queries on a single request from the user. You might get around this with ever-more-complicated queries, but see the previous point. * Lack of features typically present in full-text engines. Databases had the same problem of needing to be deployed as a server, and then SQLite came along and suddenly we could deploy a database that is self-contained in a single file. My Googling has produced nothing - I wonder if something like this exists for full-text indexing/searching. What factors should be taken into account when deciding whether to implement lightweight document indexing (e.g. as explained in answers to another question) or to keep using SQL for these situations?"} {"_id": "165685", "title": "WCF/webservice architecture question", "text": "I have a requirement to expose certain items from a CMS as a web service, and I need some suggestions - the structure of the items is as such: item - field 1 - field 2 - field 3 - field 4 So, one would think that the class for this would be: public class MyItem { public string ItemName { get; set; } public List<MyField> Fields { get; set; } } public class MyField { public string FieldName { get; set; } public string FieldValue { get; set; } //they are always string (except - see below) } This works when it's always one level deep, but sometimes one of the fields is actually a pointer to ANOTHER item (`MyItem`) or multiple items (`List<MyItem>`), so I thought I would change the structure of `MyField` as follows, making `FieldValue` an `object`: public class MyField { public string FieldName { get; set; } public object FieldValue { get; set; } //changed to object } So now I can put whatever I want in there. This is great in theory, but how will clients consume this? I suspect that when users make a reference to this web service, they won't know which object is being returned in that field. This seems like a not-so-good design. Is there a better approach to this?"} {"_id": "165683", "title": "Removing an element not currently in a list: ValueError?", "text": "This is something that's bothered me for a while, and I can't figure out _why_ anyone would ever want the language to act like this: In [1]: foo = [1, 2, 3] In [2]: foo.remove(2) ; foo # okay Out[2]: [1, 3] In [3]: foo.remove(4) ; foo # not okay? 
--------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/izkata/ in () ValueError: list.remove(x): x not in list If the value is already not in the list, then I'd expect a silent success. Goal already achieved. Is there any real reason this was done this way? It forces awkward code that _should_ be much shorter: for item in items_to_remove: try: thingamabob.remove(item) except ValueError: pass Instead of simply: for item in items_to_remove: thingamabob.remove(item) * * * As an aside, no, I can't just use `set(thingamabob).difference(items_to_remove)` because I _do_ have to retain both order and duplicates."} {"_id": "165680", "title": "Designing a system with different business rules for different customers", "text": "My company is rewriting our proprietary business application. The current architecture is poorly done and inflexible. It is coded in a more procedural style as opposed to object-oriented. It has become difficult to maintain. Our system is a web application written in .Net Webforms. I am considering ASP.Net MVC for the rewrite. We intend to rewrite it with a good, solid architecture with the goal of maintainability and reusable classes for some of our other systems and services. We would also like the system to be customizable for different customers in the event that we market the system. I am considering redesigning the system based on the layered architecture (Presentation, Business, Data Access layers) described in the Microsoft Patterns and Practices Application Architecture Guide. http://msdn.microsoft.com/en-us/library/ff650706.aspx Hopefully this isn't too open-ended, but how would you recommend allowing for different business logic/rules for different customers? I'm aware of Windows Workflow Foundation, but from what I've read about it, it seems many business rules could be too complicated to handle there. Also, can anyone point me to where I can download an example of a .net solution that is based on the Application Architecture Guide? I have already downloaded the Layered Architecture Solution Guidance and the Expense Sample on CodePlex. I was looking for something a bit larger and more robust where I could step through the code and see how it works. If you feel there are better architectures to base our redesign on, please feel free to share. I appreciate your help!"} {"_id": "197961", "title": "Is Domain Entity violating Single Responsibility Principle?", "text": "The single responsibility (reason to change) of an entity should be to uniquely identify itself; in other words, its responsibility is to be findable. Eric Evans' DDD book, pg. 93: > most basic responsibility of Entities is to establish continuity so that > behavior can be clear and predictable. They do this best if they are kept > spare. Rather than focusing on the attributes or even the behavior, strip > the Entity object's definition down to the most intrinsic characteristics, > particularly those that identify it or are commonly used to find or match > it. Add only behavior that is essential to the concept and attributes that > are required by that behavior. > > Beyond that, look to remove behavior and attributes into other objects > associated with the core Entity. Beyond identity issues, Entities tend to > fulfill their responsibilities by coordinating the operations of objects > they own. 1. > ... 
strip the ENTITY object's definition down to the most intrinsic > characteristics, particularly those that identify it or are commonly used to > find or match it. Add only behavior that is essential to the concept ... Once an _entity_ is assigned a _unique ID_ , its identity is established, and so I would assume such an entity doesn't need any behavior to _maintain its identity_ or to _help it identify itself_. Thus, I don't understand what kind of behavior the author is referring to ( besides `find` and `match` _operations_ ) with \" _behavior that is essential to the concept_ \"? 2. > ... strip the ENTITY object's definition down to the most intrinsic > characteristics, particularly those that identify it or are commonly used to > find or match it. ... Beyond that, look to remove behavior and attributes > into other objects associated with the core ENTITY. So any behavior that doesn't help identify the entity, but that we'd still characterize as being an _intrinsic characteristic_ of that entity ( i.e. barking is intrinsic to dogs, flying is intrinsic to airplanes, laying eggs is intrinsic to birds ... ), should be put into other objects associated with that entity ( example: we should put barking behavior into an object associated with a dog entity )? 3. > Beyond that, look to remove behavior and attributes into other objects > associated with the core ENTITY. a) `MyEntity` delegates responsibilities `A_resp` and `B_resp` to objects `a` and `b`, respectively. Even though most of the `A_resp` and `B_resp` work is done by the `a` and `b` instances, clients are still served `A_resp` and `B_resp` through `MyEntity`, which means that from the client's perspective the two responsibilities belong to `MyEntity`. Thus, doesn't that mean `MyEntity` also has the `A_resp` and `B_resp` responsibilities and as such is violating **SRP**? b) Even if we assume that `A_resp` and `B_resp` don't belong to `MyEntity`, `MyEntity` still has a responsibility `AB_resp` of coordinating the operations of objects `a` and `b`. So doesn't `MyEntity` violate **SRP**, since at minimum it has _two responsibilities_ \u2013 to uniquely identify itself and also `AB_resp`? class MyEntity { private A a = ... private B b = ... public A GetA() { ... } public B GetB() { ... } /* coordinates operations of objects a and b */ public int AworkB() { ... } } /* A encapsulates a single responsibility resp_A */ /* A is a value object */ class A { ... } /* B encapsulates a single responsibility resp_B */ /* B is a value object */ class B { ... } **UPDATE:** 1. > Behavior in this context is referring to semantic behavior. For example, a > Property on a class (i.e. an attribute on a domain object) that is used to > uniquely identify it has a behavior. While this is not represented in code > directly, the expected behavior is that there will not be any duplicate > values for that property. So in code we would almost never need to actually implement a behavior ( i.e. an operation ) that would somehow maintain an entity's identity, since as you explained such a behavior only exists as a concept in a domain model ( in the form of an ID attribute of an entity ), but when we translate this ID attribute into code, part of its semantics is lost ( i.e. the part which implicitly makes sure the ID value is unique is lost )? 2. > Furthermore, a property such as Age has no context outside of a Person Entity, > and as such, makes no sense to move into a different object ...
However that > information could easily be stored in a separate location than the unique > identifier, hence the confusing reference to behavior. Age could be a lazy > loaded value. a) If the `Age` property is lazy loaded, then we may call it a behavior, even though semantically `Age` is just an attribute? 3. > You could easily have operations specific to Address such as verification > that it is a valid address. You may not know that at design time, but this > whole concept is to break down objects into their smallest parts. While I agree that we'd lose context by moving `Age` into a different object, the context wouldn't be lost if we moved the `DateOfBirth` property into a different object, but we usually don't move it. What is the main reason that we'd move `Address` into another object, but not `DateOfBirth`? Because `DateOfBirth` is more intrinsic to the `Person` entity, or because there is less chance that somewhere in the future we may need to define operations specific to `DateOfBirth`? 4. I must say I still don't know whether `MyEntity` also has the `A_resp` and `B_resp` responsibilities, and why `MyEntity` also having `AB_resp` isn't considered a violation of **SRP**. **EULERFX** 1) > The behaviors that the author is referring to are behaviors associated with > the entity. These are the behaviors that modify the state of the entity a) If I understand you correctly, you're saying that an _entity_ should only contain those _behaviors_ that modify its _attributes_ ( i.e. its _state_ )? b) And what about the _behaviors_ that don't necessarily modify the _state of the entity_ , but are still considered as being an _intrinsic_ characteristic of that _entity_ ( example: _barking_ would be an _intrinsic_ characteristic of a `Dog` entity, even if it didn't modify the _Dog's state_ )? Should we include these behaviors in an _entity_ or should they be moved to other objects? 2) > As far as moving behavior to other objects, the author is referring to value > objects specifically. Though my quote doesn't include it, the author does mention in the same paragraph that in some cases _behaviors_ ( and _attributes_ ) will also get moved into _other entities_ ( though I understand the benefits of moving _behaviors_ to VOs ). 3) Assuming `MyEntity` ( see question **3.** in my original post ) doesn't violate SRP, would we say that the _responsibility_ of `MyEntity` is among other things also comprised of: a. `A_resp` **+** `B_resp` **+** `AB_resp` ( `AB_resp` coordinates objects `a` and `b` ) or b. `AB_resp` **+** delegating `A_resp` and `B_resp` to objects ( `a` and `b` ) associated with `MyEntity`? 4) Eric Evans' DDD book, pg. 94: > CustomerID is the one and only identifier of the Customer ENTITY ( figure > 5.5 ), but phone number and address would often be used to find or match a > Customer.
The name does not define a person's identity, but it is often used > as part of the means of determining it. The quote states that only _attributes_ associated with _identity_ should stay in an _entity_. I assume the author means that an _entity_ should contain only those _attributes_ that are often used to _find or match_ this _entity_ , while ALL other _attributes_ should be moved? b) But how/where should other _attributes_ be moved? For example ( the assumption here is that the _address attribute_ isn't used to _find or match_ `Customer` and thus we want to move the _address attribute_ out of `Customer` ): if instead of having `Customer.Address` ( of type `string` ) we create a property `Customer.Address` of type `Address`, did we move the _address attribute_ into an associated VO object ( which is of type `Address` ), or would we say that `Customer` still contains the _address attribute_? c) > In this example, the phone and address attributes moved into Customer, but > on a real project, that choice would depend on how the domain's customers > are typically matched or distinguished. For example, if a Customer has many > contact phone numbers for different purposes, then the phone number is not > associated with identity and should stay with the Sales Contact. Isn't the author in the wrong here, since if we assume each of the many _contact phone numbers_ that `Customer` has belongs only to that particular `Customer`, then I'd say these _phone numbers_ are associated with _identity_ just as much as when `Customer` only had _one phone number_? 5) > The reason the author suggests stripping the entity down is that when one > initially creates a Customer entity, there is a tendency to populate it with > any attribute that one can think of being associated with a customer. This > is a data-centric approach that overlooks behaviors, ultimately leading to an > anemic domain model. Off topic, but I thought an _anemic domain model_ results from moving _behavior_ out of an _entity_ , while your example is populating an _entity_ with lots of _attributes_ , which would result in `Customer` having too much _behavior_ ( since we'd probably also include in `Customer` the _behaviors_ which modify these additional _attributes_ ) and thus in violation of SRP? Thanks."} {"_id": "165688", "title": "How to negotiate with software vendors who do not follow HL7 standards", "text": "Take, for instance, the \"\". I'd hope that anyone who has spent any time dealing with HL7 messages knows that the \"\" signifies that something should be deleted. \"\" is not an empty string, it's not a filler, etc... But occasionally, one may meet a vendor who persists in sending \"\" instead of just sending nothing at all. Since I work for a small business and have an extremely flexible HL7 interface, I can ignore \"\"'s in received messages. But these things are adding up. * Some vendors like to send custom-formatted fields with pseudo-components that they leave others to interpret themselves. * Some vendors send all their information in note segments and assume you're going to only show users the information they send in a monospace font. * Some vendors even have the audacity to send Carriage Return Line Feeds at the end of each line of a file interface. * Some vendors absolutely refuse to send decimal numbers and in so doing refuse to send any numbers. So, with all this crippling humanity against the simple plastic software man, how does one bend without breaking*? Or better yet, how does one fight back and still make money?
*my answer is usually to create an interface for the interface and keep the HL7 processing pure, but I don't think this is the best solution"} {"_id": "103285", "title": "How can you tell good programmers from the average ones?", "text": "> **Possible Duplicates:** > How do managers know if a person is a good or a bad programmer? > How to recognize a good programmer? For the record, I am a programmer myself, and I still do coding. We are not doing your average just-another-CRUD-app; instead we are working on CAD apps. The nature of software development makes it really hard to gauge a programmer's worth. How can you tell whether a programmer is good or not-so-good? All the programmers who work with me work on different parts of the applications, and how difficult it is to get those parts working is only known to the person who spends the most time on it, in this case the programmers themselves; as an outsider, I would not be able to fully appreciate the amount of sweat, ingenuity, and effort they put into solving those problems, precisely because I don't have a chance to do the same job. This gives me a hard time when I evaluate them. How do I know programmer A is really great at solving the problem at hand and therefore I can throw him a bigger, harder task? And how do I know programmer B is just working hard, but not working smart? How can I evaluate and compensate programmers fairly?"} {"_id": "33816", "title": "How to recognize a good programmer?", "text": "Our company is looking for new programmers. And here comes the problem - there are many developers who look really great at the interview, seem to know the technology you need and have a good job background, but after two months of work, you find out that they are not able to work in a team, writing some code takes them a very long time, and moreover, the result is not as good as it should be. So, do you use any formalized tests (are there any?)? How do you recognize a good programmer - and a good person? Are there any simple 'good' questions that might reveal the future problems? ...or is it just about your 'feeling' about the person (i.e., mainly your experience), and trying him out? Edit: According to Manoj's answer, here is the question related to the coding task at the job interview."} {"_id": "55692", "title": "How to write efficient code despite heavy deadlines", "text": "I am working in an environment wherein we have many projects with strict deadlines on deliverables. We even talk directly to the clients, so getting the jobs done, and fast, is a must. My issue is that I'd always write code for the first solution that comes to my mind, which of course I thought was best at that moment. It always ends up ugly though, and I'd later realize that there are better ways to do it, but I can't afford to change things due to time restrictions. Are there any tips by which I could make my code efficient yet still deliver on time?"} {"_id": "189190", "title": "Allow threads (in c) to print messages without interfering with user input", "text": "I'm writing a small webserver for personal use in C (not C++). I want to allow user input in the console, like \"shutdown server\" or \"restart server\". In order to allow this kind of input, the server is running in a separate thread (pthreads), so the console isn't blocked. I also want this thread to print output in the console, like \"a new client connected\" or \"client requested 'home.html'\".
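Stripped down, what I have looks something like the sketch below (simplified from memory, not my real code - the worker thread just printf()s status lines while the main thread blocks reading commands):
#include <stdio.h>
#include <pthread.h>
void *server_loop(void *arg) {
    for (;;) {
        /* ... accept a client, serve the request ... */
        printf(\"a new client connected\\n\"); /* interleaves with whatever I'm typing */
    }
    return NULL;
}
int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, server_loop, NULL);
    char cmd[64];
    while (fgets(cmd, sizeof cmd, stdin)) {
        /* parse \"shutdown server\", \"restart server\", ... */
    }
    return 0;
}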
> The problem is: If I'm typing something like \"shutdown server\" and at the > same time the thread prints something like \"a new client connected\", > everything mixes up and I get something like \"shuta new client connectedown > server\" Is there an elegant way to print the output of the thread and at the same time allow the user to enter commands without the two interfering? Or is this a stupid idea to begin with? If yes: Is there a standard way to handle things like that (i.e., to control a server)?"} {"_id": "189191", "title": "Why is test driven development missing from Joel's Test?", "text": "I was reading this blog by Joel Spolsky about 12 steps to better code. The absence of _Test Driven Development_ really surprised me. So I want to throw the question to the Gurus. Is TDD not really worth the effort?"} {"_id": "98717", "title": "What methods were used for online payments before API's and Paypal, etc", "text": "What methods (in programming/web dev terms) were used to take payments online before such things as Paypal, Google Checkout and the various gateways and APIs? How were such transactions carried out?"} {"_id": "189194", "title": "Command pattern design", "text": "I have this old implementation of the Command pattern. It is kind of passing a Context through all the _DIOperation_ implementations, but I realized later on, in the process of learning and learning (that never stops), that this is not optimal. I also think that the \"visiting\" here doesn't really fit and just confuses. I am actually thinking of refactoring my code, also because a Command should know nothing about the others, and at the moment they all share the same key-value pairs. It is really hard to maintain which class owns which key-value, sometimes leading to duplicate variables. An example use case: let's say **_CommandB_** requires **_UserName_** which is set by **_CommandA_**. Should CommandA set the key **_UserNameForCommandB_** = _John_? Or should they share a common _UserName=John_ key-value? What if the UserName is used by a third Command? How can I improve this design? Thanks! class DIParameters { public: /** * Parameter setter. */ virtual void setParameter(std::string key, std::string value) = 0; /** * Parameter getter. */ virtual std::string getParameter(std::string key) const = 0; virtual ~DIParameters() = 0; }; class DIOperation { public: /** * Visit before performing execution. */ virtual void visitBefore(DIParameters& visitee) = 0; /** * Perform. */ virtual int perform() = 0; /** * Visit after performing execution. */ virtual void visitAfter(DIParameters& visitee) = 0; virtual ~DIOperation() = 0; };"} {"_id": "189197", "title": "Contributor's actions after rejected pull request", "text": "Imagine there is an open-source project with a Maintainer and a Contributor. Both of them have their repos exported to some repository hosting (github, bitbucket, sourceforge --- whatever, but they are public). Imagine further, the Contributor did several commits, pushed them into his repo and sent a pull request to the Maintainer. The Maintainer did a review, made several comments about the commits (fix this, fix that, simplify there, blah-blah-blah) and **rejected** the pull request. Note: 1. For the Contributor it's obvious that it's better to just fix the patches/commits than to rewrite from scratch. 2. Please consider the situation when there are many commits and the fixes have to be scattered between them. What should the Contributor do in this situation before making the next pull request? Remove the invalid repo, rewrite history and push a new one?
Make a fix-up commit on top of the current one? May the Maintainer demand that the incoming history be clean? I.e., is it technically simple enough to do?"} {"_id": "223291", "title": "Listing dependencies in the package.json for a node.js app on Heroku", "text": "I previously had an issue with my node.js app on Heroku. I added the dependency into my package.json and now it is working. But, is this the best way to do it? { \"name\": \"application-name\", \"version\": \"0.0.1\", \"private\": true, \"scripts\": { \"start\": \"node app.js\" }, \"dependencies\": { \"express\": \"3.4.4\", \"jade\": \"*\", \"stylus\": \"*\", \"ejs\": \"0.8.5\" }, \"engines\": { \"node\": \"0.10.1\", \"npm\": \"1.3.14\" } } I ask because the other dependencies have a \"*\", which was there by default in my express app. What exactly does that mean, and am I fine with leaving this file the way it is?"} {"_id": "223296", "title": "What is meant by a step-by-step refactoring plan describing implementation of design", "text": "> What is meant by a step-by-step refactoring plan describing how to implement > a certain design? From what little I know about refactoring, it regards improving a (UML) design model and has nothing to do with the implementation of it. Therefore I'm pretty much confused by what this question means. Does anyone know, and is willing to explain? EDIT: In this context the design corresponds to a UML diagram with classes, methods and underlying associations."} {"_id": "205664", "title": "Java Compiler and VM Compatibility", "text": "A co-worker and I recently had a discussion about Java versions and the JVM. I use Java 7 but we use Java 6 for our client (while he says that some are still on 5). My immediate thought was, why can't we target those VMs too? The Java VM is somewhat different than a real machine in that it has a bunch of runtime features. Type checking, exception handling, garbage collection, etc. But it's still a virtual machine which has a bytecode. (Which is why we can have things like C to JVM compilers.) So why can't we target older VMs with a newer version of Java? Why do the language and the runtime have to be tied together? Besides the obvious performance penalties, it seems like it should be completely possible to compile Java 7 code to the Java 6 JVM. (And considering how little changed from Java 6 to 7, I can't imagine the compiler changes being that extensive.)"} {"_id": "107807", "title": "How to describe framework/features of a java application for client presentation?", "text": "I have been asked to put down the architecture and features of our Java application (more from an infrastructure/software point of view). It will go in a PowerPoint presentation, so I need to provide the list. Please let me know how I should go about this."} {"_id": "107800", "title": "Can daily reports decrease a developer's productivity?", "text": "In another question, I asked about why developers might not like **daily scrum**. We talked to the developers and we decided to not hold daily scrum for a while (to give it a try and customize scrum in our first attempt). This is the output of consulting with the developers directly. On the other hand, we don't want to lose the good parts of daily scrum, like getting a chance to coordinate developers every day, or watching work progress as a Key Performance Indicator so we can take action early. As an alternative to daily scrum, we're thinking about asking developers to provide daily reports with the following conditions: 1. No need to follow any specific format.
Each and every format is accepted. 2. Even if the work is not done, we want to hear the amount of progress. 3. There is no need to mention the time spent on each task. 4. Development obstacles and coordination requirements should be mentioned. 5. There is no need to be obsessed with daily reports. It's not taken that strictly. Do you think that this can decrease their productivity? Have you had any daily report experience? Do you have any suggestions for us, so that we can be sure that we're not micromanaging?"} {"_id": "56522", "title": "Is there ever a situation where it's ok to initiate a Delete on a GET?", "text": "When building a simple web app with database delete functionality, you normally would take the following steps: 1. User initiates a GET request using a delete link 2. User confirms the deletion 3. Upon confirmation, browser initiates a POST request to the server to perform the deletion What are the reasons for this convention? I understand that it sets up a confirmation step which would prevent automated calling of the delete function (as with spiders and such) - are there other reasons?"} {"_id": "57698", "title": "Ur/Web new purely functional language for web programming?", "text": "I came across the Ur/Web project during my search for web frameworks for Haskell-like languages. It looks like a very interesting project done by one person. Basically, it is a domain-specific purely functional language for web programming, taking the best of ML and Haskell. The syntax is ML, but there are type classes and monads from Haskell, and it's strictly evaluated. Server-side code is compiled to native code, client-side to JavaScript. See the slides and FAQ page for other advertised advantages. Looking at the demos and their source code, I think the project is very promising. The latest version is something like 20110123, so it seems to be under active development at this time. My question: Has anybody here had any further experience with it? Are there problems/annoyances compared to Haskell, apart from ML's slightly more verbose syntax? Even if it's not well known yet, I hope more people will get to know of it. OMG this looks very cool to me. I don't want this project to die!!"} {"_id": "223126", "title": "API design with references to root object", "text": "[Normally I post on StackOverflow, but as this is more a design/theory question rather than a code question I'll give it a shot here] Most of my applications currently use a core object model that I originally wrote 6 years ago and which has just grown as needed - as with most of my stuff, it was coded as is, without thinking up a design first. Part of it is too inflexible, part is poorly designed and yet more is clunky. So I decided to start again from scratch rather than trying to retrofit a new API on top of the existing one. And I also decided to actually map out the full API first, upfront, then write the code for it. So far that approach is paying off I suppose and I have a nice little model brewing. However, I have one quandary in the new design. If you consider the automation models offered by things like Microsoft Word, most primary objects have a property named `Application` which points back to a core object. My existing object model follows pretty much the same principle and I am using that in the new one too. The current API however has a mix of approaches. Some objects store a reference to that root object.
Others don't, and the property simply looks at a singleton \"service\" object, which at its heart has a `ServiceContainer` to look up registered objects. Yes, a service locator. Some get it passed in through a constructor; others look it up _then_ store a reference. I normally like consistency, but this is a mess. My new design isn't changing this approach - objects will still have a property pointing back to the root object, the idea being that all the objects behave the same and I don't have to (manually) go hunting for an application when writing code for a given implementation. With that background information out of the way, here's my question. Should each object hold a reference to the root object, or should it not store a direct reference, but look it up somehow, i.e. from a singleton or a service locator? With the former approach, I'm going to use more memory - 4 extra bytes per object, as I currently compile everything as 32-bit. While that doesn't sound like a lot, and I'm sure I'm not going to be creating many thousands of objects, I sort of want the base API to be as efficient as possible, as no doubt the apps I stick on top of it won't be! This approach also means I'm going to have to pass that object reference around in every single constructor - not much of a problem for real code, but it makes writing tests a bit more complicated. The latter approach means I save memory, but then I continue to use the bad practice of a service locator, and I suppose there's at least the hint of a performance issue, as having to look up a reference isn't quite going to be as quick as returning a direct one. Or, is there another approach that large object models use that I haven't considered above? As mentioned, I normally just dive in, make something that works, then leave it alone until it breaks or I need it to do something else. I'm guessing the answer is going to be \"just store the reference and stop complaining\", but it's better to get a feel for other people's opinions!"} {"_id": "223121", "title": "Development console commands registration", "text": "I have a DevelopmentConsole class. I am adding functionality to register console commands for the subsystems. I don't want the console to know about them, but I also don't want them to contain debug code (like \"Console.RegisterCommand...\"). I think I should make an additional class hierarchy. For example, I have a Player class. IConsoleBuilder { RegisterCommand(string command, Func action); } PlayerConsoleBuilderClient : ConsoleBuilderClient { readonly Player _player = ?inject? public override void Visit(IConsoleBuilder builder) { // builder.RegisterCommand(\"GetName\", args => _player.Name) ; } } Here I need to use Reflection to find all ConsoleBuilderClient subclasses. It's not a very good idea, is it? Can you suggest how to do it in a better way?"} {"_id": "141657", "title": "software developer designation related", "text": "What does the designation \"Associate software developer Grade 10\" mean? A company mentioned it in its offer letter. Two words here are confusing: first \"associate\" and second \"Grade 10\"."} {"_id": "166089", "title": "Replacing out parameters with struct", "text": "I'm encountering a lot of methods in my project that have a bunch of out parameters embedded in them, and it's making it cumbersome to call the methods, as I have to start declaring the variables before calling the methods. As such, I would like to refactor the code to return a struct instead, and was wondering if this is a good idea.
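To give an idea, the result type I'm picturing is a plain container along the lines of the sketch below (the field names here are just mine, mirroring the out parameters of the example that follows):
public struct FinancialReturnTotals
{
    public decimal Expenses;
    public decimal Revenue;
    public decimal LevyA;
    public decimal LevyB;
    public decimal Profit;
    public decimal Turnover;
    public string Message;
}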
One of the examples from an interface: void CalculateFinancialReturnTotals(FinancialReturn fr, out decimal expenses, out decimal revenue, out decimal levyA, out decimal levyB, out decimal profit, out decimal turnover, out string message) and if I were to refactor that, I would put all the out parameters into the struct, such that the method signature is much simpler, as below. [structName] CalculateFinancialReturnTotals(FinancialReturn fr); Please advise."} {"_id": "227639", "title": "Can I use Apache Software License, Version 2.0 and GNU LGPL 3 licence plugins in my commercial web application?", "text": "I have two plug-ins. One has the GNU LGPL 3 license and the other has the Apache Software License, Version 2.0. Can I use them in my commercial app? And if yes, what precautions should I take?"} {"_id": "26243", "title": "What's the \"normal\" range for typing speed for developers?", "text": "Clearly, two-finger typing is probably a sign the developer needs work to speed up his typing (or is lying about his experience as a developer), and 60+ WPM is more than sufficient. Has anyone studied this, though? How fast do developers type, on average, and what's the \"normal\" range of typing speeds?"} {"_id": "250599", "title": "How reduce the usage of Magic Strings?", "text": "In the application database there is a configuration table with this schema: Table: ReleaseProperty 1. ReleasePropertyID 2. ReleaseID 3. Name 4. Value Currently, to retrieve a specific property I pass to the Repository class the arguments ReleaseID (which comes from the current web page address) and a Name, which is basically hard-coded. E.g. var property = ReleasePropertyRepository.Select(ReleaseID, \"MyProperty\"); I don't like to use hard-coded strings around my code, and the only idea I have to improve the quality of this code is to use a container class for constants. var property = ReleasePropertyRepository.Select(ReleaseID, Constants.MyProperty); Is there any other way to write better code?"} {"_id": "250594", "title": "Web-services REST security clarification", "text": "I'm a newbie at web services programming and I have some trouble understanding how authentication/security works for the REST WS pattern. I have read about OAuth but I don't understand how it works in detail; however, I think I don't need it because only my app uses the API. I need to authenticate the users that use the \"private\" APIs. Can you explain the correct way to implement REST WS authentication, or point me to a good guide or flowchart?"} {"_id": "250591", "title": "In BDD, going from feature to user story: how does it work?", "text": "My background is the book BDD in Action. How does one go from features to stories? More specifically, I would like to understand the following: 1 - When does one provide the decomposition into stories? Do you do it while spotting the examples that illustrate your feature? 2 - If so, how do you proceed in your development then: Do you first start to work on the stories while leaving your feature pending?
Do you write the feature scenario first and then move on to the stories that come out of it, while leaving the feature test pending? The key here is the process: if I decompose a feature into stories before writing the actual scenarios that spot those stories, I might be writing stories that are not relevant to the feature, mightn't I? This makes me think that stories must be fleshed out of features in the same way that unit tests/integration tests are fleshed out of user stories. However, if one does that, it is difficult to plan a feature for a scrum iteration. I understand that it is just a planning tool, but it would be interesting to understand how this planning is actually used in the context of dealing with a feature. I believe one should not do too much upfront planning, nor commit to things without being sure, but at the same time, it seems to me that the decomposition of the feature process into the different processes (stories) involved requires some upfront planning. I would appreciate a clarification on that point."} {"_id": "168058", "title": "What are graphs in laymen's terms", "text": "What are graphs, in computer science, and what are they used for? In laymen's terms preferably. I have read the definition on Wikipedia: > In computer science, a graph is an abstract data type that is meant to > implement the graph and hypergraph concepts from mathematics. > > A graph data structure consists of a finite (and possibly mutable) set of > ordered pairs, called edges or arcs, of certain entities called nodes or > vertices. As in mathematics, an edge (x,y) is said to point or go from x to > y. The nodes may be part of the graph structure, or may be external entities > represented by integer indices or references. but I'm looking for a less formal, easier-to-understand definition."} {"_id": "168059", "title": "Best practices for logging user actions in production", "text": "I was planning on logging a lot of different stuff in my production environment, things like when a user: * Logs in, logs off * Changes profile * Edits account settings * Changes password ... etc. Is this a good practice in a production environment? Also, what is a good way to log all this? I am currently using the following code block to log to a file: public void LogMessageToFile(string msg) { System.IO.StreamWriter sw = System.IO.File.AppendText( System.IO.Path.GetTempPath() + @\"MyLogFile.txt\"); try { string logLine = System.String.Format( \"{0:G}: {1}.\", System.DateTime.Now, msg); sw.WriteLine(logLine); } finally { sw.Close(); } } Will this be OK for production? My application is very new, so I'm not expecting millions of users right away or anything; I'm looking for the best practices for keeping track of actions on a website, or whether it's even best practice to do so."} {"_id": "101890", "title": "How do I transition from a large enterprise to a small startup?", "text": "I am shortly going to move from a large, multinational, enterprise software house to a start-up where I will be the only full-time developer. I've worked in start-ups and in companies where there have been only three or four devs before, so I'm fairly happy with many of the differences in general company feel. But having never worked as a sole dev, there are a couple of things I've got used to in a large company that I think will take more adjustment. I was wondering if there was any advice on how to manage them?"} {"_id": "37533", "title": "Books on improving JEE web application performance and responsiveness", "text": "Does anyone know of any good books on this topic?
I did search on Amazon, but the ones I found are pretty old publications."} {"_id": "37532", "title": "Best industry to work for as a developer", "text": "My contract has just ended and I'm wondering what possible jobs I might want to look at next. I've worked in the banking and insurance industries for my whole career (including one Fortune 500 company), and in my experience banking is the slowest (and most boring) industry to work for, due to their strict business practices (which is fair enough). The upside is that they pay well. ### My questions are: * What are the best and worst industries for developers to work in? That is, in the industries you have worked in, what was good and bad from a developer perspective (money, work, culture, benefits, colleagues, etc.)? * How does working as a consultant affect your opinion of an industry? * Seeing that I've mentioned boredom, which industry supports fast growth?"} {"_id": "138222", "title": "Tips/tricks to manage a new team with new code", "text": "How do you handle yourself in a new team where you are the most senior developer and most others in the team are junior to you by several years? The task ahead of the team is something nobody else, including you, has accomplished in their career before. Management insists on higher productivity from the whole team, and as the senior developer you are responsible. Any tips for coming out trumps in a situation like this? Clearly, the entire team needs time to learn, and let's not forget the team is new. However, deadlines are up ahead as well..."} {"_id": "79963", "title": "Best Usage of Multiple Computers For a Developer", "text": "I have two Macbook Pros - both are comparable in hardware. One is a 17\" and the other a 15\". The 17\" has a slightly swifter CPU clock speed, but beyond that the differences are completely negligible. I tried a setup a while back where I had the 17\" hooked up to an external monitor in the middle of my desk with the 15\" laptop immediately to the right of it, and was using teleport to control the 15\" from my 17\". All development, terminal usage, etc. etc. was being done on the 17\" and the 15\" was primarily used for email / IM / IRC... or anything secondary to what I was working on. I have a MobileMe account so preferences were synced, but otherwise I didn't really use anything else to keep the computers in sync (I use dropbox/git but probably not optimally). For reasons I can't put my finger on, this setup never felt quite right. A few things that irked me were * the 15\" was way under-utilized and the 17\" was over-utilized * having 2 laptops and a 21\" monitor all on one desk actually took up lots of desk space and it felt like I had too much to look at. I reverted back to just using the 17\" and the external monitor and keeping the 15\" around the house (and using it _very_ sparingly). For those of you who are using multiple laptops (or just multiple machines for that matter), I'd like to see setups that work for you when you have 2 or more machines, that give you optimal productivity, and why. I'd like to give this one more shot but with a different approach than my previous one - which was using the 15\" as a machine for secondary things (communication, reading documentation, etc.
etc)."} {"_id": "207945", "title": "Apple Dispatch Queue vs Threads", "text": "I've heard a lot about apple's famous dispatch queues and the **GCD** but today was the first time I decided to understand exactly what is going on, so I started reading Concurrency Programming Guide and this paragraph caught my eye. > If you have two tasks that access the same shared resource but run on > different threads, either thread could modify the resource first and you > would need to use a lock to ensure that both tasks did not modify that > resource at the same time. With dispatch queues, you could add both tasks to > a serial dispatch queue to ensure that only one task modified the resource > at any given time. This type of queue-based synchronization is more > efficient than locks because locks always require an expensive kernel trap > in both the contested and uncontested cases, whereas a dispatch queue works > primarily in your application\u2019s process space and only calls down to the > kernel when absolutely necessary. What I understand is that they are suggesting executing two tasks serially to avoid use of locks, which can be done the same way in threads. For example in Java you can put two functions in your thread's runnable and the thread will execute them serially and you will not need locks in that case two. Now my question is that am I missing something ?"} {"_id": "75230", "title": "New to programming. How do I meet people to expand my programming knowledge and discourse?", "text": "I've been a tinkerer of tech and programming, but books and online resources only go so far. I want a community to engage in discussions about programming to take me beyond what books can give (also, often just to explain what the books are trying to convey). No one in my daily life is very tech-inclined in the slightest, so where do I go? Are online communities enough for you programmers out there? If not, where do you go, or recommend going? (specifically, I'm interested in C++, Linux, Python, & Objective-C, but that may not apply to the question)"} {"_id": "138229", "title": "Python is slowly replacing C in universities. Does this move degrade the quality of CS students?", "text": "I believe learning C is one of the most important aspects for any programmer. It's a beautiful combination of a high and low level language. Some universities are moving to stop teaching C in the introductory stages and are using Python instead. Will this move to Python, from C, degrade the quality of CS students? If you miss out on some of the aspects of a low level language, are you missing something important from you CS degree?"} {"_id": "167684", "title": "When to use identity comparison instead of equals?", "text": "I wonder why would anybody want to use identity comparison for fields in `equals`, like here (Java syntax): class C { private A a; public boolean equals(Object other) { // standard boring prelude if (other==this) return true; if (other==null) return false; if (other.getClass() != this.getClass()) return false; C c = (C) other; // the relevant part if (c.a != this.a) return false; // more tests... and then return true; } // getter, setters, hashCode, ... 
} Using `==` is a bit faster than `equals` and a bit shorter (due to no need for null tests), too, but in what cases (if any) would you say it's really better **to use `==` for fields inside `equals`**?"} {"_id": "167687", "title": "How often is seq used in Haskell production code?", "text": "I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using `interact`) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a `Stack space overflow` error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct): 1. Avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization). 2. Introduce `seq` to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation). After introducing five or six `seq` calls in my code my tool runs smoothly again (also on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer I wanted to ask if introducing `seq` in this way is a common practice, and how often one will normally see `seq` in Haskell production code. Or are there any techniques that allow one to avoid using `seq` too often and still use little stack space?"} {"_id": "167688", "title": "What is the most efficient way to study multiple languages, frameworks, and APIs as a developer?", "text": "I know there are those out there who have read a slurry of books on a specific technology and only code in that one particular language, but this question is aimed at those who need to bounce around between multiple technologies and yet still manage to be productive. What is the most efficient way to study multiple languages, frameworks, and APIs as a developer without becoming a cheap Swiss army knife? And how much time should one dedicate to a particular subject before moving to another?"} {"_id": "250040", "title": "Is it wrong to use Agile when clients' requirements don't change at all?", "text": "I have seen a lot of posts recently saying that one of the major reasons why Agile is used is because clients often change the requirements. However, let's say the clients **do not change the requirements often**. In fact, the clients have firm requirements, though they might be a bit vague (but nothing unreasonably vague), but I use Agile anyway. The reason why I employ Agile is that the software is complex enough that there are details and problems that I wouldn't recognize until I actually face them. I could do a full-scale heavy planning approach like waterfall, but then it would take a few months to finalize all the high-level design and low-level coding signatures. There is a very specific, fixed architectural design for the system, though. My question is: Would this be considered bad practice, cowboy coding, an anti-pattern, etc.? Must we employ waterfall and plan as much as possible in great detail before we start coding **when requirements are stable**, instead of this 'let's do it' mentality in Agile? EDIT: The major point here is that: we CANNOT blame the clients for changing requirements.
Assume the clients pointed us to a very concrete problem, gave us a wish list in very reasonable detail, and left us alone (i.e. the clients have their own productive things to do; don't bug them any more. Only demo to them near the end when you have a minimum working prototype). Would it be wrong to use Agile in this scenario?"} {"_id": "204132", "title": "Using a Proxy as an ACL", "text": "I am building an MVC application in PHP, using Zend Framework. My model includes Domain Model and Mapper layers. I am trying to keep the domain model in pristine condition and have managed to keep it free from persistence-related code. Now I am trying to find a solution for ACL. I have 'general' ACL rules that can be easily enforced in a service layer, but I also have some very fine-grained rules. The fine-grained rules apply at the property level and affect whether the current user can alter a field, and if so, what the range of allowable values is. Currently I am thinking of using the proxy pattern, and that the proxy should extend the real object (to save me having to redirect 100% of calls). Does this sound reasonable? Is there a better option?"} {"_id": "209327", "title": "Getting help in programming while holding the reins tightly", "text": "I am starting an Internet business. Basically the most important part, the part that would actually make me money, is still missing. I tried to deal with those issues several times but I always get distracted by other things. This is a typical example of not rising to a challenge. I thought about hiring a programmer to continue my coding, but I came up with two problems: 1. On revealing my current code, the programmer could just run the business on his own. How do I choose someone who won't do that? 2. Which form of payment is appropriate for the programmer in this situation? Per hour? How do I figure out the right amount? The code to be written will be in PHP + MySQL."} {"_id": "250047", "title": "Is WCF strictly an asynchronous comms platform?", "text": "Since I last had to do any comms/network programming, the field has exploded with acronyms. In fact, networking almost feels like it is now described by a whole new language. The very name \"Windows Communication Foundation\" suggests it should be all things to all people... but I need to be sure I can program up some synchronous real-time* comms on a dedicated gigabit ethernet network. Can someone confirm that WCF is not a starter for this task... or if it is, which of the many confusing acronyms I should get familiar with? * By real-time, I mean I need to reduce latencies down to an acceptably low level. For our application, which involves transmitting the results of video & audio analyses continuously every few milliseconds, I need to ensure 99% or better of those analyses are presented at the UI within (ideally) 0.5 seconds, so that the user has time to respond before the originating network node deletes data out of its buffer to make way for subsequent analyses."} {"_id": "209325", "title": "Combine Data from Two Tables-Help Needed", "text": "I have developed a web site with a MySQL backend, but am not satisfied with how I am getting one set of data and do not know how to get another dataset. The page in question is at: http://whistclub.org/test/ajax.php?vichill/results/1 The results are shown through October, so there is some data to use. I am pulling the results from two MySQL tables (see below), but the code I have used is too ugly for me to tolerate. Yeah, it works, but I have some standards!
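To show the direction I've been trying, this is roughly the JOIN I've been attempting against the tables described below (a sketch from memory - I may well have the join conditions wrong, which is why I'm asking):
SELECT g.date,
       ta.teamName AS teamA,
       tb.teamName AS teamB,
       tw.teamName AS winner,
       g.IMPmargin, g.winnersVPs, g.losersVPs
FROM games g
JOIN teams ta ON ta.teamID = g.teamA
JOIN teams tb ON tb.teamID = g.teamB
LEFT JOIN teams tw ON tw.teamID = g.winner
ORDER BY g.date;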
Think calls to the database for each team. I think joins should be used, but I can't get it to work. Ideally (I think) the result array would convert the teamIDs to team names and use names for the winner and loser VP columns. The standings table on the page is not working at all--what you see is hand-coded just to show what is needed. The tables are:
teams:
id--autoincremented
teamID--Assigned to each team as part of the game
teamName
games:
id--autoincremented
gameID--I suspect this is not needed since it duplicates id.
teamA
teamB
date
winner
IMPmargin
winnersVPs
losersVPs
The game is bridge, and IMP and VP are scores. VPs are derived from IMPs and the loser often gets a few VPs. See the webpage for the details, but I do not think that is relevant to my issue. Here is some data from the games table:
id gameID teamA teamB date winner IMPmargin winnersVPs losersVPs
1 1 11 18 2013-09-25 18 12 20 10
2 2 12 17 2013-09-25 12 22 22 8
3 3 13 16 2013-09-25 13 20 21 9
4 4 14 15 2013-09-25 14 0 15 15
5 5 19 99 2013-09-25 NULL NULL NULL NULL
Team 99 is a dummy for a bye week for a team. If the tables are designed too badly, let me know how to make them work better. Hopefully, that is enough for someone to point me in the right direction."} {"_id": "234447", "title": "Is Object Oriented Design necessary when building Symfony web apps?", "text": "I am primarily a software developer, and as such, I do a lot of reading on the subject of Object Oriented Design; the 5 SOLID principles, design patterns, composition over inheritance, etc. I currently work as a PHP developer building web applications in Symfony 2. Along with learning to use an Agile/BDD/TDD approach to building web applications, I try to incorporate some OOD by searching out the abstractions and jotting down some interfaces for classes to communicate with (dependency inversion). I was recently tasked with building a small, in-house CMS for a small, static website. This seemed a great opportunity to create, for example, an abstract \"content\" class and derive different types of content from it, or an interface for persisting content to the database. As it turned out, I didn't need any of the OOD knowledge that I have spent hours amassing. I just created Entities for each content type (albeit derived from a base Entity), and I used the Symfony Form API to render forms for user input. Symfony functionality, when combined with Doctrine, handled all of the persisting to the DB. All that was left was to retrieve data from the DB, process it, and pass it to the Twig template. So considering this, I imagine that even if the application had been much larger, the nature of most web applications is to take user input, process it, perhaps persist it, and render a view. None of this seems to require any OOD knowledge or design pattern knowledge. My question is whether my time is wasted learning OOD, or whether I shouldn't place such importance on it, as perhaps modern web application development doesn't really fit that realm of \"software\" development. NB: I am aware that if I were to build a framework such as Symfony 2, then the OOD principles would apply, as that is truly building software from the ground up. My question is more related to when we use the Symfony (or any other) framework.
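For context, the sort of design I had sketched out before realising I didn't need it looked something like this (illustrative only - the class and method names are just invented for this question):
<?php
// the abstraction I expected to need
abstract class Content
{
    protected $title;
    public function __construct($title) { $this->title = $title; }
    abstract public function render();
}
class Article extends Content
{
    private $body;
    public function __construct($title, $body)
    {
        parent::__construct($title);
        $this->body = $body;
    }
    public function render()
    {
        return '<h1>' . $this->title . '</h1>' . $this->body;
    }
}
interface ContentRepository
{
    public function save(Content $content);
}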
**Edit** A better way to rephrase my question: do the majority of web application developers, whilst being rich in the knowledge of OOD, find that they needn't apply the vast majority of this knowledge when building web applications, because today's frameworks shelter you from such low-level design? Does anyone have any examples of modules they have built that required Object Oriented methodology?"} {"_id": "14047", "title": "How did you prepare for your .NET interview?", "text": "I have attended many interviews in the last few years, and each time I found the interviewers were not satisfied with what I know. My first company only developed desktop Windows applications using .NET. They had nothing to do with features like _Remoting_ etc. We also had limited use of _Generics_ , _Reflection_ and _Multi-threading_. When I appeared for the interviews, I was asked questions on the above features even when I told them that I don't have real-life experience. Now the .NET interviews are even more complex. Seeing my experience, the interviewers target the latest framework. I have no real-life exposure to the new features and technologies like WPF, WCF etc. Please suggest how to effectively prepare for a .NET interview. I have 3 years' experience in .NET, but I have only developed Windows-based applications. At present I work on .NET Framework 3.5. I have never worked on ASP.NET, as in my present company I work on PHP for web applications."} {"_id": "163087", "title": "High-Level Application Architecture Question", "text": "So I'm really wanting to improve how I architect the software I code. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all it's doing is making it harder for me to settle on an architecture, because I can never tell if my design is the one that the _more experienced programmer_ would've chosen. So I have these requirements: * I should connect to one vendor and download form submissions from their API. We'll call them `CompanyA`. * I should then map those submissions to a schema fit for submitting to another vendor for integration with the email service provider. We'll call them `CompanyB`. * I should then submit those responses to the ESP (`CompanyB`) and then instruct the ESP to send that submitter an email. So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services: * The service that downloads data from `CompanyA`. I called this the `CompanyAIntegrator`. * The service that submits the data to `CompanyB`. I called this the `CompanyBIntegrator`. So my questions are these: 1. Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future. 2. Are my naming conventions accurate and meaningful to you (who know nothing specific of the project)? 3. Now that I have these services, where should I do the work of taking the output from the `CompanyAIntegrator` and getting it into the format for input to the `CompanyBIntegrator`? Is this OK to be done in `main()`? 4. Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers---especially those working in agencies. Thanks for any help you can give.
Learning how to architect well is really mind-cluttering."} {"_id": "197059", "title": "C++ or C#: Which language is Microsoft going to use in development of future Windows versions?", "text": "I heard almost all parts of Windows are written in C and C++ with some assembly. Why did Microsoft skip C#? Is there any scope for C# in the development of future Windows versions?"} {"_id": "220961", "title": "using pre-commit / post-merge hook script to replace configuration values", "text": "I'm having some problems developing a web application with various developers, each of whom has a specific configuration to work with, and I would like to use the least resource-consuming approach to prevent any personal configuration from going into the repository. **The basic idea is:** to have a config file with all the key/value pairs that store the configuration, and to substitute specific constants inserted in the code with the corresponding values. I intend to replace the values in the code with the key text, using the syntax `{@KEY_TEXT}`, in the pre-commit hook, and to execute the opposite operation in the post-merge hook. Am I doing this right? Is there a more efficient way to do this? **EDIT:** Why don't I have all the debug configuration in a single file? Because all these values are inserted across various files and involve a lot of different programming languages and different contexts. I'm using **PHP/JavaScript/Shell/Batch** (Windows) in a cross-platform environment."} {"_id": "232372", "title": "How much to charge for project support after one year?", "text": "We did a project and the cost includes one year of bug fixes and support. How much should we charge for supporting the website for another year? Is there a formula or standard way to calculate this?"} {"_id": "222652", "title": "What's the reason for the C standard to consider const-ness recursively?", "text": "The C99 standard says in 6.5.16:2: > An assignment operator shall have a modifiable lvalue as its left operand. and in 6.3.2.1:1: > A modifiable lvalue is an lvalue that does not have array type, does not > have an incomplete type, does not have a const-qualified type, and if it is > a structure or union, does not have any member (including, recursively, any > member or element of all contained aggregates or unions) with a > const-qualified type. Now, let's consider a non-`const` `struct` with a `const` field. typedef struct S_s { const int _a; } S_t; By the standard, the following code is undefined behavior (UB): S_t s1; S_t s2 = { ._a = 2 }; s1 = s2; The semantic problem with this is that the enclosing entity (the `struct`) should be considered writable (non-read-only), judging by the declared type of the entity (`S_t s1`), but should not be considered writable by the wording of the standard (the 2 clauses at the top) because of the `const` field `_a`. The Standard makes it unclear to a programmer reading the code that the assignment is actually UB, because it's impossible to tell that without the definition of the `struct S_s ... S_t` type. Moreover, the read-only access to the field is only enforced syntactically anyway. There's no way some `const` fields of a non-`const` `struct` are really going to be placed in read-only storage.
But such wording of the standard outlaws code which deliberately casts away the `const` qualifier of fields in accessor procedures for those fields, like so (Is it a good idea to const-qualify the fields of structure in C?): **(*)** #include <stdio.h> #include <stdlib.h> typedef struct S_s { const int _a; } S_t; S_t * create_S(void) { return calloc(sizeof(S_t), 1); } void destroy_S(S_t *s) { free(s); } const int get_S_a(const S_t *s) { return s->_a; } void set_S_a(S_t *s, const int a) { int *a_p = (int *)&s->_a; *a_p = a; } int main(void) { S_t s1; // s1._a = 5; // Error set_S_a(&s1, 5); // OK S_t *s2 = create_S(); // s2->_a = 8; // Error set_S_a(s2, 8); // OK printf(\"s1.a == %d\\n\", get_S_a(&s1)); printf(\"s2->a == %d\\n\", get_S_a(s2)); destroy_S(s2); } So, for some reason, for an entire `struct` to be read-only it's enough to declare it `const`: const S_t s3; But for an entire `struct` to be non-read-only it's not enough to declare it without `const`. What I think would be better is either: 1. To constrain the creation of non-`const` structures with `const` fields, and issue a diagnostic in such a case. That would make it clear that a `struct` containing read-only fields is read-only itself. 2. To define the behavior in the case of a write to a `const` field belonging to a non-`const` struct, so as to make the code above **(*)** compliant with the Standard. Otherwise the behavior is inconsistent and hard to understand. So, what's the reason for the C Standard to consider `const`-ness recursively, as it puts it?"} {"_id": "207948", "title": "How do I find a good middle way to make this library safe for concurrent operations", "text": "I've made a little library called SignalR.EventAggregatorProxy. Before I push it to a 1.0 state I need to make sure it works safely with concurrent operations. The easiest way is to lock all operations, but that's a huge performance impact. The library queues event subscriptions, and when an event comes in it checks the subscriptions and updates the clients using SignalR. This is the class that holds the subscriptions. There are 3 methods that write to/read from the subscription collection(s) (I aggregate the subscriptions both on a client level and an event level, so it's two collections): * Subscribe * UnsubscribeConnection * Unsubscribe And one that reads: * Handle I realize that since this is a library I can't make it optimal for all users of the library, but how do I find a good middle way that does not use locks? I think the Handle method is the most important method and should be prioritized for performance over the other 3. I made a little unit test to test for concurrency failures. **Update:** I chose to have locked writes and lock-free reads. The writes don't mutate existing state but replace the collection completely."}
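The update above describes a copy-on-write scheme: writers serialize on a lock and publish a fresh collection, while the hot read path never locks. A minimal sketch of that idea, using hypothetical type and member names (the real library's classes are not shown in the question):

```csharp
using System.Collections.Generic;

// Hypothetical subscription store illustrating copy-on-write:
// writers lock and swap in a fresh dictionary; readers never lock.
public class SubscriptionStore
{
    private readonly object writeLock = new object();

    // 'volatile' ensures readers always see the latest published reference.
    private volatile Dictionary<string, List<string>> byEvent =
        new Dictionary<string, List<string>>();

    public void Subscribe(string eventType, string connectionId)
    {
        lock (writeLock)
        {
            // Copy the current state, mutate only the copy...
            var copy = new Dictionary<string, List<string>>(byEvent);
            List<string> subscribers;
            copy[eventType] = copy.TryGetValue(eventType, out subscribers)
                ? new List<string>(subscribers) { connectionId }
                : new List<string> { connectionId };

            // ...then publish it with a single atomic reference assignment.
            byEvent = copy;
        }
    }

    // The Handle-style hot path: read a snapshot without taking any lock.
    public IEnumerable<string> GetSubscribers(string eventType)
    {
        var snapshot = byEvent; // whatever was last published
        List<string> subscribers;
        return snapshot.TryGetValue(eventType, out subscribers)
            ? subscribers
            : (IEnumerable<string>)new string[0];
    }
}
```

Because a published dictionary is never mutated after the reference swap, the read path can safely enumerate its snapshot even while a writer is preparing the next one.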
{"_id": "106044", "title": "Best way to use source control for a project (1-3 people)", "text": "So my partner and I are working on a practice management system, and at first we didn't use any version control; then I persuaded him to use git. At the moment our system is that we have three branches: develop, master and release. Our git repo is on an external server, which has Apache, MySQL, etc. We do our daily coding on the develop branch (commits), and when we have done some testing to make sure it's (semi) bug-free we merge it into the master branch. If we want to do something radical we just make a branch off master, mess around, and if it works, good, we merge it back in; if not, we delete the branch. Finally, when we go for our weekly release we merge to the release branch, to make sure we have a copy of the project as it was when we released. Unfortunately we don't track all files in git, either because it slows down git too much (e.g. images, especially since this is on an old server with a single-core 1.4 GHz AMD Sempron; a `git status` usually goes from around 3s to about 1 minute) or because we can't (MySQL databases). For us the slowdown is not workable, so we just leave them out, and this has never seemed to be a problem because we haven't deleted any images, but we may start to soon as we begin to optimize. So when we check out previous commits, sometimes the files aren't there and the checkout is broken (a big problem is that we can't use version control on the database (because we don't know how), so because the schema has changed, it is also broken if we go back). And that's pretty much our workflow. So after my background, my question is: **Do you think that there are any improvements we could make to the above?** And on a side note: Does anyone know a reliable and easy-to-medium-difficulty (i.e. not _too_ hard) way to track MySQL databases in git?"} {"_id": "222659", "title": "How to deal with hard configurations at the component level?", "text": "I distinguish three organisation levels while programming: the library level, the component level and the application level. A library defines functions to solve a range of related problems or perform related operations, in a _tools and no policies_ manner. An application lets several libraries interact and defines policies for diagnostics, error handling, internationalisation, and so on. A component sits between the library and the application: it handles low-level errors and produces diagnostics, it might hold global state, and it is aware of other components. Now I have a library implemented as a functor, parametrised by a module holding a constant _n_, and values associated with different values of _n_ have incompatible types (which is the purpose of implementing the library as a functor). I call such a configuration a _hard_ configuration, because it must somehow be hard-coded in the program. Now I am facing the following problem: in this setting, I want to allow the user of the program to choose the value of _n_ for the duration of the program. Since the library is a functor, one feels forced to write the component interfacing the library to the program as a functor too. But I want the chosen value of _n_ to be a property of the component, not of the application, and I want components to be regular modules (no functors allowed). How can I wrap a parametric library (implemented as a functor) in a non-parametric component, so that parameters can be chosen when the application starts? I insist on being able to express the application logic at the highest level in terms of non-parametric modules, and want to avoid Russian-doll-like parametrised modules at this level."} {"_id": "222658", "title": "Implementation of chess endgame engine without Endgame Tablebases", "text": "I'm interested in creating a chess endgame solving engine. Endgames in chess are usually solved using endgame tablebases generated by the `retrograde algorithm`. I have found that Artificial Intelligence and Genetic Algorithms have been applied to chess programming. However, before starting the implementation I wanted to find out whether chess endgames can be played without the endgame tablebases. If yes, then what are the pros and cons of these alternatives to endgame tables?
Are there any other algorithms known for this problem?"} {"_id": "53844", "title": "Website inheritance of ownership question", "text": "I paid for a web site for my motel business; then, due to my landlord's actions, I was placed in a position where I had to file for bankruptcy. I asked my IT manager and web developer to close down the web site for the business. He has since sold my web site to the new owners. I took all the photos myself, and I am in at least 4 of the photos on the site. There are no changes to the site since I left it, and it now states that it is under the copyright of the new owners. I am not sure what I should do, as this happened 10 months ago and I have only just found out. Thank you for your help; I am in Australia."} {"_id": "53847", "title": "Keeping application backend and UI synchronised", "text": "I have an app that works with a list of objects, so it has one central keyed list. The main window has a tree containing these objects with some information, which is easy enough to populate as a one-off event after loading. Now, the complicated part: any part of the application can add, remove, and, more importantly, change the details of those objects at any time (all in the same process), and I'd like the tree to update to suit. I have a few options, including passing events back down from the object to the list to the form, which seems to be the most flexible way. I can also do it lazily and repopulate the tree each time or periodically (very hackish). Does anyone have any better thoughts on how to structure this? This is being done in C# 2.0, but the concepts apply to any environment. Thanks"}
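A minimal, hedged sketch of the event-forwarding option mentioned in the question above, written with C# 2.0-era features (anonymous delegates, `EventHandler<T>`); all names here are hypothetical:

```csharp
using System;
using System.Collections.Generic;

// The item raises Changed, the keyed list forwards it, and the form
// subscribes once to the list instead of to every item.
public class Item
{
    public event EventHandler Changed;

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            EventHandler handler = Changed;
            if (handler != null) handler(this, EventArgs.Empty); // notify listeners
        }
    }
}

public class ItemEventArgs : EventArgs
{
    public readonly Item Item;
    public ItemEventArgs(Item item) { Item = item; }
}

public class KeyedItemList
{
    private readonly Dictionary<string, Item> items = new Dictionary<string, Item>();

    // The form subscribes to this single event.
    public event EventHandler<ItemEventArgs> ItemChanged;

    public void Add(string key, Item item)
    {
        items.Add(key, item);
        item.Changed += delegate(object sender, EventArgs e)
        {
            EventHandler<ItemEventArgs> handler = ItemChanged;
            if (handler != null) handler(this, new ItemEventArgs((Item)sender));
        };
    }
}
```

On the form side, a single `ItemChanged` handler can then update just the affected tree node, rather than repopulating the whole tree.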
{"_id": "167359", "title": "Pricing personalized software?", "text": "Currently I'm working on a Purchase Order System application project for a small-scale company. The software I am working on is personalized, based on their business requirements. **The company told me to create a proposal that includes the price of the application, so they can process the check for me.** The person who gave me this project is the company supervisor, and also a former supply chain supervisor at my employer, where I worked on some of their applications back then. So I want to be fair. This is my first time creating an application as a sideline, so I have never priced software before, even though I work as a full-time web developer at a big company. Any tips and help?"} {"_id": "185991", "title": "Will Authentication over HTTPS Slow My Application?", "text": "I am building a web application and RESTful web service. I have been reading various articles about the best way to authenticate requests to the web service. The best option for me seems to be HTTP basic authentication. Pretty much every article I've read says that authentication should be encrypted over SSL or equivalent. I'm not totally sure what this involves. Does this mean that my whole web service will have to be on a secure server? Will this slow things down?"} {"_id": "53843", "title": "What is the best way to work with large databases in Java depending on context?", "text": "We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI (business intelligence), i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the DBs. We are currently using JDBC, and just performing queries using a ResultSet. As more and more data is being created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: 1. We need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance). 2. We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. 3. We want to understand the JDBC alternatives, with respect to advantages and disadvantages. 4. Whether you think JDBC is the way to go or not, what are the best practices to go by depending on context (e.g. for large DBs queried in chunks)?"} {"_id": "151246", "title": "Is it possible to modify a video codec + distribute it?", "text": "This is my first question on this particular Stack Exchange site; I'm not sure if it's the most appropriate place for this question (if not, guidance to the appropriate site would be appreciated). **The abstract:** I'm interested in modifying existing video codecs and distributing my modded codecs in such a way as to make them easily added to a user's codec library... for example, to be added to their MPEG Streamclip, FFmpeg, etc. **Some details:** I've had some experience modifying codecs by hacking FFmpeg source files and compiling my hacked code (so that, for example, my version of FFmpeg has a very different H.263 than yours). I'm interested now in taking these modified codecs and somehow making them easily distributable, so others could \"add them\" to their \"libraries.\" Also, I realize there are some tricky rights/patent issues here; this is in part my motivation. I'm interested in the patent quagmires, and welcome any thoughts on this as well. **ctx link:** if it helps (to gauge where I'm coming from), here's a link to a previous codec-hacking project of mine: http://nickbriz.com/glitchcodectutorial/"} {"_id": "185998", "title": "What is the fitness landscape for minimal + viable solutions?", "text": "Let's say I'm trying to find a number from 1 .. 100. All numbers in this range are \"valid\", in that they could be interpreted as potential solutions. Let's say the ideal number is 50, all numbers >= 50 are \"feasible\" in that they actually solve the problem, and all numbers < 50 are \"not feasible\" (but still valid). How would you code a scenario like this with a fitness function (assuming that the landscape is similar to, but more complex than, this contrived example)? Do you give \"bonuses\" to valid solutions? Do you measure how far an infeasible solution has left to go before becoming optimal? And do you penalize excessive solutions? if (solution < 50) { maximalFitness = solution } else if (solution >= 50) { maximalFitness = 1_000 - solution } The curve wouldn't be continuous if all feasible solutions are strictly better than all infeasible solutions, despite having a similar distance from optimal."} {"_id": "121555", "title": "Why is trailing whitespace a big deal?", "text": "Trailing whitespace is enough of a problem for programmers that editors like Emacs have special functions that highlight it or get rid of it automatically, and many coding standards require you to eliminate all instances of it. I'm not entirely sure why, though. I can think of one practical reason to avoid unnecessary whitespace: if people are not careful about avoiding it, they might change it between commits, and then we get diffs polluted with seemingly unchanged lines, just because someone removed or added a space.
This already sounds like a pretty good reason to avoid it, but I do want to see if there's more to it than that. So, why is trailing whitespace such a big deal?"} {"_id": "72259", "title": "As a beginning programmer, should I favor building my own libraries over using 3rd-party libraries?", "text": "As a beginning Python programmer, is it a good idea to build and understand my own libraries before jumping to advanced 3rd-party libraries that contain the functionality I need? Some projects (e.g. web frameworks like Django) are probably too large for this approach. But other projects (e.g. web crawlers, graph libraries, HTML parsers) seem feasible. I worry that early reliance on 3rd-party libraries would stunt my growth. Note: this question and this question seem to focus more on experienced programmers, who are probably more focused on the efficiency of reuse than on the learning benefit."} {"_id": "37398", "title": "Evolution in coding standards, how do you deal with them?", "text": "How do you deal with evolution in the coding standards / style guide in a project, for the existing code base? Let's say someone on your team discovered a better way of object instantiation in the programming language. It's not that the old way is bad or buggy; it's just that the new way is less verbose and feels much more elegant. And all team members really like it. Would you change all existing code? Let's say your codebase is about 500,000+ lines of code. Would you still want to change all existing code? Or would you only let new code adhere to the new standard? Basically, lose consistency? How do you deal with an evolution in the coding standards on your project?"} {"_id": "18213", "title": "Any tips for designing the invoicing/payment system of a SaaS?", "text": "The SaaS is for real estate companies, and they can pay a monthly fee that will offer them 1000 publications, but they can also consume additional publications or other services that will appear on their bill as extras. On registration the user can choose one of the 5 available plans; the only difference between them is the quantity of publications their plan allows them to make. But they can pass that limit if they wish, and additional payment will be required on the next bill. A publication means publishing a property during one day: one property for one whole month would be 30 publications, and 5 properties during one day would be 5 publications. ## So basically the user can: * Make publications (already paid for in the monthly fee; extra payment only if they pass the limit) * Highlight a publication (extra payment) * Publish on other websites or printed catalogues (extra payment) ## Doubts: * How to handle modifications in pricing plans? Let's say quantities change, or you want to offer some free stuff. * How to handle unpaid invoices? I mean, freeze the service until payment has been made and then resume it. * When to create the invoices? The idea is to make one invoice for the monthly fee and a second invoice for the extra services that were consumed. * What payment methods to use? The method chosen now is bank account, with mobile phone validation via SMS. If the user doesn't pay, we call that phone and ask for payment. Any examples of billing for online services will be welcome! Thanks!"} {"_id": "71424", "title": "What are the barriers to adopting best practice? How can they be overcome?", "text": "We've all seen (and most of us have written) plenty of poorly written code. Why?
What makes us adopt poor practices rather than good ones? The most obvious answer (to me) is \"ignorance\", but I'm sure that isn't the only reason. What others are there? What can we do to overcome the temptation to write bad code?"} {"_id": "208741", "title": "Web app outgrowing current framework", "text": "I have quite a bit of experience using Django for websites, so when I started a new project I naturally chose to use Django for it. Everything went well for a time, but now the application is really starting to rub up against what Django can comfortably cope with, and I'm fighting all the time to ensure that things work as intended. I've been considering moving the site over to Java EE 7 now that it has been released. It certainly seems to provide the features I require, as well as being less forceful in the way that a project is laid out and maintained. I guess that now that I have a good idea of how the application should be structured, development should be much faster. Have you felt the need to change the web framework you are using simply because it doesn't lend itself well to the type of project you are trying to produce?"} {"_id": "208747", "title": "What to test when building websites using CMS?", "text": "My job is mainly building websites using CMSes such as Drupal, eZ Publish or Magento. Most of the work is templating, CRUD and adhering to the specs. Occasionally there is some business-specific logic, but it's usually 20 lines or so. Most of the logic is a bunch of `if`s in the templates. In this context, I find it hard to argue for unit testing. What should I test? Only integration tests? They can usually only be done at a very high level, such as using Selenium IDE. And Selenium is a pain to handle in project-mode products, because the requirements often change within 6 months and then the project is over. There are also other difficulties, such as URLs being dynamic. (CMSes don't always respect a RESTful architecture, especially for content.) I am in favor of unit testing. For libraries or framework-based projects, I easily write tests and set up a Jenkins server for them. But for CMS-based projects, I just don't know how and where to get started."} {"_id": "208745", "title": "Talking about Front End Web Development frameworks from a designer's perspective", "text": "I am a web developer working at a company where the front-end framework we have selected is AngularJS. I am now in the position where I am the 'resident expert', whatever that may mean. I have been tasked with teaching/explaining everything that our designers need to know about AngularJS. Unfortunately, my knowledge/understanding of design begins and ends with the color wheel. Therefore I have the following question: What, if anything, would a designer need to know about a front-end JavaScript framework in general, and AngularJS in particular, in order to streamline their productivity? What kinds of things are irrelevant? What kinds of things are important?"} {"_id": "38924", "title": "Confusion in definitions of a method and a methodology in the book \"OOAD with Applications\" (Booch et al)", "text": "I am reading the book Object-Oriented Analysis and Design, written by Grady Booch and others.
In Section I, Concepts, in the subsection Bringing Order to Chaos, the authors suggest distinguishing between a _Method_ and a _Methodology_. According to the book: **A method** is a disciplined procedure for generating a set of models that describe various aspects of a software system under development, using some well-defined notation. **A methodology** is a collection of methods applied across the software development lifecycle and unified by process, practices, and some general, philosophical approach. I understand that a _Method_ is used to build system models and a _Methodology_ is a set of such methods applied across the software development lifecycle. To my knowledge, a software development lifecycle includes, but is not limited to, the analysis, design, implementation and testing phases. How can it be that a _Method_ that is used to build system models is also applied in the implementation or testing phase?"} {"_id": "202477", "title": "Can too much abstraction be bad?", "text": "As programmers, I feel that our goal is to provide good abstractions of the given domain model and business logic. But where should this abstraction stop? How do we make the **trade-off between abstraction** and all its benefits (flexibility, ease of change, etc.) **and ease of understanding the code** and all its benefits? I believe I tend to write code that is overly abstracted, and I don't know how good that is; I often tend to write it like it is some kind of micro-framework, which consists of two parts: 1. Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional stuff described in the requirements. 2. Connecting code; now here, I believe, stands the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first; this arises from the fact that it is pure abstraction, with the basis in reality and the business logic being performed in the code described in 1; for this reason, this code is not expected to change once tested. Is this a good approach to programming? That is, having changing code very fragmented into many modules and very easy to understand, and non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive? Or is the solution presented above good, where \"changing code\" is very easy to understand, debug and change, and \"linking code\" is kind of difficult? Note: this is not about code readability! Both the code in 1 and 2 is readable, but the code in 2 comes with more complex abstractions while the code in 1 comes with simple abstractions."} {"_id": "133073", "title": "Is there a difference between casting and converting types in imperative programming languages?", "text": "The question came up in a discussion on StackOverflow. Is there a clean distinction between the two concepts **cast** and **convert** (concerning the type of an object), or do these two words describe exactly the same thing? How about languages other than C++, Python and Java? **EDIT**: What if the types in question are primitive types, like `int` to `float`?"} {"_id": "113461", "title": "Involuntarily becoming a programmer: how to do it right?", "text": "My background is electrical engineering, DSP to be more precise.
The company I currently work for does a lot of diverse projects, mostly building analog hardware. Being somewhat closer to computers than everybody else around here, I'm often the one writing code, both for embedded devices (which I'm perfectly fine with) and for Windows or Linux. It is the latter that is foreign territory to me. I can code, and I know a few languages (C/C++, Java, some VB.NET), but I have only used them for algorithm simulations in signal and image processing, neural networks, and other similar applications. For me, programming has been a computational tool more than anything else. However, I get more and more projects where I have to write proper, full-fledged software, and I don't really know how to do it, because I never had to, and I was never really interested enough. I have seen quite a few engineers who got converted into coders to a certain degree because of job demands, and most of them weren't that great at what they did. I'm sure many people have encountered the same. If I were to learn to write proper software, with a good user interface, good internal architecture and so on, how do I do it? We don't have anyone at work who could tell me what's good practice and what isn't. Given that I can write _code_ in the rawest sense of the word, what else is there to know about writing good software, and how do I get there on my own?"} {"_id": "83568", "title": "Is it legal to redistribute `FSharp.Core.dll` from the FSharp redistributable package?", "text": "Is it legal to put `FSharp.Core.dll` from the FSharp Redist Package into my application package and redistribute it with my application? I couldn't find any information about this."} {"_id": "83566", "title": "Making it work V/S Making it work right", "text": "I am somewhat good at coding algorithms for small projects, but when it comes to attempting some big ones, I make a total hodge-podge of my project's structure. I have studied a lot of books on software engineering and SDLCs, where they say that becoming a developer is something anyone can do, but that the real developer is the one who follows a systematic approach to the problem and solves it the way it's meant to be solved. My code works fine for such jumbled-up projects, but it gets too difficult to maintain and upgrade further. So, **I would like suggestions as to what the proper approach to planning the big moves should be**"} {"_id": "251318", "title": "Problem : Certificate for multi Clients of WCF", "text": "* If my WCF service has a big number of clients over the Internet, should all of them share the same client certificate (X509)? * And if their certificates should be unique, what should I do on the WCF side to identify all the certificates? * Last question: must I import the server-side certificate into the TrustedPeople store on every client? (This seems troublesome; is there a more convenient way?)"} {"_id": "112730", "title": "Windows Phone 7 app development - Is it worth it?", "text": "I've written a Windows Phone 7 application to display the Ordnance Survey maps that are loved so much in the UK (I am amazed that no one else has done this yet). However, I was about to shell out the £65 to pay for App Hub and get my app into the Marketplace when I started investigating how you actually get paid for the apps that people buy. Apparently, if you are not a US developer then you have to start sending over forms (e.g. a W8BEN form), and even after this the IRS takes another 30% (after MS have taken their 30% share).
It also mentions VAT, so maybe there is more money taken off after this as well? Has anyone from outside the US actually got all the paperwork sorted so they got paid? Did you get tax taken off as well? What percentage of the sales do you actually end up with? Is it all worth it? I don't expect to make much from the app, but I would like to think I could recoup my £65 and have enough to buy a couple of beers as well."} {"_id": "112731", "title": "What does backslash \"\\\" escape character really escape?", "text": "What does the backslash really escape? It's used as an escape character, but I have always wondered: what is being escaped? I know \"\\n\" is a newline character, but what in it is being escaped? Why is it called that?"} {"_id": "85812", "title": "Is CodeIgniter PHP Framework suitable for large ERP or Business Application?", "text": "Is CodeIgniter recommended for a large web-based ERP or business application? I want to use CodeIgniter for my future project, and I'm confused about whether to use it or not. I'm worried that over the long-term lifetime of the application it may crash or produce bugs or errors. I'm also worried about the performance of the framework when the data becomes larger, containing millions of records. I searched the internet for an answer, but there was no exact answer that satisfied me. I think this question is important for programmers like me who want to use a PHP framework for their large business applications. I need advice from you guys in order to decide whether to use it or not. Thank you very much!"} {"_id": "147168", "title": "What to charge for a ready-built piece of software?", "text": "Recently I was contacted by someone interested in buying a piece of software that I wrote to automate a process. The 'client' knows little about the program, except what it outputs and how easy it is to use. They are now interested in purchasing it, and I need to give a reasonable price. I've never done anything like this before, so as you might guess, I do not know where to start with a price. I spent a good 12+ hours working on the program itself. The client will probably want some changes also. **My question: How can I work out what to charge for a piece of software that I built in my own time?**"} {"_id": "47474", "title": "Adoption of Lean methods", "text": "Are there any studies on the adoption of agile methods based on Lean principles? Some of the stats I'm looking for: 1. How widely is Lean used? 2. In terms of agile methods, how does the adoption of Lean compare with other agile methodologies like Scrum, XP, etc.?"} {"_id": "47476", "title": "Should companies require developers to credit code they didn't write?", "text": "In academia, it's considered cheating if a student copies code/work from someone/somewhere else without giving credit, and tries to pass it off as his/her own. Should companies make it a requirement for developers to properly credit all _non-trivial_ code and work that they did not produce themselves? Is it useful to do so, or is it simply overkill? I understand there are various free licenses out there, but if I find stuff I like and actually use, I really feel compelled to give credit via a comment in the code, even if it's not required by the license (or the lack thereof)."} {"_id": "147160", "title": "Good resource for getting up to speed on windows forms development", "text": "I'm a .NET developer comfortable with C# and web application development. I need to improve my knowledge of WinForms. What is a good book or online resource that particularly covers the WinForms life cycle? Which events should be used for what? How does OnLoad compare to Form_Load, etc.?"}
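On the OnLoad/Form_Load part of the question above: in WinForms, `OnLoad` is the virtual method that raises the `Load` event, so overriding it and handling the event are two views of the same lifecycle step. A minimal sketch:

```csharp
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        // Form_Load style: subscribe to the Load event.
        Load += Form_Load;
    }

    // OnLoad style: override the virtual method that raises Load.
    protected override void OnLoad(EventArgs e)
    {
        // Code here runs before any event handlers...
        base.OnLoad(e); // ...this call raises the Load event...
        // ...and code here runs after all handlers have run.
    }

    private void Form_Load(object sender, EventArgs e)
    {
        // Typical place for initialization that needs the window handle.
    }
}
```

Overriding `OnLoad` is the usual choice when subclassing, since you control the ordering around `base.OnLoad(e)`; the event is for external subscribers, such as the designer-generated handler.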
{"_id": "6884", "title": "How to recognize a bad client before you start to work for him?", "text": "I'm sure that many of you have encountered a bad client. I'm also sure you took some measures to prevent such encounters in the future. What is the most influential characteristic of a client that warns you to walk away?"} {"_id": "222753", "title": "How do you read this line of code?", "text": "Forgive me for my poor English; my friend and I were doing school homework when he suddenly asked me to read this line of code: `ptr = &array[1][1][1]` (ptr is a pointer to an integer). He said: > ptr is pointing to the address of the second element of the second column of > the second row of an array. I think he would be right if it were an array of char. But since it's an integer 3D array, I have no idea how to read that line of code in plain English. How would you read it?"} {"_id": "225566", "title": "What would a good work flow for a programming mentor look like?", "text": "I have read a junior mentoring topic which answered most of my doubts. However, that topic gives general (though very good) advice on mentoring, while I am mainly interested in what a mentor's week looks like. This is my case: I have a junior (1 year of experience) who got a new project, and I am mentoring him. While I am sure that he will use Google and will not ask me silly questions, I want to make sure he's on the right path and does not create structures which are hard to maintain or slow in execution. Now, should I: 1. Check out the code each day and examine it? * Pro: detect errors at the root. * Con: time-consuming, and I may overlook something, as there is a work day behind me as well. 2. When he needs to implement, let's say, a singleton, do I tell him to find it on Google and come back to me with a solution? Or do I let him implement it and review it later? * Pro: he does not wait for me to reply to him. * Con: he may implement it wrongly and lose precious time. 3. Are there any situations when I can just check the result, as if I were QA? Or must I always stick to code review? 4. At what point should I stop being a mentor? After how many projects, or after how much time? 5. Anything else I have missed?"} {"_id": "103139", "title": "MCPD: Webdeveloper 4 Visual Studio 2010 Training Stuff", "text": "I am thinking about doing the 4 exams required to become an MCPD Web Developer 4 on Visual Studio 2010. The exams can be found here: http://www.microsoft.com/learning/en/us/certification/mcpd.aspx#tab2 I found only one book at Amazon, and it has a rather bad rating. Can somebody recommend any training material for those exams?"} {"_id": "225568", "title": "How should I introduce a new programmer to a complex project?", "text": "We have a pretty complex project with 100 or so classes, multiple custom elements, etc. We have a new senior programmer who will work on this project. How should we approach the task of introducing the senior programmer to the project? How can we make sure the programmer does not break some other feature while fixing the current one? I have seen too many similar situations. Sometimes, new bugs are not noticed immediately, but pop up months later. Shall I direct the programmer by first giving them smaller tasks and waiting for them to finish? Or is there a better solution?
I am asking this because I don't want to become a micromanager, but I also don't want nasty bugs created because of my bad approach to the new programmer."} {"_id": "225569", "title": "A programmer as project manager or non-programmer?", "text": "We are a team of programmers, and we are employing a new person who will be a project manager only. Our projects are programming projects only, and we are not sure how to decide who would be the best project manager. Shall we find a senior programmer from our field? Or must we find a regular person (with an economics or management background) who has never coded in his life? **EDIT** The project manager will: * Create tasks for developers * Monitor that our ticket workflow is respected * Warn developers if they exceed the estimated work time * Manage weekly finance timesheets * Help the client and reply to his questions * Set up meetings between the client and developers (internet meetings) And similar things. Now that I have written this out, I see that there are no IT tasks in here. But what about the simple questions the client asks, which an IT-educated person would be able to answer? We obviously cannot bother the programmers with these all the time. This is the main reason why we are in doubt."} {"_id": "53970", "title": "C# books for the experienced programmer", "text": "> **Possible Duplicate:** > Is there a canonical source for learning C# and .NET internals? So I've been programming in C# for 3 years now (and programmed in various languages for 3 years before that as well), and most of the stuff I learned I pieced together from the internet. The thing is, I want to understand C# more formally and in depth, and so I would like to get some books on the subject. Any books you'd recommend? Also, I've heard good things about \"C# 4.0 in a Nutshell\", \"Pro C# 2010 and the .NET 4 Platform\" and \"CLR via C#\". What do you think of these? (The people at StackOverflow told me to take it here. Please, please tell me I'm in the right place this time)"} {"_id": "238033", "title": "What does it mean when data is scalar?", "text": "I don't know exactly what scalar means, but I'm trying to see if I'm thinking about it correctly. Does scalar relate to arbitrariness, where the data could be of any type, or to a system not being able to know what the data is in advance?"} {"_id": "238036", "title": "Where should I put bindings for dependency injection?", "text": "I'm new to dependency injection, and though I've really liked it so far, I'm not sure where bindings should go. I'm using Guice in Java, so some of what I say might be specific to Guice. As I see it, there are two options: **Accompanying the class(es) it's needed for.** Then, just write `install(OtherClassModule.class)` in whatever other modules want to be able to use said class. As I see it, the advantage of this is that classes that want to use it (or manage classes that want to use it) don't need to know any of the implementation details. The issue I see is: what if two classes want to use two different versions of the same class? There's a lot of customization possible because of DI, and this seems to restrict it a lot. **Implemented in the module of the class(es) it's needed for.** It's the flip of what I said above. Now you have customization, but not encapsulation. Is there a third option? Am I misunderstanding something obvious?
What's the best practice?"} {"_id": "238037", "title": "Why std::allocators are not that popular?", "text": "Given the trends in C and C++ applications over the last few years, I was expecting to see `std::allocator`s used far more frequently than they actually are. Modern applications are usually multithreaded, or have some form of concurrency in them, and they manage significant amounts of memory, especially in fields like games and multimedia applications; so the usual high cost of allocations/deallocations impacts performance even more than in old applications. But from what I can see, this kind of memory management through `std::allocator`s is not popular; I can't name even one library that I know of that uses any standardized approach to managing memory. Is there a real reason why `std::allocator` is not popular? Technical reasons?"} {"_id": "171279", "title": "What can I do in order to inform users of potential errors in my software in order to minimize liability?", "text": "I'm an independent software developer who has spent the last few months creating software for viewing and searching map data. The software has some navigation functionality as well (mapping, directions, etc.). The eventual goal is to sell it in mobile app markets. I use OpenStreetMap as my data source. I'm concerned about liability for erroneous map data / routing instructions, etc., that might result when someone uses the application. There are a lot of stories on the internet where someone gets into an accident or gets stuck or gets lost because of their GPS unit/Google Maps/mapping app... I myself have come across incorrect map data in a GPS unit I have in my car. While I try to make my own software as bug-free as possible, no software is truly bug-free. And moving beyond what I can control, OpenStreetMap data (and street map data in general) is prone to errors as well. What steps can I take to clearly inform the user that results from the software aren't always perfect, and to minimize my liability?"} {"_id": "40692", "title": "Greater than or identical to?", "text": "While browsing my code in a weakly-typed language, I noticed that I've trained myself to use identity (`===`) where logical. Then I came across a greater (or less) than or equal to (`>=`), and it made me wonder... why is there no \"greater than or identical to\"? I suppose it would be `>==`. For example... 5 == 5 // true 5 === 5 // true 5.5 >= 5 // true 5.5 >== 5 // false 6 >= 5 // true 6 >== 5 // true Basically, it would return false if the operands were of different types. For example, if I want to check whether $x is greater than $y, but I want them both to be integers (or floats, but no mixing), then wouldn't it make sense to have a single call that can do all that, rather than having to check separately whether they are of the same type? A quick Google search indicated that this may not exist in any language; why not? Is it just not as useful as I might think it is? :)"} {"_id": "238039", "title": "Should data models know where / how they're stored?", "text": "I have some classes that represent, for the most part, data that's deserialized from XML. They also have some behavior in them, because I don't want to suffer from an anemic domain model. These domain objects don't have any XML-specific code in them, but they do have properties that correspond exactly to the tags and attributes in the XML. Many of these classes' behaviors rely on resources loaded from disk, like images.
For that, some properties / XML attributes are relative file paths to these resources. Now my problems begin. The absolute path to be constructed for these resources is based on a separate dependency - the root of the \"project\". Here's what I'm struggling with: 1. Should my domain objects store the full absolute path to their resources? Or just the relative paths? If they held the absolute path, then the classes that load them - my XML readers - would have to take a dependency on the project root in order to build the full path for them. If not, I'll have to make yet another layer of classes to handle that. I already have 3 layers; I don't know if I can handle any more! 2. Should my domain objects store _their own paths_, i.e. where they are on disk? It makes sense for them to know where their dependencies are in relation to them, but knowing their own location just feels wrong to me intuitively. This is sort of unrelated to the other part of the question, but I think if I go the wrong way on this it could bite me later."} {"_id": "136519", "title": "Examples of operator overloading, which make sense", "text": "While learning C#, I found that C# supports operator overloading. I have trouble finding a good example which: 1. Makes sense (unlike, say, adding a class named Sheep to a class named Cow) 2. Is not an example of concatenating two strings Examples from the Base Class Library are welcome."}
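One commonly cited family of examples fitting the question above is arithmetic on value types with real algebraic meaning, such as money amounts; the `Money` type here is hypothetical, not from the BCL (BCL examples do exist, e.g. `TimeSpan + TimeSpan` and `DateTime + TimeSpan`):

```csharp
using System;

// A hypothetical immutable Money value type: '+' has a natural,
// unsurprising meaning, which is when overloading pays off.
public struct Money
{
    public readonly decimal Amount;
    public readonly string Currency;

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public static Money operator +(Money left, Money right)
    {
        // Refuse to mix currencies rather than silently add numbers.
        if (left.Currency != right.Currency)
            throw new InvalidOperationException("Cannot add different currencies.");
        return new Money(left.Amount + right.Amount, left.Currency);
    }

    public override string ToString()
    {
        return Amount + " " + Currency;
    }
}

public static class Program
{
    public static void Main()
    {
        Money subtotal = new Money(9.99m, "EUR");
        Money shipping = new Money(4.50m, "EUR");
        Console.WriteLine(subtotal + shipping); // prints: 14.49 EUR
    }
}
```

The rule of thumb this illustrates: overload an operator only when the operation is the domain's own arithmetic, so `a + b` reads exactly as a domain expert would say it.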
{"_id": "238257", "title": "Optional dependencies in npm?", "text": "I have a similar question to this, but not quite the same. I would like the user of my app to install it with whatever dependencies are needed for the way he wants to use it. So, for example, if they want to persist to MongoDB, then only Mongo-related libraries will be installed, and if they want to persist to Redis, then only Redis-related libraries will be installed. I don't want to make them download and install libraries they won't be using. I know I can do that for development purposes with `devDependencies`, but this goes farther than that. As the answer to the question above says, this is more closely related to Python's `setuptools` `extras_require` and Clojure's `leiningen` profiles. Anything like that in npm? I really feel like `devDependencies` should be a `dev` profile of a more versatile way of specifying dependencies."} {"_id": "145261", "title": "Validation and authorization in layered architecture", "text": "I know you are thinking (or maybe yelling), \"not another question asking where validation belongs in a layered architecture?!?\" Well, yes, but hopefully this will be a little bit of a different take on the subject. I am a firm believer that validation takes many forms, is context-based, and varies at each level of the architecture. That is the basis for this post: helping to identify what type of validation should be performed in each layer. In addition, a question that often comes up is where authorization checks belong. The example scenario comes from an application for a catering business. Periodically during the day, a driver may turn in to the office any excess cash they've accumulated while taking the truck from site to site. The application allows a user to record the 'cash drop' by collecting the driver's ID and the amount. Here's some skeleton code to illustrate the layers involved: public class CashDropApi // This is in the Service Facade Layer { [WebInvoke(Method = \"POST\")] public void AddCashDrop(NewCashDropContract contract) { // 1 Service.AddCashDrop(contract.Amount, contract.DriverId); } } public class CashDropService // This is the Application Service in the Domain Layer { public void AddCashDrop(Decimal amount, Int32 driverId) { // 2 CommandBus.Send(new AddCashDropCommand(amount, driverId)); } } internal class AddCashDropCommand // This is a command object in Domain Layer { public AddCashDropCommand(Decimal amount, Int32 driverId) { // 3 Amount = amount; DriverId = driverId; } public Decimal Amount { get; private set; } public Int32 DriverId { get; private set; } } internal class AddCashDropCommandHandler : IHandle<AddCashDropCommand> { internal ICashDropFactory Factory { get; set; } // Set by IoC container internal ICashDropRepository CashDrops { get; set; } // Set by IoC container internal IEmployeeRepository Employees { get; set; } // Set by IoC container public void Handle(AddCashDropCommand command) { // 4 var driver = Employees.GetById(command.DriverId); // 5 var authorizedBy = CurrentUser as Employee; // 6 var cashDrop = Factory.CreateCashDrop(command.Amount, driver, authorizedBy); // 7 CashDrops.Add(cashDrop); } } public class CashDropFactory { public CashDrop CreateCashDrop(Decimal amount, Employee driver, Employee authorizedBy) { // 8 return new CashDrop(amount, driver, authorizedBy, DateTime.Now); } } public class CashDrop // The domain object (entity) { public CashDrop(Decimal amount, Employee driver, Employee authorizedBy, DateTime at) { // 9 ... } } public class CashDropRepository // The implementation is in the Data Access Layer { public void Add(CashDrop item) { // 10 ... } } I've indicated 10 locations where I've seen validation checks placed in code. My question is what checks you would, if any, be performing at each, given the following business rules (along with standard checks for length, range, format, type, etc.): 1. The amount of the cash drop must be greater than zero. 2. The cash drop must have a valid driver. 3. The current user must be authorized to add cash drops (the current user is not the driver). Please share your thoughts on how you have approached or would approach this scenario, and the reasons for your choices."} {"_id": "231554", "title": "How is the publish-subscribe pattern different from gotos?", "text": "My understanding is that goto statements are generally frowned upon. But the publish-subscribe pattern seems conceptually similar, in that when a piece of code publishes a message, it performs a one-way transfer of control. The programmer may have no idea what parts of the program are subscribing to this message. I have seen something similar in a lot of JavaScript programs, in which events are used to conveniently \"hop\" across modules. Am I missing something about the publish-subscribe or event-driven patterns?"} {"_id": "77865", "title": "Should I wait until Mango's release?", "text": "I was deciding whether to start a Windows Phone application, but it seems like it might be better to wait until Mango is released. Is Mango a new platform, or is it an upgrade to Windows Phone 7 (i.e. Windows Phone 8)? Is it expected to arrive in the near future?
That is, should a project starting today worry about it?"} {"_id": "193420", "title": "Best practise is not to poll...but isn't polling happening internally anyway when a thread calls wait()?", "text": "Say we have some thread that wants to check when another thread has finished its task. I have read that we should call a wait()-type function that will make this thread wait until it receives a notification that the other thread is finished. And that this is good because it means we aren't performing expensive polling. But isn't polling happening internally at a lower level anyway? I.e., if we make the thread wait(), isn't the kernel performing polling anyway to check when the other thread is finished, so that it can then notify the first thread? I presume I am missing something here; can someone enlighten me?"} {"_id": "77690", "title": "Design: Object method vs separate class's method which takes Object as parameter?", "text": "For example, is it better to do: Pdf pdf = new Pdf(); pdf.Print(); or: Pdf pdf = new Pdf(); PdfPrinter printer = new PdfPrinter(); printer.Print(pdf); Another example: Country m = new Country(\"Mexico\"); double ratio = m.GetDebtToGDPRatio(); or: Country m = new Country(\"Mexico\"); Country us = new Country(\"US\"); DebtStatistics ds = new DebtStatistics(); double usRatio = ds.GetDebtToGDPRatio(us); double mRatio = ds.GetDebtToGDPRatio(m); My concern with the last example is that there are potentially endless statistics (but let's say even just 10) you might want to know about a country; do they all belong on the Country object? e.g. Country m = new Country(\"Mexico\"); double ratio = m.GetGDPToMedianIncomeRatio(); These are simple ratios, but let's assume the statistics are complicated enough to warrant a method. Where is the line between operations that are intrinsic to an object and operations that can be performed on an object but are not part of it?"} {"_id": "238785", "title": "The difference between Computer Science and Computer Information Systems?", "text": "I'm asking you this question because you are experts in programming, and maybe you can help me. Unfortunately, in my country you can't get a job if you don't have a university degree, even if you are a great programmer, so this year I went to college and majored in Computer Science. But I'm confused between CS and CIS, and about which major I should spend my money on. There are more than 25 subjects shared between them, but some subjects are specific to Computer Science and others to Computer Information Systems. **Computer Science specialization:** 1. Discrete Mathematics 2. Linear Algebra and Numerical Analysis 3. Computation Theory 4. Compiler Design 5. Wireless Network Security 6. Artificial Intelligence 7. Computer Organization & Architecture 8. Expert Systems **Computer Information Systems specialization:** 1. Information Retrieval Systems 2. Network Management 3. Wireless Networks 4. Database Management Systems 5. Information Security 6. Data Warehousing 7. Data Mining 8. Intelligent Systems"} {"_id": "238786", "title": "Does C++ compiler remove/optimize useless parentheses?", "text": "Will the code int a = ((1 + 2) + 3); // Easy to read run slower than int a = 1 + 2 + 3; // (Barely) Not quite so easy to read or are modern compilers clever enough to remove/optimize \"useless\" parentheses? It might seem like a very tiny optimization concern, but choosing C++ over C#/Java/...
is all about optimizations (IMHO)."} {"_id": "31630", "title": "Best practices for managing and maintaining large Rails app?", "text": "What are best practices for managing and maintaining a large Rails app?"} {"_id": "238783", "title": "JavaScript strict mode compatibility", "text": "While reading about strict mode on MDN, I was really surprised to read the following near the end of the page, quote: > Browsers don't reliably implement strict mode yet, so don't blindly depend > on it. Strict mode changes semantics. Relying on those changes will cause > mistakes and errors in browsers which don't implement strict mode. Exercise > caution in using strict mode, and back up reliance on strict mode with > feature tests that check whether relevant parts of strict mode are > implemented. Finally, make sure to test your code in browsers that do and > don't support strict mode. If you test only in browsers that don't support > strict mode, you're very likely to have problems in browsers that do, and > vice versa. As far as I understand it, strict mode is a reduced subset of \"non-strict\" mode, so I can't imagine a situation where strict code can't run correctly in a non-strict browser. So, the question is: does this statement really make sense? Is there any situation where the strict-to-\"non-strict\" switch will make code invalid?"} {"_id": "225375", "title": "C# Design Issue", "text": "I am building a small application and I am trying to understand the best way to approach the design. I am looking for some guidance/advice on how best to approach the following issue. I receive a set of data in real time. I then analyze the data for patterns. The patterns are classes that derive from an abstract class which implements an interface. The number of patterns will change over time as patterns are added/removed. In addition, depending on the access level of the user, the data is analyzed with different pattern options. For example, if I have five patterns, A, B, C, D and E, access level 100 may only analyze the data with pattern A, whereas access level 300 will analyze it with patterns B, D and E, and access level 500 will analyze it with all the patterns. The access levels are linked to the user, and a user can have different access levels on different data streams. My thought is to create a hash table or dictionary for the patterns and a DB for the users and their various access levels. Is this the best way to go, or is there a better approach that will work in real time?"} {"_id": "42253", "title": "How do I account for changed or forgotten tasks in an estimate?", "text": "To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of _Software Estimation_. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of possible hours for each task as well, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. (A rough sketch of this kind of statistical roll-up appears below.) I feel like this method has been working fairly well for me.
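For readers unfamiliar with the technique referenced above, here is a minimal sketch of one common way to roll ranged task estimates up into an estimate at a chosen confidence level, using PERT-style formulas (a mean and standard deviation per task, summed means, root-sum-square of the deviations). This is a generic illustration under the assumption of independent tasks, not necessarily the exact procedure from McConnell's book; the z-values are the usual one-sided normal-distribution constants:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One ranged task estimate: best case, most likely, worst case (hours).
public class TaskEstimate
{
    public double Best, Likely, Worst;

    // PERT-style approximations for a single task.
    public double Mean { get { return (Best + 4 * Likely + Worst) / 6.0; } }
    public double StdDev { get { return (Worst - Best) / 6.0; } }
}

public static class Rollup
{
    // Roll tasks up and read off an estimate at a given confidence level.
    // zValue: 0 for 50%, ~0.84 for 80%, ~1.28 for 90% (one-sided normal).
    public static double AtConfidence(IEnumerable<TaskEstimate> tasks, double zValue)
    {
        var list = tasks.ToList();
        double totalMean = list.Sum(t => t.Mean);
        // Variances add (for independent tasks); standard deviations do not.
        double totalStdDev = Math.Sqrt(list.Sum(t => t.StdDev * t.StdDev));
        return totalMean + zValue * totalStdDev;
    }
}
```

For example, if the task means sum to 40 hours and the combined standard deviation is 5 hours, the 90%-confidence figure would be roughly 40 + 1.28 * 5, about 46.4 hours.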
We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I forget a task, or I end up needing to do work that does not fall neatly within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now I basically just calculate whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear."} {"_id": "103794", "title": "Is it acceptable to deploy web app to production directly from SVN", "text": "**Question** Is there a legitimate reason **NOT** to use SVN for production deploys, or is this merely a case of personal preference with no real case against SVN? **Background** My workplace has a culture of tagging releases in SVN and then deploying those releases directly to the various web servers using `svn co` or `svn switch`, including directly to production. I personally have a problem with this, as I believe that without a build-and-deploy script or some form of automated deploy you lose integration environment settings, as they're undocumented. But more than that, I have a gut feeling that there may be a hidden danger to doing this which has been overlooked, something which has yet to rear its ugly head. I've brought up my concerns with the operations staff who are responsible for deploying code to our various environments (staging, pre-prod, production, etc.). Their arguments were pretty much that it has worked pretty well so far, so there is no reason to change. **Edit:** _What I mean by build and deploy:_ For example, suppose a developer requires a web.config setting added for a particular environment. Web.config is generally not kept in SVN, and so these files are updated manually without any form of automated build script. So if they're lost, or ops forget to add a field to the web.config for a release, then you have issues. A build script that, say, uses XMLPoke to automatically generate a web.config appropriate for a particular environment is ideal, in that you have a versionable script which documents all changes necessary for each of your environments. _Current Build and Deploy Method_ For the project in question, a developer builds a release manually; other projects have the build step automated with either NAnt or MSBuild, which is OK. Database migrations for most projects are via DB scripts, migration scripts (Migrator.NET), or CMS packages. CI is usually done by TeamCity on a per-checkin basis; we have a code review process whereby all tickets are done in branches and then peer reviewed for validation and correctness/quality prior to checking into trunk (works well). However, the actual code deploy is pretty much always via SVN, either via a checkout of a tagged release or, more generally, an SVN switch. It strikes me as odd that we're using our repository as part of the deploy process. Configuration doesn't generally change very often; the only things that will be in config files are environment-specific information. Everything else is in the DB. Don't get me wrong, this works, and it works well. However, I want to try to push for an automated build and deploy. I've used this with Rails and Capistrano, and for personal projects using Cygwin, NAnt and SSH.
More importantly I would need very specific valid arguments to get my colleagues to change to using an automated build and deploy. Or are there NO real valid arguments against using SVN specifically to deploy to production?"} {"_id": "103797", "title": "Why the decline in search traffic for popular programming languages?", "text": "Is there any solid evidence behind the reasons for decline in search volume for popular programming languages? Could this possibly be due to improvements in finding necessary information (no need to search 10 times to find something) and quality of information? ![enter image description here](http://i.stack.imgur.com/Wfc7T.jpg)"} {"_id": "234007", "title": "What kind of webservice can be called with just a browser url?", "text": "Can a webservice be written in such a way that it can be called via just a browser URL? For example, if the webservice is called GetStockQuote, then it should be callable by the following URL in the browser http://myserver.com/WebServices/GetStockQuote?sym=MSFT Likewise, if there are more params, they can be passed via the URL. Unlike this one - http://www.webservicex.net/stockquote.asmx - where it doesn't look like the param can be passed via a URL. It seems to require either a program or a human to type in the param & click on invoke. Or does it always require a client program? Is there a way, from the WSDL, to figure out whether it can be called from a browser or not? How can I program a webservice which fulfils the above criteria?"} {"_id": "170632", "title": "Is it okay to define a [] method in ruby's NilClass?", "text": "Ruby by default does not include the method `[]` for `NilClass`. For example, to check if `foo[\"bar\"]` exists when `foo` may be nil, I have to do: foo = something_that_may_or_may_not_return_nil if foo && foo[\"bar\"] # do something with foo[\"bar\"] here end If I define this method: class NilClass def [](arg) nil end end Something like that would make this possible, even if `foo` is nil: if foo[\"bar\"] # do something with foo[\"bar\"] end Or even: if foo[\"bar\"][\"baz\"] # do something with foo[\"bar\"][\"baz\"] here end **Question:** Is this a good idea or is there some reason Ruby doesn't include this functionality by default?"} {"_id": "88478", "title": "Pros and cons of hosted scripts", "text": "I have seen some developers use hosted scripts to link their libraries. `cdn.jquerytools.org` is one example. I have also seen people complain that a hosted script link has been hijacked. How safe is using hosted scripts in reality? Are the scripts automatically updated? For example, if jQuery 5 goes to 6, do I automatically get version 6 or do I need to update my link? I also see that Google has a large set of these scripts set up for hosting. What are the pros and cons?"} {"_id": "170634", "title": "Avoiding duplicate bug reports", "text": "I use Linux and other open source software in my home. As I'm not a professional coder, I usually report bugs to developers as my skills are not enough to solve problems on my own. What kinds of things do you want me to check before I send a bug report? I mean, once I thought I found a bug in Gedit and I couldn't find a similar bug in Bugzilla. But after I sent the report, some developer said that the bug is already in Bugzilla as the bug was in GTK+, not in Gedit.
Sometimes it might be hard for an amateur to guess whether some previously reported bug already covers the issue I found."} {"_id": "42795", "title": "What do you consider to be a high-level language and for what reason?", "text": "Traditionally, C was called a high-level language, but these days it is often referred to as a low-level language (it is high-level compared to Assembly, but it is **very** low-level compared to, for instance, Python, these days). Generally, everyone calls languages of the sort of Python and Java high-level languages nowadays. How would you judge whether a programming language _really_ is a high-level language (or a low-level one)? Must you give direct instructions to the CPU while programming in a language to call it low-level (e.g. Assembly), or is C, which provides some abstraction from the hardware, a low-level language too? Has the meaning of \"high-level\" and \"low-level\" changed over the years?"} {"_id": "196996", "title": "Why do some programmers categorize C, Python, C++ differently? - regarding level", "text": "I am taking an introductory course on Python and the instructor says that Python is a high-level language and C and C++ are low-level languages. It's just damn confusing. I thought that C, C++, Python, Java, etc. were all high-level languages. I was reading questions at Stack Overflow on C, C++, etc. and they all seem to refer to those languages as high-level. It seems to me that some programmers use those terms interchangeably. Please clarify this for me."} {"_id": "225389", "title": "Model Driven Design with Bean Validation", "text": "If I have a rich domain library that gets included into a Java web application, but I want to achieve a level of dependency isolation with that domain library such that it is possible to build and test it totally independently (of larger libraries like Spring), how can I realistically use JSR-303 bean validation? The domain library itself can use the Hibernate validation annotations, like @NotNull on fields and getters just fine, and this is still independent and testable. However, if I want to use newer features, such as method parameter validation, which requires AOP, how do I maintain minimal dependencies while maintaining testability (via JUnit)? Is this a simple matter of configuring the unit test to exercise the domain objects through an AOP proxy? The design philosophy of JSR-303 seems, on the face of it, to be at odds with rich domain design, favouring more of a dumb-bean, business layer service design."} {"_id": "177947", "title": "Tomcat 7 vs. ehCache Standalone Server (Glassfish) Configuration with RESTful Web Services", "text": "My requirements consist of using ehCache to send and store data via RESTful web service calls. The data can be stored in-memory or via the filesystem... I've never used ehCache before, so I am having some issues deciding on which bundle to use. I have downloaded the following bundles: ehcache-2.6.2 ehcache-standalone-server-1.0.0 (1) What is the difference between the two? It seems the ehcache-2.6.2 contains src and binaries, which essentially enables one to bundle it with their webapps (by putting the compiled jar or binaries inside the webapp's WEB-INF/lib folder). But it doesn't seem that it has support for RESTful web services. Whereas ehcache-standalone-server-1.0.0 (which comes with an embedded Glassfish server and has support for REST & SOAP) can be used to run as a standalone server. If my answers to my own question are correct, then that means I should just use the standalone server?
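For what it's worth, a quick sketch of how a client could talk to the standalone server (assuming the documented default `/ehcache/rest/{cacheName}/{key}` layout; the cache and key names here are made up):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class EhcacheRestDemo
    {
        public static async Task Run()
        {
            var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") };
            // store an element under sampleCache/someKey
            await http.PutAsync("ehcache/rest/sampleCache/someKey", new StringContent("some value"));
            // read it back
            string value = await http.GetStringAsync("ehcache/rest/sampleCache/someKey");
            Console.WriteLine(value);
        }
    }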
(2) My requirements are to set up ehCache (with REST support) on Tomcat 7. So, how could I set up ehCache on Tomcat 7 as a separate app with REST & SOAP support? Thank you for taking the time to read this..."} {"_id": "199370", "title": "Should I make a separate unit test for a method, if it only modifies the parent state?", "text": "Should classes that modify the state of the parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods that, in most cases, return a new instance of a new type when a chained method is called. The returned instances only modify the root parent state, but not their own. Overly simplified example, to get the point across: public class BoxedRabbits { private readonly Box _box; public BoxedRabbits(Box box) { _box = box; } public void SetCount(int count) { _box.Items += count; } } public class Box { public int Items { get; set; } public BoxedRabbits AddRabbits() { return new BoxedRabbits(this); } } var box = new Box(); box.AddRabbits().SetCount(14); Say, if I write a unit test under the `Box` class unit tests: box.AddRabbits().SetCount(14) I could effectively say that I've already tested the `BoxedRabbits` class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call, than to first write a unit test for `BoxedRabbits` separately?"} {"_id": "229800", "title": "OO design for a Windows application that communicates with an external machine via RS232", "text": "I'm after a bit of OO design advice... I'm about to start developing a Windows application that communicates with an external machine via RS232. The machine has an onboard \"system controller\" consisting of registers (addresses). You can write a value to a register (e.g. to turn a pump on), or read a value from a register (e.g. to see if the pump is running). The app will have a config file detailing each register's settings, e.g. its address, whether it's 16- or 32-bit, a friendly \"alias\" (so other code can deal with \"Pump XYZ\" rather than address \"50B3D124\"), and a few other things. My first thought was to have a `Register` class reflecting these properties, plus `Read()` and `Write()` methods. But this raises the first question - should the `Read()` method have a return value, or should it populate a `Value` property on the class? Similarly, should the `Write()` method include a \"valueToWrite\" parameter, or should it write whatever value is currently held in the `Value` property? I could take this one step further (possibly even answering the above question) - instead of Read/Write methods, put the functionality into the `Value` property's getter and setter? Using a property for this purpose doesn't feel right though. I then started wondering if a `Register` class was even necessary, as its read/write functionality would be doing little more than calling Read/Write methods on a `SystemController` class, e.g. long ReadRegister(string registerAlias); void WriteRegister(string registerAlias, long value); Why not just let the rest of the application read/write registers via the `SystemController` class?
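One way to sketch that split (purely illustrative; the names are invented here): keep the serial line and addressing inside the controller, and let `Register` be a thin, identity-carrying view that delegates to it. Having `Read()` return the value, rather than populate a `Value` property, avoids stale state lingering on the object between polls.

    interface ISystemController
    {
        long ReadRegister(string registerAlias);
        void WriteRegister(string registerAlias, long value);
    }

    class Register
    {
        private readonly ISystemController _controller;
        public string Alias { get; private set; }

        public Register(ISystemController controller, string alias)
        {
            _controller = controller;
            Alias = alias;
        }

        // Delegates to the controller, which owns the RS232 line.
        public long Read() { return _controller.ReadRegister(Alias); }
        public void Write(long value) { _controller.WriteRegister(Alias, value); }
    }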
(I suspect I would still need some concept of a `Register` class, even if it's just to represent a register's config settings)."} {"_id": "177948", "title": "algorithm for project euler problem no 18", "text": "Problem number 18 from Project Euler's site is as follows: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. 3 7 4 2 4 6 8 5 9 3 That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below: 75 95 64 17 47 82 18 35 87 10 20 04 82 47 65 19 01 23 75 03 34 88 02 77 73 07 63 67 99 65 04 28 06 16 70 92 41 41 26 56 83 40 80 70 33 41 48 72 33 47 32 37 16 94 29 53 71 44 65 25 43 91 52 97 51 14 70 11 33 28 77 73 17 78 39 68 17 57 91 71 52 38 17 14 91 43 58 50 27 29 48 63 66 04 68 89 53 67 30 73 16 69 87 40 31 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23 NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o) The formulation of this problem does not make clear whether * the \"Traversor\" is greedy, meaning that it always chooses the child with the higher value * the maximum over every single walkthrough is asked The `NOTE` says that `it is possible to solve this problem by trying every route`. This means, to me, that it is also possible **without**! This leads to my actual question: Assuming the greedy walkthrough is not the maximum, is there any algorithm that finds the max walkthrough value without trying every route and that doesn't act like the greedy algorithm? I implemented an algorithm in Java, putting the values first in a node structure, then applying the greedy algorithm. The result, however, is considered wrong by Project Euler. int sum = 0; void findWay(Node node){ sum += node.value; if(node.nodeLeft != null && node.nodeRight != null){ if(node.nodeLeft.value > node.nodeRight.value){ findWay(node.nodeLeft); }else{ findWay(node.nodeRight); } } }"} {"_id": "229804", "title": "Usage of __ while declaring any variables or class member in python", "text": "Is it good practice to use the `__` convention when declaring member variables? This also gives the data member a private-like quality. There have been cases where I found it good to have hidden members of a class. It ensures that only the features that help the object behave in a more protected manner are exposed."} {"_id": "229806", "title": "Design principles : classifying different types of class", "text": "I'm trying to clarify the design of a C++ application, and would like to define a clear classification among classes to aid clarity. However I'm struggling to find any literature to help me, probably because I don't really know what I'm searching for. To help, here is what I've got so far. Classes can be one of a number of types: * Value object. Here I mean things like std::string, std::vector. These are concrete classes that are typically instantiated on the stack, can be copied etc. * Service objects. Clients will interact with these through a pure virtual interface, and these will typically be injected, DI style. A persistence service might be a good example here. There would typically be one instance of a service, either injected or accessed through a ServiceLocator (yeah, I know). * (Domain object?).
Clients interact with these through a pure virtual interface, but these represent abstract domain objects and are constructed using an abstract factory, which itself will be a service. These would typically have identity, but you want to interact with them in an abstract way. So a database connection might be a good example (note that this is different from a persistence service -- the concrete implementation of the persistence service may use a database connection, but the database connection would represent one actual connection, and there would be subclasses for Mysql, SqlServer etc). There will be many instances of such objects; they have identity. * Other things that don't fit into the above, but for which I've failed to come up with any clear classification. Now someone must have documented this kind of taxonomy, but so far I can't find anything. Does anyone have any good references? Thanks."} {"_id": "100448", "title": "UnitTests, will cleaning up your act-statement make your test more or less clear?", "text": "Let's say that we are testing `FooClass` with the following method: public void Foo(string stringParameter, int intParameter, Action<Bar> successCallback, Action<Exception> errorCallback); If the call to `Foo` succeeds, the `successCallback` will be called with the result of `Foo` in the form of a `Bar` object. If it fails, the `errorCallback` will be called with an `Exception`. So the tests will look something like this: [TestMethod] public void Foo_UnderGivenConditions_WeExpectAGivenResult() { //Arrange var fooObject = CreateFooObjectWithGivenConditions( ... ); //Act fooObject.Foo(String.Empty, 0, (bar) => { ... }, (error) => { ... }); //Assert Assert.AreEqual(..., ...); } Now, there are a lot of tests on this `Foo`-method, all of them containing a call to `Foo`, but not every test will care for all the parameters. Some may provide different string values, but don't care about the int param, and some will need to provide an error callback to assert that the right exception is thrown and so on. Now, since all the parameters are mandatory, we have to pass a string value to the `Foo` method, even though the value has no meaning for the test. There will be a lot of `\"I don't care\"` or `\"some text\"`. When someone else comes and reads the test, they have to consider whether the given value actually has a meaning for the test result or not. The same goes for the callbacks. Sometimes we need the callbacks to get to the result value or the exception, but most of the time we do not. So let's implement an extension method, `PerformFoo`, that defaults every parameter: public static Bar PerformFoo(this FooClass fooObject, string stringParameter = \"some text\", int intParameter = 0, Action<Bar> successCallback = null, Action<Exception> errorCallback = null) { Bar result = null; Action<Bar> ourCallback = bar => { result = bar; if (successCallback != null) successCallback(bar); }; fooObject.Foo(stringParameter, intParameter, ourCallback, errorCallback); return result; } The extension method will call `Foo` with default parameters, and even return the result if the successCallback is called.
This lets us change the Act part of the test to something like: //Act, all we care about is the string parameter: fooObject.PerformFoo(\"A string that we care about\"); //Act, we need the resulting bar when the int parameter is 10: var bar = fooObject.PerformFoo(intParameter: 10); //Act, we still need to provide a callback to get the exception fooObject.PerformFoo(\"SomeInvalidValueCausingAnException\", errorCallback: (error) => { exceptionThrown = error; }); So the questions would be: * Does this make the tests more readable? * Is it easier to get what the test really tests? * Does the fact that we call `PerformFoo`, which doesn't really exist on the class under test, make the test less valuable as documentation? * Would dropping the extension method for a regular method taking the fooObject as the first parameter be less of a 'lie'? (e.g. `PerformFoo(fooObject, intParameter: 10)` ) How far would you go to make your tests clean and clear?"} {"_id": "230826", "title": "Least Change Problem with Thousands of Denominations", "text": "In a hypothetical economy there are currency units that can represent thousands of different values. So, for example there might be coins worth 1c, 3c, 5c, 7c, 7.5c, 80c, 8001.5c etc. Given a list of all of the possible denominations and an arbitrary value, how would one return a list of coins representing the smallest number of coins needed to match that value? For bonus points, imagine that you have arbitrary finite quantities of each currency. So you may have only two 3c coins but thirty 18c coins. Sounds like a nightmare compsci class problem, but it's something I ran into recently on a personal project of mine. The economy is a barter economy (tf2) in which every type of item has a distinct value relative to other items. I have an Item class where items have a Value attribute and this attribute is a double that represents the value of the item relative to a base item which has a value of 1. I'm trying to come up with a way so that if someone makes an offer of a value of 30 I can then look through a list of around 1200 item objects and find the smallest set of objects which combine in value to match 30. I cannot think of a remotely efficient way to do this, but I figured the problem was interesting enough that someone would like to ponder over it with me. My program is in C#, but even a pseudocode brainstorm would be helpful."} {"_id": "39196", "title": "MVC Pattern, ViewModels, Location of conversion", "text": "I've been working with ASP.Net MVC for around a year now and have created my applications in the following way. > X.Web - MVC Application Contains Controllers and Views > > X.Lib - Contains Data Access, Repositories and Services. This allows us to drop the .Lib into any application that requires it. At the moment we are using Entity Framework; the conversion from EntityO to a more specific model is done in the controller. This set-up means that if a service method returns an EntityO, the Controller will do a conversion before the data is passed to a view. I'm interested to know if I should move the conversion to the Service so that the app doesn't have Entity Objects being passed around."} {"_id": "184227", "title": "Migrating legacy procedural code to MVC without rewriting", "text": "I recently started working on a PHP application that was built many years ago before the advent of objects and namespaces in PHP.
The code is procedural, does not separate presentation logic from business logic, lacks a front entry point (front controller), and has files everywhere with a seemingly random folder hierarchy. However, it works great for customers and despite the code mess it's actually pretty fast. I'd like to slowly move the codebase into something more maintainable and organized. I put MVC in this title, but truthfully all I really mean is an organized and maintainable codebase. Key objectives of this effort are: * Not degrade the customer experience in any sort of way while this overhaul takes place (a rewrite is not an option) * Fully separate views from logic * At least have a \"best practice\" or method for implementing new application end points (routes) * Have a consistent method or layer for persisting data * Begin encapsulating as much code as possible in objects so that unit testing is feasible at some point in time."} {"_id": "100442", "title": "How can a web site be set up using the MVC Pattern? (Without MS ASP.NET MVC)", "text": "We used a contractor to create the initial version of our site; they used their own MVC-pattern code-generator to create the (ASP.NET / C#) site and customized from there. That was turned over to us two years ago, and we've been working on it since. That's the environment I'm coming into, with little practical MVC experience. Thus I need to grok MVC in general but it seems that every article/tutorial/book I find is either extremely general in nature or very specific to MS ASP.NET MVC. Can someone here help with examples, a tutorial or walk-through of setting up a basic site using the MVC Pattern _without using Microsoft ASP.NET MVC_ so that I can better understand the concepts and see how they can be applied in real life?"} {"_id": "206889", "title": "How to incorporate existing open source software from a licensing perspective?", "text": "I'm working on software that uses the following libraries: * Biopython * SciPy * NumPy All of the above have licenses similar to `MIT` or `BSD`. Three scenarios: 1. First, if I don't redistribute those dependencies, and only my code, then all I need is my own copyright and license (planning on using the `MIT License`) for my code. Correct? 2. What if I use py2exe or py2app to create a binary executable to distribute so as to make it easy for people to run the application without needing to install python and all the dependencies. Of course this also means that my binary file(s) contains python itself (along with any other packages I might have installed via `pip install xyz`). 3. What if I bundle Biopython, SciPy, and NumPy binaries in my package? In the latter two cases, what do I need to do to comply with copyright laws?"} {"_id": "131561", "title": "IDEs for dynamic languages - how far can you get?", "text": "I find it frustrating how the speed of development that dynamic languages should offer gets significantly compromised by the lack of completions and other assets that IDEs would give you in their static counterparts. It's not just about typing less - it's the productivity boost and plain fun you get by browsing APIs without constantly having to refer to documentation that is not integrated with the editor. To date all IDE + dynamic language combinations -which, to be fair, aren't that many- I've tried were: * buggy * slow * clueless/overenthusiastic (as in showing all completions ever possible) * or simply not as complete as, say, Eclipse + Java. I'm aware that dynamic code analysis is not a trivial task.
But one can't help wondering - _is this piece of code really so hard to figure out_? So my question is: **Have any particular IDEs (or less all-in-one setups) achieved totally outstanding support for a dynamic language, or is this still an 'unsolved' problem?**"} {"_id": "159238", "title": "Implementing ActiveRecord with inheritance?", "text": "I recently converted an old application that was using XML files as the data store to use SQL instead. To avoid a lot of changes I basically created ActiveRecord style classes that inherited from the original business objects. For example SomeClassRecord :SomeClass //ID Property //Save method I then used this new class in place of the other one; because of polymorphism, I didn't need to change any methods that took SomeClass as a parameter. Would this be considered 'Bad'? What would be a better alternative?"} {"_id": "224389", "title": "Questions about the issue of getting stuck in a problem, and making snail's pace progress", "text": "Let me describe my background. I am currently a software developer co-op working at an architecture firm that has a large focus on research and development, mainly in mathematical modeling and physics simulations to better aid architecture design. I am one of the three main software developers. Realistically, we're all people that specialize in different topics, for example, one developer is an architect that's been programming and doing mathematical models since he was in high school (he's like 30 years old), and the other architect also has a Masters in data analytics. We all specialize in different things, making us part of a very multi-disciplinary group of nine people in total. Currently I am working on a project that handles Grasshopper 3D and Rhino 3D programming, making use of the Grasshopper SDK. I really enjoy what I am doing and appreciate the learning opportunity; however, with that said, the nature of Grasshopper programming has been difficult for me. I don't think I am that poor programming-wise; however, since most of my projects involve extending functionality beyond what the default Grasshopper SDK does, there are often times I have to spend a good couple of hours or even days to understand a problem and then apply code to it. The very nature of this \"functionality beyond what the default Grasshopper SDK provides\" means that finding a solution to a problem isn't as simple as Googling, because of the relative scarcity of resources available, whether it's documentation or help threads online. Getting stuck on an issue with no ready remedy has been slowly chipping away at my confidence, and I can see that I am slowing down in shipping out components (that's what Grasshopper 3D \"tools\" are called). I've already made a good impression in the first couple of months of my co-op in being known as efficient, quick, and comprehensive. However, in my three-month review, once I began working on Grasshopper programming, my advisors noticed that my progress has not been as quick, and they mentioned that they are wondering what to put me to work on that will better bring out my full potential. I like the idea of working on another project, but I do not wish to do so until I've finished the current one, on which, day by day, I feel I am making snail's-pace progress because of issues I am stuck on where finding help is a problem in itself.
I dislike making a bad impression on my colleagues (and on my school as well, since I am a co-op employee), as I really believe I have the potential to do great. However, this issue of getting stuck on something, making small progress, then getting stuck again, is simply chipping away at my confidence. What are some things I can do? I feel like once I am stuck on a problem, I find myself closing up, unable to voice it, as if asking for help is a sign of incompetence."} {"_id": "201351", "title": "What features should a programming language have to say it has good reusability?", "text": "What should a programming language have so that it can be said to promote good reusability? Only generics and functions come to my mind when I hear the word reusability. What else makes a programming language good for writing easily reusable code?"} {"_id": "159230", "title": "Is it possible to call a Javascript function from C?", "text": "I'd like to find a way to call JavaScript functions from C. Are there any language bindings available for this purpose? I'm trying to make a library of JavaScript functions accessible from C. (Something like a C -> JavaScript foreign function interface would be suitable for this purpose, but I haven't been able to find one so far.)"} {"_id": "86042", "title": "Switching domains in one's career?", "text": "I have been a C++/Qt programmer for the last 3.5 years and have hit a plateau in terms of doing something new. Work has been repetitive and routine. I personally believe it is time to move on but of late I am getting more offers in mobile development like Android, iPhone etc. The latest offer I have is for an Objective-C based profile. I do not have the slightest idea about Objective-C apart from that it is object-oriented C, resembling C++ but not exactly a clone. Questions in my mind are * What are the pros/cons of this career switch, or of any such switch? * Is it good for one's career to change domains after some time? * How difficult is it to get back to one's previous area of proficiency? Thanks"} {"_id": "159232", "title": "Should we ever delete data in a database?", "text": "I am new to databases and trying to understand the basic concepts. I have learned how to delete data in a database. But one of my friends told me that you should never delete data in a database. Rather, when it's no longer needed, it's better to simply mark it or flag it as 'not in use'. Is that true? If so, how would a big company like IBM handle their data for a hundred or more years?"} {"_id": "204956", "title": "Mutable and Immutable version of Collection implementations or both stuffed into one via .makeImmutable()", "text": "I am currently working on a collection implementation in JavaScript. I need a mutable and an immutable version of that. So what I thought of first was something like: * _Collection_ * _MutableCollection_ extends _Collection_ * _ImmutableCollection_ extends _Collection_. My main issue with this is that if I want to implement more specific collections like _List_ or _Set_, they could still inherit from _Collection_ while their Mutable/Immutable implementations could not inherit from _MutableCollection_ / _ImmutableCollection_. So this made me think whether having all three, _Collection_, _MutableCollection_ and _ImmutableCollection_, would actually bring any significant benefit. An alternative approach which came to me would be to only have _Collection_ and then have a _Collection.prototype.makeImmutable_ or _freeze_ there.
I would then also introduce a _Collection.requireMutableInstance( collectionInstance )_ which can be used in functions explicitly requiring a mutable collection. Here are my main concerns about putting everything into just _Collection_: * The prototype definition is getting much bigger compared to defining a _MutableCollection_ prototype as well, which would then hold the code of all the operations for alteration. I guess I could solve this by just defining a second file anyhow which would then just extend the basic _Collection_ definition with those additional operations and the _makeImmutable_ operation. * Documentation of those function signatures that actually care about mutable vs. immutable and not just require the basic collection would look less pretty. Take: /** * @param {MutableCollection} collection */ function messAroundWithSomeCollection( collection ) { /* ... */ } vs. /** * @param {Collection} collection A collection not yet made immutable. */ function messAroundWithSomeCollection( collection ) { /* ... */ } What would be the more _natural_ way of doing this in JavaScript? Especially without having interfaces I can check against, and considering the flatter chains of inheritance if I would also implement a _Set_ and _Map_ collection, the everything-in-one approach seems more intuitive to me."} {"_id": "86046", "title": "Codeigniter + JQuery + Processing.js to replace a Delphi App", "text": "So, I've got a mandate to make our aged, trillion-line Delphi app web-based and it needs to make heavy use of the `` element (HTML5 compatibility doesn't seem to be a big issue since we can just make our clients use a compatible browser the way we'd make them use a compatible version of Windows in the win32 environment). The Delphi app in question is almost completely database driven and will still pretty much continue to be developed as the main product. What I am tasked with is pretty much recreating a scaled-down version of the program that performs the major functions of the whole program. I couldn't find any frameworks that simulate Windows forms using the canvas element; I'm assuming this is probably by design, since it is easier just to use HTML. Be that as it may, I still think it would be cool to have a few of my cool controls on the web (TRichView and TVirtualTree, etc...) So my question is, to anyone who has tried this before, A.) What can we use for an IDE to code this web app (I just use emacs, but no one else in my company does)? B.) Is it a good idea to mix PHP and Processing.JS? It seems like I'm using a lot of AJAX to get anything to happen. 3 calls just for one dialog box to pop up, 1. Loads the HTML for the dialog, 2. Loads the XML to populate the database info on the form 3. Loads the processing.js PJS file which draws the database info to the canvas. Is three a lot? Do people usually combine all their GETs into one?"} {"_id": "201359", "title": ".c FIle Dedicated to Data", "text": "Is it completely unheard of to have a `.c` file dedicated to just data? In my case, I'd be using it for global variables that are shared across two other `.c` files. Here's specifically how I'm using it.
// serverth.h
struct serverth_parameters {
    struct { // right now, this is the only struct needed
        char * root, * user, * public, * site;
    } paths;
    // I anticipate needing another struct here
};
#ifndef SERVERTH_SOURCE
extern struct serverth_parameters parameters;
#endif
* * *
// serverth.c
#define SERVERTH_SOURCE
#include \"serverth.h\"
struct serverth_parameters parameters = {
    .paths = { // macros are actually used here
        \"/srv\", \"/user\", \"/public\", \"/site\"
    }
};
* * * `parameters` is a struct that's used for a websockets server, in three files: * One for HTTP (uses `parameters.paths.site`) * Two for proprietary protocols (both use `.paths.user`, one uses `paths.public`) Is this a bad practice? Do people do this? Or, is it more conventional to just keep the data in the source file in which it is most relevant?"} {"_id": "4142", "title": "What are some easy-to-implement scaffolding systems?", "text": "Often when starting a new project I'll require a _\"quick 'n' dirty\"_ content management solution. Ideally something that can read my database schema and generate HTML forms. Previously I've used phpMyEdit and phpMyAdmin, but they are lacking in key areas. My wish list would be: 1. Database independent 2. Foreign key aware 3. Handles views as-well-as tables 4. Generates modern HTML and CSS 5. AJAX interface. What's your swiss army knife when it comes to CMS on a project?"} {"_id": "177362", "title": "Should I expect my peers to read or practice on a regular basis?", "text": "I've been debating asking this question for some time. Based on several of the comments I read in this question I decided I had to ask. This feels like I'm stating the obvious, but I believe that regular reading (of books, blogs, StackOverflow, whatever) and/or practice are required just to _stay current_ (let alone excel) in whichever stack you use to pay the bills, not to mention playing with things outside your comfort zone to learn new ways of doing things. Yet, I virtually never see this from many of my peers. Even when I go out of my way to point out useful (and almost always free) learning material, I quite often get a sense of total apathy from those I'm speaking to. I'd even go so far as to say that if someone doesn't try to improve (or at least stay current), they'll atrophy as technology advances and actually become less useful to the company. I don't expect people to spend hours a day studying or practicing. I have two young kids and hours of practice simply aren't feasible. Still, I find **some** time; perhaps on the train, at lunch, in bed for a few minutes, whatever. I'm willing to believe this is arrogance or naivete on my part, but I'd like to hear what the community has to say. **So here's my question:** Should I expect (and encourage) the same from my peers, or just keep my mouth shut and do my own thing?"} {"_id": "177363", "title": "How to move a car around an environment with hills in C++?", "text": "I don't have any code for this since I don't know how I am meant to do this. I have a car and I am able to move it around on a flat plane and I have that working correctly. However, I want it to also be able to move up hills accurately. So that when I drive up a hill the front wheels start going up first and then the back wheels follow like a normal car would. So far, all I have managed to do is that when one point of the car reaches an incline, the whole car moves to that new y value, instead of just the section of the car that should.
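One common approach I've seen (sketched below; all names are illustrative) is to sample the terrain height under the front and rear axles separately, then derive the body's position and pitch from the two contact heights:

    using System;

    static class CarPose
    {
        // terrainHeightAt maps an x position along the travel direction to ground height y.
        public static void PoseOnTerrain(Func<float, float> terrainHeightAt,
                                         float frontAxleX, float rearAxleX,
                                         out float bodyY, out float pitchRadians)
        {
            float frontY = terrainHeightAt(frontAxleX);
            float rearY = terrainHeightAt(rearAxleX);
            float wheelbase = frontAxleX - rearAxleX;
            pitchRadians = (float)Math.Atan2(frontY - rearY, wheelbase);
            bodyY = (frontY + rearY) / 2f; // then rotate the body by pitch about its centre
        }
    }

With this, the front axle reacts as soon as it touches the slope, while the rear stays on the flat until its own sample point reaches the incline.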
Does it require some kind of translation and rotation to be applied to it so that it reacts correctly?"} {"_id": "241597", "title": "Modelling highly specific business requirements", "text": "How can one go about modelling highly specific business requirements, which have no precedent in the system? Take for example the following requirement: ` When a purchase order contains N lines, is over X value in total and is being recorded against project Y, an email needs to be sent to persons A and B with the details ` This requirement supplements other requirements surrounding purchase orders, but comes in at a much later date in response to some ongoing problem elsewhere in the business. Persons A and B are not part of any role or group in the system, and don't hold any specific responsibility; they are simply the two people the business has appointed to receive these emails in this very specific case. Projects are also data driven, so project Y has no special properties to distinguish it from any other project. The only way to identify it is to compare its identifier to a magic number. **How can one go about modelling this kind of case without introducing too much additional complexity?** Off the top of my head, there are a couple of options. 1. **Perform the checks and actions inline with the existing code.** Here we find the correct spot in the code, check the conditions in the requirement and send the emails to hardcoded addresses. Of course this is fraught with issues. At the very least it stops working if one of these people leaves or changes their email address. At worst you have to ensure that any tests and test data are aware that additional actions are taken for a specific set of criteria. 2. **Introduce some form of events system.** Here we introduce an eventing system, so that we might react to some event, and fulfil the requirement outside of the usual path of execution. This sounds like a cleaner solution than option 1, but the work involved is ultimately probably slightly overkill for this one small requirement. That said, having it in place does allow the system to handle these kinds of specific requirements consistently and easily in the future. **Are there any other (good/better) ways of handling highly specific requirements?** I mean other than telling the other parts of the business no!"} {"_id": "241591", "title": "How to design console application with good seperation of UI from Logic", "text": "1. Is it considered overkill for a console application to be designed like MVC, MVP or an N-tier architecture? If not, which is more common, and can you link me to a simple example of it? 2. I want to implement a tic tac toe game as a console application. I have a solution which holds two projects: `TicTacToeBusinessLogic (Class library project)` and `TicTacToeConsoleApplication (Console application project)` to represent the view logic. In the `TicTacToeConsoleApplication` I have a `Program.cs` class which holds the main entry point (`public static void Main`). Now I face a problem: I want the game to handle its own game flow, so I create a new GameManager class (from the `BL`), but this causes the view to directly know the `BL` part. So I'm a little confused about how to write it in an acceptable way. Should I use delegates? Please show me a simple example."} {"_id": "185177", "title": "MySQL setup for remote site failover", "text": "I'm designing a custom web-based inventory management and workflow system for a client of mine.
One of their remote sites has fairly sketchy internet access, and needs to have no (or very little) interruption of business when their connection to the universe goes down. I'm picturing some sort of local installation of the web app with MySQL replication keeping their local DB up to date. When their connection out of the building fails, they can manually kick over to the local URL, which is configured to hit their local MySQL, and they stay in business. The question is how to replicate those changes back up to the real master. I know of such a thing as master-master replication, but will that really hot-sync divergent tables back together when the connection comes back up? It's some help that the remote site's use case is fairly unique, and it's likely (though not guaranteed) to write only to tables related to their wing of the business. I could perhaps limit the \"failover mode\" local application to only those pieces of the app that are unique to their location, which would be enough to keep them in business. How would you approach this?"} {"_id": "185175", "title": "Specifying Query in Unit Test", "text": "When writing unit tests, should I specify the query that will be performed for interacting with the database? I can see both sides of this. On one hand, I want to make sure that the query that I specify is being performed. But I can be making the test more brittle due to the formatting of the query. **EDIT:** Basically the code would be a mapper object that interacts with the db layer. But I am wondering how specific I need to be in my interactions with a Db object. **Edit:** Since I want to mock any external dependencies of my objects, I am mocking my database connection. Should I specify the query that is going to be performed? For example: public function testGettingUsers() { $mockDb = $this->getMockBuilder('My_DB_Connection') ->setMethods(array('query')) ->getMock(); $mockDb->expects($this->once()) ->method('query') ->with('SELECT id, name FROM USERS WHERE name = foo') ->will($this->returnValue($dbReturn)); Here I have specified my query as the parameter that I am going to call my DB connection with. Is that necessary?"} {"_id": "185173", "title": "What .NET objects should I use to create a cookie based session in MVC?", "text": "I'm writing a custom password reset application that uses a validation technique that doesn't fit cleanly with ASP.NET Membership Provider's challenge questions. Namely I need to invoke a workflow and collect information from the end user (backup phone number, email address) after the user logs in using a custom form. The only way I know to create a cookie-based session (without too much \"innovation\" on my part) is to use WIF. * What other standard objects can I use with ASP.NET MVC to create an authenticated session that works with non-windows user stores? Ideally I can store \"role\" or claim information in the session object such as \"admin\", \"departmentXadmin\", \"normalUser\", or \"restrictedUser\" * * * The workflow would look like this: 1. User logs in with username and password 2. If the username and pw are correct a (stateless) cookie based session is created 3. The user gets redirected to a HTML form that allows them to enter their backup phone number (for SMS dual factor), or validate it if already set. 4. The user can then change their password using the form provided The \"forgot password\" would look like this 1. User requests OTP code to be sent to the phone 2. User logs in using username and OTP 3.
If the OTP is valid and not expired then create a cookie based session and redirect to a form that allows password reset 4. Show password reset form, and process results."} {"_id": "246747", "title": "How to backup data and images in this project?", "text": "I have a MS C# and MS SQL 2008 database project. It can capture employees' records with pictures, more than 1000 records in total. Presently, I'm able to capture say 150 records on PC1 using my installed C# application, save the records into XML or CSV file format. My friends are able to continue registration or entering (i.e. from 151 to 1000) after restoring or uploading the information contained in that XML or CSV using another copy of my C# application installed on their laptops at another location. 1. How can the content be hidden or locked from users viewing it, so that only my C# app can read or write to it (not necessarily in XML or CSV format)? 2. If I still use the same XML or CSV, how can the file be compressed to smaller sizes?"} {"_id": "227671", "title": "What is the clean way to pass my LoginContext down through the layers to the data access layer?", "text": "I have inherited an API implemented using ASP.NET WebApi 2. The actions on the controllers are all like this: public object Get(long id) { LoginContext loginDetails = GetLoginDetails(); if (loginDetails.IsAuthorised) { return _dependency.DoSomething(loginDetails, id); } return new HttpResponseMessage(HttpStatusCode.Unauthorized); } The `_dependency` will have many methods all with similar signatures, and it will have dependencies of its own, and those will also use the `LoginContext` class until you finally reach the bottom of the call stack at the data access layer, where the `LoginContext` class is actually used. Dependencies are currently all injected into the constructor by the IoC container. So there are a number of issues here that bother me - the repetitive checking in each controller action that the user is authorized, and the need to have a `LoginContext` on every method of every dependency referenced anywhere by the controller. Now in the first case, I have created an action filter that handles the authentication, and writes a custom identity (which contains the `LoginContext` details) back to the `HttpContext`. That then leaves the meat of my question - what is the best way to pass my `LoginContext` down through the layers to the data access layer? **UPDATE:** just to clarify, in response to some of the questions below, authentication itself is not being checked by the data access layer (although the business layer will obviously do things differently based on the caller's authorisation claims); but rather we are passing data gathered during the authentication process to the data access layer, where it is then being used to access particular resources, or for infrastructure concerns such as auditing. The problem still remains though: should every method of my business layer and every method of my data layer take a LoginContext as one of its parameters, or are there better ways?"} {"_id": "181886", "title": "Is it okay to mock multiple objects in one class?", "text": "For developers with extensive experience using mocks, **is it okay to mock multiple objects in one class (i.e. satisfy multiple interfaces) or is this not recommended?** I am wondering because mocks are stubs anyway; from a testing perspective, it doesn't seem to make a big difference if I consolidate mocks.
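For example, with Moq (an assumption on my part; the question doesn't name a framework, and `IRepository`/`Service` are made-up types), one mock instance can satisfy several interfaces:

    var mock = new Mock<IRepository>();
    mock.As<IDisposable>().Setup(d => d.Dispose()); // same mock, second interface
    var sut = new Service(mock.Object); // mock.Object implements both interfaces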
However, I do wonder if there are negative impacts from a maintenance/clarity POV."} {"_id": "101799", "title": "What is the best strategy for quickly implementing small business applications?", "text": "I'm not a software developer, but I have previously developed a project-specific Access application for another employer. My current employer would like me to help organize their workflow and data for an upcoming project, and I was considering using Access again. In previous projects the staff used large piles of Excel spreadsheets; now they would like to keep their data more organized as well as automate some processes. I've found that it is difficult to create forms in Access that are pleasant and usable, due to the limited control over forms. Are there better options for quickly developing a business application, such as MS LightSwitch, Delphi, or .NET? The team that will be using the application is small - fewer than five users."} {"_id": "193429", "title": "Is path in Set-Cookie URL encoded?", "text": "I'm writing some code that sets cookies and I'm wondering about the exact semantics of the `Set-Cookie` header. Imagine the following HTTP header line: Set-Cookie: name=value; Path=/%20 For which path does this set the cookie? `/` or `/%20` (unescaped) (`/%20` or `/%2520` escaped)? The reason I'm asking is that I should support non-ASCII paths. Since HTTP headers must be ASCII-only, my plan was to URL-escape the path value, but the HTTP specification is not as clear as I'd hoped. **Edit** I know what Path is supposed to do. My question is: Is the value interpreted as percent-encoded or not?"} {"_id": "101796", "title": "App Store: Why did I get paid less than it says in iTunes Connect?", "text": "Ok so not really a programming question... but it is very relevant for any non-US Apple Devs. - Basically, iTunesConnect lists my last payment as $248 NZ (I live in NZ, I signed the basic US tax contract that says I do not live in the US). The payment I received in my bank account a while ago was $221. That's about a 10% difference, maybe because of some type of tax? However, I was under the impression that I should not be getting taxed, and that it was up to me to declare and pay the tax in my own country. - So for any other non-US iOS devs - do you also see this 10% difference - if so, where does the 10% go to (NZ, US, or my bank?). If not, what am I doing wrong, and how should I be doing it? Any help is much appreciated! Thanks :)"} {"_id": "181888", "title": "Versioned Resources to Improve Cacheability", "text": "Here's an API concept which could be useful for performance optimisation. It's an example of key-based cache expiry applied to a broader internet-wide context instead of the internal Memcached-style scenario it seems to be mostly used for. I want to make API calls almost as cacheable as static images. Using a news/feed subscriber as an example, which we might poll hourly, the idea is to send a last-updated timestamp along with each topic (it could just as easily be a version number or checksum): { username: \"Wendy\", topics: [{ name: \"tv\", updated: 1357647954355 }, { name: \"movies\", updated: 1357648018817 }, { name: \"music\", updated: 1357648028264 }] } To be clear, this resource itself comes directly from the server every time and is not cached on the edge or by the client. It's our subsequent calls for topics that we can aggressively cache, thanks to the timestamp.
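A minimal sketch of the client-side half of this (names are illustrative):

    // Derive an immutable, versioned URL per topic from the index payload, so
    // browser and edge caches can keep each version for as long as they like.
    static string TopicUrl(string topic, long updated)
    {
        return string.Format("/topics/{0}/{1}.json", topic, updated);
    }
    // Server side, each such response would be marked cacheable for a long time,
    // e.g. Cache-Control: public, max-age=31536000.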
Assuming we want to sync all topics, we'd have \"N\" further calls to make in a naive implementation (`/topics/tv` etc). But because of the timestamp, we can construct a URL like `/topics/tv/1357647954355.json`. The client usually doesn't make a call at all if it's already seen (and cached) the same version of that resource. Furthermore, even if it's new to the client, an edge cache (e.g. a reverse-proxy like Squid, Varnish, or service like Cloudflare) probably _has_ seen it before, because some other user has probably opened the latest version of this topic already. So we still bypass the application server; the server only ever creates topic JSON once after the underlying resource has updated. So instead of N+1 calls to the server, the client probably makes a much smaller number of calls, and those calls will rarely hit the app server anyway. **Now for my question** All this seems feasible and worth doing, but my question is whether there's any prior art for this kind of thing and in particular, any HTTP standards to support it. I initially thought of conditional caching (ETags and modified dates), and I think they'd help to optimise this setup further, but I don't believe they are the answer. They are subtly different, because they require calls to be passed through to the application server in order to check whether something's changed. The idea here is the client saying \"I already know the latest version, please send that resource back to me\". I don't think there's any HTTP standard for it, which is why I propose a URL scheme like /topics/tv/1357647954355.json instead of some ETag-like header. I believe some CDNs work this way and it's surprising to me if there's no real HTTP standard around it. **Update:** On reflection, an important special case of this is what a web browser does when it fetches a new HTML page. We know it will immediately be requesting CSS+JS, so the same versioning/timestamp trick can be used to ensure those static resources are cache-friendly. That this trick has not been formalised by the spec gives me confidence that unfortunately there is no HTTP standard for it. http://www.particletree.com/notebook/automatically-version- your-css-and-javascript-files/"} {"_id": "244413", "title": "Fiscal quarter vs calendar quarter", "text": "I'm building a Date/Time class with a \"configurable quarter\" system as follows. * User specifies which month the quarter starts at (config) * Set of functions to deal with quarters (next quarter, prev quarter, etc) * All quarter functions respect the config Now this class is primarily to be used for fiscal quarter calculations. So assuming I have this class with a configurable \"quarter\" system, would I need another parallel set of functions for calendar quarters too? What are the applications for calendar quarters anyways? By calendar quarters I mean where Q1 is Jan-Mar, and Q4 is Oct-Dec. By fiscal quarters I mean whatever standard your country uses (in India Q1 starts in April)"} {"_id": "244410", "title": "Use a template to get alternate behaviour?", "text": "Is this a bad practice?

    template <bool fireEvent> const int sId(int const id); // true/false it doesn't matter

    template <> const int MCard::sId<false>(int const id) { return this->id = id; }

    template <> const int MCard::sId<true>(int const id) {
        MCard card = *this;
        this->id = id;
        this->onChange.fire(EventArgs(*this, card));
        return this->id;
    }

    myCard.sId<true>(9);
    myCard.sId<false>(8);

As you can see, my goal is to be able to have an alternative behaviour for sId.
I know I could use a second parameter to the function and use an if, but this feels more fun (IMO) and might avoid a runtime branch (I'm no expert in that field). So, is it a valid practice, and/or is there a better approach?"} {"_id": "244417", "title": "Dynamic Query Generation : suggestion for better approaches", "text": "I am currently designing functionality in my web application where a verified user of the application can execute queries he wishes from a predefined set of queries, with the where clause varying as per the user's choice. For example, Table ABC contains the following template query called SecretReport \"Select def as FOO, ghi as BAR from MNO where \" SecretReport can have parameters XYZ, ILP. Again, XYZ can have values 1 and 2 and ILP can have 3 and 4, so if the user chooses ILP=3, he will get the result of the following query on his screen \"Select def as FOO, ghi as BAR from MNO where ILP=3\" Again, the user is allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and query based on his requirements. Of course we do not want the end user to take complete control of the DB, but only tables and fields that are relevant to him. At present we are defining what is relevant in the code. But I want the Admin to take over this functionality such that he can decide what is relevant and expose the same to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user."} {"_id": "13424", "title": "How do you keep focused through long compiles", "text": "I find it fairly simple to keep focused on the task at hand whilst I am actively engaged in it; however, during a long compile (and our current main project can compile in 5-10 minutes on a bad day) my focus will drift. I'll start procrastinating, check the web, talk to someone, start thinking about something else, and just generally lose my attention, and realise that the compile had finished a while back and I have lost my train of thought, or worse, a lot of time. This is hurting my productivity, and whilst it has not been mentioned by anyone, I think it's a habit I should stop. I have tried white-noise generators (brilliant whilst I have something to focus on), shutting down my web connection and the like, however, through long compiles what can I do rather than stare at the monitor? What tricks or tips can be used to keep focused without spending large amounts of your day staring at a compile in boredom, just to ensure you don't miss it finishing? Hang on, my build broke 10 minutes ago ... :) Finally, I am fairly sure this question isn't a dup of this, as I am talking specifically about compile times, rather than waning motivation. Foosball is not an option, rather the problem :) **EDIT** People trying to gloat about their language of choice compiling faster are not really helping, nor being particularly amusing. Thanks anyway."} {"_id": "124185", "title": "Experience With Similar Technologies: Convincing HR People?", "text": "Let's say that a job posting asks for experience in technology X.
You have no experience in X but you do have experience in technology Y, which you're convinced is similar enough that the learning curve to be productive in X would be extremely short. How do you get hiring people, especially HR people without much technical background, to look past your lack of experience in X and take you seriously? Examples: * You're applying for a Java job and have never used Java except for some very small toy projects. However, you do have substantial experience in C#, which is clearly derivative of Java and promotes the same style of programming (class-based OO, static typing, autoboxing, etc.) * You're applying for a C job. You've never worked on anything written in straight C, but you have done substantial work in C++ and used the C-like subset (raw pointers, malloc/free memory management, void* pointers, etc.) for some of the low-level parts of the project. * You're applying for a C++ job and you've done extensive work in D. D is intended to be a reengineering of C++ and includes the key concepts like RAII, templates, the ability to manage memory manually and do low-level work, etc. Furthermore, you're aware of exactly where the differences lie from extensive discussions on language design in the D community. Edit: I guess my more fundamental question that inspired this post is, \"Why do most job ads place so much emphasis on specific technologies (which aren't hard to learn on the fly if you grasp the underlying concepts) instead of fundamental language-agnostic skills?\" If I were running a language X shop, I'd much rather hire a top notch programmer regardless of specific technologies and assume he'd pick up X pretty fast than hire an expert in X who had mediocre fundamental, language-agnostic programming skills."} {"_id": "125857", "title": "Translating between Python-Django and Javascript", "text": "I have a conceptual question about 'translating' between objects I have stored in Django (in Postgres) that I want to use on the front-end. So I have a user object in Python that holds basic things: an id, a username, a password. Let's say I want to output all of that information using JavaScript on the client side. Can I just give JS a Python object and it will make it into a JS object? I know JS is known for being 'fast and loose' with typing but that just seems absurd! I'm not looking for syntax, but just an overview of how this would work. Can I put Python objects in AJAX requests, then translate them client side, or is this going to be a basic break-down-things-on-server-side, ship them off, and recombine them on the client side type of thing?"} {"_id": "254223", "title": "Looking for an algorithm to connect dots - shortest route", "text": "I have written a program to solve a special puzzle, but now I'm kind of stuck at the following problem: I have about 3200 points/nodes/dots. Each of these points is connected to a few other points (usually 2-5, theoretical limit is 1-26). I have exactly one starting point and about 30 exit points (probably all of the exit points are connected to each other). Many of these 3200 points are probably not connected to either the start or the end point in any way, like a separate net, but all points are connected to at least one other point. I need to find the shortest number of hops to go from entry to exit. There is no distance between the points (unlike the road or train routing problem), just the number of hops counts. I need to find all solutions with the shortest number of hops, and not just one solution, but all.
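To make the structure concrete, here is a minimal breadth-first sketch over such a graph (C++ purely for illustration; `adj` and all names are made up, standing for an adjacency list with one vector of neighbour indices per point):

    #include <queue>
    #include <vector>

    // dist[v] = fewest hops from start; pred[v] = every neighbour that reaches v
    // on some shortest path, so all shortest routes can be rebuilt from pred.
    void bfsAllShortest(const std::vector<std::vector<int>>& adj, int start,
                        std::vector<int>& dist, std::vector<std::vector<int>>& pred) {
        int n = static_cast<int>(adj.size());
        dist.assign(n, -1);
        pred.assign(n, {});
        std::queue<int> q;
        dist[start] = 0;
        q.push(start);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (dist[v] == -1) {              // first time we reach v
                    dist[v] = dist[u] + 1;
                    pred[v].push_back(u);
                    q.push(v);
                } else if (dist[v] == dist[u] + 1) {
                    pred[v].push_back(u);         // another equally short way to v
                }
            }
        }
    }

Each point is enqueued at most once, so this stays cheap even with 3200 points, and every shortest route can be enumerated afterwards by walking `pred` back from each reachable exit.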
And potentially also solutions with one more hop etc. I expect to have a solution with about 30-50 hops to go from start to exit. I already tried: 1) Randomly trying possibilities and just starting over when the count was bigger than a previous solution. I got a first solution with 3500 hops, then it got down to about 97 after some minutes, but looking at the solutions I saw problems like unnecessary loops and stuff, so I tried to optimize a bit (like not going back where it came from etc.). More optimizations are possible, but this random thing doesn't find all best solutions or takes too long. 2) Recursively running through all paths from the start (chess-problem-like) and breaking off the attempt when it reached a previous point. This was looping at about a length of 120 nodes, so it tries chains that are (probably) by far too long. If we calculate 4 possibilities and 120 nodes, we're reaching 1.7E72 possibilities, which is not possible to calculate through. This is called depth-first search (DFS) as I found out in the meantime. Maybe I should try breadth-first search by adding some queue? The connections between the points are actually moves you can make in the game, and the points are what the game looks like after you made the move. What would be the algorithm to use for this problem? I'm using C#.NET, but the language shouldn't matter."} {"_id": "254227", "title": "Should one reject over-scoped projects?", "text": "I spoke to my first potential client today and he told me about the requirements of his project - an Android app. He is a well-known designer / photographer in my country and now wants me to \"convert the website into an app, custom-tailored\". So the requirements, details stripped out, are as follows: * eCommerce * Aggregating all his content like videos, blogs, tweets, etc. into the app * Live streaming any of his studio demos * Augmented reality. So that people can see what his painting will look like on their wall before they buy it * Taxi Sharing Now, for a freelance project, it seems too over-scoped. I am not saying that I cannot do it. I can. But let me be realistic: * There is a steep learning curve when it comes to AR. * I am not a tester. I have never white-box tested my own apps. I always black-box test. * Since he is a renowned artist, something short of perfect _might_ harm his public image So, I asked him for 2 weeks' worth of time before I give him the final answer. Not knowing whom to consult for advice, I am posting the question here. Although interesting and personally challenging, I am split-minded about accepting a project like this. I will be **the only developer** for this. **Should one reject a project that seems to be over-scoped for one's own abilities?**"} {"_id": "125850", "title": "How to model a many to many from a DDD perspective in UML?", "text": "I have two entity objects, Site and Customer, where there is a many-to-many relationship. I have read that you try not to model this in DDD as it is in the data model and go for a unidirectional flow. If I wanted to show this in UML, would I show it as it is in the data model: `Site * ----->*Customer` but the direction arrow gives the flow? Or as follows: `Site ----->*Customer` But then this would imply that Customer can only go in one site."} {"_id": "235665", "title": "How to measure the quality of my use cases?", "text": "When I'm coding something I know that there are many ways to see if my code is good or not. First is testing: I can do unit tests or even test the software by myself and see whether it works or not.
After getting it working, I can analyze the coupling of the code and so on to refactor and make it better. In that sense, I'm confident when coding because I have a way to know if what I did works or not, and ways to rethink it if it doesn't work. When designing the software, though, I'm mainly having difficulty analyzing whether the use cases I write are good or not. I've read lots of theory on use cases, but the practice is a little hard because I always find myself questioning things like: \"should this be included? is this necessary to say here? isn't it missing something?\" and all sorts of things like that. So, how can I \"test\" my use cases? How do I know if they are well written or not? I know that in an OOAD iterative approach we don't try to get it right on the first iteration; however, it should at least contain the exact information to get me started in coding, and I don't know how to discover if it does contain that information."} {"_id": "218065", "title": "C# vector class - Interpolation design decision", "text": "Currently I'm working on a vector class in C# and now I'm coming to the point where I have to figure out how I want to implement the functions for interpolation between two vectors. At first I came up with implementing the functions directly into the vector class... public class Vector3D { public static Vector3D LinearInterpolate(Vector3D vector1, Vector3D vector2, double factor) { ... } public Vector3D LinearInterpolate(Vector3D other, double factor) { ... } } (I always offer both: a static method with two vectors as parameters and one non-static, with only one vector as parameter) ...but then I got the idea to use extension methods (defined in a separate class called \"Interpolation\" for example), since interpolation isn't really a thing only available for vectors. So this could be another solution: public class Vector3D { ... } public static class Interpolation { public static Vector3D LinearInterpolate(this Vector3D vector, Vector3D other, double factor) { ... } } So here's an example of how you'd use the different possibilities: { var vec1 = new Vector3D(5, 3, 1); var vec2 = new Vector3D(4, 2, 0); Vector3D vec3; vec3 = vec1.LinearInterpolate(vec2, 0.5); //1 vec3 = Vector3D.LinearInterpolate(vec1, vec2, 0.5); //2 //or with extension-methods vec3 = vec1.LinearInterpolate(vec2, 0.5); //3 (same as 1) vec3 = Interpolation.LinearInterpolate(vec1, vec2, 0.5); //4 } So I really don't know which design is better. Also I don't know if there's an ultimate rule for things like this or if it's just about what someone personally prefers. But I really would like to hear your opinions on what's better (and if possible why)."} {"_id": "229779", "title": "Does OpenMp have support for Real Time Multiprocessor Computing?", "text": "I am working on a real time multiprocessor scheduling algorithm. I found very few results related to it via Google. A few simulators are available but are not robust enough. OpenMP is an API I have previously used for small parallel application development. But I am not sure if it provides any support for real time parallel computing which involves time related parameters such as task period, deadline etc. Are those features available in OpenMP?
If not, are the underpinnings available so I could extend OpenMP instead?"} {"_id": "235663", "title": "How to inherit from two parent classes", "text": "I have many classes with many relationships. I drew a UML diagram of the relations between them: > ![enter image description here](http://i.stack.imgur.com/tw1H9.jpg) Is this relation correct, and how do I implement it?"} {"_id": "221075", "title": "How is major software protected?", "text": "I am a new software developer and I wish to sell my software. I recently realized that from C++ code we cannot stop the user from seeing parts of the code that are related to scripts or system commands. Would you make some comments on how software written in C++/Java (distributed via CD-ROMs or available via download) is protected from reverse engineering, from scanners for when the code is in memory, and from direct copying of parts (such as system commands)? What should a small software company that is just starting to produce software do to protect its product from the technological point of view (given that it would not be able to pay legal fees ...)?"} {"_id": "131926", "title": "Will giving new recruits a separate subproject from experienced developers help the newbies ramp up more quickly?", "text": "We have 7 developers in a team and need to double our development pace in a short period of time (around one month). I know there is a common sense rule that \"if you hire more developers, you only lose in productivity for the first few months\". The project is an e-commerce web service and has around 270K lines of code. My idea for now is to divide the project into two more or less independent sub-projects and let the new team work on the smaller of the two sub-projects, while the current team works on the main project. Namely, the new team will work on checkout functionality, which will eventually become an independent web service in order to decrease coupling. This way, the new team works on a project with only 100K lines of code. My question is: will this approach help newbie developers to adapt easily to the new project? What are other ways to extend the development team rapidly without waiting two months until newbies start producing more software than bugs?"} {"_id": "253502", "title": "Using a binary from a project that violates GPL", "text": "There is an open source project which uses GPLv2. It had numerous releases that complied with the GPL in the past, where source code was available for the given binaries. The newest source code is about 18 months old and is being called the last \"official release\". Since that open-sourced \"last version\", further development has been done by a single main developer. Features have been added and it is clearly visible that the code must have been changed. The main developer only holds part of the copyright for the whole package (it is not 100% his work; the project was started as a fork of another GPLed project). Let's call the ongoing development the TRUNK branch. The problem is that the latest development is done as closed source and **only binaries are being distributed for the TRUNK branch and no source code** has been revealed yet. While this most likely is a GPL violation and is a bad thing in itself, I don't mind this (even though I should) because I want to use the TRUNK binaries. My question is - **what is the legal status of such TRUNK binaries** that have no source available for them? Are these poisoned somehow?
I want to use the binaries in my project, which is open source itself, but the binaries would be used in a standalone working unit that is only used for data acquisition, so I most likely won't need to modify the unavailable TRUNK source code anyway."} {"_id": "140242", "title": "Purpose of the csx folder in Azure projects?", "text": "I'm fairly new to Azure but I noticed that Visual Studio auto-creates the following folders (among others) ... //bin //obj //csx <== ... Now the bin and obj folders are fairly standard. But I'm not clear about the purpose of the **csx** folder. Any ideas?"} {"_id": "81907", "title": "How would you feel if your code editor formatted your code for you as you typed, without tabs/spaces?", "text": "As with most things, I'm sure this concept has been tried before - I just haven't come across editors that use what I've termed 'Virtual Formatting'. The principle is that there's a floating left-margin that simulates the effect of the padding space/tab characters that are conventionally inserted by the developer or the editor itself to format the code. The editor continuously parses code (even when commented-out) as you type and calculates the required indent based on the context where each line-feed is found. I'm developing this idea working specifically with an XML editor, as XML has some peculiar problems with formatting characters and it tends to be heavily nested; however, I believe many of the principles still hold for conventional code. Have you experienced coding with such a tool, or do you have a view on whether it would help or hinder? Would it cause problems with version-control systems? (it detects and strips out all existing padding characters) Unless you've tried it, the behavior of such a tool is hard to describe; it looks conventional until you actually start editing. I've put up a screencast video showing a prototype in action which demonstrates editing XML, changing its hierarchy and doing drag/drop and copy and paste operations, and then how formatting is broken/fixed when invalid characters are typed. **Edit** All answers/comments have so far been negative - so to attempt to redress the balance, some benefits of virtual-formatting to think about: * No more debates on formatting standards, just place line-feeds wherever that conforms to your chosen/mandated convention * Where space is at a premium (in a book/blog/documentation) you can word-wrap but still get perfect indentation * Each code-block can have a 'mouse-handle' immediately adjacent to where it starts, not squeezed into the screen edge - click this to select the whole block or inner-block * Drag, drop and forget - becomes viable for the first time * No time spent reformatting other people's code * No incorrectly formatted code (in the sense that there is none - just the rendering) * Using Backspace instead of Ctrl+Backspace keeps your fingers on the keyboard guide-keys * Flexible rendering - adapt rendered formatting to your environment, anyone tried reading code on a mobile phone/small-screen tablet? * Consider that there are roughly 25% fewer editable characters (in a sample XSLT), doesn't that have efficiency benefits? **Edit - Conclusions so far** 1. Developers have established tools and working methods that efficiently overcome most of the disadvantages inherent in the use of padding characters used for indentation. 2. There is concern that removal of formatting characters will detrimentally affect some differencing tools. 3.
Developers want the flexibility to 'fine-tune' formatting in ways that automated rendering could not handle. 4. The removal of leading spaces/tabs means that a 'code-aware' tool capable of code-formatting is needed to review such code efficiently - a plain-text editor would show no formatting. 5. Those that feel there may be some hypothetical benefits (to virtual-indentation) have a view that the disadvantages outweigh those potential benefits - _conclusively_. **Edit - Verdict** The perception of the obstacles and few (if any) benefits is such that it would be unwise for me, as a sole developer, to pursue this space-free editing concept for general languages. For XML/XSLT, however (because of its special treatment of whitespace), there seems to be some agreement of potential at least. **Edit - Product Shipped** In spite of the generally negative sentiment found here, I went ahead and shipped the editor. I made a free version in the hope it would bring criticism in the form of more concrete issues, based on real experience. Somewhat frustratingly, there have been no complaints so far (in fact barely any feedback considering download volume). I'd like to think this was because users adjusted to the idea so well that they see this as a 'so what?' kind of feature - but there's no way of telling..."} {"_id": "81905", "title": "What's the rationale of not simply disclosing the license fees for a commercial library or tool?", "text": "Some companies selling software or libraries simply put their licensing model on their web page and are done with it. (Many also contain a disclaimer that there are volume discounts and special arrangements possible.) Other companies (most notably those placing themselves in the \"enterprise\" market) don't disclose their fees; some don't even disclose their licensing model! When you are interested in their product and contact sales, you then discover that their price calculation is simply a per site / per developer / per core / per whatever thing and they could just as well have put that info on their webpage. Can fellow programmers give me an insight as to why I have to exchange 3 emails and 2 phone calls with a sales representative to find out if a product even remotely fits into our development price tag? I have found that my co-workers and I are extremely reluctant to even evaluate a product that doesn't disclose its costs up front, so I cannot understand why anyone would do that for \"simple\" products - that is, for libraries and software that really don't have negotiated license fees. Note again: I'm talking about single-installation software or libraries that are licensed per-site or per-developer and not about stuff that requires complicated licensing agreements."} {"_id": "124457", "title": "How to move to Java enterprise development after Python and Ruby?", "text": "I used to develop in Django/Python and Rails/Ruby (and before that C/C++ and C#), and I'm now at a job where we do enterprise Java development (Spring, Hibernate, RESTEasy, Maven, etc.) for web applications and web services. Coming from the Convention over Configuration world, what's the best way to get up to speed doing enterprise Java web services development? I know Java (the language) well, and I've written GUIs in Swing and basic JSP before, but nothing of the kind I'm doing now.
Are there any recommended tutorials for getting up to speed on popular Java enterprise development frameworks?"} {"_id": "45168", "title": "Should I achieve validation by handling errors in classic ASP?", "text": "I came across this while modifying an old ASP application: ON ERROR RESUME NEXT ivalue = CDATE(ivalue) IF err.number > 0 THEN ivalue = CDATE(date) END IF err.clear This doesn't look like good practice. What is the purpose of doing it this way, and is there a better way to go about it? For example, I would've just used `ISDATE()`: is there something I'm missing?"} {"_id": "20628", "title": "Best books on Managing a Software Development Team?", "text": "The canonical books on software development are fairly well established. However, after reading through a dreadful book full of bad advice on managing programming teams this weekend, I am looking for recommendations for really good books that focus on the management side of programming (recruiting, performance measurement/management, motivation, best practices, organizational structure, etc.) and not as much on the construction of software itself. Any suggestions?"} {"_id": "246817", "title": "Implement Generic DataSet Builder with C#", "text": "I want to create a data access library that can build DataSets with relations which can easily be written to XML with `dataset.WriteXML()`. This is a get-to-know-C# endeavor that will hopefully gain me some productivity as well (lots of converting relational tables to XML from different data sources for document generation). So far the only difference I see between the data access technologies (SQL, OLEDB, ODBC) with regard to how I will use them for this is that they require a type-specified Connection and Adapter (SqlDataAdapter, OleDbDataAdapter, OdbcDataAdapter, etc). So in my mind I envision classes with two methods and a public data set that will be filled. public DataSet DataSet { get; set; } public void InsertTables(string ConnectionString, string[] TableNames, string[] Commands) public void AddRelations(string[] PrimaryTables, string[] PrimaryKeys, string[] ChildTables, string[] ForeignKeys, bool[] NestingRules) I already started with an OleDb implementation that works well, and I want to set up something similar for the other data access technologies. However, I want to be as efficient as possible with the code, so I am looking for advice on how to accomplish this. I was thinking that the Template Method design pattern could be a solid approach, but then I also thought that a single class that utilizes generics might work as well (I am new to C# and not that familiar with them). I am looking for a general example of how I could accomplish this with a good design pattern and/or generics. Here is what I have for the OleDb design. Any advice is greatly appreciated.
public class OleDbDataSetBuilder { private DataSet _DataSet; public DataSet DataSet { get { return _DataSet; } } public OleDbDataSetBuilder(string DataSetName) { this._DataSet = new DataSet(DataSetName); } public void InsertTables(string ConnectionString, string[] TableNames, string[] Commands) { if (TableNames.Length != Commands.Length) { throw new Exception(\"Error: Must provide a table name for each command.\"); } OleDbConnection cn = new OleDbConnection(ConnectionString); OleDbDataAdapter adapter = new OleDbDataAdapter(\"\", cn); adapter.SelectCommand = new OleDbCommand(\"\", cn); for (int i = 0; i < TableNames.Length; i++) { adapter.SelectCommand.CommandText = Commands[i]; adapter.Fill(_DataSet, TableNames[i]); } cn.Close(); } public void AddRelations(string[] PrimaryTables, string[] PrimaryKeys, string[] ChildTables, string[] ForeignKeys, bool[] NestingRules) { for (int i = 0; i < PrimaryTables.Length; i++) { DataColumn pk = _DataSet.Tables[PrimaryTables[i]].Columns[PrimaryKeys[i]]; DataColumn fk = _DataSet.Tables[ChildTables[i]].Columns[ForeignKeys[i]]; DataRelation relation = _DataSet.Relations.Add(pk, fk); relation.Nested = NestingRules[i]; } } }"} {"_id": "20624", "title": "GWT vs JSF vs ZK vs Restful+JS", "text": "I'm planning to develop a web-based ERP, which should be full-AJAX with a desktop-like UI. It will be a data-entry & data-report application. For developing it I'm considering all technologies. GWT: I saw that with GWT Designer you could create cool UIs, but databinding seems to be too complex. JSF: NetBeans no longer supports the visual web editor. ZK: supports databinding in a relatively easy way, and has got an Eclipse-based visual editor. Some people talk about REST + JavaScript as a winning choice. I'd like to have your opinion about what could be the right choice. Thank you very much in advance!"} {"_id": "183865", "title": "Is it a good idea to provide different function signatures that do the same thing?", "text": "Here is a C++ class that gets constructed with three values. class Foo{ //Constructor Foo(std::string, int, char); private: std::string foo; char bar; int baz; }; * * * All of the parameter types are different. I could overload the constructor so that order doesn't matter. class Foo{ //Constructors Foo(std::string, char, int); Foo(std::string, int, char); Foo(char, int, std::string); Foo(char, std::string, int); Foo(int, std::string, char); Foo(int, char, std::string); private: std::string foo; char bar; int baz; }; But is that a good idea? I started doing it because I knew what things a class/function needed; I didn't always remember what order it took them in. * * * I've been assuming that the compiler optimizes this as if I called the same constructor. //compiler will implement this with the same code? //maybe not.. I could call a function to get a parameter, //and that function could change the state of the program, before calling //a function to get another parameter and the compiler would have to //implement both Foo foo1(\"hello\",1,'a'); Foo foo2('z',0,\"world\"); What are your opinions on overloading a function so that the order doesn't matter? * * * Also, if I'm writing some utility functions, is it a good idea to provide different function names that do the same thing? eg. void Do_Foo(); void DoFoo(); void do_foo(); //etc.. * * * I don't often see these two but similar conventions.
Should I break or embrace the habit?"} {"_id": "246812", "title": "What is the proper way to use an IDE to work on remote code?", "text": "One of the code bases I work on has a development environment that is running on a dev server and cannot be copied over to my PC to locally test and develop. I am wondering what is the proper way to work with this code base? It is object oriented and I have found it to be very tedious and time consuming to do my work using Vim when working on such a codebase. I have another project which I have running locally and I like to use Eclipse and sometimes Sublime Text to make changes to the codebase. Is there a way to utilize such tools that I use locally with a remote project, and if so what is the proper way to set this up? Should I remotely mount my system? We do have git/hg set up with these environments; however, I like to test and debug things in very small changes, and this would produce a lot of unneeded commits. It is also time-consuming. Ideally it would be great to just be able to work like it is a local environment, where when I save and refresh a page the change is there. Is my best solution mounting?"} {"_id": "186748", "title": "Are there any tools for remote coding interview?", "text": "Firstly, I'm not exactly sure if this question is a better fit over here, or on workplace.SE. So forgive me if it is in the wrong place. We are interviewing some candidates for a development position, and currently they are not in our city. We would like to give them simple coding tests to see how they will perform on the typical issues that we face in our daily work. Are there any specific tools geared towards this? Right now we are using Skype and I feel this tends to decrease the performance of a lot of developers since they tend to be shy, and often can't work when someone is directly staring at them. The problems with sending them the test questions by email are as follows: 1. It is not possible to know what their thought process is, since we just see the end result. There is no discussion, or clarification of the question, which is an important step. 2. There is no guarantee that the problems were solved by the candidates themselves. They could send it to a smarter friend, and we wouldn't be able to know. How are these problems usually solved?"} {"_id": "201821", "title": "Guid collisions", "text": "I have a product that lets game developers create games. Inside of their games they are required to give all the elements of their games GUIDs. I've told them that they need to generate their GUIDs using a specific mechanism, but they seem to think that it won't cause any issues. What I'm concerned about is GUID collisions between games. Some developers just sequentially increment the last digit of their GUID, others zero out the last block and increment other parts, while others just pull together a random armada of numbers and letters for each one. My argument is that doing this dramatically increases the risk of collisions, both with games that follow the standard and with those that don't. Just as a heads up, there are potentially hundreds of thousands of GUIDs involved here. Am I right in thinking this, or is it really improbable that this will happen?"} {"_id": "201829", "title": "Just in time prize winning algorithm", "text": "I'm building a contest where you can win prizes by opening boxes. Whenever a box is opened, I just send a request to the server to test whether the user won something.
Since this contest is not a \"Register and we'll draw a winner after some time\" type of contest, I'm a bit puzzled about what algorithm I should use to calculate the chances of winning. I've devised in an Excel spreadsheet what I call a progressive winning algorithm in the form of: ([Number of prizes left] / [Prizes I should have left at this time of contest]) / 4 Which gives me a rough percentage chance of winning an item off the pool of remaining items of around 25%. If there are many winners within a close timespan, the algorithm should automatically lower the chances of winning, while going in the opposite direction if the scenario is different. What I'm afraid of is that not all my items will be drawn by the end of the contest because it's a linear calculation of expected gifts left, or that, by chance, most of my prizes will be won after only half of the contest (and by chance I mean that lots of players get truly great random odds and keep winning). So my question is, how should I tackle this kind of scenario? What are the already-known methods for this? And please keep it simple; I'm no mathematician nor statistician. * * * **Variables to take into account** * 100 prizes to give out total * Contest lasts five whole days * Unknown number of participants * Unknown participation rule but will probably be once per day based on a unique credential such as an e-mail address. * It's an online contest where a user opens a virtual box, nothing physical"} {"_id": "80051", "title": "Practice programming guidance", "text": "I am a recent graduate so I understand a lot of the theory, object-oriented languages, and various data structures. I am competent in languages such as C++, Java, and PHP, just to name the ones I'm strong at. Since graduation I bit the bullet and accepted a job outside of my field for now. So obviously I don't want to lose my edge and want to stay current. Most of my programming has been for academic reasons and a 3-month web programming (back-end) internship. I really want to program something to use the skills I've learned. I always hear that programming a game is great practice. However, one problem with the Internet is its huge volume of information, and I find myself reading one site then jumping to another & on & on. Where would you suggest is a good foothold to begin? It's always harder to start from scratch and that is where I am. I figure I'll start with some 2D game and go from there, but then comes the question of which SDK to use, if any. I'm currently using Linux but I wouldn't mind developing in Windows either. I'm always thinking maybe I should try to develop a simple app for an Android phone as well. As you can tell I'm all over the place and just need a little direction. Any advice/website is greatly appreciated!"} {"_id": "125587", "title": "What are the differences between an edge case, a corner case, a base case and a boundary case?", "text": "I'm not a native English speaker. In my native language I'm aware of some terms used to refer to the condition checked to stop a recursion, and to the condition checked for extreme, unlikely or super-simple cases. In English, I've encountered the terms \"edge case\", \"corner case\", \"boundary case\" and \"base case\", but I can't quite figure out the differences and which is used to refer to what; I'd love to get some summary of the differences between them. In particular, I would be very happy if someone could provide annotations for the lines in the following code sample: int transmogrify(int n) { 1. assert(n <= 1000000); 2.
if (n < 0) return -1; 3. if (n == 1000000) return PRE_CALC; 4. if (n == 0) return n+1; // For stopping the recursion 5. if (n == 1251) return 3077; return transmogrify(n-1); } I _think_ it's: 1. Sanity check 2. Input check 3. Boundary case? Edge case? Corner case? 4. Base case? Boundary case? 5. Corner case? Edge case?"} {"_id": "125589", "title": "Using Scrum/agile with multiple customers", "text": "I am a big believer in agile development. I just changed jobs, and I am now working for a company that coordinates big development projects for (rather large) groups of customer organizations. My job is to find the right contractors for the projects, make sure that the customer organizations are as happy as possible, and control the process. There is nothing in the world that would make me happier than if I could introduce Scrum or a similar agile method into this situation, making my company's main role the product owner. It would make sense on so many levels. There are some issues though: Since Scrum is a rapid-response method, I wonder if Scrum is the right method for a place like this, where not one customer but a big group of customers have to be involved in prioritizing the backlog from sprint to sprint. It takes place in a quite political environment. What pitfalls and possibilities do you see in a situation like this? Can this situation simply be handled by longer sprints? There is a general understanding of and empathy for the agile agenda in the company, but the issue I describe here seems to be a major obstacle. I would really appreciate your input. I need good ammo for my agile preaching :o)"} {"_id": "254597", "title": "Making internal search engine return results that adhere to local permissions on an internal web application", "text": "I have an ElasticSearch search engine and an internal web application. Prior to applying the security model, search results were very fast. The security model restricts some users from accessing certain pages. We have 7 \"types\" of pages. Our process is currently this: 1. Search engine queries all records 2. For each type of result, I get all recordIds of that type from the database that the user is allowed to access. This ensures I only call the database a max of 7 times (once per type) 3. I do an intersect between the records the user can access and the records search is returning The above approach isn't fast enough. I have an autosuggest search and each query takes up to 2 seconds now. What is a typical way to make a search engine return results that adhere to security? Another developer has recommended I store the security permissions directly in ElasticSearch itself (essentially mirroring what our SQL Server has). Edit: Each record has a page. We can assume 200-500 users and 25000+ records."} {"_id": "205305", "title": "Using two versions of a class in the same code", "text": "At my job, in our core project, we have a Validation class that has been evolving over the years. And we have an old project with a User class that uses an old version of the Validation class. And we have to update some functionality that requires new validation methods. All those methods are implemented in our latest Validation class. But the new Validation class isn't backwards compatible, so we can't just put the new one in place. I thought that a good solution would be to upgrade all the code that references the Validation class, or to make it backwards compatible, but obviously both would take a lot of time.
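For illustration, the backwards-compatible route would have meant keeping the old method names as a thin shim over the new class - a rough sketch (in C++ rather than our PHP, and all class and method names here are invented):

    #include <string>

    class NewValidation {              // the incompatible new class
    public:
        bool isValid(const std::string& value) const { return !value.empty(); }
    };

    class Validation {                 // keeps the legacy interface alive
    public:
        // legacy method name, forwarded to the new implementation
        bool validate(const std::string& value) const { return impl.isValid(value); }
    private:
        NewValidation impl;            // old callers never see this
    };
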
Finally, it was decided that adding the latest version and calling it Validation2 works just fine; some methods use Validation and others Validation2. I think that will bring a lot of problems later, because you don't know when to use one or the other as the names stop being meaningful. What could be a better option to deal with this? Also, the Validation class was the best example, but we have the same problem with a lot of bigger and more complicated classes, and now we have a lot of \"duplicated\" classes. The language is PHP, but this could happen in other languages too."} {"_id": "13084", "title": "Degree after BSc in Computer Science: Marketing, MBA, Techno-MBA, HCI?", "text": "Since a year I am a graduated Computer Engineer, and am now working in named field since 2 years. However, I'd like to get a Masters in a \"softer\" field and was thinking over the following choices: * **MBA** : Master of Business Administration, to bridge the business world with the world of computer science. Would give me an interesting job, travel potential, high salary etc. * **Techno-MBA** : Sort of an MBA, but aimed at people who already have a degree in Computer Science. * **HCI** : Human Computer Interaction degree, something I have been **very** interested in, but feel like it doesn't have the growth potential of MBA/Techno-MBA * **Marketing** : My most recent brainchild is a degree in Marketing, is this at all a good idea given my background? Growth potential? Main priority: Gaining a skill, and gaining a job allowing me to use my social skills. Which path has the best \"career potential\"? (Controversial I know) Any thoughts? Any feedback welcome. Edit: DOH. Realized that the first sentence makes no sense. Was running my own semi-successful business during last year of school."} {"_id": "225217", "title": "When would you choose *not* to update a third-party library to a newer version?", "text": "Using third-party libraries for productivity gains in software development is common. Unfortunately, along with the library's functionality we also import its bugs. Some of them get fixed in subsequent releases. So, to upgrade or not to upgrade, that is the question. I am interested in learning from experiences where upgrading to a newer version of the library was desirable, but after a cost/benefit analysis the conclusion was that upgrading was not a good solution \"in the grand scheme of things\". I am interested in finding out what _forces_ influence the decision towards _not_ upgrading."} {"_id": "42225", "title": "Python vs. Java for embedded wireless module", "text": "We are developing a product at work which interfaces with basic I/O and sends data to a webserver over a GPRS connection. What I need to know before we commit to a product is which language is more suited for this task: Java or Python? (or any language, to be honest) As I said, it will run on a wireless module and open serial connections, read values, send data through GPRS connections to a webserver..."} {"_id": "134008", "title": "Should PHP view files be called something other than '.php'?", "text": "By default, any file that PHP touches is usually suffixed with `.php`. It's universally understood by Apache / Nginx as the default for PHP files and most setups expect PHP files to end in this extension. In short, `.php` is the standard for everything PHP. However, I'm wondering if perhaps view files should have a different extension to help differentiate them from other PHP files.
First, when it comes to views I have found that almost all MVC frameworks use a matching view file named after the controller or method. In addition, you generally also have a matching model named the same thing. This causes a problem with most IDEs and editors. For example, you might have a \"user\" controller, a \"user\" view, and a \"user\" model. This results in having three files open called \"user.php\", which makes it a bother when you are moving around and clicking on the wrong tabs. Second, separating views as a fundamentally different kind of PHP file (the presentation type) is another argument for changing the extension of the view files to something other than `.php`. Something that immediately tells your brain what type of content belongs in it. Third, some applications expose parts (or all) of the PHP files in the webroot and its directories. Rather than adding something like ` The refactorer found this code ugly and introduced a decorator: from functools import wraps ... def requires_auth(wrapped_func): \"\"\" (a little doc telling that this is a decorator and how to use it) \"\"\" @wraps(wrapped_func) def decorator(*args): if check_auth(): return wrapped_func(*args) else: raise web.Unauthorized() return decorator ... class FooHandler: @requires_auth def GET(self, a, b, c): handle_foo_get(a, b, c) @requires_auth def POST(self, x, y, z): handle_spam_eggs() ... The reviewer argues that this change actually **decreases** maintainability, because most of the team doesn't know decorator syntax. So the next person visiting the refactored code could be puzzled by the construct - but not with the previous version. There's also a convincing argument that **most of the codebase isn't in Python** (the language is there only for that little web service), and hence team members aren't required to have even shallow knowledge of Python. However, to work with code, one's going to need to know its language, right? And if you have the temptation to say that Python decorators aren't _that_ advanced syntax - please remember that this is only an example. Every language has its dark corners. (But to learn those corners means to become better.) So the question is... Can you read unknown syntax? Is the elegance worth the surprise? At least, what are the tradeoffs and/or guidelines you can name? I do really need some opinions from the community of programming professionals."} {"_id": "203464", "title": "How do I initialize a Scala map with more than 4 initial elements in Java?", "text": "For 4 or fewer elements, something like this works (or at least compiles): import scala.collection.immutable.Map; Map<String, String> HAI_MAP = new Map4<>(\"Hello\", \"World\", \"Happy\", \"Birthday\", \"Merry\", \"XMas\", \"Bye\", \"For Now\"); For a 5th element I _could_ do this: Map<String, String> b = HAI_MAP.$plus(new Tuple2<>(\"Later\", \"Aligator\")); But I want to know how to initialize an immutable map with 5 or more elements and I'm flailing in Type-hell. # Partial Solution I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files.
Here's the Scala: object JavaMapTest { def main(args: Array[String]) = { val HAI_MAP = Map((\"Hello\", \"World\"), (\"Happy\", \"Birthday\"), (\"Merry\", \"XMas\"), (\"Bye\", \"For Now\"), (\"Later\", \"Aligator\")) println(\"My map is: \" + HAI_MAP) } } But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java): scala.collection.immutable.Map HAI_MAP = (scala.collection.immutable.Map) scala.Predef..MODULE$.Map().apply(scala.Predef..MODULE$.wrapRefArray( (Object[])new Tuple2[] { new Tuple2(\"Hello\", \"World\"), new Tuple2(\"Happy\", \"Birthday\"), new Tuple2(\"Merry\", \"XMas\"), new Tuple2(\"Bye\", \"For Now\"), new Tuple2(\"Later\", \"Aligator\") })); I'm really baffled by the two periods in this: scala.Predef..MODULE$ I asked about it on `#java` on Freenode and they said the `..` looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid: Tuple2[] x = new Tuple2[] { new Tuple2(\"Hello\", \"World\"), new Tuple2(\"Happy\", \"Birthday\"), new Tuple2(\"Merry\", \"XMas\"), new Tuple2(\"Bye\", \"For Now\"), new Tuple2(\"Later\", \"Aligator\") }; scala.collection.mutable.WrappedArray y = scala.Predef.wrapRefArray(x); There is even a `WrappedArray.toMap()` method but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java."} {"_id": "203461", "title": "unit testing variable state explicit tests in dynamically typed languages", "text": "I have heard that a desirable quality of unit tests is that they test for each scenario independently. I realised whilst writing tests today that when you compare a variable with another value in a statement like: assertEquals(\"foo\", otherObject.stringFoo); You are really testing three things: 1. The variable you are testing exists and is within scope. 2. The variable you are testing is the expected type. 3. The variable you are testing's value is what you expect it to be. Which to me raises the question of whether you should test for each of these explicitly, so that a test failure would occur on the specific line that tests for that problem: assertTrue(stringFoo); assertTrue(stringFoo.typeOf() == \"String\"); assertEquals(\"foo\", otherObject.stringFoo); For example, if the variable was an integer instead of a string the test case failure would be on line 2, which would give you more feedback on what went wrong. Should you test for this kind of thing explicitly or am I overthinking this?"} {"_id": "186847", "title": "A design pattern for data binding an object (with subclasses) to asp.net user control", "text": "I have an abstract class called Address and I am deriving three classes: HomeAddress, WorkAddress, NextOfKinAddress. My idea is to bind this to a user control, and based on the type of Address it should bind properly to the ASP.NET user control. The idea is that the user control doesn't know which address it is going to present, and based on the type it will render accordingly. How can I design such a setup, based on the fact that the user control can take any type of address and bind accordingly? _I know of one method like :-_ Declare class objects for all the three types (Home,Work,NextOfKin).
Declare an enum to hold these types and, based on the type of this enum passed to the user control, instantiate the appropriate object based on setter injection. As a part of my generic design, I just created a class structure like this :- ![Sample UML](http://i.stack.imgur.com/DyML8.png) I know I am missing a lot of pieces in the design. Can anybody give me an idea of how to approach this in a proper way?"} {"_id": "7340", "title": "Which operating system book do you recommend?", "text": "I want to read a good book about operating systems. More specifically, I want to read about how common problems - such as managing virtual memory, handling traps, doing context switches, managing processes and threads, etc. - are usually solved. To wit, I'm not looking for a book about how to program _towards_ an operating system; I'm looking for a book about how to _write_ an operating system. I hope this makes the question clear. Please only write **one book per answer**, for voting purposes."} {"_id": "10021", "title": "What is a \"powerful\" language?", "text": "I have often seen people arguing that their favorite language is more \"powerful\" than others. When it comes to describing a programming language, I can understand what an object oriented language is or what a dynamic language is, but I still can't figure out what exactly a \"powerful\" language is. What are your thoughts?"} {"_id": "247140", "title": "Is it a good idea for JS objects to draw themselves when the page loads?", "text": "So normally I would only use JS to modify the DOM after the user interacts with something or some event goes off. This seems right for some reason. But I'm developing a widget-based app where widgets need the capability to draw (print) themselves, and it feels redundant to send the widget's HTML via the initial HTTP request when I can just print them with the JS method when the page loads. Should I be trying to avoid printing HTML elements with JS on page load as a rule, or is it just an optimization thing?"} {"_id": "176706", "title": "Graduated transition from Green - Yellow - Red", "text": "I am having an algorithm mental block in designing a way to transition from Green to Red, as smoothly as possible, with a potentially unknown length of time to transition. For testing purposes, I will be using 300 as my model timespan, but the algorithm design needs to be flexible enough to account for larger or even smaller timespans. Figured using RGB would probably be the best to transition with, but I'm open to other color creation types, assuming it's native to .NET (VB/C#). Currently I have: t = 300 x = t/2 z = 0 low = Green (0, 255, 0) mid = Yellow (255, 255, 0) high = Red (255, 0, 0) Lastly, sort of an optional piece, is to account for the possibility of the `low`, `mid`, and `high` colors being flexible as well. I assume that there would need to be a check to make sure that someone isn't putting in `low = (255,0,0)`, `mid=(254,0,0)`, and `high=(253,0,0)`. Outside of this anomaly, which I will handle myself based on the best approach to evaluating a color.
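To make the setup concrete, here is a rough sketch of the two-segment blend I have in mind (written as plain C++ rather than VB/C#, and all names are mine, purely illustrative):

    struct Rgb { int r, g, b; };

    // Linear blend of one channel; f runs from 0.0 to 1.0.
    int blend(int a, int b, double f) {
        return static_cast<int>(a + (b - a) * f + 0.5);
    }

    // t = total timespan (e.g. 300), elapsed = time passed so far.
    // First half fades low -> mid, second half fades mid -> high.
    Rgb colorAt(double elapsed, double t, Rgb low, Rgb mid, Rgb high) {
        double half = t / 2.0;
        bool first = elapsed < half;
        Rgb from = first ? low : mid;
        Rgb to   = first ? mid : high;
        double f = first ? elapsed / half : (elapsed - half) / half;
        if (f < 0.0) f = 0.0;
        if (f > 1.0) f = 1.0;
        return { blend(from.r, to.r, f), blend(from.g, to.g, f), blend(from.b, to.b, f) };
    }

With low = (0,255,0), mid = (255,255,0) and high = (255,0,0), this passes through pure yellow exactly at the halfway point, whatever the total timespan.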
I found the following question in the published paper The Camel Has Two Humps. I was not a CS major going to college (I majored in MIS/Management), but I have a job where I find myself coding quite often. > For a non-trivial programming problem, which one of the following is an > appropriate language for expressing the initial stages of algorithm > refinement? > > (a) A high-level programming language. > > (b) English. > > (c) Byte code. > > (d) The native machine code for the processor on which the program will run. > > (e) Structured English (pseudocode). What I do know is that you _usually_ want to start your design implementation by writing down pseuducode and then moving/writing in the desired technology (because we all do that, right?) But I never thought about it in terms of **refinement**. I mean, if you were the original designer, then you might have access to the original pseudocode. But realisticly, when I have to maintain/refactor/ **refine** somebody elses code, I just keep trucking with the language it currently resides in. **Anybody have a definitive answer to this?** As a side note, I did a quick scan of the paper as I havn't read every single detail. It presents various score statistics, can't find where the answers are with the paper."} {"_id": "163689", "title": "Should interfaces extend (and in doing so inherit methods of) other interfaces", "text": "Although this is a general question it is also specific to a problem I am currently experiencing. I currently have an interface specified in my solution called public interface IContextProvider { IDataContext { get; set; } IAreaContext { get; set; } } This interface is often used throughout the program and hence I have easy access to the objects I need. However at a fairly low level of a part of my program I need access to another class that will use _IAreaContext_ and perform some operations off it. So I have created another factory interface to do this creation called: public interface IEventContextFactory { IEventContext CreateEventContext(int eventId); } I have a class that implements the _IContextProvider_ and is injected using NinJect. The problem I have is that the area where I need to use this _IEventContextFactory_ has access to the _IContextProvider_ only and itself uses another class which will need this new interface. I don't want to have to **instantiate** this implementation of _IEventContextFactory_ at the low level and would rather work with the _IEventContextFactory_ interface throughout. However I also don't want to have to inject another parameter through the constructors just to have it passed through to the class that needs it i.e. // example of problem public class MyClass { public MyClass(IContextProvider context, IEventContextFactory event) { _context = context; _event = event; } public void DoSomething() { // the only place _event is used in the class is to pass it through var myClass = new MyChildClass(_event); myClass.PerformCalculation(); } } So my main question is, would this be acceptable or is it even common or good practice to do something like this (interface extend another an interface): public interface IContextProvider : IEventContextFactory or should I consider better alternatives to achieving what I need. If I have not provided enough information to give suggestions let me know and I can provide more."} {"_id": "247146", "title": "Returning results of method on batch list?", "text": "The title is a bit vague so I'll try to elaborate. 
I have a function makeFoo(int bar) -> returns Foo or throws Exception. I also have a batch version of this makeFoos(int[] bars) -> returns Foo[] which basically loops through bars and runs makeFoo() on them. Issue is, if while running makeFoos(), makeFoo() throws an Exception, what do I do? I don't want to break out of makeFoos() because I want to continue processing the rest of the bars. But also, I want to retain the Exception that was thrown. My initial solution is, instead of returning Foo[], I return Result[]. Where Result is a wrapper class: class Result: T data; Exception e; Is there a better way that I can approach this? * * * Edit: Apologies if this is considered a duplicate post, but I posted a more general version of this question here: Result Object vs. Exceptions"} {"_id": "176700", "title": "How do .so files avoid problems associated with passing header-only templates like MS dll files have?", "text": "Based on the discussion around this question. I'd like to know how .so files/the ELF format/the gcc toolchain avoid problems passing classes defined purely in header files (like the std library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if its defined in two .so files. So if two .so files have two definitions, perhaps with different compiler options/etc, the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively. This makes the solution I describe above untenable for dlls. Does Gcc never inline header-only templates like the std library as MSVC does? If so wouldn't that make the functionality of ELF described above ineffective in these cases?"} {"_id": "247148", "title": "Program Design clearness vs convenience", "text": "I've recently disputed with my colleague about the following situation: I've developed a framework that processes the resources (in java, although its not really important). It goes (simplified, pseudo-code) like this: class Algorithm { Processor processor = ...; execute() { forEach(Resource r : readResourceNamesFromFileSystem()) { DomainObject do = convertResourceToDomainObject(r); processor.process(do); } } } interface Processor { process(DomainObject do); } Class **Algorithm** is an entry point to my framework. Resource is a representation of some content in the filesystem (it has methods for reading the data from file and the file name, which is important) Class **DomainObject** (declaration is omitted) is a java representation of the content of the file depicted by Resource. DomainObject only has data and doesn't have a filename inside. Class **Processor** represents the business logic layer, the DomainObject can be processed, stored in Database and so forth. The content data is supplied by others, we just know that these files contain the data that can be converted into DomainObject(s) which is a real object we work with in the framework. We store one DomainObject in one file. Now we argue about the file name. Colleague of mine wants the file name to be propagated into the Processor and potentially to all the layers because its easy to print the logging message/throw an exception that contains a file name if something goes wrong during the processing, or maybe to write the message like \"domain object stored in file ( **here__comes__filename** ) has been processed successfully\". 
I see the point of my colleague (convenience) but I'm sure it breaks the design and encapsulation, because the Processor module shouldn't really be aware of the \"origin\" of the DomainObject (today we store them in files, one DomainObject per file; tomorrow we'll maybe use something else, something that doesn't necessarily have a filename). I claim that we had better catch the exception in the Algorithm.execute method and rethrow the exception with the filename / log the filename. The log file will be less readable but the encapsulation won't be broken. In my understanding it's design vs. clearness. Could you please share your opinion: who is right, who is wrong? Should I sacrifice the design for the clearness of the logs? BTW I've promised my colleague to ask this question so we'll read the possible answers together :) Thanks a lot in advance and have a nice day"} {"_id": "163681", "title": "Shared FIFO file descriptor", "text": "Is it OK to open a FIFO with one FD and share it with multiple threads? Or is it better to have multiple FDs opened for the same FIFO and share these FDs with the threads? BTW, I'll be doing both write and read. The environment is Linux, C, pthreads."} {"_id": "176708", "title": "Using Visual Studio as a Task-Focused IDE", "text": "Are there patterns or libraries or any official Microsoft SDK for using Visual Studio as a specifically task-focused UI? For example, both Revolution R (the IDE for the R language) and SQL 2012 (and I think SQL 2008 and possibly 2005) use Visual Studio as the underlying IDE framework. Is there an officially supported SDK and/or examples/samples for doing this type of thing? I am building a language parser for an existing language - whose only available IDE is INSANELY expensive - using Irony (and eventually I will generate a Language Service as well). Any direct or indirect suggestions/answers are appreciated."} {"_id": "236319", "title": "Relative encapsulation design", "text": "Let's say I am doing a 2D application with the following design: There is the Level object that manages the world, and there are world objects which are entities inside the Level object. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes `get` properties. The `set` properties are private (or protected) and are only available to inherited classes. But of course, Level is responsible for these world objects, and must somehow be able to manipulate at least some of their private setters. But as of now, Level has no access, meaning world objects must change their private setters to public (violating encapsulation). How to tackle this problem? Should I just make everything public? Currently what I'm doing is having an inner class inside the game object that does the `set` work. So when Level needs to update an object's location it goes something like this: void ChangeObject(GameObject targetObject, int newX, int newY){ // targetObject.SetX and targetObject.SetY cannot be set directly var setter = new GameObject.Setter(targetObject); setter.SetX(newX); setter.SetY(newY); } This code feels like overkill, but it doesn't feel right to have everything public so that anything can change an object's location, for example."} {"_id": "255176", "title": "What if any languages treat undisposed resources as an error?", "text": "I've seen lots of code like the following example. It's in Python, but the same mistake is made in all languages with managed resources: f = open('foo.txt', 'rb') for line in f: print line That's it.
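For comparison, a sketch of the same mistake in C# - the reader is IDisposable, but nothing ever disposes it:
    var f = new System.IO.StreamReader(\"foo.txt\");
    System.Console.Write(f.ReadToEnd());   // f is never closed/disposed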
The error is that `close(f)` wasn't called, so the file handle is kept open until some indeterministic time in the future when the runtime's memory management decides to reclaim memory. Python has `with` and C# has `using` to help make resource cleanup easier, but let's disregard those features for a minute. Given that: 1. It's a programming error not to explicitly close open files. 2. The runtime can detect that an open file has not been closed. Why then doesn't the runtime throw an error instead of being \"helpful\" and closing the file for the programmer? That would be the fail-fast and fail-early strategy. Is there a technical reason why it can't? Are there any languages that do it? Has the idea been considered before (I've googled but not found anything)? Here is how you **almost** implement the feature in Python: class mustclose: def __init__(self, f): self.f = f def __del__(self): if not self.f.closed: raise Exception(\"You forgot to close() me!\") k = mustclose(open('foo.txt', 'wb')) #k.f.close() Two problems: it requires wrappers, and Python doesn't like throwing exceptions from destructors. (Read this http://stackoverflow.com/questions/2807241/what-does-the- expression-fail-early-mean-and-when-would-you-want-to-do-so SO question for background on why failing fast is often desirable)"} {"_id": "133334", "title": "How do I preview my mobile web application on a Windows 7 PC?", "text": "I am developing a mobile web application that may be browsed on an iPhone. I would like to test this application, but I lack a mobile web device. How can I test this mobile application on a Windows 7 PC?"} {"_id": "85231", "title": "What should I recommend a small company looking for C# developers", "text": "Here is the issue. I am a senior developer, and one of the start-ups whose system I designed (management system/database/web) a long time ago has grown and needs software updates. I left their system to another developer a long time ago, but apparently he has left the job, and so they are asking me if I can suggest where to find a new one. The problem is that the company has no clue that IT is not cheap. They expect multiple features to be added for $40, so that's an issue. That is actually one of the reasons why I left the project when I did: lots of expectations, little pay. Also, I know those people outside work, so I decided to avoid straining the non-work relationships and left the project gracefully. Today they asked me for advice, and I told them that the feature list they want is probably going to cost something if they get a senior developer for the job. So I guess their best bet is to find someone who loves coding and has just finished school. That would give someone a chance to code for money, which is good for a student, and at the same time allow the student to get some hands-on experience. Then again, the system is not exactly a 20-line console program; there is an MSSQL database, an ASP.NET web page and a content management system with all the AJAX stuff and some other things. So a student straight out of school could have some problems with that. But, I thought about the issue some more, and I think a junior developer is a tricky deal: without mentoring, he can either screw up royally or just do what's asked. Also, it seems no one is coming to interviews at all, which is weird, or maybe not. What should I suggest to them?"} {"_id": "70430", "title": "Maintaining Method Signatures across languages?", "text": "I'm finishing up a port I did of a portion of a Java library.
The library calculates sunrise and sunset for a given latitude and longitude. The original Java library also calculated various times based on sunrise and sunset, which I plan to get to later. That said, I'm working with Objective-C and the Cocoa Touch framework. The original methods, being Java, all began with `get` and took their arguments in parentheses. However, Objective-C has a different method structure, so the methods end up looking different. For example, `public double getUTCSunset(AstronomicalCalendar astronomicalCalendar, double zenith, boolean adjustForElevation)` becomes this: `- (double) getUTCSunsetForDate:(NSDate*)date andZenith:(double)zenith adjustForElevation:(BOOL)adjustForElevation`. The problem here is twofold. First of all, according to the Apple documentation on coding conventions, method names (and \"getters\") should not include `get` in them. However, a second issue arises if we drop the `get`: we end up with a method signature that begins with a capital letter, which is also not supposed to happen. How would you rewrite this method, while keeping the method signatures similar enough to be recognizable as counterparts of each other?"} {"_id": "70437", "title": "Preparing For a New Programming Project", "text": "I consider myself to be a novice programmer -- a noob if you like. As such I'm still not sure how to get started on a project where I will be doing stuff that I've never done before. For instance, I would like to write a program that can download videos from YouTube and convert them to a format specified by the user. I've never done anything like this before and I really have no idea where to start. Rather, I have no idea what I should search for. If I search for \"YouTube Downloader\" then I get hit with a bunch of useless links to existing YouTube Downloader sites, most of which don't work. What I want to know is how to get started on a project that I know nothing about. How do I find out what is required for this project? How do I find out what languages are best suited for this? How can I find out if there are any APIs that would be particularly useful? Also, what other questions should I be asking myself when preparing to take on a new project?"} {"_id": "70438", "title": "Optimizing programs by identifying and taking evaluation (calculation) shortcuts", "text": "Recently I have been working on a project that involves a lot of simple numerical calculations being applied to large arrays. The numerical values are very simple but there are many different types, such as positive integers, negative integers, small floating-point numbers, and so on. SIMD programming techniques (such as SSE) are widely used. I noticed that a lot of the time, processing speed can be improved if I can find shortcuts in the calculations. For example, if the inputs are integers and the outputs are also integers, I should try my best to avoid converting to floating-point numbers if possible. For simple chained calculations this is easy. However, when we increase the number of operations that can be combined, we find that we have to manually code those SIMD instructions for each case. I have studied the approaches taken by various vectorization libraries, and observed that while their approaches are very elegant in code using metaprogramming techniques, most compilers aren't able to generate optimized machine instructions from the results.
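As a toy illustration of the kind of shortcut I mean (a C# sketch with invented names; real code would use SIMD intrinsics): keep a chain in integer arithmetic when inputs and outputs are integers, instead of unconditionally promoting to double:
    // integer fast path: no int -> double -> int round trip
    static void AddScaled(int[] a, int[] b, int scale, int[] result)
    {
        for (int i = 0; i < a.Length; i++)
            result[i] = a[i] + b[i] * scale;
    }
    // general floating-point path, used only when the chain really needs it
    static void AddScaled(double[] a, double[] b, double scale, double[] result)
    {
        for (int i = 0; i < a.Length; i++)
            result[i] = a[i] + b[i] * scale;
    }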
If I represent the calculations with command objects, will I have a better chance of matching small sections of the command chain and substituting shortcuts?"} {"_id": "121664", "title": "When to do code reviews when doing continuous integration?", "text": "We are trying to switch to a continuous integration environment but are not sure when to do code reviews. From what I've read of continuous integration, we should be attempting to check in code as often as multiple times a day. I assume this even means for features that are not yet complete. So the question is, when do we do the code reviews? We can't do it before we check in the code, because that would slow down the process to the point where we would not be able to do daily check-ins, let alone multiple check-ins per day. Also, if the code we are checking in merely compiles but is not feature-complete, doing a code review then is pointless, as most code reviews are best done as the feature is finalized. Does this mean we should do code reviews when a feature is completed, but that unreviewed code will get into the repository?"} {"_id": "121667", "title": "Calculation of Milestones/Task list", "text": "My project manager assigned me a task to estimate the development time for an iPad application. Let's assume that I gave an estimate of 15 working days. He thought that the number of days was too high and the client needed the changes to the application urgently (as in most cases). So, he told me: \"I am going to assign **two** developers including you, and as per my understanding and experience it won't take more than **seven** working days.\" **Clarifications** I was given the task of estimating development time for an individual. How could I be sure that 2 developers are going to finish it within 7 days? (I am new to the team & I hardly know the others' abilities.) **Questions** * Why do most project managers / team leaders have understandings like: * If one developer requires N days, * Then two developers would require N/2 days, * Do they think something like `developer = s/w production machines`? * Should a team member (a developer, not a team lead or any higher post) estimate other developers' work? I didn't object to anything in the meeting, but what would be the appropriate answer to convince them that the N/2 formula they follow is not correct?"} {"_id": "125320", "title": "Do TODO comments make sense?", "text": "I am working on a fairly big project and got the task to do some translations for it. There were tons of labels that hadn't been translated, and while I was digging through the code I found this little piece of code: //TODO translations This made me think about the sense of these comments to yourself (and others?), because I get the feeling that most developers, after they get a certain piece of code done and it does what it's supposed to do, never look at it again until they have to maintain it or add new functionality. So this `TODO` will be lost for a long time. Does it make sense to write these comments, or should they be written on a whiteboard/paper/something else where they remain in the focus of developers?"} {"_id": "121662", "title": "Requirement, architecture data capture tool", "text": "Are there any tools available for the following use case: I am planning to write a complex application, and for now I just know the basic functional requirements. I am refining the functionality with more and more details. I am also writing down how to implement this from a software architecture perspective.
I am in a situation where there are multiple ways to implement a certain functionality, and based on the selected approach, other parts of the program will also change. At this moment, I don't want to decide on the approach to use; I just want to list all the options, present them to a wider audience and get it finalized. Are there any standard software tools to do this? I know I can use MS Excel or some mind-mapping tools, but I am asking about the availability of standard tools (not just for programmers, but for managers and others). Thanks, Den"} {"_id": "146116", "title": "What relationship would this be and how to implement", "text": "I have a pet project where I am making a Grade Book Application to better teach myself several different topics in DB, OOP and Planning. I'm running into some issues with the database planning side of things. For the record, I am learning all of this by using online resources, books and trial and error; I am in no way a professional. I have 3 tables (Teachers, Courses, and Assignments). 1) Many Teachers can Teach Many Courses - so this would be an M2M relationship, where I would need a third table to cross-reference Teachers and Courses by their PKs. Is it okay to have that third table carry extra columns to show unique data about the courses, like the title of the course, or should that go into the Courses table? And if it depends, how would you know which is the best way? 2) Is it One Course can have Many Assignments, or is it Many Courses can have Many Assignments? I feel this is a stupid question but I cannot grasp which would be the correct statement of fact. 3) Does anyone know of any database diagrams that show an example of a very simple gradebook layout? I have been looking on Google but couldn't find anything that was in my realm."} {"_id": "251459", "title": "Sequentially/parallel algorithm for extracting blob outer perimeter/contour length", "text": "I have labelled the connected components in a binary image and found their areas and bounding boxes. The components are not necessarily filled, and may contain holes. I wish to identify the component that resembles a pupil the most. For this, I would also like to extract ( **only** ) their outer perimeter lengths, for calculating circularity, since these are good features for pupil detection. I plan to do this sequentially and then move the algorithm to CUDA afterwards, so the algorithm should be parallelisable to some extent. I should note that this work is for my thesis, and I am not asking you to solve anything for me, just provide some feedback on my research so far. I investigated tons of articles on this problem, but it seems most of them are concerned with connected component labelling and not feature extraction. Alas, I found three candidates, and ~~two~~ one of my own design: 1. **The Marching Squares algorithm**. It sounds promising (also embarrassingly parallel), but it appears to extract all perimeter lengths, including inner contours, without modification, which will likely overestimate perimeter lengths. However, since I am looking for the pupil, a homogeneously colored area, it will likely not overestimate the pupil. The overestimation might also yield bad results for other irregularly shaped blobs, which should be fine if they are then not selected. 2. **The Chain Code algorithm** (used by OpenCV's findContours function): Seems pretty good as well, and parallel solutions do exist, but I worry it might fail if the stopping criterion is not good enough (see here, at the bottom, near Jacob's stopping criterion).
However, it should be able to extract only the outer contour and give good approximations. 3. **The Convex Hull algorithms**: While parallel solutions exist, I worry that it might make a blob more circular than it really is, if points are scattered in a way that favors this. It should give good results for the pupil blob, though. 4. ~~**Algorithm 1**: You could launch some threads that trace from each side of the blob's bounding box towards the opposite side. When the threads \"hit\" a pixel with the blob's label, they mark it as visited and sum the hits. When another side is traced, visited pixels are ignored, hit pixels are summed again etc., and the total is returned.~~ 5. **Algorithm 2**: I also tried counting the number of pixels with a background pixel in their Moore neighborhood, but this overestimates the contour if enough holes are present. I would appreciate some suggestions before I try to code everything, since I am on a schedule. Again, I'm just asking for advice, not solutions."} {"_id": "211811", "title": "In what situations does it make sense to use an enumeration when writing object-oriented code?", "text": "Enumerations1 are often associated with procedural code rather than object-oriented code. They tend to give rise to similar switch statements scattered through the code, and in general, these are replaced by polymorphism in object-oriented code. For example, see _Replace Type Code with Class_, _Replace Type Code with Subclasses_, and _Replace Type Code with State/Strategy_ in _Refactoring_ by Martin Fowler. Closely related, see _Replace Conditional With Polymorphism_ in the same volume; Bob Martin also has quite a bit to say on the disadvantages of switch statements in _Clean Code_ (for example, heuristic G23, _Prefer Polymorphism to If/Else or Switch/Case_). However, since object-orientation, like any other good paradigm, can be a powerful tool but is not a silver bullet, are there times when using an enumeration is a good decision? * * * 1I use this term broadly; not just for something which is strictly an `enum` in a C-based language, but for any set of entities that are used to represent it (a class with a set of static members, etc...)."} {"_id": "249597", "title": "In Asp.net, how browser knows a textbox is registered with server side TextChanged event?", "text": "When I evaluate ASP.NET control events, I am not able to work out how the browser is aware of whether an ASP.NET textbox is subscribed to a server-side TextChanged event or not. When I look at the server-generated HTML code, there is no information about the TextChanged event. **ASP.Net Declaration** <asp:TextBox ID=\"TextBox1\" runat=\"server\" OnTextChanged=\"TextBox1_TextChanged\"></asp:TextBox> **Server Generated HTML Code** <input name=\"TextBox1\" type=\"text\" id=\"TextBox1\" />"} {"_id": "84377", "title": "How to be better at reviewing code?", "text": "First, I firmly believe in the code review process and always want someone else to review my code. My question really centers around how I can do a better job of performing a code review for someone else. I know that to perform a code review you need to have knowledge of how the existing code works and knowledge of what the local standard is, both of which I feel that I know very well. Still I feel like I never do a good enough code review for other people.
Also, I know that certain people seem to do a better job reviewing code than others, so I am wondering: for those of you who are great code reviewers, what are the techniques that you use?"} {"_id": "211816", "title": "Handling multiple clients using single application", "text": "We are developing an ERP web application using the Vaadin framework, where each of the potential client companies will have their own data and file storage. The current implementation only allows having one client company per application instance, which consequently requires a separate database schema and file storage. Some of the clients may want specific customizations based on their needs. Having a few clients doesn't cause much trouble, but if the number of clients grows, then maintaining dozens of application instances will be a real challenge. We are reaching the point where we have to choose one of these options: 1. Improve our architecture to allow all client companies to manage their data on a single application instance, and restrict per-client customizations of the application; 2. Stick with the current implementation and just keep increasing the number of running application instances on the server. Which of these options is considered better practice and would make the system easier to maintain? Is there a different solution for this kind of situation? Edit: The application is currently in the development stage. The server with all the potential application instances will be run by ourselves. Each instance will be deployed as a WAR package via an Apache Tomcat server."} {"_id": "84379", "title": "What technologies are used for an Android/iPhone App to interact with a database", "text": "Do Android/iPhone apps use AJAX to interact with the backend? If yes, is this common? Or do most apps use a different method for fetching database information? And if so, what other methods are there? Are server-side languages ever involved, and if so, is that more common to see with app development?"} {"_id": "84378", "title": "What happens to get database information on the server-side with AJAX?", "text": "I'm slightly confused about how AJAX works. I mostly understand the PHP/MySQL model of retrieving database information for the user, but how does AJAX do it? From what I understand, the AJAX engine sends an HTTP request to the backend, which does some magic and returns XML data to the AJAX engine, which translates it to HTML+CSS for the user to see; but what happens when it sends that first HTTP request to the backend? Is a server-side language acting as the middleman there to get data from the database for the rest of the process? Also, does HTML5 put any extra spin on the usage of AJAX?"} {"_id": "119345", "title": "Meaningful concise method naming guidelines", "text": "Recently I started releasing an open source project. While I was the only user of the library I did not care about the names, but now I want to assign clever names to each method to make the library easier to learn, and I also need to use concise names so they are easy to write. I was thinking about some guidelines for the naming. I am aware of lots of guidelines that only care about letter casing or some simple notes. Here, I am looking for guidelines for meaningful yet concise naming. For example, this could be part of the guidelines I am looking for: * Use Add when an existing item is going to be added to a target; use Create when a new item is being created and added to a target.
* Use Remove when an existing item is going to be removed from a target; use Delete when an item is going to be removed permanently. * Pair AddXXX methods with RemoveXXX, and pair CreateXXX methods with DeleteXXX methods, but do not mix them. As the above samples show, I would like to find some online material helping me with naming methods and other items in a way that complies with English grammar and word meanings. The above guidance may be intuitive for native English speakers, but for me, since English is my second language, I need to be told about things like this."} {"_id": "252333", "title": "WCF vs Web API, Deeper details?", "text": "**Before I continue, I just want to mention I have heavily researched and searched on this topic, but I need the opinion of people who have worked with and/or have practical knowledge of this topic.** We are currently looking at developing an API so that clients can utilize it. We are definitely set on REST as opposed to SOAP, but I am still a little unclear about what exactly the difference is, if you go down to microscopic detail. Having a look at MSDN (Choosing which technology to use) at the link below: http://msdn.microsoft.com/en-us/library/jj823172.aspx the differences are clear. However, the first point states for **WCF**: Enables building services that support multiple transport protocols (HTTP, TCP, UDP, and custom transports) and allows switching between them. For **ASP.NET Web API**: HTTP only. First-class programming model for HTTP. More suitable for access from various browsers, mobile devices etc enabling wide reach. So, both support HTTP, but WCF supports more protocols in addition. What makes ASP.NET Web API, I quote, \"more suitable for access from various browsers, mobile devices etc enabling wide reach\" if both support HTTP? We have, until now, relied heavily on PC use. What I am getting to, and finally my question is: we plan on going mobile, and moving a lot more to multiple devices, browsers etc. But I would not like to settle on a service on the basis of a bit of text saying \"Web API is more suitable\". Is it more efficient, less resource-intensive, etc.? What about future-proofing? Would we be better off with Web API, and why? Just lastly, we have existing SOAP services; everything we have is built in SOAP. Please do not hesitate to go into detail; I am not an experienced dev, just trying to add a contribution to this project. Just a last couple of requirements: 1) We plan on utilizing JSON. 2) We are limited to .NET 3.5/4. 3) Must be REST."} {"_id": "252337", "title": "boolean operations in C using bitfields", "text": "I am trying to implement a boolean data type in C. Basically, I am working with sets. The following code can be used to access each bit, but I am unsure whether I can represent sets using this method. Can somebody clarify this for me? struct SET { unsigned int b0 :1; // bit 0, single bit unsigned int b1 :1; // bit 1, single bit unsigned int b2 :1; unsigned int b3 :1; }; I can define two structures s1 and s2, and I will be able to access each bit of these structures (treated as boolean strings). I will have to perform set operations like UNION, INTERSECTION and MEMBERSHIP. Is this even possible in C? Note: I cannot use Java, only C."} {"_id": "185200", "title": "Does it make sense to implement OAuth for a 2 party system?", "text": "I'm under the impression that OAuth is for authentication between three parties. Does it make sense to implement OAuth in a context where there is just a client and a server?
We have a server, and a client (HTML/JavaScript). Currently we authenticate via the normal \"post credentials to the server, get a cookie, use the cookie to authenticate all subsequent requests\" method. Will implementing OAuth be a benefit in this situation?"} {"_id": "5356", "title": "Do you consider mainframe as part of large application deployments?", "text": "When you are setting up your system landscape for large and/or multiple application deployments, do you consider mainframe? If not, why not? If so, what factors are you considering? If you take a _real_ TCO look at large ERP and/or consolidated application landscapes, mainframe is actually quite cost-effective. My own consultations have included recommendations for scale-up/mainframe/mid-size systems for some specific needs. Honestly, I've never had a customer take said recommendation; rather they default to countless scale-out VMs on Intel boxen (at non-trivial cost), and yet to this day they still have system management and performance issues. I'm curious about your take on this. We need to remember that the virtual machines we manage (and apparently love in IT departments) today have been done for decades on mainframe. Most mid-size and mainframe shops have a small fraction of the support staff managing larger and more complex applications. Your thoughts are appreciated."} {"_id": "5354", "title": "Are NoSQL databases going to take the place of relational databases? Is SQL going away?", "text": "Does anyone have experience with NoSQL databases (CouchDB, MongoDB, Google's BigTable, Dynamo, Cassandra...)? What was it like? Was it actual production/on-the-job experience? What will happen to relational databases in 5 years? 10 years?"} {"_id": "1058", "title": "When is it appropriate to use Microsoft's Enterprise Library (EntLib)?", "text": "I'm not exactly sure when to use Enterprise Library and when not to... and that is making me not learn it at all. I feel that I need enough of a reason to **start learning** it; then perhaps one day I'll **use** it. Are there times when I should use EntLib? When shouldn't I use it?"} {"_id": "1059", "title": "Have objects delivered in terms of code reuse?", "text": "I have often heard it said that objects have not delivered in terms of code reuse. Do you agree? If you believe that they haven't, why not?"} {"_id": "33883", "title": "How to choose a new technology for mastering and not lose sense of reality and practicality?", "text": "How do I choose the right next step in learning programming and mastering new technologies? I have experience with WinForms applications in C# .NET. The next area I see as good for expanding my knowledge is ASP.NET. The language, C#, I already know, so I think it is now more a matter of mastering new technologies. I also have an interest in WPF. Perhaps the best option is to work on ASP.NET and WPF at the same time. Sometimes the problem is a lack of motivation, but it is also known to become a problem when we want too much :) How to choose a new technology for mastering and not lose the sense of reality and practicality?"} {"_id": "179734", "title": "Switch interface implementation using configuration", "text": "We want to allow the same core service to be either fully implemented or, as another option, to be a proxy toward a client's legacy system (via a WSDL, for example). That way, we have both implementations (proxy & full) and we switch which one to use in the configuration of the app.
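To illustrate, a rough C# sketch of the setup (all names invented):
    public interface ICoreService { string Execute(string request); }
    public class FullCoreService : ICoreService
    {
        public string Execute(string request) { return \"handled locally: \" + request; }
    }
    public class LegacyProxyCoreService : ICoreService
    {
        public string Execute(string request) { return \"forwarded to the legacy system: \" + request; }
    }
    public static class CoreServiceFactory
    {
        // reads e.g. <add key=\"CoreServiceType\" value=\"MyApp.LegacyProxyCoreService, MyApp\"/> from app.config
        public static ICoreService Create()
        {
            string typeName = System.Configuration.ConfigurationManager.AppSettings[\"CoreServiceType\"];
            return (ICoreService)System.Activator.CreateInstance(System.Type.GetType(typeName));
        }
    }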
So in a nutshell, some desired features: * Two different implementations (proxy, full) instead of one implementation with a switch inside * Switch implementations using configuration: dependency injection? reflection? * Nice-to-have: the package delivered to the client doesn't have to change depending on the choice between proxy or full * Nice-to-have: the client can develop their own custom implementation of the core interface and configure the application to use that one With this background, the question is: what alternatives do we have for choosing one implementation of an interface or the other just by changing configuration? Thanks"} {"_id": "82933", "title": "how to create a data model for the following problem", "text": "How to tackle 2-D and 3-D space in data? Let's say you are working on a power grid problem. You need to represent towers, transmission lines, transformers and everything else in a 2-D space. How would you design a data model for this 2-D space? I can use a class tower { int x_cor; int y_cor; string power_properties } But how do I represent the... say, the map of the power grid itself? Is there some standard solution to this? Some logical template people follow here? I want to be able to slice this map by shape or area, etc.; I should be able to compose small maps to make a large one. So what is the solution here? By the way, I am not working on a power-grid project (LOL), so please keep answers generic. P.S. Non-noobs, please help me make the question better."} {"_id": "216898", "title": "Possible problems in a team of programmers", "text": "I am a \"one man team\" ASP.NET C#, SQL, HTML, jQuery programmer who wants to split the workload with two other guys. Since I had never actually thought about the possible issues in a team of programmers, quite a few came to my mind. Delegating tasks (who works on what, which is also very much related to security): I found that Team Foundation Service could be helpful with this problem and started reading about it. Are there any alternatives? Security (we do not want the original code to be reused outside the project): how to prevent programmers from having access to all parts of the code, and how to prevent them from using that code outside of the project? Are trust or contracts the only way?"} {"_id": "216899", "title": "Are there any technical obstacles for implementing `function* ()` syntax", "text": "In Python we have `yield`, which is very similar to the one proposed in ES6 (in fact, Pythonic co-routines were the main source of inspiration for implementing co-routines in ES6). I wonder what the reasons are for choosing a separate `function* ()` syntax for generators, compared to just defining \"regular\" functions with yields - just like in Python, by the way? I'm talking strictly of technical issues and peculiarities. Why was it decided that a separate form would be more appropriate?"} {"_id": "200329", "title": "How do you demo software with No UI in the Sprint Review?", "text": "We are doing agile software development, basically following Scrum. We are trying to do sprint reviews but are finding it difficult. Our software does a lot of data processing, and the stories are often about changing various rules around this.
What are some options for demoing the changes that occurred in the sprint when there isn't a UI or visible workflow change, but instead the change is a subtle business rule on a processing job that can take tens of minutes or even a couple of hours?"} {"_id": "122312", "title": "How to properly shield a Product Owner from outside?", "text": "**Update:** We are a very small team (3 people) and thus I (the Scrum Master) and the Product Owner are also developers doing some coding. We are aware of this situation and we are actively trying to recruit some new talent. But it's hard! * * * Meanwhile... we need to adapt... so my question: The Product Owner complains about having too much outside noise (mainly stakeholders' feature requests), and he can't focus on the sprint realisation. We agree that we should try to educate people on the implications of our process (sprint durations and the product backlog), to reduce the noise. But as a Scrum Master, how am I supposed to shield a PO from the outside? Isn't he supposed to be in contact with management and the business? Also, if people outside don't want to waste too much time learning agile, what is the best way to educate them?"} {"_id": "111116", "title": "How to cope with the problem of (compiling) a large code base?", "text": "Although I can code, I don't yet have any experience with working on large projects. What I have done so far is either coding small programs that get compiled in a matter of seconds (various C/C++ exercises like algorithms, programming principles, ideas, paradigms, or just trying out APIs...) or working on some smaller projects that were made in scripting languages (Python, PHP, JS) where no compiling is needed. The thing is, when coding in a scripting language, whenever I want to check whether something works, I can just run the script and see what happens. If things don't work, I can simply change the code and try it out again by running the script again, and keep on doing that until I get the result that I wanted. My point is that you don't have to wait for anything to compile, and because of that it is quite easy to take a big code base, modify it, add something to it, or simply play with it - you can see the changes instantly. As an example I will take WordPress. It is quite easy to try and figure out how to create a plugin for it. First you start by creating a simple \"Hello World\" plugin, then you make a simple interface for the admin panel to familiarize yourself with the API, then you build it up and make something more complex, in the meantime changing how it looks a couple of times. The idea of having to recompile something as big as WP over and over again, after each minor change, to try \"if it works\" and see \"how it works/feels\" just seems inefficient, slow and wrong. Now, how could I do that with a project that is written in a compiled language? I would like to contribute to some open-source projects and this question keeps bugging me. The situation probably differs from project to project, where some of them that were thought out wisely in advance will be \"modular\" in some way, while others will just be one big blob that needs to be recompiled again and again. I would like to know more about how this is done properly. What are some common practices, approaches and project designs (patterns?) to cope with this? What is this \"modularity\" called in the programming world, and what should I google to learn more about it? Do projects often grow out of their originally envisioned proportions, which becomes troublesome after a while?
Is there any way to avoid **long compiling** of not-so-well-designed projects? A way to somehow modularize them (maybe excluding non-vital parts of the program while developing (any other ideas?))? Thanks."} {"_id": "114542", "title": "How can a large, Fortran-based number crunching codebase be modernized?", "text": "A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now gone towards a single workstation with 500ish GPUs in it. However, where to go next with the codebase, so that: * Other people can maintain it over the next 10-year cycle * Tweaking the software gets faster * It can run on different infrastructures without recompiles After some research from me (this is a super interesting area), some options are: * Use Python and CUDA from Nvidia * Rewrite in a functional language. For example, F# or Haskell * Go cloud-based and use something like Hadoop and Java * Learn C What has been your experience with this? What should my friend be looking at to modernize his codebase? UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, and especially the tooling, and can't imagine going back to older languages!!) I liked the suggestion of keeping the pure number crunching in Fortran, but wrapping it in something newer. Perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See _Medical Imaging_ and a guy who has written a Fortran wrapper for CUDA, _Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?_."} {"_id": "114545", "title": "Need advice on which route I should take for porting app from Android to iOS", "text": "I have developed this Android application and I am looking to port it to iOS. My questions are: 1) I am planning to finance the iOS development with the money earned from the Android sales, which is about $200 at the moment. Would it be reasonable to get the project done for this amount, even if it is outsourced to Elance or oDesk, etc.? 2) I have heard the code quality of outsourced iOS projects can be bad; I don't want any memory leaks or crashes because of overflows, etc. 3) I currently don't have a full-time job, so I can technically do the project myself; however, I don't have an Apple PC or an iPhone (and don't really want to get them either). I know C pretty well, but I haven't really released a major C project that must be 100% stable. How many hours would it take to learn iOS and do the port? And would it be worth my time, taking into consideration that if I do enjoy it I may get into it full-time? 4) What would the demand be like for someone who can do Android and iOS? Thanks"} {"_id": "50958", "title": "Data encryption/protection - where to find info about high-level best practices", "text": "I feel that no one in the group I work in, myself included, _really_ groks encryption and security, or the reasons behind making certain decisions.
For example, we recently had a conversation regarding encryption of data that we handle for another group that we work with - the data ends up in a database that is on our secure corporate network (I work in a small group in a large software company, so the integrity of the corporate network is very high), along with everything else we handle. Of course, standard guidelines call for \"encryption\" of this data. Obviously, that could mean many things - IPSec/encrypted connections, encrypted file shares, encryption implemented in the DB (whole-DB or column), encryption of the actual bits in the file, etc. - and some people in the group are under the impression that the only kind of encryption that really counts is directly encrypting the bits that are stored, the argument being that everything else is too easy to circumvent - \"if the DB is encrypted, I could still log into it and see the data there; if the file share is encrypted, as long as I have permissions to the folder I can just grab the file; but if the bits are directly encrypted, I won't be able to read it\". My instinct says that that statement is based on limited understanding: they can see themselves logging into SQL Server Management Studio to see the data, but since they wouldn't know how to take a stream or array of encrypted data and use a certificate that they probably have access to to decrypt it, it's probably safe. Are they right? Am I right? No one seems to really know, so decisions get based on the opinion of the loudest or highest-paid person. Anyway, that's just kind of an extended example of what I'm talking about. I feel like it's the blind leading the blind here, with decisions based on limited understanding, and it's frustrating. I'm no expert on the technical bits of encryption, but I know how to use standard libraries to encrypt streams and arrays and the like - where I really need more knowledge is in architecting data security, and information on which I can base decisions like the above. Where can I read about this kind of stuff?"} {"_id": "110649", "title": "Why don't we use browser detection and platform-specific CSS?", "text": "Nowadays, the common phenomenon is to develop a website for a browser and then corresponding apps for Android phones, iPhones, tablets and so on. Since all the platforms come with a browser, why aren't companies using CSS to accommodate them? Surely we can detect from the request which browser was used and from which platform the request came. Reading those values, why don't we just implement the corresponding CSS for different platforms, like we do for IE, Chrome and Safari? This way we can use the platforms' browser capabilities and don't need to develop separate apps for each platform."} {"_id": "110645", "title": "Why is the Repository pattern needed in NHibernate?", "text": "I am reading the official Your first NHibernate based application. While the tutorial is good and easy to follow, I am wondering why the Repository pattern is used. In the various `Add`, `Update`, `Remove` methods in the `ProductRepository` implementation, the code is nearly identical - they are all using transactions, and the difference is in the \"meat\", i.e. calling `session.Save` in the `Add` method and `session.Delete` in the `Remove` method. ( _The page lacks HTML anchors, but you can search the page for the relevant code like `public void Remove`, `public void Add`_) That code just \"feels wrong\".
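To show what I mean, the two methods boil down to roughly this (paraphrasing the tutorial's code from memory; NHibernateHelper is the tutorial's session helper):
    public void Add(Product product)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Save(product);    // the only line that differs...
            transaction.Commit();
        }
    }
    public void Remove(Product product)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Delete(product);  // ...from this one
            transaction.Commit();
        }
    }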
Why is the author using the Repository pattern - is it just a demonstration of using NHibernate, is it required, or is there some other reason? PS: My background is Ruby on Rails using ActiveRecord, so I'm trying to make sense of how NHibernate works/is used."} {"_id": "50951", "title": "Best projects that illustrate the strengths of the languages they are implemented in?", "text": "I'm looking for a who's-who list of projects (open source) that really illustrate the strengths of the languages they are written in. Ideally each project should make use of some features of a given language that would otherwise be clunky or difficult to do in another language."} {"_id": "66868", "title": "Scrum and Scrum Master on a global team", "text": "I work in a company that runs on a model of using one developer as tech lead (me), with an onshore co-ordinator/developer who co-ordinates the offshore team, and on the offshore team there's an offshore co-ordinator. It sounds bizarre but it basically works. The rest of the developers are offshore. I happen to be on a project now that has an additional onshore developer. My question is, do you think I can use some agile methodologies here in a loose sense (we're a waterfall company, but maybe I could do scrum, sprints, planning poker, etc.)? Also, do you think we could benefit from a Scrum Master? What if I didn't have the additional onshore developer (so basically I'd be one of two onshore developers then, and neither of us might be on the project full time)? What about me giving up the coding I do and becoming Scrum Master & tech lead? See my comment below for my duties now."} {"_id": "190884", "title": "Do Flexibility and Inconsistency,Unsafety Overlap?", "text": "Lately I was doing some research about different programming languages. I was particularly interested in learning the unique features of popular programming languages and the situations where these assets shine. I believe this can help me decide what language to use depending upon the problem to solve. I found that many languages offering a high level of flexibility (\"can adapt to new, different, or changing requirements\") are also inconsistent. For example, in JavaScript (just an example, no offense to JS people), the arguments of a function can be manipulated either by naming them in the function declaration (the classic approach): function max(x,y) Or by using the `arguments` variable: function max() { if (arguments[0] < arguments[1]) return arguments[1]; else return arguments[0]; } c = max(1,2); The arguments variable is useful in many situations, such as function overloading; however, it defeats the purpose of function prototyping (which is to give information about the function), and I have seen many JavaScript samples where the `arguments` variable is used despite there being no need for it, which clearly makes the code harder to understand and debug. I know that this depends heavily on the users of the language, but usually code written in a language offering high flexibility tends to be unsafe and harder to understand, optimize and debug. Do flexibility and inconsistency overlap? How do language designers make the choice?
Edit: I'm not critiquing any particular language here; I'm rather focusing on the relation between the Flexibility, Safety and Consistency of programming languages."} {"_id": "66862", "title": "How to make a GPL license with more severe penalties?", "text": "I am considering open sourcing some software, but when I read the answers on this question, it seems like the penalty for ignoring a GPL license is rather small compared to the costs for the suing party to document that the other party has included GPL code in their proprietary code, plus the costs of suing. Therefore, I would like to know if there is a way to make a license, or some kind of contract to be agreed to before source code is accessed, which is stricter than the GPL. What I want to achieve is that a company including GPL code in their proprietary code has to GPL their whole code, and not just rewrite the GPL code. Is this possible?"} {"_id": "160560", "title": "What exactly is \"Web API\" in ASP.Net MVC4?", "text": "I know what a web API is. I've written APIs in multiple languages (including in MVC3). I'm also well practiced in ASP.NET. I just discovered that MVC4 has \"Web API\" and, without going through the video examples, I can't find a good explanation of what exactly it IS. From my past experience, Microsoft technologies (especially ASP.NET) have a tendency to take a simple concept and wrap it in a bunch of useless overhead that is meant to make everything \"easier\". Can someone please explain to me what Web API in MVC4 is exactly? Why do I need it? Why can't I just write my own API?"} {"_id": "160561", "title": "What useful expressiveness will be impossible in a language where an expression is not a statement?", "text": "I am contemplating writing a programming language. Most grammars define expressions as being a kind of statement. But really I cannot come up with a single example of any useful expression that would pass as a statement. See http://stackoverflow.com/questions/19132/expression-versus-statement for an easy definition of expression vs. statement. EDIT: Why do so many grammars make the distinction between expr and stmt when, as noted in the answers, it makes no sense?"} {"_id": "66864", "title": "Choosing a restrictive licence for open source work", "text": "If you were putting some work of yours online (say, a research project still in development or something of that sort) and **_it had to_** be made available to the public, but you wished that if someone uses it, they have to acknowledge the original author (i.e. you don't want anyone pushing your research as their own), what would be a good licence to use? In other words, you wish to protect your own work which, were it not for some rules, would never actually have been made public."} {"_id": "127363", "title": "Science degrees that are complementary to programming", "text": "I hope this won't be closed, especially since I think a lot of people who are in similar positions to mine can benefit from this question. I've been programming for a long time (since I was about 13/14; now I'm 22) and now I'm at that stage in life where I'm thinking about whether and what to study. Since I've been programming for a long time, I'm pretty certain that I will not pursue a degree in Computer Science/Software Engineering, because I think a lot of the material will be about things I already know.
Even if it will benefit me, I think I will lose a lot of motivation, because I will be bored in class and I will always think to myself \"I'm wasting time\" or \"I'm not really gonna use the material that is taught in this class\". Science really interests me, especially Physics, but Chemistry/Biology are also interesting to me, and it's also possible to study two together, like a BSc in Physics and Chemistry or in Chemistry and Biology, which are both offered at the university I want to go to. So the question is: what science degree can benefit me the most in a career as a programmer? Are there fields that incorporate software engineering and Physics/Chemistry/Biology and that have a lot of demand, or are expected to have a lot of demand? Are there any specific careers that you know of that incorporate software engineering and Physics/Chemistry/Biology? This is not a question of whether to study or not; this is a question of \"if I decide to do a science degree that isn't Computer Science/Software Engineering, but which will benefit me as a programmer or widen my range of opportunities, which one do you recommend?\""} {"_id": "191374", "title": "Why interviewers want Optimal Algorithms", "text": "In an interview with a software company they asked me some algorithm design questions. Being strong in mathematics, I was able to solve them mathematically, but each time I proposed my algorithm they were doubtful about whether it worked, and I had to prove it with examples. I didn't get a call for further rounds; I believe my solutions were correct. After googling, I found answers to most of the questions, but they were implemented in a different way. Why do interviewers look for memorized optimal answers, rather than solutions that come out of our own thinking?"} {"_id": "129936", "title": "What is the relevance of Unit Testing in an \"Release early release often\" environment?", "text": "Over the last year or so, I have driven my team towards the release-early-release-often mode of development (AKA: Rapid Application Development, not Agile). For more information about the manner in which we close the build, see my answer here: A simple ways to improve the release quality in RAD environment. When we adopted RAD, people were quite independent and they were doing unit testing first; the integrated tests happened much later in the process. It was a natural process for them, without much formal enforcement. Now the situation is quite different: 1. The entire platform is well integrated, with established builds/releases working client-side without any hot spots. 2. New functionality requirements keep coming, and we incrementally build them as we go. 3. The overall dynamics of the system are very important, because while independent development groups might be following the process correctly, major failures have arisen due to complicated, non-obvious circumstances. 4. Many parts of the system involve new algorithms and research inputs, so the challenges (and hence the mechanisms for testing) are not always foreseen correctly, unlike feature testing in well-defined software. Recently, I was trying to get a better overall picture to see if we need process improvement. When I sat down with my team, many of them balked: \"We don't do unit testing anymore!\", while others thought we shouldn't start now because it will never be effective. Are unit tests useful in a relatively mature system? Should we at least weigh test scope depending on the maturity of the units?
Will unit testing slow down the pace of development? Is there a need to evaluate unit testing in a different way? What are the best practices of testing for a mature platform in a release- early-release-often environment?"} {"_id": "245739", "title": "Web Application: Combining View Layer Between PHP and Javascript-AJAX", "text": "I'm developing web application using PHP with CodeIgniter MVC framework with a huge real time client-side functionality needs. This is my first time to build large scale of client-side app. So I combine the PHP with a large scale of Javascript modules in one project. As you already know, MVC framework seperate application modules into Model- View-Controller. My concern is about View layer. I could be display the data on the DOM by PHP built-in script tag by load some data on the Controller. Otherwise I could use AJAX to pulled the data -- treat the Controller like a service only -- and display the them by Javascript. Here is some visualization I could put the data directly from Controller: \">
    \">
    \"> Or pull them using AJAX: $.ajax({ type: \"POST\", url: config.indexURL + \"user\", dataType: \"json\", success: function(data) { $('#username').val(data.username); $('#dateOfBirth').val(data.dob); $('#address').val(data.address); } }); So, which approach is better regarding my application has a complex client- side functionality? In the other hand, PHP-CI has a default mechanism to put the data directly from Controller, so why using AJAX?"} {"_id": "129931", "title": "Example of time-saving usage of compile-time meta-programming?", "text": "The webpage of Converge states that: > Converge has a macro-like facility that can embed domain specific languages > with arbitrary syntaxes into source files. I am most intrigued by the possibility to create domain specific languages. My understanding of the purpose of such languages, is that they enables the developer to become much more efficient when solving domain specific problems. So, I went on to read the \"about\" section. There are two examples of usage here, but none of them describe how the developer could save development-time by utilizing Converge. Can anyone come up with an example or refer me to an example? I am happy to read examples in other languages than Converge."} {"_id": "214940", "title": "How to handle status columns in designing tables", "text": "How to handle multiple statuses for a table entry, for example an item table may have an active, inactive, fast moving, and/or batch statuses. And I wanted to handle them in single column with VARCHAR type. Also I might set each of those attributes as a boolean with different columns. But I am not sure what consequences this might lead to. So if you have experienced such situations which one would be the best way to handle it?"} {"_id": "245730", "title": "Retrying a statement or call in a catch block - code smell or anti-pattern?", "text": "I'm wondering how better to perform this operation for a large amount of files. The bit I'd like some thoughts on **whether this copy/paste is acceptable enough of a tradeoff**. * try to write a file * if the target dir doesn't exist, create it * try again to write the file. If something else throws and exception, let it raise. try{ File.Copy(\"@D:\\foo.txt\", @\"C:\\mydir\\foo.txt\"); } catch (DirectoryNotFoundException){ CreateDirectoryForFile(@\"C:\\mydir\"); File.Copy(\"@D:\\foo.txt\", @\"C:\\mydir\\foo.txt\"); //copy pasted from the try block } There's a simpler block that's easier to read, but leads to a higher number of disk IO calls that is necessary: CreateDirectoryForFile(@\"C:\\mydir\\foo.txt\"); File.Copy(\"@D:\\foo.txt\", @\"C:\\mydir\\foo.txt\"); Can pattern 1 be improved?"} {"_id": "206919", "title": "Do delegates defy OOP", "text": "I'm trying to understand OOP so I can write better OOP code and one thing which keeps coming up is this concept of a delegate (using .NET). I could have an object, which is totally self contained (encapsulated); it knows nothing of the outside world... but then I attach a delegate to it. In my head, this is still quite well separated as the delegate only knows what to reference, but this by itself means it has to know about something else outside it's world! That a method exists within another class! Have I got myself it total muddle here, or is this a grey area, or is this actually down to interpretation (and if so, sorry as that will be off topic I'm sure). 
My question is, do delegates defy/muddy the OOP pattern?"} {"_id": "126940", "title": "Debug multiprocessing in Python", "text": "What are some good practices in debugging multiprocessing programs in Python?"} {"_id": "110392", "title": "Algorithm for detecting a knob-turning gesture?", "text": "What is the math to identify a gesture for the motion made by two fingers turning a knob (tracking two sets of xy coords)? Is there a library/API anyone is familiar with that provides a good set of gestures for multitouch applications?"} {"_id": "110394", "title": "Raw Sql vs SqlAlchemy when Django ORM is not enough", "text": "In a Django project, if a situation arises where the Django ORM cannot execute some complex queries, there are 2 options: 1. Use raw SQL queries 2. Use SQLAlchemy I want to know if there are some rules, like pros and cons of both, which can guide me in choosing one of the above 2 options. Or does it depend entirely on what we like?"} {"_id": "110395", "title": "Theta notation on constant time. Why we use the 1?", "text": "In asymptotic notation, when it is stated that if the problem size is small enough (e.g. `n` below some constant) the solution runs in constant time, this is written as Θ(1). Why do we use the 1?"} {"_id": "…", "title": "…", "text": "[…] \"thread\" does not exist in the std namespace. Looking at the ticket tracking and mail discussions of GCC/TDM-GCC, there have been requests for thread support since 2009. Is it possible that after 4 years there is still no solution? What's really happening?"} {"_id": "214492", "title": "Custom animation iOS", "text": "I have seen an animation and I can't figure out how to do something like it; it's in this video (YouTube). I want to discuss how it's made. I don't think they're using sprites. I have one idea how to do this: for example, say I want to create an animation of a \"walking\" animal (when the animal moves, its legs run a walking animation). I should create a customView with an imageView of the animal body (3) and two imageViews of the animal legs (2). Then I make a hard-coded animation of the moving legs and voila, I have a custom animation. When I move the customView, I should start the leg animation. But is there a better approach to this? Thanks! ![enter image description here](http://i.stack.imgur.com/f1mdB.png) ![enter image description here](http://i.stack.imgur.com/wSdey.png)"} {"_id": "193280", "title": "Web API URI Schema Design", "text": "I'm in the middle of designing an API for a very basic flashcard application for learning purposes, and I'm wondering if you all think there can be any improvements. In the app, a Folder contains Folders and Sets. A Set contains Cards. The Homepage lists any top-level Sets and Folders that I've created (but never Cards). Here's the URI schema I've come up with: ![enter image description here](http://i.stack.imgur.com/r8lES.png) Does this seem ok? My goal is to keep the URIs terse/concise/simple and push any relationship metadata (parentFolderId, parentSetId, etc) into the body of the request. Thoughts?"} {"_id": "127095", "title": "What do I need to include in my OOP design document?", "text": "I am a graduate student in aerospace engineering. A lot of my Master's thesis has been on developing an OOP software architecture in MATLAB to facilitate control of Arduino-based vehicles (blimps, cars, quadrotors, etc). All my code is heavily documented, but my thesis itself needs to be more than a reference for every class and method - it needs to describe at a high level how each piece works together, and _why_ my architecture is the way it is.
I only have a minor in computer engineering and I consider myself a pretty competent programmer, but I have no experience writing this kind of document. What kinds of things need to be included in the document (diagrams, paragraph explanations of how each class operates, etc.)? I don't want to just rehash my code comments (although at the start of each file there are paragraphs of comments about how that class relates to other classes). I also wonder if it's necessary to get rather technical (I implement some fairly advanced MATLAB concepts [that is, advanced for non-computer engineers] such as listeners and callbacks); my audience also hails from an aerospace engineering background. Examples would be appreciated if they exist. **EDIT**: The part my audience really cares about is not the software architecture (the stuff this question entails); this document is really for people who will use my work in the future - who are **not** computer engineers! So I guess the main question is: what needs to be in the document such that people who are not up to speed on the project (and whom we can assume are competent with the language) can understand the architecture, and maintain and use the software?"} {"_id": "213441", "title": "Functional programming and stateful algorithms", "text": "I'm learning functional programming with **Haskell**. In the meantime I'm studying automata theory, and as the two seem to fit well together, I'm writing a small library to play with automata. Here's the problem that made me ask the question. While studying a way to evaluate a state's reachability, I got the idea that a simple recursive algorithm would be quite inefficient, because some paths might share some states and I might end up evaluating them more than once. For example, here, evaluating the **reachability** of _g_ from _a_, I'd have to exclude _f_ both while checking the path through _d_ and through _c_: ![digraph representing an automaton](http://i.stack.imgur.com/4JG7I.png) So my idea is that an algorithm working in parallel on many paths and updating a shared record of excluded states might be great, but that's too much for me. I've seen that in some simple recursion cases one can pass state as an argument, and that's what I have to do here, because I pass forward the list of states I've gone through to avoid loops. But is there a way to pass that list backwards as well, like returning it in a tuple together with the boolean result of my `canReach` function? (Although this feels a bit forced.) **Besides the validity of my example case**, what other techniques are available to solve this kind of problem? I feel like these must be common enough that there have to be solutions like what happens with `fold*` or `map`. So far, reading learnyouahaskell.com, I didn't find any, but consider that I haven't touched monads yet. (_If interested, I posted my code on codereview_)"} {"_id": "16646", "title": "Is throwing an exception from a property bad form?", "text": "I've always been of the mindset that properties (i.e., their set/get operations) should be fast/immediate and failure-free. You should never have to try/catch around getting or setting a property. But I'm looking at some ways to apply role-based security to the properties of some objects, for instance an Employee.Salary property. Some of the solutions I've run across that others have tried (one in particular is the AOP example here) involve throwing an exception if the accessor doesn't have the right permissions - but this goes against a personal rule that I've had for a long time now.
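For illustration, the pattern being suggested looks roughly like this (a Python-style analogue of the idea, with a deliberately simplified role check; the real solutions are C#/AOP):

    class Employee:
        def __init__(self, salary, caller_role):
            self._salary = salary
            self._caller_role = caller_role

        @property
        def salary(self):
            # the getter itself throws when the caller lacks permission
            if self._caller_role != 'manager':
                raise PermissionError('not allowed to read Salary')
            return self._salary

So merely reading `employee.salary` can blow up, which is exactly what my personal rule forbids.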
So I ask: am I wrong? Have things changed? Has it become accepted that properties should be able to throw exceptions?"} {"_id": "213443", "title": "Sprite Sheets in PyGame?", "text": "So, I've been doing some googling and haven't found a good solution to my problem. My problem is that I'm using PyGame, and I want to use a sprite sheet for my player. This is all well and good, and it would be, too, if I weren't using a sprite sheet strip. Basically, if you don't understand: I have a strip of 32x32 'frames'. These frames are all in one image, alongside each other. So, I have 3 frames in 1 image. I'd like to be able to use them as my sprite sheet and not have to crop them up. I have used an awesome, popular and easy-to-use game framework for Lua called LÖVE. LÖVE has these things called \"Quads\". They are similar to texture regions in LibGDX, if you know what those are. Basically, quads allow you to get parts of an image. You define how large a quad is, and you define parts of an image that way, or 'regions' of an image. I would like to do something similar to this in PyGame: use a \"for\" loop to go through the entire image width and height, mark each 32x32 area (or whatever the user defines as their desired frame width and height), and store that in a list or something for use later on. I'd define an animation speed and stuff, but that's for later. I've been looking around on the web, and I can't find anything that will do this. I found 1 script on the PyGame website, but it crashed PyGame when I tried to run it. I spent hours trying to fix it, but no luck. So, is there a way to do this? Is there a way to get regions of an image? Am I going about this the wrong way? Is there a simpler way to do this? Thanks! :-)"} {"_id": "246278", "title": "get all the combination of a given set of numbers", "text": "I'm trying to get the possible combinations (strictly speaking, permutations) of a given set of numbers, say for example 123. The possible orderings would be 123 132 213 231 312 321 For this I have written the code below:

    import java.util.ArrayList;

    /* Name of the class has to be \"Main\" only if the class is public. */
    public class Main {

        static ArrayList<Integer> list = new ArrayList<Integer>();

        public static void main(String[] args) {
            Main nm = new Main();
            nm.test1();
        }

        public void test1() {
            for (int i = 1; i <= 4; i++) {
                if (!list.contains(i)) {
                    list.add(i);
                    test1();
                    list.remove(list.indexOf(i));
                }
            }
            if (list.size() == 4) {
                for (int i = 0; i < 4; i++) {
                    System.out.print(list.get(i));
                }
                System.out.println(\"\");
            }
        }
    }

This gives the correct output, as shown here in the link. I was wondering whether this is the right approach, or whether there are any optimizations that would let this code be used to get the orderings of numbers of 10 to 20 digits or more.
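For reference, the ordering I'm after matches what a standard permutation generator produces - for example, this small Python snippet prints the same six lines as the example above:

    from itertools import permutations

    for p in permutations('123'):
        print(''.join(p))  # 123, 132, 213, 231, 312, 321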
Regards"} {"_id": "146761", "title": "Design pattern for handling a response", "text": "Most of the time, when I'm writing some code that handles the response of a certain function call, I get the following code structure. Example: this is a function that will handle authentication for a login system.

    class Authentication{
        function login(){
            //This function is called from my Controller
            $result=$this->authenticate($username,$password);
            if($result=='wrong password'){
                //increase the login trials counter
                //send mail to admin
                //store visitor ip
            }else if($result=='wrong username'){
                //increase the login trials counter
                //do other stuff
            }else if($result=='login trials exceeded'){
                //do some stuff
            }else if($result=='banned ip'){
                //do some stuff
            }else if...
        }

        function authenticate($username,$password){
            //authenticate the user locally or remotely and return an error code in case a login fails.
        }
    }

**Problem** 1. As you can see, the code is built on an `if/else` structure, which means a new failure status means I need to add an `else if` statement - a violation of the _Open Closed Principle_. 2. I get the feeling that the function has different layers of abstraction, as I may just increase the login trials counter in one handler but do more serious stuff in another. 3. Some of the actions are repeated, `increase the login trials` for example. I thought about converting the multiple `if/else` to a factory pattern, but I have only used factories to create objects, not to alter behavior. Does anyone have a better solution for this? **Note:** **This is just an example using a login system.** I'm asking for a general solution to this behavior using a well-built OO pattern. This kind of `if/else` handler appears in too many places in my code, and I just used the login system as a simple, easy-to-explain example. My _real_ use cases are much too complicated to post here. :D Please don't limit your answer to PHP code; feel free to use the language you prefer. * * * **UPDATE** Another, more complicated code example just to clarify my question:

    public function refundAcceptedDisputes() {
        $this->getRequestedEbayOrdersFromDB(); //get all disputes requested on ebay
        foreach ($this->orders as $order) { /* $order is a Doctrine Entity */
            try {
                if ($this->isDisputeAccepted($order)) { //returns true if dispute was accepted
                    $order->setStatus('accepted');
                    $order->refund(); //refunds the order on ebay and internally in my system
                    $this->insertRecordInOrderHistoryTable($order,'refunded');
                } else if ($this->isDisputeCancelled($order)) { //returns true if dispute was cancelled
                    $order->setStatus('cancelled');
                    $this->insertRecordInOrderHistoryTable($order,'cancelled');
                    $order->rollBackRefund(); //cancels the refund on ebay and internally in my system
                } else if ($this->isDisputeOlderThan7Days($order)) { //returns true if 7 days elapsed since the dispute was opened
                    $order->closeDispute(); //closes the dispute on ebay
                    $this->insertRecordInOrderHistoryTable($order,'refunded');
                    $order->refund(); //refunds the order on ebay and internally in my system
                }
            } catch (Exception $e) {
                $order->setStatus('failed');
                $order->setErrorMessage($e->getMessage());
                $this->addLog(); //log error
            }
            $order->setUpdatedAt(time());
            $order->save();
        }
    }

**Function purpose:** * I am selling games on ebay. * If a customer wishes to cancel his order and get his money back (i.e. a refund), I must open a \"Dispute\" on ebay first.
* Once a dispute is opened, I must wait for the customer to confirm that he agrees to the refund (silly, as he's the one who asked me to refund, but that's how it works on ebay). * This function gets all disputes opened by me and checks their statuses periodically to see whether the customer has replied to the dispute or not. * The customer may agree (then I refund), or refuse (then I roll back), or may not respond for 7 days (then I close the dispute myself and refund)."} {"_id": "206149", "title": "Java: The best way to learn it when MOOCs and books are not enough?", "text": "This topic was taken from Stack Overflow, but it was put on hold there as an opinion-based question, so I moved it here. I've had great trouble with my homework exercises and have spent excessive time on simple programs involving arrays and methods. For example, it took me over 20 hours to create a program that asks the user to enter row and column sizes and then prints the matrix. I've read over 20 Java books and tried 4 MOOCs, and even Bucky's videos, but it feels like I won't get an in-depth understanding of Java fundamentals. Here's a list of problems that I face every day when I'm coding Java: 1. Problem solving 2. Creating methods and using them 3. Arrays 4. \"I've no idea how to do this\" 5. What a constructor is and how to use it 6. \"Where can I find help?\" I've asked this question before, but the answers have been like \"Read books\" or \"Just start coding\", and neither of these has helped me, because I've tried to code but haven't had any reasonable goals, and reading books feels like an eternal loop that leads nowhere. I know the Internet is full of 'What is the best way to learn Java' questions, but I think I have to try asking this because this exact question does not appear in a Google search, plus I think there are other people struggling with the same problem. Any advice would be appreciated. I'm eager to learn Java; I just need to find my own way to do it!"} {"_id": "206142", "title": "Communication between programmers and system engineers. Tool for planning builds and maintenance jobs", "text": "We have a Development department and a System Engineering department working on a big project (development of a J2EE application). We use a _build management system_, a _bug tracking system_ and a _wiki engine_ in our work. Sometimes a developer initiates a server restart or an application build that breaks the debugging process for other developers (in some cases developers are using the same remote _Developer_ server). At present we use a _chat_ application for approving a server (application) restart/rebuild. Can anyone share their experience of solving collisions like the one described above? * * * My vision is to use an application that is able to do the following things: * create a request for a server restart (or application rebuild/redeploy); * subscribe to notifications on the \"new request created\" event; * approve the request manually (by several members), or approve it automatically (in case nobody declines the request within some period of time); * subscribe to notifications on the \"request approved\" event; * solve the request; * subscribe to notifications on the \"request solved (done)\" event. It looks like the request has its own lifecycle or workflow, sketched below. I think some _bug tracking system_ (for example, Atlassian Jira) can help to create the request and start the workflow on it.
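The lifecycle I picture, as a rough sketch (Python-style; the state names and the notification hook are invented, just to show the shape):

    # Hypothetical lifecycle of a restart/rebuild request:
    TRANSITIONS = {
        'created':  ['approved', 'declined'],  # manual approval, or auto-approval on timeout
        'approved': ['solved'],                # the restart/rebuild is actually performed
        'declined': [],
        'solved':   [],
    }

    def advance(state, new_state):
        if new_state not in TRANSITIONS[state]:
            raise ValueError('cannot go from ' + state + ' to ' + new_state)
        notify_subscribers(new_state)  # mail/chat notification stub
        return new_state

    def notify_subscribers(event):
        print('notify:', event)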
But a bug-tracker-based solution does not look perfect (or even good) to me - it's too \"heavy and bulky\"."} {"_id": "117015", "title": "How can I provide guidance on technology choices through user stories?", "text": "I'm a project manager for a team of firmware engineers making the transition to adopting Scrum, and I'm moving to support the team in a Product Owner role. We've just gone through Scrum training and are beginning a month of coaching, which is going to be invaluable. Coming from a technical background, I can already see the challenge of shifting my point of view from the solution domain to focusing on business value. For me it's very tempting to think of \"how\" to solve a problem. But I like the concept of user stories and defining vertical slices of functionality that give value to the customer, letting the team determine the \"how\". Forgive me if this is a naive question, but as the PO, how can I provide guidance on specific technical direction through user stories? My concern is that in some cases, without guidance, the team may head down the wrong path on key technical decisions. As a simplistic example I might have a user story: > As a clinician I want the device to record information about the patient's > treatment so that I can determine if the patient's therapy has been > effective. Now say time to market is critical and I want to fork out for an off-the-shelf file system stack to save time; what conveys that to the team? They might go off and start writing their own stack from scratch. Is this sort of guidance provided through: 1. Conversations during sprint planning and backlog grooming? But if so, is it my place as the PO to even be suggesting how to solve a problem? 2. Acceptance criteria? I don't really like this; I have a bad feeling about making acceptance criteria so prescriptive that they specify how to solve a problem. 3. Constraints? Thanks in advance. I could (and will) ask our Scrum coach, but I won't be able to do that before Monday, and this question has been really bugging me :)"} {"_id": "255951", "title": "Proper Architecture for DBContext and Migrations with Multiple Projects", "text": "I have multiple projects that will be using the same business objects (customer, order), but the projects operate on different databases within the organization, and I would like to understand the proper architecture to serve such needs. Here is a list of the different types of projects in play: * An ASP.NET MVC front end to handle all general user actions. This front end is deployed to a dedicated instance per customer. * A centralized Windows Service to process FTP files and transmit the data to each customer instance. * An ASP.NET MVC front end to manage all customers and their instances. All business objects (customer, order, etc) are located in a Core class library. Each of the projects will need to use the business objects at one point or another, but connect to a different database. Should the DbContext reside in the Core library or in each project? Should Entity Framework migrations be located in the same place as the DbContext?"} {"_id": "255953", "title": "Python Socket Scripting. What am i doing wrong?", "text": "My socket program hangs at (clientsocket, address) = serversocket.accept() and doesn't spit out an error or anything. I followed the directions at https://docs.python.org/3/howto/sockets.html and I've been trying to figure it out for an hour now, but to no avail. I'm using python3, btw. What am I doing wrong?
EDIT: My indentation is all screwed up because I pasted it wrong, but other than that my code is as I have it in my file.

    #import socket module
    import socket

    #creates an inet streaming socket
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('socket created')

    #binds socket to a public host, and a well known port
    serversocket.bind(('127.0.0.1', 1024))
    #print(socket.gethostname()) # on desktop prints 'myname-PC'

    #become a server socket
    serversocket.listen(5) # listens for up to 5 requests

    while True:
        #accept connections from outside
        #print('In while true loop')

This works, but we never get to the next print statement. Why the hell is it catching at line 20?

        (clientsocket, address) = serversocket.accept()
        #clientsocket = serversocket.accept()
        print('Ready to serve')
        #now we do something with client socket...
        try:
            message = clientsocket.recv(1024)
            filename = message.split()[1]
            f = open(filename[1:])
            outputdata = f.read()
            #send an http header line
            clientsocket.send('HTTP/1.1 200 OK\\nContent-Type: text/html\\n\\n')
            for i in range(0, len(outputdata)):
                clientsocket.send(outputdata[i])
            clientsocket.close()
        except IOERROR:
            clientsocket.send('HTTP/1.1 404 File not found!')
            clientsocket.close()

"} {"_id": "161219", "title": "Could implicit static methods cause problems?", "text": "This is a purely hypothetical question. Say I create a class method that contains no references to instance variables or other resources. For example (C#):

    protected string FormatColumn(string value, int width)
    {
        return value.Trim().PadLeft(width);
    }

As far as I can see, there is absolutely no reason why this method could not be declared static: * It only uses method-scope variables. * It doesn't `override` a base class method. * It's not `virtual` or `abstract`. My questions are: Is there any benefit to calling a static method over an instance method? If so, why doesn't the compiler implicitly convert this to a static method? I'm certain I've missed some key point here. Any ideas?"} {"_id": "255955", "title": "What effect does using multiple childViewControllers views inside a ContainerViewController have on memory?", "text": "I have three childViewController views inside my ContainerViewController, and these three views are inside a horizontal scroll view. My intention is to build an app that has a custom camera on one childViewController view, a tableView which will display JSON data on another view, and a page which will show a list of the user's friends. The reason I am building it this way is to achieve a dragging/sliding effect between the three different child views. Although I get the result I want from a UI/UX perspective, I'm not sure how this will affect memory and performance. If I'm not mistaken, my containerViewController will be handling a lot of activity. Would there be an alternative way to architect this type of application? I understand I could have three separate view controllers using a Navigation Controller, but then I wouldn't achieve the UI/UX experience I want. I'm really after the dragging/sliding-between-pages experience, similar to, if not the same as, Snapchat/Tinder."} {"_id": "255956", "title": "Which aids to decision-making exist for going open-source?", "text": "I am involved in the development of a software product that I'd like to see become open source. My bosses will ponder this request benevolently, I assume, yet they still want to see some evidence that this is a good idea, or at least that it is not a bad idea.
They have little experience with open source themselves. We work in an academic public service environment, so financial interests are little to non-existent. I think their primary concerns are a bad reputation if we publish a project and nobody is interested, and maybe the potential loss of control over the development process. I wonder whether there is (possibly peer-reviewed) material out there that advocates going open source. But I'm also interested in documents describing a critical perspective on such a step. Moreover, case studies would be helpful, I think. The documents may be behind a journal paywall."} {"_id": "255957", "title": "Anonymous access to api REST, protection", "text": "I have a public website that does not require authentication. It's a lighting calculator for indoor cultivation. Anyone can enter, complete the process, and ultimately save their configuration for future use, sharing it on Facebook or Twitter. The configuration is saved as a document in a database, using a REST API. At this time nothing prevents someone from making a bot and filling my hard disk in a few hours. What steps can I take to protect my service?"} {"_id": "255958", "title": "What is the largest size a HashSet or TreeSet be?", "text": "So I am working on a solo project that involves **a lot** of strings. In one of my smaller test cases there will be _at least_ 70 million elements. So my question is: what is the largest possible size a TreeSet or a HashSet can reach before it can no longer grow, **OR** before it reaches a size at which it is no longer feasible to use in terms of efficiency?"} {"_id": "255959", "title": "Using a zLib like icensed library which has dependencies on LGPL or GPL libs; what license am I able to release my project under?", "text": "If I use an open source library which is released under a zLib-like license, but it has dependencies on libraries that use the LGPL or GPL, what license am I able to release my project under? From what I read on the net, it sounds like I can sell my project but I can't stop people from distributing it freely, as per the GPL?"} {"_id": "245512", "title": "How to check the space complexity of this program?", "text": "I have written my version of the strstr function in C. I am using a temporary array of size 26. Is the space complexity then O(1) or O(n)? This is my code:

    void strcheck(char t[], int n, char p[], int m)
    {
        int i, j, k;
        int temp[26];
        for (i = 0; i < 26; ++i)
            temp[i] = 0;
        for (i = 0; i < n; ++i) {
            k = t[i] - 'a';
            temp[k]++;
        }
        for (j = 0; j < m; j++) {
            k = p[j] - 'a';
            if (temp[k] > 0)
                temp[k]--;
            else
                break;
        }
        if (j == m)
            printf(\"string occurred\\n\");
        else
            printf(\"not occurred\\n\");
    }

The program works correctly; I just want to know about the space complexity. Thanks"} {"_id": "233180", "title": "Video conferencing server architecture", "text": "I am developing a video conference application with the following requirements: * Audio works like a conference call, where all participants may talk at the same time. * However, video works like broadcasting, where only one designated client broadcasts a single video to the rest. Clients will be Android devices. One way of doing this is to use a SIP stack (e.g. Asterisk) and use its conference call capabilities; however, the video conference part of Asterisk looks pretty undeveloped. Also, there might be some issues using SIP, as the app will be used over 3G and/or behind restrictive firewalls.
Another way is to use a streaming media server like Wowza, which, while impressive for live video streaming, does not seem to have conferencing capabilities. Is there a better way to approach this? This use case seems pretty common, and there are various similar solutions for desktop apps."} {"_id": "16325", "title": "What should a CS grad know?", "text": "In thinking of making a good introductory resource guide for new Computer Science students... **What is a good list of expected knowledge and ability for a computer science BS grad?** And if you graduated a while ago, how does this compare to the standards for graduates at that time? And, are there any things you would want college grads to know that most entering the field don't? Many thanks!"} {"_id": "16323", "title": "Object Oriented Programming Language performance ranking", "text": "After reading this post about the ideal programming language learning sequence, I am wondering what the answers would have been if the question had been performance-oriented instead of learning-oriented. Since there are many programming languages, I chose to ask the question about OO languages to keep it less subjective, but any thoughts or comparisons about non-OO languages are appreciated :D If we omit programming effort, time and costs, what is your ranking of the most powerful object-oriented languages?"} {"_id": "19513", "title": "Opinions on distribution of software via virtual appliance", "text": "I've created a distribution of my open source application framework as a working virtual appliance. It includes everything needed to get started with the tutorial. The distribution is Fedora 14 running Tomcat 5.5 and Oracle 10g Express Edition, plus my framework. It is completely preconfigured and boots into a working, running copy. Would this be something you might try? What assurances might you need to get you to try it? **Edit:** The VM is just over a 2 GB download. Alternatively, it is also available as a 23 MB download of the source and a PDF detailing how to configure the Tomcat and Oracle dependencies."} {"_id": "96735", "title": "I've failed at PHP several times. Is Ruby the Cure?", "text": "_Extremely, extremely_ subjective question here, but it's something I've been struggling with for quite a while. I've seriously tried to become a reasonable PHP coder for the past several years, but I've really failed every time. I hate to describe myself as a beginner, because I've been designing websites (using WordPress, Drupal, etc) for years, but I still just can't seem to get better at programming. Could it be that I have some kind of allergy to PHP? I went through Chris Pine's awesome intro to Ruby about a week ago (for about the fifth time), and though it did all seem much clearer to me than PHP, I wondered if I was just switching languages to find an easy way out. The things I struggle with in PHP all seem elementary - when to use a function, how to return database queries in foreach/while statements, when to turn those queries into reusable functions, adding arguments to functions, etc, etc. And all the OOP stuff that I keep seeing these days just flies over my head. I guess my question(s) are: Am I going about learning how to program in the wrong way? Do I have some aversion to PHP that's preventing me from catching on? If I keep pushing at Ruby/Rails, will it just eventually 'click'?
Or, the one I fear, am I just unlikely to ever be a programmer?"} {"_id": "187828", "title": "How to balance Front End and Back End coding skill", "text": "This is a question specifically for web application developers. In recent years I have spent most of my time on SaaS products, which leads you to dig deep into back-end stuff: how to use less memory, make sessions work with a load balancer, write efficient SQL queries, tune performance, and solve technical difficulties. There is no web designer working with us; generally what we get is a snapshot of what the system should look like, but nobody here cares how you produce it. Now the company plan has changed and the SaaS product will reach end of life soon, and I feel I lack HTML/CSS knowledge; I mean, I never got a chance to learn those things as a whole, only what I needed to work things out. All of the above seems to have left me out of fashion on front-end skills; I tend to ignore learning about them. Only just now did I realize that might be a mistake: for instance, I still tend to use table layouts in HTML and only recently realized what bad practice that is. I am starting to wonder: if I were going to find a new job, how good should I be at front-end coding compared to back-end skills? Is it OK for now, as more and more web designers or user interface designers are hired, so that we don't need to worry about HTML/CSS too much? Or should I pause my back-end skill improvement, jump back to front-end for a while, and then make it half and half? Sorry, English is not my native language, so something might not be expressed clearly. Thanks for reading this long post!"} {"_id": "187829", "title": "OOP question for product catalog", "text": "I have a question that has been bugging me for some days. I made a webshop for a good friend of mine, and I have an OOP class question. People can buy clothing in the shop. The problem is how to show different information for different products. What I have now is the following: * an abstract Product class * Cloth extends Product * Trousers extends Product The problem is that trousers can contain extra information like: * fiber type * structure How can I show this information in the shop? I query for all products and receive the Product class with only the selling price and name. What is the correct way to figure out that something is a trouser, so I can show information like fiber structure on the detail page?
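For example, is something along these lines the right direction (a rough Python-style sketch of the idea, not my real shop code; names invented)?

    class Product:
        def __init__(self, name, price):
            self.name = name
            self.price = price

        def details(self):
            # attributes common to every product
            return {'name': self.name, 'price': self.price}

    class Trousers(Product):
        def __init__(self, name, price, fiber_type, structure):
            super().__init__(name, price)
            self.fiber_type = fiber_type
            self.structure = structure

        def details(self):
            # extend the common attributes with trouser-specific ones
            d = super().details()
            d.update({'fiber type': self.fiber_type, 'structure': self.structure})
            return d

The detail page would then just render whatever `details()` returns, without checking the concrete type.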
"} {"_id": "96731", "title": "Are ASP.NET Web Parts useful & common enough practice in the real world, outside of SharePoint projects?", "text": "Studying for the .NET 70-515 exam. There is a chapter on Web Parts. I will suck it up and learn what I need to learn here, but... I am wondering: do ASP.NET developers in the real world often turn to Web Parts _outside_ of SharePoint development? I do not **ever** plan to deal with SharePoint - I prefer working on large-scale public-facing web sites and applications - and Web Parts (SharePoint, too) hardly seem useful to me in that context. But what do I know? Help me understand the relevance or irrelevance of this technology, please. Should I keep ASP.NET Web Parts on my _ignore-list_, if SharePoint is already on it too? Thanks!"} {"_id": "65742", "title": "Stored Procedures a bad practice at one of worlds largest IT software consulting firms?", "text": "I'm working on a project at one of the world's top 3 IT consulting firms, and was told by a DBA that company best practices state that stored procedures are not a \"best practice\". This is so contrary to everything I've learned. Stored procedures give you code reuse and encapsulation (two pillars of software development), security (you can grant/revoke permissions on an individual stored proc), protection from SQL injection attacks, and also help with speed (although that DBA said that starting with SQL Server 2008, even regular SQL queries are compiled if they are run enough times). We're developing a complex app using an Agile software development methodology. Can anyone think of good reasons why they wouldn't want to use stored procs? My guess was that the DBAs didn't want to maintain those stored procs, but there seem to be way too many negatives to justify such a design decision."} {"_id": "214234", "title": "Basic Objective-C Questions", "text": "I'm new to Objective-C. I'm following \"Programming in Objective-C\" (5th Edition) by Stephen Kochan, and I don't have anyone to take my questions to. I'm confused by this question: Q. Is it necessary to use \"-\" or \"+\" before every method declaration in an interface? What will happen if I don't use \"-\" or \"+\" before a method declaration in an interface?"} {"_id": "150443", "title": "Should I return iterators or more sophisticated objects?", "text": "Say I have a function that creates a list of objects. If I want to return an iterator, I'll have to `return iter(a_list)`. Should I do this, or just return the list as it is? My motivation for returning an iterator is that this would keep the interface smaller - what kind of container I create to collect the objects is essentially an implementation detail. On the other hand, it would be wasteful if the user of my function has to recreate the same container from the iterator, which would be bad for performance."} {"_id": "150448", "title": "Is there a way for developers and their clients to make a 'safety net' for the final transaction of software for money?", "text": "First off, we presume that the projects in question are not large or important enough for the parties involved to establish a formal contract, and that the parties are not able to meet in person. So, how can the programmer(s) be sure that they will be paid (in full), and how can the client be sure that their requirements are met, e.g. testing the software without acquiring the full source code?"} {"_id": "214239", "title": "Comparing TCP/IP applications vs HTTP applications", "text": "I'm interested in developing a large-scale user-facing website that is written in Java. As for design, I'm thinking of developing independent, modular services that can act as data providers to my main web application. As for writing these modular services (data providers), I can leverage an existing framework like Spring, develop these services following the RESTful design pattern, and expose resources via HTTP with a message format like JSON... or I can leverage an existing network framework like Netty (http://netty.io/) and a serialization format like Protobufs (https://developers.google.com/protocol-buffers/docs/overview) and develop a TCP server that sends the serialized protobuf payload back and forth. When should you choose one over the other?
Would there be any benefit to using a serialization format like Protobufs and sending a stream of bytes over the wire? Would there be overhead in just using JSON? How much overhead is there between using raw TCP/IP and using HTTP? When should you use Spring over Netty, and vice versa, to build such a service?"} {"_id": "250205", "title": "Using home server to develop, moving everything out of local machine", "text": "I'm sorry if this question is too broad, but I hope it's not :) I'm thinking about setting up a small & cheap home Linux-based server (one of those $200-$300 micro PCs) to move my whole development environment off my laptop (databases, interpreters etc.). The main reason is that, while I earn my meal coding PHP, I'd like to try new things that require installing new stuff, and I want to keep my laptop running smoothly. Another reason is that I find setting up and maintaining a dev environment on OS X somewhat disturbing (you can't just upgrade PHP; you have to do some tricks to do it). The setup I thought about is: 1\\. Mount the server's drive via `Samba` 2\\. Use `NetBeans`, saving directly to the server; work and run apps on the server. Keeping that in mind, my main (and IMHO quite specific) question is: will it work? I mean, won't this setup slow down my development instead of making it cleaner and separated from my development machine? By slowing down I mean: Samba issues, configuration issues, gotchas and so on. Technologies I intend to use: `mysql`, `postgres`, `php`, `python`, (maybe) `java`, (maybe) `RoR`, (maybe) `Solr`."} {"_id": "185130", "title": "How can I update a large legacy codebase to meet specific quality standards?", "text": "There is a lot of information about tools and techniques for improving legacy codebases, but I haven't come across any successful real-world case studies. Most advice is on the micro level, and while helpful, it doesn't convince many people because of a lack of evidence that it can help at the macro level. I am looking specifically for incremental improvements that have been proven a success in the real world when updating a large legacy codebase to meet today's quality standards, and not a complete rewrite. Before: * Large: greater than 1MLOC * Legacy: no automated tests * Poor quality: high complexity, high coupling, high escaped defects After: * Automated tests * Easier updates/maintenance * High quality: lowered complexity, decoupled code, few escaped defects What kind of incremental steps have been proven in the real world to update a large legacy codebase successfully to meet the above quality standards, without going through a total rewrite? If possible, include an example company or case study of a large legacy project that has gone through a \"successful\" quality improvement process in your answer to back it up."} {"_id": "250202", "title": "Loading dynamic css based on user", "text": "I want to provide a different UI theme based on which user is logged in. For that I have come up with the following 2 options: * Create separate files for all themes, duplicating all the common CSS. This way I can easily do some logic in scripting and get the required CSS file, but I end up having the same CSS class code in many different theme files. This is a real pain when you have to change something common to all files. * Make a common file and separate theme files containing only theme-specific CSS. This way I always need to load 2 CSS files: the common file by default, and then the theme-specific file to override the default CSS classes (sketched below).
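To illustrate option 2 in code terms, roughly (a backend-agnostic, Python-style sketch; the names are invented):

    def stylesheets_for(user_theme):
        # the common rules always load first; the theme file only overrides
        sheets = ['common.css']
        if user_theme:
            sheets.append('theme-' + user_theme + '.css')
        return sheets

    print(stylesheets_for('dark'))  # ['common.css', 'theme-dark.css']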
Please help me choose the better option. If there is another simple and easy way to do it, please let me know."} {"_id": "158278", "title": "How I identify true error?", "text": "I am writing a program to scrape some data from the web. The pages are sequential (1, 2, 3 ...), but I have no idea when they will stop. I combine a prefix and an integer to make a link for the Python urllib to parse, for example: 'http://some.domain.com/page' + '1' + '.htm'. The request fails if the link is invalid; however, there can also be other errors, such as a network error or a connection timeout, which will resolve themselves. I could retry a couple of times on those errors. Aside from the 'Page not found' error, there can be other errors such as 'Internal Server Error', which is thrown when the server is down. These errors won't fix themselves; I should probably move on or stop the program. Back to my program: because I don't know when the page index is going to end, I set the end integer to 9999. The links may run out after a few hundred; I should identify this and break out of the loop. What would you do? Gather all the possible errors, put them in the exception handling, and treat them differently? For now I just stop after 10 failures."} {"_id": "158277", "title": "Decorators in Python", "text": "I am just learning Python, and I am currently playing with the Tornado framework. I see this class:

    class AuthHandler(BaseHandler, tornado.auth.GoogleMixin):
        @tornado.web.asynchronous
        def get(self):
            if self.get_argument(\"openid.mode\", None):
                self.get_authenticated_user(self.async_callback(self._on_auth))
                return
            self.authenticate_redirect()

I am having trouble grasping what the decorator does there (@tornado.web.asynchronous). Does it overwrite that function? You can see the full source at https://github.com/facebook/tornado/blob/master/demos/chat/chatdemo.py"} {"_id": "158276", "title": "scons and python unit tests best practices", "text": "I am using scons to build a large project containing a mix of C++ and Python. I would like scons to run Python unit tests, whether using nose or not. Currently, we have a long list of test files and run a test builder on each one. This causes each test file to be run as a separate script, which feels inelegant and inefficient. Is there a better way of doing this?"} {"_id": "97284", "title": "which are the best front end web developer blogs", "text": "Looking for recommendations on industry-expert blogs from front-end web developers (emphasising RIA applications; e.g. HTML5, JavaScript, AJAX, jQuery, Ext JS). Trying to sort the best from the rest. Which blogs are well respected and worth following?"} {"_id": "215720", "title": "Is there a name for the Builder Pattern where the Builder is implemented via interfaces so certain parameters are required?", "text": "So we implemented the builder pattern for most of our domain objects to help with understanding what is actually being passed to a constructor, and for the normal advantages a builder gives. The one twist is that we exposed the builder through interfaces so we could chain required and optional functions, to make sure that the correct parameters were passed. I was curious whether there is an existing pattern like this.
Example below:

    public class Foo {
        private int someThing;
        private int someThing2;
        private DateTime someThing3;

        private Foo(Builder builder) {
            this.someThing = builder.someThing;
            this.someThing2 = builder.someThing2;
            this.someThing3 = builder.someThing3;
        }

        public static RequiredSomething getBuilder() {
            return new Builder();
        }

        public interface RequiredSomething {
            public RequiredDateTime withSomething(int value);
        }

        public interface RequiredDateTime {
            public OptionalParameters withDateTime(DateTime value);
        }

        public interface OptionalParameters {
            public OptionalParameters withSomething2(int value);
            public Foo build();
        }

        public static class Builder implements RequiredSomething, RequiredDateTime, OptionalParameters {
            private int someThing;
            private int someThing2;
            private DateTime someThing3;

            public RequiredDateTime withSomething(int value) { someThing = value; return this; }
            public OptionalParameters withDateTime(DateTime value) { someThing3 = value; return this; }
            public OptionalParameters withSomething2(int value) { someThing2 = value; return this; }
            public Foo build() { return new Foo(this); }
        }
    }

Example of how it's called:

    Foo foo = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).build();
    Foo foo2 = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).withSomething2(3).build();

"} {"_id": "69668", "title": "Genetic programming", "text": "I was recently browsing Reddit and came across a post linking to a \"JavaScript genetic algorithm\" example. I've really been fascinated by the concepts of genetic algorithms and programming; however, even after some Googling I am still left slightly confused. Can someone please explain to me how it works? I suppose the vocabulary terms are confusing me more than anything else. I've never taken a computer science class, but I do program in my spare time and enjoy it. I'm 16 years old, so I would appreciate brief examples and perhaps explanations. Just the concept of genetic programming, how I could implement it into my projects, and why.
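Here is my rough understanding in code form (a Python sketch I pieced together from descriptions; please correct me if this isn't really a genetic algorithm). It tries to evolve a random string toward a target:

    import random

    TARGET = 'hello world'
    CHARS = 'abcdefghijklmnopqrstuvwxyz '

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))  # number of matching characters

    def mutate(s):
        i = random.randrange(len(s))
        return s[:i] + random.choice(CHARS) + s[i + 1:]

    population = [''.join(random.choice(CHARS) for _ in range(len(TARGET)))
                  for _ in range(100)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        parents = population[:20]  # selection: keep the fittest
        population = parents + [mutate(random.choice(parents))  # children, one mutation each
                                for _ in range(80)]
    print(generation, population[0])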
Thanks."} {"_id": "81599", "title": "Assigning effective complexity ratings to tasks", "text": "I want to make excellent estimations about how much effort is required for a particular task. The constraint is that I have already solved the problem and written down the steps required to solve it. I also have a description of what kind of problems it solves, a compiled example program, its source code with comments, tests and test data, the language abnormalities involved in the particular solution, etc. And I have done this for a set of languages X. My sub-questions are: * Did I ask the right sub-questions? * Is this a Fermi problem (or paradox)? * What should I base my decision upon? * What elements should it contain to be effective? * What should I know before giving a rough estimation? * What kind of scale should it use: linear, logarithmic, ...? * How does the estimation change if the definition of the required feature changes? * How do I evaluate it? Is there some kind of table, like this IBM (bogus) one? * How does the estimation differ if I'm asked to solve it in language {x, y}, where x ∈ X and y ∉ X? My question is: **What is the most efficient way to rate task recreation difficulty?** ## About the question Although I have been learning the profession of software engineering for quite some time, only now do I find myself going to work as one. In university I found that 2 time management techniques work really well: * doing projects ASAP - finishing several weeks early also means virtually no competition at all to be graded * timeslicing - deciding how much continuous time I will spend on a task before switching to a new one. If it's important, I'll just schedule a new time slice sooner than I otherwise would. They seem incompatible with this task; or rather, adapting them seems a complex problem. ## Example: Irregular click event hot-spots Difficulty rating : ★ ★ ☆ | Time estimation : 1 work day ![enter image description here](http://i.stack.imgur.com/EhTHi.jpg) Example of required data creation process ![enter image description here](http://i.stack.imgur.com/okHb2.jpg)"} {"_id": "69665", "title": "How easy/difficult is it to start using something like Python/Django in an existing Java project?", "text": "I have a Java project which accesses some 3rd-party libraries. For other things I'd like to use something newer, like Python maybe. Is it easy to incorporate into an existing project? How do people usually do that? Thanks, Alex"} {"_id": "154565", "title": "Where does a \"Technical Programmer\" fit in, and what does the title mean?", "text": "Was: \"What is a 'Technical Programmer'\"? I've noticed on job posting boards a few postings, all from European companies in the games industry, for a \"Technical Programmer\". The job descriptions were similar, having to do with tools development, 3D graphics programming, etc. It seems to be somewhere between a Technical Artist who's more technical than artist or who can code, and a Technical Director, but perhaps without the seniority/experience. Information elsewhere on the position is sparse. The title seems redundant, and I haven't seen any American companies post jobs by that name, exactly. One example is this job posting on gamedev.net, which isn't exactly thorough. In case the link dies: > **Subject**: Technical Programmer > > Frictional Games, the creators of Amnesia: The Dark Descent and the Penumbra > series, are looking for a talented programmer to join the company! > > You will be working for a small team with a big focus on finding new and > innovating solutions. We want you who are not afraid to explore uncharted > territory and constantly learn new things. Self-discipline and independence > are also important traits as all work will be done from home. > > Some the things you will work with include: > > * 3D math, rendering, shaders and everything else related. > * Console development (most likely Xbox 360). > * Hardware implementations (support for motion controls, etc). > > > All coding is in C++, so great skills in that is imperative. _Revised Summarised Question_: So, where does a programmer of this nature fit into a software development team? If I had one on my team, what tasks would I expect them to complete? Can I ask one to build a new level editor, or optimize the rendering engine? It doesn't seem to be a \"tools programmer\", which focuses on producing artist tools, often in high-level languages like C#, Python, or Java. Nor does it seem to be working directly on the engine, nor a graphics programmer, as such. Yet there is a strong C++ requirement, which was mirrored in other postings besides the one I quoted. **Edited To Add**: As far as it being a low-level programmer role, I had considered that, but a requirement for Assembly was absent from the posting. Instead, they tend to require familiarity with higher-level hardware APIs such as DirectX or DirectInput.
I wasn't fully clear in my original post. I think, however, that Mathew Foscarini has it right in his answer, so barring someone who definitely works with or as a \"Technical Programmer\" stepping in to provide a clearer explanation, I'll go with that. A generalist, which also fits the description of a more-technical-than-artist TA."} {"_id": "154563", "title": "Which tools do you use to manage your todo list?", "text": "I would like to optimize the way I manage my todo list. My tasks are, for a large part, related to `C++` coding, but they also include readings, presentations, research... Lately I discovered the `@todo` command in doxygen, which is pretty cool for the coding part. I would like to keep clear track of all my tasks in one big tool, though. Any ideas?"} {"_id": "191549", "title": "Began with iOS development, skipped CS, and getting hit hard at interviews?", "text": "I'll start with a brief background: my degree is in Recording Arts, so mainly audio engineering and stuff relating to the music industry, with some synthesis work, which was my link into coding (via an environment called PureData). I've been working for an iOS developer for the last three years handling audio stuff (mainly musical), with minimal coding work to do (things like localization, etc.). I've written some basic iOS projects myself, which are slowly increasing in complexity, and I'm pretty comfortable with the basic concepts of iOS development (MVC, delegation, KVC, etc.). I've been attending interviews just to test the water and see where I'd fit in with some junior iOS developer jobs that could really challenge me and help me grow as a developer, and I'm being hit really hard by interviewers who expect me to have a CS degree and really in-depth knowledge of basic CS concepts. This may be a statement born out of frustration, but they don't seem to want to touch any potential employee who didn't get a Computer Science degree or doesn't have thorough knowledge of the material. As an example, one that hit me hard in an interview yesterday was something theoretical about behaviours belonging to classes and being accessible to a non-inherited class... I had no idea, really. I can go in and code classes with inheritance, but discussing them on an abstract level with a sheet of blank paper in front of me is very daunting. Another example was discussing \"data structures\" - as far as I'm aware, those are just variable types, arrays/dictionaries, etc., but the guy acted like I knew 5% of the story there and seemed surprised. I've since looked up these things and learned a little more, but I think this is a band-aid on the problem. It's pretty demoralising to feel like I'm progressing as a developer but also feel like I'm still nowhere near a proper career as a software developer, and I realise this post may well be closed (though I genuinely do think this is the best place for it and hope others can offer advice), but I'm trying to establish exactly how I can learn those core CS concepts without needing to reiterate the basic coding constructs I'm already very familiar with. I've tried watching along with various iTunes U courses, some of the MIT stuff online, etc., and they all jump straight into Java in lesson 1, 2, or 3 and begin talking about if statements, variables, etc., then move up in complexity while remaining almost exclusively with the code itself.
Perhaps I'm making too big a distinction between programming and CS, but I'm in a boat where I'm easily bored of hearing what an \"if\" statement or a \"for\" loop does over and over, and want to understand the underlying concepts that the interviewers I've dealt with all expect me to know like the back of my hand. I'm hoping some folks here will have some advice on patching up those CS-shaped holes in my knowledge so that I can properly begin my career as a software developer, rather than simply getting better and better at iOS development and leaving a gaping hole where my actual education should have been. **EDIT:** I want to try to make this into a more specific question which can be answered as opposed to discussed, as per the rules, so: **Is it possible to overcome a missing CS education by just pushing forward with programming and learning concepts as you come across them?**"} {"_id": "69663", "title": "PGAS Systems in the wild", "text": "Has anyone had any experience with developing a Partitioned Global Address Space product or system, or an application that used PGAS, or anything PGAS-like? I'm looking for insights, war stories, and practical approaches with regard to languages used, wisdom learned, and ideas/applications either dreamed of or spun off from actual development."} {"_id": "195755", "title": "How do distributed version-control systems deal with fragmentation?", "text": "Here is the scenario: X is the author of a piece of software. X releases v1.0 under an open source license on GitHub and moves on. People interested in the software fork and improve it, and now there are 15 different flavours of the software. A few also try to send a pull request back; however, since X no longer works on the software, the pull requests just sit idle on his project page. As a new user wanting to use the most active and stable release of the software, how do I learn which of the 15 I should clone and use? I would certainly like to use the more active branch, since it may have more bug fixes and feature upgrades. I see that the model works for bigger open-source projects, because the author of the software is usually well known in the community; as a result, the user base naturally knows which branch to clone from. However, how does one solve this problem for software where the author, maintainer and several contributors are all practically unknown in the open-source community?"} {"_id": "120094", "title": "Interviewing Process (using blank paper etc)", "text": "> **Possible Duplicate:** > What are the pros and cons for the employer of code questions during an > interview? I am curious to know why companies hand you a blank piece of paper and ask you to write code. This confuses me because, these days, IntelliSense, Google, Stack Overflow, etc. are common sources for looking up syntax, and/or the IDE gives you a colored indicator if your syntax is wrong. I usually get stressed out in these situations. I am curious to know the opinions of other developers; I am posting this on this forum hoping to get helpful feedback from other experienced developers."} {"_id": "46379", "title": "Does it make sense to develop open source python library for database inspection?", "text": "Some time ago I came up with an idea for a library for database inspection. I started developing it and got some very basic functionality working, just to check that it's possible. Recently, however, I am having second thoughts about whether such a project would really be useful.
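To give a feel for what I mean by inspecting database structure, here is the kind of raw work the library would wrap behind a friendlier interface (a runnable sketch against the standard library's sqlite3, just for concreteness):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
    # list every table and its column names
    for (table,) in conn.execute(\"SELECT name FROM sqlite_master WHERE type='table'\"):
        cols = [row[1] for row in conn.execute('PRAGMA table_info(' + table + ')')]
        print(table, cols)  # users ['id', 'name']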
Do you think such a suite would be useful for other developers/database administrators/analysts? I know that there is pgAdmin for `PostgreSQL` and some tool for `sqlite3`, and that there is a `Java` tool called `DBInspect`. Usually I would be against creating a new tool and would rather join an existing project, but I am not a `Java` programmer (and I would rather stick to `python` or `C`, which I like), and none of these projects provides a library for database inspection. Anyway, I would like to hear some opinions from fellow developers on whether such a project makes sense, or whether I should try to spend my free time on developing something else."} {"_id": "1877", "title": "How much information can a user reasonably process from a UI?", "text": "As an example, say there's an interface that contains a table/grid of information that is periodically updated. The table is meant to represent an event that has happened, perhaps the date and time of a stock price change. The actual frequency of these events could be dozens of events per second. This is obviously too much information for a user to process/understand, so I'm trying to find out how much information a user COULD process in a given amount of time so that we can throttle the data and come up with an alternate display. I know some studies have been done on this, but I can't seem to find an authoritative source."} {"_id": "94397", "title": "Online Advertising And Marketing Your Services?", "text": "I have been working on freelance sites for a good 4 or 5 years, bending over backwards to build a decent portfolio and generate great ratings. I take huge pride in my work (web applications). I'm completely lost because when I think about what would happen if I suddenly lost my freelance account, it isn't a pretty picture. I have literally no idea where else I could advertise my services apart from Google paid advertising. Any suggestions? I'd of course be more than willing to pay for marketing and such. I've been searching Google for ages and can't find much advice on where to advertise to secure good clients for web development work. I say good clients because I mean actual business owners, not somebody else who is outsourcing to me (where do they find clients?). I'd appreciate any help."} {"_id": "94390", "title": "In what visual format should I present an estimate for a software project?", "text": "What I mean is, do people print some kind of line-item form? What data does it contain? What are the line items, usually? Otherwise, do you just present a written document with a descriptive paragraph? Do you show the number of hours or just a lump sum? Perhaps you only show dollars per hour and an estimate of the hours. Do you give multiple estimates, a high and a low? Is there any best practice for making sure you don't screw yourself or your client?
Just trying to get an idea of a few samples of what a client might want to hold in their hands or see on their LCD screen so they can authorize the work that is being presented and agree on the cost."} {"_id": "199631", "title": "Architecture For Mockable DAL On Large Projects", "text": "I have recently been reading an article about creating a blog using ASP.NET and MVC, and in the article the author splits the Data Access Layer into a separate Class Library and creates an interface for it, to allow the DAL to be mocked for testing. This works great for small projects, but I am struggling to see how this will scale. For example, in the article you end up with the following interface:

    public interface IBlogRepository
    {
        IList<Objects.Post> Posts(int pageNo, int pageSize);
        int TotalPosts(bool checkIsPublished = true);
        IList<Objects.Post> PostsForCategory(string categorySlug, int pageNo, int pageSize);
        int TotalPostsForCategory(string categorySlug);
        Objects.Category Category(string categorySlug);
        IList<Objects.Post> PostsForTag(string tagSlug, int pageNo, int pageSize);
        int TotalPostsForTag(string tagSlug);
        Objects.Tag Tag(string tagSlug);
        IList<Objects.Post> PostsForSearch(string search, int pageNo, int pageSize);
        int TotalPostsForSearch(string search);
        Objects.Post Post(int year, int month, string titleSlug);
        IList<Objects.Category> Categories();
        IList<Objects.Tag> Tags();
        IList<Objects.Post> Posts(int pageNo, int pageSize, string sortColumn, bool sortByAscending);
        void AddPost(Objects.Post post);
    }

There is then an associated .cs file with the implementation of this interface. How would you implement a similar architecture for a much larger project? For example, the project I have at work consists of 25 controllers, each having, as a minimum, list, add, edit, view, delete and count operations. That would lead to an interface with 150+ functions. Is this kind of architecture still suitable for larger projects, and if so, how would you structure it to avoid having a single file implementing 150+ functions?"} {"_id": "81040", "title": "Is there any hope for writing good code atop a horribly designed database?", "text": "Here's my predicament. One of several programs I've recently inherited is built with a horrible database on the backend. The esteemed creators of it apparently did not appreciate relational concepts. A table for each and every client, named as a unique client ID. Eighty-three cryptically named fields. The code is all procedural with dozens of concatenated inline SQL statements. As we weren't provided with an important ancillary application that runs off the same database, I've been tasked with recreating it from scratch. I'm a sole developer, and this isn't even my primary responsibility, as at least half of my time is taken up by operations stuff. There's an unavoidable deadline set for 30 days from now. Despite my inexperience, I'm certain I could have designed this database and the existing application much better than they were, but I don't really think it's realistic for me to alter the database, adjust the existing application, and be sure I didn't break anything, all while needing to create the additional application this quickly. So let's assume I am stuck with the terrible database. Needing to work with such a bad structure, would anything I write that conforms to it just add to the heaping pile of technical debt to be shelved away until something completely breaks or new functionality is needed? How could I approach this situation and get something good out of it besides a hopefully functional application?
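One idea I keep circling back to is to quarantine the mess behind a thin gateway, so anything new I write sees a sane model and only one module knows about the table-per-client layout and the cryptic fields (a rough Python sketch of the shape I mean; every name here is invented):

    class ClientRecordGateway:
        # The only module that knows each client has its own table.

        # Map the cryptic field names to meaningful ones in ONE place.
        COLUMN_MAP = {'fld07': 'first_name', 'fld08': 'last_name'}  # hypothetical names

        def __init__(self, connection):
            self.connection = connection

        def _table_for(self, client_id):
            if not client_id.isalnum():   # never trust a table name
                raise ValueError('bad client id')
            return 'client_%s' % client_id

        def records_for(self, client_id):
            cursor = self.connection.execute('SELECT %s FROM %s' % (
                ', '.join(self.COLUMN_MAP), self._table_for(client_id)))
            return [dict(zip(self.COLUMN_MAP.values(), row)) for row in cursor]

At least that way, if the schema ever does get fixed, only the gateway has to change, and everything above it can be unit tested against a fake gateway.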
edit: In case anyone's interested, we ended up scrapping this horrible database and the application that ran on it. We outsourced the creation of the ancillary application (I wasn't involved in setting this up) to ultimately two different contractors who both ended up falling through on us, accomplishing nothing. I ended up having to rush out a horrific, partially functional hack of a fix in three days that's still in use today."} {"_id": "81041", "title": "Starting a job in the financial sector, advice needed", "text": "Starting a job in the financial sector in the next couple of weeks. I have never done financial programming (e.g. loans, mortgages, etc). Just to get acquainted and familiar with this kind of thing, are there any open-source application libraries in C# that I could look at and get a feel for? Any suggestions?"} {"_id": "199635", "title": "Is there a time bomb in sequential process id allocation when the `unsigned` wraps?", "text": "I'm designing and implementing a multiprocessing postscript interpreter (\"lightweight\" processes simulated by a single OS-thread), and I need to generate a unique identifier for each process in the system, as well as maintain ids for any dead processes which still have live references. (Reference: Adobe PLRM, 2ed, Ch. 7 Display PostScript, http://partners.adobe.com/public/developer/en/ps/psrefman.pdf) I've started with a simple increment, which arithmetically maps to the table index for the per-process data. http://code.google.com/p/xpost/source/browse/itp.c#115

    unsigned nextid = 0;  // global (file-static) counter

    unsigned initctxid(void) {  // generate a new cid
        while ( ctxcid(++nextid)->state != 0 )  // .state == 0 means free
            ;
        return nextid;
    }

    context *ctxcid(unsigned cid) {  // get context data from cid
        return &itpdata.ctab[ (cid-1) % MAXCONTEXT ];
    }

But there's gonna be a big problem if (when) the unsigned wraps around. Perhaps not scary for a game or an application, but this thing is supposed to be a server. Eventually. I'd like to avoid writing (in earnest) the disclaimer, \"warning: will start having strange problems after running for a long time\". So, it should be easy enough to detect when it wraps, but what then? Bail out? So the situation/scenario so far is: you're a multitasking postscript server with a handful of processes with cids allocated during one _epoch_ of the generator, and the epoch has just turned. My thought (not impossible, just seems really hard) is to _compact_ these existing IDs down to the _0..N_ range (rewriting all references, scanning all memory if necessary) and reset `nextid` to N. But that's gonna be a pain in the butt. Is there a different way to generate these IDs so I don't have to do a big garbage-collect on them, and have it work, you know, _perpetually_? Edit: A fact I neglected to mention was that the Display PostScript reference says IDs are not re-used during a running instance of the system. But since these IDs are not exposed, they cannot be saved in any form other than the context object. So re-use should be just fine as long as no running process can know about it (contains an old context object in accessible memory)."} {"_id": "199636", "title": "Design suggestion required to create an Export plugin", "text": "I am trying to create an Export Module for our application; this seems a bit complex to me, so I am posting it here to get some guidelines. In our database we have a list of Products, which can be exported as XML, RTF or PDF. The user can select one or many products from the list to export. After selecting the product(s), he/she is presented with a list of export types (XML, RTF, PDF) and a list of connections (where to send the export files), like FTP, HTTP, Email, etc., whose details are already configured. This export is used by many different users, who have different needs. For example, User1 needs only Description, User2 needs Description+Thumbnail, etc.
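The rough shape I am picturing is one small class per export type and one per connection type, composed at runtime (a quick Python sketch just to show the idea; none of this exists yet and all names are invented):

    class XmlExporter:
        def export(self, products, fields):
            # fields is the per-user selection, e.g. ['description', 'thumbnail']
            body = ''.join('<product>%s</product>' % product[field]
                           for product in products
                           for field in fields if field in product)
            return '<export>%s</export>' % body

    class FtpConnection:
        def send(self, payload, target):
            print('would FTP %d bytes to %s' % (len(payload), target))

    def run_export(products, fields, exporter, connection, target):
        connection.send(exporter.export(products, fields), target)

    run_export([{'description': 'Blue chair'}], ['description'],
               XmlExporter(), FtpConnection(), 'ftp://example/outbox')

With that shape, a new format or a new destination is one new class, and the per-user field list is just data.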
Should I create a Manager class for each export type (XML, RTF, PDF)? And also a Manager class for the connections? And for the different users, should I create an interface?"} {"_id": "197562", "title": "Why is there no 'finally' construct in C++?", "text": "Exception handling in C++ is limited to try/throw/catch. Unlike Object Pascal, Java, C# and Python, even in C++ 11, the `finally` construct has not been implemented. I have seen an awful lot of C++ literature discussing \"exception safe code\". Lippman writes that exception safe code is an important but advanced, difficult topic, beyond the scope of his Primer - which seems to imply that safe code is not fundamental to C++! Herb Sutter devotes 10 chapters to the topic in his Exceptional C++! Yet it seems to me that many of the problems encountered when attempting to write \"exception safe code\" could be quite well solved if the `finally` construct were implemented, allowing the programmer to ensure that even in the event of an exception, the program can be restored to a safe, stable, leak-free state, close to the point of allocation of resources and potentially problematic code. As a very experienced Delphi and C# programmer I use try..finally blocks quite extensively in my code, as do most programmers in these languages. Considering all the 'bells and whistles' implemented in C++ 11, I was astonished to find that 'finally' was still not there. So, why has the `finally` construct never been implemented in C++? It's really not a very difficult or advanced concept to grasp, and it goes a long way towards helping the programmer to write 'exception safe code'."} {"_id": "190716", "title": "Is relying on implicit argument conversion considered dangerous?", "text": "C++ has a feature (I cannot figure out the proper name of it) that automatically calls matching constructors of parameter types if the argument types are not the expected ones. A very basic example of this is calling a function that expects a `std::string` with a `const char*` argument. The compiler will automatically generate code to invoke the appropriate `std::string` constructor. I'm wondering, is it as bad for readability as I think it is? Here's an example:

    class Texture {
    public:
        Texture(const std::string& imageFile);
    };

    class Renderer {
    public:
        void Draw(const Texture& texture);
    };

    Renderer renderer;
    std::string path = \"foo.png\";
    renderer.Draw(path);

Is that just fine? Or does it go too far? If I shouldn't do it, can I somehow make Clang or GCC warn about it?"} {"_id": "190717", "title": "Is there any real value in unit testing a controller in ASP.NET MVC?", "text": "I hope this question gives some interesting answers because it's one that's bugged me for a while.
**Is there any real value in unit testing a controller in ASP.NET MVC?** What I mean by that is, most of the time (and I'm no genius), my controller methods are, even at their most complex, something like this:

    public ActionResult Create(MyModel model)
    {
        // start error list
        var errors = new List<string>();
        // check model state based on data annotations
        if (ModelState.IsValid)
        {
            // call a service method
            if (this._myService.CreateNew(model, Request.UserHostAddress, ref errors))
            {
                // all is well, data is saved,
                // so tell the user they are brilliant
                return View(\"_Success\");
            }
        }
        // add errors to model state
        errors.ForEach(e => ModelState.AddModelError(\"\", e));
        // return view
        return View(model);
    }

Most of the heavy lifting is done by either the MVC pipeline or my service library. So maybe questions to ask might be: * what would be the value of unit testing this method? * would it not break on `Request.UserHostAddress` and `ModelState` with a NullReferenceException? Should I try to mock these? * if I refactor this method into a re-usable \"helper\" (which I probably should, considering how many times I do it!), would testing that even be worthwhile when all I'm really testing is mostly the \"pipeline\" which, presumably, has been tested to within an inch of its life by Microsoft? I think my point **really** is, doing the following seems utterly pointless and wrong:

    [TestMethod]
    public void Test_Home_Index()
    {
        var controller = new HomeController();
        var expected = \"Index\";
        var actual = ((ViewResult)controller.Index()).ViewName;
        Assert.AreEqual(expected, actual);
    }

Obviously I'm being obtuse with this exaggeratedly pointless example, but does anybody have any wisdom to add here? Looking forward to it... Thanks."} {"_id": "190710", "title": "Can I set the IMEI of a BlueStacks virtual device?", "text": "Using an app called \"Get My IMEI\" I was able to retrieve an invalid IMEI for my BlueStacks emulator; however, I can't see how I can change it. The guys at the XDA developers forum are talking about it, but it's an old post, so I wanted to know if it is possible now. I am building an app that uses IMEI validation, and BlueStacks is the device I use for development in Eclipse. It's way faster than the normal method, and I don't need to keep track of the installed apk version or clean it like I would have to if I used my physical device. If the device IMEI is already stored in the database, I have no problem. The problem is that when I have to register the device to the service (front-end), I need a valid IMEI. I know the IMEI can be spoofed like the MAC address or the IP, but it's just some extra security feature that is added. * * * _I asked the same question at Stack Overflow but the guys didn't think it had anything to do with programming. So I added the context of what I am trying to do; at this point I don't even know if the question was rejected because I mentioned XDA._"} {"_id": "197569", "title": "What does Uncle Bob mean by 'noun phrase names'?", "text": "I am reading _Clean Code_ by Uncle Bob. Because I am not a native English speaker, I couldn't understand the following statement: > Classes and objects should have noun or noun phrase names like `Customer`, > `WikiPage`, `Account`, and `AddressParser`. Avoid words like `Manager`, > `Processor`, `Data`, or `Info` in the name of a class. A class name should > not be a verb. As far as I know, none of `Manager`, `Processor`, `Data`, or `Info` is a verb, is it?
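The contrast I think he is drawing looks something like this (my own made-up example, not from the book):

    # A noun-phrase name: one small, well-defined concept.
    class AddressParser:
        def parse(self, raw):
            street, city = raw.split(',', 1)
            return {'street': street.strip(), 'city': city.strip()}

    # A \"Manager\" name: a vague grab-bag that invites unrelated duties.
    class AddressManager:
        def parse(self, raw): ...
        def save(self, address): ...
        def email_customer(self, address): ...

But both `AddressParser` and `AddressManager` are noun phrases, so I am not sure that is the whole story.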
What is the actual point he wants to emphasize?"} {"_id": "190719", "title": "The difference between \"concurrent\" and \"parallel\" execution?", "text": "What is the difference between the terms _concurrent_ and _parallel_ execution? I've never quite been able to grasp the distinction. The tag defines concurrency as a manner of running two processes simultaneously, but I thought parallelism was exactly the same thing, i.e.: separate threads or processes which can potentially be run on separate processors. Also, if we consider something like asynchronous I/O, are we dealing with concurrency or parallelism?"} {"_id": "210806", "title": "Does Java have a built-in method to determine if an identifier is legal", "text": "Naming a variable or class, for example, must adhere to certain rules (such as not allowing keywords or reserved words as identifiers). I really don't want to code my own method to do such a thing. Does Java have something already built in? My goal is that I am building an expression builder / code generator. It will recognize user-specified variables, and these must respect Java's identifier rules, or a warning is presented to the user to rename the variable."} {"_id": "17052", "title": "Mobile application development - suggestions for choosing the platform", "text": "I am planning to build a mobile application (a text-editor-based app) to primarily use with my Symbian device. I have plans to make it commercial too, once it's complete. Now the manufacturer (Nokia) website lists some choices for mobile app development - Java, the native Symbian platform (using C), and Qt. I am aware of the aspects of scaling and extending the app for other platforms. Android is increasing its market share like anything; Symbian is losing share too. Also, if I need platform-specific functionality (say, reading the contact list or sending an SMS), I would be compromising portability. I was not able to find any comparisons between the portability and platform-specific-feature aspects. If I chose Java, I think I could easily port my application to Android/iPhone and make it commercial. I have no idea how portable the Qt platform can be. I do not want to go by Nokia's suggestions, since I suspect they are biased toward their own products (even if they provide excellent developer support). I have to start learning from the beginning, irrespective of whichever platform I choose (except the languages). Please advise me if I have assumed something wrong, and give your opinion about portability versus platform features. Thanks, Abhi"} {"_id": "52623", "title": "First Class Functions", "text": "I started seriously taking a look at Lisp this weekend (by which I mean I have only been learning Lisp and not reverting back to projects in C#) and must say I love it. I have dabbled with other functional languages (F#, Haskell, Erlang) but haven't felt the draw that Lisp has given me. Now as I continue to learn Lisp, I have started wondering why non-functional languages don't support first-class functions. I know that languages such as C# can do similar things with delegates, and to an extent you can use pointers to functions in C/C++, but is there a reason why this would never become a feature in those languages? Is there a drawback to making functions first-class? To me, it is extremely useful, so I am lost as to why more languages (outside the functional paradigm) don't implement it.
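To show concretely what I mean by first-class functions, here is the property spelled out in Python (just an illustration of the concept, not of Lisp syntax):

    def twice(f, x):
        return f(f(x))            # functions can be passed as arguments

    def make_adder(n):
        def add(x):
            return x + n
        return add                # ...returned as values...

    increment = make_adder(1)     # ...bound to names like any other value...
    handlers = {'inc': increment, 'double': lambda x: x * 2}  # ...and stored in data structures

    print(twice(increment, 5))    # 7
    print(handlers['double'](21)) # 42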
[Edit] I appreciate the responses so far. Since I have been shown that many languages now support first-class functions, I'll re-phrase the question to be: why would it take so long for languages to implement them? [/Edit]"} {"_id": "52628", "title": "What's the better user experience: Waiting once at startup for a long time or waiting frequently for a short time?", "text": "I'm currently designing an application that involves a lot of calculation. Now I have generally two possibilities, both of which I have tested: 1) During startup of the application I calculate only the most important values and those values that consume a lot of time. So the user has to wait approximately 15 seconds during startup. But on the other hand, a lot of user interactions require recalculation, so that the user often has to wait 2-3 seconds after clicking somewhere until the application has calculated and loaded all values. 2) I load everything during startup. This takes from 90 to 120 seconds... This is quite a long time, but the big advantage is that all the user interactions are executed immediately. So what would you generally consider the better approach? Performing all time-consuming operations during startup, or computing values when needed?"} {"_id": "17059", "title": "What is the difference between \"good\" designers and \"great\" designers?", "text": "While reading this Wikipedia article, I saw that Brooks says there is a difference between \"good\" designers and \"great\" designers. What is the difference between them? How can I decide if a designer is good or great?"} {"_id": "60719", "title": ".net data access technology", "text": "I am new to .NET technology and I need help with choosing the correct path for creating an application. I want to create an application that gets its data from a web service, and a small part from a local SQL CE database. Mainly I have problems with database access and keeping the database access code clean. What .NET technology should I choose to accomplish that in a good way, with clean and easy-to-maintain code?"} {"_id": "116056", "title": "What language and topics should be covered when teaching non-CS college students how to program?", "text": "I have been asked by many of my non-computer-science friends to teach them how to program. I have agreed to hold a seminar for them that will last for approximately 1 to 2 hours. My thoughts are to use Python as the language to teach them basic programming skills. I figured Python is relatively easy to learn, from what I have researched. It is also a language I want to learn, which will make holding this seminar all the more enjoyable. The topics I plan to cover are as follows: 1. Variables / Arrays 2. Logic - if-else statements, switch case, nested statements 3. Loops - for, while, do-while and nested loops 4. Functions - pass by value, pass by reference (are these the correct terms for Python? I am mostly a C/C++ person; see the sketch below) 5. Object Oriented Programming Of course, I plan to have code examples for all topics, and I will try to have each example flow into the next so that at the end of the seminar everyone will have a complete working program.
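For topic 4, the kind of demo I have in mind: as I understand it, Python passes references to objects by value, so mutating an argument is visible to the caller while rebinding it is not (my tentative seminar example; corrections welcome):

    def append_item(items):
        items.append(4)     # mutates the caller's list

    def rebind(items):
        items = [0]         # rebinds only the local name

    nums = [1, 2, 3]
    append_item(nums)
    rebind(nums)
    print(nums)             # [1, 2, 3, 4]  (the mutation stuck, the rebinding didn't)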
I suppose my question is, if you were given 1 to 2 hours to teach a group of college students how to program, what language would you choose and what topics would you cover? Update: Thank you for the great feedback. I should have mentioned in my earlier post above that a majority of the students attending the seminar have some form of programming experience, whether with Java or Matlab. Most of these students are 3rd/4th year Engineering students who want to get a refresher on programming before they graduate."} {"_id": "235505", "title": "Does agile approach support taking a task from a team?", "text": "Can a customer, during the planning meeting, take a task that was already assigned to one team and give it to another team? For example, because the customer thinks that the other team will be faster at implementing it? Is this normal in some agile methodologies? If yes, could you please provide me with the source of this information (book or web link)? I do not know how to handle it, especially because of team spirit: such a thing can leave people on a team demotivated, and tensions can also arise between these two teams... Is this part of some agile methodology, and what are the constraints on doing such a thing?"} {"_id": "118391", "title": "Backtrack My \"Education\"", "text": "A while ago, I decided to start programming. I really just jumped into a language (Perl) and went from there. What I regret is that I just jumped in: I didn't learn the basics (if you would call them basics). I didn't learn about Computer Science. This issue, I believe, is holding me back from my true potential. Where should I \"restart\"? Are there any books, articles, etc. that I should read? Are there any topics an experienced programmer should know? What's your advice?"} {"_id": "90778", "title": "How do I prepare for an 'aptitude for programming' test?", "text": "I have a job interview tomorrow, and it's for a junior web developer role. I am really excited about it because I really want to get my foot in the door in this industry. I have been told how the interview will pan out. It will consist of a 45-minute interview with two of their developers and an 'aptitude for programming' test. I am really worried about the 'aptitude for programming' test. Questions I am hoping the Stack community can help me out with are: 1. What exactly is this? (As far as I understand, it is a written test using a fictional programming language.) 2. If you have done this before, either as the interviewer or interviewee, can you give me any advice, hints or help? 3. Are there any online sources that may help with this, e.g. blogs, websites, places where I can practice a few online?"} {"_id": "235506", "title": "Empty superclass for Collection of derived classes", "text": "Basically, what I would like to obtain is a way to iterate through ONE list, and call methods specific to the interface the objects in the collection implement. In my Java project, it would result in something like this:

    // GameComponent would be my empty abstract class
    HashSet<GameComponent> components = new HashSet<GameComponent>();
    components.add(/* instance of a class that extends GameComponent and implements Drawable || Updatable */);

    for (Drawable d : components)
        d.draw();
    for (Updatable u : components)
        u.update();

My friends suggested that I just make two separate lists, but that means some objects will be in both lists. I also think that it makes sense (abstraction-wise) to have one list of GameComponents and iterate through that one list. But I also don't like having an empty class to inherit from, only for categorisation. So, the question is: does my approach make sense? If so, is it elegant? If not, what pattern could I follow?"} {"_id": "90776", "title": "How does one securely and privately address security concerns inside code", "text": "I recently finished a practicum for which I desperately need a recommendation.
However, when I was working on the code for the public-facing web portal, I noticed many SQL injection possibilities, as well as other things. What bothers me is the complete ignorance of the individual managing the code. He wrote much of the code and is completely unaware of how SQL is supposed to be used. Accordingly, he also doesn't know how to properly protect against SQL injection and the like. While I was working for the individual, I tried to push him in the right direction of looking at and seeing how broken the system was. But I really need a good recommendation for a job and didn't wish to upset him. The office is also a very close-knit office, and there was no one I felt would understand the severe need for code review. Why I feel that this issue must be addressed is because the system was for a school district to manage student records: very personal student records. We (the developers, including another student) had access to a clone of the live database for testing purposes, and my little brother is set to enroll in that school district next year. The lead developer is somewhat good at programming, but with his lack of understanding regarding SQL and relational databases, the system will be exploited, most likely within the next year as it's being rolled out district-wide. I really feel I should have said something to someone. :( Besides phoning up the supervisor and asking him to do something, what can I legally do? Can I phone the RCMP (I'm Canadian) and ask them to investigate? (I know that's dumb, but this is verging on criminal negligence.)"} {"_id": "112536", "title": "Should I get a service-oriented architecture certification?", "text": "In my place of work, we have a number of applications that seem to interact in a rather ad-hoc, haphazard way, and I feel I need to understand system integration better. I'm currently deciding whether to do an SOA School Professional Certification. The certifications are well recognised, but from what I've read of them, I'm a bit concerned that even after I've passed the exams, I won't really be in a position to implement what I've learned, or that the information won't be hugely useful in my day-to-day job as a developer. It all seems a bit too theoretical. Will it be of any practical use to me? Also, I've heard the 150-page booklet that costs $300 is not enough to prepare you for the exam. Is this correct? **Edit** Really, what this comes down to is whether SOA is really going to be the answer to my integration problems. At the moment, it seems to be the de facto answer."} {"_id": "112530", "title": "Is it possible for two DLLs to conflict, preventing the solution from building", "text": "Though I have a specific case, I was wondering about the general situation. Can two DLLs, when added as references to a Visual C# project, collide with each other and prevent the solution from building? If this is the case, what are the possible ways to mitigate it?"} {"_id": "54663", "title": "What are some high quality Enterprise Architecture conferences or training programs?", "text": "I am looking for a conference or training which will give me a broad exposure to enterprise-level software architecture. I've been with the same company for 10 years and we've grown to the size where we really need to lay out a framework for the applications which support our company's business. The organic growth over the last 10 years has left us with a tightly coupled and fairly messy set of applications.
We need to do a better job of componentizing our business entities and have more rigorous control over the interfaces between those entities and our business processes. I'm looking to get a broad yet practical exposure to design patterns that support that architecture (SOA, messaging, ESBs, etc.). I'm hoping to gain insight from folks who have direct experience with implementing or working with what would be considered an enterprise-class architecture."} {"_id": "20542", "title": "How do you get into the zone? How long does it take? What steps do you take before?", "text": "Getting into the zone is a pleasurable and fruitful process. We produce good source code and we get lots of satisfaction from our work done while being in the zone. But how does one get into the 'zone'? Do you follow a specific process? Apart from switching off email, mobiles and other mundane non-productive applications, is there anything else that can be done?"} {"_id": "115628", "title": "How to divide work among development team members in a website project using MVC pattern", "text": "I am working on a web app project using the Flask framework and the SQLAlchemy ORM in Python. It's the first time I am working on a project like this, and I am having trouble figuring out how to divide the work properly among my teammates. We are 5 guys: one is handling design, two are on the front end, and two are on the backend. I am on the backend. Flask follows an MVC-type pattern. How do teams divide work in the MVC pattern? Moreover, I am especially confused about dividing work with my teammate on the backend. Should one handle all the database queries and let the other one handle the processing of the returned results? I need some advice on work division in web projects."} {"_id": "143842", "title": "Why is there no here document syntax in .Net?", "text": "* Is it considered bad form? Maybe it promotes a non-separate model/view? * Is it inefficient? * Was it just left out? I guess every language has features that certain developers wish were there, and not every language can have everything. I've just always found this feature to be pretty handy. Why does .Net not have a here document syntax?"} {"_id": "115620", "title": "How does one minimize or prevent User lawsuits?", "text": "I work on a bunch of tools that do things to customers' computers. Some of these tools allow scripting, which allows someone to run a script to nuke files, registry keys, or the like. Of course, if the script is bad, or if there's a serious bug in the tool, it could cause damage to the person's system. I'm concerned that a user might become... angry... with me if something goes wrong. How do I minimize the likelihood of being sued as a result of publishing such a tool?"} {"_id": "143847", "title": "Adding dynamic business logic/business process checks to a system", "text": "I'm wondering if there is a good extant pattern (the language here is Python/Django, but I am also interested at a more abstract level) for creating a business logic layer that can be created without coding. For example, suppose that a house rental should only be available during a specific time.
A coder might create the following class:

    from bizlogic import rules, LogicRule
    from orders.models import Order

    class BeachHouseAvailable(LogicRule):
        def check(self, reservation):
            house = reservation.house_reserved
            if not (house.earliest_available < reservation.starts < house.latest_available):
                raise RuleViolationWhen(\"Beach house is available only between %s and %s\"
                                        % (house.earliest_available, house.latest_available))
            return True

    rules.add(Order, BeachHouseAvailable, name=\"BeachHouse Available\")

This is fine, but I don't want to have to code something like this each time a new rule is needed. I'd like to create something dynamic, ideally something that can be stored in a database. The thing is, it would have to be flexible enough to encompass a wide variety of rules: * avoiding duplicates/overlaps (to continue the example: \"You already have a reservation for this time/location\") * logic rules (\"You can't rent a house to yourself\", \"This house is in a different place from your chosen destination\") * sanity tests (\"You've set a rental price that's 10x the normal rate. Are you sure this is the right price?\") Things like that. Before I reinvent the wheel, I'm wondering if there are already methods out there for doing something like this."} {"_id": "80316", "title": "Online Computer Science BS Degree?", "text": "Does anyone know of a good online undergraduate Computer Science program? My girlfriend is almost done with her Master of Science in Nursing online at Vanderbilt. All her courses are online, she does clinicals at local hospitals and private practices, and flies to Nashville for a long weekend every couple of months. I've been looking for something similar for Computer Science, but can only seem to find programs at scam schools. If it's possible to get the credentials which give you the ability to suture and prescribe narcotics online, shouldn't computer science be handled similarly? Yes, I am aware of the social implications (e.g. college is a great place to meet people, develop as a person, etc...). I did attend a traditional college, and have about 85 credits. The difficulty is, my business started making money, so I'm too busy to attend scheduled courses. The university wouldn't work with me at all, so I stopped going. I see value in having a degree, and would like to finish, but it has to be on my time."} {"_id": "80311", "title": "How do you define a node and an edge when talking about McCabe's Complexity?", "text": "I am trying to understand the formula, but I am a bit in doubt about how exactly you define edges and nodes. Edges seem to be every possible exit from a statement, and nodes seem to be statements; is this definition wrong?"} {"_id": "80312", "title": "What are the dangerous corners of Qt?", "text": "There's nothing perfect under the sun. Qt is no exception, and it does have limitations: we can't use pixmaps in a thread other than the GUI thread, we can't use QImage with a 16-bit-per-channel image format, etc. Which situations have forced you to spoil the design because of Qt's limitations? What are the most hated quirks? **Which design decisions should one avoid while using Qt in his projects?**"} {"_id": "137002", "title": "Grid framework for CSS", "text": "I see there are a large number of CSS grid frameworks, like 960, Heroku grid, etc., being used by huge websites. I want to know whether using a grid structure is really useful. If yes, then how? One of the biggest problems I see with grids is **having equal heights for elements**.
If we are using three grids like `grid_2`, `grid_7`, `grid_3` for 3 vertical panels, then it becomes very difficult to position these three panels in a way such that they have equal heights, with all of them changing height when any panel's content expands or collapses. This is because elements are floated in a grid system, and they don't change height along with a neighbouring element."} {"_id": "38642", "title": "Understanding Abstract Data Types (ADTs)", "text": "Just browsing through Code Complete last night, as you do, I came across the explanation of abstract data types. I must have read it 5 times, and the Wikipedia article doesn't help much either. So what I'm looking for is a simple explanation of exactly **what an Abstract Data Type is**. Any solid examples? In C# or VB? I understand that String is supposed to be one; why is this? And why isn't Int32 one? Or is it? Any pointers much appreciated."} {"_id": "167806", "title": "What is a good way to keep track of strings for dictionary lookups?", "text": "I am working through the Windows 8 app tutorial. They have some code about saving app data like so:

    private void NameInput_TextChanged(object sender, TextChangedEventArgs e)
    {
        Windows.Storage.ApplicationDataContainer roamingSettings =
            Windows.Storage.ApplicationData.Current.RoamingSettings;
        roamingSettings.Values[\"userName\"] = nameInput.Text;
    }

I have worked with C# in the past and found that things like using constant string values (like \"userName\" in this case) for keys could get messy, because auto-complete did not work and it was easy to forget whether I had already made an entry for a setting and what it was called. So if I don't touch the code for a while, I end up accidentally creating multiple entries for the same value that are named slightly differently. Surely there is a better way to keep track of the strings that key to those values. What is a good solution to this problem?"} {"_id": "167807", "title": "what are some good interview questions for a position that consists of reviewing code for security vulnerabilities?", "text": "The position is an entry-level position that consists of reading C++ code and identifying lines of code that are vulnerable to buffer overflows, out-of-bounds reads, uncontrolled format strings, and a bunch of other CWEs. We don't expect the average candidate to be knowledgeable in the area of software security, nor do we expect him or her to be an expert computer programmer; we just expect them to be able to read the code and correctly identify vulnerabilities. I guess I could ask them the typical interview questions: reverse a string, print a list of prime numbers, etc., but I'm not sure that their ability to write code under pressure (or lack thereof) tells me anything about their ability to read code. Should I instead focus on testing their knowledge of C++? Ask them if they understand what a pointer is and how bitwise operators work? My only concern about asking that kind of question is that I might unfairly weed out people who don't happen to have the knowledge but have the ability to acquire it. After all, it's not like they will be writing a single line of code, and it's not like we are looking only for people who already know C++, since we are willing to train the right candidate. (It is true that I could ask those questions only to those candidates who claim to know C++, but I'd like to give the same \"test\" to everyone.) Should I just focus on trying to get an idea of their level of intelligence?
In other words, should I get them to talk and pay attention to the way they articulate their thoughts, and so on?"} {"_id": "167802", "title": "How important is it for a programmer to know how to implement a QuickSort/MergeSort algorithm from memory?", "text": "I was reviewing my notes and stumbled across the implementation of different sorting algorithms. As I attempted to make sense of the implementation of QuickSort and MergeSort, it occurred to me that although I do programming for a living and consider myself decent at what I do, I have neither the photographic memory nor the sheer brainpower to implement those algorithms without relying on my notes. All I remembered is that some of those algorithms are stable and some are not. Some take O(n log(n)) or O(n^2) time to complete. Some use more memory than others... I'd feel like I don't deserve this kind of job, if it weren't for the fact that my position doesn't require that I use any sorting algorithm other than those found in standard APIs. I mean, how many of you have a programming position where it actually is essential that you can remember or come up with this kind of stuff on your own?"} {"_id": "167808", "title": "Instantiating Interfaces in C#?", "text": "I am reading/learning about interfaces in C# at the moment, and thus far I have managed to understand how they differ from abstract classes. In the book I am reading, the author explains that interfaces are the ultimate abstract class and that they simply set the standard for certain methods the inheriting class will have, but then he provides the following example:

    static void Main(string[] args)
    {
        ...
        Circle c = new Circle(\"Lisa\");
        IPointy itfPt = null;
        try
        {
            itfPt = (IPointy)c;
            Console.WriteLine(itfPt.Points);
        }
        catch (InvalidCastException e)
        {
            Console.WriteLine(e.Message);
        }
        ...
    }

The line that absolutely throws me off is `IPointy itfPt = null;`. Did he just declare an interface? I thought interfaces are abstract and can only be inherited? What kind of sorcery is going on here?"} {"_id": "216403", "title": "How can a website look different in Safari on Windows and Safari on Mac?", "text": "I have a website. I've been testing cross-browser on my Windows PC, and it looks good in all browsers, but on the Mac in Safari it looks like the CSS is not getting interpreted right, or there is a critical JavaScript error. When I look in the console cross-browser, the error log shows exactly the same messages. Chrome on Mac interprets the site as intended, so why do I have a problem with Safari? It is the same across different computers, and iPhone Safari also shows the site wrong. How is this possible, and how do I debug it?"} {"_id": "57814", "title": "Should we use RSS or Atom for feed generation?", "text": "For various reasons we are required to add feeds to our product. The main reason is to be able to say to potential buyers that \"yes, we have feeds\". We do not actually expect the feature to be used that much. Ideally we would like to provide both RSS and Atom feeds. However, at the moment we are severely pressed for time and are forced to select just one of these. Should we use Atom or RSS? Feature-wise we are fine with either, so I am only looking for information about the popularity of and support for the various formats. Are there many feed readers out there without Atom support? EDIT: The reason we only want to implement one format is not related to generating the actual feeds. That in itself will not be very time-consuming. It is more of a UI problem.
If we implement both Atom and RSS, we need to present the user with a UI where he/she can select between the different formats. For usability purposes we would also need help texts, tooltips, etc., to make sure that the user can understand the different options. And since our product is localized into multiple languages, all of the above would need to be translated, and someone has to pay for that. It all adds up and becomes a lot more work. If we settle on a single format, we only need one button with a tooltip pointing to an .aspx with the feed. Besides, it is not my decision anyway. :) Someone above me has already decided that this functionality will be implemented for this release."} {"_id": "216406", "title": "Since Garbage Collection is non-deterministic, why isn't it used for secure random number generation?", "text": "I get that /dev/random is a good source of entropy, and is what is usually used. It's just that as I'm reading up on GC, at least in Java, it seems accepted that the garbage collection daemon executes non-deterministically. If this is true, why don't we use the timing of garbage collection as a source of entropy instead of relying on /dev/random?"} {"_id": "95113", "title": "Has anyone used \"Design Pattern Framework (TM)\"?", "text": "Has anyone purchased the Design Pattern Framework (TM)? Are these samples worth the investment? Are they practical? What are the pros and cons of the guidelines? Has anyone used this in real-world development?"} {"_id": "67298", "title": "What do you do when there is a last minute request to exclude a feature from a release?", "text": "There is a feature that has already passed acceptance testing, both internally and by the customer. It is a fully working feature. However, there is now a request to exclude this feature from an upcoming release. According to the customer, this feature should be removed because the users have not been trained on how to use it. What is the best course of action to handle this situation? Should we design software in anticipation that a feature might be excluded at the last minute, using configuration settings? Are there context-dependent solutions that might be more correct in some situations than others?"} {"_id": "67291", "title": "What do programmers at security firms do?", "text": "I have heard of security firms who consult on the security of a client's systems. All of the people I know in this field are network engineers, but I know programmers get involved in security as well. What do security programmers who perform audits/consulting actually do? Do they literally go through the codebase looking for every vulnerability in people's legacy systems? I have always assumed that is what they did, but it seems like this would be highly unreliable and would not do much more than provide a false sense of security. Note: I'm not talking about programmers who write encryption algorithms or anything like that, only those concerned with software security audits/consulting."} {"_id": "198275", "title": "Extracting domain logic from the forms to which they are coupled?", "text": "Many applications do nothing to separate the interface from domain logic. I've been programming for a couple of decades and have worked at more than a dozen shops, and none of them have taken any measure to separate the interface from the domain logic. That is, they're all using the Autonomous View pattern. This runs counter to all the wisdom I've read about separating concerns. When the user needs to choose an item from a set of items, that selection is directly tied to a combo box rather than an abstraction. A huge drawback of all this coupling is that it makes it impossible to write unit tests. To run any unit tests, the application in its entirety must first be loaded. In my mind, the use case should be encapsulated within an abstraction that can later have a database and an interface attached to it. This kind of separation would make implementing unit tests trivial by comparison.
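To sketch the kind of abstraction I mean (Python used here only for brevity; my shop is .NET, and all the names are mine):

    class ChooseShippingMethod:
        # The use case: pure domain logic, no widgets, no database.
        def __init__(self, methods):
            self.methods = methods            # injected, e.g. by a repository

        def options(self):
            return [m.name for m in self.methods]

        def choose(self, name):
            for m in self.methods:
                if m.name == name:
                    return m
            raise ValueError('unknown shipping method: %s' % name)

    # A test needs no form at all:
    class FakeMethod:
        def __init__(self, name):
            self.name = name

    use_case = ChooseShippingMethod([FakeMethod('Ground'), FakeMethod('Air')])
    assert use_case.options() == ['Ground', 'Air']
    assert use_case.choose('Air').name == 'Air'

Whether options() ends up feeding a combo box or a listbox is then a detail of the view.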
When the user needs to choose an item from a set of items, that selection is directly tied to a combo box rather than an abstraction. A huge drawback with all this coupling is that it makes it impossible to write unit tests. To run any unit tests the application in its entirety must first be loaded. In my mind, the use case should be encapsulated within an abstraction that can later have a database and an interface attached to it. This kind of separation would make implementing unit tests trivial by comparison. Given that the average shop has its interface and domain logic tightly coupled, how to we refactor to separation? That is, a complete rewrite is off the table. What I\u2019m talking about is taking a proposal to management about how to go about creating domain objects (whatever these happen to look like) that live in isolation from the database and the interface. In other words, it would be useful to be able to explain how this concept of presenting the user with a choice (something that is now represented by combos on forms) and to see how some methodology/framework/pattern represents that in an abstract sense and how that abstraction is then tied to the interface. This would make replacing combos with listboxes nothing more than a slight detail. What are some good ways for extracting abstractions (domain use cases) from forms? Any good online resources? My current shop is on .NET, though I am interested in ideas from any development platform. Ultimately, what I need to provide is a concrete example (code) of how this might be accomplished."} {"_id": "168354", "title": "Do we ethically have the right to use the MAC Address for verification purposes?", "text": "I am writing a program, or starting at the very beginning of it, and I am thinking of purchase verification systems as a final step. I will be catering to Macs, PCs, and possibly Linux if all is said and done. I will also be programming this for smartphones as well using C++ and Objective-C. (I am writing a blueprint before going head first into it) That being said, I am not asking for help on doing it yet, but what I\u2019m looking for is a realistic measurement for what could be expected as a viable and ethical option for purchase verification systems. Apple through the Apple Store, and some other stores out there have their own \"You bought it\" check. I am looking to use a three prong verification system. 1. Email/password 2. 16 to 32 character serial number using alpha/numeric and symbols with Upper and lowercase variants. 3. MAC Address. The first two are in my mind ok, but I have to ask on an ethical standpoint, is a MAC Address to lock the software to said hardware unethical, or is it smart? I understand if an Ethernet card changes if not part of the logic board, or if the logic board changes so does the MAC address, so if that changes it will have to be re-verified, but I have to ask with how everything is today. Is it ethical to actually use the MAC address as a validation key or no? Should I be forward with this kind of verification system or should I keep it hidden as a secret? Yes I know hackers and others will find ways of knowing what I am doing, but in reality this is why I am asking. I know no verification is foolproof, but making it so that its harder to break is something I've always been interested in, and learning how to program is bringing up these questions, because I don't want to assume one thing and find out it's not really accepted in the programming world as a \"you shouldn't do that\" maneuver. 
I am just learning how to program, and I want to make sure I'm not breaking some ethical programmer credo."} {"_id": "168358", "title": "technique for checking modifications in configuration file while starting up a program", "text": "I'm writing software for periodically checking the reachability of a specific range of networked devices. I'm specifying the address range, and the frequency at which to check their reachability, in an XML file. Which technique can I use, during program startup, to check that XML file for any modifications to either the range or the frequency, and then make the necessary updates in a specific database?"} {"_id": "109357", "title": "When a team size gets over 10, can you still do release planning together?", "text": "When deciding what to work on for the next release, and estimating timings for each user story (and sub-tasks for a given story), do you do this as a group, or do just the managers do it? For a team size of 10, is this practical? How long does it take?"} {"_id": "667", "title": "Writing a style guide, what are the best examples for breaking long lines?", "text": "You are writing a style guide and you are at the chapter about breaking long lines. What would be the best examples of long lines to demonstrate your rules for breaking long lines? The examples should be in a C-like syntax. I think it is best to keep the examples more or less language-agnostic. You can use language-specific features for extremely long lines, like the many modifiers in Java, the implements clause, or the throws clause. Note: this is not about the rules of how to break long lines. This is about collecting the best examples for the rules of how to break long lines. I have prepared a question for the rules, but I wanted to collect the best examples first. Please do not break your long line, at least not much; just break it enough that it is readable on this website. You can break your long line to your heart's content in this question once we have a good set of example lines. One example per answer, please."} {"_id": "72144", "title": "Does anyone have any experience with scene7 image server?", "text": "Scene7 is Adobe's image server. It seems like many people are using it. We are considering using it. I want to know about developers' experiences with it. Was it a pain to develop with, or did it make things easier? What experiences have you had?"} {"_id": "72147", "title": "What are the benefits of not having logic within integration tests?", "text": "In the excellent http://artofunittesting.com/, I saw a recommendation to keep logic out of unit tests. Does this hold true for functional/integration tests?"} {"_id": "10920", "title": "How to choose a research discipline in Computer Science?", "text": "I got my bachelor's degree in CS five years ago, and after that I've been working as a software engineer building Voice over IP (VoIP) and video surveillance systems. But now, for some personal reasons, I feel that I would like to go back to get a PhD degree. The problem is, I am not quite sure which branch of Computer Science I should choose.
Another important question is, after I have decided in which domain I would like to study. How can I find most influential research results in this area to get an overview of it? In other words, since there must be plenty of papers in a domain, how can I know which I should read. BTW: If you happened to be a PhD student in Computer Science, what kind of skills or personality do you think your professors prefer? (This last question might be too subjective, so please ignore it if you feel it's ridiculous) Thanks a lot!"} {"_id": "72140", "title": "If you use multiple computers, how do you sync everything?", "text": "I have like 4 or 5 computers now, and I need a better system for syncing everything. I use git and github a lot to sync my files for programming projects, but then there are databases, .bash_profile files, bash scripts, etc. Sometimes, instead of syncing files, I just ssh in from one computer to another. But this is getting fairly chaotic. Some of my computers are Ubuntu and others are OS X. Any suggestions for managing a workflow that spans multiple personal computers?"} {"_id": "196389", "title": "Architecting Python application consisting of many small scripts", "text": "I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script, and typically the script will do a small unit of processing (for example, parse email and store some database fields), then an item will be placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should implement live. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application? Specific questions: How should I run each of these scripts - as a daemon with some sort or restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this? Or is the idea of many small script not a good one, would it make more sense to have a larger python program which contains all the functionality and does all the queue polling and execution of functionality for each queue? What is the current preferred approach to daemonising Python scripts? Broadly I would welcome any comments or opinions on any aspect of this. thanks"} {"_id": "196384", "title": "How do I tell a user that bps means bits per second or bytes per second?", "text": "I'm writing an application that deals with the network and the hard drive. For the network portion, the application measures in bits per second, while the disk portion measures in bytes per second. This becomes an issue as they both are abbreviated `b/s` or `bps` everywhere I've seen. How could I inform the user that one means _bits_ per second, while the other means _bytes_ per second? If I were writing a specification, I could just add a footnote, but as this is an application, I can't do that. So, my question is **How do I tell a user that one means bits per second while the other means bytes per second and how would I style the lettering (i.e. 
mebibytes is MiB, but what is mebibits)?**"} {"_id": "115085", "title": "What sort of wiki or other solution can be used to put updates/software changes in our office?", "text": "We use SourceSafe to keep all of our code up to date in what we do in our office, but it's getting to the point where we need to write a few notes and attach different versions of our code for different things (mainly web apps, ASP.NET). Is there a wiki out there designed for this? I've looked at Trac, but I'm not sure it's best for the job. We really just need somewhere we can log in and see that, okay, X did a release to our main server on Y with Z changes. We could make one, but we don't really have the time at the moment. Any suggestions? Thanks a lot, Tom"} {"_id": "218467", "title": "How to configure these REST API Resources", "text": "I'm still in the design phase of my REST API, and I'm a bit stuck on how to configure the resources. The API will be consumed by mobile devices (Android, iOS, and Windows). Communication is through HTTPS. These are the features the API will support: 1. View a list of previous reservations 2. View a previous reservation 3. Create a new reservation 4. Create a new anonymous reservation I was thinking of naming my resources as follows: 1. api/users/{id}/reservations (GET) 2. api/users/{id}/reservations/{id} (GET) 3. api/users/{id}/reservations (POST) 4. api/users/0/reservations (POST) I'm not sure whether the resource for 4 is a good design. Also, 1, 2 and 3 require a user to have an account and be authenticated. Authentication will be done using an access token. This means the user has to log in every X period. Is it bad to store the user id on the device so it is able to perform requests where the user id is required? Or should I just remove the /users/ part from the resource and have the server decide, based on the access token, which user is trying to perform a specific action? The REST API will be built using ASP.NET MVC4."} {"_id": "29697", "title": "browser syntax highlighting", "text": "I find myself reluctant to read code without syntax highlighting. On an almost daily basis, I end up downloading or copy-pasting large blocks of code from a browser window into vim just for the sake of seeing it in color. Is there a more elegant solution? Is there a plugin for Firefox or Chrome that allows me to apply syntax highlighting to any page?"} {"_id": "223584", "title": "Using Ubuntu for commercial software development", "text": "I am planning to use Ubuntu for developing Android and PHP applications which I will sell on the market. As far as I understand, Ubuntu falls under the GNU GPL license. In this case, do I need to make my source code open to all?"} {"_id": "29692", "title": "What's RAII? Examples?", "text": "Whenever the term RAII is used, people are actually talking about destruction instead of initialisation... I think I have a basic understanding of what it might mean, but I'm not quite sure. Also: is C++ the only RAII language? What about Java or C#/.NET? Or is C++/CLI not RAII anymore?"} {"_id": "114978", "title": "Recommendations for teaching kids math concepts & skills for programming?", "text": "I've got two very bright kids who are showing interest in learning programming. Of course their primary goal is to develop video games. They have decided they want to be game developers when they finish their schooling!
However, I seem to have been reading a lot lately about how software companies are frustrated with the level of math skills in their job candidates - apparently schools are not teaching math adequately for programming. There are already other questions about how to teach programming to kids, but I'm more specifically interested here in improving kids' relevant MATH concepts & skills. What specific suggestions can you give? Just for the record, I'm not seeking to force math on them as a chore; rather, I'm wondering if there's something enjoyable I could get them to do that they could learn from at the same time (e.g. similar to how Lego Mindstorms gets recommended for teaching kids programming). Mind you, if someone recommended some great math book that helped their kid, that would be great too."} {"_id": "238668", "title": "Writing a spell checker similar to \"did you mean\"", "text": "I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's \"Did you mean?\" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?"} {"_id": "114971", "title": "Purely technical reasons for PHP as a first choice?", "text": "I know this may come off as flame-y / troll-y, but I hope you will take my word for it that it's not my intention. I am just trying to understand the PHP phenomenon. After looking at the many technical issues with the language design of PHP, I am hard pressed to find any redeeming technical advantages where PHP surpasses all other languages. Before coming to the conclusion that there would simply be no reason to choose PHP as a development language **on purely technical grounds**, I would like to ask: if all non-technical factors were equal (such as what language the developers already know, what languages the hosting provider offers, the language of existing code, cost, license, corporate fiat, etc.), would there be any type of **new** software system that would indicate making PHP a first choice for development? If so, what technical advantage does PHP have over **all** other languages that would cause you to choose it? **EDIT:** I am not interested in comparing PHP \"out of the box\" with other languages \"out of the box\". If PHP has a certain feature \"out of the box\" that another language has only after installing some readily available add-on, that is **not** considered an advantage for PHP for the purposes of this question."} {"_id": "78795", "title": "Best representation for relative dates & durations", "text": "I use ISO 8601 to represent dates & durations and all is OK.
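To make the absolute side of this concrete (a minimal sketch; the class and variable names are made up, and Java's java.time is used purely for illustration since my question is language-agnostic), these standard forms already parse fine:

import java.time.Duration;
import java.time.LocalDate;
import java.time.Period;

// Illustrative sketch only: parsing the absolute ISO 8601 forms.
// Note that java.time splits a combined duration such as
// \"P1Y2M10DT2H30M\" into a date-based Period and a time-based Duration.
public class Iso8601Demo {
    public static void main(String[] args) {
        LocalDate date = LocalDate.parse(\"2013-06-20\"); // a plain calendar date
        Period period = Period.parse(\"P1Y2M10D\");       // the date-based part
        Duration duration = Duration.parse(\"PT2H30M\");  // the time-based part
        System.out.println(date.plus(period));    // 2014-08-30
        System.out.println(duration.toMinutes()); // 150
    }
}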
But now I need to represent relative dates and durations like: * The date of the first day, at midnight, of the next week * The whole of the last month. In fact I need a syntax for it, and I would prefer a standard one. I don't need code (I already have all the code), but a syntax to store relative dates or durations, just as ISO 8601 uses \"P1Y2M10DT2H30M\" for a duration. Thanks..."} {"_id": "91918", "title": "Cost of heavy autoloading in PHP applications?", "text": "I have an application which does heavy autoloading, meaning that only two classes are \"included\" directly. Every module that the application has (around 14 modules in total) defines an autoloader. However, I want to know whether it is too time-expensive to load all classes using autoloaders, and what I could do to speed it up. As a side note, at this point I'm not experiencing a slow application; I'm just trying to predict whether it could happen. Thanks in advance."} {"_id": "72492", "title": "Grateful for opinions on Language and Framework for my Windows Application", "text": "I have read some related questions on this site, including: What language should I seek to learn if I would like to develop for Windows and How to start programming in Windows, but feel that my situation is slightly different and I have a few other questions that I would like some opinions on. I'm a mature student doing an undergraduate degree in Computer Science. I have to choose an idea for my final year project pretty soon. I have my mind set on producing a specific Windows software product (desktop application - web enabled) when I leave uni, and see my final year project as an opportunity to get a head start on this. In particular I want to tackle all the hard parts such as networking and security. I'm not really concerned at the moment with cross-platform compatibility, because the market for this product all use Windows, but I need to choose the language and framework that I should use pretty soon. I'm certified in Java (OCPJP (used to be SCJP)) and at uni we've been using Java, C and Occam (uurrghh). However, I was thinking of learning C++ so that I could use the Qt framework, which looks pretty good. I do have some concerns though. Would it be quicker to use C#, rather than C++ and Qt? What would the advantages be of using Qt? I'm hoping it will make development quick and will make my software secure and difficult to crack. I am aware that I have to pay for the privilege (I do not want to use the option which is free, as I would have to make my source code available). If I use C#, then I will learn the .NET Framework, which is basically a lot of helper classes to speed up Windows development; is that correct? I understand that I can use quite a few different languages with .NET and they'll all get compiled into the same intermediate language. What's the difference between the Microsoft Foundation Class Library and the Windows Presentation Foundation? Does anyone know the advantages of using either one? I don't think the software will be uber complex. I have professionally used 4 different products of the type that I want to make (I'm a mature student) and the actual processing part does not have to be cutting edge, because the processing demand is never that great even for pretty low-powered modern computers (obviously I will still try to be efficient), but the overall design and usability were surprisingly lacking in most of them and I'm hoping to do better. I just want to make sure that it looks good and that I can develop it quickly.
Should I continue reading Ivor Horton's Visual C++, which covers C++, the basics of Windows programming, Windows Forms and the Microsoft Foundation Classes? Or does someone have a better suggestion? I have limited time available, so I need to decide on the tools I'm going to use for the finished product now. I'm very sorry that this post is so long, but it's very important to me to get some good advice on this. Many thanks :) Edit: Good article on StackOverflow Another Edit: Is memory management something which is automatic when developing with C++/Qt? I understand that if you're using .NET with C++, then it's all garbage collected."} {"_id": "72495", "title": ".NET Properties - Use Private Set or ReadOnly Property?", "text": "In what situation should I use a Private Set on a property versus making it a ReadOnly property? Take into consideration the two very simplistic examples below. First example: Public Class Person Private _name As String Public Property Name As String Get Return _name End Get Private Set(ByVal value As String) _name = value End Set End Property Public Sub WorkOnName() Dim txtInfo As TextInfo = _ Threading.Thread.CurrentThread.CurrentCulture.TextInfo Me.Name = txtInfo.ToTitleCase(Me.Name) End Sub End Class // ---------- public class Person { private string _name; public string Name { get { return _name; } private set { _name = value; } } public void WorkOnName() { TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo; this.Name = txtInfo.ToTitleCase(this.Name); } } Second example: Public Class AnotherPerson Private _name As String Public ReadOnly Property Name As String Get Return _name End Get End Property Public Sub WorkOnName() Dim txtInfo As TextInfo = _ Threading.Thread.CurrentThread.CurrentCulture.TextInfo _name = txtInfo.ToTitleCase(_name) End Sub End Class // --------------- public class AnotherPerson { private string _name; public string Name { get { return _name; } } public void WorkOnName() { TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo; _name = txtInfo.ToTitleCase(_name); } } They both yield the same results. Is this a situation where there's no right and wrong, and it's just a matter of preference?"} {"_id": "256110", "title": "Developing linux p2p applications", "text": "I have been tossing the idea around in the old noggin of creating a simple p2p chat application that would run in a Linux environment and possibly be written in C. It's been a while since I have written an application that wasn't a web app, and I have been wanting to do more with C. I have always been curious just how p2p applications function, so I figure this would be a nice way to further my understanding of both the C programming language and the architecture of p2p apps. The only problem is I'm not quite sure where to start. I have done a bit of initial research and found the following C library for p2p applications here. Right now my questions would be: 1. What else should I research before starting the project? 2. Does anyone know of any other C libraries for p2p applications? 3. Does anyone have any good links or know of good books on p2p protocols? My final goal for this would be to have an executable that two people could open on their system and communicate back and forth... simple, I know, but education is my primary goal here. Any other information on the topic is also greatly appreciated!
Thanks :)"} {"_id": "107305", "title": "How to pay more attention to detail as a developer?", "text": "Are there any resources for paying more attention to detail as a software developer? (Especially edge cases, small mistakes in code, details in the problem description, and the ramifications of certain changes to a large system.) Some thoughts so far: - Books of some sort? - Exercises of some sort (e.g. just solving math problems [a professor during undergrad mentioned math teaches attention to detail... although another said assembly language programming teaches detail too...])? - Some type of method for decomposing problems/thinking to force attention to every detail? - Some way of noting down details so as not to forget them later. An example of what I mean: someone said a good question to ask a prospective developer is how many 7's there are between 0 and 100. I did it quickly and thought of 10, forgetting about 70-76 and 78-79. Basically it's just a lack of attention to detail.... Another example: during a whiteboard coding interview, there was an easy problem, and while the initial version was correct, as we kept making it more efficient I kept making more and more small mistakes. Once pointed out, it was easy for me to fix them, but it was embarrassing to have them pointed out by the interviewer instead of me finding the issues on my own and fixing them. Another example is just compiling code. Initially I would write code, go over it once before compiling, and catch many errors. Now I miss many more errors, and the compiler (or interpreter) finds many more errors than it used to. Also, I noticed that when I first came out of undergrad it was much easier to hang onto tons of detail, whereas now even the detail that I initially knew I seem to forget more as time goes on. Which is why, in addition to paying attention to detail for new problems, I could also use resources for keeping track of detail from older problems without having to rely on memory."} {"_id": "67878", "title": "How to approach documentation translation for an open source project?", "text": "I'm going to start a little open source project with some friends from all over the world. As I know that one of the most important things in open source projects is to help users use the project itself, I want to make some wikis/tutorials/whatever in different languages. What are the best tools, or ways to think/act in general, for managing translations? Can you give me some advice?"} {"_id": "187011", "title": "ASP.NET Mvc3 - application/request lifetime and dependency injection", "text": "I thought of asking on SO, but it seems this is more of a \"concept\" type question than a \"problem\" type question. If it needs to be moved, please do so. Anyway, I'm having a tough time finding straight info on this. I'm using Unity.Mvc3 to set up dependency injection for controllers and other components the controllers might use. From what I understand from the Unity.Mvc3 website, anything that is `IDisposable` will be disposed of at the end of the request by the container if it is registered with a `HierarchicalLifetimeManager`. Does that mean that things registered with the `ContainerControlledLifetimeManager` will be resolved as the same instance for all requests, for as long as the server/application is running? Can I register an instance using the `HierarchicalLifetimeManager`, or does it have to be a type? How do the application/session/request lifetimes work?
I ask because I come from PHP, and PHP has no scope outside the request, so this is obviously something I need to get the hang of."} {"_id": "187016", "title": "Wrote an application for a friend. Who is the owner of the software?", "text": "A friend asked me to develop a software application for him. I did, and he paid me. There was no written contract. I live in the UK and he resides in Canada. My question is: who is the owner of the software now? Can my friend make and sell copies without my permission? Can I do the same? I'm asking this because another person also wants to buy it, but my friend insists on receiving some kind of sales commission, because the software was originally written at his request."} {"_id": "145857", "title": "What toolkit to use for card game?", "text": "I'm new at this and I was wondering if someone could suggest the most appropriate API to use to make a card game that is: * cross-platform * two-player * peer-to-peer * capable of laying out cards (png files) * open-source * beginner-friendly (well documented) If it helps to give a better answer, the game is based on _Set_: http://www.setgame.com/set/puzzle_frame.htm ### Edit 1 This will be a standalone client application using C#, or any other suitable language for cross-platform GUI-based applications (e.g., Ruby, Java & Swing, Python). ### Edit 2 There are a number of toolkits listed at: * http://content.gpwiki.org/index.php/Game_Engines This helps, but which toolkit would be most suitable?"} {"_id": "224142", "title": "What algorithm should I use to find the shortest path in this graph", "text": "I have a problem concerning the calculation of shortest paths on an unweighted and undirected graph. Which algorithm should I use to calculate the shortest path between a node A and a node B that passes through a node C on an undirected and unweighted graph?"} {"_id": "224141", "title": "In ASP.NET MVC/Razor, how to add initializer JavaScript to a \"control\"?", "text": "Actually, I already have at least 3 different solutions for the problem; I just don't like any of them, for various reasons. In ASP.NET MVC/Razor, there are no controls anymore, in the sense that they existed in ASP.NET. An ASCX control used to be view+controller intertwined, so I get it. Instead we have HTML helpers that emit strings, HTML fragments. Let's say I want to create a helper method in a library that creates a jQuery UI date picker. First solution: I just write a method that first creates a textbox (actually I can just call `TextBoxFor` because it does the same thing for me), then it creates a small JavaScript code fragment that calls `$('#textboxid').datepicker()`. Then I can call that in my Razor views and it works. I don't like the fact that view-related stuff is done in code, and not in Razor, but all default editor templates are like that, sadly. I don't like the fact that there will be lots of small script fragments after each textbox in the output HTML. And my code fragment can get complex: if that initializer JS had many parameters, it would look ugly, there would be string escape issues, etc. Microsoft is pushing an unobtrusive JavaScript architecture. I could just put a CSS marker class on my textbox, add some data-attributes if needed, and emit only one piece of JavaScript that turns all my textboxes into date pickers, based on the CSS marker class. This last part is the main problem: I do need imperative JS code to do that. Unobtrusive AJAX and validation work well because they don't need that at all; they just respond to user interaction events.
This is not the case here: I need to turn my textboxes into date pickers whenever they appear. And they can appear dynamically in an editable grid, or after an AJAX call, whenever. Third solution: views should be done in Razor, so let's do it in Razor, as an editor template. The problem is, we're talking about a class library here. In the past, I have seen ugly hacks to embed .cshtml files as embedded resources or compile them to actual C# code during compilation. Last I checked, they seemed like ugly hacks and weren't supported. Is there another solution? If not, which of the above is the nicest, the one that most conforms to state-of-the-art coding conventions, or Microsoft's idea, etc.?"} {"_id": "224146", "title": "How has an increase in the complexity of systems affected successive generations of programmers?", "text": "As a \"new\" programmer (I first wrote a line of code in 2009), I've noticed it's relatively easy to create a program that exhibits quite complex elements today with things like the .NET Framework, for example. Creating a visual interface or sorting a list can be done with very few commands now. When I was learning to program, I was also learning computing theory in parallel: things like sorting algorithms, principles of how hardware operates together, Boolean algebra, and finite-state machines. But I noticed that if I ever wanted to test out some very basic principle I'd learned in theory, it was always a lot more difficult to get started, because so much technology is obscured by things like libraries, frameworks, and the OS. Making a memory-efficient program was required 40/50 years ago, because there wasn't enough memory and it was expensive, so most programmers paid close attention to data types and how the instructions would be handled by the processor. Nowadays, some might argue that due to increased processing power and available memory, those concerns aren't a priority. My question is: do older programmers see innovations like these as a godsend or as an additional layer to abstract through, and why might they think so? And do younger programmers benefit more from learning low-level programming BEFORE exploring the realms of expansive libraries? If so, then why?"} {"_id": "234248", "title": "Is the flyweight pattern a good security choice for managing permissions?", "text": "I'm designing a system, and it needs future expandability for the use of a permission system of some kind. I'm wondering if the flyweight pattern would be a good choice to implement this. I'm not responsible for the implementation right now, as it is just a prototype, and we're not prototyping any parts that need the rights system. However, because of the demand for future extensibility to parts that do need permission management, the interface needs to be solid. This means that in the end it will be a thought experiment rather than a real part of what I have to work on. However, I do need to be able to explain and justify my design in this area of the application. In favour of using the flyweight pattern is that you can define permissions by having them symbolized through a token class, as represented in the pattern. If you're dealing with hundreds of users, this would somewhat simplify the handling and issuing of the rights a user holds, and the required rights a user needs for an action within the system, as well as the memory usage of the rights assigned to all the users. In the system I have in mind, a factory method of some kind will assign the rights needed at construction time.
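As a minimal sketch of what I mean (all names here are made up for illustration; this is a thought experiment, not committed code), the factory would hand out shared, immutable tokens:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: one shared, immutable token per permission
// name, handed out by a factory.
public final class PermissionFactory {
    private static final Map<String, Permission> CACHE = new ConcurrentHashMap<>();

    public static Permission of(String name) {
        // computeIfAbsent guarantees a single shared instance per name.
        return CACHE.computeIfAbsent(name, Permission::new);
    }

    // Immutable token: sharing it is only safe because it has no mutable state.
    public static final class Permission {
        private final String name;
        private Permission(String name) { this.name = name; }
        public String name() { return name; }
    }
}

The flyweight property is that PermissionFactory.of(\"reports.read\") always returns the very same object, so my worry below reduces to who can obtain and hold such a reference.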
As I'm not really experienced with designing with security in mind, I'm having a paranoid line of thought, and I can't determine whether it's justified, security-wise: a shared pointer could be hijacked by 'evil intruders' to gain rights they should not be getting. This is the major argument against the use of a flyweight that keeps bugging me, even though the 'how' is undefined and I wouldn't know how someone would get it done. (I have no experience in the security mindset, or its workings. However, I'm not really looking for a lecture in security beyond the secure use of patterns, unless motivated and clearly related to the question.) Q: Is the flyweight pattern a suitable pattern to manage the representation of the rights of a user (or some other 'hand-out' type of data) in the design of a software system that needs to be secure by design? Does the use of flyweight objects as representations of permissions pose more of a security risk than other patterns (I could also use a decorator chain for it, even though that would take up more memory) when dealing with permissions?"} {"_id": "188696", "title": "Please help me understand the relationship between script file size and memory usage?", "text": "I am programming in PHP and I have an include file that I have to, well, include, as part of my script. The include file is 400 MB, and it contains an array of objects which are nothing more than configuration settings for a larger project. Here is an example of the contents: $obj = new obj(); $obj->name = \"myname\"; .... $objs[$obj->name] = $obj->name; {repeat} .... return $objs; This process repeats itself 40,000 times and is ultimately 650,000 lines long (the file is generated dynamically, of course). If I am simply trying to load a file that is 400 MB, why then would memory usage increase to 6 GB? Even if the file is loaded into memory, wouldn't it only take up 400 MB of RAM?"} {"_id": "188693", "title": "What is the better way to design flexible menus?", "text": "I would like to have a menu like the one on Surfdome. I don't mean the UI, but the flexibility of this menu. I will try to explain it. I have some products. I want to match these products with some categories and have multiple types of menus based on these categories. e.g. _(in [] are the categories and in () are the products)_ A menu like [Men] -> [Shoes] -> [Running] -> (Product1) [Men] -> [Accessories] -> [Running] -> (Product2) [Women] -> [Shoes] -> [Running] -> (Product3) [Women] -> [Accessories] -> [Running] -> (Product4) or [Running] -> [Men] -> [Shoes] -> (Product1) [Running] -> [Women] -> [Shoes] -> (Product3) [Running] -> [Men] -> [Accessories] -> (Product2) [Running] -> [Women] -> [Accessories] -> (Product4) or [Shoes] -> [Men] -> [Running] -> (Product1) [Shoes] -> [Women] -> [Running] -> (Product3) [Accessories] -> [Men] -> [Running] -> (Product2) [Accessories] -> [Women] -> [Running] -> (Product4) ... I think it could be done with a `tag system`, but I would like to ask if anyone knows a good way to do it?"} {"_id": "173371", "title": "Functional Methods on Collections", "text": "I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce fewer, and which are most appropriate for a given problem? Though I'm studying Scala, I think this would pertain to most modern functional languages (Clojure, Haskell) and also to Java 8, which introduces these methods on Java collections.
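To have something concrete for the comparison that follows (a sketch only; Java 8 streams are used for illustration, and the class name and the list xs are assumptions of mine), the two shapes look like this:

import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: map(...).filter(...) versus one explicit fold.
public class MapFilterVsFold {
    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4, 5);

        // map, then filter, then sum; streams fuse these steps lazily.
        int a = xs.stream()
                  .map(x -> x * x)
                  .filter(x -> x % 2 == 1)
                  .mapToInt(Integer::intValue)
                  .sum();

        // The same result from a single fold (reduce) in one explicit pass.
        // Note this combining function only makes sense sequentially, which
        // echoes the parallelism concern below.
        int b = xs.stream()
                  .reduce(0, (acc, x) -> (x * x) % 2 == 1 ? acc + x * x : acc);

        System.out.println(a + \" \" + b); // prints: 35 35
    }
}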
Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted that using foldRight() can yield the same result as a map(...).filter(...) with only one traversal of the underlying collection. But a friend pointed out that foldRight() may force sequential processing, while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together to get back a List(List()) or to pass a List(List()) and get back just a List(). For instance, when would I use: collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...) The for/yield command does nothing to help this confusion. Am I asking about the difference between a \"fold\" and an \"unfold\" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together."} {"_id": "239139", "title": "Why do arrays in Java not override equals()?", "text": "I was working with a `HashSet` the other day, which has this written in the spec: > [add()] adds the specified element e to this set if this set contains no element e2 such that (e==null ? e2==null : e.equals(e2)) I was using `char[]` in the `HashSet` until I realized that, based on this contract, it was no better than an `ArrayList`! Since it's using the non-overridden `.equals()`, my arrays will only be checked for reference equality, which is not particularly useful. I know that `Arrays.equals()` exists, but that doesn't help when one is using collections such as `HashSet`. So my question is, why would Java arrays _not_ override equals?"} {"_id": "61433", "title": "Resources or advice useful coming to C# from python 2.7.1", "text": "I'm a Python monkey at heart. I eat, drink, sleep and dream in it, and have found that it's taught me much more about writing quality code than my degree in Computer Science ever did. However, I've been asked to write an automated testing project in C# at work to link in with Team Foundation Server. Having looked at the language, it looks very similar. I've noticed that it seems to be statically typed, but I never had any problems with that in C++ or Ada. I've tried looking online for tutorials etc., but they all seem to start from the basics. I was hoping to be able to find information on differences and code translations et al., similar to Norvig's Lisp-to-Python article. A search on Google hasn't rendered any results for me, so I was hoping to have a bit more luck here. * * * As a little background, I have about 4 years of Python and 2 of C++, but that was a few years ago, and bits and pieces of knowledge of other languages. I try not to mention BASIC any more... It is because of my extended current experience with Python that I wish to compare C# with it."} {"_id": "77511", "title": "How do I work badges into my open-source project?", "text": "Stack Overflow has an awesome set of badges that recognize a person's contribution to the community and allow an individual's capabilities to grow the more he contributes. What techniques can I use for open-source or coding projects? For example, a person's contribution to the code base earns them badges, along with an ever-increasing set of capabilities on the project?
> ### Moderator note > Providing a link to a recommendation is not enough: please provide detailed answers about _how_ to incorporate gamification elements into an open-source project. Any answer that doesn't do this **will be deleted.** > See Good Subjective, Bad Subjective for more information about the types of questions, and the types of answers, we're looking for on Programmers."} {"_id": "228999", "title": "Corporate website design", "text": "We are redesigning our corporate website with the following functions: * we want to give the marketing users the flexibility and freedom to add new static pages, change the static pages or the contents of static pages without the developer(s)/deployment team getting involved. * we want developers to work on the dynamic portion of the website. How do we architect the website to make sure that the layout/look and feel of the website stays the same across all the pages, and achieve the above-mentioned work? Any experience? Suggestions?"} {"_id": "151275", "title": "how to avoid workaholic tag", "text": "As we all know, a programmer just needs a computer and a network connection. When these things are at your disposal, you can program anywhere in the world. Now this is causing me a bit of a problem. Since it's not necessary to work only at your workplace, you can be asked at any time during your vacation or week off to help out the client with a reported bug. Besides that, if you do enjoy doing it as a pastime, anyone seeing you stare at the computer may treat you as a workaholic, which I don't enjoy. How do you make them realize that it's not just about the work? It can be a hobby also. In my understanding, a workaholic is a person who _works to earn_ but an enthusiast is one who _works to learn_."} {"_id": "252448", "title": "Representation of a question mark in variable names", "text": "I once in my childhood read an SF story in which it was assumed that the capital letter `'P'` would be a good representation for a question mark, if you cannot use that character directly, e.g. in variable names. However the story is quite old and things may have changed :) **Is this a good practice?** BTW: The story was called 'Enter P' or something like that. * * * ### Edit Additional questions that are answered below. **Is this only the writer's fantasy, or is it based on an actual language's habit?** **If yes, which language would that be?**"} {"_id": "28129", "title": "Is there a good site to hire programmers for little projects?", "text": "I need a piece of PHP code that goes beyond my abilities, so I need to hire somebody to do it. It's not really something too long or complex, so I wanted to know if there was a straightforward way to post my request and find somebody to do it for a certain amount. Does such a website exist, or what's the best alternative? Anyway, thanks a lot."} {"_id": "165181", "title": "Why is the use of the STE template no longer recommended for EF5?", "text": "I was looking to upgrade my project from EF4.1/Framework 4.0 to EF5/Framework 4.5. After reading up on migrating the T4 templates for STEs (Self-Tracking Entities), I came across this link, which indicates that STEs are no longer recommended.
Why is Microsoft no longer recommending the use of STEs?"} {"_id": "127101", "title": "Recommend a text that explains the physical implementation of C (stack, heap, etc)", "text": "One of the most popular questions on stackoverflow.com is this one: http://stackoverflow.com/questions/4025768/what-do-people-find-difficult-about-c-pointers Many of the answers seem to gravitate toward the fact that students often don't get a good understanding of what is physically going on when code executes... the most basic part of this seems to be the stack/heap. I've read through K&R twice and finally have a good understanding of the language, but I don't think that is enough. Nearly every tough question here has the word \"stack\" or \"heap\" in the answer, and while I have a limited understanding of what those mean just from context and Wikipedia searches, I prefer the comprehensive approach of a textbook. So, please recommend a textbook which explains things like the stack, the heap, memory allocation, etc. from a physical standpoint, as it relates to C programming. I tried asking this question on Stack Overflow and apparently it was not the right place to ask it. So I hope this site is a better location. If not, please consider migrating it to the proper Stack Exchange site. Thanks."} {"_id": "165185", "title": "What problems can arise from emulating concepts from other languages?", "text": "I've read many times on the web that if your language doesn't support some concept, for example object orientation or maybe function calls, and it's considered a good practice in that other context, you should emulate it. The only problem I can see now is that other programmers may find your code too different from the usual, making it hard for them to program. What other problems do you think may arise from this?"} {"_id": "165184", "title": "Is donationware a good monetization model for developers?", "text": "I've been developing for Android for about 2 years (and ~1 year for iOS), releasing freeware and open source applications (mostly because my AdSense account was disabled in 2010), but recently I had an idea for a great app that I wanted to make some money from, since it would take some effort to develop, and I would also like to test this \"commercial\" model to see if it could make me invest more time in improving and making my apps better. Since my AdSense account was disabled, and I will therefore not be able to sell it on the Google Play Store, I thought about making it donationware, so I would distribute it for free (and probably open source too), and users who really liked the app and wanted to give me thanks and an incentive to continue developing it could donate any amount of money. So, what's your experience with donationware? Is it worthwhile compared to paid apps?"} {"_id": "75919", "title": "Should package names be singular or plural?", "text": "Often, in libraries especially, packages contain classes that are organized around a single concept. Examples: **xml, sql, user, config, db**. I think we all feel pretty naturally that these packages are correct in the _singular_: > com.myproject. **xml**.Element > com.myproject. **sql**.Connection > com.myproject. **user**.User > com.myproject. **user**.UserFactory However, if I have a package that actually contains a **collection of implementations of a single type** - such as **tasks, rules, handlers, models, etc.** - which is preferable? > com.myproject. **tasks**.TakeOutGarbageTask > com.myproject. **tasks**.DoTheDishesTask > com.myproject.
**tasks**.PaintTheHouseTask or > com.myproject. **task**.TakeOutGarbageTask > com.myproject. **task**.DoTheDishesTask > com.myproject. **task**.PaintTheHouseTask"} {"_id": "38931", "title": "how can we improve documentation of project", "text": "We are planning to develop a database API for our project. I want to know how we can improve our documentation. The current structure of a document is: Introduction; Database Objects (a list of each object (table) with its field list); a List of Common Input Parameters; then, for each command of the database API: the input parameters of the command, the output parameters of the command, and an example of using the command. Please suggest how we can improve our documentation. :)"} {"_id": "48413", "title": "In Java, should I use \"final\" for parameters and locals even when I don't have to?", "text": "Java allows marking variables (fields / locals / parameters) as `final`, to prevent re-assigning into them. I find it very useful with fields, as it helps me quickly see whether some attributes - or an entire class - are meant to be immutable. On the other hand, I find it a lot less useful with locals and parameters, and usually I avoid marking them as `final` even if they will never be re-assigned into (with the obvious exception of when they need to be used in an inner class). Lately, however, I've come upon code which used final wherever it could, which I guess technically provides more information. No longer confident about my programming style, I wonder what the other advantages and disadvantages of applying `final` everywhere are, what the most common industry style is, and why."} {"_id": "115690", "title": "Why declare final variables inside methods?", "text": "Studying some classes of Android, I realized that most of the variables in methods are declared as final. Example code taken from the class android.widget.ListView: /** * @return Whether the list needs to show the top fading edge */ private boolean showingTopFadingEdge() { final int listTop = mScrollY + mListPadding.top; return (mFirstPosition > 0) || (getChildAt(0).getTop() > listTop); } /** * @return Whether the list needs to show the bottom fading edge */ private boolean showingBottomFadingEdge() { final int childCount = getChildCount(); final int bottomOfBottomChild = getChildAt(childCount - 1).getBottom(); final int lastVisiblePosition = mFirstPosition + childCount - 1; final int listBottom = mScrollY + getHeight() - mListPadding.bottom; return (lastVisiblePosition < mItemCount - 1) || (bottomOfBottomChild < listBottom); } What is the intention of using the final keyword in these cases?"} {"_id": "208122", "title": "Why do some software packages have an \"amd64\" suffix for 64-bit systems?", "text": "When downloading various software packages and executables for Windows, I always see two different types of executables to download. One just says `...32-bit` and the other always says `...amd64`. I know this has nothing to do with AMD, but it is referring to 64-bit operating systems, so why is this still the norm? Even large companies like Google and Ubuntu have packages set up like this. Thanks for any insight! ~Carpetfizz"} {"_id": "98692", "title": "Where are octals useful?", "text": "I just banged my head against the table for some 20 minutes looking at a totally weird bug in PHP, and then I realized there's octal. The <%(*&#> octal. In short, I padded some literals with zeros so the code would be aligned; I know, big mistake. I forgot about octals. The question is, does anyone use octals for anything other than file permissions?
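For concreteness, here is the trap as a minimal sketch (the class name is made up; Java is shown for illustration, and PHP treats leading-zero integer literals the same way):

// Illustrative sketch only: leading-zero literals are octal, not decimal.
public class OctalTrap {
    public static void main(String[] args) {
        int padded = 010;  // octal: value 8, not 10
        int plain = 10;    // decimal 10
        int perms = 0755;  // the classic legitimate use: Unix permission bits
        System.out.println(padded); // 8
        System.out.println(plain);  // 10
        System.out.println(perms);  // 493
    }
}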
(I personally prefer `chmod ugo+rwx`, but I understand that if they have to be programmatically generated, it's useful to use octals.) But are they useful in any other situation?"} {"_id": "208125", "title": "Tracking changes to posts", "text": "I'm currently in the process of writing a support ticket system... Let's say it's a small forum application, or something like UserVoice. Now I want my users to be able to edit their tickets, but still be able to see the old versions of them. Which route should I take in terms of database design? I've thought of something like: SupportTickets * Identifier * Author * Subject * SupportTicketContentId SupportTicketContents * Identifier * Content * CreatedOn Now, when viewing a ticket, I'd simply query the most recent `SupportTicketContent` depending on the `CreatedOn` column... But is there a better way? Or is there better naming for these tables?"} {"_id": "35513", "title": "Developing iOS apps as web developer", "text": "My boss has sold a few 'iPhone apps' to clients; we are a web development shop. I have explained to him that I do not know the first thing about them, but it's such a powerful buzzword and we need to meet clients' expectations. I do have some experience in C, Java and Python, which should help if I need to use Objective-C. I have even done a few Android tutorials. These apps will more or less be HTML; in my mind they are not real apps, but faux apps which have the same functionality as the clients' websites. To me a real app is something that uses the phone's hardware inputs and outputs: GPS, accelerometer, speaker, etc. What resources can I use to get up to speed on iOS development and on how to build apps in HTML? I have no idea where to begin."} {"_id": "35511", "title": "I love programming but never get projects started", "text": "I read this question and the most voted answer and said to myself \"Those things are what I want to do! But why the heck am I not doing them?\" The problem here is that I can't get anything started. I hope you get some kind of idea of what I'm interested in and can help me continue programming. I'm interested in embedded programming and I'd like to learn some ARM assembly. I think those could be combined with image processing. Also, I'd love to create programs that are actually useful to some people, but it seems like there are already multiple programs for pretty much every purpose. Maybe if I found an interesting program without open source alternatives I could get a great project to work on. How could I get myself to finally start a project or a few, and keep myself interested in them? I just want to get myself to do something I love."} {"_id": "114023", "title": "Speed of Java vs. JS / HTML / CSS for web applications", "text": "I am creating a web application. I have primarily used JavaScript, specifically jQuery. Because of some very specific functionality, I am running into practical limitations of JavaScript--they're not hard limitations, but stuff that I would find easy in Java, like making an equation editor where you can edit directly as opposed to entering TeX, is difficult in JS even using MathJax as a base. I'm going to have to build even more complex functionality that involves 3D and physics engines. For a large scale application like this--specifically one that involves 3D and physics engines--would Java be slower or faster than JavaScript when one is run within a browser? (Assume that code is written well in both cases.) Or is it completely uncertain--i.e. dependent on far too many specific variables?
Thanks."} {"_id": "236633", "title": "How do open-source licenses work when the application is for internal use only?", "text": "I am writing a MATLAB application that makes fairly heavy use of the MATLAB File Exchange. Most of the functions I use from there fall under the BSD license. My application is being deployed for _internal use only_ and is not meant for public consumption. While my particular case uses code that falls under the BSD license, generally speaking, what implications are there for using open-source code (GPL, BSD, etc.) for programs that will never go public? Do I have to include a license file to cover the code that was licensed as BSD?"} {"_id": "204813", "title": "MVVM pattern - Best design approach to manage an application", "text": "One year ago, I discovered the WPF technology and I developed a little application, as a first experiment, to compare the content of two different directories. The content of each directory is shown in a different DataGrid. At that time, I didn't develop the application using the MVVM pattern, because it was only an experiment; it was not so good, but it worked. Now that I have much more experience with WPF and with design patterns, I'm not so proud of that work, and I want to rewrite the application, improving it and following the recommended MVVM pattern. Actually, I designed the application in this way: * the MainModel, with the main common algorithm. * the MainView is the unique view of the application, with the datagrids where I can show the results of my algorithms. * a MainViewModel, which will take care of handling the connection between the other two ViewModels, the MainModel and the MainView. * One ViewModel for each DataGrid. In this way, I'll separate the behavior and the data of the two DataGrids. The MainModel contains the main algorithm used to search the files in a given directory, like this: public class MainModel { public MainModel() {} public List GetFiles(string directoryPath) } The MainViewModel will instantiate an object for each ViewModel (I have a ViewModel for each datagrid), using some properties to do it: public class MainViewModel { public MainViewModel() { FirstViewModel = new ViewModelDG1(); SecondViewModel = new ViewModelDG2(); } public ViewModelDG1 FirstViewModel {get;set;} public ViewModelDG2 SecondViewModel {get;set;} } I wanted to apply a design pattern to help me manage the application, like a Factory Method, a Builder, or a Template Method, but I think they're not suitable for my application, given the different usages/conditions of those design patterns. I'm not so sure of this design; I don't know if this is a correct implementation of the MVVM pattern and if it promotes loose coupling and high cohesion. Can you help me with the design? Is mine a correct implementation of MVVM? If not, then what is the correct one? What are your opinions? Thank you."} {"_id": "236630", "title": "Reusable and customizable charting library on top of d3js", "text": "I have started building a charting library on top of d3.js using JavaScript's inheritance. My goal is to develop reusable and fully customizable chart components. I read the article: Towards Reusable Charts. Then I went through the source code of NVD3. NVD3 uses boilerplate code in each chart, like copying and pasting definitions for width, height, margin, etc., BUT I would rather use inheritance to avoid such boilerplate code. I would like properties like dimensions and axes, and functions like zooming and event listeners, to be reusable among all charts and customizable through any instance (object) of the chart.
I would like properties like: dimensions, axes and functions like zooming and event listeners to be reusable among all charts and customizable through any instance (object) of the chart. Here is what my current library looks like. (It only supports bubble chart for now.) var BaseChart = function(){ this.width = 200; this.height = 200;//and others like: margin/ chart title } //XYChart inherits from BaseChart var XYChart = function(){ //It contains members for axes, axis groups, labels, ticks, domains and ranges //Example: One of the members it contains is \"this.yAxis\" this.yAxis = d3.svg.axis().scale(this.scaleY) .orient(\"left\") .ticks(this.yTickCnt) .tickSize(-this.width); } //Extends XYChart and adds zooming var ZoomableXYChart = function(){ this.zoom = function(){ this.svg.selectAll(this.dataShape) .attr(\"transform\", function(d, i) { var translate = that.calculateTranslation(d); return \"translate(\" + translate.x + \",\" + translate.y + \")\"; }); } } //Extends zoomable XY chart and adds support for drawing shapes on data points var BubbleChart = function(){ this.dataShape = \"path\"; this.drawChart = function() { this.prepareChart();//Attaches zooming, draws axes etc. var that = this; this.svg.selectAll(that.dataShape) .data(that.data) .enter() .append(that.dataShape) .attr(\"transform\", function(d) { return that.transformXYData(d); }) .attr(\"d\", symbolGenerator().size(100).type(function(d) { return that.generateShape(d); })) .attr(\"fill\", function(d) { return that.generateColor(d); }) .on(\"click\", function(d) { that.onClickListener(this, d); }) } } I can create a chart in this way: var chart = new BubbleChart(); chart.data = someData; chart.width = 900; chart.elementId = \"#myChart\"; chart.onClickListener = function(this,d){} chart.drawChart(); This solution allows me to use common properties and functions across all charts. In addition to that, it allows any instance of any chart to override default properties and functions (like: onClickListener). Do you see any limitations with such a solution? I haven't really seen javascript's inheritance used for d3.js and I wonder why? Is \"chaining\" that important? Using Mike Bostock's suggestions, how can we share functions like zooming across all XY charts? Isn't inheritance absolutely necessary to share functions and properties?"} {"_id": "193537", "title": "Ext JS licensing issue", "text": "First of all I'm not asking for legal advice but just checking if anyone agrees with my suspicions. That might help in convincing Sencha to change their license. Their commercial license says: > The Open Source version of the Software (\u201cGPL Version\u201d) is licensed under > the terms of the GNU General Public License versions 3.0 (\u201cGPL\u201d) and not > under this License Agreement. **If You, or another third party, has, at any > time, developed all (or any portions of) the Application(s) using the GPL > Version, You** may not combine such development work with the Software and > **must license such Application(s) (or any portions derived there from) > under the terms of the GNU General Public License version 3** , a copy of > which is located at http://www.gnu.org/copyleft/gpl.html. Meanwhile there exist libraries licensed under MIT or BSD, that use ExtJS. GeoExt is one example. It's likely some contributors of those libraries have not bought the commercial license of Ext JS, and are using its GPL-licensed version. Reasonably GPL-licensed Ext JS shouldn't \"infect\" GeoExt code if it's distributed without the Ext JS library. 
Now the problem is, if I use GeoExt, then likely \"portions\" of it **have** been \"developed using\" the GPL version. Specifically, Sencha clarifies what they mean by \"develop using\" in their FAQ: > The license prohibits combining code that you **develop using a GPL licensed version** of the software with code that you develop using a commercial licensed version. In other words, **you may not begin development with our GPL version and then \"convert\" it to our commercial version**. My interpretation is that it's not allowed by the Sencha commercial license to combine Ext JS with GeoExt and who knows what other libraries whose contributors at some point may have \"developed using\" a GPL version of Ext JS. This may or may not include OpenLayers, jQuery and other libraries developers would likely also need to use in their commercial projects. If Sencha then so desires, according to their commercial contract they can demand that commercial customers who combined Ext JS with other libraries release their software under the GPL, since the commercial license doesn't offer any other resolution. My question is: do you agree that this sounds like developing commercial software using Ext JS may be risky, or is my interpretation incorrect?"} {"_id": "165233", "title": "Event sourcing and persistence", "text": "I'm reading up on event sourcing and have a question regarding persistence. I can still have a DB with all entities, right? Or should the events be replayed every time the application is started, to get the latest version of each entity in memory? That seems like a waste on larger systems (as in, systems with large amounts of data). The point of event sourcing is that I can replay the events to populate a data store if required (or to analyze the data)?"} {"_id": "130491", "title": "Why is a Bayes classifier used for spam filtering?", "text": "I've been reading about Bayesian spam filtering and I think I understand the theory, but I just don't see why this approach is needed in order to calculate the likelihood of a message being spam, given that it contains a certain word. If we have a set of messages already classified by the user as either 'spam' or 'ham' and we receive a new message (containing the chosen word) which we want to classify, then surely all we need to do is divide the number of spam messages that contain the word by the total number of messages that contain the word... Why all the equations?"} {"_id": "230816", "title": "What data cannot be compressed by huffman codes?", "text": "What kind of data cannot be compressed by using Huffman codes, and why? I tried looking for the answer, but I only came across lossless and lossy compression. I don't really understand what types of data cannot be compressed by Huffman coding and why they can't be compressed using it."} {"_id": "130496", "title": "Is there another name for Sql Azure's programming language?", "text": "According to this page on MSDN, the SQL language used on SQL Azure is called **Transact-SQL** (the same as on SQL Server). But is this the only way to refer to programming on SQL Azure? The SQL Azure variant of Transact-SQL doesn't support a bunch of features, including global temporary tables, distributed queries and system tables, and has partial support in other areas.
So is calling it Transact-SQL appropriate, and does anyone know if another name has taken unofficial hold (azure-sql, a-sql ...)?"} {"_id": "27094", "title": "Should a contract programmer have a portfolio?", "text": "I realize that the more you move towards the \"freelance\" end of the spectrum, the more need there is for a portfolio (at least for web development), but I would think that for a structured contract job (like a 3-month contract-to-hire, for example) it would be a bit more professional to realize that a professional programmer spends most of their time writing proprietary applications for companies and thus has no portfolio of these to show off. The area where I live is not a major tech center, so a lot of employers and recruiters here seem a bit less than professional when it comes to tech stuff (but it is a big U.S. city), and I get, fairly often, recruiters asking if I have a portfolio to share, etc. Sure, I have done my own projects, but I don't keep them up to date, and the work that I do for my current employer is my main focus. Anyways, I guess this is just a generalized question about how to differentiate between \"freelance\" and \"contract\" (even though I thought I had been doing so on my resume)."} {"_id": "27096", "title": "When to Use Shared Libraries", "text": "Note: I'm not sure if this question is more suitable for Stack Overflow or Programmers. The thought process behind putting it here was that it doesn't actually relate to coding itself. I noticed a small freeware utility I have on my computer uses a couple of DLLs. From their names (\"RenderAllChunks\" and \"RenderSlice\"), it looks like they're being used just for specific functions. If outsourcing program-specific functions is really necessary, wouldn't it be better to just stuff them in a separate header file? It seems quite pointless to go through all the trouble of compiling, linking and distributing DLLs just for one function. * Is it bad to use shared libraries for small projects? * When/why are shared libraries preferable to static libraries (or even header files)?"} {"_id": "27091", "title": "If you take a year or two out from being a developer, is it really that hard to get back into it?", "text": "I've been working as a developer for about 3 years now (straight from uni), and I'm wondering: if I take a year or two out, would it be impossible to get back into the industry? I didn't get the gap year thing out of my system after uni, and I'm thinking that I should probably do it before I hit 30 (I'm 24 now). My main concern is that if I leave the industry now, I might not get back into it at all and end up working some dead-end job. The way I see things, general concepts / design patterns etc. remain similar over the years, and it is mostly coding syntax / actual implementation that evolves, so things shouldn't move on dramatically. Also, women developers (yes, there are some out there!) take years out to have kids and still carry on with their careers afterwards, so it can't be impossible. Ultimatum: Would taking a year or two out destroy the (small) career I've built up so far?"} {"_id": "51746", "title": "Structuring multi-threaded programs", "text": "Are there any canonical sources for learning how to structure multi-threaded programs? Even with all the concurrency utility classes that Java provides, I'm having a hard time properly structuring multi-threaded programs.
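To illustrate the kind of structure I am reaching for (a sketch only; the class and method names are made up), I would like all the threading confined to a single executor, with work exchanged through self-contained tasks:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: all threading lives in the executor, and the
// tasks share nothing mutable, so no code has to jump between threads.
public class StructuredWork {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks = Arrays.asList(
                    () -> expensiveStep(1),
                    () -> expensiveStep(2),
                    () -> expensiveStep(3));
            // invokeAll blocks until every task has completed.
            for (Future<Integer> result : pool.invokeAll(tasks)) {
                System.out.println(result.get());
            }
        } finally {
            pool.shutdown();
        }
    }

    private static int expensiveStep(int n) {
        return n * n; // stands in for real work
    }
}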
Whenever threads are involved my code becomes very brittle; any little change can potentially break the program, because the code that jumps back and forth between the threads tends to be very convoluted."} {"_id": "181312", "title": "Is there a synonym for \"Blittable\" that is more common?", "text": "Is there a more common name for a \"blittable\" data type? In my software there is a distinction between a variable-sized structure and a fixed-size structure that has behavior similar to \"blittable\", but I have only seen the name used in Microsoft software."} {"_id": "232539", "title": "How to create some kind of value for sentences?", "text": "![enter image description here](http://i.stack.imgur.com/qLzG4.png)I want to identify the best-matching sentence using some kind of pattern. That is, I want a Java algorithm that produces an identifying value for each sentence: every sentence fed into the algorithm should come out with some kind of identifying value. How can I develop this? What can I refer to? Do you know any websites, and what sort of sites should I look for? To clarify: when I give, for example, 5 sentences to the algorithm, it should generate 5 values. I then compare those values with previously generated values (which I store in my database) and get the gap between the 5 new values and the previously stored ones. From that distance I select the sentence with the lowest gap value as the most suitable one. I use this for my machine translation tool. As an example, suppose my rule-based translation model generates 2 sentences: 1\\. I want eat an apple. 2\\. I want eat a house. Assume my corpus includes more sentences, and I store values for the sentences in my database. (How to assign the values is the part I don't know yet.) I want to create a Java algorithm to assign a value to each sentence. For example, say sentence 1 gets the value 250.8 and sentence 2 gets 290.5, and the database contains the values 248, 400 and 800. Then I take the differences; the minimum difference is for 250.8, so the most suitable sentence is sentence 1."} {"_id": "255874", "title": "How to abide by the \"allocate in caller\" rule when the size is computed in the callee?", "text": "Let's say we have an opaque type `handle_t` that is managed through a simple interface with functions such as: handle_t *handle_init(void); int handle_do(handle_t *); void handle_free(handle_t *); The problem is that the size cannot always be determined in the initialization step, and it has to be calculated in `handle_do`. One solution I have seen is calling `handle_do` two times, the first of which returns the size needed to allocate, but doesn't that affect performance? Is it bad practice to allocate or reallocate memory in `handle_do`? Does it even matter when using handle types (since the init/free routines are made clear)?"} {"_id": "130143", "title": "Do programmers keep error reporting on or off?", "text": "Do PHP programmers keep error_reporting in php.ini on or off after delivering a website?"} {"_id": "25782", "title": "Best IDE for HTML, CSS, and JavaScript for Mac", "text": "I'm currently looking to move to using an IDE for web development. The options I'm considering are: 1. Aptana Studio 2. Coda 3. Espresso Please base your answers on the following criteria, in descending order of importance: 1. Supports HTML, CSS, JavaScript 2. Powerful (having good code completion, a good debugger, great syntax highlighting etc.) 3. Fast and light 4.
Supports HTML5, CSS3, and major JavaScript frameworks (jQuery or YUI) 5. Great design (both usability and aesthetics) 6. Supports PHP, Ruby, and Python 7. Has Git integrated I've updated the question to be more objective. I'm mainly looking for an answer that addresses how well each of the IDEs addresses my criteria."} {"_id": "193696", "title": "What is the difference between a stock-hardware and a micro-coded machine in \"A Critique of Common Lisp\"?", "text": "I was reading this article: A Critique of Common Lisp, and finding it hard to make out the precise definition of \"stock-hardware machine\" and its difference from \"micro-coded\" machines. I tried to search for a precise definition of the former, to no avail. My current understanding is that \"stock-hardware machine\" refers to the computer chipsets commonly available on the market (without much customization). Given that, I would like to ask: what is meant by a stock-hardware machine (in the context of the paper), and how does it differ from a micro-coded machine?"} {"_id": "224852", "title": "Best practice for copying static content between a web project and a self-hosted EXE project", "text": "I have a Visual Studio solution that has, amongst other projects, the following three: * **NamespacePrefix.NancyFX.csproj:** Some Nancy modules * **NamespacePrefix.NancyFX.IISHosting.csproj:** A web solution to host the Nancy modules * **NamespacePrefix.NancyFX.Selfhost.csproj:** An exe to host the Nancy modules In addition to REST services there is some HTML, JavaScript and CSS in the Views/, Scripts/, and Content/ folders. I want this content in both the IIS hosting and the self-hosting project. Currently I store the files in the web project, and copy them over to the self-hosting project by editing the csproj XML like so: This works fine, but is there a better way? This seems to give me the best IntelliSense, etc."} {"_id": "186837", "title": "How to avoid \"managers\" in my code", "text": "I'm currently re-designing my Entity System, for C++, and I have a lot of Managers. In my design, I have these classes in order to tie my library together. I've heard a lot of bad things when it comes to \"manager\" classes; perhaps I'm not naming my classes appropriately. However, I have no idea what else to name them. Most of the managers in my library are composed of these classes (although it does vary a little bit): * Container - a container for objects in the manager * Attributes - attributes for the objects in the manager In my new design for my library, I have these specific classes, in order to tie my library together. * ComponentManager - manages components in the Entity System * ComponentContainer * ComponentAttributes * Scene* - a reference to a Scene (see below) * SystemManager - manages systems in the Entity System * SystemContainer * Scene* - a reference to a Scene (see below) * EntityManager - manages entities in the Entity System * EntityPool - a pool of entities * EntityAttributes - attributes of an entity (this will only be accessible to ComponentContainer and System classes) * Scene* - a reference to a Scene (see below) * Scene - ties all the managers together * ComponentManager * SystemManager * EntityManager I was thinking of just putting all the containers/pools in the Scene class itself, i.e.
instead of this: Scene scene; // create a Scene // NOTE: // I technically could wrap this line in a createEntity() call in the Scene class Entity entity = scene.getEntityManager().getPool().create(); It would be this: Scene scene; // create a Scene Entity entity = scene.getEntityPool().create(); But I am unsure. If I were to do the latter, that would mean I would have a lot of objects and methods declared inside my Scene class. ## NOTES: 1. An entity system is simply a design that is used for games. It is composed of 3 major parts: components, entities, and systems. The components are simply data, which may be \"added\" to the entities in order to make the entities distinctive. An entity is represented by an integer. Systems contain the logic for entities with specific components. 2. The reason I'm changing the design of my library is that I think it can be improved quite a lot; I don't like the feel/flow of it at the moment."} {"_id": "224857", "title": "Question about moving to embedded systems", "text": "I currently work as a .NET developer and have coming up to 3 years' experience in the industry, as well as a degree in computer science, specifically software engineering (I know that means nothing, but I just wanted to explain that I'm not your general back-bedroom programmer). When I finished uni, I started contracting for a company. I won't mention their name, but some of you may have heard of them just from the description. Basically they take in fresh graduates, and in return for training and contracting experience you have to go to the clients they say for a minimum of two years. I can't say this was great, but I do feel I know more than people with the same years of experience as me. The problem is that I've now started heading down the route of web development because of the lack of choice in clients. How difficult is it to change from a .NET field to embedded/low-level programming? I have the enthusiasm for it and the drive, but I'm not quite sure how to go about moving. Unlike application development, knowing the language and having some business etiquette doesn't seem to be enough. > For example, hardware-wise I can set up a computer, but I wouldn't know how to > manually attach wires and program these systems. Is this kind of knowledge > necessary? Thank you in advance. Also, if anyone has similar experience of swapping, should I? tl;dr Developer that doesn't know how to get from web development roles into embedded systems. Edit: Thank you all for the advice."} {"_id": "183807", "title": "Can someone else patent my open-sourced algorithm?", "text": "I've written a recursive search algorithm to find the boundaries of a voxel data structure in order to render it more efficiently. I've looked around, and either it's such a simple and obvious technique that nobody's bothered to patent it, or it's novel and nobody's done it this way before. It's openly \"published\" on GitHub and protected under the GPL. I'd like to show it to others, to see if it can be improved, however... I fear that although I've written and published it, someone may attempt to patent the same idea. Am I safe, protected by the banners of open source software, or must I attempt to protect myself like the big guns and patent trolls do? It's my belief that software patents are evil, and that in order for the best software to be written, many eyes need to see it.
I'm worried this may be a rather na\u00efve viewpoint on how software is written, though, and I'm curious as to what others think."} {"_id": "140737", "title": "If you had two projects with the same specification and only one was developed using TDD how could you tell?", "text": "I was asked this question in an interview and it has been bugging me ever since. > You have two projects, both with the same specification but only one of > these projects was developed using Test Driven Development. You are given > the source for both but with the tests removed from the TDD project. How can > you tell which was developed using TDD? All I was able to muster up was something about the classes being more \u201cbroken up\u201d into smaller chunks and having more visible APIs - not my proudest moment. I would be very interested to hear a _good_ answer to this question."} {"_id": "128512", "title": "Suggested HTTP REST status code for 'request limit reached'", "text": "I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code `403 - Forbidden` was the one, but this, from the spec: > Authorization will not help and the request SHOULD NOT be repeated bothered me. It actually appears that `503 - Service Unavailable` is a better one to use - since it allows for the communication of a retry time through the use of the `Retry-After` header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code `402 - Payment Required` had been finalized!) - but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?"} {"_id": "140733", "title": "Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?", "text": "Paul Graham argues that: > It would be great if a startup could give us something of the old Moore's > Law back, by writing software that could make a large number of CPUs look to > the developer like one very fast CPU. ... The most ambitious is to try to do > it automatically: to write a compiler that will parallelize our code for us. > There's a name for this compiler, the sufficiently smart compiler, and it is > a byword for impossibility. But is it really impossible? Can someone provide a **concrete example** where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground: is there a **real-world problem that you have experienced** where a \"smart-enough\" compiler would have provided a real benefit, i.e. one that someone would pay for? **A good answer** is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick.
It might be 500ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: **candidate areas for further investigation**. (Hence, raytracing is off the list because all the low-hanging fruit have been feasted upon.) **I am not interested in why it cannot be done.** There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful."} {"_id": "27323", "title": "How to manage inquiries for more than one product?", "text": "I can't come up with the perfect title for this question, so please read my case and help me. We have 4 different products. Each has its own site and database, meaning they are totally independent. Now I want a common system that can manage \"contact us\" page inquiries. Moreover, I want to be able to pull the contact inquiries from each database, and I should even be able to send bulk mail to all of them, etc. So what kind of application is this? Is there any software available? If yes, which ones? Or do I need to create a new custom application? Please advise."} {"_id": "200318", "title": "Unit tests and language-native function calls", "text": "Is there a best practice for calling language-native functions when writing testable code? I have experimented a little with PHP code and have come up with two methodologies: * create a wrapper class for all native functions and mock this wrapper when writing tests * use \"namespace magic\": call all native functions without specifying the global namespace (`aFunction` instead of `\\aFunction`) and then write a function stub in the given namespace during testing, which will result in the code under test using the stub instead of the native function Is there a more commonly used approach? What are the benefits and shortcomings of the most commonly used methodologies to test language-native function calls?"} {"_id": "200319", "title": "Is SQL declarative?", "text": "I ask because so many of the questions I see in SQL amount to: \"This is slow. How do I speed it up?\" Or there are tutorials stating \"Do it this way and not that way, as it's faster\". It seems to me that a large part of SQL is knowing just how an expression would be performed and, from that knowledge, choosing expression styles that perform better. This doesn't square with one aspect of declarative programming - that of leaving the system to decide how best to perform the calculation, with you just specifying what the calculation should produce. Shouldn't an SQL engine not care whether you used `in`, `exists` or `join`? If it is truly declarative, shouldn't it just give you the correct answer, in reasonable time if possible, by any of the three methods? This last example is prompted by this recent post, which is of the type mentioned in my opening paragraph. **Indexes** I guess the easiest example I could have used relates to creating an index for a table. The gumph here on w3schools.com even tries to explain it as something unseen by the user that is there for performance reasons. Their description seems to put SQL indices in the non-declarative camp, and they are routinely added by hand for purely performance reasons.
Is it the case that there is somewhere an ideal SQL DB that is much more declarative than all the rest, but because it is that good, one doesn't hear about it?"} {"_id": "253277", "title": "\"Middle ground\" architecture for client-server iOS apps?", "text": "I see two obvious approaches to the architecture for an iOS app which needs to talk to a server to do its job. **Pretend to be a web browser** Under this approach, the app fetches a chunk of data (typically JSON) from the server in response to a user action (navigating to a new view controller, tapping an \"Update\", whatever), puts up a spinner or some other progress indicator, then updates the view when the request completes. The \"model\" classes are mapped directly from the incoming JSON, possibly even immutable, and thrown away when the user moves off the screen they were fetched to populate. Under the \"pure\" version of this approach you set the appropriate headers on the server and let NSURLSession and friends handle caching. This works fine if you can assume the user has fast, low-latency network connectivity, and is mostly reading data from the server and not writing. **Sync** With the \"sync\" approach, the app maintains a local Core Data store of model objects to display to the user. It refreshes this either in response to user action or periodically, and updates the UI (via KVO or similar) when the model changes. When the user modifies the model, these changes are sent to the server asynchronously; there must be a mechanism for resolving conflicts. This approach is more suitable when the app needs to function offline or in high-latency/slow-network contexts, or where the user is writing a lot of data and needs to see model changes without waiting for a server round-trip. * * * Is there a third, in-between way? What if I have an app which is _mostly_, but not always, reads, and where, when there are writes, the user should see those updates reflected in the UI immediately? The app should be usable in low/no-network situations (showing any previously cached data until a request to the server has time to respond). Can I get the best of both worlds without going full Core-Data-with-sync (which feels heavyweight and is difficult to get right), but without also inadvertently implementing a buggy and incomplete clone of Core Data?"} {"_id": "253275", "title": "Best approach to testing RavenDB database query speed", "text": "I've searched the internet for a while but am not able to find any examples of how to test RavenDB query speed. What I'm trying to achieve is to compare two session.Query calls and find out which of the two performs best. How can I do that? //Thanks EDIT: I'm building an MVC notes app where a user can create an account and save notes. Let's say I have these two classes: public class SingleNote : ContentPage { public string Header { get; set; } public string Note { get; set; } public string Category { get; set; } } And this one: public class LoginViewModel { [Required] [Display(Name = \"Username\")] public string UserName { get; set; } [Required] [DataType(DataType.Password)] [Display(Name = \"password\")] public string Password { get; set; } } Is it best to put a List of the user's SingleNotes in the LoginViewModel and store all the user's notes there, or should I put a property in the SingleNote Raven document that refers to the user? What I'm then trying to do is test these two different types of queries and see which of them gets the best performance.
So, can I do some testing of that and compare which of these two queries performs best? Case 1: where I have put the prop string UserThatOwnsTheDoc in the SingleNote class: `var listOfSpecificUsersDocuments = RavenSession.Query<SingleNote>() .Where(o => o.UserThatOwnsTheDoc == User.Identity.Name) .ToList();` Case 2: where I have put the prop List<SingleNote> SingleNotes in the LoginViewModel: var userDocumentWitchIncludesAListOfSingleNotes = RavenSession.Load<LoginViewModel>(\"UserName/1\");"} {"_id": "27328", "title": "RSpec vs Test::Unit in Rails", "text": "I've never been really convinced of the advantages that you get by switching over to RSpec from Test::Unit in Ruby on Rails (despite reading from time to time about RSpec). What is it about RSpec that makes most Rails projects use it? (some code examples clearly indicating the advantages of one over the other would be much appreciated)"} {"_id": "154931", "title": "One single compressed JS file vs compressed RequireJS module files", "text": "I just started using RequireJS and I love it. I have one concern though. I've been compressing all my JS files into one single file. Even with the RequireJS optimizer, I need to load module files from the server from time to time, and I'm concerned about that. Performance- and user-experience-wise, which one is better?"} {"_id": "134643", "title": "What factors should I consider when choosing names for identifiers?", "text": "What factors do I need to consider when choosing names for identifiers such as variables? I am concerned about space issues, i.e. extra memory consumption when choosing longer names. As an example, take these two variables: bool noExp = true; bool willNotExpireEver = true; Each one will take up memory the size of `bool`. But what about the variable names? They are, after all, characters that have to be stored somewhere. Where does the space for them get allocated? Am I wasting space by choosing longer names?"} {"_id": "193786", "title": "Map of functions vs switch statement", "text": "I'm working on a project that processes requests, and there are two components to a request: the command and the parameters. The handler for each command is very simple (< 10 lines, often < 5). There are at least 20 commands, and there will likely be more than 50. I've come up with a couple of solutions: * one big switch/if-else on commands * map of commands to functions * map of commands to static classes/singletons Each command does a little error checking, and the only bit that can be abstracted out is checking for the number of parameters, which is defined for each command. What would be the best solution to this problem, and why? I'm also open to any design patterns I may have missed.
I've come up with the following pro/con list for each: **switch** * pros * keeps all commands in one function; since they're simple, this makes it a visual lookup table * no need to clutter up the source with tons of small functions/classes that will only get used in one place * cons * very long * hard to add commands programmatically (need to chain using the default case) **map commands -> function** * pros * small, bite-size chunks * can add/remove commands programmatically * cons * if done in-line, visually the same as a switch * if not done in-line, lots of functions only used in one place **map commands -> static class/singleton** * pros * can use polymorphism to handle simple error checking (only like 3 lines, but still) * similar benefits to the map -> function solution * cons * lots of very small classes will clutter the project * implementation not all in the same place, so it's not as easy to scan implementations **Extra notes:** I'm writing this in Go, but I don't think the solution is language-specific. I'm looking for a more general solution because I may need to do something very similar in other languages. A command is a string, but I can easily map it to a number if convenient. The function signature is something like: Reply Command(List params) Go has top-level functions, and other platforms I'm considering also have top-level functions, hence the difference between the second and third options."} {"_id": "146771", "title": "Can the Strategy pattern be implemented without significant branching?", "text": "The Strategy pattern works well to avoid huge if...else constructs and make it easier to add or replace functionality. However, it still leaves one flaw, in my opinion. It seems like in every implementation there still needs to be a branching construct. It might be a factory or a data file. As an example, take an ordering system. Factory: // All of these classes implement OrderStrategy switch (orderType) { case NEW_ORDER: return new NewOrder(); case CANCELLATION: return new Cancellation(); case RETURN: return new Return(); } The code after this doesn't need to worry, and there is only one place to add a new order type now, but this section of code still isn't extensible. Pulling it out into a data file helps readability somewhat (debatable, I know): com.company.NewOrder com.company.Cancellation com.company.Return But this still adds boilerplate code to process the data file - granted, more easily unit-testable and relatively stable code, but additional complexity nonetheless. Also, this sort of construct doesn't integration-test well. Each individual strategy may be easier to test now, but every new strategy you add is additional complexity to test. It's less than you would have if you _hadn't_ used the pattern, but it's still there. Is there a way to implement the Strategy pattern that mitigates this complexity? Or is this just as simple as it gets, and trying to go further would only add another layer of abstraction for little to no benefit?"} {"_id": "185566", "title": "Does a BSD-licensed project need a signed statement from each contributor?", "text": "Today I read on Fossil SCM's mailing list: > The problem with BSD is that you really should get a signed form from each > contributor stating that their contribution is BSD. This is automatic with > GPL, since releasing your contributions under a compatible license is a > prerequisite for viewing the code in GPL. This makes GPL great for a highly > collaborative environment, with lots of contributors.
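A minimal sketch of the "map of commands to functions" option weighed in question 193786 above; it also speaks to the residual branching worried about in 146771, since handlers register themselves in a map and no switch remains to maintain. Java is used purely for illustration (the asker mentions Go but calls the problem language-independent), and all names are invented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical command dispatcher: a map from command name to handler
// replaces the big switch, and commands can be added or removed at runtime.
class Dispatcher {
    private final Map<String, Function<List<String>, String>> handlers = new HashMap<>();

    void register(String command, Function<List<String>, String> handler) {
        handlers.put(command, handler);
    }

    String dispatch(String command, List<String> params) {
        Function<List<String>, String> h = handlers.get(command);
        if (h == null) return "ERROR: unknown command " + command; // the old "default" case
        return h.apply(params);
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        d.register("echo", params -> String.join(" ", params));
        d.register("count", params -> Integer.toString(params.size()));
        System.out.println(d.dispatch("echo", List.of("hello", "world"))); // hello world
        System.out.println(d.dispatch("count", List.of("a", "b", "c")));   // 3
    }
}
```

The shared parameter-count check the asker mentions could be stored next to each handler (a small record holding the expected arity plus the function), keeping even that bit of error handling out of the handlers themselves.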
BSD is more permissive > (less burdensome) for readers, but that makes it a little tougher for > writers, since they now have to send in some paperwork. Could someone explain why, and what the possible consequences are of not having such a signed statement from a project's contributors?"} {"_id": "53468", "title": "WPF vs. WinForms - a Delphi programmer's perspective?", "text": "I have read most of the major threads on WPF vs. WinForms, and I find myself stuck in the unfortunate ambivalence you can fall into when deciding between the tried and true previous tech (WinForms) and its successor (WPF). I am a veteran Delphi programmer of many years who is finally making the jump to C#. My fellow Delphi programmers out there will understand that I am excited to know that Anders Hejlsberg, of Delphi fame, was the architect behind C#. I have a strong addiction to Delphi's VCL custom components, especially those involved in making multi-step wizards and components that act as a container for child components. With that background, I am hoping that those of you who switched from Delphi to C# can help me with my WinForms vs. WPF decision for writing my initial applications. Note, I am very impatient when coding, and things like full-fledged auto-complete and proper debugger support can make or break a project for me, including being able to find readily available information on API features and calls and, even more so, workarounds for bugs. The SO threads and comments in the early 2009 date range give me great concern over WPF when it comes to potential frustrations that could mar my C# UI development. On the other hand, spending an inordinate amount of time learning an API tech that is, even if it is not abandoned, soon to be replaced (WinForms), is equally troubling, and I do find the GPU support in WPF tantalizing. Hence my ambivalence. Since I haven't learned either tech yet, I have a rare opportunity to get a fresh start and not have to face the big \"unlearning\" curve I've seen people mention in various threads when a WinForms programmer makes the move to WPF. On the other hand, if using WPF will just be too frustrating or have other major negative consequences for an impatient RAD developer like myself, then I'll just stick with WinForms until WPF reaches the same level of support and ease of use. To give you a concrete insight into my psychology as a programmer, I used VB and subsequently Delphi to completely avoid the very real pain of coding with MFC, a Windows UI library that many developers suffered through while developing early Windows apps. I have never regretted my luck in avoiding MFC. It would also be comforting to know if Anders Hejlsberg had a hand in the architecture of WPF and/or WinForms, and if there are any disparities in the creative vision and ease of use embodied in either code base. Finally, for the Delphi programmers again, let me know how much \"IDE shock\" I'm in for when using WPF as opposed to WinForms, especially when it comes to debugger support. Any job market comments updated for 2011 would be appreciated too."} {"_id": "139343", "title": "How to avoid halting of the JVM due to a deadlock in Java?", "text": "I have read many websites talking about how to avoid deadlocks, how to design around them, etc. I completely understand those strategies. My question is based on the following preconditions: 1. You have a company with thousands of developers. 2. There are different teams working on the same product but as modules. 3.
New developers writing new code without knowing the overall system; please consider an enterprise application. 4. Highly available software development, where a downtime of 15 minutes is considered an SLA violation. I could write a few more preconditions, but I think these are strong enough to support my question about why I might need a recovery strategy for a \"deadlock\" in software. Please note that re-designing the modules whenever we find a deadlock is not realistic. Now, this being said, can someone take some time to provide input or brainstorm an idea of how to resolve a deadlock if it happens at all, so that we can report it and move forward instead of halting completely? 1. Run a deadlock detector that runs periodically to look for deadlocks in the system. 2. If a deadlock is detected, notify with an event to resolve the deadlock. 3. The deadlock event listener will then kick in and act upon the deadlocked threads. 4. For each thread, identify the contention. 5. Write an intelligent algorithm that could either release the locks and kill the thread, or release the locks and re-evaluate the thread. 6. In step 2 we handle the notification in multiple ways, of which logging is one listener. I know how to go about steps 1, 2 and 6. I will need help with 3, 4 and 5. I know that Oracle RDBMS already has a deadlock detection and resolution strategy in place; I wonder if they would ever share their strategies in this thread :) Can't add my comment as an answer so adding it as a comment here. ================================================================= I completely understand the risk of killing the threads. I was 100% certain that I would get answers like this, but I was also hoping that someone would suggest something new. I'll keep the thread open, as there is no answer here that I did not already know. Thank you very much for trying, though."} {"_id": "74358", "title": "Do Makefiles Matter?", "text": "I'm currently working on a small library as a hobby project. I am the only one who actively codes for it, but a few of my friends have expressed interest in participating in the future. When using the library for my own purposes, I usually just add the appropriate source files to my project using an IDE. (E.g., dragging a `.h` and `.cpp` file into Xcode.) As the library grows in size and complexity, however, I've been trying to move towards a more professional approach to organization. I've considered setting up some sort of makefile that future participants can use to compile the library into a \"monolithic\" library file. This might be beneficial to me as well, since I code on multiple computers. Would such a venture be worth the effort? Note: I tried looking at the Boost library to see how they do things, but it's pretty difficult to navigate without any previous experience and I had trouble making sense of the structure."} {"_id": "224758", "title": "Maximizing the number of nested triangles", "text": "There is a finite set of points on a plane (about 1500 points in the current task). I need to construct triangles on those points such that each triangle lies completely inside one larger triangle: ![enter image description here](http://i.stack.imgur.com/GSvTE.png) Now I want an algorithm to maximize the number of such triangles. Where can I start? I think one of the ways would be to choose a \"center\" and to find three directions from this center with the most point density near those directions.
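The detection half of the scheme described in question 139343 above already ships with the JVM: the standard ThreadMXBean API exposes a deadlock detector. A minimal watchdog sketch follows; the polling interval and class name are made up, and the hard part (resolution, steps 3 to 5) is deliberately left as a comment because that is the open question:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically polls the JVM's built-in deadlock detection (steps 1-2 of the
// question's scheme) and hands any deadlocked threads to a listener.
class DeadlockWatchdog {
    private final ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void start() {
        scheduler.scheduleAtFixedRate(this::check, 10, 10, TimeUnit.SECONDS);
    }

    private void check() {
        long[] ids = tmx.findDeadlockedThreads(); // null when no deadlock exists
        if (ids == null) return;
        ThreadInfo[] infos = tmx.getThreadInfo(ids, true, true);
        for (ThreadInfo info : infos) {
            // Step 6: log/notify. Steps 3-5 (identify the contention, then
            // release or kill) would hook in here; that is the hard part.
            System.err.println("Deadlocked: " + info.getThreadName()
                    + " waiting on " + info.getLockName());
        }
    }
}
```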
Or just choose an arbitrary center triangle manually and then iteratively choose a larger triangle with the least area or the least distance."} {"_id": "74351", "title": "How can I convince my company to move to MVC?", "text": "I currently write web apps using ASP.NET Web Forms, and getting my company to move to another technology is like [insert funny line here]. I would really like to start writing apps using MVC, but they fear any type of change. What is the best way to convince/ease them into using MVC? I guess this goes for moving to any new technology. ### Update Decided to go the rogue developer route and just started using it. I recreated a small app in MVC and learned the ropes that way, and moved up from there."} {"_id": "85595", "title": "What method is the easiest/fastest way for developing Android apps?", "text": "Which one of these is the easiest/fastest choice for developing simple Android apps/games? I'm familiar with all these technologies. \u2022 Android SDK or NDK \u2022 C# on the MonoDroid platform \u2022 Adobe Flash & Adobe AIR \u2022 Corona SDK \u2022 Converting tools like PhoneGap Fundamentally, could you explain the advantages and disadvantages of these methods?"} {"_id": "85596", "title": "Architecture diagram of a computer virus", "text": "I am looking for an architecture diagram of a computer virus. Does anyone have a link to a good example? **Edit** Looks like I am getting hammered with downvotes. I agree that there is no single architecture for a virus. But some things must be included for a program to be a virus. Example of components in the SAD: * Replication method * Trigger * Payload * Hosts targeted * Vulnerabilities targeted * Anti-detection method **Edit 2** Found a good example here: http://static.northpole.fi/download/1318959037124281/w32_duqu_the_precursor_to_the_next_stuxnet.pdf"} {"_id": "166763", "title": "Service to test app on all the iPhones?", "text": "I have some developers creating an iPhone app, but often the app will not work on one type of iPhone even though it worked on another one running the same version of iOS. Therefore, I am looking for a service where I can test the app natively on all the iPhone versions running various versions of iOS. I would like to be able to interact with the iPhones myself, so that I know that a specific bug has actually been fixed before pushing to the App Store and waiting 9 days for the review, only to hear the sad news from customers. Googling got me nowhere. Do such services exist?"} {"_id": "166768", "title": "Abstract class + Inheritance vs Interface", "text": "Hello fellow programmers, I am reading a book on C# and the author is comparing abstract classes and interfaces. He claims that if you have the following abstract class: abstract class CloneableType { public abstract object Clone(); } then you cannot do this: public class MiniVan : Car, CloneableType {} This, I understand. However, he claims that because of this inability to do multiple inheritance, you should use an interface for CloneableType, like so: public interface ICloneable { object Clone(); } My question is: isn't this somewhat misleading? You can create an abstract class which is \"above\" class Car with the method Clone, then have Car inherit that class, and then MiniVan will inherit Car with all these methods: CloneableType class -> Car class -> MiniVan class. What do you think?
Thanks."} {"_id": "205493", "title": "What's the best approach to handling JavaScript/AJAX code in a project?", "text": "In a general sense, for medium/big projects, what's the best way to handle JavaScript/AJAX code? Is it better to have a script file in which to put each script, or to add the script directly into the HTML file between script tags? I know it doesn't make any difference in terms of the result, but I would like to understand which is easier to maintain in the long term."} {"_id": "89721", "title": "Is the MVC pattern used in industry a lot? What's all the hype?", "text": "I'm a student, but I am hopefully moving into the software industry soon. There seems to be a lot of hype about the MVC software pattern. I noticed that PHP frameworks are often MVC; what about non-web languages, is it the same with them? For my master's dissertation (C++), I chose the MVC pattern because it nicely separates out the logic and user interface. In industry, is it used a lot? If so, what are the main reasons, and what are other competing, popular designs?"} {"_id": "70638", "title": "When the current sprint consists of mostly spikes, should the sprint be shortened?", "text": "Spikes are mostly quick research tasks aimed at establishing the feasibility of, or prototyping, some idea. Thus, they are structured so that the tasks either finish fast or fail fast. However, when a dozen or so spikes are added to the current sprint, there is a possibility that several of them will fail fast altogether. Let's say there are 4 possible ways of doing something, and the task is to choose the most suitable one and come up with a prototype. If the first attempt fails, then try the next possible way, and so on. If the first pick is already a clear winner, then the next 3 aren't necessary anymore. So, when the current sprint consists mostly of spikes, does it make sense to also shorten the sprint time to maybe a few days, so that reprioritization can happen more frequently?"} {"_id": "204362", "title": "Can the version of Git vary between team members?", "text": "We are about 40 developers working on the same code base, and we use Git for version control. My question is: can there be problems if, for example, some developers have a much older version of Git installed on their systems? Or should we try to enforce some rule that says something like \"you should upgrade Git on your machine to the latest version at least once per year\"? Maybe in older versions of Git, the structure of the objects may be slightly different. Or there may be some bugs in the algorithm that calculates which lines were added/removed from files. These issues could cause corrupt repositories or different values for the SHA-1 hashes in places where they should have the same value.1 Obviously, since Git is a distributed VCS, a corrupt repository won't ever mean critical loss of data, since there are 40 other people from whom you can clone a new copy. So it's more a curiosity than a concern. I suspect that backwards compatibility is something extremely important when it comes to releasing a new version of Git; but still, the potential issues mentioned above are a possibility. 1 = as far as I know, we had no such problems... yet."} {"_id": "101910", "title": "Incanter for real-world statistical projects", "text": "I'm interested in statistical computing. **R** is obviously the leading platform, but what about Incanter? Incanter is at the top of my list since I'm a Clojure and JVM guy. Do you have any real-world projects with Incanter?
Is it a proven platform, or do I have to fall back to **R**? What are the advantages of **R** over Incanter? What am I going to miss?"} {"_id": "71328", "title": "What cannot be unit-tested in a mobile app?", "text": "I really value unit testing in developing web apps. I haven't had any experience in developing mobile apps. Is there anything that cannot be unit-tested in mobile apps? And what is the workaround for this? Are there any common gotchas that we must be aware of when unit-testing mobile apps?"} {"_id": "33115", "title": "What is \"Simplified JavaScript\"?", "text": "Douglas Crockford makes reference to **Simplified JavaScript** in this article about **Top Down Operator Precedence**. Unfortunately he only makes references to it in this article. The best explanation I found is **here**. But I still don't have a clue what it actually is. **What is Simplified JavaScript?**"} {"_id": "71323", "title": "Transition from a \"look ma, I can do this\" web developer to someone who creates products that people and businesses can rely on", "text": "I have been doing web development for the past 6 years. But somehow, I never got the feeling that I did a good job. I always felt that my code was not production quality. I felt like someone who delivers sub-par products. An analogy: me developing apps for paying clients is like.... a 14-year-old who just knows how to shoot and move around the basketball court without falling down, playing among professionals in the NBA. That should be an accurate description :| **I wouldn't buy from me, if I were the client.** I want to move from this current position to a stage where I feel confident that my websites and web apps are \"professional\", \"secure\", \"scalable\" (insert all requisites of a product that is worth paying for or relying on for your business). **My question is:** **1\\. What are the things I should learn, and..** **2\\. From where can I learn those..** **..in order to create truly professional websites and web applications that other people/businesses can rely upon?** I am sick of feeling like an amateur even after so many years. I want help in getting started. I can learn if I know _what_ to learn. I can learn given enough time and things to experiment with, but I don't want to make my clients guinea pigs. **Here is what I know and don't know:** * I know or can learn the needed syntax in C# to convert the required business logic from concept to working code. * I can write somewhat complex select queries and even a few joins if needed to fetch data. Of course I can insert, update, delete. But that's it. I know nothing else in SQL Server. * I know enough web front-end technologies to develop a good UI. This is not a problem area. * I know enough about hosting/domains to register and buy hosting and point servers at each other so that example.com actually loads the website. Not much more than that. * For all practical purposes, ZERO knowledge or experience in handling security and server load. * No idea about caching, at any level. * I only know the coding best practices, dos and don'ts. I don't know the same for real-world apps that thousands of people are going to use. * Every time I read a question on SO that has anything to do with a production application, the answers and the question itself are all Greek and Latin to me. I feel inspired that there is so much to learn, but I can't figure out how to start. * * * I will primarily be working with the Microsoft stack.
So any answer specific to it will also be great."} {"_id": "121888", "title": "Is Java a good choice for cross-platform games?", "text": "I'm looking to create a game in Java and would like it to work on Windows, Linux, and Mac. I'm pretty sure C# is a bad choice for this, and I don't have enough experience in C or C++. I want to stay away from Flash. Therefore, is Java a good choice for me? Mostly, I use C# and think that Java is similar, so I assume it won't be that hard to learn. But is it fast enough? Is there a language more suited to my needs than Java?"} {"_id": "84171", "title": "Providing Estimates When Working With Unfamiliar Technology?", "text": "I was presented with a new problem recently: to provide an estimate for a project in which I must use a framework (and potentially bits of another framework) that I am unfamiliar with. It's much easier for me to provide estimates when I'm at liberty to use what I'm familiar with, but it was as though a crippling paralysis by analysis had kicked in when an estimate was requested for work in unfamiliar territory. My solution, in retrospect, was the wrong one: I merely began working. How might I better estimate projects and tasks when I am required to work with unfamiliar languages/technologies/frameworks?"} {"_id": "113344", "title": "Secure way to remember usernames and passwords", "text": "Maybe this has already been discussed. After browsing through some similar questions here about password management, I would still like to ask this question. If there are 100 sites which require a username and password to log in, it would be a nightmare to make up a different username and password for each just for uniqueness. However, if we use the same username and password everywhere, we expose ourselves to cross-site attacks. Maybe we can make up 100 different usernames and passwords, but how can we remember them all? The brain is not trustworthy. Professionals recommend not writing down any password on paper. And then there is password management software. I've never used such software, so I am rather blind there. Do you trust those things? Imagine someone stole your computer and tried to break into that software: do you run the risk of losing your whole identity?"} {"_id": "123428", "title": "Is Google App Engine's level of support robust enough for real-world production?", "text": "Google App Engine has been great for trying out ideas and learning stuff, but so far I haven't seen much confidence in the community in using it for production applications. One significant issue that has come up over and over again is that when things go wrong, it's nearly impossible to actually talk to anyone at Google. This is really scary if your company is depending on this service for its production app. However, their literature on Premier accounts looks promising, with the promise of better levels of support. What has been your experience with using Google App Engine when it comes down to resolving support issues? Does it really take 4 hours to just acknowledge a P1?"} {"_id": "70981", "title": "Should I provide some way to disable my software post-delivery?", "text": "I'm debating the value of putting remote disable (or destruct) functions into my software before delivering it to clients... As a hypothetical example, consider the development of a Silverlight app where you're worried about the client not paying you. You create a function which, when a specific query string is entered, deletes everything in the database. Destroying data might be a bit of an extreme example.
Making the application either partially or completely unusable would be another example. What are the benefits and risks of doing this? If you've done it, why did you do it, and how did you go about it?"} {"_id": "123422", "title": "Mask OAuth API key and token for pure client-side technologies", "text": "If I were to build a Twitter or Facebook application using pure client-side technologies like HTML and JavaScript, how would I mask/hide my API keys? For example, for Twitter I have a consumer key and consumer secret. In order for me to call Twitter's API, I'll have to pass these keys to authenticate my app. If I am using pure client-side technologies, I leave myself exposed. Therefore, the API keys are up for grabs and anyone can authenticate as my app. How can I prevent this? Can I prevent this at all? This question is very similar to this thread; however, it's not a desktop application."} {"_id": "126126", "title": "Transitioning from being a bespoke development house to a COTS development house", "text": "Currently, one of the major applications that our organisation produces would be regarded as bespoke software, as it is specifically designed for one specific client organisation. However, we have specifically retained the intellectual capital on the project, and a number of different organisations in different countries (with different languages and character sets) now look like they want to purchase our software solution. Our current approach to development passes 9 \u00bd out of the Joel Test and is improving. However, our approach is all about satisfying the requirements of one specific client and responding quickly to their needs (regardless of whether something is a bug or a change request). If we want to transition to producing more of an 'Off the Shelf' product with multiple client organisations---rather than just one client organisation---on a common code base, what are the major differences and what pitfalls do we need to be aware of to successfully make the transition?"} {"_id": "180860", "title": "How to refer to specific areas of code in documentation?", "text": "I'm about to leave a project, and before I go my boss has asked me to document the code (I've not documented it very well). It's not a big deal; the project is not terribly complex. But I'm finding places in my documentation where I would like to say, \"Notice on line XYZ that such-and-such happens.\" In this case, it doesn't make sense to refer to a specific line number, since adding or deleting a single line of code would immediately outdate the documentation. Is there some best practice for referring to a specific line of code without referring to it by line number? I've considered littering the code with comments like: /* linetag 38FECD4F */ Where \"38FECD4F\" is some unique tag for that specific line. Then I can put in the documentation, \"On the line tagged '38FECD4F', notice that such-and-such happens.\" Any other ideas? I feel like it's generally better to document code units as a whole, rather than specific portions of them, but in the case of this particular project there are LONG swaths of procedural code which have never been refactored into smaller units."} {"_id": "159810", "title": "Is the STL implemented with OO?", "text": "There are several design patterns, like Adapter and Iterator, implemented in the STL. Does that mean the STL is implemented with OO concepts? What is the relationship between the OO and template parts of C++?
I learned that virtual member functions, which are what justify the OO label, are in conflict with templates; is this correct?"} {"_id": "182039", "title": "Selling a Joomla extension with a third-party jQuery plugin", "text": "Let's suppose I want to create an extension for Joomla that uses a jQuery plugin or any other third-party library. Imagine that I create a Joomla module that uses the jQuery Nivo Slider plugin (MIT License) http://nivo.dev7studios.com/license/. I'm not sure if I could sell this module with this license. Could I sell it if the plugin had a GPL license?"} {"_id": "182037", "title": "Is this XOR value swap algorithm still in use or useful?", "text": "When I first started working, a mainframe assembler programmer showed me how they swapped two values without using the traditional algorithm of: a = 0xBABE b = 0xFADE temp = a a = b b = temp What they used to swap two values - from a bit to a large buffer - was: a = 0xBABE b = 0xFADE a = a XOR b b = b XOR a a = a XOR b now b == 0xBABE a == 0xFADE which swapped the contents of the 2 objects without the need for a third temp holding space. My question is: is this XOR swap algorithm still in use, and where is it still applicable?"} {"_id": "182034", "title": "Were there major changes in testing practices in ASP.NET between 3.5 and 4.5?", "text": "Obviously, testing methods are language-independent. An integration test stays an integration test no matter what the technology. But platforms implement some kinds of testing support. And the details vary between implementations, so a programmer has to learn how to use the tools of their framework. And platforms evolve, so best practices in one version may become obsolete or at least inefficient in the next version. See for example this StackOverflow question about unit tests, for which at least the first three answers seem to be valid: one describing the correct solution for the current version, one describing the solution which used to be best before the .NET framework included an ExpectedException attribute, and one describing a generic solution which is tool-independent. My question is: were there any major testing-related changes between .NET framework versions 3.5 and 4.5? Something as disruptive to testing as introducing generics was to programming in general, or introducing the Membership framework was to authentication? Or were there only small enhancing changes? I am asking this because I wonder whether learning materials written for 3.5 are still reasonably valid today."} {"_id": "182032", "title": "What phases of a traditional waterfall project should Scrum replace?", "text": "My team and I are using Scrum to manage software development projects. In my company there has been a move towards more structure in all IT projects using a waterfall methodology. This is fine by me, as there are lots of non-software-development projects which will benefit greatly from this, and it is not threatening our use of Scrum. However, we need to work out how our Scrum processes (which do a great job of taking a Product Owner's requirements and turning them into working software) fit into the broader project process. Our new waterfall project process includes explicit activities for the following things (not necessarily sequentially and not necessarily in this order): 1. Production and approval of business case. 2. Resourcing. 3. Requirements analysis. 4. Design. 5. Build. 6. Test. 7. Training. 8. Communications. 9. Go live. 10. Risk & Issue Management. 11. Benefits realisation.
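A runnable version of the XOR swap from question 182037 above, in Java purely for illustration. Two well-known caveats are worth showing alongside it: if the two operands alias the same storage, the value is zeroed instead of swapped, and on modern CPUs the ordinary temp-based swap is usually at least as fast:

```java
// Demonstrates the XOR swap trick and its aliasing pitfall.
public class XorSwap {
    public static void main(String[] args) {
        int a = 0xBABE, b = 0xFADE;
        a ^= b;
        b ^= a;
        a ^= b;
        System.out.printf("a=%#x b=%#x%n", a, b); // a=0xfade b=0xbabe

        // Pitfall: "swapping" a slot with itself destroys the value.
        int[] v = {42};
        int i = 0, j = 0; // same index, as can happen in generic swap code
        v[i] ^= v[j];
        v[j] ^= v[i];
        v[i] ^= v[j];
        System.out.println(v[0]); // prints 0, not 42
    }
}
```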
Bear in mind that a project may be required to deliver more than just software. It may also include server infrastructure to host the software and network/desktop infrastructure to make it available to users. I think Scrum will manage 3, 4, 5 & 6 happily, but probably only for teams who find it adds value, which is probably only software development... We could get other teams to use it for the production of infrastructure deliverables, but they don't see the need or benefit, and nor do I (Scrum is good for large, risky projects, and they are building a bunch of standard PCs). Likewise, we could have a User Story in the Product Backlog for the Project Manager (not typically a member of the Scrum Team) to produce a benefits realisation plan. Are we approaching this correctly? Is there a better way to integrate these two styles of working, or are they exclusive and we should use one or the other?"} {"_id": "142648", "title": "What class structure allows for a base class and mix/match of subclasses? (Similar to Users w/ roles)", "text": "I have a set of base characteristics, and then a number of sub-types. Each instance _must_ be one of the sub-types, but can be multiple sub-types at once. The sub-types of each thing can change. In general, I don't care what subtype I have, but sometimes I do care. This is much like a Users-Roles sort of relationship, where a User having a particular Role gives the user additional characteristics. Sort of like duck-typing (i.e. if my `Foo` has a `Bar`, I can treat it like a `ThingWithABar`). Straight inheritance doesn't work, since that doesn't allow mixing and matching of sub-types (i.e. no multiple inheritance). Straight composition doesn't work because I can't switch that up at runtime. How can I model this?"} {"_id": "231255", "title": "Validating objects with each other - Design Pattern needed", "text": "I am running a zoo application. My zoo includes an abstract class 'animal' and several derived classes - 'zebra', 'elephant', 'orangutan', 'baboon' and so on. Of each class I have several instances. My question is: I want to check whether two animal instances can mate. The business logic is divided into two parts: 1. I want to check if each mating partner is fit for mating, e.g. not too young or too old or sick, etc. 2. I want to check if the two partners match - e.g. a zebra cannot mate with an elephant, but an orangutan can mate with a baboon. I assume the first requirement would be implemented by an abstract function which would reside in the 'animal' base class. But what about the second requirement? How would you design the classes in the most general manner, so that adding new types of animals would not require much overhead?"} {"_id": "202204", "title": "Finally block for methods - is it a bad idea?", "text": "The `finally` block for a `try-catch` structure is well known, and gives a really easy and elegant way to deal with must-be-done code. Therefore, I can see no reason why it shouldn't be good for methods too. For instance, let's say I'm writing some very complicated logic in a method, and I expect to end up with a bunch of boolean flags that will in turn lead to some decisions.
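Purely as one illustrative shape for requirement 2 of the zoo question above (231255), not a definitive answer: pairwise compatibility can live in a symmetric data table rather than in the class hierarchy, so adding a species means adding data, not an N-by-N web of methods. All names here are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical compatibility registry: mating rules are data, looked up
// symmetrically, instead of being spread across animal subclasses.
class MatingRegistry {
    private final Set<String> compatible = new HashSet<>();

    void allow(String speciesA, String speciesB) {
        compatible.add(key(speciesA, speciesB));
    }

    boolean canMate(String speciesA, String speciesB) {
        return compatible.contains(key(speciesA, speciesB));
    }

    // Order-independent key, so allow("baboon", "orangutan") also covers
    // canMate("orangutan", "baboon").
    private String key(String a, String b) {
        return a.compareTo(b) <= 0 ? a + "|" + b : b + "|" + a;
    }

    public static void main(String[] args) {
        MatingRegistry r = new MatingRegistry();
        r.allow("zebra", "zebra");
        r.allow("orangutan", "baboon");
        System.out.println(r.canMate("baboon", "orangutan")); // true
        System.out.println(r.canMate("zebra", "elephant"));   // false
    }
}
```

The fitness check from requirement 1 would still sit on the animal base class; this table only answers the cross-species half.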
Or is there?"} {"_id": "202205", "title": "Displaying items with a countdown timer", "text": "I am creating a widget for rotating topics. The functionality is as follows: * Each topic is displayed one by one on the homepage and has a duration of 30 seconds. * A countdown timer is displayed on the page. When it reaches zero the current topic gets marked as \"shown\" and the next topic is displayed. * Users can choose to \"like\" topics, in which case the topic gets an additional 5 seconds duration (the countdown timer needs to update itself at this point). * The countdown timer always needs to display the current allocated duration for the topic, so for example if a user \"likes\" a topic then people accessing the site should instantly see the updated duration for that topic. Basically everything needs to be shown in \"realtime\". The topics are stored in a MySQL database. I am having trouble with the last two points in the above. The way I have tried to do it so far is to have an AJAX request which is sent every second to deduct 1 second from the current topic duration. The request response is the current topic duration. The problem with this is that it will only work so long as somebody is accessing the site. I know I can probably create a CRON script to run this process in the background, but for it to run every second would be a bit crazy. There has got to a be a much simpler solution to this problem. What would be the most efficient way to go about this?"} {"_id": "202202", "title": "Does the VS version setting in Settings.settings matter?", "text": "Looking through the server vs local code on a humongous legacy project, I see this in Settings.settings: Is there a problem with having this VS2010 project reference 2004 this way? Or does it not matter?"} {"_id": "40715", "title": "How to query a very long list of properties fast", "text": "I have a structure for storing item properties on SQL Server: ItemId PropertyId Value 1 1 a 1 2 b 2 1 a 2 2 5 Currently there are over 130000 items and 10000 properties and the numbers are growing. Current row count is a little over 15M. If I created a pivot table for this data, it would have a little over 1.3 billion cells, 15 million of which are not null. Users can form custom expressions on this data like: X: P1 = 'a' (rule X selects items which have property 1 with value 'a') Y: P2 <> 'b' Z: P3 like '%c%' T: P4 > 5 (rule T selects items which have property 4 with a value greater than 5) and they form filters by using expressions like: (X AND T) (items that match both X and Y) (X AND Y) OR (Z AND T) (X OR Y) AND (Z OR NOT T) (X OR Y AND T) OR Z I need to query the result of a few filters (generally 4 or 5) as a response of a single web request. How can I do this fast? Is there a storage method or a super efficient algorithm to get this filter results? It'd be great if this is possible on SQL Server but I am also open to solutions like storing this portion of data on a no sql database."} {"_id": "40714", "title": "Hashing growth strategy", "text": "What is a good growth strategy for hash tables? If the number of elements exceeds the number of buckets, I increase the number of buckets with the following formula: n = int(n * 1.618033988749895) | 1; Does that sound sensible? 
(The `| 1` part makes sure I get an odd number.)"} {"_id": "83331", "title": "Question on design of current pagination implementations", "text": "I have checked pagination implementations, for ASP.NET MVC specifically, and I really feel that there is something inefficient about them. First of all, all implementations use pagination values like below. public ActionResult MostPopulars(int pageIndex,int pageSize) { } The thing that feels wrong to me is that pageIndex and pageSize should be members of a Pagination class; passing them separately looks like a very functional (rather than object-oriented) style. It would also remove unnecessary parameter passing across the tiers of the application. The second thing is that they use the interface below. public interface IPagedList<T> : IList<T> { int PageCount { get; } int TotalItemCount { get; } int PageIndex { get; } int PageNumber { get; } int PageSize { get; } bool HasPreviousPage { get; } bool HasNextPage { get; } bool IsFirstPage { get; } bool IsLastPage { get; } } If I want to route my pagination to a different action, I have to create a new view model to encapsulate the action name (or even the controller name) in it. Another solution would be to send this interfaced model to the view and then specify the action and controller hard-coded as parameters in the pager method, but then I totally lose the reusability of my view because it strictly depends on just one action. Another thing is that they use the below code in the view Html.Pager(Model.PageSize, Model.PageNumber, Model.TotalItemCount) If the model is IPagedList, why don't they provide an overload method like `@Html.Pager(Model)`, or an even better one, `@Html.Pager()`? You know that we know the model type in this case. Earlier I made a mistake because I was using Model.PageIndex instead of Model.PageNumber. Another big issue is that they strongly rely on the IQueryable interface. How do they know that I use IQueryable in my data layer? I would expect them to work simply with collections, that is, to keep the pagination implementation persistence-ignorant. What is wrong with my improvement ideas over their pagination implementations? What is their reason for not implementing pagination this way?"} {"_id": "766", "title": "What acceptance test frameworks have you used? What are the pros and cons?", "text": "For business- or tester-facing tests on projects (not unit tests), I've typically used FitNesse, but I've dabbled with RSpec and I'm interested in learning Selenium and Robot. Do you have experience with these (or other) acceptance testing tools? What are the pros and cons? For FitNesse, I can think of the following pros and cons initially. PROS: * Wiki based editing, which most people understand * Lots of support and plugins for different languages * Completely self contained, you just download and go CONS: * Refactoring the tests can be a little clunky * You need to know (or document) the various properties that each method in your test fixture expects. There's no \"intellisense\" or IDE to do this for you that I'm aware of."} {"_id": "83334", "title": "Can I use Xcode 4.2?", "text": "Can I use the new Xcode 4.2 to make apps for older versions of iOS? Will they work? I like the new features like the Storyboard and Automatic Reference Counting."} {"_id": "226282", "title": "The value of potential shippability when the product is not minimally viable yet", "text": "The main benefit of having a potentially shippable product at the end of each Sprint is the ability to release the product quickly in case market conditions change.
However, usually in the first half year or more the product doesn't have enough value to actually be shipped. So why spend all this energy to make a potentially shippable increment each Sprint if you know with complete certainty that the product won't be shipped?"} {"_id": "226283", "title": "Super Fast File Storage Engine", "text": "I basically have one big gigantic table (about 1.000.000.000.000 records) in a database with these fields: id, block_id, record id is unique; block_id is not unique, and there are about 10k (max) records with the same block_id but with different record values. To simplify my job that deals with the DB, I have an API similar to this: Engine e = new Engine(...); // this method must be thread safe, but with fine-grained locking (per block_id) to improve concurrency e.add(block_id, \"asdf\"); // asdf up to 1 kilobyte max // this must concatenate all the records already added for block_id, and won't need to be bigger than 10Mb (worst case); the average will be <5Mb String s = e.getConcatenatedRecords(block_id); If I map each block to a file (haven't done it yet), then each record will be a line in the file and I will still be able to use that API. But I want to know whether I will have any performance gain by using flat files compared to a well-tuned PostgreSQL database (at least for this specific scenario). My biggest requirement, though, is that the getConcatenatedRecords method returns stupidly fast (not so with the add operation). I am considering caching and memory mapping also; I just don't want to complicate things before asking if there is an already-made solution for this kind of scenario."} {"_id": "219721", "title": "Clearing the lowest set bit of a number", "text": "I can see in this tutorial on bit manipulation, under the heading \"Extracting every last bit\", that: > Suppose we wish to find the lowest set bit of x (which is known to be non-zero). If we subtract 1 from x then this bit is cleared, but all the other one bits in x remain set. I don't understand how this statement is true. If we take `x = 110`, subtracting 1 would give `101`. Here, the lowest set bit is not cleared. Can anyone tell me how I'm approaching this problem in a wrong way?"} {"_id": "61984", "title": "Are Vim or Emacs practical for languages like .Net or Java?", "text": "So, I am primarily a .Net developer who does some stuff in Java, Python and a few others from time to time. I have heard a lot of people praise Vim and Emacs for greatly increasing efficiency once the basics are nailed down. I can definitely see how many of the features could be very useful with sufficient practice, and I even believe the learning curve is probably worth the effort. However... it seems that you would really have to be some sort of wizard of macros and hotkeys to be as efficient in Vim or Emacs as the average developer is in Visual Studio, Netbeans, Eclipse, or other platforms. I have been starting to learn to use Vim and think some of its features are awesome (column editing for example), but it seems that many of the tools provided by the heavyweight IDEs simply could not be replaced by even the most juiced-up text editor. A few processes I would be skeptical anyone could ever be more efficient at in Vim include (I know VS best, so I will stick to those): * generating dbml files for Linq-to-SQL * Automated testing * designing UIs * Creating/organizing projects and solutions I know Vim and Emacs can do a lot of the same things very powerfully that VS can (like intellisense, refactoring, etc.)
and they may be able to do some or all of the examples that I provided, but is it realistic to say that someone working on these platforms would actually benefit from Vim or Emacs?"} {"_id": "71585", "title": "Teaching Programming to Kid / Teen", "text": "I know this topic has been discussed before, but I thought this might be a bit more of a detailed question... A family friend is a 12yr old boy with ADHD, and a very bright kid at that. He seems to have a solid instinct for computers, and I really think he would excel at programming. For example, today he was kicking around our place, so I opened up Visual Studio Express C# and showed him how to create a console application (he was the one typing it all out). In about 10 - 20 minutes he was writing his own code, ReadLine()s and WriteLine()s etc, and even started working on a loop. Made me think that even something like C# would definitely be within his grasp. Although Lego Mindstorms came to mind, his family isn't particularly well off, so that is likely not feasible. I thought about lending him one of my intro to C# books, but with the ADHD I don't know if he would have the patience to actually go through it. I also think something like Alice or Scratch would be too childish for him, and wouldn't catch his attention... I'm trying to figure out what the best way to approach this would be, and what sort of material is out there that he could use to teach himself how to program. Any thoughts or suggestions?? One thought I had was this book - an Amazon C# game book - but it may be too advanced without at least a basic C# background... Thanks!"} {"_id": "38691", "title": "Difference between Web API and Web Service?", "text": "I have heard about Web Services and Web APIs a lot; is there any difference between them, or are they the same?"} {"_id": "208469", "title": "Benchmark of asynchronous code", "text": "Asynchronous programming seems to be getting quite popular these days. One of the most quoted advantages is the performance gained from removing operations that block threads. But I have also seen people saying that the advantage is not that great, and that a properly configured thread pool can have the same effect as fully asynchronous code. My question is, are there any real benchmarks that compare blocking vs. asynchronous code? It can be any language, but I think C# and Java would be most representative. Not sure how good such a benchmark would be with Node.js or similar. **Edit**: My attempt at a general question combined with unclear terminology seems to have failed. By \"asynchronous code\" I mean what some answers described as event or callback programming. In the final result, operations that would block a thread are instead delegated to some callback system so threads can be utilized better. And if I wanted to ask a specific question: _Are there any benchmarks that compare the throughput/latency gain of async/await server code in .NET? Or any other similar comparison?_"} {"_id": "208467", "title": "How would my custom language be categorized?", "text": "I'm developing my own scripting language to solve some unique challenges for a project. The language takes source code and converts the contents into tokens, and then a command factory is used to convert those tokens into something that can be executed. An example of a script might look like this. accept: always; reject: title has \"something\" or body has \"something else\"; reject: title length < 10; Each line in the script ends with a `;` terminator, and each line starts with a named rule that ends with `:`.
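To make the token stream concrete, the second line above comes out of my tokenizer roughly as (my own notation, purely for illustration): [NAME reject] [COLON] [WORD title] [COMMAND has] [STRING \"something\"] [OR] [WORD body] [COMMAND has] [STRING \"something else\"] [SEMI] - a flat sequence with no grouping characters.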
Each rule uses a command followed by arguments. The commands can be joined together using logical operators. The number of arguments a command can have is fixed, so when I find a command I can expect X number of additional tokens for the syntax to be correct. The goal is human-readable code, because non-programmers will be using it. An alternative way of writing the same code above using a C-style structure would be: accept(true); reject(has(title,\"something\") || has(body,\"something else\")); reject(length(title) < 10); What is the term used for a parser that handles a language where `()` for structuring arguments is omitted, and there are no clear boundaries defined for arguments? I would like to read up on how these kinds of parsers are implemented, to ensure I'm not overly reinventing the wheel or running into common problems."} {"_id": "208464", "title": "Align draggable elements in line and generate without overflow or new line", "text": "I have this script I'm working on. What it should do is generate a new div every time I press a button, and I've already done that, but those divs are not draggable. I think that happens because of the onLoad function, but what alternative do I have? The second problem is that when I click one of the blue boxes, all of them must arrange themselves in a line like you see them when you load my jsFiddle link, and if the line is full and you click the \"New Square\" button, instead of overflowing or adding a new line, it should close the last box and add a new box before all of the other boxes. I'm sorry to bother you with these questions, but I am a PHP and MySQL programmer; I just started using jQuery. jsFiddle HTML:
<div id=\"container\"> <div class=\"round\"></div> <div class=\"round\"></div> <div class=\"round\"></div> <input type=\"button\" class=\"button\" value=\"New Square\"> </div>
jQuery: $('.round').draggable(); $('.button').click( function (){ $(\"#container\").append(\"<div class='round'></div>\"); }); $('.round').click( function (){ /* arrange all the boxes in a line - this is the part I can't get right */ }); I had more code in jQuery, but I removed all of the useless options because I want you to easily understand what I want to do."} {"_id": "245462", "title": "Enforcing organizational standards for software in Scrum", "text": "Organizations, especially organizations with a recognized brand, have various standards for software. This includes things like security, usability, code quality and readability, UI design, test automation and so forth. In our organization we have a lead for each of these areas. Each lead is a very experienced individual and is trusted by management. Each lead is responsible for the software meeting the standards in the lead's area of expertise. The Scrum Guide says: \"Development teams are self-organizing. No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality;\" My question is: how can an organization trust teams to produce a product that meets the various standards mentioned above, if the leads (trusted by the organization) have no ability to enforce the quality in each of the areas?"} {"_id": "150702", "title": "Is hiring a \"chief intern\" a good idea?", "text": "I'm starting an internship program for our software department and I was wondering about creating a position (\"chief intern\", intern supervisor, or whatever one should call it) with the following responsibilities: * **Train** interns * **Coach** interns * **Manage projects and tasks** for interns * **Supervise** interns' work in terms of **rhythm and quality** * **Act as a liaison** between the main team's needs and interns' performance/aspirations * Evaluate and facilitate interns' progress **when they want to grab a higher-level domain-specific task** (at this point, a main dev team member can do mentoring) * **Get freely involved in the main team's software development tasks so that he himself can grow**, and have full mentorship from the main dev team. I'm thinking that an apprentice-level engineer (below Jr., or Jr.; but being a graduate and working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happens: 1. He/she **decides to move on to the main dev team** by recommending an appropriate replacement (or me finding another one as a new hire) 2. He/she **keeps leading** the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng. I know the notion of a \"chief intern\" is common within the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side-intention to this is also that, if this ends up being a high-rotation position (organically) because the intern supervisor wants to join the main dev team, **this could help interns that aspire to this position emerge as leaders**. **My main intention for this, though, is removing distractions from the main team but without making the interns suffer from a lack of attention, which could lead to boredom and little intern retention.** Is this \"chief intern\" idea common (or at least good)? Are there any obvious risks to it that I might not be seeing? * * * **Edit**: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns? **Edit #2**: My intention is not to keep them isolated, but to have someone focus on giving attention to them when we cannot.
**Edit #3**: I'm now convinced it is a good idea, but I will take the organic approach to hiring someone into such a position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns."} {"_id": "232507", "title": "Communicating with Vendors", "text": "We are a small software team (as far as programmers go) and have a team of vendors on the other side of the world who program for us. We own the product, and simply dictate to them some of the tasks to work on. I am one of the lead programmers and also the project manager of this particular project, so I am programming and fulfilling requirements as well as outlining them. Let me start by saying we are fairly new at this, and I don't have a lot of experience leading a team of vendors, especially when it is difficult to communicate verbally in English, and they work during my night, so I come in to code they pushed while I was asleep. The problem is, I end up spending a lot of my day just checking what they did, and fixing bugs and cases they didn't think of. They're not thinkers; they do exactly what I tell them, and nothing more or less. Most of the time, I feel like it would just be easier to do it myself. My question is how do most teams like this communicate? Right now we have weekly telephone meetings, and I email them nightly with the progress I made, as well as what is expected of them. When I think \"how to communicate with other programmers\", the answer seems to be UML. I certainly am very familiar with UML, learned it in school, but have never really used it on the job. It's just not something we do; in fact, generally the requirements for a task are in my manager's head. I can get those into a spreadsheet or a flowchart, but never an official diagram. Is UML something teams like this actually use? In all reality, I feel like I learned a lot in school that no one actually does. If so, which diagrams are the most useful/used? From my knowledge, in this project where we are revamping something that already exists in the system from the ground up, I feel like the following would be a good approach: * Create a quick ER diagram containing any added/updated/used entities in the project. * Create a _detailed_ use case model, clearly defining and numbering each use case. * Create a sequence diagram for the complex use cases (as well as their alternate flows) to show exactly what is expected each step of the way. I feel like this would be a good start. We don't really do a good job of capturing requirements currently; we kind of just start coding based on something we drew up on a whiteboard. Obviously this needs to change as well, to avoid getting a week into a project and realizing that we forgot something. What are your suggestions/experience? Unfortunately we're sort of flying blind and just don't have anyone experienced enough in these situations to go to. I want the project to be a success, but I can't keep having the vendors (understandably) making mistakes because I assume they know X or Y. How can I utilize them more effectively?"} {"_id": "172918", "title": "Custom HTML Tags: Are there any specifications stating a standard way to handle them?", "text": "It seems like for years they've just been given default styling and inline display. Is there a spec somewhere that has dictated this?
I've looked over the RFCs, but I'm not particularly good with RFC-ese, and I didn't notice anything anywhere. For example (tag names invented for the sake of the example): <foo>Some content</foo> <bar>something else</bar> more content. I can still style it with CSS, and the browser doesn't outright vomit... so it seems like there is some sort of expected behavior. Was that dictated by a specification?"} {"_id": "241147", "title": "Understanding UML composition better", "text": "The difference between Composition and Aggregation in UML (and sometimes in programming too) is that with Composition, the lifetime of the objects composing the composite (e.g. an engine and a steering wheel in a car) is dependent on the composite object, while with Aggregation, the lifetime of the objects making up the composite is independent of the composite. However, I'm not sure about something related to composition in UML. Say ClassA is composed of an object of ClassB: class ClassA{ ClassB bInstance; public ClassA(){ bInstance = new ClassB(); } } This is an example of composition, because `bInstance` is dependent on the lifetime of its enclosing object. However, regarding UML notation - **I'm not sure if I would notate the relationship between ClassA and ClassB with a filled diamond (composition) or a white diamond (aggregation).** This is because while the lifetime of _some_ `ClassB` instances is dependent on `ClassA` instances - there could be `ClassB` instances anywhere else in the program - not only within `ClassA` instances. * * * **The question is:** if **_some_** ClassB objects are composed with ClassA objects (and thus are dependent on the lifetime of the ClassA objects), but **_some are not_** - would the relationship between the two classes in UML be an aggregation or a composition relationship?"} {"_id": "172914", "title": "Simplifying C++11 optimal parameter passing when a copy is needed", "text": "It seems to me that in C++11 a lot of attention was paid to **simplifying the returning of values** from functions and methods, i.e.: with move semantics it's possible to simply return heavy-to-copy but cheap-to-move values (while in C++98/03 the general guideline was to use output parameters via non-const references or pointers), e.g.: // C++11 style vector<string> MakeAVeryBigStringList(); // C++98/03 style void MakeAVeryBigStringList(vector<string>& result); On the other hand, it seems to me that more work should be done on input parameter passing, in particular when **a copy of an input parameter is needed**, e.g. in constructors and setters. My understanding is that the best technique in this case is to use templates and `std::forward<>`, e.g. (following the pattern of this answer on C++11 optimal parameter passing): class Person { std::string m_name; public: template <typename T, class = typename std::enable_if< std::is_constructible<std::string, T>::value >::type> explicit Person(T&& name) : m_name(std::forward<T>(name)) { } ... }; Similar code could be written for setters. Frankly, this code seems like boilerplate and is complex, and it doesn't scale up well when there are more parameters (e.g. if a surname attribute is added to the above class). Would it be possible to add a new feature to C++11 to _simplify_ code like this (just like lambdas simplify C++98/03 code with functors in several cases)? I was thinking of a syntax with some special character, like `@` (since introducing a `&&&` in addition to `&&` would be too much typing :) e.g.: class Person { std::string m_name; public: /* Simplified syntax to produce boilerplate code like this: template <typename T, class = typename std::enable_if< std::is_constructible<std::string, T>::value >::type> */ explicit Person(std::string@ name) : m_name(name) // implicit std::forward as well { } ...
}; This would be very convenient also for more complex cases involving more parameters, e.g. Person(std::string@ name, std::string@ surname) : m_name(name), m_surname(surname) { } **Would it be possible to add a simplified convenient syntax like this in C++?** What would be the downsides of such a syntax?"} {"_id": "76675", "title": "How can a Java programmer make the most of a new project in C or C++?", "text": "As a Java programmer, I'm looking to learn either C or C++ by writing a database manager. Obviously, Java shares many idioms with C and C++, but both bring vastly different program design challenges. I'm looking for a way to make this exercise as educational as possible. **What aspects of taking on this project in C or C++ can help me make a decision about which approach will teach me the most, as a Java programmer?** One particular target of this exercise would be at least a subset of the spatial extensions in PostgreSQL. Obvious issues to consider would be the goodness of fit of C vs. C++ for: * modeling concepts of databases in general * modeling spatial concepts in particular Another major point would be the degree of expected difference from Java. Would a good, idiomatic design in C++ be different enough from a design in Java to teach new, different concepts, or would it mostly be the same concepts with slightly different syntax? Would a good design in C contain more concepts that are new or different than one in C++?"} {"_id": "77988", "title": "Third-party open-source projects in .NET and Ruby and NIH syndrome", "text": "The title might seem to be inflammatory, but it's here to catch your eye after all. I'm a professional .NET developer, but I try to follow other platforms as well. With Ruby being all hyped up (mostly due to Rails, I guess) I cannot help but compare the situation in open-source projects in Ruby and .NET. What I personally find interesting is that .NET developers are for the most part severely suffering from the NIH syndrome and are very hesitant to use someone else's code in pretty much any shape or form. Comparing it with Ruby, I see a striking difference. Folks out there have gems literally for every little piece of functionality imaginable. New projects are popping up left and right and generally are heartily welcomed. On the .NET side we have CodePlex, which I personally find to be a place where abandoned projects grow old and eventually get abandoned. Now, there certainly are several well-known and maintained projects, but the number of those pales in comparison with that of Ruby. Granted, NIH on the .NET devs' part comes mostly from the fact that there are very few quality .NET projects out there, let alone projects that solve their specific needs, but even if there is such a project, it's often frowned upon and is reinvented in-house. So my question is multi-fold: * Do you find my observations anywhere near correct? * If so, what are your thoughts on the quality and quantity of OSS projects in .NET? * Again, if you do agree with my thoughts on \"NIH in .NET\", what do you think is causing it? * And finally, is it Ruby's feature set & community standpoint (dynamic language, strong focus on testing) that allows for such easy integration of third-party code?"} {"_id": "233671", "title": "Removing dependencies on subclass-specific behavior", "text": "I have a `Message` class which can contain multiple types of payloads (or sometimes no payload), each derived from a common `Payload` class.
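Roughly, the hierarchy looks like this (a trimmed C++ sketch; the actual members are omitted): class Payload { public: enum class Type { RESPONSE, SETUP, START, STOP }; virtual ~Payload() = default; }; class ResponsePayload : public Payload { /* ... */ }; class SetupPayload : public Payload { /* ... */ };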
However, this becomes problematic because the `Message` class wants to know about the `Payload` subclasses for various reasons, such as: * Checking for `Message` equality if (message parts besides the payload are equal) { switch(type) { case Payload::Type::RESPONSE: return *static_cast<const ResponsePayload*>(payload.get()) == *static_cast<const ResponsePayload*>(o.payload.get()); break; case Payload::Type::SETUP: return *static_cast<const SetupPayload*>(payload.get()) == *static_cast<const SetupPayload*>(o.payload.get()); break; ... } } * Deserializing (since the deserialization methods are static, because the payloads are immutable) switch(type) { case Payload::Type::RESPONSE: load = ResponsePayload::fromJSON(payloadValue); break; case Payload::Type::SETUP: load = SetupPayload::fromJSON(payloadValue); break; ... case Payload::Type::START: case Payload::Type::STOP: case ...: break; // load stays null default: THROW(Exception, \"Error in program logic: we forgot to parse some payload\"); } * Ensuring a `Payload` is attached to a `Message` while constructing it: switch(type) { case Payload::Type::RESPONSE: case Payload::Type::SETUP: case ...: ENFORCE(IOException, payload != nullptr, \"For this message type, a payload is required.\"); break; case Payload::Type::START: case Payload::Type::STOP: case ...: ENFORCE(IOException, payload == nullptr, \"For this message type, the payload should be null\"); break; default: THROW(Exception, \"Error in program logic: we forgot to handle some payload\"); } Alarm bells are going off in my head - this violates SOLID like there's no tomorrow, and it obviously doesn't scale well, since I have to add `case` statements every time I add a new payload. Is there a cleaner approach I could take?"} {"_id": "233673", "title": "Why isn't java used as a build language?", "text": "If Java is a general-purpose language, and building a program is something that can be described using the Java language, why isn't this the best way to write build files, and why do we instead use tools like Ant, Maven, and Gradle? Wouldn't that be more straightforward, and also remove the need to learn yet another programming language? (BTW - this question can also be applied to other languages, like C#)"} {"_id": "233672", "title": "How to use namespaces to separate interface from implementation, in c++?", "text": "As far as I can tell, you can make your interface known to others by providing your .h file. Your .cpp is the implementation. Then they can see the function names, the parameter types, the return type, maybe a description of how to use a function, and maybe what it does, in the .h file. Then I read in posts here about using namespaces to separate the interface from the implementation. What does that mean? Doesn't a namespace only let you know that a name exists in that namespace? So please provide an example; I can't find any."} {"_id": "127293", "title": "Learn HTML5 CSS3 and JavaScript", "text": "What is the best way to learn these technologies that would enable me to understand all three from a combined perspective rather than individually?"} {"_id": "77984", "title": "Which programming frameworks offer good multilingual support?", "text": "I'm starting a new company, and our software will be a SaaS solution offered to municipalities around the world. Their interface will be web-based. I am trying to figure out which software development frameworks and which software platforms would be able to support a fully multilingual system.
By this I mean that I need to be able to have the same screens operate in different native languages (including right-to-left), both for display and data entry. I want to be able to maintain a single application for all languages. I need to be able to add new languages pretty quickly (= no development) once the system is set up. Since I am starting from scratch, I have the privilege of researching and choosing in advance, and not forcing a solution into an existing application. Also, I have experience leading development in Java, .NET, C++ and C, so I am open to using any technology that will meet my business needs. I would appreciate recommendations with pros and cons."} {"_id": "203625", "title": "Question about Java nested classes design decision", "text": "I was shocked today to discover that this code compiles cleanly in Java: public class A { public static class B { private static void x() {} } private static class C { private /* So, private to what exactly? */ static void x() {} } public static void main(String[] args) { B.x(); C.x(); } } It seems Java's private keyword indicates only top-level-class privacy, which, given the general convention of naming Java source files after their class, means that Java effectively provides only file-level private visibility, not true private visibility. (The equivalent code in C++ does not compile; it fails with a visibility error.) Would anyone be able to explain why Java was designed this way? Or is this just Java's way of telling everyone not to use nested classes? (It certainly hacks in an implementation of a two-way C++ friend relationship, but we already have package-level visibility for that; this seems to make the one-way equivalent impossible. Moreover, it necessitates the compiler creating more hidden accessor methods to circumvent the visibility, and under-the-hood downgrading of nested class visibility control to make all this not error out at runtime. Really, why the trouble?)"} {"_id": "89666", "title": "Promotion or De-motivation", "text": "I have been working as a programmer for the last 3 years; recently my company promoted me to project manager. Now I am facing resource management issues, I am not comfortable as a manager, and I want to continue my profession as a programmer. I have asked my director to move me back to a programming role, but he pushed me to continue as a project manager, since this is one step up from programming. Now, what would you suggest I do?"} {"_id": "14584", "title": "Do you actively think about security when coding?", "text": "When you're coding, do you actively think about how your code might be exploited in ways it wasn't originally meant to be used, and thus let someone gain access to protected information, run commands, or do something else you wouldn't want your users to do?"} {"_id": "108566", "title": "can a team lead's work ever be done working remotely", "text": "I am currently a software developer. As the next milestone in my career, I am thinking I should establish myself as a team leader, leading a small development team in order to meet deliverables, deployments, etc. I've read various posts on this site which discuss the pros and cons of working from home, but I am not sure this particular aspect has been discussed/debated much.
I haven't stepped into the team-leading role per se, but I would like to know the responsibilities that come with the title, and if (and how) they can be managed while working from home."} {"_id": "21336", "title": "How do you organize your local development environment?", "text": "> **Possible Duplicate:** > How do you organize your projects folders? I'm interested to learn how everyone keeps their local development projects organized in a logical manner. What do you do to keep your dev projects organized in a manner that allows you to easily find things?"} {"_id": "72802", "title": "Two related, but independent projects -- Two repositories or one?", "text": "I've made two \"complementing\" projects. These two projects are standalone, but are almost designed to be used together. They are both fairly small projects as well. I'm wanting to host the code on GitHub, but I'm not sure if I should use two repositories or one for this. What do you think I should do?"} {"_id": "211532", "title": "Why is this syntax convention?", "text": "It says in the book (APress Ganesh / Sharma) about Java 7: > You create a method named `fill()` in the Utilities class with this declaration: > public static <T> void fill(List<T> list, T val) > You declare the generic type parameter `T` in this method. After the qualifiers `public` and `static`, you put `<T>` and then follow it with the return type, method name, and its parameters. This declaration is different from generic classes - you give the generic type parameters after the class name in generic classes. But why `<T>` as in `public static <T>`? The syntax looks odd and appears nowhere else in the Java language. What does the `<T>` mean in this case and why is it used?"} {"_id": "234904", "title": "Is a PHP array an example of a dynamic data structure?", "text": "I did my homework, and it says that dynamic data structures are \"data structures that change in size as a program needs it to by allocating and de-allocating memory from the heap\". So I was wondering: are PHP arrays examples of dynamic data structures, or are dynamic data structures limited to binary trees and doubly-linked lists? Thanks."} {"_id": "234900", "title": "Go-like interfaces + multi-methods make sense?", "text": "Thinking about the design of a potential new language, I wonder how related the concepts of Go-style interfaces and multi-methods are (I get the latter from http://docs.julialang.org/en/latest/manual/methods/). If I understand this right, interfaces are simply a group of methods that must be implemented by each descendant, and multi-methods provide a nice way to specialize them without doing a lot of boilerplate or casting. My intuition is that both could work well together, but I wonder if that is correct. For example, in Go an interface declares: type Animal interface { Speak() string } func (d Dog) Speak() string { return \"Woof!\" } but in Julia it is more like: speak(x::Dog) = \"Woof!\" So it feels like both do the same thing, but differently. Would it be a mess to do both? When could that make sense? Also, what makes a \"better\" path, if only one must be used?"} {"_id": "229108", "title": "What is the applicability of CORS?", "text": "I have a system which needs to make cross-origin requests, and I am having trouble understanding the relevance of CORS. At first glance it doesn't appear to provide any type of security I'd actually want for my service. 1. Since CORS is honoured only on the client side, my server must still make zero assumptions about the request.
There is no guarantee a request is coming from a browser at all, thus CORS headers cannot be used as access control. 2. The target server decides who is allowed to talk to it, and the origin server has no control. My JavaScript is also not secure on the client and is subject to alteration. A malicious server could simply accept all CORS requests, and thus all data could leak out of my application. I see only a narrow scope where CORS is actually helpful. This is the Facebook-like scenario: default cross-origin restrictions prevent a malicious page from making requests to Facebook on behalf of the user, and CORS simply allows certain domains to do this. However, I still don't see the value in CORS here, because some simple DNS setup would allow the same behaviour. So what am I missing about CORS? In which scenarios is it relevant and advantageous to use (compared to simple DNS CNAMEs)?"} {"_id": "229100", "title": "how much memory should android app use", "text": "We are planning to develop an Android application. The application will use an API, and data transfer will be via SSL. I should write documentation and set functional and system requirements for the application. One of the main requirements is how much memory the application should use. Of course the Android OS itself manages how much memory is allocated to an app at different stages of its activity, but a great deal depends on the application. Is there any specific, defined practice on how much memory a typical Android app should use, one that is actually followed in the advanced world of Android development, e.g. \"an application of type X should use no more than 7 MB when active\"? How do I set the requirement so that developers pay enough attention to this specific parameter?"} {"_id": "225480", "title": "MySQL left join + performance", "text": "SELECT * #Case 1 cp.* #Case 2 FROM crawl_page AS cp LEFT JOIN artist AS a ON cp.cpURL LIKE CONCAT('%',a.artistPageID,'/',a.artistPageURL,'%') WHERE cpPageType = 'SONGS' AND cpPageAdded < ( SELECT cpCrawlDate FROM crawl_page WHERE cpURL = 'http://www.example.com/page-52/' ) AND a.artistID IS NULL LIMIT 1 In this query, when I use the 1st case (selecting everything from both tables) the time taken for the query is fine: about 0.468 sec. But if I use the 2nd case (selecting only from the main table) it takes much longer: 26 seconds. It's no problem to just use case 1, but I can't understand how this works and why. Thanks"} {"_id": "225487", "title": "Mapping values over a curve", "text": "I have a value between 0 and 8000 and I want this number to resolve to another number between 0 and 2000. I could just divide it by 4, but I have a special need here. For values above 4000 I want the mapped value to be biased towards 2000. But for values lower than 4000 I want the mapped value to be biased towards 0. This is actually for a game where you are given bonus points between 0 and 2000 depending on how quickly you complete the task. So if you are very fast and do it in 7500 milliseconds you should get 1875 points (exactly 7500 / 4). If you do it in 6000 milliseconds you would get 1450 points (a little under 6000 / 4). If you are slow and do it in 1000 milliseconds you would get 50 points (well under 1000 / 4). The idea is to reward fast players more than slow players, but still let slow players feel like they've achieved something. I feel an \"eased-out\" value mapping would achieve this. I hope this makes sense.
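One shape I've been considering (just a sketch of the idea, in C; it doesn't hit my example numbers exactly, but it has the right bias on both sides of 4000): int points_for(double x) { double u = x / 8000.0; /* normalize 0..8000 to 0..1 */ double eased = u * u * (3.0 - 2.0 * u); /* cubic \"smoothstep\": above u = 0.5 it lands above linear, below it lands below */ return (int)(2000.0 * eased); }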
Can you help me figure out some code that achieves this?"} {"_id": "225486", "title": "Fortran 90 - How to create a coordinate system", "text": "So I need to code a simple program, and I need to define 2D coordinates. Is there any coordinate system I can use in Fortran? I was told it might have to be done all in arrays. If so, can anybody push me in the right direction as to how to set up 2D arrays? Many thanks."} {"_id": "208182", "title": "Why are reference-counting smart pointers so popular?", "text": "As far as I can see, smart pointers are used extensively in many real-world C++ projects. Though some kinds of smart pointers are obviously beneficial to support RAII and ownership transfers, there is also a trend of using shared pointers _by default_, as a way of _\"garbage collection\"_, so that the programmer does not have to think about allocation that much. Why are shared pointers more popular than integrating a proper garbage collector like Boehm GC? (Or do you agree at all that they are more popular than actual GCs?) I know about two advantages of conventional GCs over reference counting: * Conventional GC algorithms have no **problem with reference cycles**. * Reference counting is generally **slower** than a proper GC. What are the reasons for using reference-counting smart pointers?"} {"_id": "159571", "title": "Job prospects for open source project experience", "text": "I would like to know whether work experience on open source projects, beyond their daytime programming job, really brightens the job prospects/resume of a developer, or whether doing certifications is more than enough for a developer."} {"_id": "171751", "title": "Application of LGPL license on a simple algorithm", "text": "The \"scope\" of the GNU license is troubling me: I know it has been answered many times (here, here, ...) but shouldn't we take into consideration the complexity and originality of a piece of code before using a GPL license? Let me explain: I'm working on a pet project using the DTW algorithm, which I have written in C using the pseudo-code given on the Wikipedia page. At one point I decided to change it for a C++ implementation (just to hone my C++ skills). After doing so, I looked for an existing implementation on the web, to compare the \"cleanliness\" of it, and I found this one: a vectored DTW implementation, which is part of limproved, a C++ library licensed under GPL v3. Personally, I don't mind the GNU license because it is a personal project which will never lead to any kind of commercial purpose, but **I wonder if this implementation can oblige a company using it to open their code** (and grant the other FOSS permissions). Theoretically, I think it can (I may be wrong :p), but the algorithm in question is so simple (and old) that it should not."} {"_id": "171754", "title": "Client-Side V.S. Server-Side Searching?", "text": "I am currently helping to design a web site and application in HTML. We would like the user to be able to search the site/app for desired content via a search bar. We would also like to include an advanced search ability to allow for different search options and more concentrated searches. We are having trouble deciding whether to program the search function on the client side (with JavaScript) or on the server side (with PHP).
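To make the two options concrete, here are rough sketches of what each would look like (the names are made up for illustration): client-side, filtering content already sent to the page: var hits = items.filter(function (item) { return item.text.indexOf(query) !== -1; }); versus server-side, an AJAX call to a PHP endpoint that queries the database: $stmt = $pdo->prepare('SELECT id, title FROM content WHERE body LIKE ?'); $stmt->execute(array('%' . $query . '%'));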
What are the pros and cons of both, and what would you recommend?"} {"_id": "171757", "title": "Computational Complexity of Correlation in Time vs Multiplication in Frequency space", "text": "I am working with 2D correlation for image processing techniques (pattern recognition etc...). I was wondering if there is a theoretical approach to telling when to use multiplication in frequency space over correlation in time space. For sizes of 2^x, frequency space is obviously faster, but how about small, prime sizes, e.g. 11?"} {"_id": "132170", "title": "When learning Android, should I focus on the latest and greatest?", "text": "I'm a Java developer and I'm going to start working with Android to get familiar with mobile apps. I've worked with Ajax toolkits, but haven't done any mobile development. I've seen a huge change in development on certain toolkits where the learning curve really jumps up, as in GWT 1.5 versus 2.4. I just want to jump in and get the basics (along with best practices) down for now. I don't know whether Android 2 versus version 4 matters for that, but version 4 might just throw in extra features that I might not need to know about for now, and take away from my main focus. Is it better to go with the latest version of Android or, to make sure the fundamentals are covered, take an older version that would be easier to get into and start developing with?"} {"_id": "132174", "title": "How to Use Subversion Repository Inside Git Repository?", "text": "I'm developing a project in Git. It depends on another project that's in a Subversion repository. I'd like to be able to make changes to the Subversion project in the tree, and commit to/update from the Subversion repository from within the Git project. Is that possible?"} {"_id": "213135", "title": "What to do when request is sent to server and while waiting for response internet connectivity lost?", "text": "I am sending a huge amount of data to a server. While I have sent the data and am waiting for the server response, my Android device suddenly loses internet connectivity. What I do now is show a \"connection lost\" alert dialog, but on the server side the data was already processed and updated somewhere, e.g. at some URL. But my Android phone does not know this, as it never got the response. How can this be resolved? Could it be done on the server side, or on Android itself, and how? How would the server know that the Android phone is not going to listen for the response? This may be a question of client-server communication optimization."} {"_id": "205962", "title": "estimation method at the \"just finished definition phase\"", "text": "We are basically using a Waterfall life cycle methodology. Requirement Analysis: I get a use case created by someone else. Design (talking only about low-level design): I get some feeling for how I would design it, maybe instantly or maybe after some time. Coding+Unit Testing: In the case of unit testing I am not using any TDD, but I am trying to get full code coverage, through mocks etc., after the coding is done. I am being asked about estimates just after the requirements walkthrough. Since I have only a vague idea of what the design will look like, I have no good way to give estimates.
As far as the coding part is concerned, how can I get the total number of methods at the \"just finished definition\" phase (I never know how much I will be refactoring at this stage)? As for unit testing with full code coverage: I haven't written the code yet, so how can I know how many behaviors there are to test for one method, and hence the time needed? Is there some scientific way, or any other way, to know this at this stage? **Note:** I have already read this possibly similar question. I can break down all my tasks into small units, but I am having problems giving an estimate for each unit. **Edit 1:** This is part of my problem too; it all started from here. Our organization also unit tested (or, more correctly, performed integration testing) manually or through emulators etc. So whenever some enhancements come in for a particular client, you need to create test data again, and what about any breaking changes you have introduced? So I am introducing automated unit testing, but now there is again the problem of correct estimation, which is necessary to give the organization a farsighted view of the time estimate for creating a test suite along with the code - in simple words, a build for the validation team. Kindly help."} {"_id": "205963", "title": "Very-Loose Coupling and Dependency Injection for Database Management", "text": "I'm currently setting up a MongoDB database for a web app. I'm running node.js and using Mongoose to help manage mapping, validation etc. I'm wondering if it's a good idea to really decouple MongoDB from my code, just on the off-chance we want to switch to something like CouchDB a little later. Specifically, I'd make a `databaseManger` module that is responsible for setting up Mongoose/Mongo, defining the Mongoose models, managing replica sets, and doing some extra validation. This manager would export a generic API and handle the details of the queries internally. The manager would then be passed around via dependency injection. Does this make sense? Or am I kind of redoing some of the Mongoose functionality? Any advice is appreciated."} {"_id": "213604", "title": "Merge two different API calls into One", "text": "I have two different apps in my Django project. One is \"comment\" and the other one is \"files\". A comment might have some files attached to it. The current way of creating a comment with attachments is by making two API calls. The first one creates the actual comment and replies with the comment ID, which serves as a foreign key for the files. Then, for each file, a new request is made with the comment ID. Please note that \"files\" is a generic app that can be used with other apps too. What is the cleanest way of turning this into one API call? I want to have this as a single API call because I am in a situation where I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack now. So this was the only way I could think of."} {"_id": "69768", "title": "Workflow on Development and Production Servers for a Consistently Updating Website", "text": "We have three developers, one system administrator, and an artist who primarily work on a single website (forum) in our spare time to consistently develop features for the forum (but there are other projects that we work on). Because our system administrator recently joined, we dropped managed hosting on a single server and decided to rent two unmanaged servers (one for testing and the other for production).
On the old server, we simply used Git as a middleman for pushing updates from the non-unified developer team onto the server without conflicting updates: pull other developers' updates, push our own updates, revert if something breaks. Since we have two servers now, we plan to push updates to the development server and somehow have it push updates to the production server: Developer(s) -> development server (bare) -> production server We want to keep the repositories on the development server and the working tree of the web site on the production server (web server). Is there an efficient way to do this without pushing from the dev server to production? Is there a better workflow for two servers in general? P.S. The development team consists of high school teenagers and some college kids that have never developed on teams for for-profit businesses."} {"_id": "202146", "title": "Game to decide which team member gets the window seat in a fair way", "text": "During team formation, we have a conflict where 2 developers both really want the same coveted window seat with a lush view of a creek & the woods. What sort of game / activity can we place them in that will: 1. Decide who gets the seat 2. Provide a random (not skill-based) chance of winning 3. Allow both team members to participate in the selection 4. Not be a single-player game masquerading as a two-player game (drawing straws, for example - the outcome is based off of the first player's actions when there are only 2 people)"} {"_id": "205969", "title": "How should I architect a RESTful webservice to use 3rd party (i.e. Google, Facebook, Twitter) for authentication?", "text": "For my job we have a nice RESTful webservice we've built out that we use to drive a couple of websites we have. Basically the webservice lets you create and work with support tickets, and the website is responsible for the front end. Any webservice requests use an auth header, which we use to validate the user and their password for each call. This year we're looking to expand our login options so that users on the website can log in via Google, Twitter, and Facebook (possibly others). However, I'm having a lot of trouble figuring out how to architect this so the webservice can use the 3rd-party authentication providers to ensure users are who they say they are. Are there any best practices out there for how to do this? Currently we're thinking of having the website handle authenticating the users itself, and then use a new setSessionId call that registers their current session with the webservice back end. Each additional request to the webservice will pass along that sessionId, and the webservice will validate it. This seems okay, but I have that feeling in the back of my head that I'm not thinking this through, and all my forum browsing and reading of OAuth and OpenID specs is just confusing me more. Any tips for how to tackle this?"} {"_id": "225738", "title": "Reducing Coupling in a Series of Tasks", "text": "I am working on some code right now that involves processing user requests. Each request requires going through an approval. When a request is made, one or more records are created in the database recording what the request entails. Then a ticket is created and placed on the approver's work queue. For a typical request, approving it involves these steps: 1. The request is processed (usually resulting in a table getting updated) 2. The changes are recorded to a history table (record auditing). 3. The request is marked as approved by the approver (approval auditing). 4.
The ticket is removed from the work queue. 5. The original requester is sent an email telling them their request was approved. We have a single class that kicks off these steps sequentially within a transaction (http://martinfowler.com/eaaCatalog/transactionScript.html). For the most part, the sequence doesn't really matter and the email isn't really essential. The problem is each of these classes has at least 5+ dependencies (in reality, it's closer to 10+). I am stumped about how to reduce the number of dependencies when it comes to a workflow like this. I thought about grouping some of these items together, but there's no logical grouping. I don't see much of a point in creating an unnecessary layer like that, anyway. I also thought about doing some kind of observer pattern (.NET events). A class higher up could listen for the event, instantiate the needed classes, and kick them off. However, that \"higher-up class\" would then be dependent on all of the classes again, so I'd be right back in the same situation. So, the only ways I've been able to think of to reduce the number of dependencies are to either do an artificial grouping of classes or to just make it the layer above's problem. Neither seems ideal, and I wonder if I am missing something. If it helps, I am using Ninject with ASP.NET MVC. I have access to an IOC container (`DependencyResolver`). I am wondering if the solution is some sort of combination of using listeners/events and an IOC container."} {"_id": "155012", "title": "How to become a passionate programmer?", "text": "I want to know how to become a passionate programmer. What are the things someone has to focus on to be a good programmer?"} {"_id": "243259", "title": "Cloning existing software for commercial purposes - legal implications", "text": "I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late 80's. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work. It crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist - if it did, the company asking me to do this would almost certainly know about it. It's undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because again, it's a niche area; we **should** know about everyone there). What is the status of this with regard to copyright etc.? The main concern for the company involved is that they want an interface identical to what they already have, so I would have to clone this entirely. Having no source code or indication of the underlying mechanisms, these would be written from scratch. Is an interface covered by copyright? Does that still hold 30 years later? What is the assumed license when none at all is provided? Under UK law, would I be under any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies? Thanks"} {"_id": "144843", "title": "Where does my database schema live in MVC?", "text": "I'm starting to check in the SQL files for creating/maintaining our database. Previously, SQL files were kept totally separate from our codebase's version control. I'd like to pragmatically store my database construction/manipulation files with my other code.
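For instance, one layout I've been toying with (directory names are made up; this is just a sketch): /app /controllers /models /views /db schema.sql /migrations 001_create_users.sql 002_add_indexes.sql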
Do they make the most sense in a subdirectory of my Models or as a separate directory somewhere? For the most part, the SQL statements _are_ the data my Models access, but some of the other stuff is maintenance scripts etc.; stuff that doesn't pragmatically fit into my MVC folder structure."} {"_id": "144842", "title": "Should I include dependencies for which I have the source as projects in my solution?", "text": "We have various projects in source control for web and desktop applications. Invariably, many of them use third-party open source projects or even common libraries within our organization as dependencies. Should we include the full projects of these dependencies in our various solutions? Or should we just include the .dll reference (or equivalent compiled binary) and maintain the source for said dependency elsewhere (say, for debugging purposes only)? I'm specifically thinking of .NET projects and solutions in Visual Studio, but feel free to replace those terms with JAR or whatever equivalent in other languages and frameworks. To further specify, I'm asking if a [fictional] Invoice Payments Web App solution tree should look like this: * InvoicePaymentsWeb * InvoicePaymentsDataModel * InvoicePaymentsServices * RickSoftUltraDataGrid * MyCompanyCommonWebLib And you could individually compile (or change) the various dependencies, since they exist entirely within your solution. As opposed to just having the first three projects, referencing the compiled DLLs of the other two, and having the source for the other two exist elsewhere on your machine (or elsewhere in source control). I'm wondering if this wouldn't potentially fracture the development and/or maintenance of the various dependencies, since each solution could have its own specific changes to the source, unless the underlying source control handled sharing the projects internally. What other problems might exist? Would there be any benefit?"} {"_id": "779", "title": "What's the worst question you were ever asked at interview?", "text": "It doesn't have to be programming or software development related; it just has to have been asked during an interview for an IT-related job. I know some \"left field\" questions are meant to see how the candidate copes with unexpected and novel situations, but here I'm looking for a question that appeared to be completely unrelated to the job they were interviewing you for, or something that made you think \"what useful information could they possibly get from my answer to _that_ question?\"."} {"_id": "218925", "title": "Interface at the class or function level?", "text": "I have been falling into a pattern lately where I have been defining routines that rely on an interface defined by a function that is specified as a parameter to the routine. (The language is C#, but this can be applied to any language with first-class functions.) For example, I have an API that exposes CRUD operations against a backing store. In the case of a GET operation on a particular resource, I can generalize the routine into: * get the resource we are looking for * return a not-found response if the resource does not exist * return the resource if found What I ended up doing is defining a routine that accepts a delegate function for finding the resource. It's this delegate that defines the interface contract. This happens to work well for my situation because the information required to locate the resource can vary. In my case, it's looking up data in a database by keys, but the type and number of keys can vary.
I can capture these in a closure in the calling routine and satisfy the delegate function interface. For example: // Locate a simple record that only has one key public SimpleRecord GetSimpleRecord(int recordID) { return getResource(repository => repository.SimpleRecords.Find(recordID)); } // Locate a complex record that has many keys public ComplexRecord GetComplexRecord(int recordID, int userID, string token) { return getResource(repository => repository.ComplexRecords.Find(recordID, userID, token)); } This works, but it seems like a mix of OOP and functional-style programming. If I need more than one delegate passed, it starts to get a bit messy. Some routines that I need everywhere I ended up defining as abstract methods that all sub-classes need to implement. So I have a hybrid. Does this type of technique have a name or pattern that I'm missing? Should the delegates be implemented in a defined class interface that gets passed to the caller? **UPDATE** with a more concrete example: I'm trying to adhere to the DRY principle. I'm talking about controllers in a C# Web API application. Each and every request has some commonality, which I have implemented in a base controller class: * Handle all exceptions by returning the correct HTTP status code (404 for resources that are not found, 201 for created resources, etc.) * Map database entities to and from data transport objects that the client deals with I want to express _what_ to do in this base class, and delegate _how_ to the concrete class. The _how_ ends up being implemented by delegate functions. I may need to get a person from the database, by first name and last name, or a purchase order, by an integer id. In both cases, if the resource is not found, a 404 must be returned. One returns a person, one a purchase order. How to look them up and how to map the data to a client object differs. The base function may look like this: T getResource<T>(Func<Repository, T> find) { T data = find(getRepository()); if (data == null) { throw new DataNotFoundException(); } return data; } Now, in the person controller and purchase order controller, I don't have to repeat the logic of what to do when the resource is not found--just implement the find delegate. (This is a simple example without mapping, adding, removing or other details that differ resource to resource). public Person Get(string first, string last) { return getResource(repository => repository.People.Find(first, last)); } public PurchaseOrder Get(int id) { return getResource(repository => repository.POs.Find(id)); } Note how the closures above neatly deal with varying numbers and types of parameters for finding things, but satisfy the interface defined by the find delegate. Is this possible to do with standard class interfaces? (And this question is not about the repository. That is resolved with dependency injection and is implemented with Entity Framework.)"} {"_id": "47150", "title": "How can I make my PHP development environment more efficient?", "text": "I want to start a home-brew pet project in PHP. I've spent some time in my life developing in PHP and I've always felt it was hard to organize the development environment efficiently. In my previous PHP work, I've used a Windows desktop machine and a Linux server for development. This configuration had its advantages: it's easy to configure Apache (and its modules)/PHP/MySQL on a Linux box, and, at the time, this configuration was the same as on the production server.
However, I never successfully set up a debug connection between my Eclipse install and Xdebug on the server. Transferring files from my local workspace to the server was also very annoying (either FTP or a Bazaar script moving files from the repository to the web root). For my new setup, I'm considering installing everything on my local machine. I'm afraid that it will slow down workstation performance (LAMP + Eclipse), and that compatibility problems will kick in. What would you recommend? Should I develop using two separate machines? On one? Do you have experience using one of the above configurations in your work?"} {"_id": "154460", "title": "Using default parameters for 404 error (PHP with mvc design)?", "text": "I have a custom-made framework (written in PHP). It all works very well, but I have some doubts about a certain thing. Right now, when a user calls this URL, for example: http://host.com/user/edit/12 it would resolve to: * user = userController * edit = editAction() in userController * 12 = treated as a param But suppose the controller 'userController' doesn't exist. Then I could throw a 404. But on the other hand, the URL could also be used as params for the indexController (which is the default controller). So in that case: * controller = indexController * user = could be an action in indexController, otherwise treated as a param * edit = treated as a param * 12 = treated as a param That is actually how it works right now in my framework. So basically, I never throw a 404. I could of course say that params can only be given if the controller name is explicitly named in the URL. So if I want the above URL: http://host.com/user/edit/12 to be invoked by the indexController, in the indexAction, then I specifically have to name the controller and action it uses in the URL. So the URL should become: http://host.com/index/index/user/edit/12 * index = indexController * index (2nd one) = the action method * user = treated as a param * edit = treated as a param * 12 = treated as a param That way, when a controller doesn't exist, I don't reroute everything as params to the index controller, and I simply throw a 404 error. Now my question is: which one is preferable? Should I allow both options to be configurable in a config file? Or should I always use one of them, simply because that's the only and best way to do it?"} {"_id": "47154", "title": "What drawbacks does Java Swing GUI framework have?", "text": "I have used Java Swing for some desktop applications. But I haven't used other GUI frameworks that much, so I can't compare them. There are definitely some things that I like with Swing and some that I don't like, but that is the situation with almost everything. What are the biggest drawbacks of the Java Swing GUI framework?"} {"_id": "202239", "title": "How to compute a signature of a matrix", "text": "We know some signature/checksum methods for text, like MD5 and SHA. Now, I want to compute a checksum for a matrix, to represent the exact matrix. That is, two matrices with different sizes, or with different values, should have different checksums. And slightly different matrices (of the same size) should result in slightly different checksums. So, both layout and values should be considered. The most intuitive method is to compare two matrices directly; however, sometimes it's inconvenient to get the two matrices in the same environment. Maybe I should check the MD5 algorithm and adapt a similar method.
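For the exact-match half of this (different sizes or values give different checksums), a minimal sketch in Java, assuming SHA-256 from the standard java.security.MessageDigest API; hashing the dimensions before the row-major values keeps a 2x3 matrix distinct from a 3x2 one with the same entries. Note that the \"slightly different values give slightly different checksums\" half is a different requirement: a cryptographic hash deliberately scatters similar inputs, so that part would need a similarity/locality-sensitive hash instead.

    import java.nio.ByteBuffer;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public final class MatrixSignature {
        // Hashes the dimensions first, then the values in row-major order.
        public static byte[] signature(double[][] m) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            int rows = m.length;
            int cols = rows == 0 ? 0 : m[0].length;
            md.update(ByteBuffer.allocate(8).putInt(rows).putInt(cols).array());
            for (double[] row : m) {
                ByteBuffer buf = ByteBuffer.allocate(8 * row.length);
                for (double v : row) {
                    buf.putDouble(v);
                }
                md.update(buf.array());
            }
            return md.digest();
        }
    }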
I'm just posting it here for better advice."} {"_id": "199066", "title": "Is licensing required for public repositories?", "text": "I am planning to start a public repository on GitHub. In this repository I will be sharing code which I will be posting on my blog. I have seen a lot of repositories without any license. Is it required to have a license file? Can't I just say \"You're free to use this code in whatever way you need\"? Please advise."} {"_id": "52608", "title": "What is Object Oriented Programming ill-suited for?", "text": "In Martin Fowler's book Refactoring, Fowler speaks of how when developers learn something new, they don't consider when it's inappropriate for the job: > Ten years ago it was like that with objects. If someone asked me when not to > use objects, it was hard to answer. [...] It was just that I didn't know > what those limitations were, although I knew what the benefits were. Reading this, it occurred to me that I don't know what the limitations or potential disadvantages of Object-Oriented Programming are. What are the limitations of Object-Oriented Programming? When should one look at a project and think \"OOP is not best suited for this\"?"} {"_id": "206431", "title": "Does dealing with legacy code help one evolve as a programmer?", "text": "I'm a Java developer with a bit more than a year of experience, which places me somewhere above a junior, but not among mid-level developers yet. Recently I was offered a long-term project which is about studying an existing banking application's code for 4 months and then introducing changes when needed. As a not-so-experienced programmer, I'm looking for ways to develop, and I wonder what such a project might give me. Would you consider dealing with a big and probably not-so-well-written application a good practice for a beginner?"} {"_id": "87523", "title": "Embedded Linux Training", "text": "I have been working with an old `OS9` (not Mac) operating system, and I have been trying to get my organization to transition to an embedded `Linux` platform. We have money in the budget for training, and I figure I would take advantage of it. A secondary impetus would be a good resume addition. What are some good vendors/bootcamps for Embedded Linux training?"} {"_id": "231745", "title": "Multiple entrance points in project", "text": "My question is related to C++, but it actually comes from Java. When I was programming Java, I had multiple classes which were derived from a base \"Test\" class. Their purpose was to test things - run tests (unit tests/not unit tests). In order to run a test, I had a `public static void main` in every class. So the matter of running such a test was to click run in _Eclipse/Netbeans_, or whatever good IDE. I know this question may be IDE-dependent (it actually boils down to makefiles), but is it possible to maintain a similar structure in C++ IDEs? Actually my question is: **how do you deal with tests?** Do you put a huge main method with some switch/enum-based statements to run the tests, or make different build configurations for each test, or have a different solution to this? I know this question won't get a straightforward answer, but I'm curious how to deal with it so I can choose something for myself."} {"_id": "231741", "title": "User interfaces for C function completion", "text": "In method call syntax in many object-oriented languages, the receiver object goes to the left of the method name, e.g. `someObject.someMethod()`.
This comes in handy when using an IDE with code completion/code assist/Intellisense; when the user types a method call expression, the receiver has already been typed, so the IDE can narrow down its method choices to apply to that receiver (at least in statically-typed languages). What I'm wondering is: what is the user interface for method/function completion like in languages where the method or function name goes _before_ its receiver and/or arguments, e.g. Lisp or C? Does the user have to trigger a hotkey, to which the IDE responds by asking for the receiver and/or arguments, and then once the value is typed in, the IDE pops up a list of methods or functions that are applicable to those arguments?"} {"_id": "154465", "title": "Scrum - how to carry over a partially complete User Story to the next Sprint without skewing the backlog", "text": "We're using Scrum and occasionally find that we can't quite finish a User Story in the sprint in which it was planned. In true Scrum style, we ship the software anyway and consider including the User Story in the next sprint during the next Sprint Planning session. Given that the User Story we are carrying over is partially complete, how do we estimate for it correctly in the next Sprint Planning session? We have considered: a) Adjusting the number of Story Points down to reflect just the work which remains to complete the User Story. Unfortunately this will mess up reporting on the Product Backlog. b) Closing the partially-completed User Story and raising a new one to implement the remainder of that feature, which will have fewer Story Points. This will affect our ability to retrospectively see what we didn't complete in that sprint, and seems a bit time-consuming. c) Not bothering with either a or b and continuing to guess during Sprint Planning, saying things like \"Well, that User Story may be X story points, but I know it's 95% finished, so I'm sure we can fit it in.\""} {"_id": "16527", "title": "Netbeans support for struts 2?", "text": "I have just started learning Struts 2 and have purchased the book Struts 2 in Action. But I want to use an IDE for developing Struts 2 applications, and I basically use NetBeans. Please tell me how I can develop applications using NetBeans. Also, it would be great if somebody could point out a few tutorials on the same. Regards. P.S. I haven't used Eclipse. Is it advisable to switch to it for Struts 2 (if it is better than NetBeans in this case)?"} {"_id": "177979", "title": "Single statement non-braced ifs and code merges", "text": "I've seen code merges being used as an argument for bracing even single-statement ifs. For example: if (condition) { do something; } Unfortunately I can't think of a change that would break a non-braced version during an automatic code merge. Could someone post an example?"} {"_id": "88207", "title": "What constitutes proper use of threads in programming?", "text": "I am tired of hearing people recommend that you should use only one thread per processor, while many programs use up to 100 per process! Take, for example, some common programs: the VB.NET IDE uses about 25 threads when not debugging, System uses about 100, Chrome uses about 19, and Avira uses more than 50. Any time I post a thread-related question, I am reminded almost every time that I should not use more than one thread per processor, and all the programs I mention above are running on my system with a single processor. What constitutes proper use of threads in programming?
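One way to frame the usual advice: \"one thread per core\" applies to CPU-bound work, while threads that spend most of their time blocked on IO (which is typically what the dozens of threads in the programs above are doing) can sensibly outnumber the cores. A minimal Java sketch of that distinction - the pool sizes are purely illustrative, and the same reasoning carries over to the .NET thread pool:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();

            // CPU-bound work: extra threads beyond the core count only add
            // context-switching overhead.
            ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

            // IO-bound work: threads are mostly blocked waiting on disk or
            // network, so a much larger pool keeps the CPU busy.
            ExecutorService ioPool = Executors.newFixedThreadPool(cores * 10);

            cpuPool.shutdown();
            ioPool.shutdown();
        }
    }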
Please make general comments, but I'd prefer the .NET Framework. Thanks."} {"_id": "200873", "title": "Is the Glade GUI designer a good option for a beginner?", "text": "I am a student of Computer Science and I already have basic programming experience in Python 2.x, 3.x, C++, and HTML. I have never made a program with a GUI; I have just programmed games, scripts/plugins, and console applications. First-time developers on Ubuntu are recommended to use the GUI designer Glade, which creates GUIs for the GTK toolkit. They are also recommended to program in the Qt framework. I intend to create a cross-platform application (Linux/Windows/Android/iOS/OSX). Given my experience, is Glade a reasonable option for me to use in order to create a cross-platform application, especially for my first application that needs a GUI?"} {"_id": "173660", "title": "Static DataTable or DataSet in a class - bad idea?", "text": "I have several instances of a class. Each instance stores data in a common database. So, I thought \"I'll make the `DataTable table` field static; that way every instance can just add/modify rows in its own `table` field, but all the data will actually be in one place!\" However, apparently it's a bad idea to use static fields, _especially_ for databases: Don't Use \"Static\" in C#? Is this a bad idea? Will I run into problems later on if I use it? This is a small project, so I can accept no testing as a compromise if that is the only drawback. The benefit of using a static database is that there can be many objects of type `MyClass`, but only one table they all talk to, so a static field seems to be an implementation of exactly this, while keeping the syntax concise. I don't see why I shouldn't use a static field (although I wouldn't really know), but if I had to, the best alternative I can think of is creating one `DataTable` and passing a reference to it when creating each instance of `MyClass`, perhaps as a constructor parameter. But is this really an improvement? It seems less intuitive than a static field."} {"_id": "173667", "title": "Use of interfaces to ease rapid development/prototypes", "text": "Recently I've started to put almost all of my data structures behind interfaces, and many of the classes that contain pieces of logic code as well, depending on how much work they are. I find that this makes development of applications much easier, because I can easily swap out parts of my code when they do not work as well as intended, without changing the rest of the application, and swap them back in when I have corrected them, or if I need something simpler for the time being. I was wondering if I'm developing a bad habit here. Is this an anti-pattern I'm using, or is it okay to do this?"} {"_id": "239247", "title": "how can we have a person allot and track tasks in agile development", "text": "I understand that an Agile team should be self-organized and self-driven, but is there a provision for having someone who will allot tasks to developers and ensure that all user stories will be completed on time? For example, if there are two people on an agile team who are not self-motivated to take up tasks and will work only when a task is assigned to them with a deadline, how can we deal with this in Agile? The problem I face is that no one is fixing the deadlines for the tasks, and the team has been under-delivering for the last two sprints. It would be better if we could have someone who can fix deadlines.
Is there a provision for this in Agile?"} {"_id": "184413", "title": "Automated object creation from user input", "text": "I am working on a command-line application that runs simulations. It has to be heavily configurable; the user should be able to provide a very large number (100+) of parameters, some mandatory and some optional. Obviously, these parameters will be used by a variety of classes in the application. I can see two approaches I can use; which one is a better design in the long run? **Approach 1** Have the (text) input file directly control object creation in the application. For example, the user would provide the following input: root=Simulation time=50 local_species=Sentient strength=3.5 intelligence=7.1 invading_species=Primitive strength=5.1 sociability=2.6 The application would then recursively create objects of the class specified on the rhs of the equals sign, passing the class's constructor the arguments provided in the relevant sub-branch of the input file, if any. I'm using Python, which supports keyword arguments and default argument values when no argument is provided, but I hope the question isn't terribly language-specific. For example, line 1 would tell the app to create an object of `class Simulation`, with three constructor arguments. The first constructor argument is `time=50`. The second argument is an instance of `class Sentient`, which is created by passing the constructor arguments `strength=3.5`, `intelligence=7.1`. The third argument is an instance of `class Primitive`, which is created by passing the constructor arguments `strength=5.1`, `sociability=2.6`. The whole object creation is handled automatically, with just a few lines of (recursive) code. The highest-level object (an instance of `Simulation` in this case) would be passed to the visualization layer. **Approach 2** The input file still looks the same. However, the application's classes now have full control over how they interpret user input. The lhs and rhs values may or may not correspond to keyword parameters and classes. For example, the above input may result in the application creating an instance of `class InvasionSimulation` with arguments `strength_local=3.5`, `intelligence_local=7.1`, `strength_invading=5.1`, `sociability_invading=2.6`, and then calling its method `set_runtime(time=50)`, before passing the object to the visualization layer. * * * One obvious advantage of approach 2 is that the user's input can remain stable even as the application is completely redesigned internally. But I think there's nothing wrong with designating a subset of classes as a \"public API\", and therefore ensuring that it doesn't change in the future, is there? On the other hand, if the application internally follows something quite similar to approach 1 anyway, it feels like approach 2 requires adding a lot of redundant code: instead of automatically interpreting user input, we now need to do it \"by hand\" for each class. Any additional considerations would be much appreciated. Are there any names for these approaches that I can search for?"} {"_id": "177754", "title": "What to do when a project is too difficult to continue developing?", "text": "As a developer, can you tell your project manager that an application is _unworkable_? Or, if you're a project manager, how would you need this presented to you in order to be compelled? This isn't about \"how to work on a poor project\"; it's assuming you cannot.
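(For the \"Automated object creation from user input\" question above, a minimal sketch of approach 1 - in Java here rather than the poster's Python, with `Node` and the registry invented for illustration: a registry maps type names from the input file to factory functions, and construction recurses over the parsed tree, so supporting a new configurable class is one `register` call.)

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // One parsed branch of the input file: a type name plus its arguments,
    // where an argument is either a leaf value or a nested Node.
    class Node {
        String typeName;                             // e.g. "Sentient"
        Map<String, Object> args = new HashMap<>();  // e.g. strength=3.5
    }

    class ObjectBuilder {
        private final Map<String, Function<Map<String, Object>, Object>> registry = new HashMap<>();

        void register(String typeName, Function<Map<String, Object>, Object> factory) {
            registry.put(typeName, factory);
        }

        Object build(Node node) {
            Map<String, Object> resolved = new HashMap<>();
            for (Map.Entry<String, Object> e : node.args.entrySet()) {
                Object v = e.getValue();
                // Recurse into sub-branches; leaf parameters pass through unchanged.
                resolved.put(e.getKey(), v instanceof Node ? build((Node) v) : v);
            }
            return registry.get(node.typeName).apply(resolved);
        }
    }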
I can provide an example of the situation if anyone thinks it's important, but I'm trying to avoid proposed solutions to \"plodding through\"."} {"_id": "240893", "title": "Is there any point in writing a random tester for code that deals with inductive data structures?", "text": "Let's say we're writing a simple JSON parser and we've fully covered the code with unit tests: * it can parse primitives `\"0\", \"123\", \"-456\", '\"\"', '\"asd\"', true, false` * it can parse arrays `\"[]\", \"[1, 2, 3]\"` * it can parse nested arrays `\"[[]]\"` * it can parse objects `\"{}\", '{ \"a\": 1, \"b\": 2 }'` * it can parse nested objects `'{ \"a\": {} }'` Every possible JSON value is either a primitive or an array/object holding other primitives/arrays/objects - this is what I mean by an inductive data structure (is 'recursive' a better term?). It seems to me that any JSON parsing can be reduced to these few cases, and if the parser passes these tests then it is fully correct (unless it was explicitly hard-wired to break when you try to parse special numbers). Is there still any point in creating a random tester that generates random stringified JSON in the hope that one of the generated inputs will break this parser? Would this random tester find anything interesting? P.S. I chose JSON just as an example of an inductive data structure; I'm not interested in writing a JSON parser."} {"_id": "228762", "title": "Can I be forced to continue a project which is outside my job duties?", "text": "I am a DBA / Report Writer in client support for a software company. Starting in my spare time, I wrote a reporting web application with live reports. The ca. 2001 IE-only site was horrific. My application eventually became a huge success with clients and was subsequently swallowed by dev. I think it was assumed that I would fail, and I was told it would never be part of dev; it would always be a support project. Naturally, after such success, they wanted it. They assumed control, demanded I stop working on improvements, and started giving me tasks to \"stabilize\" it. I went from architect / designer / sole developer to code monkey. I have zero input anymore. It was fun to do, but that fun has left the building. I want to continue working on it, but my way. My way is why they have it in the first place, so it must work. Now I'm stuck in an AGILE loop. I don't see them giving back control, so I want to stop. I'm not a developer and don't even work in that department. Can they force me to continue?"} {"_id": "180524", "title": "Domino Solitaire Algorithm", "text": "**Problem Statement -** ![Problem Statement](http://i.stack.imgur.com/F6HJC.png) Given a 2xN grid of numbers, the task is to find the most profitable tiling combination (each tile covers 2x1 cells, vertically or horizontally) covering all cells. I thought of approaching it in a greedy way, enqueuing the maximum possible for any cell, but it has a drawback: a low-profit choice at i could yield a greater profit at i+n tiles. So what should the approach be? EDIT - Test Data Range - N <= 10^5. Source - INOI 2008 Q Paper. **UPDATE** - Working out the plausibility of a dynamic programming approach. **UPDATE 2** - Worked out an answer using DP."} {"_id": "180522", "title": "C++ Design: Functional Programming vs OOP", "text": "**Design Question** Recently, I've been doing more and more FP in C++, mostly in the form of function templates and lambdas, and heavy overloading of a single function name.
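(For the random-tester question above, a minimal sketch of such a generator in Java - depth-limited so the recursion terminates, and with all the size constants arbitrary. The usual payoff of running something like this is not the happy path but the inputs humans forget to write by hand: deep nesting, empty containers, large numbers, and so on.)

    import java.util.Random;

    public class RandomJson {
        private static final Random RNG = new Random();

        public static String value(int depth) {
            // At maximum depth, only leaf kinds (0-3) are allowed.
            int kind = RNG.nextInt(depth <= 0 ? 4 : 6);
            switch (kind) {
                case 0: return Integer.toString(RNG.nextInt(2001) - 1000);
                case 1: return "\"s" + RNG.nextInt(100) + "\"";
                case 2: return Boolean.toString(RNG.nextBoolean());
                case 3: return "null";
                case 4: { // array
                    StringBuilder sb = new StringBuilder("[");
                    int n = RNG.nextInt(4);
                    for (int i = 0; i < n; i++) {
                        if (i > 0) sb.append(",");
                        sb.append(value(depth - 1));
                    }
                    return sb.append("]").toString();
                }
                default: { // object
                    StringBuilder sb = new StringBuilder("{");
                    int n = RNG.nextInt(4);
                    for (int i = 0; i < n; i++) {
                        if (i > 0) sb.append(",");
                        sb.append("\"k").append(i).append("\":").append(value(depth - 1));
                    }
                    return sb.append("}").toString();
                }
            }
        }
    }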
I really like FP for some well-defined operations, but I've also noticed that, as with OOP, one can easily fall into anti-patterns of spaghetti code if one isn't careful (e.g. circular dependencies, which are a bigger issue for state-based code). _My question is: when thinking about adding new functionality, how do you decide between using an FP paradigm or an OOP paradigm?_ I'm sensing it may have to do with identifying the **invariants** in the problem or in the design, but I'm not sure. For example, without an OOP model/simplification of the real world, it may not be immediately obvious what a Dog or a Cat class is (what are its states? what are its methods?). On the other hand, from an FP POV, an Eat() function simply allows an Animal to turn Food into Poop and Energy. It's harder to imagine Eat() being anything else (for me, at least). I'm looking more for C++ answers, but this question could apply to any sub-module in any language that can handle multiple paradigms."} {"_id": "180520", "title": "Cumulative charts within Webdesign", "text": "A rather simple question here, with regard to web design and the paperwork behind it (which I am sure many people/companies don't actually do). Would there be any reason for using a Cumulative Chart? Specifically a Cumulative Resource chart, although I am unsure what the resources could even be! More to the point, there are a few ideas I have been thinking of; unsure of whether they are valid or not, I have come to the people behind programming websites! tl;dr: If it is possible, what can I make a cumulative resource chart for in relation to a website?"} {"_id": "215526", "title": "In what situations do mutually dependent modules have an advantage?", "text": "Earlier today I created two mutually dependent, implicitly linked DLLs just to see if this was possible: http://i.imgur.com/GMACpnC.jpg I am just curious; in general, what advantages might this kind of mutual dependency have?"} {"_id": "100632", "title": "Is this interview question real or a communication skills test?", "text": "I've been to about 50 interviews in my life. Many of them I've done just to get experience, and I keep doing so. The question I keep hearing again and again is > Are you more a front-end or a back-end developer? I program mostly for the web, but 2-3 years ago I was a database developer, and I even wrote a desktop app. And if I remember correctly, I heard this question at interviews that had nothing to do with the web. I can't remember exactly what I answered. IIRC, usually I would say it's hard to tell, and that I had programmed applications X, Y, Z, and W. Last time I started answering and then said _\"Wait, how do **you** define backend?\"_ Now, does this question, whether I'm better at frontend or backend programming, really make sense nowadays? * a website has a front end, true. But its back end is usually the admin website. There's little difference between them. * if I write a DB driver for a Django-driven website, is it really a front-end thing? I remember at some interview I asked what front and back ends are, and got an answer that the front end is for clients and is used or visited a lot, and the back end is something for internal use. Well, it makes no big difference. Things must work, and work efficiently, on both sides. I have also led a dozen interviews myself, and have never asked this question, because it makes no sense and doesn't come to mind.
Is this just a test of whether I jump through the hoop and start speaking, or whether I try to clarify the question?"} {"_id": "120305", "title": "Is Ant still in the \"mainstream\" for Java builds?", "text": "We have been slowly replacing batch command files (Windows .bat), which were simply jarring up the classes compiled in the developer's IDE, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning (and debugging issues) with Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether \"the world has moved on\" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is **whether I push new developers to learn Ant**, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning."} {"_id": "120302", "title": "Include all php files in one file and include that file in every page if we're using hiphop?", "text": "I understand that in normal PHP, if we include a file, we merge its source into the script, and it would take longer for that page to be parsed/processed. But if we're using HipHop, shouldn't it be OK to just create one single PHP file that includes every file containing some class, so that every page which needs those classes (each in a separate file) can just include that one single PHP file? Would this be OK in the presence of HipHop?"} {"_id": "154762", "title": "Is it a good idea to write an OS in a scripting language?", "text": "Is it a good idea to create an OS that's written in a scripting language? For example, how about creating an OS using Python?"} {"_id": "21032", "title": "Help me construct a list of best approaches for new C and C++ developers", "text": "Not specific code-writing practices. Please also include reasoning. My start: * use `GCC` or `Clang` * gcc because it is unchallenged in the amount of static checking it can do (both against standards and for general errors) * clang because it has such pretty and meaningful error messages * when compiling C code using GCC use `-Wall -Wextra -Wwrite-strings -Werror` * in 99.99% of cases the warning is a valid error * when compiling C++ code using GCC use `-Wall -Wextra -Weffc++ -Werror` * you could skip `-Weffc++` (because it can be confusing) * always code against a standard: C (C89, C99), C++ (C++98, C++0x) * while compilers change, standards don't; coding against a standard gives at least some level of assurance that the code will also compile in the next version of the compiler or even a different compiler/platform * make sure that the compiler checks your code against the standard (`-std=c99 -pedantic` for C99, `-std=c++98 -pedantic` for C++98 in GCC) * because automatic checking is always good * use `valgrind` or a similar tool to check for runtime errors (memory, threads, ...) * free bug catching * never duplicate functionality of the standard libraries (if there is a bug in your compiler, make a temporary patch, wrapper, ...)
* there is no chance that your code will be better than the code maintained by hundreds of people and tested by tens of thousands * make sure that you actually fix all bugs that are reported by automatic tools (GCC, valgrind) * the errors might not cause your program to crash now, but they will * never follow recommendations that include \"never use feature X\" * such recommendations are usually outdated, exaggerated or oversimplified"} {"_id": "63602", "title": "How should I restart a career in software development after a 7 year gap?", "text": "I am a graduate in electronics, and I worked in the analog electronics field for around 4 years, but now I have a gap of about 7 years in my career. What are my options to restart my career? Can anyone suggest some refresher courses to enter the software field?"} {"_id": "63604", "title": "What should be on your \"Going Live Day\" checklist?", "text": "Aside from the technical specifics, what should be on your checklist for when you go live with a program? **Are there last-minute things you can do to make a piece of software go into production smoothly?** (assuming you used sound principles in development and tested the average amount) More testing? Discussion with the client? Last-minute optimizations?"} {"_id": "241410", "title": "Is it feasible and useful to auto-generate some code of unit tests?", "text": "Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question features a fair chunk of Java code, but the idea can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudocode. **The idea** Make unit testing less cumbersome by adding ways to auto-generate code based on **human** interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone has ever proved that doing TDD is better than first creating code and then, immediately thereafter, the tests. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.
public class PutMonsterOnFieldAction implements PlayerAction { private final int handCardIndex; private final int fieldMonsterIndex; public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) { this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, \"handCardIndex\"); this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, \"fieldCardIndex\"); } @Override public boolean isActionAllowed(final Player player) { Objects.requireNonNull(player, \"player\"); Hand hand = player.getHand(); Field field = player.getField(); if (handCardIndex >= hand.getCapacity()) { return false; } if (fieldMonsterIndex >= field.getMonsterCapacity()) { return false; } if (field.hasMonster(fieldMonsterIndex)) { return false; } if (!(hand.get(handCardIndex) instanceof MonsterCard)) { return false; } return true; } @Override public void performAction(final Player player) { Objects.requireNonNull(player); if (!isActionAllowed(player)) { throw new PlayerActionNotAllowedException(); } Hand hand = player.getHand(); Field field = player.getField(); field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex)); } } We can observe the need for the following tests: * Constructor test with valid input * Constructor test with invalid inputs * `isActionAllowed` test with valid input * `isActionAllowed` test with invalid inputs * `performAction` test with valid input * `performAction` test with invalid inputs My idea mainly focuses on the `isActionAllowed` test with invalid inputs. Writing these tests is not fun: you need to set up a number of conditions and check whether it really returns `false`. This can be extended to `performAction`, where an exception needs to be thrown in that case. **The goal of my idea** is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch. **The implementation by example** 1. The user clicks on \"Generate code for branch `if (handCardIndex >= hand.getCapacity())`\". 2. Now the tool needs to find a case where that holds. (I haven't added the relevant code, as that may clutter the post.) 3. To trigger the branch, the tool needs to find a `handCardIndex` and `hand.getCapacity()` such that the condition `>=` holds. 4. It needs to construct a `Player` with a `Hand` that has a capacity of at least 1. 5. It notices that the `capacity` _private int_ of `Hand` needs to be at least 1. 6. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the `capacity` as an argument. It uses 1 for this. 7. Some more work needs to be done to successfully construct a `Player` instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. 8. It has found the `hand` with the least capacity possible and is able to construct it. 9. Now, to trigger the branch, it will need to set `handCardIndex = 1`. 10. It constructs the test and asserts the result to be false (the value returned by the branch). **What does the tool need to work?** In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do _some_ trial and error, but that pretty much stops working if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the _primitive_ types are, including arrays.
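To make the example above concrete, this is roughly the test such a tool might emit for the `handCardIndex` branch (a sketch in JUnit style; the `Hand`, `Field` and `Player` constructors shown are assumptions based on the description, not the real API):

    @Test
    public void isActionAllowedReturnsFalseWhenHandCardIndexExceedsCapacity() {
        // Assumed constructors: Hand(capacity), Field(capacity), Player(hand, field).
        Hand hand = new Hand(1);
        Player player = new Player(hand, new Field(1));
        // handCardIndex = 1 >= hand.getCapacity() = 1, so the branch is taken.
        PutMonsterOnFieldAction action = new PutMonsterOnFieldAction(1, 0);
        assertFalse(action.isActionAllowed(player));
    }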
And it needs to be able to construct some form of \"modification trees\". The tool knows that it needs to change a certain variable to a different value in order to get the correct test case. Hence it will need to list all possible ways to change it, without using reflection, obviously. What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they violate constraints. **My questions**: * Is creating such a tool _feasible_? Would it ever work, or are there some obvious problems? * Would such a tool be _useful_? Is it even useful to automatically generate these test cases at all? Could it be extended to do even more useful things? * Does, by chance, such a project already exist, and would I be reinventing the wheel? If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I _might_ make an open source project for it, depending on the time. For people seeking more background information about the `Player` and `Hand` classes used in my example, please refer to this repository. At the time of writing, the `PutMonsterOnFieldAction` has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests."} {"_id": "216765", "title": "What are the steps to grouping related classes into packages", "text": "What steps need to be taken to group related classes into packages in Java? In my case, I have a number of .java files that I'd like to group into 3 packages according to the MVC pattern: one package for Model classes, one package for View classes and one package for Controller classes. I've identified which belong in which package, but I'm not sure of the next step. I want to know how to separate them into packages. Do I make 3 folders and place the .java files in the folder that represents the package they belong in?"} {"_id": "241412", "title": "How to unit test models in MVC / MVR app?", "text": "I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat model design for the app -- most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models with the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data that the app _should_ be returning when tests run. ## UPDATED QUESTION: Specific question: ### Does it make sense to inject a testing layer between my model and my database?
Or would it be better to let the models work on a real database with pre-defined test data inserted into it? The second option, I imagine, will probably result in more accurate testing, but the first option seems more versatile in terms of granularity and makes the testing and development more portable (no db required)."} {"_id": "236439", "title": "Why must essential mutable derived data have an inverse function?", "text": "I was reading the paper Out of the Tar Pit, authored by Ben Moseley and Peter Marks, when I came across the following section on page 25 regarding essential mutable derived data: > ### Essential Derived Data \u2014 Mutable > > As with immutable essential derived data, this can be excluded (and the data > re-derived on demand) and hence corresponds to _accidental state_. > > Mutability of derived data makes sense only where the function (logic) used > to derive the data has an inverse (otherwise \u2014 given its mutability \u2014 the > data cannot be considered _derived_ on an ongoing basis, and it is > effectively _input_ ). An inverse often exists where the derived data > represents simple restructurings of the input data. In this situation > modifications to the data can simply be treated identically to the > corresponding modifications to the existing _essential state_. I don't understand why essential mutable derived data must have an inverse function. For example, consider the following JavaScript code: inputbox.onchange = function () { outputbox.value = md5(inputbox.value); }; Here `inputbox.value` is _input_ to the system and `outputbox.value` is essential mutable derived data. It is derived from `inputbox.value` using the `md5` function. However, the `md5` function doesn't have an inverse. Nevertheless, `outputbox.value` is still essential, mutable and derived. So what do the authors actually mean when they say that \"mutability of derived data makes sense only where the function (logic) used to derive the data has an inverse (otherwise \u2014 given its mutability \u2014 the data cannot be considered _derived_ on an ongoing basis, and it is effectively _input_)\"? Do you have any examples to elucidate their point?"} {"_id": "12189", "title": "How do I learn Python from zero to web development?", "text": "I am looking into learning Python for web development. Assuming I already have some basic web development experience with Java (JSP/Servlets), I'm already familiar with web design (HTML, CSS, JS), basic programming concepts, and I am completely new to Python, how do I go about learning Python in a structured manner that will eventually lead me to web development with Python and Django? I'm not in a hurry to make web applications in Python, so I really want to learn it thoroughly so as not to leave any gaps in my knowledge of the technologies involved in web development in Python. Are there any books, resources or techniques to help me in my endeavor? In what order should I do/read them? **UPDATE:** When I say learning in a structured manner, I mean starting out from the basics, then learning the advanced stuff, without leaving out some of the important details/features that Python has to offer. I want to know how to apply the things that I already know in programming to Python."} {"_id": "236434", "title": "What goes within the Architecture Overview of a Design Specification?", "text": "I have designed a management system for a medical practice, and I am writing the design specification, and I am kind of stumped by what to write for a section.
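(For the derived-data question above, a small sketch of the distinction the authors seem to draw, in Java with invented names: when the derivation has an inverse, an edit to the derived value can be pushed back into the essential state, so the value stays genuinely derived; with md5 there is nothing to push back to, so a mutable md5 field would effectively become a second, independent input.)

    public class Account {
        private long cents; // essential state (the input); assumed non-negative

        // Derivation with an inverse: a simple restructuring of the input.
        public String display() {
            return (cents / 100) + "." + String.format("%02d", cents % 100);
        }

        // The inverse: an edit to the derived string is translated back into
        // the corresponding edit of the essential state.
        public void setDisplay(String s) {
            String[] parts = s.split("\\.");
            cents = Long.parseLong(parts[0]) * 100 + Long.parseLong(parts[1]);
        }

        // By contrast, a mutable field holding a hash of `cents` could not be
        // mapped back to `cents` - it would effectively be input, not derived.
    }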
It asks me to write about the components of a system, but I am perplexed. What exactly are components within a system? Is this generally considered to mean the features that a system has, or does it mean something else altogether?"} {"_id": "12182", "title": "What does 'opinionated software' really mean?", "text": "I've seen a lot of framework/library developers throw the phrase 'we write opinionated software' around, but in practical terms, what does that really mean? Does it mean that the author of 'Opinionated Framework X' says that because they write code a certain way, you should be writing the same type of code that they write? Isn't that a bit pretentious?"} {"_id": "81207", "title": "These days is it required to test a desktop website for IE6 and IE7? Or is IE8 and IE9 enough?", "text": "These days, is it required to test a desktop website for IE6 and IE7? Or are IE8 and IE9 enough? I heard that IE8 has replaced IE7."} {"_id": "68997", "title": "Escaping strings in database layer", "text": "Can escaping functions (e.g. mysql_real_escape_string) be moved down to the database layer, where we would loop through all parameters passed for all queries and escape all strings? Would that be a good design?"} {"_id": "81202", "title": "In what area is LISP's macro better than Ruby's \"ability\" to create DSL", "text": "One of the things that makes Ruby shine is the ability to create Domain Specific Languages, like * Sinatra * Rspec * Rake * Ruby on Rails' ActiveRecord Though one can duplicate these libraries in LISP through macros, I think Ruby's implementation is more elegant. Nonetheless, I think there are cases where LISP's macros can be better than Ruby's approach, though I could not think of one. So, in what areas are LISP's macros better than Ruby's \"ability\" to create DSLs, if any? **update** I've asked this because modern programming languages are approaching the LISP singularity, like * C got a macro-expansion preprocessor, though it is very primitive and prone to error * C# has attributes, though these are read-only, exposed through reflection * Python added decorators, which can modify the behavior of a function (and a class as of v3.0), though they feel quite limited * Ruby has TMTOWTDI, which makes for elegant DSLs, if care is applied, but in the Ruby way I was wondering if LISP's macros are only applicable to special cases, and whether the other programming languages' features are powerful enough to raise the level of abstraction to meet the challenges of software development today."} {"_id": "249628", "title": "Javascript and SQL Lite (multi browser offline SQL/database query)", "text": "I'm in the elections division of my county and am trying to simplify a voter lookup method for our poll judges during election time. Currently we are using a clunky, heavy application that the judges must install, and we only support Windows. My goal is to have an offline (client-side) browser-based solution that will work on any browser on any operating system. I think, based on research that I've done through Google searches and W3 forums, that I should be using JavaScript + SQLite. I have read that HTML5 has a good solution, but please keep in mind that election judges are typically elderly and don't keep up with their updates, or deviate from the preinstalled browser (typically IE), so HTML5 is not a plausible solution for our needs. I've done a day's worth of research on the subject and have just come up empty-handed, because honestly I'm NOT certain what I should be looking for. But as I said, the JS/SQLite combo seems to be the solution.
Unfortunately I can't find any tutorials that will help me reconcile these two systems. Do you have any suggestions? Anything is helpful at this point. Thanks!"} {"_id": "119933", "title": "Mission critical embedded language", "text": "Maybe the question sounds a bit strange, so I'll explain the background a little bit. Currently I'm working on a project at my university, which will be complete on-board software for a satellite. The system is programmed in C++ on top of a real-time operating system. However, some subsystems, like the attitude control system, the fault detection and a space simulation, are currently only implemented in Matlab/Simulink, to prototype the algorithms efficiently. After their verification, they will be translated into C++. The complete on-board software grew very complex, and only a handful of people know the whole system. Furthermore, many of the students haven't programmed in C++ yet, and the manual memory management of C++ makes it even more difficult to write mission-critical software. Of course the main system has to be implemented in C++, but I asked myself if it's maybe possible to use an embedded language to implement the subsystems which are currently written in Matlab. This embedded language should feature: * static/strong typing and compiler checks to minimize runtime errors * small memory usage, and a relatively fast runtime * good numeric support, since the attitude control algorithms are mainly numerical computations * maybe some sort of functional programming feature; Matlab/Simulink encourages you to use that too I googled a bit, but only found Lua. It looks nice, but I would not use it in mission-critical software. Have you ever encountered a situation like this, or do you know of any language which satisfies the conditions? EDIT: To clarify some things: embedded means it should be possible to embed the language into the existing C++ environment. So no compiled languages like Ada or Haskell."} {"_id": "249620", "title": "Decrease probability of picking a list's item based on its index", "text": "I have a list of sounds that are to be played when the user presses a button. For now I just generate a random number within the range of the list and pick the sound with that index. (I also prevent that sound from being played the next time the button is clicked. Since that is easy, I would like to concentrate _only_ on the selection algorithm and leave the exclusion aside.) Since this makes it possible for a few sounds to be played repeatedly in a pattern, I would like to adjust my algorithm: once a file is played, it gets moved to the end of the list. Sounds from the beginning have a greater chance of being played than those at the end - therefore minimizing repetition. Instead of assigning a weight to each index, I want to have one factor determining the probability falloff. For example, I would set this factor to 1.2, which means that each index is 1.2 times as probable to be played as the following one. This hopefully produces a nice falloff in probability. The fastest solution seems to be a mathematical function that maps the random number to the range of indices. That way I do not have to set up intervals, but can directly calculate the result based on the random number. I just cannot figure out the math and implementation behind this idea, so I need you to lay it out for me.
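One way to get the direct mapping: give index i the weight r^i with r = 1/1.2, so each index is 1.2 times as probable as the next. The normalized cumulative weight up to index k is (1 - r^k) / (1 - r^n), and inverting that for a uniform random u gives k = floor(log(1 - u * (1 - r^n)) / log(r)) - no interval table needed. A minimal sketch in Java (class and method names are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class FalloffPicker<T> {
        private final List<T> items;
        private final double r; // 1 / factor, e.g. 1 / 1.2
        private final Random rng = new Random();

        public FalloffPicker(List<T> items, double factor) {
            this.items = new ArrayList<>(items);
            this.r = 1.0 / factor;
        }

        public T next() {
            int n = items.size();
            double u = rng.nextDouble();
            // Closed-form inverse of the cumulative weights (1 - r^k) / (1 - r^n).
            int k = (int) Math.floor(Math.log(1.0 - u * (1.0 - Math.pow(r, n))) / Math.log(r));
            if (k >= n) {
                k = n - 1; // guard against floating-point edge cases
            }
            T picked = items.remove(k);
            items.add(picked); // the played sound moves to the end of the list
            return picked;
        }
    }

Moving the played item to the end then automatically shifts every other sound one step up in probability.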
The algorithm will be used in an Android app, so the programming language I am using is Java."} {"_id": "198247", "title": "Javascript and web application data", "text": "I am pretty new to web application programming, and so to **OOP** JavaScript and the new **client-server** interactions. I am having some trouble deciding how much **AJAX** I should use and how much data I have to save \"locally\". To explain it better: at the moment, to manage the data that I obtain from AJAX requests, I push it into **data structures** like **arrays**, and then I handle it, rendering HTML with the help of templates. Now, I was wondering: is all this necessary? Do I really need to store all this data in data structures, or can I make an AJAX call for every action users take in my application? I apologize for the explanation, but it's a difficult concept for me to explain in English. Thanks in advance."} {"_id": "43439", "title": "Do today's web pages use Web Semantics", "text": "Do today's web pages use Web Semantics? What is the potential of web semantics (I mean the potential to enhance the web)? Is web semantics all about SEO?"} {"_id": "215293", "title": "Role based access to resources for a RESTful service", "text": "I'm still wrapping my head around REST, but I wonder if someone can help with any suggestions or approaches to role-based access control for a RESTful service, particularly from the point of view of securing the data and how the URLs might look. It's probably best to consider an example: say I have a REST service for Customers, and want to split the users of this REST service into Admin, Editor and Reader roles: * Admins can change all attributes of a Customer resource * Editors can change only some * Readers can only view them. Access control rights are assigned to the Customer entities individually. So, for example, a user of the service might have Admin rights to Customers 1, 2 and 3, Editor access to 4 and 5, and Reader access to 7, 8 and 9. Now consider the user calling the service. What is a good way to separate the list of Customers for the current user? **GET /Customer** - this might get a list of all customers that the current user has Admin/Editor/Reader access to. But then on each Customer the consumer would need an indication of what role they have. Or would it be \"better\" to have something like **GET /Customer/Admin** - return all customers the current user has Admin access to. Just looking for some high-level pointers or reading on a decent way to secure/filter the resources based on the roles of the current user."} {"_id": "159063", "title": "REST API Library Conventions", "text": "Most API libraries define one method for each endpoint. If there is an endpoint for getting user information, you might have a method like: getUserInfo(userId); That simple method is often a wrapper around another function that actually does the API call: apiCall('/user/info', {userId: userId}); Another option I see less frequently is for the library to contain only the method that makes the API call. The first method's main advantage is code discoverability, but that isn't that big of an issue for dynamic languages. Some say it also removes the need for the developer to read documentation for the APIs, but why would you ever want somebody using APIs without reading the documentation? The second method has three advantages that I see:
The second method has three advantages that I see: * The library is dramatically smaller * Methods don't need to be updated if the API ever changes * There is no need to maintain documentation for the library; the API documentation fulfills that purpose But I don't see the second method used very often, so perhaps I'm missing something. When is it appropriate or inappropriate to use either method of writing API libraries?"} {"_id": "219732", "title": "Difference between Singleton pattern and auto_ptr<> resp. unique_ptr<>", "text": "I'm maintaining some legacy code of a physical simulation. The calculation object is built as a singleton, to ensure there is only one instance. A co-worker told me that singletons are completely out of date and I should instantiate it through a smart pointer. But I think it is not the same, because initialization through a smart pointer doesn't guarantee that there is only one instance of this object, right? If I want to have one single instance of an object in my code, which way is preferable: to use the singleton pattern or to initialize the object through one of these smart pointers (`auto_ptr<>` or `unique_ptr<>`)?"} {"_id": "135258", "title": "Sanity of design for my in-memory object representations of database rows", "text": "I've been trying to revise the structural design of the C#.NET-based system I'm currently working on. The new design involves a rather light-weight object-relational mapping framework (we're trying to stay away from ADO or linq2sql). Sorry, this turned out rather lengthy, but I cannot describe it more briefly - it's a bit complex. The system has two primary functions: 1. Retrieve rows from an SQL relational disk-based database, then serve them up using `[DataContract]`-marked objects via ASP to a web-based client, as well as receive UPDATEs to those and reroute them to the database. 2. Read off and serve the same kinds of objects to server-side / distributed modules which work with these, generate some result, and push it back to the database. The results (some might ask: "why're you storing generated data?") of module computation _do_ have to be stored in the database due to both the size and the length of computation required to obtain them, only to be retrieved in parts by the web client through the (1) function. Currently, all row-representing objects are defined as separate C# classes with a common trend but little to no inheritance, and no common base class. I want to completely break this pattern, and instigate more extensive inheritance based on a tree of abstract classes and possibly interfaces. I've whipped up a "due-for-test" implementation in the past few days, but I'm concerned about the sanity of this design. Obviously, there is the object-relational impedance mismatch, which I deal with by initializing only the database-column-backed fields unless the other encapsulated objects are provided to the constructor. I'm not so concerned about that, and it's an issue for the old system design as well. Here's what I have so far: A `DatabaseColumnAttribute` class, which targets fields, and defines * name of the column where the data for the field resides. * database type (enum-backed locally) for this column. There's a fixed set of database types this system is designed to support, so be it. * whether or not the column is nullable. * read phrase (i.e. how we represent this column in a SELECT query within an SQL statement): this is generated automatically based on the column type.
There's a fixed number of types the data can be accessed as, so if there's a conversion involved when reading data from a column this will include it. Otherwise, just the column name. * write wrapper (i.e. the database function that converts the data sent into the column from C# to the appropriate representation) - also generated based on the column type. All this is used within a `DatabaseFieldColumnWrapper` object, which contains all the methods required to transfer the data from the column to the field or vice versa. The `DatabaseFieldColumnWrapper` is constructed for each location in the code where a `DatabaseColumnAttribute` is used, meaning it's not quite "static", but rather class-based, since it modifies fields of derived classes. The fields can back properties; there's a `BackedByFieldAttribute` class as well, which I'll skip for brevity. All of these are used within an abstract `DataElement` class, the root of the hierarchy. It defines the following abstract members: a function that provides the backing table's name for this class of objects, a read-only public Id property, and a private function to set the Id (which corresponds to the primary key of the row in the table). It also provides a protected static helper function to retrieve the `DatabaseFieldColumnWrapper` objects for specific fields based on Linq Expressions (to avoid "magic strings"). This can be used in derived types to easily sync the database representation for specific fields/columns. It provides a `Delete()` method, which is (not a destructor, but a function) responsible for deleting only the database representation. The whole system is designed to allow asynchrony and multiple memory representations unless otherwise specified. Here is the "less sane" part: There is a `DataElementAttribute` class, which is designed to modify non-abstract derived types. It is _not_ fully constructed right away where it's defined; instead it is filled in, in a thread-safe manner, upon the construction of the first object of a derived type. It is not inherited, and contains only data specific to the _derived class_ required to speed up construction (i.e. construct all the necessary `DatabaseFieldColumnWrapper` objects beforehand and associate them with the fields). This basically _violates_ the C# OO model, since it's "static only with respect to the current class in the hierarchy", not static to the whole chain. Basically, it makes the `DataElementAttribute` object associate with a particular Type object. It's like a convenience structure, which is generated according to the rules specified in the base class _only_, but initializes its members based on the fields marked as columns for each derived type. Each instance of that particular type has a reference to it. I don't know how to circumvent this issue; it seems to be best in terms of "code reuse", but it also seems like I'm violating a lot of things I shouldn't be when I do it. Any ideas, suggestions? In a more "question" form: do you know of any better way to associate things with Types (without inheritance) and have these accessible from each instance of that Type, which conforms to the OO model?"} {"_id": "49652", "title": "Modules already committed, client doesn't pay, what should I do?", "text": "So the story is simple: an early-stage EU portal hired me to do some extra modules. I got all the source code for local testing, did my job, committed new code.
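Returning to the design question above: the usual way to associate per-type data with a Type, without inheritance and without "static per derived class" tricks, is a thread-safe registry keyed by the type object itself and built lazily on first use. A minimal sketch in Java for illustration (the original system is C#, where a ConcurrentDictionary keyed by Type plays the same role); all names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-type metadata, cached once per concrete class and reachable
// from every instance through the shared registry.
final class ElementMetadata {
    ElementMetadata(Class<?> type) {
        // Reflect over `type`'s annotated fields once here and build
        // the column wrappers for that concrete class.
    }
}

abstract class DataElement {
    private static final Map<Class<?>, ElementMetadata> CACHE =
            new ConcurrentHashMap<>();

    protected final ElementMetadata metadata() {
        // computeIfAbsent guarantees exactly one ElementMetadata per
        // concrete type, created when the first instance asks for it.
        return CACHE.computeIfAbsent(getClass(), ElementMetadata::new);
    }
}
```

The base class stays in charge of the generation rules while each concrete type gets its own lazily built metadata, which is the effect the DataElementAttribute is straining for, without bending the attribute model.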
Now I am out of this project but the client still hasn't paid me and he is not even thinking about it. It has been a couple of months and no contract was signed, so I can't take any legal action. What should I do with all the source code? Sell it? Run an exact copy of that portal? Make the whole portal publicly available?"} {"_id": "49650", "title": "Why is WCF called Windows", "text": "I'm wondering why it isn't called Microsoft Communication Foundation. Does it rely on Windows and will it rely on Windows in the foreseeable future?"} {"_id": "133186", "title": "Are there any good resources to learn how to write tech specs/design documents", "text": "> **Possible Duplicate:** > Writing a Software Requirement Specification > How to make a great functional specification I am starting to write more tech specs and design documents. However, I didn't spend a lot of time in school writing these, and don't have a lot of access to other people's examples. I have found a few good examples of how to structure these documents, for example: Define the project goals Define the system architecture/infrastructure Define the user dialogs and the control flow Define the background tasks Define the database model Define the interfaces to other systems Define the non-functional requirements (response times, security, ...) But I am looking for maybe a little more on best practices for actually writing the contents of these types of sections. I am not sure whether this is a dumb question, since it is just documenting what is there, but I get kind of a block with a blank canvas and start to overthink things"} {"_id": "49741", "title": "How to make a great functional specification", "text": "I am going to start a little side project very soon, but this time I want to do more than the little UML domain model and use case diagrams I often do before programming; I thought about making a full functional specification. Is there anybody who has experience writing functional specifications and could recommend what I need to add to it? How would be the best way to start preparing it? Here I will write down the topics that I think are most relevant: * Purpose * Functional Overview * Context Diagram * Critical Project Success Factors * Scope (In & Out) * Assumptions * Actors (Data Sources, System Actors) * Use Case Diagram * Process Flow Diagram * Activity Diagram * Security Requirements * Performance Requirements * Special Requirements * Business Rules * Domain Model (Data model) * Flow Scenarios (Success, alternate...) * Time Schedule (Task Management) * Goals * System Requirements * Expected Expenses ## What do you think about those topics? Shall I add something else? Or maybe remove something? I read every single answer, and I would like to thank all of you for the useful information. I am doing this side project for a company, and they expect from me a constant flow of communication and I will need to explain why I do every single thing, because I will have to administer the resources they will give me. This will be my first functional spec and as I said I want it to be useful, not just big and useless. I think this is something that has to be done, but I want to do it in the way that will be most useful for me and my team. It's bad that we don't have a manager, so that's why I also need to take care of some administrative tasks... Regarding agile programming, I think this is 100% compatible with the agile approach.
I am an agile programmer myself and I honestly feel more confident when someone has already done the thinking for me. I am still a junior, but I worked before as a Tapestry web developer on other projects, where the organization was total chaos. I don't agree that I am doing a waterfall approach; I think I am just trying to define some boundaries that will make the project easier when development starts."} {"_id": "208238", "title": "Why do I have to keep my open source software license in the root?", "text": "Nearly all open source software licenses require (or at least lawyers generally suggest they require) users to include the full license in the root of the project that they are protecting. One lawyer I spoke to suggests this is a legacy of the CD age, when it was necessary that a full license be included in a jewel case. But today, we're living in the cloud age. Why can't I, for instance, simply host the full license at my website, and include the title + URL of that license in the header of my source files? **Bonus:** If it's generally agreed that established licenses must be kept intact in the root, why hasn't the OSI or FSF approved a license that you _can_ refer to by URL, and what is keeping someone from creating that license?"} {"_id": "244592", "title": "What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?", "text": "I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean roughly some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two - "self-optimizing" and "self-protecting" - are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if we take up too much memory. Or maybe we want to have the ability to dynamically adjust the database memory as needed. So, for example, when we start the application the database begins with a 56 MB buffer; but then as we insert so many rows we want it to automatically jump up to 512 MB, then to 1024, up to a max of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Do you suggest using an Oracle database? My professor believes that by using Java we can basically make up for any memory-management deficiencies that MySQL has in relation to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it is because I'm still fleshing it out. Thanks"} {"_id": "244593", "title": "Is my work on a developer test being taken advantage of?", "text": "I am looking for a job and have applied to a number of positions. One of them responded, I had a pretty lengthy phone interview (perhaps an hour or more) and they then set me up with a developer test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored. The developer test took place on a VM accessed via RDP.
The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, and has a pretty complicated search filtering scheme (there are about 15 statuses, and when sending the search to the server you can search by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Long story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). After completing it, I was told to commit my changes to source control... Hmm, OK. I get an email back saying they thought the SVGs could have their color changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already in there, where it looked like other people had coded all of one feature and nothing else that I could find. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed like 18 hours to this thing now. If I bug-fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field some of the functionality that they don't have time for in their normal team, on the cheap."} {"_id": "162458", "title": "Should I use both WCF and ASP.NET Web API", "text": "We already have a WCF API with basichttpbinding. Some of the calls have complex objects in both the response and request. We need to add RESTful abilities to the API. At first I tried adding a webHttp endpoint, but I got "At most one body parameter can be serialized without wrapper elements". If I made it Wrapped, it wasn't as pure as I need it to be. I got to read this, and this (which states "ASP.NET Web API is the new way to build RESTful service on .NET"). So my question is, should I make 2 APIs (2 different projects)? One for SOAP with WCF and one RESTful with ASP.NET Web API? Is there anything wrong, architecturally speaking, with this approach?"} {"_id": "90579", "title": "Virus source code", "text": "Recently I read an article on the BBC about encrypted viruses. I have no idea how one would even start programming something like that. I would love to see the source code for that or something similar. Does anyone have any relevant material? (Disclaimer: I don't want to create a virus of my own. My interest is purely academic.)"} {"_id": "244598", "title": "Modular Web App Network Architecture", "text": "Assuming that I am dealing with dedicated physical servers or VPSs, is it conceivable and does it make sense to have distinct servers set up with the following roles to host a web application? 1.
Reverse Proxy 2. Web server 3. Application server 4. Database server ![cloud diagram](http://i.stack.imgur.com/ycKgq.png) Specific points of interest: 1. I am confused about how to even separate the web and application servers. My understanding was that such 3-tier architectures were feasible. 2. It is unclear to me if the app server would reside directly between the web and database server, or if the web server could directly interact with the database as well. The app server could either do the computational heavy lifting on behalf of the web server, or it could do the heavy lifting _plus control all of the business logic_ (as implied in the diagram above, thus denying the web server direct database access). 3. I am also unsure what role the reverse proxy (e.g. nginx) could and should fulfill as a web server, given the above-mentioned setup. I know that nginx has web server features. But I do not know if it makes sense to have the reverse proxy be its own VPS, given that the web server - in theory - would be separate from the app server."} {"_id": "214246", "title": "How unique should a UUID identified object be?", "text": "I am working on a fairly large open source project (Drupal) and the next major version will begin to use UUIDs for a lot of things, including configuration objects. If we put UUIDs into the configuration we ship with and then various sites change their configuration, then there will be many, many objects in the wild with the same UUID but different properties. Is this... OK? Or is it better practice to ship without UUIDs and generate them on install?"} {"_id": "214563", "title": "Construct your solution logic in syntax or in a faster and more efficient mental model?", "text": "I am a newbie studying programming, and I came across this question today: How can I make sure that I'm actually learning how to program rather than simply learning the details of a language? A commenter, ChaosPandion, had asked **whether the programmer thinks in syntax or in a faster mental model**. While writing a solution, I will always think about the number of loops or conditions to be used (for example). Am I doing it wrong? If so, how can I correct myself and what should I learn?"} {"_id": "53111", "title": "What is needed to implement a usable functional language", "text": "I've done some programming in a more-or-less functional style, but I've never really studied pure functional programming. What is the bare minimum needed to implement a functional language? As far as I can tell, you need: * The ability to define a function * The ability to call a function * Parameter substitution * Some kind of "if" operation that can stop recursion, like "?:" * Testing operators or functions like "less than" or "equals" * A core library of fundamental functions and operators (+ -...) This implies that you do NOT need: * Looping * Variables (except for parameters) * Sequential statements This is for a very simple mathematical usage - I probably won't even implement string manipulation. Would implementing a language like this significantly limit you in some way I'm not considering? Mostly I'm concerned about the lack of sequential statements and the ability to define variables; can you really do without these "normal" programming features?"} {"_id": "96800", "title": "How much technical detail is too much when talking to non-technical managers?", "text": "I have a boss who is not very technical. He can write some basic SQL and speak some dev lingo, but all in all he is not a pro dev.
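For the minimal functional language question above: yes, you can do without loops, mutable variables, and sequential statements, because recursion replaces iteration and parameters replace variables. Here is a small demonstration of that claim, written in Java but deliberately restricted to the exact feature set the question lists (function definition, function calls, comparisons, a conditional, and core arithmetic operators):

```java
// No loops and no mutable variables -- each function is a single
// conditional expression, with recursion doing the work of iteration.
public final class MinimalCore {
    static double factorial(double n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    static double power(double base, double exp) {
        return exp == 0 ? 1 : base * power(base, exp - 1);
    }

    static double gcd(double a, double b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    public static void main(String[] args) {
        System.out.println(factorial(10)); // 3628800.0
        System.out.println(power(2, 16));  // 65536.0
        System.out.println(gcd(48, 36));   // 12.0
    }
}
```

The practical limits are depth (without tail-call elimination, deep recursion can overflow the stack) and convenience, not computability.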
I get into trouble often by giving him a lot of detail and asking him to make a decision as to which direction he wants to go. For example, recently he asked for a bunch of changes to go into a product. Then he took 80% of the changes away, saying he is not ready to release these yet. But then I speak to our lead developer and he tells me to just push it all. This way we avoid having to manually merge code that we do want pushed vs code we do not want pushed (by push I mean move from dev to QA, forward merge). So, the lesson is... do what's easy. Don't tell the boss that you are pushing up all the stuff. This doesn't seem ethical to me, but this is what the solution turned out to be. So, where do I draw the line? Do you feel that hiding details is unethical or just a normal job function of a developer? I ask that only highly experienced developers answer this question (10+ years). I know there are skilled guys with less experience, but I want experience to speak in this case, not just skills."} {"_id": "236296", "title": "Can the DDD repository modify entity in the DB without an entity object?", "text": "Say I have an aggregate root `Entity` with some flags which are represented by an encapsulated object `EntityFlags`: class Entity { /** @var EntityFlags */ private $flags; ... } I have a repository for this entity. My goal is to modify flags in the DB. There are two ways I see: 1. Get the entity from the repository, modify flags like `$entity->getFlags()->set($name, true)` and save it: `$repository->save($entity)`. 2. Create an additional method in the repository, e.g. `modifyFlags(EntityId $id, EntityFlags $flags)` I think the first way is redundant. But it also seems wrong to use the repository for partial entity updates like in the 2nd way. Which way is the correct one? Maybe I missed something?"} {"_id": "96805", "title": "Are languages just syntax or do they include the framework too?", "text": "In building a language history for Pascal I noticed that at some point languages changed from a sharp line between the language and its common libraries to more of a blurry one. In the first few versions of Turbo Pascal there was no capability for reusable modules, until version 4 introduced units. Later versions of Pascal had a very powerful collection of common libraries. With the introduction of Delphi it came with the **RTL** and **VCL**. Especially the RTL is often considered part of the language. For example, Exceptions are part of the language, but you need to use the `SysUtils` unit to get proper handling. Additionally there is the built-in `System` unit that always gets used in all Delphi projects. Then with Java you have the Java language with the Java runtime and the Java platform. You can't use the Java language outside of the Java platform. Now with .NET we have VB.NET and C#, which are languages that don't exist without their framework either, while Python, Ruby and others exist both within .NET and without. So my question is: is a language just the syntax and compiler, or are the platform and framework part of the language too? Where is the line? Why or why not?"} {"_id": "54467", "title": "Should you charge clients hours spent on the wrong track?", "text": "I took up a small CSS challenge to solve for a client and I'm going to be paid at an hourly rate.
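Returning to the DDD repository/flags question above: a minimal sketch of option 1, in Java purely for illustration (the original is PHP, and every name here is hypothetical). Loading the aggregate, mutating it through the root, and saving it through the single generic save path keeps the aggregate's invariants in one place, which is why most DDD guidance prefers it over a partial-update method on the repository:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical aggregate root, flags object, and repository.
final class EntityFlags {
    private final Map<String, Boolean> values = new HashMap<>();
    void set(String name, boolean value) { values.put(name, value); }
}

final class Entity {
    private final EntityFlags flags = new EntityFlags();
    EntityFlags flags() { return flags; }
}

interface EntityRepository {
    Entity findById(int id);
    void save(Entity entity);
}

final class FlagUpdater {
    private final EntityRepository repository;
    FlagUpdater(EntityRepository repository) { this.repository = repository; }

    void setFlag(int id, String name, boolean value) {
        Entity entity = repository.findById(id); // load the whole aggregate
        entity.flags().set(name, value);         // mutate through the root
        repository.save(entity);                 // one generic save path
    }
}
```

A dedicated modifyFlags() on the repository only earns its keep if profiling shows the full load/save round trip is genuinely too expensive.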
I eventually solved it; it took 5 hours, but I spent roughly 25% of the time on the wrong track, trying a CSS3 solution that only worked in recent browsers and finally discovering that no fallback is possible via JS (as I originally thought). Should I charge the client for that 25%? More details: I didn't provide an estimate. I liked the challenge per se, so I started working on it before giving an estimate (but I have worked with him before, so I know he's not one of those people who have unrealistic expectations). At the very worst I will have spent 5 unpaid hours on an intriguing CSS challenge. And I will give the fairest possible estimate for both of us, since I will have already done the work. :) Edit: Thank you all, I wish I could accept more than one answer! I ended up not billing him for the extra hours (I billed him for 3 and a half), but I mentioned them, so that he knows I worked more on it than I billed him for. Maybe that's why he immediately accepted the "estimate" (which in that case wasn't an estimate, hence the quotes)."} {"_id": "58122", "title": "What is the general definition of something that can be included or excluded?", "text": "When an application presents a user with a list of items, it's pretty common that it permits the user to filter the items. Often a 'filter' feature is implemented as a set of _include_ or _exclude_ rules. For example: * _include_ all emails from bob@example.com, and * _exclude_ those emails without attachments I've seen this include/exclude pattern often; for example Maven and Google Analytics filter things this way. But now that I'm implementing something like this myself, I don't know what to call something that could be either _included_ or _excluded_. In specific terms: 1. If I have a database table of filter rules, each of which either _includes_ or _excludes_ matching items, what is an appropriate name for the field that stores `include` or `exclude`? 2. When displaying a list of filters to a user, what is a good way to label the `include` or `exclude` value? (as a bonus, can anyone recommend a good implementation of this kind of filtering for inspiration?) * * * ### Edit: My solution Thanks for your answers. I decided to use **operation** to describe an include or exclude. So the database fields and program variables are called operation. I like the name because it roughly aligns with the meaning of operation in set theory. As for how it's displayed to users, I fudged it by never labelling 'include' or 'exclude'. Instead it is always presented as part of a full sentence: ![Mockup](http://img824.imageshack.us/img824/3695/examplerestrictionslist.png) The only label used is 'Restrictions', which I think is more specific than 'Filters'. Our users are usually not very technically savvy, so - much as I like the word operation - I decided that displaying a label called 'operation' would be unclear for most of our users. Using full sentences might be a little slower to read, but it avoids confusion."} {"_id": "246991", "title": "Screen out software engineers with poor communication skills?", "text": "Personally, I try to speak and write impeccable English. I try to capitalize proper nouns, get punctuation correct, use parallel form, and employ precise verbiage. (I'm also a fan of the Oxford comma, can you tell?) As a programmer, I put an equivalent amount of effort into the code I write, as code is read many more times than it is written.
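Picking up the include/exclude naming question above: the asker's choice, operation, maps naturally onto a two-valued enum, and a common evaluation policy (an item passes if it matches at least one include rule - or no include rules exist - and matches no exclude rule, roughly how Maven include/exclude sets behave) stays easy to express. A minimal sketch in Java; the rule representation is hypothetical:

```java
import java.util.List;
import java.util.function.Predicate;

enum Operation { INCLUDE, EXCLUDE }

// A rule is its operation plus a match test against an item.
record FilterRule(Operation operation, Predicate<String> matches) {}

final class Filters {
    static boolean passes(String item, List<FilterRule> rules) {
        boolean anyInclude = false, included = false, excluded = false;
        for (FilterRule rule : rules) {
            boolean match = rule.matches().test(item);
            if (rule.operation() == Operation.INCLUDE) {
                anyInclude = true;
                included |= match;
            } else {
                excluded |= match;
            }
        }
        // No include rules at all means "include everything by default".
        return (included || !anyInclude) && !excluded;
    }
}
```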
I pay close attention to style guides, use direct, clear idioms, and try to follow recognized best practices. If I were looking for others to work with, I'd be looking for the same. My question is: is careful usage of human language a trait that correlates with better code? If so, could human resources personnel use this to good effect to screen out applicants, or would they be reducing their qualified applicant pool arbitrarily?"} {"_id": "164858", "title": "Advantages of paid Java Application Servers?", "text": "What are the advantages of WebLogic or WebSphere over GlassFish or JBoss, since they may reach costs of millions of dollars? Edit: Is there additional functionality? They are all Java EE fully certified. **The question is about technology, not business or marketing. From the technological POV, how are the servers different?**"} {"_id": "246996", "title": "How do I deal with classes that are only used once, say in only a single function?", "text": "I have a class that basically builds up a memory-efficient version of a structure to be used in a performance-critical section of code. The resulting structure is ugly but extremely fast, so not using it isn't really an option, but I also can't use it as the only representation of my structure because it has parts not relevant to the analysis stripped. Did I mention it's also very ugly? Let's call this class `Fast`. I also have the original, pretty representation; let's call it `Pretty`. I have a function called `Process()` which is supposed to take in an instance of `Pretty`, convert it to `Fast`, do a few million calculations on it and spit out a result: Pretty pretty; // do stuff with pretty // in another hpp: double Process(const Pretty& pretty) { Fast fast(pretty); // create my fast object // do some processing on fast return result; // at this point fast dies } The instance of `Fast` is created and dies within `Process`, but unfortunately I can't declare a class within a function, so I have to declare it elsewhere. What I would _like_ to do is make the entire class accessible only to `Process`, and not even have it _seen_ by anyone else (just like I would conversely declare a function `private` if the class were the only object that should have access to it). But I can't declare a class private! Here's my current attempt: class Process { public: static double DoProcess(Pretty pretty); // this is the process function private: class Fast { // construct fast here (only Process can see this class!) }; }; And then I just call `Process::DoProcess()`. This isn't really a critical issue so I didn't post it on SO, but it's a style question and I would really like to have a neat solution! I've never come across this issue before and was wondering how others have dealt with it, and if they came up with a nicer solution."} {"_id": "244623", "title": "when to use a scaled/enterprise agile software development framework and when to let agile processes 'emerge'?", "text": "There are quite a few enterprise agile software development frameworks available: * Scott Ambler: Disciplined Agile Delivery * Dean Leffingwell: Scaled Agile Framework * Alan Shalloway: Enterprise Agile Book * Craig Larman: Scaling Lean and Agile * Barry Boehm: Balancing Agility and Discipline * Brian Wernham: Agile Project Management for Government - DSDM, Scrum and others I've also spoken with people who state that your enterprise agile processes should just 'emerge' and that you shouldn't need or use a framework because they constrain you.
**Question 1:** When should one choose an enterprise agile software development framework, and when should one just let their agile processes 'emerge'? **Question 2:** If choosing an enterprise agile software development framework, how does one select the appropriate framework to use for their organisation? Please provide evidence of your experience or research when answering, rather than just presenting opinions."} {"_id": "237749", "title": "How do languages with Maybe types instead of nulls handle edge conditions?", "text": "Eric Lippert made a very interesting point in his discussion of why C# uses a `null` rather than a `Maybe` type: > Consistency of the type system is important; can we always know that a non-nullable reference is never under any circumstances observed to be invalid? What about in the constructor of an object with a non-nullable field of reference type? What about in the finalizer of such an object, where the object is finalized because the code that was supposed to fill in the reference threw an exception? A type system that lies to you about its guarantees is dangerous. That was a bit of an eye-opener. The concepts involved interest me, and I've done some playing around with compilers and type systems, but I never thought about that scenario. How do languages that have a Maybe type instead of a null handle edge cases such as initialization and error recovery, in which a supposedly guaranteed non-null reference is not, in fact, in a valid state?"} {"_id": "60580", "title": "How do I quote a job with PHPUnit?", "text": "I've been doing website programming for 15 years and PHP for the last 5 years or so. I've always written solid code. HOWEVER, I have a client that is insisting on 80% of the code being unit tested. Since the client is ALWAYS RIGHT, I am planning on using PHP_CodeSniffer to make sure my code looks right and PHPUnit to do my unit testing. I'm hoping to learn something through this experience. Are these the right tools to use? How much time is involved in setting up PHPUnit and writing the additional code? It should take me around 8 weeks to write the webpages and self-test as I have done in the past. If I add an additional 4 days (10%) for unit testing (PHPUnit), is that enough? Thoughts? Suggestions? Thanks."} {"_id": "120872", "title": "How should a developer reject impossible requirements?", "text": "Here's the problem I'm facing: * * * **Quote From Project Manager:** Hey Spark, I'm assigning you the task of developing a framework that could be used for many different iOS applications. Here are the requirements: * It should be able to detect the thickness of the thumb or fingers being used to manipulate the UI. * With this information, all elements of the UI should be arranged & sized _automatically_. * For a larger thumb, elements should be arranged nearer the center of the screen. * For a smaller thumb, elements should be arranged nearer the corners of the screen. * For a larger thumb, all fonts should be smaller. (We're assuming an adult in this case.) * For a smaller thumb, all fonts should be larger. (We're assuming a younger person in this case.) **Summary:** This framework is required for creating user-friendly user interfaces programmatically. The framework should be developed in such a way that we can use it for as many projects as needed, so it must also be very developer-friendly. * * * I am the developer given this task, so my questions are as follows: * How can I explain that these requirements are a little ridiculous?
* How can I explain that it would be better to concentrate on developing actual projects? * How can I explain that even if this were possible, I wouldn't recommend developing such a thing? * How do I say NO to this project politely, gently, and respectfully? * How can I explain that even for a developer with 3 years of experience, this might not be possible?"} {"_id": "120873", "title": "Do you need to know Java before trying Scala", "text": "I'm interested in learning Scala. I've been reading a lot about it, but a lot of people value it because it has an actor model which is better for concurrency, it handles XML in a much better way, and it solves the problem of first-class functions. My question is: do you need to know Java to understand/appreciate the way things work in Scala? Is it better to first take a stab at Java and then try Scala, or can you start Scala with absolutely no Java background?"} {"_id": "152352", "title": "Anonymous function vs. separate named function for initialization in jquery", "text": "We just had a controversial discussion and I would like to see your opinions on the issue: Let's say we have some code that is used to initialize things when a page is loaded and it looks like this: function initStuff() { ...} ... $(document).ready(initStuff); The initStuff function is only called from the third line of the snippet. Never again. Now I would say: usually people put this into an anonymous callback like this: $(document).ready(function() { //Body of initStuff }); because having the function in a dedicated location in the code is not really helping with readability, since the call to ready() already makes it obvious that this code is initialization stuff. Would you agree or disagree with that decision? And why? Thank you very much for your opinion!"} {"_id": "152351", "title": "How to manage a developer who has poor communication skills", "text": "I manage a small team of developers on an application which is in the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes: * Working with DBA / Unix / Network / Loadbalancer teams on various tasks * Placing and managing orders for hardware or infrastructure in different regions * Running tests that have not yet been migrated to CI * Analysis * Support / Investigation It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing a well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation?
If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like. **Update** I've been impressed by the quality and insight of the answers I've received. Though most agree I should work to the individual team members' strengths, there's been good debate back and forth. I want to clarify that I'm not trying to address this issue due to a grudge, or because I have a "chip on my shoulder" as was mentioned. I'm looking for advice on how to keep a well-rounded team that is happy and motivated. By observing the variety of answers to this question, it seems like there are a lot of different opinions on how to achieve this."} {"_id": "128652", "title": "What is the purpose of including header files in the solution in Visual Studio?", "text": "So I have been including files in my projects simply by writing `#include "myheader.hpp"` and adding the headers to the Solution Explorer. But recently I have realized that I may omit the step of adding the headers to the Solution Explorer because it does not change anything. The only important thing is to define proper include directories in the project properties. Am I missing something here? Can someone clever explain this to me?"} {"_id": "125509", "title": "Programming language with native concurrency support for large graphs?", "text": "I'm currently looking for a new programming language to learn (currently working through some C++, know some C and Python), specifically one that has built-in concurrency support. I want to try to build a large graph library that can do processing across clusters or multiple cores. I know C++'s Boost library has support for concurrency, but I also want to learn a new language, and I'm guessing a language that was designed with concurrency in mind would also be more pleasant to do concurrent programming in. Overall I see this as a chance to learn a new language, learn about concurrent programming, and tackle a big project. From looking around, Clojure and Scala seem to be the two popular candidates when looking for concurrent programming support, though I'm not sure how these two compare in terms of 1. Speed (specifically for concurrent large graph processing) 2. Community (thinking about pushing this project onto somewhere like GitHub) 3. Ease of programming concurrently Or are there other languages I should consider aside from Clojure or Scala? I have never programmed in a functional language before, but I'm open to learning it. I've seen one of my friends program in Haskell and Clojure and it looks daunting, but I've heard good things about functional programming, esp. for data processing. Thanks!"} {"_id": "152359", "title": "Could someone break this nasty habit of mine please?", "text": "I recently graduated in CS and was mostly unsatisfied since I realized that I received only a basic theoretical approach in a wide range of subjects (which is what college is supposed to do, but still...). Anyway, I took up the habit of spending a lot of time looking for implementations of concepts, and upon finding those I would use them as guides to writing my own implementations of those concepts just for fun. But now I feel like the only way I can fully understand a new concept is by trying to implement it from scratch, no matter how unoptimized the result may be.
Anyway, this behavior led me to choose the hard, time-consuming way by default instead of using a nicely written library, until I hit my head against a huge wall and then try to find a library that works for my purpose... Does anyone else do that, and why? It seems so weird - why would anyone (including me) do that? Is it a bad practice? And if so, how can I stop doing it?"} {"_id": "211994", "title": "Is copyrights notice of a BSD licensed library considered as endorsement?", "text": "As I was reviewing the BSD license of an open source library that I want to use in my commercial product, I found this paragraph: > Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the organization. It's clear that I have to mention the copyright notice, but as for the next paragraph: > The name of the organization may not be used to endorse or promote products derived from this software without specific prior written permission. If I just include the copyright notice of the library I used, is that considered promoting my product by mentioning the library's copyright notice? Shall I seek written permission, or do I just need to add the copyright notice?"} {"_id": "122436", "title": "Should back end processes be included in use cases in requirements document?", "text": "We're writing a requirements document for our client and need to include the use cases of the system. We're following this template: ID Description Actors Precondition Basic Steps Alternate Steps Exceptions Business validations/Rules Postconditions In the Basic Steps section, should we include steps that the system performs in the back end, or should we only include steps that the user directly interacts with? Example: Basic Steps for Search 1: User goes to search page User enters term User presses search System matches search term with database entries System displays results vs Basic Steps for Search 2: User goes to search page User enters term User presses search System displays results"} {"_id": "9730", "title": "Functional Programming vs. OOP", "text": "I've heard a lot of talk about using functional languages such as Haskell as of late. What are some of the big differences, pros and cons, of functional programming vs. object-oriented programming?"} {"_id": "121830", "title": "How to write manageable code with functional programming?", "text": "I just started with functional programming (with JavaScript and Node.js), and from the look of things it looks as if the code I am writing would grow to be one hell of a code base to manage, compared to programming languages that have a sort of object-oriented paradigm. With OOP I am familiar with practices that ensure your code is easily managed and extensible. But I am not sure of similar conventions with functional programming."} {"_id": "218558", "title": "When You Have Both Options, When Functional and When OOP?", "text": "Like (I suspect) a lot of JS devs, I tend to start with intuition first and then come to sound principles/practice with experience informing study of the comp. sci. stuff I never really had any formal training in, but like to look into in my spare time. Note: I tagged JS by way of identifying where I'm coming from, but I'm open to answers from any language where you might have the same dilemma.
So, as I've come to understand more purely functional-oriented languages, I've started to realize that I've occasionally been following that approach by accident. Essentially I'll have some data that doesn't change and a function or set of functions using those "constants" (yeah, yeah...) in conjunction with variable params to hand off a return value that won't necessarily differ between method firings. I tend to reach for whatever tool strikes me as likely to simplify the problem or reduce work the most, but is there maybe something a little less ambiguous - in terms of what are more formally considered the advantages of one vs. the other - to come up with some heuristics to consider? So far the split is typically more towards OOP at higher-level architectural concerns, with occasional functional typically coming into play within or between objects."} {"_id": "211999", "title": "Are Nested Static Library dependencies possible?", "text": "I am working in Qt. 1. Can a static library depend on another static library? (a static lib made by linking against another static lib) 2. If yes, is it possible that after linking to lib2, the generated lib (lib1) would not contain all the code of lib2? In my Qt project I am using a static library which depends on multiple libraries. I had to add all the libraries (with all their headers) in my project, although I need only one lib (and one .h of that class) in my code. Please explain the scenario."} {"_id": "250553", "title": "Are there any benefits in not having referential integrity keys in database?", "text": "So referential integrity is good, but are there scenarios where not having constraints is better? For example, when all the logic of relationships is kept in code, or when database upgrades are simplified? PS: I am not asking whether it is good not to have referential integrity, just what scenarios would justify it"} {"_id": "199166", "title": "Why does a contenteditable div not behave like an input element?", "text": "The motivation for the contenteditable div would be to allow user input inside a normal div element. Why does it then behave so differently from an input element? Mainly I am referring to the addition of a `div` or a `br` on line break. Additionally, it does not have a placeholder. So my question is: what is the point of a contenteditable div? Why would I want to use it at all?"} {"_id": "246660", "title": "Entirely separate business logic layer from MVC", "text": "We are currently refactoring our controller methods in an ASP.NET MVC application. At the beginning we separated the data access layer (our goal was to remove LINQ from controllers entirely). Now we are thinking about another step - how to create a business logic layer to reduce the number of lines and the logic in controllers. Currently we have something like this: public ActionResult CustomerProjects(some parameters) { var collectionOfSomeType = this.CustomerRepository.GetSomeData(); var anotherCollection = this.ProjectsRepository.GetAnotherData(collectionOfSomeType); var viewModel = CreateSomeViewModel(anotherCollection); return View("Index", viewModel); } Of course it's just an example with four lines of code. We have some methods which are bigger (but still IMO not so bad). We injected our helper classes through an IoC container; more complex logic sits there. But now we'd like to remove even those parts. Is it worth our time to create another layer which would be our 'last gate' before the controllers?
The final result would be: public ActionResult CustomerProjects(some parameters) { var viewModel = someSource.PrepareCustomerProjectsViewModel(params); return View("Index", viewModel); } What do you think?"} {"_id": "94036", "title": "How do you pronounce eval()?", "text": "I'm not a native speaker so please excuse me if you think the question is silly, but how should I pronounce eval()?"} {"_id": "243012", "title": "How does dependency injection increase coupling?", "text": "On the Wikipedia page on dependency injection, the disadvantages section tells us this: > Dependency injection increases coupling by requiring the user of a subsystem to provide for the needs of that subsystem. with a link to an article against dependency injection. Dependency injection makes a class use the interface instead of the concrete implementation. That should result in decreased coupling, no? What am I missing? How is dependency injection increasing coupling between classes?"} {"_id": "209417", "title": "Copyright registration against future patents", "text": "**Short question:** Could registering a work containing probably patentable ideas at one or more copyright registration services (such as CRS) be used to effectively prove _prior art_ in case someone tries to patent the idea? **Details:** The particular work is software and some related material which shows some novelties, maybe patentable, maybe not, in the US. I live in Europe, and even if I lived in the US, I wouldn't want to care (and leave tons of $$$ in patent offices) about patents: the thing is to be released under the GPL when the time comes. I want to protect it in such a way that it can stay free, which, as I see it, involves protecting it against probable patent ventures (trying to capture those novelties the project has). Reading about the subject, I see that copyright registration services (such as CRS) exist which, for an affordable price, would archive the state of the work reliably with their registration date. So in case someone tries to "attack" the GPL'ed work in the aforementioned way, how useful could these proofs be in proving _prior art_? (Can you think of any other effective methods for someone who has no reliable connections to release a true _printed publication_, has little funding, and doesn't live in the US?) My goal is _solely protecting the published GPL'ed work from legal takedown_. That is, as I understand patenting, it might be possible that someone patents an already existing technology (in this case my published GPL'ed work), and then sues the original author (inventor) for patent infringement. I only seek protection against this type of abuse, building up some robust enough proof of _prior art_ so the work may remain free."} {"_id": "243015", "title": "How to handle multiple API calls using javascript/jquery", "text": "I need to build a service that will call multiple APIs at the same time and then output the results on the page (think of how a price comparison site works, for example). The idea being that as each API call completes, the results are sent to the browser immediately and the page gets progressively bigger until all processes are complete. Because these API calls may take several seconds each to return, I would like to do this via JavaScript/jQuery in order to create a better user experience.
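Returning to the dependency-injection question above: both halves of the claim can be true at once. The consumer is decoupled from the concrete implementation, but whoever constructs the consumer is now coupled to the subsystem's needs - it has to know about and supply every dependency. A minimal sketch in Java (all names made up for illustration):

```java
interface Mailer { void send(String to, String body); }

final class SmtpMailer implements Mailer {
    public void send(String to, String body) { /* talk SMTP here */ }
}

// Decoupled from SmtpMailer: it compiles against the interface only.
final class NotificationService {
    private final Mailer mailer;
    NotificationService(Mailer mailer) { this.mailer = mailer; } // injected

    void notifyUser(String user) { mailer.send(user, "hello"); }
}

final class CompositionRoot {
    public static void main(String[] args) {
        // The caller shoulders the wiring. Without DI, NotificationService
        // would do `new SmtpMailer()` itself and hide this knowledge --
        // that provisioning burden is the coupling Wikipedia means.
        NotificationService service = new NotificationService(new SmtpMailer());
        service.notifyUser("bob@example.com");
    }
}
```

Since that burden is usually concentrated in one composition root (or handed to a container), most people still count DI as a net decrease in coupling.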
I have never done anything like this before using JavaScript/jQuery, so I was wondering if there were any frameworks/advice that anyone would be willing to share."} {"_id": "168110", "title": "Lua & Javascript documentation generation", "text": "I am in the beginning phase of creating a mobile MMO with my team. The server software will be written in JavaScript using NodeJS, and the client software in Lua using Corona. We need a tool to auto-generate documentation for both the server-side and client-side code. Are there any tools which can generate documentation for both Lua and JavaScript? And as a bonus: we are hosting our project on Bitbucket and the Bitbucket Wiki uses the Creole markup language. So if it's possible, I want the tool to export to Creole. Edit: I know about tools for generating documentation for one of the two languages. However, I don't want 2 different styles of documentation in one project. Therefore one tool which can generate documentation for both languages would be great."} {"_id": "168112", "title": "What to choose API based server or Socket based server for data driven application", "text": "I am working on a project which has a desktop application for Mac/Cocoa, a native application for iPhone, and another native application for iPad. All the applications do almost the same thing. The applications are data-driven. Every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which (socket) technology would be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions."} {"_id": "168114", "title": "I seem to be missing a few important concepts with PhoneGap", "text": "I'm planning on developing an app on multiple platforms and I'm thinking that PhoneGap might be perfect for me. I had been reading that it's one codebase for all platforms, but looking at the PhoneGap guide it seems there are separate instructions for each platform. So if I want to develop for iOS, Android, BB and WP7, do I need to write 4 different sets of code? I'm sure I'm missing something fundamental here. Aside from that, how do people usually approach a PhoneGap build? You obviously / probably want the finished app to look like a native app - is it more common than not to use jQuery Mobile together with PhoneGap? Is there a preferred IDE? I see, in the guide, for iOS they seem to suggest Xcode. I'm fine using Xcode but it seems a bit overkill for HTML & CSS. Do I need to develop in Xcode, and if not, how do I approach it? Use a different IDE / text editor and then copy-paste into Xcode for building and testing? I know this question is long-winded and fundamental, but it's something which I don't think is properly addressed in the guides. Thanks."} {"_id": "40495", "title": "How to organize back-end database design and front-end application in software repository?", "text": "I have an application that has a back-end database (tables, procedures, database-specific dlls) and a front-end (application logic and UI) that separate people are working on. I was wondering what's the best way to organize it in my repository (I'm using Git).
The options I see are: * all-in-one repository * submodules of main repository * two separate repositories * other? Any suggestions are welcome."} {"_id": "191184", "title": "Preventing crawler from interfering with user tracking", "text": "I'm scraping text from various webshops (no images/videos or other data). I'm no expert on user tracking, so I'd like to know if there's a way for me to write my crawler so it won't interfere with the webshop owners' tracking. Perhaps this is already the case since the crawler isn't storing any cookies, requesting images or anything else but the actual pages, but I'd like to be sure. What would I do when requesting the pages to ensure that nothing is tracked in Google Analytics, for instance? Should I send an e-mail to the owners telling them to filter a specific user-agent or...? I've seen How to be a good citizen when crawling web sites? where the last answer states that one should add "crawler" or "spider" in the user-agent. I'm not sure what to make of that as I can't find anything to back it up. (The crawler is written in node.js and uses the request-module to download websites) EDIT: For anyone interested, here's the infinitely simple crawler I've made. It doesn't obey robots.txt because I'm specifying what kind of links I want to follow myself (and I'm too lazy right now to write something that obeys robots.txt): var request = require("request") , cheerio = require("cheerio") ; exports.crawl = function(options) { var links = [].concat(options.url) // Takes either an array of URLs or just a string. , ongoingRequests = 0 , index = 0 , done = false ; process.nextTick(ticker); function ticker() { if(ongoingRequests < options.maxRequests && index < links.length && !done) { var url = links[index++]; ongoingRequests++; if(options.debug) console.log("Downloading " + url); request({ url: url, encoding: options.encoding || "utf-8" }, function(err, response) { ongoingRequests--; if(err) { return; } if(!done) { var $ = cheerio.load(response.body) , shouldContinue = options.data($, response, url) ; if(shouldContinue === false) { console.log("Crawler interrupted."); options.done(); done = true; } var newLinks = options.extractLinks($).filter(function(url, i) { return links.indexOf(url, i + 1) === -1; }); if(options.debug && newLinks.length) console.log(newLinks); links = links.concat(newLinks); if((index - 1) % 5 === 0) console.log("Current index: " + (index - 1) + ", links found so far: " + links.length); } }); } else if(!ongoingRequests) { if(!done) { options.done(); done = true; } return; } process.nextTick(ticker); } }; Use it like this: var crawler = require("../crawler"); crawler.crawl({ url: "http://website.com" , debug: true , maxRequests: 5 , data: function($, response, url) { if(url.indexOf("/product/") === -1) { return; } console.log("extract stuff from $ using CSS-selectors and a nice jQuery-like API"); } , done: function() { console.log("DONE!!"); } , extractLinks: function($) { return $("a").map(function() { return $(this).attr("href"); }).filter(function(url, i, a) { if(!url || url[0] !== "/") { return false; } return i === a.indexOf(url) // Remove duplicates && url.indexOf("/cart") === -1 && url.indexOf(".htm") > -1 ; }).map(function(url) { return "http://website.com" + url; }); } });"} {"_id": "255450", "title": "Is DDD not appropriate for my website or should I introduce a Query Layer?", "text": "I have inherited an ASP.NET website application and the previous developer has used what I believe are some
DDD concepts. I am new to DDD and I have to admit I am struggling with the practical side of it, having come from an N-Tier Transaction Script background. One of the problems with the site is that it is slow. There is no paging on huge grids, and the business logic layer also pulls down way more data than is needed for 90% of operations, 70% of which are reads to bind data to grids. The general design is that the business logic layer has a Service class per aggregate root, with various "GetByX()", "FindBestMatching()" and "Update()" type methods on it. These Service classes are the points from which trees of domain objects are created, such as "Organisation" and "User." Back in my comfortable transaction script days, my service layer was similar: there would be some logic possibly, a call into a repository (or since EF 6 direct use of EF, as it can be mocked for testing) and then the call would return. The objects that were returned would be DTO type objects containing the relevant return data, including possibly some POCO entity objects. These DTOs would travel to clients via webservices, be bound to ASP.NET or Silverlight grids...all was good. My operations on my service layer could be very specific, allowing for queries like: "GetUsersWithSurnameIncludeChildren(string surname, int skip, int take)" allowing for nice narrow DB queries that supported paging and eager loading of related data. It was fast and loaded only the data that was actually needed. Now, with this site I have inherited, there is an actual OO model of the business domain. These classes are POCOs, not generated from EF, but EF is used database-first as a repository (there is a lot of mapping between the domain objects and the EF objects). An example domain object is below (without any logic).
    public class Organisation {
        public int OrganisationID { get; set; }
        public string Name { get; set; }
        public IList AssociatedOrganisations { get; set; }
        public IList Staff { get; set; }
        public IList Managers { get; set; }
        // Business logic method implemented below here
    }
Many of the classes are essentially copies of the underlying Entity Framework objects; others have a significant amount of logic encapsulated within them. The way the site works currently is that every time the OrganisationService service class creates a new Organisation domain object (and it may create a collection in one hit) it also populates ALL of the fields and collections in that object...and all of the fields and collections in those objects! A huge deep tree is created in memory, and a stack of joins and unions (generated by EF) is needed to get all of the data. This is totally inappropriate for a website that mainly displays loads of flat grids. My questions are: 1) Is an application that is 70% pulling data out of a DB for display and 30% business logic actually a bad candidate for DDD? 2) If DDD is appropriate, how are scenarios like this generally designed? Should I implement a separate "Query Service" that is designed just to pull out loads of flat data for binding to pages and grids (these could just be the EF POCOs themselves) and reserve the Domain objects just for parts of the application where logic must actually be performed? 3) Should I only load data into my Domain Object reference types and collections when it is actually accessed (like EF lazy loading...)? 
Previous posts I have read have suggested lazy loading in a Domain Object is a code smell: DDD Lazy Loading Code Smell. I am about to buy a DDD book tonight :)"} {"_id": "255455", "title": "What algorithm can be used for task allocation based on location?", "text": "What algorithm can be used for task scheduling based on location? For example, there are 100 students and 5 lecturers available. Each lecturer must get the same number of students using location allocation, which is allocation of students depending on the nearest place to the lecturer. The system must automatically assign students to lecturers depending on the shortest distance between student and lecturer. Student A Lecturer A Student B Lecturer B Student C Lecturer C EXAMPLE: Lecturer A can get student C because, when comparing the 3 students' addresses, student C is nearest to lecturer A. I have done some research on algorithms such as ant colony, bee colony and location allocation, but from what I can learn they don't meet my requirement. So I am looking for a new algorithm. What type of algorithm can be used to solve this problem?"} {"_id": "41128", "title": "What is the purpose of the \"non-endorsement clause\" in the New BSD license?", "text": "_**Note:** This question is not about the "obnoxious BSD advertising clause". The New BSD license does not contain that clause, and is compatible with the GPL._ I'm trying to pick between the New BSD license and the MIT license for my own projects. They are essentially identical, except the BSD license contains the following clause: > * Neither the name of the <organization> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. Why would anyone want to use this clause? What's wrong with gaining some notoriety if someone makes a well-known piece of software using your code? Also, wouldn't dictating what users can and cannot do with your given name fall outside the domain of intellectual property?"} {"_id": "255457", "title": "RefactorException: Good idea or bad idea?", "text": "When I'm doing large scale refactors I often comment out the contents of methods and use NotImplementedExceptions for stuff that I still need to refactor. The problem is that this interferes with 'valid' NotImplementedExceptions. I'm thinking about introducing a custom RefactorException, whose references I can quite easily find to see what I still need to do. Good idea or bad idea? What are other common ways to do large scale refactors in stages? (Of course the idea is that all these exceptions are removed before I commit the whole refactoring.)"} {"_id": "255456", "title": "Is it legally allowed to state this in a software EULA?", "text": "Just read through the whole EULA of the game ("Minecraft") owned by ("Mojang AB"). And in there there was one sentence that got my attention: We have the final say on what constitutes a tool / mod / plugin and what is not. My questions are: 1. Is it legally allowed to write this in an EULA? 2. Will this really hold up in court? **EDIT:** As people apparently want more detail: I was initially thinking just generally, but as more detail is required, let's take Sweden (where Mojang is from) and the USA (just because)."} {"_id": "129014", "title": "What should a GetSelectedIndex method return when no rows are selected", "text": "I'm creating a UI table component that returns an array of selected indices. 
Some typical return values would be:
    var myRetVal1 = [0];          // one value selected
    var myRetVal2 = [0,1,2,3,8,11]; // multiple values selected
As you can see, I'm always returning an array. I had an idea to return `-1` when nothing is selected, but then I thought that might be confusing when in every other condition an array is returned. So checking for an empty set of values would be either:
    // returns -1
    var selectedItems = tbl.GetSelectedIndex();
    if(selectedItems !== -1){
        // we have data to process
    }
OR
    // returns []
    var selectedItems = tbl.GetSelectedIndex();
    if(selectedItems.length > 0){
        // we have data to process
    }
OR
    // returns null
    var selectedItems = tbl.GetSelectedIndex();
    if(selectedItems){
        // we have data to process
    }
Maybe I'm making too big a deal over this, but is there a standard expectation for this type of control? As I build other controls, should they conform to a standard empty return value, or should they always return an "empty" version of their expected return type?"} {"_id": "161439", "title": "Query regarding VBA as career option", "text": "I am a .NET developer, with over 7 years of experience in VB, VB.NET, C# and ASP.NET. Would it be worthwhile to learn VBA/VSTO? Would it add value to my career? Would there be any companies hiring a person who knows both VBA and .NET? I heard that VBA is mostly used in the finance industry and that it won't pay much. Is that true?"} {"_id": "161435", "title": "best way to do the compile and check cycle", "text": "I am trying to learn Lua and am experimenting on my Linux machine. I am not a programmer, so I am looking for your help to give me some suggestions. What I want to accomplish is making my compile-rewrite-recompile cycle as efficient as possible. Here is what I do. I am using vim in one window to program. I have a shell open in another window. When I want to check my progress, I save the code in vim, switch to my shell, then execute the code. However, this is still kind of tedious; I wish there were a faster, more elegant way. Any ideas? How do you go about this problem?"} {"_id": "161436", "title": "How can we protect the namespace of an object in Javascript?", "text": "Continuing from my previous question: Javascript simple code to understand prototype-based OOP basics. Let's say we run in the console these two separate objects (even though they are called child and parent, there is no inheritance between them):
    var parent = {
        name: "parent",
        print: function(){
            console.log("Hello, "+this.name);
        }
    };
    var child = {
        name: "child",
        print: function(){
            console.log("Hi, "+this.name);
        }
    };
    parent.print() // This will print: Hello, parent
    child.print()  // This will print: Hi, child
    var temp = parent;
    parent = child;
    child = temp;
    parent.print() // This will now print: Hi, child
    child.print()  // This will now print: Hello, parent
Now suppose that parent is a library. As an HTML5 application in a browser this cannot do much harm, because it is practically running sandboxed, but now with the advent of ChromeOS, FirefoxOS and other [Browser] OSes, they will also be linked to a native API, and that would be a head out of the "sandbox". Now if someone changes the namespace, it would be harder for a code reviewer (either automated or not) to spot an incorrect use if the namespace changes. My question would be: Are there many ways in which the above situation can be done, and what can be done to protect these namespaces? 
(Either in the JavaScript itself or by some static code analysis tool.)"} {"_id": "40832", "title": "Generic code or code easy to understand?", "text": "At work I had an argument with a co-worker, because I made a page that he feels is **too generic**. The page has **3 modes** (simple, adv and special) - it has to work like this, because we don't choose how the specification is written. In each mode **the page looks different** (the changes are not big - **it shows/hides 5 text fields** in different combinations based on the mode). My co-worker thinks it should be **3 pages**, and when you change something you should just **merge** the changes into the other two modes. The fields now have code like `rendered="#{mode!=2}"`, etc. P.S. Now the difference is 5 fields, but **in the future no one knows how much it will change**. **EDIT:** We use Seam (JSF/Facelets); here is pseudo Facelet code (to make it easier to understand). I did not put it in panelGroups, to better present the problem. **EDIT:** In the duplicated version it would look like this (pseudo code): modeSimple.xhtml modeAdv.xhtml modeSpec.xhtml"} {"_id": "162002", "title": "Can my company give IP rights away for an application I wrote off hours to another startup?", "text": "I am an intern for a health company (unpaid), let's call it _Company A_, and I noticed that they are using a lot of paper forms for things that can be done on the computer. Excel files for things that shouldn't be in Excel. So I wanted to improve on my programming and figured that this was the best opportunity to do so. I developed a couple of apps for their use. All these applications were written outside company time. One application I made they love, and one of the directors has a brother who has a health startup company. He wants me to hand over my source code so his brother's company can further develop it and maybe sell it (I am out of the equation). I have no intention of handing over my code, as I put a lot of time into it outside work, but I also don't want to burn bridges with anyone in the company. I can't go to the director and tell him "I don't think so". I am fine with demoing to the brother how it works, but the line is giving up my code. If they want to build something like it then they can go right ahead; I have no problem with that. What is the correct way of approaching this, and do they have the right to do this to me? Edit: There is no contract; I never signed anything."} {"_id": "178363", "title": "How is the DOM language independent?", "text": "Quoting from Wikipedia: > The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML and XML documents. and from W3C: > In order to provide a precise, language-independent specification of the DOM interfaces, we have chosen to define the specifications in OMG IDL Now I have been programming in Java, C# and PHP, and in all these languages the keyword `interface` is provided, but how do you implement a language-independent interface? How come you can write an interface without a programming language? Moreover, how can you interact with the DOM using any programming language? 
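For what it's worth, I do know that Java ships a binding of these IDL-defined interfaces in the org.w3c.dom package, so the language-independent spec surfaces as ordinary Java types. A minimal sketch using only standard JDK classes (the file name is made up):
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class DomExample {
        public static void main(String[] args) throws Exception {
            // Document, Element and NodeList are the same interfaces the
            // DOM spec defines abstractly, realized here as Java types.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("page.xhtml"));
            NodeList paragraphs = doc.getElementsByTagName("p");
            for (int i = 0; i < paragraphs.getLength(); i++) {
                Element p = (Element) paragraphs.item(i);
                System.out.println(p.getTextContent());
            }
        }
    }
So the interfaces clearly can be realized per language; what I don't understand is how that realization comes about.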
If someone invented a new programming language, what are the steps he needs to take to be able to interact with the DOM?"} {"_id": "40838", "title": "Cloud availability of short-term \"virgin\" Windows instances?", "text": "I have a situation where we regularly need a freshly installed "virgin" Windows installation to do various work on in isolation, and building one from scratch every time in a VMware instance is getting tedious. Perhaps there are cloud offerings providing a service allowing one to request one or more Windows instances, and after a very short while they would be available for logging in through Remote Desktop? After usage they would just be recycled, without having to pay for a full Windows license every time. Does this exist for a reasonable price? What are your personal experiences with this?"} {"_id": "216175", "title": "Why should I use MSBuild instead of Visual Studio Solution files?", "text": "We're using TeamCity for continuous integration and it's building our releases via the solution file (.sln). I've used Makefiles in the past for various systems but never msbuild (which I've heard is sorta like a Makefiles + XML mashup). I've seen many posts on how to use msbuild directly instead of the solution files, but I don't see a very clear answer on _why_ to do it. So, why should we bother migrating from solution files to an MSBuild 'makefile'? We do have a couple of releases that differ by a #define (featurized builds), but for the most part everything works. The bigger concern is that now we'd have to maintain _two_ systems when adding projects/source code. _**UPDATE:**_ Can folks shed light on the lifecycle and interplay of the following three components? 1. The Visual Studio .sln file 2. The many project-level .csproj files (which I understand are "sub" msbuild scripts) 3. The custom msbuild script Is it safe to say that the .sln and .csproj are consumed/maintained as usual from within the Visual Studio IDE GUI, while the custom msbuild script is hand-written and usually consumes the already existing individual .csproj "as-is"? That's one way I can see to reduce overlap/duplication in maintenance... Would appreciate some light on this from other folks' operational experience."} {"_id": "216174", "title": "A good way to learn a language and fully understand its features and classes is to try all methods in the API", "text": "I have an idea that to learn a language (i.e. Java) I need to try all the methods from the language's API; I want to try to use the methods from its API. Is this a good idea? There are 10,000 methods in the Java API. Is this a good way to learn all of a language's capabilities?"} {"_id": "176935", "title": "Basic use of Business Rules", "text": "I have a query on whether the following requirements would need to be designed via Business Rules - this is for a JEE-based application where currently this is coded as part of the business logic. > System will create a tax account for every city, county and district combination that imposes tax for only certain cities, counties or districts depending on the taxpayer's business. > > When the user establishes an account which exists in all subdivisions (i.e. at city or county level), the application must use his tax code and automatically populate all the locations without requiring the user to data-enter every location. I assume this would mean a lookup from a master table (of tax accounts) to fetch and display all locations. 
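In plain code, that lookup version would be something like this (a rough sketch; the DAO and entity names are hypothetical, not our real ones, and the obvious imports are assumed):
    // Hypothetical sketch: fetch every subdivision that the taxpayer's
    // tax code maps to, and create one tax account per location.
    public List<TaxAccount> createAccountsFor(Taxpayer taxpayer) {
        List<Subdivision> locations =
                subdivisionDao.findByTaxCode(taxpayer.getTaxCode());
        List<TaxAccount> accounts = new ArrayList<>();
        for (Subdivision location : locations) {
            accounts.add(new TaxAccount(taxpayer, location));
        }
        return accounts;
    }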
Is there some way in which a Rules Engine can be used to manage these combinations?"} {"_id": "163499", "title": "Can I use my company name instead of using ofbiz logo and name", "text": "Sorry, this is not a programming question. My company is looking to do some work on OFBiz. I read the Apache license and I am not sure if it's legal to change the logo to our company logo. http://www.apache.org/foundation/license-faq.html#WhatDoesItMEAN"} {"_id": "176931", "title": "How do I store an object in a way that I can use it even in a second execution of the program?", "text": "I am making a program in Java in which I am creating `Dialog box`, `button`, `checkbox`, etc. SWT widgets. Now my problem is: I want that if in one execution a `button` is made and in a second execution a `checkbox` is made, then they both should display in one box. So I need to store that box object. How can I do that?"} {"_id": "163496", "title": "How should I create a mutable, varied jtree with arbitrary/generic category nodes?", "text": "_Please note: I don't want coding help here, I'm on `Programmers` for a reason. I want to improve my program planning/writing skills, not (just) my understanding of Java_. I'm trying to figure out how to make a tree which has an arbitrary category system, based on the skills listed for this LARP game here. My previous attempt had a bool for whether a skill was also a category. Trying to code around that was messy. Drawing out my tree I noticed that only my 'leaves' were skills, and I'd labeled the others as categories. ### What I'm after is a way to make a tree which attempts to separate Model and View, and allows adding an arbitrary type of child node (with a separate way of being edited/rendered) to an arbitrary parent. ![A tree listing various skills from the above link](http://i.stack.imgur.com/vmcFs.png) _N.B. Everything here is bought as a skill, even where it seems like a property. The end users will see this as buying skills (which they do on paper atm), so it should be presented as such, all on the same page._ **Explanation of tree:** The tree is 'born' with a set of hard-coded highest-level categories ( _Weapons, Physical and Mental, Medical_ and more etc.). From this the user needs to be able to add a skill. Ultimately they want to add the 'One-handed Sword Specialisation' _skill_ ( **not** item), for instance. To do so you'd ideally click 'add' with `Weapons` selected, then select `One-handed` from a combobox node that appears on that child, then click add again and enter a name in a text field on _that_ child node that appears. Then click add again to add/specify a 'level' or 'tier' for that leaf; first proficiency, then specialisation (for example). Of course if you want to buy a different skill it's a completely different route to the leaf. You might not need a combo box at the same level down the tree as you did with the weapon example, and you'd need some other logic behind it too. This is what I'm having trouble getting my head around, let alone programming in: how to make a set of classes and **not** specify what order to attach them together in, but still have them all fit. What is a good system for describing this sort of tree in code? All the other JTree examples I've seen have some _predictable_ pattern, and mine _doesn't_. I don't want to have to code this all in 'literals', with long lists of what _type_ (combo-box, text field etc.) of child node should be allowed on what parent. Should I be using abstract classes? Interfaces? 
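To show the rough direction I'm leaning in, here is a purely hypothetical sketch (all names invented):
    // Every node answers the same two questions, so the tree itself never
    // needs a hard-coded table of which child goes under which parent.
    public interface SkillNode {
        String getLabel();
        // The kinds of child the UI may offer to add under this node,
        // e.g. a combo-box choice, a free-text name, or a tier selection.
        List<ChildNodeType> getAllowedChildTypes();
    }
Each ChildNodeType would carry its own editor/renderer, and the allowed types could be read from a database rather than hard-coded in literals.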
How can I make this sort of cluster of objects extensible when I add in other skills, not listed above, that behave differently? If there is _not_ a good system to use, is there a good process for working out how to do this sort of thing? ### The gears in my head are turning: I always need to: * Check the parent * Provide options based on the parent I'm beginning to think that because of this commonality I need some sort of abstract/interface `skill` class that defines/outlines common methods for skill and category. I can (hopefully) put rules and options in a database and read to/from there. The question now, I think, is between an abstract class or interface method and how to best implement that."} {"_id": "176938", "title": "How to check if 4 points form a square?", "text": "Assume I have 4 points (they are 2-dimensional), which are different from each other, and I want to know whether they form a square. How to do it? (Let the process be as simple as possible.)"} {"_id": "179944", "title": "Advantages and disadvantages of structuring all code via classes and compiling to classes (like Java)", "text": "_Edit: my language allows for multiple inheritance, unlike Java._ I've started designing and developing my own programming language for educational, recreational, and potentially useful purposes. At first, I decided to base it off Java. This implied that all the code would be written in the form of classes, and that code compiles into classes, which are loaded by the VM. However, I've excluded features such as interfaces and abstract classes, because I found no need for them. They seemed to be enforcing a paradigm, and I'd like my language not to do that. I wanted to keep the classes as the compilation unit though, because it seemed convenient to implement, familiar, and I just liked the idea. Then I noticed that I'm basically left with a module system, where classes could be used either as "namespaces", providing constants and functions using the `static` directive, or as templates for objects that need to be instantiated ("actual" purpose of classes in other languages). Now I'm left wondering: **what are the upsides and the downsides of having classes as compilation units?** Also, any general commentary on my design would be much appreciated. An informative post on my language can be found here: http://www.yannbane.com/2012/12/kava.html."} {"_id": "111096", "title": "Would using AJAX extensively improve server performance?", "text": "Clearly AJAX improves the user interface, but does this also decrease server load? You would think it does, because the entire page will not have to be served up each time, but maybe there are other variables I'm not considering."} {"_id": "59387", "title": "Is OOP hard because it is not natural?", "text": "One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: We (or at least I) conceptualize the world in terms of _relationships_ between things we encounter, but the focus of OOP is designing individual classes and their hierarchies. Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. 
Examples of such relationships are: \"my screen is on top of the table\"; \"I (a human being) am sitting on a chair\"; \"a car is on the road\"; \"I am typing on the keyboard\"; \"the coffee machine boils water\", \"the text is shown in the terminal window.\" We think in terms of bivalent (sometimes trivalent, as, for example in, \"I gave you flowers\") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The _focus_ is on action, and the two (or three) [grammatical] objects have equal importance. Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., \"the text is being shown by the terminal window\". Or maybe \"the text draws itself on the terminal window\". Not only is the focus shifted to nouns, but one of the nouns (let's call it grammatical subject) is given higher \"importance\" than the other (grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [Consequences are _operationally_ insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies and a \"wrong\" choice can lead to convoluted and hard to maintain code.] I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not widespread approach. Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?"} {"_id": "111090", "title": "Does teaching programming make you a better programmer", "text": "I consider myself an intermediate Python programmer and have been offered an opportunity to be a trainer for a beginner Python programming class. I was wondering if this would really widen my programming repertoire. Has somebody had an enlightening experience after they successfully trained a group of people? Does it also depend on those people -- whether they're programmers or noob students? (In my case they are intermediate .NET and Java programmers) What should I expect from them? One of my fears is -- what if I choked when one of them asked a tangled question. Is this normal?"} {"_id": "59381", "title": "Should I break contract early?", "text": "About 7 months ago I made the switch from a 5 year permie role (as a support developer in C#) to a contract role. I did this because I was stagnating in my old role. The extra cash contracting is really helping too. Unfortunately my team leader has taken a dislike to me from day 1. He regularly tells me I went out contracting too early, and frequently remarks that people in their 20's have no idea what they are talking about (I am 29). I was recently given the task of configuring our reports via our in house reporting library. It works off of a database driven criteria base, with controls being loaded as needed. The configs can get fairly complex, with controls having various levels of dependency on each other. 
I had a short time frame to get 50 reports working, and I was told to just get the basic configuration done, after which they will be handed over to the reporting team for fine tuning, then the test team. Our updated system was deployed 2 weeks ago, and it turned out that about 15 reports had issues causing incorrect data to be returned. Upon investigation I discovered that the reporting team hadn't even looked at them, and the test team hadn't bothered to test the reports. In spite of this, my team leader has told me that it is 100% my fault. As a result, our help desk got hit hard. I worked back until 2am that night to fix the highest priority issues (on my wedding anniversary!). The next day I arrive at work at 7:45 am to continue with the fixes. I got no thanks, but keep getting repeatedly told by my manager that \"I fucked up\" and \"this is all my fault\". I told my team leader I would spend part of my weekend working to fix the remaining issues. His response was \"so you fucking should! you fucked it all up!\" in front of the rest of the team. I responded \"No worries.\" and left. I spent a decent chunk of my weekend working on it. Within 2 business days of finding out about the issues, I had all the medium and high priority issues fixed. The only comments my team leader has made to me in the last 2 weeks is to tell me how I have caused a big mess, and to tell me it was all my fault. I get this multiple times a day. If I make any jokes to anyone else in the team, I get told not to be a smartass... even though the rest of the team jokes throughout the day. Apart from that, all I get is angry looks any time I am anywhere near the guy. I don't give any response other than \"alright\" or silence when he starts giving me a hard time. Today we found out that the pilot release for the next stage has been pushed back. My team leader has said this was caused by me (but the higher ups said no such thing). He also said I have \"no understanding of the ramifications of my actions\". My question is, should I break contract (I am contracted until June 30) and find another role? No one else in my team will speak up in my favour, as they are contractors too and have no interest in rocking the boat. I could complain to my team leaders boss, but I can't see that helping, as I will still be stuck in the same team. As this is my first contract, I imagine getting the next one will be hard without a reference. I can't figure out if this guy is trying to get me fired up to provoke a confrontation (the guy loves conflict), or if he is just venting anger, or what. Copping this blame day after day is really wearing me down and making me depressed... especially since I have a wife and kid to support)."} {"_id": "118769", "title": "Simulating double-click, how long should I wait between clicks?", "text": "I'm simulating a double click programmatically and I want to have a slight pause between both clicks in order to better simulate a real user. I know the interval should be less than `GetDoubleClickTime` but am not sure what would be a good time to choose. Does anyone know of any data on how fast a typical person performs a double-click? I was thinking in the direction `GetDoubleClickTime()/3` but of course the magic number seems a bit iffy."} {"_id": "121730", "title": "How can I make sure my evening project code is mine?", "text": "I'm a physicist with a CS degree and just started my PhD at a tech company (wanted to do applied research). It deals with large scale finite element simulations. 
After reviewing their current approach, I think that a radically different method has to be applied (they are using a commercial tool which is very limited). I'd rather base my research on an open source finite element solver and write a program which makes use of it. I'd like to develop this idea in the evenings, because that's the time that best suits me for programming (during the day I prefer reading and maths), and use it at a late stage of my PhD. I'd like to have the option to release my program as open source on my website as a reference, for future personal or even commercial (e.g. consulting) use. How can I make sure that my company doesn't claim ownership of the code? I thought that a version control system could help (commit only in the evenings). This would document that I did not program during regular office hours (documented elsewhere). But this data can be easily manufactured. Any other ideas? I want to stress that I'm not interested in selling software, and neither is my company. * * * Very interesting responses so far. This clearly helps me. Some remarks: * I'm not restrained by my work contract. National law says that the company owns anything I produce during working hours, and no special agreement has been made (my employer is not selling software and may be a bit naive on this side). They mostly use software, and none of my colleagues is a serious programmer. * Secondly, I need to rethink the point raised by @Mark about trade secrets. This is quite serious in this particular industry. * Thirdly, I care a lot about not upsetting my supervisor/boss. But, and this is the motivation for this question, I'd like to keep the _innovative_ part of my work a bit separate so I can reuse it or at least demonstrate it as a reference work."} {"_id": "164141", "title": "Have you used a Framework/Lib with an LGPL license? If yes, what are the impressions of your customers?", "text": "I am trying to make my first app for sale, and I would like to ask some questions of those who have already sold their software: * Have you used a Framework/Lib with an LGPL license? * If yes, what are the impressions of your customers? For example, if your customers/competitors from the market reveal the technology/secrets that you used in your solution (as the LGPL requires that you dynamically link (.DLL) its libs and clearly disclose the use of the Lib/Framework). Full story: For my project, I used a dual-licensed LGPL/commercial framework; the commercial license was too expensive (about 3000 USD), which pushed me to use the LGPL, but I am still concerned. That is why I ask for advice, and especially motivations."} {"_id": "123744", "title": "easy way to search github, googlecode, codeplex, etc at the same time?", "text": "Is there an easy way to search open repositories such as GitHub, Google Code, CodePlex, etc. at the same time? I wonder whether Google is good at this."} {"_id": "95050", "title": "How to solve an event-queue problem in a computer emulator in Java", "text": "I'm writing a _low-level_ emulator (in Java) of the IBM 1620, a late-50s computer. By "low level," I mean that the lowest _observable_ unit of work is the 20-microsecond "trigger" -- machine instructions achieve their objectives by running a series of these triggers; and I've written code to emulate almost all of them. 
By "observable" I mean that the 1620's console contained a "Single Cycle Execution" key which allowed you to manually advance one trigger at a time - and most of the CPU's inner state (registers, PC, conditions) was displayed on a large console of lights. I am guaranteeing correctness of the state at the single-cycle level. Normally the emulator is either waiting for the operator to press START, or it's in its main run loop, executing trigger after trigger. It also needs to be able to respond to external events like a thrown toggle switch, or the afore-mentioned Single Cycle Execution key, which are conveyed to the CPU thread from the GUI thread by an event queue. The run loop peeks at this queue before firing off the next trigger, and so far this has worked fine. Now, however, I'm implementing the card-reader I/O instructions, and I have a problem. All 1620 I/O was synchronous (the 1620 did not have interrupt capability), so when it executed the ReadCard instruction, it would literally wait in mid-trigger until the card reader delivered a card. This normally took a tenth of a second, _unless the operator had not mounted the card deck_! It's that latter contingency that creates the problem: the 1620 must wait in mid-trigger (i.e. the main run loop is not running), while remaining responsive to external events (thrown switches, the Reset key, or, of course, a card from the card reader). I can't seem to think of a clean, elegant way to design this. Polling for, and handling, "unrequested" events at the top of the run loop has worked well so far, but now I need a callable mechanism that does the same thing but is able to return control to its caller when the right event occurs. Here's what it looks like right now:
    state: MANUAL means unable to immediately process next trigger
           AUTO   means we can go ahead and process next trigger
    do forever {
        processQueuedEvents()             // event-processing never blocks
        if (state == MANUAL) {
            waitForEvents()               // this blocks until queue not empty
        } else { // state == AUTO
            trigger = doTrigger(trigger)  // run trigger and get next one
        }
    }
CPU trigger E30 sends a message to CardReader requesting a card buffer, and must wait to receive the event containing the buffer. In the meantime, the CPU must continue to receive and process events as usual. I could imagine this to be an operating-system design problem, but that's not my strong suit. Can anyone provide some guidance?"} {"_id": "178090", "title": "Test driven vs Business requirements constant changing", "text": "One of the new requirements for our dev team, set by the CTO/CIO, is to become test-driven. However, I don't think the rest of the business is going to help, because they have no sense of development life cycles, and requirements get changed all the time within a single sprint. This gets me frustrated about wasting time writing 10 test cases that will become useless tomorrow. We have suggested setting up processes to dodge those requirement changes and educate the business about development life cycles. What if the business fails to get the idea? What would you do?"} {"_id": "238926", "title": "Fixing bugs generated by another team", "text": "I'm facing the following situation: There are 2 different teams working on the same project using Scrum, and every now and then, bugs related to user stories developed by team "A" are being assigned to team "B". 
We're used to fixing bugs made by other people when they belong to the same team, but things get a bit more complicated when different teams are involved. Some of the developers don't like working this way and say that the bugs must be fixed by the ones who made them, and this is causing some conflicts between the teams. While reading some of the similar questions I've found some interesting ones (Is fixing bugs made by other people a good approach? and also Parallel teams and scrum/agile), but they are not about the same situation, or at least I don't see them that way. During our discussions here we came up with 2 possible approaches: leave the situation as is, or determine that the bugs must be assigned to the team that developed the feature. Do you guys have any suggestions?"} {"_id": "178092", "title": "Why was Objective-C popularity so sudden on TIOBE index?", "text": "I'd like to ask a question that is pretty similar to the one being asked here, but for Objective-C. According to TIOBE rankings, the rise in popularity of Objective-C is unprecedented. This is obviously tied to the popularity of Apple products, but I feel like this might be a hasty conclusion to make since it doesn't really explain the stagnant growth of Java (1. There are way more Android O/S devices distributed worldwide, 2. Java is used in virtually every platform one can imagine). Now I haven't programmed in Objective-C at all, but I'd like to ask if there are any unique features or advantages about the language itself compared to other prevalent languages such as C++, Java, C#, Python etc. What are some other factors that contributed to the rise of Objective-C in this short span of time? > ![http://i.stack.imgur.com/3q0tW.png](http://i.stack.imgur.com/3q0tW.png)"} {"_id": "50831", "title": "How do programmers in the West see programmers in the East?", "text": "The other half of this question: How do Programmers in the East see programmers in the West? * * * The eastern part of the world (India/China/Philippines) mainly provides outsourcing services to the western world (USA and Europe). Do you have experience of working with offshore teams? If yes, how was it? Do you hold any generalized ideas or opinions about programmers from the East (e.g. Are they cooperative, do they deliver on time, do they do quality work?). What are these based on?"} {"_id": "238929", "title": "Tree View Children condition indicator on topmost un-expanded parent", "text": "I am using a tree view in C# and I am creating custom icons for the nodes. Let's say this is my hierarchy, with a node that satisfies a certain condition:
    Root1
    |_Ax
      |_Bx1
      |_Bx2
        |_Cx1
        |_Cx2 (condition true)
      |_Bx3
      |_Bx4
    |_Ay
      |_By1
      |_By2
        |_Cy1
When the tree is collapsed I don't know if some inner child satisfies that particular condition, so I want to display an icon on the topmost un-expanded parent. For example, let's say that the tree is completely collapsed; I would like this behaviour: 1) the first root will be like this: Root1(!) 2) I expand Root1 and I see there is something in the children of Ax:
    Root1
    |_Ax(!)
    |_Ay
3) I expand Ax:
    Root1
    |_Ax
      |_Bx1
      |_Bx2(!)
      |_Bx3
      |_Bx4
    |_Ay
4) Last, I expand Bx2 to find the target node Cx2:
    Root1
    |_Ax
      |_Bx1
      |_Bx2
        |_Cx1
        |_Cx2 (condition true)
      |_Bx3
      |_Bx4
    |_Ay
The symbol **(!)** appears only on the topmost un-expanded parent and disappears once the node is expanded, indicating a path to locate the target node Cx2. But I would like some ideas on how to add the node indicator (!) in an efficient fashion. 
I need to do this in 2 steps - tree creation and node expansion. In the first case, as soon as I create the node Cx2 and notice that it satisfies that particular condition, I need to put the indicator on the topmost unexpanded parent node. In the second case I need to change the indicator location dynamically when the node is expanded. What is the most efficient way to do that?"} {"_id": "148836", "title": "What would be the Impact of P=NP?", "text": "I am preparing for a test and I can't find a clear answer to the question: What would be the impact of proving that PTIME=NPTIME? I checked Wikipedia and it just mentioned that it would have a "profound impact on maths, AI, algorithms..." etc. Can anyone give me an answer?"} {"_id": "148831", "title": "Is it considered bad practice to run different JavaScript for IE", "text": "Is it considered bad practice (and how bad) to run different JavaScript for IE? Currently I'm writing some JavaScript, and the simplest way to work around IE quirks seems to be to check for the browser version and run different code:
    var browserName = navigator.appName;
    if (browserName == "Microsoft Internet Explorer") {
        //Do some stuff
    } else {
        //Do other stuff
    }
It's quick and works well, but it does lead to duplication of code and feels "hacky"."} {"_id": "9843", "title": "Business Objects - Containers or functional?", "text": "This is a question I asked a while back on SO, but it may get discussed better here... Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: Should Business Objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object? Example - Take a customer object: it probably contains some common properties (Name, Id, etc); should that customer object also include functions (Save, Calc, etc.)? One line of reasoning says separate the object from the functionality (single responsibility principle) and put the functionality in a Business Logic layer or object. The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about another class to save a customer if I'm consuming the object? Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense, and why?"} {"_id": "9844", "title": "How to introduce a willing manager to IT and web development?", "text": "After experiencing quite a lot of problems with our small project's manager (we're a startup, and he's the only guy who has the appropriate education; the rest of us are programmers), we have finally identified the cause of our troubles: his understanding of the inner workings of our project is limited, as is his understanding of a normal software development workflow. He has expressed the desire to quickly learn and to fill these gaps; however, I don't really know what books, articles or blogs to recommend to him. In short, can you recommend good reading for a manager-type person who is not going to be a developer himself, but needs to understand the web application development process well enough to understand our explanations when he requests them? P.S. 
No, kicking him and finding another manager is not an option at this moment."} {"_id": "9849", "title": "How could I hire a programmer to add a small feature to an OSS project?", "text": "I'd like a feature added to Eclipse, as a small plug-in, but: * It's a bit niche, so not high demand. So if I post it as a feature request it's unlikely to be followed up. * Still, I'm sure someone else would find it handy. * I'm a programmer, but I don't know Java, and I don't think it's currently worth my time learning Java just to code this. What might be a good way to find a programmer who could code such an Eclipse plug-in, and pay them to do the job? My example is specifically about Java and Eclipse, but what might be an answer to this question in general terms?"} {"_id": "151072", "title": "How abstract should you get with BDD", "text": "I was writing some tests in Gherkin (using Cucumber/Specflow). I was wondering how abstract I should get with my tests. In order to not make this open-ended, which of the following statements is better for BDD:
    Given I am logged in with email admin@mycompany.com and password 12345
    When I do something
    Then something happens
as opposed to
    Given I am logged in as the Administrator
    When I do something
    Then something happens
The reason I am confused is because 1 is more based on the behaviour (filling in email and password) and 2 is easier to process and write the tests."} {"_id": "151076", "title": "Approaching Java/JVM internals", "text": "I've programmed in Java for about 8 years and I know the language quite well as a developer, but my goal is to deepen my knowledge of the internals. I've taken undergraduate courses in PL design, but they were very broad academic overviews (in Scheme, IIRC). Can someone suggest a route to start delving into the details? Specifically, are there particular topics (say, garbage collection) that might be more approachable or be a good starting point? Is there a decent high-level book on the internals of the JVM and the design of the Java programming language? My current approach is going to be to start with the JVM spec and research as needed."} {"_id": "256141", "title": "Find all possible subarrays of an Array", "text": "I am lost. I just can't seem to get my head around backtracking/recursion approaches. I understand how simple recursion problems like factorials work; I can even trace those by hand. But when it comes to backtracking problems I am so lost. I keep trying; it's been 5 hours and I have been reading various approaches on forums and Stack Exchange, but nothing has clicked. For instance, say I have an array `[1,2,3,4]` and my destination value is `5`. I am trying to find all possible combinations that sum to my destination value. I broke this problem down further to make it even simpler for myself to understand: first I can find all possible combinations of the array and then pass them to another function which checks whether the sum of that array equals the destination value, and only then does it print it. Can someone advise how to approach this? I am not looking for code or an answer; I want to be able to think about and imagine this in my head clearly."} {"_id": "212219", "title": "Consolidating hotels data from various booking sites with different IDs or reference", "text": "In one of my projects, I have data for hotels, and other booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to reference Hotel A. 
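The only automatic check I can imagine is heuristic: compare normalized names and coordinates. A hedged sketch of what I mean (the types are hypothetical, and both the normalization and the 150-metre threshold are guesses):
    // Hypothetical sketch: treat two listings as a candidate match when
    // their normalized names agree and they lie within ~150 metres.
    boolean isCandidateMatch(HotelListing a, HotelListing b) {
        String nameA = a.getName().toLowerCase().replaceAll("[^a-z0-9]", "");
        String nameB = b.getName().toLowerCase().replaceAll("[^a-z0-9]", "");
        return nameA.equals(nameB)
                && distanceInMetres(a.getLatitude(), a.getLongitude(),
                                    b.getLatitude(), b.getLongitude()) < 150;
    }
(distanceInMetres would be a haversine-style helper I'd have to write.)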
Even so, I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide APIs; then I can cross-check the name, address or latitude/longitude, but if they differ a little bit I might attach the reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which do hotel price checking across many booking sites, but what do they do to make sure they are checking the right hotel on these booking sites? Does anyone have any experience with this?"} {"_id": "212210", "title": "Mockup strategy for Responsive Web Design (from a programmer point of view)", "text": "When making mockups for _Responsive Web Design_ projects, should I separate them by **Page** or by **Screen Size**? Which one would be more helpful when I start writing the source code? What are the pros and cons of the following: * Page based: All the home page mockups in all different screen sizes * Screen-size based: All the pages for width <= 960px I'm using Balsamiq Mockups to make mockups and I prefer to store all related mockups in a file like `homepage, login` or `960px, 768px`."} {"_id": "232360", "title": "What is the difference between Repository Pattern and Facades Pattern?", "text": "I've always used the repository pattern in my applications. But I have seen that many people use facades instead of repositories as a naming convention, though the operation is the same, I think. Why is there this difference? Is there a real difference between them or not?"} {"_id": "22549", "title": "Understanding Microsoft SQL certifications", "text": "I am royally confused about the certifications Microsoft has out for SQL Server. From what I can gather from reading their site as well as others, there are various levels of tests (like in university): they have level 100 series tests, 200, 300... up to about 400 for the new Master of SQL Server level. I wanted to knock a few out since I am doing daily SQL work now; I figure I can get on-the-job practice and learn where I fall short. Where do I start? What test and test details do I need to know to take and pass each exam? What order should I take them in?"} {"_id": "22542", "title": "Where is Smalltalk-80 best used?", "text": "I want to know in which applications/programming domains Smalltalk-80 is most suitable. Could anyone please provide me some useful links that could answer my query? Through googling I learned that some companies use it for: logistics and foreign trade applications; desktop, server and script development; data processing and logistics; scripts and presentations. But I can't find documents/research papers that can tell me in which programming domains Smalltalk-80 (or Smalltalk) is best suited. Some of the programming domains are: - Artificial intelligence reasoning - General purpose applications - Financial time series analysis - Natural language processing - Relational database querying - Application scripting - Internet - Symbolic mathematics - Numerical mathematics - Statistical applications - Text processing - Matrix algorithms I hope you guys can help me. I am doing this for my case study. Thanks in advance."} {"_id": "250961", "title": "Is there an official programming format?", "text": "A person can make their code readable and neat in their own way. 
However, is there a standard programming format that professional programmers comply with?"} {"_id": "26438", "title": "How much business logic should be allowed to exist in the controller layer?", "text": "Sometimes we have some business logic represented in the controller code of our applications. This is usually logic that differentiates what methods to call from the model and/or what arguments to pass them. Another example of this is a set of utility functions that exist in the controller that may work to format or sanitize data returned from the model, according to a set of business rules. This works, but I am wondering if it's flirting with disaster. If there is business logic shared between controller and model, the two layers are no longer separable, and someone inheriting the code may be confused by this unevenness in the location of business-logic-related code. **My question is how much business logic should be allowed in the controller, and in what circumstances, if any?**"} {"_id": "62986", "title": "What's an assessment day and what kind of individual & group based programming activities are involved?", "text": "The only things that I know: * An assessment day is much more than a simple interview. Actually 1 or more interviews are just a part of it. * Both individual and group based programming activities are involved. Any ideas about what kind of programming activities to expect (preferably from people who have already participated in an assessment day)?"} {"_id": "62987", "title": "What about all those coding rules?", "text": "I have always supported the idea of having coding rules for developers in a company or a specific project, especially if the company size is greater than 10. The bigger the company, the bigger the need. I know a lot of people will disagree, but I have seen projects that don't have them, and the code looks like a total disaster. The real problem that comes from this is: how do you make those hard-headed ones that don't like to use brackets in if statements, or use the same connection string everywhere in the code, or whatever, use the coding rules without making them oppose the idea?"} {"_id": "34469", "title": "Adobe Air apps on the Mac App Store?", "text": "Before you chastise me for asking this: I have an Adobe Air app that has a lot of investment in it, and if there is a way to utilize the existing code-base in _any_ way, it is worth considering. I have heard news reports in passing about Adobe creating some kind of tool to allow Flash or Air apps to be ported to native Objective-C code, but the information found on Google is mostly commentary on the one-time Apple blockade. My question(s) is/are this: 1. Is it even possible, at any level, to use an existing Air app to create a Mac Store app? 2. If possible, what are the resources I can use to accomplish this?"} {"_id": "157287", "title": "How to motivate co-workers to write unit-tests?", "text": "We're working on a large product which has been in production for about 5 years. The codebase is.. erm.. working. Not really well, but it is working. New features are thrown into production and tested with a small QA. Bugs are fixed, etc. But no one, except me, is writing unit-tests. No one uses the power of "tracking" down bugs by writing unit tests to ensure this special bug (test case) would never, ever occur again. I've talked to management. I've talked to developers. I've talked to everyone in the whole company. Everybody says: "Yep, we must write more unit-tests!" That was about a year ago. 
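To be concrete about what I keep demonstrating: pin each bug down with a failing test before fixing it, so it can never silently come back. A minimal sketch (JUnit 4; the class and the bug shown are made up):
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceTotalTest {

        // Regression test for an invented example bug: totals were being
        // rounded per line item instead of once on the final sum.
        @Test
        public void totalIsRoundedOnceOnTheFinalSum() {
            Invoice invoice = new Invoice();
            invoice.addLine("widget", 3, 0.333); // 3 items at 0.333 each
            assertEquals(1.00, invoice.total(), 0.001);
        }
    }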
Since then I have forced the introduction of pre-commit code review (Gerrit) and continuous integration (Jenkins). I held some meetings about unit-tests and I also showed the benefits of writing unit-tests. But no one seems to be interested. **Q1: How do I motivate my fellow co-workers to write unit-tests?** **Q2: How do I stay motivated to follow my personal code quality standards?** (Sometimes it's really frustrating!) PS: Some frustrating facts (reached in 1 year): * Total unit-tests: 1693 * Total "example unit-tests": around 50 * Done by me: 1521 **Edit:** Am I expecting too much? It's my first working place and I'm trying to do my best. **Edit 2:** Based upon all the answers I've made a small checklist for myself. I've talked to two developers in private and we had a good and honest talk. One of them told me, like Telastyn said, that he is really uncomfortable with unit-tests. He said that he would like to be "more professional" but he needs a kickstart. He also said that our unit-test meeting with all developers (around 9-11) was good, but it was too crowded. Meh. Some criticism for me, but I'll learn from that. (See the answers below concerning TDD kata meetings!) The other one said that he is not interested in writing unit-tests. He thinks that his work is good enough for his salary. He doesn't want to put more effort in. I was quite speechless. Typical 9-5 "worker". Next week I'm going to talk to the other developers. Thanks for your great answers (so far!) and your support. I really appreciate it! I've learned a lot, thank you very much!"} {"_id": "117937", "title": "Conflicting versions of jQuery in Separate Extensions", "text": "I have built a few browser extensions that live in GMail. Since they are larger extensions, they incorporate jQuery 1.6.x. I am using jQuery as a content script, which means it is injected into GMail and then my scripts reference jQuery as they are loaded afterwards. I have found that when other extensions are installed alongside my extensions and they incorporate earlier versions of jQuery, the earlier versions are loaded first and my 1.6.x is ignored. The functionality I have that depends on 1.6.x no longer works, and that's a dealbreaker. I'm trying to come up with an elegant solution for this. My first instinct is to namespace my version of jQuery, but loading jQuery twice seems clunky. Possibly testing for jQuery and then doing a diff, but that seems tedious. Any thoughts?"} {"_id": "117938", "title": "Ensuring successful merges in Subversion", "text": "Subversion Re-education actually convinced me, but for now I'll stick to svn - mastering the use of a very popular tool can't be bad. To date I've not used branching/merges in production code; now I've decided to give it a try in a small team environment. I'm afraid of suffering the same pain as described in Spolsky's article. **Is there a "right" way of working with branches in svn?**"} {"_id": "92916", "title": "Why is Perl so heavily used in Bioinformatics?", "text": "What is it about Perl that makes it so useful in Bioinformatics? Why isn't C++ or Matlab or Python the big language?"} {"_id": "206212", "title": "Monitoring App: Client side or Server Side?", "text": "I have a monitoring web application which has a .NET backend and a Silverlight frontend. The application crunches big chunks of data, processes them and presents them to the user. Then the user can interact with the data to see different graphs of the log. 
He/She can unselect dimensions, group some values, choose metrics like transaction count, transaction amount (dollars), etc. Currently, I combine log data into 3-minute chunks, then make a lookup from it and send it to the client side. In this raw form, the data's size is optimized for the network. On the client side I have a lot of business logic to process this data for presentation. Also, when the user changes options, the client side processes the data according to the user's choices. We have chosen this path to serve this application from only one server. We don't have to scale with the number of users in the company because the whole thing happens on the client side. This is good, but we are sacrificing performance. I'm really curious about what is going to happen if I choose to do all the computation for the user's choices on the raw data at the server side. - Is it going to be faster? - Do I need to scale immediately? - Is fetching the data only once on the server, caching it in something like Redis, and then doing the computations per user request a better solution? - If the client-side approach is good, do I need to switch to JavaScript and JavaScript client-side MVC frameworks like AngularJS? I really don't know how to write my whole business logic in JavaScript at the moment. **Extra Info** The average desktop has 2 GB RAM and a dual-core CPU (Core 2 Duo). Our servers are in a VM cluster. They have scalable RAM (minimum 8 GB) and 8-core Xeon CPUs. 100-150 concurrent users can use it. Every user can do different manipulations on the data; that's why they all have their own data on the client side. Thanks"} {"_id": "110056", "title": "How would you approach developing a Hotel Reservation System?", "text": "Till now, I have had the experience of developing either a desktop application or a website. But with this hotel reservation system, I am facing a problem that needs both. The application needs to fulfil the following requirements: * The manager can do the bookings/checkout of the incoming customers at the reception counter system. * Customers should be able to book themselves online. As you can see, the second requirement clearly means that a web app is required, whereas the first can be accomplished via desktop development. This type of requirement is pretty common. As in the case of cinema seat bookings, we are free to book ourselves online or we can go to the ticket window and buy one. The question is how these simultaneous and different requirements are fulfilled in a single project. There has to be a sync, or else a customer might book himself into an already booked room. From my understanding, there are two ways to develop this solution. 1. You develop a web application with a back-end (admin panel) and host it on a server. The customers can book themselves on the front-end, while the manager (at the reception desk) can use the administrator panel to carry out his bookings and checkouts. Since both front-end and back-end are using a single database, there is no requirement for sync. 2. Develop a desktop application for use at the hotel and a website for the customers to book online. The two applications are maintaining a single database, therefore synchronization comes into play. Some definite development for the sync is mandatory, or else the system will fail. Which of the above solutions should I choose? This is also a common scenario for banks. They handle transactions at the local branch as well as over the Internet. How do they achieve this?
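Whichever of the two architectures is picked, the double-booking risk disappears only if the availability check and the reservation write happen as one atomic step against the shared database. Below is a minimal sketch of that idea in TypeScript; the DbClient interface, the bookings schema and the Postgres-style placeholders are all assumptions for illustration, not part of the question.

    // A hypothetical SQL client: any driver with parameterized queries works.
    interface DbClient {
      query(sql: string, params: unknown[]): Promise<{ rowCount: number }>;
    }

    // Both the online front-end and the reception desk call this one code path.
    async function bookRoom(db: DbClient, roomId: number, checkIn: string, checkOut: string): Promise<boolean> {
      // Keeping the check and the insert in one statement narrows the race
      // window; a database-level overlap constraint (or serializable
      // transactions) closes it completely.
      const result = await db.query(
        `INSERT INTO bookings (room_id, check_in, check_out)
         SELECT $1, $2, $3
         WHERE NOT EXISTS (
           SELECT 1 FROM bookings
           WHERE room_id = $1 AND check_in < $3 AND check_out > $2
         )`,
        [roomId, checkIn, checkOut]
      );
      return result.rowCount === 1; // false means the dates were already taken
    }

This is also why option 1 tends to be the simpler route: one database and one code path give you this atomicity for free, while option 2 has to rebuild it inside the sync layer.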
Much like I mentioned above, or something different?"} {"_id": "148168", "title": "Will the customer benefit from a DVCS in any way?", "text": "Some of us can say a Distributed Version Control System (e.g. Mercurial, git) will have a positive impact on developers only out of the experience of using one (under the right conditions: higher productivity, higher code-base stability, etc.), but in what situations could a customer ( **i.e. the observer of the outcome of the software development lifecycle** ) tell there is a difference between using a CVCS and a DVCS? Anyone with empiric evidence of a customer \"feeling\" a difference upon DVCS adoption gets a big double-chocolate cookie with whipped cream and a cherry on top. Note: \"tell the difference\" doesn't mean \"know there was a DVCS involved\" nor even remotely \"know what a DVCS is\""} {"_id": "220755", "title": "Is there a special name for a condition which will break a loop if it increments a set number of times", "text": "Is there a name for including a limitation in a loop structure to prevent it from running if its primary condition becomes unwieldy? For example: for (var i = 0; i < len; i++){ some_function(); } but if len is somewhere set to ten million, I don't want to run it, so I would do something like: for (var i = 0; i < len && i < 50; i++){ some_function(); } Is there a name for this type of hard-coded condition?"} {"_id": "104320", "title": "Why is extending the DOM/built-in object prototypes a bad idea?", "text": "I'm looking for a definitive answer to why extending built-in prototypes is so heavily chastised in the JS developer community. I've been using the Prototype JS framework for a while, and to me doing `[1,2,3].each(doStuff)` seems much more elegant than `$.each([1,2,3], doStuff)`. I know that it creates \"namespace pollution,\" but I still don't understand why it's considered to be a bad thing. Also, is there any real performance degradation associated with extending built-in prototypes? Thanks!"} {"_id": "127115", "title": "How to spread awareness for generic programming among team members?", "text": "I am working in an environment where people believe: 1. Java generics are a feature exclusively used for library writing and not for real coding. 2. C++ is an OO programming language; `template` is an optional and avoidable feature. Yet these people rely heavily on libraries written using generic programming (e.g. STL, Java containers). If I write code using `template`s or `generics`, the code reviewer is most likely to reject it and will comment to write it in a **_\"proper / understandable / elegant\"_** way. Such a mentality runs from the normal programmer up to the most senior manager. There is no way out, because 90% of the time these people have the lobbying power. What is the best way ( _not being cut-throat_ ) of explaining to them the practical approach of writing code that is both OO and generic at the same time?"} {"_id": "61143", "title": "How do you know if you should use Self-Tracking Entities or DTOs/POCOs?", "text": "What are some questions I can ask myself about our design to identify if we should use DTOs or Self-Tracking Entities in our application? Here are some things I know of to take into consideration: * We have a standard n-tier application with a WPF/MVVM client, WCF server, and MS SQL Database.
* Users can define their own interface, so the data needed from the WCF service changes based on what interface the user has defined for themselves * Models are used on both the client side and server side for validation. We would not be binding directly to the DTO or STE * Some Models contain properties that get lazy-loaded from the WCF service if needed * There are permission checks on the server side which affect how the data is returned. For example, some data is either partially or fully masked based on the user's role * Our resources are limited (time, manpower, etc.) So, how can I determine what is right for us? I have never used EF before, so I really don't know if STEs are right for us or not. I've seen people suggest starting with STEs and only implementing DTOs if they become a problem; however, we currently have DTOs in place and are trying to decide if using STEs would make life easier. We're early enough in the process that switching would not be a huge deal, but I don't want to switch to STEs only to find out it doesn't work for us."} {"_id": "127117", "title": "How to convert a copy/paste/spaghetti programmer to see the light?", "text": "This question was inspired by this one. While that other question was deemed localized, I believe the underlying problem is something that is extremely common in our industry. I know there are some developers who will read this and think I'm making this stuff up, and then they might reply how everyone cares about their work and wants to learn, but just looking at other Programmers SE posts (case in point), I know that's not universally true. So let's say you have someone on your team (or maybe the majority) whose standard operating procedure is to copy/paste and who believes that everything can be solved if only you add enough function calls and variables. This person has never heard of TDD, DRY or SOLID, and outside of the 40 hours at work when they are busy working, they have never once read a single methodology/practices/design book. In the past I (and others) have asked how you teach OOD. But now I'm thinking that's not the right question. The real question is: how do you approach such a person/team and make them curious about better ways of doing things? How do you inspire them to learn? Without that, it seems that all the teaching, meetings, lectures and discussions are useless if they are perfectly happy going back to their desk and doing what they've always done. I work with a bunch of people like that. They are actually quite bright individuals, but I hate when I hear, \"I'm done coding, just need to refactor and split into multiple classes to make DXM happy\". They don't refactor for cleaner, readable, maintainable code, but only because otherwise they'll get scolded. I know they are capable of learning; it just seems that there's a general lack of motivation. When I deliver work, it generally has way fewer bugs, and the work I owned never became a 5000-line monstrosity of a class. Others would make comments like, \"your code is much cleaner and readable than our stuff\", so they see the difference. But at the same time, I feel like they believe they get paid for 40 hours regardless of what they do, so they actually don't mind if they spend 3 full days in QA looking for a bug that shouldn't have been introduced in the first place. Or that they take a week to modify one class because there are so many dependencies they end up touching. The thought, \"maybe that class should have been written differently\", never seems to pop up.
Can anything be done in these situations? Has anyone succeeded? Or is it best to isolate such a mindset to non-critical parts of the project and minimize the damage? **NOTE:** When I say \"lack of motivation\", I don't think it's a lack of motivation to work or do a good job because they simply stopped caring. Most of our team is actually quite the opposite. They definitely care about the product. We have guys that will work nights and weekends. The part I'm trying to get through is that with improved habits and skills, they actually wouldn't have to work as much. I guess that \"40 hours\" thing made this post sound a little too negative."} {"_id": "61145", "title": "Merging functionality from two controllers; should I place all actions in one controller?", "text": "Say I have 2 controllers, `OrderController` and `StatusController`. `OrderController` has several CRUD actions for Orders and `StatusController` has several actions pertinent to changing the status of Orders (from created to sent, cancelled, etc.). `StatusController` really only has one view where several actions can be performed, but for the sake of usability I've decided to merge the functionality of this one view with the Index action of `OrderController`. (This is just an adapted example for the question, so bear with me if it sounds silly). Just by copy/pasting the HTML in the views from here to there everything works, of course, but... What should I do with the actions on `StatusController`? Should I move them to `OrderController`, or should I let `StatusController` exist even though it has no views? I think it might be better than having `OrderController` have 15 actions, but is this desirable? Is there a recommended good practice/approach in such a case?"} {"_id": "61144", "title": "Task scheduling for software development", "text": "I'm managing a team of 10 software developers and I'm looking for a tool which can be used to schedule/assign tasks. I envisage a fairly simple web-based tool which each developer signs into. Here they can see a list of tasks assigned to them. For each task it would be clear how long has been assigned (either the number of hours or number of days) and when it should be delivered. Drilling into a task would reveal any associated notes/specification, and each task would be associated with a client and a project. Each developer would 'sign off' a task when complete. An administrative interface would exist for managing users, clients, projects and tasks. Is there a free tool available which provides this basic functionality? Obviously we could write our own in a relatively short amount of time, but I'd be interested to know if there are any (ideally open source) tools already out there that people have experience using."} {"_id": "127112", "title": "Git workflow for multiple teams", "text": "We are going to start using Git (we are not using it yet), and I want to define the workflow. We have 4 teams at 4 different global locations, developing the same product together. Each team owns a part of the product's code, but sometimes they also have to make changes in the code owned by other teams. Is there a recommendation for a Git workflow for such an environment? I have already seen this article, but the approach there is \"we create additional branches as seldom as possible\", and I believe more in the \"branch for each user story\" approach. Also, this article presents a nice approach.
I had in mind having a master branch, a permanent branch for each team periodically merging to master, and per-user-story branches merging to the teams' branches. Does this make sense, or wouldn't it work?"} {"_id": "253475", "title": "An efficient + extendable way to store/reference a normalized set of \"causations\"?", "text": "I'm looking for a good way to define and store references to a normalized set of \"causations\". I have a well-defined set of reasons (short, descriptive strings) why certain objects are created and persisted, and I need an efficient way to store references to them. I also need the flexibility to add new reasons in the future. Oh, and I need to implement it in a way that will please the developers as well as the database guys :) Some context: I'm working with a Rails app and a relational database. There are `events` and `payments` (as well as many other models). When a `payment` is created, it should include a reference to a causation. Similarly, when an `event` is canceled, it should reference a cause for the cancellation. In the case of a canceled event, the current implementation stores the index of the cause, pulled from an array of possible causes defined in the `Event` model. The implicit (and probably unwise) assumption is that the array won't change in a way that would break the reference. Also, there's some overlap with similar \"cause arrays\" in other models. In an attempt to normalize how we store \"causation\" data, I built a `Causation` model and accompanying table. The `causations` table entries form a unique list of causes that can be referenced by id from any other model. The drawback is that whenever we want to define a new `causation`, we have to add it to the database before we can reference it. This is a bit cumbersome, especially when deploying to testing environments, but maybe that's just the price paid for enforced normalization? A coworker suggested moving the definitions into the `Causation` model itself, rather than storing them in the database, and I wonder if that's a good middle ground between the overly-brittle/disorganized original code and my own cumbersome/restrictive solution. Maybe something like:

    module Causation
      module Refund
        UNHAPPY_CUSTOMER = 175
        UNUSED_SUPPLIES = 192
      end

      module Fee
        SHIPPING = 214
        EXTRA_FOOD = 294
      end

      MAPPINGS = {
        Refund::UNHAPPY_CUSTOMER => \"Customer was dissatisfied\",
        Refund::UNUSED_SUPPLIES => \"Extra/unused supplies returned\",
        Fee::SHIPPING => \"Shipping cost\",
        Fee::EXTRA_FOOD => \"Ordered extra food\"
      }
    end

    payment.causation_int = Causation::Refund::UNUSED_SUPPLIES
    payment.save
    ...
    Causation::MAPPINGS[payment.causation_int]
    => \"Extra/unused supplies returned\"

You can see by the absurd numbers that I haven't figured out an elegant way to ensure uniqueness and prevent collisions. Also, you'll notice I need to be able to group causations into categories. Is the namespaced-module approach a better way to go (and is there a way to avoid using \"magic numbers\" like above), or should I stick with my database implementation?
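For comparison, here is a sketch of the definitions-in-code idea using stable string keys instead of integers, written in TypeScript purely for illustration (the cause names mirror the post's examples; none of this is Rails code). String-valued enum members must be unique within an enum, so the compiler rules out collisions, and the key prefix carries the category grouping.

    // Illustrative sketch, not the post's Ruby: stable keys replace magic numbers.
    enum RefundCause {
      UnhappyCustomer = 'refund/unhappy_customer',
      UnusedSupplies = 'refund/unused_supplies',
    }

    enum FeeCause {
      Shipping = 'fee/shipping',
      ExtraFood = 'fee/extra_food',
    }

    type Causation = RefundCause | FeeCause;

    // Record<Causation, string> forces a label for every cause;
    // forgetting one is a compile error.
    const CAUSATION_LABELS: Record<Causation, string> = {
      [RefundCause.UnhappyCustomer]: 'Customer was dissatisfied',
      [RefundCause.UnusedSupplies]: 'Extra/unused supplies returned',
      [FeeCause.Shipping]: 'Shipping cost',
      [FeeCause.ExtraFood]: 'Ordered extra food',
    };

    // The payment row stores the key, e.g. 'refund/unused_supplies',
    // so reordering or adding causes can never break old rows.

Storing the key rather than an array index removes the original brittleness without a database round-trip for every new cause.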
Or maybe there's something better than either of the two?"} {"_id": "253477", "title": "Knockout custom text binding handler", "text": "I created this custom binding handler:

    ko.bindingHandlers.line = {
        update: function (element, valueAccessor, allBindings) {
            // First get the latest data that we're bound to
            var value = valueAccessor();
            // Next, whether or not the supplied model property is observable, get its current value
            var valueUnwrapped = ko.unwrap(value);
            // Now manipulate the DOM element
            if (valueUnwrapped) {
                $(element).after(''); // Make the element invisible
            }
            return ko.bindingHandlers.text.update(element, function () { return valueUnwrapped; });
        }
    };

I'm wondering about the part where I call `ko.bindingHandlers.text.update`; is there a better way to solve this? The question is: is `ko.bindingHandlers.text.update` the only way to update the DOM here, or is there a more straightforward way to do it?"} {"_id": "144723", "title": "If everyone thinks that PHP sucks, why does no one try fixing it?", "text": "I'm a junior web developer in the first steps of my professional career. I'm starting off by maintaining projects written in PHP. Wherever I look, I see people saying that PHP is a bad language and that it's broken in so many ways, but it's so widely used now that we can't just make everyone switch to something else. So, why does no one seem to try fixing it? Is it because of PHP itself? The dev core team? Did people just get fed up and go try something else? Please don't get me wrong. I'm not trying to be rude here, I just want to find some answers. :-)"} {"_id": "215884", "title": "Can I rightfully claim this as my own project if I received help online?", "text": "Basically, I'm new to network programming in Python, so I went through a tutorial online to find out about it. Using what was taught in the tutorial (creating a socket, connecting to ports, etc.), I modified the code so that I made a program where two computers can send messages to one another. If I were to apply for a job and show this to my interviewers, would the code for it technically be mine? It is fair to say that I didn't modify the code by that much; however, what if, for example, I modified it into something like a tic-tac-toe game, where two users play each other from different PCs? Would the code then be mine? I just don't want to look like a plagiarizer, hence why I ask."} {"_id": "222448", "title": "Is there any practical algorithm / data-structure that can't be done with non-recursive Lambda Calculus augmented with foldl?", "text": "In my search for a practical non-Turing-complete programming language, I've been paying attention to lambda calculus with disallowed self-application - that is, `x x` forbidden. After taking that language and augmenting it with lists and the `foldl` and `range` operations, pretty much any algorithm I've tried so far is implementable. It is trivial to implement `filter`, `reverse`, `head`, `tail`, `map`, `scanl`, `zip` and many others - foldl replaces the need for recursion. Can you think of any practical, important algorithm that would be undoable in that language? > It is no coincidence that all of them use self-application - the application of an expression to itself. It is through self-application that repetitive computation can be simulated in the lambda calculus. Indeed, the third of the previous three examples is famous because it can encode recursive function definitions. From http://people.cis.ksu.edu/~schmidt/705s13/Lectures/ch6.pdf ."} {"_id": "120469", "title": "How should developers handle subpar working conditions?", "text": "I have been working in my current job for less than a year and at the beginning didn't have the courage to say anything about the things that bothered me. Now I'm a bit fed up and need things to get better. The first problem is not random, but I'll mention it anyway. We are running out of space, so every new employee gets a smaller table. We are promised that the space problem will be fixed soon. Almost every employee has a different keyboard, mouse, headphones (if any).
Mine are a $10 keyboard, some random cheap mouse and some random crappy headphones with a mic. All of these were used and dirty when I got them. The number of monitors is 1-3, and of different sizes. I have 2 nice monitors and can't complain, but some are given 1 small monitor. When it's their first job, they don't have the guts to ask for 2 even if most others have 2. Nobody seems to care, either. The project manager asked if it was OK. He obviously said he can handle the 1 small one. Then the manager said you can go ask for 1 more. I'm watching this and think: go and ask where? The company is trying to hire more people but is not doing much after the person has signed the contract. We are put in one room that is open to the hallway and it's super noisy. Almost like a zoo at times. Even if nobody is talking, the crappy keyboards make too much noise. * Is this normal? * Am I too negative and should I just do my job with what I was given? * Should I demand better things? * Should the company have some system so that everybody gets things in some price range?"} {"_id": "151920", "title": "How does the Java Virtual Machine execute code written in other languages?", "text": "Since Java 1.6 the JVM can run a myriad of programming languages on top of it instead of just Java. I conceptually understand how Java is run on the Java VM, but not how other languages can run on it as well. To me, it all looks like black magic. Do you have any articles to point me to so I can better understand how this all fits together?"} {"_id": "151924", "title": "What licence should I use for internal projects?", "text": "I need to package an internal project as a Debian package. That project will never, ever be downloaded by anyone outside of our company, but the Debian packaging system insists on there being a `copyright` file. What should I choose for this file? There's a remote possibility that a sysadmin could stumble upon that package and would like to read its licence. What should it then read?"} {"_id": "98321", "title": "How to better start learning programming - with imperative or declarative languages?", "text": "Someone is interested in learning to program. What language paradigm should I recommend to him - imperative or declarative? And what programming language should he start with? I think declarative, because it is closer to math. And I would say that Prolog might be the best start because it is based on logic and programs are short. On the other hand, at school we started learning with imperative languages and I am not sure whether there is a benefit to starting with them instead of declarative ones. Thanks. :)"} {"_id": "98485", "title": "TDD negative experience", "text": "What is the negative side of your TDD experience? Do you find baby steps (the simplest fix to make a test green) annoying and useless? Do you find no-value tests (when a test makes sense initially but in the final implementation checks the same logic as another test) a maintenance burden? etc. The questions above are about things which I am uncomfortable with during my TDD experience. So I am interested in whether other developers have similar feelings and what they think about them. Would be thankful for links to articles describing the negative sides of TDD (Google is full of positive and often fanatical articles)."} {"_id": "166167", "title": "Does a mature agile team require any management?", "text": "After a recent heated debate over Scrum, I realized my problem is that I think of management as a quite unnecessary and redundant activity in a fully agile team.
I believe a mature agile team does not require management or any non-technical decision-making process whatsoever. To my (apparently erring) eyes it is more than obvious that the only one suitable and capable of _managing_ a mature development team is their coach (who is the most technically competent colleague with proper communication skills). I can't imagine how a Scrum master can contribute to such a team. I am having great difficulty realizing and understanding the value of such things in Scrum, and of the _manager_ as someone who is not a veteran developer but is well skilled in planning production cycles, when a coach exists on the team. What does that even mean? How on earth can someone without leading-edge development skills manage a highly technical team? Perhaps management here means something else? I see management as a total waste of time and a by-product of immaturity. In my understanding, a mature team is fully self-managing. Apparently I'm mistaken, since many great people say the contrary, but I can't convince myself."} {"_id": "64469", "title": "WPF4: Is this indictment of DevExpress WPF controls valid and what is a good alternative vendor?", "text": "My company is starting a major greenfield development project using DevExpress WPF controls. I just read this critical review of their WPF controls. Do you agree that DevExpress does not understand the WPF paradigm and will cause our developers grief during development and maintenance? Can you suggest an alternative vendor of WPF controls? I'm looking for a vendor with WPF controls that will enhance our application while fitting well with the WPF API, binding and MVVM. You can read a quote from the blog post here."} {"_id": "221392", "title": "What's the most readable way of echoing from PHP?", "text": "Should I use <?php if (is_logged_in()) { echo '

    <a href=\"#\">Click here to log in</a>
'; } ?>

or

<?php if (is_logged_in()) { ?>
    <a href=\"#\">Click here to login.</a>
<?php } ?>

Why? The first one will not have the HTML tags coloured, so it is less readable in code editors; but with the second one, it seems like it would be bad to take up three lines just to add a curly brace. Which should I be using?"} {"_id": "148559", "title": "Generators in Javascript into the new ECMA standard", "text": "Generators are being introduced in the new version of the ECMA standard. Could anyone explain the importance and the usage of generators in the present JavaScript world? Examples of generators applied to real-world problems would be helpful."} {"_id": "153560", "title": "How to visualize timer functionality in sequence diagram?", "text": "I am developing software for communication with an external device through a serial port. To better understand the new functionality, I am trying to display it in a sequence diagram. The flow of events is as follows. I send the device a command to reset it. This is an asynchronous operation, so there is some delay between request and response (typically 100 ms). There can be a case when the answer never comes (for example, the device is not connected to the specified port or is currently turned off). For this purpose I create a timer with a period of twice the maximum answer time. In my case it is 2 * 125 ms = 250 ms. If the answer comes within the predefined time interval, I destroy the already running timer. If the answer doesn't come within the predefined interval, the timer initiates some action. After this action we can destroy it. How do I effectively model this situation in a sequence diagram? **Addendum 1:** Based on the advice from scarfridge, I drew the following UML diagram. The comment by Ozair is also helpful for simplifying the diagram even more. ![enter image description here](http://i.stack.imgur.com/F4Vnh.png)"} {"_id": "153567", "title": "How to get better at solving Dynamic programming problems", "text": "I recently came across this question: \"You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there are two ways to parenthesize 'true and false xor true' such that it evaluates to true.\" I knew it was a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D where '.' represents any of the operations and, or, xor, and the capital letters represent true or false. Let's say the number of ways for this expression of size K to produce true is N. When a new boolean value E is added to this expression, there are 2 ways to parenthesize the new expression: 1. ((A.B.C.....D).E), i.e. with all possible parenthesizations of A.B.C.....D we add E at the end. 2. (A.B.C.(D.E)), i.e. evaluate D.E first and then find the number of ways this expression of size K can produce true. Suppose T[K] is the number of ways the expression of size K produces true; then T[K] = val1 + val2 + val3, where val1, val2, val3 are calculated as follows. 1) When E is grouped with D: i) it does not change the value of D, or ii) it inverts the value of D. In the first case val1 = T[K] = N (as this reduces to the initial A.B.C....D expression). In the second case, re-evaluate T[K] with the value of D reversed, and that is val1. 2) When E is grouped with the whole expression.
// val2 contains the number of 'true' results E will produce with expressions which gave 'true' among all parenthesized instances of A.B.C.......D: i) if true.E = true then val2 = N; ii) if true.E = false then val2 = 0. // val3 contains the number of 'true' results E will produce with expressions which gave 'false' among all parenthesized instances of A.B.C.......D: iii) if false.E = true then val3 = (2^(K-2) - N) = M, i.e. the number of ways the expression of size K produces a false [2^(K-2) is the number of ways to parenthesize an expression of size K]; iv) if false.E = false then val3 = 0. This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf, the approach there was completely different. Can someone tell me what I am doing wrong, and how can I get better at solving DP so that I can come up with solutions like the one given above myself? Thanks in advance."} {"_id": "64467", "title": "Software Transactional Memory - Composability Example", "text": "One of the major advantages of software transactional memory that always gets mentioned is composability and modularity. Different fragments can be combined to produce larger components. In lock-based programs, this is often not the case. I am looking for a simple example illustrating this with actual code. I'd prefer an example in Clojure, but Haskell is fine too. Bonus points if the example also exhibits some lock-based code which can't be composed easily."} {"_id": "238583", "title": "Javascript/ui framework to compliment WebAPI/TypeScript/ASP.NET MVC?", "text": "We're about to venture into building a brand new SAAS application that needs to have a great-looking and sophisticated front end. The UI is meant to be single-page, built on top of Web API and using TypeScript. The backend is in ASP.NET/MVC. The UI will be heavy on charts/graphs/reports/dashboards as well as data entry of parameters. Dashboards need to be \"live\", but all other data can be refreshed on screens manually; there is no need to live-push it. Goals for the framework revolve around the following criteria: 1) Ability to acceptance-test the UI data layer without Selenium (i.e. JSON only; we don't want to maintain tests at the brittle HTML layer) 2) Ability to work with TypeScript 3) Great tooling with respect to Visual Studio 4) Easy to work with, understand, and catch issues/errors 5) HTML5 and responsive design are very important. Appreciate any advice or follow-up questions"} {"_id": "194172", "title": "Understanding Package Management Systems", "text": "I am attempting to understand what a package management system is. I grasp the main concept of it, but I have some queries. * Does a package management system install the features (compilers, libraries, virtual machines) necessary to run an application AND install the application, or does it just install the features necessary to run an application? * Are package management systems external applications that you run with command-line arguments, or are they API libraries that you integrate into your installer code? To use an example: if my application uses Python 2.7 and wxPython, would a package management system ensure that Python 2.7 and wxPython are installed, and then finally install my application (Python script files)?"} {"_id": "188860", "title": "Why shouldn't a GET request change data on the server?", "text": "All over the internet, I see the following advice: A GET should never change data on the server - use a POST request for that. What is the basis for this idea?
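For context: HTTP defines GET as a safe, idempotent method, and caches, proxies, prefetching browsers and crawlers all rely on that contract, so anything reachable by GET may be fetched repeatedly without the user's intent. A minimal sketch of the split, using Express with TypeScript purely for illustration (the route names and the in-memory store are made up):

    import express from 'express';

    // Tiny in-memory stand-in for a real database (illustrative only).
    const store = new Map<string, unknown>();

    const app = express();
    app.use(express.json());

    // GET only reads: a cache, a prefetcher, or a crawler can hit this
    // any number of times without changing anything on the server.
    app.get('/articles/:id', (req, res) => {
      res.json(store.get(req.params.id) ?? null);
    });

    // The state change goes behind POST: intermediaries never replay it
    // on their own, and browsers warn before re-submitting it.
    app.post('/articles', (req, res) => {
      store.set(req.body.id, req.body);
      res.status(201).json(req.body);
    });

    app.listen(3000);

The classic failure mode is a link-following tool issuing something like GET /delete?id=123 and silently wiping records, which no amount of SQL-injection hygiene prevents.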
If I make a PHP service which inserts data in the database, and pass it parameters in the GET query string, why is that wrong? (I am using prepared statements to take care of SQL injection.) Is a POST request in some way more secure? Or is there some historic reason for this? If so, how valid is this advice today?"} {"_id": "194174", "title": "Diagrams used to model the architecture and functionality of a website", "text": "I am attempting to understand which UML models/diagrams can be used to communicate a website's architecture. The purpose of the model or models is to communicate the architecture and functionality of a website to technical people (other software developers and engineers). Website features: - The website is a recipe search engine - Server-side code is Python. The client side uses HTML, CSS, jQuery and AJAX. - The website will have a web crawler/indexer - Infinite scrolling is utilised when viewing search results, so I will need to model asynchronous requests (both GET and POST). The diagrams I am tending towards are the Component Diagram (to communicate the architecture) and the Sequence Diagram (to communicate the functionality of an HTTP request). What diagrams have you used in the past to communicate the architecture and functionality of a website to technical people?"} {"_id": "198849", "title": "In MVC is it considered good practice to have private, non-action, functions in a controller class?", "text": "Sometimes action functions in the controller class can become huge and nasty, with many, many lines of code simply to control the flow of data from the Model to the View. At some point these huge functions completely lose track of the basic principles of good code, i.e. only doing one thing, being small, readable and manageable, etc. Would it be considered good practice to break these huge action functions into smaller private functions in the controller class, or should the need for such refactoring mean we should rather add them to the model? I would vote for having the smaller functions as private in the controller so that they are relative to the action, but I have heard arguments that the controller should preferably be simple while the model can get huge and clumpy; I was just wondering which would be the most preferred method."} {"_id": "198843", "title": "Should a complex unifying class be doing computation?", "text": "I have a large application in Java filled with independent classes which are unified in a `PlayerCharacter` class. The class is intended to hold a character's data for a game called the Burning Wheel and, as a result, is unusually complex. The data in the class needs to have computations controlled by the UI, but needs to do so in particular ways for each of the objects in the class. There are 19 objects in the class `PlayerCharacter`. I've thought about condensing it down, but I'm pretty sure this wouldn't work anywhere in the application as it stands so far.

    private String name = \"\";
    private String concept = \"\";
    private String race = \"\";
    private int age = -1;
    private ArrayList<Lifepath> lifepaths = new ArrayList<>();
    private StatSet stats = new StatSet();
    private ArrayList<Affiliation> affiliations = new ArrayList<>();
    private ArrayList<Contact> contacts = new ArrayList<>();
    private ArrayList<Reputation> reputations = new ArrayList<>();
    // 10 more lines of declarations

I've been considering this problem for some time now, and have considered multiple approaches. The problem arises primarily when data is deleted - for instance, since pretty much everything else (but not some parts!)
depends upon Lifepaths, when a lifepath is deleted, nearly everything else must be recalculated. However, if a Skill is deleted, only a few things must be recalculated. Additionally, the application must somewhere track certain values (skill points, trait points, etc.) to ensure that the user is not unintentionally exceeding those values. So my question is generally as follows: where should everything go? What makes this easiest? There are a couple of options: * Place point-total calculations in the PlayerCharacter class (but how does this generate a warning?) * Handle all calculation outside of the PlayerCharacter class, and just use PlayerCharacter as a container for all the character's information * Place _all_ calculations in the PlayerCharacter class; after each item is changed, recompute the entire character. Then, if there are issues arising from deletion, throw warnings back at the UI. I'm slightly overwhelmed by the scope of this particular class - whereas everything I've done so far has been easily broken down into small manageable chunks, this beast seems to resist being tamed. If there's a better approach to this, I'm all ears! But as of right now, I'm slightly confused, and progressing aimlessly probably won't get me anywhere. Any advice is appreciated. I apologize if this is unclear - I'm certain I lack the software vocabulary to properly communicate my ideas. I would love to improve this question - any help here is appreciated!"} {"_id": "210104", "title": "Will having ClassA extend ClassB slow down my runtime performance compared to having a ClassC which contains all the members of ClassA and ClassB?", "text": "I have a class with a lot of methods. I would like to group similar methods together in their own class, but _all_ of the methods need to extend another class, _ClassC_. So I was thinking of having _ClassA_, which contains the first group of methods, extend _ClassB_, which extends _ClassC_, etc. Is this inefficient in terms of runtime performance, or are they virtually the same? Note: there will be hundreds of instances of this class running at once, so I would really not want to waste memory."} {"_id": "5751", "title": "Where do you have your Control key?", "text": "In relation to the fab question here: http://programmers.stackexchange.com/questions/2254/what-are-good-keyboards-for-programming Where do you map your Control key and why? ![Control Key](http://i.stack.imgur.com/z0syX.png)"} {"_id": "117201", "title": "How would I access trac on mobile phone?", "text": "As a programmer I live with Trac. How do you go about accessing Trac projects via a mobile phone (Nokia/Android/iPhone/Windows Mobile)? The mobile platform is not important; I can get a new phone, but Trac is there to stay."} {"_id": "198847", "title": "The best way of coding web system in term of performance", "text": "So far I've been using IPB and my custom scripts, all coded in PHP, but I am really disappointed with its long-term performance. I would like to move to native coding; the learning time to put into it doesn't matter at all. I've found CppCMS and it seems to be the exact solution I was searching for. I would like to know from anyone who has chosen this way: what have you done? Is CppCMS the best one in terms of performance? If yes, what web server should I run it with, and with what special configuration? I am really searching for the best way to go in terms of performance (learning time and coding time do not matter)."} {"_id": "58630", "title": "How far should 'var' and the null coalescing operator '??'
be entertained without hampering readability?", "text": "I know the title of the question is very subjective, but I was confronted over the usage of the `??` operator by my peers, while at the same time I was not very happy/comfortable with applying `var` in new, upcoming code. The argument given against the `??` operator was that it takes away readability in code. My question is, doesn't the same thing happen when you start using `var`?"} {"_id": "97369", "title": "What wiki/document system do you recommend for knowledge-management?", "text": "I'm the founder of a website called Now.in. I wrote all the programs myself at first. And we are going to run a start-up company. More people will soon get involved. Some know-how/overviews only exist in my mind, and it takes time to explain them to newcomers. Hence, I would like to write some documents for others to read. Here comes the problem: how to write documents? Which tool/platform should I use? I have some concerns. ## Portability For sure I can just write documents in Google Docs, run a wiki, or even use the wiki of a BitBucket repo directly. But the problem is, what if I would like to use another document system rather than the current one? It would be difficult and costly to convert documents among markdown languages/Google Docs. ## Easy to use Diagrams are important in documents to explain things. Some of the simple wiki/doc systems don't support uploading images. You can upload the image somewhere else and insert the image tag in documents, which is kinda inconvenient to use. Also, it appears some of those markdown languages are difficult to learn and remember. I don't want to spend one month learning something that looks like LaTeX before starting to write a document; that would drive me crazy. Simple and expressive markdown/WYSIWYG would be nice. ## Code expression Google Docs is easy to use and powerful enough. However, sometimes I would like to write code examples in documents. It's good to have syntax highlighting. But Google Docs isn't user-friendly for code writing. Being code-friendly would be a nice-to-have feature, too. ## Integrate with API documents For overview/SOP/know-how documents, it's fine to write them directly in the document system. But for my Python code base, there might be some generated API-level documents from that code. It would be inconvenient to have two document systems, one for the API and one for overviews. So, I think it would be nice to have them integrated together. ## So.... What to use? Maybe I have some more concerns in my mind, but I can't recall them at this moment. I would like to know what kind of document system you are using. Could you recommend some? It's fine even if it takes some fee to use the platform/system; we can afford it if it is not too expensive."} {"_id": "105055", "title": "What is a good practice for reading culture (language) info for users of a web app?", "text": "I am trying to get an idea of what would be a better practice / implementation for the users of an application that I am going to localize. Which would be better and why? * Read the culture info (preferred language) from the browser. or * Have a language selection option in the application, commonly implemented using flags (icons) or a drop-down. Or suggest a better idea."} {"_id": "193740", "title": "Tricky compareTo, inheritance, easy to extend - Java", "text": "Let's say we have 4 classes A, B, C, D where: A is a superclass of both B and C, and C is a superclass of D.
I suppose, in a diagram, it should look like this:

      A
     / \
    B   C
         \
          D

Moreover, the class A implements the interface Comparable and also implements a method public int compareTo(A another). I'd like to be able to compare every pair of objects x, y such that they are instances of classes from the set {A, B, C, D}. Naturally, I should implement a compareTo method in each of these classes, but by default I'm doing this with a one-sided effect, i.e. defining how to compare an object c from C with b from B doesn't tell anything about how to compare b with c. Now, I could write a lot of code and handle all of those cases (this also means I would have to use the 'isInstance' method to handle these, right?), but then every two classes have to be 'aware of' the existence of each other. So if I wanted to add another class, E, such that D is a superclass of E, I would have to handle all the cases in the new class E, and moreover I'd have to change something in classes A, B, C, D in order for them to 'be aware of E'. So I'm struggling to find an elegant solution, i.e. one that doesn't entail changing a lot while adding a new class to the hierarchy. How can I achieve this? Thank you in advance for help."} {"_id": "97362", "title": "An application to get statistical data about HTTP Request/Response", "text": "I'm looking for an application which takes the address of a website, browses that website, and provides reports based on some user-defined criteria about the HTTP requests/responses. For example, this application should report how many images don't have a Cache-Control response header, or how many items are more than 25KB in size (including images, scripts, styles, etc.). I know about PageSpeed, WebPageTest, Firebug, YSlow and plugins like that. But they don't have customized reports, and you have to manually go through each response to check something. Also, they are not repeatable. I'm looking for an application in which you enter the parameters once, and then you can test a website many times during its development. Does such an application exist? Does anyone have any reference?"} {"_id": "97363", "title": "PHP programming in iPad", "text": "I want to develop some PHP applications on my iPad device, but there are 2 problems! First, I cannot find any local server applications on the iPad like WAMP or LAMP, and second, my Googling only returns cloud-based IDEs for this! Any ideas?"} {"_id": "191674", "title": "What is the meaning of \"inversion\" in the Dependency Inversion design principle?", "text": "I'm reading about design patterns. I know what this principle does: high-level and low-level classes depend on abstractions. But why do we say this is _inversion_?"} {"_id": "107566", "title": "Computer Science and other advance topics taught in javascript", "text": "I'm an 'okay' JavaScript developer working on browser JavaScript and Node.js. However, because I'm self-taught, starting from copying and pasting jQuery, there are a lot of holes in my programming/CS education. I don't really have the time to devote myself to fully learning all the CS concepts that might be taught in a real college program, and learning C or Scala or whatnot; I'm just too addicted to hacking and making things that work. On some level, studying CS seems to be 'premature scaling' in Hacker News parlance. But every so often I get struck by the thought that maybe there is something that I should really, really learn to take my skills and productivity to the next level.
Is there a book, or a list of concepts or things that I can study and learn in my 'down time', that would see reasonable returns on my productivity and ability in 1-2 months? Thanks! And yes, the only language I _know_ and use is JavaScript, though I have tinkered with Python (high school), Ruby (RoR), PHP (Drupal), and Assembly (making Diablo 2 mods) in the past. Please note that a book like O'Reilly's JavaScript Web Applications is too easy for me and not really what I'm talking about."} {"_id": "102588", "title": "Is there a standard pseudocode for parallel algorithms?", "text": "The common styles of pseudocode are largely intelligible, and it is more or less clear how to write pseudocode for sequential programs. But if parallelism is not hidden behind a full library and is regarded as a regular part of programming, then it should be treated the same way with regard to pseudocode. Is there a consistent and widely used style of pseudocode for parallel algorithms? Are there good, practical examples of it?"} {"_id": "107563", "title": "Should we embed virtual machines rather than languages", "text": "Ok, so I am trying to learn front-end programming. Trying to figure out JavaScript is a pain, and various projects (GWT, CoffeeScript, Cappuccino's Objective-J...) are trying to fix this with languages that compile to JS. What would you do if you were to reinvent the browser? I think I would embed a VM rather than a language (quite like Java applets, except less complex and with a DOM API instead of a canvas and self-contained environment). I suppose this has been suggested many times, but I cannot find resources on this, and most of the embedding tutorials I stumble on address embedding languages (Guile, Lua, Python, whatever). Thoughts?"} {"_id": "36875", "title": "Conditions for a traditional friends system vs. open following system", "text": "I'm just curious, for everyone who is developing social sites out there: when you build a method for connecting users, do you prefer to use a following-style system (follow me, you can see all of my information and I can just choose to follow you back), or do you instead choose to have a friends-style system (I have to allow you to see all of my information on your homepage, even if it is open to the public, and vice versa)? Why, and under what circumstances, do you use each? How do you manage privacy between your users? Have you used another way to connect your users? Examples of what methods you've chosen and how you manage the users' privacy (private by default vs open to the web) are awesome; they could show correlation and provide an actual look."} {"_id": "230511", "title": "Keeping Backbone model in sync with editable view", "text": "I'm making a web form for editing some objects. I'll call these objects Foos. I have a Backbone model that represents Foos. I have a Backbone view that renders an editor form, filling in fields based on the state of the Foo model it is given. My goal is to keep the Foo model in sync with the form fields. Idea 1: I could wire up `onkeyup` or `onchange` listeners on all of the input boxes. Whenever the user changes a field, the model would be immediately updated. Manually adding an event listener to each input box seems excessive, though. Idea 2: I could add an `updateModel` method to my view. Calling this method would inspect the form fields and bring the model up to date. The problem with this is that if you forget to call `updateModel`, you will be working with a stale model.
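Note that Idea 1 does not actually require one listener per input: Backbone views delegate their events hash, so a single handler can cover every field. A minimal sketch in TypeScript (the view name is made up, and it assumes each field's name attribute matches a model attribute):

    import Backbone from 'backbone';

    class FooEditorView extends Backbone.View<Backbone.Model> {
      events() {
        // One delegated handler covers all current and future fields.
        return { 'change input, change select, change textarea': 'syncField' };
      }
      syncField(e: { target: EventTarget }): void {
        const field = e.target as HTMLInputElement;
        // Assumes the field name matches the model attribute name.
        this.model.set(field.name, field.value);
      }
    }

One caveat: change fires only when a field loses focus; listening for input events instead gives keystroke-level sync.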
What are some best practices for syncing a Backbone model with its user-editable view?"} {"_id": "102580", "title": "How do I improve my skills working in a small shop?", "text": "I'm a self-taught programmer and I'm working for a company that doesn't deal directly with building software (a glass manufacturer with a small IT team). I'm the only .NET developer there, although there are 3 SAP programmers with a bit of Visual Basic 6 background. Now, I'm only a medium-level developer myself and I want to improve my skills, but it's rather hard when you are the \"one-eyed king\" in the proverbial land of the blind. Without leaving the employer, what can I do to improve my skills in such a sub-par learning environment?"} {"_id": "199462", "title": "How does one start mocking up a mobile app?", "text": "I have this idea for the \"best app in the world\". I know what it's going to do, the need it's going to fill, a high- to mid-level idea of the code, and who I'm going to target. Now I'm ready to begin: I grab my pen, paper, thinking cap and can-do attitude, and set the clock to GO-TIME. This app is gonna be awesome!... 2 hours later, all I have on my paper is a big rectangle... One would think that the mockup would be the _easy_ part, but drawing a \"simple & sleek\" design is harder than it sounds. Are there any techniques/advice that you can give on mocking up an app? Any references or rule-of-thumb guidelines you can provide would also be greatly appreciated. Thanks in advance."} {"_id": "102586", "title": "How can I influence my team to become more agile", "text": "I consider myself an agile developer. I have set up CI in the last three teams I have worked with, and in my previous role worked in a style which revolved around writing failing tests before fixing bugs, extremely short feedback cycles (typically starting UAT of new features within a couple of days), and a very short release cycle. Now I find myself the only person to have written a C# unit test in the last 6 months, battling inertia to improve code quality, and supporting a system in which new features are just released with the intention of fixing them later. What tricks can I employ to try and stabilise things? I have tried setting up CI, automated UAT deployment from a separate branch in source control, and publicising local free talks on software development, but with little success."} {"_id": "191679", "title": "Stroustrup and the C++ complexity admission", "text": "I heard from a friend that Bjarne Stroustrup admitted that he doesn't entirely know the C++ programming language due to its vast complexity. Is it true, and are there any referable sources, or is it just an exaggeration? The statement should be present on his website; he said it in an interview/conference some time ago"} {"_id": "107569", "title": "Why C++ cannot adopt D's approach for its concept implementation?", "text": "As many of you guys know, _concepts_, C++'s approach for constraining the possible types for a template argument, failed to be included in C++11. I learned that the D programming language 2.0 has a similar feature for its generic programming. Its solution seems to me quite elegant & simple. So my question is: _why can't C++ use a similar approach_? * The C++ concepts goal might be bigger than what D's implementation provides? * Or does C++'s legacy prevent it from adopting such an approach? * Or is it something else? Thanks for your answers. p.s.
To see some examples of D's generic programming power, please refer to this: http://stackoverflow.com/questions/7300298/metaprogramming-in-c-and-in-d/7303534#7303534"} {"_id": "15636", "title": "What should be in a coding standard?", "text": "What should be in a good (read: useful) coding standard? * Things the code should have. * Things the code shouldn't have. * Should the coding standard include definitions of things the language, compiler, or code formatter enforces? * What about metrics like cyclomatic complexity, lines per file, etc.?"} {"_id": "198594", "title": "What should I do in AJAX or PHP?", "text": "Since I discovered the joys of AJAX, I tend to do all my requests to the server using AJAX. Is this a good idea? In your opinion, what should I do in PHP and what should I do in AJAX? I like to do my requests to the server with AJAX because I can use it as a kind of web service."} {"_id": "198595", "title": "Web applications have \"the todo list.\" What analogous program is there for systems programming?", "text": "You can find many frameworks with an example todo list for demonstrating a small but full application in the framework. You don't have to consider large problems like scaling or caching, but you still exercise most of the fundamentals of that framework in a todo list. Is there an analogous application for systems-level programming?"} {"_id": "221935", "title": "Should I pass an object or values?", "text": "Let's say I have a model that has a 'purchase' method. The purchase method should take care of purchasing a product. Signature of purchase: public function purchase($token, Model_Member $member, Model_Product $product, Model_Recipient $recipient); To call this method, I need to make $member, $product, and $recipient in my controller. Is it considered a better approach to pass an array of values to **purchase** so that purchase can make those required objects, like below? public function purchase($token, $member_id, $product_id, array $recipient); In this case, **purchase** should pull up a member record from the database and make a model of it. What's the better approach?"} {"_id": "84661", "title": "Git workflow - advise a newbie", "text": "I'm starting to work with git for the first time, and I'm trying to come up with a workflow that works for me, so I thought of coming and asking around. Right now, I'm in a couple of projects where I'm the only programmer and, in fact, the only pusher to origin. I'm working like this, where `c` is a commit, `p` is a push and `m` is a merge:

             /feature2-c-c-c-c-m-c-c-c-c
            /                 /         \
    master-----------------------m-p------m-p
            \                 /        \
             \-feature1-c-c-c-c-c-c-c-c-c-c-m-c-c

Now I've gathered that rebasing would be more \"correct\" than those `merge master` merges I do in the feature branches, or at least that's how it'd seem.. but I'm not sure I'm doing it right. What I've realized now is that by merging master into my other branches, I mess with my branch's history, and add all the unrelated features' commits to it. Maybe I should branch more, by the subtask, adding a third level like this:

    ....................\
    master---------------------------m-p--m-p-....
            \                    /
             \-feature1------------m-----------m.....
                      \           /      \
                       \-feature11-c-c-c   feature12-c-c-c..

This leaves unaddressed the fact that sometimes a feature is bigger than what a branch should be. These are my thoughts on the matter so far, so I'm very open to suggestions on what's the best git workflow in one- or two-person teams.
I hope the diagrams are easy to follow."} {"_id": "221936", "title": "What is the difference between 'code readability' and 'language conventions' used within a community?", "text": "When I'm looking at questions asked on sites like Stack Overflow on how to make a particular piece of code "more pythonic", there are usually suggestions offered to use complex list comprehensions or generators or other Python features that aren't available in, say, Java. I am a casual Python programmer, and sometimes I can't follow what's going on without carefully unwrapping the logic in my head. Sometimes the list comprehensions are nested in ways that leave me thinking "you probably should just write it out using loops". Of course, veteran Python users probably won't have an issue with this, but I'm increasingly getting the impression that "pythonic code" is preferred over writing code that would be readable for the "average programmer" who may not be very proficient in Python. In general, what are some guidelines and heuristics to follow when choosing between the readability of the code and following the conventions used in a specific programming language community?"} {"_id": "105580", "title": "What is the best \"bucket-fill\" algorithm?", "text": "I'm pretty new to image processing, and I am currently working on a paint-like application that will feature a bucket-fill. However, I have no idea what the best algorithm for a bucket-fill is. I implemented an example I found on this site; however, it ran into infinite-loop problems when a user tried to bucket-fill an area that had already been bucket-filled with the same color. I'm currently working around that problem by filling left, right, up and then down; however, I made it so that once a pixel has been filled in to the left, it cannot fill to the right, which means shapes such as: ![Example](http://i.stack.imgur.com/bDHrE.png) will not be filled properly if the bucket tool is used at the red dot. Therefore, I am hoping someone knows of an algorithm or a link to one that will resolve all these issues. **Additional Information:** This will be implemented using JavaScript as the paint tool. It will be used online utilizing the Canvas element."} {"_id": "82377", "title": "Should Properties have side effects", "text": "Should properties in C# have side effects besides notifying of a change in their state? I have seen properties used in several different ways, from properties that will load the value the first time they are accessed to properties that have massive side effects like causing a redirect to a different page."} {"_id": "147260", "title": "Team work and agile development", "text": "I think this question relates not only to agile but to teamwork in general. When we are working in a team and each member is working on a user story to complete, how do we avoid the creation of duplicate classes and conflicts? I mean, if my user story requires the creation of class A, and my team member needs the same class and creates it as well (he may create it with a slightly different name), how should we plan so that we move smoothly?"} {"_id": "197374", "title": "What is the time complexity of the algorithm to check if a number is prime?", "text": "What is the time complexity of the algorithm to check if a number is prime?
This is the algorithm: bool isPrime(int number){ if(number < 2) return false; if(number == 2) return true; if(number % 2 == 0) return false; for(int i=3; (i*i)<=number; i+=2){ if(number % i == 0) return false; } return true; }"} {"_id": "107695", "title": "Is there a specific design strategy that can be applied to solve most chicken-and-egg problems while using immutable objects?", "text": "Coming from an OOP background (Java), I'm learning Scala on my own. While I can readily see the advantages of using immutable objects individually, I'm having a hard time seeing how one can design a whole application like that. I'll give an example: Say I have objects that represent "materials" and their properties (I'm designing a game, so I actually really have that problem), like water and ice. I would have a "manager" that owns all such material instances. One property would be the freezing and melting point, and what the material freezes or melts to. [EDIT] All material instances are "singletons", kind of like a Java enum. I want "water" to say it freezes to "ice" at 0C, and "ice" to say it melts to "water" at 1C. But if water and ice are immutable, they cannot get a reference to each other as constructor parameters, because one of them has to be created first, and that one could not get a reference to the not-yet-existing other as a constructor parameter. I could solve this by giving them both a reference to the manager so that they can query it to find the other material instance they need every time they are asked for their freezing/melting properties. But then I get the same problem between the manager and the materials: they need a reference to each other, but it can only be provided in the constructor for one of them, so either the manager or the material cannot be immutable. Is there just no way around this problem, or do I need to use "functional" programming techniques, or some other pattern to solve it?"} {"_id": "213675", "title": "Control Flow of php", "text": "I have a PHP script, example1.php. What would the control flow be when I call another PHP script, example2.php, from example1.php? For example, example1.php looks like this .... ... .... example2.php (calling example2.php from example1.php) ..... ..... How does the control flow work for this? Does example1.php wait until example2.php completes execution and then continue with the rest of its code logic, or does it just continue, allowing example2.php to run independently?"} {"_id": "49354", "title": "Why is Windows registry needed?", "text": "As someone who has debugged problems in COM and side-by-side assemblies, and dealt with DLL hell, all while hating the Windows registry with a passion, I was wondering why it is needed. I never felt compelled to read an entire book on registry best practices, and then just "get it". I have, however, used Linux and Mac OS, and looked at the ways one can install multiple versions of Python and its libraries on the same *nix computer. Because the registry has somewhat of a free (albeit ugly) format, and is used for all sorts of purposes, I have never understood what essential problem it is trying to solve. For instance, Microsoft does not want you to have two different versions of MS Office installed side by side. They use the registry to enforce this during installation. This limitation is artificial, in my opinion. If they really cared to allow a different behavior, they could have adjusted their architecture accordingly. In Mac OS you can install and remove apps by just dropping them into a particular folder.
So: A) What essential problem is it trying to solve? B) How do other operating systems solve it?"} {"_id": "98813", "title": "Is it good to start with a simple subset of requirements and then extend the program?", "text": "When I am assigned a program, I usually build it block by block. For example, I was required to write a program which enables FTP transfer of files, and which also allows queueing of transfers and multi-threaded transfer instead of single-threaded transfer. What I would do is: 1) Try to make the program able to transfer 1 file 2) Try to make the transfers into a queue 3) Try to implement multi-threading Would you do this instead: 1) Read through all the material/tutorials for all 3 requirements 2) Try to assemble them into a general picture 3) Write the program all at once. What do you think?"} {"_id": "197488", "title": "Statistics collection engine for C++ systems", "text": "We have a research project with an idea->prototype->statistics development cycle. Anyway, our final product is a prototype, so the statistics collection suite is not used persistently. Suppose I have the following class: class Transform { private: SomeData someData; public: void transformForward(const Block& inp_block, Block& outp_block); void transformBackward(const Block& inp_block, Block& outp_block); }; Imagine that it is a part of a big system. I need to periodically collect some statistics on such a transform (internal data could be considered as well). It seems that adding statistical routines would be a violation of the single-responsibility principle. The second smell is that I do not want to touch code that uses `Transform` instances to explicitly invoke these routines. It would be best if I could just trigger some kind of switch so that the statistics for that module will be collected. I've met that challenge a number of times and I have a feeling that I'm constantly reinventing the wheel. Are there some good practices for configuring and collecting the statistics suite for a compound system without interfering with its internal code base? ## UPDATE: As I can see from the answers proposed, my question is too non-specific, so I'll provide a more concrete example. Consider an image-compression system composed of two _huge_ blocks: `Predictor` and `Encoder`. There are a lot of various prediction and compression algorithms; during our research we need to explore the behavior of the components under various conditions. We should answer questions like "how many times is the pixel processed within each context", "how well does this predictor work", "how does each predictor affect the `Encoder`'s internal state" and many others. Anyway, our final product is just a _Codec_ with no statistical suite shipped with it; all kinds of statistics collection are used internally during our research. Thus the question arises: how could one build a flexible statistics engine that knows the very internals of the system? How could one keep the system itself independent of the statistics engine?"} {"_id": "98810", "title": "How to organise Website CSS", "text": "I understand the concept of CSS. But on many projects I've found that I tend to lose myself and end up with a million CSS files for a million different pages! I realise that the point of CSS is that it cascades. Otherwise it would just be a style sheet! I would just like people to shed some light on how they use CSS to its full potential! On my current website I've got a MasterPage and one CSS file for that. Then for all the sub-pages I tend to write a separate CSS file for every 5 pages or so.
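So a given page ends up pulling in something like this (a simplified illustration of my setup, not the real file names): <link rel="stylesheet" href="css/master.css" /> <link rel="stylesheet" href="css/products.css" /> where master.css holds the shared layout for the MasterPage and products.css covers just that group of sub-pages.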
I don't like my CSS files to be HUGE because then I just get confused. How do you do it? I find it hard to comprehend that some people use one CSS file for the WHOLE website. Or is that the done thing?"} {"_id": "186176", "title": "How to create high quality code producing teams? (as a group leader)", "text": "I have a clear agenda on how to code correctly. I used to be a team leader, and by using a significant amount of mentoring I managed to create a team that produces high-quality code. Now I have 3 teams under me, each one with its own team leader. My time is mostly spent on strategic meetings and I can't devote enough of it to mentoring my teams. Once I do get to see code or design, I usually find flaws in it, but I get to see little of that, and I am afraid that all the code I cannot see is designed or written poorly. How can I make sure my teams produce great code? Note: I am not in the position to fire anyone."} {"_id": "197482", "title": "When are chained assignments (i.e. a=b=c) bad form?", "text": "I'm working on a VB.Net WinForms project and found myself writing code like this: this.Fizz.Enabled = this.Buzz.Enabled = someCondition; I couldn't decide whether that was bad code or not. Are there any .NET guidelines for when/when-not to do assignment chaining?"} {"_id": "222035", "title": "Frontend vs Backend data handling", "text": "Firstly, I would like an answer from an experienced person (not a one-man band), someone who has worked in a medium/large team and had to battle with this burning question before. **The problem**: I am constantly being asked not to submit **empty** rows (by rows I mean a list of items and properties). A good example would be multiple file uploads.
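To make "empty rows" concrete, the form might post something like this (field names made up purely for illustration): files[0][title]=Holiday&files[0][file]=IMG_1.jpg&files[1][title]=&files[1][file]=&files[2][title]=&files[2][file]= where the second and third rows carry no data at all.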
**The requirement states**: > * As a user I should be presented with 3 (three) upload fields. > * For each field there should be a corresponding title/name/label. > * There should be an "Add more" button, to allow me (the user) to add more images. > The reason for the requirement above is user experience. (Fewer clicks are required.) **The Backend argument**: Submitting empty rows (to the server) is "error prone" or is seen as a "validation corner". **The Front-end argument**: * The backend should not dictate what the FE can/cannot do. * JavaScript is very volatile and should not manage/validate whether empty rows should be sent or not. **The question**: if the user: * Enters 1 (one) title/label * Selects 1 (one) image from his/her local PC * Clicks Upload * Leaves the last 2 (two) upload fields empty. So, in your opinion/experience with this kind of problem: should the front end send the empty rows of data to the server or not?"} {"_id": "20708", "title": "What groupware/project-management apps (preferably self-hosted webapp) do you recommend for a small dev shop?", "text": "I run a small Drupal consulting shop and we've been trying different groupware solutions for what seems like ages, yet nothing we've found seems to be a good fit. * We don't need CRM-overkill such as SugarCRM offers -- it's just too much for our small size. * We do need git integration (at a minimum, an easy way to associate commits with issues) * Time tracking in configurable or 15-minute increments * Per-project issue tracking * Billing (incl. recurring billing for support contracts, etc) * Some sort of per-project notes/wiki for things like login credentials, client contact info, etc. * Contact logging (Client foo called at 2:20pm and asked to add bar to the spec, signed addendum with pricing due to client NLT CoB today, to be returned by CoB tomorrow) * Open source solutions are greatly preferred to closed ones * **Most of all, it should be very efficient to use.** Several solutions just fell out of use here because they required too many clicks for simple, frequent tasks like logging time spent on an issue or noting a call from a client. It shouldn't take 20 minutes to make a note. **Edit:** I almost forgot to mention: we're a mixed Linux/Mac shop with no Windows users."} {"_id": "71870", "title": "Could developers learn anything from studying malware?", "text": "Malware uses interesting techniques to hide itself from anti-malware software, and more. It can "polymorph" itself: practically change its code while it continues to mean pretty much the same thing to the executing machine, making antivirus definitions invalid, etc. I wonder if there is anything (non-malicious) developers could learn from studying the source of such programs, or from reversing them and studying whatever you get from that process if the source isn't available, that could be useful outside this (dark?) realm. **I am _not_ interested in writing malware.** (At least not for non-educational purposes.) This question isn't meant to be a question about how to write malware or such, but about what you could learn from already-written malware.
Also, maybe a bit unethical (I hope not): would there be any gains from writing your own piece of malware, just for a better understanding of vulnerabilities/exploits/security, or the underlying operating system?"} {"_id": "71872", "title": "Comparing features in an ASP.NET web application using different database methodologies", "text": "I have a webstore which sells components (it is an academic project) which looks like this. I have developed the same web application using the following database methodologies: 1. MS SQL Server with stored procedures and a SQL data reader 2. LINQ to SQL 3. DB4o using LINQ (Client/Server) What features can I compare, apart from the technical and theoretical details, between a relational database and an object-oriented database? It is my graduate/master's thesis final project. I want the features that I compare to be practical and interesting, so that I can draw some concrete and meaningful conclusions rather than abstract comparisons which don't create much interest and are hard to draw inferences from. Please help me."} {"_id": "198331", "title": "Private Repository Management for Potential Employers", "text": "I have some projects that I do not wish to be viewed publicly (some of them are school course work), but I want to show them to my potential employers. Ideally, I would be able to generate some bit.ly-like links that expire after a certain period. I am guessing I can do this with Dropbox's public link feature, where I put the link to the code on the resume and then delete the folder after some time via `crontab`, but I am wondering if there are better options. I do have accounts on GitHub and Bitbucket."} {"_id": "71878", "title": "Tips for a solo programmer's resume", "text": "**Short version:** I've been a solo programmer for 5 years, right out of school, and am looking for resume tips as I aim to get a job with a larger, well-structured company. **Long version:** I am a 26-year-old who graduated in CS in 2007. I had an internship with a small company starting in 2006, and still work for them to this day. I'm looking to work for a larger company for reasons beyond the scope of this post. During this time I was the only programmer, and designed/coded/maintained four .NET applications for various industries. Those four projects kept me very busy. My applications have happy customers; they work well and look nice. I realize that being a solo developer has given me some strengths and some weaknesses compared to my peers. * My strengths are that I can self-educate well, and I've suffered the consequences of mistakes firsthand, so I can give VERY good reasons why I do things the way I do. * My weaknesses are that I've lacked a mentor, and have missed the opportunities to learn from one. I also lack teamwork experience in programming. I worked closely with other technical people, and had my own SVN/TFS servers, but never worked closely with other programmers, delegated orders, or really had a specific role other than "do everything." I realize that teamwork is a crucial component of a mature developer. So now, some questions about a resume and/or interview: 1. Should I be up front about being on a lonely island of programming for so long? 2. When listing my projects in a resume, since I was a solo dev, should I describe the projects in great detail and say that I did everything, rather than say "I did xxx, yyy, zzz"? 3. My current employer has web pages with videos/screenshots of my software in action; should I link to them? 4.
Do you think my strengths make up for my weaknesses and leave me > 0? 5. Any other tips?"} {"_id": "222693", "title": "When is it okay to reassign the model for a view in MVC?", "text": "Is it ever really acceptable to reassign the model for a view in MVC? (Or MV* where applicable.) In other words, for a single view instance, is it ever OK to reassign the view's model? That is, as opposed to using setters (mutators) on the model so that it does not have to be reassigned. Under what conditions (if any) is this acceptable MVC design? Here is an example in JavaScript, but the question is language-agnostic. var Model = function (m) { var message = m; return { getMessage: function () { return message; }, setMessage: function (m) { message = m; } }; }; var View = function (m) { var model = m; return { render: function () { alert(model.getMessage()); }, setModel: function (m) { model = m; // hmm, is this okay? } }; }; var model1 = new Model('hello world'); var view = new View(model1); var model2 = new Model('hi there'); view.render(); view.setModel(model2); // reassign model view.render(); model2.setMessage('hi again'); // mutate model via setter view.render(); In the line `view.setModel(model2);`, the model is reassigned (the approach in question). In the line `model2.setMessage('hi again');`, the model is instead mutated via its own setter, which in my view seems generally preferred. One of the main drawbacks I can see to model reassignment is that it could complicate attempts to reuse the model elsewhere in the application. In some MVC applications a single shared model could be used to drive multiple views (e.g. list, editor, and display views of the same data), in which case model reassignment would be problematic -- it would at least require the added complexity of communicating the model reassignment to dependent views. Here I think the DRY principle trumps the no-mutators principle. (Open to argument here.) Can an even stronger case be made? To recap, under what circumstances might model assignment be acceptable, if any? Perhaps in certain languages, under extreme performance constraints...? Is _"do not reassign models"_ a good rule of thumb?"} {"_id": "168258", "title": "What is the wireframe of a project?", "text": "What is the actual meaning of a wireframe of a project?"} {"_id": "168259", "title": "How can I design my classes to include calendar events stored in a database?", "text": "I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment, I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations.
class Calendar { private $month; private $monthName; private $year; private $calendarCellList = array(); private $translator; public function __construct($month, $year, $translator) {} public function getCalendarCellList() {} public function getMonth() {} public function getMonthName() {} public function getNextMonth() {} public function getNextYear() {} public function getPreviousMonth() {} public function getPreviousYear() {} public function getYear() {} private function calculateDaysPreviousMonth() {} private function calculateNumericDayOfTheFirstDayOfTheWeek() {} private function isCurrentDay(\\DateTime $dateTime) {} private function isDifferentMonth(\\DateTime $dateTime) {} } class CalendarCell { private $day; private $month; private $dayNameAbbreviation; private $numericDayOfTheWeek; private $isCurrentDay; private $isDifferentMonth; private $translator; public function __construct(array $parameters) {} public function getDay() {} public function getMonth() {} public function getDayNameAbbreviation() {} public function isCurrentDay() {} public function isDifferentMonth() {} } Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a db (because I would have to inject at least a repository service as well) just to create and visualize a calendar... so maybe it could be better to extend CalendarCell (for instance into a CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very much appreciated!"} {"_id": "108768", "title": "What should take precedence: YAGNI or Good Design?", "text": "At which point should YAGNI take precedence over good coding practices, and vice versa? I'm working on a project at work and want to slowly introduce good code standards to my co-workers (currently there are none and everything is just kind of hacked together without rhyme or reason), but after creating a series of classes (we don't do TDD, or sadly any kind of unit testing at all) I took a step back and thought it's violating YAGNI, because I pretty much know with certainty that we will never need to extend some of these classes. Here's a concrete example of what I mean: I have a data access layer wrapping a set of stored procedures, which uses a rudimentary Repository-style pattern with basic CRUD functions. Since there are a handful of methods that all my repository classes need, I created a generic interface for my repositories, called `IRepository`. However, I then created a "marker" interface (i.e. an interface that doesn't add any new functionality) for each type of repository (e.g. `ICustomerRepository`) and the concrete class implements that.
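In other words, something roughly like this (assuming a generic IRepository<T>; the exact shape is just for illustration): public interface ICustomerRepository : IRepository<Customer> { /* no new members; this exists only to give the customer repository its own named type */ }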
I've done the same thing with a Factory implementation to build the business objects from DataReaders/DataSets returned by the stored procedure; the signature of my repository class tends to look something like this: public class CustomerRepository : ICustomerRepository { ICustomerFactory factory = null; public CustomerRepository() : this(new CustomerFactory()) { } public CustomerRepository(ICustomerFactory factory) { this.factory = factory; } public Customer Find(int customerID) { // data access stuff here return factory.Build(ds.Tables[0].Rows[0]); } } My concern here is that I'm violating YAGNI, because I know with 99% certainty that there is never going to be a reason to give anything other than a concrete `CustomerFactory` to this repository; since we don't have unit tests I don't need a `MockCustomerFactory` or similar things, and having so many interfaces might confuse my co-workers. On the other hand, using a concrete implementation of the factory seems like a design smell. Is there a good way to come to a compromise between proper software design and not overarchitecting the solution? I'm questioning whether I need to have all of the "single-implementation interfaces", or whether I could sacrifice a bit of good design and just have, for example, the base interface and then the single concrete class, and not worry about programming to the interface if that implementation is the only one that will ever be used."} {"_id": "197730", "title": "Effective team meetings", "text": "I'm a team leader of a team of 8 programmers in a company of about 20 technical people. They're working on a range of projects; these projects also involve people from other teams that are outside of my control. My organisation is not doing proper agile development, and they're somewhat resistant to change, but I've been holding daily stand-up meetings within my team, and we've all been finding them useful: everyone is engaged and we're done within 10-15 minutes. I also have weekly individual catch-ups with every team member where we discuss various general topics (both technical and non-technical) in more detail, as well as various ad-hoc topical meetings. What I've been struggling with, however, is my weekly team meeting. It is losing steam and I have not been able to keep people interested. I still want to hold a longer meeting, even if it has to be fortnightly or monthly. The purpose was to discuss various topics that cannot be covered during a stand-up meeting because they require more time. Updates from me include a summary of every current project that they're working on (whether it is on schedule, various delays, etc), any changes in direction, future projects, changes to the development process, etc. However, it ends up being a lecture from me, and at least 2 people are obviously zoned out while the rest are at most mildly interested. I tried getting people to be more engaged by getting them to talk about their week, but with 8 people it takes a long time, and (partly because a lot of their work does not cross over all that much) most of the rest of the team does not care what their co-workers have been working on in more detail (they get a high-level overview during stand-ups). So, during these meetings, at least some people are very bored, and it is almost embarrassing for me to keep holding them. It is a stark contrast to our energised morning stand-up meetings. Any advice on what I can do to keep people more engaged and more interested?
And how can I get them to present things on their own, or start discussions that involve everyone, instead of it being a monologue from me?"} {"_id": "197737", "title": "Notation for the average time complexity of an algorithm", "text": "What notation do you use for the **average** time complexity of an algorithm? It occurs to me that the proper way would be to use big-theta to refer to a set of results (even when a specific try may differ). For example, average array search would be Θ(n+1)/2. Is this right? Explanation of the average search time being Θ(n+1)/2: Given that the sum of the access times for the first n integers is n(n+1)/2, the average time for a large number of random accesses to the array will be (n(n+1)/2)/n = (n+1)/2. In this case I would say that Θ(n+1)/2 will be the average access time. It makes sense (to me), but since I'm new to asymptotic notation (and programming in general), I'd like to know if this is common practice. I'm asking because I'm confused to see big-O being used everywhere on pages like http://bigocheatsheet.com, e.g. the average case of array search given as O(n)."} {"_id": "190108", "title": "How can a server initiate request in SIP protocol?", "text": "Either a client or a server can initiate a request in SIP. How is that possible? How will the server know about the client?"} {"_id": "58248", "title": "What is the value of checking in failing unit tests?", "text": "While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or involve code that does not execute the bug based on how it is currently called. But unit tests can be created that execute specific scenarios which will cause the bug to be seen, using valid inputs. > What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures due to a code check-in would be difficult."} {"_id": "64543", "title": "Does a web application have to live in a browser to be called a web application?", "text": "Does a web application have to live in a browser to be called a web application?
Or is a thin client that uses a web service for most of its functionality a web application?"} {"_id": "246090", "title": "Is OAuth suitable for this scenario?", "text": "I need to create a simple web application for tracking expenses, with some basic actions (the user must be able to create an account and log in, list expenses, edit them, etc.) and a REST API for each one, and the trick is that I need to be able to pass credentials to both the webpage and the API. So, after some research I've found some examples using Digest Authentication and HMAC Authentication, but a lot of posts also mentioned OAuth as an alternative approach, so my question is: given this scenario, would it be proper to use OAuth? I mean, as far as I understand, OAuth is suitable when you want to share resources with another application, which I'm not doing for this project; besides that, when you try to access the shared resource, a page appears requesting permission for the foreign application; would that page appear at some point in my application? (Maybe after the login?) In case OAuth wouldn't be suitable, is there any other option besides Digest authentication and HMAC authentication?"} {"_id": "30110", "title": "What parameters to use to compare GUI frameworks / toolkits?", "text": "I'm doing some research on the best GUI toolkit to use for future products at the company. We're talking about a fairly large organization with quite a bit of code and a complete rewrite project in planning. Don't ask. Anyway, I'm trying to create a list of relevant parameters to judge the toolkits. What would you use to drive the comparison? Here's what I've got so far: * Maturity * Ease of development * Ease of prototyping * Ease of maintenance * Size of hiring pool * Available knowledge at the company * Training costs * Community size * Community level of expertise (how hard it is to find good answers to complex problems) * Amount of expert-level books available * Ability to interface with other technologies * Deployment considerations * Visual aesthetics * Ability to access OS resources * Multiple monitor support (something that might come in handy in our particular application)"} {"_id": "30116", "title": "the future of commercial products", "text": "I'm seeing that open-source solutions are growing rapidly & many companies use them now. What do you think the future of commercial products or solutions will be?"} {"_id": "78611", "title": "Style and recommendations of commenting code", "text": "I want to hear your advice on and experience of writing comments in your code. How do you write them in the easiest and most informative way? What habits do you have when commenting parts of code? Maybe some exotic recommendations? I hope this question will collect the most interesting advice and recommendations for commenting, something useful that everyone can learn from. OK, I will start. 1. Usually, I don't use `/* */` comments even when I need to comment many lines. **Advantages**: the code visually looks better than when you mix such syntax with one-line comments. Most IDEs have the ability to comment out selected text, and they usually do it with one-line syntax. **Disadvantages**: hard to edit such code without an IDE. 2. Place a "dot" at the end of any finished comment. For example: //Recognize wallpaper style. Here I wanted to add additional details int style = int.Parse(styleValue); //Apply style to image. Apply(style); **Advantages**: you place a "dot" only in comments that you have finished.
Sometimes you write temporary information, so the lack of a "dot" will tell you that you meant to return and add some additional text to the comment. 3. Align text in enumerations, parameter comments, etc. For example: public enum WallpaperStyle { Fill = 100, //WallpaperStyle = "10"; TileWallpaper = "0". SizeToFit = 60, //WallpaperStyle = "6"; TileWallpaper = "0". Stretch = 20, //WallpaperStyle = "2"; TileWallpaper = "0". Tile = 1, //WallpaperStyle = "0"; TileWallpaper = "1". Center = 0 //WallpaperStyle = "0"; TileWallpaper = "0". }; **Advantages**: it just looks better, and it is visually easier to find what you need. **Disadvantages**: you spend time aligning, and it is harder to edit. 4. Write text in comments that you can't obtain by analyzing the code. For example, a pointless comment: //Apply style. Apply(style); **Advantages**: you will have clear, small code with only useful information in the comments."} {"_id": "60049", "title": "Right mix of planning and programming on a new project", "text": "I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting my urge to just do it. I want some planning up front to prevent refactoring the whole app just because a newly thought-of feature requires it. On the other hand, I don't want to plan for multiple months (in my spare time) before starting, because I fear that I will lose my motivation in that time. What I am looking for is a way of combining both without one dominating the other. Should I realize the project Scrum-style? Should I create user stories and then realize them? Should I work feature-driven? (I have some experience with Scrum and the classic "specification to code" way.) **Update**: How about starting with a "click dummy" and implementing the functionality later?"} {"_id": "132378", "title": "What is the recommended learning path for PHP and Javascript?", "text": "For a couple of months now I have been wanting to learn JavaScript and PHP, but a lack of time didn't give me the opportunity. Now I'm in a position where I have enough time either to learn only one language and practice it, or to learn both in a row without doing any practice for about 1-2 months except the code in the books. I have a bit of Java programming background and have built 3 applications in Java, so I'm not totally new when it comes to programming, but I can't say I'm at an intermediate level. I also have some experience with jQuery, which, by the way, I recently realised was a mistake to learn before JavaScript and XHTML/CSS. These are the resources that I am planning to read for both of them: JavaScript: 1. Getting Good with JavaScript 2. Professional JavaScript for Web Developers, 3rd Edition 3. JavaScript 24-Hour Trainer 4. JavaScript Patterns 5. JavaScript: The Good Parts PHP: 1. PHP Solutions: Dynamic Web Design Made Easy, 2nd Edition 2. PHP and MySQL Web Development, 4th Edition 3. O'Reilly PHP Cookbook, 2nd Edition These are the resources that were recommended to me by a friend to fully grasp these two technologies, exactly in that order, and these are the ones I plan to read in the time that I have over the summer. If there is anything you would like to add, please feel free to tell me. I am committed to learning them both to their full potential.
So what I want to ask is: is it appropriate to try and learn PHP immediately after learning JavaScript, or should I stick to just one of them until I know it better and move to the other afterwards?"} {"_id": "246917", "title": "How to do documentation for code and why is software (often) poorly documented?", "text": "There are some good examples of well-documented code out there, such as the Java API. But a lot of code in public projects such as git and in internal projects of companies is poorly documented and not very newcomer-friendly. In all my software development stints, I have had to deal with poorly documented code. I noticed the following things: 1. Little or no comments in the code. 2. Method and variable names are not self-describing. 3. There is little or no documentation for how the code fits into the system or business processes. 4. Hiring bad developers or not mentoring the good ones. They can't write simple and clean code, hence it's difficult or impossible for anyone, including the developer, to document the code. As a result, I have had to go through a lot of code and talk to many people to learn things. I feel this wastes everyone's time. It also creates the need for KT/knowledge transfer sessions for newcomers to a project. I learned that documentation is not given the attention it deserves because of the following reasons: 1. Laziness. 2. Developers don't like to do anything but code. 3. Job security. (If no one can understand your code easily, then you might not be easily replaceable.) 4. Difficult deadlines leave little time to document. So, I am wondering if there is a way to encourage and enforce good documentation practices in a company or project. What strategies should be used for creating decent documentation for the systems and code of any project, regardless of its complexity? Are there any good examples of when minimal or no documentation is needed? IMHO, I feel that we should have a documentation review after a project is delivered. If it is not simple, concise, illustrative and user-friendly, the developer or technical documentation engineer should own the responsibility for it and be made to fix it. I neither expect people to make reams of documentation, nor hope that it will be as user-friendly as the Head First books, but I expect it to eliminate the need for hours of analysis and wasteful KT sessions. Is there a way to end or alleviate this madness? "Document-driven development" perhaps?"} {"_id": "37836", "title": "Understanding the stateless internet", "text": "I'm transitioning from being a desktop developer to a web developer, and I'm having trouble understanding why HTTP is stateless. What are the reasons for it? What are some ways a desktop developer like myself can make the transition to a stateless development environment?"} {"_id": "132373", "title": "As a Junior Software Engineer should I say that something has been done wrong if I feel so?", "text": "I recently joined a company and it is my first job. When reading the code base, I felt that the code was not well written. It seemed to me that the code had most of the problems mentioned here and also seemed to have an Anemic Domain Model. There are no unit tests and they don't employ any code-quality checking tools like FindBugs or PMD. The problem I have is that the code is very difficult to understand. Maybe my conclusions are wrong because I am not that experienced. I need advice on whether to communicate the above facts to a superior or not.
If I am to communicate, to whom (Tech Lead, Architect, Product Manager) and how? And if I do communicate, will they take it badly, since I'm a junior and have no experience?"} {"_id": "225133", "title": "Licensing question regarding no derivatives", "text": "I am working on a project where I'd like these licensing terms: * Allow unmodified redistribution, with attribution to the author * Disallow modified redistribution * Non-commercial use Now the CC BY-NC-ND 4.0 license is perfect for this... however, it does not cover any software topics, and CC themselves say that you shouldn't really use the CC licenses for software projects. So what should you use in a case like this instead? LGPL comes close, but it does allow commercial use. Standard copyright, without a license, also comes close, but allows commercial use while disallowing redistribution. BTW, I understand you guys aren't lawyers. But you might have enough experience with licensing to know what kind of license I should be looking for, given my needs."} {"_id": "225132", "title": "When using the Apache license, is there still a need for a Contributor License Agreement (CLA)?", "text": "I can certainly see the need for CLAs with a terser license like MIT or BSD, but the more verbose Apache seems to already have that type of verbiage in it. I would prefer not to add this level of complication to an open source project if it is not necessary with the Apache 2.0 license. CLAs seem to primarily grant a copyright license and a patent license from the contributor to the project, both of which are already in the text of the Apache 2.0 license (sections 2 and 3). If I am using the Apache license, do I really need to also add the extra requirement for contributors to sign a CLA? I basically want to get the general benefits of a CLA (like the non-ambiguous right to relicense, though the contributor maintains ownership) without making contributors fill out a bunch of information and sign a contract in some third-party form. Doesn't the act of cloning a repository, modifying code, and sending a pull request indicate agreement to the terms of the associated license? Section 5 of the Apache 2.0 License explicitly states that contributions shall also be under Apache 2.0: > **Submission of Contributions**: Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. When combined with Sections 2 & 3 regarding the grant of copyright license and patent license "...to reproduce, prepare Derivative Works of, sublicense...", it would seem that Apache 2.0 already covers the project's bases regarding submissions. Or am I misunderstanding a possible difference between a "sublicense" and actually changing the license of the project? (Since the Open Source Licensing Q&A site is still just a proposal, I will accept IANAL answers here.)"} {"_id": "225134", "title": "Can the GitHub pull request process constitute an electronic signature of a CLA?", "text": "I appreciate the need for a Contributor License Agreement (CLA) in open source software projects and even understand that some tools are starting to make this process easier (like the low-friction CLAHub for GitHub and Project Harmony).
However, rather than requiring a third-party form submission for an electronic signature, would adding an extra sentence and the CLA itself to CONTRIBUTING.md, which GitHub links to when a contributor files a pull request (displayed as "Please review the guideline for contributing to this repository"), allow for a valid electronic signature (e.g. "Submitting a pull request to this repository on GitHub with your name and email constitutes your agreement to and electronic signature of the following Contributor License Agreement...")? Perhaps that argument hinges on the definition of "electronic signature" (ESIGN, UETA, etc.): > **"Electronic signature"** means an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record. And I would like to propose that the GitHub pull request "process" is "logically associated" with the CONTRIBUTING.md/CLA "record" and that the contributor has the "intent to sign" by adopting the process... (Since the Open Source Licensing Q&A site is still just a proposal, I will accept IANAL answers here.)"} {"_id": "129259", "title": "Apps for facilitating live code demos?", "text": "I really enjoy watching live code demos, especially when the time is focused on what the code is doing instead of what the presenter is typing. Many seem to be using apps to manage their clipboard to paste code into the IDE. What apps are out there for skillfully managing your clipboard to seamlessly do a code demo? UPDATE: I found this app that Apple engineers use called _DemoMonkey_. It's actually an OS X app demo for, ironically enough, the clipboard and system services. The link includes the source, so creating a PC equivalent would be easy; is there really nothing out there?"} {"_id": "27509", "title": "What is the etiquette for releasing a Perl module based on someone else's module?", "text": "I have written a Perl module by starting with an existing Perl module of related functionality and modifying it heavily. In fact, according to git blame I have changed (or created) every line of non-boilerplate code in the module. Of course, I have also changed the name of the module, so if I uploaded it to CPAN, it would not directly conflict with the original module. Anyway, despite my heavy modifications, the module is still conceptually based on its predecessor, and I would like to give credit to its authors. However, I can see how they might not appreciate having their names and emails slapped on what appears to be a new module without their knowing. So should I just put myself as the sole author and put the original module's authors in an "Acknowledgements" section? What is the best way to credit the original authors in my module?"} {"_id": "147664", "title": "What is \"swarming\"?", "text": "I've heard **swarming** mentioned in the context of Agile or Extreme Programming. It seems to be a complement to pairing. What exactly is it? When should it be applied? How do you do it well?"} {"_id": "185300", "title": "Is unconditional code considered a branch?", "text": "Having simple code like this: int A=5; object X=Console.ReadLine(); if(Condition) DoSomething(); else DoStuff(); DoSomethingElse(); Some sources say there are actually 4 branches: the first unconditional one, two for the IF, and another unconditional one after the IF statement. Some say there are only two branches. Which would be correct? E.g.
here: http://www.ruleworks.co.uk/testguide/BS7925-2-Annex-B7.asp"} {"_id": "147667", "title": "Where would I typically use a Deque in production software?", "text": "I'm fairly familiar with where to use Stacks, Queues, and Trees in software applications, but I've never used a Deque (Double-Ended Queue) before. Where would I typically encounter them in the wild? Would it be in the same places as a Queue, but with extra gribbilies?"} {"_id": "147668", "title": "What is the Bible of Hashing?", "text": "Is there a Cormen-like reference on Hashes and Hashing? This particular structure has seen little attention in my CS education for some reason, but I'd like to learn more, as hashes seem to be everywhere. I know Cormen covers it, but I'm looking for something more specialized and in-depth."} {"_id": "147669", "title": "Why does the add() method of a LinkedList return true in Java?", "text": "Why does the `add` method of a LinkedList return true in Java? http://docs.oracle.com/javase/1.5.0/docs/api/java/util/LinkedList.html#add(E) Why not just make it a void method? I know it says "per the general contract of Collection.add", but why doesn't this contract/interface make `add` a void method?"} {"_id": "198006", "title": "Why do people consider Python a weak language?", "text": "I've been using Python for a little while now and I am enjoying it, but a few of my friends are telling me to start using a language like C# or Java instead, and they give these reasons: 1. Python is a scripting language 2. Python apps don't port well to people who don't have Python 3. It's hard to make good GUIs in Python since it's a scripting language I like the batteries-included approach of Python, and the ability to download and upload pre-built modules from PyPI is really useful to me. Is there any specific reason why Python is considered a weak language?"} {"_id": "216230", "title": "Naming a release", "text": "OS X 10.9 is not just called 10.9 but also Mavericks. iOS7 is just called iOS7. Android releases are named after sweets. What is the rationale for giving a name to a release version? What are the benefits, if any? Most apps simply increment the number as they push new releases. Is naming a release (Mavericks, KitKat, etc...) just for marketing purposes?"} {"_id": "156009", "title": "Why .NET and C# are not available on Cloud Platforms?", "text": "I checked Google App Engine and Heroku, but neither supports .NET/C# applications. Even though Google App Engine shows support for Windows Azure, the supported languages don't include C# or VB. Is .NET not supported on cloud platforms?"} {"_id": "67488", "title": "Show your customers your roadmap", "text": "A while ago I read an article about "Managing Expectations", and I ask myself: is it a good or bad idea to share your roadmap with your customers? In your experience, what are the pros and cons of doing this?"} {"_id": "67489", "title": "What is \"lambda\" code?", "text": "I have recently heard people talk about code being "lambda". I have never heard this phrase before. What does it mean?"} {"_id": "216238", "title": "Where Are Multiple JUnit Test Methods Typically Placed in Code?", "text": "I've just read the Vogella JUnit tutorial and found it very helpful in understanding how to use JUnit. However, I'm a bit confused about what the convention is for placing multiple test methods in code. The tutorial only places one test method in a class, then describes how you can use a test suite to group multiple test classes together.
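To be clear, by multiple test methods in one class I mean something like this (Calculator here is a made-up class of my own, not from the tutorial): import org.junit.Test; import static org.junit.Assert.assertEquals; public class CalculatorTest { @Test public void addsPositiveNumbers() { /* one behaviour per test method */ assertEquals(5, Calculator.add(2, 3)); } @Test public void addsNegativeNumbers() { assertEquals(-5, Calculator.add(-2, -3)); } }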
Does this mean that it's common practice for each test class to only have one test method, and then test suites are used to chain them together? Or was that just incidental, and is it instead common practice to put multiple test methods in a class?"} {"_id": "136271", "title": "Enterprise scalable vs internet scalable, what is the meaning & differences?", "text": "While reading about Java EE applications, I have somewhere seen people saying they're enterprise scalable, and I am confused as to what that really means. Are Java web applications mainly written for, and suitable for, building enterprise management tools rather than the high-traffic websites of today's world that need to scale to a large internet population?"} {"_id": "74968", "title": "How do you organize your MVC framework while supporting modules/plugins?", "text": "There are two main codebase structures that I have seen when it comes to MVC frameworks. The problem is that they both seem to have an organisational bug that goes with them. **Standard MVC** /controller /model /view _Problem: No separation of related components (forum, blog, user, etc..)_ **Modular MVC** /blog /controller /model /view /user /controller /model /view /forum /controller /model /view Picking the module-based system leaves you with a problem. * Long names (Forum_Model_Forum = forum/model/forum.php) (Like Zend) * File system searches using `is_file()` to find which folder has the forum model? (Like Kohana) **Are there any other MVC structures that work well when trying to separate different modules?** Are there benefits from these structures that I'm missing?"} {"_id": "66755", "title": "Do Python programmers find the whitespace issue inconvenient?", "text": "Many programmers, upon first encountering Python, are immediately put off by the significance of whitespace. I've heard a variety of reasons that this is inconvenient, but I've never heard a complaint from a Python programmer. Of course, I haven't met a lot of Python programmers, as I have spent my career in the Java world. So my question is for those of you that have participated in a large Python project (more than 3 months, with Python being the primary language used): Did you find the whitespace issue to be inconvenient and continually annoying? Or was it a non-issue once you got in the flow? I'm not asking the question because I'm for or against Python, or for or against its use of whitespace. I happen to like Python, but I've never used it for anything big. Please don't provide speculations if you are not experienced in Python."} {"_id": "188083", "title": "Are design principles important, and if so, why don't more people use them?", "text": "I am a developer who works for an in-house information communications technology (ICT) department. I am usually quite critical when looking over code that I have not written, as I find time and time again that it does not follow principles like SOLID (single responsibility, etc.). I used to assume that it was because I worked in an in-house ICT department rather than an outsourced ICT service. However, I am now trying to integrate an internally developed app with a few third-party apps developed by large software vendors. However, again the code does not seem to be very well written, e.g. there are no interfaces and every method is public. I see it time and time again where developers create a bunch of classes and that's it (i.e., everything related to Sales goes in the Sales class, etc.)
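To illustrate the kind of thing I keep running into (a made-up example, not actual vendor code): public class Sales { public void CreateOrder() { /* ... */ } public void CalculateTax() { /* ... */ } public void PrintInvoice() { /* ... */ } public void EmailCustomer() { /* ... */ } } /* one grab-bag class, every method public, no interface in sight */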
I don't want to sound like I am whining; I just wondered whether other developers see design principles as important. I wonder if I am too narrow-minded."} {"_id": "188082", "title": "How do ORMs manage CRUD operations in a multi-threaded environment", "text": "Suppose I have code which retrieves an object, modifies it, and submits it via an ORM from a web application. Below is the pseudo-code: First request var objCust = _dbContext.Customers.Where(c=>c.CustomerId == "ALFKI").SingleOrDefault(); objCust.Address = "test address 1"; //and add some orders _dbContext.SubmitChanges(); Second simultaneous request var objCategory = _dbContext.Categories.Where(c=>c.CategoryId == 1).SingleOrDefault(); objCategory.CategoryName = "test name"; _dbContext.SubmitChanges(); How does the first request pick up only the changes made to customers and submit those changes? Is there any mechanism built into ORMs to track changes to entities per thread or request?"} {"_id": "72754", "title": "Does this status show a green signal?", "text": "I have done this static code analysis with Code Analyzer. Is it a green signal, or should I improve my coding standards? ![enter image description here](http://i.stack.imgur.com/UVXFb.jpg)"} {"_id": "72750", "title": "Git Project Dependencies on GitHub", "text": "I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want to have the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also be greatly appreciated."} {"_id": "177515", "title": "What follows after lexical analysis?", "text": "I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far: 1. One can already do syntax highlighting having only the list of tokens. Numbers, operators, and keywords get coloured accordingly. 2. Autoformatting (indenting) should also be possible. How? Specify for each token type how many whitespace or newline characters should follow it. Also, when you print tokens, modify an alignment variable (when the code printer reads "{", increment the alignment variable by 1, and decrement it by 1 for "}". Whenever it starts printing on a new line, the code printer will align according to this alignment variable.) 3. In languages without nested subroutines one can get a complete list of subroutines and their signatures. How?
Just read what follows after the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines). 4. In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (ok, you can't handle initialization as well, but you can parse sequences like: "var a, b, c: integer"). 5. Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls whom. If one can identify the body of a function, then one can also search it for any mentions of other functions' names. 6. Gathering statistics about the code, like number of lines, instructions, subroutines. EDIT: I clarified why I think some processes are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing."} {"_id": "177514", "title": "CSS vs codebehind, if I have to do part of the style in codebehind, should I do all of it there?", "text": "I'm working with ASP.Net and trying to style an ImageButton. I tried to do this via CSS, only to discover that for some reason .Net writes out an inline style setting the border width to 0 automatically. This overrides the border width from CSS, so I have to set it manually in the codebehind. My question is, should I do all the styling in the codebehind to keep it all in one place, or only do the border width there and do the rest in a style sheet? For clarification, I mean all the styling _for that object_ , not for the whole site."} {"_id": "177516", "title": "Creating and assigning console.log a new object", "text": "I have seen code like this in several places: (function() { var method; var noop = function noop() {}; var methods = [ 'assert', 'clear', 'count', 'debug', 'dir', 'dirxml', 'error', 'exception', 'group', 'groupCollapsed', 'groupEnd', 'info', 'log', 'markTimeline', 'profile', 'profileEnd', 'table', 'time', 'timeEnd', 'timeStamp', 'trace', 'warn' ]; var length = methods.length; var console = (window.console = window.console || {}); while (length--) { method = methods[length]; // Only stub undefined methods. if (!console[method]) { console[method] = noop; } } }()); Specifically I am interested in this line of code: var console = (window.console = window.console || {}); Why are we creating a new `window.console` object if `window.console` is not defined? And why are we setting `window.console = {}`?"} {"_id": "105876", "title": "Are there decent entry-level tutorials on debugging?", "text": "In the tags I frequent on Stack Overflow, a big chunk of questions _should_ be answered with some basic knowledge about how to debug things - i.e., teaching the OP to fish - instead of giving a fix for the piece of code presented. Are there any good debugging tutorials for newbies, especially * Basic debugging techniques - understanding a piece of code, finding the problem by walking through it step by step instead of posting on SO and saying "it doesn't work" * How to debug simple PHP scripts (For situations like "my POST variables aren't showing", etc.) * How to debug Ajax scripts (How to use Firebug to see what gets sent, etc.) but also other languages and platforms?"} {"_id": "105875", "title": "Macbook pro for a C++ programmer?", "text": "I am a computer science student and I'm thinking of getting a new laptop. Is a MacBook friendly for a C++ programmer? Does OS X have as many tools for programming as Linux? Would working on a MacBook hinder my growth as a C++ programmer?
EDIT: I mainly use gedit, gdb, g++, meld, ddd (gdb front end), valgrind, etc. in Ubuntu. Does OS X have equivalent software?"} {"_id": "105870", "title": "PHP as a scripting language", "text": "I have reasonable knowledge of PHP, Perl, and Bash. I use Perl for most text processing on my system (find, replace, filter output, etc). I use PHP for web development, allowing a user to view and interact with database data via the browser. I use Bash for quick and dirty scripts, often to supplement the more complex Perl scripts. I'm a big believer in focusing on as few languages as possible (to get the job done) and becoming an expert in them. So the question is, why not use PHP for the common tasks that Perl (and Bash) are used for? Are there good reasons (limitations or features) why PHP is mostly used for web development, while Perl and Bash are mostly used for "offline" "scripting"?"} {"_id": "228561", "title": "Limitations of using Hyper-V Virtual Machine as .NET Development environment", "text": "I've been thinking of using a Hyper-V virtual machine instead of a physical computer for installing my .NET / Azure / Xamarin / ASP.NET / Windows Phone development tools. That way I could easily move from my desktop to my laptop when I need to develop something on the go. Are there any limitations in this scenario that I should be aware of? Thanks, Adrian"} {"_id": "228563", "title": "How to check whether an email address exists or not in PHP", "text": "I have tried a lot but could not find a way to check whether an email address exists or not. My problem is that I do not want visitors to contact me with a fake email ID. Is there any solution for this? I do not want to use an SMTP service, as it sends a mail to do the check."} {"_id": "187193", "title": "Which license for an open source project which may be, but is not intended to be, used as a bot in some apps?", "text": "I'm wondering if there is some license I could put on my website under which I could publish an open-source application that could potentially be used for cheating in one game for smartphones. Now, I have to stress that the goal of the application is in no way cheating, so with this license I want to eliminate the possibility that the authors of the smartphone game (in which my bot could be used) could sue me if someone else starts using the application to get better at their game. I may be too paranoid, but with all the weird laws you just never know. I did a little research and came up with Creative Commons Attribution 3.0 (CC BY 3.0), but was wondering whether any of you guys were ever in a similar situation, and what you did in that case? By the way, I did read and consider the moral implications of the possible app appearing on my site, so now I just need some license as stated in the question."} {"_id": "187190", "title": "Do I include association links for Objects that only have method scope in UML class diagrams", "text": "For example, I have a utility class which contains a few constants (GCMConstants); this class is used in one method in the application. However, as it is not a member of the class, it should not be modeled via an association link. Should I continue as is and not include a link, or should I include one?
I've checked my UML books but none of them seem to cover stuff like this."} {"_id": "194901", "title": "Do Windows mail clients actually care about the MIME type of binary attachments?", "text": "...or, in other words: When sending a mail programmatically, do I really need to go through the hassle of setting the correct MIME type or can I just use `application/octet-stream`? The mail clients I tested all do "the right thing" based on the file extension, even if the MIME type is just the generic `application/octet-stream`, so why bother? I know that it is considered "good practice". I'm asking out of scientific curiosity, because I want to know if it makes a practical difference or not."} {"_id": "202801", "title": "Purpose oriented user accounts on a single desktop?", "text": "# Starting point: * I currently do development for Dynamics Ax, Android and an occasional dabble with Wordpress and Python. * Soon, I'll start a project involving setting up WP on Google Apps Engine. * Everything runs, and should continue to run, from the same PC (running Linux Mint). # Issue: I'm afraid of botching/bogging down my setup due to tinkering/installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each dedicated to handling the task at hand (web, Android, etc.), keeping each user as isolated as possible from the others. What I need to know is the following: * Is this a good/feasible practice? The next closest thing to this is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. * What about switching users? Can it be made seamless? * Anything else I should know? * * * # Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. This option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly due to the reason mentioned before. Regarding complexity, I agree and would consider what was said; only, from my experience, I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I **really** want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure, with each session (to a reasonable extent) safe from affecting the others. And what I'm really asking is if and how this can be done using user accounts."} {"_id": "194904", "title": "Number of sites to build before trying to become a web developer?", "text": "Someone has asked me this question and I couldn't answer, as I am not on the web side of development, so I'm asking here. If one has just finished college and is trying to break through into web development/web design, a portfolio of already-developed projects/sites seems to me to be essential for going to interviews. What would you say is a good number of individual sites to create for your portfolio before sending your CV and going to interviews, for a web developer position?
Also, what would be a good number of sites to design if you were aiming for a web designer position?"} {"_id": "171825", "title": "How is time calculation performed by a computer?", "text": "I need to add a certain feature to a module in a given project regarding time calculation. For this specific case I'm using Java, and reading through the documentation of the Date class I found out that time is calculated in milliseconds starting from January 1, 1970, 00:00:00 GMT. I think it's safe to assume there is a similar "starting date" in other languages, so I guess the specific implementation in Java doesn't matter. How is the time calculation performed by the computer? How does it know exactly how many milliseconds have passed from that given "starting date and time" to the current date and time?"} {"_id": "233968", "title": "Iteratively building a Test Framework", "text": "(Upfront, I'm not a test or agile expert, just trying to push us to be better. If I'm wrong on anything, I'd love you guys' opinions.) So I've been pushing for us to become a real agile shop and less of a shop that just calls its ad-hoc process "agile". So I'm iteratively trying to improve our process. I've found that it's been good to slowly build towards "doing it right": I can get buy-in for small changes, or large changes on their own. So we've got slow iterations (about a month) that I've been using to slowly fold more and more in. My personal goal is to get us to a process that supports continuous integration: at that point I think we can do things like speed up our iterations at much lower cost. The next bit that I've sold management on is that we need to improve our testing. We've never had a real solid test strategy, so I'd like to start to build up our capabilities there. We've already introduced "Test Driven Development", using the Google Test (unit test) framework, which has brought us very far. But these unit tests are not enough to test the system as a whole. We're supplementing this with our old way of testing: ad-hoc system and functional tests that are really specific to the project, that are rewritten every time, and that are not automated. To make matters worse, our functional and system tests are often more similar to characterization tasks than pass-fail tasks: our system/functional test tools need to be able to support advanced plotting and number crunching, which means a lot of them are in Matlab ( _shudder_ ). From what I can tell, a good automated test system has two parts: the Test Framework (something akin to Robot Framework, etc.) that figures out what tests to run, calls those tests, organizes the results, etc. ... the thing that actually does the automation. That uses the Test Tool(s), the fixture that actually performs your test. I may be viewing this wrong, but the Test Framework is like a huge batch file, and the Test Tools are the tests. So I see two possible ways to go with "what to do next". 1. Implement (could be writing our own, though I'd prefer to acquire/modify) a Test Framework. Leave our ad-hoc system tests, but let the framework take over automation. The goal would be to have a way to hit "Test" and have all the tests run. 2. Figure out a way to build Test Tools more efficiently and effectively. I'm not sure what the best way to go is here, or if any of the following matter: * We do both Windows and Linux development, though usually not at the same time * While I want to focus on C++ and C with CMake, we also do C# and Matlab development.
The ideal framework would support all of these, even if it is with different tools. * The "testers", "test automators", and "developers" are all going to be pulled from the same group of highly competent developers (for now). * I need to support command-line executables and libraries now, but would like to support GUIs (QT and .NET) in the future. The scope of 2 seems HUGE, though doing 1 first seems like putting the cart before the horse. Do I have a semi-accurate picture of what's going on? I've read a few things that use Google Test as their test framework, but doesn't it better fit the definition of a test tool? Am I biting off more than I can chew?"} {"_id": "189683", "title": "Why Have People Started Deeming it Necessary to Separate JS hooks from CSS hooks in HTML?", "text": "Edit: Point of clarification: IDs and classes as separate hooks are just one form of the applied idea in question, which is to never use the same hooks for CSS as you do in JS. I've also seen people want things like js-combo_box and css-combo_box as classes on the same element. This is one I've started running into more and more in recent years. People wanting to only use IDs for JS and classes for CSS. Or they don't want JS to touch classes and IDs at all and rely on things like HTML5 data-attribute hooks (which are, and will probably remain, the slowest way to actually find an element, even in modern browsers). In 5+ years of occasionally heavy-duty/insane-environment client-side web development, including one very large, totally messed up e-commerce retail operation where HTML nobody was even working with would mutate because we had 200 offshore devs of varying levels of code illiteracy (completely on the low end, not kidding) jumping up and down on the back-end codebase with no source control, I've never run into an issue where JS and CSS targeting shared attributes in the HTML caused any kind of maintainability or ease-of-modification problem for me. I guess I'm wondering what is informing this perceived need and whether I'm missing something that comes up for a lot of people? Isn't HTML via selectors and the DOM API effectively the point of abstraction that ties everything together? Why would JS and CSS concerns sharing the same attributes there ever cause a problem?"} {"_id": "189689", "title": "Should I read a chapter about Memory management if nowadays we mostly use ARC?", "text": "I'm reading a book on Objective-C, and I was wondering about 2 things: 1. Should I currently take the time to read a whole chapter on memory management? 2. If you are doing a really good job on manual management, can you get better performance than with ARC?"} {"_id": "56508", "title": "Where can I find fellow developers who are looking for help to create a startup?", "text": "Where can I find fellow developers who are looking for help to create a startup? Are there any collaboration websites whereby people pitch ideas and a group of people 'join' the project in an effort to create some kind of prototype? **Edit:** @Vitor Braga - Seriously, I'm not looking to make money, just to find developers who are interested in making a trivial little app with someone else. **Edit:** It may be worth explaining that I live in a remote area of Australia."} {"_id": "92072", "title": "Putting a programming language on your resume?", "text": "How much experience do you need in a language before you can put it on your resume?
There is one language I'm proficient in (Java) which I would definitely put on the resume, but say I took a couple of semester courses in college which involved extensive programming in C, or taught myself C# but have written no meaningful projects in it. Can I put those languages on the resume without having the employer laugh at it or perceive it as resume inflation?"} {"_id": "100348", "title": "Choosing between CL and Python for web development", "text": "I'm coming from a Java / PHP background, and after I read this little essay by Paul Graham I started wondering about picking up a new language, namely Common Lisp, to speed up my work (I'm a web developer). I'm writing pet projects currently, but I have some business plans for the future. Paul speaks about Lisp in his essay as a "secret weapon". I don't know if this statement is true after 10 years, but I dipped my toes into a nice CL tutorial and it looks like Lisp may be superior for web development. Paul also mentions Python as a nice choice, which I am actually familiar with. My question is: which one should I choose for my future web projects? What I was thinking about: * I'm not going to develop desktop applications, so I can choose whatever language I prefer. * Python seems to have a very large community and thus many more libraries/frameworks compared to Lisp. * I found out that Lisp has some functionality (like macros) which cannot be found anywhere else. * I mostly work alone or with 1-2 other programmers, but finding someone with Lisp knowledge can be hard. So what do you think?"} {"_id": "131137", "title": "Research on software defects", "text": "There is a chapter in _"Making Software: What really works and why we believe it"_ by Andy Oram and Greg Wilson about software defects, and what metrics can be used to predict them. To summarize (from what I can remember), they explained that they used a C codebase that was open source and had a published defect tracking history. They used a variety of well known metrics to see what was best at predicting the presence of defects. The first metric that they started with was lines of code (minus comments), which showed a correlation to defects (i.e., as LOC increases, so do defects). They did the same for a variety of other metrics (don't remember what off the top of my head) and ultimately concluded that more complex metrics were not significantly better at predicting defects than a simple LOC count. It would be easy to infer from this that choosing a less verbose language (a dynamic language?) will result in fewer lines of code and thus fewer defects. But the research in _"Making Software"_ did not discuss the effect of language choice on defects, or on the class of defects. For example, perhaps a Java program can be rewritten in Clojure (or Scala, or Groovy, or...) resulting in more than 10x LOC savings. And you might infer 10x fewer defects because of that. But is it possible that the concise language, while less verbose, is more prone to programmer errors (relative to the more verbose language)? Or that defects written in the less verbose language are 10x harder to find and fix? The research in _"Making Software"_ was a fine start, but it left me wanting more. Is there anything published on this topic?"} {"_id": "187221", "title": "Strategy for keeping up with (Python) language changes", "text": "## Writing code that will still run years from now Programming languages change. Libraries change.
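(To give one concrete example of the kind of change I mean: `print 'hello'` is valid in every Python 2.x release and a syntax error in every Python 3.x release.)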
Some code from 5, 10, or even 20 years ago might still run and produce expected results, whereas some code from 2 years ago might fail with a syntax error. This is partly inevitable, since languages evolve (at least, most do). Developers have a responsibility to maintain their code. But sometimes, stability is an important requirement in production code, and code should simply run for 10 years without the need for someone going through the code every year to adapt it for language changes. Or I might have small scripts, for example for scientific data analysis, that I need to revisit after not touching them for years. For example, at meteorological offices there is a lot of operational Fortran code even for non-speed-essential parts, and code stability is one of the reasons. I've heard fear of instability is one of the objections they have against moving to Python (apart from language inertia of course; it's only possible for new code not dependent on old code). Of course, one strategy for stable code is to freeze the entire operating system. But that is not always feasible. I'm using Python as an example, but the issue is not limited to Python in particular. ## Documents on Python compatibility issues In the case of Python, there are several documents outlining policy for backward-incompatible changes. ### PEP-5 According to PEP 5: > There must be at least a one-year transition period between the release of > the transitional version of Python and the release of the backwards > incompatible version. Users will have at least a year to test their programs > and migrate them from use of the deprecated construct to the alternative > one. Personally, I consider that one year is rather short. It means I might write some code, and 1½ years from now it won't run anymore. ### PEP 291 PEP 291 contains an incomplete list of guidelines for things that should be avoided in order to maintain backward compatibility. However, it relates only to Python 2.x. As Python 2.7 is the final release in the 2.x series and Python 2.7 is bugfix-only, this PEP is now only of historical interest. ### PEP 387 There is also PEP 387 on backward-incompatible changes. PEP 387 is a draft and not official policy. In June 2009, this was discussed on the Python-ideas mailing-list. Part of the discussion focussed on how developers can write code that is robust against language changes. One post listed some advice on what _not_ to do: > Along with this there are several rules you can infer that are probably true > most of the time: don't call stuff starting with `"_"`, don't monkey-patch > anything, don't use dynamic class replacement on objects from classes other > than your own, don't depend on the depth of inheritance hierarchies (for > example, no `".__bases__[0].__bases__[0]"`), make sure your tests run > without producing any DeprecationWarnings, be mindful of potential namespace > conflicts when adding attributes to classes that inherit from other > libraries. I don't think all these things are written down in one place > though. In addition, there were some points about "mine fields" (new features likely to change) and "frozen areas" (very solid APIs virtually guaranteed not to change). Quoting Antoine Pitrou: > I think the "frozen area" should be defined positively (explicit public APIs > and explicitly guaranteed behaviour) rather than negatively (an explicit > "mine field").
Otherwise, we will forget to put some important things in the > minefield and get bitten later when we need to change those things in a > backwards-incompatible way. There doesn't seem to be any conclusion from this thread, but it gets pretty close to the core of what I'm looking for. The thread is almost four years old, so perhaps the situation has changed or improved. What kind of code is likely to survive, and what kind of code is more fragile? ### Porting guidelines In addition to the documents outlined above, each Python version comes with a porting guideline: porting to Python 3.2, porting to Python 3.3, etc. ### Useful compatibility PEP 3151 introduced me to the concept of _useful compatibility_. In my own words, this boils down to the idea that language developers only need to be careful to maintain compatibility for code that is carefully written. It doesn't really define _useful compatibility_, but I think it's similar to the ideas I quoted from the PEP 387 discussion above. ## From the programmers' point of view As a programmer, I know that Python will change in the future and that people — most notably myself — will try to run my code perhaps several years from now in a Python version that is one, two, or perhaps three minor versions up. Not everything will be compatible, and in fact it's easy to come up with code that will fail (I once encountered code stating `if sys.version[:3] != '2.3': print 'Wrong version, exiting'`). What I'm looking for is a set of guidelines on _what to do and what_ **not** _to do_ to enhance the chances that my code will still run unaltered in the future. Are there any such guidelines? **How do I write Python code that will still run in the future?** My question relates to the Python core and its standard library, but also to commonly used add-on libraries, in particular `numpy`, `scipy`, and `matplotlib`. * * * **EDIT**: So far, two of the answers relate to python2 vs. python3. This is not what I mean. I know about tools to migrate from Python2 to Python3. My question relates to language changes _yet to come_. We can do better than a _crystal ball_ in finding coding guidelines that are more stable. For example: * `import module` is more future-proof than `from module import *`, because the latter can break code if `module` grows one or more new functions/classes. * Using undocumented methods may be less future-proof than using documented methods, as something being undocumented may be a sign of something not being stable yet. It's this kind of practical coding advice that I'm after. Since it's about present→future, we can limit ourselves to Python3, because Python2 is not going to change anymore."} {"_id": "110933", "title": "How to determine the levels of abstraction", "text": "I was reading a book today called "Clean Code" and I came across a paragraph where the author was talking about the levels of abstraction per function; he classified some code as low/intermediate/high level of abstraction. My question is: what are the criteria for determining the level of abstraction? I quote the paragraph from the book: > In order to make sure our functions are doing "one thing," we need to make > sure that the statements within our function are all at the same level of > abstraction. It is easy to see how Listing 3-1 violates this rule.
There are > concepts in there that are at a very high level of abstraction, such as > getHtml(); others that are at an intermediate level of abstraction, such as: > String pagePathName = PathParser.render(pagePath); and still others that are > remarkably low level, such as: .append("\n")."} {"_id": "110936", "title": "What are the advantages of prototype-based OOP over class-based OOP?", "text": "When I first started programming JavaScript after primarily dealing with OOP in the context of class-based languages, I was left confused as to why prototype-based OOP would ever be preferred to class-based OOP. 1. What are the structural advantages to using prototype-based OOP, if any? (e.g. Would we expect it to be faster or less memory intensive in certain applications?) 2. What are the advantages from a coder's perspective? (e.g. Is it easier to code certain applications or extend other people's code using prototyping?) Please don't look at this question as a question about JavaScript in particular (which has had many faults over the years that are completely unrelated to prototyping). Instead, please look at it in the context of the theoretical advantages of prototyping vs classes. Thank you."} {"_id": "187227", "title": "Advice on App/Service Architecture", "text": "I'm starting a project that will have a web front end for the users coupled with a database. There will then be a stand-alone service running that will, on a specified interval, poll an API and update the database with the changes it finds. What I need advice on is how the service should be set up. When a user creates an account on the site, I'll need to create a "hook" in the service that will then go poll the API for that specific user. I looked at building the service using NodeJS because I could use its non-blocking structure along with callbacks when the API returns data that needs to be reflected in the database. I don't know how I will be able to create more monitors when a user creates an account, since it's already running. Can NodeJS run in a continuous loop as a service? I am familiar with .Net and could use a Windows Service, but how would that scale with polling when the user numbers grow?"} {"_id": "187229", "title": "Text Editor Document Model", "text": "I am working on a text editor in JavaScript. I have created the front end and I need to prototype a backend. I need to model the structure of the document using this hierarchy: Character, Word, Tag, Document. I am trying to create objects based on this hierarchy. For example: characters are represented as a linked list; words are pointers to the first and last characters; tags are another linked list of words; and a document is a collection of words and tags. I was wondering if anyone has a better suggestion for modeling these relationships as classes. I receive a character from the keyboard and I need to store this as an object in my document model. Thanks."} {"_id": "50627", "title": "How do you get Business owners to buy off on properly requested product changes", "text": "I work for an in-house IT shop. We use a scrum-like system. What we are struggling with now is getting the business owners to follow the process. They all like it in concept until they want something; then the process is for the other people.
How do you convince them that following the process will give them more consistent results than just throwing a grenade over the fence and saying, "make it work"?"} {"_id": "193620", "title": "How to deal with warnings in a legacy project", "text": "I work on a C++ project that generates bajillions of warnings. Most of the warnings appeared after the code was written: * Initially the project used Visual C++ 8, soon switching to 9, but there is little difference in the generated warnings. Warnings were either fixed or silenced, so there were no warnings then. * Then a 64-bit target was added. It generated a huge amount of warnings, mainly because of sloppy use of types (e.g. `unsigned` vs. `size_t`). No one bothered, or had time, to fix them once the code worked for the purpose. * Then support for other platforms using GCC (4.5 and 4.6, some 4.4 initially) was added. GCC is much more picky, so it generated many more warnings. Again nobody bothered, or had time, to fix them. This was complicated by the fact that GCC didn't have a pragma to silence a warning in a particular bit of code until 4.5, and according to the documentation it is still not what one would need. * In the mean time some deprecated warnings popped up. So now we have a project generating thousands of warnings. And I can't even say from how many places, since even `.cpp` files are compiled several times by different compilers and warnings in headers get printed over and over. Is there a best practice for cleaning up something like that? Or at least some positive experience dealing with it?"} {"_id": "141367", "title": "How to build a web service to detect content change(s) at an external website?", "text": "I'm researching ways to build a web service to periodically traverse a predetermined list of web pages (of another external website) to detect if a page's content has changed from 1. editing of the page, and 2. deletion of the page. The end goal is to have this web service post push-notification events to mobile devices. FYI, I've searched and read "Questions with similar titles" here. Thank you for sharing your answers."} {"_id": "184556", "title": "How to implement syntax sugar in OO", "text": "### The Problem I regularly find myself writing the same code over and over again. For example, if I want a value from a nested array structure I end up writing something like this: $config = function_to_get_nested_array(); $node = $config; foreach (array('vendor','product','config','leave') as $index => $key) { if (!isset($node[$key])) { // throw exception } $node = $node[$key]; } I would prefer to have something like $config = function_to_get_nested_array(); $node = get_node($config, 'vendor/product/config/leave'); But how do I implement it? ### Possible Solutions 1. I could implement a function as I did above. _BUT_ my packages heavily rely on autoloading and are purely OO. Right now I just have to clone a submodule into the right directory and everything works fine. 2. I could implement this where I need it. _BUT_ this means I have to implement it countless times in independent projects. 3. I could use a trait, if I were not stuck with PHP 5.3. Not an option for me. 4. I could implement a static service provider. For various reasons static classes and methods are usually a bad choice, but I am tempted to do so in this case. use vendor\product\helpers\ArrayTree as Tree ...
$config = function_to_get_nested_array(); $node = Tree::getNode($config, 'vendor/product/config/leave'); Looks _very_ tempting, because it supports autoloading and namespaces. ### The Question Is there a better solution for implementing this kind of helper function?"} {"_id": "184554", "title": "Node.js app private modules. Where to put them?", "text": "The situation would be: I develop 2 projects in my Node.js development environment, P1 and P2. P1 required the development of two simple modules, mod1 and mod2, which are stored in `P1/lib`. Each one of these modules finds its external dependencies in `P1/node_modules`. Necessary dependencies for P1 have been installed in this folder via npm. Now imagine we want to reuse mod1 in the other project P2; here's where my doubts come up. I could... * Just copy mod1 to `P2/lib`. Replication, so I don't even consider this option. * From P2, reference mod1 from P1: `require($PROJECTS_DIR + '/P1/lib/mod1')`. Not a good option; this way P2 would depend on P1. * Put mod1 into a higher-level directory, or use NODE_PATH, so that P1 and P2 can resolve it by just doing `require('mod1')`. However, when deploying, I would also have to deploy this higher-level directory, which seems a bit dirty. * I would like to treat mod1 as an npm module, so it can be easily installed in any project or environment. However, in this particular case, I can't publish the module to npm because it is too project-specific. I could create a private npm repository and put mod1 inside. The gist of this would be to set it up so that it can also be accessed from the production environment. Is it worth it? * What about putting it all together in `node_modules`? (External dependencies and my own libraries.) That would be great, since modules can just be required like `require('module')`. But it also seems quite dirty. * Not sure how `npm link` would work when deploying. It creates a symbolic link, which is not followed when committing code via Git or SVN. If I run `npm install` in production, will it also install the linked module? None of the above quite satisfies me. I don't know if one of these is suitable, or whether you guys have other suggestions, but are there any preferred ways to structure one's own **private** libraries so that they can be easily reused in other projects?"} {"_id": "141360", "title": "handling multiple interviews / offers", "text": "What's the best way to handle a situation where you have, or expect to have, multiple offers? The ideal situation is that your several offers come in at about the same time, and you make a choice. This is not how it happens, though. You may have an offer, and several near-final interviews lined up for the following days or weeks. One way to handle it would be to ask for a longer time to decide on the first offers you receive. Two weeks? This gives time to rush the rest of the things you have going through to an end. I question whether asking for two weeks to decide is reasonable, though. My guess is that an employer would see through that and force your hand. Another way to handle it would be to accept the first offer, ask for a reasonable period before your start date, then simply "quit" the first position before you ever start if something better comes along. On one hand, employment is at-will, and employers exercise this fact regularly. On the other hand, it seems morally the wrong thing, and has the potential to burn some bridges. And of course the last option is to simply evaluate each offer in isolation, and accept or reject within the given time frame.
Any thoughts?"} {"_id": "189352", "title": "SQL W/ hibernate vs in-memory solution", "text": "I recently posted a question here: Methods to share memory or state across JVMs? and appreciate the answers. However, as I read the code and got to know the system better, I realized I was overcomplicating the problem. Only one of the JVMs really cares about the dynamic data, so there is no need to pass dynamic data across memory boundaries. However, we still have the problem that state is manually maintained between memory and SQL; the manual code isn't perfect and doesn't protect against stale data, data races, etc., and I want to remove it entirely one way or another so I can feel more secure about the overall stability. Since only one JVM cares about the dynamic data, and the dynamic data can be regenerated at bootup each time (with a small time penalty to do so), my inclination is to remove all the dynamic data from SQL and just store everything in memory; why overcomplicate anything? However, they like the SQL as a debugging tool. This system is developed agile; it's a live system, but bugs and errors do come up due to the agile nature. When that happens they ssh to the live system and debug it on the fly, often by viewing the database. The SQL allows them to see the actual routes and pathing that are being used. They can also see when a route looks wrong, change it, then restart the module so that the fixed path is loaded into memory and used from then on. They like this ability to quickly fix bad routes and don't want to lose it. There is now talk of keeping the database but backing it with Hibernate to avoid the nightmare of trying to keep the JVM and SQL synchronized manually. There are some other minor gains, but this is primarily so they can keep the SQL and try to use it as a debug tool. Something feels wrong about all of this, but I'm not entirely certain WHAT is so wrong about it. If we dropped the dynamic SQL data, I could partially emulate what they want by adding messages to print out the graph as it is, or to modify a graph on the fly, but obviously each message takes a bit of time to write. Does keeping SQL, rather than trying to write messages to allow changing memory on the fly, make sense? I think using Hibernate may make it a little harder to write the objects we use for generating and maintaining paths, having to keep the structure similar to a SQL database and all. But I think my real issue is the idea of pushing changes to our in-memory state by changing our database. That just feels dangerous to me, in much the same way improper encapsulation feels wrong. I don't think fixing a broken route by just manually changing the route in SQL is safe. But I don't know how to articulate why this all feels wrong. Maybe I should be suggesting a better debugging solution? So, am I right to worry, or is Hibernate really the best approach?"} {"_id": "189354", "title": "IEC 62304 compliant Architecture definition", "text": "I am currently tasked with creating a software architecture for compliance with IEC 62304. These regulations are notoriously vague, and do not provide any real substance as to what is required for a "software architecture". The standard states: > The manufacturer shall transform the requirements for the medical device > software into a documented architecture that describes the software's > structure and identifies software items.
Now, I went to an applied computer science school, so the vast majority of my education was about how to actually write code and work on projects, and as far as I can remember we never covered anything involving creating software architecture diagrams. I've already basically written all of the software, but for regulatory submission purposes I need to create this documentation for the project. So in short my question is this: What exactly does a "software architecture" consist of?"} {"_id": "184558", "title": "How to debug through a swing based application effectively", "text": "I am trying to understand how a radio button is created in a Dynamic field by reading from an XML using Netbeans 7.0. I know the radio button is created because of the XML being read from the database, but I cannot see **how** the radio button is created. Also, since I don't know where to place the breakpoints, I can't see how I would debug the creation of components in the Dynamic Editor. Maybe I am taking a wrong approach or something, so how do I efficiently debug an application like this?"} {"_id": "77707", "title": "Are iframes a design smell?", "text": "We have some pretty old, clunky .Net 1.1 apps in our business that - rather than be forcibly upgraded - just get an iFrame added so that new functions can be dropped in. It's become such a well-known option that many management staff who don't really have a great deal of technical knowledge or experience suggest just throwing an iframe on a page as if that would be the only way to solve the problem. I don't have an issue with the iframe element per se if it's absolutely necessary, but it does seem that it's a very convenient tool for something that might be better achieved with a little more thought and consideration for design. Are iframes the web design equivalent of a code smell?"} {"_id": "165376", "title": "Web Page Execution Internals", "text": "My question is what is the subject area that covers web page execution/loading. I am looking to purchase a book, by subject area, that covers when things execute or load in a web page, whether it's straight HTML, HTML and JavaScript, or a PHP page. Is that topic covered by a detailed HTML book, or should I expect to find information like that in a JavaScript or PHP book? I understand that PHP and Perl execute on the server and that JavaScript is client-side, and I know there is a lot of online documentation describing the individual tags and constructs, and so on. I'm just wondering what subject area a book would be in to cover all that; not a discussion of the best book or someone's favorite book, but the subject area."} {"_id": "165375", "title": "Windows driver signing", "text": "My company is developing a driver for our hardware. Now I need to sign my driver for 32- and 64-bit platforms. Please tell me: I need to buy an Authenticode certificate, right? What CA should I use? DigiCert? GlobalSign? ( http://www.sslshopper.com/microsoft-authenticode-certificates.html ) Symantec? ( http://www.symantec.com/verisign/code-signing/microsoft-authenticode ) What is the difference between these CAs' offers? Do I need to use tools from the WDK?"} {"_id": "231759", "title": "How to locate source code that implemented a certain feature?", "text": "I was wondering what techniques can be used to locate which code implemented a certain feature in a desktop application. I am a junior developer, with my only professional programming experience being in web programming. On the Web it is easier to do that.
For example, you \"inspect\" a button with the browser tools, and you can see what is being done when you click it. And then, presuming you have the full source code, you can drill down the hierarchy of the calls. But how do you do this in desktop applications? At least, without having to dive into the full codebase?"} {"_id": "165379", "title": "What norms/standards should I follow when writing a functional spec?", "text": "I would like to know what documents (ISO?) should I follow when I write a functional specification. Or what should designers follow when creating the system design? I was told that there was a progress in last years but was not told what the progress was in (college professor). Thank you EDIT: I do not speak about document content etc. but about standards for capturing requirements, for business analysis."} {"_id": "82743", "title": "RESTful reference representations - semantic link vs uri", "text": "We're designing a RESTful API to open up our customer's account information. We have representations that contain references to other resources related to the current resource. This is from a number of best practices we were able to find in public APIs as well as published materials. The representations can be either XML or JSON. For example for an account resource we would have references to the account's addresses and for a paginated list resource, we would have references to the first, next, and previous pages. The API was first designed using semantic links `` as described in an O'Reilly book and used in APIs by Netflix and Google. When it came time for our QA engineers to write the automation suite, they had problems deserializing the links. We've now suggested simpler uri string elements which have been used in APIs by Facebook and Twitter. Our QA enginners have since solved their deserialization issues, but I still have a concern of the ease of use of the current API spec with semantic links. Our API will primarily be consumed by our customers and some third-party partnerships and we've gone to REST because the previous XML-RPC API was too difficult for our consumers. tl;dr; Question: ### Has anyone who has implemented a semantic link representation experienced consumer issues with the difficulty? * * * Update (6/21): I've decided to stay with semantic links and hope that the confusion was an edge case. I'll try to remember to answer the question with our experiences once the API is live with some consumers. * * * Edit: add examples Semantic Account JSON: { \"username\": \"paul\", \"links\": [ { \"title\": \"addresses\", \"rel\": \"related\", \"href\": \"http://example.com/account/paul/addresses\" }, { \"title\": \"history\", \"rel\": \"related\", \"href\": \"http://example.com/account/paul/history\" } ] } Semantic Account XML: paul Simple Account JSON: { \"username\": \"paul\", \"addresses\": \"http://example.com/account/paul/addresses\" \"history\": \"http://example.com/account/paul/history\" } Simple Account XML: paul http://example.com/account/paul/addresses http://example.com/account/paul/history "} {"_id": "205756", "title": "Am I a total misfit for Programming or is it Corporate Drama?", "text": "My background: Computer Science Engineer with around 4 and half years of experience as a programmer. In the last one year I have changed 4 programming jobs and it includes companies of size both big and small and both product and service based and in the end I have quit all four jobs. 
The following are some common issues that I face; how can I improve my situation in the next job? 1. Experienced programmers are expected to get on board and start implementing features even when I haven't fully seen how many classes are in play. Of course I understand that, being experienced, I can't take a long time to get hold of the project, but I am literally asked in the first few weeks to start giving status updates. 2. Project managers almost always say, or at least claim, that they have been coders in a different technology, sending out 2 messages at the same time: one, that they too have done programming in the past, and two, that they cannot really help me out with the current project's technical problems. 3. The big challenge the project is currently going through is directly thrown at the newcomer. The newcomer is not expected to solve it straight off, but is at least expected to give a plausible solution to something the original team has been struggling with for I don't know how long. 4. Programming is treated as though people just have solutions off the top of their heads, and the fact that almost everything in software development needs a lot of research is completely ignored. 5. One more major common observation is that every project is somehow complex in its design and the code is slightly messy to begin with. If it is pointed out that the project needs a redesign and cannot continue like a POC, it is often perceived by managers as though I am an incapable resource, even when a design change was a must for the survival of the software itself. Most managers are simply not interested in hearing no as an answer for anything, and they just want some hacking done so that they can report progress to their senior management. 6. Overall I found that all four jobs felt like the same sweatshop with different people and almost the same kind of complexity and stress. When I talk to teammates about the way I feel, they tell me that's just the way it is and we have to put up with it. I am looking for suggestions on how I can improve my current situation, and also on whether changing career path to management might help."} {"_id": "135722", "title": "Getting and maintaining data from a large number of sources on the web", "text": "For the sake of an exercise I want to aggregate price listings, with prices coming from a series of web sites, in a structured form (XML, JSON...). If I have control of both sides, how can I go about making price updates as efficient as possible? Edit: To clarify, I'm looking for a more efficient approach than having a script or application pull in price lists in their entirety from all sources for updates."} {"_id": "135724", "title": "Separating data access in ASP.NET MVC", "text": "I want to make sure I'm following industry standards and best practices with my first real crack at MVC. In this case, it's ASP.NET MVC, using C#. I will be using Entity Framework 4.1 for my model, with code-first objects (the database already exists), so there will be a DbContext object for retrieving data from the database. In the demos I've gone through on the asp.net website, controllers have data access code in them. This doesn't seem right to me, especially when following the DRY (don't repeat yourself) practice. For example, let's say I am writing a web application to be used in a public library, and I have a controller for creating, updating, and deleting books in a catalog. Several of the actions may take an ISBN and need to return a "Book" object (note this is probably not 100% valid code):
Several of the actions may take an ISBN and need want to return a \"Book\" object (note this is probably not 100% valid code): public class BookController : Controller { LibraryDBContext _db = new LibraryDBContext(); public ActionResult Details(String ISBNtoGet) { Book currentBook = _db.Books.Single(b => b.ISBN == ISBNtoGet); return View(currentBook); } public ActionResult Edit(String ISBNtoGet) { Book currentBook = _db.Books.Single(b => b.ISBN == ISBNtoGet); return View(currentBook); } } Instead, _should_ I actually have a method in my db context object to return one Book? That seems like it is a better separation to me, and helps promote DRY, because I might need to get a Book object by ISBN somewhere else in my web application. public partial class LibraryDBContext: DBContext { public Book GetBookByISBN(String ISBNtoGet) { return Books.Single(b => b.ISBN == ISBNtoGet); } } public class BookController : Controller { LibraryDBContext _db = new LibraryDBContext(); public ActionResult Details(String ISBNtoGet) { return View(_db.GetBookByISBN(ISBNtoGet)); } public ActionResult Edit(ByVal ISBNtoGet as String) { return View(_db.GetBookByISBN(ISBNtoGet)); } } Is this a valid set of rules to follow in the coding of my application? Or, I guess a more subjective question would be: \"is this the right way to do it?\""} {"_id": "141093", "title": "Best approach for a database of long strings", "text": "I need to store questions and answers in a database. The questions will be one to two sentences, but the answers will be long, at least a paragraph, likely more. The only way I know about to do this right now is an SQL database. However, I don't feel like this is a good solution because as far as I've seen, these databases aren't used for data of this type or size. Is this the correct way to go or is there a better way to store this data? Is there a better way than storing raw strings?"} {"_id": "75709", "title": "Delphi Conversion to Prism?", "text": "At my office, we are about to embark on an epic journey of converting an old Delphi 5 application to something more modern. One of our options is to convert to Delphi Prism, but we want to see what options are available to help us. So far, all I've found is Oxidizer (last worked on 2 years ago). Are there any other options for helping us convert reasonably quickly to Prism? Thanks."} {"_id": "249708", "title": "Design Pattern : Static Array/List in Class Object", "text": "I'm a CS alumni, but we didn't learn much by the way of OOP, or Design Patterns. I've heard the phrase C with iostreams and thought it fitting. Anyway that's besides the point. I am just curious about what I perceive would be a very simple design pattern. Have any of you played the game Starcraft? You know how you can select/deselect units? Back to that in a second. Suppose I have a Unit class, and I wanted it to keep track of all the units that were selected. I had the idea that I could create a static list/arrray inside the Unit class, and the units themselves could add or remove themselves from the array as they were selected/deselected. That's just one example, there could be other applications of this \"pattern\". Say you wanted to maintain a list of visible \"tabs\" in a web browser or w/e, you could have them add a reference to themselves in a a static array in the class. The idea is that you could just call Unit.GetSelected() and get a pointer/reference to all of the \"selected\" or \"opened\" units/tabs. 
I am aware that there are other, maybe better, ways of solving these problems, but that is also beside the point. What would this pattern be called?"} {"_id": "249707", "title": "Web Services of System Integration", "text": "I have been assigned to a system integration project. However, I do not understand part of the integration implementation architecture. For example, I have a passport scanner connected to a Windows client application written in C++. The client application would pass the passport picture, name, address, etc. (once the passport scanning is complete) to a WPF application. I'm not sure whether the communication between both systems should be tied to a web service, or whether there is some other way to do it. If a web service proves to be the only solution, which party is the web service provider and which is the requester? From my own perspective, the WPF application should become the web service requester. Meanwhile, the scanner client application, acting as the web service provider, would host the web service. However, in Visual Studio there is no way to add the WCF service file to the C++ client application project. Can anyone enlighten me about this?"} {"_id": "130266", "title": "what techniques can be used to implement Facebook's People You May Know feature?", "text": "I'm curious to learn from a technical perspective what techniques can be used to make recommendations for e.g. Facebook's "People You May Know" (what algorithm do they use, etc.)? One of the things that comes to my mind is the number of mutual friends. Other than this, what are the different parameters they use to make suggestions?"} {"_id": "130267", "title": "What programming language to choose for this XML and data processing task?", "text": "I currently code in PHP. Recently I've been working on a project using PHP and Symfony that: 1. reads large XML files (lots of DOM parsing/reading), 2. converts large XML files to large arrays, 3. merges 2 large arrays (lots of array sorting), 4. takes the 2 large arrays and turns them into a large CSV file. I finished it in PHP, but now it is kind of memory intensive and requires about 8-15 seconds to run. So now I have the following options and need help choosing one: 1. Try rewriting/refactoring it using better methods in PHP. 2. Choose a different programming language (I have been wanting to learn one; possibly another language processes these things a lot faster?). 3. Do 1 or 2 and additionally set up something to be constantly reading the XML files and writing them to MongoDB documents, to serve clients from the database instead of scraping the data each time. I am inclined to do 2 or 3 (using a different language), since I am sure there is another language that handles these kinds of tasks much faster (e.g. Python, C, etc.). It's just that I am not sure which."} {"_id": "130261", "title": "UUID collisions", "text": "Has anybody done any real research on the probability of UUID collisions, especially with version 4 (random) UUIDs, given that the random number generators we use aren't truly random and that we might have dozens or hundreds of identical machines running the same code generating UUIDs? My co-workers consider testing for UUID collisions to be a complete waste of time, but I always put in code to catch a duplicate key exception from the database and try again with a new UUID.
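The retry code is roughly this shape, in C# (the repository and the exception type are made up for the sketch; the real code depends on your data-access layer):

var id = Guid.NewGuid();
while (true)
{
    try
    {
        repository.Insert(id, record); // hypothetical insert call
        break;                         // success: no duplicate key
    }
    catch (DuplicateKeyException)      // hypothetical exception type
    {
        id = Guid.NewGuid();           // key already taken: retry with a fresh UUID
    }
}
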
But that's not going to solve the problem if the UUID comes from another process and refers to a real object."} {"_id": "93306", "title": "Getting started with system programming?", "text": "Ever since I discovered programming five years ago, I've done a lot of things. I've learned numerous programming languages and technologies and tried out many interesting things. I've written games, both console and with graphics, console and windowed applications that run on the desktop, CRUD web applications, my own (crappy) PHP XML-based flat file database. In addition to Web and desktop, I've tried mobile development with Android but didn't enjoy it, so I stopped that. I recently finished a Web project of mine and am learning functional programming right now (Haskell). But I have never dabbled in system programming before. The idea of building software (I'm not even sure if that's the correct terminology to use for it) at a low level that interacts with the operating system seems interesting. The problem is, I'm not sure how exactly to get started, and need more examples of what I could do with this. Should I start by learning the Win32 API? I know some C++ as I've used it to make quite a few console apps and games, but haven't used it in a few years. Is that the way to go? Also, what about C? I am planning to learn some more about C (using the K&R book) before the summer ends and college starts. I want to get a good head start as a college freshman with a solid programming background."} {"_id": "93302", "title": "Spending too much time debugging", "text": "Yesterday, I rolled out a v1.0 release of a Web project I've spent about 6 weeks working on (on and off, that is). I haven't made any exact records of my time, but according to my experiences I would estimate that out of all the time I spent programming, half of it was spent debugging. I estimate that to be about a good 15-20 hours spent debugging, which to me is precious time that could have better been spent writing new code or finishing the project earlier. It also especially doesn't help that I'll be a freshman in college in 5 weeks. The thing is, I feel bad for spending all that time debugging. All that time spent debugging makes me realize that I made some pretty stupid mistakes while I was developing my project, mistakes that cost me a damn good amount of time to fix. How can I prevent this from happening in the future? I don't want to spend 50% of my time debugging, I'd rather spend 10% debugging and the rest writing new code. What are some techniques I can try to help me reach this goal?"} {"_id": "128492", "title": "Should unit tests be stored in the repository?", "text": "I'm a growing programmer who's finally putting unit testing into practice for a library that I'm storing on GitHub. It occurred to me that I might include the test suites in the repo, but as I look around at other projects, the inclusion of tests seems hit-or-miss. Is this considered bad form? Is the idea that users are only interested in the working code and that they'll test in their own framework anyway?"} {"_id": "104048", "title": "What defines robust code?", "text": "My professor keeps referring to this Java example when he speaks of \"robust\" code: if (var == true) { ... } else if (var == false) { ... } else { ... } He claims that \"robust code\" means that your program takes into account all possibilities, and that there is no such thing as an error - all situations are handled by the code and result in valid state, hence the \"else\". I am doubtful, however.
If the variable is a boolean, what is the point of checking a third state when a third state is logically impossible? \"Having no such thing as an error\" seems ridiculous as well; even Google applications show errors directly to the user instead of swallowing them up silently or somehow considering them as valid state. And it's good - I like knowing when something goes wrong. And it seems quite the claim to say an application would never have any errors. So what is the _actual_ definition of \"robust code\"?"} {"_id": "86510", "title": "Why not XHTML5?", "text": "So, HTML5 is the Big Step Forward, I'm told. The last step forward we took that I'm aware of was the introduction of XHTML. The advantages were obvious: simplicity, strictness, the ability to use standard XML parsers and generators to work with web pages, and so on. How strange and frustrating, then, that HTML5 rolls all that back: once again we're working with a non-standard syntax; once again, we have to deal with historical baggage and parsing complexity; once again we can't use our standard XML libraries, parsers, generators, or transformers; and all the advantages introduced by XML (extensibility, namespaces, standardization, and so on), that the W3C spent a decade pushing for good reasons, are lost. Fine, we have XHTML5, but it seems like it has not gained popularity like the HTML5 encoding has. See this SO question, for example. Even the HTML5 specification says that HTML5, not XHTML5, \"is the format suggested for most authors.\" Do I have my facts wrong? Otherwise, why am I the only one that feels this way? Why are people choosing HTML5 over XHTML5?"} {"_id": "170880", "title": "Are XML Comments Necessary Documentation?", "text": "I used to be a fan of requiring XML comments for documentation. I've since changed my mind for two main reasons: > 1. Like good code, methods should be self-explanatory. > 2. In practice, most XML comments are useless noise that provide no > additional value. > Many times we simply use GhostDoc to generate generic comments, and this is what I mean by useless noise: /// <summary> /// Gets or sets the unit of measure. /// </summary> /// <value> /// The unit of measure. /// </value> public string UnitOfMeasure { get; set; } To me, that's obvious. Having said that, if there were special instructions to include, then we should absolutely use XML comments. I like this excerpt from this article: > Sometimes, you will need to write comments. But, it should be the exception > not the rule. Comments should only be used when they are expressing > something that cannot be expressed in code. If you want to write elegant > code, strive to eliminate comments and instead write self-documenting code. **Am I wrong to think we should only be using XML comments when the code isn't enough to explain itself on its own?** I believe this is a good example where XML comments make pretty code look ugly. It takes a class like this...
public class RawMaterialLabel : EntityBase { public long Id { get; set; } public string ManufacturerId { get; set; } public string PartNumber { get; set; } public string Quantity { get; set; } public string UnitOfMeasure { get; set; } public string LotNumber { get; set; } public string SublotNumber { get; set; } public int LabelSerialNumber { get; set; } public string PurchaseOrderNumber { get; set; } public string PurchaseOrderLineNumber { get; set; } public DateTime ManufacturingDate { get; set; } public string LastModifiedUser { get; set; } public DateTime LastModifiedTime { get; set; } public Binary VersionNumber { get; set; } public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; } } ... And turns it into this: /// <summary> /// Container for properties of a raw material label /// </summary> public class RawMaterialLabel : EntityBase { /// <summary> /// Gets or sets the id. /// </summary> /// <value> /// The id. /// </value> public long Id { get; set; } /// <summary> /// Gets or sets the manufacturer id. /// </summary> /// <value> /// The manufacturer id. /// </value> public string ManufacturerId { get; set; } /// <summary> /// Gets or sets the part number. /// </summary> /// <value> /// The part number. /// </value> public string PartNumber { get; set; } /// <summary> /// Gets or sets the quantity. /// </summary> /// <value> /// The quantity. /// </value> public string Quantity { get; set; } /// <summary> /// Gets or sets the unit of measure. /// </summary> /// <value> /// The unit of measure. /// </value> public string UnitOfMeasure { get; set; } /// <summary> /// Gets or sets the lot number. /// </summary> /// <value> /// The lot number. /// </value> public string LotNumber { get; set; } /// <summary> /// Gets or sets the sublot number. /// </summary> /// <value> /// The sublot number. /// </value> public string SublotNumber { get; set; } /// <summary> /// Gets or sets the label serial number. /// </summary> /// <value> /// The label serial number. /// </value> public int LabelSerialNumber { get; set; } /// <summary> /// Gets or sets the purchase order number. /// </summary> /// <value> /// The purchase order number. /// </value> public string PurchaseOrderNumber { get; set; } /// <summary> /// Gets or sets the purchase order line number. /// </summary> /// <value> /// The purchase order line number. /// </value> public string PurchaseOrderLineNumber { get; set; } /// <summary> /// Gets or sets the manufacturing date. /// </summary> /// <value> /// The manufacturing date. /// </value> public DateTime ManufacturingDate { get; set; } /// <summary> /// Gets or sets the last modified user. /// </summary> /// <value> /// The last modified user. /// </value> public string LastModifiedUser { get; set; } /// <summary> /// Gets or sets the last modified time. /// </summary> /// <value> /// The last modified time. /// </value> public DateTime LastModifiedTime { get; set; } /// <summary> /// Gets or sets the version number. /// </summary> /// <value> /// The version number. /// </value> public Binary VersionNumber { get; set; } /// <summary> /// Gets the lot equipment scans. /// </summary> /// <value> /// The lot equipment scans. /// </value> public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; } }"} {"_id": "122638", "title": "Freelancers do you have a surcharge for going to client site to work?", "text": "I'm currently new to freelancing as a programmer and need to work out what some of the \"norms\" are without making myself look like an amateur. I've already won some work from a local company doing C# development, quoting an hourly rate for work that I am doing from my office. However, on one upcoming project I've been asked to come on-site (client office) to work a full week. Is it reasonable to charge more than my regular hourly rate for working on site?
And how should I justify the extra charges?"} {"_id": "122639", "title": "How to be hired as a remote programmer abroad and not to be an entrepreneur?", "text": "The question is quite interesting to me because I have watched job ads, and mostly they all: * A) Require being located in the same country the vacancy is * B) Come from employers who don't want to hire foreign programmers if they don't have an H1B or something * C) As a rule, offer 6-month contract positions I could keep extending the list describing job ad specifications; anyway, as a rule, most positions require non-employee cooperation status. I don't have a company for that kind of \"making projects to a client's order\", so it is quite complicated. So, just for the statistics, I was trying to find out: is there a way to be hired abroad as a remote programmer, just as if I were hired in my native city? The point is not about being hired where I can be hired \"because I am located in this or that place\", but about the possibility of not relocating, which today's technologies actually should provide, especially for IT specialists in many different fields. So the question: is it possible to work for any country in remote mode, as if I were working in my own place? What do I need for that? Can you advise some useful web sites in this direction?"} {"_id": "246468", "title": "What procedural languages support algebraic data types?", "text": "One of the things I like about the new Apple language Swift is that it combines procedural programming with algebraic data types. What other languages do this?"} {"_id": "126", "title": "Should we move programmers between rooms to group them by project they're involved in?", "text": "Assume that a company runs several projects simultaneously. When a project reaches a certain state (is completed, beta is shipped, etc.) some people move to other projects, then to others, ad infinitum. It could be a good idea to assign rooms to specific projects, and make people move there as soon as they're added to the project as well. Some software methodologies, for example Scrum and other agile ones, explicitly declare that having all programmers in the same room is a boon, and benefits development. However, a group of programmers that spends their work time in a common room for years creates a constructive and strong relationship between its members. The programmers inside it always know whom to ask for advice about a specific technology, and the closeness makes people helpful and trusting. Advantage could also be taken of this closer social interaction. Should such an opportunity be missed just to arrange several peers for a specific short-term project? In other words, **how should we organize moving programmers between rooms?** Note: The best option, perhaps, is keeping each programmer in a separate office. But let's assume that the office under consideration is just not big enough for that."} {"_id": "229698", "title": "Good Programming Practice for similar child classes", "text": "I am developing an iOS application, in which I have to draw some patterns on a view based on the option selected by the user. Let me explain more clearly. The user will be shown a number of images as options to choose from. ![enter image description here](http://i.stack.imgur.com/pmxti.jpg) On selecting an option, a view will be drawn. This view inherits from a class, say `ParentClass`, where I have set up common properties for child classes.
**ParentClass** @interface ParentClass : UIView -(ImagePlaceHolder *) imageHolderTouched:(CGPoint) position; //common properties declared @end Right now, my approach is to use separate classes for each pattern. I override `drawRect:` for different patterns, so that's why I am creating separate child classes for each. **ChildDesign** @interface ChildDesign : ParentClass @end @implementation ChildDesign - (id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code [self drawViews]; } return self; } -(void) drawRect:(CGRect)rect { //different implementation for different classes } @end There are many similar child classes, just having different `drawRect:` implementations. In the main controller, I initialize the pattern child class based on selection using a switch statement. ParentClass *collView; switch (selectedOptionIndex) { case 1: collView = [[ChildDesign alloc] initWithFrame:newframe]; break; case 2: collView = [[ChildDesign2 alloc] initWithFrame:newframe]; break; //similar for other classes case 3: collView = [[ChildDesign3 alloc] initWithFrame:newframe]; break; } The problem is that I have got a lot of options, say 50 - 100. Creating separate inherited classes (50 - 100 .h and .m files) would be a really bad approach in my opinion and is bad programming practice. So what pattern should I follow here to optimize my code? How should I create my child classes/patterns? What would be the best approach? An idea in my mind is to create a single child pattern class with the selected index as a property; based on the selected index, I would implement the `drawRect:` method, but there would be a lot of if-else or switch cases. I hope my question is clear. Thanks."} {"_id": "206425", "title": "Creating a Website Without a Framework", "text": "I've been using PHP frameworks for so long that I've actually forgotten the \"best practices\" for creating websites without one. Usually I will use Symfony, or more recently I've been using Laravel. A client wants a very simple website, but with certain parts of it dynamic. Due to the nature of the site, using Wordpress or a framework is out of the question. I'm a sucker for priding myself on my code, but I feel like I'm asking such a basic question that it's killing me to ask. But, what are the best practices for creating websites without a framework? I like to live by the K.I.S.S (Keep It Simple Stupid!) method of thinking. So, my idea was to just create the .php pages that are required, do any page processing or database interaction on that page, then have the HTML below the closing PHP tag. I would have any helpers/functions in a functions.php file. This is what I remember doing way before I was using frameworks, and to me it seems like a very old school way of doing things. I've not created a site without a framework for literally 2+ years, so I've lost my way with the basics. Any advice would be greatly appreciated."} {"_id": "140944", "title": "Unix tools in business use: are they helpful?", "text": "Do you think knowing Unix tools like sed, awk, LaTeX, Perl gives you a great edge in the business world? (e.g. being a manager) From my short reflection, the only profession that needs that sort of (plain text) tool is programming, because even when I do creative writing, I rarely ever need them.
I mean, do CEOs and executives of large corporations ever learn this kind of stuff if they were not CS majors to begin with?"} {"_id": "140945", "title": "Sample code under MS-PL: must leave original comments?", "text": "I have some files in my project that started from a sample in the all-in-one code sample browser: http://visualstudiogallery.msdn.microsoft.com/4934b087-e6cc-44dd-b992-a71f00a2a6df Some files contain boilerplate code that I modify heavily. They contain MS comments at the top that mention the license, copyright Microsoft, etc. Am I required to leave the entire comment block at the top of the source files that I modify, or is it okay to just include the MS-PL license in a separate file for the whole project?"} {"_id": "125178", "title": "Do you have a find/replace list of C/C++ code improvements that doesn't cause side effects?", "text": "Time after time you have to work with code that's not as safe as you would like it to be. Either it's someone else's code, or something you wrote at 3am 5 years ago, but it happens. And in those cases it would be good to make that code a little bit safer and less prone to typing errors. At the same time, you want to avoid extensive testing, and if code is not covered with unit tests, well, let's be honest, even if it is, you can't be sure you won't break anything when doing most refactorings. So, what I'm wondering is, what would be a shortlist for C/C++ code that can be automated to almost \"find/replace all\" that would make the code safer, without a high risk of changing the program's behavior. A few things that came to my mind: * Make all member functions const * Make all arguments for internally linked functions const * Remove all C style casts and replace them with C++ style casts on errors * Turn most macros into inline functions or templates * Change enums to typed enums But most of them would need a recompile and manual verification. Do you have other candidates, and maybe even safer ones?"} {"_id": "85414", "title": "if I want to learn to build web pages, should I bother with xhtml, or go straight to html5?", "text": "Is there still a practical reason to learn xhtml? Should I learn to make my webpages in xhtml instead of html5, or does it matter?"} {"_id": "140948", "title": "Qt Certification Exams", "text": "I'm wondering about doing a Qt Certification Exam this year, but I'm not 100% sure the investment is worth it. I'm considering it because I think it could be a nice **+** on my resume, and as you know, I'm all for improving my software engineer persona. As I already hold BSc and MSc degrees in computer stuff, I guess I see the certification process as some kind of adventure. Anyway, I know I'll spend a lot of time preparing myself for the exam and I just wanted to know if a Qt certification is worth the effort. Apparently there are 2 certificates that you can get in the Qt world: * **Nokia Certified Qt Developer** (basic) * **Nokia Certified Qt Specialist** (advanced) Nowadays I build cross-platform software in C++ and this exam would fit beautifully in my resume. My main concern is that, **given the obscure future of Qt**, I might be throwing time and money out the window. I'm looking for some advice regarding the usefulness of such certifications."} {"_id": "125172", "title": "SONAR and DesignForExtensionCheck Rule", "text": "When enabling SONAR on an in-house Java project, a large number of violations are reported due to the rule `DesignForExtensionCheck`.
Whilst I agree with the theory that all classes/methods should be marked up as being either abstract, final, etc., is it strictly necessary for an in-house application where the developers have full control over the code base, unit tests, etc.? An example is a Value Object whose every getter/setter is required to be final to pass this check. When you have 500+ such issues being reported, it's a big ask to retrofit the code. Some people have stated that Java has some flaws, and one of those is that classes/methods are not final by default. Bearing in mind that we don't have final as the default behaviour, how much effort do we really have to go to, to declare everything as final? Note: I realise that APIs being used by third parties must have tighter controls. This question is levelled at in-house applications."} {"_id": "85412", "title": "What implementation of Scheme is good for studying SICP?", "text": "I heard about Dr. Scheme but haven't really used it. What is your experience with SICP, and what set of Scheme tools did you use when learning it?"} {"_id": "45826", "title": "Cross platform mobile development VS Native Mobile Development: Present And Future", "text": "I just completed one year in smartphone development, working on BlackBerry and Android, and also developed one application exclusively targeted at Nokia feature phones. And just a month ago I came to know about the Titanium Appcelerator tool that enables cross-platform development, but there are some developers who complain about its sub-par functionality. Even my little bit of experience says that developing in a native environment rather than with these cross-platform tools will give you more advantages, giving a developer a chance to add more features with better performance. Do you have the same experience? Or do you find such cross-platform tools really useful regarding advanced functionality and performance? As porting (or co-developing) the same application to different mobile platforms is a common thing nowadays, what do you think: will these cross-platform tools evolve and force developers to get a hands-on approach with them, or will the majority stick to the native development environment?"} {"_id": "45827", "title": "How to avoid getting carried away with details?", "text": "When I program, I often get too involved with details. There might be some little thing that doesn't do _exactly_ what I want it to, or maybe there's some little feature I want to add. Either way, none are really essential to the application - it's just a minor nuisance (if any). However, when trying to fix it, I may happen to spend way more time on it than I planned, and there are much more important things that I should be doing instead of dealing with this little detail. What can I do to avoid getting carried away with details, when there are more essential things that need doing? (I didn't know how to tag this question, so feel free to add whatever appropriate tags are missing.)"} {"_id": "232644", "title": "Why was \"goto\" originally supposed to be included in Java?", "text": "`goto` is still a keyword in Java, although it has no use. It was saved as a keyword in the beginning in case it was decided to add `goto` functionality in the future. My question is: Why would a modern OO programming language even take into account the possibility of including `goto` statements in the language? As I understand it, `goto` is considered harmful and even primitive. It has no use in modern programming; I have never seen it in any OO code.
Why did the Java creators even consider adding `goto`s to the language?"} {"_id": "87301", "title": "Should I use \"Business logic\" term when speaking about non-business application?", "text": "Suppose there is a part of a program that does not deal with initialisation, input, or output. It just specifies what should be done, and what is allowed or not. I use the term \"Business logic\" for this. But an application can have nothing to do with business. Example: a game. Suppose there are the following parts: 1. Input processing 2. Collision detection, physics, player control 3. Rendering the output 4. AI - How do NPCs attain the specified goal. 5. \"Business logic\" - what happens when the player touches certain objects. What types of NPCs are there and what they do when ..., concepts of \"lives\", \"ammo\", \"levels\", \"score\". But it is not business, it's just a game. Wikipedia is not clear about it."} {"_id": "83818", "title": "What does \"proxy to\" mean?", "text": "I keep coming across the word \"proxy\" used as a verb in tutorials, etc. Usually something will \"proxy to\" something else. What does this mean? Having spent some time googling for what it means in a programming context, I mostly found \"proxy server\" or some other noun use. I understand the word proxy generally means \"a stand in,\" so \"proxy to\" must mean \"to stand in for.\" Right? But I'm still confused, because it doesn't seem to be used like that. An example (from a PHP ZF tutorial): \"__get(), __set(), __isset(), and __call(): All of these methods simply **proxy to** the row instance stored in $_row. This provides an easy way to composite Zend_Db_Table_Row with our Model Resource Item.\" from Keith Pope, _Zend Framework 1.8 Web Application Development_, 2009. What does the author mean by \"proxy to\" in this context?"} {"_id": "14968", "title": "Why does adding more resource to a late project make it later?", "text": "It's a fairly common adage that adding more programmers to a late project will make matters worse. Why is this?"} {"_id": "133767", "title": "Where should I locate the cache in a WCF service?", "text": "I am going to build a Windows Communication Foundation (WCF) service using Microsoft Enterprise Library for caching. I am wondering whether or not I should put the cache in the service layer. If I do this, do I have to use `InstanceContextMode = Single` for this to work? Are there better alternatives? I would prefer using `InstanceContextMode = PerSession`. Where could I put the cache?"} {"_id": "202900", "title": "Nested entities in Google App Engine. Do I do it right?", "text": "Trying to make the most of the GAE Datastore entities concept, but some doubts are drilling into my head. Say I have the model: class User(ndb.Model): email = ndb.StringProperty(indexed=True) password = ndb.StringProperty(indexed=False) first_name = ndb.StringProperty(indexed=False) last_name = ndb.StringProperty(indexed=False) created_at = ndb.DateTimeProperty(auto_now_add=True) @classmethod def key(cls, email): return ndb.Key(User, email) @classmethod def Add(cls, email, password, first_name, last_name): user = User(parent=cls.key(email), email=email, password=password, first_name=first_name, last_name=last_name) user.put() UserLogin.Record(email) class UserLogin(ndb.Model): time = ndb.DateTimeProperty(auto_now_add=True) @classmethod def Record(cls, user_email): login = UserLogin(parent=User.key(user_email)) login.put() And I need to keep track of the times of successful login operations.
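(One payoff of giving each UserLogin the user's key as its parent, by the way, is that all of a user's logins can later be fetched with a strongly consistent ancestor query. A minimal sketch, assuming the models above; the variable name is mine:)

# fetch every recorded login for one user via the ancestor relationship
logins = UserLogin.query(ancestor=User.key('al@mail.com')).order(-UserLogin.time).fetch()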
Each time a user logs in, the `UserLogin.Record()` method will be executed. Now the question \u2014 am I doing it right? Thanks. * * * ### EDIT 2 Ok, used the typed arguments, but then it raised this: `Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'al@mail.com', first_name=u'First', last_name=u'Last', password=u'password')`. The message is clear to understand, but I don't get why the docs are misleading. They implicitly propose to use: # Set Employee as Address entity's parent directly... address = Address(parent=employee) But Model expects a Key. And what's worse, `parent=user.key()` complains that `key()` isn't callable, while I found out that `user.key` works. * * * ### EDIT 1 After reading the example from the docs and trying to replicate it \u2014 I got a type error: `TypeError('Model constructor takes no positional arguments.')`. This is the exact code used: user = User('al@mail.com', 'password', 'First', 'Last') user.put() stamp = UserLogin(parent=user) stamp.put() I understand that Model was given the wrong argument, BUT why is it in the docs?"} {"_id": "133768", "title": "Regulation of the software industry", "text": "Every few years someone proposes tighter regulation for the software industry. This IEEE article has been getting some attention lately on the subject. > If software engineers who write programs for systems that expose the public > to physical or financial risk knew they would be tested on their competence, > the thinking goes, it would reduce the flaws and failures in code\u2014and maybe > save a few lives in the bargain. I'm skeptical about the value and merit of this. To my mind it looks like a land grab by those that proposed it. The quote that clinches that for me is: > The exam will test for basic knowledge, not mastery of subject matter because the big failures (e.g. THERAC-25) seem to be complex, subtle issues that \"basic knowledge\" would never be sufficient to prevent. Ignoring any local issues (such as existing protections of the title Engineer in some jurisdictions): The aims are noble - avoid the quacks/charlatans1 and make that distinction more obvious to those that buy their software. _Can tighter regulation of the software industry ever achieve its original goal?_ 1 Exactly as regulation of the medical profession was intended to do."} {"_id": "83815", "title": "What's the deal with programming languages as strict job requirements?", "text": "I went to a \"job fair\" recently and I was surprised to see how much emphasis workplaces seem to put on the programming languages candidates are familiar with. From my (admittedly limited) experience, while truly mastering a programming language may take years, learning it to a reasonable level is a fairly simple affair for someone who already has experience with other languages, and can definitely fit within the timeframe employers usually allocate for the initial ramp-up. I'd think an employer would care more about how many languages / paradigms I am familiar with, or what's my algorithmic / software design experience, as opposed to the specific technology I'm skilled with at the moment. Say I already know Java, C++, Smalltalk and Prolog... should a workplace that relies on Objective-C really consider me unqualified because I lack experience in that language? Is this a flaw in recruiting methodologies, and if it is, what can I do to convince that workplace that my lack of experience with Objective-C should not matter?
I'm asking hypothetically, not specifically about the mentioned programming languages. Alternatively, my experience is limited and I admit I may be missing something. Is previous experience with a programming language more crucial than what I think it is? Does it make a difference if it's a junior or senior position? _Should_ it make a difference?"} {"_id": "83814", "title": "Dealing with a developer continuously ignoring edge cases in his work", "text": "I have an interesting, fairly common I guess, issue with one of the developers in my team. The guy is a great developer, works fast and productively, produces fairly good quality code and all. A good engineer. But there is a problem with him - very often he fails to address edge cases in his code. We have spoken with him about it many times and he is trying, but I guess he just doesn't think this way. So what ends up happening is that QA finds plenty of issues with his code and returns it back for development again and again, ultimately resulting in missed deadlines and everyone in the team unhappy. I don't know what to do with him and how to help him overcome this problem. Perhaps someone with more experience could advise?"} {"_id": "203360", "title": "Is there an industry standard for systems registered user permissions in terms of database model?", "text": "I have developed many applications with registered user access for my enterprise clients. Over the years I have changed my way of doing it, especially because I have used many programming languages and database types over time. Some were not very simple, such as `view`, `create` and/or `edit` permissions for each module in the application; others were as light as `access` or `can't access` for a certain module. But now that I am developing a very extensive application with many modules and many kinds of users accessing them, I was wondering if there is a standard model for doing it, because I can already see that the simple or light way won't be enough."} {"_id": "255390", "title": "Mocks and Stubs - Classes and methods?", "text": "I'm trying to get a high-level understanding of mocks and stubs. My languages/frameworks are ruby, rails, rspec, and jasmine. I heard this very high-level definition: * Mocks are used to represent objects themselves * Stubs are used for methods of instances of objects and will return a value I've heard others say, no, not at all. But I haven't heard a simple, high-level definition of them that has stuck. Most of the definitions are very wordy with lots of lower-level details. I'm not sure if there's a clear rationale about when to use which either. Sure, use a stub if you don't want to call the actual method of an instance (or it doesn't yet exist), but when does it make sense to use a mock?"} {"_id": "84457", "title": "What are the rules for coupling a ViewModel and a View in the MVVM pattern?", "text": "So given the _Separation of Concerns_, how coupled should the View and ViewModel be? For example, I want the visibility of a Control in the View to be databound (databinded?) to a flag in the ViewModel. My first hunch would be to use a boolean value, `IsControlVisible`, that returns true/false. However, in the View, visibility is set by an Enum. So I have a choice: change the property to an Enum, or use a Converter to convert the bool into the Visibility Enum. **Which is the proper approach when trying to follow idiomatic MVVM?**"} {"_id": "84455", "title": "How mature is FreeBASIC?", "text": "A friend of mine is considering using FreeBASIC in a critical production environment.
They currently use GWBasic, and they want to make a soft transition towards more modern languages. I am just worried that there might be undetected bugs in the software. I see that their version number is 0.22.0, which indicates that it is not quite mature yet. I also read this discussion, without being able to come to a conclusion. Also, on their Sourceforge pages there is no indication of whether it is Alpha or Beta (which anyway is not a very good indicator). Does anyone have their own experience of its maturity, ideas on how to judge the maturity, or knowledge of companies using FreeBASIC in a critical production environment?"} {"_id": "218687", "title": "Why \"mainstream language\" is so opposed to \"built on a small core of orthogonal features\"?", "text": "On the \"hammerprinciple.com\" website there are programming languages, statements about them, and voting that associates languages with statements. In particular, there are statements: * \"This language is built on a small core of orthogonal features\" * \"This is a mainstream language\" Why are they so opposed to each other? Is being incoherent, history-encumbered, and ridden with extra special cases a must for being a popular, \"mainstream\" programming language? Maybe it is for a similar reason to why beautiful and coherent artificial languages (like Esperanto) are not that popular?"} {"_id": "218685", "title": "In what language could I track reference counting in the simplest and safest way? (Or track existing reference counting?)", "text": "I want to count references of objects, just like you would with (some) garbage collection, but all I want to do is turn this into a graph dataset that I can then analyze or visualize. What language, even a niche one, would allow this most simply? Edit: I thought this question was super-clear, but apparently I need to give an example: var x = [] var y = x var z = 1 x.extend(z) resulting edges: x -> [1] y -> x z -> 1"} {"_id": "123580", "title": "Should you really keep your js, html and css separate?", "text": "I hear/read all the time that it is cleaner to keep your _js_, _html_ and _css_ separated. Supposedly it makes it easier to maintain and debug. Supposedly it is more efficient, because it allows caching/minifying _css_ and _js_ files. As far as I am concerned, using web frameworks (Django, Rails, ...), javascript templating libraries, ... I tend to split a single html page quite a lot into multiple reusable templates - some kind of widgets if you wish. For example I can have a _news feed_ widget, a _select multiple_ widget, etc ... each of them having a consistent layout throughout the different pages, and each being controlled by its piece of javascript. With this organization - which seems to me as clean as it can get, maximizing reusability - I have trouble understanding why it would be simpler to keep all _js_ and _css_ in separate files. I kind of think it would be so much simpler _for example_: in the _select multiple_ widget file * html * basic css layout * control of direct interactions and UX enhancements with a bit of JS. I think that way it is more reusable, much more maintainable, because you don't have to scroll through a fat _js_ file, then switch to and scroll through a fat _css_ file, switch again to your _html_ file ... back and forth. So I'd love to know how you guys organize your code, if you stick to the separation that is usually recommended. * Are there really good reasons to do so?
* Isn't it that the guides on the web usually assume that you won't use any fancy tools (in which case I'd love to get more up-to-date online readings for best practices)? * Is it just a matter of preference?"} {"_id": "218683", "title": "Transactional Memory vs Mutex and Locks", "text": "Just found out that Intel processors now have Transactional Memory support!!!! I learned about transactional operations in my DB/OS class; it is a very simple concept: the entire operation is executed, or nothing gets executed. How is programming under the new transactional memory model different from the multithreaded model that uses locks and mutexes? Does it mean that we will be getting rid of locks and mutexes?"} {"_id": "123584", "title": "Programming standards and principles to become better programmer", "text": "I am a C# developer. I have always been interested in increasing my skills and knowledge and trying to pick up new technology. However, now I want to enhance my knowledge of programming standards and principles. So, for example, I want to know about how to structure code, refactor code, coding standards and good practices, etc. Does anyone have any recommendations for books or website links?"} {"_id": "37482", "title": "Having MSc or Bsc with Experience, what's it worth in industrial environments?", "text": "I'm a fresh graduate in the Electronic & Telecommunication field, and in our university we can have major and minor fields in the relevant subjects. So, I majored in telecommunication and minored in Software Engineering. As I learned programming long before, I'm now passionate about SE and programming, and I want to move into the SE field. And I have come to know that, in industry, most companies expect candidates to have a BSc + two+ years of experience, or an MSc in the related field. [I'm referring to my surrounding environment, not all industries]. My question is: how do they consider those MSc and BSc + experience guys in industry? IMO, having an MSc is a great asset when compared to experience, because in industry you can drive into a particular technology (Java, .Net or something else), not all of them, and with an MSc, we can get the domain knowledge, not a particular technology!"} {"_id": "14281", "title": "Anyone know of any resources for someone who wants to learn WPF and F#", "text": "It's easy to find resources for learning WPF; similarly, it's pretty easy to find resources for F#. But I feel that I could save some time if I could learn them both at the same time. So can anyone recommend any books, blogs, articles, something else? (I'm familiar with functional programming, WinForms and C#)"} {"_id": "14287", "title": "Now that Apple's intending to deprecate Java on OS X, what language should I focus on?", "text": "After getting shot down on SO, I'll try this here: I'm sure you'll all know of Apple's recent announcement to deprecate Java on OS X (such as discussed here). I've recently come back to programming in the last year or so, since I originally learnt on ye olde BASIC many years ago. I have a Mac at home and a PC at work, and whilst I have Windows and Ubuntu installed on my Mac as VMs, I chose to focus my \"relearning\" on VB first (as it was closest to BASIC) and then rapidly moved to Java as it was cross platform (with minimal effort) and so it was easiest to work on code from both OSes. So my question: if the winds of change on Mac are blowing away from Java, in this post-Sun era, what would be the best language to focus my new efforts on?
Please note, this isn't a general \"which language is better?\" thread and/or an opportunity for the associated flame-war. There are plenty of those and it's not the point. I realise that in the long term one shouldn't be allegiant to an individual language, so, taking this as an excuse, the question is specifically which is going to be the quickest to become productive in given my background, whilst bearing in mind minimal portability rewrites (aspiration rather than requirement) and long-term value of usage. To that end I see the main options as: C# - Closest in \"style\" to Java but M$ dependent (unless you consider Mono of course) C++ - Hugely complex, but if even slightly conquered, then a win? Is it worth the climb up the learning curve? VB.Net - Already have the background so easiest to go back to, but who uses VB for .Net these days? Surely if using a CLI language I should use C#... Python - Cross-platform but what about UI for the end-user? **EDIT:** As a usage priority, I envision desktop application programming. Though the ability to branch in the future is always desirable. I guess graphics are the next direction once basics are in place."} {"_id": "237943", "title": "Domain Services vs. Factories vs. Aggregate Roots", "text": "After dealing with DDD for months now, I'm still confused about the general purposes of domain services, factories and aggregate roots in relation to each other, e.g. where they overlap in their responsibility. Example: I need to 1) create a complex domain entity in a saga (process manager) which is followed by 2) a certain domain event that needs to be handled elsewhere, whereas 3) the entity is clearly an aggregate root that marks a bounded context for some other entities. 1. The factory IS responsible for creation of the entity/aggregate root 2. The service CAN create the entity, since it also raises the domain event 3. The service CAN act as an aggregate root (create a 'subentity' of 'entity' with ID 4) 4. The aggregate root can create and manage 'subentities' When I introduce the concept of an aggregate root as well as a factory to my domain, a service seems no longer needed. However, if I don't, the service can handle everything needed as well, with the knowledge and dependencies it has. **Code Example** _based on the ubiquitous language of a car repair shop_ public class Car : AggregateRoot { private readonly IWheelRepository _wheels; private readonly IMessageBus _messageBus; public void AddWheel(Wheel wheel) { _wheels.Add(wheel); _messageBus.Raise(new WheelAddedEvent()); } } public static class CarFactory { public static Car CreateCar(string model, int amountOfWheels); } _..or..._ public class Car { public ICollection<Wheel> Wheels { get; set; } } public interface ICarService { Car CreateCar(args); void DeleteCar(args); Car AddWheel(int carId, Wheel wheel); }"} {"_id": "232711", "title": "Complete immutability and Object Oriented Programming", "text": "In most OOP languages, objects are generally mutable with a limited set of exceptions (like e.g. tuples and strings in Python). In most functional languages, data is immutable. Both mutable and immutable objects bring a whole list of advantages and disadvantages of their own. There are languages that try to marry both concepts, like e.g. Scala, where you have (explicitly declared) mutable and immutable data (please correct me if I am wrong, my knowledge of Scala is more than limited). My question is: **Does _complete_ (sic!) immutability -i.e.
no object can mutate once it has been created- make any sense in an OOP context?** **Are there designs or implementations of such a model?** **Basically, are (complete) immutability and OOP opposites or orthogonal?** Motivation: In OOP you normally operate **on** data, changing (mutating) the underlying information, keeping references between those objects. E.g. an object of class `Person` with a member `father` referencing another `Person` object. If you change the name of the father, this is immediately visible to the child object with no need for an update. Being immutable, you would need to construct new objects for both father and child. But you would have a lot less kerfuffle with shared objects, multi-threading, the GIL, etc."} {"_id": "232765", "title": "Simulate desktop app with Chrome application shortcut", "text": "Technically I'm building an intranet web application, with a web server running 24/7 and a central database; users can access it with any browser. However, for the user, the app should have the look and feel of a desktop application, and Chrome's application shortcut suits it best. But I don't want the end user to do the tasks of (1) installing Chrome, (2) accessing the web app by URL, and (3) creating the shortcut. Is it possible to make an installer package with Chrome inside, and automatically create the application shortcut (on the desktop and start menu)?"} {"_id": "76421", "title": "How to write freelancing in resume for programmers job", "text": "I had 3 years of full-time work experience in web development. I had to leave the job and go overseas (back home) for 1.5 years due to a personal problem, but I did some freelance work in web development and maintained a Linux VPS server. I did some certifications like CCNA, CCNP, RHCE during that time. I am going back and getting ready to apply for jobs. Now I am confused about how I can fill that 1.5 year gap. Can I write freelancing there? How should I write it? Did my worth degrade because I had to leave a full-time job for 1.5 years? Or do I really need to write that I was away? Can I write that I was freelancing and that now I want a permanent job?"} {"_id": "7652", "title": "Identifying programming languages by a piece of code", "text": "Let's say you've got some piece of code and you have no idea which language it was written in. What are the unique \"signature\" syntax constructions for each programming language that you can use to quickly determine the code's origin? _(PS. Mentioning only one language in an answer probably would be a good idea. Having more than one answer for the same language is fine; let's vote for the best one)_"} {"_id": "7656", "title": "Do you ever write code with pen and paper, and should we do it more often?", "text": "Like the title says, do you ever write code with pen and paper? If so, why? Do those who don't do it have anything to gain by doing so more often?"} {"_id": "159601", "title": "How to document and peer review design in scrum?", "text": "I am a relatively new developer in a small business (a team of 3 developers and an equally small QA branch) working on a medium-sized system. The current iteration is still under 100k lines of code (server & client combined), but I could imagine that, in the long term, the total size could be 200 kLOC or more. We are attempting to use Scrum for development, and we are working towards a CMMI level 2 appraisal. We are adopting peer review methods to verify our software design and source code.
We elicit software requirements during the sprint planning meeting, and we document the software requirements in a master SRS. This also gives us a start on the software design, but we don't have a formal method for reviewing design concepts, such as OO design, UI design, re-usable design patterns, and more. For our source code, we are trying new techniques, such as using spreadsheets to document over-the-shoulder reviews and e-mail pass-around reviews, but it can be difficult for the reviewer to understand the design concepts from just looking at the source code. _(Please excuse me if I am misrepresenting concepts; we are attempting a lot of this from scratch.)_ We are not averse to using UML to express classes, objects, software interfaces, event patterns, or other design concepts, but we are not sure when or how to peer review our design efforts. Often, a developer can be 70+% complete with a user story and realize that a fundamental design element needs to be changed (and, subsequently, peer reviewed). In an attempt to avoid open discussion on the topic and promote concise answers, I'll try to propose two specific questions: 1. Does anyone know of any good resources (i.e. books, papers, articles) on the best practices of peer reviewing design concepts? 2. I have read that the code itself is the (implementation of the) design. Can the peer review of source code be utilized as the peer review of design? Thank you."} {"_id": "226329", "title": "scoping concern when dealing with coupling", "text": "I'm learning ruby (and OOP in the process) and I find myself repeatedly having to write the same patterns when logging progress, so I want to wrap this up in a logging library that my other code can then just pass data to - say a string, a file name and a log level. This of course couples the logging library with other code - other code still has to be aware of how the logging library works (that it wants the string, name, & level). I tend not to like coupling like this, but it seems like I can't really prevent it, but rather have to deal with it by consciously deciding to scope the library's use to an area of concern \"ie: snoweagle's personal tech projects\". Is this good design or is there a better way to approach coupling?"} {"_id": "226328", "title": "Getting List of objects of same class", "text": "I have a class 'Task' which represents any Task that is done by an individual employee. For this class I wrote properties like EmpID, Priority, etc. and methods like `AddNewTask()`, `ArchiveTask()`, `MoveTaskToUser(int userid)`, etc. Then I wrote a method public List<Task> GetAllTasks() This seemed incorrect to me with regard to OOP. I thought any single task object should represent one single task, and it doesn't seem right to call `taskObj.GetAllTasks()` because the object taskObj should carry out methods on the single Task that it represents. So I changed the method to static: public static List<Task> GetAllTasks() Now I call `Task.GetAllTasks()`. Is this the proper way to follow an object-oriented approach? Was the earlier idea of calling the method from a single object correct?"} {"_id": "219088", "title": "Multiple schemas vs aditional column per customer", "text": "If you have a different database per customer, as there is no need to share any data between them, does the same apply if a customer has multiple stores? I believe that it is easier to do queries for statistics (some require a 4-6 table join) if they are in a single schema rather than multiple schemas.
But everyone in the team (we haven't done anything yet) believes that cross-schema queries are the way to go. They claim that Postgres supports them well. Which is the way to go?"} {"_id": "219089", "title": "Terminating a freelance contract before completion", "text": "I've been building a large web application for a company. Initially the project owner appointed a Graphic/Web designer to complete all design/front end work, with me and whoever I bring in completing the back end and all the view binding at the front. The initial front end/graphics guy dropped out and was replaced with a (better) graphic designer who ISN'T producing the html/css required. This deficit caused many months of overrun, with me having to try to introduce front end devs to complete the missing pieces. On top of that, numerous articles of the initial spec have been changed and several items added. A month or so ago, after receiving an incredibly aggressive phone call from the project owner, I decided I was no longer interested in being involved in the project. Obviously all the source code, all documentation etc. belong to the client for them to bring in another party. I had at this point been paid 2/3 of the agreed sum, and had also shouldered hosting costs myself. The client's response was immediately to threaten me with legal action, claiming I'd be breaching our contract by not completing and that I had already breached our contract by running over on delivery dates. He also stated that he felt it would be impossible to replace me on the project because of the obscure technology I'd selected to build it (ASP.Net MVC). He suggested he would attempt to recoup all monies paid so far and damages that he \"calculates\" (see: made up) to be about \u00a370k. I do a lot of work for a company whose MD is a corporate lawyer, who's confirmed that technically both parties are in breach of contract, but that it's justified by the unforeseen changes in replacing designers etc... I've continued working until now. My (a little long-winded) question is: morally and practically, what would you do? I'm owed 1/3 of the remaining contract and hosting fees (about \u00a33500 total). They have lined up another .Net dev to build on the out-of-spec items, as I've stated they're not my responsibility. The site is just undergoing bug/acceptance testing/fixing, which the client doesn't believe they should have any involvement in. Should I stick around for the cash, which also ties me into any corrective maintenance for 2 months? Or do I write it off as a \"make sure to re-agree terms when big things change\" lesson?"} {"_id": "226320", "title": "On which abstraction level would you do TDD?", "text": "**Problem** I find myself nailing the class structure down by having too many unit tests, which makes making changes hard. **Example** Assume we have a class A which uses classes B1 and B2. Class B1 uses classes C1 and C2. ![enter image description here](http://i.stack.imgur.com/XtOXG.png) I start off by writing tests first for class A. During that process, classes B1 and B2 are created. To make sure that classes B1 and B2 work, I start writing tests for B1 and B2. Same for C1 and C2. Now, I end up having unit tests for all the different levels of abstraction. Assume class A was, for example, a payroll system, and classes B1, B2, C1 and C2 came into existence while building A. **Question** Where did I go wrong in my process? Shouldn't I have written tests for levels B and C? Should my unit tests in level A cover all the functionality in levels B and C?
What is the process which will lead to the \"correct\" unit tests and a flexible design? Perhaps you can make an example with two or more levels of abstraction and show me your process and the code (including unit tests and production code and design) that comes out of your process."} {"_id": "212052", "title": "Unit testing methods which access an Internet API", "text": "Hi, I'm developing a Wordpress plugin which accesses a couple of APIs (Amazon Product API, Flickr, Freebase, Ebay). I've already started writing unit tests for it, but I'm still wondering if it's really the way to go, since it takes a lot of time downloading data from the API, primarily because my download speed isn't that fast (around 2 Mb/s max). Currently it takes 2 hours or more to run the whole test suite, and sometimes I even reach the API limit and the tests won't pass. Is there a better way of doing this? Perhaps automating the tests, or even uploading them somewhere with a faster download speed. Do I really need to be testing methods which access the API to begin with? Can I just make use of mock data instead of querying the API directly? Thanks in advance!"} {"_id": "219081", "title": "PHP class data implementation", "text": "I'm studying OOP PHP and have watched two tutorials that implement a user login\\registration system as an example, but the implementations vary. Which is the more correct way to work with data such as this? 1. Load all data retrieved from the database as an array into a property called something like _data on class creation, and have further methods operate on this property 2. Create separate properties for each field retrieved from the database, load all data fields into their respective properties on class creation, and operate with those properties separately? Then let's say I want to create a method that returns a list of all users with their data. Which way is better? 1. A method that returns just an array of user data like this: Array([0]=>array([id] => 1, [username] => 'John', ...), [1]=>array([id] => 2, [username] => 'Jack', ...), ...) 2. A method that creates a new instance of its class for each user and returns an array of objects"} {"_id": "37155", "title": "What is a reasonable workflow for designing webapps?", "text": "It has been a while since I have done any substantial web development and I'd like to take advantage of the latest practices, but I'm struggling to visualize the workflow to incorporate everything. Here's what I'm looking to use: * CakePHP framework * jsmin (JavaScript Minify) * SASS (Syntactically Awesome StyleSheets) * Git **CakePHP:** Pretty self explanatory, make modifications and update the source. **jsmin:** When you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin outputs of javascript files that have changed? Assume that I have no knowledge of implementing commit hooks. **SASS:** I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so, at some point, the SASS code needs to be transformed to normal CSS. At what point in the workflow is this done? **Git** I'm terrified to admit it but, the last time I did any substantial web development, I didn't use SCM source control (i.e., I did use source control but it consisted of a very detailed change log with backups).
I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how best to implement it for web development. Is it common practice to implement a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to implement a remote repository? Do I need SSH access, etc.? I know how to accomplish this on my own testing server with a remote repository and a separate remote tracking branch already, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet. **Extra:** I was considering implementing a JavaScript framework where the separate JavaScript files used on a page are compiled into a single file for each page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there already an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big hit to performance), I'm guessing that there is some wizard hacker on the net who has already addressed this issue."} {"_id": "212056", "title": "Implementing semantic actions in table driven predictive parsing", "text": "When doing table-driven predictive parsing on an LL(1) grammar (as explained in good detail here, for example), how can we augment the algorithm to allow processing of semantic actions while parsing? One technique could possibly use the same stack in the algorithm, by defining the production rules with markers for the semantic actions, pushing these markers on the stack the same way it's done with terminals/non-terminals in the rest of the production rule, and then processing the code designated by the marker when it gets popped off the stack. So given something like this (from the Dragon book - actions in curly braces): rest -> + term { print('+') } rest | - term { print('-') } rest | Epsilon term -> 0 { print('0') } | 1 { print('1') } ... | 9 { print('9') } We can add a marker for each type of action, push it onto the stack with the rest of the production's symbols, and then execute the corresponding code when the marker is popped off by the algorithm. Is this how it is done, or are there better approaches?"} {"_id": "212057", "title": "What is an example of a continuation not implemented as a procedure?", "text": "An interesting discussion about the distinction between _callbacks_ and _continuations_ over on SO has prompted this question. By definition, a continuation is an abstract representation of the logic needed to complete a computation. In most languages this manifests as a one-argument procedure to which you pass whatever value needs continued processing. In a purely functional language (where all functions are pure and first-class citizens), I would think a continuation could be entirely modeled as a function. This is, after all, how I've understood continuations up to this point. However, the world is full of state (sigh..) and so the general definition does not require that a continuation capture program state -- it need only encompass intent.
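(To make the "one-argument procedure" manifestation above concrete, here is a minimal continuation-passing sketch in Java; illustrative only, and deliberately not the more abstract representation asked about next:)

```java
import java.util.function.Consumer;

public class CpsDemo {
    // Instead of returning a result, the function hands it to "the rest of
    // the computation", represented as a one-argument procedure.
    static void add(int a, int b, Consumer<Integer> k) {
        k.accept(a + b);
    }

    public static void main(String[] args) {
        add(1, 2, sum ->                 // continuation of the first addition
            add(sum, 40, total ->        // continuation of the second addition
                System.out.println("total = " + total)));
    }
}
```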
To help my understanding, can an example be provided in a functional language where the continuation is expressed in a more abstract way than a function? I know Scheme allows you to grab the current continuation in a first-class manner (call/cc), but even so, it seems that the one-argument procedure passed to call/cc is simply given the current continuation in the form of another one-argument procedure, to which the call/cc'd function can apply its result."} {"_id": "83100", "title": "Learning barriers for beginners towards a programmer's mind (Study)?", "text": "I stumbled upon this study (of The Development of Programming Ability and Thinking Skills In High School Students) ( **link only opens correctly with a left-click, not in a new tab?! dunno why** ) claiming that students seem to have serious problems learning how programming (abstraction into algorithms, ...) actually works. What do you think about this, and does somebody have links to newer, freely available studies investigating the learning process of programming newbies? I'm wondering if this is mainly due to poor teaching, or to the more abstract and difficult-to-learn languages of the past, like C. Of course, ability in mathematics likely also plays a bigger role, but that shouldn't be a fundamental hurdle, should it? Is there maybe a connection to how your mother tongue is structured (grammar, syntax, vocabulary)? Is it known whether programming newbies with different mother tongues adapt faster to different programming languages? Are there programming languages specially oriented towards, for example, Chinese or German speakers? Is the emergence of more and more new programming languages a sign of individual adaptation to the syntax a given programmer likes? I would suppose that, from a theoretical computer science view, there should be a limit, based on usage and the different problem types, on how many different programming languages are really needed. But they seem to grow on and on."} {"_id": "245341", "title": "How to get Datasource JNDI access to work with WebLogic reliably (no NameNotFoundException)?", "text": "In a WebLogic web application (bundled and deployed as a Jar) I sometimes get a NameNotFoundException when I try to fetch a datasource using a JNDI lookup. The datasource is not shown in the JNDI tree then, but it can be seen under Services -> Datasources. It did work as expected, and I don't think I have changed anything since then. It seems like a race condition or something similar might play a role. This does not seem to be a common problem, which makes me think that I might be missing something very basic. I went through the wizard to create the datasource (the datasource and JNDI names are identical) and I tested it. Again, the programmatic access did work properly as well. try { Context context = new InitialContext(); DataSource dataSource = (DataSource) context.lookup(datasourceName); connection = dataSource.getConnection(); } catch (NamingException e) { throw new ConnectionException(e); } catch (SQLException e) { throw new ConnectionException(e); } Are there any best practices or completely different ways to work with datasources? Anything you can think of that might make a difference?"} {"_id": "202014", "title": "What is the origin/meaning of the name 'NHibernate'?", "text": "I've been poking around the web because I was curious as to why they called it that, but I haven't found anything yet. Does anyone know?
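(An aside on the WebLogic datasource question above: if the failures really are a startup race, one common workaround is to retry the lookup briefly before giving up. A sketch only; the class name and retry parameters are made up, and this does not address the underlying configuration issue:)

```java
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public final class DataSourceLookup {
    // Retry the JNDI lookup a few times in case the datasource is not
    // bound yet when our code first runs (hypothetical workaround).
    static DataSource lookupWithRetry(String name, int attempts, long waitMillis)
            throws NamingException, InterruptedException {
        NamingException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                Context ctx = new InitialContext();
                return (DataSource) ctx.lookup(name);
            } catch (NamingException e) {
                last = e;                 // remember the failure and wait
                Thread.sleep(waitMillis);
            }
        }
        throw last;                       // assumes attempts >= 1
    }
}
```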
_(I checked the FAQ and it wasn't clear whether questions on history/origins were OK to put here or not, but I figured it wouldn't be a good fit for Stack Overflow; feel free to start closing it if it is out of scope and I will delete it.)_"} {"_id": "40508", "title": "Stopping endless technical discussions and making a decision", "text": "I always come across people who like to bang on for ages over the smallest \"technical things\". Don't get me wrong, I'm a geek programmer who loves what I do, but you know the type of conversation. * Mac is so much better than Windows * Don't use a For Each loop, use a While loop * Don't buy an Intel-based PC, get an AMD-based one. * We should use one IoC container over another. All these \"things\" have valid pros and cons on both sides, and you'll never get a \"correct\" answer, and the person will never concede the point. (Of course there will be some where there is an answer, maybe :). My question (I'm getting there!!) is: in a software team, how do you cut through these long discussions, without inhibiting innovation, so that a decision can be made and you can get on to solving the real business problems?"} {"_id": "44888", "title": "Is there a Windows philosophy of programming?", "text": "I've been programming in both Unix and Windows environments. Mostly I've worked in Unix, where I've learned the Unix Philosophy, which can be summarized as * Write programs that do one thing and do it well. * Write programs to work together. * Write programs to handle text streams, because that is a universal interface. There seems to be a clear difference in programming cultures between the Unix and Windows worlds, for example: * GUI vs CLI * Registry vs config files * Lots of tools specialized for any given need vs a group of generic orthogonal tools which can be combined **Is there an equivalent of the \"Unix philosophy\" in the Windows world?** What can a Unix programmer learn from Windows, or what should they be aware of when moving to programming on Windows? I would like answers to focus on the best practices of Windows programming (and not a fight between Windows and Unix)."} {"_id": "14422", "title": "You make websites? So then you must have heard about Web 2.0, right?", "text": "The other day someone asked me what type of work I do. I answered that I help to build websites (ASP.NET, C#, mostly). So he said \"so then you must do a lot of Web 2.0 stuff, right?\". I proceeded to respond by trying to define Web 2.0 as a change in the way that websites were designed 5+ years ago, incorporating more client-side scripting, AJAX and a standards-based approach to building websites. I then said that almost anyone doing web design/development today who has stayed up to date with new standards and techniques over the past few years should be using \"Web 2.0-like\" techniques, and that it is not a really accurate statement to describe a website today as being \"Web 2.0\". (The person with whom I was talking was a programmer, but in a totally different domain - C++/VoIP stuff.) How do you answer (well-intentioned) people who try to talk to you about programming using the catchphrases of yesteryear? (\"Web 2.0\" could apply for web development; I am sure that there are similar terms in other areas as well.) Just nod and say yes? Try to explain yourself?"} {"_id": "198405", "title": "Checklist for coding MVVM web application", "text": "We are a small team working on a web application using the MVVM design pattern and technologies like .NET, Knockout and HTML.
I am trying to come up with a code review checklist for this, so that my team can make use of it while coding as well as while reviewing. I tried searching Google, but found no help there. Can you suggest what I should look for in a code review checklist?"} {"_id": "236216", "title": "alternate approach of binary serialization/de-serialization", "text": "Is it possible to convert a list of objects directly to a byte[] (and vice versa) to gain performance (by avoiding serialization/de-serialization)? What I have in mind is that a list is somewhere in memory (on the heap). If I could read the bytes on the heap for my list, I could just assign them to a variable (of byte[]) in my program. Is this possible? And if yes, how would I get back the original list from the byte[] without de-serializing using BinaryFormatter?"} {"_id": "207620", "title": "What are the downfalls of MVC?", "text": "I've been using MVC/MV* since I started actually organizing my code years ago. I've been using it so long that I can't even think of any other way to structure my code, and every job I've had since being an intern has been MVC-based. My question is, what are the downfalls of MVC? In what cases would MVC be a bad choice for a project, and what would be the (more) correct choice? When I look up MVC alternatives, nearly every result is just a different flavor of MVC. To narrow down the scope so this doesn't get closed, let's say for web applications. I work on the back end and front end for different projects, so I can't say just front end or back end."} {"_id": "131852", "title": "Never use Strings in Java?", "text": "I stumbled upon a blog entry discouraging the use of Strings in Java on the grounds that they make your code lack semantics, suggesting that you should use thin wrapper classes instead. These are the before and after examples the said entry provides to illustrate the matter: public void bookTicket( String name, String firstName, String film, int count, String cinema); public void bookTicket( Name name, FirstName firstName, Film film, Count count, Cinema cinema); In my experience reading programming blogs, I've come to the conclusion that 90% is nonsense, but I'm left wondering whether this is a valid point. Somehow it doesn't feel right to me, but I can't exactly pinpoint what is amiss with this style of programming."} {"_id": "131857", "title": "Teaching Classes and Objects", "text": "I'm trying to teach a buddy of mine how an object is just an instance of a class. However, he doesn't seem to understand it so well. I've heard a ton of the examples (blueprint of a house, etc.), but does anyone have a really concrete way of teaching this?"} {"_id": "201661", "title": "Placing elements in graph with a streaming/online algorithm", "text": "We have a stream of points, with about 1000 points per second. For each point, we have a complex vector (hundreds of dimensions). Our goal, for each point, is to link it to the 5 closest points that we've already seen. We determine \"closest\" by computing a distance (Euclidean or other) between 2 points. Obviously, in a perfect world we would have enough money and time to compute the distance between a new point and each of the points we've already seen, keeping the 5 closest. The world is not perfect and we're looking for a solution. Has anyone worked with this before?"} {"_id": "201669", "title": "How can a programmer contribute to planetary science?", "text": "I'm just getting back from WWDC13, and had the opportunity to see Bill Nye speak at the last day's lunch session.
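(Circling back to the \"Never use Strings in Java?\" question above, a minimal sketch of what one such thin wrapper might look like; the validation shown is an illustrative addition, not from the cited blog entry:)

```java
// A thin wrapper type: the compiler now stops bookTicket(film, name, ...)
// argument mix-ups, and invalid values are rejected at construction time.
public final class FirstName {
    private final String value;

    public FirstName(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("first name must not be empty");
        }
        this.value = value;
    }

    @Override public String toString() { return value; }
}
```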
Of course, I grew up watching him, and left incredibly excited about what lies ahead for humanity ... but at the same time, I'm left wondering how I, as a \"lowly\" software engineer, can contribute to progress in the space industry. Of course, there are some obvious choices: I could donate cash to the Planetary Society. Or I could go to college and start doing physics research (I never went, though I am really happy with my career arc over the last 14 years). So, I guess it's a bit of an open-ended question, but what areas of opportunity might be available to someone with more time and skills than money who still wants to find a way to contribute?"} {"_id": "232762", "title": "When internationalizing texts, should underscores (for keyboard shortcuts) be part of the texts to be translated?", "text": "I have a project where I need to translate a lot of words and text fragments into different languages. In these texts, underscores are currently added to the text fragments to mark keyboard shortcuts (as used in Visual Studio). However, I was wondering if this is the way to go, or if it would be better to remove the underscores and add them AFTER translation. Is there some guideline to follow?"} {"_id": "207993", "title": "Karger's algorithm for bin-packing?", "text": "I first came across that algorithm as the \"random minimum cut\" algorithm. And recently a colleague was trying to pack a large quantity of small textures into one image file. Then it clicked: why not use Karger's algorithm for this packing problem? I don't know a good way to map this problem to a graph defined in order to minimize the wasted space between images. Here I define the minimum cut as \"the atlas with the fewest blank filler pixels\". So I need to generate a graph that represents a \"distance\" between images as an edge. Any suggestions on what type of distance to use? Here is the first idea I had: Given an image A(w, h), where w is width and h is height, if B(w', h') exists with w = w' or h = h', create an edge between A and B. (The same rule with a certain tolerance could be used, like + or - 1%.) I would be very surprised if I am the first one to think about this. So if anybody knows about anything similar that has been done, please do say so."} {"_id": "207990", "title": "Recommended sprint length when adopting Agile?", "text": "I'm new to a small development company (half a dozen programmers, which should possibly grow to a dozen eventually). We also have a few external contractors working with us (just to add a bit of complication). I would like to slowly start adopting Agile, or more importantly short sprints. I was curious to know, from your past experience, what you would recommend our initial sprint length (in weeks) be? Maybe I can even go as far as asking what the sprint length of a very mature and experienced Agile team is (shorter [2 weeks] or longer sprints [4 weeks])? My initial thought was to use a shorter sprint, say 2 weeks, and then in time, once we get the hang of things and all has smoothed out, we could simply double it to 4 weeks."} {"_id": "160932", "title": "Why are the rpm and deb package formats not unified into one standard system?", "text": "I asked this question on Stack Overflow, but it was closed _\"as not a real question\"_.
So I decided to remove all the rambling and post the question here, assuming that this Stack Exchange is for _\"conceptual questions about software development\"_. The main question is in the title: **Why are the rpm and deb package formats not unified into one standard system?** I think this is a very important step that we need to take, because such format fragmentation is bad, and not only for independent software developers, but also for other tools and languages which reinvent their own packaging, like vim, python, ruby, nodejs, php, eclipse and many, many others"} {"_id": "230624", "title": "What should I do when I've already waited too long between commits?", "text": "I was naughty... Too much \"cowboy coding,\" not enough committing. Now, here I am with an enormous commit. Yes, I should have been committing all along, but it's too late now. What is better? 1. Do one very large commit listing all the things I changed 2. Try to break it into smaller commits that likely won't compile, as files have multiple fixes, changes, additional method names, etc. 3. Try to do partial reversions of files just for appropriate commits, then put the new changes back. Note: as of right now I am the only programmer working on this project; the only person who will look at any of these commit comments is me, at least until we hire more programmers. **By the way:** I am using SVN and Subclipse. I did create a new branch before doing any of these changes. **More information** : I asked a separate question related to how I got into this situation in the first place: How to prepare for rewriting an application's glue"} {"_id": "96211", "title": "What is a faster alternative to a CRC?", "text": "I'm doing some data transmission from a dsPIC to a PC, and I'm computing an 8-bit CRC over every block of 512 bytes to make sure there are no errors. With my CRC code enabled I get about 33 KB/s; without it I get 67 KB/s. What are some alternative error-detection algorithms to check out that would be faster?"} {"_id": "88905", "title": "Find next occurrence in a Tree", "text": "I have a tree of variable depth and width. What is the best algorithm to find the next occurrence of a node in that tree? Next = search to the right side of the tree (as in breadth-first search). The selection criterion for the next occurrence is changeable in different situations. For example, in one place I want to find the next occurrence of a node whose value equals the current node's; in another, I want to select the next value that is less than the current value. Once the next occurrence is found, the program can terminate and return the value or node. 5 6 9 10 7 0 5 6 8 5 6 9 5 Suppose I have a pointer to the node (depth = 4, value = 5, parent = 10)... when I perform the search I want to get the pointer to the node (depth = 4, value = 5, parent = 0). Say this is not there; then I want to get the node (depth = 1, value = 5, the root node)."} {"_id": "225674", "title": "Why define a Java object using interface (e.g. Map) rather than implementation (HashMap)", "text": "In most Java code, I see people declare Java objects like this: Map hashMap = new HashMap<>(); List list = new ArrayList<>(); instead of: HashMap hashMap = new HashMap<>(); ArrayList list = new ArrayList<>(); Why is there a preference to define the Java object using the interface rather than the implementation that is actually going to be used?"} {"_id": "103083", "title": "How can variables be created at runtime?", "text": "Is it possible to define variables dynamically?
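(On the CRC question above: Fletcher-style checksums are the textbook \"cheaper than a CRC\" candidates, trading some error-detection strength for a byte-at-a-time loop with only additions. A sketch of Fletcher-16, shown in Java for readability even though the question's target is a dsPIC:)

```java
public final class Fletcher16 {
    // Fletcher-16 over a block: two running sums modulo 255.
    // Weaker than a good CRC, but costs only an add and a mod per byte,
    // with no per-bit loop and no lookup table.
    static int checksum(byte[] data) {
        int sum1 = 0, sum2 = 0;
        for (byte b : data) {
            sum1 = (sum1 + (b & 0xFF)) % 255;
            sum2 = (sum2 + sum1) % 255;
        }
        return (sum2 << 8) | sum1;
    }

    public static void main(String[] args) {
        // Known test vector: Fletcher-16 of "abcde" is 0xC8F0.
        System.out.printf("0x%04X%n", checksum("abcde".getBytes()));
    }
}
```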
Last night I was writing some code (C and VB2010) and I ran into a problem related to defining variables in my program. The variables needed depend on the number of entries added by the user and the number of users. Even using dynamic memory allocation we have a size limit, but I wonder about this type of thing sometimes. We can define as many variables as we need (up to a limit, of course) when writing the code, but can't we define more at runtime? For example: say your whole program is done, and the user enters 'x'; this cannot be created as a variable, rather we require a variable to store that x. Why can't we make x a variable at runtime? Can't we build a bridge between compile time and runtime with some rules?"} {"_id": "225673", "title": "Implementing an interface from a framework vs simple java interface", "text": "This concept is unclear to me. I have worked with several frameworks, for instance Spring. To implement a feature we always implement some interfaces provided by the framework. For instance, if I have to create a custom scope in Spring, my class implements the org.springframework.beans.factory.config.Scope interface, which has some predefined low-level functionality that helps in defining a custom scope for a bean. Whereas in Java I read that an interface is just a declaration which classes can implement and define their own functionality for. The methods of an interface have no predefined functionality: interface Car { int topSpeed(); void accelerate(); void decelerate(); } The methods here don't have any functionality; they are just declared. Can anyone explain this discrepancy in the concept? How does the framework attach predefined functionality to interface methods?"} {"_id": "103086", "title": "What happens if I develop a program taking inspiration from another GNU GPL application?", "text": "My friend and I are planning to develop a tiny and simple CMS by taking some inspiration from an already existing GNU GPL CMS. We would take the ideas and try to develop it as simply as we can; is that possible? We are planning to use the same licence, the GNU GPL. Is it a good choice? What are the conditions for doing this?"} {"_id": "131122", "title": "I feel I'm a good junior programmer, but how to become a good programmer who writes beautiful code?", "text": "As a beginner, I can say that in about six months I have grasped many basic programming concepts very well. I began with PHP, then went to C, then to Java, and I know the procedural and object-oriented concepts quite well. Most importantly, I really enjoy programming; it's much more a mental challenge than a duty for me. The reality is that I improved my skills 100% (and more) by spending lots of time on the Project Euler website (and I should be ashamed to say I have only solved the first 30 problems, but all by myself without any external help, and I am proud of it). So I could think I'm on the right track. But at the moment I feel I'm in a phase where I don't know how, and above all what, to do next. In these months I also followed two courses, one on web design (I know how to write CSS and HTML markup, and also know the basics of graphics processing), and I could set up a basic site easily. The second course (which will finish in May) is going well, meaning I have no problem learning what the teachers tell me, and sometimes I find better solutions than my PHP teacher (PHP being the language I'm most accustomed to, I probably find it easier).
But apart from all the basic \"garbage\", all the simple exercises people do when learning and when beginning, I don't know how I'm supposed to go on. Many great coders say that I just have to begin to code, and code, and code, and I know that's true, but code what? If I try to think about a real program, one you could be proud of, I don't believe I know where I should begin. I also tried to read a couple of source codes, and understanding them is one of the most difficult processes I have found, and definitely depressing, as you think: \"omg, how ignorant am I?\"; furthermore, my ideas are not clear; I would like to set something up but don't know how to put together all the things I can think of, and I always think I'm forgetting something. I tried to read some documents and books about object-oriented best practices, UML, and design patterns, but it seems that, although all those readings make great sense to me, it stops there; I can't use any of it, because I have this feeling I am still not good enough, and worse, I am desperate to find the right track, to the point that I think perhaps I'm not intelligent enough and programming is too difficult a task for me. Perhaps I should have a real job in some company where I could learn from other, more experienced programmers, but companies hire only experienced programmers, so what then? Please, can you point me to some intermediate-level activity I can take on and enjoy in order to overcome this state? Thanks."} {"_id": "170305", "title": "Incorporating GPL Code in my Open Source Project", "text": "I have downloaded a currently inactive GPL project with a view to updating it and releasing the completed codebase as open source. I'm not really a fan of the GPL, though, and would rather licence my project under BSD. What are my options? Is it just a case of keeping any existing untouched code under the GPL, while any updated stuff can be BSD (messy)? The source will essentially be the same codebase, i.e. there is no logical separation between the two, and they certainly can't be split into anything resembling different libraries. Are my only realistic options to either GPL the whole thing or seek the original author's permission to release everything under BSD?"} {"_id": "170304", "title": "Should I try to write simple key-value storage by myself?", "text": "I need a key-value storage in the simplest form we can think of. Keys should be fixed-length strings; values should be texts. This key-value storage should have an HTTP-backed API. That's basically it. As you can see, there is no big difference between such storage and a web application with some upload functionality. The thing is, it'll take a few hours (including tests and coffee drinking) to write something like this. \"Something like this\" will be fully under my control and can be tuned on demand. Should I, in this specific case, not try to reinvent bicycles? Is it better to use one of the existing NoSQL solutions? If yes, which one exactly? If, say, I needed something SQL-like, I wouldn't ask and wouldn't try to write something by myself. But with NoSQL I just don't know what is adequate and what is not."} {"_id": "228040", "title": "Model and ViewModel for View", "text": "I am new to the MVVM pattern. I have a window which has 3 text boxes (`Name`, `Address`, `Description`), a save button, and a listview which displays the above fields. When the save button is clicked I want to save the fields into the database as well as show the record in the listview.
How do I design my Model and ViewModel for this interface?"} {"_id": "228773", "title": "Object Calisthenics - reducing to two attributes", "text": "I'm refactoring an expense tracker system using Object Calisthenics. I'm able to bring my class down to five attributes. How do I go forward from here? This is my class right now. public class Expense { private Identifiers ids; private AmountInCurrency amountInCurrency; private Remarks remarks; private UserList userList; private ExpenseDate expenseDate; } Identifiers has the attributes expenseId and reportId. AmountInCurrency has an amount and a currency. Remarks has a string with remarks. UserList has a List of User objects, where User has only a userName (String). ExpenseDate is a Date object. Also, I have a SQL database where I'm storing the contents of the Expense object. Should I directly pass the Expense object and retrieve the primitives, or should I create an entity object which takes this Expense object as a constructor argument? NOTE: This is not production code; I'm doing this just as an exercise."} {"_id": "201596", "title": "Public-key cryptography security given NSA resources", "text": "I was wondering how secure public/private key encryption methods are. If two individuals were sending emails back and forth forever, where each person would encrypt the body of the email they were sending with the other person's public key, would anyone be able to decrypt the body of those emails after a while? I'm assuming that each person creates their own private-public key pair, and then if you wanted to send someone an email you would grab their public key and encrypt your message before sending it to them. They would then decrypt it with their own private key. Wikipedia (http://en.wikipedia.org/wiki/Public-key_cryptography) claims that it's effectively impossible to decrypt email if all you know is the public key. But is this really true? What if you had the resources of the NSA? Are they not able to brute-force the decryption?"} {"_id": "43081", "title": "Where should I start and how to progress when learning Java EE", "text": "I know basic stuff, like what beans, JSP, servlets and JSF are, and how this stuff should work together. I know how to make a basic JSP page with a database query, for example. Now I need to know what the best path is to learn all this stuff. My plan is to learn in this order: 1. JSP (including persistence and JSTL) 2. servlets + beans 3. JSF 4. the jump to frameworks (Hibernate, Struts, Spring, etc.) Also, I'm not exactly sure about JSF: is it a must for making great pages, or is it just a convenience to know?"} {"_id": "43083", "title": "python vs php (for project managers)", "text": "Up to now, as a developer I have preferred Python for web programming and scripting. Now I will manage some projects. I know that finding developers who know PHP is easier than finding developers who know Python. I have a background in Python, so if the developers use Python, I will be able to control and lead more easily. This is a trade-off. What is your suggestion?"} {"_id": "215538", "title": "Representing and executing simple rules - framework or custom?", "text": "I am creating a system where users will be able to subscribe to events, and get notified when the event has occurred. Examples of events can be phone call durations and costs, phone data traffic notations, and even stock rate changes.
Examples of events: * customer 13532 completed a call with duration 11:45 min and cost $0.4 * stock rate for Google decreased by 0.01% Customers can subscribe to events using some simple rules, e.g. * when the stock rate of Google decreases more than 0.5% * when the cost of a call on my subscription is over $1 Now, as the set of different rules is currently predefined, I can easily create a custom implementation that applies rules to an event. But if the set of rules could be much larger, and if we also allow for custom rules (e.g. when the stock rate of Google decreases more than 0.5% AND the stock rate of Apple increases by 0.5%), the system should be able to adapt. I am therefore thinking of a system that can interpret rules using a simple grammar and then apply them. After some research I found that there exist rule-based engines that can be used, but I am unsure if they fit our needs, as they seem more appropriate for business logic, as hinted in Does Your Project Need a Rule Engine and the Jess guidelines. Is there an open source Java framework suited for this area? If anyone can point me in the right direction, or recommend some system, that would be great! **Edit:** Thoughts on a custom solution: I am thinking of using an event-to-rule mapping, where each rule is associated with a set of events. Then, when an event occurs, I know which rules can be applied. If each rule is represented by its own class, then triggering the rule simply means executing the class. Pros: Simple and easy implementation. No need to interpret events to find rules. Cons: Each new rule requires coding. Cannot do deduction analysis. If you see any major flaws in this approach please do tell me!"} {"_id": "118342", "title": "How can I convince my colleague to unit test his code?", "text": "> **Possible Duplicate:** > Colleague unwilling to use unit tests \"as it's more to code\" I've been trying for the last couple of months to convince one of my colleagues to start unit testing his code and drop the old \"print, run, debug\" way of doing things. I need clear and elaborate proof that unit testing increases your productivity - this guy has a decent amount of experience and can give a counter-argument for all of the arguments I have given him so far. Unit testing is not a policy that's being enforced in the team, but it is something that most of us do, and it has definitely worked for us, and we can see how print, run and debug isn't working for him - it's taking way too long to implement something and it's taking even longer to manually test his code."} {"_id": "238392", "title": "MVC: Where should I store interchangeable algorithms used by the Model (whose names also need to be accessible to the View)?", "text": "Please consider a program where the user chooses an algorithm from a list, and the Strategy pattern is utilized to set this algorithm as the model's operation. For example, an image processing application. There are a number of algorithms that can be used to manipulate an image (darken, brighten, contrast, etc.). They are all encapsulated in objects, and the controller sets the model to use an algorithm from this list. The algorithms also need to be presented by name on the UI. This is how it will work: The user can select an algorithm from a list on the GUI. When he/she does, the controller is notified, and sets the suitable algorithm in the model. Later on, when `model.operate()` is called, the model will delegate the operation to its current algorithm. Classic Strategy pattern.
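(A minimal Java sketch of such a named-strategy setup; all names are made up for illustration, and where this registry should live is exactly the open question discussed next:)

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

interface ImageAlgorithm {
    void operate(); // stand-in for the real image manipulation
}

// Holds the interchangeable strategies behind their display names.
// Whether this belongs to the model, the controller, or a class of its
// own is the design question that follows.
class AlgorithmRegistry {
    private final Map<String, ImageAlgorithm> byName = new LinkedHashMap<>();

    void register(String name, ImageAlgorithm algorithm) {
        byName.put(name, algorithm);
    }

    Set<String> algorithmNames() {      // what the view needs for its list
        return byName.keySet();
    }

    ImageAlgorithm get(String name) {   // what the controller needs to set
        return byName.get(name);
    }
}
```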
Also, _the view needs access to the names of the algorithms_, in order to allow the user to choose one from a list. My question is this: **Should I store the algorithms somewhere, where the controller will fetch them from when it needs to set an algorithm in the model? Or should the controller just instantiate a new algorithm whenever it needs to set an algorithm in the model?** If the answer is that it's good to store the algorithms somewhere, then where? In the model? That makes sense, because the model encapsulates the business logic and data of the app. But how will the view fetch the names of the algorithms? In the controller? That makes sense, because the controller is the one that sets the algorithms in the model, and it could also offer the view a method `getAlgorithmNames()`. But technically it shouldn't hold business-logic objects, which is what the algorithms are. In a different class, from which anyone interested can fetch the algorithms or their names? I'm sure there have been a lot of applications where the user selects an operation and the appropriate algorithm is set in the model. How do other applications do this?"} {"_id": "215539", "title": "Algorithms or patterns for a linked question and answer cost calculator", "text": "I've been asked to build an online calculator in PHP (and the Laravel framework). It will take the answers to a series of questions to estimate the cost of a home extension. For example, a couple of questions may be: * What is the lie of your property? Flat, slightly inclined, heavily inclined. (These suggestive options could map to specific values in the underlying calculator, like 0 degrees, 5 degrees, 10 degrees.) * What is your current flooring system? Wooden, or concrete? These would then impact the results of other questions. Once the size of the extension has been input, the lie of the land will affect how much site works will cost, and how much rubbish collection will cost. The second question will impact the cost of the extension's flooring, as stumping and laying floorboards is a different cost to laying foundations and a concrete slab. It will also influence what heating and cooling systems are available in the calculator. So it's VERY interlinked. The answer to any question can influence the options of other questions, and the end result. I'm having trouble figuring out an approach to this that will allow new options and questions to be plugged in at a later stage without having things too coupled. The Observer pattern or Laravel's events may be handy, but currently the sheer breadth of the calculator has me struggling to gather my thoughts and see a sensible implementation. Are there any patterns or OO approaches that may help?"} {"_id": "159706", "title": "Does current JIT optimize generated machine codes for branch prediction based on runtime statistics?", "text": "Some JVMs compile Java bytecode into native machine code. We know that there are lots of optimizations we could apply for that. Recently, I also learned that a branch operation may stall the CPU and affect performance significantly if the CPU makes a wrong prediction. Does anyone know if any JVM generates machine code that is easier for the CPU to predict correctly, based on the runtime statistics collected?"} {"_id": "157275", "title": "Including local headers first", "text": "So I read up on the ordering of your includes, and this guy suggested you include your local header first so as to make sure it doesn't have prerequisites. Ok, I get that. I'm on board.
The whole compartmentalization thing is good. But I've got this file, file.c, which includes its `file.h`, which declares functions to save files and passes around the `FILE*` type used by `fopen` and friends. If I include `file.h` before I include `stdio.h`, then there's an obvious parsing error when building, because `file.h` doesn't know about the `FILE*` type. I know I've got to be missing something dirt simple, but I can't formulate this into something Google can use. Should I be doing something different in `file.h`? Is this simply something that needs to be included in a specific order? Thoughts?"} {"_id": "65139", "title": "Should data structures be integrated into the language (as in Python) or be provided in the standard library (as in Java)?", "text": "In Python, and most likely many other programming languages, common data structures can be found as an integrated part of the _core language_ with their own dedicated syntax. If we put LISP's integrated list syntax aside, I can't think of any other languages that I know which provide some kind of data structure above the array as an integrated part of their syntax, though all of them (but C, I guess) seem to provide them in the standard library. From a language design perspective, what are your opinions on having a specific syntax for data structures in the core language? Is it a good idea, and does the purpose of the language (etc.) change how good a choice this could be? **Edit:** I'm sorry for (apparently) causing some confusion about which data structures I mean. I'm talking about the basic and commonly used ones, but still not the most basic ones. This excludes trees (too complex, uncommon), stacks (too seldom used), and arrays (too simple), but includes e.g. sets, lists and hashmaps."} {"_id": "103558", "title": "What is the difference between a prototype and a production level solution?", "text": "This question is purely for learning and to step up my technical understanding. I know there is no perfect solution, and this question could have a never-ending list of answers, but I think it's very important for every architect to understand the difference between a demo and a live project. I have created many demo solutions in .NET in the past. I have now been assigned to architect and implement a production-level web solution, so I wanted to ask: on a very high level, what is required to convert a demo into a production-level solution? From my understanding, this will require (other than functionally implementing clients' requirements): 1. Unit testing every method 2. Ensuring ~100% code coverage is achieved 3. Logging all exceptions and possible pointcuts - possibly with AOP 4. Using the interface design pattern and dependency injection, possibly by using a framework, e.g. Spring.NET 5. Using performance counters and profilers for instrumentation 6. Applying appropriate security - i.e. Windows authentication (if that's what's required by the client). 7. Transaction management on every single method 8. Backing up the web application files before each new deployment **What else?** My question is more related to the technical side than the functional/documentation side, because otherwise we will go down another path :-) Thank you."} {"_id": "213017", "title": "Is POST for an element generally not exposed or invalid in REST APIs?", "text": "I was browsing around Wikipedia on REST, reading specifically the section on REST APIs. Reading about the different ways to treat elements and collections, I read that POST is _not generally used_.
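(On the REST question above: the common convention behind that table is that POST targets the collection, where the server assigns the new URI, PUT targets an element at a known URI, and POST on an element is usually just not exposed, so the framework answers 405 Method Not Allowed. A minimal JAX-RS-flavored sketch under those assumptions, with made-up paths and names:)

```java
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/users")
public class UsersResource {

    @POST // POST /users: create a new element; the server picks the id
    public Response create(String body) {
        return Response.status(Response.Status.CREATED).build();
    }

    @PUT
    @Path("/{id}") // PUT /users/42: replace the element at a known URI
    public Response replace(@PathParam("id") long id, String body) {
        return Response.noContent().build();
    }

    // No @POST method is declared for /users/{id}; a POST there is
    // rejected with 405 by the JAX-RS runtime, i.e. "not exposed".
}
```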
How does that work in practice? When successful REST APIs are developed, when it comes to elements, does POST just not exist, or does it return an error? Is there a reasoning behind this that I haven't grasped?"} {"_id": "214044", "title": "Is it practical to write exit codes in a script where the outcome is more complex than success/fail?", "text": "Where I work, we're in the process of automating a lot of tasks that currently need to be run manually by an IT person to determine if the next task can be performed (the second task depends on certain outcomes of the first task). We're ending the first phase of our script automation process, having reworked the first task to have better support for logging, error handling and even emailing a report, but we will not be able to get to reworking the second task right away, so it will remain in its current state of manual use. In the meantime, it has been suggested that we implement exit codes in the first task so that we can simply chain the second task, in its current state, onto a successful first task. The only issue here is that the first task is a little more complicated than just success/fail. There are many different parts that can pass, which will allow the second task to be run a certain way. Now, I know none of this is impossible with exit codes. We could take the time to implement the first task to return all these different exit codes, so that the second task can be run in a semi-automated fashion based on these exit codes, as a temporary solution towards making the entire process automated until we have the time to go in and rework the second task the correct way. My question is: is it practical to take the time to implement exit codes in a script as a temporary solution to a larger problem, where the script is more complex than success/fail? Note: The exit codes will take some time to implement, as our rework of the first task was not designed to return exit codes (it is a self-contained script that handles its own logging and error reporting). Edit: It is a given that we **have** to go back and rework the second task in such a way that the two tasks can be run independently of each other (their outcomes/state will be shared through a database). Edit: In general, I know it is useful to have exit codes, but my current situation involves spending a considerable amount of development time to go back and implement the exit codes properly (again, because the outcome of the script is quite complex, with many different levels of success/fail), when they aren't part of the overall solution, but a temporary hack to our new system in order to \"fake\" it as a completely automated system until we can actually complete the rework. I guess it's more of a \"Is the time spent to implement this hack, while not a complete solution, worth it if we have to go back and undo the hack later?\" question."} {"_id": "213014", "title": "How to gracefully handle unsupported browsers?", "text": "Recently I have started experimenting with some of the newer W3C specifications such as WebGL and the Web Audio API that are not yet widely supported in browsers. As I want my page to look professional, I need a way to treat the unsupported browsers gracefully. Now I was wondering: * Is there a standard or a guideline that details what a person should see when their browser does not support a certain page? * Are there any pitfalls I should look out for when implementing a not-supported message?
* And considering the extremely fast pace at which browsers release new, updated versions, how can I ensure that my script still works in future browser versions that actually do have the previously unsupported feature, without blocking them?"} {"_id": "155605", "title": "Is executing SQL through a WebService a really bad idea?", "text": "Typically, when creating a simple tool or something that has to use a database, I go through the fairly long process of first creating a webservice that connects to a database, then creating methods on this webservice that do all the types of queries I need... methods like List GetUsers() { ... } User GetUserByID(int id) { ... } //More Get/Update/Add/Delete methods Is it terrible design to simply make the webservice as secure as I can (not quite sure of the way to do something like this yet) and just make a couple of methods like this: SqlDataReader RunQuery(string sql) { ... } void RunNonQuery(string sql) { ... } It would sorta be like exposing my database to the internet, I suppose, which sounds bad, but I'm not sure. I just feel like I waste so much time running everything through this webservice; there has to be a quicker yet safe way that doesn't involve my application connecting directly to the database (the application can't connect directly to the database because the database isn't open to any connections but localhost, and where the application resides the standard SQL ports are blocked anyway), especially when I just need to run a few simple queries"} {"_id": "127466", "title": "Best practices in comment writing and documentation ", "text": "Commenting nowadays is easier than ever. In Java, there are some nice techniques for linking comments to classes, and Java IDEs are good at making comment shells for you. Languages like Clojure even allow you to add a description of a function in the function code itself as an argument. However, we still live in an age where there are often obsolete or poor comments written by good developers - I'm interested in improving the robustness and usefulness of my comments. In particular, I'm interested in Java/Clojure/Python here, but answers don't need to be language-specific. Are there any emerging techniques that validate comments and automatically detect either \"flimsy\" comments (for example, comments with magic numbers, incomplete sentences, etc.) or incorrect comments (for example, detecting misspelled variables or the like)? And more importantly: are there accepted \"commenting policies\" or strategies out there? There is plenty of advice out there on how to code - but what about \"how to comment\"?"} {"_id": "150230", "title": "Atomic Memcache Operations in PHP", "text": "This post is a follow-up to this question: PHP Atomic Memcache on StackOverflow. Considering I am using Memcache (no d at the end) on PHP 5.3.10, I implemented a custom locking system where a client will wait until a lock key is destroyed before it begins to modify a key on memcache. So: Client 1 connects, checks for an active lock on key 1, finds none, and gets the data. Client 2 connects a few microseconds after Client 1, requests the same data from key 1, but finds a lock. Client 2 enters a retry loop until Client 1 releases the lock. Client 1 saves new data to key 1 and releases the lock. Client 2 gets the fresh data, sets a lock on key 1, and continues. This works 90% of the time. It would work 100% of the time if the two requests were made far apart from each other (say 500ms).
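(A side note on the locking scheme above: the usual fix is to acquire the lock with an atomic add, which the server guarantees only one client can win, instead of the racy check-then-set. A sketch of that idea in Java with spymemcached; illustrative only, since the question's stack is PHP, where the equivalent call is Memcache::add:)

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class LockDemo {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
            new MemcachedClient(new InetSocketAddress("localhost", 11211));
        String lockKey = "lock:key1";
        // add() is atomic on the server: exactly one client can create the
        // key, so there is no check-then-set window to race through.
        boolean acquired = client.add(lockKey, 30, "1").get(); // 30 s expiry
        if (acquired) {
            try {
                // read, modify and write key1 here
            } finally {
                client.delete(lockKey); // release the lock
            }
        } else {
            // someone else holds the lock: back off and retry
        }
        client.shutdown();
    }
}
```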
But let's say two requests are made at almost the same time (10 to 100 microseconds apart): the above solution fails, and both clients write to the same key, resulting in incorrect data. I have tried many things, including a loop that varies its wait time every iteration: while(/*lock key exists*/) { usleep(mt_rand(1000,100000)); } This helps only a little. What would be the solution to this particular issue? These memcache operations must be atomic. I am willing to tolerate a 1% failure rate (since it means I just need to work a little harder to make it 0), but anything more is just too risky. I've racked my brain trying to figure this out. There is no possibility to upgrade to Memcached, and the value changes are not simple (they are not increments)."} {"_id": "87480", "title": "How to give life to my idea which belongs to my company?", "text": "I wonder what options I have in the following situation. In the course of several projects I realised the need for an auxiliary software product (related to testing of the main products). I applied a creative approach to the matter and implemented a system which I think has potential and looks promising (maybe not on the market, but at least among some interested supporters). I have even more ideas related to this system and continue developing it in my free time and at work. It has become work and a hobby at the same time. Unfortunately, this work basically has nothing in common with the company's business, and there is no way this will be organized in the form of a standard development process and be presented to customers as a product. What can you suggest in this situation? How do I avoid breaching my contract? Have you had something similar in your career? What if my intention is to develop it as an open source project?"} {"_id": "150236", "title": "Quantifying the price for source code and software product", "text": "I'm about to undertake a project. This requires me to write code, and tons of it. The client's requirement is to hand in all source code at the end of the project. My question is: How do I quantify the price for the source code and the software product? Is there any metric that one follows to determine pricing? How would I quantify the software product? _Extra info:_ The application must run anywhere, on any OS, including tablets (iPad, Galaxy tabs, etc.), smartphones (iPhone, Android phones, etc.) and also on the web. _(Now, imagine how much code this will be)_."} {"_id": "82471", "title": "C# type system and dynamic type", "text": "I'm writing a paper about the C# (and Go) type system with a focus on the dynamic aspect. Does anybody have suggestions for papers/literature? The things I found don't go into much detail. I would like to add some paragraphs on how it's implemented, comparisons with other languages, speed, memory allocation, etc."} {"_id": "153768", "title": "Reflective practice in programming using keystroke playback", "text": "I'm thinking of applying Reflective Practice to improving my programming skills. To that end, I want to be able to watch myself writing code. In general, what is a good method for applying Reflective Practice to the craft of programming? In particular, if it's a good idea, is there an editor that records keystrokes and then plays them back at a later time - possibly running the keys together without delays, or replaying at a 2x/4x/8x accelerated rate? Screencasting with RecordMyDesktop is an option, but it has the downsides of waiting for encoding and ending up with a big video file instead of a list of keystrokes.
* * * _Update_: From \"watching myself code\" I expect to learn what kinds of mistakes I make most frequently, or where I waste time while coding. Then I can work on improving those aspects. It could be certain formatting, syntax or runtime errors, or maybe long pauses that indicate I hadn't considered some issue before I started coding, or maybe that I rewrite entire functions because my initial design was wrong. I understand that there's a lot more to programming than the act of writing code, and that this won't capture all of it. As recommended, I should make more design notes and reflect on those too. Recording keystrokes may be more helpful for improving my technique in time-limited programming contests, and less helpful for improving day-to-day programming at the office."} {"_id": "60560", "title": "Are you a functional programmer?", "text": "Are you a functional programmer? By that I mean * employed full time * paid by someone else * using a recognized functional language (Haskell, Scala, Erlang, F#, etc., not just using FP techniques in an imperative language like JavaScript) If so, what do you do, and how did you get there?"} {"_id": "120114", "title": "Studying parallel programming", "text": "I'm currently finishing my Bachelor's degree in Computer Science and thinking a lot about which specialisation to choose in my Master's degree. One subject I'm particularly interested in is parallel programming. However, this does not seem to be a standard topic in Computer Science degrees, although it is something that is used more and more - new processors nowadays usually have dual or quad cores. So I was wondering: does anybody know a good study program in this field? I was mostly looking for it at universities in Germany, but they tend to combine the application side with some type of engineering or natural science. Thus, programs are more of the \"Computational Engineering\" or \"Computational Science\" type, but I'm more interested in the Computer Science part of it, i.e. parallel programming, languages and compilers, algorithms and hardware."} {"_id": "120116", "title": "Is a program linked against an LGPL library in Linux still under GPL?", "text": "If I were to write, say, an embedded Linux video server, how much of the code do I have to offer to someone who requests the source? Do I have to offer the code that directly links against the GPL-covered code, or do I have to offer all code? For instance, if I use gstreamer, or any other LGPL code, on a Linux platform in my program, does all of my code come under the GPL simply because, somewhere in the chain, the LGPL program had to link against GPL code? I guess this is an extension of the question: can you write a C library that compiles on Linux that does not become subject to the GPL?"} {"_id": "235394", "title": "Should refactoring be the exception or the rule?", "text": "I had a discussion with a co-worker yesterday about design philosophy. The other coder is more experienced than me, and I fully admit that he is likely much better at properly automating his testing, an area where I'm just now trying to break sloppy habits. However, it seems that some of our dispute is a philosophical issue where I'm unwilling to simply yield to his greater experience. Generally he is a directing programmer, who follows all of the formal approaches. He has interfaces for nearly every class and has automated tests for everything, down to getters and setters on basic POJO objects.
He sees refactoring as a dirty word, a sign that things weren't written/designed properly from the start; it\u2019s inevitable and done as necessary, but everything should be done to avoid having to do it, and you should feel bad when you have to. I feel I'm a more flexible/enabling programmer. I feel that interfaces should be used as necessary, but not thrown in until they are needed (KISS), and that it's easy enough to pull an interface out of an existing object by refactoring if you decide an interface/polymorphism makes more sense later. I also felt that, while I admit I need to be better at testing than I am currently, the level of automated tests he suggested seemed too rigorous, since it would take so long to implement and, more importantly, would make refactoring a nightmare, since I wouldn't be free to modify the behavior of any non-private method. As my requirements grow and change I find myself refactoring heavily, and that level of testing feels like it would kill my flexibility. His view is that I shouldn't _be_ refactoring, so it doesn't matter if the tests make it harder to do. Ultimately, all of the other philosophical discussions seemed to boil down to the word 'refactor'. I love refactoring. I feel it should be used liberally any time you feel that something isn't quite as clean and beautiful as it should be, and that the natural course of growing a program or adding new requirements will necessitate refactoring to better enable code reuse, moving logic to new classes/packages as it grows more complex, and yes, fixing the inevitable design mistakes which will happen with a flexible agile development approach (and that\u2019s not always bad if your code is written to allow modification!). He disagrees with all of that. I'm wondering what others' thoughts are on refactoring. How common should it be? Is writing code with a philosophy of \"I'm doing it simple now, I can refactor it later if more complicated logic proves necessary\" inherently flawed/dangerous? Does the acceptability of refactoring depend on programming approach and style? I generally work on rapid prototyping or \u201cclean up this ugly, but working, prototype some non-engineers wrote\u201d tasks, while it sounds like he actually gets to do formal programming with official requirements that don\u2019t change; I\u2019ve never once had a job like that myself!"} {"_id": "235395", "title": "Kibibyte or Kilobyte to represent 1024 bytes", "text": "I am working on a project with several older programmers, and we were doing up the documentation of the program when I had a heated debate with one of the older programmers regarding the term `Kilobyte`. He wanted to stick to the old way of using `Kilobytes` to represent 1024 bytes, as he said most programmers understand the term Kilobyte to mean 1024 bytes. I wanted the new way of using `Kibibyte` to represent 1024 bytes, as we can't forever be stuck in our old ways, and this has been updated by the community. How should I resolve this conflict with him so as to appease both of us?"} {"_id": "235393", "title": "Is there a named antipattern for unclear API not exposing the requirements?", "text": "In the source code I'm evaluating (jarjar), there exists Java code that can be used like this: JarJarTask fixture = new JarJarTask(); fixture.addConfiguredRule(new Rule()); fixture.execute(); which will throw an exception like: java.lang.IllegalArgumentException: The element requires both \"pattern\" and \"result\" attributes.
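(A sketch of the reworking argued for just below; this is hypothetical code, not jarjar's actual API. The required attributes become constructor parameters, so forgetting one is a compile error rather than a runtime IllegalArgumentException:)

```java
// Hypothetical reworking of the Rule class, not jarjar's actual API.
public final class Rule {
    private final String pattern;
    private final String result;

    // Both required values must be supplied to get an instance at all;
    // the null checks guard against explicitly passed nulls.
    public Rule(String pattern, String result) {
        if (pattern == null || result == null) {
            throw new IllegalArgumentException("pattern and result are required");
        }
        this.pattern = pattern;
        this.result = result;
    }

    public String pattern() { return pattern; }
    public String result() { return result; }
}
```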
Now, to me it is clear that the API is less than optimal here - if the rule element requires these attributes, it should ask for them in the constructor (or provide a Builder or similar that would require them). But I was unable to find coding conventions or java programming recommendations to agree with me. Is there some specific name for this that I can use to look this up in research, programming literature, style guides? Which terms could I use to find some discussion on this? Surely it's been discussed somewhere using some terms, but I seem to be unable to find anything. * * * I did find discussion around constructor injection as presented by Kent Beck in Smalltalk Best Practice Patterns and supported by Martin Fowler, which, according to this, makes it \"immediately clear what a class requires when it is instantiated, and furthermore it is impossible to instantiate the class without passing in the field\u2019s objects\". So that's a starting point at least to find some discussion."} {"_id": "173146", "title": "C++ Building Static Library Project with a Folder Structure", "text": "I'm working on some static libraries using visual studio 2012, and after building I copy .lib and .h files to respective directories to match a desired hierarchy such as: drive:/libraries/libname/includes/libname/framework drive:/libraries/libname/includes/libname/utilities drive:/libraries/libname/lib/... etc. I'm thinking something similar to the boost folder layout. I have been doing this manually so far. My library solution contains projects, and when I update and recompile I simply recopy files where they need to be. Is there a simpler way to do this? Perhaps a way to compile the project with certain rules per project as to where the project's .h and .lib files should go?"} {"_id": "46107", "title": "Do I need to notify a user if I am using statistics software in an iPhone app?", "text": "I am currently creating a (very simple) Objective-C client to send basic statistical data to my server for an iPhone app - just things like the state of the app (first-launch or launch, error, etc), along with the make/model/version (i.e.: \"iPod touch 4.2\"). No personally identifiable information or location data is sent. Is there anything, in the Apple Developer agreement or otherwise, that states that I must notify the user if I am doing this? I'm not interested in selling the data or anything, I just want to use the data to make my apps better. I am not averse to telling the user I am doing this if it is required, I just don't want to scare the users (the paranoid \"oooh, they're tracking me, they know exactly where I am\" crowd) if I don't have to. Thanks for any advice."} {"_id": "173141", "title": "Start with open source desktop application and move to iPhone/Android app", "text": "I'm a high schooler and I am competing in an open source software development competition. It must be a desktop application that runs on either Windows or Linux. I have a great idea for the open source desktop app, and I wanted to know if I could take it farther and port it to the iPhone or Android platform and make money (preferably through a $.99 cost, not ads). I read somewhere that certain open source licenses allow me to do this...
am I correct?"} {"_id": "226265", "title": "Best way to display domain object summary information efficiently and in an OO way from a large inheritance tree?", "text": "**I've provided only simplified code as it's more of an abstract design question.** So I have many, many nested business/domain event objects, e.g. public class Event { //bunch of properties and standard accessors } public class ExplosionEvent extends Event { //properties and standard accessors } And many more of these at different levels. If I need information about any given object chosen, I display it in HTML like so private String generateHTML(Event event) { StringBuilder sb = new StringBuilder(); sb.append(\"\");//simplified sb.append(\"\"); sb.append(\"\"); if (event instanceof ExplosionEvent) { //append HTML and ExplosionEvent specific data } // ...many, many more calls like the one above sb.append(\"
    Time\" + event.getEventTime() + \"
    \"); } As I have many event types, this means loads of duplicated HTML table tags and uses of `instanceof` so `generateHTML` is hundreds of lines long, split into methods of course but still, it's a lot of code that makes this hard to understand, navigate and therefore maintain. This is ugly and I need a better design for this. I had the idea of creating a method on `Event` which is overridable by all sub methods //Using LinkedHashMap to preserve order as Properties wont do that public LinkedHashMap getAttributes() { LinkedHashMap list = new LinkedHashMap(); list.put(\"Time\", eventTime); } sub-classes then override this, call their parent and add date specific to them to the list meaning that no matter how many `Event` classes there is, the existing `generateHTML` method (external to `Event` objects) will then simply be one small loop private String generateHTML(Event event) { StringBuilder sb = new StringBuilder(); b.append(\"\"); for (Entry entry : event.getAttributes().entrySet()) { sb.append(\"\"); sb.append(\"\"); } sb.append(\"
    \" + entry.getKey() + \"\" + entry.getValue() + \"
    \"); } Is this putting too much logic in business/domain objects? **Is there a better way**?"} {"_id": "162399", "title": "How essential is it to make a service layer?", "text": "I started building an app in 3 layers (DAL, BL, UI) [it mainly handles CRM, some sales reports and inventory]. A colleague told me that I must move to service layer pattern, that developers came to service pattern from their experience and it is the better approach to design most applications. He said it would be much easier to maintain the application in the future that way. Personally, I get the feeling that it's just making things more complex and I couldn't see much of a benefit from it that would justify that. This app does have an additional small partial ui that uses some (but only few) of the desktop application functions so I did find myself duplicating some code (but not much). Just because of some code duplication I wouldn't convert it to be service oriented, but he said I should use it anyway because in general it's a very good architecture, why programmers are so passionate about services?? I tried to google on it but I'm still confused and can't decide what to do."} {"_id": "136671", "title": "How to get contributors assign copyright for copyright holder?", "text": "Let's say we have a company and we release our project under LGPL && GPLv3, as you know the only thing here is about contribution. People commit bug- fixes/features to the project, it's fine and we are GPL fans. Basically company makes money by this project. So what about contributors ? I know they commit code and software get better and it is actual repay to them, But how we can get contributors assign copyright for copyright holder?"} {"_id": "238877", "title": "Overriding GetHashCode in a mutable struct - What NOT to do?", "text": "I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since its a **mutable struct**. As Marc Gravell, John Skeet, and Eric Lippert point out in their respective posts about `GetHashCode()` (which Point overrides), this is a rather bad thing, since if an object's values change while its contained in a hashmap (ie, LINQ queries), it can become \"lost\". However, I am making my own `Point3D` struct, following the design of `Point` as a guideline. Thus, it too is a mutable struct which overrides `GetHashCode()`. The only difference is that mine exposes and int for X, Y, and Z values, but is fundamentally the same. The signatures are below: public struct Point3D : IEquatable { public int X; public int Y; public int Z; public static bool operator !=(Point3D a, Point3D b) { } public static bool operator ==(Point3D a, Point3D b) { } public Point3D Zero { get; } public override int GetHashCode() { } public override bool Equals(object obj) { } public bool Equals(Point3D other) { } public override string ToString() { } } I have tried to break my struct in the way they describe, namely by storing it in a `List`, as well as changing the value via a method using `ref`, but I did not encounter they behavior they warn about (maybe a pointer might allow me to break it?). 
Am I being too cautious in my approach, or should I be okay to use it as is?"} {"_id": "191478", "title": "Designing models for a generic service layer", "text": "We are building a web interface to a tiered membership system, which will interface with a third-party CRM web service for the creation and management of accounts. The web service, unfortunately, is not yet built; however, we need to begin work. I have created an interface, `IMembershipService`, in which I am beginning to define \"best-guess\" prototypes, so we can begin building our User Controls. Most of these methods will return some data bundled in a Model object, e.g.: ContactModel GetContact (string userId); When the web service methods become available, I will create a concrete implementation of `IMembershipService` that will wire up the controls to the web service. The problem I have is that I don't yet know whether the web service will consist of: * calls returning complex objects; e.g. a `User` object with a nested `Membership` object, which, in turn, has a nested `PaymentMethod` object * simple calls for specific pieces of information; e.g. `String GetUserMembershipType (string userId);` This is causing me to have trouble specifying the structure of the models and interface, which is causing problems for the developers beginning work on the User Controls: * If the service returns complex objects, I don't want my `IMembershipService` methods to be too simple, forcing me to use multiple web service calls where it is not necessary. * If the service consists of simple calls, I don't want to have a load of complex models defined that I then can't implement, thereby having to do a load of refactoring. In theory, creating `IMembershipService` should allow me to abstract away from the actual nature of the web service, but the fact that each call to a method in `IMembershipService` will, ultimately, result in a web service call, thereby adding overhead, is making this difficult to spec. How can I design my models and `IMembershipService` in order to minimize the amount of refactoring I have to do when the nature of the web service becomes less elusive?"} {"_id": "162392", "title": "Value passing by reference", "text": "What's the key difference between passing by reference and passing by value in PHP? function (&$var1, $var2) { $var1[$x]... } For a function that passes one array by reference and another by value, what would the impact be?"} {"_id": "99305", "title": "What's better right now: IronRuby or IronPython?", "text": "I have plenty of experience with both Ruby and Python, and I'm looking to embed either one of these in a C# application I'm developing. I don't really care which, but I'd like to know which one currently has better support and is overall less buggy, and is the easiest to implement and deploy. I also took a good look at Boo, which I'm also messing around with. C# is too complicated and verbose, so I'm leaving that as the last option. As for what I'm doing, I need to extend some classes in a friendly and dynamic way, without the need to recompile. They will also need to have access to a data structure created on the C# side of the code. So, any recommendations on which I should use?"} {"_id": "99307", "title": "jQuery - programming style - many bindings vs. conditional single one", "text": "I frequently see code like this: $(\"#foo\").live(\"mouseover mouseout\", function(e) { if (e.type == \"mouseover\") { $(\"#foo\").append(\"<div id='bar'></div>\"); } else { $(\"#bar\").remove(); } }); instead of the more self-explanatory, in my opinion: $(\"#foo\").live(\"mouseover\", function(e) { $(\"#foo\").append(\"<div id='bar'></div>\"); }) .live(\"mouseout\", function(e) { $(\"#bar\").remove(); }); the same goes with $('#contentPageID, #itemURL').change ( function () { if ( $(this).is('#contentPageID') ) ... else ... }); does it have any purpose or is it just a different coding style (counter-intuitive from my point of view)?"} {"_id": "262", "title": "Will Java still be relevant in 5 years?", "text": "Will Java have the same importance it had in the past, or will it be less relevant than it is nowadays?"} {"_id": "195501", "title": "Oracle Database Integration with Team Foundation Server?", "text": "So my company is using TFS and SQL to manage their database (MS SQL Server). It integrates with the nightly build servers to do builds and produce scripts to build the entire database. Also, the Compare Schema tool that visual studio offers is really handy. We want to do the same with Oracle, preferably with TFS if that's even possible. It can be done without, but is this a possibility? Also, are there any tools to assist in automatically creating the database in a nightly build? What about managing upgrades? Our big worry is moving triggers and the MS SQL Service Broker to Oracle... this we might have to do manually though."} {"_id": "19941", "title": "How long and what type of complexity would have been involved in Chris Sawyer writing most of rollercoaster tycoon in assembler?", "text": "From this question, I have another question about... How long and what type of complexity would have been involved in Chris Sawyer writing most of rollercoaster tycoon in assembler? In order to specify and break this question down, I am interested in: 1. Approximately how many man-hours (have a guess) do you estimate it would have taken Chris to write the game by himself? Or alternatively give an approximate percentage of the ratio of assembler coding hours to say, writing the whole thing in C/C++. 2. Do the programmers that know assembler well think of this as an overly complex task for such a low level language abstraction? Apart from performance benefits, is this just a freakish natural ability that Chris has, or a skillset worthy of learning to that extent? I'm interested if people think the complexity/performance thing is worth learning assembler that well (in order to write), or is it only \"worth it\" if you have a lot of naturally developed skills in assembler (presumably from working with hardware/hardware drivers/electronics/etc)."} {"_id": "245836", "title": "Object identification in Python", "text": "In learning Python, I found that when two \"names\" (or \"variables\") are assigned to the same value, both of them point to the same memory address. For example >>> a = 10 >>> b = 10 >>> a is b True My question is, when assigning `b`, how does Python figure out that a `10` object already exists? One way might be to create the new object for `b`, scan pre-existing objects in memory to find a duplicate and, if found, point `b` to it. This sounds expensive, and tricky for more complex objects."} {"_id": "219788", "title": "Is error suppressing bad practice?", "text": "On a SO question I asked here about some code I was unsure about, someone replied \"BTW, horrible code there: it uses the error suppressing symbol (@) a lot.\" Is there a reason why this is bad practice? With things like: $db=@new mysqli($db_info) or die('Database error');, it allows me to display just a custom error message.
Without error suppressing, it would still display the typical PHP message of: **Warning** : mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: No such host is known. in **some\\file\\path** on **line 6** as well as 'Database error'. Is error suppressing _always_ bad, and if so, what specifically about the above is bad? Update: the actual code that I'm using is: or error('Database error', 'An error occurred with the database' . (($debug_mode) ? '<br><br>MySQL reported: ' . $db->error . '<br>Error occurred on line ' . __LINE__ . ' of ' . __FILE__ . '' : '')) which removes all previous output and displays an error message. So the fact that the error message doesn't include details about what specifically happened (which people seem to be suggesting as a reason why error suppressing is bad) is irrelevant."} {"_id": "119877", "title": "Best practice Java - String array constant and indexing it", "text": "For string constants it's usual to use a class with `final String` values. But what's the best practice for storing a string array? I want to store different categories in a constant array and every time a category has been selected, I want to know which category it belongs to and process based on that. Addition: To make it more clear, I have categories `A,B,C,D,E` in a constant array. Whenever a user clicks one of the items (buttons will have those texts) I should know which item was clicked and do processing on that. I can define an `enum` (say `cat`) and every time do if clickedItem == cat.A .... else if clickedItem == cat.B .... else if .... or even register listeners for each item separately. But I wanted to know the best practice for handling these kinds of problems."} {"_id": "52475", "title": "Interview question ranking FizzBuzz (1), implementing malloc (10)", "text": "I'd like to have your opinion on the difficulty of the following interview question: **Find the contiguous subarray with maximum sum in an array of integers in O(n) time.** This trivial sounding problem was made famous by Jon Bentley in his Programming Pearls where he uses it to demonstrate algorithm design techniques. On a scale of 1-10, 1 being the FizzBuzz (or HoppityHop) test and 10 being implement the C stdlib function malloc(), how would you rank the above problem? I think the people who can best answer this question are those who have read Programming Pearls and have tried to solve this problem on their own. To motivate those who haven't, 'Programming Pearls' gets featured many times in the 'Top 10 programming books' list. A couple of comments might help get a better rating: * Implementing malloc() is not as formidable as it seems. See K&R's C Programming Language for example. It sometimes gets asked at Microsoft. * CLRS observation on problem solving: **_it is often more difficult to solve a problem from scratch than to verify a clearly presented solution, especially when working under time constraints._**"} {"_id": "119872", "title": "Are there any good examples of open source C# projects with a large number of refactorings?", "text": "I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. **Which open source C# projects have undergone large (number of) refactorings?** **Criteria** A suitable project has its change history publicly available, has compilable code at most commits and at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository. The result of this research will be a tool that automatically creates informative, concise comments for a changeset.
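For the Java categories question above, a minimal sketch of the enum route (the enum type name and the `getText()` accessor are my own, for illustration):

```java
public enum Category {
    A, B, C, D, E;
}

// On a click, map the button's label back to the constant and branch once:
Category cat = Category.valueOf(clickedButton.getText()); // hypothetical accessor
switch (cat) {
    case A: /* handle A */ break;
    case B: /* handle B */ break;
    default: /* handle the rest */ break;
}
```

The enum gives a compile-checked set of categories and makes `==` comparisons safe, which is usually preferred over indexing into a parallel `String[]`.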
This should improve on the common development practice of just not leaving any comments at all."} {"_id": "235974", "title": "2D Game Data Structure in OpenGL ES 2.0", "text": "I'm trying to come up with some data structures for rendering my map on OpenGL. It is going to be ~100x100 blocks (squares) total, with each block being around 100 pixels. However, the screen will only display about 20x10 of these blocks at a time, depending on character location. So from this post: http://stackoverflow.com/questions/19979031/android-only-game-in-opengl-performance-in-c-ndk-vs-java-dalvik?newreg=53760d542cb94d05afe42faa39d1aef6 it says that I shouldn't do a lot of allocation with ByteBuffers. So here are the approaches I came up with: 1. Allocate all 10,000 blocks, and simply change the vertices on every frame, for the ones I need to display. So no dynamic allocation, but a lot of up-front space. 2. Only allocate blocks as I need them. So if in a frame I move left, and have to display new blocks, I will allocate 10 blocks in OpenGL. That way I have less memory allocated at once. However, there is dynamic allocation and I need to set up the textures on every frame. 3. Cache a few blocks of each type, and update the vertex information for them as I need them; that way I don't need to allocate a lot in the beginning, and I don't need to allocate anything dynamically. (So have 100 wall blocks, 100 door blocks, 100 floor blocks, all set up from the beginning.) Are any of these approaches the right way to go about doing this? Or how would one go about displaying a bunch of Bitmaps and updating their location on every frame? Or is Java a bad idea from the beginning, even for a simple 2D game?"} {"_id": "235976", "title": "Is it common for the founder of a web app to lack technical expertise to scale it?", "text": "When I look at things like Twitter, it seems like the idea is so simple to implement initially that the founder does not have to be very technically talented. Basically it's just a guy with a good idea. But when an app / software blows up and entails much harder engineering problems, how does the founder deal with it? Have we seen cases in which the original guy with the good idea somehow falls off the enterprise as it becomes more about technical challenges and less about ideas?"} {"_id": "9268", "title": "Should I insist that we perform code reviews before merging back to trunk?", "text": "Requested re-post from StackOverflow: I'm working in a small development team with very limited time for development. We develop a tool that is important for the result of our work, but not used daily. I am the only person in the team that has a background as a programmer. My problem is that I have been pushing for code reviews before merging back to trunk for over a year. Everyone agreed on this, but still it's only my code that has been reviewed. Returning from a long vacation I come back to a trunk with code comments such as \"this is an ugly solution - remove as soon as possible\" and \"quick fix\". What also is new is that a guy has been appointed the responsibility for the tool. (A role that first was offered to me but I turned down due to a non-work-related reason.) And he thinks that this is an ok way to work: As we have such limited time to develop, we should cut corners like that. My concern is that the other developers write ugly code: often breaking encapsulation, writing huge classes, adding internal classes at strange places, having few or no unit tests, and so on.
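For the OpenGL ES question above, approaches 1 and 3 both boil down to "allocate the buffer once, refill it per frame" - a hedged Java sketch with the question's 20x10 visible-block count (the `float[]` of corner coordinates per block is an assumption):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class TileBuffer {
    // 20*10 blocks * 4 vertices * 3 coords * 4 bytes per float, allocated once
    private final FloatBuffer vertices = ByteBuffer
            .allocateDirect(20 * 10 * 4 * 3 * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

    void refill(Iterable<float[]> visibleBlockCorners) {
        vertices.clear();                  // reuse the buffer, never reallocate
        for (float[] corners : visibleBlockCorners) {
            vertices.put(corners);         // 12 floats per block
        }
        vertices.flip();                   // ready to hand to glVertexAttribPointer
    }
}
```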
It will eventually be impossible to develop the tool further. Should I insist that we perform code reviews before merging back to trunk or am I just a code quality bitch?"} {"_id": "10731", "title": "How to manage a project in school?", "text": "I just started a PhD and we are supposed to do a project for a class; there are 14 people taking the class and we are supposed to develop a system all together. I was away from academia and working in the industry before, and I know it is very hard to manage even a couple of people towards the same goal. We are going to have the first meeting in a couple of weeks. First, I will suggest using a version control system like SVN. Second, I will try to take the lead for the architecture of the system, because I think I am more experienced. Since the class is about _computer vision_ and I anticipate that most of the people's backgrounds are research-related, there is a big chance that I am more experienced. I will gladly hand architecture to someone else if he/she is more experienced. What else should we do to progress without much hassle? PS. You can assume every one of us is going to work remotely, and meet once a week at best (not everyone will attend though). And the project needs to be finished in 2 months. It does not need to be a perfect, complete product, we just need to make a prototype. PPS. The aspects of the group remind me of open source project groups, maybe the answers will be helpful for those groups as well."} {"_id": "176011", "title": "Do functional generics exist and what is the correct name for them if they do?", "text": "Consider the following generic class: public class EntityChangeInfo<EntityType> { ChangeTypeEnum ChangeType {get;} TEntityKeyType EntityKey {get;} } Here `EntityType` unambiguously defines `TEntityKeyType`. So it would be nice to have some kind of types' map: public class EntityChangeInfo<EntityType> with map < [ EntityType : Person -> TEntityKeyType : int] [ EntityType : Car -> TEntityKeyType : CarIdType ]> { ChangeTypeEnum ChangeType {get;} TEntityKeyType EntityKey {get;} } Another example is: public class Foo<TIn> with map < [TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ] [TIn : Car -> TOut1 : int, TOut2 :int, ..., TOutN : Price ] > { TOut1 Prop1 {get;set;} TOut2 Prop2 {get;set;} ... TOutN PropN {get;set;} } The reasonable question: how can this be interpreted by the compiler? Well, for me it is just a shortcut for two structurally similar classes: public sealed class Foo<Person> { string Prop1 {get;set;} int Prop2 {get;set;} ... double PropN {get;set;} } public sealed class Foo<Car> { int Prop1 {get;set;} int Prop2 {get;set;} ... Price PropN {get;set;} } But besides this we could imagine some update of the `Foo<>`: public class Foo<TIn> with map < [TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ] [TIn : Car -> TOut1 : int, TOut2 :int, ..., TOutN : Price ] > { TOut1 Prop1 {get;set;} TOut2 Prop2 {get;set;} ... TOutN PropN {get;set;} public override string ToString() { return string.Format(\"prop1={0}, prop2={1}, ...propN={N-1}\", Prop1, Prop2, ..., PropN); } } This all can seem quite superficial, but the idea came when I was designing the messages for our system. The very first class above is one of them. Many messages with the same structure should be discriminated by the `EntityType`.
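For comparison with the hypothetical "type map" syntax above, the usual C# emulation makes the dependent type an explicit second type parameter - a sketch assuming the question's `ChangeTypeEnum`, `Person`, `Car` and `CarIdType` types exist:

```csharp
// The key type is spelled out at the use site instead of being derived:
public class EntityChangeInfo<TEntity, TEntityKey>
{
    public ChangeTypeEnum ChangeType { get; private set; }
    public TEntityKey EntityKey { get; private set; }
}

// Usage:
//   var personChange = new EntityChangeInfo<Person, int>();
//   var carChange    = new EntityChangeInfo<Car, CarIdType>();
```

The cost is exactly what the question is probing: nothing stops a caller from writing `EntityChangeInfo<Person, CarIdType>`, because C# generics cannot express the Person->int mapping themselves.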
So the question is whether such a construct exists in any programming language?"} {"_id": "152493", "title": "Good practice or service for monitoring unhandled application errors for a small organization", "text": "I'm working with multiple pieces of software with varying ways of monitoring for errors. When I make software, I usually send an email with the stack trace to admins (usually me). Some customer software is monitored by a team who check that a particular batch run was successful. Other software might not have any monitoring at all (someone will call when things go horribly wrong). Sending emails is good, except when things start going wrong, my mail gets filled fast. Also I don't want to solve the same problem in code for every piece of software. Is there some relatively cheap and low-maintenance software or practice to handle this? I want it to be cheap/low maintenance because usually I work alone or in teams of 5 or smaller. For example it would be great if errors were aggregated so I don't get 10 000 emails when something unexpected happens... For clarification: By unhandled errors I mean Exceptions that were unhandled by application code and were propagated to Tomcat or JBoss. I don't need help with how to catch those errors. I need help with what to do with them. Is there any cloud application that I could send my errors to? Or some simple server to install? Or some library that can handle errors using configuration files? I use Java if that is any help."} {"_id": "149158", "title": "How do I determine if my code is optimized?", "text": "I am the only developer working on a project. The functionality that I have coded works to expectations (desired result), but since I am the only one I don't know if I can do anything better to it. How do I determine if my code is optimized (I want to increase the performance of some features)? One thing that comes to mind is _Code Review_ but there is no one free over here to review my code. I just keep on trying with alternate logic but at times things get stagnant and really dirty. Is there any other possible way?"} {"_id": "212833", "title": "Datastructure for a factory pattern in practice", "text": "I'm implementing what's basically an event log system for a larger system. I used Single-table inheritance to build out the table. The problem I'm having is figuring out how to build out the classes from the database. It's easy enough for me to figure out what they are to load them into the database, but to pull them out and create objects and collections out of them is a little trickier. The only way I know of is to have a switch statement and hope that if someone implements a new object that they'll update the switch, but that doesn't seem practical. I'm sure I'm getting something wrong about how I'm thinking of the factory pattern here."} {"_id": "12394", "title": "Choosing a licence for open source projects", "text": "I've done some open source projects, and I plan to do more in the future. So far, I've released all my code under GPL, but I've read a few articles which claim GPL is too restrictive for any code to be used in a corporate environment. This, supposedly, reduces contributions. Here's what I wanted to accomplish: For **full applications** : * no commercial use with the exception of selling _support_ for the application (i.e.
the app cannot be sold, but everything around it can) For **libraries** (components, plugins, ...): * can be included into commercial projects **without modifications** * any modification to the library/component must be open sourced (contributed back) - the rest of the project, commercial or not, is not affected For applications, GPL still seems the logical choice. For libraries, my primitive understanding of licences makes me think that LGPL is a good match, but I'm not sure. I've looked at the MIT licence, and that seems too permissive. Most of the time, I want people to use my code anywhere they want to, as long as any improvements are contributed back. This brings me to my question(s): is LGPL a logical choice for open source libraries, components, plugins etc? Is there a better alternative? Is GPL a good choice for my applications or is there something better? **Update:** For those who are interested in my final decision, I've decided to release my libraries under a multi-license scheme: MPL, LGPL and GPL. This enables virtually _everyone_ to use my code with no obligations, unless they modify it under MPL, in which case it would have to be contributed back. This means the code can be used by both FSF and proprietary software, but \"bad\" commercial exploitation is prevented (or so I'd like to think)."} {"_id": "212837", "title": "Storing donor addresses in a relational database", "text": "I am building a donor database for a non-profit organization and one issue I'm mulling over is how to store some of the donor data. There are some families for whom we capture the names of both spouses, hence each spouse gets their own record in the DB, and naturally they share the same address. Because it is likely to happen that multiple people from a family will exist in the database, and they will share the same address (husband, wife, kids), I am considering storing the addresses in a separate table and having Address_ID be a foreign key on the Roster table. This would reduce the amount of duplicate data, and when it came to address changes, it would be far easier to handle for multiple donors that are part of a family. Table: Roster Columns: Roster_ID (PK) PrimaryRoster_ID (FK, references Roster_ID of the head of the household) Address_ID (FK) FirstName LastName Phone Table: Address Columns: Address_ID Street Apt City State ZipCode I haven't built the GUI yet for entering the data - I'm still building the DB and populating it via scripts, but the idea is that when a new donor is created, he/she is assigned the Primary, and then any other donor who is part of the same family will have the primary donor's Roster_ID populated in the PrimaryRoster_ID column so that a link can be made between family members. People in the same family don't have to have the same address (if they don't, a separate Address record would be made for that donor), but if there are 4 people in a family, all with separate DB records and the user who enters the data assigns the same address record to all of them, only one record will exist in the Address table, with 4 records in the Roster table having that Address record as an FK to associate each individual to that address. Can anyone think of a better way of handling this, or would this design be sufficient? The size of the database right now is around 150 records in the Roster table, with maybe a max of 1000 records down the road.
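The donor schema above, written out as DDL - a sketch with assumed column types, using the question's table and column names:

```sql
CREATE TABLE Address (
    Address_ID INT PRIMARY KEY,
    Street     VARCHAR(100),
    Apt        VARCHAR(20),
    City       VARCHAR(50),
    State      CHAR(2),
    ZipCode    VARCHAR(10)
);

CREATE TABLE Roster (
    Roster_ID        INT PRIMARY KEY,
    PrimaryRoster_ID INT NULL REFERENCES Roster(Roster_ID),  -- head of household
    Address_ID       INT REFERENCES Address(Address_ID),     -- shared per family
    FirstName        VARCHAR(50),
    LastName         VARCHAR(50),
    Phone            VARCHAR(20)
);
```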
Edit: Changed table design slightly - I realized that Address_ID should really just be an FK in Roster instead of having a Roster_Address table."} {"_id": "212834", "title": "User Story vs Requirement", "text": "A User Story captures what the user wants to do with the system at a high level. I understand that the user story would further drive a number of low-level requirements. Is a user story the same as a high-level requirement for the system?"} {"_id": "212835", "title": "Do we need a weekly project status meeting in Agile?", "text": "I am working in a small company with around 15 people (dev and qa). We are trying to practice Agile in our company. We are doing daily stand-up meetings. We are also doing a weekly project status meeting with our CTO. There we report to our CTO on the last week's tasks and overall project status. My question is whether doing this weekly status meeting with our CTO is supported in Agile or not."} {"_id": "124344", "title": "Datastructure for Peg Board Game", "text": "What datastructure would work for the triangular peg board game? It's a 5x5x5 triangle with 15 holes."} {"_id": "124345", "title": "What are frameworks, stacks, and middleware?", "text": "Hey, I'm a student programmer currently working in Java. I often see several terms thrown around a lot and it would be very helpful for someone to explain the differences. I was prompted by my research that I did when reading this question. Anyway here are the terms (and wiki links), I've read their wikis but am still uncertain. Software Stack Software Framework/Application framework - is there a difference? What are they? Middleware"} {"_id": "69164", "title": "Storing credit card information: Looking for a creative solution", "text": "I recently found the need to do recurrent billing. I am extremely averse to storing people's credit card information and I refuse to take risks in that area. I am stuck between a rock and a hard place, however. Using a gateway in my local area and currency is prohibitively expensive. Basically, as things stand, we can't do business through a gateway. Yet we need recurrent billing and the need to store credit card information has arisen. I thought of storing the credit card numbers (encrypted), partially or wholly, then saving the CCV number locally on an unwired machine. To bill customers, I would generate temp billing tables, and fire off a form manually for each customer that needs to be billed. I would manually populate the CCV and pipe a form to my payment processor (they will not store the data, unfortunately-- only process the payment). It's a manual PITA, and defeats the point of automation, but it's my best bet outside of storing the data outright. Can you give me any advice on my scheme?"} {"_id": "250988", "title": "Filtering common starting/ending characters from array/list of strings", "text": "Ok so for example I have an array of strings with each string as below: 364VMS1029 364VMSH920 364VMSH192 364VMSU839 364VMN2382 364VMR223 364VMR2X3 364VMN829 364VMN8757 364VMN831 How can I dynamically get the program to recognise the common characters among all strings in the array, which in this case is `364VM`, and filter them out? If there's no common character, then don't do anything."} {"_id": "154261", "title": "Any suggested approaches to track bugs/defects?", "text": "What is the best way to track defect sources in tfs? We have various teams for a project like the vulnerability team, the customer, pre-sales, etc. We give a build and these teams independently test it.
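For the common-prefix question above, a hedged sketch (the question names no language, so C# is assumed here): shrink a candidate prefix against every string, then strip it.

```csharp
using System;

static string[] StripCommonPrefix(string[] items)
{
    if (items.Length == 0) return items;
    string prefix = items[0];
    foreach (string s in items)
    {
        int i = 0;
        while (i < prefix.Length && i < s.Length && prefix[i] == s[i]) i++;
        prefix = prefix.Substring(0, i);   // keep only the part shared so far
        if (prefix.Length == 0) break;     // nothing in common: change nothing
    }
    return Array.ConvertAll(items, s => s.Substring(prefix.Length));
}
```

On the sample data the prefix converges to `364VM`; with an empty prefix the `Substring(0)` calls return the strings unchanged, which matches the "don't do anything" case.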
They do not have access to our tfs system. So they usually send in their defects via email. It will usually be sent in an Excel format. Our testing team takes these up and logs them into tfs. Sometimes they modify the original defect description (in excel) and add the expected/actual results. Sometimes they fail to cite the source. I am talking about managing the various sources as such. Is there a way we can add these sources into tfs, and actually link this particular source with the defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)? Edit: I don't know if there is a way to manage various sources. Consider this: the vulnerability assessment team has come out with defects/suggestions. They captured it into an excel and passed that on to the testing team (in my case). The testing team takes the responsibility of elaborating the defect and logging it in tfs. Now say that the excel has come with 20 defect items. This is my source. (It answers the question _where did this defect come from_ ). So ultimately when I am looking at a bug I know where it came from - I'll ultimately be looking at the email sent from the VA team which has the excel or the excel file itself sent by the VA team. It may be one of the 20 items in that excel. How should the tester link to this source just once? On the contrary, it does not make sense for the tester to attach the same excel 20 times (i.e. attach the same excel for the 20 defects while logging it into tfs), right? I hope you get my point."} {"_id": "239222", "title": "When to add new project to solution?", "text": "I'm tidying up my company's Version Control Guidelines. One of my tasks is to determine how solutions should be organized in a very broad sense. I have somewhat come to my own conclusion that one broad requirement to document is that, for the most part, CSProj/VBProj projects should live on their own all at the same level in a branch. However, there are a measurable number of projects that aren't independent, and are tightly coupled to another. For instance, a Unit Testing project has no business by itself since it only exists in support of one other project. So it can (and should) be directly added to a solution with the project it supports. Same with an individual UI whose sole purpose is to test one project. Or an implementation test project... and probably similar examples. So my goal is to mandate, in a way, that projects should not be added directly to a solution except in such examples. **Disclaimer** I'm not saying that one application cannot have a solution with multiple projects added via \"Add Existing Project\". But in those cases, each of those projects should be addable/removable because they exist all at the same level of a namespace or folder organization. This would be easiest if there was some documentation I could find with a history of better wording. But any searches I do just end up with How-To tutorials to add projects to a solution. There are no Best Practices that get into this."} {"_id": "250984", "title": "Encapsulate multiple properties into a single class to use as a custom DependencyProperty", "text": "My application is a WPF project implemented in C# .NET (4.5) using an MVVM architecture with no code-behind in the View.
In order to eliminate the coupling between the View and the ViewModel I'm implementing some of the WPF-specific behaviours as custom DependencyProperties that can be bound to 'simple' properties exposed by the ViewModel. The View assembly now has several of these DependencyProperties that are bound by the top-level 'Windows' in the View. I can't see any obvious reason to collect them together into a single 'Controller' class, other than that when I do so the code suddenly looks prettier, but doing so introduces unnecessary coupling between these properties simply because they may be bound by a top-level View, even though not all ViewModel objects require all of the behaviours. Should I sacrifice some reduction in coupling by requiring that the top-level ViewModel objects expose a cohesive 'Controller' property for the View to bind to with a single, custom DependencyProperty? Would it be better to implement multiple, single-purpose custom DependencyProperties for each of the behaviours? Is there a completely different approach I could use, such as providing some kind of 'behaviour container'? Single Controller approach: using System; using System.Threading; using System.Windows; using FrameworkMVVM.BindingControllers; // For TopLevelViewController namespace FrameworkMVVM.ViewBehaviours { public static class WindowBindableProperties { // One property that encapsulates the behaviours in a controller class #region WindowControllerProperty private static DependencyProperty _windowControllerProperty = DependencyProperty.RegisterAttached ( \"WindowController\", typeof(TopLevelViewController), typeof(WindowBindableProperties), new PropertyMetadata(null, WindowControllerPropertyChanged) ); public static DependencyProperty WindowControllerProperty { get { return _windowControllerProperty; } } private static void WindowControllerPropertyChanged( DependencyObject d, DependencyPropertyChangedEventArgs e) { // Hook the events, EventHandlers, SynchronizationContext, // DialogResult, etc. here. } #endregion } } Multiple Properties approach: using System; using System.Threading; using System.Windows; namespace FrameworkMVVM.ViewBehaviours { public static class WindowBindableProperties { // Multiple properties, one for each behaviour. #region CloseViewEventProperty private static DependencyProperty _closeViewEventProperty = DependencyProperty.RegisterAttached ( \"CloseViewEvent\", typeof(EventHandler), typeof(WindowBindableProperties), new PropertyMetadata(null, CloseViewEventPropertyChanged) ); // The rest of the CloseViewEvent implementation goes here. #endregion #region DialogResultProperty private static DependencyProperty _dialogResultProperty = DependencyProperty.RegisterAttached ( \"DialogResult\", typeof(Boolean?), typeof(WindowBindableProperties), new PropertyMetadata(null, DialogResultPropertyChanged) ); // The rest of the DialogResultProperty implementation goes here. #endregion #region ViewContextProperty private static DependencyProperty _viewContextProperty = DependencyProperty.RegisterAttached ( \"ViewContext\", typeof(SynchronizationContext), typeof(WindowBindableProperties), new PropertyMetadata(null, ViewContextPropertyChanged) ); // The rest of the ViewContextProperty implementation goes here. #endregion // Etc., etc., etc.. } }"} {"_id": "250987", "title": "An algorithm for finding subset matching criteria?", "text": "I recently came up with a problem which I would like to share some thoughts about with someone on this forum. This relates to finding a subset.
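For the WPF question above, this is roughly how either variant is consumed from a top-level Window - a sketch assuming the usual static Get/Set accessors for the attached properties exist, with made-up ViewModel property names:

```xml
<!-- Multiple single-purpose properties: each behaviour is opted into
     individually, so a Window binds only what its ViewModel exposes. -->
<Window xmlns:vb="clr-namespace:FrameworkMVVM.ViewBehaviours"
        vb:WindowBindableProperties.DialogResult="{Binding DialogResult}"
        vb:WindowBindableProperties.CloseViewEvent="{Binding CloseRequested}" />
```

With the single-controller variant the Window would instead bind one `WindowController` property to a `TopLevelViewController` exposed by the ViewModel, which is exactly the extra coupling the question weighs.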
In reality it is more complicated, but I tried to present it here using some simpler concepts. To make things easier, I created this conceptual DB model: ![Recipe model](http://i.stack.imgur.com/M9mvd.jpg) Let's assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check if the required amount is less than the available amount for each product and then decide if we can cook a dinner - if the amount required for at least one ingredient is greater than the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should work when the problem is stated the other way round, i.e. how to find recipes which can be cooked only from ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask."} {"_id": "124348", "title": "Applications of Reverse Engineer, and very general process overview", "text": "As a student programmer, I only have very limited knowledge of the workings of much of the consumer software. I would love to one day be able to reverse engineer software to learn what makes it tick and implement it in different ways. My real question is: what is the general process that reverse engineering requires? I am aware that this is very broad. I was just wondering about the tools, such as decompilers, that people use."} {"_id": "124349", "title": "What skills does it take to develop an Android app?", "text": "What knowledge does it take to develop an Android app? How easy is it to publish one in the app market? I was thinking of trying one once I have a firmer grasp on Java. What skills should I focus on to meet that goal? Is it a long-shot to think I could develop an app like a photo editor (which I already can in Java) in the near future with intermediate Java experience, or are apps more for small teams or commercial software firms? This question serves as a personal to-do list on what to learn. I would like to develop an Android app in the near future from which I could make money."} {"_id": "250982", "title": "Which algorithm is faster?", "text": "I'm creating a small game, where the computer generates a pseudo-random number in a given range, and the user has to guess it. I also made the option to play computer vs computer. I mean the computer generates a random number and the computer should guess it. The easiest way I found was to create a for loop and increase a variable by 1 until it guesses it. But I decided to try to create my own algorithm for this. Basically the variable that holds the guess number is equal to range / 2, and then the program enters a while loop where, if the guess is greater than the random number, it divides the guess by 2 and adds 1 to it; if the guess is less than the random number, it multiplies the guess by 2 and adds one to it, until it guesses the number.
Here is code in C++: void aiVsAI(int range){ // range argument is used to be given to the randome generator func int random_number = generate_random_number(range); // assigning the random number to variable int guess = range / 2; // the computer guess number int tries = 0; // how many tries the computer needed to guess the number while(guess != random_number){ // while the guess isn't equel to the random number if(guess > random_number){ // if the guess is > than the random number guess = guess / 2 + 1; // dev it to 2 and add one to it } else if(guess < random_number){ // if the number is < than the number then guess = guess * 2 + 1; // * it with 2 and add one } tries++; // increase the number of tires } std::cout << tries; } the generate_random_number is function that will generate pseudo-random number. _My question is which algorithm is faster this one or just using for loop and increasing variable with one until it guess the number?_ * * * Here is the full code if anyone need it for something: #include #include #include #include //function that will generate random number int generate_random_number(int range){ // the range argument is used to represent the maximum value of the number if(range < 10){ // the range couln't be < 10 but can be == std::cout << \"range can't be less than 10\"; exit(1); } std::srand(time(NULL)); // making sure that the random number will be different every time return rand() % range; // returning the random generated number in the specific range } //Game computer vs computer void aiVsAI(int range){ // range argument is used to be given to the randome generator func int random_number = generate_random_number(range); // assigning the random number to variable int guess = range / 2; // the computer guess number int tries = 0; // how many tries the computer needed to guess the number while(guess != random_number){ // while the guess isn't equel to the random number if(guess > random_number){ // if the guess is > than the random number guess = guess / 2 + 1; // dev it to 2 and add one to it } else if(guess < random_number){ // if the number is < than the number then guess = guess * 2 + 1; // * it with 2 and add one } tries++; // increase the number of tires } std::cout << tries; } // Game human vs human void humanVsAI(int range){ // range argument is given to the generate random function int random_number = generate_random_number(range); // assigning the random number to variable int input = 0; // this var represents the user input int tries = 0; // the number of tries user had make to guess the number while(input != random_number){ std::cout << \"Enter number: \"; // alerting the user to enter a number std::cin >> input; // getting the number if(input < random_number) // checking if the input > than the random num std::cout << \"You didn't guess it,you are too low..., try again!\\n\"; // alerting the user else{ // else it will be > than the number std::cout << \"You didn't guess it,you are too high..., try again!\\n\"; // alerting the user } tries++; // increasing tries by one for each try// } std::cout << \"\\n\\nYou guess it! 
You need \" << tries << \" to do this.\\n\\n\"; } //game Human vs Human void humanVsHuman(int range){ int number_to_guess; // the number which must be guessed int user_guess; // user input int guess; std::cout << \"Enter number int range 1- \"<< range<<\": \\n\"; // alerting the user to enter number std::cin >> number_to_guess; // entering number for(int i = 0; i < 100;i++){ //Clearing the screen std::cout << \"\\n\"; } while(user_guess != number_to_guess){ guess++; std::cout << \"Please enter numer:\\n\"; // alerting the user std::cin >> user_guess; // Getting the user input if(user_guess < number_to_guess) std::cout << \"You didn't guess it,you are too low..., try again!\\n\"; else if(user_guess > number_to_guess) std::cout << \"You didn't guess it,you are too high..., try again!\\n\"; } std::cout << \"You guess the number\\n\"; } int main(){ std::string game_type; char play_game; int range = 0; while(1){ std::cout << \"Please enter range(type 100 to leave it as default)\\n\"; std::cin >> range; std::cout <<\"What you want to play AI vs human, human vs human or AI vs AI ?\\n\"; std::cin.ignore(); // for get line std::getline(std::cin, game_type); // getting the user input std::cin.clear(); // clearing if(game_type == \"AI vs AI\" || game_type == \"ai vs ai\"){ // if the user chose AI vs AI std::cout<<\"You choosed AI vs AI\\n\"; aiVsAI(range); // calling the AI function } else if(game_type == \"human vs ai\" || game_type == \"ai vs human\" || game_type == \"human vs AI\" || game_type == \"Human vs AI\" || game_type == \"Human vs ai\" ){ // if the user chossed AI vs human std::cout << \"You choosed human vs AI\\n\"; humanVsAI(range); // calling the human vs human function } else if(game_type == \"human vs human\" ||game_type == \"Human vs Human\" ||game_type == \"Human vs human\" ||game_type == \"human vs Human\"){ // if the user choosed human vs human std::cout <<\"You choose human vs human\\n\"; humanVsHuman(range); // calling the human vs human function } else{ // In case of any error std::cerr << \"An error occured!\"; } std::cout << \"Do you want to play another game?\\n\"; // do you want another game???? std::cin >> play_game; if(play_game == 'y' || play_game == 'Y') std::cout <<\" starting another game\\n\"; else break; // enough playing for now. } return 0; }"} {"_id": "69168", "title": "Gaining experience working with a team", "text": "I've always been a solo programmer and I would like to gain experience working with a team. Where are some places I can go to gain experience with a team assuming that I am willing to work for free in an open source project simply for gaining experience. (SIDENOTE) I think this question can be enhanced as I don't think in it's present form it is clear and concise. Basically I'm asking for resources on how to find a team to develop for to learn how to work with others. Please edit this question as you see fit. I apologize, at the moment I don't have enough points to make this into a wiki"} {"_id": "168096", "title": "Native mobile app development - how do I structure my user stories?", "text": "I'm about to start on a project which will involve developing prototype native mobile apps (iOS and Android initially) as well as a web-based admin interface and an API for these apps to communicate with. We've got a list of stories already drafted up, however a lot of them are in the format: As a mobile user I want to be able to view a login screen so that I can sign into the app If this were targeted for a single platform, I wouldn't see a problem. 
However, since we're targeting multiple platforms, I'm not sure whether these should now be duplicated, e.g. \"As an Android user\" or similar. This seems like duplication, but it's work that will need to be completed separately for each platform. This is the first mobile project we've gone native on - previously it was Phonegap and we lumped all stories in under \"As a mobile user\". Since essentially this was a web-based app wrapped in native code, this didn't present too much of an issue, but I'm conscious that wholly-native apps are a different ballgame!"} {"_id": "125436", "title": "How does the license work for the LGPL open source framework?", "text": "Without knowing anything I wrote a big application, almost 1 year coding. It does video/audio, such as a softphone, for commercial use (we sell it). I used a framework which was licensed under the LGPL, the Lesser GNU Public License. Now before I release it to production, since I used the `H.264` video codec and an 'LGPL' framework, what should I know and what should I do? Or do I not need to do anything? This is my employer releasing and selling the application; I just did my job making the application and getting it to run without crashing. Do I need to apply to a court for a license? Or do I apply to the framework programmers for a license? How do I make myself valid before I let it go to my company management? **Follow up:** The H.264 encoder has a GPL license; the H.264 decoder has an LGPL license. Where do I buy an encoder license? http://www.x264licensing.com/features Where do I buy a decoder license? http://www.mpegla.com/main/programs/AVC/Pages/AgreementExpress.aspx How much does a license cost me? http://www.zdnet.com/blog/bott/h264-patents-how-much-do-they-really-cost/2122"} {"_id": "152147", "title": "What are the advantages of GLSL's compilation model?", "text": "GLSL is fundamentally different from other shader solutions because the server (GPU driver) is responsible for shader compilation. Cg and HLSL are (afaik) generally compiled _a priori_ and sent to the GPU in that way. This causes some real-world practical issues: * many drivers provide buggy compilers * compilers differ in terms of strictness (one GPU can accept a program while another won't) * also we can't know how the assembler code will be optimised What are the upsides of GLSL's current approach? Is it worth it?"} {"_id": "254609", "title": "Is there any danger in writing raw bytes to a file?", "text": "I'm working through a problem in Programming Pearls -- specifically, the implementation of a program which sorts a file containing, at most, 10,000,000 integers (Column 1, Problem 3). Since the book doesn't specify how the data should be stored in the file, I'm considering storing the integers as raw bytes (there are some other constraints that make raw bytes a good option). I've never worked at this low of a level before, so I want to know if there's anything dangerous I should watch out for. Do I need to worry about accidentally using some sort of end-of-file sequence when I'm writing raw bytes to a file, for example? Edit: I realize now how broad my question was. I really meant problems of the more catastrophic kind, like accidentally overwriting other files on the disk. Sorry I wasn't clearer originally."} {"_id": "152144", "title": "How should my local git workflow work?", "text": "At home, I have a server that is running some software (on a LAMP stack, but only accessible internally). I have another machine and a laptop that I both use for developing said software.
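For the raw-bytes question above: fixed-width binary I/O has no in-band end-of-file marker, so any byte values (0x00, 0x1A, ...) round-trip safely as long as the file is opened in binary mode. A minimal Java sketch of writing the integers:

```java
import java.io.*;

public class RawInts {
    public static void main(String[] args) throws IOException {
        int[] values = {7, -1, 123456789};
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream("ints.bin")))) {
            for (int v : values) out.writeInt(v);  // 4 bytes each, big-endian
        }
        // 10,000,000 ints is then exactly 40,000,000 bytes; read back with
        // DataInputStream.readInt() until EOFException.
    }
}
```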
What is the best workflow for me? Should I have a repository on my local server, create a live branch, staging branch and development branch, then checkout the development branch from my laptop/development PC to work on, commit that back when I'm done, then merge the development branch with the staging branch for testing, before further merging to the live branch? Would I simply checkout the production branch to my /www/var/ on my server? Or am I thinking/going about this all wrong? Thanks."} {"_id": "249497", "title": "VB.net - Unable to delete/over ride files in Net App environment", "text": "I have been working on a change release application for some time now. The workflow is as follows: 1. Archive the staging file 2. Archive the current prod file 3. move staging to prod * Delete prod file * Copy staging file to prod dir * Delete orig staging file When I execute my code against a Windows Server using UNC (i.e.: \\Server1\\d$\\Production) it works perfectly. My issue is that most our changes happen against a Net App file system. When I execute against this, I have gotten many different errors based on how my code is structured. Currently, I get 'The process cannot access the file '\\Netapp\\share\\files\\FILE_NAME.ext' because it is being used by another process.' Here is my current code: Private Sub ReleaseToProd_Click(sender As Object, e As EventArgs) Handles ReleaseToProd.Click Dim fs As System.IO.StreamWriter = My.Computer.FileSystem.OpenTextFileWriter(LogPath, True) Dim i = 1 fs.WriteLine(\"--- \" & Now) TextBox1.Text += \"Releasing to Production:\" & vbNewLine For Each item In ListBox1.SelectedItems If My.Computer.FileSystem.FileExists(Path.Combine(ProdDir, item)) Then 'Delete existing production file Try File.Delete(Path.Combine(ProdDir, item)) fs.WriteLine(\"Deleting \" & Path.Combine(ProdDir, item)) 'Copy staging to production Try File.Copy(Path.Combine(StageDir, item), Path.Combine(ProdDir, item)) fs.WriteLine(\"Copying \" & Path.Combine(StageDir, item) & \"\\nTO: \" & Path.Combine(ProdDir, item)) Try File.Delete(Path.Combine(StageDir, item)) fs.WriteLine(\"Deleting \" & Path.Combine(StageDir, item)) Catch ex As Exception MsgBox(item & \": issue deleting from staging!!\" & vbNewLine) TextBox1.Text += item & \": issue deleting from staging!!\" fs.WriteLine(\"--- Error Releasing - Step 3 ---\" & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.StackTrace & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.Message & vbNewLine) End Try Catch ex As Exception MsgBox(item & \": issue copying file to production!!\" & vbNewLine) TextBox1.Text += item & \": issue copying file to production!!\" fs.WriteLine(\"--- Error Releasing - Step 2 ---\" & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.StackTrace & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.Message & vbNewLine) End Try Catch ex As Exception MsgBox(item & \": issue deleting production file!!\" & vbNewLine) TextBox1.Text += item & \": issue deleting production file!!\" fs.WriteLine(\"--- Error Releasing - Step 1 ---\" & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.StackTrace & vbNewLine) fs.WriteLine(\"Stack Trace:\" & vbCrLf & ex.Message & vbNewLine) Finally TextBox1.Text += item + \" released\" & vbNewLine End Try Else TextBox1.Text += item + \" released\" & vbNewLine fs.WriteLine(\"Releasing Stage:\" & vbNewLine & StageDir & \"\\\" & item & vbNewLine & \"To:\" & vbNewLine & ProdDir & \"\\\" & item & vbNewLine & \"With Overwrite\" & vbNewLine) My.Computer.FileSystem.MoveFile(StageDir & 
\"\\\" & item, ProdDir & \"\\\" & item, True) End If i = i + 1 Next ListBox1.Items.Clear() fs.Close() End Sub I know all my variables/constants are as they should be. I have full control over the destination directories, and I have launched the application as administrator Here are the stack traces received: > \\--- Error Releasing - Step 1 --- > > Stack Trace: at System.IO.__Error.WinIOError(Int32 errorCode, String > maybeFullPath) at System.IO.File.Delete(String path) at > ChangeManagementApp.Form1.ReleaseToProd_Click(Object sender, EventArgs e) > > Stack Trace: The process cannot access the file > '\\Netapp\\share\\files\\FILE_NAME.ext' because it is being used by another > process. It obviously looks like a lock, but I am being assured that our application, that reads these files, does not hold locks... additionally, I can manually overwrite it with no issues at all. Any ideas on how I might be able to fix this issue? * * * The failing code is here: If My.Computer.FileSystem.FileExists(Path.Combine(ProdDir, item)) Then 'Delete existing production file Try File.Delete(Path.Combine(ProdDir, item)) fs.WriteLine(\"Deleting \" & Path.Combine(ProdDir, item)) 'Copy staging to production Try"} {"_id": "118835", "title": "Can I use my username in a legal license?", "text": "I have an open source project, but I don't want to use my real name in the license file. Legally, can I just use my project hosting username instead? I know programmers Are Not Lawyers\u2122, but I'm looking for advice from people who may have dealt with simlar problems."} {"_id": "254601", "title": "Best practice: dynamically handle variable data, multiple file uploads and encoding with jQuery, AJAX, PHP and MySQL", "text": "I'm currently writing an e-learing web-application. I'm sorry if this is information overkill but I think it's better to describe it in detail so you get the idea. I have made thought on every point and would just like to know if my approach is ok or totally off the mark for this is the first time I handle that much data \"on the fly\". The e-learning courses contain files of different types, e.g. Word files, PDFs, Flash (yes, sadly no way around it), videos etc. and have 1 or more sets of tests, each containing questions where images can be attached (one each question) and answers of course (single- or multiple-choice), which also can hold images (one each) in addition or instead of pure text. Upon completion a PDF (certificate) is created and stored on the server for the user to download at any time. Planned workflow: 1. Course creation is handled similar to a checkout in online shops, meaning you click on \"New course\" and a window pops up and at the top you see a flowchart showing where in the creation process you are right now. The whole process should be dynamic so information is only stored in the final \"checkout\". Each part of the process is on a separate \"page\" but you can go back and edit things. It'll be visualized like this (I will add text descriptions to the points of course): ![Workflow](http://i.stack.imgur.com/eFTwX.png) 2. First page contains only basic information, e.g. Name, version, short description and a list of topics being taught in this course. 3. 2nd page is where the fun starts: Upload multiple files with description. These file uploads should not disable the use of the form. Videos shall be uploaded and encoded in background. 4. 
The 3rd page: test-sets are created; this means categories (test-sets) with subcategories (questions) and sub-subcategories (answers), also with multiple file uploads (images). 5. The 4th page: final overview and confirm. How I want to handle things: **Data pre-storage** I would like to use temporary tables for this case to store the file paths and basic data. As the creation process can be cancelled at any time, all files belonging to entries for the current creation process shall be deleted. This means creation starts --> a process is created in the temporary table `creation-processes`. After the first page is filled, the data is stored in a temporary table with a fk to the process id; the same goes for any file uploaded. **Video encoding** As soon as a video file is uploaded I will run a background PHP script which sets a database entry in the temporary table for the video file and starts encoding via ffmpeg (3 files for each video file: mp4, ogg, webm). Uploaded files can have 4 states: \"Uploading\", \"Processing\", \"Finished\" and \"Error\". **Permanent storing** Upon completion, the data from the temporary tables is transferred into permanent tables and then deleted. Files still being processed (as video encoding can take some time) will also have their state transferred to the permanent table. The background encoding script checks for the filename (which is unique: timestamp+md5-hash of filename) in the temporary table first and then in the permanent table. If no entry exists in either, it deletes the files it created. **Garbage collection** In addition to just using temporary tables I will have a permanent table \"tmp_files\" containing path information of ALL files being uploaded. The entries and the files listed in this table are deleted upon cancellation of the process; also, after completion the entries are deleted (but not the files) for they are now stored in the regular table (course-files). So for things going wrong I can run a cron job every hour that compares the entries of \"tmp_files\" with the files in the regular table and the temporary table, and if it holds an entry which is not present in one of the other tables, this means there is a zombie file which must be deleted (together with the entry, of course). Well, that's it so far. I don't know if questions regarding the planning of software are welcome here, but I really would like to know if this is a good way to approach this process or not and if I'm missing something essential."} {"_id": "58082", "title": "To what extent can choosing a particular job limit future opportunities?", "text": "I'm at the beginning of my career and I'm currently looking for jobs. I know that the farther you get away from college graduation, the more employers look at your experience as opposed to your actual degree. I'm wondering, with the ultimate goal of being in software engineering/computer science, would taking a job in the IT field limit my options of getting into software engineering? Likewise, would taking a job in software quality assurance limit me from pursuing a development position later, even though both are in software engineering?"} {"_id": "58083", "title": "how to convince others we should move to Hadoop?", "text": "Everything I've read about Hadoop makes it seem like exactly the technology we need to make our enterprise more scalable. We have terabytes of raw data that is in non-relational form (text files of some kind). We're quickly approaching the upper limits of what our centralized file server can handle and everyone is aware of this. 
Most people on the tech team, especially the more junior members, are all in favor of moving from the central file system to HDFS. The problem is, there is one key (most senior, etc.) member of the team who is resisting this change, and every time Hadoop comes up, he tells us that we could simply add another file server and be in the clear. So, my question (and yes, it's really subjective, but I need more help with this than with any of my other questions) is: what steps can we take to get upper management to move forward with Hadoop despite the hesitation of one member of the team?"} {"_id": "58084", "title": "How in-depth should a programmer develop his understanding of the Unified Process and UML?", "text": "I know a lot of places use UML and the Unified Process as their development model... but then there's a lot that don't. Would it be important to have more than a basic and functional understanding of these subjects, or would it be better to focus on developing programming skills in other areas (i.e. language development, IDEs, etc.)?"} {"_id": "189750", "title": "How to associate a new/modified changeset with an existing review in VisualStudio 2012", "text": "I'm trying out the new Code Review tool in Visual Studio 2012. It seems okay for the most part, but I've hit a wall in regard to when changes are required. How should this be handled in a code review in VS2012 land? 1. Joe submits a code review 2. Fred reviews and requests some changes 3. Joe makes the requested changes 4. ????? How does Joe associate a new changeset with the review so Fred can see it? Abandon the existing review (thus losing all the original comments), and create a new one? Or is there a way to attach the new changes to the existing review?"} {"_id": "58086", "title": "Design Code Outside of an IDE?", "text": "Does anyone design code outside of an IDE? I think that code design is great and all, but the only place I find myself actually designing code (besides in my head) is in the IDE itself. I generally think about it a little beforehand, but when I go to type it out, it is always in the IDE; no UML or anything like that. Now, I think having UML of your code is really good because you are able to see a lot more of the code on one screen; however, the issue I have is that once I type it in UML, I then have to type the actual code, and that is just a big duplication of effort for me. For those who work with C# and design code outside of Visual Studio (or at least outside Visual Studio's text editor), what tools do you use? Do those tools allow you to convert your design to actual skeleton code? Is it also possible to convert code to the design (when you update the code and need an updated UML diagram or whatnot)?"} {"_id": "21594", "title": "How to improve the programmers' work environment", "text": "I manage a team of six programmers, working on diverse systems. We work in an open plan office, with members sitting in cubicles. A lot of people on these forums are big on private offices, but that is not an option for me. But I was wondering if there were ideas for other ways to improve and energize the working environment and experience. One suggestion is more plants. Any suggestions would be greatly appreciated."} {"_id": "107242", "title": "Which database should I prefer while developing a WPF medical inventory system?", "text": "I have a project to develop an inventory system for a medical shop. To date, I have confronted simple requirements which were fulfilled with XML as the backend db. 
My working knowledge of XML is pretty good and I can make almost anything with XML using LINQ-TO-XML. Since this is an inventory system, I am a bit confused as to which database I should use. Can I stick with XML or should I proceed with SQL Server 2008? In case I use SQL Server, will I need to install SQL Server on the client machine as well? This is important information because SQL Server is a commercial product, hence I need to include this in my project cost estimation."} {"_id": "211567", "title": "Why are different components of the \"web platform\" not modularized?", "text": "The web platform is hip these days. But the web platform consists of many parts that are conceptually separable, developed at different times and paces, and (most important to me) could be useful on their own. Basically the web platform is \"a browser\", but the browser lumps together several things: * HTML/CSS renderer * DOM * scripting on the renderer and DOM (JavaScript) * sandboxing (including cross-site scripting restrictions, limited access to local files, etc.) * the browser itself, which is a GUI app with its own look and feel My question is, am I really the only one who thinks it would be useful to have each of these as separate components that are usable on their own, mixable and matchable with different versions of each other and with other programming tools? The renderer/DOM are the ones I'd really like to see abstracted, because HTML/CSS is a very nice way to describe interfaces, but it's annoying that it can only be used to describe interfaces on a web page. Most notably, the link between the rendering engine, JavaScript, and the browser GUI seems almost impenetrable. This makes it really annoying to use HTML/CSS for interface design, because your interface is always going to be running inside a browser that you don't control, and you have to use JavaScript. Compare that to other interface toolkits (like Tk, Qt, Wx), which often have bindings to multiple programming languages and can be leveraged in standalone apps that you have total control over. (This means, for instance, that browser-wide user preferences for things like fonts can interfere with your presentation, and it means any menus, keyboard shortcuts, or other UI doodads you create have to compete with those of the enclosing browser.) In similar fashion, it's very tough to write something that has an HTML/CSS interface, but stores data on your local computer, because browsers think it's an insecure web app, even if what you want it to be is just a \"regular app\". I know there are _historical_ reasons why people did all this together, but I find it baffling that at no point have people thought it would be worthwhile to make a separate, pluggable rendering engine that could be used with multiple programming languages, or a \"browserapp maker\" that would run a single web app in its own window, or anything like that. Or, do such things already exist? I'm aware of things like the web widgets in GUI toolkits, and XULRunner, but as far as I can see these still seem to be embedding a browser-like amalgam of several components. (In particular, there does not appear to be a stable, mature rendering engine with DOM-access bindings for languages other than JavaScript.) 
This makes them more heavyweight and less integrable into an application than it would be if you could just grab an HTML rendering library the same way you grab Qt or a PNG library or anything else."} {"_id": "157512", "title": "Product owners with more than one product?", "text": "Is it normal and still proper (in agile/Scrum-based software development) for a product owner to be in charge of more than one product?"} {"_id": "141708", "title": "Optimal communication pattern to update subscribers", "text": "What is the optimal way to update the subscribers' local model on changes C on a central model M? (M + C -> M_c) The update can be done by the following methods: 1. Publish the updated model M_c to all subscribers. Drawback: if the model is big compared to the change, it results in much more data to be communicated. 2. Publish change C to all subscribers. The subscribers will then update their local model in the same way as the server does. Drawback: The client needs to know the business logic to update the model in the same way as the server. It must be assured that the subscribed model stays equal to the central model. 3. Calculate the delta (or patch) of the change (M_c - M = D_c) and transfer the delta. Drawback: This requires that calculating and applying the delta (M + D_c = M_c) is a cheap/easy operation. If a client newly subscribes, it must be initialized. This involves sending the current model M. So method 1 is always required. Think of playing chess as a concrete example: Subscribers send moves and want to see the latest chess board state. The server checks the validity of the move and applies it to the chess board. The server can then send the updated chessboard (method 1), or just send the move (method 2), or send the delta (method 3): remove the piece on field D4, put a rook on field D8."} {"_id": "162643", "title": "Why is Clean Code suggesting avoiding protected variables?", "text": "Clean Code suggests avoiding protected variables in the \"Vertical Distance\" section of the \"Formatting\" chapter: > Concepts that are closely related should be kept vertically close to each other. Clearly this rule doesn't work for concepts that belong in separate files. But then closely related concepts should not be separated into different files unless you have a very good reason. Indeed, this is **one of the reasons that protected variables should be avoided**. What is the reasoning?"} {"_id": "193838", "title": "What are the licensing requirements for publishing and distributing an ASP.net application", "text": "I started developing websites using PHP. I have read many times that PHP is free and open source, which is an advantage over ASP.net. However, due to my current job requirements, I had to switch to ASP.net, which is pretty good. Now, after developing a few applications, I am wondering how I can publish them for free. As PHP is free and open source, it was never an issue for me. What's bothering me is this. Suppose I developed a web application using Visual Studio Express (or the Professional version or any other version): * Do I need any special permission/license from Microsoft to host my application on the Internet? * If I create a desktop application and want to distribute it for free on the internet, is any license required? 
* If not, then what are Microsoft licenses all about, and how is PHP free while ASP is not? Thanks!"} {"_id": "193839", "title": "Logging to database: Log first or action first?", "text": "Long story short, I'm working on a web-based frontend that interacts with a database, and one of the functions is that every action on a particular table gets logged to keep a full history of all changes to that table. An earlier attempt at using triggers in PostgreSQL to handle the logging automatically ran afoul of a couple of other requirements of software that will use said database as a backend, so I'm back to manually creating the log entries and saving them. My question is, what's best practice? Creating and storing the log first, then making the change, or making the change first, then storing the log? I realize that it's essentially a moot point because I'm wrapping the entire process into a transaction anyway, but I'm suddenly wondering if there are arguments in favor of either method."} {"_id": "162644", "title": "What is the value of workflow tools?", "text": "I'm new to workflow development, and I don't think I'm really getting the \"big picture\". Or perhaps to put it differently, these tools don't currently \"click\" in my head. So it seems that companies like to create business drawings to describe processes, and at some point someone decided that they could use a state-machine-like program to actually control processes from a lines-and-boxes-like diagram. Ten years later, these tools are huge and extremely complicated (my company is currently playing around with WebSphere, and I've attended some of the training; it's a monster. Even the so-called \"minimalist\" versions of these workflow tools, like Activiti, are huge and complicated, although not nearly as complicated as the beast that is WebSphere, afaict). What is the great benefit in doing it this way? I can kind of understand the simple lines-and-boxes diagrams being useful, but these things, as far as I can tell, are visual programming languages at this point, complete with conditionals and loops. Programmers here appear to be doing a significant amount of work in the lines-and-boxes layer, which to me just looks like a really crappy, really basic visual programming language. If you're going to go that far, why not just use some sort of scripting language? Have people thrown the baby out with the bathwater on this? Has the lines-and-boxes thing been taken to an absurd level, or am I just not understanding the value in all this? I'd really like to see arguments in defense of this by people who have worked with this technology and understand why it's useful. I don't see the value in it, but I recognize that I'm new to this as well and may not quite get it yet."} {"_id": "193834", "title": "What's the best practice for naming uploaded images?", "text": "Suppose I have a form in my web application where users can upload a profile picture. I've got a few requirements about file size, dimensions, etc., but when the user uploads the image, how should I name the file on my system? I suppose it would need to be consistent and also unique. Maybe a GUID? a5c627bedc3c44b7ae7c06a44fb3fcf8.jpg A timestamp? 129899740140465735.jpg A hash? Ex: md5 b1a9acaf295cf14ffbc5b6538294562c.jpg Is there a standard or recommended way to do this? Maybe an ASP.NET framework I should use?"} {"_id": "193830", "title": "Multiprocess RPC Architecture Design", "text": "I am currently working on a project that has client applications communicating with a server process. 
The client applications could be local to the same machine as the server process, on the same network, or over the internet. I had written a single-threaded asynchronous RPC module for the server process using Boost ASIO, while Google Protobuf was used for RPC message serialization. The architecture was a pretty simple client-server model where a client would pack a request into a protobuf message and send it to the server, which would send back a response. The server process is written in C++ and the client applications were in varying languages. The main use case for this was to present a GUI to allow users to monitor, and interact with, the server process. In the future, the server process is going to be separated into functionally separate processes instead of one monolithic process. This presents an issue wherein I now have multiple processes for the client applications to connect to. I don't think this would be a problem in the case of the local or LAN client applications but, over the internet, this could become unwieldy if the number of processes grows. Furthermore, I may want to enable IPC between server processes by having them send each other RPC messages. I'm not sure what distributed application architecture would best fit this system going forward. I don't want to reinvent the wheel if it's not necessary, so I've taken cursory glances at D-Bus and ZeroMQ, but seeing as I don't really have my architecture settled, I thought I'd ask you fellow programmers. Would it be wise to have a single server process handling communication between client applications and server processes, or is there a better architecture style I should be considering? If I want to enable IPC between server processes, should I have them directly send messages to each other, or should I use the single server process mentioned previously? Should I be looking at something like a publish/subscribe model?"} {"_id": "163513", "title": "Severity and relation to occurrence - priority?", "text": "I have been browsing through some webpages related to testing and found one dealing with the metrics of testing. It says: > The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence). I do not think this is correct - or what am I missing? Usually it is the priority which is the result of such a calculation (a severe bug that occurs rarely is still severe but does not have to be fixed immediately). Also, from this description, what is the difference between the effect on the end user and the business impact?"} {"_id": "239225", "title": "Coding: conciseness/efficiency vs readability", "text": "I am fairly new to C# and trying to learn best practices. I've been faced with many situations over the last week in which I need to make a choice between longer+simpler code, or shorter code that combines several actions into a single statement. What are some standards you veteran coders use when trying to write clear, concise code? Here is an example of the code I'm writing, with a few options. Which is preferable? A) if (ncPointType.StartsWith(\"A\"))//analog points { string[] precisionString = Regex.Split(unitsParam.Last(), \", \"); precision = int.Parse(precisionString[1]); } else { precision = null; } B) if (ncPointType.StartsWith(\"A\"))//analog points precision = int.Parse(Regex.Split(unitsParam.Last(), \", \")[1]); else precision = null; C) precision = ncPointType.StartsWith(\"A\") == true ? 
int.Parse(Regex.Split(unitsParam.Last(), \", \")[1]) : null;"} {"_id": "112966", "title": "How to link classes in different packages (on different pages)?", "text": "Extending the answer to this question, I have broken down a large system into a few classes per package. Each package is now shown on a different page for readability, but now how do I show the relationship of two classes in separate packages (on separate printed pages)? **An example for clarification:** ![E has an association with A but are in separate packages](http://i.stack.imgur.com/b6Hs4.jpg) E has an association with A, but they are in separate packages (which means they are on different pages)."} {"_id": "112962", "title": "When following SRP, how should I deal with validating and saving entities?", "text": "I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3. Let's say I have a `UsersController` with a `Create` action like this: public class UsersController : Controller { public ActionResult Create(CreateUserViewModel viewModel) { } } In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle, an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this: public class UsersController : Controller { private ICreateUserValidator validator; private IUserService service; public UsersController(ICreateUserValidator validator, IUserService service) { this.validator = validator; this.service = service; } public ActionResult Create(CreateUserViewModel viewModel) { ValidationResult result = validator.IsValid(viewModel); if (result.IsValid) { service.CreateUser(viewModel); return RedirectToAction(\"Index\"); } else { foreach (var errorMessage in result.ErrorMessages) { ModelState.AddModelError(String.Empty, errorMessage); } return View(viewModel); } } } That makes _some_ sense to me, but I'm not at all sure that this is the right way to handle things like this. It is, for example, entirely possible to pass an invalid instance of `CreateUserViewModel` to the `IUserService` class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my `ICreateUserValidator` checks the database to see if there already is another user with the same name... Another option is to let the `IUserService` take care of the validation, like this: public class UserService : IUserService { private ICreateUserValidator validator; public UserService(ICreateUserValidator validator) { this.validator = validator; } public ValidationResult CreateUser(CreateUserViewModel viewModel) { var result = validator.IsValid(viewModel); if (result.IsValid) { // Save the user } return result; } } But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?"} {"_id": "79101", "title": "Advanced learning topics for junior developers", "text": "I'm a junior developer at a company that has asked me to establish academic goals for the near future. 
I didn't realize how hard of a question this was until I could only come up with one answer off the top of my head: learn more design patterns. What subjects have you learned, after you finished school, that have helped you significantly?"} {"_id": "237532", "title": "Are random number generators security holes?", "text": "If I retrieve a random number from a database (e.g. RAND() in SQL Server) or using a programming language and send this in some form back to a client machine, is there a realistic chance I will be sending an indicator of what's in my server's memory that might form a security problem (like revealing my schema, etc.)?"} {"_id": "13045", "title": "When am I ready to start using jQuery for JavaScript?", "text": "I was told not to use jQuery as a beginner because it would hamper my learning of JavaScript. Now I've read a couple of books on JavaScript, read loads of sites, and made a JavaScript web app. Am I ready for jQuery? If not, then how will I know when I'm ready?"} {"_id": "237537", "title": "Progressive Enhancement vs. Single Page Apps", "text": "I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers, and devices on a network with low bandwidth, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script - you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML, and into something like a JSON API. I cannot seem to reconcile these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and I am missing something here with my mental model?"} {"_id": "54163", "title": "Origin of wParam and lParam", "text": "Given a standard window procedure function: `LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam);` Where do the names wParam and lParam come from, and what is the history behind them?"} {"_id": "234004", "title": "How does FDS (flat datacenter storage) make optimizations around locality unnecessary?", "text": "I was reading the following computer systems paper: https://www.usenix.org/system/files/conference/osdi12/osdi12-final-75.pdf And I was trying to understand why it claims that it does not need data locality for it to perform well. Basically, in the abstract it says: \"...FDS multiplexes an application's large scale I/O across the available throughput and latency budget of every disk in a cluster. 
FDS therefore makes many optimizations around data locality unnecessary.\" What I was not sure about was: why does multiplexing an application's I/O across the servers in the datacenter make it unnecessary to optimize in terms of data locality?"} {"_id": "143145", "title": "How to recover from finite-state-machine breakdown?", "text": "My question may seem very scientific, but I think it's a common problem and seasoned developers and programmers hopefully will have some advice to avoid the problem I mention in the title. Btw., what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all cost. By finite state machine I mean this > I have a UI with a few buttons, several session states relevant to that UI and what this UI represents, I have some data whose values are partly displayed in the UI, and I receive and handle some external triggers (represented by callbacks from sensors). I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but it was enough to make a few brute-force tests in the UI and I quickly realized that there are still gaps in the behavior... I patched them. However, as each component depends on and behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events, state changes, etc. I have several components and each behaves like this: Trigger received on input -> trigger and its sender analyzed -> output something (a message, a state change) based on analysis The problem is, this is not completely self-contained, and my components (a database item, a session state, some button's state)... COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event-chain or desirable scenario (the phone crashes, the battery runs empty, the phone turns off suddenly). This will introduce an invalid situation into the system, from which the system potentially COULD NOT BE ABLE to recover. I see this (although people do not realize this is the problem) in many of my competitors' apps that are on the App Store; customers write things like this: \"I added three documents, and after going here and there, I cannot open them, even if I see them.\" or \"I recorded videos every day, but after recording a too-long video, I cannot turn off captions on them... and the button for captions doesn't work.\" These are just shortened examples; customers often describe it in more detail. From the descriptions and behavior described in them, I assume that the particular app has an FSM breakdown. So the ultimate question is how can I avoid this, and how can I protect the system from blocking itself? EDIT> I am talking in the context of one view controller's view on the phone, I mean one part of the application. I understand the MVC pattern; I have separate modules for distinct functionality... Everything I describe is relevant to one canvas on the UI."} {"_id": "175098", "title": "Adhering to a protocol and being a subclass at the same time?", "text": "In Objective-C, I have a situation where I would like to have an abstract protocol (interface) with 5 methods and 4 properties, but at the same time, I'd like to have a common implementation of 3 of those 5 methods. 
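To make the structure concrete, here is a rough sketch of the shape being described - written in Python rather than Objective-C purely for brevity, and with entirely made-up names: an abstract interface, a base class that conforms to it while sharing 3 of the 5 implementations and stubbing the rest, and a concrete subclass that fills in the stubs.

```python
from abc import ABC, abstractmethod

# Hypothetical protocol: five methods (the four properties are elided).
class Renderer(ABC):
    @abstractmethod
    def setup(self): ...
    @abstractmethod
    def teardown(self): ...
    @abstractmethod
    def draw(self): ...
    @abstractmethod
    def resize(self, w, h): ...
    @abstractmethod
    def refresh(self): ...

# Base class: conforms to the protocol, provides the three shared
# implementations, and leaves empty stubs for the remaining two.
class BaseRenderer(Renderer):
    def draw(self): print("shared draw")
    def resize(self, w, h): print("shared resize", w, h)
    def refresh(self): print("shared refresh")
    def setup(self): pass      # stub, meant to be overridden
    def teardown(self): pass   # stub, meant to be overridden

# Concrete subclass: inherits the shared code, fills in the stubs.
class OpenGLRenderer(BaseRenderer):
    def setup(self): print("GL-specific setup")
    def teardown(self): print("GL-specific teardown")

OpenGLRenderer().draw()  # uses the shared implementation
```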
So my question is, is it OK to 1) have just an interface declaration with all the method and property declarations, 2) have a base class that adheres to that protocol (implements that interface) but also provides a common implementation of some of those methods, and only has empty stub implementations for the rest of the methods, and then finally, 3) have a bunch of subclasses (of that base class) that will conform to that protocol - but also inherit the common method implementations - and implement those stub methods on their own?"} {"_id": "20240", "title": "What is the best PHP ORM library?", "text": "I want to build a PHP web application, but I always face the problem that I need to connect to a database and deal with related objects. So I tried CodeIgniter, and it uses a special way to connect to a database named Active Record, and I've learned that it's an ORM technique and there are a lot of ORM libraries out there. What is the best, the easiest and the fastest of them?"} {"_id": "256113", "title": "Backbone vs Angular/Ember for a 1-3 person development shop?", "text": "We are a really small (1-3) Ruby shop building a Rails application and need to pick one of the 3 JS frameworks we've narrowed down to. Assume plenty of development experience overall but very little front-end JS framework experience (a little with Backbone but absolutely nothing with the other two). If productivity and speed of development (and the need to do fewer rewrites down the road - where the road could be 6-12 months long) along with community support are key metrics, what would be a reasonable recommendation amongst these 3? And by Backbone, I am assuming the need for Marionette or Chaplin (most likely the former). I have to say that I like the fact that Backbone starts off in a rather minimalistic fashion, allowing one to add the necessary modules progressively as opposed to starting off with them all (akin to Sinatra vs Rails), but having said that, I've chosen Rails (being equally familiar with Sinatra at this point) simply to not worry about adding the dependencies one by one. So, I am a little unclear about the right choice and simply fear losing a couple of months of development effort in the event Backbone loses traction in the community and/or it becomes difficult to find the necessary add-ons or something like that. (Forgive the rambling - guess this is what happens after 2-3 sleepless nights of JS framework comparisons)."} {"_id": "175095", "title": "Because of so many incumbent patents, is it possible to safely develop any software without the risk of legal action?", "text": "Take this patent: \"System and method for restricting user access rights on the internet based on rating information stored in a relational database\". There are hundreds of thousands of them out there. So basically you can't really program anything without breaching one of thousands of software patents. If your program succeeds, you will be sued by someone! Does this happen all the time and people get silenced? Do trendy startups get hit by things like this? Surely all major web properties would have been hit by the example above by AT&T?"} {"_id": "256115", "title": "What can be done when you are the only person to care about consistency?", "text": "After reading this question I may have a partial answer to the issue at hand, but I'd like to explore the issue further. I seem to be the only person on my team (a team of 6 people working on an Enterprise software solution) who cares about consistency. 
At times I am vocal about it, but there are members of my team who are just as vocal against it. I can file tickets, as the question I linked recommends, which sounds great. But I think the issue I am having has to do with the mentality or attitude of myself and my co-workers. What concrete steps can be taken if your team-mates don't care about consistency? Do I clobber them with statistics from studies that have been done linking the consistency of code with the speed at which code can be read? Do I flood our already overly full backlog with consistency issues that can then either be solved or put on record as being \"not important enough\"? Should I just shut up, sit down, write code, and quit worrying about it? How best to handle this situation?"} {"_id": "20246", "title": "Would you re-design completely under .Net?", "text": "A very extensive application began as an Access-based system (for database storage). Forms were written in VB5 and/or VB6. As .Net became a fixture in the development community, certain modules have been rewritten. This seems very awkward and potentially costly just to maintain, because of the cross-technologies and the extra work to keep the two technologies happy with each other. Of course, the application uses a mix of ~~ODBC~~ OleDb and MySql. ~~Would you think spending the time and resources to completely re-develop the application under .Net would be more cost effective?~~ In an effort to develop a more stable application, wouldn't it make sense to use .Net? Or continue chasing Access bugs, adding new features in .Net (which may or may not create new bugs between .Net and Access), and rewriting old Access modules into .Net modules under time constraints that prevent proper design and development? **Update** The application uses OleDb and MySql - I corrected my previous statement. Also, to lend further support to rewriting: I have since found out that when the _\"porting\"_ to .Net began, the VBA/VB6 code that existed was basically translated to the .Net equivalent. From my understanding, nothing was done to improve performance or take advantage of new libraries or technologies. In my opinion, this creates a very fragile and unstable application. With every new update, this becomes more and more visible. As a help desk technician, I have noticed an increase in problems reported. The customers using the software have noticed an increase in problems and are commenting on it."} {"_id": "29792", "title": "Should every member of a team use the same IDE?", "text": "Do you think it makes sense to enforce that every member of a team must use the same IDE? For instance, all engineers that are already on the team use IDE X. Two new engineers come and want to use IDE Y instead, because that's what they have been using for several years now. Do you have any experience with \"mixed IDE\" teams? If so, what is it?"} {"_id": "216039", "title": "Approximately how much is solid and broad knowledge of data structures and algorithms worth in the employment market?", "text": "I'm wondering how much I can boost my salary if I were to gain a strong and broad understanding of data structures and algorithms. I know the real basics, but there are so many data structures out there, so I'm wondering what the financial benefit of covering these in depth would be."} {"_id": "129192", "title": "Algorithmic problem: Picking Cards", "text": "The problem below appeared in the last Code Sprint 2 programming competition (it's over already). 
The base cases are clear, but developing an algorithm that solves all possible cases has been a challenge so far. > There are N cards on the table and each has a number between 0 and N on it. Let us denote the number on the ith card by Ni. You want to pick up all the cards, but you can only pick up the ith card if you have already picked up Ni cards before. (As an example, if a card has a value of 3 on it, you can\u2019t pick that card up unless you\u2019ve already picked up 3 cards previously.) In how many ways can all the cards be picked up? > > The input is the cards with their respective numbers, and the output should be the total number of ways possible to pick them up. Sample Input: 0 0 0 Sample Output: 6 Sample Input: 0 0 0 0 Sample Output: 24 Sample Input: 0 0 1 Sample Output: 4 Sample Input: 0 3 3 Sample Output: 0"} {"_id": "156644", "title": "How do I treat application aspects with regard to features and user stories?", "text": "When drawing up a backlog, I have several requirements that apply to a great many user stories, i.e. aspects of the application like error handling and feedback. How do I include these (without using an #include directive in each user story)? Should I treat error presentation as a feature, then have user stories for this feature like \"system catches exception, and shows info to user\"?"} {"_id": "216033", "title": "Is Java's easy decompilation a factor worth considering", "text": "We are considering the programming language for a desktop application with extensive GUI use (tables, windows) and heavy database use. We considered Java for use; however, the fact that it can be decompiled back into source code very easily is holding us back. There are of course many obfuscators available; however, they are just that: obfuscators. The only obfuscation worth doing that we got was stripping function and variable names down to meaningless letters and numbers, so that at least stealing code and renaming it back into something meaningful is too much work, and we are 100% sure it is not reversible back in any automated way. However, as far as protecting internals (like password hashes or sensitive variable contents) is concerned, we found obfuscators really lacking. Is there any way to make Java applications as hard to decode as their .exe counterparts? And is it a factor to consider when deciding whether to develop a desktop application in Java?"} {"_id": "156647", "title": "Should an API platform enforce only receiving JSON requests?", "text": "I am building an API platform. I have already ensured that the platform always returns JSON responses. My question is: **Should my API platform enforce the rule that all requests must be JSON? What are the benefits of making all requests JSON?** I understand the benefits of making all responses JSON, as this means consistency for the client apps using the API. I fail to see the benefits of making all requests JSON as well. I am asking this because the GitHub API v3 appears to be enforcing this rule. My API platform will involve uploading of files in the requests. As far as I know, JSON requests do not work well with file uploads. Or do I do a hybrid? Enforce that anything NOT to do with file uploads should send its requests as JSON, and allow an exception for file uploads?"} {"_id": "188247", "title": "Relation of CPU Hz and network speed in clusters", "text": "In cluster computing, Amdahl's law and Gustafson's law exist. 
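(For reference, these are the standard statements of the two laws - quoted from general knowledge, not from the question: Amdahl's law bounds the speedup when a fraction $p$ of the work can be parallelized over $N$ nodes, while Gustafson's law gives the scaled speedup when the problem grows with the cluster and $s$ is the serial fraction.)

$$ S_{\text{Amdahl}}(N) = \frac{1}{(1-p) + p/N} \xrightarrow[N \to \infty]{} \frac{1}{1-p}, \qquad S_{\text{Gustafson}}(N) = N - s\,(N-1) $$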
I thought that there might be some law which states the relation between CPU Hz and network speed: a maximum network speed after which no additional Mbps would make the calculations any faster. Do such limits exist, and what do they depend on?"} {"_id": "66515", "title": "Minimum Software for Learning C#, ASP.NET, Winforms", "text": "What is the minimum software list that one should be looking at for learning C#, ASP.NET and WinForms? I do have a system which has Windows 2003 installed on it. I am looking at something like WAMP for C#.NET development. Do we have a listing? Something along the following lines: 1. Visual Studio 2010 Express Edition 2. IIS 3. MySQL (anything specific to C#.NET, or should it definitely be SQL Server 2005/2008 Express Edition?) 4. LINQPad 5. Anything else?"} {"_id": "234123", "title": "Scheduling between child and parent process", "text": "When child processes are created using the fork system call, what are their scheduling priorities? Are they the same? If so, will the child process always run first and then the parent, or can this pattern be manipulated? I have an implementation which apparently is running the parent process first. Is this expected?"} {"_id": "71976", "title": "What are things to be taken care of before putting your app in android market?", "text": "I have developed a small Android app. I am planning to release it on the Google Android Market. What are the things I should be aware of before doing this?"} {"_id": "97948", "title": "What programming issues require a delay in processing a mail unsubscribe request?", "text": "Whenever I unsubscribe from a mailing list, I see that they say something like 'The change will take effect in 10 days/30 days/etc'. I would assume unsubscribing is just removing my email from some database. What is the idea behind making me wait so many days? Edit: Since this question was closed here as off-topic, I opened it on Super User. It was closed on Super User too. So I am rephrasing the question here."} {"_id": "161602", "title": "Would this data requirement suit a Document-Oriented database?", "text": "I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary; a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might have a need for a 10-column diary entry per day and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as the fields will be fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries) and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance/scaling aside, would it be any simpler or make more sense to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices."} {"_id": "161601", "title": "Is there any advantage to learning C first?", "text": "I've got a little bit of history with programming, having gotten my start with Visual Basic. 
I let it slide as a hobby from a little after VB6 up until just a year or so ago, when I got on the iPhone bandwagon and decided to take up Objective-C programming. I now have what I feel to be an intermediate level of knowledge of the language, but I can't help but feel that there's a substantial gap in my knowledge, given that there seems to be a substantial bit of the language I don't feel I understand. Would I benefit from reviewing the underlying C structure, or would I be better served by continuing to practice solely in Objective-C whilst reviewing the documentation for various features? **Edit:** Though I've accepted the first answer given, more insight is always appreciated from those with supplementary or opposing views."} {"_id": "97942", "title": "Refactoring techniques for asp.net webforms application", "text": "I'm working on a large application written in ASP.NET Web Forms. It was developed under ASP.NET 1.0, and still uses DataGrid, though portions have been updated. Most of the code resides either in the code-behind, or in controller classes that can't be instantiated outside of a running IIS. (The controller framework we use provides lifetime management tied to session state.) Data access is through a custom DAL, meaning that most of that code also requires a live database with proper data inside it. I want to decouple the code from the database and the web server, so that I can run it under a test harness. Are there any good strategies for moving from this sort of code to a more testable structure?"} {"_id": "161605", "title": "Dealing with units in arithmetic operations (multiplication and division)", "text": "I need to design a function to perform the basic arithmetic operations, that is `addition (+)`, `subtraction (-)`, `multiplication (x)`, and `division (/)`, between 2 numbers. That function takes 3 arguments: -`number1`: composed of a `value` and a `unit`. -`number2`: composed of a `value` and a `unit`. -`operation_type`: one of the 4 aforementioned operations. and should return: -`number3`: composed of a `value` and a `unit`. Returning `number3`'s `value` is easy, as all I need to do is use conditional statements to perform the corresponding operation (e.g. `if (operation_type == 'addition') { number3.value = number1.value + number2.value; }` ). However, I'm having difficulties figuring out how I should represent and deal with the `unit` ( _for `multiplication` and `division` operations, as for `addition` and `subtraction` it remains the same_) in a way that lets me chain up multiple operations and properly update the resulting `unit` every time. Here is an example, a chain of multiplications where each result feeds into the next step: Step 1: number1.unit: Kw, number2.unit: h / user / year, number3.unit: Kwh / user / year. Step 2: number1.unit: Kwh / user / year, number2.unit: user, number3.unit: Kwh / year. Step 3: number1.unit: Kwh / year, number2.unit: year, number3.unit: Kwh. And so on. My only constraint is that the `unit` must initially be represented as a `string`, because it is stored in the database; then we can parse it into an `object/array` as needed. 
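One possible representation, sketched below purely for illustration (a minimal Python sketch with hypothetical names, not necessarily the best design): keep the unit as two lists of factors, numerator and denominator, cancel common factors on every operation, and sort the factors so their order is canonical. Note that it deliberately keeps `Kw x h` as two factors; merging them into the composite name `Kwh` would need an extra lookup table of known compound units.

```python
# Minimal sketch: a unit as sorted numerator/denominator factor lists.
# Sorting gives a canonical order, and cancelling handles chains like
# Kw x (h / user / year) x user x year -> Kw x h.
class Unit:
    def __init__(self, num=None, den=None):
        self.num = sorted(num or [])
        self.den = sorted(den or [])
        self._cancel()

    @classmethod
    def parse(cls, text):
        # "Kwh / user / year" -> num=["Kwh"], den=["user", "year"]
        parts = [p.strip() for p in text.split("/")]
        num = parts[0].split(" x ") if parts[0] else []
        return cls(num, parts[1:])

    def _cancel(self):
        # Remove factors appearing on both sides (e.g. .../user * user).
        for factor in list(self.num):
            if factor in self.den:
                self.num.remove(factor)
                self.den.remove(factor)

    def multiply(self, other):
        return Unit(self.num + other.num, self.den + other.den)

    def divide(self, other):
        # Dividing is multiplying by the reciprocal.
        return Unit(self.num + other.den, self.den + other.num)

    def __str__(self):
        top = " x ".join(self.num) or "1"
        return " / ".join([top] + self.den)

# The example chain from above:
u = Unit.parse("Kw").multiply(Unit.parse("h / user / year"))
u = u.multiply(Unit.parse("user")).multiply(Unit.parse("year"))
print(u)  # Kw x h
```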
Some of the problems I'm facing: **Q1: how to ensure a consistent order in the resulting unit (e.g. `h x Kw` and `Kw x h` should both give `Kwh`)?** **Q2: how to deal with complex units (e.g. `foo / Kw x h / bar`)?** And because hope dies last: **Q3: Are there any known algorithms/design patterns for dealing with this?**"} {"_id": "178117", "title": "Is it conceivable to have millions of lists of data in memory in Python?", "text": "I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, with each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed-optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local."} {"_id": "178119", "title": "Why aren't Object Oriented databases used as much as Relational Databases?", "text": "I have come across many relational database management systems (RDBMS). But recently I used Hibernate, which made me start wondering why object-oriented databases aren't more popular. If object-oriented languages like Java or C# are so popular, then why aren't object-oriented database management systems (OODBMS) more popular too?"} {"_id": "191997", "title": "How can I unit-test my REST web service?", "text": "I am new to unit testing. I have one REST web method that just calls the DB and populates a DTO. The pseudo code is public object GetCustomer(int id) { CustomerDTO objCust = //get from DB return objCust; } My doubt is how to write tests for these methods, and which types of tests (integration/unit) to include. And for unit tests, do they need to hit the DB? If they did, and I passed a customer id and made a few assertions, the data might eventually change, resulting in failures. I think I am missing something here in understanding these concepts."} {"_id": "111680", "title": "How to become an Eclipse contributor?", "text": "I don't think I get enough programming at work, so I'm thinking of doing some contributions to Eclipse. It seems that might be the only way to get some of my bugs fixed :) But how do I get started? What are the requirements with regard to the software used? What is the typical process? Any special dos/don'ts one needs to know?"} {"_id": "27195", "title": "What are your thoughts on online editors?", "text": "Currently, I'm an Emacs user (I'm used to the commands, and I work in a bunch of different languages day-to-day, one of which is Common Lisp, so it's the natural choice), but a recent-ish talk by Steve Yegge has gotten me thinking that an online editor/IDE (if done well) might provide a lot of benefits. 
Both in terms of tracking large code-bases, and in terms of supporting editing at the same highly-interactive level for many languages. The Bespin roadmap makes it sound like they're going in the same direction, and there are also web-based implementations of both Emacs and vi (none of which I have any experience with). My question is twofold. First, are any of these editors ready for prime time? Meaning, are they at least on par with desktop-based editors? If not, do they reliably bring any of the benefits that a server-based implementation could support (things like fully combined VCS/editor/server-side preview/storage, real IDE-level support for many disparate languages, support for team coding/remote pair programming, etc.)? I'm more interested in experience than speculation here. Second, where do you see the concept going? Are editors eventually evolving into services, are they staying on the desktop, or will we get to a hybrid system that supports local editing but provides certain cluster-friendly pieces through a network connection? Feel free to speculate on this part."} {"_id": "12808", "title": "Keeping a connection string secure when working with others", "text": "Say you've started an open source project and are posting it on a public repository (like I have, using CodePlex). One of the key files just makes the connection to the database, contains the login/password, and is just included from any other source file that needs a database connection. What's the best way to share the project without giving out your password? So far I've specifically removed it before committing any changes, but I'm thinking there has to be a better way."} {"_id": "119051", "title": "Why does Java not permit the use of headers as in C++", "text": "I have a question for which I did not find an answer, except the following one, which does not meet my requirements: > \"Because James Gosling didn't want to\" I know that Java can have interfaces (only pure virtual functions, no attributes), but that is not the exact same thing as class definitions."} {"_id": "198641", "title": "What Does \"The Program Must Process Each Character Before Reading the Next One\" Mean?", "text": "From the book _Think Like a Programmer_ (emphasis mine): > The Luhn formula is a widely used system for validating identification numbers. Using the original number, double the value of every other digit. Then add the values of the individual digits together (if a doubled value now has two digits, add the digits individually). The identification number is valid if the sum is divisible by 10. Write a program that takes an identification number of arbitrary length and determines whether the number is valid under the Luhn formula. **The program must process each character before reading the next one.** In a broader sense, what does it mean to process each character before reading the next one? What does this look like syntactically? What does the opposite mean, and what does _it_ look like syntactically? 
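To illustrate the contrast (this is my reading of the constraint, not the book's code): "process each character before reading the next one" suggests a streaming loop in which a few integers of running state are updated as each digit arrives and the digit is then discarded, while the opposite is reading the whole number into a string or array first and iterating over it afterwards (as the attempt in the update below does). A minimal Python sketch of the streaming version, which - like the attempt below - doubles the digits at odd positions counted from the left:

```python
import sys

def luhn_valid(stream):
    checksum = 0
    position = 0
    while True:
        ch = stream.read(1)        # read exactly one character...
        if not ch.isdigit():       # ...stopping at EOF or a non-digit
            break
        digit = int(ch)            # process it completely...
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9         # same as adding the two digits
        checksum += digit          # ...before reading the next one
        position += 1
    return checksum % 10 == 0

print(luhn_valid(sys.stdin))
```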
**Update:** For some background, here's my attempt at the problem (in JavaScript):

Array.prototype.sum = function() {
    var sum = 0,
        itemCount = this.length,
        i;
    for (i = 0; i <= itemCount - 1; i++) {
        sum += this[i];
    }
    return sum;
}

function validateNumberWithLuhnFormula(number) {
    var numbersArray = number.toString().split(''),
        numberCount = numbersArray.length,
        i = 0,
        isValid = false,
        doubledDigit,
        checkSum,
        numbersToSum = [];
    for (i = 0; i <= numberCount - 1; i++) {
        // double every other digit (strictly, Luhn counts positions from the
        // rightmost digit, so indexing from the left assumes an odd length)
        if (i % 2 > 0) {
            doubledDigit = numbersArray[i] * 2;
            if (doubledDigit > 9) {
                numbersToSum.push(1, (doubledDigit - 10));
            } else {
                numbersToSum.push(doubledDigit);
            }
        } else {
            // undoubled digits must be added to the sum as well
            numbersToSum.push(Number(numbersArray[i]));
        }
    }
    checkSum = numbersToSum.sum();
    if (checkSum % 10 === 0) {
        isValid = true;
    }
    return isValid;
}"} {"_id": "221361", "title": "Syncing objects from code with the view in WPF", "text": "I've been reading some into it, but I am time pressed, so I would require a simple solution now and I promise to read up on it later. I come from a winforms c# background, and have lately been working with WPF. As the user interface I'm implementing is not that input intensive, I've gotten by with using INotifyPropertyChanged so far. Until I ran into this: I have a 3D interface from a third party library (MogreInWpf). Upon clicking an object in the 3D interface, I need to display some of the object's properties (position, size etc). Here's how my code-behind looks. I have a class, let's call it objectOfInterest, that saves all these properties, the 3D data and includes some methods that manipulate the object. In one of these methods, I generate a Grid containing textboxes to simulate name-value pairs of properties. Like \"X position:\" \"134\". As these properties are only user-changed, I've managed to bind the textbox on text changed event to respond to most changes, i.e. when the size is changed from 134 to 1345. Once this grid is generated, it is set to display in the main window using a ContentPresenter and setting its content to the objectOfInterest.grid. Works like a charm. Now, I have to implement a property that changes how the properties grid looks depending on a combobox included in this grid. For example, if the object is a cube, I want width/height/length to show, while I only want a radius property to show if it's a sphere. I'm kind of stuck here, as regenerating the grid upon combobox selectedValueChanged doesn't notify the view, and the generating occurs in the objectOfInterest class instead of the main window class, meaning I can't rebind the grid once it's regenerated (I cannot call contentPresenter.Content = objectOfInterest.grid again, as the main window has no idea the instance's grid has changed). So, without completely understanding the MVVM methods and rewriting most of the application to fit the model, how can this be accomplished?"} {"_id": "221362", "title": "processing Postfix log with python", "text": "I need to process all log messages from Postfix (`/var/log/mail/mail.log`), and print a summary/statistics (how many emails were sent/received and from/to which email addresses). The situation is made more complicated by the fact that Postfix has multi-line log entries (in contrast, `Apache` for example, has single line entries and the task would have been much easier).
A sample Postfix log might look something like this:

2013-12-03 14:40:45 postfix: 6F1AA10B: client=unknown[64.12.143.81]
2013-12-03 14:40:45 postfix: 6F1AA10B: message-id=<529DDF56.6050403@aol.com>
2013-12-03 14:40:45 postfix: 6F1AA10B: from=, size=1571, nrcpt=1 (queue active)
2013-12-03 14:40:45 postfix: 6F1AA10B: to=, relay=local, delay=0.13, delays=0.13
2013-12-03 14:40:45 postfix: 6F1AA10B: removed
2013-12-03 14:52:07 postfix: 9DD9610B: client=unknown[209.85.219.65]
2013-12-03 14:52:07 postfix: 9DD9610B: message-id=
2013-12-03 14:52:07 postfix: 9DD9610B: from=, size=2388, nrcpt=1 (queue active)
2013-12-03 14:52:07 postfix: 9DD9610B: to=, orig_to=, relay=local
2013-12-03 14:52:07 postfix: 9DD9610B: removed

Every email message that was processed by Postfix has a unique message ID (in my example `6F1AA10B`). What would be the best approach to process the logs in Python? What data structure would you recommend to use for storing the entries?"} {"_id": "50425", "title": "What is the difference between a Software Test Engineer and Software Test Analyst?", "text": "Do they differ in role and responsibilities? Or are the terms dependent on geographical area?"} {"_id": "229003", "title": "Difference between MPL 2.0 and LGPL 2.1 (+static linking exception)?", "text": "Is there a difference between MPL 2.0 and LGPL 2.1 ( **+static linking exception** )? If yes, what is it? As far as I understand, the only difference between them is that you can't use trademarks of contributors under MPL[1]."} {"_id": "223216", "title": "Force users to update to the latest version of my app on the Google Play Store", "text": "Is there a way to have a \"critical update\" for my app on the Google Play Store that would require the user to update the app to keep using it? For instance, let's say I push an update that brings a few bugfixes. Nothing mandatory, the user updates it if he wants to: that's a non-critical update. Now, say I correct a very serious security issue on my app and want all the users to update to this version to keep using the app: this is a critical update. Basically, if there is a critical update and the user has not updated yet, he cannot use the app until he does so. Is this possible using the Play Store? Thanks for your feedback!"} {"_id": "55616", "title": "Am I too young to be worrying about college right now?", "text": "I'm 19 and have been a hobbyist programmer since I was a freshman in high school (2002). I recently graduated in May of '08. Right out of high school I had my first job programming contract work for a company (got noticed off of my personal website). Originally the CEO and I had discussed working for about a year from home before going out there, as I had recently graduated, had no previous jobs and no money, and they wanted me to move across the United States. After 6 weeks of work and two projects I made my decision to stay with my family, because my father might lose his job, and with my sister and mother trying to finish college I want to be here to help out in case things get real rough. - Contract work stopped there, which was fine with me, I expected it. During my time working for them I learned at a rapid rate, and I was very eager and happy to work well over 12 hours a day to get whatever they threw at me done. I worked directly under a senior developer, who's a friend of mine, and they seemed to like my work and that I was doing a good job. Working for them as a whole was very good, professional, and very comfortable.
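Back to the Postfix question above (221362): since every line of one delivery shares the queue ID, the standard approach is a single pass that accumulates partial records in a dict keyed by that ID and emits a finished record when the `removed` line closes it. A minimal sketch in Python, assuming the simplified line layout of the sample (real Postfix logs carry a syslog prefix and full addresses, which the sample has stripped):

```python
import re
from collections import defaultdict

# "<date> <time> postfix: <QUEUE-ID>: <rest>"  -- simplified sample layout
LINE_RE = re.compile(r'^\S+ \S+ postfix: (?P<qid>[0-9A-F]+): (?P<rest>.*)$')
FIELD_RE = re.compile(r'\b(from|orig_to|to)=<?([^>,\s]*)>?')

def parse_mail_log(lines):
    """Group multi-line Postfix entries by queue ID, one dict per message."""
    pending = defaultdict(lambda: {"from": None, "to": []})
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        qid, rest = m.group("qid"), m.group("rest")
        if rest.startswith("removed"):
            yield pending.pop(qid)      # "removed" closes the entry
        else:
            for field, value in FIELD_RE.findall(rest):
                if field == "from":
                    pending[qid]["from"] = value
                else:
                    pending[qid]["to"].append(value)

# Usage: count messages per sender
# from collections import Counter
# with open("/var/log/mail/mail.log") as f:
#     print(Counter(msg["from"] for msg in parse_mail_log(f)))
```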
I've always thought that school is a joke and doesn't really prove anything (my high school grades show it, minus all the technology classes I aced) - but I always feel that I don't truly know enough to secure a good programming job. Visual Basic .NET, C#, WinForms, and sockets are some of my skills, but I am very modest about what I know, because there is just so much I need to learn. Question is.. how valuable can college \"really\" be? How can I really polish my skills up and fill in the holes of knowledge I may be missing? How can I really find out what I need to focus on, and where to go next? I find myself overwhelmed a lot of the time because I try to run a software development website, many projects, as well as helping other programmers who know much less than me, and of course personal life; it's all very frustrating. I have many directions to go, but I'm paralyzed over which way to pick.. I've read many topics on this already here, and on the net, but I'm young still and it feels like a lot of it doesn't apply to me."} {"_id": "55615", "title": "Best way to manage minutes of meeting", "text": "What is the best way of managing minutes of meetings on a daily basis? Do we have to manage the documents in a repository like VSS, or can they be maintained in an Excel sheet itself? Kindly share your experience and guidance. Thanks in advance"} {"_id": "220095", "title": "Is securing the credit card data considered a requirement from the customer", "text": "I'm taking an Introduction to Agile Software Development course. The instructor was discussing an example about an online store and asked us to write the **User Stories** and the **Conditions of Satisfaction**, and one story was _As a user I can commit to a marketing card that was selected previously in order to buy my goods_ ; the conditions of satisfaction in this case contained some conditions in addition to **The system must secure the Credit Card data across the internet connection using https** * I disagreed with that and said this is a thing that must be done regardless, not just in case the user asked me to protect it; it is a fatal threat like, or more important than, SQL injections, and it is a non-functional requirement at the technical level, so whether the user mentions it or not, it must be taken into account in our development. Some project managers in the course showed their disagreement and said this is a waste of time on features that the user didn't ask for! The instructor didn't give my comment any attention and didn't reply to me. I really think I'm right and am not convinced otherwise, but all of those trained people disagreed with me [I'm new in the software engineering field but have always worked as a developer], so I'm asking you about this."} {"_id": "196079", "title": "Are there any large scale enterprise frameworks for PHP", "text": "I'm curious if there are any commonly used large enterprise frameworks for PHP that would be your whole or most of your environment if you were working in PHP. Something comparable to ASP.NET's WebForms or MVC in that when you're working in either one, most of your code and system is based off of working integratively within that framework.
Edit: Specifically I'm looking for a framework that fits the bill of basically being what you would use for 99% of your work in writing your website; this means: * data retrieval, some kind of ORM in the framework * data representation, some kind of data-aware, automatically configured UI controls that make reading and writing to a DB easy * data caching * data serialization * data communication, an easy way to generate and host web services of various sorts"} {"_id": "166558", "title": "How can I figure out if programming is right for me?", "text": "I have an IT background and was pretty confident until an opportunity came up at work to go into programming (C#). I have never programmed before this, and the software I am programming for is a program I have never used before (a 3D modeling software). It has been 6 months since then and I feel like giving up. I didn't get much training... about 3 weeks of training spread out over the last 6 months. I think I would be good at programming but this experience is making me rethink my decision. I'm not sure if it's just me, or if this frustration is normal. How can I figure out if programming is right for me?"} {"_id": "166551", "title": "How do I make a .sh file that counts each time you compile?", "text": "I wanted to compile my program and I wanted to know how many times I have compiled it. How do I make a .sh file to do that? (I'm using a Mac; is it a .sh file?)"} {"_id": "189113", "title": "Best practices to avoid fake profiles?", "text": "When offering to create a profile (for example, login+pwd) for a web service, what are the best practices one should implement to avoid mass spamming/creation of fake profiles? I am thinking about email confirmation, captchas, etc... any other ideas that work in practice?"} {"_id": "166555", "title": "Best C# database communication technique", "text": "A few days ago, I read an answer stating that the days of writing queries within your C# code are long gone. I'm not sure what the specific person meant by the comment, but it got me thinking. At the company where I work, we maintain an assembly containing all the queries to the database (let's call it Queries). This assembly is referenced by a QueryService (retrieve the correct queries) assembly, which in turn is referenced by a UnitOfWork assembly (the database connector classes; we have different connector classes for SQL, MySQL etc.). We use these three assemblies to perform operations on our database, and all queries/commands are written in C# code. Is there a better way to communicate with the database, and is there a better way to communicate with different database types?"} {"_id": "166554", "title": "How would I go about measuring the impact an article has on the internet?", "text": "For an application of mine, I analyze the sentiment of articles, using NLTK, to display sentiment trends. But right now all articles weigh the same amount. This does not show a very accurate picture, because some articles have a higher impact on the internet than others. For example, a blog post from some unknown blog should not weigh the same amount as an article from the New York Times. How can I determine their impact?"}
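On that last question (166554): a common first cut is to weight each article's sentiment by a source-authority score (traffic rank, inbound links, social shares) and report the weighted mean instead of the plain mean. A minimal sketch in Python; the authority numbers are made up for illustration:

```python
# Hypothetical authority scores; in practice these might come from a
# traffic-rank API, inbound-link counts, or share counts.
SOURCE_AUTHORITY = {"nytimes.com": 100.0, "smallblog.example": 1.0}
DEFAULT_AUTHORITY = 1.0

def weighted_sentiment(articles):
    """articles: iterable of (source_domain, sentiment), sentiment in [-1, 1]."""
    total = weight_sum = 0.0
    for domain, sentiment in articles:
        w = SOURCE_AUTHORITY.get(domain, DEFAULT_AUTHORITY)
        total += w * sentiment
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

print(weighted_sentiment([("nytimes.com", 0.8), ("smallblog.example", -0.9)]))
# ~0.78 -- the high-authority source dominates the trend
```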
{"_id": "60910", "title": "What's the best way to determine when it is appropriate to use a database?", "text": "I began my programming career in web development using PHP and MySQL. I got quite accustomed to utilizing a db for storage of most dynamic data as well as some setting/parameter data. Sometimes there would be a lot of data, while other times entries in the tables would be few. To me this just seemed natural, and as far as I'm aware this is more or less an acceptable approach in web development. (Please correct me if I'm wrong...) I'm delving into desktop applications now, and my natural inclination is to again utilize a db to store a lot of information that will be generated through the usage of the application. ~~However, as far as I can tell I don't see applications (that I use) utilizing a db very often.~~ _[EDIT: It has since been pointed out that this was a faulty assumption, in that many applications do use lightweight dbs embedded into the program itself.]_ What is the reason for this? At what point is it appropriate to utilize a db? Are there any standards on this matter? Also, what are the reasons to NOT use a db to develop a desktop application?"} {"_id": "107889", "title": "When should I use a 2-property class over a pre-built structure like a KeyValuePair?", "text": "When should you put Key/Value type of data in its own class instead of using a pre-built generic structure, such as a `KeyValuePair` or a `Tuple`? For example, most ComboBoxes I create contain a DisplayName and a Value. This is the kind of data I am trying to decide when to put in a new class, and when to just use a KeyValuePair. I am currently working on something that uses `iCalendar`, and the selected user's data ultimately gets combined into a `key1=value1;key2=value2;` type of string. I started out by putting the data in a `KeyValuePair`, but now I am wondering if that should be its own class instead. Overall, I am interested in finding out what guidelines are used when deciding to use an existing structure/class like a `KeyValuePair` over a 2-property object, and in what kind of situations you would use one over another."} {"_id": "107883", "title": "AGPL - what you can do and what you can't", "text": "AGPL is a fairly new license that was meant to go GPL-over-networks. However, not being a lawyer, and actually not having read the whole license, I can't understand what exactly you can do freely and what not with AGPL. My uncertainty is fed by this post about MongoDB (which is AGPL) and even more by the comments below. If we follow the comments, it turns out that you can use AGPL libraries with your closed-source, commercial server-side software, as long as you don't modify the library. Is that the case? Or do you have to distribute your entire application when you use an AGPL licensed library? The case with MongoDB is that it uses the Apache license for the client code, which poses another question. What happens if you use AGPL software, but deploy it as a different application than your closed-source commercial one? For example, take iText \\- it is an AGPL library: * if you use it and modify it, do you have to open-source your entire application, or do you have to redistribute only the changes in iText? * if you use it and **don't** modify it, do you have to open-source your entire application? * If you wrap iText in another application that you start as a separate process, but use it from your main application, should you open-source everything, or just the wrapper application? (The wrapper application will be an HTTP-based API that will take pdf files and will return the results of using iText as JSON). Can this be used to circumvent the AGPL license? Note: The question is about AGPLv3"}
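A side note on the KeyValuePair question above (107889): the usual heuristic is that once the two values have domain meaning (a DisplayName and a Value) or cross an API boundary, a tiny named type documents itself better than a generic pair. The question is about C#, but the trade-off is language-neutral; a minimal Python analogue:

```python
from dataclasses import dataclass

# Generic pair: the reader must remember which slot is which.
option = ("Norway", "NO")        # display name first? value first?

# Tiny named type: self-documenting at every use site, easy to extend.
@dataclass(frozen=True)
class ComboOption:
    display_name: str
    value: str

option = ComboOption(display_name="Norway", value="NO")
print(option.display_name)       # intent is explicit
```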
{"_id": "138333", "title": "What are effective ways to introduce the concept of code kata into the workplace?", "text": "In your experience, what are some effective ways to introduce code kata practice into an organization or company? To be clear, I'm not concerned with the usefulness of code kata. I'm interested in methods to introduce this concept to a development team."} {"_id": "110484", "title": "Source control approach for plugins?", "text": "I'm writing an application and a set of many small plugins—where “many” means dozens and “small” means one or two sources. It's not necessarily going to be possible to divide the plugins into neat, non-overlapping categories. Which would be the best strategy for source control: * Many repositories, one for each plugin; or * One repository, with a directory for each plugin? Other suggestions are of course welcome. I'm using Mercurial and hosting on Bitbucket."} {"_id": "110487", "title": "My customer wants me to record a video of how I develop his software product", "text": "Working as a freelancer, I often see _strange_ requests from my customers, some of which can negatively affect my daily work¹, and others try to set some sort of control. I usually encounter those things during preliminary negotiations, so it's easy enough at this stage to explain to the customer that I do care about my work and productivity and expect my customers to trust my work. Things were much harder² on a project I just accepted, since it's only after the end of the negotiations (the contract being already signed and not mentioning anything about video tracking) and after I started to work on the project that **my customer requested that I record a video of all I do on my machine when working on his project** , that is, a video which will show that I move the cursor, type a character, open a file, move a window, etc. I work in my own company, using my own PCs. I answered this customer that such a request cannot be accepted, since: * Hundreds of hours of work on a dual-screen PC will require a large amount of disk space for the recorded videos. Even if I don't care about space, I do care about this customer wasting my bandwidth downloading those videos. * Recording a video can affect the overall performance and decrease my productivity (which is not actually true, since the machine is powerful enough to record this video without performance loss, but, well, it still looks like a valid argument). * I can't always remember to turn the video recording on before starting the work, and off at the end. * It may be a privacy concern. What if I switch to my mails when recording the video? What if, to open the directory with the files about this customer's project, I first open the parent directory containing the list of all of my customers? * Such a video cannot be a reliable source to track the cost of a project (I'm paid by the hour), since some work is done with just a pencil and a paper (which _is_ actually true, since I do lots of draft work without using the PC). Despite those points, **the customer considers that if I don't want to record the video, it's because I have something to hide** and want to lie about the real time spent on his project³. **How can I explain to him that it is not a usual practice for freelancers to record videos of their daily work** , and that such extravagant requests must be reserved for exceptional circumstances⁴?
* * * \u00b9 The most frequent example is to be requested to work through Remote Desktop on a more-than-slow server which uses a more-than-slow Internet connection, or to be forced to use an outdated software as Windows Me without serious reasons as legacy support. \u00b2 In fact, I already did a lot of management and system design related work, which is essential, but usually misunderstood by customers and perceived as a waste of time and money. Observing the concerned customer, I'm pretty sure that he will refuse to pay a large amount of money for what was already done, since there is actually zero lines of code. Even if legally I can easily prove that there was a lot of work on design level, I don't want to end my relation with this customer in a court. \u00b3 Which is not as risky as it could be, since I gave to this customer the expected and the maximum cost of the project, so the customer is sure to never be asked to pay more than the maximum amount, specified in the contract, even if the real work costs more. \u2074 One case when I effectively record on my own initiative the video of actions is when I have to do some manipulations directly on a production server of a customer, especially when it comes to security issues. Recording those steps may be a good idea to know precisely what was done, and also ensure that there were no errors in my work, or see what were those errors. * * * **Update:** First of all, thank you for all your answers and comments. Since the question attracted much more attention and had much more answers than I expected, I imagine that it can be relevant to other people, so I add an update. First, to summarize the answers and the comments, it was suggested to (ordered randomly): * Suggest other ways of tracking, as shown in Twitter Code Swarm video, or deliver a \"short milestone with a simple, clear deliverable, followed by more complex milestones\", etc. * Explain that the video is not a reliable source and can be faked, and that it would be difficult to implement, especially for support. * Explain that the video is not a reliable source since it shows only a small part of the work: a large amount of work is done without using a computer, not counting the extra hours spent thinking about a solution to a problem. * Stick with the contract; if the customer wants to change it, he must expect new negotiations and a higher price. * Do the video, \"but require that the customer put [the] entire fee into an escrow account\", require a lawyer to video tape all billable time, etc., in other words, \"operate in an environment void of trust\", requiring the customer to support the additional cost. * Search for the laws which forbid this. Several people asked in what country I live. I'm in France. Such laws exist to protect the employees of a company (there is a strict regulation about security cameras etc., but I'm pretty sure nothing forbids a freelancer to sign consciously a contract which forces him to record the screen while he works on a project. * Just do and send the videos: the customer will \"watch a few ten second snippets of activity he won't understand\", then throw those videos away. * Say no. After all, it's _my_ business, and I'm the only one to decide how to conduct it. Also, the contract is already signed, and has nothing about video tracking. * Say no. The processes and practices I employ in my company can be considered as trade secrets and are or can be classified. * Quit. If the relation starts like this, chances are it will end badly soon or later. 
Also, \"if he's treating you like a thief - and that is what he's suggesting - then it's just going to get worse later when XYZ feature doesn't work exactly the way he envisioned\". While all those suggestions are equally valuable, I've personally chosen to say to my customer that I accept to do the videos, but in this case, **we must renegotiate the contract** , keeping in mind that there will be a **considerable cost, including the additional fee for copyright release**. The new overall cost would be in average three times the actual cost of the project. Knowing this customer, I'm completely sure that he would never accept to pay so much, so the problem is solved. * * * **Second update:** The customer effectively declined the proposal to renegotiate the original contract, taking in account the considerable additional cost."} {"_id": "75545", "title": "Will you list C++/CLI on your resume?", "text": "A follow-up of Languages on a resume: Is it better to put \"C/C++\" or \"C, C++\"? Would you list `C++/CLI` on your resume? How? How would a resume reviewer initially react to it if `C++/CLI` is new to the reviewer? Will the reviewer feel that the candidate is not precise enough, just like resumes which list `C/C++` instead of `C, C++`? Will a resume parser splits them into two parts `C++`, `CLI` by using the slash as separator? Is it even a language? (Or, a _\"glue language\"_ )"} {"_id": "127706", "title": "Why the Select is before the From in a SQL Query?", "text": "This is something that bothered me a lot at school. 5 years ago, when I learned SQL, I always wondered why we specify first the fields we want and then where we want them from. According to my idea, we should write: From Employee e Select e.Name So why the norm says: Select e.Name -- Eeeeek, what e means ? From Employee e -- Ok, now I know what e is It took me weeks to understand SQL, and I know that a lot of that time was consumed by the wrong order of elements. It is like writing in C#: string name = employee.Name; var employee = this.GetEmployee(); So, I assume that it has a historical reason, anyone knows why?"} {"_id": "65276", "title": "How to learn/become capable of providing quality app design?", "text": "I'm interested in things a programmer should learn/know to provide extremely good application architecture design. As known any design can be regarded as good one for certain circumstances but when it comes to extending app feature set or providing some optimization and etc. it turns out that app was poorly designed or lacks some abstraction. How to be able to predict and include such stuff into app design? And from the other hand, software is here to be robust, efficient and fast. And it's main purpose from some point of view is to bring money (by saving or gaining them). So if one goes deep with abstraction and considered certain stuff that will be needed in 2 or more years, he's not going to end up with good app architecture (he's gonna have lots of unnecessary stuff). So how to understand to which extent certain stuff in app architecture is needed? Hope I made myself clear, else please comment on vague parts."} {"_id": "65275", "title": "controlling 2 Windows simultaneously for a-b testing", "text": "I'm searching for an app that is cloning all my commands(mouse and keyboard) to a second window. Especially for Browsers. I'd like to test a web page and want to see the different behavior in two windows(different revisions of the page) directly. For example, in window 1 i click link3 and an a specific url opens. 
This should be done automatically in the second window. If I enter some form data, this should be cloned as well. Is there any application for Windows or Linux which serves this purpose? I know that there are command line tools which clone commands in one terminal to several others."} {"_id": "214521", "title": "Using Lucene and SQL Server together. Newbie needs directions", "text": "Basically the whole thing can be explained simply: I need to index one or more SQL Server 2005 databases with Lucene so I can search the various records. I found a lot of examples and documentation and I started to read it. The result is that I got confused and frustrated. So, as a beginner, what are the steps to implement it along with SQL Server to search in the databases? I need some kind of workflow to follow so I can sort out a plan to work out the project. I don't really know where to start. -Bonus question: Is there a way to store the Lucene index directly in the SQL Server database instead of a FSDirectory or RAMDirectory? -Last one: Does Lucene support similar-word searches? (I.e.: I search 'lu' and I get all the similar words shown like the Google suggestions, for example 'luke', 'lucene', 'lua', etc.) Please be as detailed as possible; I can program in C# and I've already written some SQL Server CLR assemblies to do various tasks, but Lucene mechanics are quite obscure to me. Thank you!"} {"_id": "214529", "title": "How to create unit/integration tests for my web app?", "text": "I am currently developing an Ajax chat application that uses PHP on the back-end. Some of the features it has right now are different types of users (admin/mod/banned), public and private rooms, commands, and announcements. I recently learned about unit/integration testing and it sounds like a good alternative to manually testing everything every time I release a new feature. How can I get started with adding testing functionality into my PHP application? BTW: The app uses long polling (where PHP holds the connection through usleep() and constantly queries the database until new information is returned) so I think that might be a problem when testing it."} {"_id": "250516", "title": "How should I include jQuery in a library?", "text": "I'm writing a JavaScript graphing library using canvas which I am licensing under MIT, and I'm using jQuery, as well as a couple of other open sourced libraries, all under MIT. I'm also using bower to manage my front-end dependencies. How should I best handle these dependencies without violating the license? Preferably, the user would not have to include multiple scripts, just one `<script>` tag, and it would include all of the plugins. Should I just ask the user to include jQuery and the other libraries in their own script tags, or should I concat all of the scripts together into one big file, headers included?"} {"_id": "92898", "title": "How do you decide if you should take a project?", "text": "I am a fairly new developer. Professionally I have programmed in C# for two years as an intern and 6 months as a junior developer. A friend of my family needs help with a project that is written in VB.net. I have never used VB.net, so I am a little worried there. But, the real question comes from the fact that once I looked at the documents for the project, I had a feeling that nothing really good will come from it. I have a feeling that it will cause more stress than I would like to have in my life currently. How do experienced developers make the decision of whether to take the project or just let it go? What are some good metrics to make the decision easier?
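On the Lucene bonus question a little further up (214521): the "Google suggestions" behaviour is a prefix query, and Lucene ships a PrefixQuery class for exactly this. The underlying idea is easy to see with a sorted vocabulary and two binary searches; a minimal sketch in Python, illustrating the concept rather than Lucene's API:

```python
import bisect

def suggest(sorted_words, prefix, limit=5):
    # All words starting with `prefix` form a contiguous slice of the
    # sorted list; find its bounds with binary search.  The "\uffff"
    # sentinel sorts after any ordinary word character.
    lo = bisect.bisect_left(sorted_words, prefix)
    hi = bisect.bisect_right(sorted_words, prefix + "\uffff")
    return sorted_words[lo:hi][:limit]

vocab = sorted(["lua", "lucene", "luke", "lumen", "python"])
print(suggest(vocab, "lu"))   # ['lua', 'lucene', 'luke', 'lumen']
```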
**Edit** This actually seems like a very large ERP that he would like me to work on, and I don't believe that he knows anything about programming, so I don't think the fact that I am very junior has even crossed his mind."} {"_id": "250518", "title": "What is a good design for a method that can return several logically different results?", "text": "The question title is probably too abstract, so let me provide a particular example of what I have in mind: There is a webservice that encapsulates a process of changing passwords for users of a distributed system. The webservice accepts a user's login, his old password and a new password. Based on this input, it can return one of the following three results: 1. In case the user was not found, or his old password does not match, it will simply return with HTTP 403 Forbidden. 2. Otherwise, it takes the new password and makes sure that it conforms to a password policy (e.g. it is long enough, contains a proper mix of letters and numbers, etc.). If it does not, it will return an XML describing why the password does not conform to the policy. 3. Otherwise, it will change the password and return an XML containing an expiration date of the new password. Now, I'd like to design a class, ideally with a single method, to encapsulate working with this webservice. My first shot was this:

public class PasswordManagementWebService
{
    public ChangePasswordResult ChangePassword(string login, string oldPassword, string newPassword)
    {
        ChangePasswordResult result;
        // send input to webservice, it's not important how; the httpResponse
        // will contain a response from the webservice
        var httpResponse;
        if (HasAuthenticationFailed(httpResponse))
        {
            throw new AuthenticationException();
        }
        else if (WasPasswordSuccessfullyChanged(httpResponse))
        {
            result = new ChangePasswordSuccessfulResult(httpResponse);
        }
        else
        {
            result = new ChangePasswordUnsuccessfulResult(httpResponse);
        }
        return result;
    }
}

public abstract class ChangePasswordResult
{
    public abstract bool WasSuccessful { get; }
}

public class ChangePasswordSuccessfulResult : ChangePasswordResult
{
    public ChangePasswordSuccessfulResult(HttpResponse httpResponse)
    {
        // initialize the class from the httpResponse
    }

    public override bool WasSuccessful { get { return true; } }

    public DateTime ExpirationDate { get; private set; }
}

public class ChangePasswordUnsuccessfulResult : ChangePasswordResult
{
    public ChangePasswordUnsuccessfulResult(HttpResponse httpResponse)
    {
        // initialize the class from the httpResponse
    }

    public override bool WasSuccessful { get { return false; } }

    public bool WasPasswordLongEnough { get; private set; }
    public bool DoesPasswordHaveToContainNumbers { get; private set; }
    // ... etc.
}

As you can see, I've decided to use separate classes for return cases #2 and #3 - I could have used a single class with a boolean, but it feels like a smell; the class would have no clear purpose. With two separate classes, a user of my `PasswordManagementWebService` class now has to know which classes inherit from `ChangePasswordResult` and cast to the correct one based on the `WasSuccessful` property. While I now do have nice, laser-focused classes, I made the life of my users more difficult than it should be. As for case #1, I've just decided to throw an exception. I could have created a separate exception for case #2, too, and only return something from the method when the password _was_ successfully changed.
However, this doesn't feel right - I don't think that a new password being invalid is a state exceptional enough to warrant throwing an exception. I am not very sure how I would design things were there more than two non-exceptional result types from the webservice. Probably, I would change the type of the `WasSuccessful` property from boolean to an enum and rename it to `ResultType`, adding a dedicated class inherited from `ChangePasswordResult` for each possible `ResultType`. Finally, to the **actual question:** Is this design approach (i.e. having one abstract class and forcing clients to cast to a correct result based on a property) a correct one when dealing with problems like this? If yes, is there a way to improve it (perhaps with a different strategy for when to throw exceptions vs. return results)? If no, what would you recommend?"} {"_id": "114474", "title": "Multiple Zend application code organisation", "text": "For the past year I have been working on a series of applications all based on the Zend framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have \"grouped\" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time when my time is up anyone else coming in can have no problem extending on what I have produced. Example structure:

applicationStorage\\ (contains all applications and associated data)
applicationStorage\\Applications\\ (contains the applications themselves)
applicationStorage\\Applications\\external\\ (application grouping folder) (contains all external customer access applications)
applicationStorage\\Applications\\external\\site\\ (main external customer access application)
applicationStorage\\Applications\\external\\site\\Modules\\
applicationStorage\\Applications\\external\\site\\Config\\
applicationStorage\\Applications\\external\\site\\Layouts\\
applicationStorage\\Applications\\external\\site\\ZendExtended\\ (contains extended Zend classes specific to this application, example: ZendExtended_Controller_Action extends zend_controller_Action)
applicationStorage\\Applications\\external\\mobile\\ (mobile external customer access application, different workflow, limited capabilities compared to full site version)
applicationStorage\\Applications\\internal\\ (application grouping folder) (contains all internal company applications)
applicationStorage\\Applications\\internal\\site\\ (main internal application)
applicationStorage\\Applications\\internal\\mobile\\ (mobile access has different flow and limited abilities compared to main site version)
applicationStorage\\Tests\\ (contains PHP unit tests)
applicationStorage\\Library\\
applicationStorage\\Library\\Service\\ (contains all business logic, services and servicelocator; these are completely decoupled from Zend framework and rely on models' interfaces)
applicationStorage\\Library\\Zend\\ (Zend framework)
applicationStorage\\Library\\Models\\ (doesn't know services but is linked to Zend framework for DB operations; contains model interfaces and model datamappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper,
Icustomer/IcustomerMapper)

(Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.) For the library, this contains all \"universal\" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and dbtables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework). Using a serviceLocator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a service_auth_External implementing service_auth_Interface registered. Same with internal applications, with Service_Auth_Internal implementing service_auth_Interface Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions, I would greatly appreciate them. I have used contextswitching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services.

\\applicationstorage\\Applications\\internal\\webservice
\\applicationstorage\\Applications\\external\\webservice"} {"_id": "41849", "title": "Machine Learning Web Jobs", "text": "I always see job positions at web companies for Machine Learning. For example, Facebook always has this type of job opening. Anyway, I was curious as to what exactly web companies use machine learning for. Is it for giving people ads based on their site surfing history, or something like that? I want to know because I have some experience with machine learning and it sounds like a fun thing to work on, as long as I can convince the business guys to go ahead with it."} {"_id": "79931", "title": "Data Access Layer, Business Class or Repository?", "text": "I've been having a debate within my team on what constitutes a Data Access Layer vs Data Functions vs Business Layers. My thought is that all database access is done in a data access layer with Repository classes. The DAL contains utility classes & methods that populate a DataSet, List<>, or POCO, or execute SQL, but as internal classes using your choice of access method: ADO, EntityFramework, nHibernate, etc. The Business layer then interacts with the DAL without knowing any SQL or data access methodology. A teammate has this approach: the stored procedure names and SQL are in the business classes, then they are passed to the DAL, which then populates and executes the SQL. Which is the best practice?"} {"_id": "104281", "title": "How do you deal with errors in enumeration / list processing (lowish-level API)", "text": "I've struggled with variants of this problem many times, experimenting with different solutions, happy with none.
**Abstract:** Enumerating a list of items, where an error for one item does not affect the others. Potential callers have varying requirements on how to treat these errors; the goal of the API is to simplify use and reduce complexity. A more general case in which this problem occurs would be processing a list of independent items, where partial success is acceptable / desired, i.e. it's _not_ run as a transaction. * * * **Example scenario** : I am dealing with a low-level C-style enumeration API like this:

// gets called for each \"stuff\" item
typedef bool (*tEnumStuffCallback)(stuff * item, void * callbackArg);

// calls \"callback\" for each \"stuff\" item.
// caller enumeration context in callbackArg
void EnumStuff(tEnumStuffCallback callback, void * callbackArg)
{
    stuff * item = 0;
    for (;;)
    {
        int err = stuff_get_next_item(&item);
        if (err == no_more_stuff)
            return;

        // === TODO: handle \"real\" errors ===

        // pass item to callback, stop enumeration if requested
        if (!callback(item, callbackArg))
            return;
    }
}

For backward compatibility and \"caller expectations\", I want/need to preserve the pure C style without exceptions. A higher-level wrapper, built on top, might look like this:

EnumStuffs(std::vector< boost::shared_ptr<stuff> > & items);

The caller may need one of the following strategies when enumeration errors occur: * Stop enumeration on first error, e.g. because the caller will ignore the entire list and enumeration is costly. A C++ client might want to throw an exception from the callback (allowed). * Ignore the items - e.g. when enumerating connected devices, looking for a specific one to be available, where device-not-found errors are handled separately * Ignore, but log diagnostics in a caller-specific way (i.e. the caller needs to know about errors) * The caller needs a detailed list of errors for further analysis, e.g. items may be \"retried\" on certain types of errors Note that this is _the caller's choice_ ; I can't force a decision. * * * **Note / Context** The example is idealized; in practice, I have multiple enumerators involving different types; sometimes I have to provide the enumerator, sometimes the enumerator function comes from the OS. Error information can be more complex than a single `int`. The underlying API involves dozens of rather confusing types already; I want to avoid introducing the same amount of additional/wrapper types. Bridging different development styles is actually part of my job description; opportunities for educating either end are tricky (I have to pick my battles). * * * **My ideas / solutions** * Provide a separate callback for the errors. Enables all scenarios, but bloats the interface. * Replace the enumerator function with a stateful object. Increases the number of types, does not provide backward \"style\" compatibility. * Pass the errors to the enumerator. That's _sometimes_ a bit better than the previous solution, but usually requires an \"enumeration context\" structure instead of separate arguments. Goes against the plan to avoid additional types for each enumerator. * Introduce an \"error type\"; the caller can either request a \"list of errors\" as a result, or get the \"stop/throw at first error\" behavior by default. I've used that before, but in the end the detailed information is typically used only to build some diagnostics, and isn't \"liked\" as it makes it \"complicated to deal with errors\". * * * **Questions** What are your thoughts? Do you have ideas for, or have you used, other approaches? Under what circumstances would you use / not use the above approaches? * * *"}
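For the enumeration question just above: one shape that serves all four caller strategies without a separate error callback is to hand back successes and failures side by side and let the caller pick a policy. A minimal sketch in Python (the C/C++ version would fill two containers or a small result struct; the names here are illustrative):

```python
def process_all(inputs, process_one):
    """Apply process_one to every input; an error on one item must not
    affect the others (partial success, not a transaction)."""
    results, errors = [], []
    for item in inputs:
        try:
            results.append(process_one(item))
        except Exception as exc:
            errors.append((item, exc))   # keep the input so it can be retried
    return results, errors

# The caller chooses the strategy:
results, errors = process_all(["1", "2", "x"], int)
print(results)                        # [1, 2]  -- ignore-errors caller
print([item for item, _ in errors])   # ['x']   -- log/retry caller
# a fail-fast caller would instead do: if errors: raise errors[0][1]
```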
* * *"} {"_id": "104285", "title": "Supplying code to people helping you debug?", "text": "I was wondering if it is wise or advisable to send over a large amount of source code of one's application for debugging and error solving purposes, I'm working on an iOS app that I hope to release to the App Store and am seeking help on Stack Overflow and I'm tempted to send over a large amount of the code so someone (they requested it) can help me. Would you recommend sending over a significant portion of your app to someone for debugging purposes?"} {"_id": "165609", "title": "Using a GPL game engine", "text": "I am confused. I thought I would try and use a new game engine in order to expand my abilities, and found a nice engine called springrts. I was looking through the licensing info and it is licensed under the GPL 2 licence. If I remember correctly, does that mean that anything I make with it, I have to distribute the source code to?"} {"_id": "104289", "title": "Are Java certifications important for an architect role?", "text": "My this question is career path related. I want to know how much Java Certifications (SCJP, SCWCD and others) are important for an architect position. If a person posses a good experience in Java development and want to pursue his career on architect level, do you guys think he need to have certification on his CV. If he has never worked on lead developer roles? If you conducting my interview for an architect position. And I have worked as a Java web developer in different teams having 5 years of exp. Never lead any. And I am having certification badges on my CV. How can a developer make his career path towards being an architect in a team?"} {"_id": "251518", "title": "Testing an MMO server", "text": "I'm working on a **server** for a very large (feature wise) MMO. After some bad experiences with breaking changes that caused bugs weeks down the line, we'd like to add unit/automated/regression tests to our project before we get much father (we've implemented approximately 5% of our requirements). We haven't really used \"serious\" tests before (we've done the tutorial check- division-by-zero testing) so I thought I'd locate some guides for developing client-server test solutions. I was not able to find anything of much relevance. How should I deal with testing the following aspects of a typical server? * Testing the client-server communication (API can be broken down into \"parsing\", \"handling\" and \"sending\" stages) * Testing changes to an SQL database * Testing security measures * Testing **around** security measures (ie, making sure our tests don't trip our own security code) * Testing timed events In case it's relevant, our language is C#, .Net 4.5"} {"_id": "69954", "title": "Looking for suggestions on where to purchase vs.net 2010 from? which edition?", "text": "I need to purchase vs.net 2010 so I can use all the new features and entity framework. ANy suggestions on which edition I should get? Where is a good place to buy it on the cheap?"} {"_id": "206122", "title": "Openning an Excel File (Like it opens in MS office when double clicked) but using java", "text": "some buddy told me there is nothing impossible in programming but i dont know about this one... :) well i want to open an .xlsx file and make it appear on screen normal in MS office, but the problem is, i want to write a program in JAVA for this, which control opening and closing of that file... ???? 
{"_id": "250091", "title": "In UML is it correct to have an association class with a composition or aggregation relationship?", "text": "An example of an association class is given here: http://pic.dhe.ibm.com/infocenter/rsarthlp/v9/index.jsp?topic=%2Fcom.ibm.xtools.modeler.doc%2Ftopics%2Fcassnclss.html Composition and Aggregation are also types of Associations, just with more defined semantics for what the association consists of. Can association classes also be part of a relationship by Composition or Aggregation? That is, in the case where there is a many-to-many relationship between two entities, with some additional attributes on the relationship. Or would it be more correct to model that as two separate one-to-many relationships onto a middle entity that holds the additional attributes?"} {"_id": "162967", "title": "My father wants to learn PHP-MySQL to port his application. What should I do to help?", "text": "My father is a doctor/physician. About 15 years ago he started writing an application to handle his patients' medical records in his clinic at home. The app has the ability to input patients' medical records (obviously), search patients by some criteria, manage medicine stocks, output receipts to a printer, and some more CRUDs. He wrote it in dBase III+. A few years later he migrated to FoxPro 2.6 for DOS, and finally in a few more years he rewrote his app in Visual FoxPro 9. And now (actually two years ago) he wants to rewrite it in PHP, but he doesn't know how. The Visual FoxPro version of this app is still running and has no serious problems, except that sometimes it performs slowly. Usually there are 1-5 concurrent users. The binary and database files are shared via windows share. He did all the coding as a hobby and for free (it is for his own clinic after all). He also uses this app in two other offices he manages. Some reasons why he wants to rewrite in PHP-MySQL: * He wants to learn * Easier to deploy (?) * Easier client setup, needing only a browser What should I do to help my father? How should he start? I explored some options: 1. I let my father learn PHP and MySQL (and HTML (and JavaScript?)) from scratch. 2. I create/bundle a framework. I'm thinking of bundling CodeIgniter and a web UI framework (any suggestions?), especially to reduce the effort of writing presentation code. What do you think? * * * ## tl;dr My father (a doctor) wants to rewrite his Visual FoxPro app in PHP-MySQL. He knows very little of PHP and MySQL but he wants to learn. What should I do to help? How should he start? * * * Some facts: * My father is 50 years old. * His first encounter with a PC was in the early 1980s. It was an IBM PC with an Intel 8088. * He knows BASIC. * He taught me how to use DOS and how to program with BASIC. * The other language he knows fairly well is dBase/FoxPro. * I got my bachelor's CS degree last year. * I know the internals of my father's app because sometimes he wants me to help him write his app. * * * Update After reading early responses, there seem to be better alternatives to PHP for this case.
I'm open to other language/technology suggestions."} {"_id": "99848", "title": "Teaching Myself Computer Science, what should I learn?", "text": "I've already been programming for quite some time and have a firm grasp on programming itself, OOP, and a few other programming-related things. However, **I'm interested in learning the same things that would be taught to a computer science graduate, and was wondering what I need to cover.** In case it's relevant, I've programmed with PHP, Java, Python, C & C++ and I'm looking into Lisp/Scheme."} {"_id": "99842", "title": "VS2010 winform designer to learn, or \"bottom up\" approach?", "text": "Long story short, I'm president of a programming club at my university. We're all 1st and 2nd year programming students. We have a project we're about to start working on (converting a console program to a gui program, in c#). Only two of us have any experience working with winforms and event-driven programming in general. We want to balance learning with production (getting the project finished), as most of us don't have time to commit to a full-on, 40-hour-a-week project. Our adviser is trying to steer us in the direction of not using VS2010's winform designer; he's trying to make it a requirement that we code everything in a \"notepad++\" type environment. So basically, I need to get some answers to a few questions that I'm having trouble researching, so when we discuss how to proceed later today and tomorrow, I'm armed with some actual knowledge. (We are all, including the adviser, generally \"new\" to winforms, c#, and .net.) In \"real world\" situations with winforms, how frequent is it that the designer would not be used at all? Are we (the club officers) right in our estimation that what we have figured as a 4-6 month project (using the designer) would double, if not triple, coding everything without it? What are the merits of not using the designer at all, as a learning tool, to \"see how it's done\"? Is it a valid learning opportunity if we use the designer, then look at the generated code to get more familiar with how our custom controls/event handlers should be done? I'm currently at a bit of a loss, and thank you all for your responses. (Also, if possible, could you list your years in the field, and degrees, if you respond; our adviser is huge on only taking opinions from those he considers \"qualified\". Thanks.) Edit -- Also take into account that learning is also one of our priorities, not just time constraints (we're trying to strike a balance between learning every single thing that we can, and having a project that needs so much time invested in it that it becomes unfeasible for us)."} {"_id": "167607", "title": "How can I reformat my condition to make it better?", "text": "I have a condition: if(exists && !isDirectory || !exists) {} How can I modify it so that it is more understandable?"} {"_id": "220540", "title": "Is UML a good way to shortly and simply explain a concept?", "text": "I was recently in a job interview that was mostly technical. One of the things that stood out was my difficulty explaining ideas and concepts. During the interview I decided not to use UML to explain those concepts, because I thought it could be too time-consuming and I would make a mess on paper. So how can UML be used to explain ideas in a situation like an interview, where you need to be as concise as possible when explaining technical concepts and ideas?
If you think UML is not a good solution for situations like this, then an alternative would be appreciated."} {"_id": "220547", "title": "Custom directive or ng-show/hide", "text": "On my form I have an icon which represents whether my entity is locked (shown by a locked padlock) or unlocked (an open padlock). At the model level, this is represented by a boolean property (isLocked). The user can toggle the entity between locked and unlocked by clicking the icon. This also updates the icon tool-tip text. This calls a controller method that toggles the entity.isLocked property. It is like a fancy kind of checkbox. I can implement this in one of (at least) two ways: 1) Create a custom angular directive that changes the class of the icon element to show the correct icon and also sets the tool-tip text. The icon will then be a single DOM element in my view, decorated with the custom directive. 2) Put both the locked and unlocked icon elements in the DOM and show/hide each one using the built-in ng-show directive. Option 1 involves writing custom code, which feels like the wrong thing to do, whereas option 2 makes maximum use of the built-in Angular features, but leaves the HTML more cluttered. In general, should I prefer the custom directive method (option 1), the HTML method (option 2), or some other method? Which would be considered more idiomatic for Angular, do you think? And, more importantly, why? I did consider putting this on StackOverflow, but it feels like it would be considered off-topic there because it is a matter of opinion in the end..."} {"_id": "220549", "title": "Should timeout be a public static property or a parameter to every function?", "text": "**TLDR:** Should TIMEOUT be a public static property on my static class, or a parameter to every function? * * * **Background:** I am releasing a c# client-api library that facilitates communicating with our REST api. The client-api consists of an object model and one large static class with a bunch of request extension methods for various types: 10 or so `.Get()` variations, a few `.Post()`, `.Put()`, etc. This static class full of extension methods is preconfigured with our server URL and some other static configuration constants related to the connection. Because all the functionality is exposed via extension methods, there's no need to create an instance of this class. Any configuration is done via static properties and affects all requests made by the application. **Problem:** Right now, all requests use a default timeout of 3 seconds (which seems generous to us, as our API is meant to be very responsive). Some users with a bad connection, though, might want to wait longer for a response. Our quick-fix solution is to expose DEFAULT_TIMEOUT as a public static property on the class which can be set once on initialization, but is the proper thing to do to add a new optional parameter to every single one of our dozens of methods?

.Get(int timeout = DEFAULT_TIMEOUT) { ... }
.Get(..., int timeout = DEFAULT_TIMEOUT) { ... }
.Get(..., ..., int timeout = DEFAULT_TIMEOUT) { ... }
.Get(..., ..., ..., int timeout = DEFAULT_TIMEOUT) { ... }
...
.Post(..., int timeout = DEFAULT_TIMEOUT) { ... }
.Put(..., int timeout = DEFAULT_TIMEOUT) { ... }
.Save(..., int timeout = DEFAULT_TIMEOUT) { ... }
...
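The usual compromise for exactly this signature explosion is both knobs: a class-level default plus one optional per-call parameter that falls back to it, so no method body carries more than one extra line. A minimal sketch of the pattern in Python (the question is C#, where a sentinel default such as `int timeout = -1` plus an in-method fallback plays the same role, since C# optional-parameter defaults must be compile-time constants):

```python
DEFAULT_TIMEOUT = 3.0      # single shared knob, settable once at startup

_UNSET = object()          # sentinel: distinguishes "not passed" from None

def get(url, timeout=_UNSET):
    if timeout is _UNSET:
        timeout = DEFAULT_TIMEOUT   # fall back to the shared default
    # ... perform the request using `timeout` ...
    return url, timeout

# Most callers never mention timeouts:
print(get("https://api.example.com/thing/19473"))
# A caller on a slow connection overrides a single call:
print(get("https://api.example.com/thing/19473", timeout=20.0))
```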
On the one hand, we want to be flexible to whatever the consumer of our client-api library might need, but on the other hand, I don't want to get in the habit of polluting every method with every optional parameter that might impact a request (server url, authorization tokens, etc.) What I don't want is for the user to feel the need to do this: ClientAPI.DEFAULT_TIMEOUT = 5000; Thing found = new Thing(){ id = \"19473\" }.Get(); found.count += 1; ClientAPI.DEFAULT_TIMEOUT = 20000; found.Put(); ClientAPI.DEFAULT_TIMEOUT = 8000; List widgets = found.GetSubresource(\"widgets\"); ...etc Is it reasonable to assume users won't be micromanaging timeouts for individual requests like that?"} {"_id": "105629", "title": "Web development in a small team - best practices", "text": "Currently developing a web app in a team of two, maybe three in the near future. The tech stack at the moment is: Flask, MongoDB, and ExtJS for the frontend. I currently have the project under version control using Mercurial and Bitbucket. Question: What exactly is the best way to work as a team on a web app project? I usually work on the back end while my colleague works on the front end. Sometimes I also help out on the front end. How should we do this? Each has a repo on their system, but where is the web server started? Currently I have it on my computer, but that means that my partner needs to commit and push for each modification and I need to pull the changes and merge for each change. And for frontend stuff there needs to be a lot of changes. We tried having a server started on each of our machines, but it's a pain with multiple MongoDB servers. Anyway, any tips, clues and advice are greatly appreciated and welcome."} {"_id": "105621", "title": "HTML5 card game", "text": "I created a card game in Silverlight a year ago, in order to learn a bit about Silverlight. I am now wanting to make an HTML5 version of the game in an effort to learn a little bit more about that. I am thinking I'd like to take advantage of stuff like Knockout.js and WebSockets and the canvas element. Now what I'm confused about is how to lay out the cards on the screen. With Silverlight I was able to make a "Hand" control, which was made up of two sub controls: the cards the player has in their hand and the ones they have on the table. They were made up of Card controls. I don't believe there is the concept of a User Control in JavaScript; so I am possibly thinking about this in entirely the wrong way. I have a client side JSON object called game, which contains an array of players; each player has a hand which is made up of an array of in-hand cards and on-table cards. Ideally I would like to bind these to something using Knockout.js, but I don't know what I could bind to. How could I lay out some cards on the table, and perhaps reuse something for each player? Would I simply position images (of cards) on a canvas? Is there a way to make some kind of hand object that each player could have and that I could bind to?"} {"_id": "105627", "title": "Can I brand my open-source application?", "text": "Is it possible for the author to register a logo or the name of his/her application when it is open-source because it uses a GPL library (for example)? The application uses the library but it has its own features; that is, it's not a modification of the library.
So everyone can see the source code, downloading an "anonymous" non-branded version from Sourceforge, but no one can use the logo or the name, and so the author can sell it from his/her web-site (for example) to non-expert users, only after payment. Is he/she obliged to give out the source code directly, or is it enough that the non-branded version is available on Sourceforge or another official repository?"} {"_id": "52138", "title": "Do you keep intermediate files under version control?", "text": "Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a jpeg for integration in Flash. I compile the fla as an asset library, which is then used in my Flash Builder project to produce the final swf. So it goes like: psd => jpg -> fla => swc -> Flash Builder project => swf (where => means "produces" and -> means "is used in"). The psd, fla, and Flash Builder Project are source files: they are **not** the result of some process. The jpg and swc are what I would call "intermediate" files. They are the product of one (or more) source file(s) that are used as input in another tool or process. The swf is the final result. So, would you keep those intermediate files under version control? How do you deal with them?"} {"_id": "143961", "title": "Optimum Number of Parallel Processes", "text": "I just finished coding a (basic) ray tracer in C# for fun and for the learning experience. Now I want to further that learning experience. It seems to me that ray tracing is a prime candidate for parallel processing, which is something I have very little experience in. My question is this: how do I know the optimum number of concurrent processes to run? My first instinct tells me: it depends on how many cores my processor has, but like I said I'm new to this and I may be neglecting something."} {"_id": "102934", "title": "Big source tree refactor ahead - what tool to use?", "text": "We are doing a major refactor of the layout of our source tree. Masses of files are being moved, folders are being renamed, etc. etc. We currently have everything in SVN. We're going to move to either GIT or HG in the future. However, now we're thinking that maybe we should move to GIT or HG first and then do the refactor. We're worried that people with work in progress on SVN branches aren't going to be able to merge the refactoring changes into their branches, NOR are they going to be able to merge any changes to files that have been moved and/or renamed back into the trunk. 1. Which of GIT and HG would better handle this type of refactoring? 2. If we do move to GIT/HG and then do the refactor, what will merging be like for the work-in-progress branches once they move to GIT/HG?"} {"_id": "60007", "title": "OpenID implementation - PHP, Javascript, MySQL", "text": "I've started doing some research on the technologies that I will need for my website. I'm trying to implement a really simple website with OpenID user registration. The website will store a block of text for each user. I imagine this means that I will need a database with: * User ID * OpenID URL * Data Having said that, I'm still having trouble deciding what I really need to do this. I know that I will need the following for the actual site: * Javascript * JQuery * CSS But on the back end, I'm kind of lost at the moment. I've been looking at the OpenID-Selector, which is coded in Javascript. It seems like it's exactly what is used on this site. Will I actually need PHP?
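(Returning to the parallel-processes question above: for CPU-bound work like ray tracing, one worker per core is the usual starting point, since extra processes beyond that mostly add scheduling overhead; more only pays off when workers block on I/O. A minimal Python sketch with a stand-in workload, added for illustration.)

```python
import multiprocessing as mp

def render_rows(rows):
    # Stand-in for tracing a band of image rows (hypothetical workload).
    return [sum((r * c) % 7 for c in range(640)) for r in rows]

if __name__ == "__main__":
    height = 480
    workers = mp.cpu_count()  # one process per core is the usual first guess
    bands = [list(range(i, height, workers)) for i in range(workers)]
    with mp.Pool(workers) as pool:
        image_bands = pool.map(render_rows, bands)
    print(len(image_bands), "bands rendered")
```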
MySQL for the data and user registration?"} {"_id": "102932", "title": "How to tell if a license is compatible with your program?", "text": "I'm using a code sample from MSDN in my project. Accompanying the sample is this license: http://pastebin.com/K46NYf69 I've modified the code sample slightly to better suit my needs and I now want to release my program as open source. However, I'm not sure which licenses are compatible with this license because I cannot find it on Google. The license claims it is a Microsoft _Permissive_ License but I can only find Microsoft _Public_ License. Also, I'm not completely sure what this license allows. Can I modify the sample? Can I package it in my project under another license like the GPL or the Apache/BSD license? Thanks for reading."} {"_id": "143968", "title": "Form Follows Function in Programming?", "text": "Does the saying, " **form follows function** " hold true in programming or language-design? Why or why not?"} {"_id": "113177", "title": "Why do languages such as C and C++ not have garbage collection, while Java does?", "text": "Well, I know that there are things like malloc/free for C, and new/using-a-destructor for memory management in C++, but I was wondering why there aren't "new updates" to these languages that allow the user to have the option to manually manage memory, or for the system to do it automatically (garbage collection)? Somewhat of a newb-ish question, but I've only been in CS for about a year."} {"_id": "113176", "title": "What is the name for this variation to Adapter Pattern?", "text": "### Introduction An Adapter normally wraps another object, so that it can be used in an interface it wasn't designed for, e.g., when you want to use interface Node { Node parent(); Iterable<Node> children(); } together with class TreeModel { private Node root; // example method (stupid) Node grandparent(Node node) { return node.parent().parent(); } } and you're given a class like class File { File getParent() {...} File[] listFiles() {...} } you need to write some FileToNodeAdapter. Unfortunately, it means that you need to wrap every single object, and you also need both a way to get from FileToNodeAdapter to File (which is trivial, since it's embedded) and a way to get from File to FileToNodeAdapter, which leads either to creating a new object each time or to using some Map, which must be either globally accessible or referenced in each FileToNodeAdapter. ### The Pattern Replace the interface Node by interface NodeWorker<T> { T parentOf(T node); Iterable<T> childrenOf(T node); } and modify the TreeModel like class TreeModel<T> { private NodeWorker<T> nodeWorker; private T root; // example method (stupid) T grandparent(T node) { return nodeWorker.parentOf(nodeWorker.parentOf(node)); } ... } Does this pattern have a name? Are there any disadvantages, besides the fact that it is a little bit more verbose and only applicable when you are in charge of the TreeModel code?"} {"_id": "36441", "title": "Hobbyist programmer releasing software with a donate button", "text": "I'd like to start this with a disclaimer that I realize that a full, clear-cut answer should be sought out by a lawyer. I am more so curious about what other users of this community have done * * * Say that I had a small program that I had developed for fun, that I wished to release to the public. I'll drop it out there with one of the various open-source licenses, and probably put it up on SourceForge or Git in case anybody should ever want to fork/maintain/check out code.
Also say that I wanted to accept donations for the project, with absolutely 0 expectation that people will send any money. However, if somebody donated in order to buy me a beer or a pizza for the work that they liked, I would accept gladly. The question, then, is what are the general requirements of accepting donations? Can it go into a personal account with no questions asked as a "gift," or do I need to set up an LLC to avoid any taxation issues? (US citizen here). Again, yes, this should be discussed with a lawyer, but I also know that many projects that I see have the ability to donate, and assume that the community probably has a decent amount of experience in this regard."} {"_id": "36443", "title": "How not to suffer from ideologists when you're a pragmatic person?", "text": "I'm a pragmatic person (I think I am. But then again, Jon here has an interesting point ). Sometimes, the simplest solution to a problem that gets the job done is the one that fits best for me, if it's not an utter blasphemy and reproach to any design principles. Check out my answer to this question on Stack Overflow. Simple. Works. Was accepted. Could be improved. Is clearly not perfect and not very elaborate. And along comes this guy. He downvotes me, comments on the question how his answer is better, more accurate etc and when I ask him why he downvoted me, he calls me _plain wrong_ in his comments. Reminds me of this comic strip. Just to get this straight: His answer is clearly better. But that's not the point! While on Stack Overflow I can laugh and not really care about these things because those people are far away, in the real world I'm suffering from ideologies every now and then. Heck, I'm not creating a miracle piece of software, I need to keep that huge legacy thing running, and it's an adventure to me every day. I'm good at some things and bad at other things. I'm eager to learn stuff. But I can accept one or two flaws in a system as what they are: flaws. Tomorrow, we're going to refactor all of them, but first let's do what the customer wants, and then have a beer. My questions are: * How do you deal with ideologies / ideologists, when you're a pragmatic person? * How do you deal with pragmatism / pragmatists, when you're an ideological person? I'm interested in both points of view."} {"_id": "21926", "title": "Is it possible to be good at both programming and graphic design?", "text": "The stereotypical view is that a programmer can't do graphics very well, from what I've read and seen. However, I love programming (preferably OOP, PHP, C++, Objective-C) and can't deny the fact I have a unique taste in web design, and others have said I am doing well at it (CSS). I thought to myself "Hey, wait, I'm a programmer - how can I design well?". Question is: is it possible to be good at programming and designing? Does anyone here feel the same? For the record: actual images I have created have been called programmer art several times before by friends."} {"_id": "212291", "title": "Search algorithm", "text": "I would like to create a site where users can post articles with the following optional parts: * A title * Contents (text) * Categories * Keywords Articles will be stored in MongoDB and the site will be built in Node.js. Users can search the site using a normal search text box. I'm thinking about creating the following collections: * Users * Articles * Keywords I will then create an entry for each keyword used in the Keywords collection with an array containing all the articles that use it.
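(A minimal sketch of that Keywords-collection idea, using plain Python dictionaries in place of MongoDB; the data is made up for illustration. The structure is an inverted index: each keyword maps to the set of article ids using it, so a query becomes a union of posting lists, with relevance ranking applied afterwards to the candidate set only.)

```python
from collections import defaultdict

# Made-up articles; in the real app these would live in the Articles collection.
articles = {
    1: {"title": "CRISPR primer", "keywords": ["science", "biology"]},
    2: {"title": "Rocket fuels",  "keywords": ["science", "chemistry"]},
    3: {"title": "Tax tips",      "keywords": ["finance"]},
}

# The Keywords collection as an inverted index: keyword -> article ids.
index = defaultdict(set)
for article_id, article in articles.items():
    for keyword in article["keywords"]:
        index[keyword].add(article_id)

def search(query):
    # Break the query into keywords and union the posting lists;
    # ranking would then only score this candidate set.
    return set().union(*(index.get(term, set()) for term in query.lower().split()))

print(search("science chemistry"))  # {1, 2}
```

Whether to maintain such an index by hand or lean on MongoDB's own text indexes (or a dedicated engine like Lucene, which the question mentions) is mostly a question of scale and of how much control over ranking is needed.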
If a user conducts a search, the search is broken up into keywords and each keyword is looked up in the Keywords collection. Each article is then retrieved from the db and ranked based on relevance. My questions are: 1. Would it be efficient to use a Keywords collection like this, or should I just use the Articles collection (using full-text search or something), or should I structure it in some other way? 2. How would I incorporate the ability to search the title, contents or categories for articles instead of just the keywords? 3. Would it be better to use something like Apache Lucene than to build this functionality myself?"} {"_id": "190257", "title": "How do I test my validation without being too much of a perfectionist, yet leave most of the logic tested?", "text": "For example, I would like to validate that a name is letters only and is between 4 and 14 letters in length. I have the following code in the model: validates :name, :format => { :with => /^[a-zA-Z]+$/, :message => 'Name should be letters only.' }, :length => { :minimum => 4, :maximum => 14 } So it clearly lets me do what I want. But as for unit tests, I have a bit too much perfectionism, so I set up something like invalid_names = ['1234', 'qwe', '1%#$#$', 'Sam1', '%', random_string(15)] # I also have a test method to create a random string with parametrized length valid_names = ['test', 'Travis', 'John', random_string(5), random_string(14), random_string(4)] and test each of them in a loop with asserts, like invalid_names.each do |name| user = User.new(:name => name) user.save assert user.errors[:name].any?, \"#{name} is valid.\" end So it definitely works great. But it is too verbose (because of the valid/invalid names arrays, and the added `random_string` method), and also I can't be sure my test actually tests all possible symbols and their combinations and all lengths and stuff, even though I am definitely sure it works as expected. So what is an acceptable way to test my validation without being too much of a perfectionist, yet leave most of the logic tested? Am I just caught in a mind trick of trying to write perfect code for its own sake, forgetting about the main goal: a working product?"} {"_id": "170003", "title": "My proposed design is usually worse than my colleague's - how do I get better?", "text": "I have been programming for a couple of years and am generally good when it comes to fixing problems and creating small-to-medium scripts; however, I'm generally not good at designing large-scale programs in an object-oriented way. A few questions: 1. Recently, a colleague _who has the same number of years of experience as me_ and I were working on a problem. I was working on the problem longer than him; however, he came up with a better solution and in the end we're going to use his design. This really affected me. I admit his design is better, but I wanted to come up with a design as good as his. I'm even contemplating quitting the job. Not sure why, but suddenly I feel under some pressure, e.g., what would the juniors think of me? Is it normal? Or am I reading a little too much into this? 2. My job involves programming in Python. I try to read source code, but how do you think I can improve my design skills? Are there any good books or software that I should study? Please enlighten me. I will really appreciate your help."} {"_id": "197494", "title": "Seeking advice on design of application protocol", "text": "**UPDATE 1** as requested by Brendan.
We are developing a Unix batch application for storing millions of customer records into a relational database. In order to allow multiple batch jobs to run in parallel, and to achieve a certain amount of concurrency while processing an input record, we've distributed the work across nine server daemons, all within the same LAN as the client, each of which is responsible for an isolated task (e.g. store the name in the name table, store the address in the address table, etc.). Each daemon will be able to accept connections and requests (concurrently) from multiple clients. Finally, whatever communication protocol we design (or adopt) must be extensible to accommodate different _kinds_ of connections - for instance, we'd like for a monitor tool to be able to connect to a daemon, over the same port as the clients, and request statistics or send commands. There will also eventually be different kinds of clients that require different functions to be performed by the daemons. In other words, a client connecting to a daemon will have to be able to declare "I want to post customer records," or "I want to apply a batch of changes-of-address to the database" or "I want to send you a few commands." I think it's a foregone conclusion that the underlying transport of these connections will be TCP. Finally, apart from the "command session", these will be multi-million record batch jobs, and performance is critical. The processing is iterative, and every record has the same layout (so wrapping each one in XML, say, would be unnecessary and highly wasteful). I've no idea if this is enough information for someone to suggest some formats and protocols to use, but I'll gladly try to clarify or supply additional details if asked. **END OF UPDATE 1** * * * (Below is the original post, which may or may not be of any value.) * * * I'm in charge of designing an application protocol for a set of in-house, batch-oriented, client-server applications. I'm familiar with IBM LU 6.2 (a.k.a. "Advanced Program-to-Program Communication"), and for the past 15 years have worked in Unix environments. The client is a batch job that connects to a server process, which, in turn, connects to nine other "sub-server" processes. The client passes a customer transaction record to the server, which distributes different portions of the record to the sub-servers. Each sub-server processes its portion independently and concurrently, and passes some results back to the server which then passes them back to the client. These are TCP connections, and we decided that, first of all, we'd use newline-terminated ASCII strings (we call these **lines**) as our Protocol Data Unit. A **record** is a line that is subdivided into tab-separated **fields**, and since each process will have its own data requirements, field #1 of every record will contain a **record type**. This is used as a key into a metadata store to find the **field names**, in the order they're expected to appear in the record, in order to know how to interpret the record. For example: Metadata: NAME=(prefix, first_name, middle_name, last_name, personal_suffix, professional_suffix) Record: NAME||CHAP||HARRISON|JR|CTO# (Here I use '|' to represent the tab character, and '#' to represent the newline character.) Very straightforward so far, I think. But then, new twists arose.
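(A minimal sketch of the metadata-driven record parsing just described, in Python rather than the project's actual language; the metadata entry is the example from the post, the rest is illustrative. One PDU is a newline-terminated line of tab-separated fields, and field #1 selects which ordered list of field names applies.)

```python
# Metadata table from the post: record type -> ordered field names.
METADATA = {
    "NAME": ["prefix", "first_name", "middle_name",
             "last_name", "personal_suffix", "professional_suffix"],
}

def parse_record(line):
    # One PDU: a newline-terminated line of tab-separated fields,
    # where field #1 names the record type.
    record_type, *fields = line.rstrip("\n").split("\t")
    return record_type, dict(zip(METADATA[record_type], fields))

# The post's example, with '|' standing for tab and '#' for newline.
rtype, rec = parse_record("NAME\t\tCHAP\t\tHARRISON\tJR\tCTO\n")
assert rtype == "NAME" and rec["first_name"] == "CHAP"
```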
The first was that, instead of sending all the data about one transaction in a single [large] record, we'd like to possibly use several records: a NAME record, an ADDRESS record, an EMAIL record, etc. So we invented the **block**: a series of records, preceded by a "special" **BB** record and followed by an **EB** record. The BB record's first field is, of course, 'BB', and the second field holds the **block type**. Block type, like record type, is a key into metadata listing the record types (required, or optional) in this block type. Understandably, each of these ten or so processes, having its own unique data requirements, has led to its own set of record types, enclosed in their own set of block types. That's a lot of metadata, but it isn't really a showstopper. The next requirement that emerged was a special "context-setting" message, to transmit certain configuration variables that would remain constant from transaction to transaction. This gave rise to a new record and block type, as well as a new column in the block metadata that indicated this kind of block was somehow "special". Then, envisioning circumstances where the "context" could _change_ during the runtime of the client, we lifted the rule that stipulated there could be only one "context" message, at the beginning of a connection. At the same time, we realized there was still a need for a true "initialization" message that could only come once, at the beginning, conveying "session-global" parameters. Thus the INIT block was born, and now there were three flavors of block: session-initialization, context-switching, and normal application data. Then, how about a connection that isn't for transaction-posting at all, but rather a control session for querying the health of the server, retrieving statistics, changing operational settings? The INIT block's role became more general, declaring the "mode" of the session, which could now be either "posting" or "control." We've been trying to wedge all of this into a single-layered application protocol, and it's getting out of hand. Just defining a "special" BB record should have raised a red flag - it's constructed from application-level objects and metadata, but it is not application data - rather, it _frames_ the application data. * * * I think what has evolved is a 2-layer architecture atop TCP: one layer that simply provides typed containers (blocks and records), and a higher layer that uses those containers for a specific purpose, be it posting, or control, or something else - the true application protocols. However, I'm not well-versed enough in the design of protocols to recognize "patterns" that might be obvious to the pros. I'd appreciate any feedback on how to approach this. It also seems possible that someone may have written a guide to designing protocols that addresses these very concerns. I just haven't found it. And my apologies for the length of this post. I hope none of it was immaterial. My deep thanks for your patience! Chap"} {"_id": "210312", "title": "Coding Style for Visually Impaired Programmer", "text": "I am visually impaired. With glasses I see well enough to drive, but at the font size I'm comfortable working at I can only see about 15 lines of 100 characters at a time. This has affected my coding style. One thing I do is write shorter functions.
My code tends to get good reviews because these short functions with good names make the higher-level functions very readable, but in high-performance situations some folks make comments about how much space I'm taking up on the stack by passing variables down several layers for processing. A second thing I do is divide classes up between files to make shorter files. This reduces the scrolling distance to get to relevant functions and, depending on organization, may allow me to put the files up on different monitors to look at them together. Both of these practices make for more documentable units that most coding styles require I document, which further aggravates the issue by extending the length of my file and the distance between related functions. I'm currently using Visual Studio, which allows code folding at the function and comment block level (which I use frequently) but does not fold at the bracket level like Notepad++ does. The editor that offers better code folding doesn't have all the intellisense features of VS. I could use regions in VS, but this looks very cluttered if used every 10 lines. Folding is occasionally helpful to get completed code out of view while I'm working on a different feature of the code. Can anyone recommend better coding practices to help with limited visibility of the code?"} {"_id": "125836", "title": "Do you have to include a license notice with every source file?", "text": "I've been looking for various licenses that I can use for an open-source project of mine, but all of the projects that I've seen, with all kinds of licenses, appear to have a giant, obnoxious (in my opinion) notice in each source file that states that the file is listed under a certain license. I don't think that I've found a single source project that isn't public domain that _doesn't_ have a notice like that. This just seems like a waste of time and file space. I plan on putting `@license` and `@author` tags in my projects, but I don't see why I need to list such a giant notice in each individual file if I don't want to make my code public domain. Is there any reason why I would want to include such a notice in my projects, or would simply including a notice in the `README` and a `@license` tag be good enough? Does this affect the "clearly stated" rule of most licenses, or is it just overkill so that people won't argue?"} {"_id": "233495", "title": "Copyright in all files or just a license file?", "text": "Does it have any legal merit to put a copyright comment on each class? Example: // // MyClass.h // Companyname // Created by DeveloperName on MM/DD/YY // Copyright YYYY Companyname. All rights reserved. // public class MyClass { ... } Or does it have the same effect as putting a license file in the root of the source code, like many do on GitHub?"} {"_id": "15712", "title": "How much design happens in your implementation phase?", "text": "For those of you who work in big-design-up-front/waterfall groups: how much critical thinking and design is skipped in your design phase and left to your implementation phase? How complete and detailed are your functional and technical specifications delivered at the end of the design phase? It seems to me that there's a lot of room for interpretation in the level of detail that needs to be provided at the end of the Design phase, and it irks me when the design phase is rushed and then managers get angry that the build phase doesn't progress like we're churning out widgets on an assembly line.
On the other hand, a design phase that's complete enough to make the build phase operate like an assembly line practically includes the implementation phase with it - all that would be left in "build" is to type stuff into the editor. Of course, it also means your design phase is gigantic. **I realize that this is a shortcoming of the waterfall model**, but if there's anyone out there that can provide some constructive advice: Where does one draw the line? What should the expectations for the design and build phases be, and what kinds of design shortcomings or mistakes should/shouldn't be tolerated in the build phase?"} {"_id": "107714", "title": "Need advice on designing interactions between various parts of my application", "text": "I'm trying to design the "main" class(es) of a Rich Desktop Application based on NetBeans Platform 7. This application will consume HTTP services and, through a "push system" over TCP, will receive messages. * We are 3 developers and we want to develop modules in parallel * The application will be layered (Data, Business, Presentation) * We'll use Presentation Model in order to separate responsibilities * Some granular data (a bean Person for example) will be shared by several screens (and possibly displayed on several screens at the same time) * ... We are able to develop individual screens, but we don't know exactly how to organize the whole application and define each module's content. 1. So, do you have any advice (a pattern / best practice / book / sample app) for coordinating/managing interactions inside of the whole application? 2. Any advice about how to define module content? Thanks! * * * Small example to illustrate what I want to build: **A Foo User Management Application** 1. Launch the application 2. At the left [explorer] we have a list of platforms (the list is stored in a local file) 3. At the top we have a button to add a new Platform (also available with right click) 4. By double-clicking on a platform, the app calls an HTTP service to retrieve a complete list of users. This list is displayed in the [editor] (in a JTable) 5. A background process is started: through a TCP connection we receive messages 6. It is possible to add a new user thanks to a button in a toolbar If the application is launched on another PC, and if the user is connected to the same platform, its User List will be updated dynamically (add/remove/status:{offline/online}) (thanks to messages) In the future, a Chat Module will be provided. My question is (in other words): any advice/best practice for deciding on the content of each module? If PM (Presentation Model) is a good way to separate view/business and data and create screens, what is the best way to link several screens based on PM? Imagine we develop the Chat Module: how would we add an entry "Discuss with..." to the context menu available with right click on the User List?"} {"_id": "113794", "title": "How can I properly manage commits, prevent feature conflicts, and manage dependencies with a VCS?", "text": "It's become an annoying factor of working with large teams: how do you manage checkins and prevent feature conflicts and manage dependencies with source control properly? Currently at my workplace we use TFS 2008 and we're migrating to TFS 2010 in early December. Firstly, any source control is better than none, but what approach has anyone found useful to prevent multiple checkins and rollback all over source control history?
Is a volatile trunk the best way to go, or branching off when you're implementing a new feature and, once you're happy, merging back into the trunk? I'd really like to hear other people's experiences and perhaps some best practices for managing source control."} {"_id": "107716", "title": "TDD on a data-driven web app", "text": "How would one go about using TDD/BDD on a mostly data-driven web app? For example a blog or a forum?"} {"_id": "9741", "title": "Is it reasonable to assume/require the .NET framework these days?", "text": "**Background:** I have a project where I need to provide the user a download package with some sensitive data in it. The data needs to be encrypted. After they download it, they need to be able to view it (no editing required). For this question, let's approximate the data as a series of static html files. Because the data is sensitive, it needs to be encrypted any time it is on disk. We are thinking of providing the user with a download option that would give them a zip file containing two files: * A data file (we'd probably use an encrypted zip file behind the scenes) with the data they asked for * An application to view the data that would appropriately prompt for a passphrase and handle decrypting the data and displaying it via an embedded web browser. Additional details: * Users are not under our control. They are consumers. * We are not worried about cross platform in this question. This is just about Windows. We will have a separate download for Mac users. **Get to the question already:** For the application we need to create, we're internally debating whether it is reasonable for that app to be a .NET winforms application. We want a single .exe, and we want the download to be reasonably small (e.g. 100k). * Dare we use the .NET framework (we don't need to use a particularly recent version of .NET--2.0 would be fine)? * Is it reasonable to assume that most consumers have .NET on their machines now due to Windows Update? * Is it reasonable to ask the ones that don't have it to install it? We know that 100% of users will not have .NET. The real question is if it is _reasonable_ to ask them to have it in this day and age. P.S. Does anyone know of any reliable statistics of what percentage of people actually do have .NET installed already?"} {"_id": "187400", "title": "What is a job in programming really like?", "text": "More specifically: **Is a job in programming really as bad as it sounds?** I'm still in high school, but I love programming. I take all of the programming classes that my school offers (which are actually quite a lot), and program in my free time. In college, I intend to do something related to programming, and then I eventually hope to get a job in programming something. However, from what I've read on the internet, a job in programming sounds really terrifying and intimidating, and I'm wondering what it's like, and if it's really as bad as it sounds."} {"_id": "187401", "title": "Controller layer and 3-tier architecture", "text": "What is the controller layer and where do we put it in our 3-tier architecture? 1) UI 2) Business Logic Layer 3) Data Access Layer I searched the net but was unable to get an exact answer. Any links or sample examples would help me understand better. Thanks."} {"_id": "24157", "title": "If you had to go back and re-learn your skill set, how would you do it?", "text": "My younger brother is looking to start programming. He's 14, and technically-inclined, but has no real experience programming.
He's looking to me for guidance, and I don't feel as if my experience is enough, so I figured I'd ask here. He's more interested in web programming, but also has an interest in desktop/mobile/server applications. What would be a good learning path for him to take? I'm going to buy him a bunch of books for Christmas to get him started; the question is, what should he learn, and in which order? The way I see it, he needs to learn theory and code. I'd like to start him off with Python or Ruby or PHP. If he wants to get into web, he's also going to need to learn HTML, CSS, Javascript, etc. Out of those three domains (Languages, Theory, Markup/Etc.), what do you think is the best order to learn them in? Also, am I missing anything? Thanks!"} {"_id": "187403", "title": "'import module' vs. 'from module import function'", "text": "I have always been using this method: from sys import argv and using argv with just **argv**. But there is a convention of using this: import sys and using argv via sys.argv The second method makes the code self-documenting and I _(really)_ adhere to it. But the reason I prefer the first method is that it seems faster, because we are importing only the function that is needed rather than the whole module (which contains more functions that are useless to me and that Python would waste time importing). Note that I need just argv, and all other functions from sys are useless to me. So my questions are: Does the first method really make the script faster? Which method is preferred most? Why?"} {"_id": "233024", "title": "Database Design for Web based RSS Feed Aggregator", "text": "I am working on an open source application which allows users to add RSS feeds. All users of the site can read the content of those RSS feeds. It's not just for a user's own feeds. Using PHP and the SimplePie library, I have created a simple app. I am trying to implement category-based listing. I have a list of RSS/Atom feeds in a database. Each post in the feed can have a category/label associated with it, and a single feed can have multiple posts falling under multiple categories. **Question:** I want to get only the posts of category "Science". In the worst case, if I have 1000 RSS feeds, should I read posts from each feed and check the category of each post? Storing all posts of each feed and their categories is not a good solution. How best can we design a database for this? The database schema is provided below. Nothing is fixed and I am free to change any schema or design. http://sqlfiddle.com/#!2/2b519/1 NOTE: The fiddle is just for reference and you don't need to worry about it. It's optional, as I thought it might help others understand."} {"_id": "24159", "title": "Which programming language is the best to start learning/teaching about programming", "text": "As this question is closed, let me make an attempt that actually meets all 6 requirements. At our faculty, there's currently a big change in the "informatics" classes. We used to teach Java, but this is going to be replaced with Matlab (and the class will be called "Scientific Computing"). Personally, I'm not in favor of that idea, because: * Matlab is not freeware, hampering the possibilities for the students to get hands-on experience. They can on campus or via VPN, but not on their own computers. * ~~Matlab isn't even a real programming language, but a mathematical environment that excels in that but fails in anything else.~~ * ~~Matlab isn't suited for general concepts of programming.~~ edit: these last two points seemed a bit strongly formulated.
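(A small check for the 'import module' question above; this sketch is not from the original post. Both forms execute the whole module exactly once, since from sys import argv still loads sys and merely binds one name locally, and repeated imports hit the sys.modules cache, so the claimed speed difference is negligible.)

```python
import sys
from sys import argv

# Both import statements ran the sys module's initialization only once;
# the 'from' form merely bound the name argv directly.
assert "sys" in sys.modules
assert argv is sys.argv
print("same object, one module load")
```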
Matlab has progressed quite a bit since I last encountered it. Now I also had a problem with Java, as that turned out to be a real pain in the proverbial behind for many students, mainly because of the verbosity. As it's for a general course in programming, I think these points are important: * all basic concepts of programming * rather easy syntax without too much verbosity * the possibility to easily program both procedurally and in an object-oriented way * a short feedback loop on your programming * a proven usefulness in many applications, especially in the scientific world (bioPerl, bioConductor, bioPython, bioJava, ...) so they can be used for practical work during their studies. **What's your idea about the best teaching language on a serious level (hence not the pseudo languages used in primary or secondary school, I'm talking bachelor level at university/college)?** * * * edited to keep the question more general. Originally I mentioned R, Python or Perl, and: R is maybe less of a good choice as that one is a vectorized language. This is far from general, so it might be too specific. **edit:** I consider it a blessing, but the argument here was that a vectorized language is not general enough for teaching purposes, as people would get into trouble when moving to Perl and Python. I'm also not talking statistics education; we're talking first-year bachelor students. Any direction, any kind. I just added the personal experience."} {"_id": "233022", "title": "How should I document a multi-tier application?", "text": "I have to create a documentation structure for a legacy application, and I'm not sure how to organize it. **Documentation goals**: * List of use cases * Program flow for each of the use cases. ( _Flow-chart of all the logical steps the code does for a particular use-case_ ). * As far as possible, explanations of _why_ a certain business/code logic is followed. * The documentation format, if possible, should not require installing a new tool, and at best be readable in Word or PDF format (so that the business types can check it easily). * Some questions that should be answerable using the documentation: "What business logic does the code execute for a given use case?"; "Is this code redundant elsewhere in the application?"; "If I change this code, what Use-Cases are affected?" **Application characteristics**: (it's generally a bit messy, with presentation and business logic slightly mixed in almost every layer) * Presentation in .NET Web Forms using WebControls (GridViews, ObjectDataSource, Reports) and JavaScript (jQuery, jQuery-UI) in *.aspx pages * Server-side code in C# in the *.aspx.cs to handle post-back events. * C# code pages and a separate project integrated into the WebApplication supplying business logic as part of the web application * SQL Server for data persistence (Master-Data DB, Data Staging DB) * Views including some data from other servers * C# CLR assemblies for business logic deployed on the SQL Server, and some minor SQL stored procedures/triggers * A file structure on the same server as the SQL DB that handles archiving and data import from files. * File interface to SAP (that I don't fully understand yet). **So far...** I've started documenting in Word. I have a separate Word file for the Front-End and the Database/File Structure. I begin each file with Use-Cases (user-initiated are in the front-end file, scheduled jobs in the Database file), followed by the code structure.
Each use case has a hyperlink to a flowchart/explanation of the code that first gets executed, which has a hyperlink to the flowchart/explanation of the code that next gets executed, etc... I do this so that each part of the code is documented only once, and so that other parts of the code documentation can link to it if they execute it in the application. **Problems**: * I can't navigate backwards from hyperlinks ( _i.e. I can't answer the question: "If I change this code, what Use-Cases are affected?" I can only go from Use-Case to Code, not the other way around._ ) * The Word document already feels clunky and messy after I've barely started to write stuff into it. **Question**: How can I document this multi-tier application without making a great mess?"} {"_id": "93787", "title": "How to effectively telecommute when working for a small firm?", "text": "I'd like to know what experiences others have had telecommuting full time. What software tools and processes helped maintain cohesion and maximize collaboration & productivity?"} {"_id": "112227", "title": "Can we guarantee a program will never go wrong?", "text": "We have a system here. Recently there was a wrong calculation in one of the numbers in a report generated by the system. In our experience, we had never encountered any problem/error in this system for some years. Because the writer of this system has already gone, we can hardly trace the programs. But we've verified the input data and the settings, and they are right. Now my question is: will a computer program suddenly go wrong without any logical reason? If I slam the server machine, will one of the numbers which the computer is calculating become another number and make the calculation wrong? I agree my idea there is quite mad, but I just want to know: how can we know the problem is not caused by the program and the input, but by some other factors? P.S. This mad system has no log."} {"_id": "30329", "title": "Can I remove all-caps and shorten the disclaimer on my License?", "text": "I am using the MIT License for a particular piece of code. Now, this license has a big disclaimer in all-caps: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF... ... I've seen a normally capitalised disclaimer on the zlib license (notice that it is above the license text), and even software with no disclaimer at all (which implies, I take it, that there is indeed a guarantee?), but I'd like some sourced advice from a trusted party. I just haven't found any. GNU's _License notice for other files_ comes with this disclaimer: > This file is offered as-is, without any warranty. Short and simple. My question therefore: Are there any trusted sources indicating that a short rather than long, and a normally spelled rather than capitalised, disclaimer (or even one or the other) is safely usable in all of the jurisdictions I should be concerned with? For the purposes of this question, the software is released in the European Union, should it make any difference."} {"_id": "207136", "title": "What is the difference between a Future and a Promise?", "text": "What is the difference between a Future and a promise? (In Akka and Gpars.)
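(A minimal Python analogy for this Future/Promise question, added for illustration. In Scala's standard library, which Akka builds on, the split is explicit: a Promise is the single-assignment write side and its Future is the read side. Python folds both roles into one concurrent.futures.Future object, reached through different methods, which makes the distinction easy to see in a few lines.)

```python
from concurrent.futures import Future
import threading

f = Future()

def producer(promise):
    # The promise role: the producer side completes the value exactly once.
    promise.set_result(42)

threading.Thread(target=producer, args=(f,)).start()
# The future role: the consumer side blocks until the value is available.
print(f.result())  # 42
```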
They look the same to me, as both block and return the value of the future when get is called, and a promise is also used to get the result of a future."} {"_id": "59924", "title": "Technology choice for cross platform development (desktop and phone)?", "text": "I'm a Microsoft developer mainly, but there are a couple of small-ish projects I'd like to fiddle with which would benefit from being cross platform. The platforms I want to target are: Windows, Linux, Mac, Android and preferably iPhone, web (running in a browser). I need 3D (around the level of support seen in something like Minecraft (I'm not writing Minecraft)), some networking. I'm pretty certain Java would work on all except iPhone. Looking at the "related questions" above, they've offered up QT (no browser or phone afaik) and also HTML/CSS/Javascript (3D? package for desktop?) The other alternative is to have separate versions for separate platforms, developed with some common code where possible. That option isn't something I know anything about. Does anyone have experience of this sort of conundrum? I figured here was better than SO, because I imagine there are compromises which extend beyond technical choice. Finally, this is not a commercial operation, so some of the very expensive cross platform tools are out of the question unless they offer some sort of community edition."} {"_id": "20790", "title": "What mistakes in managing software products must be avoided to keep people from hating the vendor?", "text": "A previous question about why people hate Microsoft was closed. This is an attempt at a somewhat more constructive question along the same general line. This one is both broader and narrower though. It's more general by being about software vendors in general, not just Microsoft. It's narrower by dealing only with management of software products. So, what steps should be taken (and/or avoided) in managing individual software products to assure that not only the individual products, but the company as a whole is respected/liked/seen in a positive light?"} {"_id": "115791", "title": "Is the Product Owner also a developer on your team?", "text": "I'm confused about the PO's responsibility here. I was a developer on a Game Feature Team, but also a PO. The daily work of the developer is almost full time, so I have to work overtime to take care of my PO duties, and the responsibilities of the PO seem to conflict with a developer's way of thinking. As a PO, I will choose more features for the next sprint. Otherwise, I will tell myself not to do so, because I'm a team member who has to develop those features. This situation makes me confused, so I want to hear some ideas from you guys. I'm new to Scrum and game dev (about a year and a half), and also new here and to English."} {"_id": "170106", "title": "Leadership does not see value in standard process for machine configuration and new developer orientation", "text": "About 3 months ago our lead web developer and designer (same person) left the company; greener pastures was the reason for leaving. Good for them, I say. My problem is that his department was completely undocumented. Things have been tough since the lead left; there is a lot of knowledge, both theoretical knowledge we use to quote new projects and technical/implementation knowledge of our existing products, that we have lost as a result of his departure. My normal role is as a product manager (for our products themselves) and as a business analyst for some of our project-based consulting work.
I've taught myself to code over the past year, and in an effort to continue moving forward I've taken on the task of setting my laptop up as a development machine with hopes of implementing some of the easier feature requests and fixing some of the no-brainer bugs that get submitted into our ticketing system. But, no one knows how to take a fresh Windows machine and configure it to work seamlessly with our production apps. I have requested that my boss, who is still in contact with the developer who left, ask them to document and create a process to onboard a new developer: software installation, required packages, the process to deploy to the production application servers, etc. None of this exists, and I'm spinning my wheels trying to get my computer working as a functional development machine. But she does not seem to understand the need for such a process to exist. Apparently the new developer who replaced the one who left has been using a machine that was pre-configured for our environment, so even the new developer could not set up a new machine if we added another developer. My question is two-part: 1. Am I wrong in assuming a process to on-board and configure a new computer to be part of our development ecosystem should exist? 2. Am I being a whiny baby, and should I figure the process out and create a document on my own?"} {"_id": "197677", "title": "How is transparency defined in the context of the broker architecture?", "text": "I would like to know how transparency is defined, and what the measurement for it is, in the context of a broker architecture. For example: from a developer's point of view, [in the broker architecture] distribution is transparent. You talk to a broker one way or the other, and it introduces an object model in which distributed services are encapsulated within objects."} {"_id": "197675", "title": "Is there any way to get faster at solving bugs? I've just had a warning from my boss", "text": "I've just been told by my boss that I will receive a negative performance review on Monday. He wants to talk to me about why I am so slow and why my bug fix rate is so low. I love programming and solving problems, but I actually do find my job really, really hard. I've actually been a programmer for about 10 years. But this is my first multithreading embedded linux job - I've been here 2 years and it's obvious to everyone that I'm still struggling. And I think I've become so demoralised and feel so marginalised that I've lost a lot of the fire that I had at the start of the job. Has anyone ever been in a similar situation, and how do you go about increasing your bug fix rate? * * * Update: I had the review. I have been put on a 3 month 'employee development program' (of the type mentioned by Dunk). Not sure whether I can turn this around. But even if I do have to move on, I've learned a lot from this experience. ## Another Update It's now about 6 weeks since the first review. My advice to anyone facing the same situation is to be humble enough to take criticism and learn from your mistakes. And to not be afraid to look dumb. Ask loads and loads of questions. Let people know you're trying to learn and keep asking until you understand. But be prepared for it not to work out. I'm constructing a portfolio of code ... as well as giving it my best shot. ## Yet another update I am hesitant to put this on here, since I'm concerned that I will not be able to refer future employers to my stackoverflow profile...
Anyway, it might be of interest to someone reading this question: I actually lost my job a few weeks ago. I'm in the midst of brushing up on all the skills I need to - I've taken a lot from the advice given here."} {"_id": "189969", "title": "Visiting points on a number line while minimizing a cost not related to distance", "text": "I need some help on this ACM ICPC problem. My current idea is to model this as a shortest path problem, which is described under the problem statement. **Problem** There are N = 1000 nuclear waste containers located along a 1-D number line at _distinct_ positions from -500,000 to 500,000, except x=0. A person is tasked with collecting all the waste bins. Each second that a waste container isn't collected, it emits 1 unit of radiation. The person starts at x = 0 and can move 1 unit every second, and collecting the waste takes a negligible amount of time. We want to find the minimum amount of radiation released while collecting all of the containers. **Sample Input:** 4 Containers located at [-12, -2, 3, 7]. The best order to collect these containers is [-2, 3, 7, -12], for a minimum emission of 50 units. Explanation: the person goes to -2 in 2 seconds and during that time 2 units of radiation are emitted. He then goes to 3 (distance: 5) so that barrel has released 2 + 5 = 7 units of radiation. He takes 4 more seconds to get to x = 7 where that barrel has emitted 2 + 5 + 4 = 11 units. He takes 19 seconds to get to x = -12 where that barrel has emitted 2 + 5 + 4 + 19 = 30 units. 2 + 7 + 11 + 30 = 50, which is the answer. **Notes** There is an obvious O(N!) solution. However, I've explored greedy methods such as moving to the closest one, or moving to the closest cluster, but those haven't worked. I've thought about this problem for quite a while, and have kind of modeled it as a graph search problem: 1. We insert 0 as a baseline position (this will be the initial state) 2. Then, we sort the positions from least to greatest. 3. We then do a BFS/PFS, where the state consists of * Two integers l and r that represent a contiguous range in the sorted position array that we have visited already * An integer loc that tells us whether we're on the left or right endpoint of the range * An integer time that tells us the elapsed time * An integer 'cost' that tells us the total cost so far (based on nodes we've visited) 4. From each state we can move to [l - 1, r] and [l, r + 1], tweaking the other 3 integers accordingly 5. The final state is [0, N], checking both ending positions. However, it seems that [L, R, loc] does not uniquely define a state, and we have to store L, R, loc, and time, while minimizing cost at each of these. This leads to an exponential algorithm, which is still way too slow to be any good. Can anyone help me expand on my idea or push me in the right direction? **Edit:** Maybe this can be modeled as a dynamic programming optimization problem? Thinking about it, it has the same issues as the graph search solution - just because the current cost is low doesn't mean it is the optimal answer for that sub-problem, since the time also affects the answer greatly. Greedy doesn't work: I have a greedy selection algorithm that estimates the cost of moving to a certain place (e.g. if we move right, we double the distances to the left barrels and such). Could you do a Priority-first search, with a heuristic?
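(A sketch of the standard way out of the state explosion described above, added for illustration rather than taken from the original post. Instead of tracking elapsed time, charge each move's cost immediately: traveling distance d while k barrels remain uncollected adds k*d, which sums to the same total as the per-barrel emission count. With that accounting, [l, r, loc] alone is a sufficient state and an O(N^2) interval DP works.)

```python
def min_radiation(barrels):
    pts = sorted(barrels + [0])          # insert the start position x = 0
    n, total = len(pts), len(barrels)
    start = pts.index(0)
    INF = float("inf")
    # dp[l][r][side]: minimum cost having collected pts[l..r], standing at
    # the left (side 0) or right (side 1) end of that range.
    dp = [[[INF, INF] for _ in range(n)] for _ in range(n)]
    dp[start][start][0] = dp[start][start][1] = 0
    for length in range(1, n):
        for l in range(n - length):
            r = l + length
            k = total - (r - l - 1)      # barrels still emitting during this move
            dp[l][r][0] = min(dp[l + 1][r][0] + k * (pts[l + 1] - pts[l]),
                              dp[l + 1][r][1] + k * (pts[r] - pts[l]))
            dp[l][r][1] = min(dp[l][r - 1][0] + k * (pts[r] - pts[l]),
                              dp[l][r - 1][1] + k * (pts[r] - pts[r - 1]))
    return min(dp[0][n - 1])

print(min_radiation([-12, -2, 3, 7]))    # 50
```

On the sample, the optimal route 0 -> -2 -> 3 -> 7 -> -12 costs 4*2 + 3*5 + 2*4 + 1*19 = 50, matching the expected answer.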
The heuristic could combine the cost of the current trip with the amount of time elapsed."} {"_id": "211044", "title": "How to integrate google search results in a spring mvc app", "text": "Here's what I am trying to do (and searching for anything similar hasn't shown any results anywhere): On my website, provide an input box which will search Google for results. I want to display those results on my page and customize the URL of those search results. So whenever a user clicks on a displayed result, my application will (internally search for profile pages and) provide the future course of action. The only integration with Google is providing the search results (and probably the map coordinates). Env: Spring MVC, JSP, Hibernate. Can someone please advise on how I can achieve this? I don't know where to begin, but I have read about Google Custom Search; however, I think that is only for searching within your site or for redirecting users to relevant sites found from the result."} {"_id": "68350", "title": "How do you handle developer keys that are supposed to be non-human-readable in your app? (example: specific conflict with twitter api and twitter gem)", "text": "How do you handle developer keys that are supposed to be non-human-readable in your app? From twitter's developer page, under My Access Token, it says: "Keep the oauth_token_secret a secret. Along with your OAuth consumer secret, these keys should never be human readable in your applications." However, the awesome twitter gem ( https://github.com/jnunemaker/twitter ) says to make an initializer with this: Twitter.configure do |config| config.consumer_key = YOUR_CONSUMER_KEY config.consumer_secret = YOUR_CONSUMER_SECRET config.oauth_token = YOUR_OAUTH_TOKEN config.oauth_token_secret = YOUR_OAUTH_TOKEN_SECRET end What do you do about this type of conflict? You can't just reset the oauth_token and oauth_token_secret; as far as I know you have to create a whole new twitter app. I'm concerned, as I am going to begin working with some freelance programmers, and to begin with I wouldn't want to trust them with my app's private keys/tokens. Thanks."} {"_id": "157259", "title": "What is the logic behind filtering/sanitizing input?", "text": "I have always found it more logical to validate input instead of filtering it. How to appropriately filter data depends on the situation, so IMO it should be done on output or when saving to a database. But I see that some frameworks provide automatic XSS filters for incoming POST and GET data. What is the logic behind this solution? I can't see any advantages in doing this except providing a quick and easy solution for "lazy" developers. Or is there some specific security reason I don't understand?"} {"_id": "189962", "title": "Maintain a web application once the only developer is gone", "text": "I have a terminal disease and there is a very high chance that I will no longer be in this world by the end of the year. I have developed a web application that is extensively used in my family's business (a small hairdressing shop). No member of my family has either programming or system administration skills, nor do I have close friends with those skills. The business makes at most 10k in net profits per year. In fact, the business profits can only afford to pay the salaries of its 3 employees (father, mother and sister), and those are quite low and decreasing each year due to the financial crisis. In fact, I am not an employee of my family's business; I work for a normal software development company.
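(An aside on the developer-keys question above: the usual compromise is to keep the secret values out of the source tree entirely and inject them at runtime, via environment variables or an untracked config file, so the initializer stays in code while the values never do. A Python sketch with hypothetical variable names, added for illustration.)

```python
import os

# Hypothetical names; the real values live in the deployment environment
# (or an untracked config file), never in version control or shared code.
consumer_key = os.environ.get("TWITTER_CONSUMER_KEY", "")
consumer_secret = os.environ.get("TWITTER_CONSUMER_SECRET", "")

if not (consumer_key and consumer_secret):
    raise SystemExit("credentials not configured for this environment")
print("credentials loaded without ever being committed")
```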
I developed the application during my free time in order to help them. I do not care if another business also uses my application, or even if I lose ownership of the application itself. I just want my family\u2019s business to be able to continue using it, which means system administration support if something goes wrong and development for new features/bug fixes. I would like to ask what measures you think I could take in order to guarantee, as much as possible, the continuity of the application. The technologies of the application are: Platform: Tomcat (Java), MySQL and Linux Frameworks: mainly JPA and ZK"} {"_id": "136846", "title": "How can I effectively explain technical concepts to a non technical boss when I'm not a good talker?", "text": "## Preface I work as a web developer for a big retail company, which sells goods to smaller retailers through its e-commerce web site, developing for both the e-commerce site and the intranet. The main source of profit is sales through the e-commerce site, followed by the ad-hoc customer management done by the marketing department. Our department (web development) reports directly to the top management (AD, CEO, etc). ## My problem Even though my direct manager is an ex-developer who understands the various programming issues, the department head is a marketroid who doesn't understand a thing about programming. He thinks that everything that doesn't yield benefits in a short span is a waste of time and not worthwhile, and his behaviour is hurting us developers. The main problem is I'm unable to explain technical concepts or issues to non-technical people, so I fail to persuade him about what I should do or not do or what would make my work easier and more efficient. ## My questions Reading the similar questions I understood that I should * Not make him feel stupid. * Not look like I'm being insubordinate. * Not look like I'm trying to skip my work. But how the hell can I do that? * How can I cross this \"cultural\" gap? * How can I speak his language? * What's the best way to convey to him that thing X should be done Y way, that thing W is not a good idea and that tool Z is something really useful for us? * Is there any good material on how to deal with non-technical managers? Some of the answers around suggest examples he can understand, but I find these answers a little vague; I fail to come up with anything decent. Lately I'm starting to think that maybe we should take marketing classes or buy Mitnick's social engineering book... :\\ ## Beware Let me stop those who are starting to scream \"let your manager handle him\". When my manager fails to handle him, the ones who will pay the consequences are us, the developers. So, helping my boss get the point through is helping myself keep my sanity, and certainly something worth the trouble. ## Some examples * Both the passwords for intranet users and for e-commerce customers are saved in clear text in the database and must be changed every sixty days. This is nonsense to me: aside from all the implications of not encrypting passwords before saving them, forcing the user to change their password every two months is begging them to use weak passwords. (He thinks the first is not an issue and that changing passwords frequently is more secure.) * When a customer logs in, all the user information is saved into a cookie, and using HTTPS everywhere is seen as only a burden.
This too doesn't make sense to me: we're bouncing around 75 KiB of cookies with every request instead of looking up user details from the database, and by sniffing the cookie (e.g. on unprotected WiFi) you can impersonate the user and buy stuff (even really expensive things) under his name, yet only the login process is encrypted. (The former is not worth refactoring, and the latter is a non-issue to him.) * A colleague handles trouble tickets quickly because he's been working for the company for twelve years, many of the tools he's been \"entrusted\" with were developed by him, and he tells the user \"don't do X\" instead of fixing the problem in the code; meanwhile we who have been working here for one or two years, managing tools written by former employees and digging through the code to actually fix the problems, get chewed out for our higher mean resolution times. (He tells us we should be quicker, like him, because the time spent dealing with trouble tickets is not profitable, yet he won't allow the rebuild of the most crappy tools, which would save time in the long run.) * An issue tracker a la Redmine is seen only as a time sink because it does not \"produce\" any tangible benefit, yet we're stuck taking note of to-dos and problems on paper or in .txt files and keeping spreadsheets of the modifications as reference."} {"_id": "211049", "title": "XSLT and possible alternatives", "text": "I had a look at XSLT for transforming one XML file into another one (HTML, etc.). Now while I see that there are benefits to XSLT (being a standardized and widely used tool), I am reluctant for a couple of reasons: * XSLT processors seem to be quite huge / resource hungry * XML is a bad notation for programming, and that's what XSLT is all about. I do not want to troll XSLT here, though; I just want to point out what I dislike about it to give you an idea of what I would expect from an alternative. Having some Lisp background, I wonder whether there are better ways for tree-structure transformations based upon some Lisp. I have seen references to DSSSL; sadly most links about DSSSL are dead, so it's already challenging to see some code that illustrates it. Is DSSSL still in use? I remember that I had installed openjade once when checking out docbook stuff. Jeff Atwood's blog post seems to hint at using Ruby instead of XSLT. Are there any sane ways to do XML transformations similar to XSLT in a non-XML programming language? I would be open to input on * Useful libraries for scripting languages that facilitate XML transformations * especially (but not exclusively) Lisp-like transformation languages, or Ruby, etc. A few things I found so far: * A couple of places on the web have pointed out LINQ as a possible alternative. Quite generally I welcome any kind of classification, also from those who have had the best XSLT experience. * For Scheme: http://cs.brown.edu/~sk/Publications/Papers/Published/kk-sxslt/ and http://www.okmij.org/ftp/Scheme/xml.html"} {"_id": "67367", "title": "How long should someone take classes in a language they don't plan to work professionally in?", "text": "I was late in signing up for classes at the start of last term, so I signed up for the only programming class I could find: C++. After doing well in that class I was allowed into intermediate C#, which I learned was what I needed to be in for my major. I also took intermediate C++, thinking that taking both would make them both easier. I was right.
I will be moving on to ASP.NET next term, and while I really like C++, I'm not sure that it will help me overall with my future education in C#. I also have the option to move into Java, so I would be taking ASP.NET and Java at the same time. So do I take C++, Java, both, or neither? And why?"} {"_id": "115240", "title": "Why were short, int, and long invented in C?", "text": "I'm having trouble understanding: what were the exact purposes of creating the `short`, `int`, and `long` data types in C? The reason I ask is, it doesn't seem like their sizes are bounded -- they could be of any size, so long as `short` is smaller than an `int`, for example. In which situations, then, should you use an `unsigned int` or `unsigned long`, for example, instead of a `size_t`, when doing so offers no hope of binary compatibility? (If you don't know the size, then how would you know when to choose which?)"} {"_id": "197596", "title": "Who should support and maintain development infrastructure?", "text": "I am interested to know what other people's experiences are with managing development infrastructure. I am talking about things like the build server, the central git repo, and so on. Any infrastructure which end users would probably not even know existed, but which is essential for the development team. 1. Do your dev/devops/ops teams consider them to be as important as 'true' production systems? 2. What happens when one goes down (both in terms of ramifications and recovery processes)? 3. Who should be responsible for managing them? Note: to complicate things (especially (3) above), our \"teams\" are small: about five or six devs and only the one sysadmin. Hence, there is a lot of crossover of us devs into ops, especially with the advent of the \"devops\" movement and technologies. Disclaimer - I am the author of this blog post about the subject."} {"_id": "115247", "title": "Where are programming languages published?", "text": "I have read that a number of new programming languages are created each year; however, I have never seen a single one. Where exactly are these things published? Is there some site out there that keeps track of them? (I don't have any intention of learning such languages; I am only curious.)"} {"_id": "137468", "title": "Should I care about Junit redundancy when using setUp() with @Before annotation?", "text": "Even though developers have switched from JUnit 3.x to 4.x, I still see the following 99% of the time: @Before public void setUp(){/*some setup code*/} @After public void tearDown(){/*some clean up code*/} Just to clarify my point... in JUnit 4.x, when the runners are set up correctly, the framework will pick up the @Before and @After annotations no matter the method name. So why do developers keep using the same conventional JUnit 3.x names? **Is there any harm keeping the old names while also using the annotations** (other than that it makes me feel like devs do not know how this really works and, just in case, use the same name AND annotate as well)? **Is there any harm in changing the names** to something _maybe_ more meaningful, like `eachTestMethod()` (which looks great with @Before since it reads 'before each test method') or `initializeEachTestMethod()`? **What do you do and why?** I know this is a tiny thing (and probably even unimportant to some), but it is always in the back of my mind when I write a test and see this.
I want to either follow this pattern or not, but I want to know why I am doing it, and not just because 99% of my fellow developers do it as well."} {"_id": "137462", "title": "Technical interview and programmer ability", "text": "What I will say might be a tad controversial, but I am very disheartened today, and so I will ask this. I just had an interview with a major tech firm for an internship position, where I was asked a lot of typical algorithm-oriented interview questions. Now, given my background, I consider myself to be strong in algorithms (I have also got good grades in graduate-level algorithms: stuff involving NP-completeness and beyond, such as approximation and randomized algorithms), but unfortunately I flunked the interview. I could not think of a very efficient method of solving a string problem in approximately 10 minutes. Once the interview was over, I had a glass of water, ate a banana, relaxed for a while and tried the problem again. And voila! There was the answer, which I arrived at in under 5 minutes. And the worst of it all: I was actually on that track, and the interviewer did hint at it, but too much pressure cooked me. My entire experience got me thinking about tech interviews. I had some questions and I wanted to pose them in this forum: 1. Is it really possible to judge someone's technical ability in half an hour? Honestly? Or is it just a throw of the dice? 2. Do technical interview questions measure problem solving ability? This point is very debatable. As a PhD student I know that mathematical problem solving involves solving something that you have never heard of before. On the other hand, questions like merging two linked lists in sorted order, or printing all the elements of a binary tree at the kth level, become \"mere exercises\" once someone has seen the solution or solved the problem beforehand. 3. Do people who come out with flying colors in these interviews go on to become great programmers? Do they go on to design sleek game engines and graphics libraries, or write fast fork-join frameworks? Is there any evidence to point to a positive correlation between doing well in technical interviews and actual programming ability? Or are these interviews more geared towards finding a \"getting things done\" type of person (Spolsky)? I can bet that lots of academics publishing groundbreaking ideas in ICML, VLDB, or Mobicom would flunk these interviews. But I can assure you that they are some of the smartest people you will find on this planet. I am mainly in academia (grad student), so I would greatly appreciate some perspective from someone on the other side of the fence. Someone who actually conducts these interviews? [Ok everyone. Thanks for all the nice and thoughtful responses. Since I do not want to ask another question, I will ask you to answer this question for me. Suppose candidate X has a good public portfolio of work where he has contributed to some known open source project: you can actually go and verify his patches, verify the bugs he has closed, and take a look at the designs he has created. In that case, the question is how much weight are you willing to give to his publicly available/verifiable work versus how well he does in answering some very contrived binary tree interview question in under 15 minutes?]"} {"_id": "137464", "title": "Simple questions about apache and cgi", "text": "I'm just learning about using CGI with HTML files.
I've read a lot of stuff on the web, but most of it is all about the trees and not enough about the forest. I'm a top-down learner and I'm having trouble understanding what's going on with CGI, Apache, and HTML. At the moment I'm using Python as my CGI scripting language, but if you'd care to answer using Perl, I can handle that. Here's an MWE for a failed test I ran. HTML: (file is getname.html)

    <form action=\"test.py\" method=\"post\">
        Your First Name: <input type=\"text\" name=\"firstname\">
        Click here to submit form: <input type=\"submit\" value=\"Submit\">
    </form>
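(An aside before the script itself: one thing I'm planning to try, assuming I've understood it correctly, is taking Apache out of the picture while testing. Python 2's standard library can apparently serve and execute CGI scripts on its own, provided they sit in a ./cgi-bin/ subdirectory and are executable:

    # testserver.py: a minimal stand-in for Apache while testing.
    # Serves files from the current directory and *executes* scripts
    # found under ./cgi-bin/, so the form action would have to point
    # at cgi-bin/test.py for this setup.
    import BaseHTTPServer
    import CGIHTTPServer

    handler = CGIHTTPServer.CGIHTTPRequestHandler
    server = BaseHTTPServer.HTTPServer(('127.0.0.1', 8000), handler)
    print 'Serving on http://127.0.0.1:8000 ...'
    server.serve_forever()

If the form works there but not under Apache, I figure the problem must be in my Apache configuration.)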

    Python: (file is test.py)

    #!/usr/bin/python
    import sys, os, os.path, shutil, string, fileinput
    import cgi, cgitb

    print \"Content-type: text/html\\n\"
    form = cgi.FieldStorage()
    f = open('./testOutput', 'w')
    if form.has_key(\"firstname\"):
        f.write(form[\"firstname\"].value)  # .value gets the field's string
    else:
        f.write(\"Failure\")
    f.close()

So, it _looks_ like what's happening is that the info the user enters into the form is put into a variable called 'firstname' (actually it's a key-value pair put into a hash-like thing in the scripting language; in Python it's a dictionary). This variable is then sent as input to the CGI script (test.py), and the script is run. But that isn't what's happening. When I hit the 'Submit' button in the HTML file, the test.py script is displayed in the browser and the script isn't run. BTW, the reason that I have the script write to a file is so I can see whether it gets run. When I run test.py from the command line, it works fine. The fact that test.py is displayed in my browser indicates, I think, that Apache isn't running or is misconfigured or has permission problems. As far as I can see, none of those things is true, but I could be wrong. So here are my questions: 1) Why is Apache involved at all? Is it needed to send the variables gathered in the HTML form to the script? If so, why? I know that HTML is just a markup language, but I thought the 'form' tag had the ability to send a variable to a script and run the script. 2) Is there a way to test an Apache configuration? Is it possible for Apache to use 127.0.0.1 as the 'ServerName' (i.e. ServerName 127.0.0.1:80 in my httpd.conf file)? Basically, I just want to write CGI scripts for some webpages I'm hand-coding and test them on my home laptop. I don't seem to know how to do that and I could use some help. TIA."} {"_id": "136592", "title": "Can resizing images with css be good?", "text": "After reading Is CSS resizing of images still a bad idea?, I thought of a similar question. (Too similar? Should this be closed?) Let's say you need to use 10 different product image sizes throughout your website and you have 20k-30k different product images. Should you use 10 different files for each image size? Or maybe 5 different files and use CSS to resize the other 5? Would there ever be a combination that would be good? Or should you always make separate image files? If you use CSS to resize them, you will save on storage (in GBs) but you will have a slight increase in bandwidth and slower-loading images (but if images are cached, and you show both sizes of the image, would you use less bandwidth and have faster loads?). (But of course you wouldn't want to use CSS to resize images for mobile sites.)"} {"_id": "136594", "title": "In Functional Programming, should domain-relevant simple functions (e.g., sorts) be reified?", "text": "In a functional application, should you wrap common higher-level functions in domain-meaningful names or should you leave them \"bare\"? For instance, if you have a list of Addresses, and \"sorted by zipcode\" is a common domain-meaningful ordering (targeted mailers, etc.), is it preferable to write: val sortedCustomers = customers.sort((a,b) => (a.zipCode compareTo b.zipCode) < 0) Or is it better to create a function `sortedByZip(cs : Iterable[Customer]) : Seq[Customer]`? Creating the function has the advantage of being (minutely) more abstract, but has the disadvantage of being opaque, creating a name to remember, etc.
I'm asking in the context of a significant professional codebase, one that is intended to live for years, be as maintainable as possible, be \"true\" to the expectations of functional programmers, etc."} {"_id": "108133", "title": "Are developers more productive at night?", "text": "I personally stay awake late at night, coding and enjoying working on personal projects. My other colleagues also feel the same and like coding at night. However, it's not about being passionate about personal hobbies; rather, I really feel that I'm more productive at night. I think that there is something about night, maybe its darkness, maybe its silence, maybe another attribute, that makes developers more productive. Is there some truth to this? Why do some developers believe that they are more productive at night? Is there any scientific proof to justify this proposition? Maybe something like \"at night, monitor light is less harmful\" or \"the night air has more oxygen, and thus is more suitable for the thinking process\", or anything like that. > ### Moderator Note: > > The question is asking for **scientific proof** and otherwise cited > information on this subject. Answers that do not provide supporting > references will be removed. This is _not_ a poll where you should share when > you wake up and what parts of the day you personally are productive."} {"_id": "76157", "title": "When they say it's open source, does that mean I can take their pictures?", "text": "When they say that a project is open source (e.g. LifeRay), does it mean that I can take anything I want from that project? I want to use some of the icons used in LifeRay portal for my own (commercial) apps. Is this legal?"} {"_id": "120806", "title": "Is there a way to prevent others from stealing your open source project and using it to make a profit?", "text": "This might seem like a silly question to ask, but I can't really figure out the answer. The title pretty much says it all. Let's say you have an open source music player; along comes someone who copies it, adds features, modifies the interface, etc., and sells it. Nobody would find out. So how does it work? Related: I'm working on some projects myself to make me more employable, so employers can take a look at my code, but with some of them I don't feel like uploading them to an online repository, e.g. SourceForge, and making them visible to the general public."} {"_id": "234557", "title": "Why are text editors recommended over IDEs for beginners in books like Head First Java?", "text": "I am a programmer with some experience in C++ and I am learning Java. In most of the Java books (like Head First Java) the authors ask readers to stay away from IDEs and recommend using a good text editor, but I could not find a convincing reason apart from the fact that a beginner might be overwhelmed by many options being thrown at them at once. However, I started using Eclipse after writing a couple of small programs, and I think that tools like auto-completion, inbuilt help, and debugging make learning a better experience.
I would like to know which approach is better and if I am missing out on anything by using an IDE rather than a text editor."} {"_id": "232866", "title": "Succinct Lazy Initialization Pattern", "text": "# Background I often use the following lazy initialization pattern: public class Clazz { private Object object; private Object getObject() { Object object = this.object; if( object == null ) { object = createObject(); setObject( object ); } return object; } public void setObject( Object object ) { this.object = object; } protected Object createObject() { return new Object(); } } Member variables are only ever used directly twice (in the accessors), which includes internal calls. This also allows subclasses to inject new behaviour by overriding `createObject()`. # Problem This is a lot of code to ensure every member variable is always initialized. # Question What Java mechanism (syntactic sugar) could simplify the code? For example, Scala has the `lazy` keyword. # Ideas Ideally I would like to code: public class Clazz { @lazypolymorph private Object object; public Clazz() { // Uses the object... System.out.println( getObject().toString() ); } } The compiler would then expose a public mutator, a private value accessor, and a protected polymorphic creation method (used by the value accessor), following the typical Java naming conventions introduced in the background section."} {"_id": "232860", "title": "Programmer Timeliness vs Effort", "text": "The engineering team I am on has a very laid-back approach to work hours. People come in at 6am and at 11am, and work until things are done when necessary. Engineers will routinely work very late, on weekends, etc., as pressing issues arise (and deadlines loom). Recently, the company has decided to implement a \"core business hours\" initiative, which is fundamentally incompatible with the Engineering Team's \"get it done\" approach. What would be a good way to expose the nature of the hard work the team does to the rest of the company, in such a way that: a) It does not come off as self-congratulatory bragging b) It does not encourage the development of a no-work-life-balance culture"} {"_id": "232869", "title": "How to find number of points with same minimal distances on matrix", "text": "I'm trying to find the number of points in a matrix with the same minimal distances. Start with an MxN matrix, where M and N < 50000. A set of fixed points is given, with their respective coordinates. The problem is to find the number of points in the matrix such that the minimal distance to the set is achieved by at least two points in the set. The distance is measured by moving one unit at a time horizontally or vertically. You can't move diagonally. An example would make things clearer. Let's say we have 3 fixed points: (1,3), (3,1) and (3,6). Here the point (3,3) would count, since the minimal distance is 2 units and it is achieved by the first two fixed points in the set. However, the point (4,4) doesn't satisfy the requirements. Although the distances from (1,3) and (3,1) are the same, 4 units, the minimal distance is 3 units. My idea was to use any point with integer coordinates that lies on the bisector of a segment joining each pair as a point that would fulfill our requirements. But I find a lot of false positives, where a point satisfies the requirements but doesn't lie on the perpendicular bisector. Brute-force methods won't work, because we would need to check billions of distances for each point, and that is one heck of a job to do.
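What I do have is a small brute-force reference (fine only for tiny grids, and assuming I've translated the distance rule correctly) that I use to test candidate rules against:

    # Counts grid points whose minimal Manhattan distance to the fixed
    # set is achieved by at least two fixed points. Hopelessly slow for
    # M, N near 50000, but handy for validating a cleverer rule.
    def count_tied_points(m, n, fixed):
        tied = 0
        for x in range(1, m + 1):
            for y in range(1, n + 1):
                dists = [abs(x - fx) + abs(y - fy) for fx, fy in fixed]
                if dists.count(min(dists)) >= 2:
                    tied += 1
        return tied

    print(count_tied_points(8, 8, [(1, 3), (3, 1), (3, 6)]))

On the example above it counts (3,3) and rejects (4,4), which matches what I expect.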
How should I go about solving this problem?"} {"_id": "120802", "title": "Finding a definition for this anti-pattern", "text": "Working on a large and multi-tiered software project, I just found a recurring anti-pattern in the code. I couldn't find its definition in Wikipedia or other sources after a quick search, so I would like to ask you if there is a known anti-pattern matching our current situation. I'll explain it both in a general way and by providing an example. In general, this anti-pattern could be called by the Italian expression \"scaricabarile\" (lit. _passing the buck_, avoiding your responsibility and pushing it onto someone under you). ![Funny image describing this](http://i.stack.imgur.com/Td3JI.png) If there are several people on a [sub]project, and they are organized in a chain of responsibility, someone on the upper levels could decide to deliberately avoid resolving a design problem and leave its resolution to the person below him, who could repeat this and ultimately \"leave the last developer in a mess of development complexity\". The anti-pattern appears when it's discovered that, had the person at the top of the chain resolved the problem on his own, much less work and rework would have been required of the people under him. Real world situation (in general): you are developing a middle-tier service which handles calls from the front-end and has to perform several calls to different back-end services (a DB, a web service, file storage, outsourced services...) which are not very well documented. When you get an input field you can't immediately handle on your own, instead of asking for further information you simply put it in your front-middle-end interface and require the front-end guy to supply you with this parameter, which will be passed unprocessed to the back-end. The front-end guy may have to perform several calls to services to get a single value that could otherwise be obtained by the middle tier with a few calls/instructions. In my case (assuming that in the middle-back-tier interface all values are considered to be `string`, and please don't comment on that because there are reasons), I discovered that the back-end services require fields such as `order` and `flagDescription`, which can be thought of as, respectively, an enumeration of `ascending`/`descending` and a `boolean`. One of the problems is how to encode the enumeration. What does the service accept as values? `ASC`/`DESC`? `A`/`D`? `0`/`1`? Only the BE developers know. In fact, we **ultimately** discovered that the original ME developer was too lazy to ask for detailed information about the possible values of these fields. He instead put them in the front-middle-end interface as two `string`s, leaving the FE guys angrily questioning \"where do we get the values for these fields?\". I'll describe another case with a fictional example: suppose an outsourced back-end service defines an operation as `doSomethingOnAPerson(string socialSecurityNumber, string phoneNumber, Date birthdate)`, and your `Persons` DB table identifies people by a unique ID and contains all of this information. At a certain point, the FE knows only the person ID and a bunch of information insufficient to make a complete call. However, the middle tier exposes the same signature as the back-end instead of a simpler `doSomethingOnAPerson(long personID)`, which could be implemented as a DB call followed by a service call.
Since the ME is already deployed, this forces the FE guys to request a rework: a new service to retrieve the missing information needed to call the middle-end services, or in other cases a redevelopment of the service to change the signature and perform the missing operation. In general terms, _the original developer didn't take the time to resolve the [very simple/simplified] problem of obtaining the required information from the smallest piece of data available, but decided to **pass the buck** to the FE._ In a few words, a better design and much more attention could have prevented **both** situations. In fact, the first also led to trouble when switching from a mocked environment to **integration testing**. I would like to explain to the team that we did wrong by passing the buck among ourselves rather than solving problems. ## Do you know a popular definition for what I have described so far? Can you help me find _articles_ about this, if it has been documented yet?"} {"_id": "234080", "title": "Utterly Confused with OOP - How do I overcome a beginner's hurdle?", "text": "I have been reading and working through the exercises of Steve Lott's book _Building Skills in Python_. However, on the very **first** exercise dealing with OOP I have gotten completely stuck. **The problem set is here** : http://www.itmaybeahack.com/book/python-2.6/html/p03/p03c01_class.html#class- definition-exercises **My attempt** (source code):

    # Stock Valuation OOP
    class StockBlock(object):
        def __init__(self, purchdate=0, purchprice=0, shares=0):
            self.date = purchdate
            self.price = purchprice
            self.shares = shares
        def __str__(self):
            return \"Price: %2f \\tDate: %s \\tShares: %d\" % (self.price, self.date, self.shares)
        def getpurchvalue(self):
            return self.price * self.shares
        def getsalevalue(self, saleprice):
            return saleprice * self.shares
        def getroi(self, saleprice):
            return (self.getsalevalue() - self.getpurchvalue()) / self.getpurchvalue()

    blocksGM = [
        StockBlock(purchdate = \"25-Jan-2001\", purchprice = 44.89, shares = 17),
        StockBlock(purchdate = \"25-Apr-2001\", purchprice = 46.12, shares = 17),
        StockBlock(purchdate = \"25-Jul-2001\", purchprice = 52.46, shares = 15),
        StockBlock(purchdate = \"25-Oct-2001\", purchprice = 37.73, shares = 21),
    ]
    blocksEK = [
        StockBlock(purchdate = \"25-Jan-2001\", purchprice = 35.86, shares = 22),
        StockBlock(purchdate = \"25-Apr-2001\", purchprice = 37.66, shares = 21),
        StockBlock(purchdate = \"25-Jul-2001\", purchprice = 38.57, shares = 20),
        StockBlock(purchdate = \"25-Oct-2001\", purchprice = 27.61, shares = 28),
    ]

    class Position(object):
        def __init__(self, name, symbol, *blocks):
            \"\"\"Accept collection of StockBlock instances.\"\"\"
            self.name = name
            self.symbol = symbol
            self.blocks = blocks
        def __str__(self):
            return \"Symbol: %s \\tTotal Shares: %d \\tTotal Price: %d \" % (self.symbol, self.blocks[1], self.blocks[2])
        def getpurhvalue(self):
            value = 0
            for stock in self.blocks:
                value += stock.getpurchvalue()
            return value
        def getsalevalue(self, saleprice):
            value = 0
            for stock in self.blocks:
                value += stock.getsalevalue(saleprice)
            return value
        def getroi(self, saleprice):
            (self.getsalevalue() - self.getpurvalue()) / self.getpurchvalue()

    portfolio = [
        Position(\"General Motors\", \"GM\", blocksGM),
        Position(\"Eastman Kodak\", \"EK\", blocksEK),
        Position(\"Caterpillar\", \"CAT\", [StockBlock(purchdate = \"25-Oct-2001\", purchprice = 42.84, shares = 18)])
    ]

    # display individual blocks purchased, and purchase value of the block
    def main():
        for position in portfolio:
            StockBlock(position)

    main()

There is no solution in the book, so I cannot check that way. It seems that I totally do not understand OOP. It's just so frustrating and I don't know what I should do to continue... My question is, when I get this utterly stuck while learning programming, what strategies can help me overcome the problem? I think this is an important question because I am learning programming by myself. I cannot ask a teacher or friend for help (because I don't have any who are programmers)."} {"_id": "211378", "title": "Correctly value a development?", "text": "I started a discussion on StackOverflow earlier this week about one of my work projects and how to lead it correctly. But it seems that this stack is a more appropriate place to ask those sorts of questions, so here is my last question about my project: **How do you value your work? / How do you rate it in a financial way?** I'm working on a quite cool project with two other friends, and people that we know ask us for early access and how much it would cost per month to access it. Unfortunately, I was not able to answer this simple question, due to the fact that I don't agree with my friends on this specific topic. For me a bill of $80/month/contract seems reasonable and appropriate for our targeted market (institutions and corporations), but my partner would rather charge $500/month/contract. Our project is an asset manager in SAAS mode, and I don't really think that our potential clients would be ready to pay more than $90/month/contract, even if they have a lot of money due to their situation. So, I would really like to have some opinions and advice on this kind of question. PS: If my post is not on the appropriate stack, feel free to tell me. PS2: I'm quite sorry if I'm not totally crystal clear; I'm not a native English speaker. If you do not understand something, as above, feel free to let me know and I'll rephrase."} {"_id": "103718", "title": "Always keeping 2 people expert on any one chunk of code", "text": "I interviewed for a job at a company where they said their policy is to make sure that at least 2 people understand any piece of code, just in case one of them \"goes on vacation\". They also said that some people don't last more than a few months at this company, although many have stayed for years. I think I get the message: They are inclined to fire people right away who don't like the work culture or who don't integrate into their cliques. But is there another interpretation? Thanks."} {"_id": "232823", "title": "What are the principles of open source projects?", "text": "Although it's generally agreed by organisations like the OSI and the FSF what is and isn't open source software (basically, the terms of the source code license), what are the guiding principles of open source projects? Other software movements have principles, like the Agile Manifesto. It seems as though open source shares some common values (such as transparency, collaboration, etc.). Are these documented somewhere?"} {"_id": "144437", "title": "Which stages of the requirements analysis process in mobile requirements engineering are the most challenging ones?", "text": "I'm doing research on formulating a requirements analysis model as a stage of requirements engineering for mobile-application development, considering its limitations and needs (agility
, etc.). What I'm trying to figure out is which parts of this process (requirements analysis for mobile development) are the most challenging ones (so I can focus more on them), and whether there is any stage that you think I need to include or exclude (e.g. some may think a quality plan may or may not be necessary, etc.). To make it clearer, below is a list of a few of the areas on which I could focus (by the way, your suggestions can be anything outside the below list): -Requirements specification -Prototyping -Requirements prioritization -Focusing on quality functions"} {"_id": "144430", "title": "Was API hooking done as needed for Stuxnet to work? I don't think so", "text": "Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects the 32-bit Windows computer which has a WINCC setup on it, Stuxnet does many things and that it specifically hooks the function `CreateFileA()`. This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs, i.e. when the PLC programmer opens a file with a .s7p extension, control transfers to the hooked function `CreateFileA_hook()` instead of `CreateFileA()`. Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also one more function called `CreateFileW()` which does the same task as `CreateFileA()`, but the two work on different character sets. `CreateFileA` works with the ASCII character set and `CreateFileW` works with wide characters, or the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set and not ASCII characters. _I'm assuming that the developers of any famous commercial software (e.g. WinCC) that will be sold in many countries will take 'Localization' and/or 'Internationalization' into consideration while it is being developed in order to make the product fail-safe_, i.e. the software developers would use `UNICODE` while compiling their code and not just 'ASCII'. Thus, I think that `CreateFileW()` would have been invoked on a WINCC system in Iran instead of `CreateFileA()`. Do you agree? My question is: if Stuxnet hooked only the function `CreateFileA()`, then, based on the above assumption, isn't there a significant chance that it did not work at all? I think my doubt will get clarified if: my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Edit: For more clarity about my question and what I'm looking for: is it possible that the WinCC STL editor is programmed in the following way? //Pseudocode Begins if (locale == ASCII Dependent) //like US, UK, Australia etc. { CreateFileA(); //with appropriate parameters } else if (locale == UNICODE Dependent) //like Middle East, China, Japan etc { CreateFileW(); //with appropriate parameters } //Pseudocode ends If that is possible, then does it follow that Stuxnet would work appropriately in the US but not in China or Japan or Iran?"} {"_id": "229887", "title": "How do programs like Java and C++ store variables in a database? Do they still use MySQL like PHP does?", "text": "I imagine they have to have some sort of query that goes to a database, or maybe I'm wrong and they can just store it on the computer?
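From the little I've read (and I may well be wrong), the idea seems to be that a compiled program simply links a database client library and sends SQL over a socket, the same way PHP does; here is a Python sketch I pieced together (the connection details and the `shop` database are made-up examples, and it needs the third-party mysql-connector-python package; Java would apparently use JDBC, and C++ something like MySQL Connector/C++):

    # Any language can talk to MySQL through a client library; no
    # server-side scripting language has to sit in between.
    import mysql.connector

    conn = mysql.connector.connect(host='localhost', user='app',
                                   password='secret', database='shop')
    cur = conn.cursor()
    cur.execute('SELECT name, price FROM products WHERE price < %s', (10,))
    for name, price in cur:
        print(name, price)
    conn.close()

Is that roughly how it works?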
Is MySQL for server-side scripting only?"} {"_id": "186431", "title": "When do you 'speak' C++ fluently?", "text": "I see that many companies require the same skill. This skill is often described as follows: > _\"Applicants are required to speak C++ fluently.\"_ I never really understood what _fluent_ means for programming. So, my question is: When do you speak a programming language fluently?"} {"_id": "103710", "title": "What has been learned about making variance part of the type?", "text": "In Java, the variance of parameterized types is indicated depending on how it's used:
    <A> void store(ArrayList<? super A> list, A elem) { list.add(elem); } Whereas in Scala it is indicated in the class declaration with a `+` or `-`. What I want to know is, now that Scala has been around a while, what has people's experience been with making variance a property of the type? Is it generally flexible enough for what you want to do? Are there aspects you would have done differently, knowing what you know now?"} {"_id": "103711", "title": "When developing a piece of software, when do you start thinking/designing the concurrent sections?", "text": "Following along with the principle of not optimizing too early, I'm wondering at what point in the design / development of a piece of software you start thinking about concurrency opportunities. I can well imagine that one strategy would be to write a single-threaded application, and through profiling identify sections that are candidates to run in parallel. Another strategy I've seen a little of is to consider the software by groups of tasks and to make the independent tasks parallel. Part of the reason for asking is that, of course, if you wait until the end and only then refactor the software to operate concurrently, you may structure things in the worst possible way and have a major task on your hands. What experiences have helped you determine when to consider parallelization in your design?"} {"_id": "91202", "title": "What are some common algorithm optimization opportunities - mathematical or otherwise", "text": "What are some common algorithmic optimization opportunities that everyone should be aware of? I have recently been revising/reviewing some code from an application, and noticed that it appeared to be running considerably slower than it could. The following loop turned out to be the culprit, ... float s1 = 0.0; for (int j = 0; j < size; ++j) { float diff = a[j] - b[j]; s1 += (diff*diff * c[j]) + log(1.0/c[j]); } ... This is equivalent to ∑_j { (a_j - b_j)^2 * c_j + log(1/c_j) }. Each time the program is run, this loop is called perhaps over 100k times, thus the repeated calls to log and divide result in a very large performance hit. A quick look at the sigma representation makes it pretty clear that there is a trivial fix, assuming you remember your logarithm identities well enough to spot it: ∑_j { (a_j - b_j)^2 * c_j + log(1/c_j) } = ∑_j { (a_j - b_j)^2 * c_j } + ∑_j { log(1/c_j) } = ∑_j { (a_j - b_j)^2 * c_j } + log(1/(∏_j c_j)) and leads to a much more efficient snippet, ... float s1 = 0.0; float s2 = 1.0; for (int j = 0; j < size; ++j) { float diff = a[j] - b[j]; s2 *= c[j]; s1 += (diff*diff * c[j]); } s1 += log(1.0/s2); ... this led to a very large speed-up, and should have made its way into the original implementation. I assume it did not because the original developer(s) either weren't aware, or weren't 'actively aware', of this simple improvement. This made me wonder: what other, similar, common opportunities am _I_ missing out on or overlooking, and how can I learn to better spot them? I'm not so much interested in complex edge cases for particular algorithms, but rather examples like the one above that involve what you might think of as 'obvious' concepts that crop up frequently, but that others may miss."} {"_id": "143428", "title": "Function like C# properties?", "text": "I've been thinking about how C# properties work, and how they could work. I know the purpose C# properties were originally designed for, which is certainly useful.
However, in this question I'm comparing them more abstractly to functions and other programmatic elements. Firstly, I wondered if it were possible, and if so why not, to have a function-like C# property. For example: byte n = 4; byte test // property { get { return n; } set { n = value; } func { n++; } } To use as follows: // n is 4 byte n2 = test; // get test = 2; // set // n is now 2 test; // function // n is now 3 The 'n++' in this example is used only as a simple demonstration. I also noticed that there is room for more polymorphism than just in function parameter types. For example, having overload resolution by return type, get/set, and private/public as well. public test { get { } get byte { } private get byte { } get bool { } get myType { } set byte { } set myType { } func { } func(bool) { } func(byte, myType) { } // etc... } The above example defines \"test\" along with reasonably fine detail involving different implementations for using test in various different ways. More examples: Read only: byte test { get { } } Function like: test { func { } } Function like with parameter polymorphism, returns a byte: byte test { func(bool) { } func(myType, Int16) { } } Behaves differently depending on the type assigned to it: test { set bool { } set myType { } } Function like and could return a value or not, depending on the context in which it is used: test { byte get { } func(bool) { } func(byte, byte, myType) { } } The additional possibility for expressiveness and code tidiness should be apparent. However, I have been challenged to find specific uses. One example of how this could be used is equality. Where a bool is expected, for example in an 'if' statement, the behaviour could be defined as being '=='; however, where there was either nothing to return to, or the return target was other than bool, the behaviour would instead be '='. if (n.equals(4)) // if n == 4 n.equals(2); // n = 2 Another example is as follows: class my_list { List store; public count { get { // unless otherwise apparent, // use the Int32 version. return (Int32)count; } get byte { byte n = 0; ForEach(var e in store) n++; return n; } get Int32 { Int32 n = 0; ForEach(var e in store) n++; return n; } private get { // An implementation of \"count get\" that only // occurs when count is used from inside the // my_list class. } func { print store.Count(); } set int { if (value == 0) list.Clear(); } } }"} {"_id": "72384", "title": "At what point should developers become involved in a triangular relationship among the client, the design agency, and the developers themselves?", "text": "Imagine that there is a project to be completed. This project involves three parties: a client, a development organisation (the one you work for), and a design agency. The design agency is a subcontractor of the client. The design agency is largely responsible for styling; however, in the past these designers have produced substandard work, such as extensive use of id targeting in CSS instead of class targeting, etc. At what point should developers become involved in a triangular relationship among the client, the design agency, and themselves? What meetings or communication would you expect to take place at this point?"} {"_id": "73478", "title": "Is there any legislation requiring how we store passwords?", "text": "Given the Sony data breach and other recent events, are there any actual laws or regulations regarding how to store passwords? I think there are for credit cards; you're not allowed to store the 3-digit security code or something.
Is it illegal to actually store plaintext passwords without warning the user? Or is there a level of encryption that has to be used? Are there any standard guidelines that anyone can point me to?"} {"_id": "180172", "title": "Number of semi-random combinations / permutations given a set of constraints", "text": "### Background: There are around 60 students at the boarding school I work for. The counselor asked my colleague and me to find a better way to come up with seating arrangements for dinner than doing it by hand. He would like assignments for the rest of the school year. He also asked us to try to solve some of the issues he has been hearing about from students and faculty. ### Constraints: * Most of the students are not from the US, so when they are surrounded by people of the same nationality (i.e., at the dinner table), they speak the language they are fluent in, instead of practicing English, * Complaints are made when students have sat at a certain table \"too many\" times overall, * or if they sit at the same table more than twice in a row, * and some of the students do not get along, so they cannot sit together. ### Input: At run time, the program is supplied with: * A set of people, * A set of tables, and * Each table has a different number of seats (repetition is allowed) The sizes of both sets, and the size of each table, do not change between assignments. ### Tests: I am using 18 people of different nationalities, and 4 tables of size 3 through 6, inclusive. I picked numbers that I thought made sense for that data set: * No more than 3 people of the same nationality can sit together at a time * No person can sit at a table more than 4 times ### Results: I have run the generator around 15 times without changing the input data. Each time, it comes up with anywhere from 6 to 12 \"weeks\" of assignments. ### Questions: (least to most important) 1. Why do I get a different number of generated assignments every time I run the program? The data set isn't changing between runs. 2. How do I find the... * minimum number of people of the same nationality that can sit at a given table, * minimum number of overall times they sit at a given table, all while * maximizing the number of generated assignments? 3. How do I guarantee that these actually are the correct numbers? ### Edit: Each time I generate a new assignment, I call `Collections.shuffle(List)` on the list of people to randomize their order. I then pass the list of tables and people to a backtracking method based on kapilid's eight queens implementation on GitHub to assign people to tables."} {"_id": "50857", "title": "How do open-source projects grow?", "text": "I know of lots of software that is open-source. For at least some of it, someone, somewhere must have written the first version alone. How does good open-source software become well known? I'm most interested in the first steps. How does software written by one person gain its first new contributors? I'm looking for practical advice. I've started a project here, called aodbm. What steps can I take to give it the best possible start?"} {"_id": "136337", "title": "Component Diagram, what's next?", "text": "I'm working on an iPhone application and I created a component diagram. I defined interfaces for each component. For example, I have an AI component and I have made some interfaces for it. How should I carry on? Should I create a class diagram to go into details, should I go into the details of each interface, or should I just start implementing my interfaces?
I am a bit confused about how to proceed."} {"_id": "188762", "title": "How do I create a .NET WebService for File Upload", "text": "I need to create a web service using the .NET platform for accepting file uploads. What are the options available for doing this in C#? What is the best approach to use? Can you please provide me with blogs/code samples/references for further reading?"} {"_id": "101711", "title": "Do you do any validation on directories when recursively deleting them?", "text": "How do you recursively delete the paths of the data that your app created? The deleting process itself is trivial. The question is, do you do extra validation? And if so, what kind? From one point of view, maybe it's simply an uncomfortable task to do, but on the other, imagine if the program has a bug and something passes \"c:\\\" to the recursive function, or if there is a memory overflow and some data becomes corrupt, causing paths you work on to be truncated, etc. Is there even a way to do a clean validation?"} {"_id": "101716", "title": "In pseudo code what does := mean?", "text": "The section entitled Algorithmic Implementation has the following code:

    // Return RC low-pass filter output samples, given input samples,
    // time interval dt, and time constant RC
    function lowpass(real[0..n] x, real dt, real RC)
        var real[0..n] y
        var real α := dt / (RC + dt)
        y[0] := x[0]
        for i from 1 to n
            y[i] := α * x[i] + (1-α) * y[i-1]
        return y

What does := mean?"} {"_id": "188765", "title": "Why are references rarely used in PHP?", "text": "I have some C++ knowledge and know that pointers are commonly used there, but I've started to look at PHP open source code and I never see code using references in methods. Instead of passing a reference to the variable into the method, having the method change that variable's value, and just returning, the code always uses a return value. I have read that using references uses less memory, so why aren't they used in PHP?"} {"_id": "146374", "title": "Object oriented versus function oriented for backend design in PHP?", "text": "I am curious, as I am currently using functions exclusively in my web pages. The MVC pattern is very interesting, and I know CodeIgniter utilizes classes, which works very well. I want to be able to keep my code as clean as possible, and I thought about trying to move my functions into classes. Currently I am separating the files by logic, so I have functions that output the HTML with arguments that pass any dynamic content, and functions which handle the user input. I also have functions that interact with the database. I require and include the necessary files between them. As one can tell, there are a lot of arguments being passed around. So that leads me to wanting to try a style of OOP in PHP where I can do the same without having any HTML inside the actual class. I really don't want to hear anything about frameworks, as the point is to learn and incorporate these ideas into my own website. That being said, I am fairly new at web development, so I do not understand the many different styles of setting up the logic of websites. I would like to get some insight on how to best clean up my code with OOP. EDIT: It is fairly obvious people have strong feelings about frameworks. So I decided to make this very clear; MY WEBSITE IS A PROJECT FOR LEARNING. I WISH ONLY TO LEARN FROM IT. THEREFORE I do not wish to use a framework, as it makes more sense for me to learn how it all works before making my life easier and actually using a framework.
A FRAMEWORK IS FOR MORE ROBUST PROJECTS. Thank you."} {"_id": "230707", "title": "Reducing Valgrind Findings (uninitialised value)?", "text": "I'm trying to run some code through Valgrind. The code depends on OpenSSL, and OpenSSL is making Valgrind useless due to uninitialised values. I know where the use lies in OpenSSL's PRNG, but I'm getting hundreds more findings at the moment. I'd like to try and run a tool over the OpenSSL sources that initializes values. So: int* p; int n; is changed to: int* p = NULL; int n = 0; **Question** : Is anyone aware of a tool to perform bulk initialization on declarations of primitives? I'm more than happy to perform an unneeded initialization and have the optimizer remove it later. That's the optimizer's job. For completeness, \"speed\" has never been a top concern for me. I'm more interested in \"correctness\" and assuring it. To me, it does not matter how fast someone arrives at the answer if it's wrong."} {"_id": "230701", "title": "How can I resolve a URL on a specific DNS server", "text": "I'm currently facing an issue on a project. I'm resolving URLs on multiple DNS servers using a Node.js server. Up to here, everything is fine. But some ISPs restrict incoming requests from IPs outside their IP range. So how could I modify my request to those specific DNS servers to make it look like I'm one of their clients? If I understand DNS requests correctly, there is no such thing as an x-forwarded-for header like an HTTP request would have. So can it be achieved? Or am I facing a real blocker? (I hope I am on the correct Stack Exchange for this question.)"} {"_id": "10035", "title": "Am I potentially hurting my career by taking a dev test position?", "text": "Background: If you haven't read much of what I've written here, then: * I'm a sole developer * I've developed professionally for ~3 years, personally for 13 * I love my work * Top-to-bottom, I find the whole process of developing software interesting, fun and exciting (with the slight exception of marketing) * I hate the environment I'm in now and had to leave Now: I've applied for a position as a Software Development Engineer in Test which potentially pays significantly more than I'm making now (~70%), is located somewhere I'd really like to be, and looks like a company I'd really like to work for (interesting product, good tools, smart people - I was referred by an existing employee). So obviously, I'm really hoping that I get the position. I don't want to get pigeonholed as a test developer, however, because they can sometimes have a bad reputation as not being good 'production' developers (not sure why... and this could be wrong, but that's the vibe I've gotten). I'd prefer my career path to end up at a senior dev/lead dev position in 5-10 years. If I'm offered** the position, is there a possibility of hurting this desired path? ** I'm getting ahead of myself. I know. I haven't even had the interview yet. I'm anxious. Still, I interview well and I think that I have the knowledge/experience for the position."} {"_id": "109171", "title": "What would the role of a \"Software Engineer in Test\" be?", "text": "> **Possible Duplicate:** > Microsoft SDET position > How difficult is it to transition from a software test engineer to a > software development engineer? I interviewed for a Software Engineering role, but unfortunately wasn't offered the position.
The company said they were impressed with me, that I was close behind the other candidate, and that they wanted me to go in and talk about another role, Software Engineer in Test. So my question is as per the title, really: what is the role of a Software Engineer in Test? What type of work do you envisage would be involved in a role like this? Would taking a role like this limit me in moving into regular software engineering?"} {"_id": "132658", "title": "Does any software come with a warranty?", "text": "> THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, [blah blah > blah]. It's everywhere. **Is there any software which expressly comes with warranties** which can be taken seriously? That is: * Provided by a group/company which has the **resources** to handle warranty claims, not Uncle Bob. * Assures something \"**useful**\", such as ACID or ISO-whatever compliance, not protection from meteor impacts."} {"_id": "146376", "title": "How to improve the training of students regarding maintainability?", "text": "Maintainability is a major stake in professional software development. Indeed, maintenance is nearly always the longest part of a software life cycle, as it lasts from the project's release until basically the end of time. Moreover, projects in maintenance represent a large majority of the overall number of projects. According to http://www.vlegaci.com/298/interesting-statistics-%E2%80%93-numbers-of-programmers-in-maintenance-vs-development/, the proportion of projects in maintenance is about 2/3. I recently came across this question, where the guy looks pretty surprised to discover that his job is mainly about maintenance. I then decided to open a discussion (in French) on the main site of the French community of software development professionals (http://www.developpez.com/). The discussion is entitled \"Are students well-enough trained for the reality of professional software development?\" and is mainly about **maintainability**. It was pointed out that, at least in France, people are not well-enough prepared to face maintenance in both aspects of it: * maintaining existing code * making maintainable code My question here echoes this discussion and aims at finding a good way to teach maintainability. * How can we teach maintainability? * What kind of exercises would you suggest? * If you have been well trained regarding maintainability, what particular kind of courses did you take? [edit] After some misunderstanding, I think I must clarify my question. As a project leader and software developer, I often work with trainees or freshly graduated students. I was once freshly graduated myself. The thing is that students are usually unfamiliar with principles such as SOLID that increase the maintainability of a project. We often end up having significant difficulties making projects evolve (low maintainability). What I am looking for here is a concrete academic example of successful teaching about the importance of maintainability and how to write better code regarding this particular point; or possible suggestions to improve the way students are trained."} {"_id": "87972", "title": "OOP technology death", "text": "I've heard many times about aspect-oriented programming, mostly that it is the \"next generation\" technology in programming and is going to 'kill' OOP. Is that right?
Is OOP going to die, and what could be the reason for that?"} {"_id": "181809", "title": "Is code duplication a necessary evil in C?", "text": "I'm rather new to C, and I'm wondering if code duplication is a necessary evil when it comes to writing common data structures, and in C in general? I could try to write a generic implementation for a `hash map` for example, but I always find the end result to be messy. I could also write a specialized implementation just for this specific use case, and keep the code clear and easy to read and debug. The latter would of course lead to some code duplication. Are generic implementations the norm, or do you write different implementations for each use case?"} {"_id": "181802", "title": "How to Release to the App Store as an Individual", "text": "I've written an iOS app, and I'd like to release it onto the App Store. I'm an individual so it's not being released via a company or anything, just me. Is it typical to just release a free app under your own name? If so, what would be appropriate copyright information to submit?"} {"_id": "132657", "title": "How can I obtain feedback from external developers?", "text": "One of our external devs is leaving after successfully completing a **short (1 month) project** which he was contracted for. I'd like to get some feedback from him by doing an exit interview to find out what could be better in our organization and development process. What sort of questions can I ask to get detailed feedback from him, and not just vague responses?"} {"_id": "53872", "title": "I want to master ASP.NET - What concepts should I focus on/What concepts do you most value?", "text": "I start a job this summer doing work in ASP.NET 4 (C#). I plan on working with some legacy code as well as MVC. I want to get a running start. I have a good understanding of HTML/CSS/JavaScript, and a pretty good understanding of C# itself, design principles, design patterns, and understand master pages, basic MVC2, and code-behinds for web forms. * In your opinion what aspects of ASP.NET are the most important to master for web applications? * What do you value most in your usage of ASP.NET? * Do you have a recommendation for understanding the internals of ASP.NET itself?"} {"_id": "206805", "title": "Converting from web to PhoneGap", "text": "I was going through the `PhoneGap` documentation about how to package your `HTML/JS/CSS` into a platform-specific `\"native app\"`. They maintain separate documentation for the separate `Cordova` versions, which seems fine. I essentially understand most of the things there. But the confusion I have is the disparity, or the loss of information, in the documentation from the older versions to the newer ones. For instance, if you take a look at this documentation for version `2.1.0` about setting up the Android project and scroll a bit down, they mention steps such as: * In the root directory of your project, create two new directories: * /libs * assets/www * Copy cordova-2.0.0.js from your Cordova download earlier to assets/www * Copy cordova-2.0.0.jar from your Cordova download earlier to /libs * Copy xml folder from your Cordova download earlier to /res .... * Change the class's extend from Activity to DroidGap * Replace the setContentView() line with super.loadUrl(\"file:///android_asset/www/index.html\"); ... and so on, which clearly states the steps to follow while migrating or creating an app for a specific platform. But upwards from version `2.1.0`, this information is missing. 
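To be concrete, following the old steps you end up with a main activity roughly like this (my reconstruction from memory of the 2.x `DroidGap` API, so treat the exact class and package names as approximate):

    import android.os.Bundle;
    import org.apache.cordova.DroidGap;

    public class MainActivity extends DroidGap {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Load the packaged web app instead of calling setContentView()
            super.loadUrl("file:///android_asset/www/index.html");
        }
    }
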
In the newer documentation, all they have is just the `regular Android setup stuff and an Android hello world`, which we can obviously find in the official Android documentation. I tried searching the whole documentation for that version and I could not find any information regarding the steps I mentioned above from the older documentation. Otherwise how are readers supposed to know those steps? Am I missing something here, or have they provided those steps elsewhere?"} {"_id": "177498", "title": "Cache While Developing or Finish Development then Cache?", "text": "I'm new to caching and haven't used it in my projects. I would like to know what the best practice is in caching. Should caching be done while developing, or should I finish development and then cache everything?"} {"_id": "130656", "title": "Is 'process debt' a term people work with", "text": "As a result of a retrospective we thought we had uncovered better ways of developing software. We had an idea we thought was great and tried it. We stayed with it during the development of a major update which took 3 months. After a retrospective with the maintenance engineers it turned out our idea does not work for them (we had discussions with them before we tried the idea and they also thought it was a great idea). We came to the agreement that we had better get back to the old situation. The team for the update is dismantled and the maintenance engineers don't have time to do so (although their project manager has agreed to invest the time). We're now in the situation where the maintenance engineers pay the interest each time a minor release has to be made. This is very much like technical debt but has nothing to do with the product itself. Is this called process debt or is this put under the term technical debt? And what would be a good way of dealing with it? (Any other or concrete ideas to make it visible to product managers?) PS The idea was migrating a 4-product VSS database to SVN. The database heavily leans on shared files and is a mess to untangle and pour into a usable SVN structure. It's very counterintuitive, but it seems some things are better kept in VSS."} {"_id": "43867", "title": "What are some good resources for debugging/disassembling proprietary software?", "text": "From time to time, I experience different bugs with proprietary software that I need to interact with. In order to get through these bugs, I need to develop various workarounds. Is there a good book for debugging/disassembling proprietary software to write better workarounds?"} {"_id": "43864", "title": "Quantify value for management", "text": "We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both of these systems' core functionality lies within a shared library. Most of the time, the updates occur in the shared library and we simply deploy the updated library to both of these systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate these two systems into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are: * Easier maintenance * Decreased testing/QA time Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the number of hours this will save in the future and how this will speed up future development. 
Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?"} {"_id": "244496", "title": "Continuous Integration using Docker", "text": "One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A \"normal\" CI workflow goes something like this: * Poll repository for changes * Pull from repository * Install dependencies * Run tests In a Dockerized workflow, it would be something like this: * Poll repository for changes * Pull from repository * Build docker image * Run docker image as container * Run tests * Kill docker container My problem is with the \"run tests\" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing different kinds of services (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation, and do something like this: * Poll repository for changes * Pull from repository * Build docker image * Run docker image as container * Open SSH session into container * Run tests * Kill docker container SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container, and thus breaks the isolation. I would love to hear SO's different approaches to this problem."} {"_id": "128860", "title": "How do you validate critical input that cannot be vetted?", "text": "How does one prevent users from creating erroneous input sets, when there is no practical way to vet the input? ### The scene I modify a small ERP package written in Visual FoxPro. One part of the package concerns itself with printing truck manifests and invoices to be sent with the drivers on their delivery routes. The print routine, when fed nothing as an input, will attempt to print everything, resulting in reams and reams of printer paper being wasted on a high-speed printer. I am not in a position to re-write any of the GUI interface elements, nor can I adapt any frameworks, tool kits, or other outside code to be used in this situation. The reasons are related to office politics; please do not suggest that I can override the existing ERP framework, as it is not an option for me. ### The issue The users are in a high-pressure, time-critical environment. Each process is measured in minutes or even seconds, which means that I have to minimize processing time as much as possible. Because of this environment, and possible distractions, users _frequently_ ignore the dialogs, pressing the [Enter] key, which causes the focus to rapidly move through the form and eventually land on the action button for the input dialog, resulting in them triggering an automatic printout. The input consists of a date range, route range, and sales order range. The input for date range cannot be auto-set to \"today's date\", as frequent back-printing is required. Also, the end-users work across midnight, i.e. 
date rollover makes this impractical without rigging a routine that auto-detects the change, etc. The input for routes cannot be hard-coded, nor can it be deduced from routes that have already been shipped, because re-prints are required (see above). The input for sales orders only has meaning when printing single orders or specific ranges. So, frankly, _there is no practical way to validate input_. The action button that triggers printing cannot be blocked. Any suggestions that a blocking dialog be placed in front of the user will be ignored. I am not at liberty to discuss why this is not an option, other than that the concept has already been discussed elsewhere on the site (from a different vantage point) and was rejected. Blocking printouts when all inputs are empty was rejected as a design decision, as the software _must_ accommodate this as a feature. ### The users The users have been repeatedly asked not to do this. They frequently ignore this advice. Triggering this unfortunate event is not something that their foremen/managers will address, so there is no pressure to end the behavior. ### The organization I do not have a say in the workflow involved, only in the modification of existing software components affected by that workflow. ### The vendor The vendor second-sources the package as a custom installation from the original software vendor. The vendor requires that all code changes be sent back to them for integration into their codebase. Significant changes to architecture will result in increased future costs during version migrations due to the extensive customization involved; in some cases, the programmers have even told me that they will completely ignore such large changes and will do as they please. ### The software I have no say in the selection or installation of the software, so changing the platform is out of the question. Regarding the environment of the software, each invoice printed is a single call. There isn't a batch printing facility, and because of how the print facility is integrated into the system (and some language quirks as well) it isn't feasible to make a batch wrapper around that API. Topping this off, this part of the program calls another program that does the invoice print, which in turn calls the print-a-report API, which prints a single invoice. Horrid design, I know. Input forms are a weird combination of a form header that is devoid of input boxes, but can contain other GUI elements. Input boxes are defined at runtime. ### The objective The software will prevent the users from erroneously printing all paperwork. How would you solve this issue?"} {"_id": "893", "title": "Is going to grad school going to hurt your engineering career?", "text": "I study software engineering and have every intention of becoming a software engineer and staying one. However, I also love logic, computability theory, automata theory, and similar computer sciency math topics, and would love to do at least a Master's degree _at some point_ after I graduate to study these topics more in depth. This is really something I want to do for myself, and not because it will contribute something to my career as a software engineer (and let's face it, it won't), but I'm worried that a few years of absence from the field will be frowned upon, and will make it harder for me to get back on the horse, so to speak. Is that true? 
How _do_ employers react to such an absence?"} {"_id": "128867", "title": "interview for an internship but I think I may be unprepared", "text": "I have an interview for a C++ internship position. Now, the thing is I've taken two out of three quarters of the basic CS classes (in C++) at my community college and we covered the basics up to arrays, pointers, linked lists, recursion, etc (basically all of Walter Savitch's Absolute C++ book). I spoke to the interviewer on the phone and he said the interview will include technical questions on trees, hashes, probably some sorting algorithms, and other stuff like that. I have less than a week to prep for the interview. What can I do to familiarize myself with the absolute bare essentials of the aforementioned interview question topics? Or is it just a waste of time for me until I take the next class that covers all of those things?"} {"_id": "188496", "title": "Design strategies for storing and validating serial numbers", "text": "We are writing software to track Foo Widgets. Each Foo Widget has a serial number. The serial number is a 32-character alphanumeric string. The string is separated into five sets. Each set is separated by a dash (so the s/n is 32 characters NOT including dashes). So for example: 11111111-1111-1111-1111-111111111111. But this may change, since our software isn't actually creating the serial numbers. I'd like to learn about different strategies for storing the serial number and doing user validation of the serial number in the UI. To start with, I'd like to talk about strategies for storing the serial number in our system. This issue came up at Foo Widgets Incorporated, and there was a disagreement about whether we should store each serial number with or without the dashes. I think the most flexible way of doing this (but maybe not the simplest) would be to store the serial number without the dashes, store the schema of the serial number (as a regular expression), and then create an identifier that is used to track which schema is used (so later if the manufacturer changes it we can support that, and perhaps both schemas at the same time). The counter-argument to this was that the dashes \"are a part of the data\" and that it's \"not like a phone number\". I'm having some trouble understanding this point of view."} {"_id": "135020", "title": "How to store record statuses (like pending, complete, draft, cancelled...)", "text": "Quite a lot of applications require records in their tables to have a status, such as 'complete', 'draft', 'cancelled'. What's the best way of storing these statuses? To illustrate what I'm getting at, here is a (very short) example. I have a simple Blog application and each post has a status, one of: published, draft or pending. The way I see it there are 2 ways to model this in the database. 1. The Post table has a text field that includes the status text. 2. The Post table has a status field that contains the ID of a record in the PostStatus table The Blog example here is a very simple example, where an enum (if supported) might suffice. However I'd like responses to the question to take into account that the list of statuses could change at any time, so more could be added or removed. Can anyone explain the advantages/disadvantages of each? Cheers! 
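To make the two options concrete, here's roughly what I have in mind (MySQL-flavoured DDL; the two post definitions are alternatives, and all names are just placeholders):

    -- Option 1: status stored inline as text
    CREATE TABLE post (
        id INT PRIMARY KEY,
        title VARCHAR(255),
        status VARCHAR(20) -- 'published', 'draft' or 'pending'
    );

    -- Option 2: status normalised into a lookup table
    CREATE TABLE post_status (
        id INT PRIMARY KEY,
        name VARCHAR(20) -- 'published', 'draft' or 'pending'
    );

    CREATE TABLE post (
        id INT PRIMARY KEY,
        title VARCHAR(255),
        status_id INT NOT NULL REFERENCES post_status (id)
    );
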
My initial opinion on this is that it's better to use another table and look up the status, as it's better for normalisation, and I've always been taught that normalisation is good for databases"} {"_id": "135025", "title": "Separation of Concerns when adding new types", "text": "I have a system I've been working on this week where I'm having a hard time balancing separation of concerns with easy extensibility. I'm adding new types to the system, and it feels like shotgun surgery. The basic idea is that data is collected (polled) from a remote system and then made available to a number of different kinds of clients. To support their interface type (protocol), I'm doing a lot of translation. I'm going to try to simplify this so I can fit it in a question box. There's a FruitService out in the world with an SNMP interface. My publisher's job is to collect that data and then publish it. 1. So for each kind of new Fruit we decide to publish, I create a class that knows how to pull the attributes for that fruit via SNMP. 2. Internally, Fruit is translated to DTOs (the same objects used to talk to client #1 (RMI)). 3. A cache is updated and checked for changes to trigger async notifications. 4. A class to translate the fruit into the xml schema used by another client (#2). 5. A class to translate the fruit into simple attribute/value pairs used by another client (#3). In the end I create 5 classes and edit 8 more to plug in a new type. It's not a lot of work; a couple of hours to write code, test, and check in for a simple type. Note that there isn't a lot of commonality in the fruit types (e.g.: pear and coconut), so most of the common abstractions are based around the process of updating the data and not the data itself. My design concerns are: 1. The SNMP interface changes 2-3 times a year 2. The xml schema (client 2) changes 10 times a year. 3. Adding new types happens less frequently, generally when someone gets around to it. But maybe if it was easier... So the notional goal with the design was to handle those external changes easily. This led to all the separation in the translation layers. But adding types feels harder than I'd like. Is there a technique or example I might look into? An idea I'm missing? Am I applying SRP wrong, and should there be an Apple type that speaks SNMP, xml schema, DTO, etc? ![alt text](http://imgur.com/IaAEU.png) EDIT: Client #1, the RMI client, is custom written when a new fruit is added, to take advantage of all the fruit's capabilities. So if we decide to pull banana information from the FruitService we will also write a client that allows you to peel the banana, get a notification when it's ripe, and tell you which bunch it belongs to - along with the basic fruit attributes like size, weight, color, time till rotten, etc."} {"_id": "254923", "title": "What are the disadvantages of a \"simple factory\"?", "text": "I am reading the book »Head First Design Patterns« from O'Reilly. Before explaining the Factory Method Pattern, they introduce a Simple Factory first. They are using the example of a pizzeria. In a first step they show the problem: Pizza orderPizza(String type) { Pizza pizza; if (type.equals(\"Pepperoni\")) { pizza = new PepperoniPizza(); } else if (type.equals(\"Salmon\")) { pizza = new SalmonPizza(); } // ... and so on pizza.prepare(); pizza.bake(); pizza.cut(); pizza.pack(); return pizza; } The obvious problem is that you have to change Pizzeria whenever you add or remove a Pizza. Hence they introduce a \"Simple Factory Idiom\" first. 
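As I understand it, the refactored version looks roughly like this (my own reconstruction, not the book's exact listing):

    public class SimplePizzaFactory {
        public Pizza createPizza(String type) {
            // The if/else chain moves here, out of the Pizzeria
            if (type.equals("Pepperoni")) {
                return new PepperoniPizza();
            } else if (type.equals("Salmon")) {
                return new SalmonPizza();
            }
            throw new IllegalArgumentException("Unknown pizza type: " + type);
        }
    }

    public class Pizzeria {
        private final SimplePizzaFactory factory;

        public Pizzeria(SimplePizzaFactory factory) {
            this.factory = factory;
        }

        public Pizza orderPizza(String type) {
            Pizza pizza = factory.createPizza(type); // creation is delegated
            pizza.prepare();
            pizza.bake();
            pizza.cut();
            pizza.pack();
            return pizza;
        }
    }
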
They move the creating part into a class \"SimplePizzaFactory\". Now you don't need to modify Pizzeria anymore when adding or removing a Pizza. Then they say that this approach isn't that good when you have more than one pizzeria (in several towns). I don't really understand their reasoning. They give the following example code and then they say that each pizzeria wouldn't be using the procedure as implemented above, but would be using different methods in order to \"bake\", \"cut\" and \"pack\" the pizza. BerlinPizzaFactory berlinFactory = new BerlinPizzaFactory(); Pizzeria berlinPizzeria = new Pizzeria(berlinFactory); berlinPizzeria.orderPizza(\"Pepperoni\"); Instead of using the Simple Factory, they suggest using the Factory Method Pattern. First, I don't see why the BerlinPizzeria is supposed not to use the procedure. It's still a Pizzeria, and when you call orderPizza, you're using the same procedure. My best guess is that they are implying that you are able to implement, let's say, a cafeteria (I'm deliberately using something entirely different to make my point) and use the factory (as it is independent of the pizzeria) and prepare the pizza in a way you want to. But even when using the Factory Method Pattern, nobody forces you to use the default procedure. It's even simpler to \"hide\" that you're doing it differently. Their code examples are given in Java, and Java methods are virtual by default. So I would be able to implement BerlinPizzeria and override orderPizza (or the authors would have to explicitly declare the method as final). The client, however, wouldn't notice that my BerlinPizzeria is doing things differently. In conclusion I don't see any significant difference between a Simple Factory and the Factory Method Pattern. The only advantage of the Factory Method Pattern I'm seeing is that you would save a few classes (namely, the 'outsourced' factories). So, what really are the disadvantages of a Simple Factory, and why isn't it a good idea to 'outsource' the creating part? Or what really is the advantage of the Factory Method pattern, and why is it a good idea to force the creating part to be implemented in the subclass?"} {"_id": "155702", "title": "Open .doc file from my website in browser", "text": "What's the best way to give the end-user of my web application the ability to open, edit and save (via browser) Word documents that are stored in my database? I have this working by doing an HTML conversion of the file (via Aspose Words), but this method seems not even close to flawless and I'm trying to improve this. Is integrating with Google Docs possible/good? Their editing seems awesome and very powerful. I can't use any Microsoft Word objects (and this is even discouraged by MS). EDIT: The application is developed in .NET and currently uses the .NET framework 2.0. However, as this is fairly obsolete, the idea is to restart from scratch and therefore use the 4.0 framework and C# or VB."} {"_id": "140542", "title": "Wolfram is out, any alternatives? Or how to go custom?", "text": "We were originally planning on using the Wolfram Alpha API for a new project, but unfortunately the cost was way too high for what we were using it for. Essentially what we were doing is calculating the nutrition facts for food. (http://www.wolframalpha.com/input/?i=chicken+breast+with+broccoli). Before taking the step of trying to build something that may work in its place for this use case, is there any open source code anywhere that can do this kind of analysis and compile the data? 
The hardest part in my opinion is what it has for assumptions and where it gets the data to power the calculations. Or, to put it another way, I cannot seem to wrap my head around building something that computes user input to return facts and knowledge. I know if I can convert the user input into some standardized form, I can then compare that to a nutrition fact database to pull in the information I need. Does anyone know of any solutions to re-create this, or APIs that can provide this kind of analysis? Thanks for any advice. I am trying to figure out if this project is dead in the water before it even starts. This kind of programming is well beyond me, so I can only hope for an API, open source, or some kind of analysis engine to interpret user input when I know what kind of data they are entering (measurements and food)."} {"_id": "205388", "title": "Complex fetching of Domain Objects", "text": "Usually whenever I want to fetch an aggregate root by ID I just use some type of Repository::findByID(...) function. When I started with DDD I thought factories were just a pattern to build _new_ objects, but after meeting some aggregates needing one or two extra queries to load, I realized that Factory::objectWithID(...) was useful to also create instances of objects already existing in the database [because Pro has many more fields]. Now I have a tree relationship of entities, like **Project->Task**. The number of relations is huge and I have no framework providing lazy loading. Since Tasks can be nested, have complexity of their own, and should not be fully retrieved in one query, I made **Project** and **Task** different aggregate roots. How should I retrieve and persist **Tasks**? It seems that a **ProjectFactory** is not the solution this time because a **Project** does not contain the whole **Task** tree. I still want some aggregate-root-like features for my **Project**, and since I am avoiding queries inside my entities, I decided to write the relationships inside a Project-Service-Aggregate. Now I can retrieve a single **Project** with a Repo::findByID() function, but fetching a **Task** looks like ProjectAggregationService service = ProjectAggregationService.new(SomeProject) Task task_1 = service.findTask(...) Task task_1_child_2 = service.findTaskChild(task_1, 2) I am puzzled at this point because: 1. I have a service to represent entity relationships. 2. I am declaring many instances of a service, whereas before, services tended to be quite unique objects. 3. Not having a clear object tree like Project.tasks[1].subtask[3] makes the code above look more complex than necessary. Basically I could summarize my question as: did I take the right approach? I would greatly appreciate any comment on my reasoning. I am mostly concerned about degenerating my code with overblown complexity, but I still think that keeping references to queries and repos outside my entities' implementation is a good goal."} {"_id": "205389", "title": "How to detect root cause of problem or bug", "text": "Often in coding, I find it very slow and difficult to detect the root cause of a bug, and sometimes I end up going to the wrong point in my code. It's painful. I know that detecting the root cause of a bug is a very important skill for programmers. Does anybody have a trick or technique to suggest as a good way of finding the root cause?"} {"_id": "131304", "title": "Confused About Virtual Memory for All Processes", "text": "I hope this is the right place. 
This is a homework assignment for my Operating Systems course and I have to implement a working virtual memory system in C++, so programming is directly involved. I've read some sources about paging and virtual memory now (Tanenbaum, What Every Programmer Should Know About Memory, etc) and noticed that everybody starts off with the phrase `Every process gets its own virtual memory.` Then off they go into a detailed analysis about page size, page replacement algorithms, the MMU and TLB, etc. Those things are actually quite fine in my mind; what doesn't make sense is the concept of virtual memory. If I have four processes running simultaneously and each process uses its own virtual address space, how is the physical memory protected? The MMU translates the virtual address 0 the same whether it comes from process one, two, three or four. Let me give an example: Every process has a 32-bit virtual address space that it can use. Every process starts off the same way MOVI 10, 0 // r10 = 0 LOAD 11, 10 // r11 = MEM[r10]; value at address 0x00000000 I just don't understand this concept of a distinct virtual address space if every process tries to access the same virtual address, which will be translated the same way by the MMU. So all four processes will receive the same data? We have a page table which keeps an index of every mapped and unmapped page to a page frame. If each page entry is unique, then how can it account for the duplicated virtual address space that each process has? I'm at the point where I have written design drafts for the MMU, TLB and the Page Table, but I can't continue until I properly understand how the heck virtual space is defined and used for each process."} {"_id": "182446", "title": "Is Clojure a 3GL or a 4GL?", "text": "A bit of background (in case I'm mistaken)... I think I understand that (it's an oversimplification): * manually entering code into memory (or on a punchcard) is \"first generation language\" * using mnemonics corresponding to CPU instructions would be a 2GL (like assembly language) * C/C#/Java/Objective-C are all 3GLs * SQL is a 4GL Where would Clojure stand in such a classification? I'm particularly confused by this Wikipedia sentence: > The archetypical example of a declarative language is the fourth generation > language SQL, as well as the family of functional languages and logic > programming. This sentence seems to imply that all functional languages are declarative. Is that correct? Then Clojure can be used in a functional way (and it is probably recommended to use it \"as much as possible\" in an FP style), so is Clojure a declarative language? Also, I can see that Clojure makes it really easy to create 4GLs (e.g. as embedded DSLs, like core.logic reproducing logic programming in about 200 lines of code), but is Clojure itself a 4GL?"} {"_id": "205382", "title": "What are the requirements for an open-source license inside an open-source license?", "text": "If I include an open source library in my project that is licensed under the MIT license, but contains BSD-licensed code that requires attribution (correctly attributed inside the project), is it my responsibility to attribute it again if I decide to use that library? 
Normally this is not a problem (I would just credit everyone regardless of the license), but on a mobile platform there is not a lot of real estate or efficient ways to show/bundle these licenses."} {"_id": "205385", "title": "Polymorphism versus authorization", "text": "I have something bothering me in my understanding of polymorphism (vs role): Note: I am using Rails (but it's a general question) I have **4 models**: * User * Pro * Customer * Company There is a **polymorphic association (profileable)** between Pro/Customer and User [because Pro has many more fields for the registration process than Customer]. User is used for the session (I am using the devise gem). Basically something very similar to this write-up is used: http://jeffsaracco.com/blog/2012/03/04/ruby-on-rails-polymorphic-user-model/ ### Company association * Now, I want to create my association with Company. * For now, only **Pro has one Company** (but I figure other profileable_types could also have a Company). * Customer doesn't have a Company. In that case which association is preferable: ### Approach 1 * User has_one Company * Company belongs_to User * and manage authorization (with a gem like cancan) to permit company creation only for Pro (with my profileable_type field in the user table) or: ### Approach 2 * Pro has_one Company * Company belongs_to Pro * So in that case I don't need any authorization management anymore (User (and by extension Customer) cannot create a company by default) Looking for pros/cons of the 2 approaches. I am pretty sure the first one is the best solution, as I have a user_id in my company table which would be more generic and expandable than having a pro_id. I would be pleased to be sure. Thanks!"} {"_id": "117893", "title": "How to add image support to client-server database application?", "text": "I have an architectural question about a project that I am working on. Currently it is a simple .NET C# application that runs on several client machines, and communicates with a central MySQL server for data storage. There is not an application or service on the server at this time, just MySQL. I would like to add image support so that users can add images to the database (and view them after the fact), but after storing the images directly in MySQL as `BLOB` I've decided this approach is a bad idea, as suggested in many posts on SO. The application can't store images on the local filesystem because they would then be inaccessible to other users not on the same local host. That being the case, I'm not sure how best to implement image support otherwise: * The machines are all Windows-based; I can set up a shared folder, but then I have the headache of maintaining a mapped network drive and user credentials/permissions. * I've thought of adding another application (or service) that runs on the server and handles reading/writing of images on the server side, but this seems redundant in light of the fact that the computers are already capable of file transfers. * Should I consider creating an application on the server to handle all communications between clients and the SQL database, adding filesystem communication there?"} {"_id": "124102", "title": "Writing to event log - best practice if log is full", "text": "I have an application that writes to its own Event Source. The Event Source itself is created upon install to prevent user vs admin access issues at runtime. 
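For context, the relevant calls are essentially the following (simplified, using System.Diagnostics, with placeholder source/log names):

    // Run once by the installer, elevated:
    if (!EventLog.SourceExists("MyAppSource"))
    {
        EventLog.CreateEventSource("MyAppSource", "MyAppLog");
    }

    // At runtime the application just writes to its own source:
    EventLog.WriteEntry("MyAppSource", message, EventLogEntryType.Error);
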
Should writing to our own Event Source fail, the event is instead written to the Application log (two events actually, one for the original event data, and another saying 'Could not write to own event log because...' or something similar). We've recently discovered a problem: when the Application event log is full, the exception is unhandled. Now granted, I could just eat the exception (similar to what this answer says). So what is the best behavior? The application is attempting to log the issue and attempting to log that there is a problem. Should I eat the exception, let the logging operation fail silently and continue execution? Or should I let the exception bubble up the call stack so that users/developers know of the issue? What's the recommended approach under these conditions?"} {"_id": "121556", "title": "Move a player to another team, with players stored in one arraylist and teams in another using java", "text": "Basically I have a team class, which has an array list that stores the players. In the driver class there's an arraylist that stores the teams. Anyhow, I've worked out how to add a player to a specific team and likewise remove a player from said team. Where I'm hitting problems is when I try to transfer one player to another team. My understanding is to scan through the first team, and get the player. Then somehow add this player to another, by scanning through the chosen team and adding to it? I've tried this way but it seems to replace the original player with the new player in both teams. My other approach would be to somehow return the parameters of the player object, create another with the returned parameters, remove the original, then add the new instance to the other team? Really not quite sure generally how I can go about this, been trying all afternoon! If someone could offer me a general idea, then I can go off and apply the understanding in practice."} {"_id": "128282", "title": "How to store exception messages", "text": "How are exception messages commonly stored, for any domain? I'm thinking about this from a maintenance standpoint. if(!Condition1) throw new Exception(\"Some exception\"); if(!Condition2) throw new Exception(\"Some exception\"); If it was decided that the exception message in that snippet needed to be changed, it would have to be changed in two places, leaving it wide open for inconsistent messages and such. How better to store exception messages then? Perhaps as a static class with constants? public static class Exceptions { public const string CONDITION_NOT_MET = \"Some exception\"; } ... if(!Condition1) throw new Exception(Exceptions.CONDITION_NOT_MET); if(!Condition2) throw new Exception(Exceptions.CONDITION_NOT_MET); Are they often (in production) hardcoded?"} {"_id": "173553", "title": "Validation and Verification explanation (Boehm) - I cannot understand its point", "text": "Hopefully my last thread about V&V, as I find B. Boehm's text is one I just do not understand well (likely my technical English is not that good). http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that deviation leads only to changes in these derived products (design, code). But he says it begins with design and ends with acceptance tests (you can check the V model inside). The thing is, I have accepted ISO 12207's position that all testing is validation, yet it does not make any sense here. 
In order to be sure the product complies with requirements (acceptance test), I need to test it. Also it says that validation problems mean that requirements are bad and need to be changed - which does not happen with the testing that testers do, who just check correspondence with requirements."} {"_id": "173554", "title": "Is there an open source license with proprietary-like terms?", "text": "Is there any license meeting the following criteria: * source code available (users allowed to browse the source) * users are allowed to modify the code with credits to the original author * not allowed to sell the software itself or any other program using code from it * can be sold by the original author"} {"_id": "173557", "title": "Knowing so much but applying it is a problem?", "text": "In my work, my friends always tell me, you know so much about computer science, electronics engineering, etc. But I have difficulty applying it and my code is crap. How do I solve that problem? Will I get better, or is programming not the career for me? For example, yes I know the octree, which is used for space partitioning in games, and it is used for optimization; did I implement it? No, but I know about it in principle.. Do I know algorithms like sorting, searching, etc? Yes, and I know them pretty well, but I didn't implement them.. When I get a task, I struggle to apply the things that I know..."} {"_id": "219333", "title": "When defining directory path, should a trailing slash be included?", "text": "Say I'm defining a directory and then including files from it. Is it better practice to do: define('PATH', 'C:/xampp/htdocs/includes/'); require PATH.'header.php'; or: define('PATH', 'C:/xampp/htdocs/includes'); require PATH.'/header.php';"} {"_id": "173559", "title": "Using foldr to append two lists together (Haskell)", "text": "I have been given the following question as part of a college assignment. Due to the module being very short, we are using only a subset of Haskell, without any of the syntactic sugar or idiomatic shortcuts....I must write: _append xs ys : The list formed by joining the lists `xs` and `ys`, in that order_ append (5:8:3:[]) (4:7:[]) => 5:8:3:4:7:[] I understand the concept of how foldr works, but I am only starting off in functional programming. I managed to write the following working solution (hidden for the benefit of others in my class...): > append = \xs -> \ys -> foldr (\x -> \y -> x:y) ys xs However, I just can't for the life of me explain **_what the hell is going on!?_** I wrote it by just fiddling around in the interpreter, for example, the following line: foldr (\x -> \y -> x:y) [] (2:3:4:[]) which returned `[2,3,4]`, which led me to try foldr (\x -> \y -> x:y) (2:3:4:[]) (5:6:7:[]) which returned `[5,6,7,2,3,4]`, so I worked it out from there. I came to the correct solution through guesswork and a bit of luck... I am working from the following definition of foldr: foldr = \f -> \s -> \xs -> if null xs then s else f (head xs) (foldr f s (tail xs)) Can someone baby-step me through my correct solution? I can't seem to get it....I have already scoured the web, and also read a bunch of SE threads, such as How foldr works"} {"_id": "135683", "title": "Is having C++ header files without extension a good practice?", "text": "I have an argument with a colleague of mine regarding the C++ guidelines to follow. 
He currently designs all his libraries this way: * He inconsistently uses uppercase and lowercase letters in his filenames * Some of his headers don't have any extension I believe that having no extension is something reserved for C++ standard files, and that using uppercase letters is error-prone (especially when you deal with code which is meant to work on both Windows and Linux). His point is that he follows `Qt` conventions (even for code that doesn't use Qt) and keeps saying: \"If Qt does it that way, then it can't be bad.\" Now I try to keep an open mind, but I really feel bad when I have to work on/with his libraries. Is there a common established set of rules regarding this? Does the standard say anything about it? Thank you very much."} {"_id": "156167", "title": "Dynamic Fields/Columns", "text": "What is the best way to allow for dynamic fields/database columns? For example, let's say we have a payroll system that allows a user to create unique salary structures for each employee. How could/should one handle this scenario? I thought of using a \"salary\" table that holds the salary component fields and joining these columns to a \"salary_values\" table that holds the actual values. Does this make sense? **Example Salary Structures:** Notice how the components of the salary can be shared or unique. -- Jon's Salary -- Basic 100 Annual Bonus 25 Tel. Allowances 15 -- Jane's Salary -- Basic 100 Travel Allowances 10 Bi-annual Bonus 30"} {"_id": "208091", "title": "Best practices for deploying multilingual API", "text": "I have a hobby project and it looks like there is a need for an API library (which in turn is a wrapper over an existing HTTP API) in at least two languages - JavaScript and Python; adding support for Perl would be nice as well. The question is: what are the best practices for minimizing the cost of supporting and developing such multilanguage APIs? Is it better to choose one language and write just wrappers for the other ones? Or is it better to move slower but implement each method thoroughly in each language?"} {"_id": "94715", "title": "Is Independent Java Development Worth it Compared to Objective-C?", "text": "First I would like to say that I'm not a developer, but I love to code whenever I can. I mainly use Perl, which is the language I use when doing system administration. For different reasons (that I'm not sure of) I have recently been told to learn Java by my manager, so I'm starting to do so. I was thinking that I should take advantage of the situation to build some Android apps. But it seems that Objective-C has increased in popularity recently because of the iPhone and iPad. I want to know if it's worth it to develop with Java for Android compared to developing with Objective-C for iPhone (or iPad). Of course, not based on the app's functionality; let's say someone has developed the app X for both platforms. On which platform would it be the most worthwhile? And no, I'm not gonna learn both (sadly ain't got much time). Thank you!"} {"_id": "134379", "title": "industry averages for time spent on maintenance", "text": "A manager recently announced that we were spending far too much time fixing bugs. I guess he thinks we should write perfect code all the time (whilst still hitting those impossible deadlines of course!) and it made me wonder what the industry average of time spent bug fixing vs writing new code was. So does anyone have any metrics on time spent bug fixing against new code development? 
Or is there any empirical analysis of bug fixing time for the industry as a whole? Is 50% spent bug fixing too much, or about right? How about 20% or 33%? I'm happy to accept anecdotal evidence from personal experience, as that would form part of some statistics here that I could compare our performance against."} {"_id": "203979", "title": "Shouldn't documentation be written together with tests rather than in the code?", "text": "It is popular to write documentation in the same file as the code and extract it using software to generate documents. In order not to affect performance, the documentation is written within commented lines, in a DSL designed just for that purpose. And that often results in a cumbersome source file. Nowadays, test-driven development is popular, and usually tests are written in separate files from the code. Since test files are not run during production, they do not interfere with performance, and hence tests are not written within commented lines in a DSL, but are written in the same programming language as the code. Doesn't it make more sense to write documentation in the same file as tests, not in commented lines but in the same programming language, using a DSL? For each syntactic element like a class or method, first documentation describing its features can be written, and then the test can follow it. That would make things nicer in my opinion. I have scripting languages like Ruby in mind, if that makes a difference."} {"_id": "81988", "title": "What \"code smells\" are there that are a symptom that an event listener model is required?", "text": "What are the symptoms in a code base that indicate that an event-listener approach is required? It seems to me that when there are classes that need to be called by a set of other classes that is not defined at design time, you need some sort of signaling framework, but I would like to hear what other situations there are that would be improved by changing to an event-based model."} {"_id": "203975", "title": "Development Test Interview", "text": "I have a development test: I am required to solve a given problem with any language of my liking in about 8 hours. The company is GFI, if anyone was wondering. How can I truly prepare for such a test?"} {"_id": "203977", "title": "How to continue development without the webservice?", "text": "We make iOS apps where we often get data in a list from an API, then we select the data and go to the next ViewController. Often it happens that the server is down or some API is not ready. What is a good development practice whereby we can continue our regular workflow without depending on the web service, use some fixed data until it is ready, and then, when it is ready, continue using the web service without making many changes?"} {"_id": "134374", "title": "Is using dynamically generated code as cache a good idea?", "text": "I have a web search interface that can compare products in a table. This data set changes a few times a week. I have been storing a \"DISTINCT\" list (used for parametric selection) in a `cache` table. The query is computationally expensive because it involves table joins and thousands of records, hence the reason to cache it. I was wondering if it is a good idea to \"cache\" certain data in PHP in dynamically generated code. The idea is that I could create a cache.php file that is 'included', which has the data in arrays for PHP to use without going to the database. This PHP file could be cached using any of the PHP compiler caches out there. 
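Something along these lines (simplified; $rows would hold the result of the expensive DISTINCT query):

    // Rebuild step, run whenever the data set changes:
    file_put_contents('cache.php', '<?php return ' . var_export($rows, true) . ';');

    // Consumers then skip MySQL entirely:
    $distinctList = include 'cache.php';
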
I'm not having speed issues (yet), but I don't like the idea of having to ask the database for cache data, as it seems the overhead of the php->mysql transaction is expensive."} {"_id": "240379", "title": "Why is Java not 'pure' OOP?", "text": "Java is designed with a very OO approach, and somewhat even 'forces' programmers to program within the OO paradigm (which can be considered good or bad, a matter of opinion). However, while _almost_ everything in Java is a class or an object, primitive data types (`int`, `double`, etc) are not. While I see no advantages to this, there are disadvantages. For example, when a method wants to take in as a parameter 'any value', it usually declares the parameter of type `Object`, the supertype of all classes in Java. However, if the programmer wants to input an `int` to the method - a perfectly valid value - he/she has to 'box' the value in an instance of the `Integer` class, created specifically for these kinds of needs (as far as I understand). My question is: why aren't primitives in Java classes, like everything else?"} {"_id": "81981", "title": "What naming anti-patterns exist?", "text": "There are some names, where if you find yourself reaching for those names, you know you've already messed something up. For example: **XxxManager** This is bad because a class's name should describe what the class does. If the most specific word you can come up with for what the class does is \"manage,\" then the class is too big. What other naming anti-patterns exist? **EDIT**: To clarify, I'm not asking \"what names are bad\" -- that question is entirely subjective and there's no way to answer it. I'm asking, \"what names indicate overall design problems with the system.\" That is, if you find yourself wanting to call a component Xyz, that probably indicates the component is ill-conceived. Also note here that there are exceptions to every rule -- I'm just looking for warning flags for when I really need to stop and rethink a design."} {"_id": "156160", "title": "Does it make sense to use an ORM in Android development?", "text": "Does it make sense to use an ORM in Android development, or is the framework optimized for a tighter coupling between the UI and the DB layer? * * * **Background**: I've just started with Android development, and my first instinct (coming from a .NET background) was to look for a small object-relational mapper and other tools that help reduce boilerplate code (e.g. POJOs + OrmLite + Lombok). However, while developing my first toy application I stumbled upon a UI class that explicitly requires a database cursor: `AlphabetIndexer`. That made me wonder if maybe the Android library is not suited for a strict decoupling of the UI and DB layers, and that I will miss out on a lot of useful, time-saving features if I try to use POJOs everywhere (instead of direct database access). * * * **Clarification**: I'm quite aware of the advantages of using an ORM _in general_; I'm specifically interested in how well the Android class library plays along with it."} {"_id": "133688", "title": "Is C++11 Uniform Initialization a replacement for the old style syntax?", "text": "I understand that C++11's uniform initialization solves some syntactical ambiguity in the language, but in a lot of Bjarne Stroustrup's presentations (particularly those during the GoingNative 2012 talks), his examples now primarily use this syntax whenever he is constructing objects. Is it recommended now to use uniform initialization in _all_ cases? 
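For instance (my own minimal example of the kind of thing I mean, not Stroustrup's):

    #include <string>
    #include <vector>

    int main() {
        std::string s{"hello"};  // brace form instead of std::string s("hello")
        std::vector<int> v1(3);  // old style: three elements, all zero
        std::vector<int> v2{3};  // brace style: ONE element with value 3
    }
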
What should the general approach be for this new feature, as far as coding style and general usage go? What are some reasons _not_ to use it? Note that in my mind I'm thinking primarily of object construction as my use case, but if there are other scenarios to consider please let me know."} {"_id": "42803", "title": "Is there a universal date format that anyone in the world can understand?", "text": "In Canada, everyone is familiar with the date format `YYYY-MM-DD`. In Europe or South Africa, they prefer `DD-MM-YYYY`. There are users from South Africa who get confused with the `YYYY-MM-DD` date format. Is there a way to handle this situation? I was thinking of using the following format for all: `Feb 02, 2011`"} {"_id": "23234", "title": "What are the hardest parts of the C++/C#/Java programming languages?", "text": "Just wondered what the features of the three main programming languages are which show you are an 'expert'? Please exclude 'practical' skills such as indenting. Am I right in saying for C++ the most difficult aspect to master is STL/generics? Java seems much easier as memory is handled for you. I'm not entirely sure on C# either. I'm trying to use this to gauge my current level of ability and what I wish to aim for. ps this was posted on stackoverflow but got binned due to arguing, please do try to keep it civil as I am really interested in the answers from everyone :)"} {"_id": "85605", "title": "Do programmers possess the means of production?", "text": "I was listening to The Servile State by Hilaire Belloc this morning and pondering whether or not I possessed the means of production, as did the peasant of the Middle Ages; as did not his descendants after the oligarchs of England forced him into servitude. The means of production was the arable land that the serf was seated on, which, even though not legally his, it was illegal to evict him from. So, as programmers, with the hitherto unknown supply of free tools and resources, have we reclaimed as a class of workers, unlike any others, the means of production? Given the chance, a midrange PC and a stable internet connection, could we not each of us be wholly self-sufficient and not just wage-earners?"} {"_id": "143462", "title": "Web application deployment and Dependencies", "text": "I have a free software web application that uses other free software scripts for its appearance. I have trouble deciding whether I should copy the source code of the used scripts into my project's main repository, or list them as dependencies and ask the user (who installs the application on his server) to install them himself. Since some of the scripts solve browser compatibility issues, and I'm not a good web designer (I hate checking my web site on IE for compatibility), using the newest version of the scripts is preferable, so the second solution works here. But it has a problem when scripts aren't backward-compatible with the versions I used during development. Maybe there is another well-known method for these issues that I don't know about."} {"_id": "135971", "title": "When is it not appropriate to use the dependency injection pattern?", "text": "Since learning (and loving) automated testing I have found myself using the dependency injection pattern in almost every project. Is it always appropriate to use this pattern when working with automated testing? Are there any situations where you should avoid using dependency injection?"} {"_id": "70919", "title": "Why c++? Where to start?", "text": "I know there have been many questions like this one, but please bear with one more. 
All the programming languages I know now are made for web purposes. I've been learning ActionScript, PHP and a bit of JavaScript, AJAX, etc. I've liked PHP very much but I'm still looking for something I'll really love! That's why I want to try something new. Can you tell me the positives and negatives of this language? Is there some big example of C++ usage? Is it a good language to start this journey with (maybe you can recommend something else)? Are there some good video tutorials which I can buy on CD/DVD? Could you recommend some books? @edit: After reading your answers and some info on the internet, I'm confused about two languages. Those are C++ and Python. Python can be used for the web, so it combines well with my current knowledge. But how about Python for desktop applications? Is it hard to build cross-platform apps? Could you compare these two languages for me? PS. I'm really glad about all of your answers. It is very helpful for me & it forces me to think productively about my future ;)"} {"_id": "200770", "title": "How can I translate my android app help and keep it up to date?", "text": "My Android application is almost done, and I have spent much time creating a set of very precise help pages in HTML that I display using WebViews. I am now ready for the next step, which is _i18n_. Translating my app is no problem; the strings.xml is not that long and I can easily ask friends to translate it. My problem is with these help pages. When I update my app, I'll probably have to update the help pages as well. I have seen that it isn't that simple, even with one language. I need your opinion on how to perform such a massive piece of work (which is not at all my job as a developer). Do I need to: * Manually update every language's help pages when I update the app, which means asking all of my translators to provide a new version of their work each time I change anything. * Provide help only in a default language (probably English, even though the default language is French. French people would still have the French help ;-D). * Google-translate the help pages, even if it _\"not well English is\"_. I don't know any way to do it automatically. * Do something else, which you probably thought of long ago. Thanks for your answers, tips and tricks."} {"_id": "200771", "title": "Creating a separate project for JPA entities", "text": "Where I work it is common practice to create a separate project for JPA entities and a project for the web application (the WAR). So basically you have (at least) two projects for each application - appJPA and appWEB. The people who started this convention are no longer there, so I have been wondering about the benefit of having this separation. The only advantage I can think of is the Eclipse JPA facet, which provides various tools for JPA. But if I am not mistaken, this facet can be applied to WAR projects as well. So my question is: is this separation unnecessary?"} {"_id": "135974", "title": "How can I sell my boss on Python+Django instead of PHP+a different framework?", "text": "My boss has tasked me with a re-write of our intranet website. The existing system is very old PHP that doesn't use a framework. My preference is strongly to do the rewrite in Python and Django, but my boss does not like Python syntax (he is also a developer). I'm on the opposite end... I don't like developing in PHP, and my PHP experience is extremely limited, but I've done a lot of work in Python. My boss is aware of my experience but still wants me to sell him on Python. 
Some of the things he mentioned he does not like about Python: * indentation is the only marking for the begin/end of a code block (he loves his curly braces) * documentation issues (I told him Python documentation is great) * IDE support is limited (he mentioned PyCharm and Wing IDE, not sure which is better) * he's had compatibility issues moving between older versions of Python He may be the only other pair of eyes on the new code. How can I convince him that Python is a better choice? Is Ruby a potential middle ground?"} {"_id": "134084", "title": "Assembly instructions execution time", "text": "Where can I find the execution times of x86 instructions? How can I find out which instruction is faster or smaller?"} {"_id": "208541", "title": "Inserting HTML code with jquery", "text": "One of our web applications is a page that takes in a serial number and returns various information that is displayed to the user. The serial is passed via AJAX, and based on the response, one of the following can happen - * An error message is shown * A new form replaces the previous form Now, the way I am handling this is to use jQuery to destroy (using $.remove()) the table that displayed the initial serial form, then append another HTML table that contains another form. Right now I am including that additional form as part of the HTML source, just setting it to display:none, then using jQuery to show it when appropriate. However, I don't like this approach because if someone views the source of the page, they can see the table HTML code that is not being displayed. My next thought would be to use AJAX to read in another HTML file, and append it that way. However, I am trying to keep down the number of files this project uses, and since most pages in our project will use AJAX, I could see a case where there are multiple files containing HTML snippets - and that feels sloppy to me. What is the best way to handle a case where multiple HTML elements are being shown and removed with jQuery?"} {"_id": "208540", "title": "What must I take into consideration when designing a UI around a 0..1:1 relationship?", "text": "I'm designing the database schema for a new product feature. In my current design I have some related optional data. Rather than have nullable fields I have a separate table with a 0..1:1 relationship to the main table. I chose this design because the queries are simpler* if null data doesn't have to be taken into consideration. The team lead pointed out to me that it will complicate data binding in the UI and suggested that I just use nullable fields. I am wondering what complications the optional table approach will introduce. The obvious thing that comes to mind is that data-aware controls can't bind to a closed data set, so I'll need to either create a record on the fly when the user attempts to fill in the optional data or create the record at the same time as the main record, negating the purpose of having a separate table. Is there anything else I should be aware of? **Clarification** *Simpler, by which I refer to the dizzying number of rules and ambiguities concerning null's behavior under different query scenarios in the SQL standard and the fact that no two vendors agree on which of these rules to implement."} {"_id": "170923", "title": "What to do when there are no logical user stories but separate development tasks?", "text": "We need to generate a release in 3 weeks, and for the planning we are doing today we don't have coherent logical user stories from the backlog tasks. 
Is it valid to say each development task is equivalent to a user story? Methodologically, what is the correct way of handling this? Because either way we are going to have standup meetings and we are going to track the progress of the project against those development tasks. For example, we have things like: . Adapt ETL to process numeric lists . Adjust licensing component . Remove DTC and so on. So, is it valid to use those tasks for planning poker and iteration planning? If not, what is the alternative?"} {"_id": "208542", "title": "Choosing the simplest platform version of a source when porting to another platform", "text": "The question will sound weird as I'm not very experienced in C, C++ and ASM. Let's say I have a hard time finding a C# managed, safe-code-only solution to solve a problem. Then I find that in 1998, some code for (e.g.) a codec was written in C for PPC, x86, ARM, etc. This code can have 20000+ lines, and I need to port it. Then I try one platform, and notice a lot of API calls, CUDA stuff, and inline assembler in most of the code. Etc. Is there a preference, concerning portability, between PPC, PS2, x86, ARM, SPARC, etc., when porting to another platform, that helps reduce the number of platform-specific code occurrences? I know that all these builds exist especially because there are specific features on each processor, but I wonder if some processors have fewer antennas, third eyes, and four legs, and are simpler (or more "limited") than others. I spent the last week porting code, and had the intuition that focusing on certain aspects helps in finding more cross-platform code. I may be wrong, and I - know - that it highly depends on skill. But that doesn't change the fact that, e.g., a 20000-line Java codebase may be easier to port to .Net for Windows Phone than x86 asm code. More specifically, my question is about a good approach when porting Orange to Peach, and avoiding going through several platform-specific implementations."} {"_id": "170925", "title": "Does Liskov Substitution Principle also apply to classes implementing an interface?", "text": "LSP states that classes should be substitutable for their base classes, meaning that derived and base classes should be semantically equivalent. But does LSP also apply to classes implementing an interface? In other words, if an _interface method_ implemented by a class is semantically different from what the user expects it to be, would this be considered a violation of LSP?"} {"_id": "249513", "title": "What is the "1620's multiplication operation"?", "text": "I was stumbling through Wikipedia when I came across the entry for FLOPS, specifically the table in this section. The first entry is for a computer from 1961, and the comment on the right reads > The 1620's multiplication operation takes 17.7 ms.[46] What is this operation? I assume it means it can do multiplication in 17.7 ms?"} {"_id": "164404", "title": "Should Developers Perform All Tasks or Should They Specialize?", "text": "Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions: 1. UI 2. Framework code 3. Business/application logic 4. Database I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc...). 
My current environment promotes agile development (specifically scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better. **Devs Do It All** **Pros** 1\. Developers may be more well-rounded 2\. Developers know more of the system **Cons** 1\. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area 2\. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none) **Devs Specialize** **Pros** 1\. Developers can create policies and procedures for their area of expertise and more easily enforce them 2\. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be 3\. Other developers don't cross boundaries and degrade another area **Cons** 1\. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.) It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacky UI code, etc... Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good of a job as an expert in that area."} {"_id": "204478", "title": "How do you get into the habit of using a repository (e.g. GitHub)?", "text": "Are there some best practices on the repository front, or some common newcomer traps that I should avoid? I have recently been reading about the benefits of repositories even for single-developer projects and, in addition, I am likely to start working with multiple other engineers in a short while. But how do I go from never having used a repository to setting one up - for multiple people, even! **Background:** I am an electronics engineer before anything else, and while application programming has been part of that, it's never been something I've done a lot of - it has mainly been snippets of C for embedded systems where I was the only person working on it."} {"_id": "204475", "title": "Trying to understand the 2N lnN compares for quicksort", "text": "I was going through the analysis of quicksort in Sedgewick's Algorithms book. He creates the following recurrence relation for the number of compares in quicksort while sorting an array of N distinct items: $C_N = N + 1 + \sum_{k=1}^{N} \frac{1}{N}\,(C_{k-1} + C_{N-k})$, with $C_0 = C_1 = 0$. I am having a tough time understanding this... I know each element has probability 1/N of becoming the pivot, and that if k becomes the pivot, then the left sub-array will have k-1 elements and the right sub-array will have N-k elements. 1. How does the cost of partitioning become N+1? Does it take N+1 compares to do the partitioning? 2. Sedgewick says that if, for each value of k, you add up the probability that the partitioning element is k times the cost for the two sub-arrays, you get the above equation. * Can someone explain this so that those with less math knowledge (me) can understand? * Specifically, how do you get the second term in the equation? 
* What exactly does that term stand for?"} {"_id": "204476", "title": "How to design a composite pattern in Python?", "text": "# The concept I'm programming an interface over pygame as a personal project, to make the creation of games easier for me. So far I have managed to design an architecture that behaves like this: * Objects are displayable components that can appear and move on the screen * Objects can have children objects * When an object displays itself, it asks all its children to display themselves on the parent's surface. * Objects have three important elements: a callback system, a graphics system and a physics system to respectively act, display and move. Then, when I want to create a game "scene", I create a "root" object that contains other objects like the player, the mouse, the ground, monsters... Then I just have to ask the root to display itself, and every object appears recursively. I designed this without knowing about the composite pattern at first, only the basics of OOP. My main issue was to make the substitutability property of objects that comes from inheritance work well with the recursive composition I made. I mean that I have an "abstract" class called "Object" (I put abstract in quotes because Python doesn't really have such a concept) that is inherited by classes like "Image" (to be able to display) or "MovingObject" (to be able to move). Here, inheritance is meant to extend my objects' abilities. But my composite pattern requires that "groups of objects must be considered the same as single objects". So when I recursively call a method on an object, it calls that method on every child of the object, regardless of the fact that some of them may not have that method. # Example For instance, let's use this root element: * root (Image) * player (MovingObject) * cloud (MovingObject) * background (Image) * sun (Image) Now let's suppose we want to call the method `move()` on the root element, to make every child move: First, we cannot, because root is an Image instance, so it doesn't know the `move()` method. But even if it did, the children "background" and "sun" would not know it. So I decided to put an empty method `move()` inside my "abstract" Object class, so every object knows it, even if it doesn't do anything. The issue is that my Object class now contains empty methods that it neither understands nor needs, only to permit the recursive behavior. # Possible solution Then I heard about all the "inheritance vs composition" fuss and one of the solutions that came to my mind was to stop using inheritance for Object abilities and use composition instead. That means I would create, for example, a "Body" class, an "Image" class and a "Callback" class that would represent different actions, and then plug these into an Object instance to "equip" it and give it more power. But then I thought that this would barely change anything, because I would still have to call `move()`, and then the object would check if it has the Body plug-in, and use it. But it still requires the `move()` method to be present inside the Object class. # Questions So I'm turning to you guys to give me advice about my pattern: * Did I understand correctly how the composite pattern works? * Is my approach correct? * Will the use of "plug-in" classes help me? * Is the recursive behavior a good idea? * Are there other patterns that fit my needs better? 
I hope you can give me some hints!"} {"_id": "212436", "title": "Making a business case for a future proof Silverlight application", "text": "I have recently started working for a pretty big bank writing internal apps. I am being subcontracted out by the company I work for (which I have also recently started working for). This bank **has recently** upgraded to Windows XP; they previously used Windows 2000. I have been tasked with re-writing a report viewing application, which I am currently doing in MVC4 (Windows XP == Visual Studio 2010). The original app was written in .net 2 with webforms, and is a bit of a mess. The whole application seems to rely on state, where MVC is all about embracing the stateless nature of the web, as in the data you input to form A affects the options you see in form B, the options you select in form B affect form C, etc. (I love MVC and think that it is an awesome thing in the right context). The whole maintaining-models-between-states thing is starting to get pretty messy, and I can't help thinking that I am either doing it wrong, or not using the right tool for the job. My thoughts are that WPF would be perfect for this, but the computers are controlled by an external agency and are pretty locked down. We have slightly more control over the servers hosting the applications. The obvious next option is Silverlight (I think, maybe I'm horribly wrong!) A couple of questions. First: Is ASP.NET MVC actually the best option here, or am I doing it wrong? Second: What argument would you present to management if you agreed with me and thought a Silverlight-based application was the way to go? Using non-MS-based technologies is a no-go. MVC4 is controversial."} {"_id": "71292", "title": "Can I turn off Visual Studio white space at bottom of code files?", "text": "In any open file within Visual Studio, there is the ability to scroll all of the code out of sight. In other words, there are heaps of white space below the code lines that one can scroll to. This to me is of no use, and when I pull my mouse wheel down with aggression, my code disappears off the screen. Is there a way to turn this annoyance off? Also, can someone explain to me any logical reason as to why it's even there? ps: I tagged this question as visual-studio since I'm pretty sure it's been in every version of VS that I've ever used... though I'm currently using visual-studio-2010 ![This space is completely useless](http://i.stack.imgur.com/HKsUI.png) ![Now it's even worse because I scrolled to the bottom \(note the line #46\)](http://i.stack.imgur.com/QPfVf.png) ## EDIT Jon Galloway pointed me to another thread stating that it was added by popular request. It still makes no sense why they wouldn't make it a switchable on/off feature."} {"_id": "236938", "title": "Is database version control practical?", "text": "The topic of database version control seems to come up at my work and in social discussion more and more often. But the truth is, I haven't met anyone actually DOING it. The only thing Googling the topic has ever yielded is expensive products/services. Is this really all there is? Does an open source (or at least a _standard_, and not just "version control from company X") solution exist for ANY type of database (transactional or otherwise)? 
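To make the idea concrete, the sort of thing I keep picturing is nothing more than numbered SQL migration files kept in ordinary source control, applied by a tiny script that remembers which ones have already run. A minimal sketch (the table name, the file layout, and the use of SQLite are just my own invented example, not an existing tool):

```python
import os
import sqlite3  # stand-in engine; the same idea applies to any RDBMS

def migrate(db_path, migrations_dir):
    conn = sqlite3.connect(db_path)
    # Keep a record of which numbered scripts have already been applied.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    # Apply pending scripts in order, e.g. 001_create_users.sql, 002_add_index.sql
    for name in sorted(os.listdir(migrations_dir)):
        if name.endswith(".sql") and name not in applied:
            with open(os.path.join(migrations_dir, name)) as f:
                conn.executescript(f.read())
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
            conn.commit()
    conn.close()
```

Whether a roll-your-own approach like that is actually practical for a real transactional database is exactly what I can't judge.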
I'm in love with the idea, but I don't know where to start looking."} {"_id": "236931", "title": "What is (or where can i find) the algorithm to decode FLAC to PCM?", "text": "I'm trying to program a very basic FLAC player using 100% C#, completely from scratch. My understanding of this type of thing is very limited, so I'm using this project as a way to learn about compression and decompression. I'm a computer science and math major currently in university, so I'm not averse to learning whatever math may be involved. I've looked up the format specifications on xiph.org and I understand how the headers and all are structured, but I can't seem to find a concise explanation of how the audio compression works. How do I go about converting the frames to PCM? Sadly, my knowledge of C/C++ is extremely limited, as I only have experience with Java and C#. As such, trying to navigate the C++ code already out there is not a great starting point. I looked at FlacBox and learned quite a bit from it, but it's barely commented at all and I am lost trying to figure out how the conversion from FLAC to PCM works. So what should I read as far as math goes so I can get started with this? Where can I find a basic algorithm for converting the audio frames to WAV? Thanks."} {"_id": "233608", "title": "How is 'bolt-on' the same or different from add-on, extension, or module?", "text": "I hear the term 'bolt-on' used in many contexts, specifically in my organization where PeopleSoft is used extensively. I don't know exactly how it differs from an 'extension' of an existing product, a new module within an application, or an 'add-on' like you would purchase for a product you own. Is there a more formal definition of 'bolt-on' that I'm not familiar with?"} {"_id": "236937", "title": "How to design this better?", "text": "I'm developing a system using .NET which will be used by multiple users. Because of that, I need to identify in the database which data belongs to each user. Explaining with an example: imagine I have the entity `Product`. Each user has his own products, and so in the products table of the database we must be able to distinguish each user's products. That said, my solution was to add to each table an extra column for the user ID. In my code this meant adding to my repositories a parameter to receive the ID of the user, so that the repository would be able to locate the correct data. The concrete implementation of the repository that deals with relational databases just checks those columns. The problem is that to make those columns available to my repository, and to my ORM (in this case EF), I needed to add a `UserID` property to each entity. If I think about it for a while, this doesn't seem like a good solution. I'm coupling domain entities with details of how to persist data. More than that, I'm coupling each entity with the way I manage access to the data, and this seems like a bad approach. So, concerning this, is there a better way to plan this? A way to make sure we can relate data to users and at the same time avoid those properties on the domain entities?"} {"_id": "236936", "title": "what is the best solution for this problem", "text": "I need help. We have a website (a PHP site) and it has different software (like a shopping cart, chat software, ticketing software, a blog, ...)."} {"_id": "237042", "title": "Using Django to Create Child Sites", "text": "I am creating a series of small sites using the Django framework. 
The theory goes: a user comes to a master site, signs up, and then gets his own child site. Example: * navigate to example.com * user creates an account "mysite" * user then gets his own site: mysite.example.com, and he can configure this all he wants My question: * would it be better to have a "gold" version of the site that gets copied for each new site? For instance: cp ~/goldsite ~/mysite, and change the database pointers appropriately ** the downside is that if I ever have to do maintenance on a file, I would have to change all subsites. ...or * have one host and configure the database to support multiple sites. The DB might get messy. Any feedback would be great."} {"_id": "246238", "title": "Is it ok to start with templates in MVC development?", "text": "I'm new to web development and I've started working on a project in my company that uses Django. I find it more flexible to start my development straight from the templates. I think it will be easier if I visualize things first. So my question: Is it okay to start with a template rather than starting with models first? Will I stumble into any sort of confusion if I go by this method of development?"} {"_id": "232036", "title": "Testing a very specific function in a large, complex application", "text": "I'm new to testing but wholeheartedly realize how important it is. The main issue is that my company has no top-down support for testing at all. That is, we don't have any unit testing and just a bit of human testing (I hope I'm using the right terminology). I have been trying to get testing started here and have to do it subtly, because I'm often asked "when will feature X be done" and therefore can't start the weeks-long project of writing tests for all of our legacy code, which would need to get rewritten just to provide seams for testing (if I have the concept of 'seams' right). I can, however, sneak a bit of time to test one function. Or at least I think I can. I'm writing code that basically processes quite a bit of data and then renders it using D3. I would like to write a test for one of the most important functions, `binData`. It takes one optional argument (the date to start binning from) and accesses a few properties like `this.rows` and `this.timestamps`. How would I go about doing that? I think I'll need to create a mock object to hold `this.data` and `this.timestamps` (and the other accessed properties). In fact, I imagine several different mock objects which would allow testing of several different datasets. But other than that I'm kinda stuck. In terms of frameworks, I've heard of Mocha and Jasmine and Vows but don't know which would be best. I also don't know how to tie into the massive (and complex) class and module hierarchy and meet each object's dependencies. That is, the function `binData` is in a file called, say, `dataBinner.js`, which requires all sorts of other functions in other files but doesn't explicitly include anything on its own. Instead, it itself is included in a "manager" file which includes everything that dataBinner.js and the other modules will need. I suppose I'll need to find a way to instantiate everything necessary to test `binData`. I'm not afraid to do a bit (or a lot) of reading to better understand things. Any help regarding this specific task of writing a test for `binData`, or the larger question of how to slowly implement testing in a non-test-friendly place, would be much appreciated. CLARIFICATION I'm 100% on board, testing-wise. 
I know it's important and I know that my company is wrong not to have testing. You'd expect emails that say "fixed regression caused by fixing bug 1413" would trigger something. Or that things that shouldn't take that long take waaaaaay too long ('cause it's spaghetti legacy code). What I'm asking is, _given that_, how do I test my one function which relies on the legacy spaghetti code? That is, how does someone inject a bit of testing into a place without it? I'm guessing that I'll have to write my own testing framework, because I'll have to create mocks for the spaghetti legacy code so that I can instantiate my object and call my function outside of the scope of the application."} {"_id": "237041", "title": "To which level Haskell's HDBC is lazy?", "text": "The HDBC documentation states: > fetchAllRows :: Statement -> IO [[SqlValue]]Source > > Lazily fetch all rows from an executed Statement. > > You can think of this as hGetContents applied to a database result set. > > The result of this is a lazy list, and each new row will be read, lazily, > from the database as the list is processed. > > When you have exhausted the list, the Statement will be finished. > > Please note that the careless use of this function can lead to some > unpleasant behavior. In particular, if you have not consumed the entire > list, then attempt to finish or re-execute the statement, and then attempt > to consume more elements from the list, the result will almost certainly not > be what you want. > > But then, similar caveats apply with hGetContents. > > Bottom line: this is a very convenient abstraction; use it wisely. > > Use fetchAllRows' if you need something that is strict, without all these > caveats. Then, I wonder, to which level does the laziness extend? Say, I can have conn <- connectSqlite3 databaseFilePath rows <- quickQuery conn ("SELECT * FROM foo") [] mapM_ bar $ take n rows disconnect conn Will it actually only fetch `n` rows? Like, from the database's point of view, will it be equivalent to `SELECT * FROM foo LIMIT (n)`? Because fetching all rows at the level of the database driver and then `take`ing `n` of them seems silly and kind of defeats the purpose. If it's lazy up to the database itself, how is it implemented? Is it using cursors? I know there are several drivers for HDBC alone. I'm asking only about the principle of implementation."} {"_id": "175258", "title": "Using 'new' in a projection?", "text": "I wish to project a collection from one type (`Something`) to another type (`SomethingElse`). Yes, this is a very open-ended question, but which of the two options below do you prefer? **Creating a new instance using `new`:** var result = query.Select(something => new SomethingElse(something)); **Using a factory:** var result = query.Select(something => SomethingElse.FromSomething(something)); When I think of a projection, I generally think of it as a _conversion_. Using `new` gives me this idea that I'm creating new objects during a conversion, which doesn't feel right. Semantically, `SomethingElse.FromSomething()` most definitely fits better. Although, the second option does require additional code to set up a factory, which could become unnecessarily cumbersome."} {"_id": "232032", "title": "Decoupling Server and Client using REST API", "text": "I was thinking about how I can decouple a web application completely into a server-side and a client-side component. I want to decouple the app to the extent that I can host both components on separate servers. So, for example, I would have: 1. 
Server 1 (API Server): A server-side component running on `Django` on something like Heroku or EC2. 2. Server 2 (Static Server): A client-side component running on `AngularJS` on a static server like S3 or CloudFront. The communication between these components will take place using a JSON REST API. **Questions:** 1. Is this approach common? Advisable? > Does a company like Facebook or Twitter _mostly_ utilise the same API for > the webapp as it does for its mobile apps or its open API? 2. Is it a good idea to use OAuth2 for the login process? > So the user is redirected to a login page on **Server 1** (this is the only > page on Server 1) and then redirected back to **Server 2** with a token if > authentication succeeds. Is this the best approach? It seems like I am kinda > "breaking the flow" if I do this. Is this normal? The motivation for this is for me to be able to use the same API for my web, iOS and Android clients. Thanks!"} {"_id": "237045", "title": "Managing client-side and server-side validations in one place", "text": "I'm 100% on board with the case that one should definitely use both client-side and server-side data validations. However, in the frameworks and environments I've worked in, the approaches I've seen have never been DRY. Most of the time there's no plan or pattern - validations are written in the model spec, and validations are written in the form on the view. (Note: Most of my first-hand experience is with Rails, Sinatra, and PHP w/ jQuery) Mulling it over, it seems like it would not be difficult to create a generator which, given a set of validations (e.g. model name, field(s), condition), could produce both the necessary client-side and server-side material. Alternately, such a tool could take the server-side validations (such as the `validates` code in an ActiveRecord model) and generate client-side validations (such as jQuery plugins), which would then be applied to the form. Obviously, the above is just a "hey I had this idea" musing, and not a formal proposal. This sort of thing is surely more difficult than it seemed when the idea hit me. That brings me to the question: **How would you approach designing a "write once, run on server and client" technique for data validation?** Related subtopics: Do tools like that exist for any particular frameworks or client-server technologies? What are the major gotchas or challenges with trying to maintain only one set of validations?"} {"_id": "10581", "title": "Should my multi-server RDBMS or my Application handle database Referential Integrity?", "text": "Should items like Foreign Keys, Constraints, Default Values, and so on be handled by the database management system (in this case, MS SQL 2005) or the application? I have heard opinions from both sides and I'm honestly not sure which way to go. There is a chance we will be spanning multiple servers/databases and I don't think Foreign Keys can be used across linked servers. In addition to that, there are some circular references in the database design which prevent me from using `ON UPDATE CASCADE` on everything. The database is MS SQL 2005 (possibly 2008) and all interactions with it should go through the application."} {"_id": "175250", "title": "How to model a system to help my team grasp the project's bigger picture?", "text": "From a software engineering point of view, I should model the system to make it easier for other people to understand what they are working on. To do so, I have used the Dia drawing program. 
But, after having used Dia for some time, I find that it falls short in helping me correctly and efficiently model my project. How do you usually tackle this problem (modelling a project in the large), what tools would you recommend for the job, and why?"} {"_id": "175252", "title": "Why some consider static analysis a testing and some do not?", "text": "While preparing for the ISTQB certification, I found that they actually call static analysis "static testing", while some engineering books distinguish between static analysis and testing, the latter being a dynamic activity. I tend to think that static analysis is not testing in the true sense, as it does not test; it checks/verifies. But I would surely love to hear the opinion of the true experts here. Thank you"} {"_id": "175253", "title": "Why does an unsigned int compared with a signed character turn out with an unexpected result?", "text": "Why does the following code output **y>x** when clearly 1>-1? unsigned x=1; signed char y=-1; if(x>y){ printf("x>y"); } else { printf("y>x"); } Please explain this result."} {"_id": "145701", "title": "Is there a semi-scientific term for this filtering behavior?", "text": "I'm looking for the term that applies to a certain kind of filtering behavior. It's often used in webshop-like interfaces, where large amounts of data are filtered based on a selection of filter criteria. The most distinguishing feature is that _it's impossible to pick a filter criterion that leads to no results_, as the filters that would do so are hidden with real-time updates. To see an example, look at the boot finder section on the blue-tomato.com webshop. Is there a term for this kind of filtering?"} {"_id": "146460", "title": "How to verify the client's view is consistent with the remote model?", "text": "I'm designing a client-server system via web browser and I have this problem: I send the data to the client via JSON, then the JavaScript view shows the stuff. Then the user takes actions and commands are sent back to the server. Usual stuff; the problem is, what if the view's data is corrupted or old? One solution I thought of is making a hash (via SHA1 or some other algorithm) of the full objects in the client's model, but JSON data is not ordered, and this also adds delays to the system. How does Google Docs manage this kind of thing? Also, sending ALL THE DATA every time the model is updated doesn't make sense, but how do I keep consistency by only sending changes? I think the whole problem is just one."} {"_id": "146463", "title": "Paid open-source app", "text": "The question that bothers me is whether it is possible/feasible/reasonable to expect an open-source app to sell well on the mobile market. Should I believe that my users will use my app rather than build the checked-out version, and, more important, how can I deal with the competition if I make my app available under an OSS license? So far the only link on the subject I've found is http://blog.zachwaugh.com/post/17554643060/selling-open-source-apps however it deals with a Mac OS X app. I should mention my question does not focus on iOS, Android or another OS; it's about mobile applications in general. EDIT: The very reasonable question of whether my users are programmers has been asked. I do not expect most of my users to be even remotely familiar with programming."} {"_id": "244980", "title": "Minima of a convex list using binary search", "text": "A list is strictly convex if its elements first decrease then increase. 
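For example, [7, 3, 1, 4, 9] is strictly convex and its minimum is 1. Here is a sketch of the binary-search idea I have in mind; the function name and the edge-case handling are my own guesses, and I'm not sure whether `len` counts as a built-in function over lists for this purpose:

```python
def convex_min(lst):
    # Invariant: the minimum always lies within lst[lo..hi].
    lo, hi = 0, len(lst) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if lst[mid] > lst[mid + 1]:
            # Still on the decreasing slope: the minimum is to the right of mid.
            lo = mid + 1
        else:
            # On the increasing slope (or at the minimum): keep the left half.
            hi = mid
    return lst[lo]
```

Each iteration halves the search range, so it does about log2(n) comparisons.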
How can I write a function in Python that accepts a convex list and returns its minima in time complexity O(log(n)), n being the size of the list? (NO USE OF ANY BUILT-IN FUNCTIONS OVER LISTS)"} {"_id": "254799", "title": ""Ever change the value of 4?" - how did this come into Hayes-Thomas quiz?", "text": "In 1989 Felix Lee, John Hayes and Angela Thomas wrote a Hacker's test taking the form of a quiz with many insider jokes, such as “Do you eat slime-molds?” I am considering the following series: 0015 Ever change the value of 4? 0016 ... Unintentionally? 0017 ... In a language other than Fortran? Is there a particular anecdote making the number “4” special in the series? Did some Fortran implementation allow one to modify the value of constants? Was this possible in other languages in common use at that time?"} {"_id": "123956", "title": "Why should I use reflection?", "text": "I am new to Java; through my studies, I read that reflection is used to invoke classes and methods, and to know which methods are implemented or not. When should I use reflection, and what is the difference between using reflection and instantiating objects and calling methods the traditional way?"} {"_id": "36168", "title": "Telecommuting from Australia - tax arrangements", "text": "I live in Australia, and I am currently looking at some of the remote job offerings in countries like the US/UK. However I have very little understanding of what needs to be done to report that kind of income - i.e. do I need to get an ABN, pay payroll tax myself, etc. If you have any experience in this area, I would like you to share it."} {"_id": "33533", "title": "Release an upgraded iOS app with a different revenue model", "text": "I am starting a new iOS project and initially plan to release a simple free version to gather feedback. I don't intend to monetize or market this initial version. However, I believe "Version 2" of this app will be good enough to pay for. I would prefer to release Version 2 as an upgrade from Version 1 rather than release it as a separate app. This way I can reserve a name for the app. It will also be easier to keep everything in a single repository. Are there any downsides to this approach? It's my understanding that I can change the price of an app at any point in time, so it shouldn't be an issue transitioning to a paid app, should it?"} {"_id": "33532", "title": "Naming conventions for variables", "text": "When programming, what naming conventions do you use for your variables? I don't mean when the name of the variable should be obvious; i.e. sum, total, first, last. But when you name variables that don't really fit into a category/obvious structure, what sort of names do you use? Is it myVar1 or test1 or variable1, etc.?"} {"_id": "108858", "title": "How exactly do Patchers work?", "text": "I did a search for patchers and didn't find anything. But seriously, patchers: How do they work? I've done some googling, and I've gotten mixed results. I mean, it seems they probably have some way of going in and changing the files (maybe comparing what is different between the files), but it seems like there is probably more going on than that. Or is it really as simple as running a diff 
vs. another file... and then applying the changes?"} {"_id": "18719", "title": "XQuery & XML databases", "text": "I decided to learn XML from a professional book instead of reading tutorials online. After some research I decided to read the "Beginning XML, 4th Ed" book; it was a very interesting book and I recommend it to anyone who wants to learn XML. My question is: is reading & learning these chapters: 1. XQuery, the XML Query Language 2. XML and Databases really required for someone who is doing mainly Hibernate/JPA for object/relational persistence? Or, in other words, is learning XQuery & XML databases really required by current industry-standard skills? Thanks in advance"} {"_id": "251328", "title": "Java and rest OOP languages - when to use super or this keywords", "text": "I have been programming Java for a year or more, and I have always used the `this` and `super` keywords. And yesterday my mate read one of my source files, and told me not to overuse them unless I am overriding the method. class A extends JPanel { public A() { super.setSize(400, 400); // instead of not using the super keyword } } But what he meant on how to use it is this: class A extends JPanel { public A() { super.setSize(4,4); } public void setSize(int a, int b) {} } So the language will know that your constructor calls the parent method and not the local one. Should I use the keywords only in this situation, or use the keywords everywhere for better readability? I know that the `this` keyword returns the whole local instance, including the parent and all of its parents and interfaces. And the `super` keyword returns the instance of the parent class and all of its instances (parents and interfaces). But my question is, should you always use these keywords?"} {"_id": "71746", "title": "Which websites to use for presenting free open source applications?", "text": "After releasing a new free open source application, I would like to present it to as many people as possible. What are web sites (directories) which one should use to present free open source applications? I am looking for a list of sites like Freshmeat, Sourceforge, ..."} {"_id": "223236", "title": "Android: Considering an ORM tool", "text": "I am developing an Android app in which I have to add chat functionality. Also, I have to save the chat history. I will accomplish this task using SQLite. But I am not sure whether I should consider using an ORM like GreenDAO or ORMLite, or use the traditional way of persisting data in SQLite. Actually, I am not clear on when to go for an ORM and how I would benefit from it."} {"_id": "41577", "title": "Best Practices and Etiquette for Setting up Email Notifications", "text": "If you were going to set up Email Alerts for the customers of your website to subscribe to, what rules of etiquette ought to be followed? I can think of a few off the top of my head: * Users can Opt-Out * Text Only (Or tasteful Remote Images) * Not sent out more than once a week * Clients have fine-grained control over what they receive emails about (Only receive what they are interested in) What other points should I consider? From a programming standpoint, what is the best method for setting up and running email notifications? * Should I use an ASP.NET Service? A Windows Service? What are the pitfalls of either? * How should I log emails that are sent? 
I don't care if they're received, but I do need to be able to prove that I did or did not send an email."} {"_id": "63103", "title": "Is HDTV good as a programmer's monitor?", "text": "I'm wondering if someone has any experience with programming on an HDTV. I'm trying to decide between dual PC monitors vs. one big HDTV, like 32" (or maybe even dual HDTVs). The nice thing about HDTVs is that they cost a lot less than an equivalently sized PC monitor. I will specifically be using this for programming and graphics (Photoshop, etc). I'm just wondering if an HDTV is bad if I use it as a workstation, b/c maybe it's worse for the eyes or not really good for programming (b/c maybe the fonts won't be crisp enough), etc. Any thoughts? Thanks"} {"_id": "17830", "title": "Is Extreme Programming (XP) the best way of learning from experts?", "text": "I have been involved in many development models, and have found XP to be the best for a new programmer from the aspect of learning, as the collaboration between the team members is very high and the opportunities to share knowledge are great. What are the expert opinions on this?"} {"_id": "122394", "title": "Who can change the View in MVC?", "text": "I'm working on a thick-client graph display and manipulation application. I'm trying to apply the MVC pattern to our 3D visualization component. Here is what I have for the Model, View, and Controller: Model - The graph and its metadata. This includes vertices, edges, and the attributes of each. It does not contain position information, icons, colors, or anything display related. View - This would commonly be called a scene graph. It includes the 3D display information, texture information, color information, and anything else that is related specifically to the visualization of the model. Controller - The controller takes the view and displays it in a Window using OpenGL (but it could potentially be any 3D graphics package). The application has various "layouts" that change the position of the vertices in the display. For instance, one layout may arrange the vertices in a circle. Is it common for these layouts to access and change the view directly? Should they go through the Controller to access the View? If they go through the Controller, should they just ask for direct access to the View or should each change go through the Controller? I realize this is a bit different from the standard MVC example where there are a finite number of Views. In this case, the View can change in an infinite number of ways. Perhaps I'm shattering some basic principle of MVC here. Thanks in advance!"} {"_id": "129424", "title": "Object oriented programming concepts", "text": "> Specifically, programming without inheritance is distinctly not object-oriented; > we call it programming with abstract data types. I found this great line in Grady Booch's "Object-Oriented Analysis and Design With Applications" book. So in order for a program to be an OO one, are inheritance, abstraction, encapsulation, and polymorphism(?) must-have things? Could anybody please explain this to me?"} {"_id": "87112", "title": "Which EXPLAIN SELECT is better -- the one in PostgreSQL or the one in MySQL?", "text": "Or are the two basically the same when it comes to figuring out how to build the right indexes?"} {"_id": "202734", "title": "Putting semicolons after while and if statements in C++", "text": "How come in C++ when you put: while (expression); the while loop doesn't run, no matter whether the expression is true or not. 
However if you put: if (expression); the statement runs, no matter whether the expression is true or not. It seems like they should both behave the same way."} {"_id": "218529", "title": "How does this normalized example work?", "text": "I had the following DB schema: Customer Car Rental ------- ---- ------ Name Name Car_ID ID ID Customer_ID Date This is said to be non-normalized, as the date can repeat (multiple customers can rent a car on the same day). So the teacher said it should be like: Customer Car Date ------ --- ---- and linked with foreign keys. Well, I do not get that - how can I then simply enter that "John rented a BMW on 2/3/2013"?"} {"_id": "133536", "title": "How do I efficiently store all OpenStreetMap data in an indexed way?", "text": "I have a PBF file that contains the following information about a country: * Nodes, each with their own longitude, latitude and properties; used to store points in a 2D space. * Ways, each with their properties; they are connected through nodes and used to store roads, boundaries. While this file is only 80 MB in its compressed form, it's 592 MB when uncompressed and stored in a DB. Yeah, and that's only for one country, Belgium. Imagine storing France, Germany and Italy alongside. * * * Let's take a single highway for example, from Antwerp through Brussels to Charleroi. This would consist of a ton of nodes to store all the turns in the highway, but do I need all these turns? I doubt it. Let me tell you what I want to be able to do: * I want to view the map at different zooming levels; major cities, minor cities and street level at least. * I want to be able to get routing information between two points. * I want to be able to compute the nearest road to my GPS location. * Search for a location, by means of an index in the database. But most importantly, **the database shouldn't be too big as it will be stored on a mobile device**. * * * So, I thought about a combination of two techniques: * Image tiles for viewing purposes, to work around storing/processing all the individual nodes. * Storing the endpoints of roads for routing information, alongside information about the road. The problem with this is that I can't compute the nearest road to my GPS location with only this information; imagine a bend in a highway: I can't determine that I'm on the highway with just the two endpoints. I was thinking about storing intermediate nodes between endpoints, but that would be very costly to generate, I think. Also, determining the endpoints of roads (that are like a T-split) is most likely not even that easy, as I need to figure out whether I need to store the midpoint at the top of that T-split or not. So, viewing is easy using image tiles; but I can't find an easy way to do routing and GPS location finding. What kind of storage technique should I be looking into? I find it a bit inconvenient that an `80 MB` file turns into a database of `592 MB`; I want to reduce that size as much as possible... What can I do to do this as efficiently as possible, in terms of disk and CPU? I'm targeting a WP7..."} {"_id": "83602", "title": "Splitting up a single project into libraries", "text": "I am working on a project/application that I feel is not very well organized, and parts of it intertwine in different ways. Everything works, but I can see things are not very modular. 
_Is it reasonable to split up an application into various libraries, even if they might not be reused by another application?_ Just the mere thought of splitting it up into libraries reveals many problems with the current system. I feel it will encourage better design, and future reuse (there is talk of a new project that seems like it could benefit from at least some of these libraries). My concerns are: * How will it affect performance? * How far should I go in splitting things up? * If three libraries all depend on each other, is there a point in making them libraries? (Perhaps it suggests a re-architecture of the modules.) My question seems to go against the wisdom of this answer, in that _Dynamic libraries should never be created for a single app_. Then the question becomes: how to ensure modularity in a large application? Thanks! **EDIT:** It seems I have used the term "shared library" too much, so I removed it to imply any kind of library (either static or dynamic). The essence of the question is whether to split stuff up into any type of libraries."} {"_id": "137859", "title": "Oracle Enterprise JavaBeans (EJB) Developer Certification", "text": "I would like to gain the "Oracle Enterprise JavaBeans (EJB) Developer" certification. According to this page I have to take a number of classes, each of which costs up to a couple of thousand GB£. Is this really the only way to obtain the certification? Can I not just buy a certification guide book from Amazon and just sit the requisite tests? At the moment I have no Java EE experience and I'm finding it impossible to get interviews for the jobs I'm interested in. I'm hoping this will at least help me get my foot through the door."} {"_id": "218520", "title": "SQL-99 specification, is it specified to first order the result and limit it afterwards?", "text": "If I wrote this in MySQL SELECT * FROM foo ORDER BY bar LIMIT 1 it would first order the result and limit it afterwards. I always get the lowest bar. I wonder if this is specified like this in the SQL standards, and how to check that? All I found are some BNFs which do not explain how something should work, but only what syntax is valid."} {"_id": "133538", "title": "Need to make my code more readable to the other programmers in my team", "text": "I am working on a project in **Delphi 7** and I am creating an installer for the application; there are three main parts: 1. **PostgreSQL** installation/uninstallation 2. **myapplication** (setup of myapplication is created using nsi) installation/uninstallation. 3. **Creating tables** in Postgres through scripts (batch files). Everything runs fine and smoothly, but in case something fails I have created a logger which will log every step of the process, like this: LogBook.Log('[POSTGRESQL INSTALLATION] : [ACTION]:Postgres installation started'); The function `LogBook.Log()` will write the contents to a file. 
This is working nicely, but the problem is that it has messed up the code: it has become difficult to **read**, as one can only see the `LogBook.Log()` function call everywhere in the code. An example: if Not FileExists(sOSdrive+'\Mapannotation.txt') then begin if CopyFile(PChar(sTxtpath+'Mapannotation.txt'), PChar(sOSdrive+'\Mapannotation.txt'), False) then LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ sucessful') else LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ Failed'); end; if Not FileExists(sOSdrive+'\Mappoint.txt') then begin if CopyFile(PChar(sTxtpath+'Mappoint.txt'), PChar('c:\Mappoint.txt'), False) then LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mappoint.txt to '+sOSdrive+'\ sucessful') else LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mappoint.txt to '+sOSdrive+'\ Failed'); end; As you can see, there are lots of `LogBook.Log()` calls; before, it was: if Not FileExists(sOSdrive+'\Mapannotation.txt') then CopyFile(PChar(sTxtpath+'Mapannotation.txt'), PChar(sOSdrive+'\Mapannotation.txt'), False) if Not FileExists(sOSdrive+'\Mappoint.txt') then CopyFile(PChar(sTxtpath+'Mappoint.txt'), PChar('c:\Mappoint.txt'), False) This is the case in my whole code now. It's difficult to read. Can anyone suggest a nice way to unclutter the calls to the log? Like: 1. Indenting the `LogBook.Log()` call like this: if Not FileExists(sOSdrive+'\Mapannotation.txt') then begin if CopyFile(PChar(sTxtpath+'Mapannotation.txt'), PChar(sOSdrive+'\Mapannotation.txt'), False) then {Far away--->>} LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ sucessful') else {Far away--->>} LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ Failed'); end; 2. A separate **unit** like `logger`. This unit will have all the log messages in a `case` statement like this: procedure LoggingMyMessage(loggMessage : integer); begin case loggMessage of 1 : LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ sucessful'); 2 : LogBook.Log(2,'[POSTGRESQL INSTALLATION] : [ACTION]:copying Mapannotation.txt to '+sOSdrive+'\ Failed'); 150 : LogBook.Log(2,'[something] : [ACTION]: something important'); end; end; so I can just call `LoggingMyMessage(1)` wherever required. Can anyone tell me which is the better and cleaner approach to logging?"} {"_id": "218524", "title": "Managing codebase for basic and pro edition of a project", "text": "I have a project which will have a basic and a professional edition. The professional edition will have all the features of the basic edition. I am using git to manage the project's codebase. I am considering forking the repo of the basic edition after I complete it. Then, I will start to code the professional edition on the forked repo. My problem is that if there is a bug in the basic edition in the future, I don't want to fix the bug twice, in both the basic and the professional edition. How do I handle this situation?"} {"_id": "80751", "title": "How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish", "text": "We're roughly midway through our transition from waterfall to agile using scrum; we've changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn't suit everyone. 
There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we've seen a few people that don't like change, but in the cases I'm concerned about, the individuals are good performers, genuinely open to change; they make an effort, they see how the rest of the team is changing and they want to fit in. It's not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don't find joy in work like they used to. I'm sure we can't be the only place that has bumped up against this. How have others approached this? If you're a developer that is motivated by personally owning a big chunk of work from end to end, and you've adjusted to a different way of working, what did it for you?"} {"_id": "85321", "title": "How to get into web coding?", "text": "I don't know how to get into this area, or more specifically, become a website maker. Here's what I know now: * HTML * some JavaScript * C# * and a little of everything else (PHP, CSS, XHTML, etc.) I've been on w3schools.com for a while reading and doing the tutorials, and I've become much more knowledgeable about websites since I started. But when I compare what I've learned from w3schools to a website template I downloaded, I notice things in this template that w3schools never even mentioned. What I'm wondering is what's the best way to go about becoming a web coder on your own. Would continuing with w3schools be good? Or should I just download templates, read through them, and figure out how the code works? One more question: I noticed these all use SQL, so how do I go about learning SQL? And sorry, I lied, one more question: what's the best step-by-step process when learning web code? Like, what order: HTML then CSS then PHP? Or HTML then XHTML then CSS? That kind of thing. thanks, Robert Edit: A website like this is my goal in this area: geforce.com Do you think making a website like this is difficult from an expert's view?"} {"_id": "72699", "title": "Should I invest time learning Coffeescript?", "text": "I am a freelancer and I earn my bread and butter by helping others write better JavaScript code. I have good experience with most of the JavaScript frameworks around. I am wondering if it is worth it for me to invest time in learning CoffeeScript. 
Who should learn it, and who need not?"} {"_id": "72698", "title": "Implications of crediting a book source in a code file available under an open license", "text": "I'm writing some open source software just for funsies, and it's licensed using MS-PL (i.e., do pretty much anything you want with it). It's a project for which I'm using many different sources (lots of personal reading/research going into it), and I'm crediting the original book/writer in the comment header(s) of the files as applicable. I'm doing it so that if anyone actually decides to read or use the code, they can refer to the original sources that inspired the work. Of note: * I own a copy of each source * All of the sources are able to be purchased by the general public (Amazon) * As far as I can tell, none of the books explicitly state that the code is licensed * None of the code is copied verbatim. I've translated it all from various languages (mostly C++) to C#, and when appropriate I've added exception handling etc. I'm just wondering if there are any implications of sourcing these books in a freely open source application/library."} {"_id": "40439", "title": "Best practices for retrofitting legacy code with automated tests", "text": "I'm about to take on the task of reimplementing an already defined interface (a set of C++ header files) in a relatively large and old code base. Before doing this, I would like to have as complete test coverage as possible, so I can detect reimplementation errors as early and easily as possible. The problem is that the already existing code base was not designed to be easily testable, with (very) large classes and functions, a high degree of coupling, functions with (many) side effects, etc. It would be nice to hear of any previous experience with similar tasks, and some good and concrete tips on how you went about retrofitting automated tests (unit, integration, regression, etc.) to your legacy code."} {"_id": "166530", "title": "Introduce unit testing when codebase is already available", "text": "> **Possible Duplicate:** > Best practices for retrofitting legacy code with automated tests I've been working on a project in Flex for three years now without unit testing. The simple reason for that is the fact that I just didn't realize the importance of unit testing when I was at the beginning of my studies at university. Now my attitude towards testing has changed completely, and therefore I want to introduce it to the existing project (about 20000 LOC). In order to do it, there are two approaches to choose from: 1) Discard the existing codebase and start from scratch with TDD 2) Write the tests and try to make them pass by changing the existing code Well, I would appreciate not having to write everything from scratch, but I think by doing this, the design would be much better. What would be your approach?"} {"_id": "149782", "title": "How to unit test large legacy systems?", "text": "> **Possible Duplicate:** > Best practices for retrofitting legacy code with automated tests When working in large legacy systems (large systems with no unit testing ever), I often come across people saying to use unit testing as a tool against possible bugs. I wonder if somebody has tried it on large systems with no unit testing at all. Of course it is easy to say unit testing helps, but in large systems it can be a hugely time-consuming process. It can take months if not years to fully unit test each part of the application.
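To make that concrete, the closest I have come to a tractable starting point is pinning down current behaviour with characterization tests before touching anything. A rough Python sketch (the function and its numbers are invented stand-ins, not code from a real system):

import unittest

# Hypothetical stand-in for a legacy routine we dare not change yet.
def legacy_price(quantity, customer_type):
    base = quantity * 9.99
    if customer_type == 'gold':
        base *= 0.9
    return round(base, 2)

class CharacterizationTest(unittest.TestCase):
    # The expected values were captured by running the CURRENT code, not
    # derived from a spec: the test pins existing behaviour, bugs and all,
    # so later refactoring can be checked against it.
    def test_pins_current_behaviour(self):
        self.assertEqual(legacy_price(1, 'regular'), 9.99)
        self.assertEqual(legacy_price(10, 'gold'), 89.91)

if __name__ == '__main__':
    unittest.main()

Even so, writing such tests for a whole system is a big investment.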
When you are asked to build functionality which is a modification of existing functionality, or to add new functionality to the application, how would you go about it? Of course there will be lots of instances where code in different classes will look similar and you would want to refactor those classes, which without proper unit testing can and most likely will open a can of worms. So how would you go about unit testing large applications? And what other measures would you use to reduce possible bugs in your coding (of course some bugs will probably remain anyway)?"} {"_id": "76215", "title": "Is Scrum based on 'daily reporting'?", "text": "Google translates '早请示晚汇报' as 'Consult early and late reporting' or 'Ask for instructions to report back later'. Bing translates it as 'Early and late reporting'. Some translated content: > The politics invaded all aspects of daily life during the **Cultural > Revolution**, as **早请示晚汇报**. Every day at a political activity or ceremony, > everyone, after getting up or reporting to work/study, had to \"_consult the > great leader Chairman Mao_\" about that day's work and study. At the end of > the day/before going to bed, everyone had to _confess_ to the \"Great Leader > Chairman Mao\" that day's work/learning. **Late reporting** is called \"> _confessing his/her sin_,\" because a day's work or study certainly would > contain errors, which delayed revolutionary work; hence, the person would > confess, \"_I am sorry, great leader_\", akin to \"**confessing his/her > sin**.\" However, as \"confessing one's sin\" has a serious religious > undertone to it, it was not considered appropriate and was renamed to \"> **late reporting**.\" Anywhere people gathered--schools, army, cadre > schools, community centers that provided three meals per day--everyone > involved had to appear for the **collective report**. Here is another excerpt that sounds beautiful even after machine translation: > Anyway, several times a day for several years removed from the > \"instructions\", \"reporting\" so that \"life\" of this short period of time is > finally free of political control winding all the time, everyone has a sense > of relief. Ref: Lin Zhao, a blog entry, a newspaper article I feel compelled to be certain whether this practice is really the source of the 'Scrum' process. Some of these similarities cannot simply be ruled out as coincidences. My question is, \"Is there really a relation? If so, should we read some mission statements before we begin work, and should we have stand-ups at the end of the day, before we leave?\""} {"_id": "76214", "title": "Webserver / DB / Application - Best way to setup the system for performance", "text": "I have a TurboGears app that I am bringing live that uses a PostgreSQL DB on the back end. From a performance standpoint, am I better off having the DB and the app on separate servers or on the same server? If on the same server, am I better off having the DB on a separate physical drive?"} {"_id": "76212", "title": "What is one correct architecture when using a DB with multiple clients?", "text": "We have a legacy system with data stored in a SQL database. Several clients connect to this database using a web service. The web service exposes a lot of \"commands\" to query the database and to \"do\" operations. So far, so good. However, new operations need to be added, and the web service cannot be extended.
We can add a new service that would implement the operations from the old one (somewhat easy) and add our own operations. At the same time we want to enhance the performance of the web service by having multiple requests bundled as one request (the responses would come out as a bundle too). One desirable outcome would be to isolate the application from the web service using programming interfaces and a somewhat generic messaging (or whatever) mechanism. We are contemplating several ways to effectively support multiple clients: an enterprise service bus, a request/response service layer, ORM with a client/server architecture... Has anyone been confronted with such issues, and what did you end up with?"} {"_id": "213521", "title": "What happens if I do not explicitly state \"This program comes with absolutely no warranty\"?", "text": "And let's say my application is a simple feed reader. Does my app really have to include the no-warranty notice?"} {"_id": "163993", "title": "Do I need to know servlets and JSP to learn spring or hibernate or any other java web frameworks?", "text": "I've been asking a lot of people where to start learning Java web development. I already know core Java (threading, generics, collections, and a little experience with JDBC), but I do not know JSPs and servlets. I did my fair share of development on several web-based applications using PHP for the server side and HTML, CSS, JavaScript, and HTML5 for the client side. Most people that I asked told me to jump right ahead to Hibernate, while some told me that I do not need to learn servlets and JSPs and that I should immediately study the Spring framework. Is this true? Do I not need to learn servlets and JSPs to learn Hibernate or Spring? All of their answers confused me, and now I am completely lost as to what to learn or study. I feel that if I skipped learning JSP and servlets I would miss a lot of important concepts that will surely help me in the future. So the question: do I need to have a foundation in / know servlets and JSP to learn Spring or Hibernate or any other Java web framework?"} {"_id": "105975", "title": "Project management tool built into visual studio", "text": "My friend and I are working on a project together in Visual Studio, and we are using Subversion for source control. We have used PHP-based project management tools before for task and bug tracking, but we find it inconvenient to work in Visual Studio for hours and then have to go find the website, log in, and report on what we already made commit notes for, especially since sometimes we are working offline and then have to remember to go back and update the trackers. We are wondering if there is a project management tool (or set of tools) built into (or an add-on for) Visual Studio that will work with AnkhSVN, allowing multiple people to maintain and track tasks."} {"_id": "8020", "title": "Which format is best for the first prototype not on paper?", "text": "Console app (my favorite), quick & sloppy form, MS Paint (for GUI); what works best most of the time for your standard application? Why?"} {"_id": "193207", "title": "Generating stats from large data set", "text": "I would like to do the following: I have a fairly large data set, say billions of rows, each row having multiple binary columns. I would like to generate estimated stats, with some certainty, about those rows. The stats would have a simple form: how many rows are there with columns 1, 7 and 10 set? Obviously going through all the entries is not a viable solution.
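One direction I have looked at is answering from a precomputed uniform sample instead of the full table; a rough Python sketch of the idea (toy data and sizes, nothing from my real system):

import random

random.seed(42)

# Toy stand-in for the real table: each row is a tuple of 0/1 column values.
NUM_COLS = 16
rows = [tuple(random.randint(0, 1) for _ in range(NUM_COLS))
        for _ in range(100_000)]

# Precompute once: keep a uniform random sample instead of the full table.
SAMPLE_SIZE = 5_000
sample = random.sample(rows, SAMPLE_SIZE)

def estimate_count(columns, total_rows=len(rows)):
    # Estimate how many rows have all the given columns set.
    hits = sum(1 for r in sample if all(r[c] for c in columns))
    return hits / SAMPLE_SIZE * total_rows

print(estimate_count([1, 7, 10]))  # estimate for 'columns 1, 7 and 10 set'

The accuracy of such an estimate depends on the sample size rather than the table size, which is what makes it attractive.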
To be honest, I would prefer not to touch the data set at all when I have to provide an estimation. I would like to precompute everything and then use the precomputed data to answer questions about the data set. Do you know of any algorithms/techniques for solving such a problem?"} {"_id": "95976", "title": "Is it good to keep the bugfix comments within the code?", "text": "My team is using ClearCase as the version control system. The project I am working on started 7-8 years back. During the entire lifetime of the project we have had several releases, bug fixes, service packs, etc. The issues are tracked using the bug tracking system, and most of the people who work on the bug fixes follow a routine of enclosing the change in a START/END comment block with the date, author, bug id, etc. I feel this is quite irrelevant; it makes the code cluttered and hard to maintain, and these are things which should be part of check-in comments/labels etc., where we can keep additional life cycle information about the work product. What's the best practice to be followed? Some of the reviewers of the code insist on keeping the comments about the bug and fixes to make their life easier. In my understanding, they should review the files by mapping them to a view, getting the change log of the branch, and reviewing it. It would be helpful if I could get some best practices on submitting the updated code for review."} {"_id": "95975", "title": "Importance of algorithms in a telephonic interview", "text": "I had a telephonic interview in which the interviewer gave me a problem for which I was supposed to provide the algorithm. Since it was a telephonic interview I had no paper or pen, so I could not give him the correct solution. So I got a bit demotivated, feeling that my problem-solving skills are very low (I have decent experience developing software). How do you overcome these kinds of situations, and what are your suggestions for improving algorithm skills, especially for facing these kinds of tactical interviews?"} {"_id": "123046", "title": "Protecting an Application with Logins", "text": "I am a beginner Java programmer, and have tried searching for a way to set up a login system with my Java application. The application is a game, and I do not want people to just upload it to a website so others can download and play. I want to use something like MySQL to make the user authenticate in order to use the application. Does anyone have a suggestion?"} {"_id": "44929", "title": "Color schemes generation - theory and algorithms", "text": "I will be generating charts and diagrams, and I am looking for some theory on color schemes and algorithm examples. Example questions: * How to generate complementary or analogous colors? * How to generate pastel, cold and warm colors? * How to generate any number of random but distinct colors? * How to translate all that to the hex triplet (web color)? My implementation will be in AS3, but any examples in pseudocode are welcome."} {"_id": "226517", "title": "How can Lisp produce an iterative process from a recursive procedure?", "text": "I am starting to learn Lisp, using the SICP book. The authors mention that a procedure (i.e. function) can be recursive or iterative. Additionally, the process those procedures generate will also be recursive or iterative and, surprisingly, a recursive procedure can sometimes generate an iterative process.
The example given is the factorial procedure, which is a recursive procedure but which generates an iterative process: (define (factorial n) (iter 1 1 n)) (define (iter product counter max-count) (if (> counter max-count) product (iter (* counter product) (+ counter 1) max-count))) And here's a quotation from the book: > One reason that the distinction between process and procedure may be > confusing is that most implementations of common languages (including Ada, > Pascal, and C) are designed in such a way that the interpretation of any > recursive procedure consumes an amount of memory that grows with the number > of procedure calls, even when the process described is, in principle, > iterative. The implementation of Scheme we shall consider does not share > this defect. It will execute an iterative process in constant space, even if > the iterative process is described by a recursive procedure. **Question**: I understood the principles involved (i.e. the what), but I still don't understand how the Lisp interpreter/compiler will generate an iterative process from a recursive function, being able to calculate it using constant space, and why most other languages are not able to do it."} {"_id": "126671", "title": "Is it considered bad practice to have PHP in your JavaScript", "text": "So many times on this site I see people trying to do things like this : I don't think that this is some sort of pattern that people naturally fall into. There must be some sort of tutorial or learning material out there that is showing this, otherwise we wouldn't see it so much. What I'm asking is, am I making too big a deal of this, or is this a really bad practice? **EDIT :** Was speaking to a friend of mine about this who often puts Ruby in his JavaScript, and he brought up this point. Is it OK to dynamically place application-wide constants in your JavaScript so you don't have to edit two files? For example... MYAPP.constants = ; Also, is it OK to directly encode data you plan to use in a library ChartLibrary.datapoints = ; or should we make an AJAX call every time?"} {"_id": "226510", "title": "DDD: Domain Model Factory Design", "text": "I am trying to understand how and where to implement domain model factories. I have included my `Company` aggregate as a demo of how I have done it. I have included my design decisions at the end - I would appreciate any comments, suggestions, and critiques on those points. **The `Company` domain model:** public class Company : DomainEntity, IAggregateRoot { private string name; public string Name { get { return name; } private set { if (String.IsNullOrWhiteSpace(value)) { throw new ArgumentOutOfRangeException(\"Company name cannot be an empty value\"); } name = value; } } internal Company(int id, string name) { Id = id; Name = name; } } **The `CompanyFactory` domain factory:** This class is used to ensure that business rules and invariants are not violated when creating new instances of domain models. It would reside in the domain layer. public class CompanyFactory { protected IIdentityFactory IdentityFactory { get; set; } public CompanyFactory(IIdentityFactory identityFactory) { IdentityFactory = identityFactory; } public Company CreateNew(string name) { var id = IdentityFactory.GenerateIdentity(); return new Company(id, name); } public Company CreateExisting(int id, string name) { return new Company(id, name); } } **The `CompanyMapper` entity mapper:** This class is used to map between rich domain models and Entity Framework data entities. It would reside in the infrastructure layer.
public class CompanyMapper : IEntityMapper { private CompanyFactory factory; public CompanyMapper(CompanyFactory companyFactory) { factory = companyFactory; } public Company MapFrom(CompanyTable dataEntity) { return factory.CreateExisting(dataEntity.Id, dataEntity.Name); } public CompanyTable MapFrom(Company domainEntity) { return new CompanyTable() { Id = domainEntity.Id, Name = domainEntity.Name }; } } 1. The `Company` constructor is declared as `internal`. **Reason:** Only the factory should call this constructor. `internal` ensures that no other layers can instantiate it (layers are separated by VS projects). 2. The `CompanyFactory.CreateNew(string name)` method would be used when creating a new company in the system. **Reason:** Since it would not have been persisted yet, a new unique identity will need to be generated for it (using the `IIdentityFactory`). 3. The `CompanyFactory.CreateExisting(int id, string name)` method will be used by the `CompanyRepository` when retrieving items from the database. **Reason:** The model would already have identity, so this would need to be supplied to the factory. 4. The `CompanyMapper.MapFrom(CompanyTable dataEntity)` will be used by the `CompanyRepository` when retrieving data from persistence. **Reason:** Here Entity Framework data entities need to be mapped into domain models. The `CompanyFactory` will be used to create the domain model to ensure that business rules are satisfied. 5. The `CompanyMapper.MapFrom(Company domainEntity)` will be used by the `CompanyRepository` when adding or updating models to persistence. **Reason:** Domain models need to be mapped straight onto data entity properties so that Entity Framework can recognise what changes to make in the database. Thanks"} {"_id": "213529", "title": "How to solve this problem- Neural Net? Fuzzy? Other?", "text": "Hi, I have a programming problem that I would like to solve using some artificial intelligence technique, and I really don't know where to start. I would like some guidance as to what methodology to pursue. Let's say I have 10,000 images of random people, and I need to detect elderly people in the images. I might have algorithms like a wrinkle detector, glasses detector, walking cane detector, missing teeth detector, skateboard detector, PlayStation detector, etc. Each algorithm does a scan independently and outputs a number from 0 to 10 for the likelihood that the image contains that item. Let's assume that works. There might be 100 different algorithms. My set of 10,000 images would be divided by a human into two groups: those that contain an elderly person, and those that do not. Now I need to develop a system that, when given an image to analyze, takes the series of values from the algorithm modules and calculates a single value that represents the likelihood that the image has elderly people in it or not. During training I would like it to be able to automatically build rules by analyzing all the algorithms' outputs. For example: * If the wrinkle detector, glasses detector, walking cane detector and missing teeth detector all output a high number, then output a high number. * If the wrinkle, glasses, cane and teeth detectors are high, but the PlayStation and skateboard detectors are also high, then the output is neither low nor high. * The hands detector and clothes detector should be essentially ignored, as old and young people both have those (hopefully) What type of technology should I be implementing for the automated rule building system?
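For concreteness, the kind of learner I imagine is something like a simple logistic model over the detector scores, where the learned weights play the role of the hand-written rules above (a toy Python sketch; every number in it is invented):

import math

# Each example: detector scores (0-10) -> label (1 = elderly person present).
# Score order here: [wrinkles, glasses, cane, teeth, skateboard]
examples = [
    ([9, 8, 7, 8, 0], 1),
    ([8, 7, 9, 7, 1], 1),
    ([2, 3, 0, 1, 9], 0),
    ([1, 2, 1, 0, 8], 0),
]

weights = [0.0] * 5
bias = 0.0

def predict(scores):
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))  # likelihood between 0 and 1

# Plain logistic-regression training loop; the learned weights end up
# encoding rules like 'wrinkles count for, skateboard counts against'.
for _ in range(1000):
    for scores, label in examples:
        error = label - predict(scores)
        bias += 0.01 * error
        weights = [w + 0.01 * error * s for w, s in zip(weights, scores)]

print(predict([9, 9, 8, 8, 0]))  # should be close to 1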
Is this better solved by a neural network system? A fuzzy logic system? Something else? Thanks for any advice."} {"_id": "226518", "title": "Scrum: Writing the time it took to complete a task", "text": "In my organization, when we move a task to the \"Done\" column we write down the number of hours it took to complete the task. It can be useful when, during the retrospective, the team tries to understand why certain stories took longer to implement than anticipated. However, it is also burdensome, and developers feel somewhat micromanaged or stressed when they have to do this. What do you do in your teams? What do you recommend?"} {"_id": "88735", "title": "When practicing collective code ownership, can there be experts of each component?", "text": "Consider a complex and extensive software component; for example a multi-language text rendering engine, an IPC framework, a scheduler that can handle multiple time zones, a module full of complexities caused by the need to stay backwards compatible, etc... It requires a lot of knowledge of the details to maintain and extend such a component without introducing bugs with every code change. It is also often helpful to know the bug history when receiving a new bug report. Moreover, it requires thorough knowledge about the component's design to be able to maintain its consistency after future extensions, and it is good to know the rationale behind past design decisions. You might try to document this kind of knowledge, but it would require a lot of time and effort, and it would rapidly become outdated and hence untrustworthy. Worse, everybody would still need to keep that written documentation in his/her mind. It seems to me that for such components it is best to have someone with in-depth knowledge, as one cannot thoroughly know and keep up with the code produced by a whole group of people without an excessive communication and learning overhead. I wonder whether the practice of collective code ownership allows for such experts, as it seems to me that it would require some kind of \"weak code ownership\". And if not: how can it succeed without a massive amount of duplicate work?"} {"_id": "245589", "title": "Pattern to use to relate multiple data sources to different user data widgets", "text": "I have a client x server intranet application that basically gets data from the server, formats it, and sends that data to the client for display. On the server we are using ASP.NET C# running on IIS, and on the client we have JavaScript widgets. The client x server requests are done through AJAX calls. I have lots (100+) of server objects that can be shown in 10 different formats in the browser through different graphical widgets. So, basically we have the following structure: ![enter image description here](http://i.stack.imgur.com/JJLxc.png) So, for every server object I need nnnWidgetDataGetter objects to do some data transformation into the widget data format to be sent to the client. Every widget can request data from all objects of the server. Using a normal approach, I would have to write tons of methods, one for each server basic class times each datagetter class. To avoid writing hundreds of methods with similar behaviour, we decided to use a dynamic approach in the DataGetter classes (static classes), where the business object is evaluated at runtime, data is gathered from this object, formatted and sent to the client.
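In spirit, the dynamic approach works like this (a Python-flavoured sketch of the idea only; our real code is C# using reflection, and every name here is invented):

# The getter receives the object and a list of field names as strings,
# so no compile-time type information survives.
class Company:  # hypothetical business object
    def __init__(self):
        self.name = 'ACME'
        self.revenue = 1000

def get_widget_data(business_object, field_names):
    # Evaluate the object at runtime: look every field up by name.
    return {name: getattr(business_object, name) for name in field_names}

print(get_widget_data(Company(), ['name', 'revenue']))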
The problem is that using dynamic object creation and invocation we lose the Intellisense, so the code is becoming very big and difficult to understand and maintain. After a while we found out that we're not using the correct approach and started thinking about software patterns that would help us solve this. So we are looking for the correct pattern to solve this issue: one that would keep a typed structure where Intellisense could work, and that avoids us having to write hundreds of different methods. We appreciate very much any kind of help."} {"_id": "252791", "title": "How to route messages between clients using a central server in Python", "text": "I've got three Raspberry Pis sitting around. I want 2 of them to be able to chat while the 3rd routes the messages (acts as a server between them). The general flow of events should be something like this: * Server starts running on Pi #1 * Pi #2 starts running and connects to the server (whose IP will be static, I guess) with a name it chooses. Pi #3 does the same as #2. * Pi #3 can then, knowing the name of Pi #2, send a message to Pi #2 using ... something. This is the general outline of what I want to achieve. I'm not sure what the server that runs on Pi #1 should be (I've heard of web server frameworks like Flask, but I don't have enough knowledge to determine if they fit my needs). I'm also not sure what I should be using for the client side (Pi #2, 3). I could probably use sockets, but I assume there is a better / easier way."} {"_id": "213198", "title": "Do software licenses (ie, MIT) need to be included with the executable?", "text": "Whenever a software license reads: `... The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.` Does this mean you have to include it with the executable (the version customers get) or just the source code (for the programmers)? Sorry if this is a silly question, but it doesn't clearly state what \"the Software\" is referring to, and I can see people maybe wanting credit for it. If you don't have to include the license with the executable, do you need to give credit in your executable release some other way, like some other kind of copyright notice that just says \"Copyright \"? Is there a specific format for that?"} {"_id": "244860", "title": "Range searching with WebGL", "text": "Is it somehow possible to use WebGL to speed up 1D range searching? I need to find overlapping date intervals, and I thought that 3D games usually have a very similar problem when rendering frames... _I read of interval trees as an alternative if this idea does not work, but I am open to suggestions... By the way, if Stack Overflow would be a better place for this question, please let me know!_"} {"_id": "178886", "title": "Liskov substitution and abstract classes / strategy pattern", "text": "I'm trying to follow LSP in practical programming, and I wonder whether different constructors of subclasses violate it. It would be great to hear an explanation instead of just yes/no. Thanks much! P.S. If the answer is `no`, how do I make different strategies with different input without violating LSP?
class IStrategy { public: virtual void use() = 0; }; class FooStrategy : public IStrategy { public: FooStrategy(A a, B b) { c = /* some operations with a, b */; } virtual void use() { std::cout << c; } private: C c; }; class BarStrategy : public IStrategy { public: BarStrategy(D d, E e) { f = /* some operations with d, e */; } virtual void use() { std::cout << f; } private: F f; };"} {"_id": "244867", "title": "How to store Role Based Access rights in web application?", "text": "Currently working on a web-based CRM-type system that deals with various modules such as Companies, Contacts, Projects, Sub Projects, etc. A typical CRM-type system (ASP.NET Web Forms, C#, SQL Server backend). We plan to implement role-based security so that basically a user can have one or more roles. Roles would be broken down first by the module type, such as: -Company -Contact And then by the actions for that module; for instance, each module would end up with a table such as this: Role1 example: Module | Create | Edit | Delete | View; Company | Yes | Owner Only | No | Yes; Contact | Yes | Yes | Yes | Yes. In the above case `Role1` has two module types (Company and Contact). For Company, the person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For this same role, for the module Contact, this user can create contacts, edit contacts, delete contacts, and view contacts (full rights, basically). I am wondering whether it is best, upon coming into the system, to keep the user's roles in session with something like a `List<Role> roles;`, where the `Role` class would have some sort of `List<ModuleActions> modules;` (which can contain Company, Contact, etc.). Something to the effect of: class Role{ string name; string desc; List<ModuleActions> modules; } And the module action class would have a set of actions (Create, Edit, Delete, etc.) for each module: class ModuleActions{ List<Action> actions; } And the action has a value for whether the user can perform the right: class Action{ string right; } Just a rough idea; I know the action could be an enum and the ModuleActions class can probably be eliminated with a `List<Action>`. My main question is what would be the best way to store this information in this type of application: Should I store it in the user's Session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax), and I can simply tack onto this session. Or should this be loaded at the page load event of each module (page load of Company, etc.)? I eventually need to be able to hide / unhide various buttons / divs based on the user's role, and that is what got me thinking of loading this via session. Any examples or points would be great."} {"_id": "244868", "title": "Android From Local DB (DAO) to Server sync (JSON) - Design issue", "text": "I sync data between my local DB and a server. I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the DB (I didn't write that part): com.something.db --public DataHelper --public Employee @DatabaseField e.g. \"name\" will be an actual column name in the DB -name @DatabaseField -salary etc... (all in all 50 fields) I have a com.something.sync package that contains all the implementation details of how to send data to the server. It boils down to a ConnectionManager that is fed by different classes that implement a 'Request' interface:
It boils down to a ConnectionManager that is fed by different classes that implements a 'Request' interface com.something.sync --public interface ConnectionManager --package ConnectionManagerImpl --public interface Request --package LoginRequest --package GetEmployeesRequest My issue is, at some point in the sync process, I have to JSONise and de- JSONise my data (E.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both his JSONisation and his actual representation inside the local database. It really doesn't feel right, because I carefully decoupled the rest, I am only stuck on this JSON thing. What should I do ? Should I write 3 Employee classes ? EmployeeDB @DatabaseField e.g. \"name\" will be an actual column name in the DB -name @DatabaseField -salary -etc... 50 fields EmployeeInterface -getName -getSalary -etc... 50 fields EmployeeJSON -JSON_KEY_NAME = \"name\" The JSON key happens to be the same as the table name, but it isn't requirement -name -JSON_KEY_SALARY = \"salary\" -salary -etc... 50 fields It feels like a lot of duplicates. Is there a common pattern I can use there ?"} {"_id": "154873", "title": "Reasonable Number of Directed Graph Nodes and Edges", "text": "How many directed graph nodes are typically represented in the browser? I'm working with some large data-sets with nodes and edges more then 400,000. I'm wondering if I am going down a fruitless path trying to represent them in the browser via arbor.js or similar JS libraries. What's the most effective way to allow a large number(large relative to my domain of real estate transactions, maybe 1-10,000) of users to visualize and browse a large directed graph of up to 500,000 records?"} {"_id": "163240", "title": "I don't understand why algorithms are so special", "text": "I'm a student of computer science trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online course, book, web tutorial), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome and scientific applications of all the above. Heck I even love Newton's laws and conical sections. But when it comes to algorithms, I'm just not astounded. They are not insightful in new ways about human cognition at all. I was expecting algorithms to be shattering preconceptions and mind- altering, but time and time again they fail miserably. Perhaps I am doing something wrong in my approach. Can someone tell me why algorithms are so important to programming?"} {"_id": "160876", "title": "Should you make private properties?", "text": "private string mWhatever; private string Whatever { get { return this.mWhatever; } set { this.mWhatever = value; } } I've seen some people who make properties for every single member, private or not... does this make any sense? I could see it making sense in 1% of the cases at times when you want to control access to the member inside the class containing it because if you didn't use properties for every member it would lead to inconsistencies and checking to see if the member has an access or not (since you have access to both in the scope of the class)."} {"_id": "59642", "title": "Best practices for constants", "text": "How do you guys handle constants, especially in Java (static final) and C++ (define)? 
* Do you use dedicated headers (C++) or classes (Java) for all constants? * Do you turn all literal values into constants, or just those you want to use in multiple places? This is probably a bit subjective, so I'm especially interested in the reasons for your approach."} {"_id": "213220", "title": "Modeling a Student Application/Committee Relationship", "text": "I'm developing an ERD for a graduate student manager program (it's for a university class, so it's a fairly trivial implementation). In this snippet of the model, I'm trying to work out the 'application' and 'committee' entities/relationships. Basically, a committee - composed of staff members - can be formed and assigned to review an application. Here are the entities I've come up with: * **application:** Comprised of a student's application data submitted through a form (student id, date submitted, degree, etc.). * **committee:** Self-explanatory. A group of staff members that can review applications. * **staff_member:** Any faculty member. * **committee_membership:** An associative entity I've created to resolve the many-to-many relationship between committee and staff member, since a committee can have many staff members and a staff member can belong to many committees. Is this an effective implementation of what I'm trying to do? I'm still trying to wrap my head around associative entities and when they are needed. It seems strange to have the 'committee' table with a single column. Also - and I know I've given you limited information - do my relationships look generally correct? ![enter image description here](http://i.stack.imgur.com/6fo25.png)"} {"_id": "213226", "title": "How does datomic handle \"corrections\"?", "text": "**tl;dr** * Rich Hickey describes datomic as a system which implicitly deals with timestamps associated with data storage * From my experience, data is often imperfectly stored in systems, and on many occasions needs to be retroactively corrected (i.e., often the question of \"was `a` true on Tuesday at 12:00pm?\" will have an incorrect answer stored in the database) This seems like a spot where the abstractions behind datomic might break - do they? If they don't, how does the system handle such corrections? * * * Rich Hickey, in several of his talks, justifies the creation of datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged/incorrect (maybe it was not input correctly, maybe it eroded over time, etc...). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of time-related data manipulation that database programmers are faced with.
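To pin down what I mean by a correction, here is a toy bitemporal sketch in plain Python (my own illustration of the problem, nothing to do with datomic's actual implementation): each fact carries both the moment it is about (valid time) and the moment we recorded it (transaction time), so a correction is a new record about an old moment:

facts = [
    # (entity, attribute, value, valid_time, recorded_at)
    ('sensor-1', 'status', True,  'Tue 12:00', 'Tue 12:01'),
    # On Wednesday we learn Tuesday's value was wrong: the correction is a
    # NEW record about the OLD moment; nothing is overwritten.
    ('sensor-1', 'status', False, 'Tue 12:00', 'Wed 09:30'),
]

def as_of(valid_time, recorded_before):
    # What did we believe about valid_time, given everything recorded so far?
    # (For this toy, the day-hour strings happen to sort chronologically.)
    candidates = [f for f in facts
                  if f[3] == valid_time and f[4] <= recorded_before]
    return candidates[-1][2] if candidates else None

print(as_of('Tue 12:00', 'Tue 23:59'))  # True  - what we believed on Tuesday
print(as_of('Tue 12:00', 'Wed 23:59'))  # False - after the correction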
Does datomic gracefully handle such updates? If so, how?"} {"_id": "241307", "title": "Password reset process", "text": "So I'm working on my first password reset mechanism. I'm going with what I understand to be a fairly common procedure: 1. User clicks \"Forgot Password\" 2. User is prompted for email address 3. If the entered email address is valid, send an email with a reset link to it 4. Reset link uses a token of some kind to identify the user account and keep its details secure 5. When the password is reset, generate a new token and save it to the user account I feel like this should be pretty secure, but I was wondering if anyone could provide any insights that I may not be considering at this point."} {"_id": "213228", "title": "A good name for ValueObject that contains database update/create column values", "text": "We all know these fields that database admins so like to add: * UserCreated & DateCreated * UserUpdated & DateUpdated They need to be displayed in the UI, so I want to put them in some `ValueObject`, but I can't come up with a good name to call the class..."} {"_id": "241309", "title": "Builder Pattern: When to fail?", "text": "When implementing the Builder Pattern, I often find myself confused about when to let building fail, and I even manage to take a different stand on the matter every few days. First some explanation: * With _failing early_ I mean that building an object should fail as soon as an invalid parameter is passed in. So inside the `SomeObjectBuilder`. * With _failing late_ I mean that building an object can only fail on the `build()` call that implicitly calls a constructor of the object to be built. Then some arguments: * In favor of failing late: A builder class should be no more than a class that simply holds values. Moreover, it leads to less code duplication. * In favor of failing early: A general approach in software programming is that you want to detect issues as early as possible, and therefore the most logical place to check would be in the builder class' constructor, 'setters', and ultimately in the build method. What is the general consensus about this?"} {"_id": "155477", "title": "Best way to reuse common functions between ASPX pages ?", "text": "I have a bunch of functions that are used across multiple ASPX files. I want to condense these down to one file to be used by all the ASPX files. I have a few ideas, but I want to know what the accepted method of doing this would be. I have an idea to just create a class to put them in. However, I was wondering if I could put them in an ASCX page, but that does not look like the solution I'm looking for. Is there an accepted method for this type of situation?"} {"_id": "119447", "title": "Sports Programming", "text": "Other than programming, I'm addicted to sports. I'd like to integrate the two. What are the different programming languages that companies like ESPN use to work with stats? What techniques are used to do this, and how can I get going with it myself?"} {"_id": "201417", "title": "Does the ray tracing algorithm involves rasterization of image?", "text": "I am constructing a ray tracing algorithm. I know that the first step is to develop the camera and view plane specifications. Now, is the next step performing a rasterization algorithm on the image before a BVH tree is constructed, so that intersection tests can be performed?
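For reference, the part I have working so far is generating a primary ray per pixel from the camera and view plane specification; a rough Python sketch (the numbers are made up):

# One primary ray per pixel from the camera and view plane spec.
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

WIDTH, HEIGHT, FOV_SCALE = 640, 480, 1.0  # view plane sits at z = -1

def primary_ray(px, py):
    # Map the pixel to a point on the view plane, then shoot from the eye.
    x = (2 * (px + 0.5) / WIDTH - 1) * FOV_SCALE * (WIDTH / HEIGHT)
    y = (1 - 2 * (py + 0.5) / HEIGHT) * FOV_SCALE
    return (0.0, 0.0, 0.0), normalize((x, y, -1.0))

origin, direction = primary_ray(320, 240)
print(origin, direction)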
Kindly guide."} {"_id": "43328", "title": "How to structure a project that supports multiple versions of a service?", "text": "I'm hoping for some tips on creating a project (ASP.NET MVC, but I guess it doesn't really matter) against multiple versions of a service (in this case, actually multiple sets of WCF services). Right now, the web app uses only some of the services, but the eventual goal would be to use the features of all of the services. The code used to implement a service feature would likely be very similar between versions in most cases (but, of course, everything varies). So, how would you structure a project like this? Separate source control branches for each different version? I'm kind of shying away from this because I don't feel like branch merging should be something that we're going to be doing really often. Different project/solution files in the same branch? That could link the same shared projects easily. Build some type of abstraction layer on top of the services, so that no matter what service is being used, it is the same to the web application?"} {"_id": "43321", "title": "What are the so-called \"levels\" of understanding multithreading?", "text": "I seem to remember reading somewhere some list of 4 \"levels\" of understanding multithreading. This _may_ have been in a formal publication, or it _may_ have been in an extremely informal context (even like in a Stack Overflow question, for example). Unfortunately I don't remember who referred to them or precisely what they were. I seem to recall that they were roughly like: 1. Total ignorance 2. Awareness mixed with incompetence 3. Relative competence mixed with fear 4. True understanding My intention is to refer to these levels in a blog post I'm writing, with a reference; but I can't for the life of me remember where I first encountered this list. Brief Google searches have proved unfruitful."} {"_id": "231208", "title": "Using π, φ, λ etc. as variable names while programming?", "text": "This is a function in the d3.v3.js file (the graph library D3.js): function d3_geo_areaRingStart() { var λ00, φ00, λ0, cosφ0, sinφ0; d3_geo_area.point = function(λ, φ) { d3_geo_area.point = nextPoint; λ0 = (λ00 = λ) * d3_radians, cosφ0 = Math.cos(φ = (φ00 = φ) * d3_radians / 2 + π / 4), sinφ0 = Math.sin(φ); }; function nextPoint(λ, φ) { λ *= d3_radians; φ = φ * d3_radians / 2 + π / 4; var dλ = λ - λ0, cosφ = Math.cos(φ), sinφ = Math.sin(φ), k = sinφ0 * sinφ, u = cosφ0 * cosφ + k * Math.cos(dλ), v = k * Math.sin(dλ); d3_geo_areaRingSum.add(Math.atan2(v, u)); λ0 = λ, cosφ0 = cosφ, sinφ0 = sinφ; } d3_geo_area.lineEnd = function() { nextPoint(λ00, φ00); }; } I was completely taken aback that the programmers used π, φ and λ as variable names. Surprisingly, these variables were accepted even by Notepad (i.e., they didn't turn into junk/unrecognized characters). Is it good practice to use such variables? I can see that they're very intuitive and searchable too, but a bit unnerving."} {"_id": "96408", "title": "1-click software release", "text": "I am rewriting a VB6 installer in NSIS. One of my priorities is to compile-to-release in the least number of steps possible; ideally, a one-click process in which all needed files are included, registered, and put in the right folders.
At the moment, I am creating several folders: one for app_path files, one for system_files, etc. So, every time that a file is modified or a new file is added, I just drop it in the right folder and recompile. However, if a completely new file is included, steps like registering a DLL still need to be implemented in code. Any suggestions on how to improve this approach, or on how to design a one-click compile-to-release process? EDIT: The original installer was created using the Package and Deployment Wizard from Visual Basic 6.0. Every time I need to release my software, I find myself updating and adding files, updating the respective .lst file, and repackaging everything (CAB files, DLLs, etc.) into one EXE in order to release it from our website. It is a cumbersome process. Additionally, due to poor design, the original software has been copy-pasted into several projects, creating several applications and their respective VB6 installers (repeating a lot of common files). Ideally, there would be one single installer, which allows the installation of the different executables and manages all the shared files among them."} {"_id": "16016", "title": "What is the difference between `update` and `upgrade`", "text": "What is the difference between `update` and `upgrade` in the context of application software?"} {"_id": "11296", "title": "Freedom of Speech vs. Computer Security", "text": "Over the summer I found a cookie put on my computer by the school's network that seemed like it could be a problem. It stored authentication information and (long story short) I found that if you put in other people's easily attainable information, you became signed in as them. Slowly but surely, I discovered 3 security exploits that, when working together, compromised ALL security on the website. In theory I could sign in as teachers, access any student's email (with capabilities to send), adjust all grades, anything I wanted to do, I could do. They have since fixed this error: not only is the cookie gone, but the site has been replaced with a Microsoft SharePoint foundation instead of their \"roll your own\" system. I didn't research this to take advantage of the exploit. I did it because these exploits would allow MY information to be viewed by ANYBODY, a student of the school or not. So I went to my school's IT so that they would fix it. They did, and almost a month after they fixed it, I decided to write a 2-page-ish article on my blog. I'd link to that page now except that the head of IT has asked that I take the page down. He claims we agreed that I'd never talk about the exploit to anybody outside of IT. I know I never agreed to that, because since square one I wanted to take credit for my 3 or 4 weeks of hard investigation and work; I fully intended to tell people about the problem. I wrote my article with discretion. I never mentioned the school's name, and it had very little subjective content; it was primarily how I found the exploits, what they were, and how they could be used. The idea was that if I talked about it from the point of view of the discoverer, it'd show insight into how to develop against it. The school was very appreciative that I did not use these exploits and that I came straight to them with the intention of fixing it. I only blog about software development concepts. Finding security holes in the wild like this is a perfect subject for my blog.
Also, I want to make a name for myself in the cyber security world; Bruce Schneier linked me to an article in his blog talking about how, if you want to make a name for yourself in cyber security, you need to break ciphers, find security holes and fix them, and such. In short, I feel I deserve credit for the hard work I did; they feel that the school's image is more important, even though the security hole is more than fixed. What's the best way to go about getting my credit here? I have posted a pseudo-censored version in the meantime. UPDATE: no more pseudo-censored version; I just put up a new article that's a little more ambiguous about it even being a school. The new article can be found here."} {"_id": "116323", "title": "What algorithm browsers follow to remember values for form controls?", "text": "You can have different HTML `input` elements on a single HTML `form`, and all of them have their associated remembered values (values that you've entered in those fields). Is it related to the `id` or `name` attribute of the `input` element? No. Proof? Your email address appears on many `input` elements across different websites, and all those fields have different `id` and `name` attributes. So, there is a mystery here I don't understand. How do browsers remember values for HTML form controls? What is the algorithm? How do they know that they should show your email on _this_ control and not on _that_ control, when both controls are HTML `input type=text` elements?"} {"_id": "253094", "title": "Difference between reverse lookup tables and rainbow tables", "text": "Using reverse lookup tables, you create a lookup table consisting of the password hashes of user accounts. Then you use another table which consists of hashes of guessed passwords. Then you compare the two to see if the hashed password of a compromised user account matches a hashed password in the lookup table. Using rainbow tables, it appears to be the same technique, except the lookup table is smaller so you can search through it more quickly. What is the real difference between reverse lookup tables and rainbow tables?"} {"_id": "56857", "title": "What is the term that means \"keeping the arguments for different API calls as similar as possible\"?", "text": "There is a word which I can never remember... it expresses a design goal that API calls (or functions or methods or whatever) should be as similar as reasonably possible in their argument patterns. It may also extend to naming as well. In other words, all other things being equal, it is probably bad to have these three functions: deleteUser(email) petRemove(petId,species) destroyPlanet(planetName,starName) if instead you could have deleteUser(userId) deletePet(petId) deletePlanet(planetId) What is the word for this concept? I keep thinking it's \"orthogonal\" but it definitely isn't. It's a very important concept, and to me it's one of the biggest things that makes some APIs a joy to work with (because once you learn a few things you can pretty much use everything without looking at doco), and others a pain (because every function is done inconsistently)."} {"_id": "161835", "title": "Are all programming problems algorithm problems?", "text": "I like how \"Introduction to Algorithms\" by Cormen et al. conveys knowledge. One reason is that everything has to do with programming problems, and the book is not implemented in any particular programming language. This language independence keeps the focus on the ideas in general. So my question is, as it says in the title:
Is every solvable programming problem solvable by thinking in this algorithmic fashion, no matter which language, field, etc.? If yes, give arguments; else, give arguments! I have not implemented many complex programs with GUIs, AI, graphics, etc., but are these types of problems also a matter of thinking out good algorithms?"} {"_id": "161834", "title": "Use of versioned objects/data to handle program version compatibility?", "text": "Is there a common name for the practice of keeping a version number on your data, so that different versions of your program can identify, for example, \"current\", \"legacy\", and \"too-old-to-deal-with\" versions of the same type of object? As discussed here: Strategy for backwards compatibility of persistent storage."} {"_id": "116328", "title": "Is there canonical jUnit reference documentation?", "text": "I can't seem to find any kind of complete reference for jUnit. The only way I've found to learn about all the features in jUnit is to check all of its release notes. For example, there is no mention of @Rules on the official page, unless you count the blog posts. But for that you first need to know a feature exists in order to search for it. Is there a canonical reference for jUnit? What's the best method for learning about jUnit features? What I've tried: * jUnit Cookbook - Really basic, very few features covered * The official FAQ - Covers more, but it doesn't have @Rules or anything like that covered. Last updated in 2006"} {"_id": "120090", "title": "Who should control navigation in an MVVM application?", "text": "Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page. Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog). We know that there will be Command objects exposed by our ViewModel, bound to the buttons, with methods that respond to the user's click. But, what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it? To be more specific, in a traditional View-first model (like URL-based navigation schemes such as on the web, or the SL built-in navigation framework) the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow, and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.) To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it is displayed in the main window, a floating panel or a child window, should it? According to this approach, it is up to some other mechanism at runtime to 'bind' what View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels, or vice versa?
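The kind of runtime binding mechanism I am picturing is roughly this (a framework-agnostic Python sketch with hypothetical names, not actual Silverlight code):

# ViewModel-first navigation: a registry decides which View renders a
# ViewModel, so ViewModels never name Views and the app can run headless.
class OrderViewModel:       # hypothetical
    title = 'Order 42'

class OrderPageView:        # hypothetical
    def show(self, vm):
        print('rendering page for:', vm.title)

class NavigationService:
    def __init__(self):
        self._registry = {}   # ViewModel type -> View factory

    def register(self, vm_type, view_factory):
        self._registry[vm_type] = view_factory

    def navigate_to(self, view_model):
        view = self._registry[type(view_model)]()
        view.show(view_model)

nav = NavigationService()
nav.register(OrderViewModel, OrderPageView)
nav.navigate_to(OrderViewModel())  # a Command would call this, not the View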
So given the need to manage the View-ViewModel relationship so we know what to display when, along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?"} {"_id": "216162", "title": "Semantic coupling vs. large class", "text": "I have hardware I communicate with via TCP. This hardware accepts ~40 different commands/requests with about 20 different responses. I've created a HardwareProxy class which _has a_ TcpClient to send and receive data. I didn't like the idea of having 40 different methods to send the commands/requests, so I started down the path of having a single SendCommand method which takes an ICommand and returns an IResponse; this results in 40 different SpecificCommand classes. The problem is this requires semantic coupling, i.e. the method that invokes SendCommand receives an IResponse which it has to downcast to a SpecificResponse. I use a future map which I believe ensures the appropriate SpecificResponse, but I get the impression this code smells. Besides the semantic coupling, ICommand and IResponse are essentially empty abstract classes (marker interfaces), and this seems suspicious to me. If I go with the 40 methods, I don't think I have broken the single responsibility principle, as the responsibility of the HardwareProxy class is to act as the hardware, which has all of these commands. This route is just ugly; plus I'd like to have asynchronous versions, so there'd be about 80 methods. Is it better to bite the bullet and have a large class, or to accept the coupling and marker interfaces for a smaller solution, or am I missing a better way? Thanks. ## Possible Solution Based on DXM's response, this is what I came up with: #include <iostream> #include <string> struct Proxy { template <typename C> C* CreateCommand() { return new C(this); } void Send(std::string s) { //tcp send here; for now print the command name std::cout << s << std::endl; } }; template <typename R> struct IResponse {}; struct DerivedResponse1 : IResponse<DerivedResponse1> { std::string GetValue() { return \"DerivedResponse1\"; } }; template <typename Command, typename Response> struct ICommand { protected: Proxy* pProxy; public: Response* Send() { std::string msg = \"Sent: \" + static_cast<Command*>(this)->GetName(); pProxy->Send(msg); //wait for future here Response* pResponse = new Response; //replace with future value return pResponse; } }; struct DerivedCommand1 : public ICommand<DerivedCommand1, DerivedResponse1> { DerivedCommand1(Proxy* pProxy) { this->pProxy = pProxy; } std::string GetName() { return \"DerivedCommand1\"; } }; int main() { Proxy proxy; DerivedCommand1* pDerivedCommand = proxy.CreateCommand<DerivedCommand1>(); DerivedResponse1* pDerivedResponse = pDerivedCommand->Send(); std::cout << \"Received :\" << pDerivedResponse->GetValue() << std::endl; return 0; } How does that look? It seems like I may not even need the IResponse interface; we'll have to see when I implement it. I realize there is a lot of important stuff missing, like the futures, but I wanted to get the template stuff down first. What do you think of passing the proxy to the command? Also, what about having the Send in the ICommand as opposed to writing it for every command? Am I violating any other OO principles? Thanks."} {"_id": "90803", "title": "Coding user rights", "text": "Imagine a system which has a number of functions and a number of users. A user must have rights to a specific function. Users may belong to a group. A group may belong to a group. So as a simple illustration, user A has rights to functions 1 and 2. User B has rights to functions 2 and 3. User A is in group 1, which has rights to function 3, but negative, i.e.
For extra complexity, perhaps the function has default rights. So you can say, by default everyone has access to function a, or no one has access to function a. I guess it's the same as having an Everybody group. So the question is how are you managing user rights? Do you make all rights additive? Do you allow the explicit deny I mention at the end? Do you have a system where the most access possible is granted or the least? Do you make user rights trump group rights, or vice versa? I've seen a number of variations for applying rights. I'm now defining my own and I'm really looking for any experience you have in that area where you wish you'd done something different, or were delighted you chose a particular way of doing things. Thanks"} {"_id": "120099", "title": "How to become a professional web developer from a C/C++ programmer?", "text": "I am currently a high school student and know how to use Pascal and C/C++ to take part in competitions such as the Informatics Olympiad. I have learned data structures and many algorithms to solve various kinds of problems. Now, I want to move on to become a web developer. However, I know web development is quite different from competitive programming. To make a web application, I have to master HTML, databases, back-end programming etc. But these all look like separate pieces of information. I don't know where to start and what order I should follow. Can anybody give a comprehensive list of learning points? I know there are HTML, Ruby on Rails, CSS and JavaScript. What else? More importantly, can someone give a brief outline of their relationship?"} {"_id": "90808", "title": "Are Django forms violating MVC?", "text": "I just started working with Django coming from years of Spring MVC and the forms implementation strikes me as being slightly crazy. If you're not familiar, Django forms start with a form model class that defines your fields. Spring similarly starts with a form-backing object. But where Spring provides a taglib for binding form elements to the backing object within your JSP, Django has form widgets tied directly to the model. There are default widgets where you can add style attributes to your fields to apply CSS or define completely custom widgets as new classes. It all goes in your Python code. That seems nuts to me. First, you are putting information about your view directly in your model and secondly you are binding your model to a specific view. Am I missing something? EDIT: Some example code as requested. Django: # Class defines the data associated with this form class CommentForm(forms.Form): # name is a CharField and the argument tells Django to use an <input type=\"text\"> # and add the CSS class \"special\" as an attribute. The kind of thing that should # go in a template name = forms.CharField( widget=forms.TextInput(attrs={'class':'special'})) url = forms.URLField() # Again, comment is an <input type=\"text\"> even though input box size # is a visual design constraint and not tied to the data model comment = forms.CharField( widget=forms.TextInput(attrs={'size':'40'})) Spring MVC: public class User { // Form class in this case is a POJO, passed to the template in the controller private String firstName; private String lastName; get/setWhatever() {} } <%@ taglib prefix=\"form\" uri=\"http://www.springframework.org/tags/form\" %> <form:form commandName=\"user\">
    First Name: <form:input path=\"firstName\" />
    Last Name: <form:input path=\"lastName\" />
    "} {"_id": "185460", "title": "Delphi Build Server - Do I need to check in .dres files?", "text": "We're using final builder to build a Delphi project and the person managing the build server noticed that projects with no .dres files were not building because they're not in SVN and because they're not in SVN they're not on the build machine. So he put them in SVN. I'm a little skeptical about the necessity of putting them in SVN though. For one thing, if they're needed by the build server then they're not being built by the build server and we're not really creating the build in one step since we're using pre-compiled code (I might as well just check in my DCU's, tear off my beard and return my Delphi-4-Ever allegiance card). I see in Delphi after compiling a project: ` c:\\program files (x86)\\embarcadero\\rad studio\\9.0\\bin\\cgrc.exe -c65001 \"PROJResource.rc\" -foPROJ.dres ` Those files are produced by brcc32 compiling an RC file. I'd say, well just add that line to the build server, but `PROJResource.rc` isn't in SVN either! `PROJResource.rc` is automatically generated by Delphi from adding things using the project manager and I never noticed it so I never added it to SVN and no one else complained (I think the .dproj file is behind this). The RC files that I wrote myself are in SVN though. So... what's the best way to fix this, just check in the PROJResource.rc or is there something else we can be doing to streamline this?"} {"_id": "63918", "title": "Transactional database and Datawarehouse on the same Postgresql cluster?", "text": "Is it conceptually feasible to have on a Postgresql Cluster a transactional database and at the same time a datawarehouse that would get feeded by the transactional database ?"} {"_id": "121945", "title": "How to be up to date with the LAMP platform?", "text": "Most of times I get to know about the new features too late. It is okay at least I know about them does not matter from where I know But I feel it is too late to know about those features. I am working on LAMP platform and I want to keep myself up to date with the new things, anything happening new with LAMP. Can you please let me know what resources should I use? What groups should I follow? From where I can get the latest updates about any activity, event and feature about LAMP?"} {"_id": "63912", "title": "Looking for an open source project in Python", "text": "I am looking for practical tasks to get experience with Python. Just reading the books and not doing any tasks in the language is not effective. I solved some problems on the Project Euler and TopCoder and it helped me to learn the syntax of the language better. But those tasks are hard algorithmically, but as a rule is quite simple from the point of view of programming. Now I'm looking for an interesting open source project in Python, participation in which will help me to better understand the OO-model of language. Although, this is my first step with Python, in general, I am an experienced programmer and I can be useful for a project. May be someone can suggest something?"} {"_id": "19592", "title": "What is detailed design? what are the advantage disadvantages using it?", "text": "Im really not getting the idea: 1. What is Detailed Design. 2. Why use Detailed Design. 3. Advantages/Disadvantage of using Detailed Design. 4. Any alternative methods other than using Detailed Design. 
Could someone please guide or explain this to me?"} {"_id": "20896", "title": "What do I call people who extend my class in base class documentation?", "text": "Been calling them \"implementers\", but it seems weird to me. /// <summary> /// Implementers should implement this. Derp /// </summary> protected abstract void InternalExecute(); A point of clarification, I'm interested in what to call the **people** who create child classes, not the child classes themselves. \"Hey, you there\" not \"that thing there.\""} {"_id": "153131", "title": "How to store multiple requirements with OR and AND?", "text": "Well, I'm working on a personal project that needs to check if a user has met certain requirements, and they come in forms like Requirement: [c1 OR c2] AND [d1 OR d2] Requirement: [c1 AND c2] OR [d1 AND d2] Requirement: c1 AND any dn (n can be any integer) I'm just not sure how to store these sorts of requirements. I'm thinking of using another object to hold c1,c2,d1,d2....dn and OR, but that seems like a roundabout way of doing things. Is there a better method?"} {"_id": "60791", "title": "What is the best way to implement different views of one resource in RESTful manner?", "text": "Imagine there is a resource, for example event. A user is able to get a list of events in HTML format. He should be able to view that list in two ways: as a list and as a calendar. How should the API respond to these options? The simplest approach, I think, is to pass some parameter to the index action to determine which type of presentation to use, but I don't like it. Don't know why; it does not feel RESTful to me. What do you think, have I misunderstood something and should keep things simple, or is there a better approach?"} {"_id": "153134", "title": "Where does Microsoft currently stand on dynamic languages?", "text": "With languages like Python and Ruby still having a good foothold in the market, what is Microsoft's current stance on dynamic languages? Does Microsoft have any plans to incorporate or invent its own dynamic language?"} {"_id": "153136", "title": "Using a back-end mechanism to copy files to DB and notify the application", "text": "The scenario: a user copies large files to a local folder. I want to watch that folder and when a new file is dropped, go and copy it to the database, so later when copying is done I can actually use it in my application (a C# WinForms app). It would be awesome to also find a way to somehow get notified in the application that copying the file to the DB is finished and it is ready for use... I am using C#.NET, Windows... What solutions/architecture do you suggest for this? For example, having a Windows service running all the time watching that folder; when something is copied it goes and writes it to the DB ... then how about getting notified? Is MSMQ something I can use? I don't know much about it yet. Thanks."} {"_id": "152848", "title": "How do you manage projects left over by other employees?", "text": "It happens that someone just leaves the company all of a sudden. Now his work needs to be completed and you are being assigned it. Having no idea what he was up to (was it 90% done or 9%), how do you manage the leftover? 1. Shall I start from scratch? What if it was 90% done? 2. Shall I try and understand whatever he has done? What if it was just nonsense?"} {"_id": "235586", "title": "Is my application secure enough", "text": "First of all, I don't have any code to display in my question here, because I'm still designing the application structure, so I only have the design developed.
I'm building a phone application that I'm trying to make as secure as possible. I know it's not that necessary on an application the size of mine, but still, good practice. My app is Angular with PhoneGap; the server side is accessible through HTTPS and is Node.js with MongoDB - a stateless server with a RESTful API. Once the user is logged in (using username and password or Facebook), a session id is stored on the client side (Angular cookies) and is required with each access to the server (checking if there is a user logged in with that session id). I also heard about form tokens so I implemented that: each form that in the end will send a request to the server (for example, changing user information and hitting save) will also require a form token - a random value that is generated for the session on building the form; an onLoad function calls the server to generate a specific form token. Seems pretty cool, but I'm still not sure if it is any good. Thinking about it, someone can call my RESTful method to generate any form token he wants using a session id he found (by brute forcing or whatever), and then call any other RESTful method to change anything for that user. Also, my server should be limited to be accessed only from the application... I can't restrict origin because the IP changes from phone to phone... What are the restrictions my server should have? I am a starter at security for web apps and any advice, explanation and help would really help! Is my application secure enough?"} {"_id": "148434", "title": "Why do Git/Mercurial repositories use less space?", "text": "I've read on several discussions here and on SO that DVCS repositories use about the same or less space than their centralised counterparts. I may have missed it, but I haven't found a good explanation of why that is. Anyone know?"} {"_id": "235583", "title": "How to design a domain entity that uses a dependency to manage a state field?", "text": "I'm new to DDD and IoC/DI and I'm having some trouble figuring out how to design an entity that needs to use a state pattern to manage its status. As the transitions are somewhat complicated, I'm using a finite state machine (FSM) to handle the states (I'm using Stateless). To further complicate the situation, the FSM needs to be loosely coupled to an entity. I.e. while I know that the state of the entity must be handled by an FSM, I don't necessarily know exactly what the FSM is (i.e. the entity doesn't know what states/triggers/transitions it might go through). The users use the term workflow to define the FSM (states, triggers, transitions, rules etc). The workflow changes independently of the entity. They might want to add new states, change rules, triggers etc. independent of any changes to the entity. I'm handling this by dynamically loading an assembly that contains a definition of the FSM (which implements a known interface); then, using a service that is injected into the entity, the entity calls the service and gets the FSM it is using. This involves potentially loading assemblies, caching the factory that creates the FSM and then assigning the FSM that is in the appropriate state to the entity. Finally, the FSM can return the set of valid triggers that can be provided to the UI so one of those triggers can be selected and fired. The actual workflow is completely encapsulated externally to the entity.
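A minimal sketch of the service contract implied by that description (the names are assumed here so they line up with the entity code that follows; Stateless itself exposes a richer API than this):

```csharp
using System;
using System.Collections.Generic;

// Assumed shapes only, mirroring the constructor call in the entity below.
public interface IStateMachine
{
    // Valid triggers for the current state, for the UI to offer.
    IEnumerable<string> PermittedTriggers { get; }

    // Runs the externally defined transition rules and updates the state.
    void Fire(string trigger);
}

public interface IWorkflowService
{
    // workflowKey selects which dynamically loaded FSM definition to use;
    // the two delegates let the FSM read and write the entity's Status
    // without the entity knowing anything about states or transitions.
    IStateMachine GetStateMachine(
        string workflowKey,
        Func<string> stateAccessor,
        Action<string> stateMutator);
}
```

With that contract in place, the question below is really about who hands the `IWorkflowService` to the entity, and when.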
So, currently my entity looks something like this: public class EntityUsingWorkflow { private readonly IWorkflowService workflowService; public EntityUsingWorkflow(IWorkflowService service, string workflowKey, string status) { this.workflowService = service; this.WorkflowKey = workflowKey; this.Status = status; // Note that the two lambdas are used to retrieve // and set the state on the entity. this.Workflow = service.GetStateMachine(workflowKey, () => this.Status, stateValue => this.Status = stateValue ); } public string WorkflowKey { get; set; } public IStateMachine Workflow { get; private set; } public string Status { get; private set; } } I'm concerned with this design. My basic understanding is that domain entities should not have dependencies injected into them, yet here I am injecting one. I also understand that the entity needs to handle its own state, so I'm hesitant to move the workflow completely out of the entity. Other ideas I've had, like an init method, would just move the injection to method injection. What would be a clean way to design an entity with these requirements? Is it OK to inject a dependency in this situation?"} {"_id": "148437", "title": "Is it legal / moral to republish oss project of an author that does not respond?", "text": "I found a small OSS project (one file) on someone's blog a few months ago. The license is \"Attribution-ShareAlike 2.5 Generic\". I sent a mail to the author asking if I can put this on GitHub but got no response back. Meanwhile his blog has been shut down. I am not a lawyer, but it seems legal to republish the code (with appropriate attribution) on GitHub. Am I right? Is this moral? Maybe the guy just wants to disappear for a while..."} {"_id": "166820", "title": "How can I distribute a unique database already in production?", "text": "Let's assume a successful web Spring application running on a MySQL or PostgreSQL database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented to address scalability issues. Let's also assume this application is using Hibernate and the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize sharding code (Shard) in the application?"} {"_id": "49141", "title": "I read Pro ASP .NET MVC 2 Framework - anything else for AJAX in MVC 2?", "text": "I read Pro ASP .NET MVC 2 Framework to try and learn the ASP .NET MVC Framework, but I'm really struggling with Ajax in MVC even after going over that chapter again and again. I seem to have a decent grasp on MVC 2 without Ajax. I used the Ajax controls from ASP .NET (non-MVC) and am used to user controls in update panels. I have somewhat complicated objects with nested sets of objects on my website, and trying to update them from the client side and keep their nested array indexes ordered is killing my brain. Having little familiarity with web scripting in general and JavaScript / jQuery / JSON in specific isn't helping. Is there anything else I could read that might help me get a better grasp of Ajax in MVC? Is Ajax poorly supported in MVC 2, or am I just struggling because I have huge holes in my understanding of the web due to the WebForms-ish style of the old ASP .NET? Hoping for the latter . . . my own ignorance can be fixed."}
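For the Ajax-in-MVC question above, a hedged sketch of what the round trip usually boils down to in MVC 2 (controller, action and route names are invented, not taken from the book): a plain controller action returns JSON, and a jQuery call consumes it.

```csharp
using System.Web.Mvc;

// Illustrative only.
public class CartController : Controller
{
    [HttpPost]
    public ActionResult AddItem(int productId, int quantity)
    {
        // ... update the cart on the server ...
        return Json(new { success = true, count = 3 }); // serialized for the client
    }
}
```

On the client, something like `$.post('/Cart/AddItem', { productId: 7, quantity: 1 }, function (data) { if (data.success) { /* update the DOM */ } });` completes the loop; the default route maps the URL to the action, and MVC binds the posted fields to the action's parameters.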
{"_id": "187326", "title": "Is it possible to modify Lamport's mutual exclusion algorithm to work without a FIFO guarantee?", "text": "I'm trying to implement a modified version of the Lamport Mutual Exclusion algorithm. The original algorithm assumes FIFO message ordering between connected systems, but I would like to use ~~UDP~~ a protocol which does not guarantee FIFO. Is it possible to modify this algorithm to work without FIFO? EDIT: Forget UDP. Just assume it is some protocol where the only problem is that FIFO is not guaranteed. That is the only problem I'm concerned with right now."} {"_id": "162565", "title": "Should a programmer take writing lessons to enhance code expressiveness?", "text": "Given that programmers are authors and write code to express abstract thoughts and concepts, and good code should be read by other programmers without difficulties and misunderstandings, should a programmer take writing lessons to write better code? Abstracting concepts and real world problems/entities is an important part of writing good code, and a good mastery of the language used for coding should allow the programmer to express his thoughts more easily, or in a better way. Besides, when trying to write or rewrite some code to make it better, much time can be spent in deciding the names for functions, variables or data structures. I think this could also help to avoid writing code with more than one meaning, often a cause of misunderstanding between different programmers. Code should always express its function clearly and unambiguously."} {"_id": "162562", "title": "Where does Java get its SOA reputation from?", "text": "I see lots of SOA books surrounding Java and have always had the subliminal notion that Java is the \"right\" language for robust/enterprisey SOA, even though I know what SOA involves and that it is perfectly possible using other languages & frameworks. What _exclusive_ traits does Java have that other languages/platforms don't, that make it so popular for SOAs?"} {"_id": "48238", "title": "How well do free-to-open-source-projects policies work in practice?", "text": "In comparison with an open source license and requesting donations, is a free-for-open-source-projects (or free for non-commercial developers) closed source and otherwise commercial project likely to get more license fees? Or just to alienate potential users? Assume the project has value to programmers - I'm looking for generalizations here, though specific examples comparing existing projects will be very interesting. What I have in mind involves code-generating programming utilities. And one issue I can think of, either way, is a near total inability to enforce any license restrictions. After all, I can't go around the internet demanding that everyone show me their source code just in case!"} {"_id": "162561", "title": "Is it possible to migrate struts/spring based application to GWT?", "text": "I am using the combination of Spring, Spring Security, Struts and iBatis in my application. Now I am looking to migrate the Struts UI to GWT. The new combination must be Spring, Spring Security, GWT and iBatis. I applied a layered approach to develop my application. In the Controller/UI layer I am using Struts. I want to replace Struts and use GWT in the Controller/UI layer.
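A hedged sketch of the swap being proposed (the GWT RPC types and Spring's `WebApplicationContextUtils` are real; the domain names are invented): the GWT servlet stands where a Struts action stood and delegates to the existing Spring-managed service, so the layers beneath it do not change.

```java
import java.util.List;

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import org.springframework.web.context.support.WebApplicationContextUtils;

// The client-side contract (OrderRpc and OrderManager are assumed names).
@RemoteServiceRelativePath("orders")
interface OrderRpc extends RemoteService {
    List<String> findOrders(String customerId);
}

// The server-side endpoint: replaces a Struts action; nothing below it moves.
class OrderRpcImpl extends RemoteServiceServlet implements OrderRpc {
    @Override
    public List<String> findOrders(String customerId) {
        OrderManager manager = WebApplicationContextUtils
                .getRequiredWebApplicationContext(getServletContext())
                .getBean(OrderManager.class);   // the same bean Struts called
        return manager.findOrders(customerId);
    }
}
```

One caveat worth flagging: GWT RPC requires the objects crossing this boundary to be serializable by GWT's rules, so the DTOs may need attention even though the DAO/BL/SL layers do not.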
Is it possible to use GWT without affecting the other layers (DAO/BL/SL)?"} {"_id": "48237", "title": "What is an integration test exactly?", "text": "My friends and I have been struggling to classify exactly what an integration test is. Now, on my way home, I just realised that every time I try to give a real world example of an integration test, it turns out to be an acceptance test, i.e. something that a business person would say out loud that specs out what the system should be delivering. I checked the Ruby on Rails documentation for their classification of these testing types, and it's now completely thrown me. Can you give me a short academic description of an integration test with a real world example?"} {"_id": "45643", "title": "How to suggest changes as a recently-hired employee?", "text": "I was recently hired in a big company (thousands of people, to give an idea of the size). They said they hired me because of my rigor and because I was, despite my youth (I'm 25), experienced as a C/C++ programmer. Now that I'm in, I can see that the whole system is old and often uses obsolete technologies. There is no naming convention (files, functions, variables, ...), they don't use version control, don't use exceptions or polymorphism and it seems like almost everybody has lost their passion (some of them are only 30 years old). I'd like to suggest some changes but I don't want to be \"the new guy that wants to change everything just because he doesn't want to fit in\". I tried to \"fit in\", but actually, it takes me one week to do what I would do in one afternoon, just because of the poor tools we're forced to use. A lot of my colleagues never look at the new \"things\" and techniques that people use nowadays. It's like they've just given up. The situation is really frustrating. Have you ever been in a similar situation and, if so, what advice would you give me? Is there a subtle way of changing things without becoming the _black sheep_ here? Or should I just give up my passion and energy as well? Thank you. ## Updates Just in case (if anyone cares): following your precious advice I was able to suggest changes and am now in charge of the team that must create and deploy Subversion :D Thanks to all of you!"} {"_id": "9180", "title": "Should I be a good programmer immediately after college?", "text": "> **Possible Duplicate:** > I've graduated with a Computer Science degree but I don't feel like I'm > even close to being an expert programmer I recently graduated from university, and I have since then joined a development team where I am by far the least experienced developer, with maybe a couple of work terms under my belt. Meanwhile, the rest of the team is rocking 5-10 years of experience. I was a very good student and a pretty good programmer when it came to bottled assignments and tests. I have worked on some projects with success, but now I'm working with a much bigger code-base, and the learning curve is much higher. I was wondering how many other developers started out their careers in teams and felt like they sucked. When does this change? How can I speed up the process? My seniors are helping me but I want to be great and show my value now."} {"_id": "198141", "title": "How much inconsistency arises from Javascript's high flexibility?", "text": "I'll admit it, I haven't yet mastered the language, but my experience with it tells me that JavaScript is a highly flexible language, allowing prototypal inheritance, dynamic typing, functions as first class citizens and so much more cool stuff.
I think such features bring some inconsistencies, but that's nothing new; most people would agree with me. I wanted to discuss specific examples that have been annoying me, like \"for in\" loops: for(var role in rolesToAdd){ if(rolesToAdd.hasOwnProperty(role)){ // for body } } Why should I need to do this? Doesn't this break the concept of classes and inheritance? I might not want to loop through the whole prototype, but what if I still need some of the attributes/methods from an object's direct parent, for example? Also, I would be most grateful if someone could explain to me why the interpreter doesn't allow synonymous global/local variables to share the same scope, when there is certainly a way it could distinguish one from the other. Like in: var foo = function() { var wth = bar(); var bar = wth; // body... }(); function bar(){ return 'bar mitzvah'; } You can see that, even though our local bar has been declared after the call to our global bar, bar's value in foo's scope will always reflect the local bar, which will hold 'undefined'. Isn't that a strange behavior? I would love it if senior JS programmers enlightened me on this and showed me what is gained from these seemingly strange features and if the disadvantages I mentioned here are valid."} {"_id": "51997", "title": "Best Practices for Renaming, Refactoring, and Breaking Changes with Teams", "text": "What are some best practices for refactoring and renaming in team environments? I bring this up with a few scenarios in mind: 1. If a library that is commonly referenced is refactored to introduce a breaking change to any library or project that references it. E.g. arbitrarily changing the name of a method. 2. If projects are renamed and solutions must be rebuilt with updated references to them. 3. If project structure is changed to be \"more organized\" by introducing folders and moving existing projects or solutions to new locations. Some additional thoughts/questions: 1. Should changes like this matter or is resulting pain an indication of structure gone awry? 2. Who should take responsibility for fixing errors related to a breaking change? If a developer makes a breaking change should they be responsible for going into affected projects and updating them, or should they alert other developers and prompt them to change things? 3. Is this something that can be done on a scheduled basis or is it something that should be done as frequently as possible? If a refactoring is put off for too long it is increasingly difficult to reconcile, but at the same time spending 1-hour increments throughout the day fixing a build because of changes happening elsewhere hurts too. 4. Is this a matter of a formal communication process or can it be organic?"} {"_id": "210882", "title": "Refactoring jQuery spaghetti code to use DDD", "text": "Most of my client side code ends up as a long script in one file that mostly looks like this: It's a maintenance nightmare. Although I use a well designed server-side structure using DDD (application services, domain services, value objects, ...etc.), I have had little luck structuring my client code for a better separation of concerns. I'm not building a client-side application; I just use jQuery intensively for the client side. How should I approach the code structure to apply DDD on the client side?"} {"_id": "198145", "title": "What programming related tasks can you do with a \"dead\" brain?", "text": "Here is my problem: programming (learning about programming, coding, etc) is my hobby. I do it in my free time.
I have a full time job, which literally drains my brain (no kidding!) by the end of the day, so much that I can't even add three-digit numbers, but I still want to do some programming-related things. Learning in this state is stupid; finding bugs is impossible. Writing unit tests worked, sort of :D I understand that programming is a very conscious activity, but I need to get my job done every day and I still want to deal with some programming-related stuff, as it's very fun to me. What do you suggest? I thought about watching videos, but what topics should I be looking for in this state of mind (ones which are easy to understand but which I can still learn a lot from)? (I can't do it before work, as it is far more important to be fresh there than during my hobby...) Edit: The question needs a bit of addition, which I left off intentionally at first, because I thought my question was clear. I have no problem at all with my brain activity or anything like that; the \"brain drain\" cause is that I do 9-10 hours of _very intense_ mental work a day (much more intense than, for example, programming or thinking about solving problems), with less than a 5-minute pause every hour. It is not possible to change it or pause more or think less (huh?!) between those working hours."} {"_id": "16905", "title": "Sprint Meetings - What to talk about", "text": "At work we have just started using the Scrum methodology. It is working well, but I have a question on the daily sprint meetings. We set aside 15 minutes for the meeting (between the three devs and the scrum master if we think he is needed) but I have found that we have normally finished within 5 minutes. This might be because we have said all that needs to be said, but I was wondering what people tend to talk about in them, in case we are missing something. For the record, we normally update each other on the current objective, troublesome previous objectives and plans for the rest of the day (including if we will not be available on that project)."} {"_id": "210889", "title": "What techniques are there for debugging remote client side errors?", "text": "What techniques are there for debugging remote client side errors in a web application, especially when they only affect a small subset of users? In my case we have an app that is working well for hundreds of users, internally and externally, but a handful (12) have a specific problem with a JavaScript error that prevents them from using the site. We have screenshots of the error, have confirmed they have no server side errors, confirmed that everything is getting rendered to the browser correctly, have seen the specific error in the IE console, but still have no idea why it isn't working for these specific users. The issue exhibits on different versions of IE. We have never been able to replicate the problem here. I'm not looking for a solution to my problem here, but rather: what are the steps you would take to solve this kind of problem, and what tools there may be that might help?"} {"_id": "210888", "title": "Apply filter in Kafka Queue", "text": "Is it possible to apply a filter on the Kafka stream of a topic? What I want to do is filter out messages while consuming and then run some custom logic on top of them. It is the sort of thing that Storm Trident allows us to do (apply a filter on the stream). What I've found so far is that I can create a Trident topology and do the filtering, but it would be of great benefit if I could filter the stream itself, as it will reduce the amount of traffic at my consumer's end.
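For reference, a hedged sketch of plain consumer-side filtering with the standard Java consumer (which postdates the question's Storm-era setup; topic name and predicate are invented). Note the filter runs after delivery, so it does not reduce traffic to the consumer — exactly the limitation being raised:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FilteringConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "filtering-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    if (record.value().contains("ORDER")) {   // the filter predicate
                        // placeholder for the custom logic to run on matches
                        System.out.println("matched: " + record.value());
                    }
                }
            }
        }
    }
}
```

Anything that drops messages before they leave the broker has to live on the broker side, which is what the question is really after.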
I know this is crazy but could this be a valid approach? Please correct me if I am wrong."} {"_id": "80397", "title": "C++ Interview question", "text": "This problem was given to me on a previous job interview test. It is over now, but I would like to go through my mistakes and hopefully be more prepared next time! // Identify as many bugs and assumptions as you can in the following code. // NOTE that there is/are (at least): // 1 major algorithmic assumption // 2 portability issues // 1 syntax error // Function to copy 'nBytes' of data from src to dst. void myMemcpy(char* dst, const char* src, int nBytes) { // Try to be fast and copy a word at a time instead of byte by byte int* wordDst = (int*)dst; int* wordSrc = (int*)src; int numWords = nBytes >> 2; for (int i=0; i < numWords; i++) { *wordDst++ = *wordSrc++; } int numRemaining = nBytes - (numWords << 2); dst = (char*)wordDst; src = (char*)wordSrc; for (int i=0 ; i <= numRemaining; i++); { *dst++ = *src++; } } All I found was the syntax error on the line for (int i=0 ; i <= numRemaining; i++); Also the first for loop looks fishy. Can you guys find any other mistakes?"} {"_id": "80396", "title": "Programming languages with these features", "text": "I've got a small project coming up where I can choose any language I want. My team prefers the feeling of safety we get from static typing. In our experience, dynamically typed languages can be more difficult to maintain (we feel they are too \"magic\"). The usual choice would be Java, but our team is expressing interest in the conciseness and flexibility provided by features of some of the other languages such as Python and Ruby. Based on my team's interests, I need some help evaluating which language to use. Ideally, the language would support these features: * Statically typed, * Properties or some form of the Uniform Access Principle, * First-class functions, * Anonymous functions (less important). Bonus points for languages that: * Aren't dependent on Microsoft or any specific operating system. * Have a non-JVM implementation. * Have an ORM. * Have a web development framework. Apart from these criteria, I'd still love to know more reasons why to choose the language(s) in your answer, and any experience you've had with them. Hopefully the answers here will be useful to anyone else that wants to evaluate some alternatives to Java without blindly switching to another popular language such as C#. _*Edit: My question was closed due to how poorly I communicated my question before. I hope that I have cleared things up since I got the question re-opened. By the way, I've put my own research into a community wiki answer below, so don't think I haven't done any homework. ;-)_"} {"_id": "80390", "title": "Where are all the DBAs?", "text": "For some reason I got thinking the other day about DBAs and what they do. This thread goes some way to answering this question, but then I looked up the leading jobs site in my area out of curiosity, and it seems like there are more Oracle DBA jobs around than many other technology specialties. Even relatively common-sounding ones, such as \"Java Developer\" or \"Network Administrator\". Here's the thing: I've been in this industry for ten years, worked in several jobs (a couple in fairly large corporate shops too), and I've never actually seen a real live DBA.
There was usually a self-taught \"database guru\" around who was the go-to guy for database issues (otherwise employed as a developer like the rest of us), but I've never seen anyone in an official DBA role, anywhere, ever. So, where are all the DBAs? I'm guessing that since all the places I've worked so far were relatively application-oriented, I've just never experienced a very hardcore DB-heavy environment. At the same time, at least one of the jobs I've had seemed like a pretty extreme data-centred operation (real time market/trading systems, huge databases), and even here the only \"database people\" were these \"developers who were database gurus on the side\". No official DBA roles. Is it really just a case of me never having been in the kind of environment where DBAs are needed? If so, what kind of environment is that? Is this phenomenon perhaps to do with data centres being separate/outsourced operations these days, so most application level programmers just don't see them anymore? **Note:** I am basically trying to understand where the separation between \"developers who know databases well\" and actual DBAs is. It seems like a lot of dev roles require some pretty hardcore database knowledge these days (and that most development teams - even in quite DB-heavy environments - get by without an official DBA on staff). i.e. please don't close or move to dba.SE."} {"_id": "236582", "title": "Javascript: Anonymous functions", "text": "How do I turn this definition of an anonymous function, An anonymous function is a function that is assigned to a variable. Anonymous functions are also used when you want to perform a short and straightforward task, into an analogy?"} {"_id": "236586", "title": "Natural Language to Search Criteria - Date Ranges", "text": "Consider an application that stores a set of records that contain: * Description * Cost * Purchase Date I'd like to be able to allow users to utilize natural language to search the dataset. For example, the search expression: > blue from last month less than $20 would translate into (Linq as an example only - correctness of the query is not in question): _db.Widgets.Where(w => w.Description.Contains(\"blue\") && w.PurchaseDate >= DateTime.Now.AddMonths(-1) && w.Cost < 20) I'm struggling to find a starting point. Any resources to get me in the right direction would be appreciated (I am working in .NET)."} {"_id": "97460", "title": "Which contract work type provides the most stability while offering flexibility?", "text": "I tried my best to frame this question in such a way that it will help others out there who are curious about the same thing as I am. I am currently a full time C++ programmer. I also do a little bit of C#/.NET programming as well. I have over 4 years of experience total. I have looked into the possibility of doing contract programming. I am looking to avoid having to be stuck on one team/project forever. I am single, have no kids, and have enough money saved up to last over a year without an income. I live in one of the biggest cities in the U.S. I know there are different types of contract work, such as W2, independent, contract-to-hire, etc. I'm looking to work on contracts that last 6 months to a year. I would like to avoid having to do most of the marketing/overhead work myself and just want to do the programming work. I thought about getting contract work through different agencies and they would get a cut of the pay for doing the searching/overhead work for me.
I just feel so stuck in my current job and want the freedom of working on a variety of projects and having some flexibility in hours and location (whether it's working at home or at another office). I have no desire to move up to management or navigate through office politics every day. What would be the best option for someone in my situation? W2 contracts through an agency? I wouldn't mind getting a little less pay as long as someone would take care of most of the marketing/overhead work involved."} {"_id": "236589", "title": "Java Communication API not available for Windows", "text": "I want to write a program in Java using RS-232. But I am unable to find the javax.comm package for Windows. Which library should I use for this purpose? http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-misc-419423.html The only packages available here are not for Windows. Can you please guide me, as I am new to this kind of development. Using RXTX + Java, I got this code from the internet: import java.util.Enumeration; import gnu.io.CommPortIdentifier; public class SimpleWrite { public static void main (String args[]) { Enumeration port_list = CommPortIdentifier.getPortIdentifiers(); System.out.println(port_list); while (port_list.hasMoreElements()) { CommPortIdentifier port_id = (CommPortIdentifier)port_list.nextElement(); if (port_id.getPortType() == CommPortIdentifier.PORT_SERIAL) { System.out.println (\"Serial port:\" + port_id.getName()); } else if (port_id.getPortType() == CommPortIdentifier.PORT_PARALLEL) { System.out.println (\"Parallel port:\" + port_id.getName()); } else System.out.println (\"Other port:\" + port_id.getName()); } } } But this code gives me the error that gnu.io.rxtx.properties has not been detected."} {"_id": "190797", "title": "Do frameworks put too much abstraction?", "text": "I've been programming for a little under a year and have some experience writing systems applications, web apps, and scripts for businesses/organizations. However, one thing I've never really done is work with a framework like Django, Rails or Zend. Looking over the Django framework, I'm a little frustrated with how much is abstracted away in frameworks. I understand the core goals of DRY and minimal code, but some of this over-reliance on different modules and heavy abstraction of core functions feels like it: 1. Makes programs get dated really fast because of the ever-changing nature of modules/frameworks, 2. Makes code hard to understand because of the plethora of frameworks and modules available and all of their idiosyncrasies, 3. Makes code less logical unless you've read all of the documentation; i.e., I can read through some list comprehensions and conditional logic and figure out what a program is doing, but when you see functions that require passing in arbitrary strings and dictionaries, things get a little hard to understand unless you're already a guru in a given module; and: 4. Makes it difficult and tedious to switch between frameworks. Switching between languages is already a challenge, but it's manageable if you have a strong enough understanding of their core functionality/philosophy. Switching between frameworks seems to be more a matter of rote memorization, which in some ways seems to encourage the very inefficiency these frameworks were designed to eliminate. Do we really need to put like 50 layers of abstraction on top of something as simple as a MySQL query? Why not use something like PHP's PDO interface, where prepared statements/input testing is handled but the universally understandable SQL query is still a part of the function?
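For reference, a minimal sketch of the PDO style just mentioned (connection details and table are invented): the SQL stays visible while input binding is handled by the driver.

```php
<?php
// Prepared statement via PDO: plain SQL plus safe parameter binding.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('SELECT id, name, price FROM products WHERE price < :max');
$stmt->execute([':max' => 20]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['name'], "\n";
}
```

One line of prepared SQL here stands in for what a heavier ORM would wrap in several layers, which is the trade-off the question is weighing.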
Are those abstractions really useful? Isn't feature bloat making them useless, making applications more difficult compared to similar applications written without using a framework?"} {"_id": "176196", "title": "Static analysis, dynamic analysis and testing", "text": "Based on answers I have received here and then confirmed in some authoritative sources (not ISTQB, which seems to be too vague), there are 3 activities: * Static analysis * Dynamic analysis * Testing But is there any reason why we cannot combine all of that as \"testing\"? I mean, even a dynamic analysis review of a program to look for memory leaks is a kind of testing, right?"} {"_id": "176197", "title": "A little code to allow word substitution depending on user", "text": "I'm creating a demo web app in HTML in order for people to physically see and comment on the app prior to committing to a proper build. Whilst the proper app will be database driven, my demo is just standard HTML with some JavaScript effects. What I do want to demonstrate is that different user groups will see different words. For example, imagine I have an HTML sentence that says: > This will cost \u00a3100 to begin. What I need is some way of identifying that if the user has deemed themselves to be from the US, the sentence says: > This will cost $100 to begin. This requirement is peppered throughout the pages but I'm happy to add each one manually. So I envisage some code along the lines of 'first, remove the [boot US] trunk' where the UK version is 'first remove the boot' but the code is saying that the visitor needs the US version. It then looks up boot (in an Access database perhaps) and sees that the table says for boot for US, display 'trunk'. I'm not a programmer but I can normally cobble together scripts so I'm hoping someone may have a relatively easy solution in JavaScript, CSS or ASP. To recap: I have a number of words or short sentences that need to appear differently, and I'm happy to manually insert each one if necessary (but it would be even better if the words were automatically changed). And I need a device which allows me to tell the pages to choose the US version, or for example, the New Zealand version."} {"_id": "190792", "title": "Streamlining ASP.Net MVC deployment?", "text": "I own a VPS with Windows Server 2012 on it. I can install whatever I want on it. In the past when deploying an ASP.Net MVC project, I would right click the project in the solution and `Publish` it. I would then copy over the files to the IIS folder and that would be a `deployment` for me. But this is no longer something I want to do. There has to be an easier way, right? How can I streamline the deployment process? I'm 100% in the dark on this subject with .NET. Can I set something up on my server so I can `Web Deploy` to my VPS? Or is there a better alternative? Thank you!"} {"_id": "97468", "title": "solve TOR edge node problem by using .onion proxy?", "text": "I would like to improve the TOR network, where the exit nodes are a vulnerability to concealing traffic. From my understanding, traffic to .onion sites is not decrypted by exit nodes, so therefore - in theory - a .onion site web proxy could be used to further anonymize traffic. Yes/no?
Perhaps you have insight into the coding and routing behind these concepts to elaborate on why this is a good/not good idea."} {"_id": "196659", "title": "ACID compliant Database that isn't NoSQL?", "text": "I'm not necessarily asking if a NoSQL database can be ACID compliant, which has been asked here: Is there any NoSQL that is ACID compliant? I'm wondering whether we have a database, either now or in the future, that is or wants to be another option to a traditional RDBMS. I know NoSQL was supposed to be the big thing and RDBMS were supposed to go by the wayside and so forth (I've read I dunno how many articles on it). But that never really solved the issue of data that was very strict and had to be kept consistent (like bank transactions... stuff like that). So when RDBMS was \"supposedly\" supposed to go to the wayside... what was supposed to replace it for data that had to be strict?"} {"_id": "227651", "title": "Flat organizations vs hierarchical for software development", "text": "My organization was flat two years ago, and people felt that the general manager had disproportionate, almost dictatorial powers. Also the general manager didn't have time to coach employees, so some employees were dysfunctional and nothing was done about them. However, on the plus side everyone was involved in the decision making, thus motivation was very high. So what do you think about flat vs hierarchical organizational structures in software development?"} {"_id": "167882", "title": "Understanding clojure keywords", "text": "I'm taking my first steps with Clojure. Otherwise, I'm somewhat competent with JavaScript, Python, Java, and a little C. I was reading this article that describes destructuring vectors and maps. E.g. => (def point [0 0]) => (let [[x y] point] => (println \"the coordinates are:\" x y)) the coordinates are: 0 0 but I'm having a difficult time understanding keywords. At first glance, they seem really simple, as they just evaluate to themselves: => :test :test But they seem to be used in so many different ways and I don't understand how to think about them. E.g., you can also do stuff like this: => (defn full-name [& {first :first last :last}] => (println first last)) => (full-name :first \"Tom\" :last \"Brennan\") Tom Brennan nil This doesn't seem intuitive to me. I would have guessed the arguments should have been something more like: (full-name {:first \"Tom\" :last \"Brennan\"}) because it looks like in the function definition that you're saying \"no required arguments, but a variable number of arguments comes in the form of a single map\". But it seems more like you're saying \"no required arguments, but a variable number of arguments come, which should be a list of alternating keywords and values... ?\" I'm not really sure how to wrap my brain around this. Also, things like this confuse me too: => (def population {:humans 5 :zombies 1000}) => (:zombies population) 1000 => (population :zombies) 1000 How do maps and keywords suddenly become functions? If I could get some clarification on the use of keywords in these two examples, that would be really helpful.
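On the last example, one part is safe to state: keywords and maps both implement Clojure's function interface (IFn), so calling them is just shorthand for a map lookup. A few more REPL probes of the same behaviour:

```clojure
;; A keyword in function position looks itself up in its argument;
;; a map in function position looks up the given key.
(:zombies population)      ;=> 1000, same as (get population :zombies)
(population :humans)       ;=> 5
(:vampires population 0)   ;=> 0 (optional default when the key is absent)
```

The `[& {first :first last :last}]` parameter list works the same way underneath: the trailing alternating keyword/value arguments are gathered into a map and then destructured.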
**Update** I've also seen http://stackoverflow.com/questions/3337888/clojure-named-arguments and while the accepted answer is a great demonstration of how to use keywords with destructuring and named arguments, I'm really looking more for understanding how to think about them--why the language is designed this way and how I can best internalize their use."} {"_id": "90119", "title": "Am I overusing trees as a model or are they just very common?", "text": "I'm working on a project and I've found that I've modeled the two largest components as trees. My main uses so far are: 1. Generically model physical containers (and sub containers, sub sub containers etc) 2. Model liquid samples (and sub samples, and sample derivatives - and subs of those, etc) I now need to model some events based on the shipment of samples, e.g. collected at location X; when it arrives at the processing centre, do Y. It now seems natural to model the events as a tree to denote which events follow on from others. I'm starting to wonder if I am just seeing trees everywhere because I want to, or if it's a legit approach. **Edit To Answer Some Points Raised** Everyone seems to agree with trees for the containers. For the liquid samples, although there are various types - what I really need to track is parentage, so that given any sample, I can quickly find all things that derived from it in some way. I have been told I can assume that samples will not be combined, so all sub samples (or derivatives) will have only 1 parent. As to the events, yeah, I think I was just going for a tree for ease. It doesn't make much sense to me after reading comments and further thought. Thanks all for the input."} {"_id": "152266", "title": "Are symbolic programming and metaprogramming the same thing?", "text": "Are symbolic programming and metaprogramming the same thing? I've always read about symbolic programming while using Mathematica, but I've never searched for its meaning. I've searched about it today and I found that both concepts are similar; are there any differences?"} {"_id": "57894", "title": "Ever taught yourself drawing/art skills?", "text": "I often see programmers advise non-tech people that they should 'just learn to code' if they want to execute their big idea, saying it's not that hard to get the ball rolling. However, while as programmers we can be adept at writing websites including backends and CMSs etc., there's still a great need for the thing **to look nice.** Usually we can leverage clipart and templates etc. and do some image editing to get by. But being able to draw original art yourself would be a big advantage I'm guessing. So my question is: has anyone here tried to brush up on their art skills to help with the design aspects of their projects? I know some people have 'always' been good at art, or at least been good at drawing since their highschool days, but I'm curious about anyone who went from being bad at drawing, to regularly completing original art work that went into production on a site or application front-end."} {"_id": "29344", "title": "JIT compiler for C, C++, and the likes", "text": "Is there any just-in-time compiler out there for compiled languages, such as C and C++? (The first names that come to mind are Clang and LLVM! But I don't think they currently support it.) Explanation: I think the software could benefit from runtime profiling feedback and aggressively optimized recompilation of hotspots at runtime, even for compiled-to-machine languages like C and C++.
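For contrast, the profile-guided optimization workflow discussed next looks roughly like this with GCC (the flags are real GCC options; the file names are invented):

```sh
# 1. Build an instrumented binary that records execution counts.
gcc -O2 -fprofile-generate app.c -o app

# 2. Run it on representative input; the run writes .gcda profile files.
./app < training-input.txt

# 3. Rebuild, letting the optimizer use the recorded profile.
gcc -O2 -fprofile-use app.c -o app
```

The profile is fixed at build time, which is precisely the limitation the next paragraph contrasts with a JIT.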
Profile-guided optimization does a similar job, but with the difference that a JIT would be more flexible in different environments. In PGO you run your binary prior to releasing it. After you release it, it uses no environment/input feedback collected at runtime. So if the input pattern changes, it is prone to a performance penalty. But a JIT works well even in those conditions. However, I think it is controversial whether the JIT compilation performance benefit outweighs its own overhead."} {"_id": "142177", "title": "How is JavaScript insecure, and what are the main methods used to deal with that?", "text": "I just read about Caja, which is a \"sanitized\" version of JavaScript. But I'm wondering - what is the big problem with JavaScript (it seems so widely used)? Just how dangerous is it?"} {"_id": "21412", "title": "Why I'm not selected in an interview?", "text": "I've been working in development for 4 years, and 3.5 in PHP - why do I not seem to be able to get selected in an interview? I want to know what special things the interviewer wants to see in candidates - for senior PHP developer roles. The interviewer asks me 10 questions and I'm able to answer only 5. Does selection depend on these things? It doesn't mean that I can't solve the problem; I can google the question, I can ask on forums. Why don't they understand that a man can't remember all the answers for each and every question? Especially programming ones. Please advise."} {"_id": "142175", "title": "Introducing functional programming constructs in non-functional programming languages", "text": "This question has been going through my mind quite a lot lately and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only **some** FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and access to all the Java libraries anyway? Similarly: why do **some** FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell.
If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just **some basic** FP in Java? So, my main point is: why are non-functional programming languages being extended with **some** functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Or can it be that many non-FP programmers are not really interested in understanding and using functional programming, but they find it practically convenient to adopt certain **FP idioms** in their non-FP language? **IMPORTANT NOTE** Just in case (because I have seen several _language wars_ on this site): I mentioned the languages I know better; this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP."} {"_id": "63269", "title": "Why would I choose Unity over Autofac", "text": "I'm looking to start a new application and I want to use Dependency Injection. I have a lot of \"Microsoft is the only way to go\" guys in our shop, so of course Unity is the way they wanted to go. However, I am leaning more towards Autofac because of its speed and its Module feature. I found the speed results on this page. If you use Autofac or Unity, can you give me some solid reasons not to use Unity and to use Autofac over Unity? If not, can you point me in a good direction to do some research to bring back to our next meeting?"} {"_id": "142173", "title": "Why do computer architecture textbooks prefer MIPS architecture?", "text": "I have read a lot of computer architecture textbooks, and I wonder why most of them (if not all) use MIPS as the architecture to teach. Why MIPS and not Intel or AMD or something else? What makes the architecture suitable for teaching?"} {"_id": "176222", "title": "Is adding in the header the license type enough to say: \"my code is licensed\"?", "text": "I read on various sites about licenses. I just put the license type in the header of the file (in my case a JavaScript file, open-source): /* * \"codeName\" \"version\" * http://officialsite.com/ * * Copyright 2012 \"codeName\" * Released under the \"LICENSE NAME\" license * http://officialsite.com/LICENSE NAME */ javascript code ... In the same folder I leave a copy of the license. The listing of the folder looks like this: * codeName.js * LICENSE In the file `LICENSE` is the full text of the license my code uses.
What I cannot find anywhere is whether this is enough to say _my code is licensed_ (in the open-source case). Is something more required?"} {"_id": "223826", "title": "Are for loops supposed to be read inward or outward?", "text": "for (i = 0; i < 3; i++) { for (j = 0; j < 4; j++) { cout << arr2d[i][j] << \"\\t\"; } cout << endl; ..... Like that, for example. Do you read the for loop inward-out (starting from `for (j = 0; j < 4; j++)`) or outward-in (starting from `for (i = 0; i < 3; i++)`)? Just wondering. :D"} {"_id": "211680", "title": "A question about password storage concept", "text": "My previous question is here: A question about storing passwords. This question is somewhat related to my previous one but with some new doubts. Take, for example, Windows. I've heard that Windows stores passwords as NTLM hashes somewhere inside the registry (if I'm not wrong). There are programs that crack the NTLM hashes to recover the passwords (by brute force) in case you've forgotten them or just want to hack someone's PC. Why don't they just replace the NTLM hashes in the registry with something common, like the NTLM hash of 'password'? This was just an example for your understanding. What I'm trying to learn is: how do large programs store, change or verify passwords without any hacker/attacker being able to read them?"} {"_id": "211682", "title": "Is a large increase in velocity realistic in a Scrum environment?", "text": "My manager has recently really been pushing to use velocity as a target and measure of productivity. We are currently working at an average velocity of 50 story points. My manager wants us to increase it by 40% to 70 story points (with no increase in team members). If we don't achieve this increase he wants us to deliver a full breakdown explaining why. The whole idea of measuring team performance by velocity and using it as a target seems wrong to me, but I am finding it difficult to explain why. Any help? Why isn't this the right way to measure and incentivize productivity?"} {"_id": "211683", "title": "Implementing User Authentication on an N-Tier Web Application", "text": "_I appreciate all help and feedback. The bolded parts are the critical ones if this is too verbose. Perhaps it will help to mention I am a green developer._ ## Background Currently at work we are developing a web application with strict constraints set forth by the client. Normally we are a Rails shop, but the client really wanted to work with us. Thus, to fit into their architecture, we will be using ASP.NET. We are not very experienced with ASP.NET, but some of the team, including myself, have used .NET for desktop applications. This application is three-tiered and includes: * Public-facing server(s) * Web services server(s) * Data server(s) The client is expecting high demand and they may have multiple servers in any one of the tiers. There is a firewall between every tier. From my understanding this is a very common setup. **A core goal is to minimize trips over the wire from the public-facing server to the database.** ## The Problem **How do we handle user authentication without hampering performance on an N-tier application with potentially many servers at each layer?** ## What We've Done so Far We toyed with the idea of using view state but ultimately felt like this would be a poor idea.
Of course we are not opposed to revisiting view state, but we feel that view state offers no benefits over cookies and simply makes it harder for a user to have multiple tabs of our application open. I have been exploring the path of using the default session state with a custom-implemented session store. **The default session state tends to generate multiple requests going over the wire. For logging in and out this is okay, but while the user is using the app we do not want this.** The custom session store helps minimize the impact, but not by enough. ## Conclusion & Possible Idea Since trips to the database will be inevitable on almost all requests once inside the application, perhaps giving the user a key on successful login is the best way to go. Then when the user requests a new page, or posts a new page, we could send that key with the related SQL transaction for the request and validate the key. I believe this will simplify the code base immensely. So to reiterate the problem: **How do we handle user authentication without hampering performance on an N-tier application with potentially many servers at each layer?** Does the idea above sound like a solid implementation? Thanks, Jonathon"} {"_id": "62798", "title": "How to lock items when we do check out in commerce applications", "text": "I have a specific requirement. Let's say we have 3 items and a user has selected all 3 items to buy. I need to lock those 3 items (meaning the other people who want to buy the same items cannot view them) for around 10 minutes. How do I implement this? I am developing a Java web app using Hibernate and Struts."} {"_id": "211688", "title": "Use a service layer with MVC", "text": "If a controller gets too fat and model instantiation starts to add up, a service layer could be used. * If I just wrap the logic inside a service class, I will get a bunch of services with one or two methods each. This feels like a code smell. Are there any best practices regarding this? * Can a service instantiate models? * If a service instantiates models, the services can't be unit tested. Can they only be covered by integration tests?"} {"_id": "193432", "title": "The best way to store dictionary from file", "text": "I'm working on a translator in C++. Basically I want to parse the file with translations and store it in my program, so I can **perform searches through the words and simply access the corresponding word**. My file will look like this: word|translation second word|second translation etc. It doesn't have to be `|` as the delimiter, and the words can contain spaces. So after I store it in my program I want to search for a word and get the corresponding word easily. The question is, what is 'the best' way to store this dictionary? Should I use dynamic structures and link them? Maybe vectors? Or should I use a two-dimensional array to store the two strings? Could you please propose how the structure should look?"} {"_id": "67568", "title": "How can one use git-flow effectively on a project in which more than one major version is being maintained?", "text": "I've migrated several of my projects over to the git-flow workflow, and I'm loving it. However, I haven't found a best practice that keeps things flowing as smoothly when working with a project in which more than one major version is maintained at a time. _Specifically, I'm not maintaining a \"free version\" and a \"paid version\" or any other parallel model, I'm talking about a project in which Version 1 gets released, and remains supported with minor versions (1.1, 1.2, etc.)
until Version 3 has been released, at which point 2 and 3 would be maintained, until 4 is released...you get the idea._ How have you, or would you, maintain two or more supported versions of a project at once in a git-flow workflow?"} {"_id": "67561", "title": "Dealing with resistance to testing code", "text": "I've recently switched jobs. At my previous job, everyone wrote tests and we were all in a happy place. In my new role, I've been asked to set up CI and testing. I'm experiencing some resistance to testing from some developers and wondered if anyone had experienced this and had strategies for dealing with it?"} {"_id": "255866", "title": "What is meant by \"code rot\"?", "text": "People say that 'code rots' if you don't clean it. What does that mean? Obviously code is text, not an organism that can change itself. So why does code 'rot' if you don't touch it? What makes it 'rot', and what does it actually mean? Please explain; examples would be welcome."} {"_id": "187723", "title": "code review with git-flow and github", "text": "With regular git and GitHub I can do a code review by simply creating a pull request from the feature branch I'm working on to the master branch. How would I do code reviews with git-flow? With a workflow like `git flow feature finish`, I'm confused as to where the code review actually happens and how git-flow or git can facilitate that review."} {"_id": "187722", "title": "Are there any empirical studies about the effects of commenting source code on software quality, maintainability and developer productivity?", "text": "I am an advocate of commenting source code and documenting software products. It is my personal experience and observation that working on source code that is rigorously commented has helped me in different ways when I have had to grow software or maintain it. However, there's another camp that says commenting is ultimately worthless, or that its value is questionable. Numerous proponents of coding without commenting argue that: * If a piece of code is well-written, it is self-explanatory and hence does not need commenting * If a piece of code is not self-explanatory, then refactor it and make it self-explanatory so that it does not need any comments * Your test suite is your live documentation * Over time code and comments get out of sync and it becomes another source of headaches * Agile says working code is more important than piles of documentation, so we can safely ignore writing comments To me this is just dogma. Again, my personal observation has been that software written by teams of smart and experienced developers ultimately ends up with a considerable amount of code that is not self-explanatory. Again, the Java API, Cocoa API, Android API, etc. show that if you want to write and maintain quality documentation, it is possible. Having said all this, conversations about the pros and cons of documentation and commenting source code that are based on personal beliefs usually do not end well and lead to no satisfying conclusions. As such I am looking for academic papers and empirical studies about the effects of software documentation, especially commenting source code, on its quality and maintainability, as well as its effects on team productivity. Have you stumbled upon such articles, and what has been the outcome of them, if any?"} {"_id": "193381", "title": "How does the ETVDX model fit in with project management?", "text": "In a lecture, the lecturer described the following model: E - entry (the preconditions to a task).
T - task (doing the task) V - verifying the task's quality D - delivering the task X - exit; together, ETVDX. If anyone is familiar with this 'generic compliance model', how does it fit into software development exactly? I presume it's equivalent to the waterfall model of negotiating requirements > defining/decomposing stage > estimating effort > estimating resources > developing a schedule."} {"_id": "251842", "title": "Frog crossing N lane road problem", "text": "The problem says: > There are `N` lanes, and the speed of each lane is given. There are many cars in all the lanes, and the start position and the length of each car and its corresponding lane are given. There is a frog which can do 2 functions: `wait()` or `jump()`. Find if there is a path for the frog to go from lane `1` to lane `N` without getting hit by any of the moving cars. I am not able to solve it. I took the data structures `speed[1..n]` to denote the speed of cars in the `i`th lane, `length[1..n]` denoting the length of cars, and `start[1..n]` denoting the starting position of cars. Then I took the starting lane and, on the basis of time calculations, decided whether to jump or wait. **My algorithm**: for each lane 1. I will assume that it is safe to jump to the next lane. 2. Then I will calculate the time period within which the frog can be hit (the time when I add the car length to the total distance and the time when I do not add the car length). 3. Now if the frog would arrive in the next lane within that time period, then it is better to wait in the current lane. 4. While waiting I will also check whether it will be hit by a car in the current lane or not. But in some cases my algorithm will not work. How should I structure my code to solve this problem?"} {"_id": "168688", "title": "How to start competitive programming?", "text": "I have been practicing coding for a while, but the problem is that it takes me a lot of time to write a solution for a problem. I want to ask if competitive programming can help me improve this. If yes, then how should I start, and which websites could I use (like TopCoder)? I obviously won't be able to solve very hard problems for now. What should I do? If no, what else should I do? I also have another problem: I want to learn coding, but I feel that I am not very good at it. What should I do? It keeps bugging me from inside. I know some people may not find this question informative, but please at least allow me to get an answer."} {"_id": "149478", "title": "Why is Invariance, Covariance and Contravariance necessary in typed languages", "text": "OK, I'm not really sure if I'm right. I only recently learned that I needed a contravariant interface to be able to pass that interface as a parameter in C#, and this feature was only added in .NET 4.0. So obviously there is some reason you can't do this with covariant or invariant interfaces, and it probably has something to do with what you pass into, and get out of, the generic class. I'm not really sure what the limitation here is. I know I've seen cases where a contravariant interface can be used where the others can't; where can I use a covariant one (when is that necessary?)? This is all out of curiosity (I'm very interested in programming concepts surrounding some languages).
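To make my confusion concrete, here is the pattern as I currently understand it, sketched with Java's use-site wildcards (just a sketch of the general idea; the class and method names are made up):

    import java.util.*;

    class VarianceSketch {
        // Covariance (? extends): safe to read Numbers out, unsafe to write in.
        static double sum(List<? extends Number> nums) {
            double total = 0;
            for (Number n : nums) total += n.doubleValue(); // reads are fine
            // nums.add(1); // would not compile: no writes into "? extends"
            return total;
        }

        // Contravariance (? super): safe to write Integers in; reads only give Object.
        static void fill(List<? super Integer> target) {
            for (int i = 0; i < 3; i++) target.add(i); // writes are fine
            // Integer x = target.get(0); // would not compile: reads come back as Object
        }

        public static void main(String[] args) {
            List<Number> numbers = new ArrayList<>();
            fill(numbers);                    // List<Number> accepted contravariantly
            System.out.println(sum(numbers)); // and read covariantly
        }
    }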
I would love to see some examples (a link would be fine) of why it is necessary to use these features, what would break if you allowed covariant/invariant types into a contravariant scenario and vice versa, and where each one shines."} {"_id": "157729", "title": "Is it better to use multiple html pages or just change content on the same page using JavaScript?", "text": "Is it better to use multiple HTML pages and link them together with `href`, or just change content on the same page using JavaScript? I am thinking about how to lay out a page, and I don't have a lot of content. It would probably be about three or four pages if I just used all HTML. If I toggled and swapped bits of HTML around using JavaScript, I could probably fit it all on one page, and it would be a bit \"cooler\", in that it's more of an application, dynamic, etc. But I'm just wondering what the best way to go is here. Is it horrible to have too much JavaScript \"squashed\" into one document? How do you know where to draw the line when thinking about this?"} {"_id": "158919", "title": "Are these steps enough to put my bash script under GPL 3?", "text": "I have written a bash script I would like to put under GPL v3. I've read the GNU documentation on How to Apply These Terms to Your New Programs and How to use GNU licenses for your own software. Still, I'm not quite sure what to put there and which artifacts are needed. So far, I did the following: 1. Put a file called `COPYING` (which contains the license) into the project folder 2. At the beginning of my script, I attached the following: > up is a bash function to simplify vertical file system navigation. > > Copyright (C) 2012 Oliver Weiler > > This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. > > This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. > > You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. Is this all I have to do? Is putting the license into a separate file actually needed, if the notice in the script file links to it?"} {"_id": "162743", "title": "Applying for job: how to showcase work done for (private) past clients?", "text": "I want to apply for my first \"real\" (read: non-freelance) Ruby on Rails job. I've built several apps already. My best work (also the most logically complicated app) was for a freelance client, and I'd like to show it to potential employers. The only problem is: it isn't online anymore. And I've lost touch with the client. How can I include this work in my portfolio? * * * **About the app:** It's a Facebook game. The client's business idea for this app was not the best. It was never going to make any money. I think it was kind of a vanity side project for him. The logo and graphics are nice-looking, though, and were designed by the client. I've actually spent a lot of time recently recoding most of the app, and adding a full test suite. I want to showcase the BDD / TDD skills I've acquired. * * * I'm not very familiar with the etiquette (/law?) concerning this situation.
Can I just put my new version of the app up at a free Heroku URL (perhaps with a \"credits\" section, where I credit the ideas and graphic designs to my former client)? **NOTE:** Again, this is just to show potential employers. I am **not** trying to market the app as my idea, or attract any users. Can I put some or all of the code on GitHub? What if I don't put the code up publicly, but merely send a tarball to potential employers? Do I need to ask permission from my former client (and what if he says no)? The last thing I want to do is get into any legal trouble, or offend people I'm trying to get a job from. But I believe that my work and experience on _this app_ are my highest recommendation for getting a job."} {"_id": "157720", "title": "What are the disadvantages of starting an opensource project if one is not an experienced programmer?", "text": "I have lots of ideas for products to be built. The problem is that I have less than a year of professional work experience and I am afraid of getting **judged negatively in the future** based on what I produce now. I have no clue if my code is any good. I am not familiar with any of the coding patterns. All I know is how to build products that work. I want to have a public profile on GitHub for my future projects, and I will try hard to make sure that the code is well commented, optimized and clean. These are the things that I fear getting exposed publicly: 1. My code may not be highly optimized. 2. Wrong usage of certain libraries or functions which coincidentally get the job done. 3. Not knowing or following any coding pattern. 4. Lots of bugs / not considering corner and edge cases 5. Fundamental lack of understanding and application of certain concepts such as thread safety, concurrency issues in multi-threaded programming, etc. Should I go ahead and get started, or continue to stick to building stuff locally and privately until I get more experience? I don't want the mistakes made here to haunt my career prospects in the long run."} {"_id": "157722", "title": "What sorts of software patent issues should one be aware of when writing software?", "text": "**The scope of this question is intended to be global in nature:** techniques (approaches to solving a problem, such as the use of a layer palette), operations (actions that the software or its user can take, such as using a paint bucket tool to fill in a selection or otherwise bounded area), etc. If the development process comes into play, such as Person A saying \"We need to implement this one cool feature in Software X\" and perhaps any study of that functionality via decompilation, hex-editing, etc. (vs. a more genuine, independent process of imagining functionality or writing up the spec), then that would be appropriate to mention. I am in the process of creating a substantial application, and I don't know how much of a problem software patents will be. I'm of the impression that no one can patent a \"paint bucket\" tool (though I wouldn't know the legal justification behind why that can't be patented), but perhaps if you stray much further from that, you'll get hit with a lawsuit? I really have no idea! In my case, I'm not even looking at competitors' software for this very reason. I did sit down with someone who uses software that is currently on the market long enough to know that I can easily demolish it, but I would just consider that market research.
The only things that excited me during that little demo session were instances where I would dream up some functionality and his response would be \"Yeah, that would be really useful, and no, this software can't do that.\" There was no \"I really need to copy that feature!\" because, well, the application was garbage for the most part, despite its price tag."} {"_id": "158917", "title": "Add complex customization to form or create two forms?", "text": "I'm working on a WPF application that both imports and exports delimited text files. At both ends, there is a UserControl which encapsulates some logic about delimiter configuration. It has some controls for individually selecting delimiters, and a 'Presets' dropdown for quickly selecting certain common combinations such as CSV. I've been tasked with modifying the export configuration dialog. They want the presets feature modified to also cover some text formatting options that aren't related to delimiters. The changes should only appear in the export configuration dialog, because the settings in question aren't applicable to importing. They will involve adding some new controls, which isn't too bad. But there will also need to be some additional presets which aren't available on the import dialog, and their presence may impact the behavior of the dialog overall. I'm unsure of the best way to do this. Three options come to mind, but of course I might be missing one: 1. Clone the UserControl and make the requisite changes to it. Each dialog now has a separate control. However, both controls will have a lot of logic in common, so I'm uncomfortable with this from a DRY perspective. 2. Keep the single UserControl but throw in a 'mode' property. Based on its setting, show or hide certain controls, swap out the ItemsSource for the 'Presets' dropdown, etc. This is attractive since it involves less copy-and-paste, but at the cost of the single control being more complicated overall, and perhaps more difficult to maintain further down the road. 3. Redo the whole thing's architecture so that the behavior-related aspects are properly managed in a separate ViewModel. This new ViewModel should be programmatically configurable to represent the different working modes (basically by just swapping out the list of available Presets, each represented as an instance of a Preset class). Create two different sets of UI to represent the different layouts, with one simply not having the controls for options it doesn't care about. My gut instinct is that #3 is the best option. I'm concerned that it's an excessive effort, though, and that I'm just being too picky about one of the first two options. Or that there's some other pitfall with it I haven't thought of. I have a bit of a history with this UserControl and my personal feeling is that it's kind of a PITA to maintain, so I want to be cautious in case I'm overeager for the rewrite for emotional reasons."} {"_id": "209943", "title": "Switching between Azure Mobile Services vs my own implementation. Will UIDs change?", "text": "I'm looking at Azure Mobile Services, particularly the Authentication part (which I believe relies exclusively on OAuth 1 or 2). I want to make sure that my application isn't tightly coupled to the service and that I can bring authentication back in-house using either of the following methods: 1. I use a version of Azure Mobile Services built for 2012 R2 (which may include support for AMS + OAuth) 2. I use a local DLL such as .NET Open OAuth to handle the authentication.
My theory is that UIDs are portable between all three scenarios (the third being Azure Mobile Services itself), because I would manually be typing the same secret into each provider. My second gut reaction is that any Windows Live IDs will not have the same URL and aren't portable in these scenarios (based on my Azure ACS and LiveID experience). However, since I've noticed OAuth support in .NET OpenID, I think I could be mistaken. **Question** Could someone more well versed in authentication and Microsoft products let me know if authentication can be \"moved\" to and from Azure Mobile Services if needed? The main point of contention, I believe, will be differing user IDs after the switch, which would mean that after migration, users will lose their previous history, etc. in my application."} {"_id": "209944", "title": "What arguments can I use to justify the use of either XML or JSON to store and transmit object data?", "text": "My project lead considers both of these approaches to be unnecessary overhead. I have been involved in and have witnessed a lot of talk around XML vs JSON, but this is the first time that I have heard an argument against BOTH. We are using C# in .NET 4. We'll be storing data in a SQL Server database on the server and also on mobile devices. I've heard all the arguments between the two, but now I need to understand the arguments for and against the use of any object notation to store data - my project lead prefers CSV format."} {"_id": "114885", "title": "Which functional language is good for a beginner?", "text": "> **Possible Duplicate:** > Choosing a functional programming language I am a C++ programmer looking to learn a functional language as a hobby and out of sheer curiosity. I am not looking to be an expert, but just to get a grasp on functional programming. This language should be simple to learn and have good tutorials and resources for beginners. Are there any such languages?"} {"_id": "151761", "title": "Using visitor pattern with large object hierarchy", "text": "**Context** With a hierarchy of objects (an expression tree), I've been using a \"pseudo\" visitor pattern (pseudo, as in it does not use double dispatch): public interface MyInterface { void Accept(SomeClass operationClass); } public class MyImpl : MyInterface { public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ... } } This design, however questionable, was pretty comfortable, since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e., operator nodes that will contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on... But the time has come when I need to **add a new operation**, such as pretty printing: public class MyImpl : MyInterface { // Property does not come from MyInterface public string SomeProperty { get; set; } public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ...
} public void Accept(SomePrettyPrinter printer) { printer.PrettyPrint(this.SomeProperty); } } I basically see two options: * Keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO) * Use the \"true\" Visitor pattern, at the expense of extensibility (not an option, as I expect to have more implementations coming along the way...), with about 50+ overloads of the Visit method, each one matching a specific implementation **Question** Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?"} {"_id": "151765", "title": "How do \"custom software companies\" deal with technical debt?", "text": "**What are \"custom software companies\"?** By \"custom software companies\" I mean companies that make their money primarily from building custom, one-off bits of software. Examples are agencies, middleware companies, or contractors/consultants like Redify. **What's the opposite of \"custom software companies\"?** The opposite of the above business model is companies that focus on long-term products, whether they be deployable desktop/mobile apps or SaaS software. **A sure-fire way to build up technical debt:** I work for a company that attempts to focus on a suite of SaaS products. However, due to certain constraints we sometimes end up bending to the will of certain clients and we end up building bits of custom software that can only be used for that client. This is a sure-fire way to incur technical debt. Now we have a bit of software to maintain that adds nothing to our core product. **If custom work is a sure-fire way to build technical debt, how do agencies handle it?** So that got me thinking. Companies who don't have a core product at the center of their business model are always doing custom software work. How do they cope with the notion of technical debt? How does it not drive them into _technical bankruptcy_?"} {"_id": "187091", "title": "Avoiding ubiquitous language clashes", "text": "I have been reading DDD Quickly and wondered how to avoid naming clashes between technical terms and domain terms. For example, suppose I commonly use the repository pattern (with classes such as AddressRepository), but a customer also has something called a repository in their domain. How would I best avoid confusing the two?"} {"_id": "168534", "title": "How to implement isValid correctly?", "text": "I'm trying to provide a mechanism for validating my object like this: class SomeObject { private $_inputString; private $_errors=array(); public function __construct($inputString) { $this->_inputString = $inputString; } public function getErrors() { return $this->_errors; } public function isValid() { $isValid = preg_match(\"/Some regular expression here/\", $this->_inputString); if($isValid==0){ $this->_errors[]= 'Error was found in the input'; } return $isValid==1; } } Then when I'm testing my code I'm doing it like this: $obj = new SomeObject('an INVALID input string'); $isValid = $obj->isValid(); $errors=$obj->getErrors(); $this->assertFalse($isValid); $this->assertNotEmpty($errors); Now the test passes correctly, but I noticed a design problem here. What if the user called `$obj->getErrors()` before calling `$obj->isValid()`? The test would fail, because the user has to validate the object first before checking the errors resulting from validation.
I think this way the user depends on a sequence of actions to work properly, which I think is a bad thing, because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class, so changes are easy; renaming functions and refactoring them is possible."} {"_id": "168532", "title": "How can I compute the Big-O notation for a given piece of code?", "text": "So I just took a data structures midterm today and I was asked to determine the run time, in Big-O notation, of the following nested loop: for (int i = 0; i < n-1; i++) { for(int j = 0; j < i; j += 2) { // 1 statement } } I'm having trouble understanding the formula behind determining the run time. I thought that since the inner loop has 1 statement, and using the series equation of (n * (n - 1)) / 2, I figured it to be 1n * (n - 1) / 2, thus equaling (n^2 - n) / 2. And so I generalized the runtime to be O(n^2 / 2). I'm not sure this is right though, haha. Was I supposed to divide my answer again by 2, since j is being upped in intervals of 2? Or is my answer completely off?"} {"_id": "187094", "title": "Should programmers talk with customers / users according to MSF / agile methods?", "text": "I've just read two statements that seem to be very different: > Des Weiteren ist mangelnde Kommunikation zwischen Programmierern und Nutzern eine nicht zu vernachl\u00e4ssigende Quelle von unzureichenden Produkten. > > Translated: > > Furthermore, a lack of communication between programmers and users is a non-negligible source of inadequate [software] products. Source: de.wikipedia.org I think I have read something similar in the CHAOS report of the Standish Group. And > Insbesondere bei der Rolle Development ist Kontakt zum Kunden oder zu den Benutzern nach Meinung des MSF geradezu zu unterbinden. > > Translated: > > According to MSF, contact with the customer or the users should be all but prevented, especially for the Development role. Source: msdn.microsoft.com This also makes sense, because as a programmer I want to have happy end users. So if a user would like a new feature, I'll try to implement it. This could lead to feature creep. If I understand it correctly, MSF (Microsoft Solutions Framework) tries to avoid this problem by having a role that has contact with the customer (this is the product manager, the user experience role and maybe the testing role, isn't it?) and only one role that has contact with the development role (the program manager). **Question 1:** How do agile methods deal with the problem of feature creep? I read that developers should have very strong contact with customers in agile methods, and that one of the main problems in using Scrum is to persuade the customer to get involved in the process. In Scrum, does only the Product Owner have contact with the user / customer? Isn't this a problem, as the programmer might see different problems than the Product Owner? **Question 2:** Who does the requirements engineering in agile methods and MSF? **Question 3:** In MSF / agile methods, do you validate that your product does what the customer wants and the user needs before shipping it? How do you do it?"} {"_id": "58541", "title": "mind map for programmers", "text": "How are mind maps useful for programmers in organizing the way they work?"} {"_id": "28434", "title": "Spartan programming...
What is it good for?", "text": "We are being forced to use Spartan programming on a project, to everybody's dismay. So I get it: it makes the methods really short and it handles the simple cases first. But is it really worth the price of the code looking like something out of the Obfuscated C Code Contest? Can you see it being useful for something?"} {"_id": "62139", "title": "best practice: setting up several IDE / frameworks (at file/dir level)", "text": "I don't know exactly how to search for this topic, so if there are a lot of answers to it already, please just provide a link :) I'm getting a new laptop in a few weeks and am thinking about setting up a logical, easy-to-use and clean folder structure for coding from the beginning. (Right now I'm mixing everything coding-related together.) My question is: Does your folder structure look like: C:\\grails C:\\Java\\jre-... C:\\JAVA\\jdk-... C:\\eclipse ... etc., or do you always use the default directory? Or do you sort them, like: C:\\ide\\, C:\\framewor and C:\\CMS\\ Do you put the frameworks and IDEs on a separate partition? Do you map your projects accordingly? How do you name your projects, and what is the file structure for saving them? Is there a really good way that I've missed? I mean, I just have a handful of projects to take care of and I mostly just do some light coding or minor changes, and still I'm confused every time I try to find anything, or am often surprised to find several older versions still active... A few details: * Dual boot to a second partition is out of the question; I do like convenience (the #1 reason for me to try programming: make things easier for me.) * I'll be getting an SSD, so space is limited. * Running Windows 7 Professional 64-bit * I'll need: Eclipse, NetBeans, Grails, Groovy, Visual Studio, Ruby, Perl. Hopefully someone has a good idea about this."} {"_id": "59494", "title": "Result class dependency", "text": "I have an object containing the results of a computation. This computation is performed in a function which accepts an input object and returns the result object. The result object has a print method. This print method must print out the results, but in order to perform this operation it needs the original input object. I cannot pass the input object at printing time because it would violate the signature of the print function. One solution I am using right now is to have the result object hold a pointer to the original input object, but I don't like this dependency between the two, because the input object is mutable. How would you design for such a case?"} {"_id": "179475", "title": "In developing a soap client proxy, which return structure is easier to use and more sensible?", "text": "I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance when multiple values are being returned:
In many cases this make a lot of sense - for instance when multiple values are being returned: GetDetailsResponse Object ( Results Object ( [TotalResults] => 10 [NextPage] => 2 ) [Details] => Array ( [0] => Detail Object ( [Id] => 1 ) ) ) But some of the methods return a single scalar value or a single object or array wrapped in a response object: GetThingummyIdResponse Object ( [ThingummyId] => 42 ) In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers: $response->Details->Detail[0]->Contents->Item[5]->Id And if I unwrap them before passing them back I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bug me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is, am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unneccessary layers of indirection/abstraction worthwhile?"} {"_id": "73667", "title": "Mobile games: paid vs. ad-supported?", "text": "> **Possible Duplicate:** > Which has more benefits when selling through mobile app stores: free or > paid? I'm working on a game, and I'm hoping to distribute it through the Android and/or Apple stores. The big decision that I have to make: do I charge money for the game? Or do I put ads in it? I really hate advertising, so I'd prefer not to subject my users to it, but I'll do it if that's the only way to make money. I'd like to hear from anyone who has experience distributing mobile apps. Also, from regular users of the Apple and Android stores; how easy do they make it for you to drop $0.99 on something?"} {"_id": "144631", "title": "Is there any reason in a Java program for a special naming for a function arguments?", "text": "I'd like to know, why would I want to have a special prefixes for a function arguments, like \"p_name\", \"p_age\", \"p_sex\"? On the one hand it helps to distinguish parameter from local variable or field further in the function body, but would it help? On the other hand, I didn't saw such naming recommendations anywhere including official Java language conventions. Please advise any reasons for using such naming policy"} {"_id": "175751", "title": "Print all values in a value object", "text": "I have to debug an issue which requires me to print all the values of a Value Object that is returned by a web service call. The Value object is a complex object in the sense, it has another object as its member which in turn has another object. Printing all the values by using get methods is cumbersome. So I was wondering if there is a way to break down the value object by any way to get to a primitive level like String or int or Date and print them all using one API? I had a look at the below question but my prob is that I don't have access to the source code of the value object. The sources are in obfuscated jar. http://stackoverflow.com/questions/2413001/how-to-print-values-of-an-object- in-java"} {"_id": "175750", "title": "Keeping up with upstream changes while adding small fixes or even major changes", "text": "Often I need to apply some small fixes (to make them run on my environment) or even change some parts of the software (to tailor it to my needs) to software from outside. 
However, this obviously creates problems with updating said software, even when the update changes nothing related to my fix. It would be easier if the software provided integration points for some kind of plugins, but more often than not it doesn't. What would be an ideal workflow regarding that? Most of the projects are git repos I pulled from outside. How should I apply my changes so that I can update painlessly? You can assume that external changes are much more frequent and larger than my own, so reviewing each one of them won't be a solution."} {"_id": "175753", "title": "looking for a short explanation of fuzzy logic", "text": "I have the idea that the basics of fuzzy logic are not that hard to grasp, and the feeling that someone might explain it to me in about 30 minutes - just like I understand neural networks and am able to re-create the famous XOR problem, and go just beyond it and create 3-layer networks of x nodes. I'd like to understand fuzzy logic to a similarly useful level, in C#. The problem I face is that I'd like to get the concept right, yet I see many websites that include lots of errors in their basic explanations - for example, showing pictures but then using numbers different from those shown in the pictures to calculate, as if lots of people just copied the material without noticing what they wrote down. Others go too deep into their math notation for me. To me that's very annoying to learn from. There is no need for me to reinvent the wheel: AForge already has a fuzzy logic framework. So what I am looking for are some good examples, like the way the neural XOR problem is solved. Is there such an instructional resource out there? Do you know a web page or YouTube video where it is explained briefly? What would you recommend to me? Note: this article comes close, but it just doesn't nail it for me. After that I downloaded a bunch of free PDFs, but most are academic and hard for me to read (I'm not a native English speaker and don't have a special math degree). (I've been looking around a lot for this; good starter material about it is hard to find.)"} {"_id": "179473", "title": "Asking potential developers to draw UML diagrams during the interview", "text": "Our interview process currently consists of several coding questions, technical questions, and questions about their experiences at their current and previous jobs. Coding questions are typically a single method that does something (think of it as a FizzBuzz or reverse-a-string kind of question). We are planning on introducing an additional step where we give them a business problem and ask them to draw a flowchart, activity diagram, class diagram or a sequence diagram. We feel that our current interview process does not let us evaluate the candidate's thinking at a higher level (which is relevant for architect/senior-level positions). To give you some context, we are a mid-size software company with around 30 developers in the team. If you have this step in your interview process, how has it improved the accuracy of your interviews? If not, what else has helped you evaluate the candidates better from a technical perspective?"} {"_id": "79730", "title": "What should a self-taught/no experience programmer's resume look like?", "text": "I asked a question a while back about knowing when you're ready to look for a job and got positive replies. Now I'm working on writing up a resume to begin my job search. The title pretty much sums up the question: what should a self-taught programmer who has nothing but personal project experience put in a resume? PS.
What I really want to ask is for someone to take a quick look at my resume (draft), but I know that's too specific here. Is there a place where I can ask this type of question? **EDIT:** Thanks to everyone for the feedback. I've finished an RC version and will hopefully be entering the job market soon."} {"_id": "79737", "title": "Why are Oracle directories named /u01 /u02 etc...?", "text": "I've been working with the Oracle RDBMS for a few years, and today, after installing one for the n-th time, I was left wondering: why do we install it in /u01, /u02, etc.? Of course you could install it somewhere else, but for some unknown reason this convention is used everywhere, and I haven't seen any serious Oracle installation in, for example, /opt. Any history lesson I missed?"} {"_id": "179479", "title": "Returning status code where one of many errors could have occurred", "text": "I'm developing a PHP login component which includes functions to manipulate the User object, such as `$User->changePassword(string $old, string $new)`. What I need some advice with is how to return a status, as the function can either succeed (no further information needs to be given) or fail (and the calling code needs to know why, for example incorrect password, database problem, etc.). I've come up with the following ideas: * Unix-style: return 0 on success, another code on failure. This doesn't seem particularly common in PHP, and the language's type coercion messes with this (a function returning FALSE on success?). This seems to be the best I can think of. * Throw an exception on error. PHP's exception support is limited, and this will make life harder for anybody trying to use the functions. * Return an array containing a boolean for \"success\" or not, and a string or an int for \"failure status\" if applicable. None of these seem particularly appealing; does anyone have any advice or better ideas?"} {"_id": "145582", "title": "Extend the API or use the same name as a class in the API?", "text": "I have been running into this problem more and more: I am not happy with the current API, and end up making my own class that does what I wish the API did; however, I don't extend the 'super class' as I don't feel it fully fits. I can't imagine that this is good practice, so should I just extend the 'super class', or should I name the class slightly differently from what I feel the original class should have been? For example, in Java you end up writing ten lines of code just to read a file into a List, with each entry being one line of the file. So again, in cases like this, should the class be renamed to something similar, or should it just extend the class to include the behavior you wanted? It almost seems wrong to me to extend the API classes, but at the same time, by not doing so, you are defeating the point of inheritance in OOP."} {"_id": "178707", "title": "Storing lots of large strings with frequent \"appends\" and few reads", "text": "In my current project, I need to store a very long ASCII string for each instance of a given object. This string will receive about 2 appends per minute and will not be retrieved that frequently. The worst-case scenario is a 5-10 MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In this case, which one?
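For reference, the plain-filesystem version I keep coming back to is nothing more than an append per chunk, roughly like this (a sketch in Java; the class name and file layout are made up):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    class AppendStore {
        private final Path dir;

        AppendStore(Path dir) { this.dir = dir; }

        // Hot path, ~2 calls per minute per object: append one chunk to that object's file.
        void append(String objectId, String chunk) throws IOException {
            Path file = dir.resolve(objectId + ".log");
            Files.write(file, chunk.getBytes(StandardCharsets.US_ASCII),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Rare path: read the whole string back.
        String read(String objectId) throws IOException {
            byte[] bytes = Files.readAllBytes(dir.resolve(objectId + ".log"));
            return new String(bytes, StandardCharsets.US_ASCII);
        }
    }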
Any other thoughts?"} {"_id": "178700", "title": "If you have the full spec done, what is left for the developer to do?", "text": "I'm working in a small company. I started as a developer and coded pieces of a big system while being provided with detailed specs. Over five years I moved towards an analyst position. I know how the existing parts of the system are built, so when we need a new subsystem I know how to connect it to the existing things. So I analyse the requirements for a new subsystem, design a new module, then code the main parts of it. After that, my colleagues who are proper analysts and I write detailed specs for junior developers to finish the module. The problem is that I don't see a new job for myself. I realise that a jack-of-all-trades isn't considered to be good, and I don't see myself getting a job exactly like this in a big company. But if I look for a developer job, then I would be somewhat like a junior again. Because if I am provided with a detailed description of what the software has to do, all that seems to be left for me is merely translating the spec into code, which is plain boring. But a developer is supposed to solve problems, so which problems are those supposed to be? The only purely technical problem I can imagine is performance optimization. So basically my question is: what problems are developers supposed to face and solve, if all decisions about how the application should work to meet customers' needs are considered to be the analyst's job? What problems do you solve at work?"} {"_id": "228910", "title": "Is This A Good Example Of Open Recursion?", "text": "I understand _open recursion_ as the process of a method on a class calling another method on a class using a keyword such as `this`, whereby the method call may actually be bound to a subclass at run time. Is this a fair demonstration of open recursion? class Sup { go() { alert('sup'); } callGo() { this.go(); } } class Sub extends Sup { go() { alert('sub'); } } var sub = new Sub(); sub.callGo();"} {"_id": "228911", "title": "Guessing a phone number's country code", "text": "This is the problem I'm working with: given a phone number from anywhere in the world and some location information (state, province, possibly country name if I'm lucky, etc.), return the ISO country code for that number. For the purposes of this question, I will not focus on the location information, as that provides an alternative solution to determining the country code which doesn't even need to use the phone number anymore (though it would be useful for validation purposes). When I first started working on the problem, I was hoping there was a deterministic way to figure this out because there was some sort of international standard out there. It became immediately apparent that one does not exist for phone numbers. There are standards within countries, and between countries (NANP for example), but no unified international standard. Playing around with libphonenumber for a few days, it seems to be able to provide accurate validation of a phone number if I'm given a country code (e.g. CA for Canada, GB for the United Kingdom, etc.). The library provides two methods: `isPossibleNumber` and `isValidNumberForRegion`.
This is the code I'm using: boolean isValid; PhoneNumber number; PhoneNumberUtil util = PhoneNumberUtil.getInstance(); String numStr = \"(123) 456-7890\"; for (String r : util.getSupportedRegions()) { try { // check if it's a possible number isValid = util.isPossibleNumber(numStr, r); if (isValid) { number = util.parse(numStr, r); // check if it's a valid number for the given region isValid = util.isValidNumberForRegion(number, r); if (isValid) System.out.println(r + \": \" + number.getCountryCode() + \", \" + number.getNationalNumber()); } } catch (NumberParseException e) { e.printStackTrace(); } } So for example, if I took an arbitrary phone number like `+44 20 7930 4832` and ran it through the method, I would get the following output: GB: 44, 2079304832 Now, that's assuming I'm given the dialing code (sometimes it's there). If I weren't given the dialing code, I might just get something like `20 7930 4832`, and the results are not as pretty: DE: 49, 2079304832 US: 1, 2079304832 GB: 44, 2079304832 FI: 358, 2079304832 AX: 358, 2079304832 RS: 381, 2079304832 CN: 86, 2079304832 NZ: 64, 2079304832 IN: 91, 2079304832 IR: 98, 2079304832 JP: 81, 2079304832 Given a phone number, I can run it through all of the different rules for every country and filter the list down from 244 to around 20 or fewer if I'm lucky, but I'm not sure if there's anything else I could do to try to guess the country."} {"_id": "148983", "title": "What are the advantages of Ceylon over Java?", "text": "Looking around the net for recent and powerful upcoming programming languages, I came across Ceylon. I dropped in at ceylon-lang.org and it says: > Ceylon is deeply influenced by Java. You see, we're fans of Java, but we know its limitations inside out. Ceylon keeps the best bits of Java but improves things that in our experience are annoying, tedious, frustrating, difficult to understand, or bugprone. What are the advantages of Ceylon over Java?"} {"_id": "197269", "title": "A weakness of the TDD method?", "text": "In summary, this is the TDD method: 1. Write a test 2. Check that the test fails 3. Write production code 4. Run the tests I think that TDD as presented works only in ideal circumstances. I'll take a simple example: **Specification**: Write a program that calculates the square root of a number; the user must enter a number. If the number is negative, the program should display an error, and if the number is positive or zero the program should display the number whose square is the number entered or, for decimals, the number whose square is closest to the number entered. **Writing the test** class SquareRootTest { private SquareRoot squareRoot; public void testNegativeNumber() { assertException(squareRoot.execute(-5)); } public void testIntegerSquareRoot() { assertEqual(squareRoot.execute(9),3); } public void testDecimalSquareRoot() { assertEqual(squareRoot.execute(3),1.732); } } **run failed test** **writing production code:** class SquareRoot { public double execute(Number number) { if(number < 0) throw exception; if(number < 1) return squareRootLessThanOne(number); else return squareRootGreaterThanOne(number); } //private methods.... } This is where, while writing the production code, I find that I have to deal with numbers lower than one differently from numbers greater than one (for x < 1 the square root is larger than x itself - e.g. sqrt(0.04) = 0.2 - so, for instance, a search bounded by the input no longer works). So I need to update my tests to reflect this.
**Updating the test** class SquareRootTest { public void testNegativeNumber() { assertException(squareRoot.execute(-5)); } public void testIntegerSquareRoot() { assertEqual(squareRoot.execute(9),3); } public void testDecimalSquareRoot() { assertEqual(squareRoot.execute(3),1.732); } public void testNumberLowerThanOne() { assertEqual(squareRoot.execute(0.04),0.200); } } If I had found a single algorithm to calculate the square root, I would not have changed my tests. The TDD method focuses solely on tests derived from the specification, yet there are tests that are derived from the implementation; in particular, all conditional instructions and all instructions controlled by loops should normally be tested. These test cases cannot be detected at the time of specification. My question: how does TDD deal with this situation? Or is this effectively a weakness of TDD?"} {"_id": "188162", "title": "Repeatability of CPU-intensive benchmarks on NUMA hardware", "text": "Consider a benchmark of a C++ application. It does little or no I/O in proportion to its overall runtime -- it is compute-intensive. It is a single-threaded program. It pages/reads in all of its data at the outset, and then runs many iterations of the core task at hand, so as to average out cache or other ephemeral variations. We run it on a large, multi-core Linux system with far more memory than it uses, which has a NUMA memory hierarchy. We run it on the machine when it is 'idle'. Of course, it has some number of the usual daemons floating around, but there should be plenty of cores and memory to spare to keep them happy. We observe a surprising (to us) range of variation in the wall-clock time that results. Can anyone suggest where to look for an explanation, or what to do to reduce the variation? uname: Linux perf2.basistech.net 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux numactl --show: ~/ numactl --show policy: default preferred node: current physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 cpubind: 0 1 nodebind: 0 1 membind: 0 1 ~/ numactl --hardware available: 2 nodes (0-1) node 0 cpus: 0 2 4 6 8 10 12 14 node 0 size: 6144 MB node 0 free: 2030 MB node 1 cpus: 1 3 5 7 9 11 13 15 node 1 size: 6134 MB node 1 free: 144 MB node distances: node 0 1 0: 10 20 1: 20 10"} {"_id": "91993", "title": "Are there tangible benefits to being a Microsoft MVP?", "text": "I have several certifications - MCPD, MCTS, MCAD, etc. - and I've learned that MVP is an award (just a title), not a certification. So if I wanted to get one of these, I'd have to spend valuable time on communities, sharing knowledge and providing answers to others for free, consistently, for _years_, until finally someone special notices and rewards me. I imagine there'd be some personal benefits along the way, but supposing I achieved this goal and became an MVP, would this pay off financially in any way (career opportunities, higher pay scale, etc.)?"} {"_id": "218912", "title": "How much difference does experience make?", "text": "I see many job ads which require at least x years of experience. The question is: how do you know when a candidate has the required years of experience? What do you expect from a person with x years of experience (edit: effectively, how do you check that the CV isn't lying without resorting to skill checking)? What can a person with x years of experience do that one with y years (where y < x) cannot (edit: assuming they have similar skills)?
There can be a passionate programmer with y years of experience who has vast knowledge and has worked on multiple projects, and another programmer with x years of experience (x > y) who has worked on few projects and doesn't have that much real experience. Why can it not be reduced to something like this: \"if you know this technology and you know how to do that stuff (be it design, communication, estimates etc.) then you are suitable for our job\"? I know you cannot hire a fresh graduate with 1 year of experience for the post of an enterprise architect, but I also see a problem with the fact that almost all ads ask for experience. IMHO, passion should be taken into account first. I was not sure whether this question is suitable for this site, but since there are tags for recruiting and experience I believe it has a place here."} {"_id": "144188", "title": "Is it just me or is this a baffling tech interview question?", "text": "## Background I was just asked in a tech interview to write an algorithm to traverse an \"object\" (notice the quotes) where A is equal to B and B is equal to C and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was but apparently there wasn't one, just \"traverse\" the \"object\". I don't know about anyone else, but this seems like a silly question to me. I asked again, \"am I searching for a value?\". Nope. Just \"traverse\" it. Why would I ever want to endlessly loop through this \"object\"?? To melt my processor maybe?? The _answer_ according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends? ## My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real world problems. I have been successfully coding for a long time but this tech interview process makes me feel like I don't know anything."} {"_id": "144187", "title": "How do we provide valid time estimates during Sprint Planning without doing \"too much\" design?", "text": "My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or \"pseudo-agile\" methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks, and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc. -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but I think are addressing the wrong question. 
We **_know_** that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, _without_ going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like \"well if we design it this way it might take 8 hours, but if we end up having to do it this other way instead that will take about 32, but it might not be as bad once we start trying to write it...\". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adopting this process then we will just need to recondition ourselves to accept that :)"} {"_id": "144183", "title": "What makes Instagram so valuable?", "text": "If, as the FAQ says, topics about the business of the computer industry are allowed here, I'd like to find out why Instagram can be so valuable that it was acquired for $1 billion (USD). To put it simply, isn't it just a photo enhancement app (such as making a photo look vintage), plus sharing those photos on Facebook? In contrast, PlayFish had many superb Facebook games that were much more sophisticated (such as Restaurant City and Pet Society), and PlayFish was merely acquired for $400 million. Some companies, such as RockYou, had the number one app on Facebook but weren't even acquired for a lower price like $200 million. And now just a photo filter and sharing app is considered to be a business worth a billion dollars. Why is that?"} {"_id": "221539", "title": "Algorithm for fast tag search", "text": "The problem is the following. * There's a set of simple entities E, each one having a set of tags T attached. Each entity might have an arbitrary number of tags. The total number of entities is near 100 million, and the total number of tags is about 5000. So the initial data is something like this: E1 - T1, T2, T3, ... Tn E2 - T1, T5, T100, ... Tk .. Ez - T10, T12, ... Tl This initial data is quite rarely updated. * Somehow my app generates a logical expression on tags like this: T1&T2&T3 | (T5&!T6) * What I need is to calculate the number of entities matching a given expression (note - not the entities, but just the number). This number doesn't have to be totally accurate, of course. What I've got now is a simple in-memory table lookup, giving me 5-10 seconds of execution time on a single thread. I'm curious: is there any efficient way to handle this? What approach would you recommend? Are there common algorithms or data structures for this? **Update** A bit of clarification, as requested. 1. `T` objects are actually relatively short constant strings. But it doesn't actually matter - we can always assign some IDs and operate on integers. 2. 
We definitely can sort them."} {"_id": "212955", "title": "What factors should be considered before deciding to build a message bus with SOAP services?", "text": "It would seem to me that the cost of having a team of developers build and maintain all the components necessary to provide routing, workflow orchestration, durability, security and the other features provided by a commercial ESB would be equivalent to, or even more than, licensing a commercial ESB. Other than up-front licensing costs and perhaps architectural preference, what are the important factors to consider when designing an integration architecture around building a message bus using SOAP services, rather than licensing message bus software from a commercial vendor and integrating your business components with it?"} {"_id": "151182", "title": "is it a bad practice to call a View from another View in MVC?", "text": "I have some plain Views; they don't have any logic behind them (there is no action or controller behind them), and their only purpose is to alert the user about something like \"We have sent you an email to confirm your account\", \"You have no access to this resource\", etc... These views are really simple, and calling them through a Controller/Action seems to be too much overhead, but somehow I feel like it is not quite correct. What do you think? How do you handle these kinds of situations? I guess this question will apply to any MVC Framework, but in my case I'm using the ASP.NET MVC 3 framework."} {"_id": "130414", "title": "What is the best way to structure workshops for starting programmers?", "text": "We are planning to organize programming workshops for Java web developers with basic coding but no design experience. The goal of the workshops is to introduce these programmers to clean code. Focus Areas: * TDD * Four Rules of Simple Design * runs all the tests * contains no duplications * expresses the intent of the programmers * minimizes the number of classes and methods Structure: * 6 full-day coached workshops spread over 3 months using pair programming and pair-switching * Each workshop contains * four 15 minute presentations/demos * four 90 minute problem solving exercises * 4 minute pecha kuchas by every programmer Planned Katas for first workshop: * prime numbers * bowling game * tennis scoring * Arabic to Roman numerals conversion * prime factors Do you think this is a good way to structure a programming workshop? Are there any elements that I could introduce to the workshops to make them more interactive and create more interest among developers? Edit 1: Summary of the feedback * Create an interest in reading books. * Have real project examples. * Have contact with the group during off days."} {"_id": "130416", "title": "Version Control based on portable storage?", "text": "I develop personal projects on two machines without use of a shared server or a network connection between the two. Do any common version control systems reliably support use of portable storage (such as a USB flash device) as the shared repository?"} {"_id": "130419", "title": "Why do organizations limit source code access to engineers?", "text": "Most organizations restrict access to the source code to engineers, and even at places like Google, the Android source code is kept off-limits to most engineers within the company. Why? 
Note: I am not talking about write access for everyone in the company; I'm talking about read access."} {"_id": "221534", "title": "Plain old struct vs class in this case?", "text": "Stroustrup says that you should use structs unless you can specify an invariant for the data structure. I have to represent a physical object which holds things such as position, velocity, mass and inertia (and so on). I also want to render the object, for which I will need setup in the constructor and unloading stuff in the destructor. So I guess that's an invariant, but it feels wrong to put all this physics stuff into it that doesn't really have invariants. The drawing parts are pretty trivial, so it doesn't feel worth it to separate them from the rigid body in this case. Any advice? edit: I suppose I just realized the answer to my own question. Rigid bodies **do** have invariants, namely that they have to follow physical laws."} {"_id": "35595", "title": "How does SSL relate to the Public Key Infrastructure?", "text": "This is possibly a stupid question but, how does SSL relate to the Public Key Infrastructure? Cheers, J"} {"_id": "93594", "title": "Do I need to know how Ajax works since ASP.NET provides me UpdatePanel", "text": "I am working on ASP.NET WebForms, and it already provides me a ready-to-use Ajax solution in the UpdatePanel, so should I invest my time in learning how Ajax really works?"} {"_id": "231031", "title": "Self-organization during Sprint Planning", "text": "During Sprint Planning there are a lot of decisions to be made: 1. How many PBIs should the team commit to? 2. Which tasks should constitute each PBI? 3. How much time should each task take? With no team leader it is not always easy to make all these decisions, since there will be some decision points (sometimes more than a few) on which the team members will disagree. I thought about a few options to deal with such disagreements when a consensus cannot be reached in a short period of time (we don't want endless arguments). 1. Let the ScrumMaster decide 2. Let the person who will most likely work on the story decide. 3. Majority vote 4. Let the person who actually writes the tasks in Excel decide (in such a case, have a different person write the tasks for each story). I have two questions: 1. How to measure consensus? Should the Scrum Master ask each person whether they agree with every decision proposed? 2. When no consensus can be reached shortly, which option from the list that I proposed do you think we should choose, if any? Thank you!"} {"_id": "253147", "title": "Log within try-catch or after?", "text": "Should one log the success of an operation within a try-catch block, or after it? Example: try do x log('successful') catch log('fail') end or is this better: try do x catch log('fail') end log('successful') I would say that a logger should never fail, but what if, for some reason, it does anyway? In this specific case, I had a logfile on a network drive which disconnected due to actions of do(x), which however did not fail -- obviously my try failed, but my operation worked. However, I find the latter example harder to read, and I try to keep my code as tight as possible for readability."} {"_id": "253141", "title": "Ideal & idiomatic javascript interface for RESTful API", "text": "I am trying to write an angular service to interface with a RESTful API. 
For the sake of simplicity, let's assume the API is + Company |___+ Department | |____ Person | |____ Person > Notice how a person can be under `Company > Department` or directly under `Company`. Each of the entities (Company, Department and Person) supports add, edit, list and get_by_id. Which of the following interfaces is more idiomatic? ## Option 1: // In all cases, get(), put() return $http promise companyApi().get() // List all companies companyApi(1).get() // Get company with ID 1 companyApi(1).departments().get() // List all departments companyApi(1).departments(2).get() // Get department with ID 2 companyApi(1).departments(2).persons().get() // List all persons companyApi(1).departments(2).persons().put(p) // Add a new person in department 2 companyApi(1).persons().put(p) // Add a new person in company 1 companyApi(1).persons(3).put(p) // Edit person with ID=3 in company 1 companyApi(1).persons(1).remove(p) // Delete a person ## Option 2: // In all cases, get(), put() return $http promise // List all companies companyApi({ type: 'company' }).get(); // Get company with ID=1 companyApi({ type: 'company', companyId: 1 }).get(); // Get department with ID=2 under company 1 companyApi({ type: 'department', companyId: 1, departmentId: 2 }).get(); // List persons under department with ID=2 under company 1 companyApi({ type: 'person', companyId: 1, departmentId: 2 }).get(); // Get person with ID=3 under department with ID=2 under company 1 companyApi({ type: 'person', companyId: 1, departmentId: 2, personId: 3 }).get(); // Add person under department with ID=2 under company 1 companyApi({ type: 'person', companyId: 1, departmentId: 2 }).put(personObj); // Edit person with ID=3 in department 2 under company 1 companyApi({ type: 'person', companyId: 1, departmentId: 2, personId: 3 }).put(personObj); // Add person directly under company 1 companyApi({ type: 'person', companyId: 1 }).put(personObj);"} {"_id": "214177", "title": "How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?", "text": "Say you have a few applications which deal with a few different Core Domains. _The examples are made up and it's hard to put a real example with meaningful data together (concisely)._ In Domain Driven Design (DDD), when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a \"phase\" in a lifecycle. An example of a Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application has its own Ubiquitous Language, its own Model, and a way to talk to other Bounded Contexts to obtain the information it needs. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications. 1. Say there is a long process dealing with an Entity, for example a Book in a core domain. Now looking at the Bounded Contexts, there can be a number of phases in the book's life-cycle. Say outline, creation, correction, publish, sale phases. 2. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases), for example a \"Stock\" or \"Inventory\" context. 
In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains, e.g. Users, Catalogs, Inventory, ... (hard to think of relevant examples). For example, a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, Sale phase. Then the Store core domain probably has a number of life-cycle phases of its own. public class BookId : Entity { public long Id { get; set; } } In the creation phase (Bounded Context) the book could be a simple class. public class Book : BookId { public string Title { get; set; } public List<Chapter> Chapters { get; set; } //... } Whereas in the publish phase (Bounded Context) it would have all the text, release date etc. public class Book : BookId { public DateTime ReleaseDate { get; set; } //... } The immediate benefit I can see in separating by \"life-cycle phase\" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model. A. Does the Domain Model get \"modeled\" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application? **Edit: Answer to A.** Yes, according to the answer by Alexey Zimarev there should be an entire \"Domain\" for each bounded context. B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? **Edit: Answer to B.** Each Bounded Context should have its own complete \"Domain\" (Service/Entities/VO's/Repositories) C. Does it mean there can easily be tens of \"segregated\" Domain Models and multiple projects can use them (the Entities/Value Objects)? **Edit: Answer to C.** There is a complete \"Domain\" for each Bounded Context and the Domain Model (Entity/VO layer/project) isn't \"used\" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events). The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains."} {"_id": "142328", "title": "What kinds of low level knowledge matter?", "text": "I realize that this question is similar to Low level programming - what's in it for me, but the answers didn't really address my question well. Apart from just an understanding, how exactly does your low level knowledge translate into faster and better programs? There's the obvious lack of stop-the-world from garbage collection, but what else is an advantage? Do you really outperform your optimizing compiler? Do you pack your data structures in as tightly as possible and concern yourself with alignment? There's extra freedom naturally, but does that really translate into a faster program?"} {"_id": "73673", "title": "How can I manage a central repository of documentation?", "text": "Our team works with a large number of different APIs and services, and we also have our own internal tooling and services that we maintain. Right now, we do not have a good centrally managed system to list which projects we used a certain API on, or to attach documentation for all of our utilities/APIs. 
I believe what I am looking for is some type of team wiki, but we weren't impressed with the search capability of the solution we tried (SharePoint). Are wikis the right way to approach centralized documentation? What types of things should I be looking out for when creating a centralized documentation system?"} {"_id": "73678", "title": "Visual Studio Express vs. SharpDevelop", "text": "I need a bit of input from people who have used both of these programs. I am using Visual Studio Express 2010 right now for creating a project in VB.NET and WPF 4. I just discovered a freeware IDE called SharpDevelop, which apparently supports both of those formats. Has anyone used both, and if so, what are the advantages and disadvantages of each? Which would you recommend? Is SharpDevelop a good alternative to the very expensive Visual Studio Ultimate?"} {"_id": "22255", "title": "What's the right way to fork/re-use code from an open source project?", "text": "Let's say I am working on an open source project and want to re-use a trivial utility function from another open source project (for example, a file search/replace function). Is it legal to copy the function and simply write a small copyright notice at the top of the file? Should I include their name as copyright holders of the entire project in the license? Similarly, let's say I fork an open source project. Where and how do I specify that the copyright is shared between both the original copyright holder and myself? I guess the answer must somewhat vary according to the open source license, but I'd like a general answer as much as possible. PS: I'm mostly concerned about the legal aspect, but feel free to include your ethical point of view."} {"_id": "189453", "title": "Java vs PHP Memory / CPU Consumption", "text": "I work in a PHP-based company. There is a project where we want to create a backend service. Senior members here are going for PHP, even though it is slower than Java. Their only point of contention is that Java is heavier than PHP from both a memory and a CPU load standpoint. The JVM is more like a container environment: if you bring in more baggage it will consume that much more, but the service we are talking about here will be of medium complexity. So chances are that it won't be very demanding (or will it?). I understand this question leans heavily towards vagueness; however, my point to ponder is: is it always the case that Java is more hardware-demanding? I would just like to know the opinions of all you experienced folks about what the actual scenario looks like."} {"_id": "74022", "title": "What kind of bug is this?", "text": "My application has 3 shipping methods, the cheapest of which is free. The free shipping option is only available for orders totaling over $100. As soon as I built it, I thought of a way a user could easily circumvent this: add items to the cart totaling over $100, then select your free shipping, then edit your cart to be less than $100 and maintain the free shipping. I fixed my application to not allow this. I started wondering, though: what do you call this sort of design flaw? I know you can call it a bug, but is there a better description that I can use so that someone will understand what I am talking about?"} {"_id": "197882", "title": "Security through obscurity and storing unencrypted passwords", "text": "What exactly does \"security through obscurity\" mean in the context of storing unencrypted passwords? 
I'm using a small program (I won't name it, so as not to heap more shame on its author) that uses my Google account for some tasks. I've noticed that it stores my password in a plain-text, unencrypted file. Just a string, clearly visible to anyone who can drag & drop the file into Notepad or use `F3` in Total Commander. I have raised a ticket asking the program's author to fix this ASAP. I haven't got any reply yet, but my issue got one comment, which includes only the above-mentioned link to Wikipedia's \" _Security through obscurity_ \" page. How should I understand this comment? Is it for or against my issue? At first I thought that it supported my request to fix this ASAP. But then I found Eric Raymond's Fetchmail example (in \" _The Cathedral and the Bazaar_ \"), where he refused to implement config file encryption (passwords are stored in the config file for Fetchmail), claiming that it is up to the user to assure security by not letting anyone \"from the outside\" access that configuration file. This statement (or refusal) is often brought up as an example of _security through obscurity_. Looking at it from this point of view, I'm completely wrong and that program's author is right. He does not have to implement encryption of the file with my password; it can remain there, stored unencrypted, and it is I who am responsible for assuring security by not giving anyone access to this file, or by deleting it each time I stop using that software. (Another question is: how can I achieve this on a system as insecure as Windows itself?) This seems to be in complete opposition to what I've been told and have learnt for years, so I would like to ask more experienced developers: who is right here, and how exactly should I understand \"StO\"?"} {"_id": "37550", "title": "C++: calling non-member functions with the same syntax as member ones", "text": "One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions: class A { }; void f( A & this ) { /* ... */ } // ... A a; a.f(); // this is the same as f(a); Of course this could only work as long as * `f` is not virtual (since it cannot appear in `A`'s virtual table). * `f` doesn't need to access `A`'s non-public members. * `f` doesn't conflict with a function declared in `A` (`A::f`). I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits: 1. calling `str.strip()` on a `std::string` (where `strip` is a function defined by the user) would sound a lot better than calling `strip( str );`. 2. most of the time (always?) classes provide some member functions which don't need to be members (i.e. are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1). My question here is: what do you think of such a feature? Do you think it would be something nice, or something that would introduce more issues than the ones it aims to solve? Could it make sense to propose such a feature for the next standard (the one after C++0x)? * * * Of course this is just a brief description of this idea; it is not complete; we'd probably need to explicitly mark a function with a special keyword to let it work like this, and many other things."} {"_id": "55797", "title": "How far to go with unit tests", "text": "A question asked many times before, but with a specific slant towards MVC development. I've been a very good boy and have been coding all my controller actions with corresponding unit tests, which has been great (if a little [read a LOT] repetitive at times). 
To be honest, I've actually created a little T4 template to write most of the bare bones of the initial unit tests, and then tweaked as appropriate per usage. I will admit to not being quite sure how to handle tests in views that contain partial views - but that's a story for another question. Now, the difficult part for me to decide upon is just how deep the coverage should be in my service layer. The reason being that some of my service methods (for better or worse) actually perform a variety of LINQ queries which then supply discrete information to subsequent logic within the method. I know I could (should??) break these methods down to only call the required logic for each LINQ statement and then apply them within the method. However, in many instances there is never any reuse of the LINQ 'functions', and therefore it feels that this would refactor the code out a level too far. What I'm asking is: with complex logic occurring within a method, is it 'good enough' to have a test method that simply asserts the required result and/or expected error, or should every logic line be simulated and tested too? The way I'm seeing it, to do the testing correctly the method logic (line by line) should be getting some sort of coverage too. That however (in my naive opinion) could lead to a never-ending cycle of trying to keep the test and the implemented method so closely aligned (which I know they should be) as to create a cottage industry in the tests themselves. I know my question may offend a few of the TDD devotees who will see this as a no-brainer. Not being in the TDD camp, this is a 'yes-brainer' for me, hence the question. btw - had checked this out for ideas: http://dotnetslackers.com/articles/aspnet/Built-in-Unit-Test-for-ASP-NET-MVC-3-in-Visual-Studio-2010-Part-1.aspx looking fwd to the steady downvotes now :) **[edit]** - for the benefit of the single (well, at the moment, single!!) 'close' voter: this question is **not** subjective. I'm looking for consensus on a very focused subject. I'm not attempting to stir up negative passions, and I'm not looking to expose flaws in the technology - I'm a HUGE fan. So please, drop a polite comment for my benefit if voting to close, as it may help me to restructure the question if there's ambiguity or misinformation. This question could benefit a large chunk of the MVC population. Thank you!! jim"} {"_id": "195146", "title": "caching on multiple servers", "text": "Because we need to keep response times low, we get tons of requests, and we need to basically process ALMOST the same data (which I'll refer to as X) on each request (the inputs are different, though, so we can't cache responses), we are using a technique where we grab a new copy of X every 90 seconds from the database and store it locally in memory as a Python list of dictionaries on our application servers (we are using uwsgi). The kink in the machine: there are temporary analytics that we need to keep track of in those 90 seconds to adjust our data each iteration, and each iteration is dependent on what we calculate from the last iteration. The trouble with this is that we have multiple application servers storing the same data, X, in memory, and each of those servers needs to refresh X at the same time to keep calculations consistent for the next interval. I've tried some techniques, like broadcasting a message after each calculation to reload each server's X, but it hasn't been as effective as I would hope, and it just makes things more complicated. 
I should say, the reason we haven't used memcached or something similar is that we don't want to sacrifice any speed if we can help it. Maybe I am ignorant of how fast we can retrieve the list from memcached and load it into Python objects. I understand my explanation isn't the greatest, and I will answer any questions to give a better picture of the situation. Edit: we are at about 5000 requests/second; the size of the data we process is about 2MB at the moment but will continue to grow, so we'd like to avoid sending it over the wire for each request."} {"_id": "115917", "title": "Team member missing? Glue between data producers and data consumers", "text": "We're in the business of automated trading and our team consists of two bigger groups; I call them data producers and data consumers. The producers' primary task is to maintain a chain of smaller tools that push some real-time data through an indicator system, and out comes an order. All the data that was needed or produced is logged into files, one file per tool per run. The data consumers on the other hand, accustomed to backtests and captive in their back-office world, want fragments of the data produced in the different runs, polished to their needs - more specifically, one big post-processed chunk of data per day. **Now the problem** that has split our team into two well-distinguishable sides is that the data producers consider it their responsibility to provide comprehensive data without any loss of information, and want the consumers to cherry-pick whatever they need in a _pre_ -processing step. The consumers on the other hand want to see live trading as a black box; to them it shouldn't be different from the backtest, which means the data producers in their eyes lack a crucial _post_ -processing step without which they can't start their task. Now clearly there has to be some glue between the two teams; my question is whose task is it? Or should there be a third group in the middle that provides the glue? What does the theory say about this (can we apply the producer/consumer pattern to 'real life')? And just to make the problem a real one: the data producers consider it ugly to boil down the data into consumable chunks, mainly because the consumers' side keeps changing their requirements. The consumers on the other hand are not skilled enough to do the proposed cherry-picking."} {"_id": "232391", "title": "Adding unit tests to brownfield applications", "text": "I'm working for a company that has been developing a series of products for years with little to no unit testing in place. They want to move to TDD and unit test new code going forward. However, I'm concerned about the lack of testability of the old code and where to draw the line between old and new. Has anyone had experience with layering unit testing onto brownfield software like this? If so, could you recommend some effective approaches?"} {"_id": "166458", "title": "High resolution graphical representation of the Earth's surface", "text": "I've got a library, which I inherited, which presents a zoomable representation of the Earth. It's a Mercator projection and is constructed from triangles, the properties of which are stored in binary files. The surface is built up, for any given view port, by drawing these triangles in an overlapping fashion to produce the image. The definition of each triangle is the lat/long of the vertices. It looks OK at low values of zoom but looks progressively more ragged as the user zooms in. 
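(Incidentally, the one concrete piece I keep running into is the standard \"slippy map\" tile arithmetic that maps a lat/long to a tile index at a given zoom level. My rough understanding of it, written out in Java - the conventions may well be off, so treat it as a sketch:)

// Standard Web-Mercator tile indexing as I understand it: converts a
// lat/long to the x/y index of the tile containing it at a zoom level.
public class TileMath {
    public static int[] latLonToTile(double latDeg, double lonDeg, int zoom) {
        int n = 1 << zoom; // number of tiles per axis at this zoom level
        int x = (int) Math.floor((lonDeg + 180.0) / 360.0 * n);
        double latRad = Math.toRadians(latDeg);
        int y = (int) Math.floor(
            (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0 * n);
        return new int[] { x, y }; // tile column and row
    }
}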
The view ports are primarily referenced through a rectangle of lat/long co-ordinates. I'd like to replace it with a better quality approach. The problem is, I don't know where to begin researching the options, as I am familiar with neither the projections needed nor the graphics techniques used to render them. For example, I imagine that I could acquire high resolution images - say Mercator projections, although I'm open to anything - break them into tiles, and somehow wrap them onto a graphical representation of a sphere. I'm not asking for \"how do I\", more where should I begin to understand what might be involved and the techniques I will need to learn. I am most grateful for any \"Earth rendering 101\" pointers folks might have."} {"_id": "200936", "title": "How Do You Organize Your Methods in OO Programming", "text": "Whenever I am programming in an object-oriented language, I am always faced with what order to put the methods for an object in and how to group them. Are there any standards for this, or any suggestions?"} {"_id": "98619", "title": "Submit code during interview", "text": "I'm interviewing for a position at an internet startup. The position relates to doing data mining on their very large database of user information. As part of the (long-distance) interview procedure, which involves examining a subset of their database, they requested that I submit the code I used for analysis. My main concern is that this code is \"proprietary\", for lack of a better word. I have no problem giving them all my code if I end up working for them, but considering that they could potentially take the code, not hire me, and use it on their larger database to generate revenue, I'm hesitant. Am I just being paranoid? Is this a legitimate concern?"} {"_id": "166454", "title": "Can the csv format be defined by a regex?", "text": "A colleague and I have recently argued over whether a pure regex is capable of fully encapsulating the csv format, such that it is capable of parsing all files with any given escape char, quote char, and separator char. The regex need not be capable of changing these chars after creation, but it must not fail on any other edge case. I have argued that this is impossible for just a tokenizer. The only regex that might be able to do this is a very complex PCRE style that moves beyond just tokenizing. I am looking for something along the lines of: > ... the csv format is a context free grammar and as such, it is impossible to parse with regex alone ... Or am I wrong? Is it possible to parse csv with just a POSIX regex? For example, if both the escape char and the quote char are `\"`, then these two lines are valid csv: \"\"\"this is a test.\"\"\",\"\" \"and he said,\"\"What will be, will be.\"\", to which I replied, \"\"Surely not!\"\"\",\"moving on to the next field here...\""} {"_id": "56447", "title": "good books about Scrum and XP", "text": "I want to know what you would recommend reading for Scrum and XP. I've got Scrum and XP from the Trenches, but I would like to see some more references that are worthwhile."} {"_id": "220660", "title": "Base class should have no knowledge of its subtypes?", "text": "What's the OO principle that states (in sum): > A base object should have no knowledge of its subtypes. I _thought_ it was Liskov Substitution, but after reading the Wikipedia article I don't believe I'm correct. 
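To illustrate the kind of thing I mean, here's a made-up example of a base class that does know about its subtypes - which, as I understand it, is exactly what the principle forbids (the class names are invented):

// Made-up illustration: the base class branches on its own subtypes.
abstract class Shape {
    double area() {
        if (this instanceof Circle) {
            Circle c = (Circle) this;
            return Math.PI * c.radius * c.radius;
        } else if (this instanceof Square) {
            Square s = (Square) this;
            return s.side * s.side;
        }
        throw new IllegalStateException(\"unknown subtype\");
    }
}
class Circle extends Shape { double radius = 1.0; }
class Square extends Shape { double side = 1.0; }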
Thanks in advance!"} {"_id": "220661", "title": "Separate the renderer from the business model", "text": "I have a small responsibility-separation issue that I hope someone can clarify. I have a small model composed of 2 classes: GameBoard and GamePiece, and the obvious relationship is that a GameBoard can have several GamePieces. In the application I'm implementing, GameBoard and GamePiece contain data related to rendering on the screen (Update/Draw methods, Textures, etc.) as well as their logic (the gameboard's number of squares, pieceType, etc.). Updating/Drawing the GameBoard causes the pieces to be updated/drawn as well. However, in order to keep to the single responsibility principle, I decided to separate the logic from the rendering. But I am having some trouble implementing it adequately. I came up with a couple of solutions and I'm not sure which one is the best: 1- Create GameBoardRenderer and GamePieceRenderer classes (which would store Textures and do the updating and drawing), where every instance of these Renderer classes would be related to one GameBoard or GamePiece respectively. The problem I see is that it forces me to keep a dual relationship between GameBoard and GamePiece on one side, and GameBoardRenderer and GamePieceRenderer on the other, and to ensure that it is always consistent. 2- Create GameBoardRenderer and GamePieceRenderer, but make them inherit from GameBoard and GamePiece respectively, bringing all the rendering stuff into those classes, while keeping the association at the GameBoard and GamePiece classes. The problem is that when I call the Update/Draw method of GameBoard, I won't be able to call the GamePieceRenderer Update and Draw methods (as no relationship will be available). What should I do? Is there any better solution? Thank you very much."} {"_id": "165816", "title": "Why are invariants important in Computer Science", "text": "I understand 'invariant' in its literal sense. I also recognize them when I type code. But I don't think I understand the importance of this term in the context of computer science. Whenever I read conversations/white papers about language design from famous programmers/computer scientists, the term 'invariant' keeps popping up as jargon; and that is the part I don't understand. What is so special about it?"} {"_id": "57792", "title": "Resources on how to relate structured and semi- / un-structured information", "text": "I don't have a great background in information organisation / retrieval, but I know of a few ways of dealing with the problem. For structured information, it's possible to go OOish - everything \"has-a\" or \"has-many\" something else, and you navigate the graph to find relationships between things. For unstructured information, you have techniques like text search and tagging. I know about basic CS data structures and algorithms for structured data, but what I'm interested in goes beyond that. I want to know about how unstructured data can be related to structured data \"intelligently\", i.e. without users having to explicitly understand the object hierarchy. What resources - articles or books - are there that summarise the CS theory behind these techniques or could introduce me to others? Since so far it's been difficult to communicate exactly what kind of problem I'm dealing with, here's an example scenario. It is my own wording and the domain has been (radically) obfuscated to protect the client. > A wine enthusiast wants to build a website for her wine society. Wine farms could capture many wine prices and enter their locations. 
Users would search for wine tasting venues in their area. Or all wines which cost below X per bottle. Nothing difficult about that so far, because we're dealing with structured information in a straightforward graph of objects or database tables. > However, they also have advice articles (unstructured info), such as \"10 things to look out for in a good Red\" (sorry, we're stretching the limits of my knowledge about wine :). As a user, you'd want the website to display a link to the article when you're viewing wine farms. If the enthusiast decides to have a section where people can list \"recipes for meals which go well with wine\" on her website, she might also want the article to appear next to those recipes which go well with red wine. Now, you don't want the enthusiast who captures the data to have to link every new article to every meal (because it won't just be this one article that is relevant to recipes or wines); and you don't want meal authors to be bogged down by exactly which one, two or three articles are most relevant out of a library of 50. You certainly don't want to introduce a \"red wine advice\" field on every wine or meal. So that's where you might use tagging. Wines, recipes and the red wine advice article could all be tagged \"red\". Great solution that really works for blogspot.com, right? Well yes, as long as the data capturers know that the \"red\" tag exists. Maybe they decide to use \"Claret\" instead. Doh! If there are only 5 tags, they probably get it right, but if there are 50 tags, or even just 20, they might not. Worse still, you get the spam scenario where the author of a new article just applies every tag - \"red\", \"white\", \"shiraz\", \"chenin blanc\" etc. > At this point we need to consider other techniques, like text search. The wine is marked as a red, the article is chock full of the words \"red wine\". Done. Well, you might want to apply some of the data clustering techniques that can help you to better identify what a piece of text's main topics are. Or create an indexing process that drops out common words and punctuation. No point matching two items because they both use the word \"but\" a lot. Actually, the best results of all would probably come from using a combination of everything I've mentioned. You may notice that in this example my tone moves from \"I know exactly what I'm doing\" to \"try do some stuff like...\". So, I'm looking for resources that will teach me about dealing with the combination of structured and unstructured info - tagging, text search with smart indexing, and clustering, as well as any other techniques which I don't know about yet. Some discussion of the strengths and weaknesses of each would also be appropriate."} {"_id": "88422", "title": "How do you know if you've split your domain correctly", "text": "In DDD I struggle to understand whether or not my domain is split correctly into aggregate roots, and whether those aggregate roots are grouped correctly into bounded contexts. Is there a way - like a set of rules/guidance I can use - to decide whether or not I am placing things in the right AR/BC?"} {"_id": "223006", "title": "Communication between nested directives", "text": "There seem to be quite a few ways of communicating between directives. Say you have nested directives, where the inner directives must communicate something to the outer (e.g. it's been chosen by the user). 
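For concreteness, assume markup roughly like `<outer><inner></inner></outer>` (the directive names `outer` and `inner` are invented for the sake of the examples below).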
So far I have 5 ways of doing this: ## `require:` parent directive The `inner` directive can require the `outer` directive, which can expose some method on its controller. So in the `inner` definition require: '^outer', link: function(scope, iElement, iAttrs, outerController) { // This can be passed to ng-click in the template scope.chosen = function() { outerController.chosen(something); } } And in the `outer` directive's controller: controller: function($scope) { this.chosen = function(something) { } } ## `$emit` event The `inner` directive can `$emit` an event, which the `outer` directive can respond to, via `$on`. So in the `inner` directive's controller: controller: function($scope) { $scope.chosen = function() { $scope.$emit('inner::chosen', something); } } and in the `outer` directive's controller: controller: function($scope) { $scope.$on('inner::chosen', function(e, data) { }); } ## Execute expression in parent scope, via `&` The item can bind to an expression in the parent scope, and execute it at an appropriate point. The HTML would be like: <inner inner-choose=\"functionOnOuter(item)\"></inner> So the `inner` controller has an 'innerChoose' function it can call scope: { 'innerChoose': '&' }, controller: function($scope) { $scope.click = function() { $scope.innerChoose({item:something}); } } which would call (in this case) the 'functionOnOuter' function on the `outer` directive's scope: controller: function($scope) { $scope.functionOnOuter = function(item) { } } ## Scope inheritance on non-isolated scope Given that these are nested controllers, scope inheritance can be at work, and the inner directive can just call any functions in the scope chain, as long as it doesn't have an isolated scope. So in the `inner` directive: // scope: anything but a hash {} controller: function($scope) { $scope.click = function() { $scope.functionOnOuter(something); } } And in the `outer` directive: controller: function($scope) { $scope.functionOnOuter = function(item) { } } ## By service injected into both inner and outer A service can be injected into both directives, so they can have direct access to the same object, or call functions to notify the service, and maybe even register themselves to be notified, in a pub/sub system. This doesn't require the directives to be nested. **Question**: What are the potential drawbacks and advantages of each over the others?"} {"_id": "223004", "title": "What does the Apache licensing mean by \"Permitted: Commercial Use\"", "text": "At choosealicense.com/licenses I am reading about the Apache license, and see that `Commercial Use` is listed under `Permitted`: > This software and derivatives may be used for commercial purposes. Does this mean that they can use (and modify) my app in a commercial environment (e.g. Microsoft, Apple)? Or that they can commercialize my app? English is not my native language and I am wondering if I am misinterpreting something."} {"_id": "117513", "title": "What is the advantage of using map datastructure?", "text": "I am a member of an online coding website, and today when I submitted a solution in C++, I saw there were much better answers, also in C++, which took considerably less time. The solutions were open, so I went through some of them and found all of them using the map data structure. I could not figure out where exactly it gains an edge. I did some googling, but the results were not convincing. Please help."} {"_id": "117512", "title": "Should I HTML encode all output from my API?", "text": "I am creating a RESTful JSON API to access data from our website where the content is in German. 
A handful of the fields will return formatted HTML, while most are single lines of text, although they are highly likely to include special characters. To make the API easy to use, I wanted consistency throughout. As the text in the HTML fields would not be easy to encode after they contain the data, my first thought was to encode all fields (they can always be un-encoded later in the other fields). Is this the best approach, or should I suffix all the HTML fields (e.g. description_html) to imply they are already encoded, or try something else? The plan is to let people use the API however they want, although initially to let our partners use our data on their website."} {"_id": "142911", "title": "Difference between Javabean and Java Beans", "text": "I'm confused. I've seen two different terms: `Javabean` and `Java Beans`. Is there a significant difference between them?"} {"_id": "142912", "title": "Should I use a separate class per test?", "text": "Taking the following simple method, how would you suggest I write a unit test for it (I am using MSTest; however, the concepts are similar in other tools)? public void MyMethod(MyObject myObj, bool validInput) { if(!validInput) { // Do nothing } else { // Update the object myObj.CurrentDateTime = DateTime.Now; myObj.Name = \"Hello World\"; } } If I try and follow the rule of one assert per test, my logic would be that I should have a Class Initialise method which executes the method and then individual tests which check each property on myObj. [TestClass] public class MyTest { MyObject myObj; [TestInitialize] public void MyTestInitialize() { this.myObj = new MyObject(); MyMethod(myObj, true); } [TestMethod] public void IsValidName() { Assert.AreEqual(\"Hello World\", this.myObj.Name); } [TestMethod] public void IsDateNotNull() { Assert.IsNotNull(this.myObj.CurrentDateTime); } } Where I am confused is around the TestInitialize. If I execute the method under TestInitialize, I would need separate classes per variation of parameter inputs. Is this correct? This would leave me with a huge number of files in my project (unless I have multiple classes per file). Thanks"} {"_id": "142915", "title": "What is a good way to comment if-else-clauses?", "text": "Whenever I'm writing a typical if-else-construct in any language, I wonder what would be the best way (in terms of readability and overview) to add comments to it. Especially when commenting the else clause, the comments always feel out-of-place to me. Say we have a construct like this (examples are written in PHP): if ($big == true) { bigMagic(); } else { smallMagic(); } I could comment it like this: // check, what kind of magic should happen if ($big == true) { // do some big magic stuff bigMagic(); } else { // small magic is enough smallMagic(); } or // check, what kind of magic should happen // do some big magic stuff if ($big == true) { bigMagic(); } // small magic is enough else { smallMagic(); } or // check, what kind of magic should happen // if: do some big magic stuff // else: small magic is enough if ($big == true) { bigMagic(); } else { smallMagic(); } What are your best-practice examples for commenting this?"} {"_id": "142916", "title": "Strategy for backwards compatibility of persistent storage", "text": "In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. 
What I currently do is to save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3->v4->v5->v6->v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?"} {"_id": "105294", "title": "What is the reason to put prefixes in new CSS features?", "text": "Is there a valid reason for the browsers to prefix new CSS features, instead of letting webmasters use the non-prefixed version? For example, sample code for a background gradient looks like: #arbitrary-stops { /* fallback DIY */ /* Safari 4-5, Chrome 1-9 */ background: -webkit-gradient(linear, left top, right top, from(#2F2727), color-stop(0.05, #1a82f7), color-stop(0.5, #2F2727), color-stop(0.95, #1a82f7), to(#2F2727)); /* Safari 5.1+, Chrome 10+ */ background: -webkit-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727); /* Firefox 3.6+ */ background: -moz-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727); /* IE 10 */ background: -ms-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727); /* Opera 11.10+ */ background: -o-linear-gradient(left, #2F2727, #1a82f7 5%, #2F2727, #1a82f7 95%, #2F2727); } What's the point in forcing webmasters to copy-paste the same code four times to get the same result? * * * Note: one of the reasons often quoted is that **prefixed styles are intended to be temporary while either the browser does not implement the spec correctly, or the spec is not definitive**. IMO, this reason is nonsense: * If the browser engine does not implement the spec correctly, the browser will not be compliant, no matter whether it fails to implement it in a non-prefixed form or in a prefixed form. * If the spec is not definitive, it may matter when there were previous implementations with the same name. For example, if CSS2 had `linear-gradient`, but CSS3 was intended to extend `linear-gradient` with additional features, it would be clever to temporarily prefix the new, draft implementation with `-css3- That's two fewer HTTP requests, yet I've not seen this technique in practice. Why not?"} {"_id": "100316", "title": "Programming late at night problem", "text": "I recently found out that programming late at night causes sleeping problems, so what are the best hours for 'after-job' programming? Maybe there is some time interval that should be left between finishing programming and going to bed, or some activity involved?"} {"_id": "240598", "title": "Is it possible for business logic not to creep into the view?", "text": "I've developed several web application projects over the last 3 years, both personal and at work, and I can't seem to figure out whether it's possible for at least _some_ business logic not to end up in the view layer of the application. 
In most cases there will be problems like \"If the user has selected option x then the application must enable him to supply info for y; if not, then s/he should supply info z\". Or some AJAX operation which should apply some changes to the model but NOT commit them until the user has explicitly requested it. These are some of the simplest problems I've encountered, and I can't figure out how it's possible to avoid complex logic in the view. Most of the books I've read describing MVC usually showcase some very trivial examples, like CRUD operations that just update data on the server and display it, but CRUD is not the case in most rich applications. Is it possible to achieve a view with no business logic at all?"} {"_id": "121313", "title": "Getting solutions off the internet. Bad or Good?", "text": "I was looking on the internet for common interview questions. I came upon one that was about finding the occurrences of certain characters in an array. The solution written right below it was, in my opinion, very elegant. But then another thought occurred to me: this solution is way better than what I had come up with. So even though I now know the solution (which most probably someone with a better IQ had provided), my IQ is still the same. That means that even though I may know the answer, it still isn't mine; hence, if that question were asked and I were hired based on my answer to it, I couldn't reproduce that same elegance in my other ventures within the organization. My question is: what do you guys think about such \"borrowed intelligence\"? Is it good? Do you feel that if solutions are found off the internet, it makes you think in that same, more elegant way?"} {"_id": "231364", "title": "Is there a best practice for populating a dropdown in ASP.Net MVC?", "text": "I'm coming into a project where the prevailing approach is to open the page, then use jQuery AJAX calls to populate all of the dropdowns on the page dynamically. To me, this seems like an extra burden on the server, and it looks terrible; plus the functionality seems to be limited, as the user can start using the dropdowns even though they aren't loaded yet. To me, it seems to make much more sense to create a SelectList in the model and populate the dropdown in the View before the page is ever sent over to the browser. Am I missing something in my thinking here? Is there a good reason to dynamically bind the dropdowns via AJAX initially?"} {"_id": "231366", "title": "Global vs Individual object event handlers", "text": "Lately I've been studying a lot of JavaScript samples, both with and without libraries, jQuery to mention one. As an old JavaScript developer, I learned early to make use of unobtrusive JavaScript, where one would add mouse click events to a global handler using `document.onclick = mylib.document_onclick;` Then, by tagging any element with a custom attribute/property/expando, I've been able to deal with all kinds of functions in a very simple way. // HTML
<a href="..." data-mc="ajax">Start page 1</a> <a href="..." data-mc="ajax">Start page 2</a> // JS document_onclick: function (e) { e = getEvObj(e); // custom method to get event object var evSrcTag = getSrcObj(e); // custom method to get source element //mouseclick if (evSrcTag.getAttribute('data-mc') != undefined) { switch (evSrcTag.getAttribute('data-mc')) { case 'logout': if (!confirm(myconfig.msg['asklogout'])) { return cancelEv(e); // custom method to cancel the event } break; case 'ajax': //process ajax request as javascript is available, cancel default event (href) //case ...... } } }, Today I see a lot of solutions where the event is bound/attached straight to a specific element. It appears to quickly become a long list of handlers to be added, together with individual functions and class names tied/added. // HTML <a id="anchor1" href="#">create link dynamically</a> <a id="anchor2" href="#">create link dynamically</a> <a id="anchor3" href="#">create link dynamically</a> // JS $(function(){ $(\"#anchor1\").click( function() { $(\"#anchor1\").append('test1'); }); $(\"#anchor2\").click( function() { $(\"#anchor2\").append('test2'); }); $(\"#anchor3\").click( function() { $(\"#anchor3\").append('test3'); }); }); Referring to the way of setting up handlers, is one approach better than the other, and if so, which one is and why?"} {"_id": "173711", "title": "how to model this relationship ? in ERD", "text": "I have three entities: Technician Vehicle Repair The question is how to model this in ERD? Knowing that a technician can repair multiple cars, and the same car can be repaired by one and only one technician? How should the repair entity be tied to all this?"} {"_id": "14706", "title": "\"Overtime is part of the job\" true but a bad attitude?", "text": "My manager told me that working overtime is just part of the job and that I'm expected to work overtime. We're not paid overtime like most companies. I'm aware that most programmers put in 50-60+ hour work weeks, but is that the attitude a manager should take? It seems like they're taking it for granted. Or maybe I'm totally wrong and it's completely normal :P"} {"_id": "184568", "title": "Turning n-dim points into m-dim where m < n", "text": "> The curve represents the Value, i.e. the brightness of pixels as you can see > them in the composite image. **Red; Green; Blue** > The curve represents the quantity of color in each of the three RGB > channels. Here, dark means little of the color. Light means a lot of the > color. The values range from 0 to 255."} {"_id": "75777", "title": "Benefits of integration platform", "text": "We are looking at introducing an integration platform. In the beginning it is an extra layer and an extra cost. But after a while the services that a new system needs will be available on the integration platform, therefore saving development effort. The question is: does anyone know of any studies that look at how long the above "a while" is? Or are there any ROI studies on the use of integration platforms?"} {"_id": "249751", "title": "Applications of Artificial Intelligence on regular business applications", "text": "This is a question I've had for a really long time. I'm currently learning the basics of AI, mostly from Peter Norvig's book. I already know that you can use several AI techniques in Data Mining and Business Intelligence, but I wonder if there aren't more "normal" scenarios where you can apply AI. Could there be a scenario where an application can be improved by using AI? Books really don't specify more than a few select business cases. Thanks in advance **EDIT** Based on comments I'll be more specific. How can an application be improved by using AI techniques?
And I don't mean adding search capabilities (when there are tools for that already) or BI stuff (I already know that to be AI based). What I want to know is if it is possible to take a regular application (take a shipment scheduling application) and apply some AI techniques to improve its results, or to make recommendations to the user."} {"_id": "208309", "title": "How to migrate my thinking from C++ to C#", "text": "I am an experienced C++ developer, I know the language in great detail and have used some of its specific features intensively. Also, I know principles of OOD and design patterns. I am now learning C# but I cannot seem to get rid of the C++ mindset. I tied myself so hard to the strengths of C++ that I cannot live without some of the features. And I cannot find any good workarounds or replacements for them in C#. What **good practices**, **design patterns**, and **idioms** that differ in C# from a C++ perspective can you suggest? How to get a perfect C++ design not looking dumb in C#? Specifically, I cannot find a good C#-ish way to deal with (recent examples): * Controlling the lifetime of resources that require deterministic cleanup (like files). This is easy having `using` in hand, but how to use it properly when ownership of the resource is being transferred [... between threads]? In C++ I would simply use shared pointers and let them take care of 'garbage collection' just at the right time. * Constant struggling with overriding functions for specific generics (I love things like partial template specialization in C++). Should I just abandon any attempts to do any generic programming in C#? Maybe generics are limited on purpose and it is not C#-ish to use them except for a specific domain of problems? * Macro-like functionality. While generally a bad idea, for some domain of problems there is no other workaround (e.g. conditional evaluation of a statement, like with logs that should only go to Debug releases). Not having them means that I need to put more `if (condition) {...}` boilerplate and it is still not equal in terms of triggering side effects."} {"_id": "219286", "title": "What to consider when designing a web application that will be deployed under a load balancer?", "text": "I am currently maintaining a Java web application that was initially designed to work only as a single instance (not in a cluster/farm). Now, the client is planning to upgrade their infrastructure and part of their plan is to place our web application behind a load balancer. I have already identified several serious design problems: 1. Users can upload files and our application stores those files in the local filesystem. 2. There are scheduled jobs that might cause problems when executed concurrently (e.g. generating files). 3. Session variables are heavily used in most modules. I need opinions on my proposed solutions below. 1. I can solve item 1 by storing all files in an external/shared storage (SAN, etc.) 2. With item 2, I can create a locking mechanism in the database so when the scheduled jobs run, the web apps will first check the table and only the first one to update will run the said job. 3. Actually, I'm still not sure if item 3 can cause problems. If a user logs in to our application and the load balancer directs him to Server 1, is it possible that the load balancer will point him to Server 2 the next time he clicks a link? I have no idea yet how the load balancer works at this level.
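As a rough illustration of the database lock proposed in item 2 above, here is a minimal sketch in Python; sqlite3 is used only for brevity (a clustered deployment would point at the shared production database), and the job_locks table and job name are hypothetical:

    import sqlite3, datetime

    def try_claim_job(conn, job_name, run_date):
        # The UPDATE is atomic: only one instance per run_date gets rowcount == 1.
        cur = conn.execute(
            "UPDATE job_locks SET last_run = ? WHERE job = ? AND last_run < ?",
            (run_date, job_name, run_date),
        )
        conn.commit()
        return cur.rowcount == 1  # True only for the instance that won the race

    conn = sqlite3.connect("jobs.db")
    conn.execute("CREATE TABLE IF NOT EXISTS job_locks (job TEXT PRIMARY KEY, last_run TEXT)")
    conn.execute("INSERT OR IGNORE INTO job_locks VALUES ('nightly-report', '1970-01-01')")
    today = datetime.date.today().isoformat()
    if try_claim_job(conn, "nightly-report", today):
        print("this instance runs the job")
    else:
        print("another instance already claimed it")
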
Also, what are the other things that I should consider when designing a horizontally scalable web application [from scratch]?"} {"_id": "155943", "title": "Is it practical to have a perfect validation score on HTML?", "text": "I was in a heated discussion the other day about whether or not it's practical to have a perfect validation score on any HTML document. By practical I mean: * Does not take a ridiculous amount of time compared to its almost-perfect counterpart. * Can be made to **look good on _older browsers_** and to **be usable on _very old browsers_**. * Justifies the effort it may take to do so (does it come with some kind of reward on SEO/Usability/Accessibility that cannot be achieved in a simpler way with almost-perfect validation?) So basically, **is a perfect validation score practical on any HTML document?**"} {"_id": "138746", "title": "Modularity vs Single class simplicity", "text": "I have been part of the organization I work in for the past year and a half or so. The company basically writes Perl code and they have large amounts of legacy code which originally prevented the company from moving forward with any code changes. My first observation with the existing code was the non-existence of modularity. Over the time I worked here I, with the help of a few (who left the company), did a lot of hard work modularizing all the core sections of the application. I believe that we now have about 70% of the core systems modularized, coherent and layered. At this point in time I am accused of complicating the system design, and everything I try to do is sort of looked at with suspicion. To design these systems I applied all the best design approaches (patterns) and methodologies (SOLID) available or known to me. When I wrote it, it was all appreciated, but now the developers want to revert back to what they believe are simpler methodologies (which basically involve having large single classes doing hundreds of things). I still do not believe that those approaches are actually going to lead to a simpler design, but no one is actually ready to listen. To give an example, our code basically loads a bunch of files during its lifetime. Traditionally the class handled a lot of things like loading the file, repeatedly checking if the file has changed, whether the right version of the file is loaded, etc. We also load a different set of files for test cases to ensure that it is working fine. All this was done in a very unorganized fashion earlier. Everyone created their own version of the file. There were lots of versions of these files and they were unmanageable. I introduced a subsystem that will load you the right file (doing the version verifications) given a name and a configuration that tells all the versions of the file available (see the sketch below). It eventually ensured that only two to three versions of the files are used across all the test cases. It basically follows a facade and factory pattern where the loader will create the right file object, which exposes an interface to load, save and should_reload. It was eventually even used to load contents from CouchDB, as those also only represent a document. This is now too complicated for the developers and no one really reads the code or the documentation I wrote. I do agree that going back to our old system will remove one subsystem but it will bring back the problem of an unmanageable number of files. At this juncture I find myself quite unsure of all the things I have learnt.
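To make the loader subsystem just described concrete, a minimal sketch in Python; the catalog layout, paths and class names are hypothetical stand-ins for the questioner's Perl code:

    CATALOG = {
        # name -> {version -> path}; the configuration that lists known versions
        "tax-rules": {1: "data/tax_v1.csv", 2: "data/tax_v2.csv"},
    }

    class LoadedFile:
        def __init__(self, path):
            self.path = path
        def load(self):
            with open(self.path) as f:
                return f.read()
        def save(self, text):
            with open(self.path, "w") as f:
                f.write(text)
        def should_reload(self):
            return False  # e.g. compare mtimes in a real implementation

    def get_file(name, version=None):
        """Facade: resolve a name (+ optional version) to the right file object."""
        versions = CATALOG[name]
        version = version or max(versions)   # default to the newest version
        return LoadedFile(versions[version])
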
I find myself not really interested in suggesting any improvements as my solution will only be accused of being too complicated. What do you think I can do at this point in time?"} {"_id": "231984", "title": "How to develop an algorithm for brute-forcing / backtracking?", "text": "As a beginner programmer, I don't know how to conceptually think about brute-forcing. My mind can't really fathom how to write code that will try every possibility. I have a problem that I want to solve. Here is a code snippet (there are other functions, but no reason to include them here, they just do background work): main(): step_segment = [] # contains current segment STATS = [250,0,0,0,13,0] # contains 1 step (current step stats) # STATS = [step_id, danger, danger_limit, rnd, b_loop, enc] # Temporary Interface choice = None while choice != \"0\": print \\ \"\\n1 - Benchmark Run\" choice = raw_input(\"Choice: \") if choice == \"0\": print \"\\nGoodbye.\" elif choice == \"1\": while (len(step_segment)) < 128: step_segment, STATS = take_step(step_segment, STATS) if STATS[5] == \"B\": \"Don't know what to do here. Try every possibility!?\" **The goal:** I want to produce a list of every possible 'route' through a segment, along with how long it takes. **Description:** 1. I will take a step in the route (there's only one direction: forward). Every time I take a step, my stats are updated. This takes a step and updates the stats: `step_segment, STATS = take_step(step_segment, STATS)` 2. A list of steps taken, along with the stats, is kept in `step_segment`. This is so I can 'undo' an arbitrary number of steps, if I want. To undo a step, call the function: `step_segment, STATS = undo_step(step_segment, STATS)` 3. I can see how long my current route has taken by doing: `time = frames(step_segment)`. 4. At some point, I will get into a Battle. I get into a Battle when `STATS[5] == \"B\"`. 5. When there is a battle, I simply have two choices: **i.** Fight the battle, or, **ii.** Run away. 6. If I want to Fight, I do: `step_segment = do_fight(step_segment, STATS)`. This also records that I chose to fight, along with the stats, in `step_segment`. (So I can undo it, if I want.) 7. If I want to Run Away, I do: `step_segment = run_away(step_segment,STATS)`. Can someone advise me how I can code a brute-forcer for this problem? **Specifically**, I would like to know how I can tell the computer to try every possibility; I simply cannot think how to... >.<' I want to see every possible combination of Run & Fight (the only two choices, when I reach a battle). There are only around 200 possibilities. I only need to take 128 steps, so there is a finite number of possibilities, hence: `while (len(step_segment)) < 128`. I just don't know how to accomplish such a thing in Python. My whole reason for learning how to program is to solve this problem. I think I can explain the solution in English, as I have done above, but I don't know how to code it. I would be **very** appreciative if someone could advise me on this!! Thank you."} {"_id": "231981", "title": "OO design for client-server/RPC/n-tier data transfer (specifically SignalR)", "text": "I'm using SignalR to implement a client/server system, but I guess this question could apply to other tiered/client-server/RPC systems. If you aren't familiar with SignalR, you basically create a server-side class (called a "hub") that contains methods you want to expose to clients.
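For the brute-forcing question above, the usual answer is recursion: at every battle, try one choice, recurse, undo it, and try the other. A self-contained Python sketch of that shape; is_battle() is a toy stand-in for the questioner's STATS[5] check, and the route strings stand in for step_segment:

    def is_battle(step_index):
        # Toy stand-in for the questioner's STATS[5] == "B" condition.
        return step_index > 0 and step_index % 13 == 0

    def search(step_index, total_steps, route, results):
        if step_index == total_steps:
            results.append("".join(route))   # one complete route explored
            return
        if is_battle(step_index):
            for choice in "FR":              # F = fight, R = run away
                route.append(choice)
                search(step_index + 1, total_steps, route, results)
                route.pop()                  # backtrack: undo the choice, try the next
        else:
            route.append(".")                # an ordinary forward step
            search(step_index + 1, total_steps, route, results)
            route.pop()

    results = []
    search(0, 128, [], results)
    print(len(results), "possible routes")   # 2 ** (number of battles)
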
On the client-side, SignalR exposes a "hub proxy" class via which you can invoke the server-side hub methods. Let's say I want to remotely control cars from my client application. My first thought was to create an abstract `CarBase` base class with methods such as Start(), Stop(), Steer(), etc. On the client-side I would implement a concrete `Car` class whose methods would invoke (via the SignalR hub proxy) the relevant methods on the server hub. Similarly, on the server-side I would implement a concrete `Car` class, but this time its methods would carry out the relevant actions on the "physical" car. Let's say the client wants to start a car:- var car = new Car(); car.Id = 123; car.Start(); The Car's `Start` method would basically invoke the relevant server method via the hub proxy, e.g.:- public override void Start() { _hubProxy.Invoke(\"StartCar\", this); } _Here is my first question_ - is it acceptable for the Car class to have a dependency on the hub proxy, and pass itself to the remote method like this? I've read that functionality such as persistence is a separate concern and should therefore live in a separate class, and I'm in two minds about whether that guideline applies to what I'm doing here. If so, it would result in client code such as `_hubProxy.StartCar(car);` rather than the more OO-friendly and intuitive "car.Start();". Moving on, on the server-side, the "hub" class's StartCar() method might look something like this:- public void StartCar(CarBase clientCar) { // Instantiate a server-side Car // (this implementation interacts with the "physical" car). var serverCar = new Car(); // Map properties from received client car serverCar.Id = clientCar.Id; .. etc.. serverCar.Start(); } Is this an acceptable approach, i.e. having the client-side pass in its Car object? Or should I be passing around something else, e.g. just the car's ID and other pertinent properties? I guess I can't avoid instantiating the server-side Car (and mapping its properties) - after all, this is where the functionality to interact with the physical car lives. I've gone round in circles trying to think of a better solution, and have reached the point where I can't see the wood for the trees."} {"_id": "200684", "title": "What is a good rule-of-thumb for naming link-tables?", "text": "In the same way that a `publication` table might relate to a `person` table via `subscriptions`, or a `company` table might relate to a `person` table via `employee`, I'm wondering if there is a descriptive way to relate a `company` table to a `company_type` table. Here are some rough (and simplified) examples of the relevant tables. **company:** - |id|name| **company_type:** - |type|description| **{name needed}:** - |id|company_id|type| Also, I realize that not all relationships can be as succinctly named as `subscriptions` or `employee`, so if that's the case here, what would be a good rule of thumb to avoid a near name-collision with something like `company_types`? _Additional Details:_ Company types, in our current case, are _somewhat_ convoluted. This industry has multiple supply channels and multiple customer channels, so while "vendor" is a valid company_type, a vendor can also be "independent", "authorized", or "franchised" ... or any mix of the three.
Customer types are very similarly multi-faceted, and to further compound the issue, a single company can simultaneously be of some vendor and customer types."} {"_id": "5119", "title": "How to bypass middlemen?", "text": "I'm freelancing on a project where I'm the only programmer, and find myself at the end of a line of four middlemen, who stand between me and the actual customer, each passing my work off as internal to their own company. Communication is terrible and the requirements, made by an advertising company, are flimsy. I've managed to communicate with people higher up the ladder by continually asking questions that made people face their ignorance, but they won't let me contact the end client since, from his end, it's pretty much a done deal. The project will soon be over though, and I've decided it's the last time I'll be working under these conditions. The middlemen are pretty much useless from the perspective of shipping a product, but still necessary to me since they are the ones bringing the contracts in. Hence I'm not thinking about crossing them altogether, which would probably end badly. Rather I'm looking for a way to make them understand I need to be part of the requirements and design process, meet the clients, and shouldn't have to go through a whole channel of clueless people each time I require some information. Sorry for the venting :) Any ideas?"} {"_id": "205247", "title": "What logic is used to control Lock and Update actions in P2P networks? (similar to Bitcoin)", "text": "Bitcoin relies on Proof of Work (hashing) to control the writing to a central repository (the blockchain). It also relies on a set of rules for these updates, and defines what happens if a conflict is found (longest chain wins). Using that as an example for my question: * What are some architectural examples of different distributed lock and/or update mechanisms suitable for P2P networks? (I'm looking for descriptions of software design suitable to this project, not software products) * What key logic or rules are fundamental to that decentralized "database"?"} {"_id": "200681", "title": "What to do when a company requests permission to use open source code without attribution?", "text": "I have an open source project currently under the MIT license. I have received a request from a company to use my code for their commercial project without having to give any attribution or credit. To be honest, when I released the code, my sole intention was only to help a fellow programmer, and I didn't really think about whether I was credited. Choosing the license was just one of the steps I had to do to set up the project on codeplex. On one hand, I feel honored and appreciate that they actually bothered to ask; on the other hand, I felt that just allowing them to do so without any cost may destroy the spirit of open source. **What are the typical things I or other code owners can do or request from the company to make it a fair trade? Should I even allow it?** I am thinking of asking the company to write an official letter of intent and I will sign against it just to make it more formal; and also to request a donation to a project/charity of my choice or buy something on my wishlist as compensation (not very expensive). Will that be too much?"} {"_id": "122668", "title": "Django as Python extension?", "text": "I come from the **php** community and just started learning **Python**. I have to create server-side scripts that manipulate databases, files, and send emails.
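For the Django question above, a rough sketch of querying a database and sending an email with only Python's standard library (sqlite3 stands in for any DB-API driver such as MySQLdb; the server name and addresses are made up):

    import sqlite3
    import smtplib
    from email.message import EmailMessage

    # Query a database (the DB-API shape is the same across drivers).
    conn = sqlite3.connect("app.db")
    rows = conn.execute("SELECT name FROM users WHERE active = 1").fetchall()

    # Send an email.
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "me@example.com", "you@example.com", "Report"
    msg.set_content(f"{len(rows)} active users")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
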
Some of it I found hard to do in python, compared to php, like sending emails and querying databases. Where in **php** you have functions like **mysql_query()** or **mail()**, in python you have to write a whole bunch of code. Recently I found Django, and my question is: is it a good framework for network-oriented scripts, instead of using it as a web framework?"} {"_id": "122667", "title": "Can I use jQuery Mobile if I am developing a native app?", "text": "I am new to jQuery and mobile app development. I know the features of jQuery Mobile. I want to know where and why to use it. Can I use jQuery Mobile if I am developing a native app?"} {"_id": "136948", "title": "Entry level software developer technical interview... I need preparation advice", "text": "I'm pretty fresh out of college. So I don't have a lot of experience with job interviews. I'm interviewing for an entry level software developer job. I've already made it past the first phone screen and a coding challenge. Now I'm going in for a face to face technical interview. If I pass this then I probably get the job. The job description that I'm interviewing for now says I will be working mainly with C++, some C#, databases, and web development. It's a smaller company and my responsibilities will be a little of everything (design, documentation, coding, and QA). I bombed my last technical interview (with a different company) because I prepared by studying the crap out of all the technologies mentioned in the job description (c#, html, sql, ajax, javascript, etc) but the interview was all questions about hash maps vs binary trees, abstract classes vs interfaces, and recursion, and absolutely nothing about the technologies mentioned in the job description. I would have done amazingly on it if I had prepared by going over everything I learned in college... but the last time I worked with most of the stuff he asked about was my sophomore year in college, so I was very rusty. Anyway, I'm just looking for some advice on my situation. Should I focus on the computer science stuff that I learned in college like data structures, algorithms, recursion, etc. or should I focus on the technologies they mention in the job description? Any advice from someone with experience would be greatly appreciated."} {"_id": "237204", "title": "Maintaining sorted Array speed", "text": "I have a for loop running over a list of objects like: [{a: 2001, b: \"hello\"}, {a: 54, b: \"hi\"}....] In this loop, I filter out objects based on certain field values (like, b == hello?) and create a new list of filtered objects. Once the for loop is complete, I sort the remaining objects by the value of a. This is an oversimplified example, so presorting by field a isn't possible. My question is, is it faster to do the sort after the loop is complete vs doing something like a binary search at the end of each iteration and then inserting the object there? That would avoid sorting at the end, but I don't really know if that ends up costing more."} {"_id": "237200", "title": "Traversing an AST using Visitors", "text": "I'm writing a compiler for a C-like language, and I'm looking for an elegant way to traverse my abstract syntax tree. I'm trying to implement the Visitor pattern, although I'm not convinced that I'm doing it correctly. struct Visitor { // Expressions virtual void visit(AsgnExpression&); virtual void visit(ConstantExpression&); ... virtual void visit(Statement&); ...
virtual void finished(ASTNode&); protected: virtual void visit(ASTNode&) = 0; }; `visit` is overloaded for each type, and by default each overload will call `visit(ASTNode&)`, which subclasses are forced to implement. This makes it easier to do quick and dirty things, although defining a `visit` for each type is tedious. Each subclass of `ASTNode` must implement an `accept` method which is used to traverse the tree structure. class ASTNode { public: virtual ~ASTNode(); virtual void accept(Visitor& visitor) = 0; }; However, this design is quickly becoming tedious because the `accept` methods are often very similar. **Who should be responsible for traversing the structure, the nodes or the visitor?** I'm leaning towards having `ASTNode` provide an iterator for accessing its children, and then having the visitor traverse the structure. If you have any experience designing Abstract Syntax Trees, please share your wisdom with me!"} {"_id": "237201", "title": "Objective-C style: Do I implement factory methods or init methods?", "text": "I'm new to Objective-C programming, and creating various classes for an iOS application I'm working on. When creating objects, it seems like many classes in the built-in frameworks use the "static factory method" pattern, like this: MyObject* m = [MyObject objectWithName:@\"foo\" id:@7 description:@\"bar\"]; however, many classes also simply have overrides on `init`, like this: MyObject* m = [[MyObject alloc] initWithName:@\"foo\" id:@7 description:@\"bar\"]; I can see that if I want to cover all my bases, I'd implement both, and have the `objectWithName...` method call `initWithName...`, however this seems quite tedious. I was wondering - is there any style guidance around when I should implement the factory method pattern vs an `init` overload? I've googled for this, but have been unable to find anything (most likely because the terms are quite generic and google doesn't search well for them). Any advice or opinions would be much appreciated."} {"_id": "72865", "title": "How much is Google investing in the Go language?", "text": "I have read quite a bit about the Go language, and it seems promising. The last important bit of information I am missing before I decide on spending more effort on the language is: How much money/manpower do Google or other companies invest in the development effort? If this information cannot be provided, do you have any other information showing the commitment of Google to the project? Is it being used as the primary language for a new investment or similar (my guess is that it is too early for this, but I do not know)?"} {"_id": "125123", "title": "Is there another meaning to prototype?", "text": "I am reading Pro C#. It says > do be aware that the value 0 is automatically returned, even if you > construct a Main() method prototyped to return void. What does prototyped mean in this context? The only C# related definition of a prototype I'm aware of has to do with a model or test program tossed together without the intention to go to production."} {"_id": "125128", "title": "Is it sensible to teach programming with a language I don't know myself (yet)?", "text": "I have a friend who has asked me to teach him how to program. I was thinking that using a language that I do not know could be beneficial because: * I will learn something new too. * That will make me slow down, as I've been told that I usually explain things too fast. So now I need to find some balance between what is didactic for a _newbie_ and is useful to me.
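For the AST question above, here is a sketch (in Python, for brevity) of the "visitor drives the traversal" option: nodes expose children() and the base visitor walks them, so subclasses need no accept() boilerplate. This mirrors how Python's own ast.NodeVisitor works; all class names here are hypothetical:

    class Node:
        def children(self):
            return []

    class BinOp(Node):
        def __init__(self, left, right):
            self.left, self.right = left, right
        def children(self):
            return [self.left, self.right]

    class Num(Node):
        def __init__(self, value):
            self.value = value

    class Visitor:
        def visit(self, node):
            # Dispatch on the node's class name; fall back to generic_visit.
            method = getattr(self, "visit_" + type(node).__name__, self.generic_visit)
            return method(node)
        def generic_visit(self, node):
            for child in node.children():   # the visitor owns the traversal
                self.visit(child)

    class NumCounter(Visitor):
        def __init__(self):
            self.count = 0
        def visit_Num(self, node):
            self.count += 1

    counter = NumCounter()
    counter.visit(BinOp(Num(1), BinOp(Num(2), Num(3))))
    print(counter.count)  # 3
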
My draft of requirements: * Not C, Java or Python (because I already know them). * My personal interests now are in Erlang, Scala and C++ (but they might be too hardcore)... I could refresh Scheme or Prolog too. * Need a good IDE or REPL to play with. * Clean and simple language. * Good (and free!) documentation. * Somewhat practical for the real world (e.g. making a web application, image processing, statistical analysis...) About the methodology, I was thinking about this sequence: * Introduction to logic. * Basic maths and algorithms. * Software design principles. * Some foundations on computer technology. * Make some large project. So, any suggestions about the language or methodology? **edit**: My "student" is a 28-year-old MD; his goal is to make a web site for medicine students. He is also into research, so I think he could find immediate use for his new programming skills in data analysis and plotting. Ultimately I might help him directly with the project."} {"_id": "84790", "title": "How to increase the speed of page loading in ASP.NET?", "text": "I am a .NET programmer using ASP.NET, C# and SQL Server 2005 to display sets of information. My project is to create tools for Call Center Systems in my office. Now I have created the project, but it's too slow and takes a long time during page loads and postbacks. I need suggestions to improve page performance, and what technologies can be used to make the tool efficient..."} {"_id": "94591", "title": "Interviewing by providing an application to work on?", "text": "While thinking about better ways to design technical interviews (meaning not based on social skills but purely technical ones) I started to think about this possible way of doing it (without having had the possibility to test it): 1. Determine the kind of problems, high or low level of abstraction, from architecture to bitwise manipulations if that's your shop's business, that you are solving every day in the applications you work on. For this example I'll suppose we are in a PC game development company that produces a game based on extensions via modules. 2. Write a simple application that solves those problems. Let's say a simple game that doesn't have the same scale as the one we're working on, but that would be comparable. 3. Add bugs. Architecture bugs and more language-knowledge related bugs if it's critical (for example it's important to know about virtual destructors in C++). Make it obviously buggy, maybe make it crash first and buggy after that. Make sure it works with the tools that most people use in the domain (like cross-platform C++ for a simple game, providing the libraries' code with the rest). 4. Provide the full source code of the application to the candidate and tell him to a) fix the code, b) add a functionality (related to the position if necessary). The candidate would have a limited time to send something back. For example, one week or two. That time would be several times the time needed to do the work full time; it has to be fair. 4'. Maybe a different way to do that would be to provide the application source code publicly online and start to meet only the candidates who sent you the working application with the additional feature...but that would work very differently so I'm not sure if it's really interesting too. For 4.b., I'm assuming that the functionality to add is simple but requires basic understanding of the overall organisation of the code. That would not be adding content. 5. Make the candidate explain what he did.
All that process would be a kind of interview screening; there would be a meeting after that if the candidate did provide something. It feels like parallel screening: you screen several people at the same time without spending time sequentially on each. A meeting with discussions about the code is obviously really necessary. The questions: a) Do you think that it would be pertinent? What do you think? (I'm not very experienced in interviewing so I'm relying on your experiences) b) Would it be considered paid work? Even if it's not work done on your product but just an interview-specific application? c) I'm assuming that the candidate will use whatever he has available to solve the problems, even friends helping, because I think that better reflects a real development environment. Is there something wrong with assuming that? (Yes I'm questioning my own way of thinking, they say that's sane. Are they wrong?) d) Is there a variant that would be more interesting?"} {"_id": "252538", "title": "Stable, Hash Based Symmetrical Difference", "text": "A lot of thought went into the implementation of the set algorithms in LINQ: `Distinct`, `Except`, `Intersect` and `Union`. They guarantee that the items are returned in the same order that they appear in before calling the method. So, these two expressions return the items in the same order: var authors1 = books.Select(b => b.Author).OrderBy(a => a).Distinct(); var authors2 = books.Select(b => b.Author).Distinct().OrderBy(a => a); The implementation of `Distinct` would look something like this: public static IEnumerable<T> Distinct<T>(this IEnumerable<T> source) { HashSet<T> set = new HashSet<T>(); foreach (T item in source) { if (set.Add(item)) { yield return item; } } } The implementation of `Except` would look like this: public static IEnumerable<T> Except<T>(this IEnumerable<T> source1, IEnumerable<T> source2) { HashSet<T> set = new HashSet<T>(source2); foreach (T item in source1) { if (set.Add(item)) { yield return item; } } } The implementation of `Intersect` would look something like this: public static IEnumerable<T> Intersect<T>(this IEnumerable<T> source1, IEnumerable<T> source2) { HashSet<T> set = new HashSet<T>(source2); HashSet<T> found = new HashSet<T>(); foreach (T item in source1) { if (set.Contains(item) && found.Add(item)) { yield return item; } } } I added a second set to remove duplicates from the list. Finally, `Union` is a little more complex, but basically the same thing: public static IEnumerable<T> Union<T>(this IEnumerable<T> source1, IEnumerable<T> source2) { HashSet<T> set = new HashSet<T>(); foreach (T item in source1) { if (set.Add(item)) { yield return item; } } foreach (T item in source2) { if (set.Add(item)) { yield return item; } } } So, the only operation not supported by LINQ is symmetrical difference, or `SymmetricExcept`. I have been playing around with creating a "stable" version of this algorithm and out of pure curiosity am wondering if there is a better implementation. The most straight-forward implementation is to just call `Except` twice: public static IEnumerable<T> SymmetricExcept<T>(this IEnumerable<T> source1, IEnumerable<T> source2) { var except1 = source1.Except(source2); var except2 = source2.Except(source1); return except1.Concat(except2); } Although, this requires going through both lists twice.
So I wrote a version that only requires going through the second list twice: public static IEnumerable<T> SymmetricExcept<T>(this IEnumerable<T> source1, IEnumerable<T> source2) { HashSet<T> set = new HashSet<T>(source2); HashSet<T> seen = new HashSet<T>(); foreach (T item in source1) { if (seen.Add(item) && !set.Contains(item)) { yield return item; } } foreach (T item in source2) { if (!seen.Contains(item) && set.Remove(item)) { yield return item; } } } I haven't verified the correctness of these algorithms. There is some ambiguity about how LINQ handles duplicates. I am curious if there is a more efficient way to do `SymmetricExcept`, something better than `O(m + 2n)`."} {"_id": "252938", "title": "Is splitting up a function into several inner functions an anti-pattern?", "text": "Imagine a long and complicated process, which is started by calling function `foo()`. There are several consecutive steps in this process, each of them depending on the result of the previous step. The function itself is, say, about 250 lines. It is highly unlikely that these steps would be useful on their own, not as a part of this whole process. Languages such as Javascript allow creating inner functions inside a parent function, which are inaccessible to outer functions (unless they are passed as a parameter). A colleague of mine suggests that we can **split up the content of `foo()` into 5 inner functions**, each function being a step. These functions are inaccessible to outer functions (signifying their uselessness outside of the parent function). The main body of `foo()` simply calls these inner functions one by one: function foo(stuff) { var barred = bar(stuff); var bazzed = baz(barred); var confabulated = confabulate(bazzed); var reticulated = reticulate(confabulated); var spliced = splice(reticulated); return spliced; // function definitions follow ... } **Is this an anti-pattern of some sort? Or is this a reasonable way to split up long functions?** Another question is: is this acceptable when using the OOP paradigm in Javascript? **Is it OK to split an object's method this way**, or does this warrant another object, which contains all these inner functions as private methods? **See also:** * Is it OK to split long functions and methods into smaller ones even though they won't be called by anything else? - a previous question of mine, which leads into this one. * @KilianFoth's answer to a very similar question which provides a different perspective compared to the answers given here."} {"_id": "232183", "title": "Best practice for security checks, in surface or deep layer?", "text": "Let's take a server-side web services app: we need to make sure that every function applies all the security rules, and keep the code clean. In such a case, I usually prefer to place my security checks in the upper layers. As soon as the user calls a function, I check if he has the rights to access it or not. But this strategy doesn't always work, for instance if we need to retrieve info from the database before performing all the checks. Is this a bad idea? Would it be better to place security checks in deep layers, just before/after accessing the database? I'm trying to figure out the best approach for a system that has a lot of security checks, while avoiding creating a big spaghetti. The question could be like "What are the best practices to make sure an app is secure and the code is easy to maintain?"."} {"_id": "39888", "title": "Job title inflation and fluffing", "text": "When you work on the same project for a relatively long time you get more experienced.
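Returning to the SymmetricExcept question above: the same stable, hash-based idea can be written as a Python generator, with one pass over the first sequence and two over the second. This is only a sketch of the algorithm, not of the LINQ internals:

    def symmetric_except(a, b):
        in_b = set(b)
        seen_a = set()
        for item in a:
            if item not in seen_a:
                seen_a.add(item)
                if item not in in_b:
                    yield item               # item only in a, first occurrence
        emitted = set()
        for item in b:
            if item not in seen_a and item not in emitted:
                emitted.add(item)
                yield item                   # item only in b, first occurrence

    print(list(symmetric_except([1, 2, 2, 3], [3, 4, 4, 5])))  # [1, 2, 4, 5]
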
You may also master many new technologies. Besides the coding you may also do what would qualify as other roles. There is however one part of your career that may not get updated. That is your job title. It seems that besides all the technology hype there is also job title hype. It all depends on which company you work for. Many companies give employees better job titles because they want to keep them. The employee doesn't change their job because the current title is much better, even if they would get better working conditions and benefits if they changed their job. When you consider changing your job you notice that your job title is kind of "outdated". People with less skill have a much better title for their job than you. You may very well explain what you did on your project but the fact is that many employers go by the title. So here are the questions: * Do you change your current title in your CV? * What are other options? Here are some good readings regarding these phenomena: * Job title inflation * Job title fluffing"} {"_id": "146519", "title": "What companies hire data scientists?", "text": "What kind of data jobs are there in the software industry for someone with a B.S. in pure mathematics and a minor in statistics but with little experience in programming? I want to work on the data research side rather than on the software development side. What companies hire data scientists?"} {"_id": "253471", "title": "Single complex or multiple simple autoload functions", "text": "Using **spl_autoload_register()**, should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called function? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name and tries to predict where it is and then includes it. Due to naming conventions for the project this has been pretty simple. if has namespace if in template namespace look in Root\Templates else look in Root\Modules\Namespace else look in Root\System if file exists include But we are starting to include Interfaces and Traits into our codebase and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic to it, we are looking at having multiple autoload functions registered. But each one follows the same pattern and any time I see that I get paranoid about code copying. function systemAutoloadFunc logic to create probable filename if filename exists in system include it and return true else return false function moduleAutoloadFunc logic to create probable filename if filename exists in modules include it and return true else return false Every autoload function will follow that pattern and the last part of each function _if filename exists, include return true else return false_ is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists include pattern we are using ever changes.
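For the autoload question above, the copy/pasted functions can be generated from data by a single factory, so the shared "if file exists, include it" tail lives in exactly one place. A Python sketch of the shape (the path mapping is hypothetical; in PHP the same idea works with closures passed to spl_autoload_register):

    import os

    def make_loader(root):
        def loader(class_name):
            path = os.path.join(root, class_name + ".php")  # hypothetical mapping
            if os.path.isfile(path):
                print("include", path)                      # stand-in for include
                return True
            return False
        return loader

    loaders = [make_loader(r) for r in ("Root/System", "Root/Modules", "Root/Templates")]

    def autoload(class_name):
        # Try each generated loader in order; the shared tail exists only once.
        return any(load(class_name) for load in loaders)
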
Or is it just paranoia, and are multiple functions with some identical code the best option?"} {"_id": "133738", "title": "How can I properly compare double values for equality in a unit test?", "text": "I recently designed a time series module where my time series is essentially a `SortedDictionary`. Now I would like to create unit tests to make sure that this module is always working and producing the expected result. A common operation is to compute the performance between the points in the time series. So what I do is create a time series with, say, {1.0, 2.0, 4.0} (at some dates), and I expect the result to be {100%, 100%}. The thing is, if I manually create a time series with the values {1.0, 1.0} and I check for equality (by comparing each point), the test would not pass, as there will always be inaccuracies when working with binary representations of real numbers. Hence, I decided to create the following function: private static bool isCloseEnough(double expected, double actual, double tolerance=0.002) { return squaredDifference(expected, actual) < Math.Pow(tolerance,2); } Is there another common way of dealing with such a case?"} {"_id": "202938", "title": "Secure an Application/Software by expiration with Date?", "text": "I have been working on some software applications and I update them every 6 months. Currently, the way I track the date is by extracting the date from the system when the user installs the application, encrypting it and storing it in a file locally. Whenever the application is started, it checks if 6 months have passed; then it works or it doesn't, in which case it shows an error message telling the user to update. If the user finds the encrypted date in the file they can simply replace it with one from a more recent install. I am wondering if there's a more secure way to do this?"} {"_id": "207727", "title": "Why is there no markdown for underline?", "text": "I am wondering why there is no markdown syntax for underline? I know that basic html tags can be embedded to achieve this but I am trying to understand why `underline` got omitted when **bold** and _italics_ exist."} {"_id": "202932", "title": "Named output parameters vs return values", "text": "Which code is better: // C++ void handle_message(...some input parameters..., bool& wasHandled) void set_some_value(int newValue, int* oldValue = nullptr) // C# void handle_message(...some input parameters..., out bool wasHandled) void set_some_value(int newValue, out int oldValue) or bool handle_message(...some input parameters...) ///< Returns -1 if message was handled //(sorry, this documentation was broken a year ago and we're too busy to fix it) int set_some_value(T newValue) // (well, it's obvious what this function returns, so I didn't write any documentation for it) The first one doesn't have, and doesn't need, any documentation. It's self-documenting code. The output value clearly says what it means, and it's really hard to make a change like this: - void handle_message(Message msg, bool& wasHandled) { - wasHandled = false; - if (...) { wasHandled = true; ... + void handle_message(Message msg, int& wasHandled) { + wasHandled = -1; + if (...) { wasHandled = ...; With return values such a change could be done easily: /// Return true if message was handled - bool handle_message(Message msg) { + int handle_message(Message msg) { ... - return true; + return -1; Most compilers don't (and can't) check documentation written in comments. Programmers also tend to ignore comments while editing code.
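For the double-comparison question above: the "close enough" idea is common enough that Python ships it as math.isclose, and a relative tolerance is usually preferable to the fixed absolute one in the posted isCloseEnough(). A short sketch:

    import math

    expected, actual = 1.0, 0.9999999999
    assert math.isclose(expected, actual, rel_tol=1e-9, abs_tol=1e-12)

    # Hand-rolled equivalent of the idea, scaled by the magnitudes involved:
    def is_close_enough(expected, actual, tolerance=0.002):
        return abs(expected - actual) <= tolerance * max(abs(expected), abs(actual), 1.0)
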
So, again, the question is: if a _subroutine_ has a single output value, should it be a _procedure_ with a well-named, self-documenting output parameter, or should it be a _function_ which returns an unnamed value and has a comment describing it?"} {"_id": "133730", "title": "How should I handle exposing Java beans to end users? ", "text": "I need to create some Java interfaces to query some database tables and some web services for a new framework I'm building. In a previous Java Spring web application, I used Java Beans to handle the results of the query. In this case, I need to write some interfaces to publish the methods that will be available to end users. I'm a little bit of a newbie in this particular aspect of Java and I was wondering: is it better to allow the end user to see and use the Java beans in JSP, or to hide them using some generic type?"} {"_id": "133733", "title": "How to optimize methodology to collect Market Data?", "text": "I am working on an application which goes and gets market data from different sources. I am using a polling mechanism where the vendor/broker puts his data file on an FTP location and my application hits that FTP location multiple times during the day to get data. Right now, we hit the location every 5 mins, and if the data is not there, again after 5 mins. In its current architecture the system uses a lot of resources during the 3-hour time window in which we do this polling. I want to improve the performance of the application, so what would be best practices to solve this kind of software problem? **Note:** Some might say that this problem is engineering and should not be posted here, but my target audience is software guys working in finance, so I have posted this question here. If you feel that this is not the right location for it then please feel free to migrate it to stackoverflow or programmers.stackexchange.com"} {"_id": "41668", "title": "Would it be good to include your freelancer account in your Resume / CV when applying for a job?", "text": "I've been working as a **freelancer** for about two years on vWorker. Any person can visit a coder's profile and see how many projects the coder has worked on, (if the coder allows) see how much the coder obtained in each project, ratings, feedbacks, etc. Would it be good to include a freelancer account in your Resume / CV when applying for a job? Is it something you would do if you have finished several projects there?"} {"_id": "87688", "title": "Why do people put '\\n' at the beginning of strings?", "text": "Very often I get into C code where `printf` format strings start with `\\n`: printf( \"\\nHello\" ); This in my opinion is an annoying thing that offers no advantages (rather many disadvantages!)
with respect to printing `\"Hello\\n\"`: * If the first printed line begins with `'\\n'`, program output will begin with a (useless) empty line * If the last printed line doesn't end with `'\\n'`, program output won't end with a new line (a trailing newline is useful when reading the output on a terminal) * On most terminals (on line buffered streams in general), output gets flushed when a `'\\n'` is encountered, so a line not ending with `'\\n'` could be shown on screen a long time after it's been actually `printf`'d (or maybe never, if the stream never gets flushed, for instance if the program crashes) So, why do people do it like this?"} {"_id": "70852", "title": "Scheme and Functional programming is to \"Structure and Interpretation of Computer Programs\" as Prolog and Logic programming is to what book?", "text": "I'm looking for some advice on how to get started with Logic programming, and I am really enjoying working through the Scheme book \"Structure and Interpretation of Computer Programs.\" Is there a similar book that is geared toward Prolog? Or, if there is a better language to study than Prolog for Logic programming, I am interested in hearing your opinions on that as well."} {"_id": "87686", "title": "Nested languages code smell", "text": "Many projects combine languages, for example on the web with the ubiquitous SQL + server-side language + markup du jour + JavaScript + CSS mix (often in a single function). Bash and other shell code is mixed with Perl and Python on the server side, `eval`ed and sometimes even passed through `sed` before execution. Many languages support runtime execution of arbitrary code strings, and in some it seems to be fairly common practice. In addition to advice about **security** and **separation of concerns**, 1. what **other issues** are there with this type of programming, 2. what can be done to **minimize it**, and 3. is it ever **defensible** (except in the "PHB on the shoulder" situation)? **Edit:** To clarify, this is _not_ about using more than one language for the job - that's a _good thing_ for any but the simplest projects. The issue is mixing languages _in the same file_, such as magic strings for SQL, Bash & Perl `eval`, and the like."} {"_id": "87685", "title": "Learning C# quickly", "text": "I just got a position at a big, well-known C#/.NET company. The thing is that I don't know any C# or .NET at all (they know that) and I want to learn as much as I can before I start, to not waste time (and money). How do I learn C#/.NET quickly and efficiently? Resources? Great tutorials? Videos? EDIT: I forgot to mention that I have a couple of years experience with Java. So I am not new to programming - just new to .NET. UPDATE: Thank you for all your replies. I have ordered "Essential C# 4.0" and until then, I will go through some of the guides / tutorials and the general documentation at MSDN."} {"_id": "203333", "title": "Should testers approve releases, or just report on tests?", "text": "Does it make sense to give signoff authority to testers? Should a test team 1. Just test features, issues, etc, and simply report on a pass/fail basis, leaving it up to others to act on those results, or 2. Have authority to hold up releases themselves based on those results? In other words, should testers be required to actually sign off on releases?
The testing team I'm working with feels that they do, and we're having an issue with this because of "testing scope creep" -- the refusal to approve releases is sometimes based on issues explicitly not addressed by the release in question."} {"_id": "203330", "title": "Continuous integration testing server: hosted, own desktop, or own server", "text": "For testing, I am planning to run continuous integration testing. There are mainly two options: hosted, or own desktop/server. I will break it into the 3 options I have: 1. Hosted: * Economical, $10-20/month for a small app * Less setup, the CI company manages all hardware and software 2. Desktop: * I could just buy a simple, cheap desktop as a test server (about $500). 3. Used server: * My current office is offloading some old Dell rack servers (probably dual-core Xeon), which I can purchase for $50 or less. Please advise me which best serves me for a small team of 2-3 developers. Thanks."} {"_id": "137924", "title": "How are open source repositories managed for popular languages?", "text": "I'm a developer that creates Open Source code for a small language (LabVIEW), and am currently sharing this in several places. The vendor of LabVIEW has a certification process for Open Source libraries (mixed with commercial/closed libraries), but it's rather lengthy. I was wondering how more popular languages manage large repositories of Open Source libraries (for instance Debian, or python-works): are those strictly controlled and monitored for quality, or can anyone contribute? * How would one set up the process for a centralized repository and accept incoming code? * Are there different levels of quality/trustworthiness? * Is it allowed to rely on libraries outside the repository?"} {"_id": "40394", "title": "How do you organize your projects?", "text": "Do you have any particular style of organizing projects? For example, currently I'm creating a project for a couple of schools here in Bolivia, this is how I organized it: TutoMentor (Solution) TutoMentor.UI (Winforms project) TutoMentor.Data (Class library project) How exactly do you organize your project? Do you have an example of something you organized and are proud of? Can you share a screenshot of the Solution pane? In the UI area of my application, I'm having trouble deciding on a good schema to organize different forms and where they belong. * * * **Edit:** What about organizing different forms in the .UI project? Where/how should I group different forms? Putting them all at the root level of the project is a bad idea."} {"_id": "202681", "title": "How to Implement Complex Form Data?", "text": "I'm supposed to implement a relatively complex form that looks as follows, but has at least four more pages requiring the user to fill in all necessary information for the tracks: ![](http://www.mediafire.com/convkey/b441/1hnp8lxbu49ar2ufg.jpg) This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form with potentially dozens of songs to the server. The simplest available solution I have seen is a simple _multipart/form-data_ request with the following form schema (Source): Client
<html>
<body>
<h1>File Upload with Jersey</h1>
<form action="rest/upload" method="post" enctype="multipart/form-data">
<p>Artist : <input type="text" name="artist" /></p>
<p>Select a file : <input type="file" name="track" size="45" /></p>
<input type="submit" value="Upload" />
</form>
</body>
</html>
Server @POST @Path(\"/upload\") @Consumes(MediaType.MULTIPART_FORM_DATA) public Response uploadTrack(final FormDataMultiPart multiPart) { List<FormDataBodyPart> artists = multiPart.getFields(\"artist\"); StringBuffer output = new StringBuffer(); for (FormDataBodyPart artist : artists) output.append(artist.getValueAs(String.class)); List<FormDataBodyPart> tracks = multiPart.getFields(\"track\"); for (FormDataBodyPart track : tracks) writeToFile(track.getValueAs(InputStream.class), \"Foo\"); return Response.status(200).entity(output.toString()).build(); } Then I have also read about file uploads via Ajax or FormData (Mozilla HttpRequest) which allow POSTs in the formats _application/x-www-form-urlencoded_, _multipart/form-data_, or _text/plain_. I don't know which approach, if any, is best. An ideal solution would be to utilize Jackson to convert a JSON string into my data objects, but I don't get the impression that this is possible with binary data."} {"_id": "202682", "title": "Dev Server vs Local Development", "text": "On the past two projects I've worked on, teams preferred a local development environment over a development server. One project lead stated that local was better since it didn't require an internet connection. But having an internet connection seems a given when developing. Which is usually better?"} {"_id": "76451", "title": "Department organization of \"source code\", \"projects\", and \"processes/work-instructions\"?", "text": "How do you organize your software assets (source code and resources), your \"active\" and \"past\" projects, and your Software Department processes/work-instructions (including source code conventions, rules, and review process)? For example, one approach would be to use \"Trac\" with databases for \"Software Assets\" and \"Software Process\". Another would be to have databases for \"Project A\", \"Project B\", and \"Software Process\". Is your source code \"by-project\", or \"by-all-in-department\", or \"by-product\"?"} {"_id": "175902", "title": "How to design software when using BDD?", "text": "I'm working on a project right now and it's my first project using BDD. Up till now, the user stories have proven themselves a very valuable weapon to understand requirements and to specify the solution in a comprehensive, easy to understand language. My question is this: now that my user stories are complete, how do I design my solution? I understand that I derive behavior tests from my user stories, and I have to do UI design, but am I supposed to use good ol' UML? I'm under the impression that when using user stories, UML is left out; is this correct?"} {"_id": "8228", "title": "What is the first published reference to test-first programming?", "text": "I am rereading Refactoring by Martin Fowler. In Chapter 4, Building Tests, I came across the following passage. > In fact, one of the most useful times to write tests is before you start > programming. When you need to add a feature, begin by writing the test. This > isn't as backward as it sounds. By writing the test you are asking yourself > what needs to be done to add the function. Writing the test also > concentrates on the interface rather than the implementation (always a good > thing). It also means you have a clear point at which you are done coding-- > when the test works. While I am an advocate of test-driven development now, I did not remember having been introduced to the concept when I originally read this book nearly 5 years ago. According to Amazon.com, this book was originally published on July 8, 1999.
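For the complex-form question above, here is what the client side of such a multipart POST can look like, sketched with Python's third-party requests package (the URL and file names are made up); the repeated "artist"/"track" fields match what the Jersey resource reads:

    import requests

    data = [("artist", "Alice"), ("artist", "Bob")]          # repeated form fields
    files = [
        ("track", ("one.mp3", open("one.mp3", "rb"), "audio/mpeg")),
        ("track", ("two.mp3", open("two.mp3", "rb"), "audio/mpeg")),
    ]
    resp = requests.post("http://localhost:8080/upload", data=data, files=files)
    print(resp.status_code, resp.text)
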
Is this the first published reference to test-first programming, or is there something even earlier?"} {"_id": "206303", "title": "Preventing password hashing algorithm from overloading CPU", "text": "These days password hashing algorithms are designed to be slow. While this prevents black hats from guessing the password (at least partially), it also creates additional work for the server. I can imagine that, if someone wanted to make the server run extremely slowly, they could simply send many log-in requests, which would lead to many password hash calculations, therefore dramatically increasing CPU usage. What is the common practice for preventing such attacks? And what is it called?"} {"_id": "123200", "title": "Sharing buffer between multiple threads", "text": "I had a job process that was executing a lot of IO to read and write temporary files. Now I want to (need to) reduce the amount of IO it performs. So I want to create a sort of circular buffer that is going to be filled up with data from a text file within the first thread. The consumer (reading) thread will fetch data from this buffer. The problem is that there could be multiple consumers that need to read from the same buffer. For the buffer implementation I prefer not to use any existing wrappers (a simple array, where I just \"play\" with indices, is enough). I also don't want to create a separate buffer for every reader. And of course I want to avoid any deadlocks and unnecessary blocking. **UPDATE** Right now I am using a circular buffer (an array and 2 indices). The question is how to implement such a buffer that can be accessed by multiple consumers, where each consumer can read from it independently of the other consumers (one consumer may read faster than another). **IMPORTANT UPDATE** The first thread doesn't know (and should not know) about its consumers!!! It should write data to the buffer; when the data ends it should raise a flag."} {"_id": "15502", "title": "When to start writing Exception Handling, Logging", "text": "When do you start writing your exception handling code? When do you start writing logging statements? For the purpose of elaborating this question, let us assume that we are on the .NET platform with log4net logging, but feel free to answer in a generic way. Solution: A Windows Forms Project. Projects: UI, BusinessRules, DataHandlers So, do you go about writing your DataHandlers, which do your data manipulations such as Create, Read, Update, Delete, first? Then follow it up with your business rules, and then your UI, or any other permutation of the above? Test your application for functionality. And _then_ start writing your exception handling code and _finally_ your logging code? When is the right time to start writing your exception handling code? PS: In the book Clean Code, they say write your **try-catch-finally** block first. That prompted me to ask this question."} {"_id": "86483", "title": "Services or Shared Libraries?", "text": "I work in an environment where we have several different web applications, each of which has different features but still needs to do similar things: authentication, read from common data sources, store common data, etc. Is it better to build the shared functionality into a set of services, to be called by the web apps, or is it better to make a shared library, which the web apps include? The services or libraries would need to access various databases, and it seems like keeping that access in a single place (a service) is a good idea.
It would also reduce the number of database connections needed. A service would also keep the logic in a single place, but then it could be argued that a shared library can do the same thing. Are there other benefits to be gained from using services over shared libraries?"} {"_id": "226355", "title": "Why is Perl an appropriate language for CGI?", "text": "I have heard many people still saying Perl is a great language for CGI. But I think it is not as popular as growing languages such as Python and Ruby. Is there any solid reason for Perl still being the appropriate language for CGI?"} {"_id": "212084", "title": "Objects of different programming languages", "text": "Apparently, there is some resemblance between objects in JavaScript and dictionaries in Python. Each language defines an object a little differently (though there is some logic in expecting all definitions to be the same, as in physics). How are objects alike and how do they differ between JavaScript, Python and PHP?"} {"_id": "233272", "title": "How do you call one program from another?", "text": "What I'm wondering is how running programs communicate with each other, and if someone could post some sample code for how to do this, so I can try it out myself, just for educational purposes. For example, I've worked with databases before, and in my code I always have to \"establish a connection to the database.\" The database service has to be running before I start my program, or else it will fail. What exactly is going on with that connection and how does it work?"} {"_id": "245393", "title": "Why was C# made with \"new\" and \"virtual+override\" keywords unlike Java?", "text": "In Java there are no `virtual`, `new`, `override` keywords for method definition. So the working of a method is easy to understand. Because if _DerivedClass_ extends _BaseClass_ and has a method with the same name and same signature as _BaseClass_, the overriding will take place via run-time polymorphism (provided the method is not `static`). BaseClass bcdc = new DerivedClass(); bcdc.doSomething() // will invoke DerivedClass's doSomething method. * * * Now, coming to C#, there can be so much confusion, and it is hard to understand how `new` or `virtual+override` or `new + virtual override` works. I'm not able to understand why in the world I would add a method to my `DerivedClass` with the same name and same signature as `BaseClass` and define new behaviour, yet under run-time polymorphism the `BaseClass` method will be invoked! (which is not overriding, but logically it should be). In the case of `virtual + override`, though the logical implementation is correct, the programmer has to decide, at coding time, which methods the user should be permitted to override. This has some _pros and cons_ (let's not go there now). So why does C# leave so much room for illogical behaviour and confusion? So may I reframe my question as: in which real-world context should I think of using `virtual + override` instead of `new`, and of using `new` instead of `virtual + override`? * * * **EDIT** After some very good answers, especially from Omar, I get that the C# designers put more stress on programmers thinking before they make a method, which is good and handles some rookie mistakes from Java. Now I've a question in mind. As in Java, if I had code like Vehicle vehicle = new Car(); vehicle.accelerate(); and later I make a new class `SpaceShip` derived from `Vehicle`.
Then if I want to change all `Car` objects to `SpaceShip` objects, I just have to change a single line of code: Vehicle vehicle = new SpaceShip(); vehicle.accelerate(); This will not break any of my logic at any point in the code. But in the case of C#, if `SpaceShip` does not override the `Vehicle` class's `accelerate` and uses `new`, then the logic of my code will be broken. Isn't that a disadvantage?"} {"_id": "245392", "title": "Do Rails Join Models Get Controllers?", "text": "I have a Rails app where my users can buddy up with other users. Since that relationship can have a status (approved/rejected/pending), I decided to go with a join model (`UserRelationship`), so now I have the status attribute. To handle state for that join model, I have a controller (`UserRelationshipsController`) with actions/RESTful endpoints for changing the state of the relationship. Is this bad practice? If so, where should I handle state-changes for the relationship? In the User model?"} {"_id": "229362", "title": "Storing user uploaded images / files in multi server env", "text": "In a multi-server system, e.g. a load balancer, multiple web servers and a database server, where do you store the files / images users upload when each web server will need access? Before now I've just used a single web server, or maybe one web server and a MySQL database server. But if you have multiple web servers I guess you have a range of options for making sure resources like images and other user files are shared, such as: * Storing them on the database server * Or a server specifically for them * Maybe mounting the same directory * Or using a tool like rsync to synchronize the user uploads directories on each web server Or maybe you have a different solution? Or is there a de facto way of doing this that I'm not aware of?"} {"_id": "238539", "title": "Find second largest element in an array?", "text": "The problem is to solve it in **n+logn\u22122** comparisons (log base 2). My algorithm takes some extra space, less than O(n*n). How can I reduce this extra space without increasing the complexity? comparison_array=[][] (2D array) //a function to find the largest. find_largest(a,b) if(len(a)==1 and len(b)==1) append b to array at ath index in comparison_array. append a to array at bth index in comparison_array. return max(a,b) else return max(find_largest(first_half(a),second_half(a)),find_largest(first_half(b),second_half(b))) input=input_array() largest=find_largest(first_half(input),second_half(input)) subarray=[array at index largest in comparison_array] second_largest=find_largest(first_half(subarray),second_half(subarray)) print(second_largest) **Edit:** It can be done in a minimum of **n+logn\u22122** comparisons. That is the only condition. This answer on math.stackexchange.com proves it. It has been generalized for the nth largest element. It seems that there is a solution at this stackoverflow link. But I am not sure about its extra space and time complexity."} {"_id": "88934", "title": "Using OpenID to log into multiple domains: Is this plan feasible?", "text": "For example: * We're running two community sites on two domains (call them `example.com` and `example.net`). * We want to be able to expand that to more domains later. * We want to allow multiple types of login (OpenID, Facebook, Twitter, standard username/password). * We want someone who's logged into one site to automatically be logged into the other(s). In other words, it's a bit similar to the StackExchange network. In this case, would this plan work?
* Set up `example.com` and `example.net` (and any later additions) as OpenID relying parties, which accept OpenID login from `id.example.org` only. * Set up `example.com` and `example.net` to do an OpenID reply-immediate request the first time you visit them, so that if you're logged into `id.example.org` you're immediately and automatically logged into the site you're visiting. They should set a cookie if you're not logged in, to save them doing this on every page request. * Set up `id.example.org` as an OpenID provider and consumer. It should also consume Facebook and other identity providers, and allow standard username/password access. (Multiple login methods could be attached to one account.) * On logout, simply change the authentication tokens in the database. The user will still have cookies, but they'll be meaningless. Thus the user can be signed out of all sites simultaneously. Multiple authentication tokens can be stored against one user at one time (and should be different for each site), so that the user can sign out in one browser but still be signed in in another. Signing out always signs out from all sites. The only problem I can see with the above is this: * Someone visits `example.com`. A \"not-logged-in\" cookie is set. * Zie then goes onto `example.net`. Ditto. * Zie then signs in, and continues browsing on `example.net`. * Zie then goes back to `example.com` and, because of the \"not-logged-in\" cookie, is not checked against `id.example.org` and is therefore not logged in. * However, as soon as zie clicks the \"log in\" button, zie is logged in. I don't think this is a major problem. On the whole, I think it's a pretty good system. I'd just like to see it reviewed. Are there any problems I haven't foreseen? Would it be buggy or slow? StackExchange uses a very different method. I assume they have a good reason for that?"} {"_id": "250896", "title": "Passing data between hundreds of objects in java", "text": "Currently, I'm working with a group on building a model. This model simulates interactions between many \"agents\" in a region. Agents can be any entity such as a city, a farmer, a business, etc. Each agent is represented by its own object within Java. Each agent will need to be able to interact with all of the other agents and, therefore, needs to have access to information about the other agents. So, for example, let's say we have 1 city object interacting with 99 farmer objects. The city needs to know, for instance, the total land that each of the 99 farmers owns in order to perform some sort of calculations. So, my question is, what is the most performance-efficient way of accomplishing something like this? There may even be 500-1000 or more agents in the future, and each agent may need to access many variables from the other agents. Would you potentially set up a table within MySQL for each agent and have them access each other's information that way? Or can something be set up directly within Java that allows the agents to access each other's information? I'm looking to possibly multithread this model in the future too. In any case, I want the most performance-efficient way of accomplishing this."} {"_id": "40536", "title": "Spec Writing Management", "text": "I simply cannot imagine writing software without a spec. No matter how sketchy or high-level it is, a spec is important for explaining to the clueless programmers what the functionalities of the program are.
But the problem with the spec is that it is somewhat of a second-class citizen in the whole software development cycle; when development picks up steam, it is neglected. But when a dispute arises, the developers, testers and sales will scramble to find the spec to justify their positions. One or more of these scenarios will happen: 1. The spec cannot be recovered; no one knows where the spec is 2. Different versions of the spec emerge from different sources; it takes great effort to find out which version is the latest, or whether there _is_ a latest version available. 3. The spec is incomplete; some parts of the documents it refers to are missing. So spec management is important, and it's equally important that everyone has only One Single Source of Spec. How do you manage your specs? I tried to get everyone to use Google Docs but everyone objected. Everyone is just too attached to and enamored with Microsoft Word, which is-- in their opinion-- very easy to use, very easy to insert images into, very easy to type equations in, and whatnot. How do I convince them that MS Word is just terrible for sharing?"} {"_id": "238535", "title": "How do I simplify a compiler/interpreter?", "text": "Recently I wrote an interpreter for operations on sparse matrices (a \"sparse matrix calculator\") in lex/yacc. The language is still very bare-bones and doesn't even include control structures or parameterized subroutines, yet it is already at several thousand lines of code, and that's not including the matrix classes. In particular, the yacc file is close to two thousand lines in length. Because of this I'm finding it quite difficult to work on. Is this normal or is there a way I can simplify things? If you want to review my code, it can be found at: http://sourceforge.net/projects/msci/files/libpetey/"} {"_id": "178882", "title": "Where did the estimation rule of thumb originate that time spent will be one-third in each of the following: design, implementation, and testing?", "text": "I'm looking for a reference for the following. I commonly hear that one-third of a project's time will be spent in design, one-third in implementation, and one-third in testing. The three phases of development seem to be derived from the waterfall model. But where did the time division originate (1/3, 1/3, 1/3)? Is there a paper or book that this is from?"} {"_id": "249692", "title": "Memory management scheme for custom memory allocator", "text": "I am in the process of implementing a small memory manager. The users of this memory pool will always access the memory bytes via handles. So a memory allocation/deallocation is done with two APIs: Handle Allocate(size_t numBytes); void Free(Handle handle); The allocation/deallocation deals with opaque handles. To actually access the bytes, the user must then map/unmap the memory: void * Map(Handle handle, int mappingFlags); void Unmap(Handle handle, void ** ptr); The mapping flags are read-only, write-only and read-write. The average size of the memory blocks should be between 1KB and 1MB, with the occasional very big block in the neighbourhood of 10MB. The memory pool starts off as one big pre-allocated block. The manager must then handle variable-size allocations. When the pool is depleted, the manager can try to ask the system for another big block. My questions: 1) I'm not sure which memory management scheme would be best employed in this scenario. 2) I think it would be possible to implement a memory defragmentation scheme, thanks to the handles.
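The intuition, as a minimal sketch (Python just for illustration, and all names are made up; the real manager is native code): because callers only ever hold handles, the manager is free to move an unmapped block inside the pool and fix up one table entry, and no caller-held pointer goes stale.

    pool = bytearray(16 * 1024 * 1024)   # one big pre-allocated block
    table = {}                           # handle -> (offset, size, map_count)

    def move_block(handle, new_offset):
        offset, size, map_count = table[handle]
        assert map_count == 0            # only unmapped blocks may be moved
        pool[new_offset:new_offset + size] = pool[offset:offset + size]
        table[handle] = (new_offset, size, 0)   # handles held by callers stay valid
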
Am I right?"} {"_id": "208687", "title": "What is the appropriate approach to study an API when it is not well documented and there are no good tutorials around?", "text": "I am a student and I like to develop applications at my own level. While building an application I am the only one on my team. After analyzing the application I choose a library/API to use for the project. For example, these days I am working on a chat application based on the XMPP protocol. For that I am using the Smack API. These third-party APIs are usually not very well documented, nor do they have good tutorials to get one started. They are unlike the Java API and other popular APIs, which are very well documented and have very good tutorials to polish the concepts before one actually starts writing code for the application. My point is, it takes me a lot of time to implement a simple concept when I am using an API like Smack. For example, it has been quite a while and I have been unable to make one user read another user's status! There are still many problems. What kind of approach should I follow so that I can maximize my output? The point is not even maximizing the output, but understanding how the API works."} {"_id": "219505", "title": "Using a function's return value as an if condition, good practice?", "text": "Do you think it is good practice to use function return values as if conditions? I'm coding in PHP atm but it holds for many other languages. if(isTheConditionMet($maybeSomeParams)) { } or $res = isTheConditionMet($maybeSomeParams); if($res) { } I can't think of a situation where the first one creates a problem; can it? EDIT: Assume the return value is not going to be used after the condition is evaluated."} {"_id": "219504", "title": "How to present code in academic work?", "text": "Actually, I'm writing my undergrad thesis, which consists of analysing the BitTorrent algorithm and seeing its application in the Transmission client as an example implementation. Reading through its code, written in C, you can see many layers of functions static const char* tr_metainfoParseImpl (const tr_session * session, tr_info * inf, bool * hasInfoDict, int * infoDictLength, const tr_variant * meta_in) { int64_t i; size_t len; const char * str; const uint8_t * raw; tr_variant * d; tr_variant * infoDict = NULL; tr_variant * meta = (tr_variant *) meta_in; bool b; bool isMagnet = false; /* info_hash: urlencoded 20-byte SHA1 hash of the value of the info key * from the Metainfo file. Note that the value will be a bencoded * dictionary, given the definition of the info key above. */ b = tr_variantDictFindDict (meta, TR_KEY_info, &infoDict); if (hasInfoDict != NULL) *hasInfoDict = b; if (!b) { /* no info dictionary... is this a magnet link? */ if (tr_variantDictFindDict (meta, TR_KEY_magnet_info, &d)) { (...) `tr_metainfoParseImpl()` is the function called after we add a .torrent by file or magnet link. It calls `tr_variantDictFindDict()` to find some string \"info\" somewhere in the metadata dictionary, in order to get information about that torrent file. Algorithmically, it has no value to me, since I want to emphasize other aspects of the BitTorrent algorithm than string search, although I want to leave its caller line there just to illustrate that it is happening.
The functions `tr_variantDictFindDict()` and its child are bool // func1 tr_variantDictFindDict (tr_variant * dict, const tr_quark key, tr_variant ** setme) { return tr_variantDictFindType (dict, key, TR_VARIANT_TYPE_DICT, setme); } static bool // func2 tr_variantDictFindType (tr_variant * dict, const tr_quark key, int type, tr_variant ** setme) { return tr_variantIsType (*setme = tr_variantDictFind (dict, key), type); } As we can see, although this may exist for code engineering reasons, algorithmically it has no value. So, I'm looking for reasonable, feasible, practical ways to avoid showing this kind of code in my work. Using the code above as an example, I thought of some options: 1. put in the relevant part of func1's caller, showing the line calling func1, and after that show func2, the callee (which will contain the other relevant part of the code). 2. putting func1's caller code and func2's callee code \"side by side\", as if they were in one big function 3. literate programming from the beginning Please feel free to share your experiences with this situation. Also, please change the SX site if needed, although I looked for the best SX site to ask this question and this one seemed legit."} {"_id": "208689", "title": "When is the best time to do self learning in relation with software management?", "text": "It all started from here. I have been following Software Estimation: Demystifying the Black Art (Best Practices (Microsoft)). The third chapter says that in software management: * You cannot give too much time to software developers; if you give it to them, then it is likely that the extra time given to them will be filled by some other tasks (in other words, the developers will eat that time :)) (Parkinson's Law) * You also cannot squeeze the time from their schedule, because if you do that, it is likely that they will develop a poor-quality product with poor design, which will hurt you in the long run; there will be a panic situation and total chaos in the project, lots of rework, etc. My question is related to the first point. If you don't give extra time, then when will the typical software engineer learn his/her skills? The market is always coming up with new technologies, and you need to learn them. Even with the existing familiar technologies there are always best practices and dos and don'ts."} {"_id": "158073", "title": "Function points measure for a business applications framework. Is that possible?", "text": "So, my boss wants to have a complexity measure for a framework developed internally in our company. Is that possible? As far as I know, function points do not apply to software that doesn't have persistence per se, and also doesn't have GUIs (it's a framework, built in C#, useful for crafting business applications, and it's used only internally). Also, function points tend to get less useful as the target software being assessed gets more complex. If it is possible to do this, could someone give me some guidance? If not, how do I convince my boss to stop asking for this? Is there a better metric for valuing such complex software?"} {"_id": "214030", "title": "How to design (just the outline) an enterprise class PHP application", "text": "For the last year or so, I have been working on a very large application. I am currently on version 17 of it, and every time, I start over with some of the code from before. But now, this is starting to become unmanageably large.
When I do the design for the application, I generally start with a Word document, outlining all the classes, with the functions, descriptions for each, their dependencies, default configuration data, and how each component augments the rest of the application. But with that method, within a few hours, I can easily reach 20-30 pages of documentation, and the nice simple outline of the software becomes so complex in itself that using it as the template to write the code against becomes difficult. So, now, I am trying to describe the application in an XML format, built on an XSD that contains the structure of how the classes will be laid out. But this is also demonstrating its problems. At this stage, everything ahead of the base application has been designed (the database ER diagrams, the functionality, etc), but the main problem lies in the design of the underlying software that runs it all. So, can anyone recommend what path I should go down, either with specific ways to lay out a Word document for easy reading, an XML pattern that would do the same, or some other software package that can do this for me and help guide me through the development? Any help would be greatly appreciated, mainly because once this part has been cracked, my application can finally be built and completed. Thanks"} {"_id": "131808", "title": "Can I use the concept of 'tags' (derived from StackExchange sites) in a personal project?", "text": "I have a small personal project that I expect will be viewed by, at most, about 5 people. 'Tags', as used on this site and StackOverflow, would be helpful. Can I explicitly use something called a Tag in my personal project where the purpose/function is essentially the same?"} {"_id": "88685", "title": "Why aren't more desktop apps written with Qt?", "text": "As far as I know and have understood in my experience with Qt, it's a very good and easy-to-learn library. It has a very well designed API and is cross-platform, and these are just two of the many features that make it attractive. I'm interested to know why more programmers don't use Qt. Is there a deficiency which speaks against it? Which feature makes other libraries better than Qt? Is the issue related to licensing?"} {"_id": "69862", "title": "Senior Interview LINQ questions", "text": "I'm preparing a LINQ section in interview questions for senior programmers. What are the most interesting LINQ questions to include? And why?"} {"_id": "69869", "title": "Proper way to \"say\" Big-O notation?", "text": "What is the proper way to convey an algorithm's complexity in Big-O notation in speech? Does \"the total number of operations is big oh of N log N\" sound strange? What's generally accepted: \"It has \"order log n\" space complexity\"? \"It is guaranteed to run in n log n time\"?"} {"_id": "160906", "title": "Recurring Problem - need instruction to run only once inside code which executes multiple times", "text": "Hello. In programming I often come across this problem (especially when dealing with certain frameworks) where I would like one piece of code to execute once and only once; however, the provided method (e.g. something like an onComplete function) that I wish to place this instruction in will in reality execute multiple times.
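To make it concrete, here is the kind of guard I keep ending up with (a minimal Python sketch; the callback and task names are made up):

    already_ran = False

    def on_complete():                 # framework may invoke this many times
        global already_ran
        if already_ran:                # boolean guard flag
            return
        already_ran = True
        do_the_one_time_work()         # hypothetical one-shot task
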
I am just wondering whether anybody knows the best way to get around this problem. One programmer friend of mine told me to use a boolean flag to check whether or not this code has already run, but I feel like this is not a suitable solution, and I would not like to introduce global variables for the sake of checking a condition once and only once. Has anybody ever come across a problem like this before? I am wondering whether there is any good practice out there to keep in mind when dealing with code like this. Thanks"} {"_id": "89083", "title": "Should this code be rewritten or refactored?", "text": "There is a module in our telecoms equipment which is written in C. I think the code in this module has a bad smell because it shows a number of symptoms: * When new features are added to this module, some original features go wrong. And some weird problems were eventually traced to buffer overflows in this module. * It is difficult to add new code to the module. Although the unit tests and integration tests cover new code, errors and flaws concealed in the original code come to the surface. * By means of code review, we found the quality of the code is extremely bad. But the module was tested for many rounds, and the errors concealed near the surface of the code were found by the QA engineers. Other problems hidden deeper were difficult to find. _as well as a virus in the body._ (easy to bring misunderstanding, delete it) We summarize the reasons for the status quo: * The original code degraded because of loose process monitoring; in fact, many self-tests were ignored or partially executed. * The first programmer of this module lacked programming ability, and the team leader didn't find the problem and the risk in time. * Many improvement approaches stayed in the planning phase due to the team leader's suspicion. The project has come to a new version. Some team leaders want to change this situation and we have discussed the procedure many times. Everyone has agreed to revise the module, but we have to decide between: * rewriting the module. This means throwing all the code of the module in the trash can and rewriting it without changing its interface. * refactoring the module. This means one step at a time. We need to retain all the code at the beginning and build a solid set of tests for it. Then we dig into the code and revise a small piece of it, relying on the tests to tell us whether we have introduced a bug. This process is repeated until the refactoring is complete. I agree with rewriting because we already have some rewriting experience in our systems, which proved to be the best choice. The project manager wishes to refactor the module on account of the limited time available to complete the detailed design, coding, code review, unit tests and integration testing, which is three months. He thinks the time scales are too tight to do a full rewrite. Should this code be rewritten or refactored? EDIT: As Joel said in Things You Should Never Do, Part I > The idea that new code is better than old is patently absurd. I agree with this viewpoint. But I am puzzled, because some rewriting cases in our project proved to be wise choices. Maybe that relies on the new programmer being more outstanding than the previous programmer, or on the testing being more thorough."} {"_id": "96242", "title": "To Gather Requirements, from those who don't want to give time for interviewing?", "text": "Actually, I'm a fresh CS graduate who wants to build a system for his dad's road transport business.
However, the issue is that whenever I go to ask him things like: who are the people involved in his business? what is their routine work? or how does the business actually work? he avoids my questions and says: just give me a system, using your knowledge, to ease my transport business, and you decide the rest. So, I have brainstormed about it, but I still feel the need for the viewpoint of my dad, as he is actually in that business. Hence, how can I get business requirements from him?"} {"_id": "160905", "title": "A place for putting code samples in projects", "text": "Every now and then I get or write some minimal code samples to achieve tasks. What's the usual practice for storing these samples (which could prove useful later on)? Have a separate source folder or create a separate project?"} {"_id": "201652", "title": "Design question on best option to store data on remote computers", "text": "I'm making a Windows Forms application which I want to install on a few computers that are all connected together on a network. Each of these computers has access to a number of servers on which various tasks are carried out etc. I want each instance of the application in use to be able to store and retrieve data regarding the separate remote servers (this data can be edited by everyone using the application on their computer). The application needs to be able to retrieve data from the servers when the user clicks an update button. The user can then edit and update that data if necessary. I think the main options for storing the data are as follows: * Store data all in one place in a database * Have a folder on each of the servers and save XML files to it with the data Which would be the best approach in this scenario and why? Are there any better alternatives? Note: I would usually opt for the database approach, but I want to store data in an alternative way because using a database would require selecting one of the servers to host it, and it isn't clear which one should host it and not the others, even though only one database would be needed by the application."} {"_id": "201653", "title": "What does \"extreme\" in \"extreme programming\" (XP) refer to?", "text": "\"Extreme\" suggests something very different from normal, very aggressive, exceeding limits; but in my opinion regular releases, pair programming, unit testing, and collaboration with customers are quite normal and acceptable. What does \"extreme\" mean?"} {"_id": "234724", "title": "RESTful API Call Method Names, C#", "text": "I am working on some old code that works with a REST API in C#. The method names (what method to invoke on the API side) are being passed in as hard-coded strings. Would a static class be the best solution to force the user to pass in the values that I want (so they don't make a mistake and mistype something, etc)?"} {"_id": "5705", "title": "What are some ways to be more productive with Emacs?", "text": "I've used Emacs quite a bit, and I'm comfortable with the most basic commands/shortcuts, but I know Emacs has a lot more to offer than what I'm using. What are some of the lesser-known features of Emacs that can help me become a more productive programmer?"} {"_id": "170378", "title": "Why are Java servers so scarce and costly?", "text": "I think \"why PHP over Java\" has already been discussed in other questions; the question I have is: what makes LAMP/WAMP stacks so cheap and abundant versus a GlassFish one? What are the prime factors behind this trend?
Also, why has no Java-based lightweight stack come up as a competitor?"} {"_id": "234728", "title": "How should I organize a file with many functions in it?", "text": "I am writing a web app using PHP and jQuery. I have a file full of functions to access the database. I have function names like `createUser`, `createRole`, `getUserByLogin`, `setSOMETHING`... Is there an accepted convention, or a compelling reason, to choose how to organize this file? (alphabetically by name; grouped by creators, getters, setters...; by the table they reference; ...)"} {"_id": "116525", "title": "How to use Blend sample data as real data?", "text": "I am trying to do some design in Blend 4. The sample data function of Blend is nice to have during design time, but it would be much better to see the design in the browser. Can anyone help me use the sample data created by Blend as real data (i.e. see the same data in the browser)? Thanks in advance."} {"_id": "165004", "title": "Best thing to do about projects supporting multiple versions of Visual Studio?", "text": "I have an open source project that works on .Net 2.0 and up. The thing is, though, that I prefer to use Visual Studio 2012, which forces the solution and project files to only work with VS2010/2012. What exactly should I do? I don't want my users to have to create a solution from scratch if they don't have access to VS2010, yet I also don't want to attempt to keep three different sets of project files in sync (VS2005, VS2008, and VS2010/2012). What is the usual solution for this?"} {"_id": "192024", "title": "Justifying deficiencies in design", "text": "I would like some input on how to handle clients and third-party vendors that ask me about the deficiencies in my design. For example: it turns out I need a data field in a web service response. This response was signed off on weeks ago, and I get questions like \"_Why didn't you identify this field X weeks ago?_\". Of course the answer is that all software development is iterative and this requirement wasn't on my radar at that point in time. I'm not prescient. I would like a business-y way of deflecting these questions. Any advice? **Update**: To clarify, this shortcoming is completely on me. How do I then own up to this mistake cleanly, without sounding too apologetic?"} {"_id": "192027", "title": "Why can't a compiler avoid importing a header file twice by its own?", "text": "New to C++! So I was reading this: http://www.learncpp.com/cpp-tutorial/110-a-first-look-at-the-preprocessor/ > **Header guards** > > Because header files can include other header files, it is possible to end > up in the situation where a header file gets included multiple times. So we > use preprocessor directives to avoid this. But I'm not sure - why can't the compiler just... **not** import the same thing twice? Given that header guards are optional (but apparently a good practice), it almost makes me think that there are scenarios when you do want to import something twice. Although I can't think of any such scenario at all. Any ideas?"} {"_id": "254279", "title": "Why doesn't Python have a \"flatten\" function for lists?", "text": "Erlang and Ruby both come with functions for flattening arrays. It seems like such a simple and useful tool to add to a language.
One could do this: >>> mess = [[1, [2]], 3, [[[4, 5]], 6]] >>> mess.flatten() [1, 2, 3, 4, 5, 6] Or even: >>> import itertools >>> mess = [[1, [2]], 3, [[[4, 5]], 6]] >>> list(itertools.flatten(mess)) [1, 2, 3, 4, 5, 6] Instead, in Python, one has to go through the trouble of writing a function for flattening arrays from scratch. This seems silly to me; flattening arrays is such a common thing to do. It's like having to write a custom function for concatenating two arrays. I have Googled this fruitlessly, so I'm asking here: is there a particular reason why a mature language like Python 3, which comes with a hundred thousand various batteries included, doesn't provide a simple method of flattening arrays? Has the idea of including such a function been discussed and rejected at some point?"} {"_id": "89335", "title": "Which one is more reliable in a large project, COCOMO or FP?", "text": "I have estimated the cost of a product by COCOMO, then I estimated it by FP too. The results were very different from each other! The cost estimated by FP was about two times the cost estimated by COCOMO. Which one is more reliable, given that this product belongs to a large project? Should I skip the FP result?"} {"_id": "62628", "title": "Good continuous-integration solutions for Haskell projects", "text": "I am looking for a good CI solution for a Haskell project. Ideally something that will work with git. Really basic needs (so far): build and run tests after each check-in. Some basic reporting would be great too, but it does not need to be anything really fancy. It should also support running JavaScript tests in a browser (via Selenium or the like). What have people been using for this?"} {"_id": "82484", "title": "Simplified Interfaces or Object Abstraction", "text": "I've been facing a common situation at work that happens quite often when handling objects. The situation goes like this: you have two related classes A and B, and class A has an instance of class B. Now, imagine that we want to call methodB of class B, but we only have access to an instance of class A. What's the best approach to this situation and why: Ainstances->getClassBInstance()->methodB(); or Ainstances->methodB() where methodB is implemented as follows: function methodB() { return self.getClassBInstance()->methodB() }"} {"_id": "127432", "title": "Struggling as a programmer. Need some advice", "text": "I've been a developer now for a number of years. I'm pretty good at what I do and can \"get the job done\". But there is a difference between \"getting the job done\" and \"doing the job properly\". Let's use an example. Recently I developed a web site from scratch. The website runs fine and I've had no issues. Looking through the code I thought to myself that I could have done it better. I could have cut down on my MySQL queries. I could have used MVC, making it easier to extend (it does need extending now). I decided to rewrite the project using CodeIgniter. I like the framework. But I then got sidetracked, because to cut down on my MySQL queries I had to learn advanced joins. And this is the problem. Whenever I do a job properly I'm on a constant learning wheel. And topics such as advanced MySQL joins take time to learn, and then time to implement. I don't work for a company. I do everything alone. So I'd imagine if I was working as a PHP developer for a company there would be separate teams handling the SQL. Being solo, it's difficult. And sometimes, although my knowledge is advanced, I find myself asking question after question.
I probably have a lot of pride in my work. But if I had to work for a company handling complete projects, I could imagine projects taking a while, because I'd have to learn more and more to satisfy my pride and to ensure I'm doing things \"correctly\". I do plan on getting a job after the new year. I need the job security. Which is why I'm asking this question. What advice can you give in terms of self-development and self-improvement? Should I worry less? Or maybe look for a job as a PHP developer where I won't be handling SQL queries directly?"} {"_id": "245003", "title": "Reference wind directions to texture space?", "text": "I have a 2D array filled with a simple class: class Tile { boolean N,E,S,W; } I also have a tilesheet representing all possible outcomes except all false (12 + crossroad). Now I need to reference all these possibilities to texture space on my sheet. Apart from having an if statement for each and every outcome, is there a more efficient way I cannot currently think of?"} {"_id": "103508", "title": "How is dependency inversion related to higher-order functions?", "text": "Today I've just seen this article which described the relevance of the SOLID principles in F# development - F# and Design principles \u2013 SOLID. And while addressing the last one - \"Dependency inversion principle\" - the author said: > From a functional point of view, these containers and injection concepts can > be solved with a simple higher order function, or hole-in-the-middle type > pattern which are built right into the language. But he didn't explain it further. So, my question is, how is dependency inversion related to higher-order functions?"} {"_id": "127438", "title": "How to provide SaaS integration with enterprise systems (from the SaaS provider perspective)?", "text": "I'm doing some general research for a potential SaaS project. The solution we are considering creating will need data integration capabilities with various enterprise systems. I understand that SaaS adds complexity to enterprise integration since it lives in the cloud and outside of the firewall. I've read a few articles that describe approaches for enterprises to integrate data with SaaS solutions. Integration approaches range from primitive FTP transfers and custom point-to-point integration to a vast and growing range of commercial solutions (appliance, cloud-based, and EAI). These articles are focused on the customer perspective. In other words, they are intended to help enterprises better understand their options for integrating with SaaS providers. **Can anyone provide some insight and advice from the SaaS provider perspective when it comes to making their solution as easily integratable as possible?** I assume the SaaS provider needs to create and publish web service APIs and RESTful interfaces. Any other advice or resources would be most appreciated. PS: I realize saying \"need to integrate with various enterprise systems\" is incredibly vague."} {"_id": "115717", "title": "Resources for CSQA Exam", "text": "This year I'm going to sit the Certified Software Quality Analyst (CSQA) examination. I'm attending the classes for it, and though I'm reading the books and notes provided by the teachers, I need to know some good resources for the CSQA examination. Good resources would contain some tips and examples of the way they ask questions, and would act as good CSQA examination references.
Thanks in advance."} {"_id": "231168", "title": "Should an object update itself?", "text": "I'm working in Ruby on Rails. There is a feature in our app where doctors can \"claim\" cases for themselves. I can either have the doctor perform the action and update the plate, or I can have the doctor send itself to the plate and have the plate update itself. Should objects update themselves? It just seems cleaner to me, but I'm not sure. Or maybe I'm overthinking it? First case, doctor only: def claim(plate) plate.claimed = true plate.doctor_id = id plate.save end This is the second case (doctor sends itself to the plate): #doctor def claim(plate) plate.claimed_by(self) end #plate def claimed_by(doctor) self.claimed = true self.doctor_id = doctor.id self.save end"} {"_id": "154180", "title": "How to create a Semantic Network like WordNet based on Wikipedia?", "text": "I am an undergraduate student and I have to create a Semantic Network based on Wikipedia. This Semantic Network would be similar to WordNet (except that it is based on Wikipedia and is concerned with \"streams of text/topics\" rather than simple words, etc.) and I am thinking of using the Wikipedia XML dumps for the purpose. I guess I need to learn XML parsing and \"_some other things_\" related to NLP and probably machine learning, but I am in no way sure about anything involved beyond the XML parsing. * Is parsing the XML dump into text a good starting step? Any alternatives? * What would be the steps involved after parsing XML into text to create a functional Semantic Network? * What are the things/concepts I should learn in order to do them? * I am not directly asking for book recommendations, but if you have read a book/article that teaches anything related/helpful, please mention it. This may include a reference to already existing implementations regarding the subject. Please correct me if I was wrong somewhere. Thanks! EDIT: The final product should be a complete Semantic Network (like ConceptNet or Cyc, etc.), so I can't use things like Semantic MediaWiki. (On second thought, it seems like I should have asked this question on Linguistics and not here...)"} {"_id": "154183", "title": "How do functional languages handle a mocking situation when using Interface based design?", "text": "Typically in C# I use dependency injection to help with mocking; public class UserService { public UserService(IUserQuery userQuery, IUserCommunicator userCommunicator, IUserValidator userValidator) { UserQuery = userQuery; UserValidator = userValidator; UserCommunicator = userCommunicator; } ... public UserResponseModel UpdateAUserName(int userId, string userName) { var result = UserValidator.ValidateUserName(userName); if(result.Success) { var user = UserQuery.GetUserById(userId); if(user == null) { throw new ArgumentException(); } user.UserName = userName; UserCommunicator.UpdateUser(user); } ... } ... } public class WhenGettingAUser { public void AndTheUserDoesNotExistThrowAnException() { var userQuery = Substitute.For<IUserQuery>(); userQuery.GetUserById(Arg.Any<int>()).Returns(null); var userService = new UserService(userQuery); AssertionExtensions.ShouldThrow<ArgumentException>(() => userService.GetUserById(-121)); } } Now in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the above that normally would touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and would be kept as atomic as possible.
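My current mental sketch of the functional alternative (a minimal example in Python for brevity; all names are made up): the dependencies become plain functions that are passed in, so a test substitutes throwaway stubs instead of mocking interfaces.

    saved = []

    def update_user_name(user_id, user_name, validate, get_user, save_user):
        # 'validate', 'get_user' and 'save_user' are injected functions
        if validate(user_name):
            user = get_user(user_id)
            if user is None:
                raise ValueError('no such user')
            user['name'] = user_name
            save_user(user)

    # the 'mocks' in a test are just stubs:
    update_user_name(1, 'bob',
                     validate=lambda name: True,
                     get_user=lambda uid: {'id': uid, 'name': 'old'},
                     save_user=saved.append)
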
The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly."} {"_id": "154182", "title": "Security using JSONP", "text": "I'm writing an app that will make available a set of API functions that require cross-site requests to work. I'll be utilizing JSONP, which will allow other developers to consume these services for their web applications. * What security concerns should I consider to protect my server data? * What security precautions should other developers take when consuming my services via JSONP?"} {"_id": "116292", "title": "Are there standard strategies for defining job flow and dependencies?", "text": "I'm working on a project that involves the chaining of separate jobs into a single master job, though there may be parallel paths in the chain leading up to the final output. Job and chain details will be stored in a database. Eventually what I want to end up with is a GUI in which blocks representing the individual jobs can be moved around and chained up, with the system definition stored in the DB for execution. This will be implemented as C# on top of SQL Server. I don't want to reinvent the wheel if I can help it. I'm certain there must be some good patterns out there, particularly for how to represent the flow and dependencies in DB tables, but I have been unable to find anything that fits the bill. More than anything, I'm curious if there are any effective schemas people have used to define the jobs, dependencies, etc. Does anyone know of any commonly used strategies?"} {"_id": "94084", "title": "iOS developer interview question doubts", "text": "How deeply should an iOS developer review data structures and algorithms when preparing for an iOS job interview? I know most people would just say there's no harm in reviewing them, but I would like some insight based on your interview experience: what percentage of the questions are related to data structures/algorithms, and what percentage are iOS-related?"} {"_id": "94085", "title": "Should I learn low-level principles if I plan to develop in high-level languages?", "text": "I am entering university next month and have some exemption exams today. One of them is Computer Organization, where things like Boolean algebra, gates (AND, OR, NOT, etc), and assembly language are taught. I wonder why I should learn such low-level stuff. Does it benefit me as a programmer, likely developing in much higher-level languages like C#, Python, PHP, etc?"} {"_id": "94086", "title": "Decoupling classes from the user interface", "text": "What is the best practice when it comes to writing classes that might have to know about the user interface? Wouldn't a class knowing how to draw itself break some best practices, since it depends on what the user interface is (console, GUI, etc)? In many programming books I've come across the \"Shape\" example that shows inheritance. The base class Shape has a draw() method that each shape, such as a circle or a square, overrides. This allows for polymorphism. But isn't the draw() method very much dependent on what the user interface is? If we write this class for, say, WinForms, then we cannot re-use it for a console app or web app. Is this correct? The reason for the question is that I find myself always getting stuck and hung up on how to generalize classes so they are most useful.
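The kind of split I keep circling around (a rough Python sketch; all names are made up) is to keep the shapes as pure data and give each user interface its own renderer:

    class Circle:
        def __init__(self, radius):
            self.radius = radius

    class Square:
        def __init__(self, side):
            self.side = side

    class ConsoleRenderer:
        def draw(self, shape):
            print('drawing a', type(shape).__name__)   # text-mode 'drawing'

    def render_all(shapes, renderer):
        for shape in shapes:
            renderer.draw(shape)    # polymorphism lives in the renderer

    render_all([Circle(2), Square(3)], ConsoleRenderer())
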
This is actually working against me and I'm wondering if I'm \"trying too hard\"."} {"_id": "148230", "title": "Why don't we import a package when we use String functions?", "text": "I asked myself why we don't import a package when we use String functions such as `toUpperCase()`. How do they get in there without importing packages?"} {"_id": "121871", "title": "What's the difference between a solution and platform architect?", "text": "I'm having trouble finding any information regarding the job title \"platform architect\". What is the difference between it and a solution architect?"} {"_id": "121873", "title": "Exposing business logic as WCF service", "text": "I'm working on a middle-tier project which encapsulates the business logic (uses a DAL layer, and serves a web application server [ASP.net]) of a product deployed in a LAN. The BL serves as a bunch of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application that the BL uses, **but the BL is consumed by the web application as a DLL**. Both the DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue since, mostly, product versions are deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience?"} {"_id": "225759", "title": "Are big IT companies continuously firing or continuously expanding?", "text": "Today I just received my (almost-daily) mail from a big company's automated career job search, like the following: Based on your search criteria the following openings are now available: * Senior Software Engineer II - based in NY * Principal Software Architect - based in NJ * Software Engineer - based in LA and I wondered: how come I'm receiving these ads so often? They don't seem to always be the same, so (at least apparently) they're not just sending me the same jobs repeatedly. Are big IT companies like this one firing people continuously, are they continuously expanding, or are people continuously leaving (and in that case, how come **that** many people are leaving)?"} {"_id": "222069", "title": "What exactly happens on a LAMP machine when I request a php file?", "text": "I am a .NET developer who has recently started working in a LAMP environment. I know that if I go to `www.somedomain.com/files/test.php`, then (1) DNS resolves the domain to my server, (2) my server handles the request on a given port, (3) the server looks in /files/test.php and somehow runs test.php and returns the output of the file to the client. But it would be really great to understand this process in much more detail. For instance, does Apache/nginx actually run the PHP file or does it pass it to the PHP interpreter? Does every PHP file run every time, or does the server cache its output? It would be really helpful to know the major details/decisions that a LAMP environment makes during this process. Kind of like this answer, which explains in detail how SSL works... http://security.stackexchange.com/questions/20803/how-does-ssl-work/20847#20847"} {"_id": "46178", "title": "How to organize a Coding Dojo?", "text": "Over on Stack Overflow it was asked how to organize a coding dojo (http://stackoverflow.com/questions/4338567/how-to-organize-a-coding-dojo-event).
I believe that may have been the wrong forum... I wonder the same thing: how is a Coding Dojo organized? What is the structure of a meeting? How would one pick katas? What do you plan ahead of time? I am interested in any ideas on this as well as links to any resources that outline this."} {"_id": "155588", "title": "How to get legal advice for an open source project?", "text": "I have a question about open source software. **Questions:** Where do you get legal advice from? Do you have to find a lawyer specialised in software issues right off the bat, or do you get legal advice from lawyers that may join the community later on? Is there any other way a newly created open source organization may seek advice on legal issues?"} {"_id": "97190", "title": "When to switch to mobile programming?", "text": "We're a team of developers working on some PC applications. But we have also witnessed a trend in the market towards writing more and more mobile sites and mobile applications. Is it time for developers to shift to mobile programming?"} {"_id": "215072", "title": "Designing a single lookup entity", "text": "In almost every application you have this lookup entity that provides dynamic references. These are things like type, category, etc. These entities will always have id, name, desc. So at first I designed a different entity for each lookup, like education_type, education_level, degree_type.... But on second thought I decided to have one entity for all of these kinds of lookups. Now that I am done with the design and check the relations, this entity is referenced by almost all entities in the system, and I don't believe that is appropriate. So what is your take on this? Can you give me some clear pros and cons?"} {"_id": "215071", "title": "Do activity diagrams always end in one endpoint?", "text": "For example, an activity diagram for a simple program: 1. Get user data. 2. If the user exists, DO something, ELSE do nothing. 3. End. I often see diagrams with multiple endpoints but also with just one. Should activity diagrams merge both ways to one final state, regardless of the previous paths?"} {"_id": "100839", "title": "Should we hire someone who writes C in Perl?", "text": "One of my colleagues recently interviewed some candidates for a job and one said they had very good Perl experience. Since my colleague didn't know Perl, he asked me for a critique of some code written (off-site) by that potential hire, so I had a look and told him my concerns (the main one was that it originally had no comments, and it's not like we gave them enough time). However, the code works, so I'm loath to say no-go without some more input. Another concern is that this code basically looks exactly how I'd code it in C. It's been a while since I did Perl (and I didn't do a lot, I'm more a Python bod for quick scripts) but I seem to recall that it was a much more expressive language than what this guy used. I'm looking for input from real Perl coders, and suggestions for how it could be improved (and why a Perl coder _should_ know that method of improvement). You can also wax lyrical about whether people who write one language in the style of a totally different one should (or shouldn't) be hired. I'm interested in your arguments, but this question is primarily for a critique of the code.
The spec was to successfully process a CSV file as follows and output the individual fields: User ID,Name , Level,Numeric ID pax, Pax Morgan ,admin,0 gt,\" Turner, George\" rubbish,user,1 ms,\"Mark \\\"X-Men\\\" Spencer\",\"guest user\",2 ab,, \"user\",\"3\" The output was to be something like this (the potential hire's code actually output this): User ID,Name , Level,Numeric ID: [User ID] [Name] [Level] [Numeric ID] pax, Pax Morgan ,admin,0: [pax] [Pax Morgan] [admin] [0] gt,\" Turner, George \" rubbish,user,1: [gt] [ Turner, George ] [user] [1] ms,\"Mark \\\"X-Men\\\" Spencer\",\"guest user\",2: [ms] [Mark \"X-Men\" Spencer] [guest user] [2] ab,, \"user\",\"3\": [ab] [] [user] [3] Here is the code they submitted: #!/usr/bin/perl # Open file. open (IN, \"qq.in\") || die \"Cannot open qq.in\"; # Process every line. while (<IN>) { chomp; $line = $_; print \"$line:\\n\"; # Process every field in line. while ($line ne \"\") { # Skip spaces and start with empty field. if (substr ($line,0,1) eq \" \") { $line = substr ($line,1); next; } $field = \"\"; $minlen = 0; # Detect quoted field or otherwise. if (substr ($line,0,1) eq \"\\\"\") { $line = substr ($line,1); $pastquote = 0; while ($line ne \"\") { # Special handling for quotes (\\\\ and \\\"). if (length ($line) >= 2) { if (substr ($line,0,2) eq \"\\\\\\\"\") { $field = $field . \"\\\"\"; $line = substr ($line,2); next; } if (substr ($line,0,2) eq \"\\\\\\\\\") { $field = $field . \"\\\\\"; $line = substr ($line,2); next; } } # Detect closing quote. if (($pastquote == 0) && (substr ($line,0,1) eq \"\\\"\")) { $pastquote = 1; $line = substr ($line,1); $minlen = length ($field); next; } # Only worry about comma if past closing quote. if (($pastquote == 1) && (substr ($line,0,1) eq \",\")) { $line = substr ($line,1); last; } $field = $field . substr ($line,0,1); $line = substr ($line,1); } } else { while ($line ne \"\") { if (substr ($line,0,1) eq \",\") { $line = substr ($line,1); last; } if ($pastquote == 0) { $field = $field . substr ($line,0,1); } $line = substr ($line,1); } } # Strip trailing space. while ($field ne \"\") { if (length ($field) == $minlen) { last; } if (substr ($field,length ($field)-1,1) eq \" \") { $field = substr ($field,0, length ($field)-1); next; } last; } print \" [$field]\\n\"; } } close (IN);"} {"_id": "69135", "title": "Where can I host my JSP+Java Web application?", "text": "My web application is currently written in JSP/Java, using an Oracle DB. I develop on Windows, and I use JDeveloper (Oracle's eclipse clone) and I use JDev's integrated WebLogic Server. I want to go live using a respectable Web hosting company. LAMP devs get all the girls, while I'm stuck using WWOJ (Windows Weblogic Oracle Java). I can't find any hosts that use Oracle, and ones that run Tomcat are few and far between (short of a custom box on RackSpace). I'm switching to MySQL this week. I'm running Tomcat on my Windows box now, and using NetBeans. I can compile a .war now. I'm finally up to WTMJ :) Does anyone have experience with getting a Java project hosted? Or something with an Oracle backend? I feel like I'm lost out here."} {"_id": "221612", "title": "Why do some compilers generate direct machine code?", "text": "I was taking this course - CMU 18-447, Computer Architecture at Carnegie Mellon - to brush up my knowledge and concepts. They say that most of the machine-level details and implementation are taken care of at the Instruction Set Architecture (ISA) level and are abstracted away at that level.
Some Intel processors even have hardware-level translation layers that take the front-level ISA that is exposed to the programmer and translate it further, closer to the machine. Given such power is provided by the ISA/processor itself, why do compilers generate direct machine code - or is it just a black box, and internally they use assemblers to convert to direct machine code? I hear that the JVM takes byte code and translates it directly to machine code (an exe). Is this true, or is my understanding wrong here?"} {"_id": "254092", "title": "Maximum tcp port number constant in java", "text": "Is there a public constant for the maximum TCP port number (65535) defined in Java or a common library such as Apache Commons, that I could refer to from my code (instead of hardcoding the integer)?"} {"_id": "254091", "title": "Embedded Tomcat Cluster", "text": "Can someone please explain with an example how an Embedded Tomcat Cluster works? Would a load balancer be necessary? Since we're using embedded tomcat, how would two separate jar files (each a standalone web application with their own embedded tomcat instance) know where each other are and let each other know their status, etc? Here is the code I have so far, which is just a regular embedded tomcat without any clustering and which would formulate one of the jar files mentioned above: import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.File; import java.io.IOException; import java.io.Writer; import org.apache.catalina.Context; import org.apache.catalina.LifecycleException; import org.apache.catalina.startup.Tomcat; public class Main { public static void main(String[] args) throws LifecycleException, InterruptedException, ServletException { Tomcat tomcat = new Tomcat(); tomcat.setPort(8080); Context ctx = tomcat.addContext(\"/\", new File(\".\").getAbsolutePath()); Tomcat.addServlet(ctx, \"hello\", new HttpServlet() { protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { Writer w = resp.getWriter(); w.write(\"Hello, World!\"); w.flush(); } }); ctx.addServletMapping(\"/*\", \"hello\"); tomcat.start(); tomcat.getServer().await(); } } Source: java dzone"} {"_id": "118268", "title": "Do any languages use =/= for the inequality operator?", "text": "Wikipedia says: > **Not equal** > > The symbol used to denote inequation \u2014 when items are not equal \u2014 is a slashed equals sign \"\u2260\" (Unicode 2260). > > Most programming languages, limiting themselves to the ASCII character set, use ~=, !=, /=, =/=, or <> to represent their boolean inequality operator. All of these operators can be found in this table, apart from `=/=`. I can find this equals-slash-equals used as a way of formatting \u2260 in plaintext but not in any programming language. Has `=/=` been used as the inequality operator in any programming language?"} {"_id": "191428", "title": "How to process an endless XML data stream", "text": "There is an **endless data stream of XML messages** (and \"heartbeats\") that I receive via a telnet connection and through a site-to-site VPN IPsec tunnel. I'm still pondering: **what is the best/most elegant solution to process the XML messages without losing any data, without redundancy, and with a (nearly) constant processing time?** A never-ending process/script? Writing the stream to file(s) and processing it/them periodically, step by step? Or something completely different? The messages usually come every few seconds. Sometimes every second. Sometimes maybe every 10 seconds. It differs, but not a lot.
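My current best idea is a never-ending process that does a streaming (pull) parse straight off the connection. Below is a rough, untested Java sketch of that shape - the host, port and \"message\" element name are invented, and it assumes the feed wraps all messages in a single root element that never closes:

import java.io.InputStream;
import java.net.Socket;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class FeedConsumer {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket(\"feed.example.com\", 2323)) {
            InputStream in = socket.getInputStream();
            XMLStreamReader xml = XMLInputFactory.newFactory().createXMLStreamReader(in);
            // The parser blocks until more bytes arrive, so this loop runs for as
            // long as the connection is up; each message is handled as soon as its
            // start tag is read, keeping the processing time per message flat.
            while (xml.hasNext()) {
                if (xml.next() == XMLStreamConstants.START_ELEMENT
                        && \"message\".equals(xml.getLocalName())) {
                    handleMessage(xml); // read this message's rows, then store them
                }
            }
        }
    }

    private static void handleMessage(XMLStreamReader xml) {
        // heartbeat filtering and storage omitted in this sketch
    }
}
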
One XML message within the stream contains 45 rows. The messages should be stored afterwards. Note: The concrete structure of the XML messages and the infrastructure of the participating systems are negligible in my opinion."} {"_id": "254095", "title": "What is the value of a let expression", "text": "From what I understand, all code in F# is an expression, including a `let` binding. Say we have the following code: let a = 5 printfn \"%d\" a I've read that this would be seen by the compiler as let a = 5 in ( printfn \"%d\" a ) And so the value of all this would be the value of the inner expression, which is the value of `printfn`. On the other hand, in F# interactive: > let a = 5;; val a : int = 5 Which clearly indicates that the value of the let expression is the value bound to the identifier. **Q: Can anyone explain what is the value of a let expression? Can it be different in compiled code than in F# interactive?**"} {"_id": "148584", "title": "Global Texture Container", "text": "For my first large-ish endeavor in OpenGL I'm making a simulator of sorts. I've got much of the basic functionality working, but recently ran into a problem. As I've since realized, I originally designed the program so that each object in the simulator had its own class which stored its position, texture, draw code, etc. This became a problem, however, when I began creating lots of objects of the same type, as I quickly realized that I wrote the classes to reload a new instance of the texture data for each instance of an object. To fix this I considered making a simple texture database class of sorts which would contain a pointer to a single instance of each object's texture data that each instance of the object would copy upon its creation. The problem here is that lots of different classes in the simulator create objects, and I am hesitant to simply store the texture database class at the top of the program hierarchy and pass it down to every function that creates an object, as I feel that this will get very complex very fast. Instead I feel it would be better to have a global container class that would keep track of the texture pointers, but I'm not sure how I could store the pointers without instantiating an instance of the container, which would require passing it all over the place. I'm hoping that there is a more elegant and simple solution that I'm overlooking; otherwise I'll try the way I've described. Alternatively, if it seems some restructuring of the simulator would be best, that isn't out of the question, but I'd appreciate any advice."} {"_id": "191421", "title": "Mobile-first implementation or desktop-first?", "text": "When working with CSS media queries, I am unsure of whether I should program in a mobile-first style, or a desktop-first style. For example, let's say I'm given a design that consists of a set of blocks that are side-by-side. On desktop, I'll stick to spec, but on mobile, given the reduced screen area, two blocks stacked on top of one another would be better. Which is better, this (desktop-first): div { width: 50%; display: inline-block; } @media all and (max-width: 600px) { div { width: 100%; margin: 0 auto; display: block; } } or this (mobile first)?
div { width: 100%; margin: 0 auto; display: block; } @media all and (min-width: 600px) { div { width: 50%; display: inline-block; } } Wordpress' latest theme follows the mobile-first method, by using `min-width`, but am I correct in assuming that older browsers without media query support would be unable to parse these directives, and load the \"mobile\" css? What are the advantages/disadvantages of both?"} {"_id": "118261", "title": "code contracts/asserts: what to do with duplicate checks?", "text": "I'm a huge fan of writing asserts, contracts or whatever type of checks is available in the language I'm using. One thing that bothers me a bit is that I'm not sure what the common practice is for dealing with duplicate checks. Example situation: I first write the following function void DoSomething( object obj ) { Contract.Requires( obj != null ); //code using obj } then a few hours later I write another function that calls the first one. As everything is still fresh in memory, I decide not to duplicate the contract, since I know that `DoSomething` will check for a null object already: void DoSomethingElse( object obj ) { //no Requires here: DoSomething will do that already DoSomething( obj ); //code using obj } The obvious problem: `DoSomethingElse` now depends on `DoSomething` for verifying that obj is not null. So should `DoSomething` ever decide not to check anymore, or if I decide to use another function, obj might not be checked anymore. Which leads me to writing this implementation after all: void DoSomethingElse( object obj ) { Contract.Requires( obj != null ); DoSomething( obj ); //code using obj } Always safe, no worries, except that as the situation grows, the same object might be checked a number of times, and it's a form of duplication, and we all know that's not so good. What is the most common practice for situations like these?"} {"_id": "46284", "title": "How do you manage a complexity jump?", "text": "It seems an infrequent but common experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally get everything up and running in a nice, fairly simple and easy to follow fashion. It worked great until we started testing across a wider network and suddenly pages started timing out, as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis. The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates a way that one can start with a simple premise and model and suddenly have an escalation of complexity well into the development of the project.
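For concreteness, the shape we ended up with was roughly the following - a much-simplified Java sketch of the threaded-fetch-plus-cache idea, with invented names and the actual SOAP call stubbed out:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ResultCache {
    private final Map<String, String> latest = new ConcurrentHashMap<String, String>();
    private final ScheduledExecutorService refresher = Executors.newScheduledThreadPool(4);

    // Poll each remote machine on its own background schedule; page requests
    // then read whatever is cached instead of waiting on the slow remote call.
    public void track(final String machine) {
        refresher.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                latest.put(machine, callSoapService(machine));
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    public String latestResult(String machine) {
        String cached = latest.get(machine);
        return cached != null ? cached : \"pending...\";
    }

    private String callSoapService(String machine) {
        return \"...\"; // the slow remote SOAP call described above goes here
    }
}

The point being that the request thread never waits on the remote machines; it only ever reads the cache.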
What strategies do you have for dealing with these types of functional changes whose need arises - often as a result of environmental factors rather than specification change - later on in the development process or as a result of testing? How do you balance between avoiding the premature optimisation / YAGNI / overengineering risks of designing a solution that mitigates against possible but not necessarily _probable_ issues, as opposed to developing a simpler and easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality? Edit: Crazy Eddie's answer includes \"you suck it up and find the least expensive way to implement the new complexity.\" That made me think of something that was implicit in the question but I didn't specifically raise: once you hit that bump and you incorporate the necessary changes, do you do the thing that will keep the project as close to schedule as possible but may affect maintainability, or do you go back to your architecture and rework it on a more detailed level, which may be more maintainable but will push everything back during development?"} {"_id": "42706", "title": "Software management for 2 programmers", "text": "My very good friend and I run a small business. We have a company and we develop web apps using Scala. We started 3 months ago and we have a lot of work now. We cannot afford to employ another programmer because we can't pay him yet. Until now we have tried to manage the entire development process very simply. We use Excel sheets for simple bug tracking and we work on client requests on the fly. We have no plan for the next week or anything similar. But now I find it very inefficient and useless. I am trying to find some rules or a methodology for a small team, or for only two guys. For example Scrum is, imo, not well suited to us. There are a lot of roles (ScrumMaster, Product Owner, Team...) and it seems overkill. Can you advise me something? Do you have any experience with software management in small teams? Is any current agile development methodology fitting for a pair of programmers? Is there any software for managing simple bug tracking, maybe a wiki, or time management for two coders? Thanks a lot for sharing."} {"_id": "252239", "title": "Is this a specific pattern and what is its purpose?", "text": "I recently stumbled over the following C++ code which confused me a bit: class One {/*definition*/}; template <class Base> class Two : public Base {/*definition*/}; template <class Base> class Three : public Base {/*definition*/}; template <class Base> class Four : public Base {/*definition*/}; class Five : public Four < Three< Two < One > > > {/*definition*/}; I know the purpose of templates and more or less when and how to use them. Is there a name or a common pattern for a cascade of several templates? When would I use this pattern (instead of anything less complex)? P.S.: the definition of class Five is actually empty."} {"_id": "198703", "title": "Maximum flow and minimum cut undirected graph java", "text": "I need to implement an algorithm where I find the maximum flow and the minimum cut of an undirected graph, and I cannot really find an algorithm for this. I have looked at the Ford-Fulkerson algorithm, but that one is described for a directed graph.
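From what I have pieced together, the standard workaround seems to be to give each undirected edge its full capacity in both directions and then run Edmonds-Karp (the BFS flavour of Ford-Fulkerson) unchanged, since the residual updates cancel flow on an edge before ever pushing it both ways. Here is my rough, untested Java sketch of that idea:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class MaxFlow {
    // Edmonds-Karp on an adjacency matrix. For an undirected edge {u,v}
    // with capacity c, set cap[u][v] = cap[v][u] = c before calling; the
    // residual updates below then handle undirectedness for free. Note
    // that cap is consumed: it ends up holding residual capacities.
    static int maxFlow(int[][] cap, int s, int t) {
        int n = cap.length, flow = 0;
        int[] parent = new int[n];
        while (bfs(cap, s, t, parent)) {
            int bottleneck = Integer.MAX_VALUE;
            for (int v = t; v != s; v = parent[v])
                bottleneck = Math.min(bottleneck, cap[parent[v]][v]);
            for (int v = t; v != s; v = parent[v]) {
                cap[parent[v]][v] -= bottleneck; // use up forward capacity
                cap[v][parent[v]] += bottleneck; // grow the reverse residual
            }
            flow += bottleneck;
        }
        return flow;
    }

    // Shortest augmenting path by BFS; fills parent[] and reports
    // whether the sink is still reachable in the residual graph.
    static boolean bfs(int[][] cap, int s, int t, int[] parent) {
        Arrays.fill(parent, -1);
        parent[s] = s;
        Queue<Integer> queue = new ArrayDeque<Integer>();
        queue.add(s);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v = 0; v < cap.length; v++) {
                if (parent[v] == -1 && cap[u][v] > 0) {
                    parent[v] = u;
                    queue.add(v);
                }
            }
        }
        return parent[t] != -1;
    }
}

The vertices still reachable from the source in the final residual graph should then form one side of the minimum cut.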
Can anyone confirm that approach, or point me to a more appropriate algorithm?"} {"_id": "210642", "title": "Slowly realising that I should have asked for a way higher salary", "text": "I've been working in this new company for almost half a year now and I'm beginning to realize that I made a terrible mistake when I proposed my salary. This is my first \"real\" job; previously I've been working freelance, and for a short amount of time for a small local IT company. When I moved here, I wasn't sure how much I was supposed to ask for. I did a lot of research and settled on an amount about 20% higher than an average developer's starting salary in my country. Now I'm attending interviews (as a trainee so far) and I have got to learn something more about the salary capabilities of my employer. I learned that almost twice as much as I make would be \"acceptable\". That's because the company was started by an American fellow, and while we are based in a less wealthy country, more generous pay is acceptable. Additionally, I saw that I'm a better programmer than I thought I was (based on comparing my performance with others and reviews by others). I already did one thing towards this: some time after a new guy came in to work with me (paid the same as me), I realised that I'm clearly a better developer and I boldly asked for a raise (which I got, after being lectured that raises only happen during reviews). What do I do here? I'm at a loss."} {"_id": "210641", "title": "Good unit-testing story for a unit test training", "text": "I have to deliver a training on unit testing in my company. I would like to show a striking, real-life example of an unexpected regression not caught by compilation (of course) but detectable with unit testing - something like a seemingly valid change that in fact causes a regression in another part of the program. Would you share your unit testing epic-win stories? (The target language is C++, in a networked distributed application, but any good example will do.)"} {"_id": "198707", "title": "Casting from string to integer and vice versa", "text": "I am trying to cast in Java from string to integer and the other way around, but the compiler is complaining about this. My question is: is this a matter of the compiler, or does the Java programming language simply not support this kind of casting?"} {"_id": "17214", "title": "The Art of Computer Programming - To read or not to read?", "text": "There are lots of books about programming out there, and it seems Code Complete is pretty much at the top of most people's list of \"must-read programming books\", but what about _The Art of Computer Programming_ by Donald Knuth? I'm a busy person, between work and a young family I don't have a ton of free time, so I have to be picky about how I use it. I'm wondering - has anybody here read 'TAOCP'? If so, is it worth making time to read or would some other book or more on-the-side programming like pet projects or contributing to open source be a better use of my time in terms of professional development? DISCLAIMER - For those of you who sport \"Knuth is my homeboy\" t-shirts, don't get me wrong - I want to read it, but I'm just wondering if it should be right at the top of my priority list or if something else should come first."} {"_id": "166992", "title": "What is a good design model for my new class?", "text": "I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP.
I have read a few books to get me up to speed on the beginning theory, but would like some advice on practical application. So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list), and, when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes with their own properties? The latter is at least the direction I am on now. My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $date. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic to determine what site I am on in the constructor of each class. This doesn't feel right. Should I create another class Algorithms and store the logic there for each site? Should the functions to handle that logic be in this class, or specific to the classes whose properties they set? I want to take into account in my design two things: 1) I will add different content objects in the future that share $title and $description, but will have their own properties, so I want to be able to easily grow these as needed. 2) I will add more websites constantly (each with their own algorithms for data extraction) so I would like to plan for efficiently managing and working with these now. I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there. But this didn't feel right either, as now I have to manage 100s of little website-specific class files. Note: I didn't know if this was the correct site or if stackoverflow was the better choice. If so, let me know and I'll post there."} {"_id": "162676", "title": "What encryption algorithm/package should I use in a betting game?", "text": "I have a betting type site where I publish a number (between 0-100) that is encrypted. Then after a period of time, I would reveal what the number is and prove it with a key to decrypt the encrypted number, to prove that I'm not cheating. I also want it to be easily verifiable by an average user. What encryption algorithm/technique/package should I use? I'm no expert on cryptography. There seem to be so many options out there and I'm not sure what to use. Python friendly is a plus."} {"_id": "113160", "title": "Should you use C# and F# together", "text": "I know you can use C# and F# together in the same project; however, I'm not sure if it's a good idea to do so. It seems to me that mixing two very different coding styles (functional vs OO) could cause a lack of cohesion in the design. Is that correct?"} {"_id": "195086", "title": "How can we avoid having to build yet another CRM system from scratch", "text": "We are building an accounting web application. In our database, we store basic data about our customers, like phone numbers and their login information, because these things are tied into our web application.
Now we need a CRM system to do things like sending marketing mails to users who signed up but never used the service. We also want a database of potential customers or partners who we would like to contact some day, etc. Such functionality is basic in any CRM system and I would hate to have to implement all these features ourselves. At the same time, I cannot see how we could utilize a CRM solution like SalesForce without having to store some data in two places, with all the complications that that would lead to. What should we do? Is there any CRM system that can be put on top of our PostgreSQL database and just utilize the data we have and store what we do not have?"} {"_id": "9219", "title": "Are design patterns generally a force for good or bad?", "text": "I've heard it argued that design patterns are the best thing since sliced bread. I've also heard it argued that design patterns tend to exacerbate \"Second System Syndrome,\" that they are massively overused, and that they make their users think they're better designers than they really are. I tend to fall closer to the former camp, but recently I've been seeing designs where nearly every single interaction is replaced with an observer relationship, and everything's a singleton. So, considering the benefits and problems, are design patterns generally good or bad, and why?"} {"_id": "9213", "title": "Is the time spent customizing your dev machine environment worth it?", "text": "There are times when I am working on a programming project and I get the itch to change some stuff in my environment (OSX or Linux). Vim might not be doing exactly what I want, so instead of doing it the roundabout way I've been doing it for a couple months (sometimes years), I go and figure out the right way. Or I may be doing something long-handed in bash and I say to myself, why don't I figure out a better way? The thing is, when I go off and do this, hours can fly by. Sometimes I get stuck on trying to get what I want. I'll know I'm really close, so I don't give up. I usually get it eventually, but it's after hours of tinkering and googling. I hate the feeling of giving up and having to deal with something I know could work better. When I'm done I have a warm feeling knowing that my environment is a little more smooth and personalized, but I wonder: could my time be better spent? Where do I draw the line? It seems with all the UNIX-style tools there is an endless amount to learn. I've always thought the sign of a superior programmer is someone who goes out of their way to make the computer bend to their will. Am I doing it right? I figure the bash shell, unix/linux, and vim will be around forever, so I see it as an investment. But then again I just spent 3 hours trying to get some stupid thing in the vimperator firefox plugin to work right. So I'm wondering what this community thinks about this."} {"_id": "195088", "title": "How to embed an article in the source code?", "text": "Sometimes I notice typos in the source code that appears in the body of articles (blog posts) or books. It may be an indication that the code has been manually copied and pasted (e.g. missing braces), or that something nasty happened to the text. How may I write an article inside the source code of a project?
I'm targeting WordPress; I'm basically looking for a parser that would recognize two kinds of region - article and source code - and that would format them to whatever I want (stackoverflow question, LaTeX code or Wordpress article)."} {"_id": "212865", "title": "Deduplication of complex records / Similarity Detection", "text": "I'm working on a project that involves records with fairly large numbers of fields (~15-20) and I'm trying to figure out a good way to implement deduplication. Essentially the records are people along with some additional data. For example, the records are likely to include personal information like first name, last name, postal address, email address, etc. but not all records have the same amount of data. Currently records are stored in an RDBMS (MySQL) and I want to detect duplicates on insertion and still have them inserted but flagged as a duplicate. It needs to be fast, as I need to provide feedback as to whether it is a duplicate or not in real time. The dataset is large (millions of records). I've considered the following options but I'm not sure which is best or if there are better options available: * Use MySQL's built in fulltext search and use fuzzy searching. Major issue with this is that it seems slow, only the latest version supports fulltext indexes with InnoDB (the alternative engine is MyISAM, which is not good and critically does not support transactions), and fuzzy searching alone does not seem the best method for similarity detection. * Use simhash or similar. Issue with this is that I'd also like to be able to detect synonyms, and I don't see how simhash handles that. For example, address might be: \"Some Road\" or \"Some Rd.\" and names might be: \"Mike\" or \"Michael\" * Index the data using an Apache Lucene derivative (elasticsearch/solr/etc) and perform a query that would likely return numerous results. In terms of using Apache Lucene, I've been reading about similarity detection and using cosine similarity to produce a value from 0 to 1 from the term frequency vectors that lucene stores. I could apply this to the results from the lucene query and check to see if any of the results are above a certain threshold. My concern about this is how relevant the cosine similarity would be for the type of data I'm storing, i.e. a number of fields with either a single word or a small number of words, compared to calculating the cosine similarity of some large text documents. Basically, I'm wondering what is the best way to deduplicate this type of data (or, put alternatively, to detect similarities in this type of data)?"} {"_id": "149180", "title": "Why is the output of a compiler called object code?", "text": "From the essay _Programming Languages Explained_ by Paul Graham, published in _Hackers & Painters_: > The high-level language that you feed the compiler is also known as _source code_, and the machine language translation it generates is called _object code_. From the Wikipedia article on object code: > Object code, or sometimes object module, is what a computer compiler produces. From a definition of 'compiler': > Traditionally, the output of the compilation has been called object code or sometimes an object module. (Note that the term \"object\" here is not related to object-oriented programming.)
So what _is_ the term object related to?"} {"_id": "212861", "title": "The Lisp in Gnu", "text": "Since the GNU project is celebrating its anniversary, and the initial announcement for GNU is linked to (http://www.gnu.org/gnu/initial-announcement.en.html) all over the place, I reread it and stumbled upon the plan for a lisp-based window system: > and eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen. It is well known that Stallman had a lisp background and has launched lisp-based projects (Emacs, Guile). As I understand it, Stallman's preferred desktop environment for the GNU system is nowadays GNUstep (which seems to be a little dormant). When looking at demonstrations of Symbolics Lisp machines, I am really impressed by the powerful approach, and I think that Stallman had something like this in mind at the time of the announcement. So I wonder: **What happened to this plan of a Lisp-based window system? Was it ever actually pursued?**"} {"_id": "176028", "title": "Real-Time Multi-User Gaming Platform", "text": "I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have some automaton logic of some sort, so they can fend for themselves in a limited way using basic rules, until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question. I've been looking at smartfoxserver, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.
What I'm looking for, specifically, is an appropriate architecture that can do the following things: * Act as a repository of saved game states * Provide a framework that allows multiple players to communicate with each other * (optionally) Execute some of the gaming logic; for example, in the game of darebase, who determines who tagged whom? Is it one or both clients, or is it the server? I've never been involved with such a multi-user real-time environment, so I can only guess at the pitfalls of one decision vs. another."} {"_id": "252234", "title": "What is the verb for \"to make something into a plugin\"?", "text": "What is the verb for \"to make something into a plugin\"? Example use: \"Developer can you make Module Foo into a plugin?\". \"Yes sir, I can `some verb` Module Foo\". Terms I have considered: pluginify, pluginize, make pluggable Criteria for Acceptance: A single verb which means \"to make something into a plugin\" or \"to create something as a plugin\". _I am looking for a term which is already in use_ as opposed to a new one."} {"_id": "176025", "title": "Java Dynamic Binding", "text": "I am having trouble understanding the OOP polymorphic principle of Dynamic Binding (Late Binding) in Java. I looked for questions pertaining to Java, and since I wasn't sure whether a general answer on how dynamic binding works would pertain to Java's dynamic binding, I wrote this question. Given: class Person { private String name; Person(String initialName) { name = initialName; } // irrelevant methods are here. // Overrides Object's method public void writeOutput() { System.out.println(name); } } class Student extends Person { private int studentNumber; Student(String initialName, int initialStudentNumber) { super(initialName); studentNumber = initialStudentNumber; } // irrelevant methods here... // overrides Person's and Object's method public void writeOutput() { super.writeOutput(); System.out.println(studentNumber); } } class Undergraduate extends Student { private int level; Undergraduate(String initialName, int initialStudentNumber, int initialLevel) { super(initialName, initialStudentNumber); level = initialLevel; } // irrelevant methods are here. // overrides Person's, Student's and Object's method public void writeOutput() { super.writeOutput(); System.out.println(level); } } I am wondering: if I had an array called people declared to contain objects of type Person: Person[] people = new Person[2]; people[0] = new Undergraduate(\"Cotty, Manny\", 4910, 1); people[1] = new Student(\"DeBanque, Robin\", 8812); Given that people[] is **declared** to be of type **Person**, you would expect, for example, in the third line where people[0] is initialized to a new Undergraduate object, to only gain the instance variables from Person and Person's methods, since doesn't assigning a new Undergraduate to its ancestor type restrict the Undergraduate object to accessing Person's - its ancestor's - methods and instance variables... Thus, with the following code I would expect people[0].writeOutput(); // calls Undergraduate::writeOutput() people[1].writeOutput(); // calls Student::writeOutput() people[0] to not have Undergraduate's overridden writeOutput() method, nor people[1] to have Student's overridden writeOutput() method. If I had Person mikeJones = new Student(\"Who?, MikeJones\", 44); mikeJones.writeOutput(); I would expect the Person::writeOutput() method to be called. Why is this not so? Does it have to do with something I don't understand about arrays?
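To make my confusion concrete, this is the minimal test I am running against the three classes above:

public class DispatchDemo {
    public static void main(String[] args) {
        Person[] people = new Person[2];
        people[0] = new Undergraduate(\"Cotty, Manny\", 4910, 1);
        people[1] = new Student(\"DeBanque, Robin\", 8812);
        for (Person p : people) {
            p.writeOutput(); // which version runs: Person's, or the subclass's?
        }
    }
}

When I run it, each element prints the subclass's extra fields too, so the override of the runtime class is clearly what executes.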
Does the declaration Person[] people = new Person[2] not bind the method like the previous code would?"} {"_id": "12369", "title": "What's stopping Oracle from supporting identity (auto-numeric) columns?", "text": "EDIT: As gavenkoa's answer points out, Oracle Database 12c (released a couple of years after this question was asked) has support for Identity Columns. * * * As far as I know, Oracle's RDBMS is one of the few (the only?) SQL database products that doesn't support identity/autonumeric columns. The alternative offered by Oracle is database sequences, a feature in many ways much more powerful than auto-numeric columns, but not equivalent. It is not that I don't like sequences. What I hate is having a different programming model for generating row identity values between Oracle and any other database. For example, I often try to set up HSQL or SQLite for java apps that will eventually run over an Oracle database when I'm not working specifically on the data layer (just as a stub or mocking database). I cannot do that easily because I need different sets of SQL DDL scripts: one for Oracle, and one for everyone else; I also need two sets of Hibernate mapping files if I'm using Hibernate. What I find intriguing is that Oracle Database, being one of the most complete and robust enterprise software packages of the last decade, hasn't put that seemingly basic feature in their product, but almost any other RDBMS, even the smaller ones, has it. Why? Why doesn't Oracle support a sequence-based identity column shortcut syntax that dumb and lazy people like me can use? The only reason I can think of is that Oracle does that on purpose as a vendor lock-in strategy, so your code is harder to migrate to other RDBMSes where your database sequences cannot be used. Or maybe I'm just wrong and confused? Please enlighten me."} {"_id": "62952", "title": "How does your company manage knowledge and information?", "text": "I am interested in the architecture, methods and software used by your company to capture and store knowledge. Is the information easily searchable (especially by non-techies)? Is it stored in a central repository or in several places? Do you find the current implementation adequate? What could be improved?"} {"_id": "254635", "title": "Output an Access Report As PDF on Windows 8 from a VB6 Application", "text": "I am working with VB6, Microsoft Access 2013 and Windows 8. I am trying to output an Access report as a PDF from a VB6 application. I have this code: Public Sub OpenReport(sReportName As String, frmCallingForm As Form) Dim objAccess As Access.Application Dim lResult As Long Dim Path As String Path = Environ$(\"AppData\") Set objAccess = New Access.Application With objAccess On Error GoTo ErrHndlr .OpenCurrentDatabase App.Path & \"\\\" & \"OrderBackEnd.mdb\" .DoCmd.OutputTo acOutputReport, sReportName, acFormatPDF, Path & \"\\rpt\" & sReportName & \".pdf\", False .Visible = True On Error GoTo 0 .Quit acQuitSaveNone End With Set objAccess = Nothing lResult = ShellExecute(frmCallingForm.hWnd, \"Open\", Path & \"\\rpt\" & sReportName & \".pdf\", 0&, 0&, 3) Exit Sub ErrHndlr: objAccess.Visible = True MsgBox Err.Number & \":\" & Err.Description End Sub The problem is the output gets hung and freezes the system. When I terminate the application I notice the below box in the background of my form. The only way to see the below box is to End Task on the VB6 application. It appears not to be associating the acFormatPDF in the OutputTo line.
Is there any way to resolve this issue or a workaround? If I manually do this process in Access it works fine. ![enter image description here](http://i.stack.imgur.com/KJCAL.png)"} {"_id": "254631", "title": "Applying the principles of Clean Code to functional languages", "text": "I'm currently reading Robert Martin's _Clean Code_. I think it's great, and when writing OO code I'm taking his lessons to heart. In particular, I think his advice to use small functions with meaningful names makes my code flow much more smoothly. It's best summed up by this quote: > [W]e want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. (_Clean Code_, page 37: a \"TO paragraph\" is a paragraph that begins with a sentence voiced in the infinitive. \"To do X, we perform steps Y and Z.\" \"To do Y, we...\" etc.) For example: > TO RenderPageWithSetupsAndTeardowns, we check to see whether the page is a test page and if so, we include the setups and teardowns. In either case we render the page in HTML I also write functional code for my job. Martin's examples in the book definitely do read as if they were a set of paragraphs, and they're very clear -- but I'm not so sure that \"reads like a set of paragraphs\" is a desirable quality for functional code to have. Taking an example out of the Haskell standard library: maximumBy :: (a -> a -> Ordering) -> [a] -> a maximumBy _ [] = error \"List.maximumBy: empty list\" maximumBy cmp xs = foldl1 maxBy xs where maxBy x y = case cmp x y of GT -> x _ -> y That is about as far away as you can possibly get from Martin's advice, but that's concise, idiomatic Haskell. Unlike the Java examples in his book, I can't imagine any way to refactor that into something that has the sort of cadence he asks for. I suspect that Haskell written to the standard of _Clean Code_ would come off as long-winded and unnatural. Am I wrong to consider (at least some of) _Clean Code_ at odds with functional programming best practices? Is there a sensible way to reinterpret what he says in a different paradigm?"} {"_id": "10296", "title": "Ethics, Clients, and legal repercussions", "text": "I'm asking this after reading a SO question where the OP asked for help decoding obfuscated code, which looks like it belongs to a closed-source company. The OP's client is saying \"_I don't know the full legal particulars of the contract (if there was one) for the code development. They claim it was fully paid for and released to them. If the code development was completely outside the scope and use of Ioncube, then it's debatable if it is unethical._\" To me the whole thing stinks of: the client bought the product and hired the OP to decode it, but either doesn't understand or doesn't care that he didn't buy a licence to modify, distribute, reverse engineer, or reproduce the software. It's easy to just accept a job and say \"whatever\" to these sorts of things, but there is room for this to be legit. What considerations should we take to ensure we're doing the right thing, and furthermore protect ourselves?"} {"_id": "10292", "title": "What's the difference between a \"developer\" and a \"developer-analyst\"?", "text": "At larger companies there seems to be a distinction between the two positions.
How exactly do they differ?"} {"_id": "20262", "title": "What are the pros and cons of having a life partner from the same field?", "text": "Having a life partner definitely affects an individual's career. For example, a programmer spends hours on the computer for work. A spouse working in the same field as yours would not mind adjusting to the household chores. On the other hand, a partner from some other field may not understand the significance of your work, and this may create problems in one's married life. Apart from this factor, what do you think are the other factors which are affected by the choice of a partner from the same field? And how useful is it for programmers to have a spouse from the same field as the one they are working in?"} {"_id": "46821", "title": "How does trilicense (mpl,gpl,lgpl) work when you want to use it on public website", "text": "I have tried to search for this answer for quite some time and I have gone through all the various FAQs and documentation regarding the three licenses, but none of them have been able to answer a question that I have. So I've been working on an idea for a website for some time now and recently I found open source software that has many components that are similar. It is licensed under the mpl/gpl/lgpl licenses. I think for the most part I understand the ramifications, due to the searches and reading, of what is required if I modify/use and want to distribute the software. But what if I want to modify and not distribute, but use it on a public website that I generate ad revenue from? Is this illegal? It doesn't seem like it is from reading other open source systems, say like Drupal, where they allow you to use the software but it's not considered \"distribution\" if people just go to the website. I know this site may not be the best resource and I've tried some other sites, but I haven't received any clear replies back. If you know some other resource that I could contact also, please let me know. Links for those who don't know: * MPL - Wikipedia, Legalese * GPL - Wikipedia, Legalese * LGPL - Wikipedia, Legalese"} {"_id": "21562", "title": "How to retain familiarity with previously worked on technology/language/feature", "text": "One problem that I have faced over the years is that when I stop using a technology (COM, QT), language (VBScript) or feature (Templates) for development, over time I lose skill in it. What in your opinion is the \"easiest\" way to retain familiarity, so that when I come back to any of them, the effort for relearning is minimal?"} {"_id": "152172", "title": "rails fake data, considering switch from faker to forgery, any advantages or pitfalls?", "text": "With Ruby on Rails I've usually used **Forgery** for generating dummy data for testing. I've noticed recently that several clients and tutorials are using **Faker**. They both seem fairly similar in use and popularity: **Faker** 128 forks, 418 watchers. **Forgery** 59 forks, 399 watchers. They both seem similar in how current they are: **Faker** Most updates are from 6 and 9 months ago. **Forgery** Most updates are from 4 and 9 months ago. The one distinguishing factor I've found so far is that **Forgery** seems like it has better instructions. Are there any particular _benefits_ or _disadvantages_ to using one over the other? Have you ever needed to switch from one to the other for a particular reason?"} {"_id": "152173", "title": "How does one unit test an algorithm", "text": "I was recently working on a JS slideshow which rotates images using a weighted average algorithm.
Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he's noted under todos: \"unit tests!\". What I'd like to know is how one goes about unit testing an algorithm. In the case of a weighted average, how would you create a proof that the averages are accurate when there is an element of randomness? Code samples or similar would be very helpful to my understanding."} {"_id": "152177", "title": "Develop for Desktop and mobile use?", "text": "I am at the very beginning of developing an app / desktop program. I want it to be cross-platform and possibly also available as a tablet version (preferably Android Ice Cream Sandwich). Note that I need to run it offline. I thought about the following approaches: * ADOBE Air, since I do not need much performance. Plus I did some web programming in the past which might be of some use. Afaik it would run on OS X and Windows and should run on mobile OSes, too. * Qt. Found some nice Qt-based desktops recently and read that it also works on Android. Plus I like the SDK. * HTML5 / JS. Again my web background should help me here. I won't need server-side scripts, thus it should work without installing anything but a browser. How easily could this be converted into an Android app? There might be a plethora of other (better) ways to do it, but I haven't thought of them yet. Can you help out? How would you create such an application? Would it be better to build a pure desktop client and then create tablet versions? Or would you rather start by creating a website and worry later about how to turn it into an app?"} {"_id": "157522", "title": "CQRS + Event Sourcing: (is it correct that) Commands are generally communicated point-to-point, while Domain Events are communicated through pub/sub?", "text": "Didn't know how to shorten that title. I'm basically trying to wrap my head around the concept of CQRS (http://en.wikipedia.org/wiki/Command-query_separation) and related concepts. Although CQRS doesn't necessarily incorporate Messaging and Event Sourcing, it seems to be a good combination (as can be seen with a lot of examples / blog posts combining these concepts). Given a use-case for a state change for something (say to update a Question on SO), would you consider the following flow to be correct (as in best practice)? The system issues an aggregate UpdateQuestionCommand which might be separated into a couple of smaller commands: UpdateQuestion, which is targeted at the Question Aggregate Root, and UpdateUserAction (to count points, etc), targeted at the User Aggregate Root. These are sent asynchronously using point-to-point messaging. The aggregate roots do their thing and, if all goes well, fire events QuestionUpdated and UserActionUpdated respectively, which contain state that is outsourced to an Event Store... to be persisted yadayada, just to be complete, not really the point here. These events are also put on a pub/sub queue for broadcasting. Any subscriber (among which likely one or multiple Projectors which create the Read Views) is free to subscribe to these events. The general question: is it indeed best practice that Commands are communicated point-to-point (i.e. the receiver is known), whereas Events are broadcast (i.e. the receiver(s) are unknown)? Assuming the above, what would be the advantage/disadvantage of allowing Commands to be broadcast through pub/sub instead of point-to-point?
For example: when broadcasting Commands while using Sagas (http://blog.jonathanoliver.com/2010/09/cqrs-sagas-with-event-sourcing-part-i-of-ii/), there could be a problem, since the mediation role a Saga needs to play in case of failure of one of the aggregate roots is hindered, because the Saga doesn't know which aggregate roots participate to begin with. On the other hand, I see advantages (flexibility) if broadcasting Commands were allowed. Any help in clearing my head is highly appreciated."} {"_id": "46634", "title": "How to ...set up new Java environment - largely interfaces", "text": "Looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need to interface to systems A, B and C. Then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so the publishing on our side will be to the bus, and the interface processes will be taking from the bus and mapping to the destination system. It's for a vendor-based system - so most of the core code we can't touch. Currently thinking we will have several processes, one per interface we need to do. The question is how to structure things. Several of the APIs we need to work with are Java based. We could go EJB, but prefer to keep it simple: one process per interface, so that we can restart them individually. Similarly SOA seems overkill, although I am probably mixing my thoughts about implementations of it compared to the concepts behind it... Currently thinking that something Spring based is the way to go. In true \"leverage a new tech if possible\" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB, as ruby snippets that get mixed in... but that's an aside... So, any comments/thoughts on the Spring approach - anything more up-to-date/relevant these days? Thanks in advance, Chris"} {"_id": "157526", "title": "Explanation on how \"Tell, Don't Ask\" is considered good OO", "text": "This blogpost was posted on Hacker News with several upvotes. Coming from C++, most of these examples seem to go against what I've been taught. Such as example #2: Bad: def check_for_overheating(system_monitor) if system_monitor.temperature > 100 system_monitor.sound_alarms end end versus good: system_monitor.check_for_overheating class SystemMonitor def check_for_overheating if temperature > 100 sound_alarms end end end The advice in C++ is that you should prefer free functions instead of member functions as they increase encapsulation. Both of these are identical semantically, so why prefer the choice that has access to more state? Example 4: Bad: def street_name(user) if user.address user.address.street_name else 'No street name on file' end end versus good: def street_name(user) user.address.street_name end class User def address @address || NullAddress.new end end class NullAddress def street_name 'No street name on file' end end Why is it the responsibility of `User` to format an unrelated error string? What if I want to do something besides print `'No street name on file'` if it has no street? What if the street is named the same thing?
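To make sure I am reading the Ruby correctly, here is example #2 as I would transliterate it into Java (my own transliteration, not code from the post):

// Bad: the caller asks for state and then acts on it.
class AlarmChecker {
    static void checkForOverheating(SystemMonitor monitor) {
        if (monitor.getTemperature() > 100) {
            monitor.soundAlarms();
        }
    }
}

// Good: the object is told to do the check and guards its own state,
// so no getter has to leak the temperature at all.
class SystemMonitor {
    private int temperature;

    void checkForOverheating() {
        if (temperature > 100) {
            soundAlarms();
        }
    }

    int getTemperature() { return temperature; }
    void soundAlarms() { /* ... */ }
}

If that reading is right, the tell version lets SystemMonitor stop exposing state entirely - which appears to be the encapsulation argument, the opposite of the free-function advice I was taught.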
* * * **Could someone enlighten me on the \"Tell, Don't Ask\" advantages and rationale?** I am not looking for which is better, but instead trying to understand the author's viewpoint."} {"_id": "157524", "title": "What is the significance of each paragraph of the GPL \"copying permission statement\"?", "text": "Part of the FSF's instructions for placing a program under the GPL is including the following \"copying permission statement\" at the top of your file, under the copyright notice: This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. I am wondering what the significance of each paragraph in this statement is. In particular, for a program that I am about to release under the GPL, I am considering omitting the second and third paragraphs to reduce the length of the statement, and I'm wondering what would be the negative consequences (if any) of doing so."} {"_id": "68426", "title": "Convincing a Client to Offer a RESTful Web Service instead of a SOAP Service?", "text": "### BACKGROUND: I develop custom WordPress plugins for my clients that they then distribute via the WordPress plugin repository. I'm increasingly running into clients who want my WordPress plugins to consume SOAP web services developed by their internal development teams _(and as an aside, thus far every one of these SOAP web services has been developed using ASP.NET)._ From my experience, especially within the realm of WordPress plugin development, interacting with RESTful web services is almost always trivial, and they just work. From my admittedly third-hand knowledge of actually consuming SOAP web services via WordPress plugins, especially ones that are widely distributed to mostly non-technical WordPress users, embedding a SOAP client is fraught with peril, as there are so many things that can cause a SOAP web service call to fail: wrong local SOAP stack, missing local SOAP stack, malformed service response, etc. etc. What I am finding is that many of the business people in decision-making positions within my _(prospective)_ clients have little-to-no knowledge of the tangible differences between RESTful web services and SOAP-based web services. To these people a web service is a web service; it's 6 of one, 1/2 dozen of the other. They tend to think _\"What's with all the fuss?\"_ Further, the ASP.NET developers at these clients, developers who have been immersed in the Visual Studio toolset, have been conditioned by Microsoft's excellent developer tools marketing to see SOAP as the easy way; just add Visual Studio and the SOAP web service works like magic! And it does, at least until you try to use some other stack to access the web service and/or until you are trying to get people who are not using Visual Studio to adopt the web service; then the picture is very different. When these developers hear me advocate that they implement a RESTful web service instead, the push back I get is one of two responses; they say: 1.
_\"Why go to all the effort of creating a RESTful web service when I've already created a SOAP web service for you to use? You are just creating more work for me and I have other things to do.\"_ 2. _\"There is no benefit to RESTful web services; SOAP is actually much better because I can create an object and then I can program it just like an object. Plus SOAP is used by enterprise developers and we are an enterprise development shop; REST is just not for serious use.\"_ As an aside I think one reason I get these responses is because ASP.NET developers often have little-to-no exposure to REST _(isn't this article really on the fringe for most ASP.NET developers?)_ I think they really don't know how little work it takes to create an HTTP `GET`-only RESTful web service once they already have all the code implemented for a SOAP web service. And I think this happens because Microsoft's approach is to give tools to developers so they don't feel the need to learn the details. Since Visual Studio claims to take care of so many things for developers why should a developer care to learn anything that Visual Studio claims to handle? I know that's what I thought when I used to code web sites for the Microsoft platform. It wasn't until I moved to PHP that I realized what HTTP headers were and that I realized the difference between a 301 and a 302 HTTP status code, and most importantly that I realized these concepts were both easy to understand and vitally important to understand if one wants to create a robust and effective site on the web. * * * ### MY QUESTION: **What I am asking is how do I counter these responses and get my prospective clients to consider creating a RESTful web service?** How can I get them to see the many benefits that using a RESTful web service can offer them? Also how can I get them to see the large potential downside of releasing a WordPress plugin that potentially incurs a large support cost? * * * ### NOTE: If you disagree with my premise that calling RESTful web services are preferable to calling SOAP web services from within a WordPress plugin then please understand that I'm asking for help from people who agree with my premise and _ideally I'm not looking to debate the premise._ However if you feel the need to argue then please do so in a respectful manner recognizing that we each have the right to our own opinions and that you might never be able to sway me to agree with yours. Which of course, should be okay."} {"_id": "142663", "title": "Queue in facebook games [looking for opponent and such]", "text": "I've seen many games on FB have queue which connects you to an opponent you should be playing against. So my question is: In which language can it be coded? Is it possible to do it in PHP/Javascript? Does anyone have any real/live example or tutorial how is this done? Thanks in advance!"} {"_id": "160293", "title": "Are there any theories or books about how to debug \"in general\"?", "text": "I read and studied a lot of computer science and engineering and I rarely or never seen a book about debugging or a theory how to debug (though I surely developed some debugging theories of my own). Are there any debugging theories and/or books? Why / why not? 
I can read about how to debug using gdb, and in fact some of the most useful info about development I ever got came from reading how to use gdb."} {"_id": "160290", "title": "How can we handle multiple instances of a method through a single class instance", "text": "How can we handle multiple instances of a method through a single class instance (single constructor call)? To elaborate, I have a handler class which has two methods, let's say `verifyUser(String JSON)` and `getUserInfo(String JSON)`. Assuming that there are many users accessing these methods, there will be multiple concurrent invocations of them. The handler class should be initialized once, i.e. multiple objects of this class are not permitted; rather, multiple method invocations are allowed. What will be the ideal solution for enabling concurrent method calls without corrupting data while executing?"} {"_id": "142667", "title": "Refactoring While Programming", "text": "When posed with a problem, particularly when it is complicated in nature, I try to take some time to think about the approach I am going to take to solve the problem. Despite this, what often happens is, as I am programming the solution, I start to think of details of the problem that I missed, and I adjust the code accordingly. What results is a mess of code that needs to be refactored. I want to \"refactor as I go,\" but while it sounds easy enough to do, I have a really hard time doing it. When the detail that I missed is small, it is tempting to make a small update to my design, rather than erase what I've already written and write it the way it is supposed to be. It sounds like a question with an obvious answer, but are there any techniques to use to better \"refactor as you go\"? I know that this is a good principle, but I fail with it time and time again."} {"_id": "211519", "title": "TFS Branching Advice", "text": "I am new to Branching and Merging but I have been tasked with making future development on an application possible while still allowing bug fixes to production. Usually I am the only developer on the application unless I am on leave. I have watched a Pluralsight video on Branching and done some forum/stack reading. I was hoping someone could take a look at my solution proposal and critique it. I am concerned I will cause more problems than solutions if I get this wrong. > As of Version 1.1.0.0 I have introduced a branching system for future development. Version 1.1.0.0 is our production branch. No changes should appear here except bug fixes. Version 1.2.0.0 is the next version and our development branch. > > After completing a development, the development branch will merge to the Application (trunk). The application will be deployed for testing. After sign off the development branch becomes production, and the previous version branch will be removed. A new branch will be created for the next version. > > For bug fixing, the bugs are fixed against the production branch and merged to the application, so when the development branch merges down it also obtains those fixes."} {"_id": "45733", "title": "How do I go about hosting Facebook apps that are picking up speed?", "text": "My situation is this. I coded in PHP and built a Facebook app. After 3 days it has 13,000 users. I have my own server at HostMonster. It is a regular plan costing me about $70 per year. It has unlimited bandwidth. I did not anticipate hosting apps or that it could pick up so many users. Already 1 GB of data was transferred in the last few days.
I am planning to build a few more apps (around 10-20) and reach at least a million users in total. Should I continue hosting on the same server or move to a VPS? I am a student and I don't have too much disposable income. So I want to move only if it is necessary. Right now it shows 1 GB/infinity in data transfer. Any help/suggestions highly appreciated."} {"_id": "193861", "title": "Interview for UX/UI Javascript", "text": "I am interviewing for a job where the requirements are for... UX practice with UI and Javascript expertise I am expecting questions perhaps about MVC, maybe some specific frameworks, like backbone, yui, or jQuery UI. Are there any good resources to prepare me for the interview? Any obvious things I should look into beforehand?"} {"_id": "211510", "title": "REST API and caching data", "text": "I have developed an application where you should be able to browse a product catalog (**READ ONLY**) even when network access is down - no wifi or 3g/lte etc. Clients (Windows, iOS, Android, etc.) consume the REST API. So far there is a REST API that grabs parts like categories with products, and another API call to get preferences etc. This adds logic at the client, like joining them together - something that developers don't like. Also, we need to add multiple pricebooks for each product. The current solution, demanded by the team, was to require network access for this operation and get products with the price from the pricebook. While the ideal solution would be to ask for exactly what is needed every time, this isn't a real solution, as it would degrade performance and would make offline usage impossible. So I believe that each client should probably just store the data in a simple local database and resync based on some versioning? Products/catalogs change about once per month."} {"_id": "211515", "title": "Returning view code in an API response", "text": "I have an API that returns a JSON-formatted array of users within a given pair of lat/long bounds. I'm using this data to plot a number of markers on a map, based on the location of each user. I also want to render an HTML list on each client request so that I can show the list of plotted users alongside the map. What is the best way to get the list HTML to the client? To me it seems like an incorrect solution to return the HTML for the user list _within_ the initial API call (response.html, or something), as this feels like I'm shoe-horning functionality into an otherwise clean API response. I also don't want to make two API calls (one for the initial data and one for the HTML), for obvious reasons (overhead). Finally, I don't want to generate the HTML client-side (in JavaScript), as I already have a class to do this for me server-side. What options does that leave me with? Thanks"} {"_id": "193869", "title": "Does this version of insertion sort have O(n) complexity for best case?", "text": "for ( i = 1 ; i <= N ; i++ ) { for ( j = 0 ; j < i ; j++ ) { if ( arr[j] > arr[i] ) { temp = arr[j] ; arr[j] = arr[i] ; for ( k = i ; k > j ; k-- ) arr[k] = arr[k - 1] ; arr[k + 1] = temp ; } } } Source: http://programminggeeks.com/c-code-for-insertion-sort/ If not, can it really be called insertion sort? This version of the sort originally appeared in a book by a reputed author."} {"_id": "243237", "title": "Declarative Transactions in Node.js", "text": "Back in the day, it was common to manage database transactions in Java by writing code that did it. Something like this: Transaction tx = session.startTransaction(); ...
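    // (The elided section between startTransaction() and the commit/rollback
    // below is the method's actual business logic; the surrounding ceremony,
    // repeated in every method, is exactly what declarative transaction
    // management is meant to remove.)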
try { tx.commit(); } catch (SomeException e){ tx.rollback(); } at the beginning and end of every method. This had some obvious problems - it's redundant, it hides the intent of what's happening, etc. So, along came annotation-driven transactions: @Transaction public SomeResultObj getResult(...){ ... } Is there any support for declarative transaction management in node.js?"} {"_id": "112996", "title": "Finding Speakers for a Conference", "text": "Heroku has offered to sponsor a conference run by my Ruby user group, and I am wondering if anyone has suggestions for finding, selecting, and managing speakers. Related Question: How do you ask or get asked to speak at a software conference?"} {"_id": "24302", "title": "Most Active Open Source C# Projects?", "text": "I am looking for open source projects being done in C# that are actively looking for developers and do not mind the person coming in from a C++ background. Any pointers appreciated."} {"_id": "168732", "title": "How to balance programming projects between feasibility and usefulness", "text": "I've become fairly competent as a programmer, but I would not say I am a master. I work independently, mostly as a hobby, although I have done some freelance PHP work. I tend to find myself dabbling in a lot of things: the Java Android SDK, Arduino, game scripting, Lua, etc. I've reached the point where I want to start a real software project, but cannot think of a small enough project that allows me enough practice, while still being able to publish a decent piece of software in a reasonable amount of time, and build up a portfolio. So the question is, what is the best way to come up with a small project that is still useful, as a starting point to a software development career?"} {"_id": "160128", "title": "How to inspire an intern with programming?", "text": "The situation is this - we took an intern for the summer with the idea that if he caught up over the summer, we would keep him on as a part-time junior developer. We took him after his first year in university, so his knowledge was way too low for him to be involved in real projects (actually he had programmed only at university), so my task was to push as much stuff as I could to have him ready to join real projects in September. Since we are a remote development team, we gave him the possibility of doing his internship remotely, with flexible hours. So two months have passed and I'm not impressed with his progress. I gave him several tasks to implement, access to learning resources, recommended starting paths and so on, agreed that he would report every couple of days, we had pair sessions from time to time, reviewed his code together, etc. Anyway, it looks like he spent less time writing code than I expected, and before I say that he didn't perform well enough to join our team, I was wondering whether it is my fault - maybe I was pushing too much stuff that is important in real-life projects (like unit testing, structuring code, database stuff, etc.) and not enough fun stuff that would hook him on programming, and that is the reason he spent less time than I hoped. So I have ~20 days left and I can use them to inspire him with programming; the question is, with what and how?"} {"_id": "160129", "title": "Is there a programming language where 1/6 behaves the same as 1.0/6.0?", "text": "While I was programming in C++ some days ago, I made this mistake (one that I have a history of making!).
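For reference, the behavior in question is easy to demonstrate; in Java, for example:

    class DivisionDemo {
        public static void main(String[] args) {
            System.out.println(1 / 6);     // both operands are ints: truncating division, prints 0
            System.out.println(1.0 / 6.0); // floating-point division, prints 0.16666666666666666
            System.out.println(1 / 6.0);   // one double operand promotes the int, prints 0.16666666666666666
        }
    }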
In one part of my code, I had 1/6 and I was expecting it to be 0.16666666666, which is not the case. As you all know the result is 0 - C, C++, Java, Python, all behave the same. I posted it on my Facebook page and now there is a debate about whether there is a programming language where `1/6` behaves the same as `1.0/6.0`."} {"_id": "168736", "title": "Test case as a function or test case as a class", "text": "I am having a design problem in test automation:- Requirements - Need to test different servers (using a unix console, not a GUI) through an automation framework. Tests which I'm going to run - Unit, System, Integration Question: While designing a test case, I am thinking that a Test Case should be a part of a test suite (a test suite is a class), just as we have in Python's pyunit framework. But should we keep test cases as functions for a scalable automation framework, or should we keep test cases as separate classes (each having its own setup, run and teardown methods)? From an automation perspective, which is more scalable and maintainable: a test case as a class, or as a function?"} {"_id": "164635", "title": "Does the deprecation of mysql_* functions in PHP carry over to other databases (MSSQL)?", "text": "### I'm not talking about MySQL, I'm talking about Microsoft SQL Server I've been aware of PDO for quite some time now; standard mysql_* functions are dangerous and should be avoided. http://php.net/manual/en/function.mysql-connect.php But what about the mssql_* functions in PHP? They are, for most purposes, identical sets of functions, but the PHP page describing mssql_* carries no warning of deprecation. http://us.php.net/manual/en/function.mssql-connect.php There are PDO drivers available for MSSQL, but they aren't quite as readily available or as widely used as the MySQL drivers. Ideally, it looks to me like I should get them working and move from mssql_* to PDO like I have with MySQL, but is it as big of a priority? Is there some hidden safety to MSSQL that means it's exempt from all of the mysql_* hatred as of late? Or is its obscurity as a backend the only reason there hasn't been more PDO encouragement?"} {"_id": "164636", "title": "What are some general guidelines for setting up an iOS project I will want to personally publish but sell in the future?", "text": "I have an idea for a personal iOS project that I would like to write and release to the iOS store. I'm the type of developer who enjoys developing and publishing. I want to write quality software and take care of my customers. Assuming that I wrote an application that had reasonable success, there is a fair chance that I would want to sell the ownership rights of the app to another party, and I'd use the proceeds to develop my next personal project, which, in turn, I'd probably want to sell in the future. With that said, what are some general guidelines for creating, making and publishing an iOS project that I will eventually want to transfer to another company/developer? I know this is a bit of a broad question, but I request that the given advice be a general list of tips, suggestions and pitfalls to avoid. If any particular bullet point on your list needs more explanation, I'll either search for the answer or post a new question specific to that requirement. Thank you!
**Note Regarding this Question** I am posting this question on Programmers.SO because I think that this is an issue of software architecting, seeking advice for setting up a new application project and publishing a project to the Apple iOS store - all within the requirements for questions on this site. **UPDATE** - 2012-09-14 I would further like to request that if anyone is aware of a good article on a case study of such a transition, I would consider that a good answer as well. Such an article may not have all the answers, but it could outline quite a few of the pitfalls which should be avoided. Apps are sold to other holding companies on a semi-frequent basis. Often, it happens when a small app becomes a runaway success and a bigger company wants to purchase the ownership and rights. I've had difficulty finding any information on this topic (probably due to my poor googling skills). Most keywords that I tend to search also relate to promoting an app so that people will download it. Thanks for your insight."} {"_id": "79134", "title": "What are the three most important questions you should ask your team about your performance as their team leader?", "text": "I'm approaching the 1-year mark as the leader of a small development team (4 members, including myself) inside of a small software company. I'd like to give my team the opportunity to evaluate how I am doing as their team leader who is also a developer on the team. I find it's hard to get good feedback with an open-ended 'How am I doing?' question, so what specific questions are the most important to ask? Ideally I'd like to be able to provide 3 simple questions that my team would be able to answer. Which are the most important parts that you would like to give your team leader feedback on? My initial thought was to allow my team to answer these questions anonymously - is this a good idea?"} {"_id": "164632", "title": "statistics for checking imported data?", "text": "I'm working on a data migration of several hundred nodes from a Drupal 6 to a Drupal 7 site. I've got the data exported to the new site and I want to check it. Harkening back to my statistics classes, I recall that there is some way to figure out a random sample of nodes to check to give me some percentage of confidence that the whole process was correct. Can anyone enlighten me as to this practical application of statistics? For any given number of units, how big must the sample be to have a given confidence interval?"} {"_id": "237565", "title": "Is it legal to reuse code from a \"programming cookbook\"?", "text": "I have just found a perfect-fit solution for a problem I was having in O'Reilly's _Perl Cookbook._ The problem is, the book is copyrighted, and I could not find anything that looked like a license to reuse the code. On the other hand, I suppose \"programming cookbooks\" are more or less explicitly made with the intent that the reader will reuse some of the code that is provided. Legally, can I just reuse the code, or do I have to rewrite it? If this is relevant, my intent was to release the project I am working on under the AL/GPL dual license, which is the norm for Perl. I suppose this question could apply equally well to any programming cookbook with no explicit license."} {"_id": "237561", "title": "Naming test methods in Java", "text": "Over at codereview a comment hinted that using snake_case to name test methods is a good idea. This contradicted my views, so I did some research, and there do seem to be a lot of examples that actually use snake_case for this.
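To make the candidate styles concrete before weighing them, here is the same JUnit 5 test written under each convention (the `Account` class is hypothetical, assumed purely for illustration):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    class AccountTest {
        // classic camelCase
        @Test
        void testRejectsNegativeDeposit() {
            assertThrows(IllegalArgumentException.class, () -> new Account().deposit(-1));
        }

        // combined style: unit_Scenario_Expectation
        @Test
        void deposit_WhenAmountIsNegative_ThrowsException() {
            assertThrows(IllegalArgumentException.class, () -> new Account().deposit(-1));
        }

        // snake_case, reads like a sentence in generated reports
        @Test
        void throws_when_the_deposit_amount_is_negative() {
            assertThrows(IllegalArgumentException.class, () -> new Account().deposit(-1));
        }
    }

    class Account {
        void deposit(int amount) {
            if (amount < 0) throw new IllegalArgumentException();
        }
    }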
Myself, I use classic camelCase names like `testXyz` for unit tests or a BDD approach `givenXyzWhenXyzThenXyz` for integration tests. There is also a third option that combines both ways, e.g. `useCase_WhenSomething_ThenAssertion`. # _My question is_: What are good arguments for either using snake_case or camelCase to name test methods? I'd like to start this off with a few arguments from my experience and work: ## In Favor of camelCase * It follows the general Java coding conventions of using camelCase. * It is _consistent_ with production code. * I _personally_ find camelCase no harder to read than snake_case - I'm used to descriptive names from production code anyway. * IDE support suffers when using snake_case (at least in Eclipse). ## In Favor of snake_case * Some frameworks can generate reports from tests, and using snake_case allows such frameworks to create correct sentences from the test names. ## In Favor of a combination of both * ? What are other arguments? How does the pattern in which the test methods are named affect the arguments? Little disclaimer: Of course, ultimately it comes down to team conventions. However, it might be worth discussing pros and cons for certain conventions."} {"_id": "13010", "title": "Do you write bad code when under pressure?", "text": "When you are under pressure, the deadline is approaching, and a manager is breathing down your neck, do you find yourself starting to write bad code? Do TDD and best practices fall by the wayside in order to get things done? What do you do in situations like that? What were your experiences?"} {"_id": "66858", "title": "Avoid having an initialization method", "text": "I have this existing code where they have a class and an initialization method in that class. It is expected that once the object of the class is created, they need to call initialize on it. **Reason why the initialize method exists** The object gets created early to have a global scope, and then the initialize method gets called later, after loading a DLL which it depends on. **Issue with having the initialize** The class now has this bool isInitialized which needs to be checked in every method before it proceeds, and returns an error if it is not initialized. Simply put, it is a big pain. **One possible solution** Initialize in the constructor. Have just a pointer to the object in the global scope. Create the actual object after the DLL is loaded. **Issue with the above solution** Anyone who creates an object of this class needs to know that it can be created only after the DLL is loaded, or else it will fail. Is this acceptable?"} {"_id": "237568", "title": "What does the author of Code Complete mean when talking about hiding global data?", "text": "In section 6.4 of Code Complete, 2nd Edition, there is a paragraph about hiding global data. What I am particularly interested in is that McConnell (the author of the book) gives examples of the benefits of hiding global data. There is one example that I cannot understand. I don't have the English version of the book, so I'll try to translate the text. > **Hiding global data.** (...) You can change the structure of the data without modifying the program. What does McConnell mean by that? Is he talking about changing global data? If so, why wouldn't you have to modify your program when you are using methods to retrieve that data? Or maybe he is referring to something else here? I would greatly appreciate it if someone could clear up my confusion.
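One common reading of that sentence, sketched below in Java with hypothetical names: if callers can only reach the data through access routines, the representation behind those routines can change without any caller being modified.

    import java.util.HashMap;
    import java.util.Map;

    final class Settings {
        // Today a HashMap; tomorrow an array, a file, or a database row.
        // Because callers never touch this field directly, swapping the
        // structure does not force changes anywhere else in the program.
        private static final Map<String, String> values = new HashMap<>();

        static String get(String key) { return values.get(key); }

        static void set(String key, String value) { values.put(key, value); }
    }

    // Caller code stays the same regardless of the representation:
    //   Settings.set(\"theme\", \"dark\");
    //   String theme = Settings.get(\"theme\");

That, at least, is one plausible reading of \"change the structure of the data without modifying the program\".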
If you could also provide an example, it would be great (examples are awesome, you know)."} {"_id": "48571", "title": "most concise reference for organizing code for sharing?", "text": "My colleagues and I are a bunch of scientists (i.e., untrained in programming) hacking code for data processing. Is there a concise and simple reference that documents idioms, conventions, or guidelines for organizing code? For instance: 1. conventions for using global variables 2. documenting code that is distributed to others (e.g., at least listing all functions contained within a code document at the top of the file) and so on? I'm aware that this is a big field and can get into security, unit testing, refactoring, and all these issues, but hopefully there is some primer out there that covers the bare minimum for someone with (extremely) little programming experience? **Edit:** Thanks all -- I am aware there are language-specific guidelines for (and debates over) the use of parentheses, camelcasing (or not) variable names, etc., but I was hoping for basic conventions that apply to most languages. **Edit2:** To narrow it down, these are mostly imperative or procedural languages (e.g., Fortran, but one in particular that a lot of my colleagues use is a DSL called IGOR Pro by WaveMetrics, if anyone has heard of that one)."} {"_id": "178149", "title": "When can I publish a software tool written at work?", "text": "I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one-sentence question is, **When is it okay (legally/ethically) to open-source a software tool originally written by you for work at work? What if you have expanded the original source significantly during off-hours?** **Follow-up:** Suppose I write the whole thing at home on my own time then simply use it at work; does that change things drastically? **Follow-up 2** : Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own) - I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save themselves some time. Also, there's another issue at stake. If I write the library for a very simple, generic thing (like HTML tables in Javascript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new fresh rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side-note."} {"_id": "178147", "title": "Database Schema Usage", "text": "I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice.
Just to give a bit of background, my team has recently shrunk to 2 people and we have just been merged with another 6-person team. My team had set up a SQL Server environment running off a desktop, backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team. Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which broadly speaking relates to specific applications, although there is a lot of overlap. My new team has come from an Access background and has implemented their tables within dbo with a 3-letter prefix followed by \"_tbl\", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName. Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both coincidentally migrating away from Access environments) and the new team have said they're willing to consider adopting our approach if we can explain how this would be beneficial. We are not anticipating handling table permissions at schema level, as we will be using application-level logins, and the 7-character prefixes are not an issue as we're using LINQ, so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML). So, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?"} {"_id": "178144", "title": "Jenkins Parameterized Trigger + Copy Artifact", "text": "I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: * The Windows portion and Linux portion are set up as separate Jenkins projects. * The Windows project is parameterized, taking the Subversion tag to build and release. * As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin), then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a \"build selector\" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?"} {"_id": "178143", "title": "What is the basic process and tools needed for crawling a source code repository for the purpose of data mining?", "text": "This is all with respect to the Microsoft project CodeBook. There is a huge amount of code in the repository: many classes, a call hierarchy of functions, test cases, etc.
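Purely as a toy illustration of what such an index might involve (this is not how CodeBook actually works), a crawler could walk a checkout and record which files mention which identifiers; real systems parse ASTs and track call edges rather than using regexes:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    public class ToyIndexer {
        // identifier -> files that mention it
        static final Map<String, Set<String>> index = new HashMap<>();
        static final Pattern IDENT = Pattern.compile(\"[A-Za-z_][A-Za-z0-9_]*\");

        public static void main(String[] args) throws IOException {
            try (Stream<Path> paths = Files.walk(Path.of(args[0]))) {
                paths.filter(p -> p.toString().endsWith(\".java\"))
                     .forEach(ToyIndexer::scan);
            }
            index.forEach((id, files) -> System.out.println(id + \" -> \" + files));
        }

        static void scan(Path file) {
            try {
                Matcher m = IDENT.matcher(Files.readString(file));
                while (m.find()) {
                    index.computeIfAbsent(m.group(), k -> new HashSet<>())
                         .add(file.toString());
                }
            } catch (IOException e) {
                // skip unreadable files in this sketch
            }
        }
    }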
I am interested in knowing how this crawling process takes place and how this data is sorted."} {"_id": "54132", "title": "Solo vs Team development and the consequences", "text": "I've been programming for a while in different languages. I never really studied programming at school nor worked on a team of more than two people (me included). Still, I've been a professional developer for over three years. Last year, I took over my first C# project and it ended up being fine. I can't help but think that because I learned and worked alone I must be missing some concepts/hints/edge. For those who've been solo developers before being part of a team, can you share your experience? Did you realize you were missing something? Did you find it hard? Did you learn faster afterwards?"} {"_id": "54133", "title": "Sensitive Data Storage - Best Practices", "text": "I recently started working on a personal project where I was connecting to a database using Java. This got me thinking. I have to provide the login information for a database account on the DB server in order to access the database. But if I hard-code it in, then it would be possible for someone to decompile the program and extract that login info. If I store it in an external setup file, then the same problem exists, only it would be even easier for them to get it. I could encrypt the data before storing it in either place, but it seems like that's not really fail-safe either, and I'm no encryption expert by any means. So what are some best practices for storing sensitive setup data for a program?"} {"_id": "256160", "title": "How to signal that a method chain should not continue?", "text": "When doing method chaining, you have methods that return the object, allowing you to continue the chain. However, you need to get values out of the object somehow. Usually I've added a method that returns a value at the end, but this complicates matters if you add to the chain, as that return method may not be valid anymore. The way I view it, each time you add a method onto a chain you are refining your result until you get your desired answer in the last method call. Having a final output method creates a restriction in that it needs to know about the last operation to know how to display the result. Maybe you have a method that results in a string and one that results in an array; your output method has to deal with both of those cases. What if you add a method that stores your string result somewhere new? Now you have to update the output method to handle that case too. Is there a way of letting a method know it's the last in the chain, so it should output its result?"} {"_id": "256163", "title": "Are EventHandler and IObservable interchangeable?", "text": "I have an object which will periodically raise an event based on an action performed in an application. This will be heard by any listener(s) and acted upon accordingly. I do not wish to use a custom type for this and would like to make use of either `EventHandler` or `IObservable` to manage the pub/sub mechanism I am looking to put in place. I have had a play with both and they both do what I require. I have read through the MSDN Observer Design Pattern and MSDN Events Programming Guide; however, I remain uncertain as to whether one mechanism is more appropriate than the other. Is one more suited to my scenario than the other, or are both suitable and it's merely down to personal choice? **Edit 1** My pub/sub requirements are not at the UI layer at this point and are lower down.
My publisher is monitoring network activity and raising an event based on certain network events. The subscriber is responsible for listening for raised events and performing an associated action. The UI is not involved at this point."} {"_id": "91758", "title": "Debugging Facts and Statistics", "text": "I'm trying to find research answering the question: \"How much time do developers spend on development vs debugging?\". I found several interesting links on the Net but they are a little too old. This 2002 RTI study states that software bugs cost the U.S. economy $59.6 billion annually. And in this book Beizer (1990) reports that of the labor expended to develop a working program, 50% is typically spent on testing and debugging activities. I've seen this number put at 80% but can't find the link."} {"_id": "256168", "title": "Audio editing sdk", "text": "What's the best way to create audio editing software (edit, copy, paste audio - something like Sound Forge or Audacity)? My programming language is Delphi. Are there any good components or controls available that I can buy and use in my project?"} {"_id": "106815", "title": "Difference between Idiom and Design Pattern?", "text": "What is the difference between an idiom and a design pattern? It seems that these terminologies overlap somewhere; where exactly, I don't know. Are they interchangeable? When should I use what? Here is a list of C++ Idioms. Can I call them design patterns? Wikipedia defines a > Programming Idiom as a low-level Design Pattern What does that mean? What does _\"low-level\"_ mean here? This question is inspired by another question: Are some design patterns language dependent?"} {"_id": "106810", "title": "Planning to write a research paper - Tips or resources?", "text": "I am a junior in high school and I've developed an optimization system for functional languages that could be very powerful. My computer science professors at Boise State University believe I should write a paper and take my idea as far as I can. I've formalized my method, but I haven't had an expert look at it rigorously. I'm also being careful to retain intellectual property safety, so my options are a little limited. Where should I start? I have implemented the optimizer in Haskell (it doesn't actually compile and generate code, but it demonstrates the concept). I'm thinking that I need to finish the research paper focusing on the system itself and see if I can publish it in a journal. The doctor specializing in compiler optimizations at BSU said I might be able to present at Apple, because I was planning to generate code with LLVM, for which they want to encourage interesting projects that demonstrate its versatility. This sounds like an exciting prospect, but I'm guessing writing a research paper comes before that. I feel slightly overwhelmed. I'm not sure if I should implement the compiler further to lend my paper credibility, or see if the idea takes off and gather a small community to fully implement it. I'd write the research paper (I have the methods section done), but I'm no expert at research papers, and I might need someone who knows about compiling functional languages that I can trust to look at it. Also, I'm worried that I'll be discriminated against because of my age (I'm 16 years old). What are some good resources on writing research papers, particularly computer science ones? Where should I start when I have it written?
How would I develop interest and a community to implement and expand my idea?"} {"_id": "91752", "title": "What do you use to organize your team knowledge?", "text": "Last year, three good old friends of mine and I founded a small web/mobile development team. Things are going pretty well. We're learning a lot, and new people are joining the group. Keeping knowledge always updated and in sync is vital for us. Long email threads are simply not the way to go for us: too dispersive and confusing, and hard to retrieve after a while. How does your team manage and organize common knowledge? How do you collect and share useful resources (articles, links, libraries, etc.) inside your team? **Update:** Thanks for the feedback. More than using a wiki to share common team procedures or information, I'd like to share external links, articles, code libraries, and be able to comment on them easily within my team. I was particularly interested in knowing if you're aware of any way/webservice to share a reading list with a team. I mean, something like Readitlater/Instapaper, but for teams, maybe with some stats available, like \"# of coworkers who read it\"."} {"_id": "91750", "title": "Confusion about the first day of a burn down chart", "text": "I built a simple burn down chart with Google Spreadsheet, as shown below. ![enter image description here](http://i.stack.imgur.com/IfXtt.png) The project consists of 99 tasks. And the user finished 5 tasks on the first day. So the \"Actual Tasks Remaining\" point is below the \"Ideal Tasks Remaining\" point on the first day, which is different from the burn down chart shown on Wikipedia at http://en.wikipedia.org/wiki/Burn_down_chart. I'm wondering whether I should add a date before the first date so that the \"Actual Tasks Remaining\" point will overlap the \"Ideal Tasks Remaining\" point on the first day."} {"_id": "156696", "title": "Is it possible to auto-generate an annotated POJO from a table?", "text": "I wonder whether it is possible, or whether there is a tool, to generate annotated POJOs from a table. To make it clear: for example, a Person table has fields like id, name, surname, etc., and I want to generate a POJO named Person with the mappings made with annotations."} {"_id": "156695", "title": "What's a better name for this many-to-many table?", "text": "Part of one of my applications has `contracts` and `contract_types` tables, wherein a type may have many contracts but a contract may only be of one type. Now a new wrinkle has been introduced: a contract may change type over time (although it can, thankfully, still only be of one type at any one time). The simplest solution appears to be to introduce a new table with columns something like this: contract_id contract_type_id from_date to_date But what to call the new table? `contract_type_allocations` is the best I can do so far and I can't say I'm impressed with myself. Suggestions gratefully received."} {"_id": "252679", "title": "Should I parse XML on the server or provide a proxy and let the browser parse it?", "text": "I need to interface with a 3rd party API. With this API I make a GET request from within the end user's browser and receive an XML response. This data is to be used in a browser-based application where the user can search through it, use it to make decisions, etc. The main issue is that most browsers have locked down cross-domain XML use, so I can't simply get the XML from the API. The overall data, though, is basically broken into two sets. 1.
The first set of data is public and only needs to be updated every so often, so it can be cached for all users on the server side, lightening the traffic considerably. 2. The second set of data is private and individual to each user. This data is also updated in the API more frequently. This makes caching much less effective. For scalability reasons I would like to keep the server's load as small as possible. I see two options before me: 1. Provide a proxy that can be used to route XML requests to the 3rd party server, directly back and forth between the client and the 3rd party API. 2. Have the server do the conversion from XML to JSON and strip out unnecessary information. This essentially means making a new API for our server, which translates requests to and from the 3rd party API. What would be the best way to provide the data to the user? (It does not have to be one of the two options.)"} {"_id": "143134", "title": "Why is my class worse than the hierarchy of classes in the book (beginner OOP)?", "text": "I am reading _PHP Objects, Patterns, and Practice_. The author is trying to model a lesson in a college. The goal is to output the lesson type (lecture or seminar), and the charges for the lesson depending on whether it is an hourly or fixed price lesson. So the output should be Lesson charge 20. Charge type: hourly rate. Lesson type: seminar. Lesson charge 30. Charge type: fixed rate. Lesson type: lecture. when the input is as follows: $lessons[] = new Lesson('hourly rate', 4, 'seminar'); $lessons[] = new Lesson('fixed rate', null, 'lecture'); I wrote this: class Lesson { private $chargeType; private $duration; private $lessonType; public function __construct($chargeType, $duration, $lessonType) { $this->chargeType = $chargeType; $this->duration = $duration; $this->lessonType = $lessonType; } public function getChargeType() { return $this->chargeType; } public function getLessonType() { return $this->lessonType; } public function cost() { if($this->chargeType == 'fixed rate') { return \"30\"; } else { return $this->duration * 5; } } } $lessons[] = new Lesson('hourly rate', 4, 'seminar'); $lessons[] = new Lesson('fixed rate', null, 'lecture'); foreach($lessons as $lesson) { print \"Lesson charge {$lesson->cost()}.\"; print \" Charge type: {$lesson->getChargeType()}.\"; print \" Lesson type: {$lesson->getLessonType()}.\"; print \"<br/>\"; } But according to the book, I am wrong (I am pretty sure I am, too). Instead, the author gave a large hierarchy of classes as the solution. In a previous chapter, the author stated the following 'four signposts' as the times when I should consider changing my class structure: * Code duplication * The class that knew too much about its context * The jack of all trades - classes that try to do many things * Conditional statements The only problem I can see is conditional statements, and that too in a vague manner - so why refactor this? What problems do you think might arise in the future that I have not foreseen? **Update** : I forgot to mention - this is the class structure the author has provided as a solution - the strategy pattern: ![The strategy pattern](http://i.stack.imgur.com/RJbQZ.png)"} {"_id": "188274", "title": "How does a symbol table relate to a namespace?", "text": "The official tutorial uses the term _symbol table_ in a few places where I would expect the term _namespace_. 1. Defining functions > The execution of a function introduces a new symbol table used for the local variables of the function. More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. 2. More on modules > Each module has its own private symbol table, which is used as the global symbol table by all functions defined in the module. Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user's global variables. I found Eli Bendersky's blog where he quotes the symtable module: > Symbol tables are generated by the compiler from AST just before bytecode is generated. The symbol table is responsible for calculating the scope of every identifier in the code. So it seems like a symbol table precedes a namespace. Yet another quote, from the first source, leads me to believe they also exist at the same time. > The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). Is a symbol table involved with the creation of a namespace? Does a symbol table \"contain\" a namespace or simply information that a namespace contains? In short, how does a symbol table relate to a namespace?"} {"_id": "23548", "title": "How do I explain to non-programmers what .NET is?", "text": "I don't work at a software company, and I'm one of a small handful of people in the company who know anything about programming. I spend a lot of time automating other programs that are used in the office through public APIs, and I've also created a few stand-alone applications. I work almost entirely in C#.NET, as every application we use in the office seems to have some form of .NET API. I've had a few people here ask me about learning \"how to program\", and where they should start. I think it makes a lot more sense to learn a .NET language, as nearly all the programs they would want to automate have a .NET API, and it sounds like VBA is on its way out and being replaced by VSTA.
However, I'm trying to figure out how to explain what .NET is, and why they should learn it, to someone who doesn't know anything about programming. It's not really a language, as there are a number of languages that are considered .NET languages. Plus, I think there is a distinction between \".NET\" and \"the .NET Framework\", as the latter is more about the libraries provided by Microsoft."} {"_id": "51505", "title": "Is a designer supposed to write CSS code?", "text": "I'm wondering if CSS creation is supposed to be the job of the designer or the programmer. I'm not talking here about really complex CSS layout; I'm referring more to skinning. So is this supposed to be the responsibility of one or the other?"} {"_id": "23542", "title": "How do I go about writing a programming language specification?", "text": "I really enjoy programming language design. Sometimes I think my language projects and their potential users would benefit from a comprehensive standards document. I've looked at many language standards, ranging from the very formal (C++) to the rather informal (ECMAScript), but I can't really get a handle on how I should break things down and organise such a document, even though I think I'm pretty good at technical writing in general. Should I write it like a long tutorial, or more like a formal math paper? How do I keep it up to date if I'm developing it alongside a reference implementation? Should I just give up and treat the implementation and documentation as the de facto standard? Further, is there really any significant benefit to having a standard? Does _requiring_ a standard mean that the language is needlessly complex?"} {"_id": "226567", "title": "Finding most Important Node(s) in a Directed Graph", "text": "I have a large (≈ 20 million nodes) directed graph with in-edges & out-edges. I want to figure out which parts of the graph deserve the most attention. Often most of the graph is boring, or at least it is already well understood. The way I am defining \"attention\" is by the concept of \"connectedness\", i.e. how can I find the most connected node(s) in the graph? In what follows, one can assume that nodes by themselves have no score, and the edges have no weight - they are either connected or not. This website suggests some pretty complicated procedures like n-dimensional space, eigenvectors, graph centrality concepts, PageRank, etc. Is this problem that complex? Can I not do a simple breadth-first traversal of the entire graph where at each node I figure out a way to find the number of in-edges? The node with the most in-edges is the most important node in the graph. Am I missing something here?"} {"_id": "193135", "title": "Implementing a new coding standard to an existing application", "text": "Recently we have had some turnover in the shop I work in; because of this, comments in our source code were made hastily and explained very little. We have started working on the department's first draft of coding standards, and one of the bigger questions is: should we implement this standard across the existing source code? Since this is only the first draft of our coding standards (done to promote flexibility and future considerations), should we be implementing it on new features and during maintenance? **Update:** I would like to say thank you for all of the answers and comments that were received on this topic.
Our shop has started to move forward with developing a draft for our coding standards and we have taken bits and pieces from all of the comments and answers to work with throughout our process. Thank you for your time and experience on this matter."} {"_id": "193134", "title": "Splitting user stories into smaller stories", "text": "I've been reading about various techniques for splitting large user stories in helpful ways, such as by user workflow through the system, etc. What I'm struggling with is how to word these smaller stories if all they achieve is facilitating the next step in the process and not delivering the application's main benefit to the user. For example, if my new system is split down into 3 smaller stories along the lines of: 1. Create a new account online 2. Create certain entities against my new online account 3. Have my mobile device query these entities against my account and act on them The system only really provides useful functionality to the end user when all stories are complete. So, following the traditional \"As a [User] I would like [Functionality] so I can [Benefit]\" format, the benefit of the first and second stories is simply facilitating subsequent stories, not really providing the user with the main piece of functionality (the epic). Is this the correct way to do this?"} {"_id": "161657", "title": "What programming skills does someone in QA need to work effectively in extreme programming projects?", "text": "Well, the title really says it all, but to elaborate a bit: can you take a random, typically effective QA department and have them learn to work in an XP environment (with a learning curve to pick up the XP workflow, of course), or would they need more programming skills to be effective? If so, what would they need to know?"} {"_id": "161651", "title": "How to share methods and properties between custom web controls", "text": "I'm building some custom web controls in .NET using C#. The controls inherit from the standard web controls and add additional properties and functionality (e.g. I'm creating an 'extendedTextBox' and I'm adding a 'required' property, which if set to 'true' will add a .NET required field validator to the control automatically). I'm doing this for a number of web controls (e.g. radioButtonList, textArea). They share some common properties and methods; for example, I have an AddRequiredFieldValidator method that uses some of the extended properties I've added. I'd like to share the common properties and methods. I've tried adding the methods to a separate class as extension methods for a web control.
To achieve this I've implemented an interface that defines the additional shared properties, and am using it like this: public interface IExtendendControl { string RequiredMessage { get; set; } string ValidatorCssClass { get; set; } bool ClientScript { get; set; } RequiredFieldValidator rfv { get; set; } } public static class ExtendedControlExtensions : IExtendendControl { public static void AddRequiredFieldValidator(this IExtendendControl control) { control.rfv = new RequiredFieldValidator(); control.rfv.ErrorMessage = control.RequiredMessage; ConfigureAndAddValidator(control, control.rfv); } public static void ConfigureAndAddValidator(this IExtendendControl control, BaseValidator validator) { validator.ControlToValidate = control.ID; validator.Display = ValidatorDisplay.Dynamic; validator.CssClass = \"validationMessage \"; validator.CssClass += control.ValidatorCssClass; validator.EnableClientScript = control.ClientScript; control.Controls.Add(validator); } } Trouble is that the 'ConfigureAndAddValidator' method now doesn't know anything about the 'ID' or 'ClientScript' properties of the control since 'IExtendedControl' only defines the custom properties, not the standard properties of a web control. So I tried adding a base class that inherits from WebControl and implements the interface, like this: public interface IExtendendControl { string RequiredMessage { get; set; } string ValidatorCssClass { get; set; } bool ClientScript { get; set; } RequiredFieldValidator rfv { get; set; } } public class BaseExtendedControl : WebControl, IExtendendControl { public string RequiredMessage { get; set; } public string ValidatorCssClass { get; set; } public bool ClientScript { get { return ClientScript = true; } set { } } public RequiredFieldValidator rfv { get { return rfv = new RequiredFieldValidator(); } set { } } } public static class ExtendedControlHelper : IExtendendControl { public static void AddRequiredFieldValidator(this BaseExtendedControl control) { BaseExtendedControl extendedControl = (BaseExtendedControl)control; extendedControl.rfv = new RequiredFieldValidator(); extendedControl.rfv.ErrorMessage = extendedControl.RequiredMessage; ConfigureAndAddValidator(control, extendedControl.rfv); } public static void ConfigureAndAddValidator(this BaseExtendedControl control, BaseValidator validator) { validator.ControlToValidate = control.ID; validator.Display = ValidatorDisplay.Dynamic; validator.CssClass = \"validationMessage \"; validator.CssClass += control.ValidatorCssClass; validator.EnableClientScript = control.ClientScript; control.Controls.Add(validator); } } The trouble now is that in my extendedTextBox class I can't cast my extendedTextBox as a 'BaseExtendedControl' to use the extension methods, as the extendedTextBox inherits from the standard TextBox class like this: public class ExtendedTextBox : TextBox, IExtendendControl so there's no common base to cast ExtendedTextBox as 'BaseExtendedControl' as it doesn't inherit from it. I also can't just pass the extended web control objects as a parameter into the shared methods as the extended controls are of different types (e.g. they inherit from TextBox, RadioButtonList and so on). If I specify the expected type being passed into 'standard' methods as 'WebControl' it doesn't work as the extended controls have the additional properties. 
As I can't use true multiple inheritance in C#, how would I design this to be able to share the methods and properties?"} {"_id": "97970", "title": "When not to reuse software?", "text": "I'm working on an application that had basic requirements for authentication in the first version (i.e. think single administrator login), and now I have a requirement to extend this to allow for different users, roles, and permissions. It has been suggested that I attempt to integrate with an existing product we have that does this sort of thing as a separate server for basic authentication, or also extending to second-factor authentication. The application I'm working on is a small, downloadable application that is supposed to be easy to set up and have running. The other product is large and 'enterprisy', requiring manual setup, which I could try to do automatically for what I need (maybe). The big problem is that when described as a feature set of what we'd like to do in my product, it maps up perfectly with what the other product does, but it's not in a form that I can use. Management wants integration with the other product, because it seems intuitive that we could leverage that, but the other product isn't in a form that I could use, i.e. thinking of it in terms of MVC, I don't think there's a way to separate the view from the model. The other thing I'm concerned about, which has happened in the past, is that if I'm asked to integrate with this, I'm going to be the one doing all the integration work, and if something cannot be done from the other product, then I'm just out of luck. Integration with my product is secondary to anything the other team would want, and my team isn't a 'customer', so there's little motivation to get them to fix or add stuff for us. **So, in summary, my question is:** I don't want to reinvent the wheel, but should I attempt to force a square peg into a round hole by trying to use a product that does what I need, but not in a form that's easy to reuse? edit: I'm using Java, so Spring Security is an option."} {"_id": "161652", "title": "What's a better way of designing this class?", "text": "Currently I have some code like this: OntologyGenerator generator = new OntologyGenerator(); generator.AddOntologyHeader(\"Testing\"); generator.AddClassDeclaration(owlBuilder); generator.AddSubClass(owlBuilder); generator.AddAnnotationAssertions(owlBuilder); where that owlBuilder param you see being passed has collections of objects like this: public class OwlLBuilder: IOwlLBuilder { private ICollection owlClasses = new Collection(); private ICollection owlRelations = new Collection(); } so for example when I say generator.AddClassDeclaration(owlBuilder); it will loop through the owlClasses collection of that owlBuilder param and do some stuff to it... I feel it is an ugly design. Do you have any better design suggestions, ideally with a code sample, so I can get the big picture of what I should do in my head?"} {"_id": "61637", "title": "When learning JS, what was your Aha-moment?", "text": "**Do you remember when you were learning JavaScript? What was the moment that you suddenly \"got it\"?** (For example, my CSS aha-moment was when I learnt about the box model...) The reason I'm asking is that I've been learning JS for 6 weeks, but I still find it quite confusing.
"} {"_id": "61637", "title": "When learning JS, what was your Aha-moment?", "text": "**Do you remember when you were learning JavaScript? What was the moment that you suddenly \"got it\"?** (For example, my CSS aha-moment was when I learnt about the box model...) The reason I'm asking is that I've been learning JS for 6 weeks, but I still find it quite confusing. Here's a quote from something I read recently on SO: > \"..functions act similar to values, as a method is a property of the object > that has a value of a function (which is also an object).\" I'm curious if you were confused as well in the beginning, and what it was that made you understand it. (I'm reading SitePoint's \"Simply JavaScript\", the book \"Eloquent JavaScript\" and following Lynda's Essential JavaScript tutorial. I don't have any programming experience and was terrible at math ;) Thanks!"} {"_id": "27656", "title": "How to find entry level positions in a new city", "text": "I am just graduating from a computer science degree (tomorrow is my last exam). I have been thinking about job hunting this semester, but I wanted to focus on my studies and part-time job, so I am a bit late on the job hunt. I want to find a job in a city that I have very little professional network in. **How would you go about job hunting in a new city?** I do not live there yet and I cannot easily go there, so that makes finding places to apply a bit trickier. Normally I would ask people that I studied and worked with, but I have few contacts in the city I want to work in. Where would you look to find jobs? I have been using * Craigslist * My university's job listings (but they are mostly focused on the east coast) * This government job listing page Anyone have any great job finding resources?"} {"_id": "129276", "title": "Give a more formal definition for the term \"pythonic\" than PEP8?", "text": "For me as a Python programmer, having a formal definition for the term \"pythonic\" would be much more important and less subjective compared to asking more rhetorical (non-formal) questions. This may lead to better communication by argumentation being less subjective (as discussed in here). PEP8 by Guido is assumed as a starting point despite some religious dogmatism (e.g. PEP20 is more axiomatic, shorter and simpler, but has no code). Having identified _minimalism_, _readability_ (human comprehension) and _unambiguity_ (explicit code) is actually the same as paraphrasing PEP20. But is it possible to rewrite the Zen of Python in Python? Obviously print('Beautiful is better than ugly') does not capture Python in Python ;-) Any thoughts on this? Ultimately I would like to get closer to a more logical or mathematical interpretation of the matter. An argument in the form of code answering why family of snippets **A** is _pythonic_ compared to family of snippets **B** is also acceptable. **EDIT** If possible try to reduce your answer to the following outline. Let us consider all possible syntactically correct program texts generated by some procedure (e.g.). Please note that we are considering only programs written in Python (which is a formal language with a respective grammar). Also, being syntactically correct does not mean semantic correctness (programs may be buggy or without any obvious purpose). Further we observe (assume) that the ultimate family _POmega_ of all syntactically correct programs in Python must contain a subset of programs which are called _Pythonic_. Now in order to argue about what it means to be pythonic we will use a formal method - another language called _Reason_ which would be less ambiguous (consistent, complete) than natural language. We will define a mapping from programs in _Omega_ into _Reason_. Such a mapping we will call an interpretation. We will also produce more mappings in a similar manner (e.g. 
as in) for all individual symbols in _Python_ and their combinations (statements). Now if we argue about images of such mappings, we would do so in the form of proofs in the _Reason_ language. The question would be: if the subset _Pythonic_ indeed exists, can we derive that from axioms in _Reason_ built/reformulated from the \"Zen of Python\" or another motivating set of concepts? In fact, the whole deductive core of the language _Reason_ would be built from \"Zen\" or PEP8. We will have to prove that each program from _Pythonic_ consists only of statements whose images in _Reason_ can be shown to satisfy the initial axioms of the model language. Different outcomes are plausible: 1. The subset of _Pythonic_ programs could be empty. We have not found any programs that would satisfy our axioms in _Reason_. 2. It could happen that this set of axioms would turn out to be ambiguous (inconsistent or incomplete). Then our original assumption about the existence of _Pythonic_ programs would be wrong. 3. Alternatively, the axiomatic \"Zen\" could be simple enough and so well-chosen that the _Pythonic_ set is non-empty. Now the question is: will such a (consistent, complete) set be of practical value, i.e. can we say that such a set of axioms is indeed a formal definition of \"pythonic\"? This is a draft of how one can try to approach answering the question in a formal manner. I would appreciate recommendations on a more elegant approach to a solution of the problem (everything except ignoring it). But in the end it matters to me to see what such a set of axioms looks like. It will then allow us, in a precise (quantitative) way, to ask and try to explain questions like: why do programs have \"pythonic\" qualities once written according to the Python \"dogmas\"? BTW One can build a more sophisticated model of _Reason_ by introducing an equivalent of the \"human factor\" - a software developer of the programs. Then the question would be - can a human formally claim the program he writes belongs to the \"Pythonic\" subset?"} {"_id": "74626", "title": "Is there such a thing as staying in a job too long?", "text": "After reading through a few \"job hopping\" related threads recently, I've been thinking how the opposite of job hopping can also be a problem. I've known many people (especially in large, relatively sluggish companies) who got comfortable in a cushy and unchallenging role and stayed around for a very long time - say 10 or 15 years or even more. They might have moved around internally a little, but it was mostly a case of \"_one year of experience 15 times over_\" as seasoned hiring managers would say. Or to put it another way, they were \"Special Projects\" cases. Just sitting in a comfortable role where no more learning is going on, but that might look okay on paper (on their CV) if the various stuff they were involved with is embellished a bit. What really got me thinking about this is that the longest role on my CV (almost 6 years) fits into this category somewhat, at least mildly. If I was being completely intellectually honest, I'd say I really only got 3 solid years of learning experience from it. The last 2-3 years were cushy maintenance mode. So I know first hand that it's quite possible that many \"seniors\" with 15 years experience (if they were in a job like that the whole time) might not be as broadly experienced and \"senior\" (in terms of having 15 years of quality experience) as they look on paper. So my question is - does hanging around in the same job for very long raise any red flags? 
For example: if you see a CV which has only one 15-year job on it after college, as opposed to an equally experienced person who has several 4-5 year stints instead, does the single-job guy look like a possible \"Special Projects\" case for only having had one very long job? My experience suggests that it's quite likely. Or at least that the guy with several 5-year stints is probably more dynamic and adaptable, from having experienced a variety of roles, environments and technologies (and different uses even if using the same technologies across all jobs). **EDIT:** Note that I am not personally worried that _my_ history looks like this. My longest role above just serves as a mini example of what can happen with cushy long-term roles, which got me thinking about this in general terms (if anything, my actual employment history (except for that longest role) leans more towards being a bit too job hoppy)."} {"_id": "126429", "title": "Finding an agile coach", "text": "I'm hoping that this is the right place to ask this question. My company has recently started a process to look at how we build software, from the tools we use to the methodology. As you can imagine, we have the kind of endless meetings where we sit around a table and split hairs. In an effort to actually get something done, I've suggested we hire in a professional to take a look at how we do what we do and maybe guide us into a more agile approach to delivering software. My problem is that I'm not sure how to find a consultant we could bring in to get us started. I was thinking of approaching someone like ThoughtWorks, but I think finding someone who isn't going to push an agenda or product on us would be better. In the meantime I've picked up a few books and will do a bit of reading on the beach over the holiday vacation."} {"_id": "126423", "title": "How/where would I best advertise my open source project, in order to maximize my odds of finding collaborators?", "text": "I'm working on an open source project, and am looking to find collaborators. What's my best bet, in terms of advertising my project, with an eye specifically towards finding other developers interested in contributing?"} {"_id": "253077", "title": "Is this looping solution possible with recursion?", "text": "Eventually, I would like to generalize this solution to work with a tuple of any length. I think recursion is required for that, but I haven't been able to do it. def combineRanges(maxValues): for x in range(0, maxValues[0]): for y in range(0, maxValues[1]): for z in range(0, maxValues[2]): print(str(x) + '-' + str(y) + '-' + str(z)) m = (6, 9, 20) combineRanges(m) http://repl.it/WiZ
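My closest attempt so far walks an index down the tuple one position per call, but I'm not confident it's correct or idiomatic (sketched in C# purely for illustration; the same shape should port straight back to Python):

```csharp
using System;

class RangeCombiner
{
    // Fix one position of maxValues per recursive call; print once all positions are fixed.
    static void CombineRanges(int[] maxValues, string prefix = "", int depth = 0)
    {
        if (depth == maxValues.Length)
        {
            Console.WriteLine(prefix.TrimEnd('-'));
            return;
        }
        for (int i = 0; i < maxValues[depth]; i++)
            CombineRanges(maxValues, prefix + i + "-", depth + 1);
    }

    static void Main()
    {
        CombineRanges(new[] { 6, 9, 20 }); // same as m = (6, 9, 20) above
    }
}
```

Is that the right general shape, and does it translate cleanly back?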
"} {"_id": "240568", "title": "Should I follow the normal path or fail early?", "text": "From the _Code Complete_ book comes the following quote: > \"Put the normal case after the `if` rather than after the `else`\" Which means that exceptions/deviations from the standard path should be put in the `else` case. But _The Pragmatic Programmer_ teaches us to \"crash early\" (p. 120). Which rule should I follow?"} {"_id": "35755", "title": "Overused or abused programming techniques", "text": "Are there any techniques in programming that you find to be overused (i.e. used far more than they should be) or abused, or used a bit for everything, while not being a really good solution to many of the problems which people attempt to solve with them? It could be regular expressions, some kind of design pattern or maybe an algorithm, or something completely different. Maybe you think people abuse multiple inheritance, etc."} {"_id": "145470", "title": "How to get a list of valid addresses?", "text": "I need to get a list of valid addresses in 4 or 5 clusters (cities) in both the UK and US. It's to generate sample data for an application that will do geolocation and searching. At the moment I'm generating the addresses more or less randomly, which means I hit postcodes that don't exist and obviously the street and number never match the postcode. Any recommendations on how to programmatically obtain a list of addresses to build test data?"} {"_id": "92224", "title": "Public vs Private Repositories", "text": "Say I am developing an app for sale. Does it make sense to use a public repository for this project? Doesn't default copyright protect the code from someone else using it? If so, what advantages are there in paying for a private repository?"} {"_id": "197931", "title": "Is this a Best Practice with Enum in C#", "text": "When an enum is used as below, say we have enum Designation { Manager = 0, TeamLead = 1, Associate = 2 } and then write the below code if (designation == Designation.TeamLead) //somecode Now if we decide to change the enum element from \"`TeamLead`\" to \"`Lead`\" then we have to modify the above line of code as well, i.e. `designation == Designation.TeamLead` to `designation == Designation.Lead`. So what is the best practice?"} {"_id": "218738", "title": "computing whether service level times are met", "text": "For a ticket reporting tool, I want to compute whether certain service level times (SLTs) are met. That is (a bit simplified): was the timespan between opening and closing of the ticket within a certain SLT? Now there is also something called a service window (SW), like Mo-Fr, 9:00-17:00. To compute whether an SLT was met, times not within the SW have to be excluded. Even though not foreseen, tickets can be opened or closed outside the SW, since there are other SLAs which allow for 24x7 service. An engineer could work on a ticket outside the SW, since it might be easier for him to close a ticket while he is at it. Maybe he was doing work on a related ticket. In order to determine the time a ticket took between opening and closing time, I thought of two algorithms: 1) Distinguish all possible combinations of start_date and end_date being within or outside the SW, on different days and so on (I see 8 combinations). Then for each combination, set up a separate formula to compute the time the ticket took. Plus: little computational overhead. Minus: rather complex, lots of code. 2) Decide on a smallest possible unit (minutes seem to be appropriate), then for each unit between start_date and end_date, decide whether this unit is inside or outside the SW. If inside, add it to the time the ticket took. Plus: very simple. Minus: big computational overhead. I am leaning towards 2) since computation is cheap, simple code is to be preferred, and the rest of the program contains lots of time-consuming db queries, so a little computation will not be noticeable. However, both solutions seem not very elegant to me. Does anyone see a third way?
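To make algorithm 2 concrete, this is roughly what I have in mind — a minimal sketch with the SW hard-coded to Mo-Fr, 9:00-17:00 (in the real tool the window would come from configuration):

```csharp
using System;

static class ServiceWindow
{
    // Algorithm 2: step through the ticket's lifetime one minute at a time
    // and count only the minutes that fall inside the service window.
    public static int MinutesInWindow(DateTime opened, DateTime closed)
    {
        int minutes = 0;
        for (var t = opened; t < closed; t = t.AddMinutes(1))
        {
            bool inWindow = t.DayOfWeek >= DayOfWeek.Monday
                         && t.DayOfWeek <= DayOfWeek.Friday
                         && t.Hour >= 9 && t.Hour < 17;
            if (inWindow)
                minutes++;
        }
        return minutes;
    }
}

// SLT check: ServiceWindow.MinutesInWindow(opened, closed) <= sltMinutes
```

Even a ticket that stays open for a whole month is only about 45,000 iterations, which seems negligible next to the db queries.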
"} {"_id": "111506", "title": "How do you put different versions of your library under version control? Do you use tags? Or branches? Or another method?", "text": "I have recently started putting my code under version control (in the lab I work in, under SVN, and my own code on GitHub (obviously with Git)). Before using version control, I used to do something like this: I had a folder with the name of the library, and inside it many folders with the version number. Every time I wanted to start working on a newer version, I would make a copy of the last version, change the name to the new version and start implementing. This however seems way redundant when the folder is put under version control. Apart from the redundancy, if someone wants to get the latest version, they would be downloading all versions if they just `import`/`clone`. Now I see many ways of doing this with version control, but since I'm new to it, I don't know which would be more maintainable. ### Method 1: Using tags If I understood tags correctly, you would have your main branch, you commit whatever changes you've got and tag them with a version. Then, when you want to get a working copy of it, you get the one with a certain tag. (correct me if I'm wrong) ### Method 2: Branching versions In this method, the main branch would be the development branch. Every now and then, when a stable version is made (let's say `v1.2.0`), you create a branch for that version and never commit to it. That way, if you want to download a certain version, you get the code from that branch. Although I said you never commit to it, it may be possible to do bug fixes and commit to an old version's branch to keep the old version running. For example if the current version is `v2.0`, but there are people who want to use `v1.2`, you can get another branch from `v1.2`, namely `v1.2.1`, and commit the bug fixes, or just keep the version the same as `v1.2` and just commit the bug fixes. So the branches would look like this: v1.2.1 v1.2.2 / / v1.0.0 v1.2.0--------- v2.0.0 / / / -------------------------------------- dev This way you have branches for every minor version update. (Note that in the graph above, v1.2.1 and v1.2.2 are created after v2.0.0 was released, so they were not part of the development between v1.2.0 and v2.0.0. Think of it as support for older versions.) ### Method 3: Branching development This method is the opposite of the previous. The main branch would be the latest stable version. Whenever you are working on a new version, you create a branch (for development), work on your code and when it is stable, merge it with the main branch. In this case, the branches would look like this: ________ ____ ________________ _____ dev / \\/ \\/ \\/ ---------------------------------- latest_version Probably this one needs to be done in conjunction with tags, right? ### The question! Anyway, my question is: based on your experience, which of these methods proves more practical? Is there a known best method out there (that possibly I didn't figure out myself)? How are these things commonly done?"} {"_id": "111507", "title": "What is a good toy project to teach an introduction to DVCS?", "text": "## Context I will be coaching students' programming projects in my engineering faculty (group of <10 students). On this occasion I wanted to draw the students' attention to the usefulness of version control systems, in particular DVCS. There are many convincing DVCS tutorials out there; however, all the ones I found use unrealistic or contrived examples (e.g. hginit uses cooking recipes). 
## What I am looking for I want the students to collaborate in class on a _simple but realistic_ programming project that makes them discover the benefits of the DVCS through its usage. The class would collaborate as a dev team. I am looking for a toy project simple enough to keep the focus on the DVCS workflow. The goal is for students to face DVCS challenges that arise naturally in collaborative development: bug fixes, branch management (stable vs development), merge conflict resolution, etc. I would be grateful if you could give me specific project ideas and highlight their DVCS challenges (e.g. a basic signal processing pipeline, where each step is written by a different student, forcing them to deal with interface conflicts, etc). Thanks!"} {"_id": "251360", "title": "What's the canonical way to store translations of user data?", "text": "I've developed some software that allows my users to attach a blurb to some non-language-specific information: Pseudocode model: item(): ID creationdate byline //Only one byline for the object description //Only one description for the object faq: faqID question answer itemID //ForeignKey So this is basically a simple model in which an info item has a description and any number of faq questions related to it. If my user now wants to have translations for his description and faq questions, what is the canonical way to extend the model? Some candidates that I thought of: * Add an alias (FK to self) field on the info as well as a language field. Then have business logic that finds the desired language. * Upside: required fields are required for any language entry. You create partial translations. You will be reminded when the schema adds another text field. * Downside: There's now two types of objects in the table: main-language objects and alias objects. * Add a table with all the language-specific fields (description), and access the string via the foreign key * Upside: clean, everything is its own type * Downside: more tables. Also the FAQ table needs to somehow make sure there's a full translation for each language. So how would you do it?
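To make the second candidate concrete, here is roughly how I picture it — sketched as plain C# classes purely for illustration (names hypothetical, persistence mapping left out):

```csharp
using System;
using System.Collections.Generic;

// Candidate 2: every language-specific field moves into a translation record.
public class Item
{
    public int ID { get; set; }
    public DateTime CreationDate { get; set; }   // non-language-specific fields stay on the item
    public List<ItemTranslation> Translations { get; } = new List<ItemTranslation>();
}

public class ItemTranslation
{
    public int ItemID { get; set; }        // FK back to Item
    public string Language { get; set; }   // e.g. "en", "de"
    public string Byline { get; set; }
    public string Description { get; set; }
}

// Lookup with fallback to a default language:
// var t = item.Translations.Find(x => x.Language == lang)
//      ?? item.Translations.Find(x => x.Language == "en");
```

The FAQ table would get the same treatment with its own FaqTranslation, which is exactly where the "is every language complete?" bookkeeping comes in.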
The problem is that now I have to maintain the code in two places. * have as useful of a name as possible. e.g. userIdToArrayOfTimestampsMap or even have some kind of pseudo-Hungarian dialect for variable naming that only I speak that explains what the types are and how they're nested. This leads to really verbose code, and I'm a fan of keeping stuff under 80 col. * break functions down until I'm only ever passing around primitives or collections of primitives. I imagine it might work, but then I'd likely end up with micro-functions that have one or two lines at most, functions that exist only for the purpose of readability. Now I have to jump all over the file and recompose the function in my head, which just made readability worse. * some languages offer destructuring, which to some extent can almost be thought of as extra documentation for what the argument type is going to contain. * could create a \"class\" for the specific type of object, even though it'd not make a huge difference in a prototypal language like JS, and would probably add more maintenance overhead than necessary. Alternatively, if available, one can try to use protocols, maybe something along the lines of Clojure's deftype/defrecord etc. In the statically typed world this is not nearly as much of an issue. In C# for example you get a: public void DoStuff(Dictionary foo) {[...]}; Ok, easy peasy, I know exactly what I'm getting, no need to read the function header, or go back to the caller and figure out what it's concocting etc. What's the solution here? Are all people developing in dynamically typed languages continuously boggled by what types their subroutines are getting? Are there mitigation strategies?"} {"_id": "223248", "title": "Report generator windows service polling database for work", "text": "I'm building a new report generator for our in-house survey system. (No I can not use any off-the-shelf software. These are highly customized reports.) I want to use Topshelf to host the generator as a service. Our current generator is a desktop-app and requires a user to be logged in on the server. I want to try to avoid this. The report-generation itself is very straight forward and procedural in nature. But the server is more than capable of generating several reports at one time. I want the service to spin up a few instances of the generator at the same time, is this something I would use the Task Parallel Library for? A bit of pseudo would look a bit like this: poller.Poll(order => { // blocking(?) call to listen for new report-orders var gen = new Generator(order); gen.process(); // generates report set gen = null; // or something else to destroy the generator for that report }); Anyone have any suggestions on how to accomplish this?"} {"_id": "196048", "title": "So I'm a developing a workflow with vagrant+git...does this make sense?", "text": "**Relevant Background Details** 1. We've got two types of VMs (Utility Boxes & Web Servers) that developers need. 2. We are going to be using git for version control. 3. We have developers who have different preferences for their working environment (e.g. Some Linux, Some Windows). 4. I'm in the Linux camp and I believe Git and Windows doesn't mix as well as Linux & Git. However, this could be personal bias. 5. We using Linux in production. **Building the Vagrant VMs for distribution** 1. Build a base box with the relevant OS for vagrant. 2. Using a configuration manager (e.g. Chef) to build out the Utility & Web images, convert them to new base boxes. 
"} {"_id": "196048", "title": "So I'm developing a workflow with vagrant+git...does this make sense?", "text": "**Relevant Background Details** 1. We've got two types of VMs (Utility Boxes & Web Servers) that developers need. 2. We are going to be using git for version control. 3. We have developers who have different preferences for their working environment (e.g. some Linux, some Windows). 4. I'm in the Linux camp and I believe Git and Windows don't mix as well as Linux & Git. However, this could be personal bias. 5. We use Linux in production. **Building the Vagrant VMs for distribution** 1. Build a base box with the relevant OS for vagrant. 2. Use a configuration manager (e.g. Chef) to build out the Utility & Web images, and convert them to new base boxes. It will clone service configurations from a centralized git repository. 3. Distribute the base boxes (really just virtual machine images) for the users to develop locally with vagrant. The distributed box will automatically pull in source code from certain git repos (e.g. libraries). 4. If changes are planned for the production environment, all developers will need to pull down new base boxes for vagrant, as they come prepackaged. I think this is the simplest way for a new developer to deal with it. Staging is updated to match the new development VMs in preparation. **Developer Workflow** 1. Get assigned an issue from the issue tracker. 2. Use the vagrant VM to clone the current dev repository into the folder it shares with the host OS (so the developer can use their favorite IDE). 3. Developer commits changes and tests locally. 4. When satisfied, the developer merges his changes into the dev repository. If there are conflicts, work with the developer who committed the conflicting code to resolve the issue. 5. When Dev is in a stable state, Dev is merged with the current Staging repository for QA of the new features. Nothing is pushed from Dev to Staging until Step #6 is completed. Hooks generate a new copy of the documentation for Staging. 6. Staging is cloned into Production once the QA is completed. Hooks generate a new copy of the documentation for Production. Are there any obvious flaws/pitfalls in the above, or steps that are generally considered 'best practices' that should be added?"} {"_id": "223242", "title": "How to consider the login function when collecting requirements of an application? Is it a user requirement?", "text": "I am working on an application that expects the user to log in when the application is started (as it is in Skype, for example: the user opens the application and is presented with a login mask where he inserts the username and password to start working with the application). Now I am preparing some use case diagrams and I am collecting the application requirements (I have to write some documentation). Can I consider the login a user requirement? Or what is it?"} {"_id": "216574", "title": "Could someone help me understand SQL TDE Database encryption?", "text": "I don't quite follow how it works. According to the MSDN article there is a big hierarchy of keys protecting other keys and passwords. At some point the database is encrypted. You query the database, which is encrypted, and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect and do `select * from x` and the data is revealed. Sorry my question is a bit scattered, I am just very confused by the article."} {"_id": "52854", "title": "How would most programmers feel about the bugs they wrote?", "text": "Do they feel frustrated, disappointed, or do they not even admit to them at all?"} {"_id": "60949", "title": "Am I a code monkey?", "text": "I just tried integrating my website with Facebook. I got a lot of copy-paste code from the Facebook developers site. I just put the code in and it works fine. Do you call this kind of programmer a \"code monkey\"? If you say I am a code monkey, in the same case what would you expect me to do?"} {"_id": "60947", "title": "Designing tool for C#", "text": "I have seen a few developers use a tool for designing their application where they simply dragged needed elements (classes, variables, objects) and just did magic work. 
Then there was a button to generate code: the design was turned into C# code, after which the developer continued to work on the software manually. Can anyone give me an idea of what those tools are called and where I may grab one?"} {"_id": "253695", "title": "Billing from card directly", "text": "I, for the life of me, cannot find any literature on this, simply because I have no clue what it is called. I want to learn how to implement a payment option that consists of paying with your credit/debit card directly, without the use of a third party like PayPal. This is what I am talking about ![enter image description here](http://i.stack.imgur.com/9gYOn.png) Can you please give me some information about what this payment method is called, and possibly some articles I can read up on. Thank you!"} {"_id": "253694", "title": "Where did the notion of 'calling' a function come from?", "text": "I've always wondered why one _calls_ a function as opposed to, for example, _executing_ it. A Google search for `function call etymology` and similar terms turns up nothing useful, Wikipedia doesn't mention it, online dictionaries have either no entry at all or no etymology section. Where did the notion of 'calling' a function come from?"} {"_id": "220578", "title": "How would you create useful tests for Oracle BI Publisher reports?", "text": "I'm curious about how to test reports that seem rather straightforward. I'm supposed to create test cases for an Oracle report (XML Publisher/BI Publisher). So for example, a report is supposed to just return columns (say, 10 columns). Here is what I thought of so far: Validate that the report returns accurate output. Validate that all column headers and table values are correctly aligned and there are no run-ons/incorrect formatting. Validate that dates are in the MM/DD/YYYY format. Validate that all currencies are accurate and contain 2 decimal places. Here's a sample pic (of what the report produces): ![enter image description here](http://i.stack.imgur.com/MfxvB.png) What would the edge/boundary cases be? If the report takes in parameters, does that make it more demanding of a report? Thanks"} {"_id": "253690", "title": "embedding LEFT OUTER JOIN within INNER JOIN", "text": "I am having some problems with one of the questions answered in the book \"SQL FOR MERE MORTALS\". Here is the problem statement ![Problem Statement](http://i.stack.imgur.com/894yN.jpg) Here is the Database Structure ![The Database Structure](http://i.stack.imgur.com/CKsHU.jpg) Here is the answer which I am unable to comprehend ![Doubtful Answer](http://i.stack.imgur.com/SWxVI.jpg) Here is an answer which looks perfect to me ![Confirmed Answer](http://i.stack.imgur.com/PnJza.jpg) Now the problem I am having with the first answer is: We first use LEFT OUTER JOIN for recipe classes and recipes. So it selects all recipe class rows but only matching recipes. Perfectly fine, as the question demands. Let's call this result set R. Now in the next step, when we use INNER JOIN to join Recipe_Ingredients, it should filter out the rows from R in which the RecipeID doesn't match a RecipeID in Recipe_Ingredients, and hence filter out the related recipe class and recipe description also (since it filters out the entire row of R). So this contradicts the problem, which demands all RecipeIDs and RecipeDescriptions to be displayed from the Recipe_Classes table in this very step. How can it be correct? 
Or am I missing some concept?"} {"_id": "220574", "title": "Where should user permission checks take place in an MVC, and by whom?", "text": "Should user permission checks take place in the model or the controller? And who should handle the permission checks, the User object or some UserManagement helper? ## Where should it happen? ### Checking in the Controller: class MyController { void performSomeAction() { if (user.hasRightPermissions()) { model.someAction(); } } ... Having the checks in the Controller helps keep the Models simple, so we can keep all logic in the Controllers. ### Checking in the Model: class MyModel { void someAction() { if (user.hasRightPermissions()) { ... } } ... By putting the checks in the Model, we complicate the Model, but also make sure we don't accidentally allow users to do stuff they aren't supposed to in the Controller. ## And by whom? Once we've settled on the place, who should do the checks? The user? class User { bool hasPermissions(int permissionMask) { ... } ... But it's not really the user's responsibility to know what he or she can access, so perhaps some helper class? class UserManagement { bool hasPermissions(User user, int permissionMask) { ... } ... I know it's common to ask just a single question in, well, a question, but I think these can be answered nicely together.
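To anchor the discussion, the combination I currently lean towards looks roughly like this — the controller asks a helper, the model stays simple (names hypothetical, constructor injection assumed, User and MyModel as above):

```csharp
using System;

public interface IPermissionService
{
    bool HasPermissions(User user, int permissionMask);
}

public class MyController
{
    private readonly IPermissionService permissions;   // the UserManagement-style helper
    private readonly MyModel model;

    public MyController(IPermissionService permissions, MyModel model)
    {
        this.permissions = permissions;
        this.model = model;
    }

    public void PerformSomeAction(User user)
    {
        const int someActionMask = 0x01;               // illustrative permission mask
        if (!permissions.HasPermissions(user, someActionMask))
            throw new UnauthorizedAccessException();   // or render an error view instead

        model.SomeAction();
    }
}
```

But that still leaves the Model itself unguarded, which is exactly the part I'm unsure about.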
"} {"_id": "195934", "title": "Can I demand code quality on a project I've gotten", "text": "I have been given a Drupal project from an external web agency and have been trying to become wiser about both Drupal and their approach to making a site. With time I've learnt a bit more about Drupal, even though I've come to notice it really isn't my cup of tea. Recently, I tried to \"simply\" replace the HTML generation of a part of the site, which seemed to be generated through a custom module (their addons for the site). I changed that code and realised that it simply didn't alter anything, or at least so I thought. Basically, I've now, without exaggerating, replaced the same piece of code with very small differences 26 times, and it's _still_ not changing everywhere. So what I'm dealing with is a codebase that has literally never (or barely) heard of a global class or function, and is simply copy-pasted all over the site. I'm curious about what I'm supposed to do with this. There's one file alone that has over 3000 lines of basically the _exact_ same code 16 times over. If you order a web agency to develop a site, can you expect it to be common sense that the code should be at least a _little_ maintainable, and simply not one big piece of code that is more annoying to edit than necessary, or does it specifically have to be requested, and should I instead have validated the code before paying for the handover of the site? This is the first time I have ever come across such a poorly written piece of code in my career, and it feels as if we bought a good-looking suit that we saw from the outside of the store, and all we got was a poster of that suit for the same price. ## The question What would be the appropriate thing to do in my case? Should I suck it up and convince my employer that the code is poorly written, in the hope that my given deadline for completing the project gets extended, or can I demand that the web agency take some action? **EDIT:** Thanks for all your answers! They help me get some more perspective on this. I will discuss this internally a bit more, and try to mark out the best answer, even though most of your answers cover the same ground while touching different aspects of the question."} {"_id": "207790", "title": "Does IE have more strict Javascript parsing than Chrome?", "text": "This is not meant to start a religio-technical browser war - I still prefer Chrome, at least for now, but: Because of a perhaps Chrome-related problem with my web page (see https://code.google.com/p/chromium/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Pri%20M%20Iteration%20ReleaseBlock%20Cr%20Status%20Owner%20Summary%20OS%20Modified&groupby=&sort=&id=161473), I temporarily switched to IE (10) to see if it would also view the time value as invalid. However, I didn't even get to that point - IE stopped me in my tracks before I could get there; but I found that IE was right - it is more particular/precise in validating my code. For example, I got this from IE: SCRIPT5007: The value of the property '$' is null or undefined, not a Function object ...which was referring to this: That array defines the content and structure of the page, as well as what programming assets are present at the time. Next, I would have listeners or running processes that monitor the structure of the MVCDocument and react to any changes in it. These processes are the \"brains\". Ajax calls will modify or annotate the MVCDocument. Changes to the MVCDocument object might in turn modify the listeners/\"Brains\". CSS would function to style the presentation layer. The problem is that I don't know the right terms to look for examples of what I'm wanting to develop. I prefer NOT to re-invent the wheel. What are some good examples of apps that run based on this premise? And are there more components than just the bones (document array), brains (javascript coding), and clothing (css)?"} {"_id": "125011", "title": "Does not testing internals entail diligent refactoring and/or rely on developer talent?", "text": "I'm not asking here what the arguments are for/against testing internal methods (though I'll restate some, and don't mind hearing others). My questions relate to the implications of only testing the public interface. Especially, whether I understand the extent of the \"refactor\" part of Red-Green-Refactor in by-the-book TDD, and if there are non-TDD techniques TDD practitioners use that address the problems that make me want to test internals. My question is: If you write the most straightforward code possible to get the light green, and if you don't test internal methods, does it follow that from time to time you allow yourself to start off by writing a class that does far more than it should, with a bunch of member variables you know will not survive refactoring? And wind up after refactoring with a bunch of methods that aren't explicitly tested? As I get familiar with TDD, it feels like anyone would at least be tempted to test internal methods. And many practitioners flatly say you shouldn't. E.g. (of about 23 million results): Item 2 here and its comments, and this StackOverflow post. Given an implementation will involve more than one non-trivial problem, if you only test the public interface, there are two possibilities: 1) non-trivial logic winds up getting tested only indirectly, AND tested in the same calls as other non-trivial logic. 2) you make methods public that you don't expect clients other than the SUT and the tests to call. I think what I hear from the advocates of by-the-book TDD is \"1 is right. 
Yes, a lot of non-trivial code is tested only indirectly, and that isn't a problem. After all, you don't want your tests to start failing when you improve the implementation. Even if they don't fail, it's not innocuous if tests keep passing when they're targeting a bunch of no-longer-used code, because the tests show how to use your class. You Should Only Test the Public Interface.\" And that sounds like a reasonable thing to say, but I haven't seen it said explicitly. But even though it sounds reasonable -- if I don't test internals I feel like I lose part of what helps keep the tests DRIVING the coding (maybe part of my difficulty applying the precepts is that I don't distinguish between coding and developing). If there's some internal logic that has to turn a String into a valid int32, int64 or decimal value depending on what's in the string and the type of some other object, I want to test that little bit of logic, not just find out that the whole method failed. I want one test to pinpoint those few lines of code and exercise what should fail. The thinking in the doubly downvoted answer here seems so obviously attractive that I have to wonder if the by-the-book fellows are neglecting to mention some instrumentation/tracing/logging code they use in addition to TDD, or if they watch it work in a symbolic debugger. Or do they just recognize that as a failure point and write a test with input that will cause a failure pinpointing those few lines of code? If they do that, it's testing the implementation though. It seems so natural to want to test that code in isolation, and verify it fails where it should, even though it's an implementation detail. I suppose that with talent and experience you can classify bigger chunks of code as \"non-trivial\", but I haven't seen anyone bluntly saying or condescendingly implying that that's why they don't have to test implementation details. Lots of times it makes perfect sense to me to test the public interface; lots of methods really are like what you see in TDD examples. But for the feel of where I wonder if purists are doing something more than writing tests on a public member: Say you want to put data from an ISAM file into a database. There is one public method to the envisioned consumer of the code: public void LoadIt(SqlConnection c). \"LoadIt\" has a bunch of other dependencies that will be resolved using configuration files and environment variables: some code has to determine that it can find the name of the folder where the ISAMs live; to find, load and parse the schema file for the ISAMs using the ISAM vendor's DDL library; so forth and so on. But none of that is of any interest to any envisioned calling code. My only reason for moving the dependency resolvers out of the LoadIt class would be so I could test them OR to make the code easier to understand/maintain. Moving to another class to make them testable is Doing It Wrong -- they're really just implementation details of LoadIt. And in my first implementation, I am just trying to write enough code to get the light to turn green. So I won't write anything with reasonable maintainability if it's easier to just manhandle a bunch of variables in a big method or two. I won't write helper classes. Instead, I'll get the light green and then do the Refactor part diligently. Even when I clean it up, any classes introduced keep \"internal/friend\" accessibility, and only get tested via the call to LoadIt. 
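(The one mechanical escape hatch I know of -- which none of the sources above bring up, so maybe it's considered cheating -- is the CLR's InternalsVisibleTo attribute, which lets the test assembly see internal members without making them public. Assembly and class names below are invented:)

```csharp
// In the production assembly (e.g. in AssemblyInfo.cs):
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("MyProduct.Tests")]

// The helper stays internal -- invisible to real clients, but testable from MyProduct.Tests:
internal static class NumericFieldParser
{
    // The "few lines of logic" I want a test to pinpoint:
    internal static object Parse(string raw)
    {
        if (int.TryParse(raw, out var asInt32)) return asInt32;
        if (long.TryParse(raw, out var asInt64)) return asInt64;
        return decimal.Parse(raw);
    }
}
```

That would let LoadIt stay the only public member and still fail a test at exactly those lines -- but it is, admittedly, testing the implementation.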
Thanks for reading this far; as a reminder, my question is in the second paragraph."} {"_id": "85575", "title": "How suitable is D2 for numerical work", "text": "I'm interested in working with the D programming language for simulating physics. How suitable is it? Is DMD's floating point code generation mature enough to compete with compilers available for C++ and Fortran? Have fast matrix libraries been ported to D (or are they accessible from D)? Is there anything in the language itself that would inhibit/improve the floating point performance of the language? I love programming in the language; however, I'm worried that the compilers/libraries might not be suitable yet for numerical work."} {"_id": "128020", "title": "How to deploy a heterogeneous server application to customers?", "text": "We have a fairly big application server which customers would like to have deployed locally. It consists of a MySQL server database, a Redis database, multiple web servers for sub-parts, an NGINX reverse proxy so that these web servers are reachable from port 80, and a homegrown C++ server. All sub-parts (DB, web servers, ...) have to be configured to be accessible by each other. At the moment (for in-house use) deployment and configuration is done by hand; but in order to roll it out to customers, we would like to have an out-of-the-box solution which gets assembled by some build script. What would you suggest? 1. Maintaining a VM which just gets configured and then deployed? 2. Maintaining some installation script or package 3. Something else? Usually we would prefer 2., as it seems the more natural way and lets the customer decide whether he wants to use a real server or a VM. Automatic creation of VMs also sounds rather time-expensive. The problems we face are the usually-not-to-be-embedded components like MySQL and NGINX. NGINX configuration is stored in /etc/nginx/. I don't think that .DEB packages are allowed to overwrite foreign NGINX configuration, nor is it good practice. The same with MySQL. It is also possible to embed MySQL/NGINX/Redis, but this is not a trivial task."} {"_id": "125014", "title": "What about ALM systems, ERPs, and embedded products?", "text": "We are working on getting a new application lifecycle management (ALM) system, including a bug tracker, a documentation system, project management, etc. The concern is that we deal with quite complex embedded systems and we would like to have the best possible integration of the different services (project management, issue/task tracking, documentation, etc.). If it was software only, I'd just buy something like JIRA, but the fact that we would hope to manage software, firmware (no big problem there), and hardware in the same system makes me doubt a little bit. I'm looking for advice regarding these points: * What about managing \"bugs\" with embedded products? In software, you have the versions of the affected module(s). In embedded products, you have different software driving different firmware, which sometimes operates different hardware. Is it good practice to simply consider hardware parts as if they were software modules in a software-intended system? I tend to believe it will add many custom fields to the bug tracking interface and consequently won't promote its systematic use. * Some would like to push integration to the level where you have hardware inventory integrated with the rest. This means that a project in the ALM (think JIRA) would have to reserve its hardware components based on the sub-components. 
It would also mean that, for example, the ALM in question would have to be able to manage part providers and offer facilities for easy purchase order creation, sending, and management in general. I'm wondering, at that point, if everything in an embedded project management process can be integrated into one management system, or if a line has to be drawn between 1) bug/task tracking (possibly software and hardware) and project management, and 2) high-level project management, inventory tracking, sale/buy orders, etc. What is ultimately wished for seems to be nothing less than an integrated JIRA and ERP. Or is it possible to do proper ALM in an existing ERP, or is there a decent ERP in an ALM system already? My personal opinion, from what I know right now, would be to split issues and project management (ALM + documentation + ...) from the ERP. The problem with that is that there's documentation about a project both in the ERP and in the project management of the ALM (which would be used for software, firmware, hardware, ...). What's funny in this is that some things might seem very unrelated at first, like the time sheets (ERP) and an issue (bug tracker in ALM), but in the end, it might very well be interesting, wanted, or even required to know how much time was spent on a bug, or on a project altogether (bugs, issues, features, other tasks). This gives a point toward total unification of ERP/ALM... At this point, you should get a feel for my questioning. Any helpful input is greatly appreciated. Thanks!"} {"_id": "57885", "title": "LINQPad still being used much out there?", "text": "I'm trying to gauge how popular and how widely used LINQPad is today. I'm just wondering if it's still a useful tool or not, as VS and other tools have gotten better. Furthermore, I am coding over LLBLGen by working with LINQ to SQL. I see there is a plug-in for LLBLGen and LINQPad. Still, I wonder if LINQPad is really worth it, what benefits it can give me, or if it's still highly suggested out there for ORMs, etc."} {"_id": "135868", "title": "What language is the CLR written in?", "text": "Just out of curiosity, what language is the CLR written in? I read on the Java Virtual Machine Wikipedia entry that it is programmed in C++; is this the same for the CLR? Sorry if this is off-topic, I didn't feel that this question was technical enough to go on Stack Overflow. Thanks! :)"} {"_id": "184945", "title": "Why is an interface in Java not allowed to have state?", "text": "There must be a good reason why the Java designers didn't allow any state to be defined in interfaces. Can you please throw some light on this aspect of the design decision?"} {"_id": "38441", "title": "When not to use Google Web Toolkit?", "text": "I'm considering use of GWT on a major in-house web app development project; namely, its major advantage in my eyes is the cross-compilation to JavaScript which would (at least theoretically) help my team reduce the size of the tech stack by one. However, having been burnt before (like most devs), I would like to hear from programmers who actually used it about any problems with GWT which would hamper, or limit, its use within a certain problem domain. What are the arguments against using GWT, and why?"} {"_id": "208276", "title": "CakePHP: Automation triggers after save - best done as component or behavior?", "text": "Folks, first time on CodeReview - just looking for some input on creating some automation in CakePHP, wondering if this is best built as a component or a behavior. It's model-driven but involves logic and processing. 
Here's the scoop: The app I'm creating requires the user to be notified when a model meets certain conditions - additionally, data in the affected model or a related model might need to be changed. For example, if a customer model is saved with a certain state/province, assign that customer to a sales rep (by changing a relational field) and notify them through a Growl or an email. So it's data-driven, for sure, suggesting a behavior, presumably afterSave(). But it also requires logic and other utilities not typically used in a model or behavior (email, for example, and session/system configuration/user preferences) - functionality typically found in a controller/component and not readily available to a model or behavior. So is this best developed as a behavior or a component? Any tips from Cake gurus on how to approach this would be much appreciated. Thanks in advance."} {"_id": "208274", "title": "When using ints as boolean values, is it in poor form to use 0s and 1s directly?", "text": "Is it better to do this #define INT_TRUE 1 #define INT_FALSE 0 int someFunctionalityIsEnabled = INT_TRUE; or this? int someFunctionalityIsEnabled = 1; It can be safely assumed that false will always be zero and true will be non-zero."} {"_id": "870", "title": "If you could only have one programming related book on your bookshelf what would it be and why?", "text": "One per answer please. I'll add my favourite as an answer."} {"_id": "4596", "title": "Is local \"User\" rights enough or do developers need Local Administrator or Power User while coding?", "text": "We have an offshore development crew who has a bad habit of installing nonsense software on corporate desktops (which has nothing to do with their job function), and so we are considering removing their local administrator rights. Is Local Administrator, or local Power User, a requirement with VS2010? How do you run without elevated rights? What issues will you run into?"} {"_id": "204568", "title": "Established antipattern name? Only getting data ducks in a row right before you need them", "text": "This one's all over our codebase, but I'm not sure I've ever heard a name put to it. We have C# and Java (and Rails, but we don't have to touch it very often) so I'll speak more generally. It's like the data concerns never get fully established until you're literally right in a view controller preparing to drop it on the page (note: we're way past the actual SQL queries in most cases here). For instance, we have data for a certain product type coming in from CRMs, from scans done with our mobile app, and pulled up from inventory in DBs, and yet somehow it's not until we get to the spot where we're literally about to drop it on the page that we suddenly discover that there's business logic to attend to. Our actions and view controllers are GIGANTIC. We're asking all kinds of questions of every object and branching on them, whether it isThis or hasThat, data sources, customer preferences, and sometimes stuff so seemingly arbitrary it might as well be the phase of the moon, migration habits of species I've never even heard of, most-googled film stars... And it doesn't always stop at the controller. The Java in particular is notorious for letting it all spill out into the JSP and then even onto the client-side in the JS. 
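To pin down the shape of it, here it is in miniature (names invented) — the controller re-deriving business rules from raw data at render time, versus the model having gotten its ducks in a row already:

```csharp
using System;

public enum ProductSource { Crm, MobileScan, Inventory }

public class Customer { public string Region { get; set; } }

// The smell: every render re-derives the rule from raw fields...
public class ProductViewController
{
    public bool ShowRecallBanner(Product p, Customer c) =>
        p.Source == ProductSource.MobileScan
        && p.BatchDate < DateTime.Today.AddYears(-1)
        && c.Region == "EU";              // ...and dozens more branches like this
}

// ...versus the rule living where the data lives:
public class Product
{
    public ProductSource Source { get; set; }
    public DateTime BatchDate { get; set; }

    public bool RequiresRecallBanner(Customer c) =>
        Source == ProductSource.MobileScan
        && BatchDate < DateTime.Today.AddYears(-1)
        && c.Region == "EU";
}
```

The first version, times a thousand, is what our controllers look like.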
This phenomenon has to have a name, but I've never heard it. What is it?"} {"_id": "203828", "title": "Regaining my spark on a project", "text": "So I'm in a little bit of a pickle. I suppose some people have been in my situation at some point, and I know at least as many have kept going for various reasons. The scenario is this: I was earlier handed a, in my opinion, half-done project made in Drupal. At first it had really terrible code, but I managed to get them to clean it up a bit first. Needless to say, a poorly constructed house full of used clothes and trash is still a poorly constructed house, even without the used clothes and the trash. My hope currently is to regain that spark I have for all of my other projects that lets me work efficiently and mainly gets things done. In this case, I have little to no interest in working with this project. Here are some reasons: * The code has quite some room for improvement. The only things that I find acceptable are the things contributed from Drupal, which are simply implemented into this project. Most custom things made in it are nothing I cherish. * I can't push myself to endure it, because I know this project will be something I'm forced to work with in the long term. In the same way, I can't push myself to run the last mile of a marathon faster if I know I have to walk the entire way home afterwards. * I've read up quite a bit on people's opinions of Drupal, and have come to the conclusion that Drupal is great if you have the right mindset for it. I don't have that mindset. I prefer to create as much as possible myself and simply let a framework such as CodeIgniter worry about the process. It helps me be more confident with my code and less dependent on a module fetched from somewhere. * I generally don't believe in the project. It already takes a silly long time to load, and weird bugs appear from time to time that I can't even solve through Google efficiently. It feels like the best thing I can do with the project is to patch it up, but I strive to make great websites, not make bad ones less bad. * Drupal has a long learning process, and I've been put in the middle of a half-made project and am supposed to implement more advanced features. Usually I have no problem adding these things in my other projects, but in Drupal it often feels close to impossible. Apart from this, I might add that I'm currently economically independent and don't fear losing my job for that reason. I generally have nice colleagues, but I work alone as a developer and would prefer to have at least someone with whom I could discuss technical solutions. This project isn't something that will go away, from my understanding. I've sat down with my boss, explaining how I feel regarding this project, but all I get in reply is \"We've spent too much time and money on this project. It needs to get completed.\" From my point of view, we're currently throwing money into a pit, hoping that the money will reach the surface sooner or later. **Question:** How can I regain the spark for this project? The creativity simply isn't there for me. This project won't give me much knowledge that I will have use of in the future (I have maintained lots and lots of code for 3 years) and it doesn't lead me towards my goal: to create \"the perfect website\". If I can't regain the spark, what other alternatives should I consider? **Note:** I am aware that this question can seem too localized, but I think my main question can help many in a similar position. 
I'm sorry if I sound like a spoiled developer who only wants to take on specific tasks, but I work _so_ much better with projects I'm comfortable with, and I also like being productive."} {"_id": "204562", "title": "Is there any situation in which it would be appropriate to use unless... else?", "text": "This construction is considered wrong unless #.. else #.. end This is for the general reason of double-negative confusion. It translates to: if not #.. if not not #.. end _We all agree this is confusing._ * * * Is there any situation (including semantics) in which the `unless..else` construction would be more appropriate?"} {"_id": "80898", "title": "How do you ask or get asked to speak at a software conference?", "text": "This is in my TODO.txt. How would you go about achieving this goal?"} {"_id": "133420", "title": "How should I include third-party models in my domain model?", "text": "I'm currently trying to design a little application using Domain Driven Design, but I'm afraid I don't really get the concept yet. Let me try to explain this as clearly as possible. public interface IMyDomainRepository { void Add(IAnInterfaceFromAThirdParty element); } The interface above is defined in my domain model and will be used in a different project called `ThirdPartyImplementionVersionX`. Now `IAnInterfaceFromAThirdParty` is not defined in my domain model but comes from my third-party library. I could create a class/interface in my domain model which implements the same properties and methods as the interface `IAnInterfaceFromAThirdParty`, but this interface has 20 methods defined, thus making it a difficult task to implement the same logic. Is this the way to go, or am I missing something?"} {"_id": "80894", "title": "Managing re-usable code in user-centric agile stories?", "text": "I work on an agile team at the moment where our stories are written primarily from the user perspective, as well as testing. So, for example, we may have a request for a date picker. The story would go something like: User goes to page x and clicks on date to launch a native date picker All fine and good, but the problem is how to communicate what that is from a development POV. For example, the issues we'd want to address: * we have to support multiple devices * even though the user sees a 'native' widget, we're often having to build emulated versions in JS * we'd want to be using said widget on many other pages than just x, and would want to incorporate variations needed on pages y and z into this one component. We're struggling to figure out how to best handle this to enable the dev team. One option seems to be for our dev team to build our own component and pattern libraries. We'd then take the user stories and use that data to enhance our own component/pattern library. Have you run into this problem and, if so, have you found a way to consolidate the discrepancies between user stories and the concept of reusable component code for the dev team?"} {"_id": "24681", "title": "Charles Barkley syndrome", "text": "Charles Barkley was an excellent basketball player, a Hall of Famer, and a Dream Team member. He played for the 76ers, Suns, and Rockets. Yet he never won an NBA championship. Some might argue this was because he was never surrounded by other players of his caliber, and in the NBA, you can't win on your own. So what does this have to do with programming? How many of you out there feel like Sir Charles? Leading your team in every category: KLOCs, bugs fixed, systems configured... 
Always the one pushing for improvements, upgrading systems, negotiating with customers... Feeling like you are carrying the team. Anger just under the surface. Only to retire eventually, without \"the ring\"1. * * * 1: Keep in mind, Charles never blamed his team. He just performed at his best."} {"_id": "80891", "title": "Understanding my DVCS workflow", "text": "We are going to be building a new project using Mercurial as our version control system. I'm still trying to fully understand what my workflow should look like, so I've listed below a case that I need some help understanding. A single application with multiple modules (each module being hosted in its own repository). Multiple developers need to make changes to the same module. How should this be managed? * Should an hg pull be done and all changes be made on the local developer's machine? Once those changes have been made, the changes are pushed to dev. * Should each feature/bug fix get a new repository or branch on the server, with the developer pushing his changes from his local hg to the remote hg? Once the feature is completed, that repo/branch is merged back into dev. * Or a combination of both: small changes go straight into dev, and a feature that could take some time gets its own branch/repo. * Or any other strategies that you think are better suited."} {"_id": "5209", "title": "How valuable are programming conferences?", "text": "Inspired by this, just how useful are programming conferences of various types? What do they offer that you can't get from just reading and researching the subject online?"} {"_id": "145501", "title": "Are any companies moving from DVCSs to CVCSs?", "text": "Are there any actual business cases that have made any company move from a DVCS to a CVCS (regardless of whether they were on a CVCS originally)? Other than having a closed mind and rejecting the paradigm shift altogether (in the particular case of companies coming from a CVCS), I cannot think of any cause for this happening. Double chocolate cookie for anyone with empirical evidence."} {"_id": "175121", "title": "Constant values in the interface", "text": "Some time ago I read two different books, and each of them gives a totally different answer to the question of whether it is a good pattern to define constant values in an interface (in Java). So I am curious about your opinions, with some reasonable arguments. Is it a good habit/pattern to define constant values in interfaces in Java? Is it generally good, generally bad, or does it depend?"} {"_id": "175127", "title": "How do you safely work around broken code?", "text": "Once in a while, a co-worker will check in bad code which blocks my application from initializing properly. To get around this problem quickly, I comment out the offending code. I then forget to uncomment the offending code at the time of my check-in, so I want to prevent this from happening again. Do you have any suggestions on how to: 1. Disable bad code that stops you from working 2. Prevent yourself from checking in unwanted changes? I'm currently using Visual Studio as my IDE and TFS as my source code repo."} {"_id": "187849", "title": "Unindented elses", "text": "I've been writing code for about 18 years now. Lately, I've gotten into the habit of not indenting elses, and when I look at the code it does worry me a bit, mostly because someone who looks at the code might cry foul. The snippet below explains the need for the style if(condition) { ... } //1 if(condition) { // Don't want this second block to be entered unnecessarily ... 
} So I put an else where the comment labelled 1 is: if(condition) { ... } else if(condition) { ... } Putting an else there helps maintain readability, especially since forgetting to put one there doesn't necessarily break anything. What I'm uneasy with is that it doesn't strictly follow conventions (Sun's in this case) in two different ways: 1) proper indentation 2) having an open brace after the else. I guess what I'm trying to do is prevent having massive indents, which really makes a difference in blocks that have a lot of conditions - 5 or more, although only three are showcased below. if(condition) { ... } else { if(condition) { ... } else { if(condition) { ... } else { ... } } } I find it a lot neater to have the else unindented and without a brace following. What are your thoughts?"} {"_id": "164261", "title": "How can I implement a database TableView-like thing in C++?", "text": "How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, freely adding and removing items, and transforming (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item: #include <boost/multi_index_container.hpp> #include <boost/multi_index/random_access_index.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/member.hpp> #include <string> #include <iostream> using boost::multi_index_container; using namespace boost::multi_index; struct Data { Data() {} int id; std::string name; }; struct row{}; struct id{}; struct name{}; typedef boost::multi_index_container< Data, indexed_by< random_access<tag<row> >, ordered_unique<tag<id>, member<Data, int, &Data::id> >, ordered_unique<tag<name>, member<Data, std::string, &Data::name> > > > TDataTable; class DataTable { public: typedef Data item_type; typedef TDataTable::value_type value_type; typedef TDataTable::const_reference const_reference; typedef TDataTable::index<row>::type TRowIndex; typedef TDataTable::index<id>::type TIdIndex; typedef TDataTable::index<name>::type TNameIndex; typedef TRowIndex::iterator iterator; DataTable() : row_index(rule_table.get<row>()), id_index(rule_table.get<id>()), name_index(rule_table.get<name>()), row_index_writeable(rule_table.get<row>()) { } TDataTable::const_reference operator[](TDataTable::size_type n) const { return rule_table[n]; } std::pair<iterator, bool> push_back(const value_type& x) { return row_index_writeable.push_back(x); } iterator erase(iterator position) { return row_index_writeable.erase(position); } bool replace(iterator position, const value_type& x) { return row_index_writeable.replace(position, x); } template<typename InputIterator> void rearrange(InputIterator first) { return row_index_writeable.rearrange(first); } void print_table() const; unsigned size() const { return row_index.size(); } TDataTable rule_table; const TRowIndex& row_index; const TIdIndex& id_index; const TNameIndex& name_index; private: TRowIndex& row_index_writeable; }; class DataTableView { DataTableView(const DataTable& source_table) {} // How can I implement this? // I want filtering, sorting, signaling the upper GUI layer, and sorting, and ... 
}; int main() { Data data1; data1.id = 1; data1.name = \"name1\"; Data data2; data2.id = 2; data2.name = \"name2\"; DataTable table; table.push_back(data1); DataTable::iterator it1 = table.row_index.iterator_to(table[0]); table.erase(it1); table.push_back(data1); Data new_data(table[0]); new_data.name = \"new_name\"; table.replace(table.row_index.iterator_to(table[0]), new_data); for (unsigned i = 0; i < table.size(); ++i) std::cout << table[i].name << std::endl; #if 0 // using scenarios: DataTableView table_view(table); table_view.fill_from_source(); // synchronization with source table_view.remove(data_item1); // remove item from view table_view.add(data_item2); // add item from source table table_view.filter(filterfunc); // filtering table_view.sort(sortfunc); // sorting // modifying from source_table, how to signal the table_view? // FYI: the table view is attached to a GUI item table.erase(data); table.replace(data); #endif return 0; }"} {"_id": "164267", "title": "XDIME for Mobile Applications", "text": "I'm involved in a project that requires mobile-enabling some previously developed portlets. The portlets are deployed in WebSphere Portal, and the container offers a technology called IBM Mobile Portal Accelerator that uses XDIME to render mobile pages according to the device. I'm trying to educate myself on the technology and I'm having a bad time: Google only shows some outdated sites from IBM and even older posts from Volantis, another company involved in the technology (Amazon shows no related books). So... what's the current status of that technology, actually? Does it have some decent level of adoption?"} {"_id": "36071", "title": "What javadoc equivalents do you use in .net?", "text": "Java developers use Javadoc in order to write inline documentation in their comments. What are the alternatives for .NET - specifically C# and VB.NET - that people actually use in real-world projects?"} {"_id": "237112", "title": "Randomized Hash function with no collisions", "text": "Related to the question Which hashing algorithm is best for uniqueness and speed? Is there a way to create a hash function, or find one, whose hash length depends completely on the input length and which has an adjustable hash character set (since it must be a 1-to-1 function, the input must comply with this character-set constraint as well)? Moreover, the most important part is that the hash function generates strings that are as **random** as possible. So, for example, it would be possible to get the following two different results. Hash(aaaa)->blue // for character set a-z and Hash(aaaz)->pink // for character set a-z More examples: Hash(aa@12a)->dakj@4 // for character set a-z, @, 0-9 and Hash(a2#46Ww)->@3#0Pdw // for character set a-z A-Z, !-), 0-9 **Notice** that the character length and the character set are preserved between input and output. What I have thought of so far is a probability distribution function; there are random distribution functions, but I am not sure how to get there. http://en.wikipedia.org/wiki/Probability_distribution http://en.wikipedia.org/wiki/Normal_distribution"} {"_id": "237113", "title": "Why are interfaces more helpful than superclasses in achieving loose coupling?", "text": "( _For the purpose of this question, when I say 'interface' **I mean the language construct `interface`**, and not an 'interface' in the other sense of the word, i.e. the public methods a class offers the outside world in order to communicate with and manipulate it._) Loose coupling can be achieved by having an object depend on an abstraction instead of a concrete type. 
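(For concreteness, a minimal Java sketch of depending on an abstraction - the same Car example developed below; the method names are mine:)

    // The abstraction the client code depends on.
    abstract class Car {
        abstract void drive();
    }

    class Volvo extends Car {
        @Override void drive() { System.out.println("Volvo driving"); }
    }

    class Mazda extends Car {
        @Override void drive() { System.out.println("Mazda driving"); }
    }

    // Loosely coupled: Driver only knows about Car, never a concrete make.
    class Driver {
        private final Car car;
        Driver(Car car) { this.car = car; }
        void commute() { car.drive(); }
    }

    // Usage: new Driver(new Volvo()).commute(); // any Car works at runtime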
This allows for loose coupling for two main reasons: **1-** abstractions are less likely to change than concrete types, which means the dependent code is less likely to break. **2-** different concrete types can be used at runtime, because they all fit the abstraction. New concrete types can also be added later with no need to alter the existing dependent code. For example, consider a class `Car` and two subclasses `Volvo` and `Mazda`. If your code depends on a `Car`, it can use either a `Volvo` or a `Mazda` during runtime. Also, later on, additional subclasses could be added with no need to change the dependent code. Also, `Car` \- which is an abstraction - is less likely to change than `Volvo` or `Mazda`. Cars have been generally the same for quite some time, but Volvos and Mazdas are far more likely to change. I.e. abstractions are more stable than concrete types. All of this was to show that I understand what loose coupling is and how it is achieved by depending on abstractions and not on concretions. (If I wrote something inaccurate, please say so.) **What I don't understand is this:** **Abstractions can be superclasses or interfaces.** **If so, why are interfaces specifically praised for their ability to allow loose coupling? I don't see how it's different from using a superclass.** The only differences I see are: 1- Interfaces aren't limited by single inheritance, but that doesn't have much to do with the topic of loose coupling. 2- Interfaces are more 'abstract' since they have no implementation logic at all. But still, I don't see why that makes such a big difference. **Please explain to me why interfaces are said to be great in allowing loose coupling, while simple superclasses are not.**"} {"_id": "237115", "title": "How do Traits in Scala avoid the \"diamond error\"?", "text": "_(Note: I used 'error' instead of 'problem' in the title for obvious reasons.. ;) )._ I did some basic reading on Traits in Scala. They're similar to Interfaces in Java or C#, but they do allow for default implementations of methods. I was wondering: can't this cause a case of the \"diamond problem\", which is why many languages avoid multiple inheritance in the first place? If so, how does Scala handle this?"} {"_id": "71079", "title": "One controller per page or many pages in one controller?", "text": "I just wanted some advice regarding the MVC way of doing things. I am using CodeIgniter and was wondering whether it's better to have one controller per page for a _website_, or one controller for all the pages. Let's say I have a simple website where you can visit the homepage, log in, create an account, and contact the admin. 1. Would it be better to have these controllers: frontend (index), login, account, contact - OR one controller called frontend (or whatever) with actions such as login, createAccount, and contact? 2. How do you know when it's better to use one controller in a given situation?"} {"_id": "33243", "title": "What is the difference between Acceptance Test-Driven Planning and Acceptance Test-Driven Development?", "text": "What is the difference between Acceptance Test-Driven Planning and Acceptance Test-Driven Development? Are they the same?"} {"_id": "50034", "title": "Enterprise VS Regular corporate developer", "text": "OK, I \" _almost_ \" lost a job offer because I \" ** _didn't have enough experience as an enterprise software engineer_** \". I've been a programmer for over 16 years, and the last 12-14 professionally, at companies big and small. 
So this made me think of this question: What's the difference between a software engineer and an enterprise software engineer? Is there really a difference between software architecture and enterprise architecture? BTW: I try to do what every other GOOD software programmer does, like architecture, TDD, SDLC, etc."} {"_id": "11199", "title": "One-week release cycle: how do I make this feasible?", "text": "At my company (a 3-yr-old web industry startup), we have frequent problems with the product team saying \"aaaah, this is a crisis, patch it now!\" (doesn't everybody?) This has an impact on the productivity (and morale) of engineering staff, myself included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are: (a) Does anyone have experience with this length of release cycle? (b) Has anyone heard of this length of release cycle even being attempted? (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.) (d) How can we minimize the damage if this effort fails?"} {"_id": "71070", "title": "Python : how can I impress people coming from Ruby/Java?", "text": "I am preparing a presentation about Python for my company, and would like to show Python's awesomeness to developers using Java or Ruby! I guess it will be very simple to write shorter and cleaner-looking stuff than in Java; however, I've never coded in Ruby (well, only a few things with Rails, but I don't know the language)! **PLEASE NOTE**: This topic **IS NOT** about which language is the best. I just want to convince people who don't know Python at all, or who even - for some - think that it is weird, that it is worthy of interest as well (as much as Ruby, Java, etc. are). And for this, I need input from people who know Ruby (or Java) and Python... I'm looking for an easy-to-understand code snippet or application that shows where Python shines. I was thinking of a couple of list comprehensions, because I think they are really great. Some operator overloading, because that looks dead simple in Python, maybe some metaclass programming, and/or why not a doctest/Sphinx example... Any other/better suggestions? PS: the people are backend web developers"} {"_id": "18649", "title": "\"To code quickly, you must quit coding\"", "text": "First off, not my phrase: http://www.whattofix.com/blog/archives/2010/11/to-code-quickly.php Props to Mr. Markham. BUT, it got me thinking about a lot of questions I have seen about being able to get things done. The approach advocated (setting a timer for a set period, in this case 50 minutes, but I've seen people talk about breaking procrastination by setting times as short as five minutes on tasks that you just cannot bring yourself to do, and then taking a short break) seems to be common sense, but lots of people advocate getting into the \"zone\" and staying there as long as possible, maybe many hours, rather than breaking their groove. 
I keep trying different approaches and find that each has its own strengths and weaknesses. What kind of technique do you use to be more EFFECTIVE (i.e., getting work done to the quality level demanded by your client / boss / etc. in the time frame allowed) in your software development, and not just to spend more time at the keyboard?"} {"_id": "2715", "title": "Should curly braces appear on their own line?", "text": "Should curly braces be on their own line or not? What do you think about it? if (you.hasAnswer()) { you.postAnswer(); } else { you.doSomething(); } or should it be if (you.hasAnswer()) { you.postAnswer(); } else { you.doSomething(); } or even if (you.hasAnswer()) you.postAnswer(); else you.doSomething(); Please be constructive! Explain why, share experiences, back it up with facts and references."} {"_id": "140027", "title": "Good practice and terminology of braces", "text": "> **Possible Duplicate:** > Should curly braces appear on their own line? I've come across two methods of using braces with if/for/switch/while/function blocks in Java, JS, C++, and PHP (any language that uses braces for creating scope blocks). One of them is like this: If(...){ ... }else{ ... } I've always used this; it's become an unbreakable habit for me. This leads to compact code, but many people who look at my code get confused because they don't realise that there is an opening brace. I personally call this method 'bouncing braces' (don't know why). The other method is this: If(...) { ... } else { ... } This is less compact, but is clearer. This is what most people seem to expect (from what I've seen--and I've not seen much). If they don't see the opening brace below the if, they think it's missing. So I have two questions: * What are these two practices called (if they have names)? * Which one is better practice?"} {"_id": "17929", "title": "Web versus desktop development - is web development worse?", "text": "As a longtime desktop developer looking at doing our first large-scale web application, what are the pros and cons of doing web development? Is developing a web application much worse than developing a desktop app? E.g., is it more tedious or annoying? Is the time to market much worse? Is the web platform excessively limiting? If the answer to any of these is yes, then why? (And how does developing a Flash or Silverlight app compare?)"} {"_id": "252224", "title": "When is it appropriate to do calculations in front-end?", "text": "My team is developing a web-based finance application, and there was a bit of an argument with a colleague about where to keep the calculations - purely in the back-end, or keep some in the front-end too? Brief explanation: We are using Java (ZK, Spring) for the front-end and Progress 4GL for the back-end. Calculations that involve some hardcore math & data from the database are kept in the back-end, so I'm not talking about them. I'm talking about the situation where the user enters value X, it is then added to the value Y (shown on screen), and the result is shown in field Z. Pure and simple jQuery-ish operations, I mean. So what would be the best practice here: 1) Add the values with JavaScript, which saves a round trip to the back-end, and then validate them at the back-end \"on save\"? 2) Keep all the business logic in the same place - therefore bring the values to the back-end and do the calculations there? 
3) Do the calculations in the front-end, then send the data to the back-end, validate it there, do the calculations again, and only if the results are valid and equal, display them to the user? 4) Something else? Note: We do some basic validation in Java, but most of it is still in the back-end, as is all the other business logic. The increase in data that would be sent by recalculating everything in the back-end wouldn't be a problem (small XML size; servers and bandwidth would withstand the increased amount of operations done by users)."} {"_id": "87049", "title": "How come many project-hosting sites don't have a forum feature?", "text": "I'm considering starting an open-source project, so I shopped around some popular project hosting sites. What I find surprising is that many (see here for a nice feature table) of the popular project hosting sites (e.g. GitHub, BitBucket) don't have a forum feature, i.e. a place where users can talk to the devs, ask questions, raise ideas, etc. IMHO an active forum is an important factor in creating a user community around a project, so I would expect that most project owners would be interested in such a feature. I've also noticed that some projects _do_ have support forums (or mailing lists) hosted elsewhere - e.g. Ruby on Rails is hosted on GitHub but has a Google Groups support group, and TortoiseHG is hosted on BitBucket but has a mailing list on SourceForge - so it's not like this feature is unneeded. So how come many project hosting sites don't have a forum feature?"} {"_id": "187844", "title": "Is there any taxonomy/language to describe user interfaces?", "text": "I'm currently researching the options I have to build an 'Online Help System'. This system should offer the user information about dialogs - which consist mainly of forms. So my main question is: Is there any taxonomy/language to describe user interfaces? I have searched a lot on this subject matter, but end up with things like XUL, which is not what I'm looking for. I'm looking for a structure / language / taxonomy to describe the user interface (of the dialogs). However, I'm not sure of its existence. Since I'm not, I would also be pleased with any (search) terms which are worth investigating in this case. Besides this, I'm also looking for languages / taxonomies which are used for regular Online Help systems. The things I came across were: DocBook, DITA and MAML. Are there any other big players in this 'market'?"} {"_id": "246053", "title": "Cloud Newbie ... what should I know while creating my App", "text": "I have never developed for the cloud, and to tell you the truth I am not fully sure what the cloud is when it comes down to servers and services. I am starting to develop an app where I will need to be able to scale as needed. The app has a lot of user entries as well as file uploads. Think of it as a \"forum\" (this is the closest model I can think of that applies to my app). Users will be able to register, create \"threads\", create \"polls\", answer, and vote. They can also upload images, videos, and audio files. The person who previously created the website used 'auto-increment' fields to keep track of the user ids. While rebuilding the page I was thinking of dropping that part and using UUID() for all the user entries; this, I believe, might solve the concurrency of entries at the same millisecond, weird locks, duplicate ids, or possibly other problems... but... other than that, I do not have a clue of what the \"cloud\" would do. 
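(For illustration, a minimal sketch of the UUID idea in Java - java.util.UUID standing in for MySQL's UUID(); the class name is made up:)

    import java.util.UUID;

    public class UserIds {
        // A random (version 4) UUID as the primary-key value, instead of
        // relying on a single auto-increment counter in the database.
        public static String newUserId() {
            return UUID.randomUUID().toString(); // e.g. f47ac10b-58cc-4372-a567-0e02b2c3d479
        }
    }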
Those are my questions: * I will use MySQL as the DB; will I have to connect to different DBs to write, to alleviate the stress, or should I not even worry about that because the \"cloud\" will take care of it? (I will have replication for the reads.) * When accessing files, will I have to know on what server the files were uploaded to be able to serve them, or will the \"cloud\" magically do that? * If I increase the \"cloud\" space by adding a new server, what should I know about it? Simply deploy it and everything is up and running, or will I need to do something to my code to tell it \"hey, there is another instance, now do this and that differently\"? Thank you, and I hope you understood what I meant."} {"_id": "87047", "title": "Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?", "text": "From a legal standpoint (licensing issues), can I legally, in agreement with the license, publish Fortran 90 wrappers (bindings) to the CUFFT library from the Nvidia CUDA Toolkit, under some open source license (either CC0, that is, public domain, or some kind of permissive license like BSD)? Nvidia provides only C bindings with their CUDA SDK. Header files contain the following text. /* * Copyright 1993-2011 NVIDIA Corporation. All rights reserved. * * NOTICE TO LICENSEE: * * This source code and/or documentation (\"Licensed Deliverables\") are * subject to NVIDIA intellectual property rights under U.S. and * international Copyright laws. * * These Licensed Deliverables contained herein is PROPRIETARY and * CONFIDENTIAL to NVIDIA and is being provided under the terms and * conditions of a form of NVIDIA software license agreement by and * between NVIDIA and Licensee (\"License Agreement\") or electronically * accepted by Licensee. Notwithstanding any terms or conditions to * the contrary in the License Agreement, reproduction or disclosure * of the Licensed Deliverables to any third party without the express * written consent of NVIDIA is prohibited. The `License.txt` file includes the following fragment > Source Code: Developer shall have the right to modify and create derivative > works with the Source Code. Developer shall own any derivative works > (\"Derivatives\") it creates to the Source Code, provided that Developer uses > the Materials in accordance with the terms and conditions of this Agreement. > Developer may distribute the Derivatives, provided that all NVIDIA copyright > notices and trademarks are propagated and used properly and the Derivatives > include the following statement: \"This software contains source code > provided by NVIDIA Corporation.\""} {"_id": "87041", "title": "At what point in the process do you create the visual design?", "text": "I like to do a wireframe before I start coding and work from there. But when should I start worrying about the visual design? When should I consider the colours, the font, whether corners should be round or sharp, icon design, etc.? Is that the last thing to do before launch? Or is there any reason to work on it while you're still in the coding phase, or the testing and debugging phase?"} {"_id": "45927", "title": "What is the role of QA in a BDD-driven project?", "text": "If running a project using BDD with 100% coverage of user stories by automated acceptance tests, what would be the role of a tester / quality assurance person? 
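(For reference, a rough sketch of what one of these automated acceptance tests might look like - JUnit style, with everything in-memory and invented for illustration:)

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AddToBasketAcceptanceTest {
        // Tiny in-memory stand-in for the real application.
        static class Basket {
            private final List<String> items = new ArrayList<>();
            void add(String item) { items.add(item); }
            int size() { return items.size(); }
        }

        @Test
        public void userCanAddAnItemToHerBasket() {
            Basket basket = new Basket();   // Given an empty basket
            basket.add("book-42");          // When the user adds an item
            assertEquals(1, basket.size()); // Then the basket contains it
        }
    }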
I guess I am imagining that developers would write the acceptance tests in conjunction with the product owner; let me know if that seems like a foolish assumption."} {"_id": "252228", "title": "How does requirements management work in the long term with Agile projects?", "text": "Requirements management in the short term for Agile projects seems like a solved problem to me. From the Scrum angle, new requirements or changes to existing requirements are delivered through User Stories. And User Stories grouped under an Epic or Feature facilitate the delivery of larger, more complex requirements. Of course, a User Story isn't technically a requirements document. It is a manageable grouping of work which maps to what is often called a _Vertical Slice_ of functionality. And the scope of these stories can be defined unambiguously through the use of Acceptance Criteria (AC). So, although User Stories aren't formal requirements, browsing through them can give you a pretty clear idea of their underlying requirements. In the short term. I say in the short term because, as a project progresses, the number of User Stories increases. Thus, browsing through an ever-increasing list of Stories to find a Requirement becomes less and less efficient over time. This problem is compounded when you consider User Stories that expand on, supersede, or even negate previous Stories. Now, imagine a 2-year gap between development iterations on a project (stable in production). The original team is gone, and so is all their knowledge. If the original team knew this was going to happen (e.g., it's the nature of the business), then what measures could they take to help subsequent teams? Sure, the backlog will provide some information, but it's hardly in an easily browsable form. So, what can be done to help subsequent teams understand the state of the project, including _why_ and _how_ it got there? In my experience, the following things don't work: * **Backlog grooming** to delete or update previous User Stories so that the Backlog can be read as a requirements document. * **Documentation Sprints** where team members are tasked with documenting the current state of the system. * **Documentation through Behaviour Tests**. This approach is the only one I have seen come close to working. Unfortunately, coded Behaviour tests are victims of the Naming Problem. Although the tests might properly document the system, getting a fluctuating team of developers to write their tests following the same Domain terminology, wording, and style is almost impossible. So, to reiterate: How does one manage Agile project requirements in the long term?"} {"_id": "254973", "title": "Tweaking a ranking algorithm from a few variables", "text": "I'm going to have a MySQL table with elements in it and would like to rank them in the same manner as Reddit, but not quite. I'd like to know how to add or remove importance for a variable in my ranking algorithm. For instance, what I have is a time (a timestamp, so a huge value) and a number of upvotes (a smallish value, under 50). How do I give more importance to the upvotes and less to the time?"} {"_id": "254978", "title": "What's wrong with comments that explain complex code?", "text": "A lot of people claim that \"comments should explain 'why', but not 'how'\". Others say that \"code should be self-documenting\" and comments should be scarce. Robert C. Martin claims that (rephrased in my own words) often \"comments are apologies for badly written code\". 
My question is the following: What's wrong with explaining a complex algorithm or a long and convoluted piece of code with a descriptive comment? This way, instead of other developers (including yourself) having to read the entire algorithm line by line to figure out what it does, they can just read the friendly descriptive comment you wrote in plain English. English is 'designed' to be easily understood by humans. Java, Ruby or Perl, however, have been designed to balance human-readability and computer-readability, thus compromising the human-readability of the text. A human can understand a piece of English much faster than he/she can understand a piece of code with the same meaning (as long as the operation isn't trivial). So after writing a complex piece of code in a partly human-readable programming language, why not add a descriptive and concise comment explaining the operation of the code in friendly and understandable English? Some will say \"code shouldn't be hard to understand\", \"make functions small\", \"use descriptive names\", \"don't write spaghetti code\". But we all know that's not enough. These are mere guidelines - important and useful ones - _but they do not change the fact that some algorithms are complex._ And they are therefore hard to understand when read line by line. Is it really that bad to explain a complex algorithm with a few lines of comments about its general operation? What's wrong with explaining complicated code with a comment?"} {"_id": "126545", "title": "Would you rather make private stuff internal/public for tests, or use some kind of hack like PrivateObject?", "text": "I am quite a beginner in code testing, and was an `assert` whore before. One thing worrying me in unit testing is that it often requires you to make `public` (or at least `internal`) fields that would have been `private` otherwise, to un-`readonly` them, make `private` methods `protected virtual` instead, etc... I recently discovered that you can avoid this by using things like the PrivateObject class to access anything in an object via reflection. But this makes your tests less maintainable (things will fail at execution time rather than compile time, they'll be broken by a simple rename, it's harder to debug...). What is your opinion on this? What are the best practices in unit testing concerning access restriction? Edit: consider, for instance, that you have a class with a cache in a file on disk, and in your tests you want to write to memory instead."} {"_id": "129641", "title": "Is there a canonical book on C Programming in GNU/Linux?", "text": "I am looking for a good ebook (or two) for learning the C programming language, specifically programming in a GNU/Linux environment. I'm not a beginner programmer, but I have almost no experience in this particular area. I need to learn both the fundamentals of C and the GNU toolchain (gcc, gdb, and make). For the C language side, I know K&R comes highly recommended, but it doesn't seem particularly practical for me given that it won't describe modern best practices, current libraries, or any particulars about GNU/Linux programming. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on C programming in GNU/Linux and covering the GNU toolchain? 
What about that book makes it special?"} {"_id": "129646", "title": "Bluetooth or Bump API for membership card functionality?", "text": "I am looking to implement membership card functionality inside a mobile application for a local coffee shop. The idea is to make payments at the point of sale and deduct the amount from the client's prepaid account. I was looking into NFC functionality, but only a few devices support it (and most are not available in Montreal yet). The other idea is to use the Bump API, which would work great for iOS and Android. The last resort would be to use Bluetooth somehow. Any thoughts on any of these approaches, or, for those who have tried this, any pros and cons of these ideas?"} {"_id": "213840", "title": "Implementing basic data structures in programming interviews", "text": "I have been preparing to do my first technical interview in a month, and I have a question about implementing common data structures like stacks or linked lists. I plan to do the interview in either Python or Java, but I am not sure which one to use yet. I was practicing and tried to implement a stack in Python, using the already built-in lists. However, lists in Python already have all the stack methods, while there is still some work to do when arrays in Java are used to implement a stack. If I am asked to implement a stack and my language of choice is Python, what should I do? Should I define a Node class? Would I even be asked to implement such a low-level data structure in Python anyway?"} {"_id": "245627", "title": "Is duplicate syntax for defining named functions a bad language design decision?", "text": "I am modelling a programming language for fun, and the syntax is heavily influenced by Scala - specifically function definitions. I have encountered a design problem because my language **does not differentiate** between functions defined via the `def` syntax (class methods) and anonymous functions assigned to values (created using `=>`) - it removes the differences in both implementation and behaviour. The result is that the following two definitions mean the same thing: def square(x: Int) = x*x val square = (x: Int) => x*x There is no reason for the latter form (immediate anonymous function assignment) to be used in any normal situation - it's simply _possible_ to use it instead of the `def` form. **Would having such duplicate syntax for defining named functions hurt the orthogonality of the language or some other design aspect?** I prefer this solution because it allows for short and intuitive definitions of methods and named functions (via `def`), and short definitions of anonymous functions (using `=>`). Edit: **Scala _does_ differentiate between the two** \- anonymous functions are not the same as methods defined with `def` in Scala. The differences are relatively subtle though - see the posts I linked before."} {"_id": "178576", "title": "How to handle sorting of complex objects?", "text": "How would one sort a list of objects that have more than one sortable element? Suppose you have a simple object `Car`, and `Car` is defined as such: class Car { public String make; public String model; public int year; public String color; // ... No methods, just a structure / container } I designed a simple framework that would allow for multiple `SortOption`s to be provided to a `Sorter` that would then sort the list. 
interface ISorter<T> { List<T> sort(List<T> items); void addSortOption(ISortOption<T> option); ISortOption<T>[] getSortOptions(); void setSortOption(ISortOption<T> option); } interface ISortOption<T> { String getLabel(); int compare(T t1, T t2); } Example use: class SimpleStringSorter extends MergeSorter<String> { { addSortOption(new AlphaSorter()); } private static final class AlphaSorter implements ISortOption<String> { // ... implementation of alpha compare and get label } } The issue with this solution is that it is not easily expandable. If `Car` were ever to receive a new field, say currentOwner, I would need to add the field, track down the sorter class file, implement a new sort option class, and then recompile the application for redistribution. Is there an easier, more expandable/practical way to sort data like this?"} {"_id": "218741", "title": "What is the benefit of switching on Strings in Java 7?", "text": "When I was starting to program in Java, the fact that switch statements didn't take strings frustrated me. Then, on using Enums, I realised the benefits that you get with them rather than passing around raw values \u2014 type safety (which brings easier refactoring) & also clarity to other developers. I'm struggling to think of a situation where with SE7 I'll now decide to use a switch with strings as inputs rather than Enums. If they are implemented via switching on whole strings (e.g. rather than partial or regex matches), it doesn't seem to give the code any less reason to change. And with IDE tools and the read-write ratio of coding, I'd be much happier auto-generating an extra Enum than passing around string values. What benefit do they bring to us as programmers? Less boilerplate? It doesn't feel like the language was crying out for this feature. Though maybe there's a use-case I'm overlooking."} {"_id": "212469", "title": "Real-World node.js", "text": "I am 14 years old, and have been studying web/app/software development. I am trying to learn a backend language, and am considering node.js. I have heard awesome things about it, plus I like the fact that I can already use the JS I know in it. However, I am concerned that it may just be a \"trendy/hip\" technology, and that it may not be being used in the real world that much. Are people really applying node.js in production? Or is it just hip right now?"} {"_id": "67590", "title": "What do you call a cron that cksums all your files and writes them to a database?", "text": "It is a way to take inventory of many CGI scripts and web applications in a Software-as-a-Service environment. It will be a way of creating changelogs and keeping track of which customers have which programs, and it will use diff and cksum to notice, compare, and identify versions. It \"takes inventory\", but we actually have inventory systems for non-software companies (distributors, etc.), so I would prefer not to have confusing names. Edit: We do not use a code repository. So this is not as much for compromised-file checking (although it is pretty cool that this will be covered), but it is really more a replacement for a repository and a way to keep track of which customers have what products, to make sure we don't get out of sync with billing. We actually make most of our modifications live. Since these are not public systems, it does not matter that much. 
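(To make the cron job concrete - a hedged Java sketch of the core step: walk a directory tree and print a SHA-256 checksum per file; writing the results to a database table is left out, and HexFormat needs Java 17+:)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class ChecksumWalker {
        public static void main(String[] args) throws Exception {
            Path root = Paths.get(args[0]);
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            // Visit every regular file under the root and print "checksum  path".
            try (var paths = Files.walk(root)) {
                paths.filter(Files::isRegularFile).forEach(p -> {
                    try {
                        md.reset();
                        byte[] digest = md.digest(Files.readAllBytes(p));
                        System.out.println(HexFormat.of().formatHex(digest) + "  " + p);
                    } catch (IOException e) {
                        System.err.println("skipped " + p + ": " + e.getMessage());
                    }
                });
            }
        }
    }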
If it is public, we either create copies of the program, or we run the program on a PC, test it, and then upload it to be live."} {"_id": "47719", "title": "Unit testing data access objects", "text": "I have recently started using test-driven development and unit testing, and it has paid off immensely in the areas where I have applied it. One area that it has been helpful in is database access. When I abstract away the data access, the testing of methods that need the data becomes almost ridiculously easy. However, I haven't been able to find or figure out a way to test the DAOs themselves. (It would be self-defeating to abstract them out!) A DAO should in theory just move data back and forth between the database and the application; can this be considered too simple to test? I have experimented with setting up a Derby database on my local machine for tests, but it's difficult to automate starting the server, creating the databases, and creating the tables. Is there any way to automate testing of data access objects?"} {"_id": "179448", "title": "Why does the Scrum guide say no testers?", "text": "I have been reading the Scrum Guide from scrum.org and it says: > Development Teams do not contain sub-teams dedicated to particular domains > like testing or business analysis. In its literal translation this means that there are no testers, which is confusing. How can they be suggesting this?"} {"_id": "212463", "title": "When using an ORM should mappings be defined in the code file?", "text": "Doctrine offers three ways to define the object mapping properties: in XML, in YAML, and as inline docblock annotations in the code. The Doctrine documentation doesn't give any advice on choosing among them, but I'm specifically wondering if there is a generally accepted best practice with regard to keeping mapping metadata for an ORM system in the file where the objects are defined versus keeping it in a separate data file. Is there?"} {"_id": "195949", "title": "How to create in Python Tkinter a widget that would act like a choice tree?", "text": "I would like to know if there is such a widget in Tk (or in any other standard Python 3 module), or how to create it: ![enter image description here](http://i.stack.imgur.com/sg5g9.png) Of course it doesn't have to look like this, but it should offer the same functionality. Unfortunately, I cannot use PyQt, wxWidgets, or any other non-standard modules."} {"_id": "6905", "title": "What unit test frameworks exist for Java?", "text": "I've used TestNG and JUnit. What other frameworks are out there? What makes them special and/or different from the rest?"} {"_id": "12423", "title": "FP and OO orthogonal?", "text": "I have heard this time and again, and I am trying to understand and validate the idea that FP and OO are orthogonal. First of all, what does it mean for 2 concepts to be orthogonal? FP encourages immutability and purity as much as possible, and OO seems like something that is built for state and mutation (a slightly organized version of imperative programming?). And I do realize that objects can be immutable. But OO seems to imply state/change to me. They seem like opposites. Does that mean they are orthogonal? 
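(For what it's worth, a small Java sketch of OO without mutation - an immutable value object whose 'mutators' return new instances; all names are mine:)

    // An immutable object: all fields final, no setters.
    final class Account {
        private final String owner;
        private final long balanceCents;

        Account(String owner, long balanceCents) {
            this.owner = owner;
            this.balanceCents = balanceCents;
        }

        // "Mutation" produces a new value instead of changing state in place.
        Account deposit(long cents) {
            return new Account(owner, balanceCents + cents);
        }

        long balanceCents() { return balanceCents; }
    }

    // Account a = new Account("ada", 0).deposit(500); // the original is untouched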
A language like Scala makes it easy to do both OO and FP; does this affect the orthogonality of the two approaches?"} {"_id": "213594", "title": "Test Driven Development: A good/accepted way to test file system operations?", "text": "I am working on a project at the moment that generates a table (among other things) based on the contents of a file system, and in turn does some metadata modifications on the things it finds. The question is: how should tests be written around this, or set up? Is there an easy way to mock this out? Or should I set up a \"sandbox\"?"} {"_id": "82133", "title": "Solution organisation", "text": "I have been working on ASP.NET applications for 6 years, but almost all of it has been extending and maintaining existing applications. I now have the need to develop a new application and I am scratching my head :( There is a lot of material on software design principles and patterns, but not much content on organisation. Should the various layers be in separate namespaces, folders or projects? I do plan to create WCF services in the future, so having the service layer as a project makes sense, but I am unsure of how many projects I should have. By default, an MVC 3 web site has Models and Controllers in the same project; does it make sense to separate them into different projects? I would be very, very appreciative if someone could post a screenshot of a well-organised MVC 3 solution. I understand this probably depends on personal preference and project size, but I need some kind of guidance. Our main application has over 70 projects in a mammoth solution... Please help me avoid this. Many thanks."} {"_id": "82131", "title": "Does a License Change for OSS cover all versions, including previous releases?", "text": "If we decide to use a piece of open-sourced software that is currently released under a license that allows inclusion in an internal closed-source system, what protection are we offered if the owners of the open source software decide to change the license? Typically I'm thinking of cases where the new license does not allow inclusion in closed-source software (I think the PDF generation API iText did this back in 2009). Can the new license be applied retrospectively to the old versions? I'm guessing it can be, but often isn't. We are just looking at the worst-case scenario: would we be in a position where we could just continue using an old version for as long as it suited and then make a decision about the future - or would we be in a position where we'd have to decide to either pay the commercial fee that is often available or find a different \"free\" alternative?"} {"_id": "123116", "title": "If I want my client program to be free software, do I also have to release the server software as free software?", "text": "Imagine I've written a game and I want to make it free software. Am I also required to make the game server software free software because the game uses it to connect to other players to play against? Imagine I've written a stock ticker and I wish to make it free software. Can I charge for the subscription to the stock information, even though the software serves little purpose without paying for such a subscription? 
I'm also interested in revenue sources for free software that go beyond charging for distribution or support - ones that counteract one person purchasing your software and then distributing it themselves and undercutting your prices."} {"_id": "206237", "title": "Does understanding functional programming help in understanding popular javascript libraries?", "text": "I am not experienced in JavaScript. I try to use some popular JavaScript libraries such as jQuery, Angular.js and Meteor.js. I wonder whether understanding the logic of functional programming (in JavaScript, of course) will help in understanding and using these libraries better."} {"_id": "206230", "title": "Unit test strategy for layered (or derived) method calls", "text": "Forgive the title -- it needs work. I am struggling to find better English to express my issue. Edits encouraged. Example to describe my issue: ## Checker Method I have an argument-checking method called `public static void StringArgs.checkIndexAndCount(String, int, int)`. Given a string, an index, and a count, confirm the string is not null and the index & count are reasonable. Unchecked (runtime) exceptions are used to report errors. There is a battery of unit tests written to check all angles of this method. ## Layered (or Derived) Method The checker method is called by other methods, such as `public static String removeByIndexAndCount(String, int, int)`. The first line of this method checks the arguments by calling the above checker. ## Unit Test Strategy When I write unit tests for the second/layered/derived method, how do I account for the existing set of unit tests for the checker method? It seems to violate duplication/copy-paste principles to simply re-add the same unit tests (modified slightly) from the checker method to the second method. Please advise. My code is Java, but I don't think that's particularly relevant, as this same issue could occur in any language."} {"_id": "238355", "title": "How do we call database table entity in Java?", "text": "For each database table, I create an entity (object) which contains all columns of the table as class fields. Now, what do we call these entities in Java? I could not find an official term, so I read on forums that people refer to them as `entities`, `models`, etc. I'd like to know either the Java-standardized term or the term accepted by the Java community."} {"_id": "171117", "title": "Using Mock for event listeners in unit-testing", "text": "I keep having to test this kind of code (language irrelevant): public class Foo() { public Foo(IDependency1 dep1) { this.dep1 = dep1; } public void setUpListeners() { this.dep1.addSomeEventListener(.... some listener code ...); } } Typically, you want to test that when the dependency fires the event, the class under test reacts appropriately (in some situations, the _only_ purpose of such classes is to wire up lots of other components that can be independently tested). So far, to test this, I always end up doing something like: * creating a 'stub' that implements both an `addXXXXListener`, which simply stores the callback, and a `fireXXXX`, which simply calls any registered listener. This is a bit tedious since you have to create the mock with the right interface, but that will do * using an introspective framework that can 'spy' on a method, and injecting the real dependency in tests Is there a cleaner way to do this kind of thing? 
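(To make the 'stub' option above concrete - a minimal Java sketch; the single-method listener interface and all names are invented for illustration:)

    import java.util.ArrayList;
    import java.util.List;

    // Invented stand-ins for the real interfaces.
    interface SomeEventListener { void onSomeEvent(String payload); }
    interface IDependency1 { void addSomeEventListener(SomeEventListener l); }

    // Hand-rolled stub: stores registered listeners and can fire on demand.
    class StubDependency implements IDependency1 {
        private final List<SomeEventListener> listeners = new ArrayList<>();

        @Override
        public void addSomeEventListener(SomeEventListener l) {
            listeners.add(l);
        }

        // Test helper: simulate the dependency firing its event.
        void fireSomeEvent(String payload) {
            for (SomeEventListener l : listeners) l.onSomeEvent(payload);
        }
    }

    // In a test: inject the stub into Foo, call setUpListeners(), then
    // fireSomeEvent(...) and assert on Foo's observable behaviour.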
EDIT: to clarify what I mean, my trouble is this - I could write something like: public class FooTest() { public void testListensToDependencies() { IDependency mockDependency = createMock(IDependency.class); Foo tested = new Foo(mockDependency); tested.setUpListeners(); expect(mockDependency.addSomeEventListener).toHaveBeenCalled(); } } However, obviously this test would pass _no matter what the listener does_. This is troubling me, since I want to test that I am wiring my objects to do the proper thing. The next best thing would be: public class Foo() { public Foo(IDependency1 deps) { .... } public void setUpListeners() { // In a language with first-class functions: this.dep1.addSomeEventListener(this.handleSomeEvent); // In a language without them: this.dep1.addSomeEventListener(new Foo.SomeEventHandler()); } And then I would write tests to: * check that 'addSomeEventListener' was called either with the right handler function, or with an instance of the right event handler class * check that the handler function, or the event handler class, actually implements the expected behavior. Now I am not sure if that would be clearer, or if it is a sign that I'm wiring my objects in the wrong place..."} {"_id": "104867", "title": "What does the latest \"C++ Renaissance\" mean?", "text": "There has recently been some buzz about a C++ renaissance, the most noteworthy voice being that of Herb Sutter, Chairman of the C++ Standard Committee. You can search for \"C++ renaissance\" on Google and you'll find a bunch of links, including Herb's talk at \"C++ and Beyond\" and other talks on Channel9 from Microsoft. The key argument here is that with the Cloud trend becoming clearer and more popular than ever, a lot of the dev work on the cloud side calls for high-performance native languages, which on some level basically means C/C++. I don't mean to start again some flame war about C vs C++. But I do want to know to what extent this trend affects the growth and expansion of the C++ community and its popularity. How exactly is C++ used on the cloud side? High-perf backends? More? How large, exactly, is the Cloud dev market anyway? P.S. I've been lucky to be able to use C++0x in my project recently, and it _is_ *awesome* (VC++10). The most useful feature for everyday programming is of course lambda. And the second I would say is rvalue references (I finally had the courage to return vector!!) Below are Herb's words from C++ and Beyond: > I\u2019ve been asked to give the opening \u201cWelcome, Everyone!\u201d talk at C&B 2011, and it\u2019s time to cover an increasingly open secret: After a decade-long > affair with managed languages where it became unfashionable to be interested > in C++, C++\u2019s power and efficiency are now getting very fashionable again. > At the same time, C++ has been getting easier to use; key productivity > features from the C++0x standard (aka C++11), like auto and lambdas, are > increasingly widely available in commercial compilers and making using C++ > easier than ever before without sacrificing its cornerstone \u2014 efficiency. > > This opening 40-minute talk covers the reasons why C++ is now enjoying a > major renaissance, and why that will continue to grow over the next few > years because of industry trends from processor design to mobile computing > to cloud and datacenter environments. > > We already know that C++ is \u201cthe\u201d language of choice for demanding > applications. 
Here, we\u2019ll cover why \u201cdemanding applications\u201d increasingly > means \u201cmost applications\u201d and will be the bread and butter of our industry > for the foreseeable future. We\u2019ll see why and where other languages are > still appropriate, but why C++\u2019s applicability and demand is now again on an > upswing more so than it has been for over a decade."} {"_id": "171113", "title": "Is SOA a Utopia?", "text": "I have attended many SOA-related sales pitches and presentations through the years. SOA projects have died because of lack of interest or because of grandiose scopes. The very buzzword has lost momentum. Has anyone seen SOA implemented? Or is it a kind of utopian vision one must strive for? Does one have to believe in SOA without touching it or having seen it?"} {"_id": "110711", "title": "Front-End Java Web App Framework", "text": "I have not done web development in Java for the past three years, and now I need to use a framework for a project I am working on. I am thinking of using Seam, Flex + BlazeDS, Struts2 or Spring MVC. The most attractive of the four is Flex; however, I am trying to stay away from it since the app would end up being entirely in Flash. Struts2 and Seam are mature but might require more time to learn. I had basic experience with the Spring framework in the past, so I am also considering its MVC. Should I use one of these or go for some other framework?"} {"_id": "171111", "title": "Using Completed User Stories to Estimate Future User Stories", "text": "In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story. **Is there a methodology for breaking down the complexity of user stories into quantifiable attributes?** For example, User Story X requires a rich, new view in the GUI, but User Story X can perform most of its functionality using existing business logic on the server. On a scale of 1 to 10, User Story X has a complexity of 7 on the client and a complexity of 2 on the server. After User Story X is completed, someone asks how long it would take to complete User Story Y, which has a complexity of 3 on the client and 6 on the server. Looking at how long it took to complete User Story X, we can make an educated estimate of how long it might take to complete User Story Y. I can imagine some other details: * The complexity of one attribute (such as complexity of the client) could have sub-attributes, such as the number of steps in a sequence, function points, etc. 
For the purpose of this question we have: * a web service at example.com * a secret key shared between a user and the server [K] * a consumer ID which is known to the user and the server (but is not necessarily secret) [D] * a message which we wish to send to the server [M] The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this webservice to be easily accessible and not have trouble with different file formats. I was thinking of an alternative, which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R) The user then passes the random string to the server and the server checks the HMAC server side (using the random string + shared secret). As far as I can see, this produces the following issues: * There is no message integrity - this is OK; message integrity is **not** important for this service * A user could re-use the hash with a different message - I can see 2 ways around this 1. Combine the random string with a timestamp so the hash is only valid for a set period of time 2. Only allow each random string to be used once * Since the client is in control of the random string, it is easier to look for collisions I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the **correct** answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable 2nd best?"} {"_id": "19199", "title": "Is this technique a design pattern? If so, what's it called?", "text": "I'll use C# as an example, but it should apply globally. Say I have a string value that should be one of a few constants, but I also want the client to set which string value to use, so:

    private int foo;
    private string bar;

    public int Foo
    {
        get { return foo; }
        set
        {
            foo = value;
            bar = getStringValueFromDatabase(value);
        }
    }

    public string Bar
    {
        get { return bar; }
    }

I use this technique quite a lot and want to know if it's considered a formal concept."} {"_id": "224503", "title": "ruby for desktop app or web app development", "text": "I am a beginner to Ruby. I just did some minor research about why Ruby? Why choose Ruby? What's new in it? Whenever I type the word Ruby into a Google search there comes a suggestion like Ruby on Rails, so my mind changed to learning about Ruby on Rails. I went through some forums but got the same answer from everyone: Ruby on Rails is a web development FRAMEWORK. Once again I learnt about what a framework is. There are a lot of questions which are, you know, itching my mind, and a variety of answers from everyone; I don't know which is the right one. First, if anyone wishes to give me a reply, tell me the solution for this: just like Java, we run Ruby programs in the command prompt by moving to the directory in cmd where Ruby is installed. It is similar to Java.
So can we create a desktop application by using it? I mean something like the file searching program which comes by default in Windows. We can create the same file searching program using Java Swing and playing with some string functions, can't we? Can we do the same with Ruby? That's my first question. I will raise my doubts about Ruby on Rails after I get clarity on the above question."} {"_id": "117726", "title": "Becoming a part-time freelancer", "text": "I have a full time job as a senior software developer. It pays well but the work is unsatisfying for many reasons. I often daydream about going freelance but don't want to throw away a stable paycheck without dipping my toe in the freelancing water first. I have recently developed a commercial-grade application in my spare time, so I have a healthy respect for the kind of lifestyle having two jobs can lead to, but the experience has also given me the confidence that I can pull it off, at least for a while. I joined eLance but was quickly disheartened by the number of bids at very low prices by well established studios and teams of programmers with good ratings. How can I compete with other freelancers if I am only willing to work evenings and weekends? I can prepare a solid portfolio which will certainly give me an advantage over 'sweatshops' in third world countries but I still struggle to see why anyone would employ me when they could get their project completed in less time by employing a full-time developer or a dedicated studio. Should I 'fail to mention' that I already have a full time job? This doesn't seem honest though, and I don't want freelance contractors phoning me while I'm at work. I'm willing to work for cheap at first. Contracting and travelling are out of the question. Where can I find part-time programming work from home?"} {"_id": "117721", "title": "Do employers and headhunters maintain a database somewhere about programmers?", "text": "Is there a website being maintained somewhere in which trash talk and nasty allegations about workers are entered by employers and headhunters, sort of like a reverse glassdoor.com? I ask this as a worker, NOT as an employer."} {"_id": "193469", "title": "Application Architecture", "text": "First of all, I am new here and I hope that this is the right place for my question. I have a question about the recommended architecture of a project. IDEA: Automate some calculations concerning aerodynamics. I have some input data and I need to process this data in several sub-calculations. You can think of such a calculation as a black box: INPUT => CALCULATION => RESULTS Some of these calculations are already available as open source programs and some of them I need to program on my own. This shouldn't be a problem at all. But I want to connect these calculations in a flexible way. For example: INPUT => CALCULATION 1 => CALCULATION 2 => CALCULATION 3 => RESULTS The number and order of calculations should be changeable. Should I see each calculation as a standalone program and connect them in a kind of framework where I can specify the order and arguments of the program calls? Or are there other techniques to solve problems like this? Thanks a lot for each piece of advice and every recommendation. Sebastian"} {"_id": "193468", "title": "Communication between View and Model", "text": "I have a basic issue with the MVC architecture. I am aware that the View usually listens to the Model. But how are user requests propagated to the Model? Currently I do it like this when the user clicks, e.g., the update button in the GUI (rough sketch below):
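A minimal sketch of that chain (hypothetical names; Java 8+ Swing-style wiring assumed, since an `ActionListener` is involved):

    import javax.swing.JButton;

    class View {
        private final Controller controller;
        private final JButton updateButton = new JButton(\"Update\");

        View(Controller controller) {
            this.controller = controller;
            // The ActionListener just delegates: GUI event -> View method.
            updateButton.addActionListener(e -> onUpdateClicked());
        }

        void onUpdateClicked() { controller.update(); } // View -> Controller
    }

    class Controller {
        private final Model model;
        Controller(Model model) { this.model = model; }
        void update() { model.update(); } // Controller -> Model
    }

    class Model {
        void update() { /* change state, then notify listeners (the View) */ }
    }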
So the `ActionListener` calls a method of the View. The View calls a method of the Controller. And the Controller calls a method of the Model. But I have three concerns about this: * The View must be aware of the Controller. * The long call chain does not seem to be the right way. * With the number of user actions, the number of those call chains increases. What are the best practices here?"} {"_id": "193463", "title": "What is a Service Locator?", "text": "I've heard the term pop up all around. I've read various articles regarding the subject and heard two main definitions of the term \"Service Location\": 1. A glorified Registry - Bad practice, global variables, general evil. 2. A type of Dependency Injection Container - Can help with managing dependencies, making modular and extensible applications, generally helpful. But I can't differentiate between the two. What does Service Location really mean? Can you give a simple example of a Service Locator? Is it good or evil?"} {"_id": "193462", "title": "JavaScript codes complexity and maintainability", "text": "I am trying to make my way back to JavaScript (I was last there some 7 years ago) with the help of the lovely \"Eloquent JavaScript\" book. While I admire the author's capabilities and approach, I have also begun to be concerned. I am from a C/C++ background, and there I learnt the rather hard way that funky constructs often undermine project survival. Experience with Perl proved different: being capable of cool tricks seemed a valuable perk. **So, what is the common view on JavaScript code complexity? Does using it at \"full speed\" (OOP done its own way, higher-order functions everywhere, etc.) help in having maintainable projects?** Thanks for your answers"} {"_id": "161341", "title": "Is speed a parameter for responding to emails of technical tests?", "text": "I am sending my CV to different companies and some of them have replied asking me to complete a test to get an idea of my skills. I wonder if the time taken to respond to this email is a key factor in being selected? I have a job now and I need to use my spare time to answer those questions. So, I don't know if I should take time to answer the questions carefully, or try to answer them the same day, even if that implies not sleeping much that night."} {"_id": "161340", "title": "Which CSS attributes should be in HTML and which in BODY?", "text": "I have the following:

    html {
        overflow-y: scroll;
    }

    body {
        font-family: Georgia, \"Times New Roman\", serif;
        font-size: 1.125em;
        line-height: 1.5em;
        margin: 0 auto;
        max-width: 41em;
    }

Which attributes should be in the HTML section, and which within the BODY section?"} {"_id": "193467", "title": "What tools can I use for professional document-creation and -printing in PHP?", "text": "I'm in the process of building a new pledge management system for one of my clients, a charity foundation. I have already built one for them (it was done using Delphi), but its feature-set is a little limited. Now, I have decided to move to PHP and use the Laravel framework to manage the database for the new system - its Eloquent ORM allows me to easily implement new features that are needed at present. This is good and well \u2013 I know everything that I need to do there. However, I am not sure which direction to take when it comes to creating, saving, and printing documents that will be hand-delivered to donors. At the moment, it uses Word to process documents - i.e.
creating invoices, letters of appreciation, and any other templates that may be selected during a pledge submission. The thing is, I am not extremely happy with the implementation, and don\u2019t believe this is the best route to take. There have been times where the documents did not print correctly, or multiple documents were printed (where only one was needed). Nonetheless, I could stick to Word, provided that PHP can handle it properly and show me progress as it goes along (i.e. **Opening Word** -> **Opening Invoice Template (companies)** -> **Replacing Variables** -> **Saving** -> **Printing** -> **Closing** -> **Opening Food Project Mission Statement** -> **Replacing Variables** -> **Saving** -> **Printing** -> **Closing** \u2026 etc.). I guess that progress notifications are not necessary, but I would like to have them there - just in case it freezes, the user would be able to see whatever it may be struggling with. I would assume that I could do this using a jQuery script (with AJAX) that interfaces with a PHP script, but I honestly have no idea how to do it. (Note: I\u2019d have to use AJAX as Laravel uses buffered output.) I also know that I could pass the information about a pledge to an EXE which would handle the printing on its own, but I don\u2019t think I want to use this implementation as I have plans to make the system cross-platform. **Question 1:** Is there a package for PHP that allows me to create documents, save them, and then print them? If not, is there a suitable package that handles Word without difficulty, and with a large array of features? If it is the latter, I would need to be able to access the full COM API so that I can prevent dialogs from popping up in the background and pausing the procedure. **Question 2:** Is there a package (jQuery, AJAX) available that would allow me to track the progress of the document-creation procedure? **EDIT** Having reconsidered everything, and weighed the pros and cons, I would like to put the emphasis on **printing** here. When the user submits a pledge to the database, I need the document to print immediately. I do not necessarily want to focus on a document creation tool only. The reason I asked my question as I did is because I would, ideally, like to find something that can do both the creation and printing of each document. In addition, I do not want to make it too difficult for myself. This is why I originally chose Word - and, it was a lot easier to manage from a Delphi application. Because of this, I will be leaving the question open (just in case something very interesting is proposed), but will be asking a new question that is a little more specific to the problem that this question originated from. (To those who have answered so far, thank you for your help and for showing me the various options.)"} {"_id": "126893", "title": "How do you handle time series in .Net?", "text": "I have been looking for Time Series models over on SO, but I figured this place was the best site to actually ask the question. I am wondering what is the \"best\" way to handle Time Series in .Net. First, I would define a Time Series as a mapping from `DateTime` keys to a single value of type `T`. The value can possibly be `null`. Several methods should be available so that the series can be manipulated easily. For example, you want to be able to `Filter` the data (based on criteria on dates and/or on values). You would also like to have an `Apply` method, which applies a given function to all the values (a rough sketch follows).
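Something like this, where Java syntax is used purely for illustration and the names `TimeSeries`, `filter` and `apply` are hypothetical (the .NET equivalent would be analogous):

    import java.time.LocalDate;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.function.BiPredicate;
    import java.util.function.Function;

    // A hypothetical immutable time series: every operation returns a new
    // instance, so there are no surprises when instances are captured by closures.
    final class TimeSeries<T> {
        private final TreeMap<LocalDate, T> points;

        TimeSeries(Map<LocalDate, T> points) {
            this.points = new TreeMap<>(points);
        }

        // Keep only the (date, value) pairs matching the predicate.
        TimeSeries<T> filter(BiPredicate<LocalDate, T> keep) {
            TreeMap<LocalDate, T> out = new TreeMap<>();
            points.forEach((d, v) -> { if (keep.test(d, v)) out.put(d, v); });
            return new TimeSeries<>(out);
        }

        // Apply a function to every value, yielding a new series.
        <R> TimeSeries<R> apply(Function<T, R> f) {
            TreeMap<LocalDate, R> out = new TreeMap<>();
            points.forEach((d, v) -> out.put(d, f.apply(v)));
            return new TimeSeries<>(out);
        }
    }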
Ideally, you would be willing to run some of these functions in parallel to increase performance. Several data structures already implement such features, such as a `Dictionary`, possibly even a `SortedDictionary`. However, these are mutable structures, which can be an issue if there are some closure effects. Which model would/did you use? Do you know of an existing library already widely used for this matter?"} {"_id": "126890", "title": "Encapsulating a single property", "text": "If you have a single property that is relevant across a full project, but of which you should logically have only a single representation, how would you represent this? In my case, I am developing a simple drawing app where the user can select a colour. This colour should then be available to all the pen/shape classes in the project. Would I simply have a static class with a static property Color CurrentColor (C#), that each client class could then call, or is there a more effective way?"} {"_id": "250223", "title": "Spring Controllers and Services", "text": "We are in the middle of a project. It's a REST service. We have controllers to handle the request mappings and the forwarding to the services. As a concrete example, we have a UserController, GroupController, UserService and a GroupService. The UserController has the UserService injected and the GroupController has the GroupService injected (a one-to-one dependency). When I create/update/delete a user the scenario is simple: the UserController tells the UserService and it does this. Now the system has a lot of other controllers/services that are going to be added. But what I don't know is: what is the best way when I have to mix user/group actions? For example: I have to add a user to a group. But I must check that the user exists, that the group exists, and afterwards save the information in the user (Entity/DTO) that the user is now part of this group. 1) Should I manage it all from the UserController: -userService.checkUser() -groupService.checkGroup() -userService.updateUser() 2) Or should I manage it all from the UserService: -userService.addUser() (in the UserService then: -checkUser() -checkGroup() -updateUser() ) In solution 1, what I don't like is the injection of other services that don't belong to users. In solution 2, I have the feeling that I'm going to replicate concepts that exist in the GroupService... The example above is only a little part of the big picture, but what I would like to understand is how I can design this part so as not to have a lot of dependencies and a lot of code replication?"} {"_id": "126895", "title": "Need help understanding the reference operator (C++) in specific functions", "text": "In the current semester at university we are working on OOP with C++. I would like to understand the difference between a pointer and a reference operator. The differences that I understand are: 1\\. You cannot change the object that the reference variable is bound to 2\\. We can use reference variables to refer to the bound object without having to type the & operator (in contrast with pointers, where we would write *pi = 5;) Also, does a reference variable contain the address of the object it is bound to? For example: int i; int &ri = i; Does ri contain the address of i here?
And why, when overloading the ++ operator in this enumeration example, do we use the dereference or reference (*) operator before the name of the function?"} {"_id": "244971", "title": "Exception handling in WinForms application", "text": "When handling exceptions, for example in a method in my presentation logic, is it OK to catch all possible exceptions in a single catch block as follows, if the only purpose here is alerting the user?

    private void Do() {
        try {
            // Do some stuff here which might throw an exception
        } catch (Exception e) {
            MessageBox.Show(e.Message);
        }
    }

Or should we always catch every possible exception (like `OutOfMemoryException`, `NullReferenceException` etc., followed by more generalized exceptions) in separate catch blocks? Since the information in e.Message is not relevant for average users, we could do something like MessageBox.Show(\"Exception occurred and contact system administrator\"); Is that the standard way? NOTE: My sole purpose is to alert the user and try to keep the system up (without crashing)."} {"_id": "184313", "title": "How can \"hash functions\" be used to implement hash maps at all?", "text": "My understanding is that hash maps allow us to link, say, a string, to a certain memory location. But if every string were to be linked to a unique place in memory, it would need a huge block of empty memory. I don't get it."} {"_id": "244977", "title": "Best way to let users/visitors alter the website design", "text": "What I am trying to do is give users/visitors the option to alter the whole website based on their taste. So they can, for example, move the sticky bar from top to bottom, alter background colors, move the news box from left to right, etc. For users I will probably store all the information in the DB, but I am not sure how I should handle this afterwards. Should I store all the settings in a cookie instead of grabbing that info from the DB all the time and \"rebuilding\" the website based on their taste, or do you have any other idea?"} {"_id": "82792", "title": "Red flags of unpaid IT internship", "text": "I read the following question: Tips for a first year CS student looking for a summer internship to gain experience? But rather than how to get and/or look for internships, my question concerns how to filter companies looking for free labor to do their website VS companies that might not offer a paycheck, but will offer you mentorship and skills. I have a cousin who is a college freshman and she is looking for software development internships, but at this point, she is desperately looking for any internship that is IT related. Besides the big software names and elite small software shops, how do you recognize a good internship in non-IT companies (large or small) doing IT maintenance vs the bad ones? If you don't have work experience, what questions should a college student ask to avoid a ripoff? Are there any red flags that you can spot before accepting an offer?"} {"_id": "213334", "title": "Django REST + Backbone/Ember/Angular Implementation Method", "text": "http://stackoverflow.com/questions/10941249/separate-rest-json-api-server-and-client In light of this post, I wanted to ask questions regarding Django and specifically the implementation methods of getting one of these client-side technologies to work with Django. Currently, I have a backend setup that uses Django REST Framework to serialize the data. My problem, though, is that I'm not sure how to get the client side to access the backend. Django is an MVC framework.
Originally I thought that I could just take out the View and implement only the Model and Controller. Then I would write completely separate View code, so to speak, using Backbone/Ember/Angularjs. Then the client would access the REST resources. How would I combine these two later if I wanted to deploy this to Heroku? Heroku only takes a whole Django application. How can I get the client code on there as well then? The other option, which I've seen before, is to NOT take out the View completely, but to use Django templates WITH Backbone/Ember/Angularjs. Then I can simply put these js files in the \"static\" folder, so to speak. But that seems weird, because then I'd have a View on the server side (correct me if I'm wrong) that accesses my own REST resources. I tried this, but for some reason, even though my page retrieves the javascript files, it does not seem to be working as expected."} {"_id": "241255", "title": "Flags with deferred use", "text": "Let's say I have a system. In this system I have a number of operations I can do, but all of these operations have to happen as a batch at a certain time, while calls to activate _and_ deactivate these operations can come in at any time. No matter how many times the `doOperation1()` method/function is called, the operation is done only once in the batch. To implement this, I could use flags like `doOperation1` and `doOperation2`, but this seems like it would become difficult to maintain. Is there a design pattern, or something similar, that addresses this situation?"} {"_id": "213332", "title": "Best way to define 'snap points' between two arbitrary objects in 3D?", "text": "I'm working on a simple in-browser 3D model constructor using THREE.js. The user picks a plane body, and adds wings, cockpit, tail etc. of their choice, choosing from multiple options. I need to be able to define relationships between many bodies and many add-ons, and of course I don't want to have to manually define the position/rotation of each combination of parts. Therefore, I want to be able to define a set of 'snap points' on the body, places where certain components may be inserted, and to define a matching set of points on the add-on parts, basically matching the two in software to position the add-ons correctly. I thought of defining a triangle of 3 points, as that would lock in all 3 degrees of freedom. If I make the order important, it will even prevent the object from \"flipping\" across the plane defined by the three points. Here's an illustration: ![before](http://i.stack.imgur.com/LCeyZ.png) ![after](http://i.stack.imgur.com/ierSd.png) I'm thinking of making a \"dummy\" for the 3D designers to use to get the coordinates of the points, but one case where I can see this failing is when the two triangles don't exactly correspond to each other. Should I simply set in stone that the points must form an equilateral triangle of side length 10 units, for example, or is there another way I could do the maths with triangles which do not completely match? Perhaps there's another, more standard way to do what I want? Thanks"} {"_id": "241250", "title": "Using packages (gems, eggs, etc.) to create decoupled architectures", "text": "**The main issue** Seeing the good support most modern programming platforms have for package management (think `gem`, `npm`, `pip`, etc), does it make sense to design an application or system to be composed of internally developed packages, so as to promote and create a loosely coupled architecture?
**Example** An example of this would be to create packages for database access, as well as for authentication and other components of the system. These, of course, use external packages as well. Then, your system imports and uses these packages - instead of including their code within its own code base. **Considerations** To me, it seems that this would promote code decoupling and help maintainability, almost in a Web-based-vs.-desktop-application kind of way (updates are applied almost automatically, single code base for single functionality, etc.). Does this seem like a rational and sane design concept? Is this actually used as a standard way of structuring applications today?"} {"_id": "214108", "title": "How small is the footprint of a small C compiler?", "text": "This week I was able to optimize by using a reduced C library, which allowed a drastic shrinkage in code size - from about 60 K to about 6 K - and then we could load the code into the 8 K on-chip memory of an FPGA (Altera DE2), which I suppose is SRAM, so there is SRAM both on-chip and off-chip(?). The program was rather small itself, and we noticed that most of the size came from libraries; doing embedded systems, we reduce the libraries to only what is needed so that the footprint is minimized. It makes me wonder about something I heard in the media, which was a story, maybe fictitious, that Microsoft had to deliver a C compiler in only 20 K or so in the 70s or early 80s when there was not much memory available for software. Is it true? What is a feasible size of the footprint for a small C compiler?"} {"_id": "241258", "title": "Websockets VS SSE", "text": "Suppose I have a service which needs to _seek_ the database for different data once in a while. For this I have 2 or 3 SSEs, each one with a different _retry_ base time (20000 milliseconds, 1000 milliseconds...). **What I'd like to know is if websockets can handle different \"data types\" according to the request**, for example, could I create **one** websocket to handle a notification system, a chat system, and a group system, instead of separate **_SSEs_**, and treat the data differently with javascript? And if so, would it be better (performance-wise) than actually performing different queries to the server through different _SSEs_? If this is possible, then would it be bad design to create multiple sockets for each service here?"} {"_id": "119735", "title": "What is (are) the most useful technique/visualization for overall project status?", "text": "For reasons \"above my pay grade\", we're developing an issue/project tracking system where I work (similar to Trac, FogBugz, etc). The managers want a useful tool to be able to track the overall health of the project (e.g. how much time is left, how are we performing vs estimates), and one of the features that has been requested is some type of critical path support and visualization. The logic explained to me is that they want to be sure that at least the most important pieces of the project are currently being worked on. The initial idea was that we would create task-based dependencies. My understanding of project management tells me that this kind of granular approach is unnecessary - having milestones with specific deadlines/dependencies is much more useful. I would like to know what are the most useful techniques and \"pretty pictures\" you've seen/used for project development.
Having objective data would be best, but somewhat subjective data is helpful too."} {"_id": "215851", "title": "Validating data to nest if or not within try and catch", "text": "I am validating data; in this case I want one of three ints. I am asking this question as it is the fundamental principle I'm interested in. This is a basic example, but I am developing best practices now, so that when things become more complicated later, I am better equipped to manage them. Is it preferable to have the try and catch followed by the condition:

    public static int getProcType() {
        try {
            procType = getIntInput(\"Enter procedure type -\\n\" +
                \" 1 for Exploratory,\\n\" +
                \" 2 for Reconstructive, \\n\" +
                \"3 for Follow up: \\n\");
        } catch (NumberFormatException ex) {
            System.out.println(\"Error! Enter a valid option!\");
            getProcType();
        }
        if (procType == 1 || procType == 2 || procType == 3) {
            hrlyRate = hrlyRate(procType);
            procedure = procedure(procType);
        } else {
            System.out.println(\"Error! Enter a valid option!\");
            getProcType();
        }
        return procType;
    }

Or is it better to put the if within the try and catch?

    public static int getProcType() {
        try {
            procType = getIntInput(\"Enter procedure type -\\n\" +
                \" 1 for Exploratory,\\n\" +
                \" 2 for Reconstructive, \\n\" +
                \"3 for Follow up: \\n\");
            if (procType == 1 || procType == 2 || procType == 3) {
                hrlyRate = hrlyRate(procType);
                procedure = procedure(procType);
            } else {
                System.out.println(\"Error! Enter a valid option!\");
                getProcType();
            }
        } catch (NumberFormatException ex) {
            System.out.println(\"Error! Enter a valid option!\");
            getProcType();
        }
        return procType;
    }

I am thinking the if within the try may be quicker, but also may be clumsy. Which would be better, as my programming becomes more advanced?"} {"_id": "215853", "title": "What happens with project backlog if sprint due date is missed?", "text": "Suppose that I have a project backlog item with an effort of 40 hours. My sprint is 40 hours (1 week) and I have one developer on the team. So the developer creates a child task in the pending backlog and estimates the work at 40 hours. At the end of the sprint the developer didn't succeed in resolving his task. Suppose that the developer works only 40 hours per week. The next week there will be new backlog items and a new sprint. What should I do with the backlog item and the velocity graph? Obviously the backlog item was not resolved in that sprint. Should I estimate the remaining work and subtract it from the effort, so that now I see that our velocity is, say, 38hr per 40hr sprint?"} {"_id": "13775", "title": "Is it absolutely necessary for a web programmer to be a web designer?", "text": "Though it is always a plus to have knowledge of multiple technologies, due to time constraints it is not possible for me right now. I have extensive experience in .NET Windows development and 1 year of experience in PHP. I want to apply for a job as a web programmer. Is it absolutely necessary for me to also learn web design, which includes CSS and Photoshop?"} {"_id": "69916", "title": "What are the main things a programmer expects from the senior programmer?", "text": "Recently I read the following: 5 Types Of Bosses and How To Deal With Them, which describes the attributes of the worst bosses. I've just started leading a small team of software developers. I would like to know what are the main things a programmer expects from the senior programmer, or what are the things we should avoid while managing a team.
Also, I would like to know how to keep the programmers satisfied and create a productive and positive environment for my team."} {"_id": "13779", "title": "Payments and open source core developers", "text": "I'm not sure if there's an established protocol for this (even if it's not an official one), but I thought those most experienced with open source might want to share with us. I'm aware that random patches submitted to open source projects are never paid. They may be indirectly funded by a client but they're never paid for by the open source project itself. But how about core developers? I heard for example that Drupal has some 800 core developers behind it. Core developers means that they work on the Drupal core itself and together they push the main releases, so they're very important to the project. Of course Drupal is just an example, but in general, is there any established protocol in the open source world that defines whether the company behind the project is expected to pay them, and do these core developers expect such payment? Any facts or first hand experiences?"} {"_id": "13778", "title": "When should one read 'Code Complete'?", "text": "I'm pretty sure about `who`, but when? Someone with proficient knowledge of programming and software development, or someone who's just a beginner in the cyber (programming, to be precise) world? I'm pursuing a bachelor's right now; when is it preferable for me (and folks like me) to read this **_must read for programmers_** book?"} {"_id": "81442", "title": "Is splitting up programming tasks a good idea?", "text": "We have a small but growing team at work, and are thinking of doing things differently. We develop websites from scratch and do HTML/CSS/Javascript/PHP/MySQL coding. Currently, we all work on things as they come, and everybody could do any of those things. So everybody in the team currently has several projects assigned, and they could be doing different things; for one project somebody takes care of the front-end things - mostly Javascript/CSS - and for another site it's the back-end part with mostly PHP/MySQL. The issue is that most in the team have a decent basic understanding of things but are still learning the details, and it seems to take its toll in terms of productivity. I think we could improve this by assigning everybody just specific tasks (only CSS / Javascript / PHP / MySQL... just one thing at a time). This way: * Tasks are clearer for everybody * Everybody can better learn one particular thing and not get overwhelmed * When mastering one skillset, one can upgrade to the next level * Productivity should go up What do you think?"} {"_id": "160066", "title": "Preferring Python over C for Algorithmic Programming", "text": "I've been studying algorithms a bit and have been looking at sites like SPOJ.pl, TopCoder, etc. I've seen that programmers usually prefer C or C++ for most algorithmic programming contests. Now I've been having some trouble lately. I know a bit of both C and Python, and when trying to write code I seem to prefer Python over C for most algorithms. Every time I sit down to write code in C, I give up after about 15 minutes because I find it too cumbersome and tend to move over to Python. Passing matrices, pointers and so on seems like useless time wasted that I could actually be utilizing to think about the algorithm itself. Now I know, and have heard from a lot of people, that C is a very important language and is the bread and butter of a lot of programmers out there.
What I wanted to know was whether this approach of mine has any drawbacks/consequences/disadvantages, etc. This is not a Python vs C debate; this is a question about how this specific practice of preferring Python over C because of its ease of use will affect me or any other programmer/computer scientist in the long run. * * * I'd love to hear from people who've used these languages in industry and/or to develop large software/libraries, etc."} {"_id": "60393", "title": "How to maintain different, customized versions of the same software for multiple clients", "text": "We have multiple clients with different needs. Although our software is modularized to a degree, it's almost certain that we need to adjust every module's business logic here and there a little for each customer. The changes are probably too tiny to justify splitting the module into a distinct (physical) module for each client; I fear problems with the build and linking chaos. Yet, those changes are too large and too many to configure them by switches in some configuration file, as this would lead to problems during deployment and probably a lot of support trouble, especially with tinker-type admins. I'd like to have the build system create multiple builds, one for each client, where changes are contained in a specialized version of the single physical module in question. So I have some questions: Would you advise letting the build system create multiple builds? How should I store the different customizations in source control, svn in particular?"} {"_id": "94246", "title": "Do companies hire software developers that are aspiring entrepreneurs?", "text": "There are developers out there that not only write code and solve problems, but aspire to one day be an entrepreneur and run their own company. They may participate in open source projects, go to various networking events/meetups, or even write code to help shape/start their own business outside of work. And, for example, a fully-candid interview with a prospective hire might go something like this: > Company: _Where do you see yourself in 5 years?_ > > You: _I see myself running my own software company in City Z, doing xx projects, solving yy kind of problems._ This might be a red flag to a company, which may consider this kind of developer a high risk for leaving, one who would take with them the experience of developing a particular piece of software or specific industry knowledge. Should developers hide these kinds of aspirations/traits from their current employers, or from where they are interviewing? Is it unprofessional to mention these kinds of things? Does it help or hurt their chances of getting hired?"} {"_id": "60398", "title": "iOS App Store Developers - What is the 'copyright name' of an app?", "text": "I understand that whenever you upload a new app, you can set the copyright name for the app. **Where is this name displayed?** Is this name displayed underneath the name of the app when searching in the App Store? Or is that the seller name that you specify when you upload your first app?"} {"_id": "201276", "title": "Why is it rare to collect analytics/usage data in open source software?", "text": "So, I've been developing some analytics software at work and have also started to take more notice of analytics in general. For instance, I recently installed Google Analytics on my blog (which is custom and open source).
I mostly make open source software outside of work, so I think it might be cool to be able to gather some form of usage data on how my tools are used, opt-in only of course. However, this seems to almost never be done. Why? What's with the taboo against analytics in open source software?"} {"_id": "19225", "title": "In Java, what are checked exceptions good for?", "text": "Java's checked exceptions have gotten some bad press over the years. A telling sign is that Java is literally the only language in the world that has them (not even other JVM languages like Groovy and Scala). Prominent Java libraries like Spring and Hibernate also don't use them. I personally have found one use for them (in business logic between layers), but otherwise I'm pretty anti-checked exceptions. Are there any other uses that I don't realize?"} {"_id": "235180", "title": "Functional tests only for testing the infrastructure layer, or test the domain services too, without mocking?", "text": "This is a code example: My entities (Domain Layer):

    class Account:
        def __init__(self, name, email):
            self.name = name
            self.email = email

My repository interfaces (Domain Layer):

    class AccountRepository:
        def add(self, account):
            \"\"\" @type account Account \"\"\"
            pass

        def find_by_name(self, name):
            pass

        def find_by_email(self, email):
            pass

        # remove, and others...

My domain services (Domain Layer):

    class SignUpService:
        def __init__(self, account_repository):
            \"\"\" @type account_repository AccountRepository \"\"\"
            self._account_repository = account_repository

        def create_account(self, username, email):
            account = Account(username, email)
            self._account_repository.add(account)

        # other methods that use the account_repository

My repository strategy implementations (Infrastructure Layer):

    class MongodbAccountRepository(AccountRepository):
        def __init__(self, mongodb_database):
            self._mongodb_database = mongodb_database

        def add(self, account):
            account_data = account.__dict__
            self._mongodb_database.accounts.insert(account_data)

        # and the other methods

So I can do: **A** : * Functional test for MongodbAccountRepository, testing the add, and directly querying mongodb to check that the data is persisted as I expect. * Unit test for SignUpService, mocking the AccountRepository * **Pros:** Very quick * **Cons:** can I really assume that if my infrastructure works fine, and my domain service works fine with a mock, the real integration will work fine? * And what if I'm introducing a bug in my mock object, and the test passes when it should be failing? **B** : * Functional test for SignUpService, using the real MongodbAccountRepository. * **Pros:** I can be sure that SignUpService really works fine. * **Cons:** too many tests (with all strategies, etc), too slow What do you think?"} {"_id": "235183", "title": "What features does MIT-Scheme have that make it ideal for SICP?", "text": "I've been thinking about trying to get through SICP again, this time well-armed with a better idea of what SICP is meant to accomplish, and being older and wiser than at my first attempt back in university. I've been told by old hands that MIT Scheme is the _only_ Scheme I should think about using, and that other Schemes lack features that make SICP harder to accomplish. \"There's a reason all the 'SICP-in-X' projects end with chapter 3. Other languages can't support what's in chapter 4.\" When I asked what's in chapter 4, the answer was, \"You'll have to get through the first three chapters to understand.\" Which is very Zen, I admit, but not helpful.
The only things I can think of that older Lisps have are dynamic scope and fexprs, but those don't seem to be the issue. What features does MIT Scheme possess that make it \"ideal\" for getting through SICP? (Other than that it's the target language of the book, of course.)"} {"_id": "11436", "title": "Start to understand an existing project for beginner programmer", "text": "I want to understand an existing project to improve myself more, but I am not sure whether that project is overwhelming for me or not. I am not sure I am capable of understanding it. I even think that I will fail. When I look at projects, at a glance it scares me to see thousands of lines of code. I don't even know where to start reading the code. Do I have to follow it by debugging, or something else? Can you please guide me in this situation? I am totally confused about where to start. Edit: A few things to clarify about my question. I know the C# language and I have learned some web technologies like ASP.NET MVC 2 and Web Forms, but surely there will still be many things which I don't know. I call myself a beginner just because of my lack of professional experience in this area, but my OOP understanding is good enough. I am thinking of improving myself on the web, so I am looking at web projects like blog engines, which are simpler than others for starting."} {"_id": "17100", "title": "Clarify the Single Responsibility Principle", "text": "The Single Responsibility Principle states that a class should do one and only one thing. Some cases are pretty clear cut. Others, though, are difficult because what looks like \"one thing\" when viewed at a given level of abstraction may be multiple things when viewed at a lower level. I also fear that if the Single Responsibility Principle is honored at the lower levels, excessively decoupled, verbose ravioli code, where more lines are spent creating tiny classes for everything and plumbing information around than actually solving the problem at hand, can result. How would you describe what \"one thing\" means? What are some concrete signs that a class really does more than \"one thing\"?"} {"_id": "116185", "title": "How to develop a career path for programmers in a small company?", "text": "I am working in a small software company, which has grown from 4 to just over 50 employees (1 to 12 developers respectively) in the last 4 years, with me being the lead developer/manager of the development team. No developer has quit so far, and during the last round of feedback interviews everyone emphasized that they like the environment, colleagues and their current tasks at hand. However, for the first time, a few people mentioned that they have started to think about how to expand their horizons (beyond the component/team they are currently working on) or develop additional skill sets (in addition to \"plain\" programming), e.g., to look into technical pre-sales activities. These issues were all raised by developers who have been with the company the longest and had seen the wild early days. Since then we've established processes to improve stability and quality which seem to work okay, and a certain part of the daily work load has shifted from \"new and exciting\" to \"routine and maintenance\", so I can understand that they are looking for additional challenges.
I've read several questions on SO from developers concerning steps to improve their career, but I am now faced with the other side: **What kind of career path can I offer a developer in a small company like ours?** Obviously, those people provide a lot of our specific domain knowledge and are very valuable to the team, so I want to make sure I can keep them happy. Given that business constraints and tight resources in a small company don't allow for a lot of extra projects on the side, any suggestions on what kind of options I can offer them?"} {"_id": "52515", "title": "Specifying and applying broad changes to a program", "text": "**How do you handle incomplete feature requests, when the ones asking for the feature cannot possibly write a complete request?** Consider an imaginary situation. You are a tech lead working on a piece of software that revolves around managing profiles (maybe they're contacts in a CRM-type application, or employees in an HR application), with many operations being directly or indirectly performed on those profiles -- edit fields, add comments, attach documents, send e-mail... The higher-ups decide that a _lock_ functionality should be added whereby a profile can be locked to prevent anyone else from doing any operations on it until it's unlocked -- this feature would be used by security agents to prevent anyone from touching a profile pending a security audit. Obviously, such a feature interacts with many other existing features related to profiles. For example: * Can one add a comment to a locked profile? * Can one see e-mails that were sent by the system to the owner of a locked profile? * Can one see who recently edited a locked profile? * If an e-mail was in the process of being sent when the lock happened, is the e-mail sending canceled, delayed or performed as if nothing happened? * If I have just changed a profile and click the \"cancel\" link on the confirmation, does the lock prevent the cancel or does it still go through? * In all of these cases, how do I tell the user that a lock is in place? Depending on the software, there could be hundreds of such interactions, and each interaction requires a decision -- _is the lock going to apply and if it does, how will it be displayed to the user?_ And the higher-ups asking for the feature probably only see a small fraction of these, so you will probably have a lot of questions coming up while you are working on the feature. How would you and your team handle this? * Would you expect the higher-ups to come up with a complete description of all cases where the lock should apply (and how), and treat all other cases as if the lock did not exist? * Would you try to determine all potential interactions based on existing specifications and code, list them and ask the higher-ups to make a decision on all those where the decision is not obvious? * Would you just start working and ask questions as they come up? * Would you try to change their minds and settle on a more easily described feature with similar effects? The information about existing features is, as I understand it, in the code -- how do you bridge the gap between the decision-makers and that information they cannot access?"} {"_id": "52516", "title": "I want to write a Todo list application for Mac but I only have experience with C++", "text": "Do I learn Objective-C?
Is using Cocoa the easiest and best way to make a UI?"} {"_id": "215277", "title": "At what point does caching become necessary for a web application?", "text": "I'm considering the architecture for a web application. It's going to be a single page application that updates itself whenever the user selects different information on several forms that are available on the page. I was thinking that it wouldn't be good to rely on the user's browser to correctly interpret the information and update the view, so I'll send the user's choices to the server, then get the data, send it back to the browser, and update the view. There's a table with 10,000 or so rows in a MySQL database that's going to be accessed pretty often, like once every 5-30 seconds for each user. I'm expecting 200-300 concurrent users at one time. I've read that a well designed relational database with simple queries is nothing for an RDBMS to handle, really, but I would still like to keep things quick for the client. Should this even be a concern for me at the moment? At what point would it be helpful to start using a separate caching service like Memcached or Redis, or would it even be necessary? I know that MySQL caches popular queries and the results; would this suffice?"} {"_id": "215276", "title": "Storing a pass-by-reference parameter as a pointer - Bad practice?", "text": "I recently came across the following pattern in an API I've been forced to use:

    class SomeObject
    {
    public:
        // Constructor.
        SomeObject(bool copy = false);

        // Set a value.
        void SetValue(const ComplexType &value);

    private:
        bool m_copy;
        ComplexType *m_pComplexType;
        ComplexType m_complexType;
    };

    // ------------------------------------------------------------
    SomeObject::SomeObject(bool copy)
        : m_copy(copy)
    {
    }

    // ------------------------------------------------------------
    void SomeObject::SetValue(const ComplexType &value)
    {
        if (m_copy)
            m_complexType.assign(value);
        else
            m_pComplexType = const_cast<ComplexType*>(&value);
    }

The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class `SomeObject` efficient by only holding a pointer to the object until it needs to be encoded, but also to provide the option to copy values if the lifetime of the `SomeObject` exceeds the lifetime of a ComplexType. However, consider the following:

    SomeObject SomeFunction()
    {
        ComplexType complexTypeInstance(1); // Create an instance of ComplexType.
        SomeObject encodeHelper;
        encodeHelper.SetValue(complexTypeInstance); // Okay.
        return encodeHelper; // Uh oh! complexTypeInstance has been destroyed, and
                             // now encoding will venture into the realm of undefined
                             // behaviour!
    }

I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, **is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing?** Thanks!"} {"_id": "215271", "title": "Requiring a specific order of compilation", "text": "When designing a compiled programming language, is it a bad idea to require a specific order of compilation of separate units, according to their dependencies? To illustrate what I mean, consider C. C is the opposite of what I'm suggesting.
There are multiple `.c` files that can all depend on each other, but all of these separate units can be compiled on their own, in no particular order - only to be linked together into a final executable later. This is mostly due to header files. They enable separate units to share information with each other, and thus the units are able to be compiled independently. If a language were to dispose of header files, and only keep source and object files, then the only option would be to actually include the unit's meta-information in the unit's object file. However, this would mean that if unit A depends on unit B, then unit B would need to be compiled _before_ unit A, so that unit A could \"import\" unit B's object file, thus obtaining the information required for its compilation. Am I missing something here? Is this really the only way to go about removing header files in compiled languages?"} {"_id": "245213", "title": "Open source license limitations and compatibility", "text": "I am new to open source licenses and it seems very confusing, so I need some suggestions. We are developing an application which utilizes existing components with the following license types: * Apache License v2 * LGPL v2.1 * MIT License * BSD style license * Mozilla Public License v1.1 Now my concern is: 1. What kind of distribution limitations might our application have from a commercial perspective? 2. What license possibilities do we have for our application? (e.g. Apache, proprietary, GPL, etc.) Thanks!"} {"_id": "235233", "title": "How to Learn to Do it the Right Way?", "text": "I am a self-taught web developer. I do not have any computer science degree from a university. I know HTML5, CSS3, Javascript, PHP and some Python. But I am having difficulty being efficient. When I create a project, I am buried in files, folders, bugs and todos. Because I am managing multiple projects at a time, I feel totally stressed and not in control. I see real programmers who have computer science degrees use utilities like Git for source management. They use unit tests for testing and other tools for debugging in an easy and quick way. I am sure there are lots of other tools they have which help them stay safe and calm while they manage their projects. But because I do not have a computer science degree, I do not know how to do things in a right and efficient way. I just do it, in an organized way. It works, but it burns me out, too. I Googled a bit and found some books on Amazon about project management, but I feel intimidated. The tools and resources I found are also very scattered; it seems that getting them together to create an organized routine requires another kind of expertise. As a self-taught web developer, what can I do to learn to do things the right way? Do I need a computer science degree or a course about programming? Can I learn the right way by Googling or from books?"} {"_id": "958", "title": "What IDE features would you miss most if you didn't have them?", "text": "What features of your IDE would you miss most if you didn't have them? Please list one feature or group of related features per answer."} {"_id": "251220", "title": "X509 certificate question on WCF", "text": "**My condition:** 1. A WCF service which is self-hosted on a Win8 machine. 2. The client is a WPF program on another machine. 3. I followed the article on CodeProject about how to set up an X509 certificate for WCF. **Problem Description:** 1. Communication between the client and the service was OK when they were on the same machine. 2.
When I put the client on another machine, an exception occurs saying \"The caller is not authenticated by the service\". I believe the cause of the exception above may be related to the X509 certificate. When I put the Client.exe on another computer, I just generated a new certificate for the client; is that right? I want to know if the X509 client certificate should be exported from the service machine which generated both the client and server certificates, and then be imported into the other client machine, or should I just use makecert.exe to generate another certificate on the other client machine? In short, can the certificate be generated by any machine, or only by the machine that generated the service certificate?"} {"_id": "251225", "title": "Requesting Advice Regarding Storing Encryption Keys", "text": "I am using HMAC to hash some data before inserting it in a database, and currently I have my key as a static field. Just wondering what the best practice regarding storing the key would be. Is having it in code good enough, or should it be in a configuration file? Thanks!"} {"_id": "118066", "title": "Solutions for implementing a full-duplex game server?", "text": "I am designing a game server which would be used for Android terminals. I've been searching for products or frameworks to use for two-way socket communication but haven't found anything worth mentioning. Simply, I want to implement the architecture below: ![Server architecture](http://i.stack.imgur.com/leKtu.png) In other words, one TCP connection from the client to the server, and another from the server to the client, in order to avoid always having to be connected. Just to be clear, my main aim is for the **server** to be able to send data to the **client**, **without** the client having to explicitly request it. I do **not** want the client to have to poll the server to see if there is any new data. What combination of design strategies, network protocols, and/or products or frameworks (if any) would be appropriate for implementing this architecture?"} {"_id": "70877", "title": "Are design patterns really essential nowadays?", "text": "I was reading \"Coders at Work\" and came across the fact that some of the professionals interviewed in the book are not so enthusiastic about design patterns. I think that there are 2 main reasons for this: 1. Design patterns force us to think in their terms. In other words, it's almost impossible to invent something new (maybe better). 2. Design patterns don't last forever. Languages and technologies change fast; therefore, design patterns will eventually become irrelevant. So maybe it's more important to learn how to program properly without any particular patterns and not to learn them. EDIT: The point also was that usually when people face a problem and they don't have much time, they try to use a pattern. This means copying and pasting existing code into your project with minor changes in order to get it working. When it's time to change or add something, a developer doesn't know where to start because it's not his code and he's not deeply familiar with it."} {"_id": "107834", "title": "How often do you actually use design patterns?", "text": "> **Possible Duplicate:** > Are design patterns really essential nowadays? I recently read a book on design patterns. A few of them seem very useful in specific situations. I'm not sure how much use they will be in everyday coding though. How often do you use design patterns in your everyday work?
Should I be trying to find situations to apply them?"} {"_id": "49379", "title": "When should I use\u2014and not use\u2014design patterns?", "text": "In a previous question of mine on Stack Overflow, FredOverflow mentioned in the comments: > Note that patterns do not magically improve the quality of your code. and > Any measure of quality you can imagine. Patterns are not a panacea. I once > wrote a Tetris game with about 100 classes that incorporated all the > patterns I knew at the time. Why use a simple if/else if you can use a > pattern? OO is good, and patterns are even better, right? No, it was a > terrible, over-engineered piece of crap. I am quite confused by these comments: I know design patterns help to make code reusable and readable, but when should I use design patterns and, perhaps more importantly, when should I avoid getting carried away with them?"} {"_id": "224934", "title": "Are design patterns essential for good code?", "text": "Are design patterns (e.g. factory pattern, observer, etc...) required knowledge for writing good code? I often have no idea what people mean when they talk about the _insert pattern name here_ pattern, and sometimes I realize I've implemented that pattern without knowing it even had a name. I'm a self-taught programmer, so I try to learn about patterns as I go, but are they necessary? I find them over-engineered sometimes - fancy design concepts that programmers sometimes overestimate and force themselves to use, even when they could write clean, working code that simply doesn't follow a common pattern."} {"_id": "95718", "title": "To design pattern, or not to design pattern", "text": "Design patterns are good, but complex. Should we use them in small projects? Implementing design patterns requires more sophisticated developers, which in turn raises project costs. On the other hand, they make code neat and clean. Are they necessary for small projects? **Update:** Should we insist on using design patterns when the team is not efficient at working with them?"} {"_id": "219767", "title": "What if I will not use Software Design Patterns?", "text": "What kinds of problems might I face if I don't use software design patterns? Can you tell me about the problems of approaching the design using standard object-oriented techniques?"} {"_id": "235238", "title": "How do you get over tooling problems in a communal open-source project?", "text": "Two different teams (from different companies) are uniting to work on a communal open-source project. Agreeing on technical design is something we have no trouble with, but I'm struggling with tooling/workflow problems. I really like a BDD testing tool called `phpspec` (analogous to RSpec), whilst a lot of my teammates stick to what they know (`phpunit`) regardless of the pros and cons of either tool. How do you move forward with a project when members are in disagreement like this? Should you enforce a standard testing tool? Is there a way of using both? I think it boils down to whether members will break out of their comfort zones to learn new technologies that are better for the job.
I'm of the strong opinion that you should always be willing to learn new things, but I get the impression that others are purely concerned with getting things done in the quickest way possible - thereby using tools they've used before."} {"_id": "118063", "title": "What is a good way to familiarize myself with PHP, coming from an ASP.NET background?", "text": "Currently, I'm very comfortable with building tools/web apps in an ASP.NET environment. I'm not really looking to leave, to be honest, as I really like C#, ASP.NET, MVC 3, Visual Studio, etc. However, right now I know almost nothing about PHP, and that seems like a deficiency I'd like to rectify. Are there any books (or other learning methods) that would be a good resource for learning PHP? Obviously there are plenty of beginning PHP books, but I am already comfortable with much of what is involved in building a web page, and I'm interested in focusing on PHP itself, which might not be compatible with the scope of some beginning PHP books. I went through the PHP Manual quite a bit, and it doesn't seem to flow as smoothly as might be ideal. Is there a beginning PHP book that would be appropriate? I miss the cohesiveness that most books have; there is lots more info in the manual, but it feels more like a reference while coding than a primary learning vehicle."} {"_id": "118060", "title": "Is it possible to develop an application for both Metro and classic style?", "text": "Is it possible to develop one application that can be run in both Metro and classic style in Windows 8? I mean an application with the same core and two separate UIs, adjusted to the currently used mode."} {"_id": "118061", "title": "What's there in Eclipse, other than PyDev, for a Python developer?", "text": "Like many other people, I use Eclipse with the excellent PyDev plugin (available in the Eclipse Marketplace) to develop my Python projects. And like most people with the same setup, I came to PyDev without any prior Eclipse experience. What other tools, plugins, features, etc. are there in Eclipse that could be useful for us Python developers? And/or, said another way, how could we become more proficient and make better use of the Eclipse IDE for Python development? I'll start by adding my own 2 cents: EGit, or similar plugins for CVS, SVN, etc., are good for integration with a version control system / repository (even if I admit I'm usually just writing git command lines in the terminal)."} {"_id": "80798", "title": "Is it possible to do TDD without a test tool?", "text": "We want to implement a fairly rough outline of test driven development, which involves a developer asking themselves about the tests at each stage of the development process. I have read here that it's impossible to perform TDD without a tool. Is this true?"} {"_id": "49018", "title": "CS Concentrations and Career Paths", "text": "I'm approaching the end of my sophomore year in college (studying Computer Science), and very soon I'm going to have to decide on my concentration, but I honestly don't know what each concentration means. I basically have two questions: 1\- How much influence does your concentration have on your career path? For example, would a video game development company only look at people with a concentration in Game Development? 2\- It would be great if you guys could, in a line or two, tell me what sort of jobs I am looking at for each of the concentrations. I need to pick at least two of the 9 below.
\- Algorithms and Data Structures \- Artificial Intelligence \- Computer and Network Security \- Computer Graphics and Vision \- Human-Computer Interaction \- Game Development and Design \- Numeric and Symbolic Computation \- Programming Languages \- Systems"} {"_id": "152912", "title": "Warn about 3rd party methods that are forbidden", "text": "Note: This question refers to code written in Java or C#. I am managing a couple of large projects where we have discovered issues (not necessarily bugs) with some 3rd party/SDK methods and have written our own extensions that should be used instead. We want developers to remember that using those methods is not recommended for this project. If we had been using our own libraries we could easily remove such a method or mark it obsolete/deprecated, but we cannot do so for libraries that we didn't write. As an example, we use a library that provides us with two overloads: acme.calculate(int quantity_, double priceInUsDollars_); acme.calculate(int quantity_, string currencyCode_, double priceInCurrency_); We want developers to always use the first one and get the price in US dollars from our own standard FX rate systems. And it'd be nice to have the IDE (Eclipse/Visual Studio) warn the developers when they use the second one. A compiler warning would suffice too. Right now, as it stands, we have to rely on the code reviewers to spot such errors, and as you can see that is not a reliable approach. One possible way I am prepared to go is to write my own Checkstyle check (http://checkstyle.sourceforge.net/writingchecks.html). But I was wondering if there was something simpler that I could use. Does anyone know of ways to achieve an IDE/compiler warning of the sort I have described? Non-IDE/compiler solutions are most welcome."} {"_id": "80792", "title": "What modes of profit are open to programmers?", "text": "This is an important question because "apply for a job/internship" isn't a solution for everyone, especially during periods of economic stress when constant rejection can push good programmers into depression. Can people with experience briefly comment on other ways to get programming revenue: * Creating "free for non-commercial" software * Turning an idea into a startup with near-zero resources and **not flopping** * Turning an idea into a startup with near-zero resources and coming out **employed** * Using outsourcing sites and **actually** getting paid * Making freeware and getting sponsors/advertisers (I know we all hate installer spam, but still)"} {"_id": "62829", "title": "What is better: making a separate temporary table or inserting directly into the big table?", "text": "I have a big table with 1,400,000 rows, and I need to insert 3000 rows into it daily. When I insert the 3000 daily rows, should * I first insert into a temporary table and then dump that temporary table into the main table, or * insert directly into the big table? Which approach is faster, and why?"} {"_id": "153359", "title": "Asynchronous update design/interaction patterns", "text": "These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than wait for the roundtrip to the server, the app can hide the one you deleted, giving immediate feedback. The actual deletion on the server will happen in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what about when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state?
What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?"} {"_id": "153351", "title": "Is there a specific way or algorithm to decode protocols?", "text": "I am designing a simple logic analyzer. I know that the best way to decode a protocol like I2C, SPI, or UART is with something like an FPGA, but I want to do it in software for now :) I am running on an OMAP4460 ARM processor. I would like to know if there is a special way of decoding protocols like I2C, SPI, or UART, or whether it is simply if() checks and flags, as in the normal way."} {"_id": "153350", "title": "Are too many assertions code smell?", "text": "I've really fallen in love with unit testing and TDD - I am test infected. However, unit testing is normally used for public methods. Sometimes, though, I do have to test some assumptions-assertions in private methods too, because some of them are "dangerous" and refactoring can't help further. (I know, testing frameworks allow testing private methods.) So it became a habit of mine that the first and the last line of a private method are both assertions. However, I've noticed that I tend to use assertions in public methods (as well as the private ones) just "to be sure". Could this be "testing duplication", since the public method assumptions are tested from the outside by the unit testing framework? Could someone think of too many assertions as a code smell?"} {"_id": "158965", "title": "Template rendering engine legitimate use of a singleton?", "text": "I wrote a standalone singleton class (scaffold) tonight that serves as a facade to a few other classes, including a template class and a view class. Templates and views are both named, and instances of each are stored in the scaffold object. Templates can contain views and views can contain other views. A template and its contained views are rendered when: scaffold->render('template_name') Making scaffold a singleton seemed like a good idea because: * I want to control when and how the object is constructed * I only want one instance (the GUI will only be rendered once, regardless of the template which is rendered) * All state can easily be released from the scaffold class (if, for testing reasons or whatever, I wanted to) Does this seem like an acceptable use case? If not, what specific design considerations am I overlooking? No religious wars, please. I didn't include the language because I'm hoping for a language-agnostic consideration, but I will say: * Scripting language * Single threaded"} {"_id": "99243", "title": "Why doesn't Python allow multi-line lambdas?", "text": "Can someone explain the concrete reasons why the BDFL chose to make Python lambdas single-line? This is good: lambda x: x**x This results in an error (the same expression, but with the body moved to its own indented line): lambda x: x**x I understand that making lambdas multi-line would somehow "disturb" the normal indentation rules and would require adding more exceptions, but isn't that worth the benefits? Look at JavaScript, for example. How can one live without those anonymous functions? They're indispensable. Don't Pythonistas want to get rid of having to name every multi-line function just to pass it as an argument?"} {"_id": "210737", "title": "Is learning C essential for Computer Science?", "text": "I am a front-end developer who barely ever sees a file with a `.h` or `.c` extension. I know basic C syntax; I learned it in university but never was interested in such low-level programming because it was simply too much setup for simple things.
I am very interested in learning all aspects of Computer Science, but I want to believe I do not really have to know a specific language in order to understand most of the concepts in Computer Science. Yet when I start reading books and articles about fundamental Computer Science concepts like data structures and algorithm design, it seems that I have to learn C, because all the examples and even the lessons are in C (and sometimes Java). My question is: is C as a programming language essential for Computer Science, or do we just happen to have all of our CS resources written in C? Can one learn Computer Science without learning C?"} {"_id": "125976", "title": "How does Assembla compare to FogBugz + Kiln", "text": "I'm choosing a project management solution and narrowed it down to either Assembla or FogBugz + Kiln. These were my criteria: * Hosted service but with an option to go to a custom installation (rules out e.g. Codebase HQ) * Centered around projects, not repositories (GitHub, although having added some PM features recently, seems to be repo-centered) * Git repository hosting a must, SVN + Mercurial nice to have * Popular service with (at least somewhat) guaranteed continuity - both FogBugz and Assembla seem to be amongst the most popular PM solutions * Wikis, discussions, code reviews, user management etc. The more features, the better. This rules out (otherwise a very nice service) BitBucket and others. Both FogBugz and Assembla seem quite feature-packed. I'm looking for someone who has experience with both FogBugz and Assembla and could compare them. This is what I've gathered so far from feature pages / screenshots / random mentions on the web: * FogBugz seems to have quite a nice UX. Assembla's makers are certainly trying hard and the screenshots don't look that bad, but it still feels less elegant. * FogBugz is offered with Kiln, which I'm sure is a nice product, but I'd prefer Git over Mercurial. I know FogBugz can be used with other SCM services, but I'd rather have one integrated solution out of the box. * Assembla is cheaper - actually much cheaper should there be more than a few users. Maybe I'm wrong; these are just my first impressions. If someone could offer a more complete / educated comparison I would be grateful."} {"_id": "16851", "title": "how to get clients and domain experts on board and interested", "text": "So I think this might be **me** soon! Excuse my ignorance in this post, I am not formally educated and young! =S (Taken from http://theoatmeal.com/comics/design_hell) ![alt text](http://i.stack.imgur.com/r3lVG.png) I'm building an in-house application for a bank, and it might not be big, but it's the largest project I've worked on. ## Issues I can't do much about 1. There is tons of bureaucracy in doing anything; it took 4 months to get them to sign a contract with me. 2. I have to work with finance managers who have never worked with someone making software for them. 3. I am not usually taken seriously because I am young. ## My Problems 1. The managers don't give a crap about the bank and they never bother to read my emails most of the time, or give me any input on mockups or demos. I spent 20 hours over 2 days on the last mockup and they would rather crap on the phone than attend the meeting! 2. New features keep coming and going like hot cakes and they just cannot decide which ones are important. Or they don't care to discuss it! And the top dogs keep reminding me of the deadlines!? ## My question So I'm going straight to the top dogs to politely try to change things.
These guys however are older and won't give me much time. **I need a very convincing, quick, succinct way to explain to them how important it is that their "managers" communicate with me and tell me what the heck they want and what they don't.** Any funny presentations or pictures like The Oatmeal's would be really awesome to show the top dogs. ## Other Solutions I was thinking of 1. Signing up for teambox.com (as opposed to email!) 2. If in 3 days I don't get a response on a feature, it is NO MORE! 3. I display features on a board and they get 5 secs to tell me if it's important or not. Any more policies I should adopt? Any other ideas? Thanks so much, Gideon **EDIT** * * * In response to the first comment: The departmental heads (top dogs) are the people really looking forward to shiny software that will solve all their problems. They are more or less the stakeholders, but I don't interact with them much; I only interact with their subordinates (the managers)."} {"_id": "125971", "title": "Has SSRS replaced Crystal Reports as the standard Visual Studio reporting tool?", "text": "I have never used SQL Server Reporting Services (SSRS) before, but have used Crystal Reports a little. My present project makes extensive use of Crystal Reports for Visual Studio 2005. This choice brought some nasty Crystal Reports issues to the fore. The very first report failed to print or export because of a bug. After wasting a week, I got stuck on another issue: reducing font size on print and export. Help from the forums is also slow in coming. I want to know whether I should learn and switch to SSRS, or migrate my project to Visual Studio 2010 with a higher version of Crystal Reports. Has SSRS replaced Crystal Reports as the de facto standard reporting tool?"} {"_id": "198671", "title": "Why is C++ preferred over C for commercial applications?", "text": "I program mostly in C. However, it is pretty obvious that many more commercial applications are done in C++. As far as I can tell, C++ is a very complex language, with seemingly convoluted syntax and too many constructs. C++ also encourages the abuse of objects where structs and functions will do. In fact, the only significant advantage I see in C++ is the use of templated generic types (though, according to the developers of Go, generics are bad for programs). Basically, my question is: did I miss something? Or is C++ more popular purely by merit of luck or marketing? Edit: I'm sorry that I apparently asked a loaded question; in retrospect I can see that the way I worded it appears to be complete flamebait. What I meant was: since C++ has so many different constructs and paradigms available to it, why hasn't it been replaced by languages that do less but are better at that specific thing? For instance, both Java and C# are much better suited for OOP than C++ is, while C is much simpler for system-level programming, and something like Lisp is more suited for functional programming. Why is C++ used over one or more of these other languages?"} {"_id": "125978", "title": "Should I decompress zips before I archive?", "text": "I am writing a small personal archiving tool. I frequently work with a lot of client databases for short periods. What my tool will do, in overnight batch jobs, is detach the database, zip up the database files along with any extra files that were sitting in the folder that the database resided in, and move the archive to a network storage drive.
My question is: quite frequently there will be zip files sitting inside the folder (the most common one would be the client's original backup from before I did the processing on it). Would it be better to unzip the file to a uniquely named subfolder, delete the zip, and then compress the whole parent folder? Or are compression algorithms good enough that I could leave the file zipped (it was likely done with a 7zip Fastest setting), and my archiving (which will use a 7zip Ultra setting) will just compress (mostly) the difference between the two compression levels? If I can get another 1% off of a 20GB database file, it may be worth it to do the decompression."} {"_id": "12165", "title": "How do you structure your branches in TFS", "text": "I'm not sure if I should ask this question here, or on SO ... I'll cross-post it to SO if this is the wrong place. My group at work has been trying to come up with a good process that we can use with TFS. I'm just wondering if some of you guys have had successful strategies using TFS across multiple sites with multiple branches. One specific issue that we have is probably our release engine. On the simple side, a developer should be able to check his/her changes into the DEV branch, and then on a certain date (say a freeze date) the DEV branch will be "reverse integrated" into the main trunk (QA), from which the changes will then be pushed to the production branch. The issue arose when a user checked into the DEV branch but didn't want those changes to be moved into QA (because maybe another portion of the code was not done yet) ... any thoughts on that? Thanks in advance."} {"_id": "46599", "title": "Best time to start writing technical blogs", "text": "I want to know how long I should wait before I start writing a blog, since, IMO, a newbie does not have the skills to write something substantial. I'm talking about technical blogs, like writing about new C# features or the advantages of Ruby over other languages."} {"_id": "190464", "title": "How to rewrite from scratch code for which I own the copyright, so I can use it at my job without losing the rights to the first version?", "text": "Well, I don't want to make it open source! That's the problem. But I do want to use it at my current job. The company did not agree to sign any alternative license with me and told me to rewrite everything from scratch so they will own it. :( So how can I do it in a safe way, so that later on the company doesn't come back to me and claim that I am using the code I wrote for them - which will be similar to the first version, which I wrote and own the copyright to - in my personal projects or even at another job? How would you rewrite a second version of a hash map without making it look like the first version? This sounds kind of hard to me. :("} {"_id": "42674", "title": "Any recommendations on a good Object-Oriented book", "text": "I'm looking to really grasp OO once and for all. Any recommendations on a good object-oriented book? I program in .NET, so if it's .NET-oriented, all the better. Regards,"} {"_id": "158960", "title": "Allocation problem identification", "text": "I have a matrix. I have a list of people who have to occupy n1, n2, n3, etc. cells - a different number of cells in different rows. I have to place the people in the cells. Occupying the same cell across rows is considered overlap. Overlap has to be minimized. I have people referring to this variously as an optimization problem, an allocation problem, and even linear programming.
I need to first get a consensus on what this class of problem is called. Second, I need to know what an efficient solution should look like, in terms of big-O notation or anything else. Here is an example: There is a board of 4 x 4 cells. There are many pieces, each of one color (R, G, B). Each color has to fill a given number of cells in each row. A possible example: Row1 Row2 Row3 Row4 Red 1 2 3 2 Blue 2 2 0 0 Green 1 0 1 2 That is the input. The input arrangement could be different, but it occupies the entire board... no cell is left blank. The count of same-color pieces occupying a COLUMN should be minimized. One possible (bad) arrangement based on the input is this: 1 2 3 4 1 r b b g 2 r r b b 3 r r r g 4 r r g g This is bad because it actually maximises the count of colors in the same column. A better arrangement is like this: 1 2 3 4 1 r b b g 2 b r r b 3 g r r r 4 r g g r Overlap is unavoidable, but should be minimized, and provably so. I don't know if there is only one best solution or if there could be many solutions. Even if there could be many solutions, we need to come up with just one. As I think about it more, it looks like a loose version of the eight-queens problem. I am getting worried because I keep seeing recursion and backtracking. EDIT 1: I am thinking of it as a relaxed version of Sudoku puzzle generation."}
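As a hedged illustration of what one (not provably optimal) heuristic could look like for the 4 x 4 example above, here is a greedy fill in C - the per-row demand table and color symbols are taken from the question, everything else is my own sketch:

```c
#include <stdio.h>

#define ROWS 4
#define COLS 4
#define COLORS 3   /* 0 = r, 1 = b, 2 = g */

/* Per-row demand from the question's example (each row sums to COLS). */
static const int need[ROWS][COLORS] = {
    {1, 2, 1}, {2, 2, 0}, {3, 0, 1}, {2, 0, 2}
};

int main(void)
{
    int colUse[COLORS][COLS] = {{0}}; /* how often color k already sits in column j */
    const char sym[COLORS] = {'r', 'b', 'g'};

    for (int i = 0; i < ROWS; i++) {
        int left[COLORS], usedCol[COLS] = {0};
        char row[COLS];
        for (int k = 0; k < COLORS; k++) left[k] = need[i][k];
        /* fill the row one cell at a time, always choosing the
         * (color, column) pair with the fewest prior uses */
        for (int cell = 0; cell < COLS; cell++) {
            int bk = -1, bj = -1;
            for (int k = 0; k < COLORS; k++) {
                if (!left[k]) continue;
                for (int j = 0; j < COLS; j++) {
                    if (usedCol[j]) continue;
                    if (bk < 0 || colUse[k][j] < colUse[bk][bj]) { bk = k; bj = j; }
                }
            }
            left[bk]--; usedCol[bj] = 1; colUse[bk][bj]++; row[bj] = sym[bk];
        }
        printf("%.*s\n", COLS, row);
    }
    return 0;
}
```

On this input the greedy fill reaches a board with at most two repeats of any color per column, matching the quality of the question's "better arrangement". An exact, provably minimal answer would need something like backtracking search or an integer-programming formulation, which fits the question's worry about recursion and backtracking.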
{"_id": "249764", "title": "Where and how to reference composite MVP components?", "text": "I am learning about the MVP (Model-View-Presenter) Passive View flavour of MVC. I intend to expose events from view interfaces rather than using the observer pattern, to remove explicit coupling with the presenter. **Context:** Windows Forms / client-side JavaScript. I am led to believe that the MVP (or indeed MVC in general) pattern can be applied at various levels of a user interface, ranging from the main "Window" to an embedded "Text Field". For instance, the model for the text field is probably just a string, whereas the model for the "Window" contains application-specific view state (like a person's name, which resides within the contained text field). Given a more complex scenario: * Documentation viewer which contains: * TOC navigation pane * Document view * Search pane Since each of these 4 user interface items is complex and can be reused elsewhere, it makes sense to design them using MVP. Given that each of these user interface items comprises 3 components, which component should be nested? Where? Who instantiates them? **Idea #1 - Embed View inside View from Parent View** public class DocumentationViewer : Form, IDocumentationViewerView { public DocumentationViewer() { ... // Unclear as to how model and presenter are injected... TocPane = new TocPaneView(); } protected ITocPaneView TocPane { get; private set; } } **Idea #2 - Embed Presenter inside View from Parent View** public class DocumentationViewer : Form, IDocumentationViewerView { public DocumentationViewer() { ... // This doesn't seem like view logic... var tocPaneModel = new TocPaneModel(); var tocPaneView = new TocPaneView(); TocPane = new TocPanePresenter(tocPaneModel, tocPaneView); } protected TocPanePresenter TocPane { get; private set; } } **Idea #3 - Embed View inside View from Parent Presenter** public class DocumentationViewer : Form, IDocumentationViewerView { ... // Part of IDocumentationViewerView: public ITocPaneView TocPane { get; set; } } public class DocumentationViewerPresenter { public DocumentationViewerPresenter(DocumentationViewerModel model, IDocumentationViewerView view) { ... var tocPaneView = new TocPaneView(); var tocPaneModel = new TocPaneModel(model.Toc); var tocPanePresenter = new TocPanePresenter(tocPaneModel, tocPaneView); view.TocPane = tocPaneView; } } **Some better idea...**"} {"_id": "46592", "title": "So what *did* Alan Kay really mean by the term \"object-oriented\"?", "text": "Reportedly, Alan Kay is the inventor of the term "object-oriented". And he is often quoted as having said that what we call OO today is not what he meant. For example, I just found this on Google: > I made up the term 'object-oriented', and I can tell you I didn't have C++ > in mind > > \-- Alan Kay, OOPSLA '97 I vaguely remember hearing something pretty insightful about what he _did_ mean. Something along the lines of "message passing". Do you know what he meant? Can you fill in more details of what he meant and how it differs from today's common OO? Please share some references if you have any. Thanks."} {"_id": "167954", "title": "How to calculate the Sin function quicker and more precisely?", "text": "I want to calculate `y(n)=32677Sin(45/1024\u2022n)`, where `y` is an integer and `n` ranges from 0 to 2048. How can I make this process quicker and more precise? Now I want to show you a reference answer: since `Sin(a+b)=Sin(a)Cos(b)+Cos(a)Sin(b)` and `Cos(a+b)=Cos(a)Cos(b)-Sin(a)Sin(b)`, I can store only `Sin(45/1024\u20221)` and `Cos(45/1024\u20221)`, then use these formulas: `Sin(45/1024\u20222)=Sin(45/1024\u20221+45/1024\u20221)`, `Cos(45/1024\u20222)=Cos(45/1024\u20221+45/1024\u20221)`, `Sin(45/1024\u2022n)=Sin(45/1024\u2022(n-1)+45/1024\u20221)`, `Cos(45/1024\u2022n)=Cos(45/1024\u2022(n-1)+45/1024\u20221)`. This way may be quicker, without storing a large array."}
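For concreteness, a minimal sketch of that incremental scheme in C, assuming the 45/1024 step is in degrees (the question does not say) and using the amplitude from the question. With doubles, the accumulated rounding drift over 2048 steps is negligible; if it mattered, one could re-seed `s` and `c` from `sin()`/`cos()` every few hundred steps:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double d  = (45.0 / 1024.0) * PI / 180.0; /* fixed step, radians */
    const double sd = sin(d), cd = cos(d);          /* the only stored constants */
    double s = 0.0, c = 1.0;                        /* sin(0), cos(0) */

    for (int n = 0; n <= 2048; n++) {
        int y = (int)lround(32677.0 * s);           /* amplitude from the question */
        printf("%4d %6d\n", n, y);
        /* advance one step using the angle-addition identities */
        double s_next = s * cd + c * sd;            /* sin(a+d) */
        c = c * cd - s * sd;                        /* cos(a+d) */
        s = s_next;
    }
    return 0;
}
```

Each step costs two multiply-adds instead of a `sin()` call, which is the speed win the question is after.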
{"_id": "75072", "title": "How can I stay focused and motivated on a project?", "text": "I'm working on starting my own software development business, but I've noticed that I have major issues getting projects out by their deadlines, and in general getting them out of the "almost-done" stage. I feel my problem is that I like complex programming too much, and I end up rewriting code to make it cleaner/more efficient/less error-prone as a means of putting off more "boring" development. I end up with polished applications at the end, but I spend twice as long on the project as I should have. If I ever want my company to succeed, I need to work on staying focused on what actually needs to be done, so the question is: What techniques do you use to keep yourself focused and motivated on a project, in your career?"} {"_id": "119477", "title": "How to concentrate on one project at a time. Divide and Conquer doesn't work for me", "text": "> **Possible Duplicate:** > Tips for staying focused and motivated on a project I have serious issues concentrating on one project at a time. I can't even follow the divide-and-conquer approach. Once I start a project, I try to get things done as neatly as possible, but very soon I end up messing up so many of its components. I try to do divide and conquer, but my approach doesn't work smoothly, and then I wander here and there among other projects. Sometimes I spend so many hours on trivial issues, which in fact are not even issues. How do I avoid this and become a smooth developer with a nice workflow around my projects? I tend to lose my concentration on the current project and wander off to another project."} {"_id": "167951", "title": "Is it more difficult to upgrade your certification from SQL Server 2008 to 2012 than to get it from scratch?", "text": "I was wondering about the new MCSA certification for SQL Server 2012, and how it seems to be more difficult to upgrade your certification from 2008 to 2012 than to get the 2012 one from scratch. The reason I think that is true is that anyone with any MCTS SQL Server 2008 certification can upgrade it to an MCSA 2012 by passing 2 tests (457 and 458). If you try to get it from scratch, you need to pass 3 tests (461, 462 and 463 - which are pretty much the same as 432, 433 and 448 for SQL 2008). But the thing is, even though it's one test fewer to upgrade, all the skills necessary to pass 461, 462 and 463 are squeezed into 457 and 458, so it seems easier to get it from scratch than to upgrade. Any thoughts?"} {"_id": "196706", "title": "Creating a coding standards document", "text": "I work at a control systems company, where the primary work is SCADA and PLC, along with other control systems stuff. Software development is not really something the company does, apart from little bits here and there - until there was a decision to create an internal project management and appraisal system. This project has been taken on by people who came here as software people originally, and we are mostly junior. The project started off small, so we only documented stuff like the design and the database, but we never really agreed upon a coding format/conventions. We started using StyleCop to make sure we had well-documented code, but I feel we need an official document for coding conventions/practices so we can maintain a good standard, and so that if there is any more major development work in the future, whoever works on it has a good baseline. Therein lies the problem: I have no idea how to draft a document for coding conventions and standards. All I can think of is examples of good vs. bad practice (for example, camel case when naming variables, avoiding Hungarian notation, etc.). We are all competent enough programmers (apparently), but we just don't have a charter for this kind of stuff. To put a point on it, my question is: **What are the key aspects and contents of a good coding standards document?**"} {"_id": "196709", "title": "how safe/sane is it to use git for deployment on my webapp production server?", "text": "I started with SFTP, then switched to WebDAV, and I'm currently using rsync through ssh to deploy any updates/upgrades from my development server to my production server. I'm still not very happy with this system and think using git to deploy may be better, mainly because of the possibility of rolling back any changes instantly with just one command. Apart from using an ssh tunnel to pull and push to the production server (leaving the git service behind the firewall), I've also realized I have to edit .htaccess to deny web access to the .git folder. Is this the correct approach? Should I check anything else, do something in a different way, or go in a totally different direction?"} {"_id": "167958", "title": "Are only companies, and not private programmers, allowed to use Visual Studio 2012 Express for Desktop?", "text": "Programming is my hobby and I've just downloaded Visual Studio 2012 Express for Desktop.
Now I am going to register it, but they want me to give them business information: > ![enter image description here](http://i.stack.imgur.com/YrryP.png) Are only companies, and not private programmers, allowed to use Visual Studio 2012 Express for Desktop? And what should I type in these text fields?"} {"_id": "162709", "title": "Five new junior developers and lots of complex tasks. What now?", "text": "Our company has hired five new junior developers to help me develop our product. Unfortunately, the new features and incoming bug fixes usually require deeper knowledge than a recently graduated developer usually has (threading/concurrency, debugging performance bottlenecks in a complex system, etc.). Delegating (and planning) tasks which they (probably) can solve, answering their questions, mentoring/managing them, and reviewing their code use up all of my time, and I often feel that I could solve the issues in less time than the whole delegating process takes (counting only my time). In addition, I don't have time to solve the tasks which require deeper system knowledge/more advanced skills, and it does not seem that this will change in the near future. So, what now? What should I do to use their time and mine effectively?"} {"_id": "123495", "title": "Why is Zend Framework so complicated?", "text": "I am a web developer and have experience developing several web applications in PHP. I have an idea of developing a product for myself, and I decided to use an MVC-based framework because I really like the idea of MVC and how one can easily manage and modify the application without any difficulty. I chose Zend Framework, and it seems more difficult than learning a new programming language. There are so many things going on at one time, even to run a small application. Similarly, the idea of routing is very complex, as it is new for a core programmer. I know that the guys here have read thousands of questions like the one I am asking, but I am not looking to learn Zend Framework overnight. I am willing to give it as much time as it needs, but until now it's making no sense to me. There are thousands of classes in the Zend library, but how would a noob know where to use a specific class and how to use it? I am still finding it very difficult to understand the bootstrap of Zend Framework and its mapping. I read the manual, follow it, and things start working, but I don't know exactly how they are actually happening. I also still have no clue how models, views and controllers work together and how to plan an application in Zend Framework. When it comes to core PHP, I have a clear idea in my mind of what to do and can easily translate it into code, but in Zend Framework I don't know how to translate my ideas."} {"_id": "216773", "title": "Relation between \"lines of the longest working program\" in a language and familiarity with it?", "text": "In the online application for a certain master's program in computing, it says: > Please list the programming languages in which you have written programs. > For each language, indicate the length in lines of the longest working > program you have written in that language. You may approximate, but only > count those parts of the program that you wrote yourself. 1. I don't quite remember that, and I have never counted the lines of each program. Do programmers always know approximately how many lines are in each of their programs, and keep records of them? 2. What is the relation between "lines of the longest working program" in a language and familiarity with it?
Typically, how many lines would indicate that the programmer is excellent, good, fair, or unfamiliar with the language? Is knowing the "lines of the longest working program" really helpful?"} {"_id": "29212", "title": "How often do you run & test your code while programming?", "text": "Especially when writing new code from scratch in C, I find myself writing code for hours, even days, without running the compiler for anything but an occasional syntax check. I tend to write bigger chunks of code carefully and test thoroughly only when I'm convinced that the code does what it's supposed to do, by analysing the flow in my head. Don't get me wrong - I wouldn't write 1000 lines without testing at all (that would be gambling), but I would write a whole subroutine and test it (and fix it if necessary) after I think I'm finished. On the other side, I've seen mostly newbies who run & test their code after every line they enter in the editor, and who think that debuggers can be a substitute for carefulness and sanity. I consider this to be a lot of distraction once you've learned the language syntax. What do you think is the right balance between the two approaches? Of course the first one requires more experience, but does it affect productivity positively or negatively? Does the second one help you spot errors at a finer level?"} {"_id": "53649", "title": "Business layer access to the data layer", "text": "I have a business layer (BL) and a data layer (DL). I have an object o with a child-object collection of type C. I would like to provide semantics like the following: o.Children.Add("info"). In the BL I would like to have a static class that all business layer classes use to get a reference to the current data layer instance. Is there any issue with that, or must I use the factory pattern to limit creation to a class in the BL that knows the DL instance? * * * Let me clarify. In the past, when defining a DL, I created an interface IDL that the DL implements. The only objects I allow to be created from my BL are the factories, which in their constructors take a reference to the IDL, i.e. IDL idlRef = new DataLayer(); IBlFactory iFac = new BLFactory(idlRef); IBLA oBLA = iFac.GetBLA(...); As I have tried to work statics and singletons out of my system, my factory creates all the objects and always passes the reference to the IDL to the new objects. Some on my team have complained about my copious use of interfaces and factories and would like to utilize concrete classes from the BL directly. So the issue is: if you have an object which can be created (i.e. a new object), are you better served by utilizing the factory, i.e. IBLA oBLA = iFac.GetBLA(); or by forcing the client to always pass the reference to the DL to the new object, i.e. BLA oBLA = new BLA(idlRef); or would testability really be harmed by having one static property in the BL: static IDL CurrentDL; allowing, albeit with some breaking of encapsulation, a more succinct style, assuming that each BL object will know what the current DL is?"} {"_id": "53648", "title": "What are the best and worst policies you have seen used to run a programming team?", "text": "If I were to begin managing a team of programmers (which I'm not, I'm just asking out of curiosity), what are some of the office / team policies you have seen that are either particularly conducive or particularly prohibitive to productivity and teamwork? Some of the well-known bad ones include regular overtime, micromanagement, not having admin rights, very strict hours, and endless meeting requirements.
What else is there to avoid, and what interesting policies have you seen that do wonders for a team?"} {"_id": "163253", "title": "What are the best education options for software engineers after an undergraduate degree?", "text": "Having recently completed a BSc (computer science), I am working as a software engineer. With one year of experience, what educational options are available to me? I want to move further into management. Please provide suggestions. I am considering an MSc, but I have gotten negative replies about it from lots of people."} {"_id": "163250", "title": "Showing user and user interface interaction on a sequence diagram", "text": "I need to draw a sequence diagram, and I am not sure exactly whether it is correct to show interaction between the user and the user interface in this diagram. For example, I want to make a panel visible or invisible based on the user selecting a check box. Is it correct to have this on a sequence diagram? And how can I draw it? Thanks."} {"_id": "168285", "title": "Difference between a REPL and interactive shell", "text": "Noob question. I am not quite able to tell the difference between a REPL and an interactive shell just by reading the definitions on Wikipedia. Wikipedia notes that a REPL is a particular kind of interactive language shell. Is it a proper subset, though? The Wikipedia definition seems to restrict the terminology REPL to Lisp-like languages, whereas the stated properties don't really contain any distinguishing features. In particular, would it be correct to call IPython a REPL?"} {"_id": "195176", "title": "How does the dependency inversion principle work in languages without interfaces?", "text": "In C#/Java, the dependency inversion principle is often demonstrated by high-level classes that depend on an interface/abstraction (that they own). Low-level classes implement the interface, thus inverting the dependency. I wonder how it is applied in object-oriented languages with no interfaces. For example, I'm not an expert in Ruby, but it seems that in this language you can't define interfaces like in Java/.NET."} {"_id": "63151", "title": "Any good tools for managing lists of tasks?", "text": "We are changing how we manage low-priority admin, support and development tasks. The plan is to have a 'stack' of tasks which anyone can pick up if/when they are light on work. We would like a tool that makes it easy to find tasks, check them out and work on them, as well as to create and prioritise them. We could do this at a simple level using a custom SharePoint list, as we have that available and it fits in pretty well with our environment, but has anyone got any experience of using third-party tools for this? It is somewhat similar to a standard ticketing system, so I guess we could look at things like Bugzilla, but I don't have any experience with such things (the incident management system we use in house isn't really fit for the job). Has anyone used one in this way? Any other tool suggestions would be welcome."} {"_id": "176338", "title": "When do you typically write a software module yourself vs. buying an existing product?", "text": "I'm trying to find out your decision rationale for when to do what. It would be great to learn from you. I'm happy to provide more context, but I want to keep it general for now."} {"_id": "142069", "title": "Using macro as an abstraction layer", "text": "I am having a discussion with a colleague about using a macro as an (extremely) thin layer of abstraction vs. using a function wrapper. The example that I used is the macro way:
#define StartOSTimer(period) (microTimerStart(period)) versus the function wrapper method: void StartOSTimer(unsigned int period) { microTimerStart(period); } Personally, I like the second method, as it allows for future modification; the #include dependencies are also abstracted away."}
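There is a third option worth mentioning alongside the two in the question: a `static inline` function, which combines most of the advantages of both. A minimal sketch (assuming a C99-or-later toolchain; the parameter type is a guess, since the question omits it):

```c
#include <stdint.h>

/* the underlying timer call from the question (signature assumed here) */
void microTimerStart(uint32_t period);

/* Like the macro, this normally compiles down to a direct call (or is
 * inlined away entirely), but unlike the macro it is type-checked,
 * visible in a debugger, and evaluates its argument exactly once. */
static inline void StartOSTimer(uint32_t period)
{
    microTimerStart(period);
}
```

Placed in a header, it keeps the zero-overhead property people usually want from the macro while keeping the maintainability of the function wrapper.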
{"_id": "236765", "title": "How to provide a service with RESTful API?", "text": "Generally speaking, RESTful APIs are very good at representing resources and collections of resources. * http://example.com/resources * http://example.com/resources/item17 And we are fine as long as we work with resources. However, what should be done when you need to expose an API which performs an action that doesn't create, update or delete a resource? A couple of examples: * You have an algorithm (for example, add two numbers) * You have an action which uses an external system (for example, send an email). I have seen two approaches: **Represent an action as a subresource** POST http://example.com/resource/item17/sendemail On one hand, it's straightforward. On the other hand, it starts smelling SOAPy (read: RPC calls). **Represent an action as a standalone resource** POST http://example.com/emailsender This looks more RESTful. However, it doesn't feel right either (only one of the CRUD actions is implemented). This resource actually doesn't have a representation. I am not sure; maybe there are other methods which I have missed. The question is: "Is there a consensus on this subject? What is the preferred way to do it?""} {"_id": "112600", "title": "When a co-worker asks you to teach him what you know, do you share the information or keep it to yourself?", "text": "I am the only developer/DBA in a small IT department. There is another guy who can do it, but he's more of a backup, as he spends his time working on IT support stuff. Anyway, we have a new hire and I've been training him on the IT support side of things. It seems like he is eager to learn and be productive, but nobody is going out of their way to show him anything. He's been asking me to teach him database design, SQL, etc. For some reason, the boss has him working with me. He is also sending him to meetings that I go to, yet he hasn't said outright that I have to teach him anything. Meanwhile, the boss insists on doing a lot of the support work himself (i.e. he hoards information and doesn't delegate to anyone). I'm a little bit on the fence. First, the new guy doesn't yet have a strong foundation in the IT support functions, which is where we really need help at this time. Second, I paid thousands of dollars for classes and spent many hours learning this stuff. Is it my responsibility to teach others skills that I had to learn on my own? Others here really aren't quick to share information, so I'm not sure that I should either in this environment. I do know that if I get him involved and get him started on projects, then I'd be responsible for his mistakes. I had to take the heat for the other guy when he made mistakes. OTOH, the guy wants to learn something, is motivated, and I don't want to stop him. We've had our share of slackers in the group, and it's nice to have someone who is willing to work for a change. So what would you guys do? Would you teach him the skills that you spent all of that time learning? Set him up with a test database on his PC and recommend some books for him? Encourage him to get a strong foundation in IT support first and ask later? We haven't had a new hire in years, let alone one that is interested in what I do, so this is new to me."} {"_id": "112601", "title": "Designing around the constraints of external services in a client-server architecture", "text": "Suppose you have a client that interfaces with a server, which in turn invokes an external service to fulfill some of the client's requests. I'm designing the client and server and need to accommodate the limitations of the external service. I have these options: 1. Encapsulate (as much as possible) all code dealing with the service's limitations inside the **server** application. 2. Encapsulate (as much as possible) all code dealing with the service's limitations inside the **client** application. 3. Design both the client and server applications around the external service's limitations. I had chosen option #1 because the server application was 'closer' to the external service and I didn't want to have to redesign the client if and when the external service was improved. However, now I'm being asked to choose option #3 by the server-side team, so we can deal with the external service's limitations in the API instead of by handling exceptions. Is there a design pattern I can reference to convince them (or myself) that one option is preferred? (Separation of concerns looks promising, but I haven't convinced myself that it's sufficient, because it could also argue for option #2.)"} {"_id": "115374", "title": "Using SQLDataSource / DataBound Controls in ASP.NET - Bad Practice?", "text": "When programming in ASP.NET, you can get very quick, effective functionality out of using databound controls (`GridView`, `FormView`, etc.) with an `SQLDataSource` control on the page (in my opinion, anyway - I could be alone on that one). For instance, I often use these types of controls to create basic search functionality - like looking at order history in a shopping cart application. However, this answer on SO got me thinking: **Is the use of the `SQLDataSource` control considered bad practice?** I've not been able to find any resources online that substantiate this claim, so I thought I'd ask here. The question on SO regards calculating the grand total of a column in a `GridView`. The downsides of the `SQLDataSource` pointed out by the answerer (in their comment) include: * ASPX is presentation, not business logic. * This practice causes you to repeat SQL on every page that needs it. * When you make a change to the DB, you have to change the SQL in the aspx. * It makes simple tasks like displaying a grand total hard. * The provided solution calculates the grand total in the `DataBound` event - what if you don't use a control with a `DataBound` event? * It's not testable. I'm somewhat new to ASP.NET development (coming from a Java / C# background), and I just wanted to make sure I'm not heading down the wrong path here (by using the `SQLDataSource` control)."} {"_id": "179306", "title": "Remembering user credentials in a standalone application", "text": "I'm developing a standalone application using Java. I have a login screen wherein the user enters his username and password. For each instance of the application, the user has to enter his credentials. From a usability standpoint, I thought of adding a "remember me" check button. My question is: as I am not using any database, how do I persist the user details? My rough thought is to store them in a properties file and retrieve them from there. I think there may be better approaches than the one I had.
Any Ideas and suggestions?"} {"_id": "179307", "title": "Interface extension", "text": "Suppose that I have an input stream interface, which defines a method for reading data. I also have a seekable interface which defines a method for seeking. A natural way of defining a input file is then to implement both input stream and seekable. I want to construct a data decoder from the input stream interface so I can read data from a file or from another stream. The problem is that I also want to implement seek functionality to the data decoder, since I want to be able to step individual records not raw bytes. This is not possible if I only provide an input stream, which does not have the bytewise seek method. Should I skip the seekable interface and add the seek method to input stream instead and force all streams to at least leave it as a nop. EDIT: The decoder does not need to seek if the client does not request a seek operation. Also the stream is associated with the decoder and cannot be changed after initialization"} {"_id": "152006", "title": "Is a readonly field in VB.NET thread safe?", "text": "Is a readonly field in VB.NET thread safe? For example, see the code below: Class Customer ReadOnly Name As String ReadOnly ZIP As Integer = 98112 Sub New(ByVal Name As String) Me.Name = Name End Sub End Class"} {"_id": "251912", "title": "Why Increment Pointers?", "text": "I just recently started learning C++, and as most people (according to what I have been reading) I'm struggling with pointers. Not in the traditional sense, I understand what they are, and why they are used, and how can they be useful, however I can't understand how incrementing pointers would be useful, can anyone provide an explanation of how incrementing a pointer is a useful concept and idiomatic C++? This question came after I started reading the book A Tour of C++ by Bjarne Stroustrup, I was recommended this book, because I'm quite familiar with Java, and the guys over at Reddit told me that it would be a good 'switchover' book."} {"_id": "251918", "title": "How do I ensure my site will be crawled when articles are generated by the database?", "text": "I wasn't sure how to ask the question. But basically, it's a textbook scenario. I'm working on a site that's article based, but the article information is stored in a database. Then the page is rendered with the information in the database based on the requested article id: For example: `http://www.mysite.com/articles/9851` I'm new to SEO, so I'm wondering how engines are able to crawl the contents of pages like this and/or what I need to do in order to ensure that it _will_ be crawled. So for instance, this site. All of the articles/posts on this site appear to live in a database somewhere. The URL has an ID which looks like it is used to tell the server which data to use to generate the page -- so the page doesn't actually exist somewhere, but it's template does. When I search google, I might find one of these posts based on the content of the post. I understand that crawlers normally just find a page and follow it's links and follow its links' links and so on, but how does that work when the site is search based like this? Do have to create a page that randomly picks articles out of the database so that the crawler can see it or something?"} {"_id": "26931", "title": "UI automation patterns and best practice for desktop applications", "text": "**Background** I'm currently automating some tests for a plugin for MS Office. We are creating Coded UI tests in VS 2010. 
{"_id": "251918", "title": "How do I ensure my site will be crawled when articles are generated by the database?", "text": "I wasn't sure how to ask the question, but basically, it's a textbook scenario: I'm working on a site that's article-based, but the article information is stored in a database. The page is then rendered with the information in the database, based on the requested article id. For example: `http://www.mysite.com/articles/9851` I'm new to SEO, so I'm wondering how engines are able to crawl the contents of pages like this and/or what I need to do in order to ensure that it _will_ be crawled. So for instance, this site. All of the articles/posts on this site appear to live in a database somewhere. The URL has an ID which looks like it is used to tell the server which data to use to generate the page -- so the page doesn't actually exist somewhere, but its template does. When I search Google, I might find one of these posts based on the content of the post. I understand that crawlers normally just find a page, follow its links, then follow its links' links and so on, but how does that work when the site is search-based like this? Do I have to create a page that randomly picks articles out of the database so that the crawler can see it, or something?"} {"_id": "26931", "title": "UI automation patterns and best practice for desktop applications", "text": "**Background** I'm currently automating some tests for a plugin for MS Office. We are creating Coded UI tests in VS 2010. I suppose I could use the " _Coded UI test builder_ " tool, but it does not really suit my particular case. Because of this, I created my own UI Map class and extension methods for each UI control/map, where I add different action functionality - for example, pressing buttons or asserting some UI values. I am new to this area, and I'm also new to working as an automation tester. **The question** Would people be kind enough to share their experience and advice on good practices for test automation of desktop applications, from a programming/design point of view?"} {"_id": "232998", "title": "How to reuse spaghetti code", "text": "We're working on new firmware for our new V2 device. The company has an older V1 firmware (and hardware). The hardware versions are similar to each other (but there are some differences), so basically we could use the V1 firmware with "some" modifications. Unfortunately, the V1 firmware is spaghetti code. We need to support both hardware versions, including new features, so the code should be maintainable. We've tried to create a backlog with small stories. Each story would show a new working function to the user while extracting a reusable part of the V1 code into a library project and using it in V2. This would have kept both the V1 and V2 firmware always releasable, and we would have been able to show a working demo early (with limited functionality at first). In time it turned out that management expects this to be a short project (since the hardware versions are similar), and they suggest another approach: move everything to the library project and modify only those parts which are different. This approach seems riskier because we won't be able to demonstrate any progress (the first demo will come later; it's rather all or nothing, as every module has to work more or less for the first demo), and I guess it could lead to unpredictable bug fixing. What should I take into account before deciding between the two approaches? Is there a third one?"} {"_id": "169887", "title": "Given a project and working with 1 other person - never worked with someone before", "text": "I'm taking a class where I work with a partner to implement the link layer of the OSI model. I've programmed with a partner once before and it went badly. Is the goal to divide the work up and decide who does what, or should one person code while the other reviews, switching roles after a while? Any tips are much appreciated. I literally know nothing about working with a partner to program, so even if it's basic, please tell me."} {"_id": "68738", "title": "What's a good question to measure the candidate's capacity to abstract?", "text": "It's part of my job to interview new candidates, and I came up with a test that pretty much measures the coding skills of the candidates. However, I couldn't (yet) come up with a good question to measure a candidate's capacity to deal with abstraction. Earlier I had the following question in my test: > Suppose a tree structure where each node stores an integer value. Draw the > simplest class diagram using UML that represents the domain model described. Then I'd ask: > Now change the model in the question above to represent a leaf (i.e. a node that > has no children). Eventually, after several interviews, I realized those two questions were not giving me any clue as to whether a candidate understood abstraction. Some people knew the answer but showed me during the interview that they actually didn't have a clue when it came to abstracting more complex subjects.
I can't really have a very deep complex question in this test because: 1. The total time for the entire test is ~2h and they already spend about 1h to 1h30 on the first part (coding skills) 2. A good candidate might fail on a specific complex question and that would not really prove they can't abstract at all After reading this article I got intrigued when he says: > Inventing questions that force candidates to understand pointers without > using C isn\u2019t too hard. Nearly any question that forces candidates to invent > a data structure (e.g., a hashtable, an AVL tree, or the like) will test how > they handle indirection, the idea that having a thing is different from > having a pointer to that thing. So I\u2019ve picked a question that forces > candidates to design a data structure. And, sure enough, I see candidates > who have a lot of programming experience, but who don\u2019t \u201cget it\u201d, completely > bomb out in my interview. The way I see it, inventing a data structure is a good way to measure abstraction skills. So my question is, does anyone know a good question (or a set of small questions) that could measure abstraction skills in a test? I'm looking for the kinds of questions that: 1. Don't depend on any language in particular 2. Can be answered by smart people 3. Can't be answered by people who know all the books by heart 4. Will take 40 minutes on average to solve 5. Will not produce a huge number of pages as an answer"} {"_id": "232996", "title": "Is a genetic algorithm needed when computation is infinitely fast?", "text": "From what I understand, genetic algorithms try out multiple variations and evaluate the fitness of each variation. Then they select the best variations, change them a bit and continue the process with the next generation. But what if we have unlimited computation resources? Can we then just try out all possible variations and evaluate their fitness without resorting to the complex process of breeding new generations? In other words, are genetic algorithms only needed when computation is expensive and when a brute-force method is impossible? Or do they add other benefits as well?"} {"_id": "149563", "title": "Should we avoid object creation in Java?", "text": "I was told by a colleague that in Java object creation is the most expensive operation you could perform. So I can only conclude that I should create as few objects as possible. This seems to somewhat defeat the purpose of object oriented programming. If we aren't creating objects then we are just writing one long class, C style, for optimization?"} {"_id": "255212", "title": "What to do if you're stuck on a project because you got dumped in without the information you need?", "text": "I'm a junior developer and I'm supposed to be debugging this project, but the simple fact is that neither I, nor anybody else here (the only person who's worked on it just left recently) knows much about how it works. I'm frustrated because I'd like to get this done and get it off of my plate, but that's not really possible at the moment. Is this normal?"} {"_id": "255210", "title": "Large internal features on kanban", "text": "These days I'm reading some tutorials/advice about the Kanban process and Agile methodologies, but I have two big questions about the Kanban methodology. What I understood is that user stories are about adding business value, like a feature which our customers can use.
The main problem is we have a lot of features that look trivial (a simple form that manages synonyms for Solr, for example) but are hard to implement and may take several days/weeks to complete, due to scaling/coordination details on our cluster (refreshing the synonyms using ZooKeeper and reloading from the database, for example). How do you break this kind of user story down into smaller ones, since Kanban specifies that every user story has to be independent? Can I break it down into user stories about implementation details? (implement the coordination algorithm, implement the X storage service, etc.) And my last question is about architectural changes/refactoring. Since it doesn't add any business value (it is invisible to our customers), how do you reflect these tasks in this methodology?"} {"_id": "219531", "title": "Which one of these designs is preferred?", "text": "In the case of an application with a single simple responsibility (e.g., a simple replacement for `grep` or `wc`), which of these designs is preferred and why? I find that they are all testable and they all do their job. Does any of them have any significant advantages over the others? 1: All methods in the main class: public class App { public static void main(String[] args) { App app = new App(); AppInput input = app.readInput(); AppOutput output = app.processInputToOutput(input); app.writeOutput(output); } public AppInput readInput() { .. } public AppOutput processInputToOutput(AppInput input) { .. } public void writeOutput(AppOutput output) { .. } } 2: Basic methods in a separate class, main class only making calls to read and process input, and output result: public class App { public static void main(String[] args) { AppWorker worker = new AppWorker(); AppInput input = worker.readInput(); AppOutput output = worker.processInputToOutput(input); worker.writeOutput(output); } } public class AppWorker { public AppInput readInput() { .. } public AppOutput processInputToOutput(AppInput input) { .. } public void writeOutput(AppOutput output) { .. } } 3: Have all the work in a separate worker class, with the main class only instantiating the worker and triggering it to do its work: public class App { public static void main(String[] args) { AppWorker worker = new AppWorker(); worker.doYourWork(); } } public class AppWorker { public void doYourWork() { AppInput input = readInput(); AppOutput output = processInputToOutput(input); writeOutput(output); } public AppInput readInput() { .. } public AppOutput processInputToOutput(AppInput input) { .. } public void writeOutput(AppOutput output) { .. } }"} {"_id": "245693", "title": "Generating all the words accepted by a deterministic finite automaton", "text": "1. Imagine you've got a finite deterministic automaton `A` defining some finite regular language `L`, consisting of `N` words. 2. The first question is: how would you determine the number of words `N`? 3. Then, I want to enumerate all these words, i.e. create a function like `f: {1..N} -> L`. Any suggestions? And the following restrictions apply: * `N` may be quite large, for instance `[0-9A-Z]{8}` yields `N = 36^8` different combinations, so storing it all in memory is not an option. * I want to be able to persist the state. For instance, compute `f(x)`, then store all the needed intermediate data in the DB, and some time later compute `f(x+1)`. What I've done for now is a simplification for fixed-length words. Any suggestions for doing this for an arbitrary DFA? I suspect this is a common problem, so any algorithm references are welcome.
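To make the fixed-length simplification concrete, this is roughly the counting/unranking shape I mean, sketched in Java (all the names are mine, and I assume a total transition table `delta[state][symbol]`):

    // count[s][j] = number of accepted words of length j readable from state s.
    class DfaEnumerator {
        static String unrank(long rank, int wordLength, int start,
                             int[][] delta, boolean[] accepting, char[] alphabet) {
            int states = delta.length, symbols = alphabet.length;
            long[][] count = new long[states][wordLength + 1];
            for (int s = 0; s < states; s++) count[s][0] = accepting[s] ? 1 : 0;
            for (int j = 1; j <= wordLength; j++)
                for (int s = 0; s < states; s++)
                    for (int a = 0; a < symbols; a++)
                        count[s][j] += count[delta[s][a]][j - 1];
            // N = count[start][wordLength]; rank is zero-based, 0 <= rank < N.
            StringBuilder word = new StringBuilder();
            int s = start;
            for (int j = wordLength; j >= 1; j--) {
                for (int a = 0; a < symbols; a++) {
                    long viaA = count[delta[s][a]][j - 1];
                    if (rank < viaA) { word.append(alphabet[a]); s = delta[s][a]; break; }
                    rank -= viaA;
                }
            }
            return word.toString();
        }
    }

The `count` table is the only intermediate state I would need to persist between calls to `f`, and `N` falls out as `count[start][wordLength]`; for an arbitrary DFA over a finite language I would presumably sum such counts over every accepted word length.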
Thanks!"} {"_id": "216191", "title": "How should I structure my database to gain maximum efficiently in this scenario?", "text": "I'm developing a PHP script that analyzes the web traffic of my clients websites. By placing a link to a javascript on the clients website (think of Google Analyses), my script harvests information like: the visitors IP address, reference link, current page link, user agent, etc. Now my clients can view these statistics via a control panel that I have build. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the the traffic is stored in one table. You can imagine that this tabel would become very large as some my clients receive thousands of pageviews per day. Furthermore, all the traffic data of each client would be stored in the same table, creating a mess. This is the same for the firewall rules currently, and the invoice and support system. I'm looking for way to structure my database in a more organized way to hold large amounts of data of multiple users. This is the first project that I'm developing that deals with so much data, and would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database will store users data (email,pass,id,etc) admin/website settings. Than each client will have an unique database labeled prefix_userid, which carry tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performances (that is spreading the data over muliple databases). I have a solid VPS, but would like to safe and be as effient as possible."} {"_id": "216190", "title": "Help with design structure choice: Using classes or library of functions", "text": "So I have GUI Class that will call another class called `ImageProcessor` that contains a bunch functions that will perform image processing algorithms like `edgeDetection, gaussianblur, contourfinding, contour map generations, etc.` The GUI passes an image to `ImageProcessor`, which performs one of those algorithm on it and it returns the image back to the GUI to display. So essentially `ImageProcessor` is a library of independent image processing functions right now. It is called in the GUI like so Image image = ImageProcessor.EdgeDetection(oldImage); Some of the algorithms procedures require many functions, and some can be done in a single function or even one line. All these functions for the algorithms jam packed into ImageProcessor can be pretty messy, and ImageProcessor doesn't sound it should be a library. So I was thinking about making every algorithm be a class with a shared interface say `IAlgorithm.` Then I pass the IAlgorithm interface from the GUI to the ImageProcessor. public interface IAlgorithm{ public Image Process(); } public class ImageProcessor{ public Image Process(IAlgorithm TheAlgorithm){ return IAlgorithm.Process(); } } Calling in the GUI like so `Image image = ImageProcessor.Process(new EdgeDetection(oldImage));` I think it makes sense in an object point of view, but the problem is I'll end up with some classes that are just one function. What do you think is a better design, or are they both crap and you have a much better idea? Thanks!"} {"_id": "99051", "title": "Should functions always return a success/failure status?", "text": "> **Possible Duplicate:** > When and why you should use void (instead of i.e. bool/int) What is the reasoning behind making all functions (and methods!) 
What do you think is a better design, or are they both crap and you have a much better idea? Thanks!"} {"_id": "99051", "title": "Should functions always return a success/failure status?", "text": "> **Possible Duplicate:** > When and why you should use void (instead of i.e. bool/int) What is the reasoning behind making all functions (and methods!) return a uniform type, usually int, as a status? To me it feels odd; especially when you're calling a getter and it returns a status (which is usually always the same) and sets one of the parameters to the value you're getting."} {"_id": "156583", "title": "Allow app analytics opt-out?", "text": "Given that mobile app analytics reporting can cost a customer data traffic that, unlike a web app, use of the mobile app may otherwise not require, should a mobile app provide a user-accessible setting to allow customers to opt out of app analytics collection or reporting? If and where should a customer be notified that running a given mobile app might capture and transmit analytics data on the app's usage? Are there any app stores where either of the above is a requirement?"} {"_id": "168425", "title": "Hiring Developers - Any tips on being more efficient?", "text": "I represent a software company that is in the process of building a large software development team. We are picky about whom we hire and have a really good retention rate (most of the devs have been here for an average of 5-6 years). We've been spending a lot of developers' and HR time and have a low rate of applications converting to hires. Here's the process we use: * HR Interview on phone - Involves asking basic behavioral and tech questions * Online test - Involves a 30 minute technical test * Technical Phone interview - A 60 minute interview by a developer * Onsite Interview - A 60-90 minute interview by several senior developers Although this process has been working, we've been spending way too much time on interviews. Any thoughts on how this can be done differently? Our goal is to automate any tasks if possible while still retaining the quality of talent. * * * **UPDATE:** Thanks for the responses. Need to clarify a few things. Our aim is to reduce the number of applicants that go from one stage to the other. Here are our current numbers. 1. We receive 1000 resumes 2. 800 resumes pass the HR interview 3. 500 pass the online test 4. 100 pass the initial phone screen 5. 10 pass the onsite and get hired As you can see, we need to do a better job of weeding out the candidates earlier on in the process. Can we do a better job in the way the online test evaluates people? Here are more details on the process based on some responses: * HR Interview on phone - They ask very basic technical questions (What is a CLR?) to weed out as many people as possible * Online test - Has around 10 basic questions with 3 coding questions * Tech phone screen - Covers a variety of technologies. We don't care if the applicant doesn't know everything as long as they can demonstrate they will be able to pick up new technologies and come up to speed quickly * Onsite - Coding questions in front of the developers. More architectural level questions."} {"_id": "209819", "title": "Creating a custom GUI. App/DE/WM?", "text": "I am starting on a project of mine: writing a custom UI for Linux. What would happen is: * The computer would boot into this UI, which would not be the typical taskbar/icons/start-button kind of thing. Think more like a dedicated UI instead of a general purpose one. * It would provide access to Wi-Fi, Ethernet, Bluetooth etc. Basically it would have access to most system resources. Up until this point, I am not planning on having a file manager for the user. The app would take care of this. Sort of the way apps on mobile phones work. My first instinct was to work on (fork an existing) custom DE like Gnome/KDE.
So I read up a lot about window managers and desktop environments, and while window managers seem to be the best option for what I am trying to do, another idea occurred to me which would be much less complicated. I could simply (I know!) write an app which the native OS boots into, without any splash screen etc. So, take a distro like Arch Linux, strip it down to the basics and then build an app on top of that. I would like to get some advice on what the best way to go forward with this would be. Do you guys concur that an app is the better way to go? Please excuse me if the question seems naive. Any suggestions/ideas welcome."} {"_id": "168427", "title": "Forced to write Stored Procedures", "text": "Can you think of some reasons why management would force developers to write and call stored procedures instead of inline SQL statements directly? Even for a very simple CRUD statement, writing a stored procedure takes more time and creates extra workload."} {"_id": "26688", "title": "How important is blogging/tutorial generation to career development?", "text": "Lately I've seen a number of application developer job postings with a dedicated field for a personal blog site. I was wondering how important keeping a blog and generating tech-related content is for career development? How does having one/not having one affect your potential for getting hired?"} {"_id": "187145", "title": "Which controller should I put for a search action that touches many different models?", "text": "I currently have the following models and am designing search functionality that searches within these 3 models: Locations Users User_friends Should I create a search controller, or should I put the action in one of the existing locations, users, or user_friends controllers? If so, should it be pluralized - searches_controller or just search_controller?"} {"_id": "2829", "title": "What licence should I choose for my project?", "text": "I originally thought of Creative Commons; then, while reading a book about WordPress (Professional WordPress), I learned that I should also specify that the product is provided > ... WITHOUT ANY WARRANTY; without even the implied warranty of > MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE and they recommend the GNU GPL. How do I write a license or select one? By the way, what does `MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE` actually mean? Isn't 'without warranty' enough?"} {"_id": "26682", "title": "Use of underscores and earmuffs", "text": "I have seen underscores and earmuffs (like `_good_` or `*good*`) frequently used in conversations among programmers. All I know is that underscores are used to denote private variables or methods in languages like C, Python etc. and earmuffs are the preferred way of indicating that something is a global variable. Do these have any other punctuational meaning when they are used outside code, in mailing lists etc? Does the highlighted word have any hidden meaning or expression added to it? Or, are they just like air quotes?"} {"_id": "209813", "title": "What is a battery backup cache?", "text": "I've read an article about InnoDB performance optimization, and in that post the author repeatedly mentioned something called a "battery backup cache". It is not clear to me what he was talking about, and Google did not help either. My guess is that this is some kind of backup storage in case there is a power outage.
Am I right?"} {"_id": "209814", "title": "How to promote Scala to the management?", "text": "As a developer I like Scala and could tell the management about the technical benefits of the language. But even if management understands that a superior technology has business impact, I fear that they won't understand what I'd be talking about. What arguments would you present to managers with technical backgrounds (computer scientists who have not programmed in many years)?"} {"_id": "209815", "title": "Sql Server Data Tools & Entity Framework - is there any synergy here?", "text": "Coming out of a project using Linq2Sql, I suspect that the next (bigger) one might push me into the arms of Entity Framework. I've done some reading-up on the subject, but what I haven't managed to find is a coherent story about how SQL Server Data Tools and Entity Framework should/could/might be used together. * Were they conceived totally separately, and does using them together go against the grain? * Are they somehow totally orthogonal and I'm missing the point? Some reasons why I think I might want both: * SSDT is great for having 'compiled' (checked) and easily versionable SQL and schema * But the SSDT 'migration/update' story is not convincing (to me): "Update anything" works ok for schema, but there's no way (AFAIK) that it can ever work for data. * On the other hand, I haven't tried the EF migration to know if it presents similar problems, but the Up/Down bits look quite handy."} {"_id": "160632", "title": "Is it good/safe OOP practice to have a method whose only purpose is to send/retrieve data from another class?", "text": "I have a class that performs basic MySQL operations. This is all in PHP. class dbTables { public $name; protected $fields = array(); // array of dbTableField objects public $result_sets = array(); protected $primary_key; // filled using getFields() from __construct() public function __construct($table_name) { $this->name = $table_name; $this->getFields(); } public function removeWhitespace($table_name) { $this->name = $table_name; $this->getFields(); } // etc... } So all you need is the table name to create the object, and the constructor automatically gets a bunch of field data from `INFORMATION_SCHEMA` (names, data types, etc). I also have a method for removing whitespace from a column. It's currently located in another class, but I'm thinking of moving it into the above `dbTables` class. You can see the method below. I don't think it's really necessary to read the method to answer my question, but it could be helpful. protected function removeWhiteSpace($target_field,$abbrev_field,$from_size,$to_size,$options) { // optional conditions based on original character length and resulting character length.
/* options: from_size = all/longer_than/shorter_than * to_size = all/longer_than/shorter_than */ $temp_table = $this->table_name; $sql = \"UPDATE `$temp_table` SET `undo_sku` = `$target_field`, `$target_field` = REPLACE(`$target_field`,' ',''), `$abbrev_field` = 'w'\"; $where = array(); if($options['from_size'] == 'longer_than') { $where[] = \"`$target_field` > $from_size\"; } else if($options['from_size'] == 'shorter_than') { $where[] = \"`$target_field` < $from_size\"; } if($options['to_size'] == 'longer_than') { $where[] = \"LENGTH(REPLACE(`$target_field`,' ','')) > $to_size\"; } else if($options['to_size'] == 'shorter_than') { $where[] = \"LENGTH(REPLACE(`$target_field`,' ','')) < $to_size\"; } //WHERE `$target_field` = '' AND LENGTH(REPLACE(`vnd_sku`,' ','')) <= $string_length\"; $result = mysql_query($sql); } Now, if I do move it into `dbTables`, I will still want to use the functionality in the other class. Should I just instantiate a `dbTables` object whenever I need to use `removeWhitespace`? Or, could I give `dbTables` its own `removeWhitespace` method like this: protected function removeWhiteSpace($target_field,$abbrev_field,$from_size,$to_size,$options) { /* options: from_size = all/longer_than/shorter_than * to_size = all/longer_than/shorter_than */ $dbtable = new dbTables($this->table_name); $temp_table = $this->table_name; $result = $dbtable->removeWhiteSpace($target_field,$abbrev_field,$from_size,$to_size,$options); return $result; } The advantage I see is that now I have a way to limit instantiation of the `dbTables` object in various other classes. Methods that need to use `removeWhiteSpace` can use `$this->removeWhiteSpace` instead of instantiating a `dbTables` object. The class's own `removeWhiteSpace` method takes care of that. I'm kind of new to OOP, though, so I'm just wondering if that is good practice or bad. I also hope I'm asking this question in the right place. I'm pretty sure it's not a StackOverflow question."} {"_id": "202041", "title": "Are there any benefits to removing unused script files in a web site/project?", "text": "VS Web sites/projects come loaded with several .js files, most of which I don't use (e.g., I use a CDN for newer versions of jQuery and jQuery-UI). I know it's safe to remove these unneeded .js files from my projects, but is there any benefit from doing so? Are they deployed to the web server if left alone?"} {"_id": "66250", "title": "What is the most productive way to start and manage the development of a large web application?", "text": "I've searched high and low for a good answer to this question, and as far as I can gather it's just a combination of standard organization tools (keeping a routine, good folder structure, extensive documentation) and making sure you think about each step before you move on. I'm planning on starting a new web application very soon, and I'm finding the volume of choices I need to make almost overwhelming. Although I usually create applications using Django, I've been considering the alternatives recently. Also, things like which host should I go for, which version control system should I use and which bit should I start on first are driving me crazy. I was wondering if anyone had any professional advice for me on how I can better manage what I'm doing so that I get this project off on the right foot."} {"_id": "151673", "title": "What are the key points to evaluate to select a good SMS gateway?", "text": "We are planning to add an "SMS verification account" option for our customers.
(So we will only send SMS. We do not need a short code.) We have found several companies who offer SMS gateways through REST APIs (we use PHP). As we are totally new to the SMS world, we are wondering: What are the key points to pay attention to/evaluate to select a good SMS gateway?"} {"_id": "66254", "title": "Can a user relicense LGPL as GPL or GPL as AGPL?", "text": "The LGPL (we'll just assume version 3 for all in this discussion, for ease) is a less restrictive version of the GPL; likewise, the AGPL is a more restrictive version of the GPL. But is it possible to use LGPL code, make additions (or don't), and relicense it as GPL or AGPL? Can GPL code be modified and relicensed as AGPL?"} {"_id": "66257", "title": "When thinking about dates and times - is midnight today in the past or future?", "text": "This is always a puzzle to me - and I realise that it is not strictly an issue in programming or software development, but it seems to be a reasonably common one in our field. For example, if I were to set an expiry datetime as 2011-04-08 00:00:00 - and given my current local time is 10:45 on the 8th already - have I already expired? Or do I still have half a day or so left? Is there a universal standard for which end of the day midnight 'belongs' to? Or should I take a leaf from the British military, and say that the day ends at 23:59:59 and starts at 00:00:01 and that there is no midnight?"} {"_id": "34246", "title": "Can One Get a Solid Programming Foundation Without Going To College/University?", "text": "First, I have already searched the site and read all the previous "self-taught vs. college" topics. The majority of the answers argued that going to college was the best choice, for two main reasons: 1. Going to college gives you the paper, which is essential to landing jobs, especially in tough economic times. 2. Going to college gives you a solid programming base, teaching you the principles that will be essential regardless of the language/path you take after. Here comes my question: I am not worried about reason 1 at all, because I already have my own company (I build websites/do affiliate marketing) and a stable financial situation, so I am pretty sure I won't need to look around for a job. I am worried about reason 2 though. That is, I want to make sure I'll have as solid a programming foundation as anyone else out there, and I am wondering if that is possible with self-learning. Suppose I take my time to study the very basics, like discrete maths, algorithm design, programming logic, computer architecture, Assembly, C programming, databases and data structures - mostly using books, online resources and lots of coding. Say I spend 1-2 years covering those basics. Do you think my foundation would be solid, or would it still be lacking in comparison to someone who went to college?"} {"_id": "164703", "title": "Useful programming languages for hardware programming", "text": "I am thinking of taking a course next semester called "Digital systems architecture", and I know that we need to program micro-controllers with several programming languages such as C, C++, Verilog, and VHDL. I want to be prepared to take that course, but I need to know whether I need to study these languages more deeply. At this moment, I have taken one course in basic Java dealing with basic methods, data types, loop structures, vectors, matrices, and GUI programming. Must I study Java more deeply and then move on to C and C++?
Besides, I know basic Verilog and VHDL."} {"_id": "164707", "title": "Spring AOP advice order", "text": "In Spring AOP, I can add an aspect at the following locations: * before a method executes (using MethodBeforeAdvice) * after a method executes (using AfterReturningAdvice) * around a method (both before and after a method executes) (using MethodInterceptor) If I have all three types of advice, is the order of execution always as follows? * Around (before part) * Before * Method itself * After * Around (after part)"} {"_id": "164709", "title": "How do software patches and updates work?", "text": "So, how exactly do software patches work? If there is a certain bug in the source code of a program, how is this source code changed when one installs a patch? After the patch has been installed, how is the program 'automatically' rebuilt?"} {"_id": "157983", "title": "Pair Programming and ISO 27001", "text": "I\u2019ve been working in an eXtreme programming team and doing pair programming for over 7 years in a Windows environment. When we first started doing it, someone would log in with their Windows credentials and therefore all access to domain resources, and more specifically version control, would be accountable to that Windows user. Eventually we evolved to having Windows pairing accounts for specific pairing stations (e.g. pairA, pairB, PairC etc\u2026). All the devs know the passwords to these accounts. Accountability for commits (check-ins) is achieved by putting the programmers' initials in the comment during the commit. Up until now this has worked fine for us, but my company is currently going through an ISO 27001 audit and this was flagged up by the auditor as a risk. I have a number of possible solutions, such as creating a pairing account for every pair combination, but I really would like to know whether anyone else has encountered this problem and how they solved it. What solution was acceptable to the auditors?"} {"_id": "157984", "title": "How to manage, in practice, licence files when combining GPL and BSD licensed code?", "text": "I am writing code that uses one library with a GPL (not LGPL) license, and one with the 3-clause BSD license. Since I link to a GPL-licensed library, my code will need to be GPL as well. How should I, in practice, deal with the original LICENSE.txt from the BSD library? (A) Can I distribute a project so that the main source code is GPL-licensed, and then some subdirectory is BSD-licensed? (B) If I were not only to link to libraries, but to use and combine the BSD and GPL code in a more involved manner, what to do with the LICENSE.txt then? The 3-clause BSD text says: "Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer." so apparently I should retain the copyright notice, and that list of conditions, somewhere. But then I'll also need to put the GPL license txt-file somewhere. Further, apparently I don't need to retain the "Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:" part of the BSD license text, since it only tells me to retain the other parts. So, how, and in which text files, should I in practice organize the GPL license text and the parts of the BSD license and copyrights that I retain? EDIT: So in case B, I would be taking 3-clause BSD licensed code, and redistributing it under the GPL, which is permitted, as the 3-clause BSD license is (one-way) compatible with the GPL.
I am just asking how to deal with the license texts and text files in practice."} {"_id": "220192", "title": "Using raw types in Java method signatures", "text": "In general I try to avoid using raw types in the signature of methods in the libraries I develop. However, lately I have been starting to relax this (self-adopted) rule of thumb, and I am starting to write raw types more often than before. For example, assuming a method receiving _any_ list as a parameter (i.e., the type of the list members is not important), I was tempted before to declare it as: void myMethod(List<?> list) {...} But now I am starting to write more and more: void myMethod(List list) {...} Because it is shorter. My concrete question is: In the scenario where type parameters are not important, is using raw types in method parameters considered a good practice? If not, why not? The only problem I see is being condemned to see an eternal warning in the IDE I use (although probably that can be deactivated somewhere), but I would like to be sure I am not missing something else. **Update:** Got fully convinced I should use generics everywhere after reading the paragraph below: > Use of raw types is discouraged. The Java Language Specification even states > that it is possible that future versions of the Java programming language > will disallow the use of raw types."} {"_id": "73765", "title": "Should a EULA be translated?", "text": "I have an application which will be available in 4 languages: French, German, Danish and English. All the text in the app will be translated, but should the EULA also be translated? What is the common way of handling this? The lawyer has accepted the English version of the EULA and I assume all the translated versions would then also have to be accepted. Is it okay just to use an English EULA?"} {"_id": "73763", "title": "How to Pick Which MCPD Certification is Appropriate", "text": "I've been developing on multiple platforms for years, including Java, PHP, Flash/Flex, ASP.NET (including MVC), Ruby on Rails, and many others. I've decided (for various reasons) that it's time to pick one technology to learn deep, instead of learning many technologies wide; and I chose .NET. I've decided that one crucial move will be to grab some Microsoft certifications -- if only to show that I'm serious about this as my technology of choice. But **how do I choose which certification to take?** There are several of them, with similar-sounding descriptions. Although I will probably choose between **the windows and web versions of the MCPD.** I'm targeting team lead and project-manager style jobs, if that helps. **Edit:** The reason I want to do certification is because my resume reads as: * 3 years JEE * 3 years C++ * 2.5 years Flash/Flex * 2 years .NET * 1 year Ruby on Rails * 1 year PHP When I say I have "ten years of experience" and apply for a job that says "5+ years of experience in .NET," I know I can learn and do the work. But I don't have those five years; I gave them to Flex, C++, and JEE. I think considering my specific situation, a certification is a (small) step in the right direction."} {"_id": "197153", "title": "Using third party/ open source controls", "text": "I usually feel reluctant to use any third party or open source controls while coding iPhone apps in Objective-C, due to the following reasons. Open source controls are developed in an incremental manner, so once one is integrated I have to keep looking for updates and will have to update my code. This will increase effort.
Open source controls may introduce memory leaks and thus make the application inefficient in terms of memory. Third party libraries are difficult to manage, and they can conflict with your classes. Please correct me on these points, so that I may feel comfortable with open source code."} {"_id": "101678", "title": "Is SCRUM Methodology a good technique to go with in crucial deadline project?", "text": "I just joined a firm and they are working in (what they call) a Scrum methodology. They have a very tight deadline, and the project's development doesn't seem to have any direction, nor any clear picture of 'this is what we want.' I am developing a module which doesn't have a database, nor do I have any information about what needs to be linked to this module, and the people who are linked to the module are swamped with their own work, which is why I am not able to get their support. Now, could anyone tell me whether this is a good way to proceed in this methodology, when there isn't even a DB table for the module to be developed? Please suggest something. Thanks"} {"_id": "207674", "title": "How to formalize feature requests", "text": "I'm new to developing software and have been handling feature requests for an internal web-app I built. Sometimes the feature requests are straightforward and require minimal business logic for me to implement, so I talk to the person for a bit, write their requirements down, and get to work. However, I'm starting to work on feature X, where X has dark corners of business logic, and special-case scenarios keep popping up because I didn't ask the right questions and/or the person I'm speaking with didn't think about mentioning them. So I'm curious, how do professionals handle this process? Some things that I thought of are: 1. Require feature requests be written down with the appropriate requirements. 2. Understand their job well enough so I can do mine. An example to illustrate a similar problem is implementing government regulations in code. I researched the regulations, created a flowchart and went from there. I could have saved a couple of days had someone well-versed in said regulations written the requirements down and handed them to me. I'm doing the same thing for feature X, except nothing is written down, so I'm unable to deduce their business logic without going through their job step by step. Even that fails sometimes, because some special case wasn't present that day. Using the above example, is it the responsibility of the developer to research this, or is it something that should be provided? Any suggestions for making this process go a bit smoother?"} {"_id": "32578", "title": "SQL: empty string vs NULL value", "text": "I know this subject is a bit controversial and there are a lot of various articles/opinions floating around the internet. Unfortunately, most of them assume the person doesn't know what the difference between NULL and an empty string is. So they tell stories about surprising results with joins/aggregates and generally give slightly more advanced SQL lessons. By doing this, they absolutely miss the whole point and are therefore useless for me. So hopefully this question and all its answers will move the subject a bit forward. Let's suppose I have a table with personal information (name, birth, etc) where one of the columns is an email address with varchar type. We assume that for some reason some people might not want to provide an email address.
When inserting such data (without an email) into the table, there are two available choices: set the cell to NULL or set it to an empty string (''). Let's assume that I'm aware of all the technical implications of choosing one solution over another and that I can create correct SQL queries for either scenario. The problem is that even when both values differ on the technical level, they are exactly the same on the logical level. After looking at NULL and '' I came to a single conclusion: I don't know the guy's email address. Also, no matter how hard I tried, I was not able to send an e-mail using either NULL or an empty string, so apparently most SMTP servers out there agree with my logic. So I tend to use NULL where I don't know the value and consider the empty string a bad thing. After some intense discussions with colleagues I came up with two questions: 1. Am I right in assuming that using an empty string for an unknown value causes a database to "lie" about the facts? To be more precise: using SQL's idea of what is a value and what is not, I might come to the conclusion: we have an e-mail address, just by finding out it is not null. But then later on, when trying to send an e-mail, I'll come to a contradictory conclusion: no, we don't have an e-mail address; that @!#$ database must have been lying! 2. Is there any logical scenario in which an empty string '' could be such a good carrier of important information (besides value and no value), which would be troublesome/inefficient to store in any other way (like an additional column)? I've seen many posts claiming that sometimes it's good to use the empty string along with real values and NULLs, but so far I haven't seen a scenario that would be logical (in terms of SQL/DB design). P.S. Some people will be tempted to answer that it is just a matter of personal taste. I don't agree. To me it is a design decision with important consequences. So I'd like to see answers where the opinion about this is backed by some logical and/or technical reasons."} {"_id": "220199", "title": "How to organize methods that check username/email availability in a REST API?", "text": "I am developing a RESTful API in my project, but I do not have experience with that. To fill these gaps I've been watching some videos, mainly from Apigee, which are great. One situation has brought a lot of discussion to my project, and I hope you can help me clear it up. The client using the API is able to create a user by posting to my /Users URL, but it has some requirements, such as: name format, username availability, email availability, valid region and many others. I can easily validate the name format in my client app; that would be easy. The problem is when it comes to availability (mainly whether the email and the username are available to be used or are already taken). We have two different lines of thought on my team: the first one would break each verification into a different URL, something like /Users/EmailAvailable and /Users/UsernameAvailable or /Available/Email and /Available/Username They claim that it would make the interface cleaner (more intuitive) and would also keep cohesion, since one method has only one responsibility. The second line of thought is to have only one URL, /User/Verify, and it will verify either the username, email or both, depending on the JSON it receives. They claim that it would only require one trip to the server and that cohesion is not a problem because, since the URLs are not necessarily mapped to methods, they can divide the calls into methods that do one thing each. They also claim that the interface shouldn't care about cohesion.
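To make the two candidate shapes concrete, here is a rough sketch of both in JAX-RS style (every path, class and method name here is just my placeholder, not something we have settled on):

    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/users")
    public class UserAvailabilityResource {

        // First line of thought: one URL per check.
        @GET
        @Path("/available/email")
        public Response emailAvailable(@QueryParam("email") String email) {
            boolean available = true; // look up the email here
            return Response.ok("{\"available\": " + available + "}").build();
        }

        @GET
        @Path("/available/username")
        public Response usernameAvailable(@QueryParam("username") String username) {
            boolean available = true; // look up the username here
            return Response.ok("{\"available\": " + available + "}").build();
        }

        // Second line of thought: a single verify URL; the JSON body decides
        // which checks (email, username, or both) actually run.
        @POST
        @Path("/verify")
        @Consumes(MediaType.APPLICATION_JSON)
        public Response verify(String requestJson) {
            // parse requestJson, dispatch to one small method per check,
            // and return the combined result in a single round trip
            return Response.ok().build();
        }
    }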
My questions: 1. Are these arguments about cohesion from the second line of thought valid? 2. How would you do it? (Either one of those, or any other approach that you know of.) Thanks!"} {"_id": "144052", "title": "How to convince teammates to use TDD", "text": "I am the only person on my team that uses TDD. How do I get them to use it? I am annoyed that when I pull, someone's code will break my tests and I am the one who has to fix them. Will using GitHub forks and pull requests solve this problem, so that the tests need to pass before a pull request is accepted? I am not the project manager and I can't seem to convince her to use it."} {"_id": "73233", "title": "Learn Java resource (for programming noobs) that doesn't use an IDE", "text": "> **Possible Duplicate:** > Best Java book you have read so far I have learned to program some in Ruby, but I'm having trouble grasping OO concepts. I've been told that there are a lot of guides/books that teach object oriented programming using Java. I've heard that Java is a good language for learning OO for a number of reasons... haha, reasons that I don't know. Anyway, I tried to use an IDE (Eclipse) and I don't care much for it. Are there any good guides/books/videos that focus on teaching OO concepts using Java that don't use an IDE such as Eclipse to teach it?"} {"_id": "111215", "title": "Beginning to code with java (No coding experience)", "text": "> **Possible Duplicate:** > Best Java book you have read so far I want to learn Java, but I have absolutely no coding experience. What is the best website, book, or other resource for learning Java?"} {"_id": "114963", "title": "What is the best book to prepare for a Java interview?", "text": "> **Possible Duplicate:** > How to prepare yourself for programming interview questions? I found older questions, but I was wondering if newer, better books/guides have been published in the last two years. I am interviewing for a technology analyst position in finance and I expect to have a lot of technical questions based on running time/data structures etc. This is for an entry-level position, so I have not had previous experience programming for work, only experience through my coursework."} {"_id": "98973", "title": "What is a good Planned Release/Freeze/Beta Methodology?", "text": "I've been working on a nightmare of a project for some months now. The product is a small Ruby on Rails app/website for internal use by a small group of people. I'm coming up to the finish line and I'd like the actual beta release to go smoothly. I still have a number of small bugs that affect functionality in a minor way and do not break the build. This is my first job and first time taking a product to launch. I'm curious what people's methodologies are for a smooth transition to bring your application from a development environment to a live production environment."} {"_id": "143003", "title": "What's the difference between a Ledger and a list of transactions?", "text": "re: algorithm and data structure concepts When accountants talk about Ledgers they have something very specific in mind. As a programmer I'm very used to thinking about tables of information, but clearly there are specific things that make a ledger a ledger. So, given a simple list of transactions (date, money in, money out, notes), what needs to be added/changed to make that list into something Accountants would call a ledger? For example:
formatting considerations, does it have to be immutable, does it have to reflect something happening in a real bank account somewhere, does it have to have some sort of 'foreign key' describing which entity each transaction refers to..."} {"_id": "222917", "title": "TDD and complete test coverage where exponential test cases are needed", "text": "I am working on a list comparator to assist sorting an unordered list of search results per very specific requirements from our client. The requirements call for a ranked relevance algorithm with the following rules in order of importance: 1. Exact match on name 2. All words of search query in name or a synonym of the result 3. Some words of search query in name or synonym of the result (% descending) 4. All words of the search query in the description 5. Some words of the search query in the description (% descending) 6. Last modified date descending The natural design choice for this comparator seemed to be a scored ranking based on powers of 2. The sum of the scores of less important rules can never be more than a positive match on a higher-importance rule. This is achieved by the following scores: 1. 32 2. 16 3. 8 (Secondary tie-breaker score based on % descending) 4. 4 5. 2 (Secondary tie-breaker score based on % descending) 6. 1 In the TDD spirit I decided to start with my unit tests first. To have a test case for each unique scenario would be at minimum 63 unique test cases, not considering additional test cases for the secondary tie-breaker logic on rules 3 and 5. This seems overbearing. The actual number of tests will be smaller, though. Based on the rules themselves, certain rules ensure that lower rules will always be true (e.g., when 'All search query words appear in the description' holds, then 'Some search query words appear in the description' will always be true).
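For concreteness, the comparator under test looks roughly like this (a sketch with my own names; `Result` and its match predicates stand in for our real classes, and rule 6's date is handled as the final tie-break):

    import java.time.Instant;
    import java.util.Comparator;

    // Stand-in for our real search result type.
    interface Result {
        boolean exactNameMatch();
        boolean allQueryWordsInNameOrSynonym();
        boolean someQueryWordsInNameOrSynonym();
        boolean allQueryWordsInDescription();
        boolean someQueryWordsInDescription();
        Instant lastModified();
    }

    class RelevanceComparator implements Comparator<Result> {
        @Override
        public int compare(Result a, Result b) {
            int byScore = Integer.compare(score(b), score(a)); // higher score first
            if (byScore != 0) return byScore;
            // the % descending tie-breakers for rules 3 and 5 would slot in here
            return b.lastModified().compareTo(a.lastModified()); // rule 6
        }

        // Powers of two: no sum of lower-rule scores can outweigh a higher rule.
        private int score(Result r) {
            int s = 0;
            if (r.exactNameMatch())                s += 32;
            if (r.allQueryWordsInNameOrSynonym())  s += 16;
            if (r.someQueryWordsInNameOrSynonym()) s += 8;
            if (r.allQueryWordsInDescription())    s += 4;
            if (r.someQueryWordsInDescription())   s += 2;
            return s;
        }
    }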
Still, is the level of effort in writing out each of these test cases worth it? Is this the level of testing that is typically called for when talking about 100% test coverage in TDD? If not, then what would be an acceptable alternative testing strategy?"} {"_id": "91621", "title": "Guidelines for HTML/CSS/JS optimization process", "text": "Currently we have a Development server on which we write the HTML/CSS/JS code. We do not optimize our code using compressors/minifiers. So our deployment process from Development to Production is simply to copy the files on the Development server to the Production server. But now we are planning to use compressors/minifiers. We are confused about the process we should follow. Currently we use a single JS file called common.js and a single CSS file called common.css. Also, we are now planning to use multiple CSS files in Dev, like reset.css, common.css, etc., and when deploying them to production we will be combining them into a single CSS file. The simplest process I can think of is: 1. Make an exact copy of the development files on the Development server. Use the copy in the further steps. 2. Combine all CSS files into a single CSS file, say common.css. Same with JS files. Delete the other files, for example reset.css. 3. Compress/minify common.css and common.js to common.min.css and common.min.js. Delete common.css and common.js. 4. In all the HTML files remove all link tags and script tags except for common.css and common.js 5. In all the HTML files modify the script src and link href from common.js to common.min.js and common.css to common.min.css. 6. Copy all the files to the Production server. 7. Delete the copy This looks too tedious for us. What could be a better process? Is there any tool which can do the optimization (above steps) automatically? It looks like it would not be very difficult to write a script which performs all of the above steps. But I would like to use an existing tool rather than developing one myself; I would only like to put in the effort if such a tool does not exist. Also, when is the testing generally performed? I think it should be performed on the optimized files (just in case the compressors have some bug). But then the process would become more tedious, as the above process would have to be repeated after every Test-Bugfix cycle."} {"_id": "197802", "title": "Object Constraint Language (OCL) for Stack in java.util package", "text": "I have an exam coming up and I'm looking at past papers to get some ideas of what to expect. I'm a bit stuck on the following one and would really appreciate it if someone could give some example answers. Write preconditions and postconditions in OCL for each of the following operations (included in the Stack class in the java.util package): * (1) Boolean empty() - Tests if this stack is empty * (2) E peek() - Looks at the object at the top of this stack without removing it from the stack * (3) E pop() - Removes the object at the top of this stack and returns that object as the value of this operation * (4) E push(E item) - pushes an item onto the top of this stack Here E denotes the type of elements in the stack. My attempts are as follows: Boolean empty() pre: none post: self -> IsEmpty() = true //should this be result -> IsEmpty() = true because it returns a boolean value? E peek() pre: self -> NotEmpty() = true post: result = ??? // I lose hope at this stage. I also don't know if I should be referencing the elements in the stack. For example: self.elements -> IsEmpty() = true If anyone could help me out I'd really appreciate it. **EDIT** A friend has the following ideas: context Stack empty() pre: self.data.size = 0 context Stack peek() pre: self.data.AsSequence.first context Stack pop() pre: !self.data.isEmpty post: self.data.AsSequence.first.remove (not sure about this one) post: self.data.count = @pre:data - 1 context Stack push(E Item) post: self.data.asSquence.prepend(E.asSequence) post: self.data.size = @pre.data.size + 1"} {"_id": "143009", "title": "Why is CS never a topic of conversation of the layman?", "text": "Granted, every profession has its technicalities. If you are an MD, you better know the anatomy of the human body, and if you are an astronomer, you better know your calculus. Yet, you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens or that the moon revolves around the earth because of gravity (thank you Discovery Channel). There's sort of a common knowledge (at least in more developed countries) of these more advanced topics. With that said, **why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000 or 4000 level classes in a university setting or between colleagues?** Even back in my days before college, in my pursuit of knowledge on how computers work, these very important topics (IMHO) never seemed to see the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself.
To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: **Does the public in general not care to hear about computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap?** Am I wrong to think the general public has the same thirst for knowledge on how things work as I do? Please consider the question carefully before answering or voting to close."} {"_id": "106966", "title": "In Scrum, why shouldn't the Product Owner and ScrumMaster roles be combined?", "text": "In the more traditional projects that I've worked on, the project manager (and, on larger projects, there might be associate/deputy/assistant project managers should one person be unavailable) is the person responsible for communicating with the customer, receiving project health and status updates, determining scheduling and budgeting, managing the process, ensuring the team has what they need to complete tasks, and so on. In Scrum, however, these responsibilities are split between the Product Owner and the ScrumMaster. The Product Owner is the voice of the customer. They interact directly with the customer, create user stories, organize and prioritize the product backlog, and handle other user/customer facing issues. The ScrumMaster handles the process, overseeing meetings (including estimation and planning), removing impediments, and monitoring the overall health of the project, making adjustments as needed. I've read in multiple sources, including Wikipedia, that the roles of ScrumMaster and Product Owner should be held by two different people. I've not only read about, but worked on successful "traditional" style projects where the activities of both were handled by a single individual. In fact, it makes more sense for one to three people to be responsible for handling project-level (including human resources/staffing) and process-level tasks, as they often go hand-in-hand. Process changes have an impact on scheduling, budgeting, quality, and other project-level goals, and project changes have an impact on process. Why does Scrum call for isolating these activities into two roles? What advantages does this actually provide? Has anyone been on a successful Scrum project where the Product Owner and ScrumMaster were the same individual?"} {"_id": "114131", "title": "Where should I look to know which technology is used in a web app?", "text": "Let's say I have one or more web apps I find interesting, and as I want to do something similar, I would like to know which technologies they are based on (programming language, database, web server, framework, ...). Is there any site where such information is centralized? If not, where should I look for this information?"} {"_id": "114138", "title": "Delegate vs Interfaces - Any more clarifications available?", "text": "After reading the article When to Use Delegates Instead of Interfaces (C# Programming Guide), I need some help understanding the points given below, which I found to be not so clear (to me). Any examples or detailed explanations available for these? **Use a delegate when:** * An eventing design pattern is used. * It is desirable to encapsulate a static method. * Easy composition is desired. * A class may need more than one implementation of the method. **Use an interface when:** * There are a group of related methods that may be called. * A class only needs one implementation of the method.
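To anchor question 1 below: as far as I can tell, "an eventing design pattern" just means publish/subscribe callbacks, something like this minimal sketch (written in Java-style syntax, since a delegate roughly corresponds to a one-method callback type; the names are mine):

    import java.util.ArrayList;
    import java.util.List;

    // The delegate's counterpart: a single-method callback type.
    interface ClickHandler {
        void onClick(String source);
    }

    class Button {
        private final List<ClickHandler> handlers = new ArrayList<>();

        // Subscribers register a callback instead of implementing a full
        // interface on their own class - this is the "eventing" part.
        void addClickHandler(ClickHandler handler) {
            handlers.add(handler);
        }

        void click() {
            for (ClickHandler h : handlers) h.onClick("button");
        }
    }

In C# the callback type would be a delegate and the handler list an event, but the shape is the same.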
_**My Questions are:_** 1. What do they mean by an eventing design pattern? 2. How does composition turn out to be easy if a delegate is used? 3. If there is a group of related methods that may be called, then use an interface - what benefit does that have? 4. If a class only needs one implementation of the method, use an interface - how is that justified in terms of benefits?"} {"_id": "93665", "title": "Interview Assignment: Production, Bug-free, or Bells & Whistles?", "text": "I was given a small project to complete over the weekend for an interview. Not a very difficult project at all. Fortunately, I was able to spend all my time designing, developing, and testing the application rather than having to research something I wasn't familiar with. After working out a couple of critical bugs, I am left wondering if I should add anything more to the assignment. Assume, at this point: * everything is commented * no bugs found * any known exceptions are handled * 8 hours remain in the allotted time **So, is it better to go ahead and submit a bug-free, working assignment earlier than the deadline?** ...or ... **Take the fully allotted time to add a feature (obviously something simple like logging)?** ...or ... **Provide a list of TODOs that could improve flexibility, scalability, or general usability?** These would be improvements that simply could not be done within a day or could introduce too many issues to resolve in the time allotted. FWIW: the project consisted of creating a multi-threaded sort controller implementing 3 different sorting algorithms; results to be displayed on a WinForms UI."} {"_id": "188309", "title": "Terminology for class", "text": "I am integrating with a Financial Management System (FMS). I have a class that prepares a set of objects (each of these objects is called a TransactionEntity). These are then passed to the FMS. I place particular importance on correct terminology. What should I name this class that prepares the transactions? I am thinking TransactionSomething. For example, TransactionBuilder, TransactionFactory, etc. But I don't think these names are correct. For the record, I use .Net, but I don't think the framework should have an impact on this."} {"_id": "188307", "title": "JavaEE server experience matters for a developer?", "text": "Lately I am seeing quite a few application development job ads asking for experience with this or that Java EE server. I can understand this if it is for a server administrator; however, I find it stupid and ridiculous to ask for someone with specific server implementation experience when the job is Java software development. The whole idea behind Java EE, as I studied in my initial days, was to develop against standards and deploy to any platform of choice, including servers, OS, etc. My question is, are the various Java EE application container implementations so vastly different that they can be considered their own career path for development? What kinds of features exist amongst Java application containers that require such specialized skill that I might not want to consider applicants with otherwise impressive software development qualifications?"} {"_id": "51437", "title": "Harmful temptations in programming", "text": "Just curious, what kinds of temptations in programming turned out to be really harmful in your projects? Like when you really feel the urge to do something and you believe it's going to benefit the project, or else you just trick yourself into believing it is, and after a week you realize you haven't solved any _real_ problems but instead created new ones or, in the best case, pleased your inner beast with no visible impact.
Like when you really feel the urge to do something and you believe it's going to benefit the project, or else you just trick yourself into believing it will, and after a week you realize you haven't solved any _real_ problems but instead created new ones or, in the best case, pleased your inner beast with no visible impact. Personally, I find it very hard not to refactor bad code. I work with a lot of bad legacy code, and it takes some deep breaths not to touch it when I have no tests to prove my refactoring doesn't break anything. Another demon for me is user interface work: I can literally spend hours changing UI layout just because I enjoy doing it. Sometimes I tell myself I'm working on usability, but the truth is I just love moving buttons around. **What are your programming demons, and how do you avoid them?**"} {"_id": "150989", "title": "NLP as a job requirement", "text": "When an employer points out that NLP is required for a position, what do they expect? Can I teach myself natural language processing theory, or must I know specific frameworks, or what?"} {"_id": "189659", "title": "What's the best programming language to learn for solving partial differential equations?", "text": "I have to create a program that compares two or three different methods (FEM, FVM, FDM) for solving an easy PDE. Is there a programming language in which I could do this easily? (I need to operate with vectors/matrices and perform matrix inversions.)"} {"_id": "22305", "title": "How do you handle supporting Chrome versions?", "text": "I'm working on a site for my company which, up until a certain point, was an Internet Explorer-only site for various reasons, mainly that when the site was originally conceived IE had a 90%+ market share, so the work to get it working in other browsers just didn't make sense. Now that we live in a more diverse browser economy we're doing the work to get the site working everywhere, and as luck would have it a decent chunk of it just happens to already work. However, one issue we're struggling with is the issue of what to support and what not to support. For starters, non-IE browsers release much more frequently than IE did, and you don't know which versions are still in the wild. There have been basically three versions of IE released in the last decade, and IE6 is still supported until 2014. But there's an update for Firefox every other day, and Apple updates Safari more or less annually. And then there's Chrome. Chrome has gone from 0.2 to 9.0 in a little over two years. 7.0.517 was released a month and a half after 6.0.472. There are three different versions out right now: a stable, a beta, and a dev. And the dev version of 9.0.587 was actually released before the latest beta version of 8.0.552. With IE we've had the situation arise where we have to support an old version because the IT department of the company in question does not allow the employees to upgrade. With non-IE browsers I'm thinking we'll adopt the line of \"update to the latest or we can't help you\", but I'm not sure how effective that is. Also, my company imposes some artificial limitations. For example we have a product aimed at companies, so we don't support \"Home\" versions of Windows (i.e., XP Home, 7 Home Premium) even though there's no technical reason we couldn't.
When my company starts asking \"what version or versions of Chrome do we support\", how should I answer?"} {"_id": "189656", "title": "Thread safety IDE warnings", "text": "I wonder, would it not be possible for an IDE to detect any shared mutable objects that are used in multiple threads? You could flag types as either `[ThreadSafe]` or `[Immutable]` using attributes in .NET for example, and then those variables would never cause warnings, but any other variable that is used inside of a method invoked via `Thread` would be highlighted as \"possibly unsafe\" unless all assignments occur inside of a `lock` or something like that. I know it wouldn't be perfect because the compiler simply can't reason about all the possible scenarios (humans barely can), but hints like these would be quite valuable, no? Is this even feasible?"} {"_id": "150986", "title": "Javascript: Safely upload a client data file", "text": "I'm (still) working on a template-based XML editing program. It's a GUI-based XML editor that only allows users to add certain tags and attributes based on the requirements. You can see the current version here for an idea. Now, I'd like to allow users to upload their own data templates, but I'm concerned about potential XSS hacks. Currently, the template file is in JavaScript object literal notation, which unsurprisingly is a security nightmare if the user can upload their own. I was thinking of using XML instead, but is there an even better alternative?"} {"_id": "145099", "title": "I can't draw. How can I make polished applications?", "text": "I'm not a graphic designer. I'm pretty bad at drawing anything. I struggle to build things that look even as nice as \"sample\" applications bundled with development tools, primarily because I don't have squat in the way of art assets. What strategies might I take to mitigate this?"} {"_id": "145091", "title": "Why are virtual machines required?", "text": "Instead of compiling the source code for the respective OS (on which it is targeted), you compile once and run everywhere. For the sake of this question, I will call it a VM (for example, both for Java and .NET). So the execution of programs becomes something like: Executable -> VM -> OS. It makes perfect sense: the compiler remains generic for the respective VM. However, the implementation of the VM may vary depending on the machine it is going to be installed on, i.e., (*nix, windows, mac) x (32 bit, 64 bit). My question is, instead of writing the VM for the respective machines, why isn't the compiler written for that specific machine? This way, instead of downloading the respective VM you download the respective compiler, and that compiler takes care of the machine code + OS for that specific machine. End result: the execution of native code on any machine. Admittedly, each piece of source code would need compilation for that specific machine, but nowadays automated systems and SCM builds can help us do this. Are my reasons for being confused right, or am I missing some technicalities here? Edit: **PORTABILITY** : Yes, it is one reason, but is portability a big issue with today's automated systems? How often do we really have to worry about compiling for other machines? Having code compiled for the native machine would give much better performance. Take Java for instance: you can't do low-level programming on Windows, and you have to resort to JNI. Take automated systems like TeamCity/Jenkins or others.
We could have such an automated system set up where code submitted through version control would result in the executable."} {"_id": "74759", "title": "How to get good scenario coverage in performance benchmarks?", "text": "If it is relatively easy to get good _code coverage_ from profiling (because profiling tells you which functions are/aren't called, how many times, and with what parameters), how do I get good _performance scenario coverage_ when doing performance benchmarks? How do I even know that there aren't some \"performance black holes\" which can't be discovered except when certain test parameters are very near the \"black hole\"? For a toy example, a sorting algorithm can be tested with data sets of size `1, 10, 100, 1000, 10000, ...`. An example of non-numerical coverage would be to test the sorting algorithm with `sorted data`, `unsorted data`, or `evil-constructed data intended to expose the worst case`. These scenarios have been exhaustively investigated by academia in the case of sorting algorithms. How can that thinking be applied to other kinds of software systems?"} {"_id": "74756", "title": "What are the characteristics for a good report generation software for reporting and tracking software benchmarking results?", "text": "This is an offshoot question from this answer to a previous question, where http://speed.pypy.org is highlighted as an example having a good presentation. (However, it appears to me that the project doesn't separate the components of execution/tracking/report-generation/web-service, which makes it harder for other people to adopt.) I am interested in both the functional and the UI requirements of such software. I hope to be able to choose an existing one based on the criteria so that I can use it in my project. Right now, the only thing I can think of is that the Execution UI should be similar to a Unit Testing harness, but the Reporting UI should be totally different from the xUnit family of software. Webpages seem to be a better way to navigate through the results. Along with that, some minor ideas: * There should be a tracking component to track performance changes at all levels * However, the presentation layer should highlight only \"relevant\" performance changes, that is, performance drops in important areas that are serious enough to require developers' attention. I am also interested in whether any of the advice from Edward Tufte can be applied here."} {"_id": "74757", "title": "Correct path towards a Microsoft certification?", "text": "So here's my situation - currently in college and confident that I want to pursue a career in Microsoft technologies (.NET development more specifically). I haven't got any real-world experience (as in no experience with paid or open source projects, just personal applications and school projects) and it's hard to get into any development position, as most if not all ask for years of professional experience. I'm pretty sure going for a certification is the right thing to do to \"separate myself from the rest of the pack\", which should help me get into open source projects and paid jobs alike. The question is, which certifications should I choose? I thought about going for MCTS: .NET Framework 4, Windows Applications (70-511) and then MCTS: .NET Framework 4, Web Applications (70-515). Is that even the correct way to do what I'm trying to do? Should I abandon anything related to desktop applications since everything is moving towards web-based? Which way should I go after the given certs? Any input appreciated!
L.A."} {"_id": "65931", "title": "Are there serious companies that don't use version-control and continuous integration? Why?", "text": "A colleague of mine was under the impression that our software department was highly advanced, as we used both a build server with continuous integration, and version control software. This did not match my point of view, as I only know of one company I which made serious software and didn't have either. However, my experience is limited to only a handful of companies. Does anyone know of any real company _(larger than 3 programmers)_ , which is in the software business and doesn't use these tools? If such a company exists, are there any good reason for them not doing so?"} {"_id": "196286", "title": "How to maintain standard quality of images uploaded by many users?", "text": "We're developing a site where individuals (store owners) will be able to take pictures and upload to the site. Our biggest concern is the variance in quality of pictures across the site. The options we are considering are: 1. Minimum image cleaning scripts (including filters, resizing, sharpening, etc.) 2. Approval process for uploaded images 3. Having a team take the photos for them I'd like to see if anyone has had any experience with 1 and 2 above. How effective is 1, and how efficient is 2?"} {"_id": "196280", "title": "Should semantic breaking changes be tied to syntactic breaking changes?", "text": "**Explanation** First let me briefly define how I'm using terms (I might be bending their typical use a little): When I talk about _semantic breaking changes_ , I'm referring to a change in the meaning/behavior of a public API. It may require changes to consumer code, but it isn't immediately obvious, since nothing 'broke'. Changes are only required because the assumptions made about preconditions/behavior are no longer valid. This is in contrast to _syntactic breaking changes_ by which I'm referring to changes in the public API that require the calling code to change the way it calls the API. This kind of breaking change is obvious since it usually results in compile errors which have to be fixed. **Example** Let's say I'm writing a micro-ORM that given a POCO object, a database connection, and a query will map the results into the object for you. The public API, is fairly simple: Query(connection As IDbConnection, sql As String) I'm very unlikely to have a syntactic change in how you interact this tool. However, since there is a lot of behavior bound up in that simple statement, I'm very likely to have semantic changes. **Question** Since there's no syntax change, how do I effectively communicate to consumers of my API that there has been a semantic change? Should I always bundle up semantic changes in syntatic changes so there's no question that something has changed? If semantic versioning encompasses semantic as well as syntactic changes, then that goes a long way to solving the issue. But it would still be possible for developers to build against a newer version, not notice any compile errors, assume things will be fine (since they haven't read the docs), and introduce bugs due to semantic changes in my API."} {"_id": "65939", "title": "What's the first language that had the 'Unless' conditional/loop built into itself?", "text": "What's the first _(oldest)_ language that had the 'Unless' conditional/loop built into itself? 
Where an example could be unless (myVar) == if (!myVar) until (myVar) == while (!myVar)"} {"_id": "214848", "title": "OpenGL extension vs OpenGL core", "text": "I was wondering about this: I'm writing a cross-platform OpenGL engine in C++, and I figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux, I know that I can directly access functions through glext.h if the OpenGL version supports them. The problem is: if, on Linux, the core doesn't support something, is it possible that there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this: Windows: `#define glFunction functionpointer_to_the_extension` Linux: Since glext.h already defines glFunction, I can write glFunction in client code and compile it both on Windows AND Linux without changing a single line in my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (e.g., VBOs)? Or is an extension something you never know is available? I want to write an engine that gets everything possible out of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality implementations."} {"_id": "214841", "title": "When do you say \"near ID\" / \"far ID\" when using RTMFP in ActionScript 3?", "text": "I've been using RTMFP streaming for around a year in ActionScript 3, and I pretty much know the difference between a near ID and far ID. The near ID is your peer ID, and the far ID is the other guy's peer ID. The problem I'm having is that, to my knowledge, choosing whether to use one term or the other in a given sentence is a little like choosing whether to say \"go\" or \"come\". In English, \"go\" and \"come\" have opposite meanings, but ironically, they can still be used almost interchangeably. It's because whether you're going or coming is so heavily dependent on perspective, as opposed to anything concrete. Is the choice of words between \"near ID\" and \"far ID\" just as ambiguous, or is there some sort of method to the madness? Thanks!"} {"_id": "175749", "title": "emacs keybindings", "text": "I read a lot about vim and emacs and how they make you much more productive, but I didn't know which one to pick. Finally, when I decided to teach myself Common Lisp, the decision was straightforward: everybody says that there's no better editor for Common Lisp than emacs + SLIME. So I started with the emacs tutorial and immediately ran into something that seems very unproductive to me. I'm talking about the key bindings for cursor movement: forward/backward: Ctrl+f, Ctrl+b; up/down: Ctrl+p, Ctrl+n. I find these bindings very strange. I assume that fingers should be on their home rows (am I wrong here?), so to move the cursor forward or backward I should use my left index finger, and for up and down my right pinky and right index fingers. When working with any Windows IDE or text editor, to navigate text I usually place my right hand in a position so that my thumb is on the right ctrl and my index, ring and middle fingers are on the cursor keys. From this position it is very easy and comfortable to move the cursor: I can do one-character moves with my 3 right fingers, or I can press `ctrl` with my right thumb and do word-moves instead. Also I can press `shift` with my left pinky and do single-character or word selections.
Also, from this position it is very comfortable to reach the `PgUp`, `PgDn`, `Home`, `End`, `Delete` and `Backspace` keys with my right hand. So I have even more navigation and selection possibilities. I understand that the decision not to use cursor keys is to allow one to use emacs to connect to remote terminal sessions, where these keys are not supported, but I still find the choice of cursor keys very unfortunate. Why not use `j`, `k`, `i`, `l` instead? This way I could use my right hand without much finger stretching. So how is emacs more productive? What am I doing wrong?"} {"_id": "214845", "title": "How can I introduce QA and break it into parts for various people?", "text": "I was recently asked how to do this, specifically: 1. How to introduce QA into an organization? 2. How to break up QA into parts that others can do? 3. How to prioritize what needs QA? 4. How to determine what to buy? code? etc. The organization uses Ruby on Rails extensively as the development platform. Note: I am posting a lengthy answer myself but I will also upvote additional good answers (and probably incorporate them into my answer)."} {"_id": "214847", "title": "Canonical representation of a class object containing a list element in XML", "text": "I see that most implementations of JAX-RS represent a class object containing a list of elements as follows (assume a class House containing a list of People): Adam Blake The result above is obtained, for instance, from the Jersey 2 JAX-RS implementation; notice Jersey creates a wrapper class \"houses\" around each house, however strangely it doesn't create a wrapper class around each person! I don't feel this is a correct mapping of a list; in other words, I'd feel more comfortable with something like this: Adam Blake Is there any document explaining how an object should be correctly mapped, apart from any opinion?"} {"_id": "74286", "title": "What is the proper amount of time required to interview a .NET developer?", "text": "I had two interviews during the last two weeks. One of them was about 30 minutes, and they had me fill out a job application before it, and the other one was less than 10 minutes! From your own point of view, how much time do you need to interview a .NET developer with almost 2 years of experience?"} {"_id": "55977", "title": "Should I avoid SharePoint Development in Visual Studio?", "text": "Not long ago I started an internship at a company that supplies SharePoint consultancy, hosting and development. While their consultancy seems to be pretty good and solid, I feel their development department lacks direction. The reason for this, most likely, is that they stopped outsourcing not too long ago. One thing that I've frequently bumped my head into is the following: My supervisor strongly insists that everything that can be done natively in SharePoint (somehow this includes editing XSLT files in Designer) should be done in SharePoint, even if this results in longer development time (at least when they make me write XSLT) and reduced usability. Her main arguments for this are: * Better maintainability * Editing the functionality doesn't require programming knowledge I feel the company is a little biased and I am unable to get a decent discussion going. This is why I am looking for other places to get some responses on the subject (and not only on the arguments of my supervisor, but more on the subject in general).
Kind regards"} {"_id": "142848", "title": "Interfaces: profit of using", "text": "First of all, my ubiquitous language is PHP, and I'm thinking about learning Java. So let me split my question on two closely related parts. Here goes the first part. Say I have a domain-model class. It has some getters, setters, some query methods etc. And one day I want to have a possibility to compare them. So it looks like: class MyEntity extends AbstractEntity { public function getId() { // get id property } public function setId($id) { // set id property } // plenty of other methods that set or retrieve data public function compareTo(MyEntity $anotherEntity) { // some compare logic } } If it would have been Java, I should have implemented a `Comparable` interface. But why? Polymorphism? Readbility? Or something else? And if it was PHP -- should I create `Comparable` interface for myself? So here goes the second part. My colleague told me that it is a rule of thumb in Java to create an interface for every behavioral aspect of the class. For example, if I wanted to present this object as a string, I should state this behaviour by something like `implements Stringable`, where in case of PHP `Stringable` would look like: interface Stringable { public function __toString(); } Is that really a rule of thumb? What benefits are gained with this approach? And does it worth it in PHP? And in Java?"} {"_id": "224980", "title": "Who actually performs the task of log rotation?", "text": "I know _log rotation_ is a feature seen in application servers like Oracle's Weblogic. I would like to understand: 1. What is actually meant by log rotation? 2. What is it useful? 3. Who performs the task? Is this duty performed by the logging frameworks that the developers use or is it handled by the servers themselves? Would log rotation be available in a server like Weblogic if the application doesn't use any logging framework(not `java.util.logging`) and simply rely on `System.out.println()` and `System.err.println()` etc?"} {"_id": "224988", "title": "What is the proper copyright information?", "text": "I want to know what should I write in copyright part? I mean the copyright part when you right click on the executable, click properties, and click the details tab. I was developing a little tool that parses some information from output files. It's been a year now and somehow it became something more than a simple, personal tool. We are using it for academic purposes and it's going to cited in several academic papers. BTW, I do not want to make it freeware. Also, I am the only developer of the software. I have used Visual Studio 2010 Ultimate that I obtained for free with the academic license provided my university."} {"_id": "104611", "title": "Real-time chat in Ruby on Rails without owning a server", "text": "I'd want to implement a Real-time chat for my Rails app but I can't really host the server which handles the sockets. I've tried Faye but it needs a server. I've also heard of pusher but it's limited to 20 users at a time on the chat and I can't really be sure they won't be more. I've thought of IRC but I think I can't really embed it into a rails app, maybe it needs sockets... How can I implement a real-time chat without owning a server?"} {"_id": "220402", "title": "How far should we rename code and data when end users nomenclatures change?", "text": "A long time ago we added a feature where our users could \"Accept\" an image after it was added to a workflow queue. 
Turns out, we used the wrong term, and users actually \"Approve\" the image. Changing Accept to Approve on our interface is easy: just replace one word. But we programmed all layers with the word \"accept\", from the CSS class name to the database values. * The CSS class that turns the button green: \".accepted\"; * The model method that verifies and binds the class attribute on the DOM node: \"isAccepted\"; * JavaScript status attribute: Array with \"unreviewed\", \"accepted\" and \"published\"; * MySQL status column: ENUM with \"unreviewed\", \"accepted\" and \"published\"; * Test names; It's trivial (especially when you have tests) to replace most occurrences of accept with approve. A little bit harder is migrating the data, especially since it needs to be synchronized with deployment. This specific case is simple, but I've faced similar, yet more complex, cases during my career: when a file is also renamed and deployment happens on dozens of servers, or when proxy caching, memcached and MySQL are involved. Leaving \"accepted\" on every layer except the interface is a bad idea, since new programmers joining the team might not learn the historical reasons that led to this decision, and while accept -> approve are close words in terms of meaning, if it was renamed to \"queued for managerial next status meeting\", it certainly wouldn't make any sense. And it feels like if we compromise here and there, in a few iterations the user interface concepts will have no bearing on the system internals, and I certainly do not want to work on a system where half of the output has no connection to its innards. So, do you always rename everything when needed? If this happened to you, and you decided that the trade-off was not worth it, did it come back to bite you? Are code comments or developer documentation enough to avoid this issue?"} {"_id": "57533", "title": "Just wondering about \"Do-It Yourself Apps\" on the internet versus apps written by us developers", "text": "I have been doing Objective-C programming over the past few weeks, and I have learnt a lot. However, I see that there are other web companies offering services to consumers directly from their websites that allow consumers to create their apps through point-and-click and drag features without any code. Clearly they are more cost-effective and fast compared to having a developer write an app. I was wondering if there are any advantages, then, of having a developer build an app for someone, other than the obvious advantage that it's got a custom look and feel. Could someone please clarify, since I'm new and would like to evaluate whether it is worthwhile spending time towards learning a whole new development environment when someone could just use a web service to make an app for multiple platforms. Thanks"} {"_id": "165895", "title": "What happens at control invoke function?", "text": "A question about the form control Invoke function. Control1 is created on thread1. If you want to update something in Control1 from thread2, you must do something like: delegate void SetTextCallback(string txt); void setText(string txt) { if (this.textBox1.InvokeRequired) { SetTextCallback d = new SetTextCallback(setText); this.Invoke(d, new object[] { txt }); } else { /* this will run on thread1 even when called from thread2 */ this.textBox1.AppendText(txt); } } What happens behind the scenes here? This Invoke behaves differently from a normal method invocation.
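(For reference, I know the same marshaling can be written more compactly with a lambda instead of a named delegate - a minimal sketch, assuming .NET 3.5 or later and the same textBox1 field:)

void setText(string txt)
{
    if (this.textBox1.InvokeRequired)
    {
        // Invoke hands the delegate to the thread that owns the control,
        // so the lambda body runs on thread1 even when called from thread2
        this.Invoke((Action)(() => this.textBox1.AppendText(txt)));
    }
    else
    {
        this.textBox1.AppendText(txt);
    }
}

My question is about what Invoke actually does with that delegate, whichever way it is written.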
When you want to call a function on an object on a specific thread, that thread must be waiting on some queue of delegates and executing the incoming delegates. Is it correct that the Windows Forms control Invoke function is completely different from a standard method invocation on an object?"} {"_id": "223660", "title": "Why do we need to use sealed on a class? Do we really need sealed?", "text": "This article (by Eric Lippert) gives four arguments as to why you should use `sealed`; however, I don't understand why we actually **need** it. Philosophical/aesthetic reasons aside, why do we need to use sealed? I can see how things like `readonly` from C# or `const` from C++ actually catch genuine errors. Is there any type of error that `sealed` can catch? An often-cited reason for sealing a class is that it hasn't been designed to be extended. What does that mean? Why is it important? Are there any real examples of this? The point on compatibility isn't a reason why we need to seal classes but why it should be the default. The final point again is strange, because sealing your class doesn't prevent someone from providing an alternative implementation in a hierarchy. **Edit:** If I didn't put enough emphasis on it already, I'm looking for reasons we actually **need** it - not the usual \"someone else might write bad code\". I understand that plenty of people _like_ using `sealed` for design intent, but I'm looking for a practical purpose beyond that."} {"_id": "223669", "title": "How to provide data to a web server using a data warehouse?", "text": "We have data stored in a data warehouse as follows: Price, Date, Product Name (varchar(25)). We currently only have seven products. Once every business day, seven new data points are added representing the day's price for each product. On the website, a user sees this information on a page load. Analytics shows that the number of page loads is about 100 per day. The website server is external to our data center. The data warehouse server is internal to our data center and has other critical data (in different tables) that is not related to and should not be accessible to the website. The following approaches have been considered: 1) The data warehouse could push (via SFTP) a CSV file containing the daily data to the web server each day. The web server would have a process running on a crontab every 15 minutes. It would check if the file had changed. If so, it would update its database with the data. Then, on page loads, the web server would query its database to get the data to display on the web page. Usually, the push would only be once a day, but more than one push could be possible to communicate (infrequent) price corrections. Even in the price correction scenario, all data would be delivered in the file. The polling process would pick up the change and overwrite the data in the database. 2) The web server could request data from the data warehouse using JDBC or similar SQL connection technology. However, security concerns have been voiced. The concern is that by allowing the web server access to our data warehouse, a SQL injection attack or some other external attack through the website could compromise the data warehouse. Measures could be put in place to reduce the risk, but the easiest and safest approach that has been suggested is to simply not allow any public-facing system to directly access the data warehouse.
In other words, the data warehouse can establish connections to other servers (say, to SFTP the file), but no server can initiate a connection with the data warehouse. Do these concerns seem reasonable and hard to mitigate? 3) A web service could be built, and the web server could call the web service that is hosted in our internal data center. 4) A process hosted in our internal data center could call the web server when it knows the data warehouse has data available. If so, how should this be done? HTTPS with some way to prevent other unauthorized clients from making the same call? Which of the above approaches is best, or is there a better approach than those listed? What are the pros and cons of each approach?"} {"_id": "235427", "title": "Is it fine to reuse a class instance?", "text": "I ask in terms of high-cost classes, with a particle engine as an example. I read somewhere that an instance of a class with a high cost to initialize, like a particle manager, should have its state variables altered at run time rather than reinitializing the particle manager with new data every time it is used in a different way. The particle manager would create perhaps thousands of instances of particles on initialization and then manage them from then on. For example, let's say this particle manager is used to manage shrapnel in an explosion, and you're creating a ground-zero simulation for an artillery attack, so there are many explosions. Instead of multiple managers, one for each explosion, which is extremely expensive, you alter the variables for each explosion and have a scattered sequential set of explosions using the same manager and the same particles. So I ask: is it fine to reuse a class instance at run time? Or is the above really just a super-optimization case?"} {"_id": "75489", "title": "How did you find the slow performance in your application?", "text": "How to make a web application perform and scale better is always a big topic, and finding the performance problems and tuning them is another... Here are some of my thoughts on how to find performance problems: For a \"new\" API/application or other: * Analyzing the detailed API and then preparing JMeter/Grinder test scripts for it * Using different loads to identify the threshold for the API * Adding profiling code to find the slowness * Restarting from point one again... For an \"old\" API/application or other: * Analyzing the user patterns from the detailed access log * Simulating the real user load to find the slowness * Adding profiling code to find the slowness * Restarting from point one again... So, how do you identify performance problems?"} {"_id": "141577", "title": "iOS build machine setup: problem with certificates", "text": "Some background: * I work with multiple teammates * we each work on our own MBP * I'm setting up a build machine that we can `git push` to in order to generate a build (the aim is to allow anyone to push to the build machine and then generate an archive, upload it to TestFlight and send it on its way) The problem: getting my Apple developer certificates onto the build machine. I installed Lion, Xcode, etc., and I signed into my developer account through Xcode Organizer; the provisioning profiles download, etc., but beside each one it says: `valid signing identity not found`. I also downloaded my certificate from the developer.apple.com page, imported it into Keychain, etc., but no luck. Anyone else have a similar issue? Or maybe hints for a fix?
Thanks"} {"_id": "141574", "title": "Importing an existing project into Git", "text": "# Background During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us. So, we decided to migrate to Git. The translation has been less than smooth though. Our site is broken up into three environments DEV, QA, and PROD. For tho most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo, if a page was going to be moved up to QA then the file was moved manually, same thing with stuff that was ready for PROD. So, our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: The QA and PROD branches are not currently, nor have they ever been in source control. # The Question How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way not only will the branches reflect the differences in the environments, they'll be in the right order chronologically with the newest commits in the DEV branch. # What I've tried so far I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV and that will remove them when we merge up, I suspect there will be similar problems if I started QA at an earlier (though not exact) commit from DEV."} {"_id": "227758", "title": "Is it normal to get your software development products critiqued?", "text": "I am currently a software developer on coop that was in an interesting situation today. I cannot say too much, as the things I do are under NDA, but I make Grasshopper 3D (which is a Rhino plugin) components. Our firm has under three software developers including me. We all specialize in different languages. This project I was working on, I was in charge of creating a piece of software that has a very broad general use. It's an extension from typical Grasshopper behavior. With this said, the other developer I worked with has been in this industry for a good... decade or two. He is very knowledgeable towards the use of Grasshopper. His prior education involved great use of it, and his daily work involves the use of Grasshopper (though not explicitly what I am doing, which is creating C# components for it). With that said, my final component went through a number of iterations. It seemed each iteration was better than the previous. An interesting point is, the other developer I worked with (I was essentially under his wing) manages to break my component every single time. I don't mean this in a malicious way. By breaking, I meant, he was able to overflow particular lists. He was able to, after almost every iteration, find ways that my component could be improved on. Honestly, I really feel grateful for his suggestions and critique, because these were honestly things that either I overlooked, or did not know could happen. My question is are three-fold: 1. Is it normal in software development practices/companies to have someone critique and improve your code? There was no code review. It was simply ways improving the tool through either UX improves or back-end improvements. 
And honestly, even though there was no code review, I learned so much from his suggestions, which I always implemented successfully. 2. Are there any suggestions for a more \"formal\" testing procedure? I am relatively new to the software development world, and thus am interested in software dev testing procedures. 3. How can I improve my ability to \"predict\" issues? On one side, I feel like there are issues that I cannot normally find myself because everyone uses a piece of software differently. On the other side, I feel it may be my own oversight or incompetence that's causing this. I'm reading up on this thread, and it seems, in a way, we have a loose testing procedure, though it's all dependent on me asking for critique."} {"_id": "75487", "title": "How to suggest using an ORM instead of stored procedures?", "text": "I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as with every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with \"god classes\" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this: class Foo { public bool LoadFoo() { bool blnResult = false; if (this.FooID == 0) { throw new Exception(\"FooID must be set before calling this method.\"); } DataSet ds = /* ... call to Sproc ... */; if (ds.Tables[0].Rows.Count > 0) { this.FooName = ds.Tables[0].Rows[0][\"FooName\"].ToString(); /* other properties set */ blnResult = true; } return blnResult; } } /* Consumer */ Foo foo = new Foo(); foo.FooID = 1234; foo.LoadFoo(); /* do stuff with foo... */ There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping **926** stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things, the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because \"it works\", so we cannot change the existing classes to use an ORM, but I don't know how often we add brand new modules instead of adding to/fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones would still be using the DataSets) and less hassle having to remember which procedure scripts have been run and which need to be run, etc., but that's it, and I don't know how compelling an argument that would be.
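To make the comparison concrete, here is a minimal sketch of what the same lookup could look like with a micro-ORM such as Dapper (hypothetical - we haven't chosen a tool, and the table/column names and connection handling are assumptions on my part):

using System.Data.SqlClient;
using Dapper; // micro-ORM that maps result rows onto typed properties

public class FooRepository
{
    private readonly string _connectionString;

    public FooRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns a fully typed Foo, or null if no row matches;
    // no DataSet, no manual property copying, no LoadFoo() ritual.
    public Foo GetFoo(int fooId)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            return connection.QueryFirstOrDefault<Foo>(
                \"SELECT FooID, FooName FROM Foos WHERE FooID = @FooID\",
                new { FooID = fooId });
        }
    }
}

Even this small sketch shows the selling points I would lead with: typed results, no \"FooID must be set\" ritual, and the query text living next to the code that uses it.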
Maintainability is another concern, but one that nobody except me seems to be concerned about."} {"_id": "75486", "title": "Why aren't young programmers interested in mainframes?", "text": "A key issue with mainframes is that the cohort of supporting programmers is dwindling. While normally this wouldn't be a problem, in that a falling supply of programmers would be offset by increasing salaries, in turn causing a rising supply of programmers via the law of supply and demand, I'm not sure this is really happening for mainframes. While they still form critical infrastructure for many businesses, the simple fact is there isn't an adequate number of young programmers coming along to keep the support population populated. Why is this? What makes mainframes unattractive to young programmers?"} {"_id": "26108", "title": "Is it more valuable to double major in Computer Science/Software Engineering or get an undergraduate CS degree with a Masters in SE?", "text": "A friend and I (both in college) are currently in a debate over which is better, in terms of employment opportunities, experience, and education: a Bachelors degree in both Computer Science and Software Engineering, or a Bachelors in Computer Science with a Masters in Software Engineering. My point of view is that I would rather go to school for 4-4.5 years to learn both sides of the field, and be out working on real projects gaining real experience, by going the double major route. His point of view is that it would look better to potential employers if he had a Bachelors in CS and a Masters in SE. That way, when he's finally done after 4 years of CS and 2-4 of SE (depending on where he goes), he can pretty much have his pick of what he wants to do. We are both in agreement on the distinction between the two degrees: CS is \"traditional\" and about the theory of algorithms, data structures, and programming, whereas SE is the study of the design of software and the implementation of CS theory. So, what's your stance on this debate? Have you gone one route or another? And most importantly, why?"} {"_id": "26107", "title": "What are your decision critera in C++ to make something a data member or virtual method?", "text": "(Figured this was too subjective for SO, so posting here...) I have some behavior that I can implement in various ways. At least two methods are shown in the code snippet below. Presume that the m_well data member is correctly set somehow at object construction time. struct Behavior { virtual bool behavesWell() { return true; } private: bool m_well; public: bool behavesMemberWell() { return m_well; } }; struct OtherBehavior : public Behavior { virtual bool behavesWell() { return false; } }; Obviously one relies upon virtual dispatch, and the other merely on the return of a data member. A third method, not shown, would likely have the non-virtual public member function not return a fixed data member, but instead call a virtual - let's leave that aside for the purposes of this question, please. What would lead you to one or the other of these two methods of implementing functionality, such that a user of this class can interrogate an object's behavior?"} {"_id": "204760", "title": "\"every statement and declaration an expression that yields a value\" why?", "text": "At the end of the answer to \"Can I do ++x and x++ in Python?\" on this page: http://norvig.com/python-iaq.html, you can read: > [...] I'm with my fellow Dane, Bjarne Stroustrup, on this one.
He said in The Design and Evolution of C++: \"If I were to design a language from scratch, I would follow the Algol68 path and make every statement and declaration an expression that yields a value.\" Why?"} {"_id": "204763", "title": "Data integrity in NoSQL situations", "text": "**Background** To start off, at work I work with a legacy system that, in its day, was quite spectacular, but now is ...interesting... to work with. It uses IBM (now Rocket) UniVerse as its backing database. One particular part of it that has caused some issues is its lack of data-integrity checks. By data integrity, I don't mean file corruption, but rather things like orphaned records or invalid keys. The particular version they use doesn't support things like triggers and such, and so unless the programmer remembered to update their computed indexes, it becomes \"broken\" and filled with bad data. Now, the other programs have been built to live with this bad data, but it is most annoying when putting it into another database, such as MySQL (using InnoDB as the engine), which actually has constraints on the data. **The question** I'm experimenting with MongoDB and NodeJS just to see what all the hype is about. I really like Mongoose as well, and its schema system. I've been reading a lot about what to store on each record vs in a separate collection. Perhaps it's just my own RDBMS bias, but I decided to store each \"type\" of thing in a separate collection and utilize Mongoose's \"populate\" functionality to essentially relate the records together. Now, I am sure someone is going to say that that is going against the whole NoSQL thing, but I haven't really read anywhere that says that just because something stores documents instead of records and has no set schema on the db level, it can't be made relational. In my experiment, I have \"Posts\" and \"Comments\". I see four ways to store the relation between these two: * The full data for each comment is put directly onto the post as a subdocument. There are two main cons that I have seen with this: if I decide to put comments on something else (let's say a \"Page\") as well, I have to essentially repeat myself, and it isn't quite as simple to find out how many comments a user has posted if the comments are actually stored in several collections. * Comments are a separate collection and they store their parent's key and a schema name for Mongoose to use when populating (the schema name switching wouldn't be automatic). This isn't too bad, but it's biased towards loading comments first and posts later. Finding out the comments on a post isn't hard, but requires a manual query. * Comments are a separate collection and posts have a list of comment ids that relate to them. This is biased towards loading posts first, and finding out what a comment is attached to becomes difficult. However, Mongoose would let me load the comments without having to write anything additional. * Comments are a separate collection and have a parent id. Posts also have a list of comment ids. This combines the above two methods, neutralizes their cons, and makes for relatively few \"manual\" queries, but introduces the possibility of data becoming dirty and out of sync like the legacy system that I described above (e.g. a comment says it belongs to one post while another post (or multiple posts) claims ownership of that comment).
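To state the fourth option's shape unambiguously, here is a minimal sketch as plain classes (C# only because classes state the shape compactly; the field names are hypothetical - my actual stack is Node/Mongoose):

using System.Collections.Generic;

// the fourth option: both sides hold references, and application code
// alone is responsible for keeping the two sides in agreement
class Post
{
    public string Id;
    public List<string> CommentIds = new List<string>(); // denormalized child index
}

class Comment
{
    public string Id;
    public string ParentId; // must match exactly one Post whose CommentIds contains Id
}

As far as I know, MongoDB can't update two collections atomically, so every comment write has two chances to leave these references disagreeing - which is exactly the dirty-data risk I described.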
I was going down the latter path listed above, and I realized that I was starting to enter the realm of this legacy system that had caused so many headaches with its manually updated indexes and its possibility for bad data. Now, I don't expect to do a ton with this little experiment of mine, but it's the principle of the thing that has me thinking. What would you all recommend going with on this? I want to be able to keep query counts low, but I also don't want to have to remember to update all these indexes. There has to be a happy medium somewhere. Another option, of course, is to just use MySQL with some nice schema constraints, but that isn't the point of this particular exercise for me, since I've done that tons of times already."} {"_id": "250635", "title": "When I test out the difference in time between shifting and multiplying in C, there is no difference. Why?", "text": "I have been taught that shifting in binary is much more efficient than multiplying by 2^k. So I wanted to experiment, and I used the following code to test this out: #include <stdio.h> #include <time.h> int main() { clock_t launch = clock(); int test = 0x01; int runs; /* simple loop that oscillates between int 1 and int 2 */ for (runs = 0; runs < 100000000; runs++) { /* I first compiled + ran it a few times with this: */ test *= 2; /* then I recompiled + ran it a few times with: */ test <<= 1; /* set back to 1 each time */ test >>= 1; } clock_t done = clock(); double diff = (done - launch); printf(\"%f\\n\", diff); } For both versions, the printout was approximately 440000, give or take 10000. There was no (visually, at least) significant difference between the two versions' outputs. So my question is, is there something wrong with my methodology? Should there even be a visual difference? Does this have something to do with the architecture of my computer, the compiler, or something else?"} {"_id": "187963", "title": "When would using a scripting language within a larger program be useful?", "text": "I have heard of several situations of people using, say, JavaScript or Python (or something), inside a program written in C#. When would using a language like JavaScript to do something in a C# program be better than just doing it in C#?"} {"_id": "187965", "title": "Will the Apache License or the GPL License protect my trademarks?", "text": "I'm trying to see if these two open source licenses give protection of trademarks to the originator. I have read both licenses, but they only seem to talk about their own trademarks, i.e. 'Apache'."} {"_id": "63839", "title": "Joomla or development from scratch?", "text": "As someone who has very little experience with it, I would like to know: what makes you think that Joomla can fulfill all your requirements? What makes you choose it over development from scratch (or using a framework like Yii or Kohana)? What are the most common or crucial problems that you face when using Joomla? Since I don't have much experience with it, I would presume that one of the biggest problems is flexibility. You can't scale or customize the behaviour of your app, and even if you could, you would need to break some rule in Joomla, or wait for the next release. Is this true? Currently I am building a long-term project; there might be a lot of specific functions and behaviours in it. I would like to build it from scratch or with help from some PHP framework. But I've seen so many websites (and some of them are great in terms of complexity) using Joomla. This gave me some doubts about choosing the right tech.
I would like to know whether there are one or two ultimate reasons to choose Joomla or the alternatives."} {"_id": "75155", "title": "Design of an evaluator object for propagation and IO of results", "text": "We are having a discussion about design. Keep in mind this is Fortran, so we can't be too smart. We have the following classes: Application, System, Calculator, CalculatorSimple, CalculatorComplex, Result. Application is an application object which handles System objects. In order to compute information on a System object, the Application passes the System to a Calculator, which returns a Result. Inside Calculator there are two specialized classes, CalculatorSimple and CalculatorComplex, which act on the System depending on information available on the system itself. Some Systems require CalculatorSimple. Others require CalculatorComplex. The decision happens inside Calculator, and the Result object may contain additional information if CalculatorComplex was run. This additional info can be queried with methods on the Result object, to test if the information is present or not. This info is of small size. To keep separation of roles, I want to keep this part unaware of input/output. The Result object is received by the Application, and then the Application has the role of writing the data to the file, according to what it finds on the Result object. A colleague instead proposes to pass the file to the Calculator object, and have it percolate through the chain so that stuff is written directly to the file. The additional point is that we must have only one output file, so when the program runs in parallel, only the master node may write to the output file, which is identified via an absolute path and, due to a limitation of our raw IO lib, cannot be accessed by MPI slave processes. We need an independent opinion. What would you do?"} {"_id": "147939", "title": "Best overview to modern C++ paradigms?", "text": "I used to write C++ extensively between 8 and 10 years ago. I have since moved on to C# for professional reasons. However, from time to time I see statements like > \"If you're still manually tracking pointer references, you're doing it wrong\" or > \"C++ is perfectly safe as long as you're using modern concepts like RAII and not manually allocating memory like a recovering C developer.\" Both of those were standard procedure a decade ago. I have seen that C++ has been improved considerably in recent times; C++0x in particular seems like it has some new capabilities. What's the best resource for a \"C/old C++\" programmer to get caught up on \"modern\" C++ patterns and practices?"} {"_id": "37451", "title": "Telecommuting with a foreign employer as a permanent job", "text": "Does anyone have any experience in telecommuting (working at home) for a company based in some foreign country? By this I don't mean working on some contracted job, but a more or less permanent job. Is this even possible, what are the options for payment, and can you expect to be paid at the usual rates for that country or significantly less?
Is there any control of working hours, or is it all good as long as you deliver on time?"} {"_id": "147937", "title": "How much detail about a user story can a developer expect?", "text": "The biggest drawback of agile development I have experienced is that people not involved in development focus on the mantra that a user story (3-10 ideal person days) should not contain more than 1-3 sentences like: > As a customer, I can use free-text search so that I can find the products I'm looking for. Given this sentence, project managers expect me as a developer to commit to an estimate and develop the story. They assume that agile development means that sentences like this are all they have to provide to developers. I won't blame them, because the well-known literature about agile development creates the impression that this would actually work. I've read something like 2 pages in natural language per story in \"Planning XP\", but that's it. Because \"working software\" is favored over \"comprehensive documentation\", this topic seems to be generally avoided. The reality is, of course, that if the developer is given the chance to do so, an interview with the customer brings up a long list of requirements that the customer has for the story: > * We need boolean operators like AND and OR. > * We need fuzzy search on all terms. > * We need to search by single words as well as by phrase. > * We don't want to find products that meet criteria X, Y, and Z. > * We want to sort the result. Oh, and by the way, the user can select the sort criteria in a combo box with options a, b, and c. So you see that I'm not talking about technical details or software design or even implementation details. It's pure requirements. The longer we talk, the more the customer realizes that there's actually quite a lot to say about what they want. But often enough I find myself in the situation that such information is not provided, or provided in very shoddy fashion. I am neither able to do the interview myself, nor does the person who would be in the position to do the interview provide me with its results. Sometimes, managers even come up with technical details like \"we want Lucene search\", but they don't want to think about whether they want to find only product names or also product descriptions. Sometimes I think they are just lazy ;) For me, this is the top issue in the projects I work on (e-business web applications, 500-2000 person days per project). I've addressed this problem often enough, and managers are aware that most developers have a problem with the situation. But they believe that developers are just too \"perfectionist\". They seem annoyed that developers \"always want to have everything specified\". Due to the lack of generally acknowledged numbers it's hard to argue. Everybody knows how long an iteration should be. But nobody can tell how much requirements detail is needed to estimate and develop a story. Do you have any references?"} {"_id": "37457", "title": "How is the price of a website evaluated?", "text": "I cannot find a good way to ask this question: Every day I read the news about the sale of \"Twitter\" or \"Friendster\" or \"hi5\". What I would like to know is how the prices of such websites are evaluated."} {"_id": "138166", "title": "I want to publish my slides for students", "text": "I want to make a website for uploading my slides for students, but I don't want to use public sites where everyone can see my slides; I want only my students to be able to see and download them. What is the best thing I can do?
Is it to use WordPress, for example (I am not sure if I can upload my slides to WordPress or not)? Or is there any easy-to-install-and-use script for this purpose?"} {"_id": "249933", "title": "Why is C++ \"this\" poorly designed?", "text": "1) For every `a` and `b` which are non-const pointers of the same type, you can do `a = b;`, right? 2) Inside non-const member functions the `this` keyword exists, which is a non-const pointer. So logically, if `b` is the same type as `this`, you can also do `this = b;`, right? Wrong. You cannot do `this = b;`, because `this` uses pointer syntax but logically `this` is a reference! But why on earth is `this` syntactically a pointer but logically a reference? Can this weird behavior be corrected in the next C++ standard by introducing a new keyword, for example `me`, which will be a reference not only logically but also syntactically? (See also my attempt to solve this here: Is it a good idea to \u201c#define me (*this)\u201d ?)"} {"_id": "249936", "title": "As an API-user, would you tolerate BestPractice Exceptions?", "text": "I'm in the process of designing an API, part of which involves writing POCOs to a database. In C#, we have the `DateTime` structure. The \"default\" value for this (`DateTime.MinValue`) is 01/01/0001. Part of the API serializes POCOs to the database. If a date field is optional, it should _really_ be nullable (in C# syntax, this would be defined as `DateTime?`). What I'd like to avoid is programmers falling into the trap of writing `DateTime.MinValue` to the database _at all_. It's a valid date, for sure - but smells of something. So I am debating implementing a `BestPracticeException` class that would be thrown in circumstances such as this. If the user of the API really needs to deal with that date, they probably also need to deal with dates prior to that and need an entirely different structure (e.g. comparing how many months have occurred between 500 BC and today). Do you think preventing writing 01/01/0001 to the database, on the whole, is a good thing to do? One might argue that you need to differentiate between no date being entered and the user entering nothing. My answer to _that_ would be that it is the function of a broader auditing process that would pick that up, rather than attempting to ascertain intent from a field being null, or not null."} {"_id": "70507", "title": "How do I know that the freelancer (Developer) gave me my money's worth?", "text": "I am hiring a PHP developer to build me a website. Their rate is $70 an hour. They seem to have a good portfolio and experience. How do I know that I will get a good website that is scalable and easy to maintain? Do I ask them to give me code samples so I can review them (I am a programmer but I did not do much PHP)? What are good credentials for a PHP developer? Experience with Drupal? LAMP?"} {"_id": "209063", "title": "Problem in Understanding Algorithm from TAOCP \"Multiply Permutations in Cycle Form\"", "text": "I am not able to understand one algorithm discussed in TAOCP Volume 1, Section 1.3.3, named \"Algorithm A\" and stated as \"Multiply permutations in cycle form\", when compared with the stated example on the next page. The step that is not clear is mentioned in the 8th and 9th rows; i.e., how can the CURRENT value become \"g\" after the previous iteration where the CURRENT value was \"d\"? Please refer to \"The Art of Computer Programming Volume 1\" by Knuth for more details (section 1.3.3). It contains the detailed description of this algorithm. 
Detailed Algorithm: **Algorithm A (Multiply permutations in cycle form).** This algorithm takes a product of cycles, such as (6), and computes the resulting permutation in the form of a product of disjoint cycles. For simplicity, the removal of singleton cycles is not described here; that would be a fairly simple extension of the algorithm. As this algorithm is performed, we successively \"tag\" the elements of the input formula; that is, we mark somehow those symbols of the input formula that have been processed. * A1. [First pass.] Tag all left parentheses, and replace each right parenthesis by a tagged copy of the element that follows its matching left parenthesis. (See the example in Table 1.) * A2. [Open.] Searching from left to right, find the first untagged element of the input. (If all elements are tagged, the algorithm terminates.) Set START equal to it; output a left parenthesis; output the element; and tag it. * A3. [See CURRENT.] Set CURRENT equal to the next element of the formula. * A4. [Scan formula.] Proceed to the right until either reaching the end of the formula, or finding an element equal to CURRENT; in the latter case, tag it and go back to step A3. * A5. [CURRENT = START?] If CURRENT \u2260 START, output CURRENT and go back to step A4 starting again at the left of the formula (thereby continuing the development of a cycle in the output). * A6. [Close.] (A complete cycle in the output has been found.) Output a right parenthesis, and go back to step A2."} {"_id": "209062", "title": "Where should I put methods that make an Http Request to get data from a web service in iOS development?", "text": "I have a Model Car in my iOS application where its parameters like name, year, value, etc. are fetched from a web service in order to fill a list with car data. **Where should I put the method that asynchronously goes to the server and returns an array of cars (this method already converts the JSON to a Car array)?** My current approach is a static method in my Car class that receives an HttpClient (so I'm able to unit test it by mocking the client) and returns an NSArray of cars; is this good? What have you guys done in this situation? I'm concerned because I recently started reading Clean Code, which says that a class should do only one thing, and the way I have it now appears to do 2 things (hold information about a Car and get a list of cars)."} {"_id": "147388", "title": "Static objects and concurrency in a web application", "text": "I'm developing small Java Web Applications on a Tomcat server and I'm using MySQL as my database. Up until now I was using a connection singleton for accessing the database, but I found out that this will ensure just one connection per application and there will be problems if multiple users want to access the database at the same time. (They all have to make use of that single Connection object). I created a `Connection Pool` and I hope that this is the correct way of doing things. Furthermore, it seems that I developed the bad habit of creating a lot of static objects and static methods (mainly because I was under the wrong impression that every static object would be duplicated for every client that accesses my application). 
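For the Algorithm A question above (209063), a minimal sketch of the same computation may help. This is not Knuth's in-place tagging procedure but a map-based equivalent that produces the same disjoint cycles, assuming the left-to-right convention for applying cycles that TAOCP section 1.3.3 uses; all class and method names are illustrative.

    import java.util.*;

    // Sketch: multiply permutations given in cycle form into disjoint cycles.
    // Same result as Algorithm A, but via an explicit symbol -> image map
    // rather than tagging the input formula in place.
    public class MultiplyCycles {

        static List<List<String>> multiply(List<List<String>> cycles) {
            // collect every symbol, then trace each one through all the cycles
            Set<String> symbols = new LinkedHashSet<>();
            for (List<String> c : cycles) symbols.addAll(c);
            Map<String, String> image = new LinkedHashMap<>();
            for (String s : symbols) {
                String x = s;
                for (List<String> c : cycles) {          // apply each cycle in turn
                    int i = c.indexOf(x);
                    if (i >= 0) x = c.get((i + 1) % c.size());
                }
                image.put(s, x);
            }
            // read the disjoint cycles back out of the mapping
            List<List<String>> out = new ArrayList<>();
            Set<String> seen = new HashSet<>();
            for (String start : symbols) {
                if (!seen.add(start)) continue;          // already part of a cycle
                List<String> cycle = new ArrayList<>(List.of(start));
                for (String x = image.get(start); !x.equals(start); x = image.get(x)) {
                    cycle.add(x);
                    seen.add(x);
                }
                out.add(cycle);
            }
            return out;
        }

        public static void main(String[] args) {
            // (a b c)(b d) multiplies out to the single cycle (a d b c)
            System.out.println(multiply(List.of(List.of("a", "b", "c"), List.of("b", "d"))));
            // prints [[a, d, b, c]]
        }
    }

Tracing a small product like this alongside the book's steps also highlights the detail that likely explains the d-to-g jump the poster asks about: step A5 restarts the scan of step A4 from the left of the whole formula, not from the current position.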
Because of this, all the Service Classes (classes used to handle database data) are static and distributed through a `ServiceFactory`:

    public class ServiceFactory {
        private static final String JDBC = \"JDBC\";
        private static String impl;
        private static AccountService accountService;
        private static BoardService boardService;

        public static AccountService getAccountService(){
            initConfig();
            if (accountService == null){
                if (impl.equalsIgnoreCase(JDBC)){
                    accountService = new JDBCAccountService();
                }
            }
            return accountService;
        }

        public static BoardService getBoardService(){
            initConfig();
            if (boardService == null){
                if (impl.equalsIgnoreCase(JDBC)){
                    boardService = new JDBCBoardService();
                }
            }
            return boardService;
        }

        private static void initConfig(){
            if (StringUtil.isEmpty(impl)){
                impl = ConfigUtil.getProperty(\"service.implementation\");
                // If the config failed, initialize with the standard
                if (StringUtil.isEmpty(impl)){
                    impl = JDBC;
                }
            }
        }
    }

This was the factory class which, as you can see, allows just one Service to exist at any time. Now, is this a bad practice? What happens if, let's say, 1k users access `AccountService` simultaneously? I know that all these questions and bad practices come from a bad understanding of the static attribute in a web application and the way the server handles these attributes. Any help on this topic would be more than welcome. Thank you for your time!"} {"_id": "255253", "title": "Data persistence for transactional customer emails", "text": "I'm developing a system to handle sending transactional emails to our customers. This is how it works: 1. An event occurs during the order's life cycle, for example 'shipped' 2. This event will trigger the creation of an email in the database (email queue) 3. A separate Windows service polls the db table for new emails to send. When it finds one it calls a Web service with all the required data. It's the Web service's responsibility to handle the actual sending of the email. My question relates to step 2. When an email-triggering event occurs, should I take a snapshot of all the data required by the service (thereby duplicating data and introducing new tables), or should I get the required data from the transactional db tables only at the point where I'm ready to call the Web service? [Note: I'm not at all concerned with the body of the email, all I'm doing is passing the data to a Web service. The email templates etc are handled by another system. My concern is purely to queue the email notification and pass the required data to the Web service.]"} {"_id": "54648", "title": "Decompilers - Myth or Fact?", "text": "Lately I have been thinking of application security and binaries and decompilers. (FYI - a decompiler is just an anti-compiler; the purpose is to get the source back from the binary) * Is there such a thing as a \"Perfect Decompiler\"? Or are binaries safe from reverse engineering? (For clarity's sake, by \"Perfect\" I mean the original source files with all the variable names/macros/functions/classes/if possible comments in the respective headers and source files used to get the binary) * What are some of the best practices used to prevent reverse engineering of software? Is it a major concern? * Also is obfuscation/file permissions the only way to prevent unauthorized hacks on scripts? (call me a script-junky if you must)"} {"_id": "222130", "title": "What is idiomatic use of arbitrary blocks in C?", "text": "A block is a list of statements to be executed. 
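For the `ServiceFactory` question above (147388): the lazy initialization shown there is not thread-safe - two threads can both observe `accountService == null` and construct two instances, and nothing guarantees safe publication of the field. One common fix, sketched below under the assumption that the services themselves are stateless, is the initialization-on-demand holder idiom; the stub types stand in for the question's real `AccountService`/`JDBCAccountService`.

    // Stub types standing in for the question's real interfaces/implementations.
    interface AccountService { /* findAccount(...), etc. */ }

    class JDBCAccountService implements AccountService { }

    public class ServiceFactory {

        private static class Holder {
            // JVM class initialization runs exactly once, lazily, and the
            // result is safely published to every thread - no locking needed
            static final AccountService ACCOUNT_SERVICE = new JDBCAccountService();
        }

        public static AccountService getAccountService() {
            return Holder.ACCOUNT_SERVICE;
        }
    }

With that in place, 1k simultaneous callers simply share the one instance; the remaining requirement is that the service keep no per-request mutable state (e.g. take its connections from the pool per call rather than holding one as a field).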
Examples of where blocks come up in C are after a while statement and in if statements:

    while (boolean expression)
        statement OR block

    if (boolean expression)
        statement OR block

C also allows a block to be nested in a block. I can use this to reuse variable names; suppose I really like 'x':

    int x = 0;
    while (x < 10) {
        {
            int x = 5;
            printf(\"%d\", x);
        }
        x = x + 1;
    }

will print the number 5 ten times. I guess I could see situations where keeping the number of variable names low is desirable. Perhaps in macro expansion. However, I cannot see any hard reason to need this feature. Can anyone help me understand uses of this feature by supplying some idioms where it is used?"} {"_id": "232450", "title": "What is a scalable and practical way to search existence of a group of strings in a huge file", "text": "**Context** : I built an app which generates around **1000 domain names** based on user input. I need to check if they are **available or not** by matching against a huge zone file of parsed domain names which is around **2 GB**. I have an Amazon micro instance and cannot store the text file on there due to space constraints. I am expecting around **100k - 200k** and more in **search queries** per month. **Naive approach** (Potentially): 1\\. Store the text file in Dropbox. Then get the contents of the file and search for the strings and spit out the available domain names on the EC2 instance. I only need to check if domains exist or not. **Should I store it in a database?** Some info: There are currently 100 million dot com names registered according to Verisign. And my parsed domain names are one on each line. Like: * GOOGLE * APPLE * FACEBOOK * STACKOVERFLOW etc What is the best and most practical way to deal with the problem? Ideally the checking should take only a few seconds. But I am fine with anything that works at this point."} {"_id": "232454", "title": "Readers Writers algorithm help - starvation and no starvation", "text": "I found this link as an explanation of the readers/writers problem - http://pages.cs.wisc.edu/~bart/537/lecturenotes/s7.html But I'm having problems working out whether any starvation is present here and why. Also, if there is, what should be changed? I would also appreciate help with confirming the answers to the questions that are provided below the lecture: > * Is the first writer to execute Lock.P() guaranteed to be the first > writer to access the data? > No? > * Can OKToRead ever get greater than 1? What about OKToWrite? > Yes and No?"} {"_id": "222139", "title": "why no native compiler of C# or other \u201cproductive\u201d language?", "text": "I've been reading about D and Go and how they aim at being compiled to machine code yet be convenient (like garbage collection, no need to manipulate pointers unless needed), and I agree that there is a need for such a language. However, the designers of both Go and D for some reason decided to invent a new language. Why? Why not take an existing language which is popular and proven to be productive but is not compiled to native code, like C#? What are the technological reasons for that? Would it be hard to implement garbage collection (D does), LINQ, or something else entirely?"} {"_id": "106106", "title": "Can applications override the OS keyboard layout?", "text": "We have an in-house .NET application, which can be used across multiple languages. Lately we received a strange bug: while our application is being used, users are unable to enter accented characters (Spanish, Win 7) in other applications (ex: MS Word or Notepad). Strange, huh! 
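For the zone-file question above (232450), one practical sketch that avoids loading the 2 GB file into memory (a hash set of ~100M names would not fit on a micro instance): since queries arrive in batches of ~1000 generated names, put the *candidates* in a set and stream the file once, marking the ones that are already registered. Memory stays tiny and each batch costs one sequential pass; the file path and names here are illustrative.

    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.util.*;

    // One pass over the zone file per batch of candidate names.
    public class DomainCheck {

        static Set<String> findRegistered(File zoneFile, Collection<String> candidates)
                throws IOException {
            Set<String> wanted = new HashSet<>(candidates);     // ~1000 entries
            Set<String> registered = new HashSet<>();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream(zoneFile), StandardCharsets.US_ASCII))) {
                for (String line = in.readLine(); line != null; line = in.readLine()) {
                    String name = line.trim();                  // one name per line
                    if (wanted.contains(name)) registered.add(name);
                }
            }
            return registered;   // candidates absent from this set look available
        }
    }

If a full pass per batch proves too slow, the usual next step is to import the names once into an indexed store (even a single indexed MySQL or SQLite table) and query that, trading a one-off import for fast per-name lookups.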
Did anyone come across a similar bug? If so, how did you solve it?"} {"_id": "106107", "title": "Test-driven development and improving white box testing skills", "text": "I am an entry level Java Programmer straight out of school. I have good knowledge and experience with J2SE. Can anyone advise me on how to improve or tune my skills towards being a Java white box tester? A wide range of inputs is welcome. And what is Test Driven Development?"} {"_id": "106104", "title": "How can I use Scrum with a freelance team?", "text": "I'm working in a start-up. I have a background with teamwork and management, but I'm currently the only developer. We have a project that will involve a few developers in the form of 2 freelancers. I will function as a developer as well as the ScrumMaster. The project is scheduled to last three to four months. The 3 developers (myself and the two freelancers) have never worked together before. Would it be possible or advisable to use Scrum to manage this type of project? Would there be any problems organizing or running the team using Scrum?"} {"_id": "113215", "title": "Is It OK to Develop Against a 3rd Party Library You Discover?", "text": "If you find out a way to interact with a web service API that is not publicly documented as available, and can emulate it within an application you develop, is it illegal to do so? Furthermore, is it illegal to charge for a product that you build to interact with these services (maybe all that's needed is a reference to the 3rd party)? An example would be, let's say, Google didn't advertise how to do a search in your program, but you found out how and made it so people could do the search differently than how Google presents it; can you build against the service you discovered and sell a version of it?"} {"_id": "188886", "title": "What's the standard practice to prevent users from having unreasonable expectations?", "text": "There's some subscription-based data processing web service - users pay via PayPal for the right to use the service. The \"terms of service\" document prepared by lawyers explicitly says that there're no guarantees so customers can't possibly file a lawsuit. Now the service doesn't always work trouble-free. Sometimes there're network problems and so it's unreachable to some clients. Sometimes there's a problem with third-party services the service depends upon. So something like twelve hours per year total it doesn't work. This is not perfect, but IMO very good. Yet clients feel that since they've paid for it - it must just work, period, at all times, and they even write claims for compensation to the service support. I'd guess they just don't read the \"terms of service\" but I can't be sure. What's the standard practice (besides having \"terms of service\") to prevent users from having unreasonable expectations?"} {"_id": "188881", "title": "How to optimize calls to multiple APIs at once and return as one set?", "text": "I have a web app that searches across 2 APIs right now. I have my own Restful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate them into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? 
Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity it adds? If I go the route of a service call per API, then my client code gets more complex (and I have a lot of clients). And it's more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?"} {"_id": "85995", "title": "Matching my skills with Java and Web Programming", "text": "Here is my main question: **What is the most common way that Java is used in web development?** The reason I ask: I am currently in the process of finding my _first_ internship. Every employer has a separate set of languages, technologies and acronyms they want their candidates to know. * In school I did well with Java. * As a hobby and interest I have developed a handful of web pages, widgets, scripts, etc. My university emphasized _Java_ , C and theory. My hobbies emphasize HTML, PHP, JavaScript, CSS, and a little jQuery, etc. I can't learn a dozen different technologies to satisfy most prospective employers (in what is left of the summer). I think my best bet is to combine my skills with Java and my interests in web development. That brings me back to my original question: _What is the most common way that Java is used in web development?_"} {"_id": "102617", "title": "Pointless Code In Your Source", "text": "I've heard stories of this from senior coders and I've seen some of it myself. It seems that there are more than a few instances of programmers writing pointless code. I will see things like: > * Method or function calls that do nothing of value. > * Redundant checks done in a separate class file, object or method. > * `if` statements that always evaluate to true. > * Threads that spin off and do nothing of note. > Just to name a few. I've been told that this is because programmers want to intentionally make the code confusing to raise their own worth to the organization or ensure repeat business in the case of contractual or outsourced work. My question is: has anyone else seen code like this? What was your conclusion as to why that code was there? If anyone has written code like this, can you share why?"} {"_id": "36503", "title": "What is \"Open Core\" software?", "text": "What does the term \"Open Core\" software mean?"} {"_id": "112167", "title": "Is there a \"SMACCS\" for jQuery?", "text": "I find my usage of jQuery scaling in proportion with user expectations and RIA app development. Problem is, I'm no jQuery guru. I am looking for some nice practices that also do not require a deep knowledge of js/jquery. An article I ran across today inspired me to ask this question. Is there such a simple set of general guidelines for jQuery? I have not read \"JavaScript: The Good Parts\" as of yet, so I'm looking for some basic stuff."} {"_id": "112162", "title": "How much does/should your style imply about your skill in a language?", "text": "Two things I've noticed in the past week that make me wonder: * An interview where my Perl skills were reviewed. I always use C-style for loops and use map about once in every 10,000 lines of code, so I almost always have to reference it before using it. 
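For the multi-API question above (188881), a minimal sketch of keeping the single service call while making it robust: fan out to every API concurrently, give each call a timeout, and turn failures into empty results so one dead API can't sink the whole batch. `fetchFrom(...)` stands in for the real per-API client code, and the 2-second timeout is an arbitrary placeholder.

    import java.util.*;
    import java.util.concurrent.*;
    import java.util.stream.Collectors;

    // Concurrent fan-out with per-call timeouts and failure isolation.
    public class FanOutSearch {

        static List<String> search(List<String> apis, String query, ExecutorService pool) {
            List<CompletableFuture<List<String>>> calls = apis.stream()
                    .map(api -> CompletableFuture
                            .supplyAsync(() -> fetchFrom(api, query), pool)
                            .completeOnTimeout(List.of(), 2, TimeUnit.SECONDS) // hung API -> empty
                            .exceptionally(ex -> List.of()))                   // failed API -> empty
                    .collect(Collectors.toList());
            return calls.stream()                 // every future completes, so join() is safe
                    .flatMap(f -> f.join().stream())
                    .collect(Collectors.toList());
        }

        static List<String> fetchFrom(String api, String query) {
            return List.of(api + " result for " + query);   // placeholder client call
        }
    }

For the "return data as I get it" variant, the same fan-out can feed a streaming response (chunked HTTP or server-sent events) instead of being joined into one list.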
While I can interpret others' code using that, it's not my style, and that is for reasons involving readability and ease of changing between languages. I at least perceived that I was dinged for that. * This answer to my question, in which the answerer proceeded to refactor and bullet-proof my tiny script. I admit I was a little bit offended and kind of annoyed at that, though I can understand how that as a habit may be helpful. I understand that in a team, my style will need to adapt to what the team has decided is correct -- this is necessary in every project and everybody has their own ideas. Yet I feel like I'm judged for what is syntactically fine and readable code. How much does the style of your smaller snippet code weigh in? Does code like that imply to others that I don't have control of the language because I didn't use all the features? Does my code imply that I'm not familiar with unit testing and proper testing because I didn't fully error-protect a tiny script?"} {"_id": "230344", "title": "How to create a WinForms project that is \"WPF-ready\"", "text": "I'm a C# developer who hasn't had the privilege of learning WPF yet. However, I recently initiated the architecture phase of a new project which I expect will eventually employ WPF (probably; although ASP.Net is still an option) -- but the GUI development won't begin for _several_ months. (We need to get buy-in on a WinForm-based proof-of-concept before we proceed with full production-grade development.) When we are ready to begin implementing the GUI, I will either learn WPF myself or hire someone with experience (preferably both). So my question is, what are some key fundamentals (design patterns, language essentials, etc) that I could employ right away which would facilitate a smoother migration down the road? For example, my code is a simulation model which will contain a very large number of entities that will eventually need to be \"visualized\", with interactive user-controls. Are there certain techniques that, if implemented from the ground up, would make wiring up these objects with WPF less painful later on? (Like dependency properties, or what-not.) Similarly, if I stick with the MVC approach that I am familiar with for right now, will it be straight-forward to morph that into an MVVM strategy later? Are there some simple concepts that I could apply now that would make the migration easier, but which won't take two weeks out of my current dev-cycle to master? Basically, I am trying to follow Uncle Bob's advice to postpone architectural decisions (in this case, the GUI layer) until I absolutely have to make them. But I want to know if there are any unnecessary migration costs that I could avoid if I anticipate them in advance."} {"_id": "234853", "title": "Appending local MySQL data to same MySQL data in web server", "text": "My IT boss wants this scenario to be undertaken: First, we develop an Alumni program using PHP and deploy it on the web server, where the school's alumni can access it on the web to update their personal data at any time. Then we must also deploy exactly that SAME program locally, over the LAN at a satellite campus, which is far away and has a very erratic and unreliable internet connection. The purpose is that the latter system (on the LAN) can still be used to add new alumni data even during \"offline\" periods, and then, once the internet signal is OK and it is back \"online\", upload this offline-generated MySQL data to the web server hosting THE SAME software program. 
How do we then do a seamless transfer of this MySQL data - from the LAN to the web server - once the online signal is steady? Thanks for any advice..."} {"_id": "83225", "title": "Should a developer be forced to memorize details?", "text": "Many times I forget things about my application. I don't memorize the table names or what a query did, and I search to get what I want. My team leader told me I'm supposed to memorize the table names that I use. Is the developer required to `memorize` the table names in the database, the class names, etc.? And if the answer is \"Yes, all the time,\" what should I do to remember those things?"} {"_id": "34906", "title": "Why is it good not to rely on changing state?", "text": "This question arises out of the question Is Haskell worth learning? Generally a few often-repeated statements are made, about how Haskell improves your coding skills in other languages, and furthermore, that this is because Haskell is stateless, and that's a good thing. Why? I've seen someone compare this to only typing with the left hand, or perhaps closing your eyes for a day and just relying on touch. Surely there is more to it than that? Does it relate to hardware memory access, or something else which is a big performance gain?"} {"_id": "202352", "title": "Should we be looking out for lying code?", "text": "This is referring to a discussion in an answer and the comments of this question: What's with the aversion to documentation in the industry?. The answer claimed that \"code can't lie\" and thus should be the go-to location instead of documentation. Several comments pointed out that \"code can lie\". There is truth on both sides, at least partly because of how poorly and inappropriately documentation is handled. Should we be on the lookout for lying code, comparing it against any existing documentation? Or is it usually the best source for what it needs to be doing? If it is agile code, is it less likely to lie, or can that code not lie at all?"} {"_id": "202356", "title": "Is Node.js future-safe?", "text": "I've been getting great results with Node.js for over a year. Everything is perfect and I couldn't be happier. Yet I have a feeling that this model won't last long and will be forgotten as soon as something replaces JavaScript in the browser, so I'm considering migrating to Haskell or similar as soon as I can. Is Node.js a future-safe technology, or is it a middle-term solution?"} {"_id": "133156", "title": "When using Git, is using the master branch for active development advisable?", "text": "First, some background: we are in the process of moving all of our project teams over to using git and are in the process of laying down the guidelines for how the repositories should be organized so that certain branches can also be monitored for continuous integration and automatic deployment to the testing servers. Currently there are two models that are developing: 1. Heavily influenced by the nvie.com article on successful branching, with the master branch representing the most stable code, a development branch for the bleeding edge code, and an integration branch for code that is ready for QA testing. 2. An alternate model in which the master branch represents the bleeding edge development code, an integration branch for code that is ready for QA testing, and a production branch for the stable code that is ready for deployment. 
At this point, it is partly a matter of semantics in regards to what the master branch represents, but is doing active development on the master branch actually a good practice, or is it not really that relevant?"} {"_id": "133159", "title": "Entity-Component-System architecture: interaction between systems", "text": "I am studying the Entity-Component-System architecture philosophy. As I have read about it, a typical entity system has: 1) **Entities** \\- which are merely ID tags which have a number of components 2) **Components** \\- which contain _data_ on various aspects of an entity that the component is responsible for 3) **Systems** \\- which update relevant components of every entity. Say, a rendering system updates the rendering component, or simply put, draws a picture that is stored in the data of that component. A positional and movement system handles the position and movement of each entity that has a corresponding component. These statements follow from this article, which in my opinion tries to be the most clear and pure in its statements. But the author did not explain how the interaction between systems should be realized. For example, the rendering system must know the data from the positional component of an entity in order to draw it in the correct position. And so on. So the question is - how should I implement the interaction between the various systems?"} {"_id": "85976", "title": "How should you approach supporting rapidly-updating web browsers?", "text": "Today, Firefox 5 was released. If all goes according to plan, Firefox 7 will be out by the end of the year. Firefox has adopted the Google Chrome development model wherein version numbers are largely unimportant, and so just supporting \"the latest (publicly available) one\" is probably the best strategy. But how do you best test that? As my QA guys have pointed out, if you tell the client that you support \"the latest version\" but a version comes out that breaks your site, then you have a problem because now you've stated you support a web browser you don't. And since both Firefox and Chrome now update themselves automatically, the average person probably has no clue or care what version they're running. And having them either not upgrade or roll back is nontrivial. I'm finding there are a number of organizations that mandate their employees use IE (the head of IT subscribes to the Microsoft school of thought), or mandate their employees use Firefox (the head of IT subscribes to the IE-is-insecure school of thought), so Chrome updating constantly was a non-issue. But now that Firefox is a member of that club, I can see this becoming a bigger issue soon. My guess, in the case of Firefox, would be that the Aurora channel is the key, but what is the best way to approach testing it? Should we fix anything that comes up as an issue in Aurora, or should we wait until closer to the scheduled release? Do people automate this sort of thing?"} {"_id": "85975", "title": "Learning Erlang vs learning node.js", "text": "I see a lot of crap online about how Erlang kicks node.js' ass in just about every conceivable category. So I'd like to learn Erlang, and give it a shot, but here's the problem. I'm finding that I have a much harder time picking up Erlang than I did picking up node.js. With node.js, I could pick a relatively complex project, and in a day I had something working. With Erlang, I'm running into barriers, and not going nearly as quickly. So.. 
for those with more experience, is Erlang complicated to learn, or am I just missing something? Node.js might not be perfect, but I seem to be able to get things done with it."} {"_id": "47554", "title": "Why does there seem to be a lot of fear in choosing the \"wrong\" language to learn?", "text": "Perhaps it's just me, but as a current CS student I have already come across many questions on this site and elsewhere about not just \"Which language should I use for x?\" but also \"Does anyone still use language Y?\" My first CS class was taught in Scheme, which, if I'm not mistaken, isn't used widely (at least in comparison to languages like Java, PHP, Python, etc). Many of my classmates balked at the idea of having to learn a language they would never have to use again, but I don't quite understand where so much of this fear of learning less popular languages comes from. No, I may not use Scheme in any job I get, but I certainly don't regret having learned to use it (albeit in a very beginner, not very in-depth manner in that one semester). I am taking a search engines class this semester, which is done in Perl, and again I am seeing classmates complaining about the language choice. I can understand having a favorite language and disliking others, but why do some get worked up over learning it in the first place? Can you really learn the \"wrong\" language? Isn't learning something like Scheme or Haskell good mental exercise if nothing else, and useful at least for exposure to different ways of solving problems?"} {"_id": "125687", "title": "Can I use Ruby to automate everything?", "text": "I face various types of applications (web-based, GUI-based, command-line, etc.) on various platforms (Windows, Linux, etc.) to operate every day. There is a great opportunity for me to automate tasks by scripting. But almost every type of application and platform has its **native** scripting language or tools (such as VBScript and PowerShell for Windows, Bash scripts for Linux, Selenium for web applications, and AutoIt for GUI applications, etc.). It kills me to learn and maintain so many scripting languages. I have a feeling that Ruby can interoperate with various platforms easily, and it is very expressive. So my question is: 1. Is it possible to use Ruby to script everything? 2. If it is, what are the main disadvantages compared to the **native** scripting language of each platform?"} {"_id": "138837", "title": "When checking for transposed day and month values between two Dates - should comparing 11/11/2000 and 11/11/2000 return true or false?", "text": "Assuming a function with a signature of boolean isTransposed(Date date1, Date date2); Example outcomes:

    date1       date2       Outcome
    06/02/2000  02/06/2000  true
    02/06/2000  06/02/2000  true
    02/06/2000  null        false
    06/02/2000  02/06/1987  false
    11/11/2000  11/11/2000  true
    04/04/2001  04/04/2000  false

Granted the behavior can be documented, but what are your thoughts on `isTransposed('11/11/2000', '11/11/2000')` returning `true` versus `false`?"} {"_id": "102873", "title": "Should I be able to design good interfaces and produce quality content?", "text": "I'm just coming into my final year of university, so I'm going to start looking for jobs as a software developer, and I just wanted to know how important it is to be able to design good-looking interfaces and produce good quality content in a software development job? I'm only asking because I struggle to be creative due to my dyslexia, but if given a design to do (i.e. 
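For the isTransposed question above (138837), a sketch of one consistent reading: return true when swapping day and month in one date yields the other date in the same year. Under this reading, comparing 11/11/2000 with 11/11/2000 returns true - the swap is a no-op that still reproduces the date - which matches the question's table; require the dates to differ if "transposed" must mean a real data-entry slip. `LocalDate` replaces the question's `Date` purely for clarity.

    import java.time.LocalDate;

    // "Transposed" = same year, and day/month of one are month/day of the other.
    public class TransposedDates {

        static boolean isTransposed(LocalDate d1, LocalDate d2) {
            if (d1 == null || d2 == null) return false;
            return d1.getYear() == d2.getYear()
                    && d1.getDayOfMonth() == d2.getMonthValue()
                    && d1.getMonthValue() == d2.getDayOfMonth();
        }

        public static void main(String[] args) {
            System.out.println(isTransposed(
                    LocalDate.of(2000, 2, 6), LocalDate.of(2000, 6, 2)));   // true
            System.out.println(isTransposed(
                    LocalDate.of(2001, 4, 4), LocalDate.of(2000, 4, 4)));   // false
        }
    }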
a picture or description of some sort) I am able to do it to a high standard. Will this affect my career options?"} {"_id": "238113", "title": "Controller and Model Interaction", "text": "I'm fairly new to MVC and I'm trying to get a better understanding of it. There is endless information about the theory and general responsibilities of each part of an MVC app, which I've read a good deal of, but I find myself constantly second-guessing my implementation. I currently have my application set up so that my \"Model\" provides a number of methods that effectively do CRUD work on the application data. Methods like `add_document(name, path)` and `update_account(id, code, name, status)`. At this point it works, but it feels clunky, as I effectively have to decide whether I want to write a method to update every property of a model object, or whether I want one method that can handle updating any property for a given object. These methods all interact with model objects in SQLAlchemy (a Python db toolkit and ORM). Lately I've considered the possibility of passing ORM objects back and forth between the Model and Controller. The controller would change a property, then pass the object back to the Model, which would handle sessions, commits, and error conditions. While this seems so much easier than writing tons of CRUD methods, it also seems to break the division of responsibility by allowing the Controller to touch ORM objects at all... I was hoping to get some guidance here. Are a whole ton of CRUD operations normal in MVC apps? Is it acceptable to pass ORM objects between the Model and Controller - where the Controller only ever updates properties while the model handles session and database work? I'm also open to alternative ideas about how to separate responsibilities in an elegant way."} {"_id": "238115", "title": "Is often using int constants as parameters in communication between objects considered bad design?", "text": "An app I'm working on is designed with MVC. The components often interact with each other by passing int constants, which then have to be interpreted with `if` statements. I'm wondering if this is considered bad OO design. (Not specifically with MVC, but in general). For example: When the user clicks a specific button, the controller is told by the view `controller.somethingHappened(Controller.SOME_INT_VALUE);` The view uses a constant int value in the controller as a message to tell what happened. The controller then has to have a lot of `if` statements to interpret the message from the view. E.g.:

    public void somethingHappened(int message){
        if(message == AN_INT_VALUE) doSomething();
        if(message == ANOTHER_INT_VALUE) doSomeThingElse();
        if(message == A_DIFFERENT_INT_VALUE) doAnotherThing();
    }

As a result the controller passes another int value into the model, as a message to do something, i.e. `model.doSomething(Model.SOME_INT_VALUE);`, which forces the model to also have a bunch of `if` statements:

    public void doSomething(int message){
        if(message == AN_INT_VALUE) doSomething();
        if(message == ANOTHER_INT_VALUE) doSomeThingElse();
        if(message == A_DIFFERENT_INT_VALUE) doAnotherThing();
    }

This results in both the controller and model having a lot of `if`s to interpret the messages they receive. **Is this considered ugly and/or bad design?** Should I try to pass actual objects into methods, or make more specific method calls - instead of passing constants that have to be interpreted with a bunch of `if`s? 
**Or is this practice of using a lot of constants as messages considered reasonable?** If it is considered bad design, how can I avoid this? _(Please note: For the purpose of this question I'm also referring to enums)_."} {"_id": "78053", "title": "how to make the move from C++ to java", "text": "I have a few years' worth of professional C++ experience under my belt. Recently, I decided to look for a new job, and found out that finding a C++ job is not as easy as it used to be. Nearly all job postings and head hunters are in the market for Java/C# developers. Even though I have some experience with Java, it's not enough to make me interesting to any of the companies I applied for. I've decided that after I get a new job (C++, as it appears) I must get my hands dirty with Java or C#. Currently, my only possible plan is to either join or start my own open source project. But will it be enough? Can such a project replace 2-3 years of working in a commercial company? Is there another way to make the transition?"} {"_id": "238116", "title": "Describe business logic with diagrams", "text": "I am currently developing a web application for my thesis. I was asked by my professor to make diagrams to describe the business logic. Since I don't have any prior experience, I am pretty confused by all the terminology. I managed to clarify, I think, what business rules and business logic are, but I can't find out how you describe the business logic. Is it something particular or is it something more general? Do I need to learn UML? Does the fact that I use MVC affect the way I'll describe it?"} {"_id": "238119", "title": "Replacing string comparisons with dictionary lookups", "text": "Given 2 strings s1 and s2, if I do a simple equality check, it is considered to be O(n) when calculating algorithm efficiency. So, if I am using a brute force approach for the rotated substring question, the efficiency is O(n^2) (take the 2nd substring, rotate by 1 and check. Repeat till rotations = strlen or matched). However, a dictionary lookup is considered O(1). If I instead used a dictionary with the sole element having a dummy value and a key = s1, and then, instead of doing a string compare, I checked for the existence of s2 in the dictionary, wouldn't my complexity go down to O(n)? Intuitively it doesn't make sense to me, so I'm guessing one of my assumptions is incorrect..."} {"_id": "137385", "title": "What do you do when you can't seem to understand a certain part of programming?", "text": "I'm learning new languages as I go along, I write code for very basic programs in multiple languages, and I go to classes. I've read books, articles, lessons, videos, you name it, however I can't seem to get the hang of certain things. For example I never understood pointers - what they are good at. (NOT PART OF THE QUESTION - retagging with \"Pointers\" is not required...) * * * My question, however, is not what pointers do, but instead how can I understand things like that? If, after reading a book or an article about a certain part of programming, I don't understand, what do I do? Writing code using a certain feature of programming surely helps, however it doesn't actually help that much with understanding. The theoretical part is important in understanding."} {"_id": "173262", "title": "Gerrit code review, or Github's fork and pull model?", "text": "I am starting a software project that will be team AND community developed. 
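For the rotated-substring question above (238119), a short sketch of where the assumption breaks: hashing or comparing a length-n string is itself O(n) work, so the "O(1)" of a dictionary lookup counts hash/compare operations, not characters. The standard linear-time trick for rotations is to search for s2 inside s1+s1 (truly O(n) only with a linear-time matcher such as KMP; Java's `contains` makes no such guarantee, but is fine in practice).

    // Rotation check via the s1+s1 trick.
    public class Rotation {

        static boolean isRotation(String s1, String s2) {
            return s1.length() == s2.length() && (s1 + s1).contains(s2);
        }

        public static void main(String[] args) {
            System.out.println(isRotation("erbottlewat", "waterbottle"));  // true
            System.out.println(isRotation("abc", "acb"));                  // false
        }
    }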
I was previously sold on Gerrit, but now Github's fork and pull request model seems to provide more tools, ways to visualize commits, and ease of use. For someone who has at least a little experience with both, what are the pros/cons of each, and which would be better for a team-based project which wants to leave open the possibility of community development?"} {"_id": "173260", "title": "Microsoft Dev Centre accounts", "text": "Looks like Microsoft is offering a special deal of 95% off the yearly subscription for the Phone Dev Centre (I didn't say anything about desperate). What I was wondering is, do you need a separate account to publish to the Windows Phone app centre and the Windows App Centre? Also I heard some horror stories about the time it takes to get an application published on the Windows Phone marketplace; does anyone have any experience with this? Windows Phone Dev Centre Windows App Dev Centre"} {"_id": "45251", "title": "When would you use multiple languages in one project?", "text": "When would you consider incorporating another language into your project? Some domains are inherently multi-language (database interaction, interactive web client-side development), but others could be done with 1 language, and the idea of introducing another language could hinder understanding more than help it. Currently I have high performance code in C++ that also needs to do a fair amount of routine file and I/O manipulation, which would be easier and faster to write correctly in a higher-order language like Python. To be honest I don't know C++ extremely well when it comes to 3rd party libraries, including Boost."} {"_id": "109703", "title": "How to Avoid Fragile Unit Tests?", "text": "We have written close to 3,000 tests -- data has been hard coded, very little reuse of code. This methodology has begun to bite us in the ass. As the system changes we find ourselves spending more time fixing broken tests. We have unit, integration and functional tests. What I'm looking for is a definitive way to write manageable and maintainable tests. **Frameworks** * FakeItEasy * nUnit * AutoFixture"} {"_id": "109702", "title": "Complexity vs maintainability in modern hardware", "text": "Today, with modern hardware and memory coming cheap, how much sense does it make to spend effort analyzing algorithm or data structure complexity? Wouldn't it be better instead to focus on clean, maintainable, readable code than on optimizations for complexity? Note: I'm not talking about embedded systems or systems with similar kinds of constraints."} {"_id": "231431", "title": "UDP distributing/sharding methods", "text": "Currently I'm sending UDP messages to a server which handles and processes the message. To make the processing more scalable, I'd like to have some sort of autoscaling mechanism for the receiving servers. I have two ideas: 1.) Use a UDP load balancing technology 2.) Distribute the messages on the client side. For the second option, I would update the receiving server list on the client side using a background thread on the client, which would hit some sort of internal API and retrieve an updated list periodically. My question is, does this sound like a reasonable method? Also, would something like ZooKeeper be good for this? It would store the server list, which would be retrieved by the clients."} {"_id": "109706", "title": "I will be delivering a lecture on Android development tomorrow. 
Any topics I should cover?", "text": "I have been asked by my local Association for Computing Machinery chapter to give a lecture on Android development, and while I already have some topics to cover, I was curious if the community of SO had any suggestions of things that would be interesting to hear about. Thus far selected: * OpenGL * Using hardware features (camera, audio, accelerometer, multitouch) * Good UI practices * Social media integration"} {"_id": "246616", "title": "An algorithm for finding converse duplicates of ordered pairs", "text": "Given an array of ordered pairs of values, is there a well-known algorithm to find converse duplicates? [Converse meaning the same values, but in the opposite order.] That is, given [ab,ac,ad,bc,bd,ca,db] is there an efficient way to find ca and db, being the converse duplicates for ac and bd? The application is simple enough: the ordered pairs are edges in a directed graph, and if there is a converse edge then a single double-ended edge is to be drawn rather than one edge in each direction. The values are strings, being node names. It can be viewed as a lookup in a sparse array. Given coordinates (a,b), check whether (b,a) exists. However, common programming languages do not (appear to) provide sparse 2-D arrays. I have written a solution in Ruby using a hash-of-hashes, but it's about 20 lines of awkward code and an unsatisfying outcome. Writing the same code in a language like C# or Java would be longer. A better solution is sought, in any convenient language (or pseudocode). * * * I haven't attempted to define 'efficient' or 'better', and performance is not an overriding consideration for a drawing of a few hundred nodes. The nodes are not sorted, so the default algorithm would be, for each pair, to form the converse and brute-force search the preceding half. A binary search would require a prior sort. A solution based on hash indexing should be much faster."} {"_id": "171683", "title": "What's the best language to use for a simple windows 7 dynamic gui desktop app", "text": "[Note: I hope I am not breaking etiquette, but I originally posted a variant on this question here, but am re-asking here because I am making this now solely a question about programming.] I would like a program of the following simple form: 1. The user can produce X number of resizable frames (analogous to HTML frames). 2. Each frame serves as a simple text editor, which you can type into, and you can save the whole configuration including resized windows and text. 3. The user should be able to alternately \"freeze\" and present the information, and \"unfreeze\" and edit frames. Thus it will be a cross between a calendar and a text editor. I don't particularly care if it is a web application or not. What languages are ideal for such a setup? I know some C and Python and HTML, and am willing to learn others if need be. It seems to me this should be a relatively easy program to make, but I need a little direction before starting. EDIT: My OS is Windows 7."} {"_id": "148721", "title": "Fluid VS Responsive Website Development Questions", "text": "As I understand it, these form the basis for targeting a wide array of devices based on the browser size, given it would be time-consuming to generate different layouts targeting different/specific devices and their resolutions. * * * ## Questions: * * * * Firstly, right to the jargon: is there any actual difference between the two, or do they mean the same? * Is it safe to classify current development as mainly an HTML5/CSS3-based one? * What popular frameworks are available to easily implement this? * What testing methods are used in this regard? * What are the most common compatibility issues in terms of different browser types? * I understand there are methods like this http://css-tricks.com/resolution-specific-stylesheets/ **which does this come under?** * Are there any external browser detection methods, besides the API calls specific to the browser, that are employed in this regard? * * * **Points of interest [Prior Research before asking these questions]** 1. Why shouldn't \"responsive\" web design be a consideration? 2. Responsive Web Design Tips, Best Practices and Dynamic Image Scaling Techniques 3. A recent list of tutorials: 30 Responsive Web Design and Development Tutorials by Eric Shafer on May 14, 2012 * * * ## Update I've been reading that the basic point of designing content for different layouts to facilitate a responsive web design is to present the most relevant information. Now obviously between the smallest screen width and the highest we are missing out on design elements. I gather from here http://flashsolver.com/2012/03/24/5-top-commercial-responsive-web-designs/ the top-of-the-line design layouts (widths) are * desktop layout (980px) * tablet layout (768px) * smartphone layout \u2013 landscape (480px) * smartphone layout \u2013 portrait (320px) Also we have a popular responsive website testing site http://resizemybrowser.com/ which lists different screen resolutions. I've also come across this while trying to find out the optimal highest layout size to account for http://stackoverflow.com/questions/10538599/default-web-page-width-1024px-or-980px which brings to light that 1366x768 is seemingly a popular web resolution. * Is it safe to assume that just accounting for proper scaling from width 980px onwards to the maximum size would be sufficient to accommodate this, given we aren't presenting any new information for the new size? * Does it make sense to have additional information (which conflicts with the purpose of responsive web design) to utilize the top size and beyond?"} {"_id": "39302", "title": "How to handle people who lie on their resume", "text": "> **Moderator comment** > > Please note that this is a **two year old** question that has just been > migrated from Stack Overflow. Please take your time to read **all** the > answers and ask yourself \"would my answer add anything to this?\". I'm conducting technical interviews to fill a few .NET positions. Many of the people I interview really _do_ know .NET pretty well, but I find at least 90% embellish their skillset anywhere between \"a little\" and \"quite drastically\". Sometimes they fabricate skills relevant to the position they're applying for, sometimes they don't. Most of the people I interview, even the most egregious liars, are not scam artists. They just want to stand out among the crowd, so they drop a few buzzwords on their resume like \"JBoss\", \"LINQ\", \"web services\", \"Django\" or whatever just to pad their skillset and stay competitive. (You might wonder if a person who lies about those skills is just bluffing their way through a technical interview. My interviews involve a lot of hands-on coding and problem-solving - people who attempt to bluff will bomb the hands-on coding portion in the first 3 minutes.) These are two open-ended questions, but it would really help me out when I make my recommendations to the hiring managers: 1. 
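For the converse-duplicates question above (246616), a minimal sketch of the hash-indexing idea the poster suspects should be faster: one pass with a set of the pairs seen so far, checking each pair's reverse - expected O(1) per edge, no sparse 2-D array or hash-of-hashes needed. Joining the two node names with an unprintable separator is just an illustrative way to get a hashable pair key.

    import java.util.*;

    // Single-pass converse-duplicate detection using a HashSet of seen pairs.
    public class ConverseDuplicates {

        static List<String[]> findConverse(List<String[]> edges) {
            Set<String> seen = new HashSet<>();
            List<String[]> converse = new ArrayList<>();
            for (String[] e : edges) {
                if (seen.contains(e[1] + "\u0000" + e[0])) converse.add(e);
                seen.add(e[0] + "\u0000" + e[1]);
            }
            return converse;
        }

        public static void main(String[] args) {
            List<String[]> edges = List.of(
                    new String[]{"a","b"}, new String[]{"a","c"}, new String[]{"a","d"},
                    new String[]{"b","c"}, new String[]{"b","d"},
                    new String[]{"c","a"}, new String[]{"d","b"});
            for (String[] e : findConverse(edges))
                System.out.println(e[0] + e[1]);   // prints ca then db
        }
    }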
**Regarding interviewing etiquette, should I attempt to determine whether a person really possesses all of the skills they claim to have? Can I do this without making the candidate feel uncomfortable?** 2. **Regarding the final decision, should I recommend candidates who are genuinely qualified for the positions they're applying for, even if they've fabricated portions of their skillset?**"} {"_id": "233498", "title": "Logging in a latency sensitive system", "text": "Requirements: 1. My application is latency sensitive. Millisecond level responsiveness matters. Not all the time, but when it acts, it needs to be fast. 2. My application needs to log information about what it's done in those times. 3. The actual writing of the logs does not need to be that fast, just the actions. * However, the log files need to be written at human speed; in other words, I cannot set `immediateFlush` to false. So, obviously, certain kinds of logging are getting offloaded to another thread. I am using logback as a framework, but for certain sorts of log messages I still want to offload the work to another thread. Here is how the system currently works: * Each part of the latency sensitive system gets a particular logger interface injected. Each of these log methods has a signature specific to the sorts of things I know the logger will need to log. * A `SimpleLogger` implementation is written for each case. This writes to the log on the same thread. * I have also written a `ThreadedLogger` which implements ALL the logging interfaces, and gets a \"backing\" logger implementation injected for each sort of logger. * Whenever a log method is called in `ThreadedLogger`, it wraps the request in a `SomeLogObject implements LogCommand`, throws the object into a `LinkedBlockingQueue`, and returns. This is similar to the GoF Command pattern. * There is a consumption thread that blocks on `BlockingQueue.take()` waiting for log objects to come in. When they do, it calls `LogCommand.execute()`, which calls the appropriate method in the backing `Logger` in `ThreadedLogger`. * Currently the `LogCommand` implementations are very stupid. They just call the appropriate method in the proper injected logger. I am trying to decide if I should refactor this system. Currently, if I need to create a new place to do these offloaded logs, I have to create: * A new interface * A new implementation of this interface * Two new DI bindings (one for the simple logger, one for the `ThreadedLogger` backer) * A new `LogCommand` implementation * A new method for creating this `LogCommand` object * Code for injecting and storing as a field the appropriate logger in `ThreadedLogger` It seems to me that it would be a lot simpler to offload the creation of the `LogObject` to the calling threads, but I am concerned that I'm exposing too much of the internal workings of the log system if I do that. Not that this is necessarily a problem, but it seems unclean. Another possibility would be to combine the functionality of the simple logger implementations with their respective log objects, so the objects change from being \"I am an object that does logging when given log data\" to \"I am a log event that knows how to log myself\", and these objects can be created by a log factory."} {"_id": "233497", "title": "Is it bad software design to mix JDBC SQL calls and use an ORM in an application?", "text": "Is it bad software design to have JDBC/raw SQL and also use an ORM? 
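For the latency-sensitive logging question above (233498), a sketch of the simplification the poster is circling around: since every `LogCommand` just replays one method call on a backing logger, the queue can carry plain `Runnable`s (closures over the log data) instead, which removes the per-event command classes and their DI bindings. The names and the sample log method are illustrative, not the poster's real API.

    import java.util.concurrent.*;

    // Hot path enqueues a closure; a daemon thread drains and performs the I/O.
    public class AsyncLogger {

        private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

        public AsyncLogger() {
            Thread consumer = new Thread(() -> {
                try {
                    while (true) queue.take().run();    // replay events off the hot path
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // shutdown requested
                }
            }, "async-logger");
            consumer.setDaemon(true);
            consumer.start();
        }

        // hot path: capture the data, enqueue, return - no formatting or I/O here
        public void orderShipped(long orderId, String sku) {
            queue.offer(() -> System.out.printf("shipped order=%d sku=%s%n", orderId, sku));
        }
    }

The trade-off is the one the poster names: the caller's thread now constructs the event object, so the queue's existence leaks slightly into the call sites, in exchange for one queue type serving every logger.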
I don't mind using an ORM for boilerplate CRUD, but some queries are better in SQL (for me). I'm not arguing for or against ORMs. I've read about them until I'm blue in the face. I used JDBC in my question, but it could have been DBI, or some other database interface. The ORM can be any of them. How about maintainability? Would mixing the strategies complicate it? To me, maintaining an application efficiently becomes even blurrier if it is a compiled language vs. a scripting language. I have to recompile Java. I don't have to recompile Python or PHP. If my SQL is in a model layer, does that change anything? EDIT: If I find a point where I need to use SQL in my ORM, in my model, what kind of data is sent back? I won't be able to use something like person.save('somevalue'), for example, will I?"} {"_id": "173191", "title": "Solutions for Project management", "text": "* The team consists of **3** people. * The method of development is **Scrum**. * The language of the project is **C++**. * The project will be under the control of the **git** system. * The start-up budget is **0**. **The following things have to be chosen:** 1. Build and Version Numbering 2. Project documentation ( a file with the most common info for the current stage of the project, which will be changed every time a new version or subversion of the project emerges ) 3. Project management tool ( like Trac or Redmine; I cannot use them, because there is no hosting ) 4. Code documentation ( I am considering Doxygen ) **The following questions have arisen:** 1. What can you add to the above list of the main solutions for project management in the described project? 2. One of the three project participants runs Linux (no MS Office), one has Windows and MS Office (does not want to use Libre or Open Office), one has Windows, but does not have MS Office. What formats and tools can you suggest using for project documentation? The option of using an online wiki does not fit; it must be files. 3. OneNote may be a good tool for project management, but because of the reason mentioned above it is not possible. What can you advise? 4. Suggest a system for Build and Version Numbering."} {"_id": "173193", "title": "Open source library, can the project owner change the license to be more restrictive?", "text": "A company releases a library with an open source MIT license. If they wanted to, could they change the license to be very restrictive so competitors cannot use it? What impact would this have on previous versions? Meaning if on Nov. 1st they make it very restrictive with some other license, would all versions prior to Nov 1st still be on MIT?"} {"_id": "173192", "title": "How would I implement this application idea?", "text": "I am a D&D gamer and a developer who has mostly worked with ASP.NET applications professionally. I have written some chat bots in Node.js, and I have only played a little with PHP but wrote nothing serious. I have had inspiration to create a site that allows a person to keep track of characters (aka the character sheet). I am thinking of using this as a learning opportunity to learn noSQL and to write a full JavaScript front-end. I want this application to save the value as I change it. So if I edit the armor class, it is saved immediately instead of waiting until I hit the submit button. I think that will make it easier to use while gaming and keep me from losing anything because I forgot to save a change. I have never done anything like this. How do you implement this style of application? 
Is there a tutorial or howto to get me on the right path? While I would really like to use ASP.NET, I don't have a Windows server to publish on (and I really can't afford to pay for a service). What language that runs on Linux would work well for this type of application? Note: I feel NoSQL would work in this case because of the sheer number of tables required to create something like this in SQL."} {"_id": "173194", "title": "Is there a way to add unique items to an array without doing a ton of comparisons?", "text": "Please bear with me; I want this to be as language agnostic as possible because of the languages I am working with (one of which is a language called PowerOn). However, **most languages support for loops and arrays.** Say I have the following list in an array: 0x 0 Foo 1x 1 Bar 2x 0 Widget 3x 1 Whatsit 4x 0 Foo 5x 1 Bar Anything with a 1 should be uniquely added to another array with the following result: 0x 1 Bar 1x 1 Whatsit Keep in mind this is a very elementary example. In reality, I am dealing with tens of thousands of elements in the old list. Here is what I have so far. Pseudo code: For each element in oldlist For each element in newlist Compare If value of oldlist.element equals newlist.element, break new list loop If reached end of newlist with nothing equal from oldlist, add value from old list to new list End End **Is there a better way of doing this? Algorithmically, is there any room for improvement?** And as a bonus question, what is the big-O notation for this type of algorithm (if there is one)?"} {"_id": "229942", "title": "Keeping data consistent across ACID database and a bunch of blobs or files", "text": "I have a database that is ACID. And I have a different data storage that is not relational or ACID compliant. Let's say it's just a bunch of blobs or files. I want to make sure that these two data stores are in sync, so if someone creates a new file the database has a record created. If someone deletes the file the record is deleted, and the other way around. One operation cannot happen without the other one happening as well: \"if someone wants to put a book in the library they have to put the book in the index\". How to make sure this happens? If I create a process that remembers one action and expects the other one, what happens if the process dies before completing the second action? Should I create some kind of a write-ahead log that says \"I am about to do these two things, if you are reading this I died trying to do X, please undo Y\"?"} {"_id": "88356", "title": "OpenGL CPU vs. GPU", "text": "So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader was actually faster!
This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the \"CPU + vertex shader\" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!"} {"_id": "88359", "title": "Is the Spark View Engine for ASP.NET dead?", "text": "Is the Spark View Engine for ASP.NET dead? I'm considering using it for a new project. I tried to get approved on the Google Group but nobody would approve me. The last message posted there was in May. I tried emailing the developers but nobody would reply back. I'm not having happy feelings about using Spark for a major project of mine at the moment. Is this project now dead, especially after Razor came out?"} {"_id": "229949", "title": "How to handle field-level Access Security when using Entity Framework", "text": "Scenario: a large existing system (~300 tables, 500 stored procs, 200 views and a code base of several 100k lines) with most security-level stuff in stored procedures needs to be refactored (for maintainability reasons and availability of skills, more will likely be moving to the C# layer; we are also hoping for performance, since we'll be able to better control what gets pulled and when). Entity Framework is something we are seriously considering to make things more easily extensible (inheriting the backend schema from a base class, for example, without having to track down a massive join yourself each time). Question: how do you handle security with Entity Framework? The examples I've seen were just about how to get your model/data model to handle service-wide security (tokens for \"can this guy log in?\" types of things). How can you say a normal user can see these 3 fields on a class but an admin can see these 10? These fields could logically belong to other classes'/tables' data (e.g. a particular customer's orders). How about things like \"this post is read\"? Do you just add a list of \"haveRead\" people to the class or is there a smarter way to get EF to return different versions of the same object depending on who you are? Is there a way to get EF to do this for you without needing a lot of logic in stored procs? If not, how do you manage performance (say a person can see a single object and you hit the model for a list of objects, then do the filtering higher up in C#, meaning you might be getting thousands of items but only passing 1 on to the client)? Can you lazy-load individual fields so that if only weak users are making requests, all the admin fields don't get pulled over from the database?"} {"_id": "177685", "title": "What kind of specific projects can I do to master bitwise operations in C++? Also is there a canonical book?", "text": "I don't use C++ or bitwise operations at my current job but I'm thinking of applying to companies where it is a requirement to be fluent with them (on their tests anyway).
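For context, my current level is just the basic single-bit operations, something like the following (written out in C# here, but the operator syntax is identical in C++; the helper names are my own):

static class Bits
{
    public static int SetBit(int value, int n)    { return value | (1 << n); }    // force bit n to 1
    public static int ClearBit(int value, int n)  { return value & ~(1 << n); }   // force bit n to 0
    public static int ToggleBit(int value, int n) { return value ^ (1 << n); }    // flip bit n
    public static bool IsBitSet(int value, int n) { return (value & (1 << n)) != 0; }

    // A classic test-question trick: a power of two has exactly one bit set,
    // so clearing the lowest set bit of a positive power of two yields zero.
    public static bool IsPowerOfTwo(int value)    { return value > 0 && (value & (value - 1)) == 0; }
}
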
**So my question is:** Can anyone suggest a project which will require fluency in bitwise operations to complete? On a side note, is there a canonical book on optimization techniques using bitwise operations, since that seems to be an important use of them?"} {"_id": "177684", "title": "What are the benefits vs costs of comment annotation in PHP?", "text": "I have just started working with symfony2 and have run across comment annotations. Although comment annotation is not an inherent part of PHP, symfony2 adds support for this feature. My understanding of commenting is that it should make the code more intelligible to the human. The computer shouldn't care what is in comments. What benefits come from doing this type of annotation versus just putting a command in the normal PHP code? i.e. /** * @Route(\"/{id}\") * @Method(\"GET\") * @ParamConverter(\"post\", class=\"SensioBlogBundle:Post\") * @Template(\"SensioBlogBundle:Annot:post.html.twig\", vars={\"post\"}) * @Cache(smaxage=\"15\") */ public function showAction(Post $post) { }"} {"_id": "228093", "title": "Good idea to put bug numbers in a comment in the beginning of the source file?", "text": "Is it a good practice to put bug numbers in the file itself inside a header comment? The comments would look something like this: MODIFIED (MM/DD/YY) abc 01/21/14 - Bug 17452317 - npe in drill across in dashboard edit mode cde 01/17/14 - Bug 2314558 - some other error description It seems helpful, but is it considered bad practice?"} {"_id": "159468", "title": "Studies on how well can a programmer understand code in unfamiliar languages?", "text": "Are there any serious studies on how well an experienced programmer who knows language X can understand code written by a competent programmer using language Y, for a good range of widely used languages as X and Y? Of course the real world isn't so simple as a programmer knowing just one language. What we'd like to know is: if we do our project in, say, C#, and someday some old physicists who know only Fortran and Algol look at it, to what extent would it make sense to them? Mathematical parts of it might read fine to them, if they ignore what to them are some random-ish punctuation marks. Or, would a Python expert be able to find flaws in my clever Ruby script? There could be issues from the level of superficial syntax to the level of grand concepts such as objects, template metaprogramming, functional programming and so on. I'm not expecting the one programmer to fully understand every syntax detail of code in a \"foreign language\" or to follow the religion of some grand concept, but wondering to what extent they'd get the main flow of control, find the spot where something is drawn on the screen and what determines its color or size, verify that a robot programmed to drive a car will turn off the engine when it's done, those sorts of things. A good-quality source would be published academic research or an official report from some industry group or major software company, though I'll take systematic, unbiased observations by experienced leaders of workshops and classes, or other sources. Not interested in short blogs, single-case examples, or anecdotes. (Well, maybe a few anecdotes if they make for a good read.)"} {"_id": "159465", "title": "Web services and business rules engines", "text": "We have a web service that takes as input different types of messages. The function of the web service is merely to write the content of the messages to a database.
There is about one table (with foreign keys to others) for every kind of message. We have been asked to transform the web service into a configurable product by using a business rules engine. To have full configurability, a colleague suggested that the best way could be to transform the web service from one that accepts many types of messages into one that accepts a single type of message, where the content includes a field that indicates the type of the content. This way, there would be just one table for the messages (plus other tables for static information), and it would be more configurable using business rules engines. What is the best way to tackle such a situation? **Update:** for the business rules engine we are very likely going to use an implementation of the Rete Algorithm, like Drools"} {"_id": "159461", "title": "Industries and types of projects avoiding OO", "text": "Seems like OO is everywhere. What if someone just doesn't grok it beyond a simple level, no matter how many books, workshops, kindly mentors, etc. they've had? Like some people are tone-deaf and can't carry a tune in a bucket, there seem to be some who just aren't going to get by within a project based on deep and sophisticated object designs. Yet these are smart people who surely would be of great benefit in some role in some organization. What types of projects in what industries are largely free from OO, and can be expected to stay that way for the next several years?"} {"_id": "100506", "title": "One global HashMap vs. many local HashMaps?", "text": "Which is more efficient; which is faster? Trade-offs? The goal is fast look-ups in a web application. UUIDs are the keys, so a global map will work. Approx 50 million values. A global cache is definitely more manageable than many smaller ones, even using JMX. But I had a hunch that maybe the total overhead of the hashtable's internals would somehow be less with one global hashtable vs. many spread out through various classes. However, I cannot find any evidence of this. Edit: Please back up your claim, whether it is that the overhead is minimal or that it is significant. Maybe this should be moved to the Computer Science forum?"} {"_id": "218834", "title": "Language with syntactic sugar that translates to C++ that looks \"hand-written\"", "text": "I'm a college student and I have homework in C++. My professor wants separate hpp and cpp files for each C++ class. And it just didn't feel good how much I had to type, and how much I had to click to create two new files for every class. I haphazardly put together a simple Python script that would read a single file and generate all the needed classes (the easy stuff like class name, list of members, and list of things to #include I sprinkled with syntactic sugar, but the hard stuff to parse, like typenames and function bodies, I copied and pasted for the most part, except my script pokes around in a very primitive way to qualify method signatures with the class name when it generates the implementation file). And I felt pretty good about it. But I realized that if a lot of other people have worked on a similar problem in the past, they probably could have done it better. I know that Chicken Scheme translates to C, and so do Vala and Genie, but from what I understand, with these sorts of translators, you don't really have full control over the C output.
Do you know of any translators that output C++ and largely keep the semantics of C++ intact, so that the output C++ files look handwritten, but the source language is just loaded with wonderful syntax that makes life easier? Maybe even have the language mess with the body of functions, so that if I typed something like `vector<int> v = [1,2,3,4,5]` it would become `vector<int> v; v.push_back(1); v.push_back(2); ..` in the generated source? Maybe have braces automatically inserted based on indentation? Have newlines serve as semicolons?"} {"_id": "218833", "title": "History and application areas of the 'pull'/'push' programming paradigms", "text": "Recently, reading a lot of material on Reactive Programming, one thing frequently mentioned is the 'PUSH' versus 'PULL' style of application (event driven / reactive programming). Some of the authors say that both these styles have their areas of application, but aren't most application domains we are trying to model changing rapidly in the real world, and thus isn't event driven more appropriate? And why did we start from the 'PULL' approach and are now turning to the 'PUSH' approach; are there specific reasons causing the change?"} {"_id": "103095", "title": "Why is there a testing team?", "text": "> **Possible Duplicate:** > Why to let / not let developers test their own work Developers develop the code and software. Yes, that means they know very well what they have done. But why is there always a separate department in IT companies called the testing department (mainly for testing the developers' output)? Why isn't a developer **testing** his or her own code? Does the testing department know the developers' code very well, and if so, how come? Thanks in advance."} {"_id": "77143", "title": "Is it possible to insert the hash of an executable into the executable itself without changing the resulting hash? What's the best alternative?", "text": "I've been thinking about this problem recently and it really doesn't seem like it's possible, but I figured that I'd ask just in case. Assume a simple program, in pseudocode: run hash on an executable we think is mimicking the true executable check the value against the hash that's hard-coded into this executable if it matches, do x if not, do y In theory it should be quite secure, since only the true executable would be able to produce the same hash that's coded into the executable, but the minute that hash is compiled into that executable the hash would change. Do you think it's even possible to hard-code the resulting hash in any realistic way? On to the question that has a more likely answer: what would be the most secure way to make sure that an executable is in fact the same executable that was distributed with a program and not some impostor executable trying to hijack the system?"} {"_id": "246612", "title": "How to handle enums in an indirection with function pointers in ANSI C?", "text": "Hi all, I am somehow stuck in a design problem. The language is ANSI C. Let's assume we have a tinkerbox of software modules: * one module for the logic, **Logic** * (at least) one module doing some logging, **Logger** * two modules, both giving a \"frame\" to let the program run, let's say * one with a GUI * one for the command line * ... Therefore, the same logic could be used in a command-line and a graphical version of the software. The **Logic** has to log some errors but should not know anything about the specific logger, as it could be dependent on the \"frame\".
It is obvious to give **Logic** a function pointer that has to be filled in by the frame to bind the used **Logger** to the **Logic**. In the **Logging** module (all code pseudo-ANSI C): void Logger_Log(char *sLogText) { //do some stuff } In the **Logic** module: /* fallback logger, used until the frame injects a real one */ void Logic_PseudoLog(char *sLogText) { printf(\"%s\", sLogText); } void (*Logic_Log)(char *sLogText) = &Logic_PseudoLog; void Logic_SetLogger(void (*LogFct)(char *sLogText)) { Logic_Log = LogFct; } In the GUI/command-line frame: #include \"Logger.h\" #include \"Logic.h\" Logic_SetLogger(&Logger_Log); Now I want to introduce different severity levels for logging and implement them as an enum in **Logger**: //Logger.h: typedef enum { DEBUG, INFO, ERROR } teLogLevel; void Logger_Log(char *sLogText, teLogLevel eLevel); And here the problem arises: the function pointer in **Logic** needs to have the correct signature. To do so, it has to know about `teLogLevel`. Therefore **Logic** has to know about the **Logger**, exactly the case I wanted to avoid with the indirection in the first place. #include \"Logger.h\" void (*Logic_Log)(char *sLogText, teLogLevel eLevel); The situation as laid out is just an example. Please don't solve it by saying something like \"use `int` instead of `enum`\" or \"build three functions for the levels\". The bottom-line question is: **How to handle enums in an indirection with function pointers in ANSI C?** **How to \"inject\" enums into a module that should not know about the origin of the enums?**"} {"_id": "201073", "title": "Easily add metrics to measure Java code usability", "text": "I'm trying to create a process to better understand what's happening in my code. I want to create metrics that automatically give me answers to simple or complex questions like: 1. How many times was a URL clicked? (How many requests arrived at a certain servlet method?) 2. How many times did a certain user request the same page? 3. How many requests are pending in a queue on average? and so on... Is there an easy way to do this automatically and elegantly (for example, an attribute like @CountHits would be great)? I found this open source project: http://metrics.codahale.com/getting-started/#reporting-via-jmx But it's too coupled to the code. Not so elegant :/"} {"_id": "201070", "title": "Best way to set up a dev environment for Node.js using AWS s3?", "text": "We are getting ready to port part of our app over to node.js, and are looking for a way to support s3 uploads and testing in our development environment. Right now we are thinking about setting up test buckets (i.e. 'myProductionBucket-test'), setting this in our dev environment configuration, and then creating a lifecycle rule to delete content after 24 hours. This seems clunky though; I'm wondering if there are local alternatives we could run on our dev boxes that might work better. Also, we're leaning towards node-config vs. node-convict or just loading JSON. Any thoughts there are also greatly appreciated. **Edit:** We've looked at https://github.com/jubos/fake-s3, and also thought about just mocking for tests, but it would be handy to put and retrieve the same files, since that's the basic function of the app. It seems crazy to pay Amazon for running dev/test and production."} {"_id": "4000", "title": "How involved should our employers be in our education?", "text": "I consider myself a Software Craftsman. I like to attend local user groups and events to learn about new technologies as well as network with other software craftsmen.
We love to talk about what we're doing, what we're learning, and in general how to get better at the things we like. Some of these meetups and events are free; others are not, and they sometimes take place during working hours (9am - 5pm). My work generally did not allow me to attend events during office hours unless it directly had something to do with what we were currently implementing/using. Nor did they provide any support for software resources/plugins that could help boost productivity. For the most part, I've been in charge of educating myself and actually purchasing my own software tools (example: ReSharper). My question is: **How involved should our employers be in providing software developers with tools, resources, and general education on up-and-coming technologies?** Should they provide MSDN subscriptions for their developers to install software at home? Should they pay conference fees so developers can learn about new technologies?"} {"_id": "4001", "title": "What are off-shore resources' experiences like working for foreign companies?", "text": "Are you an off-shore coding resource for a foreign company? What are the challenges of working with foreign companies? What helps make the project more successful?"} {"_id": "68682", "title": "How do you remember encapsulation types for effective use?", "text": "I've been attempting to learn C#.NET for the past month or so, and the array of ideas that always seems to trip me up is encapsulation. As this is one of the three pillars of OOP, I feel that I am operating at a loss for not understanding its use and implementation more clearly. When learning, though, it is often useful to develop or assimilate mnemonics to assist in maintaining all this knowledge. Having a good reference manual on hand is one thing, but keeping a functioning base of understanding is another entirely. When keeping track of whether a type/method is `public`, `private`, `protected`, `static`, or `sealed`, I find myself wondering what and why all at the same time. My question, then, is **how do you go about remembering encapsulation keywords and when to use them**? Trial and error is what is working for me now as a student, but I would hope to move beyond that before making professional use of this skill."} {"_id": "253851", "title": "Ninject/DI: How to correctly pass initialisation data to injected type at runtime", "text": "I have the following two classes: public class StoreService : IStoreService { private IEmailService _emailService; public StoreService(IEmailService emailService) { _emailService = emailService; } } public class EmailService : IEmailService { } Using Ninject I can set up bindings, no problem, to get it to inject a concrete implementation of IEmailService into the StoreService constructor. StoreService is actually injected into the code-behind of an ASP.NET WebForm like so: [Ninject.Inject] public IStoreService StoreService { get; set; } But now I need to change EmailService to accept an object that contains SMTP-related settings (that are pulled from the ApplicationSettings of the Web.config). So I changed EmailService to now look like this: public class EmailService : IEmailService { private SMTPSettings _smtpSettings; public void SetSMTPSettings(SMTPSettings smtpSettings) { _smtpSettings = smtpSettings; } } Setting SMTPSettings in this way also requires it to be passed into StoreService (via another public method). This has to be done in the Page_Load method in the WebForms code-behind (I only have access to the Settings class in the UI layer).
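What I think I want is for the container itself to supply the settings through the constructor, something like this in the composition root (a hypothetical sketch -- the SMTPSettings property names and config keys are my guesses, not code from my project):

using System.Configuration;
using Ninject;

// EmailService takes its settings through the constructor instead of a setter.
public class EmailService : IEmailService
{
    private readonly SMTPSettings _smtpSettings;

    public EmailService(SMTPSettings smtpSettings)
    {
        _smtpSettings = smtpSettings;
    }
}

// In the UI layer, where Web.config and the Settings class are visible:
var settings = new SMTPSettings
{
    Host = ConfigurationManager.AppSettings[\"SmtpHost\"],             // assumed key name
    Port = int.Parse(ConfigurationManager.AppSettings[\"SmtpPort\"])   // assumed key name
};
kernel.Bind<SMTPSettings>().ToConstant(settings);
kernel.Bind<IEmailService>().To<EmailService>();
kernel.Bind<IStoreService>().To<StoreService>();
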
With manual/poor man's DI I could pass SMTPSettings directly into the constructor of EmailService and then inject EmailService into the StoreService constructor. With Ninject I don't have access to the instances of injected types outside of the objects they are injected into, so I have to set their data AFTER Ninject has already injected them, via a separate public setter method. This to me seems wrong. How should I really be solving this scenario?"} {"_id": "170554", "title": "How to check if my open source idea already exists?", "text": "I've got a great idea for an open source project. What should I do before starting, to make sure that I'm not duplicating anyone's efforts? Of course I've googled, but which keywords to use isn't obvious. Is there a site to discuss such ideas? To address a common answer: My goal is not to be the author of a successful project. It is to make sure that this product exists, and can be improved. (There's already a similar commercial venture, which is of low quality and not being developed.) Thus, it won't help me to have the only project that can be found in this niche. If there's a start out there, I'd like to find it and help."} {"_id": "19425", "title": "Static Create method -- pros and cons compared with constructors", "text": "What are the pros and cons of having static object creation methods over constructors? class Foo { private Foo(object arg) { } public static Foo Create(object arg) { if (!ValidateParam(arg)) { return null; } return new Foo(arg); } } A few that I can think of: Pros: * Return null instead of throwing an exception (name it `TryCreate`). This can make code more terse and clean on the client side. Clients rarely expect a constructor to fail. * Create different kinds of objects with clear semantics, e.g. `CreateFromName(String name)` and `CreateFromCsvLine(String csvLine)` * Can return a cached object if necessary, or a derived implementation. Cons: * Less discoverable, more difficult to skim code. * Some patterns, like serialization or reflection, are more difficult (e.g. `Activator.CreateInstance()`)"} {"_id": "19427", "title": "Which Open Source License to choose for an ASP.NET MVC 3 OpenId StarterKit?", "text": "I've built an ASP.NET MVC 3 (RC at the moment) site that uses an OpenID login system. I was still learning about OpenID while implementing this, so I commented the code heavily. The result is a site that lets users log in/register with OpenID, add other OpenIDs to their account and also remove them. This little project can then be used as a starting point for any new project that would use an OpenID login system. It can also be used as a resource for people to learn OpenID with. I decided to release this project as open source. This will be my first open source project and I need to decide what license to use. I want people to be able to use this for any purpose they wish. They can learn from it, use it for commercial or non-commercial projects and make their own forks of the code. It would also be nice for others to be able to contribute back to the project with stuff like bug fixes on sites like GitHub. But I'd like to be the copyright owner of the code that is under my control. For example the code that is in my GitHub repository (I'll call this the main code base). I've heard that for this I need to get every contributor who adds code to this code base to give me the copyright for their contribution. How exactly does this work? I also use other licensed (mostly open source) resources in my projects.
Here's their list and their licenses: * DotNetOpenAuth (Ms-PL) * T4MVC (part of MvcContrib, which is licensed under the Apache License 2.0) * ASP.NET MVC (Ms-PL) * ADO.NET Entity Framework CTP4 (I couldn't find a license) I of course want to use the main code base for any type of project I want. Commercial, non-commercial, open source, ... So I have some very important questions here: 1. Which license should I use? I think GPL or LGPL is not suitable here. I was looking at Apache 2, New BSD, MIT and Ms-PL. Ms-PL seems to be a good fit, but I'm not sure. 2. What restrictions and/or obligations do I have towards the resources I use in this project? I think I read somewhere that I have to add -LICENSE.txt for Ms-PL resources. Is that true? How does this work for Apache 2 and other licenses? What do I have to do if I modify any of these resources' code and then use that in my project? 3. I'd also really like an \"as-is\" clause in the license, so people can't sue me if something goes wrong while they're using my code. 4. Do I need to add anything to my files to make clear what the license is? If so, how do I format that? Also one last thing. If I decide to make a Visual Studio template out of these samples, how do I license that?"} {"_id": "136762", "title": "How does bit flipping / complementing work?", "text": "I am currently learning about bitwise operations, so bear with me. I understand AND, OR, and shifting. What I don't understand is bit flipping. So, `5` is `0101`. When someone says to me \"flip those\", it would result in `1010`, which is `10`. So why does `~5` result in `-6`? I think I got this very wrong. Can someone enlighten me? (I am putting this here because I think it is a general programming question and not a specific one, which would belong on Stack Overflow.) **Edit 1** Thanks to mcfinnigan's comment I learned that there is a \"most significant bit\" that defines if a number is positive or negative (correct me if I'm wrong). This would explain why positive results end up negative and vice versa. However, IMO the example above would result in `-10` then. **Edit 2** Thanks to Don 01001100 and S.Lott, I finally got it."} {"_id": "239036", "title": "How are negative signed values stored?", "text": "I was watching this video on the maximum and minimum values of signed integers. Take an example of a positive signed value - 0000 0001 The first bit denotes that the number is positive and the last 7 bits are the number itself. So it is easily interpreted as +1. Now take an example of a negative signed value - 1000 0000 which comes out to be -8. Okay, the computer can understand that it is a negative value because of the first bit, but how the hell does it understand that 000 0000 means -8? In general, how are negative signed values stored/interpreted in a computer?"} {"_id": "235431", "title": "Is there any performance benefit in checking the item count prior to executing a foreach loop?", "text": "I saw this in code and was wondering if there is any performance benefit to checking the item count prior to looping: if (SqlParams.Count > 0) foreach (var prm in SqlParams) cmd.Parameters.Add(prm); I always prefer to do a `null` check instead and let the `foreach` loop just hop out if there are `0` items.
if (SqlParams != null) foreach (var prm in SqlParams) cmd.Parameters.Add(prm); Isn't that the better way?"} {"_id": "185051", "title": "CQRS with Repository pattern and Inversion of Control (with DI)", "text": "I assigned a POC project to someone where I asked them to implement Command Query Responsibility Segregation, Inversion of Control (with Dependency Injection) and the Repository pattern. \u201cSomeone\u201d gave me a POC solution project but I am not sure whether this is the way it is done. Here is a brief of the POC project: * The project is a simple 3-tier application \u2013 the Presentation Layer (PL), the Business Logic Layer (BLL) and the Data Access Layer (DAL); each tier being a separate project * The Presentation Layer is a Web Application; the BLL and DAL are class library projects * In the Business Layer, Repository Interfaces are defined. A reference to the BLL library is added to the DAL project, and inside the DAL project there are concrete classes that implement the Repository Interfaces. This is how Inversion of Control is applied * Since **Command-Query-Responsibility-Segregation** is done, the repository interfaces in the Business Layer only declare Add/Update and Delete methods. For reads, there are \u201cRead\u201d interfaces directly in the DAL, and in the DAL there are concrete classes that implement these interfaces. * The Presentation Layer contains references to both the BLL library and the DAL library. Calls to Add/Update/Delete are routed through the BLL to the DAL, while any read is done directly from the DAL. I believe this conforms to the Command-Query-Responsibility-Segregation concept of bypassing the BLL for doing reads. Here is an illustration of how this is all set up. There are three projects: * NW.Web * NW.Business * NW.DataAccess Below is a snapshot of code in the different layers. **\\-- NW.Web --** // A class in the Presentation Layer public class CustomerPage { // Business layer interface from the NW.Business namespace private ICustomerBusiness ICustB; // DAL Read interface from the NW.DataAccess.Read namespace private ICustomerRead ICustR; // Constructor for the Customer Page that uses Constructor Injection public CustomerPage(ICustomerBusiness ICustB, ICustomerRead ICustR) { this.ICustB = ICustB; this.ICustR = ICustR; } } **\\-- NW.Business --** // Declaration of business interface in the Business Layer interface ICustomerBusiness { void Persist(); } // A class in the Business Layer that implements the business interface public class Customer: ICustomerBusiness { // Repository interface object that will be injected by Constructor Injection.
private ICustomerRepository ICustRep; public Customer(ICustomerRepository ICustRep) { this.ICustRep = ICustRep; } public void Persist() { ICustRep.AddOrUpdate(); } } // Declaration of Repository interface in the Business Layer public interface ICustomerRepository { void AddOrUpdate(); void Delete(); } **\\-- NW.DataAccess --** public class CustomerRepository : ICustomerRepository { public void AddOrUpdate() { // implementation of Add or Update } public void Delete() { // implementation of Delete } } // A Read interface in the Data Access Layer interface ICustomerRead<T> { // A read is returned as a DTO, since in the database this may map to more than 1 table CustomerDTO GetCustomerDetails(T id); } // An implementation of the Read interface in the Data Access Layer namespace NW.DataAccess.Read { public class CustomerRead<T> : ICustomerRead<T> { public CustomerDTO GetCustomerDetails(T id) { // implementation here } } } My gut feeling is that there is something wrong here. It seems CQRS, or at least the above implementation, does not address some requirements: * The Customer business object (the Customer class) may need to read from the database for its internal purposes (like initializing variables, etc.). With reads directly defined in the DAL, the only way to do this would be to reference the DAL DLL in the BLL. But this would create a circular reference and go against the IoC that is done * What happens when there is some common read requirement across all business objects?"} {"_id": "235437", "title": "Why no MVC methodology for desktop applications?", "text": "I'm currently learning how to develop web apps with C#, ASP.NET and MVC. I am enjoying the MVC paradigm a lot, but then thought about using this to develop desktop software with. I googled around, but found nothing about using the MVC pattern to create desktop apps that connect to databases. So what software pattern do .NET programmers like to use if they're not using MVC? Thanks."} {"_id": "181745", "title": "What are the cons of using XML for GUI development?", "text": "I'm hoping this question fits within the scope of Programmers SE and that it's not too open ended. I've always hated GUI design. I don't mean to offend anyone, but I've hated CSS/HTML, I've hated Java Swing, I can't stand any type of GUI editor, and in general I don't have any skill at making anything look nice. But then the Android platform came around, and I've been in love with it ever since. GUI design isn't a problem anymore, because the design is abstracted into XML files (which combine both structure and style) and out of the code (Java files). It was clean, easy to use, and very extensible. Making themes was incredibly easy, and it followed the same structure as the concept of overriding in computer science. This brings me to my question. The Android platform was released in 2007. It's been 5 years as of the posting of this question. And yet I haven't seen this idea of XML for GUI design pop up anywhere other than for Android. (For reference on how Android design works, here's an official hello world tutorial.) Are there any cons that would make using XML for GUI design a bad choice for either web or desktop development? Are there any other reasons why this type of model would be hard to make or use, or simply any reason preventing it from becoming widespread? As for a little context, this question popped into my head as I was recently struggling with a GridBagLayout in Java."} {"_id": "203302", "title": "How does the ?
make a quantifier lazy in regex", "text": "I've been looking into regex lately and figured that the `?` operator makes the `*`, `+`, or `?` lazy. My question is: how does it do that? Is it that `*?`, for example, is a special operator, or does the `?` have an effect on the `*`? In other words, does regex recognize `*?` as one operator in itself, or does regex recognize `*?` as the two separate operators `*` and `?`? If it is the case that `*?` is being recognized as two separate operators, how does the `?` affect the `*` to make it lazy? If `?` means that the `*` is optional, shouldn't this mean that the `*` doesn't have to exist at all? If so, then in a pattern `.*?`, wouldn't regex just match separate letters and the whole string instead of the shorter string? Please explain, I'm desperate to understand."} {"_id": "203303", "title": "Is a function plotter a legitimate use of eval() in JavaScript?", "text": "From PHP development I know that eval is evil, and I've recently read What constitutes \u201cProper use\u201d of the javascript Eval feature? and Don't be eval. The only proper use of eval I've read about is Ajax. I'm currently developing a visualization tool that lets users see how polynomials can interpolate functions: * Example * Code on GitHub ![enter image description here](http://i.stack.imgur.com/RSRab.png) I use eval for evaluation of arbitrary functions. Is this a legitimate use of eval? How could I get rid of eval? I want the user to be able to execute any function of the following forms: 1. a x^i with a,i in R 2. sin, cos, tan 3. b^x with b in R 4. any combination that you can get by * adding (e.g. x^2 + x^3 + sin(x)), * multiplying (e.g. sin(x)*x^2) or * inserting (e.g. sin(x^2))"} {"_id": "38894", "title": "Advice On How To Securely Manage [Client] Server Details Across Team?", "text": "Does anybody have any advice on this? I currently work as a kind of lead developer/team leader and we have some remote team members and sometimes a contractor or two. At times, the entire team might be working on a large project, and server details are usually managed by the entire team. What is the best-practice way to deal with this? Is there something I'm missing? I can't see any way to have somebody access a server without the details. Also, I'm considering the security protection they have on their machines; should I be ensuring certain things happen to the data, like encryption and shredding? Should I use a 'company' proxy for connections? Please advise!"} {"_id": "38891", "title": "Starting my first RoR project, what JS library is good to go with it?", "text": "I'm starting my first Ruby on Rails project as I've been excited about the language for quite a while now and I'm sick of writing PHP. I've gathered that Rails is pretty much an automation framework which should really speed up programming on the backend for me. Though this does not impact the frontend, I still need to write tables, styles and divs to put my layout together. I know there are libraries like ExtJS out there that automate a big part of this process, but I was wondering if there are any frameworks out there that actually integrate with Ruby on Rails, as in, offer a built-in way to handle AJAX queries, for example. **TLDR**: I would really appreciate some tips on a good JS framework to go with Ruby on Rails."} {"_id": "253153", "title": "How do processes communicate?", "text": "What transports/pipes/interops are available that most or all languages support across OSs? Not necessarily network, but interprocess.
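For example, the kind of mechanism I have in mind is something like a named pipe, which in .NET looks roughly like this (a minimal sketch with an invented pipe name; whether every language/OS pair has an equivalent is exactly what I'm asking):

using System;
using System.IO;
using System.IO.Pipes;

// Process A: create the pipe and wait for a peer.
using (var server = new NamedPipeServerStream(\"demo-pipe\"))
{
    server.WaitForConnection();
    using (var writer = new StreamWriter(server) { AutoFlush = true })
        writer.WriteLine(\"hello from process A\");
}

// Process B (a separate executable): connect and read one line.
using (var client = new NamedPipeClientStream(\".\", \"demo-pipe\"))
{
    client.Connect();
    using (var reader = new StreamReader(client))
        Console.WriteLine(reader.ReadLine());
}
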
Or is interprocess communication OS specific? I mean, for instance, COM interop is on Windows. Is there an equivalent on Linux, for instance, that would work the same and that a programming language could use interchangeably? I know most languages support network I/O. Do we have to use that for interprocess communication or are there other options?"} {"_id": "256228", "title": "How to upgrade software", "text": "I am a beginner in computer programming with C/C++. I have previous experience in programming for the web. There is some confusion that I am having with the software development process in desktop applications. Web applications are easy to upgrade because they are basically a bunch of source files, but I have a question about desktop applications. The major thing I notice about desktop applications is that they are released in different versions. This requires the user to go through the uninstall/re-install process in order to obtain a newer version of the software. In C/C++ software for various platforms, how would a programmer implement an \"upgrade\" option that would allow a user to update an already installed application without having to go back to a website? How hard is it to upgrade an executable file on platforms like Windows and Linux? I understand that the process may be different for different platforms. Say, for example, I have a desktop application (version 1) that the user has already downloaded and installed onto their own PC. After a while, I make some changes to the software and make improvements that the user would want applied to their software. The user has made customizations to the software, but would like to upgrade to version 2 without having to re-customize their application all over again. Originally, I thought that an \"upgrade\" feature would replace certain parts of an executable file, but is that really what you would have to do? My main question is: How exactly would someone go about accomplishing this \"upgrade\" process? Is it possible?"} {"_id": "80225", "title": "Has any language become greatly popular for something other than its intended purpose?", "text": "Take this scenario: * A programmer creates a language to solve some problem. * He then releases this language to help others solve problems like it. * Another programmer discovers it's actually much better for some different category of problems. * By virtue of this new application, the language then becomes popular for that application primarily. **Are there any instances of this actually occurring?** Put another way, does the intended purpose of a language have any bearing on how it's actually used, or whether it becomes popular? Is it even important that a language _have_ an advertised purpose?"} {"_id": "80227", "title": "Scoping recommendations while developing in C", "text": "While developing a library using C, what are your recommendations about variable and function scoping? In C++, OOP and namespaces made the whole thing a lot easier. But how do you do that with plain C? Especially, how do you use the keywords `static` and `extern` in headers and code files to manage scoping?"} {"_id": "218163", "title": "How would you model objects representing different phases of an entity's life cycle?", "text": "I believe the scenario is common mostly in business workflows - for example, loan management: the process starts with a loan application, then there's the loan offer, the 'live' loan, and maybe also finished loans.
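(To make the shape of the problem concrete, here is a minimal sketch of the kind of entities I mean -- all names and fields are invented:)

using System;

// Fields that every phase of the life cycle shares.
public abstract class LoanDocument
{
    public int Id { get; set; }
    public string ApplicantName { get; set; }
    public decimal Amount { get; set; }
}

// Each phase adds fields of its own...
public class LoanApplication : LoanDocument
{
    public DateTime SubmittedOn { get; set; }
}

public class Loan : LoanDocument
{
    // ...and may point back at the phase it was created from
    // (one of the options discussed below).
    public int SourceApplicationId { get; set; }
    public decimal OutstandingBalance { get; set; }
}
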
* all these objects are related, and share many fields * all these objects also have many fields that are unique to each entity * the variety of objects may be large, and the transformation between them may not be linear (for example: a single loan application may end up as several loans of different types) How would you model this? Some options: * an entity for each type, each containing the relevant fields (possibly grouping related fields as sub-entities) - leads to duplication of data. * an entity for each object, but instead of duplicating data, each object has a reference to its predecessor (the loan doesn't contain the applicant details, but a reference to the loan application) - this causes coupling between the object structure and the way it was created. If we change the loan application, it shouldn't affect the structure of the loan entity. * one large entity, with fields for the whole life cycle - this can create 'mega objects' with many fields. It also doesn't work well when there's a one-to-many or many-to-many relation between the phases."} {"_id": "20178", "title": "How can I quickly weed out \"copy & paste\" coders?", "text": "I need a way to filter out resumes of folks who just copy and paste code, hope it works, and check it in if it does. All this happens without any attempt (or care) to understand the rest of the code in the system. Sure, I know that copying and pasting code is part of learning a new object, control, etc., but how can one tell if that accounts for 70% (or more) of a candidate's development career? I've come across some senior-level guys whose skills are perhaps so outdated or irrelevant for the project that all they do is google, then copy and paste some code without thinking about the solution as a whole. As a result we have a mishmash of JSON, AJAX, callbacks, ASMX, WCF and postbacks in the same project. It is clear there is no consistency or logic behind where each technology is being used. In the worst case, this type of developer creates security issues and vectors for attack. **Question** How would you recommend I filter out people who have a poor programming background? Can I do it at the resume level? If not, how do I do this during the interview?"} {"_id": "254320", "title": "Is this a good place to ask a question I have about programming?", "text": "I have asked the same question twice about a program in Python and it keeps getting put on hold. I thought this was a good place to ask programming questions and I just wanted to make sure."} {"_id": "42079", "title": "Is it common practice to hire third parties to do code reviews for contractors?", "text": "I recently observed some contract offers which included a \"code review by third party\" clause - the contract would not pay out fully until the code review was completed and it received a pass. I was surprised, especially considering that these were fairly simple, small-scale contracts (churning out vanity apps for the iPhone). Is this kind of third-party code review a common thing to run into when contracting out as a programmer?"} {"_id": "135126", "title": "Should I drop in on local businesses and offer my services as an intern to get experience?", "text": "I am trying to get an entry-level developer job, but have no relevant experience. I am able to code in C#, SQL, HTML, and CSS, understand principles of OOP, and have completed a couple of applications in both WinForms and WebForms.
I've tried applying to graduate roles, but I get blocked by recruiters who don't like my experience so far. I was thinking about writing a detailed report (like a CV) on what I've done so far, visiting software shops, introducing myself, and asking for an internship. Is this an effective way to get my foot in the door in software development? Should I try cold-calling first, to try and get a formal interview?"} {"_id": "42077", "title": "How do I deal with code of bad quality contributed by a third party?", "text": "I've recently been promoted into managing one of our most important projects. Most of the code in this project has been written by a partner of ours, not by ourselves. The code in question is of _very_ questionable quality. Code duplication, global variables, 6-page-long functions, Hungarian notation, you name it. And it's in C. I want to do something about this problem, but I have very little leverage on our partner, especially since the code, for all its problems, \"just works, doesn't it?\". To make things worse, we're now nearing the end of this project and must ship soon. Our partner has committed a certain number of person-hours to this project and will not put in more hours. I would very much appreciate any advice or pointers you could give me on how to deal with this situation."} {"_id": "124570", "title": "Why are the pointer symbol and multiplication sign the same in C/C++?", "text": "I am writing a _limited_ C/C++ code parser. Now, the multiplication and pointer signs give me a really tough time, as both are the same. For example, int main () { int foo(X * p); // forward declaration bar(x * y); // function call } I need to apply special rules to sort out if `*` is indeed a pointer. In the above code, I have to find out if `foo()` is a forward declaration and `bar()` is a function call. Real-world code can be a lot more complex. Had there been a different symbol like `@` for pointers, it would have been straightforward. Pointers were introduced in `C`, so why wasn't a different symbol chosen for them? Was the keyboard so limited? [It would be a bonus if someone could shed light on how modern-day parsers deal with this. Keep in mind that `X` can be a type name in one scope and a variable name in another scope, at the same time.]"} {"_id": "124575", "title": "What is a practical way to debug Rails?", "text": "I get the impression that in practice, debuggers are rarely used for Rails applications. (Likewise for other Ruby apps, as well as Python.) We can compare this to the usual practice for Java or Visual Studio programmers -- they use an interactive debugger in a graphical IDE. How do people debug Rails applications _in practice_? I am aware of the variety of debuggers, so no need to mention those, but do serious Rails programmers work without them? If so, why do you choose to do it this way? It seems to me that console printing has its limits when debugging complex logic."} {"_id": "48612", "title": "Has anyone had a positive experience using Google Analytics in their native iOS app?", "text": "I'm having a hard time finding reviews on Google Analytics for native iOS apps and I was hoping to find some programmers on here that have any experience with it. If so, how well did it work? Thanks so much for your thoughts!"} {"_id": "134293", "title": "What data structure could I use for modeling a network of nodes and edges?", "text": "My application needs to model and perform operations on a network with 40-50 nodes and typically fewer than 6 edges per node.
Both nodes and edges are objects with around 1K of data each. During execution the mapping of the network is frequently changed - nodes added and deleted, edges added and deleted, in addition to the properties of individual nodes and edges being adjusted. The node objects and edge objects are allocated using 'new', with the resulting pointers stored in a `std::list` for each object type. I have experimented with two different approaches for mapping: 1. Put a container in each node to hold the IDs of its edges, and 2 variables in each edge to store the IDs of the end nodes. 2. Add a new top-level container, separate from the container of edges and the container of nodes, to store the mapping information. Functions in the node and edge classes will be easier to implement if the mapping information is stored in those classes. Making changes to the network mapping would be much easier if all the mapping data was stored separately. But if the mapping is not stored in the nodes and edges, then member functions in the nodes and edges need a way to get mapping information from the parent object. Is there a data structure or a conceptual technique that gives the best of both approaches, without duplicating data or breaking encapsulation? Performance is not a major concern, since there are not any extremely expensive calculations involved. The more important concern is safe, understandable, maintainable code."} {"_id": "134292", "title": "Product classifying algorithm - text classification - C# - algorithm suggestions", "text": "Alright people. Finally, with the help of the Stack Overflow community, I have gathered product pages from 20 commercial product-selling websites with the following features: Product URL, Product Price, Product Name, Product Category, Product Page Title, Product Page Description, Product Page Keywords. Now, using these features of the products, I have to classify them. What does classification mean? Let me explain it. As you can imagine, every website lists the products in its own way. There is no standard format. So let's say the iPhone 4 is being sold at 20 different websites in 20 different ways. What I need to achieve is grouping these 20 iPhone pages from the 20 different websites. So when a person queries my website with \"iphone 4\" I will show those 20 results. Basically, out of over 500,000 product URLs I need to group every product. So let's say there are 15 GeForce GTX 570 cards among these 500k URLs; I need to group them as the same product. You can imagine it as Google Products. But I am doing it for my own country, which is Turkey, and Google does not have product search for Turkey. In short, using the features above, what algorithm would you suggest? I don't want to use any training techniques if possible. Everything automated. I am using C# 4.0 WPF and the data is stored in an MSSQL 2008 R2 database"} {"_id": "48616", "title": "How does one learn to program (and think) the Ruby way?", "text": "**Why I Ask this Question:** I've just started to learn Ruby (and by extension IronRuby, since I work in the Microsoft world). I picked up IronRuby Unleashed to teach me the basic syntax of Ruby, and any particulars of IronRuby. However, learning the syntax is not my primary goal (if that was, I would just obtain The Ruby Programming Language, which I might get eventually anyway). I say this because I could learn the syntax, but still write programs in a non-Ruby way. Such as: 1. Forcing heavy-handed DI via DI frameworks ***** 2.
Using Ruby to write mostly C-, Java-, or Perl-type code To me, doing these things sounds like the effective equivalent of writing procedural code in Java, or learning the syntax of F# but writing programs as if the language were C#. **Therefore**, my main goal is to learn to program, to think, in the way that embodies Ruby's: 1. Language idioms 2. Dynamic style 3. Tried-and-true principles and patterns of the community * * * *Response #28 to the link above (Forcing heavy-handed DI via DI frameworks) asks a similar question to the one I post here. The blog's author suggested reading Jim Weirich's code and perhaps Rails. I'm looking for additional suggestions."} {"_id": "48617", "title": "How does one pluralize 's?", "text": "What is the most appropriate way of writing this comment: /// <summary> /// Order - Identifies the ordinal location of this category /// relative to other listed categories. /// </summary> if I'm wanting to wrap \"category\" in `<see cref=\"Category\"/>` tags? I've considered: /// <summary> /// Order - Identifies the ordinal location of this <see cref=\"Category\"/> /// relative to other listed <see cref=\"Category\"/>'s. /// </summary> Do you see my dilemma? **Edit:** I should add that I am using Visual Studio's XML Comments. So I am somewhat restricted as to the schema. I believe `cref` has to point to a valid type reference."} {"_id": "201920", "title": "How to share control of links/domains on an open source project with many collaborators?", "text": "I'm trying to help the Rebol project re-engineer its web presence now that it is open source as Apache 2 -- after nearly two decades of proprietary license! The language's creator currently has registrar control of the rebol.com/.org/.net sites. He wishes to keep control of the `rebol.com` domain as a holding for his Rebol Technologies company (and its remaining non-open codebases). But he has indicated a willingness to let the community take over `rebol.org` and `rebol.net` -- which are sites that show their age at present. They need large overhauls in terms of organization / content / visual theming; and the current proposal is that the .org become a neutral Wikipedia-like documentation and curated module library (with few outbound links), while the .net be a more sociable \"developer network\" of diverse community resources. While the creator is trusted as being benevolent, he can't really be a benevolent dictator for the project, as he has largely released his interest in Rebol due to a new career. Moreover, the turnaround in responding to community concerns has historically been on very long timescales... and hence many alternative sites have popped up to fill in the gaps. The question of how to take control of the issue and get the community to invest in content hosted under the nice domain names -- rather than each going their separate direction -- is a thorny one. For comparison, I did WHOIS domain-name checks and a little research on the main go-to sites for Ruby and Python. **ruby-lang.org** * Admin Name: Matsumoto.Yukihiro * Admin Organization: Ruby Users Group It seems that the source to ruby-lang.org is maintained on GitHub. If I read it correctly, there are 35 members of the ruby GitHub organization, though I don't know enough about that to know which of these people have commit access or what the checks and balances are. I don't see any official link defining the \"Ruby User's Group\". There is a non-profit called Ruby Central that seems to do some organization, but the only thing I could find was some controversy when the Rails guy tried to prevent people from using the Rails logo.
ruby-lang.org seems to have almost no remarks on legal structure. **python.org** * Admin Name: Domain Administrator * Admin Organization: Python Software Foundation This seems significantly more formal, as they have bylaws and you can either buy your way into the voting structure with sponsorship...or be nominated / elected by a member of the existing membership. * * * I like how lightweight and trusting the Ruby model is. But if Matz were hit by a bus--then other than people being sad--what would happen? Whose hands would ruby-lang.org fall into? What if the situation were slightly different, such that Matz were very trusted on technical matters, but not to follow up on getting site issues fixed in a timely manner? Python seems to have this more planned out, but is rather heavyweight. According to Wikipedia the Python Software Foundation was formed in 2001, has 124 members with Guido van Rossum as president, and had a budget of $750,000 in 2011! So how might a project with fewer resources manage something as simple as a couple of domain names, and decide how content disputes will be resolved? There's a desire to reboot the identity...but without some form of enforceable contract it's not likely going to please the parties involved."} {"_id": "201923", "title": "General questions regarding open-source licensing", "text": "I'm looking to release an open-source iOS software project but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not be from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works it is not required. As such, my requirements are as follows: * Original source can be distributed and re-distributed (verbatim) both commercially and non-commercially as long as the original copyright information, website link and license is maintained. * I wish to retain rights to any of the multi-media distributed as part of the project (sound effects, graphics, logo marks, etc). Such assets will be included to allow other developers to easily execute the project, but cannot be re-distributed in any manner. * I wish to retain rights to the application's name and branding. Further to selecting an applicable license, I have the following questions: * The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project? * Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link the project and exclude it entirely from the Git project?"} {"_id": "201924", "title": "Static code analysis for bash scripts", "text": "I program CLI utilities in bash to automate much of my work as a DBA. I would like to improve my code and make it more robust, maybe with the help of some static code analysis tool like the one I used to use for Java. Do any of you use a static code analysis tool for bash scripts?
If so, which one do you use? Does one exist?"} {"_id": "163016", "title": "What to do when projects are slow and you are being held up by others?", "text": "Where I work, projects take a significant amount of time because the teams are large, there is a lot of \"design and analysis\", a lot of documentation, and work always gets pushed off. I work in the middle tier and I always have to wait for the services and client folks to get their work done. Oftentimes there are weeks at a time when I can't get any work done. I feel bored and weird just sitting here scrambling to at least appear like I am busy. Management seems to do little when asked for more work. What do you do in such cases?"} {"_id": "139719", "title": "Web Development Algorithms", "text": "While searching for jobs online I noticed that most PHP web developer jobs ask you to know about algorithms and data structures. While I don't know PHP yet, I have started to learn it in order to obtain a job in that field. While I have seen some similar questions on the forum, I don't feel like the answers satisfied my needs. I already have some knowledge of the quicksort algorithm and how to use a stack from when I was working in Java and JavaScript, but I am guessing it is not enough. What I would like is a list of the most commonly used algorithms in web development."} {"_id": "231396", "title": "Best way to migrate a series of code (v1, v2, etc) into a new VCS?", "text": "I am completely new to using a VCS and haven't settled on Git or Hg, but I do know that I will be using it soon. My old project folder has versions of files labeled v1, v2, etc. How easily can I fold this into a new [local] repo? There are only a few sets of files with version counters (so commit, then copy-paste, then commit is an option). I should also note that I am not a classically trained programmer but a self-taught former analytical chemist."} {"_id": "231397", "title": "Is it conventional to raise a NotImplementedError for methods whose implementation is pending, but not planned to be abstract?", "text": "I like to raise a `NotImplementedError` for any method that I want to implement, but where I haven't gotten around to doing it yet. I might already have a partial implementation, but prepend it with `raise NotImplementedError()` because I don't like it yet. On the other hand, I also like to stick to conventions, because this will make it easier for other people to maintain my code, and conventions might exist for a good reason. However, Python's documentation for NotImplementedError states: > This exception is derived from RuntimeError. In user defined base classes, > abstract methods should raise this exception when they require derived > classes to override the method. That is a much more specific, formal use case than the one I describe. Is it a good, conventional style to raise a `NotImplementedError` simply to indicate that this part of the API is a work in progress? If not, is there a different standardised way of indicating this?"} {"_id": "139712", "title": "Is performance testing applicable to a QA or business user testing?", "text": "Performance testing is something that I, as a developer, do when optimizing. However, if the change is purely optimization, should the QA or business user perform their own performance testing? I would reckon they should, aside from testing that the functionality remains the same.
If that's the case, is it acceptable for them to test via manually timing the response time from the screen?"} {"_id": "236163", "title": "Storing tokens during lexing stage", "text": "I am currently implementing a lexer that breaks XML files up into tokens. I'm considering ways of passing the tokens on to a parser to create a more useful data structure out of said tokens - my current plan is to store them in an arraylist and pass this to the parser. Would a linked list where each token points to the next be better suited? Or is being able to access tokens by index easier to build a parser on? Or is this all a terrible strategy? Also, if anyone has used antlr: I know it uses a token stream to pass tokenized input to the parser, but how can the parser make decisions on whether the input is valid / create a data structure if it does not have all the tokens from the input yet?"} {"_id": "236161", "title": "assembly.GetTypes() vs assembly.DefinedTypes.Select(t => t.AsType());", "text": "public static IEnumerable<Type> GetAccessibleTypes(this Assembly assembly) { try { #if NET40 return assembly.GetTypes(); #else return assembly.DefinedTypes.Select(t => t.AsType()); #endif } catch (ReflectionTypeLoadException ex) { // The exception is thrown if some types cannot be loaded in partial trust. // For our purposes we just want to get the types that are loaded, which are // provided in the Types property of the exception. return ex.Types.Where(t => t != null); } } What are the differences between assembly.GetTypes() and assembly.DefinedTypes.Select(t => t.AsType())? Original code: https://entityframework.codeplex.com/SourceControl/latest#src/Common/AssemblyExtensions.cs"} {"_id": "179541", "title": "Pattern for loading and handling resources", "text": "Many times there is the need to load external resources into the program, be they graphics, audio samples or text strings. Is there a pattern for handling the loading and the handling of such resources? For example: should I have a class that loads all the data and then call it every time I need the data? As in: GraphicsHandler.instance().loadAllData() ...//and then later: draw(x,y, GraphicsHandler.instance().getData(WATER_IMAGE)) //or maybe draw(x,y, GraphicsHandler.instance().WATER_IMAGE) Or should I assign each resource to the class where it belongs? As in (for example, in a game): Graphics g = GraphicsLoader.load(CHAR01); Character c = new Character(..., g); ... c.draw(); Generally speaking, which of these two is the more robust solution? GraphicsHandler.instance().getData(WATER_IMAGE) //or GraphicsHandler.instance().WATER_IMAGE //a constant reference"} {"_id": "161526", "title": "When to prefer a generalized solution over solving specific cases", "text": "In programming we're often faced with a choice: cover each conceivable use case individually, or solve the general problem: ![XKCD - The General Problem](http://imgs.xkcd.com/comics/the_general_problem.png) It's obvious that solving the immediate problem is faster; however, creating a generalized solution will save time in the future. How do I know when it's best to try and cover a finite list of cases, or make a generic system to cover all possibilities?"} {"_id": "70269", "title": "Idea for CAPTCHA", "text": "CAPTCHAs have a lot of downsides; accessibility and user friendliness are two things that are often sacrificed. I've thought of an idea that might work really well - has this ever been attempted before?
## Prerequisites The prerequisite for it is that email verification _must be a requirement_ on your website, which is currently very common. ## Initial Process * User/robot signs up * User/robot notified that email verification is required * Email dispatched in HTML format ## Verification Link in Email The email will include the verification link. For the example, I've given the link 3 querystring parameters: * **UID** \- ID of user registering * **Code** \- Unique code for activation * **F** \- Fail flag It will render as follows: ![enter image description here](http://i.stack.imgur.com/INGm0.png) But the HTML in the email is as follows: Thank you for registering!<br/>
    <a href=\"http://www.example.com/Verify?UID=123&Code=ABC&F=1\"></a>
    To activate your account, please click on the link below:<br/>

    <a href=\"http://www.example.com/Verify?UID=123&Code=ABC\">http://www.example.com/Verify</a>

    Regards
Tom ## Outcome The assumption here is that the robot will either click the fail link, as it is first in the email, or it will click both links. * If both links are clicked, we can mark them as a potential bot. * If only the first link is clicked, we can mark them as a potential bot. * If the second link is the only one clicked, we can assume they are a legitimate user. ## Review **Downsides:** * HTML email must be enabled for the client or it will be very confusing for the end client * A small % of clicks in the honeypot area of links (however, you could probably hide this honeypot link with some more HTML, but that risks confusing end users depending on how you approach this, so I wouldn't try) **Benefits** * It's not a binary pass/fail. If the fail link is clicked, then you can manually review the account, or resend an activation link. * It's accessible (as long as the client enables HTML email) * It's non-interruptive for user friendliness; the flow for the user is natural and they won't know they've just passed a CAPTCHA test If this method has a lower false positive rate than traditional methods, it's worth having. Not only that, but it's an invisible verification process from the genuine user's point of view. Thoughts? Criticisms? Has this been done before?"} {"_id": "121438", "title": "Database design", "text": "I'm in the middle of developing an application which requires a kind of dynamic database, so this is what I want: ![enter image description here](http://i.stack.imgur.com/0fxbc.png) This is for reading the details of a class; the number of variables and methods will only be known at run-time, and I have to create a number of classes like this. In the above scenario, how can I design a database (MySql) to store all this data and retrieve it later?"} {"_id": "176640", "title": "Requiring multithreading/concurrency for implementation of scripting language", "text": "Here's the deal: I'm looking at designing my own scripting/interpreted language for fun. I'm only in the planning stages right now; I want to make sure I have a very strong grasp on exactly how I will implement everything before I start coding. What I'm currently struggling with is concurrency. It seems to me like an easy way to avoid the unpredictable performance that comes with garbage collection would be to put the garbage collector in its own thread, and have it run concurrently with the interpreter itself. (To be clear, I don't plan to allow the scripts to be multithreaded themselves; I would simply put a garbage collector to work in a different thread than the interpreter.) This doesn't seem to be a common strategy for many popular scripting languages, probably for portability reasons; I would probably write the interpreter with the UNIX/POSIX threading framework initially and then port it to other platforms (Windows, etc.) if need be. Does anyone have any thoughts on this issue? Would whatever gains I receive by exploiting concurrency be nullified by the portability issues that will inevitably arise? (On that note, am I really correct in my assumption that I would experience great performance gains with a concurrent garbage collector?) Should I move forward with this strategy or step away from it?"} {"_id": "247209", "title": "Why isn't there a next operation on enums?", "text": "In most popular programming languages like Java and C# there is a way to define `enums`, which are essentially datatypes with a fixed set of values, e.g.
`DayOfWeek.Monday`, how do I get the next value of the `enum`, in this particular case `DayOfWeek.Tuesday`? I realize that not all `enums` are ordered and there might be different kinds of orders for them (cyclic, partial etc.), but a simple `next` operation would be sufficient in most cases. In fact, since the set of values in an `enum` is fixed and limited, this could be done completely declaratively, assuming there is a means in the language to do so. However, in most programming languages that is simply not possible, at least not as easily as it could be in theory. So, my question is: why is that so? What are the reasons for not providing a simple syntax for declaring the `next` value for an `enum`? I suggest, in C# or Java this could even be done with a special _attribute_ or _annotation_, respectively. But there are none, as far as I know. I am explicitly _not asking_ for workarounds; I know that there are alternative solutions. I just want to know why I have to employ a workaround in the first place."} {"_id": "247203", "title": "Why do some functional programming languages use a space for function application?", "text": "Having looked at some languages for functional programming, I always wondered why some FP languages use one or more whitespace characters for function application (and definition), whereas most (all?) imperative/object-oriented languages use parentheses, which seems to be the more mathematical way. I also think that the latter style is much clearer and more readable than the style without the parens. So if we have a function _f(x) = x\u00b2_ there are two alternatives to calling it: * FP: `f x` Examples: * ML, Ocaml, F# * Haskell * LISP, Scheme (somehow) * Non-FP: `f(x)` Examples: * Almost all imperative languages (I know, see the comments/answers) * Erlang * Scala (also allows \"operator notation\" for single arguments) What are the reasons for \"leaving out\" the parentheses?"} {"_id": "121434", "title": "Is it more sensible to log exceptions in a catch-all or in a base exception class?", "text": "I'm in the process of refactoring a fairly large web app. One of the major issues is inconsistent error handling and I'm trying to come up with a sensible strategy. I've created a custom error handler, via set_error_handler, that essentially turns PHP errors into ErrorExceptions, and a custom base exception class that directly inherits from Exception. On production I'm using a generic exception catch-all, via set_exception_handler, and I'm about to add exception logging* to the mix. My dilemma is where to do the actual logging: in the base exception class or in the catch-all. I've thought of a couple of reasons to log it in the catch-all: * There are quite a few exceptions in the code that need to be converted to some appropriate child of the base exception class. Until that happens, not all exceptions will get logged. * It somehow feels more natural to do it in the catch-all; a base exception class shouldn't do more than being just that. (It might be a single responsibility principle thing, but it could just be a misguided feeling) and one reason to log in the base exception class: * Currently the catch-all is only used on production. It would be easy to introduce it on our other environments (development, testing) but that would call for a few adjustments, as errors are handled differently per environment, as on production they are translated to 404/503 error pages. Is there any acceptable practice for where to log exceptions?
* Logging will involve writing to a text file at first, and it may evolve to sending mails for certain types of exceptions. * * * Some clarifications, prompted by @unholysampler's answer: I'm facing a 2*10^6 sloc codebase, with a lot of third party stuff I have no control over, and some of the code I do have control over pre-dates exceptions in PHP. And there's also some crappy recent code; we are recovering from a long period of intense pressure where we virtually had to stop thinking and just hacked. We are actively refactoring to address all the inconsistencies and introduce a sensible error handling approach, but that's going to take some time. I'm more interested in what to do until I reach the point where errors are handled appropriately. I'll probably ask another question on a sensible exception strategy at some point. The main motivation behind logging is to get an email on my phone whenever something bad happens on production. I don't care if data dumps get huge; if they do, I'll have a cron job deleting old ones every now and then."} {"_id": "247200", "title": "Coding shortcuts for type conversions or similar", "text": "bool? example = null; is actually Nullable<bool> example = null; Now for the `bool` to `System.Boolean` conversion, or actually compile-time replacement: we usually hardly care, as it is very obvious. I even think I _never_ used `System.Int32` or anything. However: the conversion into a `Nullable<T>` through appending a `?`, that is something not so common. It is not custom to VS (#devel does it too) and my guess is that it is some secret magic in the compiler. My question is: Are there more of these \" _Conversions_ \" / typing shortcuts available in `.NET / C#`? I do NOT mean keyboard shortcuts. A term to google for or something similar might be best."} {"_id": "255059", "title": "UnmarshalException while consuming a web service", "text": "I have a web service based on a number of entity classes.
one of them is shown below @Entity @Table(name = \"users\") @XmlRootElement @NamedQueries({ @NamedQuery(name = \"Users.findAll\", query = \"SELECT u FROM Users u\"), @NamedQuery(name = \"Users.findByUserName\", query = \"SELECT u FROM Users u WHERE u.userName = :userName\"), @NamedQuery(name = \"Users.findByUserPassword\", query = \"SELECT u FROM Users u WHERE u.userPassword = :userPassword\")}) public class Users implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = \"user_name\") private String userName; @Basic(optional = false) @Column(name = \"user_password\") private String userPassword; @JoinColumn(name = \"user_category_id\", referencedColumnName = \"category_id\") @ManyToOne(optional = false) private UserCategory userCategoryId; @OneToMany(cascade = CascadeType.ALL, mappedBy = \"userName\") private List userRecordList; public Users() { } public Users(String userName) { this.userName = userName; } public Users(String userName, String userPassword) { this.userName = userName; this.userPassword = userPassword; } public String getUserName() { return userName; } public void setUserName(String userName) { this.userName = userName; } public String getUserPassword() { return userPassword; } public void setUserPassword(String userPassword) { this.userPassword = userPassword; } public UserCategory getUserCategoryId() { return userCategoryId; } public void setUserCategoryId(UserCategory userCategoryId) { this.userCategoryId = userCategoryId; } @XmlTransient public List getUserRecordList() { return userRecordList; } public void setUserRecordList(List userRecordList) { this.userRecordList = userRecordList; } @Override public int hashCode() { int hash = 0; hash += (userName != null ? userName.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Users)) { return false; } Users other = (Users) object; if ((this.userName == null && other.userName != null) || (this.userName != null && !this.userName.equals(other.userName))) { return false; } return true; } @Override public String toString() { return userName; } } I was able to successfully deploy the web service and then I added a new RESTful web client using NetBeans, which created the following class public class Client { private WebTarget webTarget; private javax.ws.rs.client.Client client; private static final String BASE_URI = \"http://localhost:31691/ProductionEntitiesService/api\"; public Client() { client = javax.ws.rs.client.ClientBuilder.newClient(); webTarget = client.target(BASE_URI).path(\"entities.users\"); } ...
public <T> T find_XML(Class<T> responseType, String id) throws ClientErrorException { WebTarget resource = webTarget; resource = resource.path(java.text.MessageFormat.format(\"{0}\", new Object[]{id})); return resource.request(javax.ws.rs.core.MediaType.APPLICATION_XML).get(responseType); } public <T> T findAll_XML(Class<T> responseType) throws ClientErrorException { WebTarget resource = webTarget; return resource.request(javax.ws.rs.core.MediaType.APPLICATION_XML).get(responseType); } public void close() { client.close(); } } This line of code then returned an XML result of the query result = c.findAll_XML(String.class); which had this format 2 admin admin d033e22ae348aeb5660fc2140aec35850c4da997 However, this line of code List l = (List)c.findAll_XML(Users.class); produces an exception, which seems to be caused by the \"userss\" tag that surrounds the XML result; I'm not sure how that came about. Can anyone help me resolve this? Exception in thread \"AWT-EventQueue-0\" javax.ws.rs.BadRequestException: HTTP 400 Bad Request at org.glassfish.jersey.message.internal.AbstractRootElementJaxbProvider.readFrom(AbstractRootElementJaxbProvider.java:124) at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(ReaderInterceptorExecutor.java:188) at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:134) at org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:988) at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:833) at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:768) at org.glassfish.jersey.client.InboundJaxrsResponse.readEntity(InboundJaxrsResponse.java:96) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:740) at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:88) at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:650) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:228) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:421) at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:646) at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:375) at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:275) at service.Client.findAll_XML(Client.java:83) at examples.Find.<init>(Find.java:44) at examples.Find$1.run(Find.java:166) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:744) at java.awt.EventQueue.access$400(EventQueue.java:97) at java.awt.EventQueue$3.run(EventQueue.java:697) at java.awt.EventQueue$3.run(EventQueue.java:691) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:75) at java.awt.EventQueue.dispatchEvent(EventQueue.java:714) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101) at
java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93) at java.awt.EventDispatchThread.run(EventDispatchThread.java:82) Caused by: javax.xml.bind.UnmarshalException: unexpected element (uri:\"\", local:\"userss\"). Expected elements are <{}userCategory>,<{}users> at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:681) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:247) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:242) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:109) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1086) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:510) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:492) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:163) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:509) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:378) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:604) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3122) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:880) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:243) at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:214) at javax.xml.bind.helpers.AbstractUnmarshallerImpl.unmarshal(AbstractUnmarshallerImpl.java:140) at javax.xml.bind.helpers.AbstractUnmarshallerImpl.unmarshal(AbstractUnmarshallerImpl.java:123) at org.glassfish.jersey.message.internal.XmlRootElementJaxbProvider.readFrom(XmlRootElementJaxbProvider.java:140) at org.glassfish.jersey.message.internal.AbstractRootElementJaxbProvider.readFrom(AbstractRootElementJaxbProvider.java:122) ... 33 more"} {"_id": "255056", "title": "Whether to store all numbers or just their ranges in database for this application", "text": "I'm developing a (`PHP MySQL`) Web App which will sell dynamically generated real time cell phone numbers for cellular companies. 
A company will ask for, say, a thousand numbers and this app will check for available numbers and provide them. The number will be an 8-digit figure. The first `two digits` of the number will mostly be fixed as specific `codes` for a particular company; the rest of the digits will be dynamically generated. My question is whether a) I should store all the sold numbers in the database, and if yes, then: i) each number in a separate row, or ii) all the numbers in a single row; or b) I should store ranges of numbers, i.e. 32500001-32510000. Keeping performance and ease of handling the algorithm in mind, kindly suggest a solution."} {"_id": "255051", "title": "Adding an update feature to a web application", "text": "I'm interested in the best approach to adding auto-update functionality to a web application. Currently: * The patches are downloaded from the software's support site * Uploaded to the host server(s) * Extracted * The patch process is run The patch process needs to know a few things, specifically the webroot of the application and the database host, name and user, in order to run migrations between the point releases. What I would like to see happen is to implement a module inside the application which, using some technology similar to cURL, fetches the latest package, extracts it and runs the necessary command to apply the patch. However, I don't think the end user should be expected to know the specifics of the server, e.g. the location of the application, the database host, user and password. I'm thinking the best way to secure this information would be a separate database table accessed by the update module using the user's existing login credentials (provided they had the necessary privileges to update the installation). What other considerations do I need to take into account? Are there any factors I have overlooked?"} {"_id": "137979", "title": "What is the relationship between the business logic layer and the data access layer?", "text": "I'm working on an MVC-ish app _(I'm not very experienced with MVC, hence the \"-ish\")_. My model and data access layer are hard to test because they're very tightly coupled, so I'm trying to uncouple them. What is the nature of the relationship between them? Should just the model know about the DAL? Should just the DAL know about the model? Or should both the model and the DAL be listeners of the other? * * * In my specific case, it's: * a web application * the model is client-side (javascript) * the data is accessed from the back-end using Ajax * persistence/back-end is currently PHP/MySQL, but may have to switch to Python/GoogleDataStore on the GAE"} {"_id": "252580", "title": "What to do with a long unfinished project?", "text": "I am a hobbyist programmer (self-taught), and once in a while I like to make games and interactive scripts (nowadays mostly in JavaScript for its ease). Sometimes I start long projects that end up being forgotten because of other things I have to do. Sometimes a project gets really big (say, 3000 lines is a huge deal to me). My code is mostly uncommented, unorganized, and unoptimised, and some of my projects are never finished. **Question:** * What do programmers (or teams) do with such projects? * Is there a recommended practice regarding this issue in the \"real world\"?"} {"_id": "216421", "title": "Working with CPU cycles in Gameboy Advance", "text": "I am working on a GBA emulator and am stuck at implementing CPU cycles.
I only have basic knowledge about it: each instruction in ARM and THUMB mode has a different set of cycles. Currently I am simply saying every ARM instruction costs 4 cycles and every THUMB instruction costs 2 cycles. But how do you implement it the way the CPU documentation says? Do instruction cycles vary depending on which section of memory is currently being accessed? http://nocash.emubase.de/gbatek.htm#cpuinstructioncycletimes According to the above specification, different memory areas have different waitstates, but I don't know what that exactly means. Furthermore, what are Non-sequential cycles, Sequential cycles, Internal Cycles, and Coprocessor Cycles for? I saw in some GBA source code that they are using the PC to figure out how many cycles each instruction takes to complete, but how are they doing it?"} {"_id": "125226", "title": "How can I create an Assert.AreEqual(myobject,somevariable) test in TDD before writing production code?", "text": "I'm researching TDD, and I'm not really sure how to write a test before I have written production code. The problem is that TDD states that you make assertions and then write your code so that these assertions pass. But you have not created any classes yet (as part of the production code), so how are you meant to write the tests if you don't have any classes with which to assert? Hope someone can explain...."} {"_id": "137975", "title": "Functional Programming For Embedded Software", "text": "I was discussing F# and Functional Programming with a friend last night and he brought up an interesting question. How would you do embedded software in a functional style? I mean, this seems like a fairly natural fit in terms of stateless code, but embedded also entails being very frugal with memory, and I'm not sure of the story for functional in that regard. Any suggestions about languages or packages for embedded with functional?"} {"_id": "255584", "title": "How to avoid model duplication in JavaEE web applications with a JSON front end", "text": "Recently we developed a web app that uses the following tech stack: * hibernate as orm * spring * extjs (MVC javascript front end) For 1 business object, let it be a Personnel, we have: 1) a Personnel class (hibernate) 2) PersonnelDTO (will explain why we needed this) 3) a Personnel javascript model for Extjs (I don't know, maybe other javascript UI frameworks do not need a model defined) The need for number 2 arose when we couldn't deal with sending hibernate objects back and forth, dealing with sessions and circular references etc. etc. Plus, in some cases you need a fully filled Personnel object (full fields joined), sometimes just a few fields and a relation. Sometimes you need all the info to process a Personnel but you can't send it all the way to the client (security, privacy reasons). So we used DTOs to decouple model objects and objects that will be sent through the wire. Adding/removing a field in such a project design becomes very tedious and error-prone. Adding a field requires: * add to DB tables * add to model object * add to DTO * add to model -> DTO converter (when it's a field that can't be automatically mapped), for the different scenarios that require different amounts of information about the model * add to Extjs model * add to CRUD forms on the page * add to client and server validators... Isn't there a better approach to building similar applications?
We are thinking of starting a new project and don't want to make the same mistakes."} {"_id": "255583", "title": "How does one learn QA?", "text": "When starting to learn programming, you would usually point someone over to Code Academy, or Learn Python the Hard Way, or even The C Programming Language, and tell them to start writing code until it works. But how would one proceed if he wants to learn QA? More specifically, a programmer who wants to learn about the QA process and how to manage a good QA methodology. I was given the role of jumpstarting a QA process in our company and I'm a bit lost: what are the different types of testing (system, integration, white box, black box) and which are most important to implement first? How would one implement them?"} {"_id": "255582", "title": "Creating a web app that authenticates users from two different databases", "text": "I am in a bit of an awkward programming situation, and it will help me a lot to get your input. I am building a web app which expects to authenticate users from another, separate application (through API calls), and also authenticate users from its own database. Essentially, there are two separate user bases. Let's say the app I'm building is App1, and the other app which I'm making API calls to is App2. This communication is only one-way; App1 makes API calls to App2, but App2 does not make calls to App1. Users can register in App1 and create an account in App1's database - but during this process, App1 makes an API call to App2 to make sure the email/username being entered into App1 is not already present in App2. Now, the other side of this. If a user is already registered in App2, they do not need to go and register again in App1. They can simply go to App1 and log in. In doing so, App1 sends an API call to App2 to authenticate the user, and creates a copy of the App2 user's account in its own (App1) database. Problems arise when there are two distinct users, one in App1 and the other in App2, that share the same username. Now, when the App2 user attempts to log into App1, there will exist two users in App1 with the same username! How can a scenario like this be avoided? Is this even a place I want to be (working with App2's user base)?"} {"_id": "255581", "title": "Public API Facade with Micro Services", "text": "Consider a micro service infrastructure in which each service is responsible for one set of activities, and exposes a RESTful interface to its functionality. For example, assume a chat application. We might have one service which is responsible for creating users, and a second service that is responsible for creating messages. Now, we want to create a public-facing REST interface to the application. Are there any best practices for creating this public facade to the micro services? I'm interested in a couple of things, mainly: 1. Which layer should handle authentication/authorization (and in the underlying services, do they share this data or does each implement its own auth?) 2. Clearly in this application messages are sent from users to users. However the message service can easily be written in such a way as to be usable for anything. Should the public proxy be the one to determine the user information and then delegate to a message service?"} {"_id": "255580", "title": "Which JSON Module Should I Download?", "text": "I see there are many modules available for using JSON with Python. Now I am a bit confused!
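To ground the question: all I actually need is to round-trip plain Python objects, which (as far as I can tell) even the standard library module handles - a minimal sketch of my use case:

    import json

    data = {'name': 'Ada', 'scores': [1, 2, 3]}
    text = json.dumps(data)            # Python object -> JSON string
    assert json.loads(text) == data    # JSON string -> Python object

So if the third-party modules only differ in things like speed or strictness, knowing that would already answer half my question.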
I see people talking about SimpleJSON; there are also references to DemJSON and good words for Metamagic JSON. Now my question is: which one is the best module for Python, especially for Python 3.4???"} {"_id": "218665", "title": "How does the CPU know when it received RAM data and instructions?", "text": "Well, the title is pretty much self-explanatory, but I'll expound on it a bit and give the origin of my question. So, I've been wondering how the CPU knows when it has received the RAM data. I'm pretty sure it doesn't OR the RAM outputs together, because 0x00 is still a number, yet ORing that together would say that the RAM has not output anything. Does the RAM have some kind of \"request acknowledged\" line? Well, my friend was making an MC CPU, and he used interrupts for RAM reading, so that's where I got this idea. Anyways, the origin of my problem is probably when I was thinking about Virtual Memory. In virtual memory, you have to fetch from the disk and into the RAM. How does one compensate for the gap between the speeds? May this be answered too?"} {"_id": "134028", "title": "What sources of sample work should be used in a job interview?", "text": "One of my friends has been laid off. When I talked to him, he said they didn't let him take a copy of anything he worked on. When he asked how to show what he worked on to another employer in an interview, he was told that he would have some explaining to do. Should we, as programmers, be allowed to take samples of our previous work from former employers? What sources of code should we be expected to show off in an interview? When almost every employer asks for sample work, how are we to justify what can be sent? Is it our responsibility to maintain after-work projects for our entire life so we have code we can legally show to our next employer?"} {"_id": "164546", "title": "Using prefix incremented loops in C#", "text": "Back when I started programming in college, a friend encouraged me to use the prefix incrementation operator `++i` instead of the postfix `i++`, citing that there was a slight chance of better performance with no real chance of a downside. I realize this is true in C++, and it's become a general habit that I continue. I'm led to believe that it makes little to no difference when used in a loop in C#, regardless of data type. Apparently the ++ operator can't be overridden. Nevertheless, I like the appearance more, and don't see a direct downside to it. It did astonish a coworker just a moment ago though; he made the (fairly logical) assumption that my loop would terminate early as a result. He's a self-taught programmer, and apparently never came across the C++ convention. That made me question whether or not the equivalent behavior of pre- and postfix increment and decrement operators in loops is well known enough. **Is it acceptable for me to continue using `++i` in looping constructs because of style preference, even though it has no real performance benefit? Or is it likely to cause confusion amongst other programmers?** * * * * Note: This is assuming the `++i` convention is used consistently throughout all code."} {"_id": "164542", "title": "Getting URLs from search results", "text": "After 1 month's research I have basically given up on getting all URLs from search results programmatically. I looked at the Google Search API to find a way to get millions of search result \"URLs\" - to be specific, into a text file or something similar - but had no success; still, I am 100% sure there must be a way or trick of doing it.
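For reference, this is roughly what I tried - a sketch against the Custom Search JSON API, where api_key and cx are placeholders for real credentials:

    import requests

    def search_urls(query, api_key, cx):
        urls = []
        # the API serves 10 results per page and rejects start values past ~91,
        # which is why I cap out at roughly 100 URLs per query
        for start in range(1, 92, 10):
            resp = requests.get('https://www.googleapis.com/customsearch/v1',
                                params={'key': api_key, 'cx': cx,
                                        'q': query, 'start': start}).json()
            urls += [item['link'] for item in resp.get('items', [])]
        return urls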
Real question: Is there any way, programmatically or manually, that I can get 1000+ search results (URLs for a search query; e.g. \"Apple\" returns millions of results on Google and I want as many of those result URLs as possible in a text file)? Note: I don't care about any specific search engine, programming language, technique, or software - just point me in the right direction. But yeah, I tried it with the Google API and I can't get more than 100 results at all."} {"_id": "133214", "title": "Roadblock-confused about structure of program", "text": "I'm new to programming, and I'm working in C. I know that this is structured programming, but if I use blocks, say for local variables: { int i; for(i=0; i<25; i++){ printf(\"testing...\\n\"); } } Doesn't this make it kind of object-oriented-like? Is this still structured?"} {"_id": "119085", "title": "Who should train new programmers? Junior or senior programmers?", "text": "On my team, we often require the most senior programmers to train/mentor the brand new junior programmers. However, these same senior programmers are the ones who are doing the bulk of the real, important work. I've tried to argue to my manager that it makes sense to have the junior programmers, who are showing a high aptitude, take the new programmers under their wing. First off, it will free up the senior developers to work on more important initiatives (not that mentoring isn't important). Next, it will give the junior programmers a bit of pride in their job that they would be looked to for such a responsibility, and they may learn something in teaching. Finally, it will save the company money, as senior developers cost a great deal more than juniors. My boss has failed to be persuaded since this is how it has worked on this team since the beginning of time, apparently. Assuming that the decision has been made that some sort of training/mentoring is mandatory, can anyone provide me with some better arguments or tell me why I am wrong? What does your team do? **We can all agree that seniority does not necessarily denote competence, so just assume by \"senior programmers\" I mean \"top programmers\".**"} {"_id": "133219", "title": "Can I re-license Academic Free License code under 2-Clause BSD / ITC?", "text": "I want to fork a piece of code licensed under the Academic Free License. For the project, it would be preferable to re-license it under the ISC License or the 2-Clause BSD license, which are equivalent. I understand that the AFL grants me things such as limitation of liability, but licensing consistency is much more important to the project, especially since we're talking about just 800 lines of code, a quarter of which I've modified in some way. And it's very important for me to give these changes back to the community, given the fact that this is software relevant to security - I need the public scrutiny that I'll get by creating a public fork. In short: At the top of the file I want to say this, or something like it: # Licensed under the Academic Free License, version 3 # Copyright (C) 2009 Original Author # Licensed under the ISC License # Copyright (C) 2012 Stefano Palazzo # Copyright (C) 2012 Company Am I allowed to do this? My research so far indicates that it's not clear whether the AFL is GPL-compatible, and I can't really understand any of the stuff concerning re-licensing to other permissive licenses.
As a stopgap, I would also be okay with re-licensing under the GPL, however: I can find no consensus (though I can find disagreement) on whether this is allowed at all, and I don't want to risk it, of course. * * * * Wikipedia: ISC License * Wikipedia: Academic Free License"} {"_id": "162870", "title": "GPL - what is distribution?", "text": "An interesting point came up on another thread about _alleged_ misappropriation of a GPL project. In this case the enterprise software was used by some large companies who essentially took the code, changed the name, removed the GPL notices and used the result. The point was - if the company did this **and** only used the software internally, then there isn't any distribution, and that's perfectly legal under the GPL. Modifications by their own employees for internal use would also be allowed. _So at what point does it become a distribution?_ Presumably if they brought in outside contractors under 'work for hire' their modifications would also be internal and so not a distribution. If they hired an external software outfit to do modifications and those changes were only used internally by the company - would those changes be distributed? Does the GPL apply to the client or to the external developers? What if the company then gives the result to another department, another business unit, another company? What if the other company is a wholly owned subsidiary? P.S. Yes, I know the _answer_ is to ask a lawyer. But all the discussion I have seen over GPL2/GPL3 _distribution_ has been about web services - not about internal use."} {"_id": "162874", "title": "Python Coding standards vs. productivity", "text": "I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule. One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in python/django and use a version of PEP0008, with various modifications, e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use, even if they aren't the best way to solve a problem, etc. etc. One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do. Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker on the team, a team that is looking for scapegoats for its failings.
Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts - autopep8, pep8ify and PythonTidy - to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline. I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found)"} {"_id": "164099", "title": "How do I Export to excel on aspx page?", "text": "I am trying to take data that I request from an Access database and put it into an Excel file on the client computer. I usually use AJAX to request a summary of the data I need. It is formatted into an HTML table. I need that table to be in an Excel format for the user to download. What I have tried already is to use VB.NET code to open Excel and silently save the data to a file; however, I realized it's the server side that opens Excel, not the client side. In my local testing of the code, Excel would open on my machine and create the file. When running this on the network, I realized Excel isn't on the server. I am not sure if I should just install it or try to stream the file?"} {"_id": "94859", "title": "Laptops or Notebooks in a meeting?", "text": "Is taking the laptop to the meeting a good idea? Of course, the project leader needs to have one -- but the programmers -- especially those who only need to get straight instructions on what to do next on the project -- do they need to take laptops? I feel it takes longer to save notes in software -- and it's a lot easier to just jot down \"things to do\" in a simple notebook. That way you can keep up with the discussion and not lose track of what someone else is saying by spending too much time entering text into the machine."} {"_id": "179896", "title": "is Microsoft LC random generator patented?", "text": "I need a _very_ simple pseudo-random generator (no specific quality requirements) and I found that Microsoft's variant of the LCG algorithm, used for the rand() C runtime library function, fits my needs (gcc's seems too complex). I found the algorithm here: http://rosettacode.org/wiki/Linear_congruential_generator#C However, I worry the algorithm (including its \"magic numbers\", i.e. coefficients) may be patented or restricted for use in some other way. Is it allowed to use this algorithm without any licence or patent restrictions, or not?
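For clarity, the whole routine I'm asking about is only a few lines; here is my own Python re-implementation, using the multiplier 214013 and increment 2531011 given on the Rosetta Code page linked above:

    def ms_rand(seed):
        state = seed & 0xFFFFFFFF
        while True:
            # state = state * 214013 + 2531011 (mod 2^32); output is bits 16..30
            state = (state * 214013 + 2531011) & 0xFFFFFFFF
            yield (state >> 16) & 0x7FFF

    gen = ms_rand(42)
    print([next(gen) for _ in range(3)])  # identical sequence on every platform

That snippet is the entirety of what I would be shipping.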
I can't use the library rand() because I need my results to be exactly reproducible on different platforms"} {"_id": "179894", "title": "Rails-API gem, Is there such thing as an API only application?", "text": "I've built a few APIs using the complete Rails stack. In each project there have been multiple uses for Rails core features. Each of the APIs has had management screens for monitoring usage, managing authentication keys, etc. Is there such a thing as an API without a management front end?"} {"_id": "195868", "title": "Why use typedefs for structs?", "text": "In C (ANSI, C99, etc.), structs live in their own namespace. A struct for a linked list might look something like this: struct my_buffer_type { struct my_buffer_type * next; struct my_buffer_type * prev; void * data; }; It seems quite natural, however, for most C programmers to automatically typedef those structs like the following typedef struct tag_buffer_type { struct tag_buffer_type * next; struct tag_buffer_type * prev; void * data; } my_buffer_type; And then reference the struct like a normal type, i.e. `get_next_element(my_buffer_type * ptr)`. Now my question is: Is there a specific reason for this? Wikipedia says http://en.wikipedia.org/wiki/Typedef#Usage_concerns > Some people are opposed to the extensive use of typedefs. Most arguments > center on the idea that typedefs simply hide the actual data type of a > variable. For example, Greg Kroah-Hartman, a Linux kernel hacker and > documenter, discourages their use for anything except function prototype > declarations. He argues that this practice not only unnecessarily obfuscates > code, it can also cause programmers to accidentally misuse large structures > thinking them to be simple types.[4] > > Others argue that the use of typedefs can make code easier to maintain. K&R > states that there are two reasons for using a typedef. First, it provides a > means to make a program more portable. Instead of having to change a type > everywhere it appears throughout the program's source files, only a single > typedef statement needs to be changed. Second, a typedef can make a complex > declaration easier to understand. I personally wonder whether the benefit of having the separate `struct` namespace is reason enough to sometimes not use typedef'd structs, and, since there are several C programming cultures around (Windows C programming has different traditions than Linux C programming, in my experience), whether there are other traditions that I am not aware of. I am also interested in historical considerations (predecessors, first versions of C)."} {"_id": "164094", "title": "Why does Javascript use JSON.stringify instead of JSON.serialize?", "text": "I'm just wondering about _\"stringify\"_ vs _\"serialize\"_. To me they're the same thing (though I could be wrong), but in my past experience (mostly with asp.net) I use `Serialize()` and never use `Stringify()`. I know I can create a simple alias in Javascript, // either JSON.serialize = function(input) { return JSON.stringify(input); }; // or JSON.serialize = JSON.stringify; http://jsfiddle.net/HKKUb/ but I'm just wondering about the difference between the two and why stringify was chosen.
* * * For comparison purposes, here's how you serialize XML to a String in C#: public static string SerializeObject<T>(this T toSerialize) { XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType()); StringWriter textWriter = new StringWriter(); xmlSerializer.Serialize(textWriter, toSerialize); return textWriter.ToString(); }"} {"_id": "164096", "title": "Naming variables with fixed-point units", "text": "What should I name a variable that has units with a fixed point? int herpLimitLo_psig2 = 6000; // 60.00 psig int derpLimitLoPsigwithtwodigits; int herpLimitHiPsigfixedtwo; int herpLimitHi_psig_timesOneHundred; Apparently I suck at naming things."} {"_id": "238256", "title": "Use Generic or Specific Function names for similar objects", "text": "Is there a school of thought on putting focus on using generic names for functions, as opposed to naming functions based on the things they do? **Example** Say we have a `Bill / Price Sheet` object that has line items Bill: * Shipping Charge $5.00 * Crate Charge $6.00 * Tax Charge $8.00 Generic function names would be: //function's code knows how to add a line item bill->addLineItem(\"Shipping Charge\", 5); bill->addLineItem(\"Crate Charge\", 6); bill->addLineItem(\"Tax Charge\", 8); Specific function names would be: //function's code reflects knowledge of this specific line item bill->addShippingCharge(5); bill->addCrateCharge(6); bill->addTaxCharge(8); This can be extended to any set of similar actions where something similar is being done. A more generic function takes the actual parameter, and can be deemed more flexible - any new line item can be added using the same function. A more specific function can only be used for that particular line item. Adding a new type of line item requires a new function. Something tells me that the specific way is preferred, but I can't verbalize why, and maybe I am wrong. But perhaps it makes the code more clear as to what this line item is intended to do. So if specific naming is superior, and I am writing code for a restaurant with 600 items on the menu (such restaurants do exist), I am not sure I want to have 600 unique functions for each menu item. But in that case maybe a different approach is warranted. So what I am asking here is: if I have a reasonable class of items (e.g. 10-20) for my specific possible line item list, would you recommend any particular approach from the ones described (specific vs general), or does it all depend on my project specifics?"} {"_id": "114333", "title": " Is there a general pattern that could be used to describe Data Migration?", "text": "In our company we have to retire a piece of software and introduce a new one. I have been given the task of overseeing the whole data migration. We use 3 different data sources (each containing different kinds of information), whose records have to be merged into a single target DB. Since the new DB has a different schema, some mapping steps are also required to reconcile data from the two environments. Technically everything is clear, but I have to describe the migration process in a \"proof of concept\". **Is there a general pattern that could be used to describe data migration?** I will describe the involved systems, which data will be moved, the mapping tables used and the planned tests to ensure data quality. However, can anyone suggest a different approach?
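For instance, each mapping step could be documented as a concrete, repeatable transformation. An illustrative SQL sketch with invented schema and table names:

```sql
-- One mapping step, stated concretely: copy source records into the target
-- schema, translating old codes via a mapping table (all names illustrative).
INSERT INTO target.customer (id, name, segment_id)
SELECT s.cust_id,
       s.cust_name,
       m.target_segment_id
FROM   source_a.customer s
JOIN   map_segment m ON m.source_segment_code = s.segment_code;
```

Writing each step in this form makes the proof of concept testable: a row count and a spot-check query per step verify data quality.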
Maybe that way I can spot whether I am missing something in my documentation."} {"_id": "238254", "title": "JavaScript MVC application design (canvas)", "text": "I'm having difficulty grasping how to structure/architect a canvas application using an MVC-like approach in JavaScript. UI will be fairly fluid and animated, the games fairly simplistic but with heavy emphasis on tweening and animation. I get how MVC works in principle but not in practice. I've googled the buggery out of this, read an awful lot, and am now as confused as I was when I started. Some details about the application area: * multi-screen game framework - multiple games will sit within this framework; common UI \"screens\" include: settings, info, choose difficulty, main menu etc. * multiple input methods * common UI elements such as top menu bar on some screens * possibility of using different rendering methods (canvas/DOM/webGL) At the moment I have an AppModel, AppController and AppView. From here I was planning to add each of the \"screens\" and attach it to the AppView. But what about things like the top menu bar, should they be another MVC triad? Where and how would I attach it without tightly coupling components? Is it an accepted practice to have one MVC triad within another? i.e. can I add each \"screen\" to the AppView? Is \"triad\" even an accepted MVC term?! My mind is melting under the options... I feel like I'm missing something fundamental here. I've got a solution already up and running without using an MVC approach, but have ended up with tightly coupled soup - logic and views are currently combined. The idea was to open it up and allow easier change of views (e.g. swapping out a canvas view with a DOM-based view). Current libraries used: require.js, createJS, underscore, GSAP, hand-rolled MVC implementation. Any pointers, examples etc., particularly with regards to the actual design of the thing and splitting the \"screens\" into proper M, V or C, would be appreciated. ...or a more appropriate method other than MVC _[NB, if you've seen this question before it's because I asked it in 2 other incorrect stackexchange communities... my brain has stopped functioning]_"} {"_id": "238252", "title": "Jenkins: .NET website XML transforms, but no .sln or build file", "text": "We're setting up Jenkins for CI and deployments. We have a .NET website that isn't managed through Visual Studio. It's a Logi Analytics application, so we develop exclusively through LogiStudio. As such, there are no build files (.sln). However, there are web.config and other XML files that need to change depending on the environment they are deployed in. Slow Cheetah looks like a good solution for transforming non-web.config XML files, but again the problem is we aren't managing this as a Visual Studio solution, ergo no .sln file. In the past I would have used NAnt for this kind of thing, but I like the MS convention used by Slow Cheetah for defining transformations. I'd hate to have to put the application in a solution just to make this work, and I'm hoping to get away from NAnt since I'm the only one around here that is comfortable using and maintaining NAnt scripts. Are there any great options out there that I'm missing?"} {"_id": "238253", "title": "Mimicking a Bluetooth disconnection", "text": "I've written a program to control a Bluetooth device. I'm trying to test cases when the Bluetooth disconnects, i.e. if it's out of range.
Physically taking the device out of range is one possibility, but it's quite cumbersome and I have to go outside my office to achieve this. What can I do to trigger a disconnection? Is there, for example, an interferer I can set up, say with an Android phone, that would make the connection drop? Or limit the Bluetooth transmit power? Any other possibilities?"} {"_id": "114338", "title": "Why are exception specifications bad?", "text": "Back in school some 10+ years ago, they were teaching you to use exception specifiers. Since my background is as one of them Torvaldish C programmers who stubbornly avoids C++ unless forced to, I only end up in C++ sporadically, and when I do I still use exception specifiers since that's what I was taught. However, the majority of C++ programmers seem to frown upon exception specifiers. I have read the debate and the arguments from various C++ gurus, like these. As far as I understand it, it boils down to three things: 1. Exception specifiers use a type system that is inconsistent with the rest of the language (\"shadow type system\"). 2. If your function with an exception specifier throws anything else except what you have specified, the program will get terminated in bad, unexpected ways. 3. Exception specifiers will be removed in the upcoming C++ standard. Am I missing something here or are these all the reasons? My own opinions: Regarding 1): So what. C++ is probably the most inconsistent programming language ever made, syntax-wise. We have the macros, the goto/labels, the horde (hoard?) of undefined-/unspecified-/implementation-defined behavior, the poorly-defined integer types, all the implicit type promotion rules, special-case keywords like friend, auto, register, explicit... And so on. Someone could probably write several thick books about all the weirdness in C/C++. So why are people reacting against this particular inconsistency, which is a minor flaw in comparison to many other far more dangerous features of the language? Regarding 2): Isn't that my own responsibility? There are so many other ways I can write a fatal bug in C++, why is this particular case any worse? Instead of writing `throw(int)` and then throwing Crash_t, I may as well claim that my function returns a pointer to int, then make a wild, explicit typecast and return a pointer to a Crash_t. The spirit of C/C++ has always been to leave most of the responsibility to the programmer. What about advantages then? The most obvious is that if your function tries to explicitly throw any type other than what you specified, the compiler will give you an error. I believe that the standard is clear regarding this(?). Bugs will only happen when your function calls other functions that in turn throw the wrong type. Coming from a world of deterministic, embedded C programs, I would most certainly prefer to know exactly what a function will throw at me. If there is something in the language supporting that, why not use it? The alternatives seem to be: void func() throw(Egg_t); and void func(); // This function throws an Egg_t I think there is a big chance that the caller ignores/forgets to implement the try-catch in the second case, less so in the first case. As I understand it, if either one of these two forms decides to suddenly throw another kind of exception, the program will crash. In the first case because it isn't allowed to throw another exception, in the second case because nobody expected it to throw a SpanishInquisition_t and therefore that exception isn't caught where it should have been.
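To make the first case concrete: under the pre-C++11 rules (dynamic exception specifications were deprecated in C++11 and removed in C++17), violating a specification does not propagate the exception to a handler; it calls std::unexpected(), which by default calls std::terminate(). A minimal sketch:

```cpp
#include <iostream>

// Pre-C++11: the specification promises that func throws only int.
void func() throw(int)
{
    throw 3.14; // violates the specification at runtime
}

int main()
{
    try {
        func();
    } catch (double) {
        // Never reached: instead of unwinding to this handler, the runtime
        // calls std::unexpected(), which by default calls std::terminate().
        std::cout << "caught double\n";
    }
    return 0;
}
```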
In the case of the latter, to have some last resort catch(...) at the highest level of the program doesn't really seem any better than a program crash: \"Hey, somewhere in your program something threw a strange, unhandled exception.\" You can't recover the program once you are that far from where the exception was thrown; the only thing you can do is exit the program. And from the user's point of view they couldn't care less if they get an evil message box from the OS saying \"Program terminated. Blablabla at address 0x12345\" or an evil message box from your program saying \"Unhandled exception: myclass.func.something\". The bug is still there. * * * With the upcoming C++ standard I'll have no other option but to abandon exception specifiers. But I would rather hear some solid argument why they are bad, rather than \"His Holiness has stated it and thus it is so\". Perhaps there are more arguments against them than the ones I listed, or perhaps there is more to them than I realize?"} {"_id": "160732", "title": "Function declaration as var instead of function", "text": "More and more I'm seeing functions being declared like var foo = function() { // things }; instead of how I had learned, like function foo() { // things } What's the difference? Better performance? Scope? Should I be using this method?"} {"_id": "231101", "title": "Handling internal IT issues", "text": "In the company I work at, developers also serve as internal tech support. This means we have to do three types of things: * **Development:** writing actual code to implement new features and new products * **Maintenance:** handle bugs and minor changes with existing customer-facing software * **Tech support:** handle internal issues other employees have, e.g. being locked out of AD, a network drive is missing, the internet is down, new software installation, permissions changes, etc. The tech support aspect of my job is the most troubling right now. This is because: * someone with a problem will likely interrupt you immediately, taking you away from development * problems come in too many ways: email, Google Chat, phone call, Post-it notes on your desk, etc. * people often cannot or do not determine the severity or scope of their problem before asking for help, i.e. simple problems that inconvenience one person might come in the same way -- and often with the same urgency -- as major customer-facing issues that could affect thousands **My question is: What are the proven methods for dealing with internal IT issues?** Should we use software designed for this purpose? If so, how should it be separated from any software used for customer-facing issues or bug tracking software? How can the IT/development department ensure it's used properly? Also, is it a problem that developers handle these internal problems in the first place? Would it be ideal to have a separate person or department to handle these types of problems?"} {"_id": "108518", "title": "Sequel vs S-Q-L", "text": "> **Possible Duplicate:** > What's the history of the non-official pronunciation of SQL? I hear it every so often, \"In sequel server...\", and for some reason I cringe every time. Maybe it's because SQL doesn't mean sequel, it means Structured Query Language. However, I hesitate to mention anything because it is a little bit of nitpicking, after all. I do see the resemblance between SQL and sequel, but it's still wrong, is it not?
Where does this way of phrasing come from?"} {"_id": "156872", "title": "PHP function __autoload($class_name): how to load two class path directories", "text": "I am using the following function to autoload classes; it works fine if I am using only one directory called 'classes'. However, when I try to use the Smarty lib as well, it fails and gives me the error Fatal error: Class 'Database' not found in /home/... For example: require_once(DOC_ROOT.\"/libs/Smarty.class.php\"); function __autoload($class_name) { require_once( CLASS_LIB.\"/\".$class_name . '.class.php'); } $db = new Database(); $session=new Session(); $smarty = new Smarty(); But if I do this it gives me the error that it's unable to load the Smarty class: function __autoload($class_name) { require_once( CLASS_LIB.\"/\".$class_name . '.class.php'); require_once(DOC_ROOT.\"/libs/Smarty.class.php\"); } $db = new Database(); $session=new Session(); $smarty = new Smarty(); Warning: require_once(/home/.../classes/Smarty_Internal_TemplateCompilerBase.class.php) [function.require-once]: failed to open stream: No such file or directory in /home/.../includes/init.php Any idea what I am doing wrong here? I need to be able to load the classes directory automatically but need to make sure I don't lose the Smarty path too."} {"_id": "252052", "title": "How to validate information on server without using database or session", "text": "Each user has multiple sites they can access reporting data for in an application I am working on. To prevent having to go to the database on every single request, I validate that they have access to the site only when they change sites, and I then store the current site id in the session. **I am trying to eliminate session state so that my async AJAX requests are not synchronized and also so that the user can have a different site open on each browser tab. I also don't want to go back to calling the database on every request to validate that the user has access to the given site making a request.** I've seen implementations where people will encrypt the id on the client, but I'm not sure what would prevent a third party from looking over someone's shoulder (seeing the id on the query string perhaps) and then using that same id with their own login to make a request. **I have a few ideas:** 1) Encrypt the id with the person's authenticated user name as the seed... Then encrypt it again with some private key. When the request comes in I would decrypt with the private key, then try to decrypt with the current user name and get the id back. Or perhaps I would combine the user name with the id, like username@username.com_[SITEID], then encrypt that with the private key and split them to see if the current username matches the first part. The problem with this, though, is that it never really expires, so they could make a request in the future even if they have lost access, as long as they have the id around. 2) Similar to idea 1, but I would use the session id with encryption as a third key perhaps. The problem here, though, is that if the session expires and they leave a tab open, all the requests would fail from the tab that was left open even though a session is active. 3) Use a cache so that async requests are not affected and just store keys like username@username.com_[VALIDATED_SITE_ID], then see if the key exists when the request comes in and, if not, hit the database to establish the validation key.
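For reference, ideas 1 and 2 are usually implemented with an HMAC-signed token rather than encryption: signing makes tampering detectable, and an explicit expiry solves the "never expires" problem. A minimal C# sketch (all names are illustrative, and key management, encoding, and constant-time comparison are left out):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: issue "siteId|userName|expiresUtcTicks|signature" and verify it
// statelessly on each request, with no session and no database hit.
public static class SiteAccessToken
{
    public static string Issue(string userName, int siteId, byte[] serverKey, TimeSpan lifetime)
    {
        string payload = siteId + "|" + userName + "|" + DateTime.UtcNow.Add(lifetime).Ticks;
        return payload + "|" + Sign(payload, serverKey);
    }

    public static bool Validate(string token, string userName, int siteId, byte[] serverKey)
    {
        var parts = token.Split('|');
        if (parts.Length != 4) return false;
        string payload = parts[0] + "|" + parts[1] + "|" + parts[2];
        return parts[0] == siteId.ToString()
            && parts[1] == userName                                   // tied to the logged-in user
            && new DateTime(long.Parse(parts[2])) > DateTime.UtcNow   // not expired
            && Sign(payload, serverKey) == parts[3];                  // not tampered with
    }

    private static string Sign(string payload, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }
}
```

A shoulder-surfed token is useless with another login because the signed payload includes the user name, and a stale token dies at the expiry check.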
**Has anyone addressed this type of scenario where you need to validate that the user can make a specific type of request, yet you do it without session state or hitting the database every time a request is made?**"} {"_id": "252053", "title": "What's in the \"contrib\" folder?", "text": "Often open-source software projects have a folder called \"contrib\". For example, Django has one. What is it for?"} {"_id": "237923", "title": "Where should you put functions and variables that are only needed by one function in a class?", "text": "Say you have a Car class. Properties that make sense for a Car class might be: var make; var model; var year; var turnOn; // a function But the `turnOn` function is very complicated and ends up needing a static variable and a couple of sub-functions: var isTurningOn; // too specific to be a class variable function turnOn() { // 30 lines to start the engine // 30 lines to start the air conditioner // 30 lines to start the radio } So now we have a function that maybe isn't really suitable to be made into its own class, and it's polluting the Car class with a variable that only the `turnOn` function ever uses. Plus the function contains sections of code that should be made into their own functions. So what do you do in situations like this?"} {"_id": "110106", "title": "What is the proper way to implement the OnClickListener interface for many buttons", "text": "My Android Activity contains multiple buttons that all need an OnClickListener. I've seen lots of different ways of doing this, such as: * Implementing the interface in the activity class * Creating a separate class that implements the interface * Defining an anonymous inner class for each button. I've seen many examples of each approach. However, it's not clear to me why one approach would be used instead of another. Are the differences between these approaches stylistic or are there reasons that make one approach better?"} {"_id": "114481", "title": "How do mashups work with same-origin policy?", "text": "If JavaScript is only allowed to access scripts from the same domain, how can a website create mashups which must read and modify content from another domain?"} {"_id": "114483", "title": "How can I design an efficient moderation system for comments?", "text": "Here's the job I want to do: My project is a website where there will be a **lot** of comments. Those comments will be moderated: a moderator will connect, see comments and accept or refuse them. I need these things: * the comments won't have any \"moderator\" associated. Which implies that when a moderator connects, the system \"assigns\" the first 20 comments to this moderator, then sends them via an AJAX/JSON request * of course, this implies that those comments mustn't be assigned to other moderators How can I make a design so that I'll be sure \"getting the first 20 comments for a moderator\" can be an atomic operation? And there's another thing to think of (I don't know if it may have an impact or not on my problem): if there are, say, 10 moderators connected, one of them has 20 comments to validate, and this guy leaves his PC for the whole day. I may implement kind of a \"timeout\" for this moderator; then, when he tries to validate, a message will tell him \"sorry, this comment has already been moderated by another moderator\", and the AJAX script will fetch a fresh set of 20 comments.
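For reference, a standard way to make the "assign the first 20 comments" step atomic is to claim rows with a single UPDATE and then read back what was claimed; because the UPDATE is atomic, two moderators can never be assigned the same comment, and a timeout check handles the moderator who walks away. A MySQL sketch (table and column names are illustrative):

```sql
-- Claim up to 20 unassigned, pending comments for this moderator in one
-- atomic statement; rows already claimed (and not timed out) are skipped.
UPDATE comments
SET    moderator_id = :moderator_id,
       claimed_at   = NOW()
WHERE  status = 'pending'
  AND (moderator_id IS NULL OR claimed_at < NOW() - INTERVAL 1 HOUR)
ORDER BY created_at
LIMIT 20;

-- Then read back exactly the rows this moderator now owns.
SELECT id, body
FROM   comments
WHERE  moderator_id = :moderator_id
  AND  status = 'pending';
```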
My first idea was the simplest idea: * a moderator connects * a jQuery AJAX script asks the server for 20 comments * the server (1) immediately requests the first 20 free comments from the database * the server (2) \"freezes\" those 20 free comments * the server (3) makes up a JSON response and sends it But operations (1) and (2) are not atomic: what happens if, in between, another moderator asks for the first 20 free comments? They will both get the same first 20 free comments. That's not a good solution. I think this is a classic problem of atomic operations on a DB. Any idea how to design this (or where to look)? Thank you very much indeed!"} {"_id": "89569", "title": "How to gradually improve on an old IBM Host C++ framework", "text": "Imagine a C++ framework as a business layer, but with controlling logic as well, e.g. user rights, state of the webflow, etc., with some plain C code to access DB2. The whole framework is host-based. We are looking for a way to replace the existing framework gradually with a Java framework. The main challenges are: the application is already huge, there are several application clients connected to the C++ framework at the same time (it's one web application with many instances), and my knowledge of what is possible today with Java on the host is very limited (since all the people I work with stick with C++). Do some good ideas exist for gradually moving the old framework without breaking existing code? I guess as an entry point one would expect a web service or RMI server for transactions. But how to replace the business/controlling logic gradually is still a black box to me."} {"_id": "89563", "title": "I am \"scared\" to learn a new language", "text": "I have been developing in C# for about the last 8 years. I feel that I am quite experienced and knowledgeable in the whole .NET platform. I have got to the point where I want to start learning Objective-C. I have been watching dev videos, reading books and researching online. So far I am getting a good idea of the development principles. BUT. I have not written one line of code. For some silly reason I have some type of mental block about writing code. I keep thinking: but what if I am not as good as when I do C# coding? This leaves me in an endless spiral where I cannot code. If you have been in this kind of situation, what was the best way to get out of it?"} {"_id": "89562", "title": "What does the term day-one-bug mean?", "text": "I came across this term recently in a mail chain. Google tells me there is a term zero-day bug and that Microsoft and Adobe are the frontrunners :) Is there such a term as **day one bug**? What might that mean?"} {"_id": "127262", "title": "Do more object declarations affect the program?", "text": "I am programming in Windows Forms and MySQL.
If I declare this in the program, I can use the connection and command objects in the whole .cs page: MySqlConnection connection = null; MySqlCommand command = null; MySqlDataReader Reader; But if I write the code below, connection and command objects are declared twice: private void cmbo_class_SelectedValueChanged(object sender, EventArgs e) { string clas = cmbo_class.SelectedItem.ToString(); try { MySqlConnection connection = new MySqlConnection(hp.myConnStr); MySqlCommand command = connection.CreateCommand(); MySqlDataReader Reader; command.CommandText = \"select id,code from reg_class_master where name ='\" + clas + \"' and Delete_Status=0\"; connection.Open(); Reader = command.ExecuteReader(); while (Reader.Read()) { class_id = Convert.ToInt32(Reader[0].ToString()); class_code = Reader[1].ToString(); } connection.Close(); } catch(Exception ex) { MessageBox.Show(\"Error: \" + ex.Message,\"Information\",MessageBoxButtons.OK,MessageBoxIcon.Information); } cmbo_division.Items.Clear(); try { MySqlConnection connection = new MySqlConnection(hp.myConnStr); MySqlCommand command = connection.CreateCommand(); MySqlDataReader Reader; command.CommandText = \"select name from reg_division_master where class_id = \" + class_id + \" and Delete_Status=0 order by display_index\"; connection.Open(); Reader = command.ExecuteReader(); while (Reader.Read()) { cmbo_division.Items.Add(Reader[0].ToString()); } connection.Close(); } catch { } I think one object declaration is the best way. Do more object declarations affect the program?"} {"_id": "207696", "title": "Does having more classes necessarily increase the memory requirements of the app?", "text": "When we add .edmx files to a DLL, the physical size of the DLL increases. DLLs are loaded into memory. However, the .NET infrastructure, with functionality such as JIT compilation and the GAC, complicates things. Is it true that having a larger DLL automatically increases the memory requirements of an application? In other words, are memory concerns a good reason to split generated code such as Entity Framework into N DLLs?"} {"_id": "127264", "title": "Am I misusing instance initialization?", "text": "I often write tests where I need to set up data objects for integration tests involving a few pages with forms. Herein, each form on a page could be represented by a different data object class. There are times when tests end up filling in the same data for a specific page while varying it for other pages and validating the overall flow. I thought that instead of setting the data object each time from the test method, I could take advantage of an instance initialization block to set data for tests. Herein I have listed a smaller version of the tests I usually write - This is my data class which holds data for form values - textbox, drop down etc. I have not kept setters for the sake of brevity - public class DataClass { private int i; private int j; public DataClass() { i=1; j=1; } public int getI(){ return i; } public int getJ(){ return j; } } Following is the helper class which is used by the test class to fill form data on a page - public class WorkerClass { private DataClass data; public void setData(DataClass data){ this.data = data; } public int add() { return data.getI()+data.getJ(); } public int substract() { return data.getI()-data.getJ(); } } And here is the test method (a conventional setup-method alternative is sketched below, just before the test class).
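For comparison, the conventional alternative to an instance initialization block in tests is a per-test setup method. A minimal sketch, assuming JUnit 4 (with TestNG the annotation would be @BeforeMethod instead):

```java
import org.junit.Before;
import org.junit.Test;

public class TestClass {
    private DataClass data;
    private WorkerClass workerClass;

    @Before // runs before every @Test method, same effect as the init block
    public void setUp() {
        data = new DataClass();
        workerClass = new WorkerClass();
        workerClass.setData(data);
    }

    @Test
    public void testAddition() {
        // with i = 1 and j = 1, add() returns 2
        assert workerClass.add() == 2 : "addition failed";
    }
}
```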
Notice that instead of calling setData of WorkerClass in each test method, I used an instance initialization block - public class TestClass { private DataClass data = new DataClass(); WorkerClass workerClass = new WorkerClass(); { workerClass.setData(data); } @Test public void testAddition() { assert workerClass.add()==2:\"addition failed\"; } @Test public void testSubstraction() { assert workerClass.substract()==0:\"substraction failed\"; } } My question is, am I misusing the instance initialization block, and should I rather call setData from each test method, even if all the test methods require the same set of data?"} {"_id": "127268", "title": "Is it worth listing testing or self-learning repositories on my résumé?", "text": "I have a GitHub repository with toy programs that I write when I learn something. For example, when I read about an algorithm or data structure, I write up a quick implementation of it to make sure that it works and I understand it. I sometimes solve algorithm and data structure puzzles and that gets pushed into the repository. Would this repository be worth linking on my résumé, or would it actually be a detriment to my chances of getting hired?"} {"_id": "48976", "title": "What is the biggest weakness of students graduating with degrees in Computer Science?", "text": "This question is directed more toward employers and graduate student advisors/professors, but all opinions are welcome. What do you find is a common weakness of new hires and/or new grad students? Is it entirely variable, dependent on the student and his or her university? Is there a particular skill or skillset that you wish new hires/researchers had expertise in, and how can we remedy this deficiency? I realize that this question is general and really encapsulates two questions, one more about the weaknesses of new software engineers and one about the weaknesses of new researchers. However, both types of people tend to come from similar courses of study, so I'm wondering if there is any overlap. Note: I am not a professor, but I'm interested in how best to revise the undergraduate curriculum in CS."} {"_id": "129855", "title": "How can I gauge a candidate's ability to learn in an interview?", "text": "We are hiring for a small development department within a medium-sized company for which software is not the main line of business. As such, we are attempting to recruit what we have labeled a _Senior Programmer_. The goal is to find someone that can design, implement and maintain entire new and existing systems from the database through to the front-end. Regardless of a candidate's claim to experience (read: massively spun CVs), or the results of the technical test, what I really care about is their ability to learn and the speed at which they will pick up technologies or concepts they are not familiar with to fill any gaps they might have in their knowledge. How can I go about getting an idea of a candidate's ability to learn (or speed of learning)?"} {"_id": "129859", "title": "How is Python used in the real world?", "text": "I'm looking to get a job as a Python programmer. I know the basics of the language and have created a few games with it using pygame. I've also started to experiment with Django. However, looking at the job market, it doesn't seem that very many Python jobs are web-related. On the desktop side of things, it doesn't seem like very many companies use the popular GUI libraries like PyQt or wxPython. How are companies actually using Python?
What areas should one focus on to land a job as a Python programmer?"} {"_id": "26885", "title": "What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?", "text": "I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with using JMS-based messaging systems, such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service Specification from the Object Management Group, and found there are a couple of open-source implementations: OpenSplice and OpenDDS. It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and what-not. My current interest is more along the lines of notifications related to activity stream processing (think Twitter / Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is, and what scenarios it is best suited for? How does it stack up against more \"traditional\" JMS servers, and/or AMQP (or even STOMP or OpenWire, etc.)? Edit: FWIW, I found some information at this StackOverflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link."} {"_id": "223359", "title": "Use null object as argument to method", "text": "Consider the following piece of code class Foo { public: //... bool valueFirstGet(int& value) const { if(this==nullptr) {return 0;} value=values[0]; return 1; } //... private: int* values; size_t n_values; }; int main() { Foo* obj=findObject(\"key\"); int value; if(!obj->valueFirstGet(value)) {printf(\"key does not exist\\n\");} return 0; } findObject returns nullptr if it cannot find the object. Is it ok to let the member function do the null check instead of its caller? In my code, there are several calls to findObject directly followed by a call to valueFirstGet, so leaving the check to the caller makes the code ugly. EDIT: Is there a cleaner way to avoid all null checking besides having findObject throw an exception instead of returning null? EDIT 2: What about a static wrapper?"} {"_id": "163833", "title": "Best Method of function parameter validation", "text": "I've been dabbling with the idea of creating my own CMS for the experience and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would contain information about what the parameters should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then in every function I would still need to run some sort of parameter validation that parses the parameter name to determine what the value can be and then validates against it. Granted, this could be handled by passing the list of parameters to a function, but that still needs to happen, and one of my goals is to remove the parameter validation from the function itself, so that you only have the actual function code that accomplishes the intended task without the additional code for validation.
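One shape this can take, sketched in PHP with invented names: a spec-driven validator called once at the top, so the function body itself stays clean.

```php
<?php
// Spec-driven parameter validation, applied before the real work runs,
// so the function body stays free of validation code (illustrative sketch).
function validate(array $args, array $spec) {
    foreach ($spec as $name => $type) {
        if (!array_key_exists($name, $args)) {
            throw new InvalidArgumentException("missing parameter: $name");
        }
        switch ($type) {
            case 'int':    $ok = is_int($args[$name]);    break;
            case 'string': $ok = is_string($args[$name]); break;
            case 'bool':   $ok = is_bool($args[$name]);   break;
            default:       $ok = false;
        }
        if (!$ok) {
            throw new InvalidArgumentException("$name must be $type");
        }
    }
}

// The function itself then contains only its real work:
function createPage(array $args) {
    validate($args, ['title' => 'string', 'visible' => 'bool']);
    // ... actual page-creation logic only ...
}
```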
Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function anyway, and I should stick with doing that?"} {"_id": "195750", "title": "What is the best way to understand code in a project with null documentation?", "text": "It is our first game and we are a start-up. We had a programmer who suddenly seems to be dead weight. Though we knew him personally and thought that he was as motivated as we are, I never asked for a code review or documentation. Now it seems that he is slacking a lot and we want him out. During a conversation he said that he would not document the code, so that we would be dependent on him. I looked at the code and there is no documentation. The code is all over the place. Details of the code: * Written in C# and targeting a mobile platform using Unity3D. * The gameplay mechanic and camera scripts are the main ones, and they have no documentation. * Too many Send/Broadcast messages, so I am not able to see the references through Mono. * Again, hefty use of strings in either invoking methods or passing parameters. * Update functions everywhere, which could have been reduced to coroutines. * Naming is poor and inconsistent. * Absolutely no pooling, with heavy usage of destroy and instantiation. I am expecting memory leaks. What protocols should I adopt to understand the code and work on top of it?"} {"_id": "117686", "title": "Why are brackets required for try-catch?", "text": "In various languages (Java at least, think also C#?) you can do things like if( condition ) singleStatement; while( condition ) singleStatement; for( var; condition; increment ) singleStatement; So when I have just one statement, I don't need to add a new scope with `{ }`. Why can't I do this with try-catch? try singleStatement; catch(Exception e) singleStatement; Is there something special about try-catch which requires always having a new scope or something? And if so, couldn't the compiler fix that?"} {"_id": "163835", "title": "Get system info from C program?", "text": "I'm writing a little program in C that I want to use to output some system stats to my HD44780 16x2 character display. The system I'll be working with is a Debian ARM system and, although irrelevant, the display is on the GPIO header. (The system is a Raspberry Pi.) As an initial (somewhat unambitious) attempt, I'd like to start with something simple like RAM and CPU usage (I'm new to C). I understand that if I make external command calls I need to fork() and execve() (or some equivalent that will let me return the results); what I would like to know is how I go about getting the information I want in a nice clean format that I can use. Surely I will not have to call (for example) free -h and then use awk or similar to chop out the piece I want? There must be a cleaner way. The question should be seen as more generic: what is the best practice for getting info about the system in C (RAM/CPU usage are just an initial example)?"} {"_id": "117689", "title": "Asp.Net MVC CMS or Shopping Cart or any other type of System", "text": "I have been learning ASP.NET MVC for a while now and have gone through numerous tutorials, ranging from MVC Music Store to Code-First to Custom Membership to Data Annotations. In essence, I have a firm grip on how things work in this new framework. But there are a few things which I am not able to work out and understand on my own (one of them, pagination, is sketched right below).
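A minimal sketch of the usual LINQ-based pagination in an ASP.NET MVC action (entity and context names are illustrative):

```csharp
// A typical paginated controller action using LINQ's Skip/Take.
public ActionResult Index(int page = 1, int pageSize = 10)
{
    var posts = db.Posts
                  .OrderByDescending(p => p.CreatedOn) // stable order first
                  .Skip((page - 1) * pageSize)         // skip earlier pages
                  .Take(pageSize)                      // take one page
                  .ToList();

    ViewBag.Page = page;
    ViewBag.TotalPages = (int)Math.Ceiling(db.Posts.Count() / (double)pageSize);
    return View(posts);
}
```

The view then renders page links from ViewBag.Page and ViewBag.TotalPages.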
Stuff like pagination, product catalogs, AJAX, and some other things. Therefore I was thinking of learning these aspects via an open-source CMS, shopping cart, or any other type of system. This way I can see the code and understand how these things can be achieved in ASP.NET MVC. Things were easier in regular ASP.NET, as we weren't dealing with the HTML directly, so I am a bit sluggish in these departments and need help in small matters like pagination. **So, are there any available open-source ASP.NET MVC CMSs or any other type of system utilizing Razor?**"} {"_id": "193345", "title": "Multiple threads and single output source", "text": "For the sake of an exercise, say you have an input file with a series of lines of text, with the objective of reversing their respective character sequences. Now introduce 5 threads that will each do the reversing, thread 1 taking care of line 1, thread 2 taking care of line 2 and so on. If the aim is to save these reversed lines in order, how would you save them to the same file? P.S.: I've thought of having some kind of queue, but there is no guarantee on the order. Also, I'm wondering if there's some way to lock the file temporarily and have a thread wait its turn. I imagine I should keep track of which line is being saved."} {"_id": "161060", "title": "Abstracting a zip as a filesystem - C++", "text": "I would like to access a zip in read and write mode without decompressing it on disk: what options do I have? I need to perform the usual IO actions reserved for a filesystem, like reading, writing new files, and deleting them. If it's possible I would like to stick with the zlib library."} {"_id": "7505", "title": "How to read and understand ER diagrams", "text": "I've been handed the ER diagram for a quite complex database. Complex to me at least: 849 tables. One of the architects says to understand the code, you need to understand the database and relationships. Now I am basically a Java programmer, hence not too familiar with this. How do I start? Should I start from something basic like USER or ORDER and see which relationships they have with other tables around them? Any tips or tutorials would help a lot."} {"_id": "92847", "title": "What would you define as sensitive user data?", "text": "A recent previous question of mine had an answer that sparked a different and unrelated question in my mind: Customer wants to modify the .properties files packaged in our WAR file The question that I thought of after reading this answer is: just how low-risk is the data being collected on people (non-users, let's just say, \"people\") in my application? * A first name and last name * Company or organization the person is currently employed at * (Optional) An email address * (Optional) A person's phone number * A photograph of the person's face * A digitally signed PDF document, physically signed with an electronic signature pad (a person's handwritten signature) No other sensitive data like social security numbers, credit card numbers, or anything that can identify a person with 100% accuracy. How sensitive would you rate the data types listed above? Is identity theft even remotely possible with the above information?
In light of all the recent news of successful hacks and data breaches, if such a thing were to happen to my application (assume that I have reasonable security measures: SSL, salted and hashed passwords, account lockout after so many failed attempts, etc.), what kind of response would be appropriate for my organization, in your opinion? Should every attempt be made to notify the people whose information has been compromised? Is it worth it? Thanks for sharing your thoughts."} {"_id": "7502", "title": "Is it worth it to translate an Android app into Spanish and other languages?", "text": "I have the \"user's side\" of the story; I think they like it better if it's in Spanish. But what about the programmers? Do you make your programs multi-language? Why? Why not? Who translates your software? Are you ok with paying somebody to translate your app, or do you prefer doing it yourselves? Is the benefit bigger than the cost?"} {"_id": "92845", "title": "Area calculation of irregular shapes", "text": "Is there any algorithm that can help split an irregular shape into regular shapes, and eventually calculate the area of the main object by summing the areas of those regular shapes?"} {"_id": "206001", "title": "Does a group of Select Statements count as a valid model?", "text": "I'm making a Facebook app and I'm trying to follow MVC properly. But I was wondering: if I had a class that was a bunch of FQL queries (getting data from Facebook), should I keep these in a controller or a model? According to CodeIgniter's user guide, \"Models are PHP classes that are designed to work with information in your database\". Since FQL interacts with Facebook's databases and not my own, would having a class full of what is basically just SQL SELECT statements count as a valid model?"} {"_id": "96548", "title": "What is the best way to learn the latest technology?", "text": "We have too many technologies in the IT world today. How can we best get the technical fundamentals clear conceptually? How do we find all the concepts available within a technology that are required to be learnt? Programming experience always matters, but I want to know the ideas, keeping experience aside. I mean, if I start developing a project on my own, how do I find out which latest technologies and concepts I should use? Please suggest."} {"_id": "191275", "title": "What problems can be solved using Generics?", "text": "I haven't used Generics in C# for a long while. Every time I think I need to use them I either go in the wrong direction and give up or find that I don't really need them. I feel that I'm missing out or ignoring a technique which could be useful and powerful. I'm aware of how generics are meant to be used (or at least I think I am). If you have three methods that are all doing the same thing with different types, then using generics enables you to have just one method that will work with these three types. EDIT (in response to comment) **Are there any code smells or patterns that I should be looking out for?** Note I use `List<>` a lot, so I'm not ignoring generics in the standard library. It's just that I've not written custom code using Generics for a while."} {"_id": "191276", "title": "Audit trails and recording actions", "text": "**Background** A discussion that has come up at work recently is how we handle audit logging and the recording of events. We are integrating with a 3rd party app, so triggers are a no-no from the off, so we are handling it in code.
We've written a number of prototype components for handling it, but nothing feels right as yet. The main issue is that we want to create Facebook-style timelines for the users to see what actions have happened recently, but these don't seem to fit well with how we record audits. My question is: how would it be best to handle this type of scenario? * Should we tailor the audit log tables to fit the requirements of the front end? * Should we have separate tables to handle the \"Actions\" and keep the events and auditing separate? * Should we look to a more message-based architecture, so this will be more like an event-sourcing-type component? Input from somebody who has done this type of system would be much appreciated."} {"_id": "127483", "title": "Want to do some damage control", "text": "I applied for a job in a company. The initial conversation went well. They sent me an assignment and asked me to write a web application and send it within a specified time. It seemed fairly simple. I completed and sent it to them. Now my mind is completely revolving around the code that I wrote, and I feel that there was one small requirement which I did not fulfill. Before they give the verdict, shall I be proactive and admit it? They will definitely find it; I am certain of it. I can send them an email saying that I forgot to add that. What shall I do? Is there anything I can do for damage control?"} {"_id": "158080", "title": "For Which Kind of Cache will this Program not run well? How can it be optimized?", "text": "Consider the following code fragment: int a[8192], b[8192], c[8192]; int i; for(i = 0; i < 8192; i++) c[i] = a[i] + b[i]; For what kind of cache will this program not run well (direct-mapped, set-associative, fully associative)? I've been thinking about how to optimize this, but can't really see how it can be done. How can I change this code to have fewer cache misses?"} {"_id": "255878", "title": "Correct terminology in type theory: types, type constructors, kinds/sorts and values", "text": "In an answer to a previous question, a small debate started about correct terminology for certain constructs. As I did not find a question (other than this or that, which is not quite the right thing) to address this clearly, I am making this new one. The terms in question and their relationships are: _type, type constructor, type parameter, kinds or sorts, and values_. I also checked Wikipedia for type theory, but that didn't clarify it much either. So for the sake of having a good reference answer and to check my own understanding: * How are these things defined properly? * What is the difference between each of these things? * How are they related to each other?"} {"_id": "105103", "title": "How and when do you scale Azure instances?", "text": "When dealing with Azure instances that are under-performing, how do you decide when to scale the instances? Do you scale horizontally by adding more instances or vertically by making the existing instances more powerful?"} {"_id": "194592", "title": "Function that modifies an argument, should I return the modified object?", "text": "We have a function that modifies a JS object by adding some custom properties to it; the function doesn't return anything: addTransaction: function (obj) { obj.transactionId = this.getTransactionId; obj.id = this.recordId; }, Somebody said they preferred that `addTransaction` return the `obj`.
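For reference, the non-mutating clone-and-return variant (the option weighed in the second bullet below) might look like this: a shallow-copy sketch keeping the question's own property names.

```javascript
// Non-mutating variant: build and return a copy, leaving the input untouched.
addTransaction: function (obj) {
    var copy = {};
    for (var key in obj) {
        if (obj.hasOwnProperty(key)) { copy[key] = obj[key]; } // shallow clone
    }
    copy.transactionId = this.getTransactionId;
    copy.id = this.recordId;
    return copy;
}
```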
Here's what I thought: * If I don't return anything (and document that the object is going to be modified), it's kind of clear that the object is going to be modified, as if the name were addTransactionToObj * If I do want to add a return value, I shouldn't modify the incoming object; I should clone the given object, add my properties to the clone and return the clone. * Having a return value that just returns one of the (modified) parameters just sounds wrong Does anybody have a preference in this matter?"} {"_id": "149303", "title": "Naming classes, methods, functions and variables", "text": "There are 3 important naming conventions: 1. `with_underscores` 2. `PascalCased` 3. `camelCased` Other variants are not important because they are not commonly used. For variables it seems that the one with underscores is the most used by developers, so I'll stick with that. I think it's the same for functions. But what about class and method names? Which of these 3 is the most used by developers for such constructs? (personally, it's 3. for methods and 2. for classes) Please do not post things like \"use what you feel is right\", because the code I'm writing is an API for other developers, and I'd like to adopt the most popular coding style :)"} {"_id": "105100", "title": "ASP.Net MVC ambiguous action methods - why the path chosen", "text": "Is there someone who could shed some light on why ambiguous methods are not allowed in ASP.NET MVC? I updated my question just to clarify it (using Carson63000's example from his answer), the question itself remains the same. So, let's say we have the following action /article/list/sport which calls public ActionResult List(string category) { } and want to be able to have the following action as well: /article/list/sport/20110904 which we would like to route to the accompanying method public ActionResult List(string category, DateTime date) { } Currently the framework throws an `AmbiguousMatchException` since it found two methods called List. That's the only reason for throwing the exception: that more than one method was found that matches the action's name. After the framework has figured out what method to use and decided that it is not ambiguous, it will then try to map the supplied data from the request to the parameters of the method. Why not do this the other way around? First get all applicable methods, and then try to find the best fit by mapping parameters. If ambiguous methods were allowed, I think the following is still possible to do. Given that `/article/list/sport` uses the route `/{controller}/{action}/{category}` and that `/article/list/sport/20110903` uses the route `/{controller}/{action}/{category}/{date}`, it would then still be possible to map data for the URL `/article/list/sport?date=20110903` since it will match the route for just category, but can still map the data for method `public ActionResult List(string category, DateTime date) { }`. At the moment I fail to see the reason for not allowing ambiguous methods from a design perspective. The only thing I can think of myself is that it would slow down the framework too much, and therefore the current solution is the more logical one. Or am I missing another point here?"} {"_id": "255873", "title": "Is it ever OK for a conditional to have side effects?", "text": "I'm taking an intermediate data structures course as a prereq for entry into the CS MS program at a University everyone in America has heard of. One line of code that was written in class caught my eye: `if (a > 33 | b++ < 54) {...}` This would never pass a code review at my workplace.
One line of code that was written in class caught my eye: `if (a > 33 | b++ < 54) {...}` This would never pass a code review at my workplace. If you wrote code like this in an interview, you would not be hired. This is not an exaggeration. (In addition to being a conditional with side effects, it's being clever at the expense of clarity.) In fact, I've never seen a conditional with side effects, and Googling doesn't turn up much, either. Another student stayed behind after class to ask about it, too, so I'm not the only one who thought this was weird. But the professor was pretty adamant that this was acceptable code, and that he would write something like that at work. (His FT job is as a Principal SWE at a company you've all heard of.) I cannot imagine a world in which this line of code would ever be acceptable, let alone desirable. Am I wrong? Is this OK? What about the more general case: conditionals with side effects? Are those ever OK?"} {"_id": "193695", "title": "Where to start in developing a game engine as a web app", "text": "I have to create a web application, preferably I would host that on Google App Engine. it is a multiplayer game, So it needs to be interactive. I am only familiar with C/C++ coding, and have started learning python. I have made multiplayer games before too(multi-player and single player) but with no GUI. Someone told me that the webapp interface would have to be written in JavaScript. And the back end could be in python or any other language. Please note, I have no issues in coding up the back-end, the game engine, of thew web app. Just that I do not know what are and how to, etc about web apps. And basically how to integrate the back end code into the web application (which I have to build from scratch) (I would also be writing the back end, so there is complete flexibility as of now) So please show me the light! I have absolutely no clue about where to start! Please tell me where to begin, and what all to read up which can quickly give me some insight into development work! How to go about the GUI. And what all possible language combinations can I use? The easier to use, the better."} {"_id": "149307", "title": "Seperable Kernel, MMX/SSE and TCP Transmittal of them?", "text": "So I was reading about Java Convolve and someone said that it may be faster than the MMX / SSE implementation. In it one of the comments had a kernal array and said it was seperable. 1. What is a seperable kernel? How is this useful for image processing? 2. What is MMX/SSE? The wikipedia page lists them as instruction sets. Are they specificially designed for image processing? 3. How would you transmit a MMX/SSE format (I assume) data set through TCP? Thanks."} {"_id": "69892", "title": "How do you read other's code?", "text": "Almost every advanced programmer says that it's very useful to read the code of other professionals. Usually they advice open source. Do you read it or not? If you do, how often and what's the procedure of reading the code? Also, it's a bit difficult for newbies to deal with SVN - a bunches of files. What's the solution?"} {"_id": "153967", "title": "How to stop a .NET application from being duplicated?", "text": "I have created a .net windows form application that I want to restrict from being duplicated. I want this application to be portable, so I would like to allow it to be moved. 
How would I be able to do this without managing it through a remote server?"} {"_id": "64867", "title": "Writing jenkins plugin: where is the documentation?", "text": "On my current project we're using Jenkins to monitor our builds. Now they want me to write a Jenkins plugin to add some more monitoring parameters. I've taken a look at how the status monitor plugin works, and I can't figure some things out. I've tried to look for documentation for writing a plugin, but that seems to be sorely lacking. (The site only mentions how to generate the base project, and refers to a tutorial that's not that informative.) What I'm trying to do is just add some options to each build, add a link, and a monitoring page. Adding to the main page is apparently done by adding the action, but I'm still trying to figure out the rest, and how it all ties in. Does anyone have any pointers, or a place where I can find some decent documentation?"} {"_id": "153961", "title": "Stuff to read up on pricing applications", "text": "I'm about to release an app and I have no idea what the ideal price point would be. I'm not sure how pricing high and selling few copies will compare in revenue to pricing low and selling lots of copies in my case. Can somebody point me to books/articles/blog posts/etc. that elaborate on the subject, preferably taking into account stuff like competition, number of features, being the first one to market, research into whether this kind of app is even needed, etc.?"} {"_id": "16463", "title": "Use switch statements for application flow?", "text": "I generally use switch statements to simplify a block of multiple if statements - for example, returning a feedback string to a user based on a multiple-choice input. I also tend to construct classes so that there is one \"management\" method, to avoid sequential steps or chained method invocation within other methods of the class. I've found this helps to keep methods flexible and focused - i.e. class MyClass{ // this method does nothing more than invoke the relevant method // depending on the status following the previous. It's entire purpose // is to control application flow public function manageFlow($input){ $status = $this->stepOne($input); if($status == false){ //exit routine } $status = $this->stepTwo($input); if($status == false){ //exit routine } } // this method has several sequential steps implemented directly within it, // for example, a user logging in // it makes it impossible to re-use any of the intermediary steps public function tangledFlow($input){ if($input == 'something'){ //100 lines of code } //then handing on to the next bit if($this->User->Authenticated){ //another 100 lines... } } } Then it occurred to me that I could use a switch statement to control this kind of sequential execution - so my question is: has anyone used a switch statement for this kind of flow control?"} {"_id": "121884", "title": "Object-Oriented Operating System", "text": "As I thought about writing an operating system, I came across a point that I really couldn't figure out on my own: Can an operating system truly be written in an Object-Oriented Programming (OOP) language? Being that these types of languages do not allow for direct accessing of memory, wouldn't this make it impossible for a developer to write an entire operating system using only an OOP language? Take, for example, the Android Operating System that runs many phones and some tablets in use around the world. I believe that this operating system uses only Java, an object-oriented language.
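For contrast, the kind of direct memory access referred to in the next paragraph takes only a line or two in C. A sketch (the address is invented; writing to an arbitrary address is undefined behavior unless the platform actually maps it, e.g. as a memory-mapped hardware register):

```c
#include <stdint.h>

/* Direct memory access in C: treat a fixed address as a hardware register.
 * Only meaningful on platforms where this address is actually mapped;
 * otherwise this is undefined behavior. */
#define STATUS_REG (*(volatile uint32_t *)0x40021000u)

void set_flag(void)
{
    STATUS_REG |= 0x1u; /* set bit 0 at a fixed physical address */
}
```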
In Java, I have been unsuccessful in trying to point at and manipulate a specific memory address that the run-time environment (JRE) has not assigned to my program implicitly. In C, C++, and other non-OOP languages, I can do this in a few lines. So this makes me question whether or not an operating system can be written in an OOP language, especially Java. Any counterexamples or other information is appreciated."} {"_id": "57197", "title": "Best way to estimate cost related to porting code from language A to language B?", "text": "Have a client thinking about estimating the cost of porting a project from language A to language B. What's the best way to put together a request for proposal to do this?"} {"_id": "16468", "title": "How should I approach doing similar work on the side to what I do at work during the day?", "text": "Currently got a good job, spending 70% of my day doing what I love; the downside of this is of course that I am working for someone else and not myself. Ideally I would love to get something of my own going, a night project etc which could trickle in some funds. The downside of this of course is that it would be a conflict of interest. I could of course just go ahead and start up my own little thing, but if it got discovered I could risk my job. I could quit my job, but then I would miss the income (family + mortgage) and my ideas might not pan out. Thinking about it tonight, I will discuss it with my boss tomorrow - he's a good guy but I just hope he doesn't bury it or just offer overtime. Anyone in this boat? Do what you love during the day (for someone else) and can't pursue it during the night.... Edit: Just to clarify, what I want to do at night is very similar to what I do during the day. It's not an 'I love programming' thing in general, but I like doing data mining - and that's what I do at work. Very closely related."} {"_id": "221820", "title": "Preventing a parser from turning into a (seemingly) god-sized object", "text": "So I have a program whose purpose is to take text files and parse them into a binary format that an embedded system understands. However, the text format I've inherited that I need to parse is sufficiently complex that after refactoring the main `parse` routine I'm left with a class with more than 50 methods that almost all look something like `parseChannel`, `parseWCommand`, `parseVCommand`, `parsePCommand`, `parseLoop`, `parseHex`, `parseInt`, etc. etc. etc. Needless to say, the class, from its declaration, looks huge and daunting. However, the methods to interface with the class are extremely simple, just `parse` (compile the text, figure out its compiled size) and `link` (fix the pointers once the location in memory is known, return the finalized raw binary data). Really, to the user the class has virtually no other reason for existence besides those two functions, but there's so much seemingly useless fluff in the class declaration that it's hard to even see what it's supposed to do. There's a similar situation going on with the rather massive number of data members that, again, are useless to everything but the class's internal methods that need them to talk to each other, though I don't know if that's as much of an issue. I've considered making a separate class solely for parsing that's used within the `parse` method, but it seems strange to me to build an entirely separate class that's only used in a single method in a single class. It seems a bit...superfluous, I guess? And I don't even know if that's attacking the right problem.
I guess in the end, here's what I'm asking: 1. Is this seemingly-huge class actually a problem? I don't think it's strictly a \"god object\", but it superficially looks like one. 2. If it is a problem, what are the best method(s) to fix it?"} {"_id": "214314", "title": "How do I limit the resources available to a program?", "text": "I'm delving into multi-threaded programming with Java, but I'm finding it hard to test my program for bugs. My computer simply has too many resources and cores, making it hard to see how my program acts under stress. Is there any way to limit the resources available to my program? It seems ridiculous that my only option right now is to open another application and render HD video in order to starve out the application I'm debugging."} {"_id": "61989", "title": "What's expected of a web-developer?", "text": "I've been getting into more web development lately, and talking to people, often I'll get "Can you make me a website [for my book]?" types. "Yes. But.... Well, what do you want from it?" "I want it to look nice and attract people and people will want to buy my book." "..." **What all does making a website for somebody really entail from the designer/developer end?** * How responsible are we for graphics? Are we expected to make them a nice header with their name/logo and make it pretty and the like? * Color scheme? Overall feel of the site? (I mean, do we decide by ourselves or have some back and forth with the client?) * Specific wording of content (such as homepage flavor text. Obviously we wouldn't be expected to write the About the Author section.)? * Additional functionality that we think the user may appreciate? Ideally you'd talk to the client and figure out what they want, but really--do clients ever _really_ know what they want? **What is expected of the client in order to be able to make a website for them?** **Are web-designer and web-developer essentially interchangeable in casual speak, or...?** * * * I guess this question really stems from the fact that I'm learning I don't like doing most of this. I don't program to be a designer. My artistic skills are probably better than the average programmer's, but that isn't the point. I would actually prefer it if a client came to me and said "I want it to look like this" and handed me a paper copy, which I could then duplicate. What _does_ interest me is more of the behind-the-scenes functionality of websites. Is it smooth and functional; does the user have to navigate all over or rely on 'search' to find what he's looking for? Please shed some light on this perceived division of tasks for me."} {"_id": "65405", "title": "Must strong developers carry the weight of the world on their shoulders?", "text": "As developers we constantly strive to solve the problems of the masses. We also constantly look for new methodologies, languages and possibly organizations to help us further our ability to solve problems. I feel as if I've always been one of the top members on my team. I also feel that I look for ways to improve my work in ways others often do not care to. I'm starting to feel a little burnout from ~6 years of supporting technology. I blame the fact that I do work so hard and hold myself to high expectations. Some of the greatest devs on the planet do not even write code for a living any longer. Often, it's burnout. Some have said they grow tired of \"the game\", but I wonder if the problem is a bit simpler. One of \"carrying the weight of the world on our shoulders\".
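On the resource-limiting question (_id 214314): the JVM itself has no switch to hide cores, but two common workarounds are to bound your own concurrency and to pin the process at the OS level. A sketch under those assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConstrainedHarness {
    public static void main(String[] args) {
        // Pretend the machine has two cores by funnelling all test work
        // through a two-thread pool (limits your tasks, not JVM threads).
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("task " + id));
        }
        pool.shutdown();
        // On Linux, `taskset -c 0 java ConstrainedHarness` pins the whole
        // process to a single CPU for a stronger squeeze.
    }
}
```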
If you feel you are a strong developer and also feel that this is not a problem for you, please enlighten me with your approach. How do you stay up to date with tech, help others and solve problems quickly/accurately without getting all wound up?"} {"_id": "61981", "title": "Where is the golden mean between language monoculture and polyculture?", "text": "My company has been using Java (as a language and a platform) for many years. We have lots of products varying greatly in size, purpose and complexity. Whatever the requirements, the answer is always the same - Java. This stance obviously has advantages and disadvantages. On the plus side, there is no context switching between work assignments on different projects, and every developer in the company can be relatively productive on many projects; on the minus side, a small tool might end up being \"crushed\" by Hibernate. On one hand, packaging, deployment and execution environments could be standardized; on the other hand, we might lose agility of the mind and miss out on optimal solutions by always \"thinking in Java\". I could go on and on and on. Polyglot programming has become quite common in the recent past, as much as I can observe. Sayings like \"right tool for the job\" always ring loud in my ears as well. A desire to open up and expose ourselves to more \"right tools\" is quite strong, but where is the catch? Better yet, where is the golden mean? It's always dangerous to go from one extreme to another. I dread the day of waking up to an incomprehensible mess of a giant pile of Java, Groovy, Python, Ruby, PHP, Scala, etc. with all their corresponding tools, frameworks, servers and philosophies. Do you have practical experience working in a language and platform polyculture for at least two years? What are your main observations? What are you still excited about and what do you dread?"} {"_id": "137896", "title": "Software engineer, already in late thirties, would I have trouble finding a new position?", "text": "> **Possible Duplicate:** > Is ageism in software development based on anything other than bias? I have been a software engineer in the U.S. for about 10 years now and I am already in my late thirties. I still have the passion and dedication for this industry and I am still trying to improve myself every day at work. For family reasons, I have to relocate to another state where I need to find a new job. Since I am a purely technical guy I would still like to find a software engineer job. However I am a bit worried about my age since I am already in my late thirties. I am confident about my past experience and my technical strength; however, the age factor is something beyond my control. My question is: Will I be discriminated against because of my age when looking for a software engineer position? If I am really as good as I think, would employers be interested in hiring me anyway? Any comments/suggestions for improving my chances of being hired are also welcome! No suggestions like \u201ctry to become a manager\u201d please, since I am not interested in any management-related jobs. Thank you!"} {"_id": "61983", "title": "When to take that leap and program", "text": "I've been studying a beginner's guide to ASP.NET 4 for the past month and a half, and I'm at the end of the book. The next book will be one level up (a professional guide). As yet I've only programmed the exercises that are in the books, not my own code. When should I start developing my own code? After reading both books?
Also, I'm still confused about object orientation and how it works with ASP.NET 4."} {"_id": "250343", "title": "Is there any practical use for the empty type in Common Lisp?", "text": "The Common Lisp spec states that `nil` is the name of the _empty type_, but I've never found any situation in Common Lisp where I felt like the _empty type_ was useful/necessary. Is it there just for completeness' sake (and removing it wouldn't cause any harm to anyone)? Or is there really some practical use for the _empty type_ in Common Lisp? If yes, then I would prefer an answer with a code example. For example, in Haskell the _empty type_ can be used when binding foreign data structures, to make sure that no one tries to create values of that type without using the data structure's foreign interface (although in this case, the type is not really empty)."} {"_id": "158559", "title": "What are some good ways to get familiar with .Net's IL?", "text": "I recently accepted a job where I will be working with the IL a lot (on the team of a certain obfuscator that's included with Visual Studio). They know I have little knowledge of it, so I'll have some time to learn it on the job. But I want to get a bit of a jump start on it in the interim. For my background: I've never written any IL and have read very little of it. I do understand what it is, though. I know x86 assembly and such as well, so I'm not dumb as far as assembly-type code goes. But anyway, what are some good ways of getting familiar with IL, learning the usual opcodes, and also understanding how its garbage collection magically works at an assembly level?"} {"_id": "119507", "title": "Where do you put scenarios on a scrum board?", "text": "So a traditional scrum board looks something like this: Backlog | Story notStarted inProgress Done story 1 Story1 tasks Story 2 Story2 tasks Story .. Story n Epic x Epic x+1 However, in general a story has many scenarios, and when working with BDD you want to write each scenario for a story as Given, When and Then. Also, the scenarios don't belong in the notStarted, inProgress or Done columns, as a scenario is not a task. So you realize that scenarios should have their own column between \"story\" and \"notStarted\", as a scenario can have many tasks to be considered done. If you are going to build your tasks from scenarios, then why would you need the story on the scrum board in the first place; maybe they should be left in the backlog. Some people put scenarios on the back of each story. This is an ongoing debate in my team and I wanted to see if anyone has solved this differently. Cheers!"} {"_id": "119500", "title": "Website or any material to practice algorithms that explains the solution", "text": "I know some projects and web pages for practicing algorithms, like TopCoder and Project Euler. However, they just dictate the solution of a particular problem. I wonder if there is a resource, website or any material, that doesn't only give away the code, but also explains the solution thoroughly?"} {"_id": "240930", "title": "What's the copyright status of boilerplate code?", "text": "I check Open Source Compliance for commercial code. I have recently found a few examples where the commercial source is matched against quite a few OSS projects. The matches are very similar, but not exact, say about 30 lines of code with about 4 methods, a few variable names differ, some extra lines in the commercial code, ... but substantially the same.
I don't think this is auto-generated code, nor copy-pasta, because things like comments will differ. Rather, it just looks like boilerplate - the developers say that Grails (and these examples always seem to come up in Grails, but it could be any framework) requires a specific script format for CRUD operations (and these examples are also predominantly CRUD operations). If one adds in a standard source-code style, then it amounts to boilerplate - even if it is original, it's going to end up looking very similar across a number of codebases. Which leaves me with 2 questions: 1. Is this a reasonable defence against accusations of copyright infringement? 2. How would one form a judgement that codeX is such boilerplate, but codeY is not (as a non-expert in the language+framework)?"} {"_id": "43067", "title": "Why do we have postfix increment?", "text": "_Disclaimer_: I know perfectly well the semantics of prefix and postfix increment. So please don't explain to me how they work. Reading questions on Stack Overflow, I cannot help but notice that programmers get confused by the postfix increment operator over and over and over again. From this the following question arises: is there any use case where postfix increment provides a real benefit in terms of code quality? Let me clarify my question with an example. Here is a super-terse implementation of `strcpy`: while (*dst++ = *src++); But that's not exactly the most self-documenting code in my book (and it produces two annoying warnings on sane compilers). So what's wrong with the following alternative? while (*dst = *src) { ++src; ++dst; } We can then get rid of the confusing assignment in the condition and get completely warning-free code: while (*src != '\\0') { *dst = *src; ++src; ++dst; } *dst = '\\0'; (Yes I know, `src` and `dst` will have different ending values in these alternative solutions, but since `strcpy` immediately returns after the loop, it does not matter in this case.) It seems the purpose of postfix increment is to make code as terse as possible. I simply fail to see how this is something we should strive for. If this was originally about performance, is it still relevant today?"} {"_id": "240932", "title": "automated acceptance testing / BDD & workflow for designing a system", "text": "Recently, I started reading the book Specification by Example, which relates to automated functional testing and BDD (from what I've understood so far). I've tried using Concordion (.Net), and it seems very interesting. I've been having issues with keeping any form of useful documentation for any designed system, and this might help. My issue is: what would one suggest the workflow for designing a complete system should be? Some questions that arise are: * Is it ideal to try to design the entire system at the beginning? * Should you design a very high level overview of the system, and then create specifications one main feature at a time, i.e. create detailed specifications -> develop -> test -> move to the next main feature? * Should you create BDD-style specifications for **each and every** method in the system, even trivial ones like some `GetProductByReferenceCode`? The issue is that most of the time when you actually start developing, you start to realise that some things need to be done differently than originally thought, or you notice omissions from the initial design. I find that sometimes the initial design stage takes a lot of time, only for the actual design of the system to be very different once the product is launched.
My current workflow for designing a system is: 1. First start with the user interface, creating mockups of each and every screen the users will be dealing with. I find this to be the most visual method, one that the business users can understand. 2. Define logic that is directly related to the user interface. 3. Define any logic that happens _in the background_, for example notifications, etc. Does this make sense? Are there any ways this could be improved?"} {"_id": "2259", "title": "How should I organize programming files into directories?", "text": "Sometimes, one creates an exploratory prototype and forgets about structure in the directories... What are good tips on dividing the programming files over (several levels of) directories?"} {"_id": "240937", "title": "URI Representing a Single Resource with Two Possible Identifiers", "text": "I have a resource that can be represented in one of two ways: 1. Big Serial Number, or 2. Small Serial Number Is it closer adherence to REST principles to: * A: have a query string specify the type (https://acme.org/api/widget/aaa-111-aaa?type=big vs https://acme.org/api/widget/aaa?type=small), or * B: have query strings specify the type as search parameters (https://acme.org/api/widget?serial=aaa-111-aaa&type=big vs https://acme.org/api/widget?serial=aaa&type=small), or * C: in the URI (https://acme.org/api/widget/big/aaa-111-aaa vs https://acme.org/api/widget/small/aaa), or * D: something else (ex: headers)? To me, it seems like having it in the URI (option C) specifies an entirely different resource, and in this case, they should represent the exact same resource. Is option A the best way to represent this?"} {"_id": "199065", "title": "Conditional construct for a kleenean data type", "text": "I was thinking of a hypothetical programming language with a `kleenean` data type which would implement Kleene's three-valued logic. To sum up, it's an extension of the boolean data type with the three constants `true`, `false` and `unknown`, where `unknown` means that the value is either `true` or `false`, but we don't know which. The truth tables for a kleenean type are well-known and the logic is quite easy to understand. However, I was wondering how one would design a conditional construct to take into account this `unknown` value. A basic `if-then-else` conditional construct is almost always as follows: if (boolean condition) then condition is true, do something else condition is false, do some other thing end However, I have trouble seeing what a kleenean `if` construct would look like. How could we interpret the `unknown` constant? Technically speaking, it could satisfy the `true` condition as well as the `false` condition since it is one of these two. However, we can't have it match either of those, since it could be the other; it is not really `true` nor `false`. Is there a well-known way to implement such a construct? **EDIT:** To specify a little bit, I would prefer something different from the way `boost::tribool` works, or from a simple `switch` as if it were an enum. Answers about quantum superposition and semantics are welcome."} {"_id": "240939", "title": "When is a script no longer a script?", "text": "I write many a PowerShell script for the organization I currently work for. Most recently I have been co-writing a tool that, in the end, will have a lot of functionality (beyond a single purpose), will be configurable, and will have both a CLI and GUI, all in PowerShell. So when does a script stop being a script, and when can you call it an application or a program?
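For the kleenean question (_id 199065), here is one hedged sketch of what a three-valued conditional could look like, modelled in Java with `unknown` as a third enum constant; the three-branch `kif` is just one possible design, not an established construct:

```java
enum Kleene { TRUE, FALSE, UNKNOWN }

class KleeneDemo {
    // Kleene AND: false dominates; otherwise unknown taints the result.
    static Kleene and(Kleene a, Kleene b) {
        if (a == Kleene.FALSE || b == Kleene.FALSE) return Kleene.FALSE;
        if (a == Kleene.UNKNOWN || b == Kleene.UNKNOWN) return Kleene.UNKNOWN;
        return Kleene.TRUE;
    }

    // One answer to "how should `if` treat unknown": make the caller
    // handle all three outcomes explicitly.
    static void kif(Kleene cond, Runnable then, Runnable els, Runnable unk) {
        switch (cond) {
            case TRUE:    then.run(); break;
            case FALSE:   els.run();  break;
            case UNKNOWN: unk.run();  break;
        }
    }
}
```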
Is this just kind of in the eye of the beholder?"} {"_id": "124900", "title": "How assertive should I be in handling exceptions in objects?", "text": "I have been writing in C# 4.0 a lot lately and trying to write as lean as possible. As such, I have not been using the classic `try/catch` blocks and `using` statements as often. I understand the general function of .Net's garbage collection and exception handling - _I want to bulletproof my code and be efficient with objects, but I am trying to achieve this with the minimum amount of 'extra code'_ (since 'bloat' pisses people off). And to be clear, I understand `using()` is fundamental for relinquishing resources to handles and other code, and its relationship to the `IDisposable` interface. But I am trying to get a better grip on how assertive a programmer should be in exception-handling objects. Are there key places in code where you'd say `try/catch` blocks and/or `using` statements are inarguably necessary? Within the contexts of garbage collection and exception handling, what common-use objects/scenarios would you recommend explicit attention to? Should you simply make a catch block for every possible exception an object might throw? How deep should they be nested? Are there steps I can take to proactively collect garbage (for example, to combat the example in the comments of waiting for `Dispose()` to be called)?"} {"_id": "2254", "title": "What are good keyboards for programming?", "text": "Programmers tend to type a lot of code, bashing a lot of shortcuts and a lot of other things. What keyboards are good for programming?"} {"_id": "2252", "title": "How do you respond to \"Tell me a little bit about yourself.\" question in interviews?", "text": "I've been asked this in a few interviews. And it always catches me off guard. My professional and academic background are already in the resum\u00e9, which the interviewer has obviously looked at. What more to tell him/her? Should I start with my hobbies? I like gardening, or looking at NSFW pictures on reddit in my free time? What and how do you answer this specific question? Do you have a prepared answer for it? Am I wrong if I think this question is a bit silly? **UPDATE** There have been a lot of great answers to this question. I'm in a pickle over which to choose as the 'correct' answer, because most of them are very insightful. I found a great piece of writing on this subject matter. It's a bit crazy for my taste, but it's interesting: How To Introduce Yourself\u2026 I Mean Practically"} {"_id": "154464", "title": "Licensing a JavaScript library", "text": "I am developing a free, open-source (duh) JavaScript library, and wondering how to license it. I was considering the GNU GPL, but I heard that I must distribute the license with the software, and I'm not sure anymore. I would like the library to be available much like jQuery: in a free, downloadable script, preferably in either original or minified form. Am I mistaken about the GNU GPL license terms? jQuery is dual licensed under the GNU GPL and MIT licenses. How does the GPL apply to single script files like that? Can I license my library with nothing more than a few sentences in the script file? Is there another license that better suits my needs? What would be nice is a license that allows you to put the URL in the source, for people to read if they want. I don't know that many do, unless I am mistaken.
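A hedged analogy for the exception-handling question (_id 124900), sketched in Java since its try-with-resources plays the role of C#'s `using`: the one inarguable case is anything holding a native resource, which needs deterministic cleanup rather than waiting for the garbage collector.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class DeterministicCleanup {
    public static void main(String[] args) throws IOException {
        // try-with-resources: the file handle is released at scope exit,
        // not whenever the GC eventually finalizes the object.
        try (BufferedReader r = new BufferedReader(new FileReader("in.txt"))) {
            System.out.println(r.readLine());
        } // r.close() is guaranteed here, even if an exception was thrown
    }
}
```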
I am generally looking to release the library as free software like the GPL specifies, **but I don't want to force licensees to download the full license unless they wish to read it.**"} {"_id": "69766", "title": "Which is more valuable in product development: an action-oriented or visionary bent?", "text": "As a software development professional in a fairly conservative large firm, I always had a much more action-oriented bent, as my job was fairly stable and all that mattered was doing as I was told and completing tasks that were germane to the career of a benevolent dictator (i.e., my boss' boss). Now that I'm no longer working for \"the Man\", I find it just as important to use the left side of my brain and wrap my head around this whole \"vision thing\". Which do you think is more important for software product development in a small, yet feisty start-up: Knowing the path or walking (or running) it?"} {"_id": "152709", "title": "How to host a site in another site - with little or no coding", "text": "**SUMMARY:** All of these happen on Site A * User visits site A * User enters username and password * User clicks on Login Button * User is authenticated on Site B behind the scenes * User is shown a page on Site A that contains his/her profile from Site B, laid out/styled by Site B * User can click links in the Profile page that link to other areas in Site B Meaning: the session has to be maintained somehow I have a web application where I store users' passwords and usernames. If you log on to this site, you can log in with the password and username to have access to your profile. There is another option that requires you to log in to my site from your site and have your profile displayed within your site. This is because you might already have a site that your clients know you by. This link is close to what I want to do: http://aspmessageboard.com/showthread.php?t=235069 _A user on Site A logs in to Site B and has the information on Site B showing in Site A. He should not know whether Site B exists. It should be as if everything is happening in Site A_ This latter part is what I don't know how to implement. I have these ideas: Have a **fixed IFrame** within your site to contain my site: but I am concerned about size/layout since different clients have different layouts/sizes for their content section. I am thinking of how to maintain the session too. **A webservice**: I don't know how feasible this is since the password and ID are on my server. You may have to send them back and forth. It means the client would have to code against my API. _But I am not just returning data; I have to show them a page that contains the profile details_ **OpenID, Single Sign-On**: Just guessing - but the authentication and data reside on my server. There is nothing to access on your side in this case. Examples: logging into Facebook within my site and still being able to post updates, receive notifications. Facebook implements some of these with IFrames, e.g. the Like button. *NOTE:* I have tested the IFrame option. It worked, but I still have to remove my site-specific content like my page banner, side navigation etc. I was able to log in normally as if I was actually on the site. This shows my GUI but - the style sheet was missing - content was not styled with CSS - any relative URL won't work. It would look for that resource relative to the current server.
Unless I change links to absolute - Clicking on the LogIn button produces this error: _The state information is invalid for this page and might be corrupted._ **UPDATE:** I was reading about REST webservices a few days ago and I got this idea: what about **returning XML from a webservice [REST or SOAP] and providing an XSLT (that I can provide) to display it**? That way they won't have to do much coding."} {"_id": "105891", "title": "Where can I find articles on why interruptions are bad for programmers?", "text": "I've read/heard that interruptions are bad for programmers. I've also read/heard that getting into 'the zone' is good. I don't doubt these assertions, but I'd like to educate colleagues (managers, mainly) and family members (for when I work from home) about the perils of interruptions and the productivity gains to be had from 'the zone'. So could anyone point me to articles and/or books that make the arguments well? Feel free to make those arguments here in an answer if you like. It would be helpful if there were some in there that were aimed at managers and non-programming colleagues."} {"_id": "21355", "title": "How do you explain the complexity of bulk emailing to a manager?", "text": "I think we've all been there: you've been asked to write something to send bulk emails for whatever reason. It's a solved problem; there's a multitude of companies out there to handle this for you, and there's no point in catching up on their many years of single-minded focus on making sure your email reaches its destination without being waylaid by spam filters. Now, I could go into a point-by-point talk about DKIM, SPF, ISP rate limits, content inspection, bounces, getting blacklisted, getting unblacklisted etc, but the eyes will glaze over. How do you convince a non-technical manager of the issues around bulk email like this?"} {"_id": "211365", "title": "Who owns modifications to open-source software?", "text": "Suppose a piece of software is distributed under some permissive license, e.g. BSD. Now, I modify this software and want to distribute my derivative work. My questions: * Who is the owner of the derivative? Is it the original author, the new author, or is there joint ownership? * Is a copyright notice (say under the original copyright) sufficient to establish the new ownership?"} {"_id": "211364", "title": "Generic name for types and values", "text": "In computer science, what is the abstract common name of types and values (I mean an abstract \"something\" that can be a type or a value)? To be more specific: If we have `template <class X>` or `template <typename X>`, what is the abstract name of `X`?"} {"_id": "211368", "title": "C# Delegates are immutable - but why does that matter?", "text": "This is a followup question to this other question. **Background** Working from the MSDN Delegates Tutorial (C#), I see the following: > Note that once a delegate is created, the method it is associated with never > changes \u2014 delegate objects are immutable. And then, in the code sample, I see this (spread out a little throughout the code): public delegate void ProcessBookDelegate(Book book); bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(PrintTitle)); bookDB.ProcessPaperbackBooks(new ProcessBookDelegate(totaller.AddBookToTotal)); **The Question** Now obviously, the way the code is written here, a new delegate is created for each process.
I am _guessing_ the immutability is relevant if you try to do stuff like ProcessBookDelegate thisIsATest = new ProcessBookDelegate(PrintTitle); thisIsATest = new ProcessBookDelegate(totaller.AddBookToTotal); which should... still compile when `thisIsATest` is immutable? So what problem has Microsoft solved by making it immutable? What problems would we run into if C# 6 made delegates mutable? **EDIT** I believe the immutability will prevent this: ProcessBookDelegate thisIsATest = new ProcessBookDelegate(PrintTitle); thisIsATest.ChangeTheDelegateInABadFashion(); someFunction(thisIsATest); //This function works as normal //because the delegate is unchanged //due to immutability but the immutability will NOT prevent this: ProcessBookDelegate thisIsATest = new ProcessBookDelegate(PrintTitle); thisIsATest = new ProcessBookDelegate(ChangeTheDelegateInABadFashion()); someFunction(thisIsATest); //This function works weird now because //it expected the unchanged delegate Am I correct in this understanding?"} {"_id": "60586", "title": "How to test a random feature in MS Excel/Word/PPT", "text": "Interviewers often ask questions about testing a random feature in Excel, Word, Notepad etc. I am someone who will be interviewing for an SDET position and I have an SDE background, not specifically an SDET background. I am extremely interested in testing though, so I applied for an SDET position. So how should one go about testing a random feature in a piece of software, especially Word/Excel/Notepad kinds of software? Exploratory testing? What about a feature like the File->Open dialog in Word/Excel? How to tackle such interview questions? What is an interviewer specifically looking for in an experienced candidate when he/she asks such questions?"} {"_id": "167033", "title": "Is there a way to help streamline your typical developers discussions?", "text": "Often, my colleagues and I run into specific problems, for example what's the best way to deploy this, or would this or that technique be a good one to use, or just \"hey, have you seen that new thing\". However, at the moment this communication goes via mail, or Skype, or Twitter, and a lot of information becomes hard to find very quickly. Is there a service or methodology to keep this kind of information ordered and traceable?"} {"_id": "167031", "title": "Best method in PHP for error handling? Convert all PHP errors (warnings, notices etc) to exceptions?", "text": "What is the best method in PHP for error handling? Is there a way in PHP to convert all PHP errors (warnings, notices etc) to exceptions? What is the best way/practice for error handling? Again: if we overuse exceptions (i.e. try/catch) in many situations, I think the application will be halted unnecessarily. For simple error checking we can use return false; but that may clutter the code with many if/else conditions. What do you guys suggest?"} {"_id": "167030", "title": "How can I view github tasks which are not assigned to a person?", "text": "On the main Issues page, you can click `Everyone's Issues`, or `Assigned to me`. If you want to view the tasks assigned to a particular person, you can click the `search` icon, and on the left choose the `assigned to anyone` drop-down, then choose from a list of assignees.
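A small analogy for the delegate-immutability question (_id 211368), sketched in Java since Java has no delegates: the distinction the poster's EDIT describes is between rebinding a variable (always allowed) and mutating the object it points to (what immutability forbids).

```java
public class ImmutableCallableDemo {
    public static void main(String[] args) {
        Runnable handler = () -> System.out.println("print title");
        Runnable captured = handler;   // a callee "captured" the reference

        handler = () -> System.out.println("total books"); // rebind variable

        captured.run(); // still prints "print title": the first object was
                        // never mutated; only the local reference moved on
    }
}
```

This matches the poster's reading: immutability rules out `ChangeTheDelegateInABadFashion()`-style in-place edits, but not reassignment of the variable.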
But how do I list all the items which are not assigned to anyone?"} {"_id": "94033", "title": "Is there a canonical book for CMS design patterns and concepts?", "text": "I've been trying to understand the fundamentals of content management systems: I've looked into Apress's _Pro ASP.NET 4 CMS_, but it dives into implementation, dedicating a lot of time to technologies and not core principles. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information about content management systems? What about that book makes it special?"} {"_id": "116221", "title": "Appropriate uses of fall-through switch statements", "text": "When is it appropriate to use a fall-through (classic) switch statement? Is such usage recommended and encouraged, or should it be avoided at all costs?"} {"_id": "56935", "title": "Why are pointers not recommended when coding with C++", "text": "I read somewhere that when using C++ it is recommended not to use pointers. Why are pointers such a bad idea when you are using C++? For C programmers that are used to using pointers, what is the better alternative and approach in C++?"} {"_id": "102458", "title": "How to learn Java web application development?", "text": "I\u2019m planning on making a Java web application using Amazon Web Services Elastic Beanstalk. AWS Elastic Beanstalk basically provides PaaS on top of AWS\u2019s cloud computing resources. The problem I\u2019m having is trying to learn how to make a Java web application. I\u2019m familiar with making basic programs with Java (1st year computer science stuff) and have read the Head First Java book, however I\u2019m totally clueless when it comes to a large project like this. I\u2019ve tried to find a suitable tutorial in the AWS docs, but as far as I can tell they assume you know how to make Java applications. I tried the Oracle Java docs, and I\u2019ve been searching the internet for something useful. The problem is that all the resources I find assume familiarity with what I\u2019m trying to learn. Ideally I would like an article or tutorial that describes the process and how to make a Java web app, assuming only basic knowledge of the Java programming language. So if anyone can offer any advice, or docs that could help me achieve that goal, I would greatly appreciate your help :-) Thanks in advance, and I apologize if it seems that Googling could answer this question, but I have honestly looked quite hard to find some way of learning this and have come up with nothing. So I\u2019ve come to you guys - the experts ;) - who presumably knew as little as I did at the start. Basic details of the app: It will have a web-based interface as well as Android and iOS apps that connect to it. It will be of medium complexity and will involve user interaction.
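For the fall-through question (_id 116221), the least controversial use is stacking several `case` labels on one body, with no statements between them. A minimal Java sketch:

```java
class CalendarUtil {
    // Months with the same day count share one body via fall-through.
    static int daysInMonth(int month, boolean leapYear) {
        switch (month) {
            case 4: case 6: case 9: case 11:
                return 30;
            case 2:
                return leapYear ? 29 : 28;
            default:
                return 31;
        }
    }
}
```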
For the browser-based part, it will involve taking payments from users if they wish to buy a premium part of the service."} {"_id": "102459", "title": "How do you track your progress in a project?", "text": "Here's the type of situation that I've been struggling with, and I'm certainly not the first: * Project has a budget based on the estimated time to develop the solution * Deadline to turn the project over to the PM for QA is on some date * I am working on multiple projects at once, and each has a budgeted number of hours and a deadline Tracking my progress against a budget and deadline would be pretty straightforward if I knew exactly how I was going to implement it, and only had one or two projects to focus on at a time. However, I'm building websites with Drupal, and every project includes some functionality that might be available as a module - or the available module won't work at all, and it will require significant effort to build that functionality. What strategy have you found most helpful to keep track of where you are in a project so that you can accurately report to the PM how far along you are, and identify early on if the project is in danger of running late or going over budget, especially in cases where the time required for some parts is unknown?"} {"_id": "235374", "title": "Make a monolithic architecture in something modular", "text": "Currently my architecture is a monolithic block that handles a really specific duty. Now it needs to be generalized. Right now it handles a request and all processes (1 or many) associated with it. There's a class `Request` and a class `Process`, and since it serves one specific duty this model was OK. Right now I need to split both classes to achieve modularity. For instance, the `Request` class should be split. So I'll still have a `Request` class, which holds general request information, and many `RequestDetailsForServiceOne`, `RequestDetailsForServiceTwo` and so on, that hold detailed information relative to a specific type of request. The same goes for `Process`. _The question right now is: how to bind together the `Request` class with `RequestDetailsForServiceOne` (at run time)?_ I had thought about Dependency Injection, but `RequestDetailsForServiceOne` and `RequestDetailsForServiceTwo` don't share any common behaviour (those classes only store some properties), so this would drive me to code a completely empty `IRequestDetailsForService`. That sounds like a code smell to me. Would it be a better idea in this case to use (inside the `Request`/`Process` class) a Dictionary of properties and completely avoid the `RequestDetailsForService*` classes? I don't really like the loss of type-checking that this approach entails. What would be an ideal solution?"} {"_id": "235375", "title": "Does WinRT have support for .Net Framework and WPF?", "text": "I want to find out: 1. Does WinRT support the .Net Framework? 2. Does WinRT support WPF? Our company has developed applications using WinForms and they want to have them also running on WinRT. So is it possible to use classes developed under .Net Framework 4.5 in WinRT?"} {"_id": "191187", "title": "What type of project should each of these projects be, given their particular purposes?", "text": "I'm working on the creation of a new system that should be able to change the presentation layer without trouble, and also support web services, shop areas, internal control of data, creation of reports, multiple web portals, etc.
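One hedged alternative for the Request/RequestDetails question (_id 235374) that avoids both the empty marker interface and the untyped dictionary: make the detail type a generic parameter. All names below are hypothetical, sketched in Java:

```java
class Request<D> {
    private final String id;   // the general request information
    private final D details;   // service-specific data, still type-checked

    Request(String id, D details) { this.id = id; this.details = details; }
    D details() { return details; }
}

class ServiceOneDetails { String accountNumber; }
class ServiceTwoDetails { int batchSize; }

class Demo {
    void bind() {
        Request<ServiceOneDetails> r1 =
            new Request<>("r-1", new ServiceOneDetails());
        // The compiler keeps r1 apart from a Request<ServiceTwoDetails>
        // with no shared behaviour required and no stringly-typed lookups.
        ServiceOneDetails d = r1.details();
    }
}
```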
Following this post about the \"benefits of multiple Projects and One Solution\" on Stack Overflow, and from some personal experience, I came up with the idea of the following structure: * TestProject.BLL (Business) * TestProject.DAL (Persistence (Mappings, Conventions etc)) * TestProject.WEB (Presentation Layer) * TestProject.BASE (Framework) But, after some searching on the web, I found that those projects are **not** always of the type \" **ASP.NET Web Application** \". * * * My question is: what type should I create each project as? Should I build them as \" **Windows Forms Application** \" projects, or what?"} {"_id": "207030", "title": "Solving which bugs will give greatest cost benefit", "text": "I wanted to get an idea of how to categorize bugs based on how easy they are to solve and how much benefit solving them will give me. For example, there may be a bug which will take, say, an hour to solve (double file close etc.) vs another which takes a day (segmentation fault). But if solving the first bug is not very important, then I'll probably work on the second one. Are there any research papers which categorize bugs based on cost-benefit or a similar metric? [EDIT] Let's say it is possible to categorize bugs based on bug characteristics, e.g. security vulnerability, memory error, logic error etc. On the other dimension there could be parameters like difficulty (easy, medium, hard). Are there other dimensions I should be looking at? To simplify things, I can assume two things: 1. Every programmer in the team is equally capable of solving any bug 2. There is no deadline"} {"_id": "102450", "title": "EF vs. NHibernate", "text": "In the past 2 years since I started writing business applications (before that I did either high-level front end or very low-level systems programming), I have learned DataSets, LINQ to SQL, and now Entity Framework. The logical thing to look at next seems to be NHibernate. Reasons I eventually ended up at EF are: (1) it has the best designer support and (2) it is the most supported by Microsoft. Reasons (these have elements of assumption) I am interested in NHibernate are: (1) it might not get superseded by a totally different thing as quickly as MS churns data access technologies, (2) it seems to be either the front-running or second-to-front-running tool for what it does, and (3) it appears to be stable and traceable back through time quite a bit. Has anyone published a comparison of the two? Is one better than the other for certain types of architectures? Or is it just a matter of style and preference?"} {"_id": "102453", "title": "c# Web Content Filter", "text": "Can anyone point me to any GOOD open source .NET parental web content filters? I would like to gain an understanding of how I can filter/block web traffic based on URL and/or keyword. I've looked on Google but can't seem to find any developed in C#."} {"_id": "162007", "title": "Should a stack trace be in the error message presented to the user?", "text": "I've gotten into a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice \"An Error has occurred, please submit the below information to the developers\" in large, friendly letters.
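Not an answer from the literature, but for the bug cost-benefit question (_id 207030) the common informal heuristic is to rank by estimated value over estimated cost (the "weighted shortest job first" idea). A minimal sketch with made-up numbers:

```java
import java.util.Comparator;
import java.util.List;

record Bug(String name, double value, double hoursToFix) {
    double score() { return value / hoursToFix; } // benefit per hour spent
}

class Triage {
    static void rank(List<Bug> bugs) {
        // Highest benefit-per-hour first; fix bugs in this order.
        bugs.sort(Comparator.comparingDouble(Bug::score).reversed());
        bugs.forEach(b ->
            System.out.printf("%s -> %.2f%n", b.name(), b.score()));
    }
}
```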
My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get a hold of the client's systems administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of \"security audit\" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right?"} {"_id": "235379", "title": "Picking a card from a shuffled deck", "text": "I'm pretty new to Haskell although I did some ML many moons ago. I'm trying to set up a deck of playing cards, shuffle them and then deal them. I have the deck and shuffle sorted (after a fashion) but I'm not sure what to do after that. This is what I have so far: import System.Random import Data.Array.IO import Control.Monad -- | Randomly shuffle a list -- /O(N)/ shuffle :: [a] -> IO [a] shuffle xs = do ar <- newArray n xs forM [1..n] $ \\i -> do j <- randomRIO (i,n) vi <- readArray ar i vj <- readArray ar j writeArray ar j vi return vj where n = length xs newArray :: Int -> [a] -> IO (IOArray Int a) newArray n xs = newListArray (1,n) xs ranks=[2..14] suits=[1,2,3,4] cards=[(r,s) | r <- ranks, s <- suits] deck=shuffle cards myAction :: IO [(Integer, Integer)] myAction = shuffle cards main :: IO () main = do xs <- myAction print xs There was no particular reason I chose that list shuffler other than that I could interrogate (or at least display) the resulting list. I'd like to be able to pull items off the returned **IO [(Integer, Integer)]** but I'm not entirely sure how to proceed. I understand that I can't simply convert it to a vanilla list (this is covered sufficiently elsewhere) so presumably I either need to: * extend the IO (monad?) somehow * write a custom list-of-tuples shuffler * write my own monad * use some other method I haven't learnt yet Anecdotally, I believe this can be done \"uncleanly\" but I don't want to go to programmer hell until I understand how hot the fires are... Any idea how best to proceed?"} {"_id": "52722", "title": "Technology Roadmap: Portlet JSR286 vs Widget/Gadget", "text": "IBM got me confused (again). For many years IBM has been pushing for Portlet Containers with the JSR 168 and later the JSR 286 Specification. In 2008-2009, IBM's Lotus division introduced the iWidget Specification. Based on my reading, it is a more dynamic and lightweight version of Portlets, close to Google Gadgets. It uses a different paradigm than Portlets while providing the same features. A major differentiator with this kind of client-side technology is that you don't need a big and costly Portal infrastructure.
To avoid falling into the 'It depends on needs' discussion, let's consider the following: * New company, no legacy portlets, no portal in place. What are your thoughts on this?"} {"_id": "27985", "title": "Learning to Program in Assembly - Useful Resources", "text": "Following on from a previous thread, I've become interested in learning a little about assembly programming. Not so much because I want to program useful apps in it, but just to get a feel for low-level programming and to better understand what is happening. Can anybody recommend any books or online resources for beginners? Thanks."} {"_id": "27983", "title": "What do you do when you have to work on a project using a language that you hate?", "text": "I've recently been assigned to work on a project written in PHP. I can't change that, and have no intention of really trying. However, this can't be an uncommon scenario. How do you motivate yourself to work on the project when the code you're looking at and producing constantly makes you want to cry? EDIT: It should be noted that this is pretty much the only on-campus job doing anything software-related available at the moment, so \"just change jobs\" isn't really an option :("} {"_id": "27982", "title": "How to explain a project for a potential employer", "text": "I went to an interview last week for a junior programmer position and mentioned that I have been working on a project with a buddy of mine. Tomorrow they would like to see this project in order to decide if they should hire me or not. So I was wondering: * What parts of a project are of interest to someone who has not seen the project before? * What could I show that would change their perspective of me as a programmer and what I know? The project is in C# and has been a hobby project for 6 months now. Without going into too much detail: * it's a networked program * it's server/client based, where clients end up connecting to each other * there is synchronization of data between clients * we are using files to store information about the different clients, which is also being synchronized * the program as a whole is divided into some different projects (some of which go into .dll's) * the project will later handle multiple layers of displaying the data to the user (meaning cmd, a window etc..) I have green-lighted this with my buddy as well and it is OK to show. My idea was to show the shell (cmd) that we have, which is working, and the top layers of the hierarchy, the ones that the shell communicates directly with. These are three classes in particular: one that connects and communicates with the server, one that handles the synchronization and one that handles the objects that are being synchronized. I'll try to go into some detail about how we store the objects and how this is communicated to the synchronizer class. Then discuss at a higher level how we synchronize the data, ending with going into some detail about how this is done. Basically, **start from an abstract layer and go into detail where the magic happens.** But I wonder how understandable this will be for someone who has never seen our project before. How will this show that I am a better or worse programmer than the next? They are programmers, so I guess they will understand the basics and be able to follow the functions and what the code does. But I know that _I_ have trouble with understanding the overall picture of a project, much less in 1-2 hours. Also, I wouldn't know how to decide if someone was a good programmer or bad unless I saw some bad code. So my question is: 1.
Is the above idea something to work on? (top > bottom) 2. Should I concentrate more on algorithms? 3. Should I try to build a UML diagram or some other visual representation of the project? Any other ideas?"} {"_id": "52725", "title": "What should I ask the interviewer during the interview?", "text": "I have an SDET interview coming up in a week. I have been preparing for a long time. It is a good company. I have been working as an SDET for two years. I wonder what questions I should ask my interviewer regarding testing and other things. I would appreciate it if you could give me some sample questions that I should ask my interviewer during the interview. Some that I have thought of are: 1. What type of testing methodologies do you use? 2. Do you have a triage meeting every day? 3. What percentage of code coverage is done by unit tests? I do not find these questions to be very effective. I would appreciate it if somebody could help me out in coming up with better questions."} {"_id": "211096", "title": "What does \"Released under MIT and GPL license\" mean?", "text": "There are lots of projects released under more than one license. One example is Mozilla's tri-license, which states that the software is released under the MPL, but the user can also choose GPL or LGPL instead of MPL (I think this is because the MPL license is incompatible with the GPL license). However, I came across projects released under both the GPL license and the MIT license. What conditions must be met in this case? For the MPL/GPL/LGPL case, things are clear because you can choose MPL or GPL or LGPL. But from GPL & MIT I understand that the conditions from both licenses must be met at the same time in order to use it, and this doesn't make much sense because they have different purposes. In this case, should the MIT license be interpreted as an exception, similar to, for example, Qt's LGPL exception, or as an additional set of conditions on top of the GPL license?"} {"_id": "157283", "title": "HTML in docblock comments?", "text": "In the PEAR standards there is no reference to HTML, whether it's allowed or not. http://pear.php.net/manual/en/standards.sample.php But I see some people use HTML tags like `` and such. So is HTML allowed? Will it break some doc parsers?"} {"_id": "52729", "title": "How to become a Kernel/Systems/Device driver programmer?", "text": "I currently work in a professional capacity as a software engineer working with the Android OS. We work at integrating our platform as a native daemon, among other facets of the project. I primarily work in Java developing the SDK and Android applications, but get to help with the platform in C/C++. Anywho, I have a great interest in working professionally on low-level development for Linux. I am not unhappy in my current position and will hang around as long as the company lets me (as a matter of fact I quite enjoy working there!), but I would like to work my way in that direction. I've been working through Linux Kernel Development (Robert Love) and The Linux Programming Interface (Michael Kerrisk) (in addition to strengthening my C skills at every chance I get) and casually browsing Monster and similar sites. The problem I see is, _there are no entry-level positions._ How does one break into this field? Anytime I see \"Linux Systems Programmer\" or \"Linux Device Driver Programmer\" they all require _at the minimum_ 5-7 years of relevant experience. They want someone who knows the ropes, not a junior-level programmer (I've been working for 7 months now...).
So, I'm assuming that some of you on Stack Overflow work in a professional capacity doing just what I would like to do. How did you get there? What platforms did you use to work your way there? Am I going to have a more difficult time because I have my bachelor's in CSC as opposed to computer engineering (where they would get a bit more experience with embedded, asm, etc)? _EDIT FOR CLARIFICATION!_ I am aware of the open-source nature of the Linux kernel/drivers etc. I plan on contributing regardless of where my day job is. I'm more curious about what kinds of entry-level positions will allow me to do relevant work and get paid doing it! Thanks for all the replies so far!"} {"_id": "17738", "title": "Time allocated to code reviews", "text": "If you are doing code reviews: * How much time do you spend on code reviews, compared to implementation? * How much of the changes undergo code review? * Do you think it's too much / should it be more? Are there any studies about effectiveness? [edit] Thank you all for the answers; it's hard to pick a \"winner\" for such a question, as there is lots of valuable info in the other replies, too."} {"_id": "68384", "title": "Mathematical Knowledge Required prior to entering Computer Science", "text": "I am currently a college student in Systems Analysis (a 3-year course). I am about to finish my course and would like to enroll in Computer Science at a university near me. I have a deep interest in learning sorting algorithms, measuring time complexity, cryptography, systems programming and security research. I am bored by the typical, high-level knowledge: Java and general algorithm knowledge. I have absolutely no problem learning and utilizing new programming languages, mentalities and syntaxes. My only worry with regard to entering this course is what mathematical knowledge is required prior to entering. I have done linear algebra but not quadratic equations; I have not done calculus, nor have I done any Big-O notation. I plan on enrolling in 1 (one) year, so this gives me one year to prepare. I would like to know what I should be studying and reading to prepare. Books, worksheets, videos and any sort of advice or opinion(s) would be greatly appreciated. I have a great passion for efficient computing/programming and solving problems in the most efficient manner. I'm just trying to figure out if I would be in over my head entering Computer Science. Some people have said that I am too old, 22 this year; I would love to hear the opinions and advice of the Stack community. Thanks to all in advance,"} {"_id": "250968", "title": "What is the design pattern/paradigm for ASP.NET web-apps?", "text": "I don't actually know if my question is correct, but while porting a webapp from the ASP.NET world to the Java + Spring platform, this question came to mind. Using Spring MVC for the new version of the application, I'm obviously applying the MVC pattern, but what was the ASP.NET design/paradigm? I know that ASP.NET uses an event-oriented paradigm, but is this a design pattern? I hope this is clear..."} {"_id": "254046", "title": "And at what point of modification to the original does source code with no license become owned by me?", "text": "I've recently come across a publicly viewable project on Github that has no license associated with it. In this repo, there is a file with the logic and most of the code needed to work as a piece of a project I am working on. Not verbatim, but about 60% of it I'd like to use with various modifications.
Once my code base is a little bit more stable, I plan to release what I've done under the WTFPL License. I've emailed the repo owner, and so far have not gotten a reply. I know I have the right to fork the repo, but if I release a stripped-down and modified version of the other project's file with mine, under the WTFPL, am I infringing on copyrights? Per Github's Terms of Service, by submitting a project on Github and making it viewable to the public, you are allowing other users to see and fork your project. It doesn't say anything about modifying, distributing, or using the fork. And at what point of modification to the original does it become owned by me?"} {"_id": "254041", "title": "Performing a Depth First Search iteratively using async/parallel processing?", "text": "Here is a method that does a DFS search and returns a list of all items given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or TPL, or anything else) in a thread-safe manner? private async Task<List<Item>> GetItemsAsync(string topItemId) { var items = new List<Item>(); var topItem = await GetItemAsync(topItemId); Stack<Item> stack = new Stack<Item>(); stack.Push(topItem); while (stack.Count > 0) { var item = stack.Pop(); items.Add(item); var subItems = await GetSubItemsAsync(item.SubId); foreach (var subItem in subItems) { stack.Push(subItem); } } return items; } I was thinking of something along these lines, but it's not coming together: var tasks = stack.Select(async item => { items.Add(item); var subItems = await GetSubItemsAsync(item.SubId); foreach (var subItem in subItems) { stack.Push(subItem); } }).ToList(); if (tasks.Any()) await Task.WhenAll(tasks); The language I'm using is C#."} {"_id": "113769", "title": "Displaying and processing objects in a list?", "text": "I currently have code like this to display some objects that meet some criteria in a grid: // simplified void MyDialog::OnTimer() { UpdateDisplay(); } void MyDialog::UpdateDisplay() { std::list<Object> objects; someService_->GetObjectsMeetingCriteria(objects, sortField, sortDirection); for(std::list<Object>::iterator it = objects.begin(); it != objects.end(); it++) { AddObjectToGrid(*it); } } This works fine. Now the requirement has come in to process these objects when they meet some criteria. The information on the objects can change quickly, so it would be most desirable to check whether an object meets the criteria, then process it immediately, and continue on in this fashion. My question is how best to architect this to handle the display and processing. I could simply add a method to process the object, such as: for(std::list<Object>::iterator it = objects.begin(); it != objects.end(); it++) { ProcessObject(*it); AddObjectToGrid(*it); } However, ideally I'd check whether an object meets the criteria immediately before processing it, to ensure it's acting on the most recent info. In this example, all the objects are checked to see if they match the criteria and then afterwards each one is processed. Also I'm concerned this is coupling the processing code with the display code, although I'm not sure how to separate them or if it's even necessary. I could solve that like this: void MyDialog::OnTimer() { ProcessObjects(); UpdateDisplay(); } But then I'm iterating over the list of objects twice, once to process and once to display. Finally I could do something like: // simplified void MyDialog::OnTimer() { ProcessAndDisplayObjects(); } void MyDialog::ProcessAndDisplayObjects() { std::list<Object> objects; someService_->GetAll(objects, sortField, sortDirection); for(std::list<Object>::iterator it = objects.begin(); it != objects.end(); it++) { if(someService_->MeetsCriteria(*it)) { ProcessObject(*it); AddObjectToGrid(*it); } } } Overall, what's holding me up is that I'm worried about coupling the display and processing code together, and about the code running efficiently, because timely processing is critical. How can I best structure this code?"}
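As an aside on the list-processing question above: one way to keep the single pass while decoupling processing from display is to re-check the criteria immediately before processing and inject the display step as a callback. This is a minimal sketch, not the asker's design; it reuses the question's names (`Object`, `someService_`, `MeetsCriteria` - the `Object` element type itself is a reconstruction, since the original template arguments were lost in extraction) and assumes C++11 plus the <functional> and <list> headers.

    // Single pass: criteria are re-checked right before processing, so the
    // decision uses the freshest data; the display step is injected, so this
    // method knows nothing about the grid.
    void MyDialog::ForEachMatchingObject(const std::function<void(Object&)>& onMatch)
    {
        std::list<Object> objects;
        someService_->GetAll(objects, sortField, sortDirection);
        for (std::list<Object>::iterator it = objects.begin(); it != objects.end(); ++it)
        {
            if (someService_->MeetsCriteria(*it))
            {
                ProcessObject(*it); // time-critical work happens first
                onMatch(*it);       // then whatever the caller wants (e.g. display)
            }
        }
    }

    void MyDialog::OnTimer()
    {
        ForEachMatchingObject([this](Object& o) { AddObjectToGrid(o); });
    }

With this shape the processing loop stays a single pass, and a unit test could pass a recording callback instead of touching any UI.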
{"_id": "9795", "title": "What to look out for in employment contracts", "text": "I thought of this today after a co-worker looked through the contract they had signed several years ago and was quite alarmed. What should one look out for before signing a contract, given that most employers _will_ get you to sign one? Please post ideas separately so they can be voted on individually."} {"_id": "148556", "title": "Should a website use its own public API?", "text": "I am starting to write a webservice, and I've built it with Node.js and a RESTful-ish approach. From what I gather: * The advantage is that you don't have to duplicate code. * The disadvantages are that you: * will update the public API frequently, though that should be solved with versioning * can't really do service-specific caching and optimizations What is considered best practice? Do sites such as Stack Exchange, Github, Twitter, etc. use their own APIs for their clients?"} {"_id": "148551", "title": "Design Patterns / Strategies For Custom Fields & Data Types", "text": "Are there any common strategies or design patterns for designing applications that have either the ability to add custom fields to data objects, or the ability to create your own custom definitions of objects? For example, I am thinking of products such as SalesForce, where you can have your own types of information; frameworks such as Expression Engine and the way it handles channels and channel field groups (Example); or how CMSes like WordPress have the ability to add fields to custom post types."} {"_id": "220881", "title": "Understanding bitmap logic", "text": "I was going through this blog and it says > You can represent a list of distinct integers no larger than N using exactly N bits: if the integer i appears in your list, you set the i-th bit to true. Bits for which there is no corresponding integer are set to false. For example, the integers 3, 4, 7 can be represented as 00011001. As another example, the integers 1, 2, 7 can be represented as 01100001. Can someone please explain to me how [3, 4, 7] = 00011001 and [1, 2, 7] = 01100001, and also why 8 bits are used?"} {"_id": "112415", "title": "Is there a difference between property and attribute?", "text": "These two words are completely synonymous for me, but I wonder if there's actually a useful semantic difference that I can incorporate into coding/naming conventions/process."} {"_id": "91129", "title": "Why are we still using the DOM in the browser rather than a desktop paradigm", "text": "From my understanding, the web interface was developed to use HTML because at the time it wasn't possible to simulate a desktop-style application in the browser the way Silverlight and Flash do, because of bandwidth limitations and possibly processing power.
Why hasn't there been, in the past and the present, a larger acceptance of and push for technologies like Flash/Silverlight? From my experience they are more pleasing to develop with (of course, my opinion), and you don't have to deal with cross-browser compliance and older browsers (for the most part). Handling postbacks, AJAX, etc. seems like extra unnecessary effort compared to the development paradigm of desktop applications. Do the DOM and its complementing technologies continue to thrive mainly on the fact that Silverlight/Flash requires a plugin install, and some mobile devices don't support the plugin?"} {"_id": "222033", "title": "The practical size limits of an AngularJS based application", "text": "We have been tasked with replacing a series of 25-year-old mainframe applications with web applications. There are 4 applications in the series and we are trying to come up with a stack that will work on all 4. Now these applications are big. Not huge, but big. Based on the work that our sister company did to replace similar pieces of their systems, the small one is expected to come in at about 150k lines of REST service code and about 80k of UI code. For the big one, a million lines of service code and 200k of UI code isn't out of the question. For the small application we have so far identified about 150 different views; the big one has about 500 we know about. Now I understand that line counts are a bad metric to use, but at the moment it's the best I've got, so let's just go with that. So in the world I typically work in, this is one medium-size application and three large ones. The next issue we have, across the board, all systems: no user action is to take more than one second. Meaning, if you come into an application, the app has to be loaded and ready for login within one second. If you click on a search button, the response has to be displayed within one second. We have people that use the mainframe systems to process paperwork, and in order for them to hit their daily quota, they have to average a start-to-finish processing time of 15 seconds for each case. The green screens they use right now just barely hit that mark. Don't get me going on eliminating the paper; state and federal law requires it. So we already know that we have to build the app in such a way that a user can use it without touching the mouse. No way are they going to hit their quotas if they have to be mousing all over the place. That's not such a big deal; we can define hot keys to do that. But the big issue is the speed. The typical internal framework is JSF. The standard is RichFaces, but many years of horrible performance out of Rich have finally convinced them to move to Prime. We would be about the 2nd or 3rd to use Prime. But since our performance requirements are the highest they have ever encountered, they asked my team to evaluate different frameworks. For internal applications, the current acceptable response time is between 5 and 7 seconds. Pretty typical in an enterprise, and usually a pretty easy target to hit with JSF. So we spent the past 5 weeks evaluating various options, and by leaps and bounds the highest-performing framework was GWT. In our benchmarks it blew Vaadin and JSF/Prime completely out of the ballpark. Also, aside from JSF, it's pretty much the only thing we could find designed to handle applications of this size with surprising ease. (Struts and Spring MVC are strictly forbidden in our enterprise.) Then the problems started. Our Enterprise group said no, we can't use it.
They acknowledge the fact that there is very little chance of JSF pulling off our performance requirements and that the only practical way of hitting our goal is a thick browser front end, but, instead of GWT, they want us to use AngularJS. For the purpose of this post, let's not get into why they said no; let's just stick to the fact that the answer was no. So, for the past 2 days, I have been digging into Angular. So far, I'm not impressed. It looks OK for very small applications (compared to what I normally deal with), but I can't find anything anywhere that describes an experience of trying to build an application of the size that I need to build. The closest I found was a guy talking about an \"Enterprise\" application he built with Angular that was about 9k lines. He said it was a bitch to pull off, mostly from inexperience, but once he found the tool Lineman, it became tolerable and he was able to pull it off. In the applications I need to build, 9k lines won't cover the logic I need to build into my search UI (so far we have identified about a dozen different ways to search for various data, and counting). To say nothing of actually doing anything with the results. So I called up two of our really, really good developers and asked their opinion. After we got past the initial \"You're out of your mind, that's what GWT was built for\" phase of the conversation, it was settled that trying to pull off apps of this size as single-page JavaScript apps, like you can in GWT very easily, would be complete suicide. Angular simply wasn't designed to go that big; it was designed for tiny stuff: mobile apps, small web apps, etc. Nothing of this scale. The idea is we have to break the app into smaller modules. But this brings the problem of how to transfer application state from one module to the other. This problem has been tackled at this company before with very poor results. But they had an idea. Since we can't switch out our WebSphere servers for Node.js, their suggestion was to try a hybrid approach. The idea is to create a JEE6 Conversation bean, then create a JSF file for each module of the application. That JSF would be linked to the conversation bean. This way, each module can run Angular to handle the M and C side, and the V side can be basic HTML with Twitter Bootstrap to make it pretty. When the user switches from one module to another, the JS application state is posted to the conversation bean. The conversation bean then redirects you to the new JSF module page; the JSF can then grab the object that represents the current application state from the conversation and pass that to Angular, which loads it, and they are off to the races. Then of course Angular would talk to the REST services directly. We are using the slow-ass JEE stack as little as possible, but we are still able to fairly easily move app state from one module to another. The stack sounds OK to me. We have the ability to break the app into modules; if a module gets too big, we can break it into sub-modules, use the conversation to move the state from module to module, and we can use Angular. My problem with this stack is twofold. One, that's a lot of plumbing: lots of things to go horribly wrong. Two, I can't find anyone that has tried something like this. Questions: What is the practical size limit of an application based on Angular that does not have the benefit of Node.js sitting behind it?
I have been told by my UI programmers that when an application gets to the point of about 10k HTML elements within a single-page JS module, the performance starts to degrade. When you hit about 25k elements, the performance drops off a cliff, and at about 40k elements the app usually doesn't load. Are these numbers consistent with medium to large applications that you have written? For comparison's sake, let's say Stack Exchange is a medium-sized application and Google Drive is large. Does a design pattern exist for this scenario? Meaning, does a pattern exist where you can break a large JS-based front-end app apart into modules that can be handled more easily by the JS frameworks, by using JSF or some other system to transfer the application state from one module to another? The failed attempt made in the past was to post the app state object to a web service and have that service store it in an application-scoped EJB to be retrieved by the next module. That technique failed miserably when the app came under any kind of meaningful load. Are there any other frameworks out there to consider that can give us the performance, scalability, flexibility, and ease of developing large applications like this, and that don't have Google in their name? BTW, I was told about Angular Modules about 15 minutes before I walked out the door today. All I really know about it is its name. If you think it could help me, please let me know. Thanks guys/gals. I'm really stumped here."} {"_id": "20406", "title": "starting off with non trivial programs -- publish or not?", "text": "I've just started making some simple but non-trivial (I think) programs in Ubuntu -- as of now I have made a small xkcd scraper, which I plan to develop into a multi-webcomic downloader+viewer. At this point, would it be a good idea to start publishing the code on a site like Github or Launchpad? (Currently I'm not really worried about people copying my code/licensing.) Or is it better to publish the code only after the program is completed? Also, can you suggest some such sites where I can post such code and get suggestions for improvements/discuss the code with others?"} {"_id": "91121", "title": "Lisp: Benefits of lists as code over arrays as code?", "text": "Question for Lisp programmers: Lisp code is Lisp data, usually lists. Is there an advantage to code being lists over code being arrays? Would macros be easier to write/faster to run? You can assume that, in such a language (you could call it \"arrap\"), elements can be deleted from or inserted into arrays and that the arrays would grow and shrink as needed."} {"_id": "99450", "title": "When initially releasing an app, do you hold some of the features back for future releases?", "text": "I'm new to the site, so sorry if this is the wrong section. I'm starting app development and wondering what the best practice is when initially releasing my app. Do developers tend to keep some of the features for future updates to keep users active, or do they try to release the most complete app possible? Basically, is it advised to release an app as soon as possible and then periodically update it into the complete app you have in mind, or to wait until you have it fully developed and release it with fewer update prospects? EDIT: Thanks for the answers. I am currently just designing the app, writing down all the features I can think of and trying to prioritize which to include in the initial launch.
Based on the answers given, I think I will get an MVP (thanks for the term) out as soon as it is ready, and then update with new features as soon as they are built. I am not holding back built features; I was just torn between building them all before launch or building just the necessary ones, releasing, and then building the others. As far as I am aware this isn't a clone. It is my first app though, and I will be using it as a learning experience"} {"_id": "100032", "title": "Using continuous build results as part of performance review metrics?", "text": "My boss is planning to use the metrics from our continuous build (builds and runs tests on every commit) as part of our performance reviews (in our 'quality' metric). This seems like a REALLY bad idea to me, but I'd like to know if anyone has studied this or seen this tried before. My thought is that it's going to have our developers not put in as many tests as they otherwise would, for fear that the tests are going to fail. I feel that he's turning a valuable developer tool into a stick to beat the developers with. The obvious counter-argument is that it will promote people being more careful before they commit, therefore leading to higher quality. Am I off base here? Please leave aside the question of whether we should be doing performance reviews at all or not - that's been answered elsewhere."} {"_id": "195034", "title": "Implicit optimization versus explicit optimization", "text": "To explain what I mean, let me start with an example. Consider a `deque` that supports `O(log n)` concatenation with another deque and `O(n)` addition of `n` elements at one end. This deque implements a general `seq` interface (or type-class, or what have you) that allows iterating over a collection. An explicit optimization approach would be having a `concat` method (or function) for deque objects and a separate `pushSeq` method for seq objects. Each method would be documented with the appropriate complexity. An implicit optimization approach would be to have a single `concat` method that accepts a seq. An internal dynamic type test checks whether the supplied argument is actually a deque, and if so, calls an implementation method for concatenating deques. This is documented in the API. Obviously you could have both of these. The point of implicit optimization is that you don't give the user explicit control over optimization. It just \"happens\", unless the user deliberately looks for it. Right now I'm writing a library and I'm facing a very similar choice. I very much like the idea of a compact interface where things just \"work\". An implicit approach also gives me a lot more freedom. For example, maybe I can perform ten dynamic type tests to optimize the concat operation for different collections. Having ten different methods wouldn't make sense. What's your take on this? Which approach is better, and why?"}
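To make the implicit-optimization idea above concrete, here is a minimal Java sketch of a single `concat` that type-tests for the fast path. All names (`Seq`, `CatDeque`, `concatFast`) are illustrative assumptions, not from the question, and the actual merge logic is elided:

    interface Seq<T> extends Iterable<T> {}

    final class CatDeque<T> implements Seq<T> {
        private void concatFast(CatDeque<T> other) { /* O(log n) structural merge */ }
        private void push(T element) { /* O(1) amortized append */ }

        // Implicit optimization: one public method; a dynamic type test
        // silently picks the O(log n) path when the argument allows it.
        public void concat(Seq<T> other) {
            if (other instanceof CatDeque) {
                concatFast((CatDeque<T>) other);
            } else {
                for (T element : other) {
                    push(element); // O(n) fallback for arbitrary seqs
                }
            }
        }

        @Override
        public java.util.Iterator<T> iterator() {
            return java.util.Collections.<T>emptyIterator(); // iteration elided in this sketch
        }
    }

The caller sees one method and one documented contract; the complexity guarantee is the only thing that changes with the argument's runtime type.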
{"_id": "198007", "title": "What are standard directory layouts for storing database scripts in TFS?", "text": "I recently took over the maintenance of 10-15 SQL Server databases and to my surprise found out that the team does not store the code scripts anywhere. They currently deploy changes to each environment without any paper/code trail. The code for the applications that use these databases is all stored in TFS, and I need to implement a directory structure for storing the scripts - tables, views, stored procedures - in TFS. On our Oracle side of the house we use a two-directory structure: * ddl * sql But I think there could be improvement in storing the scripts on the SQL Server side. I was thinking of using something similar to the way SSMS stores objects: * tables * views * stored procedures/functions Is there a standard structure or best practice that is used to store SQL scripts?"} {"_id": "195032", "title": "Why doesn't C# have local scope in case blocks?", "text": "I was writing this code: private static Expression<Func<Binding, bool>> ToExpression(BindingCriterion criterion) { switch (criterion.ChangeAction) { case BindingType.Inherited: var action = (byte)ChangeAction.Inherit; return (x => x.Action == action); case BindingType.ExplicitValue: var action = (byte)ChangeAction.SetValue; return (x => x.Action == action); default: // TODO: Localize errors throw new InvalidOperationException(\"Invalid criterion.\"); } } And was surprised to find a compile error: > A local variable named 'action' is already defined in this scope It was a pretty easy issue to resolve; just getting rid of the second `var` did the trick. Evidently variables declared in `case` blocks have the scope of the parent `switch`, but I'm curious as to why this is. Given that C# does not allow execution to fall through to other cases (it requires `break`, `return`, `throw`, or `goto case` statements at the end of every `case` block), it seems quite odd that it would allow variable declarations inside one `case` to be used by, or conflict with, variables in any other `case`. In other words, variables appear to fall through `case` statements even though execution cannot. C# takes great pains to promote readability by prohibiting some constructs of other languages that are confusing or easily abused. But this seems like it's just bound to cause confusion. Consider the following scenarios: 1. If I were to change it to this: case BindingType.Inherited: var action = (byte)ChangeAction.Inherit; return (x => x.Action == action); case BindingType.ExplicitValue: return (x => x.Action == action); I get \" _Use of unassigned local variable 'action'_ \". This is confusing because in every other construct in C# that I can think of `var action = ...` would initialize the variable, but here it simply declares it. 2. If I were to swap the cases like this: case BindingType.ExplicitValue: action = (byte)ChangeAction.SetValue; return (x => x.Action == action); case BindingType.Inherited: var action = (byte)ChangeAction.Inherit; return (x => x.Action == action); I get \" _Cannot use local variable 'action' before it is declared_ \". So the order of the case blocks appears to be important here in a way that's not entirely obvious -- normally I could write these in any order I wish, but because the `var` must appear in the first block where `action` is used, I have to tweak `case` blocks accordingly. 3. If I were to change it to this: case BindingType.Inherited: var action = (byte)ChangeAction.Inherit; return (x => x.Action == action); case BindingType.ExplicitValue: action = (byte)ChangeAction.SetValue; goto case BindingType.Inherited; Then I get no error, but in a sense, it _looks_ like the variable is being assigned a value before it's declared. (Although I can't think of any time you'd actually want to do this -- I didn't even know `goto case` existed before today.) So my question is, why didn't the designers of C# give `case` blocks their own local scope? Are there any historical or technical reasons for this?"}
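For reference next to the case-scope question above: the usual workaround is to open an explicit block per `case`, which is valid C# and gives each `action` its own scope. A minimal sketch reusing the question's code (the `Binding` element type is a reconstruction, since the original generic parameters were lost in extraction):

    private static Expression<Func<Binding, bool>> ToExpression(BindingCriterion criterion)
    {
        switch (criterion.ChangeAction)
        {
            case BindingType.Inherited:
            {
                // Braces give this case its own declaration space...
                var action = (byte)ChangeAction.Inherit;
                return (x => x.Action == action);
            }
            case BindingType.ExplicitValue:
            {
                // ...so a second 'var action' no longer collides.
                var action = (byte)ChangeAction.SetValue;
                return (x => x.Action == action);
            }
            default:
                throw new InvalidOperationException("Invalid criterion.");
        }
    }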
{"_id": "210941", "title": "Design: multiple algorithms on the same large data sets", "text": "I have several algorithms that I would like to test against the same data sets to compare their results. I don't know how to design this so there is maximum readability and maximum efficiency. I have considered creating a class for each algorithm and giving it a copy of the data to work with, but it doesn't seem that that is the right answer: 1. Each data set is fairly large (a 10,000-float numpy array), so I don't want to copy each one ~30 times. 2. Many of the algorithms have similar pre-processing routines (thus repeating them for each algorithm seems wasteful). 3. Some algorithms have nearly identical code, except for a few parameters which are different. At the same time, having one function call per algorithm also seems wrong: as per (2), many will call the same preprocessing functions, and then it becomes very difficult to tell who is calling whom. I want to be able to allow the user (which will be me) to easily call a variety of algorithms on the data, while keeping the code as clear as possible. I just keep thinking I need the inverse of a class; where each object of a class has the same methods but different data, I need something where each member has the same data but different methods."} {"_id": "78291", "title": "Fast cold start text editor", "text": "Is there a text editor with C# syntax highlighting that starts VERY FAST? I just want to look at the source code of some files in a project, and I don't want to wait 30 seconds while VS starts."} {"_id": "163351", "title": "Where is the object browser in VS 2010", "text": "I am teaching myself C# and I'm using a book that references Visual Studio 2008. However, I am using VS 2010. The book wants me to look at the object browser by choosing View, Other Windows, Object Browser from the menu. However, the object browser is not there. I moused over the icons on the menu and nothing stood out. So, where is it? Also, am I going to run into more problems like this? Is it worth getting an updated book?"} {"_id": "195038", "title": "Proper name for a project that supports 2 different releases", "text": "Is there a technical name for a software project where the current and prior stable releases are both maintained?"} {"_id": "255891", "title": "How does the Common Language Runtime improve performance?", "text": "I read in the Wikipedia article on the Common Language Runtime that one of the benefits the runtime provides is \"Performance improvements\". Executing managed code (or bytecode) must surely always be **slower** than executing native code, due to the additional overhead of JIT compilation. How then is it possible that the CLR causes \"Performance improvements\"? **Update:** I have looked at the question and answers to What backs up the claim that C++ can be faster than a JVM or CLR with JIT?, but it has not been helpful, as that question is actually asking why C++ would be faster rather than slower. What I am interested in is how it is possible, from an architectural point of view, that managed code could lead to performance improvements."} {"_id": "44822", "title": "what is a normal developer to pageview ratio?", "text": "I work for an e-commerce site that has lately been shedding its workforce. I was hired ten months ago as a UI Developer. At that time we had three other developers. One was the technical lead who had been with the company for 10 years.
The other two were server-side developers who had been there for 10 and 3.5 years respectively. In ten months, the technical lead left for a better position, one developer was laid off, and the other very recently left. So, I am now the only developer on staff. We have one DBA and one network administrator. They are currently looking to hire another developer but are not willing to pay enough to hire a senior person. I consider myself a junior developer with two years of experience. I have argued that we need to hire at least one senior developer and another junior developer if we're going to keep our current site operational (not to mention develop new features)... even if that means laying off staff in other departments. Right now we get 6.5 million pageviews per month, and I feel like 3.2 million pageviews per developer must be incredibly abnormal. My question is then: what is a normal developer-to-pageview ratio? Are there any industry standards or literature on the subject that I can use to argue for more staff?"} {"_id": "157863", "title": "What is the name to differentiate between parts of an app that have different types of users in each part?", "text": "On a project I'm working on, I'm trying to find a noun to describe the different parts of the application, divided by user interface. There are three parts: super admin, admin and regular users. Each of these parts consists of modules, interfaces with users, and interfaces with the other parts. I've tried \"sub-system\", \"module\", \"environment\", \"platform\", but none of these really describes what each of these parts of the system is. I need a noun that captures everything included within each part, e.g. the super admin(?) enables the owner to manually add and edit content. Normally I would just use \"module\", but there are many modules within each part."} {"_id": "111991", "title": "Software Engineer with no post-secondary degree?", "text": "So here's the deal: Right now I am in college, and due to a maelstrom of recent events, including difficulty in non-CS and non-math related courses, I'm really not sure that I want to continue studying at a university next year. While I thoroughly enjoy my computer science and math courses, I find that I just can't keep up with the class. Take my calculus-based physics for example. I'm sure that, given enough time, I could make it through the course and learn everything I need to know; but because I am working and taking 4 other courses, I just don't have that time. I feel that I would benefit a lot more from teaching myself these courses one at a time, i.e. finishing calculus before I take calculus-based physics. I probably should have registered for the algebra-based course, but it's too late to do that now. What kind of repercussions (career-wise) would there be if I decided to drop out of school and learn on my own? I might consider coming back after a few years of working full-time, but I really don't know. Is it worth paying for school if I don't feel like I'm gaining as much as I can from it?"} {"_id": "224164", "title": "How to create Public API wrapper with different configuration parameters", "text": "I'm looking to see if there is a design pattern that can solve the following problem. The example is fairly specific, but the goal is a public + internal API. I need specific information for my underlying library, but I want to present the API generically. If my question is vague or terrible, I'll try and spruce it up with your comments. As an example, let's say I am creating a public API and I want it to be agnostic to the implementation. Continuing the example, I am creating a job handler API using the Quartz library (it could be any library) in the current implementation. I want people to be able to type: Job job = jobManager.getJob(\"my.job.name\"); String otherInformation = job.getOtherInformation(); job.setSchedule(new IntervalSchedule(repeatInterval, intervalUnit)); job.setSchedule(new CronSchedule(cronExpression)); The Quartz API has builders for schedules (used in Trigger): CronScheduleBuilder.cronSchedule(cronExpression); CalendarIntervalBuilder.calendarIntervalSchedule() .withInterval(repeatInterval, DateBuilder.IntervalUnit.DAY); In my current example I would have a base interface or class Schedule. With this structure, how would it be possible to get the specific information I need for the schedule builders without revealing an implementation-specific object in my interface, while keeping it extensible? My goal is to prevent implementation classes from leaking into the public API. If in the future Quartz falls out of favor, the public API doesn't need to change, and the Quartz-specific classes can be deprecated. What design pattern or code reorganization am I missing?"}
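One common shape for the wrapper asked about above is to keep the conversion to the implementation library out of all public signatures: the public `Schedule` types hold plain data, and a package-private method produces the Quartz builder. This is a sketch under the assumption of the Quartz 2.x builder API; the class and method names are invented for illustration:

    // Public API: no org.quartz types appear in any public signature.
    public abstract class Schedule {
        // Package-private, so only the Quartz-backed internals can call it.
        abstract org.quartz.ScheduleBuilder<?> toQuartz();
    }

    public final class CronSchedule extends Schedule {
        private final String cronExpression;

        public CronSchedule(String cronExpression) {
            this.cronExpression = cronExpression;
        }

        @Override
        org.quartz.ScheduleBuilder<?> toQuartz() {
            return org.quartz.CronScheduleBuilder.cronSchedule(cronExpression);
        }
    }

If Quartz is later replaced, only the package-private conversion has to change (or a separate internal mapper class, if the conversion should not live on the public classes at all); `job.setSchedule(new CronSchedule(expr))` keeps compiling.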
{"_id": "29001", "title": "Copyright issues with encryption algorithms", "text": "I'm developing an application for Android/Java. This application is a kind of password manager, so I'm storing encrypted passwords behind a master password. There are a number of encryption algorithms: DES/AES/Blowfish/Twofish and so on. My intention is to develop an application which is free of commercial copyright issues. So the questions are: 1. If I use the built-in Java encryption APIs (e.g. DES/AES), does it mean that I will be free from the possible commercial interests of DES/AES-like copyright holders? 2. Encryption algorithms are sometimes export-limited; keeping in mind that I'm from Moscow/Russia, are there any implications of that fact? Any other thoughts would be helpful."} {"_id": "37922", "title": "need some concrete examples on user stories, tasks and how they relate to functional and technical specifications", "text": "A little heads-up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, **with good business value**, and I've coded most of the UI as dirty throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore, because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems: * * * 1. I know what a functional spec is (sample): Do all user stories and/or scenarios become part of the functional spec? 2. I know what user stories and tasks are. Are these kinda user stories? I don't see any Business Value reason added to them. * * * I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager >> Can Add a Financial Plan into the system because, well, _that's the point of it_? What Business Value reason do I add for things like this? > Example: A user needs to be able to add a blog article (in the blogger app) _because..??_ It's the point of a blogger app; it centers around that feature?
* * * How do I go into the finer details and system definitions: _Actor:_ Finance Manager >> _Action:_ Adds a finance plan. This _adding_ is a complicated process with lots of steps. What User Story will describe what a _finance plan in the system is_? I can add it into the functional spec under definitions, explaining what a _finance plan_ is and how one needs to add it into the system, but how do I get to the **backlog** planning from there? > Example: A Blog Article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create a backlog list/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? * * * Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec **contain** user stories in it, with UI mockups? And will these user stories then branch out into tasks which will make up something like a technical specification? Example: _EditorUser_ Can add a blog article. 1. Use XML to store blog article. 2. Add a form to add blog. 3. Add Windows Live Writer support. Those would be agile _tasks_, but would they also be part of, or form, the technical specs? Some concrete examples/answers for my questions would be nice!!"} {"_id": "184579", "title": "Method overload in scala", "text": "I know method overloading is not allowed in Scala and I have read some posts regarding the reasons. But still, I see some functions overloaded in the Scala library (example: println). I want to know how that is done and whether I can use the same mechanism for some of my methods."} {"_id": "235909", "title": "Testing abstract class' behavior", "text": "I'm currently refactoring an existing design, which was created without TDD. There is a class hierarchy with one abstract base class and two subclasses. In the original design, these classes were mostly just data holders without much behavior. I have identified some functionality which should be implemented in the base class, and I'd like to write tests for this functionality now. But since the class is abstract, I cannot instantiate it (obviously). **Note:** The functionality I'd like to test doesn't invoke any `pure virtual` methods. class Base {}; // Is abstract TEST(BaseTest, doesSomethingAmazing) { Base aBase; // <-------- Not possible!!! ASSERT_THAT(aBase.amazeMe(), Eq(AMAZING_RESULT)); } _Edit:_ To clarify a few things: * Inheritance does actually make sense in this situation - both subclasses map to specific domain concepts, and polymorphism helps keep the surrounding code clean * There is behavior which will be used in both subclasses, and which needs data that is common to both classes. So I think it makes sense to put it in a common super class. I can think of several possible solutions, but none of them seems optimal to me: * Add a subclass to the test code which implements all `pure virtual` functions. Downside: hard to name that subclass in a concise way, and understanding the tests becomes harder * Instantiate an object of the subclass instead. Downside: makes the tests pretty confusing * Add empty implementations to the base class. Downside: the class is not abstract anymore I tend towards option 3, to make the tests as clear as possible, but I'm not really satisfied with that. Is there a better way I'm not aware of?"}
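Alongside the abstract-class question above, a minimal sketch of the first option in googletest terms: a trivial test-only subclass that stubs the pure virtuals so `Base`'s concrete behavior can be exercised. `amazeMe` comes from the question; the stubbed `purePart` is a hypothetical pure virtual, since the real ones aren't shown:

    // Test double: exists only in the test file, only to make Base instantiable.
    class TestableBase : public Base {
    public:
        void purePart() override {} // hypothetical pure virtual, stubbed out
    };

    TEST(BaseTest, doesSomethingAmazing) {
        TestableBase aBase; // instantiable: every pure virtual is implemented
        ASSERT_THAT(aBase.amazeMe(), Eq(AMAZING_RESULT)); // exercises Base's own logic
    }

The naming objection from the question can be softened by one fixed convention ("Testable" + class name) used for every such double.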
{"_id": "194883", "title": "Python, magic and objects that add attributes to its owner", "text": "Let me start with a disclaimer: I'm _not_ the best programmer out there. I _do_, however, study I.T. and have learnt a bit of Java and C. I'm getting stuck into Python and Django + MongoEngine; I'm not going to explain it in detail, as that would deviate from the original question. While trying to understand what is happening in this bit of Django code, I was **shocked** by some of the concepts I seemed to understand. Mostly I was completely confused by how one object could add accessible attributes to a parent object, and by how Python defines `__set__()` and `__get__()` in such a confusing way. Could someone tell me how _orthodox_ this design style is, and whether I only find it bizarre because I'm an amateur programmer?"} {"_id": "72608", "title": "copyright notice for client work", "text": "I'm wondering what's the best way to add copyright/info into source for a client. Normally I do projects for myself and release them as GPL; however, I'm working with a client and I'm not sure how to give them the source. Would something like this be correct? > /* Package: Project Name > File: filename.php > Author: My name (emailme@example.net) > (c) 2011 Client Name All Rights Reserved > > File Description.... > */ The client owns the code once they have paid for it, but I still think it's a good idea to add a file description and copyright to each file. Maybe this is a conversation to have with each client, but does this seem like a fair (and legal) way to do it?"} {"_id": "79650", "title": "Is it still ok to get MCTS in .NET 3.5?", "text": "As I work for a Microsoft Gold Partner, they are keen for me to become certified. They have left it up to me to decide which certifications I want to go for. I had started training for the .NET 4 MCTSs, but I am finding them harder than expected. Part of the problem is that I mostly work with 3.5 in my day job and only work with 4 at home. Should I just give myself a break and go for the 3.5 exams? Would other employers think it was bad/strange that I'm still sitting 3.5 exams in 2011? I would still like to know all the latest technologies, but at the moment I'm just finding that there is too much. By the time I'm ready for .NET 4 they will probably have released 5, and the cycle will continue."} {"_id": "194635", "title": "Why do bitwise operators have lower priority than comparisons?", "text": "Could someone explain the rationale why, in most popular languages (see note below), comparison operators (==, !=, <, >, <=, >=) have higher priority than bitwise operators (&, |, ^, ~)? I don't think I've ever encountered a use where this precedence would be natural. It's always stuff like: if( (x & MASK) == CORRECT ) ... // Chosen bits are in correct setting, rest unimportant if( (x ^ x_prev) == SET ) // only, and exactly, SET bit changed if( (x & REQUIRED) < REQUIRED ) // Not all conditions satisfied The cases where I'd use: flags = ( x == 6 | 2 ); // set bit 0 when x is 6, bit 1 always are nearly nonexistent. What was the motivation of language designers to decide upon such precedence of operators? * * * For example, all but SQL among the top 12 languages on the Programming Language Popularity list at langpop.com are like that: C, Java, C++, PHP, JavaScript, Python, C#, Perl, SQL, Ruby, Shell, Visual Basic."}
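A small C demonstration of how the precedence discussed above bites when the parentheses are forgotten; the values are made up for the demo, and most compilers will warn about the second expression (e.g. gcc/clang with -Wparentheses):

    #include <stdio.h>

    int main(void)
    {
        unsigned x = 0x0B, mask = 0x0F, correct = 0x0B;

        /* Intended test: mask first, then compare. */
        printf("%d\n", (x & mask) == correct); /* 1: 0x0B == 0x0B */

        /* Without parentheses, '==' binds tighter than '&',
           so this parses as x & (mask == correct). */
        printf("%u\n", x & mask == correct);   /* 0: x & 0, since 0x0F != 0x0B */

        return 0;
    }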
{"_id": "194634", "title": "SourceTree app, how do I know what is my current branch?", "text": "I have branched `bugfix/issue2` off my `develop` branch to work on a bugfix. Now that it's done, I want to merge this branch `bugfix/issue2` back onto `develop`, but it's asking me if I want to merge into my current branch. And I don't know whether my current branch is `develop`. See attached screendumps: ![enter image description here](http://i.stack.imgur.com/cLFGl.png) ![enter image description here](http://i.stack.imgur.com/sEM44.png)"} {"_id": "194631", "title": "The physical implementation of quantum annealing algorithm", "text": "From that question about differences between quantum annealing and simulated annealing, we found (in the comments to the answer) that a physical implementation of quantum annealing exists (D-Wave quantum computers). Can anyone explain that algorithm in terms of quantum gates and quantum algorithms, or in physical terms (the part of the algorithm that depends on quantum hardware)?"} {"_id": "144792", "title": "Is it a good practice to use smaller data types for variables to save memory?", "text": "When I learned the C++ language for the first time, I learned that besides int, float etc., smaller or bigger versions of these data types existed within the language. For example, I could declare a variable x as int x; or short int x; The main difference being that short int takes 2 bytes of memory while int takes 4 bytes, and short int has a smaller range. But we could also declare it like this to make it even smaller: int x; short int x; unsigned short int x; which is even more restrictive. My question here is whether it's a good practice to use separate data types according to what values your variables take within the program. Is it a good idea to always declare variables according to these data types?"} {"_id": "114817", "title": "How to learn C in two days (if I already know C++)?", "text": "I've been programming in C++ for a few years, and I've done a school project or two in C (as well as several other languages). However, I don't know C very well at all. I have a programming interview in two days, and I just realized that this interview will be in C. How do I approach this? How do I learn C well enough to succeed in a programming interview? This job is not looking for a \"C expert\" or anything like that, so I think they'll be somewhat understanding if I explain that I have not programmed much in C. They just choose to host their interviews in C. Buying and reading a textbook is not feasible, so my resources will have to be on the internet."} {"_id": "26769", "title": "Chrome extensions every programmer should know", "text": "I just switched from Firefox to Chrome, so what I want to know is: are there any Chrome extensions which are indispensable for a programmer?"} {"_id": "180639", "title": "Describing requirements in SRS - use cases?", "text": "I do not have access to the IEEE standard, and information on the net is contradictory. Can I capture user requirements in an SRS using use cases? Or should I keep use cases separate, as they are more like scenarios than separate requirements?"} {"_id": "26761", "title": "Summarize terms of service?", "text": "What's your opinion on summarizing the ToS? Example: [ToS here] Basically, it means you won't sue me. [I accept][I don't accept] Is this good practice, or will this just get you sued more?"} {"_id": "26537", "title": "How: Personal life of a Software Developer", "text": "I have just graduated and am looking for a job in the field of software development. I would like to go into application development. Being a software engineer, can I still have a personal family life? I am single :) Once I get married, will I be able to spend time with my wife and kids? I see that nowadays many software specialists are not returning home at night due to the workload.
That makes their wives worry in some ways. How does the understanding between them hold up? How do you balance your personal life?"} {"_id": "136676", "title": "Do you need to meet personally to collaborate effectively?", "text": "If you are starting up a web, mobile, or standalone application, how well must you know the rest of your collaboration team? Would it be a bad idea to ever work with someone I haven't met personally? Is it okay to sometimes accept small public contributions, such as code snippets and artwork, without meeting the contributor? Should I ever actively collaborate with people online that I have not met, as long as the person has a detailed profile and a good reputation? If you do collaborate online, what tools do you use to build teams and find contributors? If not, would you consider the idea if there was a safe medium? Any other input on the subject would be appreciated."} {"_id": "188404", "title": "Re-gaining confidence of senior programmer", "text": "My boss found out I'm not as smart as he thought. An example from my experience: I'm a junior programmer, and I work in a team of two, my boss (senior programmer) and myself. I was tasked with developing an internal web application for the company we work at. I wrote the back-end to the front-end (the database design was already in place and the server technology had been chosen). He would periodically check on my progress by observing the web application in action, and he was happy with how it was coming along. When I finished the web app he was pleased with how well the end product turned out. A few days ago he became interested in the code, so I told him what technologies I used (for the front-end), and this is where it went south. For the front-end of the web app I used a JavaScript framework (Backbone.js). When asked why I would do such a thing, my response was that I felt the framework fit this app quite well and would help me structure the code better than if I wrote it from scratch... \"Well, that's disheartening\" was his response. So given this example, my question is: if you're a senior programmer and have lost confidence in the ability of your junior programmer, what would you like to see from your junior to gain that confidence back? **EDIT** : Thank you everyone for the great answers and supportive feedback!"} {"_id": "188407", "title": "What is the C++ convention, if any, for naming to differentiate between structure types and other types?", "text": "In general, should I use some sort of convention for structure names which is distinct from that for other type names? I was thinking about this when my professor started talking about structures. I had the following discussion with myself, and I do not want to come to any unreliable conclusions. How would you answer the question(s), and which answer(s) are untrustworthy? Q: Should I distinguish structure names and type names to make the code more clear? A: A structure is a type, so there doesn't need to be a distinction. RE: Q: So there is never a case when someone uses `typedef int SOMETYPE` to rename a built-in type to make the code more flexible? RE: A: There is no practical reason to rename built-in types. Actually, defining `int` types as `SOMETYPE` makes the code less readable because someone reading the code will not know the built-in type used for that type. Honestly, I do not trust either answer.
Looking at some of the base structure in a new Win32Project using Visual Studio, I see things like `typedef short HFILE;` which, I guess, makes it clearer to the programmer what the variable is used for. Perhaps that is too subjective, however. As an aside, how does one usually determine what is going on when they come across variable declarations which use types they have never seen before? Do you just rely on the IDE to take you to, or display, the declaration?"} {"_id": "152444", "title": "Is making my own copyright licence safe?", "text": "I've seen various open source libraries (actually I've seen it for assets as well) having a home-baked license in the following manner: > SomeGuy's License: > 1\\. You can use this code freely in commercial projects and modify it as you > wish, but not sell it > 2\\. If you want to sell a modified version, drop me an email first, or give > credits to me Edit: The above example is ambiguous, so I am giving another one. I want to know if 3 lines of license will hold some ground: > SomeGuy's License: > 1\\. You can use this code in a commercial project as a 3rd party library > 2\\. You can't sell it as a derivative work I know that such a license is not polished at all; for example, the Creative Commons set of licenses seem short but actually have some large legal stuff underneath them. But I wonder if at least some level of protection can be gained with a hobby license like that. My question is, **could this hold any ground in court**, or would the corporate lawyers of company X tear it apart?"} {"_id": "111335", "title": "Ruby isn't a PHP generator, right?", "text": "My boss was looking down on me about learning Ruby because \"It's just a PHP generator\", a claim I can't find anything to support. Is this the case? I understand that the Rails framework can be used to generate PHP code, but isn't Ruby in and of itself its own programming language?"} {"_id": "143214", "title": "What is Script Binding, and How is it done?", "text": "I have heard of such a thing as script binding - binding JavaScript or Lua with APIs from C++ or Objective-C. I want to know more about it; however, Google has strangely proven unhelpful for this. How is it done? It's OK if I get pointed to other sources, but just how? I never knew a script's JIT could interact with compiled libraries."} {"_id": "250805", "title": "Improve coding quality", "text": "I have been dealing with programming for several years now (I am still a student, but with a lot of internships). Mostly working with C++, Python and MATLAB, I noticed that whenever I download an SDK or a library from the Internet/GitHub and look inside, the code is much more complex than mine, using error catches everywhere and other things that would never come to my mind while writing my own code. My question is, how can I learn or practice this kind of thing? From my experience, no programming course talked about coding style, when and how to use try/catch efficiently, how to debug faster, etc... I am also not thinking about using mutexes when accessing one variable from different functions at the same time, because my code simply works without them and I haven't faced any problems so far.
Currently I am working on a project using Python, and PyCharm is helping me a lot to keep a correct coding style, but it is limited to syntax and naming conventions."} {"_id": "34785", "title": "Which software licenses should I be aware of?", "text": "Licensing is something that I really haven't paid any attention to; I guess I felt there was no need. However, I can't help but think that I should. So which are the most common licenses that I, as a programmer, should be aware of? Also, it would be helpful if you could include a brief description of each provided."} {"_id": "189757", "title": "Are there any programming languages understood by all operating systems, if so what?", "text": "I'm curious if there's a programming language that you can write code for all operating systems in, and if so, what language that might be? I'm guessing something very low-level like binary could do this, but am not sure."} {"_id": "189755", "title": "Should developers be worried about automation that make them redundant eventually?", "text": "Should developers be worried about possible automation happening in their projects that might make them redundant? I have never particularly worried about this myself, but I have seen many developers worrying about it, even if they don't admit it openly. There is a fear that if you automate most of the aspects of a system that you are handling, it might make you redundant eventually. I myself had an experience of this. A year back I started working on a project where, for any requirement/change request that came in, the only deliverable was a bunch of insert queries. That team had 2 developers and 1 tester working on it (all onshore). The only thing that the developer had to do was configure the insert queries based on the requirements. When I came to work on it, I immediately suggested building a web application that would enable the business people to do it directly. Everything would be configured in the web application, including error reports in case anything went wrong. Needless to say, my manager loved the idea, but it never got past higher management. Their logic was that until the client comes up with such a requirement, it should never be done, as it would involve future losses for the company and possibly loss of employment for the employees currently handling it. So should a developer be concerned about this? I would certainly like to believe that > If you don't take care of your customer someone else will but experience tells me otherwise."} {"_id": "131029", "title": "Handling invoices with timestamps", "text": "I'm currently going to write something to **automatically create invoices** with cronjobs, using PHP and timestamps. I have a, for me, **well-considered idea** of how to solve it, but I want to ask the community **whether anyone sees errors in reasoning** and **possible problems with my solution**. I'm trying to describe my idea in as much detail as possible so everyone can follow my way of thinking: In general there are **4 types of invoices**: 1. Paid yearly 2. Paid semiyearly 3. Paid quarterly 4. Paid monthly **Purchased products are saved in a SQL database** with the billing cycle: * ID of User * Product ID * Billing Cycle * Last Due Date Now there is a cronjob that runs once a day to check if it should create a new invoice for each purchased product. In the row `Last Due Date` I save the timestamp of the first date to pay when it's created. A piece of code I have already written calculates the **time that has gone by** since the `Last Due Date` timestamp and outputs something like this: * Timestamp is in past or in future * Months gone by * Days gone by Now my **rules for creating a new invoice** are: 1. **Paid yearly** if ( Timestamp is in past = true AND Months gone by = **11** AND Days gone by >= 20 ) then ( create a new invoice and set \"Last Due Date\" to time() ) 2. **Paid semiyearly** if ( Timestamp is in past = true AND Months gone by = **5** AND Days gone by >= 20 ) then ( create a new invoice and set \"Last Due Date\" to time() ) 3. **Paid quarterly** if ( Timestamp is in past = true AND Months gone by = **3** AND Days gone by >= 20 ) then ( create a new invoice and set \"Last Due Date\" to time() ) 4. **Paid monthly** if ( Timestamp is in past = true AND Months gone by = **0** AND Days gone by >= 20 ) then ( create a new invoice and set \"Last Due Date\" to time() ) As you can see, a new **invoice would be created ~10 days before the date of payment**, and the timestamp in `Last Due Date` is set to the current time, so when the cronjob checks back the next day no invoice will be created for this purchased product. My question is **whether this is an appropriate way of solving it** and whether you can **see any errors in reasoning or problems that may occur**."}
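To make the daily cronjob check described above concrete: a minimal PHP sketch of the monthly rule, using DateTime::diff to produce the asker's three outputs (past/future, months gone by, days gone by). Database access and the other three cycles are elided, and the function name is invented:

    <?php
    // $lastDue: stored "Last Due Date" unix timestamp for one purchased product.
    function shouldInvoiceMonthly($lastDue, $now)
    {
        $from = new DateTime('@' . $lastDue);
        $to   = new DateTime('@' . $now);
        $diff = $from->diff($to); // DateInterval

        $isInPast   = ($diff->invert === 0);    // "Timestamp is in past"
        $monthsGone = $diff->y * 12 + $diff->m; // "Months gone by"
        $daysGone   = $diff->d;                 // "Days gone by"

        // Monthly rule from the question: 0 whole months and >= 20 days.
        return $isInPast && $monthsGone === 0 && $daysGone >= 20;
    }

    // Example: last due date 25 days ago -> invoice now (~10 days early).
    var_dump(shouldInvoiceMonthly(time() - 25 * 86400, time()));

The other cycles would differ only in the months threshold (11, 5, 3), which suggests one function taking the threshold as a parameter.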
{"_id": "131028", "title": "Resources for C/C++ security (whitehat) hacking", "text": "I'm currently working as a C++ developer in a software security department, but not as a researcher or whitehat hacker. Security is an interest of mine, mainly in the areas where I can exploit code or reverse engineer executables/communication/protocols. I've done some work in reverse engineering, but only as a hobby (i.e. modifying a saved game file). As a person that knows C, C++ and some assembly (MASM), what types of resources would you recommend that I read so that I may gain more knowledge about these fields? Books, blogs, and articles would probably be best. Currently I'm reading 24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them."} {"_id": "77381", "title": "What should be \"D\" for a good unit test?", "text": "I'm trying to introduce unit testing, and how to write good unit tests, to a friend. I think it's better if I give him some principles for a good unit test, and I suddenly remembered the \"ACID\" properties of transactions: **A** \\- atomic: a unit test should test only one thing **C** \\- consistent: a unit test, if it passes, should pass a thousand times, whenever you run it, so it should not depend on external things, such as the network or a database... **I** \\- independent: a unit test should be independent of others; therefore, you can run unit tests in any order without changing the result. But I get a bit stuck with \"D\" - durable.
Durable unit test? It makes no sense. Any suggestions about this D? Thank you."} {"_id": "77385", "title": "Use of the word \"glitch\"", "text": "I find myself talking about computers with a lot of non-tech people, who often use the word \"glitch\" to describe an undesirable outcome of a program or operating system (of course, never Linux!). I have never heard another programmer use the word \"glitch\", and hearing it makes me think the user has no idea what they are talking about. So my question: Do you use the word glitch in everyday, technical conversation? If not, does its use imply to you ignorance and a lack of understanding?"} {"_id": "107248", "title": "emacs - project explorer and auto complete features - is it available?", "text": "I know that Emacs is a very powerful editor out there. I try to use it occasionally and want to learn it better. But to learn it better I have to use it more frequently than I am using it now. But one big obstacle is that I could not find basic features like a project explorer or auto-completion. Maybe I am lazy, but I don't want to write the same long method names again and again. Another one is go-to-definition: I want to see the real declaration of any class or method or even a variable. Also, I know that it can handle makefiles, but an actual representation of a project alongside the editor would be very good. So, what I am asking is, is there a way to provide these options in Emacs? Or is there an extended version of Emacs that supports these features?"} {"_id": "67912", "title": "How will closures in Java impact the Java Community?", "text": "It is one of the most talked about features planned for Java: Closures. Many of us have been longing for them. Some of us (including me) have grown a bit impatient and have turned to scripting languages to fill the void. But, once closures have finally arrived in Java: how will they affect the Java community? Will the advancement of VM-targeted scripting languages slow to a crawl, stay the same, or accelerate? Will people flock to the new closure syntax, thus turning Java code-bases all-around into more functionally structured implementations? Will we only see closures sprinkled in Java throughout? What will be the effect on tool/IDE support? How about performance? And finally, what will it mean for Java's continued adoption, as a language, compared with other languages that are rising in popularity? To provide an example of one of the latest proposed Java Closure syntax specs: public interface StringOperation { String invoke(String s); } // ... (new StringOperation() { public String invoke(String s) { return new StringBuilder(s).reverse().toString(); } }).invoke(\"abcd\"); would become ... String reversed = { String s => new StringBuilder(s).reverse().toString() }.invoke(\"abcd\"); [source: http://tronicek.blogspot.com/2007/12/closures-closure-is-form-of-anonymous_28.html]"} {"_id": "203408", "title": "Unit test SHA256 wrapper queries", "text": "I have the following SHA256 wrapper. public static string SHA256(string plainText) { StringBuilder sb = new StringBuilder(); SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider(); var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText)); for (int i = 0; i < hashedBytes.Length; i++) { sb.Append(hashedBytes[i].ToString(\"x2\").ToLower()); } return sb.ToString(); } Do I want to be testing it? If so, what do you recommend? My thought process is as follows: 1. What logic is there here? 2.
The answer is my for loop and `ToString(\"x2\")`, so from my understanding I want to be testing this part? I can assume `Encoding.UTF8.GetBytes(plainText)` works. Correct assumption? I can assume `SHA256CryptoServiceProvider.ComputeHash()` works. Correct assumption? I want to be testing only my logic; in this case it is limited to the printing of the hex-encoded hash. Correct? Thanks. EDIT: Updated method based on MainMa's excellent answer: /// /// Returns a SHA256 hash of the plain text. /// /// The plain text. /// The hash. public static string SHA256(string plainText) { if (string.IsNullOrWhiteSpace(plainText)) throw new ArgumentNullException(\"The plain text cannot be empty.\"); StringBuilder sb = new StringBuilder(); Byte[] hashedBytes; using (SHA256CryptoServiceProvider sha256Provider = new SHA256CryptoServiceProvider()) { hashedBytes = sha256Provider.ComputeHash(Encoding.UTF8.GetBytes(plainText)); } foreach(Byte b in hashedBytes) { sb.Append(b.ToString(\"x2\").ToLower(CultureInfo.InvariantCulture)); } return sb.ToString(); }"} {"_id": "180189", "title": "Acceptance tests for large multi-step online form", "text": "I have to extend/fix a large online form written by another developer. There is a lot of code, mixing PHP and JS. It's a kind of write-only coding style, and I want to redo it completely, but currently I can't. It works like this: * It is a wizard-style form with 12 steps - let's call them _phases_ so as not to confuse them with the steps of the tests. * The user has to fill in at least ~100 fields to finish. * Some fields are grouped by 3-5. Those groups can be dynamically added and removed with JS. * Validation is done after submitting each phase. If some errors occurred, the user is notified immediately and cannot go further until he/she fixes all errors. * Submitted data is temporarily stored in the session between phases. No permanent storage. * The user cannot skip phases, only sequentially go back and forth to already completed phases. It is ridiculous to test this thing manually, so I wrote a test using Behat with Mink and Selenium2. I have only 1 test now, which completes all 12 phases of the form. To go to a specific point in the process of filling the form, I created a step definition which just makes WebDriver wait 1 hour (laugh at me). When I need to test some specific phase, I just add this step where I want Selenium to stop - this leaves the browser window open so I can do whatever I want manually without having to fill in all previous fields. It saves a ton of time, but feels stupid. I have reasons to do that: * Has to be done ASAP. * I cannot test each separate step in the form. * I cannot reuse parts of the filling process without having to write quite a lot of PHP code. Now my test is just ~120 lines written in the Gherkin language. The simplified version of my question is: **how should I test this?** I can think of several ways: * Modify the code to allow skipping phases. This can be done safely by detecting environment parameters within the application and deciding whether the client (WebDriver or user) can or cannot skip phases, so skipping is not allowed in a production environment. * Just write more tests with a lot of copy-pasting and be a bigger monkey. * Write step definitions to complete specific phases separately. So, I could just write When I complete phase 1 And complete phase 2 And complete phase 3 ... in the tests.
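A rough sketch of what one of those step definitions might look like (class name, regex and the validDataForPhase() helper are my own placeholders built on MinkContext, not code from the project):

```php
use Behat\MinkExtension\Context\MinkContext;

class FormContext extends MinkContext
{
    /**
     * @When /^I complete phase (\d+)$/
     */
    public function iCompletePhase($phase)
    {
        // Fill every field of the given phase with valid sample data,
        // then submit the phase.
        foreach ($this->validDataForPhase($phase) as $field => $value) {
            $this->fillField($field, $value);
        }
        $this->pressButton('Next');
    }

    private function validDataForPhase($phase)
    {
        // One field => value map per phase; trimmed to a single entry here.
        $data = array(1 => array('first_name' => 'Test'));
        return $data[$phase];
    }
}
```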
So, the full version of my question is: **which way is preferable, and what (dis)advantages does each way have?** Maybe there are other ways, like designing the application in a completely different way so there are no such problems."} {"_id": "180184", "title": "Is there ever a reasonable time to have test fixtures test MVC3 controller construction?", "text": "I've recently started with a new firm and I'm trying to understand the mechanics behind something, so I'll anonymize the code and present just a sample: [Test(Description = \"Retrieves the New view for XXXX.\")] public void New() { \"~/XXXX/New\".ShouldMapTo(x => x.New()); \"~/XXXX/New\".WithMethod(HttpVerbs.Post).ShouldMapTo(x => x.New(null)); } and again: [Test(Description = \"Test the model injection into the New view.\")] public void New() { var @new = WXYZController.New(); @new.AssertViewRendered(); Assert.IsInstanceOf(@new.ViewData.Model, \"Expected the data model to be of type WXYZSettingsModel, but the model was another type.\"); var model = @new.ViewData.Model as WXYZSettingsModel; if (model == null) Assert.Fail(\"Failed to cast data model to type WXYZSettingsModel.\"); Assert.IsNotNull(model.YYYY, \"Expected an instantiated service message, but a null message was returned.\"); Assert.AreEqual(model.YYYY.Status, WXYZ.Success, \"Expected a success status code, but another code was returned.\"); Assert.IsNotNull(model.Categories, \"\"); Assert.AreEqual(model.ZZZZ.Status, ABCD.Success, \"Expected a success status code, but another code was returned.\"); } It seems to me this violates the purpose of unit-testing, as you should never unit-test the framework, but rather your own code, right? Did someone just get carried away? Or am I missing some fundamental component of this code that is just obvious to someone who is more familiar with unit testing? Note that there are entire files (one of each of the above per controller) that never test a single line of internal code, but are filled with these tests. Even to the point that this is fairly boilerplate now that I've anonymized it. If we were instead testing like this (pseudocode now) [Test(Description = \"thingy tester, duh\")] public void ThingTest(){ var mock = new MyDataMock(); var finalMock = new MyFinalMock(); var controller = new thingController(); var test = controller.Thing(mock); assert.AreNotEqual(test,mock,\"It didn't change it\"); assert.AreEqual(test,finalMock,\"It worked!\"); } I would expect that to be what the unit tests I see should look like, no?"} {"_id": "199846", "title": "Which Ext JS license should be acquired (when developing a Spring MVC/Hibernate/Ext JS web application)?", "text": "I am inquiring about a web app which uses _Spring MVC_ and Hibernate on the server and _ExtJS 4_ (Sencha JavaScript framework) for the client framework (widgets). The app is cloud-hosted - it is not \"distributed\" to users. It is required that users of the app purchase a license from the vendor in order to use it. I am completely new to software licensing and am not sure which ExtJS license is required/preferable, i.e. the commercial license or the GNU GPL license? I have a few questions to hopefully help clarify: 1. Is it possible to use the GPL license or similar, allowing only part of the app's source to be distributable, e.g. the ExtJS components, but not allow users to see other components of the system, i.e. the _Spring MVC_, Hibernate server components? Is it possible to restrict access to the _ExtJS_ source if using a GPL license?
In general, is it possible to restrict distribution of some aspects of the source when using the GPL? 2. If the GPL license is acquired, is the source available only to users of the system, or is it open to the whole general public?"} {"_id": "209602", "title": "Identity Design ASP.NET", "text": "I am trying to design a system with the below features, and am currently trying to figure out the best way to handle identity: 1. There will be multiple decoupled parts of the system, with the same customers accessing various parts 2. I would like users organized by organizations/companies, i.e. user1 & user2 belong to ORG1 3. I would like additional info to be stored within a user profile; info will originate from various systems, as well as global info such as address, etc. 4. For roles, I haven't yet decided whether they will be handled by individual apps or globally and specialized in certain apps My conundrum is whether to use the new MVC Identity released in the ASP.NET beta currently out, or use WIF or Active Directory. I am assuming that a centralized application handling users & their associated admin tasks and then federating to other applications is best. If I understand correctly, any of the 3 are able to do that. What I am wondering is which to use to be most flexible. Basically something that can be expanded later and doesn't have a huge learning curve, possibly to mobile & API use. I don't know enough about WIF or AD, as I have never really used them, and ASP.NET Identity is still in beta and not really 100% documented. My experience with authentication systems is working with ones out of the box. I've never really had to deal with SSO or federation. One thing I wanted to add is there is no need for outside registration. Registration will be handled purely by admins; not sure if this ties in at all, but I thought it may be of importance."} {"_id": "199840", "title": "How to calculate throughput if there is network traffic", "text": "I came across this question in a textbook, but there are no solutions and I'm not sure how to solve this. The question is: Suppose Host A wants to send a large file to Host B. The path from Host A to Host B has three links of rates R1=500Kb/s, R2=2Mb/s and R3=1Mb/s. Assuming that there is another busy flow that travels through the same set of links (say from Host C to Host D), what will be the throughput for this file transfer? I know that the throughput with no traffic would be min(R1, R2, R3), so would the throughput with traffic be max(R1, R2, R3)?"} {"_id": "151428", "title": "Random Cache Expiry", "text": "I've been experimenting with random cache expiry times to avoid situations where an individual request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait time every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes to make it likely at most only one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?)
uses this technique, but can't locate it, and there doesn't seem to be much written about the topic."} {"_id": "223191", "title": "Difference between event loop and system calls/interrupts", "text": "When you create programs (like a socket server) that are designed to run on an operating system, such as Ubuntu, frameworks like Qt for C++ use something called a main event loop: app = new QCoreApplication(argc, argv); int rc = app->exec(); Now programming languages like C or C++ enable a program to make system calls through built-in functions, like fork() or pthread_create(), and use software interrupts to switch to kernel mode. For example, the program can request use of the kernel by means of a system call in order to perform privileged instructions, such as process creation or input/output operations. Other examples are open, read, write, close, wait, execve, fork, exit, and kill. All this can be achieved within C/C++ directly without an event loop. So what's the purpose of the event loop?"} {"_id": "28759", "title": "Merits of Namespaces/Packages", "text": "Some programming languages (e.g. Java and C++) have language features called \"packages\" or \"namespaces\". How useful is it really to have namespaces? It is possible to mark functions and classes as belonging to some particular library without using such a language feature, like the SDL does (e.g. `SDL_BlitSurface()`). Are namespaces not helpful enough to be worth having? Are they useful in libraries but not in applications? Are they useful everywhere except for in small projects? Thoughts?"} {"_id": "151424", "title": "Javascript Form Validation", "text": "When validating a JavaScript form, would it be better to process each input field individually, or, where possible, check collectively? e.g. validate() { noblanks() fieldOne() fieldTwo() } or validate() { fieldOne() fieldTwo() }"} {"_id": "176049", "title": "What is the use of Association, Aggregation and Composition (Encapsulation) in Classes", "text": "I have gone through lots of theories about what encapsulation is and the three techniques of implementing it, which are Association, Aggregation and Composition. What I found is: ## Encapsulation Encapsulation is the technique of making the fields in a class private and providing access to the fields via public methods. If a field is declared private, it cannot be accessed by anyone outside the class, thereby hiding the fields within the class. For this reason, encapsulation is also referred to as data hiding. Encapsulation can be described as a protective barrier that prevents the code and data being randomly accessed by other code defined outside the class. Access to the data and code is tightly controlled by an interface. The main benefit of encapsulation is the ability to modify our implemented code without breaking the code of others who use our code. With this feature encapsulation gives maintainability, flexibility and extensibility to our code. ## Association Association is a relationship where all objects have their own lifecycle and there is no owner. Let\u2019s take an example of Teacher and Student. Multiple students can associate with a single teacher, and a single student can associate with multiple teachers, but there is no ownership between the objects and both have their own lifecycle. Both can be created and deleted independently. ## Aggregation Aggregation is a specialized form of Association where all objects have their own lifecycle, but there is ownership, and a child object cannot belong to another parent object.
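In actual class code, the three relationships might be sketched like this (a minimal PHP illustration of my own, not from any particular text; Composition, the third form, is defined just below):

```php
<?php
class Student {}
class Room {}

class Teacher
{
    // Association: the Teacher only references Students; it owns nothing,
    // and both sides live and die independently.
    private $students = array();
    public function addStudent(Student $s) { $this->students[] = $s; }
}

class Department
{
    // Aggregation: the Department owns the grouping, but its Teachers are
    // created outside it and keep living if the Department is deleted.
    private $teachers;
    public function __construct(array $teachers) { $this->teachers = $teachers; }
}

class House
{
    // Composition: the House creates its Rooms itself; when the House
    // goes away, its Rooms go with it.
    private $rooms = array();
    public function __construct($roomCount)
    {
        for ($i = 0; $i < $roomCount; $i++) {
            $this->rooms[] = new Room();
        }
    }
}
```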
Let\u2019s take an example of Department and Teacher. A single teacher cannot belong to multiple departments, but if we delete the department, the teacher object will not be destroyed. We can think of it as a \u201chas-a\u201d relationship. ## Composition Composition is again a specialized form of Aggregation, and we can call it a \u201cdeath\u201d relationship. It is a strong type of Aggregation. The child object does not have its own lifecycle, and if the parent object is deleted, all child objects will also be deleted. Let\u2019s again take the example of the relationship between a House and its rooms. A house can contain multiple rooms; there is no independent life of a room, and a room cannot belong to two different houses; if we delete the house, the rooms will automatically be deleted. **The question is:** Now these are all real-world examples. I am looking for some description of how to use these techniques in actual class code. I mean _what is the point of using three different techniques for encapsulation_, _how these techniques could be implemented_ and _how to choose which technique is applicable at a given time_."} {"_id": "212809", "title": "Before you develop a web app, how much of a spec do you need?", "text": "If you were to accept a role as a full-time web app developer responsible for creating a new application from scratch, how much of a defined specification and design would you need and/or expect to start development? What tools/software is even used for this (for web apps built with something like RoR)? I would expect a mockup/wireframe to be required, but what else is done in the design stage? What is the best method or process to convey the functionality of the application so that developers know what to translate into code? I'm interested to hear from web app developers who have thought: \"If I only had , I could develop this so much quicker/easier/cleaner\", or \"I have no idea what this customer wants unless they document it in .\" I'm not trying to start a philosophical debate; I am curious, from the developers' perspective, what is the \"right\" way to design web apps from scratch. There is probably a book somewhere that someone can tell me to RTFM, but I've only been able to find resources on learning actual coding and development, not the design phase."} {"_id": "35788", "title": "Advantages of Hudson and Sonar over manual process or homegrown scripts", "text": "My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bit us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyways.
I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success using. My coworker and I have also had our share of friendly disagreements before -- he's more of the \"CLI for all, compiles Gentoo in his spare time and uses Git\" type and I'm more of a \"Give me an intuitive GUI, plays with XNA and is fine with SVN\" type, so there's definitely _some_ element of culture clash here."} {"_id": "212808", "title": "Treating a 1D data structure as 2D grid", "text": "Hopefully this is a good question. I am working with a native class that represents a 2D image as a 1D array. If you want to change one pixel, for example, you need to know how to derive the index from the `x,y` coordinates. And I am trying to grok, in abstract terms, how to map a multidimensional object to its underlying unidimensional representation. So, let's say we have a 1D array `array1d` like this: array1d = [ a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y ] In the context of our program, `array1d` represents a 2D grid: a b c d e f g h i j k l m n o p q r s t u v w x y And we want to perform operations on `array1d` such as: * Get the value at `x,y` coordinates (in this example, `1,2` would give `l`) * Get any sub-grid using `x,y,width,height` (`1,2,2,2` would give `[l, m, q, r]`) * Set the value at any `x,y` coordinate (etc.) Is there a computer science-related or mathematical term for said operations, which map points/regions in 1D data to coordinates in 2D/3D/4D... objects? If so, what is it? Clearly all computers with displays solve this same issue, so what is this fundamental problem called and how is it conventionally solved?"} {"_id": "181583", "title": "For PL/SQL, do large companies prefer ANSI SQL joins or old Oracle joins?", "text": "I am interviewing for a PL/SQL position with a large corporation. I will have to write a multiple choice exam. I am wondering whether the exam will likely use old-style joins (joining happens in the 'WHERE' clause) or ANSI ones. My apologies if this is not the correct forum for this question."} {"_id": "93788", "title": "Should I update this code or continue with current design", "text": "I am working with an ASP.NET application. The application is great and works fine, but it has a few flaws. To give you an example: 1. Every control uses an absolute-position layout. This makes working with code very difficult in design mode. Absolute positioning has been used more than 700 times in the code. 2. There are different menus used instead of one, and to stitch them together, again absolute positioning is used for each menu control. 3. The application is role based. All the roles are implemented using the Roles.IsUserInRole property. I would like to use proper permissions for every role with page-level permissions. 4. There are a few security risks in the current code, but those are easy fixes in the current program. 5. There is no true admin. The top-level role acts as admin, which is also a functional role, used by other members.
Now that we are adding new functionality to this code - and I mean adding some big functionality - what do you recommend? Should I rewrite this code or work with the existing code? I would especially like to fix: 1. The layout issues. Remove all fixed positions and make the code clean and easy to maintain. 2. Unfortunately, to do so I have to rewrite the menu, because I want to write one menu instead of the 4 being used. 3. In order to achieve point 2 above, I have to create new roles and provide proper security for the application. My question is: **Should I work with the current code or rewrite the existing code, keeping in mind that our company is growing and that it may take me 2-4 weeks to do the rewrite?** Would it be a bad idea or a good one? I don't have a technical lead in my company."} {"_id": "181587", "title": "Good design pattern for a c++ wrapper around a c object", "text": "I have written an extensible C++ wrapper around a very hard to use, but also very useful, C library. The goal is to have the convenience of C++ for allocating the object, exposing its properties, deallocating the object, copy semantics, etc... The problem is this: sometimes the C library wants the underlying object (a pointer to the object), and the class destructor should not destroy the underlying memory. While most of the time, the destructor should deallocate the underlying object. I have experimented with setting a `bool hasOwnership` flag in the class so that the destructor, assignment operator, etc... will know whether or not it should free the underlying memory. However, this is cumbersome for the user, and also, sometimes there is no way to know when another process will be using that memory. Currently, I have it set up so that when the assignment comes from a pointer of the same type as the underlying type, then I set the hasOwnership flag. I do the same when the overloaded constructor is called using the pointer from the C library. Yet, this still does not handle the case when the user has created the object and passed it to one of my functions which calls the C API, and the library stores the pointer for later use. If they were to delete their object, then it would no doubt cause a segfault in the C library. Is there a design pattern that would simplify this process? Maybe some sort of reference counting?"} {"_id": "181585", "title": "Why would I use Control.Exception in Haskell?", "text": "I'm trying to really master Haskell error handling, and I've gotten to the point where I don't understand why I would use Control.Exception instead of Control.Monad.Error. The way I see it, I can catch Control.Monad.Error exceptions in both pure and impure contexts. By contrast, exceptions defined in Control.Exception can be caught only in impure/IO contexts. Given that they provide two separate interfaces and semantics that I would need to memorize, under what circumstances would I need Control.Exception instead of Control.Monad.Error?"} {"_id": "46920", "title": "Offshoring: does it ever work?", "text": "I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company, where I was a development manager, wanted to send some development offshore, and we ran a pilot scheme to see how well it would work.
Of course, it was a complete failure, although it is not completely clear to me whether this was down to the offshore developers being less talented, the process, or other factors (no doubt it was really a combination). I can see as a business how offshoring _looks_ attractive (much lower day rate), but as far as I can see, the only way it could _possibly_ work is if you do _exceptionally detailed_ design up front, with _incredibly detailed_ specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is, does anyone here have any experience of offshoring actually working _ever_? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?"} {"_id": "93786", "title": "How to work with new director with different ideals?", "text": "### Background Over the past 2 years, we have built a decent team of developers. The CEO of the company and I have been working closely to make sure we are moving down a good path. As our department has grown, I've been telling the owner we need a manager/director over us who understands technology, but can lead the company on more of the business aspects of running an interactive development shop. He's gone out and done just that. I didn't get to meet the man before his first day, but I quickly got his CV. He had owned a company in the late 90's that was bought out, and he has been working as a consultant since then. Most of his knowledge was in large corporate systems. We build web applications and sites for medium-sized companies. And that is where the issues are starting to crop up. He's brought up on several occasions our need to switch to Flash, Oracle and J++ for the sites and applications we are building, versus the LAMP stack we are using now for 90% of our development. The new guy is very smart when it comes to the business side of technical projects, but he is trying to lead the technical direction. And he and I aren't seeing eye-to-eye on a couple of upcoming projects. ### Question Now, I don't want to go into the issues with those comments. I want to ask how I, as the Senior Developer, can work with our new Director to ensure the success of our department within this company, while also keeping our developers happy and working with the technologies that we enjoy and are comfortable with."} {"_id": "32830", "title": "Usage of Pirated software at a company", "text": "I started to work at a company as an engineer a couple of months ago. It's a small company and what they basically do is answering service on phones. Now they are switching from normal phones to IP phones, so that computers take a more important place in the work. However, all the computers used by workers are equipped with pirated software, including their operating systems. Moreover, they didn't even buy one license to make copies for other computers. In other words, they did not spend any money on the software in the office. I am not saying copying a licensed one is legit, but the situation is too much. There is one guy who installed the pirated software. He does not feel any sense of guilt and even justified it when I asked about it. He is not even a specialist. He just searched on the internet for how to install pirated software.
Our boss does not have any knowledge of computers, so he took the cheaper way. What do you think about this? Since I am still new to the company, I am not doing maintenance on those cracked computers. But I have to use that software daily. And later on I will be doing support, help desk kind of stuff. I really don't want to take responsibility for operating pirated software. From a developer's and engineer's perspective, pirated software is not able to get legal support and it may work unexpectedly. So, I am thinking about changing jobs. Am I thinking too much? Should I wait until I have more credibility with the boss and try to change his policy? So far, the boss does not take any of my words seriously. Any opinions are welcome. Thank you."} {"_id": "32832", "title": "Should APIs be in en_US or en_GB or both?", "text": "Should I write APIs in US spelling or British spelling? Or should I provide a mapping from both of them to a single one, internally using whichever I like, so that both of them work? At present most of the APIs are in US spelling. What is the solution for it? Edit: Is there a way to standardize this? Maybe support both?"} {"_id": "145119", "title": "What do you suggest for cross platform apps, including web", "text": "I have always preferred cross-platform development over most other concepts for as long as I can remember, which is one of the reasons I never got into .NET. Currently, I use PHP/JavaScript/Python as my primary languages of choice for web development, which I have also been using at work. But the need to learn C# has come up, and one of the Windows guys has been teaching me C#, so I am still very new. I have really liked the language. I have also been brainstorming on an app that I have been wanting to make for some time and have not determined the best way to build it. The main system will be online, where users can log in and do everything. But I also want to make a desktop client that ties into and syncs the user's content with the server (considering using CouchDB for my particular use case for this app). They should be able to do everything the web app can. I would like to launch on Linux, Windows, and Mac, as well as make mobile versions. In learning C#, my co-worker recommended I look at Mono, so I can use .NET as a cross-platform system and even possibly use ASP.NET for the main site. So I am hoping to get some insight on where I could go. It seems that if I used .NET with Mono, I could reuse a lot of code for web and mobile using MonoDroid/MonoTouch. But then how would this setup compare to using something like Sencha Touch for making a mobile version and Node.js for the server side (I am fairly proficient with JS and PHP already; I just need to learn C#/.NET for work and would like to learn it more anyway)? Or is there anything else I should consider? I'm not really asking which method is better; I just want to know what options I have, so I could then make a calculated decision based on my needs and further research. Such as: what features would the .NET/Mono route have that I would not have in Sencha/Node.js, and vice versa (Node.js is awesome at real-time user-to-user interaction, for example)? I am just looking for some insight and advice, and help would be greatly appreciated."} {"_id": "35432", "title": "Inline functions in C++. What's the point?", "text": "According to what I read, the compiler is not obliged to substitute the function call of an inline function with its body, but will do so if it can.
This got me thinking: why do we have the inline keyword if that is the case? Why not make all functions inline by default and let the compiler figure out whether it can substitute the calls with the function body or not?"} {"_id": "101160", "title": "What are the advantages and disadvantages of using Resharper Annotations?", "text": "A new colleague has mooted the idea of using **Resharper annotations** within our code base (we are already great fans and users of Resharper). My new colleague argues that explicitly stating whether a parameter can be null or not null is a good thing, so that consumers of that class can see warnings at coding time. My reservation is that it adds noise to the code and is not executed at runtime, hence I favour guard clauses. I am also concerned that, like comments, they can go stale. I am also concerned about code readability and the overhead of writing them in the first place. Ultimately I am trying to keep development lean, so I don't want to add overhead for relatively little gain. Finally, it feels wrong decorating code for one vendor's tooling support. What are your thoughts? Good/bad experiences?"} {"_id": "101163", "title": "What causes floating point rounding errors?", "text": "I am aware that floating point arithmetic has precision problems. I usually overcome them by switching to a fixed decimal representation of the number, or simply by neglecting the error. However, I do not know what the causes of this inaccuracy are. Why are there so many rounding issues with float numbers?"} {"_id": "23845", "title": "What's so bad about creative coding?", "text": "I was watching Bob Ross paint some \"happy trees\" tonight, and I've figured out what's been stressing me out about my code lately. The community of folks here and on Stack Overflow seems to reject any whiff of imperfection. My goal is to write respectable (and therefore maintainable and functioning) code, by improving my skills. Yet, I code creatively. Let me explain what I mean by \"coding creatively\": * My first steps in a project are often to sit down and bash out some code. For bigger things, I plan a bit out here and there, but mostly I just dive in. * I don't diagram any of my classes, unless I'm working with others who are creating other pieces in the project. Even then, it certainly isn't the first thing I do. I don't typically work on huge projects, and I don't find the visual very useful. * The first round of code I write will get rewritten many, many times as I test, simplify, redo, and transform the original hack into something reusable, logical, and efficient. During this process, I am always cleaning. I remove unused code, and comment anything that isn't obvious. I test constantly. **My process seems to go against the grain of what is acceptable in the professional developer community, and I would like to understand why.** I know that most of the griping about bad code is that someone got stuck with a former employee's mess, and it cost a lot of time and money to fix. That I understand. What I don't understand is how my process is wrong, given that the end result is similar to what you would get by planning everything from the start. (Or at least, that's what I have found.) My anxiety over the issue has been so bad lately that I have stopped coding until I know everything there is about every method for solving the particular problem I am working on. In other words, I have mostly stopped coding altogether.
I sincerely appreciate your input, no matter what your opinions are on the issue. **Edit:** Thank you all for your answers. I have learned something from each of them. You have all been most helpful."} {"_id": "93435", "title": "best hosting structure to sell your app on a website", "text": "I'm considering trying to sell a small desktop application (I would be considered a one-man mISV, I guess?) at a price of about $20-$30 USD. I would distribute to customers by download only. I get the sense the best and most intuitive way to do this is to provide a download link from the application's website. What are the best ways to go about this? Should I create a website with a standard web hosting provider (like, e.g., DreamHost) and use that to allow customers to download it (it's 18MB or so in size)? I want to allow for the possibility that many people may attempt to download it at once (if I were able to get a write-up on some very widely-read sites), and I would not want my provider to be unable to handle that, since that would be THE crucial sales period. I'd also like to keep the costs of hosting/serving as low as possible, since I'm not sure I am going to make any money selling this at all."} {"_id": "165598", "title": "Removing occurrences of characters in a string", "text": "I am reading this book, Programming Interviews Exposed by John Wiley and Sons, and in chapter 6 they discuss removing all instances of characters in a src string using a removal string... so `removeChars(string str, string remove)`. In their writeup they say the steps to accomplish this are to have a boolean lookup array with all values initially set to false, then loop through each character in `remove`, setting the corresponding value in the lookup array to true (note: this could also be a hash if the possible character set were huge, like Unicode-16 or something like that, or if str and remove are both relatively small... < 100 characters I suppose). You then iterate through the str with a source and destination index, copying each character only if its corresponding value in the lookup array is false... Which makes sense... I don't understand the code that they use, however... They have for(src = 0; src < len; ++src){ flags[r[src]] = true; } which is turning the flag value at the remove string indexed at src to true... so if you start out with `PLEASE HELP` as your str and `LEA` as your remove, you will be setting in your flag table at `0,1,2... t|t|t`, but after that you will get an out of bounds exception because r doesn't have anything greater than 2 in it... even using their example you get an out of bounds exception... Is their code example unworkable? **Entire function** string removeChars( string str, string remove ){ char[] s = str.toCharArray(); char[] r = remove.toCharArray(); bool[] flags = new bool[128]; // assumes ASCII! int len = s.Length; int src, dst; // Set flags for characters to be removed for( src = 0; src < len; ++src ){ flags[r[src]] = true; } src = 0; dst = 0; // Now loop through all the characters, // copying only if they aren\u2019t flagged while( src < len ){ if( !flags[ (int)s[src] ] ){ s[dst++] = s[src]; } ++src; } return new string( s, 0, dst ); } as you can see, r comes from the remove string. So in my example the remove string has only a size of 3 while my str string has a size of 11. len is equal to the length of the str string. So it would be 11. How can I loop through the r string since it is only size 3?
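My guess at the intended loop (my own correction, in the same style as the book's code - not an official erratum) is:

```
// Set flags for characters to be removed: iterate over the REMOVE string
for( src = 0; src < r.Length; ++src ){ flags[ r[src] ] = true; }
```

Since `src` is reset to 0 right after this loop anyway, reusing it here would be harmless.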
I haven't compiled the book's code so I can step through it, but just looking at it I know it won't work. I am thinking they wanted to loop through the r string... in other words, they got the length of the wrong string here."} {"_id": "93437", "title": "What are the biggest bottlenecks when developing large projects?", "text": "Let's say that my company were to develop a replica of MS Word (just as an example). What would be the bottleneck in the development process, assuming that one has infinite cash available and an organization like Microsoft? In other words, what are the most usual hindrances to developing such software fast? Let's assume that all specifications are in place and the organization is working perfectly, so we just focus on the software development until the product is ready to be shipped. Some alternatives might be: \\- Writing the code \\- Writing tests \\- Manually testing the end product \\- Rewriting the code due to poor design in the first place \\- Designing the code \\- Code review done by experienced developers \\- Designing the GUI \\- Redesigning the GUI based on alpha/beta-user feedback \\- Processing feedback from users \\- Waiting for alpha/beta-user feedback Please use references in your answer or state your experience on the subject."} {"_id": "165594", "title": "Is there a known algorithm for scheduling tournament matchups?", "text": "Just wondering if there is a tournament scheduling algorithm already out there that I could use or even adapt slightly. Here are my requirements: * A variable number of opponents belonging to a variable number of teams/clubs; each must be paired with an opponent * Two opponents cannot be from the same club * If there are an odd number of players, 1 of them is randomly selected to get a bye Any algorithms related to this sort of requirement set would be appreciated. **EDIT:** I only need to run this a maximum of one time, creating matchups for the first 'round' of the tournament."} {"_id": "23848", "title": "Being a good mentee - a prot\u00e9g\u00e9", "text": "The complement of the Being a good mentor question. I work with many very senior people who have vast amounts of knowledge and wisdom in software, engineering and our business domain. What are some tips for gaining as much knowledge from them as possible? I don't want to take up too much of their time, but also want to take full advantage of this, since it really could help develop my skills. What are some good questions to get the conversation rolling in a sit-down mentor/mentee session? Some of the people providing mentorship have little experience in this area, so it would help to be able to lead some of these discussions."} {"_id": "253082", "title": "Right options for the Creative Commons license for the theme", "text": "First, I am not much of an expert on license terms and systems, but I have some basic knowledge. I have been developing a theme for one of the open source PHP scripts for two versions now. It has been packaged and included under GPLv3 till now. Currently, I am working on a new version and am now ready to release an alpha version. But before I release it, I just want to change the license to Creative Commons. This is mainly to protect the copyright/credit notes a bit. I want to allow users to modify and alter the theme, and they can redistribute it. But it must contain the original copyright and credit notes and the license. I am not sure whether I can release the new version under a CC license, and if yes, what the best option for it would be.
Also, will just a logo (without the entire description) be considered to carry the same rights?"} {"_id": "253084", "title": "How to store satellite data in C data structures", "text": "I've been reading through Introduction To Algorithms 3rd Ed, and am really enjoying the material; however, I am having difficulty in implementing some practical situations. It's not the theory, or implementing the internals of the data structures themselves, but rather how to design a good interface that makes the data structures useful for storing real data (as opposed to just the keys). Specifically, I'm told that C has a very \"trust the programmer\" type philosophy. I just don't know how much trust is too much trust... As an example, many data structures can be implemented as linked structures with similar nodes having a key, satellite data, and pointers to other nodes in the ds, ex: typedef struct node_t { int key; void *data; struct node_t *prev; struct node_t *next; } node_t; Q1. To what extent should I be encapsulating my implementations? For most structures, you have both: (a) a data structure type, and (b) a type that can be inserted/deleted from the data structure. Should they both be hidden? Clearly if I expose some `node_t` with the intention that someone only alters `key` and `*data`... there's the possibility `prev` or `next` is altered, thus destroying the data structure. A similar risk exists if I expose the data structure type and someone messes around with the root... Q2. How should the keys be stored/accessed/compared? Presumably, the keys depend on the satellite data, but if you take the approach above, you've decoupled the key from the data itself. What would happen if a user updates the data but forgets to update the key (or worse, they update the key, but don't remove/reinsert it into the ds)? For something like a binary tree which uses only comparisons, I could design the interface to accept comparators that take pointers to the satellite data? This makes the structure more generalized, but all the dereferencing has to have an impact on speed. Alternatively (and this would be needed for structures like hashes, which need a concrete key), I could accept a pointer to an `int toKey(void *data)` function... I'm relatively new to C (and designing software) and am unclear on how good software should be designed for use in the real world."} {"_id": "141458", "title": "Visually and audibly unambiguous subset of the Latin alphabet?", "text": "Imagine you give someone a card with the code \"5SBDO0\" on it. In some fonts, the letter \"S\" is difficult to visually distinguish from the number five (as with the number zero and the letter \"O\"). Reading the code out loud, it might be difficult to distinguish \"B\" from \"D\", necessitating saying \"B as in boy,\" \"D as in dog,\" or using a \"phonetic alphabet\" instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud? * * * Background: We want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, \"123456\". In base 10 this can encode **10**^6 values. In hex, \"1B23DF\" can encode **16**^6 values in the same number of characters, but this can sound ambiguous when read aloud (\"B\" vs. \"D\"). Likewise, for any string of N characters, you get (size of alphabet)^N values.
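As a reference point, Crockford's Base32 already solves the visual half of this problem - it drops I, L, O and U as look-alikes. A quick sketch of the resulting capacity (with the caveat that sound-alike pairs such as B/D and M/N would still need pruning for read-aloud use):

```php
<?php
// Crockford's Base32 alphabet: digits 0-9 plus letters minus I, L, O, U.
// It addresses look-alikes only; aural ambiguity needs further pruning.
$alphabet = '0123456789ABCDEFGHJKMNPQRSTVWXYZ';
echo pow(strlen($alphabet), 6); // 32^6 = 1,073,741,824 six-character codes
```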
The string is limited to a length of about six characters, due to wanting to fit easily within the capacity of human working memory. Thus, to find the max number of values we can encode, we need to find the largest unambiguous set of letters/numbers. There's no reason we can't consider the letters G-Z, and some common punctuation, but I don't want to have to manually pairwise-compare \"does G sound like A?\", \"does G sound like B?\", \"does G sound like C?\" myself. As we know, this would be O(n^2) linguistic work to do =)..."} {"_id": "32781", "title": "Declarative programming vs. Imperative programming", "text": "I feel very comfortable with imperative programming. I never have trouble expressing algorithmically what I want the computer to do once I've figured out what it is that I want it to do. But when it comes to languages like SQL, I often get stuck because my head is too used to imperative programming. For example, suppose you have the relations band(bandName, bandCountry), venue(venueName, venueCountry), plays(bandName, venueName), and I want to write a query that says: all venueNames such that for every bandCountry there's a band from that country that plays in a venue of that name. **EDIT (due to answers): in other words, I want all the venueNames in which bands from _all_ countries (bandCountry) played. Also, by \"relation\" I mean an SQL table.** In my mind I immediately go \"for each venueName iterate over all the bandCountries, and for each bandCountry get the list of bands that come from it. If none of them play in venueName, go to the next venueName. Else, at the end of the bandCountries iteration, add venueName to the set of good venueNames\". ...but you can't talk like that in SQL, and I actually need to think about how to formulate this, with the intuitive imperative solution constantly nagging in the back of my head. Did anybody else have this problem? How did you overcome it? Did you figure out a paradigm shift? Make a map from imperative concepts to SQL concepts to translate imperative solutions into declarative ones? Read a good book? PS I'm not looking for a solution to the above query; I did solve it."} {"_id": "32786", "title": "Programmer aptitude test", "text": "I have many friends that see what I do, find it interesting, and ask me the question: Do you think I could be a programmer? My response is ... ummm ... do you like math? I'd like to have a helpful response, so I wondered if anyone knew of a fairly decent aptitude test for someone who would be starting from square one, but has critical thinking and problem-solving skills?"} {"_id": "55580", "title": "Vulnerabilities and employability", "text": "I am a web developer. If I find a vulnerability in a prospective employer's website and notify them of it at the same time as I send my (unsolicited) application, am I more likely to get the job? Less likely? Why?"} {"_id": "56058", "title": "Ways to find a tutor for 1st year University student", "text": "I tried searching here (and SO) without much luck. I also stared at the yellow box to the right and think this question is relatively on topic and can be answered. A co-worker asked me if I had any suggestions for how to find a tutor for his son. In this specific case, it was for Eclipse and Java, but it got me thinking about good general strategies one could use in situations like this. He preferred a local 1-to-1, but I suppose online might be a reasonable (or perhaps more likely) alternative.
Any suggested strategies?"} {"_id": "56055", "title": "How to handle growing QA reporting requirements?", "text": "**Some Background:** Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived, everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized. **The Problem at Hand:** Management would like to track, per developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per-URL or per-\"page\"), yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to \"score\" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue. **[UPDATED] The Actual Questions:** How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of the individuals and rather focus on the team? Some other questions: * Is this reporting too granular? * Is this considered 'blame culture'? * If you were the developer working in this environment, what would _you_ define as a measurable goal for this year to track your progress, with the reward for achieving the goal being a bonus?"} {"_id": "56056", "title": "Does it make sense to compute cyclomatic complexity/lines of code ratio?", "text": "In general, the maintainability index relies on many factors. For example, in Visual Studio, it relies on cyclomatic complexity, depth of inheritance, class coupling and lines of code; those four values must be as low as possible. At the same time, I've never seen, either in code metrics tools or in books, a comparison between only cyclomatic complexity (CC) and lines of code (LC). Does it make sense to compute such a ratio? What information does it give about the code? In other words, is it better to decrease CC more than LC to have a lower ratio? What I notice is that for small projects, the ratio CC/LC is low (\u2153 and lower). In other words, LC is high, and CC is low. In large projects, CC/LC is in most cases bigger than \u00bd. Why?"} {"_id": "36202", "title": "How Do You Test Your Software?", "text": "I'm currently working on 2 software projects: 1. A Social Networking Web Site for an NGO 2. A Patient Management System for a hospital Although I've been programming for 5 years, I can't just say that I'm very good at testing or test-driving the design of an application. How would you arrange your software testing before the coding phase, during coding, and after you have finished the coding phase, for * a. providing stakeholders with information about the quality of the product or service under test. * b.
providing an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. And how do you confirm that your program/application/product * c. meets the business and technical requirements that guided its design and development * d. works as expected? P.S. Please feel free to edit the question, since my English is not very strong."} {"_id": "36200", "title": "How do I improve my skills as a Qt GUI programmer?", "text": "I'm a Python & Qt programmer and my day job is creating small GUI programs to ship with hardware devices. However, my job is pretty basic: read/write data to the device through a serial port using pyserial, and display and edit data using PyQt and PyQwt. Because of how basic this job is, I'm not confident there's a lot of job security here. What can I do to improve my skills in this area?"} {"_id": "36207", "title": "What is the most secure way to \"Grandfather In\" existing users of a paid iOS app that will go free?", "text": "The title pretty much says it all, but I can elaborate. I have a paid iOS app that has plenty of existing customers. I think I want to convert to a free app now, and allow a full upgrade via in-app purchase. The problem is, I don't want to make my existing customers buy the app again to use it, nor do I want to make it easy for hackers to just flip a switch and get the pro version. What is the **most secure** way to \"Grandfather In\" existing users of a paid iOS app that will go free?"} {"_id": "130088", "title": "How to resolve methods with the same name and parameter types?", "text": "In many cases, I want to write methods that have the same functionality for different types of inputs. This is easily accomplished by method overloading if the parameter types are different. But what's the best (most robust) way to go about resolving the case when the parameter types are the same (i.e. two different representations of the data with the same type)? An example of this would be an integer matrix which can naturally be stored as an `int[][]`. But what if you want to write a method which accepts the transpose of the matrix as well? The transpose is also an `int[][]`, but a clearly different representation altogether. I can see a couple of ways of doing this: * Giving the methods different names * Adding a flag to the method * Wrapping each representation in a different class I think the third method is the clearest way of doing this. Unfortunately I'm working on some high-performance libraries where that's not a feasible solution."} {"_id": "186910", "title": "Is it called an instance in Javascript?", "text": "Say I have a function. function foo(){ //do stuff } And then I create an object of that function. var fooObj = new foo(); What is `fooObj` called? An instance, an object instance, or something else entirely? Pardon the newbie question, but I am rather new to prototype-based programming (and all programming, really)."} {"_id": "130082", "title": "Best way to convincingly evangelize front end best practices to colleagues?", "text": "I've had experiences in the past working with (traditionally back-end) developers who occasionally cross into the front-end realm.
The resulting code would typically involve: * global namespace pollution (many, many global functions included inline without flexibility or re-use in mind) * many one-off implementations of functionality which is already supported, or may be more easily implemented with the assistance of an already-included library (raw JS implementations of AJAX/DOM-related functionality) * mass groupings of inline styles As a (relatively) seasoned front-end professional, I've had some success conveying best practices to colleagues who were interested in learning. However, that success was largely based on trust and open-mindedness. What are one's options when it comes to responding to questions about why some accepted \"best\" practices are genuinely better, in a way which gains said trust and interest?"} {"_id": "218039", "title": "Programming methodologies at Stack Overflow", "text": "I am in the middle of starting up a software company where we would use ASP.NET MVC and ASP.NET Web API extensively at our shop. We will be a group of 4, and no more than 10 will work on any particular project at any point in time (these are ground rules). I would like to know what programming methodologies best suit a small (guerrilla) team. Specifically, I would also like to know which ones are being used at famous ASP.NET MVC shops like Stack Overflow. The ones I know are: 1. Scrum and 2. Waterfall (I know it's bad). But what's the recommended way of developing for a smaller team of 9-10 people? Also, will Test-Driven Development help such a team in producing quality software? Are there any other techniques the team will have to know to be good at producing quality software?"} {"_id": "128275", "title": "What exactly is an enterprise app store?", "text": "My idea of an enterprise app store is that there is a page on the company's website where all the apps that that company developed for its employees are listed so they can use them. And access to these apps is restricted to the employees of the company. As far as I know you can download an Android app from any website and install it on an Android phone; you don't have to use Android Market. However, installing an app that does not come from the Android Market is not that straightforward. For Apple, you can only install an app from the App Store, so how can you have an enterprise app store? Please tell me what exactly an enterprise app store is. I think I got it wrong; I have also read a lot on the internet about these keywords."} {"_id": "140898", "title": "Implementing an ILogger interface to log data", "text": "I need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface will be used for testing and also in other projects. This is my interface: //This could be used by filesystem, webservice public interface ILogger { List<string> PreviousLogRecords { get; set; } void Log(string data); } public interface IFileLogger : ILogger { string FilePath { get; set; } bool ValidFileName { get; } } public class MyClassUnderTest { public MyClassUnderTest(IFileLogger logger) {....} } [Test] public void TestLogger() { var mock = new Mock<IFileLogger>(); mock.Setup(x => x.Log(It.IsAny<string>())).AddsDataToList(); //Is this possible?? 
var myClass = new MyClassUnderTest(mock.Object); myClass.DoSomethingThatWillSplitThisAndLog3Times(\"1,2,3\"); Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count); } I don't believe this will work as it stands, since nothing is storing the items. So, is this possible using Moq? And what do you think of the design of the interface?"} {"_id": "74127", "title": "How do I determine which language/framework is best for our web-based project?", "text": "I'm in the early stages of development for a web application that has three developers (myself included) working on it. The project is, at its core, a web-based database that will be used by around 5,000 people. Its primary purpose is to track information about game characters while enforcing a set of rules and security. Meanwhile, it's supposed to be as usable as possible. While the main presentation will be over a networked desktop web browser, we're also hoping that * certain features of the application will be usable while disconnected from the network and * that we can develop a version of the frontend for mobile devices. Here is some basic background for the developers; I think it's fairly relevant to the question. * Developer A maintains the system we are redesigning. It was built in PHP but there is very little actual code that we can keep. He also has veto power, though he takes suggestions readily. * Developer B is familiar only with VB and SQL, though he has been studying AJAX and HTML/CSS lately. * Developer C (me) has a degree in Software Engineering with experience in multiple (non-MS) languages as well as with some general web/database development, but has only developed code in Ruby since graduating (2009). Dev C has experience in an older version of PHP and helped work on one project in the latest version of Rails back in 2008. The main tools we're considering are Ruby on Rails 3 and PHP 5. Developer A seems fairly opposed to learning Rails, but my guess is that he is assuming it is more difficult to learn than it actually is. I don't know that for sure, though. Regardless of which we choose, I want to use an MVC architecture. **What are some other notable concerns we should address in order to determine which language would best suit us?** Are any of the concerns listed below trivial? The main issues/points/concerns I _think_ I need to consider/address are: * The learning hurdle for Rails - for developer A, mostly, but also whether it would be harder for developer B to learn a bit of Rails just to help out than it would be for him to learn a bit of PHP. * The possibility that there will be performance issues with Rails. * The lack of forced structure with PHP - should I expect difficulties enforcing an MVC structure? * Ease of AJAX integration in PHP vs. Rails."} {"_id": "74126", "title": "How likely is it for a programmer to become a good designer?", "text": "I'm a programmer, and I'm pretty good at programming. I can easily pick up most technical topics such as Linux, systems administration, databases and, of course, new programming languages. I even dabbled with 3D modelling and wasn't horrible at it. However, I am awful at graphic design. I have no skill there whatsoever, and even after quite a few Photoshop tutorials I can't design a website (or anything else really). I'm wondering, if I pour a whole bunch of time into it, whether I can become a _good_ graphic designer. 
So what I'm looking for is anyone else who has done this, or advice, thoughts, etc."} {"_id": "196094", "title": "Is a try and catch that does not throw an exception more efficient than a conditional?", "text": "I came across this example recently: > If 999 times out of 1,000 an exception will not be thrown then the exception > is only generated once. On the other hand a conditional would have been > called needlessly 999 times, therefore in this case the exception is > superior. In this instance it's C#, but generally speaking is this true? I had previously assumed try/catch statements had their own overhead that would equal the time spent handling a conditional. Granted, just throwing try/catch blocks anyplace a conditional would normally go would be a terrible way to code, but resource-wise does this statement hold up?"} {"_id": "253558", "title": "Type inference in Golang/Haskell", "text": "I've read that Go doesn't actually have true type inference in the sense that functional languages such as ML or Haskell have, but I haven't been able to find a simple-to-understand comparison of the two versions. Could someone explain in basic terms how type inference in Go differs from type inference in Haskell, and the pros/cons of each?"} {"_id": "253559", "title": "How would you create a program where two objects randomly move throughout the screen BUT never collide?", "text": "They are free to move anywhere on the screen, whether in straight lines or curved ones, just as long as they are never at the same place at any given time. By the way, I'm not asking how to write it in code, but rather about the logic, if you will. 1. How would you make an object move randomly? (disregarding predefined functions or classes, etc.) 2. How do you ensure they never hit?"} {"_id": "205605", "title": "Best data structure for representing English verb forms", "text": "I need to come up with a data structure to keep information about English verb forms. In most cases a verb can be in one of 4 forms: base, present participle, past participle and past simple, for example: * take * taking * taken * took It's seemingly easy to define 4 types, one for each form, and be done with it. However, there are a few exceptions that ruin this simple idea. 1. The third-person singular present form, which in our example would be \"takes\". 2. The copular verb \"to be\" has multiple irregular forms: \"am\", \"is\" and \"are\" in the present tense, and \"was\" and \"were\" in the past tense. 3. Verbs like \"may\" that don't inflect in the third-person singular present form: \"she may\". What data structure would be efficient, accurate yet unambiguous for representing such information (with the exceptional cases), given that the following requirements have to be met: * for an arbitrary form, answer the question of which conjugations the form represents * for an arbitrary conjugation and a form, answer the question of whether the form represents the given conjugation or not?"} {"_id": "253553", "title": "Common phrases to know which resources are no longer available", "text": "I'm working on a script for tracking URLs on a web page. This script runs daily to identify resources that are no longer available. If a page returns a `status code` like `404` I know the resource is not available anymore, but sometimes the page returns a `200` with a message like: > This resource has been moved to ...... Or > This resource is not available anymore.... 
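A minimal sketch of the kind of check such messages call for (the phrase list below is purely illustrative, not a canonical set):

```csharp
using System.Linq;

static class SoftGoneDetector
{
    // Illustrative phrases only; a real list would grow from pages observed in the wild.
    static readonly string[] GonePhrases =
    {
        "no longer available",
        "has been moved",
        "has been removed",
        "page not found"
    };

    // Returns true when a 200 response body still signals a dead resource.
    public static bool LooksGone(string responseBody)
    {
        string text = responseBody.ToLowerInvariant();
        return GonePhrases.Any(phrase => text.Contains(phrase));
    }
}
```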
I would like to know which `keywords` are used to say a resource is no longer available, regardless of the `status code`."} {"_id": "200231", "title": "Where to learn graph theory applications", "text": "I am completely new to designing algorithms with graphs. I am following CLRS and other video lectures on YouTube, notably from IIT/MIT. They are pretty good, and I currently have a decent idea about graph data structures, search, spanning trees, etc. However, I am completely clueless as to how to identify a coding problem (the likes of which you see on Topcoder/Codechef) that requires a graph-based approach. In which problems will I need to use a minimum spanning tree? Where do I need to use Prim's Algorithm? Is there any book/resource which covers lots of problems on graphs, explaining (well, kind of spoon-feeding) how to identify that a problem requires a graph-based solution, and finally how to do it?"} {"_id": "205606", "title": "Strategy for keeping secret info such as API keys out of source control?", "text": "I'm working on a website that will allow users to log in using OAuth credentials from the likes of Twitter, Google, etc. To do this, I have to register with these various providers and get a super-secret API key that I have to protect with pledges against various body parts. If my key gets ganked, the part gets yanked. The API key has to travel with my source, as it is used at runtime to perform authentication requests. In my case, the key must exist within the application in a configuration file or within the code itself. That isn't a problem when I build and publish from a single machine. However, when we throw source control into the mix, things get more complicated. As I'm a cheap bastard, I'd much prefer to use free source control services such as TFS in the cloud or GitHub. This leaves me with a slight conundrum: **How can I keep my body intact when my API keys are in my code, and my code is available in a public repository?** I can think of a number of ways to handle this, but none of them are that satisfying. * I could remove all private info from code, and edit it back in after deployment. This would be a severe pain to implement (I won't detail the many ways), and isn't an option. * I could encrypt it. But as I have to decrypt it, anyone with the source could figure out how to do so. Pointless. * I could pay for private source control. LOL j/k spend money? Please. * I could use language features to segregate sensitive info from the rest of my source and therefore keep it from source control. This is what I'm doing now, but it could easily be screwed up by mistakenly checking in the secret file. I'm really looking for a guaranteed way to ensure I don't share my privates with the world (except on snapchat) that will work smoothly through development, debugging and deployment and be foolproof as well. This is completely unrealistic. **So what realistically can I do?** Technical details: VS2012, C# 4.5, source control is either going to be TF service or GitHub. Currently using a partial class to split the sensitive keys off in a separate .cs file that won't be added to source control. I think GitHub may have the advantage, as .gitignore could be used to ensure that partial class file isn't checked in, but I've screwed that up before. 
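For reference, a minimal sketch of that partial-class split (file, class and member names are hypothetical):

```csharp
// ApiKeys.cs -- safe to commit; exposes the key but contains no secret.
public static partial class ApiKeys
{
    public static string TwitterKey { get { return TwitterSecret; } }
}

// ApiKeys.Secrets.cs -- listed in .gitignore and never committed;
// the only file that holds the actual value.
public static partial class ApiKeys
{
    private static readonly string TwitterSecret = "placeholder-key";
}
```

The second file is exactly the one that must stay out of the repository; forgetting to exclude it is the failure mode described above.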
Am hoping for an \"oh, common issue, this is how you do it\" but I may have to settle for \"that doesn't suck as much as it could have\", :/"} {"_id": "200237", "title": "Having a Starting Place for Coding a Site", "text": "This is more of a preference question that I wanted to toss out to other developers to learn a bit more about others' processes when developing a site. Every time I start a site I feel I could save a lot of time if I had a better starting point to work from. I have a 'starter' folder that has a CSS folder, a JS folder, an index.html file, etc. that I use as a guide, but I make tweaks every time I build a site, so I feel like I end up working from the last site I built instead of the starter folder most of the time. I figured there has to be a better way to set up a starting place that gets you up and running programming a site more efficiently. What do you use to handle this in your workflow? Where/how do you save code snippets to reuse from site to site, etc.?"} {"_id": "75839", "title": "I've learned so much about OO programming I have no idea how to write procedural code. What's a good way to learn?", "text": "When I learned to program, I learned Object Oriented Programming very early on. For a while, I blundered around with my beautiful hammer, trying to use it for _everything,_ partially because I had no idea how to solve problems any other way. As I read more, I began to realize you need to use the right tool for the right job, and I quickly realized I had no other tools because I'd never learned any other style! I'm working on functional programming, and there are plenty of good books and articles about it. But what's a good way to learn procedural programming **the right way?** I have a basic idea of how to make global variables and pass state to functions, but I want to delve deeper into procedural code. I know most of the time people have to learn the other way, and procedural code is often bashed around for being unmaintainable, but I want to learn it anyway so I have more ways of approaching a problem. OOP may often be a 'better' choice, but right now it's the only thing I know, and I want to branch out. I know procedural code was virtually the only style available for programming for many years, so most books don't say \"we're going to teach you procedural programming!\" and instead just say they'll teach you how to program, and the procedural is assumed. Those people can just find books that have OOP in size 72 font on the cover and know that's what it teaches. But I'm trying to go the other direction and feel a little lost... How did you learn procedural programming? Which books/languages do you recommend? Are there any important tips to learning the **right** way to write procedural code so it's as maintainable and beautiful as possible? What tips would you give an OO junkie as to how to write awesome procedural code?"} {"_id": "203769", "title": "Programming A Function in Python", "text": "I am quite new to programming, and was wondering if someone might be able to help me get the following working in Python: Define $\\Phi_m x(mn+r) = m x(n) + \\frac{r}{m}(x(n+1) - x(n))$, where we consider x(0)=0, r is always positive and less than m, and all values of n are positive integers. $\\Phi_m$ takes in a string of numbers and outputs another string. 
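As a worked instance of the definition (the sequence values are illustrative): take m = 3, n = 1, r = 2, with x(1) = 5 and x(2) = 8. Then

$$\Phi_3 x(3 \cdot 1 + 2) = 3\,x(1) + \tfrac{2}{3}\bigl(x(2) - x(1)\bigr) = 15 + \tfrac{2}{3} \cdot 3 = 17.$$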
How might I program this?"} {"_id": "137708", "title": "Mentoring Millennials", "text": "I'm in the process of revamping our company's software development internship program and the area I'm most passionate on is implementing an enjoyable and effective student/intern mentoring program. We have quite a demographic variance among our software development groups, however, the vast majority would be considered Generation-Xers. **Over the years I've seen our younger Millennial-Generation students/interns/employees struggle when working with older developers.** Software architecture, Agile principles, the importance of TDD, design patterns, all of these are important and I'm looking for insight on communicating these things to interns/students/younger professionals. I would love to hear of any good tips regarding mentoring today's incoming Millennial-Generation software developers, especially students/interns. What's worked well for you, what hasn't? What observations have you made that taught you something about a younger software developer, how they think, what they value, etc. How can we make this an experience and industry that they're excited to be a part of? Thank you very much for any and all comments."} {"_id": "218209", "title": "Entity framework separating entities for product and customer specific implementation", "text": "I am designing an application with intention into making it a product line. I would like to extend the functionality across all layers and first struggle is with domain models. For example, core functionality would have entity named Invoice with few standard fields and then customer requirements will add some new fields to it, but I don't want to add to core Invoice class. For every customer I could use customer specific DbContext and injected correct context with dependency injection. Also every customer will get they own deployment public class Product.Domain.Invoice { public int InvoiceId { get; set; } // Other fields } How to approach this problem? Solution 1 does not work since Entity Framework does not allow same simple name classes. public class CustomerA.Domain.Invoice : Product.Domain.Invoice { public User ReviewedBy { get; set; } public DateTime? ReviewedOn { get; set; } } Solution 2 Create separate table and link it to core domain table. Reusing services and controllers could be harder. public class CustomerA.Domain.CustomerAInvoice { public Product.Domain.Invoice Invoice { get; set; } public User ReviewedBy { get; set; } public DateTime? ReviewedOn { get; set; } }"} {"_id": "146498", "title": "Programming languages, positional languages and natural languages", "text": "Some programming languages are modeled on machine code, like assembly languages. Other languages are modeled on a natural language, the English language. Others are not modeled on either machine code or natural language. Languages such as PROLOG, for example, don't follow either model. I came across this Perl module Lingua::Romana::Perligata, that allows to write programs using a syntax that is very similar to Latin. Are there programming languages that have less positional syntax? Are there other languages or modules that allow you to write in syntaxes inspired by other natural languages, like French, Hebrew or Farsi? There is a very long list on Wikipedia, but most of those projects are dead. There is a related question on StackOverflow. 
The answer that was accepted is \"Use Google\"."} {"_id": "39922", "title": "Where to put profile test?", "text": "I have an application with multiple threads that may be run on different hardware. To assist with tuning on different hardware I would like to create a \"profiler\" that can automatically run a fixed amount of data through it using different numbers of threads. I've thought of several ways of implementing this: 1. It's a test, so put it in with the unit tests 2. It's also a part of the app, so make it part of the top-level class running the app 3. It's a helper, so create an entirely new class `Profiler` I'm leaning toward option 2, because I think it is simplest to implement and seems to fit in well. Anyone have any other ideas or comments?"} {"_id": "39923", "title": "Why are GRASP patterns less known than GOF ones?", "text": "Design patterns help developers improve the quality of their designs, but only the GOF patterns are widely known; patterns like GRASP, which give good concepts like Information Expert, low coupling and high cohesion, are less known. I think that the GRASP patterns could be a good introduction to patterns for students and developers; they would master the GOF patterns better if they knew the GRASP patterns."} {"_id": "218205", "title": "Understanding exceptional cases", "text": "I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when unusual input/state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data by using an invalid DQL statement, and a developer should immediately be warned that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except unit testing. Since this is true, it's better to avoid plaguing a bunch of code with null/error checks and use exceptions. I've read in books that exceptions shouldn't be thrown when validating user input. I've seen examples where the guideline is broken. One example is the Zend Framework. If you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they have used an exception in this scenario. I guess you can argue the Zend Framework does not know how to continue processing the request. One last example: I wrote a function to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with a web site (such as clicking a button to display the form to input data). I don't _expect_ users to be mucking around with the request type. Under normal operating conditions, the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and reusing a function to map resources to their validation services would be very useful. 
I can then catch the exception and, based on the consumer, return an error message suitable for the request type..."} {"_id": "223146", "title": "Is a tree with nodes that have reference to parent still a tree?", "text": "If we make a reference to the parent for each node in a tree, do we still have a tree (by definition) anymore? The Wikipedia definition is: > In computer science, a tree is a widely used abstract data type (ADT) or > data structure implementing this ADT that simulates a hierarchical tree > structure, with a root value and subtrees of children, represented as a set > of linked nodes. ![enter image description here](http://i.stack.imgur.com/SXgBW.png)"} {"_id": "145034", "title": "Should large or old codebases be expected to be easy to navigate?", "text": "I'm an undergraduate Computer Science student currently on a placement year at a company that produces and supports a large enterprise web application. I'm loving the experience of seeing how software is produced in the real world, and feel very lucky to find a company that offers a chance to not only maintain and extend existing functionality but also develop entirely new features for the product. All that said, though, I'm very conscious that this is very, very unlikely to be a perfect example of how to develop correctly. Far from it, in fact. I feel I'm learning a massive amount from my experience here, and I don't want to learn the wrong things or pick up bad habits from colleagues that could be hard to shake off further down the road. Mostly it's easy to tell what's good and what's not - for example, the unit test coverage here is practically non-existent for various reasons (mostly poor excuses mixed in with one or two valid points). Lately, though, I've been noticing a regular occurrence that I'm just not sure about. Whenever we start a new project, naturally we need to find any relevant code that needs to be extended, altered, or removed. It seems to me that, the vast majority of the time, anything not within the most commonly used sections of the application takes people an age to find within the codebase. There are one or two tech leads who know their section of the code well, but even they get stumped sometimes and have to spend a long time searching for what they require, or turn to someone who has been editing that part of the code recently (if anyone) for help. When I say a long time, I don't mean hours (usually), but it seems to me that a good codebase would be navigable to any point within a few minutes at worst, to anyone even vaguely familiar with the system. So, my question. Is the above problem due to poorly structured code? Alternatively, is it down to developers not having enough knowledge of the codebase? Or is it simply unavoidable in large applications, regardless of how much work goes into keeping the file structure clear? Or, indeed... am I just wasting my time on a topic that really doesn't matter?"} {"_id": "98264", "title": "Technical manager (not lead architect) of development team", "text": "Can you be a technical manager of a development team without knowing the technology they work with? I'm very technical, having been a tech lead for a while; now I'm a manager, and the new team reports to me, administratively and technically. Problem is, a big part of the technology being used is old and archaic (Delphi). Is it possible for someone with 'zero' experience in that to lead that team? 
What are some of your experiences?"} {"_id": "166029", "title": "Are dynamic languages at disadvantage for agile development?", "text": "From what I've read agile development often involves refactoring or reverse engineering code into diagrams. Of course there is much more than that, but if we consider the practices that rely on these two methods, are dynamically typed languages at disadvantage? It seem staticly-typed languages would make refactoring and reverse engineering much easier. Is Refactoring or (automated) reverse engineering hard if not impossible in dynamically typed languages? What does real world projects tell about usage of dynamically typed languages for agile methodology?"} {"_id": "98267", "title": "Agile Retrospective Ideas", "text": "I am a Junior at workplace and i have been to a number of our retrospectives over the last year. I have been asked to facilitate a retrospective of my own. So far, we have done \"hats (red, green, white etc)\", \"mad, sad, glad\", \"imagining going forward to end of next sprint, and discussing how it might have gone\". I would like to try something new. If you practice agile in your workplace, what do you use and would you recommend trying it for a retrospective? Thanks, Kohan"} {"_id": "166023", "title": "What does \"windowed streaming\" stand for?", "text": "So I was asking around the Mercurial development mailing list about binary diffing and patch handling and I got the following examples: * whole file approaches (classic diff, bsdiff, Mercurial's internal bdiff) * windowed streaming approaches (rsync, xdelta, libxdiff) What does \"windowed streaming\" stand for in this context? (and in general)"} {"_id": "227658", "title": "Front controller in PHP", "text": "When you are reading about web application development, \"front controller\" and \"single point of entry\" are frequent words you are confronted with. To my understanding, the point is to provide a single location where you decide what to do in your program. That's all nice and well in theory, but I am confused as to how this would look in reality. In OOP, I suppose your \"controller\" would commonly be instantiated in the \"index.php\", or whatever file is your \"entry point\". But how to go from there and how does the controller class look like? Is it something along the lines of this: //Dummy-code, it's just to make a point class Controller { //first function to be called public function main() { switch($appAction) { case \"login\": userLogin(); break; case \"view\" : viewArticle(); break; //... } } protected function userLogin() { //Sanitisation of user input //making instances of necessary classes etc. } //... } or did I get the whole concept completely wrong? This is the only thing I can imagine to be the case, but I'm just not sure if there is a major mistake in approaching it this way or if I just misunderstood the purpose of a front controller."} {"_id": "232393", "title": "How to manage multiple database credentials across multiple projects", "text": "We have 10 separate projects that all access the same database. Initially, all 10 projects had database credentials hardcoded into them. I decided to move the credentials into a utility method and have all of the projects call that instead. For example public String getDBUser() { return \"username\"; } So instead of hardcoding it in 10 places, I hardcode it in one place. All was fine. 
Fast-forward some months and I run into a wall with this solution, because now there are different credentials depending on which database server we're trying to connect to. For context, I'm trying to set up properly separated testing/production environments, but the servers have different usernames and passwords. One solution is to simply change the credentials to make them the same, but I imagine this is sometimes not an option, and if I run into such a situation I would like to be prepared. What is a good way to manage multiple database credentials across multiple applications?"} {"_id": "101838", "title": "Class as first-class object", "text": "Could a class be a first-class object? If yes, how would the implementation look? I mean, what could the syntax for dynamically creating new classes look like? EDIT: I mean what example syntax could look like (I'm sorry, English is not my native language), but I still believe this question makes sense - how do you give this functionality while keeping the language consistent? For example, how do you create a reference for a new type? Do you make the reference a first-class object too and then use something like this: Reference r = new Reference(); r.set(value); Well, this could get messy, so you may just force the user to use Object-type references for dynamically created classes, but then you lose type-checking. I think creating a concise syntax for this is an interesting problem, and solving it could lead to better language design, maybe a language which is a metalanguage for itself (I wonder if this is possible)."} {"_id": "101836", "title": "How much should Developers interact with Clients?", "text": "A philosophical discussion has come up in my department that I'd like p.se's opinion on. We're a 6-person development department inside a 60-person IT shop. All other departments are growing FAST, and ours has been the same size (and not fully booked at that size) for a few years. My manager, an old-school mainframe guy, contends that developers should never ever have ANY client contact. That all interaction with the client should be mediated by a Project Management layer. He asserts that this allows a coder the focus they need in order to CODE, and protects the business's relationship with the client from the autism-spectrum tendencies of Joe Average Code Monkey. (My words, not his.) His boss, the owner of the company, told him this morning that every single person in the company needs to think of themselves as an extension of the sales department, and needs to be listening all the time for upsell opportunities. To this end, he thinks clients ought to have more or less full access to developers directly, in part so that devs have the opportunity to hear sales opportunities. I'm somewhere in the middle, myself. I think it's nice to shield devs from clients to some extent, but in practice that will never be total. And yes, every single job in the company includes sales, but that doesn't necessarily mean that everyone has the same opportunity for it. EDIT: We're not an agile shop. Some of us (cough) would like to head that direction, but for now assume this is a traditional fixed-bid-contract shop. EDIT2: The autism joke wasn't funny. Got it. Entirely possible that autism jokes never are. That said: there are developers who have a capacity to represent themselves and their employers well, and developers who don't (currently) have that capacity. 
My manager has a real concern about how the company would be represented if all developers were structurally empowered to be company representatives. It's also getting increasingly clear from reading your responses that the real push-and-pull here is between waterfall and agile."} {"_id": "137881", "title": "Flexible development environments for creating a common code base targeting tablets (iPad/Android) and x86 PCs", "text": "Hopefully this won't be flagged for being too vague, but I'm really looking for suggestions from anyone with experience in this sort of situation. I work for a group that develops very specialized engineering calculations for Intel x86 platforms. We do not intend to continue using this existing x86 code. Instead, we would like to start a new code base capable of targeting both x86 PCs and iPad and/or Android tablets. Clearly the UI code will have to be maintained independently. But I suspect the backend engineering code, which we want to keep common, can be shared among all platforms. I would like to know if there are any development environments or techniques for simplifying this multiple-platform development with a common backend code base, or whether it will be necessary to develop this backend code for each platform independently as well. Please let me know if you need any clarification! Thanks **EDIT** For example, we would need to write a large amount of code in C. Is it true that the iOS SDK's Objective-C compiler can build native iOS code using (largely) the same C code we would use to build native Windows code using MSVC? Mind you, the code would not need to rely on _specialized_ OS-dependent facilities. Are there any tools for assisting in this sort of cross-platform development that may require multiple compilers? Or perhaps any individual compilers that can target both platforms?"} {"_id": "101832", "title": "Why is Objective-C not widely used beyond Cocoa environments?", "text": "Objective-C features nice object orientation, simplicity, elegance, and (as a superset of C) low-level ability. It could seem like the simple, modern alternative to C++ that many people look for and try to find in Go. But it is used only in Cocoa and post-NeXTSTEP environments, and even in this case is seen more as a burden for historical reasons than as an optimal choice. Why is it not more widely used, then? What are its problems?"} {"_id": "101831", "title": "Why is the C family the standard CS study regimen for Mathematics/CS programs instead of the LISP family?", "text": "I have been familiarizing myself with LISP for self-improvement purposes. One of the things I have noticed is that LISP is much more within the paradigm of Mathematics than, say, C. The syntax and design structure seem to echo directly the actual mathematical model of an algorithm. It doesn't make sense to me why even good Mathematics-based CS programs study C instead of LISP. I think that LISP more directly employs higher mathematical concepts than C. I am not saying that you can't model mathematical structures in C. I am merely noticing that LISP seems to be hard-wired for mathematicians. I have read many of Joel Spolsky's rants on the Java schools and whatnot - and I agree with his assessment - but my school didn't teach Java for that very reason. They were stringent in teaching fundamental concepts like pointers, algorithm design, recursion, and even assembly instructions. However, they did this all in C and C++. 
Does anyone know the reasons for this and/or its history?"} {"_id": "101830", "title": "Question about Cyclomatic Complexity", "text": "I am new to static analysis of code. My application has a cyclomatic complexity of 17,754. The application itself is only 37,672 lines of code. Is it valid to say that the complexity is high based on the lines of code? What exactly is the cyclomatic complexity saying to me?"} {"_id": "33076", "title": "As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?", "text": "I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically _all_ my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are _very_ close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it\u2014when it's so close to what you already know anyway\u2014rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?"} {"_id": "135411", "title": "If ASP.NET MVC 4 supports RPC style communication what does that mean for WCF?", "text": "From http://www.microsoft.com/download/en/details.aspx?id=28942 > ASP.NET MVC 4 also includes ASP.NET Web API, a framework for building and > consuming HTTP services that can reach a broad range of clients including > browsers, phones, and tablets. ASP.NET Web API is great for building > services that follow the REST architectural style, plus it supports RPC > patterns. If ASP.NET MVC 4 supports RPC style communication, what does that mean for WCF? On what basis should we choose to use WCF or ASP.NET MVC Web API's RPC mechanism?"} {"_id": "18852", "title": "How do I learn linking, compiling, and makefiles quickly? Any book recommendations?", "text": "I'm very good at programming in C++, but when it gets to linking and the other important stuff I feel very ignorant. I want to learn Allegro without wasting time. So please suggest a book or a resource to learn the concepts mentioned above. Thanks a lot in advance."} {"_id": "134505", "title": "Are XML Schemas bad for constantly evolving file formats?", "text": "I'm struggling with a client-server project where I have Java apps out on the Internet that store data to a backend server. The format of this data is well-defined, but the project is constantly evolving, so the definition keeps changing! To cope with the change I defined a simple REST interface on the server that offers only key-value storage. Clients can store or retrieve a chunk of data by referencing a unique key. 
This is nice because I don't have to modify the server interface (or the backend database) when the data format changes. To the server, it's just a bunch of opaque blobs. Of course, the issue then becomes, \"What goes inside the blob?\" For that I wrote an XML Schema that defines the content of a blob. At first it was great, since the Schema gives a bunch of nice things \"for free\": a formal yet human-readable spec of the file format, automatic validation of its contents, marshalling/unmarshalling to a stream, and auto-generated Java classes for programmatic access to the data. But then change happened! The Schema had to be altered, and naturally I ran into forward- and backward-compatibility issues. To deal with the constantly changing Schema, I came up with a solution that embeds a version number into the XML namespace, and I apply a series of XSL Stylesheets to \"upgrade\" any given blob to the latest version. For example, I'm now on version 1.3 of my Schema, so when I unmarshal a blob, I run it through a 1.0-to-1.1 XSLT, then a 1.1-to-1.2 XSLT, and finally a 1.2-to-1.3 XSLT. This works, but it's not sustainable because the chain keeps getting longer, which lowers performance and sucks up memory, plus I have to keep writing new Stylesheets, which takes time and isn't fun. Now here's the funny thing... In addition to the Java clients, the project also has iOS apps as clients, and iOS has none of the nice enterprise-y features associated with XML Schemas. There's no validation of the stream, no auto-generation of Objective-C classes, etc., just a low-level event-driven XML parser. But ironically I'm finding this _so_ much easier! For example, if the XML gets a new element, I just add a new `if` clause. If an element goes away, I remove its clause. Basically, I do a \"best effort\" at interpreting the XML stream, silently ignoring any unrecognized elements. I don't need to think about what version the file format is or whether it's valid. Plus this is much faster because there's no XSLT chaining, and it saves a lot of my time because I don't have to write any XSLT code. So far this approach has worked out great, and I've not missed having an XML Schema on the iOS side. I'm now wondering if a Schema, despite its nice feature set, is totally the wrong technology for a file format that often changes. I'm thinking about ditching my XML Schema altogether and using the same \"best effort\" low-level approach in Java that I'm doing in iOS. So is my negative assessment of XML Schemas correct? Or is there something I've missed? Perhaps I need to rethink the server interface? Or maybe I shouldn't have been using XML in the first place? I'm open to all suggestions. Thanks for reading!"} {"_id": "41232", "title": "How to process payments for software (activation code)?", "text": "I want to sell software online and I need an easy-to-implement payment processing system. What I'm actually going to be selling is an activation code (one per purchase) that would activate the trial version of a product. I was about to use this one, but I just found out that people without a paid email account (i.e., not Hotmail or Yahoo) can't process their orders, which I'm sure would discourage many, if not most, of the possible buyers."} {"_id": "134502", "title": "Website with test data files for specific algorithms/data structures?", "text": "Is there any website, like SPOJ and Project Euler, with the test data files available for specific algorithms/data structures? 
I know it's a fun challenge to solve those problems on your own, but I am looking for test data files where the algorithm/data structure to use is explicitly mentioned. \"Implement a linked list and do this and that with the data to confirm!\", \"Implement a hash tree and do this and that with the data to confirm!\" etc."} {"_id": "84058", "title": "Should the test and the fix be written by different people?", "text": "There is a common practice in TDD to write a test before the fix to avoid regression and simplify fixing. I just wonder: what if the test and the fix were written by different people? The total time spent would be almost the same, but since three people (including the tester) would now think about possible failures, we would increase the probability that the fix covers all possible failure scenarios. Does this practice make sense, or will it just waste the additional time needed for one more person to familiarize themselves with the bug?"} {"_id": "249879", "title": "Is it poor form to use C features such as the size_t type instead of their C++ counterparts, such as std::size_t?", "text": "I have recently been told that using `size_t` as declared in the global namespace is incorrect in C++, ostensibly because `size_t` is a C feature. I looked this up and came across this question on Stack Overflow: http://stackoverflow.com/questions/5813700/difference-between-size-t-and-stdsize-t The top answer makes it pretty clear that there isn't any _real_ difference between `size_t` and `std::size_t`, but that leaves open the question of style and correctness. Since I'm programming in C++, is it \"wrong\" to use a C feature such as `size_t` in place of the slightly longer but no better C++-specific `std::size_t`?"} {"_id": "41238", "title": "Recording unstructured suggestions and feedback in an issue tracker?", "text": "I'd like to advocate the use of issue-tracking software within an organisation that currently does not use it. But there's one aspect of their situation for which I'm unsure of what to suggest: their projects frequently receive informal verbal feedback or casual comments in meetings or in passing from a wide group of interested parties, and all this information needs to be recorded. Most of these messages are noise, but they're vital to record and share with developers for two reasons: 1. Good suggestions often come out of this process. 2. It can be necessary to have evidence of clients' comments when they forget previous instructions or change their mind. Is this the sort of information that should be stored in an issue-tracking system, or kept apart in a separate solution? Can issue-tracking systems have good support for this sort of unstructured information?"} {"_id": "84054", "title": "Confused about the different terminology with .NET", "text": "What is the difference between .NET and ASP.NET? Which role does PHP more closely fill? If someone was developing a website using ASP.NET and C#, how would you communicate that idea to a colleague, i.e. what's the language you would use?"} {"_id": "249875", "title": "Can I include a modified GPL/Apache licensed font in my LGPL project?", "text": "I want to include a GPL & Apache dual-licensed font in my LGPL project, but didn't really find a clue as to whether that's possible. There are two problems here: 1. I didn't find any clue about the Apache license's \"LGPL compatibility\". I know both the licenses involved here are compatible with the GPL, but I still prefer to release the work under the LGPL. 2. It's kind of tricky since it's a font. 
It was GPL & Apache duel-licensed, but the LGPL terms are asking about \"Library\", its \"Public API\", \"Linking\" or something. I am not really sure what the operation belongs to if I just edit a .ttf file manually, not even sure about if that's counted as \"source\". Any help would be really appreciated!"} {"_id": "249872", "title": "Should a domain object wrap/contain a DTO interface?", "text": "Using .NET - I have an interface IPerson. This interface is implemented by classes in multiple, separate repositories, e.g. EF6 (EfPerson), custom SQL (SqlPerson), or even custom assembly connecting to a web service (WebPerson). Assuming rich domain model, my idea is that my lovely rich domain object 'Person' could have a private member variable _PersonDto of type IPerson, supplied via constructor. The members of Person would be the only way to access data from the _PersonDto. Q. Is there anything actually inherently wrong with that approach? (Assume I'm not lazy loading, and that I will possibly have a service layer for cross- cutting stuff). Please note I'm using DTO here to simply mean the anaemic objects I get back from my repositories."} {"_id": "249873", "title": "How to deal with multiple output modes of multiple types?", "text": "_Note: The business domain being a bit complicated to explain, I replaced the names of actual classes by more illustrative examples._ I'm writing an application in which the business layer returns a set of `Pet`s: either a `Cat` or a `Dog`: class Pet(metaclass=ABCMeta): # Pet is abstract. ... getName(self): ... class Dog(Pet): ... class Cat(Pet): ... I want to display those pets using a given format which is determined by the user. For example, the user may ask to show only the name of each pet, or obtain a JSON representation, or have a rich representation which is biased towards cats, i.e. it _displays much more information about cats than about dogs_. In the presentation layer, I have an abstract class `Output` with an abstract method which outputs the pets, and then I have concrete implementations of every type of output: a `class NameOutput(Output)`, a `class JsonOutput(Output)`, or a `class RichOutput(Output)`. The first two ones are easy: they just use the abstract `Pet` class, and don't care if it's a dog or a cat. The problem is the last class, since the logic is different for a dog than for a cat. class Output(metaclass=ABCMeta): # Output is abstract. generateOutput(self): for pet in self.pets: yield self.processPet(pet) @abstractmethod processPet(self, pet): pass class NameOutput(Output): ... class JsonOutput(Output): ... class RichOutput(Output): ... processCat(): ... processDog(): ... Determining the type of pet during the execution and calling different methods for each pet feels ugly. Also, when a maintainer would need to add `class Hamster(Pet)`, he will learn the hard way that he also should create an additional method in `RichOutput(Output)`. On the other hand, moving the output logic to the pets themselves doesn't look right either: what if I need to create an additional display format? What are my options? How is this problem usually solved?"} {"_id": "249870", "title": "Should I group all of my .js files into one large bundle?", "text": "One of the difficulties I'm running into with my current project is that the previous developer spaghetti'd the javascript code in lots of different files. We have modal dialogs that are reused in different places and I find that the same .js file is often loaded twice. 
My thinking is that I'd like to just load all of the .js files in _Layout.cshtml, and that way I know it's loaded once and only once. Also, the client should only have to download this file once as well. It should be cached and therefore shouldn't really be a performance hit, except for the first page load. I should probably note that I am using ASP.NET bundling as well and loading most of the jQuery/Bootstrap/etc. from CDNs. Is there anything else that I'm not thinking of that would cause problems here? Should I bundle everything into a single file?"} {"_id": "83589", "title": "How did they count the number of lines of code executed at runtime?", "text": "There was a PC game released in 2001 called Black & White by Lionhead Studios in which there was a lengthy statistics page which updated in real-time. There were stats such as how many people you've killed, how much money you've earned, etc... but the really puzzling one was Total lines of code executed, which was into the billions and counting. How would they have known this, how would they have calculated this at runtime? Did they make it up?"} {"_id": "83588", "title": "Tracking hours on a project", "text": "Other than the following, are there any helpful (to programmers) reasons to track hours/time on a project down to the \"accurate number of hours per week\" level? Reasons: * Beneficial to see how estimates vs. actuals came out (to improve estimation) * Management said so (for contractual or business reasons) This is a bit tongue-in-cheek, but my personal preference has always been to track at an abstracted level. For example: a person is either \"full time\" (defined as either 40hrs/week or a less intense 32hrs/week) or part time, or they spend a few hours here and there on the project. It seems (as a manager) that chasing any kind of accurate number for actual hours is much like measuring the length of a coastline. The more accurate you try to get, the more confounding it is. That said, I'd appreciate any insights on benefits to programmers. Thanks!"} {"_id": "40183", "title": "What is the advantage of learning about and understanding compiler construction?", "text": "I'm an undergraduate in my 3rd year of a Software Engineering degree. From this year on, my university has introduced a new course called 'Compiler Constructions', which teaches you the basics of the theory of building a compiler. What would be the real-world advantage for a Software Engineer of learning about compiler construction?"} {"_id": "113403", "title": "What is the equivalent of functions and methods in SQL?", "text": "In languages like C or Java, there are things like functions/methods to modularize and abstract stuff. What is there in SQL, specifically for Oracle? I've seen people talking about using views to reduce complexity in various blogs, but there is only so much that can be done with a view. (For example, a bind variable cannot be passed as a runtime parameter to a view). In tools like Oracle Reports the query sometimes gets very awkward, running to several hundred lines. I was wondering what sort of techniques people use to manage the complexity of their SQL? What are the equivalents of functions and methods in SQL to keep SQL from becoming extremely complex?"} {"_id": "113409", "title": "How to specify WIP limits in Kanban?", "text": "Consider a typical Kanban board: Input, Analysis, Dev Ready, Development, Build Ready, Test, Release Ready. How do you specify WIP limits for each column? 
any formula?"} {"_id": "83587", "title": "Is there a common denominator of all Smalltalk implementations?", "text": "In most languages, there is standard libraries. And this is foundation, common denominator of them. Applications can be written with guarantee of the denominator. What's the core of Smalltalk?"} {"_id": "159938", "title": "Are there any FOSS operating systems available that conform to NASA's JPL coding standards?", "text": "I, like many others, have been completely enamored with the recent successful landing of Mar's Curiosity rover. After reading a couple of articles, and following a few links, I've found a couple C based coding standards that NASA JPL uses to formalize their code and protect it from error. (See here and here.) This has me curious. Are there any open-source operating systems available that adhere to these coding standards that are available for common architecture, such as x86, x64 or possibly ARM?"} {"_id": "202499", "title": "Provide multiple SendCompleted callbacks to SmtpClient", "text": "I have an `Email` class that has a `Send` method that optionally takes an `SmtpClient` and sends an email asynchronously using `SendAsync`. If no `SmtpClient` is supplied to this method, it instantiates a default `SmtpClient` and uses that to send the email. Inside the `Send` function, I provide a `SendCompleted` callback which disposes of the `MailMessage` and the default `SmtpClient` if one was not supplied to the method. **So when I am supplying the`SmtpClient`, how do I go about disposing it?** I can't dispose of it inside `Send` because it may be used for sending other emails, plus I figure it's the responsibility of the thing that instantiated the passed in `SmtpClient` to dispose of it again. Is it possible to add multiple `SendCompleted` callbacks to the `SmtpClient` that run after each other?"} {"_id": "202490", "title": "Except garbage collector, what else makes Java a non real time programming language", "text": "Except the garbage collector, what are some other features in Java that make it unsuitable for real time programming? On the net, whenever Java vs C++ is discussed with regards to real time programming, it is always the garbage collector that is mentioned. Is there anything else?"} {"_id": "202496", "title": "PayPal proof of payment - is there a need to store it at our server?", "text": "I am developing an iPhone app, which I am integrating with PayPal. I did it successfully using PayPal library. I am testing it on sandbox mode. When I transfer money from one account to the other account, it displays a message \"send to server for verification\". Is there a real need to store this PayPal transaction id and other details on our server?"} {"_id": "84584", "title": "Thoughts on type aliases/synonyms?", "text": "I'm going to try my best to frame this question in a way that doesn't result in a language war or list, because I think there could be a good, technical answer to this question. Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ are sort of a Wild West, with `typedef` and `#define` often being used seemingly interchangeably to alias types. The upsides of type aliasing don't invoke too much dispute: * It makes it convenient to define composite types that are described naturally by the language, e.g. 
`type Coordinate = float * float` or `type String = [Char]`. * Long names can be shortened: `using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute`. * In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation. The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its `DWORD = int` and its `HINSTANCE = HANDLE = void*` and its `LPHANDLE = HANDLE FAR*` and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer or a DWORD and an integer etc. Setting aside the philosophical debate of whether a king should give complete freedom to their subjects and let them be responsible for themselves, or whether all of their questionable actions should be intervened upon, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010 for instance will allow you to type DSBA in order to refer IntelliSense to System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?"} {"_id": "84586", "title": "How to stay focused on learning a language with so many other possibilities?", "text": "What do you do to stay focused when learning a new language with so many other interesting languages out there?"} {"_id": "84587", "title": "Best source control system for busy web development team", "text": "Having had a number of instances of merge problems and other difficulties with a large Subversion repository and a team of 10+ developers, I'm considering whether SVN is the right tool, and what else should we consider? The primary system being managed is a large PHP web application with release cycles averaging 2-3 weeks apart and 4+ concurrent projects in active development at any time. Is Subversion the right system or should we adopt Git, Mercurial or something else?"} {"_id": "219150", "title": "MVC sending information from view to controller", "text": "I am using the MVC pattern. Let's say I want to create a new object and add it to my database. Where is it better to create the new object: View: Boo boo = new Boo("awesomness"); Controller.AddDB(boo); Controller: public void AddDB(string name) { Boo boo = new Boo(name); addtodb(boo); }"} {"_id": "197028", "title": "Controller JSP - no view", "text": "Most people say do not use JSPs. But what if I have a JSP that does not show anything, it only acts as a controller? Why would I do that? Because we do not need to redeploy the complete webapp to make a small but significant change in a servlet -> instead we can just put my new JSP in the JBoss _tmp_ folder till we do a full deploy. This controller JSP will of course include / redirect to other view JSPs for final rendering. Question: what are the downsides of doing this? If we already have it in our application (it's been working for the last 8 years before I joined); should we keep it as it works or are there any compelling reasons to change it to a regular servlet?
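For context, a stripped-down sketch of what one of these controller JSPs looks like (the names here are invented; the real ones do more work):

<%-- controller.jsp: renders no markup of its own, it only dispatches --%>
<%
    String action = request.getParameter("action");
    if ("save".equals(action)) {
        // call into the service layer here, then redirect
        response.sendRedirect("saved.jsp");
    } else {
        // hand off to a view JSP for the actual rendering
        request.getRequestDispatcher("/WEB-INF/views/edit.jsp")
               .forward(request, response);
    }
%>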
Note: the JSPs in question are not the only controller, rather they do some of the processing for certain use cases"} {"_id": "182113", "title": "How and why to decide between naming methods with "get" and "find" prefixes", "text": "I always have trouble figuring out if I should name a certain method starting with `getSomething` versus `findSomething`. The problem resides in creating _helpers_ for poorly designed APIs. This usually occurs when getting data from an object, which requires the object as a parameter. Here is a simple example: public String getRevision( Item item ) { service.load( item, "revision" ); // there is usually more work to do before getting the data... try { return item.get_revision(); } catch( NotLoadedException exception ) { log.error( "Property named 'property_name' was not loaded", exception ); } return null; } How and why to decide between naming this method as `getRevision()` or `findRevision()`?"} {"_id": "182112", "title": "Referential integrity in a database where tuples are not physically deleted", "text": "Many modern Relational Database Management Systems automatically support referential integrity, i.e. when you try to delete a tuple which has a reference (in the form of a foreign key, for example), the DBMS doesn't complete the operation and raises an error. Consider a database where every table has an attribute which indicates if a tuple is deleted or not. So no data is actually deleted from the database, but is marked as deleted instead. If a tuple is marked as deleted, all its references need to be marked as deleted too or an error should occur. How can this be supported? Is performing additional checks (programmatically or with triggers) before deleting a tuple the only way to have referential integrity? Are there any accepted practices or algorithms? Edit: This flag is mostly used for statistics, and partially for data recovery after a long period of time. It is a filter with a special meaning, and right now when queries are made, referential integrity is checked right in the query, which is extremely error prone and not reliable at all."} {"_id": "204260", "title": "Why is binary search, which needs sorted data, considered better than linear search?", "text": "I have always heard that linear search is a naive approach and binary search is better than it in performance due to better asymptotic complexity. But I never understood why it is better than linear search when sorting is required before binary search. Linear search is `O(n)` and binary search is `O(log n)`. That seems to be the basis of saying that binary search is better. But binary search requires sorting, which is `O(n log n)` for the best algorithms. So binary search shouldn't actually be faster **as** it requires sorting. I am reading CLRS in which the author implies that in insertion sort, instead of using the naive linear search approach, it is better to use binary search for finding the place where the item has to be inserted. In this case this seems to be justified, as at each loop iteration there is a sorted list over which the binary search can be applied. But in the general case where there is no guarantee about the data set in which we need to search, isn't using binary search actually worse than linear search due to sorting requirements? Are there any practical considerations that I am overlooking which make binary search better than linear search?
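To put rough numbers on the trade-off (my own back-of-the-envelope arithmetic, not taken from CLRS): for n = 1,000,000 elements, a single linear search costs about n/2 = 500,000 comparisons on average, while sorting first costs on the order of n log2 n, about 20,000,000 comparisons, after which each binary search costs only log2 n, about 20 comparisons. The up-front sort therefore only pays for itself after roughly (n log2 n) / (n/2) = 2 log2 n, i.e. about 40 searches over the same data; for a one-off lookup, linear search clearly wins.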
Or is binary search considered better than linear search without considering the computation time required for sorting?"} {"_id": "41588", "title": "Will Python 3.0's backwards-incompatibility affect adoption?", "text": "I visited Slashdot this morning to find out that Python 3.0 has been released. I know C# and Perl, but have wanted to learn Python for some time, especially after I saw its ease of use to create useful tools, not to mention its use in game scripting. My question is, how does the intentionally backwards-incompatible release of Python 3.0 affect adoption, and should I learn Python 2? Or should I take the dive and learn Python 3.0 first, and wait for the libraries to be ported?"} {"_id": "178921", "title": "How many bits' address is required for a computer with n bytes of memory?", "text": "How many bits of address are required (for the program counter for example) in a byte-addressed computer with 512 Mbyte of RAM? What does the formula look like? How is this connected with the fact that 32 bits can address no more than 4 GB of RAM?"} {"_id": "238095", "title": "Dealing with similar objects with different method signatures", "text": "I am fairly new to OO design, have problems with the design of some software, and am looking for a pattern or a combination of patterns that could help me solve my problem. I have a type that has a collection of different geometric shapes (say lines, rectangles, and cubes). I actually even have composites of these shapes. Each shape has one or more values. A value is a coordinate (between one-dimensional for lines and three-dimensional for cubes). All types of shapes have two methods that are essential. A method for adding values and a method for getting all values of a shape. E.g. `Cube.AddValue(new Value3D{X=4,Y=7,Z=11})` or `Line.AddValue(new Value1D{X=42})`; respectively, `Cube.GetValues()` would return a collection of `Value3D` and `Line.GetValues()` would return a collection of `Value1D`. Since these methods have fundamentally different signatures, I can't just derive from a single interface to store all of these objects in one collection. Splitting the collection into multiple separate collections, one for each type, would be possible, but not desirable since they belong together (domain wise). I've considered using three-dimensional values for all shapes and just ignoring Y/Z for lines/rectangles, but that isn't really nice and would actually violate the O and L in SOLID (a 4th dimension is possibly needed in the future). Another possibility would probably be to use a generic interface like `IShape<T>` where `T` is Value*D. This interface would look like: public interface IShape<T> { public IEnumerable<T> GetValues(); public void AddValue(T value); } But this would require another, non-generic `interface INonGenericShape {}` because I can't use a generic interface without specifying the type. I would end up with something like this: public class SomeShape { private IList<INonGenericShape> _shapes = new List<INonGenericShape>(); } But then I could as well use `System.Object`, since INonGenericShape does not provide any valuable information. So here I am, not knowing how to solve this (storing the objects in a single collection for consumers of that class). Can anyone provide some idea or insight?"} {"_id": "558", "title": "How can you learn to design nice looking websites?", "text": "I am a moderately capable web developer. I can put stuff where I want it to go and put some jQuery stuff in there if I need to.
However, if I am making my own website (which I am starting to do) I have no idea how to design it. If someone was to sit next to me and point to the screen and say "put this picture there, text there" I can do that quite easily. But designing my own site with my choice of colours and text will look like a toddler has invented it. Does anyone know any websites/books I can look at or has anyone got any tips on the basics of non-toddler web design?"} {"_id": "178927", "title": "Is there a difference between a component and a module", "text": "I have a little problem with the terms module and component. In my mind, a module is a set of bundled classes, which are only accessible via a well-defined interface. They hide all implementation details and are reusable. Modules define the modules on which they depend. What is the difference from components? I looked it up in some books, but the description of components is very similar."} {"_id": "202144", "title": "C project avoiding naming conflicts", "text": "I'm struggling to find pragmatic real-world advice on function naming conventions for a medium sized C library project. My library project is separated into a few modules and submodules with their own headers, and loosely follows an OO style (all functions take a certain struct as first argument, no globals etc). It's laid out something like: MyLib - Foo - foo.h - foo_internal.h - some_foo_action.c - another_foo_action.c - Baz - baz.h - some_baz_action.c - Bar - bar.h - bar_internal.h - some_bar_action.c Generally the functions are far too big to (for example) stick `some_foo_action` and `another_foo_action` in one `foo.c` implementation file, make most functions static, and call it a day. I can deal with stripping my internal ("module private") symbols when building the library to avoid conflicts for my users with their client programs, but the question is how to name symbols in my library? So far I've been doing: struct MyLibFoo; void MyLibFooSomeAction(MyLibFoo *foo, ...); struct MyLibBar; void MyLibBarAnAction(MyLibBar *bar, ...); // Submodule struct MyLibFooBaz; void MyLibFooBazAnotherAction(MyLibFooBaz *baz, ...); But I'm ending up with crazy long symbol names (much longer than the examples). If I don't prefix the names with a "fake namespace", modules' internal symbol names all clash. Note: I don't care about camelcase/Pascal case etc, just the names themselves."} {"_id": "86698", "title": "White box testing with Google Test", "text": "I've been trying out using GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friend-ness doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing and if they have any hints or helpful constructs that would make this easier. * * * Ok, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide.
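For anyone finding this later, the FRIEND_TEST approach from the docs ends up looking roughly like this (Tree/TreeTest are placeholder names standing in for my real classes):

// tree.h
#include "gtest/gtest_prod.h"

struct Node { int value; };

class Tree {
 public:
  void Insert(int value) { root_ = new Node{value}; }  // simplified on purpose
 private:
  FRIEND_TEST(TreeTest, InsertCreatesRoot);  // one friend declaration per test
  Node* root_ = nullptr;
};

// tree_test.cpp
#include "gtest/gtest.h"
#include "tree.h"

TEST(TreeTest, InsertCreatesRoot) {
  Tree t;
  t.Insert(42);
  EXPECT_TRUE(t.root_ != nullptr);  // the private member is visible via friendship
}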
But apart from having a huge amount of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon using GoogleTest and move to a different test framework?"} {"_id": "255386", "title": "Are there two types of associations between objects or are there just different representations?", "text": "I've been spending some time on 're-tuning' some of my OOP understanding, and I've come up against a concept that is confusing me. Let's say I have two objects. A `user` object and an `account` object. Back to basics here, but each object has **state, behaviour and identity** (often referred to as an entity object). The `user` object manages behaviour purely associated with a user, for example we could have a `login(credentials)` method that returns if successfully logged in or throws an exception if not. The `account` object manages behaviour purely associated with a user's account. For example we could have a method checkActive() that checks if the account is active. The account object checks if the account has an up-to-date subscription, checks if there are any admin flags added which would make it inactive. It returns if checks pass, or throws an exception if not. Now here lies my problem. There is clearly a relationship between `user` and `account`, but I feel that there are actually two TYPES of association to consider. One that is data driven (exists only in the data/state of the objects and the database) and one that is behaviour driven (represents an object call to methods of the associated object). ## Data Driven Association In the example I have presented, there is clearly a data association between `user` and `account`. In a database schema we could have the following table:
-----------------
USER_ACCOUNTS
-----------------
id
user_id
----------------
When we instantiate the `account` and load the database data into it, there will be a class variable containing `user_id`. In essence, the `account` object holds an integer representation of `user` through `user_id`. ## Behaviour Driven Association Behaviour driven associations are really the dependencies of an object. If object A calls methods on object B there is an association going from A to B. A holds an object representation of B. In my example case, neither the `user` object nor the `account` object depend on each other to perform their tasks, i.e. neither object calls methods on the other object. There is therefore no behaviour driven association between the two and neither object holds an object reference to the other. ## Question Is the case I presented purely a case of entity representation? The association between `user` and `account` is always present, but it's being represented in different ways? i.e. the `user` entity has an identity that can be represented in different forms. It can be represented as an object (the instantiated `user` object) or as a unique integer from the users table in the database. Is this a formalised way of recognising different implementations of associations or have I completely lost my mind? One thing that bugs me is how would I describe the differences in UML or similar? Or is it just an implementation detail?"} {"_id": "14715", "title": "Find bugs in Java programs, especially using the Eclipse plug-in", "text": "Is it any good? http://findbugs.sourceforge.net/ I had a go on the webstart/demo version and it did indeed look useful.
Has anyone got any experience with it, is it a valuable tool?"} {"_id": "160271", "title": "URL parameters in RESTful web services", "text": "I'm wondering about the appropriateness of URL parameters in RESTful resource creation. First, here's some context. I'm working on an API that will remotely update the software on embedded devices connected to an online device cloud. The basic workflow is: 1. Discover available updates 2. Apply desired updates 3. Monitor update progress The update discovery resource will allow users to POST a new update discovery. POST /updateDiscovery?apikey=a1b2c3 The above URL will trigger update discovery for all devices. It is also useful to restrict which devices are targeted for updates. There are two identifiers used to restrict target devices: 1. a device id 2. a group id (which may reference many devices) This restriction can be communicated using either URL parameters or the request body. Using URL parameters: POST /updateDiscovery?deviceIds=1,2,3&groupIds=2,3 Using the request body: POST /updateDiscovery Content-Type: application/json { targets: { deviceIds: [1, 2, 3], groupIds: [3, 4] } } Either way, a new resource will be created with the following representation: { id: 1, targets: { deviceIds: [1, 2, 3], groupIds: [3, 4] }, status: 'pending', ... } Which of the above methods best follows the RESTful design pattern and why?"} {"_id": "190312", "title": "Disk I/Os incurred by readLine() in Java", "text": "Does the Java `readLine()` function used with `BufferedReader` cause one disk I/O per call? If yes, is there any way to read a specific number of lines, say n, from a text file causing only one disk I/O? Here's the code: String file; BufferedReader br = new BufferedReader(new FileReader(file)); int n = 10; for (int i = 0; i < n; i++) { String line = br.readLine(); /* Do something */ }"} {"_id": "207519", "title": "Source code \"prints\" at release", "text": "Is there a best practice for how to document the source code you're releasing? Currently, we have a table of the versions of the software, what SVN tag it's labeled as, what SVN rev that tag was created at and the SVN URL. Then, our CM department wants us to include a file listing of all the source files we're including for the .zip that we provide them, which is just a zip of the tag we're releasing. I'm more fishing for ideas to update our print template, hopefully with good arguments to remove the file listing."} {"_id": "207518", "title": "Why is a tooltip's attribute labelled 'title='?", "text": "> HTML 4 supports a "title" attribute that can be inserted inside any HTML > tag. Inserting this attribute effectively gives the element a tooltip that > pops up when the mouse moves over it. It just seems like a bad word choice... or am I missing something?"} {"_id": "208727", "title": "How fast should a Python factoring script be?", "text": "Just how efficient is \"good enough\" for all intents and purposes? I wrote a script to simply list off all numbers that divide into an input, x, as pairs (i, x//i), and was just curious how efficient I should be aiming for. What is the acceptable rate at which the script starts to lose its efficiency? This is my code (although advice is appreciated, I just want to give an idea as to how it works).
import time

print('''This program determines all basic factors of a given number, x,
as ordered pairs, (a, b) where ab=x. Type \"quit\" to exit.''')
timer = input('Would you like a timer? (y/n) ')
while 1:
    try:
        x = input('x = ')
        T0 = time.time()
        b = []
        n = int(x)**0.5
        ran = list(range(1, int(n)+1))
        if int(x) % 2 == 1:
            ran = ran[::2]  # an odd number has only odd divisors
        for i in ran:
            if int(x) % i == 0:
                m = (i, int(x)//i)
                b.append(m)
            else:
                pass
    except ValueError as error_1:
        if x == 'quit':
            break
        else:
            print(error_1)
    except EOFError as error_2:
        print(error_2)
    except OverflowError as error_3:
        print(error_3)
    except MemoryError as error_4:
        print(error_4)
    T1 = time.time()
    total = T1-T0
    print(b)
    print(str(len(b)) + ' pairs.')
    if timer == 'y':
        print("%.5f" % total + ' seconds.')
some of the results are:
x = 9
[(1, 9), (3, 3)]
2 pairs.
0.00000 seconds.
x = 8234324543
[(1, 8234324543)]
1 pairs.
0.07404 seconds.
x = 438756349875
[(1, 438756349875), (3, 146252116625), (5, 87751269975), (15, 29250423325), (25, 17550253995), (75, 5850084665), (125, 3510050799), (375, 1170016933), (557, 787713375), (1671, 262571125), (2785, 157542675), (8355, 52514225), (13925, 31508535), (41775, 10502845), (69625, 6301707), (208875, 2100569)]
16 pairs.
0.88859 seconds.
So this program can be pretty quick, but then the speed drops rather rapidly once you get up to higher numbers. Anyways, ya, I was just wondering what is considered acceptable by today's standards?"} {"_id": "208720", "title": "Estimating tasks in Scrum", "text": "Our team has been using Scrum for three iterations. We successfully estimate PBIs in story points using poker planning. But next we cannot do anything because we don't know: 1. Who creates tasks? PBIs are created by everyone and approved by the product owner, but what about tasks? 2. Who estimates tasks? 3. What technique should be used for task estimating? Poker planning is good for PBI estimating"} {"_id": "232640", "title": "Alternatives to JDT Annotation - License issues", "text": "I have used the JDT Annotation library in my Java project as I am quite fond of what it offers. To be more exact, I used the `@Nullable` and `@NonNullByDefault` annotations as I can use the synergy with Eclipse to automatically analyse possible `null` values and what may lead to `NullPointerException` bugs. Unfortunately, JDT Annotation is licensed under EPL1 which, as far as I know, is incompatible with GPL2 due to the former being a weak copyleft license and the choice of some clauses. As the project should be published under a GPL2 license, I am exploring different options but have yet to find any that would offer the same, or nearly the same, functionality. I am not keen on adding null checks as they only clutter the code with what an annotation could have solved as well. But unfortunately it seems to be the only viable option? I am looking for some expertise regarding this matter. What I propose is to use Google's `Preconditions` to formulate preconditions such as: Preconditions.checkArgument(providedArgument != null, "The provided argument must not be null!"); respectively: Preconditions.checkState(invariantField != null, "The field may not be null!"); These will solve the problem of course and be more explicit, in my opinion, when it comes to documenting my contracts by code. I usually also report these with custom tags in my Javadoc, for example, `@pre providedArgument != null` or `@inv invariantField != null`. I would be thankful for all"} {"_id": "214394", "title": "Problem with OAuth2 authentication process and session persistence", "text": "We're using node-oauth2-provider as an authentication library for our service. The current process for a user to log in is: POST /oauth2/access_token, which creates and saves the access_token to the database.
On subsequent requests we send the `access_token`, which is pulled from the database to verify that it exists. From there the user is added to the session manually by us in a similar fashion to their examples. After this happens, if a request comes in without the access token, the session still seems to be set. As if it's persisting across requests. Maybe I misunderstand how this is supposed to work... but shouldn't the access token be the indication that a user's requests are still valid? If so, do I need to clear the session manually? If not and it stores it in memory... possibly indicated by: express.session({store: new MemoryStore({reapInterval: 5 * 60 * 1000}), secret: 'abracadabra'}) Then how does the server know that subsequent requests are valid?"} {"_id": "232649", "title": "Using words instead of numbers for versioning?", "text": "Would it be considered acceptable to use word compounds instead of numbers for version iterations? For example in a pattern: "[Adjective] [Noun]" The first version could be something like: "Auspicious Armadillo" Thus on each sequential release the noun would change to the next letter of the alphabet and the version could be along the lines of: "Auspicious Bear" Once the alphabet for the nouns runs out, the Adjective gets changed and the Nouns can start over, e.g.: "Bewildered Armadillo" etc. This would give me around 676 possible different versions to iterate through. Assuming the project is small & short enough to fall within these 676 releases before a complete remake/overhaul, would such a versioning pattern be considered acceptable to use? Or would it just create needless confusion and chaos? I understand this might be somewhat opinion related, but I was hoping for some kind of a general consensus on whether this is an acceptable or terrible idea."} {"_id": "171529", "title": "Can a recursive function have iterations/loops?", "text": "I've been studying recursive functions, and apparently, they're functions that call themselves, and don't use iterations/loops (otherwise it wouldn't be a recursive function). However, while surfing the web for examples (the 8-queens-recursive problem), I found this function: private boolean placeQueen(int row, int[] queens, int n) { boolean result = false; if (row < n) { while ((queens[row] < n - 1) && !result) { queens[row]++; if (verify(row, queens, n)) { result = placeQueen(row + 1, queens, n); } } if (!result) { queens[row] = -1; } } else { result = true; } return result; } There is a `while` loop involved. ... so I'm a bit lost now. Can I use loops or not?"} {"_id": "199230", "title": "In an online questionnaire, what is the best way to design a database for keeping track of all user attempts?", "text": "We have a web app where users can take online exams. An exam admin will create a questionnaire. A questionnaire can have many Questions. Each question is a multiple choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions. Users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times. And we have to keep track of all their attempts. e.g.
+-------+-----------------+------------+-------+------------+----------+
|User_id|Questionnaire_id |question_id |answer |attempt_date|attempt_no|
+-------+-----------------+------------+-------+------------+----------+
|1      |1                |1           |a      |1 June 2013 |1         |
|1      |1                |2           |b      |1 June 2013 |1         |
|1      |1                |1           |c      |2 June 2013 |2         |
|1      |1                |2           |d      |2 June 2013 |2         |
+-------+-----------------+------------+-------+------------+----------+
Now it can also happen that after a user has attempted the same questionnaire twice, the admin deletes a question from that questionnaire, but the user's attempt history should still reference it, so that the user can see that question in his attempt history in spite of the admin deleting it. If the user now attempts this changed questionnaire he should see only 1 question.
+-------+-----------------+------------+-------+------------+----------+
|User_id|Questionnaire_id |question_id |answer |attempt_date|attempt_no|
+-------+-----------------+------------+-------+------------+----------+
|1      |1                |1           |a      |3 June 2013 |3         |
+-------+-----------------+------------+-------+------------+----------+
Also, if some part of a question is modified after this, the user's attempt history should show the question as it was before the modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not do a physical delete, just mark the question inactive so that the history can still keep track of the user's attempts. For modifications, create versions for questions; each new attempt refers to the latest version of each question, and the history keeps a reference to the version of the question at attempt time."} {"_id": "13940", "title": "mercurial or tfs for codeplex?", "text": "I've been working on a .NET personal project for a little while now and it's almost ready to go open source. I've decided (arbitrarily) to host it on codeplex. Codeplex offers TFS or mercurial. I'm wondering which I should pick. Consider: * I've only ever used subversion. * I'm using VS 2010 express as my IDE. * Tools must be free (so the mercurial client if I go that route). * From what I've been hearing, mercurial sounds interesting but I know _very_ little about it so if there's a learning curve, then I don't want to add too many more learning objectives to the project. * I don't expect any contributors. So I guess the actual question is, is mercurial easy enough to use with codeplex and does it add anything that the TFS option doesn't?"} {"_id": "171526", "title": "design a model for a system of dependent variables", "text": "I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:
I started down the path of defining a `CalculationManager` class, which would be used (in a simple example) something like as follows: public class Model : INotifyPropertyChanged { private CalculationManager _calculationManager = new CalculationManager(); // each setter triggers a \"PropertyChanged\" event public double? Height { get; set; } public double? Weight { get; set; } public double? BMI { get; set; } public Model() { _calculationManager.DefineDependency( forProperty: model => model.BMI, usingCalculation: (height, weight) => weight / Math.Pow(height, 2), withInputs: model => model.Height, model.Weight); } // INotifyPropertyChanged implementation here } I won't reproduce `CalculationManager` here, but the basic idea is that it sets up a dependency map, listens for `PropertyChanged` events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach: 1. the (mis)use of `INotifyPropertyChanged` seems to me like a code smell 2. the `withInputs` parameter is defined as `params Expression>[] args`, which means that the argument list of `usingCalculation` is not checked at compile time 3. the argument list (weight, height) is redundantly defined in both `usingCalculation` and `withInputs` I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#? * * * **Edit** More context: The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business."} {"_id": "13945", "title": "How to control work of programmers (outsourcers)", "text": "Some time ago I decided to outsource web development projects (php coding & design). I've opened small office abroad with manager and couple of programmers and designer. I just started but already have a problem controlling their work. Some programmers are fast, others work much slower. I pay them fixed salary, so it matters for me if they do their job on time. When they work from office, I at least have a person responsible for the project to tell if they really worked. But sometimes they work from home and in that case I have no way to know if they really spent that time working on some issue. Some of my ideas were: controlling amount of code done with svn, strict deadlines, everyday reporting.. But nothing seems perfect to me. It creates even more work from my side. Can anyone suggest a way of fair judgement on how much really time was spent on actual work? How to stimulate people to work? Maybe create some bonus system if work is done fast? Any idea/experience on this topic would be highly appreciated."} {"_id": "68542", "title": "Does (/could) an LGPL-based license exist without clause 4d?", "text": "I've started writing a framework in C (you may have heard of it: Raphters). Someone contacted me to ask whether I could re-license it because it would be useful in embedded products, but clause 4d (the clause that says, and I'm paraphrasing, you must allow modified versions of the library to be re-linked against your executable) would make that difficult. 
I like LGPL because it prevents people making modifications and keeping them secret, but I don't want to prevent people using Raphters in their embedded products. What do you suggest?"} {"_id": "5531", "title": "What do you consider to be the prime cause of software defects (and how to minimize them)", "text": "I define **defect** as: > "something within the application design or code which prevents it > functioning as per requirements." I'm looking for ideas about the causes of defects, e.g. the human factor, lack of testing, lack of prototyping, and possible ideas to mitigate these."} {"_id": "18141", "title": "Is a degree needed for low-level/embedded programming jobs?", "text": "I know that it is possible to get into software development without a degree in computer science, but is it possible (or rather, common) to be able to get an embedded programming job without a computer science degree (or any engineering degree as well)?"} {"_id": "89761", "title": "Fellow programmer used worst programming practices", "text": "I know it seems odd to say, but a fellow programmer at work deliberately used a couple of bad programming practices! I'll explain. First let me say that he's an intelligent guy and for the most part he writes intelligible code. He was asked to implement licensing on a web application project written in Java. Since it's Java, if one really wanted to, one could probably hack open the jars and read the names of the classes and methods written inside. His solution to this problem was, quite literally, to awkwardly give variables and methods less-than-obvious names and plant them inside already congested classes rather than generating new classes. His justification was that if a hacker wanted to switch out certain classes in order to bypass licensing checks (and therefore get a free copy of the product), he'd have a far more difficult time of it if it weren't obvious which methods perform these particular tasks. Only after he had done it did I confront him about it, suggesting that we could perhaps buy some sort of obfuscator library to do it for us, while maintaining good programming practices. He claims to not have had the time or resources to search for that kind of solution. ...Which leaves me in a dilemma. Do I look for an obfuscator library in Java and fix his old code (he might be a little touchy about me remodeling his code), or do I leave it as is, as much as that irks me to no end?"} {"_id": "242966", "title": "Program coded in .Net", "text": "What does it generally mean when a programmer says his program/application was "coded in .Net"? When I think of .Net I think of C# and Visual Basic. I don't usually think of any other language on this list: http://en.wikipedia.org/wiki/List_of_CLI_languages"} {"_id": "18147", "title": "How do you separate design questions from programming questions?", "text": "As I try to program a solution to a request, questions arise. I can't separate the difference between implementation problems and design problems. How do you specifically express the design problem?"} {"_id": "103155", "title": "Avoiding wrapper objects in collections", "text": "I'm getting a little annoyed at having to wrap primitive types in wrapper objects to store them in collection data structures (sets, maps, lists, etc.) in languages like Java and Objective C. I'd really like to have, say, a Map data structure that works the same way whether I'm mapping NSNumbers to Strings or integers to doubles or integers to MyBigCustomObject*.
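Something like this hypothetical interface is what I have in mind (Java-flavoured and purely illustrative — I'm not claiming any standard library actually offers it):

// A map keyed by a primitive int, with no Integer boxing anywhere.
interface IntKeyMap<V> {
    void put(int key, V value);
    V get(int key);            // null (or some default) when the key is absent
    boolean containsKey(int key);
}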
My understanding is that the reason collection data structures in these languages require wrapping in an object is so they can just always assume that the value given is a pointer - I actually _can_ make a NSDictionary with [NSDictionary dictionaryWithObject: MyCustomObject forKey: 1], but it will treat the 1 as a pointer instead of an integer, try to access memory address 1 and segfault. But I don't see any reason why you couldn't make a collection data structure that keeps track of the _type_ of keys and values that wouldn't have this problem. Even a relatively inflexible data structure like a Map of specifically-ints to specifically-pointers-to-objects would be a decently common use case (I certainly could use one, I'm doing a lot of work recently that involves indexing objects by an integer ID). Is there a reason why this isn't a common paradigm in languages that have an Object/primitive type distinction?"} {"_id": "68547", "title": "Best resources for learning idiomatic PHP", "text": "My experience has primarily been in Java, but I'm working on a PHP project and conscious of my lack of knowledge of idiomatic PHP. What resources (online or print) will help me avoid making the mistakes that people with Java goggles normally make? First prize would be something that would give me short in-depth examinations of important PHP ideas, a la Joshua Bloch's Effective Java."} {"_id": "83057", "title": "LINQ Style preference", "text": "I have come to use LINQ in my everyday programming a lot. In fact, I rarely, if ever, use an explicit loop. I have, however, found that I don't use the SQL-like syntax anymore. I just use the extension functions. So rather than saying: from x in y where filter select datatransform I use: x.Where(c => filter).Select(c => datatransform) Which style of LINQ do you prefer and what are others on your team comfortable with?"} {"_id": "73309", "title": "Software technical and project specification: How to?", "text": "We have a project here in my company of developing a new CRM. It's, basically, a big CRUD, with sub-CRUDs, with various details, such as: some users can do this, others can't. We will develop the system in PHP with Symfony. Developing is easy, but we're having some difficulties in the analysis part of the job. We have to present the project specification, as well as the technical specification of the project. We have to draw and explain how the system will work. And we're not sure how to do this. We always just... did the system. Not planning with graphics or text, or explaining what we were doing. We already asked everyone and got suggestions, as well as requirements. Apparently, we have to take that and explain the system. Any tips, guys? :)"} {"_id": "226249", "title": "Citing Borrowed Code", "text": "If you borrow code from some source, it is probably best to cite it (like "adapted from [source]"). However, let's say you borrowed this function (example in C++): void doWork() { cout << "Doing work!\n" << endl; } and now you edited it to be like this: void doWork() { string name = ""; cout << "Starting work...\n" << endl; cout << "Hello user. What is your name?" << endl; cin >> name; ... } Would you still have to say "adapted from [source]" or something like that? Or are you in the safe zone? **Note**: I am specifically wondering how this applies to copyrighted (and open source) code"} {"_id": "132019", "title": "What is the value in hiding the details through abstractions?
Isn't there value in transparency?", "text": "## Background I am not a big fan of abstraction. I will admit that one can benefit from adaptability, portability and re-usability of interfaces etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of this abstraction. The argument is that you don't need to know the details, and that one should concentrate on their own logic at this point. Makes sense in theory. However, whenever I've been maintaining large enterprise applications, I always need to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This 'hide the details' mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective at least)? # My philosophy When I develop software, I try to follow a philosophy that I feel is closely related to the ArchLinux philosophy: > Arch Linux retains the inherent complexities of a GNU/Linux system, while > keeping them well organized and transparent. Arch Linux developers and users > believe that trying to hide the complexities of a system actually results in > an even more complex system, and is therefore to be avoided. And therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it. # Question at heart 1. Is there real value in hiding the details? 2. Aren't we sacrificing transparency? 3. Isn't this transparency valuable?"} {"_id": "47323", "title": "Can I use an LGPL-licenced library in my commercial app?", "text": "I want to use an LGPL-licensed library in my app for Microsoft's app marketplace. Is that OK?"} {"_id": "132014", "title": "Is it possible to integrate UDP file transfer into a .NET web application?", "text": "# Background I have recently been tasked with designing a rebuild of an existing .NET web application that _currently_ uses a third-party company to handle large file transfers (as big as 50Gb). Currently, the .NET app depends on a .JAR (Java Applet) provided by this third-party which is called up inside of an iFrame and exposes the appropriate file-system interaction for selecting entire directories for upload and so forth. I realize that so far all of this is possible using some combination of .NET networking classes (ftp) and Flash or Silverlight for client access. I have been told that the reason that the third-party plugin is so special is that it uses the UDP protocol so that if an upload or download is interrupted, it can be resumed later right where it left off. I have also been told that the third-party tool suite allows the IT folks to throttle bandwidth (I don't even know what that means) and do a couple of other cool things. # Question Assuming that we will use the latest version of C# and the .NET framework (4.0), is it reasonably possible to replicate this UDP-based behavior? By reasonable, I mean could it be accomplished in less than, say, 240 dev hours.
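For my own scoping, I have been assuming that the 'resume right where it left off' part is mostly bookkeeping over fixed-size chunks; here is a minimal C# sketch of that bookkeeping (my own sketch under that assumption, not the vendor's design, and deliberately not a reliable-UDP stack):

using System;

// Tracks which fixed-size chunks of a file have arrived, so a transfer
// can resume after an interruption instead of starting over.
public class TransferProgress
{
    private readonly bool[] received;

    public TransferProgress(long fileSize, int chunkSize)
    {
        received = new bool[(fileSize + chunkSize - 1) / chunkSize];
    }

    public void MarkReceived(int chunkIndex) { received[chunkIndex] = true; }

    // Index of the next chunk to (re)request; -1 means the file is complete.
    public int NextMissing() { return Array.IndexOf(received, false); }
}

Persist that state (plus per-chunk checksums) to disk and a resumed session knows exactly which chunks to re-request over UDP; the acks and retransmits on top are the part I'd expect to actually cost the dev hours.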
**Please note** that the rebuilt app will ideally use all Microsoft technologies (including Silverlight for client access) and will run on Azure."} {"_id": "15044", "title": "Graphic demonstrating emphasis of front end in web apps", "text": "I remember stumbling across an amusing graphic a year or so ago which demonstrated the tiers of web development. The back-end was shown as a tiny box, but the front end was shown as a huge box crammed with lots of front-end technologies like AJAX, DHTML. This is all a vague recollection. Does anyone know where on the Intraweb this graphic might be? It was probably on a programming cartoon site, but I only view XKCD on a regular basis, and I couldn't find it on there. Although tagged as fun, my request does have a productive edge to it - it would be quite useful in driving home to my colleagues how UI top-heavy web application development has become."} {"_id": "155132", "title": "Is it possible to develop an iOS application without a Mac?", "text": "I'm a postgraduate student and I had the idea to build an app for elementary school kids. My question, after many days of searching, is whether anyone here has ever tried, and what their experience was, developing an iOS app on platforms other than Xcode, such as PhoneGap, MonoTouch, Flex or others?"} {"_id": "87774", "title": "Are Datasets Patentable", "text": "I'm working on a project that uses a dataset produced from software informally licensed as "non-commercial use only". I'm developing an application that uses that dataset as an input to another algorithm. Our own software has a permissive free software license in which we don't restrict commercial usage. Are there legal implications of using that dataset in our own software in the sense that we are violating a copyright? **Edit**: Thanks folks, the ask-a-lawyer thing is the way to go, but it's helpful for me to hear some opinions bounced around."} {"_id": "155136", "title": "How should I write new code when the old codebase and the environment uses lots of globals in PHP", "text": "I'm working in the Wordpress environment, which itself heavily relies on globals, and the codebase I'm maintaining introduces some more. I want this to change and so I'm trying to think about how I should handle this. For the globals our code has introduced, I think I will set them as dependencies in the constructor or in a getter / setter so that I don't rely on them being globals, and then refactor the old codebase little by little so that we have no globals. With Wordpress globals I was thinking of wrapping all WP globals inside a Wrapper class and hiding them in there. Like this: class WpGlobals { public static function getDb() { global $wpdb; return $wpdb; } } Would this be of any help? The idea is that I centralize all globals in one class and do not scatter them through the code, so that if Wordpress kills one of them I need to modify code only in one place. What would you do?"} {"_id": "192078", "title": "Is it possible for a browser to use peer-to-peer browsing?", "text": "I was recently asked this by my friend after explaining how µTorrent worked; he then went on to ask me, "Could this be used for browsing the web?" (not a direct quote), and I found myself thinking about this for a while thereafter. * Could someone be credited with making the first Peer-2-Peer browser? * How could it send the correct data without it being modified? * Would it be faster than normal browsers?
(Referring to the way µTorrent works, I find when I use a torrent it's faster than a web download.) * Can it save on server loads, prevent DDoS, more?"} {"_id": "206645", "title": "How can I reverse engineer a hash code?", "text": "I am building an application in C# that works with a Progress database. The passwords that are stored in this database are stored using a hash algorithm that Progress has not made public. However, I would like to authenticate using these hashes. Is it feasible to reverse engineer such a hash algorithm and how would I go about doing this? To be clear: I am not looking for a way to get the unhashed passwords from the hashes. I'm looking for the algorithm to get a hash from a password."} {"_id": "206647", "title": "Why must a constructor's call to the superconstructor be the first call?", "text": "It is an error if you do anything in a constructor before calling the superconstructor. I remember that I had problems because of that. Yet, I do not see how this saves us from errors. It could save you from using uninitialized fields. But the Java compiler that checks for using uninitialized variables does that already, and this stupid rule does not improve anything here. The only serious argument I remember was that we need it for OOP, because objects in real life are constructed this way: you create a cucumber by first creating a vegetable and then adding cucumber attributes. IMO, it is the opposite. You first create a cucumber and, by the duck-is-a-duck principle, it becomes a vegetable. Another argument was that it improves safety. Not writing the code at all improves dependability much more, so I do not consider this an argument. Creating more complex code because you need to work around a stupid restriction is what makes the program more error-prone. This rule creates serious pain (at least for me). So, I want to hear a serious argument and a serious example where it could be useful."} {"_id": "173011", "title": "How to solve cyclic dependencies in a visitor pattern", "text": "When programming at work we now and then face a problem with visitors and module/project dependencies. Say you have a class A in a module X. And there are subclasses B and C in module Y. That means that module Y is dependent on module X. If we want to implement a visitor pattern for the class hierarchy, thus introducing an interface with the handle operations and an abstract accept method in A, we get a dependency from module X to module Y, which we cannot allow for architectural reasons. What we do is use a direct comparison of the types (i.e. `instanceof`, since we program in Java), which is not satisfying. My question(s) would be: Do you encounter this kind of problem in your daily work (or do we make poor architectural choices) and if so, what is your approach to solving this? Here is a minimal example in Java to illustrate my point. Package a has ClassA and the Visitor-Interface over the ClassA Hierarchy: package pkg.a; public abstract class ClassA { public abstract void accept(ClassAVisitor visitor); /* other methods ...
*/ } package pkg.a; import pkg.b.ClassB; import pkg.b.ClassC; public interface ClassAVisitor { public abstract void handle(ClassB visitee); public abstract void handle(ClassC visitee); } Package b has the concrete classes that extend from ClassA: package pkg.b; import pkg.a.ClassA; import pkg.a.ClassAVisitor; public class ClassB extends ClassA { public void accept(ClassAVisitor visitor) { visitor.handle(this); } } package pkg.b; import pkg.a.ClassA; import pkg.a.ClassAVisitor; public class ClassC extends ClassA { public void accept(ClassAVisitor visitor) { visitor.handle(this); } } Packages a and b have a cyclic dependency."} {"_id": "239056", "title": "AngularJS: structuring a web application with multiple ng-apps", "text": "The blogosphere has a number of articles on the topic of AngularJS app structuring guidelines such as these (and others): * http://www.johnpapa.net/angular-app-structuring-guidelines/ * http://codingsmackdown.tv/blog/2013/04/19/angularjs-modules-for-great-justice/ * http://danorlando.com/angularjs-architecture-understanding-modules/ * http://henriquat.re/modularizing-angularjs/modularizing-angular-applications/modularizing-angular-applications.html However, one scenario I have yet to come across for guidelines and best practices is the case where you have a large web application containing multiple "mini-spa" apps, and the mini-spa apps all share a certain amount of code. I am not referring to the case of trying to have multiple `ng-app` declarations on the same page; rather, I mean different sections of a large site that have their own, unique `ng-app` declaration. As Scott Allen writes in his OdeToCode blog: > One scenario I haven't found addressed very well is the scenario where > multiple apps exist in the same greater web application and require some > shared code on the client. Are there any recommended approaches to take, pitfalls to avoid, or good sample structures of this scenario that you can point to?"} {"_id": "239054", "title": "How to use Subversion in conjunction with DTAP with several Scrum teams?", "text": "I've read _How Do You Pull Something from a Release?_, but it doesn't solve our problems, as our case is more complex. Our situation is as follows. We're developing an application for an internal customer. We're doing Scrum with four more or less aligned teams (i.e. sprints end on the same day), one of which is off site. We have a basic DTAP street and we use Subversion for source control. We want to deploy to the TAP-end of the street from source control, as a preparation to use continuous integration later on. To be able to deploy from source control, we maintain branches that are in sync with each of the T, A, and P environments. Our current setup is a clean trunk, which is in sync with Production, an acpt branch that was branched from trunk at some point in the past and which is in sync with Acceptance, and a branch for each sprint that branches from the acpt branch at the start of each sprint and merges back into it at the end. This sprint branch is in sync with the Test environment for the duration of the sprint. At the end of each sprint, issues that are done are merged into acpt. A new sprint branch is made and a new sprint begins. Meanwhile, the acpt branch is deployed to Acceptance and tested there. Of course, in an ideal world, we would complete all user stories during the sprint and during UAT there would be no defects found. But alas, this world is not perfect and problems _are_ found during UAT. _Of course not bugs_, but we may have interpreted a request incorrectly.
Or there may be some other reason for a user story to not be released (yet). So now we have to pull some stories from that branch. Since the changes were merged from sprint branch to acpt branch in one go, this is not a trivial task. **How can we adjust our method to allow for user stories to be pulled from the acpt branch?** * * * I've also read _Agile version control with multiple teams_ by Henrik Kniberg. That looks like a model that's better suited to our needs, but even then I have some questions about it. If we adopt his model, should we sync the trunk to Acceptance? That would give us the opportunity to conduct UATs _during_ a sprint, and would effectively uncouple the sprint schedule from the release schedule. But Kniberg promotes a stricter adherence to Scrum than we currently follow, where user stories are done pretty much in sequence. **How could we adjust Kniberg's method to suit our needs?**"} {"_id": "239052", "title": "How to present domain model exceptions thrown through validation", "text": "In the domain model of my web application I have an entity Foo which can be created only from a POJO FooBean: `Foo.newInstance(FooBean fooBean)`. (A Builder pattern might have been better.) In the factory method `newInstance()` the POJO FooBean is validated, throwing NullPointerExceptions and IllegalArgumentExceptions if needed. The fields of the POJO are filled by a form on the presentation layer. That form does some validation on the user input and shows user-friendly messages if needed. The exception messages of the NullPointerExceptions and IllegalArgumentExceptions thrown in the `newInstance()` method are rather technical and shouldn't be displayed to the end user. What is a proper way to show user-friendly error messages that originate from the NullPointerException or the IllegalArgumentException?"} {"_id": "88498", "title": "Bitbucket and a small development house", "text": "I am in the process of finally rolling out Mercurial as our version control system at work. This is a huge deal for everyone as, shockingly, they have never used a VCS. After months of putting the bug in management's ears, they finally saw the light and now realise how much better it is than working with a network of shared folders! In the process of rolling this out, I am thinking of different strategies to manage our stuff and I am leaning towards using Bitbucket as our "central" repository. The projects in Bitbucket will solely be private projects and everyone will push and pull from there. I am open to different suggestions, but has anyone got a similar setup? If so, what caveats have you encountered?"} {"_id": "231609", "title": "UI design for changing multiple records at once", "text": "I have a desktop application where the user has tabular views on some data records. Now we got the requirement to let the user select multiple of these records at the same time and let him edit some properties of the selected records all at once in a separate edit dialog. The idea is * to change only the properties the user really wants to change * to keep all other properties unchanged Furthermore, when the user has selected the records and opens the edit dialog, the properties with equal values in all of the selected records are shown in the corresponding UI fields; the other properties should be shown as "undefined" as long as the user does not start to enter something in there. After pressing "Ok", the changes are applied to the records.
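The read-out side of that is simple enough; this is roughly the helper we have in mind for it (a C# sketch, with Record standing in as a placeholder for our real record type):

using System;
using System.Collections.Generic;

public class Record { public string Title; }  // placeholder for our real type

static class EditHelpers
{
    // True when every selected record agrees on the property value; 'common'
    // then holds that value (which may legitimately be empty). False means
    // the UI field must be presented as "undefined".
    public static bool TryGetCommonValue(IEnumerable<Record> selected,
                                         Func<Record, string> get,
                                         out string common)
    {
        common = null;
        bool first = true;
        foreach (var r in selected)
        {
            var v = get(r);
            if (first) { common = v; first = false; }
            else if (!Equals(common, v)) { common = null; return false; }
        }
        return true;
    }
}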
This kind of UI behaviour is not \"rocket science\", we already found this, for example, in graphical editors (where you can select multiple drawing elements at once and change their properties.) The problem is how to visualize the \"undefined\" state in an intuitive manner, especially for text properties. For boolean values, this is easy: we use just a tri-state check box. For number values, this is easy, too: the \"undefined\" state is displayed as an empty numeric UI field, and if there is actually a number in there, the state is \"defined\". But how shall one design this for text properties? An empty text box field is actually not an \"undefined\" state, since many of the text properties _can_ actually be empty. One could add a separate check box beside each text box to indicate the difference between \"undefined\" and \"defined\", but since our edit dialog contains already check boxes for boolean properties, this seems to be counterintuitive and confusing to the user. If this matters, we are using Winforms (but I think this problem is not specfic to the actual UI framework)."} {"_id": "231603", "title": "Which license retains copyright while allowing work to be shared and derived from?", "text": "I am working on an academic research project whose results and _some_ code we intend to publish. Research journals and conferences explicitly require that the authors have copyright over the work done in a paper. However, I would like to share the code with the community as it progresses using an open source license. Is there a license that explicitly retains copyright while allowing sharing, redistribution and derivative works based on the original work? Alternatively, which license would be suitable for such projects?"} {"_id": "231607", "title": "Most efficient data structure for implementing inheritance structure without classes", "text": "I have a number of types that all relate to each other in terms of them being 'derived' from one another. I'd need a way to do `is` relationships, which made me initially think of classes, but I realized that other than the `is` relationships, there would be no reason to use classes as each 'type' of data function the exact same way. I'd also thought of enums, but since there is no way to extend or inherit from enums it was a poor choice. For context (this being my specific example): I have a number of different kinds of resources that all are 'types' so that they can be used in conjunction with an int to denote how much of that resource is available. I have several different kinds of these resources, such as foods, metals, building materials etc. Within the food category would be something like wheat or corn; under metals would be copper, iron, gold etc. Since all of these are still just 'resources' and have no actual functionality other than typing, it seems pointless to use classes and OO inheritance. How would I implement this kind of data type without resorting to using classes?"} {"_id": "231604", "title": "What's the difference between lists constructed by quote and those constructed by cons in Scheme?", "text": "(define ls1 '((1 . 2) 1 . 2)) (set-car! (car ls1) 6) ls1 (define ls2 (cons '(1 . 2) '(1 . 2))) (set-car! (car ls2) 6) ls2 After `set-car!`ing, ls1 will be `((6 . 2) 1 . 2)` and ls2 `((6 . 2) 6 . 2)`. It seems that `ls1` and `ls2` have different storage model, and what does it mean when someone says `x is bound to a list`? 
Does x stand for a `starting address` or `location`, the way a is the starting address of a[10] in C?"} {"_id": "231605", "title": "Organisation by business function vs technical function", "text": "I am designing a large application (a static code analyser) and I have a choice in how to organise the code into modules: * One approach is by what I'd call by **technical function**. This is where you have a package for data objects, another for service functions, views, controllers, etc. * Another approach is what I'd call by **business function**. This is where you'd have a package for, say, stock control, another for product listings, customer orders, etc. Most frameworks tend to push me towards the first approach. Sometimes there are technical reasons for this, e.g. if the view is written in JavaScript and the rest in Python, you really need to keep it separate. But mostly it is just a matter of convention. Sometimes organising by technical function results in spreading related code across the application. If I have code to support a product, I end up with ProductModel, ProductService, ProductView, etc. Often I feel it would be neater to simply have a Product object that does all these. Organising by business function has issues too. Some classes are borderline and it's quite arbitrary which package you put them in. And there are some support objects that are used across all business functions. Which approach will be the most effective for making a scalable and maintainable application?"} {"_id": "228110", "title": "Disadvantages of vertical user stories", "text": "The **agile approach** is to structure the work into **vertical user stories** and deliver a focused but fully functioning piece of the application from **end-to-end**. Because this is the new approach to building software, I read a lot of literature about why this is better than horizontal stories, but I do not find much about the disadvantages of this approach. I already drank the agile Kool-Aid and I too agree that vertically slicing the cake has many advantages over horizontal slicing. Here is a short list of disadvantages that I could come up with: * A developer might initially be slower at implementing a feature because s/he must understand all the technologies required to develop the story (UI + service layer + data access + networking, etc...) * Overall architecture design (creating the backbone of the application) does not really fit this mantra (however some might argue that it is part of a user story to develop/change the overall architecture) **What are some more drawbacks of vertically slicing user stories?** _Note: The reason I am asking this question now is because I am going to attempt to convince a team to start writing stories the 'vertical way' and I want to be able to bring up the possible trade-offs ahead of time so they won't consider the approach a failure when they are faced with the drawbacks._"} {"_id": "199365", "title": "Efficient Copy While Sorting", "text": "I have an algorithms library called NDex. I am in the process of upgrading it to a new version. Part of this upgrade involves providing two versions of many algorithms: an in-place version and a version that copies the results to a destination list. I will be providing two sort variations: quick sort and merge sort. The merge sort is stable. I am curious whether quick sort is better than, say, heap sort when copying items to a new list. Merge sort uses a buffer internally. 
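(For context, the copying merge step I have in mind looks roughly like this – an illustrative C# sketch, not the actual NDex code:)

```csharp
using System.Collections.Generic;

static class MergeSketch
{
    // Merges the two sorted halves source[0..mid) and source[mid..n) into dest.
    // The "<= 0" comparison keeps equal items in their original order (stable).
    public static void Merge<T>(IList<T> source, int mid, IList<T> dest, IComparer<T> cmp)
    {
        int i = 0, j = mid, k = 0;
        while (i < mid && j < source.Count)
            dest[k++] = cmp.Compare(source[i], source[j]) <= 0 ? source[i++] : source[j++];
        while (i < mid) dest[k++] = source[i++];
        while (j < source.Count) dest[k++] = source[j++];
    }
}
```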
I am curious when doing a merge sort whether I can double up the destination as the buffer. I was hoping someone could give me some advice or point me to some useful material."} {"_id": "228118", "title": "Communicate to a junior developer about bad style", "text": "I have a friend who's managed to pick up comma first (best case might be made here: http://ajaxian.com/archives/is-there-something-to-the-crazy-comma-first- style) but it's 100% against our org's style guide and the Python culture. If you don't want to click the link, he's basically doing this: a, b, c, d = tuple('abcd') # setup for interpreter demo, if desired foo = ( a , b , c , d ) Maybe it's just habit or training, but I do not want to even begin to have the spark of a thought about even starting to think about doing this; much less see others pick it up. But I'm also a bit of a grammar fiend too. How do I communicate to a him that keeping to idiomatic style is better for his career and reputation as a programmer?"} {"_id": "39174", "title": "Separation of Tasks in an n-tier app: By module or by tier?", "text": "Developing an n-tier app with a team, is it better to divide tasks by module or use case (e.g. Employee 1 creates Admin module, employee 2 creates Payroll module, etc.) or by tier (e.g. Employee 1 creates UI, Employee 2 creates Logic, Employee 3 creates all Data Access)? I'm more inclined towards the former. I think that it's only logical that if I'm coding one whole module or at least related use cases, I would be able to code continuously without waiting for others to finish before integrating my work. But the counter-argument, which is the latter, is that by separating tasks between layers, a programmer can take advantage of n-tier's separation of concerns and focus only on a specific layer (to some) they're familiar with."} {"_id": "165219", "title": "What are the benefits and drawback of documentation vs tutorials vs video tutorials", "text": "Which types of learning resources do you find the most helpful, for which kinds of learning and/or perhaps at specific times? Some examples of types of learning you could consider: * When starting to integrate a new SDK inside an existing codebase * When learning a new framework without having to integrate legacy code * When digging deeper into an already-used SDK that you may not know very well yet For example - (video) tutorials are usually very easy to follow and tells a story from beginning to end to get results, but will nearly always assume starting from scratch or a previous tutorial. Therefore such a resource is useful for quick learning if you don't have legacy code around, but less so if you have to search for the best-fit to the code you already have. SDK Documentation on the other hand is well-structured but does not tell a story. It is more difficult to get to a specific larger result with documentation alone, but it is a better fit when you do have legacy code around and are searching for perhaps non-obvious ways of employing the SDK or library. Are there other forms of resources that you find useful, such as interactive training?"} {"_id": "84875", "title": "How are you setting up re-usable development environments?", "text": "I'm trying to come up with a good approach to creating a re-usable development environment such that it doesn't take a couple days to re-build a machine if it starts to sputter and to be able to onboard new developers faster. 
Reading about Quora's development setup made me consider alternatives to the old development environment build delay. For a .NET/Windows shop though, how are you solving this problem? * Local virtual machine on your desktop/laptop that you can share with the other members of the team? * A dedicated server (physical or virtual) that all developers remote desktop into and that can be easily backed up. (obviously requires a network connection, so there's a downside)? * An instance in the cloud (like Quora)?"} {"_id": "69568", "title": "How do you automatically test a time check?", "text": "Say you have a property `startTime`. Then you have a method `doSomething`: doSomething() { //...stuff startTime = System.currentTimeMillis(); //... more stuff } How do you test that `startTime` was assigned correctly? You can't test against an absolute timestamp - it'll likely change between assignment and test. Maybe use a range?"} {"_id": "215429", "title": "Is it wise to store a big lump of json on a database row", "text": "I have this project which stores product details from amazon into the database. Just to give you an idea on how big it is: ` [{\"title\":\"Genetic Engineering (Opposing Viewpoints)\",\"short_title\":\"Genetic Engineering ...\",\"brand\":\"\",\"condition\":\"\",\"sales_rank\":\"7171426\",\"binding\":\"Book\",\"item_detail_url\":\"http://localhost/wordpress/product/?asin=0737705124\",\"node_list\":\"Books > Science & Math > Biological Sciences > Biotechnology\",\"node_category\":\"Books\",\"subcat\":\"\",\"model_number\":\"\",\"item_url\":\"http://localhost/wordpress/wp- content/ecom-plugin- redirects/ecom_redirector.php?id=128\",\"details_url\":\"http://localhost/wordpress/product/?asin=0737705124\",\"large_image\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/large- notfound.png\",\"medium_image\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/medium- notfound.png\",\"small_image\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/small- notfound.png\",\"thumbnail_image\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/thumbnail- notfound.png\",\"tiny_img\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/tiny- notfound.png\",\"swatch_img\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/swatch- notfound.png\",\"total_images\":\"6\",\"amount\":\"33.70\",\"currency\":\"$\",\"long_currency\":\"USD\",\"price\":\"$33.70\",\"price_type\":\"List Price\",\"show_price_type\":\"0\",\"stars_url\":\"\",\"product_review\":\"\",\"rating\":\"\",\"yellow_star_class\":\"\",\"white_star_class\":\"\",\"rating_text\":\" of 5\",\"reviews_url\":\"\",\"review_label\":\"\",\"reviews_label\":\"Read all \",\"review_count\":\"\",\"create_review_url\":\"http://localhost/wordpress/wp- content/ecom-plugin- redirects/ecom_redirector.php?id=132\",\"create_review_label\":\"Write a review\",\"buy_url\":\"http://localhost/wordpress/wp-content/ecom-plugin- redirects/ecom_redirector.php?id=19186\",\"add_to_cart_action\":\"http://localhost/wordpress/wp- content/ecom-plugin- redirects/add_to_cart.php\",\"asin\":\"0737705124\",\"status\":\"Only 7 left in stock.\",\"snippet_condition\":\"in_stock\",\"status_class\":\"ninstck\",\"customer_images\":[\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg\",\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/31FIM- YIUrL.jpg\",\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg\",\"http://localhost/wordpress/wp- 
content/uploads/2013/10/ecom_images/51M2vvFvs2BL.jpg\"],\"disclaimer\":\"\",\"item_attributes\":[{\"attr\":\"Author\",\"value\":\"Greenhaven Press\"},{\"attr\":\"Binding\",\"value\":\"Hardcover\"},{\"attr\":\"EAN\",\"value\":\"9780737705126\"},{\"attr\":\"Edition\",\"value\":\"1\"},{\"attr\":\"ISBN\",\"value\":\"0737705124\"},{\"attr\":\"Label\",\"value\":\"Greenhaven Press\"},{\"attr\":\"Manufacturer\",\"value\":\"Greenhaven Press\"},{\"attr\":\"NumberOfItems\",\"value\":\"1\"},{\"attr\":\"NumberOfPages\",\"value\":\"224\"},{\"attr\":\"ProductGroup\",\"value\":\"Book\"},{\"attr\":\"ProductTypeName\",\"value\":\"ABIS_BOOK\"},{\"attr\":\"PublicationDate\",\"value\":\"2000-06\"},{\"attr\":\"Publisher\",\"value\":\"Greenhaven Press\"},{\"attr\":\"SKU\",\"value\":\"G0737705124I2N00\"},{\"attr\":\"Studio\",\"value\":\"Greenhaven Press\"},{\"attr\":\"Title\",\"value\":\"Genetic Engineering (Opposing Viewpoints)\"}],\"customer_review_url\":\"http://localhost/wordpress/wp- content/ecom-customer- reviews/0737705124.html\",\"flickr_results\":[\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/5105560852_06c7d06f14_m.jpg\"],\"freebase_text\":\"No around the web data available yet\",\"freebase_image\":\"http://localhost/wordpress/wp- content/plugins/ecom/img/freebase- notfound.jpg\",\"ebay_related_items\":[{\"title\":\"Genetic Engineering (Introducing Issues With Opposing Viewpoints), , Good Book\",\"image\":\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/140.jpg\",\"url\":\"http://localhost/wordpress/wp- content/ecom-plugin- redirects/ecom_redirector.php?id=12165\",\"currency_id\":\"$\",\"current_price\":\"26.2\"},{\"title\":\"Genetic Engineering Opposing Viewpoints by DAVID BENDER - 1964 Hardcover\",\"image\":\"http://localhost/wordpress/wp- content/uploads/2013/10/ecom_images/140.jpg\",\"url\":\"http://localhost/wordpress/wp- content/ecom-plugin- redirects/ecom_redirector.php?id=130\",\"currency_id\":\"AUD\",\"current_price\":\"11.99\"}],\"no_follow\":\"rel=\\\"nofollow\\\"\",\"new_tab\":\"target=\\\"_blank\\\"\",\"related_products\":[],\"super_saver_shipping\":\"\",\"shipping_availability\":\"\",\"total_offers\":\"7\",\"added_to_cart\":\"\"}] ` So the structure for the table is: * asin * title * details (the product details in json) Will the performance suffer if I have to store like 10,000 products? Is there any other way of doing this? I'm thinking of the following, but the current setup is really the most convenient one since I also have to use the data on the client side: * store the product details in a file. So something like ASIN123.json * store the product details in one big file. (I'm guessing it will be a drag to extract data from this file) * store each of the fields in the details in its own table field Thanks in advance! **UPDATE** Thanks for the answers! I just want to add some more details to my question. First, the records are updated for a specific interval. Only specific data such as the price or the title are updated. Second, I'm also using the json encoded data in the client-side so I thought at first it would be easier to just have it json encoded so I can easily use it in the client side without having to convert. Does this change your opinion about simply storing the fields in a regular table field in an RDBMS setup?"} {"_id": "100711", "title": "Is it time to do away with 'front-end' and 'back-end' as tech jargon?", "text": "It seems that the primary usage of these terms is either: 1. 
mutual mystification of contractors and/or clients who don't know what they are talking about or how to talk about it, 2. the innocent and ignorant (see this question) , or 3. laziness to avoid having to say user-interface/user-facing or business logic/database/infrastructure. Granted, you don't always want to explain to the boardroom what the aforementioned terms mean. Is that justification for adding confusion by using these terms to generalize and mis-generalize all kinds of technologies?"} {"_id": "154661", "title": "What is a best practice tier structure of a Java EE 6/7 application?", "text": "I was attempting to find a best practice for modeling the tiers in a Java EE application yesterday and couldn't come up with anything current. In the past, say java 1.4, it was four tiers: 1. Presentation Tier 2. Web Tier 3. Business Logic Tier 4. DAL (Data Access Layer ) which I always considered a tier and not a layer. After working with Web Services and SOA I thought to add in a services tier, but that may fall under 3. the business logic tier. I did searches for quite a while and reading articles. It seems like Domain Driven Design is becoming more popular, but I couldn't find a diagram on it's tier structure. Anyone have ideas or diagrams on what the proper tier structure is for newer Java EE applications or is it really the same, but more items are ranked under the four I've mentioned? **Aftermath:** Really, I should have reworded my question as my real intent was to ask where to put the business logic in a Java EE application? In thinking along those terms, I somehow concluded that the tier structure had changed, when it had not. However, with the advent of POJO programming and lightweight frameworks that I was using in my day to day work, I lost sight of all this. The tier structure of Java EE apps still holds to the same, but now instead of heavy weight EJB2 type business logic in the middle tier I was using POJO's with services to implement business logic."} {"_id": "224325", "title": "Small Product backlog / user story feedback - media player application", "text": "I have been tasked with developing a short product backlog for the following: > A media player application is needed for playing mp3 files. It should have > the ability to search the users file structure for playable files which can > be played or imported into a media library for playing later. There should > be the usual options for starting, pausing and stopping playback of a > selected file\u2026 My solution / user stories are as follows: * As a user I can use the media player application so that I can play MP3 files. * As a user I want the ability to search for playable files to play them later * As a user I want the ability to search for files so that I am able to import them to my media library. * As a user I can use certain options on an app so that I can start, pause and playback a file. Am I on the right track?"} {"_id": "120040", "title": "Design review for application facing memory issues", "text": "I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self inflicted design pain that is now leading to my application crashing due to out of memory errors. 
An abridged description of the **problem domain** is as follows: * The application takes in a “dataset” that consists of numerous text files containing related data * An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file varies greatly, from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files, which basically indicates key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. **The memory problem** Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following: * perform all the validation whilst the data was in memory * relate it using an object model * Start DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data * Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands. * Once all the data is written without error (can take several minutes for large datasets), commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seeing datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. **Potential improvements** * I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though. Especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). * Use of staging tables to get data into the database and only performing the transaction to copy data from staging area to actual destination tables. **Questions** * For systems like this that import large quantities of data, how do you go about keeping transactions small? 
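(For concreteness, the heart of our current write path is a bulk copy inside one big transaction – a trimmed C# sketch; the table name and schema are made up:)

```csharp
using System.Data;
using System.Data.SqlClient;

static class EavWriter
{
    // Writes the EAV rows for one dataset inside the caller's transaction.
    // BatchSize only chunks the network round-trips; the enclosing transaction
    // still spans the entire dataset, which is the part I want to shrink.
    public static void WriteValues(SqlConnection conn, SqlTransaction tx, DataTable rows)
    {
        using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
        {
            bulk.DestinationTableName = "dbo.AttributeValues"; // hypothetical table
            bulk.BatchSize = 5000;
            bulk.WriteToServer(rows);
        }
    }
}
```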
I\u2019ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? * The tab delimited data section is read into a DataTable to be viewed in a grid. I don\u2019t need the full functionality of a DataTable, so I suspect it is overkill. Is there anyway to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention."} {"_id": "120047", "title": "What is block level design in context of mobile application?", "text": "I was wondering if anyone can give me some direction related to \"Block level design\" in context of mobile application? Recently I saw a video in my university and a professional mentioned about building block level design of applications, since then it has stuck in my brain! It would be really helpful if someone can share their knowledge."} {"_id": "55222", "title": "Why should I not do a masters degree", "text": "I have left university on July 2010 where I studied web design (as we all know you learn more by your self but that\u2019s not the issue at the moment). Since then I have not managed to find a job (apart from a one month work experience), from the way things are going, and by taking into account the fact that all my university friends are in the same situation, I don\u2019t think that I am going to find a job soon (within the industry) Now as we all do, even though I don\u2019t have a job, I am still working on personal projects and try to keep up to date (I don\u2019t need a job or uni to do this) \u2013 but I am thinking, because there is not work available, would it be worth going back to uni for a master degree? I know I don\u2019t need it and I know that is unlikely that I will learn anything important, as I believe in self learning, and in most cases it is a lot more effective (but I have to say I don\u2019t mind going back to school) The only reason I am thinking of doing the master is, (and this is where I need your help): If it takes me a year to get a job, then on the interview, would the employer think \u201cwhat the hell did this guy do since he left university\u201d \u2013 now if I go to university that would solve this problem. 
Or I\u2019m I making up a problem that does not exist Plus, I know that employers need examples of sites that I have been working on, at the moment I only have 3 (as when working on personal projects, where their is not time limit I tend drag things in order to get them perfect, and they never get perfect) \u2013 so by going back to uni, then this problem maybe solved I said all this as I have read a lot about the fact that you don\u2019t need to have a masters degree to work on web design market (and I totally agree) but considering my concern, the question is should I do a masters course to avoid just spending hours in my room working and learning in my own (but that it would be hard to convince employers that I was really learning in my room) Maybe because I\u2019m still young age 22 not that old anyway :), but I don\u2019t have the \u201cdream\u201d of being rich, so if I were to tell the truth I don\u2019t really care of the fact that I don\u2019t have a job (at the moment), because regardless, I am working on what I love every day, but I know that in the future when I will need the job I may find it harder to get one, if I neglect doing so now Every time I ask a question that I\u2019m not sure about, I keep going on and on, but I really hope you get what I am trying to get across. By the way the course that I am looking at for a masters says that it would teach me how to do these: e-commerce e-government e-science e-learning I don\u2019t know any of them, a part from e-commerce Thanks"} {"_id": "21118", "title": "What is the general tech news website every programmer should read?", "text": "A link and a quick explanation why would be very helpful. General tech news, I mean not like InfoQ which is too technical. Probably much more like P.SE than SO. EDIT: site should help developers gain general knowledge about the industry, technologies, inventions, etc. Much like when we listen to the news every morning in our car to know what is happening in our world. But target audience must be either programmmers, geeks or any other persons interested in technology."} {"_id": "236558", "title": "Generic object construction - Inherited Classes", "text": "Basically I am writing a MSMQ based multi-threaded messaging pattern utility library. It's like a set of components all inherited (directly or indirectly) one class that's called my base component class. Each of these messaging components have the ability to listen to multiple queues via multiple threads and to process messages via multiple threads. So I have a worker base class that executes itself on a thread, but in implementation you would inherit this class, and fill in the gaps. So I want to be able to basically construct a generic object that inherits this class on the fly, and then put it to work. So far I have this which works, but I am just wondering if there is a better way to do it out there. My current code... 
public class EzQBaseComponent<TWorker> : IEzQComponent<TWorker> where TWorker : EzQWorker { /// LOTS OF CODE YOU DON'T NEED TO KNOW :\"D private void Listener_MessageRecieved(Guid listenerID, MessageQueue queue, Message msg, MessageQueueTransaction myTransaction) { try { lock (m_MessageRecievedLocker) { if(myTransaction == null) { // YAWN } if(msg.Label == c_CustomComponentMessageCommandLabel) { // YAWN } else if(Workers.Count < DelegatedWorkers) { Type t = typeof(TWorker); ConstructorInfo[] conInfos = t.GetConstructors(); ConstructorInfo correctConstructor = null; foreach (ConstructorInfo cInfo in conInfos) { if (cInfo.GetParameters().Count() < 1) { correctConstructor = cInfo; } } if (correctConstructor == null) { throw new Exception(\"Generic TWorker class does not contain a constructor with '0' arguments. Cannot construct class.\"); } TWorker worker = (TWorker)correctConstructor.Invoke(null); // YAWN } else { // YAWN } } } catch (Exception Ex) { // NOOOO EXCEPTION!! } } Basically, my base class has a no-parameter constructor. So I look for the one without parameters via reflection, and then use that constructor. Any thoughts on the construction of the generic object? Is that the best way?"} {"_id": "241537", "title": "Algorithm for optimal combination of two variables", "text": "I'm looking for an algorithm that would be able to determine the optimal combination of two variables, but I'm not sure where to start looking. For example, if I have 10,000 rows in a database and each row contains price and square feet, is there any algorithm out there that will be able to determine what combination of price and sq ft is optimal. I know this is vague, but I assume it is along the lines of fuzzy logic and fuzzy sets, but I'm not sure and I'd like to start digging in the right field to see if I can come up with something that solves my problem."} {"_id": "236552", "title": "Is it appropriate to use inheritance to prevent code duplication of the logic for a user control?", "text": "Suppose I have two or more `UserControl` implementations with vastly different implementations but near-identical code-behind. One strategy to avoid code duplication is as follows: 1. Change each `UserControl` to inherit from a new abstract class, `AwesomeUserControl` 2. Move all of the codebehind from each `UserControl` into `AwesomeUserControl` (`AwesomeUserControl` does not have an ascx file). 3. Add abstract getters to `AwesomeUserControl` for any controls from the ascx file of the original UserControls. 4. Implement these getters in each `UserControl`. This strategy works extremely well and is easy to maintain. However, it also introduces stupid scaffolding code; every new implementation of `AwesomeUserControl` has a bunch of getters to hook `AwesomeUserControl` up to the UI elements inside of the new control. Any time I find myself using a strategy which _requires_ copy-pasted scaffolding code, I worry that I am doing something wrong. Though it does have a nice effect of failing to compile if the UI elements are missing (as opposed to skipping the scaffolding code in favor of reflection and required naming). Is this use of inheritance reasonable?"} {"_id": "236550", "title": "What's the algorithm of \"Shuffle Mode\" in audio/music players", "text": "I was researching how the \"Shuffle mode\" in music players is implemented and what algorithm is behind it. How does it make sure that songs won't repeat? What's the most efficient algorithm for doing this? 
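(For reference, the shuffle mentioned in the update below fits in a few lines – a C# sketch, names mine:)

```csharp
using System;
using System.Collections.Generic;

static class Shuffler
{
    // Fisher-Yates: permute the playlist once, then play it front to back.
    // Every permutation is equally likely, and no track can repeat because
    // each track occupies exactly one slot in the shuffled order.
    public static void Shuffle<T>(IList<T> tracks, Random rng)
    {
        for (int i = tracks.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1); // 0 <= j <= i
            T tmp = tracks[i];
            tracks[i] = tracks[j];
            tracks[j] = tmp;
        }
    }
}
```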
* **Added on 2014-Apr-23:** The Fisher-Yates shuffle algorithm is the simplest algorithm for shuffling. This blogpost titled \"The Art of Shuffling Music\" explains another algorithm with illustrations."} {"_id": "219936", "title": "What is the most efficient way to manage large collection", "text": "I have a rather difficult problem: I periodically receive a `Dictionary<decimal, int> subDict` containing 100 records where * decimal is a Price * and int is Amount The outcome I need is a dataGrid showing prices with amounts. But I cannot display subDict directly, because it is often missing some prices: subDict: * Price | Amount * 100.0001 | 192 * 100.0005 | 123 * 100.0007 | 2 * 100.0008 | 123 above needs to be displayed as: * Price | Amount * 100.0001 | 192 * 100.0002 | 0 * 100.0003 | 0 * 100.0004 | 0 * 100.0005 | 123 * 100.0006 | 0 * 100.0007 | 2 * 100.0008 | 123 I have to store a price with 4 decimal places, however the range of Price can be anything from 0 to 10 000. Final requirement is - once I click on the amount in the DataGrid, I need to be able to get access to all values in the row. So I decided to ask for some help - can anyone think of a good strategy of dealing with such a problem? One idea that came to my mind would be to initialize a `Dictionary<decimal, int> bigDictionary` containing every price in the range. However, if I initialize this dictionary with all the prices, with specified granularity (i.e. step of 0.0001) I end up with 10 000 * 10 000 records. That is a rather large list, which takes a considerable amount of memory. Of course it is easy to bind it to DataGrid in WPF so this can sort of work, but is in my opinion very inefficient. The other idea I have would be to save incoming prices to a `Dictionary<decimal, int> smallDict` but then I would need to have some sort of mechanism to update DataGrid in such a way, that price gaps would be filled on the fly. Also I would somehow need to keep the dataGrid sorted by the price. The first idea I am able to implement. The second one, I have no idea about so any code would be excellent. I would be grateful for any ideas! Thank you for your time"} {"_id": "159164", "title": "Do accompanying tools/scripts also need to be licensed to put my bash script under GPLv3?", "text": "This is some sort of follow-up to an older question of mine. So I put the GPLv3 license into the project which contains my script and also added the copyright notice and copying permission statement to the top of my script. Now, apart from the script, the package also contains files which are helpful for using the script (e.g. a bash completion file and unit tests) but which are not relevant for the script to actually work. Do I also need to put copyright notice/permission in there? I've looked into some projects on github, and most don't seem to add this information to accompanying things. Still, I'm unsure what's the best approach."} {"_id": "124293", "title": "How to detect impacts caused by changes?", "text": "I've often run into this problem. Now I am working on a team of 4 and we built a lot of stuff. We are still finishing some things and making **changes**. The problem is that those changes can (and most probably will) cause interface/business to crash. We do not have any kind of map that shows us something like \"If you change the DataSource of this table the code X on that table will have to change. And on the screen X the list will have to be changed\". Unit tests can help validate business rules, what about interface business? Can it also validate code bound to the interface? 
Should we rely only on unit tests? And how deep do the tests have to go to prevent the code from breaking because of changes? Should we use some other approach as well?"} {"_id": "162245", "title": "Git workflow / practices for a small project (flowchart in png)", "text": "I'm trying to come up with a personal workflow. I've put together a flowchart of the hypothetical lifespan of a release: one developer pushing to a public github repo + a friend helping with some feature and fixing a bug. Is this a reasonable approach to version control? The main idea is to keep the public repo tidy: * Each new release gets on its own branch until it's finally tagged in the master branch when it's finished. * All work is done on \"feature\" or \"hotfix\" branches, never on an actual release branch, to prevent anomalies. * Merges to higher-level branches are always rebased or squashed (to avoid clutter). If it's overkill I don't mind because the whole point for me is to learn skills I might need for a larger project. The only problem would be if I'm doing something flat out wrong or unnecessary. **edit 2:** fixed bad idea in the original flowchart and made it a bit easier to navigate. ![v1.1](http://i.stack.imgur.com/kxOIy.jpg)"} {"_id": "16179", "title": "Is Object Oriented Programming a solution to complexity?", "text": "Do you think Object Oriented Programming is a solution to complexity? Why? This topic may be a bit controversial, but my intention is to hear the answer to the Why from the experts here!"} {"_id": "159160", "title": "Class design for internationalized object", "text": "I'm looking for some pointers on class design for a global application. Let's say I have to make a class structure to manage products, and the products are sold in different countries. Some of the fields for the product will have the same value across all countries (e.g. product code, ERP description); I will call these \"international\" fields, and some fields will be specific to a single country (e.g. local description), let's call these \"local\" fields. Of course, some \"local\" fields will be the same for groups of countries (e.g. weight: 1 kilogram / 2 pounds). Also I expect that not all countries will have values for all fields. Which fields are \"international\" and which fields are \"local\" may change from one installation to another and I am reluctant to bake this into the design as I'm sure it will bite me later on. So, I'm trying to figure out how to structure the objects so that I can use a product at an international level and always refer to the same \"product\", but also maintain and use the local information when necessary? Just to be clear, I'm not talking about user-locale, number or date formatting etc. The source data is coming from different database schemas (one for each country). The end product will be written in C#. I'm wondering if anyone has experience or can point me to a pattern that would provide a good solution to this before I go and reinvent the wheel?"} {"_id": "124297", "title": "S-expressions readability", "text": "In a nutshell and for those who didn't know it, Lisp functions/operators/constructs are all uniformly called like this: (function arg0 arg1 ... argN) So what you would express in a C-like language as if (a > b && foo(param)) is transformed to a Lisp sexp like (if (and (> a b) (foo param))). As things get more real/complicated, so do their corresponding s-expressions, for me. 
I'm aware that this is most likely a subjective question, but - is this, for many Lisp hackers, one little annoyance one will always have to deal with? Or does one mostly get used to this (lack of) syntax sooner or later? In any case, is adding breaklines (that you wouldn't add in your C equivalent, most times) for readability a good idea, especially in the long run? Any other suggestions would be welcome."} {"_id": "81349", "title": "Best permissive license for a utility library?", "text": "I have a small utility library of useful stuff written in Java that I plan on releasing open source. I've been wavering on what license to use. I quite like the BSD license, which is short and easy to understand, but I don't want/need the clause about including the disclaimer in their product's documentation. Considering just dropping that bit out. Would the MIT license suit me better, then? It doesn't have the endorsement prohibition clause like the BSD one does, which is something I like about BSD's. Also, does MIT's clause about keeping the copyright notice on substantial portions of the software just refer to the source code, and not binary form or any documentation they produce? From surveying other SO questions on the topic, I've seen a few people recommend the Apache license. Having a quick scan though it, it actually might do most of what I want really well, although even that amount of legalese makes my head hurt (particularly at 2:30AM when I should be in bed instead of on SO.) Thoughts? Basically I want something that is: * easy to understand, * says you can use the code as you like, but keep my copyright and permission notice on the source code, * you don't need to put the name of me or my product or copyright notice etc. in any documentation, manuals, etc. that you produce, * don't try and use me or my product as a selling point for your product (not that my endorsement would count for much anyway!) * and covers my butt in a reasonable manner. :-) EDIT: Wow, 30 minutes and already some good responses! In response: I'd prefer not to \"mix and match\" if I can help it, and produce yet another open source license. Using a standard license makes it easier for all of us. The butt covering comment is a bit tongue in cheek. The warranty disclaimer that all the licenses mentioned include is really all I'm talking about. EDIT: Reading through the Wikipedia page on the MIT License, I discovered that ncurses uses a modified version that has been approved by the FSF, that adds a non-endorsement paragraph. I figure if it is good enough for them, it is good enough for me. I was considering the Apache license, however compatibility problems with GPLv2 would be an issue I didn't want to introduce."} {"_id": "165217", "title": "Is the book \"Structure and Interpretation of Computer Programs\" a good read for Java programmers?", "text": "This may be subjective and likely to be closed but I still wanted to know if it's really helpful to read Structure and Interpretation of Computer Programs. Structure and Interpretation of Computer Programs The book does not use Java. Not that I wanted to learn Java. I am just curious to know whether it will be a useful read for becoming a better programmer, what I can gain from the book, and whether there are any other alternatives to this book more suited to Java programmers?"} {"_id": "240485", "title": "Is it good style to store view data inside the model?", "text": "I'm using a variant of the MVC pattern. 
In my GUI code, often the need arises to synchronize \"view data\" (e.g., selected item) between different views. For example, let's imagine a vector drawing program. We have two views: the image, and a listview of all objects (rectangles, squares, ...). The currently selected item should remain in sync - if you click on \"Rectangle A\" in the listview, the same rectangle should be highlighted in the image view. The way I usually do it is to have a ViewState class contained in my model. Is that a good idea? If not, what would be a better solution? class VectorDrawing { List<Object> Items; DrawingViewstate Viewstate; } class DrawingViewstate { Object SelectedItem; event SelectedItemChanged; } class ListviewController { ListviewController(VectorDrawing model) { model.Viewstate.SelectedItemChanged += ... // Subscribe to event ... } } class ImageViewController { ... // similar to the ListviewController }"} {"_id": "190033", "title": "implement cons function in Java - type safety question", "text": "I am working on a small functional library written in Java, which mimics a functional style of programming. I am stuck with an undesirable type cast in one of my method definitions and would love some help. Ok, so we do not have first class functions in Java, so I define them as objects, with one method 'apply()' like so: public abstract class Function2<T1, T2, R> { public abstract R apply(final T1 paramOne, final T2 paramTwo); } I then define my cons method as a Function2 object, which I can pass around like I would a function in another language that supports it: public static <T> Function2<T, List<T>, List<T>> cons(){ return new Function2<T, List<T>, List<T>>(){ @Override public List<T> apply(T x, List<T> xs) { return new NonEmptyList<T>(x, xs); } }; } I have omitted my list structure for brevity; assume it is a functional style list data structure with all the usual head/tail/etc. operations. I then want to implement a function like 'reverse', which returns a list of items in reverse order. I use foldl (fold a list from the left) to achieve this, and pass the cons function as a parameter to foldl like so: public static <T> List<T> foldl( Function2<T, List<T>, List<T>> f, List<T> acc, List<T> xs ){ if(xs.isEmpty()){ return acc; } return foldl(f, (f.apply(xs.head(), acc)), xs.tail()); } public static <T> List<T> reverse(List<T> xs){ // how do I avoid this cast?? return foldl( (Function2<T, List<T>, List<T>>) cons(), new EmptyList<T>(), xs); } But when I pass my 'cons()' object in 'reverse', I need to cast it as a Function2, and I have no idea how to avoid doing this. I have tried all manner of messing around with types...I feel this is simply a lack of experience on my part with the Java type system...anyone? PS. I am aware of other functional libraries in Java, I just wanted to do my own small one as a learning experience. EDIT - OK, so I am using a regular 'foldl' now to get back a List<T>, but I still have to perform the cast? The return type of 'cons' aligns with 'foldl'..."} {"_id": "38483", "title": "Explaining Drupal to a Programmer", "text": "(Apologies in advance for the vague and miasmic question) Is there a book, or other resource, out there that can explain **using** Drupal, The System, to a web app oriented programmer? I've read through the Apress Pro Drupal Development book and it was a fantastic overview of the system's architecture. That's not what I'm looking for here. Instead I want something that's going to tell/show me \"Here is how people use Drupal to manage their sites and build applications\". 
I've been building websites and web applications since the beginning, and whenever I sit down to spend some time with Drupal I find I can't leave behind my concepts of 1. A URL equals one HTML page, and everything flows from there 2. A URL equals a controller action, and everything flows from there and Drupal seems to be built on an entirely different model. So any books, tutorials, or explanations that start from the above two points and explain how Drupal is different, and what Drupal developers spend their days doing would be fantastic."} {"_id": "135376", "title": "How extensive is the difference between building a WPF app and a Silverlight 5 app?", "text": "I have a fair amount of experience with WPF (C#) and XAML. I might soon be asked to create a Silverlight 5 application. I have no experience with any version of SL. What sort of learning curve could I expect in creating a SL 5 app, given that I have some experience with WPF?"} {"_id": "158417", "title": "Is my mediator layer a sensible way to manage this scenario using the Single Responsibility Principle?", "text": "I'm not sure how to start to explain my question, but here goes. We have just finished an MVC application that hits 2 WCF services. But there has been a bit of a disagreement between those who worked on the project and some who haven't over the design used. What we have is two WCF services, one on each site (site specific data) and another at head office (data shared/common between all sites). Normally we would have two apps, one for site and another for head office. In this case I decided from a UX point of view that it would be better to have it look like one app from the user's point of view so they would not need to know two URLs. Basically, the user does not know if they are hitting site or head office. Now comes the design decision that has caused the disagreements. The MVC App has controllers that need to hit either a site WCF service or a head office WCF service. So to prevent the need of the controllers needing to be aware of the two WCF communications layers and converting the WCF results into a model the controllers could use, I created a layer in between the controllers and WCF Proxies. Which accepts a request from a controller, then makes the appropriate site or central WCF proxy call, then, if needed, converts the WCF result into a model that the controller can use. This layer, I called a Mediator, but after much discussion, I am of the opinion that it is not the correct description for it. Anyway, the reason for this layer is to keep to the SOLID object oriented principles, in particular the Single Responsibility Principle. The controller's responsibility is to handle client requests and return a view/result. It should not need to know or maintain a WCF proxy and convert its result. (It could go directly to a database instead of a WCF call but the controller shouldn't be responsible or even aware of that because it is responsible for the view) The same goes for the WCF proxy: it should not know what is consuming its result, nor that the WCF data contract should be converted into a model specific for each controller. The mediator layer in between is what decides which WCF service and method to call and then either via an extension method or a specific handler converts the result to a model the controller can use. (sometimes it may take several WCF calls to create the model). 
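(Boiled down to code, the layer amounts to something like this – a trimmed illustration; every type name below is invented:)

```csharp
// Invented contract and model types for the sake of the sketch.
public class OrderContract { public int Id; public decimal Total; }
public class OrderViewModel { public int Id; public string DisplayTotal; }

// Stand-in for a generated WCF proxy.
public interface ISiteProxy { OrderContract GetOrder(int id); }

public static class ContractExtensions
{
    // Extension-method conversion from data contract to controller model.
    public static OrderViewModel ToViewModel(this OrderContract c)
    {
        return new OrderViewModel { Id = c.Id, DisplayTotal = c.Total.ToString("C") };
    }
}

public class OrderMediator
{
    private readonly ISiteProxy _site;
    public OrderMediator(ISiteProxy site) { _site = site; }

    // The controller asks for a model; it never sees the proxy or the contract.
    public OrderViewModel GetOrder(int id)
    {
        return _site.GetOrder(id).ToViewModel();
    }
}
```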
Controller1 -> Mediator -> HeadOfficeWCFProxy Controller2 -> Mediator -> SiteWCFProxy * So the controller asks for data (not knowing where it comes from) * The mediator converts and passes the request to the appropriate WCF Proxy(s) * The WCF Proxy(s) make the calls across the wire and return the results. * The mediator then converts the result(s) into a model (if needed) via an extension method and returns it to the controller. * The controller renders the view using said model. Now what I described here was just on the MVC side. I have a similar setup on the WCF service side but the mediator's benefit is clearer. On the head office WCF service WCFListener -> Mediator -> DataLayer (LinqToSQL) -> SiteSyncHandler (sends change to all sites asynchronously) -> ThirdPartyContentManagementHandler (via web service) Now the devs that have worked on the project like the separation of concerns a lot, as it is very clear where everything belongs, and it keeps code to a minimum in each layer. But the downside in the MVC app is for a lot of the calls the mediator just makes a single WCF call and doesn't even need to convert the result (most of the time the data contract is fine for use as a model), resulting in a fair amount of one-liner method boilerplate code. The mediator doesn't really add much in that scenario except to keep things consistent across controllers. Some devs that haven't worked on the project insist that there is no need at all for the Mediator layer and that it could be all handled by the controllers themselves. Which means maintaining the appropriate WCF Proxy in each controller. This in my mind violates the Single Responsibility principle. To which they argue that SOLID is more like a set of guidelines, and not to be blindly followed. While I kinda agree with that statement, it's also not to be blindly ignored. I have conceded that the term mediator is not really appropriate (perhaps director or relay), but I still think the basic design is valid, and would use it again. The same can be said for those who have worked on the project. I would like to know what other people think. Is there a better solution or have I \"over-complicated it\"? (I still don't agree that it is complicated; it's a very simple concept and easy to maintain)"} {"_id": "90250", "title": "UML to portray SQL objects", "text": "I am delving into the science of UML. The tool I am using is **ArgoUML**. I feel very confident portraying OO designs through this tool. But what still vexes me is the incorporation of the database. What does one do in UML to show table structure? Stored procedures? So on and so forth."} {"_id": "244477", "title": "Is it possible to have a mutable type that is not garbage collected?", "text": "I'm wondering if such a thing can exist. Can there be an object that is mutable but not flagged as garbage collected (specifically, tp_flags & Py_TPFLAGS_HAVE_GC)? 
Is this logically sound, or is there some mythical type that will blow me out of the water here."} {"_id": "244476", "title": "What is decoupling and what development areas can it apply to?", "text": "I recently noticed decoupling as a topic in a question, and want to know what it is and where it can apply. By \"where can it apply\", I mean: Is it only relevant where compiled languages like C and Java are involved? Should I know about / study it as a web developer?"} {"_id": "244472", "title": "Modern.IE VM license", "text": "Microsoft provides some VMs for testing purposes (advertised on StackOverflow) and I'm trying to understand the license terms. The one I don't really understand is > 1.b. You may use the software for testing purposes only. You may not use the > software for commercial purposes. My thoughts: a) Testing a website in several browsers on several different virtual machines seems a quite professional approach. I hardly believe many private developers would do that. Of course they should, but which private developer has the time to do so? b) If that's really only available to private developers, what is the offer to companies doing the same thing? I am missing the advertisement for a paid service. My question as a non-native English speaker: Is testing by a company considered as a commercial purpose? Can I use the VMs within a company for testing or not?"} {"_id": "13443", "title": "Should source-code in textbooks and the like be translated?", "text": "A few weeks ago, my class was assigned to translate to Portuguese the book Real World Haskell. As I did the translation of the text and comments, I started to wonder if I should translate the code as well, as the instructor suggested. For example: data BookInfo = Book Int String [String] deriving Show would become data InfoLivro = Livro Int String [String] deriving Show Since I haven't read any software-related books in Portuguese, I don't know if that's a common practice, neither if it should be done this way. In the end, the code is a language mix (perhaps the example in Haskell is not a good one, since you can create synonyms quickly like `type CadeiaDeCaracteres = String`, but you get the point). So it doesn't really matter how hard you try, you'll have to rely on the reader previous experience with some sort of basic English words. Knowing this, I really don't see the point in translating code, since we learn in the early days of our coding life it should be written in the universal language. Nevertheless, if the surrounding text (comments, for example, and text itself in a book) needs to be translated, what is possible and feasible in this matter? Can you provide me with some guidance of what to do?"} {"_id": "152547", "title": "Should I tell a departed coworker about their \"sev 1\" defect?", "text": "I had a co-worker leave our company recently. Before leaving, he coded a component that had a severe memory leak that caused a production outage (`OutOfMemoryError` in Java). The problem was essentially a `HashMap` that grew and never removed entries, and the solution was to replace the `HashMap` with a cache implementation. From a professional standpoint, I feel that I should let him know about the defect so he can learn from the error. On the other hand, once people leave a company, they often don't want to hear about legacy projects that they have left behind for bigger and better things. 
What is the general protocol for this sort of situation?"} {"_id": "254281", "title": "Emacs stops taking input when a file has changed on disk", "text": "I'm using Emacs v24.3.1 on Windows 8. I had a file change on disk while I had an Emacs buffer open with that file. As soon as I attempt to make a change to the buffer, a message appears in the minibuffer: File blah.txt changed on disk; really edit the buffer? (y, n, r or C-h) I would expect to be able to hit `r` to have it reload the disk version of the file, but nothing happens. Emacs completely stops responding to input. None of the listed keys work, nor do any other keys as far as I can tell. I can't `C-g` out of the minibuffer. `Alt-F4` doesn't work, nor does `Close window` from the task bar. I have to kill the process from task manager. Anyone have any idea what I'm doing wrong here? In case it's various modes not playing nicely with each other, for reference, my init.el is here. Nothing complex. Here's the breakdown: * better-defaults (ido-mode, remove menu-bar, uniquify buffer names ('forward style), saveplace) * recentf-mode * custom frame title * visual-line-mode * require final newline and delete trailing whitespace on save * Markdown mode with auto-mode-alist * Flyspell with Aspell backend * Powershell mode with auto-mode-alist * Ruby auto-mode-alist * Puppet mode with auto-mode-alist * Feature (Gherkin) mode with auto-mode-alist The specific file was a markdown file with Github-flavored Markdown mode and Flyspell mode enabled."} {"_id": "152541", "title": "When is an object oriented program truly object oriented?", "text": "Let me try to explain what I mean: Say I present a list of objects and I need to get back an object selected by the user. The following are the classes I can think of right now: ListViewer Item App [Calling class] In the case of a GUI application, a click on a particular item is usually the selection of that item, and in the case of a command line, some input, say an integer representing that item. Let us go with a command line application here. A function lists all the items and waits for the choice of object, an integer. So here, I get the choice; is the choice going to be conceived as an object? And based on the choice, the object in the list is returned. Does writing this program in the way explained above make it truly object oriented? If yes, how? If not, why? Or is the question itself wrong and I shouldn't be thinking along those lines?"} {"_id": "152540", "title": "multitenancy with some data sharing", "text": "I'm in the planning stages of a new webapp, and I am leaning strongly toward a multitenancy model. The app has a file storage function, where the user can upload (and operate on) files. I would like the user to have the ability to share these files, however. How is this typically accomplished in a multi-tenant model? The example would be something like google docs. Each user has their own files; they can edit and tag and build collections with these files. Then, they can share a doc or a collection with someone else for collaboration. If every user has their own Database and tables, what strategy would one use to allow this kind of sharing while minimizing duplication of files and associated metadata?"} {"_id": "53031", "title": "Should I use Ruby version 1.8.7 or 1.9.2 to start developing Rails apps?", "text": "I'm diving into RoR and I see that the current version of Rails (3.0.5) works with both 1.8.7 and 1.9.2.
Currently, I have both versions of Ruby installed using RVM, but I'm wondering which version I should be using as I dive into Rails and start developing apps. I suppose I'd prefer to use the newest version (1.9.2), but I don't know the technologies well enough to know the pros/cons of using either. Thanks so much!"} {"_id": "131951", "title": "How to use BPMN and use case and other diagrams together", "text": "I asked this question on stackoverflow, but it seems this question is not suitable there, so I post here for discussion. BPMN (Business Process Modeling Notations) is used for modeling business processes through visualization, thus making intangible ideas become physically concrete through the expression of BPMN diagrams. The question is, **how do I organize the BPMN with the UML**. Initially, I thought of two ways to organize use cases and business process diagrams: * **1 to one/many:** By mapping each step (`step` here means each node in the BPMN diagram) in the business process diagram with one or several use cases. Each use case is mapped with several relevant class diagrams/component diagrams (I prefer this one, since you can encapsulate a set of classes into one component which has input and output), and several sequence diagrams (optional). After you have class diagrams/sequence diagrams, code is written/generated based on the model. * **Many to one:** By mapping several steps into one use case. The subsequent steps are the same. * **Many to many:** For example, one step in the business process can be mapped with two or more use cases, and the same two or more use cases can be mapped with other steps. The above methods can be done by the modeling tool, and in my case, I use Enterprise Architect from Sparx System. I discovered it recently and I am using its trial, but I will buy it in the future. I can organize many use case diagrams with one step of the BPMN diagram, and can click to view the necessary use cases. However, I don't know if it supports many to many cases. After thinking about my own method for organizing BPMN and Use Cases, I searched the Internet, and found two other papers, each suggesting the following method: * **Turn each use case into each step of BPMN diagrams:** To visualize how refined use cases fit into the business process. I like this approach, since the business process with steps can be modeled, and later each step is turned into a use case. One step is one use case. This is the same as my one-to-one mapping above. Original presentation is here: Visualizing Use Case Sets as BPMN Processes ![Use case - BPMN mapping](http://i.stack.imgur.com/uxPpf.png) * **Each use case is exactly a business process:** Each step in the use case is each step of the business process. Original paper is here: Describing Business Processes with Use Cases ![Use case is a process](http://i.stack.imgur.com/jpYVW.png) It seems to me that there's no standardized way of gluing these artifacts (BPMN and Use Cases and other diagrams) together. Maybe it's a management problem that relies more on creative usage than on following formal steps. **What is the proper usage of these diagrams together?**"} {"_id": "62472", "title": "What is the genesis of the \"Golden Master\" term?", "text": "The name \"Golden Master\" evokes a level of martial arts proficiency or maybe even a final cut of a film. I think context clues provide adequate explanation of a Golden Master release.
I would define it as the version of a software [product] that has been approved as the release version following a pre-determined phase of development completion and testing. If my definition is nearly accurate, what is the genesis of the term Golden Master? Apple is famous for their iOS and OS X GM candidate releases, and I have yet to hear anyone else use the same term."} {"_id": "96947", "title": "Why should I declare a class as an abstract class?", "text": "I know the syntax and rules that apply to an abstract class, and I want to know the usage of an abstract class. > An abstract class cannot be instantiated directly but can be extended by another > class What is the advantage of doing so? How is it different from an interface? I know that one class can implement multiple interfaces but can only extend one abstract class. Is that the only difference between an interface and an abstract class? I am aware of the usage of an interface. I have learned that from the event delegation model of AWT in Java. **In which situations should I declare a class as an abstract class? What are the benefits of that?**"} {"_id": "58661", "title": "Can an agile shop really score 12 on the Joel Test?", "text": "I really like the Joel test, use it myself, and encourage my staff and interviewees to consider it carefully. However I don't think I can ever score more than 9 because a few points seem to contradict the Agile Manifesto, XP and TDD, which are the bedrocks of my world. Specifically: the questions about schedule, specs, testers and quiet working conditions run counter to what we are trying to create and the values that we have adopted in being genuinely agile. So my question is whether it is possible for a true Agile shop to score 12? **Edit:** On recommendation from an answerer below I am adding a link to my blog where I originally wrote about this and which led to me wanting to post the question here. http://simonpalmer.com/2011/03/16/why-i-will-never-score-more-than-9-on-the-joel-test/ I'm putting this in because I agree with much of what has been said below and I wanted to declare my full position."} {"_id": "160086", "title": "Should a github maintainer rewrite authors in pull requests?", "text": "I'm not a programmer by profession, but I do some coding and have used github some. I've run across what I find to be a surprising situation. I'm very familiar with git. There is a project which I found a (small) bug in that was affecting me. I spent an afternoon finding and fixing it. I forked the repository, committed the change, and issued a pull request. After seeing that it was closed as \"Merged into development branch\" I figured all was well. I was browsing the repo today getting ready to remove my branch, and I can't find where the commit was merged into the maintainer's repo at all. After some time I realized it had been added as a commit, but the author is no longer me. As far as I can tell the only way to do that would be to specifically use a rebase, amend, or other history rewrite to remove the original author. This seems very wrong to me. At best it's confusing, at worst the author of this repo is taking credit for everyone's commits and then the history of the original contributor is lost. Again it's a small bug, I don't use this for my professional resume, it just seems dishonest. Is this normal? Should I say something about it? Edit: The general feeling seems to be that I should go ask, so I'll do just that this morning. As per the request below.
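For reference, this is how I compared the two fields (plain git; %h, %an and %cn are the short-hash, author-name and committer-name placeholders, and the path is a stand-in):

```
# show short hash, author and committer for the commits touching my fix
git log --pretty=format:'%h  author:%an  committer:%cn' -- path/to/my/fix
```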
I've checked and my code exists and was applied exactly as I wrote it (including the comment). I verified that both the committer and author have been changed. There was one additional change also added at the same time as my changes. It's a single line, which would affect the patch as well as other code before it, i.e. the one-line addition is not related to the bug I was fixing. **Update** It seems the answer was that the author maintains a development branch and does not want to merge from his master branch into it. He re-authored my commit to avoid a merge. I wasn't concerned with the original branch b/c git's plenty powerful to cherry-pick, rebase, and merge commits around as needed. Is this typical on github? Should I be contacting the maintainer of a project to ask which branch to apply patches to?"} {"_id": "11975", "title": "I want to start using TDD. Any tips for a beginner?", "text": "I have never used an automated test mechanism in any of my projects and I feel I'm missing a lot. I want to improve myself, so I have to start tackling some issues I've been neglecting, like this, and trying Git instead of being stuck on SVN. What's a good way to learn TDD? I'll probably be using Eclipse to program in Java. I've heard of JUnit, but I don't know if there's anything else I should consider."} {"_id": "58667", "title": "I'm scared for my technical phone interview for an internship!", "text": "[EDIT 2.0] Hello everyone. This is my second phone interview for a development internship. My very first one was okay, but I didn't get my dream internship. Now, I'm facing fears about this upcoming interview. My fears include the following: 1. I'm 19 years old. The thought of 2 lead developers interviewing me makes me think that I'll know so little of what they'd want me to know. Like they will expect so much. 2. I'm a junior having these panic attacks that I did not get for the other internship. I have a little voice saying \"You didn't get the other one. What makes you think you'll get this one?\". 3. I'm scared that I'll freeze up, forget everything I know, and stutter like an idiot. I'm still traumatized by the last one, because I really really wanted that internship, and I even studied very hard for it. When I was in the interview, I was so nervous I couldn't think clearly. As a result, I didn't do as well as I know I could have. The minute I hung up, I even thought of a better solution to the interview question! Any tips for a soon-to-be intern (hopefully!)? Thank you! P.S. I'm preparing by using this guide for phone interviews."} {"_id": "124436", "title": "Visual Studio c++ Windows forms?", "text": "I've been slacking off with learning how to program because I'm at my wits' end trying to figure out the next step. I want to be able to create forms applications just to test random things and give myself the freedom I feel I have with JavaScript, which is why I learned it and am as comfortable with it as I am. I have no clue, however, what to do in Visual C++. I'm staring at an ugly, gray, blank form right now and have no clue what to click or what to write. I don't know how to find any tutorials either, as it's quite a specific request (c++, visual basic, windows forms). Does anyone have any advice for me? Thanks"} {"_id": "124434", "title": "What is ethical/unethical while seeking help on the web with programming assignments?", "text": "I have used the web and Stack Overflow extensively during the past month or so in creating my final project for my C# class.
I have used so much code that I didn't write myself that I feel I am being unethical by not giving proper credit to the people who helped me, or the websites that have provided excellent examples. Is it unethical to publish work which was created by me, even though its hardest problems were solved by other people? Should I credit these people for helping me with my assignment? Or the web sites which provided examples?"} {"_id": "175411", "title": "Programming Interview : How to debug a program?", "text": "I was recently asked the following question in an interview: > How do you debug a C++ program? I started by explaining that programs may have syntax and semantic errors. The compiler reports syntax errors, which can be corrected. For semantic errors, various debuggers are available. I specifically talked about gdb, which is command line, and Visual Studio IDE's debugger, which has a GUI, and their common commands. I also talked about debug and release versions of code, how assertions should be used for debug builds, how exceptions help with automatic cleanup and putting the program in a valid state, and how logging can be useful (e.g. using std::clog). I want to know if this answer is complete or not. Also, I want to hear how other people would go about answering this question in a structured manner? Thanks."} {"_id": "175417", "title": "Automatic static analysis vs White box testing", "text": "Many sources note that automatic static code analysis includes data flow and control flow. But these two are included in white box testing as well. Is the difference in the automation, i.e. that in automatic static analysis everything is done by the tools, while in white box testing a person creates the data to exercise the possible paths?"} {"_id": "232030", "title": "How do I stop designing and start architecting this project as suggested by my lead?", "text": "I'm a junior developer (~3 years' exp.) and at my job we're in the process of architecting a new system. My lead developer will be the principal architect; however, he's challenged me to try architecting the system myself (in parallel). Over the course of a few iterations of brainstorming ideas and proposing what I saw as architecture suggestions, my lead has given me the feedback that most of what I've been doing was \"designing\" and not \"architecting\". He described the difference as architecture being implementation-agnostic whereas a design is the description of an implementation. He said I need to take off my designer hat and put on my architect hat. He gave me a little bit of advice on how to do so, but I would like to ask you as well: **How do I get out of software designer mode and start thinking more like an architect?** * * * Here are some **examples** of \"designs\" I came up with that weren't seen as relevant to the architecture by my lead: 1. I came up with an algorithm for loading and unloading resources from our system and my lead said that algorithms are categorically not architecture. 2. I came up with a set of events the system should be raising and in what order it should raise them, but this too didn't seem to cut it as architecture. I seem to be getting caught up in the details and not stepping back far enough. I find that even when I come up with something that is at an architecture level, I often got there by trying out various implementations, mucking around in the details, and then generalizing and abstracting.
When I described this to my lead, he said that I was taking the wrong approach: I needed to be thinking \"top down\" and not \"bottom up\". * * * Here are some more **specific details about the project** : * The project we're architecting is a web application. * I'm estimating around 10-100 thousand lines of code. * We're a startup. Our engineering team is about 3-5 people. * The closest thing I could compare our application to is a lightweight CMS. It has similar complexity and deals largely with component loading and unloading, layout management, and plug-in style modules. * The application is ajax-y. The user downloads the client once and then requests data as it needs it from the server. * We will be using the MVC pattern. * The application will have authentication. * We aren't very concerned about old browser support (whew!), so we're looking to leverage the latest and greatest that is out there and will be coming out. (HTML5, CSS3, WebGL?, Media Source Extensions, and more!) * * * Here are some **goals of the project** : * The application needs to scale. In the near term our users will be on the order of hundreds to thousands, but we're planning for tens of thousands to millions and beyond. * We hope the application will be around forever. This isn't a temporary solution. (Actually we already have a temporary solution, and what we're architecting is the long-term replacement for what we have). * The application should be secure as it may have contact with sensitive personal information. * The application needs to be stable. (Ideally, it'd be stable around the level of gmail but it doesn't need to be at the extreme of a Mars rover.)"} {"_id": "164976", "title": "Dealing with \"I-am-cool-and-you-are-dumb\" manager", "text": "I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there except for 1 guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much) but things get tricky when he is your manager. In general I am all okay, but there are times when I feel I am not being treated fairly: * He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say \"I am not sure if I can trust you with this feature\". The details of this specific case are these: I was working on this feature, and I was already a couple of hours over my normal working hours, and then I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all the changes to the central repo (in case my hard drive crashes). So I push the change, and the ticket is marked as \"to be tested\". Next day I come in, he sits next to me, starts complaining, and says what I posted above. I really didn't know what to say; I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen. * He interrupts me in-between when I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work was super important and I am just wasting his time. He asks me to accumulate all questions and then ask him all at once, which is not always possible, as you need a clarification before you can continue on a feature implementation.
And when I am coding, he talks on the phone with his customers next to me (when he could go to the meeting room with his laptop) and doesn't care. * He made me switch to a whole new IDE (from Netbeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in Netbeans as well!). I didn't make a big deal out of it as I am equally comfortable working with this new IDE, but I couldn't get the science behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name to red in places where it is used. I told him that I do not have a problem since I always search for method usage in the project and make sure it's updated. IDEs even have refactoring features for exactly that, but... * I recently implemented a feature for a project, and I was happy about it, and considering him a senior, I asked him for his comments about the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said \"ummm, your program will crash if JS is disabled\" - he was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that and then he said \"oh okay\". BUT, the funny thing is, a few days back, he implemented something and I objected with \"But that would not run if JS is disabled\" and his response was \"We don't have to care about people who disable JS\" :-/ * Once he asked me to investigate if there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was \"ah that's ugly, and hacky, not acceptable\" - and two days later, I see that feature implemented in the same way as I had suggested. The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful: I know what's hacky, and if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face to face meeting, but I worry that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the guy who thinks he can be wrong too). I am also considering talking to the other co-founder but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message, I really appreciate your help."} {"_id": "124439", "title": "What kind of interface should a double container offer?", "text": "I want to write a class which offers two sequences of elements to its users. The first one (let's call it \"primary\") is the main one and will be used 80% of the time. The second one (let's call it \"secondary\") is less important, but still needs to be in the same class. The question is: what interface should the class offer to its users? By looking at STL style, a class with a single sequence of elements should offer begin() and end() functions for traversal and functions like insert() and erase() for modifications. But how should my class offer the second sequence?
For now, I have two ideas: * Expose the two containers to the user (what about the Law of Demeter?) * Give the main container an STL interface and expose only the second one. Here is an example (using int as a placeholder element type). #include <vector> class A { public: std::vector<int>& primary(); std::vector<int>& secondary(); private: std::vector<int> m_primary; std::vector<int> m_secondary; }; class B { public: std::vector<int>::iterator begin(); std::vector<int>::iterator end(); std::vector<int>& secondary(); private: std::vector<int> m_primary; std::vector<int> m_secondary; }; // Classes implementation // ... int main() { // -------------------------------------------------- // Case 1 // -------------------------------------------------- A a; for(auto it = a.primary().begin(); it != a.primary().end(); ++it) { // ... } for(auto it = a.secondary().begin(); it != a.secondary().end(); ++it) { // ... } // -------------------------------------------------- // Case 2 // -------------------------------------------------- B b; for(auto it = b.begin(); it != b.end(); ++it) { // ... } for(auto it = b.secondary().begin(); it != b.secondary().end(); ++it) { // ... } } What is the more C++ish way to do that? Is one better than the other, or is there another solution? ## Context This problem came up in the context of an exercise in which I am writing a simple database access framework. Here are the classes involved in the question: * table * column * row * field The table class consists of a sequence of columns and another sequence of rows. The main use of the table is manipulating (access, add and remove) the rows. Deeper in the hierarchy, a row is made of columns and fields, so a user can ask for the value (field) corresponding to a given column or column name. Each time a column is added/modified/removed from the table, every row will need to be modified to reflect the change. I want the interface to be simple, extensible, and to combine well with existing code (like STL or Boost)."} {"_id": "237609", "title": "Why don't more languages have the ability to compare a value to more than one other value?", "text": "Consider the following: if(a == b or c) In most languages, this would need to be written as: if(a == b or a == c) which is slightly cumbersome and repeats information. I know my above sample syntax is slightly clunky, but I am sure there are better ways to convey the idea. Why don't more languages offer it? Are there performance or syntax issues?"} {"_id": "118923", "title": "What's the best way to learn image processing?", "text": "I'm a senior in college who hasn't done much image processing before (except for some basic image compression on smartphones). I'm starting a research project on machine learning next semester that would require some biomedical image processing. What's the best way to get up to speed with the basics of image processing in about two months? Or is this impractical? It's my impression that once I'm good with the basics, learning more from other resources will be easier."} {"_id": "237600", "title": "UML Diagram for Locations of Components", "text": "I am about to deploy a new project that will have different components being deployed on different servers and in different web sites/technologies/etc. I was just wondering what would be the best way to communicate this to the business?
I want to show that Service X will exist on server Y as a Windows Service and Service Z will exist on server Q as a .NET web service, etc."} {"_id": "237602", "title": "Enforcing coding standards: What are the trade-offs of different methods?", "text": "Our team has recently agreed on some very light coding standards, and that we need a means of enforcing them. We already have a mature Continuous Integration practice including frequent, small check-ins and pre-tested commits. We're considering two methods for implementing an enforcement mechanism: a pre-commit SVN hook, or a dedicated build configuration on the CI server. Here are the trade-offs we've considered so far: * An SVN hook ... * (-) requires SVN server admin privileges to set up, which we don't have. We can work with IT, but avoiding the overhead is nice. * (-) points to a script, which we'd like to keep under version control with the rest of the product. I don't immediately know how to point at the correct hook-script, taking into account branches, tags, and trunk work. * (-) requires more work for reporting. The script would need to handle both generating and publishing the report. * (+) gives quick feedback. * (+/-) provides hard enforcement: the code checks in, or it doesn't. Usually a good thing (it's called \"enforcement\" for a reason), but may give false positives. * The CI build configuration ... * (-) consumes time on both the CI server and an agent. Might be trivial, but does add up. * (-) introduces some extra latency, making feedback less immediate. * (+) easily handles standards that evolve over time, by pointing to the version of the script that exists in whichever branch we're working on. * (+) provides easier reporting. We already use the CI system for other reports. Adding a new one would be straightforward. * (+) configurable enforcement: easier to soften the requirements, if necessary. We could collect a list of violations to fix out-of-band, or provide warnings about standards we're adopting on a trial basis. What are the trade-offs of these different methods in enforcing coding standards here? **Related** : Should coding standards be enforced by the CI server? - Raises some good points, but doesn't consider SVN hooks."} {"_id": "13190", "title": "Is it possible to work with distrusting clients", "text": "One of our junior developers has been assigned a new client (we don't have the client yet, we're still working with him to see if we can meet his needs) and the junior developer said the client will hire us if we can do the work on his project without getting access to his server. I've had a direct conversation with the client, who turned out to have had his code stolen before by some offshore company that he outsourced to. This made me more sympathetic, but I still have mixed feelings about this. On one hand I want to prove to the client that we're not all bad apples. Also if we do a good job with him, we get a loyal client who'll hire us for all his projects. I haven't heard of this happening before but I guess it happens more often than we'd all like to admit. On the other hand I'm hesitant to accept working with him because deployment time is going to be a nightmare and nowhere in my career or education has anyone taught me how to work with clients like him. I (or the junior developer) would have to write a detailed description of exactly what to do with the source to deploy it and that is an annoying burden when I could deploy and test the whole thing in an hour myself.
As I said, I've never had to deal with this before (we're signing a non-disclosure but apparently so did the offshore company before us). We're not fully booked so it's not like I have an immediate replacement, but we're not begging for work either, and I wonder if working in such a restricted environment is worth the trouble. Another side is that the experience itself could be rewarding for us, but is it experience worth having, as in, what's even the likelihood of getting a similar client anytime soon? Are we even expected to comply with such clients? So since I don't have any first-hand experience with this and it definitely wasn't covered in school, how would those with longer experience working with clients deal with a distrusting client like this? Would you even accept the job?"} {"_id": "237604", "title": "Large MySQL Batch Inserts From PHP: Run insert from script, or save as SQL file?", "text": "We have a large dataset that's currently residing in many, many spreadsheets. I've been tasked with getting part of that data into our MySQL DB. When all is said and done, I will probably be inserting somewhere around 3 million rows. I intend to collect the inserts into batches of about a quarter-million or so (parsing through it with PHP). Initially, my plan was to just run/send those batch INSERTs straight from my script. More recently, though, I thought about actually writing out the INSERTs to _.sql_ files for later importing. I tried to google around on this a little ... didn't find much. Are there any real pros/cons one way or the other? If not, I'm inclined to stick with my original plan. Otherwise, I'm completely open to suggestions."} {"_id": "120956", "title": "How to build completely modular web applications", "text": "In the coming months we're going to begin a project where we take a system we've built for a client (v1) and rebuild it from scratch. Our goal with v2 is to make it modular, so that this specific client will have their own set of modules they use, while another client may use a different set of modules altogether. The trick here is that Company A might have a series of checkout and user modules that change how that system works. Company B might stick with the standard checkout procedure but customize how products are browsed. What are some good approaches to application architecture when you're building an application from scratch that you want to have a `Core` that's shared among all clients while still maintaining the flexibility for anything to be modified specifically for a client? I've seen CodeIgniter's hooks and don't think that's a good solution as we could end up with 250 hooks and it's still not flexible enough. What are some other solutions? Ideally we won't need to draw a line in the sand."} {"_id": "128735", "title": "Are there any frameworks available for drawing with CSS?", "text": "I realize this sounds silly. Let me explain: I just came across this page about using CSS elements to create shapes - square, circle, triangle, star, yin-yang (yin-yang!), etc. - using pseudo-elements. Having struggled with HTML5 Canvas drawing in the past, I immediately thought that some combination of JavaScript and CSS pseudo-elements should be able to draw just about anything. So I first googled \"CSS Fractals\". No dice. Then \"JS Fractals\". Some dice, but not what I was looking for. Am I crazy, or is there potential here? jQuery can add and remove elements from the DOM by the thousands all day long.
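(For anyone who hasn't seen the trick: the pure-CSS triangle is just borders on a zero-size element, e.g.

```css
.triangle {
    width: 0;
    height: 0;
    border-left: 50px solid transparent;   /* invisible left half */
    border-right: 50px solid transparent;  /* invisible right half */
    border-bottom: 100px solid teal;       /* the visible wedge */
}
```

so a script only has to stamp out styled elements, no Canvas required.)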
It seems to me that--given the existence of triangles via pure CSS--there is potential for a nice drawing mechanism (2D and 3D) sans Canvas. Any thoughts, frameworks, tutorials, white papers, etc would be appreciated!"} {"_id": "128734", "title": "Self-Executing Anonymous Function vs Prototype", "text": "In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious what situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what is the best practice when writing maintainable JavaScript? I tend to prefer Self-Executing Anonymous Functions; however, I am curious what the community vote is on these techniques. **Prototype :** function obj() { } obj.prototype.test = function() { alert('Hello?'); }; var obj2 = new obj(); obj2.test(); **Self-Executing Anonymous Function :** //Self-Executing Anonymous Function (function( skillet, $, undefined ) { //Private Property var isHot = true; //Public Property skillet.ingredient = \"Bacon Strips\"; //Public Method skillet.fry = function() { var oliveOil; addItem( \"\\t\\n Butter \\n\\t\" ); addItem( oliveOil ); console.log( \"Frying \" + skillet.ingredient ); }; //Private Method function addItem( item ) { if ( item !== undefined ) { console.log( \"Adding \" + $.trim(item) ); } } }( window.skillet = window.skillet || {}, jQuery )); //Public Properties console.log( skillet.ingredient ); //Bacon Strips //Public Methods skillet.fry(); //Adding Butter & Frying Bacon Strips //Adding a Public Property skillet.quantity = \"12\"; console.log( skillet.quantity ); //12 //Adding New Functionality to the Skillet (function( skillet, $, undefined ) { //Private Property var amountOfGrease = \"1 Cup\"; //Public Method skillet.toString = function() { console.log( skillet.quantity + \" \" + skillet.ingredient + \" & \" + amountOfGrease + \" of Grease\" ); console.log( isHot ? \"Hot\" : \"Cold\" ); }; }( window.skillet = window.skillet || {}, jQuery )); //end of skillet definition try { //12 Bacon Strips & 1 Cup of Grease skillet.toString(); //Throws Exception } catch( e ) { console.log( e.message ); //isHot is not defined } I feel that I should mention that the Self-Executing Anonymous Function is the pattern used by the jQuery team. **Update** When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors/use of the `new` keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the `new` keyword. For more information please see my answer."} {"_id": "125520", "title": "Why is UNIX taught exclusively in universities' operating systems courses?", "text": "I'm not sure this question can be asked here, but nevertheless. Almost every operating systems or systems programming course is taught using UNIX only. I wonder what the reasons for this are. Why does Windows not take at least half of those courses?"} {"_id": "225223", "title": "Push vs Poll when large delay (hours) is acceptable", "text": "It seems common sense nowadays that polling is a bad practice and pushing is the way to go when developing mobile applications that need to constantly receive data from a remote server.
All major mobile shops provide their version of a push notification service: * Apple Push Notification Service * Google Cloud Messaging for Android * Microsoft Push Notification Service However, I'm wondering to what extent this assumption is valid. I mean, if I have an application that polls a remote server only a couple of times a day, and for which \"notifications\" don't need to be delivered instantly (a large delay is acceptable), then would it be a good decision to poll for data instead of pushing it? Thanks in advance!"} {"_id": "128733", "title": "Tags or specify revision?", "text": "At my company, we have your typical svn structure. Each project has branches, tags and trunk. repo -Project A * trunk * branches * tags -Project B * trunk * branches * tags -Project C * trunk * branches * tags -Project D * trunk * branches * tags -Application 1 * trunk * branches * tags So Project A is core functionality pretty much all of our other projects use. D may depend on C & B as well, and application 1 uses them all. Note that we have many more libraries and applications than I've illustrated above, but it's all done the same way. The assemblies generated by building project A are included in the library or application projects that need them as externals. We've got quite a few libraries like this. What we've been doing after a release is tagging the RC branches we created for each project used by the application, and these then become tags. Obviously this is a fair amount of work as we ensure that tags only point to other tags, and you start by creating the tag for Project A, making the other stuff point to that, etc., the idea being we want to branch from the tags should a patch release be required before our next feature release. Given the amount of effort that can be involved getting this all together, one proposal is to tag only the application's RC and have all the externals point to a specific revision. If we need to create a branch to fix one of the libraries, we branch from that revision and do the work then. I'm not against this, as I would like to not have to spend the time getting all the tags for all the projects set up, but I wanted to ask: can anyone think of any pitfalls of this approach? Is there any reason to create these tags over just pinning to a specific revision?"} {"_id": "128732", "title": "Can we compare programming languages ergonomically?", "text": "For instance, would Python be a more ergonomic programming language since it doesn't force you to type curly braces, which require the AltGr key? Also, Python usually requires less code to achieve the same thing. Or am I biased towards Python, and PHP actually is an ergonomic and comfortable language despite forcing the programmer to use the AltGr key? Isn't forcing the programmer to use the AltGr key rather unergonomic?"} {"_id": "85049", "title": "c++ makefile importance and cross platform", "text": "I'm new to c++ and I have a few questions regarding makefiles. 1) Is writing makefiles really important? I mean there are many IDEs which do this automatically. Also, do people in programming jobs write makefiles or do they use automation? 2) Should I learn GNU make or something else like cmake or other tools? Can anyone point out the pros and cons of these?"} {"_id": "255620", "title": "Is it fair to patent workarounds?", "text": "While developing a FAT32 driver for my bootloader, I thought to make it complete by adding the long file name support.
While looking around for a specification, I also found articles about Microsoft suing for infringement of their patents on something that basically is a workaround. Knowing about these events made me think: is it fair to patent things _that are_ workarounds?"} {"_id": "213960", "title": "Is it legal to distribute EXEs inside system32 folder with my product", "text": "I'm developing a product related to printing. To make it work properly in all Windows versions I need to include a few exe (lpr.exe) files from the system32 folder. So I need to know whether it is legal to distribute these exe files with my product."} {"_id": "172965", "title": "Picking a code review tool", "text": "We are a startup looking to migrate from Fogbugz/Kiln to a new issue tracker/code review system. We are very happy with Jira, especially the configurability, but we are undecided on a code review tool. We have been trialing Bitbucket, but it doesn't fit our workflow well. Here are the problems we have identified with BB: 1. Comments can be hard to find: * when commenting on code not visible in the diff * when code that is commented on is later changed * viewing the full file doesn't include comments (also doesn't show changes) * Viewing comments on individual commits can be a pain 2. We have the implementer merge the diff and close the issue, whereas pull requests are more suited to the open source model where someone with commit rights merges 3. We would like to automate creation of the code review (either from Jira or a command line tool) 4. No syntax highlighting 5. Once the pull request exceeds a certain size, BB won't show the whole thing and you have to view individual commits 6. Linking BB pull requests to Jira issues is a bit janky: we have a pull request URL field on Jira, but this doesn't work when there are changes in multiple repositories Does anyone have any good suggestions given the above? We are tight on budget, and Jira integration is a big plus. We also have multiple commits per issue, and would like to have the option of viewing individual commits in the review. It might also be worth noting that we have a separate reviewer and tester for each issue."} {"_id": "48421", "title": "Best practices for documenting ASP.NET code", "text": "Any preferences for Asp.Net programmers on how to document their code? I read XML with Sandcastle is a good way to go. What do you use?"} {"_id": "64683", "title": "What is this domain of study?", "text": "Suppose I have a situation where I'm designing a website for a shoe reseller. They have different brands and kinds of shoes and of course, they want a really good search function. So there are different properties that shoes can have. They can have exclusive properties, such as size, width, gender, and children's/adults'. Or they can have non-exclusive properties such as color (there could be two or more colors on a shoe). Some categories might conflict with certain others, such as 'dress' and 'casual' (a shoe cannot be both a dress shoe and a sneaker (ignoring \"comfort\" dress shoes for this example)), whereas they don't conflict with yet others, such as 'dress' and 'boot' (a shoe can be a dress boot). The exclusive properties are easy to model, but how about potentially conflicting properties? Would this be a problem for set theory? What would this kind of applied computer science be called, in general? Data modelling, or something more specific?
I want to get into the more abstract philosophical principles, such as exclusive and non-exclusive properties, and see how those principles are implemented in code, data structures, and database schemas. A good example of what I'm talking about would be the modified preorder tree traversal algorithm. It's a great way to make a nested hierarchical categorization system. So you have a real-life organizational problem: categories, and then you have a data structure that models that problem. Where can I learn more about this type of stuff?"} {"_id": "7325", "title": "Logical Operators", "text": "A typical curly brace programming language has two types of AND and OR: logical and bitwise. `&&` and `||` for logical ops and `&` and `|` for bitwise ops. Logical ops are more commonly used than bitwise ops, so why are logical ops longer to type? Do you think they should be switched?"} {"_id": "10084", "title": "Any other guides out there like Why's Guide to Ruby and Learn You Haskell?", "text": "I was looking for some guides to start programming with. I'm a college student and I'm learning Ruby as my first language (Side-question, is that something good to start with?). I found _Why's guide to Ruby and I also found Learn You Haskell (which is of course not Ruby) and I was curious if there were any more guides/books like these, free and fun to read? Just for future reads and to know about. Will reading a book or guide actually teach me programming, or do I just have to hop from one book to another until I understand the concepts? Because I feel like most books focus on theoretical stuff, and I don't actually pick up much programming from them. Are there any \"just do it\" ways of learning programming? Like do I just sit here and start writing bad code until I get better or do I go out there and find some code and edit it? Because ultimately I want to write my own code but I want to see some good examples."} {"_id": "78675", "title": "Math underpinnings of software testing?", "text": "In my software engineering course, we recently (briefly) covered testing. Over the course of the discussion some topics came up that made me wonder about the math underlying software testing (such as finer vs. coarser test sets, using equivalence classes to partition infinite input spaces into \"testable\" sets, etc). Are there any resources available exploring the math behind software testing?"} {"_id": "251445", "title": "docker-izing a classical db-based webapp - single or multiple containers?", "text": "I have a classic Java webapp. It is composed of a database (PostgreSQL), a servlet container (Tomcat) and my code (deployed on Tomcat as a *.war file). I want to package/deploy it using Docker (mostly for testing for now), but I'm unsure what would be the best way to \"map\" it. My initial idea was to have an app-in-a-box - define a container that has Java, Postgres and Tomcat on it, exposing just the HTTP port. Further reading of the Docker docs shows that this, although possible (install and run supervisord as the single foreground process, have it start both Postgres and Tomcat), is probably not the intended usage. Going by the spirit of the tutorials I should probably create a container for Postgres, another for Tomcat, and a data-container to hold the application code (my *.war) and database files. This would mean 3+ containers (should the db files and *.war share the same data container?) What's the common practice here? Since I have no previous experience with Docker, what pitfalls can I expect from each approach?
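To make the multi-container option concrete, here is the kind of thing I have in mind (a sketch with the plain docker CLI; container names and volume paths are made up):

```
# data-only container holding the db files and the deployed .war
docker create --name appdata \
    -v /var/lib/postgresql/data \
    -v /usr/local/tomcat/webapps \
    busybox /bin/true

# database container, using the volumes of the data container
docker run -d --name db --volumes-from appdata postgres

# servlet container, linked to the database, exposing just HTTP
docker run -d --name web --volumes-from appdata --link db:db -p 8080:8080 tomcat
```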
Is there some other approach I'm missing?"} {"_id": "224831", "title": "In PHP, should I delete objects immediately after use?", "text": "I've read in _PHP Advanced and Object Oriented Programming_ by Larry Ullman that it is good programming practice to delete objects immediately after use, but no reason is given anywhere. I am a student web developer, and objects in web applications are deleted automatically as soon as the script finishes. So, there seems to be no good reason to delete them manually. So, my question is: what are the good reasons to delete unnecessary objects after use, other than blindly following a convention?"} {"_id": "164359", "title": "How come verification does not include actual testing?", "text": "Having read a lot about this topic --- such as on this Software Testing Fundamentals site on verification and validation and Software Testing and Quality Assurance: Theory and Practice by Naik and Tripathy --- I still do not get it. Verification should prove that you are building the product right, while validation proves that you built the right product. But only static techniques (code reviews, requirements checks...) are mentioned as being verification methods. How can you say if it's implemented correctly if you do not test it? It is said that verification ensures that the product meets specified requirements. Again, if the function is specified to work somehow, only by testing can I say that it does. Could anyone explain this to me please?"} {"_id": "202540", "title": "Is there industry demand for developers who have no GUI experience?", "text": "Is there still demand in the industry for developers who create software without GUIs? Are such jobs still in demand? I only ask because I write a lot of software for myself in C. I mainly use FreeBSD without a GUI. My software is for data mining, automation and marketing purposes most of the time, as this is the field I work in. I find that a GUI is not needed and I feel comfortable working within a console. I've never worked for a company as a programmer, but in the industry do you have dedicated programmers who work exclusively on the GUIs and others who write the logic?"} {"_id": "191847", "title": "Should lookup tables enumerating strings have an integer primary key?", "text": "When I learned relational databases, the prof said that one would \"almost always\" want an artificial int as the primary key in a table, but did not specify what the exceptions are. At some time I stopped using them for junction tables, and never had a problem. Now I am making a database with a lot of lookup tables, and wonder whether this is a case where leaving artificial keys out wouldn't make for a cleaner design and simpler programming. A toy example: assume that this is a mockup of the UI I want to achieve. ![example for a UI view](http://i.stack.imgur.com/wkZCW.png) The design option with artificial IDs would be (Type is a foreign key): LiteraryWork Title Type Winnie The Pooh 1 The Nightingale and the Rose 2 Snowwhite 2 LiteraryWorkType ID TypeName 1 Novel 2 Fairy Tale And the option without them uses the Type name itself as the key (again, the column Type is properly declared as a foreign key): LiteraryWork Title Type Winnie The Pooh Novel The Nightingale and the Rose Fairy Tale Snowwhite Fairy Tale LiteraryWorkType TypeName Novel Fairy Tale I tend towards using the second option, because I would need one less join when showing data on the screen.
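In DDL terms, the second option is just this (a sketch; the types and lengths are arbitrary):

```sql
CREATE TABLE LiteraryWorkType (
    TypeName nvarchar(50) PRIMARY KEY
);

CREATE TABLE LiteraryWork (
    Title nvarchar(200) NOT NULL,
    Type  nvarchar(50)  NOT NULL
          REFERENCES LiteraryWorkType (TypeName)
          ON UPDATE CASCADE  -- a rename becomes one statement; the engine rewrites the referencing rows
);
```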
(I don't want to get rid of the lookup table entirely because I want to be able to restrict the values users may enter, for example by giving them a drop-down list bound to the lookup table). The only disadvantage I can think of is that, when a stakeholder says \"but I want my UI to say 'story', not 'fairy tale'\", I would have to update all data rows in the `LiteraryWork` table. I can live with this, as I don't expect it to happen often in my case. Does the first design have any other advantages I am missing? Which of the two options is considered best practice, and why? **Edit2** As I understand it, the existing answers are afraid that I am trying to break normalization, as in LiteraryWork Title Type LiteraryWorkTypeIsFiction Winnie The Pooh Novel Yes The Nightingale and the Rose Fairy Tale Yes Snowwhite Fairy Tale Yes To make it clear: the above is **not** what I am trying to do. Instead, if there really was more information pertaining to LiteraryWorkType, and I was using string IDs, I would record it this way: LiteraryWork Title Type Winnie The Pooh Novel The Nightingale and the Rose Fairy Tale Snowwhite Fairy Tale LiteraryWorkType TypeName IsFiction Novel Yes Fairy Tale Yes Conference paper No The only difference from the \"typical\" database design would be that the ID is a nvarchar, not an integer. Which certainly has its drawbacks in storage needed, as pointed out, but I don't see which normalization rule it is supposed to hurt. But this example aside, I am not trying to use string IDs when there actually is more information to be recorded about a LiteraryWorkType (so that LiteraryWorkType should be considered an entity in its own right). I am speaking about cases as simple as the toy example I gave at the beginning: the whole second table exists only because SQL has no \"enum\" type, and each data record in it consists of nothing but a single word, unique between records."} {"_id": "191840", "title": "How web-servers identify a client", "text": "I have seen some sites which say, for example, that multiple votes/views from the same computer will be neglected/penalized etc. For example, liking a Facebook page or YouTube video from the same computer (different accounts) will not increase its worth (according to my knowledge). How do these sites identify bogus votes? I just need a direction."} {"_id": "191849", "title": "Any performance advantage in copying the session to a variable?", "text": "I have a fair number of items in a session, and I am wondering if there are any advantages to copying the session variable to a normal php variable (in order to close the session file as quickly as possible) and then doing any calls from there. Something like: session_start(); $copiedSession = $_SESSION; session_write_close(); if (isset($copiedSession['foo'])){ ... } ... 1. I am looking for ways to test performance on this. 2. Does it make sense to do so? 3. Is there a break-even point where it helps / doesn't help?"} {"_id": "191848", "title": "Are there standard style guides for PHP?", "text": "I've been working in a very small team for PHP development and we've been using an ad-hoc, \"Invented Here\" style guide. Now that our team and codebase is growing, we would like to use a more broadly-used style guide that would be, if not an official standard, a de facto standard used in many PHP communities. Does such a thing exist as it exists in the C/C++/Java/Javascript/etc.
circles?"} {"_id": "178262", "title": "Why Open-Source Code?", "text": "What reasons do companies have for open-sourcing libraries and applications? Doing this may allow a developer to better understand the code, but could doing this allow people to find and exploit vulnerabilities in the library or application?"} {"_id": "252779", "title": "Why would I use ElasticSearch if I already use a graph database?", "text": "I don't find any deep explanation on the web about a comparison between ElasticSearch and graph databases. Both are optimized to traverse data. ElasticSearch seems to be optimized for analytics. However, Neo4j is also based on Lucene to manage indexes and some fulltext features. Why would I use ElasticSearch if I already use a graph database? In my case, I'm using Neo4j to build a social network. What real benefit may ElasticSearch bring? **UPDATE ----------** I've just found this paragraph: > There are myriad cases in which elasticsearch is useful. Some use cases more > clearly call for it than others. Listed below are some tasks for which > elasticsearch is particularly well suited. > > Searching a large number of product descriptions for the best match for a > specific phrase (say \u201cchef\u2019s knife\u201d) and returning the best results Given > the previous example, breaking down the various departments where \u201cchef\u2019s > knife\u201d appears (see Faceting later in this book) Searching text for words > that sound like \u201cseason\u201d Auto-completing a search box based on partially > typed words based on previously issued searches while accounting for > mis-spellings Storing a large quantity of semi-structured (JSON) data in a > distributed fashion, with a specified level of redundancy across a cluster > of machines It should be noted, however, that while elasticsearch is great > at solving the aforementioned problems, it\u2019s not the best choice for others. > It\u2019s especially bad at solving problems for which relational databases are > optimized. Problems such as those listed below. > > Calculating how many items are left in the inventory Figuring out the sum of > all line-items on all the invoices sent out in a given month Executing two > operations transactionally with rollback support Creating records that are > guaranteed to be unique across multiple given terms, for instance a phone > number and extension Elasticsearch is generally fantastic at providing > approximate answers from data, such as scoring the results by quality. While > elasticsearch can perform exact matching and statistical calculations, its > primary task of search is an inherently approximate task. Finding > approximate answers is a property that separates elasticsearch from more > traditional databases. That being said, traditional relational databases > excel at precision and data integrity, for which elasticsearch and Lucene > have few provisions. Can I assert that if I don't need approximate answers, then ElasticSearch would be useless compared to an already used graph database?"} {"_id": "121608", "title": "How and where do I publish my open standard?", "text": "I have been writing an open standard for a protocol I am developing. As far as writing is concerned, I have it under control. But the main question is where I can publish such a document. I have been searching on the internet and didn't find any site where I could upload a TXT or PDF file.
I have no intention of keeping secrets and I would like people to read this and, if possible, make the protocol better."} {"_id": "176858", "title": "Is there something special about the number 65535?", "text": "2¹⁶-1 & 2⁵ = 2⁵ (or? obviously?) A developer asked me today what is bitwise 65535 & 32, i.e. 2¹⁶-1 & 2⁵ = ? I thought at first spontaneously 32 but it seemed too easy, whereupon I thought for several minutes and then answered 32. 32 seems to have been the correct answer, but how? 65535 = 2¹⁶-1 = 1111111111111111 (but it doesn't seem right, since this binary number of all ones should be -1(?)), 32 = 100000, but I could not convert that in my head, whereupon I anyway answered 32 since I had to answer something. Is the answer 32 in fact trivial? Is, in the same way, 2¹⁶-1 & 2⁵-1 = 31? Why did the developer ask me about exactly 65535? In binary, what I was asked to evaluate was 1111111111111111 & 100000, but I don't understand why 1111111111111111 is not -1. Shouldn't it be -1? Is 65535 a number that gives overflow and how do I know that?"} {"_id": "136070", "title": "What are the advantages of separating 'result' from 'status'", "text": "Let's say you have some automated processes that generally go through the following states: scheduled - initiated - validating - executing - completed On top of that, these processes can prematurely end because of an error or explicit user cancellation. My first impulse is to simply add _error_ and _cancelled_ to the list of possible status values, but I was wondering about the (conceptual) advantages of separating _result_ from _status_ (even though it seems to me that one might argue that error and cancelled are also simply different states than the _completed_ state)."} {"_id": "4183", "title": "How much of your time at work is actually devoted to working?", "text": "A friend of mine told me that at a **very** big company, the programmers were told they are expected to actually work about 60% of the time. **Added:** She meant that 60% of the time they are at work, they are supposed to work; the rest is spent on surfing the web, playing pool or ping pong or whatever, chatting, etc. I think I'm at about 70%. How much do you?"} {"_id": "119329", "title": "Aging vs. Coding Skills", "text": "A little background, since it can be part of my point of view. I'm a C#/Java programmer, 23 years old, coding since I was 18. I started studying C and working with Cobol, and after 1 year I quickly moved to C#/Java Web Development, and have worked with it in about 3/4 companies. (I've just moved again.) In my (brief) professional career I encountered some older programmers, and every time it was very hard to work with them, since I was a way better programmer than they were. And it is not just about language skills; some of them had serious problems understanding basic logic. Now I wonder how these programmers get jobs on the market, since (I imagine) they have more expenses, and thus have to make more money, and are really counter-productive. In these examples, other project members have to constantly stop to help them out. Every time, they eventually quit... So I wonder... 1. Can the aging process slow down the learning rate and logical thinking? 2. Does the programmer have to, or at least should, move to a management area before getting old? **Please, my intention is not to be disrespectful with older persons.
I am fully aware that this is NOT the case for all older programmers; I often see very good older programmers around on the net, I just never met them up close.**"} {"_id": "119326", "title": "Is it possible to effectively develop PHP applications on Windows that will be deployed on servers running Linux?", "text": "Is it fine to code PHP on Windows and host it later on a server running Linux? Can there be any problems in the migration of such a project? I would think that there really can't be any problems, especially since I am a beginner in PHP and I won't use any of the advanced functions that may be OS-specific. However, I would like to make sure since I really don't like Linux at all."} {"_id": "119325", "title": "Recommend Stanford Online Database Class?", "text": "I am taking the Stanford Online Machine Learning class which is outstanding. http://ml-class.org Is anyone taking the Stanford Online Database class: http://www.db-class.org? Does it seem useful if you have a few years of fairly simple database experience, but no formal education in the area? Or is it mostly academic and not relevant to developing and maintaining databases in the \"real world\"?"} {"_id": "255523", "title": "Do else blocks increase code complexity?", "text": "Here is a **_very_ simplified example**. This isn't necessarily a language-specific question, and **I ask that you ignore the many other ways the function can be written, and changes that can be made to it.** Color is of a unique type: string CanLeaveWithoutUmbrella() { if(sky.Color.Equals(Color.Blue)) { return \"Yes you can\"; } else { return \"No you can't\"; } } A lot of people I've met, ReSharper, and this guy (whose comment reminded me I've been looking to ask this for a while) would recommend refactoring the code to remove the `else` block, leaving this: (I can't recall what the majority have said; I might not have asked this otherwise) string CanLeaveWithoutUmbrella() { if(sky.Color.Equals(Color.Blue)) { return \"Yes you can\"; } return \"No you can't\"; } **Question:** Is there an increase in complexity introduced by not including the `else` block? I'm under the impression the `else` more directly states intent, by stating the fact that the code in both blocks is directly related. Additionally, I find that I can prevent subtle mistakes in logic, especially after modifications to the code at a later date. Take this variation of my simplified example (ignoring the `or` operator, since this is a purposely simplified example): bool CanLeaveWithoutUmbrella() { if(sky.Color != Color.Blue) { return false; } return true; } Someone can now add a new `if` block based on a condition after the first example without immediately recognizing that the first condition is placing a constraint on their own condition. If an `else` block were present, whoever added the new condition would be forced to move the contents of the `else` block (and if somehow they gloss over it, heuristics will show the code is unreachable, which they do not in the case of one `if` constraining another). Of course there are other ways the specific example should be defined anyway, all of which prevent that situation, but it's just an example. The length of the example I gave may skew the visual aspect of this, so assume that the space taken up to the brackets is relatively insignificant to the rest of the method.
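To make that last pitfall concrete, here is a rough PHP sketch of the kind of later edit I have in mind (the Color class and the isVeryWindy() check are invented purely for illustration):

function canLeaveWithoutUmbrella($sky) {
    if ($sky->color !== Color::BLUE) {
        return false;
    }
    // A later contributor appends a new rule here, perhaps without
    // noticing that this point is only ever reached when the sky is blue:
    if ($sky->isVeryWindy()) {
        return false;
    }
    return true;
}

With an explicit else block, whoever adds the new check is forced to decide consciously which branch it belongs to.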
I forgot to mention a case in which I agree with the omission of an else block, and that is when using an `if` block to apply a constraint that **must be** logically satisfied for **all** following code, such as a null-check (or any other guard)."} {"_id": "50711", "title": "Video documentary on the open source culture?", "text": "I'm looking for some videos on these subjects: 1. A movie/documentary detailing the origin, history, and current state of open source culture 2. A movie/documentary on how open source software actually gets developed. What are the technical workflows? How do people create projects, recruit contributors, build a community, assign roles, track issues, assimilate newcomers... etc.? Could someone suggest a title?"} {"_id": "148959", "title": "Why aren't there other programming languages that compile to Python bytecode?", "text": "In Java, there are multiple languages that compile to Java bytecode and can run on the JVM -- Clojure, Groovy, and Scala being the main ones I can remember off the top of my head. However, Python also turns into bytecode (.pyc files) before being run by the Python interpreter. I might just be ignorant, but why aren't there any other programming languages that compile to Python bytecode? Is it just because nobody bothered to, or is there some kind of inherent restriction or barrier in place that makes doing so difficult?"} {"_id": "126070", "title": "How to build a .Net app which runs on desktop and as a Windows Service", "text": "Ok, I hope this is not too confusing (with my poor English). I want to build a small .Net 4.0 app which monitors several other applications on a Windows Server OR on a regular Windows PC. It will have a WPF GUI with a variety of graphical controls. The app will be used in the following scenarios: 1. If installed on a PC it should run as a “normal” single Windows desktop app 2. If installed on a Server, it should run as a Windows Service. To use/manage the app it must have the same WPF GUI as in scenario 1, and the GUI should be run on the Server or on a remote PC At the moment I consider writing the application logic and connecting it to the WPF GUI using a self-hosted WCF Data Service IN BOTH SCENARIOS. Since I'm not a pro developer I suppose it's possible that I've missed something ;-) Will this work? Are there other/better solutions? Any answer or comment is highly appreciated."} {"_id": "148952", "title": "How much should iPhone or iPad app development cost?", "text": "I'm an iOS developer (freelancer) and I want to know how much it costs to build an app, and how I should estimate the price, like this app: http://www.finalcad.fr/en/index.html"} {"_id": "253250", "title": "Why do a lot of logging frameworks provide individual methods instead of an enum 'level'?", "text": "I see some of the more preferred logging frameworks like log4j, log4net etc. all use 'trace', 'debug' etc. methods. Why do they use these instead of methods that take something like an enum level?"} {"_id": "127389", "title": "Does it make sense to use Kanban if all steps are done by the same person?", "text": "In our team all of a task's steps are always done by the same developer. I.e. for task 1, person A does design, development, and testing; for task 2, person B does design, development, and testing, etc.
Does it make sense to use Kanban in this case?"} {"_id": "200334", "title": "How to unit test a class which is just an adapter that logs input and output to a third-party library?", "text": "I have the following (in C#, but the question could also apply to Java): public interface ILibraryAdapter { string Property1 { get; } string Method1(string param1); ... } public class ThirdPartyLibrary : ILibraryAdapter { private readonly ThirdPartyClass thirdPartyClass; private readonly ILog log; public ThirdPartyLibrary(ThirdPartyClass thirdPartyClass, ILog log) { this.thirdPartyClass = thirdPartyClass; this.log = log; } public string Property1 { get { log.Trace(\"ThirdPartyClass.get_Property1()\"); var result = thirdPartyClass.Property1; log.Trace(string.Format(\"ThirdPartyClass.get_Property1() returned {0}\", result)); return result; } } public string Method1(string param1) { log.Trace(string.Format(\"ThirdPartyClass.Method1({0})\", param1)); var result = thirdPartyClass.Method1(param1); log.Trace(string.Format(\"ThirdPartyClass.Method1({0}) returned {1}\", param1, result)); return result; } ... } where the `...` represents more properties and methods being wrapped and logged (about two dozen total). The separate calls to the logger in each method are part of the requirements. How should I unit test this class? **Note:** The names of the properties and methods of the third-party class do not always match the names of the properties and methods of ILibraryAdapter."} {"_id": "129950", "title": "How to avoid \"DO YOU HAZ TEH CODEZ\" situations?", "text": "I have a strange situation at work, where a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant request for trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that these requests are generated by lack of competence, more than by laziness. In fact, he often asks things pretending to know the answer, since when I solve the problem he usually says things like \"Sure\", \"Yes, that's what I thought\", giving me the impression that my answer isn't worth it. How can I solve this embarrassing situation? Should I show his lack of knowledge more explicitly in front of other colleagues (by saying things like: \"do it yourself if you can, please\") or continue giving him what he wants? I think that he should aggregate all his questions into one, so that I can give him a portion of my time and he can work all by himself on his things. There is no hierarchy in the team; I must say we both have a similar seniority of five years, more or less. For the same reason I believe I cannot report to management, since trivial questions are often ignored. I discussed this with two other members and they agree with me: in fact he often asks things, cycling through colleagues."} {"_id": "127382", "title": "Pros and cons of using HTML5 vs frameworks for cross-platform mobile development", "text": "I'm going to start developing an app for my job, and I would like to know what is the best approach. The app will be a basic tracker for some application that we have, and since our employers use both Android and Apple phones I don't want to develop the app twice just so it could work on both platforms. Is it worth developing an HTML5 app that could work on all the devices, or should I use one of the frameworks like Appcelerator?
Are there strong pros and cons for either approach?"} {"_id": "245793", "title": "Is there any benefit to just artificially reducing WIP in a Kanban system?", "text": "So imagine this scenario: There is only 1 developer, who also is the bottleneck (testers, etc. are busy on other contracts). There is a nontrivial backlog of tickets, most of which are blocked for not being reproducible, having unintelligible requirements, or being unsolvable (with current ideas, staff and technologies). Is there any particular benefit to creating a system that parcels out the tickets to the sole developer one or two at a time? The system so far has immediately created slack on account of delays in assigning new tickets after current tickets are worked. When blocked tickets are assigned, they now sit on the developer's desk blocked until they are moved back to unassigned tickets. Are there any benefits to this \"artificial low-WIP\" system, and how could/should it have been implemented?"} {"_id": "230086", "title": "Help With Dependency Injection", "text": "I am still very confused as to why and when to use Dependency Injection. If anyone could explain, maybe using the below example, that would be great; any other explanations would be appreciated. Let's say I am creating a web-app that will save movie reviews, written in C# with ASP.NET MVC 5. If I have the following **Model code**, namespace MovieReviewProject.Models { public class MovieReviews { [Key] public int ReviewID_int { get; set; } /// <summary>Submitter email address</summary> public string EmailAddress_str { get; set; } /// <summary>Movie name</summary> public string MovieName_str { get; set; } /// <summary>The review</summary> public string Review_str { get; set; } /// <summary>The submission date from</summary> public DateTime SubmissionDate_dt { get; set; } /// <summary>The movie rating</summary> public int Rating_int { get; set; } } } what would a class look like that provides the Controller with the List of all the reviews, adds an average for a movie, and more? I know that DI is mostly used to allow for easier unit testing, but other than that what are the perks to it? Is it worth going through old projects and making sure all the providers are using this principle?"} {"_id": "178486", "title": "What exactly does the condition in the MIT license imply?", "text": "To quote the license itself: > Copyright (C) [year] [copyright holders] > > Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: > > **The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.** I am not exactly sure what the bold part implies. Let's say that I'm creating some library, and I license it under the MIT license. Someone decides to fork that library and to create a closed-source, commercial version. According to the license, he should be free to do that. However, what does he additionally need to do under those terms? Credit me as the creator? I guess the \"above copyright notice\" refers to the \"Copyright (C) [...\" part, but wouldn't that list _me_ as the author of _his_ code (although I technically typed out the code)?
And wouldn't including the \"permission notice\" in what is now _his_ library practically license it under the same conditions that I licensed my own library under? Or am I interpreting this incorrectly? Does that refer to my obligations to include the copyright and the permission notice?"} {"_id": "216879", "title": "Fixed Sized Buffer or Variable Buffers with C# Sockets", "text": "I am busy designing a TCP Server class in C# that has events and allows the user of the class to define packets that the server can send and receive by registering a class that is derived from my \"GenericPacket\" class. My TCPListener uses Async methods such as .BeginReceive(..); My issue is that because I am using .BeginReceive(); I need to specify a buffer size when I call the function. This means I can't read the whole packet if one of my defined packets is too big. I have thought of creating a fixed-size header that gets read using .BeginRead(); and then reading the rest using Stream.Read(); but this will lead to the whole server having to wait for this operation to complete. I would like to know if anyone has come across this before and I would appreciate any suggestions."} {"_id": "200338", "title": "Getting Started with Data Collection and Analysis", "text": "I have a very, very limited understanding of SQL and PHP. I have a few projects I want to try and work on, which will require doing some web scraping for data collection and then storing, sorting through, and analyzing the data. I would like to also have this data available on my website so others can sort through it as well. My question is which direction I should go to learn the skills necessary for this. I am willing to put the time and effort in, but want to make sure I am going about it the right way. My first thought was to take an online Stanford Intro to Databases course, then some web scraping tutorials, and then some PHP tutorials. I then stumbled onto the software called \"Orange\", which is \"Open source data visualization and analysis for novice and experts. Data mining through visual programming or Python scripting. Components for machine learning. Add-ons for bioinformatics and text mining. Packed with features for data analytics.\" Would it be better to scrap the SQL and PHP learning and focus on learning this \"Orange\" program? Or any other programs? Any help would be appreciated. I am willing to work hard at this, but do not want to spend a lot of time learning SQL and PHP if I should focus elsewhere."} {"_id": "178488", "title": "LSP vs OCP / Liskov Substitution VS Open Close", "text": "I am trying to understand the SOLID principles of OOP and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). > the open/closed principle states \"software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification\". LSP in simple words states that any instance of `Foo` can be replaced with any instance of `Bar` which is derived from `Foo` and the program will work the very same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if `Bar`, derived from `Foo`, does not change anything in it but only extends it. That means that in a particular program LSP is true only when OCP is true, and OCP is true only if LSP is true. That means that they are equal. Correct me if I'm wrong. I really want to understand these ideas.
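To make my reading of the two principles concrete, here is a minimal PHP sketch of what I mean (all class and method names are invented for illustration):

class Foo {
    public function half($x) {
        return $x / 2;
    }
}

// Bar leaves Foo untouched and only adds behaviour.
class Bar extends Foo {
    public function quarter($x) {
        return $x / 4;
    }
}

// Baz also leaves Foo untouched, but it overrides half(), so substituting
// a Baz where a Foo is expected changes what the program computes.
class Baz extends Foo {
    public function half($x) {
        return $x / 2 + 1;
    }
}

As I understand it, Bar only extends Foo, while Baz leaves Foo unmodified yet does not behave the same when substituted for it; this is exactly the kind of case I am trying to fit into my reasoning above.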
Many thanks for an answer."} {"_id": "151157", "title": "What is the equivalent word for \"compile\" in an interpreted language?", "text": "(I was encouraged to ask this question here.) In C, we say: > GCC _compiles_ `foo.c`. For interpreters (such as Lua), what is the equivalent verb? > The Lua interpreter **___ __ ___** `foo.lua`. When I write instructions for users of my Lua script, I often say: > Run the interpreter on `foo.lua`. I think this can be said more succinctly: > _Interpret_ (or _Translate_) `foo.lua`. but that sounds awkward for some reason (perhaps because I'm unsure of its correctness). I can't really say _compile_ because users may confuse it with the usage of the Lua compiler when I actually mean the Lua interpreter."} {"_id": "126389", "title": "Beginners guide to developing optimization software", "text": "I am a novice in \"serious\" programming, i.e. applications that deal with real-life problems and software projects that go beyond school assignments. My interests include optimization, operations research, algorithms, and lately I discovered how much I do like software design/development/engineering. I have already developed some simple desktop applications for some \"famous\" problems like TSP using heuristic approaches, a VRP solver (in progress) and so on. While developing this kind of software I actually used basic concepts taught at school such as object-oriented analysis and design. But I found these courses rather elementary and quite boring (for my expectations). So I decided to go a little further and start developing \"real\" software (and this is where I realized how important and interesting software engineering/design is). Now, here's my issue: I can not find a \"study guide\" for developing software of this kind. Currently, there are numerous resources out there (books, websites, tutorials) on designing and developing complex IS, web applications, smartphone apps, but I can't find a book, for example, entitled \"optimization software development\". Definitely, someone could claim that \"design patterns apply to software in general\", but that's not my point. My point is that I could simply use my imagination for \"simple\" implementations, but what happens when my imagination can not go further? In other words, I'm looking for a guide/path to bridge the gap between: Mathematics - Algorithm Design - Software Engineering - Optimization - Software development"} {"_id": "28228", "title": "Boss solution vs Developer solution", "text": "The problem: When we were sending newsletters to customers, there was no way to confirm if the customer already received the mail. So the boss decided to implement this idea: Boss's Idea: Each time mail was being sent, do an INSERT in a db with the title of the newsletter being sent and the email address which is receiving it. To ensure that any email address does not receive the same email twice, do a SELECT in the table and find the title of the newsletter being sent: if (title of newsletter is found) { check to see if the email we are sending mail to is already present. if it is, do not send mail } else { send mail } MY idea: create a column called unique and mark it as UNIQUE. Each time mail was being sent, concatenate email + newsletter id and record it in the UNIQUE column.
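A minimal sketch of what I mean, using the old mysql_* functions (the table and column names are invented for illustration):

// Once, in the schema:
// CREATE TABLE sent_newsletters (dedup VARCHAR(255) NOT NULL, UNIQUE KEY (dedup));

$dedup = mysql_real_escape_string($email . '|' . $newsletterId);
// INSERT IGNORE affects 0 rows when the unique key already exists
mysql_query(\"INSERT IGNORE INTO sent_newsletters (dedup) VALUES ('$dedup')\");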
Then we do a \"mysql_affected_rows\" check to see if our INSERT was successful; if it was, we send the mail, else there is already a duplicate and no need to send it."} {"_id": "1380", "title": "How much Code Coverage is \"enough\"?", "text": "We are starting a push for code coverage here at my work, and it has got me to thinking.... How much code coverage is enough? When do you get to the point of diminishing returns on code coverage? What is the sweet spot between good coverage and not enough? Does it vary by the type of project you are making (i.e. WPF, WCF, Mobile, ASP.NET)? (These are C# classes we are writing.)"} {"_id": "245240", "title": "Formulate release notes consistently", "text": "In my project, I came to understand that writing release notes is both helpful and necessary. However, it is not required by my management and I keep these files virtually to myself. You could say I do it for self-organization. For each version of the internal prototyping software, I create a version number and update the RELEASE file with the changes that have occurred since the last version. Release notes are classified into lists of new features, bug fixes and remaining limitations (features to be implemented in the next version). As my management does not require this document from me, I keep writing release notes in a free manner. Now, I am not sure how to best formulate (I hope not to be sent to English@SE; I am interested not in grammar features, but in the experience of other developers here who _do_ fill in release notes for customers) features that have been implemented. At the moment I use different grammar which is inconsistent between notes. I want to gather experience from other developers. Do you formulate release notes: * starting with I/we (I/we have integrated/tested/updated some new feature), or * as unpersonalized sentences (some new feature has been integrated/tested/updated), or * as nouns: integration/test/update of some new feature, or * other? At the moment, I mix all of these styles, and by choosing one style, I want to be consistent with the industry standard."} {"_id": "95998", "title": "Do I Need To duplicate comments in Every File?", "text": "I have very similar code over multiple files in my project. If a client wanted some level of commented code, would I have to invest the time into duplicating the comments for every file? I think it would simply be a waste of time. What would be the best/easiest way to provide comments to a client? (Refer them to a readme to show one file with the comments?) It's along the lines of URLs to point to and BOOLs to change. **For Clarification: I don't want to, nor do I think it would be a good idea to, consolidate code to create a single place for settings. If I was building a desktop app or using PHP, I would create a config file. But this isn't a situation where that would be called for. In fact, it might actually be considered (from what I understand) a best practice. I totally get that it's a good idea to eliminate duplicate code, but I don't even think there's an efficient and simple way to eliminate the duplicate base code. My question pertains to how I should comment for clients/devs in this sort of situation.** Disclaimer: Next time, I'll start with comments rather than adding them in later."} {"_id": "245247", "title": "Quadtree with duplicates", "text": "I'm implementing a quadtree.
For those who don't know this data structure, I am including the following small description: > A Quadtree is a data structure and is in the Euclidean plane what an Octree is in a 3-dimensional space. A common use of quadtrees is spatial indexing. > > To summarize how they work, a quadtree is a collection (let's say of rectangles here) with a maximum capacity and an initial bounding box. When trying to insert an element into a quadtree which has reached its maximal capacity, the quadtree is subdivided into 4 quadtrees (a geometric representation of which will have a four times smaller area than the tree before insertion); each element is redistributed in the subtrees according to its position, i.e. the top-left bound when working with rectangles. > > So a quadtree is either a leaf with fewer elements than its capacity, or a tree with 4 quadtrees as children (usually north-west, north-east, south-west, south-east). My concern is that if you try to add duplicates, be it the same element several times or several different elements with the same position, quadtrees have a fundamental problem handling the edges. For instance, if you work with a quadtree with a capacity of 1 and the unit rectangle as the bounding box: [(0,0),(0,1),(1,1),(1,0)] And you try inserting twice a rectangle the upper-left bound of which is the origin: (or similarly if you try inserting it N+1 times in a quadtree with a capacity of N>1) quadtree->insert(0.0, 0.0, 0.1, 0.1) quadtree->insert(0.0, 0.0, 0.1, 0.1) The first insert will not be a problem: ![First insert](http://i.stack.imgur.com/p9NQa.png) But then the second insert will trigger a subdivision (because the capacity is 1): ![Second insert, first subdivision](http://i.stack.imgur.com/a0Gxx.png) Both rectangles are thus put in the same subtree. Then again, the two elements will arrive in the same quadtree and trigger a subdivision… ![Second insert, second subdivision](http://i.stack.imgur.com/laE9l.png) And so on, and so forth: the subdivision method will run indefinitely because (0, 0) will always be in the same subtree out of the four created, meaning an infinite recursion problem occurs. Is it possible to have a quadtree with duplicates? (If not, one may implement it as a `Set`.) How can we solve this problem without completely breaking the architecture of a quadtree?"} {"_id": "245246", "title": "Property value validations on POCO entities", "text": "Sorry in advance if this question is so trivial. ## The situation There is a `Customer` entity whose ID is limited to two letters (A to Z) in the database. Also, a user can enter the `ID` value from a Windows form. I think that the best option is that this form will validate (using the controller), with a regular expression like `^[a-zA-Z0-9]{2}$`, whether the value is valid. ## The question(s) * Should the `Customer` entity also do the validation when I set the value of the ID property? * Should this validation be outsourced if, for example, there is also similar validation in other properties of the entity? I think that the answer is that it depends on whether the property value is a requirement of the user or a design decision on the database, but I appreciate your knowledge and experience to guide me on the correct way. Thanks in advance."} {"_id": "220146", "title": "Why the Scala fascination with flatmap? (This doesn't seem to be the same for mapcat in the Clojure world)", "text": "In the Scala community - there is an apparent fascination with the FlatMap function.
Now I understand that FlatMap is significant because it is used for the bind part of a Monad (and that the Clojure community hasn't dived into Monads yet, with some wonderful exceptions). Now in the Clojure community - there is no corresponding cultural idiom, e.g. \"MapCat that S***\". My theory on the difference between the two communities and the reason for this difference is that the concurrency primitives in Clojure lend it towards solving problems on a single machine, in a single instance. (I.e. Clojure is good at concurrency.) Whereas in the Scala world, with the rise of the Actor model, Scala is a little more focused on solving multi-machine problems. This focus on multi-machine problems puts a higher focus on breaking problems down into their parts, and a greater focus on what can be broken down and scaled (e.g. Monoids). (Now I realise there is an STM in Scala, and that Actor models, Avout and Cascalog are wonderful exceptions to this - I'm making a generalization.) My question is, why the Scala fascination with flatmap? (I'm not trying to start a flamewar - I think both communities have benefited from each other's existence - I'm trying to understand a cultural behaviour)."} {"_id": "27909", "title": "How do you decide your side projects", "text": "At any given time, I usually have a bunch of ideas for weekend/side projects that I can work on. The ideas can generally be categorized into these: 1. Self Learning: Learning a new language/technology/framework 2. Work related: Learning/doing something that would help you at work 3. Money: Projects that (you think) can make some money 4. Fun/Utility projects These are just the rough categories that I can think of and there can be more/other ways of classification. My question is: based on your experience, what should drive the decision of what kind of project to work on? What parameters, apart from the type of project, should impact this decision (time, effort, money...)?"} {"_id": "25917", "title": "What are the common misuses of \"enum\" in C?", "text": "I have seen C code where people used enum heavily. But all it does is confuse others. In many places plain integers can do the same thing with less ambiguity. What are the common misuses of enum?"} {"_id": "66881", "title": "Is programming tourism a realistic possibility?", "text": "I'm going on vacation to Paris, France for 10 days. Actually, it's my girlfriend's wish to go there, but I'm not very interested in visiting, sightseeing, etc. Recently, I came up with an idea of trying to do something like programming tourism. :) I'd like to do something related to programming in a startup-like company. I do not want a salary or any kind of compensation. I want to get an overview of the process, social aspects, environment, and \"what it feels like\" to develop software in another country. I'm from Russia. I've been a software developer since 2003, and while I prefer C#, I'm ready to use anything Turing-complete. I have some MS certifications and am familiar with all .NETs since 1.1. Currently I'm finishing a PhD in CS. I'm interested in multidimensional indexing and I can turn any piece of data and code into an OLAP system, but it'd take too much time. :) What can I do? I have no more than one week, but I want a _totally complete project in a short amount of time_. 1. Implement some features in a well-tested project 2. Do a code review 3. Debug memory, performance and concurrency issues 4. Do unit testing So, about the questions: 1. Is it legal?
I'm ready to sign an NDA if necessary, and I'll have a tourist visa. 2. Is it possible? I'm sure that bureaucratic companies with lots of HR people and PMs will not allow such experiments, but small companies can afford it. I'm ready to guarantee support on my code after leaving for home. :) P.S. I still haven't started learning French; I hope it will not take too much time :) * * * P.P.S. 1. Yes, it's girlfriend-approved. 2. What's in it for me? It's fun. It's fun to see new systems and the people who created them. It's fun to complete meaningful things. Quickly. 3. What's in it for them? Features, debugging, reviewing or testing. If my short-term colleagues like this style of working I can invite them to make a similar trip to my company. :) I think in Russia it's even more exciting. :)"} {"_id": "118496", "title": "One stop shop for good coding practices and performance tips?", "text": "While this may be a very subjective question, I was wondering if there's a place (or many places) on the web where one can read up about good coding and performance tips for different languages and how they may compare with others? For example, in AS3 it's faster to multiply rather than to divide; is this the same for JS? What other tips are there to really make our code run lightning fast? And where are these tips?"} {"_id": "197877", "title": "Modern REPL for Haskell - is anybody working on it?", "text": "It's time Haskell had a modern REPL like Mathematica's (or better). Make each calculation run in a separate thread, so the user has control over each computation box's resources (ability to pause, play, cancel, replay, set memory/stack/time limits, etc.). Allow for graphics output as well (how?). Then detach these boxes from today's GHCi 1D canvas into 2D space, allow the boxes to be cloned (a clone receives the same source, waits for the user to press the \"play\" button...), their source code edited (allow for more than one line, too). Then allow for these boxes to be assembled into chains - or networks - if types fit (deduce new constraints, if any, as in usual type inference). Make the joined boxes into one function/module, automatically produce the resulting source file, etc., etc. Is this feasible? Can a community of experts self-organize to produce something like that, \"for great good\"?"} {"_id": "160503", "title": "Does custom created code for a client imply copyright ownership?", "text": "I know of a potential customer that has been paying for website development work on an hourly basis for several years by several independent contractors, but has never signed an agreement as to terms or ownership for it. They just get a bill and pay it. So, does this mean that there is implied ownership by the developer(s) who wrote the software and I can't modify the code without something in writing? Who owns the source code in this case, as there are two other developers that worked on it? By the way, I'm a developer in the USA, working in Missouri. I did find this useful link that confirmed some suspicions I had about copyright ownership of software. They also have a free IP (intellectual property) ebook download. No registration required either.
Disclaimer: I understand that any answers are not to be construed as legal advice per se, but I'm wondering if someone else has run into this issue and knows from firsthand experience."} {"_id": "160500", "title": "How to achieve a loosely coupled REST API but with a defined and well understood contract?", "text": "I am new to REST and am struggling to understand how one would properly design a REST system to both allow for loose coupling but at the same time allow a consumer of a REST API to understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? e.g. if it contains `JohnSmith` inside a couple of XML elements, how do I know that these refer to the concepts of \"first name\" and \"last name\"? Is it up to the person writing the REST API to define somewhere in documentation what each of the XML fields means? What if the producer of the API wants to change the implementation to use one element name instead of another? How do they do this and notify their consumers that this change occurred? Or do the consumers just encounter the error and then look at the payload and figure out on their own that it changed? I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an \"anti-pattern\". But I was planning to do this -- at least then I would have a statically typed API call that, if it changed, I would know about at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL? And how do I know what to do with the links that are returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically-typed or even SOAP/RPC to REST!"} {"_id": "66886", "title": "How do you handle the problem of abstraction when you learn a technology / language?", "text": "I find that when I am trying to learn, say, Python, I end up worrying about problems that are out of scope, like how Python implements this thing, etc. For example, say I am learning the Twisted framework; I end up thinking how Python can define things in such a manner and stuff - this leads me to worry mostly about the internals of the language instead of the problem at hand! However, I can take another approach where I accept the abstraction and say ok... TCP does this... I need not know more, and accept that TCP connections are handled when I give this command... and carry on with my work and then worry about these things later on. What exactly do you guys follow? I am perplexed to see many of them so good at both these things while I just seem to be having a hard time there :) Do you really sometimes learn topics with abstraction or do you go find the under-the-hood stuff as much as possible? Maybe you could say I am facing a problem of overdesign, which is a big problem, and I need some help to solve this :)"} {"_id": "152201", "title": "Sharing object between 2 classes", "text": "**edit**: I am thinking that dependency injection is the best approach. I am struggling to wrap my head around being able to share an object between two classes. I want to be able to create only one instance of the object, `commonlib`, in my `main` class and then have the classes `foo1` and `foo2` be able to mutually share the properties of the commonlib. `commonlib` is a 3rd party class which has a property `Queries` that will be **added** to in each child class of `bar`.
This is why it is vital that only one instance is created. I create two separate queries in `foo1` and `foo2`. This is my setup: abstract class bar{ //common methods } class foo1 extends bar{ //add query to commonlib } class foo2 extends bar{ //add query to commonlib } class main { public $commonlib; public function __construct(){ //PHP does not allow \"new\" in a property declaration, so the single //shared instance is created here $this->commonlib = new commonlib(); } public function start(){ //goal is to share one instance of $this->commonlib between foo1 and foo2 //so that they can both add to the properties of $this->commonlib (global //between the two) //now execute all of the queries after foo1 and foo2 add their query $this->commonlib->RunQueries(); } }"} {"_id": "216007", "title": "Where should the database and mail parameters be stored in a Symfony2 app?", "text": "In the default folder structure for a Symfony2 project the database and mail server credentials are stored in the `parameters.yml` file at `ProjectRoot/app/config/parameters.yml` with these default values: parameters: database_driver: pdo_mysql database_host: 127.0.0.1 database_port: null database_name: symfony database_user: root database_password: null mailer_transport: smtp mailer_host: 127.0.0.1 mailer_user: null mailer_password: null locale: en secret: ThisTokenIsNotSoSecretChangeIt During development we change these parameters to the development database and mail servers. This file is checked into the source code repository. The problem is when we want to deploy to the production server. We are thinking about automating the deployment process by checking out the project from git and deploying it to the production server. The thing is that our project manager has to manually update these parameters after each update. The production database and mail server parameters are confidential and only our project manager knows them. I need a way to automate this step, and a suggestion on where to store the production parameters until they are applied."} {"_id": "56213", "title": "Focus on Javascript or Jquery?", "text": "I am a student in college, and I notice that a lot of companies look for people who have experience with JavaScript. Does this include JavaScript's libraries, like jQuery? Or are they looking for JavaScript people only? It probably depends on the company, but what is the general advice for a student wanting to do some front-end work? Is JavaScript more powerful than jQuery? I know jQuery is a library and simplifies many tasks, but is there some reason why you would use JavaScript over jQuery?"} {"_id": "56215", "title": "Why on C++ you can have the method definition inside the header file when in C you cannot?", "text": "In C, you cannot have the function definition/implementation inside the header file. However, in C++ you can have a full method implementation inside the header file. Why is the behaviour different?"} {"_id": "116922", "title": "How to handle \"X\" data sets as input", "text": "I participated in a coding competition today, and I found that almost all of the command line input our programs needed to receive would start with an integer representing the number of data sets to follow. There were 6 different problems, and they all started this way. For example, one sample problem had: \"Input to this problem will begin with a line containing a single integer N (1 <= N <= 100) indicating the number of data sets. Each data set consists of the following components: * A line containing a single integer W that specifies the number of wormholes * A series of W lines containing.... * etc.
\" Pretty much all the competition problems had this format, with the first integer representing the amount of data sets to follow. My initial reaction (and the way I tried to solve the problem) was just using a vector of size N, where each element represented a data set. Trouble is, there are a whole bunch of things in these data sets. Using this approach often left me with a vector of vector of vectors (maybe an exaggeration but you get the idea) which was very hard to manage. Another idea was looping through the entire program N times, but this doesn't always seem that applicable. I realize this is a vague question, but that's because I'm looking for a general solution to this type of problem. What is the best approach to handling this type of input?"} {"_id": "110370", "title": "Is it true that having the Nexus S is much better as an Android developer?", "text": "Soon, I'll be starting Android development, as I know a bit of Java. I'll also be getting an Android phone. My two options are the Nexus S and the Atrix 4G. I've heard the Nexus series is ideal for developers - can someone tell me why? Why is getting the Atrix bad as a developer?"} {"_id": "110371", "title": "How do freelancer web developers manage web hosting for customers?", "text": "I have built a number of websites for friends, family, etc. and I have put them all on a single shared web hosting account. Now that they are built, I want to get out of business of supporting them and paying for them (my friends are reimbursing me but I am paying for the actual bill) so I was thinking of having them create their own hosting accounts and slowly migrating the sites over. It got me thinking how does any freelancer do this? Do they force their clients to setup their own hosting up front and let the programmer log into the customer account during development. What if there is a bug in the future and they need to go back in? I was curious to see what model most people use who build websites for others as it seems like a tricky situation."} {"_id": "114690", "title": "How is oData different from a REST service?", "text": "I am looking into writing a web service API and I am thinking of creating a REST service. What does OData means in this context? Can you please explain the difference between OData and REST?"} {"_id": "127215", "title": "HTML5 hype as native app replacement - reliable analysis and sources", "text": "I am asking this as an objective question and have no interest in inciting a flamewar. My point here is to gather some evidence to assist in decisionmaking and communicating with non-technical folks up the management chain. I'm on a UX team at a software company. We\u2019ve worked with WPF and Silverlight in the past, but some managers have bought into the hype and are considering moving everything into HTML5. Our concerns with pursuing HTML5, based on several months of experimentation with it, are these: * HTML5 provides a dramatically less evolved development, backend and UI model than WPF or Silverlight. It takes a long time to do anything and the end result is far less smooth. * Collaboration between designers and developers. One big WPF advantage was that the UX team could design the entire UI and hand it over to developers for remaining work. With web apps we are back to designing redlines/wireframes and throwing them over the wall, given the lack of good tools and the amount of coding required to get even basic animations to work. * Cross browser headaches. 
Our experience has been that browser support is very inconsistent and fallbacks are required for pretty much anything. * It is not clear how HTML5 is any different from previous HTML/JS/CSS. The new tags (section, nav, etc) don't provide any obvious benefit. We lose all binding, security, database functionality and the rich UI. * Styling, UI presentation. We can't style controls beyond very basic styling or needing to resort to hacks. * Typography is undependable compared to WPF/Silverlight. So my question is: has done any similar analyses for HTML5 vs a native app (and by this I don't mean a trivial phone app, but a very large enterprise complex application), and has sources or analysis they can share?"} {"_id": "104468", "title": "If this is camelCase what-is-this?", "text": "The naming convention for a term like `doSomething` is camel case. What would the naming convention of `do-something` be called?"} {"_id": "195652", "title": "How to calculate percentile in Java without using Library", "text": "I am trying to calculate `95th Percentile` from the data sets which I have populated in my below `ConcurrentHashMap`. **I am interested in finding out how many calls came back in 95th percentile of time** My Map will look like this and it will always be sorted in ascending order on the keys- In which key - means number of milliseconds value - means number of calls that took that much milliseconds Milliseconds Number 0 1702 1 15036 2 14262 3 13190 4 9137 5 5635 6 3742 7 2628 8 1899 9 1298 10 963 11 727 12 503 13 415 14 311 15 235 16 204 17 140 18 109 19 83 20 72 For example, from the above data sets, it means > 1702 calls came back in 0 milliseconds > > 15036 calls came back in 1 milliseconds Now I can calculate the 95th percentile by plugging the above data sets in the `Excel sheet`. But I was thinking to calculate the percentile in Java code. I know the algorithm will look something like this- Sum all values from the map, calculate 95% of the sum, iterate the map keys in ascending order keeping a running total of values, and when sum equals or exceeds the previously calculated 95% of the total sum, the key should be the 95th percentile I guess. But I am not able to plugin this algorithm in the Java code. Below is the map which will have above datasets. Map histogram = new ConcurrentHashMap I am not sure what is the best way to calculate the percentile in Java. I am not sure whether I am algorithm is also correct or not. I am just trying to find out how many calls came back in 95th percentile of time. private static void calculatePercentile() { for (Long time : CassandraTimer.histogram.keySet()) { } } Can anyone provide some example how to do that? Any help will be appreciated. **Updated code:-** Below is the code I have got so far. 
Let me know if I got everything correct in calculating the 95th percentile - /** * A simple method to log 95th percentile information */ private static void logPercentileInfo() { double total = 0; for (Map.Entry<Long, Long> entry : CassandraTimer.histogram.entrySet()) { long value = entry.getKey() * entry.getValue(); total += value; } double sum = 0.95*total; double totalSum = 0; SortedSet<Long> keys = new TreeSet<Long>(CassandraTimer.histogram.keySet()); for (long key : keys) { totalSum += CassandraTimer.histogram.get(key); if(totalSum >= sum) { //this is the 95th percentile I guess System.out.println(key); } } }"} {"_id": "194922", "title": "How useful is \"rubber duck debugging\"?", "text": "I just learned about rubber duck debugging, where the programmer explains code, line by line, to a rubber duck or other inanimate object in order to find the problem. This approach sounds time-consuming, but seems to work well from what I've read. Can someone with experience with this approach tell me just how effective it is, and whether it is a time-efficient way to debug faulty code compared to other techniques, such as stepping through a program and watching variables in a debugger?"} {"_id": "118142", "title": "What is a 'good number' of exceptions to implement for my library?", "text": "I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what is a good number of different exceptions to throw, and what the developer community expects of a good library. The trade-offs I see include: * More exception classes can allow very fine-grained levels of error handling for API users (prone to user configuration or data errors, or files not being found) * More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code * More exception classes can mean more code maintenance * More exception classes can mean the API is less approachable to users The scenarios I wish to understand exception usage in include: * During the 'configuration' stage, which might include loading files or setting parameters * During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread Other patterns of error reporting without using exceptions, or with fewer exceptions (as a comparison) might include: * Fewer exceptions, but embedding an error code that can be used as a lookup * Returning error codes and flags directly from functions (sometimes not possible from threads) * Implementing an event or callback system upon error (avoids stack unwinding) As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?"} {"_id": "21697", "title": "When do I need to use a framework?", "text": "I am new to web programming and at this time I am learning about PHP. I would like to know when I need to use a PHP framework such as CakePHP. What are the things that this and other similar PHP frameworks offer me? And is it really important to use a framework to be a professional? * And can I create my own framework to provide the features I like in it?"} {"_id": "121073", "title": "Would refusing to use a PHP framework be a bad idea?", "text": "> **Possible Duplicate:** > When do I need to use a framework?
I feel fairly competent at creating a number of applications in PHP and thus far have not seen the need to use one of the popular frameworks (CakePHP, Zend, etc.). As a programmer it gives me comfort to know exactly what my code is doing, and I suppose before now that has made sense since I've been learning the language. Is it ignorant of me to not want to use one of these mainstream frameworks? **Will it hurt my productivity? Why should I not just create my own code base that makes sense for me?** Other than possibly hindering later career choices where many roles might require framework experience, I'm not sure I'm missing out..."} {"_id": "253718", "title": "String contains trailing zeroes when converted from decimal", "text": "I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knew the cause. Note that **fixing** the issue is easy enough. I just can't figure out why it is happening in the first place. I have a WinForms program written in _VB.NET_ that is displaying a subset of data. It contains a few labels that show numeric values (the `.Text` properties of the labels are being assigned directly from the Decimal values). These numbers are being returned by a DLL I wrote in _C#_. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals), then returns that object back to the WinForm program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening which could modify these properties. So, the short version is: * WinForm requests a new `Foo` from the DLL. * DLL creates object `Foo`. * DLL calls webservice, which returns `SomeOtherFoo`. //Both Foo.Bar1 and Foo.Bar2 are decimals Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); //SomeOtherFoo.Bar1 is a string equal to \"2.9000\" Foo.Bar2 = SomeOtherFoo.Bar2; //SomeOtherFoo.Bar2 is a decimal equal to 2.9D * DLL returns Foo to WinForm. WinForm.lblMockLabelName1.Text = Foo.Bar1 //Inspecting Foo.Bar1 indicates my value is 2.9D WinForm.lblMockLabelName2.Text = Foo.Bar2 //Inspecting Foo.Bar2 also indicates I'm 2.9D So, what's the quirk? WinForm.lblMockLabelName1.Text displays as **\"2.9000\"**, whereas WinForm.lblMockLabelName2.Text displays as **\"2.9\"**. Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that `decimal.Parse(someDecimalString).ToString()` would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite... how to keep the formatting from the initial parsing). At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place."} {"_id": "155271", "title": "Do I have to change my company to make sure I'm good enough?", "text": "I have been working as a developer since my fourth year of university until now. I'm getting my master's degree next year (in math modeling). I've worked for the same company all the time, first on .Net, then on Android, and now .Net again.
It seems I'm doing quite well in my current company. Some of my coursemates have tried to work in my company, but they failed after some time. This (and not only this) makes me think that I'm really worth something. But we're working on a very specific project. I was wondering if I am good enough and if I could make it in another company. I love my current job, but sometimes I have a feeling that I'm not moving on. So, is it possible to keep improving when working at the same company with the same technology and on similar tasks? I know that most programmers go from one place to another very frequently. Is that the only way?"} {"_id": "214437", "title": "Prefer examples over documentation. Is it a behavioral problem?", "text": "Whenever I come across a new API or programming language or even simple Linux man pages, I have always (for as long as I can remember) avoided them and instead lazily relied on examples for gaining understanding of new concepts. Subconsciously, I avoid documentation/APIs whenever they are not straightforward, or are cryptic, or just plain boring. It's been years since I began programming, and now I feel like I need to mend my ways, as I now realize that I'm causing damage by refraining from reading cryptic/difficult documentation: it is still a million times better than examples, since the official documentation has more coverage than any example out there, and examples should be treated as \"added\" value rather than the \"primary\" source for learning. How do I break this bad habit as a programmer, or am I overthinking it?"} {"_id": "82087", "title": "How much can one programmer do on his own?", "text": "With software products taking whole teams of people to develop, how much can one programmer accomplish on his/her own? In other words, could a single person write Photoshop, MS Word, etc.? And if they couldn't, would web development be an area where one programmer could do a lot?"} {"_id": "127036", "title": "How to treat \"The field is never used\" warnings?", "text": "> Warning 1 The field 'MCS_SPS_School.Fees.DiscountAmt.rtvalue' is assigned > but its value is never used G:\Jagadeeswaran\Nov 17\MCS-SPS School\MCS-SPS > School\Fees\DiscountAmt.cs > > Warning 2 The field 'MCS_SPS_School.Fees.DiscountAmt.dt' is never used G:\Jagadeeswaran\Nov 17\MCS-SPS School\MCS-SPS School\Fees\DiscountAmt.cs Warnings like these are still present in my project, but the project runs fine. What should I do about these warnings: fix them or ignore them? Is it dangerous to ignore them, performance-wise or otherwise?"} {"_id": "213421", "title": "Method parameter takes object from its own class?", "text": "I'm not absolutely sure if this is the right place for this question, but I guess StackOverflow won't be the right place. Is it in any way bad practice to pass an object to a method of its own class? For some reason I'm not comfortable with the first solution. The method `Start` starts the game with the given player.
CurrentGame.Start(CurrentGame.CurrentPlayer); or CurrentGame.StartWithCurrentPlayer(); But the last method could require a second method like: public void SetCurrentPlayer(Player player) { CurrentPlayer = player; }"} {"_id": "253710", "title": "Ruby on Rails - Belongs_to and Active Admin not creating foreign ID", "text": "I have the following setup: class Category < ActiveRecord::Base has_many :products end class Product < ActiveRecord::Base belongs_to :category has_attached_file :photo, :styles => { :medium => \"300x300>\", :thumb => \"200x200>\" } validates_attachment_content_type :photo, :content_type => /\\Aimage\\/.*\\Z/ end ActiveAdmin.register Product do permit_params :title, :price, :slideshow, :photo, :category form do |f| f.inputs \"Product Details\" do f.input :title f.input :category f.input :price f.input :slideshow f.input :photo, :required => false, :as => :file end f.actions end show do |ad| attributes_table do row :title row :category row :photo do image_tag(ad.photo.url(:medium)) end end end index do column :id column :title column :category column :price do |product| number_to_currency product.price end actions end class ProductController < ApplicationController def create @product = Product.create(params[:id]) end end Every time I make a product item in activeadmin the category comes up empty. It wont populate the column category_id for the product. It just leaves it empty. What am I doing wrong?"} {"_id": "155279", "title": "OAuth2 vs Public API", "text": "My understanding of OAuth (2.0) is that its a software stack and protocol to allow 2+ web apps to share information about a single end user. User A is a member of Site B and Site C; Site B wants to fetch some data from Site C about User A, and this is where OAuth steps in. So first off, if this assessment is incorrect, please begin by clarifying this for me and correcting me! Assuming I'm on the right track, then I guess I'm not seeing the need for OAuth to begin with (!). I'm sure I'm just not seeing the \"forest through the trees\" here, but the way I see it, couldn't Site C just expose a public API that Site B could use to fetch the same data (sans OAuth)? If Site C required user credentials to access the data, could this public API just use HTTPS for secure transport and require username/password as a part of each API call? Again, I'm sure I'm missing something, but I'm just not understanding why I would need OAuth when a secure, public API written and exposed by Site C seems more than capable of delivering what Site B needs regarding User A. In general, I'm looking for a set of guidelines to go by when deciding to choose between using OAuth for my web apps or just writing my own web service ( exposing public API). Thanks in advance!"} {"_id": "253712", "title": "How should I store and secure self-signed certificates?", "text": "I'm fairly certain I shouldn't commit certificates into source control. Even if the repository is private and only authenticated coworkers (for example) have access to it. That would allow for accidental exposure (thumb drives, leaked credentials, whatever). But, **how should I store and secure certificates?** I don't suppose I should just plop them on the network file server, for some of the same reasons I wouldn't put them into source control, right? Is there some kind of secure certificate store that I can run? 
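On the certificate-storage question above: a password-protected keystore file is one common answer, and the Java `KeyStore` API mentioned next is indeed a general-purpose store rather than something WebLogic-specific. A minimal read sketch; the path, alias, and password are placeholders, not real values:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;

// Read one certificate out of a password-protected PKCS#12 keystore.
public class KeyStoreRead {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // placeholder: prompt for this in practice

        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("certs.p12")) {
            ks.load(in, password); // load fails if the password is wrong
        }

        Certificate cert = ks.getCertificate("my-self-signed"); // lookup by alias
        System.out.println(cert);
    }
}
```

The keystore file itself can then live outside source control, with only the password distributed through whatever secrets channel the team already uses.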
Does the Java \"keystore\" do that generally or is it specific for like weblogic servers or something?"} {"_id": "253713", "title": "Where to store front-end data for \"object calculator\"", "text": "I recently have completed a language library that acts as a giant filter for food items, and flows a bit like this `:Products -> Recipes -> MenuItems -> Meals` and finally, upon submission, creates an `Order`. I have also completed a database structure that stores all the pertinent information to each class, and seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one backend user who edits and manipulates data, and multiple front end users who select their `Meal`(s) to create an `Order`. Ideally, all of the front end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible `Product` objects, would require me to foreach the list for every `Product` that I need to match to a `Recipe` and so on and so forth every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture: Products: public class Product : IPortionable { public Product(string n, uint pNumber = 0) { name = n; productNumber = pNumber; } public string name { get; set; } public uint productNumber { get; set; } } Recipes: public Recipe(string n, decimal yieldAmt, Volume.Unit unit) { name = n; yield = new Volume(yieldAmt, unit); yield.ConvertUnit(); } /// /// Creates a new ingredient object /// /// Name /// Recipe Yield /// Unit of Yield public Recipe(string n, decimal yieldAmt, Weight.Unit unit) { name = n; yield = new Weight(yieldAmt, unit); } public Recipe(Recipe r) { name = r.name; yield = r.yield; ingredients = r.ingredients; } public string name { get; set; } public IMeasure yield; public Dictionary ingredients = new Dictionary(); MenuItems: public abstract class MenuItem : IScalable { public static string title = null; public string name { get; set; } public decimal maxPortionSize { get; set; } public decimal minPortionSize { get; set; } public Dictionary ingredients = new Dictionary(); and Meal: public class Meal { public Meal(int guests) { guestCount = guests; } public int guestCount { get; private set; } //TODO: Make a new MainCourse class that holds pasta and Entree public Dictionary counts = new Dictionary(){ {MainCourse.title, 0}, {Side.title , 0}, {Appetizer.title, 0} }; public List items = new List(); The Database just stores and links each of these basic names and amounts together usings ID's (`RecipeID, ProductID and MenuItemID`)"} {"_id": "214438", "title": "Can you use proprietary libraries in MPL 2.0 licensed source?", "text": "Is it legal to use a third party library in a software project that is itself licensed under MPL 2.0? 
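On the menu-data question above: the repeated foreach scans the poster worries about usually disappear once each ID-keyed collection is loaded into a map up front. A sketch of the idea; the class and field names are illustrative, not from the post:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Build an ID index once at startup, then resolve links in O(1).
class ProductIndex {
    record Product(int id, String name) { }
    record IngredientRef(int productId, double amount) { }

    static Map<Integer, Product> indexById(List<Product> products) {
        Map<Integer, Product> byId = new HashMap<>();
        for (Product p : products) {
            byId.put(p.id(), p);
        }
        return byId;
    }

    // One hash lookup per ingredient, no matter how many products exist.
    static Product resolve(IngredientRef ref, Map<Integer, Product> byId) {
        return byId.get(ref.productId());
    }
}
```

With only ~500 products, loading everything at startup and linking records through maps like this is cheap; the wasteful part is only the nested scanning, not the loading itself.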
So some potential project hierarchies would look like this: * Library Project [MPL 2.0] (Open Source, Distributed) * Third party library [Proprietary] (Closed source, not distributed) * * * * Someone else's project [MPL 2.0] (Open source, distributed) * Library Project [MPL 2.0] (Open Source, Distributed) * Third party library [Proprietary] (Closed source, not distributed) * * * My practical feeling is that this is fine; otherwise there could be no use of this license on many platforms whose base libraries are proprietary, like on iOS for example. The only difference being that those libraries are distributed to registered developers."} {"_id": "253716", "title": "How vibrations might affect Kinect depth measurements", "text": "I'm currently doing some research into development with the Microsoft Kinect product. My project manager has come up with a potential design for mounting the camera to do the capturing. However, the solution means that the camera might be subject to vibrations, as the platform it is on is directly connected to where the subjects will be moving. It was my thought that vibrations would affect the quality of the results; however, I could not come up with a viable explanation as to why, other than that it's the same as holding a camera in your hand while your hand is shaking vs. using a tripod. Do vibrations affect the depth measurements on a Kinect, and if so, how can I explain this in simple terms to my PM to help us come up with a better design for attaching the sensor?"} {"_id": "253717", "title": "What is the right way to process inconsistent data files?", "text": "I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the \"right\" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with the series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections; but within that period, which section is inconsistent is itself inconsistent. For example, say a file has four sections with three data fields each, which is represented by four Python dictionaries with three keys each.
For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you."} {"_id": "120560", "title": "What are the things Java got right?", "text": "What are the things that Java (the language and platform) got categorically right? In other words, what things are more recent programming languages preserving and carrying forward? Some easy answers are: garbage collection, a VM, lack of pointers, classloaders, reflection(?) What about language-based answers? Please don't list the things Java did wrong, just right."} {"_id": "25135", "title": "If your workmate is quitting his job, would you notify your boss?", "text": "If a colleague on your team is planning to quit his job and this may cause bad feelings in your company, would you notify your boss?"} {"_id": "215746", "title": "What does the English word \"for\" exactly mean in \"for\" loops?", "text": "English is not my first language, but since the keywords in programming languages are English words, I usually find it easy to read source code as English sentences: * `if (x > 10) f();` => \"If variable `x` is greater than `10`, then call function `f`.\" * `while (i < 10) ++i;` => \"While variable `i` is less than `10`, increase `i` by `1`.\" But how is a `for` loop supposed to be read? * `for (i = 0; i < 10; ++i) f(i);` => ??? I mean, I know what a `for` loop is and how it works. My problem is only that I don't know what the English word \"for\" exactly means in `for` loops."} {"_id": "25131", "title": "Can somebody explain what lambda things are in programming?", "text": "So far I have heard about: * Lambda calculus * Lambda programming * Lambda expressions * Lambda functions These all seem to be related to functional programming...
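On the ETL question above: the "match a file number to a set of section variants" function can be little more than a sorted map from the first file number of each run to the parsers in force for that run. A sketch of that lookup, with an illustrative generic parser type:

```java
import java.util.Map;
import java.util.TreeMap;

// Route a file number to the schema variant that was in force for it.
class VariantRouter<P> {
    private final TreeMap<Integer, P> variants = new TreeMap<>();

    // Register the parser that applies from this file number onward.
    void register(int firstFileNumber, P parser) {
        variants.put(firstFileNumber, parser);
    }

    // Largest registered key <= fileNumber, i.e. the variant in force.
    P parserFor(int fileNumber) {
        Map.Entry<Integer, P> entry = variants.floorEntry(fileNumber);
        if (entry == null) {
            throw new IllegalArgumentException(
                    "no variant registered at or before " + fileNumber);
        }
        return entry.getValue();
    }
}
```

A change like "section 4 shifts down at file 7000 and back up at 7636" then costs two `register` calls per affected section, rather than a whole new hand-written schema per run.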
Apparently it will be integrated into C++1x, so I might better understand it now: http://en.wikipedia.org/wiki/C%2B%2B0x#Lambda_functions_and_expressions Can somebody briefly define what these lambda things are and give an example of where they can be useful?"} {"_id": "186287", "title": "Design Pattern for enterprise application", "text": "I have read a few articles about the Composite pattern and I want to know whether it's applicable in the following situation. I found that \"A Composite Entity object can represent a coarse-grained object and all its related dependent objects\" public class PatientRegistrationDTO { public string RegistrationNo; public string ID; public DateTime AdmitDate; } public class PersonDTO { public string ID{ get; set; } public string FullName { get; set; } public string FirstName { get; set; } } By using these two objects I need to create public class Patient { public string ID{ get; set; } public string FullName { get; set; } public DateTime AdmitDate; } Can I use the Composite pattern for enterprise applications here? Is it possible for me to add a class like the one below? public class PatientDTO { public static Patient ConvertToEntity(PatientRegistrationDTO pregDTO, PersonDTO person) { Patient p = new Patient(); p.ID= pregDTO.ID; p.FullName = person.FullName; p.AdmitDate = pregDTO.AdmitDate; return p; } }"} {"_id": "25139", "title": "Efficiency of self-education", "text": "Do you think being self-educated in software development is good? Please give an example of what you have learnt successfully by yourself."} {"_id": "222785", "title": "Is it worth using VCS (version control software) for hobbyist/small/personal projects?", "text": "The question is fairly self-explanatory. Is it worth using VCS (version control software) for hobbyist/small/personal projects?"} {"_id": "222784", "title": "Invoking a web service in a Web API Project\u2026in which layer to invoke?", "text": "I am using **Microsoft ASP.NET Web API 2** and one of my endpoints has to internally invoke a legacy non-Microsoft web service (not `asmx` or `svc`). Which layer should I invoke this in? I currently have: **Repository layer:** where all the CRUD calls to the DB are done now. **Domain Manager:** where the respective manager classes invoke the `Repository Layer` methods. And my **Web API Controller** methods invoke the respective Domain Manager methods. Should I just have another method in my **Repository Layer** which invokes the web service, and follow the usual pattern above?"} {"_id": "53992", "title": "Identifying which pattern fits better", "text": "I'm developing software to program a device. I have some commands like `Reset`, `Read_Version`, `Read_memory`, `Write_memory`, `Erase_memory`. `Reset` and `Read_Version` are fixed; they don't need parameters. `Read_memory` and `Erase_memory` need the same parameters, which are Length and Address. `Write_memory` needs Length, Address and Data. For each command, I have the same steps in sequence, something like this: `sendCommand`, `waitForResponse`, `treatResponse`. I'm having difficulty identifying which pattern I should use: Factory, Template Method, Strategy, or some other pattern. ## Edit I'll try to explain better, taking into account the given comments and answers. I've already written this software and now I'm trying to refactor it. I'm trying to use patterns even where it is not strictly necessary, because I'm taking advantage of this little piece of software to learn about patterns. Still, I think that one (or more) pattern fits here and could improve my code.
When I want to read the version of my device's software, I don't have to assemble the command with parameters; it is fixed. So I just send it, then wait for the response. If there is a response, I treat (or parse) it and return. To read a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters `Len` and `Address`. Then I send it and wait for the response. If there is a response, I treat (or parse) it and return. To write a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters `Len`, `Address` and `Data`. Then I send it and wait for the response. If there is a response, I treat (or parse) it and return. I think that I could use Template Method because I have almost the same algorithm for all of them. But the problem is that some commands are fixed while others have 2 or 3 parameters. I think that the parameters should be passed in the constructor of the class, but then each class will have a constructor overriding the abstract class constructor. Is this a problem for the Template Method? Should I use another pattern?"} {"_id": "166224", "title": "What happens to programmers most often while reading the code of others?", "text": "When reading others' code, do you usually have trouble understanding it, or is it more usual that you question the other person's code for being wrong/inefficient/badly formatted (etc.)? (Image: someone reading what you coded at your first job.)"} {"_id": "82556", "title": "How did programmers use networking to share expensive computer resources in the 60's and 70's?", "text": "I'm young and wasn't alive during the 60's and 70's to experience networking and programming as it once was. I have been watching some talks by Van Jacobson on Content Centric Networking, and in these talks he gives a historical perspective stating that in the 60's and 70's, networking was designed to solve the problem of resource sharing, such as getting access to scarce card readers or high-speed tape drives. He then proceeds to say that there was very little data in this era, and that data \"didn't live on computers\"; it was something you carried around with you, e.g. on tapes or printouts. I have two questions regarding this: 1) How did people \"remotely\" use something like a card reader? Surely at some point the physical cards had to be delivered to wherever the computer was. If you were 100 miles away, did this mean they posted the cards off ahead of time and then simply used networking to execute the commands necessary to run those card decks? 2) How did people generally get the results of their programs? Were they sent back across the wire, or were printouts/tapes etc. posted back to the remote researcher after the program had been run? I apologize if I've gotten my eras mixed up in any way here; as I said, I wasn't alive at the time. Thanks."} {"_id": "194016", "title": "How do I evaluate programming interview preparation websites?", "text": "I ran across the CareerCup website, which says they can help prepare you for programming interviews with various US-based top tech companies. They don't \"guarantee\" anything, and the site appears to be backed by a well-reviewed book*. *The reviews are on Amazon. I have been an active member on StackOverflow, so to me the best resource for preparation would be StackOverflow. But I saw that CareerCup has collected interview questions from various companies.
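Back to the device-command question above: differing constructor parameter lists are not a problem for Template Method, because the base class only fixes the sequence, not how each command obtains its data. A minimal sketch; the framing bytes are invented for illustration:

```java
// The shared assemble/send/wait/parse sequence lives in the base class.
abstract class DeviceCommand {
    final byte[] execute() {
        byte[] frame = assemble();
        send(frame);
        byte[] raw = waitForResponse();
        return parse(raw);
    }

    // Fixed commands return a constant frame; parameterized ones build it.
    protected abstract byte[] assemble();

    protected byte[] parse(byte[] raw) { return raw; } // override when needed

    private void send(byte[] frame)  { /* write frame to the device */ }
    private byte[] waitForResponse() { return new byte[0]; /* read with timeout */ }
}

class ReadVersionCommand extends DeviceCommand {
    @Override protected byte[] assemble() { return new byte[] { 0x01 }; }
}

class ReadMemoryCommand extends DeviceCommand {
    private final int address;
    private final int length;

    ReadMemoryCommand(int address, int length) {
        this.address = address;
        this.length = length;
    }

    @Override protected byte[] assemble() {
        return new byte[] { 0x02, (byte) address, (byte) length }; // illustrative framing
    }
}
```

Each subclass is free to take whatever constructor arguments it needs; the template only requires that `assemble()` can produce a frame when called.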
So the key difference is that I would have to dig for interview questions within SO versus having questions already aggregated by the other site. How do I go about evaluating the credibility of a website like this? How can I evaluate whether their interview preparation offerings are worthwhile?"} {"_id": "194014", "title": "Incorporating web designer's HTML pages into an MVC4 application", "text": "We are embarking on a new project which will be using the ASP.NET MVC4 platform. I have been informed that the design is being outsourced to a custom design firm and they will be supplying us with HTML pages and stylesheets. How easy will it be to incorporate these CSS stylesheets and HTML pages in our application? Do I need to ask for something more than HTML pages? My personal experience is limited to one site that was built for in-house use, where I used the HtmlHelper to generate views. Another past experience I have from long ago is using Classic ASP, and that was a breeze since I could literally mash up ASP and HTML together. My question is: what is a good approach to incorporating HTML and stylesheets from a web designer into an ASP.NET MVC4 web application?"} {"_id": "210673", "title": "How do I get rid of cyclic references in this design?", "text": "I have 3 classes: **Meeting**, **Project** and **Agenda**. * A **Project** contains all sorts of information + a list of meetings. * The **Agenda** contains a list of upcoming Meetings. * A **Meeting** contains some data + a list of Projects that were discussed there. The Agenda checks for upcoming meetings. When it finds one, it calls its _Meeting::alarm()_ method, which in turn displays data it gets from the list of projects this meeting refers to. **Meetings** can be referenced in a project without being scheduled in the **Agenda**, but it doesn't really make sense to have a reference to a meeting in the agenda if it is not contained in a project. ![Dependency between the classes](http://i.stack.imgur.com/FDo2L.png) Because the **Agenda** can be parsed in a thread while the main thread deletes a **Project**, I made both Agenda and Project use shared pointers to Meetings, so that the parsing thread doesn't find a dangling pointer. In the destructor of **Project**, I ask **Agenda** to check the meetings related to this project to clean up those that don't have any other related project. Here is my problem: _What kind of data structure should the Meeting::parentProjects member be?_ If a Meeting gets called by the Agenda while its Project is being deleted, and parentProjects is a simple raw-pointer container, I might have a dangling pointer here. But I can't use a shared_ptr to Project either, since that would make a cyclic dependency... I feel like it is unnecessarily complicated. _How could I refactor this?_ Note: I **have to** keep the 2 threads though."} {"_id": "187880", "title": "Algorithm to detect a CLICK within Square range", "text": "It might be a simple question, but I am looking for an optimal solution. I will have numbers printed on a screen and I will be aware of their coordinates. Numbers/Symbols will have 4 points (a square) to define their boundaries. The coordinates of each particular symbol will be stored in a file. Let's say there is a number _5_ and its 4 coordinates on screen are: **(2,20,20,20,2,40,20,40)** Now let's assume that the string _555_ represents a value in a file, say the value is _Car1_. When the user presses _5_ on the num pad three times, it should detect that he needs _Car1_.
I am interested to know whether there's some standard formula/algorithm to find the range between these 4 coordinates, or whether I have to work it out on my own. The formula that came to my mind is: Symbol = (X1+X2+X3+X4) + (Y1+Y2+Y3+Y4) = (62) + (120) = 182 (representing _5_) But I am skeptical whether it's right and whether the formula will always give a unique value per symbol/character for the given coordinates."} {"_id": "187881", "title": "Deterministic and controllable fully automated memory management", "text": "Fully automated memory management increases productivity and integrity greatly, but the usual implementation (GC) has a critical problem: it's non-deterministic and not controllable. This causes many problems, such as bursts of CPU load, which are critical for realtime applications. Some kinds of optimizations (incremental/concurrent GC) can reduce these problems, but they are still non-deterministic and can't eliminate the problem completely. I had thought GC could never solve this problem, but recently I learned and realized that GC operation doesn't need to be fully hidden. GC can also be deterministic and CPU burst-free by exposing some properly designed behavior controls. I think (RC + manually invoked cycle detection) can do this, but this doesn't look efficient. Is there any better approach for implementing deterministic and controllable fully automatic memory management? Or can I get some links to example implementations? **Edit** I added these lines to make my question clearer. * _deterministic and controllable_ in this context means the user can explicitly track and run code at object creation and destruction, and also control the amount and timing of the memory management load. * _fully automatic_ means it allocates and clears memory like a GC, without extra concern."} {"_id": "187882", "title": "How can I become better at explaining code to other developers?", "text": "While the question itself might sound silly, the answer is quite important to me, as I feel that this issue is negatively affecting my work performance. A bit of background here: I am a seasoned senior software developer in a medium-size software department of a non-software company. While I am above average on the technical side of things, I am much worse at communicating and explaining things, even when explaining something to other developers. The most difficulty comes when I explain how a particular _small piece of code_ works. The funny thing is that explaining and providing examples of how something works on a much higher level, e.g. interactions between separate modules and subsystems, is much easier for me. To make it clearer, what I call \"source code explaining skill\" is: a) the ability to clearly explain the execution flow of the code - e.g. \"this thingy calls that thingy, which returns that object, which later calls method A, passing the object B to ...\" b) the ability to clearly explain the problems with a current design, or, more importantly, the implications of a source code change, as in \"if, for performance reasons, we start caching the object as a field of the class, we would have to make modifications in ten different places to ensure that the cache is always up to date\", etc. I tried to analyse why I am bad at explaining things and haven't found any explanation, except maybe that I explain things in a bullet-point manner, which some may find too rigid.
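On the click-detection question above: summing the coordinates cannot give a unique value in general (different rectangles can share the same sums), so the usual approach is a plain axis-aligned containment test per symbol. A sketch:

```java
import java.util.List;

// Point-in-rectangle test per symbol instead of hashing corner sums.
class SymbolHitTester {
    record Rect(int left, int top, int right, int bottom) {
        boolean contains(int x, int y) {
            return x >= left && x <= right && y >= top && y <= bottom;
        }
    }

    record Symbol(char glyph, Rect bounds) { }

    // Returns the symbol under the click, or null if none matches.
    static Symbol symbolAt(List<Symbol> symbols, int x, int y) {
        for (Symbol s : symbols) {
            if (s.bounds().contains(x, y)) {
                return s;
            }
        }
        return null;
    }
}
```

For the example digit _5_ with corners (2,20), (20,20), (2,40), (20,40), the rectangle is simply left=2, right=20, top=20, bottom=40.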
Also, when I explain things, I perhaps focus too much on what I am saying myself and miss the questions people ask; but then again, it feels to me like those questions are often irrelevant and simply drag the conversation away. What could you recommend I do to improve my source-code-explaining skills (other than the obvious \"practice makes perfect\", which I don't really buy, as I think I would probably just practice the same mistakes again and again)?"} {"_id": "191025", "title": "Machine Learning with sample data set", "text": "I have a question regarding Machine Learning in general. Consider the following scenario: Given a piece of text, we want a program to know whether the text is 'abusive' or not. To do this we can give the program 1000 text samples and manually mark which ones are positive and which ones are negative. The program studies these and records which words/patterns are common in abusive texts. We then give it another 1000 un-marked texts, and it manages to identify 95% of these correctly using the patterns it learnt from the original 1000. That's all good, but what about after that, once the software 'goes live'? That is, we leave it to pull another 1000 texts every day and it's left to determine whether they are abusive or not on its own. One might think it would be a good idea for it to continue recognising words/patterns in an attempt to 'learn' more and more each day. But the problem here is we don't know for sure whether the program is identifying each text correctly. So if it marks a clean text as abusive, it will falsely record words/patterns as abusive. This will then cause the program's intelligence to become more and more incorrect and off-track. What is the general approach to the above problem?"} {"_id": "117348", "title": "Should interface names begin with an \"I\" prefix?", "text": "I have been reading \"Clean Code\" by Robert Martin to hopefully become a better programmer. While none of it so far has been really ground-breaking, it has made me think differently about the way I design applications and write code. There is one part of the book that I not only don't agree with, but that doesn't make sense to me, specifically in regards to interface naming conventions. Here's the text, taken directly from the book. I have bolded the aspect of this I find confusing and would like clarification on. > I prefer to leave interfaces unadorned. The preceding I, so common in > today\u2019s legacy wads, is a distraction at best and too much information at > worst. **I don\u2019t want my users knowing that I\u2019m handing them an interface**. Perhaps it is because I'm only a student, or maybe because I have never done any professional or team-based programming, but _I would want the user to know it is an interface._ There's a big difference between implementing an interface and extending a class. So, my question boils down to, **\"Why should we hide the fact that some part of the code is expecting an interface?\"** **Edit** In response to an answer: > If your type is an interface or a class is your business, not the business > of someone using your code. So you shouldn't leak details of your code in > this third party code. Why should I not \"leak\" the details of whether a given type is an interface or a class to third-party code? Isn't it important to the third-party developer using my code to know whether they will be implementing an interface or extending a class?
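On the machine-learning question above: a common guard against that drift is to stop learning from the model's own guesses unless they are near-certain (or human-confirmed), sometimes called self-training with a confidence threshold. A toy sketch of the gate, with an assumed classifier interface that is not from any particular library:

```java
// Only feed a prediction back into the training set when the model is
// near-certain either way; everything else goes to a human review queue.
class SelfTrainingGate {
    interface Classifier {
        double abusiveProbability(String text); // assumed model API
    }

    private static final double CONFIDENT = 0.99;

    static boolean shouldLearnFrom(Classifier model, String text) {
        double p = model.abusiveProbability(text);
        return p >= CONFIDENT || p <= 1.0 - CONFIDENT;
    }
}
```

The threshold trades volume of new training data against the risk of reinforcing the model's own mistakes; periodic human spot checks remain the usual safety net.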
Are the differences simply not as important as I'm making them out to be in my mind?"} {"_id": "61825", "title": "How to not mix HTML with PHP?", "text": "I made an application in ExtJS, but my technical architect and project manager said we don't want big files, so I removed ExtJS and rewrote it in object-oriented PHP and JavaScript, mixing HTML with PHP. But I don't want to mix HTML and PHP, because of bad coding structure and bad coding practice. What should I do?"} {"_id": "117342", "title": "Should ActiveRecord-based domain models have visible properties? Why or why not?", "text": "Should ActiveRecord-based domain models have visible properties? Why or why not? My experience and studies have always led me to believe that object properties should always be protected, and that data should only be manipulated/accessed through methods. This ensures that if the internals of the class need to be refactored, other objects that rely on that object do not then also need to be refactored. I often see domain models with publicly visible properties, however, and the models can be passed to, say, view objects which directly access those properties while building the view. I have always thought that this is perhaps the one exception to the above-mentioned rule, because otherwise you would basically have to create getters and setters for objects like the view to use, or some other equally troubling-sounding solution. Perhaps I am wrong on this? I would like to get some feedback on the subject."} {"_id": "117344", "title": "Is there a good reason to avoid node.js for non-realtime web apps?", "text": "I've seen a lot of talk about how awesome Node.js is for realtime web apps -- things that need sockets, Comet, AJAX-heavy communications, and so forth. I know that its event-driven, asynchronous, single-threaded model is also good for concurrency with low overhead. I've also seen Node.js tutorials for simpler, 'traditional', non-realtime apps (e.g., the standard blog example, which seems to be the standard 'Hello World' for people learning app development). And I also know that node-static allows you to serve static assets. My question is: is there any good reason to _avoid_ Node.js for traditional web apps, like classifieds, forums, the aforementioned blog example, or the sort of CRUD apps you build for internal business applications? Just because it excels at all the funky realtime stuff, does that contraindicate it for more staid uses? The only thing I can think of, off the bat, is the lack of mature libraries (although that's changing). (The reason I'm asking is that I'm considering ditching PHP for Node.js, mostly to get over the impedance mismatch of switching between languages, but also so I can reuse validation code and whatnot. My superego admonishes me to choose the best tool for the job; however, I don't have a lot of time to learn fifteen languages and all their userland libraries just to have a comprehensive arsenal. It's also reassuring that Node.js might give me an easier optimisation path than PHP/Apache in the future when I have to start thinking about heavy traffic.) **[EDIT]** Thanks for the answers so far, folks; I just want to see if anyone else will weigh in before I choose an answer. The answer from @Raynos kinda confirms what I'm thinking, and the links from the commenters provided good food for thought, but I want to see if anyone else has any Node-specific answers, like 'DON'T USE NODE FOR PROBLEM X'.
(Besides high-CPU tasks; I know that already :-)"} {"_id": "129069", "title": "How to deal with designers?", "text": "> **Possible Duplicate:** > How do you handle a graphic designer who thinks he's a web designer? As a mobile/web developer in my company, I deal a lot with designers (i.e. Adobe suite designers). The problem is that these designers always want things from their point of view, by which I mean they don't understand UX rules and other things, such as how users interact with the software. And this is where problems start... Any suggestions on how to deal with them?"} {"_id": "53373", "title": "Software Process Management", "text": "We want to formalize our development process. The aim is to provide a clear view of the software in each phase: for example, to translate requirements into tasks, to specify the pre- and post-conditions of tasks, etc., so that each role (analyst, developer, tester) can perform better and has a checklist to finish his job properly. Does your company follow any software process, e.g. Agile, OpenUP, RUP, etc.? If yes, what tools do you use to enforce it or to facilitate it?"} {"_id": "9614", "title": "When does a \"scripter\" become a \"programmer\"?", "text": "Is there a difference between 'scripters' and 'programmers'? What is the dividing line between scripters and programmers? Perhaps all scripters could be considered programmers. If not all scripters fall into the same camp, what about those people who use external objects such as COM objects, Win32's, etc. via an interop library? The scripting languages I'm thinking of include (but are not limited to) Perl, Bash, JavaScript, PowerShell, and batch files."} {"_id": "99088", "title": ".Net - web controls and win form controls", "text": "The next version of the application will be a .NET-based application (both desktop and web). The **requirement** is that both applications have to look and work exactly the same way. I was thinking that maybe there was a way to create a web control or something and use it in both applications. Is this possible? For example, can I create a user control for the web and use it for the desktop (without using a web browser control)? If the above isn't possible, what are my alternatives? Please let me know."} {"_id": "140164", "title": "How frequently should the token in CSRF security be updated?", "text": "To start with the background, this post is what Jeff Atwood says about CSRF tokens. On that very page, he goes on to say: > An even stronger, albeit more complex, prevention method is to leverage > server state -- to generate (and track, with timeout) a unique random key > for every single HTML FORM you send down to the client. We use a variant of > this method on Stack Overflow with great success. But in this post itself, Jeff never remarks on when and how the tokens should be updated. I was using a similar technique in a web app I was working on. It works like this: 1. Whenever the user `POST`s data to my server, a CSRF token is sent along. 2. This CSRF token is stored in a cryptographically strong cookie in the user's session. 3. If the token is valid, the user's request is processed, and vice versa. 4. If the request is valid, discard the old token on the server side and create a new token. The response from the server contains a new CSRF token to be used in the next request. The old token on all the forms on a page is updated with the new one so that the next request is processed properly.
Is it wise to update the tokens after every `POST` request, or should the update be done only when the user makes a `GET` request, keeping the same token until the next GET request is made?"} {"_id": "36959", "title": "How does a government development shop transition to developing open source solutions?", "text": "Our shop has identified several reasons why releasing our software solutions to the open source community would be a good idea. However, there are several reasons from a business standpoint why converting our shop to open source would be questioned. I need help from anyone out there who has gone through this transition, or is in the process - specifically a government entity. About our shop: - We develop and support web and client applications for the local law enforcement community. - We are NOT a private company, rather a public sector entity. Some questions that tend to come up when we have this discussion are: 1. We're a government agency, so isn't our code already public? 2. How do we protect ourselves from being 'hacked' if someone looks into our code? (There are obvious answers to this question, like making sure you don't hard-code passwords, etc. However, the discussion needs to consider an audience of executives who are very security-conscious.)"} {"_id": "99083", "title": "Is it possible to deploy Perl or Python scripts in the same way as PHP scripts?", "text": "PHP's deployment model is uber simple: upload and run. This is especially ideal for web applications that are intended to be installed on shared hosting by end users (think: WordPress). Compare this with the installation of a popular Perl app. I want to stop writing things in PHP, but want to keep the same deployment model. Is it possible to achieve this with Python or Perl in such a way that it'll work with most shared web hosts? In other words, how can I run Python or Perl scripts outside of the cgi-bin directory on most shared hosts, like I can with PHP scripts?"} {"_id": "108601", "title": "Why do schools (or most schools) teach Java as the intro language?", "text": "> **Possible Duplicate:** > Why do we study Java at university? This is a question I've wondered about a lot as a young developer who just graduated from college. Why do schools teach Java in depth vs. something like C? A lot of my co-workers complain that there are no good C programmers anymore because schools no longer teach C in any real form. For example, my first 6-8 undergrad CS courses were all in Java. Then I had exactly 2 courses in upper division that used C and C++, and the rest were in Java. That was it. A lot of students had to retake the C/C++ classes because nobody had introduced us to C/C++ or any concepts in C/C++. We were just expected to somehow program instantly in C/C++. Now you could say that once you learn a language, every other language is more or less the same, but a lot of my classmates had problems with pointers and memory allocation. I think I honestly spent half the quarter understanding pointers and memory allocation before I sort of understood them, and even now I am not confident enough to say I truly understand them. So why do schools do this? I've heard many developers say that going from C/C++ as a first language to Java is easy, but the reverse is much more difficult (this is also my sentiment).
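Back to the CSRF question above: whichever rotation schedule is chosen, the server-side core stays the same, and rotating per `POST` is one extra line in the validate step. A framework-neutral sketch; where the session map comes from and how the token reaches the form are assumed to be handled by the surrounding framework:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;

// Generate, validate, and rotate a per-session CSRF token.
final class CsrfTokens {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Accept the POST only if the submitted token matches, then rotate.
    static boolean validateAndRotate(Map<String, String> session, String submitted) {
        String expected = session.get("csrf");
        if (expected == null || submitted == null) {
            return false;
        }
        if (!MessageDigest.isEqual(expected.getBytes(), submitted.getBytes())) {
            return false; // constant-time comparison
        }
        session.put("csrf", newToken()); // fresh token for the next request
        return true;
    }
}
```

One practical note on the schedule itself: per-request rotation breaks multiple open tabs of the same site (the other tabs hold a stale token), which is a common reason to rotate per session or per form instead.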
So why not do C/C++ first?"} {"_id": "146633", "title": "Writing commit messages as a solo developer?", "text": "On projects where the repository is shared between me and other programmers, I always write commit messages, even if I'm the primary developer. But on those projects where I'm the solo developer working on a project, and the repository is hosted on my personal laptop and is not even hosted by the client, so no one except myself will see the commits, should I still write commit messages? So far I have been writing them, but I've found that I've never gone back and viewed my commit messages. I take time off development to write down the messages, but then they're never seen again, even by me. Are there any good reasons for writing commit messages as a solo developer, or should you just skip them in favor of staying focused on development?"} {"_id": "1814", "title": "How to deal with users who think their computer can think?", "text": "Throughout my career, I have had to deal with users who think their computer can think: > My computer hates me! or > He just do this so he could laugh at me! This is often a joke, but some users are serious. It's easy when I know the causes of the problem, but when it's unexpected behavior it's more complicated. In those cases, I usually turn it into a joke, blaming the moon phases and the tides, but they are likely to prefer their own explanations. Do you have any tricks for dealing with those users?"} {"_id": "112336", "title": "Creating a blog for software changes", "text": "I work for a small company where I maintain a number of projects all at once. I would like to create a blog and note software changes/updates so that I can keep track of things. Plus, it will also serve as a help tool for others if they need it. I would like to install something locally on my machine or network; either ASP or PHP is fine. **Which software would you recommend? Is it a good idea or a bad idea? Has anyone done it? I have worked with WordPress and I like it, but I am afraid it is not the best for code snippets.** Any thoughts? I do use source control, though I am not an expert on it. I use three different development environments: 1. Visual Studio 2. Eclipse 3. SQL Server Management Studio"} {"_id": "241142", "title": "Use unnamed object to invoke method or not?", "text": "If I have a class with only one public method, when I use this class, is it good to use an unnamed object to invoke its method? normal: TaxFileParser tax_parser(tax_file_name); auto content = tax_parser.get_content(); or unnamed object version: auto content = TaxFileParser(tax_file_name).get_content(); Because I've been told that we should avoid temporaries as much as possible. If the `tax_parser` object is used only once, can I call it a temporary and try to eliminate it? Any suggestion will be helpful."} {"_id": "137069", "title": "Programming *into* a language vs. writing C code in Ruby", "text": "Code Complete states that you should always code _into_ a language as opposed to coding _in_ it. By that, they mean > Don't limit your programming thinking only to the concepts that are > supported automatically by your language. The best programmers think of what > they want to do, and then they assess how to accomplish their objectives > with the programming tools at their disposal. (chapter 34.4) Doesn't this lead to using one style of programming in every language out there, regardless of the particular strengths and weaknesses of the language at hand?
Or, to put the question in a more answerable format: Would you propose that one should try to encode one's problem as neatly as possible with the particulars of one's language, or should you rather search the most elegant solution overall, even if that means that you need to implement possibly awkward constructs that do not exist natively in one's language?"} {"_id": "137066", "title": "Is this bad design for a Shape interface?", "text": "I'm creating a vector editing program in C++, and I need a Shape interface which other concrete classes will implement. There is a requirement that _no implementation inheritance_ is allowed. The design doc says that if you need polymorphism, **use interfaces**. If you need code reuse, **use composition**. The **Shape** interface is: class Shape { public: virtual void get_name()=0; virtual void set_name()=0; virtual void get_linewidth()=0; virtual void set_linewidth()=0; ... ...about 20 other getters/setters ... virtual void draw()=0; virtual int area()=0; virtual void rotate(int angle)=0; } The Circle class: class Circle: public Shape { string name; int line_width; int angle_of_rotation; int radius; public: string get_name(){ return name; } string set_name(string name){ this->name=name; } ... ...about 20 other getters/setters ... int area() { return PI*pow(this->radius,2); } } I have no problem with this, except common properties have to be **repeated for each type of shape**! This is solved using inheritance, but I am **not allowed to use that**. So, I create a **ShapeProperties** class class ShapeProperties { string name; int line_width; int angle_of_rotation; public: string get_name(){ return name; } string set_name(string name){ this->name=name; } ... ...about 20 other getters/setters ... } and a properties() method for the interface: virtual ShapeProperties* properties()=0; A user would then do: Shape *shape = new Circle(); shape->properties()->set_name(\"my shape\"); shape->properties()->set_line_width(4); int area = shape->area(); **My question:** Is this good design? Is this bad design? Are there any obvious problems? How could it be made better?"} {"_id": "137062", "title": "How to simplify my complex stateful classes and their testing?", "text": "I am in a distributed system project written in java where we have some classes which corresponds to very complex real-world-business objects. These objects have a lot of methods corresponding to actions that the user (or some other agent) may apply to that objects. As a result, these classes became very complex. The system general architecture approach has lead to a lot of behaviors concentrated on a few classes and a lot of possible interaction scenarios. As an example and to keep things easy and clear, let's say that Robot and Car were classes in my project. So, in the Robot class I would have a lot of methods in the following pattern: * sleep(); isSleepAvaliable(); * awake(); isAwakeAvaliable(); * walk(Direction); isWalkAvaliable(); * shoot(Direction); isShootAvaliable(); * turnOnAlert(); isTurnOnAlertAvailable(); * turnOffAlert(); isTurnOffAlertAvailable(); * recharge(); isRechargeAvailable(); * powerOff(); isPowerOffAvailable(); * stepInCar(Car); isStepInCarAvailable(); * stepOutCar(Car); isStepOutCarAvailable(); * selfDestruct(); isSelfDestructAvailable(); * die(); isDieAvailable(); * isAlive(); isAwake(); isAlertOn(); getBatteryLevel(); getCurrentRidingCar(); getAmmo(); * ... 
In the Car class, it would be similar: * turnOn(); isTurnOnAvaliable(); * turnOff(); isTurnOffAvaliable(); * walk(Direction); isWalkAvaliable(); * refuel(); isRefuelAvailable(); * selfDestruct(); isSelfDestructAvailable(); * crash(); isCrashAvailable(); * isOperational(); isOn(); getFuelLevel(); getCurrentPassenger(); * ... Each of these (Robot and Car) is implemented as a state machine, where some actions are possible in some states and some aren't. The actions change the object's state. The action methods throw `IllegalStateException` when called in an invalid state, and the `isXXXAvailable()` methods tell whether the action is possible at the time. Although some are easily deducible from the state (e.g., in the sleeping state, awake is available), some aren't (to shoot, it must be awake, alive, have ammo, and not be riding a car). Further, the interactions between the objects are complex too. E.g., the Car can only hold one Robot passenger, so if another one tries to enter, an exception should be thrown; if the car crashes, the passenger should die; if the robot is dead inside a vehicle, he can't step out, even if the Car itself is OK; if the Robot is inside a Car, he can't enter another one before stepping out; etc. The result of this, as I already said, is that these classes became really complex. To make things worse, there are hundreds of possible scenarios in which the Robot and the Car interact. Further, much of that logic does need to access remote data in other systems. The result is that unit-testing it became very hard, and we have a lot of testing problems, one causing another in a vicious circle: * The test case setups are very complex, because they need to create a significantly complex world to exercise. * The number of tests is huge. * The test suite takes some hours to run. * Our test coverage is very low. * The testing code tends to be written weeks or months later than the code that it tests, or never at all. * A lot of tests are broken too, mainly because the requirements of the tested code changed. * Some scenarios are so complex that they fail with a timeout during setup (we configured a timeout on each test, up to 2 minutes in the worst cases, and even with that much time they time out; we made sure it is not an infinite loop). * Bugs regularly slip into the production environment. That Robot and Car scenario is a gross over-simplification of what we have in reality. Clearly, this situation is not manageable. So, I am asking for help and suggestions to: 1. Reduce the classes' complexity; 2. Simplify the interaction scenarios between my objects; 3. Reduce the test time and the amount of code to be tested. EDIT: I think I was not clear about the state machines. The Robot is itself a state machine, with states \"sleeping\", \"awake\", \"recharging\", \"dead\", etc. The Car is another state machine. EDIT 2: In case you are curious about what my system actually is, the classes that interact are things like Server, IPAddress, Disk, Backup, User, SoftwareLicense, etc. The Robot and Car scenario is just a case that I found would be simple enough to explain my problem."} {"_id": "190774", "title": "How often is Inheritance used?", "text": "I admit that I am a junior developer, and so far I've only built simple web applications in ASP.NET MVC. But I've never had to use the inheritance aspect of Object Oriented Programming in my own classes!
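On the Robot/Car question above, one incremental step is to pull the legal-action rules out of the dozens of `isXXXAvailable()` methods into a single transition table, so the checks (and their tests) become data-driven. A reduced, illustrative sketch; the real states and actions would come from the actual domain:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// One table answers every "is this action available in this state?" question.
class RobotRules {
    enum State  { SLEEPING, AWAKE, RECHARGING, DEAD }
    enum Action { AWAKE, SLEEP, WALK, SHOOT, RECHARGE, POWER_OFF }

    private static final Map<State, Set<Action>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.SLEEPING,   EnumSet.of(Action.AWAKE, Action.POWER_OFF));
        ALLOWED.put(State.AWAKE,      EnumSet.of(Action.SLEEP, Action.WALK,
                                                 Action.SHOOT, Action.RECHARGE));
        ALLOWED.put(State.RECHARGING, EnumSet.of(Action.AWAKE));
        ALLOWED.put(State.DEAD,       EnumSet.noneOf(Action.class));
    }

    static boolean isAvailable(State state, Action action) {
        return ALLOWED.get(state).contains(action);
    }
}
```

Extra conditions (ammo, riding a car) still need code of their own, but the table covers the pure state rules and can be verified by one parameterized test over all state/action pairs instead of hundreds of scenario setups.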
It is true that in using ASP.NET MVC I inadvertently use inheritance (any controller I create will inherit from the base controller class), but I am referring here to the conscious use of inheritance in my design of a particular web system. Is it because I am a bad programmer? Or could it be that inheritance only comes into play in certain scenarios? Don't get me wrong, I understand what inheritance is and how to code using inheritance if I had to. I just can't seem to find a scenario in which to use it. Basically, I am wondering how often it is used in OO programming in general."} {"_id": "49437", "title": "Can you be a programmer and Business manager at the same time?", "text": "I think I'm stuck in a tricky situation! We are a new start-up with 5 employees (2 programmers). I'm the Technical Manager, and that has been fine! Now I can see the fingers pointing at me to take control of everything, as I have the big vision of what our organization does, and to play the role of CEO or General Manager! I want to, but I have no idea whether it would be risky to our organization to make such a decision. How would managerial interruptions affect technical productivity? Any tips or previous experience with such a situation would help :) Thanks in advance"} {"_id": "190770", "title": "Object-Oriented Programming: Why \"Oriented\"?", "text": "I am most of the way through my games programming degree. This is not a computer science degree, so a lot of the theory is eschewed in favour of practical portfolio building and what I see as JIT learning, which is apparently more important in the games industry. The first subject was \"Introduction to Object-Oriented Programming\". That phrase didn't bother me until I learned about the different programming paradigms (I'm getting this list from https://en.wikipedia.org/wiki/Comparison_of_programming_paradigms): * Imperative * Functional * Procedural * Structured * Event-Driven * Object-Oriented * Declarative * Automata-Based I get that this is not an exhaustive list, and that not all of these concepts are equal, and most of them aren't even exclusive, but I don't understand why most of them get just one word - imperative; functional; declarative - but when we talk about programming with objects, we have to clarify that we are _oriented_ around those objects. Can't we just _use_ objects? Can't we just _have_ objects? Why must they _orient_ us, as our guiding star? Looking here (https://en.wikipedia.org/wiki/Object-oriented_programming), nowhere is the use of the term \"oriented\" addressed as its own term. Only \"object\" is explained. Also, I can see for practical reasons why Event-Driven is used, because Event Programming is already a thing that you do when you're running a conference, and Automata Programming makes it sound like you're setting up a robotic production line, so it helps to have additional clarifying words there. **What makes Object Programming, as a phrase, not enough to describe what we do when we use objects in our programming?** Obviously from my tone I'm not too fond of the word \"oriented\". It reminds me of my time as a court reporter, listening to lawyer after lawyer use the phrase \"in relation to\" as a kind of verbal tic. It didn't mean anything; it was just a term that they used to fill the air while they tried to think of what to say next. However, I'm not trying to advocate a change of language, I'm just asking why it is the way it is. If someone knows why it came to be known that way for purely historical, vestigial reasons, then that's the answer.
It will be ammunition if I ever decide to waste my time advocating for a change of language. On the other hand, if there is actually a useful reason why a language or piece of code must _point_ towards objects, to the exclusion of all other directions, as opposed to merely having them in its toolbelt, as _tools_, I would really be interested to learn about it. I like learning useful things."} {"_id": "190771", "title": "Can REST API be used as business layer?", "text": "I am using the PHP CodeIgniter **MVC** design pattern, and I have a project with some specific business processes. In my application I will be dealing with 2 existing REST APIs: 1. Google 2. Trello I came up with the idea of creating a **REST API** to act as a **Business Logic Layer** (BLL) that in turn accesses my models directly to fetch the data needed to formulate business rules, while the controller communicates with the BLL through a REST client. ![enter image description here](http://i.stack.imgur.com/p5rMQ.png) Is that a bad approach for performance? Is it better to create **2 layers** of models, one as a Data Access Layer **(DAL)** and one as a Business Logic Layer **(BLL)**?"} {"_id": "211114", "title": "BLL structure and organization", "text": "Let's say you have a class diagram http://i.stack.imgur.com/gpb55.png and you would like to start a class library project in Visual Studio to act as a BLL, converting this diagram into actual code files. How would you structure your BLL project (in the Solution Explorer), and how would you organize the class files and their hierarchy?"} {"_id": "211117", "title": "State pattern long state class names", "text": "I am using the state pattern with 28 states in my application. The states are for membership cards, which have 7 major states; there are also 4 boolean attributes of the membership card that actually affect its behavior, so I decided to embed them in the states. That's how it multiplied to 28 states. The problem now is with the state class naming; it is getting crazy. I am ending up with state classes named like `Membership-UnderCreation-Printed-Linked-Premium-Frozen` (I have hyphenated the different attributes to make it clear). Is it OK for state class names to be like this?"} {"_id": "197467", "title": "Preventing Web Users From Using Their Location Bar Within An App", "text": "My boss asked me to alter our Java webapp such that users cannot go to places in our webapp by typing URLs into their browser location bar. I told her that I cannot disable their location bars. I told her the way this is usually done is to launch a webapp in a new customized browser window sans a location bar. That was not acceptable to her. I already have a Java Filter class set up to enforce various rules. So, I was thinking of this approach: 1. Implement a system-wide \"writeFlagCookie\" JavaScript function to write a cookie any time a user initiates a GET by clicking on a link or a button. 2. Everywhere the webapp does a redirect or a forward, put a flag variable, say \"wasRedirected\", into the HTTP session. 3. In my Filter, intercept each request and check the request type. 4. If it is a POST, I know a human didn't type the URL into their browser, so I automatically let it through. 5. If it is a GET, look for the JavaScript-generated cookie, or the flag stored in the session indicating a redirect or a forward. If I find neither, send the user back to the page they just tried to leave. Though it will be a lot of work, it sounds too simple to be adequate. Is there any way this approach can bite me in the ass?
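For concreteness, here is a minimal sketch of the filter logic in steps 3-5 (Java; the class name and the exact cookie/attribute names are made up for this question, not taken from our real code):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class NavigationGuardFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        // Step 4: a POST cannot come from the location bar, so let it through.
        if (\"POST\".equalsIgnoreCase(request.getMethod())) {
            chain.doFilter(req, res);
            return;
        }
        // Step 5: accept a GET only if the JavaScript click-cookie or the
        // redirect/forward flag in the session is present.
        boolean clickCookieFound = false;
        if (request.getCookies() != null) {
            for (Cookie c : request.getCookies()) {
                if (\"writeFlagCookie\".equals(c.getName())) { clickCookieFound = true; break; }
            }
        }
        HttpSession session = request.getSession(false);
        boolean wasRedirected = session != null
                && Boolean.TRUE.equals(session.getAttribute(\"wasRedirected\"));
        if (clickCookieFound || wasRedirected) {
            if (session != null) session.removeAttribute(\"wasRedirected\"); // one-shot flag
            chain.doFilter(req, res);
        } else {
            // Neither marker found: assume a hand-typed URL and bounce back.
            // (A real version would need a safe fallback page when Referer is absent.)
            response.sendRedirect(request.getHeader(\"Referer\"));
        }
    }
    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}

The session flag is cleared once consumed, so a later hand-typed URL can't reuse it.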
My boss wants two problems solved: 1. Users going to screens out of sequence and getting error messages. This includes multi-screen forms (which we have to keep) and users using the back button. 2. Preventing the user from leaving particular pages and going to other parts of the application until they fill out what we want them to fill out on those screens. I have ideas for how to solve #1 and #2, also laborious, but my boss likes the idea of disabling typed URLs for navigation as a catch-all solution. Maybe once she sees how much work is involved in disabling typed navigation, I can sell her on just solving those two problems. Thanks, Steve"} {"_id": "211113", "title": "How to write a stored procedure to insert data into a SQL Server database from a DataGridView in a form?", "text": "I need to insert data through a form using a `DataGridView` connected to a SQL Server 2008 R2 database. Is it OK if I execute a single-row insert stored procedure in a while loop in `form.cs`?"} {"_id": "68027", "title": "How difficult did you find the Java Certification Exam", "text": "I'm just starting out as a Java programmer and have very minimal experience. I would like to improve my knowledge and study for the Java certification. I realize everyone will have a different answer, but for those who have taken the certification and passed it, I'd really appreciate it if you could tell me about your experiences. Just a couple of lines to let me know whether you found it hard or very hard. Also, how much time did you spend studying? Sorry that this is not a good question. My native language is not English, so it is hard for me to say what I want to ask."} {"_id": "157123", "title": "Having 2 Initialize Paragraphs in 1 COBOL program", "text": "I am designing a COBOL program that does 2 things: 1. reads specific rows from a table and inserts a new row based on the original; 2. reads different rows from the same table and updates those rows. Is it bad practice to have 2 initialization paragraphs? I need to declare 2 SQL cursors, one for each task. So I was thinking: `1 initialize cursor, 2 process, 3 initialize cursor, 4 process.` But that seems like bad logic. Something is telling me it should be more like: `1 initialize cursor, 2 process` But I'm having trouble thinking it out because, to me, they are 2 separate tasks. I tried to think of a way that my SQL query could do the work, and then I could just do some validation. But I'm not sure the query would be any more efficient than having a second read of the table. Here are my structure ideas. I've added a 3rd chart that I believe follows the answer given to me on this question. Does the 3rd chart seem to be the most logical?"} {"_id": "68022", "title": "Should I keep all my class definitions in the same class library?", "text": "I am starting to work with class libraries in my C# class. We were told to write code that can be used more than once. Should I keep all my class definitions in one class library for the length of the term, or separate them by project?"} {"_id": "199565", "title": "Is Objective C a reasonable way to learn C?", "text": "I want to learn C, but I tend to learn best when I have a project to work on. I've never done iPhone development, so I'm hoping to kill two birds with one stone. Will learning Objective-C also teach me to program C reasonably well, or are they too dissimilar? EDIT: I'm mostly wondering if Objective-C/iPhone development would teach me all the little gotchas that are inherent to C.
I come from a .NET background, so I haven't done much in terms of memory management or working with pointers."} {"_id": "199560", "title": "Finding maximum number of congruent numbers", "text": "Let's say we have a multiset (a set with possible duplicates) of integers. We would like to find the size of the largest subset of the multiset such that all numbers in the subset are congruent to each other modulo some m > 1. For example, given 1 4 7 7 8 10: for m = 2 the subsets are (1, 7, 7) and (4, 8, 10), both of size 3; for m = 3 the subsets are (1, 4, 7, 7, 10) and (8), the larger having size 5; for m = 4 the subsets are (1), (4, 8), (7, 7), (10), the largest having size 2. At this point it is evident that the best answer is 5, for m = 3. Given m, we can find the size of the largest subset in linear time. Because the answer is always equal to or larger than half the size of the set, it is enough to check values of m up to the median of the set. Also, I noticed it is only necessary to check prime values of m. However, if the values in the set are large, the algorithm is still rather slow. Does anyone have any ideas on how to improve it?"} {"_id": "199561", "title": "DDD/SOA in (.NET) MVC and Message pattern(s) / Request Response", "text": "We're currently considering whether it makes sense (or whether the benefits are worth the added code) to introduce a message-based pattern (such as Request/Response) into a Domain Driven Design / Service Oriented Architecture under an MVC app (with DI, and potentially used by MVC, WCF, Windows Services, etc.). Basically (in MVC), the controller uses an injected service which in turn uses an injected repository to save/update/delete objects. There are Application Services and Domain Services, and an application service might use multiple domain services to e.g. save something and then send an email about the item being saved. Considering the `example use case` where something is saved and then an email is sent, it could be useful to present any potential errors to the user, such as \"Save succeeded, but the email wasn't able to be sent. Please try again in 5 minutes or contact support etc.\" vs. just a generic error such as \"Save or email didn't succeed\". The message-based pattern with Request and Response objects seems fine (in theory); or at least response objects with e.g. a couple of inherited fields could be useful.

public abstract class ResponseBase {
    public bool Success { get; set; }
    public string Message { get; set; }
}

Then there could be an object extending it with the ViewModel result as a property, and finally the controller could decide whether to show e.g. an error message or update the view, etc. This seems nice, but in the `simple cases` where you are simply saving an item and it's clear whether the save succeeded or failed, it can also seem to be overkill. Creating and passing extra objects where maybe you just need an ID and a Boolean seems to make this pattern redundant. The thing is that this object needs to travel from the controller, through the services, down to the repository, and back. It might be possible to ignore the request part and only worry about returning a meaningful result. It's hard to estimate whether the split could be 50-50 or 40-60, but there are many cases where this type of system could be useful, as well as many where it isn't. Are there better approaches to this issue, or have you found this pattern useful for the scenario portrayed?
Is it worth the extra coding for the simple cases (in your case(s))?"} {"_id": "199563", "title": "Untyped lambda calculus: Why is call-by-value strict?", "text": "I'm currently reading Benjamin C. Pierce's “Types and Programming Languages”. Before really getting into type theory, it explains the lambda calculus and evaluation strategies. I am a bit confused by the explanation of _call by name_ vs. _call by value_ in this context. The two strategies are explained in the following manner: ### _call by name_ Like _normal order_ in that it chooses the leftmost, outermost redex first, but more restrictive in that it does not allow reductions inside abstractions. An example: id (id (λz. id z)) → id (λz. id z) → λz. id z ### _call by value_ Only the outermost redexes are reduced, and a redex is reduced only when its right-hand side has already been reduced to a value, i.e., a term that is finished computing and cannot be reduced any further. An example: id (id (λz. id z)) → id (λz. id z) → λz. id z (identical to the call-by-name evaluation) OK, so far so good. But this is followed by the following paragraph: > The call-by-value strategy is _strict_, in the sense that the arguments to > functions are always evaluated, whether or not they are used by the body of > the function. In contrast, _non-strict_ (or _lazy_) strategies such as > call-by-name and call-by-need evaluate only the arguments that are actually > used. I know what call-by-value and call-by-name mean practically, from having used (among others) C and Haskell, but I cannot see why the evaluation strategy explained above leads to this in the lambda calculus. Is this an additional rule that always accompanies call-by-value, or does it follow from the reduction strategy outlined above? Especially since the reduction steps in the examples are identical, I fail to see the difference between the two strategies, and would love it if someone could help me gain some intuition."} {"_id": "193156", "title": "How can I create a web form that displays and accepts Tamil language?", "text": "I am developing a PHP application that needs to display and accept content written in Tamil. For example, \"login\" should be written in Tamil on the form. How can I achieve this?"} {"_id": "168334", "title": "Naming Convention for Dedicated Thread Locking objects", "text": "A relatively minor question, but I haven't been able to find official documentation or even blog opinions/discussions on it. Simply put: when I have a private object whose sole purpose is to serve as a private `lock` target, what do I name that object?

class MyClass {
    private object LockingObject = new object();
    void DoSomething() {
        lock(LockingObject) {
            //do something
        }
    }
}

What should we name `LockingObject` here? Also consider not just the name of the variable but how it looks in code when locking. I've seen various examples, but seemingly no solid go-to advice: 1. Plenty of usages of `SyncRoot` (and variations such as `_syncRoot`). * **Code Sample:** `lock(SyncRoot)`, `lock(_syncRoot)` * This appears to be influenced by VB's equivalent `SyncLock` statement, the `SyncRoot` property that exists on some of the ICollection classes, and part of some kind of SyncRoot design pattern (which arguably is a bad idea) * Being in a C# context, I'm not sure I'd want a VB-ish name. Even worse, in VB, naming the variable the same as the keyword. Not sure if this would be a source of confusion or not. 2.
`thisLock` and `lockThis` from the MSDN articles: C# lock Statement, VB SyncLock Statement * **Code Sample:** `lock(thisLock)`, `lock(lockThis)` * Not sure whether these were named minimally purely for the example or not * Kind of weird if we're using this within a `static` class/method. * EDIT: The Wikipedia article on locks also uses this naming for its example 3. Several usages of `PadLock` (of varying casing) * **Code Sample:** `lock(PadLock)`, `lock(padlock)` * Not bad, but my only beef is that it unsurprisingly invokes the image of a _physical_ \"padlock\", which I tend not to associate with the _abstract threading concept_. 4. Naming the lock based on what it's intended to lock * **Code Sample:** `lock(messagesLock)`, `lock(DictionaryLock)`, `lock(commandQueueLock)` * The VB SyncRoot MSDN page has a `simpleMessageList` example with a private `messagesLock` object * I don't think it's a good idea to name the lock after the type you're locking around (\"DictionaryLock\"), as that's an implementation detail that may change. I prefer naming around the concept/object you're locking (\"messagesLock\" or \"commandQueueLock\") * Interestingly, I _very rarely_ see this naming convention for locking objects in code samples online or on StackOverflow. 5. (EDIT) The C# spec, under section \"8.12 The Lock Statement\", has an example of this pattern and names it `synchronizationObject` * **Code Sample:** `lock(SynchronizationObject)`, `lock(synchronizationObject)` **Question:** What's your opinion _generally_ about naming _private_ locking objects? Recently, I've started naming them `ThreadLock` (so kinda like option 3), but I'm finding myself questioning that name. I'm frequently using this locking pattern (in the code sample provided above) throughout my applications, so I thought it might make sense to get a more professional opinion/discussion about a solid naming convention for them. Thanks!"} {"_id": "168331", "title": "git changing head not reflected on co-dev's branch", "text": "Basically, we undid history. I know this is bad, and I am already committed to avoiding this at all costs in the future, but what is done is done. Anyway, I issued a git push origin <1_week_old_sha>:master to undo some bad commits. I then deleted a buggered branch called release (which had also received some bad commits) from the remote and then branched a new release off master. I pushed this to the remote. So basically, remote master & release are clones and just how I want them. The issue is that if I clone the repo anew (or work in my current repo), everything looks great... but when my co-devs delete their release branch and create a new one based off the new remote release I created, they still see all the old junk I tried to remove. I feel this has to do with some local .git files mistaking the new release branch for the old release. Any thoughts? Thanks."} {"_id": "168330", "title": "Useful git commit messages for merged branches", "text": "_As a follow-up to this question:_ If I'm working on a team by myself, I can maintain useful commit messages when merging branches by squashing all the commits to a single diff and then merging that diff. That way I can easily see what changes were introduced in the branch, and when browsing the master branch I have a single summary describing the feature/change/whatever that was accomplished in that branch. My question now is: how can I accomplish this when working with a team?
In that situation, the branches will be pushed to a remote repository, meaning that I can't squash all the commits in the branch down to a single commit. If the branch is public, can I still have a single useful merge commit in the master branch? (By \"useful\" I mean that the commit in the master line gives me (1) a useful summary of what was done in the branch and (2) a diff of the same.)"} {"_id": "168339", "title": "Optimization of a hybrid pagination scheme", "text": "I'm working on a web application using node.js in which I'm building a partial copy of the database on the client side to decrease the load on my server. Right now, I have a function like this (expressed as Python-style pseudocode, but implemented in JavaScript):

get(table_name, primary_key):
    if primary_key in cache[table_name]:
        return cache[table_name][primary_key]
    else:
        x = get_data_from_server(table_name, primary_key)  # socket.io
        return cache[table_name][primary_key] = x

While this scheme works perfectly well for caching individual rows, I'd like to extend it to support the creation of paginated tables ordered according to the primary_key, loading additional data using the above function only for the current and possibly the adjacent pages. Now, I don't want to keep the list of primary keys on the server to be retrieved every time I need to change the page (which, for reasons beyond the scope here, will be very frequent), and keeping it on the client side, subject to real-time create/delete events from the server, doesn't seem that good an idea, even after compression (using ranges instead of individual values). What is the best way to calculate which items are to be displayed on a random page, minimizing the space requirements & the need for communication with the server?"} {"_id": "58303", "title": "Is Agile the new micromanagement?", "text": "This question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to management's concern about meeting business requirements, given quite a bit of ad-hoc-type requests from high above. Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed by their Scrum master to talk to other developers, and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk for kicks to my friend who is on the agile team, I am not allowed to without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to provide a complete vacuum for agile developers, free from any interruptions, and to have them put in a good 6+ productive hours. Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones for other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arrive, to put them back on track. For starters, it requires training for developers and coaching for managers, etc., etc. The current Scrum master is a manager who took a couple of days of agile training classes, paid for by management, and is now leading this agile team.
I have also heard in the meetings that the agile manifesto doesn't dictate specifics: agile is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable. In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers on the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please: is this an example of good practices being used for the selfish advantage of more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they breathe only work because they are at work. Thanks. Edit: It's a company in the healthcare domain that has offices across the US. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with management being completely cheap: cutting out expensive coffee for a cheaper version, an emphasis on savings, and being productive while staying as lean as possible. My feeling is that someone in management behind closed doors threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case. EDITED: They are having their 5-minute daily meeting, but are not allowed to chat or talk with someone outside of their team. All focus is on work."} {"_id": "58301", "title": "Converting our Windows Software to the web", "text": "We're looking at converting a software package from Windows over to the browser. It's a back-office system consisting of Ledgers, Stock Control, Order Processing, etc., and it's written in a legacy language (DataFlex). The project consists of 200+ database tables, 100+ reports, 300+ views, 100+ dialog boxes, and 200 or so selection lists. I'm really struggling to get my head around the technologies that I'll need to use to convert this to a more modern web-based programming language/framework. Even simple screens (entering a stock location) seem overly complex. With the current language, I can drag a bunch of fields to a form (a la VB), wire in a tiny bit of database code, and I've got a screen that'll allow me to find, delete, edit, and change records easily. 10 minutes of development, tops. Has anyone got any suggestions on how I can move from my current VB-style programming language to one that allows similar functionality on the web? Has anyone got any ideas on how I go about working out how many man-hours it's going to take to convert this sort of project to a new language on the web? What web framework options are available, and how do I decide between going with something like Rails vs. going with something like ASP.NET? We've a tonne of code wired to OnClick and OnChange events (I know this is bad). What strategies can I take with regard to converting this to the web? I'll admit I'm hopelessly confused by some of the newer syntaxes used by modern languages. Is there any language that allows me access to the database without having to subclass everything through IEnumerable and various weird typecasting things? Has anyone done anything like what we want to do? What were the pitfalls? Sorry about the vagueness of the question.
I'd be more specific, but I don't know where to start. Delete away if I'm asking the wrong thing or in the wrong place."} {"_id": "58306", "title": "Class Naming in Stock Trading Application", "text": "What do you normally call a class that stores the open, high, low, close, volume, and timestamp of a financial instrument (e.g. a stock)? For example, in a 5-minute intraday chart, this object would hold OHLC and volume between 10:00 and 10:05 AM. Can anyone working in the field give me advice?"} {"_id": "197098", "title": "Why are there non-decidable languages? Can anyone explain my book's solution?", "text": "Well, in my book it says that \"there are non-decidable languages\", and the proof is: > Every algorithm is a word. Then there are only countably many algorithms. But > there are uncountably many languages, and therefore more languages than algorithms. Why is it said that every algorithm is a word? A word is a concatenation of symbols, elements of an alphabet; so what's the relation between a word and an algorithm? Can anyone explain that to me?"} {"_id": "197096", "title": "Using only UI testing. Is that Ok?", "text": "I am a sole developer working with an offshore employer. I do realize the importance of unit testing (although I haven't practiced it before), but currently the code hasn't ever been tested. The problem with the code is that it isn't what I'd call \"well modularized\": some parts are really messy, there is extensive use of globals, and the code doesn't really show loose coupling. If I am not wrong, the code might require heavy refactoring before it can be unit tested. Also, it seems the employer isn't quite interested in testing; he'd rather we spend this time on building features. So what I suggested to him is UI testing. Because UI testing is something we do all the time anyway (testing features, clicking, checking that something is not broken), it made sense to both of us to use it to automate the process of checking features and save us time. Since I have never done UI testing before, I don't know what possible disadvantages might be associated with using just UI testing alone, or whether it will be much of a benefit at all."} {"_id": "197091", "title": "Javascript Architectural Model", "text": "Are there any obvious flaws in this OO architectural model, which I intend to implement using JavaScript?
It is similar to the MVP model, but the role of the model is instead broken down into three modules: * the cache (stores data instead of re-requesting it from the server) * the model (manages requests for raw data, retrieving it from the cache if available, or else from the server) * the modifier (handles requests which make changes to the data on the server) For a diagram showing how data flows through this model: ![enter image description here](http://i.stack.imgur.com/sUjYY.png) **view** * sets up & manages the UI, possibly requiring requests for data from the presenter * sets up controls to alter what is displayed, possibly requiring requests for data from the presenter **cache** * contains a record of some fetched data in data objects * has a .store() method for storing data in the cache * has a .retrieve() method for retrieving data from the cache * access for storage and retrieval is available to the model and the modifier (acting as a faster intermediate location for sharing information than the server) **presenter** * handles requests for formatted data from the view * requests data from the model (passing a callback for completion), formats the data for presentation, and calls view callbacks **model** * handles requests for data from the presenter * requests data from the cache (cache.retrieve()) * if data is not in the cache, requests it from the server and then stores it in the cache (cache.store()) * data is passed back to the presenter via callbacks **modifier** * handles requests from the view for modifications to data * specifies post, put, and delete request types * only supports specific types of request, which may be handled separately (e.g. settings) * stores new data in the cache (cache.store()) * pushes data to the server * calls a callback about the success of data storage This is quite a big project and there will be a number of others working on it, so it would be good to get it right from the off. **edit 1** The cache.store() and cache.retrieve() methods both receive only the information required to refer to the data in the cache, but do not specify how it is stored. The cache methods use this information to respond with or store the correct data, but the internal structure in which the data is stored is kept secret."} {"_id": "220929", "title": "How do I make multiple calls to a web service without taxing that service heavily? Scaling question", "text": "Is there a good pattern for sending multiple calls to a web service without taxing it, while ensuring the data is sent back? I don't know enough to describe the problem correctly, or even to start googling it properly; current Google results compare streaming vs. non-streaming WCF answers. Scenario: I am working on an app (I'm a jr. dev) that needs to gather information from several sources about a 'customer' and the domains they own. Technical: for one of the sources, I need to send a string array of domains to a web service, and this web service returns an entry for each domain name; but this list of domain names will be thousands long. I would like to divide this list into bite-size chunks (1k domain names each) and then queue them up to send to that web service, while ensuring the web service doesn't skip one. PseudoRequirements: The consumer of the web page does not care how long it takes, but would like a list of results up front that does not require pagination to navigate.
Current Theory: Should I take my massive 30k list, break it into 1k chunks, stuff each 1k-sized chunk into a 'request' object, assemble those 'request chunks' into a 'request chunk list', iterate over that list (sequentially/blocking, so I don't strangle the WS), get back a 'response chunk' for each 'request chunk', assemble those into a list, and then pass that list back to the front end for viewing? Is this a viable method? Is there a better way to queue items? Does anyone know, offhand, any useful articles on this sort of queuing? Are there any 'gotchas' or additional items to consider before I attempt my first pass? Additional Edits: I do not have full control over the receiving service; I cannot view its code, and the developers that manage it are... less than responsive to email. I do not currently know the stress-testing limits of the web service. I emailed the owners of that component but have yet to receive a response; I was going to work up my design while I waited on them."} {"_id": "234408", "title": "How to capture \"Display advertisement\" use case?", "text": "What kind of use case would \"show an ad in part of the view\" be? Who could be the actor to relate it to: the User or the System? As the user has no specific goal of seeing ads, I am wondering what the best way is to capture this in a use case diagram. Any assistance will be helpful."} {"_id": "220926", "title": "Binary data storage scheme (saving user-uploaded files)", "text": "In our app we are currently saving binary data to the database (terrible, I know; but this is legacy stuff and it wasn't my decision). We're trying to migrate this data out to an external device, and I'm trying to come up with a scheme for saving these files. We have multiple tenants, and I'd like to have a directory per tenant. My scheme for that is to build a path using the first three letters of the tenant. So if you had a tenant called `apple`, its files would be at `/a/p/p/apple`. Within these directories, I will save the files. For the files, I want to generate a random 6-character alphanumeric name (only lowercase characters for the time being, for internal reasons). So if we generate a file name called `6a8jxo`, it will be stored as `/6/a/8/6a8jxo`. With this strategy, each tenant can have a maximum of approximately 2.2 billion unique files (36^6; not that we're attempting to save that many), with a maximum of 46,656 files per directory. If I go for a 5-character name, we will have a maximum of 60.5 million unique files, with 1,296 files per directory. Are there any drawbacks to this approach? I realize that certain directories may only contain one or two files, but in my mind that is not a huge problem. My colleague doesn't want to do it this way; he wants to use an autoincremented field in the database and then base the directory structure on the hex value (I assume 32-bit) of that instead of using the tenant. With his strategy, the hex value would be used as follows: the file will be stored in the directory located at `/aa/bbb`, where `aa` is the first two characters (most significant nibbles) of the hex value, and `bbb` is the next three. His reasoning is that he only wants to create new directories when one fills up, instead of having numerous partially-filled directories. This approach introduces numerous difficulties on the code side of things, and I don't see having completely-filled directories as an argument that justifies the extra effort required to do this.
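To ground the comparison, here is a minimal sketch of the scheme I'm proposing (Java; the class and method names are invented for this question, not our production code, and it assumes tenant names are at least 3 characters):

import java.security.SecureRandom;

// Proposed layout: /a/p/p/apple/6/a/8/6a8jxo
public final class FilePathScheme {
    private static final String ALPHABET = \"abcdefghijklmnopqrstuvwxyz0123456789\";
    private static final SecureRandom RANDOM = new SecureRandom();

    // e.g. tenant \"apple\" -> \"/a/p/p/apple\"
    static String tenantDir(String tenant) {
        return \"/\" + tenant.charAt(0) + \"/\" + tenant.charAt(1) + \"/\" + tenant.charAt(2) + \"/\" + tenant;
    }

    // Random 6-character lowercase alphanumeric name, e.g. \"6a8jxo\" (36^6 possibilities)
    static String randomName() {
        StringBuilder sb = new StringBuilder(6);
        for (int i = 0; i < 6; i++) {
            sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    // e.g. \"6a8jxo\" -> \"/a/p/p/apple/6/a/8/6a8jxo\"; 36^3 = 46,656 files per leaf directory
    static String filePath(String tenant, String name) {
        return tenantDir(tenant) + \"/\" + name.charAt(0) + \"/\" + name.charAt(1) + \"/\" + name.charAt(2) + \"/\" + name;
    }

    public static void main(String[] args) {
        System.out.println(filePath(\"apple\", randomName()));
    }
}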
Are there any other strategies or approaches I haven't considered?"} {"_id": "115648", "title": "Is there a Django reference?", "text": "I am new to Django. I have found the official topic guides https://docs.djangoproject.com/en/1.3/ an excellent place to learn from. But now I am at a point where I need a quick reference to see what methods an object has, what arguments they take, what they return, etc., kind of like the Python docs or Javadocs. More of a reference than a tutorial. The official docs are tutorial-style and do not necessarily have a reference for all the methods. Am I missing something here? Do the official Django docs suffice for everyone?"} {"_id": "115643", "title": "Why are all desktop environments but KDE using GTK?", "text": "I recently did some hacking with Qt, and found it quite pleasant. Then I switched distributions on my laptop a couple of times, and noted that only KDE is using Qt; all the others are using the GNU Image Manipulation Program Toolkit (GTK). GTK is primarily a widget toolkit, although Windows and Mac ports do seem to exist. I'm not sure why one would build a DE on top of a toolkit made for an image editor. Qt is more of a cross-platform framework, including threading, networking, and other non-GUI features. I heard licensing is considered a problem. Did I just answer my own question, or are there other things besides getting all zealotty about freedom? **update** I'd like to clarify that I'm not after opinions, but _observations_. \"Qt is not real C++\" is an observation, as is \"GTK has been general purpose for a very long time\". Besides, > But software and programming isn't always a hard science, either. Once you > get past the does-this-code-compile-or-not questions, you're dealing with > issues of best practices, experiences, and behaviors. Perhaps because our > communities have become so accustomed to getting quick, accurate, and timely > answers, they feel that even a subjective Stack Overflow is better than the > alternatives. So much so, that our fellow programmers created a sister site > specifically for their pent-up subjective questions. Take one heaping pile > of subjective questions, bottle it up for over two years and… kablooey!"} {"_id": "77091", "title": "How to dissuade a customer who just learned a technology and wants to use it everywhere?", "text": "My customer recently discovered URL rewriting, without completely understanding what it is, how it works, or its pros and cons. Now he asks for lots of strange changes to the actual requirements of current projects, and changes to old projects, in order to implement what he believes is URL rewriting. **On one hand, I'm annoyed at being asked to do things which don't make any sense** instead of doing real work. **On the other hand, I can't tell my customer that he doesn't understand anything about the subject** despite his interest in it. I think many people have had situations where their manager or customer just learned a new buzzword or a new technology and loved it so much that he wanted to use it in every project, everywhere, rewriting the whole codebase just to use this new thing, etc. Also, I've recently read something related on Programmers.SE where people told of their experiences when there was a huge buzz around XML, and some managers would ask to introduce XML into every project just to show everyone that they had used it.
So, those of you who have been in a similar situation: how did you manage it?"} {"_id": "114998", "title": "Is there any circumstance in which a strict waterfall project can succeed when requirements are not clearly defined?", "text": "I cannot see how a software project can make any meaningful progress under a waterfall methodology if the requirements cannot be clearly stated from the outset. Am I missing anything?"} {"_id": "114999", "title": "How often to release in Scrum sprint", "text": "How often do you release during a sprint? Only at the end of the sprint, or every time a feature is ready? And how do you handle bugfix releases?"} {"_id": "78738", "title": "Parameterize Agent Based Simulation (OOP-Question)", "text": "I'd like to hear my fellow programmers' thoughts on the issue of parametrizing agent-based simulations: Consider: * Simulation core, including geometry, collision tests, some rules * Different agents (modelled in OOP fashion: has-a, is-a, abstract interfaces) * Agents have different sensors, different actors, different controllers, ... All connected together by references/pointers and accessed using abstract interfaces. So essentially, each agent is composed of an ownership tree (the agent owns sensors, controller, actors), superimposed with a dataflow graph (sensor connected to controller, connected to actor). The tree, the graph, plus the parametrization of these things together form a **simulation setup**. Running a simulation amounts to: * Read in the simulation setup * Instantiate a bunch of objects, parametrize and connect them together to form the tree and the graph * Run the simulation * Output some data (statistics, signals, whatever) The question is how best to save the simulation setup, and how to instantiate & parametrize stuff. Requirements (some of them conflicting): 1. The parametrization should probably be structured along the ownership tree, as that feels most natural. 2. A lot of the time I'd like to instantiate a bunch of similar agents with just 1 or 2 parameters changed between instances. That needs to be easy. 3. I'd like to keep parametrization and code close together. When experimenting with algorithms that are affected by parametrization, I wouldn't want the changes to be spread out over too many places. 4. Conversely, I'd like to keep parametrization out of the code, so that it's easy to automate simulation runs in order to systematically sweep through parameter spaces. 5. Parameters have meta-data: type, value range, physical unit, textual description, logical dependencies (e.g. if you specify X you must not specify Y) 6. Parameters not only affect data (member variables) but also code (usage of a particular specialization of the abstract base) * * * Now my colleagues and I are tasked with building a new agent-based simulation: * Do you know any frameworks / libraries / techniques? * Are any patterns applicable? Best practices? * Meta-programming? * Abandon OOP altogether? Looking forward to your thoughts."} {"_id": "147433", "title": "Is it safe to download VS11/.NET4.5 beta and still deploy to servers with .NET 4.0?", "text": "I've been wanting to try out the VS2011 and .NET 4.5 beta, but the upgrade path is confusing at best. If I understand what I've read, the .NET 4.5 framework overwrites the .NET 4.0 libraries on your local machine, so libraries such as mscorlib.dll and System.dll will be replaced with the 4.5 versions: fully backward compatible, but still different.
If I'm developing ASP.NET sites with VS2011, and assuming I don't use any of the new 4.5 features in the code, do you know of any issues I might have deploying to a web server with only the .NET 4.0 framework? I wouldn't expect any problems since it's the same CLR, but I'd like to know for sure before I do it. There's always the option of keeping a separate virtual dev machine for 2011/4.5, so that I can develop on 4.5 but go back to my original 4.0 machine to compile and publish; but I'd rather not go that route if I don't have to. Thanks"} {"_id": "132252", "title": "Should I include not-yet released projects on my resume/CV?", "text": "I finished a website project with new technologies such as HTML5 and CSS3, but my company will not let it go to production. They plan to launch the new site as part of their next project version. Should I include such a not-yet-online project on my resume/CV? Maybe with an additional explanation such as \"this project is not released\"?"} {"_id": "186705", "title": "Why is the main memory for object allocation called the 'heap'?", "text": "Has anybody got an idea why the area of main memory where objects are allocated is referred to as the heap? I can understand the rationale for the stack (LIFO), but would like to know what the rationale is for the 'heap' name."} {"_id": "186706", "title": "Should my login logic be part of the controller or a service in MVC webapp", "text": "I'm using Shiro as my security manager for a Spring MVC web application. The login basically happens in these lines:

Subject user = SecurityUtils.getSubject();
user.login(new UsernamePasswordToken(username, password));

Where should I put this logic? `login()` can throw exceptions if the requested user doesn't exist, the wrong password was provided, etc. Should I call the code from my controller and do the error handling there, or should I call it from my service layer, catch the exceptions, and rethrow my own up to the controller?"} {"_id": "34081", "title": "Is there a graph with FOSS licenses detailing what can be linked with what?", "text": "I'm looking for a chart like this: ![alt text](http://i.stack.imgur.com/MmRTg.jpg) Basically something that tells you whether an app with a given license can be linked with a library of a given license. Does such a thing exist?"} {"_id": "34086", "title": "Dealing with a fundamental design flaw when you're new to the project", "text": "I've just started working on an open source project with around 30 developers in it. I'm working on fixing some of the bugs as a way to get into the \"loop\" and become a regular committer to the project. The problem is, I think I've uncovered a fundamental design flaw that's causing one of the bugs I'm working on. But I feel like if I blast this on the mailing list I'm going to come off as arrogant, and some of the discussions I've had about the issue are butting heads with some of the people. How should I go about this?"} {"_id": "111811", "title": "Easiest language for simple Windows applications for novice Windows programmer?", "text": "Expertise: 11 years of PHP programming. I'd like to get into simple Windows programming for a kiosk project. What language should I choose? **My criteria:** * easy to learn, \"higher\"-level language (e.g.
not C++, I don't have a year to learn it) * quick to get up and running (that's what I loved about PHP) * well-documented & lots of community resources * easy GUI creation * the client wants Windows machines, so Linux is not an option **What I've gathered from other Stack Exchange answers and Google, but have no experience using:** 1. Qt + Python (I wish PHP-Qt were more mature -- it doesn't seem to have much of a community) 2. C# (seems like overkill for a simple kiosk) 3. Firefox + a kiosk extension + AMP for the GUI, and macro software to manipulate windows, files, and lower-level stuff."} {"_id": "222208", "title": "SQL Query syntax - formatting?", "text": "I'm looking at a very large (5-6 digit LOC), very complex code base, full of bulky, interacting, interdependent views and stored procedures (multiple 4-5-digit-count silos). The source SQL has been touched many times, by many different people, and no formatting constraints have been enforced. This has led to wildly differing formatting for the SQL, even within single stored procedure/view definitions. The code continues to be worked on by several groups of mostly senior programmers, and sees a steady stream of small changes and improvements. * * * Under these conditions: a. What is a sane standard to conform to for formatting? (E.g., should we use all caps for keywords? It seems like the small benefits of having so many programmers not have to press Shift actually add up to significant time/cost savings.) b. Is it worthwhile advocating a major refactor to standardize the formatting? (I.e., is the cost of training a large body of programmers, changing their behaviour, and the ill will at enforcing a new constraint worth the long-term benefit?)"} {"_id": "34085", "title": "Source code for TCP/IP stack", "text": "I am looking for open source code for the TCP/IP stack. An explanation along the layers of the stack would also help me in understanding the different modules and their interactions. I tried searching, but there is very little information relating to just the TCP/IP stack."} {"_id": "225254", "title": "Naming objects and properties clearly without exposing implementation details", "text": "I'm re-architecting an iOS mobile app that consumes an API with somewhat haphazard and oftentimes slow performance. The reason for the slowness is that the API is actually a layer on top of a variety of other platform-level services, some of which run slowly. You then get into the weakest-link type of situation, where the time it takes for the API to respond is bound by the slowest lower-level service response. When consuming the API I'm dealing with an object, let's call it `Foo`, that I display in a table, and also on a detail screen as necessary. A trade-off made by the API designer/implementer was to provide a call to get a list of `Foo` with some specific properties omitted (getting these properties for more than 1 or 2 objects at a time is very slow) and a \"detailed view\" call to provide a single object with all the properties filled in. The initial model schema had two model objects, `Foo` and `FooDetails`. Feeling that this exposes unnecessary implementation details, I've merged both `Foo` and `FooDetails` into `Foo`, but a way to easily represent whether `Foo` is complete or not is still needed. (Meaning: does `Foo` have the extra properties present or not?)
The best I could come up with was an enum:

typedef enum : NSUInteger {
    ELYFooAbridgedState,
    ELYFooCompletedState
} ELYFooCompletenessState;

I did not want to go with a `BOOL` due to the possibility of multiple \"in between\" states in the future as some lower-level services are tuned. Can anyone provide an alternative naming or representation for this situation? Are there drawbacks w.r.t. future-proofing when taking this approach? This may be the only time it's possible to do a ground-up refactor of the app's model schema entirely, and I'd like it to be as resilient as possible."} {"_id": "79282", "title": "When you use inheritance to reuse code, do you find it so tricky that it swallows the benefits of reuse?", "text": "I've been coding for about 8 years; however, I still find inheritance too flexible, and sometimes it leaves you totally confused by the code you have written. The simplest example would be:

abstract class AClass {
    protected void method1() {
        if (check()) {
            do1();
        } else {
            do2();
        }
    }
    protected abstract void do1();
    protected abstract void do2();
}

The intention of the class is that people can implement do1() and do2() so that some further logic can be done; however, sometimes people decide to override method1(), and then things become complicated immediately. I find that only in the strategy pattern is code reused well through inheritance; in most cases the designer of the base class knows its subclasses very well, and inheritance is totally optional. I have a class that's inherited by 4 classes - an IoHandler, and its subclasses for the server side, client side, edge server, and origin server - and it begins to drive me crazy. I was always refactoring the code; I always came up with ideas I thought would work that were then proven not to. It's said that the human brain can only hold 7 pieces of information at one time; am I carrying too many?"} {"_id": "106412", "title": "Writing documentation for well understood methods like equals in Java", "text": "Is it good practice to write comments for widely known methods like equals, compareTo, etc.? Consider the code below:

/**
 * This method compares the equality of the current object with the object of same type
 */
@Override
public boolean equals(Object obj) {
    //code for equals
}

My company mandates entering comments like the above. Is the above Javadoc comment required? Is it not obvious and well understood what the equals method and the like (compare, compareTo, etc.) do? What are your suggestions?"} {"_id": "186709", "title": "How to make ASP .NET MVC website have a continuous process running?", "text": "This website is supposed to be a game where the players have some 'buildings', and these buildings produce resources. E.g. an iron mine may produce 30 pieces of iron ore per minute and automatically add them to the user's inventory. It doesn't matter whether the user is online or not; it should be running 24/7. So when the user does log in to their account, they will see that their stack of iron ore has built up depending on how long it has been. Some pointers in the right direction would be greatly appreciated :)"} {"_id": "106417", "title": "What factors should I be looking at to increase performance in image resizing?", "text": "I'm setting up a web app in which people will upload images. Once uploaded, the images will be watermarked, then resized multiple times (thumbnails, different sizes, etc.), and finally uploaded to Amazon S3 for storage.
The web app is written in Python with the Tornado framework. I don't really want to lock up the Tornado threads with the image processing, so I'm going to send it out to a separate script (and possibly even separate servers) using Gearman (I've developed an async Gearman client for Python/Tornado). One of the advantages of Gearman is that it's possible to launch jobs in multiple languages, so the actual processing and uploading of the image could be done in either Python, Ruby, PHP, Perl, Java, C, or something else. This leaves the question: is one better for the job? Are there certain libraries, only available for specific languages, that are especially good at image resizing? The most important thing will be performance: we'd like to be able to run as many jobs on a server as possible. Are there other factors I should be looking at? I'd prefer to stick with Python, Ruby, or PHP, because that's what I'm familiar with, but if the performance gain from doing it in Java/C is good enough, I'd be okay with implementing it like that. I'm **not** looking for code examples, I can find those myself, but I'd like to know if there are any big differences between the image processing libraries. I know PHP is probably the easiest to use, with GD, and I've used a couple of the Python libraries before and they seem okay. I've never done anything with images in Ruby."} {"_id": "216466", "title": "Dependency inversion always includes dependency injection?", "text": "This is a question on a homework assignment. I've been up and down my notes and I can't even see how the two are related. Googling this gives me questions/answers on one or the other, but never both."} {"_id": "216460", "title": "MVVM and service pattern", "text": "I'm building a WPF application using the MVVM pattern. Right now, my viewmodels call the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass the required service to the viewmodel. It's easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewmodels for complex models, I have a constructor with a LOT of services injected into it (one to retrieve each dependency, and a list of all available values to bind to an ItemsSource, for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit test easily. I'm thinking of a few solutions: 1. Creating a services singleton (IServices) containing all the available services as interfaces. Example: Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of service parameters. 2. Creating a facade for the services used by the viewmodel and passing this object into the ctor of my viewmodel. But then I'll have to create a facade for each of my complex viewmodels, and it might be a bit much... What do you think is the \"right\" way to implement this kind of architecture?"} {"_id": "216462", "title": "How to create single integer index value based on two integers where first is unlimited?", "text": "I have table data containing an integer value X ranging from 1 ... unknown, and an integer value Y ranging from 1..9. The data need to be presented in order 'X then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a _one-dimensional integer_ value as the index (sort order).
If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y? [Added] **Background information**: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:

ID  Baseline  ParentID
1   0         0    (task)
5   2         1    (baseline)
8   1         1    (baseline)
9   0         0    (task)
12  0         0    (task)
16  1         12   (baseline)

Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies the tasks. IDs are unlimited (the X variable). The user plays with the visibility of baselines; e.g. he wants to see all tasks and all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through the records with a counter). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the _ID_ and _Baseline_ data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component."} {"_id": "156290", "title": "Entry level engineer question regarding memory management", "text": "It has been a few months since I started my position as an entry-level software developer. Now that I am past some learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics, so as to write better software. A simple question I presented to a fellow coworker was met with the response \"I'm focusing on the wrong things.\" While I respect this coworker, I do disagree that this is a \"wrong thing\" to focus upon. Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an integer.

Dim alertID As Integer = GenerateAlert()
_errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID))

vs...

_errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert()))

I originally wrote the latter, and rewrote it with the \"Dim alertID\" so that someone else might find it easier to read. But here is my concern and question: should one write this with the Dim alertID, it would in fact take up more memory (finite, but more), and should this method be called many times, could it lead to an issue? How will .NET handle this object alertID? Outside of .NET, should one manually dispose of the object after use (near the end of the sub)? I want to ensure I become a knowledgeable programmer that does not just rely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?"} {"_id": "136006", "title": "Would it be hard to screen form submissions (e.g., comments) for non-words/non-sentences?", "text": "I've been thinking a lot lately about the need for better form security and good ways to accomplish it. We currently use captcha codes to screen for bots, but they're annoying to users and may not work forever. I think we need a more intuitive, organic system for screening bad comments/contact form submissions.
One option that has come to mind would be trying to screen comments for things that are obviously not words, in addition to screening for duplicate comments. E.g., when a spammer on Facebook, Twitter, or a comments section is stopped from just posting the same thing a lot of times, they add garbled letters and/or numbers somewhere in the post to make it \"unique\". If it were possible to screen out obvious not-text, this could be overcome. If you could go a step further, and screen out posts that obviously have words placed in for no reason except to make the post \"unique\", you could force the spammer/scam artist down to only using repetitive post options which actually make grammatical sense. At the very least, you could have posts be flagged for moderator attention if they looked similar but just had random garbage added in for no reason. This would significantly reduce a spammer's ability to keep spamming, even across multiple accounts. Would it be possible to screen form field results for random word and number combinations, and words thrown in just to make a post \"unique\"?"} {"_id": "185248", "title": "Visitor stability vs instanceof flexibility", "text": "I am working on a GUI application which generates a configuration file. I have a class hierarchy for the configuration model and I use an object tree of that hierarchy in several different contexts. Currently, I use the Visitor pattern to avoid polluting my model classes with context-specific code. interface IConfigurationElement { void acceptVisitor(IConfigurationElementVisitor visitor); } In an earlier version I used chains of `instanceof` conditions instead of the Visitor. Comparing the two approaches I see the following tradeoffs. _Visitor_ * It is easier and safer to add a new `IConfigurationElement`. Just add a new declaration to `IConfigurationElementVisitor` and the compiler generates errors for all visitor implementations. With `instanceof` chains you have to remember all the places you have to extend with the new configuration element. Basically `instanceof` violates the DRY principle as it duplicates logic in several places. * The visitor pattern is more efficient than a chain of `instanceof` conditions. _instanceof_ * The great advantage of `instanceof` is its flexibility. For example `instanceof` allows me to define special solutions for different subsets of `IConfigurationElement` implementations which need to be handled similarly in some cases. In contrast, Visitor forces me to implement a method for each implementation class every time. Is there a common solution for this kind of problem? Can I adapt the Visitor somehow, so I can provide a common solution for some cases?"} {"_id": "185244", "title": "Fernando J. Corbat\u00f3's \u201cConstrained languages\u201d", "text": "For his 1990 Turing award speech, Fernando J. Corbat\u00f3 listed reasons why complex systems will inevitably fail. In his conclusion, he gives some suggestions for decreasing the probability of failure. He lists one idea as follows: > [U]se of constrained languages for design or synthesis is a powerful > methodology. By not allowing a programmer or designer to express irrelevant > ideas, the domain of possible errors becomes far more limited. What does he mean by \"constrained language?\" For a moment I considered constraint programming. However, constraint programming is about restricting the program's solution space. It is a tool that empowers a programmer. 
The feature Corbat\u00f3 is referring to seems to be something which actually restricts the programmer, or at least makes her more inclined to write terser code. My second thought is that he is referring to conservative programming languages. Corbat\u00f3 received his Turing award for work done in the 1960s and 1970s. It's my understanding that he dealt with a lot of punch cards. I have never seen a punch card, so I certainly don't know how to program one, but I might guess that punch card programming is extremely liberal. I suspect the notions of type checking, static analysis, and so forth simply didn't exist. So, is Corbat\u00f3 perhaps referring to the idea of languages that restrict the developer from making dumb mistakes? This doesn't seem to be the case, either. Safety checking and data modeling have nothing to do with terseness, which is what he seems to be talking about when he mentions \"not allowing ... irrelevant ideas.\""} {"_id": "66147", "title": "Descriptive Locking and Concurrency", "text": "I'm thinking about how one would go about designing a descriptive concurrency model for an OOP language that helps simplify concurrency scenarios for programmers (no easy task, concurrency is hard). I've read a bunch about software transactional memory, the actor model, immutability, etc., but they don't exactly seem to offer an easy model either, and while functional programming is great for some concurrency and parallel scenarios, sometimes you simply need to mutate some kind of state. While locking is considered bad by some, at least it's fairly easy to reason about on a basic level (until you get into deadlock territory, that is). Anyways, one of the ways I thought programmers could handle multi-threaded scenarios somewhat more easily was if each class were responsible for its own thread-safety on a per method/property basis. I.e. other classes can only access properties and methods; they can't really lock on anything specifically, nor should they have to. One way to make it easier would be to annotate methods and properties for concurrency with their intent rather than locking specifics. The compiler will then try to deduce the best-performing lock from that. So assuming atomicity, what do you think good annotations would be? And what are potential problems with this approach? I'm aware that it can't solve all scenarios (like transactions), but it should help the compiler enforce the programmer's intent, and might even be able to warn on potential deadlock/racing scenarios. Here are some of the attributes I thought of so far (also note that none of them are mandatory, but strictly compiler \"hints\"). So for instance, if get and set were set to shared access you might end up with a reader-writer lock. (Do I need an attribute to control visibility?) If duration for a write was set to nano, perhaps it would use a spinlock. If there are many reads and few writes, perhaps the compiler would use a different lock than if there were many writes and few reads, etc. Methods can be coupled into read/write pairs similarly to properties (in case you need more than one parameter). For numeric properties that typically update based on some condition of their value, I was also thinking that instead of passing just a new value, you could pass an operator and an amount. That way you could lock, calculate the new value, see if it meets some internal condition, and in that case update. 
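A mock-up of the flavour I am after, written as Python decorators because that is the closest thing to annotations I can sketch quickly (the attribute names and the lock choice are invented; a real compiler/runtime would pick smarter locks from the hints):

import threading

def concurrency(access='exclusive', duration='short'):
    # Stand-in for the compiler hint: here every hint degrades to one
    # re-entrant lock per object, but a smarter runtime could pick a
    # spinlock or a reader-writer lock from the declared intent.
    def wrap(fn):
        def guarded(self, *args, **kwargs):
            # Lazily created per instance; a real implementation would
            # set this up safely at construction time.
            lock = self.__dict__.setdefault('_lock', threading.RLock())
            with lock:
                return fn(self, *args, **kwargs)
        return guarded
    return wrap

class Counter:
    def __init__(self):
        self.value = 0

    @concurrency(access='shared', duration='nano')
    def read(self):
        return self.value

    @concurrency(access='exclusive', duration='nano')
    def apply(self, op, amount):
        # Operator + amount instead of a raw new value: the internal
        # condition is checked while the lock is held.
        new = op(self.value, amount)
        if new >= 0:
            self.value = new

So a caller would write something like counter.apply(operator.add, 5) (with the operator module imported) and never touch a lock directly.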
Anyways, I'm struggling a bit coming up with good attributes/notations that are somewhat easy for a programmer to wrap their head around and reason about; any help and feedback is deeply appreciated. Sorry for the long rant, as I said this isn't an easy topic :)"} {"_id": "185242", "title": "Formal term to web-ify a piece of existing software or program or a module?", "text": "There is software we run as a windows service. It's currently not designed to take a huge load. So we kind of need this service to be hosted over http so that multiple clients can make use of it. It's not the whole windows service as such, but some modules it uses. Those modules are currently not designed to work as a \"web service\" (generic sense). What would be the formal word for this?"} {"_id": "216635", "title": "category theory based language", "text": "It may sound naive, but is there any programming language, or research thereof, based entirely on category theory? I mean this as opposed to embedding CT concepts as an additional feature (like for Haskell or Scala). Would it be too abstract or too complex as an approach, or are there any known reasons that make it impossible or impractical? I have only a relative understanding of the theory as related to programming, so please give me some explanation if the question doesn't make sense at all. EDIT What I mean is to define a language where categories are first-class concepts, instead of, let's say, types. E.g. Haskell is defined in terms of _types_ and _functions_ over those. Does it make sense to design a language where the fundamental entities are _categories_ and _arrows_? From there you would, for example, define algebraic operations on numbers as individual \"instances\" of the proper algebraic category (ring, monoid, group...) instead of starting from standard integer/float operations and defining the corresponding categorical typeclass. Does it make sense?"} {"_id": "181955", "title": "Want some architecture input to help with current requirements and future (unknown) requirements :-)", "text": "**BACKGROUND:** I am starting to architect a web project using asp.net mvc. I'm going to use a very common architecture where I have the following layers: 1. Service 2. Biz 3. Data 4. Domain The Service layer interacts with the Biz, Data and Domain layers. The mvc controllers will have the service layer injected into them using a DI framework. The controllers will need to be aware of the Service layer interfaces and will reference the Domain objects (POCOs). The front end will be HTML5 + Javascript. I have been told to keep in mind we need to expose _portions_ of this website to mobile devices. The _portions_ to expose via mobile devices are to be defined :-). When rendering the site I can simply have a mobile layout that renders to a mobile device. But one thing management told me is mobile users might want additional functionality the website doesn't expose. Maybe they want to access a feature that only a native mobile application can provide. Hey, I don't make up these requirements, this is what I was told :-). **THOUGHTS:** My service layer is the gateway to storing objects in my database. For any potential native mobile devices I was thinking about using the Web API to wrap my service layer. What I want to do NOW is focus on my asp.net mvc application and only worry about native mobile apps when those requirements become more defined. **QUESTION:** Would it provide any benefits to code the Web API layer now and have my mvc controllers use it? 
* * * Pros for creating the Web API now 1. It will be coded and ready if/when a native mobile app comes online. Cons for creating the Web API now 1. It is overhead that is not needed, especially if I host the WebAPI in a separate process or another site besides my original asp.net mvc site. 2. The technology is fairly new and enhancements will be forthcoming. If I wait until later, some of the issues will be fixed. 3. Who knows, my users may never want a native mobile app and the feature is never utilized. 4. The code becomes more complex because of having to maintain the web api layer now. I'm looking at not worrying about the Web API layer for now and simply coding it when necessary. Typically I don't worry about writing for functionality that is yet to be defined. But I thought it might not hurt to ask, you never know what some brilliant maverick programmer is doing out in the wild :-)."} {"_id": "181951", "title": "Does the use of personas in Agile have any value during implementation?", "text": "This is about the use of personas, primarily in the agile development realm. What value, if any, do personas give during implementation? On agile modeling, the discussion about personas remains in the context of requirements investigation. On wikipedia, the benefits are said to be to \"assist with brainstorming, use case specification, and features definition.\" I'm familiar with personas in use while writing user stories such as the following: > As Willow, I want ordering a combo meal to give me the option to select > alternate sides. > As Xander, I want the default side to be selected when I order a combo > meal. In these examples, Willow is the nutrition-conscious user of meal-ordering software and Xander is the hungry, impatient user that thinks \"fast food should be _fast_.\" When coming up with these requirements, it may have been helpful to have the personas in mind. I imagine a possible discussion: > Person 1: Willow needs to have more options than fries for the combo meal, > otherwise she would never order a combo. > Person 2: But Xander doesn't want to have to sift through different options > when ordering a combo! He just wants fries and he'll probably always want > fries. This discussion may have created the two requirements above, but once the requirements have been created, why do we mark each requirement with a persona? I wouldn't code the 'default side' requirement any differently knowing that it is for Xander. What value does retaining the \"As Willow\" and \"As Xander\" openings of the requirements add once they have been written? What value does the persona give to the one who implements the requirement?"} {"_id": "25963", "title": "Is there any hard data on the (dis-)advantages of working from home?", "text": "Is there any hard data (studies, comparisons, not-just-gut-feel analysis) on the advantages and disadvantages of working from home? My devs asked about e.g. working from home one day per week; the boss doesn't like it for various reasons, some of which I agree with, but I think they don't necessarily apply in this case. We have real offices (2..3 people each), distractions are still common. IMO it would be beneficial for focus, and with 1 day / week, there wouldn't be much loss of interaction and communication. In addition it would be a great perk, and it saves the commute. 
* * * Related: Pros and Cons of working Remotely/from Home (interesting points, but no hard facts) * * * To clarify: it's not my decision to make, I agree that there are pros and cons depending on circumstances, and we _are_ pushing for \"just try it\". I've asked this specific question because (a) facts are a good addition to thoughts in arguing with an engineer boss, and (b) we, as developers, should build upon facts like every respectable trade."} {"_id": "196329", "title": "Extension objects pattern", "text": "In this MSDN Magazine article Peter Vogel describes the Extension Objects pattern. What is not clear is _whether extensions can later be implemented by client code residing in a separate assembly_. And if so, _how in this case can an extension get access to private members of the object being extended_? I quite often need to set different access levels for different classes. Sometimes I really need descendants to not have access to the member while a separate class does (good old friend classes). Now I solve this in C# by exposing callback properties in the interface of the external class and setting them with private methods. This also allows adjusting access: read-only or read|write depending on the desired interface.

class Parent {
    private int foo;
    public void AcceptExternal(IFoo external) {
        external.GetFooCallback = () => this.foo;
    }
}

interface IFoo {
    Func<int> GetFooCallback { get; set; }
}

Another way is to explicitly implement a particular interface. But I suspect more approaches exist. **Update 1** Peter's new article makes it a bit clearer."} {"_id": "196323", "title": "Approach of delivering \u201cLogging API\u201d", "text": "I faced a question in a .NET interview. As a client I need a Logging API. How would you approach the design, development, and delivery of a Logging API to the client? I don't care about WPF or any specific language. I want a Logging API which has to log some data; what is your approach to delivering it? I failed to answer this question. Please give guidance on how to approach this type of question."} {"_id": "196322", "title": "How can we track how well we're preventing and avoiding security vulnerabilities?", "text": "It's pretty easy to track when we fix security vulnerabilities in existing code. But to make sure the whole team is staying on their toes about writing secure code, I'd like to also track how well we are preventing and avoiding writing new security vulnerabilities. What is the right way to measure that?"} {"_id": "29635", "title": "Are there any arguments that can make a contractor reconsider working on fixed price?", "text": "I've been working for a contractor who brings in some good projects, but they are all fixed-price and often fixed-time. As a result he always has me making a quote over loose requirements, which never fails to bring a lot of tension due to feature creep. He claims he'd never get a contract if he couldn't agree on a price with his clients first, but as far as I'm concerned I don't wanna go through another project under these terms. Is there any argument I could make to have him pay me by the hour, or should I just suck less at estimating?"} {"_id": "156740", "title": "Can applications affect power consumption in a substantial way? ", "text": "Is there anything that can be done for a single general purpose application to affect the power consumption of the device it is running on? 
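As a deliberately crude sketch of what I mean (two ways to wait for the same work, with very different idle cost; all names invented):

import queue

q = queue.Queue()

def handle(item):
    print(item)

def busy_poll():
    # Spins the CPU while waiting, which keeps the processor out of its
    # low-power states even when there is no work at all.
    while True:
        try:
            handle(q.get_nowait())
        except queue.Empty:
            pass  # loop again immediately

def event_driven():
    # Blocks in the kernel until work arrives, so the CPU can sleep.
    while True:
        handle(q.get())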
I am not familiar with how optimizations to individual applications may affect power consumption in a general way. Can someone explain whether different approaches to writing applications affect the power consumption of the device they are running on? That is, can a single program that does the exact same thing functionally, but is written in different ways, drastically affect the power consumption of a device? I am asking about this in general, not about how different, unrelated programs might affect the power consumption of a device."} {"_id": "156745", "title": "RDBMS same type of optional data for multiple tables", "text": "I have an embedded database (SQLite); it stores information about events and page views, and its purpose is to track the user journey inside my application. I need to provide support for optional custom properties on my events and page views. In code this is not a problem; however, I am torn between two solutions for storing that information. So I have Events and PageViews tables. My first option for persisting the optional custom properties is to have an EventsCustomProperties table which has only a name and value column and a reference (foreign key) to the event. In this solution I would also have a PageViewCustomProperties table with the same structure. The problem I have with this is having two tables which are identical other than their relationships with other tables. In my other solution I would have one CustomProperties table, and in addition to the name, value and relationship key, it would also have a type column, which I should use manually in code to map the relationship id to an event or a page view. In this solution the actual use of foreign keys is redundant. My dilemma is that in the first solution I am respecting the normalisation rule against repeating groups of data, but in the second solution I am breaking another rule, although I cannot get my head to remember the name of the rule. So it seems I will break rules whatever I do. Can anyone suggest a better way, or a preference for the first or second solution?"} {"_id": "204451", "title": "Should I split a Python class with many methods into multiple classes?", "text": "I have a class that will end up having more than ~30 methods. They all make sense to be part of the same class because they require access to the same data. However, does it make any sense to split up a large class into several smaller abstract classes (by functionality, type, use, etc.) and have the main class extend (i.e. multiple inheritance) from all the smaller classes?"} {"_id": "3811", "title": "How crucial are computer science topics toward getting a job?", "text": "I've been considering moving to a new city (getting seduced by those big city lights), and I'm trying to decide if my lack of \"computer science\"-y skills is going to make me overly uncompetitive. Let me explain my situation a bit more: * Programming since jr. high school, professionally since 2004. I've worked for a small company that does custom web applications (PHP) since 2006. * I've studied design patterns on my own, and used them in my work. * Worked with MVC frameworks (PHP) for a couple years now. I have a strong understanding of how to write good, maintainable MVC code that adheres to the principles of MVC (rather than just cramming code into wherever I can in the framework.) * Recently done some work with C#, through which I'm learning dependency injection and the MVVM pattern. Grokking these, but still a ways to go. 
* Full complement of web development skills (normalized databases, SQL, HTML, CSS, JavaScript), and I'm very confident of my skills in these. Also aware of security issues, and how to write a secure application. My main deficiency is scalability, which I've never had a need to learn, unfortunately. Where I get nervous is with the things you'd learn in a CS degree. My degree is in aerospace engineering, not CS, but I've decided that programming is the thing I really care about. What I know: * Basic data structures: I took a class in which I implemented basic linked lists, and queues and stacks built on those. I've written a basic binary tree (inserts, various traversals, but not removal (I was really drunk when I wrote the code, and it turned out not to work at all)). I know about hash tables, and understand some of the principles of their implementation. * I understand big-O notation, but since I'm not really familiar with algorithms, I suspect I might miss interview questions about this topic. (What's the worst-case insertion time into a hash table? I have no idea.) * I've done a little bit of functional programming by way of JavaScript, Python, and dabbling in Haskell. (I realize the first two aren't functional programming languages, but they have functional aspects to them.) I understand currying and higher-order functions. What I don't know: * Don't really know formal algorithms at all. I couldn't sort my way out of a paper bag (well, _maybe_ bubble sort, which I know is O(n^2)). I guess I know some of their names. * Never written a parser, or compiler, or any component of an OS. I've never done anything interesting with concurrency (e.g., anything beyond using basic asynchronous calls in .NET to keep my UI from blocking.) I _want_ to learn all these things, purely out of interest, but for now I simply don't have time, with main job + side job + life outside programming ( _gasp!_ I know). I don't want to put my larger life plans on hold unnecessarily if I don't have to. I'm not aiming for a Google or a Microsoft, but I'd like to at least get a job that's interesting. How much will I be held back by the deficiencies I've listed? I feel relatively confident that in a job I would actually apply for and get, I would be able to perform very well, but what about interviews? I'd like to know: * How crucial are computer science topics toward getting a job?"} {"_id": "144166", "title": "How do I handle a Controller that's not controlling a specific Model?", "text": "I've got a nice MVC set up going, but my website requires some views that don't map directly to a model. Specifically, I've got some generic Reports users need to run, and now I'm creating a utility for comparing some system configurations. Right now the logic is crammed into a Reports Controller and I'm starting a Comparison Controller, but this feels like a big abuse of the system. Both controllers use an assortment of different Models to pull data from, and they're only related based on _what the user is doing_. Reports are run from the Reports Controller and their views are all grouped together in the file system/URL structure. Is this an acceptable use of the Controller paradigm? 
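For concreteness, the current shape is roughly this (a Python-ish sketch; the model, request and render names are all made up):

def render(template, **context):
    # Stand-in for the framework's template call.
    return (template, context)

# Stubs standing in for the real models.
class Order:
    @staticmethod
    def for_period(period): return []

class Customer:
    @staticmethod
    def active(): return []

class StockItem:
    @staticmethod
    def older_than(days): return []

class ReportsController:
    # Grouped by what the user is doing, not by any single model; each
    # action pulls from whatever models it needs.
    def sales_summary(self, request):
        orders = Order.for_period(request.period)
        customers = Customer.active()
        return render('reports/sales_summary', orders=orders, customers=customers)

    def inventory_aging(self, request):
        items = StockItem.older_than(request.days)
        return render('reports/inventory_aging', items=items)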
I can't think of a better way to structure my Controllers, and making a Controller for each model I'm using to make reports/etc doesn't seem like a good idea; I'd end up with one Controller/Model/View per report or comparison, vastly complicating the apparent structure of my site."} {"_id": "238674", "title": "What's the right OO way to create a counter/inventory class that works for both differentiated and undifferentiated countables?", "text": "You are writing a videogame about trading beans. Red beans, black beans, pinto beans, you name it. As everybody knows, all beans are the same. You write the \"Inventory\" class for a trader in that videogame as follows (skipping all the null checks):

class BeansInventory {
    HashMap<BeanType, Integer> amountsOwned;
    public void receive(BeanType typeReceived, int amount)
        {amountsOwned.put(typeReceived, amountsOwned.get(typeReceived) + amount);}
    public void remove(BeanType typeRemoved, int amount)
        {amountsOwned.put(typeRemoved, amountsOwned.get(typeRemoved) - amount);}
    public Integer amountOwned(BeanType type)
        {return amountsOwned.get(type);}
}

It works fine for years. Then suddenly somebody else has the great idea to add to the game trading coffee beans. As everybody knows, each single coffee bean is completely different from the others. So you can't just add coffee as another bean type. Each coffee bean is its own instance. The coffee inventory class looks something like this:

class CoffeeInventory {
    HashMap<CoffeType, Set<CoffeeBeans>> coffeOwned;
    public void receive(CoffeType typeReceived, CoffeeBeans... beans)
        {coffeOwned.get(typeReceived).addAll(Arrays.asList(beans));}
    public void remove(CoffeType typeRemoved, CoffeeBeans... beans)
        {coffeOwned.get(typeRemoved).removeAll(Arrays.asList(beans));}
    public Integer amountOwned(CoffeType type)
        {return coffeOwned.get(type).size();}
}

But now you have tons of problems. You have two APIs for the same task. All the trading infrastructure now has to check carefully if it is trading pinto beans or coffee beans and call a different inventory and a different method with a different signature for what is, after all, the same task: \"store this\". So now I have this code that is getting filled with `if` checks and `instanceof` and all the other signs of code smell. But I can't figure out a way to use a single simple API. I have no idea what's the right OO thing to do."} {"_id": "144165", "title": "Is dynamic HTML layout good from an SEO perspective?", "text": "Just wondering whether a dynamically built HTML layout is fine from an SEO perspective? So let's assume an e-commerce engine and its most popular page - the products catalog. So 90% of the page is built using AJAX and the MVVM library knockoutjs, which builds HTML on the fly on the client side. So how would search bots parse such content? Will it be indexed fine, and would it be as effective as server-side-built HTML pages from an SEO perspective?"} {"_id": "144162", "title": "how to do database updates in each release", "text": "Our application uses a database (mostly Oracle), and the database is at the core. Each customer has its own database, with its own copy of the application. Now with each new release of our product, we also need to update the database schema. These changes are adding new tables, removing columns, manipulating data etc. How do people handle this? Are there any standard processes for this? EDIT:- The main issue is that the databases are huge, with many tables and large amounts of data. We provide the scripts and some utilities to manipulate the data. How do we handle failures and false negatives? I am mostly looking for articles of this kind: 
http://thedailywtf.com/Articles/Database-Changes-Done-Right.aspx"} {"_id": "76999", "title": "Is there any value in Open sourcing your for-fun projects", "text": "I have written a bunch of fun-for-me projects and have shown them to friends and such. Is there any value in doing the work and making these projects open source, given that their interest and usefulness are limited?"} {"_id": "109086", "title": "Copying a competitor's database schema?", "text": "I am going to be releasing some software soon which will require users to run a local database. There is a competitor in the space that is doing the same thing and they have a pretty sophisticated database schema that they use. Their software is commercial but the database schema is completely viewable once installed. What's the legality of basing my product on the same database schema and just potentially renaming columns and tables and maybe removing one or two things?"} {"_id": "194845", "title": "Is it reasonable to use Javascript MVC Frameworks for closed source paid web applications?", "text": "I am wondering if it is reasonable to write closed source, paid web apps in Javascript (JS MVC Frameworks like AngularJS, Backbone, Knockout, ...)? I'm concerned because in this type of framework you typically use a REST backend for CRUD operations and the majority of business and application logic happens in Javascript, which can be looked up by anyone using my app. He can see how I do things. When I use for example PHP or Java (Wicket), most of the logic happens on the server and so a lot less of my source code is exposed. This seems to me a lot safer if I want to have an edge over my competitors, so potentially I earn more money. So is it reasonable to use JavaScript MVC Frameworks for paid applications? Does it depend on something, and if so, on what?"} {"_id": "177278", "title": "are programmers more forgiving of buggy software?", "text": "From your experience are you, or programmers in general, more or less likely to forgive bugs in the software you use? e.g. if you run across a bug in some app (commercial or open source), is your reaction annoyance at incompetence, empathy, or something else (pity)?"} {"_id": "177271", "title": "Is it bad to be the only person supporting software you have developed?", "text": "My employer has a need for a web-based application to manage and share data within the department, with approximately 50-75 possible users. I feel I have the ability to write it for them. I would likely use Python/Django with a MySQL database, so it would be open source. However, I'm the only IT person in my department (our larger organization has a separate IT support staff with which I often work, but not for web development). I want to develop this application, but if I leave in 1-2 years, and someone else has to come in after me and support it, will this be seen as a bad decision? This is assuming all the obvious points -- I will write documentation, I will comment my code, and I will strive to follow good application design principles. But will that be enough? In principle, is it acceptable for one person to develop and support an entire web application? Is this a \"do first, then show and ask\" kind of situation, or should I be certain it will be adopted by everyone involved first? With regards to specifics, I work in an academic department of a university that has specific processes for student applicants and for their being admitted. You have to apply to the university, AND my department separately. 
The department-specific process is very manual and pieced together, which is where my development would come in. We (myself and IT) are already planning on incorporating the additional questions that pertain to my department into the main university application (which IS electronic, and feeds into PeopleSoft), so that solves the front-end piece and everything would be in one place. But for the faculty and staff to acquire and \"digest\" that information throughout the admissions process will require an application of some sort, which I would prefer to do \"my way\". The IT staff want to develop it ALL in PeopleSoft, and I fear it will be too inflexible and will not be well received by our department, and may still not completely meet our needs. There are third-party solutions which meet this need perfectly but they are cost-prohibitive. I would query the data from PeopleSoft and present it how I know the faculty and staff need to see it. (Getting access to query PeopleSoft is a different battle altogether.)"} {"_id": "177273", "title": "DB API for shell scripting (any shell)", "text": "I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using `SQL+`. For the most part, the data tier does not have to communicate back to the script with retrieved data to be passed for shell-level processing, but in a few cases it does. The problem is, SQL+ is really meant to be an end user app and not an API that can communicate with other clients programmatically. That is why people have invented APIs such as `DBD::DBI` for Perl, `JDBC` for Java, ODBC etc. The way it is done is they invoke SQL+ and then parse the output, which is clearly designed for human eye consumption, using tools like `sed` and `awk`. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python where there are data access APIs. So I am wondering whether there are similar APIs for shell, e.g. ksh or bash. What I would like is if an API would return data in a 2-dimensional array of strings (for the lack of type setting) so that I can just read DB data like that. The way they do it now is akin to parsing regular web page HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks"} {"_id": "187071", "title": "Is this a test smell or is it even worse?", "text": "I have recently been looking at some test scripts which look a bit like ...

try {
    receiveSomething(); // something was received even though it shouldn't be
    failTest();
} catch (TimeoutException e) {
    // nothing should be received
    succeedTest();
}

The problem I have with these types of tests is 1. They are not deterministic. You don't know if nothing was sent on purpose or if everything has crashed. 2. It's very hard to simultaneously test something else, which might actually, in this case, send something. My thoughts are, on the one hand, how can these types of tests be designed better, and on the other, can this be an indication of a bigger design smell in the software that is being tested? EDIT To clarify, these test scripts are used for black-box testing of event-based, complex software, not unit testing, and my feeling is that the 'doing nothing' event is a very ambiguous one. 
:)"} {"_id": "224126", "title": "Which layer should order the columns shown to the user when using MVC?", "text": "Say you want to render a table with five columns, but you want the order of the columns to be different depending on some specific parameter. This would be very easy to accomplish if the model sets the order. The view can then simply use a loop and create the table accordingly. However, unless I have misunderstood things, we want to let the view handle how things are rendered (although I guess there may a gray area involved here in terms of what the view \"should decide how to render\")? It also feels ugly to let the model set formatting / order, but maybe this is another thing I might have misunderstood? If the view is supposed to deal with the order of the columns, what is a good way to accomplish it (read: having to use a lot of if-statements and other ugly code in the view)?"} {"_id": "228460", "title": "Single product owner for multiple Scrum teams", "text": "Is that a common practice to have a single product owner for more than one Scrum team, or usually you have a product owner for each team (even if multiple teams work on the same product). Thanks"} {"_id": "224123", "title": "Use a setStatus($arg) function or have separate enable() and disable() functions?", "text": "I've got two functions at the moment: suspendGroupsAndUsers($groupId){} enableGroupsAndUsers($groupId) {} But the difference between the two is one variable/string. Should I just have: setStatusGroupsAndUsers($status,$groupID) {} This feels more DRY... Or have the above two functions actually call the setStatus function? Gut says use setStatus ..."} {"_id": "162786", "title": "Design documents as part of Agile", "text": "At my workplace, we face a challenge in that \"agile\" too often has meant \"vague requirements, bad acceptance criteria, good luck!\" We're trying to address that, as a general improvement effort. So, as part of that, I am proposing that we generate design documents that, above and beyond the user story level, accurately reflect the outcome of preliminary investigations of the effect of a given feature within the system and including answers to questions that we have asked the business. Is there an effective standard for this? We currently face a situation where a new feature may impact multiple areas in our \"big ball of mud\" system, and estimates are starting to climb due to this technical debt. Hopefully more thoughtful design processes can help."} {"_id": "55264", "title": "Haskell vs Erlang for web services", "text": "I am looking to start an experimental project using a functional language and am trying to decide beween Erlang and Haskell, and both have some points that I really like. I like Haskell's strong type system and purity. I have a feeling it will make it easier to write really reliable code. And I think that the power of Haskell will make some of what I want to do much easier. On the minus side I get the feeling that some of the Frameworks for doing web stuff on Haskell such as Yesod are not as advanced as their Erlang counter parts. I rather like the Erlang approach to threads and to fault tolerance. I have a feeling that the scalability of Erlang could be a major plus. Which leads to to my question, what has people's experience been in implementing web application backends in both Haskell and Erlang. 
Are there packages for Haskell to provide some of the lightweight threads and actors that one has in Erlang?"} {"_id": "254073", "title": "Perl: Negative look behind regex question", "text": "The perlre page in Perldoc didn't go into much detail on negative lookaround, but I tried testing it and it didn't work as expected. I want to see if I can differentiate a C preprocessor macro definition (e.g. #define MAX(X) ....) from actual usage (y = MAX(x);), but it didn't work as expected.

my $macroName = 'MAX';
my $macroCall = \"y = MAX(X);\";
my $macroDef  = \"# define MAX(X)\";
my $boundary  = qr{\b$macroName\b};
my $bstr      = \" MAX(X)\";
if ($bstr =~ /$boundary/) {
    print \"boundary: $bstr matches: $boundary\n\";
} else {
    print \"Error: no match: boundary: $bstr, $boundary\n\";
}
my $negLookBehind = qr{(?

> Any sufficiently advanced technology is indistinguishable from magic. Used to be I looked on technology with wonder and amazement. I wanted to take it apart, understand how it worked, figure it all out. Technology was magical. I'm older, I know more and I spend my days creating stuff that, hopefully, fills other people with that kind of wonder. 
and What is a Sequence Diagram?", "text": "In many interviews I've been asked this question. What is a Package Diagram? and What is a Sequence Diagram? and difference between Package Diagram and Sequence Diagram. Thanks in Advance. **Edit** Sorry, google give me lot of answer, but i want to practical answer with example."} {"_id": "181043", "title": "Hash function classification interview question", "text": "In several places on the Internet there's the interview question > Classify the Hashing Functions based on the various methods by which the key > value is found. with answers like * Direct method * Subtraction method * Modulo-Division method * Digit-Extraction method * Mid-Square method * Folding method * Pseudo-random method which I find strange. I think I know a lot about hashing, but this is plain gibberish to me, could anybody explain?"} {"_id": "181040", "title": "How to document a dual open source license?", "text": "If a project is dual-licensed **GPL** & **BSD** , should there be one `LICENSE` file with the text of both licenses? Or two separate files, one for each license? And I think I should put a copyright/license comment at the top of each source file. How should that comment indicate the dual-licensed status of the project?"} {"_id": "255862", "title": "Dealing with proj files in multi-platform apps", "text": "So I've been doing some cross-platform mobile applications using cocos2d-x. Basically it uses common c++ code, that can easily be compiled to the popular mobile platforms with some small wrapper code that stays constant while working on the common c++ code. Everything works great, but I have a problem keeping all my project files up to date. For example, to compile to windows I have a .csproj file that holds all the c++ code to compile, for iOS I have a .xcodeproj that holds the files, and for android I have a make file that points to the files to compile. This means any time I add a new file to my project, or do anything to alter the file structure I have 3 files I must update in order to continue to build for all platforms, if I add any other platforms the problem continues to get worse. I am wondering, is this a problem that has been solved? I'm sure I'm not the first to have problems like this, keeping up multiple project / make files to update for the different platforms to compile. I was thinking of writing my own wrapper that can easily update these files without having to open an IDE / notepad to manually update any time I add a file. But before I do I thought I'd ask the question if there was a well accepted way to handle this problem, perhaps something that already exists that deals with this exact problem. I was curious about whether to ask this here or stackoverflow, as I'm curious both at the cocos2d-x specific case, but also the general case whenever multiple projects must be updated whenever new code is added. I chose to post it here, but am interested to the answer to both questions (specifically if cocos2d-x has something already widely used to solve the problem, and solving the problem in the general case)."} {"_id": "135572", "title": "Why is it evil to run selects from a prod server?", "text": "I'm basically looking for arguments to persuade the internal \"consultants\" at work as the following are not working: * What happens if you do a Cartesian join and crash the server? * You don't need to do selects from that server. You have other servers for that. * It's not for you it's for the clients. 
We're going to be implementing DB wide triggers to stop casual logging on during work hours but with a fail safe so that devs / DBAs can fix issues if they come up but need some more arguments to forstall the screaming that's probably going to happen. **EDIT** To clarify; there are no actual problems. The database will be locked down better no matter what. I'm not looking for technical reasons to do things or advice on how to manage a DB. I'm hoping that some other technical people on here have some advice on how to explain to non-technical people the, possibly only perceived, importance of this particular issue."} {"_id": "141261", "title": "Multi-tenancy - single database vs multiple database", "text": "We have a number of clients, whose systems share some functionality, but also have quite a degree of diversity. The number of clients is growing - always a healthy thing! - and the diversity between their businesses is also increasing. At present there is a single ASP.Net (Web Forms) Web Site (as opposed to web project), which has sub-folders for each tenant, with that tenant's non- standard pages. There is a separate model project, which deals with database access and business logic. Which is preferable - and most importantly, why - between having (a) 1 database per client, with only the features associated with that client; or (b) a single database shared by all clients, where only a subset of tables are used by any one client. The main concerns within the business are over: * maintenance of multiple assets - backups, version control and the like * promoting re-use as much as possible How would you ensure these concerns are addressed, which solution is preferable, and why? (I have been also compiling responses to similar questions) **Edit** : here's the highlights from my research from other sources: Should I use one database per application or share a single database amongst multiple applications Splitting up a single project into libraries Supporting multitenancy Multi-tenancy - single database vs multiple database http://msdn.microsoft.com/en-us/library/aa479086.aspx http://stackoverflow.com/questions/2213006/how-to-create-a-multi-tenant- database-with-shared-table-structures http://cloudcomputing.sys-con.com/node/1610582 http://blogs.msdn.com/b/cbiyikoglu/archive/2011/03/23/moving-to-multi-tenant- model-made-easy-with-sql-azure-federations.aspx http://ask.sqlservercentral.com/questions/3615/one-database-or-multiple.html http://devlicio.us/blogs/anne_epstein/archive/2009/04/24/the-case-for- multiple-dbs-in-multi-tenancy-situations.aspx http://ayende.com/blog/3497/multi-tenancy-the-physical-data-model http://www.codeproject.com/Articles/51334/Multi-Tenants-Database-Architecture http://discuss.joelonsoftware.com/default.asp?design.4.319460.16 http://stackoverflow.com/questions/3479297/multiple-application-using-one- database http://stackoverflow.com/questions/1676552/single-or-multiple-databases http://mikehadlow.blogspot.co.uk/2008/11/multi-tenancy-part-1-strategy.html http://mikehadlow.blogspot.co.uk/2008/11/multi-tenancy-part-2-components- and.html http://www.sqlservercentral.com/Forums/Topic893107-373-1.aspx#bm1047297"} {"_id": "208148", "title": "Good practice for object instantiation in MVC", "text": "In MVC the Domain Models(from Model Layer) should instantiate other Domain Models or all the Domain Models should be instantiate in the controllers and passed down using Dependency Injections? How do you implement this in real applications? 
If you choose this path isn't the controller getting to fat?"} {"_id": "141269", "title": "Thoughts on web development architecture through integrating C++ in the future to a web application", "text": "I'm looking to build a website (it's actually going to be a commercial startup) I saw this question and it _really_ shed some light on a few things that I was hoping to understand (kudos to the op). After seeing that, it would make sense that, unless the website were required to actually have millions of hits per day, it wouldn't be a viable solution to write a C++ backend on the server side. But this got me thinking. what if it in the (unlikely) events of the future, it _does_ go that route? The problem is that, while I'm thinking of starting this all using .Net (in the beginning) just to get something quick and easy up without a lot of hassle (in terms of learning), and then moving towards something more Open Source (such as Python/Django or RoR) later to save money and to support OSS, I'm wondering IFF the website actually becomes big, will it be a good idea to integrate a C++ backend, and use Python ontop of C++ for a strong foundation, and then mitigate HTML/CSS/AJAX/etc ontop of the backend's foundation? I guess, what I'd like to know is that, given the circumstance, if this _were_ to happen, would it be a proper approach in terms of architecture? I'd definitely be supporting MVC as that seems to be a great way to implement a website. All in all, would one consider this rational, or are there other alternatives? I like .Net, and I'd like to use it in the beginning, because I have _much_ more experience with that than, say, Python or PHP, and I prefer it in general, but I really do want to support OSS in the future. **I suppose the sentence I'm looking for is, \"is this pragmatic?\"**"} {"_id": "22721", "title": "Sharing programming fees with a fellow software developer", "text": "My question relates to how I should share the fees paid by clients to me and a fellow programmer, both of whom are freelancers. I've thought of a few options, but I'm in a dilemma as to which one would be the most motivating for both of us. 1) The first option I've thought is a fifty-fifty share. However, more brainwork is going to be done by my colleague, while initially at least, I will be handling the communication with customers. 2) The second option is a 60-40 share, where the colleague exerting more efforts gets a bigger share. This is the option I feel most inclined to adopt, but I'm not sure how it's going to feel in the long run. 3) The third option is calculating each one's contribution in terms of the number of hours spent, and sharing the revenue accordingly. It will be wonderful to hear everybody's thoughts on this!"} {"_id": "22723", "title": "XAML - Like/Dislike?", "text": "After bashing my head against the brick wall that is XAML, I've decided to come here and ask other people if they are as frustrated as I am. So, * Do you like XAML? Please justify. * Is XAML the problem, or the lack of good tools? My issue could be resolved if the binding system gave me a file-line number location of the binding that's failing. This isn't a XAML issue so-much-as a debugger issue. Apart from the issues with tooling, I find that XML does not make a suitable programming language, and that XAML has workarounds to fix things that wouldn't have been a problem in any other \"real\" programming language. For example, string formatting on a binding. 
XAML also breaks when you need to step outside the hierarchical structure for things like context menus. Because they are defined within the xaml hierarchy people assume that they are part of the visual hierarchy too, which is not the case. This can lead to subtle binding issues, which are difficult to debug etc. As a comparison, HTML-CSS-Javascript works well because each part handles a specific part of displaying a web page. HTML for data and layout, CSS for style, and Javascript for execution. In contrast, XAML is trying to do everything and fails. Please note, I love WPF, so don't take this as a critisism of WPF, Silverlight or any particular WPF/Silverlight control. I'm only interested in a discussion about the XAML language."} {"_id": "86888", "title": "Case convention- Why the variation between languages?", "text": "Coming from a Java background, I'm very used to camelCase. When writing C, using the underscore wasn't a big adjustment, since it was only used sparingly when writing simple Unix apps. In the meantime, I stuck with camelCase as my style, as did most of the class. However, now that I'm teaching myself C# in preparation for my upcoming Usability Design class in the fall, the PascalCase convention of the language is really tripping me up and I'm having to rely on intellisense a great deal in order to make sure the correct API method is being used. To be honest, switching to the PascalCase layout hasn't quite sunk in the muscle memory just yet, and that is frustrating from my point of view. Since C# and Java are considered to be brother languages, as both are descended from C++, why the variation in the language conventions? Was it a personal decision by the creators based on their comfort level, or was it just to play mindgames with new introductees to the language?"} {"_id": "204875", "title": "How to model optional use cases in UML", "text": "Let's say, I want to model an application which allows users to model class diagrams. The high level use case can be modelled as UC1:Model Class Diagram, which refines itself into UC11: Model Class, UC12: Model Connection, UC13: Model Composition, etc. Since UC11, 12, 13 are part of UC 1, I used the include-Association. Unfortunately, the UML specification says that included use cases are essential parts and if you would leave one of them out the high level behavior could not be achieved any more. But in this example a valid class diagram can be created without modelling a connection or a composition, so these use cases are optional. To boil it down to an essence: How can optional use cases be modelled in UML while providing a mechanism for reuse (like the include association)?"} {"_id": "86883", "title": "PHP Open Source Tools for Agile Development", "text": "Hello Everyone, There are so many Questions on Stackoverflow about Agile tools, but I haven't seen a tool which can be installed on a PHP Server. I'm just looking for an _**Open Source Scrum Tool**_ or just an _**Open Source Scrum Dashboard**_ for my Team, which can be installed on my webspace( **php** ). Thank you for Help!"} {"_id": "166373", "title": "What is occurring in the world of server-side technologies in regards to the mobile app boom?", "text": "With mobile technologies becoming increasingly popular what is happening on the server-side with most of these apps when they need to communicate with a back end? 
I'm used to the world of technology from 10 years ago, when most resources were accessed by requesting a dynamic web page that behind the scenes used a server-side language to get the information it needed from a relational database. Is this still the case, and if not, what are the big changes?"} {"_id": "255869", "title": "On linux, how to get \"incompatible\" i386 f77 libraries to work with current Fortran compilers, like gfortran?", "text": "I would like to run the elf32-i386 library libkernlib.a with a Fortran 77 program on my latest Ubuntu linux machine. From what I've read, `gfortran` is backwards compatible with Fortran 77, but I'm having trouble getting it to work with the library. I've tried -ff2c, -fbackslash, etc., but everything is still giving me

$ gfortran -ff2c -O -o output f77fortran.f -lkernlib
/usr/bin/ld: skipping incompatible //usr/local/lib/libkernlib.a when searching for -lkernlib
/usr/bin/ld: cannot find -lkernlib

I have also tried using `fort77`. I cannot find an `f77` that works. **If anyone knows how to run f77 programs and their libraries, whether using `gfortran` or something else, that would be great.** Btw, the libkernlib.a library has older versions from \"libraries\" links on this page, but the ones I've tried all give the same error and are all i386 (I figured that out by `objdump -f libkernlib.a`). Context: This is part of a Mathematica project that uses old code."} {"_id": "230877", "title": "Domain Model vs View Model", "text": "I'm in the early stages of my programming career and I've been working with MVC for just about a year now. I've spent much time learning about the pattern and the concepts behind it, but as the projects I'm working with get larger I'm starting to think that maybe I don't have the best understanding of how the model layer is supposed to work. I hear a lot about always having a View Model to protect your Domain Model, but how does this work in practice? What does the relationship between the two look like? What if a model doesn't need any extra \"view\" logic, should I just create a copy of it? Why?"} {"_id": "230874", "title": "Java for a perfect Media Player?", "text": "I am looking to build a media player with Java, and basically what I found was JMF. But, then again, this API is not up to date and doesn't support latest formats such as `MKV`. On more research, I stumbled upon http://stackoverflow.com/questions/10440152/any-simple-and-up-to-date-java-frameworks-for-embedding-movies-within-a-swing Got all excited, but then digging some more left me with this http://stackoverflow.com/questions/8153227/adding-other-video-codecs-dvd-support-to-javafx-2-2 Now, I am disappointed and in a fix about how all the good media players (VLC, KMPlayer, etc.) have been able to support all video and audio formats. They must be built using a programming language, IMHO! So, my question would be, in order to build a `complete media player` which supports `all kinds of media files`: 1. Is `JAVA` incompetent? 2. Has anyone ever built a good media player using `JAVA`? 3. If Java can't, how are there so many media players on `android`, with all video support? 4. Is it just Java, or can no modern language do it? 5. Do I have to rely on and choose C or C++ to do this?"} {"_id": "230872", "title": "Using a single table for identity and metadata", "text": "I'm in the early design phase of a project to provide an e-commerce platform that will require several entities to be modelled: products, customers, orders, CMS pages, etc. 
They will all have a few things in common (ID, creation timestamp, last modified timestamp, etc). My first thought was the usual one of giving the various tables an ID column that will use the database's mechanism for assigning uniqueness (autoincrement in MySQL, sequences in Postgres, etc), but given they have a few things in common, I was considering a design where all that data is kept in a base BusinessObject table and the tables for the other entities use a primary key that is also a foreign key referencing the BusinessObject table. For example (in pseudocode) CREATE TABLE BusinessObject ( id, date_created, date_updated, is_deleted, // etc PRIMARY KEY id AUTOINCREMENT ); CREATE TABLE Customer ( id, forename, surname, // etc PRIMARY KEY id FOREIGN KEY id REFERENCES BusinessObject.id ); CREATE TABLE Product ( id, name, price, description, // etc PRIMARY KEY id FOREIGN KEY id REFERENCES BusinessObject.id ); and so on. I can think of a number of advantages to this approach. First, a particular ID always maps onto only one particular object. For example, the ID 3 in a system where each table generates its own IDs could refer to a customer, an order or anything else, whereas in the above design, ID 3 will always be, say, an order, because there could never be a customer or product with ID 3. This would make stuff like extrapolating the referenced business object from the URL a lot easier, allowing for simpler routing in the application layer. However, it also means that every table in the system must join against the BusinessObject table, and I'm worried that this would result in some significant drawbacks. For example, the fact that one particular table is going to be involved in nearly all queries may result in degraded performance for that table, and it might be possible for a row in Customer to reference the same row in BusinessObject as a row in Product, resulting in a loss of data integrity unless some additional steps are taken to prevent that. So basically, what are the pros and cons of a design where a single table provides the identity data for most of the rest of the database? Are such designs fairly common, or is it better to just have each table have its own identity source and rely on cleverer application logic to determine the object being referenced?"} {"_id": "165258", "title": "Data structure: sort and search effectively", "text": "I need to have a data structure with, say, 4 keys. I can sort on any of these keys. What data structure can I opt for? Sorting time should be very short. I thought of a tree, but it will only help with searching on one key. For the other keys, I'll have to rebuild the tree on that particular key and then search it. Is there any data structure that can take care of all 4 keys at the same time? These 4 fields [source IP, destination IP, source port, destination port] total 12 bytes, and the total size of each record is 40 bytes. I have memory constraints too... around 100,000 records. The operations are: insertion, deletion, and sorting on different keys. For printing, sorting the records on any one of the keys should not take more than 5 seconds."} {"_id": "75646", "title": "Is \"3 or more use a for\" a good rule of thumb?", "text": "When do repetitive operations become a code smell? 
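To be concrete, the toy case I have in mind (a purely illustrative Java sketch; the names are made up): process(items[0]); process(items[1]); process(items[2]); // three repetitions written out by hand ...versus the loop the rule of thumb says to switch to: for (Item item : items) { process(item); } 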
I read this article by Charles Petzold where he suggested this, and was wondering what people thought."} {"_id": "205476", "title": "Refactoring the shipping application code to use DDD factories", "text": "I was trying to find examples of using DDD factories, and I came across the shipping application from Eric Evans' book. However, when I checked the BookingService, the code to create a `Cargo` had this comment at line **37**: 34 public TrackingId bookNewCargo(final UnLocode originUnLocode, 35 final UnLocode destinationUnLocode, 36 final Date arrivalDeadline) { 37 // TODO modeling this as a cargo factory might be suitable 38 final TrackingId trackingId = cargoRepository.nextTrackingId(); 39 final Location origin = locationRepository.find(originUnLocode); 40 final Location destination = locationRepository.find(destinationUnLocode); 41 final RouteSpecification routeSpecification = new RouteSpecification(origin, destination, arrivalDeadline); 42 43 final Cargo cargo = new Cargo(trackingId, routeSpecification); 44 45 cargoRepository.store(cargo); 46 logger.info(\"Booked new cargo with tracking id \" + cargo.trackingId().idString()); 47 48 return cargo.trackingId(); 49 } How should the DDD factory be used here? Which parts exactly should be refactored to use the factory in this piece of code?"} {"_id": "98548", "title": "Should all development, including refactoring work, be accompanied by a tracking issue?", "text": "**The debate:** Should all development, including refactoring work, be accompanied by a tracking issue? (in our case, Jira) **The common ground:** Our primary goal is quality. A working product, every release, is more important than anything else. Our codebase is old and automated tests are lacking; we are working on this, but it's a long-term project, so we need interim processes. **Position 1:** Refactoring work must be tracked in Jira. If it is not obviously related to the change you are making, then you must raise another issue. If you don't, then the work bypasses review and testing, and there is a risk to our primary goal. The argument has been made that PCI compliance (a near-future goal of the business) requires this level of tracking; I'm not in a position to say that is true or false with any level of certainty. **Position 2:** Code quality is vastly important. The better it gets (to a point; a point we are nowhere near), the more likely we are to keep releasing a working product. Anything which puts a barrier, no matter how small, in the way of refactoring is a risk to our primary goal. Often, the IDE does the work for you, so it isn't likely to go wrong anyway. **The following cases have been made:** Would it satisfy both positions if a developer writes \"Refactor\" and the relevant revision numbers on a card? Honestly, this feels like it's going to make everyone equally unhappy. It still puts a level of resistance on doing the refactoring, but doesn't offer sufficient tracking. What about having all-encompassing Jira issues that cover the refactoring work for an iteration? Yes, this removes the developer resistance layer, but I fear it also removes the tracking benefits of having a Jira issue. How does QA get a clear idea of what to test? This seems to be a political solution, keeping everyone calm by adding a lightweight but ultimately pointless process. It seems to me that, given that both sides of the debate ultimately want the same thing, there should be a solution that makes everyone genuinely happy. 
We can't have been the first people to ask this question, so what experiences have others had in similar situations?"} {"_id": "75648", "title": "What are the main practices and design patterns every .NET guy should know?", "text": "In my brief time as a professional programmer I've seen lots of applications written by programmers whose entire education appears to have been reading the first couple of chapters in a .NET 2.0 book. Heck, when I started, I wrote most of those applications! What are the most important design patterns for writing AWESOME .NET applications? By awesome I mean on the inside too!"} {"_id": "138675", "title": "Which applications have driven the mass spread of floating point units?", "text": "Floating point units are standard on CPUs today, and even desktop software uses them (3D effects). However, I wonder which applications initially drove the development and mass adoption of floating point units in history. Ten years ago, I think most uses of floating-point arithmetic fell into either 1. Engineering and science applications 2. 3D graphics in computer games I think for any application where decimal numbers might have appeared at that time, fixed-point arithmetic would have been sufficient (2D graphics) or even desirable (finance); usage of integers would have been sufficient then. I think these two applications were the major motivation to establish floating point arithmetic in hardware as a standard. Can you name others, or is there a compelling reason to disagree?"} {"_id": "138674", "title": "Main class passes dbConn obj to all its services, I need to change the dbConn for one of its services - suggestion for design pattern", "text": "There is a main class, and there are several services (which use a DB connection to retrieve data). These services are initialized in the main class: DB properties are obtained from the property file, then a DB connection is opened by calling a method dbOpen() written in the main class, and the resulting connection object is set on the service objects by iterating through the list of services and calling the setConnection method on each service. `Note: the services are instantiated in the main class, and the main class is not a superclass of the services.` I should also mention that there is a `recycle db connection` scenario that only the main class is aware of. /** Connects to the DB (optionally recycling the existing connection); throws RuntimeException if unable to connect. */ private void connectDb(boolean recycle) { try { if (recycle) { log.status( log.getSB().append(\"Recycling DB Connection\") ); closeDb(); } openDb(); for ( int i = 0 ; i < service.length ; i++ ) { service[i].setConnection(db); } } catch (Exception e) { throw new RuntimeException(\"Unable to connect to DB\", e); } } One of the services needs to use a different database; what is the best design pattern to use?"} {"_id": "254078", "title": "In WPF, should I base my converters on types or use-cases?", "text": "I'm looking for some advice on how to write my WPF value converters. The way I'm currently writing them, they are very specific, like (bool?, bool) => Brush; i.e. I'm writing each converter for a specific use case. In this case, the Brush is bound to an indicator showing equality information between the bool? and the bool. This obviously makes re-use very hard, and I end up with quite a large list of converters. Should I strive to write my converters in a more general way? 
Can I?"} {"_id": "130126", "title": "Technique for multiple users on same datasets", "text": "This is more a learning question than a coding one, but I'm certain it's a common issue for anyone developing administration systems or applications in PHP/MySQL/JS, etc. I've developed quite a complex application that lets users upload images and define hotspots in them with associated actions. The images are stored in a table, and the actions in another, with JSON data for every action in a text field. However, like I say, the problem is generic. Basically, my fear is that if two people are editing the same image and set of actions at the same time and both submit changes, or if it was edited by someone else in the meantime, then there's a whole series of structures that could fail on submission. I don't want to implement a locking system, as the system is very wide-ranging (links to other images, etc.), and I think it's a bit ugly. I saw this link (MSDN Multi-tenant architecture article) in another question, but it seems a little overwhelming and specialised for SQL Server. So - what are the terms for data and system architecture here that I can investigate, or are there some good articles on this topic that people can recommend? Specifically for the PHP/web world would be great!"} {"_id": "70036", "title": "using trademarked/copyrighted images in my public web application", "text": "Consider a public web application that wants to 'look' better by including the use of corporate images. The imagery would be used to give the website visitor a choice: Coke or Pepsi, Starbucks or Dunkin, Yankees or Red Sox, etc. It's a service that wants to enhance the user experience by having those logos in place in the relevant spots. The web application would not be selling those images, or anything with the images. The site would be generating revenue via ads or paid services. Is there a generally accepted practice on the web regarding trademarked images or logos? Has this been legally tested in any jurisdiction?"} {"_id": "70032", "title": "When can I say I know how to program in C?", "text": "Let's see. I've seen in several places, including Advice for Computer Science College Students by Joel Spolsky, that a graduated Computer Science student must know C. **How do I know if I know C or not?** I have developed a few projects in C (an implementation of the _ext2_ filesystem with FUSE, and a few others), but I suppose it means more than just knowing pointers, free, etc."} {"_id": "104104", "title": "Scaling yourself up against a better programmer/role model?", "text": "After having watched this video, I have to ask how a programmer can go about measuring themselves against other, better programmers, much as a chess player would. How would you decide who is a role model to start with? I mean, James Gosling is known by every Java programmer, but he invented the language, and there is almost certainly an expert out there who could show him a few tricks. Now say you have found that role model. This could be, say, Stack Overflow's own superstar Jon Skeet. It's possible to read his answers on Stack Exchange, visit his blog and read his books, but what about actual programming skill? How could you go about tackling challenges and determining where you are on the programming skill spectrum?"} {"_id": "125726", "title": "Review my class hierarchy", "text": "I have created a class hierarchy for an inventory system for a book/magazine. 
Here's the picture: ![enter image description here](http://i.stack.imgur.com/3WGKd.jpg) Will it do? I know there's no magazine class yet, but I was wondering if anyone could suggest a better idea."} {"_id": "85289", "title": "Selling an open source project: some issues", "text": "I am the creator / main developer of a small-sized open source (PHP) project (GPL3). Currently there is a development team of 3 people (me included). This team has been quite active for some time, but for almost 2 years not much has happened. I myself have decided I want to stop working on the project, but I can't just leave it, because I care about it and I know that if I abandon it, it will just be a matter of time before the project completely dies. At this moment, there are still some users and the project is only slightly out-of-date. So I'm thinking about selling the whole project. Of course I'd need to get the consent of the other developers, but for now I'm assuming that's not a big problem. So at this moment I have 2 questions: 1) If the project were sold to a commercial party, would it be possible for them to convert the project to closed source? I would prefer to sell the project to a company/organization that would continue the development under an open source license. 2) Does anyone have any tips for finding interested parties? I don't know if I just want to put up a \"For Sale\" sign on the website of the project. Maybe someone has experience with a comparable situation. OK guys, thanks in advance!"} {"_id": "122710", "title": "Where is the M in MVC?", "text": "I'm trying to refactor my application into MVC, but I'm stuck on the M part. In a database-backed app, the model is implemented in the app code, right? But then, **what is in the database -- is that not also the model?** _(I'm not using the database as a simple object store -- the data in the DB is an enterprise asset)._"} {"_id": "246322", "title": "How to support multiple firmware versions?", "text": "I'm working on an app which talks to a Bluetooth Low Energy (BLE) device and exchanges customized data with it. Our team has defined the data models, and there's a method that parses the data payload and assigns values to the corresponding characteristics. It all worked fine until we introduced a new firmware version. In this new firmware some values in the data payload have been redefined, or the offset of a specific value has changed. So the parsing method needs to be updated/rewritten according to the new payload definition. And here comes the problem: both versions of the firmware need to be supported! Of course I could write a lot of if/else in the parsing method right now, but what happens if 3 more firmware updates arrive one after another? I can imagine the code will become hard to read and lose its simplicity. I'm wondering if there's an elegant way to manage the coexistence of firmware versions. Maybe a design pattern that can be adopted here? 
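My first rough idea was a registry of version-specific parsers behind a common interface -- sketched below in Java just because it's easiest to write down (all the names are mine, not from our codebase) -- but I'm not sure it's the right pattern: interface PayloadParser { Temperature parse(byte[] payload); } Map<Integer, PayloadParser> parsers = new HashMap<>(); parsers.put(0x42, new V1Parser()); // key = the protocol id byte parsers.put(0x43, new V2Parser()); parsers.put(0x44, new V3Parser()); // at runtime, dispatch on the protocol id (second byte of the payload) Temperature t = parsers.get(payload[1] & 0xFF).parse(payload); 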
**To make it more specific:** In my model class `Temperature` I have the following properties: @property (nonatomic, readonly) float temp_outside; @property (nonatomic, readonly) float temp_inside; The library has to support 3 different firmware versions: * **V1** only supports `temp_outside` * **V2** supports `temp_outside` and `temp_inside` * **V3** supports `temp_outside` and `temp_inside`, but `temp_inside` has to be obtained differently compared to **V1 & V2** Assuming the firmware versions provide their information within the `manufacturer data` of the advertisements as raw data: * **V1 firmware:** 00 42 0a * (byte 1: sw id, byte 2: protocol id, byte 3: temp_outside) * **V2 firmware:** 00 43 0a 10 * (byte 1: sw id, byte 2: protocol id, byte 3: temp_outside, byte 4: temp_inside) * **V3 firmware:** 00 44 10 0a * (byte 1: sw id, byte 2: protocol id, byte 3: temp_inside, byte 4: temp_outside) What is the best way to implement this? There are several things to consider: 1. The model has values which might not be filled by a specific firmware; should I have separate models per firmware, or how can I make sure that only supported properties will be accessed? 2. The method responsible for parsing the bytes into the model has to support all firmware versions. Should there be one method / parser with several conditions based on the protocol id, or several parser classes? 3. The usage of the library within view controllers should be as convenient as possible."} {"_id": "231831", "title": "What sort of information can I extract out of a dll file?", "text": "I was dealing with a virus earlier today which was a .dll file disguised as RUNDLL.dll, which is regularly seen in the Task Manager and launches on startup. I would like to know how much information I could have extracted from that DLL before deleting it. I used .NET Reflector, and the file was unrecognized (meaning it was not C# code, from what I understand). I used Visual Studio and attempted to reference it, but was greeted by another error. Visual Studio handles all of the .NET languages, so I was surprised, to say the least. I expected to find at least the function names in the Object Browser, if nothing else. Decompiling it was probably a waste of time if it was C/C++, since that would just be assembly code. In what ways could a .dll file like this one prevent revealing information (function names, number of functions, etc.)? Are C/C++ .dll files basically impossible to investigate for further information? Thanks."} {"_id": "246328", "title": "How can a manager ensure developers are pushing up to the origin every day?", "text": "Our team has been using Git for a couple of years. I love it, after previously using Visual SourceSafe, SVN and TFS. However, my manager has been getting increasingly agitated about it and is threatening to go back to SVN or TFS. The problem is human nature: developers forgetting to push up to the origin every day, then going off sick or taking a holiday, and someone else having to pick up the project, not knowing whether the latest code is in the origin or whether it is sitting on the absent developer's machine. There is no visibility of which developer was the last to work on which file, as there is with a centralized source control system. I really want to avoid going back to SVN or TFS. So my question is: How can a team use Git successfully in a way that takes into account human nature (forgetfulness, laziness, etc.)? 
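(The closest I've come to an answer myself is detection after the fact: something like `git for-each-ref --sort=-committerdate --format='%(refname:short) %(committerdate:relative)' refs/remotes/origin` shows how stale each branch on the origin is, but that only tells me about code that was pushed, not about work still sitting on someone's machine.) 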
How can I ensure that at the end of every day the latest code has been pushed to the origin, so that someone else can step in and take over the next day if need be? Is the answer continuous integration? I've read about workflows that include having to push to a CI branch on the origin to build the project."} {"_id": "119020", "title": "Use Google Analytics to track visitor/download stats for a Google Code Project?", "text": "Can you use Google Analytics to track visitors/downloaders for a Google Code project? I've searched Google for an answer, but I get results about Google Analytics as a Google Code project itself, and not about applying it to a Google Code project for visitor/download data."} {"_id": "176317", "title": "Morse Code - Binary Tree", "text": "How is the Morse Code representation of a letter determined? \"E\" = \".\" \"T\" = \"-\" Why is it not alphabetic? As in, let \"A\" = \".\", \"B\" = \"-\", \"C\" = \".-\", etc. I'm trying to develop an algorithm for traversing a binary tree filled with these letters. My main aim is to search for a letter, like \"A\", but I don't know what conditions determine when to branch to the right or left node."} {"_id": "84625", "title": "Amazon Cloud (EC2) w/SQL Server. Pay for SQL instance, or use an AMI w/SQL Server Express?", "text": "I have been considering using the Amazon cloud (EC2) for a small workflow application. In terms of power and storage, a SQL Server Express database will more than meet my needs. I have been cautioned against paying for just a Windows server instance and installing SQL Server Express, due mainly to security issues. Is it reasonable to think that there is an Amazon Machine Image (preconfigured images that you can load onto your instance) with SQL Server Express installed where most of the server \"hardening\" has been taken care of? The wise course may be to just pay for the Windows + SQL Server instance so it is already configured for me, but it is quite a cost difference."} {"_id": "70693", "title": "SQL problems and answers book for a functioning SQL programmer", "text": "It was pointed out to me yesterday that I need to get a book on SQL and learn about it properly. That is probably a fair assessment; I'm a functioning SQL programmer who can in general extract from the database the information that is needed, but as soon as extracting the data needs more than a couple of `joins` and a `where` clause, the query that I come up with is probably not as good as it could be. Previously I was working at an Oracle shop where there were a few very good SQL programmers who would generally design the more complex queries and would be on hand to answer questions, but now I'm at a SQL Server shop where no one really has much of a clue about SQL, and so I feel like I need to get my skills up. What I'm looking for is a fairly short, very practical book that I can work through in the evenings, with a series of difficult SQL problems and then well-explained solutions, along with a few pieces of sage advice, aimed at a programmer who can \"do SQL\" and wants to become more of a \"SQL programmer\". I appreciate that many things about SQL are platform-dependent, so since I'm currently in a SQL Server shop, my preference would be for a book aimed at T-SQL. 
Any suggestions?"} {"_id": "203133", "title": "How advanced are author-recognition methods?", "text": "If a computer program analyses a text written by an author, how much can it tell today about that author, given texts long enough to be statistically significant? Can the computer program even tell with \"certainty\" whether a man or a woman wrote the text, based solely on its contents and not on an investigation of things such as IP numbers, etc.? I'm interested to know if there are algorithms in use, for instance, to automatically determine whether an author was male or female, or similar characteristics of an author that a computer program can decide based on analysis of the written text. It could be useful to know, before you read a message, what a computer analysis says about the author, do you agree? If I, for instance, get a longer message from my wife saying that she has had an accident in Nigeria, and the computer program says that with 99% probability the message was written by a male author in his sixties of non-Caucasian origin, or likewise, or by somebody who is not my wife, then the computer program could help me investigate why a certain message differs in characteristics. There can also be other uses, for instance just detecting outliers in a geographically or demographically bounded larger data set. Scam detection is the obvious use I'm thinking of, but there could also be others. Are there already such programs that analyse a written text to tell something about the author based on word choice, use of pronouns, unusual language usage, or the like?"} {"_id": "203135", "title": "Algorithms for Data Redundancy and Failover for distributed storage system?", "text": "I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB: 5 storage devices in one system. My application will store arbitrary files in the storage system. Question: How can I build a distributed storage with data redundancy and failover that stores documents, videos and any type of file, while ensuring that, should any one storage device fail, there is another copy of those files on another storage device? However, the concern is that a 50GB device can only hold so many files compared to the 70GB, 150GB, etc. devices. Treating the 5 storage devices as one cloud-like storage pool, is there any logical way for my application to distribute or store the files? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity: what are the issues of this implementation, performance measurements, and what are the limitations?"} {"_id": "203134", "title": "Uses of persistent data structures in non-functional languages", "text": "Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are _thread-safe_. 
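To make the claim concrete, the usage pattern in question looks roughly like this (a minimal Java-style sketch; PersistentList is a stand-in name, not any specific library): PersistentList<String> original = PersistentList.of(\"a\", \"b\"); PersistentList<String> extended = original.add(\"c\"); // returns a NEW list // 'original' still holds two elements; any thread reading it sees no change 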
However, the reason that persistent data structures are thread-safe is that if one thread were to \"add\" an element to a persistent collection, the operation returns a _new_ collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are _not_ in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why, then, is the immutability of PDSs touted as something beneficial for \"thread safety\"? Are there any real examples where PDSs help in synchronization, or in solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?"} {"_id": "202842", "title": "Security Risks of Unsigned ClickOnce Manifests", "text": "Using signed manifests in ClickOnce deployments, it is not possible to modify files after the deployment package has been published - installation will fail, as hash information in the manifest won't match up with the modified files. I recently stumbled upon a situation where this was problematic - customers need to be able to set things like connection strings in app.config before deploying the software to their users. I got round the problem by unchecking the option to \"Sign the ClickOnce manifests\" in VS2010 and explicitly excluding the app.config file from the list of files to have hashes generated during the publish process. From a related page on MSDN > \"Unsigned manifests can simplify development and testing of your > application. However, unsigned manifests introduce substantial security > risks in a production environment. Only consider using unsigned manifests if > your ClickOnce application runs on computers within an intranet that is > completely isolated from the internet or other sources of malicious code.\" In my situation, this isn't an immediate problem - the deployment won't be internet-facing. However, I'm curious to learn what the \"substantial security risks\" of what I've done would be if it were internet-facing (or if things changed and it needed to be in the future). Thanks in advance! * * * **Edit / follow-up:** Does _not signing_ the ClickOnce manifest constitute an unsigned manifest (as per MSDN's definition)? The application manifest contains a hash of the files in the deployment package. Any changes to the files within it result in a validation failure during installation. Does this negate the above security risks at all?"} {"_id": "202843", "title": "Solutions for floating point rounding errors", "text": "In building an application that deals with a lot of mathematical calculations, I have encountered the problem that certain numbers cause rounding errors. 
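The canonical example of the kind of thing I keep running into (Java here, but any IEEE 754 double behaves the same): System.out.println(0.1 + 0.2); // prints 0.30000000000000004 System.out.println(0.1 + 0.2 == 0.3); // prints false 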
While I understand that floating point is not exact, the problem is _how_ do I deal with exact numbers to make sure that when calculations are performed on them, floating point rounding doesn't cause any issues?"} {"_id": "247021", "title": "Automatically create or update object in database", "text": "I have a database class with the following interface: public interface Database { //returns false if p (its ID) is already available //otherwise adds p to the list and returns true public boolean create(Person p); //returns false if p (its ID) was not found in the list //replaces the available Person q (q.ID == p.ID) with p and returns true public boolean update(Person p); //returns false if p (its ID) was not found in the list //marks the available Person q (q.ID == p.ID) as inactive and returns true public boolean remove(Person p); } The `Person` is immutable and can be identified by a unique ID. I have an `EditDialog` which looks like this: ![Editing a Person and the available Save/Cancel/Remove functionalities](http://i.stack.imgur.com/H7l0E.png) I wanted to reuse my code, and so the `EditDialog` is used to create a `Person` or update it. So this dialog is initialized either with an available `Person` instance (edit operation) or with a new one (create operation). Because I learned that there is always an evil user, I wanted to avoid the following scenario: * Click \"Create Person\" button * New `Person` is created in Database * `EditDialog` for new `Person` is displayed * Click \"Cancel\" button * `Person` has to be removed (= marked inactive) from `Database` * `EditDialog` is disposed * repeat Which would lead to many inactive `Person`s in the `Database`. So I decided to only create the `Person` in the `Database` if the user clicks the \"Save\" button. Which leads to two possible scenarios for the `Listener` of the \"Save\" button: 1. New `Person` should be created in `Database` 2. `Person` should be updated in `Database` So the code of the Listener looks like the following: if (!database.create(p)) { database.update(p); } Which is not so pretty in my opinion. The long and the short of it is, **should the Database automatically** * **update a `Person` if it's available at invocation of `create(Person)`?** * **create a `Person` if it's not yet available at invocation of `update(Person)`?** **Or is there another solution/practice I didn't think about?**"} {"_id": "146286", "title": "How to Model a simple file-system by UML class diagram", "text": "I want to model a file system which contains both files and directories, and directories can contain either files or other directories. This is what I have reached so far: ![My simple file system class diagram](http://i.stack.imgur.com/6URV9.png) In the OOSE book, however, a similar model is presented: ![class diagram for file system presented](http://i.stack.imgur.com/GonyF.png) Now I have two questions: 1. I wonder why the author did not use an _Interface_ to represent the abstract type **FileSystemElement**? Isn't using an interface for this type correct? 2. As we know, files and directories must have names with similar constraints (e.g. a maximum length of 256 characters). Therefore, it would be nice if I could model this in **FileSystemElement**. But an **interface** cannot have an _abstract attribute_, so I have to repeat the **name** attribute in both the **Directory** and **File** classes. 
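The obvious workaround on the implementation side would be an abstract class instead of an interface, so the shared constraint lives in one place (a minimal Java sketch of what I mean): abstract class FileSystemElement { protected String name; // common naming constraint enforced here } class File extends FileSystemElement { } class Directory extends FileSystemElement { /* contains Files and Directories */ } 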
Is there any better workaround for this problem?"} {"_id": "146281", "title": "What an architect should do?", "text": "It seems that the role of architect varies between software companies. What does the role of a typical architect include? E.g.: 1. Draw high-level diagrams and explain to the team why this is a reasonable architecture. 2. Write documents for managers or customers. 3. Write the infrastructure of a piece of software, upon which others add more features. 4. Write various prototypes and run performance tests. 5. Conceive additional features and improvements to existing features, and guide the team towards them. 6. Other responsibilities."} {"_id": "218437", "title": "What is the difference between requirements and acceptance criteria?", "text": "I am trying to understand the difference a little better, as it seems like they are the same thing. I have worked on projects with no use of requirements, where everything is an acceptance criterion, and on projects that have both."} {"_id": "236826", "title": "New and old technologies coexist in legacy system", "text": "New technologies can accomplish existing tasks in a more efficient and powerful way. But sometimes old technologies unfortunately cannot be discarded, so a larger number of technologies in one system makes maintenance more difficult. Is it worth introducing new technologies? For instance, a legacy system uses SOAP internally, and JSON is the new technology meant to replace SOAP. The problem is that JSON can only replace **some** of the SOAP, because the rest of it is too complex to replace. As a result, the system will use both SOAP and JSON at the same time, and maintenance is even more difficult than before. Is it reasonable to introduce new technologies to replace **part** of the old ones?"} {"_id": "236823", "title": "Web app design: Local caching vs demanding", "text": "I'm writing a web application in JavaScript that will be served to the client's browser. The intent is for the app to appear to the client as a monolithic application, not a series of interlinked, refreshing web pages. My question deals with common architecture. To frame the question, I'll mention a specific feature of the web app: a \"tour\". When the client clicks a button labeled \"tour\", the web app will load a div with a bit about the app itself and what it can do for the user. There are two ways this could be handled: the browser could fetch the tour markup from the server when the user clicks the button, or it could download it (and all other such features) when the app is initially downloaded. If I fetch the tour on demand, then I possibly save network bandwidth, as not everyone will take the tour, and the app will be smaller too. But this means that in order for someone to fully use the app, there will be more network congestion, as there will be another hit on the network for loading the tour. If I just serve all the features initially, the app is larger, but there is less congestion on the network because, once downloaded, there is less need to access the network (unless a fresh copy of the app is required, i.e. if-modified-since). This 'tour' feature isn't really the issue itself; it's just an example to explain my question. How should I balance local caching versus fetching on demand in web app design?"} {"_id": "71344", "title": "Should one Consider Periods in \"Year of Experience\" while actually not Coding on regular basis", "text": "Sorry if the title sounds a bit clumsy. The scenario I am trying to describe is: during my academic years I mostly coded in C/C++. 
A few small projects were done, but no large-scale work. From then on, after entering industry, I have rarely coded in C++, but whenever I do, I use its features as deeply as my understanding allows. Now, should I even consider myself a C++ programmer, and count every year since I first started coding as my number of years in C++?"} {"_id": "236829", "title": "Is the following example a strategy pattern?", "text": "In my problem I had lots of objects with slightly different behaviour, but identical attributes and methods with identical interfaces. The object variants were quite big in number, and I didn't want to create a class for each of them, with lots of repeated code. I avoided creating a lot of derived classes with the following construct (pseudocode for brevity) #aTP = all the parameters def _function_version_1(aTP): ... ... def _function_version_BIG(aTP): ... class BaseClass(object): _typesOfImplementation = {'ONE' : _function_version_1, ... 'BIG' : _function_version_BIG, } def __init__(self, type, ...): ... if type not in self._typesOfImplementation: raise Exception('Unknown type') # respective function for this object self.function = lambda aTP: BaseClass._typesOfImplementation[type](self, aTP) Is this still an implementation of the strategy pattern, although I didn't use classes for the respective function versions (which would have been an awful bunch of classes again)?"} {"_id": "12933", "title": "What is the most effective work rhythm for a programmer?", "text": "I was wondering what the best work rhythm is for the job a programmer does. I am coding all day long, and sometimes I get stuck on a problem and it keeps me occupied a few hours before I realize that maybe I need a break. Some say frequent, short breaks help you, but sometimes when I am focused on a problem I feel like a break would not help, but rather make me lose my focus. So how often should a break be taken, and how long? The more basic question regarding this issue comes from the fact that you can get tons of \"good ideas\" on the net (the Pomodoro Technique, for instance) that promise you will be more effective in whatever you do. Are these principles good, or is this something everybody should decide for themselves? I wonder if any of them can accomplish what they promise! I mean, what they promise is that (if the conditions are met) they work for **everybody**. Are there really such principles? And if there are, what are they and how can we find them?"} {"_id": "232562", "title": "Why do most sites require email activation", "text": "Most popular applications nowadays require account activation by email. I've never done it with apps that I've developed, so am I missing some crucial security feature? By email activation I mean that when you register on a site, they send you an email that contains a link that you have to click before your account gets activated."} {"_id": "41798", "title": "programming an expert system", "text": "I need to program an expert system that, according to a series of complex possibilities, returns a well-defined result, together with some kind of diagnostic of what that result means. What is the general process to define the behavior of an expert system, from the initial assessment of the conditions to the actual code?"} {"_id": "232561", "title": "How to POST CSV with XMLHttpRequest", "text": "I would like to send a CSV file via POST in an XMLHttpRequest, but I am unsure of two things. First, is there anything to distinguish a CSV file from a string split up by commas? 
And what sort of Content-Type am I supposed to put in the `setRequestHeader` call?"} {"_id": "232568", "title": "Calculating combinations from categories", "text": "I'm trying to calculate every possible selection from different categories. One choice must be made from every category, and the categories are always in the same order. I save a string where each character is the number of that choice from that category. I have solved this with recursion, and it works well when there are thousands of possibilities, but it is too slow when there are millions. I'd like to change it to looping to hopefully speed it up. Here is my recursive function (C#): private void GetPartNumberPermutations(IList<int> choices, string partNumber) { for (int i = 0; i < choices[0]; i++) { if (choices.Count > 1) GetPartNumberPermutations(choices.Skip(1).ToList(), partNumber + i.ToString()); else { //add partNumber to the list } } } Choices is a list of the number of choices for each category and might look like { 2, 4, 5, 8, 2, 4 }. Let's ignore the problem when one of the entries is > 9. My final list of partNumbers comes out to: 000000 000001 000002 000003 000010 000011 000012 000013 000100 000101 etc"} {"_id": "123449", "title": "How to tackle massive Linux/makefile projects effectively?", "text": "I have been developing Windows applications in C++ for like 10 years now. And recently I've started digging into some Linux projects, and I can't stand how unproductive I am... I'm a fast learner, and I've been using Linux as a primary platform for some time now. And I do feel very comfortable with the shell, OS principles and the GUI. But when it comes to development, it feels like I'm back in school. As soon as I open some larger project, I'm stuck. Most of them are makefile-based, so basically when I try to navigate them with Qt Creator or Code::Blocks, at best, I can use IntelliSense-like completion on a per-file basis. And most of the time variables leak from scope. Then there is the go-to-definition stuff, which seems nonexistent; try to join some larger project from SourceForge, and you're stuck for days, because navigating to definitions is so hard... `grep -r \"this_def\" . --include \"*.cpp\" --include \"*.h\"` seems so slow and clumsy. And then the debugging: gdb does work, but no matter what I do, it seems like it's light years behind WinDbg or the Visual Studio debugger. And these things are making me desperate; I want to write code, but it just goes so slowly... I'm starting to think that Linux developers learn function definitions by heart and analyze code by eye, but I can't believe it's so. Has anyone gone through this? Is there something that I'm missing that could make me more productive?"} {"_id": "247028", "title": "Alias variable vs multiple use of getter", "text": "Would you rather: $this->getDoctrine()->getManager()->persist($currency); $this->getDoctrine()->getManager()->persist($user); $this->getDoctrine()->getManager()->flush(); or $em = $this->getDoctrine()->getManager(); $em->persist($currency); $em->persist($user); $em->flush(); Is using an aliasing variable for faster coding a smart choice, or should the programmer rather use variables only if they are really variables?"} {"_id": "180804", "title": "What is the reason behind methods with return values and methods with void?", "text": "I want to understand why in C# there are methods that return a value, for example: public int Accelerate() { Speed++; return Speed; } and methods that do not return a value (void)? 
What is the difference between that and the following example: public void Accelerate() { Speed++; Console.WriteLine(Speed); } I see that the last one will save us time, rather than defining a variable to hold this field in when creating a new object! I'm a beginner, so could anyone explain?"} {"_id": "151365", "title": "Reading input all together or in steps?", "text": "For many programming quizzes we are given a bunch of input lines, and we have to process each input, do some computation, and output the result. My question is: what is the best way to optimize the runtime of the solution? 1. Read all input, store it (in an array or something), compute the result for all of it, and finally output it all together. or 2. Read one input, compute the result, output the result, and so on for each input given. UPDATE: Since no answer was specific, I will ask which approach is best for problems like this: > ### Quadrant Queries (30 points) > > There are N points in the plane. The ith point has coordinates (xi, yi). > Perform the following queries: > > 1) Reflect all points between point i and j, both inclusive, along the X axis. > This query is represented as \"X i j\" > 2) Reflect all points between point i and j, both inclusive, along the Y > axis. This query is represented as \"Y i j\" > 3) Count how many points between point i and j, both inclusive, lie in each > of the 4 quadrants. This query is represented as \"C i j\" > > ### Input: > > The first line contains N, the number of points. N lines follow. > The `i`th line contains xi and yi separated by a space. > The next line contains Q, the number of queries. The next Q lines contain > one query each, of one of the above forms. > All indices are 1-indexed. > > ### Output: > > Output one line for each query of the type \"C i j\". The corresponding line > contains 4 integers: the number of points having indices in the range [i..j] > in the 1st, 2nd, 3rd and 4th quadrants respectively. > > ### Constraints: > > > 1 <= N <= 100000 > 1 <= Q <= 1000000 > > > You may assume that no point lies on the X or the Y axis. > All (xi,yi) will fit in a 32-bit signed integer > ..."} {"_id": "147880", "title": "New senior developer tasks", "text": "I've got a senior developer with eight years of .NET experience starting tomorrow to work on an 11,000-lines-of-code application. In the team there's myself and another programmer. We've both got about three years' experience each. It's my first project as a manager (I'm also a developer on the project), and this is the first time I've ever had to introduce someone to an already established code base. Obviously I'll be going over each module, the deployment process, etc., and handing them the location of the source control repository, documentation (which isn't the best), etc. How long should I give them before they're ready to start writing new features and fixing bugs?"} {"_id": "182093", "title": "Why store a function inside a python dictionary?", "text": "I'm a Python beginner, and I just learned a technique involving dictionaries and functions. The syntax is easy and it seems like a trivial thing, but my Python senses are tingling. Something tells me this is a deep and very Pythonic concept, and I'm not quite grasping its importance. **Can someone put a name to this technique and explain how/why it's useful?** * * * The technique is when you have a Python dictionary and a function that you intend to use on it. You insert an extra element into the dict, whose value is the name of the function. 
When you're ready to call the function, you issue the call _indirectly_ by referring to the dict element, not the function by name. The example I'm working from is from Learn Python the Hard Way, 2nd Ed. (This is the version available when you sign up through Udemy.com; sadly the live free HTML version is currently Ed 3, and no longer includes this example). To paraphrase: # make a dictionary of US states and major cities cities = {'CA':'San Diego', 'NY':'New York', 'MI':'Detroit'} # define a function to use on such a dictionary def find_city (map, city): # does something, returns some value if city in map: return map[city] else: return \"Not found\" # then add a final dict element that refers to the function cities['_found'] = find_city Then the following expressions are equivalent. You can call the function directly, or by referencing the dict element whose value is the function. >>> find_city (cities, 'NY') 'New York' >>> cities['_found'](cities, 'NY') 'New York' Can someone explain what language feature this is, and maybe where it comes into play in \"real\" programming? This toy exercise was enough to teach me the syntax, but didn't take me all the way there."} {"_id": "182094", "title": "How should I represent an enumerated type in a relational database?", "text": "I am working on developing a relational database that tracks transactions that occur on a device I'm working on for my company. There are different types of transactions that could occur on the device, so we have a \"trans_type\" field in one of our main record tables. My group has decided to make the type of this field an integer and to treat it as an enumerated type. My intuition tells me that it would be a better idea to make this field a string, so that our database data would be more readable and usable. My co-workers seem to be worried that this would cause more trouble than it is worth: that string comparisons are too costly, and the possibility of typos is too great a barrier. So, in your opinion, when dealing with a field in a relational database that is essentially an enumerated value, is it a better design decision to make this field an integer or a string? Or is there some other alternative I've overlooked? Note: explicit enumerated types are not supported by the database we are using. And the software we are developing that will interface with this database is written in C++."} {"_id": "147886", "title": "When programmers talk about \"data structures\", what are they referring to?", "text": "When programmers talk about \"data structures\", are they only talking about abstract data types like lists, trees, hashes, graphs, etc.? Or does that term include any structure that holds data, such as composite types (class objects, structs, enums, etc.) and primitive types (boolean, int, char, etc.)? I've only ever heard programmers use the term to reference complex data structures or abstract data types; however, the Wikipedia article that provides a list of data structures includes both composite types and primitive types in the definition, which is not what I expected (even though it does make sense). When looking around online I see other places that refer to the term \"data structure\" in the programming sense as only referring to abstract data types, such as this lecture from Stony Brook University's Department of Computer Science, which states > A data structure is an actual implementation of a particular abstract data > type. 
or this wikibook on data structures, which uses the term in sentences like this: > Because data structures are higher-level abstractions, they present to us > operations on groups of data, such as adding an item to a list, or looking > up the highest-priority item in a queue. So why do I only ever hear programmers referring to complex data structures or abstract data types when they use the term \"data structure\"? Do programmers have a different definition of the term than the dictionary definition?"} {"_id": "226193", "title": "How does database connection pooling improve application performance?", "text": "I can't understand why connection pooling improves application performance. I suppose connection establishment causes latency, but reusing an established connection requires more complex error handling. The application server needs to track the state of every connection and reset each connection to its initial state before reuse (such as discarding all transactions and switching back to the default isolation level), which also costs time."} {"_id": "214303", "title": "Why doesn't Git daemon start in the background?", "text": "Just as in the title, why doesn't `git daemon` start the service in the background, since it's a \"daemon\", i.e. a background process? I know that I can start it in the background using `git daemon &`, but I'm just wondering about the philosophical reason for this decision."} {"_id": "237883", "title": "Is recursive code slower than non-recursive code?", "text": "Now, I'm only a novice programmer, but what my friends who are studying programming tell me is that recursive code is a good thing: it leads to less code duplication and it looks more elegant. That may be so, but every time I've tried my hand at recursive code, I've found it to be significantly slower than non-recursive code. I'm not sure if it's something about the way I've used it, but I highly doubt it. I've used it for fairly simple things like the Collatz conjecture and Fibonacci number generation, but whenever I've compared it against normal iterative code, the recursive approach consistently clocks in at around twice the time of the iterative solutions."} {"_id": "123199", "title": "Organizing large Javascript applications - The view layer", "text": "Today, JavaScript applications of a relevant size become more and more common, and as the need arises, certain patterns have been identified to manage the code complexity. I try to follow good advice, but I have some trouble organizing the view layer of an application I am writing. While the other layers are nicely decoupled, I have a big blob for the UI that I would like to avoid. To make things clear, I am not talking about applications of the size Osmani (see the link above) considers - like Gmail - but still big enough to deserve a slightly better architecture. My problem is that I have a lot of UI elements that I have to place. All these elements can have margins, paddings and so on, and I need to explicitly set their size, as the UI has a lot of constraints that are not expressible in CSS (for instance, some images have to be sized so that an integer number of them appears in a row, without blank space at the end). As if this were not enough, the size of some elements depends on how much space their container allows, and this may depend on the size of other elements, and so on. What I am doing right now is to compute the size of all elements every time the window size changes. I start from the first one, which I can compute freely. 
As soon as this is placed, I fire an event which triggers the computation of the second one, and so on. This approach more or less works, but is ugly as hell. Every function is full of code like function resizeFoo() { var parent = $('#foo').parent(), parentWidth = parent.width(), availableWidth = parentWidth - parseInt(parent.css('padding-left'), 10); $('#foo').width(availableWidth); } only much longer. As soon as I add some right padding to #foo's parent, I have to update this. Of course, I could subtract the padding from both sides right from the start, making it even longer. > Are there any useful patterns to handle complex UI sizing and positioning > requirements, so that I can reorganize all of the above mess into something > meaningful?"} {"_id": "86714", "title": "Using lookahead assertions in regular expressions", "text": "I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them: * They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very \"interesting\" and non-obvious behaviors. * It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as \"experimental features\"). This isn't the case quite as much, but there's still always the question as to how well they are supported. * Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance. * I've gotten by without any need for them at all... sometimes I think that they're extraneous. Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including: * Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole. * Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability). * \"The right tool for the right job\" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most cases they're used for today)? 
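(For reference, the kind of pattern I mean: something like `\d+(?=%)` to match a number only when it is immediately followed by a percent sign -- a job I would normally do as a match-then-check in two steps.) 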
My question is this: **Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code?** I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves, won't be enough to convince me."} {"_id": "159740", "title": "Python simulation-scripts architecture", "text": "**Situation:** I've some scripts that simulate user-activity on desktop. Therefore I've defined a few cases (workflows) and implemented them in Python. I've also written some classes for interacting with the users' software (e.g. web browser etc.). **Problem:** I'm a total beginner in software design / architecture (coding isn't a problem). How could I structure what I described above? Providing a library which contains all the workflows as functions, or a separate class/module etc. for each workflow? I want to keep the workflows simple. The complexity should be hidden in the classes for interacting with the users' software. Are there any papers / books I could read about this, or could you provide some tips?"} {"_id": "86712", "title": "Designing controller for modular Java architecture", "text": "We are designing a system which mimics a BPEL application with sets of functional requirements such as bulk messaging, managing SLAs, error handling and so on. One of the intentions is to modularize these functional and non-functional aspects into separate web apps so that we can choose at build time and plug them all together. How does one go about designing this app - choosing design patterns for a controlling framework that delegates between the modules? Put another way - it's like the chain of activities in an ESB, so the controller passes on to A then B then C. In other cases it goes A > B > D and so on. So my question: what design pattern does this controller have? How can it decide whether, at the end of A, it should call B or C?"} {"_id": "171190", "title": "Composition vs aggregation: what does my example show?", "text": "Composition and aggregation are both confusing to me. Does my code sample below indicate composition or aggregation? class A { public static function getData($id) { //something } public static function checkUrl($url) { // something } } class B { public function executePatch() { $data = A::getData(12); } public function readUrl() { $url = A::checkUrl('http/erere.com'); } public function storeData() { //something not related to class A at all } } Is class B a composition of class A or is it an aggregation of class A? Does composition purely mean that if class A gets deleted class B does not work at all, and aggregation that if class A gets deleted, the methods in class B that do not use class A will still work?"} {"_id": "226224", "title": "Suggestion on how to fill a web form (several times)", "text": "I need to fill a form using data from a CSV file. I was planning to use CURL+PHP to do it, but then I realized the form has several steps (one on each page), plus it uses JavaScript to fill hidden inputs. It is an ASP.NET form, so it has a lot of variables, a postback, etc. I am now thinking of making a browser extension that would load the CSV file and, without any more user input, would fill the form, advance from page to page filling up the form, then somehow wait for the form to finish processing, retrieve the output and restart the form. I would like to repeat this 500-600 times.
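For the composition/aggregation question above, the usual litmus test is ownership of lifetime. A minimal Python sketch with invented class names:

    class Engine:
        def start(self):
            return "vroom"

    class Car:
        # Composition: the Car creates and owns its Engine;
        # the Engine has no life outside the Car.
        def __init__(self):
            self.engine = Engine()

    class Driver:
        # Aggregation: the Driver uses a Car that exists
        # independently and can outlive the Driver.
        def __init__(self, car):
            self.car = car

    car = Car()          # the Car builds its own Engine
    alice = Driver(car)  # the Car is merely passed in
    del alice            # the Car (and its Engine) live on

Static calls like `A::getData(12)` in the question's sample are arguably neither: B holds no reference to an A instance at all, which is a plain usage dependency.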
The website in question does not provide an API, so filling in forms is the only way. Does this idea sound feasible? Any other ideas?"} {"_id": "212178", "title": "Pointer access and cache coherency", "text": "To my understanding, when you access a variable, that variable and the surrounding area of memory is put into the L1 cache. If I'm wrong here, please tell me. Now my question is, say I have an array of pointers, and I want to iterate through them all performing operation X. If I first have to access the pointer, to get the address of the actual data, does this mean that the memory near the pointer will be put into cache, then the memory near the data, then back to the pointer, then to the data, etc? i.e. Cache thrashing? If this is the case, how might one go about keeping cache coherency?"} {"_id": "250333", "title": "Relevance of search results", "text": "I implemented an easy-to-use text search for our application. It has the usual text fields for * all of the keywords * any of the keywords * none of the keywords Now I would like to prioritize my results. Just counting the \"all\" and \"any\" hits would probably be good enough, but maybe the result is more relevant if more than the one necessary \"any\" term was hit. Or one could say that a lot of hits for a single word should yield \"diminishing returns\". Could you please give me a hint on how this problem is usually handled? **Disclaimer**: I know that a \"hand made\" search won't scale and won't give the best possible results. This implementation is just a temporary solution, which will be replaced by something like Apache Solr, as soon as we have the resources."} {"_id": "40443", "title": "Why should I learn to make apps for Windows Phone 7 instead of Android?", "text": "What would be the reason one would learn to make apps for Windows Phone 7 instead of Android phones? Is there any reason why Windows Phone 7 is the platform that a developer would choose instead of Android? Thanks heaps"} {"_id": "238689", "title": "Is there a difference in the C++ language between Visual Studio and Code::Blocks?", "text": "I used to program C++ on Code::Blocks, but it's boring now. Visual Studio looks much better. Will it change anything? Or is it the same thing? Same syntax?"} {"_id": "47796", "title": "How important is it that you know the C++ standard?", "text": "I did try searching, but I did not see a similar question (either that or my search terminology was incorrect - if so, feel free to close). I am an avid user of SO, and I notice that there are lots of references to the C++ standard in discussions and answers - and I have to admit, I have never read this particular document, the language makes my eyes hurt... So, the question is, can a C++ developer really code for a living without ever having read this document? Is it really important for us mere mortals who are not in the business of writing compilers?"} {"_id": "47797", "title": "Does Microsoft really offer \"support\"?", "text": "One of the arguments against using Open Source is that there is no \"support\". However, do big vendors (e.g. Microsoft) really offer \"support\" of any kind? I'm sure there is some sort of 4-figure-per-hour \"paid support\" option out there, but is that really an \"option\" for any problem short of one that is going to bankrupt your business? To put it more concretely... I buy a Microsoft product... it has a bug... now what?
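One common hand-rolled answer to the relevance question above is an additive score with a logarithmic term, so repeated hits of one word give diminishing returns while each distinct matched term still helps. A sketch under those assumptions (the weights are invented):

    import math

    def score(doc_tokens, all_terms, any_terms):
        counts = {}
        for tok in doc_tokens:
            counts[tok] = counts.get(tok, 0) + 1
        s = 0.0
        for term in all_terms + any_terms:
            c = counts.get(term, 0)
            if c:
                # 1 + log(c): the fifth hit of a word adds far less than the first.
                s += 1.0 + math.log(c)
        # Reward covering more distinct 'any' terms than strictly required.
        s += 0.5 * sum(1 for t in any_terms if t in counts)
        return s

This is essentially a crude term-frequency weighting; Solr/Lucene formalize the same idea (adding IDF and length normalization), which is why the temporary solution migrates naturally.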
And how is that better than what I get from Open Source?"} {"_id": "230054", "title": "Pythonic version of Java interfaces", "text": "I fully acknowledge that Python and Java are different programming languages and should be used differently. That said, \"Program to an interface, not to an implementation\" is good language-agnostic programming advice. Say I have some DAO interface in Java: public interface DataDao { Object load(); void update(); void delete(); } Programming to that interface allows me to persist data in files while I'm prototyping, swap that out for a database as I get further along, etc. rather painlessly as long as I honor the contract of the `DataDao`. What's the Pythonic approach/version (if any) to programming to a contract to keep your classes orthogonal, modular, and enable frictionless implementation changes?"} {"_id": "14525", "title": "My coworker created a 96-column SQL table", "text": "Here we are in 2010, software engineers with 4 or 5 years of experience, still designing tables with 96 fracking columns. I told him it's gonna be a nightmare. I showed him that we have to use ordinals to interface MySQL with C#. I explained that tables with more columns than rows are a huge smell. Still, I get the \"It's going to be simpler this way\". What should I do? EDIT * This table contains data from sensors. We have sensor 1 with Dynamic_D1X Dynamic_D1Y [...] Dynamic_D6X Dynamic_D6Y [...] EDIT2 * Well, I finally left that job. It is a sign when the other programmer goes dark for months at a time; it is another sign when management does not realise this is a problem"} {"_id": "14524", "title": "How can a freelancer learn industry standards?", "text": "Being a freelancer, I don't have access to corporate training programs where employees learn best practices. Most of the time I am advised to look into the code available on the Internet. Ideal places would be CodePlex and SourceForge. But this has proved to be of very little help. I want my existing code to be analyzed and better solutions suggested to improve the quality of the code. How do I learn to write code that matches standards?"} {"_id": "230059", "title": "Apple eating problem", "text": "Player A and Player B play a game. In the middle of the table there is a pot full of N apples of different weights. Player A starts first, choosing an apple and starting to eat it. Losing no time, player B does the same. When a player finishes an apple, he repeats the same procedure without losing time. In case both players finish their apples at the same time, player A still has the advantage of choosing first. Note that both players eat at the same speed. Which apple should player A choose first to ensure that, with the right tactics, he'll eat as many grams of apples as possible if player B plays optimally? * * * I thought that choosing the smallest or the biggest apple should do the job, but there are specific cases where this doesn't work. This is a C++ contest problem, so there should be a nice solution to it. I think that brute force might provide a solution, but it would take too much time, because the number of apples is up to 10000.
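For the Pythonic-interfaces question above, the two idiomatic answers are plain duck typing and, when the contract should be explicit and enforced, an abstract base class. A sketch of the latter, mirroring the Java `DataDao`:

    from abc import ABC, abstractmethod

    class DataDao(ABC):
        @abstractmethod
        def load(self): ...

        @abstractmethod
        def update(self): ...

        @abstractmethod
        def delete(self): ...

    class FileDao(DataDao):
        # Satisfies the whole contract, so it can be instantiated.
        def load(self):
            return 'data from a file'

        def update(self):
            pass

        def delete(self):
            pass

    dao = FileDao()   # fine
    # DataDao() would raise TypeError: abstract methods are unimplemented.

Swapping `FileDao` for a database-backed implementation then changes nothing at the call sites, which is the frictionless substitution the question asks about.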
I would rather like some hints on how to approach this question and how to find the optimal tactic, or some intuition, rather than code."} {"_id": "219478", "title": "What could be the Model Layer when consuming Web-Services and no Database in Django?", "text": "I'm using Django as an application framework and it only needs to consume web services (no need to have traditional Django Models and the related ORM). In this case, since Django is a variant on the MVC architecture (MVT, to be precise), should I wrap the web service calls into a 'model' that my App uses or should I just call those web services directly from the Django View and remove the model layer?"} {"_id": "68932", "title": "Sell a web app with limit per client?", "text": "This is not a \"classic\" programming question, but it's related to our job. I develop and sell data-driven web apps for various customers, some small (10 employees or less), some bigger (1000-2000 employees). My questions are: 1. Can I sell my app at different prices, based on the customer's \"dimension\"? That is common in the software market, isn't it? 2. How can I \"control\", and maybe \"limit\", my ASP.NET web app when a customer exceeds their license limit? Are there some ready-made server-side components to do that? Thanks EDIT: Last question: how do I _justify_ the higher price for a big company against the small price for a smaller company?"} {"_id": "76526", "title": "Pragma Mark in Cocoa", "text": "I'm a newb to Cocoa programming, and looking around in various open source code online I see things like: #pragma mark global variable declaration and I was wondering what the meaning of \"pragma mark\" is: is it any different from a // or /* (comment) block? Like, is it used throughout the program, and is there a commonly used methodology to it, or is it more or less a way for programmers to organize and collect their code for themselves (and others if the case may be) to use later? I found this article: http://inchoo.net/mobile-development/iphone-development/what-is-a-pragma-mark/ and it is pretty helpful, but I just needed to clarify again if it's more or less used by the coder or used by Xcode? Thanks!"} {"_id": "38140", "title": "Can a freelancer use agile development?", "text": "I want to improve the way that I develop software. I want to develop faster and write great code! Today I use the waterfall method as a freelancer, writing web stuff (sites, systems, etc). Is there a way to use agile development (XP, SCRUM, etc) working in this way? I know nothing about agile development; where should I start? Thank you very much."} {"_id": "72862", "title": "GPL copyright notice", "text": "I don't have a copyright license from my local government. How can I use the GPL? As far as I know, one step to apply the GPL is to add the copyright notice: Copyright XXXX shabab haider... What shall I write in this section? Am I eligible to write such a thing? Do I have to redirect the copyright to the Free Software Foundation or something similar?"} {"_id": "158781", "title": "Cpu-heavy web server", "text": "When reading about web servers, frameworks, etc., most of the time I notice that the goal is to have a technology that has the following features: * Able to handle as many connections as possible. * Fit an I/O model (connections to DBs and other web services). Those features fit the actual web model, but I am interested in knowing which technologies will fit a CPU-heavy use case. For example, Node.js is a technology that really shines when you have to write an application that uses a lot of I/O.
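For the Django question above, a common answer is yes: wrap the calls in a plain gateway class that plays the model role, so views stay thin even though nothing subclasses django.db.models.Model. A hedged sketch with an invented endpoint and invented field names:

    import requests

    class Article:
        def __init__(self, title, body, **extra):
            self.title = title
            self.body = body

    class ArticleGateway:
        # The 'model layer': views call this, never requests directly.
        def __init__(self, base_url):
            self.base_url = base_url

        def get(self, article_id):
            resp = requests.get('%s/articles/%s' % (self.base_url, article_id))
            resp.raise_for_status()
            return Article(**resp.json())

    # In a view: article = ArticleGateway('https://api.example.com').get(42)

This keeps the swap-the-backend property of a model layer: the view never learns whether the data came from HTTP, a cache, or a stub in tests.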
On the other hand, due to Node.js's evented nature, it is not suitable for use in CPU-heavy use cases (video encoding, machine learning, graphics). I have also taken a look at Haskell web frameworks like Snap and Warp, and in the benchmarks they really are fast. Are Haskell web frameworks suitable for CPU-heavy problems? Which other languages/technologies are candidates?"} {"_id": "38145", "title": "What are the advantages of storing xml in a relational database?", "text": "I was poking around the AdventureWorks database today and I noticed that a number of tables (`HumanResources.JobCandidate` and `Sales.Individual` for example) have a column which is storing xml data. What I would like to know is, what is the advantage of storing basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query off of this information? Or is the assumption that the data won't need to be queried and just needs to be stored?"} {"_id": "207721", "title": "Why are no gender recognition studies performed yet to recognize gender without seeing (based on behavior)?", "text": "Each website has visitors, and each visitor might be a potential customer (lead), thus it's really good to do whatever a marketer can do to attract that lead and turn it into a real customer. However, some marketing strategies work on the gender of the lead. In other words, men would be attracted in one way, and women in another way. I see that there are some studies on **gender recognition** based on some physiological and facial patterns like: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6249810 http://stackoverflow.com/questions/5268753/face-gender-detection-library http://www.advancedsourcecode.com/gagender.asp However, in online marketing, which is a hot topic in the realm of the Internet, you can't see the face of the visitor of your site. But still, based on some preferences and attributes, there might be a chance that you can guess the gender of the visitor, which might increase the profit overall by absorbing more customers through good strategies. For example, women usually like the color pink and men usually like online games. But I wonder why no study has been done on this so far. In a world where Google and Facebook and other internet giants try to understand their users more and more, it seems a little odd that websites which have many visitors (like the news websites CNN, BBC, MSN, Yahoo, etc.) have done nothing, or very little, to handle this concept. Imagine how much work can be done once you understand whether the visitor is male or female. So, are there any technical limitations to working on **gender recognition** just the way we've worked on **handwriting recognition** and **speech recognition** and **face recognition**?"} {"_id": "136942", "title": "Why doesn't Python need a compiler?", "text": "Just wondering (now that I've started with C++, which needs a compiler) why Python doesn't need a compiler? I just enter the code, save it as an executable, and run it. In C++ I have to make builds and all of that other fun stuff."} {"_id": "207726", "title": "Do I need IDs in my database if the records could be identified by the date?", "text": "I am writing my first application for Android and will use the SQLite database, so I will be trying to limit the size as much as possible, but I think the question applies in general to database design. I am planning to store records that will have text and the date of creation. The app is a stand-alone app, i.e.
it will not link to the internet and only one user will be updating it, so there is no chance that there will be more than one entry with a given date. Does my table still need an ID column? If so, what are the advantages of using the ID as a record identifier as opposed to the Date?"} {"_id": "201760", "title": "Duck typing, data validation and assertive programming in Python", "text": "About duck typing: > Duck typing is aided by habitually not testing for the type of arguments in > method and function bodies, relying on documentation, clear code and testing > to ensure correct use. About argument validation (EAFP: Easier to ask for forgiveness than permission). An adapted example from here: > ...it is considered more pythonic to do : def my_method(self, member): try: value = self.a_dict[member] except TypeError: # do something else > This means that anyone else using your code doesn't have to use a real > dictionary or subclass - they can use any object that implements the mapping > interface. > > Unfortunately in practise it's not that simple. What if member in the above > example might be an integer ? Integers are immutable - so it's perfectly > reasonable to use them as dictionary keys. However they are also used to > index sequence type objects. If member happens to be an integer then example > two could let through lists and strings as well as dictionaries. About assertive programming: > Assertions are a systematic way to check that the internal state of a > program is as the programmer expected, with the goal of catching bugs. In > particular, they're good for catching false assumptions that were made while > writing the code, or abuse of an interface by another programmer. In > addition, they can act as in-line documentation to some extent, by making > the programmer's assumptions obvious. (\"Explicit is better than implicit.\") The mentioned concepts are sometimes in conflict, so I rely on the following factors when choosing whether to do no data validation at all, do strong validation, or use asserts: 1. Strong validation. By strong validation I mean raising a custom Exception (`ApiError` for example). If my function/method is part of a public API, it's better to validate the argument to show a good error message about an unexpected type. By checking the type I do not mean using only `isinstance`, but also checking whether the object passed supports the needed interface (duck typing). While I document the API and specify the expected type, and the user might want to use my function in an unexpected way, I feel safer when I check the assumptions. I usually use `isinstance`, and if later I want to support other types or ducks, I change the validation logic. 2. Assertive programming. If my code is new, I use asserts a lot. What is your advice on this? Do you later remove asserts from the code? 3. If my function/method is not part of an API, but passes some of its arguments through to other code not written, studied or tested by me, I do a lot of asserts according to the called interface. My logic behind this: it's better to fail in my code than somewhere 10 levels deeper in the stack trace with an incomprehensible error which forces me to debug a lot and later add the assert to my code anyway. Comments and advice on when to use or not to use type/value validation and asserts? Sorry if this is not the best formulation of the question. For example, consider the following function, where `Customer` is a SQLAlchemy declarative model: def add_customer(self, customer): \"\"\"Save new customer into the database.
@param customer: Customer instance, whose id is None @return: merged into global session customer \"\"\" # no validation here at all # let's hope the SQLAlchemy session will break if `customer` is not a model instance self.session.add(customer) self.session.commit() return customer So, there are several ways to handle validation: def add_customer(self, customer): # this is an API method, so let's validate the input if not isinstance(customer, Customer): raise ApiError('Invalid type') if customer.id is not None: raise ApiError('id should be None') self.session.add(customer) self.session.commit() return customer or def add_customer(self, customer): # this is an internal method, but I want to be sure # that it's a customer model instance assert isinstance(customer, Customer), 'Achtung!' assert customer.id is None self.session.add(customer) self.session.commit() return customer When and why would you use each of these in the context of duck typing, type checking, and data validation?"} {"_id": "11342", "title": "Is testing the easiest way to contribute to an Open Source Project?", "text": "I want to contribute to an open source project, but I don't know much about unit testing. I want to learn how to test and then practice my skills on an open source project. Will this also be acknowledged as a contribution? I want to first get my name out there and then concentrate on development."} {"_id": "201765", "title": "Reason for return statement in recursive function call", "text": "I just had a doubt in my mind. The following subroutine (to search for an element in a list, for example) has a return statement at the end: list *search_list(list *l, item_type x) { if (l == NULL) return(NULL); if (l->item == x) return(l); else return( search_list(l->next, x) ); } I cannot get the significance of the return statement at the end (i.e. return search_list(l->next, x) ). It would be really helpful if anyone could explain this concept using the stack model."} {"_id": "238537", "title": "Deleting Old Nuget Package Folders after upgrading", "text": "Should you delete the old NuGet package files/folders under the packages directory after you upgrade a package? Maybe I'm just being overly picky, but seeing files/folders for older package versions is bothering me. Is there a good reason NOT to delete the package folders for previous package versions?"} {"_id": "174963", "title": "Inter-process and inter-thread data sharing", "text": "I know that operating systems facilitate inter-process and inter-thread data sharing. I want to know about the mechanisms used to facilitate such sharing. I read about \"pipes\". What are the other ways?"} {"_id": "174964", "title": "Sharing Large Database Backup Among Team", "text": "I work on a team of three to five developers that work on an ASP.net web application remotely. We currently run a full local database from a recent backup on all of our machines during development. The current backup, compressed, is about 18 GB. I'm looking to see if there's an easier way to keep all of our local copies relatively fresh without each of us individually downloading the 18 GB file over HTTP from our web server on a regular basis. I guess FTP is an option, but it won't speed the process up at all.
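Beyond pipes, the inter-process question above is usually answered with: shared memory, message queues, sockets, signals, and memory-mapped files. Two of these in a minimal Python sketch:

    from multiprocessing import Process, Queue, Value

    def worker(q, counter):
        q.put('hello from the child')   # message queue
        with counter.get_lock():
            counter.value += 1          # shared memory

    if __name__ == '__main__':
        q = Queue()
        counter = Value('i', 0)
        p = Process(target=worker, args=(q, counter))
        p.start()
        print(q.get())        # 'hello from the child'
        p.join()
        print(counter.value)  # 1

Threads within one process share the whole address space instead, so they need synchronization primitives (locks, conditions) rather than transport mechanisms.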
I'm familiar with torrents and the thought keeps hitting me that something like that would be effective, but I'm unsure of the security or the process."} {"_id": "240904", "title": "Three variants of circular references between objects: how to choose?", "text": "I'm designing an object dependency graph for my program, and one ambiguity between design variants appears from time to time. Imagine two objects having a reference to each other. Obviously, at least one reference should be assigned after the object's initialization (for example, through a subscription method) and the other one can optionally be a direct dependency. Here's an object diagram ![enter image description here](http://i.stack.imgur.com/7w2Jl.png) I'm not sure I've used correct UML, so here's my description: 1) Both objects have subscription/binding methods. Usage: A a = new A(); B b = new B(); a.Set(b); b.Set(a); 2) `ObjectB` is a component of `ObjectA`: A a = new A(new B()); // somewhere, possibly in a's method : b.Set(a); 3) The third is actually the opposite of the second; no need to explain. In my situation I can use any of these and my program will work, but I want some theoretical reasons. Can they be found?"} {"_id": "26548", "title": "What is the \"default\" software license?", "text": "If I release some code and binaries, but I don't include any license at all with it, what are the legal terms that apply by default (in the US, where I am)? I know that I automatically have copyright without doing anything, but what restrictions are there on it? If I upload my code to github and announce it as a free download / contribute at will, then are people allowed to modify and close source my work? I haven't said that they cannot, as a GPL would, but I don't feel that it would by default be acceptable to steal my work either. So what can and cannot people do with code that is freely available, but has absolutely no licensing terms attached? By the way, I know that it would be a good idea for me to pick a license and apply it to my code soon, but I'm still curious about this."} {"_id": "88840", "title": "Common Javascript mistakes that severely affect performance?", "text": "At a recent UI/UX MeetUp that I attended, I gave some feedback on a website that used Javascript (jQuery) for its interaction and UI - it was fairly simple animations and manipulation, but the performance on a decent computer was horrific. It actually reminded me of a lot of sites/programs that I've seen with the same issue, where certain actions just absolutely destroy performance. It is mostly in (or at least more noticeable in) situations where Javascript is almost serving as a Flash replacement. This is in stark contrast to some of the webapps that I have used that have far more Javascript and functionality but run very smoothly (COGNOS by IBM is one I can think of off the top of my head). I'd love to know some of the common issues that aren't considered when developing JS that will kill the performance of the site."} {"_id": "214630", "title": "Why do we write the action to be performed by a function in jQuery inside the parentheses?", "text": "Generally, whenever we're programming in any programming language, say C, we would pass the parameters a function needs inside the parentheses next to the name of the function.
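For the circular-reference question above, variant 2 can be made safer by letting the owner wire the back-reference inside its own constructor, so no caller can forget the second Set call. A sketch following the question's A/B naming:

    class B:
        def __init__(self):
            self.a = None

        def set_a(self, a):
            self.a = a

    class A:
        # Variant 2: A owns B and immediately wires the back-reference,
        # so the pair is never observable in a half-initialized state.
        def __init__(self, b):
            self.b = b
            b.set_a(self)

    a = A(B())
    assert a.b.a is a

That smaller window of partial initialization is one theoretical reason to prefer variant 2 or 3 over variant 1 when one object clearly owns the other.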
Whereas in jQuery, apart from user-defined functions, we write the action we need the function to perform inside the parentheses, for example, $('div').mouseenter(function(){ /* blah blah blah*/ }); Why?"} {"_id": "229470", "title": "Are there advantages to declaring stack variables constant in C++", "text": "It's not clear to me what benefits there are to declaring your stack variables as constant in C++. I was hoping somebody might explain the benefits and purpose of this technique. For example: void func(const std::string& arg) { if(someCondition) { const std::string foo (\"some string plus \" + arg); std::cout << foo << std::endl; someFunction(foo); // dozens more lines of code... } // bla bla bla... } Thanks!"} {"_id": "127634", "title": "What would be the best way to manage a lot of jQuery code?", "text": "Sorry for the blunt question, I will try to explain further. I am going to have a website which uses jQuery, and I feel that at some point in time, there will be A LOT of jQuery code, meaning A LOT of functions, jQuery selectors, etc. Is there a way to limit the availability of jQuery code to specific pages? Perhaps using objects? I'm looking for the best approach here so I don't just run off into a storm and can't find my way out. Thanks."} {"_id": "127639", "title": "Why do some sorting methods sort by 1, 10, 2, 3...?", "text": "I've noticed that many numerical sorting methods seem to sort by 1, 10, 2, 3... rather than the expected 1, 2, 3, 10... I'm having trouble coming up with a scenario where I would need the first method and, as a user, I get frustrated whenever I see it in practice. Are there legitimate use cases for the first style over the second? If so, what are they? If not, how did the first sort style ever come into being? What are the official names for each sort method?"} {"_id": "103346", "title": "Why does the Git community seem to ignore side-by-side diffs", "text": "I used to use Windows, SVN, Tortoise SVN, and Beyond Compare. It was a great combination for doing code reviews. Now I use OSX and Git. I've managed to kludge together a bash script along with Gitx and DiffMerge to come up with a barely acceptable solution. I've muddled along with this setup, and similar ones, for over a year. I've also tried using the GitHub diff viewer and the Gitx diff viewer, so it's not like I've not given them a chance. There are so many smart people doing great stuff with Git. Why not the side-by-side diff with the option of seeing the entire file? Of people who have used both, I've never heard of anyone that likes the single +/- view better, at least for more than a quick check."} {"_id": "103345", "title": "How to best annotate MVC custom action filters?", "text": "As I get deeper into the area of ASP.NET MVC 3, I feel like I'm finding a lot of very nice technical nuggets as well as new ways of doing things versus classic ASP.NET. One of those \"nuggets\" is custom action filters. You can simply decorate your controller classes or methods with attributes that, by convention, point to a \"custom action filter\" class that can perform some operations at a given point of execution (e.g. On Exception, On Method Execution, etc). They are VERY handy if you have some sort of operation that needs to be performed at a regular point in the execution of your code. The question is this: How does this best fit in with object oriented design?
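The two behaviors in the sorting question above have names: plain lexicographic (string) ordering yields 1, 10, 2, while 'natural sort' yields 1, 2, 10. The first appears whenever numbers are compared as text. A minimal natural-sort key:

    import re

    def natural_key(s):
        # Split into digit and non-digit runs: 'file10' -> ['file', 10, '']
        return [int(part) if part.isdigit() else part
                for part in re.split(r'(\d+)', s)]

    names = ['file10', 'file2', 'file1']
    print(sorted(names))                   # ['file1', 'file10', 'file2']
    print(sorted(names, key=natural_key))  # ['file1', 'file2', 'file10']

Lexicographic order is rarely what a user wants to see; it survives because it is what generic string comparison gives you for free.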
What do you think is the best way of annotating that on a UML diagram, or when you are trying to draw them up on a white board and explain the design? I'm very visual, so I like explaining my designs on the white board, but I'm at a loss as to how to best convey custom action filters on an Object Oriented basis. They are not really an interface that needs to be implemented. They do take advantage of inheritance, but not in the context of where they are executed. They seem to fit best under the realm of \"Aspect oriented\" programming, where you have actions that take place under the covers for different scenarios (e.g. Logging, Exceptions). Any thoughts are appreciated."} {"_id": "214639", "title": "How much freedom should a programmer have in choosing a language and framework?", "text": "I started working at a company that is primarily C# oriented. We have a few people who like Java and JRuby, but a majority of programmers here like C#. I was hired because I have a lot of experience building web applications and because I lean towards newer technologies like JRuby on Rails or nodejs. I have recently started on a project building a web application with a focus on getting a lot of stuff done in a short amount of time. The software lead has dictated that I use mvc4 instead of rails. That might be OK, except I don't know mvc4, I don't know C#, and I am the only one responsible for creating the web application server and front-end UI. Wouldn't it make sense to use a framework that I already know extremely well (Rails) instead of using mvc4? The reasoning behind the decision was that the tech lead doesn't know Jruby/rails and there would be no way to reuse the code. Counter arguments: * He won't be contributing to the code and is, frankly, not needed on this project. So, it doesn't really matter if he knows JRuby/rails or not. * We actually can reuse the code since we have a lot of java apps that JRuby can pull code from and vice-versa. In fact, he has dedicated some resources to convert a Java library to C#, instead of just running the Java library on the JRuby on Rails app. All because he doesn't like Java or JRuby. I have built many web applications, but using something unfamiliar is causing some spin-up and I am unable to build an awesome application in as short a time as I'm used to. This would be fine; learning new technologies is important in this field. The problem is, for this project we need to get a lot done fast. At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way?"} {"_id": "155072", "title": "Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008", "text": "I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, JavaScript, HTML and XML. I have some knowledge of all these in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces, abstract classes and understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed.
I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and javascript operating on the same pages and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low level functionality in order to get a better grasp of what is going on at a high level?"} {"_id": "229479", "title": "How did separation of code and data become a practice?", "text": "**Please read the question carefully: it asks _how_ , not _why_.** I recently came across this answer, which suggests using a database to store immutable data: > It sounds like many of the magic numbers you describe - particularly if they > are part dependent - are really data, not code. [...] It may mean an SQL > type database, or it may simply mean a formatted text file. It would seem to me that if you have data that is part of what your program does, then the thing to do is to put it _in the program_. For example, if your program's function is to count vowels, what's wrong with having `vowels = \"aeiou\"` in it? After all, most languages have _data structures_ designed for precisely this use. Why would you bother to _separate data_ by putting it in a \"formatted text file\", as suggested above? Why not just make that text file formatted in your programming language of choice? Now is it a database? Or is it code? I'm sure some will think this is a dumb question, but I ask it in all seriousness. I feel like \"separate code and data\" is emerging culturally as some sort of self-evident truth, along with other obvious things like \"don't give your variables misleading names\" and \"don't avoid using whitespace just because your language considers it insignificant\". Take for example, this article: The Problem with Separating Data from Puppet Code. _The Problem_? What problem? If Puppet is a language for describing my infrastructure, why can't it also describe that the nameserver is 8.8.8.8? It seems to me that the problem isn't that code and data are mingled,1 but that Puppet lacks sufficiently rich data structures and ways to interface to other things. I find this shift disturbing. Object oriented programming said \"we want arbitrarily rich data structures\", and so endowed data structures with powers of code. You get encapsulation and abstraction as a result. Even SQL databases have stored procedures. When you sequester data into YAML or text files or dumb databases as if you are removing a tumor from the code, you lose all of that. Can anyone explain how this practice of separating data from code came to be, and where it's going? Can anyone cite publications by luminaries, or provide some relevant data that demonstrates \"separate code from data\" as an emerging commandment, and illustrates its origin? 1: if one can even make such distinctions. 
I'm looking at you, Lisp programmers."} {"_id": "245008", "title": "Why did Shannon's outguessing machine beat Hagelbarger's?", "text": "I'm reading \"Rock Breaks Scissors\", which describes two \"outguessing machines\" built at Bell Labs that try to exploit human non-randomness in the game of matching pennies. There was an outguessing machine built by the famous Claude Shannon, and one built by Dave Hagelbarger. The machines worked like this (quoted from the book): > Suppose you win twice in a row with the same choice. What would you pick > next? You could stick with the choice that's been winning - or you could > switch, perhaps on the grounds that three times in a row wouldn't be > \"random.\" > > Each time a particular situation occurred, [Shannon's] machine archived what > the player had decided. Every decision was encoded as a 1 or 0 and slotted > into one of the machine's 16 precious bits of memory. For each of the eight > given situations, Shannon's machine cataloged the last two decisions only. > That filled the machine's 16 bits. > > When the machine needed to predict, it looked at what the player had done > the last two times that the same situation had occurred. Whenever the > player's response had been identical both times, the machine predicted that > the player would do the same thing once again. Otherwise, it guessed > randomly from its [random number generator]. > > The main difference between Shannon's machine and Hagelbarger's was that > Shannon's was simpler. Hagelbarger's machine kept track of a percentage of > outcomes for each of the eight situations. The higher the percentage, the > more likely Hagelbarger's machine was to predict a repeat of the past. This > may sound more reasonable and nuanced than Shannon's all-or-nothing > approach, but in practice, Shannon's device was the better predictor. Shannon's machine beat humans at a higher rate than Hagelbarger's, and also beat Hagelbarger's machine when they played each other. I don't understand why Shannon's simpler approach was significantly better than Hagelbarger's percentage-based approach."} {"_id": "215581", "title": "Is there a pattern to restrict which classes can update another class?", "text": "Say I have a class ImportantInfo with a public writable property Data. Many classes will read this property but only a few will ever set it. Basically, if you want to update Data you should really know what you're doing. Is there a pattern I could use to make this explicit other than by documenting it? For example, some way to enforce that only classes that implement IUpdateImportantData can do it (this is just an example)? I'm not talking about security here, but more of a \"hey, are you sure you want to do that?\" kind of thing. ## UPDATE I've been faced with similar situations before; that's why I tried to keep the question more generic, but it seems that a more concrete example is definitely required. So here it goes: I have a Context class with a property CurrentYear. Many classes use this Context object and, whenever CurrentYear changes, they need to react (reload themselves, for example). Now, there are some classes that can legitimately change CurrentYear. Of course, you don't want just everybody to change CurrentYear as that has an effect in many other classes.
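One Python-flavored sketch for the restricted-update question above is a capability token: the setter demands a key object that only trusted collaborators are handed. As the question says, this is advisory ('are you sure?'), not security; all names here are invented:

    class UpdateToken:
        """Held only by classes entrusted to change CurrentYear."""

    class Context:
        def __init__(self):
            self._current_year = 2014
            self._token = UpdateToken()

        @property
        def current_year(self):
            return self._current_year

        def set_current_year(self, year, token):
            if token is not self._token:
                raise PermissionError('not allowed to update CurrentYear')
            self._current_year = year

        def grant_update_token(self):
            # Called once by the composition root for the few writers.
            return self._token

Whether that ceremony is worth it is exactly the judgment call the question's closing line asks about.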
How would you go about that?"} {"_id": "124979", "title": "What's the better approach when creating a user control: receive a full object or only the needed values?", "text": "I'm working with ASP.NET Web User Controls (WUC) and I came to this question: I'm creating a WUC to show an object's data. This object is already loaded when I call that WUC. I'm wondering which would be better: give the WUC the full object or only the necessary attributes?"} {"_id": "120363", "title": "How do you keep SOA DRY?", "text": "In our organization, we've shifted to a more \"service oriented architecture\". To give an example, let's assume we need to retrieve a \"Quote\" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So, it seems like it would make sense to make a \"Quote Retrieval Service\". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code): Function GetQuote(String ID) Returns Quote So, so far so good. Now, when this service is consumed, to keep things \"de-coupled\", we are creating essentially a duplicate of the Quote object and mapping from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases, these classes will have the exact same properties. So, if the Quote service is consumed by 5 other applications, we would have 6 definitions of what a \"Quote\" is. One for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a \"necessary evil\" of SOA?"} {"_id": "235101", "title": "Why are JOINS deprecated for an in-memory database?", "text": "Correct me if I'm misunderstanding. Refer to the following sentence: > Stack Overflow copied a key part of the Wikipedia database design. This > turned out to be a mistake which will need massive and painful database > refactoring to fix. **The refactorings will be to avoid excessive joins in a > lot of key queries.** This is the key lesson from giant multi-terabyte table > schemas (like Google\u2019s BigTable) which are completely join-free. **This is > significant because Stack Overflow's database is almost completely in RAM > and the joins still exact too high a cost.** 1 Actually, I'm trying to move from an interest in single technologies like ASP.NET MVC to architecture design. Can you better clarify the quoted sentence? [1] http://highscalability.com/stack-overflow-architecture"} {"_id": "235105", "title": "Git work-flow for uncertain premise (pending decision)", "text": "I am not quite sure how to name my problem, but I will start with how I usually use git. I am working as a single developer on one git repository. Usually I make feature branches from master, and merge them back into master when the feature is finished. When, in the meantime, another feature branch has been merged into master, I rebase the other branch on master before merging. That worked very well, until recently. One dependency I have in my project needs an update. But it has not yet been decided how this update will be implemented. However, I can simulate the new behavior quite simply. So I branched from master, and simulated that fix. So when the actual fix is available and merged into master, all changes above can be rebased back on master.
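For the user-control question above (full object vs. only the needed values), the trade-off is coupling against parameter creep; both signatures in a tiny invented sketch:

    class QuoteView:
        # Variant 1: takes the whole object. Simple call sites, but the
        # control now depends on the Quote type and everything it drags in.
        def render_full(self, quote):
            return '%s - %s' % (quote.customer, quote.total)

        # Variant 2: takes only what it displays. More arguments, but the
        # control is reusable and testable without constructing a Quote.
        def render_values(self, customer, total):
            return '%s - %s' % (customer, total)

A reasonable rule of thumb: pass the object when the control conceptually renders that object; pass bare values when it is generic presentation.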
But the decision on that fix is still pending, and my branches are stacking up: * ee8d0ab (HEAD, featureTwo) * e659932 * 27986c0 (featureOne) * f0011e6 * d4187cf * 552e35a * 37d597f (simulatedFix) * e0eb3d0 (origin/master, master) * b06583c * d295b3e I feel that this is not good. It feels like I am building up debt. Imagine how this would look by featureTen, or featureTwenty. What are the possible drawbacks of this method? And what is the best way to handle this situation?"} {"_id": "218986", "title": "What is it called when you define constants that simply refer to a large namespace?", "text": "I am using JRuby. I have many classes implemented in Java, and I want to create objects off of them in my Ruby scripts. Suppose that I have a class `Sprite` in Java. In Ruby, to refer to it, I use the following: Java::ComMyWebsiteMyProjectName::Sprite Because Sprite is defined in the package `com.myWebsite.myProjectName`. Now then, to simplify things, I like to do this: Sprite = Java::ComMyWebsiteMyProjectName::Sprite Which sets the constant `Sprite` to the `Sprite` class defined in Java. Now my Ruby code can refer to it by just writing `Sprite`. I have a file full of declarations of that kind. Something like: Sprite = Java::ComMyWebsiteMyProjectName::Sprite Texture = Java::ComMyWebsiteMyProjectName::Texture Bitmap = Java::ComMyWebsiteMyProjectName::Bitmap Math = Java::ComMyWebsiteMyProjectName::Math Geometry = Java::ComMyWebsiteMyProjectName::Geometry Now, I am not exactly very familiar with programming concepts. I am having problems giving this file a name, because I don't even know if there is a name for this kind of thing. **Does this kind of \"practice\" have an actual name?**"} {"_id": "69715", "title": "How do you incorporate GTD into your daily programming tasks?", "text": "David Allen's \"Getting Things Done\" method seems to be a very useful way of organizing tasks and getting those tasks done. Has anyone here used GTD in their day-to-day programming tasks, and if so, what's the best way to go about it?"} {"_id": "2247", "title": "How can I track programming productivity on a daily basis?", "text": "How can I track that I'm developing software more or less productively than on previous days?"} {"_id": "214188", "title": "Is smarter software necessarily bigger?", "text": "This is kind of a vague question so I apologize in advance. When software is \"smarter\", I tend to interpret that as really just saying that it covers more edge cases. First of all, is this correct? Assuming it is, does this mean that the software is full of if/switch statements to make sure input is handled properly, therefore making it \"smarter\"? This thinking makes me feel like the code would end up bloated and messy. I'm getting to a point where I want to handle these edge cases, but I don't want to bloat my code and overlook some best practice. What am I missing?"} {"_id": "214183", "title": "Is it possible to do decent spam filtering without scanning the contents of emails?", "text": "I don't have much knowledge of the subject, but as far as I know, when servers receive an email they have two sources of data to classify it as SPAM or not: the contents of the message (subject + body) and the metadata that goes with it (from who / to who / server that sent the message, etc). I know you can do basic filtering by blacklisting email lists and other sorts of metadata. But is it possible to do decent spam filtering without checking the contents of a message?
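Metadata-only filtering, as asked just above, is real: DNS blacklists, SPF checks, greylisting, and sender reputation all classify mail without reading the body. A hedged sketch of a naive reputation score (thresholds and weights invented):

    def metadata_spam_score(sender_ip, sender_domain, rcpt_count,
                            ip_blacklist, known_domains):
        score = 0.0
        if sender_ip in ip_blacklist:
            score += 5.0   # listed relay: the strongest single signal
        if sender_domain not in known_domains:
            score += 1.0   # never seen this sending domain before
        if rcpt_count > 50:
            score += 2.0   # mass-addressed message
        return score       # e.g. treat score >= 5.0 as spam

    print(metadata_spam_score('203.0.113.7', 'example.org', 2,
                              {'203.0.113.7'}, {'example.org'}))  # 5.0

In practice metadata alone catches much of the easy spam, which is why content analysis is layered on top rather than used instead.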
Defining 'check the contents' as using any function that relies on data found in the body/subject of the message to classify it as spam or not spam. **Edit**: The reason I ask this is because of the \"scroogled\" campaign where Microsoft accuses Google of scanning the contents of emails for advertising purposes, and I think that they do the same for spam prevention purposes. So I asked them and they said: > We have tools and system set up to filter spam emails to and from > Outlook.com email accounts. However, we are not allowed to disclose how > these spam filters work for security purposes. > > ME: So can you assure me that no content from my personal email is being > analysed and cross referenced with other data in any way? > > ...In response to your question, its a yes, we can assure you that."} {"_id": "154038", "title": "Interview: how to ask development process/culture related questions", "text": "I just watched a presentation about simplicity by Rich Hickey at InfoQ where he goes over the constructs programmers use to produce artifacts and how those constructs make various trade-offs when it comes to achieving simple artifacts. I think that most programmers would agree with a lot of what he says, but at the end of the day I don't know how many development shops are actively practicing development processes and using tools that allow them to make simple artifacts. As an interview candidate I would like to work at a software development shop where producing simple artifacts is a top priority. What are some questions I can ask to figure out if the place that is interviewing me is actually such a place?"} {"_id": "201184", "title": "Best Way to Access Hardware per COM Serial Port over USB Adapter", "text": "We are starting a totally new branch at our firm. I usually develop database interfaces or internal server/client applications/tools for our company, but I have never had to do anything with hardware. **Scenario** My boss wants to sell a device which has a touch-screen to set up its many, many features. However, the configuration of the device can be very time consuming. He wants to give the software away for free to good customers and for a little charge to \"small\" customers. The customer should only be able to use the software if he registers with our system, so that he can't copy the software without us noticing. **Requirements** The device has a built-in USB cable. If you plug it into your computer it opens a new COM serial port. The software has the following requirements: * Configuring different presets for the device * Reading and writing the presets to the device * The software will refuse to start if the user has no registration on our server * The user can store his presets on our server and share them with other customers * One preset can be copied over and over to multiple devices My boss has seen similar systems (he said something about Logitech Harmony remote controls) and it is now up to me to create the architecture. Server Side: The server is not the problem. We have enough different machines to suit any kind of flavour (IIS, LAMPP, Apache/Tomcat etc.). I could take an MSSQL or MySQL database to store the data; that's not the problem. Client Side: This is the tricky part for me. I have never done anything with hardware. My boss suggested I should use Silverlight. I argued that I want to do it with a native application. But what is the best combination?
* Total Client-Side (C++, C# or Java are the languages I could write) * Mixed Structure #1 Silverlight + Native \"Driver\" Application * Mixed Structure #2 Java-Applet + Native \"Driver\" Application * Browser Only - Silverlight or Java-Applet? * Any other suggestion? The device is for a special customer group who get personal training with any purchase, so security risks like unsafe code or dangerous browser plugins are not a negative selling argument. I did some research about COM serial port connections with Silverlight and P/Invoke, but I don't want to start without a discussion about what would be the best way."} {"_id": "201180", "title": "How do I know I'm not violating a software patent?", "text": "I'm currently using an Android-like lock screen (the one with the nine dots that you have to connect to log in) in my application. I'm afraid of getting mail from Google because of some patents that I didn't know about. Is the Android lock screen patented, or can I rebuild it and use it in my application?"} {"_id": "43494", "title": "What is a good stopword in full text indexing?", "text": "When you go to Appendix D in the Oracle Text Reference, they provide lists of stopwords used by Oracle Text when indexing table contents. When I see the English list, nothing puzzles me. But the reason why the French list includes _moyennant_ (French for _in view of which_ ), for example, is unclear. Oracle has probably thought it through more than once before including it. How would you constitute a list of appropriate stopwords if you were to design an indexer?"} {"_id": "240567", "title": "Can a layer consist of multiple projects / dlls?", "text": "I am working on the architecture for a new web application and I am pretty much a complete newbie when it comes to architecture. Working my way through .NET Application Architecture Guide, 2nd Edition and learning a lot. I have decided to go for the \"standard\" 3-layer architecture: Presentation layer, Business layer, Data layer. Will be using ASP.NET MVC and Entity Framework. Currently the business layer contains the logic and business entities (anemic models). I am planning to use the business entities as EF entities. That would, however, cause the data layer to have a dependency on the business layer. This \"problem\" can be solved by moving the business entities into their own project and having the layers that have dependencies on the business entities reference that project. 1. Is that an ok / good way of doing things? 2. Will / can / should the business entities project be considered part of the business layer? 3. Can a layer consist of multiple projects, that is, the layer only acts as a \"logical container\"? In the above-mentioned book they talk about how the business layer can consist of \"Business Workflow\", \"Business Components\", \"Business Entities\", and I assume that each of those is a separate project (maybe even multiple projects)?"} {"_id": "168747", "title": "What are all the components of a \"Facebook App\"?", "text": "I am a developer who has never personally partaken in social media (in any form) for reasons completely outside the scope of this question. I am \"off the grid\" (no Facebook, Twitter, etc accounts). I'm currently building a web app and would like the app to have a presence on Facebook, and possibly even \"port\" my app over as a Facebook app. My _understanding_ of Facebook Apps is that they're just normal web apps that get `